# 20. ver. 2.0 milestones
With version 2.0, tuyaDAEMON reaches two important goals: the ability to model Tuya and custom devices from an OO perspective, and the ability to create distributed devices on the tuyaDAEMON network.
The Object-Oriented approach has many advantages, including:
- A robust and accepted general model, a common language.
- Some OO concepts, like inheritance and data hiding, are powerful and very useful for simplifying projects.
- Ability to use UML tools, with the prospect of automating certain tasks.
- Easily obtainable documentation and artifacts, from the earliest stages of the project, which help development.
As an example, we use the class diagram of the 'watering system', a node-red app implemented as a 'derived' tuyaDAEMON device.
Let's start with some terminology, trying to relate the three contexts: OO, tuyaDAEMON, and Tuya-cloud.
- A new second-level device (e.g. 'watering_sys'), or 'derived' device, can use one or more 'base' devices, exclusively ('composition'), partially ('aggregation'), or shared ('use'):
  - All 'smart_breaker' resources are required by the 'irrigation system' device. No interference is allowed.
  - The 'Smart_Switch01' device has the 'countdown' resource, which can still be used, e.g. as 'tuya_bridge', by some tuyaDAEMON instance.
  - The sensor 'temperature' data can be used by many other devices.
  For a device with a 'component' role, this leads to an immediate consequence: the device can be 'hidden' in tuyaDAEMON, i.e. not reachable via SET and GET, and without any log on the DB.
- A device can use the node-red dashboard or other user interfaces: they are UIs to set a 'publicly accessible' (`+`) attribute (dp, property, or field), listed in UML at the top of the class, or to perform actions using a 'public' method (`+`) (SET, GET, push, trigger, command, function), presented at the bottom of the class.
- A device can have some attributes (values, data points) 'accessible' with standard GET() and SET() (`#` dp, `#` property), or 'inaccessible' directly (`-` dp, `-` property) but communicated through a `#` push() method on the initiative of the device (timed, or in case of changes).
So this class diagram contains many pieces of information about the new device:
- The 'inheritance' defines some attribute specifications (i.e. 'circulate' in 'watering_sys' is the same as in 'smart_breaker').
- The 'methods' can also be inherited. In the example, `watering_sys.relay` overrides `smart_breaker.relay`, adding some side effects (a button color change) to the inherited ON/OFF functionality.
Since ver. 2.0, new features have been added to tuyaDAEMON to handle the OO paradigm:
- A new entry point, 'fast_cmds', for fast command processing is defined in the tuyaDAEMON CORE: it works like the standard `std_cmd`, but with some differences:
  - It implements SET, GET, SCHEMA, and MULTIPLE.
  - It doesn't check the `device` and `dp` capabilities defined in `alldevices` (exception: the "WW", "GW", and "SKIP" dp capabilities).
  - The request is processed 'quietly': no info on the debug pad, and no log stored on the DBs.

  pros:
  - faster processing
  - commands are sent directly to any device (even devices with 'capabilities' = 'NONE'), so a base device can be protected from any direct access while still receiving fast_cmds from a derived device.

  cons:
  - the absence of checks limits the use to tested and safe operations: essentially for internal use.
- The 'share' option in the 'global.alldevices' JSON structure: it can be added to any `dp`, allowing tuyaDAEMON to automatically send a new command triggered by the answer. The "share" command uses the "fast_IN" entry point. For the syntax details see the alldevices documentation.
- The new "SKIP" `dp` capability: a ghost DP, whose commands are NOT sent to the device. Any command produces an immediate response, and the DP can have a 'share'. Added to handle extra dps (like "_connected") as standard properties, or to add derived DPs without code, implemented only via 'share' (e.g. `trigger._testPing24H`); see the sketch after this list.
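For illustration, a dp entry carrying the SKIP capability and a 'share' might be declared in `global.alldevices` roughly as below. This is a sketch only: the field names (`name`, `capability`) and the surrounding structure are inferred from the fragments shown on this page, and `watering_sys`/`201` are just the example names used later; check the alldevices documentation for the exact schema.

```json
{
  "name": "_testPing24H",          // user-friendly dp name (field name assumed)
  "capability": "SKIP",            // ghost DP: the command is never sent to the device
  "share": [{
      "action": [{
          "device": "watering_sys",   // target of the automatic command
          "property": "201"           // value omitted => inherited from the input msg
      }]
  }]
}
```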
### use
The watering_sys 'uses' the temperature data pushed by a sensor to update the chart and control the output.
The device pushes a message (same format as the answer to a GET(103) command) when the data changes.
You must update 'alldevices', adding to "sensor", dp.name = "temperature", the following 'share' object, where '201' is the dp reserved for the temperature in the "watering_sys":
"share": [{
"action": [{
"device": "watering_sys",
"property": "201" //SET: value is missed => inherited from input msg.
}] }]
Implementing the "watering_sys" flow, the "pick" function node acts as a demultiplexer and format converter. In the "pick" node, inside a `switch(msg.infodp)`, you must reshape the `msg` as required by the chart UI and send it to the "chart" node:
case "201":
return [null, null, null, {
payload: msg.payload.value,
topic: "Temp." } ]; // out[4] is the output for the chart node.
### inheritance
The smart_breaker.relay (dp 1) is inherited by watering_system (dp 1):
A command SET(1) to "watering_system" is processed by 'do_pre', in the "pick" function node, inside the `switch(msg.infodp)`, to build the message for the "smart_breaker":
case "1":
return [null, { payload: {
device:"smart_breaker",
property:"relay",
value: msg.payload.value}}]; // out[2] is the output for the 'fast IN' node.
This is processed as usual, and the smart_breaker sends the answer. In `alldevices.real.smart_breaker.dps.dp = 1` a 'share' is also defined:
"share": [{ "action":[{
"device":"watering_sys",
"property":"1ans" }]}]
Note that the property is "1ans" and not "1": if we send a new command to "1" we will get an infinite loop.
The shared message is a SET(1ans), processed by 'do_post', also in the "pick" function node, inside the `switch(msg.infodp)`. Here the answer message is also built, but now for dp(1), not for '1ans'!
case "1ans":
// optional, here more side effects code...
return [null, null, { payload: {
"deviceId": "watering_sys_ID",
"data":{
"1" : msg.payload.value
}}}]; // out[3] is the output for the 'logging' node.
This closes the processing of the initial command SET(1) to the "watering_system".
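Putting the fragments together, the body of the "pick" function node might look roughly like the sketch below. It assumes the node has four outputs wired, in order, to: (1) an unused/default output, (2) the 'fast IN' node, (3) the 'logging' node, and (4) the 'chart' node; the exact wiring in your flow may differ.

```javascript
// "pick" function node: demultiplexes incoming dp messages of "watering_sys".
// Assumed output wiring: [default, 'fast IN', 'logging', 'chart'].
switch (msg.infodp) {

    case "201":                       // temperature pushed by the sensor (via 'share')
        return [null, null, null, {
            payload: msg.payload.value,
            topic: "Temp."
        }];

    case "1":                         // do_pre: forward SET(1) to the base device
        return [null, {
            payload: {
                device: "smart_breaker",
                property: "relay",
                value: msg.payload.value
            }
        }];

    case "1ans":                      // do_post: build the answer for dp(1)
        return [null, null, {
            payload: {
                deviceId: "watering_sys_ID",
                data: { "1": msg.payload.value }
            }
        }];

    default:                          // unknown dp: ignore
        return [null, null, null, null];
}
```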
### encapsulation
To hide the internals of the 'derived' device we can limit the visibility of a 'component' device:
- using 'fast IN' and setting the device `capability` to 'NONE': this will not allow standard commands.
- setting device `hide`='K' to filter the 'component' device logs (since 2.2.0).
### remote control
Using the 'smartLife' application, we cannot see a fake device like "watering_sys", but we can see the 'base' devices: "smart_breaker" and "Smart_Switch01". So, to send commands to "watering_sys" from anywhere:
- We can define some TUYATRGXXXX triggers associated with actions on "watering_sys", defined in 'smartLife' as 'Tap to run' automations (they are also required to use voice control). Fast and intuitive. Example: TUYATRG1700, as 'STOP watering'. On 'smartLife':
  `set tuya_bridge:countdown = 1700`
  On tuyaDAEMON:
- In 'smartLife' we can directly access the 'base' devices. As you can see in the Sequence Diagram, a 'smartLife' command to a base device (e.g. the "smart_breaker") is correctly sent, by inheritance, to the "watering_sys" device. This way is not really user-friendly, but a lot of information and control is available directly on the base devices.
The need for a distributed tuyaDAEMON system can arise from very different needs, such as:
- To have separate node-red dashboard UI for different purposes.
- Different required levels of security (UPS, battery, ...) and access (local, public, ...).
- Redundancy for better fault tolerance.
- Limits on devices, drivers, or node-red versus the OS (e.g. the PM_sensor requires a 'serial node' that doesn't work on Android).
- WiFi capacity limits, network segmentation.
- tuyaDAEMON limits, better performance.
- Centralization or specialization of DB... etc.
In any case, the new tuyaDAEMON features in version 2.0 can satisfy these different requirements, enabling exchanges between distributed tuyaDAEMON servers.
- remotemap for the tuyaDAEMON network.
  This global structure (in the CORE.CONFIG node) maps all the instances of tuyaDAEMON, with local and remote URLs. The `itself` key identifies the local tuyaDAEMON instance; the 'NAMEx' keys (any name) identify the instances and are also used on the DB, in the new field `tuyathome.messages.daemon`, to identify the provenance of a record. Only 'itself' must be updated in each instance. Example:
{ "itself":"TEST1",
"local": {
"TEST1" : "http://localhost:1984",
"WIN" : "http://localhost:1985",
"ANDROID" : "http://localhost:1880" },
"remote": {
"TEST1" : "http://192.168.1.3:1984",
"WIN" : "http://192.168.1.3:1985",
"ANDROID" : "http://192.168.1.43:1880" } }
  Since ver. 2.2.0: `global.remotemap.itself` is obsolete; the value is now stored in `global.instance_name`.
- New message format for remote access.
  A new optional field, `"remote": "NAMExx"`, is added to the standard messages to access a device in a remote tuyaDAEMON instance. The 'remote' style command can be used in the standard tuyaDAEMON inputs ("IN command" and "fast IN") and also in "share" definitions. Example:
{ "remote": "ANDROID",
"device": "switch module #1", // name|id
"property": "switch", // name|dp: in 'share' can be missed or start with '@' to be evalued
"value": "OFF" } // any: with 'share' can be missed or start with '@' to be evalued
- `_system._proxy`: the communication process is implemented as a new property of `_system`, named `_proxy`, and it utilizes the REST channel. So the remote message in the example is equivalent to the standard message:
{ "device": "_system",
"property": "_proxy",
"value": {
"remote": "ANDROID",
"device": "switch module #1",
"property": "switch",
"value": "OFF" } }
- The answer, if required, is available locally at `global.tuyastatus._system._proxy`, but it will be overwritten by any new remote message:
  - remote SET: `{"remote": "xxx", "device": "yyy", "property": "zzz", "value": "vvv"}`
    The SET is fired on the remote server, and the answer is: `{"from": "xxx", "device": "yyy", "property": "zzz", "status": "sent"}`.
  - remote GET: `{"remote": "xxx", "device": "yyy", "property": "zzz"}`
    The GET reads the value from the remote `tuyastatus`: `{"from": "xxx", "device": "yyy", "property": "zzz", "value": "www"}`.
  - remote SCHEMA: `{"remote": "xxx", "device": "yyy"}`
    The SCHEMA is read from the remote `tuyastatus`: `{"from": "xxx", "device": "yyy", "schema": {"pp1":"vv1", "pp2":"vv2", ...}}`.
  - remote LIST: `{"remote": "xxx"}`
    Returns the LIST of devices found in the remote `tuyastatus` (not in `alldevices`): `{"from": "xxx", "list": ["dev1", "dev2", ...]}`
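As an illustration, a function node on the local instance could inspect the remotemap and the last proxy answer roughly as sketched below; it assumes these structures are exposed in the node-red global context under the names used on this page (`remotemap`, `instance_name`, `tuyastatus`):

```javascript
// Sketch: read tuyaDAEMON network info and the last _proxy answer from the global context.
// Assumption: 'remotemap', 'instance_name' and 'tuyastatus' are the global-context keys.
const remotemap = global.get("remotemap") || {};
const myName    = global.get("instance_name") || remotemap.itself;   // since 2.2.0: instance_name

// URL of a remote instance, e.g. "ANDROID" (name taken from the remotemap example above)
const androidUrl = (remotemap.remote || {})["ANDROID"];

// Last answer of a remote command, overwritten by any new remote message
const tuyastatus = global.get("tuyastatus") || {};
const lastAnswer = tuyastatus._system ? tuyastatus._system._proxy : undefined;

node.warn({ myName, androidUrl, lastAnswer });   // for debugging only
return msg;
```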
Any node-red instance has one and only one dashboard. One solution is to run more node-red instances to have many separate UIs for different purposes (security, climatization, watering, etc.). Or you may want to test and update a tuyaDAEMON development version while a second, stable instance runs your devices.
Once node-red is installed, this can be done as follows:
- You must use different directories (e.g. "D:/node-red/flow-1984", "D:/node-red/node-1985"..) and different ports (e.g. 1984, 1985..) for each instance.
- To do this easily, I use the following 'start_node1984.bat' (Windows) to start the node-red instance at `d:/nodered/flow-1984`, using port 1984:
REM nodered/flows-1984: tuyaDEAMON project
REM (if required) set DEBUG=*
start /b cmd /c node-red -p 1984 -u d:/nodered/flow-1984
ping -n 6 127.0.0.1 > nul
REM to start the interface:
start chrome http://localhost:1984
REM to start the dashboard:
REM width= (7*60) //Width of content window (7 modules)
REM height= (11*60)+40 //Height of content window (11 modules)
REM all next in a single line!
start "" "C:\Program Files\Google\Chrome\Application\chrome.exe"
--chrome-frame
--user-data-dir=D:\temp
--window-size=420,700
--app=http://localhost:1984/ui/#/0
On first run, node-red will create the required structures in your working directory.
The default flowFile is: `d:/nodered/flow-1984/flows_<hostname>.json`
It is easy to update flows between instances: use the standard flow import-export with the clipboard.
In the development phase, all my tuyaDAEMON flows are identical, so it is easy to update the network.
Instance installation checklist (since 2.2.0):
- In the `Global CORE config` node, set the `instance_name`.
- In the `Global CORE config` node, set `alldevice_file_path` to a local value.
- In the `Global TRIGGER config` node, set only one `is_master` = true.
- In all CORE `DB` nodes, verify the correct associations with the DB configuration nodes.
- In `core_MQTT`, enable/disable the broker, and verify the client associations.
- In 'core.MQTT', update the `client MQTT in: commands` node, setting the topic to: `tuyaDAEMON/<instance_name>/+/command/#`
- Disable some `tuya-device` nodes, to reduce 'device duplication' to only the really useful cases.
- Disable the `mirror` and `fake` device flows not handled by the instance.
In production, fine-tuning will produce more differences.
The same device can be handled by more than one tuyaDAEMON instance (with the limit of the max number of MQTT connections available per device, see 6). This is the case of my 'Zigbee Gateway' device, because I want to handle the sub-devices (virtual devices) in more than one tuyaDAEMON instance.
To discard the data pushed by a device in one instance, you can use `hide` in `global.alldevices`.
To process remote devices from the local instance, the local `global.alldevices` MUST also contain the info about the remote devices, because the response/event is processed locally.
To handle some old and new features of the `tuya-smart-device` node (see ISSUE#54 and ISSUE#57), tuyaDAEMON ver. 2.0 introduces the concept of the pseudo-dp.
Pseudo-dps are like standard DPs, automatically added to devices but not sent to them, and used internally to perform special tasks.
For a pseudoDP, its presence in `global.alldevices` is required only to define a user-friendly name (see the table).
Some are for internal use only, but some can be used in standard tuyaDAEMON commands:
| DP | capability | value | name | description |
|---|---|---|---|---|
| `list` | NONE | NONE | | internal, only in `msg.infodp`, signals a device LIST operation |
| `schema` | NONE | NONE | | internal, only in `msg.infodp`, signals a tuya SCHEMA operation |
| `multiple` | WO | `{ dp:value,... }` | | in cmd and `msg.infodp`, signals a tuya MULTIPLE operation |
| `_connected` | PUSH | `true\|false` | | for all devices, also in `tuyastatus` |
| `_t` | PUSH | timestamp [s] | | for all devices, time of the last device update, only in `tuyastatus` |
| `_refresh` | WO | any | yes | trigger, sends a REFRESH operation, only for some devices |
| `_refreshCycle` | RW | `1..N\|(false\|OFF\|NONE\|NO)` | yes | to auto-repeat the tuya REFRESH operation, only for some devices |
| `_findTimeout` | WO | `1000...` [ms] | yes | the interval between re-connection attempts, for real/custom devices |
| `_standby` | WO | `true\|false` | yes | sets the device in a hibernated, disconnected state, for real/custom devices |
notes:
- Some pseudoDPs can also be implemented on custom 'fake' devices, where possible and useful (e.g. `_refreshCycle` in PM_detector).
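For example, following the same `{device, property}` pattern used by the remote GET above, a standard command can read a pseudoDP such as `_connected` (the device name is just an example from this page):

```json
{ "device": "Smart_Switch01",    // example device
  "property": "_connected" }     // GET: returns true|false from tuyastatus
```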
Some new features are implemented as subflows that the user can apply on a per-device basis, as in the figure:
tuyapi ver. 7.1.0 introduces a new command, `refresh()`. In some new devices (e.g. 'AC power meters'), this command forces a data sample, while a GET returns the last PUSHed value. An application can control the sampling rate of a device by repeating the `refresh()` command.
This is what smartLife does, using a 5 s rate, and only when required (e.g. when the UI is open) so as not to waste resources.
From ver. 2.0, tuyaDAEMON can handle the refresh of such devices by adding the 'REFRESH' subflow. It adds two pseudoDPs to the device: "_refresh" (any) and "_refreshCycle" (interval|END).
- The "_refresh" pseudoDP sends a single REFRESH, that forces an immediate data sampling.
- In the 'cycle' mode, many REFRESHs are sent: the sampling rate for the device is set in the command argument
interval[seconds]
. Two limits exist, max-Loop (hardcoded to 1'000'000) and max-Time, set in the node properties. - To get a neverending cycle mode, the user can use a 'timer' to send a '_refreshCycle' before the max-Time.
- The REFRESH device command can take many forms, and the REFRESH subflow takes this into account: you can try `{operation: REFRESH, dps: 20}` or `{operation: REFRESH, schema: true}` or `{operation: REFRESH, requestedDPS: [1,18,19,20]}`; see ISSUE#407 and ISSUE#469.
note: Usually a device sends only the changed values after a REFRESH. So it is possible to send a REFRESH and not get any value.
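As an illustration, assuming a device with the REFRESH subflow attached (the device name below is a placeholder), the cycle could be started and stopped with two standard commands; 10 is the sampling interval in seconds, and "OFF" is one of the stop values listed in the pseudoDP table:

```json
{ "device": "AC_power_meter", "property": "_refreshCycle", "value": 10 }      // sample every 10 s
{ "device": "AC_power_meter", "property": "_refreshCycle", "value": "OFF" }   // stop the cycle
```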
From node-red-contrib-tuya-smart-device ver. 4.0.x, two new device commands are available, 'findTimeout' and 'standby', to give better control over connected/disconnected devices.
From ver. 2.0 tuyaDAEMON uses these new features:
- `_standby(true|false)` is a new pseudoDP valid for all real devices (and for the custom devices that implement it), to enter/exit the 'standby' status on request.
- `_findTimeout(time)` is a new pseudoDP for all real devices, for dynamic user control of the interval between connection retries. It is better to have a fast (2-10 s) timeout when the device is working (to reconnect it quickly in case of any problem) and a very slow (60-600 s) timeout when the device is OFF (so tuyaDAEMON keeps polling it, and auto-reconnects when the device is turned on).
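For example (the device name is a placeholder), the two pseudoDPs can be driven with standard commands; note that `_findTimeout` takes milliseconds, as in the table above, and is normally left to the 'dynamic retry' subflow described below:

```json
{ "device": "Smart_Switch01", "property": "_standby",     "value": true }    // hibernate the device node
{ "device": "Smart_Switch01", "property": "_findTimeout", "value": 60000 }   // retry the connection every 60 s
```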
A new subflow, 'dynamic retry', can automatically handle devices that are often OFF (e.g. power plugs, bulbs, etc.), using `_findTimeout` without user intervention. This subflow has two parameters:
- MinTimeout: the timeout of fast attempts, in sec (default 1).
- MaxTimeout: the timeout of slow attempts, in sec (default 60).
With these default settings, the minimum retry connection interval is approximately 10 seconds (affected by many factors). If the device is disconnected, the interval increases to approximately 70 seconds within 9-10 minutes. The increments are randomized, to spread the connection operations better.
The user never needs to set the `_findTimeout` pseudoDP by hand; instead, for each device, they must choose between 'static retry' and the 'dynamic retry' subflow, i.e. a fixed interval defined in the 'tuya-smart-device' node, or a variable delay.
Tuya devices normally use atomic data points (dPs), with values of simple types: boolean, number, or string (e.g. '7', 'OFF').
There may be devices that push too much data, causing problems for both tuyaDAEMON and the database.
In these cases the 'RT/AVG filter' can be used: it can be transparent, or it can accumulate the data arriving from some dPs and output them only on request, as average, maximum, or minimum (example: BreakerDIN dP 6, PUSHed every 1 s (for the UI) and averaged every 10 s).
Some dPs provide strings that actually represent encoded data structures (usually base64). The `encode()` and `decode()` functions are in the CORE `*ENCODE/DECODE user library` node, and in `global.alldevices` a decoding function can be associated with a dP.
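As an illustration only, a `decode()` helper for such a packed base64 dP might look like the sketch below. The byte layout used here (2 bytes voltage in 0.1 V, 3 bytes current in mA, 3 bytes power in W) is an assumption modelled on common Tuya energy-meter encodings, not taken from this project: check the `*ENCODE/DECODE user library` node and your device schema for the real layout and names.

```javascript
// Sketch of a decode() helper for a base64-packed dP (assumed byte layout, see above).
function decodePhaseData(b64) {
    const buf = Buffer.from(b64, "base64");
    if (buf.length < 8) return null;          // unexpected payload length
    return {
        V: buf.readUInt16BE(0) / 10,          // volts   (bytes 0-1, 0.1 V units, assumed)
        A: buf.readUIntBE(2, 3) / 1000,       // amperes (bytes 2-4, mA, assumed)
        W: buf.readUIntBE(5, 3)               // watts   (bytes 5-7, assumed)
    };
}

// usage (e.g. inside a function node): const fields = decodePhaseData(msg.payload.value);
```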
But the use of structures is undesirable in some cases:
- It is preferable to have atomic values in the DB for subsequent processing.
- Maybe you want to use the 'RT/AVG filter' node, which only processes atomic values.
In these cases, the 'explode' subflow can be used to process a dP output.
This subflow is a filter that decodes data structures of any depth and pushes the atomic data as new dPs.
note: you must add the new dPs to `global.alldevices` (see BreakerDIN: dP 6 is a structure, which is exploded into the new atomic dPs `_6.V`, `_6.A`, `_6.W`, `_6.Leack`).
The fake device '_system' has been updated to be more performant in a multi-server environment, and to work with a super-system general app. Since ver. 2.0 '_system' is implemented in a dedicated flow. See '_system' intro. and, for details, the standard device documentation.
The _system ver. 2.0 introduces:
- timers for tasks: now it is possible to schedule any tuyaDAEMON command (local or remote) using the `_timerON` property. Moreover, because a 'share' structure is used, any timer can handle many tasks, and any task can be conditioned and parameterized.
- new properties: version 2 adds many new properties to `_system`, so a rich set of utilities is now available: see the details.
Since ver. 2.2.0 also CORE and core-TRIGGER are implemented as 'fake' devices, to get better module independence and test facilities.
Since ver. 2.2.0, tuyaDAEMON's main data structure, `alldevices`, and all shared libraries are implemented as global singleton objects, accessible by any flow and node (see the Implementation notes).
The tuyaDAEMON STARTUP is a three-step process, designed to guarantee:
- Execution of all `On-start` functions and all `config` nodes, in any flow.
- CORE builds all global singleton objects/libraries.
- Then all `xxxx flow startup` nodes can do their own initialization, also using the global singletons.
- Easy user access for updates to: the `global xxx config` nodes and the `CORE.ENCODE/DECODE user library` node.
More in detail:

On-start:
- All `Global MODULE config` nodes update global/flow values.
- The `On-Start function` in any flow/node (usually in a `module_x flow startup` node) can do init tasks and set global|flow|context values, but CANNOT use the global singletons.

node-red guarantees that all 'On-start' code is executed before the flows start, but nothing is defined about the order of execution across different flows.
CORE setup (first):
- builds libraries and singleton objects in `context.global`
- runs in sequence the `core flow setup` function node (which can use the global singletons)
other MODULES setup (delayed):
- note: runs after `CORE setup`
- runs a `module_x flow setup` function node from each module (which can use the global singletons)
- an ERROR msg is shown if `CORE setup` has not finished (useful to fine-tune the 'startup delay')
- note: the execution order is undefined
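A minimal sketch of a `module_x flow setup` function node, assuming the CORE exposes `alldevices` in the node-red global context under the key "alldevices" (the key name and the error text are assumptions):

```javascript
// module_x flow setup (runs after 'CORE setup', delayed): may use the global singletons.
const alldevices = global.get("alldevices");

if (!alldevices) {
    // CORE setup has not finished yet: signal it, so the user can fine-tune the startup delay.
    node.error("module_x setup: CORE setup not finished, increase the startup delay");
    return null;
}

// ...module-specific initialization using the singletons goes here...
node.status({ fill: "green", shape: "dot", text: "module_x ready" });
return msg;
```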