
Add support for Tuya WSD024-W-433 #3477

Open
DennisKehrig wants to merge 6 commits into merbanan:master from DennisKehrig:tuya_wsd024w433

Conversation

@DennisKehrig
Contributor

No description provided.

@DennisKehrig
Contributor Author

Companion PR: merbanan/rtl_433_tests/pull/489

@ProfBoc75
Collaborator

@DennisKehrig

Why not use the bitbuffer_find_repeated_row() function? That way you can check the MIC of only the repeated row, and it's much simpler, no?

@ProfBoc75
Collaborator

A few pieces of advice to simplify your code: lines 83 to 137 can be replaced by these few lines.

bitpos is probably 0 (this is the first bit position), so you can directly replace bitpos with 0.

    int row = bitbuffer_find_repeated_row(bitbuffer, 4, BITS_PER_ROW); // returns the row number if the row is found 4 times in the frame, with BITS_PER_ROW length (i.e. 72)

    if (row < 0) { // repeated rows requirement not satisfied
        return DECODE_ABORT_EARLY;
    }

    if (bitbuffer->bits_per_row[row] > BITS_PER_ROW) { // row too long
        return DECODE_ABORT_LENGTH;
    }

    bitbuffer_invert(bitbuffer);
    bitbuffer_extract_bytes(bitbuffer, row, bitpos, b, BITS_PER_ROW); // you may replace bitpos with 0, to be confirmed/tested

Then proceed with the dewhitening code.

@DennisKehrig
Contributor Author

DennisKehrig commented Feb 19, 2026

A few pieces of advice to simplify your code: lines 83 to 137 can be replaced by these few lines.

bitpos is probably 0 (this is the first bit position), so you can directly replace bitpos with 0.

    int row = bitbuffer_find_repeated_row(bitbuffer, 4, BITS_PER_ROW); // returns the row number if the row is found 4 times in the frame, with BITS_PER_ROW length (i.e. 72)

    if (row < 0) { // repeated rows requirement not satisfied
        return DECODE_ABORT_EARLY;
    }

    if (bitbuffer->bits_per_row[row] > BITS_PER_ROW) { // row too long
        return DECODE_ABORT_LENGTH;
    }

    bitbuffer_invert(bitbuffer);
    bitbuffer_extract_bytes(bitbuffer, row, bitpos, b, BITS_PER_ROW); // you may replace bitpos with 0, to be confirmed/tested

Then proceed with the dewhitening code.

I disagree. This fails to meet the stated goal of reporting every unique valid row with plausible data. Your version rules out such rows if they occur fewer than four times; mine does not. Your version would also ignore valid data from another sensor contained in the same bitbuffer; mine does not. I have some recordings with five valid rows from one sensor and five valid rows from a second sensor, all in the same recording, and many where rows occur fewer than five times; in several cases only one valid row per sensor made it into the recording.

Example scenario with overlapping transmissions from two sensors:

Sensor 1: 1 2 3 4 5
Sensor 2:   1 2 3 4 5

Only the first row from sensor 1 and the last row from sensor 2 make it through clearly; the rest will likely produce many shorter rows.

Less severe scenario:

Sensor 1: 1 2 3 4 5
Sensor 2:       1 2 3 4 5

I'd want to report the data from sensor 1 with count 3 and the data from sensor 2 with count 3, in the same decoder invocation.

If you feel like I can make this more clear in my comments, please point out how.

    #define MAX_CANDIDATES 4

    /**
    # Tuya WSD024-W-433 Temperature & Humidity Sensor.
Collaborator


Needs to be a plain first sentence without an H1 heading.

Contributor Author

DennisKehrig commented Feb 23, 2026


Should I submit the same change for the WallarGe CLTX001 as a separate PR or sneak it into this one? I also want to move the comment block there so it sits directly above the decoder function and doesn't show up as documentation for the first #define statement; I missed that in the last one.

@zuckschwerdt
Collaborator

@ProfBoc75's advice is how we generally handle robustness in decoders. We expect that transmission collisions are rare and not fully recoverable.
If a majority of packets are successfully demodulated, or the MIC is strong, we have an easy win without anything extra to do for partially received transmissions.
We consider a bitbuffer with packets from different transmissions to be extremely rare and not recoverable at all.

It is good that you thoroughly explained the special circumstances (high transmission rate, many senders) in the comments. Generally your code comments are on point and very nice to have!

It sounds like you've done the testing with good results and improved robustness. (Your work on the checksum is especially impressive, AI or not.) We just need to emphasize that the usual decoder should look a lot simpler ;) We need to maintain ~300 after all!

The bitbuffer_find_repeated_row() is really not sophisticated enough. Could your scheme be abstracted into some improved variant of it? It would be great to have this as a tool where needed, or maybe even to benefit all decoders; the ideal case is low complexity in the decoders themselves. E.g. some ranking of rows instead of just the one most common row.

I'm not sure about the count output. We did ponder something like a "reliability" or "quality" output, e.g. in the basic case actual_good_rows / nominal_expected_rows. We should use this opportunity to find a good standard for this kind of reporting, or just apply a best guess to report or drop the decoding -- how frequently do you see counts of 1 to 5, and how do you rank/trust them?

@zuckschwerdt
Collaborator

Do you have search keywords to find these sets? I could not locate any "Tuya WSD024 433", are they mostly the same as the ubiquitous Tuya Zigbee Thermometer Hygrometer sets?

@DennisKehrig
Contributor Author

Do you have search keywords to find these sets? I could not locate any "Tuya WSD024 433", are they mostly the same as the ubiquitous Tuya Zigbee Thermometer Hygrometer sets?

You can find more info in the README.md of the companion PR: merbanan/rtl_433_tests#489
I'm curious whether you feel like most of that info should be in this repo instead, or whether more of it should be duplicated across both. Since I can add images in the rtl_433_tests repo I'm more inclined to put the details there, but many regular users will probably never even see that repo.
"WSD024-W-433" is what it says on a sticker on the box for the extra sensors when you order a kit with more than 2, while "WSD023-WIF-433-W12" is the base station (based on a sticker on its box and an image in the AliExpress description saying "WSD023").
While the base station has a Tuya CBU module and uses the Tuya cloud infrastructure, the sensors themselves might not actually be made by Tuya, based on what a Tuya support person suggested. He provided details about their BT sensors, and when I clarified that I'm asking about these 433 MHz ones, he suggested their module doesn't have 433 MHz support and I should contact "the manufacturer". Hmmm. I suspect the actual manufacturer is that SMATRUL seller on AliExpress, since most of the other products they sell have their branding, but not this one.
When you search for "WSD024-W-433" you should find this: arendst/Tasmota#23291, but that's just someone else making the same assumption as I did.

@DennisKehrig
Contributor Author

DennisKehrig commented Feb 23, 2026

@ProfBoc75's advice is how we generally handle robustness in decoders. We expect that transmission collisions are rare and not fully recoverable. If a majority of packets are successfully demodulated, or the MIC is strong, we have an easy win without anything extra to do for partially received transmissions. We consider a bitbuffer with packets from different transmissions to be extremely rare and not recoverable at all.

It is good that you thoroughly explained the special circumstances (high transmission rate, many senders) in the comments. Generally your code comments are on point and very nice to have!

It sounds like you've done the testing with good results and improved robustness. (Your work on the checksum is especially impressive, AI or not.) We just need to emphasize that the usual decoder should look a lot simpler ;) We need to maintain ~300 after all!

I appreciate that and agree in general. I consider this one a special case (albeit based on limited knowledge of the market) because someone might easily have ten of these sensors, and because they don't transmit regularly unless they see a change in the measurements, so missing a transmission is more consequential than with my WallarGe sensors, which transmit every 31-35 seconds no matter what. And especially if we require a large number of identical rows, chances are you'll ignore valid transmissions from two sensors at the same time. I'd like to empower users to make that choice for themselves, but I recognize that this is a design decision that could be useful more generally and might be odd to have such isolated support for.
Personally, I'd be happy to trust everything that has a valid checksum and occurs at least twice, but I know that means throwing away some likely valid transmissions, too.

During my testing, with the MIC algorithm still being a mystery, I only considered cases where I saw five identical rows; frequently the sensor's LED would blink and nothing showed up, which was frustrating. Now as a user I'm glad I can simply display everything with a valid MIC, and highlight when I saw a row only once.
Of course a regular user might not really care at all, depending on the use case. I want to show something like a heatmap of the temperature in my apartment over time after opening a window, so I'd like the data to be as complete as possible.

The bitbuffer_find_repeated_row() is really not sophisticated enough. Could your scheme be abstracted into some improved variant of that? Would be great to have this as tool where needed or maybe even to benefit all decoders? Ideal case with low complexity in decoders. E.g. some ranking of rows instead of just the one most common row.

I agree, that would be nice. Especially if it enabled flex decoders with a config like "bits=72, repeats>=2, unique" to produce data for all unique 72-bit rows that occur at least twice, not just the first one.

The way I approach it here is already fairly general, turning a list of rows into a list of unique rows and how often they occur. I'm limiting it to four candidates, which works well in this case, where the worst case I've seen is valid data from two sensors plus one row with a bad checksum (thus producing a unique third candidate), but that could be adjustable.

The nice thing about bitbuffer_find_repeated_row is that it doesn't need to store a lot of data and can stop early, while my approach has to consider every single row. In specific cases there could be optimizations, like "if the last row is the same as the first, chances are low there'll be valid data from a different sensor in between those rows", but a generic solution can't make those assumptions.

One way to make it easier to reuse is with a struct holding the row index and the number of times the row occurs; the caller could then allocate an array with however many candidates they expect to need at most and pass a pointer to that array to the function. A second variant of the function could omit the count information and just provide the indexes of unique rows that are repeated enough times (closer to bitbuffer_find_repeated_row). If the number of candidates is limited, we could choose to replace the oldest existing candidate that has only been seen once so far, since duplicates tend to be consecutive, at the risk of missing that the just-replaced candidate occurs a second time later, now looking like a new candidate (I hope that makes sense).

One question to ponder is how often anyone will see transmissions from more than two devices in the same batch of rows. If only two are expected, we could do the regular bitbuffer_find_repeated_row to find candidate A, then move backwards from the last row until we either see A again or find a different row the required number of times.

So in a case like this:

Sensor 1: 1 2 3 4 5
Sensor 2:         1 2 3 4 5

We could find 1+2 from sensor 1, then 5+4 from sensor 2 and stop, skipping all the rows in the middle.
The user couldn't do their own filtering, but that's not a new situation, for example with the Kerui decoder requiring at least 9 identical rows, with no option for the user to change that limit.

I'm not sure about the count output. We did ponder something like a "reliability" or "quality" output, e.g. basic case actual_good_rows / nominal_expected__rows. We should use this opportunity to find a good standard for this case of reporting or just apply a best guess to report or drop the decoding -- how frequent do you see and how do you rank/trust counts of 1 to 5?

Yeah, there are a lot of options for that part. I considered calling it "repeats", but I don't like the ambiguity: technically something that is sent once is not repeated at all and therefore should have "repeats: 0", yet people tend to think of something transmitted five times as "repeats: 5", not "repeats: 4".
Or, similar to battery_ok not being shown when it's 1, it could be something like "confident: 0" or "redundant: 0" that's not present once a minimum redundancy threshold has been crossed.
If it's standardized, rtl_433 could also filter it out itself based on a user-configurable setting instead of requiring external post-processing, i.e. users could choose whether they want low-confidence values reported.

@zuckschwerdt
Collaborator

You can find more info in the README.md of the companion PR

My bad. That's actually the preferred way to present the info. General protocol details here and specific device details with the tests.

There'll be some integration of the different types of info soon.
