Replies: 4 comments
-
Nice refactor. It makes things more logical and modular. I did notice one regression, though, that probably needs to be fixed: when the historydays argument is passed to pull the historical daily/hourly data, the refactored code only writes the data back to the DB after pulling ALL of the historical data. Depending on the number of datapoints, this can cause a timeout when the write request is submitted to the DB. I had fixed this back in my Dec '24 pull request, but the refactor reintroduced it. Would you prefer that I submit a pull request, or should I simply document the fix here for you to incorporate? The fix involves modifying one line of code and adding two lines to collect.py.
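The chunked-write idea above can be sketched roughly as follows. This is a minimal illustration with hypothetical helper names (`fetch_day`, `write_points`), not the actual collect.py code: instead of accumulating every historical point and submitting one giant write at the end, each day's batch is flushed to the DB as it is fetched, so no single write grows large enough to hit a timeout.

```python
# Hedged sketch of per-day chunked history writes.
# fetch_day and write_points are hypothetical stand-ins for the real
# Emporia-read and InfluxDB-write calls in collect.py.

def pull_history(fetch_day, write_points, history_days):
    """Pull history_days of data, flushing each day's batch immediately.

    fetch_day(day_offset) -> list of datapoints for that day
    write_points(points)  -> writes one batch to the DB
    """
    for day in range(history_days, 0, -1):  # oldest day first
        points = fetch_day(day)
        if points:
            write_points(points)  # flush per day instead of one giant write

# Usage with in-memory fakes:
written = []
pull_history(lambda d: [f"day{d}-p{i}" for i in range(2)],
             written.append,
             3)
```

With this shape, a mid-run failure leaves all previously fetched days already persisted, which is the trade-off debated in the replies below.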
-
Thanks for looking it over and raising this concern. I deliberately removed the chunked history DB writes because of the risk that they can lead to a partial history load. Example: if the first history loop iteration succeeds but a subsequent iteration fails, such as with a network failure, the user will likely re-run the command again with the same arguments. I just pushed a change to help deal with this: the InfluxDB socket will now time out after 60s. On my system, loading 2 years of history takes about 7 minutes to read from Emporia and then about 15 seconds to write to InfluxDB, so the 60s timeout gives plenty of leeway for slower systems. The timeout value is also configurable in the config file if a system exceeds 60s.
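For reference, a socket timeout like the one described is typically set when constructing the client. A hedged sketch, assuming the 1.x `influxdb` Python client (the project's actual client library and config plumbing may differ; with the 2.x `influxdb_client` package the equivalent `timeout` parameter is in milliseconds, not seconds):

```python
# Assumption: 1.x "influxdb" client, whose timeout parameter is passed
# through to the underlying HTTP request in seconds.
from influxdb import InfluxDBClient

client = InfluxDBClient(
    host='localhost',
    port=8086,
    timeout=60,  # socket timeout in seconds; raise via config on slower systems
)
```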
-
With a standardized timestamp (23:59:59), one shouldn't end up with duplicate datapoints in the database even with repeated history runs over the same time period. Influx natively updates the existing datapoint (if necessary), assuming that the timestamp, tag(s), and measurement are all the same. See https://docs.influxdata.com/influxdb/v2/write-data/best-practices/duplicate-points/. I would think that the risk of a failed DB write is much higher with a huge dataset than with smaller chunks. Consider a single Vue device (with 16 channels) pulling 2 years' worth of historical data; the single-write numbers get worse with additional devices (I have 3 Vues and an Outlet): ~900,000 vs ~25,000. Writing smaller sets of datapoints is also more performant for the DB. And in case of a failure, with chunked writes a savvy user could recognize at which point the failure occurred and adjust their historydays argument accordingly for a re-run, versus being forced to re-run the entire timespan with a single write, since nothing was written at all.
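The rough scale of the single-write case can be sanity-checked with some hedged back-of-the-envelope arithmetic (assuming one hourly datapoint per channel; the exact counts depend on what measurements are written per channel):

```python
# Hedged estimate: one hourly point per channel over two years.
channels = 3 * 16 + 1        # 3 Vues with 16 channels each, plus 1 Outlet
hours = 2 * 365 * 24         # two years of hourly history
total_points = channels * hours
print(total_points)          # roughly in line with the "~900,000" figure above
```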
-
Hi there, I'm not sure if this is a bug or if I'm misunderstanding something, but I wanted to report it here. After a fresh installation of version 1.9, I noticed that if I don't use channel mapping, the channel names appear in an unexpected format. However, once I use channel-name mapping to assign specific names to each channel, the chart displays normally. I'll continue to monitor this for a while.
-
This discussion was created from the release 1.9.0.