Replies: 1 comment
I have made some modifications to the function so that the master is not appended. This has helped a lot in reducing the size, but the result is still bigger than the original (94 MB vs. 73 MB). If a file is re-processed with this method, the size only changes marginally. I have also noticed that if I simply load the original file and then save it with compression=2, I still reduce the file size. All of this makes me think there must be something going on during the iteration + append that ends up generating a different (larger) MDF file.
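For reference, a minimal sketch of the two observations above, assuming asammdf's API (`MDF.masters_db` mapping group index to master channel index, and `save()` accepting a `compression` level); the file paths are placeholders:

```python
from asammdf import MDF

# 1) Rebuild the file without passing the master channel to append();
#    append() creates one master per group on its own.
with MDF("original.mf4") as mdf:
    corrected = MDF()
    for group_index, group in enumerate(mdf.groups):
        master_index = mdf.masters_db.get(group_index, -1)
        signals = [
            mdf.get(group=group_index, index=channel_index)
            for channel_index in range(len(group.channels))
            if channel_index != master_index  # skip the master (time) channel
        ]
        if signals:
            corrected.append(signals, common_timebase=True)
    corrected.save("corrected.mf4", overwrite=True, compression=2)

# 2) Simply re-saving the original with compression=2 already shrinks it,
#    without any iteration or append.
with MDF("original.mf4") as mdf:
    mdf.save("resaved.mf4", overwrite=True, compression=2)
```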
Hi, I am working on a script that makes modifications (corrections) to existing data. At the moment it changes the units and applies a correction factor to the data (multiplying the samples by a constant). This is the function I have come up with so far (it works):
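A minimal sketch of the approach, assuming asammdf's `MDF`/`Signal` API; the channel set, factor, and unit are hypothetical placeholders:

```python
from asammdf import MDF

def correct_file(src_path, dst_path, channels_to_fix, factor, new_unit):
    """Copy src to dst, scaling the listed channels and updating their unit."""
    corrected = MDF()
    with MDF(src_path) as mdf:
        for group_index, group in enumerate(mdf.groups):
            signals = []
            for channel_index in range(len(group.channels)):
                sig = mdf.get(group=group_index, index=channel_index)
                if sig.name in channels_to_fix:
                    sig.samples = sig.samples * factor  # apply the correction
                    sig.unit = new_unit                 # fix the unit string
                signals.append(sig)
            if signals:
                # one timebase shared by the whole group, not one per signal
                corrected.append(signals, common_timebase=True)
    corrected.save(dst_path, overwrite=True, compression=2)
```

Called, for instance, as `correct_file("original.mf4", "corrected.mf4", {"VehicleSpeed"}, 3.6, "km/h")` (all values hypothetical).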
The idea here is to use the same timebase for the whole group, so that it saves space (in v1 I was not doing this and the resulting file was much bigger).
However, I am seeing that the resulting file is still significantly bigger than the original (125 MB vs. 73 MB). Checking it in CANape, I can see that the new file has a "t" signal for each Message that is not present in the original file, which makes me guess that the increased size comes from there. Is there a better way of doing this? Is there a better way of doing the "append"? Many thanks in advance.