# v0.11.0 - Getting ready for 1.0

## [0.11.0] - 2025-06-16
This is the last release before 1.0.
Most things are stable now, and the algorithms published in this release have been validated against their old
implementations.
Results of this validation can be found in the docs under the new "Revalidation" section.
Raw results for the revalidation can be found in the companion revalidation repository.
Beyond that, this release contains the last expected set of changes to the main pipelines.
With the 1.0 release, we expect the outputs of the main pipelines to be stable (unless actual bugs are found).
Potential improvements to algorithms will always require manual opt-in.
### Scientific Changes
- The default ICC calculation was changed from ICC(1,1) to ICC(2,1).
  (#176)
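  To illustrate the difference between the two variants, here is a minimal sketch using the `pingouin` package
  (purely illustrative; it is an assumption that this mirrors how mobgap computes the ICC internally):

  ```python
  import pandas as pd
  import pingouin as pg

  # Hypothetical long-format data: each "target" (e.g., a walking bout) is
  # rated by two "raters" (e.g., old vs. new system). Values are made up.
  data = pd.DataFrame({
      "target": [1, 1, 2, 2, 3, 3, 4, 4],
      "rater": ["old", "new"] * 4,
      "score": [1.1, 1.0, 2.3, 2.4, 3.0, 3.2, 4.1, 4.0],
  })

  icc = pg.intraclass_corr(data=data, targets="target", raters="rater", ratings="score")
  # In pingouin's output, "ICC1" corresponds to ICC(1,1) and "ICC2" to
  # ICC(2,1) -- the new default described above.
  print(icc.set_index("Type").loc[["ICC1", "ICC2"], ["ICC", "CI95%"]])
  ```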
- The `GsdIluz` algorithm was reworked. This fixes some discrepancies with the original implementation and should
  improve the results in many cases.
  We now also provide a version of the original peak detection algorithm that was used in the MATLAB implementation.
  This can be used via `GsdIluz(**GsdIluz.PredefinedParameters.original)`.
  (#182, #187)
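  A minimal usage sketch, assuming the usual mobgap detector interface (`detect(data, sampling_rate_hz=...)` with
  results in `gs_list_`); the import path, dummy data, and column names are placeholders:

  ```python
  import numpy as np
  import pandas as pd

  from mobgap.gait_sequences import GsdIluz  # import path is an assumption

  # Dummy 10-s accelerometer recording at 100 Hz (column names assumed).
  data = pd.DataFrame(np.zeros((1000, 3)), columns=["acc_x", "acc_y", "acc_z"])

  # Reworked default parameters:
  gsd = GsdIluz()
  # Original MATLAB-style peak detection instead:
  gsd_original = GsdIluz(**GsdIluz.PredefinedParameters.original)

  detected = gsd_original.detect(data, sampling_rate_hz=100.0)
  print(detected.gs_list_)  # detected gait sequences
  ```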
- The models behind the Left-Right detection algorithm by Ullrich et al. have been retrained and made reproducible.
  This results in some changes in the results (mostly improvements) and some changes in the output, even for the old
  models, as parts of the pre-processing were adapted to better reflect the original implementation.
  (#186)
### Added
- We added the first scripts for the revalidation.
  This includes a script to extract the old results and put them into files the new evaluation pipeline can read.
  Further, we have a first skeleton for what the GSD revalidation could look like.
  (#182)
- All pipelines now pass much more metadata to the algorithms' action methods.
  This should make it easier to implement algorithms that need more context.
  At the moment, this is only used in the `DummyAlgorithm` to load existing results based on the participant and test.
  For this to work across all algorithms, we had to change some logic in the `LrcUllrich` algorithm to selectively
  parse the kwargs that it needs.
  (#182)
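  For example, a sketch of the pattern (the method name, signature, and metadata keys shown here are assumptions;
  check the documentation of the respective action method for the actual keys):

  ```python
  import pandas as pd

  class MyGsdAlgorithm:
      """Hypothetical algorithm consuming the extra metadata passed by the pipelines."""

      def detect(self, data: pd.DataFrame, *, sampling_rate_hz: float, **kwargs):
          # Selectively parse only the kwargs this algorithm needs:
          participant_id = kwargs.get("participant_id")  # key name assumed
          test_name = kwargs.get("test_name")            # key name assumed
          if participant_id is not None and test_name is not None:
              ...  # e.g., load pre-computed results for this participant/test
          self.gs_list_ = pd.DataFrame(columns=["start", "end"])
          return self
  ```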
- When using Custom Aggregations, it is now easier to pass arguments to the underlying aggregation functions.
  This is a breaking change, as we now require you to pass the desired name of the output column as part of the
  `column_name`:
  `CustomOperation(..., func=A.icc, column_name="my_column")` -> `CustomOperation(..., func=A.icc, column_name=("icc", "my_column"))`
  This removes the requirement that `func` needs to be a proper function object with a `__name__` attribute.
  Hence, you can now use lambdas and partials without issue.
  (#176)
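  A minimal sketch of the new convention (the import path is an assumption, and `CustomOperation` may take
  additional arguments not shown in this entry):

  ```python
  from functools import partial

  import numpy as np

  from mobgap.utils.df_operations import CustomOperation  # import path is an assumption

  # Lambdas and partials now work, as `func` no longer needs a `__name__`:
  mean_op = CustomOperation(
      func=lambda vals: np.nanmean(vals),
      column_name=("mean", "my_column"),  # output name is now given explicitly
  )
  median_op = CustomOperation(
      func=partial(np.nanpercentile, q=50),
      column_name=("median", "my_column"),
  )
  ```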
- It is now possible to split custom aggregations into multiple columns.
  For example, `A.icc` returns two values, the ICC and its confidence interval.
  Before, both values were put into the same column.
  Now you can specify two column names in the Custom Aggregation to split the values:
  `CustomOperation(..., func=A.icc, column_name=[("icc", "my_column"), ("icc_ci", "my_ci_column")])`
  (#176)
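  The same mechanism works for any function returning multiple values (a sketch under the same assumptions as
  above):

  ```python
  import numpy as np

  from mobgap.utils.df_operations import CustomOperation  # import path is an assumption

  def mean_with_std(vals):
      # Stand-in for `A.icc`: returns as many values as there are column names.
      return np.nanmean(vals), np.nanstd(vals)

  split_op = CustomOperation(
      func=mean_with_std,
      column_name=[("mean", "my_column"), ("std", "my_column")],
  )
  ```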
- Confidence intervals were added as an aggregation function.
  (#176)
- Helpers for styling and formatting tables. They are primarily used within the revalidation docs,
  but feel free to copy them for your own use.
  (#192, #199)
- Plotting helpers for validation plots. (#204)
- The MS dataset now has a dedicated dataset class that can be used to work with it. The dataset will be published on
  Zenodo in the future. (#186)
### Fixed
- Obscure bug when loading the TVS dataset causing an `IntCastingNaNError` when the `participant_information.xlsx`
  file was opened and resaved using Microsoft Excel. For more information, see (#197).
- When calculating the total wear time during waking hours, waking hours were not considered correctly and were
  affected by the daylight saving state at the time the script was run! This is now fixed by not using actual dates
  for the comparison, but just seconds from midnight (see the sketch after this list).
- `apply_transformation` now returns a normal index instead of a `MultiIndex` when all expected output column names
  are strings. (#199)
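To illustrate the wear-time fix: comparing timestamps against waking-hour boundaries rebuilt as full dates can shift
by an hour around a DST transition, while plain seconds from midnight are date-independent. A minimal sketch of the
idea (not the actual mobgap implementation; the waking-hour window is made up):

```python
import pandas as pd

# Hypothetical waking-hours window: 08:00 to 22:00, as seconds from midnight.
WAKING_START_S = 8 * 3600
WAKING_END_S = 22 * 3600

def is_waking(timestamps: pd.DatetimeIndex) -> pd.Series:
    # Seconds since each timestamp's *own* midnight -- independent of the date
    # and therefore of the daylight saving state at runtime.
    seconds = timestamps.hour * 3600 + timestamps.minute * 60 + timestamps.second
    return pd.Series((seconds >= WAKING_START_S) & (seconds < WAKING_END_S), index=timestamps)

# Hourly samples spanning the 2025 European DST transition (2025-03-30):
idx = pd.date_range("2025-03-29 06:00", "2025-03-30 23:00", freq="h", tz="Europe/Berlin")
print(is_waking(idx).sum(), "of", len(idx), "samples fall into waking hours")
```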
### Removed
- In anticipation of the tpcp >2.0 update, we removed the `score` methods from all pipelines.
  You now need to provide the scorer directly to the MetaPipelines/Optimizers.
  (#182)
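  A minimal sketch of the new pattern, assuming tpcp's current `GridSearch` interface (the argument names and the
  dummy pipeline/dataset below are illustrative, not mobgap code):

  ```python
  import pandas as pd
  from sklearn.model_selection import ParameterGrid
  from tpcp import Dataset, Pipeline
  from tpcp.optimize import GridSearch

  class MyDataset(Dataset):
      # Hypothetical two-participant dataset to make the sketch self-contained.
      def create_index(self) -> pd.DataFrame:
          return pd.DataFrame({"participant": ["a", "b"]})

  class MyPipeline(Pipeline[MyDataset]):
      def __init__(self, threshold: float = 0.5) -> None:
          self.threshold = threshold

      def run(self, datapoint: MyDataset) -> "MyPipeline":
          self.result_ = self.threshold  # stand-in for real processing
          return self

  def my_scorer(pipeline: MyPipeline, datapoint: MyDataset) -> float:
      # Logic that previously lived in a pipeline's `score` method is now
      # passed to the optimizer explicitly.
      return -abs(pipeline.safe_run(datapoint).result_)

  gs = GridSearch(MyPipeline(), ParameterGrid({"threshold": [0.1, 0.2]}), scoring=my_scorer)
  gs.optimize(MyDataset())
  ```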