BP5+groupbased: allow only up to 100 steps #1732
Conversation
OPENPMD_BP5_GROUPENCODING_MAX_STEPS=1000
docs/source/backends/adios2.rst
``OPENPMD_ADIOS2_BP5_NumSubFiles``       ``0``      ADIOS2 BP5 engine: num of subfiles
``OPENPMD_ADIOS2_BP5_NumAgg``            ``0``      ADIOS2 BP5 engine: num of aggregators
``OPENPMD_ADIOS2_BP5_TypeAgg``           *empty*    ADIOS2 BP5 engine: aggregation type (EveryoneWrites, EveryoneWritesSerial, TwoLevelShm)
``OPENPMD_BP5_GROUPENCODING_MAX_STEPS``  ``1000``   ADIOS2 BP5 engine: max number of allowed output steps in group encoding
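As a usage sketch, the documented variable could be raised for a job that legitimately needs more group-encoded steps (the value ``500`` here is purely illustrative):

```shell
# Raise the BP5 group-encoding step cap before launching the simulation.
# The variable name comes from the docs table above; 500 is an example value.
export OPENPMD_BP5_GROUPENCODING_MAX_STEPS=500
echo "$OPENPMD_BP5_GROUPENCODING_MAX_STEPS"
```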
Let's lower the default to 100 or 200, so that metadata-heavy cases are covered more safely by default.
Is it possible to automatically switch to file-based encoding once OPENPMD_BP5_GROUPENCODING_MAX_STEPS is reached? I would find it annoying to have to restart the simulation because of this limit.
If you have simulations that run into this limit, you are probably already creating metadata orders of magnitude larger than in a "good" configuration for BP5. Switching to file encoding during a running simulation might be possible, but it is more effort than it is worth. The proper solution is to use variable encoding for this kind of setup automatically, and I would rather put my time into making that work properly.
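The limit discussed above can be sketched as a small guard that reads the environment variable and falls back to the new default of 100. This is a hypothetical illustration, not the actual openPMD-api implementation; the function names `groupEncodingMaxSteps` and `checkGroupEncodingStepLimit` and the error message are assumptions.

```cpp
// Hypothetical sketch of the step cap discussed in this PR.
// Not the real openPMD-api code; names and message are illustrative.
#include <cstddef>
#include <cstdlib>
#include <stdexcept>
#include <string>

// Read the limit from OPENPMD_BP5_GROUPENCODING_MAX_STEPS,
// defaulting to 100 (the value this PR settles on).
std::size_t groupEncodingMaxSteps()
{
    if (char const *env = std::getenv("OPENPMD_BP5_GROUPENCODING_MAX_STEPS"))
    {
        return std::stoul(env);
    }
    return 100;
}

// Abort group-based output once the cap is exceeded instead of
// silently accumulating unbounded metadata.
void checkGroupEncodingStepLimit(std::size_t stepsWritten)
{
    if (stepsWritten > groupEncodingMaxSteps())
    {
        throw std::runtime_error(
            "[ADIOS2 BP5] Group-based encoding exceeded the configured step "
            "limit; consider file-based or variable-based encoding instead.");
    }
}
```

A hard error at the cap is the simple design choice here; as the discussion notes, automatically falling back to another encoding mid-run would be considerably more involved.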
Force-pushed from 0dea887 to 9b821d4
OK. Thanks
* BP5+groupbased: allow only up to 1000 steps
* Configure this via env variable OPENPMD_BP5_GROUPENCODING_MAX_STEPS=1000
* Add documentation
* Lower limit to 100
* Fix: Late unique_ptr puts without CLOSE_FILE or ADVANCE operations (#1744)
  * Add failing test
  * Add failing test
  * Revert "Add failing test" (reverts commit 5e04ece)
  * Reactivate writing from unique_ptr in finalize()
* BP5+groupbased: allow only up to 100 steps (#1732)
  * BP5+groupbased: allow only up to 1000 steps
  * Configure this via env variable OPENPMD_BP5_GROUPENCODING_MAX_STEPS=1000
  * Add documentation
  * Lower limit to 100
* Add compile-time check for #1720 (#1722)
* WarpX: Repo Moved (#1733): update a link to WarpX.
* Fix zero-sized storeChunk for Span API in Python (#1738)
* Working around an unusual encounter when the joined_dim has actual value "max::size_t - 1" (#1740)
  * working around an unusual encounter when the joined_dim has actual value "max::size_t - 1"
  * [pre-commit.ci] auto fixes from pre-commit.com hooks; for more information, see https://pre-commit.ci
  * Add test for redundant resetDataset()
  * Merge check into above logic
  * Better error messages in verifyDataset
  * Add further safety guards to createDataset and extendDataset tasks
  * Move joinedDim logic into middle-end for extendDataset
  * Update include/openPMD/IO/ADIOS/ADIOS2IOHandler.hpp
  * Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
  * Co-authored-by: Franz Pöschel <[email protected]>
* ADIOS2 bugfix: Always use CurrentStep() in mode::Read (#1749)
  * Always use CurrentStep() in mode::Read
  * Remove manual step counting (m_currentStep only necessary for SetStepSelection, it seems)
  * Clean up logic that is no longer needed
  * Add test
  * Co-authored-by: Axel Huebl <[email protected]>

Co-authored-by: Axel Huebl <[email protected]>
Co-authored-by: Junmin Gu <[email protected]>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Check:
In a test simulation with PIConGPU, this canceled the simulation after step 999; the output was still readable, but had already accumulated ~1 GB of metadata.
TODO: