Description
I'm in the process of adding support for openPMD output to our block-structured AMR code https://github.com/parthenon-hpc-lab/parthenon
In light of the open PR on the standard for mesh refinement (openPMD/openPMD-standard#252), I'm wondering what the current best practice is (also with regard to achieving good performance at scale).
In our case, each rank owns a variable number of meshblocks (though their size is fixed), with a (potentially variable) number of variables, and at varying refinement levels (depending on the chosen ordering of blocks).
The most straightforward approach would be to create one mesh record per block and variable.
Alternatively, I imagine pooling records by level (so that the coordinate information with respect to `dx` is shared) to increase the size of the output buffer.
Are there other approaches/recommendations?
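To make the pooling-by-level idea concrete, here is a minimal sketch (plain Python, not openPMD-api calls; the `Block` descriptor and all names are illustrative) of how blocks could be grouped into one record per level, with each block assigned a slot (chunk offset) in a per-level dataset:

```python
from collections import defaultdict
from dataclasses import dataclass

# Illustrative block descriptor; in Parthenon this information would
# come from the mesh, the names here are assumptions for the sketch.
@dataclass
class Block:
    gid: int    # global block id
    level: int  # refinement level

def pool_by_level(blocks, block_cells):
    """Group blocks by refinement level and assign each block a slot in a
    per-level dataset of shape (n_blocks_on_level, *block_cells).

    Returns {level: (dataset_shape, [(gid, slot), ...])}, from which each
    rank could derive the chunk offsets for its own blocks."""
    by_level = defaultdict(list)
    for b in sorted(blocks, key=lambda b: (b.level, b.gid)):
        by_level[b.level].append(b.gid)
    layout = {}
    for level, gids in by_level.items():
        shape = (len(gids),) + tuple(block_cells)
        layout[level] = (shape, [(gid, slot) for slot, gid in enumerate(gids)])
    return layout

# Example: four blocks across two levels, 16^3 cells each.
layout = pool_by_level(
    [Block(3, 1), Block(0, 0), Block(2, 1), Block(1, 0)],
    (16, 16, 16),
)
# layout[0] → ((2, 16, 16, 16), [(0, 0), (1, 1)])
```

Since all blocks on one level share the same `dx`, a single coordinate description per level would then suffice for the whole pooled record.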
And what's the impact on performance when we write one chunk per block (which at the API level would effectively be a serial "write", as each block/variable combination is unique)?
Are the actual writes on flush optimized/pooled/...?
For reference, our current HDF5 output wrote the data of all blocks in parallel for each variable, with the corresponding offsets.
The coordinate information was stored separately, so that this large output buffer didn't need to handle different `dx`.
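For context, the offset scheme in our current HDF5 output amounts to an exclusive prefix sum over the per-rank block counts, something like the following sketch (function and parameter names are illustrative):

```python
def rank_offsets(blocks_per_rank, cells_per_block):
    """Exclusive prefix sum: each rank's starting cell offset into one
    flat dataset holding all blocks of a variable back to back.

    blocks_per_rank: number of meshblocks owned by each rank
    cells_per_block: cells per block (fixed block size in our code)
    """
    offsets, total = [], 0
    for n in blocks_per_rank:
        offsets.append(total * cells_per_block)
        total += n
    return offsets

# Example: three ranks owning 2, 3, and 1 blocks of 16^3 = 4096 cells.
print(rank_offsets([2, 3, 1], 4096))  # [0, 8192, 20480]
```

Each rank then writes its blocks contiguously starting at its offset, so the collective write covers the whole dataset without overlap.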
This approach is currently not compatible with the openPMD standard for meshes with varying `dx`, as each record is tightly tied to fixed coordinates, correct?
Thanks,
Philipp
Software Environment:
Have you already installed openPMD-api?
If so, please tell us which version of openPMD-api your question is about:
- version of openPMD-api: [0.15.2]
- installed openPMD-api via: [from source]