Description
In AMR codes such as WarpX, we face the challenge that the number of `storeChunk` calls per MPI rank depends on the number of blocks on that rank, which is only known at runtime. This means that we cannot express those calls as collective calls.
Nevertheless, something that we can enforce (as an API contract) is that variables are written (and read) in order in parallel I/O. That way, we can create "collective" regions of I/O within time steps (`Iteration::close()`) and even within record and record-component transactions.
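For reference, a minimal sketch of the current write pattern under this in-order contract; the `Block` struct and the per-rank block list `myBlocks` are made up for illustration, and only the openPMD-api calls themselves follow existing usage:

```cpp
#include <openPMD/openPMD.hpp>

#include <cstdint>
#include <memory>
#include <vector>

using namespace openPMD;

// Hypothetical per-rank block description; the Series is assumed to have
// been opened with the MPI communicator (parallel I/O).
struct Block
{
    std::shared_ptr<double> data; // contiguous block data
    Offset offset;                // position in the global dataset
    Extent extent;                // size of this block
};

void writeStep(
    Series &series,
    std::uint64_t step,
    Extent const &globalExtent,
    std::vector<Block> const &myBlocks)
{
    auto E_x = series.iterations[step].meshes["E"]["x"];
    E_x.resetDataset(Dataset(Datatype::DOUBLE, globalExtent));

    // Independent calls: their number differs from rank to rank,
    // so they cannot be expressed as collectives.
    for (auto const &block : myBlocks)
        E_x.storeChunk(block.data, block.offset, block.extent);

    // Collective: Iteration::close() (see #746) ends the I/O step and
    // gives the backend one well-defined synchronization point.
    series.iterations[step].close();
}
```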
Concretely, analogous to `Iteration::close()` in #746, we should add MPI-collective member functions:

- `RecordComponent::close()`
- `Record::close()`

to signal that no further `storeChunk` (or `loadChunk`) calls will be issued for those variables (or blocks of variables). As always (e.g. with particles), it is okay for some ranks to contribute zero blocks (no `storeChunk()`) to a variable, and we can contractually state that `::close()` still needs to be called in such a case (collective call).
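A usage sketch of the proposed contract, building on the sketch above; `RecordComponent::close()` and `Record::close()` are the hypothetical methods proposed in this issue and do not exist yet, and the surrounding block layout remains made up:

```cpp
void writeStepWithClose(
    Series &series,
    std::uint64_t step,
    Extent const &globalExtent,
    std::vector<Block> const &myBlocks)
{
    auto E = series.iterations[step].meshes["E"];
    auto E_x = E["x"];
    E_x.resetDataset(Dataset(Datatype::DOUBLE, globalExtent));

    // Independent: possibly zero storeChunk calls on some ranks.
    for (auto const &block : myBlocks)
        E_x.storeChunk(block.data, block.offset, block.extent);

    // Collective (proposed): every rank calls close(), even ranks that
    // contributed no blocks. No further storeChunk/loadChunk calls on
    // E_x may follow.
    E_x.close();

    // Collective (proposed): once all components of the record are done.
    E.close();

    // Collective: end of the I/O step.
    series.iterations[step].close();
}
```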
Adding such methods will enable backends to perform optimizations in streaming and staging that reduce the overhead of many `storeChunk` calls.
Refs.: ADIOS2-WarpX meeting from Sep 16th, 2020.