
Close for Record and RecordComponent #778


Description

@ax3l

In AMR codes such as WarpX, the number of storeChunk calls per MPI rank is a runtime parameter: it equals the number of blocks that rank currently owns. This means we cannot express those calls as collective calls.

Nevertheless, something we can enforce (as an API contract) is that variables are written (and read) in order in parallel I/O. That way, we can create "collective" regions of I/O within time steps (Iteration::close()) and even within record and record-component transactions.

Concretely, as with Iteration::close() in #746, we should add MPI-collective member functions:

  • RecordComponent::close()
  • Record::close()

to signal that no further storeChunk (or loadChunk) calls will be issued on those variables (or blocks of variables). As always (e.g., with particles), it is okay for some ranks to contribute zero blocks (no storeChunk()) to a variable; we can contractually require that ::close() still be called in that case, since it is a collective call.
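To make the contract concrete, here is a minimal sketch of what a WarpX-like write path could look like once these methods exist. The two close() calls on the record component and on the record are the proposal of this issue and do not exist in the API yet; numBlocksOnThisRank(), loadBlock(), blockOffset(), blockExtent(), and the file/mesh names are hypothetical placeholders for application logic.

```cpp
#include <openPMD/openPMD.hpp>
#include <mpi.h>

#include <vector>

// Hypothetical AMR helpers -- stand-ins for application logic:
int numBlocksOnThisRank();            // runtime-dependent, may be zero
std::vector<double> loadBlock(int b); // payload of local block b
openPMD::Offset blockOffset(int b);   // position of block b in the global mesh
openPMD::Extent blockExtent(int b);   // size of block b

int main(int argc, char *argv[])
{
    MPI_Init(&argc, &argv);

    openPMD::Series series(
        "data_%T.bp", openPMD::Access::CREATE, MPI_COMM_WORLD);
    openPMD::Iteration it = series.iterations[100];

    openPMD::Mesh E = it.meshes["E"];
    openPMD::MeshRecordComponent Ex = E["x"];
    Ex.resetDataset(openPMD::Dataset(openPMD::Datatype::DOUBLE, {512, 512}));

    // Independent (non-collective) writes: each rank contributes a
    // runtime-dependent number of blocks, possibly none at all.
    // Keep the block buffers alive until the data is flushed.
    int const numBlocks = numBlocksOnThisRank();
    std::vector<std::vector<double>> blocks(numBlocks);
    for (int b = 0; b < numBlocks; ++b)
    {
        blocks[b] = loadBlock(b);
        Ex.storeChunk(blocks[b], blockOffset(b), blockExtent(b));
    }

    // Proposed collective calls (this issue): every rank participates,
    // including ranks that issued zero storeChunk() calls above.
    Ex.close(); // proposed RecordComponent::close()
    E.close();  // proposed Record::close()

    it.close(); // Iteration::close(), see #746 (also collective)

    MPI_Finalize();
    return 0;
}
```

Ranks with numBlocksOnThisRank() == 0 skip the loop entirely but still take part in every close() call; it is exactly this guarantee that lets a backend treat the span between resetDataset() and RecordComponent::close() as a collective transaction.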

Adding such methods will enable backends to optimize streaming and staging, reducing the overhead of many storeChunk calls.

cc @guj @franzpoeschel

Refs.: ADIOS2-WarpX meeting from Sep 16th, 2020.
