## Note on parallel I/O data consistency

PnetCDF follows the same parallel I/O data consistency semantics as the MPI-IO
standard, quoted below.

```
Consistency semantics define the outcome of multiple accesses to a single file.
All file accesses in MPI are relative to a specific file handle created from a
collective open. MPI provides three levels of consistency:
 * sequential consistency among all accesses using a single file handle,
 * sequential consistency among all accesses using file handles created from a
   single collective open with atomic mode enabled, and
 * user-imposed consistency among accesses other than the above.
Sequential consistency means the behavior of a set of operations will be as if
the operations were performed in some serial order consistent with program
order; each access appears atomic, although the exact ordering of accesses is
unspecified. User-imposed consistency may be obtained using program order and
calls to MPI_FILE_SYNC.
```

Users are referred to the MPI standard, Chapter 14.6 "Consistency and
Semantics", for more information:
http://www.mpi-forum.org/docs/mpi-2.2/mpi22-report/node296.htm#Node296

Readers are also referred to the following paper:
Rajeev Thakur, William Gropp, and Ewing Lusk, "On Implementing MPI-IO Portably
and with High Performance", in Proceedings of the 6th Workshop on I/O in
Parallel and Distributed Systems, pp. 23-32, May 1999.

* NC_SHARE has been deprecated since PnetCDF release 1.13.0.
  + NC_SHARE is a legacy flag inherited from NetCDF-3, whose purpose is to
    provide some degree of data consistency for multiple processes concurrently
    accessing a shared file. To achieve a stronger consistency, user
    applications are also required to synchronize the processes, e.g. by
    calling MPI_Barrier together with nc_sync.
  + Because PnetCDF follows the MPI file consistency model, which addresses
    only the case where all file accesses are relative to a specific file
    handle created from a collective open, NC_SHARE becomes invalid. Note that
    NetCDF-3 supports only sequential I/O and thus has no collective file open
    per se.

If users would like a stronger consistency, they may consider using the code
fragment below after each collective write API call (e.g.
`ncmpi_put_vara_int_all`, `ncmpi_wait_all`, `ncmpi_enddef`, `ncmpi_redef`,
`ncmpi_begin_indep_data`, `ncmpi_end_indep_data`).
```
    ncmpi_sync(ncid);
    MPI_Barrier(comm);
    ncmpi_sync(ncid);
```
Users are warned that the I/O performance could become significantly slower.

### Note on header consistency in memory and file
In data mode, changes to the file header can happen in the following scenarios: