@@ -6,6 +6,91 @@ PnetCDF Release Notes
 Version _PNETCDF_VERSION_ (_PNETCDF_RELEASE_DATE_)
 -------------------------------------
 
+* New features
+  + Intra-node aggregation for write requests -- When the number of MPI
+    processes allocated to a compute node is large, this feature can
+    effectively reduce the communication congestion caused by the
+    overwhelmingly large number of asynchronous messages posted during an
+    MPI-IO collective write. This new feature can be enabled by setting the
+    PnetCDF I/O hint 'nc_num_aggrs_per_node' to the desired number of
+    aggregators per compute node. The non-aggregators send their requests to
+    their assigned aggregators, and the aggregators then make the aggregated
+    requests to the file.
+    [PR #156](https://github.com/Parallel-NetCDF/PnetCDF/pull/156).
+  + Support MPI derived data types that are constructed from the large-count
+    datatype constructors introduced in MPI 4.0, as shown in the sketch below.
+    [PR #145](https://github.com/Parallel-NetCDF/PnetCDF/pull/145).
+
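+The sketch below is illustrative only and not part of the PnetCDF
+distribution. It shows one way a buffer datatype built with an MPI 4.0
+large-count ("_c") constructor might be passed to a flexible API; the helper
+function name and its arguments are made up for this example.
+
+```c
+#include <mpi.h>
+#include <pnetcdf.h>
+
+/* Hypothetical helper: write "nelems" integers (possibly more than 2
+ * billion) to a variable through a flexible API, describing the user buffer
+ * with an MPI 4.0 large-count datatype constructor. */
+int write_with_large_count(int ncid, int varid,
+                           const MPI_Offset *start, const MPI_Offset *count,
+                           const int *buf, MPI_Count nelems)
+{
+    MPI_Datatype buftype;
+    int err;
+
+    /* MPI_Type_contiguous_c is the large-count variant added in MPI 4.0 */
+    MPI_Type_contiguous_c(nelems, MPI_INT, &buftype);
+    MPI_Type_commit(&buftype);
+
+    /* bufcount is 1 because buftype already covers the whole buffer */
+    err = ncmpi_put_vara_all(ncid, varid, start, count, buf, 1, buftype);
+
+    MPI_Type_free(&buftype);
+    return err;
+}
+```
+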
+* New optimization
+  + When running sequentially (i.e. the number of processes is 1), PnetCDF
+    calls the MPI independent I/O functions and avoids calls to MPI_Barrier,
+    MPI_Bcast, and MPI_Allreduce.
+    [PR #149](https://github.com/Parallel-NetCDF/PnetCDF/pull/149).
+
+* Configure options changed
+  + The default has been changed to build both shared and static libraries.
+    [PR #143](https://github.com/Parallel-NetCDF/PnetCDF/pull/143).
+
+* Configure updates:
+  + Fix `pnetcdf-config` to reflect the installation path when the
+    installation is done by running the command
+    `make install DESTDIR=/alternate/directory`, which prepends
+    '/alternate/directory' to all installation paths.
+    [PR #154](https://github.com/Parallel-NetCDF/PnetCDF/pull/154).
+
+* New constants
+  + A new C macro `NC_FillValue` replaces `_FillValue`, which is now
+    deprecated. This conforms with NetCDF4's change in its version 4.9.3
+    release. A usage sketch is shown below.
+    [PR #153](https://github.com/Parallel-NetCDF/PnetCDF/pull/153).
+
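+A minimal usage sketch, assuming `ncid` and `varid` identify a float variable
+that is still in define mode (the helper name and fill value are arbitrary):
+
+```c
+#include <pnetcdf.h>
+
+/* Attach a per-variable fill-value attribute using the new NC_FillValue
+ * macro, which supplies the attribute name previously spelled with the
+ * deprecated _FillValue macro. */
+int set_fill_attribute(int ncid, int varid)
+{
+    float fillv = -9999.0f;
+    return ncmpi_put_att_float(ncid, varid, NC_FillValue, NC_FLOAT, 1, &fillv);
+}
+```
+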
+* New PnetCDF hints
+  + 'nc_num_aggrs_per_node' -- To enable intra-node aggregation, set this I/O
+    hint to a positive integer indicating the desired number of processes per
+    compute node to be selected as aggregators. Setting it to 0 disables the
+    aggregation, which is also the default mode. See the sketch after this
+    list.
+    [PR #156](https://github.com/Parallel-NetCDF/PnetCDF/pull/156).
+
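+The sketch below shows one way the hint might be passed at file-creation
+time; the hint value of "2" and the helper function are arbitrary examples.
+
+```c
+#include <mpi.h>
+#include <pnetcdf.h>
+
+/* Hypothetical helper: create a file with intra-node write aggregation
+ * enabled by setting the nc_num_aggrs_per_node I/O hint. */
+int create_with_aggregation(MPI_Comm comm, const char *path, int *ncidp)
+{
+    MPI_Info info;
+    int err;
+
+    MPI_Info_create(&info);
+    /* request 2 aggregators per compute node; "0" (the default) disables it */
+    MPI_Info_set(info, "nc_num_aggrs_per_node", "2");
+
+    err = ncmpi_create(comm, path, NC_CLOBBER, info, ncidp);
+
+    MPI_Info_free(&info);
+    return err;
+}
+```
+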
+* Build recipes
+  + When using OpenMPI on Mac OSX, a link error may appear. The workaround is
+    to add `LDFLAGS=-ld_classic` to the configure command line. Thanks to
+    Rui Chen for reporting the problem and providing the solution.
+    [Issue #139](https://github.com/Parallel-NetCDF/PnetCDF/issues/139).
+
+* Updated utility programs
+  + none
+
+* Other updates:
+  + More documentation comparing PnetCDF and NetCDF4 has been added to file
+    doc/netcdf4_vs_pnetcdf.md.
+    [PR #152](https://github.com/Parallel-NetCDF/PnetCDF/pull/152) and
+    [PR #140](https://github.com/Parallel-NetCDF/PnetCDF/pull/140).
+
+* New example programs
+  + C/flexible_bottom.c and C/vard_bottom.c - These two examples first
+    construct MPI derived data types using absolute memory addresses and then
+    pass MPI_BOTTOM as the buffer when calling the PnetCDF flexible APIs. A
+    sketch of this pattern is shown below.
+
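+The sketch below is not a copy of those example programs; it only outlines
+the pattern, with a made-up helper that writes a double array whose location
+is encoded in the datatype rather than in the buffer pointer.
+
+```c
+#include <mpi.h>
+#include <pnetcdf.h>
+
+/* Hypothetical helper: describe the user buffer by its absolute address in
+ * an MPI derived datatype, then pass MPI_BOTTOM as the buffer argument of a
+ * flexible API. */
+int put_with_bottom(int ncid, int varid,
+                    const MPI_Offset *start, const MPI_Offset *count,
+                    double *buf, int nelems)
+{
+    MPI_Aint disp;
+    int blocklen = nelems, err;
+    MPI_Datatype buftype;
+
+    MPI_Get_address(buf, &disp);   /* absolute address of the user buffer */
+    MPI_Type_create_hindexed(1, &blocklen, &disp, MPI_DOUBLE, &buftype);
+    MPI_Type_commit(&buftype);
+
+    /* the buffer argument is MPI_BOTTOM; buftype carries the address */
+    err = ncmpi_put_vara_all(ncid, varid, start, count, MPI_BOTTOM, 1, buftype);
+
+    MPI_Type_free(&buftype);
+    return err;
+}
+```
+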
+* New programs for I/O benchmarks
+  + C/pnetcdf_put_vara.c --
+    * This program writes a series of 3D record variables using a 2D
+      block-block partitioning pattern.
+      [PR #150](https://github.com/Parallel-NetCDF/PnetCDF/pull/150).
+  + C/netcdf_put_vara.c --
+    * This sequential NetCDF-C program writes a series of 3D record variables.
+    * This program and `C/pnetcdf_put_vara.c` can be used to compare the
+      performance of NetCDF and PnetCDF when running sequentially, i.e. on a
+      single process.
+      [PR #150](https://github.com/Parallel-NetCDF/PnetCDF/pull/150).
+
+* New test program
+  + test/testcases/flexible_large_count.c - tests flexible APIs that use MPI
+    derived datatypes created by MPI large-count datatype constructors.
+    [PR #145](https://github.com/Parallel-NetCDF/PnetCDF/pull/145).
+
+
+-------------------------------------
+Version 1.13.0 (March 29, 2024)
+-------------------------------------
+
 * New features
   + A single read/write request made by an MPI process is now allowed to be of
     size larger than 2 GiB. Such large requests will be passed to the MPI-IO