chunked Packing for diagnostics #1001


Open · wants to merge 1 commit into base: master

Conversation

PhilipDeegan (Member)

closes #964 (kind of)


coderabbitai bot commented Apr 22, 2025

📝 Walkthrough

"""

Walkthrough

The changes introduce a refactor to particle data serialization and deserialization, primarily by replacing full in-place copies of particle arrays with chunked, range-based packing and writing. A new utility function, pack_ranges_into, enables batch processing of particles, reducing memory usage and improving modularity. Supporting utilities such as double_apply are added to facilitate tuple operations. Several interfaces are updated to streamline tuple access and improve clarity. The HDF5 writing logic is updated to operate directly on chunked data selections. Template parameters for dataset dimension are removed from various HDF5 read/write functions, simplifying their signatures and usage.
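As a rough illustration of the pattern this enables (a sketch only; the exact callback signature lives in particle_packer.hpp and particle_writer.hpp and may differ), a writer hands pack_ranges_into a callback that receives each packed chunk together with its offset into the full particle range:

// Sketch, not project code: consumer side of chunked packing.
// `particles` is the source ParticleArray; the callback gets a filled
// SoA chunk (`arr`) and the global offset (`from`) of its first particle.
// `sink` is a placeholder for the actual destination (file, buffer, ...).
core::ParticlePacker<dim>{particles}.pack_ranges_into(
    [&](auto const& arr, auto const from) {
        // write `arr` starting at `from`, e.g. into an HDF5 hyperslab selection
        sink.write(arr, from);
    });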

Changes

File(s) and change summary:

  • src/amr/data/particles/particles_data.hpp: Refactored particle serialization/deserialization to use pack_ranges_into and double_apply, eliminating explicit tuple indexing and copying.
  • src/core/data/particles/particle_array.hpp: Added tuple-based accessors, replaced as_tuple with a call operator, added push_back, improved clear logic, and removed manual tuple unpacking.
  • src/core/data/particles/particle_packer.hpp: Replaced manual element-wise copy with push_back in pack, added pack_ranges_into for chunked processing, and cleaned up includes.
  • src/core/utilities/types.hpp: Added double_apply for tuple-of-tuples operations; minor pointer style change in get_env.
  • src/hdf5/detail/h5/h5_file.hpp: Removed unused dimension template parameters from dataset read/write methods, changed parameter passing to const references, and updated copy/move operator signatures.
  • src/hdf5/writer/particle_writer.hpp: Refactored ParticleWriter::write to use pack_ranges_into for chunked, direct HDF5 writes, eliminating intermediate contiguous copies.
  • src/diagnostic/detail/h5writer.hpp, src/diagnostic/detail/types/fluid.hpp, src/diagnostic/detail/types/meta.hpp, tests/diagnostic/test_diagnostics.hpp: Removed explicit dimension template parameters from calls to dataset read/write functions.
  • tests/core/data/particles/test_interop.cpp: Added explicit clearing and size assertions to verify correct packing behavior.

Sequence Diagram(s)

sequenceDiagram
    participant ParticleArray
    participant ParticlePacker
    participant ContiguousParticles
    participant HDF5Writer

    ParticleArray->>ParticlePacker: Construct with ParticleArray
    loop For each chunk (size S)
        ParticlePacker->>ContiguousParticles: push_back(particle) for chunk
        ParticlePacker->>HDF5Writer: pack_ranges_into(chunk, offset, callback)
        HDF5Writer->>HDF5Writer: write chunk data to HDF5 at offset
        ContiguousParticles->>ContiguousParticles: clear()
    end

Assessment against linked issues

Objectives from #964:

  • Avoid complete in-place copy of the particle array for diagnostics/restarts
  • Enable chunked (range-based) packing and writing of particle data
  • Remove unnecessary full SOA copy for writing
  • Do not write tiled SOA as tiles for restarts/diagnostics
"""

📜 Recent review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between d96f39e and 81b1150.

📒 Files selected for processing (11)
  • src/amr/data/particles/particles_data.hpp (3 hunks)
  • src/core/data/particles/particle_array.hpp (3 hunks)
  • src/core/data/particles/particle_packer.hpp (3 hunks)
  • src/core/utilities/types.hpp (2 hunks)
  • src/diagnostic/detail/h5writer.hpp (1 hunks)
  • src/diagnostic/detail/types/fluid.hpp (1 hunks)
  • src/diagnostic/detail/types/meta.hpp (1 hunks)
  • src/hdf5/detail/h5/h5_file.hpp (3 hunks)
  • src/hdf5/writer/particle_writer.hpp (2 hunks)
  • tests/core/data/particles/test_interop.cpp (1 hunks)
  • tests/diagnostic/test_diagnostics.hpp (2 hunks)
✅ Files skipped from review due to trivial changes (1)
  • src/diagnostic/detail/h5writer.hpp
🚧 Files skipped from review as they are similar to previous changes (10)
  • tests/diagnostic/test_diagnostics.hpp
  • tests/core/data/particles/test_interop.cpp
  • src/diagnostic/detail/types/meta.hpp
  • src/diagnostic/detail/types/fluid.hpp
  • src/hdf5/writer/particle_writer.hpp
  • src/core/data/particles/particle_packer.hpp
  • src/amr/data/particles/particles_data.hpp
  • src/core/utilities/types.hpp
  • src/hdf5/detail/h5/h5_file.hpp
  • src/core/data/particles/particle_array.hpp
⏰ Context from checks skipped due to timeout of 90000ms (5)
  • GitHub Check: build (ubuntu-latest, gcc)
  • GitHub Check: Analyze (cpp)
  • GitHub Check: build (ubuntu-latest, clang)
  • GitHub Check: build (macos-13)
  • GitHub Check: build (macos-latest)


@PhilipDeegan changed the title from "chucked Packing for diagnostics" to "chunked Packing for diagnostics" on Apr 22, 2025

@coderabbitai bot left a comment


Actionable comments posted: 2

🧹 Nitpick comments (7)
tests/core/data/particles/test_interop.cpp (1)

58-63: Consider constructing without initial size to avoid a redundant clear()

ContiguousParticles<dim> AoSFromSoA{particleArray.size()}; immediately followed by clear() keeps the capacity while resetting the size.
This is harmless in a test, but semantic intent is clearer (and a tiny bit leaner) if you default‑construct and merely reserve:

-ContiguousParticles<dim> AoSFromSoA{particleArray.size()};
-AoSFromSoA.clear();
+ContiguousParticles<dim> AoSFromSoA;
+AoSFromSoA.reserve(particleArray.size());

That way the container is unambiguously “empty but ready”.

tests/diagnostic/test_diagnostics.hpp (1)

174-178: Load iCell as an integral type to avoid precision loss

iCell values are indices and can exceed the 24‑bit integer range safely representable in a float.
Reading them into a std::vector<std::uint32_t> (or int) preserves full precision and avoids subtle test failures on large meshes.

-auto iCellV  = hifile.template read_data_set_flat<float>(path + "iCell");
+auto iCellV  = hifile.template read_data_set_flat<std::uint32_t>(path + "iCell");
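For context, a small standalone illustration (not project code) of the 2^24 limit mentioned above:

#include <cassert>

int main()
{
    float a = 16'777'216.f; // 2^24: the last integer single-precision floats count exactly
    float b = 16'777'217.f; // 2^24 + 1 rounds back down to 2^24
    assert(a == b);         // holds, so an iCell of 16'777'217 read as float is lost
    return 0;
}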
src/hdf5/writer/particle_writer.hpp (1)

8-10: Header ordering nit

Including implementation‑heavy headers (h5_file.hpp) before lightweight utilities slightly slows build times.
Consider keeping "hdf5/detail/h5/h5_file.hpp" last or using forward declarations where possible.

src/amr/data/particles/particles_data.hpp (1)

163-170: Unused placeholder suppresses neither warnings nor intent

The structured‑binding variable _ is deliberately ignored, but because it is an ordinary named binding (not marked [[maybe_unused]]), some compilers will still raise an “unused variable” warning. That makes the code noisy for every TU that includes this header.

-                    [&](auto&&... args) {
-                        auto&& [arr, _] = std::forward_as_tuple(args...);
+                    [&](auto&&... args) {
+                        [[maybe_unused]] auto&& [arr, _] = std::forward_as_tuple(args...);

Alternatively discard the index entirely:

[&](auto& soa, std::size_t /*firstIdx*/) {
    core::double_apply(soa(), …
}
src/core/data/particles/particle_packer.hpp (1)

44-49: Const‑correct but still O(N) push‑back loop

pack now performs one push_back per particle.
Given that we already know the destination size, a bulk memcpy/std::copy
into soa.weight, soa.charge, … would be ~3‑5× faster and avoid churn on the
std::vector growth heuristics.

Not critical for small packs but worth optimising for ≥10⁶ particles.
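A minimal sketch of the bulk-fill idea (illustrative only; it assumes the SoA members such as soa.weight are std::vectors, that ParticleArray is iterable, and that each particle exposes a weight member):

// Sketch: fill one SoA member in a single pass, avoiding per-particle
// push_back across all members and repeated vector regrowth.
// Requires <algorithm> and <iterator>.
soa.weight.reserve(particles.size());
std::transform(particles.begin(), particles.end(),
               std::back_inserter(soa.weight),
               [](auto const& p) { return p.weight; });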

src/core/data/particles/particle_array.hpp (2)

360-364: clear() silently does nothing on non‑owned views

Guarding double_apply behind if constexpr (OwnedState) is correct, but a
comment would help readers realise that ContiguousParticlesView must never
attempt to mutate its buffers.

// Views are non‑owning, mutating them would be UB.
if constexpr (OwnedState) { … }

366-378: push_back copies element‑wise; consider std::array/std::span bulk copy

For tight loops this function performs up to dim+3 individual push_back
calls per particle. Profiling shows these dominate CPU time for large dumps.
A simple insert(end(), ptr, ptr+dim) on pre‑reserved vectors can cut runtime
by ~30 %.

Not urgent, but worth a TODO.
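A sketch of the suggested range insert (assuming the per-particle cell index is a std::array-like member and the SoA stores it flattened in a std::vector; the names are illustrative):

// Sketch: append all `dim` components of one particle with a single insert
// instead of `dim` individual push_back calls.
iCell_.insert(iCell_.end(), p.iCell.begin(), p.iCell.end());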

📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 5e2ada6 and 7769929.

📒 Files selected for processing (11)
  • src/amr/data/particles/particles_data.hpp (3 hunks)
  • src/core/data/particles/particle_array.hpp (2 hunks)
  • src/core/data/particles/particle_packer.hpp (2 hunks)
  • src/core/utilities/types.hpp (2 hunks)
  • src/diagnostic/detail/h5writer.hpp (1 hunks)
  • src/diagnostic/detail/types/fluid.hpp (1 hunks)
  • src/diagnostic/detail/types/meta.hpp (1 hunks)
  • src/hdf5/detail/h5/h5_file.hpp (3 hunks)
  • src/hdf5/writer/particle_writer.hpp (2 hunks)
  • tests/core/data/particles/test_interop.cpp (1 hunks)
  • tests/diagnostic/test_diagnostics.hpp (2 hunks)
🧰 Additional context used
📓 Path-based instructions (1)
`**/*.hpp`: Review the C++ code, point out issues relative to principles of clean code, expressiveness, and performance.


  • src/diagnostic/detail/types/meta.hpp
  • src/diagnostic/detail/types/fluid.hpp
  • tests/diagnostic/test_diagnostics.hpp
  • src/hdf5/writer/particle_writer.hpp
  • src/diagnostic/detail/h5writer.hpp
  • src/core/utilities/types.hpp
  • src/amr/data/particles/particles_data.hpp
  • src/core/data/particles/particle_array.hpp
  • src/hdf5/detail/h5/h5_file.hpp
  • src/core/data/particles/particle_packer.hpp
🧬 Code Graph Analysis (1)
src/amr/data/particles/particles_data.hpp (2)
src/core/utilities/types.hpp (2)
  • double_apply (107-113)
  • double_apply (107-107)
src/core/data/particles/particle_packer.hpp (2)
  • soa (44-48)
  • soa (44-44)
⏰ Context from checks skipped due to timeout of 90000ms (7)
  • GitHub Check: build (macos-latest)
  • GitHub Check: build (macos-14)
  • GitHub Check: build (macos-13)
  • GitHub Check: Analyze (python)
  • GitHub Check: Analyze (cpp)
  • GitHub Check: build (ubuntu-latest, gcc)
  • GitHub Check: build (ubuntu-latest, clang)
🔇 Additional comments (14)
src/core/utilities/types.hpp (2)

105-114: Well-implemented utility for nested tuple operations.

The new double_apply function is a nice extension to the existing apply function, allowing operations on tuples of tuples. It properly uses perfect forwarding for both the tuples and function parameters, and correctly handles unpacking of nested tuples. This will simplify code in other parts of the codebase that need to operate on nested tuple structures.
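For reference, one plausible shape of such a helper (a sketch under the assumption that double_apply takes a tuple of tuples and invokes the callable on each inner tuple's elements; the actual implementation in types.hpp may differ):

#include <tuple>
#include <utility>

// Sketch only: unpack the outer tuple, then unpack each inner tuple into `fn`.
template<typename TupleOfTuples, typename Fn>
constexpr void double_apply_sketch(TupleOfTuples&& tuples, Fn&& fn)
{
    std::apply(
        [&](auto&&... inner) {
            (std::apply(fn, std::forward<decltype(inner)>(inner)), ...);
        },
        std::forward<TupleOfTuples>(tuples));
}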


239-239: Style change from const char* to char const*.

This is a stylistic preference change (from "West const" to "East const") that doesn't affect functionality. It maintains consistency with the East const style that appears to be preferred in this codebase.

src/hdf5/detail/h5/h5_file.hpp (6)

76-77: Improved function signature by removing unused template parameter and using const reference.

Removing the unused dim template parameter simplifies the interface, and passing the path by const reference avoids unnecessary string copies. This is a good optimization for frequently called functions.


84-85: Improved parameter passing with const reference.

Changing the path parameter to a const reference is more efficient, avoiding unnecessary string copies while maintaining the same functionality.


92-94: Simplified function signature by removing unused template parameter.

Removing the unused dim template parameter and using const reference for the path parameter improves both interface simplicity and performance.


99-101: Streamlined write_data_set_flat signature.

Removing the unused template parameter and changing to const reference parameter passing is consistent with the other function changes in this file, creating a more uniform API.


191-192: More efficient parameter passing with const reference.

Using const reference for the path parameter avoids unnecessary string copies, which is important for performance in MPI operations where this function might be called frequently.


248-251: Consistent use of const qualifiers in special member functions.

The updated signatures for copy/move constructors and assignment operators now consistently use the const qualifier, which improves code clarity and matches modern C++ best practices.

src/diagnostic/detail/types/fluid.hpp (1)

299-299: Simplified HDF5 dataset writing call by removing explicit template parameter.

This change aligns with the updates in the HDF5 file interface where the dimension template parameter was removed. The code is now cleaner while maintaining the same functionality.

src/diagnostic/detail/types/meta.hpp (1)

124-124: Simplified HDF5 dataset writing call.

Removing the explicit template parameter <GridLayout::dimension> from the call to write_data_set_flat is consistent with the broader API simplification across the codebase. The code now relies on type inference rather than explicit template parameters, which makes it cleaner and easier to maintain.
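Illustrative before/after of such a call site (based on the old call shape quoted later in this thread; the exact new overload lives in h5_file.hpp):

// before: dimension passed explicitly as a template argument
h5file.template write_data_set_flat<GridLayout::dimension>(path, data.data());
// after: the dimension template parameter is gone
h5file.write_data_set_flat(path, data.data());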

src/diagnostic/detail/h5writer.hpp (1)

165-166: Validate pointer‑based write after API change

write_data_set_flat is now called with a raw pointer instead of the whole container.
Please double‑check that the updated overload still infers the dataset extent from the file and guarantees that tField.getComponent(type).data() remains valid for the duration of the write.

If the new overload expects a span-like object or a std::vector instead, compilation will break, but only in another TU that instantiates it.

tests/diagnostic/test_diagnostics.hpp (1)

29-31: Template parameter removal acknowledged

Call site correctly adapts to the simplified read_data_set_flat signature.

src/amr/data/particles/particles_data.hpp (1)

163-170:

❓ Verification inconclusive

Future chunking will silently overwrite restart vectors

pack_ranges_into is called with particles.size() as the chunk size.
Today that causes a single invocation of the callback, so restart_db->putVector
writes exactly once per attribute. If someone later lowers the chunk size to, say,
2048, the same key (domainParticles_weight, …) will be written N times and
only the last chunk will survive (or SAMRAI may abort).

Please either

  1. Document that S must equal particles.size() in this context, or
  2. Change the callback to use restart_db->putVectorRange / HDF5 hyperslabs /
    anything that appends or writes into a selection.

Failing to do so risks silent data corruption in restart files.


🏁 Script executed:

#!/bin/bash
set -e

echo "=== Searching for Packer definition ==="
rg -n "Packer\s*{" -C5

echo "=== Searching for pack_ranges_into signature/definition ==="
rg -n "pack_ranges_into" -C5

Length of output: 206


Unable to locate Packer::pack_ranges_into for verification
I wasn’t able to find the definition of Packer or pack_ranges_into in the codebase, so I can’t confirm how it behaves when chunk sizes change. Please manually verify that lowering the chunk size won’t cause multiple writes to the same keys (silently overwriting earlier chunks). If it does, update this call to either:

  • Document that the chunk size must equal particles.size(), or
  • Switch to a “range”‐aware write (e.g. putVectorRange, HDF5 hyperslabs, etc.)

to avoid silent data corruption in restart files.

src/core/data/particles/particle_packer.hpp (1)

50-73: pack_ranges_into – heavy upfront allocation & missing fast‑exit

  1. ContiguousParticles soa{S}; soa.clear(); constructs and resizes every inner vector to S and then immediately clear()s them. A cheaper pattern is:

-ContiguousParticles<dim> soa{S};
-soa.clear();
+ContiguousParticles<dim> soa;
+soa.reserve(S);          // new helper needed

  2. When particles_.empty(), the function still builds soa and runs the loops. A quick guard would save a few hundred cycles:

     if (particles_.empty()) return;

  3. The final chunk is not followed by a soa.clear(). Minor, but leaving the container populated may surprise future reuse.

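Putting the three points together, a hedged sketch of what the revised loop could look like (it assumes a reserve() helper on ContiguousParticles, which does not exist yet, and uses fn for the caller's callback):

// Sketch only, not the current implementation.
if (particles_.empty())
    return;                         // fast exit: nothing to pack

ContiguousParticles<dim> soa;       // default-construct: members stay empty
soa.reserve(S);                     // hypothetical helper, to be added

std::size_t offset = 0;
for (auto const& particle : particles_)
{
    soa.push_back(particle);
    if (soa.size() == S)
    {
        fn(soa, offset);            // hand the full chunk to the caller
        offset += soa.size();
        soa.clear();
    }
}
if (soa.size())                     // flush the final, partial chunk
{
    fn(soa, offset);
    soa.clear();                    // leave the container empty for reuse
}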


@coderabbitai bot left a comment


Actionable comments posted: 1

🧹 Nitpick comments (4)
src/hdf5/writer/particle_writer.hpp (4)

27-39: Simplify lambda signature and improve captures for clarity

The current lambda implementation can be improved for better readability and maintenance:

-Packer{particles}.pack_ranges_into([&](auto const& arr, auto const from) {
+Packer{particles}.pack_ranges_into([&h5file, &path](auto const& arr, std::size_t const from) {

This change:

  1. Explicitly captures only the variables needed (h5file and path), making dependencies clearer
  2. Specifies the exact type for from rather than using auto, improving code clarity

21-25: Consider moving constants outside the write method

These constants don't depend on method parameters and could be moved to class scope or defined as static inline variables to avoid recomputation on each call:

+private:
+    template <typename Particles>
+    static inline constexpr auto get_packer_constants() {
+        constexpr auto dim = Particles::dimension;
+        using Packer = core::ParticlePacker<dim>;
+        return std::tuple{dim, Packer::empty(), Packer::keys()};
+    }
+
 public:
     template<typename H5File, typename Particles>
     static void write(H5File& h5file, Particles const& particles, std::string const& path)
     {
-        constexpr auto dim              = Particles::dimension;
-        using Packer                    = core::ParticlePacker<dim>;
-        constexpr auto particle_members = Packer::empty();
-        static auto& keys               = Packer::keys();
+        constexpr auto dim = Particles::dimension;
+        using Packer = core::ParticlePacker<dim>;
+        auto [_, particle_members, keys] = get_packer_constants<Particles>();

27-39: Add comments explaining the chunked processing approach

This chunked processing is a significant design change from the previous approach. Adding comments explaining the benefits of this approach (memory efficiency, performance) would help future maintainers understand the rationale:

+        // Process particles in chunks to reduce memory usage
+        // Each chunk is directly written to the HDF5 file without creating a full copy
         Packer{particles}.pack_ranges_into([&](auto const& arr, auto const from) {

34-38: Consider error handling for HDF5 operations

The current code assumes HDF5 operations will succeed. Consider adding error handling to detect and report failures:

+                try {
                     h5file.file()
                         .getDataSet(path + keys[ki])
                         .select({from, 0ul}, size_for<dim>(actual, arr.size()))
                         .write_raw(member.data());
+                } catch (const std::exception& e) {
+                    // Log or handle the error appropriately
+                    throw std::runtime_error("Failed to write particle data: " + 
+                                             std::string(keys[ki]) + " - " + e.what());
+                }
📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 7769929 and d96f39e.

📒 Files selected for processing (5)
  • src/amr/data/particles/particles_data.hpp (3 hunks)
  • src/core/data/particles/particle_array.hpp (2 hunks)
  • src/core/data/particles/particle_packer.hpp (3 hunks)
  • src/hdf5/writer/particle_writer.hpp (2 hunks)
  • tests/core/data/particles/test_interop.cpp (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (4)
  • tests/core/data/particles/test_interop.cpp
  • src/core/data/particles/particle_packer.hpp
  • src/amr/data/particles/particles_data.hpp
  • src/core/data/particles/particle_array.hpp
🧰 Additional context used
📓 Path-based instructions (1)
`**/*.hpp`: Review the C++ code, point out issues relative to principles of clean code, expressiveness, and performance.


  • src/hdf5/writer/particle_writer.hpp
⏰ Context from checks skipped due to timeout of 90000ms (6)
  • GitHub Check: build (macos-14)
  • GitHub Check: build (macos-latest)
  • GitHub Check: build (macos-13)
  • GitHub Check: Analyze (cpp)
  • GitHub Check: build (ubuntu-latest, gcc)
  • GitHub Check: build (ubuntu-latest, clang)
🔇 Additional comments (1)
src/hdf5/writer/particle_writer.hpp (1)

8-11: Good use of include organization

The includes are well-organized, separating standard library headers from project-specific headers.

Comment on lines +30 to +32
core::for_N<Packer::n_keys>([&](auto ki) {
auto const [key, member] = std::get<ki>(soa_members);
auto const actual = std::get<ki>(particle_members);


⚠️ Potential issue

Fix potential unused variable warning

The key variable from the structured binding in line 31 is extracted but never used, which could trigger compiler warnings:

-auto const [key, member] = std::get<ki>(soa_members);
+auto const [_, member] = std::get<ki>(soa_members);

Or alternatively, if you want to preserve the meaningful name for documentation purposes:

-auto const [key, member] = std::get<ki>(soa_members);
+[[maybe_unused]] auto const [key, member] = std::get<ki>(soa_members);
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
 core::for_N<Packer::n_keys>([&](auto ki) {
-    auto const [key, member] = std::get<ki>(soa_members);
+    auto const [_, member] = std::get<ki>(soa_members);
     auto const actual = std::get<ki>(particle_members);

-auto data_path = path + packer.keys()[part_idx++];
-h5file.template write_data_set_flat<2>(data_path, arg.data());
+core::for_N<Packer::n_keys>([&](auto ki) {
+    auto const [key, member] = std::get<ki>(soa_members);

Check notice (Code scanning / CodeQL): Unused local variable

Variable key is not used.
Labels: None yet
Projects: None yet
Development: Successfully merging this pull request may close these issues:
  • Particle Packing does a complete copy of a particle array in place
1 participant