How to make grid periodic (for particles [interpolation])? #4822
If I understood correctly, the following code should create a periodic 1x1x1 grid spanning [0,1] in all dimensions and interpolate grid data to one particle, but if I change e.g.
Replies: 2 comments 2 replies
The issue is not the periodic boundary. The issue is #4810 (reply in thread)
You could work around the issue by doing

@ax3l I think you wrote the original code; what's your take? I think we have a few options. I would go with 2.
The standard AMReX way would be to add ghost cells, call FillBoundary with the periodicity, and let the particles interpolate from the ghost cells. Redistribute would be called on the particles after every push to periodically shift the particles' positions so that they stay inside the domain. The advantage of this approach is that it also works if the domain is distributed across multiple MPI ranks, as both FillBoundary and Redistribute do the necessary MPI communication. However, if you only have a single process, it can work without ghost cells by adding a modulo operation to the index calculation for the particles. For this, MeshToParticle needs to be replaced with a custom function. This version does nearest-grid-point interpolation and does not have the 0.5 offset:

```cpp
const auto grid_start = geom.ProbLoArray();
const auto inv_dr = geom.InvCellSizeArray();
const amrex::IntVect domain_lo = domain.smallEnd();
const amrex::IntVect domain_len = domain.length();
amrex::MeshToParticle(particle_container, grid, 0,
    [=] AMREX_GPU_DEVICE (auto& particle1, const auto& data1)
{
    const amrex::Real xmid = (particle1.pos(0) - grid_start[0]) * inv_dr[0]; // + 0.5;
    const amrex::Real ymid = (particle1.pos(1) - grid_start[1]) * inv_dr[1]; // + 0.5;
    const amrex::Real zmid = (particle1.pos(2) - grid_start[2]) * inv_dr[2]; // + 0.5;
    const int idx_start_x = static_cast<int>(amrex::Math::floor(xmid));
    const int idx_start_y = static_cast<int>(amrex::Math::floor(ymid));
    const int idx_start_z = static_cast<int>(amrex::Math::floor(zmid));
    // periodically shift the index into the domain
    int i = (idx_start_x - domain_lo[0]) % domain_len[0];
    int j = (idx_start_y - domain_lo[1]) % domain_len[1];
    int k = (idx_start_z - domain_lo[2]) % domain_len[2];
    // C++ % can yield negative results for indices left of the domain
    if (i < 0) { i += domain_len[0]; }
    if (j < 0) { j += domain_len[1]; }
    if (k < 0) { k += domain_len[2]; }
    i += domain_lo[0];
    j += domain_lo[1];
    k += domain_lo[2];
    particle1.rdata(0) += data1(i, j, k, 0);
});
```