Description
Hello WarpX community. I wasn't sure whether this is a bug or if I'm doing something wrong. I'm trying to use the moving window with the PSATD solver and PML boundary conditions in the radial direction for some LWFA simulations in RZ geometry.
As a test, I ran simulations with only a laser and no particles. If I use domain decomposition, for example by setting amr.max_grid_size to 1/4 of the number of longitudinal cells, I get non-physical fields at the divisions between grids. The issue becomes less severe as I increase the psatd.noz parameter, but it doesn't go away.
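For reference, these are the relevant decomposition lines from the input deck below: with amr.n_cell = 256 1024, setting amr.max_grid_size = 256 splits the domain into four grids of 256 cells each along z (1024/4 = 256).
amr.n_cell        = 256 1024
amr.max_grid_size = 256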
So my question is: is this expected behavior? The FBPIC documentation says that a stencil order of 32 is usually a good tradeoff, so it seems odd that I would need to raise it to much larger values when I'm only simulating a laser in free space. I would appreciate any advice on making these RZ PSATD simulations work better.
Expected behavior
I would expect all of these runs to look like the output I got with the 'open' boundary condition.
How to reproduce
Here is the input deck that I used.
#################################
####### GENERAL PARAMETERS ######
#################################
max_step = 2250
amr.n_cell = 256 1024
amr.max_grid_size = 256
amr.blocking_factor = 256
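# (with 256 x 1024 cells, max_grid_size = 256 splits the domain into four
#  256 x 256 grids along z; see the "Grids Summary" in the log further down)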
geometry.dims = RZ
geometry.prob_lo = 0. -70.e-6
geometry.prob_hi = 40.e-6 10.e-6
amr.max_level = 0
warpx.n_rz_azimuthal_modes = 2
algo.maxwell_solver = psatd
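# the commented psatd.* lines below (guard cells and spectral stencil orders)
# are the parameters scanned in the tests described under "Steps taken so far"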
#psatd.nx_guard = 32
#psatd.ny_guard = 32
#psatd.nz_guard = 256
#psatd.nox = 32
#psatd.noy = 32
#psatd.noz = 128
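# boundary values are ordered (r, z): the lower radial boundary is the axis
# ('none'), PML is applied at the outer radial boundary, and 'damped' is used
# along z (the recommended choice for PSATD with a moving window);
# do_pml_in_domain = 0 places the PML cells outside the physical domain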
boundary.field_lo = none damped
boundary.field_hi = pml damped
warpx.pml_ncell = 32
warpx.do_pml_in_domain = 0
#################################
############ NUMERICS ###########
#################################
warpx.verbose = 1
warpx.do_dive_cleaning = 0
warpx.use_filter = 1
warpx.filter_npass_each_dir = 0 1
warpx.cfl = 0.99
warpx.do_moving_window = 1
warpx.moving_window_dir = z
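# moving_window_v is in units of the speed of light, i.e. the window moves at c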
warpx.moving_window_v = 1.0
warpx.start_moving_window_step = 728
algo.load_balance_intervals = -1
algo.particle_shape = 3
#################################
############ LASER ##############
#################################
lasers.names = laser0
laser0.profile = Gaussian
### if using a Gaussian profile, fill out the parameters here
laser0.profile_duration = 2.97263e-14
laser0.profile_waist = 10e-6
laser0.profile_t_peak = 175.0e-15
laser0.profile_focal_distance = 225.0e-6
laser0.position = 0. 0. -25e-6
laser0.direction = 0. 0. 1.
laser0.polarization = 1. 0. 0.
laser0.wavelength = 0.8e-6
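# peak laser field amplitude in V/m (roughly a0 ~ 2.8 for 0.8 um light)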
laser0.e_max = 1.13467e13
# Diagnostics
diagnostics.diags_names = diag
diag.diag_type = Full
diag.fields_to_plot = Er Et Ez
diag.format = openpmd
diag.openpmd_backend = h5
diag.file_prefix = diag
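# intervals syntax is start:stop:period: write output every 250 steps, starting at step 750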
diag.intervals = 750::250
diag.dump_last_timestep = 1
I figured it would also be good to include the top of the output file since it has more information:
Initializing AMReX (25.03)...
MPI initialized with 1 MPI processes
MPI initialized with thread support level 3
Initializing CUDA...
CUDA initialized with 1 device.
AMReX (25.03) initialized
PICSAR (25.01)
WarpX (25.03-29-g343dfd6c3596-dirty)
 __        __             __  __
 \ \      / /_ _ _ __ _ __\ \/ /
  \ \ /\ / / _` | '__| '_ \\  /
   \ V  V / (_| | |  | |_) /  \
    \_/\_/ \__,_|_|  | .__/_/\_\
                     |_|
Level 0: dt = 2.579909799e-16 ; dx = 1.5625e-07 ; dz = 7.8125e-08
Grids Summary:
Level 0 4 grids 262144 cells 100 % of domain
smallest grid: 256 x 256 biggest grid: 256 x 256
-------------------------------------------------------------------------------
--------------------------- MAIN EM PIC PARAMETERS ----------------------------
-------------------------------------------------------------------------------
Precision: | DOUBLE
Particle precision: | DOUBLE
Geometry: | 2D (RZ)
| - n_rz_azimuthal_modes = 2
Operation mode: | Electromagnetic
| - vacuum
-------------------------------------------------------------------------------
Current Deposition: | direct
Particle Pusher: | Boris
Charge Deposition: | standard
Field Gathering: | energy-conserving
Particle Shape Factor:| 3
-------------------------------------------------------------------------------
Maxwell Solver: | PSATD
| - update with rho is ON
| - current correction is ON
| - collocated grid
Guard cells | - ng_alloc_EB = (32,128)
(allocated for E/B) |
-------------------------------------------------------------------------------
Moving window: | ON
| - moving_window_dir = z
| - moving_window_v = 299792458
-------------------------------------------------------------------------------
System information
- Operating system (name and version): Linux Rocky 8.10 Green Obsidian
- Version of WarpX: 25.03
- Installation method: installed on the HPC3 cluster at UCI, following the instructions in the WarpX documentation
- Other dependencies: I don't think so.
- Computational resources: 1 MPI process, 1 GPU (NVIDIA V100)
Steps taken so far
To test, I ran simulations scanning psatd.noz values (with the default psatd.nz_guard) and psatd.nz_guard values (with the default psatd.noz), using the input deck above. I also ran a test with the 'open' boundary condition and didn't have any issues, which makes me think that something is going wrong with the PML boundary.
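For example, one of the scan points just sets the stencil order explicitly in the deck above, with everything else (including psatd.nz_guard) left at its default:
psatd.noz = 64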
The image below shows the results of these tests. Each row corresponds to the value of the input parameter given on the y-axis of each column, except for the last row, which is for the open boundary. I extended the psatd.noz = 64 and 128 simulations to a longer time since it took longer for the issue to appear. It looks like increasing the noz value makes the problem less severe, but the number of nz_guard cells doesn't make much difference. Let me know if there's anything else you think I should test.