Apologies if this has been answered elsewhere. I am looking into using WarpX (amazing work everyone!) and I see that simulations can be run either using Python and the PICMI standard scripts, or using AMReX-compatible input scripts with the standalone executable. I cannot find any information comparing and contrasting the two, or explaining when you'd want to use one versus the other. You can call mpirun for both methods of running, and example input files are provided for both ways.

The only difference I can see is that Python seems more extensible (e.g., one can add custom logic by tweaking the step() method), and interfacing with other/custom codes would be much easier through the Python interface. Other than that, are there any substantive differences between the two? Speed/benchmarking differences? Different capabilities? I'd not be surprised if Python incurs an overhead, but given that it seems to primarily call down through pybind11 to run the C++ code anyhow, I wouldn't expect the compute differences (especially for runs where most of the time is spent in C++) to be too large.

If there's prior discussion, please share! I've looked and couldn't find any. Thanks in advance for your time.
Thanks for the question(s) and the compliments to everyone involved! Yes, the answers to this might be a bit scattered in the docs, so let us try to address them here in one place.

To start: correct, the standalone route is essentially an executable plus an AMReX inputs file,

[mpirun/srun/...] warpx filename.in

and the Python route is a regular run script,

[mpirun/srun/...] python warp_script.py

e.g., https://warpx.readthedocs.io/en/latest/usage/examples/lwfa/README.html
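For a concrete picture of the second route, here is a minimal, illustrative sketch of what such a warp_script.py could look like, using names from the PICMI standard. The grid size, extents, CFL number, and step count are made-up placeholders rather than a validated setup; the examples linked above show complete scripts.

```python
# Minimal PICMI sketch (placeholder parameters, not a validated setup).
from pywarpx import picmi

grid = picmi.Cartesian3DGrid(
    number_of_cells=[32, 32, 32],
    lower_bound=[-20e-6, -20e-6, -20e-6],
    upper_bound=[20e-6, 20e-6, 20e-6],
    lower_boundary_conditions=['periodic', 'periodic', 'periodic'],
    upper_boundary_conditions=['periodic', 'periodic', 'periodic'],
)

solver = picmi.ElectromagneticSolver(grid=grid, cfl=0.99)

sim = picmi.Simulation(solver=solver, max_steps=100)

# Run directly from Python ...
sim.step()

# ... or, alternatively, generate an AMReX-style inputs file for the
# standalone executable instead of running:
# sim.write_input_file(file_name='inputs_from_picmi')
```

As the commented-out last line hints, the two routes are closely related: a PICMI script can also be used just to generate an inputs file for the executable.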
On the point about Python being more extensible (e.g., customizing the logic around step()): that is exactly the point.
We also aim to provide a standardized (cross-PIC-code) interface in Python if you use the PICMI interface of WarpX. As linked above, using PICMI is not required, and sometimes our PICMI implementation does not yet expose every feature of WarpX (mainly a question of updating it periodically, which we do). With our Python interfaces you can generally add more advanced capabilities: for example, couple AI/ML models seamlessly, add complicated custom initialization logic, or use custom solvers, all of which would be a pain to express in the static syntax of an inputs file. (See related: [1], [2] and e.g. [3])
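As a rough illustration of that kind of extension, here is a hedged sketch of driving the simulation in chunks from Python so that custom logic can run between chunks. `sim` is assumed to be the picmi.Simulation object from a script like the one sketched above, and `my_custom_logic` is a hypothetical placeholder for whatever you want to run between chunks (diagnostics, ML inference, particle injection, ...).

```python
# Interleave custom Python logic with the PIC loop by advancing in chunks.
# `sim` is assumed to be a picmi.Simulation set up as sketched above.

def my_custom_logic(completed_steps):
    # Hypothetical placeholder: inspect or modify simulation data here,
    # e.g. via the Python data access described in the WarpX/pyAMReX docs.
    print(f"finished step {completed_steps}")

total_steps = 100
chunk = 10
for done in range(0, total_steps, chunk):
    sim.step(chunk)               # the heavy lifting stays in compiled C++
    my_custom_logic(done + chunk)
```

WarpX also provides callback hooks for this kind of workflow; see the Python workflow section of the docs for the currently supported entry points.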
On overheads: exactly. Just running WarpX from Python instead of the executable + inputs file does not incur noticeable overheads, because the wrapping and indirection of the Python calls cost far less than the runtimes of the functions being called (e.g., step/evolve do not run in the microsecond range).

The overheads of pybind11 (and nanobind, which we might move to later) are documented here: https://nanobind.readthedocs.io/en/latest/benchmark.html#performance The function-call timing shown there is averaged over 2.5M calls and works out to an overhead of about 1 us per call from Python into C++.

Now, into the weeds: once you start to extend your code from Python, the answer is "it depends". For instance, you can easily write very inefficient logic to inject particles into your simulation every time step, and that part will have nothing to do with WarpX itself. Luckily, we interoperate very well with popular libraries like cupy, cudf, numpy, PyTorch et al., so as long as you know how to write performant Python code in general, your performance will be exactly what you expect (see the small sketch at the end of this reply).

If you would like to dig a bit deeper into the low level, feel free to look into our pyamrex repo and its documentation.

Let us know if this answers your question :)
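To make the "it depends" part concrete, here is a small, WarpX-independent sketch of the difference between per-element Python loops and vectorized array code; the array size and distribution are arbitrary placeholders.

```python
# Per-element Python loops pay a Python-call overhead for every element,
# while vectorized array code keeps the loop in compiled code.
import numpy as np

n = 1_000_000
rng = np.random.default_rng(seed=42)

# Slow: millions of Python-level calls (each with microsecond-scale overhead).
# positions_slow = np.array([rng.normal(0.0, 1.0e-6) for _ in range(n)])

# Fast: one vectorized call; same result, the loop runs in compiled code.
positions_fast = rng.normal(0.0, 1.0e-6, size=n)
```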