.. _building-polaris:

Polaris (ALCF)
==============

The `Polaris cluster <https://docs.alcf.anl.gov/polaris/getting-started/>`__ is located at ALCF.


Introduction
------------

If you are new to this system, **please see the following resources**:

* `ALCF user guide <https://docs.alcf.anl.gov/>`__
* Batch system: `PBS <https://docs.alcf.anl.gov/running-jobs/job-and-queue-scheduling/>`__
* `Filesystems <https://docs.alcf.anl.gov/data-management/filesystem-and-storage/file-systems/>`__

.. _building-polaris-preparation:

Preparation
-----------

Use the following commands to download the WarpX source code:

.. code-block:: bash

   git clone https://github.com/ECP-WarpX/WarpX.git $HOME/src/warpx

On Polaris, you can run either on GPU nodes with fast A100 GPUs (recommended) or on CPU nodes.

.. tab-set::

   .. tab-item:: A100 GPUs

      We use system software modules and add environment hints and further dependencies via the file ``$HOME/polaris_gpu_warpx.profile``.
      Create it now:

      .. code-block:: bash

         cp $HOME/src/warpx/Tools/machines/polaris-alcf/polaris_gpu_warpx.profile.example $HOME/polaris_gpu_warpx.profile

      .. dropdown:: Script Details
         :color: light
         :icon: info
         :animate: fade-in-slide-down

         .. literalinclude:: ../../../../Tools/machines/polaris-alcf/polaris_gpu_warpx.profile.example
            :language: bash

      Edit the 2nd line of this script, which sets the ``export proj=""`` variable.
      For example, if you are a member of the project ``proj_name``, then run ``nano $HOME/polaris_gpu_warpx.profile`` and edit line 2 to read:

      .. code-block:: bash

         export proj="proj_name"

      Exit the ``nano`` editor with ``Ctrl`` + ``O`` (save) and then ``Ctrl`` + ``X`` (exit).
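
      Alternatively, a non-interactive edit works as well, for example with ``sed`` (a sketch; ``proj_name`` is a placeholder for your actual project name):

      .. code-block:: bash

         sed -i 's/export proj=""/export proj="proj_name"/' $HOME/polaris_gpu_warpx.profile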

      .. important::

         Now, and as the first step on future logins to Polaris, activate these environment settings:

         .. code-block:: bash

            source $HOME/polaris_gpu_warpx.profile

      Finally, since Polaris does not yet provide software modules for some of our dependencies, install them once:

      .. code-block:: bash

         bash $HOME/src/warpx/Tools/machines/polaris-alcf/install_gpu_dependencies.sh
         source ${HOME}/sw/polaris/gpu/venvs/warpx/bin/activate

      .. dropdown:: Script Details
         :color: light
         :icon: info
         :animate: fade-in-slide-down

         .. literalinclude:: ../../../../Tools/machines/polaris-alcf/install_gpu_dependencies.sh
            :language: bash

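      As a quick, optional check (this assumes the install script completed successfully and the virtual environment is active), verify that the Python interpreter from the new virtual environment is the one being picked up:

      .. code-block:: bash

         which python3
         python3 --version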

   .. tab-item:: CPU Nodes

      *Under construction*


.. _building-polaris-compilation:

Compilation
-----------

Use the following :ref:`cmake commands <building-cmake>` to compile the application executable:

.. tab-set::

   .. tab-item:: A100 GPUs

      .. code-block:: bash

         cd $HOME/src/warpx
         rm -rf build_pm_gpu

         cmake -S . -B build_pm_gpu -DWarpX_COMPUTE=CUDA -DWarpX_PSATD=ON -DWarpX_QED_TABLE_GEN=ON -DWarpX_DIMS="1;2;RZ;3"
         cmake --build build_pm_gpu -j 16

      The WarpX application executables are now in ``$HOME/src/warpx/build_pm_gpu/bin/``.
      Additionally, the following commands will install WarpX as a Python module:

      .. code-block:: bash

         cd $HOME/src/warpx
         rm -rf build_pm_gpu_py

         cmake -S . -B build_pm_gpu_py -DWarpX_COMPUTE=CUDA -DWarpX_PSATD=ON -DWarpX_QED_TABLE_GEN=ON -DWarpX_APP=OFF -DWarpX_PYTHON=ON -DWarpX_DIMS="1;2;RZ;3"
         cmake --build build_pm_gpu_py -j 16 --target pip_install

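      As a quick, optional check (assuming the virtual environment from the :ref:`Preparation section <building-polaris-preparation>` is still active), verify that the Python module can be imported:

      .. code-block:: bash

         python3 -c "import pywarpx"
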
   .. tab-item:: CPU Nodes

      *Under construction*

Now, you can :ref:`submit Polaris compute jobs <running-cpp-polaris>` for WarpX :ref:`Python (PICMI) scripts <usage-picmi>` (:ref:`example scripts <usage-examples>`).
Or, you can use the WarpX executables to submit Polaris jobs (:ref:`example inputs <usage-examples>`).
For executables, you can reference their location in your :ref:`job script <running-cpp-polaris>` or copy them to a location on a fast parallel filesystem, e.g., your project's directory on the Eagle filesystem.
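
For example, a minimal sketch of copying the compiled executables into a (hypothetical) run directory on Eagle (adjust the project name, executable name, and paths to your setup):

.. code-block:: bash

   mkdir -p /eagle/<project name>/${USER}/warpx_runs
   cp $HOME/src/warpx/build_pm_gpu/bin/warpx.3d* /eagle/<project name>/${USER}/warpx_runs/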


.. _building-polaris-update:

Update WarpX & Dependencies
---------------------------

If you already installed WarpX in the past and want to update it, start by getting the latest source code:

.. code-block:: bash

   cd $HOME/src/warpx

   # read the output of this command - does it look ok?
   git status

   # get the latest WarpX source code
   git fetch
   git pull

   # read the output of these commands - do they look ok?
   git status
   git log  # press q to exit

And, if needed,

- :ref:`update the polaris_gpu_warpx.profile or polaris_cpu_warpx.profile files <building-polaris-preparation>`,
- log out and back into the system, and activate the now-updated environment profile as usual,
- :ref:`execute the dependency install scripts <building-polaris-preparation>`.

As a last step, clean the build directory ``rm -rf $HOME/src/warpx/build_pm_*`` and rebuild WarpX.
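
For the GPU build, this amounts to the following (the ``cmake`` options mirror the :ref:`Compilation section <building-polaris-compilation>` above):

.. code-block:: bash

   rm -rf $HOME/src/warpx/build_pm_*

   cd $HOME/src/warpx
   cmake -S . -B build_pm_gpu -DWarpX_COMPUTE=CUDA -DWarpX_PSATD=ON -DWarpX_QED_TABLE_GEN=ON -DWarpX_DIMS="1;2;RZ;3"
   cmake --build build_pm_gpu -j 16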


.. _running-cpp-polaris:

Running
-------

.. tab-set::

   .. tab-item:: A100 (40GB) GPUs

      The batch script below can be used to run a WarpX simulation on multiple nodes (change ``<NODES>`` accordingly) on the supercomputer Polaris at ALCF.

      Replace descriptions between chevrons ``<>`` with relevant values; for instance, ``<input file>`` could be ``plasma_mirror_inputs``.
      Note that we run one MPI rank per GPU.

      .. literalinclude:: ../../../../Tools/machines/polaris-alcf/polaris_gpu.pbs
         :language: bash
         :caption: You can copy this file from ``$HOME/src/warpx/Tools/machines/polaris-alcf/polaris_gpu.pbs``.

      To run a simulation, copy the lines above to a file ``polaris_gpu.pbs`` and run

      .. code-block:: bash

         qsub polaris_gpu.pbs

      to submit the job.
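
      Once submitted, you can monitor your job with the standard PBS commands, e.g.:

      .. code-block:: bash

         qstat -u $USER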


   .. tab-item:: CPU Nodes

      *Under construction*