
Commit 04023d6

Docs: update LXPLUS (CERN) documentation (BLAST-WarpX#6264)
This PR fixes the LXPLUS installation guide. Hardly any software is installed on LXPLUS itself; instead, software is provided via the LCG software stack. Thus, rather than relying on Spack, one should load an appropriate LCG view, which provides all of WarpX's dependencies. As a minor caveat, the parallel HDF5 version must be forced over the serial one, since both are available in the LCG view. This drastically simplifies the build procedure on LXPLUS.
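Condensed from the diffs below, the updated procedure boils down to a few shell commands (a sketch assembled from this commit's changes; the LCG view and HDF5 paths are the ones committed here):

    # set the work directory and fetch the WarpX sources
    export WORK=/afs/cern.ch/work/${USER:0:1}/$USER/
    git clone https://github.com/BLAST-WarpX/warpx.git $WORK/warpx

    # load the LCG view that provides all dependencies
    source /cvmfs/sft.cern.ch/lcg/views/LCG_108_cuda/x86_64-el9-gcc13-opt/setup.sh

    # force the parallel HDF5 over the serial one at build time
    export H5_MPI=/cvmfs/sft.cern.ch/lcg/releases/hdf5_mpi/1.14.6-967d3/x86_64-el9-gcc13-opt

    # configure and build (add -DWarpX_COMPUTE=CUDA for the GPU build)
    cd $WORK/warpx
    cmake -S . -B build -DWarpX_DIMS="1;2;RZ;3" -DHDF5_ROOT="$H5_MPI" -DHDF5_USE_STATIC_LIBRARIES=ON
    cmake --build build -j 6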
1 parent e22371f · commit 04023d6

File tree: 3 files changed, +13 −146 lines changed


Docs/source/install/hpc/lxplus.rst

Lines changed: 9 additions & 65 deletions
@@ -23,8 +23,6 @@ Through LXPLUS we have access to CPU and GPU nodes (the latter equipped with NVI
 
 Installation
 ------------
-Only very little software is pre-installed on LXPLUS so we show how to install from scratch all the dependencies using `Spack <https://spack.io>`__.
-
 For size reasons it is not advisable to install WarpX in the ``$HOME`` directory, while it should be installed in the "work directory". For this purpose we set an environment variable with the path to the "work directory"
 
 .. code-block:: bash
@@ -40,98 +38,44 @@ We clone WarpX in ``$WORK``:
 
 Installation profile file
 ^^^^^^^^^^^^^^^^^^^^^^^^^
-The easiest way to install the dependencies is to use the pre-prepared ``warpx.profile`` as follows:
+For convenience, setting all variables, cloning WarpX, and loading the LCG view required for the dependencies are all handled by the profile file ``warpx.profile``, which is used as follows:
 
 .. code-block:: bash
 
    cp $WORK/warpx/Tools/machines/lxplus-cern/lxplus_warpx.profile.example $WORK/lxplus_warpx.profile
    source $WORK/lxplus_warpx.profile
 
-When doing this one can directly skip to the :ref:`Building WarpX <building-lxplus-warpx>` section.
-
 To have the environment activated at every login it is then possible to add the following lines to the ``.bashrc``
 
 .. code-block:: bash
 
    export WORK=/afs/cern.ch/work/${USER:0:1}/$USER/
    source $WORK/lxplus_warpx.profile
 
-GCC
-^^^
-The pre-installed GNU compiler is outdated so we need a more recent compiler. Here we use the gcc 11.2.0 from the LCG project, but other options are possible.
-
-We activate it by doing
-
-.. code-block:: bash
-
-   source /cvmfs/sft.cern.ch/lcg/releases/gcc/11.2.0-ad950/x86_64-centos7/setup.sh
-
-In order to avoid using different compilers this line could be added directly into the ``$HOME/.bashrc`` file.
-
-Spack
-^^^^^
-We download and activate Spack in ``$WORK``:
-
-.. code-block:: bash
-
-   cd $WORK
-   git clone -c feature.manyFiles=true https://github.com/spack/spack.git
-   source spack/share/spack/setup-env.sh
-
-Now we add our gcc 11.2.0 compiler to spack:
-
-.. code-block:: bash
-
-   spack compiler find /cvmfs/sft.cern.ch/lcg/releases/gcc/11.2.0-ad950/x86_64-centos7/bin
-
-Installing the Dependencies
-^^^^^^^^^^^^^^^^^^^^^^^^^^^
-
-To install the dependencies we create a virtual environment, which we call ``warpx-lxplus``:
-
-.. code-block:: bash
-
-   spack env create warpx-lxplus $WORK/WarpX/Tools/machines/lxplus-cern/spack.yaml
-   spack env activate warpx-lxplus
-   spack install
-
-If the GPU support or the Python bindings are not needed, it's possible to skip the installation by respectively setting
-the following environment variables export ``SPACK_STACK_USE_PYTHON=0`` and ``export SPACK_STACK_USE_CUDA = 0`` before
-running the previous commands.
-
-After the installation is done once, all we need to do in future sessions is just ``activate`` the environment again:
-
-.. code-block:: bash
-
-   spack env activate warpx-lxplus
+Building WarpX
+^^^^^^^^^^^^^^
 
-The environment ``warpx-lxplus`` (or ``-cuda`` or ``-cuda-py``) must be reactivated everytime that we log in so it could
-be a good idea to add the following lines to the ``.bashrc``:
+All dependencies are available via the LCG software stack. We choose the CUDA software stack so that we can compile both with and without CUDA without changing the stack. As both a serial and a parallel HDF5 installation are available, one has to make sure that WarpX picks up the parallel one both at build and at run time. Therefore, we load the software stack and export the path to the parallel HDF5:
 
 .. code-block:: bash
 
-   source $WORK/spack/share/spack/setup-env.sh
-   spack env activate -d warpx-lxplus
-   cd $HOME
+   source /cvmfs/sft.cern.ch/lcg/views/LCG_108_cuda/x86_64-el9-gcc13-opt/setup.sh
+   export H5_MPI=/cvmfs/sft.cern.ch/lcg/releases/hdf5_mpi/1.14.6-967d3/x86_64-el9-gcc13-opt
 
-.. _building-lxplus-warpx:
 
-Building WarpX
-^^^^^^^^^^^^^^
 
-We prepare and load the Spack software environment as above.
-Then we build WarpX:
+Then we build WarpX, enforcing a static build of HDF5 to prevent serial HDF5 installations in the path from causing run-time mix-ups:
 
 .. code-block:: bash
 
-   cmake -S . -B build -DWarpX_DIMS="1;2;RZ;3"
+   cmake -S . -B build -DWarpX_DIMS="1;2;RZ;3" -DHDF5_ROOT="$H5_MPI" -DHDF5_USE_STATIC_LIBRARIES=ON
    cmake --build build -j 6
 
 Or if we need to compile with CUDA:
 
 .. code-block:: bash
 
-   cmake -S . -B build -DWarpX_COMPUTE=CUDA -DWarpX_DIMS="1;2;RZ;3"
+   cmake -S . -B build -DWarpX_COMPUTE=CUDA -DWarpX_DIMS="1;2;RZ;3" -DHDF5_ROOT="$H5_MPI" -DHDF5_USE_STATIC_LIBRARIES=ON
    cmake --build build -j 6
 
 **That's it!**
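Because the LCG view ships both a serial and a parallel HDF5, it can be worth verifying which one was actually picked up. A minimal check, assuming the view exposes the ``h5pcc`` wrapper of the parallel build on the ``PATH`` (a sketch; exact output formatting may vary between HDF5 releases):

    # the parallel HDF5 compiler wrapper should resolve inside the view
    which h5pcc

    # the build configuration summary should report "Parallel HDF5: yes"
    h5pcc -showconfig | grep -i "parallel hdf5"

    # after configuring, inspect which HDF5 CMake actually found
    grep -i hdf5 build/CMakeCache.txt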

Tools/machines/lxplus-cern/lxplus_warpx.profile.example

Lines changed: 4 additions & 22 deletions
@@ -7,29 +7,11 @@ then
     git clone https://github.com/BLAST-WarpX/warpx.git $WORK/warpx
 fi
 
-# activate the compiler
-source /cvmfs/sft.cern.ch/lcg/releases/gcc/11.2.0-ad950/x86_64-centos7/setup.sh
+# load the LCG software stack that has all the dependencies
+source /cvmfs/sft.cern.ch/lcg/views/LCG_108_cuda/x86_64-el9-gcc13-opt/setup.sh
 
-# download and activate spack
-if [ ! -d "$WORK/spack" ]
-then
-    git clone -c feature.manyFiles=true https://github.com/spack/spack.git $WORK/spack
-    source $WORK/spack/share/spack/setup-env.sh
-
-    # add the compiler to spack
-    spack compiler find /cvmfs/sft.cern.ch/lcg/releases/gcc/11.2.0-ad950/x86_64-centos7/bin
-
-    # create and activate the spack environment
-    export SPACK_STACK_USE_PYTHON=1
-    export SPACK_STACK_USE_CUDA=1
-    spack env create warpx-lxplus-cuda-py $WORK/warpx/Tools/machines/lxplus-cern/spack.yaml
-    spack env activate warpx-lxplus-cuda-py
-    spack install
-else
-    # activate the spack environment
-    source $WORK/spack/share/spack/setup-env.sh
-    spack env activate warpx-lxplus-cuda-py
-fi
+# set variable for the parallel HDF5 to avoid picking the serial one at build time
+H5_MPI=/cvmfs/sft.cern.ch/lcg/releases/hdf5_mpi/1.14.6-967d3/x86_64-el9-gcc13-opt
 
 export AMREX_CUDA_ARCH="7.0;7.5"
 

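As the updated guide above shows, this profile is meant to be copied once and then sourced at each login (optionally from ``.bashrc``):

    cp $WORK/warpx/Tools/machines/lxplus-cern/lxplus_warpx.profile.example $WORK/lxplus_warpx.profile
    source $WORK/lxplus_warpx.profile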
Tools/machines/lxplus-cern/spack.yaml

Lines changed: 0 additions & 59 deletions
This file was deleted.
