@@ -13,29 +13,29 @@ Introduction
 
 If you are new to this system, **please see the following resources**:
 
-* `Crusher user guide <https://docs.olcf.ornl.gov/systems/crusher_quick_start_guide.html>`_
-* Batch system: `Slurm <https://docs.olcf.ornl.gov/systems/crusher_quick_start_guide.html#running-jobs>`_
-* `Production directories <https://docs.olcf.ornl.gov/data/index.html#data-storage-and-transfers>`_:
+* `Crusher user guide <https://docs.olcf.ornl.gov/systems/frontier_user_guide.html>`_
+* Batch system: `Slurm <https://docs.olcf.ornl.gov/systems/frontier_user_guide.html#running-jobs>`_
+* `Production directories <https://docs.olcf.ornl.gov/systems/frontier_user_guide.html#data-and-storage>`_:
 
   * ``$PROJWORK/$proj/``: shared with all members of a project, purged every 90 days (recommended)
-  * ``$MEMBERWORK/$proj/``: single user, purged every 90 days (usually smaller quota)
-  * ``$WORLDWORK/$proj/``: shared with all users, purged every 90 days
+  * ``$MEMBERWORK/$proj/``: single user, purged every 90 days (usually smaller; 50TB default quota)
+  * ``$WORLDWORK/$proj/``: shared with all users, purged every 90 days (50TB default quota)
 * Note that the ``$HOME`` directory is mounted as read-only on compute nodes.
   That means you cannot run in your ``$HOME``.
+  Its default quota is 50GB.
+
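+For example, a minimal sketch of staging a run in the project-wide work area instead of ``$HOME`` (the project ID ``abc123`` is a placeholder of this example):
+
+.. code-block:: bash
+
+   proj=abc123                      # hypothetical project ID; use your own
+   mkdir -p $PROJWORK/$proj/my_run  # shared with the project, purged every 90 days
+   cd $PROJWORK/$proj/my_run        # run and write output here, not in $HOME
+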
+Note: the Orion Lustre filesystem on Frontier and the older Alpine GPFS filesystem on Summit are not mounted on each other's machines.
+Use `Globus <https://www.globus.org>`__ to transfer data between them if needed.
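+If you prefer the command line over the web app, the `Globus CLI <https://docs.globus.org/cli/>`__ can drive such transfers; a sketch with placeholder endpoint UUIDs (both IDs below are assumptions, look up the actual OLCF collections):
+
+.. code-block:: bash
+
+   SRC=11111111-1111-1111-1111-111111111111   # placeholder: Alpine (Summit) endpoint
+   DST=22222222-2222-2222-2222-222222222222   # placeholder: Orion (Frontier) endpoint
+   globus transfer --recursive $SRC:/path/on/alpine $DST:/path/on/orion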
 
 
 Installation
 ------------
 
-Use the following commands to download the WarpX source code and switch to the correct branch.
-**You have to do this on Summit/OLCF Home/etc. since Frontier cannot connect directly to the internet**:
+Use the following commands to download the WarpX source code and switch to the correct branch:
 
 .. code-block:: bash
 
    git clone https://github.com/ECP-WarpX/WarpX.git $HOME/src/warpx
-   git clone https://github.com/AMReX-Codes/amrex.git $HOME/src/amrex
-   git clone https://github.com/ECP-WarpX/picsar.git $HOME/src/picsar
-   git clone -b 0.14.5 https://github.com/openPMD/openPMD-api.git $HOME/src/openPMD-api
 
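+The branch to build from is not pinned above; as a sketch, assuming the upstream ``development`` branch is the one wanted:
+
+.. code-block:: bash
+
+   cd $HOME/src/warpx
+   git checkout development   # assumed branch; substitute the correct one
+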
 To enable HDF5, work around the broken ``HDF5_VERSION`` variable (empty) in the Cray PE by commenting out the following lines in ``$HOME/src/openPMD-api/CMakeLists.txt``:
 https://github.com/openPMD/openPMD-api/blob/0.14.5/CMakeLists.txt#L216-L220
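+
+One non-interactive way to comment those lines out, assuming GNU sed and that the line numbers still match the ``0.14.5`` tag:
+
+.. code-block:: bash
+
+   # prefix lines 216-220 of openPMD-api's CMakeLists.txt with '#'
+   sed -i '216,220 s/^/#/' $HOME/src/openPMD-api/CMakeLists.txt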
@@ -114,7 +114,7 @@ Known System Issues
 .. warning::
 
    May 16th, 2022 (OLCFHELP-6888):
-   There is a caching bug in Libfrabric that causes WarpX simulations to occasionally hang on Frontier on more than 1 node.
+   There is a caching bug in Libfabric that causes WarpX simulations to occasionally hang on Frontier on more than 1 node.
 
    As a work-around, please export the following environment variable in your job scripts until the issue is fixed:
 
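+   For illustration, such an export might look like the following in a job script (the variable name ``FI_MR_CACHE_MAX_COUNT`` is an assumption of this sketch, not confirmed by the text above):
+
+   .. code-block:: bash
+
+      # assumed work-around: disable Libfabric's memory-registration cache
+      export FI_MR_CACHE_MAX_COUNT=0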