Commit 1abaf22 (1 parent aab8ed1)
Update pleiades.md

docs/installation/hpc/pleiades.md
Lines changed: 97 additions & 0 deletions
@@ -235,3 +235,100 @@ export CXXFLAGS="-g -Ofast -xCORE-AVX512,CORE-AVX2 -xAVX -std=c++11"
--with-m1qn3-dir="${ISSM_DIR}/externalpackages/m1qn3/install" \
--with-semic-dir="${ISSM_DIR}/externalpackages/semic/install"
```

### Installing ISSM with CoDiPack
For an installation of ISSM with CoDiPack, the following external packages are required,
```sh
autotools install-linux.sh
codipack
medipack
```

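Each of these packages lives under `$ISSM_DIR/externalpackages` and is built with its own install script. A minimal sketch of the build order is below; the `install.sh` script names for codipack and medipack, and the sourcing of `$ISSM_DIR/etc/environment.sh` after each step, are assumptions to adapt to your checkout,
```sh
# Build order sketch: autotools first, then CoDiPack and MeDiPack.
# The install script names for codipack/medipack are assumptions;
# use whichever install script ships in each package directory.
cd $ISSM_DIR/externalpackages/autotools && ./install-linux.sh
source $ISSM_DIR/etc/environment.sh

cd $ISSM_DIR/externalpackages/codipack && ./install.sh
source $ISSM_DIR/etc/environment.sh

cd $ISSM_DIR/externalpackages/medipack && ./install.sh
source $ISSM_DIR/etc/environment.sh
```
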
Before configuring ISSM, run,
```sh
cd $ISSM_DIR
autoreconf -ivf
```

Then use the following configuration script (adapting it as needed),
```sh
export CFLAGS="-g -Ofast -wd2196"
export CXXFLAGS="-g -Ofast -xCORE-AVX512,CORE-AVX2 -xAVX -std=c++11"

./configure \
--prefix="${ISSM_DIR}" \
--enable-development \
--enable-standalone-libraries \
--with-wrappers=no \
--enable-tape-alloc \
--without-kriging \
--without-kml \
--without-Sealevelchange \
--without-Love \
--with-fortran-lib="-L${COMP_INTEL_ROOT}/compiler/lib/intel64_lin -lifcore -lifport -lgfortran" \
--with-mkl-libflags="-L${COMP_INTEL_ROOT}/mkl/lib/intel64 -lmkl_intel_lp64 -lmkl_sequential -lmkl_core -lpthread -lm" \
--with-mpi-include="${MPI_ROOT}/include" \
--with-mpi-libflags="-L${MPI_ROOT}/lib -lmpi" \
--with-metis-dir="${PETSC_DIR}" \
--with-parmetis-dir="${PETSC_DIR}" \
--with-mumps-dir="${PETSC_DIR}" \
--with-codipack-lib="${ISSM_DIR}/externalpackages/codipack/install" \
--with-medipack-lib="${ISSM_DIR}/externalpackages/medipack/install"
```

{: .highlight-title }
> NOTE
>
> You will get a lot of warnings while compiling (e.g. *warning #2196: routine is both "inline" and "noinline"*), which can be ignored.

## pfe_settings
You will have to add a file titled `pfe_settings.m` (or `pfe_settings.py`) in `$ISSM_DIR/src/m` on the machine on which you do model setup and results analysis. This file sets up your personal settings so that this machine can send solution requests to Pleiades and retrieve results. For example, this file might include,
```
cluster.login='mmorligh';
cluster.queue='devel';
cluster.codepath='/nobackup/mmorligh/ISSM/bin';
cluster.executionpath='/nobackup/mmorligh/execution';
cluster.grouplist='s5692';
cluster.port=1099;
cluster.modules={'mpi-hpe/mpt', 'comp-intel/2020.4.304', 'petsc/3.17.3_intel_mpt_py'};
```

- `cluster.login` should be set to your NAS username
- `cluster.codepath` should be set to the `bin` directory of the installation of ISSM on Pleiades that you wish to use
- `cluster.executionpath` should be set to where the job should be run on Pleiades (it should not be your home directory, due to disk usage quotas)
- `cluster.grouplist` should be set to the result of running `groups <USERNAME>` on Pleiades, where `<USERNAME>` is your NAS username (see the example after this list)
- `cluster.modules` should include the same modules that we set above in the 'Environment' step for compiling ISSM

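For example, with the username and group from the sample settings above (both are placeholders for your own values), the standard `groups` command on a Pleiades front end would report something like,
```sh
# Run on a Pleiades front end; 'mmorligh' is the placeholder username
# from the example above, and the reported group (here s5692) is what
# goes into cluster.grouplist.
groups mmorligh
# mmorligh : s5692
```
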
The above settings will be found automatically by MATLAB (or Python) when setting the cluster class for your model, i.e.,
```
md.cluster=pfe();
```

## Running Jobs on Pleiades
On Pleiades, the more nodes and time requested, the longer your job will have to wait in the queue, so choose your settings accordingly. For example,
```
md.cluster=pfe('numnodes',1,'time',28,'processor','bro','queue','devel');
md.cluster.time=10;
```
will request one Broadwell node with a maximum job time of 10 minutes (the second line overrides the time requested in the first). If the run lasts more than 10 minutes, it will be killed and you will not be able to retrieve your results.

For more information on the available processor types, please refer to the NAS HECC knowledge base article <a href="https://www.nas.nasa.gov/hecc/support/kb/pleiades-configuration-details_77.html" target="_blank">'Pleiades Configuration Details'</a>.

If you want to check the status of your job and the queue that you are using, run,
```
qstat -u <USERNAME>
```
You can delete your job manually by typing,
```
qdel <JOB_ID>
```
where `<JOB_ID>` is the job ID reported on your local machine when you submitted your solution request. Also reported is the directory where you can find log files associated with the run (i.e. `<JOB_ID>.outlog` and `<JOB_ID>.errlog`). The `outlog` contains the information that would normally be printed to the console if you were running the job on your local machine. Likewise, the `errlog` contains any information printed in case of error.

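Once the logs have been retrieved to your local machine, a quick way to inspect them from a terminal is sketched below; the path assumes the default local execution directory shown at the end of this section, with `<EXEC_DIR>` and `<JOB_ID>` being the values reported at submission,
```sh
# <EXEC_DIR> and <JOB_ID> are reported when the solution request is submitted
cd ${ISSM_DIR}/execution/<EXEC_DIR>
tail -n 50 <JOB_ID>.outlog   # normal console output of the run
tail -n 50 <JOB_ID>.errlog   # error messages, if any
```
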
If you would like to load results from the cluster manually (for example, if you encountered an error due to network disconnection), run,
```
md=loadresultsfromcluster(md,'runtimename','<EXEC_DIR>');
```
where `<EXEC_DIR>` is the parent directory of the job on your local machine, i.e. the directory containing the job's lock file, for example,
```
${ISSM_DIR}/execution/<EXEC_DIR>/<JOB_ID>.lock
```
