Running on HPC Systems

Ritvik Rao edited this page Apr 2, 2025 · 10 revisions

OLCF Summit

Load modules

module load cmake

Get an interactive job

bsub -W 2:00 -nnodes 1 -P [your project ID] -q debug -alloc_flags "smt1" -Is /bin/bash

Build Charm++ (with PAMILRTS machine layer)

./build charm++ pamilrts-linux-ppc64le smp -j --with-production

Run a Charm++ application

jsrun -n2 -a1 -c21 -K1 -r2 ./hello +ppn 20 +pemap L0-19 +commap L20
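For non-interactive runs, the same steps can be wrapped in an LSF batch script. A minimal sketch under the same settings as the interactive example above (the project ID stays a placeholder, and the job name, output file, and executable name `hello` are assumptions):

```shell
#!/bin/bash
# Sketch of an LSF batch script for Summit; fill in your project ID.
#BSUB -P [your project ID]
#BSUB -W 2:00
#BSUB -nnodes 1
#BSUB -alloc_flags "smt1"
#BSUB -J hello
#BSUB -o hello.%J.out

module load cmake

# Same mapping as the interactive example: 2 resource sets per node,
# 21 cores each, with 20 worker PEs and 1 communication thread per process.
jsrun -n2 -a1 -c21 -K1 -r2 ./hello +ppn 20 +pemap L0-19 +commap L20
```

Submit it with `bsub script.lsf`.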

SDSC Expanse

Load modules

module load sdsc gcc/10.2.0 openmpi/4.0.4 cmake hwloc

Get an interactive job

srun --partition=debug --pty --account=[your account #] --nodes=1 --ntasks-per-node=4 --cpus-per-task=4 --mem=32G -t 00:30:00 --wait=0 --export=ALL /bin/bash

Build Charm++ (with UCX machine layer)

./build charm++ ucx-linux-x86_64 smp -j --with-production

Run a Charm++ application

srun --cpu_bind=cores --mpi=pmi2 ./hello +ppn 3 +pemap L0-2,4-6,8-10,12-14 +commap L3,7,11,15
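The `+pemap`/`+commap` strings above follow a regular pattern: each of the 4 processes owns 4 consecutive logical cores, with the first 3 serving as worker PEs and the last reserved for the communication thread. A small bash sketch that generates strings of this shape (the process and core counts are hard-coded to match this example, not queried from the system):

```shell
# Build +pemap/+commap strings for nprocs processes with $cores cores each;
# the last core in each process's block is left for the comm thread.
nprocs=4
cores=4
pemap=""
commap=""
for ((p = 0; p < nprocs; p++)); do
  base=$((p * cores))
  pemap+="${pemap:+,}${base}-$((base + cores - 2))"
  commap+="${commap:+,}$((base + cores - 1))"
done
echo "+pemap L${pemap} +commap L${commap}"
# Prints: +pemap L0-2,4-6,8-10,12-14 +commap L3,7,11,15
```

Adjusting `nprocs` and `cores` reproduces the mapping for other node layouts.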

NCSA Delta

Load modules

module load libfabric/1.15.2.0 cray-pmi/6.1.13

Get an interactive job (2 nodes, 128 PEs total, non-SMP, for 1 hour, on the PPL account)

salloc --partition=cpu-interactive --nodes=2 --ntasks-per-node=64 --cpus-per-task=1 --account=mzu-delta-cpu --time=01:00:00

Get an interactive job (2 nodes, 112 worker PEs, PPN=7 plus one communication thread per process, SMP, for 1 hour, on the PPL account)

salloc --partition=cpu-interactive --nodes=2 --ntasks-per-node=8 --cpus-per-task=8 --account=mzu-delta-cpu --time=01:00:00

Build Charm++ (with OFI layer, non-SMP)

./buildold charm++ ofi-linux-x86_64 cxi slurmpmi2cray --with-production -j16

Build Charm++ (with OFI layer, SMP)

./buildold charm++ ofi-linux-x86_64 cxi smp slurmpmi2cray --with-production -j16
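The Delta section does not show a launch command. Assuming the allocations above and following the Expanse pattern, a run might look like the following sketch (the executable name `hello` and the binding flag are assumptions, not site-documented values):

```shell
# Inside the SMP salloc session (16 processes, 8 cores each):
# 7 worker PEs per process, leaving 1 core for the communication thread.
srun --cpu-bind=cores ./hello +ppn 7

# Inside the non-SMP salloc session (128 PEs total, one PE per task):
srun --cpu-bind=cores ./hello
```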