Compiling and Running

Compilation

Assuming you have a Fortran compiler (either ifort or gfortran) bound to an MPI distribution, compile the code by moving into the source directory, src, and building with the command make. You should see output like:

[~/rfEDIPIC] $ cd src
[~/rfEDIPIC/src] $ make
"Using Intel compiler"
mpif90 -O2 -c parMT19937.f90
mpif90 -O2 -c parModules.f90
...
mpif90 -O2 -c parSnapshot.f90
mpif90 -O2 -o rfedipic parMT19937.o parModules.o ... parSnapshot.o

This will create an executable file named rfedipic in your source directory.

MPI Installation

If you do not have MPI installed, e.g. if you are running on a personal machine, installation tutorials for different systems are available online. A modified excerpt (C references have been replaced with Fortran references) of one such tutorial by the EuroCC National Competence Centre Sweden (which can be found in its original form here) is included below:

Setting up your system

To follow these instructions, you will need access to compilers and MPI libraries. You can either use a cluster or set things up on your local computer; the instructions here cover installing on your own computer.

We recommend that participants create an isolated software environment on their computer and install a Fortran compiler along with MPI libraries inside that environment. Root-level system installation is also possible but will not be covered here due to the risk of various conflicts (or worse).

These instructions are based on installing compilers and MPI via the Conda package and environment manager, as it provides a convenient way to install binary packages in an isolated software environment.

Operating systems

The following steps are appropriate for Linux and macOS systems. For Windows, it is necessary to first install the Windows Subsystem for Linux (see these installation instructions for WSL). Installing compilers and MPI natively on Windows is also possible through Cygwin and the Microsoft Distribution of MPICH, but we recommend WSL, which is available for Windows 10 and later.

Installing conda

Begin by installing Miniconda (a condensed transcript of these steps follows the list):

  1. Download the 64-bit installer from here for your operating system:
    • for MacOS and Linux, choose the bash installer
    • on Windows, open a Linux-WSL terminal and type: wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh
      If wget is not a recognised command, first install it with sudo apt-get install wget (provide the password you chose when installing WSL).
  2. In a terminal, run the installer with bash Miniconda3-latest-<operating-system>-x86_64.sh (replace with the correct installer name)
  3. Agree to the terms and conditions, specify the installation directory (the default is usually fine), and answer "yes" to the question "Do you wish the installer to initialize Miniconda3 by running conda init?"
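
Taken together, steps 1-3 might look like the following on a Linux x86_64 machine (a sketch; the installer filename depends on your operating system and architecture):

$ wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh
$ bash Miniconda3-latest-Linux-x86_64.sh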

You now have Miniconda and conda installed. Make sure it works by typing which conda and checking that the path points to where you installed Miniconda (you may have to open a new terminal first).
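
For example (the exact path depends on where you installed Miniconda):

$ which conda
~/miniconda3/bin/conda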

We recommend that you create an isolated conda environment (this is good practice in software development):

$ conda create --name mpi
$ conda activate mpi

This creates a new empty environment and activates it; the name of the conda environment will typically be prepended to your shell prompt.

Installing a Fortran compiler and MPI

Now install compilers and the OpenMPI implementation of MPI:

(mpi) $ conda install -c conda-forge compilers
(mpi) $ conda install -c conda-forge openmpi

If you prefer MPICH over OpenMPI (or you experience problems with OpenMPI), you can instead do:

(mpi) $ conda install -c conda-forge compilers  
(mpi) $ conda install -c conda-forge mpich

Please also verify the installation. The following commands should print version numbers:

(mpi) $ mpif90 --version
(mpi) $ mpirun --version  

With OpenMPI you can also try the -showme flag to see what the mpif90 compiler wrapper does under the hood:

(mpi) $ mpif90 -showme

To compile an MPI code hello_mpi.f90 (a minimal example program is shown below), you should now be able to do:

(mpi) $ mpif90 -o hello_mpi.x hello_mpi.f90
(mpi) $ mpirun -n 2 ./hello_mpi.x
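
If you do not already have a test program, a minimal hello_mpi.f90 could look like the following (a sketch using the standard mpi module; any MPI hello-world program will do):

program hello_mpi
  use mpi                                          ! MPI Fortran bindings
  implicit none
  integer :: ierr, rank, nprocs

  call MPI_Init(ierr)                              ! start the MPI runtime
  call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)   ! rank of this process
  call MPI_Comm_size(MPI_COMM_WORLD, nprocs, ierr) ! total number of ranks
  print '(a,i0,a,i0)', 'Hello from rank ', rank, ' of ', nprocs
  call MPI_Finalize(ierr)                          ! shut down MPI
end program hello_mpi

Running it with mpirun -n 2 should print one greeting from each of the two ranks.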

Running the Code

After creating the executable, move back to the parent directory and copy the input files from the directory input_base into a new directory. Move into this directory, modify the input files as necessary (see the Input and Control page for details), and run the executable with mpirun (make sure to specify the number of cores):

[~/rfEDIPIC/src] $ cd ..
[~/rfEDIPIC] $ cp -r input_base example_run
[~/rfEDIPIC] $ cd example_run
... Modify files ...
[~/rfEDIPIC/example_run] $ mpirun -n <number_of_cores> ../src/rfedipic > output.txt
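
Since standard output is redirected to output.txt, you can monitor a run in progress with, for example:

[~/rfEDIPIC/example_run] $ tail -f output.txt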

Submitting jobs via Slurm scripts

If you are running jobs on a cluster (such as Princeton's Stellar), jobs must be submitted via the Slurm scheduling system (for documentation of Princeton's Slurm implementation, visit here). A sample script has been provided in the input_base directory and is shown below.

#!/bin/bash
## An example SLURM script for PU's Stellar cluster
#SBATCH -J Ex_Job                        ## Job name
#SBATCH --nodes=1                        ## Number of nodes requested
#SBATCH --ntasks-per-node=48             ## Number of cores requested per node (Stellar has 96 cores per node)
#SBATCH --time=03:00:00                  ## Maximum job runtime (HH:MM:SS)
#SBATCH --mail-type=all                  ## Request emails when the job begins and ends
#SBATCH [email protected]

exec=/path/to/executable/rfedipic

srun $exec > dedipic.${SLURM_JOBID}.out

After modifying the rfEDIPIC input files as desired, fill in the options on the #SBATCH lines of your Slurm script and set exec to the absolute path of the executable. Submit the job with:

[~/rfEDIPIC/example_run] $ sbatch <slurm_script>

where <slurm_script> is the name of your script.
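
For example, if your script is saved as ex_job.slurm (a hypothetical name), submission and a quick check of the queue might look like:

[~/rfEDIPIC/example_run] $ sbatch ex_job.slurm
Submitted batch job 1234567
[~/rfEDIPIC/example_run] $ squeue -u <your_username>

The job ID shown is illustrative; sbatch reports whatever ID the scheduler assigns.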
