11 changes: 5 additions & 6 deletions conf/imperial.config
@@ -8,15 +8,15 @@ params {

// Resources
max_memory = 920.GB
max_cpus = 256
max_time = 1000.h
max_cpus = 128
max_time = 72.h
}

process {
resourceLimits = [
memory: 920.GB,
cpus: 256,
time: 1000.h
cpus: 128,
time: 72.h
]
}

@@ -81,7 +81,6 @@ profiles {
? '--nv --env CUDA_VISIBLE_DEVICES=$CUDA_VISIBLE_DEVICES'
: (workflow.containerEngine == "docker" ? '--gpus all' : null)
}
beforeScript = 'module load tools/prod'
}
}
}
@@ -109,5 +108,5 @@ executor {
singularity {
enabled = true
autoMounts = true
runOptions = "-B /rds/,/rds/general/user/${USER}/ephemeral/tmp/:/tmp,/rds/general/user/${USER}/ephemeral/tmp/:/var/tmp"
runOptions = "-B /rds/,${TMPDIR}:/tmp,${TMPDIR}:/var/tmp"
}
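The updated `runOptions` bind `/rds/` and the job's `${TMPDIR}` into the container. Singularity aborts at startup if a bind source does not exist, so a pre-flight check like the sketch below can catch a missing or unset `TMPDIR` before the pipeline launches. This snippet is an illustration, not part of the config; on Imperial's cluster PBS sets `TMPDIR` on compute nodes, and the `/tmp` fallback here is only for testing elsewhere.

```shell
# Hypothetical pre-flight check for the Singularity bind mounts (assumption:
# PBS exports TMPDIR on compute nodes; /tmp is a fallback for local testing).
TMPDIR="${TMPDIR:-/tmp}"
mkdir -p "$TMPDIR"
# /rds/ is also a bind source in runOptions, but it only exists on the cluster,
# so this check covers the TMPDIR binds (/tmp and /var/tmp inside the container).
if [ -d "$TMPDIR" ]; then
  echo "bind sources OK"
else
  echo "missing bind source: $TMPDIR" >&2
  exit 1
fi
```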
18 changes: 10 additions & 8 deletions docs/imperial.md
@@ -1,18 +1,20 @@
# nf-core/configs: Imperial CX1 HPC Configuration
# nf-core/configs: Imperial CX3 HPC Configuration

All nf-core pipelines have been successfully configured for use on the CX1 cluster at Imperial College London HPC.
All nf-core pipelines have been successfully configured for use on the CX3 cluster at Imperial College London HPC.

To use, run the pipeline with `-profile imperial,standard`. This will download and launch the [`imperial.config`](../conf/imperial.config) which has been pre-configured with a setup suitable for the CX1 cluster. Using this profile, a docker image containing all of the required software will be downloaded, and converted to a Singularity image before execution of the pipeline.
To use, run the pipeline with `-profile imperial,standard`. This will download and launch the [`imperial.config`](../conf/imperial.config) which has been pre-configured with a setup suitable for the CX3 cluster. Using this profile, a docker image containing all of the required software will be downloaded, and converted to a Singularity image before execution of the pipeline.

Before running the pipeline you will need to load Nextflow using the environment module system on the CX1 cluster. You can do this by issuing the commands below:
Before running the pipeline you will need to install Nextflow into a conda environment. The instructions below are taken from the [`RCS guidance on using conda`](https://icl-rcs-user-guide.readthedocs.io/en/latest/hpc/applications/guides/conda/).

```bash
## Load Nextflow and Singularity environment modules
module load anaconda3/personal
conda install -c bioconda nextflow
module load miniforge/3
miniforge-setup
eval "$(~/miniforge3/bin/conda shell.bash hook)"
conda create -n nextflow -c bioconda nextflow
```

> NB: You will need an account to use the HPC cluster CX1 in order to run the pipeline. If in doubt contact IT.
> NB: Nextflow will need to submit the jobs via the job scheduler to the HPC cluster and as such the commands above will have to be executed on one of the login nodes. If in doubt contact IT.
> NB: You will need an Imperial account to use any HPC cluster managed by the RCS team. If in doubt, contact the [`RCS team`](https://icl-rcs-user-guide.readthedocs.io/en/latest/support/).
> NB: Nextflow needs to submit jobs to the HPC cluster via the job scheduler, so the commands above must be executed on one of the login nodes.
> NB: To submit jobs to the Imperial College MEDBIO cluster, use `-profile imperial,medbio` instead.
> NB: You will need a restricted access account to use the HPC cluster MEDBIO.
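Putting the steps above together, a launch from a login node might look like the following sketch. The pipeline name `nf-core/rnaseq` and the `--outdir` value are placeholders for illustration, not part of this configuration; only the `-profile imperial,standard` (or `imperial,medbio`) selection comes from this doc.

```shell
# Hypothetical launch from a CX3 login node (pipeline name and outdir are
# example values; adjust for your workload). These commands require the
# conda environment created in the instructions above.
eval "$(~/miniforge3/bin/conda shell.bash hook)"
conda activate nextflow
nextflow run nf-core/rnaseq -profile imperial,standard --outdir results
```

Nextflow then submits the individual tasks to the scheduler itself, so the launch command stays on the login node while the work runs on compute nodes.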