
Commit fe5adab

updating cactus configs to give info about cannon config rather than plugin
1 parent 6929ec2 · commit fe5adab


6 files changed · +46 −55 lines changed

docs/resources/tutorials/add-outgroup-to-whole-genome-alignment-cactus.md

Lines changed: 9 additions & 11 deletions
@@ -65,17 +65,11 @@ If the help menu displays, you already have Singularity installed. If not, you w
 mamba install conda-forge::singularity
 ```
 
-!!! tip "Cannon cluster Snakemake plugin"
+!!! tip "Cannon cluster Snakemake config"
 
-If you are on the Harvard Cannon cluster, instead of the generic snakemake-executor-plugin-slurm, you can use our specific plugin for the Cannon cluster: [snakemake-executor-plugin-cannon :octicons-link-external-24:](https://snakemake.github.io/snakemake-plugin-catalog/plugins/executor/cannon.html){:target="_blank"}. This facilitates *automatic partition selection* based on requested resources. Install this in your environment with:
+If you are on the Harvard Cannon cluster, you can use our configuration file to facilitate automatic partition selection! Just leave the `partition:` entries blank in your run's config file. See [here](../snakemake-cannon-config.md) for more usage information.
 
-```bash
-mamba install bioconda::snakemake-executor-plugin-cannon
-```
-
-Then, when running the workflow, specify the cannon executor with `-e cannon` instead of `-e slurm`.
-
-If you are not on the Harvard Cannon cluster, stick with the generic SLURM plugin. You will just need to directly specify the partitions for each rule in the config file ([see below](#specifying-resources-for-each-rule)).
+If you are not on the Harvard Cannon cluster, you can still directly specify the partitions for each rule in the config file ([see below](#specifying-resources-for-each-rule)), or you may take it upon yourself to [make a partition configuration :octicons-link-external-24:](https://snakemake.github.io/snakemake-plugin-catalog/plugins/executor/slurm.html#automatic-partition-selection){:target="_blank"} file for your own cluster.
 
 ### Downloading the cactus-snakemake pipeline
 
@@ -261,7 +255,7 @@ rule_resources:
 * **Allocate the proper partitions based on `use_gpu`.** If you want to use the GPU version of cactus (*i.e.* you have set `use_gpu: True` in the config file), the partition for the rule **blast** must be GPU enabled. If not, the pipeline will fail to run.
 * The `blast: gpus:` option will be ignored if `use_gpu: False` is set.
 * **mem is in MB** and **time is in minutes**.
-* **If using the [snakemake-executor-plugin-cannon :octicons-link-external-24:](https://snakemake.github.io/snakemake-plugin-catalog/plugins/executor/cannon.html){:target="_blank"} specifically for the Harvard Cannon cluster, you can leave the `partition:` fields blank and one will be selected automatically based on the other resources requested!**
+* **If using the [Snakemake partition config file specifically for the Harvard Cannon cluster :material-arrow-top-right:](../snakemake-cannon-config.md){:target="_blank"}, you can leave the `partition:` fields blank and one will be selected automatically based on the other resources requested!**
 
 You will have to determine the proper resource usage for your dataset. Generally, the larger the genomes, the more time and memory each job will need, and the more you will benefit from providing more CPUs and GPUs.
 
@@ -287,7 +281,7 @@ snakemake -j <# of jobs to submit simultaneously> -e slurm -s </path/to/cactus_a
 | ------------------------------------------------- | ----------- |
 | `snakemake` | The call to the snakemake workflow program to execute the workflow. |
 | `-j <# of jobs to submit simultaneously>` | The maximum number of jobs that will be submitted to your SLURM cluster at one time. |
-| `-e slurm` | Specify to use the SLURM executor plugin, or use `-e cannon` if using the Cannon specific plugin. See: [Getting started](#getting-started) |
+| `-e slurm` | Specify to use the SLURM executor plugin. |
 | `-s </path/to/cactus_add_outgroup.smk>` | The path to the workflow file. |
 | `--configfile <path/to/your/snakmake-config.yml>` | The path to your config file. See: [Preparing the Snakemake config file](#preparing-the-snakemake-config-file). |
 | `--dryrun` | Do not execute anything, just display what would be done. |
@@ -296,6 +290,10 @@ snakemake -j <# of jobs to submit simultaneously> -e slurm -s </path/to/cactus_a
 
 However even during a `--dryrun` some pre-processing steps will be run, including creation of the output directory if it doesn't exist, downloading the Cactus Singularity image if `cactus_path: download` is set in the config file, and creation of some small input files. These should all be relatively fast and not resource intensive tasks.
 
+!!! tip "Use the Cannon config file for automatic partition selection"
+
+Recall, if you are on the Harvard Cannon cluster, you can specify the [partition config :material-arrow-top-right:](../snakemake-cannon-config.md){:target="_blank"} file with `--slurm-partition-config` option and partitions will be selected automatically!
+
 If this completes successfully, you should see a bunch of blue, yellow, and green text on the screen, ending with something like this (the number of jobs and Reasons: may differ for your project):
 
 ```bash

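To illustrate the change this diff makes to the add-outgroup tutorial: with the Cannon partition config, the per-rule entries in the tutorial's `rule_resources:` block can leave the partition blank. The sketch below just prints a hypothetical entry for the **blast** rule. The key names (`partition`, `gpus`, `mem`, `time`) come from the tutorial text quoted above; the rule layout, nesting, and numbers are placeholder assumptions, not values from the pipeline.

```bash
# Hypothetical rule_resources fragment with the partition left blank so it can
# be selected automatically on Cannon. Numbers are placeholders only.
cat <<'EOF'
rule_resources:
  blast:
    partition:      # leave blank on Cannon; otherwise name a (GPU-enabled, if use_gpu: True) partition
    gpus: 1         # ignored when use_gpu: False
    mem: 64000      # memory in MB
    time: 480       # time in minutes
EOF
```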
docs/resources/tutorials/add-to-whole-genome-alignment-cactus.md

Lines changed: 9 additions & 11 deletions
@@ -65,17 +65,11 @@ If the help menu displays, you already have Singularity installed. If not, you w
 mamba install conda-forge::singularity
 ```
 
-!!! tip "Cannon cluster Snakemake plugin"
+!!! tip "Cannon cluster Snakemake config"
 
-If you are on the Harvard Cannon cluster, instead of the generic snakemake-executor-plugin-slurm, you can use our specific plugin for the Cannon cluster: [snakemake-executor-plugin-cannon :octicons-link-external-24:](https://snakemake.github.io/snakemake-plugin-catalog/plugins/executor/cannon.html){:target="_blank"}. This facilitates *automatic partition selection* based on requested resources. Install this in your environment with:
+If you are on the Harvard Cannon cluster, you can use our configuration file to facilitate automatic partition selection! Just leave the `partition:` entries blank in your run's config file. See [here](../snakemake-cannon-config.md) for more usage information.
 
-```bash
-mamba install bioconda::snakemake-executor-plugin-cannon
-```
-
-Then, when running the workflow, specify the cannon executor with `-e cannon` instead of `-e slurm`.
-
-If you are not on the Harvard Cannon cluster, stick with the generic SLURM plugin. You will just need to directly specify the partitions for each rule in the config file ([see below](#specifying-resources-for-each-rule)).
+If you are not on the Harvard Cannon cluster, you can still directly specify the partitions for each rule in the config file ([see below](#specifying-resources-for-each-rule)), or you may take it upon yourself to [make a partition configuration :octicons-link-external-24:](https://snakemake.github.io/snakemake-plugin-catalog/plugins/executor/slurm.html#automatic-partition-selection){:target="_blank"} file for your own cluster.
 
 ### Downloading the cactus-snakemake pipeline
 
@@ -273,7 +267,7 @@ rule_resources:
 * **Allocate the proper partitions based on `use_gpu`.** If you want to use the GPU version of cactus (*i.e.* you have set `use_gpu: True` in the config file), the partition for the rule **blast** must be GPU enabled. If not, the pipeline will fail to run.
 * The `blast: gpus:` option will be ignored if `use_gpu: False` is set.
 * **mem is in MB** and **time is in minutes**.
-* **If using the [snakemake-executor-plugin-cannon :octicons-link-external-24:](https://snakemake.github.io/snakemake-plugin-catalog/plugins/executor/cannon.html){:target="_blank"} specifically for the Harvard Cannon cluster, you can leave the `partition:` fields blank and one will be selected automatically based on the other resources requested!**
+* **If using the [Snakemake partition config file specifically for the Harvard Cannon cluster :material-arrow-top-right:](../snakemake-cannon-config.md){:target="_blank"}, you can leave the `partition:` fields blank and one will be selected automatically based on the other resources requested!**
 
 You will have to determine the proper resource usage for your dataset. Generally, the larger the genomes, the more time and memory each job will need, and the more you will benefit from providing more CPUs and GPUs.
 
@@ -299,7 +293,7 @@ snakemake -j <# of jobs to submit simultaneously> -e slurm -s </path/to/cactus_u
 | ------------------------------------------------- | ----------- |
 | `snakemake` | The call to the snakemake workflow program to execute the workflow. |
 | `-j <# of jobs to submit simultaneously>` | The maximum number of jobs that will be submitted to your SLURM cluster at one time. |
-| `-e slurm` | Specify to use the SLURM executor plugin, or use `-e cannon` if using the Cannon specific plugin. See: [Getting started](#getting-started) |
+| `-e slurm` | Specify to use the SLURM executor plugin. |
 | `-s </path/to/cactus_update.smk>` | The path to the workflow file. |
 | `--configfile <path/to/your/snakmake-config.yml>` | The path to your config file. See: [Preparing the Snakemake config file](#preparing-the-snakemake-config-file). |
 | `--dryrun` | Do not execute anything, just display what would be done. |
@@ -308,6 +302,10 @@ snakemake -j <# of jobs to submit simultaneously> -e slurm -s </path/to/cactus_u
 
 However even during a `--dryrun` some pre-processing steps will be run, including creation of the output directory if it doesn't exist, downloading the Cactus Singularity image if `cactus_path: download` is set in the config file, and running `cactus-update-prepare`. These should all be relatively fast and not resource intensive tasks.
 
+!!! tip "Use the Cannon config file for automatic partition selection"
+
+Recall, if you are on the Harvard Cannon cluster, you can specify the [partition config :material-arrow-top-right:](../snakemake-cannon-config.md){:target="_blank"} file with `--slurm-partition-config` option and partitions will be selected automatically!
+
 If this completes successfully, you should see a bunch of blue, yellow, and green text on the screen, ending with something like this (the number of jobs and Reasons: may differ for your project):
 
 ```bash

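As a concrete example of the invocation this diff describes for the update workflow, a dry run on Cannon might look like the sketch below. The `-j` value and all paths are placeholders, and the partition config filename is an assumption; use whatever file the linked snakemake-cannon-config page provides.

```bash
# Hypothetical dry run of the update workflow on Cannon with automatic partition
# selection. The -j value, paths, and the partition config filename are placeholders.
snakemake -j 10 -e slurm \
    -s /path/to/cactus_update.smk \
    --configfile /path/to/your/snakemake-config.yml \
    --slurm-partition-config /path/to/cannon-partition-config.yaml \
    --dryrun
```

If the dry run looks right, the same command without `--dryrun` launches the workflow.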
docs/resources/tutorials/pangenome-cactus-minigraph.md

Lines changed: 9 additions & 11 deletions
@@ -58,17 +58,11 @@ If the help menu displays, you already have Singularity installed. If not, you w
 mamba install conda-forge::singularity
 ```
 
-!!! tip "Cannon cluster Snakemake plugin"
+!!! tip "Cannon cluster Snakemake config"
 
-If you are on the Harvard Cannon cluster, instead of the generic snakemake-executor-plugin-slurm, you can use our specific plugin for the Cannon cluster: [snakemake-executor-plugin-cannon :octicons-link-external-24:](https://snakemake.github.io/snakemake-plugin-catalog/plugins/executor/cannon.html){:target="_blank"}. This facilitates *automatic partition selection* based on requested resources. Install this in your environment with:
+If you are on the Harvard Cannon cluster, you can use our configuration file to facilitate automatic partition selection! Just leave the `partition:` entries blank in your run's config file. See [here](../snakemake-cannon-config.md) for more usage information.
 
-```bash
-mamba install bioconda::snakemake-executor-plugin-cannon
-```
-
-Then, when running the workflow, specify the cannon executor with `-e cannon` instead of `-e slurm`.
-
-If you are not on the Harvard Cannon cluster, stick with the generic SLURM plugin. You will just need to explicitly specify the partitions for each rule in the config file ([see below](#specifying-resources-for-each-rule)).
+If you are not on the Harvard Cannon cluster, you can still directly specify the partitions for each rule in the config file ([see below](#specifying-resources-for-each-rule)), or you may take it upon yourself to [make a partition configuration :octicons-link-external-24:](https://snakemake.github.io/snakemake-plugin-catalog/plugins/executor/slurm.html#automatic-partition-selection){:target="_blank"} file for your own cluster.
 
 ### Downloading the cactus-snakemake pipeline
 
@@ -191,7 +185,7 @@ rule_resources:
 * Be sure to use partition names appropriate your cluster. Several examples in this tutorial have partition names that are specific to the Harvard cluster, so be sure to change them.
 * The steps in the cactus-minigraph pipeline are not GPU compatible, so there are no GPU options in this pipeline.
 * **mem_mb is in MB** and **time is in minutes**.
-* **If using the [snakemake-executor-plugin-cannon :octicons-link-external-24:](https://snakemake.github.io/snakemake-plugin-catalog/plugins/executor/cannon.html){:target="_blank"} specifically for the Harvard Cannon cluster, you can leave the `partition:` fields blank and one will be selected automatically based on the other resources requested!**
+* **If using the [Snakemake partition config file specifically for the Harvard Cannon cluster :material-arrow-top-right:](../snakemake-cannon-config.md){:target="_blank"}, you can leave the `partition:` fields blank and one will be selected automatically based on the other resources requested!**
 
 You will have to determine the proper resource usage for your dataset. Generally, the larger the genomes, the more time and memory each job will need, and the more you will benefit from providing more CPUs.
 
@@ -215,7 +209,7 @@ snakemake -j <# of jobs to submit simultaneously> -e slurm -s </path/to/cactus_m
 | ------------------------------------------------- | ----------- |
 | `snakemake` | The call to the snakemake workflow program to execute the workflow. |
 | `-j <# of jobs to submit simultaneously>` | The maximum number of jobs that will be submitted to your SLURM cluster at one time. |
-| `-e slurm` | Specify to use the SLURM executor plugin, or use `-e cannon` if using the Cannon specific plugin. See: [Getting started](#getting-started) |
+| `-e slurm` | Specify to use the SLURM executor plugin. |
 | `-s </path/to/cactus_minigraph.smk>` | The path to the workflow file. |
 | `--configfile <path/to/your/snakmake-config.yml>` | The path to your config file. See: [Preparing the Snakemake config file](#preparing-the-snakemake-config-file). |
 | `--dryrun` | Do not execute anything, just display what would be done. |
@@ -224,6 +218,10 @@ snakemake -j <# of jobs to submit simultaneously> -e slurm -s </path/to/cactus_m
 
 However even during a `--dryrun` some pre-processing steps will be run, including creation of the output directory if it doesn't exist and downloading the Cactus Singularity image if `cactus_path: download` is set in the config file. These should be relatively fast and not resource intensive tasks.
 
+!!! tip "Use the Cannon config file for automatic partition selection"
+
+Recall, if you are on the Harvard Cannon cluster, you can specify the [partition config :material-arrow-top-right:](../snakemake-cannon-config.md){:target="_blank"} file with `--slurm-partition-config` option and partitions will be selected automatically!
+
 If this completes successfully, you should see a bunch of blue, yellow, and green text on the screen, ending with something like this (the number of jobs and Reasons: may differ for your project):
 
 ```bash

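For readers not on Cannon, the diffs above now point all three tutorials at the generic SLURM executor plugin rather than the Cannon-specific one. If it is not already in your environment, it can presumably be installed following the same channel/package pattern the tutorials use for their other installs; the package name below is taken from the Snakemake plugin catalog, so verify it there if the install fails.

```bash
# Generic SLURM executor plugin for Snakemake (non-Cannon clusters); the
# bioconda:: prefix mirrors the tutorials' other mamba install commands.
mamba install bioconda::snakemake-executor-plugin-slurm
```

With the generic plugin, each rule's `partition:` entry in the run's config file must name a real partition on your cluster, as the bullets in the diffs above note.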