docs/resources/tutorials/add-outgroup-to-whole-genome-alignment-cactus.md (9 additions, 11 deletions)

````diff
@@ -65,17 +65,11 @@ If the help menu displays, you already have Singularity installed. If not, you w
 mamba install conda-forge::singularity
 ```
 
-!!! tip "Cannon cluster Snakemake plugin"
+!!! tip "Cannon cluster Snakemake config"
 
-    If you are on the Harvard Cannon cluster, instead of the generic snakemake-executor-plugin-slurm, you can use our specific plugin for the Cannon cluster: [snakemake-executor-plugin-cannon :octicons-link-external-24:](https://snakemake.github.io/snakemake-plugin-catalog/plugins/executor/cannon.html){:target="_blank"}. This facilitates *automatic partition selection* based on requested resources. Install this in your environment with:
-    Then, when running the workflow, specify the cannon executor with `-e cannon` instead of `-e slurm`.
-
-    If you are not on the Harvard Cannon cluster, stick with the generic SLURM plugin. You will just need to directly specify the partitions for each rule in the config file ([see below](#specifying-resources-for-each-rule)).
+    If you are on the Harvard Cannon cluster, you can use our configuration file to facilitate automatic partition selection! Just leave the `partition:` entries blank in your run's config file. See [here](../snakemake-cannon-config.md) for more usage information.
+
+    If you are not on the Harvard Cannon cluster, you can still directly specify the partitions for each rule in the config file ([see below](#specifying-resources-for-each-rule)), or you can [make a partition configuration :octicons-link-external-24:](https://snakemake.github.io/snakemake-plugin-catalog/plugins/executor/slurm.html#automatic-partition-selection){:target="_blank"} file for your own cluster.
 
 ### Downloading the cactus-snakemake pipeline
````

````diff
@@ -261,7 +255,7 @@ rule_resources:
 * **Allocate the proper partitions based on `use_gpu`.** If you want to use the GPU version of cactus (*i.e.* you have set `use_gpu: True` in the config file), the partition for the rule **blast** must be GPU-enabled. If not, the pipeline will fail to run.
 * The `blast: gpus:` option will be ignored if `use_gpu: False` is set.
 * **mem is in MB** and **time is in minutes**.
-* **If using the [snakemake-executor-plugin-cannon :octicons-link-external-24:](https://snakemake.github.io/snakemake-plugin-catalog/plugins/executor/cannon.html){:target="_blank"} specifically for the Harvard Cannon cluster, you can leave the `partition:` fields blank and one will be selected automatically based on the other resources requested!**
+* **If using the [Snakemake partition config file specifically for the Harvard Cannon cluster :material-arrow-top-right:](../snakemake-cannon-config.md){:target="_blank"}, you can leave the `partition:` fields blank and one will be selected automatically based on the other resources requested!**
 
 You will have to determine the proper resource usage for your dataset. Generally, the larger the genomes, the more time and memory each job will need, and the more you will benefit from providing more CPUs and GPUs.
````
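
To make the notes above concrete, here is a minimal sketch of what a `rule_resources` block might look like with the `partition:` fields left blank for automatic selection on Cannon. Only the `blast` rule and the `partition`/`gpus`/`mem`/`time` keys are taken from the tutorial; the `align` rule name, the `cpus` key, and all values are illustrative placeholders, so check the pipeline's example config for the real names:

```yaml
rule_resources:
  blast:
    partition:      # left blank: selected automatically with the Cannon config;
                    # if set manually, must be GPU-enabled when use_gpu: True
    cpus: 8         # key name assumed for illustration
    gpus: 2         # ignored when use_gpu: False
    mem: 64000      # MB
    time: 720       # minutes
  align:            # hypothetical rule name
    partition:
    cpus: 24
    mem: 128000     # MB
    time: 1440      # minutes
```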

````diff
@@ -287,7 +281,7 @@ snakemake -j <# of jobs to submit simultaneously> -e slurm -s </path/to/cactus_a
 | `snakemake` | The call to the snakemake workflow program to execute the workflow. |
 | `-j <# of jobs to submit simultaneously>` | The maximum number of jobs that will be submitted to your SLURM cluster at one time. |
-| `-e slurm` | Specify to use the SLURM executor plugin, or use `-e cannon` if using the Cannon specific plugin. See: [Getting started](#getting-started) |
+| `-e slurm` | Specify to use the SLURM executor plugin. |
 | `-s </path/to/cactus_add_outgroup.smk>` | The path to the workflow file. |
 | `--configfile <path/to/your/snakemake-config.yml>` | The path to your config file. See: [Preparing the Snakemake config file](#preparing-the-snakemake-config-file). |
 | `--dryrun` | Do not execute anything, just display what would be done. |
````

````diff
@@ -296,6 +290,10 @@ snakemake -j <# of jobs to submit simultaneously> -e slurm -s </path/to/cactus_a
 
 However, even during a `--dryrun`, some pre-processing steps will be run, including creation of the output directory if it doesn't exist, downloading the Cactus Singularity image if `cactus_path: download` is set in the config file, and creation of some small input files. These should all be relatively fast and not resource-intensive tasks.
 
+!!! tip "Use the Cannon config file for automatic partition selection"
+
+    Recall, if you are on the Harvard Cannon cluster, you can specify the [partition config :material-arrow-top-right:](../snakemake-cannon-config.md){:target="_blank"} file with the `--slurm-partition-config` option and partitions will be selected automatically!
+
 
 If this completes successfully, you should see a bunch of blue, yellow, and green text on the screen, ending with something like this (the number of jobs and Reasons: may differ for your project):
````
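
Putting the table and the tip together, a dry run on Cannon with automatic partition selection might look like the sketch below; the `-j` value, all paths, and the partition config filename are placeholders:

```bash
# Dry run: report what would be submitted without executing any jobs
snakemake -j 25 -e slurm \
    -s /path/to/cactus_add_outgroup.smk \
    --configfile /path/to/your/snakemake-config.yml \
    --slurm-partition-config /path/to/cannon-partition-config.yaml \
    --dryrun
```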

docs/resources/tutorials/add-to-whole-genome-alignment-cactus.md (9 additions, 11 deletions)

````diff
@@ -65,17 +65,11 @@ If the help menu displays, you already have Singularity installed. If not, you w
 mamba install conda-forge::singularity
 ```
 
-!!! tip "Cannon cluster Snakemake plugin"
+!!! tip "Cannon cluster Snakemake config"
 
-    If you are on the Harvard Cannon cluster, instead of the generic snakemake-executor-plugin-slurm, you can use our specific plugin for the Cannon cluster: [snakemake-executor-plugin-cannon :octicons-link-external-24:](https://snakemake.github.io/snakemake-plugin-catalog/plugins/executor/cannon.html){:target="_blank"}. This facilitates *automatic partition selection* based on requested resources. Install this in your environment with:
-    Then, when running the workflow, specify the cannon executor with `-e cannon` instead of `-e slurm`.
-
-    If you are not on the Harvard Cannon cluster, stick with the generic SLURM plugin. You will just need to directly specify the partitions for each rule in the config file ([see below](#specifying-resources-for-each-rule)).
+    If you are on the Harvard Cannon cluster, you can use our configuration file to facilitate automatic partition selection! Just leave the `partition:` entries blank in your run's config file. See [here](../snakemake-cannon-config.md) for more usage information.
+
+    If you are not on the Harvard Cannon cluster, you can still directly specify the partitions for each rule in the config file ([see below](#specifying-resources-for-each-rule)), or you can [make a partition configuration :octicons-link-external-24:](https://snakemake.github.io/snakemake-plugin-catalog/plugins/executor/slurm.html#automatic-partition-selection){:target="_blank"} file for your own cluster.
 
 ### Downloading the cactus-snakemake pipeline
````

````diff
@@ -273,7 +267,7 @@ rule_resources:
 * **Allocate the proper partitions based on `use_gpu`.** If you want to use the GPU version of cactus (*i.e.* you have set `use_gpu: True` in the config file), the partition for the rule **blast** must be GPU-enabled. If not, the pipeline will fail to run.
 * The `blast: gpus:` option will be ignored if `use_gpu: False` is set.
 * **mem is in MB** and **time is in minutes**.
-* **If using the [snakemake-executor-plugin-cannon :octicons-link-external-24:](https://snakemake.github.io/snakemake-plugin-catalog/plugins/executor/cannon.html){:target="_blank"} specifically for the Harvard Cannon cluster, you can leave the `partition:` fields blank and one will be selected automatically based on the other resources requested!**
+* **If using the [Snakemake partition config file specifically for the Harvard Cannon cluster :material-arrow-top-right:](../snakemake-cannon-config.md){:target="_blank"}, you can leave the `partition:` fields blank and one will be selected automatically based on the other resources requested!**
 
 You will have to determine the proper resource usage for your dataset. Generally, the larger the genomes, the more time and memory each job will need, and the more you will benefit from providing more CPUs and GPUs.
````

````diff
@@ -299,7 +293,7 @@ snakemake -j <# of jobs to submit simultaneously> -e slurm -s </path/to/cactus_u
 | `snakemake` | The call to the snakemake workflow program to execute the workflow. |
 | `-j <# of jobs to submit simultaneously>` | The maximum number of jobs that will be submitted to your SLURM cluster at one time. |
-| `-e slurm` | Specify to use the SLURM executor plugin, or use `-e cannon` if using the Cannon specific plugin. See: [Getting started](#getting-started) |
+| `-e slurm` | Specify to use the SLURM executor plugin. |
 | `-s </path/to/cactus_update.smk>` | The path to the workflow file. |
 | `--configfile <path/to/your/snakemake-config.yml>` | The path to your config file. See: [Preparing the Snakemake config file](#preparing-the-snakemake-config-file). |
 | `--dryrun` | Do not execute anything, just display what would be done. |
````
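
For reference, the options in the table assemble into a complete dry-run invocation like the following sketch; the `-j` value and paths are placeholders, and with the generic SLURM plugin the `partition:` fields in the config file must be filled in explicitly:

```bash
# Generic SLURM executor: partitions are taken from the rule_resources block
snakemake -j 25 -e slurm \
    -s /path/to/cactus_update.smk \
    --configfile /path/to/your/snakemake-config.yml \
    --dryrun
```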

````diff
@@ -308,6 +302,10 @@ snakemake -j <# of jobs to submit simultaneously> -e slurm -s </path/to/cactus_u
 
 However, even during a `--dryrun`, some pre-processing steps will be run, including creation of the output directory if it doesn't exist, downloading the Cactus Singularity image if `cactus_path: download` is set in the config file, and running `cactus-update-prepare`. These should all be relatively fast and not resource-intensive tasks.
 
+!!! tip "Use the Cannon config file for automatic partition selection"
+
+    Recall, if you are on the Harvard Cannon cluster, you can specify the [partition config :material-arrow-top-right:](../snakemake-cannon-config.md){:target="_blank"} file with the `--slurm-partition-config` option and partitions will be selected automatically!
+
 
 If this completes successfully, you should see a bunch of blue, yellow, and green text on the screen, ending with something like this (the number of jobs and Reasons: may differ for your project):
````

docs/resources/tutorials/pangenome-cactus-minigraph.md (9 additions, 11 deletions)

````diff
@@ -58,17 +58,11 @@ If the help menu displays, you already have Singularity installed. If not, you w
 mamba install conda-forge::singularity
 ```
 
-!!! tip "Cannon cluster Snakemake plugin"
+!!! tip "Cannon cluster Snakemake config"
 
-    If you are on the Harvard Cannon cluster, instead of the generic snakemake-executor-plugin-slurm, you can use our specific plugin for the Cannon cluster: [snakemake-executor-plugin-cannon :octicons-link-external-24:](https://snakemake.github.io/snakemake-plugin-catalog/plugins/executor/cannon.html){:target="_blank"}. This facilitates *automatic partition selection* based on requested resources. Install this in your environment with:
-    Then, when running the workflow, specify the cannon executor with `-e cannon` instead of `-e slurm`.
-
-    If you are not on the Harvard Cannon cluster, stick with the generic SLURM plugin. You will just need to explicitly specify the partitions for each rule in the config file ([see below](#specifying-resources-for-each-rule)).
+    If you are on the Harvard Cannon cluster, you can use our configuration file to facilitate automatic partition selection! Just leave the `partition:` entries blank in your run's config file. See [here](../snakemake-cannon-config.md) for more usage information.
+
+    If you are not on the Harvard Cannon cluster, you can still directly specify the partitions for each rule in the config file ([see below](#specifying-resources-for-each-rule)), or you can [make a partition configuration :octicons-link-external-24:](https://snakemake.github.io/snakemake-plugin-catalog/plugins/executor/slurm.html#automatic-partition-selection){:target="_blank"} file for your own cluster.
 
 ### Downloading the cactus-snakemake pipeline
````

````diff
@@ -191,7 +185,7 @@ rule_resources:
 * Be sure to use partition names appropriate for your cluster. Several examples in this tutorial have partition names that are specific to the Harvard cluster, so be sure to change them.
 * The steps in the cactus-minigraph pipeline are not GPU-compatible, so there are no GPU options in this pipeline.
 * **mem_mb is in MB** and **time is in minutes**.
-* **If using the [snakemake-executor-plugin-cannon :octicons-link-external-24:](https://snakemake.github.io/snakemake-plugin-catalog/plugins/executor/cannon.html){:target="_blank"} specifically for the Harvard Cannon cluster, you can leave the `partition:` fields blank and one will be selected automatically based on the other resources requested!**
+* **If using the [Snakemake partition config file specifically for the Harvard Cannon cluster :material-arrow-top-right:](../snakemake-cannon-config.md){:target="_blank"}, you can leave the `partition:` fields blank and one will be selected automatically based on the other resources requested!**
 
 You will have to determine the proper resource usage for your dataset. Generally, the larger the genomes, the more time and memory each job will need, and the more you will benefit from providing more CPUs.
````
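
As a minimal sketch of the notes above for this pipeline, a `rule_resources` entry might look like the following; the rule name and the `cpus` key are hypothetical placeholders, all values are illustrative, and note that this pipeline uses `mem_mb` rather than `mem`:

```yaml
rule_resources:
  minigraph:        # hypothetical rule name; use those from the example config
    partition:      # left blank for automatic selection with the Cannon config
    cpus: 12        # key name assumed for illustration
    mem_mb: 48000   # MB
    time: 480       # minutes
```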

````diff
@@ -215,7 +209,7 @@ snakemake -j <# of jobs to submit simultaneously> -e slurm -s </path/to/cactus_m
 | `snakemake` | The call to the snakemake workflow program to execute the workflow. |
 | `-j <# of jobs to submit simultaneously>` | The maximum number of jobs that will be submitted to your SLURM cluster at one time. |
-| `-e slurm` | Specify to use the SLURM executor plugin, or use `-e cannon` if using the Cannon specific plugin. See: [Getting started](#getting-started) |
+| `-e slurm` | Specify to use the SLURM executor plugin. |
 | `-s </path/to/cactus_minigraph.smk>` | The path to the workflow file. |
 | `--configfile <path/to/your/snakemake-config.yml>` | The path to your config file. See: [Preparing the Snakemake config file](#preparing-the-snakemake-config-file). |
 | `--dryrun` | Do not execute anything, just display what would be done. |
````

````diff
@@ -224,6 +218,10 @@ snakemake -j <# of jobs to submit simultaneously> -e slurm -s </path/to/cactus_m
 
 However, even during a `--dryrun`, some pre-processing steps will be run, including creation of the output directory if it doesn't exist and downloading the Cactus Singularity image if `cactus_path: download` is set in the config file. These should be relatively fast and not resource-intensive tasks.
 
+!!! tip "Use the Cannon config file for automatic partition selection"
+
+    Recall, if you are on the Harvard Cannon cluster, you can specify the [partition config :material-arrow-top-right:](../snakemake-cannon-config.md){:target="_blank"} file with the `--slurm-partition-config` option and partitions will be selected automatically!
+
 
 If this completes successfully, you should see a bunch of blue, yellow, and green text on the screen, ending with something like this (the number of jobs and Reasons: may differ for your project):
````