Remote executors submit Snakemake jobs to ensure unique functionalities — such as piped group jobs and rule wrappers — are available on cluster nodes.
The memory footprint varies based on these functionalities; for instance, rules with a run directive that import modules and read data may require more memory.
### Installation

Installing this plugin into your Snakemake base environment using conda will also install the 'jobstep' plugin, utilized on cluster nodes.
Additionally, we recommend installing the `snakemake-storage-plugin-fs`, which will automate transferring data from the main file system to SLURM execution nodes and back (stage-in and stage-out).
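As a sketch, the installation could look like the following (the channel names are assumptions based on where Snakemake plugins are commonly packaged; adjust them to your conda configuration):

```shell
# Install the SLURM executor plugin into the Snakemake base environment.
# This pulls in the 'jobstep' plugin as a dependency.
conda install -c conda-forge -c bioconda snakemake-executor-plugin-slurm

# Optional: the file-system storage plugin for automatic
# stage-in and stage-out of data.
conda install -c conda-forge -c bioconda snakemake-storage-plugin-fs
```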
### Contributions

We welcome bug reports, feature requests and pull requests!
Please report issues specific to this plugin [in the plugin's GitHub repository](https://github.com/snakemake/snakemake-executor-plugin-slurm/issues).
Additionally, bugs related to the plugin can originate in:

* [`snakemake-executor-plugin-slurm-jobstep`](https://github.com/snakemake/snakemake-executor-plugin-slurm-jobstep), which runs Snakemake within SLURM jobs
* [`snakemake-interface-executor-plugins`](https://github.com/snakemake/snakemake-interface-executor-plugins), which connects the executor to the main Snakemake application
If you can pinpoint the exact repository your issue pertains to, file your issue or pull request there.
If unsure, posting here should ensure that we can direct you to the right one.

For issues that are specific to your local cluster setup, please contact your cluster administrator.
### Specifying Account and Partition
This directive allows you to specify a comma-separated list of rules that should run locally:

```
localrules: <rule_a>, <rule_b>
```

In Snakemake workflows executed on SLURM clusters, it's essential to map Snakemake's resource specifications to SLURM's resource management parameters.
This ensures that each job receives the appropriate computational resources.
Below is a guide on how to align these specifications:
Snakemake allows the definition of resources within each rule, which can be translated to corresponding SLURM command-line flags:
- Partition: Specifies the partition or queue to which the job should be submitted.
By leveraging configuration profiles, you can tailor resource specifications to different computing environments without modifying the core workflow definitions, thereby enhancing reproducibility and flexibility.
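As a sketch, such a profile could set the executor and default resources in its `config.yaml`; the values below are placeholders, and the resource names follow the plugin's `slurm_`-prefixed conventions:

```
# config.yaml of a Snakemake profile (illustrative values)
executor: slurm
default-resources:
  slurm_partition: "compute"   # SLURM --partition
  slurm_account: "my_account"  # SLURM --account
  mem_mb: 4000                 # SLURM --mem
  runtime: 60                  # walltime in minutes, SLURM --time
  cpus_per_task: 2             # SLURM --cpus-per-task
```

Invoking Snakemake with `--profile <profile_dir>` then applies these defaults to every submitted job without touching the workflow definition itself.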
### Advanced Resource Specifications
#### Multicluster Support
In Snakemake, specifying the target cluster for a particular rule is achieved using the `cluster` resource flag within the rule definition.
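For illustration, a rule pinned to a particular cluster might look like this (the rule body and the cluster name are placeholders, not part of the plugin's documentation):

```
rule heavy_analysis:
    input: "data/input.txt"
    output: "results/output.txt"
    resources:
        cluster="cluster_b"  # submit this rule's jobs to cluster_b
    shell: "analyze {input} > {output}"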
```