Commit ed1beaf
docs: rework contributions section
1 parent 7d0b44c commit ed1beaf

1 file changed: +17 −11 lines changed

docs/further.md

Lines changed: 17 additions & 11 deletions
@@ -7,18 +7,25 @@ To avoid redundancy, the plugin deletes the SLURM log file for successful jobs,
 Remote executors submit Snakemake jobs to ensure unique functionalities — such as piped group jobs and rule wrappers — are available on cluster nodes.
 The memory footprint varies based on these functionalities; for instance, rules with a run directive that import modules and read data may require more memory.
 
-#### Usage Hints
+### Installation
 
-Install this plugin into your Snakemake base environment using conda.
-This process also installs the 'jobstep' plugin, utilized on cluster nodes.
-Additionally, we recommend installing the `snakemake-storage-plugin-fs` for automated stage-in and stage-out procedures.
+Installing this plugin into your Snakemake base environment using conda will also install the 'jobstep' plugin, utilized on cluster nodes.
+Additionally, we recommend installing the `snakemake-storage-plugin-fs`, which will automate transferring data from the main file system to slurm execution nodes and back (stage-in and stage-out).
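The conda-based installation described in the added lines might look like the following; the channel set and environment name are assumptions, so adjust them to your local conda configuration:

```bash
# Install the executor plugin (which pulls in the jobstep plugin as a
# dependency) plus the recommended storage plugin into the base env.
# Channels are an assumption; adjust to your conda configuration.
conda install -n base -c conda-forge -c bioconda \
    snakemake-executor-plugin-slurm \
    snakemake-storage-plugin-fs
```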

-#### Reporting Bugs and Feature Requests
+### Contributions
 
-We welcome bug reports and feature requests!
+We welcome bug reports, feature requests and pull requests!
 Please report issues specific to this plugin [in the plugin's GitHub repository](https://github.com/snakemake/snakemake-executor-plugin-slurm/issue).
-For other concerns, refer to the [Snakemake main repository](https://github.com/snakemake/snakemake/issues) or the relevant Snakemake plugin repository.
-Cluster-related issues should be directed to your cluster administrator.
+Additionally, bugs related to the plugin can originate in the:
+
+* [`snakemake-executor-plugin-slurm-jobstep`](https://github.com/snakemake/snakemake-executor-plugin-slurm-jobstep), which runs snakemake within slurm jobs
+* [`snakemake-interface-executor-plugins`](https://github.com/snakemake/snakemake-interface-executor-plugins), which connects it to the main snakemake application
+* [`snakemake`](https://github.com/snakemake/snakemake) itself
+
+If you can pinpoint the exact repository your issue pertains to, file your issue or pull request there.
+If unsure, posting here should ensure that we can direct you to the right one.
+
+For issues that are specific to your local cluster setup, please contact your cluster administrator.
 
 ### Specifying Account and Partition
 
@@ -285,14 +292,11 @@ This directive allows you to specify a comma-separated list of rules that should
 localrules: <rule_a>, <rule_b>
 ```
 
-### Advanced Resource Specifications
 
 In Snakemake workflows executed on SLURM clusters, it's essential to map Snakemake's resource specifications to SLURM's resource management parameters.
 This ensures that each job receives the appropriate computational resources.
 Below is a guide on how to align these specifications:
 
-#### Mapping Snakemake Resources to SLURM Parameters
-
 Snakemake allows the definition of resources within each rule, which can be translated to corresponding SLURM command-line flags:
 
 - Partition: Specifies the partition or queue to which the job should be submitted.
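The rule-level mapping this hunk introduces can be sketched as a rule; the rule name, files, and values below are hypothetical, and the exact set of supported resource keys should be checked against the plugin's documentation:

```python
rule heavy_task:
    # Hypothetical rule: resource keys that the SLURM executor is
    # expected to translate into submission flags.
    input:
        "data/input.tsv"
    output:
        "results/output.tsv"
    resources:
        slurm_partition="compute",  # --partition=compute (assumed name)
        runtime=120,                # minutes, mapped to --time
        mem_mb=8000,                # mapped to --mem
        cpus_per_task=4,            # mapped to --cpus-per-task
    shell:
        "analyze {input} > {output}"
```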
@@ -368,6 +372,8 @@ snakemake --profile path/to/profile
 
 By leveraging configuration profiles, you can tailor resource specifications to different computing environments without modifying the core workflow definitions, thereby enhancing reproducibility and flexibility.
 
+### Advanced Resource Specifications
+
 #### Multicluster Support
 
 In Snakemake, specifying the target cluster for a particular rule is achieved using the `cluster` resource flag within the rule definition.
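The configuration-profile approach mentioned in the last hunk might be captured in a profile's `config.yaml`; the values below are illustrative assumptions, not recommendations:

```yaml
# path/to/profile/config.yaml -- selects the SLURM executor and sets
# fallback resources for rules that do not declare their own.
executor: slurm
jobs: 100
default-resources:
  slurm_partition: "compute"   # assumed partition name
  runtime: 60                  # minutes
  mem_mb: 4000
```

Such a profile would then be activated with `snakemake --profile path/to/profile`, as shown in the hunk's context line.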
