My institution's cluster has a strong preference for scientific workflows to be self-contained in a single, long-running SLURM job.
Our current solution uses Dask's SLURMRunner (https://jobqueue.dask.org/en/stable/generated/dask_jobqueue.slurm.SLURMRunner.html).
However, I'm considering adopting Snakemake for its workflow management features.
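For context, our current pattern looks roughly like this: a single sbatch script launches every rank with `srun`, and inside `workflow.py` a `SLURMRunner` context assigns scheduler, client, and worker roles to the ranks before the workflow code runs against the resulting `dask.distributed.Client`. (The filename and resource numbers here are illustrative, not our real values.)

```shell
#!/bin/bash
#SBATCH --nodes=4
#SBATCH --ntasks=256
#SBATCH --mem-per-cpu=4G

# Every rank runs the same script; SLURMRunner inspects the SLURM
# environment and decides which ranks become the Dask scheduler,
# the client process, and the workers.
srun -n 256 python workflow.py
```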
Is there any way to support these kinds of monolithic SLURM jobs?
Ideally, I'd simply provide Snakemake with resource specifications (say, 256 cores across 4 nodes, plus memory limits), and the Snakemake scheduler would see those 256 cores and parallelize across them just as it would on a single multi-core machine.
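Concretely, what I'd like to be able to write is something like the sketch below (resource numbers are just examples), though as far as I understand a plain `--cores` run only parallelizes over the cores of the node Snakemake itself is started on, not the whole allocation:

```shell
#!/bin/bash
#SBATCH --nodes=4
#SBATCH --ntasks-per-node=64
#SBATCH --mem-per-cpu=4G

# Desired behavior: Snakemake treats the entire 4-node, 256-core
# allocation as one pool and schedules rules across it.
snakemake --cores 256
```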