Best way to bundle ALL jobs together into a single Slurm allocation? #181

Open
@tjweitzel225

Description

My institution's cluster has a strong preference for scientific workflows to be self-contained in a single, long-running SLURM job.

Our current solution is to use Dask's SLURM Runner feature https://jobqueue.dask.org/en/stable/generated/dask_jobqueue.slurm.SLURMRunner.html.

However, I'm thinking of implementing snakemake for its workflow management features.

Is there any way to support these kinds of monolithic SLURM jobs?

Ideally, I'd simply be able to provide Snakemake with resource specifications for the allocation -- say, 256 cores across 4 nodes, plus memory limits -- and the Snakemake scheduler would see those 256 cores and parallelize jobs across them, just as it does on a local multi-core machine.
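For illustration, here's a rough sketch of the kind of monolithic job I have in mind (resource values and the workflow itself are hypothetical, not something that currently works as intended): a single sbatch script that claims the whole allocation and then invokes Snakemake inside it as if it were one big local machine.

```shell
#!/bin/bash
#SBATCH --job-name=snakemake-monolith
#SBATCH --nodes=4
#SBATCH --ntasks-per-node=64   # 256 cores total across the allocation
#SBATCH --mem=0                # request all memory on each node
#SBATCH --time=48:00:00

# Run Snakemake with its local executor inside the allocation.
# As far as I understand, this only uses the cores of the first node;
# getting Snakemake to spread jobs across all four nodes within this
# single allocation is exactly what I'm asking about.
snakemake --cores 256
```

The Dask SLURMRunner we use today handles the multi-node part by launching one worker per task via srun; I'm hoping there's an analogous pattern for Snakemake.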
