
Localrules are ignored for grouped jobs  #162

Open
@NoahHenrikKleinschmidt

Description


I am using the latest version of Snakemake (8.25) and am trying to use the SLURM executor plugin (0.11.1) for my pipeline.
However, when I run Snakemake from the command line with the SLURM executor, all rules are treated as SLURM rules. That is to say, neither a top-level localrules: a, b, c declaration nor the per-rule localrule: True directive has any effect. After some digging I found that this only happens when the group directive is used on the respective rules; if I remove the grouping, everything runs fine and the localrules are recognized properly.
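
For clarity, these are the two declaration forms I mean; a minimal sketch with placeholder rule names, not taken from my actual pipeline:

# Variant 1: a single top-level directive naming the rules to run locally
localrules: a, b, c

# Variant 2: the per-rule directive inside a rule definition
rule a:
    output:
        "a.txt"
    localrule: True
    shell:
        "touch {output}"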

A minimal Snakefile to reproduce this (the rule a belongs to the group local):

DIRECTORY = "test"

rule all_a:
    input: 
        [f"{DIRECTORY}/test_file-a-{i}.txt" for i in range(10)],
    localrule: True

rule a:
    input: 
        directory(DIRECTORY)
    output:
        DIRECTORY + "/test_file-a-{i}.txt"
    localrule: True
    group:
        "local"
    shell:
        "touch {output[0]}"

Running all_a shows that both all_a and a are treated as normal rules rather than localrules:

$ snakemake --executor slurm -R all_a

Job stats:
job      count
-----  -------
a           10
all_a        1
total       11

Select jobs to execute...
Execute 10 jobs...


[Fri Nov  1 13:57:51 2024]
rule a:
    input: test
    output: test/test_file-a-4.txt
    jobid: 5
    reason: Missing output files: test/test_file-a-4.txt
    wildcards: i=4
    resources: mem_mb=<TBD>, disk_mb=<TBD>, tmpdir=<TBD>, slurm_account=<TBD>


[Fri Nov  1 13:57:51 2024]
rule all_a:
    input: test/test_file-a-0.txt, test/test_file-a-1.txt, test/test_file-a-2.txt, test/test_file-a-3.txt, test/test_file-a-4.txt, test/test_file-a-5.txt, test/test_file-a-6.txt, test/test_file-a-7.txt, test/test_file-a-8.txt, test/test_file-a-9.txt
    jobid: 0
    reason: Forced execution
    resources: mem_mb=<TBD>, disk_mb=<TBD>, tmpdir=<TBD>, slurm_account=<TBD>

On the other hand, if we remove the group directive and run the same command in the terminal, we get:

$ snakemake --executor slurm -R all_a

Job stats:
job      count
-----  -------
a           10
all_a        1
total       11

Select jobs to execute...
Execute 10 jobs...

[Fri Nov  1 14:05:23 2024]
localrule a:
    input: test
    output: test/test_file-a-4.txt
    jobid: 5
    reason: Missing output files: test/test_file-a-4.txt
    wildcards: i=4
    resources: mem_mb=1000, mem_mib=954, disk_mb=1000, disk_mib=954, tmpdir=/tmp, slurm_account=bczk-delta-gpu

[Fri Nov  1 14:05:23 2024]
localrule all_a:
    input: test/test_file-a-0.txt, test/test_file-a-1.txt, test/test_file-a-2.txt, test/test_file-a-3.txt, test/test_file-a-4.txt, test/test_file-a-5.txt, test/test_file-a-6.txt, test/test_file-a-7.txt, test/test_file-a-8.txt, test/test_file-a-9.txt
    jobid: 0
    reason: Forced execution
    resources: mem_mb=1000, mem_mib=954, disk_mb=1000, disk_mib=954, tmpdir=/tmp, slurm_account=bczk-delta-gpu

indicating that the rules are now indeed recognized as localrules.
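
In other words, simply dropping the group directive restores the expected behaviour, so the working variant of rule a from the Snakefile above looks like this:

rule a:
    input:
        directory(DIRECTORY)
    output:
        DIRECTORY + "/test_file-a-{i}.txt"
    localrule: True
    shell:
        "touch {output[0]}"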

PS: as you can probably see from the copy/pasted output, the actual commands I used also specified some resources and partitions. That is how I found the bug in the first place: simple CPU-only steps ended up being submitted to a GPU partition, even though SLURM is only supposed to be used by the GPU-using rules in my pipeline.
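
For context, the intended split in my pipeline looks roughly like this; a simplified sketch in which the rule names, scripts, partition, and resource values are made up for illustration:

# GPU-heavy work is supposed to be submitted to SLURM on a GPU partition
rule train_model:
    output:
        "model.pt"
    resources:
        slurm_partition="gpuA100x4",  # hypothetical partition name
        mem_mb=16000
    shell:
        "python train.py --out {output}"

# lightweight CPU steps are supposed to stay local on the head node
rule summarize:
    input:
        "model.pt"
    output:
        "summary.txt"
    localrule: True
    group:
        "local"
    shell:
        "python summarize.py {input} > {output}"

With the bug described above, the second kind of rule also ends up being submitted to SLURM as soon as it carries a group directive.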

Thanks a lot already,

Cheers,
Noah ☀️
