
How to submit .ksh scripts for WRFDA via sbatch on an HPC cluster without staying on the head node? #22

@haritha1022

Description

Hi,

I am currently working on generating background error (BE) statistics using WRFDA on an HPC cluster that uses SLURM for job submission.

I was able to successfully run gen_be_stage0, stage1, stage2, and stage3 by directly executing the .exe files through simple shell commands, even though some of those executions happened on the head node (which I understand is not recommended).
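
(Where I did run things on the head node, I assume I could instead have wrapped each executable in a one-off SLURM launch, roughly like the line below; the executable name and resource limits are just placeholders for whichever stage is being run.)

```ksh
# Hypothetical one-off launch on a compute node instead of the head node;
# the executable name and resources are placeholders for my setup.
srun --ntasks=1 --time=01:00:00 ./gen_be_stage2.exe
```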

However, I am stuck at stage4 (gen_be_stage4). I am unable to run this script properly, and I am unsure how to write an appropriate SLURM shell script for it. The wrapper script seems to have been written for an environment like LSF (e.g., using bsub or rsh), whereas I want to execute it via sbatch on compute nodes, not on the head node.
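
From what I can tell, the LSF-specific parts would need to be replaced by their SLURM counterparts, roughly as sketched below. This is only my guess at the mapping, not the actual contents of the script, and the executable name is what I expect it to be on my build.

```ksh
# My guess at the LSF -> SLURM mapping (not the actual script contents):
#   bsub < stage4_job      ->   sbatch stage4_job
#   rsh $node "command"    ->   srun --nodes=1 --ntasks=1 command
# e.g. launching the stage4 executable inside an sbatch allocation:
srun --ntasks=1 ./gen_be_stage4_regional.exe > gen_be_stage4_regional.log 2>&1
```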

Additionally, I have a main .ksh script (gen_be_wrapper.ksh) that calls another script (gen_be.ksh), which in turn calls gen_be_stage4_regional.ksh. These scripts execute several stages of the workflow, including calls to .exe binaries.

I want to submit this full workflow using sbatch, without staying on the head node or running interactively there. My goal is to just submit the job and let the compute nodes handle the execution.

Could you please advise:

  • How to structure these .ksh scripts for SLURM? (my draft attempt is included after this list)
  • What needs to be done differently to avoid head node usage?
  • Any best practices for running KornShell-based workflows with sbatch?
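
For context, this is the kind of top-level batch script I have drafted so far. The partition, module names, and resource limits are placeholders for my system, and I am not sure this is the right structure:

```ksh
#!/bin/ksh
#SBATCH --job-name=gen_be
#SBATCH --nodes=1
#SBATCH --ntasks=8
#SBATCH --time=06:00:00
#SBATCH --partition=compute        # placeholder partition name
#SBATCH --output=gen_be_%j.log

# Recreate the environment I normally set up interactively (placeholder modules)
module load intel netcdf

# Everything below runs on the compute node(s) allocated by SLURM,
# not on the head node.
cd $SLURM_SUBMIT_DIR
./gen_be_wrapper.ksh
```

The idea is that from the login node I would only run `sbatch` on this file and let the compute nodes do all the work.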

Thank you for your guidance!

gen_be_stage4_regional.docx

gen_be_wrapper.docx

gen_be.docx

Please find the attachments above.
