Commit 7ec6568
differences for PR #489
1 parent a2f4dc7 commit 7ec6568

File tree

17 files changed (+271 −162 lines)


12-cluster.md

Lines changed: 1 addition & 2 deletions
Original file line number | Diff line number | Diff line change
@@ -311,8 +311,7 @@ each resource is.
311311
The local filesystems (ext, tmp, xfs, zfs) will depend on whether you're
312312
on the same login node (or compute node, later on). Networked filesystems
313313
(beegfs, cifs, gpfs, nfs, pvfs) will be similar --- but may include
314-
yourUsername, depending on how it is [mounted](
315-
https://en.wikipedia.org/wiki/Mount_(computing)).
314+
`/usr`, depending on how it is [mounted](https://en.wikipedia.org/wiki/Mount_(computing)).
316315
:::
317316

318317
::: callout

13-scheduler.md

Lines changed: 5 additions & 5 deletions
@@ -170,7 +170,7 @@ following the `#SBATCH` comment is interpreted as an
170170
instruction to the scheduler.
171171

172172
Let's illustrate this by example. By default, a job's name is the name of the
173-
script, but the `-J` option can be used to change the
173+
script, but the `--job-name` option can be used to change the
174174
name of a job. Add an option to the script:
175175

176176
```bash
@@ -179,7 +179,7 @@ name of a job. Add an option to the script:
179179

180180
```bash
181181
#!/bin/bash
182-
#SBATCH -Jhello-world
182+
#SBATCH --job-name hello-world
183183

184184
echo -n "This script is running on "
185185
hostname
@@ -253,7 +253,7 @@ for it on the cluster.
253253

254254
```bash
255255
#!/bin/bash
256-
#SBATCH -t 00:01 # timeout in HH:MM
256+
#SBATCH --time 00:01 # timeout in HH:MM
257257

258258
echo -n "This script is running on "
259259
sleep 20 # time in seconds
@@ -282,8 +282,8 @@ wall time, and attempt to run a job for two minutes.
282282

283283
```bash
284284
#!/bin/bash
285-
#SBATCH -Jlong_job
286-
#SBATCH -t 00:01 # timeout in HH:MM
285+
#SBATCH --job-name long_job
286+
#SBATCH --time 00:01 # timeout in HH:MM
287287

288288
echo "This script is running on ... "
289289
sleep 240 # time in seconds
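For reference, a fully corrected version of the hello-world script from the hunks above would read as follows. This is a sketch only: the `=` separator shown here is an equally valid alternative to the space-separated form, and the one-minute limit is illustrative.

```shell
#!/bin/bash
# Long-form Slurm options need a separator between flag and value:
# '--job-name hello-world' or '--job-name=hello-world'. Only the short
# form may be joined directly to its value, as in '-Jhello-world'.
#SBATCH --job-name=hello-world
#SBATCH --time=00:01:00

node="$(hostname)"
echo "This script is running on ${node}"
```

Submitted with `sbatch`, Slurm reads the `#SBATCH` comment lines as directives; run directly with `bash`, they are ignored as ordinary comments, which makes the script easy to test locally.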

14-environment-variables.md

Lines changed: 1 addition & 1 deletion
@@ -210,7 +210,7 @@ job was submitted.
210210

211211
```output
212212
#!/bin/bash
213-
#SBATCH -t 00:00:30
213+
#SBATCH --time 00:00:30
214214
215215
echo -n "This script is running on "
216216
hostname

15-modules.md

Lines changed: 2 additions & 1 deletion
@@ -381,7 +381,8 @@ compute node).
381381
```output
382382
#!/bin/bash
383383
#SBATCH
384-
r config$sched$comment` -t 00:00:30
384+
385+
#SBATCH --time 00:00:30
385386
386387
module load Python
387388

17-parallel.md

Lines changed: 6 additions & 6 deletions
@@ -172,8 +172,8 @@ Create a submission file, requesting one task on a single node, then launch it.
172172

173173
```bash
174174
#!/bin/bash
175-
#SBATCH -J solo-job
176-
#SBATCH -p cpubase_bycore_b1
175+
#SBATCH --job-name solo-job
176+
#SBATCH --partition cpubase_bycore_b1
177177
#SBATCH -N 1
178178
#SBATCH -n 1
179179

@@ -294,8 +294,8 @@ Let's modify the job script to request more cores and use the MPI run-time.
294294

295295
```bash
296296
#!/bin/bash
297-
#SBATCH -J parallel-job
298-
#SBATCH -p cpubase_bycore_b1
297+
#SBATCH --job-name parallel-job
298+
#SBATCH --partition cpubase_bycore_b1
299299
#SBATCH -N 1
300300
#SBATCH -n 4
301301

@@ -411,8 +411,8 @@ code gets.
411411

412412
```bash
413413
#!/bin/bash
414-
#SBATCH -J parallel-job
415-
#SBATCH -p cpubase_bycore_b1
414+
#SBATCH --job-name parallel-job
415+
#SBATCH --partition cpubase_bycore_b1
416416
#SBATCH -N 1
417417
#SBATCH -n 8
418418

18-resources.md

Lines changed: 4 additions & 4 deletions
@@ -87,7 +87,7 @@ To get info about a specific job (for example, 347087), we change command
8787
slightly.
8888

8989
```bash
90-
[yourUsername@login1 ~]$ sacct -u yourUsername -l -j 347087
90+
[yourUsername@login1 ~]$ sacct -u yourUsername --long --jobs 347087
9191
```
9292

9393
It will show a lot of info; in fact, every single piece of info collected on
@@ -96,7 +96,7 @@ information to `less` to make it easier to view (use the left and right arrow
9696
keys to scroll through fields).
9797

9898
```bash
99-
[yourUsername@login1 ~]$ sacct -u yourUsername -l -j 347087 | less -S
99+
[yourUsername@login1 ~]$ sacct -u yourUsername --long --jobs 347087 | less -S
100100
```
101101

102102
:::::::::::::::::::::::::::::::::::::: discussion
@@ -132,7 +132,7 @@ get your job dispatched earlier.
132132
Edit `parallel_job.sh` to set a better time estimate. How close can
133133
you get?
134134

135-
Hint: use `-t`.
135+
Hint: use `--time`.
136136

137137
::::::::::::::: solution
138138

@@ -142,7 +142,7 @@ The following line tells Slurm that our job should
142142
finish within 2 minutes:
143143

144144
```bash
145-
#SBATCH -t 00:02:00
145+
#SBATCH --time 00:02:00
146146
```
147147

148148
:::::::::::::::::::::::::

config.yaml

Lines changed: 0 additions & 119 deletions
This file was deleted.

files/customization/Ghastly_Mistakes/_config_options.yml

Lines changed: 7 additions & 1 deletion
@@ -7,6 +7,11 @@
77
# chain-loader, per @tobyhodges' suggestion.
88
#
99
# Compute irresponsibly.
10+
#
11+
# Use the HPC_CARPENTRY_CUSTOMIZATION variable to invoke these:
12+
#
13+
# > HPC_CARPENTRY_CUSTOMIZATION=episodes/files/customization/Ghastly_Mistakes/_config_options.yml
14+
# > export HPC_CARPENTRY_CUSTOMIZATION
1015
---
1116

1217
snippets: "Ghastly_Mistakes"
@@ -22,7 +27,8 @@ remote:
2227
host: "castle"
2328
node: "turtle"
2429
location: "World 8-4"
25-
homedir: "/darkland"
30+
fs:
31+
home: "/darkland"
2632
user: "luigi"
2733
module_python3: "Boa"
2834
prompt: "luigi@castle:~$"

files/customization/HPCC_MagicCastle_slurm/_config_options.yml

Lines changed: 18 additions & 5 deletions
@@ -9,6 +9,11 @@
99
# account, please visit <cluster.hpc-carpentry.org>.
1010
#
1111
# Compute responsibly.
12+
#
13+
# Use the HPC_CARPENTRY_CUSTOMIZATION variable to invoke these:
14+
#
15+
# > HPC_CARPENTRY_CUSTOMIZATION=episodes/files/customization/HPCC_MagicCastle_slurm/_config_options.yml
16+
# > export HPC_CARPENTRY_CUSTOMIZATION
1217
---
1318

1419
snippets: "HPCC_MagicCastle_slurm"
@@ -24,11 +29,14 @@ remote:
2429
host: "login1"
2530
node: "smnode1"
2631
location: "cluster.hpc-carpentry.org"
27-
homedir: "/home"
2832
user: "yourUsername"
2933
module_python3: "Python"
3034
prompt: "[yourUsername@login1 ~]$"
3135
shebang: "#!/bin/bash"
36+
37+
fs:
38+
home: "/home"
39+
3240
modules:
3341
python: "Python"
3442

@@ -45,10 +53,15 @@ sched:
4553
flag:
4654
user: "-u yourUsername"
4755
interactive: ""
48-
histdetail: "-l -j"
49-
name: "-J"
50-
time: "-t"
51-
queue: "-p"
56+
histdetail: "--long --jobs"
57+
name: "--job-name"
58+
account: "--account"
59+
array: "--array"
60+
time: "--time"
61+
queue: "--partition"
62+
nodes: "--nodes"
63+
tasks: "--tasks"
64+
threads: "--cpus-per-task"
5265
del: "scancel"
5366
interactive: "srun"
5467
info: "sinfo"
Lines changed: 33 additions & 0 deletions
@@ -0,0 +1,33 @@
1+
```output
2+
1 a
3+
1 and
4+
1 be
5+
1 can
6+
1 count
7+
1 in
8+
1 is
9+
1 it
10+
1 look
11+
1 most
12+
1 often
13+
1 our
14+
1 out
15+
1 script
16+
1 see
17+
1 small
18+
1 some
19+
1 them
20+
1 to
21+
1 trying
22+
1 useful
23+
1 very
24+
1 we
25+
1 which
26+
1 will
27+
2 are
28+
2 file
29+
2 for
30+
2 this
31+
3 repeated
32+
3 words
33+
```
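The frequency listing added above has the shape of `uniq -c` output. A minimal sketch of the kind of pipeline that produces such counts (the sample text here is invented for illustration, not taken from the lesson's input file):

```shell
# Split text into one word per line, group duplicates, count them,
# and sort numerically so the most frequent words appear last.
printf 'this file this file repeated repeated repeated words words words\n' |
  tr -s ' ' '\n' | sort | uniq -c | sort -n
```

The `sort` before `uniq -c` matters: `uniq` only collapses adjacent duplicate lines, so unsorted input would undercount repeated words.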
