
Commit 388373a

committed
Making GPU templates more resilient to YAML syntax errors
1 parent f6fc6a0 commit 388373a

File tree

2 files changed: +8 −4 lines changed


templates/ai/usage_scenario.yml

Lines changed: 6 additions & 3 deletions

@@ -14,22 +14,25 @@ flow:
       container: gcb-ai-model
       commands:
         - type: console
-          command: ollama pull '__GMT_VAR_MODEL__'
+          command: |
+            ollama pull '__GMT_VAR_MODEL__'
           read-notes-stdout: true
           log-stdout: true

     - name: Load model into memory
       container: gcb-ai-model
       commands:
         - type: console
-          command: ollama run '__GMT_VAR_MODEL__' ''
+          command: |
+            ollama run '__GMT_VAR_MODEL__' ''
           read-notes-stdout: true
           log-stdout: true

     - name: Run Inference
       container: gcb-ai-model
       commands:
         - type: console
-          command: ollama run '__GMT_VAR_MODEL__' '__GMT_VAR_PROMPT__'
+          command: |
+            ollama run '__GMT_VAR_MODEL__' '__GMT_VAR_PROMPT__'
           read-notes-stdout: true
           log-stdout: true

templates/ai/usage_scenario_gpu.yml

Lines changed: 2 additions & 1 deletion

@@ -15,6 +15,7 @@ flow:
       container: gcb-ai-model
       commands:
         - type: console
-          command: ollama run '__GMT_VAR_MODEL__' '__GMT_VAR_PROMPT__'
+          command: |
+            ollama run '__GMT_VAR_MODEL__' '__GMT_VAR_PROMPT__'
           read-notes-stdout: true
           log-stdout: true
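The switch from an inline scalar to a YAML block scalar (`command: |`) matters because the `__GMT_VAR_*` placeholders are substituted with user-supplied text before the file is parsed: a prompt containing a YAML-significant sequence (for example a colon followed by a space) breaks an inline scalar, while a block scalar takes the whole indented line literally. A minimal sketch of the difference, using PyYAML; the sample model name and prompt are illustrative assumptions, not values from the commit:

```python
import yaml

# A perfectly ordinary prompt that happens to contain ": ",
# which YAML treats as the start of a nested mapping inline.
prompt = "Explain: how do GPU templates work?"

# Old style: inline scalar after variable substitution.
inline = f"command: ollama run 'llama3' '{prompt}'"

# New style: literal block scalar after variable substitution.
block = f"command: |\n  ollama run 'llama3' '{prompt}'"

try:
    yaml.safe_load(inline)
    inline_parsed = True
except yaml.YAMLError:
    inline_parsed = False  # the embedded ": " makes the line invalid YAML

doc = yaml.safe_load(block)          # parses cleanly
command = doc["command"].strip()     # the command line, taken literally
```

The block scalar is not bulletproof (a prompt containing a newline would still need escaping), but it removes the most common breakage from colons, quotes, and `#` characters in substituted values.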
