
Commit a3c6e5e

Change wording for -nvidia flag and added more context (#704)
1 parent 7206a36 commit a3c6e5e

File tree

1 file changed (+3, -1)


ai/orchestrators/start-orchestrator.mdx

@@ -93,7 +93,7 @@ Please follow the steps below to start your **combined AI orchestrator** node.
 -orchestrator \
 -serviceAddr 0.0.0.0:8936 \
 -v 6 \
--nvidia "all" \
+-nvidia 0 \
 -aiWorker \
 -aiModels /root/.lpData/aiModels.json \
 -aiModelsDir ~/.lpData/models \
@@ -109,6 +109,8 @@ Please follow the steps below to start your **combined AI orchestrator** node.
 
 Moreover, the `--network host` flag facilitates communication between the AI Orchestrator and the AI Runner container.
 
+Lastly, the `-nvidia` flag can be configured in a few ways. Use a comma-separated list of GPU IDs (e.g. `0,1`) to activate specific GPU slots; each GPU will need its own config item in `aiModels.json`. Alternatively, use `"all"` to activate all GPUs on the machine with a single model loaded in `aiModels.json` (Warning: if GPUs with different RAM sizes are installed, containers with less than the required RAM may fail).
+
 <Warning>Please note that since we use [docker-out-of-docker](https://tdongsi.github.io/blog/2017/04/23/docker-out-of-docker/), the `aiModelsDir` path should be defined as being on the host machine.</Warning>
 </Step>
 <Step title="Confirm Successful Startup of the AI Orchestrator">
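The two accepted forms of the `-nvidia` value described in the added paragraph (a comma-separated list of GPU IDs, or the literal `"all"`) can be illustrated with a small validation sketch. This is a hypothetical helper for illustration only, not code from the Livepeer `go-livepeer` codebase:

```python
def parse_nvidia_flag(value: str):
    """Interpret a -nvidia flag value.

    Returns the string "all" when every GPU on the machine should be
    activated, otherwise a list of specific GPU slot indices. Each
    index in the list would need its own config item in aiModels.json.
    """
    # Tolerate shell-style quoting, e.g. the documented -nvidia "all"
    value = value.strip().strip('"')
    if value == "all":
        return "all"
    try:
        # Comma-separated list of GPU slots, e.g. "0,1"
        return [int(gpu) for gpu in value.split(",")]
    except ValueError:
        raise ValueError(f"invalid -nvidia value: {value!r}")


print(parse_nvidia_flag('"all"'))  # activate all GPUs, single model loaded
print(parse_nvidia_flag("0,1"))    # activate GPU slots 0 and 1
```

The commit above changes the documented example from `-nvidia "all"` to `-nvidia 0`, i.e. from the all-GPUs form to a single specific slot.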
