diff --git a/ai/orchestrators/start-orchestrator.mdx b/ai/orchestrators/start-orchestrator.mdx
index faed9580..ae273b9f 100644
--- a/ai/orchestrators/start-orchestrator.mdx
+++ b/ai/orchestrators/start-orchestrator.mdx
@@ -93,7 +93,7 @@ Please follow the steps below to start your **combined AI orchestrator** node.
-orchestrator \
-serviceAddr 0.0.0.0:8936 \
-v 6 \
- -nvidia "all" \
+ -nvidia 0 \
-aiWorker \
-aiModels /root/.lpData/aiModels.json \
-aiModelsDir ~/.lpData/models \
@@ -109,6 +109,8 @@ Please follow the steps below to start your **combined AI orchestrator** node.
Moreover, the `--network host` flag facilitates communication between the AI Orchestrator and the AI Runner container.
+ Lastly, the `-nvidia` flag can be configured in a few ways. Use a comma-separated list of GPU IDs (e.g. `0,1`) to activate specific GPU slots; each GPU then needs its own entry in `aiModels.json`. Alternatively, use `"all"` to activate every GPU on the machine with a single model entry in `aiModels.json` (warning: if the installed GPUs have different VRAM sizes, containers may fail on GPUs with less than the required VRAM).
+
Please note that since we use [docker-out-of-docker](https://tdongsi.github.io/blog/2017/04/23/docker-out-of-docker/), the `aiModelsDir` path should be defined as being on the host machine.