From 8921f0601a9a9169b2d7d48f182ed3bc4257147a Mon Sep 17 00:00:00 2001
From: Titan Node
Date: Mon, 27 Jan 2025 22:55:01 -0800
Subject: [PATCH] Change wording for `-nvidia` flag and add more context

---
 ai/orchestrators/start-orchestrator.mdx | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/ai/orchestrators/start-orchestrator.mdx b/ai/orchestrators/start-orchestrator.mdx
index faed9580..ae273b9f 100644
--- a/ai/orchestrators/start-orchestrator.mdx
+++ b/ai/orchestrators/start-orchestrator.mdx
@@ -93,7 +93,7 @@ Please follow the steps below to start your **combined AI orchestrator** node.
     -orchestrator \
     -serviceAddr 0.0.0.0:8936 \
     -v 6 \
-    -nvidia "all" \
+    -nvidia 0 \
     -aiWorker \
     -aiModels /root/.lpData/aiModels.json \
     -aiModelsDir ~/.lpData/models \
@@ -109,6 +109,8 @@ Please follow the steps below to start your **combined AI orchestrator** node.
 
   Moreover, the `--network host` flag facilitates communication between the AI
   Orchestrator and the AI Runner container.
 
+  Lastly, the `-nvidia` flag can be configured in a few ways. Use a comma-separated list of GPU indices, e.g. `0,1`, to activate specific GPU slots; each GPU will need its own config item in `aiModels.json`. Alternatively, use `"all"` to activate all GPUs on the machine with a single model loaded in `aiModels.json` (warning: if GPUs with different VRAM sizes are installed, containers may fail on cards with less than the required memory).
+
   Please note that since we use [docker-out-of-docker](https://tdongsi.github.io/blog/2017/04/23/docker-out-of-docker/),
   the `aiModelsDir` path should be defined as being on the host machine.
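
The paragraph this patch adds says that each GPU in a comma-separated `-nvidia` list needs its own config item in `aiModels.json`. As a sketch of what that means in practice, the hypothetical helper below (not part of the Livepeer tooling) expands a `-nvidia` value like `"0,1"` into one entry per GPU. The `pipeline`, `model_id`, and `warm` fields mirror the examples in the Livepeer AI docs; the per-entry `gpu` field is an assumption for illustration, so verify the exact field name against the current `aiModels.json` reference before using it:

```python
import json

def build_ai_models(nvidia_flag: str, pipeline: str, model_id: str) -> list[dict]:
    """Build one aiModels.json entry per GPU index in a comma-separated
    -nvidia value such as "0,1".

    NOTE: "gpu" is a hypothetical field name used here only to illustrate
    the one-entry-per-GPU layout; check the Livepeer aiModels.json
    reference for the real per-GPU configuration key.
    """
    return [
        {
            "pipeline": pipeline,
            "model_id": model_id,
            "warm": True,
            "gpu": int(idx),  # assumed field name, see note above
        }
        for idx in nvidia_flag.split(",")
    ]

# -nvidia 0,1 -> two entries, one per GPU slot
entries = build_ai_models("0,1", "text-to-image", "ByteDance/SDXL-Lightning")
print(json.dumps(entries, indent=2))
```

With `-nvidia "all"`, by contrast, a single model entry in `aiModels.json` is loaded on every GPU, which is why mixed-VRAM machines can see container failures on the smaller cards.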