> 💡 Get help - [❓FAQ](https://localai.io/faq/) [❓How tos](https://localai.io/howtos/) [💭Discussions](https://github.com/go-skynet/LocalAI/discussions) [💬Discord](https://discord.gg/uJAeKSAGDy) [📖Documentation website](https://localai.io/)
| Parameter | Environment variable | Default | Description |
|-----------|-----------------------|---------|-------------|
| --cors-allow-origins value | $CORS_ALLOW_ORIGINS | | Specify origins allowed for CORS |
| --threads value | $THREADS | 4 | Number of threads to use for parallel computation |
| --models-path value | $MODELS_PATH | ./models | Path to the directory containing models used for inferencing |
| --preload-models value | $PRELOAD_MODELS | | List of models to preload at startup, in JSON format |
| --preload-models-config value | $PRELOAD_MODELS_CONFIG | | Path to a YAML config file with a list of models to apply at startup |
| --config-file value | $CONFIG_FILE | | Path to the config file |
| --address value | $ADDRESS | :8080 | Bind address for the API server |
| --image-path value | $IMAGE_PATH | | Path to the directory used to store generated images |
| --context-size value | $CONTEXT_SIZE | 512 | Default context size of the model |
| --upload-limit value | $UPLOAD_LIMIT | 15 | Default upload limit in megabytes (audio file uploads) |
| --galleries | $GALLERIES | | Allows setting galleries from the command line |
| --parallel-requests | $PARALLEL_REQUESTS | false | Enable backends to handle multiple requests in parallel, for backends that support it (e.g. llama.cpp or vllm) |
| --single-active-backend | $SINGLE_ACTIVE_BACKEND | false | Allow only one backend to be running at a time |
| --api-keys value | $API_KEY | | List of API keys to enable API authentication. When set, all requests must be authenticated with one of these keys |
| --enable-watchdog-idle | $WATCHDOG_IDLE | false | Enable a watchdog that stops backends that have been idle for too long |
| --enable-watchdog-busy | $WATCHDOG_BUSY | false | Enable a watchdog that stops busy backends that exceed a defined threshold |
| --watchdog-busy-timeout value | $WATCHDOG_BUSY_TIMEOUT | 5m | Busy watchdog timeout: a backend busy for longer than this is stopped |
| --watchdog-idle-timeout value | $WATCHDOG_IDLE_TIMEOUT | 15m | Idle watchdog timeout: a backend idle for longer than this is stopped |
| --preload-backend-only | $PRELOAD_BACKEND_ONLY | false | If set, the API is NOT launched; only the preloaded models/backends are started. Intended for multi-node setups |
| --external-grpc-backends | $EXTERNAL_GRPC_BACKENDS | none | Comma-separated list of external gRPC backends to use. Format: `name:host:port` or `name:/path/to/file` |
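As a quick, non-exhaustive sketch, the example below combines a handful of the flags above and then shows the equivalent environment-variable form. It assumes the binary is named `local-ai`; the paths, API key, and external backend address are placeholders, not recommended values:

```bash
# Hypothetical invocation: paths, the API key, and the backend address are placeholders.
./local-ai \
  --models-path ./models \
  --context-size 1024 \
  --threads 8 \
  --address ":8080" \
  --api-keys "my-secret-key" \
  --external-grpc-backends "my-backend:127.0.0.1:9000"

# The same configuration expressed via the environment variables listed in the table:
MODELS_PATH=./models CONTEXT_SIZE=1024 THREADS=8 ADDRESS=":8080" \
  API_KEY="my-secret-key" EXTERNAL_GRPC_BACKENDS="my-backend:127.0.0.1:9000" \
  ./local-ai
```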