
Commit ce724a7

docs: improve getting started (#1553)
* docs: improve getting started
* cleanups
* Use dockerhub links
* Shrink command to minimum

Signed-off-by: Ettore Di Giacinto <[email protected]>
1 parent 0a06c80 commit ce724a7

4 files changed (+73 / -220 lines)


README.md

Lines changed: 3 additions & 0 deletions
```diff
@@ -20,6 +20,9 @@
 </a>
 </p>
 
+[<img src="https://img.shields.io/badge/dockerhub-images-important.svg?logo=Docker">](https://hub.docker.com/r/localai/localai)
+[<img src="https://img.shields.io/badge/quay.io-images-important.svg?">](https://quay.io/repository/go-skynet/local-ai?tab=tags&tag=latest)
+
 > :bulb: Get help - [❓FAQ](https://localai.io/faq/) [💭Discussions](https://github.com/go-skynet/LocalAI/discussions) [:speech_balloon: Discord](https://discord.gg/uJAeKSAGDy) [:book: Documentation website](https://localai.io/)
 >
 > [💻 Quickstart](https://localai.io/basics/getting_started/) [📣 News](https://localai.io/basics/news/) [ 🛫 Examples ](https://github.com/go-skynet/LocalAI/tree/master/examples/) [ 🖼️ Models ](https://localai.io/models/) [ 🚀 Roadmap ](https://github.com/mudler/LocalAI/issues?q=is%3Aissue+is%3Aopen+label%3Aroadmap)
```
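
The new badges point at the published container images. As a quick sanity check, a reader could pull one directly; the image repositories below are taken from the badge URLs, while the `latest` tag is an assumption (see the registry pages for the available tags):

```bash
# Pull from Docker Hub (repository from the badge above; tag assumed)
docker pull localai/localai:latest

# Or pull from quay.io (repository from the badge above; tag assumed)
docker pull quay.io/go-skynet/local-ai:latest
```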

docs/content/_index.en.md

Lines changed: 3 additions & 0 deletions
```diff
@@ -18,6 +18,9 @@ title = "LocalAI"
 </a>
 </p>
 
+[<img src="https://img.shields.io/badge/dockerhub-images-important.svg?logo=Docker">](https://hub.docker.com/r/localai/localai)
+[<img src="https://img.shields.io/badge/quay.io-images-important.svg?">](https://quay.io/repository/go-skynet/local-ai?tab=tags&tag=latest)
+
 > 💡 Get help - [❓FAQ](https://localai.io/faq/) [❓How tos](https://localai.io/howtos/) [💭Discussions](https://github.com/go-skynet/LocalAI/discussions) [💭Discord](https://discord.gg/uJAeKSAGDy)
 >
 > [💻 Quickstart](https://localai.io/basics/getting_started/) [📣 News](https://localai.io/basics/news/) [ 🛫 Examples ](https://github.com/go-skynet/LocalAI/tree/master/examples/) [ 🖼️ Models ](https://localai.io/models/) [ 🚀 Roadmap ](https://github.com/mudler/LocalAI/issues?q=is%3Aissue+is%3Aopen+label%3Aroadmap)
```

docs/content/advanced/_index.en.md

Lines changed: 30 additions & 0 deletions
````diff
@@ -365,6 +365,36 @@ docker run --env REBUILD=true localai
 docker run --env-file .env localai
 ```
 
+### CLI parameters
+
+You can control LocalAI with command-line arguments, for example to specify the bind address or the number of threads.
+
+
+| Parameter | Environment Variable | Default Value | Description |
+| ------------------------------ | ----------------------- | ------------- | ----------- |
+| --f16 | $F16 | false | Enable f16 mode |
+| --debug | $DEBUG | false | Enable debug mode |
+| --cors | $CORS | false | Enable CORS support |
+| --cors-allow-origins value | $CORS_ALLOW_ORIGINS | | Specify origins allowed for CORS |
+| --threads value | $THREADS | 4 | Number of threads to use for parallel computation |
+| --models-path value | $MODELS_PATH | ./models | Path to the directory containing models used for inferencing |
+| --preload-models value | $PRELOAD_MODELS | | List of models to preload at startup, in JSON format |
+| --preload-models-config value | $PRELOAD_MODELS_CONFIG | | Path to a YAML config file listing models to apply at startup |
+| --config-file value | $CONFIG_FILE | | Path to the config file |
+| --address value | $ADDRESS | :8080 | Bind address for the API server |
+| --image-path value | $IMAGE_PATH | | Path to the directory used to store generated images |
+| --context-size value | $CONTEXT_SIZE | 512 | Default context size of the model |
+| --upload-limit value | $UPLOAD_LIMIT | 15 | Default upload limit in megabytes (audio file upload) |
+| --galleries | $GALLERIES | | Allows galleries to be set from the command line |
+| --parallel-requests | $PARALLEL_REQUESTS | false | Enable backends to handle multiple requests in parallel. This is for backends that support multiple requests in parallel, such as llama.cpp or vllm |
+| --single-active-backend | $SINGLE_ACTIVE_BACKEND | false | Allow only one backend to be running |
+| --api-keys value | $API_KEY | empty | List of API keys to enable API authentication. When this is set, all requests must be authenticated with one of these API keys |
+| --enable-watchdog-idle | $WATCHDOG_IDLE | false | Enable watchdog for stopping idle backends. This will stop the backends if they stay idle for too long |
+| --enable-watchdog-busy | $WATCHDOG_BUSY | false | Enable watchdog for stopping busy backends that exceed a defined threshold |
+| --watchdog-busy-timeout value | $WATCHDOG_BUSY_TIMEOUT | 5m | Threshold after which a busy backend is stopped by the watchdog |
+| --watchdog-idle-timeout value | $WATCHDOG_IDLE_TIMEOUT | 15m | Threshold after which an idle backend is stopped by the watchdog |
+| --preload-backend-only | $PRELOAD_BACKEND_ONLY | false | If set, the API is not launched and only the preloaded models/backends are started. This is intended for multi-node setups |
+| --external-grpc-backends | $EXTERNAL_GRPC_BACKENDS | none | Comma-separated list of external gRPC backends to use. Format: `name:host:port` or `name:/path/to/file` |
 
 
 ### Extra backends
````
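
The flags and environment variables in the added table map one-to-one, so the same configuration can be expressed either way. A minimal sketch of the two styles follows; the `local-ai` binary name, the model directory layout, and the `localai/localai:latest` image tag are assumptions for illustration:

```bash
# Configure via command-line flags when running the binary directly
# (flag names and defaults taken from the table above)
local-ai --address ":8080" --threads 4 --models-path ./models --context-size 512

# The equivalent configuration via environment variables in a container
# (image tag and mount path are assumptions)
docker run -p 8080:8080 \
  -e THREADS=4 -e CONTEXT_SIZE=512 -e MODELS_PATH=/models \
  -v $PWD/models:/models \
  localai/localai:latest
```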
