Commit e55776d

docs: Update README examples for Docker setup and LLM configuration
Signed-off-by: Eden Reich <[email protected]>
1 parent cfcf147 commit e55776d

File tree

1 file changed: 8 additions, 6 deletions


examples/README.md

Lines changed: 8 additions & 6 deletions
````diff
@@ -4,17 +4,19 @@ Before starting with the examples, ensure you have the inference-gateway up and
 
 1. Copy the `.env.example` file to `.env` and set your provider key.
 
-2. Set your preferred Large Language Model (LLM) provider for the examples:
+2. Run the Docker container:
 
-```sh
-export LLM_NAME=groq/meta-llama/llama-4-scout-17b-16e-instruct
+```
+docker run --rm -it -p 8080:8080 --env-file .env -e $LLM_NAME ghcr.io/inference-gateway/inference-gateway:latest
 ```
 
-3. Run the Docker container:
+3. In a new terminal, set your preferred Large Language Model (LLM) provider for the examples:
 
+```sh
+export LLM_NAME=groq/meta-llama/llama-4-scout-17b-16e-instruct
 ```
-docker run --rm -it -p 8080:8080 --env-file .env -e $LLM_NAME ghcr.io/inference-gateway/inference-gateway:0.7.1
-```
+
+And cd into the specific examples.
 
 Recommended is to set the environment variable `ENVIRONMENT=development` in your `.env` file to enable debug mode.
````
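For reference, the updated README steps amount to roughly the following session. This is a sketch, not part of the commit; the example directory name is a hypothetical placeholder for any directory under `examples/`.

```shell
# Terminal 1: start the gateway (the diff switches the image tag from
# the pinned 0.7.1 to :latest), reading provider keys from .env.
docker run --rm -it -p 8080:8080 --env-file .env \
  -e $LLM_NAME ghcr.io/inference-gateway/inference-gateway:latest

# Terminal 2: choose the model for the examples, then enter one of them.
export LLM_NAME=groq/meta-llama/llama-4-scout-17b-16e-instruct
cd examples/<example-name>   # hypothetical placeholder
```

Note that in the updated ordering the container is started with `-e $LLM_NAME` before the variable is exported in the second terminal, so for the container itself to see the value, the export would need to happen before `docker run` (or `LLM_NAME` would need to live in `.env`).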
