Commit 4b2d68f
fix: use env to configure vLLM (#49)
# What does this PR do?
This PR allows running LLS without the `vLLM` provider and makes the `vLLM` URL configurable through environment variables.
Currently, the default `run.yaml` config requires `vLLM`. This behavior is not always correct: when other providers are configured, a running vLLM instance is _not_ required.
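The env-driven configuration described above could look something like the following `run.yaml` fragment (a sketch only; the `${env.VLLM_URL}` substitution syntax and the key names are assumptions, not the PR's exact contents):

```yaml
# Sketch: vLLM URL comes from the environment, empty unless set.
# No hardcoded localhost default, so nothing dials a local endpoint by accident.
providers:
  inference:
    - provider_id: vllm
      url: ${env.VLLM_URL:}
```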
cc @leseb @derekhiggins
## Summary by CodeRabbit
- New Features
- Conditional activation of the VLLM inference provider and related models based on environment variables for opt-in usage.
- Bug Fixes
- Avoids unintended connections to a localhost inference endpoint by removing hardcoded default URLs.
- Chores
- Simplified configuration defaults for inference and evaluation endpoints (empty unless set), and a fallback model ID to ensure predictable startup.
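The opt-in behavior summarized above can be sketched in Python (illustrative only; `VLLM_URL` and the provider-dict shape are assumed names for this example, not the PR's actual identifiers):

```python
import os


def build_inference_providers() -> list[dict]:
    """Enable the vLLM provider only when its URL is supplied via env.

    With no VLLM_URL set, the provider list stays empty, so startup
    never attempts an unintended connection to a localhost endpoint.
    """
    providers: list[dict] = []
    vllm_url = os.environ.get("VLLM_URL", "")  # empty unless explicitly set
    if vllm_url:
        providers.append({"provider_id": "vllm", "url": vllm_url})
    return providers
```

Keeping the default empty and requiring an explicit env var makes the provider opt-in, matching the PR's goal of not forcing a running vLLM instance.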
Approved-by: nathan-weinberg
Approved-by: derekhiggins

1 file changed: +5, −5 lines changed