docs/llm-providers.md

```shell
uv run oss-bugfind-crs build example_configs/test-local json-c
# Should say hello from LLM
uv run oss-bugfind-crs run example_configs/test-local json-c json_array_fuzzer
```

# Known Issues with Aliasing Models

Some recent models, such as GPT-5, no longer accept the `temperature` parameter. This can cause silent failures if you swap model backends in LiteLLM while keeping the original model name (e.g., the model name is set to `gpt-4o` but the relay is configured to route to `gpt-5`): the `temperature` parameter is passed through unchanged and the backend API rejects the request.
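
One way to guard against this is LiteLLM's `drop_params` option, which strips parameters the target model does not support before the request is sent. A minimal sketch of a proxy `model_list` entry (the alias and backend names here are illustrative, matching the example above):

```yaml
model_list:
  - model_name: gpt-4o          # alias that callers continue to use
    litellm_params:
      model: openai/gpt-5       # actual backend the relay routes to
      drop_params: true         # silently drop unsupported params (e.g. temperature)
```

With `drop_params` enabled, a caller that still sends `temperature` will not trigger a rejection from the GPT-5 backend; the parameter is simply discarded.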