
Remove thinking from the LM output#377

Open
akdjka wants to merge 1 commit into stanford-oval:main from akdjka:main

Conversation


akdjka commented on Jul 7, 2025

Some local models (like Qwen3) get confused when their own thinking is injected back into subsequent prompts. This PR strips the thinking from the output for LiteLLM models.

Note that the same fix could be applied to the other model clients as well, but since they are marked as deprecated I skipped them.
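For illustration, a minimal sketch of the idea, assuming the reasoning is emitted inside `<think>...</think>` tags (as Qwen3 does) and that the cleaned text is what gets stored in the conversation history; the function name is hypothetical, not the actual diff:

```python
import re

# Matches a <think>...</think> block, including newlines inside it.
THINK_BLOCK = re.compile(r"<think>.*?</think>\s*", flags=re.DOTALL)

def strip_thinking(text: str) -> str:
    """Remove reasoning blocks so they are not fed back to the model
    in subsequent prompts."""
    return THINK_BLOCK.sub("", text).strip()

# Example: raw completion from a reasoning model like Qwen3
raw = "<think>The user wants a short outline; keep it terse.</think>1. Introduction\n2. Background"
print(strip_thinking(raw))
# -> "1. Introduction\n2. Background"
```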

