Replies: 1 comment
Streaming depends on the LLM adapter in LangChain; it works with most providers and also with local runners. I guess we need to check the Gemini LLM config and whether it supports streaming. Leaving this issue open as I have no time to check this.
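As a rough sketch of what adapter-level streaming looks like: LangChain chat models expose a `.stream()` method that yields chunks as the provider produces them, and whether this works for Gemini is exactly the open question above. The stub generator below stands in for a real adapter so the consumer side can be shown without an API key.

```python
from typing import Iterator

def fake_llm_stream(prompt: str) -> Iterator[str]:
    # Stand-in for a streaming adapter such as LangChain's llm.stream(prompt);
    # a real adapter yields text chunks as the provider generates them.
    for token in ["The ", "Cat ", "answers ", "token ", "by ", "token."]:
        yield token

def relay_stream(chunks: Iterator[str]) -> str:
    # The UI can render each chunk as it arrives (e.g. push it over a
    # websocket) instead of waiting for the complete response.
    parts = []
    for chunk in chunks:
        parts.append(chunk)  # forward chunk to the client here
    return "".join(parts)

print(relay_stream(fake_llm_stream("hello")))
```

If the Gemini adapter supports `.stream()`, the stub is replaced by the real model and the consumer loop stays the same.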
I'm using Cheshire Cat with Gemini 2.5, which is very slow when it generates a lot of text. Is there a way to receive the generated tokens a bit at a time, like ChatGPT does?
This would let the user start reading the response before Cheshire Cat has finished writing it.