Replies: 1 comment
-
Using a custom model context for your agent helps: https://microsoft.github.io/autogen/stable/user-guide/agentchat-user-guide/tutorial/agents.html#using-model-context
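For example, a buffered model context keeps only the last few messages in what gets sent to the model. A minimal sketch based on that tutorial section (the model name and `buffer_size` here are placeholders):

```python
import asyncio

from autogen_agentchat.agents import AssistantAgent
from autogen_core.model_context import BufferedChatCompletionContext
from autogen_ext.models.openai import OpenAIChatCompletionClient


async def main() -> None:
    model_client = OpenAIChatCompletionClient(model="gpt-4.1")
    # Only the last 5 messages are included in each request to the model.
    agent = AssistantAgent(
        name="assistant",
        model_client=model_client,
        model_context=BufferedChatCompletionContext(buffer_size=5),
    )
    result = await agent.run(task="Summarize the conversation so far.")
    print(result.messages[-1].content)
    await model_client.close()


asyncio.run(main())
```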
-
I want to be able to prompt the agent to "continue" when it reaches its token limit. This is to support a use case of long text parsing/generation, where we occasionally hit the 32k output token limit on GPT-4.1.
Here's some code that consistently hits the token limit.
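Roughly, the setup is along these lines (a simplified sketch rather than the exact code; the model name, `max_tokens` value, and the task text are stand-ins):

```python
import asyncio

from autogen_agentchat.agents import AssistantAgent
from autogen_ext.models.openai import OpenAIChatCompletionClient


async def main() -> None:
    # max_tokens is a stand-in for GPT-4.1's 32k output cap.
    model_client = OpenAIChatCompletionClient(model="gpt-4.1", max_tokens=32768)
    agent = AssistantAgent(name="writer", model_client=model_client)
    # A long-form task whose output regularly exceeds the output-token limit.
    result = await agent.run(task="Parse and rewrite this very long document: ...")
    print(result.messages[-1].content)
    await model_client.close()


asyncio.run(main())
```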
By subclassing `AssistantAgent`, I've found that there is a `finish_reason` on `model_result` that gets set to `"length"` when the token limit is reached. Ideally, if that's the case, I want to prompt the model to continue, but I don't see this finish reason propagated to the caller.
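The behaviour I'm after, expressed against the raw model client (a rough sketch; the `generate_until_done` helper and the bare "continue" prompt are my own illustration, not anything AgentChat provides):

```python
import asyncio

from autogen_core.models import AssistantMessage, UserMessage
from autogen_ext.models.openai import OpenAIChatCompletionClient


async def generate_until_done(task: str) -> str:
    model_client = OpenAIChatCompletionClient(model="gpt-4.1")
    messages = [UserMessage(content=task, source="user")]
    parts: list[str] = []

    while True:
        result = await model_client.create(messages)
        # Assuming plain-text output (no tool calls), result.content is a str.
        parts.append(result.content)
        if result.finish_reason != "length":
            # Anything other than "length" means the model actually finished.
            break
        # Feed the truncated output back and ask the model to keep going.
        messages.append(AssistantMessage(content=result.content, source="writer"))
        messages.append(UserMessage(content="continue", source="user"))

    await model_client.close()
    return "".join(parts)


print(asyncio.run(generate_until_done("Rewrite this very long document: ...")))
```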
What's the right approach to this?