Caching in AutoGen V0.4.7 #5792
-
Can you post the exact error message and stack trace? Could you help me understand why this is related to caching? For caching, see: https://microsoft.github.io/autogen/stable/reference/python/autogen_ext.models.cache.html
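For reference, a minimal sketch of the wrapper described on that page: a `ChatCompletionCache` around an `OpenAIChatCompletionClient`, backed by a `DiskCacheStore` (this assumes `autogen-ext` is installed with the `diskcache` extra; the model name and temporary directory are placeholder choices):

```python
import asyncio
import tempfile

from autogen_core.models import UserMessage
from autogen_ext.cache_store.diskcache import DiskCacheStore
from autogen_ext.models.cache import CHAT_CACHE_VALUE_TYPE, ChatCompletionCache
from autogen_ext.models.openai import OpenAIChatCompletionClient
from diskcache import Cache


async def main() -> None:
    with tempfile.TemporaryDirectory() as tmpdir:
        openai_client = OpenAIChatCompletionClient(model="gpt-4o")
        # Wrap the model client; identical requests are answered from the store.
        store = DiskCacheStore[CHAT_CACHE_VALUE_TYPE](Cache(tmpdir))
        cache_client = ChatCompletionCache(openai_client, store)

        msg = [UserMessage(content="Hello, how are you?", source="user")]
        print(await cache_client.create(msg))  # first call hits the API
        print(await cache_client.create(msg))  # second call is served from the cache


asyncio.run(main())
```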
-
I'm assuming that if I use the `cache_client` in an agent, the token consumption will be reduced? I'm trying to do this, and the token consumption somehow seems to increase. Or does the LLM caching only work when I use something like this?
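For context, a sketch of what passing the cached client to an agent could look like; the agent name and system message are hypothetical, and `cache_client` is the wrapper from the docs example above:

```python
from autogen_agentchat.agents import AssistantAgent

# The cached wrapper is a drop-in ChatCompletionClient, so it can be
# passed to an agent in place of the raw model client.
critic = AssistantAgent(
    name="critic",
    model_client=cache_client,
    system_message="Reply with ALL-GOOD when the draft needs no changes.",
)
```

Note that the cache keys on the exact request, so as an agent's context grows between turns the requests differ and will miss the cache; only byte-identical requests are served from it.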
-
So, if I call the on_reset() method on the agent and then use the cache_client, a separate LLM call will not be made for the same task; rather, the cached response will be used?
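A sketch of that sequence, assuming the `critic` agent from above and noting that `on_reset()` takes a cancellation token:

```python
from autogen_core import CancellationToken

# Clear the agent's conversation state; re-running the same task then
# produces the same request, which the cache can answer without an API call.
await critic.on_reset(CancellationToken())
```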
-
I have a working flow, and I'm using a Jupyter notebook to run it.
This is the only cell that I ideally have to re-run when changing the task:
```python
from autogen_agentchat.conditions import MaxMessageTermination, TextMentionTermination

critic_text_mention_termination = TextMentionTermination("ALL-GOOD")
terminator_text_mention_termination = TextMentionTermination("max-3-tries")
max_messages_termination = MaxMessageTermination(max_messages=25)
```
When I run the cell for the first time, everything works according to plan. But when I change the task and re-run the cell, it throws an error saying 'critic message not found'. The issue is solved when I restart the kernel and re-run the whole notebook.
Previously I did not have this problem because caching was automatically implemented in V0.2. Is there any way to do it in V0.4 as well?
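For what it's worth, a sketch of how such a cell is typically structured in v0.4, combining the conditions and resetting the team before each new task; the team composition, agent names, and task string are hypothetical (top-level `await` works inside a notebook cell):

```python
from autogen_agentchat.teams import RoundRobinGroupChat

# Combine the conditions: stop on either mention, or after 25 messages.
termination = (
    critic_text_mention_termination
    | terminator_text_mention_termination
    | max_messages_termination
)

# 'critic' and 'terminator' are placeholder agents defined in earlier cells.
team = RoundRobinGroupChat([critic, terminator], termination_condition=termination)

await team.reset()  # clear state before re-running the cell with a new task
result = await team.run(task="Review the latest draft.")
```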