Replies: 1 comment
-
Hi @adgren, we have been working on a solution to help folks componentize this type of behavior and plug it into an agent. Note that this functionality is still experimental at the moment, so we may still make changes to the design; we are working hard on stabilizing it, though.

The idea is that we support a new type of component called an AI Context Provider. This component can be registered on a thread that is used with an agent. All messages added to the thread (both from the user and the agent) are passed to the component, and, before the underlying agent/LLM is invoked, the component has an opportunity to inject additional context into the agent in the form of text instructions or functions. We have an experimental RAG implementation of an AI Context Provider, along with a sample showing its usage. The sample uses a very simple and opinionated setup.

If you are unable to rely on the experimental code we have, you can of course run the RAG pipeline outside of the agent and just inject the search results, together with instructions, into the agent on invocation via the AdditionalInstructions option.
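For that fallback approach, here is a minimal sketch, not a definitive implementation: retrieval runs outside the agent and the results are injected for a single invocation via `AgentInvokeOptions.AdditionalInstructions`. The `LegalRag` class and the `SearchLegalDocumentsAsync` helper are hypothetical placeholders for your own code, and the sketch assumes a recent `Microsoft.SemanticKernel.Agents` package where `ChatCompletionAgent.InvokeAsync` accepts an `AgentInvokeOptions`; double-check against the version you are on.

```csharp
// Minimal sketch (not the official sample): run retrieval outside the agent and
// inject the results via AgentInvokeOptions.AdditionalInstructions.
using System.Collections.Generic;
using System.Text;
using System.Threading.Tasks;
using Microsoft.SemanticKernel;
using Microsoft.SemanticKernel.Agents;
using Microsoft.SemanticKernel.ChatCompletion;

public static class LegalRag
{
    // Hypothetical helper: wrap whatever similarity search you already have
    // (vector store, search index, etc.). The name and shape are placeholders.
    private static Task<IReadOnlyList<string>> SearchLegalDocumentsAsync(string query) =>
        Task.FromResult<IReadOnlyList<string>>(new[] { "(retrieved passage placeholder)" });

    public static async Task<string> AskWithRagAsync(
        ChatCompletionAgent agent,
        ChatHistoryAgentThread thread,
        string userQuestion)
    {
        // 1. Retrieval happens outside the agent.
        IReadOnlyList<string> passages = await SearchLegalDocumentsAsync(userQuestion);

        // 2. Turn the retrieved passages into instructions for this invocation only.
        string additionalInstructions =
            "Answer using only the following retrieved passages. " +
            "If they do not contain the answer, say so.\n\n" +
            string.Join("\n---\n", passages);

        // 3. Invoke the agent, passing the context via AdditionalInstructions.
        var options = new AgentInvokeOptions { AdditionalInstructions = additionalInstructions };

        var reply = new StringBuilder();
        await foreach (var item in agent.InvokeAsync(
            new ChatMessageContent(AuthorRole.User, userQuestion), thread, options))
        {
            reply.Append(item.Message.Content);
        }

        return reply.ToString();
    }
}
```

If you want something like a prompt router on top of this, it could live just above this method: pick which instruction template (or which retrieval source) to use based on the incoming question, then invoke the agent with the chosen instructions. Since the instructions are supplied per invocation rather than added to the thread, the retrieved passages shouldn't accumulate in the chat history across questions.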
If you provide some more info on the specific area of building the prompt router that you would like advice on, I'd be happy to try and help.
-
I'm new to Semantic Kernel and have been tasked with porting our current RAG system to SK. We're using C#.
We work in the legal space and the current RAG system is pretty basic. It consists of:
I have built an agent which performs the similarity search using a ChatCompletionAgent, but I'm having trouble figuring out the correct way to build the prompt router and how to hook it up to this agent.
Any direction will be greatly appreciated.
Adam