Get object response without AutoInvokeKernelFunctions. #8308
-
Hey all. Not sure if I have missed something, but is there an obvious way for me to do something like this:
I am able to do this via the plugins, e.g. I can call a plugin directly, but in that case it requires me to provide the function parameters, which the model generates itself during chat-completion tool calling.
-
Hi @SalomonHenke, if I am reading this correctly, you do want to invoke the function, but you do not want the LLM to produce a final answer once the function invocation is complete. Is that true?
-
Not quite — I might not be explaining it well enough. Essentially I want to achieve function calling as it was previously possible via the OpenAI API directly, i.e. define a structure I want returned, e.g.
I am able to do this via ChatCompletion and AutoInvoke, but it seems wasteful, as I already know I want to generate a Poem and am not interested in any other tool call. I essentially kill the flow the second my plugin is called, by invoking a delegate that returns the generated request object, since that is the only thing I'm interested in. This doesn't seem like the right way to do it. Or perhaps my use case isn't supported in a convenient way?
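For illustration, the kind of plugin described here might look roughly like this (a sketch only — the `PoemRequest` and `PoemPlugin` names and properties are hypothetical, standing in for the structure elided above):

```csharp
using System.ComponentModel;
using Microsoft.SemanticKernel;

// Hypothetical request object the model should populate via tool calling.
public sealed class PoemRequest
{
    public string Topic { get; set; } = string.Empty;
    public string Style { get; set; } = string.Empty;
    public int Verses { get; set; }
}

public sealed class PoemPlugin
{
    // The model fills in `request` when it decides to call this tool;
    // the goal is to get that generated object back directly, without
    // a further completion round.
    [KernelFunction, Description("Generates a poem from a structured request.")]
    public PoemRequest CreatePoem(PoemRequest request) => request;
}
```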
@SalomonHenke, if I understand correctly, you want the model to generate the function-call arguments (i.e. the request object), and those arguments are the outcome you want from your function. If that's correct, you can use the `IAutoFunctionInvocationFilter` to achieve this: when the filter is invoked, you have access to the arguments the model generated, and you can set them as the result and terminate the function-calling loop.
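A minimal sketch of such a filter, assuming the .NET Semantic Kernel filter API:

```csharp
using Microsoft.SemanticKernel;

// Terminates the auto function-calling loop as soon as the model
// requests our function, surfacing the model-generated arguments
// as the result instead of actually invoking the function.
public sealed class CaptureArgumentsFilter : IAutoFunctionInvocationFilter
{
    public Task OnAutoFunctionInvocationAsync(
        AutoFunctionInvocationContext context,
        Func<AutoFunctionInvocationContext, Task> next)
    {
        // context.Arguments holds the arguments the model generated
        // for this tool call; make them the function result.
        context.Result = new FunctionResult(context.Result, context.Arguments);

        // Stop the loop so the LLM is not called again to produce
        // a final natural-language answer.
        context.Terminate = true;

        // Deliberately not calling next(context), so the underlying
        // kernel function body is never executed.
        return Task.CompletedTask;
    }
}
```

The filter can then be registered on the kernel (e.g. via `kernel.AutoFunctionInvocationFilters.Add(new CaptureArgumentsFilter())`), and the captured arguments read off the returned `FunctionResult`.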