Most "tools" currently in use are JSON arrays that get parsed into simple editor events. This is not necessarily bad, even though a few repetitions are already occurring. In the future, we should consider a larger refactoring that replaces this with native tool calling. This would make Cheerleader much more powerful (it could act more like Copilot agent mode, or Claude Desktop with its MCP servers).
Relevant docs
- General guide: https://code.visualstudio.com/api/extension-guides/tools
- Most LM Types: https://code.visualstudio.com/api/references/vscode-api#LanguageModelChat.id
- LM namespace variables and functions: https://code.visualstudio.com/api/references/vscode-api#lm
Proposed Changes
- Refactor all the "tool calling" related functions in utils: rename them to tools (Markdown rendering, etc.)
- Introduce a tool definition for each of them (ref: LanguageModelChatTool) with a description and JSON input schema. You probably also need to register them using lm.registerTool(...) and declare them in package.json
- Expose these tools when sending a request to the LLM via LanguageModelChatRequestOptions as part of sendRequest. This is basically a JSON array of the tools defined in step 2
- When receiving the response, partition the regular text response and the tool-call response: LanguageModelToolCallPart
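A rough sketch of step 2, assuming a hypothetical Markdown-rendering tool (the tool name cheerleader_renderMarkdown, its schema, and the renderMarkdownSomehow helper are made-up placeholders, not existing Cheerleader code):

```typescript
import * as vscode from "vscode";

// package.json — declare the tool under the languageModelTools contribution point:
// "contributes": {
//   "languageModelTools": [{
//     "name": "cheerleader_renderMarkdown",
//     "displayName": "Render Markdown",
//     "modelDescription": "Renders a markdown string in the Cheerleader panel",
//     "inputSchema": {
//       "type": "object",
//       "properties": { "markdown": { "type": "string" } },
//       "required": ["markdown"]
//     }
//   }]
// }

interface RenderMarkdownInput {
  markdown: string;
}

// Placeholder for whatever rendering logic currently lives in utils.
declare function renderMarkdownSomehow(md: string): void;

export function registerCheerleaderTools(context: vscode.ExtensionContext) {
  context.subscriptions.push(
    vscode.lm.registerTool<RenderMarkdownInput>("cheerleader_renderMarkdown", {
      async invoke(options, _token) {
        renderMarkdownSomehow(options.input.markdown);
        // The result text is what the model sees after the tool runs.
        return new vscode.LanguageModelToolResult([
          new vscode.LanguageModelTextPart("Markdown rendered."),
        ]);
      },
    }),
  );
}
```

The name registered with lm.registerTool must match the "name" declared in package.json; the inputSchema there is what the model uses to produce a well-formed input object.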
Example
Consider the current getAiresponse(); we will change it to getAIResponseWithTools():
```typescript
export async function getAIResponseWithTools(
  userText: string | null = null,
  tools: vscode.LanguageModelChatTool[],
  options: LanguageModelOptions = {}
): Promise<string> {
  try {
    // ... same model/message setup as in getAiresponse ...
    const chatResponse = await model.sendRequest(
      messages,
      { tools }, // include the tool definitions in the request options
      new vscode.CancellationTokenSource().token,
    );

    let fullResponse = ""; // accumulated text response
    const toolResponses: vscode.LanguageModelToolResult[] = []; // accumulated tool results

    for await (const part of chatResponse.stream) {
      if (part instanceof vscode.LanguageModelTextPart) {
        fullResponse += part.value;
      } else if (part instanceof vscode.LanguageModelToolCallPart) {
        // Invoke the tool; this requires the tool to be registered via lm.registerTool
        const toolResult = await vscode.lm.invokeTool(part.name, {
          input: part.input,
          toolInvocationToken: undefined,
        });
        toolResponses.push(toolResult);
      }
    }
    return fullResponse;
  } catch (error) {
    console.error("Language model error:", error);
    throw error;
  }
}
```

I DO NOT KNOW HOW TO HANDLE TOOL CALLING RESULTS!! We need to figure this out. It seems like we can append them back to the messages list so the LM has memory of the tool calls.
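One possible shape for this, sketched against the VS Code LanguageModelChatMessage / LanguageModelToolResultPart API (not verified against the Cheerleader codebase; the function name and the idea of pairing calls to results by index are assumptions): echo the assistant's tool calls back into the history, append a user message holding the matching tool results, then re-send so the model can produce a final answer with the tool output in context.

```typescript
import * as vscode from "vscode";

// Hypothetical continuation of getAIResponseWithTools. Assumes the caller kept
// both the LanguageModelToolCallParts from the stream and the results from
// vscode.lm.invokeTool, in the same order.
async function continueWithToolResults(
  model: vscode.LanguageModelChat,
  messages: vscode.LanguageModelChatMessage[],
  tools: vscode.LanguageModelChatTool[],
  toolCalls: vscode.LanguageModelToolCallPart[],
  toolResults: vscode.LanguageModelToolResult[],
): Promise<string> {
  // 1. Record the assistant turn that requested the tool calls.
  messages.push(vscode.LanguageModelChatMessage.Assistant(toolCalls));

  // 2. Pair each call with its result via the call's callId.
  const resultParts = toolCalls.map(
    (call, i) =>
      new vscode.LanguageModelToolResultPart(call.callId, toolResults[i].content),
  );
  messages.push(vscode.LanguageModelChatMessage.User(resultParts));

  // 3. Re-send; the model now "remembers" the tool calls and their output.
  const response = await model.sendRequest(
    messages,
    { tools },
    new vscode.CancellationTokenSource().token,
  );

  let text = "";
  for await (const part of response.stream) {
    if (part instanceof vscode.LanguageModelTextPart) {
      text += part.value;
    }
  }
  return text;
}
```

Note the model may respond with further tool calls, so a real implementation probably loops steps 1-3 until the stream contains only text parts.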