
Refactor with LanguageModelChatTool for native tool use #27

@supreme-gg-gg

Description

Most "tools" currently in use are JSON arrays that get parsed into simple editor events. This is not necessarily bad, although some repetition is already creeping in. In the future, we should consider a larger refactor that replaces this with native tool calling. That would make Cheerleader much more powerful (it could act more like Copilot's agentic mode, or Claude Desktop with its MCP servers).

Relevant docs

Proposed Changes

  1. Refactor all the "tool calling" related functions into utils and rename them as tools (e.g. Markdown rendering, ...)
  2. Introduce a tool definition for each of them (ref: LanguageModelChatTool) with a description and a JSON input schema. They likely also need to be registered via lm.registerTool(...) and declared in package.json under contributes.languageModelTools
  3. Expose these tools when sending a request to the LLM via LanguageModelChatRequestOptions; this is basically a JSON array of the tools defined in step 2
  4. When receiving the response, partition it into regular text parts and tool call parts (LanguageModelToolCallPart)
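Step 2 could look something like the sketch below. The tool name, schema, and the wrapped Markdown-rendering util are all illustrative assumptions, not existing code; only the LanguageModelChatTool shape (name, description, inputSchema) and the lm.registerTool / contributes.languageModelTools pairing come from the VS Code API.

```typescript
// Hypothetical tool definition (the name and schema are illustrative).
// Shape follows vscode.LanguageModelChatTool: name, description, inputSchema.
const renderMarkdownTool = {
  name: "cheerleader_renderMarkdown",
  description: "Render a markdown string in the editor",
  inputSchema: {
    type: "object",
    properties: {
      markdown: { type: "string", description: "Markdown content to render" },
    },
    required: ["markdown"],
  },
};

// Registration would happen in activate(); the same name/description/schema
// must also be declared in package.json under "contributes.languageModelTools":
//
// context.subscriptions.push(
//   vscode.lm.registerTool("cheerleader_renderMarkdown", {
//     invoke: async (options, _token) => {
//       // call the existing markdown-rendering util here, then wrap the outcome
//       return new vscode.LanguageModelToolResult([
//         new vscode.LanguageModelTextPart("markdown rendered"),
//       ]);
//     },
//   })
// );
```

The plain object above is what would be passed to the model as part of the request options; the registered handler is what vscode.lm.invokeTool dispatches to.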

Example

Consider the current getAIResponse(); we would change it to getAIResponseWithTools():

export async function getAIResponseWithTools(
  tools: vscode.LanguageModelChatTool[], // required params must precede optional ones
  userText: string | null = null,
  options: LanguageModelOptions = {}
): Promise<string> {
  try {
    // same model/messages setup as getAIResponse()

    const chatResponse = await model.sendRequest(
      messages,
      { tools }, // include tools in the request options
      new vscode.CancellationTokenSource().token,
    );

    let fullResponse = ""; // the text response
    const toolResults: vscode.LanguageModelToolResult[] = []; // the tool responses
    for await (const part of chatResponse.stream) {
      if (part instanceof vscode.LanguageModelTextPart) {
        fullResponse += part.value;
      } else if (part instanceof vscode.LanguageModelToolCallPart) {
        // Invoke the tool; this requires the tool to be registered via lm.registerTool
        const toolResult = await vscode.lm.invokeTool(part.name, {
          input: part.input,
          toolInvocationToken: undefined,
        });
        toolResults.push(toolResult);
      }
    }

    // NOTE: toolResults are collected here but not yet fed back to the model
    return fullResponse;
  } catch (error) {
    console.error("Language model error:", error);
    throw error;
  }
}

Open question: how should tool calling results be handled? We need to figure this out. It seems we can append them back to the messages list so the LM has memory of the tool calls.
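One possible answer, sketched below: after the first stream finishes, echo the tool calls back as an assistant message, attach each tool's result as a LanguageModelToolResultPart in a user message, and send a follow-up request so the model continues with the tool output in context. This is an unverified sketch, assuming `model`, `messages`, and `tools` from the example above are in scope; it is not tested against the repo.

```typescript
// Sketch: feed tool results back to the model.
const toolCalls: vscode.LanguageModelToolCallPart[] = [];
let fullResponse = "";

const first = await model.sendRequest(
  messages,
  { tools },
  new vscode.CancellationTokenSource().token,
);
for await (const part of first.stream) {
  if (part instanceof vscode.LanguageModelTextPart) {
    fullResponse += part.value;
  } else if (part instanceof vscode.LanguageModelToolCallPart) {
    toolCalls.push(part); // defer invocation until the stream is drained
  }
}

if (toolCalls.length > 0) {
  // Echo the assistant's tool calls back into the history...
  messages.push(vscode.LanguageModelChatMessage.Assistant(toolCalls));

  // ...then attach each tool's result, keyed by the call ID.
  const resultParts: vscode.LanguageModelToolResultPart[] = [];
  for (const call of toolCalls) {
    const result = await vscode.lm.invokeTool(call.name, {
      input: call.input,
      toolInvocationToken: undefined,
    });
    resultParts.push(
      new vscode.LanguageModelToolResultPart(call.callId, result.content),
    );
  }
  messages.push(vscode.LanguageModelChatMessage.User(resultParts));

  // Ask the model to continue now that it can see the tool output.
  const followUp = await model.sendRequest(
    messages,
    { tools },
    new vscode.CancellationTokenSource().token,
  );
  for await (const part of followUp.stream) {
    if (part instanceof vscode.LanguageModelTextPart) {
      fullResponse += part.value;
    }
  }
}
```

A real implementation would probably loop (the follow-up response can itself contain tool calls) with a cap on the number of rounds.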

Metadata

    Labels

    Mid priority, enhancement (New feature or request)
