Tool detection not working in Windows with Qwen model #10

@Foadsf

Description

Environment

  • Windows 10
  • Node.js v20.18.0
  • Ollama installed via winget in %LOCALAPPDATA%\Programs\Ollama\ollama.exe

What I did

  1. Cloned the repo and installed dependencies:

```shell
git clone https://github.com/patruff/ollama-mcp-bridge.git
cd ollama-mcp-bridge
npm install
```

  2. Installed MCP servers:

```shell
npm install -g @modelcontextprotocol/server-filesystem
npm install -g @modelcontextprotocol/server-memory
```

  3. Created a bridge_config.json file:

```json
{
  "mcpServers": {
    "filesystem": {
      "command": "node",
      "args": [
        "C:/Users/{username}/AppData/Roaming/npm/node_modules/@modelcontextprotocol/server-filesystem/dist/index.js",
        "C:\\dev\\typescript\\20250318\\ollama-mcp-bridge"
      ]
    },
    "memory": {
      "command": "node",
      "args": [
        "C:/Users/{username}/AppData/Roaming/npm/node_modules/@modelcontextprotocol/server-memory/dist/index.js"
      ]
    }
  },
  "llm": {
    "model": "qwen2.5-coder:7b-instruct",
    "baseUrl": "http://localhost:11434"
  }
}
```

  4. Started the bridge:

```shell
npm run start
```
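Since Windows paths inside JSON are easy to get wrong (backslashes must be escaped), I also sanity-checked that a config shaped like mine parses before starting the bridge. This snippet inlines a trimmed copy of the config, so the paths are illustrative, and `{username}` is left as a placeholder:

```javascript
// Sanity-check a bridge_config.json-shaped string before starting the bridge.
// The paths below are placeholders from my setup, not values to copy verbatim.
const raw = `{
  "mcpServers": {
    "filesystem": {
      "command": "node",
      "args": [
        "C:/Users/{username}/AppData/Roaming/npm/node_modules/@modelcontextprotocol/server-filesystem/dist/index.js",
        "C:\\\\dev\\\\typescript\\\\20250318\\\\ollama-mcp-bridge"
      ]
    }
  },
  "llm": { "model": "qwen2.5-coder:7b-instruct", "baseUrl": "http://localhost:11434" }
}`;

const config = JSON.parse(raw);              // throws with a position if the JSON is malformed
console.log(Object.keys(config.mcpServers)); // which servers the bridge will spawn
console.log(config.llm.model);
```

JSON.parse reports the byte offset of any syntax error, which is faster than deciphering a stack trace from the bridge's own startup.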

Issues

  1. Extremely verbose startup logs - The application produces hundreds of lines of debugging output during startup, making it hard to use in a terminal.

  2. Even more verbose list-tools output - When running the list-tools command, the terminal is flooded with detailed JSON schema for each tool.

  3. The LLM doesn't use the tools - When I ask it to perform tool operations like "Show me what files are in this directory" or "Create a new folder called test-project", the model responds as if it has no access to tools:

When asking "Show me what files are in this directory":

Response: I'm sorry, but as an AI language model, I don't have access to your local file system or any specific directory on your computer. However, if you provide me with the path to the directory, I can try to help you list the files and directories within it using a command-line interface (CLI) tool such as `ls` or `dir`.

When asking "Create a new folder called 'test-project'":

Response: I'm unable to create physical folders or directories as I am an AI running in a text-based environment. However, you can easily create a new folder named "test-project" on your computer using your operating system's file manager.
...

Possible cause

The bridge seems to be connecting to the MCP servers correctly, but the LLM doesn't appear to understand how to format tool calls. Looking at the logs:

14:38:17 DEBUG:     LLMBridge - Response is not a structured tool call: Unexpected token 'I', "I'm sorry,"... is not valid JSON

This suggests that the Qwen model might need specific prompting or formatting to generate the expected JSON structure for tool calls.
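As a workaround I've been experimenting with, instead of JSON.parse-ing the whole reply, the bridge could scan the text for an embedded JSON object, since Qwen often wraps the tool call in prose. This is only a sketch of such a fallback, not the bridge's actual code:

```javascript
// Fallback parser: try strict JSON first, then look for the first balanced
// {...} block inside the reply. Returns null when no JSON object is found.
// (Naive: does not account for braces inside JSON string values.)
function extractToolCall(reply) {
  try {
    return JSON.parse(reply);
  } catch {
    const start = reply.indexOf('{');
    if (start === -1) return null;
    let depth = 0;
    for (let i = start; i < reply.length; i++) {
      if (reply[i] === '{') depth++;
      else if (reply[i] === '}' && --depth === 0) {
        try {
          return JSON.parse(reply.slice(start, i + 1));
        } catch {
          return null; // braces found, but not valid JSON
        }
      }
    }
    return null; // unbalanced braces
  }
}

console.log(extractToolCall('Sure! {"tool": "list_directory", "args": {"path": "."}} Let me know.'));
console.log(extractToolCall("I'm sorry, but as an AI language model...")); // null
```

Even with such a fallback, replies like the ones above contain no JSON at all, so the model itself still needs a system prompt that tells it the expected tool-call format.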

Questions

  1. Does this bridge work correctly with the Qwen model? The README mentions it, but are there specific prompt templates required?

  2. Has this been tested on Windows specifically? Are there any known issues?

  3. Are there any logging level options to reduce verbosity?
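Regarding question 3: if there is no built-in option, even a small environment-driven gate would help. This is a hypothetical sketch of what I mean; `LOG_LEVEL` and these level names are my assumption, not the bridge's actual API:

```javascript
// Hypothetical env-driven log gate; LOG_LEVEL and these level names illustrate
// the kind of option I'm asking about, not anything the bridge currently has.
const LEVELS = { error: 0, warn: 1, info: 2, debug: 3 };
const threshold = LEVELS[process.env.LOG_LEVEL ?? 'info'] ?? LEVELS.info;

function log(level, message) {
  if (LEVELS[level] <= threshold) {
    console.log(`${new Date().toISOString()} ${level.toUpperCase()}: ${message}`);
  }
}

log('info', 'bridge started');      // printed at the default level
log('debug', 'tool schema: {...}'); // suppressed unless LOG_LEVEL=debug
```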

Any help or guidance would be appreciated!
