Description
Environment
- Windows 10
- Node.js v20.18.0
- Ollama installed via winget at `%LOCALAPPDATA%\Programs\Ollama\ollama.exe`
What I did
- Cloned the repo and installed dependencies:

```shell
git clone https://github.com/patruff/ollama-mcp-bridge.git
cd ollama-mcp-bridge
npm install
```

- Installed the MCP servers globally:

```shell
npm install -g @modelcontextprotocol/server-filesystem
npm install -g @modelcontextprotocol/server-memory
```

- Created a `bridge_config.json` file:

```json
{
  "mcpServers": {
    "filesystem": {
      "command": "node",
      "args": [
        "C:/Users/{username}/AppData/Roaming/npm/node_modules/@modelcontextprotocol/server-filesystem/dist/index.js",
        "C:\\dev\\typescript\\20250318\\ollama-mcp-bridge"
      ]
    },
    "memory": {
      "command": "node",
      "args": [
        "C:/Users/{username}/AppData/Roaming/npm/node_modules/@modelcontextprotocol/server-memory/dist/index.js"
      ]
    }
  },
  "llm": {
    "model": "qwen2.5-coder:7b-instruct",
    "baseUrl": "http://localhost:11434"
  }
}
```

- Started the bridge:
```shell
npm run start
```

Issues
- Extremely verbose startup logs - the application produces hundreds of lines of debug output during startup, making it hard to use in a terminal.
- Even more verbose `list-tools` output - when running the `list-tools` command, the terminal is flooded with the detailed JSON schema for each tool.
- The LLM doesn't use the tools - when I ask it to perform tool operations like "Show me what files are in this directory" or "Create a new folder called test-project", the model responds as if it has no access to tools:
When asking "Show me what files are in this directory":

> I'm sorry, but as an AI language model, I don't have access to your local file system or any specific directory on your computer. However, if you provide me with the path to the directory, I can try to help you list the files and directories within it using a command-line interface (CLI) tool such as `ls` or `dir`.

When asking "Create a new folder called test-project":

> I'm unable to create physical folders or directories as I am an AI running in a text-based environment. However, you can easily create a new folder named "test-project" on your computer using your operating system's file manager.
...
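To help isolate whether this is a bridge problem or a model problem, I'm planning to test Ollama's native tool calling directly via its `/api/chat` endpoint, bypassing the bridge. This sketch only builds the request body; the `list_directory` tool definition is a placeholder I made up for the test, not one of the bridge's actual tool names:

```typescript
// Request body for Ollama's /api/chat endpoint to check whether the model
// emits structured tool calls at all. The tool definition follows the
// OpenAI-style function schema that Ollama's API accepts; "list_directory"
// is a made-up placeholder, not a real bridge tool.
const body = {
  model: "qwen2.5-coder:7b-instruct",
  stream: false,
  messages: [
    { role: "user", content: "Show me what files are in this directory" },
  ],
  tools: [
    {
      type: "function",
      function: {
        name: "list_directory",
        description: "List the files in a directory",
        parameters: {
          type: "object",
          properties: {
            path: { type: "string", description: "Directory path to list" },
          },
          required: ["path"],
        },
      },
    },
  ],
};

// Pipe this JSON into: curl http://localhost:11434/api/chat -d @-
console.log(JSON.stringify(body));
```

If the response's `message` contains a `tool_calls` array, the model handles structured tool calls and the problem is likely in the bridge; if it answers in prose, the model itself may need different prompting.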
Possible cause
The bridge appears to connect to the MCP servers correctly, but the LLM doesn't seem to know how to format tool calls. Looking at the logs:

```
14:38:17 DEBUG: LLMBridge - Response is not a structured tool call: Unexpected token 'I', "I'm sorry,"... is not valid JSON
```

This suggests the Qwen model may need specific prompting or formatting to generate the JSON structure the bridge expects for tool calls.
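To make that concrete, here is roughly what I assume the bridge's parser is doing based on that DEBUG line (function and field names are my guesses, not the actual bridge code): it tries to `JSON.parse` the raw model output and treats anything unparseable as plain prose.

```typescript
// Guessed sketch of the bridge's tool-call detection, inferred from the
// "Response is not a structured tool call" log line. Names ("tool",
// "arguments", tryParseToolCall) are my invention, not the bridge's code.
interface ToolCall {
  tool: string;
  arguments: unknown;
}

function tryParseToolCall(response: string): ToolCall | null {
  try {
    const parsed = JSON.parse(response);
    if (parsed && typeof parsed.tool === "string") {
      return { tool: parsed.tool, arguments: parsed.arguments };
    }
    return null; // valid JSON, but not shaped like a tool call
  } catch {
    return null; // prose like "I'm sorry, ..." fails here, matching the log
  }
}

console.log(tryParseToolCall(`I'm sorry, but as an AI language model...`)); // null
console.log(tryParseToolCall(`{"tool":"list_directory","arguments":{"path":"."}}`));
```

If something like this is what's happening, then unless the model is prompted to answer with a bare JSON object, every response falls through to the prose path and no tool ever runs.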
Questions
- Does this bridge work correctly with the Qwen model? The README mentions it, but are there specific prompt templates required?
- Has this been tested on Windows specifically? Are there any known issues?
- Are there any logging-level options to reduce verbosity?
Any help or guidance would be appreciated!