I am sending a prompt from the frontend; it should go to the backend and then to the MCP server. How do I connect the MCP server to an LLM so I can get the result and render it back on the frontend?
I'm building a backend workflow using FastMCP on Windows to connect a local AI model via CLI. My goal is to send prompts from a frontend (via JSON-RPC) to MCP, which then invokes a model subprocess and returns the response.
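For context, this is roughly the shape of my MCP server (a simplified sketch, not my exact code; `ask_model` and `run_ollama` are placeholder names, and I'm assuming FastMCP's decorator-based tool registration here):

```python
from fastmcp import FastMCP

mcp = FastMCP("local-llm")

def run_ollama(prompt: str) -> str:
    """Placeholder: the real subprocess call is shown further down."""
    raise NotImplementedError

@mcp.tool()
def ask_model(prompt: str) -> str:
    """Forward a prompt from the frontend to the local model and return its text output."""
    return run_ollama(prompt)

if __name__ == "__main__":
    mcp.run()  # the backend talks JSON-RPC to this server
```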
I've tried integrating the Cursor CLI (`cursor-agent`), but it isn't supported on Windows (Git Bash returns `Unsupported operating system: MINGW64_NT-10.0-26100`, and PowerShell says `'cursor-agent' is not recognized`).
So I switched to Ollama, which runs fine on Windows. I installed it and can run `ollama run mistral` manually.
In my MCP server, I use this subprocess call:
```python
import subprocess

process = subprocess.Popen(
    ["ollama", "run", "mistral"],
    stdin=subprocess.PIPE,
    stdout=subprocess.PIPE,
    stderr=subprocess.PIPE,
    text=True,
    # note: no shell=True -- combining a list of arguments with shell=True is
    # discouraged and behaves inconsistently; invoking ollama directly is more predictable
)
# Send the prompt on stdin, close it, and wait for the model to finish.
# The 30 s timeout may be too short on the first call, while the model is still loading.
stdout, stderr = process.communicate(input=prompt, timeout=30)
```
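One variation I'm also considering is passing the prompt as a positional argument instead of piping it through stdin, so the CLI never enters interactive mode (a sketch, assuming `ollama run <model> <prompt>` accepts the prompt as an argument, as it does when I run it by hand):

```python
import subprocess

# Sketch: one-shot invocation with the prompt passed as a CLI argument.
result = subprocess.run(
    ["ollama", "run", "mistral", prompt],
    capture_output=True,
    text=True,
    timeout=120,  # the first call can be slow while the model loads
)
if result.returncode != 0:
    raise RuntimeError(f"ollama failed: {result.stderr.strip()}")
answer = result.stdout.strip()
```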
But when I send a prompt from the frontend, I get a blank response:
{"jsonrpc":"2.0","result":"","id":...}
✅ MCP logs show:

- Prompt received
- Subprocess launched
- stdout is empty
- stderr is empty
❓ What I need help with:

- Why is `ollama run mistral` returning nothing when called via subprocess?
- Is there a better way to invoke Ollama from Python on Windows? (see the sketch after this list)
- How can I confirm that MCP is correctly capturing the model output?
Any help or working examples would be appreciated!
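For the second question, the direction I'm currently exploring is Ollama's local HTTP API instead of the CLI (a sketch, assuming the default server on http://localhost:11434 and the `requests` package; not yet tested as part of this setup):

```python
import requests

def ask_ollama_http(prompt: str, model: str = "mistral") -> str:
    """Sketch: call Ollama's local REST API (default port 11434) instead of spawning the CLI."""
    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": model, "prompt": prompt, "stream": False},
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json()["response"]  # non-streaming responses return the text here
```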