Merged
23 changes: 13 additions & 10 deletions README.md
````diff
@@ -222,25 +222,28 @@ async def handler(request: Request):
     return await transport.starlette_dispatch(request, scope)
 ```
 
-## Benchmark - MiniMCP vs FastMCP
+## Benchmark - MiniMCP vs FastMCP vs MCP Low-Level
 
-Benchmarked against the standalone [`fastmcp`](https://github.com/jlowin/fastmcp) package (v3.1.1, by Jeremiah Lowin).
+Benchmarked against the standalone [`fastmcp`](https://github.com/jlowin/fastmcp) package (v3.1.1,
+by Jeremiah Lowin) and the official MCP Python SDK [`mcp`](https://github.com/modelcontextprotocol/python-sdk)
+low-level server (v1.24.0).
 
-In our benchmarks, MiniMCP consistently outperforms FastMCP across all transport types and workloads:
+MiniMCP is the fastest server in every one of the 36 test scenarios against both competitors:
 
-- **28–64% faster response times** across all load levels
-- **38–126% higher throughput** (STDIO transport; advantage persists under heavy load)
-- **44–66% lower peak memory usage** — FastMCP grows with concurrency, MiniMCP stays flat
-- **Wins all 36 test scenarios** (3 transports × 3 workloads × 4 load levels)
+- **Wins all 36 test scenarios** (3 transports × 3 workloads × 4 load levels) against both FastMCP and MCP Low-Level
+- **vs FastMCP**: 28–64% faster response times; 37–126% higher throughput; 44–66% lower peak memory usage
+- **vs MCP Low-Level (STDIO)**: 8–52% faster response times; up to 54% higher throughput — MCP Low-Level ranks 2nd on STDIO
+- **vs MCP Low-Level (HTTP)**: 48–60% faster response times; 46–136% higher throughput — MCP Low-Level struggles at high concurrency, plateauing at ~180 RPS regardless of load
+- **Memory (HTTP)**: MiniMCP holds flat at ~22 MB under heavy load; FastMCP reaches ~63 MB, MCP Low-Level ~56 MB
 
-For detailed results and architectural analysis, see the [benchmark analysis report](https://github.com/cloudera/minimcp/blob/main/benchmarks/reports/MINIMCP_VS_FASTMCP_ANALYSIS.md).
+For detailed results and architectural analysis, see the [benchmark analysis report](https://github.com/cloudera/minimcp/blob/main/benchmarks/reports/BENCHMARK_ANALYSIS_REPORT.md).
 
 ### Test Environment
 
 - **Python Version**: 3.10.12
 - **OS**: Linux 6.8.0-106-generic
-- **Test Date**: March 21, 2026
-- **Total Test Duration**: ~5.8 hours
+- **Test Date**: March 22, 2026
+- **Total Test Duration**: ~9.3 hours
 
 ## API Reference
 
````
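As context for the percentage bullets in the README hunk above: throughput deltas of this kind are conventionally computed as a relative improvement over the baseline. A minimal sketch of that arithmetic — the RPS figures below are made-up placeholders, not benchmark results; only the ~180 RPS plateau appears in the diff itself:

```python
def relative_gain(candidate: float, baseline: float) -> float:
    """Percent improvement of candidate over baseline for a
    higher-is-better metric such as requests per second."""
    return (candidate - baseline) / baseline * 100.0

# Placeholder throughput numbers for illustration only: 180 RPS echoes the
# plateau figure quoted in the diff; 425 RPS is an assumed MiniMCP value.
minimcp_rps = 425.0
mcp_lowlevel_rps = 180.0

gain = relative_gain(minimcp_rps, mcp_lowlevel_rps)
print(f"{gain:.0f}% higher throughput")  # 136% with these placeholder inputs
```

The same formula with the sign flipped (baseline minus candidate, over baseline) yields the "% lower peak memory" style of figure for lower-is-better metrics.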
8 changes: 4 additions & 4 deletions benchmarks/README.md
````diff
@@ -1,12 +1,12 @@
-# MiniMCP vs FastMCP · Benchmarks
+# MiniMCP vs FastMCP vs MCP Low-Level Server · Benchmarks
 
 Latest report: [MiniMCP vs FastMCP Analysis](./reports/MINIMCP_VS_FASTMCP_ANALYSIS.md)
 
 Once you've set up a development environment as described in [CONTRIBUTING.md](../../CONTRIBUTING.md), you can run the benchmark scripts.
 
 ## Running Benchmarks
 
-Each transport has a separate benchmark script that can be run with the following commands. Only tool calling is used for benchmarking as other primitives aren't much different functionally. Each script produces two result files: one for sync tool calls and another for async tool calls.
+Each transport has a separate benchmark script that can be run with the following commands. Only tool calling is used for benchmarking as other primitives aren't much different functionally. Each script compares three servers — MiniMCP, FastMCP, and the MCP SDK's low-level `Server` — and produces three result files: one each for sync, I/O-bound async, and noop tool calls.
 
 ```bash
 # Stdio
@@ -19,7 +19,7 @@ uv run python -m benchmarks.macro.http_mcp_server_benchmark
 uv run python -m benchmarks.macro.streamable_http_mcp_server_benchmark
 ```
 
-> **FastMCP Version:** The benchmarks compare MiniMCP against the [FastMCP](https://pypi.org/project/fastmcp/) package. The version in use is pinned in the `dev` dependency group in `pyproject.toml`. To temporarily use a different version, run `uv pip install fastmcp==<version>` before running the scripts.
+> **Versions:** The benchmarks compare MiniMCP against the [FastMCP](https://pypi.org/project/fastmcp/) package and the [MCP Python SDK](https://pypi.org/project/mcp/)'s low-level `Server`. Both versions are pinned in the `dev` dependency group in `pyproject.toml`. To temporarily use a different version, run `uv pip install fastmcp==<version>` or `uv pip install mcp==<version>` before running the scripts.
 
 ### System Preparation - Best practice in Ubuntu
 
@@ -64,7 +64,7 @@ The benchmark uses four load profiles to test performance under different concur
 
 ### Analyze Results
 
-The `analyze_results.py` script provides a visual comparison of benchmark results between MiniMCP and FastMCP. It displays response time comparisons across all load profiles with visual bar charts, performance improvements as percentages, memory usage comparisons, key findings, and metadata.
+The `analyze_results.py` script provides a visual comparison of benchmark results across all servers (MiniMCP, FastMCP, and MCP low-level). It displays response time and memory usage bar charts across all load profiles, performance improvements as percentages relative to FastMCP as the baseline, key findings per server, and run metadata.
 
 You can run it for each result JSON file with:
 
````