feat(benchmarks): add MCP SDK low-level server as third benchmark target #19
Merged
sreenaths merged 5 commits into cloudera:main (Mar 23, 2026)
Conversation
Extends all three transport benchmarks (HTTP, Streamable HTTP, stdio) to compare FastMCP and MiniMCP against the MCP Python SDK's low-level Server, giving a direct baseline for framework overhead relative to the raw SDK.

- Add mcp_lowlevel_streamable_http_server and mcp_lowlevel_stdio_server
- Rename fastmcp_http_server -> fastmcp_streamable_http_server to reflect that its transport is Streamable HTTP, matching the client used
- Update all three benchmark runners to include mcp-lowlevel as a server
- Update benchmarks/README.md to reflect the three-way comparison
Force-pushed 7485cac to f6a36cc
… close

The 0.5 s sleep was unreliable for near-zero-latency tools (e.g. noop_tool) because rapid connection churn made the fixed window insufficient. Closing the write channel signals EOF to the streamable_http_client dispatcher task, letting it exit cleanly after the final POST rather than being cancelled mid-read. This eliminates the httpx.ReadError / ExceptionGroup race deterministically, without relying on timing. Docs updated accordingly in docs/ISSUES.md.
- Add BASELINE_SERVER and SERVER_ORDER constants so ordering and baseline are explicit and easy to change without touching logic
- Guard against KeyError when a server in SERVER_ORDER is absent from results (e.g. the HTTP benchmark has no mcp-lowlevel)
- Call organize_results once in main and pass data down to avoid redundant work and make the data flow explicit
- Extract _collect_improvements to remove three structurally identical metric-collection loops in print_key_findings
- Replace the unit-string-based formatting branch with a decimal_places param, and add explicit better_label/worse_label params to fix the misleading "slower" annotation on memory charts
Re-run all 9 benchmark files against three servers (MiniMCP 0.5.0, FastMCP 3.1.1, MCP Low-Level 1.24.0) across all transport types and workloads. Rename MINIMCP_VS_FASTMCP_ANALYSIS.md to BENCHMARK_ANALYSIS_REPORT.md and expand it to cover the three-way comparison, including corrected summary ranges and prose claims. Update README benchmark section to reflect the new results.