An MCP server that exposes django-silk profiling data as tools for any MCP-compatible AI coding assistant. Enables query-level investigation and optimization directly from your conversation — N+1 detection, EXPLAIN ANALYZE, over-fetch analysis, and Python profiling without leaving your editor.
```shell
pip install django-silk-mcp
# or
uv add django-silk-mcp
```

Then add both apps to `INSTALLED_APPS`:

```python
INSTALLED_APPS = [
    ...
    "silk",
    "django_silk_mcp",
]
```

Expose the MCP server as a URL inside your Django app. It works on both WSGI and ASGI with no extra infrastructure.
Add to your root `urls.py`:

```python
urlpatterns = [
    ...
    path("silk/", include("silk.urls", namespace="silk")),
    path("silk/mcp", include("django_silk_mcp.urls")),
]
```

This serves the MCP endpoint at `/silk/mcp`. The MCP server runs as part of your Django app; no separate process is needed, and it shares the same port as your dev server. Then configure your MCP client:
Claude Code:

```shell
claude mcp add silk-mcp --transport http http://localhost:8000/silk/mcp
```

Cursor: Go to Settings → Tools & MCPs → New MCP Server and add a new server with the following configuration:
```json
{
  "mcpServers": {
    "silk-mcp": {
      "url": "http://localhost:8000/silk/mcp"
    }
  }
}
```

The same URL works with any MCP-compatible AI tool.
Warning: The MCP endpoint uses `AllowAny` with no authentication by default, because MCP clients (Claude Code, Cursor, etc.) connect over plain HTTP and do not send credentials. This is intentional for local development, but the endpoint must not be exposed in production.

Silk profiling data includes raw SQL query strings, which can contain business data at runtime (user IDs, emails, search terms, etc.). Anyone who can reach the Django host can read this data through the MCP endpoint.
Recommended: only mount the endpoint in DEBUG mode:

```python
# urls.py
from django.conf import settings
from django.urls import include, path

if settings.DEBUG:
    # Extend the existing urlpatterns so it stays defined when DEBUG is False
    urlpatterns += [
        path("silk/", include("silk.urls", namespace="silk")),
        path("silk/mcp", include("django_silk_mcp.urls")),
    ]
```

| Tool | Purpose |
|---|---|
| `get_most_expensive_endpoints` | Rank all endpoints by average DB time |
| `get_request_time_breakdown` | DB% vs Python% — confirm where time is spent |
| `get_duplicate_queries` | Detect N+1 patterns |
| `get_query_sources` | Which code lines triggered each query |
| `get_overfetched_fields` | SQL columns vs serializer fields — `.only()` candidates |
| `explain_slow_queries` | EXPLAIN (ANALYZE, BUFFERS) on slow SELECTs |
| `get_request_queries` | All SQL for a specific request |
| `get_slow_queries` | Slowest individual queries across all requests |
| `get_slow_requests` | Slowest HTTP requests |
| `compare_requests` | Before/after comparison with burst deduplication |
| `get_python_profiles` | `@silk_profile` decorated block timings |
| `get_cprofile_hotspots` | cProfile hotspots — no decorators needed |
```shell
# Hit an endpoint first so Silk has data
curl http://localhost:8000/api/your-endpoint/

# Then ask your AI assistant:
# "What are the slowest endpoints?"
# "Find N+1 queries in /api/your-endpoint/"
# "Run EXPLAIN on the slow queries for /api/your-endpoint/"
```
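To illustrate the idea behind the over-fetch analysis listed above (a hypothetical sketch, not the library's code; the column and field names are invented): at its core it is a set difference between the columns a query selects and the fields the serializer actually emits.

```python
def overfetched_columns(selected_columns, serialized_fields):
    """Columns pulled from the DB but never serialized — candidates
    for .only() / .defer() on the queryset."""
    return sorted(set(selected_columns) - set(serialized_fields))

# Columns in the SELECT vs fields in the API response (invented names)
selected = ["id", "title", "body", "created_at", "author_id"]
serialized = ["id", "title", "author_id"]

print(overfetched_columns(selected, serialized))
# ['body', 'created_at']
```

Here `body` and `created_at` are fetched on every row but never leave the server, so trimming them with `.only("id", "title", "author_id")` would shrink each query's payload.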