The Confident AI MCP Server connects AI-powered tools to Confident AI, a platform to evaluate, observe, and iterate on AI quality. It gives you full control over your resources directly from your editor:
- Cloud evaluations and metric collections
- Evaluation datasets
- Prompt versioning and management
- Production tracing and observability
- Human annotations and feedback
For users of DeepEval, Confident AI is also the native backend and persistence layer for your evaluation results. This MCP server gives you the ability to iterate on your AI application by bringing all of that data directly into tools like Cursor and Claude Code.
Warning
This MCP server is currently in beta. Everyone is welcome to try it out, but please reach out to the Confident AI team first to avoid any surprises in functionality.
Built for developers who want to iterate faster on their AI applications from inside editors like Cursor, Claude Code, and Windsurf — from simple queries to fully automated improvement workflows:
- 10x your iteration speed. Run an eval, check whether a set of prompts is better — in one continuous workflow instead of scattered across tools. What used to take an hour of tab-switching now takes one conversation.
- Go from eval results to action plan automatically. Your AI assistant can pull eval results, read what failed and why, and draft a plan for what to improve next — no manual analysis needed.
- Use production traces for iteration. Pull the trace, see what went in and what came out, read what users said — and fix it before anyone else notices.
- Let human feedback drive your next iteration. Pull annotation data your team left on production traces and have your AI assistant use it to decide what to fix and how.
Every time you leave your editor to check eval results, tweak a prompt in a dashboard, or look up what your team annotated — you lose context and iteration speed.
Confident AI has a full web UI where you can do all of this with a mouse. This MCP server is the same platform, accessed from your editor instead. Think: AWS web console vs. AWS CLI — same resources, different interface.
The server speaks the Model Context Protocol (MCP), so any compatible client connects out of the box. The web UI isn't going anywhere. This is just another way in.
- Prerequisites — What you need before you start
- Quickstart — Get up and running in under a minute
- Configuration — Environment variables for regions, on-prem, and advanced setup
- Available Tools — Full reference of all 27 tools
- License
- A Confident AI API key.
- An MCP-compatible client — Cursor, Claude, Windsurf, or any other client that supports the Model Context Protocol.
Confident AI hosts the MCP server for you. Pick your region:
| Region | MCP Server URL |
|---|---|
| US (default) | https://mcp.confident-ai.com/mcp |
| EU | https://eu.mcp.confident-ai.com/mcp |
| AU | https://au.mcp.confident-ai.com/mcp |
| Self-hosted | Use your own deployment URL |
Tip
The examples below use the US server URL. For other regions, swap the URL:
- EU: https://eu.mcp.confident-ai.com/mcp
- AU: https://au.mcp.confident-ai.com/mcp
- Self-hosted / On-prem: If you're running your own instance of Confident AI, you can run this MCP server yourself and point it at your deployment. See Running the Server Locally for setup instructions.
Cursor — add the following to your `.cursor/mcp.json` file:

```json
{
  "mcpServers": {
    "confident-ai": {
      "url": "https://mcp.confident-ai.com/mcp",
      "headers": {
        "Authorization": "Bearer <YOUR_CONFIDENT_API_KEY>"
      }
    }
  }
}
```

Claude Code — run the following command in your terminal:

```bash
claude mcp add --transport sse confident-ai https://mcp.confident-ai.com/mcp --header "Authorization: Bearer <YOUR_CONFIDENT_API_KEY>"
```

Claude Desktop — add the following to your `claude_desktop_config.json` file:
```json
{
  "mcpServers": {
    "confident-ai": {
      "command": "npx",
      "args": [
        "-y",
        "mcp-remote",
        "https://mcp.confident-ai.com/mcp",
        "--header",
        "Authorization: Bearer <YOUR_CONFIDENT_API_KEY>"
      ]
    }
  }
}
```

Windsurf — add the following to your Windsurf MCP configuration:
```json
{
  "mcpServers": {
    "confident-ai": {
      "serverUrl": "https://mcp.confident-ai.com/mcp",
      "headers": {
        "Authorization": "Bearer <YOUR_CONFIDENT_API_KEY>"
      }
    }
  }
}
```

If you're self-hosting or contributing to this project, you can run the server from source.

Prerequisites: Python >= 3.12, Poetry

```bash
poetry install
uv run server.py
```

The server will start on http://0.0.0.0:8081 with two endpoints:
| Endpoint | Method | Description |
|---|---|---|
| `/mcp` | GET | SSE connection endpoint for MCP clients |
| `/messages` | POST | Message passing endpoint for tool execution |
Both endpoints require a Bearer token in the Authorization header (your Confident AI API key).
When running locally, point your MCP client to http://localhost:8081/mcp instead of the hosted URLs above.
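To sanity-check a local deployment without wiring up an editor, you can connect with a small script. Here's a minimal sketch using the official MCP Python SDK (this assumes the `mcp` package is installed; substitute your own API key):

```python
import asyncio

from mcp import ClientSession
from mcp.client.sse import sse_client

LOCAL_URL = "http://localhost:8081/mcp"
HEADERS = {"Authorization": "Bearer <YOUR_CONFIDENT_API_KEY>"}


async def main() -> None:
    # Open an SSE connection to the local server, then run the MCP handshake.
    async with sse_client(LOCAL_URL, headers=HEADERS) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            # List the tools the server exposes.
            tools = await session.list_tools()
            for tool in tools.tools:
                print(f"{tool.name}: {tool.description}")


asyncio.run(main())
```

If the Bearer token is missing or invalid, the connection should be rejected before the handshake completes.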
To run in stdio mode instead (for MCP clients that communicate over stdin/stdout), uncomment the relevant block at the bottom of server.py:
```python
if __name__ == "__main__":
    mcp.run(transport="stdio")
```

Note
This section is only relevant if you're running the server locally. If you're using the hosted server, the only thing you need is your API key in the quickstart configs above.
The server is configured through environment variables. You can set these in a .env file in the project root.
| Variable | Description | Default |
|---|---|---|
| `CONFIDENT_API_KEY` | Your Confident AI API key | Required |
| `CONFIDENT_ENVIRONMENT` | `LOCAL`, `PROD`, or `ON_PREM` | `LOCAL` |
| `CONFIDENT_REGION` | `US`, `EU`, or `AU` (only used when `CONFIDENT_ENVIRONMENT=PROD`) | `US` |
| `CONFIDENT_BACKEND_LOCAL_URL` | Backend URL for local development | — |
| `CONFIDENT_BACKEND_US_PROD_URL` | US production backend URL | — |
| `CONFIDENT_BACKEND_EU_PROD_URL` | EU production backend URL | — |
| `CONFIDENT_BACKEND_AU_PROD_URL` | AU production backend URL | — |
| `CONFIDENT_BACKEND_ON_PREM_URL` | On-prem backend URL (required when `CONFIDENT_ENVIRONMENT=ON_PREM`) | — |
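For example, a `.env` for running the server against an on-prem Confident AI deployment might look like the following (the backend URL is a placeholder — use your own deployment's address):

```bash
# .env — example on-prem configuration (backend URL is a placeholder)
CONFIDENT_API_KEY=<YOUR_CONFIDENT_API_KEY>
CONFIDENT_ENVIRONMENT=ON_PREM
CONFIDENT_BACKEND_ON_PREM_URL=https://confident.internal.example.com
```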
Prompts — 7 tools
Manage prompt templates with full version control — pull, push, version, and interpolate.
| Tool | Description |
|---|---|
| `pull_prompt` | Fetch a prompt by alias, version, label, or commit hash |
| `push_prompt` | Create or update a prompt template on Confident AI |
| `interpolate_prompt` | Locally render a prompt template by replacing placeholders with values |
| `create_prompt_version` | Assign a version string to a specific prompt commit |
| `list_prompt_versions` | List all formal versions of a prompt |
| `list_prompt_commits` | List the full commit history of a prompt |
| `list_prompts` | List all prompts in your project |
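As a sketch of how an MCP client invokes these tools, here's a hypothetical `pull_prompt` call via the MCP Python SDK's `call_tool` method, run inside the `main()` coroutine from the local-connection sketch above. The argument names and the alias are illustrative, not the tool's actual schema — use `list_tools` to inspect the real parameters:

```python
# Assumes `session` is an initialized ClientSession (see the
# local-connection sketch). Argument names are illustrative.
result = await session.call_tool(
    "pull_prompt",
    arguments={"alias": "support-agent-system"},  # hypothetical prompt alias
)
for block in result.content:
    print(block)
```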
Datasets — 2 tools
Pull evaluation datasets for use in local test runs or agent workflows.
| Tool | Description |
|---|---|
| `pull_dataset` | Fetch a dataset (single-turn or multi-turn) by alias |
| `list_datasets` | List all datasets in your project |
Evaluate — 2 tools
Trigger cloud evaluations and simulate multi-turn conversations.
| Tool | Description |
|---|---|
| `run_llm_evals` | Run cloud evaluations on a batch of test cases against a metric collection |
| `simulate_conversation` | Simulate the next turn of a multi-turn conversation using a scenario and expected outcome |
Traces, Threads, and Spans — 9 tools
Browse, inspect, and evaluate production observability data at every level of your LLM pipeline.
| Tool | Description |
|---|---|
| `list_traces` | List traces with filtering by environment, time range, and sort order |
| `get_trace` | Get full details of a specific trace, including all spans |
| `list_threads` | List conversation threads with filtering and pagination |
| `get_thread` | Get full details of a thread, including all traces and thread-level metrics |
| `list_spans` | List spans with filtering by type, error state, prompt version, and more |
| `get_span` | Get full details of a span, including I/O, cost, metrics, and annotations |
| `evaluate_trace` | Trigger a cloud evaluation on a specific trace |
| `evaluate_thread` | Trigger a cloud evaluation on a conversation thread |
| `evaluate_span` | Trigger a cloud evaluation on a specific span |
Annotations — 4 tools
Create and manage human feedback on traces, spans, and threads.
| Tool | Description |
|---|---|
| `list_annotations` | List annotations with filtering by target, type, and rating range |
| `get_annotation` | Get full details of a specific annotation |
| `create_annotation` | Create a new annotation (thumbs rating or star rating) on a trace, span, or thread |
| `update_annotation` | Update an existing annotation's rating, explanation, or expected output |
Test Runs — 2 tools
Inspect past evaluation runs and their results.
| Tool | Description |
|---|---|
| `list_test_runs` | List test runs with filtering by status, time range, and multi-turn type |
| `get_test_run` | Get full details of a test run, including per-test-case metric scores and reasoning |
Metric Collections — 1 tool
Discover available metric collections before triggering evaluations.
| Tool | Description |
|---|---|
| `list_metric_collections` | List all metric collections, including their metrics and thresholds |
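Because `list_metric_collections` exists precisely so you can discover collections before evaluating, a typical flow chains it with `run_llm_evals`. A hypothetical sketch, again assuming an initialized `session` and illustrative argument shapes (not the tools' actual schemas):

```python
# Discover available metric collections, then kick off a cloud eval.
# Argument names below are illustrative — inspect the tool schemas
# via list_tools for the real parameters.
collections = await session.call_tool("list_metric_collections")
print(collections.content)  # pick a collection name from this output

result = await session.call_tool(
    "run_llm_evals",
    arguments={
        "metric_collection": "correctness-suite",  # hypothetical name
        "test_cases": [
            {"input": "What's your refund policy?", "actual_output": "..."},
        ],
    },
)
print(result.content)
```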
Caution
The hosted /mcp endpoint is strictly for internal development and experimental use. It is not designed for public consumption. The API and its underlying data structures are unstable and subject to change, breaking updates, or removal at any time without prior notice. Do not build production applications or rely on this public endpoint for any critical workflows.
This project is licensed under the terms of the MIT License.
