- Overview
- Prerequisites
- Installation & Initial Setup
- Configure MCP Server's Access to Deephaven Community Core
- Configure Your LLM Tool to Use MCP Servers
- Troubleshooting
- Contributing
- Advanced Usage & Further Information
- Community & Support
- License
Deephaven MCP, which implements the Model Context Protocol (MCP) standard, provides tools to orchestrate, inspect, and interact with Deephaven Community Core servers, and to access conversational documentation via LLM-powered Docs Servers. It's designed for data scientists, engineers, and anyone looking to leverage Deephaven's capabilities through programmatic interfaces or integrated LLM tools.
Manages and connects to multiple Deephaven Community Core worker nodes. This allows for unified control and interaction with your Deephaven instances from various client applications.
Provides access to an LLM-powered conversational Q&A interface for Deephaven documentation. Get answers to your Deephaven questions in natural language.
- Integrate Deephaven with LLM-powered development tools (e.g., Claude Desktop, GitHub Copilot) for AI-assisted data exploration, code generation, and analysis.
- Programmatically manage and query multiple Deephaven worker nodes.
- Quickly find information and examples from Deephaven documentation using natural language queries.
```mermaid
graph TD
    A[Clients: MCP Inspector / Claude Desktop / etc.] -- SSE/stdio (MCP) --> B(MCP Community Server);
    B -- Manages --> C(Deephaven Core Worker 1);
    B -- Manages --> D(Deephaven Core Worker N);
```
Clients connect to the MCP Community Server, which in turn manages and communicates with one or more Deephaven Community Core workers.
```mermaid
graph TD
    A[User/Client/API e.g., Claude Desktop] -- stdio (MCP) --> PROXY(mcp-proxy);
    PROXY -- HTTP (SSE) --> B(MCP Docs Server - FastAPI, LLM);
    B -- Accesses --> C[Deephaven Documentation Corpus];
```

LLM tools and other stdio-based clients connect to the Docs Server via the `mcp-proxy`, which forwards requests to the main HTTP/SSE-based Docs Server.
- Python: Version 3.9 or later. (Download Python)
- Access to Deephaven Community Core instance(s): To use the MCP Community Server for interacting with Deephaven, you will need one or more Deephaven Community Core instances running and network-accessible.
- Choose your Python environment setup method:
  - Option A: `uv` (Recommended): A very fast Python package installer and resolver. If you don't have it, you can install it via `pip install uv`, or see the uv installation guide.
  - Option B: Standard Python `venv` and `pip`: Uses Python's built-in virtual environment (`venv`) tools and `pip`.
The recommended way to install `deephaven-mcp` is from PyPI. This provides the latest stable release and is suitable for most users.
Choose one of the following Python environment and package management tools:
If you have `uv` installed (or install it via `pip install uv`):
1. Create and activate a virtual environment with your desired Python version. `uv` works best when operating within a virtual environment. To create one (e.g., named `.venv`) using a specific Python interpreter (e.g., Python 3.9), run:

   ```sh
   uv venv .venv -p 3.9
   ```

   Replace `3.9` with your target Python version (e.g., `3.10`, `3.11`) or the full path to a Python executable. Then, activate it:
   - On macOS/Linux: `source .venv/bin/activate`
   - On Windows (PowerShell): `.venv\Scripts\Activate.ps1`
   - On Windows (CMD): `.venv\Scripts\activate.bat`
2. Install `deephaven-mcp`:

   ```sh
   uv pip install deephaven-mcp
   ```

   This command installs `deephaven-mcp` and its dependencies into the active virtual environment. If you skipped the explicit virtual-environment creation step above, `uv` may still create or use one automatically (typically `.venv` in your current directory if `UV_AUTO_CREATE_VENV` is not `false`, or a globally managed one). Whenever a virtual environment is in use (created explicitly or automatically by `uv`), ensure it remains active for manual command-line use of `dh-mcp-community` or `dh-mcp-docs`, or if your LLM tool requires an active environment.
1. Create a virtual environment (e.g., named `.venv`):

   ```sh
   python -m venv .venv
   ```

2. Activate the virtual environment:
   - On macOS/Linux: `source .venv/bin/activate`
   - On Windows (Command Prompt/PowerShell): `.venv\Scripts\activate`

3. Install `deephaven-mcp` into the activated virtual environment:

   ```sh
   pip install deephaven-mcp
   ```

   Ensure this virtual environment is active in any terminal session where you intend to run `dh-mcp-community` or `dh-mcp-docs` manually, or if your LLM tool requires an active environment when spawning these processes.
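Whichever option you chose, you can quickly confirm that the installed console scripts are reachable from the active environment. A minimal check (the entry-point names `dh-mcp-community` and `dh-mcp-docs` come from the steps above):

```python
import shutil

# Look up the deephaven-mcp console scripts on the current PATH.
# If either reports "not found", the virtual environment is likely not active.
for exe in ("dh-mcp-community", "dh-mcp-docs"):
    location = shutil.which(exe)
    print(f"{exe}: {location or 'not found'}")
```

If both resolve to paths inside your `.venv`, the environment is set up correctly for the LLM tool configurations shown later.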
This section explains how to configure the Deephaven MCP Community Server to connect to and manage your Deephaven Community Core instances. This involves creating a worker definition file and understanding how the server locates this file.
The Deephaven MCP Community Server requires a JSON configuration file that describes the Deephaven Community Core worker instances it can connect to.
- The file must be a JSON object with a top-level key named `"workers"`.
- The value of `"workers"` is an object where each key is a unique worker name (e.g., `"local_worker"`, `"prod_cluster_1"`) and the value is a configuration object for that worker.
All fields are optional. Default values are applied by the server if a field is omitted.
- `host` (string): Hostname or IP address of the Deephaven Community Core worker (e.g., `"localhost"`).
- `port` (integer): Port number for the worker connection (e.g., `10000`).
- `auth_type` (string): Authentication type. Supported values include:
  - `"token"`: For token-based authentication.
  - `"basic"`: For username/password authentication (use `auth_token` for `username:password`, or see the server docs for separate fields if supported).
  - `"anonymous"`: For no authentication.
- `auth_token` (string): The authentication token if `auth_type` is `"token"`. For `"basic"` auth, this is typically the password, or `username:password` if the server expects them combined. Consult your Deephaven server's authentication documentation for specifics.
- `never_timeout` (boolean): If `true`, the MCP server will attempt to configure the session to this worker to never time out. Server-side configuration may still override this.
- `session_type` (string): The type of session to create. Common values are `"groovy"` or `"python"`.
- `use_tls` (boolean): Set to `true` if the connection to the worker requires TLS/SSL.
- `tls_root_certs` (string): Absolute path to a PEM file containing trusted root CA certificates for TLS verification. If omitted, system CAs may be used, or verification may be less strict, depending on the client library.
- `client_cert_chain` (string): Absolute path to a PEM file containing the client's TLS certificate chain. Used for client-side certificate authentication (mTLS).
- `client_private_key` (string): Absolute path to a PEM file containing the client's private key. Used for client-side certificate authentication (mTLS).
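Because every field is optional, a malformed file often fails only at connection time. A small shape check along these lines can catch mistakes earlier (`validate_workers_config` is a hypothetical helper shown for illustration, not part of deephaven-mcp):

```python
import json

def validate_workers_config(text: str) -> list[str]:
    """Check the basic shape of a deephaven_workers.json document
    and return the configured worker names."""
    cfg = json.loads(text)
    workers = cfg.get("workers")
    if not isinstance(workers, dict):
        raise ValueError('expected a top-level "workers" object')
    for name, spec in workers.items():
        if not isinstance(spec, dict):
            raise ValueError(f"worker {name!r} must be a JSON object")
        if "port" in spec and not isinstance(spec["port"], int):
            raise ValueError(f"worker {name!r}: port must be an integer")
        if "host" in spec and not isinstance(spec["host"], str):
            raise ValueError(f"worker {name!r}: host must be a string")
    return list(workers)

sample = '{"workers": {"my_local_deephaven": {"host": "localhost", "port": 10000}}}'
print(validate_workers_config(sample))
```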
```json
{
  "workers": {
    "my_local_deephaven": {
      "host": "localhost",
      "port": 10000
    },
    "secure_remote_worker": {
      "host": "secure.deephaven.example.com",
      "port": 10001,
      "auth_type": "token",
      "auth_token": "your-secret-api-token-here",
      "use_tls": true,
      "tls_root_certs": "/path/to/root.crt",
      "client_cert_chain": "/path/to/client.crt",
      "client_private_key": "/path/to/client.key"
    }
  }
}
```
The `deephaven_workers.json` file can contain sensitive information such as authentication tokens, usernames, and passwords. Protect it with appropriate filesystem permissions to prevent unauthorized access. For example, on Unix-like systems (Linux, macOS), you can restrict access to the owner only:

```sh
chmod 600 /path/to/your/deephaven_workers.json
```
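If you generate this file from a script, the same restriction can be applied with Python's standard library. A sketch (note that on Windows, `os.chmod` only toggles the read-only flag, so prefer filesystem ACLs there):

```python
import os
import stat

config_path = "deephaven_workers.json"  # adjust to your actual path
if not os.path.exists(config_path):
    # Create a placeholder so this example is self-contained.
    open(config_path, "w").close()

# Owner read/write only -- the equivalent of chmod 600.
os.chmod(config_path, stat.S_IRUSR | stat.S_IWUSR)
print(oct(os.stat(config_path).st_mode & 0o777))
```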
- Ensure all file paths within the config (e.g., for TLS certificates if used) are absolute and accessible by the server process.
- The worker names are arbitrary and used to identify workers in client tools.
The `DH_MCP_CONFIG_FILE` environment variable tells the Deephaven MCP Community Server where to find your `deephaven_workers.json` file (detailed in The `deephaven_workers.json` File (Defining Your Core Workers)). You will set this environment variable as part of the server launch configuration within your LLM tool, as detailed in the Configure Your LLM Tool to Use MCP Servers section.

When launched by an LLM tool, the MCP Community Server process reads this variable to load your worker definitions. For general troubleshooting, or if you need to set other environment variables such as `PYTHONLOGLEVEL` (e.g., to `DEBUG` for verbose logs), these are also typically set within the LLM tool's MCP server configuration (see Defining MCP Servers for Your LLM Tool (The `mcpServers` JSON Object)).
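Conceptually, the server's startup resolution of this variable amounts to something like the following (an illustrative sketch, not the actual deephaven-mcp code; `resolve_config_path` is a hypothetical helper):

```python
import os

def resolve_config_path(env: dict) -> str:
    """Resolve the worker-definition file from DH_MCP_CONFIG_FILE."""
    path = env.get("DH_MCP_CONFIG_FILE")
    if not path:
        raise RuntimeError("DH_MCP_CONFIG_FILE is not set")
    if not os.path.isfile(path):
        raise RuntimeError(f"DH_MCP_CONFIG_FILE points to a missing file: {path}")
    return path

# The LLM tool supplies this variable via the "env" block of its MCP config;
# here we pass an existing file just to demonstrate the happy path.
print(resolve_config_path({"DH_MCP_CONFIG_FILE": os.__file__}))
```

Both failure modes here (unset variable, missing file) are common causes of the "server not found" symptoms covered in Troubleshooting.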
This section details how to configure your LLM tool (e.g., Claude Desktop, GitHub Copilot) to launch and communicate with the Deephaven MCP Community Server and the Deephaven MCP Docs Server. This involves providing a JSON configuration, known as the `"mcpServers"` object, to your LLM tool.
LLM tools that support the Model Context Protocol (MCP) can be configured to use the Deephaven MCP Community and Docs Servers. The LLM tool's configuration will typically define how to start the necessary MCP server processes.
The MCP Community Server, launched by your LLM tool, will attempt to connect to the Deephaven Community Core instances defined in your `deephaven_workers.json` file (pointed to by `DH_MCP_CONFIG_FILE`, as described in Setting `DH_MCP_CONFIG_FILE` (Informing the MCP Server)).
It's important to understand the following:
- MCP Server Independence: The MCP Community Server itself will start and be available to your LLM tool even if some or all configured Deephaven Community Core workers are not currently running or accessible. The LLM tool will be able to list the configured workers and see their status (e.g., unavailable, connected).
- Worker Interaction: To successfully perform operations on a specific Deephaven Community Core worker (e.g., list tables, execute scripts), that particular worker must be running and network-accessible from the environment where the MCP Community Server process is executing.
- Configuration is Key: Ensure your `deephaven_workers.json` file accurately lists the workers you intend to use. The MCP server uses this configuration to know which workers to attempt to manage.
Your LLM tool requires a specific JSON configuration to define how MCP servers are launched. This configuration is structured as a JSON object with a top-level key named `"mcpServers"`. This object tells the tool how to start the Deephaven MCP Community Server (for interacting with Deephaven Community Core) and the `mcp-proxy` (for interacting with the Docs Server).

Depending on your LLM tool, this `"mcpServers"` object might be:

- The entire content of a dedicated file (e.g., named `mcp.json` in VS Code).
- Part of a larger JSON configuration file used by the tool (e.g., for Claude Desktop).

Consult your LLM tool's documentation for the precise file name and location. Below are two examples of the `"mcpServers"` JSON structure. Choose the one that matches your Python environment setup (either `uv` or `pip + venv`).

Important: All paths in the JSON examples (e.g., `/full/path/to/...`) must be replaced with actual, absolute paths on your system.
```json
{
  "mcpServers": {
    "deephaven-community": {
      "command": "uv",
      "args": [
        "--directory",
        "/full/path/to/deephaven-mcp",
        "run",
        "dh-mcp-community"
      ],
      "env": {
        "DH_MCP_CONFIG_FILE": "/full/path/to/your/deephaven_workers.json",
        "PYTHONLOGLEVEL": "INFO"
      }
    },
    "deephaven-docs": {
      "command": "uv",
      "args": [
        "--directory",
        "/full/path/to/deephaven-mcp",
        "run",
        "mcp-proxy",
        "https://deephaven-mcp-docs-prod.dhc-demo.deephaven.io/sse"
      ]
    }
  }
}
```
Note: You can change `"PYTHONLOGLEVEL": "INFO"` to `"PYTHONLOGLEVEL": "DEBUG"` for more detailed server logs, as described in the Troubleshooting section.
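Since relative paths are the most common mistake in these configurations, a quick automated check over the `mcpServers` object can help. This is a hypothetical helper (`find_relative_paths`) for illustration only:

```python
import json

def find_relative_paths(mcp_servers_json: str) -> list[str]:
    """Flag values in command/args/env that look like paths but are not absolute."""
    suspicious = []
    servers = json.loads(mcp_servers_json)["mcpServers"]
    for name, spec in servers.items():
        candidates = [spec.get("command", "")]
        candidates += spec.get("args", [])
        candidates += list(spec.get("env", {}).values())
        for value in candidates:
            looks_like_path = "/" in value or "\\" in value
            # Absolute on POSIX ("/..."), UNC ("\\..."), or Windows drive ("C:...").
            is_absolute = value.startswith(("/", "\\")) or (len(value) > 1 and value[1] == ":")
            is_url = value.startswith(("http://", "https://"))
            if looks_like_path and not is_absolute and not is_url:
                suspicious.append(f"{name}: {value}")
    return suspicious

cfg = ('{"mcpServers": {"deephaven-community": {"command": "uv", '
       '"args": ["--directory", "./deephaven-mcp", "run", "dh-mcp-community"], '
       '"env": {"DH_MCP_CONFIG_FILE": "/abs/deephaven_workers.json"}}}}')
print(find_relative_paths(cfg))
```

Here the relative `./deephaven-mcp` is flagged, while the absolute `DH_MCP_CONFIG_FILE` value and the plain `uv` command name pass.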
```json
{
  "mcpServers": {
    "deephaven-community": {
      "command": "/full/path/to/your/deephaven-mcp/.venv/bin/dh-mcp-community",
      "args": [],
      "env": {
        "DH_MCP_CONFIG_FILE": "/full/path/to/your/deephaven_workers.json",
        "PYTHONLOGLEVEL": "INFO"
      }
    },
    "deephaven-docs": {
      "command": "/full/path/to/your/deephaven-mcp/.venv/bin/mcp-proxy",
      "args": [
        "https://deephaven-mcp-docs-prod.dhc-demo.deephaven.io/sse"
      ]
    }
  }
}
```
Note: You can change `"PYTHONLOGLEVEL": "INFO"` to `"PYTHONLOGLEVEL": "DEBUG"` for more detailed server logs, as described in the Troubleshooting section.
The `"mcpServers"` JSON object, whose structure is detailed in Defining MCP Servers for Your LLM Tool (The `mcpServers` JSON Object), needs to be placed in a specific configuration file or settings area for your LLM tool. Here's how to integrate it with common tools:
- Claude Desktop:
  - The `mcpServers` object should be added to the main JSON object within this file:
    - macOS: `~/Library/Application Support/Claude/claude_desktop_config.json`
    - Windows: `%APPDATA%\Claude\claude_desktop_config.json` (e.g., `C:\Users\<YourUsername>\AppData\Roaming\Claude\claude_desktop_config.json`)
    - Linux: `~/.config/Claude/claude_desktop_config.json`
- GitHub Copilot (Visual Studio Code):
  - In your project's root directory, create or edit the file `.vscode/mcp.json`.
  - This file's content should be the `"mcpServers"` JSON object, as shown in the examples in Defining MCP Servers for Your LLM Tool (The `mcpServers` JSON Object).
- GitHub Copilot (JetBrains IDEs - IntelliJ IDEA, PyCharm, etc.):
  - The method for configuring custom MCP servers may vary. Consult the official GitHub Copilot extension documentation for your specific JetBrains IDE for the most current instructions; it may involve a specific settings panel or a designated configuration file.
Once you have saved the `"mcpServers"` JSON object in the correct location for your LLM tool, restart the tool (Claude Desktop, VS Code, JetBrains IDEs, etc.). The configured servers (e.g., `deephaven-community`, `deephaven-docs`) should then be available in its MCP interface.
After restarting your LLM tool, the first step is to verify that the MCP servers are recognized:
- Open your LLM tool's interface where it lists available MCP servers or data sources.
- You should see `deephaven-community` and `deephaven-docs` (or the names you configured in the `mcpServers` object) listed.
- Attempt to connect to or interact with one of them (e.g., by listing available Deephaven Community Core workers via the `deephaven-community` server).
If the servers are not listed or you encounter errors at this stage, please proceed to the Troubleshooting section for guidance.
- LLM Tool Can't Connect / Server Not Found:
  - Verify all paths in your LLM tool's JSON configuration are absolute and correct.
  - Ensure the `DH_MCP_CONFIG_FILE` environment variable is correctly set in the JSON config and points to a valid worker file.
  - Ensure any Deephaven Community Core workers you intend to use (as defined in `deephaven_workers.json`) are running and accessible from the MCP Community Server's environment.
  - Check for typos in server names, commands, or arguments in the JSON config.
  - Validate the syntax of your JSON configurations (the `mcpServers` object in the LLM tool, and `deephaven_workers.json`). A misplaced comma or incorrect quote can prevent the configuration from being parsed correctly. Use a JSON validator tool or your IDE's linting features.
  - Set `PYTHONLOGLEVEL=DEBUG` in the `env` block of your JSON config to get more detailed logs from the MCP servers. For example, Claude Desktop often saves these to files like `~/Library/Logs/Claude/mcp-server-SERVERNAME.log`. Consult your LLM tool's documentation for specific log file locations.
- Firewall or Network Issues:
  - Ensure that no firewall rules (local or network) prevent:
    - the MCP Community Server from connecting to your Deephaven Community Core instances on their specified hosts and ports;
    - your LLM tool or client from connecting to the `mcp-proxy`'s target URL (https://deephaven-mcp-docs-prod.dhc-demo.deephaven.io) if using the Docs Server.
  - Test basic network connectivity (e.g., using `ping` or `curl` from the relevant machine) if connections are failing.
- `command not found` for `uv` (in LLM tool logs):
  - Ensure `uv` is installed and its installation directory is in your system's `PATH` environment variable, accessible by the LLM tool.
- `command not found` for `dh-mcp-community` or `mcp-proxy` (venv option, in LLM tool logs):
  - Double-check that the `command` field in your JSON config uses the correct absolute path to the executable within your `.venv/bin/` (or `.venv\Scripts\`) directory.
- Port Conflicts: If a server fails to start (check logs), another process might be using the required port (e.g., port 8000 for the default SSE endpoint).
- Python Errors in Server Logs: Check the server logs for Python tracebacks. Ensure all dependencies were installed correctly (see Installation & Initial Setup).
- Worker Configuration Issues:
  - If the Community Server starts but can't connect to Deephaven Community Core workers, verify your `deephaven_workers.json` file (see The `deephaven_workers.json` File (Defining Your Core Workers) for details on its structure and content).
  - Ensure the target Deephaven Community Core instances are running and network-accessible.
  - Confirm that the process running the MCP Community Server has read permissions for the `deephaven_workers.json` file itself.
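When checking worker reachability from the machine running the MCP Community Server, a simple TCP probe using only Python's standard library can substitute for `ping` or `curl` (the host/port below match the example worker from `deephaven_workers.json`):

```python
import socket

def can_connect(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Probe a local Deephaven Community Core worker on its default port.
print(can_connect("localhost", 10000))
```

A `False` here means the worker is down, the host/port in the config is wrong, or a firewall is blocking the connection.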
We warmly welcome contributions to Deephaven MCP! Whether it's bug reports, feature suggestions, documentation improvements, or code contributions, your help is valued.
- Reporting Issues: Please use the GitHub Issues tracker.
- Development Guidelines: For details on setting up your development environment, coding standards, running tests, and the pull request process, please see our Developer & Contributor Guide.
- Detailed Server APIs and Tools: For in-depth information about the tools exposed by the Community Server (e.g., `refresh`, `describe_workers`) and the Docs Server (`docs_chat`), refer to the Developer & Contributor Guide.
- `uv` Workflow: For more details on using `uv` for project management, see docs/UV.md.
- GitHub Issues: For bug reports and feature requests: https://github.com/deephaven/deephaven-mcp/issues
- Deephaven Community Slack: Join the conversation and ask questions: https://deephaven.io/slack
This project is licensed under the Apache 2.0 License. See the LICENSE file for details.