
deephaven-mcp

Overview

Deephaven MCP, which implements the Model Context Protocol (MCP) standard, provides tools to orchestrate, inspect, and interact with Deephaven Community Core servers, and to access conversational documentation via LLM-powered Docs Servers. It's designed for data scientists, engineers, and anyone looking to leverage Deephaven's capabilities through programmatic interfaces or integrated LLM tools.

Deephaven MCP Components

Community Server

Manages and connects to multiple Deephaven Community Core worker nodes. This allows for unified control and interaction with your Deephaven instances from various client applications.

Docs Server

Provides access to an LLM-powered conversational Q&A interface for Deephaven documentation. Get answers to your Deephaven questions in natural language.

Key Use Cases

  • Integrate Deephaven with LLM-powered development tools (e.g., Claude Desktop, GitHub Copilot) for AI-assisted data exploration, code generation, and analysis.
  • Programmatically manage and query multiple Deephaven worker nodes.
  • Quickly find information and examples from Deephaven documentation using natural language queries.

Architecture Diagrams

Community Server Architecture

graph TD
    A[Clients: MCP Inspector / Claude Desktop / etc.] -- SSE/stdio (MCP) --> B(MCP Community Server);
    B -- Manages --> C(Deephaven Core Worker 1);
    B -- Manages --> D(Deephaven Core Worker N);

Clients connect to the MCP Community Server, which in turn manages and communicates with one or more Deephaven Community Core workers.

Docs Server Architecture

graph TD
    A[User/Client/API e.g., Claude Desktop] -- stdio (MCP) --> PROXY(mcp-proxy);
    PROXY -- HTTP (SSE) --> B(MCP Docs Server - FastAPI, LLM);
    B -- Accesses --> C[Deephaven Documentation Corpus];

LLM tools and other stdio-based clients connect to the Docs Server via the mcp-proxy, which forwards requests to the main HTTP/SSE-based Docs Server.
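
If you want to test this path by hand, the same mcp-proxy invocation that LLM tools launch (shown in the configuration examples later in this document) can be run directly from an environment where deephaven-mcp is installed:

# Start the stdio <-> SSE proxy against the hosted Docs Server;
# stdio-based MCP clients can then connect through this local process
mcp-proxy https://deephaven-mcp-docs-prod.dhc-demo.deephaven.io/sse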


Prerequisites

A working Python installation (the examples below use Python 3.9 or later) and either uv or pip for managing packages and virtual environments; see Installation & Initial Setup for details.

Installation & Initial Setup

The recommended way to install deephaven-mcp is from PyPI. This provides the latest stable release and is suitable for most users.

Installing from PyPI (Recommended for Users)

Choose one of the following Python environment and package management tools:

Option A: Using uv (Fast, Recommended)

If you have uv installed (or install it via pip install uv):

  1. Create and activate a virtual environment with your desired Python version: uv works best when operating within a virtual environment. To create one (e.g., named .venv) using a specific Python interpreter (e.g., Python 3.9), run:

    uv venv .venv -p 3.9 

    Replace 3.9 with your target Python version (e.g., 3.10, 3.11) or the full path to a Python executable. Then, activate it:

    • On macOS/Linux: source .venv/bin/activate
    • On Windows (PowerShell): .venv\Scripts\Activate.ps1
    • On Windows (CMD): .venv\Scripts\activate.bat
  2. Install deephaven-mcp:

    uv pip install deephaven-mcp

This command installs deephaven-mcp and its dependencies into the active virtual environment. If you skipped the explicit virtual-environment step above, uv may still create or use one automatically (typically .venv in your current directory, unless UV_AUTO_CREATE_VENV is set to false, or a globally managed environment). Whichever virtual environment is used, keep it active in any terminal where you run dh-mcp-community or dh-mcp-docs manually, and whenever your LLM tool requires an active environment.

Option B: Using Standard pip and venv

  1. Create a virtual environment (e.g., named .venv):
    python -m venv .venv
  2. Activate the virtual environment:
    • On macOS/Linux:
      source .venv/bin/activate
    • On Windows (Command Prompt/PowerShell):
      .venv\Scripts\activate
  3. Install deephaven-mcp into the activated virtual environment:
    pip install deephaven-mcp
    Ensure this virtual environment is active in any terminal session where you intend to run dh-mcp-community or dh-mcp-docs manually, or if your LLM tool requires an active environment when spawning these processes.
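
Optionally, as a quick sanity check with either option, you can confirm that the package and its console scripts are visible from the active environment:

# Show package metadata for the installed distribution
pip show deephaven-mcp          # with uv: uv pip show deephaven-mcp

# Confirm the console scripts are on the PATH (macOS/Linux)
which dh-mcp-community
which dh-mcp-docs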

Configure MCP Server's Access to Deephaven Community Core

This section explains how to configure the Deephaven MCP Community Server to connect to and manage your Deephaven Community Core instances. This involves creating a worker definition file and understanding how the server locates this file.

The deephaven_workers.json File (Defining Your Core Workers)

Purpose and Structure

The Deephaven MCP Community Server requires a JSON configuration file that describes the Deephaven Community Core worker instances it can connect to.

  • The file must be a JSON object with a top-level key named "workers".
  • The value of "workers" is an object where each key is a unique worker name (e.g., "local_worker", "prod_cluster_1") and the value is a configuration object for that worker.

Worker Configuration Fields

All fields are optional. Default values are applied by the server if a field is omitted.

  • host (string): Hostname or IP address of the Deephaven Community Core worker (e.g., "localhost").
  • port (integer): Port number for the worker connection (e.g., 10000).
  • auth_type (string): Authentication type. Supported values include:
    • "token": For token-based authentication.
    • "basic": For username/password authentication (see auth_token below).
    • "anonymous": For no authentication.
  • auth_token (string): The authentication token when auth_type is "token". For "basic" auth, this is typically the password, or username:password if the server expects them combined; consult your Deephaven server's authentication documentation for specifics.
  • never_timeout (boolean): If true, the MCP server will attempt to configure the session to this worker to never time out. Server-side configurations may still override this.
  • session_type (string): Specifies the type of session to create. Common values are "groovy" or "python".
  • use_tls (boolean): Set to true if the connection to the worker requires TLS/SSL.
  • tls_root_certs (string): Absolute path to a PEM file containing trusted root CA certificates for TLS verification. If omitted, system CAs might be used, or verification might be less strict depending on the client library.
  • client_cert_chain (string): Absolute path to a PEM file containing the client's TLS certificate chain. Used for client-side certificate authentication (mTLS).
  • client_private_key (string): Absolute path to a PEM file containing the client's private key. Used for client-side certificate authentication (mTLS).

Example deephaven_workers.json

{
  "workers": {
    "my_local_deephaven": {
      "host": "localhost",
      "port": 10000
    },
    "secure_remote_worker": {
      "host": "secure.deephaven.example.com",
      "port": 10001,
      "auth_type": "token",
      "auth_token": "your-secret-api-token-here",
      "use_tls": true,
      "tls_root_certs": "/path/to/root.crt",
      "client_cert_chain": "/path/to/client.crt",
      "client_private_key": "/path/to/client.key"
    }
  }
}

Security Note for deephaven_workers.json

The deephaven_workers.json file can contain sensitive information such as authentication tokens, usernames, and passwords. Ensure that this file is protected with appropriate filesystem permissions to prevent unauthorized access. For example, on Unix-like systems (Linux, macOS), you can restrict permissions to the owner only using the command:

chmod 600 /path/to/your/deephaven_workers.json

Additional Notes for deephaven_workers.json

  • Ensure all file paths within the config (e.g., for TLS certificates if used) are absolute and accessible by the server process.
  • The worker names are arbitrary and used to identify workers in client tools.
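
Since the server cannot load a malformed file, it is worth validating the JSON before launch, for example with Python's built-in json.tool module:

# Pretty-prints the file on success; prints a parse error (and exits
# non-zero) if the JSON is malformed
python -m json.tool /path/to/your/deephaven_workers.json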

Setting DH_MCP_CONFIG_FILE (Informing the MCP Server)

The DH_MCP_CONFIG_FILE environment variable tells the Deephaven MCP Community Server where to find your deephaven_workers.json file (detailed in The deephaven_workers.json File (Defining Your Core Workers)). You will set this environment variable as part of the server launch configuration within your LLM tool, as detailed in the Configure Your LLM Tool to Use MCP Servers section.

When launched by an LLM tool, the MCP Community Server process reads this variable to load your worker definitions. For general troubleshooting or if you need to set other environment variables like PYTHONLOGLEVEL (e.g., to DEBUG for verbose logs), these are also typically set within the LLM tool's MCP server configuration (see Defining MCP Servers for Your LLM Tool (The mcpServers JSON Object)).
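
For manual runs outside an LLM tool (e.g., when testing your setup from a terminal), the same variables can be set in the shell before launching the server; a minimal sketch for macOS/Linux:

# Point the MCP Community Server at your worker definitions
export DH_MCP_CONFIG_FILE=/full/path/to/your/deephaven_workers.json

# Optional: verbose logging while troubleshooting
export PYTHONLOGLEVEL=DEBUG

# Launch the server (requires the virtual environment to be active)
dh-mcp-community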


Configure Your LLM Tool to Use MCP Servers

This section details how to configure your LLM tool (e.g., Claude Desktop, GitHub Copilot) to launch and communicate with the Deephaven MCP Community Server and the Deephaven MCP Docs Server. This involves providing a JSON configuration, known as the "mcpServers" object, to your LLM tool.

How LLM Tools Launch MCP Servers (Overview)

LLM tools that support the Model Context Protocol (MCP) can be configured to use the Deephaven MCP Community and Docs Servers. The LLM tool's configuration will typically define how to start the necessary MCP server processes.

Understanding Deephaven Core Worker Status (via MCP)

The MCP Community Server, launched by your LLM tool, will attempt to connect to the Deephaven Community Core instances defined in your deephaven_workers.json file (pointed to by DH_MCP_CONFIG_FILE as described in Setting DH_MCP_CONFIG_FILE (Informing the MCP Server)).

It's important to understand the following:

  • MCP Server Independence: The MCP Community Server itself will start and be available to your LLM tool even if some or all configured Deephaven Community Core workers are not currently running or accessible. The LLM tool will be able to list the configured workers and see their status (e.g., unavailable, connected).
  • Worker Interaction: To successfully perform operations on a specific Deephaven Community Core worker (e.g., list tables, execute scripts), that particular worker must be running and network-accessible from the environment where the MCP Community Server process is executing.
  • Configuration is Key: Ensure your deephaven_workers.json file accurately lists the workers you intend to use. The MCP server uses this configuration to know which workers to attempt to manage.

Defining MCP Servers for Your LLM Tool (The mcpServers JSON Object)

Your LLM tool requires a specific JSON configuration to define how MCP servers are launched. This configuration is structured as a JSON object with a top-level key named "mcpServers". This "mcpServers" object tells the tool how to start the Deephaven MCP Community Server (for interacting with Deephaven Community Core) and the mcp-proxy (for interacting with the Docs Server).

Depending on your LLM tool, this "mcpServers" object might be:

  • The entire content of a dedicated file (e.g., named mcp.json in VS Code).
  • A part of a larger JSON configuration file used by the tool (e.g., for Claude Desktop).

Consult your LLM tool's documentation for the precise file name and location. Below are two examples of the "mcpServers" JSON structure. Choose the one that matches your Python environment setup (either uv or pip + venv).

Important: All paths in the JSON examples (e.g., /full/path/to/...) must be replaced with actual, absolute paths on your system.

Example "mcpServers" object for uv users:

{
  "mcpServers": {
    "deephaven-community": {
      "command": "uv",
      "args": [
        "--directory",
        "/full/path/to/deephaven-mcp",
        "run",
        "dh-mcp-community"
      ],
      "env": {
        "DH_MCP_CONFIG_FILE": "/full/path/to/your/deephaven_workers.json",
        "PYTHONLOGLEVEL": "INFO" 
      }
    },
    "deephaven-docs": {
      "command": "uv",
      "args": [
        "--directory",
        "/full/path/to/deephaven-mcp",
        "run",
        "mcp-proxy",
        "https://deephaven-mcp-docs-prod.dhc-demo.deephaven.io/sse"
      ]
    }
  }
}

Note: You can change "PYTHONLOGLEVEL": "INFO" to "PYTHONLOGLEVEL": "DEBUG" for more detailed server logs, as further detailed in the Troubleshooting section.

Example "mcpServers" object for pip + venv users:

{
  "mcpServers": {
    "deephaven-community": {
      "command": "/full/path/to/your/deephaven-mcp/.venv/bin/dh-mcp-community",
      "args": [], 
      "env": {
        "DH_MCP_CONFIG_FILE": "/full/path/to/your/deephaven_workers.json",
        "PYTHONLOGLEVEL": "INFO"
      }
    },
    "deephaven-docs": {
      "command": "/full/path/to/your/deephaven-mcp/.venv/bin/mcp-proxy",
      "args": [
        "https://deephaven-mcp-docs-prod.dhc-demo.deephaven.io/sse"
      ]
    }
  }
}

Note: You can change "PYTHONLOGLEVEL": "INFO" to "PYTHONLOGLEVEL": "DEBUG" for more detailed server logs, as further detailed in the Troubleshooting section.

Tool-Specific File Locations for the mcpServers Configuration

The "mcpServers" JSON object, whose structure is detailed in Defining MCP Servers for Your LLM Tool (The mcpServers JSON Object), needs to be placed in a specific configuration file or setting area for your LLM tool. Here’s how to integrate it with common tools:

  • Claude Desktop:
    • Add the mcpServers object to the main JSON object within this file (see the sketch after this list):
    • macOS: ~/Library/Application Support/Claude/claude_desktop_config.json
    • Windows: %APPDATA%\Claude\claude_desktop_config.json (e.g., C:\Users\<YourUsername>\AppData\Roaming\Claude\claude_desktop_config.json)
    • Linux: ~/.config/Claude/claude_desktop_config.json
  • GitHub Copilot (Visual Studio Code):
    • Place the mcpServers object in a dedicated configuration file (e.g., mcp.json, as noted in Defining MCP Servers for Your LLM Tool (The mcpServers JSON Object)); consult the VS Code Copilot documentation for the file's expected location.
  • GitHub Copilot (JetBrains IDEs - IntelliJ IDEA, PyCharm, etc.):
    • The method for configuring custom MCP servers may vary. Please consult the official GitHub Copilot extension documentation for your specific JetBrains IDE for the most current instructions. It might involve a specific settings panel or a designated configuration file.
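
For Claude Desktop, the mcpServers object sits at the top level of claude_desktop_config.json, alongside any settings already present. A sketch, with "..." placeholders standing in for the full entries shown earlier:

{
  "mcpServers": {
    "deephaven-community": {
      "command": "...",
      "args": ["..."],
      "env": { "DH_MCP_CONFIG_FILE": "..." }
    },
    "deephaven-docs": {
      "command": "...",
      "args": ["..."]
    }
  }
}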

Restarting Your LLM Tool (Applying the Configuration)

Once you have saved the "mcpServers" JSON object in the correct location for your LLM tool, restart the tool (Claude Desktop, VS Code, JetBrains IDEs, etc.). The configured servers (e.g., deephaven-community, deephaven-docs) should then be available in its MCP interface.

Verifying Your Setup

After restarting your LLM tool, the first step is to verify that the MCP servers are recognized:

  • Open your LLM tool's interface where it lists available MCP servers or data sources.
  • You should see deephaven-community and deephaven-docs (or the names you configured in the mcpServers object) listed.
  • Attempt to connect to or interact with one of them (e.g., by listing available Deephaven Community Core workers via the deephaven-community server).

If the servers are not listed or you encounter errors at this stage, please proceed to the Troubleshooting section for guidance.


Troubleshooting

  • LLM Tool Can't Connect / Server Not Found:
    • Verify all paths in your LLM tool's JSON configuration are absolute and correct.
    • Ensure DH_MCP_CONFIG_FILE environment variable is correctly set in the JSON config and points to a valid worker file.
    • Ensure any Deephaven Community Core workers you intend to use (as defined in deephaven_workers.json) are running and accessible from the MCP Community Server's environment.
    • Check for typos in server names, commands, or arguments in the JSON config.
    • Validate the syntax of your JSON configurations (mcpServers object in the LLM tool, and deephaven_workers.json). A misplaced comma or incorrect quote can prevent the configuration from being parsed correctly. Use a JSON validator tool or your IDE's linting features.
  • Check Server Logs: Set PYTHONLOGLEVEL=DEBUG in the env block of your JSON config to get more detailed logs from the MCP servers. For example, Claude Desktop often saves these to files like ~/Library/Logs/Claude/mcp-server-SERVERNAME.log. Consult your LLM tool's documentation for specific log file locations.
  • Firewall or Network Issues:
    • Ensure that no firewall rules (local or network) prevent:
      • The MCP Community Server from connecting to your Deephaven Community Core instances on their specified hosts and ports.
      • Your LLM tool or client from connecting to the mcp-proxy's target URL (https://deephaven-mcp-docs-prod.dhc-demo.deephaven.io) if using the Docs Server.
    • Test basic network connectivity (e.g., using ping or curl from the relevant machine) if connections are failing; see the sketch after this list.
  • command not found for uv (in LLM tool logs):
    • Ensure uv is installed and its installation directory is in your system's PATH environment variable, accessible by the LLM tool.
  • command not found for dh-mcp-community or mcp-proxy (venv option in LLM tool logs):
    • Double-check that the command field in your JSON config uses the correct absolute path to the executable within your .venv/bin/ (or .venv\Scripts\) directory.
  • Port Conflicts: If a server fails to start (check logs), another process might be using the required port (e.g., port 8000 for default SSE).
  • Python Errors in Server Logs: Check the server logs for Python tracebacks. Ensure all dependencies were installed correctly (see Installation & Initial Setup).
  • Worker Configuration Issues:
    • If the Community Server starts but can't connect to Deephaven Community Core workers, verify your deephaven_workers.json file (see The deephaven_workers.json File (Defining Your Core Workers) for details on its structure and content).
    • Ensure the target Deephaven Community Core instances are running and network-accessible.
    • Confirm that the process running the MCP Community Server has read permissions for the deephaven_workers.json file itself.
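
For the network checks above, a couple of hedged one-liners (substitute your own worker host and port from deephaven_workers.json; localhost:10000 matches the earlier example):

# TCP reachability of a Core worker
curl -v telnet://localhost:10000 --max-time 5

# HTTPS reachability of the hosted Docs Server
curl -I --max-time 5 https://deephaven-mcp-docs-prod.dhc-demo.deephaven.io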

Contributing

We warmly welcome contributions to Deephaven MCP! Whether it's bug reports, feature suggestions, documentation improvements, or code contributions, your help is valued.

  • Reporting Issues: Please use the GitHub Issues tracker.
  • Development Guidelines: For details on setting up your development environment, coding standards, running tests, and the pull request process, please see our Developer & Contributor Guide.

Advanced Usage & Further Information


Community & Support


License

This project is licensed under the Apache 2.0 License. See the LICENSE file for details.
