Model Context Protocol (MCP) server that allows AI chatbots to search, explore, and analyze datasets from data.gouv.fr, the French national Open Data platform, directly through conversation.
The data.gouv.fr MCP server is a tool that allows AI chatbots (like Claude, Gemini, or Cursor) to interact with datasets from data.gouv.fr. Instead of manually browsing the website, you can simply ask questions like "Quels jeux de données sont disponibles sur les prix de l'immobilier ?" ("Which datasets are available on real-estate prices?") or "Montre-moi les dernières données de population pour Paris" ("Show me the latest population data for Paris") and get instant answers. A hosted endpoint is available at https://mcp.data.gouv.fr/mcp, and you can also run the server locally if you prefer.
The server is built using the official Python SDK for MCP servers and clients and uses the Streamable HTTP transport protocol.
Use the hosted endpoint https://mcp.data.gouv.fr/mcp (recommended). If you self-host, swap in your own URL.
The MCP server configuration depends on your client; use the appropriate format from the examples below:
ChatGPT can connect via "Connectors" (beta; paid plans only: Plus/Pro/Team/Enterprise, depending on workspace enablement). Create a custom connector and set the URL to https://mcp.data.gouv.fr/mcp (no API key needed; the tools are read-only). If the connector option is missing from your account, OpenAI has not yet enabled it for you.
Add the following to your Claude Desktop configuration file (typically `~/Library/Application Support/Claude/claude_desktop_config.json` on macOS, or `%APPDATA%\Claude\claude_desktop_config.json` on Windows):
```json
{
  "mcpServers": {
    "datagouv": {
      "command": "npx",
      "args": [
        "mcp-remote",
        "https://mcp.data.gouv.fr/mcp"
      ]
    }
  }
}
```

Use the `claude mcp` command to add the MCP server:
```bash
claude mcp add --transport http datagouv https://mcp.data.gouv.fr/mcp
```

Add the following to your `~/.gemini/settings.json` file:
```json
{
  "mcpServers": {
    "datagouv": {
      "transport": "http",
      "httpUrl": "https://mcp.data.gouv.fr/mcp"
    }
  }
}
```

Edit your Vibe config (default `~/.vibe/config.toml`) and add the MCP server:
```toml
[[mcp_servers]]
name = "datagouv"
transport = "streamable-http"
url = "https://mcp.data.gouv.fr/mcp"
```

See the full Vibe MCP options in the official docs: MCP server configuration.
- Locate the `anythingllm_mcp_servers.json` file in your AnythingLLM storage plugins directory:
  - Mac: `~/Library/Application Support/anythingllm-desktop/storage/plugins/anythingllm_mcp_servers.json`
  - Linux: `~/.config/anythingllm-desktop/storage/plugins/anythingllm_mcp_servers.json`
  - Windows: `C:\Users\<username>\AppData\Roaming\anythingllm-desktop\storage\plugins\anythingllm_mcp_servers.json`
- Add the following configuration:
```json
{
  "mcpServers": {
    "datagouv": {
      "type": "streamable",
      "url": "https://mcp.data.gouv.fr/mcp"
    }
  }
}
```

For more details, see the AnythingLLM MCP documentation.
Add the following to your VS Code settings.json:
```json
{
  "servers": {
    "datagouv": {
      "url": "https://mcp.data.gouv.fr/mcp",
      "type": "http"
    }
  }
}
```

Cursor supports MCP servers through its settings. To configure the server:
- Open Cursor Settings
- Search for "MCP" or "Model Context Protocol"
- Add a new MCP server with the following configuration:
```json
{
  "mcpServers": {
    "datagouv": {
      "url": "https://mcp.data.gouv.fr/mcp",
      "transport": "http"
    }
  }
}
```

Add the following to your `~/.codeium/mcp_config.json`:
```json
{
  "mcpServers": {
    "datagouv": {
      "command": "npx",
      "args": [
        "-y",
        "mcp-remote",
        "https://mcp.data.gouv.fr/mcp"
      ]
    }
  }
}
```

Note:
- The hosted endpoint is https://mcp.data.gouv.fr/mcp. If you run the server yourself, replace it with your own URL (see "Run locally" below for the default local endpoint).
- This MCP server only exposes read-only tools for now, so no API key is required.
Before starting, clone this repository and change into it:
```bash
git clone git@github.com:datagouv/datagouv-mcp.git
cd datagouv-mcp
```

Docker is required for the recommended setup. Install it via Docker Desktop or any compatible Docker Engine before continuing.
```bash
# With default settings (port 8000, prod environment)
docker compose up -d

# With custom environment variables
MCP_PORT=8007 DATAGOUV_ENV=demo docker compose up -d

# Stop
docker compose down
```

Environment variables:
- `MCP_PORT`: port for the MCP HTTP server (defaults to `8000` when unset).
- `DATAGOUV_ENV`: `prod` (default) or `demo`. Controls which data.gouv.fr environment the server reads data from (https://www.data.gouv.fr or https://demo.data.gouv.fr). By default the MCP server talks to the production data.gouv.fr; set `DATAGOUV_ENV=demo` if you specifically need the demo environment.
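For illustration, these variables simply select which data.gouv.fr instance the server queries. A minimal sketch in Python (the function name is an assumption for illustration, not the project's actual module layout):

```python
import os

def datagouv_base_url() -> str:
    """Map DATAGOUV_ENV to the data.gouv.fr instance the server reads from (illustrative)."""
    env = os.environ.get("DATAGOUV_ENV", "prod")  # prod is the documented default
    return "https://demo.data.gouv.fr" if env == "demo" else "https://www.data.gouv.fr"
```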
You will need uv to install dependencies and run the server.
- Install dependencies:

```bash
uv sync
```

- Prepare the environment file
Copy the example environment file to create your own `.env` file:

```bash
cp .env.example .env
```

Then optionally edit `.env` and set the variables that matter for your run:
```bash
MCP_PORT=8007        # (defaults to 8000 when unset)
DATAGOUV_ENV=prod    # Allowed values: demo | prod (defaults to prod when unset)
```
Load the variables with your preferred method, e.g.:

```bash
set -a && source .env && set +a
```

- Start the HTTP MCP server
```bash
uv run main.py
```

Follow the steps in "Connect your chatbot to the MCP server" and simply swap the hosted URL for your local endpoint (default: `http://127.0.0.1:${MCP_PORT:-8000}/mcp`).
This MCP server uses FastMCP and implements the Streamable HTTP transport only.
STDIO and SSE are not supported.
Streamable HTTP transport (standards-compliant):
- `POST /mcp` - JSON-RPC messages (client → server)
- `GET /health` - Simple JSON health probe (`{"status":"ok","timestamp":"..."}`)
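To sanity-check these endpoints from Python, you can hit `/health` with a plain HTTP GET and open an MCP session over Streamable HTTP with the official SDK. A minimal sketch, assuming the `mcp` and `httpx` packages are installed and that your SDK version exposes `streamablehttp_client`:

```python
import asyncio

import httpx
from mcp import ClientSession
from mcp.client.streamable_http import streamablehttp_client

BASE = "https://mcp.data.gouv.fr"  # or your self-hosted URL

async def main() -> None:
    # Plain HTTP health probe
    print(httpx.get(f"{BASE}/health").json())  # {"status": "ok", "timestamp": "..."}

    # MCP session over Streamable HTTP: initialize, then list the available tools
    async with streamablehttp_client(f"{BASE}/mcp") as (read, write, _):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = await session.list_tools()
            print([tool.name for tool in tools.tools])

asyncio.run(main())
```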
The MCP server provides tools to interact with data.gouv.fr datasets:
- `search_datasets` - Search for datasets by keywords. Returns datasets with metadata (title, description, organization, tags, resource count).
  Parameters: `query` (required), `page` (optional, default: 1), `page_size` (optional, default: 20, max: 100)

- `get_dataset_info` - Get detailed information about a specific dataset (metadata, organization, tags, dates, license, etc.).
  Parameters: `dataset_id` (required)

- `list_dataset_resources` - List all resources (files) in a dataset with their metadata (format, size, type, URL).
  Parameters: `dataset_id` (required)

- `get_resource_info` - Get detailed information about a specific resource (format, size, MIME type, URL, dataset association, Tabular API availability).
  Parameters: `resource_id` (required)

- `query_resource_data` - Query data from a specific resource via the Tabular API. Fetches rows from a resource to answer questions.
  Parameters: `question` (required), `resource_id` (required), `page` (optional, default: 1)
  Note: Each call retrieves up to 200 rows (the maximum allowed by the Tabular API). Recommended workflow (see the sketch after this list): 1) use `search_datasets` to find the dataset, 2) use `list_dataset_resources` to see available resources, 3) use `query_resource_data` with the chosen resource ID. If the answer is not in the first page, use `page=2`, `page=3`, etc. to navigate through large datasets. Works for CSV/XLS resources within Tabular API size limits (CSV ≤ 100 MB, XLSX ≤ 12.5 MB).

- `download_and_parse_resource` - Download and parse a resource that is not accessible via the Tabular API (files too large, formats not supported, external URLs).
  Parameters: `resource_id` (required), `max_rows` (optional, default: 1000), `max_size_mb` (optional, default: 500)
  Supported formats: CSV, CSV.GZ, JSON, JSONL. Useful for files exceeding the Tabular API size limits or in formats it does not support.

- `get_metrics` - Get metrics (visits, downloads) for a dataset and/or a resource.
  Parameters: `dataset_id` (optional), `resource_id` (optional), `limit` (optional, default: 12, max: 100)
  Returns monthly statistics including visits and downloads, sorted by month in descending order (most recent first). At least one of `dataset_id` or `resource_id` must be provided. Note: this tool only works with the production environment (`DATAGOUV_ENV=prod`); the Metrics API has no demo/preprod environment.
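The recommended workflow can also be scripted against the server with the official MCP Python SDK. A hedged sketch (the placeholder IDs and the sample query are illustrative, not real identifiers):

```python
import asyncio

from mcp import ClientSession
from mcp.client.streamable_http import streamablehttp_client

async def main() -> None:
    async with streamablehttp_client("https://mcp.data.gouv.fr/mcp") as (read, write, _):
        async with ClientSession(read, write) as session:
            await session.initialize()

            # 1) Find a dataset by keywords
            found = await session.call_tool("search_datasets", {"query": "prix immobilier"})
            print(found.content)

            # 2) List the resources of a dataset chosen from step 1
            resources = await session.call_tool(
                "list_dataset_resources", {"dataset_id": "<dataset-id-from-step-1>"}
            )
            print(resources.content)

            # 3) Query rows from a resource (up to 200 rows per call; bump `page` to paginate)
            rows = await session.call_tool(
                "query_resource_data",
                {
                    "question": "What is the average price?",
                    "resource_id": "<resource-id-from-step-2>",
                    "page": 1,
                },
            )
            print(rows.content)

asyncio.run(main())
```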
Run the tests with pytest (these cover helper modules; the MCP server wiring is best exercised via the MCP Inspector):
```bash
# Run all tests
uv run pytest

# Run with verbose output
uv run pytest -v

# Run specific test file
uv run pytest tests/test_tabular_api.py

# Run with custom resource ID
RESOURCE_ID=3b6b2281-b9d9-4959-ae9d-c2c166dff118 uv run pytest tests/test_tabular_api.py

# Run with prod environment
DATAGOUV_ENV=prod uv run pytest
```

Use the official MCP Inspector to interactively test the server tools and resources.
Prerequisites:

- Node.js with `npx` available
Steps:
- Start the MCP server (see above)
- In another terminal, launch the inspector:

```bash
npx @modelcontextprotocol/inspector --http-url "http://127.0.0.1:${MCP_PORT}/mcp"
```

Adjust the URL if you exposed the server on another host/port.
This project follows PEP 8 style guidelines using Ruff for linting and formatting.
Before submitting contributions, either run these commands manually or install the pre-commit hook.

```bash
# Lint and sort imports, and format code
uv run ruff check --select I --fix && uv run ruff format
```

This repository uses a pre-commit hook that lints and formats code before each commit. Installing it is strongly recommended so the checks run automatically.
Install pre-commit hooks:
```bash
uv run pre-commit install
```

The pre-commit hook automatically:

- Checks YAML syntax
- Fixes end-of-file issues
- Removes trailing whitespace
- Checks for large files
- Runs Ruff linting and formatting
The release process uses the `tag_version.sh` script to create git tags and GitHub releases and to update CHANGELOG.md automatically. Package version numbers are derived from git tags using setuptools_scm, so no manual version updates are needed in `pyproject.toml`.
Prerequisites: GitHub CLI must be installed and authenticated, and you must be on the main branch with a clean working directory.
```bash
# Create a new release
./tag_version.sh <version>

# Example
./tag_version.sh 2.5.0

# Dry run to see what would happen
./tag_version.sh 2.5.0 --dry-run
```

The script automatically:
- Extracts commits since the last tag and formats them for CHANGELOG.md
- Identifies breaking changes (commits with `!:` in the subject)
- Creates a git tag and pushes it to the remote repository
- Creates a GitHub release with the changelog content
This project is licensed under the MIT License - see the LICENSE file for details.