A Model Context Protocol (MCP) server that provides AI-powered Google search using Gemini Flash with search grounding. This server enables Claude Desktop, Claude Code, and other MCP clients to perform real-time web searches and get AI-synthesized answers with citations.
- Real-time Google search via Gemini with search grounding
- Configurable model (defaults to Gemini Flash Latest for cost-efficiency)
- Search grounding with source citations
- Optimized responses for AI agent consumption
- Terse, structured output (bullet points, tables, code blocks)
- Automatic source attribution and search query tracking
- Node.js >= 20.0.0 (Node.js 18.x reached End-of-Life April 2025)
- Google API Key with Gemini API access
Install globally from NPM:
```bash
npm install -g @gpriday/ask-google-mcp
```

The `ask-google-mcp` command will be available globally.
For development or local testing:
```bash
# Clone the repository
git clone https://github.com/gpriday/ask-google-mcp.git
cd ask-google-mcp

# Install dependencies
npm install
```

Create a `.env` file in your project root or home directory (`~/.env`):
```bash
GOOGLE_API_KEY=your_api_key_here
```

You can get a Google API key from Google AI Studio.
For local development, validate your configuration with:
```bash
npm run check-env
```

The server will automatically load `.env` from:

- Current working directory (`.env`)
- Home directory (`~/.env`) as a fallback
- Or use environment variables directly
If installed globally:
```bash
ask-google-mcp
```

If running locally:

```bash
npm start
```

The server runs on stdio and communicates via JSON-RPC 2.0.
Note: When running globally, the server will look for .env in the current directory or use environment variables directly.
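Because the transport is stdio JSON-RPC 2.0, a client invokes the tool by writing a single JSON message to the server's stdin. A minimal sketch of building such a request (field names follow the MCP `tools/call` method; the `id` value is arbitrary):

```javascript
// Build a JSON-RPC 2.0 request for MCP's tools/call method.
// Messages are newline-delimited JSON written to the server's stdin.
function buildToolCall(id, question) {
  return JSON.stringify({
    jsonrpc: "2.0",
    id,
    method: "tools/call",
    params: {
      name: "ask_google",
      arguments: { question },
    },
  });
}

const msg = buildToolCall(1, "Latest ECMAScript standard and new features");
```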
Unit tests:

```bash
npm test
```

Integration test:

```bash
npm run test:integration
```

All tests:

```bash
npm run test:all
```

Grounded Google web research (Gemini).
Use when: the user says "check online", "ask google", or "research"; asks for the latest standards or versions; compares releases; or requests up-to-date facts.
Input:
- `question` (string, required) — the research question (1–10,000 characters)
- `output_file` (string, optional) — file path to save the response. Supports both absolute paths (`/Users/name/research.md`) and relative paths (`./docs/research.md`). Relative paths resolve from your project root.
- `model` (string, optional) — Gemini model to use: `flash` (default, recommended), `flash-lite` (faster/cheaper), or `pro` (most capable)
Output:
- Concise answer with citations
- Source URLs
- Search queries performed
- If `output_file` is provided, the response is also written to the specified file
Examples:
Basic query (uses flash model by default):
```json
{
  "name": "ask_google",
  "arguments": {
    "question": "Latest ECMAScript standard and new features"
  }
}
```

With file output (relative path):
```json
{
  "name": "ask_google",
  "arguments": {
    "question": "React 19: what's new vs 18?",
    "output_file": "./docs/react19-research.md"
  }
}
```

With file output (absolute path):
```json
{
  "name": "ask_google",
  "arguments": {
    "question": "React 19: what's new vs 18?",
    "output_file": "/Users/john/Documents/react19-research.md"
  }
}
```

Using the Pro model for complex queries:
```json
{
  "name": "ask_google",
  "arguments": {
    "question": "Compare microservice patterns: event sourcing vs CQRS vs saga",
    "model": "pro"
  }
}
```

Model Selection Guide:
- `flash` (default) — Best for most queries. Fast, cost-effective, excellent with search grounding.
- `flash-lite` — Use for simple factual lookups where speed is critical.
- `pro` — Use only for complex analysis or when `flash` results are insufficient. Slower and more expensive.
Add this server to your Claude Desktop configuration.
On macOS, edit `~/Library/Application Support/Claude/claude_desktop_config.json`:
```json
{
  "mcpServers": {
    "ask-google": {
      "command": "ask-google-mcp",
      "env": {
        "GOOGLE_API_KEY": "your_api_key_here"
      }
    }
  }
}
```

On Windows, edit `%APPDATA%\Claude\claude_desktop_config.json`:
```json
{
  "mcpServers": {
    "ask-google": {
      "command": "ask-google-mcp",
      "env": {
        "GOOGLE_API_KEY": "your_api_key_here"
      }
    }
  }
}
```

On Linux, edit `~/.config/Claude/claude_desktop_config.json`:
```json
{
  "mcpServers": {
    "ask-google": {
      "command": "ask-google-mcp",
      "env": {
        "GOOGLE_API_KEY": "your_api_key_here"
      }
    }
  }
}
```

If running from a local clone, point the config at Node directly. On macOS, edit `~/Library/Application Support/Claude/claude_desktop_config.json`:
```json
{
  "mcpServers": {
    "ask-google": {
      "command": "node",
      "args": ["/path/to/ask-google-mcp/src/index.js"],
      "env": {
        "GOOGLE_API_KEY": "your_api_key_here"
      }
    }
  }
}
```

On Windows, edit `%APPDATA%\Claude\claude_desktop_config.json`:
```json
{
  "mcpServers": {
    "ask-google": {
      "command": "node",
      "args": ["C:\\path\\to\\ask-google-mcp\\src\\index.js"],
      "env": {
        "GOOGLE_API_KEY": "your_api_key_here"
      }
    }
  }
}
```

On Linux, edit `~/.config/Claude/claude_desktop_config.json`:
```json
{
  "mcpServers": {
    "ask-google": {
      "command": "node",
      "args": ["/path/to/ask-google-mcp/src/index.js"],
      "env": {
        "GOOGLE_API_KEY": "your_api_key_here"
      }
    }
  }
}
```

After updating the configuration, restart Claude Desktop.
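A common pitfall when hand-editing the config is invalid JSON (for example, a trailing comma), which Claude Desktop will silently reject. A quick sanity check you could run over the file's contents (illustrative helper; not part of this project):

```javascript
// Verify the config text parses as JSON and registers the "ask-google" server.
// JSON.parse throws on syntax errors such as trailing commas.
function checkConfig(jsonText) {
  const config = JSON.parse(jsonText);
  return Boolean(config.mcpServers && config.mcpServers["ask-google"]);
}
```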
Add the MCP server using the claude mcp add command.
For current project only:
```bash
claude mcp add --scope project ask-google -e GOOGLE_API_KEY=your_api_key_here -- ask-google-mcp
```

For your user (available in all projects):

```bash
claude mcp add --scope user ask-google -e GOOGLE_API_KEY=your_api_key_here -- ask-google-mcp
```

For the local directory:

```bash
claude mcp add --scope local ask-google -e GOOGLE_API_KEY=your_api_key_here -- ask-google-mcp
```

If running locally, for the current project:

```bash
claude mcp add --scope project ask-google -e GOOGLE_API_KEY=your_api_key_here -- node /path/to/ask-google-mcp/src/index.js
```

Verify the server is running:
```bash
claude mcp list
```

Note: This refers to the OpenAI Codex CLI (released April 2025), a terminal-based coding agent with MCP support, not the deprecated "OpenAI Codex" model from 2021–2023.
Add the MCP server using the codex mcp add command or by editing the ~/.codex/config.toml file.
If installed globally:
```bash
codex mcp add ask-google --env GOOGLE_API_KEY=your_api_key_here -- ask-google-mcp
```

If running locally:

```bash
codex mcp add ask-google --env GOOGLE_API_KEY=your_api_key_here -- node /path/to/ask-google-mcp/src/index.js
```

Verify the server:

```bash
codex mcp list
```

Edit `~/.codex/config.toml`:
If installed globally:
```toml
[mcp_servers.ask-google]
command = "ask-google-mcp"
env = { "GOOGLE_API_KEY" = "your_api_key_here" }
```

If running locally:
```toml
[mcp_servers.ask-google]
command = "node"
args = ["/path/to/ask-google-mcp/src/index.js"]
env = { "GOOGLE_API_KEY" = "your_api_key_here" }
```

Note: Restart the Codex CLI or IDE extension after editing `config.toml` for changes to take effect.
The server provides structured, terse responses optimized for AI consumption:
- Bullet points for lists
- Tables for comparisons
- Code blocks for examples
- Exact commands and configuration snippets
- Side-by-side wrong/correct code examples
- Version numbers and breaking changes
- Source citations with URLs
- Search queries performed
This project follows MCP best practices for Node.js dependency management:
- Semver Ranges: Dependencies use caret (`^`) ranges in `package.json` to automatically receive patch and minor security updates
- Lockfile: `package-lock.json` is committed to ensure reproducible builds
- CI/CD: Use `npm ci` (not `npm install`) to enforce lockfile versions in production
- Security: Run `npm run security:audit` regularly and schedule `npm run security:update` for patch updates
```
ask-google/
├── src/
│   └── index.js                 # Main MCP server
├── scripts/
│   └── check-env.js             # Environment validation
├── test/
│   ├── unit/
│   │   └── tool-handler.test.js # Unit tests
│   └── test-gemini-mcp.js       # Integration tests
├── package.json
├── package-lock.json            # Committed for reproducibility
├── .env                         # API key (git-ignored)
├── .env.example                 # API key template
├── .gitignore
├── LICENSE
└── README.md
```
Development:
- `npm start` — Start the MCP server (auto-runs environment validation)
- `npm test` — Run unit tests
- `npm run test:integration` — Run integration tests
- `npm run test:all` — Run all tests (unit + integration)
- `npm run dev` — Run the server with auto-reload (uses Node's `--watch` flag)
Environment & Security:
- `npm run check-env` — Validate environment configuration
- `npm run security:audit` — Check for security vulnerabilities
- `npm run security:fix` — Auto-fix security issues (within semver ranges)
- `npm run security:update` — Update dependencies and audit for vulnerabilities
- `GOOGLE_API_KEY` (required) — Your Google API key for Gemini API access
The server supports three Gemini models via the `model` parameter:

- `flash` (default) — `models/gemini-flash-latest` — Best balance of speed, cost, and quality
- `flash-lite` — `models/gemini-flash-lite-latest` — Fastest and cheapest, good for simple queries
- `pro` — `models/gemini-pro-latest` — Most capable but slower and more expensive
`flash` is used by default and recommended for most use cases. The model can be changed per query using the `model` parameter (see examples above).
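The alias-to-model mapping above could be resolved along these lines (hypothetical helper; the server's internal naming may differ):

```javascript
// Map the short aliases documented above to full Gemini model IDs,
// defaulting to flash when no model is specified.
const MODELS = {
  "flash": "models/gemini-flash-latest",
  "flash-lite": "models/gemini-flash-lite-latest",
  "pro": "models/gemini-pro-latest",
};

function resolveModel(alias = "flash") {
  const id = MODELS[alias];
  if (!id) {
    throw new Error(`Unknown model "${alias}"; expected one of: ${Object.keys(MODELS).join(", ")}`);
  }
  return id;
}
```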
The server provides categorized error handling:
- Input Validation: Questions are validated for presence, type, and length (max 10,000 characters)
- [AUTH_ERROR]: Missing or invalid API keys
- [QUOTA_ERROR]: API quota or rate limit exceeded
- [TIMEOUT_ERROR]: Request timeout errors
- [API_ERROR]: General API errors
- Process Stability: Unhandled rejections and exceptions trigger clean shutdown
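The categorization above could be sketched as follows. This is illustrative only; the real server likely inspects structured API error codes from the Gemini SDK rather than message text:

```javascript
// Classify an error message into the documented categories by keyword.
function categorizeError(message) {
  const m = message.toLowerCase();
  if (m.includes("api key") || m.includes("unauthorized")) return "[AUTH_ERROR]";
  if (m.includes("quota") || m.includes("rate limit")) return "[QUOTA_ERROR]";
  if (m.includes("timeout") || m.includes("timed out")) return "[TIMEOUT_ERROR]";
  return "[API_ERROR]"; // catch-all for everything else
}
```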
MIT
Contributions welcome! Please open an issue or PR.
For issues or questions:
- Check the MCP documentation
- Review Google Gemini API docs
- Open an issue in this repository