MCP Web Tools

This package provides a powerful MCP server to equip LLMs with web access, going beyond naive methods of searching, fetching and extracting content.

Introduction

I created this package out of frustration that most MCP servers enabling web access for LLMs didn't perform as well as I hoped. Shortcomings I wanted to fix include:

  • Good search results without requiring an API key
  • Sophisticated fetching for more complex JavaScript sites
  • Extracting content in nicely formatted Markdown
  • Support for extracting content from PDFs
  • Support for loading and displaying images
  • Capture rendered webpage screenshots for visual context
  • Usage options for advanced cases like loading raw HTML

Installation

Claude Desktop
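
Add an entry to your claude_desktop_config.json (the standard Claude Desktop MCP server format; the env block is optional and only needed if you want to set API keys):

{
  "mcpServers": {
    "web-tools": {
      "command": "uvx",
      "args": ["mcp-web-tools"],
      "env": {
        "BRAVE_SEARCH_API_KEY": "<key>"
      }
    }
  }
}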

Claude Code

claude mcp add web-tools uvx mcp-web-tools

Or to also set the Brave Search API key:

claude mcp add web-tools uvx mcp-web-tools -e BRAVE_SEARCH_API_KEY=<key>

Provide a Perplexity Search API key to prioritize their fresh, citation-rich index:

claude mcp add web-tools uvx mcp-web-tools -e PERPLEXITY_API_KEY=<key>

You can set both environment variables to fall back from Perplexity to Brave seamlessly.

Internals

The package is written in Python and relies on powerful libraries and services under the hood to improve results.

Searching

We use the Perplexity Search API when a PERPLEXITY_API_KEY is configured. It delivers ranked snippets with citations from Perplexity's continuously refreshed index. If no Perplexity key is available, we fall back to the Brave Search API (via BRAVE_SEARCH_API_KEY), then a lightweight Google workaround, and finally DuckDuckGo. While we recommend adding at least one API key, the chained fallbacks continue working for most workloads.
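
The chain can be pictured as a simple loop over providers, best first (a minimal sketch; the search_* helpers are hypothetical wrappers around the respective APIs, not the package's actual function names):

import os

async def search(query: str, limit: int = 10) -> list[dict]:
    # Build the provider list in order of preference
    providers = []
    if os.getenv("PERPLEXITY_API_KEY"):
        providers.append(search_perplexity)  # fresh, citation-rich index
    if os.getenv("BRAVE_SEARCH_API_KEY"):
        providers.append(search_brave)       # official API, needs a key
    providers += [search_google, search_duckduckgo]  # keyless fallbacks
    # Return results from the first provider that succeeds
    for provider in providers:
        try:
            results = await provider(query, limit)
            if results:
                return results
        except Exception:
            continue  # move on to the next provider
    return []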

Fetching

Fetching of web content is based on Zendriver, a fork of nodriver built for next-level web scraping and performance. It should stay undetected by most anti-bot solutions and fetch content even from complex JavaScript-based sites.
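
In simplified form, a fetch looks roughly like this (a sketch based on Zendriver's async API, not the package's exact internals):

import asyncio
import zendriver as zd

async def fetch_html(url: str) -> str:
    # Start a headless Chromium session that looks like a regular browser
    browser = await zd.start(headless=True)
    try:
        page = await browser.get(url)    # navigate to the page
        await asyncio.sleep(2)           # give JS-heavy sites time to render
        return await page.get_content()  # HTML after client-side rendering
    finally:
        await browser.stop()

html = asyncio.run(fetch_html("https://example.com"))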

Extracting

For web extraction, we use Trafilatura, which consistently outperforms alternative tools at extracting content from HTML pages. For PDFs, we use PyMuPDF4LLM, which similarly extracts content in an easy-to-read format for LLMs, with advanced layout support.
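
In code, the two extraction paths boil down to something like this (simplified; Markdown output requires a reasonably recent Trafilatura release):

import trafilatura
import pymupdf4llm

def html_to_markdown(html: str) -> str | None:
    # Strip boilerplate (navigation, ads, footers) and keep the main content
    return trafilatura.extract(html, output_format="markdown", include_links=True)

def pdf_to_markdown(path: str) -> str:
    # Layout-aware PDF extraction in an LLM-friendly Markdown format
    return pymupdf4llm.to_markdown(path)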

Screenshots

Rendered page previews are powered by Zendriver. The view_website tool navigates to a URL in a headless Chromium session and returns the resulting page as a PNG screenshot. By default only the current viewport is captured, but callers can request a full-page image by setting the full_page argument to true.
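
Conceptually, the tool does something like the following (a sketch; the real tool returns the image as MCP image content rather than writing a file to disk):

import asyncio
import zendriver as zd

async def view_website(url: str, full_page: bool = False) -> str:
    browser = await zd.start(headless=True)
    try:
        page = await browser.get(url)
        await asyncio.sleep(2)  # let client-side rendering settle
        # Capture the viewport, or the whole page when full_page=True
        return await page.save_screenshot("screenshot.png", format="png", full_page=full_page)
    finally:
        await browser.stop()

asyncio.run(view_website("https://example.com", full_page=True))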

Contributing

While it's impossible to support all pages and layouts, we strive to make this package better over time. For unsupported sites, problems, or feature requests, please open an issue.

CI, Releases, and Publishing

This repo includes a GitHub Actions workflow that:

  • Runs tests via uv on PRs and pushes to main.
  • On push to main, if project.version in pyproject.toml changed, it:
    • Builds distributions with uv build.
    • Creates a GitHub Release tagged v<version> with autogenerated notes.
    • Publishes the package to PyPI using uv publish.

To trigger a release, merge a PR that bumps project.version in pyproject.toml.

Rollback:

  • If a release was created erroneously, delete the GitHub Release and tag v<version>.
  • Yank the version on PyPI if needed.
