⚠️ Experimental: This MCP server is in an experimental state and may have rough edges. Please report any issues you encounter.
A Python implementation of the fal.ai MCP (Model Context Protocol) server using FastMCP. This server provides universal access to all fal.ai models without hardcoding specific model support, dynamically discovering model capabilities through OpenAPI schemas.
This MCP server provides comprehensive access to fal.ai's AI model infrastructure with automatic schema discovery and validation.
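To picture the schema-driven validation idea, here is a minimal sketch (not the server's actual implementation) that checks request parameters against a cached model schema using the `jsonschema` library; the example schema and parameter names are made up for illustration.

```python
# Illustrative only: validate parameters against a cached model schema.
# The schema below is a hypothetical example, not a real fal.ai schema.
from jsonschema import Draft7Validator

cached_schema = {
    "type": "object",
    "required": ["prompt"],
    "properties": {
        "prompt": {"type": "string"},
        "num_images": {"type": "integer", "minimum": 1},
    },
}

def validate_params(params: dict, schema: dict) -> list[str]:
    """Return human-readable validation errors; extra parameters are allowed."""
    return [error.message for error in Draft7Validator(schema).iter_errors(params)]

print(validate_params({"prompt": "a mountain lake at sunset"}, cached_schema))  # []
print(validate_params({"num_images": 2}, cached_schema))  # ["'prompt' is a required property"]
```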
- 🤖 Universal Model Support: Works with any current or future fal.ai model
- 🔍 Dynamic Schema Discovery: Automatically fetches and validates model parameters
- 🚀 Multiple Execution Modes: Support for synchronous, asynchronous, and subscription-based execution
- 📁 File Management: Background upload system with progress tracking (no timeouts!)
- ⬇️ Download Support: Download generated files and artifacts to local storage
- 📊 Queue Management: Track and manage async jobs with status checking
- 💾 Schema Caching: Improved performance through intelligent caching
- ✅ Automatic Validation: Parameters are validated against model schemas before execution
- 🔐 API Key Authentication: Secure access to fal.ai services
- 🎯 Smart Parameter Validation: Only validates required parameters, allowing flexibility
- 📡 Comprehensive Error Handling: Clear error messages with helpful context
- 🧠 Model Discovery: List and search available models with pagination
- 📄 OpenAPI Integration: Full schema access for understanding model requirements
- ⏱️ No Upload Timeouts: Background process architecture handles large files seamlessly
- 📈 Upload Progress Tracking: Monitor upload status with session-based tracking
- 🔄 Automatic Retry Logic: Failed uploads retry with exponential backoff
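The retry behavior can be pictured with a small sketch like the one below. It is illustrative only; the function name, attempt count, and delays are assumptions, not the server's actual code.

```python
# Illustrative sketch of retrying a failed upload with exponential backoff.
# upload_once is a hypothetical stand-in for the real upload call.
import time

def upload_with_retry(upload_once, max_attempts: int = 4, base_delay: float = 1.0):
    """Retry a failing upload, doubling the wait between attempts."""
    for attempt in range(1, max_attempts + 1):
        try:
            return upload_once()
        except Exception as exc:  # real code would catch specific upload errors
            if attempt == max_attempts:
                raise
            delay = base_delay * 2 ** (attempt - 1)  # 1s, 2s, 4s, ...
            print(f"Upload failed ({exc}); retrying in {delay:.0f}s")
            time.sleep(delay)
```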
```bash
git clone https://github.com/yourusername/pmind-fal-ai-mcp
cd pmind-fal-ai-mcp

# Install dependencies using uv
uv sync
```
- Go to https://fal.ai/dashboard/keys
- Sign in or create an account
- Generate a new API key
- Copy the key for use in configuration
Create a .env file by copying the example:
```bash
cp .env.example .env
```
Edit `.env` and configure:
```bash
# fal.ai API Configuration
# Get your API key from https://fal.ai/dashboard/keys
FAL_API_KEY=your-api-key-here

# Cache directory for model schemas
# OpenAPI schemas are cached here to improve performance
FAL_CACHE_DIR=~/.pmind-fal-ai/cache

# Optional: Download directory for files (defaults to current directory)
# FAL_DOWNLOAD_DIR=/path/to/downloads

# Required: Upload state directory for async uploads
FAL_UPLOAD_STATE_DIR=/tmp/fal-uploads
```
Add the MCP server to your client's MCP configuration:
```json
{
  "mcpServers": {
    "pmind-fal-ai-mcp": {
      "command": "uv",
      "args": ["--directory", "/path/to/pmind-fal-ai-mcp", "run", "pmind-fal-ai-mcp"]
    }
  }
}
```
Replace `/path/to/pmind-fal-ai-mcp` with the actual path where you cloned the repository.
If you use Claude Code, you can add the server with the following command instead:
```bash
claude mcp add pmind-fal-ai-mcp -- uv run --directory /path/to/pmind-fal-ai-mcp pmind-fal-ai-mcp
```
Required:
- `FAL_API_KEY`: Your fal.ai API key for authentication
- `FAL_CACHE_DIR`: Directory for caching model schemas
- `FAL_UPLOAD_STATE_DIR`: Directory for upload state tracking
Optional:
- `FAL_DOWNLOAD_DIR`: Directory for downloaded files (defaults to the current directory)
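As a rough illustration of how these settings could be resolved at startup (not necessarily how this server does it internally), the snippet below reads the variables, expands `~`, and applies the documented default for the download directory.

```python
# Illustrative only: resolve the environment variables documented above.
import os
from pathlib import Path

api_key = os.environ["FAL_API_KEY"]  # required; raises KeyError if unset
cache_dir = Path(os.environ["FAL_CACHE_DIR"]).expanduser()
upload_state_dir = Path(os.environ["FAL_UPLOAD_STATE_DIR"]).expanduser()
download_dir = Path(os.environ.get("FAL_DOWNLOAD_DIR", ".")).expanduser()

# Make sure the working directories exist before the server needs them.
for directory in (cache_dir, upload_state_dir, download_dir):
    directory.mkdir(parents=True, exist_ok=True)
```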
Once configured, you can start using the fal.ai MCP server through your client. The server will automatically start when your client connects.
For a complete list of all tools with detailed parameters and examples, see TOOLS.md.
- Generate an image of a serene mountain landscape at sunset using FLUX
- Generate a video of a cat playing with yarn using Veo3
- Submit a video generation job and check its status
- Upload my image.jpg and use it as input for image enhancement
- Download the generated image from the URL to my local folder
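For a programmatic smoke test, an MCP-aware client can call the server's tools directly. The sketch below uses the official MCP Python SDK (`mcp` package) over stdio; the `list_models` tool name comes from this README, but the path and arguments are placeholders, so adjust them to the tool signatures documented in TOOLS.md.

```python
# Illustrative client-side smoke test using the MCP Python SDK.
# The directory path and tool arguments are assumptions for this example.
import asyncio
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def main():
    server = StdioServerParameters(
        command="uv",
        args=["--directory", "/path/to/pmind-fal-ai-mcp", "run", "pmind-fal-ai-mcp"],
    )
    async with stdio_client(server) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = await session.list_tools()
            print([tool.name for tool in tools.tools])
            result = await session.call_tool("list_models", arguments={})
            print(result)

asyncio.run(main())
```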
To test the server manually:
```bash
# Run the MCP server
uv run pmind-fal-ai-mcp
```

- Ensure your `FAL_API_KEY` is set correctly in `.env`
- Verify the key at https://fal.ai/dashboard/keys
- Use `list_models` to see available models
- Check the exact model ID with `search_models`
- Use `get_model_schema` to see required parameters
- Check parameter types and constraints in the schema
- Remember that only required parameters are validated
- Clear the cache directory if you encounter stale schema issues
- The cache directory is specified in `FAL_CACHE_DIR`
- Check upload status with `check_upload_status` using the session ID
- View all uploads with `list_uploads` to find lost sessions
- Clean up old uploads with `cleanup_old_uploads`
- Ensure files are under 10MB (fal.ai limit)
- Check `FAL_UPLOAD_STATE_DIR` for state files if debugging
This project is licensed under the MIT License - see the LICENSE file for details.