An MCP (Model Context Protocol) server that integrates with the AI or Not API to detect AI-generated content in images, videos, audio, and text.
- Image Analysis: Detect AI-generated images, deepfakes, NSFW content, and image quality issues
- Video Analysis: Detect AI-generated video, synthetic voices, AI music, and video deepfakes
- Audio Analysis: Detect AI-generated music and synthetic voices
- Text Analysis: Detect AI-written text with confidence scoring and annotations
- API Health Check: Verify API availability
- Node.js 18+
- An API key from AI or Not
```shell
git clone https://github.com/tymrtn/aiornot-mcp.git
cd aiornot-mcp
npm install
npm run build
```

Or install globally:

```shell
npm install -g aiornot-mcp
```

| Variable | Required | Default | Description |
|---|---|---|---|
| `AIORNOT_API_KEY` | Yes | - | Your AI or Not API key |
| `AIORNOT_API_URL` | No | `https://api.aiornot.com` | API base URL |
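As a sketch of how these variables might be consumed (illustrative only — `loadConfig` and `AiOrNotConfig` are hypothetical names, not the server's actual internals), a config loader applying the documented default could look like:

```typescript
// Hypothetical config loader illustrating the variables in the table above.
interface AiOrNotConfig {
  apiKey: string;
  apiUrl: string;
}

// Pass `process.env` (or any plain object) as `env`.
function loadConfig(env: Record<string, string | undefined>): AiOrNotConfig {
  const apiKey = env.AIORNOT_API_KEY;
  if (!apiKey) {
    // AIORNOT_API_KEY is required and has no default
    throw new Error("AIORNOT_API_KEY is required");
  }
  // AIORNOT_API_URL is optional and falls back to the documented default
  return { apiKey, apiUrl: env.AIORNOT_API_URL ?? "https://api.aiornot.com" };
}
```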
Add to your MCP settings file:

```json
{
  "mcpServers": {
    "aiornot": {
      "command": "node",
      "args": ["/path/to/aiornot-mcp/build/index.js"],
      "env": {
        "AIORNOT_API_KEY": "your_api_key_here"
      }
    }
  }
}
```

Settings file locations:

- Claude Desktop (macOS): `~/Library/Application Support/Claude/claude_desktop_config.json`
- Claude Code: `~/.claude/mcp_servers.json`
Run the server standalone:

```shell
AIORNOT_API_KEY="your_api_key_here" node build/index.js
```

Analyze media content for AI generation.
Parameters:

| Parameter | Type | Required | Description |
|---|---|---|---|
| `media_type` | string | Yes | One of: `image`, `video`, `text`, `audio_music`, `audio_voice` |
| `file_path` | string | Conditional | Path to file (required for image/video/audio) |
| `text` | string | Conditional | Text content (required for text, min 250 chars) |
| `only` | string[] | No | Report types to include |
| `excluding` | string[] | No | Report types to exclude |
| `external_id` | string | No | Tracking identifier |
| `include_annotations` | boolean | No | Include block-level annotations (text only) |
| `timeout_ms` | number | No | Override request timeout |
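The required/conditional rules in the table above can be summarized in a small validator. This is an illustrative sketch, not the server's actual code; `validateRequest` and `AnalyzeRequest` are hypothetical names:

```typescript
type MediaType = "image" | "video" | "text" | "audio_music" | "audio_voice";

// Shape of an analysis request, mirroring the parameter table.
interface AnalyzeRequest {
  media_type: MediaType;
  file_path?: string;
  text?: string;
  only?: string[];
  excluding?: string[];
  external_id?: string;
  include_annotations?: boolean;
  timeout_ms?: number;
}

// Returns an error message, or null if the request satisfies the rules above.
function validateRequest(req: AnalyzeRequest): string | null {
  if (req.media_type === "text") {
    if (!req.text) return "text is required for media_type 'text'";
    if (req.text.length < 250) return "text must be at least 250 characters";
  } else {
    // image, video, and both audio types all read from a file
    if (!req.file_path) {
      return `file_path is required for media_type '${req.media_type}'`;
    }
  }
  return null;
}
```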
Report Types by Media:
| Media Type | Available Reports |
|---|---|
| Image | ai_generated, deepfake, nsfw, quality, reverse_search |
| Video | ai_video, ai_music, ai_voice, deepfake_video (off by default) |
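The interaction of `only`, `excluding`, and per-report defaults (e.g. `deepfake_video` being off by default) could be modeled as below. This is a sketch of the documented semantics, not the server's implementation; the function and constant names are hypothetical:

```typescript
// Reports that run when neither `only` nor `excluding` is given.
// deepfake_video is omitted because the table marks it off by default.
const DEFAULT_REPORTS: Record<string, string[]> = {
  image: ["ai_generated", "deepfake", "nsfw", "quality", "reverse_search"],
  video: ["ai_video", "ai_music", "ai_voice"],
};

// Every report available for each media type.
const ALL_REPORTS: Record<string, string[]> = {
  image: ["ai_generated", "deepfake", "nsfw", "quality", "reverse_search"],
  video: ["ai_video", "ai_music", "ai_voice", "deepfake_video"],
};

function selectReports(
  mediaType: string,
  only?: string[],
  excluding?: string[],
): string[] {
  if (only) {
    // `only` acts as an allow-list drawn from all available reports,
    // so it can also enable reports that are off by default
    return (ALL_REPORTS[mediaType] ?? []).filter((r) => only.includes(r));
  }
  const base = DEFAULT_REPORTS[mediaType] ?? [];
  return excluding ? base.filter((r) => !excluding.includes(r)) : base;
}
```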
Check if the AI or Not API is available.
Analyze an image:

```json
{
  "media_type": "image",
  "file_path": "/path/to/image.jpg"
}
```

Analyze an image for specific checks:

```json
{
  "media_type": "image",
  "file_path": "/path/to/image.jpg",
  "only": ["ai_generated", "deepfake"]
}
```

Analyze video including deepfake detection:

```json
{
  "media_type": "video",
  "file_path": "/path/to/video.mp4",
  "only": ["ai_video", "deepfake_video"]
}
```

Analyze text:

```json
{
  "media_type": "text",
  "text": "Your text content here (minimum 250 characters)...",
  "include_annotations": true
}
```

Analyze audio for synthetic voice:

```json
{
  "media_type": "audio_voice",
  "file_path": "/path/to/audio.mp3"
}
```

The server returns structured JSON with:
- `media_type`: The analyzed media type
- `scores`: Extracted confidence scores and verdicts
- `response`: Full API response
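These fields could be given TypeScript types matching the example response shown next. This is a sketch inferred from this document, not an official type definition:

```typescript
// Score shape for the ai_generated report (inferred from the example).
interface AiGeneratedScore {
  verdict: string; // e.g. "ai"
  ai_confidence: number;
  human_confidence: number;
}

// Score shape for detection-style reports such as deepfake.
interface DetectionScore {
  is_detected: boolean;
  confidence: number;
}

// Overall result envelope returned by the server.
interface AnalyzeResult {
  media_type: string;
  scores: Record<string, AiGeneratedScore | DetectionScore | undefined>;
  response: unknown; // full API response, passed through verbatim
}

// Narrowing helper to tell the two score shapes apart at runtime.
function isDetectionScore(s: any): s is DetectionScore {
  return typeof s?.is_detected === "boolean" && typeof s?.confidence === "number";
}
```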
Example response for image analysis:

```json
{
  "media_type": "image",
  "scores": {
    "ai_generated": {
      "verdict": "ai",
      "ai_confidence": 0.95,
      "human_confidence": 0.05
    },
    "deepfake": {
      "is_detected": false,
      "confidence": 0.02
    }
  },
  "response": { ... }
}
```

Default timeouts vary by media type:
| Media Type | Default Timeout |
|---|---|
| Image | 30 seconds |
| Text | 30 seconds |
| Video | 120 seconds |
| Audio (music) | 120 seconds |
| Audio (voice) | 120 seconds |
Use timeout_ms to override if needed.
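The table above, with the `timeout_ms` override, boils down to a simple lookup (an illustrative sketch; `effectiveTimeout` is a hypothetical name):

```typescript
// Default timeouts per media type, from the table above (milliseconds).
const DEFAULT_TIMEOUT_MS: Record<string, number> = {
  image: 30_000,
  text: 30_000,
  video: 120_000,
  audio_music: 120_000,
  audio_voice: 120_000,
};

// An explicit timeout_ms wins over the media type's default.
function effectiveTimeout(mediaType: string, timeoutMs?: number): number {
  return timeoutMs ?? DEFAULT_TIMEOUT_MS[mediaType];
}
```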
```shell
# Install dependencies
npm install

# Build
npm run build

# Watch mode
npm run watch

# Test with MCP Inspector
npm run inspector
```

MIT - see LICENSE
- AI or Not - API provider
- Model Context Protocol - MCP specification
- MCP SDK - TypeScript SDK