

Powered by Bright Data · Live Demo · Deploy to Vercel

GEO/AEO Tracker

Open-source, local-first AI visibility intelligence dashboard.
Track your brand across 6 AI models with zero vendor lock-in.
Now with SRO Analysis — deep cross-platform search result optimization.
Mobile-responsive — works seamlessly on desktop, tablet, and phone.

Features · Quick Start · Deploy · Cloud Sync · API


🌐 Built with Bright Data — the world's leading web data platform. GEO/AEO Tracker uses Bright Data's AI Scraper API to reliably collect structured responses from 6 AI models. Get your API key →


Why

AI models are replacing traditional search for millions of queries. If your brand isn't visible in ChatGPT, Perplexity, or Gemini responses, you're invisible to a growing audience.

Existing tools charge $200–$500+/month, lock you into closed ecosystems, and store your data on their servers.

GEO/AEO Tracker is the alternative:

  • 🔑 BYOK (Bring Your Own Keys): your data never leaves your machine
  • 🤖 6 AI models simultaneously: more coverage than paid tools
  • 💸 $0/month: self-hosted, open-source, forever free
  • 🛡️ Local-first by default: IndexedDB + localStorage, no external database
  • ☁️ Optional cloud sync via your own Supabase project (free tier) for multi-device access

Features

📋 13 Feature Tabs

| Tab | What it does |
| --- | --- |
| ⚙️ Project Settings | Brand name, aliases, website, industry, keywords, description |
| 💬 Prompt Hub | Manage tracking prompts with {brand} injection. Run single or batch across models in parallel |
| 🎭 Persona Fan-Out | Generate persona-specific prompt variants (CMO, SEO Lead, Founder, etc.) |
| 🔍 Niche Explorer | AI-generated high-intent queries for your niche |
| 📝 Responses | Browse AI responses with brand/competitor highlighting, filters, and search |
| 📊 Visibility Analytics | Score trends over time via Recharts line charts. CSV export |
| 🔗 Citations | Domain-grouped citation frequency analysis |
| 🎯 Citation Opportunities | URLs where competitors get cited but you don't, with outreach briefs |
| ⚔️ Competitor Battlecards | AI-generated side-by-side competitor analysis with strengths/weaknesses |
| 🏥 AEO Audit | Site readiness check: llms.txt, Schema.org, BLUF density, heading structure |
| 📡 SRO Analysis | 6-stage deep pipeline: Gemini Grounding → Cross-Platform Citations → SERP → Page Scraping → Site Context → LLM Analysis. Produces SRO Score (0–100), prioritized recommendations, content gaps & competitor insights |
| ⏱️ Automation | Cron / GitHub Actions templates for scheduled runs |
| 📖 Documentation | Searchable 18-section guide covering every feature |

🚀 Core Capabilities

  • 🤖 Multi-model tracking across ChatGPT, Perplexity, Gemini, Copilot, Google AI Overview, Grok
  • SRO Analysis: 6-stage pipeline scoring how well your page is optimized for AI search results
  • 📈 Visibility scoring (0–100): brand mentions, position, frequency, citations, sentiment
  • 🔔 Drift alerts: automatic notifications when your score changes significantly
  • Scheduled auto-runs: configurable interval-based batch scraping
  • 🚀 Fully parallel batch runs: all prompt × provider combos execute simultaneously
  • 📅 Historical comparison: delta tracking across time periods
  • 🏢 Multi-workspace: manage multiple brands/projects independently
  • 🎨 Dark/light/system theme with a polished sidebar UI
  • 📱 Mobile-responsive: collapsible sidebar with hamburger menu, adaptive KPI grid, and scrollable model toolbar — works on any screen size
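To illustrate how a 0–100 score could combine the components listed above (mentions, position, frequency, citations, sentiment), here is a sketch with made-up weights; the app's actual formula may weight these differently:

```typescript
interface ScoreInputs {
  mentioned: boolean;   // brand appears in the response at all
  position: number;     // 0 = mentioned at the very top, 1 = at the very end
  mentionCount: number; // total brand mentions in the response
  cited: boolean;       // brand domain appears in citations
  sentiment: number;    // -1 (negative) .. 1 (positive)
}

// Illustrative weights only (30 + 30 + 20 + 20 = 100); not the app's formula.
export function visibilityScore(s: ScoreInputs): number {
  if (!s.mentioned) return 0;
  const positionScore = (1 - Math.min(Math.max(s.position, 0), 1)) * 30;
  const frequencyScore = Math.min(s.mentionCount, 5) * 6; // caps at 30
  const citationScore = s.cited ? 20 : 0;
  const sentimentScore = ((s.sentiment + 1) / 2) * 20;
  return Math.round(positionScore + frequencyScore + citationScore + sentimentScore);
}
```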

Architecture

Next.js 16.1 + Turbopack
├── app/
│   ├── page.tsx                    # Main dashboard (or demo via env var)
│   ├── demo/page.tsx               # Standalone demo route
│   └── api/
│       ├── scrape/route.ts         # Bright Data AI Scrapers (Node runtime)
│       ├── analyze/route.ts        # OpenRouter LLM analysis (Edge runtime)
│       ├── audit/route.ts          # AEO site audit crawler
│       ├── sro-analyze/route.ts    # SRO final LLM analysis
│       ├── serp/route.ts           # Bright Data SERP results
│       ├── site-context/route.ts   # Homepage context extraction
│       ├── unlocker/route.ts       # Bright Data Web Unlocker (single/batch)
│       ├── brightdata-platforms/   # 6-platform AI citation polling
│       ├── bulk-sro/route.ts       # SSE bulk SRO analysis
│       └── state/route.ts          # Cloud KV store (GET/PUT/DELETE — Node runtime)
├── components/
│   ├── sovereign-dashboard.tsx     # Main shell — state, tabs, KPIs
│   └── dashboard/
│       ├── types.ts                # AppState, ScrapeRun, Provider, etc.
│       └── tabs/                   # 13 tab components
├── lib/
│   ├── client/
│   │   ├── sovereign-store.ts      # Storage API — IDB default, cloud when configured
│   │   └── cloud-mode.ts           # isCloudActive / isCloudAvailable helpers
│   ├── server/
│   │   ├── supabase.ts             # Server-side Supabase singleton (service_role)
│   │   ├── kv-store.ts             # kvGet / kvSet / kvDelete helpers
│   │   ├── brightdata-scraper.ts   # Bright Data AI Scraper integration
│   │   ├── brightdata-platforms.ts # 6-platform citation scraper
│   │   ├── gemini-grounding.ts     # Gemini Grounding via Google Search
│   │   ├── openrouter-sro.ts       # SRO analysis via OpenRouter
│   │   ├── serp.ts                 # SERP data via Bright Data
│   │   ├── sro-types.ts            # SRO type definitions
│   │   └── unlocker.ts             # Web Unlocker scraping
│   └── demo-data.ts                # Deterministic seed data for demo mode
├── supabase/
│   └── migrations/
│       └── 001_kv_store.sql        # kv_store table + updated_at trigger
└── scripts/
    ├── test-scraper.js             # API validation script
    └── test-pillar.js              # Feature pillar tests

Key decisions:

  • IndexedDB primary store (far larger quota than localStorage) with localStorage as best-effort cache; same public API whether cloud is active or not
  • Cloud storage routes through /api/state — the client never calls Supabase directly, so service_role stays server-side and RLS isn't a concern
  • Auto-opt-in cloud: when NEXT_PUBLIC_CLOUD_STORAGE_ENABLED=true the IDB write path becomes a cache; IDB is the authoritative fallback if the cloud route fails
  • Edge runtime for /api/analyze (Gemini Flash via OpenRouter) — fast global inference
  • Bright Data Web Scraper API for AI model scraping — reliable, structured data
  • Bright Data SERP + Web Unlocker for SRO pipeline data gathering
  • Google Gemini API for grounding analysis in SRO pipeline
  • Zod schema validation on all API routes
  • Recharts for analytics visualizations
  • Tailwind CSS v4 with CSS custom properties for theming
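The "IDB stays the authoritative fallback" write path can be sketched as below. This is a simplification with hypothetical names and signatures; the real logic lives in lib/client/sovereign-store.ts, where `localPut` would be idb-keyval's `set` and `cloudPut` a PUT to /api/state:

```typescript
type Put = (key: string, value: unknown) => Promise<void>;

// Sketch only: local storage is written first and always succeeds as a
// fallback; the cloud write is best-effort on top of it.
export async function saveState(
  key: string,
  value: unknown,
  localPut: Put,
  cloudPut?: Put,
): Promise<"local" | "cloud"> {
  await localPut(key, value); // IndexedDB first: the authoritative fallback
  if (!cloudPut) return "local";
  try {
    await cloudPut(key, value);
    return "cloud";
  } catch {
    // Cloud route failed; the local copy remains the source of truth.
    return "local";
  }
}
```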

Quick Start

Prerequisites

Install & Run

git clone https://github.com/danishashko/sovereign-aeo-tracker.git
cd sovereign-aeo-tracker
npm install

Create .env in the project root:

BRIGHT_DATA_KEY=your_bright_data_api_key

# AI Scraper dataset IDs (from Bright Data Scrapers Library)
BRIGHT_DATA_DATASET_CHATGPT=gd_xxx
BRIGHT_DATA_DATASET_PERPLEXITY=gd_xxx
BRIGHT_DATA_DATASET_COPILOT=gd_xxx
BRIGHT_DATA_DATASET_GEMINI=gd_xxx
BRIGHT_DATA_DATASET_GOOGLE_AI=gd_xxx
BRIGHT_DATA_DATASET_GROK=gd_xxx

# OpenRouter (powers /api/analyze, /api/sro-analyze, /api/site-context)
OPENROUTER_KEY=your_openrouter_api_key

# Gemini API (powers Gemini Grounding in SRO Analysis)
GEMINI_API_KEY=your_gemini_api_key

Then start the dev server:

npm run dev

Open http://localhost:3000.

Validate Setup

npm run test:scraper    # Test Bright Data API connection
npm run build           # Full production build check
npm run lint            # ESLint

Deploy to Vercel

The deploy button launches a fully functional production instance. You'll be prompted for your API keys during setup. No demo mode, no restrictions.

Deploy with Vercel

  1. Click the button above (or run vercel --prod from your clone)
  2. Enter your Bright Data and OpenRouter API keys when prompted
  3. Done! Your tracker deploys automatically with full production capabilities

🧪 Demo-Only Mode (optional)

Want to deploy a read-only preview with sample data and no API keys?

  1. Add the env var NEXT_PUBLIC_DEMO_ONLY=true in Vercel → Project Settings → Environment Variables
  2. Redeploy. The dashboard will load with pre-generated demo data instead of making live API calls

☁️ Optional: Cloud persistence with Supabase

By default, all your data stays in your browser (IndexedDB). That's great for a single device, but if you want your runs, prompts, and settings to persist across devices — or survive clearing browser data — you can plug in a free Supabase project.

Why it's optional

  • Local-first still works 100% without Supabase.
  • When enabled, the client never talks to Supabase directly. Every read/write goes through a server-side Next.js route using your service-role key, so your key stays private and RLS is a non-issue.
  • You can toggle cloud sync on/off per-browser from Project Settings → Cloud Sync.

Setup (5 minutes)

  1. Create a free project at supabase.com.

  2. In the Supabase SQL editor, paste and run supabase/migrations/001_kv_store.sql from this repo. This creates a single kv_store table with RLS enabled.

  3. In Supabase → Project Settings → API, grab:

    • Project URL → SUPABASE_URL
    • service_role secret key → SUPABASE_SERVICE_ROLE_KEY
  4. In Vercel → Project Settings → Environment Variables, add:

    SUPABASE_URL=https://your-project.supabase.co
    SUPABASE_SERVICE_ROLE_KEY=your_service_role_key
    NEXT_PUBLIC_CLOUD_STORAGE_ENABLED=true
  5. Redeploy. The Project Settings tab will now show a Cloud Sync card.

Free tier caveats (always check supabase.com/pricing for current limits)

  • Free tier includes a generous Postgres database, but projects pause after a week of inactivity on the free plan — first request after a pause may be slow.
  • The service_role key has full DB access — keep it server-side only (Vercel env vars are fine; never commit it).
  • Single-tenant by design: one deployment = one Supabase project = your data. If you want multi-user auth, you'll need to extend the schema with a user_id column + RLS policies.

What gets synced

  • All app state keyed by workspace (sovereign-aeo-tracker-*) — runs, prompts, settings, SRO results.
  • NOT synced (kept local on purpose): theme preference, workspace list, active workspace — these are per-device UI choices.

API Routes

| Route | Runtime | Purpose |
| --- | --- | --- |
| POST /api/scrape | Node.js | Bright Data AI Scrapers — query AI models for brand mentions |
| POST /api/analyze | Edge | OpenRouter LLM inference — battlecards, niche queries |
| POST /api/audit | Node.js | AEO site audit — llms.txt, schema, BLUF, heading checks |
| POST /api/sro-analyze | Node.js | SRO final analysis — synthesizes all data into score + recommendations |
| POST /api/serp | Node.js | Bright Data SERP — organic search results for a keyword |
| POST /api/site-context | Node.js | Homepage scrape + context extraction via OpenRouter |
| POST /api/unlocker | Node.js | Bright Data Web Unlocker — single or batch URL scraping |
| POST /api/brightdata-platforms | Node.js | 6-platform AI citation polling via Bright Data datasets |
| POST /api/bulk-sro | Node.js | SSE streaming — bulk SRO analysis across multiple keywords |
| GET/PUT/DELETE /api/state | Node.js | Cloud KV store proxy — reads/writes to Supabase using service-role key (disabled when cloud not configured) |

All routes include input validation and error handling. Most routes use in-memory caching to minimize API costs.
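As an example, a client call to one of these routes could be assembled like this. The request-body fields shown are assumptions for illustration; the authoritative shape is each route's Zod schema:

```typescript
// Hypothetical request shape; the actual fields are defined by the
// route's Zod schema in the repo.
interface ScrapeRequest {
  prompt: string;
  providers: Array<
    "chatgpt" | "perplexity" | "gemini" | "copilot" | "google_ai" | "grok"
  >;
}

// Builds a plain request descriptor (no network call), suitable for fetch().
export function buildScrapeCall(body: ScrapeRequest): {
  url: string;
  method: "POST";
  headers: Record<string, string>;
  body: string;
} {
  return {
    url: "/api/scrape",
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(body),
  };
}
```

In the browser you would pass the result straight to fetch: `const call = buildScrapeCall(...); await fetch(call.url, call);`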

Tech Stack

| Layer | Technology |
| --- | --- |
| Framework | Next.js 16.1 + Turbopack |
| Language | TypeScript (strict) |
| Styling | Tailwind CSS v4 with @theme inline |
| Charts | Recharts |
| Validation | Zod |
| Storage | IndexedDB (idb-keyval) + localStorage, optional Supabase cloud sync |
| AI Scraping | Bright Data Web Scraper API |
| LLM Inference | OpenRouter (Gemini Flash) |
| SRO Grounding | Google Gemini API (@google/genai) |
| SERP Data | Bright Data SERP API |
| Web Scraping | Bright Data Web Unlocker |
| Deployment | Vercel |

License

MIT — use it, fork it, ship it.


Built by Daniel Shashko
Powered by Bright Data