Real-Time Shadow AI/IT Detection Platform
AI-powered security monitoring for enterprise environments
ShadowGuard AI is an enterprise-grade security platform that detects and monitors unauthorized AI tools and Shadow IT usage in real time. It uses a multi-layer detection engine that combines semantic analysis, behavioral analysis, and rule-based detection to identify potential data exfiltration risks.
- AI-Powered Detection: Semantic similarity analysis using embeddings
- Real-Time Dashboard: Live alerts with risk scores and explanations
- Browser Extension: Captures real user browsing activity
- Instant Alerts: Slack notifications for high-risk events
- Docker-Ready: Fully containerized deployment
```
┌────────────────────────────────────────────────────────────────────────┐
│                             ShadowGuard AI                             │
└───────────────────────────────────┬────────────────────────────────────┘
                                    │
         ┌──────────────────────────┼──────────────────────────┐
         ▼                          ▼                          ▼
┌──────────────────┐      ┌──────────────────┐      ┌──────────────────┐
│ Browser Extension│      │    Generator     │      │    Dashboard     │
│  (Real Traffic)  │      │ (Synthetic Logs) │      │  (React + API)   │
└────────┬─────────┘      └────────┬─────────┘      └────────▲─────────┘
         │                         │                         │
         └────────────┬────────────┘                         │
                      ▼                                      │
             ┌──────────────────┐                            │
             │    Collector     │                            │
             │    (FastAPI)     │                            │
             └────────┬─────────┘                            │
                      │ Redis Pub/Sub                        │
                      ▼                                      │
            ┌──────────────────────┐                         │
            │        Worker        │                         │
            │  (Detection Engine)  │                         │
            │   ├── Semantic       │                         │
            │   ├── Behavioral     │                         │
            │   └── Fusion ────────┼─────────────────────────┘
            └─────────┬────────────┘
                      │
                      ▼
            ┌──────────────────────┐
            │    Slack Notifier    │
            │  (Real-time Alerts)  │
            └──────────────────────┘
```
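The Collector and Worker are decoupled through Redis Pub/Sub, as the diagram shows. A minimal sketch of that hand-off with `redis-py` is shown below; the channel name `logs` is an assumption for illustration, and the real channel is configured in `collector/core/`.

```python
# Minimal Redis Pub/Sub sketch of the Collector -> Worker hand-off.
# The channel name "logs" is an assumption; the project configures its own.
import json
import redis

r = redis.Redis(host="redis", port=6379, decode_responses=True)

# Collector side: publish each ingested event for downstream processing.
def publish_event(event: dict) -> None:
    r.publish("logs", json.dumps(event))

# Worker side: block on the channel and score each event as it arrives.
def consume_events() -> None:
    pubsub = r.pubsub()
    pubsub.subscribe("logs")
    for message in pubsub.listen():
        if message["type"] != "message":
            continue  # skip subscribe confirmations
        event = json.loads(message["data"])
        # ... run semantic / behavioral / rule layers, then fuse ...
```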
```
shadowguard-ai/
├── collector/                  # Log ingestion service (FastAPI)
│   ├── app/                    # API routes and handlers
│   ├── core/                   # Configuration and Redis client
│   ├── models/                 # Data models
│   ├── schemas/                # Pydantic schemas
│   ├── services/               # Business logic
│   ├── main.py                 # Application entry point
│   ├── Dockerfile
│   └── requirements.txt
│
├── worker/                     # Multi-layer detection engine
│   ├── worker.py               # Main event consumer
│   ├── fusion.py               # Risk score fusion algorithm
│   ├── semantic.py             # Semantic similarity analysis
│   ├── behavior.py             # Behavioral anomaly detection
│   ├── rules.py                # Rule-based detection
│   ├── slack_notifier.py       # Slack alert integration
│   ├── Dockerfile
│   └── requirements.txt
│
├── dashboard/                  # Web UI and backend API
│   ├── frontend/               # React 19 + Vite + TailwindCSS v4
│   │   ├── src/
│   │   │   ├── components/     # UI components
│   │   │   ├── pages/          # Page components
│   │   │   └── App.tsx
│   │   ├── package.json
│   │   └── vite.config.ts
│   ├── backend/                # FastAPI backend
│   │   ├── main.py             # API endpoints
│   │   └── requirements.txt
│   ├── nginx.conf              # Reverse proxy configuration
│   ├── start.sh                # Orchestration script
│   └── Dockerfile              # Multi-stage build
│
├── generator/                  # Synthetic log generator
│   ├── generate_logs.py        # Log generation script
│   ├── Dockerfile
│   └── requirements.txt
│
├── extension/                  # Chrome browser extension
│   ├── manifest.json           # MV3 extension manifest
│   ├── background.js           # Service worker
│   ├── content.js              # Content script
│   ├── popup/                  # Extension popup UI
│   ├── options/                # Extension settings page
│   └── icons/
│
├── config/                     # Shared configuration
│   ├── anchors.json            # Category definitions for semantic analysis
│   ├── blacklist.json          # Blocked domains
│   └── whitelist.json          # Allowed domains
│
├── docs/                       # Documentation
│   ├── API.md
│   ├── ARCHITECTURE.md
│   └── SETUP.md
│
├── docker-compose.yml          # Service orchestration
├── .env.example                # Environment variables template
├── .gitignore
├── TESTING.md
└── README.md
```
| Category | Technology |
|---|---|
| Frontend | React 19, Vite, TailwindCSS v4 |
| Backend | FastAPI (Python 3.11) |
| Database | Redis (message broker and data store) |
| AI/ML | Embeddings via OpenRouter or a self-hosted model, Gemini API for explanations |
| Infrastructure | Docker, Docker Compose, Nginx |
| Extension | Chrome Extension (Manifest V3) |
| Layer | Description |
|---|---|
| Semantic Analysis | Uses AI embeddings to detect category matches (Generative AI, File Storage, Anonymous Services, etc.) |
| Behavioral Analysis | Tracks user patterns and flags anomalies (first-time access, unusual upload volumes) |
| Rule-Based Detection | Configurable whitelist/blacklist for immediate allow/block decisions |
| Fusion Engine | Combines all signals with intent-aware scoring (POST/PUT uploads weighted differently than GET); see the sketch below |
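The fusion step is implemented in `worker/fusion.py`. The sketch below is only a minimal illustration of the intent-aware idea, not the shipped algorithm: the weights, thresholds, and risk buckets are assumed values chosen for readability.

```python
# Illustrative sketch only -- the real logic lives in worker/fusion.py.
# Weights, thresholds, and buckets here are assumed values, not the shipped ones.

RULE_ALLOW, RULE_BLOCK = "allow", "block"

# Hypothetical weighting: write-style requests (uploads) are riskier than reads.
METHOD_WEIGHT = {"POST": 1.0, "PUT": 1.0, "GET": 0.6}

def fuse(semantic: float, behavioral: float, rule: str | None, method: str) -> float:
    """Combine layer scores (each in 0..1) into a single risk score."""
    if rule == RULE_ALLOW:   # whitelisted domain: short-circuit to safe
        return 0.0
    if rule == RULE_BLOCK:   # blacklisted domain: maximum risk
        return 1.0
    base = 0.7 * semantic + 0.3 * behavioral
    return min(1.0, base * METHOD_WEIGHT.get(method.upper(), 0.8))

def risk_level(score: float) -> str:
    """Map a fused score onto the dashboard's risk buckets."""
    if score >= 0.9: return "CRITICAL"
    if score >= 0.7: return "HIGH"
    if score >= 0.4: return "MEDIUM"
    if score >= 0.2: return "LOW"
    return "SAFE"

# Example: a first-time POST to an unlisted generative-AI domain
print(risk_level(fuse(semantic=0.85, behavioral=0.6, rule=None, method="POST")))  # -> HIGH
```

The key point is that the rule layer can short-circuit the score entirely, while write-style requests amplify whatever the semantic and behavioral layers report.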
- Live Alert Feed: Real-time security events with risk scores
- Alert Simulation: Test scenarios for demo purposes
- Risk Level Indicators: CRITICAL, HIGH, MEDIUM, LOW, SAFE
- AI-Generated Explanations: Powered by the Gemini API
- Responsive Design: Modern glassmorphism UI with animations
- Passive Monitoring: Captures browsing activity without user intervention
- Configurable Endpoint: Point to any collector instance
- Privacy-Focused: Only sends metadata, not page content
- Chrome MV3: Built on the latest manifest version
- Slack Integration: Real-time alerts to security team channels
- Threshold-Based: Only notify on HIGH/CRITICAL events (see the sketch below)
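The shipped integration lives in `worker/slack_notifier.py`. As a rough sketch of the threshold gate, assuming a standard Slack incoming webhook and a hypothetical `SLACK_WEBHOOK_URL` environment variable (check `.env.example` for the real name):

```python
# Illustrative sketch -- the shipped logic lives in worker/slack_notifier.py.
# SLACK_WEBHOOK_URL is a hypothetical variable name used only for this example.
import os
import requests

NOTIFY_LEVELS = {"HIGH", "CRITICAL"}

def notify(alert: dict) -> None:
    """Post an alert to Slack only when it crosses the HIGH/CRITICAL threshold."""
    if alert.get("risk_level") not in NOTIFY_LEVELS:
        return  # below threshold: stay quiet
    webhook = os.environ.get("SLACK_WEBHOOK_URL")
    if not webhook:
        return  # Slack integration not configured
    text = (f":rotating_light: {alert['risk_level']} alert: "
            f"{alert.get('user', 'unknown user')} -> {alert.get('domain', 'unknown domain')} "
            f"(score {alert.get('risk_score', 0):.2f})")
    requests.post(webhook, json={"text": text}, timeout=5)

# Example
notify({"risk_level": "CRITICAL", "user": "alice", "domain": "chat.example.ai", "risk_score": 0.93})
```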
- Docker + Docker Compose
- Node.js (v18+): for frontend development
- Python 3.11: for local development
- API keys (optional but recommended):
  - `OPENROUTER_API_KEY`: for semantic embeddings
  - `GEMINI_API_KEY`: for AI-generated explanations
```bash
git clone https://github.com/your-org/shadowguard-ai.git
cd shadowguard-ai

# Copy environment template
cp .env.example .env

# Edit .env with your API keys
nano .env
```

```bash
docker-compose up --build
```

This starts:
| Service | Port | Description |
|---|---|---|
| Redis | 6379 | Message broker & data store |
| Collector | Internal | Log ingestion (accessible via dashboard) |
| Worker | 8000 | Detection engine |
| Dashboard | 3000 | UI + API gateway |
Open http://localhost:3000 in your browser.
Option A: Use the Browser Extension (Recommended)

- Open `chrome://extensions/`
- Enable "Developer mode"
- Click "Load unpacked" and select the `extension/` folder
- Configure the collector URL: `http://localhost:3000/logs`
- Browse the web normally; events are captured automatically
Option B: Synthetic Logs
```bash
python generator/generate_logs.py \
  --url http://localhost:3000/logs \
  --type mixed \
  --num-logs 50 \
  --once
```

Create a `.env` file at the project root:
```bash
# Redis Configuration
REDIS_HOST=redis
REDIS_PORT=6379

# Service Ports
COLLECTOR_PORT=8000
DASHBOARD_PORT=3000

# AI/ML APIs (optional but recommended)
OPENROUTER_API_KEY=your_openrouter_api_key_here
GEMINI_API_KEY=your_gemini_api_key_here

# Self-hosted Embedding Model (alternative to OpenRouter)
EMBEDDING_API_URL=http://YOUR_VM_IP:8000/embed
```

| Key | Purpose | Required |
|---|---|---|
| `OPENROUTER_API_KEY` | Semantic embeddings for domain categorization | Optional (falls back to keyword matching) |
| `GEMINI_API_KEY` | AI-generated alert explanations | Optional (shows "AI explanation unavailable" if missing) |
| `EMBEDDING_API_URL` | Self-hosted embedding model endpoint | Optional (alternative to OpenRouter) |
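As the table notes, the semantic layer degrades to keyword matching when no embedding backend is configured. A minimal sketch of that fallback is below; the keyword list and the `get_embedding`/`anchors` hooks are illustrative assumptions, and the real logic lives in `worker/semantic.py`.

```python
# Illustrative sketch -- the shipped logic lives in worker/semantic.py.
# The keyword list and the get_embedding()/anchors hooks are assumptions.
import math
import os

AI_KEYWORDS = {"gpt", "chat", "gemini", "claude", "copilot", "llm"}

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def semantic_score(domain: str, get_embedding=None, anchors=None) -> float:
    """Embedding-based category match, with a keyword fallback when no
    embedding backend (OPENROUTER_API_KEY / EMBEDDING_API_URL) is configured."""
    if get_embedding and anchors and (
        os.environ.get("OPENROUTER_API_KEY") or os.environ.get("EMBEDDING_API_URL")
    ):
        vec = get_embedding(domain)
        return max(cosine(vec, get_embedding(anchor)) for anchor in anchors)
    # Fallback path: crude keyword matching on the domain name.
    return 1.0 if any(k in domain.lower() for k in AI_KEYWORDS) else 0.0
```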
Terminal 1: Backend Services (Docker)

```bash
docker-compose up redis collector worker
```

Terminal 2: Frontend (Vite HMR)

```bash
cd dashboard/frontend
npm install
npm run dev
```

Access at: http://localhost:3000 (with hot reload)
Collector:

```bash
cd collector
pip install -r requirements.txt
uvicorn main:app --reload --port 8000
```

Worker:

```bash
cd worker
pip install -r requirements.txt
python worker.py
```

Dashboard Backend:

```bash
cd dashboard/backend
pip install -r requirements.txt
uvicorn main:app --reload --port 8001
```

Dashboard Frontend:

```bash
cd dashboard/frontend
npm install
npm run dev
```

| Method | Endpoint | Description |
|---|---|---|
| GET | `/api/health` | Health check with Redis status |
| GET | `/api/alerts` | Fetch security alerts |
| GET | `/api/users` | Get user statistics |
| Method | Endpoint | Description |
|---|---|---|
| POST | `/logs` | Ingest log events |
| GET | `/logs?params` | Ingest via query params (for proxies) |
| GET | `/health` | Collector health check |
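To sanity-check ingestion end to end you can post a synthetic event directly to `/logs`. The field names below are assumptions for illustration; the authoritative schema is the Pydantic models under `collector/schemas/`.

```python
# Illustrative sketch -- field names are assumed, check collector/schemas/
# for the real Pydantic schema before relying on this payload shape.
import requests

event = {
    "user": "alice@example.com",
    "domain": "chat.example.ai",
    "url": "https://chat.example.ai/upload",
    "method": "POST",
    "bytes_sent": 524288,
    "timestamp": "2024-01-01T12:00:00Z",
}

resp = requests.post("http://localhost:3000/logs", json=event, timeout=5)
print(resp.status_code, resp.text)
```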
```bash
docker-compose down -v
docker-compose up --build
```

The `-v` flag removes volumes, wiping all stored alerts.

```bash
# All services
docker-compose logs -f

# Specific service
docker-compose logs -f worker
```

```bash
# Dashboard API
curl http://localhost:3000/api/health

# Collector (via proxy)
curl http://localhost:3000/health
```