
AI Brand Manager with MCP and Guardrails

Enterprise-grade AI Brand Manager using Model Context Protocol (MCP), NVIDIA NIM guardrails, and social media integration.

🚀 Features

  • Real MCP Protocol: genuine Model Context Protocol client-server communication, not simulated tool calls
  • Multi-layered Guardrails: NVIDIA NeMo Guardrails with Content Safety and Topic Control
  • Social Media Integration: Automated Mastodon posting with safety validation
  • Enterprise UI: Professional Gradio interface for brand management workflows
  • GPU-accelerated: NVIDIA NIM containers for high-performance inference

🏗️ Architecture

UI → AI Agent → MCP Server → NeMo Guardrails → NVIDIA NIMs → Mastodon API
  • UI Layer: Gradio-based professional interface
  • Agent Layer: Strategic content planning and execution
  • MCP Layer: Standards-compliant tool calling protocol
  • Guardrails Layer: 23-category safety validation
  • Infrastructure: Docker containers with GPU allocation
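Under the hood, MCP tool invocations are JSON-RPC 2.0 messages with the `tools/call` method. A rough sketch of what the agent sends to the MCP server (the tool name `post_to_mastodon` and its arguments are illustrative, not taken from this repo's server):

```python
import json

def mcp_tool_call(request_id: int, tool_name: str, arguments: dict) -> str:
    """Build a JSON-RPC 2.0 `tools/call` request as defined by the MCP spec."""
    payload = {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    }
    return json.dumps(payload)

# Example: ask the MCP server to publish an approved post
msg = mcp_tool_call(1, "post_to_mastodon", {"status": "Launching our new AI product!"})
print(msg)
```

The actual transport (stdio or HTTP) and tool schema are negotiated during the MCP handshake; only the message shape is shown here.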

📋 Prerequisites

  • NVIDIA GPU with Docker support
  • NGC API Key from NVIDIA
  • Mastodon account with API access
  • Python 3.8+
  • Docker with NVIDIA Container Runtime

🛠️ Setup Instructions

1. Clone the Repository

git clone ssh://[email protected]:12051/skrithivasan/guardrailsmcp.git
cd guardrailsmcp

2. Environment Configuration

cp .env.example .env
# Edit .env with your actual API keys

3. Install Python Dependencies

pip install -r requirements.txt

4. Start NVIDIA NIM Services

# Set up NIM cache
export LOCAL_NIM_CACHE=~/.cache/nim
mkdir -p "${LOCAL_NIM_CACHE}"

# Create Docker network
docker network create nemoguard-local

# Start LLM service (GPU 0)
docker run --rm -d --network=nemoguard-local --name=llm \
  --gpus="device=0" --runtime=nvidia \
  -e NVIDIA_API_KEY \
  -u $(id -u) \
  -v "${LOCAL_NIM_CACHE}:/opt/nim/.cache" \
  -p 8000:8000 \
  nvcr.io/nim/meta/llama-3.1-8b-instruct:1.8.6

# Start Content Safety service (GPU 1)
docker run --rm -d --network=nemoguard-local --name=contentsafety \
  --gpus="device=1" --runtime=nvidia \
  -e NVIDIA_API_KEY \
  -u $(id -u) \
  -v "${LOCAL_NIM_CACHE}:/opt/nim/.cache" \
  -p 8001:8000 \
  nvcr.io/nim/meta/llama-3.1-nemoguard-8b-content-safety:1.0.0

# Start Topic Control service (GPU 2)
docker run --rm -d --network=nemoguard-local --name=topiccontrol \
  --gpus="device=2" --runtime=nvidia \
  -e NVIDIA_API_KEY \
  -u $(id -u) \
  -v "${LOCAL_NIM_CACHE}:/opt/nim/.cache" \
  -p 8002:8000 \
  nvcr.io/nim/meta/llama-3.1-nemoguard-8b-topic-control:1.0.0

5. Start NeMo Guardrails

docker run --rm -d --network=nemoguard-local --name=guardrails \
  --gpus=all --runtime=nvidia \
  -e NVIDIA_API_KEY \
  -v $(pwd)/config-store:/opt/app/config-store \
  -p 7331:8000 \
  nvcr.io/nim/nemo-guardrails:24.08

6. Launch the Application

# Start the AI Brand Manager UI
python3 real_mcp_ui.py

🔧 Usage

  1. Access the UI: Open the Gradio interface (typically http://localhost:7862)
  2. Enter Task: Describe your brand management task
  3. Review Process: Watch real-time execution with safety checks
  4. Approve Content: Review generated content and guardrails analysis
  5. Social Publishing: Content is automatically posted to Mastodon if safe
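The approve-then-publish step boils down to a gate: content reaches Mastodon only if every guardrail check passed. A minimal sketch (function and check names are illustrative, not the actual API of `real_mcp_ui.py`):

```python
def gate(results: dict):
    """Return (ok, failed_checks) for a dict of guardrail results.

    `results` maps a check name (e.g. "content_safety", "topic_control")
    to whether that check passed.
    """
    failed = [name for name, ok in results.items() if not ok]
    return (not failed, failed)

ok, failed = gate({"content_safety": True, "topic_control": True})
print("publish" if ok else f"blocked by: {failed}")  # → publish
```

Only when `gate` returns `(True, [])` would the UI hand the content to the MCP posting tool.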

Example Tasks

  • "Announce our new AI product launch"
  • "Create a customer appreciation post"
  • "Share company quarterly results"
  • "Promote our upcoming webinar"

🛡️ Safety Features

Content Safety Categories (S1-S23)

  • Violence, Sexual content, Criminal planning
  • Weapons, Controlled substances, Self-harm
  • Minor safety, Hate speech, PII/Privacy
  • Harassment, Threats, Profanity
  • Copyright/Trademark violations
  • Unauthorized advice, Illegal activities

Topic Control Guidelines

  • Business announcements ✅
  • Technology discussions ✅
  • Customer appreciation ✅
  • Offensive content ❌
  • Spam/promotional abuse ❌
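The Content Safety NIM returns a structured verdict that the agent must parse before the publishing gate. Assuming a NemoGuard-style JSON response with a safety flag and a comma-separated category list (the exact field names may differ from the deployed model's schema), a parser might look like:

```python
import json

def parse_verdict(raw: str):
    """Parse a hypothetical verdict such as
    {"User Safety": "unsafe", "Safety Categories": "S1,S10"}
    into (is_safe, [category_codes]).

    NOTE: field names are an assumption; check them against the
    actual Content Safety NIM response before relying on this.
    """
    data = json.loads(raw)
    is_safe = data.get("User Safety", "unsafe").lower() == "safe"
    cats = [c.strip() for c in data.get("Safety Categories", "").split(",") if c.strip()]
    return is_safe, cats

print(parse_verdict('{"User Safety": "safe"}'))  # → (True, [])
print(parse_verdict('{"User Safety": "unsafe", "Safety Categories": "S1,S10"}'))
```

Defaulting to "unsafe" when the flag is missing keeps the gate fail-closed.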

📊 Key Files

  • real_mcp_ui.py: Main Gradio UI application
  • mastodon_mcp_server.py: MCP server for social media tools
  • config-store/nemoguard/: Guardrails configuration
  • requirements.txt: Python dependencies
  • technical_flow_explanation.md: Detailed architecture documentation

🔍 Troubleshooting

Common Issues

  1. GPU Memory: Each of the three NIM containers runs an 8B-parameter model on its own GPU; ensure each assigned GPU has enough free memory
  2. API Keys: Verify NVIDIA_API_KEY is valid and exported in your shell (the docker run commands pass it through via -e NVIDIA_API_KEY)
  3. Network: Check Docker network connectivity between services
  4. Ports: Ensure ports 7331, 8000-8002 are available

Debugging Commands

# Check container status
docker ps

# View container logs
docker logs llm
docker logs guardrails

# Test API endpoints
curl http://localhost:7331/v1/health
curl http://localhost:8000/v1/health
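The same health checks can be scripted. The port map below mirrors the docker run commands above, and the /v1/health path matches the curl examples (some NIM versions expose /v1/health/ready instead, so adjust if you get 404s):

```python
from urllib.request import urlopen
from urllib.error import URLError

# Host ports from the docker run commands in the setup section
SERVICES = {
    "guardrails": 7331,
    "llm": 8000,
    "contentsafety": 8001,
    "topiccontrol": 8002,
}

def is_healthy(port: int, timeout: float = 3.0) -> bool:
    """Return True if the service on localhost:<port> answers its health endpoint."""
    try:
        with urlopen(f"http://localhost:{port}/v1/health", timeout=timeout) as resp:
            return resp.status == 200
    except (URLError, OSError):
        return False

if __name__ == "__main__":
    for name, port in SERVICES.items():
        print(f"{name:>13}: {'up' if is_healthy(port) else 'DOWN'}")
```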

📈 Performance

  • End-to-end Workflow: ~4-6 seconds
  • Safety Validation: ~2-3 seconds (parallel GPU processing)
  • Social Media Posting: ~1-2 seconds
  • MCP Protocol Overhead: ~0.1-0.3 seconds

🤝 Contributing

This project demonstrates enterprise AI workflows with:

  • Standards-compliant MCP protocol implementation
  • Production-ready safety validation
  • Real-time social media integration
  • Professional UI/UX design

📝 License

NVIDIA Internal Project
