Enterprise-grade AI Brand Manager using Model Context Protocol (MCP), NVIDIA NIM guardrails, and social media integration.
- Real MCP Protocol: Actual Model Context Protocol client-server communication
- Multi-layered Guardrails: NVIDIA NeMo Guardrails with Content Safety and Topic Control
- Social Media Integration: Automated Mastodon posting with safety validation
- Enterprise UI: Professional Gradio interface for brand management workflows
- GPU-accelerated: NVIDIA NIM containers for high-performance inference
UI → AI Agent → MCP Server → NeMo Guardrails → NVIDIA NIMs → Mastodon API
- UI Layer: Gradio-based professional interface
- Agent Layer: Strategic content planning and execution
- MCP Layer: Standards-compliant tool calling protocol
- Guardrails Layer: 23-category safety validation
- Infrastructure: Docker containers with GPU allocation
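Under the hood, the agent drives mastodon_mcp_server.py through MCP's stdio transport. The sketch below uses the official `mcp` Python SDK to show the shape of that exchange; the tool name `post_status` and its arguments are hypothetical placeholders, not this server's actual schema:

```python
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def main() -> None:
    # Spawn the MCP server as a subprocess and connect over stdio.
    server = StdioServerParameters(command="python3", args=["mastodon_mcp_server.py"])
    async with stdio_client(server) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = await session.list_tools()
            print("Tools exposed by the server:", [t.name for t in tools.tools])
            # "post_status" is a hypothetical name -- list_tools() shows
            # what the server actually registers.
            result = await session.call_tool("post_status", {"text": "Hello, Mastodon!"})
            print(result)

if __name__ == "__main__":
    asyncio.run(main())
```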
- NVIDIA GPU with Docker support
- NGC API Key from NVIDIA
- Mastodon account with API access
- Python 3.8+
- Docker with NVIDIA Container Runtime
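Before pulling multi-gigabyte NIM containers, it is worth sanity-checking the GPU and Docker prerequisites. A minimal check, assuming nvidia-smi and docker are on your PATH:

```python
import shutil
import subprocess

# Verify the prerequisite CLIs exist before doing anything expensive.
for tool in ("nvidia-smi", "docker"):
    if shutil.which(tool) is None:
        raise SystemExit(f"{tool} not found -- install it before continuing")

# List GPUs (the per-GPU layout below expects three) and confirm Docker runs.
subprocess.run(
    ["nvidia-smi", "--query-gpu=index,name,memory.total", "--format=csv"],
    check=True,
)
subprocess.run(["docker", "--version"], check=True)
```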
git clone ssh://[email protected]:12051/skrithivasan/guardrailsmcp.git
cd guardrailsmcpcp .env.example .env
# Edit .env with your actual API keyspip install -r requirements.txt# Set up NIM cache
export LOCAL_NIM_CACHE=~/.cache/nim
mkdir -p "${LOCAL_NIM_CACHE}"
# Create Docker network
docker network create nemoguard-local
```bash
# Start LLM service (GPU 0)
docker run --rm -d --network=nemoguard-local --name=llm \
  --gpus="device=0" --runtime=nvidia \
  -e NVIDIA_API_KEY \
  -u $(id -u) \
  -v "${LOCAL_NIM_CACHE}:/opt/nim/.cache" \
  -p 8000:8000 \
  nvcr.io/nim/meta/llama-3.1-8b-instruct:1.8.6

# Start Content Safety service (GPU 1)
docker run --rm -d --network=nemoguard-local --name=contentsafety \
  --gpus="device=1" --runtime=nvidia \
  -e NVIDIA_API_KEY \
  -u $(id -u) \
  -v "${LOCAL_NIM_CACHE}:/opt/nim/.cache" \
  -p 8001:8000 \
  nvcr.io/nim/meta/llama-3.1-nemoguard-8b-content-safety:1.0.0

# Start Topic Control service (GPU 2)
docker run --rm -d --network=nemoguard-local --name=topiccontrol \
  --gpus="device=2" --runtime=nvidia \
  -e NVIDIA_API_KEY \
  -u $(id -u) \
  -v "${LOCAL_NIM_CACHE}:/opt/nim/.cache" \
  -p 8002:8000 \
  nvcr.io/nim/meta/llama-3.1-nemoguard-8b-topic-control:1.0.0

# Start NeMo Guardrails service
docker run --rm -d --network=nemoguard-local --name=guardrails \
  --gpus=all --runtime=nvidia \
  -e NVIDIA_API_KEY \
  -v $(pwd)/config-store:/opt/app/config-store \
  -p 7331:8000 \
  nvcr.io/nim/nemo-guardrails:24.08
```
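The NIMs download model weights on first start and can take several minutes before they answer requests. A small stdlib-only wait loop, polling the same health endpoints used in the debug commands later in this README (some NIM versions expose /v1/health/ready instead; adjust the paths if these 404):

```python
import time
import urllib.request

SERVICES = {
    "llm": "http://localhost:8000/v1/health",
    "contentsafety": "http://localhost:8001/v1/health",
    "topiccontrol": "http://localhost:8002/v1/health",
    "guardrails": "http://localhost:7331/v1/health",
}

def wait_until_ready(timeout_s: int = 900) -> None:
    deadline = time.time() + timeout_s
    pending = dict(SERVICES)
    while pending and time.time() < deadline:
        for name, url in list(pending.items()):
            try:
                with urllib.request.urlopen(url, timeout=5) as resp:
                    if resp.status == 200:
                        print(f"{name} is ready")
                        del pending[name]
            except OSError:
                pass  # container is still starting up
        if pending:
            time.sleep(10)
    if pending:
        raise TimeoutError(f"services not ready: {', '.join(pending)}")

if __name__ == "__main__":
    wait_until_ready()
```

Once all four services report healthy, launch the UI: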
```bash
# Start the AI Brand Manager UI
python3 real_mcp_ui.py
```

- Access the UI: Open the Gradio interface (typically http://localhost:7862)
- Enter Task: Describe your brand management task
- Review Process: Watch real-time execution with safety checks
- Approve Content: Review generated content and guardrails analysis
- Social Publishing: Content is automatically posted to Mastodon if safe
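The publishing step itself can be reproduced with the Mastodon.py client. A minimal sketch, assuming the instance URL and access token live in environment variables (the variable names here are illustrative; match them to your .env):

```python
import os

from mastodon import Mastodon  # pip install Mastodon.py

# MASTODON_API_BASE_URL / MASTODON_ACCESS_TOKEN are illustrative
# names -- use whatever your .env actually defines.
client = Mastodon(
    api_base_url=os.environ["MASTODON_API_BASE_URL"],
    access_token=os.environ["MASTODON_ACCESS_TOKEN"],
)

# In the real workflow this runs only after the guardrails approve the content.
status = client.status_post("Excited to announce our new AI product launch!")
print("Posted:", status["url"])
```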
- "Announce our new AI product launch"
- "Create a customer appreciation post"
- "Share company quarterly results"
- "Promote our upcoming webinar"
- Violence, Sexual content, Criminal planning
- Weapons, Controlled substances, Self-harm
- Minor safety, Hate speech, PII/Privacy
- Harassment, Threats, Profanity
- Copyright/Trademark violations
- Unauthorized advice, Illegal activities
- Business announcements ✅
- Technology discussions ✅
- Customer appreciation ✅
- Offensive content ❌
- Spam/promotional abuse ❌
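Both checker NIMs sit behind OpenAI-compatible endpoints, so a draft post can be screened programmatically before publishing. A rough sketch against the Content Safety service on port 8001; the served model id is discovered via /v1/models, and the production prompt templates in config-store/nemoguard/ are more elaborate than this bare message:

```python
import requests

SAFETY_URL = "http://localhost:8001"

# Discover the served model id instead of hard-coding it.
model_id = requests.get(f"{SAFETY_URL}/v1/models", timeout=10).json()["data"][0]["id"]

draft = "Announce our new AI product launch"
resp = requests.post(
    f"{SAFETY_URL}/v1/chat/completions",
    json={
        "model": model_id,
        "messages": [{"role": "user", "content": draft}],
        "temperature": 0.0,
        "max_tokens": 128,
    },
    timeout=60,
)
# NemoGuard safety models return a structured verdict; how it is parsed
# depends on the prompt template configured in the guardrails config.
print(resp.json()["choices"][0]["message"]["content"])
```

The Topic Control service on port 8002 is called the same way, with the allowed topics supplied through its system prompt.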
- real_mcp_ui.py: Main Gradio UI application
- mastodon_mcp_server.py: MCP server for social media tools
- config-store/nemoguard/: Guardrails configuration
- requirements.txt: Python dependencies
- technical_flow_explanation.md: Detailed architecture documentation
- GPU Memory: Ensure sufficient GPU memory for 3 NIM containers
- API Keys: Verify NVIDIA_API_KEY is valid and exported
- Network: Check Docker network connectivity between services
- Ports: Ensure ports 7331, 8000-8002 are available
```bash
# Check container status
docker ps

# View container logs
docker logs llm
docker logs guardrails

# Test API endpoints
curl http://localhost:7331/v1/health
curl http://localhost:8000/v1/health
```

- End-to-end Workflow: ~4-6 seconds
- Safety Validation: ~2-3 seconds (parallel GPU processing)
- Social Media Posting: ~1-2 seconds
- MCP Protocol Overhead: ~0.1-0.3 seconds
This project demonstrates enterprise AI workflows with:
- Standards-compliant MCP protocol implementation
- Production-ready safety validation
- Real-time social media integration
- Professional UI/UX design
NVIDIA Internal Project