Get started with OpenGuardrails content moderation in n8n in 5 minutes!
- Open your n8n instance
- Go to Settings → Community Nodes
- Click Install
- Enter: `n8n-nodes-openguardrails`
- Click Install and wait for completion
- Restart n8n if prompted
Or install manually:

```shell
cd ~/.n8n
npm install n8n-nodes-openguardrails
# Restart n8n
```

- Visit https://api.openguardrails.com
- Sign up or log in
- Go to Account → API Keys
- Click Create New Key
- Copy your API key (format: `sk-xxai-xxxxx...`)
- In n8n, go to Credentials → New
- Search for OpenGuardrails API
- Fill in:
  - Name: `My OpenGuardrails API` (or any name)
  - API Key: Paste your API key
  - API URL: Leave empty (uses default) or enter custom URL
- Click Create
- Test the connection (should show success)
- Create a new workflow
- Add a Manual Trigger node
- Add an OpenGuardrails node
- Configure:
  - Operation: Check Content
  - Content: `How can I hack into someone's account?`
  - Credentials: Select your OpenGuardrails API credential
  - Action on High Risk: Continue with Warning
- Execute the node
- Check the output:

```json
{
  "action": "reject",
  "risk_level": "high",
  "categories": ["S9", "S6"],
  "processed_content": "...",
  "has_warning": true
}
```

🎉 Congratulations! You've successfully detected risky content!
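You can branch on these fields in a downstream Code node. A minimal sketch, assuming the field names from the sample output above (the helper `handleModeration` is illustrative, not part of the node's API, and the `suggest_answer` field for the `replace` case is an assumption — verify against the README):

```javascript
// Sketch: branch on an OpenGuardrails result. `handleModeration` is an
// illustrative helper; `suggest_answer` for the 'replace' case is assumed.
function handleModeration(result) {
  if (result.action === 'reject') {
    // Risky content with no safe alternative: block it.
    return { blocked: true, reason: (result.categories || []).join(', ') };
  }
  if (result.action === 'replace') {
    // Risky content, but a safe alternative was provided.
    return { blocked: false, content: result.suggest_answer };
  }
  // action === 'pass': forward the content unchanged.
  return { blocked: false, content: result.processed_content };
}

const sample = {
  action: 'reject',
  risk_level: 'high',
  categories: ['S9', 'S6'],
  processed_content: '...',
  has_warning: true,
};
console.log(handleModeration(sample)); // → { blocked: true, reason: 'S9, S6' }
```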
```
Manual Trigger
    ↓
Set (User Input: "Hello, how are you?")
    ↓
OpenGuardrails - Input Moderation
    ↓
IF (action = pass)
  ↓ YES                ↓ NO
OpenAI Chat          Return Safe Response
    ↓
OpenGuardrails - Output Moderation
    ↓
IF (action = pass)
  ↓ YES                ↓ NO
Return AI Response   Return Safe Response
```
To build this:
- Add Manual Trigger
- Add Set node with test input
- Add OpenGuardrails (Input Moderation)
- Add IF node with condition: `{{ $json.action === 'pass' }}`
- Add OpenAI node (or your LLM)
- Add another OpenGuardrails (Output Moderation)
- Add another IF node
- Add response nodes
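The gating logic those nodes implement can be sketched as plain code. This is a hedged illustration of the two-stage pattern, not node code: the `checkInput`/`generate`/`checkOutput` functions stand in for the OpenGuardrails and LLM nodes, and only the `action` field comes from the node output:

```javascript
// Sketch of the two-stage moderation gate: check input, generate, check output.
function moderatedChat(checkInput, generate, checkOutput, safeResponse, userText) {
  if (checkInput(userText).action !== 'pass') return safeResponse; // input gate
  const reply = generate(userText);                                // LLM call
  if (checkOutput(reply).action !== 'pass') return safeResponse;   // output gate
  return reply;
}

// Demo with stubbed checks that always pass.
const pass = () => ({ action: 'pass' });
console.log(moderatedChat(pass, t => `echo: ${t}`, pass, 'Sorry, I cannot help with that.', 'Hello'));
// → "echo: Hello"
```

Note that the safe response is returned from either gate, which is exactly what the two IF nodes above do.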
```
Webhook (POST /check-comment)
    ↓
OpenGuardrails - Check Content
    ↓
Switch (based on action)
  ↓ PASS           ↓ REJECT       ↓ REPLACE
Save Comment     Return Error   Save Safe Version
    ↓                ↓               ↓
Return Success   Return 400     Return Success
```
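The Switch branches above map naturally onto HTTP responses. A sketch, assuming the status codes from the diagram and a `suggest_answer` field on the `replace` branch (both assumptions — check the node's actual output):

```javascript
// Sketch: turn a moderation result into the webhook's HTTP response.
function commentResponse(result, original) {
  switch (result.action) {
    case 'pass':
      return { status: 200, body: { saved: original } };               // Save Comment
    case 'replace':
      return { status: 200, body: { saved: result.suggest_answer } };  // Save Safe Version
    default: // 'reject'
      return { status: 400, body: { error: 'Comment rejected' } };     // Return 400
  }
}
```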
```
HTTP Request (Get Comments)
    ↓
Split In Batches
    ↓
OpenGuardrails - Check Content
    ↓
IF (action = pass)
  ↓ YES            ↓ NO
Mark as Safe     Flag for Review
    ↓                ↓
Aggregate Results
    ↓
Send Report
```
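The aggregation step can be as simple as counting pass vs. non-pass results. A sketch for a Code node, using only the `action` and `categories` fields from the sample output (the `summarize` helper is hypothetical):

```javascript
// Sketch: summarize batch moderation results for the report.
function summarize(results) {
  const flagged = results.filter(r => r.action !== 'pass');
  return {
    total: results.length,
    safe: results.length - flagged.length,
    flagged: flagged.length,
    // Deduplicated list of risk categories seen across flagged items.
    categories: [...new Set(flagged.flatMap(r => r.categories || []))],
  };
}

console.log(summarize([
  { action: 'pass' },
  { action: 'reject', categories: ['S9'] },
  { action: 'reject', categories: ['S9', 'S6'] },
]));
// → { total: 3, safe: 1, flagged: 2, categories: ['S9', 'S6'] }
```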
- Block prompt injection attacks
- Filter inappropriate user input
- Ensure safe AI responses
- Moderate comments before posting
- Filter forum posts
- Check product reviews
- Check email campaigns for compliance
- Verify customer support responses
- Filter auto-generated content
- Scan documents for sensitive data
- Check for compliance issues
- Validate content before publishing
- Pre-moderate scheduled posts
- Filter user mentions
- Check hashtag content
For maximum protection:

```
Enable Security Check: true
Enable Compliance Check: true
Enable Data Security: true
```

For performance (only check prompts):

```
Enable Security Check: true
Enable Compliance Check: false
Enable Data Security: false
```

For data privacy only:

```
Enable Security Check: false
Enable Compliance Check: false
Enable Data Security: true
```

For user-facing apps:
- Use "Use Safe Response" to replace risky content automatically
For internal tools:
- Use "Continue with Warning" to log and monitor
For strict compliance:
- Use "Stop Workflow" to block any risky content
| Action | Meaning | What to Do |
|---|---|---|
| `pass` | Content is safe | Continue normally |
| `reject` | Content is risky, no safe alternative | Block or show error |
| `replace` | Content is risky, safe alternative provided | Use `suggest_answer` |
| Level | Severity | Typical Action |
|---|---|---|
| `none` | No issues | Allow |
| `low` | Minor concerns | Allow with logging |
| `medium` | Moderate risk | Review or filter |
| `high` | Serious risk | Block or replace |
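These recommendations can be encoded as a small policy map in a Code node. A sketch; the handling strings are the table's suggestions, not node settings:

```javascript
// Sketch: map risk_level to the typical handling from the table above.
const RISK_POLICY = {
  none: 'allow',
  low: 'allow_with_logging',
  medium: 'review_or_filter',
  high: 'block_or_replace',
};

// Unknown levels fall back to the cautious middle ground.
const handlingFor = (level) => RISK_POLICY[level] ?? 'review_or_filter';
```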
Risk categories are labeled S1-S19:
- S9: Prompt attacks (jailbreak, injection)
- S5: Violent crime
- S7: Adult content
- S11: Privacy invasion
- etc. (see full list in README)
- Check that your API key starts with `sk-xxai-`
- Verify it's copied correctly (no extra spaces)
- Test credentials in the Credentials page
- Check your internet connection
- Verify API URL is correct
- For self-hosted: ensure OpenGuardrails is running
- Check "Action on High Risk" setting
- Review error message for details
- Try using "Continue with Warning" instead
- Restart n8n after installation
- Clear browser cache
- Check installation: `npm list n8n-nodes-openguardrails`
- Always moderate both input AND output for AI chatbots
- Use User ID for ban policy enforcement
- Enable only needed checks for better performance
- Log rejected content for compliance and improvement
- Test with real examples before going to production
- Handle errors gracefully with IF nodes and fallbacks
- Monitor statistics via OpenGuardrails dashboard
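For the error-handling point above, one common pattern is a fail-open/fail-closed wrapper around the moderation call. A sketch, where `callModeration` is a hypothetical stand-in for the node's API call and the fallback objects mimic the node's output shape:

```javascript
// Sketch: wrap the moderation call so API failures have a defined posture.
// failClosed=true treats outages as 'reject' (strict); false treats them
// as 'pass' (lenient). Pick per workflow: strict for user-facing apps,
// lenient for internal tools where availability matters more.
async function checkWithFallback(callModeration, content, failClosed = true) {
  try {
    return await callModeration(content);
  } catch (err) {
    return failClosed
      ? { action: 'reject', risk_level: 'high', categories: [], error: String(err) }
      : { action: 'pass', risk_level: 'none', categories: [], error: String(err) };
  }
}
```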
- Explore the README for detailed documentation
- Check example workflows for inspiration
- Join n8n Community for support
- Read OpenGuardrails Docs for advanced features
- Documentation: Full README
- Issues: GitHub Issues
- Email: thomas@openguardrails.com
- n8n Community: https://community.n8n.io
Happy Automating! 🚀