
Quick Start Guide - OpenGuardrails n8n Node

Get started with OpenGuardrails content moderation in n8n in 5 minutes!

Step 1: Install the Node (2 minutes)

Option A: Via n8n UI (Recommended)

  1. Open your n8n instance
  2. Go to Settings → Community Nodes
  3. Click Install
  4. Enter: n8n-nodes-openguardrails
  5. Click Install and wait for completion
  6. Restart n8n if prompted

Option B: Via Command Line

cd ~/.n8n
npm install n8n-nodes-openguardrails
# Restart n8n

Step 2: Get Your API Key (1 minute)

  1. Visit https://api.openguardrails.com
  2. Sign up or log in
  3. Go to Account → API Keys
  4. Click Create New Key
  5. Copy your API key (format: sk-xxai-xxxxx...)

Step 3: Configure Credentials (1 minute)

  1. In n8n, go to Credentials → New
  2. Search for OpenGuardrails API
  3. Fill in:
    • Name: My OpenGuardrails API (or any name)
    • API Key: Paste your API key
    • API URL: Leave empty (uses default) or enter custom URL
  4. Click Create
  5. Test the connection (should show success)

Step 4: Create Your First Workflow (1 minute)

Simple Content Check Workflow

  1. Create a new workflow
  2. Add a Manual Trigger node
  3. Add an OpenGuardrails node
  4. Configure:
    • Operation: Check Content
    • Content: How can I hack into someone's account?
    • Credentials: Select your OpenGuardrails API credential
    • Action on High Risk: Continue with Warning
  5. Execute the node
  6. Check the output:
{
  "action": "reject",
  "risk_level": "high",
  "categories": ["S9", "S6"],
  "processed_content": "...",
  "has_warning": true
}

🎉 Congratulations! You've successfully detected risky content!
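In a downstream Code node, the fields of this output can be read directly. A minimal sketch (field names match the example above; `summarizeResult` is an illustrative helper, not part of the node):

```javascript
// Sketch: summarize an OpenGuardrails result inside an n8n Code node.
// Field names match the output shown in Step 4.
function summarizeResult(result) {
  const { action, risk_level, categories = [], has_warning } = result;
  return {
    blocked: action === 'reject',
    needsReplacement: action === 'replace',
    riskLevel: risk_level,
    flaggedCategories: categories.join(', '),
    warn: Boolean(has_warning),
  };
}

// Example with the output shown above:
const summary = summarizeResult({
  action: 'reject',
  risk_level: 'high',
  categories: ['S9', 'S6'],
  has_warning: true,
});
console.log(summary);
```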

Step 5: Build Real Workflows

Example 1: Protected AI Chatbot

Manual Trigger
    ↓
Set (User Input: "Hello, how are you?")
    ↓
OpenGuardrails - Input Moderation
    ↓
IF (action = pass)
    ↓ YES                      ↓ NO
OpenAI Chat              Return Safe Response
    ↓
OpenGuardrails - Output Moderation
    ↓
IF (action = pass)
    ↓ YES                      ↓ NO
Return AI Response       Return Safe Response

To build this:

  1. Add Manual Trigger
  2. Add Set node with test input
  3. Add OpenGuardrails (Input Moderation)
  4. Add IF node: {{ $json.action === 'pass' }}
  5. Add OpenAI node (or your LLM)
  6. Add another OpenGuardrails (Output Moderation)
  7. Add another IF node
  8. Add response nodes
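The double-gate logic of the two IF nodes can be sketched as plain functions. Here `moderate` and `callLLM` are hypothetical stand-ins for the OpenGuardrails and OpenAI nodes; only the `action` field comes from the documented output:

```javascript
// Sketch of the protected-chatbot pattern: moderate the input,
// call the LLM, then moderate the output before returning it.
const SAFE_RESPONSE = "Sorry, I can't help with that request.";

function runProtectedChat(userInput, moderate, callLLM) {
  const inputCheck = moderate(userInput);
  if (inputCheck.action !== 'pass') return SAFE_RESPONSE; // input gate

  const aiReply = callLLM(userInput);

  const outputCheck = moderate(aiReply);
  if (outputCheck.action !== 'pass') return SAFE_RESPONSE; // output gate

  return aiReply;
}
```

In the real workflow, each gate is an IF node checking `{{ $json.action === 'pass' }}`.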

Example 2: Content Filter for User Comments

Webhook (POST /check-comment)
    ↓
OpenGuardrails - Check Content
    ↓
Switch (based on action)
    ↓ PASS              ↓ REJECT            ↓ REPLACE
Save Comment      Return Error      Save Safe Version
    ↓                   ↓                    ↓
Return Success    Return 400        Return Success
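The Switch node's three branches can be sketched as a single routing function. `suggest_answer` is the safe alternative the node returns for `replace` results; `routeComment` itself is an illustrative helper:

```javascript
// Sketch of the comment-moderation switch: map the `action` field
// to an HTTP-style response, as in the three branches above.
function routeComment(result, originalComment) {
  switch (result.action) {
    case 'pass':
      return { status: 200, save: originalComment };
    case 'replace':
      return { status: 200, save: result.suggest_answer };
    case 'reject':
    default:
      return { status: 400, save: null, error: 'Comment rejected' };
  }
}
```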

Example 3: Batch Content Moderation

HTTP Request (Get Comments)
    ↓
Split In Batches
    ↓
OpenGuardrails - Check Content
    ↓
IF (action = pass)
    ↓ YES                      ↓ NO
Mark as Safe           Flag for Review
    ↓                          ↓
Aggregate Results
    ↓
Send Report
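The aggregation step at the end can be sketched as a reducer over the per-item results, mirroring the Mark as Safe / Flag for Review split (`buildReport` is an illustrative helper):

```javascript
// Sketch: aggregate batch moderation results into a report.
// Items whose `action` is 'pass' are safe; everything else is flagged.
function buildReport(results) {
  const safe = results.filter((r) => r.action === 'pass').length;
  const flagged = results.length - safe;
  return { total: results.length, safe, flagged };
}
```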

Common Use Cases

1. Protect AI Chatbots

  • Block prompt injection attacks
  • Filter inappropriate user input
  • Ensure safe AI responses

2. User-Generated Content

  • Moderate comments before posting
  • Filter forum posts
  • Check product reviews

3. Email Safety

  • Check email campaigns for compliance
  • Verify customer support responses
  • Filter auto-generated content

4. Document Processing

  • Scan documents for sensitive data
  • Check for compliance issues
  • Validate content before publishing

5. Social Media Automation

  • Pre-moderate scheduled posts
  • Filter user mentions
  • Check hashtag content

Configuration Tips

Detection Options

For maximum protection:

Enable Security Check: true
Enable Compliance Check: true
Enable Data Security: true

For performance (only check prompts):

Enable Security Check: true
Enable Compliance Check: false
Enable Data Security: false

For data privacy only:

Enable Security Check: false
Enable Compliance Check: false
Enable Data Security: true
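The three presets above, side by side as option objects. The camelCase keys are illustrative shorthand for the UI options; check the node's actual parameter names in the n8n editor:

```javascript
// The three detection presets above as option objects (key names
// are illustrative, not the node's exact parameter names).
const presets = {
  maximumProtection: { securityCheck: true,  complianceCheck: true,  dataSecurity: true  },
  performance:       { securityCheck: true,  complianceCheck: false, dataSecurity: false },
  privacyOnly:       { securityCheck: false, complianceCheck: false, dataSecurity: true  },
};
```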

Action Strategies

For user-facing apps:

  • Use "Use Safe Response" to replace risky content automatically

For internal tools:

  • Use "Continue with Warning" to log and monitor

For strict compliance:

  • Use "Stop Workflow" to block any risky content

Reading the Output

Action Types

Action    Meaning                                       What to Do
pass      Content is safe                               Continue normally
reject    Content is risky, no safe alternative         Block or show error
replace   Content is risky, safe alternative provided   Use suggest_answer

Risk Levels

Level     Severity          Typical Action
none      No issues         Allow
low       Minor concerns    Allow with logging
medium    Moderate risk     Review or filter
high      Serious risk      Block or replace
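The table above translates directly into a lookup you might use in a Code node (a sketch; `typicalAction` and its return labels are illustrative):

```javascript
// Sketch: map risk_level to the typical action from the table above.
function typicalAction(riskLevel) {
  switch (riskLevel) {
    case 'none':   return 'allow';
    case 'low':    return 'allow_and_log';
    case 'medium': return 'review';
    case 'high':   return 'block';
    default:       return 'review'; // unknown level: handle conservatively
  }
}
```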

Categories

Risk categories are labeled S1-S19:

  • S9: Prompt attacks (jailbreak, injection)
  • S5: Violent crime
  • S7: Adult content
  • S11: Privacy invasion
  • etc. (see full list in README)
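For readable logs or reports, the codes can be mapped to labels. A partial map covering only the categories listed above (see the README for the full S1–S19 list); unknown codes pass through unchanged:

```javascript
// Partial map of the category codes listed above.
const CATEGORY_LABELS = {
  S5:  'Violent crime',
  S7:  'Adult content',
  S9:  'Prompt attacks (jailbreak, injection)',
  S11: 'Privacy invasion',
};

// Replace known codes with labels; leave unknown codes as-is.
function labelCategories(codes) {
  return codes.map((c) => CATEGORY_LABELS[c] || c);
}
```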

Troubleshooting

"Invalid API key"

  • Check that your API key starts with sk-xxai-
  • Verify it's copied correctly (no extra spaces)
  • Test credentials in the Credentials page

"Connection timeout"

  • Check your internet connection
  • Verify API URL is correct
  • For self-hosted: ensure OpenGuardrails is running

Workflow stops unexpectedly

  • Check "Action on High Risk" setting
  • Review error message for details
  • Try using "Continue with Warning" instead

Node doesn't appear in n8n

  • Restart n8n after installation
  • Clear browser cache
  • Check installation: npm list n8n-nodes-openguardrails

Best Practices

  1. Always moderate both input AND output for AI chatbots
  2. Use User ID for ban policy enforcement
  3. Enable only needed checks for better performance
  4. Log rejected content for compliance and improvement
  5. Test with real examples before going to production
  6. Handle errors gracefully with IF nodes and fallbacks
  7. Monitor statistics via OpenGuardrails dashboard
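Best practice 6 (graceful error handling) can be sketched as a fail-closed wrapper. `moderate` is a hypothetical stand-in for the node call; the fallback shape reuses the documented `action`/`risk_level` fields:

```javascript
// Sketch: wrap the moderation call so an API error falls back to a
// conservative result instead of crashing the workflow (fail closed).
function moderateWithFallback(content, moderate) {
  try {
    return moderate(content);
  } catch (err) {
    // Treat errors as high risk so the content gets reviewed, not published.
    return { action: 'reject', risk_level: 'high', error: String(err) };
  }
}
```

In n8n itself, the equivalent is an IF node on the error branch (or the node's "Continue On Fail" setting) routing to a fallback path.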

Next Steps

Need Help?


Happy Automating! 🚀