- Helicone API Key: Get your API key from Helicone Dashboard
- OpenAI API Key: For testing OpenAI provider
- Anthropic API Key: For testing Anthropic provider (optional)
- Azure OpenAI: For testing Azure provider (optional)
```bash
pnpm install
pnpm build
```

Navigate to your n8n folder (usually `~/.n8n` on macOS/Linux):
```bash
cd ~/.n8n
mkdir custom
cd custom
pnpm init
```

Link your built node to the n8n custom folder:
```bash
pnpm link /path/to/your/helicone-n8n-node
```

Replace `/path/to/your/helicone-n8n-node` with the actual path to your repository.
```bash
n8n start
```

Open your browser and go to: http://localhost:5678
- In n8n, go to Settings → Credentials
- Click Add Credential
- Search for "Helicone API" and select it
- Enter your Helicone API key (starts with `pk-` for write access)
- Save the credential
- Click New Workflow
- Add a Helicone node (search for "Helicone" in the nodes panel)
- Configure the node with the following settings:
- LLM Provider: OpenAI
- OpenAI API Key: Your OpenAI API key
- Model: `gpt-4o-mini`
- Messages: `[{"role": "user", "content": "Hello! Tell me a joke."}]`
- Max Tokens: 100
- Temperature: 0.7
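The node configuration above corresponds to a standard OpenAI chat-completion request routed through Helicone's OpenAI gateway. The following Python sketch shows the equivalent raw request; the proxy URL and `Helicone-Auth` header follow Helicone's documented gateway pattern, and the environment-variable names are placeholders:

```python
import json
import os

# Placeholder keys -- in practice, read these from your credential store.
OPENAI_API_KEY = os.environ.get("OPENAI_API_KEY", "sk-placeholder")
HELICONE_API_KEY = os.environ.get("HELICONE_API_KEY", "pk-placeholder")

# Helicone's OpenAI gateway: the base URL swaps api.openai.com for
# oai.helicone.ai; everything else is a standard OpenAI request.
url = "https://oai.helicone.ai/v1/chat/completions"
headers = {
    "Authorization": f"Bearer {OPENAI_API_KEY}",
    "Helicone-Auth": f"Bearer {HELICONE_API_KEY}",
    "Content-Type": "application/json",
}
payload = {
    "model": "gpt-4o-mini",
    "messages": [{"role": "user", "content": "Hello! Tell me a joke."}],
    "max_tokens": 100,
    "temperature": 0.7,
}

# Sending it is a single POST, e.g.:
#   requests.post(url, headers=headers, data=json.dumps(payload))
print(json.dumps(payload, indent=2))
```

Because only the base URL and the extra `Helicone-Auth` header differ from a direct OpenAI call, any OpenAI client can be pointed at the proxy the same way.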
- LLM Provider: Anthropic
- Anthropic API Key: Your Anthropic API key
- Model: `claude-3-opus-20240229`
- Messages: `[{"role": "user", "content": "Hello! Tell me a joke."}]`
- System Message: `You are a helpful assistant.`
- Max Tokens: 100
- Temperature: 0.7
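The Anthropic configuration maps to Anthropic's Messages API, routed through Helicone's Anthropic gateway. One detail worth knowing when verifying the response format: Anthropic takes the system prompt as a top-level `system` field, not as a message with role `system`. A sketch of the equivalent raw request (the gateway hostname is assumed from Helicone's proxy pattern):

```python
import json
import os

# Placeholder keys -- in practice, read these from your credential store.
ANTHROPIC_API_KEY = os.environ.get("ANTHROPIC_API_KEY", "sk-ant-placeholder")
HELICONE_API_KEY = os.environ.get("HELICONE_API_KEY", "pk-placeholder")

# Helicone's Anthropic gateway swaps api.anthropic.com for
# anthropic.helicone.ai; the request body is standard Anthropic.
url = "https://anthropic.helicone.ai/v1/messages"
headers = {
    "x-api-key": ANTHROPIC_API_KEY,
    "anthropic-version": "2023-06-01",
    "Helicone-Auth": f"Bearer {HELICONE_API_KEY}",
    "Content-Type": "application/json",
}
payload = {
    "model": "claude-3-opus-20240229",
    # Anthropic's system prompt is a top-level field, not a message.
    "system": "You are a helpful assistant.",
    "messages": [{"role": "user", "content": "Hello! Tell me a joke."}],
    "max_tokens": 100,
    "temperature": 0.7,
}
print(json.dumps(payload, indent=2))
```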
Add custom properties to track your requests:
```json
{
  "test": "true",
  "environment": "development",
  "user_id": "12345"
}
```

- Session ID: `test-session-123`
- Session Name: `Test Session`
- Session Path: `testing/unit-tests`
- Enable Caching: `true`
- Cache TTL: `3600` (1 hour)
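These options are carried as Helicone request headers: each custom property becomes a `Helicone-Property-<Name>` header, session fields become `Helicone-Session-*` headers, and caching is switched on with `Helicone-Cache-Enabled` plus a standard `Cache-Control` max-age for the TTL. A sketch of building that header set (a hypothetical helper, not part of the node; header names follow Helicone's conventions, so double-check them against the current docs):

```python
def helicone_headers(properties, session_id=None, session_name=None,
                     session_path=None, cache_enabled=False, cache_ttl=None):
    """Build Helicone tracking headers (hypothetical helper)."""
    headers = {}
    # Each custom property gets its own Helicone-Property-* header.
    for key, value in properties.items():
        headers[f"Helicone-Property-{key}"] = str(value)
    if session_id:
        headers["Helicone-Session-Id"] = session_id
    if session_name:
        headers["Helicone-Session-Name"] = session_name
    if session_path:
        headers["Helicone-Session-Path"] = session_path
    if cache_enabled:
        headers["Helicone-Cache-Enabled"] = "true"
        if cache_ttl:
            # The cache TTL rides on a standard Cache-Control max-age.
            headers["Cache-Control"] = f"max-age={cache_ttl}"
    return headers

headers = helicone_headers(
    {"test": "true", "environment": "development", "user_id": "12345"},
    session_id="test-session-123",
    session_name="Test Session",
    session_path="testing/unit-tests",
    cache_enabled=True,
    cache_ttl=3600,
)
print(headers)
```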
- Click the Execute Workflow button
- Check the output in the node
- Verify the response contains the expected LLM response
- Go to Helicone Dashboard
- Check the Requests tab
- Verify your test request appears with:
- Custom properties
- Session information
- Request/response data
- Performance metrics
- Provider: OpenAI
- Model: gpt-4o-mini
- Simple user message
- Verify response and Helicone tracking
- Provider: Anthropic
- Model: claude-3-opus-20240229
- Include system message
- Verify response format
- Add multiple custom properties
- Verify they appear in Helicone dashboard
- Check property filtering
- Use session ID, name, and path
- Make multiple requests with same session
- Verify session grouping in dashboard
- Enable caching with TTL
- Make identical requests
- Verify cached responses
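The caching scenario can also be checked programmatically: fire the identical request twice and compare latency, or inspect the cache marker in the response headers (the `helicone-cache: HIT/MISS` response header is an assumption based on Helicone's docs). A sketch with an injectable `send` callable, dry-run here with a fake sender so no live endpoint is needed:

```python
def check_cache_hit(send, payload):
    """Send the identical payload twice via `send` -- a callable that
    returns (response_headers, latency_seconds) -- and report whether
    the second call looks cached."""
    _, first_latency = send(payload)
    second_headers, second_latency = send(payload)
    # Prefer the explicit marker if the proxy returns one (assumed
    # header name "helicone-cache" with values like "HIT"/"MISS").
    marker = second_headers.get("helicone-cache", "").upper()
    if marker:
        return marker == "HIT"
    # Otherwise fall back to a crude latency heuristic.
    return second_latency < first_latency / 2

# Dry run with a fake sender: first call is a MISS, second a HIT.
calls = []
def fake_send(payload):
    calls.append(payload)
    status = "MISS" if len(calls) == 1 else "HIT"
    latency = 1.2 if status == "MISS" else 0.05
    return {"helicone-cache": status}, latency

result = check_cache_hit(fake_send, {"model": "gpt-4o-mini"})
print(result)  # True
```

Against the live proxy, `send` would wrap the POST from the earlier examples and read the timing and headers off the HTTP response.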
- Use invalid API keys
- Test with malformed messages
- Verify error responses
- Node not found: Ensure you've run `pnpm link` and restarted n8n
- Build errors: Check TypeScript compilation with `pnpm build`
- Credential errors: Verify API keys are correct (Helicone key should start with `pk-`)
- Network errors: Check internet connection and API endpoints
Start n8n with debug logging:
```bash
DEBUG=n8n:* n8n start
```

Verify your node is registered:

```bash
n8n list-nodes | grep helicone
```

- ✅ Helicone node appears in n8n interface
- ✅ Credentials can be configured
- ✅ Requests are sent to Helicone proxy
- ✅ Responses are received correctly
- ✅ Data appears in Helicone dashboard
- ✅ Custom properties are tracked
- ✅ Session information is preserved
- ✅ Caching works as expected
- Load Testing: Send multiple concurrent requests
- Latency Testing: Measure response times
- Error Rate Testing: Test with various error conditions
- Memory Testing: Monitor memory usage during extended use
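For the load and latency items, a small harness that runs a request function concurrently and collects per-call latencies is enough to get started. The `make_request` stub below is a placeholder; swap it for a real HTTP call into the workflow or the Helicone proxy when load-testing for real:

```python
import time
from concurrent.futures import ThreadPoolExecutor
from statistics import mean

def make_request(i):
    """Stub standing in for one LLM request through the node --
    replace the sleep with an actual HTTP call for a real load test."""
    start = time.perf_counter()
    time.sleep(0.01)  # simulate network + model latency
    return time.perf_counter() - start

def load_test(n_requests=20, concurrency=5):
    """Fire n_requests with up to `concurrency` in flight at once;
    return (mean latency, max latency) in seconds."""
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        latencies = list(pool.map(make_request, range(n_requests)))
    return mean(latencies), max(latencies)

avg, worst = load_test()
print(f"avg={avg * 1000:.1f} ms, worst={worst * 1000:.1f} ms")
```

Raising `concurrency` while watching the worst-case latency (and n8n's memory use) covers the load, latency, and memory items above in one loop.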
- API Key Security: Verify keys are not logged
- Data Privacy: Check sensitive data handling
- Input Validation: Test with malicious inputs
- Rate Limiting: Test with high request volumes