Version: ≥5.6.x
This document describes the security measures applied to EDDI's AI Agent Tooling system, particularly for tools that execute in response to LLM-generated arguments.
When an LLM is given access to tools, every argument it supplies must be treated as untrusted input. An attacker can craft prompts that cause the LLM to pass malicious arguments to tools — a class of attacks known as prompt injection. EDDI mitigates these risks at the tool-execution layer so that individual tools do not need to implement their own defences.
Applies to: PDF Reader, Web Scraper, and any future tool that fetches remote resources.
Server-Side Request Forgery (SSRF) occurs when an attacker tricks a server-side application into making requests to internal services. EDDI prevents this with UrlValidationUtils.validateUrl(url):
Only http and https URLs are accepted. All other schemes are rejected:
| Blocked scheme | Example |
|---|---|
| `file://` | `file:///etc/passwd` |
| `ftp://` | `ftp://internal-server/data` |
| `jar://` | `jar:file:///app.jar!/secret` |
| `gopher://` | `gopher://127.0.0.1:25/...` |
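The scheme allowlist can be sketched in a few lines. This is a hypothetical illustration using `java.net.URI`, not EDDI's actual `UrlValidationUtils` code:

```java
import java.net.URI;

// Hypothetical sketch of the scheme allowlist: only http and https pass.
class SchemeCheck {
    static boolean isAllowedScheme(String url) {
        String scheme = URI.create(url).getScheme();
        return "http".equalsIgnoreCase(scheme) || "https".equalsIgnoreCase(scheme);
    }
}
```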
DNS resolution is performed and the resolved address is checked before any connection is made:
| Range | Description |
|---|---|
| `127.0.0.0/8` | Loopback addresses |
| `10.0.0.0/8` | Private network (Class A) |
| `172.16.0.0/12` | Private network (Class B) |
| `192.168.0.0/16` | Private network (Class C) |
| `169.254.0.0/16` | Link-local (AWS/GCP metadata) |
| `fd00::/8` | IPv6 unique-local |
| `fe80::/10` | IPv6 link-local |
| `::1` | IPv6 loopback |
Cloud provider metadata services are explicitly blocked by IP and hostname:
- `169.254.169.254` (AWS, GCP, Azure metadata)
- `metadata.google.internal` (GCP)
Hostnames that indicate internal services are rejected:
- `localhost`
- Any hostname ending in `.local`
- Any hostname ending in `.internal`
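A minimal sketch of that hostname blocklist (hypothetical, not EDDI's actual code) might look like:

```java
import java.util.Locale;

// Hypothetical sketch: reject hostnames that indicate internal services.
class HostnameCheck {
    static boolean isInternalHostname(String host) {
        String h = host.toLowerCase(Locale.ROOT);
        return h.equals("localhost")
            || h.endsWith(".local")      // mDNS / internal LAN names
            || h.endsWith(".internal");  // covers e.g. metadata.google.internal
    }
}
```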
```java
import static ai.labs.eddi.modules.langchain.tools.UrlValidationUtils.validateUrl;

// In any tool method that accepts a URL:
validateUrl(url); // throws IllegalArgumentException if blocked
```

Applies to: Calculator tool.
The original implementation used Java's ScriptEngine (Nashorn/Rhino) to evaluate math expressions. A malicious expression could execute arbitrary JavaScript:
```javascript
// DANGEROUS — would execute arbitrary code in old implementation:
java.lang.Runtime.getRuntime().exec('rm -rf /')
```
The Calculator tool now uses SafeMathParser, a recursive-descent parser written in pure Java. It:
- Recognises only numeric literals, arithmetic operators (`+`, `-`, `*`, `/`, `%`, `^`), and parentheses
- Supports a fixed allowlist of math functions (`sqrt`, `pow`, `abs`, `sin`, `cos`, `log`, `exp`, etc.)
- Supports only two constants (`PI`, `E`)
- Has no code execution capability — unrecognised tokens cause an immediate parse error
- Requires no external dependencies (no Rhino/Nashorn/GraalJS)
```
expression → term (('+' | '-') term)*
term       → power (('*' | '/' | '%') power)*
power      → unary ('^' unary)*
unary      → ('-' | '+')? primary
primary    → NUMBER | FUNCTION '(' args ')' | '(' expression ')' | CONSTANT
```
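The grammar above can be implemented as a short recursive-descent evaluator. The following is an illustrative subset (numbers, operators, parentheses only), not EDDI's actual `SafeMathParser`, which additionally handles the function allowlist and constants — but it shows why such a parser has no code-execution surface: any token outside the grammar is an immediate error.

```java
// Hypothetical minimal recursive-descent evaluator for the grammar above.
class MiniMathParser {
    private final String src;
    private int pos;

    private MiniMathParser(String expression) {
        this.src = expression.replaceAll("\\s+", "");
    }

    static double eval(String expression) {
        MiniMathParser p = new MiniMathParser(expression);
        double v = p.expression();
        if (p.pos != p.src.length())
            throw new IllegalArgumentException("Unexpected token at position " + p.pos);
        return v;
    }

    // expression → term (('+' | '-') term)*
    private double expression() {
        double v = term();
        while (peek('+') || peek('-'))
            v = src.charAt(pos++) == '+' ? v + term() : v - term();
        return v;
    }

    // term → power (('*' | '/' | '%') power)*
    private double term() {
        double v = power();
        while (peek('*') || peek('/') || peek('%')) {
            char op = src.charAt(pos++);
            double r = power();
            v = op == '*' ? v * r : op == '/' ? v / r : v % r;
        }
        return v;
    }

    // power → unary ('^' unary)*
    private double power() {
        double v = unary();
        while (peek('^')) {
            pos++;
            v = Math.pow(v, unary());
        }
        return v;
    }

    // unary → ('-' | '+')? primary
    private double unary() {
        if (peek('-')) { pos++; return -primary(); }
        if (peek('+')) pos++;
        return primary();
    }

    // primary → NUMBER | '(' expression ')'   (functions/constants omitted here)
    private double primary() {
        if (peek('(')) {
            pos++;                              // consume '('
            double v = expression();
            if (!peek(')')) throw new IllegalArgumentException("Missing ')'");
            pos++;                              // consume ')'
            return v;
        }
        int start = pos;
        while (pos < src.length()
                && (Character.isDigit(src.charAt(pos)) || src.charAt(pos) == '.')) pos++;
        if (start == pos)                       // any other token is a parse error
            throw new IllegalArgumentException("Unexpected token at position " + pos);
        return Double.parseDouble(src.substring(start, pos));
    }

    private boolean peek(char c) {
        return pos < src.length() && src.charAt(pos) == c;
    }
}
```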
`sqrt`, `pow`, `abs`, `ceil`, `floor`, `round`, `sin`, `cos`, `tan`, `asin`, `acos`, `atan`, `atan2`, `log`, `log10`, `exp`, `signum`/`sign`, `toRadians`, `toDegrees`, `cbrt`, `min`, `max`
All tool invocations — both built-in and HTTP-call-based — are routed through ToolExecutionService.executeToolWrapped(). This ensures consistent security and operational controls:
```
Tool Call ──▶ Rate Limiter ──▶ Cache Check ──▶ Execute Tool ──▶ Cost Tracker ──▶ Result
```
- Algorithm: Token bucket per tool name
- Configuration: `enableRateLimiting` (default `true`), `defaultRateLimit` (default `100`), `toolRateLimits` (per-tool overrides)
- Behaviour: Requests exceeding the limit receive a "Rate limit exceeded" error message returned to the LLM, which can then retry or use a different approach
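A per-tool token bucket can be sketched as follows. This is a hypothetical illustration of the algorithm, not EDDI's actual implementation; the capacity would correspond to `defaultRateLimit` or a `toolRateLimits` override:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical token bucket keyed by tool name: each tool gets up to
// `capacity` tokens; tokens refill continuously at `refillPerSecond`.
class ToolRateLimiter {
    private static final class Bucket {
        double tokens;
        long lastRefillNanos;
        Bucket(double tokens) { this.tokens = tokens; this.lastRefillNanos = System.nanoTime(); }
    }

    private final double capacity;
    private final double refillPerSecond;
    private final Map<String, Bucket> buckets = new ConcurrentHashMap<>();

    ToolRateLimiter(double capacity, double refillPerSecond) {
        this.capacity = capacity;
        this.refillPerSecond = refillPerSecond;
    }

    synchronized boolean tryAcquire(String toolName) {
        Bucket b = buckets.computeIfAbsent(toolName, k -> new Bucket(capacity));
        long now = System.nanoTime();
        // Top the bucket up for the time elapsed since the last call.
        b.tokens = Math.min(capacity, b.tokens + (now - b.lastRefillNanos) / 1e9 * refillPerSecond);
        b.lastRefillNanos = now;
        if (b.tokens >= 1.0) {
            b.tokens -= 1.0;
            return true;
        }
        return false; // caller returns "Rate limit exceeded" to the LLM
    }
}
```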
- Key: SHA-256 hash of `toolName + arguments`
- Configuration: `enableToolCaching` (default `true`)
- Behaviour: Identical tool calls within the same conversation return cached results, reducing redundant API calls and cost
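The cache-key derivation can be sketched as below. This is a hypothetical illustration of the hashing step only; how EDDI serialises the arguments before hashing is not shown here:

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

// Hypothetical sketch: hash tool name plus serialised arguments, so
// identical calls map to the same cache key.
class ToolCacheKey {
    static String of(String toolName, String arguments) {
        try {
            MessageDigest sha256 = MessageDigest.getInstance("SHA-256");
            byte[] digest = sha256.digest((toolName + arguments).getBytes(StandardCharsets.UTF_8));
            StringBuilder hex = new StringBuilder();
            for (byte b : digest) hex.append(String.format("%02x", b));
            return hex.toString();
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException(e); // SHA-256 is available on every JVM
        }
    }
}
```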
- Configuration: `enableCostTracking` (default `true`), `maxBudgetPerConversation` (no default — unlimited)
- Eviction: To prevent unbounded memory growth, the tracker caps per-conversation entries at 10 000 and evicts the oldest ~10% when the limit is reached
- Behaviour: When the budget is exceeded, tools return a "Budget exceeded" message to the LLM
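The capped-with-bulk-eviction behaviour can be sketched with an insertion-ordered map. This is a hypothetical illustration of the eviction policy, not EDDI's actual cost tracker:

```java
import java.util.Iterator;
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical sketch: insertion-ordered map capped at maxEntries,
// evicting the oldest ~10% in one pass when the cap is reached.
class BoundedCostTracker {
    private final int maxEntries;
    private final Map<String, Double> costs = new LinkedHashMap<>();

    BoundedCostTracker(int maxEntries) { this.maxEntries = maxEntries; }

    synchronized void record(String entryId, double cost) {
        if (costs.size() >= maxEntries) {
            int toEvict = Math.max(1, maxEntries / 10); // oldest ~10%
            Iterator<String> oldest = costs.keySet().iterator();
            for (int i = 0; i < toEvict && oldest.hasNext(); i++) {
                oldest.next();
                oldest.remove();
            }
        }
        costs.merge(entryId, cost, Double::sum);
    }

    synchronized boolean contains(String entryId) { return costs.containsKey(entryId); }

    synchronized int size() { return costs.size(); }
}
```

Evicting a batch rather than one entry at a time keeps the eviction cost amortised instead of paying it on every insert once the cap is hit.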
```json
{
  "tasks": [{
    "actions": ["help"],
    "type": "openai",
    "enableBuiltInTools": true,
    "enableRateLimiting": true,
    "defaultRateLimit": 100,
    "toolRateLimits": { "websearch": 30, "weather": 50 },
    "enableToolCaching": true,
    "enableCostTracking": true,
    "maxBudgetPerConversation": 5.0,
    "parameters": {
      "apiKey": "...",
      "modelName": "gpt-4o",
      "systemMessage": "You are a helpful assistant."
    }
  }]
}
```

The ConversationCoordinator ensures that messages for the same conversation are processed sequentially, preventing race conditions in conversation state. The isEmpty() → offer() → submit() sequence is wrapped in a synchronized block to prevent two concurrent requests from both being submitted to the thread pool simultaneously.
Different conversations are processed concurrently — only same-conversation messages are serialised.
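The coordination pattern can be sketched as a queue per conversation id. This is a hypothetical illustration of the isEmpty() → offer() → submit() pattern, not EDDI's actual ConversationCoordinator; for brevity the sketch never removes idle queues from the map:

```java
import java.util.ArrayDeque;
import java.util.Map;
import java.util.Queue;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Hypothetical sketch: one task queue per conversation; at most one worker
// drains a given queue, so same-conversation tasks run strictly in order
// while different conversations proceed concurrently on the shared pool.
class ConversationSerializer {
    private final Map<String, Queue<Runnable>> queues = new ConcurrentHashMap<>();
    private final ExecutorService pool = Executors.newFixedThreadPool(4);

    void submit(String conversationId, Runnable task) {
        Queue<Runnable> queue = queues.computeIfAbsent(conversationId, k -> new ArrayDeque<>());
        synchronized (queue) {                  // guard the isEmpty → offer → submit sequence
            boolean idle = queue.isEmpty();     // empty ⇒ no worker is draining this conversation
            queue.offer(task);
            if (idle) pool.submit(() -> drain(queue)); // exactly one worker per conversation
        }
    }

    private void drain(Queue<Runnable> queue) {
        while (true) {
            Runnable current;
            synchronized (queue) { current = queue.peek(); } // keep it queued while running
            current.run();
            synchronized (queue) {
                queue.poll();                   // remove only after completion
                if (queue.isEmpty()) return;    // queue empty again ⇒ worker exits
            }
        }
    }

    void close() {
        pool.shutdown();
        try {
            pool.awaitTermination(5, TimeUnit.SECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}
```

Without the synchronized block, two concurrent submits could both observe an empty queue and both schedule a worker, which is exactly the race the coordinator prevents. Keeping the current task in the queue until it completes makes "queue empty" a reliable proxy for "no worker active".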
The HttpCallExecutor uses strict equality (equals) rather than prefix matching (startsWith) when checking the Content-Type header against application/json. This prevents content types like application/json-patch+json from being incorrectly deserialised as standard JSON.
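A minimal sketch of that comparison follows. Whether the media-type parameters (e.g. `charset`) are stripped before comparing is an assumption of this illustration, not confirmed behaviour of HttpCallExecutor:

```java
import java.util.Locale;

// Hypothetical sketch: compare the media type itself with equals, not
// startsWith, so e.g. application/json-patch+json is NOT treated as JSON.
class ContentTypeCheck {
    static boolean isStandardJson(String contentTypeHeader) {
        if (contentTypeHeader == null) return false;
        String mediaType = contentTypeHeader.split(";", 2)[0].trim().toLowerCase(Locale.ROOT);
        return mediaType.equals("application/json");
    }
}
```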
When adding a new tool to EDDI:
- Validate all URLs with `UrlValidationUtils.validateUrl()` before making any outbound request
- Never use `ScriptEngine` or any form of dynamic code evaluation
- Add `@Tool` annotations with clear descriptions so the LLM understands the tool's purpose and constraints
- Write unit tests that specifically verify rejection of malicious inputs (SSRF URLs, injection strings)
- Route execution through `ToolExecutionService` to inherit rate limiting, caching, and cost tracking
- LangChain Integration — Full agent configuration reference
- Bot Father LangChain Tools Guide — Guided tool setup
- Architecture — EDDI's lifecycle pipeline and concurrency model
- Metrics — Monitoring tool execution performance