Merged
Changes from all commits
Commits
46 commits
491e658
fix(graph): implement exponential backoff for node retries
vakrahul Jan 25, 2026
5923147
chore(graph): fix lint issues in retry backoff loggings
vakrahul Jan 26, 2026
0653519
verifying
vakrahul Jan 26, 2026
1a7ed9c
style: fix F821 undefined name and E501 line length errors
vakrahul Jan 26, 2026
5168ed3
fix(tools): validate Content-Type in web_scrape tool (Closes #487)
gaurav-code098 Jan 26, 2026
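The commit above adds a Content-Type guard to the web_scrape tool. A minimal sketch of such a check — the function name and accepted media types are assumptions for illustration, not the PR's actual code:

```python
def is_html_content(content_type_header: str) -> bool:
    """Return True only for media types a scraper should parse as HTML."""
    # Strip parameters such as "; charset=utf-8" before comparing.
    media_type = content_type_header.split(";")[0].strip().lower()
    return media_type in ("text/html", "application/xhtml+xml")
```

A scraper would call this on the response's Content-Type header and refuse to run an HTML parser over PDF or JSON bodies — the failure mode #487 describes.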
d558bf4
feat(tools): add CSV tools with DuckDB SQL support
Hundao Jan 26, 2026
af3b8b1
Fix: Add MockLLMProvider to enable mock mode execution
savankansagara1 Jan 26, 2026
40e39d2
docs(llm): add DeepSeek models support documentation and examples
SoulSniper-V2 Jan 26, 2026
82c32e8
refactor(mcp): replace print() with logging in setup scripts
Aryanycoder Jan 26, 2026
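The print-to-logging refactor above follows a standard pattern; a hedged sketch, where the logger name and wrapper are assumptions:

```python
import logging

logger = logging.getLogger("mcp.setup")  # logger name is an assumption

def report(msg: str) -> None:
    # Previously print(msg); logging adds levels, timestamps, and
    # handler-level routing without changing call sites much.
    logger.info(msg)
```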
8516eba
feat(testing): add configurable LLM provider to LLMJudge
pradyten Jan 26, 2026
69ad0be
Merge branch 'main' into feat/llm-judge-configurable-provider
pradyten Jan 26, 2026
798f3cf
Merge pull request #349 from Himanshu-ABES/feat/pydantic-llm-validation
acho-dev Jan 26, 2026
0a8c30c
Merge pull request #788 from SoulSniper-V2/feat/add-deepseek-docs
RichardTang-Aden Jan 26, 2026
396e5c3
Merge pull request #528 from gaurav-code098/fix/web-scrape-content-type
bryanadenhq Jan 26, 2026
25fabd8
Merge pull request #576 from savankansagara1/fix/mock-mode-llm-provider
bryanadenhq Jan 26, 2026
d064c98
fixed linter
bryanadenhq Jan 26, 2026
5cf25c6
Merge pull request #906 from adenhq/fix/ruff-tests
bryanadenhq Jan 26, 2026
9230ac6
Merge pull request #871 from pradyten/feat/llm-judge-configurable-pro…
bryanadenhq Jan 26, 2026
2b86046
fix(types): correct type annotation from lowercase 'callable' to 'Cal…
not-anas-ali Jan 27, 2026
8523324
fix(graph): add logging for JSON parsing failures in worker_node
saboor2632 Jan 27, 2026
0cf9e39
docs(tools): fix tool name in README table (execute_command → execute…
adionit7 Jan 27, 2026
e57cad7
ci: make Validate Agent Exports skip clearly when exports/ is missing…
adionit7 Jan 27, 2026
e846ad6
refactor: implement provider-agnostic logic for test templates
Jan 27, 2026
1631d01
merge: resolve conflicts in executor.pyx
vakrahul Jan 27, 2026
68264b5
style: fix linting issues in output_cleaner.py
vakrahul Jan 27, 2026
ed88129
Merge pull request #927 from saboor2632/fix/worker-node-json-logging
bryanadenhq Jan 27, 2026
3eb964e
Merge pull request #933 from adionit7/docs/fix-execute-command-tool-n…
bryanadenhq Jan 27, 2026
b0435a1
Merge branch 'adenhq:main' into refactor/provider-agnostic-prompts
TanujaNair03 Jan 27, 2026
8525aec
Merge pull request #934 from adionit7/fix/validate-exports-skip-when-…
bryanadenhq Jan 27, 2026
6d025c8
Merge pull request #946 from not-anas-ali/fix/callable-type-annotations
bryanadenhq Jan 27, 2026
a122345
fix(graph): restore node.max_retries and fix type check per review
vakrahul Jan 27, 2026
03910d5
Merge branch 'main' into fix/graph-retry-backoff
vakrahul Jan 27, 2026
e59bb2d
style: fix linting issues (whitespace and newline)
Jan 27, 2026
500876d
style: add required trailing newline to prompts.py
Jan 27, 2026
bc8cdfd
Merge pull request #941 from vakrahul/fix/graph-retry-backoff
bryanadenhq Jan 27, 2026
a4b0c66
Merge pull request #558 from Hundao/feature/csv-tools
Hundao Jan 27, 2026
6acdb65
Merge pull request #948 from TanujaNair03/refactor/provider-agnostic-…
Hundao Jan 27, 2026
407816d
style: fix ruff quote style violations (Q000)
AryanyAI Jan 27, 2026
3605f37
refactor: make LLMJudge provider-agnostic with OpenAI support (#1103)
Jan 27, 2026
598cc8b
refactor: provider-agnostic LLMJudge with ruff styling fixes (#1103)
Jan 27, 2026
9d39c09
Merge pull request #973 from AryanyAI/refactor/logging-mcp-scripts
Hundao Jan 27, 2026
a59d6ac
refactor(tools): add multi-provider support to web_search tool (#795)
vrijmetse Jan 27, 2026
112b1ba
fix(memory): patch ConcurrentStorage leak with WeakValueDictionary (I…
Jan 27, 2026
0381a5c
Merge branch 'adenhq:main' into fix/concurrent-storage-file-locks-leak
Tahir-yamin Jan 27, 2026
197f4f9
Merge pull request #1353 from Tahir-yamin/fix/concurrent-storage-file…
TimothyZhang7 Jan 27, 2026
e1bea18
Merge pull request #1113 from TanujaNair03/refactor/llm-judge-agnostic
TimothyZhang7 Jan 27, 2026
24 changes: 23 additions & 1 deletion .github/workflows/ci.yml
@@ -79,9 +79,31 @@ jobs:
- name: Validate exported agents
run: |
# Check that agent exports have valid structure
for agent_dir in exports/*/; do
if [ ! -d "exports" ]; then
echo "No exports/ directory found, skipping validation"
exit 0
fi

shopt -s nullglob
agent_dirs=(exports/*/)
shopt -u nullglob

if [ ${#agent_dirs[@]} -eq 0 ]; then
echo "No agent directories in exports/, skipping validation"
exit 0
fi

validated=0
for agent_dir in "${agent_dirs[@]}"; do
if [ -f "$agent_dir/agent.json" ]; then
echo "Validating $agent_dir"
python -c "import json; json.load(open('$agent_dir/agent.json'))"
validated=$((validated + 1))
fi
done

if [ "$validated" -eq 0 ]; then
echo "No agent.json files found in exports/, skipping validation"
else
echo "Validated $validated agent(s)"
fi
3 changes: 2 additions & 1 deletion CHANGELOG.md
@@ -25,8 +25,9 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
### Removed
- N/A


### Fixed
- N/A
- tools: Fixed web_scrape tool attempting to parse non-HTML content (PDF, JSON) as HTML (#487)

### Security
- N/A
2 changes: 1 addition & 1 deletion README.md
@@ -331,7 +331,7 @@ No. Aden is built from the ground up with no dependencies on LangChain, CrewAI,

**Q: What LLM providers does Aden support?**

Aden supports 100+ LLM providers through LiteLLM integration, including OpenAI (GPT-4, GPT-4o), Anthropic (Claude models), Google Gemini, Mistral, Groq, and many more. Simply set the appropriate API key environment variable and specify the model name.
Aden supports 100+ LLM providers through LiteLLM integration, including OpenAI (GPT-4, GPT-4o), Anthropic (Claude models), Google Gemini, DeepSeek, Mistral, Groq, and many more. Simply set the appropriate API key environment variable and specify the model name.

**Q: Can I use Aden with local AI models like Ollama?**

68 changes: 47 additions & 21 deletions core/framework/graph/executor.py
@@ -9,6 +9,7 @@
5. Returns the final result
"""

import asyncio
import logging
from collections.abc import Callable
from dataclasses import dataclass, field
@@ -118,11 +118,13 @@ def _validate_tools(self, graph: GraphSpec) -> list[str]:
if node.tools:
missing = set(node.tools) - available_tool_names
if missing:
avail = sorted(available_tool_names) if available_tool_names else "none"
available = (
sorted(available_tool_names) if available_tool_names else "none"
)
errors.append(
f"Node '{node.name}' (id={node.id}) requires tools "
f"{sorted(missing)} but they are not registered. "
f"Available tools: {avail}"
f"Available tools: {available}"
)

return errors
@@ -164,7 +167,7 @@ async def execute(
success=False,
error=(
f"Missing tools: {'; '.join(tool_errors)}. "
"Register tools via ToolRegistry or remove tool declarations."
"Register tools via ToolRegistry or remove tool declarations from nodes."
),
)

@@ -174,16 +177,19 @@
# Restore session state if provided
if session_state and "memory" in session_state:
memory_data = session_state["memory"]
if isinstance(memory_data, dict):
# Restore memory from previous session
for key, value in memory_data.items():
memory.write(key, value)
self.logger.info(f"📥 Restored session state with {len(memory_data)} memory keys")
else:
# [RESTORED] Type safety check
if not isinstance(memory_data, dict):
self.logger.warning(
f"⚠️ Invalid memory data type in session state: "
f"{type(memory_data).__name__}, expected dict"
)
else:
# Restore memory from previous session
for key, value in memory_data.items():
memory.write(key, value)
self.logger.info(
f"📥 Restored session state with {len(memory_data)} memory keys"
)

# Write new input data to memory (each key individually)
if input_data:
@@ -319,40 +325,52 @@
node_retry_counts.get(current_node_id, 0) + 1
)

if node_retry_counts[current_node_id] < node_spec.max_retries:
# [CORRECTED] Use node_spec.max_retries instead of hardcoded 3
max_retries = getattr(node_spec, "max_retries", 3)

if node_retry_counts[current_node_id] < max_retries:
# Retry - don't increment steps for retries
steps -= 1

# --- EXPONENTIAL BACKOFF ---
retry_count = node_retry_counts[current_node_id]
# Backoff formula: 1.0 * (2^(retry - 1)) -> 1s, 2s, 4s...
delay = 1.0 * (2 ** (retry_count - 1))
self.logger.info(f" Using backoff: Sleeping {delay}s before retry...")
await asyncio.sleep(delay)
# --------------------------------------

self.logger.info(
f" ↻ Retrying ({retry_count}/{node_spec.max_retries})..."
f" ↻ Retrying ({node_retry_counts[current_node_id]}/"
f"{max_retries})..."
)
continue
else:
# Max retries exceeded - fail the execution
self.logger.error(
f" ✗ Max retries ({node_spec.max_retries}) exceeded "
f"for node {current_node_id}"
f" ✗ Max retries ({max_retries}) "
f"exceeded for node {current_node_id}"
)
self.runtime.report_problem(
severity="critical",
description=(
f"Node {current_node_id} failed after "
f"{node_spec.max_retries} attempts: {result.error}"
f"{max_retries} attempts: {result.error}"
),
)
self.runtime.end_run(
success=False,
output_data=memory.read_all(),
narrative=(
f"Failed at {node_spec.name} after "
f"{node_spec.max_retries} retries: {result.error}"
f"{max_retries} retries: {result.error}"
),
)
return ExecutionResult(
success=False,
error=(
f"Node '{node_spec.name}' failed after "
f"{node_spec.max_retries} attempts: {result.error}"
f"{max_retries} attempts: {result.error}"
),
output=memory.read_all(),
steps_executed=steps,
@@ -557,8 +575,12 @@ def _follow_edges(
memory=memory.read_all(),
llm=self.llm,
goal=goal,
source_node_name=current_node_spec.name if current_node_spec else current_node_id,
target_node_name=target_node_spec.name if target_node_spec else edge.target,
source_node_name=current_node_spec.name
if current_node_spec
else current_node_id,
target_node_name=target_node_spec.name
if target_node_spec
else edge.target,
):
# Validate and clean output before mapping inputs
if self.cleansing_config.enabled and target_node_spec:
@@ -571,7 +593,9 @@
)

if not validation.valid:
self.logger.warning(f"⚠ Output validation failed: {validation.errors}")
self.logger.warning(
f"⚠ Output validation failed: {validation.errors}"
)

# Clean the output
cleaned_output = self.output_cleaner.clean_output(
@@ -596,14 +620,16 @@
)

if revalidation.valid:
self.logger.info("✓ Output cleaned and validated successfully")
self.logger.info(
"✓ Output cleaned and validated successfully"
)
else:
self.logger.error(
f"✗ Cleaning failed, errors remain: {revalidation.errors}"
)
# Continue anyway if fallback_to_raw is True

# Map inputs
mapped = edge.map_inputs(result.output, memory.read_all())
for key, value in mapped.items():
memory.write(key, value)
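The exponential backoff added in executor.py follows the 1s/2s/4s schedule noted in the diff's own comment; pulled out as a standalone helper it might look like this. The base delay, and the absence of a cap or jitter, mirror the diff rather than a recommendation:

```python
import asyncio

def backoff_delay(retry_count: int, base: float = 1.0) -> float:
    # base * 2^(retry - 1) -> 1s, 2s, 4s, ... as in the diff's comment.
    return base * (2 ** (retry_count - 1))

async def sleep_before_retry(retry_count: int) -> None:
    """Wait out the backoff delay before re-running a failed node."""
    await asyncio.sleep(backoff_delay(retry_count))
```

Production retry schedules usually add a maximum delay and random jitter to avoid thundering herds; the diff keeps the bare formula.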
40 changes: 26 additions & 14 deletions core/framework/graph/node.py
@@ -19,9 +19,9 @@
from abc import ABC, abstractmethod
from collections.abc import Callable
from dataclasses import dataclass, field
from typing import Any, Type
from typing import Any

from pydantic import BaseModel, Field, ValidationError
from pydantic import BaseModel, Field

from framework.llm.provider import LLMProvider, Tool
from framework.runtime.core import Runtime
@@ -145,9 +145,12 @@ class NodeSpec(BaseModel):
retry_on: list[str] = Field(default_factory=list, description="Error types to retry on")

# Pydantic model for output validation
output_model: Type[BaseModel] | None = Field(
output_model: type[BaseModel] | None = Field(
default=None,
description="Optional Pydantic model class for validating and parsing LLM output. When set, the LLM response will be validated against this model."
description=(
"Optional Pydantic model class for validating and parsing LLM output. "
"When set, the LLM response will be validated against this model."
),
)
max_validation_retries: int = Field(
default=2,
@@ -355,7 +358,7 @@ class NodeResult:
# Metadata
tokens_used: int = 0
latency_ms: int = 0

# Pydantic validation errors (if any)
validation_errors: list[str] = field(default_factory=list)

@@ -622,10 +625,12 @@ def executor(tool_use: ToolUse) -> ToolResult:
"strict": True,
}
}
logger.info(f" 📐 Using JSON schema from Pydantic model: {ctx.node_spec.output_model.__name__}")
model_name = ctx.node_spec.output_model.__name__
logger.info(f" 📐 Using JSON schema from Pydantic model: {model_name}")

# Phase 2: Retry loop for Pydantic validation
max_validation_retries = ctx.node_spec.max_validation_retries if ctx.node_spec.output_model else 0
max_retries = ctx.node_spec.max_validation_retries
max_validation_retries = max_retries if ctx.node_spec.output_model else 0
validation_attempt = 0
total_input_tokens = 0
total_output_tokens = 0
@@ -668,7 +673,8 @@ def executor(tool_use: ToolUse) -> ToolResult:

if validation_result.success:
# Validation passed, break out of retry loop
logger.info(f" ✓ Pydantic validation passed for {ctx.node_spec.output_model.__name__}")
model_name = ctx.node_spec.output_model.__name__
logger.info(f" ✓ Pydantic validation passed for {model_name}")
break
else:
# Validation failed
Expand All @@ -680,10 +686,11 @@ def executor(tool_use: ToolUse) -> ToolResult:
validation_result, ctx.node_spec.output_model
)
logger.warning(
f" ⚠ Pydantic validation failed (attempt {validation_attempt}/{max_validation_retries}): "
f" ⚠ Pydantic validation failed "
f"(attempt {validation_attempt}/{max_validation_retries}): "
f"{validation_result.error}"
)
logger.info(f" 🔄 Retrying with validation feedback...")
logger.info(" 🔄 Retrying with validation feedback...")

# Add the assistant's failed response and feedback
current_messages.append({
@@ -698,9 +705,10 @@ def executor(tool_use: ToolUse) -> ToolResult:
else:
# Max retries exceeded
latency_ms = int((time.time() - start) * 1000)
err = validation_result.error
logger.error(
f" ✗ Pydantic validation failed after {max_validation_retries} retries: "
f"{validation_result.error}"
f" ✗ Pydantic validation failed after "
f"{max_validation_retries} retries: {err}"
)
ctx.runtime.record_outcome(
decision_id=decision_id,
@@ -709,9 +717,13 @@ def executor(tool_use: ToolUse) -> ToolResult:
tokens_used=total_input_tokens + total_output_tokens,
latency_ms=latency_ms,
)
error_msg = (
f"Pydantic validation failed after "
f"{max_validation_retries} retries: {err}"
)
return NodeResult(
success=False,
error=f"Pydantic validation failed after {max_validation_retries} retries: {validation_result.error}",
error=error_msg,
output=parsed,
tokens_used=total_input_tokens + total_output_tokens,
latency_ms=latency_ms,
@@ -760,7 +772,7 @@ def executor(tool_use: ToolUse) -> ToolResult:
# Use validated model's dict representation
if validated_model:
parsed = validated_model.model_dump()

for key in ctx.node_spec.output_keys:
if key in parsed:
value = parsed[key]
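The Pydantic validation-retry loop in node.py reduces to the following shape, with a plain callable standing in for model validation. This is a sketch, not the PR's code; the real loop also appends the validation errors to the conversation as feedback for the next LLM attempt:

```python
def generate_with_validation(generate, validate, max_validation_retries: int = 2):
    """Retry `generate` until `validate` accepts its output or retries run out."""
    last_error = None
    for attempt in range(max_validation_retries + 1):
        output = generate(attempt)
        ok, error = validate(output)
        if ok:
            return output
        last_error = error  # node.py feeds this back to the LLM before retrying
    raise ValueError(
        f"validation failed after {max_validation_retries} retries: {last_error}"
    )
```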