## Summary
`praisonaiagents/agent/router_agent.py:234` contains:

```python
# TODO: Implement token tracking when LLM.get_response() is updated to return token usage
```
The `RouterAgent` — which dispatches to other agents — does not track token usage, making it impossible to attribute costs when using routing patterns.
## Impact
- In multi-agent systems with routing, token costs are invisible for the routing decisions
- Cannot set budgets or alerts on routing overhead
- Cost attribution is incomplete in production deployments
## Suggested Fix
- Update `LLM.get_response()` to return token usage metadata alongside the response
- Have `RouterAgent` aggregate token usage from routing decisions plus downstream agents
- Expose token metrics via the existing telemetry/trace infrastructure
- Add cost tracking to the agent's `chat_history` or session metadata
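As a rough illustration of the aggregation step, here is a minimal sketch of what per-agent tracking could look like. The `TokenUsage` and `UsageTracker` classes and their field names are hypothetical; the actual metadata shape will depend on what `LLM.get_response()` ends up returning.

```python
from dataclasses import dataclass, field

# Hypothetical shape for token usage metadata; the real fields returned
# by LLM.get_response() may differ once it is updated.
@dataclass
class TokenUsage:
    prompt_tokens: int = 0
    completion_tokens: int = 0

    @property
    def total_tokens(self) -> int:
        return self.prompt_tokens + self.completion_tokens

# Hypothetical aggregator a RouterAgent could hold: one entry for the
# routing decision itself, plus one entry per downstream agent it calls.
@dataclass
class UsageTracker:
    routing: TokenUsage = field(default_factory=TokenUsage)
    downstream: dict[str, TokenUsage] = field(default_factory=dict)

    def record_downstream(self, agent_name: str, usage: TokenUsage) -> None:
        # Accumulate across repeated calls to the same agent.
        entry = self.downstream.setdefault(agent_name, TokenUsage())
        entry.prompt_tokens += usage.prompt_tokens
        entry.completion_tokens += usage.completion_tokens

    def totals(self) -> TokenUsage:
        # Routing overhead + all downstream usage, for cost attribution.
        total = TokenUsage(self.routing.prompt_tokens,
                           self.routing.completion_tokens)
        for usage in self.downstream.values():
            total.prompt_tokens += usage.prompt_tokens
            total.completion_tokens += usage.completion_tokens
        return total
```

A structure like this would make routing overhead (`tracker.routing`) visible separately from downstream costs, which addresses the budget/alert use case above; the totals could then be written into session metadata or emitted through the existing telemetry hooks.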