[Feat]: Add Redis-backed QueueManager for Production Deployments #446

@mjunaidca

Description

Is your feature request related to a problem? Please describe.

The current A2A Python SDK provides only InMemoryQueueManager, which is not viable for production deployments. In distributed setups such as Kubernetes with multiple pods, an in-memory queue cannot share state between instances, leading to:

  • Lost messages between pods
  • Inconsistent task state across the cluster
  • Inability to scale horizontally
  • No persistence of events across pod restarts

Describe the solution you'd like

Implement a Redis-backed QueueManager that uses Redis Streams for reliable, distributed event queuing (a rough sketch follows the list below). This would enable:

  • Production-ready deployments in Kubernetes and other distributed environments
  • Horizontal scaling across multiple pods
  • Persistent event storage and recovery
  • Consistent state management across the cluster
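
To sketch the idea: a Redis Streams based manager could keep one stream per task and use a consumer group so that each event is delivered to a single pod and pending entries survive restarts. The snippet below is only a rough illustration built on the `redis` (redis-py) asyncio client; the class and method names (`RedisStreamQueueManager`, `enqueue_event`, `dequeue_event`), the `a2a:events:{task_id}` key scheme, and the connection/group defaults are assumptions, and a real implementation would need to conform to the SDK's actual `QueueManager` interface.

```python
import json
from typing import Any

from redis import asyncio as aioredis
from redis.exceptions import ResponseError


class RedisStreamQueueManager:
    """Illustrative sketch: distributes A2A events across pods via Redis Streams.

    Each task gets its own stream; a per-task consumer group means every
    event is handed to a single consumer in the group, and pending entries
    survive pod restarts.
    """

    def __init__(
        self,
        redis_url: str = "redis://localhost:6379/0",
        group: str = "a2a-workers",
        consumer: str = "pod-1",
    ) -> None:
        self._redis = aioredis.from_url(redis_url, decode_responses=True)
        self._group = group
        self._consumer = consumer

    def _stream_key(self, task_id: str) -> str:
        # Hypothetical key scheme: one stream per task.
        return f"a2a:events:{task_id}"

    async def enqueue_event(self, task_id: str, event: dict[str, Any]) -> str:
        """Append an event to the task's stream; Redis persists it."""
        stream = self._stream_key(task_id)
        try:
            # Create the consumer group (and stream) if they don't exist yet.
            await self._redis.xgroup_create(stream, self._group, id="0", mkstream=True)
        except ResponseError:
            pass  # BUSYGROUP: the group already exists.
        return await self._redis.xadd(stream, {"payload": json.dumps(event)})

    async def dequeue_event(self, task_id: str, block_ms: int = 5000) -> dict[str, Any] | None:
        """Read the next undelivered event for this task, blocking up to block_ms."""
        stream = self._stream_key(task_id)
        entries = await self._redis.xreadgroup(
            self._group, self._consumer, {stream: ">"}, count=1, block=block_ms
        )
        if not entries:
            return None
        _, messages = entries[0]
        message_id, fields = messages[0]
        # Ack so the event is not redelivered; a real implementation would
        # ack only after the event has been handled successfully.
        await self._redis.xack(stream, self._group, message_id)
        return json.loads(fields["payload"])

    async def close(self) -> None:
        await self._redis.aclose()  # requires redis-py >= 5.0
```

Acknowledging only after the event has been fully processed (rather than right after the read, as in this sketch) would give at-least-once delivery, and XAUTOCLAIM could reclaim entries left pending by crashed pods.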

Describe alternatives you've considered

  • Database-backed queues (more complex, higher latency)
  • Message queue systems like RabbitMQ (additional infrastructure complexity)
  • Shared memory solutions (not viable in containerized environments)

Additional context

Redis is already widely used in agentic AI platforms like LangGraph and offers a strong balance of performance, reliability, and operational simplicity for distributed event streaming. Many serverless and microservices architectures already run Redis, making it a natural fit for production A2A deployments.

Reference implementations needed: Are there any existing Redis queue implementations in the A2A ecosystem that could serve as a reference?
