Just gonna spin on this, reason about some caveats, and sketch ideas for how it can be handled.

The balance problem with consistent hash + auto-delete queues

One concern with the "republish on delete" approach used in isolation with a consistent hash exchange: the hash ring changes every time a queue unbinds, so republished messages land wherever the remaining queues happen to be, not where they'd belong once consumers scale back up.

Example: 5 auto-delete queues, 3 consumers restart

Starting state: five queues, each owning one partition of the key space. Three consumers restart, their queues auto-delete, and each deleted queue's backlog is republished into the ring formed by the surviving queues.
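A rough simulation of the effect (this is not the plugin's actual ring implementation, which places each binding at many points on a hash ring; a simple hash-modulo mapping is enough to show the same scattering, and the queue/key names are made up):

```python
import hashlib

def route(key: str, queues: list[str]) -> str:
    """Pick a queue for a routing key: hash the key, reduce it onto the
    currently-bound queues. When the queue list shrinks, the mapping moves."""
    h = int(hashlib.sha256(key.encode()).hexdigest(), 16)
    return queues[h % len(queues)]

all_queues = ["q0", "q1", "q2", "q3", "q4"]
keys = [f"order-{i}" for i in range(50)]
home = {k: route(k, all_queues) for k in keys}  # where each key belongs

# Consumers for q0, q2 and q3 restart; their queues auto-delete and the
# backlogs are republished into the ring of the two surviving queues.
survivors = ["q1", "q4"]
republished = {k: route(k, survivors) for k in keys
               if home[k] in ("q0", "q2", "q3")}

# Every republished message now sits in q1 or q4, which is never the queue
# it belongs to once the three consumers come back and rebind.
assert all(republished[k] != home[k] for k in republished)
```

The last assertion holds by construction: a republished key's new queue comes from the survivor set, while its home queue is one of the deleted ones, so every such message is misplaced the moment the original queues return.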
Consistent hashing is broken: messages that belong in partition 2 are stuck in q1/q4. No rebalancing happens.

Two features to make this work

1. Stable partitions: a queue unbinding must not move the remaining queues' positions on the ring, so every key keeps mapping to the same partition number.
2. An overflow buffer: messages whose partition currently has no bound queue are parked, and drained to a queue when one binds to that partition.
With both in place, restarts preserve balance: each consumer gets exactly its partition's backlog back.

But what about permanent scaling?

This is where it gets tricky. There are two very different scenarios, and the exchange can't tell them apart: a queue that disappears because its consumer is restarting and will be back shortly, and a queue that disappears because of a permanent scale-down.
If the overflow drains on unbind, restarts break (messages scatter before the consumer returns). If it only drains on bind, permanent scale-downs leave messages orphaned in the buffer forever.

The missing piece: explicit ring size

The exchange needs to know the intended number of partitions, separate from how many queues happen to be bound right now. Something like an explicit ring-size argument set on the exchange.
This separates topology intent from queue lifecycle. The ring size is the source of truth for "how many partitions should exist". Scale-down becomes an explicit action (update the ring size) rather than something the broker has to guess from queue disappearances.
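A sketch of the proposed behaviour (hypothetical; no such argument exists in the plugin today, and all names here are invented for illustration): the partition of a message depends only on the key and the declared ring size, never on the current set of bound queues, and messages for absent partitions are parked until a queue binds to that partition.

```python
import hashlib

RING_SIZE = 5  # topology intent: "there should be 5 partitions"

def partition(key: str) -> int:
    """Partition number depends only on the key and the declared ring size,
    never on which queues happen to be bound right now."""
    h = int(hashlib.sha256(key.encode()).hexdigest(), 16)
    return h % RING_SIZE

bound = {1: "q1", 4: "q4"}           # partitions that currently have a queue
overflow: dict[int, list[str]] = {}  # parked messages, per absent partition

def publish(key: str) -> str:
    p = partition(key)
    if p in bound:
        return f"deliver to {bound[p]}"
    overflow.setdefault(p, []).append(key)
    return f"park in partition {p}"

def bind(p: int, queue: str) -> list[str]:
    """A queue binds to its partition and drains that partition's backlog."""
    bound[p] = queue
    return overflow.pop(p, [])

def resize(new_size: int) -> None:
    """Scale-down as an explicit action: shrink the declared ring.
    Re-homing messages parked in removed partitions is elided here."""
    global RING_SIZE
    RING_SIZE = new_size
```

Restarts are then harmless: while a consumer is away its partition's messages accumulate in the overflow, and the rebinding queue drains exactly that backlog, nothing more.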
Is your feature request related to a problem? Please describe.
Looking at this scenario to explain the problem
I have messages being published into a Consistent Hash Exchange, with a number of queues bound to that exchange. Messages are routed based on the hash of the routing key, so the same type of message goes to the same queue, which shards the messages.
I have one consumer per queue that reads and processes each message.
Since the number of messages going into the exchange can vary, I have auto scaling enabled for my workers: when messages start to build up, another worker is added, which creates a new queue and binds it to the same exchange.
Up to this point everything works fine, but when the message rate goes down again I want to be able to reduce the number of workers to save cost. If I just stop a worker, its queue is still bound to the exchange, so it continues to receive messages and will eventually hit `max-length` or just fill up the disk. I can also script an automatic deletion of the queue when I remove the worker, but if that queue has any messages they will be deleted, which is not good. So I first have to tell the broker to move the messages, but to where? They need to fit the hashing function in the exchange to get routed to the correct queue.
So it can probably be handled, just not in an easy way, and I think this could be solved in the broker.
Describe the solution you'd like
Ideas to solve this:
Idea 1
A feature to tell the broker: when this queue is deleted, take all the messages and route them to a specific exchange. It could be an argument on the queue, or perhaps a policy.
Idea 2
A queue argument, something like `on-delete`, so you can tell what will happen to the messages when the queue is deleted. The default value should be `discard`, but it could also be `republish`. This is very similar to the above suggestion but probably more flexible for additional options in the future.
In both cases the queues used for consistent hashing could then be `auto-delete`, since the messages will be republished when the queue is deleted. One gotcha here: if you delete the last queue, or it's auto-deleted, where should the messages go? Republish them back to an exchange with no queue bound and the messages will be gone. Should they be deleted? Can we use an alternate exchange? Maybe have a `Stream` as a backup option for the exchange for these cases?