Add concurrency control parameter to Triggers #127
Description
Use case
There are cases in which you need to control the maximum number of concurrent in-flight events being sent to a target.
For example, 1000 messages land on an SQS queue at once. The SQS source consumes them as fast as it can, and the goal is to deliver them to a Service, but the service can handle at most 10 messages in parallel.
Solution
One idea is to implement concurrency control on Triggers, letting you specify the maximum number of non-acknowledged in-flight requests at any given time. Once the limit is reached, the Trigger waits for an event to be either acknowledged by the target or dropped before delivering a new one. This would apply to both the MemoryBroker and the RedisBroker, each with its own fault tolerance guarantees.
The benefit of this solution is that, because it lives in the broker, it can be used regardless of which source connector you're using. However, it necessarily requires using a TriggerMesh broker.
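As a rough illustration of the mechanism (not actual TriggerMesh code), the Trigger's delivery loop could use a counting semaphore sized by maxConcurrency: a delivery acquires a slot before being dispatched and releases it once the target acknowledges the event or the event is dropped. Below is a minimal Go sketch under that assumption; event and deliver are hypothetical placeholders for a broker event and the HTTP request to the target.

package main

import (
	"context"
	"fmt"
	"time"
)

// event and deliver are hypothetical stand-ins for a broker event and the
// HTTP request made to the Trigger target.
type event struct{ id int }

func deliver(ctx context.Context, ev event) error {
	time.Sleep(100 * time.Millisecond) // simulate target processing latency
	fmt.Println("delivered", ev.id)
	return nil
}

func main() {
	const maxConcurrency = 10 // the proposed delivery.maxConcurrency value

	// Buffered channel used as a counting semaphore: at most maxConcurrency
	// deliveries are in flight (not yet acknowledged or dropped) at once.
	sem := make(chan struct{}, maxConcurrency)
	ctx := context.Background()

	for i := 0; i < 1000; i++ {
		ev := event{id: i}

		sem <- struct{}{} // blocks once the in-flight limit is reached
		go func(ev event) {
			defer func() { <-sem }() // release the slot on ack or drop
			_ = deliver(ctx, ev)
		}(ev)
	}

	// Wait for the remaining in-flight deliveries before exiting.
	for i := 0; i < maxConcurrency; i++ {
		sem <- struct{}{}
	}
}

With a buffered channel of capacity 10, the send on sem blocks as soon as 10 deliveries are unacknowledged, which matches the behavior described above.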
An example of the Trigger configuration could be something like this, which includes the new maxConcurrency parameter that I've tentatively added to the delivery section, alongside retries and dead-letter sink:
apiVersion: eventing.triggermesh.io/v1alpha1
kind: Trigger
metadata:
  name: broker-to-service
spec:
  broker:
    group: eventing.triggermesh.io
    kind: MemoryBroker
    name: mybroker
  target:
    ref:
      apiVersion: v1
      kind: Service
      name: my-service
  delivery:
    maxConcurrency: 10
    deadLetterSink:
      ref:
        apiVersion: v1
        kind: Service
        name: dead-letter-service
    backoffDelay: "PT0.5S" # ISO 8601 duration
    backoffPolicy: exponential # exponential or linear
    retry: 2

An alternative solution would be to implement this on the source connectors themselves (or in both places), but that is more costly, as it requires updating every source component to reach a consistent feature set.