Alertmanager pod msg="dropping messages because too many are queued" #2440

Open
@nmizeb

Description


Hello,

What did you do?

I'm running Alertmanager in a Kubernetes pod. It is connected to Prometheus, Karma, and Kthnxbye to acknowledge alerts.

What did you expect to see?

Normal memory usage, as before.

What did you see instead?

Recently, Alertmanager's memory usage has been increasing linearly.
In the Alertmanager logs I see this message:
level=warn ts=2020-12-17T09:32:04.281Z caller=delegate.go:272 component=cluster msg="dropping messages because too many are queued" current=4100 limit=4096
The code that emits this message (delegate.go):

// handleQueueDepth ensures that the queue doesn't grow unbounded by pruning
// older messages at regular interval.
func (d *delegate) handleQueueDepth() {
	for {
		select {
		case <-d.stopc:
			return
		case <-time.After(15 * time.Minute):
			n := d.bcast.NumQueued()
			if n > maxQueueSize {
				level.Warn(d.logger).Log("msg", "dropping messages because too many are queued", "current", n, "limit", maxQueueSize)
				d.bcast.Prune(maxQueueSize)
				d.messagesPruned.Add(float64(n - maxQueueSize))
			}
		}
	}
}

Please note that there was no change on our side that would explain this increase.
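
To check whether the gossip queue really keeps growing (rather than being pruned back every 15 minutes as in the snippet above), the cluster metrics can be polled directly from the pod. Below is a minimal sketch, not part of Alertmanager: it assumes the /metrics endpoint is reachable on :9093 and that the metric names alertmanager_cluster_messages_queued and alertmanager_cluster_messages_pruned_total (the counter behind d.messagesPruned) match this build; adjust them if your version differs.

// queuewatch.go: poll an Alertmanager /metrics endpoint and print the
// cluster gossip queue metrics so queue depth can be correlated with memory usage.
package main

import (
	"bufio"
	"fmt"
	"net/http"
	"os"
	"strings"
	"time"
)

func main() {
	// Assumed default address; override with the first CLI argument.
	url := "http://localhost:9093/metrics"
	if len(os.Args) > 1 {
		url = os.Args[1]
	}
	for {
		resp, err := http.Get(url)
		if err != nil {
			fmt.Fprintln(os.Stderr, "scrape failed:", err)
		} else {
			// Print only the lines for the queued/pruned gossip message metrics.
			scanner := bufio.NewScanner(resp.Body)
			for scanner.Scan() {
				line := scanner.Text()
				if strings.HasPrefix(line, "alertmanager_cluster_messages_queued") ||
					strings.HasPrefix(line, "alertmanager_cluster_messages_pruned_total") {
					fmt.Println(time.Now().Format(time.RFC3339), line)
				}
			}
			resp.Body.Close()
		}
		time.Sleep(30 * time.Second)
	}
}

If the queued gauge stays pinned around the 4096 limit between prunes, that would match the warning in the logs and the steady memory growth.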

Environment
Alertmanager: v0.21.0
Prometheus: v2.18.2
