Sentry Concurrency Limits #932

@bmteller

Description

Problem Statement

What we want is to be able to use send_event type :none without risking blowing up the node because too many asynchronous errors are created. At the moment, with send_event type :none, there are two places where events can queue up if they are not processed fast enough, eventually leading to an out-of-memory error. The first is the hackney pool: if no connections are available because they are all busy processing requests, requests start queuing inside hackney. The second is Transport.Sender: each sender processes only one request at a time, so if the incoming event rate is higher than what the Transport.Senders can process, a backlog forms in their message queues.
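To make the second failure mode concrete, here is a minimal sketch (the `SlowSender` module is hypothetical, not Sentry's actual code) of how a one-at-a-time GenServer accumulates an unbounded mailbox when casts arrive faster than it drains them:

```elixir
defmodule SlowSender do
  use GenServer

  def start_link(_), do: GenServer.start_link(__MODULE__, nil, name: __MODULE__)

  @impl true
  def init(nil), do: {:ok, nil}

  # Each cast takes 100 ms to "send", so any sustained rate above
  # 10 events/s piles up in the process mailbox, and memory grows
  # without bound -- the same shape as the Transport.Sender backlog.
  @impl true
  def handle_cast({:send_event, _event}, state) do
    Process.sleep(100)
    {:noreply, state}
  end
end

# Flood the sender; the mailbox length shows the backlog.
{:ok, pid} = SlowSender.start_link([])
for i <- 1..1_000, do: GenServer.cast(SlowSender, {:send_event, i})
{:message_queue_len, backlog} = Process.info(pid, :message_queue_len)
# backlog will be large here, since almost none of the 1_000 casts
# have been processed yet.
```

Nothing in this loop ever blocks or drops, which is exactly why the only guard today is the node's memory.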

Ideally we would like to either start dropping messages once queues begin to form, or cap the queue at a size that cannot run the node out of memory. One option for us is to use send_event_type :sync and wrap calls to Sentry.capture_message and Sentry.capture_exception in some kind of semaphore that limits concurrency to X, where X is an amount we can manage. This is fine, but it also means we have to rewrite Sentry.LoggerBackend to call our wrapper instead of Sentry.
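The semaphore wrapper described above could look something like this. It is only a sketch: `SentryLimiter`, `@max_in_flight`, and the drop-on-overflow policy are all assumptions, not part of the Sentry API. It uses an `:atomics` counter so the check itself cannot become another bottleneck:

```elixir
defmodule SentryLimiter do
  # Hypothetical wrapper: allow at most @max_in_flight concurrent
  # captures and drop the rest, so no queue can grow without bound.
  @max_in_flight 10

  def start do
    # One shared atomics ref, published via :persistent_term; slot 1
    # holds the number of captures currently in flight.
    :persistent_term.put(__MODULE__, :atomics.new(1, signed: true))
  end

  def capture(fun) when is_function(fun, 0) do
    ref = :persistent_term.get(__MODULE__)

    if :atomics.add_get(ref, 1, 1) <= @max_in_flight do
      try do
        {:ok, fun.()}
      after
        :atomics.sub(ref, 1, 1)
      end
    else
      # Over the limit: undo the increment and drop the event.
      :atomics.sub(ref, 1, 1)
      :dropped
    end
  end
end
```

Call sites would then go through the wrapper, e.g. `SentryLimiter.capture(fn -> Sentry.capture_message("boom") end)` — which is precisely what Sentry.LoggerBackend would also need to do, hence the rewrite problem above.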

Solution Brainstorm

So I'm wondering whether some kind of concurrency limit is something Sentry would be willing to add to the library. Alternatively, Sentry.LoggerBackend could accept an alternative, wrapped Sentry implementation to call instead.

Metadata

Assignees: No one assigned
Projects / Status: Waiting for: Product Owner
Milestone: No milestone
Relationships: None yet
Development: No branches or pull requests
