fix(kafkareceiver): enforce a backoff mechanism on exporterhelper.ErrQueueIsFull error #39581
Conversation
Please also resolve the conflicts.
```diff
@@ -851,7 +852,8 @@ func newExponentialBackOff(config configretry.BackOffConfig) *backoff.Exponentia
 }
 
 func errorRequiresBackoff(err error) bool {
-	return err.Error() == errMemoryLimiterDataRefused.Error()
+	return err.Error() == errMemoryLimiterDataRefused.Error() ||
```
Can this use either `errors.Is` or `errors.As` instead of comparing the strings?
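For context, the practical difference is roughly the following (a sketch, not the PR's code, assuming the exported `exporterhelper.ErrQueueIsFull` sentinel named in the PR title):

```go
package main

import (
	"errors"
	"fmt"

	"go.opentelemetry.io/collector/exporter/exporterhelper"
)

func main() {
	// A queue-full error that has been wrapped with extra context.
	wrapped := fmt.Errorf("consume failed: %w", exporterhelper.ErrQueueIsFull)

	// errors.Is unwraps and still matches the sentinel...
	fmt.Println(errors.Is(wrapped, exporterhelper.ErrQueueIsFull)) // true
	// ...whereas comparing error strings does not.
	fmt.Println(wrapped.Error() == exporterhelper.ErrQueueIsFull.Error()) // false
}
```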
Unfortunately, I don't see how this can be done without changes in opentelemetry-collector, since the memory limiter error lives in an internal package: https://github.com/open-telemetry/opentelemetry-collector/blob/main/internal/memorylimiter/memorylimiter.go#L28
Moving to draft while this is being worked on.
@an-mmx did you consider updating the logic to retry on any error that is not considered permanent, like the exporterhelper retry sender? https://github.com/open-telemetry/opentelemetry-collector/blob/d020c9074f873c54e8ae7b8eaa4a08e13157cb76/exporter/exporterhelper/internal/retry_sender.go#L96-L99
I think that would be the ideal solution
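For reference, a rough sketch of that alternative, mirroring the check in the linked retry sender (names here are illustrative, not the PR's actual code):

```go
package kafkareceiver

import "go.opentelemetry.io/collector/consumer/consumererror"

// Sketch of the suggested direction: back off on any error that is not
// marked permanent, instead of matching specific error values.
func errorRequiresBackoff(err error) bool {
	// Permanent errors (e.g. rejected or malformed data) will never succeed
	// on retry, so only transient failures such as a full sending queue or a
	// memory limiter refusal should trigger backoff.
	return !consumererror.IsPermanent(err)
}
```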
I hadn’t considered this approach before.
Description
In the current implementation, the backoff mechanism is triggered only by the memory limiter error. However, the same behavior is expected for the `exporterhelper.ErrQueueIsFull` error, which occurs when the sending queue overflows (both the memory queue and the persistent queue).
Link to tracking issue
#39580
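For illustration, a minimal sketch of the extended check, assuming the exported `exporterhelper.ErrQueueIsFull` sentinel named in the title and the receiver's existing `errMemoryLimiterDataRefused` variable (its message here is assumed to mirror the collector's internal memory limiter error):

```go
package kafkareceiver

import (
	"errors"

	"go.opentelemetry.io/collector/exporter/exporterhelper"
)

// Assumed to mirror the collector's internal memory limiter error, which
// cannot be imported directly from its internal package.
var errMemoryLimiterDataRefused = errors.New("data refused due to high memory usage")

// errorRequiresBackoff reports whether message consumption should pause and
// retry with backoff: either the memory limiter refused the data or the
// exporter's sending queue is full.
func errorRequiresBackoff(err error) bool {
	return err.Error() == errMemoryLimiterDataRefused.Error() ||
		errors.Is(err, exporterhelper.ErrQueueIsFull)
}
```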
Testing
Unit test coverage added.
Documentation
No documentation updated