Fix appendOrMergeRPC inefficiency in message size recalculation #582
Conversation
Generally looks good. Make sure to run the Fuzz tests as well. A benchmark might also be helpful here.
The Fuzz tests seem to work almost too well: they find broken input even on the master branch (rerunning makes them pass, though):
The same happens on my feature branch, so I can't tell whether it is a new or a pre-existing issue. I explained where the "1+" comes from (it is the field key size from the protobuf generator) and added a benchmark.
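To make the "1+" concrete: in generated protobuf `Size()` methods, each element of a repeated message field contributes one key byte (for field numbers below 16) plus a varint-encoded length prefix. A minimal standalone sketch (helper names here are illustrative, not the actual generated ones):

```go
package main

import "fmt"

// sovLen mirrors the generated varint-size helper (sov*): the number of
// bytes needed to encode v as an unsigned varint.
func sovLen(v uint64) int {
	n := 1
	for v >= 0x80 {
		v >>= 7
		n++
	}
	return n
}

// msgOverhead is the per-message framing cost the generated Size() adds for
// a repeated message field with a single-byte key: the leading "1+" is the
// field key byte, and sovLen covers the length prefix.
func msgOverhead(msgLen int) int {
	return 1 + sovLen(uint64(msgLen))
}

func main() {
	fmt.Println(msgOverhead(10))  // 1 key byte + 1 length byte
	fmt.Println(msgOverhead(300)) // 1 key byte + 2 length bytes
}
```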
@MarcoPolo could you retrigger the testing job? TestMessageBatchPublish timed out, and that has never happened on my local machines.
Okay, it is
A couple of things to respond to:
Absolutely!
Sent an email with details.
}
return RPC{
	RPC: pb.RPC{
		Publish: msgs,
In your workload, do you see RPCs being split primarily due to many messages in a single RPC? I ask because we could add some optimizations if so.
Summary
As discussed in #581, there is an inefficiency in `appendOrMergeRPC`: it calls `Size()` more times than needed.
Fix
Instead of calling `lastRPC.Size()`, which iterates over all of `RPC.Publish`, save the last known size and add the current message's size plus the protobuf upper-bound overhead.
Status and Evaluation
Benchmark results