Rework span record benchmark and publish results #8031
Conversation
| /**
|  * The number of record operations per benchmark invocation. By using a constant across benchmarks
|  * of different signals, it's easier to compare benchmark results across signals.
If the span, metric, and log benchmarks all record the same number of operations per benchmark invocation, we can see the relative cost of spans vs. logs vs. metrics. Even though it will never be a perfect apples-to-apples comparison, it's still useful to know the order-of-magnitude cost of the different signals.
Make sense?
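For illustration, here's a sketch (plain Java; the class name, constant value, and timings are assumptions for this example, not the PR's actual code or results) of how a shared records-per-invocation constant lets per-invocation timings be normalized into per-record costs that are comparable across signals:

```java
public class PerRecordCost {
    // Assumed value; the PR defines its own constant shared across signal benchmarks.
    static final int RECORDS_PER_INVOCATION = 1000;

    // Normalize a benchmark's time-per-invocation into time-per-record,
    // so span, log, and metric results can be compared on the same scale.
    static double nanosPerRecord(double nanosPerInvocation) {
        return nanosPerInvocation / RECORDS_PER_INVOCATION;
    }

    public static void main(String[] args) {
        // Purely illustrative timings, not real benchmark results.
        System.out.println(nanosPerRecord(500_000)); // 500.0 ns per span
        System.out.println(nanosPerRecord(50_000));  // 50.0 ns per log record
    }
}
```

In JMH this normalization is typically expressed by annotating each benchmark method with `@OperationsPerInvocation(RECORDS_PER_INVOCATION)`, so the harness itself reports per-record time.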
| /**
|  * Notes on interpreting the data:
|  * This benchmark measures the performance of recording metrics and includes the following
|  * dimensions:
One of my initial concerns with public benchmarks was that they need to be contextualized.
To address this, I'd like to:
- Put some effort into making the javadoc for our public benchmarks up to date and useful
- Update the benchmark static webpage to link to the relevant javadoc for each benchmark
|  * BatchSpanProcessor} paired with a noop {@link SpanExporter}. In order to avoid quickly outpacing
|  * the batch processor queue and dropping spans, the processor is configured with a queue size of
|  * {@link SpanRecordBenchmark#RECORDS_PER_INVOCATION} * {@link SpanRecordBenchmark#MAX_THREADS} and
|  * is flushed after each invocation.
This is a key aspect of a useful span record benchmark (and log record benchmark) IMO. We need to isolate the benchmark from the export path, which is noisy due to its network dependency, while still being realistic. My definition of realistic is a batch span processor plus a harness that ensures spans aren't just being dropped on the floor from a full queue.
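The queue-sizing math can be sketched with a plain `java.util.concurrent.ArrayBlockingQueue` standing in for the batch processor's internal queue (the constants and class name here are assumed for the example, not taken from the PR):

```java
import java.util.concurrent.ArrayBlockingQueue;

public class QueueSizingSketch {
    // Assumed values; the benchmark defines its own constants.
    static final int RECORDS_PER_INVOCATION = 1000;
    static final int MAX_THREADS = 4;

    // Simulate every thread recording a full invocation's worth of spans into a
    // queue sized RECORDS_PER_INVOCATION * MAX_THREADS; returns how many offers failed.
    static int simulateDrops() {
        ArrayBlockingQueue<Object> queue =
            new ArrayBlockingQueue<>(RECORDS_PER_INVOCATION * MAX_THREADS);
        int dropped = 0;
        for (int t = 0; t < MAX_THREADS; t++) {
            for (int i = 0; i < RECORDS_PER_INVOCATION; i++) {
                if (!queue.offer(new Object())) {
                    dropped++;
                }
            }
        }
        queue.clear(); // analogous to flushing the processor after each invocation
        return dropped;
    }

    public static void main(String[] args) {
        System.out.println(simulateDrops()); // prints 0: no spans are dropped
    }
}
```

Because the queue holds one full invocation from every thread, the worst case fits exactly, and flushing between invocations resets it, so measured time reflects recording cost rather than drop behavior.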
| }
| }
|
| public enum SpanSize {
Check this out: if we have individual parameters for the number of attributes, number of events, and number of links, we end up with a combinatorial explosion and a lot of noise. What we really want to characterize is the performance of different sizes of spans, where a size is a composite of several dimensions.
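A sketch of what such a composite enum could look like (the tier names and per-tier counts here are hypothetical, not the PR's actual values):

```java
public enum SpanSize {
    // Hypothetical tiers bundling several dimensions into one benchmark parameter.
    SMALL(2, 0, 0),
    MEDIUM(8, 2, 1),
    LARGE(24, 8, 4);

    final int attributeCount;
    final int eventCount;
    final int linkCount;

    SpanSize(int attributeCount, int eventCount, int linkCount) {
        this.attributeCount = attributeCount;
        this.eventCount = eventCount;
        this.linkCount = linkCount;
    }

    public static void main(String[] args) {
        // Three benchmark cases instead of a full cross product of three parameters.
        System.out.println(values().length); // prints 3
    }
}
```

With JMH, a single `@Param` over this enum yields three runs instead of the cross product of three independent `@Param` axes.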
Codecov Report

✅ All modified and coverable lines are covered by tests.

@@ Coverage Diff @@
##               main    #8031   +/-   ##
=========================================
  Coverage     90.16%   90.16%
  Complexity     7484     7484
=========================================
  Files           836      836
  Lines         22562    22562
  Branches       2237     2237
=========================================
  Hits          20344    20344
  Misses         1515     1515
  Partials        703      703
Followup to #8000