[Profiling] Add support for variable sampling frequency #128086
Merged: rockdaboot merged 4 commits into elastic:main from rockdaboot:profiling-variable-sampling-frequency on May 21, 2025.
Commits (4):
- 8b4139d [Profiling] Add support for variable sampling frequency (rockdaboot)
- a8cd7d4 Update x-pack/plugin/profiling/src/main/java/org/elasticsearch/xpack/… (rockdaboot)
- cc8454c Add comments and remove superfluous debug log (rockdaboot)
- 7470ed7 Merge branch 'main' into profiling-variable-sampling-frequency (rockdaboot)
@@ -115,6 +115,12 @@ public class TransportGetStackTracesAction extends TransportAction<GetStackTrace
      */
     private static final String CUSTOM_EVENT_SUB_AGGREGATION_NAME = "custom_event_group";
 
+    /**
+     * This is the default sampling rate for profiling events that we use if no sampling rate is
+     * stored in the backend (backwards compatibility).
+     */
+    public static final double DEFAULT_SAMPLING_FREQUENCY = 19.0d;
+
     private final NodeClient nodeClient;
     private final ProfilingLicenseChecker licenseChecker;
     private final ClusterService clusterService;
@@ -249,7 +255,6 @@ private void searchGenericEventGroupedByStackTrace(
         ActionListener<GetStackTracesResponse> submitListener,
         GetStackTracesResponseBuilder responseBuilder
     ) {
-
         CountedTermsAggregationBuilder groupByStackTraceId = new CountedTermsAggregationBuilder("group_by").size(
             MAX_TRACE_EVENTS_RESULT_SIZE
         ).field(request.getStackTraceIdsField());
@@ -286,7 +291,7 @@ private void searchGenericEventGroupedByStackTrace(
 
                     String stackTraceID = stacktraceBucket.getKeyAsString();
 
-                    TraceEventID eventID = new TraceEventID("", "", "", stackTraceID);
+                    TraceEventID eventID = new TraceEventID("", "", "", stackTraceID, DEFAULT_SAMPLING_FREQUENCY);
                     TraceEvent event = stackTraceEvents.computeIfAbsent(eventID, k -> new TraceEvent());
                     event.count += count;
                     subGroups.collectResults(stacktraceBucket, event);
@@ -337,6 +342,16 @@ private void searchEventGroupedByStackTrace(
             // Especially with high cardinality fields, this makes aggregations really slow.
             .executionHint("map")
             .subAggregation(groupByHostId);
+        TermsAggregationBuilder groupByExecutableName = new TermsAggregationBuilder("group_by")
+            // 'size' specifies the max number of host IDs we support per request.
+            .size(MAX_TRACE_EVENTS_RESULT_SIZE)
+            .field("process.executable.name")
+            // missing("") is used to include documents where the field is missing.
+            .missing("")
+            // 'execution_hint: map' skips the slow building of ordinals that we don't need.
+            // Especially with high cardinality fields, this makes aggregations really slow.
+            .executionHint("map")
+            .subAggregation(groupByThreadName);
         SubGroupCollector subGroups = SubGroupCollector.attach(groupByStackTraceId, request.getAggregationFields());
         client.prepareSearch(eventsIndex.getName())
             .setTrackTotalHits(false)
@@ -351,53 +366,89 @@ private void searchEventGroupedByStackTrace(
                 new TermsAggregationBuilder("group_by")
                     // 'size' specifies the max number of host ID we support per request.
                     .size(MAX_TRACE_EVENTS_RESULT_SIZE)
-                    .field("process.executable.name")
-                    // missing("") is used to include documents where the field is missing.
-                    .missing("")
+                    .field("Stacktrace.sampling_frequency")
+                    // missing(DEFAULT_SAMPLING_RATE) is used to include documents where the field is missing.
+                    .missing((long) DEFAULT_SAMPLING_FREQUENCY)
                     // 'execution_hint: map' skips the slow building of ordinals that we don't need.
                     // Especially with high cardinality fields, this makes aggregations really slow.
                     .executionHint("map")
-                    .subAggregation(groupByThreadName)
+                    .subAggregation(groupByExecutableName)
+                    .subAggregation(new SumAggregationBuilder("total_count").field("Stacktrace.count"))
             )
-            .addAggregation(new SumAggregationBuilder("total_count").field("Stacktrace.count"))
             .execute(handleEventsGroupedByStackTrace(submitTask, client, responseBuilder, submitListener, searchResponse -> {
-                long totalCount = getAggValueAsLong(searchResponse, "total_count");
+                // The count values for events are scaled up to the highest sampling frequency.
+                // For example, if the highest sampling frequency is 100, an event with frequency=20 and count=1
+                // will be upscaled to count=5 (100/20 * count).
+                // For this, we need to find the highest frequency in the result set.
+                long maxSamplingFrequency = 0;
+                Terms samplingFrequencies = searchResponse.getAggregations().get("group_by");
+                for (Terms.Bucket samplingFrequencyBucket : samplingFrequencies.getBuckets()) {
+                    final double samplingFrequency = samplingFrequencyBucket.getKeyAsNumber().doubleValue();
+                    if (samplingFrequency > maxSamplingFrequency) {
+                        maxSamplingFrequency = (long) samplingFrequency;
+                    }
+                }
+
+                // Calculate a scaled-up total count (scaled up to the highest sampling frequency).
+                long totalCount = 0;
+                for (Terms.Bucket samplingFrequencyBucket : samplingFrequencies.getBuckets()) {
+                    InternalNumericMetricsAggregation.SingleValue count = samplingFrequencyBucket.getAggregations().get("total_count");
+                    final double samplingFrequency = samplingFrequencyBucket.getKeyAsNumber().doubleValue();
+                    final double samplingFactor = maxSamplingFrequency / samplingFrequency;
+                    totalCount += Math.round(count.value() * samplingFactor);
+                }
+
                 Resampler resampler = new Resampler(request, responseBuilder.getSamplingRate(), totalCount);
 
                 // Sort items lexicographically to access Lucene's term dictionary more efficiently when issuing an mget request.
-                // The term dictionary is lexicographically sorted and using the same order reduces the number of page faults
+                // The term dictionary is lexicographically sorted, and using the same order reduces the number of page faults
                 // needed to load it.
                 long totalFinalCount = 0;
                 Map<TraceEventID, TraceEvent> stackTraceEvents = new HashMap<>(MAX_TRACE_EVENTS_RESULT_SIZE);
 
-                Terms executableNames = searchResponse.getAggregations().get("group_by");
-                for (Terms.Bucket executableBucket : executableNames.getBuckets()) {
-                    String executableName = executableBucket.getKeyAsString();
-
-                    Terms threads = executableBucket.getAggregations().get("group_by");
-                    for (Terms.Bucket threadBucket : threads.getBuckets()) {
-                        String threadName = threadBucket.getKeyAsString();
-
-                        Terms hosts = threadBucket.getAggregations().get("group_by");
-                        for (Terms.Bucket hostBucket : hosts.getBuckets()) {
-                            String hostID = hostBucket.getKeyAsString();
-
-                            Terms stacktraces = hostBucket.getAggregations().get("group_by");
-                            for (Terms.Bucket stacktraceBucket : stacktraces.getBuckets()) {
-                                Sum count = stacktraceBucket.getAggregations().get("count");
-                                int finalCount = resampler.adjustSampleCount((int) count.value());
-                                if (finalCount <= 0) {
-                                    continue;
-                                }
-                                totalFinalCount += finalCount;
-
-                                String stackTraceID = stacktraceBucket.getKeyAsString();
-
-                                TraceEventID eventID = new TraceEventID(executableName, threadName, hostID, stackTraceID);
-                                TraceEvent event = stackTraceEvents.computeIfAbsent(eventID, k -> new TraceEvent());
-                                event.count += finalCount;
-                                subGroups.collectResults(stacktraceBucket, event);
-                            }
+                // Walk over all nested aggregations.
+                // The outermost aggregation is the sampling frequency.
+                // The next level is the executable name, followed by the thread name, host ID and stacktrace ID.
+                // the innermost aggregation contains the count of samples for each stacktrace ID.
+                for (Terms.Bucket samplingFrequencyBucket : samplingFrequencies.getBuckets()) {
+                    final double samplingFrequency = samplingFrequencyBucket.getKeyAsNumber().doubleValue();
+                    final double samplingFactor = maxSamplingFrequency / samplingFrequency;
+
+                    Terms executableNames = samplingFrequencyBucket.getAggregations().get("group_by");
+                    for (Terms.Bucket executableBucket : executableNames.getBuckets()) {
+                        String executableName = executableBucket.getKeyAsString();
+
+                        Terms threads = executableBucket.getAggregations().get("group_by");
+                        for (Terms.Bucket threadBucket : threads.getBuckets()) {
+                            String threadName = threadBucket.getKeyAsString();
+
+                            Terms hosts = threadBucket.getAggregations().get("group_by");
+                            for (Terms.Bucket hostBucket : hosts.getBuckets()) {
+                                String hostID = hostBucket.getKeyAsString();
+
+                                Terms stacktraces = hostBucket.getAggregations().get("group_by");
+                                for (Terms.Bucket stacktraceBucket : stacktraces.getBuckets()) {
+                                    Sum count = stacktraceBucket.getAggregations().get("count");
+                                    int finalCount = resampler.adjustSampleCount((int) Math.round(count.value() * samplingFactor));
+                                    if (finalCount <= 0) {
+                                        continue;
+                                    }
+                                    totalFinalCount += finalCount;
+
+                                    String stackTraceID = stacktraceBucket.getKeyAsString();
+                                    TraceEventID eventID = new TraceEventID(
+                                        executableName,
+                                        threadName,
+                                        hostID,
+                                        stackTraceID,
+                                        maxSamplingFrequency
+                                    );
+                                    TraceEvent event = stackTraceEvents.computeIfAbsent(eventID, k -> new TraceEvent());
+                                    event.count += finalCount;
+                                    subGroups.collectResults(stacktraceBucket, event);
+                                }
+                            }
                         }
                     }
                 }

Inline review comment (rockdaboot) on `.missing((long) DEFAULT_SAMPLING_FREQUENCY)`: "This allows compatibility with old data that doesn't have the […]"
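The upscaling performed in the hunk above can be illustrated in isolation. The following is a minimal, self-contained sketch (hypothetical names, not the PR's code) of normalizing per-frequency sample counts to the highest sampling frequency in the result set, as the diff's comment describes (max frequency 100, an event with frequency=20 and count=1 is upscaled to 100/20 * 1 = 5):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class UpscaleSketch {
    // Scale each per-frequency count up to the highest sampling frequency,
    // then sum the scaled counts, mirroring the two loops in the diff.
    static long upscaledTotal(Map<Long, Long> countsByFrequency) {
        long maxFrequency = 0;
        for (long frequency : countsByFrequency.keySet()) {
            if (frequency > maxFrequency) {
                maxFrequency = frequency;
            }
        }
        long total = 0;
        for (Map.Entry<Long, Long> e : countsByFrequency.entrySet()) {
            double samplingFactor = (double) maxFrequency / e.getKey();
            total += Math.round(e.getValue() * samplingFactor);
        }
        return total;
    }

    public static void main(String[] args) {
        Map<Long, Long> counts = new LinkedHashMap<>();
        counts.put(100L, 3L); // already at the highest frequency: factor 1
        counts.put(20L, 1L);  // upscaled by 100/20 = 5
        System.out.println(upscaledTotal(counts)); // prints 8
    }
}
```

Note the real code additionally passes each scaled count through the `Resampler` before accumulating, which this sketch omits.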
@@ -629,8 +680,8 @@ public void calculateCO2AndCosts() {
         );
 
         responseBuilder.getStackTraceEvents().forEach((eventId, event) -> {
-            event.annualCO2Tons += co2Calculator.getAnnualCO2Tons(eventId.hostID(), event.count);
-            event.annualCostsUSD += costCalculator.annualCostsUSD(eventId.hostID(), event.count);
+            event.annualCO2Tons += co2Calculator.getAnnualCO2Tons(eventId.hostID(), event.count, eventId.samplingFrequency());
+            event.annualCostsUSD += costCalculator.annualCostsUSD(eventId.hostID(), event.count, eventId.samplingFrequency());
         });
 
         log.debug(watch::report);
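The hunk above threads the sampling frequency into the CO2 and cost calculators. The underlying reason is simple arithmetic: a sample taken at frequency f represents roughly 1/f seconds of CPU time, so the same sample count implies different CPU usage at different frequencies. A sketch of that relationship (illustrative only, not the calculators' actual code):

```java
public class SampleTimeSketch {
    // Hypothetical illustration: CPU-seconds represented by `count` samples
    // taken at `samplingFrequency` samples per second.
    static double cpuSeconds(long count, double samplingFrequency) {
        return count / samplingFrequency;
    }

    public static void main(String[] args) {
        // 190 samples at the 19 Hz default represent ~10 CPU-seconds...
        System.out.println(cpuSeconds(190, 19.0));
        // ...while the same 190 samples at 100 Hz represent only 1.9 CPU-seconds.
        System.out.println(cpuSeconds(190, 100.0));
    }
}
```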
Review comments:

Comment: Using long here to a) avoid a hardly compressible FP type (no use case in sight for non-integer frequencies), and b) long compresses as well as any other integer type (variable-length encoding).

Comment: On second thought, in the real world the frequency values will be very low cardinality. So we might consider using an FP type to be prepared for future enhancements. WDYT?

Reply: In the future we should switch to the same representation as OTel: period, and maybe, if we want to allow more user-friendly queries, a frequency.

Reply: To confirm, we talked about this in a separate zoom. As long as the period isn't defined by semantic conventions, we don't need to store it. Both period and frequency can be transformed into each other on-the-fly, and users expect to use frequency in the KQL filtering UI.
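The on-the-fly transformation mentioned in the last comment is a reciprocal: period = 1 / frequency and vice versa. A minimal sketch (assumed units of seconds and Hz, not Elasticsearch code):

```java
public class PeriodFrequencySketch {
    // frequency (Hz) -> sampling period (seconds); period = 1 / frequency.
    static double periodSeconds(double frequencyHz) {
        return 1.0 / frequencyHz;
    }

    // sampling period (seconds) -> frequency (Hz); frequency = 1 / period.
    static double frequencyHz(double periodSeconds) {
        return 1.0 / periodSeconds;
    }

    public static void main(String[] args) {
        System.out.println(periodSeconds(20.0)); // a 20 Hz profiler samples every 0.05 s
        System.out.println(frequencyHz(0.05));   // and a 0.05 s period is 20 Hz
    }
}
```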