ci: Add comparison and visualization workflow for agent performance testing in CI #3496
Merged
nr-ahemsath merged 9 commits into main from ci/performance-test-visualization on Mar 20, 2026
Conversation
1. Change the agent health check to look only for "agent fully connected", so the agent log level no longer has to be set to DEBUG.
2. Don't dump traffic driver and test app logs by default.
3. Parameterize the Python executable used to clean up the test output (part of follow-on work to make running comparisons locally convenient).
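The simplified health check described in the first item could be sketched as follows. This is an illustration only, not the PR's actual implementation: the log path, polling interval, and timeout are assumptions, and the real check runs inside the GitHub Actions workflow.

```python
# Hypothetical sketch: poll the agent log for the "agent fully connected"
# message, which is emitted at the default log level, instead of requiring
# DEBUG-level logging. File name and timeout values are assumptions.
import time
from pathlib import Path

def wait_for_agent_connected(log_path: str, timeout_s: float = 60.0,
                             marker: str = "agent fully connected") -> bool:
    """Return True once the marker appears in the log, False on timeout."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        log = Path(log_path)
        if log.exists() and marker in log.read_text(errors="ignore"):
            return True
        time.sleep(0.5)
    return False
```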
…ic/newrelic-dotnet-agent into ci/performance-test-visualization
Codecov Report ✅ All modified and coverable lines are covered by tests.

```
@@           Coverage Diff            @@
##            main    #3496      +/-  ##
=========================================
- Coverage   81.79%   81.77%   -0.02%
=========================================
  Files         508      508
  Lines       34220    34220
  Branches     4040     4040
=========================================
- Hits        27990    27984       -6
- Misses       5265     5269       +4
- Partials      965      967       +2
```
Flags with carried forward coverage won't be shown.
nrcventura approved these changes on Mar 19, 2026.
Description
Implements #3308, adding the ability to run up to four performance tests with different agent versions (plus a no-agent baseline) and produce a summary table and charts for easy comparison of results.
Here is an example of the workflow, comparing the latest release (10.50.0) with the latest overnight all_solutions build (which should actually be identical, but this is just a demo): https://github.com/newrelic/newrelic-dotnet-agent/actions/runs/23308138260
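As an illustration of the comparison summary the workflow produces, the sketch below computes each scenario's throughput relative to the no-agent baseline. Note that the PR's actual report generation is a .NET `ReportGenerator` process using ScottPlot; this Python version, including the metric name (requests/sec) and the result structure, is purely an assumed stand-in for the underlying comparison math.

```python
# Illustrative only: the real ReportGenerator in this PR is a .NET process
# using ScottPlot. This sketch shows the assumed comparison step for the
# summary table: per-scenario throughput vs. a no-agent baseline.
def summarize(results: dict, baseline_label: str = "no-agent") -> list:
    """results maps scenario label -> requests/sec (hypothetical metric).

    Returns formatted table rows with percent difference from baseline.
    """
    baseline = results[baseline_label]
    rows = [f"{'scenario':<12} {'req/s':>10} {'vs baseline':>12}"]
    for label, rps in results.items():
        delta = 100.0 * (rps - baseline) / baseline
        rows.append(f"{label:<12} {rps:>10.1f} {delta:>+11.1f}%")
    return rows

# Usage with made-up numbers (up to four scenarios plus the baseline):
for row in summarize({"no-agent": 5200.0, "10.50.0": 4950.0,
                      "latest-main": 4975.0}):
    print(row)
```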
Details:
- The `performance_tests.yml` workflow (which ran a single performance test) was refactored into an action at `.github/actions/run-perf-test/action.yml`.
- `compare_performance.yml` is added to run up to four test scenarios, with the option to include a no-agent baseline (defaulting to true). Each run takes a label and a string which can be either an agent version number or a GitHub Actions workflow run ID, which should be the run ID of an `all_solutions.yml` run from which to obtain the agent bits. If the input string is blank, it defaults to the latest released agent version.
- A `ReportGenerator` process that uses ScottPlot.

There will be a follow-up PR or PRs which will do (at least) the following: