[RFC] feat: change Evaluator.evaluate() to return list[EvaluationOutput] #23
Merged
jjbuck merged 1 commit into strands-agents:main on Nov 2, 2025
Conversation
BREAKING CHANGE: Evaluator.evaluate() and evaluate_async() now return list[EvaluationOutput] instead of a single EvaluationOutput to support multi-metric evaluation scenarios.

- Add aggregator property to Evaluator base class with default mean aggregation
- Update all evaluator implementations to return lists
- InteractionsEvaluator now returns all intermediate evaluations instead of only the last
- Add detailed_results field to EvaluationReport for drill-down into individual metrics
- Update display to show detailed metrics tree when cases are expanded
- Dataset aggregates multiple outputs per case using evaluator's aggregator function
poshinchen approved these changes on Nov 1, 2025
Description
BREAKING CHANGE: `Evaluator.evaluate()` and `evaluate_async()` now return `list[EvaluationOutput]` instead of a single `EvaluationOutput` to support multi-metric evaluation scenarios.

- Add an `aggregator` property to the `Evaluator` base class with default mean aggregation
- `InteractionsEvaluator` now returns all intermediate evaluations instead of only the last
- Add a `detailed_results` field to `EvaluationReport` for drill-down into individual metrics

Motivation
The current evaluator interface assumes a 1:1 relationship between test cases and evaluation metrics. However, many real-world evaluation scenarios produce multiple metrics per test case. For example, evaluating tool parameter accuracy across a multi-turn conversation should produce one metric per turn, not a single aggregate score. Similarly, the `InteractionsEvaluator` was already evaluating each interaction individually but discarding all intermediate results except the last one.
This change makes the evaluator interface more expressive by returning a list of metrics. While this is a breaking change to the return type, the evaluation logic itself remains unchanged: existing evaluators simply wrap their single output in a list. The `Dataset` layer handles aggregation transparently, so the `EvaluationReport` structure (one score per case) stays intact.
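As a rough illustration of how mechanical the migration is, consider a hypothetical single-metric evaluator (the subclass name and the `score`/`passed`/`reason` fields below are invented for this sketch, not taken from the actual data model):

```python
# Hypothetical evaluator subclass; field names are assumed for illustration.
class ExactMatchEvaluator(Evaluator):
    def evaluate(self, case) -> list[EvaluationOutput]:
        output = EvaluationOutput(score=1.0, passed=True, reason="exact match")
        return [output]  # before this change: return output
```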
Each evaluator now has a configurable `aggregator` function that determines how multiple metrics combine into a case-level score. The default aggregator computes the mean of the scores, requires all metrics to pass for the case to pass, and concatenates reasons with " | " separators. Evaluators can override this with custom aggregation logic (e.g., min, max, weighted average) to match their specific semantics. Detailed individual metrics are preserved in `EvaluationReport.detailed_results` for drill-down analysis.

The attached figure illustrates the modifications to the relevant data models.
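To make the default aggregation semantics concrete, here is a minimal standalone sketch, assuming `EvaluationOutput` carries `score`, `passed`, and `reason` fields (the real model may differ):

```python
from dataclasses import dataclass


@dataclass
class EvaluationOutput:
    # Stand-in for the real data model; field names are assumed for this sketch.
    score: float
    passed: bool
    reason: str


def default_aggregator(outputs: list[EvaluationOutput]) -> EvaluationOutput:
    """Collapse per-metric outputs into one case-level result: mean score,
    pass only if every metric passed, reasons joined with ' | '.
    Assumes at least one output per case."""
    return EvaluationOutput(
        score=sum(o.score for o in outputs) / len(outputs),
        passed=all(o.passed for o in outputs),
        reason=" | ".join(o.reason for o in outputs),
    )


# Example: two per-turn metrics collapse to score 0.75, passed=False.
case_metrics = [
    EvaluationOutput(score=1.0, passed=True, reason="turn 1: correct tool args"),
    EvaluationOutput(score=0.5, passed=False, reason="turn 2: missing parameter"),
]
print(default_aggregator(case_metrics))
```

An evaluator whose weakest turn should determine the case outcome could swap in a min-based aggregator instead.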
Related Issues
N/A
Documentation PR
N/A
Type of Change
Breaking change
Testing
- Ran `pytest` after updating all affected unit tests.
- Ran `hatch run prepare`.

Checklist
By submitting this pull request, I confirm that you can use, modify, copy, and redistribute this contribution, under the terms of your choice.