**Tests**: See [tests/test_trackeval_evaluator.py](tests/test_trackeval_evaluator.py) for comprehensive test suite with 16 test cases covering configuration, processing, evaluation, and integration workflows.
### DiagnosticEvaluator
**Purpose**: Per-frame location comparison and error analysis between matched output tracks and ground-truth tracks.
**Status**: **FULLY IMPLEMENTED** - Bipartite track matching with per-frame location and distance CSV/plot outputs.
**Supported Metrics**:
- **LOC_T_X**: Per-frame X position of each matched (output, GT) track pair
- **LOC_T_Y**: Per-frame Y position of each matched (output, GT) track pair
- **DIST_T**: Per-frame Euclidean distance error between each matched pair
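As a rough illustration of these metrics, the per-frame rows for one matched pair can be computed as below. This is a minimal sketch: the `per_frame_metrics` helper, the dict-based track representation, and the column names are assumptions for illustration, not the evaluator's actual API.

```python
import math

def per_frame_metrics(out_track, gt_track):
    """Per-frame LOC/DIST rows for one matched (output, GT) track pair.

    Tracks are assumed to be dicts mapping frame -> (x, y). Frames present
    on only one side yield NaN, preserving the full temporal context.
    """
    rows = []
    for frame in sorted(set(out_track) | set(gt_track)):
        ox, oy = out_track.get(frame, (math.nan, math.nan))
        gx, gy = gt_track.get(frame, (math.nan, math.nan))
        # NaN propagates through the distance when either side is missing
        dist = math.hypot(ox - gx, oy - gy)
        rows.append({"frame": frame,
                     "LOC_T_X_out": ox, "LOC_T_Y_out": oy,
                     "LOC_T_X_gt": gx, "LOC_T_Y_gt": gy,
                     "DIST_T": dist})
    return rows
```

Emitting one row per frame in the union of both tracks (rather than the intersection) is what lets the CSVs show where one side is missing data.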
**Key Features**:
- **Track Matching**: Bipartite assignment (Hungarian algorithm) minimizing mean Euclidean distance over overlapping frames. Requires a minimum of 10 overlapping frames (`MIN_OVERLAP_FRAMES`).
- **Missing Frame Handling**: Frames where only one side (output or GT) has data produce `NaN` in the CSV output, preserving full temporal context.
- **CSV Output**: Per-metric CSV files with column headers.
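The matching step described above can be sketched with `scipy.optimize.linear_sum_assignment` (a common Hungarian-algorithm implementation). The track representation and the `match_tracks` name are illustrative assumptions; the evaluator's actual interfaces may differ.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

MIN_OVERLAP_FRAMES = 10  # minimum overlap required, per the docs above
UNMATCHABLE = 1e9        # sentinel cost for pairs with too little overlap

def match_tracks(out_tracks, gt_tracks):
    """Match output tracks to GT tracks; each arg maps id -> {frame: (x, y)}."""
    out_ids, gt_ids = list(out_tracks), list(gt_tracks)
    cost = np.full((len(out_ids), len(gt_ids)), UNMATCHABLE)
    for i, oid in enumerate(out_ids):
        for j, gid in enumerate(gt_ids):
            common = set(out_tracks[oid]) & set(gt_tracks[gid])
            if len(common) < MIN_OVERLAP_FRAMES:
                continue  # too few overlapping frames: pair cannot match
            dists = [np.hypot(out_tracks[oid][f][0] - gt_tracks[gid][f][0],
                              out_tracks[oid][f][1] - gt_tracks[gid][f][1])
                     for f in common]
            cost[i, j] = np.mean(dists)  # mean Euclidean distance over overlap
    rows, cols = linear_sum_assignment(cost)  # Hungarian assignment
    return [(out_ids[r], gt_ids[c])
            for r, c in zip(rows, cols) if cost[r, c] < UNMATCHABLE]
```

Pairs whose overlap falls below `MIN_OVERLAP_FRAMES` keep the sentinel cost and are dropped from the returned assignment.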
**Tests**: See [tests/test_diagnostic_evaluator.py](tests/test_diagnostic_evaluator.py) for unit tests covering track matching, scalar metrics, CSV output, and reset workflows.
## Adding New Evaluators
To add support for a new metric computation library: