Commit 44e4f4f

README
1 parent 238a681 commit 44e4f4f

2 files changed: +51 −51 lines changed


tools/tracker/evaluation/README.md

Lines changed: 1 addition & 1 deletion
@@ -147,8 +147,8 @@ evaluation/
 | Evaluator | Metrics | Description |
 | --------------------- | ------------------------------------------------------------------------------------------------------------------------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
 | `TrackEvalEvaluator` | HOTA, MOTA, IDF1, and more | Industry-standard tracking accuracy metrics via the TrackEval library |
-| `JitterEvaluator` | `rms_jerk`, `rms_jerk_gt`, `rms_jerk_ratio`, `acceleration_variance`, `acceleration_variance_gt`, `acceleration_variance_ratio` | Trajectory smoothness metrics based on numerical differentiation of 3D positions; GT and ratio variants allow comparing tracker-added jitter against test-data jitter |
 | `DiagnosticEvaluator` | `LOC_T_X`, `LOC_T_Y`, `DIST_T` → summary scalars: `DIST_T_mean`, `LOC_T_X_mae`, `LOC_T_Y_mae`, `num_matches` | Per-frame location and distance error between matched tracker output tracks and ground-truth tracks; uses bipartite (Hungarian) assignment over overlapping frames |
+| `JitterEvaluator` | `rms_jerk`, `rms_jerk_gt`, `rms_jerk_ratio`, `acceleration_variance`, `acceleration_variance_gt`, `acceleration_variance_ratio` | Trajectory smoothness metrics based on numerical differentiation of 3D positions; GT and ratio variants allow comparing tracker-added jitter against test-data jitter |
 
 ## Canonical Data Formats
 
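The `JitterEvaluator` row above describes smoothness metrics derived from numerical differentiation of 3D positions. A minimal sketch of how an `rms_jerk`-style value could be computed; the function name, signature, and fixed-timestep assumption are illustrative, not the evaluator's actual API:

```python
import math

def rms_jerk(positions, dt=1.0):
    """RMS jerk of a 3D trajectory via finite differences.

    positions: list of (x, y, z) samples taken at a fixed timestep dt.
    Jerk is the third derivative of position; here it is estimated by
    applying a first-order difference three times.
    """
    def diff(seq):
        # component-wise finite difference between consecutive samples
        return [tuple((b[i] - a[i]) / dt for i in range(3))
                for a, b in zip(seq, seq[1:])]

    jerk = diff(diff(diff(positions)))  # velocity -> acceleration -> jerk
    if not jerk:
        return float("nan")  # too few samples to differentiate three times
    sq = [x * x + y * y + z * z for x, y, z in jerk]
    return math.sqrt(sum(sq) / len(sq))

# A constant-velocity track has zero acceleration, hence zero jerk:
straight = [(t, 2 * t, 0.0) for t in range(10)]
print(rms_jerk(straight))  # 0.0
```

A `ratio` variant in the spirit of `rms_jerk_ratio` would then divide the tracker trajectory's value by the same quantity computed on the ground-truth trajectory.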

tools/tracker/evaluation/evaluators/README.md

Lines changed: 50 additions & 50 deletions
@@ -96,6 +96,56 @@ print(f"IDF1: {metrics['IDF1']:.3f}")
 
 **Tests**: See [tests/test_trackeval_evaluator.py](tests/test_trackeval_evaluator.py) for comprehensive test suite with 16 test cases covering configuration, processing, evaluation, and integration workflows.
 
+### DiagnosticEvaluator
+
+**Purpose**: Per-frame location comparison and error analysis between matched output tracks and ground-truth tracks.
+
+**Status**: **FULLY IMPLEMENTED** - Bipartite track matching with per-frame location and distance CSV/plot outputs.
+
+**Supported Metrics**:
+
+- **LOC_T_X**: Per-frame X position of each matched (output, GT) track pair
+- **LOC_T_Y**: Per-frame Y position of each matched (output, GT) track pair
+- **DIST_T**: Per-frame Euclidean distance error between each matched pair
+
+**Key Features**:
+
+- **Track Matching**: Bipartite assignment (Hungarian algorithm) minimizing mean Euclidean distance over overlapping frames. Requires a minimum of 10 overlapping frames (`MIN_OVERLAP_FRAMES`).
+- **Missing Frame Handling**: Frames where only one side (output or GT) has data produce `NaN` in CSV output, preserving full temporal context.
+- **CSV Output**: Per-metric CSV files with headers:
+  - LOC_T_X / LOC_T_Y: `[frame_id, track_id, gt_id, value_track, value_gt]`
+  - DIST_T: `[frame_id, track_id, gt_id, distance]`
+- **Plot Output**: One matplotlib figure per metric with all matched pairs overlaid.
+- **Summary Scalars**: `evaluate_metrics()` returns `DIST_T_mean`, `LOC_T_X_mae`, `LOC_T_Y_mae`, and `num_matches`.
+
+**Usage Example**:
+
+```python
+from evaluators.diagnostic_evaluator import DiagnosticEvaluator
+from pathlib import Path
+
+evaluator = DiagnosticEvaluator()
+metrics = (evaluator
+    .configure_metrics(['LOC_T_X', 'LOC_T_Y', 'DIST_T'])
+    .set_output_folder(Path('/path/to/results'))
+    .process_tracker_outputs(tracker_outputs, gt_file_path)
+    .evaluate_metrics())
+print(f"Mean distance: {metrics['DIST_T_mean']:.3f}")
+print(f"X MAE: {metrics['LOC_T_X_mae']:.3f}")
+print(f"Y MAE: {metrics['LOC_T_Y_mae']:.3f}")
+print(f"Matched pairs: {int(metrics['num_matches'])}")
+```
+
+**Current Limitations**:
+
+- Uses only X and Y coordinates (Z ignored)
+- Single-sequence evaluation only
+- No configurable overlap threshold (fixed at 10 frames)
+
+**Implementation**: [diagnostic_evaluator.py](diagnostic_evaluator.py)
+
+**Tests**: See [tests/test_diagnostic_evaluator.py](tests/test_diagnostic_evaluator.py) for unit tests covering track matching, scalar metrics, CSV output, and reset workflows.
+
 ### JitterEvaluator
 
 **Purpose**: Evaluate tracker smoothness by measuring positional jitter in tracked object trajectories, and compare it against jitter already present in the ground-truth test data.
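The summary scalars in the DiagnosticEvaluator section can be illustrated with a short sketch. The `summarize` helper and the pair-list data shape below are assumptions for illustration only, not the evaluator's internals (which work on its own track/CSV structures):

```python
import math

def summarize(pairs):
    """Sketch of DIST_T_mean / LOC_T_X_mae / LOC_T_Y_mae computation.

    pairs: per-frame (track_xy, gt_xy) samples for matched tracks; either
    side may be None when only one side has data for that frame (those
    frames become NaN rows in the CSVs and are skipped in the scalars).
    """
    dists, x_errs, y_errs = [], [], []
    for track, gt in pairs:
        if track is None or gt is None:
            continue  # missing-frame sample: excluded from the averages
        dx, dy = track[0] - gt[0], track[1] - gt[1]
        dists.append(math.hypot(dx, dy))  # Euclidean error in XY only
        x_errs.append(abs(dx))
        y_errs.append(abs(dy))
    return {
        "DIST_T_mean": sum(dists) / len(dists),
        "LOC_T_X_mae": sum(x_errs) / len(x_errs),
        "LOC_T_Y_mae": sum(y_errs) / len(y_errs),
    }

pairs = [((1.0, 2.0), (1.0, 1.0)),  # error (0, 1) -> distance 1
         ((4.0, 4.0), (1.0, 0.0)),  # error (3, 4) -> distance 5
         (None, (5.0, 5.0))]        # GT-only frame, skipped
print(summarize(pairs))
# {'DIST_T_mean': 3.0, 'LOC_T_X_mae': 1.5, 'LOC_T_Y_mae': 2.5}
```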
@@ -181,56 +231,6 @@ evaluators:
 
 **Tests**: See [tests/test_jitter_evaluator.py](tests/test_jitter_evaluator.py).
 
-### DiagnosticEvaluator
-
-**Purpose**: Per-frame location comparison and error analysis between matched output tracks and ground-truth tracks.
-
-**Status**: **FULLY IMPLEMENTED** - Bipartite track matching with per-frame location and distance CSV/plot outputs.
-
-**Supported Metrics**:
-
-- **LOC_T_X**: Per-frame X position of each matched (output, GT) track pair
-- **LOC_T_Y**: Per-frame Y position of each matched (output, GT) track pair
-- **DIST_T**: Per-frame Euclidean distance error between each matched pair
-
-**Key Features**:
-
-- **Track Matching**: Bipartite assignment (Hungarian algorithm) minimizing mean Euclidean distance over overlapping frames. Requires a minimum of 10 overlapping frames (`MIN_OVERLAP_FRAMES`).
-- **Missing Frame Handling**: Frames where only one side (output or GT) has data produce `NaN` in CSV output, preserving full temporal context.
-- **CSV Output**: Per-metric CSV files with headers:
-  - LOC_T_X / LOC_T_Y: `[frame_id, track_id, gt_id, value_track, value_gt]`
-  - DIST_T: `[frame_id, track_id, gt_id, distance]`
-- **Plot Output**: One matplotlib figure per metric with all matched pairs overlaid.
-- **Summary Scalars**: `evaluate_metrics()` returns `DIST_T_mean`, `LOC_T_X_mae`, `LOC_T_Y_mae`, and `num_matches`.
-
-**Usage Example**:
-
-```python
-from evaluators.diagnostic_evaluator import DiagnosticEvaluator
-from pathlib import Path
-
-evaluator = DiagnosticEvaluator()
-metrics = (evaluator
-    .configure_metrics(['LOC_T_X', 'LOC_T_Y', 'DIST_T'])
-    .set_output_folder(Path('/path/to/results'))
-    .process_tracker_outputs(tracker_outputs, gt_file_path)
-    .evaluate_metrics())
-print(f"Mean distance: {metrics['DIST_T_mean']:.3f}")
-print(f"X MAE: {metrics['LOC_T_X_mae']:.3f}")
-print(f"Y MAE: {metrics['LOC_T_Y_mae']:.3f}")
-print(f"Matched pairs: {int(metrics['num_matches'])}")
-```
-
-**Current Limitations**:
-
-- Uses only X and Y coordinates (Z ignored)
-- Single-sequence evaluation only
-- No configurable overlap threshold (fixed at 10 frames)
-
-**Implementation**: [diagnostic_evaluator.py](diagnostic_evaluator.py)
-
-**Tests**: See [tests/test_diagnostic_evaluator.py](tests/test_diagnostic_evaluator.py) for unit tests covering track matching, scalar metrics, CSV output, and reset workflows.
-
 ## Adding New Evaluators
 
 To add support for a new metric computation library:
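The track-matching step the DiagnosticEvaluator section describes (bipartite assignment minimizing mean Euclidean distance, gated by a minimum frame overlap) can be sketched as follows. For clarity this uses brute-force enumeration in place of the Hungarian algorithm the README names, assumes no more output tracks than GT tracks, and every name except `MIN_OVERLAP_FRAMES` is illustrative:

```python
from itertools import permutations

MIN_OVERLAP_FRAMES = 10  # minimum shared frames, as stated in the README

def mean_dist(track, gt):
    """Mean XY Euclidean distance over frames present in both tracks.

    track, gt: dict mapping frame_id -> (x, y). Returns None when the
    overlap is shorter than MIN_OVERLAP_FRAMES, making the pair ineligible.
    """
    common = track.keys() & gt.keys()
    if len(common) < MIN_OVERLAP_FRAMES:
        return None
    return sum(((track[f][0] - gt[f][0]) ** 2 +
                (track[f][1] - gt[f][1]) ** 2) ** 0.5
               for f in common) / len(common)

def match_tracks(outputs, gts):
    """One-to-one assignment of output tracks to GT tracks.

    Prefers assignments that match more eligible pairs, breaking ties on
    lower total mean distance. Brute force over permutations stands in
    for the Hungarian algorithm, which scales to many tracks.
    """
    out_ids, gt_ids = list(outputs), list(gts)
    best_pairs, best_key = [], (0, float("inf"))
    for perm in permutations(gt_ids, len(out_ids)):
        pairs, cost = [], 0.0
        for o, g in zip(out_ids, perm):
            d = mean_dist(outputs[o], gts[g])
            if d is not None:  # skip pairs below the overlap threshold
                pairs.append((o, g))
                cost += d
        key = (-len(pairs), cost)  # more matches first, then cheaper
        if key < best_key:
            best_pairs, best_key = pairs, key
    return best_pairs
```

With two GT tracks at y=0 and y=10 and two output tracks hugging each of them, `match_tracks` pairs each output with its nearest GT, while any track sharing fewer than 10 frames with a GT track is simply left unmatched.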
