Commit 7b741f4

Merge branch 'main' into DOCS-Restructure-get-started-and-chapters-main
2 parents ef3e543 + 85e5a84 commit 7b741f4

20 files changed (+2857, -65 lines)

.github/resources/requirements.txt

Lines changed: 1 addition & 1 deletion
@@ -3,7 +3,7 @@
 # This file is licensed under Apache 2.0 License.

 pip == 26.0.1
-pytest == 9.0.2
+pytest == 9.0.3
 numpy == 2.2.6
 selenium == 4.41.0
 selenium-wire == 5.1.0

tools/tracker/evaluation/Agents.md

Lines changed: 29 additions & 1 deletion
@@ -21,7 +21,9 @@ SPDX-License-Identifier: Apache-2.0
 - Architecture & flow: [docs/design/tracker-evaluation-pipeline.md](../../../docs/design/tracker-evaluation-pipeline.md)
 - Main tracker evaluation README (canonical formats, usage, CLI): [README.md](README.md)
 - ADR context: [docs/adr/0009-tracking-evaluation.md](../../../docs/adr/0009-tracking-evaluation.md)
-- Example configuration: [pipeline_configs/metric_test_evaluation.yaml](pipeline_configs/metric_test_evaluation.yaml)
+- Example configurations:
+  - Full tracker evaluation: [pipeline_configs/metric_test_evaluation.yaml](pipeline_configs/metric_test_evaluation.yaml)
+  - Camera projection accuracy: [pipeline_configs/camera_projection_evaluation.yaml](pipeline_configs/camera_projection_evaluation.yaml)

 ## Folders structure

@@ -56,6 +58,19 @@ Check `datasets/README.md` for more details
 - Dependent on internal implementation: loads configuration file and calls API of SceneScape classes from scene_common and controller modules.
 - Uses separate frame ingestion logic depending on enabling time-chunking in the configuration.

+- **CameraProjectionHarness**: `harnesses/camera_projection_harness/camera_projection_harness.py`
+  - Bypasses the full tracker and only applies camera-pose projection to isolate per-camera calibration error.
+  - Runs `run_projection.py` inside the `scenescape-controller` Docker container (requires `scene_common`, OpenCV, open3d).
+  - Supports two projection modes per object category via the `object_classes` custom config key:
+    - **TYPE_1** (`shift_type: 1`, default): projects the bounding-box bottom centre `(centre_x, bottom_y)` to the world XY plane using `CameraPose.cameraPointToWorldPoint()`.
+    - **TYPE_2** (`shift_type: 2`): shifts the projection point upward by `(height/2) * (baseAngle/90)` before projecting, using `CameraPose.projectBounds()` to derive the base angle.
+  - After projection, applies a size offset: pushes the world position `mean([x_size, y_size]) / 2` metres away from the camera, matching `MovingObject.mapObjectDetectionToWorld()`.
+  - `set_custom_config()` accepts `object_classes` (a list of `{name, shift_type, x_size, y_size}` dicts) and `container_image`.
+  - `process_inputs()` serialises `object_classes` to `params.json` in the shared temp dir before launching the container; `run_projection.py` reads it at startup.
+  - `reset()` clears `_object_classes` in addition to other state.
+  - Encodes output object IDs as `"{camera_id}:{object_id}"` for downstream splitting by `CameraAccuracyEvaluator`.
+  - Pair this harness with `CameraAccuracyEvaluator` and the `camera_projection_evaluation.yaml` pipeline config.

 Check `harnesses/README.md` for more details
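The two projection modes and the size offset described in the harness notes can be sketched as follows. This is a minimal illustration built only from the formulas quoted above; the function names are hypothetical, and the real harness delegates the actual image-to-world projection to `CameraPose`.

```python
import numpy as np

def projection_point(bbox, shift_type=1, base_angle_deg=0.0):
    """Choose the image point to project for one detection.

    bbox is (x, y, w, h) in pixels. TYPE_1 (shift_type 1) uses the
    bounding-box bottom centre; TYPE_2 (shift_type 2) lifts that point
    upward by (height / 2) * (base_angle / 90) before projection.
    """
    x, y, w, h = bbox
    centre_x, bottom_y = x + w / 2.0, y + h  # bottom centre of the box
    if shift_type == 2:
        bottom_y -= (h / 2.0) * (base_angle_deg / 90.0)
    return centre_x, bottom_y

def apply_size_offset(world_xy, camera_xy, x_size, y_size):
    """Push the projected world point away from the camera by
    mean([x_size, y_size]) / 2 metres, per the documented size offset."""
    world = np.asarray(world_xy, dtype=float)
    cam = np.asarray(camera_xy, dtype=float)
    away = world - cam
    norm = np.linalg.norm(away)
    if norm == 0.0:
        return world  # point coincides with camera; nothing to push
    return world + (away / norm) * (np.mean([x_size, y_size]) / 2.0)
```

For example, a 10x20 px box at the origin projects from its bottom centre `(5, 20)` in TYPE_1, while TYPE_2 with a 90 degree base angle lifts that to `(5, 10)`.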
## Evaluators
@@ -69,20 +84,33 @@ Check `harnesses/README.md` for more details
 - **JitterEvaluator**: `evaluators/jitter_evaluator.py`
   Measures trajectory smoothness via RMS jerk and acceleration variance, computed from both tracker outputs and ground-truth tracks. Supports GT and ratio variants to isolate tracker-added jitter from dataset-inherent jitter.

+- **CameraAccuracyEvaluator**: `evaluators/camera_accuracy_evaluator.py`
+  Consumes output from `CameraProjectionHarness` and measures per-camera, per-object projection accuracy:
+  - `DIST_T`: mean Euclidean distance error (m) between projected and GT world position per (camera, object) pair.
+  - `VISIBILITY`: frame count and percentage each camera detects each object.
+  - `set_scene_config()` resolves each camera's world position (`_solve_camera_position`: `cv2.solvePnP`, then `C = -R^T @ t`) and 2-D viewing direction (`_solve_camera_view_dir`: `R^T @ [0, 0, 1]`, XY-normalized) from the scene's calibration data.
+  - `trajectories_{cam}.png` includes a star marker at the camera position and an arrow showing its view direction; both X and Y axes are flipped when `cam_y > mean(gt_y)` (a 180° rotation so the camera always appears at the visual bottom with correct chirality).
+  - Writes `distance_errors.csv`, `visibility_summary.csv`, `accuracy_summary.csv`, `summary_table.csv` (human-readable column names), per-camera `distance_errors_{cam}.png`, `trajectories_{cam}.png`, and `error_vs_cam_distance_{cam}.png` plots, and a `visibility_bar_chart.png`. `format_summary()` returns a terminal-ready table.

 Multiple evaluators can be configured in a single YAML pipeline; each runs independently against the same tracker outputs and writes results to its own subfolder under the run output directory.

 Check `evaluators/README.md` for more details

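The camera-geometry identities quoted above (`C = -R^T @ t` and `R^T @ [0, 0, 1]`) are standard extrinsics math; a sketch, assuming `R` and `t` are the world-to-camera rotation and translation recovered via `cv2.solvePnP` (function names here are illustrative, not the evaluator's API):

```python
import numpy as np

def camera_world_position(R, t):
    """Camera centre in world coordinates: C = -R^T @ t."""
    return -np.asarray(R, dtype=float).T @ np.asarray(t, dtype=float).reshape(3)

def camera_view_dir_xy(R):
    """Camera optical axis (+Z) expressed in world coordinates,
    reduced to a unit 2-D XY direction for plotting."""
    direction = np.asarray(R, dtype=float).T @ np.array([0.0, 0.0, 1.0])
    xy = direction[:2]
    norm = np.linalg.norm(xy)
    return xy / norm if norm > 0 else xy  # degenerate if camera looks straight down
```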
 ## Code Entry Points

 - **Pipeline orchestration**: [pipeline_engine.py](pipeline_engine.py) (methods `load_configuration()`, `run()`, `evaluate()`, CLI via `python -m pipeline_engine <config>`).
+  - `_configure_harness()` forwards `object_classes` from the YAML `harness.config` block to the harness via `set_custom_config({'object_classes': ...})`.
+  - `_configure_evaluators()` calls `set_scene_config(scene_config)` on each evaluator that exposes the method (checked via `hasattr`), passing the scene config returned by `dataset.get_scene_config()`.
+  - `main()` prints evaluator results using `evaluator.format_summary()` when available; otherwise it falls back to printing each metric value individually.
 - **Component base classes** (implement to extend pipeline):
   - Dataset: [base/tracking_dataset.py](base/tracking_dataset.py)
   - Harness: [base/tracker_harness.py](base/tracker_harness.py)
   - Evaluator: [base/tracker_evaluator.py](base/tracker_evaluator.py)
 - **TrackEval adapter & helpers**: [evaluators/trackeval_evaluator.py](evaluators/trackeval_evaluator.py), [utils/format_converters/](./utils/format_converters.py).
 - **Jitter adapter**: [evaluators/jitter_evaluator.py](evaluators/jitter_evaluator.py).
 - **Diagnostic adapter**: [evaluators/diagnostic_evaluator.py](evaluators/diagnostic_evaluator.py).
+- **Camera projection harness**: [harnesses/camera_projection_harness/camera_projection_harness.py](harnesses/camera_projection_harness/camera_projection_harness.py), container script: [harnesses/camera_projection_harness/run_projection.py](harnesses/camera_projection_harness/run_projection.py).
+- **Camera accuracy evaluator**: [evaluators/camera_accuracy_evaluator.py](evaluators/camera_accuracy_evaluator.py).
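The `hasattr`-based wiring described for `_configure_evaluators()` is a simple capability check: evaluators opt in to scene configuration by defining the method. A sketch under assumed names (the evaluator classes below are hypothetical stand-ins):

```python
def configure_evaluators(evaluators, scene_config):
    """Pass the scene config only to evaluators that declare they want it.

    Evaluators without a set_scene_config() method are left untouched,
    so the base evaluator interface stays minimal.
    """
    for evaluator in evaluators:
        if hasattr(evaluator, "set_scene_config"):
            evaluator.set_scene_config(scene_config)

class PlainEvaluator:          # needs no scene config
    pass

class SceneAwareEvaluator:     # opts in by defining the method
    def __init__(self):
        self.scene = None

    def set_scene_config(self, cfg):
        self.scene = cfg
```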

 ## Guidelines for Adding New Component or Updating Existing One

tools/tracker/evaluation/README.md

Lines changed: 47 additions & 7 deletions
@@ -50,6 +50,8 @@ pip install -r requirements.txt

 Create a YAML configuration file (see `pipeline_configs/` directory):

+**Full tracker evaluation** (`pipeline_configs/controller_evaluation.yaml`):
+
 ```yaml
 pipeline:
   output:
@@ -59,7 +61,7 @@ dataset:
   class: datasets.metric_test_dataset.MetricTestDataset
   config:
     data_path: /path/to/dataset
-    cameras: [x1, x2]
+    cameras: [Cam_x1_0, Cam_x2_0]
     camera_fps: 30

 harness:
@@ -88,6 +90,42 @@ evaluators:
     ]
 ```

+**Camera projection accuracy** (`pipeline_configs/camera_projection_evaluation.yaml`):
+
+Bypasses the tracker and only applies camera-pose projection to isolate per-camera calibration error:
+
+```yaml
+pipeline:
+  output:
+    path: /tmp/camera-projection-evaluation
+
+dataset:
+  class: datasets.metric_test_dataset.MetricTestDataset
+  config:
+    data_path: /path/to/dataset
+    cameras: [Cam_x1_0, Cam_x2_0]
+    camera_fps: 30
+
+harness:
+  class: harnesses.camera_projection_harness.CameraProjectionHarness
+  config:
+    container_image: scenescape-controller:latest
+    # Optional: per-category projection settings.
+    # shift_type 1 = bottom-centre (TYPE_1, default)
+    # shift_type 2 = perspective-corrected point (TYPE_2)
+    # x_size / y_size push the result away from the camera by mean([x, y]) / 2 metres.
+    object_classes:
+      - name: person
+        shift_type: 1
+        x_size: 0.5
+        y_size: 0.5
+
+evaluators:
+  - class: evaluators.camera_accuracy_evaluator.CameraAccuracyEvaluator
+    config:
+      metrics: [DIST_T, VISIBILITY]
+```

 Run the pipeline:

```bash
@@ -144,11 +182,12 @@ evaluation/

 ### Available Evaluators

-| Evaluator | Metrics | Description |
-| --------- | ------- | ----------- |
-| `TrackEvalEvaluator` | HOTA, MOTA, IDF1, and more | Industry-standard tracking accuracy metrics via the TrackEval library |
-| `DiagnosticEvaluator` | `LOC_T_X`, `LOC_T_Y`, `DIST_T` → summary scalars: `DIST_T_mean`, `LOC_T_X_mae`, `LOC_T_Y_mae`, `num_matches` | Per-frame location and distance error between matched tracker output tracks and ground-truth tracks; uses bipartite (Hungarian) assignment over overlapping frames |
-| `JitterEvaluator` | `rms_jerk`, `rms_jerk_gt`, `rms_jerk_ratio`, `acceleration_variance`, `acceleration_variance_gt`, `acceleration_variance_ratio` | Trajectory smoothness metrics based on numerical differentiation of 3D positions; GT and ratio variants allow comparing tracker-added jitter against test-data jitter |
+| Evaluator | Metrics | Description |
+| --------- | ------- | ----------- |
+| `TrackEvalEvaluator` | HOTA, MOTA, IDF1, and more | Industry-standard tracking accuracy metrics via the TrackEval library |
+| `DiagnosticEvaluator` | `LOC_T_X`, `LOC_T_Y`, `DIST_T` → summary scalars: `DIST_T_mean`, `LOC_T_X_mae`, `LOC_T_Y_mae`, `num_matches` | Per-frame location and distance error between matched tracker output tracks and ground-truth tracks; uses bipartite (Hungarian) assignment over overlapping frames |
+| `JitterEvaluator` | `rms_jerk`, `rms_jerk_gt`, `rms_jerk_ratio`, `acceleration_variance`, `acceleration_variance_gt`, `acceleration_variance_ratio` | Trajectory smoothness metrics based on numerical differentiation of 3D positions; GT and ratio variants allow comparing tracker-added jitter against test-data jitter |
+| `CameraAccuracyEvaluator` | `DIST_T` → `dist_mean_all`, `dist_mean_{cam}`, `dist_mean_{cam}_{obj}`; `VISIBILITY` → `visibility_{cam}_{obj}` (frames + %) | Per-camera, per-object projection accuracy: mean distance error and visibility frame count. Designed to pair with `CameraProjectionHarness`. |

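As a rough sketch of what the `DIST_T` means in the table aggregate (illustrative only; the function name is hypothetical and the real evaluator lives in `camera_accuracy_evaluator.py`):

```python
import math

def dist_t_mean(projected, ground_truth):
    """Mean Euclidean distance (metres) between frame-aligned projected
    and ground-truth world XY positions for one (camera, object) pair."""
    errors = [math.dist(p, g) for p, g in zip(projected, ground_truth)]
    return sum(errors) / len(errors)
```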
## Canonical Data Formats
@@ -221,7 +260,8 @@ The evaluation pipeline has comprehensive test coverage:

 - **Unit Tests**: Fast tests without external dependencies, located in component-specific test directories
   - `datasets/tests/test_*.py`: Datasets unit tests
-  - `harnesses/tests/test_*.py`: Harnesses unit tests
+  - `harnesses/tests/test_*.py`: Harnesses unit tests (includes `CameraProjectionHarness` — 23 tests; `run_projection.py` helpers — 7 tests)
+  - `evaluators/tests/test_*.py`: Evaluator unit tests (includes `CameraAccuracyEvaluator` — 37 tests)
   - `tests/test_format_converters.py`: Format converter unit tests

 - **Integration Tests**: Tests requiring Docker and real components, located in `tests/`

tools/tracker/evaluation/datasets/README.md

Lines changed: 3 additions & 3 deletions
@@ -25,7 +25,7 @@ Dataset adapters convert dataset-specific formats to SceneScape canonical format

 **Key Features**:
 - Single scene: `Retail_Demo`
-- Two cameras: `x1`, `x2` (Cam_x1_0, Cam_x2_0)
+- Two cameras: `Cam_x1_0`, `Cam_x2_0`
 - Multiple FPS options: 1, 10, 30 (separate JSON files per FPS)
 - Ground truth in MOTChallenge 3D CSV format (see [Canonical Data Formats](../README.md#canonical-data-formats))

@@ -42,13 +42,13 @@ from datasets.metric_test_dataset import MetricTestDataset
 dataset = MetricTestDataset("../../../tests/system/metric/dataset")

 # Configure dataset
-dataset.set_cameras(["x1", "x2"]).set_camera_fps(30)
+dataset.set_cameras(["Cam_x1_0", "Cam_x2_0"]).set_camera_fps(30)

 # Get scene configuration
 scene_config = dataset.get_scene_config()

 # Get camera inputs
-for camera_input in dataset.get_inputs("x1"):
+for camera_input in dataset.get_inputs("Cam_x1_0"):
     # Process detection data
     pass

tools/tracker/evaluation/datasets/metric_test_dataset.py

Lines changed: 4 additions & 4 deletions
@@ -21,15 +21,15 @@ class MetricTestDataset(TrackingDataset):

     This dataset contains:
     - Scene: Retail_Demo (single built-in scene)
-    - Cameras: x1, x2 (Cam_x1_0, Cam_x2_0)
+    - Cameras: Cam_x1_0, Cam_x2_0
     - FPS options: 1, 10, 30 (separate JSON files per FPS)
     - Ground truth: gtLoc.json with object locations
     - Scene config: config.json with camera calibration
     """

     # Constants
     SCENE_NAME = "Retail_Demo"
-    SUPPORTED_CAMERAS = ["x1", "x2"]
+    SUPPORTED_CAMERAS = ["Cam_x1_0", "Cam_x2_0"]
     SUPPORTED_FPS = [1, 10, 30]
     DEFAULT_FPS = 30

@@ -74,7 +74,7 @@ def set_cameras(self, cameras: Optional[List[str]] = None) -> 'MetricTestDataset
         """Set cameras to use.

         Args:
-            cameras: List of camera IDs (subset of ["x1", "x2"])
+            cameras: List of camera IDs (subset of ["Cam_x1_0", "Cam_x2_0"])

         Returns:
             Self for method chaining
@@ -382,7 +382,7 @@ def _get_input_filename(self, cam_id: str) -> Path:
             raise ValueError(f"Camera {cam_id} not in configured cameras")

         fps_suffix = f"_{int(self._camera_fps)}fps" if self._camera_fps != 30 else ""
-        input_file = self._dataset_path / f"Cam_{cam_id}_0{fps_suffix}.json"
+        input_file = self._dataset_path / f"{cam_id}{fps_suffix}.json"

         if not input_file.exists():
             raise FileNotFoundError(f"Input file not found: {input_file}")
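The filename rule changed by this hunk can be sketched in isolation: now that camera IDs are the full `Cam_x1_0` form, the dataset path no longer re-wraps them in a `Cam_..._0` template. A standalone mirror of that rule (the helper name is hypothetical):

```python
from pathlib import Path

def input_filename(dataset_path: str, cam_id: str, fps: int) -> Path:
    """Mirror the naming rule: '<cam_id>.json' at the default 30 fps,
    '<cam_id>_<fps>fps.json' otherwise."""
    suffix = f"_{int(fps)}fps" if fps != 30 else ""
    return Path(dataset_path) / f"{cam_id}{suffix}.json"
```

For example, `Cam_x1_0` at 10 fps resolves to `Cam_x1_0_10fps.json`, while the default 30 fps resolves to `Cam_x1_0.json`.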
