41 changes: 26 additions & 15 deletions .github/copilot-instructions.md
@@ -88,8 +88,8 @@ Sensors → MQTT (broker) → Scene Controller → Manager/Web UI
**Key Targets** (from root `Makefile`):

```bash
-make build-core # Default: core services (autocalibration, controller, manager, model_installer)
-make build-all # Includes experimental (mapping + cluster_analytics)
+make build-core # Default: core services
+make build-all # Includes experimental
 make build-experimental # Mapping + cluster_analytics only
make rebuild-core # Clean + build (useful after code changes)
```
@@ -105,15 +105,24 @@ make rebuild-core # Clean + build (useful after code changes)

**For comprehensive test creation guidance, see `.github/skills/testing.md`** - detailed instructions on creating unit, functional, integration, UI, and smoke tests with both positive and negative cases.

-**Running Tests** (must have containers running via docker-compose):
+**Test Prerequisites** (required before running any tests):

```bash
-SUPASS=<password> make setup_tests # Build test images
-make run_basic_acceptance_tests # Quick acceptance tests
-make -C tests unit-tests # Unit tests only
-make -C tests geometry-unit # Specific test (e.g., geometry)
+make setup_tests SUPASS=<password> # Rebuilds ALL images (runtime + test), initializes secrets & .env
+# MUST run after code changes to pick them up
```

**Running Tests**:

```bash
make run_unit_tests # All unit tests (requires setup_tests first)
make -C tests reid-unique-count # Specific functional test (requires setup_tests first)
make -C tests geometry-unit # Specific unit test (requires setup_tests first)
make run_basic_acceptance_tests # Quick acceptance/smoke tests (requires setup_tests first)
```

**Key Point**: Always run `make setup_tests` after code changes - it rebuilds Docker images to pick up your modifications.

### Completion Gate For Test Tasks (Critical)

For runtime test verification requirements, use
@@ -167,21 +176,23 @@ pubsub.publish(topic, json_payload)
**Modifying a Microservice** (e.g., controller):

1. Edit source in `controller/src/`
-2. Rebuild: `make rebuild-controller` (cleans old image, rebuilds)
-3. Restart containers: `docker compose up -d scene` (or full `docker compose up`)
+2. Rebuild: `make rebuild-core` (from root) or per-service builds (see service's Agents.md for commands)
+3. For testing: `make setup_tests SUPASS=<password>` - Rebuilds ALL runtime + test images
+4. Check logs: `docker compose logs scene -f`
4. Check logs: `docker compose logs scene -f`

**Adding Dependencies**:

-- Python: Update `requirements-runtime.txt`, rebuild image
+- Python: Update `requirements-runtime.txt` in service folder
 - System: Add to `Dockerfile` RUN section (apt packages)
-- Shared lib changes: Rebuild `scene_common`, then dependent services
+- Shared lib (scene_common): Use `make rebuild-core` to propagate changes

+**Running Tests After Code Changes**:

-**Debugging Tests**:
+1. `make setup_tests SUPASS=<password>` (required before any tests)
+2. Run specific test targets (e.g., `make -C tests reid-unique-count`)
+3. See `.github/skills/testing.md` for detailed test creation and debugging

-- Use `debugtest.py` for running tests without pytest harness (useful in containers)
-- View test output: `docker compose exec <service> cat <logfile>`
-- Specific test: `pytest tests/sscape_tests/geometry/test_point.py::TestPoint::test_constructor -v`
+**Service-Specific Commands**: Check each service's `Agents.md` file for build and test details.

## Integration Points & Dependencies

24 changes: 16 additions & 8 deletions autocalibration/Agents.md
@@ -74,22 +74,30 @@ The **Auto Camera Calibration** service (formerly `camcalibration`) computes cam
### Building the Service

```bash
-# From root directory
-make autocalibration # Build image
-make rebuild-autocalibration # Clean + rebuild

-# Build with dependencies
-make build-core # Includes autocalibration
+# From repo root
+make -C autocalibration # Build autocalibration
+make -C autocalibration test-build # Build autocalibration + test image

+# OR from autocalibration/ directory
+cd autocalibration && make # Build autocalibration
+cd autocalibration && make test-build # Build autocalibration + test image

+# Root-level builds (handles all dependencies)
+make rebuild-core # Rebuild all core services with dependencies
+make build-core # Build all core services
+make setup_tests SUPASS=<password> # Full test environment setup
```

### Testing

```bash
+# Setup test images first
+make setup_tests SUPASS=<password>

 # Unit tests
 make -C tests autocalibration-unit

-# Functional tests (requires running containers)
-SUPASS=<password> make setup_tests
+# Functional tests
 make -C tests autocalibration-functional
```

20 changes: 15 additions & 5 deletions cluster_analytics/Agents.md
@@ -132,21 +132,31 @@ The **Cluster Analytics** service provides advanced object clustering, tracking,
### Building the Service

```bash
-# From root directory
-make cluster_analytics # Build image
-make rebuild-cluster_analytics # Clean + rebuild
+# From repo root
+make -C cluster_analytics # Build cluster_analytics
+make -C cluster_analytics test-build # Build cluster_analytics + test image

+# OR from cluster_analytics/ directory
+cd cluster_analytics && make # Build cluster_analytics
+cd cluster_analytics && make test-build # Build cluster_analytics + test image

+# Root-level builds (handles all dependencies)
+make rebuild-core # Rebuild core services
+make build-experimental # Build experimental services
+make build-all # All services including experimental
+make setup_tests SUPASS=<password> # Full test environment setup
```

### Testing

```bash
+# Setup test images first
+make setup_tests SUPASS=<password>

 # Unit tests
 make -C tests cluster-analytics-unit

-# Functional tests (requires running containers)
-SUPASS=<password> make setup_tests
+# Functional tests
 make -C tests cluster-analytics-functional

# Specific test module
```
29 changes: 20 additions & 9 deletions controller/Agents.md
@@ -135,24 +135,35 @@ The **Scene Controller** is the central runtime state management service for Int
### Building the Service

```bash
-# From root directory
-make controller # Build image (alias: scene)
-make rebuild-controller # Clean + rebuild
+# From repo root
+make -C controller # Build controller
+make -C controller test-build # Build controller + test image

+# OR from controller/ directory
+cd controller && make # Build controller
+cd controller && make test-build # Build controller + test image

+# Root-level builds (handles all dependencies)
+make rebuild-core # Rebuild all core services with dependencies
+make build-core # Build all core services
+make setup_tests SUPASS=<password> # Full test environment setup
```

### Testing

```bash
-# Unit tests
-make -C tests controller-unit
-make -C tests geometry-unit # Test fast_geometry
+# Setup test images first (required before running any tests)
+make setup_tests SUPASS=<password>

-# Functional tests (requires running containers)
-SUPASS=<password> make setup_tests
-make -C tests controller-functional
+# Run tests
+make -C tests controller-unit # Controller unit tests
+make -C tests geometry-unit # Test fast_geometry
+make -C tests controller-functional # Functional tests
```

**Specific test module**:

```bash
pytest tests/sscape_tests/controller/test_tracking.py -v
```

9 changes: 9 additions & 0 deletions controller/src/controller/detections_builder.py
@@ -123,6 +123,15 @@ def prepareObjDict(scene, obj, update_visibility, include_sensors=False):
obj_dict['similarity'] = aobj.similarity
if hasattr(aobj, 'first_seen'):
obj_dict['first_seen'] = get_iso_time(aobj.first_seen)

# Add reid state for downstream business logic to distinguish "never queried" from "query made"
if hasattr(aobj, 'reid_state'):
obj_dict['reid_state'] = aobj.reid_state.value # Convert enum to string

# Add previous IDs chain for post-mortem object stitching analysis
if hasattr(aobj, 'previous_ids_chain') and aobj.previous_ids_chain:
obj_dict['previous_ids_chain'] = aobj.previous_ids_chain

if isinstance(obj, TripwireEvent):
obj_dict['direction'] = obj.direction
if hasattr(aobj, 'asset_scale'):
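The two new payload fields published by `prepareObjDict` let downstream business logic distinguish "never queried" from "queried but unmatched". A minimal sketch of such a consumer-side check (only `reid_state` and its string values come from the diff above; the rest of the payload shape is an assumption):

```python
def classify_detection(obj_dict):
    """Bucket a published detection by its reID outcome."""
    state = obj_dict.get('reid_state')      # absent on payloads from older controllers
    if state is None:
        return 'unknown'                    # field not published yet
    if state == 'matched':
        return 'reidentified'               # stitched to a previous object
    if state == 'query_no_match':
        return 'new_object'                 # query made, nothing similar found
    return 'not_queried'                    # pending_collection / reid_disabled

# Example payloads (hypothetical, matching the serialized enum values)
assert classify_detection({'reid_state': 'matched'}) == 'reidentified'
assert classify_detection({}) == 'unknown'
```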
53 changes: 53 additions & 0 deletions controller/src/controller/moving_object.py
@@ -6,6 +6,7 @@
import struct
import warnings
from dataclasses import dataclass, field
from enum import Enum
from threading import Lock
from typing import Dict, List

@@ -27,6 +28,19 @@
LOCATION_LIMIT = 20
SPEED_THRESHOLD = 0.1

class ReidState(Enum):
"""State of ReID query and matching for an object.

PENDING_COLLECTION: Collecting embeddings, query not yet made
QUERY_NO_MATCH: Query made but no match found (new object)
MATCHED: Successfully matched to previous object (reID)
REID_DISABLED: ReID system is disabled, no query will be made
"""
PENDING_COLLECTION = "pending_collection"
QUERY_NO_MATCH = "query_no_match"
MATCHED = "matched"
REID_DISABLED = "reid_disabled"
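Since `Enum` members are not JSON-serializable, publishers send `.value` (as `prepareObjDict` does above), and consumers can reconstruct the member from the wire string via the enum constructor. A quick round-trip sketch using only the enum as defined here:

```python
from enum import Enum

class ReidState(Enum):
    PENDING_COLLECTION = "pending_collection"
    QUERY_NO_MATCH = "query_no_match"
    MATCHED = "matched"
    REID_DISABLED = "reid_disabled"

# Publisher side: put the plain string on the wire
wire_value = ReidState.MATCHED.value        # "matched"

# Consumer side: Enum(...) looks members up by value
assert ReidState(wire_value) is ReidState.MATCHED
```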

@dataclass
class ChainData:
regions: Dict
@@ -113,6 +127,9 @@ def __init__(self, info, when, camera):
self.intersected = False
self.reid = {} # Initialize reid as empty dict
self.metadata = {} # Initialize metadata as empty dict
self.reid_state = ReidState.PENDING_COLLECTION # Track reID state
self.similarity = None # Similarity score from last reID match
self.previous_ids_chain = [] # Track object ID history: [{'id': gid, 'timestamp': ts, 'similarity_score': score}, ...]
# Extract reid from metadata if present and preserve metadata attribute
metadata_from_info = self.info.get('metadata', {})
if metadata_from_info and isinstance(metadata_from_info, dict):
@@ -217,6 +234,9 @@ def setPrevious(self, otherObj):
self.gid = otherObj.gid
self.first_seen = otherObj.first_seen
self.frameCount = otherObj.frameCount + 1
self.reid_state = otherObj.reid_state
self.similarity = otherObj.similarity
self.previous_ids_chain = otherObj.previous_ids_chain.copy()

del self.chain_data.publishedLocations[LOCATION_LIMIT:]

@@ -302,6 +322,39 @@ def _projectBounds(self):
def when(self):
return self.location[0].when

def record_id_change(self, new_id, similarity_score=None, timestamp=None):
"""Record a change in object ID (for post-mortem stitching analysis).

@param new_id: The new global ID assigned to this object
@param similarity_score: Similarity score from reID matching (if matched), or None if new object
@param timestamp: When the change occurred (epoch time), defaults to current time
"""
if timestamp is None:
import time
timestamp = time.time()

self.previous_ids_chain.append({
'id': new_id,
'timestamp': timestamp,
'similarity_score': similarity_score
})
log.debug(f"MovingObject.record_id_change: rv_id={getattr(self, 'rv_id', 'unknown')}, "
f"new_id={new_id}, similarity={similarity_score}, state={self.reid_state.value}")

def is_reided(self):
"""Check if this object resulted from successful reID matching.

@return: True if object was matched to a previous object, False otherwise
"""
return self.reid_state == ReidState.MATCHED

def get_previous_ids(self):
"""Get chain of previous IDs for this object.

@return: List of dicts with 'id', 'timestamp', 'similarity_score' for post-mortem analysis
"""
return self.previous_ids_chain.copy()

def __repr__(self):
return "%s: %s/%s %s %s vectors: %s" % \
(self.__class__.__name__,
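For the post-mortem stitching analysis these methods target, the `previous_ids_chain` entries can be collapsed into a per-object ID timeline. A sketch assuming only the entry shape `record_id_change` appends (`'id'`, `'timestamp'`, `'similarity_score'`); the summary fields are illustrative:

```python
def summarize_id_chain(chain):
    """Collapse a previous_ids_chain into a small post-mortem summary."""
    if not chain:
        return None
    ordered = sorted(chain, key=lambda e: e['timestamp'])
    # similarity_score is None on first assignment, so filter before averaging
    scores = [e['similarity_score'] for e in ordered
              if e['similarity_score'] is not None]
    return {
        'first_id': ordered[0]['id'],
        'last_id': ordered[-1]['id'],
        'n_changes': len(ordered) - 1,
        'mean_similarity': sum(scores) / len(scores) if scores else None,
    }

chain = [
    {'id': 7, 'timestamp': 100.0, 'similarity_score': None},   # initial assignment
    {'id': 3, 'timestamp': 105.0, 'similarity_score': 0.91},   # reID match
]
summary = summarize_id_chain(chain)
assert summary['last_id'] == 3 and summary['n_changes'] == 1
```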