diff --git a/.github/copilot-instructions.md b/.github/copilot-instructions.md index a824655e4..bfba94ced 100644 --- a/.github/copilot-instructions.md +++ b/.github/copilot-instructions.md @@ -48,6 +48,13 @@ Skills are loaded on-demand based on task context to optimize token usage: Skills are detected and loaded based on file type, task keywords, and context signals. Explicitly request a skill if the auto-detection doesn't load it. +### Instruction Placement Policy (Critical) + +- Prefer skill files under `.github/skills/` for detailed procedural rules. +- Keep this file focused on high-level routing and references to canonical skill documents. +- Avoid duplicating policy/checklist text across this file and skills. +- If overlap is found, retain one canonical source and replace duplicates with a short pointer. + ## Architecture Overview **Core Components:** @@ -107,6 +114,20 @@ make -C tests unit-tests # Unit tests only make -C tests geometry-unit # Specific test (e.g., geometry) ``` +### Completion Gate For Test Tasks (Critical) + +For runtime test verification requirements, use +`.github/skills/test-verification-gate.md`. + +### Containerized Test Image Freshness Gate (Critical) + +Use `.github/skills/test-verification-gate.md` as the single source of truth +for image freshness checks, rebuild-before-test requirements, and retry policy +for containerized test targets. + +Service-specific examples belong in each service guide (for controller, see +`controller/Agents.md`). + ## Code Patterns & Conventions **Python Packaging**: diff --git a/.github/prompts/reflect.prompt.md b/.github/prompts/reflect.prompt.md index ae2944bc5..e1e00e250 100644 --- a/.github/prompts/reflect.prompt.md +++ b/.github/prompts/reflect.prompt.md @@ -5,8 +5,26 @@ description: "Reflect on this conversation and suggest instruction updates" # Self-Reflection Task -1. Review the entire conversation history. -2. Identify patterns where I had to correct you or clarify my intent. -3. 
Suggest specific additions or modifications to the `.github/copilot-instructions.md`, files under `.github/skills` directory, `Agents.md` in each service directory and other relevant documentation to prevent these issues in the future. -4. Recommend any new 'Agent Skills', tools or prompts that would have made this task easier. -5. Provide the output as a set of actionable diffs or markdown blocks. +## Instruction Placement Rule (Critical) + +Before proposing documentation changes, apply this hierarchy: + +1. Put detailed procedural policy in the most specific skill file under `.github/skills/`. +2. Keep `.github/copilot-instructions.md` as orchestration/entry-point guidance with pointers to skill files. +3. Keep `Agents.md` files service-specific with concrete examples and commands, not duplicated global policy text. +4. Do not duplicate the same checklist/policy text across global instructions and skill files. +5. If overlap is unavoidable, keep one canonical source and replace duplicates with short references. + +6. Review the entire conversation history. +7. Identify patterns where I had to correct you or clarify my intent. +8. Suggest specific additions or modifications to `.github/copilot-instructions.md`, files under the `.github/skills` directory, the `Agents.md` file in each service directory, and other relevant documentation to prevent these issues in the future. +9. Recommend any new 'Agent Skills', tools, or prompts that would have made this task easier. +10. Provide the output as a set of actionable diffs or markdown blocks. +11. Explicitly identify any missed instruction and classify the root cause as: + - discovery failure + - execution failure + - verification failure +12. 
For test-related tasks, always include: + - the Makefile target that should have been run + - whether it was actually run + - the exact command and pass/fail summary (or the blocker) diff --git a/.github/skills/test-verification-gate.md b/.github/skills/test-verification-gate.md new file mode 100644 index 000000000..ac34e9a5b --- /dev/null +++ b/.github/skills/test-verification-gate.md @@ -0,0 +1,55 @@ + + +# AI Agent Skill: Test Verification Gate + +Use this skill whenever a task adds or modifies tests. + +## Goal + +Ensure runtime verification is completed and reported consistently. + +## Required Checklist + +1. Select a repository Makefile target that covers the modified tests. +2. Prefer a root target when practical (for example, `make run_unit_tests`). +3. Otherwise select the narrowest-scoped target in `tests/Makefile` + (for example, `make -C tests scenescape-unit`). +4. If the selected target runs in a service `...-test` container image, + rebuild images for changed services before executing tests. +5. Execute the target. +6. If failures occur, confirm image freshness before code-level debugging: + - Rebuild the impacted service runtime and test images if not rebuilt. + - Rerun the same target once on fresh images. +7. If still failing, fix and rerun the same target. +8. Report the exact command and a concise pass/fail summary. + +## Image Freshness Mapping (Common) + +- Changed `controller/src/**` + `make -C tests scene-unit`: + - `make controller` + - `make -C controller test-build` + - then run `make -C tests scene-unit SUPASS=` + +Apply the same pattern to other services: rebuild the runtime and test images before +running containerized test targets. + +## Blocked Execution Policy + +If execution is blocked (missing environment, skipped setup, unavailable +runtime), report: + +1. What is blocked. +2. The exact command that should be run once unblocked. +3. Whether task completion is partial. 
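The checklist and blocked-execution policy above can be sketched as a small reporting helper. This is an illustrative sketch only: the helper name, the report keys, and the stand-in command are assumptions, not repository tooling; a real run would invoke the resolved `make` target.

```python
import shlex
import subprocess
import sys

def run_gated_target(cmd_args):
    """Run a test command and build the report this skill requires.

    Hypothetical helper: the report keys are illustrative, not part of
    the repository's actual tooling.
    """
    try:
        proc = subprocess.run(cmd_args, capture_output=True, text=True)
    except FileNotFoundError as exc:
        # Blocked execution: report what is blocked, the exact command,
        # and that completion is partial.
        return {"blocked": str(exc), "command": shlex.join(cmd_args), "partial": True}
    return {
        "command": shlex.join(cmd_args),           # exact command, verbatim
        "passed": proc.returncode == 0,            # concise pass/fail status
        "summary": (proc.stdout or proc.stderr).strip().splitlines()[-1:],
    }

# Stand-in command for illustration; substitute the resolved make target.
report = run_gated_target([sys.executable, "-c", "print('1 passed')"])
```

Either way the final response carries the same three facts: the exact command, whether it ran, and the outcome or blocker.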
+ +## Not Sufficient + +- Lint success only +- Syntax-only checks +- IDE static errors only +- Repeated reruns against stale container images + +These checks are useful but do not replace runtime test execution. diff --git a/.github/skills/testing.md b/.github/skills/testing.md index 21c3d0799..b1c2d66d7 100644 --- a/.github/skills/testing.md +++ b/.github/skills/testing.md @@ -15,6 +15,68 @@ This guide provides comprehensive instructions for AI agents to create high-qual - Each test should set up its own data and clean up after itself - Use mocking to isolate units from external dependencies +## Verification Workflow For AI Agents (Mandatory) + +Use `.github/skills/test-verification-gate.md` for runtime verification, +command selection, and completion reporting rules after creating or modifying +tests. + +## Test Import Path Policy (Mandatory) + +Before adding imports or path setup in any new or modified test file, run this +discovery workflow: + +1. Check shared pytest bootstrap files first: + - `tests/conftest.py` + - nearest local `conftest.py` in the target test directory tree +2. Verify whether the required modules are already importable via existing + fixtures/path setup. +3. Use direct imports (for example `from controller...`) when existing + `conftest.py` already establishes paths. +4. Only add path manipulation if no appropriate shared bootstrap exists. +5. If path setup is genuinely required, prefer adding it once in the nearest + relevant `conftest.py` rather than per-test-file setup. + +### Prohibited Pattern + +- Do not add `sys.path.insert(...)` blocks in individual test modules when + equivalent setup can live in shared `conftest.py`. 
+ +### Completion Check For Test Authoring + +When creating or updating tests, report this explicitly: + +- whether `conftest.py` files were checked +- where import-path setup is defined (file path) +- confirmation that no unnecessary per-file `sys.path.insert` was introduced + +## Test Target Mapping Workflow (Mandatory) + +Run this workflow before executing any test command: + +1. Identify changed files. +2. Classify each changed file scope: unit, functional, ui, perf, or integration. +3. Resolve the concrete target from the relevant Makefile(s): + - `tests/Makefile` + - `tests/Makefile.functional` + - `tests/Makefile.user_interface` + - `tests/Makefile.sscape` +4. Choose the narrowest target that directly validates the changed file(s). +5. Run that target with required environment variables. + +### Anti-Miss Checklist + +- Do not treat unit target success as validation for functional test changes. +- Do not run broad aggregate targets unless no narrow target exists or the user explicitly requests a sweep. +- Do not report completion without runtime verification for the resolved target (unless blocked). +- Always report: should-run target, whether it was run, exact command, and pass/fail summary (or blocker). + +### Quick Mapping Examples + +- `tests/functional/tc_sensors_send_mqtt_messages.py` -> `make -C tests sensors-send-events` +- `tests/functional/tc_mqtt_sensor_roi.py` -> `make -C tests mqtt-sensor-roi` +- `tests/functional/tc_tripwire_mqtt.py` -> `make -C tests mqtt-tripwire` + ## Test Categories ### 1. Unit Tests @@ -1047,13 +1109,16 @@ pytest -v -s # -s shows print statements ## Test Checklist -When creating a new test, verify: +This checklist is mandatory before marking any testing task complete. +If any item is not satisfied, the final response must explicitly state why. 
+When creating or modifying tests, verify: - [ ] Test has Zephyr ID (NEX-T#####) - [ ] Test file named `test_*.py` - [ ] Test functions named `test_*` - [ ] Both positive and negative cases covered - [ ] Boundary conditions tested +- [ ] At least one negative case exists per function or behavior under test, unless explicitly not applicable - [ ] Appropriate markers applied (`@pytest.mark.unit`, etc.) - [ ] Mocking used for external dependencies (unit tests) - [ ] Real data used for integration tests @@ -1063,6 +1128,9 @@ When creating a new test, verify: - [ ] Test is independent (doesn't rely on other tests) - [ ] Fixtures used for shared data - [ ] Documentation strings explain what is being tested +- [ ] Repo-preferred test command used for validation (prefer `make -C tests ` when available) +- [ ] New or changed tests executed after the last code edit +- [ ] Final response includes current pass/fail status and any warnings or known gaps ## Quick Reference diff --git a/controller/src/controller/cache_manager.py b/controller/src/controller/cache_manager.py index bf1f2de03..3b84411be 100644 --- a/controller/src/controller/cache_manager.py +++ b/controller/src/controller/cache_manager.py @@ -63,7 +63,12 @@ def refreshScenes(self): uid = scene_data['uid'] if uid not in self.cached_scenes_by_uid: + # Creating new scene - check if there was an old scene with sensor cache scene = Scene.deserialize(scene_data) + + old_scene = self._sensorNeedsRestoring(uid) + if old_scene: + self._restoreSensorCache(uid, old_scene, scene) else: scene = self.cached_scenes_by_uid[uid] scene.updateScene(scene_data) @@ -73,9 +78,41 @@ def refreshScenes(self): for sensorID in scene.sensors.keys(): self._cached_scenes_by_sensorID[sensorID] = scene self.cached_scenes_by_uid[scene.uid] = scene + + # Clear old scene cache after processing all scenes + if hasattr(self, '_old_scene_cache'): + self._old_scene_cache = None + self._cache_refreshed = get_epoch_time() return + def 
_sensorNeedsRestoring(self, uid): + # Check if any old scene has sensors with cache values that can be restored + if hasattr(self, '_old_scene_cache') and self._old_scene_cache: + return self._old_scene_cache.get(uid) + return None + + def _restoreSensorCache(self, uid, old_scene, scene): + """Restore sensor cache values from old_scene to new scene""" + restored_count = 0 + for sensor_id, old_sensor in old_scene.sensors.items(): + if hasattr(scene, 'sensors') and sensor_id in scene.sensors: + new_sensor = scene.sensors[sensor_id] + restored = False + if hasattr(old_sensor, 'value'): + new_sensor.value = old_sensor.value + restored = True + if hasattr(old_sensor, 'lastValue'): + new_sensor.lastValue = old_sensor.lastValue + restored = True + if hasattr(old_sensor, 'lastWhen'): + new_sensor.lastWhen = old_sensor.lastWhen + restored = True + if restored: + restored_count += 1 + if restored_count > 0: + log.debug(f"Restored sensor cache for {restored_count} sensor(s) in scene {uid}") + def _refreshCameras(self, scene_data): for camera in scene_data.get('cameras', []): update_data = {} @@ -188,6 +225,8 @@ def sceneWithRemoteChildID(self, childID): return self.cached_child_transforms_by_uid.get(childID, None) def invalidate(self): + # Preserve old scene cache for sensor value restoration + self._old_scene_cache = self.cached_scenes_by_uid if hasattr(self, 'cached_scenes_by_uid') else {} self.cached_scenes_by_uid = None if not hasattr(self, 'cached_child_transforms_by_uid') or self.cached_child_transforms_by_uid is None: self.cached_child_transforms_by_uid = {} diff --git a/controller/src/controller/detections_builder.py b/controller/src/controller/detections_builder.py index 7e0c0ff9c..44c504e0b 100644 --- a/controller/src/controller/detections_builder.py +++ b/controller/src/controller/detections_builder.py @@ -9,21 +9,21 @@ from scene_common.timestamp import get_iso_time -def buildDetectionsDict(objects, scene): +def buildDetectionsDict(objects, scene, 
include_sensors=False): result_dict = {} for obj in objects: - obj_dict = prepareObjDict(scene, obj, False) + obj_dict = prepareObjDict(scene, obj, False, include_sensors) result_dict[obj_dict['id']] = obj_dict return result_dict -def buildDetectionsList(objects, scene, update_visibility=False): +def buildDetectionsList(objects, scene, update_visibility=False, include_sensors=False): result_list = [] for obj in objects: - obj_dict = prepareObjDict(scene, obj, update_visibility) + obj_dict = prepareObjDict(scene, obj, update_visibility, include_sensors) result_list.append(obj_dict) return result_list -def prepareObjDict(scene, obj, update_visibility): +def prepareObjDict(scene, obj, update_visibility, include_sensors=False): aobj = obj if isinstance(obj, TripwireEvent): aobj = obj.object @@ -37,7 +37,9 @@ def prepareObjDict(scene, obj, update_visibility): if not velocity.is3D: velocity = Point(velocity.x, velocity.y, DEFAULTZ) - obj_dict = aobj.info + # Build a fresh top-level dict per serialization so optional fields like + # sensors do not leak between scene, regulated, and external outputs. + obj_dict = dict(aobj.info) obj_dict.update({ 'id': aobj.gid, # gid is the global ID - computed by SceneScape server. 
'type': otype, @@ -83,11 +85,37 @@ def prepareObjDict(scene, obj, update_visibility): if update_visibility: computeCameraBounds(scene, aobj, obj_dict) - chain_data = aobj.chain_data - if len(chain_data.regions): - obj_dict['regions'] = chain_data.regions - if len(chain_data.sensors): - obj_dict['sensors'] = chain_data.sensors + if hasattr(aobj, 'chain_data'): + chain_data = aobj.chain_data + if len(chain_data.regions): + obj_dict['regions'] = chain_data.regions + + if include_sensors: + sensors_output = {} + + # Copy sensor data while holding lock, then release + with chain_data._lock: + env_state_copy = dict(chain_data.env_sensor_state) + attr_events_copy = dict(chain_data.attr_sensor_events) + + # Environmental sensors: timestamped readings + for sensor_id, state in env_state_copy.items(): + values = state['readings'] if 'readings' in state and state['readings'] else [] + + sensors_output[sensor_id] = { + 'values': values + } + + # Attribute sensors: events as structured object + for sensor_id, events in attr_events_copy.items(): + if events: + sensors_output[sensor_id] = { + 'values': events + } + + if sensors_output: + obj_dict['sensors'] = sensors_output + if hasattr(aobj, 'confidence'): obj_dict['confidence'] = aobj.confidence if hasattr(aobj, 'similarity'): diff --git a/controller/src/controller/moving_object.py b/controller/src/controller/moving_object.py index 765b42337..727d626da 100644 --- a/controller/src/controller/moving_object.py +++ b/controller/src/controller/moving_object.py @@ -5,7 +5,7 @@ import datetime import struct import warnings -from dataclasses import dataclass +from dataclasses import dataclass, field from threading import Lock from typing import Dict, List @@ -31,8 +31,11 @@ class ChainData: regions: Dict publishedLocations: List[Point] - sensors: Dict persist: Dict + active_sensors: set = field(default_factory=set) + env_sensor_state: Dict = field(default_factory=dict) # {'sensor_id': {'readings': [(ts, val), ...]}} + 
attr_sensor_events: Dict = field(default_factory=dict) # {'sensor_id': [(ts, val), ...]} + _lock: Lock = field(default_factory=Lock) class Chronoloc: def __init__(self, point: Point, when: datetime, bounds: Rectangle): @@ -166,7 +169,7 @@ def setPersistentAttributes(self, info, persist_attributes): @param persist_attributes List of attributes to persist (may include sub-attributes) """ if self.chain_data is None: - self.chain_data = ChainData(regions={}, publishedLocations=[], sensors={}, persist={}) + self.chain_data = ChainData(regions={}, publishedLocations=[], persist={}) for attribute in persist_attributes: attr, sub_attrs = (list(attribute.items())[0] if isinstance(attribute, dict) else (attribute, None)) if attr in info: @@ -189,7 +192,7 @@ def setPersistentAttributes(self, info, persist_attributes): def setGID(self, gid): if self.chain_data is None: - self.chain_data = ChainData(regions={}, publishedLocations=[], sensors={}, persist={}) + self.chain_data = ChainData(regions={}, publishedLocations=[], persist={}) self.gid = gid self.first_seen = self.when return diff --git a/controller/src/controller/scene.py b/controller/src/controller/scene.py index 104c091f8..be1bd0df4 100644 --- a/controller/src/controller/scene.py +++ b/controller/src/controller/scene.py @@ -7,6 +7,7 @@ import numpy as np import robot_vision as rv from controller.controller_mode import ControllerMode +from controller.moving_object import ChainData from scene_common import log from scene_common.camera import Camera from scene_common.earth_lla import convertLLAToECEF, calculateTRSLocal2LLAFromSurfacePoints @@ -282,47 +283,121 @@ def _finishProcessing(self, detectionType, when, objects, already_tracked_object self._updateEvents(detectionType, when) return - def _updateSensorObjects(self, name, sensor, objects=None): - if not hasattr(sensor, 'value'): - return - - if objects is None: - objects = itertools.chain.from_iterable(sensor.objects.values()) - - for obj in objects: - if name not in 
obj.chain_data.sensors: - obj.chain_data.sensors[name] = [] - ts_str = get_iso_time(sensor.lastWhen) - existing = [x[0] for x in obj.chain_data.sensors[name]] - if ts_str not in existing: - obj.chain_data.sensors[name].append((ts_str, sensor.value)) - return - def processSensorData(self, jdata, when): sensor_id = jdata['id'] sensor = None if sensor_id in self.sensors: sensor = self.sensors[sensor_id] + log.debug("SENSOR DATA RECEIVED", sensor_id, jdata.get('value'), "type:", getattr(sensor, 'singleton_type', 'NONE')) else: log.error("Unknown sensor", sensor_id, self.sensors) return False if hasattr(sensor, 'lastWhen') and sensor.lastWhen is not None and when <= sensor.lastWhen: - log.info("DISCARDING PAST DATA", sensor_id, when) + log.debug("DISCARDING PAST DATA", sensor_id, when) return True - self.events = {} + # Initialize events dict if needed, but don't clear existing events + if not hasattr(self, 'events') or self.events is None: + self.events = {} + old_value = getattr(sensor, 'value', None) cur_value = jdata['value'] - self.events['value'] = [(sensor_id, sensor)] + # Don't create 'value' event - sensor data is included in object entry/exit events sensor.value = cur_value sensor.lastValue = old_value sensor.lastWhen = when - self._updateSensorObjects(sensor_id, sensor) + + timestamp_str = get_iso_time(when) + timestamp_epoch = when + + # Skip processing if no tracker (analytics-only mode) + if self.tracker is None: + return True + + # Find all objects currently in the sensor region across ALL detection types + # Optimization: check if scene-wide to avoid redundant isPointWithin calls + # TODO: Further optimize for scenes with many objects: spatial indexing (R-tree), + # bounding box pre-filtering, or tracking only recently-moved objects + is_scene_wide = sensor.area == Region.REGION_SCENE + objects_in_sensor = [] + for detectionType in self.tracker.trackers.keys(): + for obj in self.tracker.currentObjects(detectionType): + # When tracking is disabled, do not 
rely on obj.frameCount being initialized + if (not self.use_tracker or obj.frameCount > 3) and (is_scene_wide or sensor.isPointWithin(obj.sceneLoc)): + objects_in_sensor.append(obj) + # Ensure active_sensors is updated (handles scene-wide sensors or objects existing before sensor creation) + obj.chain_data.active_sensors.add(sensor_id) + + log.debug("SENSOR OBJECTS FOUND", sensor_id, len(objects_in_sensor), "type:", sensor.singleton_type) + + # Update sensor data on objects based on sensor type + if objects_in_sensor: + if sensor.singleton_type == "environmental": + # Environmental sensors: track timestamped readings with value-change detection + # TODO: Implement bounded cache for readings arrays to prevent memory exhaustion + # in long-running scenarios. Consider: max size with FIFO eviction, time-based + # cleanup, or periodic consolidation. Currently, unchanged values update timestamps + # instead of appending, but frequent value changes can still cause unbounded growth. + if not self._updateEnvironmentalSensorReadings(objects_in_sensor, sensor_id, cur_value, timestamp_str): + return False + + elif sensor.singleton_type == "attribute": + # Event history tracking - append discrete events (or update timestamp if value unchanged) + # TODO: Implement bounded cache for attr_sensor_events to prevent memory exhaustion + # in long-running scenarios with frequent attribute changes. 
+ self._updateAttributeSensorEvents(objects_in_sensor, sensor_id, cur_value, timestamp_str) return True + def _updateEnvironmentalSensorReadings(self, objects_in_sensor, sensor_id, cur_value, timestamp_str): + try: + cur_value_float = float(cur_value) + except (ValueError, TypeError): + log.error("Invalid sensor value", sensor_id, cur_value) + return False + + for obj in objects_in_sensor: + with obj.chain_data._lock: + if sensor_id in obj.chain_data.env_sensor_state: + state = obj.chain_data.env_sensor_state[sensor_id] + + # Update readings array: append if value changed, update timestamp if same + if 'readings' not in state: + state['readings'] = [] + if state['readings'] and state['readings'][-1][1] == cur_value_float: + # Value unchanged - update timestamp + state['readings'][-1] = (timestamp_str, cur_value_float) + else: + # Value changed - append new reading + state['readings'].append((timestamp_str, cur_value_float)) + else: + # First reading - initialize readings array + obj.chain_data.env_sensor_state[sensor_id] = { + 'readings': [(timestamp_str, cur_value_float)] + } + + return True + + def _updateAttributeSensorEvents(self, objects_in_sensor, sensor_id, cur_value, timestamp_str): + # Convert to string for consistent type comparison (attributes can be non-numeric) + cur_value_str = str(cur_value) + for obj in objects_in_sensor: + with obj.chain_data._lock: + if sensor_id not in obj.chain_data.attr_sensor_events: + obj.chain_data.attr_sensor_events[sensor_id] = [] + + events = obj.chain_data.attr_sensor_events[sensor_id] + if events and events[-1][1] == cur_value_str: + # Value unchanged - update timestamp of last event instead of appending + events[-1] = (timestamp_str, cur_value_str) + else: + # Value changed - append new event + events.append((timestamp_str, cur_value_str)) + + return + def updateTrackedObjects(self, detection_type, objects): """ Update the cache of tracked objects from MQTT. 
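The environmental-sensor update above appends a reading only when the value changes and otherwise refreshes the timestamp of the last reading. A standalone sketch of that dedupe policy (the helper is illustrative, not repository code):

```python
def update_readings(readings, timestamp, value):
    """Append (timestamp, value) if the value changed; otherwise refresh
    the timestamp of the last reading, mirroring the dedupe policy of
    _updateEnvironmentalSensorReadings."""
    value = float(value)
    if readings and readings[-1][1] == value:
        readings[-1] = (timestamp, value)    # unchanged value: bump timestamp
    else:
        readings.append((timestamp, value))  # changed value: new reading
    return readings

r = []
update_readings(r, "t0", 21.5)
update_readings(r, "t1", 21.5)  # same value: t0 entry replaced by t1
update_readings(r, "t2", 22.0)  # new value: appended
# r is now [("t1", 21.5), ("t2", 22.0)]
```

This keeps the readings list from growing while a value is stable, although, as the TODO in the diff notes, frequently changing values can still grow it without a bound.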
@@ -435,10 +510,26 @@ def _deserializeTrackedObjects(self, serialized_objects): else: obj._camera_bounds = None - obj.chain_data = SimpleNamespace() - obj.chain_data.regions = obj_data.get('regions', {}) - obj.chain_data.sensors = obj_data.get('sensors', {}) - obj.chain_data.persist = obj_data.get('persistent_data', {}) + # Deserialize chain_data: convert sensors into env_sensor_state and attr_sensor_events + obj.chain_data = ChainData( + regions=obj_data.get('regions', {}), + publishedLocations=[], + persist=obj_data.get('persistent_data', {}), + ) + + # Convert serialized sensors into env_sensor_state and attr_sensor_events + sensors_data = obj_data.get('sensors', {}) + for sensor_id, sensor_info in sensors_data.items(): + values = sensor_info.get('values', []) + if not values: + continue + + is_environmental = self._isEnvironmentalSensor(sensor_id, values) + + if is_environmental: + obj.chain_data.env_sensor_state[sensor_id] = {'readings': values} + else: + obj.chain_data.attr_sensor_events[sensor_id] = values obj_id = obj.gid if obj_id in self.object_history_cache: @@ -455,8 +546,17 @@ def _deserializeTrackedObjects(self, serialized_objects): return objects + def _isEnvironmentalSensor(self, sensor_id, values): + sensor = self.sensors.get(sensor_id) + if sensor is not None and getattr(sensor, 'singleton_type', None) is not None: + return sensor.singleton_type == "environmental" + + return True + def _updateEvents(self, detectionType, now, curObjects=None): - self.events = {} + # Preserve existing events (e.g., sensor 'value' events) instead of clearing + if not hasattr(self, 'events') or self.events is None: + self.events = {} now_str = get_iso_time(now) if curObjects is None: if ControllerMode.isAnalyticsOnly(): @@ -517,18 +617,48 @@ def _updateRegionEvents(self, detectionType, regions, now, now_str, curObjects): new = cur - prev old = prev - cur newObjects = [x for x in objects if x.gid in new] + + # Entry initialization for new objects for obj in newObjects: 
if key not in obj.chain_data.regions: obj.chain_data.regions[key] = {'entered': now_str} updated.add(key) - # For sensors add the current sensor value to any new objects - if hasattr(region, 'value') and region.singleton_type=="environmental": + # For all singleton sensors, handle entry tracking + if region.singleton_type is not None: + # Mark sensor as active for new objects for obj in newObjects: - obj.chain_data.sensors[key] = [] - self._updateSensorObjects(key, region, newObjects) - - if (len(new) or len(old)) and now - region.when > DEBOUNCE_DELAY: + obj.chain_data.active_sensors.add(key) + + # Initialize sensor state based on type + if region.singleton_type == "environmental": + + # For environmental sensors, initialize state with current value if available + with obj.chain_data._lock: + if (hasattr(region, 'value') and + hasattr(region, 'lastWhen') and + region.value is not None and + region.lastWhen is not None): + # Sensor has cached value - initialize with it + ts_str = get_iso_time(region.lastWhen) + obj.chain_data.env_sensor_state[key] = { + 'readings': [(ts_str, float(region.value))] + } + else: + # No cached value yet + obj.chain_data.env_sensor_state[key] = { + 'readings': [] + } + + elif region.singleton_type == "attribute": + # Attribute sensors only tag objects present when MQTT arrives + # Do NOT initialize with cached values (those belong to other objects) + with obj.chain_data._lock: + if key not in obj.chain_data.attr_sensor_events: + obj.chain_data.attr_sensor_events[key] = [] + + emit_region_event = (len(new) or len(old)) and now - region.when > DEBOUNCE_DELAY + if emit_region_event: log.debug("REGION EVENT", key, now_str, regionObjects, len(objects)) entered = [] for obj in objects: @@ -545,7 +675,7 @@ def _updateRegionEvents(self, detectionType, regions, now, now_str, curObjects): entered = get_epoch_time(obj.chain_data.regions[key]['entered']) dwell = now - entered exited.append((obj, dwell)) - obj.chain_data.regions.pop(key, None) + if 
not hasattr(region, 'exited'): region.exited = {} region.exited[detectionType] = exited @@ -561,6 +691,24 @@ def _updateRegionEvents(self, detectionType, regions, now, now_str, curObjects): self.events['count'] = [] self.events['count'].append((key, region)) + # Clean up exited objects only after an exit event can be emitted, + # so entered timestamps remain available for dwell-time calculation. + for obj in regionObjects: + if obj.gid in old: + with obj.chain_data._lock: + obj.chain_data.regions.pop(key, None) + + # Clean up sensor tracking on exit + if region.singleton_type is not None: + obj.chain_data.active_sensors.discard(key) + + # Environmental sensors: clear state on exit (data doesn't persist) + if region.singleton_type == "environmental": + obj.chain_data.env_sensor_state.pop(key, None) + + # Attribute sensors: keep event history (data persists after exit) + # attr_sensor_events[key] intentionally not removed + return updated def isIntersecting(self, obj, region): @@ -645,18 +793,54 @@ def updateCameras(self, newCameras): return def _updateRegions(self, existingRegions, newRegions): + # Sentinel value to distinguish "attribute doesn't exist" from "attribute is None" + _NOTSET = object() + old = set(existingRegions.keys()) new = set([x['uid'] for x in newRegions]) for regionData in newRegions: region_uuid = regionData['uid'] region_name = regionData['name'] if region_uuid in existingRegions: - existingRegions[region_uuid].updatePoints(regionData) - existingRegions[region_uuid].updateSingletonType(regionData) - existingRegions[region_uuid].updateVolumetricInfo(regionData) - existingRegions[region_uuid].name = region_name + region = existingRegions[region_uuid] + + # Preserve sensor cache, event state, and region state before geometry updates + # Use sentinel to distinguish missing attributes from None values + cached_value = getattr(region, 'value', _NOTSET) + cached_last_value = getattr(region, 'lastValue', _NOTSET) + cached_last_when = getattr(region, 
'lastWhen', _NOTSET) + cached_entered = getattr(region, 'entered', _NOTSET) + cached_exited = getattr(region, 'exited', _NOTSET) + cached_objects = getattr(region, 'objects', _NOTSET) + cached_when = getattr(region, 'when', _NOTSET) + + region.updatePoints(regionData) + region.updateSingletonType(regionData) + region.updateVolumetricInfo(regionData) + region.name = region_name + + # Restore sensor cache, event state, and region state after geometry updates + # Only restore if attribute existed before (even if value was None) + if cached_value is not _NOTSET: + region.value = cached_value + if cached_last_value is not _NOTSET: + region.lastValue = cached_last_value + if cached_last_when is not _NOTSET: + region.lastWhen = cached_last_when + if cached_entered is not _NOTSET: + region.entered = cached_entered + if cached_exited is not _NOTSET: + region.exited = cached_exited + if cached_objects is not _NOTSET: + region.objects = cached_objects + if cached_when is not _NOTSET: + region.when = cached_when else: - existingRegions[region_uuid] = Region(region_uuid, region_name, regionData) + region = Region(region_uuid, region_name, regionData) + existingRegions[region_uuid] = region + # Log sensor configuration for debugging + if hasattr(region, 'singleton_type') and region.singleton_type: + log.debug("SENSOR LOADED", region_name, "area:", region.area, "singleton_type:", region.singleton_type) deleted = old - new for region_uuid in deleted: existingRegions.pop(region_uuid) diff --git a/controller/src/controller/scene_controller.py b/controller/src/controller/scene_controller.py index 7e1f1e4fa..da3231c59 100644 --- a/controller/src/controller/scene_controller.py +++ b/controller/src/controller/scene_controller.py @@ -181,7 +181,8 @@ def shouldPublish(self, last, now, max_delay): return last is None or now - last >= max_delay def publishSceneDetections(self, scene, objects, otype, jdata): - jdata['objects'] = buildDetectionsList(objects, scene, self.visibility_topic == 
'unregulated') + # Full rate output (30fps): exclude sensor data for performance + jdata['objects'] = buildDetectionsList(objects, scene, self.visibility_topic == 'unregulated', include_sensors=False) olen = len(jdata['objects']) cid = scene.name + "/" + otype if olen > 0 or cid not in scene.lastPubCount or scene.lastPubCount[cid] > 0: @@ -192,14 +193,22 @@ def publishSceneDetections(self, scene, objects, otype, jdata): new_topic = PubSub.formatTopic(PubSub.DATA_SCENE, scene_id=scene.uid, thing_type=otype) self.pubsub.publish(new_topic, jstr) - self.publishExternalDetections(scene, otype, jstr) + # External detections need sensor data, so pass objects to rebuild + self.publishExternalDetections(scene, otype, objects, jdata) scene.lastPubCount[cid] = olen return - def publishExternalDetections(self, scene, otype, jstr): + def publishExternalDetections(self, scene, otype, objects, jdata_base): + # External rate output (0.5fps): include sensor data now = get_epoch_time() if self.shouldPublish(scene.last_published_detection[otype], now, 1/scene.external_update_rate): scene.last_published_detection[otype] = get_epoch_time() + + # Rebuild detections list with sensor data included + jdata = jdata_base.copy() + jdata['objects'] = buildDetectionsList(objects, scene, self.visibility_topic == 'unregulated', include_sensors=True) + jstr = orjson.dumps(jdata, option=orjson.OPT_SERIALIZE_NUMPY) + scene_hierarchy_topic = PubSub.formatTopic(PubSub.DATA_EXTERNAL, scene_id=scene.uid, thing_type=otype) self.pubsub.publish(scene_hierarchy_topic, jstr) @@ -216,9 +225,8 @@ def publishRegulatedDetections(self, scene_obj, msg_objects, otype, jdata, camer 'last': None } scene = self.regulate_cache[scene_uid] - - scene['objects'][otype] = buildDetectionsList(msg_objects, scene_obj, self.visibility_topic == 'unregulated') - + # Regulated rate output (5fps): include sensor data + scene['objects'][otype] = buildDetectionsList(msg_objects, scene_obj, self.visibility_topic == 'unregulated', 
include_sensors=True) if camera_id is not None: scene['rate'][camera_id] = jdata.get('rate', None) elif ControllerMode.isAnalyticsOnly() and 'rate' in jdata: @@ -273,7 +281,8 @@ def publishRegionDetections(self, scene, objects, otype, jdata): for obj in objects: if rname in obj.chain_data.regions: robjects.append(obj) - jdata['objects'] = buildDetectionsList(robjects, scene) + # Region-specific detections: include sensor data + jdata['objects'] = buildDetectionsList(robjects, scene, False, include_sensors=True) olen = len(jdata['objects']) rid = scene.name + "/" + rname + "/" + otype if olen > 0 or rid not in scene.lastPubCount or scene.lastPubCount[rid] > 0: @@ -307,7 +316,7 @@ def publishEvents(self, scene, ts_str): etype + '_name': region.name, } detections_dict, num_objects = self._buildAllRegionObjsList(scene, region, event_data) - self._buildEnteredObjsList(region, event_data, detections_dict) + self._buildEnteredObjsList(scene, region, event_data, detections_dict) self._buildExitedObjsList(scene, region, event_data) log.debug("EVENT DATA", event_data) @@ -320,6 +329,10 @@ def publishEvents(self, scene, ts_str): scene_id=scene.uid, region_id=region.uuid) self.pubsub.publish(event_topic, orjson.dumps(event_data, option=orjson.OPT_SERIALIZE_NUMPY)) + # Clear objects and count events after publishing (but preserve 'value' events for sensors) + scene.events.pop('objects', None) + scene.events.pop('count', None) + self._clearSensorValuesOnExit(scene) return @@ -333,17 +346,26 @@ def _buildAllRegionObjsList(self, scene, region, event_data): num_objects += counts[otype] all_objects += objects event_data['counts'] = counts - detections_dict = buildDetectionsDict(all_objects, scene) + detections_dict = buildDetectionsDict(all_objects, scene, include_sensors=True) event_data['objects'] = list(detections_dict.values()) return detections_dict, num_objects - def _buildEnteredObjsList(self, region, event_data, detections_dict): + def _buildEnteredObjsList(self, scene, 
region, event_data, detections_dict): entered = getattr(region, 'entered', {}) event_data['entered'] = [] + missing_objs = [] for entered_list in entered.values(): for item in entered_list: - entered_obj = detections_dict[item.gid] - event_data['entered'].extend([entered_obj]) + # For sensor value events, objects may not be in detections_dict + if item.gid in detections_dict: + event_data['entered'].append(detections_dict[item.gid]) + else: + missing_objs.append(item) + + # Build any objects not in detections_dict (e.g., from sensor events) + if missing_objs: + entered_objs = buildDetectionsList(missing_objs, scene, False, include_sensors=True) + event_data['entered'].extend(entered_objs) def _buildExitedObjsList(self, scene, region, event_data): exited = getattr(region, 'exited', {}) @@ -354,21 +376,21 @@ def _buildExitedObjsList(self, scene, region, event_data): for exited_obj, dwell in exited_list: exited_dict[exited_obj.gid] = dwell exited_objs.extend([exited_obj]) - exited_objs = buildDetectionsList(exited_objs, scene) + # Exit events: include sensor data (timestamped readings and attribute events) + exited_objs = buildDetectionsList(exited_objs, scene, False, include_sensors=True) exited_data = [{'object': exited_obj, 'dwell': exited_dict[exited_obj['id']]} for exited_obj in exited_objs] event_data['exited'].extend(exited_data) return def _clearSensorValuesOnExit(self, scene): - """Clears the environmental sensor values accumulated by the exiting object""" + """ + Clears region entered/exited arrays after events have been published. + Note: Sensor state cleanup (readings arrays, etc.) is handled + in _updateRegionEvents before this method is called. This method only clears + the event arrays to prevent stale data from being published in subsequent frames. 
+ """ for event_type in scene.events: for region_name, region in scene.events[event_type]: - if hasattr(region, 'exited'): - for detectionType in region.exited: - for exit_data in region.exited[detectionType]: - obj = exit_data[0] - if region.singleton_type == "environmental": - obj.chain_data.sensors.pop(region_name, None) region.exited = {} region.entered = {} return diff --git a/manager/src/django/models.py b/manager/src/django/models.py index f0f4bb3bd..14d100a61 100644 --- a/manager/src/django/models.py +++ b/manager/src/django/models.py @@ -430,6 +430,10 @@ def createSceneScapeRegion(self, existing, region): if hasattr(region, 'radius') and region.radius is not None: info['radius'] = region.radius + # Pass singleton_type from database to runtime for singleton sensors + if hasattr(region, 'singleton_type'): + info['singleton_type'] = region.singleton_type + uiPoints = region.points.all() if len(uiPoints): info['points'] = [(pt.x, pt.y) for pt in uiPoints] diff --git a/scene_common/src/scene_common/geometry.py b/scene_common/src/scene_common/geometry.py index 4988c79cc..e45fd19a5 100644 --- a/scene_common/src/scene_common/geometry.py +++ b/scene_common/src/scene_common/geometry.py @@ -37,11 +37,30 @@ def __init__(self, uuid, name, info): return def updatePoints(self, info): - if (not self.hasPointsArray(info) and 'center' in info): + # Set center if provided (needed for circles and other centered regions) + if 'center' in info and info['center'] is not None: pt = info['center'] self.center = pt if isinstance(pt, Point) else Point(pt) - if (self.hasPointsArray(info)) or ('area' in info and info['area'] == "poly"): + # Check explicit area type first - respect explicit configuration over inferred types + if 'area' in info and info['area'] == "circle": + if not hasattr(self, 'center') or self.center is None: + raise ValueError(f"Circle region '{self.name}' has invalid center value") + if 'radius' not in info or info['radius'] is None: + raise ValueError(f"Circle 
region '{self.name}' requires a positive 'radius' value") + try: + radius = float(info['radius']) + except (TypeError, ValueError): + raise ValueError(f"Circle region '{self.name}' requires a numeric 'radius' value") + if radius <= 0: + raise ValueError(f"Circle region '{self.name}' requires a positive 'radius' value") + self.area = Region.REGION_CIRCLE + self.radius = radius + self.boundingBox = Rectangle(self.center - (self.radius, self.radius), + self.center + (self.radius, self.radius)) + elif 'area' in info and info['area'] == "scene": + self.area = Region.REGION_SCENE + elif (self.hasPointsArray(info)) or ('area' in info and info['area'] == "poly"): self.area = Region.REGION_POLY self.points = [] if not isarray(info): @@ -52,14 +71,6 @@ def updatePoints(self, info): self.points_list = [x.as2Dxy.asCartesianVector for x in self.points] if len(self.points_list) > 2: self.polygon = Polygon(self.points_list) - elif 'area' in info and info['area'] == "circle": - self.area = Region.REGION_CIRCLE - self.radius = info['radius'] - # Rectangle is created using Point, Point constructor. 
- self.boundingBox = Rectangle(self.center - (self.radius, self.radius), - self.center + (self.radius, self.radius)) - elif 'area' in info and info['area'] == "scene": - self.area = Region.REGION_SCENE else: raise ValueError("Unrecognized point data", info) return diff --git a/tests/Makefile b/tests/Makefile index e6565cd33..525f88874 100644 --- a/tests/Makefile +++ b/tests/Makefile @@ -238,23 +238,8 @@ unit-tests: $(MAKE) -Otarget -j $(NPROCS) _$@ SECRETSDIR=$(PWD)/manager/secrets SUPASS=$(SUPASS) -k _unit-tests: \ - account-security-unit \ - autocamcalib-unit \ - cam-unit \ - geometry-unit \ - geospatial-unit \ - mapping-unit \ - markerless-unit \ - robot-vision-unit \ - scene-unit \ - scenescape-unit \ - schema-unit \ - singleton-sensor-unit \ - timestamp-unit \ - transform-unit \ - uuid-manager-unit \ - vdms-adapter-unit \ - views-unit \ + django-integration-unit \ + logic-unit-tests \ include Makefile.sscape Makefile.functional Makefile.perf \ Makefile.reports Makefile.user_interface Makefile.metric diff --git a/tests/Makefile.functional b/tests/Makefile.functional index 35abd351a..b4c95ca4f 100644 --- a/tests/Makefile.functional +++ b/tests/Makefile.functional @@ -305,11 +305,14 @@ scene-import: # NEX-T13051 $(eval SECRETSDIR := $(OLDSECRETSDIR)) sensors-send-events: # NEX-T10456 + $(eval IMAGE_OLD := $(BASE_IMAGE)) + $(eval override BASE_IMAGE := $(IMAGE)-controller-test) $(eval OLDSECRETSDIR := $(SECRETSDIR)) $(eval SECRETSDIR := $(PWD)/manager/secrets) $(eval COMPOSE_FILES := $(COMPOSE)/dlstreamer/broker.yml:$(COMPOSE)/ntp.yml:$(COMPOSE)/pgserver.yml:$(COMPOSE)/scene.yml:$(COMPOSE)/web.yml) $(call common-recipe, $(COMPOSE_FILES), tests/functional/tc_sensors_send_mqtt_messages.py, 'pgserver web scene', true, /run/secrets/controller.auth) $(eval SECRETSDIR := $(OLDSECRETSDIR)) + $(eval override BASE_IMAGE := $(IMAGE_OLD)) vdms-similarity-search: # NEX-T10516 $(eval override IMAGE_OLD := $(BASE_IMAGE)) diff --git a/tests/Makefile.sscape 
b/tests/Makefile.sscape index 10fecbc62..a56b7aff5 100644 --- a/tests/Makefile.sscape +++ b/tests/Makefile.sscape @@ -46,6 +46,34 @@ account-security-unit: cam-unit: $(call unit-recipe, cam, $(IMAGE)-manager-test) +django-integration-unit: + $(MAKE) -Otarget -j $(NPROCS) _$@ SECRETSDIR=$(PWD)/manager/secrets SUPASS=$(SUPASS) -k + +_django-integration-unit: \ + account-security-unit \ + cam-unit \ + scene-django-unit \ + singleton-sensor-unit \ + views-unit \ + +logic-unit-tests: + $(MAKE) -Otarget -j $(NPROCS) _$@ SECRETSDIR=$(PWD)/manager/secrets SUPASS=$(SUPASS) -k + +_logic-unit-tests: \ + autocamcalib-unit \ + geometry-unit \ + geospatial-unit \ + mapping-unit \ + markerless-unit \ + robot-vision-unit \ + scene-unit \ + scenescape-unit \ + schema-unit \ + timestamp-unit \ + transform-unit \ + uuid-manager-unit \ + vdms-adapter-unit \ + geometry-unit: # NEX-T10454 $(call unit-recipe, geometry, $(IMAGE)-manager-test) @@ -75,6 +103,9 @@ mesh-util-unit: robot-vision-unit: $(call unit-recipe, robot_vision, $(IMAGE)-controller-test) +scene-django-unit: + $(call unit-recipe, scene, $(IMAGE)-manager-test) + scene-unit: # NEX-T10451 $(call unit-recipe, scene_pytest, $(IMAGE)-controller-test) @@ -85,7 +116,7 @@ vdms-adapter-unit: # NEX-T19885 $(call unit-recipe, vdms_adapter, $(IMAGE)-controller-test) scenescape-unit: # NEX-T10450 - $(call unit-recipe, scenescape, $(IMAGE)-manager-test) + $(call unit-recipe, scenescape, $(IMAGE)-controller-test) schema-unit: # NEX-T10458 $(call unit-recipe, schema, $(IMAGE)-manager-test) diff --git a/tests/README.md b/tests/README.md index 321b4c4c1..9a7759420 100644 --- a/tests/README.md +++ b/tests/README.md @@ -32,6 +32,22 @@ make -C tests mqtt-roi For a complete and up-to-date list of all test targets and their definitions, see the [Tests Makefile](tests/Makefile) +## Unit test taxonomy + +The repository keeps two categories under the `unit-tests` umbrella: + +- Pure unit tests: fast logic-focused tests that typically avoid Django 
request/ORM integration. + - Umbrella target: `make -C tests logic-unit-tests` + - Example leaf target: `make -C tests scene-unit` +- Django integration unit tests: Django `TestCase`/test-client/ORM based backend tests grouped under a dedicated umbrella. + - Umbrella target: `make -C tests django-integration-unit` + - Included targets: `account-security-unit`, `cam-unit`, `scene-django-unit`, `singleton-sensor-unit`, `views-unit` + +Notes: + +- `make -C tests unit-tests` still runs both categories. +- The Django scene CRUD tests in `tests/sscape_tests/scene/` are run by `scene-django-unit`. + ## Running tests on kubernetes Refer to [Running tests on kubernetes](kubernetes/README.md) diff --git a/tests/functional/tc_sensors_send_mqtt_messages.py b/tests/functional/tc_sensors_send_mqtt_messages.py index ffbc8040f..8bc5054c6 100755 --- a/tests/functional/tc_sensors_send_mqtt_messages.py +++ b/tests/functional/tc_sensors_send_mqtt_messages.py @@ -24,13 +24,14 @@ TEST_NAME = "NEX-T10456" WALKING_SPEED = 1.2 # meters per second FRAMES_PER_SECOND = 10 -THING_TYPE = "person" +THING_TYPES = ["person", "chair", "table", "couch"] MAX_CONTROLLER_WAIT = 30 # seconds -class WillOurShipGo(FunctionalTest): +class SensorMqttMessageFlowTest(FunctionalTest): def __init__(self, testName, request, recordXMLAttribute): super().__init__(testName, request, recordXMLAttribute) self.sceneUID = self.params['scene_id'] + self.cameraId = "camera1" self.rest = RESTClient(self.params['resturl'], rootcert=self.params['rootcert']) assert self.rest.authenticate(self.params['user'], self.params['password']) @@ -39,60 +40,115 @@ def __init__(self, testName, request, recordXMLAttribute): self.params['broker_url']) self.eventTopic = PubSub.formatTopic(PubSub.EVENT, region_type="region", event_type="+", - scene_id="+", region_id="+") + scene_id=self.sceneUID, region_id="+") + self.sceneTopic = PubSub.formatTopic(PubSub.DATA_SCENE, scene_id=self.sceneUID, thing_type="+") + self.regulatedTopic = 
PubSub.formatTopic(PubSub.DATA_REGULATED, scene_id=self.sceneUID) + self.externalTopic = PubSub.formatTopic(PubSub.DATA_EXTERNAL, scene_id=self.sceneUID, thing_type="+") + self.regionEvents = {} + self.sceneMessages = [] + self.regulatedMessages = [] + self.externalMessages = [] + self.sensorPublishTimes = {} + self.geoChangeUpdateTimes = [] + self.cleanupUpdateTimes = [] + self.pubsub.onConnect = self.pubsubConnected self.pubsub.addCallback(self.eventTopic, self.eventReceived) + self.pubsub.addCallback(self.sceneTopic, self.sceneReceived) + self.pubsub.addCallback(self.regulatedTopic, self.regulatedReceived) + self.pubsub.addCallback(self.externalTopic, self.externalReceived) self.pubsub.connect() self.pubsub.loopStart() return def pubsubConnected(self, client, userdata, flags, rc): self.pubsub.subscribe(self.eventTopic) + self.pubsub.subscribe(self.sceneTopic) + self.pubsub.subscribe(self.regulatedTopic) + self.pubsub.subscribe(self.externalTopic) return def eventReceived(self, pahoClient, userdata, message): topic = PubSub.parseTopic(message.topic) - self.sensors[topic['region_id']]['received'] = get_epoch_time() + region_id = topic['region_id'] + if region_id not in self.sensors: + return + + payload = json.loads(message.payload.decode("utf-8")) + self.sensors[region_id]['received'] = get_epoch_time() + self.regionEvents.setdefault(region_id, []).append(payload) + return + + def sceneReceived(self, pahoClient, userdata, message): + payload = json.loads(message.payload.decode("utf-8")) + if 'objects' in payload and payload['objects']: + self.sceneMessages.append(payload) + return + + def regulatedReceived(self, pahoClient, userdata, message): + payload = json.loads(message.payload.decode("utf-8")) + if 'objects' in payload and payload['objects']: + self.regulatedMessages.append(payload) + return + + def externalReceived(self, pahoClient, userdata, message): + payload = json.loads(message.payload.decode("utf-8")) + if 'objects' in payload and payload['objects']: 
+ self.externalMessages.append(payload) return def prepareScene(self): res = self.rest.getScenes({'id': self.sceneUID}) assert res and res['count'] >= 1, (res.statusCode, res.errors) - parent_id = res['results'][0]['uid'] - self.childName = "child" - res = self.rest.createScene({'name': self.childName, 'parent': parent_id}) - self.childId = res['uid'] - assert res, (res.statusCode, res.errors) self.sensors = { - 'scene_sensor': { + 'scene_env_sensor': { 'area': "scene", + 'singleton_type': "environmental", }, - 'circle_sensor': { + 'circle_env_sensor': { 'area': "circle", - 'radius': 1, + 'radius': 100, 'center': (0, 0), + 'singleton_type': "environmental", }, - 'poly_sensor': { + 'poly_env_sensor': { 'area': "poly", - 'points': ((-0.5, 0.5), (0.5, 0.5), (0.5, -0.5), (-0.5, -0.5)), - } + 'points': ((-100, 100), (100, 100), (100, -100), (-100, -100)), + 'singleton_type': "environmental", + }, + 'scene_attr_sensor': { + 'area': "scene", + 'singleton_type': "attribute", + }, + 'geo_change_sensor': { + 'area': "poly", + 'points': ((-100, 100), (100, 100), (100, -100), (-100, -100)), + 'singleton_type': "environmental", + }, + 'cleanup_env_sensor': { + 'area': "scene", + 'singleton_type': "environmental", + }, } for name in self.sensors: sensorConfig = { 'name': name, - 'scene': parent_id, + 'scene': self.sceneUID, } sensorConfig.update(self.sensors[name]) res = self.rest.createSensor(sensorConfig) assert res, (res.statusCode, res.errors) + self.sensors[name]['uid'] = res['uid'] return def plotCourse(self): - startPosition = (-2, -2, 0) - endPosition = (2, 2, 0) + # Keep y in a normalized image range so camera detections remain valid + # while still moving enough to exercise region/sensor transitions. 
+ startPosition = (-3.0, 0.1, 0) + endPosition = (3.0, 0.9, 0) stepDistance = WALKING_SPEED / FRAMES_PER_SECOND # FIXME - should probably use whichever dimension results in the @@ -103,23 +159,380 @@ def plotCourse(self): course = np.dstack(course) return course[0] - def createDetection(self, begin, idx, positionNow, positionLast): - velocity = positionNow - positionLast + def createDetection(self, positionNow): detection = { - 'id': self.childName, - 'timestamp': get_iso_time(begin + idx / FRAMES_PER_SECOND), - 'objects': [ - { - 'id': 1, - 'category': THING_TYPE, - 'translation': positionNow.asNumpyCartesian.tolist(), - 'size': [1.5, 1.5, 1.5], - 'velocity': velocity.asNumpyCartesian.tolist(), - }, - ], + 'id': self.cameraId, + 'timestamp': get_iso_time(get_epoch_time()), + 'objects': { + 'person': [ + { + 'id': 1, + 'category': 'person', + 'bounding_box': { + 'x': 0.56, + 'y': positionNow.y, + 'width': 0.24, + 'height': 0.49, + }, + }, + ], + 'chair': [ + { + 'id': 2, + 'category': 'chair', + 'bounding_box': { + 'x': 0.68, + 'y': positionNow.y, + 'width': 0.24, + 'height': 0.49, + }, + }, + ], + 'table': [ + { + 'id': 3, + 'category': 'table', + 'bounding_box': { + 'x': 0.44, + 'y': positionNow.y, + 'width': 0.30, + 'height': 0.20, + }, + }, + ], + 'couch': [ + { + 'id': 4, + 'category': 'couch', + 'bounding_box': { + 'x': 0.80, + 'y': positionNow.y, + 'width': 0.36, + 'height': 0.28, + }, + }, + ], + }, + 'rate': 9.8, } return detection + def _publish_scheduled_sensor_value(self, idx, now, schedule, sensor_name): + for publish_idx, value in schedule: + if idx == publish_idx: + self.pushSensorValue(sensor_name, value, now) + return + + def _apply_scheduled_geo_update(self, idx, sensor_uid, geometry_schedule, sensor_name=None): + for publish_idx, geo_update in geometry_schedule: + if idx == publish_idx: + res = self.rest.updateSensor(sensor_uid, geo_update) + assert res, f"Failed to update geo_change_sensor at frame {idx}: {res.statusCode}" + when = 
get_iso_time(get_epoch_time()) + if sensor_name == 'geo_change_sensor': + self.geoChangeUpdateTimes.append(when) + if sensor_name == 'cleanup_env_sensor': + self.cleanupUpdateTimes.append(when) + return + + def pushSensorValue(self, sensor_name, value, ts=None): + when = ts if ts is not None else get_epoch_time() + iso_when = get_iso_time(when) + message_dict = { + 'timestamp': iso_when, + 'id': sensor_name, + 'value': value + } + result = self.pubsub.publish( + PubSub.formatTopic(PubSub.DATA_SENSOR, sensor_id=sensor_name), + json.dumps(message_dict) + ) + assert result[0] == 0 + self.sensorPublishTimes.setdefault(sensor_name, {}).setdefault(value, []).append(iso_when) + return + + def _extract_obj_id(self, obj): + if 'id' in obj: + return obj['id'] + if 'object_id' in obj: + return obj['object_id'] + if 'track_id' in obj: + return obj['track_id'] + return None + + def _extract_entry_value_timestamp(self, entry): + if isinstance(entry, (list, tuple)) and len(entry) >= 2: + return entry[1], entry[0] + if isinstance(entry, dict): + val = entry.get('value', entry.get('event')) + return val, entry.get('timestamp') + return None, None + + def _timestamp_for_value(self, sensor_values, target_value): + for entry in sensor_values: + val, ts = self._extract_entry_value_timestamp(entry) + if val == target_value and ts is not None: + return str(ts) + return None + + def _assert_dedup_timestamp_refresh(self, sensor_name, sensor_values, target_value): + publish_times = self.sensorPublishTimes.get(sensor_name, {}).get(target_value, []) + assert len(publish_times) >= 2, ( + f"Need at least two publishes for dedup assertion on {sensor_name}:{target_value}", + publish_times + ) + reported_ts = self._timestamp_for_value(sensor_values, target_value) + assert reported_ts is not None, ( + f"Expected a timestamped value entry for {sensor_name}:{target_value}", + sensor_values + ) + assert str(reported_ts) >= str(publish_times[1]), ( + f"Expected dedup timestamp refresh for 
{sensor_name}:{target_value}", + reported_ts, + publish_times + ) + + def _sensor_objects_in_region_event(self, region_event): + objs = [] + objs.extend(region_event.get('objects', [])) + objs.extend(region_event.get('entered', [])) + for exited in region_event.get('exited', []): + if isinstance(exited, dict) and 'object' in exited: + objs.append(exited['object']) + return objs + + def _extract_values(self, sensor_values): + values = [] + for entry in sensor_values: + if isinstance(entry, (list, tuple)) and len(entry) >= 2: + values.append(entry[1]) + elif isinstance(entry, dict): + if 'value' in entry: + values.append(entry['value']) + elif 'event' in entry: + values.append(entry['event']) + return values + + def _extract_timestamps(self, sensor_values): + timestamps = [] + for entry in sensor_values: + if isinstance(entry, (list, tuple)) and len(entry) >= 2: + timestamps.append(entry[0]) + elif isinstance(entry, dict): + if 'timestamp' in entry: + timestamps.append(entry['timestamp']) + return timestamps + + def _assert_timestamp_accumulation(self, sensor_name, sensor_values): + timestamps = self._extract_timestamps(sensor_values) + assert timestamps, f"Expected timestamps in sensor values for {sensor_name}, got {sensor_values}" + assert len(timestamps) == len(sensor_values), ( + f"Missing timestamp entries for {sensor_name}", sensor_values + ) + if len(timestamps) > 1: + sortable = [str(ts) for ts in timestamps] + assert sortable == sorted(sortable), ( + f"Expected timestamped sensor values in chronological order for {sensor_name}", timestamps + ) + + def _extract_obj_type(self, obj): + if 'category' in obj: + return obj['category'] + if 'type' in obj: + return obj['type'] + return None + + def _verify_region_events(self): + event_sensor_names = ['circle_env_sensor', 'poly_env_sensor', 'cleanup_env_sensor'] + sensorsReceived = [name for name in event_sensor_names if 'received' in self.sensors[name]] + assert len(sensorsReceived) == len(event_sensor_names), ( + 
"Expected region sensor events", sensorsReceived, event_sensor_names + ) + + for sensor_name in event_sensor_names: + events = self.regionEvents.get(sensor_name, []) + assert events, f"No events received for sensor {sensor_name}" + + saw_sensor_payload = False + exited_ids = set() + for event in events: + for entered in event.get('entered', []): + entered_id = self._extract_obj_id(entered) + if entered_id is not None and entered_id in exited_ids: + exited_ids.remove(entered_id) + + # Cleanup check: once an object exits a region, subsequent object updates + # must not keep carrying that region's sensor values until it re-enters. + for obj in event.get('objects', []): + obj_id = self._extract_obj_id(obj) + if obj_id is None or obj_id not in exited_ids: + continue + sensors = obj.get('sensors', {}) + stale_values = sensors.get(sensor_name, {}).get('values', []) + assert not stale_values, ( + f"Expected cleaned sensor state after exit for {sensor_name} object {obj_id}", + stale_values + ) + + for exited in event.get('exited', []): + if isinstance(exited, dict): + exited_obj = exited.get('object', exited) + exited_id = self._extract_obj_id(exited_obj) + if exited_id is not None: + exited_ids.add(exited_id) + + for obj in self._sensor_objects_in_region_event(event): + sensors = obj.get('sensors', {}) + if sensor_name in sensors and sensors[sensor_name].get('values'): + saw_sensor_payload = True + assert saw_sensor_payload, f"No sensor payload found in events for sensor {sensor_name}" + return + + def _verify_scene_topic_excludes_sensors(self): + assert self.sceneMessages, "No scene topic messages received" + for payload in self.sceneMessages: + for obj in payload.get('objects', []): + assert 'sensors' not in obj, f"Scene topic unexpectedly included sensors: {obj}" + return + + def _find_sensor_values_in_messages(self, messages, sensor_name): + """Return list of sensor value-lists for sensor_name found across messages.""" + samples = [] + for payload in messages: + for 
obj in payload.get('objects', []): + sensors = obj.get('sensors', {}) + if sensor_name in sensors and sensors[sensor_name].get('values'): + samples.append(sensors[sensor_name]['values']) + return samples + + def _verify_sensor_payloads(self): + assert self.regulatedMessages, "No regulated messages received" + assert self.externalMessages, "No external messages received" + + seen_types = set() + scene_env_types = set() + scene_attr_types = set() + circle_env_samples = [] + poly_env_samples = [] + all_payloads = self.regulatedMessages + self.externalMessages + + scene_env_samples = [] + scene_attr_samples = [] + for payload in all_payloads: + for obj in payload.get('objects', []): + obj_type = self._extract_obj_type(obj) + if obj_type is not None: + seen_types.add(obj_type) + + sensors = obj.get('sensors', {}) + if 'scene_env_sensor' in sensors: + env_values = sensors['scene_env_sensor'].get('values', []) + if env_values: + scene_env_samples.append(env_values) + scene_env_types.add(obj_type) + if 'scene_attr_sensor' in sensors: + attr_values = sensors['scene_attr_sensor'].get('values', []) + if attr_values: + scene_attr_samples.append(attr_values) + scene_attr_types.add(obj_type) + if 'circle_env_sensor' in sensors and sensors['circle_env_sensor'].get('values'): + circle_env_samples.append(sensors['circle_env_sensor']['values']) + if 'poly_env_sensor' in sensors and sensors['poly_env_sensor'].get('values'): + poly_env_samples.append(sensors['poly_env_sensor']['values']) + + for thing_type in THING_TYPES: + assert thing_type in seen_types, f"Expected cross-detection sensor tagging for {thing_type}" + assert thing_type in scene_env_types, f"Expected scene environmental sensor tagging for {thing_type}" + assert thing_type in scene_attr_types, f"Expected scene attribute sensor tagging for {thing_type}" + + assert scene_env_samples, "Did not observe scene environmental sensor values" + assert scene_attr_samples, "Did not observe scene attribute sensor values" + assert 
circle_env_samples, "Did not observe circle environmental sensor values" + assert poly_env_samples, "Did not observe polygon environmental sensor values" + + env_values = self._extract_values(scene_env_samples[-1]) + attr_values = self._extract_values(scene_attr_samples[-1]) + circle_values = self._extract_values(circle_env_samples[-1]) + poly_values = self._extract_values(poly_env_samples[-1]) + + self._assert_timestamp_accumulation('scene_env_sensor', scene_env_samples[-1]) + self._assert_timestamp_accumulation('scene_attr_sensor', scene_attr_samples[-1]) + self._assert_timestamp_accumulation('circle_env_sensor', circle_env_samples[-1]) + self._assert_timestamp_accumulation('poly_env_sensor', poly_env_samples[-1]) + + assert env_values.count(20.5) == 1, f"Expected environmental dedup for 20.5, got {env_values}" + assert env_values[-1] == 21.0, f"Expected latest environmental value 21.0, got {env_values}" + assert attr_values.count("badge-A") == 1, f"Expected attribute dedup for badge-A, got {attr_values}" + assert attr_values[-1] == "badge-B", f"Expected latest attribute value badge-B, got {attr_values}" + assert circle_values.count(30.0) == 1, f"Expected environmental dedup for circle sensor, got {circle_values}" + assert circle_values[-1] == 31.0, f"Expected latest circle sensor value 31.0, got {circle_values}" + assert poly_values.count(40.0) == 1, f"Expected environmental dedup for polygon sensor, got {poly_values}" + assert poly_values[-1] == 41.0, f"Expected latest polygon sensor value 41.0, got {poly_values}" + + self._assert_dedup_timestamp_refresh('scene_env_sensor', scene_env_samples[-1], 20.5) + self._assert_dedup_timestamp_refresh('scene_attr_sensor', scene_attr_samples[-1], 'badge-A') + self._assert_dedup_timestamp_refresh('circle_env_sensor', circle_env_samples[-1], 30.0) + self._assert_dedup_timestamp_refresh('poly_env_sensor', poly_env_samples[-1], 40.0) + + # Type consistency: environmental values must be numeric, attribute values string. 
+ assert all(isinstance(v, (int, float)) for v in env_values), env_values + assert all(isinstance(v, (int, float)) for v in circle_values), circle_values + assert all(isinstance(v, (int, float)) for v in poly_values), poly_values + assert all(isinstance(v, str) for v in attr_values), attr_values + return + + def _verify_sensor_cache_across_geometry_changes(self): + """Verify sensor cached values survive geometry updates (poly->circle->scene->poly). + + Geometry changes are performed inline during the walk loop + (see geo_change_geometry_schedule in checkForMalfunctions) so the controller + is always processing live detections when CMD_SCENE_UPDATE arrives. + This method validates that geo_change_sensor values appear in the regulated/external + messages collected during the walk, confirming the cache was preserved through each + geometry transition. + """ + SENSOR_NAME = 'geo_change_sensor' + CACHED_VALUE = 55.5 + + all_messages = self.regulatedMessages + self.externalMessages + samples = self._find_sensor_values_in_messages(all_messages, SENSOR_NAME) + assert samples, f"No {SENSOR_NAME} values found in regulated/external messages" + found_values = [] + for s in samples: + found_values.extend(self._extract_values(s)) + assert CACHED_VALUE in found_values, ( + f"Expected cached value {CACHED_VALUE} in {SENSOR_NAME} after geometry changes, " + f"got {found_values}" + ) + + assert self.geoChangeUpdateTimes, ( + "Expected geometry updates to record scene reload events. 
" + f"Geometry updates: {self.geoChangeUpdateTimes}, " + f"Found cached values: {found_values}" + ) + return + + def _verify_cleanup_sensor_detached_after_region_removal(self): + assert self.cleanupUpdateTimes, "Expected cleanup sensor geometry updates to be recorded" + + all_messages = self.regulatedMessages + self.externalMessages + assert all_messages, "No regulated/external messages to validate cleanup sensor detachment" + + last_cleanup_update = max(str(ts) for ts in self.cleanupUpdateTimes) + for payload in all_messages: + for obj in payload.get('objects', []): + sensors = obj.get('sensors', {}) + values = sensors.get('cleanup_env_sensor', {}).get('values', []) + for entry in values: + _, ts = self._extract_entry_value_timestamp(entry) + if ts is not None: + assert str(ts) < last_cleanup_update, ( + "Expected cleanup_env_sensor to stop receiving new values after region removal", + entry, + last_cleanup_update + ) + return + def checkForMalfunctions(self): if self.testName and self.recordXMLAttribute: self.recordXMLAttribute("name", self.testName) @@ -128,45 +541,94 @@ def checkForMalfunctions(self): self.prepareScene() course = self.plotCourse() - topic = PubSub.formatTopic(PubSub.DATA_EXTERNAL, - scene_id=self.childId, thing_type=THING_TYPE) begin = get_epoch_time() - positionLast = Point(course[0]) waitTopic = PubSub.formatTopic(PubSub.DATA_SCENE, - scene_id=self.sceneUID, thing_type=THING_TYPE) + scene_id=self.sceneUID, thing_type=THING_TYPES[0]) positionNow = Point(course[0]) - detection = self.createDetection(begin, 0, positionNow, positionNow) + detection = self.objData() + detection['timestamp'] = get_iso_time(begin) + detection['objects']['person'][0]['bounding_box']['y'] = positionNow.y + topic = PubSub.formatTopic(PubSub.DATA_CAMERA, camera_id=self.cameraId) count = self.sceneControllerReady(waitTopic, topic, MAX_CONTROLLER_WAIT, begin, 1 / FRAMES_PER_SECOND, detection) assert count, "Scene controller not ready" + # Prime region sensors before entry 
so region entered events can + # serialize sensor values on first region transition. + prime_ts = get_epoch_time() + self.pushSensorValue('circle_env_sensor', 29.0, prime_ts) + self.pushSensorValue('poly_env_sensor', 39.0, prime_ts) + self.pushSensorValue('cleanup_env_sensor', 49.0, prime_ts) + + # Publish an initial cache value for geo_change_sensor before the first + # geometry change so the cache-preservation path is exercised. + self.pushSensorValue('geo_change_sensor', 55.5, get_epoch_time()) + + # Sensor value schedules: (frame_index, value) + scene_env_schedule = [(27, 20.5), (28, 20.5), (29, 21.0)] + scene_attr_schedule = [(30, 'badge-A'), (31, 'badge-A'), (32, 'badge-B')] + circle_env_schedule = [(24, 30.0), (25, 30.0), (26, 31.0)] + poly_env_schedule = [(24, 40.0), (25, 40.0), (26, 41.0)] + geo_change_schedule = [(27, 55.5), (28, 55.5), (29, 55.5)] + cleanup_env_schedule = [(24, 50.0), (25, 50.0), (26, 51.0)] + + # Geometry transitions for geo_change_sensor performed inline during the + # walk so the tracker is active when CMD_SCENE_UPDATE is processed. + # Sequence: initial poly -> circle -> scene -> poly (restored). 
+ sensor_uid = self.sensors['geo_change_sensor']['uid'] + cleanup_sensor_uid = self.sensors['cleanup_env_sensor']['uid'] + geo_change_geometry_schedule = [ + (36, {'area': 'circle', 'radius': 200, 'center': (0, 0)}), + (40, {'area': 'scene'}), + (44, {'area': 'poly', + 'points': ((-100, 100), (100, 100), (100, -100), (-100, -100))}), + ] + cleanup_geometry_schedule = [ + (35, {'area': 'poly', + 'points': ((900, 910), (910, 910), (910, 900), (900, 900))}), + ] + for idx in range(len(course)): positionNow = Point(course[idx]) - detection = self.createDetection(begin, idx + count, positionNow, positionLast) + detection = self.createDetection(positionNow) self.pubsub.publish(topic, json.dumps(detection)) - time.sleep(1 / FRAMES_PER_SECOND) - sensorsReceived = [name for name in self.sensors if 'received' in self.sensors[name]] - if len(sensorsReceived) == len(self.sensors): - self.exitCode = 0 - break + now = get_epoch_time() + self._publish_scheduled_sensor_value(idx, now, scene_env_schedule, 'scene_env_sensor') + self._publish_scheduled_sensor_value(idx, now, scene_attr_schedule, 'scene_attr_sensor') + self._publish_scheduled_sensor_value(idx, now, circle_env_schedule, 'circle_env_sensor') + self._publish_scheduled_sensor_value(idx, now, poly_env_schedule, 'poly_env_sensor') + self._publish_scheduled_sensor_value(idx, now, cleanup_env_schedule, 'cleanup_env_sensor') + self._apply_scheduled_geo_update( + idx, sensor_uid, geo_change_geometry_schedule, sensor_name='geo_change_sensor' + ) + self._apply_scheduled_geo_update( + idx, cleanup_sensor_uid, cleanup_geometry_schedule, sensor_name='cleanup_env_sensor' + ) + self._publish_scheduled_sensor_value(idx, now, geo_change_schedule, 'geo_change_sensor') - positionLast = positionNow + time.sleep(1 / FRAMES_PER_SECOND) - print("Received events from sensors:", sensorsReceived) + time.sleep(2) + self._verify_scene_topic_excludes_sensors() + self._verify_region_events() + self._verify_sensor_payloads() + 
self._verify_sensor_cache_across_geometry_changes() + self._verify_cleanup_sensor_detached_after_region_removal() + self.exitCode = 0 finally: self.recordTestResult() return -def test_sensor_region_events(request, record_xml_attribute): - test = WillOurShipGo(TEST_NAME, request, record_xml_attribute) +def test_sensor_mqtt_message_flow(request, record_xml_attribute): + test = SensorMqttMessageFlowTest(TEST_NAME, request, record_xml_attribute) test.checkForMalfunctions() assert test.exitCode == 0 return def main(): - return test_sensor_region_events(None, None) + return test_sensor_mqtt_message_flow(None, None) if __name__ == '__main__': os._exit(main() or 0) diff --git a/tests/sscape_tests/scene/pytest.ini b/tests/sscape_tests/scene/pytest.ini new file mode 100644 index 000000000..78b16f842 --- /dev/null +++ b/tests/sscape_tests/scene/pytest.ini @@ -0,0 +1,7 @@ +; SPDX-FileCopyrightText: (C) 2026 Intel Corporation +; SPDX-License-Identifier: Apache-2.0 + +# file_name: pytest.ini + +[pytest] +DJANGO_SETTINGS_MODULE = tests.sscape_tests.settings_unittest diff --git a/tests/sscape_tests/scene/test_scene_detail.py b/tests/sscape_tests/scene/test_scene_detail.py deleted file mode 100644 index 9cd979d4d..000000000 --- a/tests/sscape_tests/scene/test_scene_detail.py +++ /dev/null @@ -1,21 +0,0 @@ -# SPDX-FileCopyrightText: (C) 2025 Intel Corporation -# SPDX-License-Identifier: Apache-2.0 - -from django.test import TestCase -from django.urls import reverse -from manager.models import Scene -from django.contrib.auth.models import User -from django.test.client import RequestFactory - -class SceneDetailTestCase(TestCase): - def setUp(self): - self.factory = RequestFactory() - request = self.factory.get('/') - self.user = User.objects.create_superuser('test_user', 'test_user@intel.com', 'testpassword') - self.client.post(reverse('sign_in'), data = {'username': 'test_user', 'password': 'testpassword', 'request': request}) - testScene = Scene.objects.create(name = "test_scene") - 
self.test_scene_id = testScene.id - - def test_scene_detail_page(self): - response = self.client.get(reverse('sceneDetail', args=[self.test_scene_id])) - self.assertEqual(response.status_code, 200) diff --git a/tests/sscape_tests/scene_pytest/test_scene.py b/tests/sscape_tests/scene_pytest/test_scene.py index 6e90791c6..c8f8748e9 100644 --- a/tests/sscape_tests/scene_pytest/test_scene.py +++ b/tests/sscape_tests/scene_pytest/test_scene.py @@ -3,11 +3,15 @@ # SPDX-FileCopyrightText: (C) 2022 - 2025 Intel Corporation # SPDX-License-Identifier: Apache-2.0 -import enum import cv2 import pytest import numpy as np import copy +from types import SimpleNamespace +from unittest.mock import Mock + +import controller.scene as scene_module +from controller.moving_object import ChainData from scene_common.timestamp import get_epoch_time from scene_common.geometry import Region, Point @@ -199,7 +203,7 @@ def test_convert_pixel_bbox(scene_obj, objects): assert_bounding_box(obj, original_obj) # Verify bounding boxes for sub_detections for key in obj.get('sub_detections', []): - for sub_obj, original_sub_obj in zip(enumerate(obj[key]), enumerate(original_obj[key])): + for sub_obj, original_sub_obj in zip(obj[key], original_obj[key]): assert_bounding_box(sub_obj, original_sub_obj) return @@ -216,3 +220,743 @@ def assert_bounding_box(obj, original_obj): assert 'height' in obj['bounding_box'], f"'height' missing in bounding box for object: {obj}" else: assert 'bounding_box' not in obj, f"Unexpected 'bounding_box' in object: {obj}" + +def _make_chain_data(): + return ChainData( + regions={}, + persist={}, + publishedLocations=[], + ) + +def _make_obj(gid="obj-1", frame_count=4, scene_loc=None, when=1.0): + if scene_loc is None: + scene_loc = Point(0.0, 0.0, 0.0) + return SimpleNamespace( + gid=gid, + frameCount=frame_count, + sceneLoc=scene_loc, + when=when, + chain_data=_make_chain_data(), + ) + +def test_processCameraData_unknown_camera_returns_false(scene_obj): + payload = { + 'id': 
'unknown-camera', + 'timestamp': '2023-05-16T21:22:58.388Z', + 'objects': {'person': []} + } + assert scene_obj.processCameraData(payload) is False + +def test_processCameraData_camera_without_pose_returns_true(scene_obj): + scene_obj.cameras['camera1'] = SimpleNamespace(cameraID='camera1') + payload = { + 'id': 'camera1', + 'timestamp': '2023-05-16T21:22:58.388Z', + 'objects': {'person': []} + } + assert scene_obj.processCameraData(payload) is True + +def test_processCameraData_intrinsics_present_skips_bbox_conversion(scene_obj, camera_obj, monkeypatch): + scene_obj.cameras[camera_obj.cameraID] = camera_obj + convert_mock = Mock() + monkeypatch.setattr(scene_obj, '_convertPixelBoundingBoxesToMeters', convert_mock) + monkeypatch.setattr(scene_obj, '_createMovingObjectsForDetection', lambda *args, **kwargs: []) + monkeypatch.setattr(scene_obj, '_finishProcessing', lambda *args, **kwargs: None) + payload = { + 'id': camera_obj.cameraID, + 'timestamp': '2023-05-16T21:22:58.388Z', + 'intrinsics': {'fx': 1.0}, + 'objects': {'person': []} + } + assert scene_obj.processCameraData(payload) is True + convert_mock.assert_not_called() + +def test_deserialize_tracked_objects_uses_configured_attribute_singleton_type(): + """Configured attribute sensors stay in attr_sensor_events even for numeric-like values.""" + scene = scene_module.Scene.__new__(scene_module.Scene) + scene.sensors = { + 'weight-sensor': SimpleNamespace(singleton_type='attribute') + } + scene.object_history_cache = {} + + objects = scene._deserializeTrackedObjects([ + { + 'id': 'object-1', + 'translation': [1, 2, 3], + 'sensors': { + 'weight-sensor': { + 'values': [('2026-03-26T20:53:29.761Z', '48')], + } + }, + } + ]) + + assert len(objects) == 1 + assert 'weight-sensor' not in objects[0].chain_data.env_sensor_state + assert objects[0].chain_data.attr_sensor_events['weight-sensor'] == [ + ('2026-03-26T20:53:29.761Z', '48') + ] + + +def 
test_deserialize_tracked_objects_defaults_unknown_sensor_to_environmental(): + """Unknown sensors deserialize as environmental when no metadata is available.""" + scene = scene_module.Scene.__new__(scene_module.Scene) + scene.sensors = {} + scene.object_history_cache = {} + + objects = scene._deserializeTrackedObjects([ + { + 'id': 'object-1', + 'translation': [1, 2, 3], + 'sensors': { + 'unknown-sensor': { + 'values': [('2026-03-26T20:53:29.761Z', '48')], + } + }, + } + ]) + + assert len(objects) == 1 + assert objects[0].chain_data.env_sensor_state['unknown-sensor'] == { + 'readings': [('2026-03-26T20:53:29.761Z', '48')] + } + assert 'unknown-sensor' not in objects[0].chain_data.attr_sensor_events + + +def test_deserialize_tracked_objects_defaults_missing_singleton_type_to_environmental(): + """Sensors with missing singleton_type also default to environmental storage.""" + scene = scene_module.Scene.__new__(scene_module.Scene) + scene.sensors = { + 'sensor-without-type': SimpleNamespace(singleton_type=None) + } + scene.object_history_cache = {} + + objects = scene._deserializeTrackedObjects([ + { + 'id': 'object-1', + 'translation': [1, 2, 3], + 'sensors': { + 'sensor-without-type': { + 'values': [('2026-03-26T20:53:29.761Z', 'not-a-number')], + } + }, + } + ]) + + assert len(objects) == 1 + assert objects[0].chain_data.env_sensor_state['sensor-without-type'] == { + 'readings': [('2026-03-26T20:53:29.761Z', 'not-a-number')] + } + assert 'sensor-without-type' not in objects[0].chain_data.attr_sensor_events + +def test_processCameraData_ignore_time_flag_uses_now(scene_obj, camera_obj, monkeypatch): + scene_obj.cameras[camera_obj.cameraID] = camera_obj + captured = {} + + def _capture_create(detection_type, detections, when_value, camera): + captured['when'] = when_value + return [] + + monkeypatch.setattr(scene_obj, '_createMovingObjectsForDetection', _capture_create) + monkeypatch.setattr(scene_obj, '_finishProcessing', lambda *args, **kwargs: None) + payload = { + 
'id': camera_obj.cameraID, + 'timestamp': 'not-used', + 'objects': {'person': []} + } + assert scene_obj.processCameraData(payload, when=None, ignoreTimeFlag=True) is True + assert 'when' in captured + assert isinstance(captured['when'], float) + +def test_updateTracker_only_reconfigures_on_change(scene_obj, monkeypatch): + set_tracker_mock = Mock() + monkeypatch.setattr(scene_obj, '_setTracker', set_tracker_mock) + + scene_obj.updateTracker(scene_obj.max_unreliable_time, + scene_obj.non_measurement_time_dynamic, + scene_obj.non_measurement_time_static) + set_tracker_mock.assert_not_called() + + scene_obj.trackerType = scene_module.Scene.DEFAULT_TRACKER + scene_obj.updateTracker(scene_obj.max_unreliable_time + 1.0, + scene_obj.non_measurement_time_dynamic, + scene_obj.non_measurement_time_static) + set_tracker_mock.assert_called_once_with(scene_obj.trackerType) + +def test_createMovingObjectsForDetection_propagates_scene_mesh(scene_obj): + scene_obj.map_triangle_mesh = 'mesh' + scene_obj.mesh_translation = [1, 2, 3] + scene_obj.mesh_rotation = [0, 0, 0, 1] + scene_obj.persist_attributes = {'person': {'foo': 'bar'}} + created = SimpleNamespace() + scene_obj.tracker = SimpleNamespace(createObject=lambda *args: created) + + result = scene_obj._createMovingObjectsForDetection('person', [{'id': 'x'}], 1.23, SimpleNamespace()) + assert len(result) == 1 + assert result[0].map_triangle_mesh == 'mesh' + assert result[0].map_translation == [1, 2, 3] + assert result[0].map_rotation == [0, 0, 0, 1] + +def test_processSceneData_rejects_lat_long_alt_plus_translation(scene_obj, monkeypatch): + finish_mock = Mock() + monkeypatch.setattr(scene_obj, '_finishProcessing', finish_mock) + child = SimpleNamespace(name='child', retrack=True) + camera_pose = SimpleNamespace(pose_mat=np.eye(4)) + payload = {'objects': [{'lat_long_alt': [0, 0, 0], 'translation': [1, 2, 3]}]} + + assert scene_obj.processSceneData(payload, child, camera_pose, 'person', when=1.0) is True + 
finish_mock.assert_not_called() + +def test_processSceneData_splits_retracked_vs_child_objects(scene_obj, monkeypatch): + calls = [] + + def _create_object(detection_type, info, when, child_obj, persist): + assert 'reid' not in info + return SimpleNamespace(oid='oid-1', sceneLoc=Point(1.0, 2.0, 0.0), chain_data=_make_chain_data()) + + def _capture_finish(detection_type, when, objects, child_objects): + calls.append((objects, child_objects)) + + scene_obj.tracker = SimpleNamespace(createObject=_create_object) + monkeypatch.setattr(scene_obj, '_finishProcessing', _capture_finish) + child = SimpleNamespace(name='child', retrack=False) + camera_pose = SimpleNamespace(pose_mat=np.eye(4)) + payload = {'objects': [{'translation': [1, 2, 3], 'reid': [0.1, 0.2]}]} + + assert scene_obj.processSceneData(payload, child, camera_pose, 'person', when=1.0) is True + assert len(calls) == 1 + assert len(calls[0][0]) == 0 + assert len(calls[0][1]) == 1 + +def test_finishProcessing_tracks_when_not_analytics_only(scene_obj, monkeypatch): + update_visible_mock = Mock() + update_events_mock = Mock() + track_mock = Mock() + monkeypatch.setattr(scene_module.ControllerMode, 'isAnalyticsOnly', lambda: False) + monkeypatch.setattr(scene_obj, '_updateVisible', update_visible_mock) + monkeypatch.setattr(scene_obj, '_updateEvents', update_events_mock) + scene_obj.tracker = SimpleNamespace(trackObjects=track_mock) + + scene_obj._finishProcessing('person', 10.0, [], []) + update_visible_mock.assert_called_once() + track_mock.assert_called_once() + update_events_mock.assert_called_once_with('person', 10.0) + +def test_finishProcessing_skips_tracker_in_analytics_only(scene_obj, monkeypatch): + update_events_mock = Mock() + track_mock = Mock() + monkeypatch.setattr(scene_module.ControllerMode, 'isAnalyticsOnly', lambda: True) + monkeypatch.setattr(scene_obj, '_updateVisible', lambda objects: None) + monkeypatch.setattr(scene_obj, '_updateEvents', update_events_mock) + scene_obj.tracker = 
SimpleNamespace(trackObjects=track_mock) + + scene_obj._finishProcessing('person', 10.0, [], []) + track_mock.assert_not_called() + update_events_mock.assert_called_once_with('person', 10.0) + +def test_processSensorData_unknown_sensor_returns_false(scene_obj): + assert scene_obj.processSensorData({'id': 'nope', 'value': 1}, when=1.0) is False + +def test_processSensorData_discards_past_data(scene_obj): + sensor = SimpleNamespace(lastWhen=10.0) + scene_obj.sensors['sensor1'] = sensor + assert scene_obj.processSensorData({'id': 'sensor1', 'value': 1}, when=9.0) is True + +def test_processSensorData_environmental_sensor_updates_state(scene_obj): + sensor = SimpleNamespace( + singleton_type='environmental', + area=Region.REGION_SCENE, + value=None, + lastValue=None, + lastWhen=None, + ) + scene_obj.sensors['sensor1'] = sensor + obj = _make_obj(gid='obj-1', frame_count=4) + scene_obj.use_tracker = True + scene_obj.tracker = SimpleNamespace( + trackers={'person': object()}, + currentObjects=lambda detection_type: [obj], + ) + + assert scene_obj.processSensorData({'id': 'sensor1', 'value': 12.5}, when=11.0) is True + assert 'sensor1' in obj.chain_data.active_sensors + assert obj.chain_data.env_sensor_state['sensor1']['readings'][-1][1] == 12.5 + +def test_processSensorData_attribute_sensor_updates_events(scene_obj): + sensor = SimpleNamespace( + singleton_type='attribute', + area=Region.REGION_SCENE, + value=None, + lastValue=None, + lastWhen=None, + ) + scene_obj.sensors['sensor1'] = sensor + obj = _make_obj(gid='obj-1', frame_count=4) + scene_obj.use_tracker = True + scene_obj.tracker = SimpleNamespace( + trackers={'person': object()}, + currentObjects=lambda detection_type: [obj], + ) + + assert scene_obj.processSensorData({'id': 'sensor1', 'value': 'A'}, when=11.0) is True + assert 'sensor1' in obj.chain_data.attr_sensor_events + assert obj.chain_data.attr_sensor_events['sensor1'][-1][1] == 'A' + +def 
test_processSensorData_scene_wide_skips_immature_objects(scene_obj): + sensor = SimpleNamespace( + singleton_type='environmental', + area=Region.REGION_SCENE, + value=None, + lastValue=None, + lastWhen=None, + ) + scene_obj.sensors['sensor1'] = sensor + obj = _make_obj(gid='obj-1', frame_count=3) + scene_obj.use_tracker = True + scene_obj.tracker = SimpleNamespace( + trackers={'person': object()}, + currentObjects=lambda detection_type: [obj], + ) + + assert scene_obj.processSensorData({'id': 'sensor1', 'value': 12.5}, when=11.0) is True + assert 'sensor1' not in obj.chain_data.active_sensors + assert obj.chain_data.env_sensor_state == {} + +def test_getTrackedObjects_analytics_mode_uses_cache(scene_obj, monkeypatch): + monkeypatch.setattr(scene_module.ControllerMode, 'isAnalyticsOnly', lambda: True) + scene_obj.updateTrackedObjects('person', [{'id': '1', 'type': 'person', 'translation': [1, 2, 3]}]) + + objs = scene_obj.getTrackedObjects('person') + assert len(objs) == 1 + assert objs[0].gid == '1' + assert objs[0].category == 'person' + +def test_getTrackedObjects_non_analytics_uses_tracker(scene_obj, monkeypatch): + monkeypatch.setattr(scene_module.ControllerMode, 'isAnalyticsOnly', lambda: False) + expected = [_make_obj(gid='direct-1')] + scene_obj.tracker = SimpleNamespace(currentObjects=lambda detection_type: expected) + assert scene_obj.getTrackedObjects('person') == expected + +def test_deserializeTrackedObjects_uses_configured_sensor_types(scene_obj, monkeypatch): + """Analytics deserialization uses scene sensor metadata for mixed sensor payloads.""" + monkeypatch.setattr(scene_module.ControllerMode, 'isAnalyticsOnly', lambda: True) + scene_obj.sensors = { + 'temperature': SimpleNamespace(singleton_type='environmental'), + 'status': SimpleNamespace(singleton_type='attribute'), + 'humidity': SimpleNamespace(singleton_type='environmental') + } + obj_data = { + 'id': 'obj-3', + 'type': 'person', + 'translation': [3.0, 4.0, 5.0], + 'sensors': { + 
'temperature': { + 'values': [('2026-03-26T20:53:29.761Z', 25.5)] + }, + 'status': { + 'values': [('2026-03-26T20:53:29.761Z', 'active')] + }, + 'humidity': { + 'values': [ + ('2026-03-26T20:53:29.761Z', 65.0), + ('2026-03-26T20:53:30.761Z', 67.0), + ] + }, + } + } + scene_obj.updateTrackedObjects('person', [obj_data]) + + objs = scene_obj.getTrackedObjects('person') + assert len(objs) == 1 + assert 'temperature' in objs[0].chain_data.env_sensor_state + assert 'humidity' in objs[0].chain_data.env_sensor_state + assert 'status' in objs[0].chain_data.attr_sensor_events + assert objs[0].chain_data.env_sensor_state['temperature']['readings'][0][1] == 25.5 + assert objs[0].chain_data.attr_sensor_events['status'][0][1] == 'active' + assert len(objs[0].chain_data.env_sensor_state['humidity']['readings']) == 2 + +def test_deserializeTrackedObjects_empty_sensors(scene_obj, monkeypatch): + """Test deserialization with empty sensor values.""" + monkeypatch.setattr(scene_module.ControllerMode, 'isAnalyticsOnly', lambda: True) + obj_data = { + 'id': 'obj-4', + 'type': 'person', + 'translation': [4.0, 5.0, 6.0], + 'sensors': { + 'empty_sensor': {'values': []}, + 'normal_sensor': {'values': [('2026-03-26T20:53:29.761Z', 10)]} + } + } + scene_obj.updateTrackedObjects('person', [obj_data]) + + objs = scene_obj.getTrackedObjects('person') + assert len(objs) == 1 + assert 'empty_sensor' not in objs[0].chain_data.env_sensor_state + assert 'empty_sensor' not in objs[0].chain_data.attr_sensor_events + assert 'normal_sensor' in objs[0].chain_data.env_sensor_state + +def test_deserialize_sets_core_fields(monkeypatch): + monkeypatch.setattr(scene_module.ControllerMode, 'isAnalyticsOnly', lambda: False) + data = { + 'uid': 'scene-1', + 'name': 'scene-name', + 'map': 'sample_data/HazardZoneSceneLarge.png', + 'scale': 123, + 'children': [{'name': 'child-1'}], + 'use_tracker': True, + 'tracker_config': [1.0, 2.0, 3.0], + } + scene = scene_module.Scene.deserialize(data) + assert scene.uid == 
'scene-1' + assert scene.name == 'scene-name' + assert scene.scale == 123 + assert scene.children == ['child-1'] + +def test_updateScene_updates_fields_and_invokes_helpers(scene_obj, monkeypatch): + update_children_mock = Mock() + update_cameras_mock = Mock() + update_regions_mock = Mock() + update_tripwires_mock = Mock() + update_tracker_mock = Mock() + invalidate_mock = Mock() + + monkeypatch.setattr(scene_obj, '_updateChildren', update_children_mock) + monkeypatch.setattr(scene_obj, 'updateCameras', update_cameras_mock) + monkeypatch.setattr(scene_obj, '_updateRegions', update_regions_mock) + monkeypatch.setattr(scene_obj, '_updateTripwires', update_tripwires_mock) + monkeypatch.setattr(scene_obj, 'updateTracker', update_tracker_mock) + monkeypatch.setattr(scene_obj, '_invalidate_trs_xyz_to_lla', invalidate_mock) + + scene_obj._trs_xyz_to_lla = np.array([1]) + scene_data = { + 'name': 'new-name', + 'children': [], + 'cameras': [], + 'regions': [], + 'tripwires': [], + 'sensors': [], + 'use_tracker': False, + 'tracker_config': [4.0, 5.0, 6.0], + 'scale': 321, + 'regulated_rate': 12, + 'external_update_rate': 34, + 'output_lla': False, + 'map_corners_lla': None, + } + scene_obj.updateScene(scene_data) + + assert scene_obj.name == 'new-name' + assert scene_obj.scale == 321 + assert scene_obj.regulated_rate == 12 + assert scene_obj.external_update_rate == 34 + assert scene_obj.use_tracker is False + update_children_mock.assert_called_once() + update_cameras_mock.assert_called_once() + assert update_regions_mock.call_count == 2 + update_tripwires_mock.assert_called_once() + update_tracker_mock.assert_called_once_with(4.0, 5.0, 6.0) + invalidate_mock.assert_called_once() + +def test_updateRegions_preserves_sensor_cache_and_state(scene_obj): + class FakeRegion: + def __init__(self): + self.name = 'old-name' + self.value = 10 + self.lastValue = None + self.lastWhen = 99.0 + self.entered = {'person': []} + self.exited = {'person': []} + self.objects = {'person': []} + 
self.when = 98.0 + self.singleton_type = 'environmental' + + def updatePoints(self, region_data): + self.points = region_data['points'] + + def updateSingletonType(self, region_data): + self.singleton_type = region_data.get('singleton_type', None) + + def updateVolumetricInfo(self, region_data): + self.volumetric = region_data.get('volumetric', False) + + existing = {'region-1': FakeRegion()} + new_regions = [{ + 'uid': 'region-1', + 'name': 'new-name', + 'points': [[0, 0], [1, 0], [1, 1], [0, 1]], + 'singleton_type': 'attribute', + }] + + scene_obj._updateRegions(existing, new_regions) + region = existing['region-1'] + assert region.name == 'new-name' + assert region.value == 10 + assert region.lastValue is None + assert region.lastWhen == 99.0 + assert region.entered == {'person': []} + assert region.exited == {'person': []} + assert region.objects == {'person': []} + assert region.when == 98.0 + +def test_updateTripwires_adds_and_removes(scene_obj): + scene_obj.tripwires = {'old-id': SimpleNamespace()} + scene_obj._updateTripwires([ + {'uid': 'new-id', 'name': 'trip-1', 'points': [[0, 0], [1, 1]]} + ]) + assert 'new-id' in scene_obj.tripwires + assert 'old-id' not in scene_obj.tripwires + +def test_updateEvents_inserts_published_locations(scene_obj, monkeypatch): + obj = _make_obj(gid='obj-1') + monkeypatch.setattr(scene_obj, '_updateRegionEvents', lambda *args, **kwargs: set()) + monkeypatch.setattr(scene_obj, '_updateTripwireEvents', lambda *args, **kwargs: None) + scene_obj.tracker = SimpleNamespace(currentObjects=lambda detection_type: [obj]) + + scene_obj._updateEvents('person', now=50.0) + assert len(obj.chain_data.publishedLocations) == 1 + assert obj.chain_data.publishedLocations[0] == obj.sceneLoc + +def test_updateTripwireEvents_emits_tripwire_event(scene_obj): + tripwire = SimpleNamespace( + objects={}, + when=0.0, + lineCrosses=lambda line: 1, + ) + scene_obj.tripwires = {'tw-1': tripwire} + scene_obj.events = {} + obj = _make_obj(gid='obj-1', 
frame_count=5) + obj.chain_data.publishedLocations = [Point(1.0, 1.0, 0.0), Point(0.0, 0.0, 0.0)] + + scene_obj._updateTripwireEvents('person', now=2.0, curObjects=[obj]) + assert 'objects' in scene_obj.events + assert scene_obj.events['objects'][0][0] == 'tw-1' + +def test_trs_xyz_to_lla_is_cached_and_invalidate_resets(scene_obj, monkeypatch): + calls = {'count': 0} + + def _fake_calc(mesh_corners, map_corners_lla): + calls['count'] += 1 + return np.array([[1.0]]) + + monkeypatch.setattr(scene_module, 'getMeshAxisAlignedProjectionToXY', lambda mesh: np.array([[0, 0, 0]])) + monkeypatch.setattr(scene_module, 'calculateTRSLocal2LLAFromSurfacePoints', _fake_calc) + scene_obj.output_lla = True + scene_obj.map_corners_lla = [[0, 0, 0], [1, 1, 1]] + + first = scene_obj.trs_xyz_to_lla + second = scene_obj.trs_xyz_to_lla + assert calls['count'] == 1 + assert np.array_equal(first, second) + + scene_obj._invalidate_trs_xyz_to_lla() + _ = scene_obj.trs_xyz_to_lla + assert calls['count'] == 2 + +def test_setTracker_invalid_type_keeps_existing_tracker(scene_obj): + original_tracker = scene_obj.tracker + original_tracker_type = scene_obj.trackerType + + scene_obj._setTracker('missing-tracker') + + assert scene_obj.tracker is original_tracker + assert scene_obj.trackerType == original_tracker_type + +def test_processCameraData_processes_each_detection_type(scene_obj, camera_obj, monkeypatch): + scene_obj.cameras[camera_obj.cameraID] = camera_obj + converted = [] + created = [] + finished = [] + + def _capture_convert(detections, intrinsics_matrix, distortion_matrix): + converted.append(detections) + + def _capture_create(detection_type, detections, when, camera): + created.append((detection_type, detections, camera.cameraID)) + return [detection_type] + + def _capture_finish(detection_type, when, objects, child_objects=[]): + finished.append((detection_type, objects, child_objects)) + + monkeypatch.setattr(scene_obj, '_convertPixelBoundingBoxesToMeters', _capture_convert) + 
monkeypatch.setattr(scene_obj, '_createMovingObjectsForDetection', _capture_create) + monkeypatch.setattr(scene_obj, '_finishProcessing', _capture_finish) + + payload = { + 'id': camera_obj.cameraID, + 'timestamp': '2023-05-16T21:22:58.388Z', + 'objects': { + 'person': [{'id': 'p-1'}], + 'vehicle': [{'id': 'v-1'}], + } + } + + assert scene_obj.processCameraData(payload) is True + assert converted == [payload['objects']['person'], payload['objects']['vehicle']] + assert [call[0] for call in created] == ['person', 'vehicle'] + assert [call[0] for call in finished] == ['person', 'vehicle'] + assert finished[0][1] == ['person'] + assert finished[1][1] == ['vehicle'] + +def test_processSensorData_invalid_environmental_value_returns_false(scene_obj): + sensor = SimpleNamespace( + singleton_type='environmental', + area=Region.REGION_SCENE, + value=None, + lastValue=None, + lastWhen=None, + ) + scene_obj.sensors['sensor1'] = sensor + obj = _make_obj(gid='obj-1', frame_count=4) + scene_obj.use_tracker = True + scene_obj.tracker = SimpleNamespace( + trackers={'person': object()}, + currentObjects=lambda detection_type: [obj], + ) + + assert scene_obj.processSensorData({'id': 'sensor1', 'value': 'not-a-number'}, when=11.0) is False + assert obj.chain_data.env_sensor_state == {} + +def test_deserializeTrackedObjects_uses_cached_first_seen(scene_obj): + scene_obj.object_history_cache['obj-1'] = { + 'first_seen': 12.5, + 'publishedLocations': [Point(9.0, 8.0, 7.0)], + } + + objs = scene_obj._deserializeTrackedObjects([ + {'id': 'obj-1', 'type': 'person', 'translation': [1.0, 2.0, 3.0]} + ]) + + assert len(objs) == 1 + assert objs[0].when == 12.5 + assert objs[0].first_seen == 12.5 + assert objs[0].chain_data.publishedLocations[0] == Point(9.0, 8.0, 7.0) + +def test_deserializeTrackedObjects_missing_first_seen_uses_current_time(scene_obj, monkeypatch): + monkeypatch.setattr(scene_module, 'get_epoch_time', lambda *args, **kwargs: 77.0) + + objs = 
scene_obj._deserializeTrackedObjects([ + {'id': 'obj-2', 'type': 'person', 'translation': [1.0, 2.0, 3.0]} + ]) + + assert len(objs) == 1 + assert objs[0].when == 77.0 + assert objs[0].first_seen == 77.0 + assert scene_obj.object_history_cache['obj-2']['first_seen'] == 77.0 + +def test_updateRegionEvents_environmental_sensor_exit_clears_state(scene_obj): + obj = _make_obj(gid='obj-1', frame_count=4, when=1.0) + obj.chain_data.regions['sensor1'] = {'entered': '2026-03-26T20:53:29.761Z'} + obj.chain_data.active_sensors.add('sensor1') + obj.chain_data.env_sensor_state['sensor1'] = {'readings': [('2026-03-26T20:53:29.761Z', 21.5)]} + region = SimpleNamespace( + objects={'person': [obj]}, + when=0.0, + singleton_type='environmental', + entered={}, + exited={}, + isPointWithin=lambda scene_loc: False, + compute_intersection=False, + ) + scene_obj.events = {} + + updated = scene_obj._updateRegionEvents('person', {'sensor1': region}, 2.0, '2026-03-26T20:53:31.761Z', []) + + assert updated == {'sensor1'} + assert 'sensor1' not in obj.chain_data.regions + assert 'sensor1' not in obj.chain_data.active_sensors + assert 'sensor1' not in obj.chain_data.env_sensor_state + assert region.objects['person'] == [] + assert scene_obj.events['objects'][0][0] == 'sensor1' + +def test_updateRegionEvents_attribute_sensor_exit_preserves_history(scene_obj): + obj = _make_obj(gid='obj-1', frame_count=4, when=1.0) + obj.chain_data.regions['sensor1'] = {'entered': '2026-03-26T20:53:29.761Z'} + obj.chain_data.active_sensors.add('sensor1') + obj.chain_data.attr_sensor_events['sensor1'] = [('2026-03-26T20:53:29.761Z', 'red')] + region = SimpleNamespace( + objects={'person': [obj]}, + when=0.0, + singleton_type='attribute', + entered={}, + exited={}, + isPointWithin=lambda scene_loc: False, + compute_intersection=False, + ) + scene_obj.events = {} + + updated = scene_obj._updateRegionEvents('person', {'sensor1': region}, 2.0, '2026-03-26T20:53:31.761Z', []) + + assert updated == {'sensor1'} + 
assert 'sensor1' not in obj.chain_data.regions + assert 'sensor1' not in obj.chain_data.active_sensors + assert obj.chain_data.attr_sensor_events['sensor1'] == [('2026-03-26T20:53:29.761Z', 'red')] + assert region.objects['person'] == [] + +def test_updateRegionEvents_debounce_preserves_exit_state_until_event_emits(scene_obj, monkeypatch): + monkeypatch.setattr(scene_module, 'DEBOUNCE_DELAY', 0.5) + + obj = _make_obj(gid='obj-1', frame_count=4, when=1.0) + obj.chain_data.regions['sensor1'] = {'entered': '2026-03-26T20:53:29.761Z'} + obj.chain_data.active_sensors.add('sensor1') + obj.chain_data.env_sensor_state['sensor1'] = {'readings': [('2026-03-26T20:53:29.761Z', 21.5)]} + region = SimpleNamespace( + objects={'person': [obj]}, + when=1.9, + singleton_type='environmental', + entered={}, + exited={}, + isPointWithin=lambda scene_loc: False, + compute_intersection=False, + ) + scene_obj.events = {} + + updated = scene_obj._updateRegionEvents('person', {'sensor1': region}, 2.0, '2026-03-26T20:53:31.761Z', []) + + assert updated == set() + assert obj.chain_data.regions['sensor1']['entered'] == '2026-03-26T20:53:29.761Z' + assert 'sensor1' in obj.chain_data.active_sensors + assert 'sensor1' in obj.chain_data.env_sensor_state + assert scene_obj.events == {} + assert region.objects['person'] == [obj] + +def test_updateRegionEvents_emits_delayed_exit_with_dwell_and_then_cleans_up(scene_obj, monkeypatch): + monkeypatch.setattr(scene_module, 'DEBOUNCE_DELAY', 0.5) + + obj = _make_obj(gid='obj-1', frame_count=4, when=1.0) + entered_ts = '2026-03-26T20:53:29.761Z' + obj.chain_data.regions['region1'] = {'entered': entered_ts} + region = SimpleNamespace( + objects={'person': [obj]}, + when=1.9, + singleton_type=None, + entered={}, + exited={}, + isPointWithin=lambda scene_loc: False, + compute_intersection=False, + ) + scene_obj.events = {} + + # Debounce suppresses immediate event emission. 
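The two debounce tests above exercise a delayed-exit pattern: an object that is no longer inside a region is not reported as exited immediately, but only once `DEBOUNCE_DELAY` has elapsed since the region last saw activity, and the dwell time is then computed from the preserved entry timestamp. A minimal self-contained sketch of that logic (the constant value matches what the tests monkeypatch; the function itself is an illustration, not the controller's `_updateRegionEvents` implementation):

```python
DEBOUNCE_DELAY = 0.5  # seconds; illustrative, matching the monkeypatched tests

def check_region_exit(entered_at, last_seen, now):
    """Return (exited, dwell) for an object no longer inside the region.

    The exit is suppressed while `now - last_seen` is within the debounce
    window, leaving entry state untouched; once the window elapses, the
    exit is emitted with dwell measured from the original entry time.
    """
    if now - last_seen < DEBOUNCE_DELAY:
        return False, None          # still debouncing: keep entered state
    return True, now - entered_at   # emit exit event with dwell time

# First pass: only 0.1 s since the region last saw the object -> suppressed.
suppressed = check_region_exit(entered_at=1.0, last_seen=1.9, now=2.0)
# Second pass: 0.7 s elapsed -> exit emitted, dwell spans the full stay.
emitted = check_region_exit(entered_at=1.0, last_seen=1.9, now=2.6)
```

This is why the first `_updateRegionEvents` call in the tests returns an empty set and leaves `chain_data.regions` intact, while the later call emits the exit with a dwell derived from the preserved `entered` timestamp.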
+ updated = scene_obj._updateRegionEvents('person', {'region1': region}, 2.0, '2026-03-26T20:53:31.761Z', []) + assert updated == set() + assert 'region1' in obj.chain_data.regions + + # Once debounce delay has passed, emit exit and compute dwell from preserved entered timestamp. + scene_obj._updateRegionEvents('person', {'region1': region}, 2.6, '2026-03-26T20:53:32.361Z', []) + + assert region.exited['person'] + exited_obj, dwell = region.exited['person'][0] + assert exited_obj == obj + assert dwell == pytest.approx(2.6 - get_epoch_time(entered_ts)) + assert 'region1' not in obj.chain_data.regions + assert region.objects['person'] == [] + +def test_isIntersecting_createObjectMesh_value_error_returns_false(scene_obj, monkeypatch): + def _raise_value_error(obj): + raise ValueError('invalid object geometry') + + monkeypatch.setattr(scene_module, 'createObjectMesh', _raise_value_error) + region = SimpleNamespace(compute_intersection=True, mesh=object()) + obj = SimpleNamespace() + + assert scene_obj.isIntersecting(obj, region) is False diff --git a/tests/sscape_tests/scenescape/conftest.py b/tests/sscape_tests/scenescape/conftest.py index 8611b54a7..408481125 100644 --- a/tests/sscape_tests/scenescape/conftest.py +++ b/tests/sscape_tests/scenescape/conftest.py @@ -5,6 +5,7 @@ import os import pytest +from unittest.mock import Mock from scene_common.scenescape import SceneLoader import tests.common_test_utils as common @@ -31,3 +32,26 @@ def manager(): """! Creates a scenescape class object as a fixture. 
""" return SceneLoader(CONFIG_FULLPATH) + + +@pytest.fixture +def mock_rest_client(): + """Create a mock REST client for testing.""" + mock_client = Mock() + mock_client.getScenes.return_value = { + 'results': [ + { + 'uid': 'scene-1', + 'name': 'Test Scene', + 'map_file': 'map.obj', + 'cameras': [], + 'sensors': [], + 'children': [], + 'objects': [] + } + ] + } + mock_client.updateCamera.return_value = True + mock_client.getCamera.return_value = {'uid': 'cam-1'} + + return mock_client diff --git a/tests/sscape_tests/scenescape/test_cache_manager.py b/tests/sscape_tests/scenescape/test_cache_manager.py new file mode 100644 index 000000000..da817bd46 --- /dev/null +++ b/tests/sscape_tests/scenescape/test_cache_manager.py @@ -0,0 +1,514 @@ +# SPDX-FileCopyrightText: (C) 2026 Intel Corporation +# SPDX-License-Identifier: Apache-2.0 + +import pytest +from unittest.mock import Mock, MagicMock, patch + +from controller.cache_manager import CacheManager +from controller.data_source import FileSceneDataSource, RestSceneDataSource +from controller.scene import Scene + + +class TestCacheManagerInitialization: + """Test CacheManager initialization with different data sources.""" + + def test_init_with_file_data_source_mock(self): + """Test initialization with file-based data source (mocked).""" + mock_data_source = Mock(spec=FileSceneDataSource) + mock_data_source.getScenes.return_value = {'results': []} + + cache_mgr = CacheManager.__new__(CacheManager) + cache_mgr.cached_scenes_by_uid = {} + cache_mgr._cached_scenes_by_cameraID = {} + cache_mgr._cached_scenes_by_sensorID = {} + cache_mgr.camera_parameters = {} + cache_mgr.tracker_config_data = {} + cache_mgr.data_source = mock_data_source + + assert cache_mgr is not None + assert hasattr(cache_mgr, 'cached_scenes_by_uid') + assert hasattr(cache_mgr, 'camera_parameters') + + def test_init_with_rest_data_source(self, mock_rest_client): + """Test initialization with REST data source.""" + with 
patch('controller.data_source.RESTClient', return_value=mock_rest_client): + cache_mgr = CacheManager.__new__(CacheManager) + cache_mgr.cached_scenes_by_uid = {} + cache_mgr._cached_scenes_by_cameraID = {} + cache_mgr._cached_scenes_by_sensorID = {} + cache_mgr.tracker_config_data = {} + cache_mgr.data_source = mock_rest_client + + assert cache_mgr is not None + + def test_init_with_no_data_source_raises_error(self): + """Test that initialization without data source raises ValueError.""" + with pytest.raises(ValueError, match="Invalid configuration"): + CacheManager() + + def test_init_with_tracker_config(self): + """Test initialization with tracker configuration.""" + tracker_config = { + 'max_unreliable_time': 5.0, + 'non_measurement_time_dynamic': 2.0, + 'non_measurement_time_static': 10.0, + 'effective_object_update_rate': 30, + 'time_chunking_enabled': False, + 'time_chunking_rate_fps': 30, + 'suspended_track_timeout_secs': 60, + 'persist_attributes': {'test_attr': 'value'} + } + + mock_data_source = Mock() + mock_data_source.getScenes.return_value = {'results': []} + + cache_mgr = CacheManager.__new__(CacheManager) + cache_mgr.cached_scenes_by_uid = {} + cache_mgr._cached_scenes_by_cameraID = {} + cache_mgr._cached_scenes_by_sensorID = {} + cache_mgr.tracker_config_data = tracker_config + cache_mgr.data_source = mock_data_source + + assert cache_mgr.tracker_config_data == tracker_config + + +class TestCacheManagerRefreshScenes: + """Test scene refresh functionality.""" + + def test_refresh_scenes_with_empty_results(self): + """Test that refreshScenes handles empty results gracefully.""" + mock_data_source = Mock() + mock_data_source.getScenes.return_value = {'results': []} + + cache_mgr = CacheManager.__new__(CacheManager) + cache_mgr.cached_scenes_by_uid = {} + cache_mgr._cached_scenes_by_cameraID = {} + cache_mgr._cached_scenes_by_sensorID = {} + cache_mgr.tracker_config_data = {} + cache_mgr.data_source = mock_data_source + + cache_mgr.refreshScenes() + + 
assert len(cache_mgr.cached_scenes_by_uid) == 0 + + def test_refresh_scenes_handles_failed_request(self): + """Test that refreshScenes handles failed data source requests.""" + mock_data_source = Mock() + mock_response = Mock() + mock_response.statusCode = 500 + mock_response.__contains__ = Mock(return_value=False) + mock_data_source.getScenes.return_value = mock_response + + cache_mgr = CacheManager.__new__(CacheManager) + cache_mgr.cached_scenes_by_uid = {} + cache_mgr._cached_scenes_by_cameraID = {} + cache_mgr._cached_scenes_by_sensorID = {} + cache_mgr.tracker_config_data = {} + cache_mgr.data_source = mock_data_source + + with patch('controller.cache_manager.log.error') as mock_log_error: + # Should not raise, just return without updating cache + cache_mgr.refreshScenes() + + mock_log_error.assert_called_once_with("Failed to get results, error code: ", 500) + + assert len(cache_mgr.cached_scenes_by_uid) == 0 + + def test_refresh_scenes_sets_cache_timestamp(self): + """Test that refreshScenes sets the cache refresh timestamp.""" + mock_data_source = Mock() + mock_data_source.getScenes.return_value = {'results': []} + + cache_mgr = CacheManager.__new__(CacheManager) + cache_mgr.cached_scenes_by_uid = {} + cache_mgr._cached_scenes_by_cameraID = {} + cache_mgr._cached_scenes_by_sensorID = {} + cache_mgr.tracker_config_data = {} + cache_mgr.data_source = mock_data_source + + cache_mgr.refreshScenes() + + assert hasattr(cache_mgr, '_cache_refreshed') + assert cache_mgr._cache_refreshed > 0 + + +class TestCacheManagerCameraParameters: + """Test camera parameter management.""" + + def test_camera_parameters_changed_with_intrinsics(self): + """Test detecting intrinsics parameter changes.""" + cache_mgr = CacheManager.__new__(CacheManager) + cache_mgr.camera_parameters = {} + + message = { + 'id': 'cam-1', + 'intrinsics': {'cx': 320, 'cy': 240, 'fx': 500, 'fy': 500} + } + + result = cache_mgr.cameraParametersChanged(message, 'intrinsics') + + # Should detect change on 
first call + assert result is True + assert cache_mgr.camera_parameters['cam-1']['intrinsics'] == message['intrinsics'] + + def test_camera_parameters_changed_with_distortion(self): + """Test detecting distortion parameter changes.""" + cache_mgr = CacheManager.__new__(CacheManager) + cache_mgr.camera_parameters = {} + + message = { + 'id': 'cam-1', + 'distortion': {'k1': 0.1, 'k2': 0.01, 'p1': 0.001, 'p2': 0.001, 'k3': 0.0} + } + + result = cache_mgr.cameraParametersChanged(message, 'distortion') + + assert result is True + assert cache_mgr.camera_parameters['cam-1']['distortion'] == message['distortion'] + + def test_camera_parameters_no_change_on_duplicate(self): + """Test that duplicate parameters are not considered changed.""" + cache_mgr = CacheManager.__new__(CacheManager) + cache_mgr.camera_parameters = {} + + message = { + 'id': 'cam-1', + 'intrinsics': {'cx': 320, 'cy': 240} + } + + # First call should detect change + result1 = cache_mgr.cameraParametersChanged(message, 'intrinsics') + assert result1 is True + + # Second call with same data should not detect change + result2 = cache_mgr.cameraParametersChanged(message, 'intrinsics') + assert result2 is False + + def test_camera_parameters_changed_no_message_parameters(self): + """Test handling when message has no parameters.""" + cache_mgr = CacheManager.__new__(CacheManager) + cache_mgr.camera_parameters = {} + + message = {'id': 'cam-1'} + + result = cache_mgr.cameraParametersChanged(message, 'intrinsics') + + assert result is False + + +class TestCacheManagerQueryMethods: + """Test cache query methods.""" + + def test_all_scenes_returns_cached_scenes(self): + """Test allScenes returns all cached scenes.""" + cache_mgr = CacheManager.__new__(CacheManager) + mock_scene1 = Mock(spec=Scene) + mock_scene2 = Mock(spec=Scene) + cache_mgr.cached_scenes_by_uid = {'scene-1': mock_scene1, 'scene-2': mock_scene2} + cache_mgr.data_source = Mock() + cache_mgr.data_source.getScenes.return_value = {'results': []} + 
cache_mgr._cache_refreshed = 0 + + scenes = list(cache_mgr.allScenes()) + + assert len(scenes) == 2 + + def test_scene_with_id(self): + """Test retrieving scene by ID.""" + cache_mgr = CacheManager.__new__(CacheManager) + mock_scene = Mock(spec=Scene) + mock_scene.uid = 'scene-1' + cache_mgr.cached_scenes_by_uid = {'scene-1': mock_scene} + cache_mgr.data_source = Mock() + cache_mgr.data_source.getScenes.return_value = {'results': []} + cache_mgr._cache_refreshed = 0 + + scene = cache_mgr.sceneWithID('scene-1') + + assert scene is not None + assert scene.uid == 'scene-1' + + def test_scene_with_invalid_id_returns_none(self): + """Test that invalid scene ID returns None.""" + cache_mgr = CacheManager.__new__(CacheManager) + cache_mgr.cached_scenes_by_uid = {} + cache_mgr._cache_refreshed = 0 + cache_mgr.data_source = Mock() + cache_mgr.data_source.getScenes.return_value = {'results': []} + + scene = cache_mgr.sceneWithID('invalid-uid') + + assert scene is None + + def test_scene_with_camera_id(self): + """Test retrieving scene by camera ID.""" + cache_mgr = CacheManager.__new__(CacheManager) + mock_scene = Mock(spec=Scene) + mock_scene.uid = 'scene-1' + cache_mgr.cached_scenes_by_uid = {'scene-1': mock_scene} + cache_mgr._cached_scenes_by_cameraID = {'cam-1': mock_scene} + cache_mgr.data_source = Mock() + cache_mgr.data_source.getScenes.return_value = {'results': []} + cache_mgr._cache_refreshed = 0 + + scene = cache_mgr.sceneWithCameraID('cam-1') + + assert scene is not None + assert scene.uid == 'scene-1' + + def test_scene_with_invalid_camera_id_returns_none(self): + """Test that invalid camera ID returns None.""" + cache_mgr = CacheManager.__new__(CacheManager) + cache_mgr._cached_scenes_by_cameraID = {} + cache_mgr._cache_refreshed = 0 + cache_mgr.data_source = Mock() + cache_mgr.data_source.getScenes.return_value = {'results': []} + + scene = cache_mgr.sceneWithCameraID('invalid-cam-id') + + assert scene is None + + def test_scene_with_sensor_id(self): + 
"""Test retrieving scene by sensor ID.""" + cache_mgr = CacheManager.__new__(CacheManager) + mock_scene = Mock(spec=Scene) + mock_scene.uid = 'scene-1' + cache_mgr.cached_scenes_by_uid = {'scene-1': mock_scene} + cache_mgr._cached_scenes_by_sensorID = {'sensor-1': mock_scene} + cache_mgr.data_source = Mock() + cache_mgr.data_source.getScenes.return_value = {'results': []} + cache_mgr._cache_refreshed = 0 + + scene = cache_mgr.sceneWithSensorID('sensor-1') + + assert scene is not None + assert scene.uid == 'scene-1' + + def test_scene_with_invalid_sensor_id_returns_none(self): + """Test that invalid sensor ID returns None.""" + cache_mgr = CacheManager.__new__(CacheManager) + cache_mgr._cached_scenes_by_sensorID = {} + cache_mgr._cache_refreshed = 0 + cache_mgr.data_source = Mock() + cache_mgr.data_source.getScenes.return_value = {'results': []} + + scene = cache_mgr.sceneWithSensorID('invalid-sensor-id') + + assert scene is None + + +class TestCacheManagerInvalidation: + """Test cache invalidation.""" + + def test_invalidate_clears_cache(self): + """Test that invalidate clears the scene cache.""" + cache_mgr = CacheManager.__new__(CacheManager) + mock_scene = Mock(spec=Scene) + cache_mgr.cached_scenes_by_uid = {'scene-1': mock_scene} + cache_mgr.cached_child_transforms_by_uid = {} + + cache_mgr.invalidate() + + assert cache_mgr.cached_scenes_by_uid is None + + def test_invalidate_preserves_old_cache(self): + """Test that invalidate preserves old cache for sensor restoration.""" + cache_mgr = CacheManager.__new__(CacheManager) + mock_scene = Mock(spec=Scene) + original_cache = {'scene-1': mock_scene} + cache_mgr.cached_scenes_by_uid = original_cache.copy() + cache_mgr.cached_child_transforms_by_uid = {} + + cache_mgr.invalidate() + + # Old cache should be preserved + assert hasattr(cache_mgr, '_old_scene_cache') + assert cache_mgr._old_scene_cache is not None + + def test_check_refresh_recreates_cache(self): + """Test that checkRefresh recreates cache when None.""" 
+ mock_data_source = Mock() + mock_data_source.getScenes.return_value = {'results': []} + + cache_mgr = CacheManager.__new__(CacheManager) + cache_mgr.cached_scenes_by_uid = None + cache_mgr._cached_scenes_by_cameraID = {} + cache_mgr._cached_scenes_by_sensorID = {} + cache_mgr.tracker_config_data = {} + cache_mgr.data_source = mock_data_source + + cache_mgr.checkRefresh() + + assert cache_mgr.cached_scenes_by_uid is not None + + +class TestCacheManagerSensorRestoration: + """Test sensor cache restoration functionality.""" + + def test_restore_sensor_cache_copies_values(self): + """Test that sensor cache values are properly restored.""" + cache_mgr = CacheManager.__new__(CacheManager) + + # Create mock old and new sensors + old_sensor = Mock() + old_sensor.value = 42 + old_sensor.lastValue = 41 + old_sensor.lastWhen = 1234567890 + + new_sensor = Mock() + new_sensor.value = None + new_sensor.lastValue = None + new_sensor.lastWhen = None + + old_scene = Mock() + old_scene.sensors = {'sensor-1': old_sensor} + + new_scene = Mock() + new_scene.sensors = {'sensor-1': new_sensor} + + cache_mgr._restoreSensorCache('scene-uid', old_scene, new_scene) + + assert new_sensor.value == 42 + assert new_sensor.lastValue == 41 + assert new_sensor.lastWhen == 1234567890 + + def test_sensor_needs_restoring_returns_none_when_no_cache(self): + """Test that sensorNeedsRestoring returns None when no old cache.""" + cache_mgr = CacheManager.__new__(CacheManager) + + if hasattr(cache_mgr, '_old_scene_cache'): + delattr(cache_mgr, '_old_scene_cache') + + result = cache_mgr._sensorNeedsRestoring('scene-uid') + + assert result is None + + def test_sensor_needs_restoring_returns_old_scene(self): + """Test that sensorNeedsRestoring returns old scene when available.""" + cache_mgr = CacheManager.__new__(CacheManager) + + old_scene = Mock() + cache_mgr._old_scene_cache = {'scene-uid': old_scene} + + result = cache_mgr._sensorNeedsRestoring('scene-uid') + + assert result == old_scene + + +class 
TestCacheManagerRefreshCameras: + """Test camera refresh functionality.""" + + def test_refresh_cameras_processes_all_cameras(self): + """Test that _refreshCameras processes camera data.""" + cache_mgr = CacheManager.__new__(CacheManager) + cache_mgr.data_source = Mock() + cache_mgr.camera_parameters = {} + + scene_data = { + 'cameras': [ + { + 'uid': 'cam-1', + 'intrinsics': {'cx': 320, 'cy': 240}, + 'distortion': {} + } + ] + } + + cache_mgr._refreshCameras(scene_data) + + # Should not raise any errors + + def test_refresh_cameras_with_no_cameras(self): + """Test _refreshCameras with empty camera list.""" + cache_mgr = CacheManager.__new__(CacheManager) + cache_mgr.data_source = Mock() + cache_mgr.camera_parameters = {} + + scene_data = {'cameras': []} + + cache_mgr._refreshCameras(scene_data) + + # Should not raise any errors + + def test_refresh_scenes_for_cam_params(self): + """Test refreshScenesForCamParams updates camera parameters.""" + cache_mgr = CacheManager.__new__(CacheManager) + + mock_scene = Mock(spec=Scene) + mock_camera = Mock() + mock_camera.cameraID = 'cam-1' + mock_camera.pose = Mock() + mock_camera.pose.resolution = [640, 480] + + mock_scene.cameras = {'cam-1': mock_camera} + cache_mgr.cached_scenes_by_uid = {'scene-1': mock_scene} + cache_mgr.camera_parameters = {} + cache_mgr.data_source = Mock() + cache_mgr.data_source.getScenes.return_value = {'results': []} + cache_mgr._cached_scenes_by_cameraID = {} + cache_mgr._cached_scenes_by_sensorID = {} + cache_mgr.tracker_config_data = {} + + jdata = { + 'id': 'cam-1', + 'intrinsics': {'cx': 320, 'cy': 240, 'fx': 500, 'fy': 500} + } + + cache_mgr.refreshScenesForCamParams(jdata) + + # Should not raise any errors + + +class TestCacheManagerEdgeCases: + """Test edge cases and error conditions.""" + + def test_multiple_refresh_calls_are_idempotent(self): + """Test that multiple refresh calls don't cause issues.""" + mock_data_source = Mock() + mock_data_source.getScenes.return_value = {'results': []} 
+ + cache_mgr = CacheManager.__new__(CacheManager) + cache_mgr.cached_scenes_by_uid = {} + cache_mgr._cached_scenes_by_cameraID = {} + cache_mgr._cached_scenes_by_sensorID = {} + cache_mgr.tracker_config_data = {} + cache_mgr.data_source = mock_data_source + + cache_mgr.refreshScenes() + count_before = len(cache_mgr.cached_scenes_by_uid) + + cache_mgr.refreshScenes() + count_after = len(cache_mgr.cached_scenes_by_uid) + + assert count_before == count_after + + def test_cache_access_without_initialization(self): + """Test cache access methods handle uninitialized state.""" + mock_data_source = Mock() + mock_data_source.getScenes.return_value = {'results': []} + + cache_mgr = CacheManager.__new__(CacheManager) + cache_mgr.cached_scenes_by_uid = None + cache_mgr._cached_scenes_by_cameraID = {} + cache_mgr._cached_scenes_by_sensorID = {} + cache_mgr.data_source = mock_data_source + + # Should handle gracefully + scenes = list(cache_mgr.allScenes()) + + assert len(scenes) == 0 + + def test_concurrent_cache_access(self): + """Test cache can be accessed without errors.""" + cache_mgr = CacheManager.__new__(CacheManager) + mock_scene = Mock(spec=Scene) + cache_mgr.cached_scenes_by_uid = {'scene-1': mock_scene, 'scene-2': mock_scene} + cache_mgr._cache_refreshed = 0 + cache_mgr.data_source = Mock() + cache_mgr.data_source.getScenes.return_value = {'results': []} + + # Simulate concurrent access + scenes1 = list(cache_mgr.allScenes()) + scenes2 = list(cache_mgr.allScenes()) + + assert len(scenes1) == len(scenes2) diff --git a/tests/sscape_tests/scenescape/test_detections_builder.py b/tests/sscape_tests/scenescape/test_detections_builder.py new file mode 100644 index 000000000..df554f2b4 --- /dev/null +++ b/tests/sscape_tests/scenescape/test_detections_builder.py @@ -0,0 +1,164 @@ +# SPDX-FileCopyrightText: (C) 2026 Intel Corporation +# SPDX-License-Identifier: Apache-2.0 + +from types import SimpleNamespace +from unittest.mock import patch + +import numpy as np +import 
pytest + +from controller.detections_builder import buildDetectionsDict, buildDetectionsList, prepareObjDict +from controller.scene import TripwireEvent +from controller.moving_object import ChainData +from scene_common.geometry import Point +from scene_common.timestamp import get_iso_time + + +def _build_object(*, velocity=None, include_sensor_payload=True): + chain_data = ChainData( + regions={'region-a': {'entered': '2026-03-31T10:00:00Z'}}, + publishedLocations=[], + persist={'asset_tag': 'forklift-7'}, + ) + + if include_sensor_payload: + chain_data.env_sensor_state['temp-1'] = {'readings': [('2026-03-31T10:00:00Z', 21.5)]} + chain_data.attr_sensor_events['badge-1'] = [('2026-03-31T10:00:00Z', 'authorized')] + + return SimpleNamespace( + category='person', + gid='object-1', + sceneLoc=Point(1.0, 2.0, 3.0), + velocity=velocity, + info={'source': 'camera-1', 'bb_meters': {'width': 1.2, 'height': 2.3}}, + size={'width': 1.2, 'height': 2.3, 'length': 0.7}, + rotation={'yaw': 90.0}, + metadata={'age': 'adult', 'reid': 'discard-me'}, + reid={'embedding_vector': np.array([0.1, 0.2], dtype=np.float32), 'model_name': 'reid-model'}, + visibility={'cam-1': True}, + vectors=[SimpleNamespace(camera=SimpleNamespace(cameraID='cam-1'))], + boundingBoxPixels=SimpleNamespace(asDict={'x': 10, 'y': 20, 'width': 30, 'height': 40}), + chain_data=chain_data, + confidence=0.97, + similarity=0.88, + first_seen=1711886400, + asset_scale=1.25 + ) + + +class TestDetectionsBuilder: + def test_build_detections_list_returns_empty_for_no_objects(self): + scene = SimpleNamespace(output_lla=False) + + detections = buildDetectionsList([], scene) + + assert detections == [] + + def test_build_detections_dict_returns_empty_for_no_objects(self): + scene = SimpleNamespace(output_lla=False) + + detections = buildDetectionsDict([], scene) + + assert detections == {} + + def test_build_detections_list_serializes_metadata_sensors_and_visibility(self): + obj = _build_object(velocity=Point(4.0, 5.0)) + 
scene = SimpleNamespace(output_lla=False) + + detections = buildDetectionsList([obj], scene, update_visibility=True, include_sensors=True) + + assert len(detections) == 1 + + detection = detections[0] + assert detection['id'] == 'object-1' + assert detection['type'] == 'person' + assert detection['translation'] == [1.0, 2.0, 3.0] + assert detection['velocity'] == [4.0, 5.0] + assert detection['rotation'] == {'yaw': 90.0} + assert detection['metadata']['age'] == 'adult' + assert np.allclose(detection['metadata']['reid']['embedding_vector'], [0.1, 0.2]) + assert detection['metadata']['reid']['model_name'] == 'reid-model' + assert detection['sensors']['temp-1']['values'][0][1] == 21.5 + assert detection['sensors']['badge-1']['values'][0][1] == 'authorized' + assert detection['regions'] == {'region-a': {'entered': '2026-03-31T10:00:00Z'}} + assert detection['camera_bounds'] == {'cam-1': {'x': 10, 'y': 20, 'width': 30, 'height': 40}} + assert detection['persistent_data'] == {'asset_tag': 'forklift-7'} + assert detection['first_seen'] == get_iso_time(obj.first_seen) + + def test_build_detections_list_omits_sensor_data_when_disabled(self): + obj = _build_object(velocity=Point(4.0, 5.0)) + scene = SimpleNamespace(output_lla=False) + + detections = buildDetectionsList([obj], scene, include_sensors=False) + + assert len(detections) == 1 + assert 'sensors' not in detections[0] + assert detections[0]['regions'] == {'region-a': {'entered': '2026-03-31T10:00:00Z'}} + + def test_build_detections_list_does_not_leak_sensors_between_calls(self): + obj = _build_object(velocity=Point(4.0, 5.0)) + scene = SimpleNamespace(output_lla=False) + + with_sensors = buildDetectionsList([obj], scene, include_sensors=True) + without_sensors = buildDetectionsList([obj], scene, include_sensors=False) + + assert 'sensors' in with_sensors[0] + assert 'sensors' not in without_sensors[0] + + def test_build_detections_dict_handles_tripwire_and_defaults_missing_velocity(self): + obj = 
_build_object(velocity=None, include_sensor_payload=False) + scene = SimpleNamespace(output_lla=False) + event = TripwireEvent(obj, 'entering') + + detections = buildDetectionsDict([event], scene) + + assert list(detections.keys()) == ['object-1'] + detection = detections['object-1'] + assert detection['velocity'] == [0, 0] + assert detection['direction'] == 'entering' + assert 'sensors' not in detection + + def test_prepare_obj_dict_omits_reid_metadata_when_embedding_is_none(self): + obj = _build_object(velocity=Point(4.0, 5.0), include_sensor_payload=False) + obj.reid = {'embedding_vector': None, 'model_name': 'ignored-model'} + + detection = prepareObjDict(SimpleNamespace(output_lla=False), obj, update_visibility=False) + + assert 'reid' not in detection['metadata'] + + def test_prepare_obj_dict_handles_chain_data_without_sensor_fields(self): + obj = _build_object(velocity=Point(4.0, 5.0), include_sensor_payload=False) + obj.chain_data = ChainData(regions=[], persist={}, publishedLocations=[]) + + detection = prepareObjDict(SimpleNamespace(output_lla=False), obj, update_visibility=False, include_sensors=True) + + assert 'sensors' not in detection + + def test_prepare_obj_dict_raises_with_missing_gid(self): + obj = _build_object(velocity=Point(4.0, 5.0), include_sensor_payload=False) + del obj.gid + + with pytest.raises(AttributeError): + prepareObjDict(SimpleNamespace(output_lla=False), obj, update_visibility=False) + + def test_prepare_obj_dict_raises_with_missing_chain_data(self): + obj = _build_object(velocity=Point(4.0, 5.0), include_sensor_payload=False) + del obj.chain_data + + with pytest.raises(AttributeError): + prepareObjDict(SimpleNamespace(output_lla=False), obj, update_visibility=False) + + @patch('controller.detections_builder.calculateHeading') + @patch('controller.detections_builder.convertXYZToLLA') + def test_prepare_obj_dict_adds_lla_output_when_enabled(self, mock_convert_xyz_to_lla, mock_calculate_heading): + obj = 
_build_object(velocity=Point(4.0, 5.0, 6.0), include_sensor_payload=False) + scene = SimpleNamespace(output_lla=True, trs_xyz_to_lla='trs-transform') + mock_convert_xyz_to_lla.return_value = np.array([45.0, -122.0, 12.0]) + mock_calculate_heading.return_value = np.array([180.0]) + + detection = prepareObjDict(scene, obj, update_visibility=False) + + assert detection['lat_long_alt'] == [45.0, -122.0, 12.0] + assert detection['heading'] == [180.0] + mock_convert_xyz_to_lla.assert_called_once_with('trs-transform', [1.0, 2.0, 3.0]) + mock_calculate_heading.assert_called_once_with('trs-transform', [1.0, 2.0, 3.0], [4.0, 5.0, 6.0]) diff --git a/tests/sscape_tests/scenescape/test_moving_object.py b/tests/sscape_tests/scenescape/test_moving_object.py new file mode 100644 index 000000000..ebfe4df28 --- /dev/null +++ b/tests/sscape_tests/scenescape/test_moving_object.py @@ -0,0 +1,144 @@ +# SPDX-FileCopyrightText: (C) 2026 Intel Corporation +# SPDX-License-Identifier: Apache-2.0 + +import base64 +import datetime +from types import SimpleNamespace +from unittest.mock import patch + +import numpy as np + +from controller.moving_object import ChainData, Chronoloc, MovingObject +from scene_common.geometry import Point, Rectangle + + +def _camera(): + return SimpleNamespace(cameraID='cam-1') + + +def _base_info(*, object_id='obj-1', metadata=None): + info = { + 'id': object_id, + 'category': 'person', + 'confidence': 0.9, + 'bounding_box': {'x': 0.1, 'y': 0.2, 'width': 0.3, 'height': 0.4} + } + if metadata is not None: + info['metadata'] = metadata + return info + + +class TestChainData: + def test_chain_data_defaults_include_sensor_maps(self): + chain_data = ChainData(regions={}, publishedLocations=[], persist={}) + + assert chain_data.active_sensors == set() + assert chain_data.env_sensor_state == {} + assert chain_data.attr_sensor_events == {} + + +class TestMovingObject: + def test_init_extracts_reid_and_keeps_metadata(self): + when = 
datetime.datetime.now(datetime.timezone.utc) + metadata = { + 'age': 'adult', + 'reid': {'embedding_vector': [0.1, 0.2], 'model_name': 'reid-v1'} + } + + obj = MovingObject(_base_info(metadata=metadata), when, _camera()) + + assert obj.metadata['age'] == 'adult' + assert obj.reid['model_name'] == 'reid-v1' + assert obj.reid['embedding_vector'] == [0.1, 0.2] + assert 'metadata' not in obj.info + + def test_init_decodes_base64_reid_vector(self): + when = datetime.datetime.now(datetime.timezone.utc) + vector = np.zeros(256, dtype=np.float32) + encoded = base64.b64encode(vector.tobytes()).decode('utf-8') + metadata = { + 'reid': {'embedding_vector': encoded, 'model_name': 'reid-v2'} + } + + obj = MovingObject(_base_info(metadata=metadata), when, _camera()) + + assert obj.reid['model_name'] == 'reid-v2' + assert obj.reid['embedding_vector'].shape == (1, 256) + assert np.allclose(obj.reid['embedding_vector'], 0.0) + + def test_set_persistent_attributes_stores_full_and_partial_values(self): + when = datetime.datetime.now(datetime.timezone.utc) + obj = MovingObject(_base_info(), when, _camera()) + info = { + 'color': [{'value': 'red', 'model_name': 'palette-v1', 'confidence': 0.88}], + 'license': {'plate': 'ABC123', 'state': 'CA', 'country': 'US'} + } + + obj.setPersistentAttributes(info, ['color', {'license': 'plate,state'}]) + + assert obj.chain_data.persist['color']['value'] == 'red' + assert obj.chain_data.persist['color']['model_name'] == 'palette-v1' + assert obj.chain_data.persist['license']['plate'] == 'ABC123' + assert obj.chain_data.persist['license']['state'] == 'CA' + assert 'country' not in obj.chain_data.persist['license'] + + def test_set_previous_merges_persistent_data_and_carries_chain_fields(self): + when = datetime.datetime.now(datetime.timezone.utc) + bounds = Rectangle({'x': 0.0, 'y': 0.0, 'width': 1.0, 'height': 1.0}) + + current_obj = MovingObject(_base_info(object_id='obj-current'), when, _camera()) + current_obj.location = [Chronoloc(Point(1.0, 
1.0, 0.0), when, bounds)] + current_obj.chain_data = ChainData( + regions={}, + publishedLocations=[Point(0.0, 0.0, 0.0)], + persist={'attr': {'a': None, 'b': 'new'}} + ) + + previous_obj = MovingObject(_base_info(object_id='obj-prev'), when, _camera()) + previous_obj.location = [Chronoloc(Point(2.0, 2.0, 0.0), when, bounds)] + previous_obj.chain_data = ChainData( + regions={}, + publishedLocations=[Point(0.0, 0.0, 0.0)], + persist={'attr': {'a': 'old', 'b': 'old-b'}} + ) + previous_obj.gid = 'gid-1' + previous_obj.first_seen = when + previous_obj.frameCount = 3 + + current_obj.setPrevious(previous_obj) + + assert current_obj.gid == 'gid-1' + assert current_obj.frameCount == 4 + assert current_obj.first_seen == when + assert len(current_obj.location) == 2 + assert current_obj.chain_data.persist['attr']['a'] == 'old' + assert current_obj.chain_data.persist['attr']['b'] == 'old-b' + + def test_infer_rotation_from_velocity_applies_quaternion_above_threshold(self): + when = datetime.datetime.now(datetime.timezone.utc) + obj = MovingObject(_base_info(), when, _camera()) + obj.rotation_from_velocity = True + obj.velocity = Point(1.0, 0.0, 0.0) + + with patch('controller.moving_object.rotationToTarget') as mock_rotation_to_target: + mock_rotation_to_target.return_value = SimpleNamespace( + as_quat=lambda: np.array([0.0, 0.0, 0.0, 1.0]) + ) + + obj.inferRotationFromVelocity() + + mock_rotation_to_target.assert_called_once() + assert obj.rotation == [0.0, 0.0, 0.0, 1.0] + + def test_infer_rotation_from_velocity_skips_when_speed_below_threshold(self): + when = datetime.datetime.now(datetime.timezone.utc) + obj = MovingObject(_base_info(), when, _camera()) + obj.rotation_from_velocity = True + obj.velocity = Point(0.01, 0.0, 0.0) + original_rotation = list(obj.rotation) + + with patch('controller.moving_object.rotationToTarget') as mock_rotation_to_target: + obj.inferRotationFromVelocity() + + mock_rotation_to_target.assert_not_called() + assert obj.rotation == 
diff --git a/tests/sscape_tests/scenescape/test_scene_controller.py b/tests/sscape_tests/scenescape/test_scene_controller.py
new file mode 100644
index 000000000..bb2c68685
--- /dev/null
+++ b/tests/sscape_tests/scenescape/test_scene_controller.py
@@ -0,0 +1,144 @@
+#!/usr/bin/env python3
+
+# SPDX-FileCopyrightText: (C) 2026 Intel Corporation
+# SPDX-License-Identifier: Apache-2.0
+
+import pytest
+
+from controller.scene_controller import SceneController
+
+
+class TestSceneControllerExtractTrackerRate:
+    """Unit tests for SceneController._extractTrackerRate."""
+
+    def test_extract_tracker_rate_returns_default_when_missing(self):
+        """Returns default fps when parameter is not present in config."""
+        scene_controller = SceneController.__new__(SceneController)
+
+        tracker_config = {}
+        default_rate = 15
+
+        result = scene_controller._extractTrackerRate(
+            tracker_config,
+            'effective_object_update_rate',
+            default_rate,
+        )
+
+        assert result == default_rate
+
+    @pytest.mark.parametrize(
+        'raw_rate,expected_rate',
+        [
+            (30, 30),
+            ('24', 24),
+        ],
+    )
+    def test_extract_tracker_rate_returns_valid_integer_rates(self, raw_rate, expected_rate):
+        """Returns parsed integer when config contains a valid rate."""
+        scene_controller = SceneController.__new__(SceneController)
+        tracker_config = {'effective_object_update_rate': raw_rate}
+
+        result = scene_controller._extractTrackerRate(
+            tracker_config,
+            'effective_object_update_rate',
+            15,
+        )
+
+        assert result == expected_rate
+
+    def test_extract_tracker_rate_accepts_min_and_max_boundaries(self):
+        """Accepts values equal to provided min/max boundaries."""
+        scene_controller = SceneController.__new__(SceneController)
+
+        min_config = {'effective_object_update_rate': 10}
+        max_config = {'effective_object_update_rate': 30}
+
+        min_result = scene_controller._extractTrackerRate(
+            min_config,
+            'effective_object_update_rate',
+            15,
+            min_rate=10,
+        )
+        max_result = scene_controller._extractTrackerRate(
+            max_config,
+            'effective_object_update_rate',
+            15,
+            max_rate=30,
+        )
+
+        assert min_result == 10
+        assert max_result == 30
+
+    @pytest.mark.parametrize(
+        'raw_rate,min_rate,max_rate',
+        [
+            (0, None, None),
+            ('abc', None, None),
+            (5, 10, None),
+            (45, None, 30),
+        ],
+    )
+    def test_extract_tracker_rate_raises_for_invalid_values(
+        self,
+        raw_rate,
+        min_rate,
+        max_rate,
+    ):
+        """Raises ValueError for malformed or out-of-range rates."""
+        scene_controller = SceneController.__new__(SceneController)
+        tracker_config = {'effective_object_update_rate': raw_rate}
+
+        with pytest.raises(ValueError, match='Invalid value for effective_object_update_rate'):
+            scene_controller._extractTrackerRate(
+                tracker_config,
+                'effective_object_update_rate',
+                30,
+                min_rate=min_rate,
+                max_rate=max_rate,
+            )
+
+
+class _BoolRaises:
+    """Helper that raises during bool conversion to exercise exception path."""
+
+    def __bool__(self):
+        raise TypeError('cannot convert to bool')
+
+
+class TestSceneControllerExtractTimeChunkingEnabled:
+    """Unit tests for SceneController._extractTimeChunkingEnabled."""
+
+    def test_extract_time_chunking_enabled_defaults_to_false_when_missing(self):
+        """Sets time chunking to False when key is missing."""
+        scene_controller = SceneController.__new__(SceneController)
+        scene_controller.tracker_config_data = {}
+
+        scene_controller._extractTimeChunkingEnabled({})
+
+        assert scene_controller.tracker_config_data['time_chunking_enabled'] is False
+
+    @pytest.mark.parametrize(
+        'raw_value,expected_value',
+        [
+            (True, True),
+            (False, False),
+            (1, True),
+            (0, False),
+        ],
+    )
+    def test_extract_time_chunking_enabled_sets_boolean_value(self, raw_value, expected_value):
+        """Stores bool-converted value when key is present."""
+        scene_controller = SceneController.__new__(SceneController)
+        scene_controller.tracker_config_data = {}
+
+        scene_controller._extractTimeChunkingEnabled({'time_chunking_enabled': raw_value})
+
+        assert scene_controller.tracker_config_data['time_chunking_enabled'] is expected_value
+
+    def test_extract_time_chunking_enabled_raises_for_unboolable_value(self):
+        """Raises ValueError when bool conversion fails."""
+        scene_controller = SceneController.__new__(SceneController)
+        scene_controller.tracker_config_data = {}
+
+        with pytest.raises(ValueError, match='Invalid value for time_chunking_enabled'):
+            scene_controller._extractTimeChunkingEnabled({'time_chunking_enabled': _BoolRaises()})
diff --git a/tests/sscape_tests/singleton_sensor/__init__.py b/tests/sscape_tests/singleton_sensor/__init__.py
index c6953bbd2..e87e499e6 100644
--- a/tests/sscape_tests/singleton_sensor/__init__.py
+++ b/tests/sscape_tests/singleton_sensor/__init__.py
@@ -3,6 +3,5 @@
 from .test_singleton_sensor_delete import SingletonSensorDeleteTestCase
 from .test_singleton_sensor_update import SingletonSensorUpdateTestCase
-from .test_singleton_sensor_detail import SingletonSensorDetailTestCase
 from .test_singleton_sensor_create import SingletonSensorCreateTestCase
 from .test_singleton_sensor_list import SingletonSensorListTestCase
diff --git a/tests/sscape_tests/singleton_sensor/test_singleton_sensor_detail.py b/tests/sscape_tests/singleton_sensor/test_singleton_sensor_detail.py
deleted file mode 100644
index 67baa0add..000000000
--- a/tests/sscape_tests/singleton_sensor/test_singleton_sensor_detail.py
+++ /dev/null
@@ -1,21 +0,0 @@
-# SPDX-FileCopyrightText: (C) 2025 Intel Corporation
-# SPDX-License-Identifier: Apache-2.0
-
-from django.test import TestCase
-from django.urls import reverse
-from manager.models import SingletonSensor, Scene
-from django.contrib.auth.models import User
-from django.test.client import RequestFactory
-
-class SingletonSensorDetailTestCase(TestCase):
-    def setUp(self):
-        self.factory = RequestFactory()
-        request = self.factory.get('/')
-        self.user = User.objects.create_superuser('test_user', 'test_user@intel.com', 'testpassword')
-        self.client.post(reverse('sign_in'), data = {'username': 'test_user', 'password': 'testpassword', 'request': request})
-        testScene = Scene.objects.create(name = "test_scene")
-        SingletonSensor.objects.create(sensor_id="100", name="test_camera", scene = testScene)
-
-    def test_singleton_sensor_detail_page(self):
-        response = self.client.get('/singleton_sensor/calibrate/1')
-        self.assertEqual(response.status_code, 200)
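For context, the helpers exercised by the new tests can be approximated by the following minimal sketch. This is an assumption inferred purely from the assertions above (the names, error messages, and `min_rate`/`max_rate` keywords are taken from the tests); the real implementations live in `controller/scene_controller.py` and may differ.

```python
# Hypothetical sketch of the two SceneController helpers under test,
# reconstructed from the test expectations; not the actual implementation.

class SceneControllerSketch:
    def __init__(self):
        self.tracker_config_data = {}

    def _extractTrackerRate(self, config, key, default, min_rate=None, max_rate=None):
        # Missing key falls back to the caller-supplied default.
        if key not in config:
            return default
        try:
            rate = int(config[key])
        except (TypeError, ValueError):
            raise ValueError(f"Invalid value for {key}")
        # Reject non-positive rates and values outside the optional bounds;
        # values equal to min_rate/max_rate are accepted.
        if rate <= 0:
            raise ValueError(f"Invalid value for {key}")
        if min_rate is not None and rate < min_rate:
            raise ValueError(f"Invalid value for {key}")
        if max_rate is not None and rate > max_rate:
            raise ValueError(f"Invalid value for {key}")
        return rate

    def _extractTimeChunkingEnabled(self, config):
        # Missing key defaults to False; any failure during bool() conversion
        # (e.g. a __bool__ that raises) surfaces as ValueError.
        try:
            enabled = bool(config.get('time_chunking_enabled', False))
        except Exception:
            raise ValueError("Invalid value for time_chunking_enabled")
        self.tracker_config_data['time_chunking_enabled'] = enabled
```

Constructing the object under test with `SceneController.__new__(SceneController)` in the tests skips `__init__`, which is why the time-chunking tests assign `tracker_config_data` by hand before calling the helper.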