I’m investigating the replayability/determinism of estimator update boundaries (fp32) under valid message-ordering differences (callback timing, batching, scheduling).
Concrete repro in NASA ODTBX (`kalmup`): batch vs. sequential measurement updates, as well as different valid update orders, can yield fp32 results that differ at the bit level.
Repro repo: https://github.com/StanByriukov02/odtbx-order-sensitivity
sha256(odtbx_order_sensitivity.zip)=6c45f8a650d1b8e67730cbf0c3f8d4a9b214dd04d0b04c1bc3e76a257cfe6cfc
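For context, the effect I’m describing is easy to reproduce outside ODTBX. This is a minimal sketch (my own toy numbers, not taken from `kalmup` or from robot_localization): two scalar Kalman measurement updates applied in float32, in two valid orders. The posteriors agree to rounding, but the bit patterns generally do not, because float32 rounding is not associative across the two orderings.

```python
# Toy 2-state Kalman filter; all values are illustrative assumptions.
import numpy as np

def seq_update(x, P, z, H, R):
    """One scalar measurement update, with every intermediate held in float32."""
    S = (H @ P @ H.T + R).astype(np.float32)                  # innovation covariance, 1x1
    K = (P @ H.T / S).astype(np.float32)                      # Kalman gain, 2x1
    x = (x + K @ (z - H @ x)).astype(np.float32)              # state update
    P = ((np.eye(2, dtype=np.float32) - K @ H) @ P).astype(np.float32)
    return x, P

x0 = np.array([1.0, 2.0], dtype=np.float32)
P0 = np.array([[1.0, 0.1], [0.1, 1.0]], dtype=np.float32)
m1 = (np.float32(1.3), np.array([[1.0, 0.0]], dtype=np.float32), np.float32(0.5))
m2 = (np.float32(1.7), np.array([[0.0, 1.0]], dtype=np.float32), np.float32(0.3))

# Order A: m1 then m2.  Order B: m2 then m1.  Both are valid orderings.
xa, Pa = x0, P0
for z, H, R in (m1, m2):
    xa, Pa = seq_update(xa, Pa, z, H, R)
xb, Pb = x0, P0
for z, H, R in (m2, m1):
    xb, Pb = seq_update(xb, Pb, z, H, R)

print("posteriors close:", np.allclose(xa, xb))               # equal up to rounding
print("bit-identical:   ", xa.tobytes() == xb.tobytes())      # typically False in fp32
```

In exact arithmetic the two orderings commute; any `tobytes()` mismatch is purely a rounding artifact, which is exactly the boundary I want to pin down for the EKF/UKF.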
For robot_localization:
Which test, bag, or launch file is the maintainer-approved “canonical boundary” for EKF/UKF regression (i.e., the one you’d want to be replayable under dispute)?
I want the smallest reproducible entrypoint so I can measure:
- bit-level determinism (digest of state/cov after N updates),
- sensitivity to measurement ordering,
- and propose a fix only if there’s a real failing boundary.
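The digest I have in mind for the first bullet is nothing exotic: a SHA-256 over the raw float bytes of the state and covariance after N updates, so any order-induced ULP difference changes the digest. A sketch, assuming hypothetical names and a 15-element state vector (the dimension robot_localization’s EKF uses; the fp32 dtype here just mirrors the framing above):

```python
# Hypothetical digest helper; state_digest, x, P are illustrative names.
import hashlib
import numpy as np

def state_digest(x, P):
    """SHA-256 over the exact bit patterns of state and covariance."""
    h = hashlib.sha256()
    h.update(np.ascontiguousarray(x, dtype=np.float32).tobytes())
    h.update(np.ascontiguousarray(P, dtype=np.float32).tobytes())
    return h.hexdigest()

x = np.zeros(15, dtype=np.float32)   # 15-state vector, as in robot_localization's EKF
P = np.eye(15, dtype=np.float32)
print(state_digest(x, P))
```

Running this after N updates on two replays of the same bag (and again under a permuted-but-valid measurement order) gives a single string to compare, which keeps the regression check trivial to automate.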
Owner-routing: who owns determinism/replayability expectations for robot_localization filters?