
Fix enforceConsistency_ in BatchFixedLagSmoother #2456

Merged: dellaert merged 3 commits into borglab:develop from jashshah999:fix/batch-fixed-lag-smoother-enforce-consistency on Mar 11, 2026

Fix enforceConsistency_ in BatchFixedLagSmoother#2456
dellaert merged 3 commits intoborglab:developfrom
jashshah999:fix/batch-fixed-lag-smoother-enforce-consistency

Conversation

@jashshah999 (Contributor)

What this does

Fixes the enforceConsistency_ feature in BatchFixedLagSmoother, which has been non-functional because linearValues_ was never populated.

The bug

linearValues_ (declared in BatchFixedLagSmoother.h:149) is read in optimize() (line 268) to decide whether to restore linearization points:

if (enforceConsistency_ && (linearValues_.size() > 0)) {
    theta_.update(linearValues_);
    ...
}

But linearValues_ is never written to anywhere in the codebase. It is always empty, so the guard is always false and the consistency enforcement is dead code.

The fix

After marginalize() inserts new LinearContainerFactors (which store the linearization point at marginalization time), we now record the remaining variables and their linearization points in linearValues_. This allows the existing logic in optimize() to restore those linearization points after each LM iteration, keeping the marginal factors consistent.

The existing cleanup in eraseKeys() already handles removing entries from linearValues_ when those variables are themselves eventually marginalized out.
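The record-then-restore pattern can be sketched in a few lines of self-contained Python (a toy stand-in with plain dicts, not GTSAM code; the keys and values here are purely illustrative):

```python
# Toy stand-in for the smoother state: theta is the current estimate,
# linear_values holds the linearization points frozen at marginalization time.
theta = {"x1": 1.00, "x2": 2.00}
linear_values = {}
enforce_consistency = True

# Analogue of the fix: after marginalize() creates the LinearContainerFactors,
# record each remaining variable's current value as its frozen point
# (setdefault keeps an already-frozen point if one exists).
if enforce_consistency:
    for key, value in theta.items():
        linear_values.setdefault(key, value)

# An LM iteration moves the estimate away from the frozen points...
theta["x1"] = 1.37

# ...and the pre-existing guard in optimize() -- reachable now that
# linear_values is non-empty -- restores them, keeping the marginal
# factors evaluated at a consistent linearization point.
if enforce_consistency and len(linear_values) > 0:
    theta.update(linear_values)
```

Before the fix, `linear_values` stayed empty, so the final guard never fired and the estimate was never restored.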

Fixes #948

cc @dellaert

The linearValues_ member was never populated, making the
enforceConsistency_ guard condition always false.

After marginalization, the new LinearContainerFactors are only valid
at the linearization point where they were computed. When
enforceConsistency_ is enabled, we now record those variables and
their linearization points in linearValues_ so that the existing
logic in optimize() can restore them after each LM iteration.

Fixes borglab#948
@dellaert (Member)

Ha! That's a bug, thanks!
Did you do any experiments to show that this now improves consistency? That would be a good "cherry" on the cake.

Adds a test that creates two smoothers (enforceConsistency on/off),
runs them through a chain of between-factors with poor initial
estimates, and verifies both produce valid results after
marginalization has occurred.
@jashshah999 (Contributor, Author)

Thanks! Added a test in the latest push that exercises enforceConsistency_ with both true and false:

  • Creates two BatchFixedLagSmoother instances with identical data (chain of between-factors with poor initial estimates)
  • Runs both through enough steps to trigger marginalization (lag=3, 8 steps)
  • Verifies both produce valid estimates after marginalization

The test builds and passes with make testBatchFixedLagSmoother.

Before the fix, linearValues_ was always empty so both smoothers behaved identically (the consistency guard was dead code). Now the enforceConsistency=true smoother actually populates linearValues_ during marginalize() and restores linearization points in optimize().

@dellaert (Member)

That's great! Any NEES comparison? This flag was supposed to implement first-estimates Jacobians (FEJ), which should make for better NEES (normalized estimation error squared) values.

Pose2 trajectory with 100 trials, 30 steps, lag=3, significant
rotation per step. Shows enforceConsistency now has observable
effect on NEES after the linearValues_ fix.
@jashshah999
Copy link
Copy Markdown
Contributor Author

Added a Monte Carlo NEES test using Pose2 (100 trials, 30 steps, lag=3, significant rotation per step).

Results:

enforceConsistency=true  (FEJ): avg NEES = 1.14 (expected: 3)
enforceConsistency=false       : avg NEES = 3.12 (expected: 3)

The key takeaway: with the fix, enforceConsistency_ now has an observable effect (before the fix, linearValues_ was never populated, so both modes were identical). The false mode gives near-ideal NEES, while the true mode is conservative — which makes sense for a batch solver where iterative re-linearization already handles much of the inconsistency that FEJ addresses in filtering.
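For reference, the NEES of an estimate with error e and estimated covariance P is e^T P^(-1) e; for a consistent estimator, e ~ N(0, P) and NEES is chi-square distributed with dim(x) degrees of freedom (3 for Pose2), so its Monte Carlo average should approach 3. A minimal, self-contained sketch of that average (pure Python with a hypothetical diagonal stand-in covariance, not the actual GTSAM test):

```python
import random

random.seed(0)
DIM = 3                        # Pose2: (x, y, theta)
TRIALS, STEPS = 100, 30        # mirrors the test setup in the PR
P_diag = [0.04, 0.04, 0.01]    # hypothetical diagonal covariance

def nees(error, p_diag):
    """e^T P^{-1} e for a diagonal covariance P."""
    return sum(e * e / p for e, p in zip(error, p_diag))

# For a consistent estimator, errors are distributed N(0, P), so
# NEES ~ chi-square(DIM) and its long-run average should equal DIM.
samples = []
for _ in range(TRIALS * STEPS):
    error = [random.gauss(0.0, p ** 0.5) for p in P_diag]
    samples.append(nees(error, P_diag))

avg_nees = sum(samples) / len(samples)   # close to DIM = 3
```

An average below DIM, as in the enforceConsistency=true run above, indicates conservative (overestimated) covariances; an average near DIM indicates consistency.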

@dellaert (Member) left a comment


Thanks, that's fast!!! Will run CI and merge when it passes!

@dellaert dellaert merged commit 2f35075 into borglab:develop Mar 11, 2026
32 checks passed


Development

Successfully merging this pull request may close these issues.

Bug in BatchFixLagSmoother "enforceConsistency_" not working

2 participants