Robust Incremental Smoothing and Mapping (riSAM) #2409

DanMcGann wants to merge 17 commits into borglab:develop from
Conversation
Awesome! I'll wait to review thoroughly until the other PR is merged. But check naming conventions, especially: we're Google style except we camelCase variables and non-static methods. Also, please format everything with clang-format, Google style, if that was not already done. Finally, warnings are treated as errors, so please try to compile with that flag locally.
Could you merge in develop so PR diff is up to date? |
Pull request overview
This PR introduces the core Robust Incremental Smoothing and Mapping (riSAM) implementation into GTSAM by adding the RISAM solver and its graduated robust-kernel factor wrappers, plus supporting incremental tooling in iSAM2/BayesTree and accompanying unit tests.
Changes:
- Add `RISAM` (drop-in ISAM2-like update interface) and the graduated-kernel infrastructure (`RISAMGraduatedKernel`, `RISAMGraduatedFactor`).
- Add incremental “look-ahead” and traversal helpers (`ISAM2::predictUpdateInfo`, `BayesTree::traverseTop`) used by riSAM.
- Add/extend tests for Dogleg Line Search, traversal/look-ahead behavior, and riSAM integration.
Reviewed changes
Copilot reviewed 17 out of 17 changed files in this pull request and generated 5 comments.
Show a summary per file
| File | Description |
|---|---|
| gtsam/sam/RISAM.h / gtsam/sam/RISAM.cpp | Adds the riSAM algorithm wrapper around iSAM2, housekeeping, and GNC-style iterations. |
| gtsam/sam/RISAMGraduatedKernel.h / .cpp | Adds graduated robust kernel interface + SIGKernel implementation. |
| gtsam/sam/RISAMGraduatedFactor.h / .cpp | Adds factor wrapper enabling graduated robust weighting during linearization. |
| gtsam/sam/tests/testRISAM.cpp | New unit/integration tests validating kernel math, factor behavior, and end-to-end riSAM behavior. |
| gtsam/nonlinear/ISAM2.h / gtsam/nonlinear/ISAM2.cpp | Adds predictUpdateInfo and Dogleg Line Search support inside updateDelta. |
| gtsam/nonlinear/ISAM2Params.h | Introduces ISAM2DoglegLineSearchParams and adds it to the optimization params variant. |
| gtsam/nonlinear/DoglegOptimizerImpl.h | Implements DoglegLineSearchImpl::Iterate (line search over the dogleg arc). |
| gtsam/inference/BayesTree.h / gtsam/inference/BayesTree-inst.h | Adds traverseTop (and helpers) for “contaminated” traversal. |
| tests/testGaussianISAM2.cpp | Adds slamlike regression tests using Dogleg Line Search; adds predict_update_info test. |
| tests/testDoglegOptimizer.cpp | Adds a test for DoglegLineSearchImpl::Iterate and a regression test for ComputeBlend mismatch. |
| gtsam/symbolic/tests/testSymbolicBayesTree.cpp | Adds tests validating traverseTop behavior. |
| gtsam/nonlinear/nonlinear.i | Exposes Dogleg Line Search params and predictUpdateInfo to the wrapper layer. |
Comments suppressed due to low confidence (10)
gtsam/sam/RISAMGraduatedFactor.h:130

`vblock` is created but never used, which will trigger unused-variable warnings (and can fail builds when warnings are treated as errors). Remove it, or use it if it was intended for something else.

```cpp
size_t d = current_estimate.at(key).dim();
gtsam::Matrix vblock = gtsam::Matrix::Zero(output_dim, d);
Ablocks.push_back(A.block(0, idx_start, output_dim, d));
idx_start += d;
```
gtsam/sam/RISAMGraduatedFactor.h:31

`GraduatedFactor` has out-of-line methods (see `RISAMGraduatedFactor.cpp`) but is not marked `GTSAM_EXPORT`. Consider exporting it to avoid missing symbols on Windows shared-library builds, especially since this type is part of the public riSAM API.

```cpp
/// @brief Graduated Factor for riSAM base class
class GraduatedFactor {
  /** TYPES **/
 public:
  typedef std::shared_ptr<GraduatedFactor> shared_ptr;

  /** FIELDS **/
```
gtsam/sam/RISAMGraduatedKernel.h:24

`GraduatedKernel` is a non-template class with out-of-line virtual methods; consider adding `GTSAM_EXPORT` to the class declaration to ensure it is exported correctly on Windows shared-library builds.

```cpp
/** @brief Base class for graduated kernels for riSAM
 * Advanced users can write their own kernels by inheriting from this class
 */
class GraduatedKernel {
  /** TYPES **/
```
gtsam/nonlinear/DoglegOptimizerImpl.h:334

This header uses `std::numeric_limits<double>::epsilon()` but does not include `<limits>`. Add the missing include to keep the header self-contained and avoid relying on transitive includes.

```cpp
// Search Increase delta
double eps = std::numeric_limits<double>::epsilon();
while (step < max_step - eps) {
```
gtsam/nonlinear/ISAM2Params.h:149

The constructor argument order is `(..., wildfire_threshold, sufficient_decrease_coeff, ...)`, but the member order/docs and call sites in this PR pass `(..., 1e-3, 1e-4, ...)`, which reads like `(sufficient_decrease_coeff, wildfire_threshold)`. This is very easy to misuse and likely results in swapped parameter values; consider reordering the constructor parameters to match the field order (or use named setters in call sites).

```cpp
ISAM2DoglegLineSearchParams(double min_delta = 0.02, double max_delta = 0.5,
                            double step_size = 1.5,
                            double wildfire_threshold = 1e-4,
                            double sufficient_decrease_coeff = 1e-3,
                            bool verbose = false)
```
tests/testGaussianISAM2.cpp:328

These arguments appear to be passed as `(min_delta, max_delta, step_size, sufficient_decrease_coeff, wildfire_threshold, verbose)`, but the `ISAM2DoglegLineSearchParams` constructor is declared as `(min_delta, max_delta, step_size, wildfire_threshold, sufficient_decrease_coeff, verbose)`. This likely swaps `wildfire_threshold` and `sufficient_decrease_coeff` in the test configuration.

```cpp
ISAM2 isam = createSlamlikeISAM2(
    &fullinit, &fullgraph,
    ISAM2Params(ISAM2DoglegLineSearchParams(0.1, 1.0, 3, 1e-3, 1e-4, false),
                0.0, 0, false));
```
gtsam/sam/tests/testRISAM.cpp:178

This call likely swaps `wildfire_threshold` and `sufficient_decrease_coeff`: `ISAM2DoglegLineSearchParams` takes `(..., wildfire_threshold, sufficient_decrease_coeff, ...)` but the passed literals read like `(..., sufficient_decrease_coeff, wildfire_threshold, ...)`. Please confirm and reorder arguments (or switch to setters) so the intended values are applied.

```cpp
RISAM::Parameters params;
params.isam2_params = ISAM2Params(
    ISAM2DoglegLineSearchParams(0.02, 1.0, 1.5, 1e-3, 1e-4, false));
RISAM risam(params);
```
gtsam/sam/RISAMGraduatedKernel.h:110

`SIGKernel` has out-of-line method definitions in `RISAMGraduatedKernel.cpp` but the class is not marked `GTSAM_EXPORT`. For Windows shared-library builds, export the class (or its methods) so it is usable from downstream code.

```cpp
class SIGKernel : public GraduatedKernel {
  /** TYPES **/
 public:
  /// @brief Shortcut for shared pointer
  typedef std::shared_ptr<SIGKernel> shared_ptr;
  /// @brief Function type for mu update sequence
  typedef std::function<double(double, double, size_t)> MuUpdateStrategy;
```
gtsam/nonlinear/ISAM2Params.h:203

`ISAM2Params::print()` currently only handles Gauss-Newton and Dogleg; after adding `ISAM2DoglegLineSearchParams` to the `OptimizationParams` variant, printing will fall into the "{unknown type}" branch. Update the print logic to handle the new optimization params type.

```cpp
typedef std::variant<ISAM2GaussNewtonParams, ISAM2DoglegParams,
                     ISAM2DoglegLineSearchParams>
    OptimizationParams;  ///< Either ISAM2GaussNewtonParams or
                         ///< ISAM2DoglegParams or
                         ///< ISAM2DoglegLineSearchParams
```
gtsam/sam/RISAM.h:27

`RISAM` is a non-template public class implemented in a `.cpp`, but it is not marked with `GTSAM_EXPORT`. On Windows shared-library builds this can prevent the class from being visible to external users; consider declaring it as `class GTSAM_EXPORT RISAM`.

```cpp
class RISAM {
  /** TYPES **/
```
```cpp
/// @brief Returns 0.5 \rho(r)^2
double error(const gtsam::Values& values) const override {
  return 0.5 * std::pow(robustResidual(values), 2);
```
GraduatedKernel::error() is documented as returning the robust cost \rho(r), but GenericGraduatedFactor::error() returns 0.5 * robustResidual(values)^2, which squares that cost again. This makes the factor's reported nonlinear error inconsistent with the kernel definition and GTSAM's usual robust loss conventions; consider returning the robust cost directly (or rename/adjust robustResidual() to return a true residual-like quantity).
Suggested change:

```diff
-/// @brief Returns 0.5 \rho(r)^2
-double error(const gtsam::Values& values) const override {
-  return 0.5 * std::pow(robustResidual(values), 2);
+/// @brief Returns 0.5 \rho(r)
+double error(const gtsam::Values& values) const override {
+  return 0.5 * robustResidual(values);
```
I think Copilot is wrong about this one (looks like it got lost in the notation?)
- Formats riSAM code to gtsam standard
- Switches to use std rather than boost to meet gtsam 4.3 convention
- Removes some unneeded functions
@DanMcGann, this is awesome. First, some architectural questions: the "kernels" look very similar to the loss functions in linear. It would be good to discuss whether they are in fact identical in scope, and even line up with some of the robust loss functions we do have already. If this is indeed the case, we really ought to refactor it here so you can use any loss function and move the kernels you did define to the loss function library. Incidentally, a new robust loss, TLS, was added just days ago in the context of the GNC optimizer. The connection with GNC also warrants some language, perhaps in the documentation. Speaking of which, you probably want to add a notebook in nonlinear/doc to this PR that explains riSAM, based on and similar to GNC's user guide entry.
At least two CI failures, related to ... Please check all CI failures if there are others.
Great point about the kernels @dellaert! After looking into it, unfortunately refactoring is a little tricky and I would love your input. GNC is able to re-use the existing ...

Interface difference:
Unifying the interface is possible but causes problems. We could support an optional ...

Example: Both GNC and RISAM define a different graduated loss based on Geman-McClure (GM): ...

Example: No graduation scheme for Cauchy loss has been proposed. Additionally, beyond linearizing factors, GNC implements additional logic specifically based on GM and TLS loss and extension of ...

Thus, should we leave the RISAM kernel separate? Or should we fight the problems above and unify the interface?
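For context on the GM example mentioned in this thread, the graduated Geman-McClure surrogate as usually written in the GNC literature (quoted here as background, not necessarily the exact expression in either codebase) is

$$\rho_\mu(r) = \frac{\mu \, c^2 \, r^2}{\mu \, c^2 + r^2},$$

which recovers plain Geman-McClure at $\mu = 1$ and approaches the quadratic $r^2$ as $\mu \to \infty$; riSAM's kernels anneal their control parameter differently, which is part of the interpretation mismatch being debated here.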
Thanks for the notebook, that’s invaluable. Will look at kernel issue.
```
@@ -0,0 +1,38 @@
/* ----------------------------------------------------------------------------
```
Thinking all these methods could be inlined in header.
gtsam/sam/RISAMGraduatedFactor.h (Outdated)

```cpp
 public:
  typedef std::shared_ptr<GraduatedFactor> shared_ptr;

  /** FIELDS **/
```
Here and elsewhere: please use Doxygen regions as is common in GTSAM.
Re kernels: I think we should fight the fight :-) I’d prefer adding a new virtual method in the base with extra \mu parameters rather than carrying a default parameter everywhere. The issue with \mu having a different interpretation in GNC and riSAM requires more thought. Is there a way riSAM can switch to the GNC version? Is there a re-parameterization that makes one mu into the other? Do they behave oppositely? Since GNC was included first - and several papers use that approach (including a recent one from Rene Vidal's group) - I'd prefer if we can use that \mu, and if needed perhaps rename your mu to a different name, especially if it somehow has "opposite" meaning. That being said, I've not carefully looked into riSAM again to try and understand deeply myself. I'm just still brainstorming with you :-)
@DanMcGann happy to chat about this to move it along - send me an email and we can schedule.
Hey @dellaert thanks! These are incomplete (see TODOs below) but should give a sense of the general approach, summarized as:

It should allow riSAM to support any robust loss that has a graduated form. However, GNC will still be limited to only GM and TLS due to the additional logic implemented based on those losses. If you want to brainstorm improvements ping me on email and we can schedule time to chat!

Remaining TODOs
This is the second of 3 PRs that support the integration of riSAM into GTSAM. This PR adds the actual riSAM algorithm along with unit tests to validate its functionality. In addition to the unit tests, I ran an independent test that confirmed that the implementation here matches exactly our internal riSAM implementation!
Overview
This PR adds and tests the `RISAM` class, and its helpers `RISAMGraduatedKernel` and `RISAMGraduatedFactor`. RISAM is a drop-in replacement for ISAM2 with the same update interface. However, users can wrap any potential outlier in a `RISAMGraduatedFactor` and riSAM will solve updates that involve these factors using an incrementalized version of Graduated Non-Convexity.

TODO - Python Interface
This PR needs to be updated to include its python interface. Unfortunately, I was unable to find an effective way to implement such an interface. The template for `RISAMGraduatedFactor` allows it to wrap ANY gtsam factor type. In order to define the python interface for this it appears that we would need to explicitly enumerate every factor type with all of their possible template combinations. This seemed like an unmaintainable definition. Thus, before adding this interface I wanted to check in to see if the maintainers had any suggestions for the best way to approach the implementation.

Final Notes
This is PR 2/3 for adding RISAM. Once the python interface above is fixed, the final PR will contain a Jupyter notebook with an example / tutorial on using riSAM!