Manellic code shuffle towards DFG v1 compliance #325
Conversation
```julia
Statistics.mean(m::MvNormalKernel) = m.μ # mean(m.p)
Statistics.cov(m::MvNormalKernel) = cov(m.p) # note also about m.sqrt_iΣ
Statistics.std(m::MvNormalKernel) = sqrt(cov(m)) # regular sqrt (not of inverse)
```
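For context, the overload pattern above can be sketched in isolation. This is a minimal, self-contained illustration assuming a simplified kernel type that stores its covariance directly; the real `MvNormalKernel` wraps a distribution `m.p` and a precomputed `m.sqrt_iΣ`, and the type name `SimpleKernel` here is hypothetical.

```julia
using Statistics, LinearAlgebra

# Hypothetical stand-in for MvNormalKernel: stores mean and covariance directly.
struct SimpleKernel
    μ::Vector{Float64}
    Σ::Matrix{Float64}
end

# Same Statistics overloads as in the diff, on the simplified type.
Statistics.mean(m::SimpleKernel) = m.μ
Statistics.cov(m::SimpleKernel) = m.Σ
Statistics.std(m::SimpleKernel) = sqrt(cov(m))  # matrix square root of Σ, not of its inverse

m = SimpleKernel([0.0, 0.0], [4.0 0.0; 0.0 1.0])
std(m)  # matrix sqrt of the covariance: [2.0 0.0; 0.0 1.0]
```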
Hi @Affie, I've been working through ManellicTree as a prototype for HomotopyBelief. I'm calling here to check for discussion points on:
- Check that the contact points with parametric are only `mean` and `cov`? These `Statistics` overloads would be the interface for all things "hybrid-parametric" (separately we need to know the depth in the tree for getting the number of modes). See [Design] Should HomotopyBelief support heterogeneous kernel types (aka be a mixture)? #326
- `ManellicTreeBelief` is likely to be renamed to `HomotopyBelief` as part of stabilizing DFG v1.
For factors, the common entry point is `getMeasurementParametric`, calling `mean` and `invcov`.
For variables, `.val` and `.bw` are used directly.
Do you have a good (recent) example of using `.val`, please? I'm trying to figure out whether it should be renamed to `getPoints` or `getModes`, etc.
I'm looking for a usage example not written by dehann, if you have one.
I think the last idea was to go through calls like these:
- https://github.com/JuliaRobotics/IncrementalInference.jl/blob/c360a66ba21b1b88ee2a2aab004ced6ad977254f/IncrementalInference/src/parametric/services/ParametricUtils.jl#L822
- https://github.com/JuliaRobotics/IncrementalInference.jl/blob/c360a66ba21b1b88ee2a2aab004ced6ad977254f/IncrementalInference/src/parametric/services/ParametricManopt.jl#L333
There are still a few direct field access methods in the same files.
Okay, thanks -- so I think the way to go here is something like `modes = getMixture(_, nmodes=1)`, which returns a `Vector{ConcentratedGaussian{M}}` (legacy name `MvNormalKernel`). From there `manipoint := getMean(modes[1])` and `sqrt(1/Σ^2) := getInformationmat(modes[1])`; also `1/Σ^2 := getPrecisionmat(modes[1])`.
There is some nuance in how `getMixture` works. I think it should just return the kernels (i.e. the mixture) at `depth = ceil(log2(nmodes))` in the belief tree.
This also means that convenience wrappers can exist (though we miss the opportunity to promote the new solver), e.g. `getMean(getBelief(getVariable(dfg, :w_X1)))`.
My sense at the moment is that we should promote `getBelief` and `getMixture` as the front-facing API. We can then find a way to document `getPoints(belief)` or `cov(getMixture(_, 1))`. This would also mean happenstance equivalents such as `getPoints(getMixture(fg, lb, nmodes=1))[1] == getMean(getBelief(fg, lb))`, where the second is the least desirable signature in terms of overall UX.
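The proposed API could be sketched roughly as below. To be clear, none of these exact types or signatures exist yet; the names (`ConcentratedGaussian`, `getMixture`, `getMean`, `getPrecisionmat`, `getInformationmat`) follow the comment only, and the tree walk is stubbed with a flat vector of kernels.

```julia
using LinearAlgebra

# Hypothetical kernel type (legacy name MvNormalKernel); M would be the manifold.
struct ConcentratedGaussian{M}
    μ::Vector{Float64}    # mean point (a manifold point in the real design)
    Σ::Matrix{Float64}    # covariance
end

getMean(k::ConcentratedGaussian) = k.μ
getPrecisionmat(k::ConcentratedGaussian) = inv(k.Σ)          # "1/Σ^2" in the comment's notation
getInformationmat(k::ConcentratedGaussian) = sqrt(inv(k.Σ))  # "sqrt(1/Σ^2)"

# getMixture would return the kernels at depth = ceil(log2(nmodes)) in the
# belief tree; stubbed here as selecting 2^depth kernels from a flat vector.
function getMixture(kernels::Vector{<:ConcentratedGaussian}; nmodes::Int=1)
    depth = nmodes <= 1 ? 0 : Int(ceil(log2(nmodes)))
    return kernels[1:min(end, 2^depth)]
end

ks = [ConcentratedGaussian{Nothing}([1.0, 2.0], Matrix(1.0I, 2, 2))]
modes = getMixture(ks; nmodes=1)
getMean(modes[1])  # [1.0, 2.0]
```

The point of the sketch is only the call shape: `getMixture` as the front door returning kernels, with accessors on each kernel rather than direct field access.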
Adding assign for project board filters.
```julia
- FIXME use manifold mean and cov calculation instead
- TODO, use recursive power series for next largest eigenvector down depth of tree for efficiency
"""
function splitPointsEigen(
```
Note, I already did some previous work towards the homotopy concept by using `eigen` for finding the next-largest variance down the tree. Just adding the note here that a much more efficient approach would be to extract only the largest eigenvector via Krylov methods, instead of recomputing the whole eigendecomposition each time.
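As a rough illustration of that idea, the simplest Krylov-flavoured method is plain power iteration, which extracts only the dominant eigenvector of the (symmetric PSD) covariance without a full eigendecomposition at every split. This is a hedged sketch, not the package's implementation; the function name `dominant_eigvec` is hypothetical.

```julia
using LinearAlgebra

# Power iteration: repeatedly apply Σ and renormalize until the direction
# stops changing; converges to the eigenvector of the largest eigenvalue.
function dominant_eigvec(Σ::AbstractMatrix; iters::Int=200, tol::Real=1e-12)
    v = normalize!(ones(size(Σ, 1)))  # deterministic start for reproducibility
    for _ in 1:iters
        w = normalize!(Σ * v)
        abs(abs(dot(w, v)) - 1) < tol && return w  # sign-insensitive convergence check
        v = w
    end
    return v
end

Σ = [1.0 0.0 0.0; 0.0 9.0 0.0; 0.0 0.0 4.0]
v = dominant_eigvec(Σ)
# v aligns (up to sign) with the axis of largest variance, here the 2nd axis
```

In practice one would use a proper Lanczos-based routine (e.g. from KrylovKit.jl) for robustness when the top two eigenvalues are close, but the cost structure is the same: matrix-vector products only.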