Related to #43
Scenario
Measuring how well a KF fits the measurements is very important, especially when tuning KF parameters, which is often the hardest part of working with Kalman filters.
Actual
In the README.md we suggest using a simple Mahalanobis distance to compare a KF model with observations.
This method is direct and simple, but one may wonder:
- why we chose this distance and not another distribution-to-distribution distance
- how we could compare this to a measurement made in Python (for example using pykalman)
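As context for the discussion above, here is a minimal sketch of the Mahalanobis distance between an observation and a predicted observation. The function names and the inlined 2x2 matrix helpers are illustrative, not the library's actual API:

```javascript
// 2x2 helpers, sufficient for this sketch (real code would reuse the library's matrix ops)
const det2 = (m) => m[0][0] * m[1][1] - m[0][1] * m[1][0];
const inv2 = (m) => {
  const d = det2(m);
  return [[m[1][1] / d, -m[0][1] / d], [-m[1][0] / d, m[0][0] / d]];
};

// Mahalanobis distance: sqrt((x - mean)^T cov^-1 (x - mean))
function mahalanobis(x, mean, cov) {
  const diff = x.map((v, i) => v - mean[i]);
  const covInv = inv2(cov);
  let maha2 = 0;
  for (let i = 0; i < diff.length; i++) {
    for (let j = 0; j < diff.length; j++) {
      maha2 += diff[i] * covInv[i][j] * diff[j];
    }
  }
  return Math.sqrt(maha2);
}

// With an identity covariance it reduces to the Euclidean distance
console.log(mahalanobis([3, 4], [0, 0], [[1, 0], [0, 1]])); // 5
```

Note that this yields a per-observation distance; it does not account for the covariance's overall scale, which is one motivation for the log-likelihood proposed below.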
Expected
In order to improve our ability to measure the quality of a model, a logLikelihood method would be useful.
Proposal
(1) implement state.logNormalDensity({kf, observation, obsIndexes}) (see the Python equivalent in pykalman), inspired by https://github.com/piercus/kalman-filter/blob/master/lib/state.js#L133-L157
(2) use state.logNormalDensity in a kf.logLikelihood({observations}) function
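The two proposed steps could be sketched as follows. This assumes the predicted observation mean and innovation covariance are available per step; the signatures and the 2x2 matrix helpers are hypothetical, not the library's implementation:

```javascript
// 2x2 helpers, sufficient for this sketch (real code would reuse the library's matrix ops)
const det2 = (m) => m[0][0] * m[1][1] - m[0][1] * m[1][0];
const inv2 = (m) => {
  const d = det2(m);
  return [[m[1][1] / d, -m[0][1] / d], [-m[1][0] / d, m[0][0] / d]];
};

// (1) log density of x under the Gaussian N(mean, cov):
//     log p(x) = -0.5 * (k*log(2*pi) + log(det(cov)) + (x-mean)^T cov^-1 (x-mean))
function logNormalDensity(x, mean, cov) {
  const k = mean.length;
  const diff = x.map((v, i) => v - mean[i]);
  const covInv = inv2(cov);
  let maha2 = 0; // squared Mahalanobis distance
  for (let i = 0; i < k; i++) {
    for (let j = 0; j < k; j++) {
      maha2 += diff[i] * covInv[i][j] * diff[j];
    }
  }
  return -0.5 * (k * Math.log(2 * Math.PI) + Math.log(det2(cov)) + maha2);
}

// (2) logLikelihood: sum the per-observation log densities over the sequence
function logLikelihood(predictions, observations) {
  return observations.reduce(
    (acc, obs, t) => acc + logNormalDensity(obs, predictions[t].mean, predictions[t].cov),
    0
  );
}

// An observation exactly at the predicted mean, identity covariance:
console.log(logNormalDensity([0, 0], [0, 0], [[1, 0], [0, 1]])); // -log(2*pi) ≈ -1.8379
```

Because the log density penalizes both the Mahalanobis term and log(det(cov)), this metric stays comparable to pykalman's loglikelihood output, unlike the raw distance.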