Implement loglikelihood #44

@piercus

Description

Related to #43

Scenario

Measuring how well a Kalman filter (KF) fits the measurements is very important, especially when tuning the KF parameters, which is often the hardest part of working with Kalman filters.

Actual

In the README.md we suggest using a simple Mahalanobis distance to compare a KF model with observations.

This method is direct and simple, but one may wonder:

  • why we chose this distance rather than another distribution-to-distribution distance
  • how this could be compared to a measurement made in Python (for example using pykalman)
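For context, the Mahalanobis distance weights the innovation (observation minus prediction) by the inverse of its covariance. A minimal sketch, using plain 2×2 arrays rather than the library's actual State API (all names here are illustrative assumptions):

```javascript
// Invert a 2x2 matrix [[a, b], [c, d]] (illustrative helper, not the library's).
const invert2x2 = ([[a, b], [c, d]]) => {
  const det = a * d - b * c;
  return [[d / det, -b / det], [-c / det, a / det]];
};

// Mahalanobis distance: sqrt(diff^T * S^-1 * diff), where S is the
// covariance of the innovation and diff = observation - predicted.
const mahalanobis = ({predicted, covariance, observation}) => {
  const diff = observation.map((o, i) => o - predicted[i]);
  const sInv = invert2x2(covariance);
  const quad = diff.reduce((acc, di, i) =>
    acc + di * diff.reduce((s, dj, j) => s + sInv[i][j] * dj, 0), 0);
  return Math.sqrt(quad);
};

// With an identity covariance the distance reduces to the Euclidean norm:
const d = mahalanobis({
  predicted: [0, 0],
  covariance: [[1, 0], [0, 1]],
  observation: [3, 4]
});
console.log(d); // 5
```

This makes the first bullet concrete: the Mahalanobis distance is a point-to-distribution measure, not a distribution-to-distribution one.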

Expected

In order to improve our ability to measure the quality of a model, a logLikelihood method could be useful.

Proposal

(1) Implement state.logNormalDensity({kf, observation, obsIndexes}) (see the corresponding Python example in pykalman), inspired by https://github.com/piercus/kalman-filter/blob/master/lib/state.js#L133-L157

(2) Use state.logNormalDensity in a kf.logLikelihood({observations}) method
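The two steps above could be sketched as follows. This is a hedged sketch, not a proposed implementation: the function names mirror the proposal, but the shapes, the 2×2 helper, and the precomputed (mean, covariance) steps are all assumptions; a real implementation would reuse the matrix utilities already in lib/state.js and the filter's predict/correct loop.

```javascript
// Illustrative 2x2 inverse returning the determinant as well,
// since the log-density needs log(det(S)).
const invert2x2 = ([[a, b], [c, d]]) => {
  const det = a * d - b * c;
  return {det, inv: [[d / det, -b / det], [-c / det, a / det]]};
};

// (1) logNormalDensity: log of the multivariate normal density
// N(mean, covariance) evaluated at an observation:
// -0.5 * (k*log(2*pi) + log(det(S)) + diff^T * S^-1 * diff)
const logNormalDensity = ({mean, covariance, observation}) => {
  const k = mean.length;
  const diff = observation.map((o, i) => o - mean[i]);
  const {det, inv} = invert2x2(covariance);
  const quad = diff.reduce((acc, di, i) =>
    acc + di * diff.reduce((s, dj, j) => s + inv[i][j] * dj, 0), 0);
  return -0.5 * (k * Math.log(2 * Math.PI) + Math.log(det) + quad);
};

// (2) logLikelihood: sum of the per-step log-densities. Here the
// innovation mean/covariance of each step is assumed precomputed;
// in the library it would come from the predicted state at each step.
const logLikelihood = steps =>
  steps.reduce((acc, step) => acc + logNormalDensity(step), 0);

// Standard 2-D normal evaluated at its mean: -log(2*pi) ≈ -1.8379
const atMean = logNormalDensity({
  mean: [0, 0],
  covariance: [[1, 0], [0, 1]],
  observation: [0, 0]
});
console.log(atMean);
```

Summing log-densities (rather than multiplying densities) keeps the result numerically stable over long observation sequences, and should make the output directly comparable to pykalman's loglikelihood.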
