
Partial exact inference, inspired by "Delayed Sampling and Automatic Rao–Blackwellization of Probabilistic Programs" and ProbZelus #224

@turion

Description

While reading through the ProbZelus paper, I became aware that there are techniques for partial exact inference in Bayesian networks using conjugate priors. This is described e.g. in https://arxiv.org/pdf/1708.07787.pdf, and the algorithm is improved in the ProbZelus paper. I believe that after #177, something similar can be implemented here.

Basically, we would need a way to specify a Bayesian network formally, through a monad interface. All the probability distribution functions like normal would then output not a sample from that distribution, but a formal variable that can later be used for conditioning. This cannot quite be implemented as a MonadCond; we would need a more specialised version that equates two expressions in the monad instead of conditioning on an arbitrary boolean.

There is some discussion and links to some initial work here: #144, #144 (comment), #144 (comment)
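
To make the difference to MonadCond concrete, here is a minimal sketch of such a specialised class, using the FormalVar type from the sketch below; MonadEquate and equate are hypothetical names, not an existing monad-bayes API. Where MonadCond's score/condition attach an arbitrary likelihood or boolean, equate asserts that a formal expression takes a given observed value, which an interpreter can resolve analytically when a conjugate relationship is available:

-- Hypothetical specialisation of conditioning for formal networks.
class Monad m => MonadEquate m where
  -- Assert that a formal expression equals an observed constant.
  equate :: FormalVar -> Double -> m ()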

A rough pseudocode sketch:

-- Builds up a formal Bayesian network instead of sampling immediately.
data BayesianNetworkT m a = ...

type VarIndex = Int

-- Formal expressions denoting nodes of the network.
data FormalVar
  = Constant Double
  | UniformDist VarIndex
  | NormalDist VarIndex FormalVar FormalVar  -- parameters are themselves formal expressions
  | FormalVar :+: FormalVar
  | ...

-- Allocate a fresh formal variable in the network.
newVar :: BayesianNetworkT m FormalVar

-- Assumes MonadSample gains an associated type for its carrier of reals.
instance Monad m => MonadSample (BayesianNetworkT m) where
  type Real (BayesianNetworkT m) = FormalVar

  random = newVar

-- Condition on a formal expression being equal to an observed value.
observe :: BayesianNetworkT m FormalVar -> FormalVar -> BayesianNetworkT m ()

The observe function would have to perform the conditioning analytically via conjugate priors and output a simplified network.
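
For intuition, here is a minimal sketch of the analytic step observe could perform when it detects Normal–Normal conjugacy (a Normal prior on a latent mean, observed through a Normal likelihood with known variance). The Gaussian record and conjugateNormal are illustrative helpers, not part of the proposed interface:

data Gaussian = Gaussian { mean :: Double, variance :: Double }
  deriving Show

-- Exact posterior over a latent mean after observing one value drawn
-- from a Normal with that mean and known observation variance.
conjugateNormal
  :: Gaussian  -- prior over the latent mean
  -> Double    -- known observation variance
  -> Double    -- observed value
  -> Gaussian  -- exact posterior, no sampling involved
conjugateNormal (Gaussian m0 v0) vObs x = Gaussian m1 v1
  where
    prec = 1 / v0 + 1 / vObs          -- precisions add
    v1   = 1 / prec
    m1   = v1 * (m0 / v0 + x / vObs)  -- precision-weighted mean

-- Example: conjugateNormal (Gaussian 0 1) 0.25 2.0
--          gives Gaussian {mean = 1.6, variance = 0.2}

In the sketched interface, observe would apply rewrites like this whenever the parent of the observed node has a conjugate prior, and fall back to ordinary sampling and weighting otherwise.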
