Moving a discussion with @yebai over here from Slack. @PavanChaggar asked whether there was a way to do a Laplace approximation in Turing, and I gave this little example of how it can be accomplished with MarginalLogDensities.jl:
using MarginalLogDensities
using ReverseDiff, FiniteDiff
using Turing
using LinearAlgebra
# simple log-density function of x (p is an unused data argument)
f(x, p) = -sum(abs2, diff(x))
# define a MarginalLogDensity that integrates out parameters 5 through 100
# via a Laplace approximation, using ReverseDiff as the AD backend
mld = MarginalLogDensity(f, rand(100), 5:100, (), LaplaceApprox(adtype=AutoReverseDiff()))
@model function MLDTest(mld)
    # prior over the non-marginalized parameters only
    θ ~ MvNormal(zeros(njoint(mld)), I)
    # add the (approximate) marginal log-density to the model's log probability
    Turing.@addlogprob! mld(θ)
end
mod = MLDTest(mld)
chn = sample(mod, NUTS(adtype = AutoFiniteDiff()), 100)
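(The outer NUTS sampler uses AutoFiniteDiff here because calls to the MarginalLogDensity object aren't differentiable yet; see the roadblock noted at the end.)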
This issue is to discuss how this capability might be better integrated into Turing, probably via a package extension (see also #1382, which I opened before I wrote MarginalLogDensities). From a user perspective, an interface like this makes sense to me:
@model function Example(y)
    a ~ SomeDist()
    b ~ AnotherDist()
    mu = somefunction(a, b)
    x ~ MvNormal(mu, I)
    y ~ MvNormal(x, I)
end
fullmodel = Example(ydata)
marginalmodel = marginalize(fullmodel, (:x,))
sample(marginalmodel, NUTS(), 1000)
maximum_a_posteriori(marginalmodel, LBFGS())
I think there are two basic ways to implement this:
- `marginalize` constructs a new, marginalized `DynamicPPL.Model`, or
- It returns a `MarginalLogDensities.MarginalLogDensity`, with new methods for `sample`, `maximum_a_posteriori`, and `maximum_likelihood` defined for it (a rough sketch of this option is below).
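To make the second option concrete, here is a very rough sketch. None of this is an existing API: `marginalize` and `sample_marginal` are hypothetical names, the caller passes raw parameter indices instead of variable names (a real implementation would have to map names like `(:x,)` to indices in the model's parameter vector), and I'm assuming `DynamicPPL.LogDensityFunction` plus the LogDensityProblems interface is a reasonable way to get at the joint log density:

using Turing, DynamicPPL, MarginalLogDensities
using LogDensityProblems, LinearAlgebra

# Hypothetical: wrap the model's joint log density and hand it to MarginalLogDensity,
# integrating out the parameters whose linear indices are given in `iw`.
function marginalize(model::DynamicPPL.Model, iw; method=LaplaceApprox())
    ldf = DynamicPPL.LogDensityFunction(model)
    u0 = DynamicPPL.VarInfo(model)[:]          # initial values for the full parameter vector
    logdens(u, _data) = LogDensityProblems.logdensity(ldf, u)
    return MarginalLogDensity(logdens, u0, iw, (), method)
end

# Sampling could then reuse the @addlogprob! trick from the first example;
# the real interface would presumably overload `sample` itself.
@model function _marginalized(mld)
    θ ~ MvNormal(zeros(njoint(mld)), I)
    Turing.@addlogprob! mld(θ)
end
sample_marginal(mld::MarginalLogDensity, sampler, n; kwargs...) =
    sample(_marginalized(mld), sampler, n; kwargs...)

Option 1 would instead have `marginalize` build and return something like `_marginalized(mld)` directly, so the result is still a `DynamicPPL.Model` and the existing machinery (`sample`, `maximum_a_posteriori`, etc.) works on it unchanged.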
I'm not very familiar with Turing's internals, so I'm happy to be corrected if there are other approaches that make more sense.
The other current roadblock is making calls to `MarginalLogDensity` objects differentiable (ElOceanografo/MarginalLogDensities.jl#34). This is doable; I just need to do it.
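In the meantime, one possible stopgap (just a sketch, not how #34 will necessarily be fixed) would be to define a ChainRules reverse rule for calling a `MarginalLogDensity`, with the gradient computed by finite differences, so that AD backends that consume ChainRules rules (e.g. Zygote) can differentiate through `mld(θ)`:

using ChainRulesCore, FiniteDiff, MarginalLogDensities

# Hypothetical stopgap: gradient of the marginal log density via finite differences,
# rather than differentiating through the inner Laplace optimization.
function ChainRulesCore.rrule(mld::MarginalLogDensity, v::AbstractVector)
    y = mld(v)
    function mld_pullback(ȳ)
        g = FiniteDiff.finite_difference_gradient(mld, v)
        return (NoTangent(), ȳ .* g)
    end
    return y, mld_pullback
end

ForwardDiff and ReverseDiff don't use ChainRules rules automatically, so they'd need separate hooks; and a real fix would presumably differentiate through the inner optimization rather than falling back on finite differences.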