Tempered optimisation #2524
SamuelBrand1 started this conversation in Ideas
Replies: 2 comments
-
PS: obviously I can (and will) hand-roll this, e.g. via …
-
By the way, I've realised that you can do this trivially, e.g.

```julia
using DynamicPPL, Turing

ctx = DynamicPPL.MiniBatchContext(DynamicPPL.DefaultContext(), tau)
opt_result = maximum_a_posteriori(DynamicPPL.contextualize(model, ctx))
```

This leads to optimisation with the log-likelihood term scaled by `tau`.
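The snippet above handles a single temperature; a full tempering sequence anneals the likelihood weight from small to one, warm-starting each solve from the previous optimum. A minimal sketch along those lines (assuming a Turing `model` is in scope, and that `maximum_a_posteriori` accepts an `initial_params` keyword and returns its optimum in `result.values.array` — both worth checking against your Turing version):

```julia
using DynamicPPL, Turing

# Sketch: anneal the likelihood weight s from weak to full, warm-starting
# each MAP solve from the previous optimum. The `initial_params` keyword
# and the `result.values.array` field are assumptions about the
# mode-estimation API, not confirmed by this thread.
function tempered_map(model; schedule = range(0.1, 1.0; length = 10))
    result = nothing
    for s in schedule
        # Scale the log-likelihood contribution by s via MiniBatchContext.
        ctx = DynamicPPL.MiniBatchContext(DynamicPPL.DefaultContext(), s)
        tempered_model = DynamicPPL.contextualize(model, ctx)
        result = isnothing(result) ?
            maximum_a_posteriori(tempered_model) :
            maximum_a_posteriori(tempered_model;
                                 initial_params = result.values.array)
    end
    return result  # MAP estimate at s = 1, i.e. the untempered posterior
end
```

The warm start is the point of the sequence: each tempered landscape is a gentle deformation of the previous one, so the optimiser tracks a basin rather than restarting cold.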
-
For problems with a complicated log-posterior landscape I've found a sequence of tempered optimisations quite a successful approach.

This looks pretty doable in Turing, because DynamicPPL has a `MiniBatchContext`, which is essentially an equivalent approach: the log-likelihood contribution to the log-posterior density is scaled by some value $s \in [0,1]$.

The snag in code seems to be passing this context to `estimate_mode`, which at the moment is hard-coded to be either the `DefaultContext` or the `LikelihoodContext`, which passes through to `OptimizationContext` for (I think) just checking.

Is there any chance of loosening this up to allow `MiniBatchContext`-based evaluation of the mode?
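For concreteness, the tempered objective at each stage of such a sequence can be written as (with $s$ the scaling applied by `MiniBatchContext`, $p(\theta)$ the prior, and $p(y \mid \theta)$ the likelihood):

$$\log \pi_s(\theta) = \log p(\theta) + s \, \log p(y \mid \theta), \qquad s \in [0, 1],$$

so $s = 0$ optimises the prior alone and $s = 1$ recovers the usual MAP objective.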