
Thinking about the guided filter #73

Closed
@charlesknipp

Description


Guided Filter Construction

Given the changes proposed in #37, the interface is now oriented much more towards in-place operations. This breaks the original implementation of the guided filter, but, as Tim suggested, there are plenty of ways to work around it. I propose a handful of options below.
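For reference, both options target the standard guided-filter weight recursion, in which particles are simulated from a proposal q rather than the transition density f, and the importance weights absorb the correction (this is just the textbook identity, restated here to fix notation):

$$
x_t^{(i)} \sim q\left(x_t \mid x_{t-1}^{(i)}, y_t\right), \qquad
\log w_t^{(i)} = \log w_{t-1}^{(i)} + \log f\left(x_t^{(i)} \mid x_{t-1}^{(i)}\right) + \log g\left(y_t \mid x_t^{(i)}\right) - \log q\left(x_t^{(i)} \mid x_{t-1}^{(i)}, y_t\right)
$$

The two methods below differ only in where these three log-density terms are accumulated.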

Method 1 (the overhauled predict)

We can reimplement the guided filter calculations such that the predict method folds both the transition log density and the proposal log density into the log weights. Ideally, the update method then remains essentially unchanged, apart from a different type signature. Unfortunately, this implies that the log-likelihood cannot be marginalized between iterations.

function predict(...)
    # forward simulation from a proposal
    proposed_particles = map(
        x -> SSMProblems.simulate(rng, model, filter.proposal, step, x, observation; kwargs...),
        collect(state),
    )

    # importance weight increment: log f(x_t | x_{t-1}) - log q(x_t | x_{t-1}, y_t);
    # the observation term log g(y_t | x_t) is left for update
    log_increments = map(zip(proposed_particles, state.particles)) do (new_state, prev_state)
        log_f = SSMProblems.logdensity(model.dyn, step, prev_state, new_state; kwargs...)
        log_q = SSMProblems.logdensity(
            model, filter.proposal, step, prev_state, new_state, observation; kwargs...
        )

        (log_f - log_q)
    end

    # carry the previous weights forward with the increment applied
    proposed_state = ParticleDistribution(
        proposed_particles, deepcopy(state.log_weights) + log_increments
    )

    return update_ref!(proposed_state, ref_state, step)
end
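For completeness, the matching update under this split would only add the observation log density. The sketch below is an assumption about how that could look (the GuidedFilter type and the exact signature are placeholders mirroring the code above), not a finished implementation:

function update(model, filter::GuidedFilter, step, state, observation; kwargs...)
    # sketch only: GuidedFilter and this signature are placeholders
    # only the observation term is added here, since log f - log q was already
    # folded into the weights during predict
    log_increments = map(
        x -> SSMProblems.logdensity(model.obs, step, x, observation; kwargs...),
        collect(state),
    )

    state.log_weights += log_increments

    # without the pre-predict normalising constant there is no clean per-step
    # log-likelihood increment to return, which is the drawback mentioned above
    return state
end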

Method 2 (the overwritten step method)

Rather than keeping the format above, we can overload step to perform all of the computations in one place. While this is quite inelegant, it may well be the most efficient version of the filter, and it also resolves the marginal log-likelihood issue from the first method. The downside is that it is messy: we no longer have separate methods for predict and update.

function step(...)
    prev_state = resample(rng, alg.resampler, state)
    marginalization_term = logsumexp(prev_state.log_weights)

    isnothing(callback) || callback(model, alg, iter, prev_state, observation, PostResample; kwargs...)

    # forward simulation from the resampled particles via the proposal
    state.particles = map(
        x -> SSMProblems.simulate(rng, model, alg.proposal, iter, x, observation; kwargs...),
        collect(prev_state)
    )

    state = update_ref!(state, ref_state, iter)

    isnothing(callback) || callback(model, alg, iter, state, observation, PostPredict; kwargs...)

    # combine transition, observation, and proposal terms to form the new weights,
    # starting from the post-resampling weights
    particle_collection = zip(state.particles, prev_state.particles)
    state.log_weights = prev_state.log_weights + map(particle_collection) do (prop_state, prev_particle)
        log_f = SSMProblems.logdensity(model.dyn, iter, prev_particle, prop_state; kwargs...)
        log_g = SSMProblems.logdensity(model.obs, iter, prop_state, observation; kwargs...)
        log_q = SSMProblems.logdensity(
            model, alg.proposal, iter, prev_particle, prop_state, observation; kwargs...
        )

        (log_f + log_g - log_q)
    end

    ll_increment = logsumexp(state.log_weights) - marginalization_term

    isnothing(callback) || callback(model, alg, iter, state, observation, PostUpdate; kwargs...)

    return state, ll_increment
end
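For reference, the ll_increment computed above is the usual incremental likelihood estimator, taken relative to the weights immediately after resampling (this just restates the code in math, it is not an additional interface requirement):

$$
\widehat{\ell}_t = \log \sum_i \exp\left(\log \tilde{w}_t^{(i)}\right) - \log \sum_i \exp\left(\log w_{t-1}^{(i)}\right)
$$

where the $\tilde{w}_t^{(i)}$ are the unnormalized post-update weights and the $w_{t-1}^{(i)}$ are the post-resampling weights. Nothing has to be carried between separate predict and update calls, which is exactly what Method 1 could not provide.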

Motivation

An operational guided filter, in conjunction with automatic differentiation (see #26), allows variational algorithms to be used to tune the proposal. Since I already have these algorithms written against a primitive filtering interface, this would be another piece of low-hanging fruit.
