optimize lagrangian implementation in oop dispatch for autosparse backends #134


Merged
merged 2 commits into from
Jan 22, 2025

Conversation

baggepinnen
Contributor

σ = 0 is a common special case, and it makes sense to optimize for it by not calling the cost function in this case.
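For context: the Lagrangian being computed has the shape σ * f(θ, p) + dot(λ, c(θ)), so when σ == 0 the cost term vanishes and the (often expensive) cost function need not be called at all. A minimal standalone sketch of that short-circuit, with hypothetical `f` and `cons` closures standing in for the ones used inside OptimizationBase.jl:

```julia
using LinearAlgebra: dot

# Illustrative sketch only; `f` and `cons` are hypothetical stand-ins for the
# cost and constraint closures used inside OptimizationBase.jl.
function lagrangian_sketch(θ, σ, λ; f, cons)
    iszero(σ) && return dot(λ, cons(θ))   # σ == 0: skip the cost function
    return σ * f(θ) + dot(λ, cons(θ))
end
```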

@ChrisRackauckas
Member

@gdalle @adrhill is there a way to force SCT to take the false path here?

@adrhill

adrhill commented Jan 22, 2025

If lagrangian(θ, σ, λ, p) is a stable interface of OptimizationBase.jl, we could add a package extension to SCT to enable global sparsity patterns.

@baggepinnen
Contributor Author

lagrangian here is an inner function that calls other inner functions in the same scope.

@adrhill

adrhill commented Jan 22, 2025

If multiple dispatch is an option, you could add a method that evaluates lagrangian(θ, σ, λ, p) = σ * f.f(θ, p) + dot(λ, cons_oop(θ)) on our global tracer types.

If MD is not an option, you could use the ifelse function, for which global sparsity detection will evaluate both branches.
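For illustration, the `ifelse` variant might look roughly like the sketch below (same hypothetical `f`/`cons` stand-ins as before). Since `ifelse` is an ordinary function, both argument expressions are evaluated before one is selected, which is exactly what lets a global sparsity tracer see both code paths:

```julia
using LinearAlgebra: dot

# Sketch only: both branch expressions below are computed eagerly, so a
# global sparsity pattern covers the union of the two code paths.
lagrangian_ifelse(θ, σ, λ; f, cons) = ifelse(
    iszero(σ),
    dot(λ, cons(θ)),
    σ * f(θ) + dot(λ, cons(θ)),
)
```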

@baggepinnen
Contributor Author

If MD is not an option, you could use the ifelse function, for which global sparsity detection will evaluate both branches.

Julia will also evaluate both branches, negating the optimization

@baggepinnen baggepinnen reopened this Jan 22, 2025
@gdalle
Contributor

gdalle commented Jan 22, 2025

Is sigma related to the input here? Like, if x was a dual number, would sigma be one too? Or is it more of a fixed external parameter? Because depending on the answer, the if statement may cause issues with SparseConnectivityTracer.

@baggepinnen
Contributor Author

baggepinnen commented Jan 22, 2025

not sure why I closed

Is sigma related to the input here? Like, if x was a dual number, would sigma be one too? Or is it more of a fixed external parameter?

It's up to the optimizer, so we cannot tell in general. I guess we could do something like

        function lagrangian(θ, σ, λ, p)
            if eltype(θ) <: SCT.TracerType || !iszero(σ)
                return σ * f.f(θ, p) + dot(λ, cons_oop(θ))
            else
                return dot(λ, cons_oop(θ))
            end
        end
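A quick standalone sanity check that a guard like this actually avoids the cost call when σ == 0 (all names below are illustrative, not taken from the PR):

```julia
using LinearAlgebra: dot

f_calls = Ref(0)                        # counts cost-function evaluations
f(θ) = (f_calls[] += 1; sum(abs2, θ))   # hypothetical cost function
cons(θ) = θ                             # hypothetical constraint function

# Guarded Lagrangian: only evaluates the cost term when σ != 0.
lagr(θ, σ, λ) = iszero(σ) ? dot(λ, cons(θ)) : σ * f(θ) + dot(λ, cons(θ))

lagr([1.0, 2.0], 0.0, [1.0, 1.0])   # σ == 0: f is never called
lagr([1.0, 2.0], 1.0, [1.0, 1.0])   # σ != 0: f is called once
```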

@ChrisRackauckas
Member

That should be sufficient.

@Vaibhavdixit02 Vaibhavdixit02 changed the title optimize lagrangian implementation optimize lagrangian implementation in oop dispatch for autosparse backends Jan 22, 2025
@Vaibhavdixit02
Member

This branch exists for most cases, but it looks like the oop case with autosparse was missed, so I have changed the title to reflect that.

@Vaibhavdixit02
Member

Do we need a compat bound on something based on the test failure?

@ChrisRackauckas
Member

The new title isn't correct, it's also optimizing the non-autosparse cases...

@ChrisRackauckas
Member

Do we need a compat bound on something based on the test failure?

No, we just need to keep pinging @vchuravy until the type piracy of Base goes away 😅

@ChrisRackauckas ChrisRackauckas merged commit be9bfd1 into SciML:main Jan 22, 2025
4 of 6 checks passed
@baggepinnen baggepinnen deleted the patch-1 branch January 22, 2025 16:31
@Vaibhavdixit02
Member

The new title isn't correct, it's also optimizing the non-autosparse cases...

No
https://github.com/SciML/OptimizationBase.jl/pull/134/files#diff-7dfd1f414a8b2066bb9d673fb65b7fc09f9e695c5611bc7ff0ef534507ba0880

@ChrisRackauckas
Member

If you're pointing to this PR then yes, it does the non-sparse case...
