optimize lagrangian implementation in oop dispatch for autosparse backends #134
Conversation
If multiple dispatch is an option, you could add a method that evaluates only the constraint term `dot(λ, cons_oop(θ))`. If MD is not an option, you could use `ifelse`.
Julia will also evaluate both branches, negating the optimization.
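For context, a minimal standalone sketch (names hypothetical, not from this package) of why `ifelse` doesn't help here: it is a plain function, so both argument expressions are evaluated eagerly before one result is selected, unlike the short-circuiting ternary/`if`:

```julia
# Hypothetical sketch: count how often the expensive branch runs.
calls = Ref(0)
expensive() = (calls[] += 1; 42.0)

# `ifelse` evaluates both arguments, so expensive() runs even when not selected:
y = ifelse(false, expensive(), 0.0)
@assert calls[] == 1

# A plain conditional skips the untaken branch entirely:
calls[] = 0
z = false ? expensive() : 0.0
@assert calls[] == 0
```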
Is sigma related to the input here? Like, if x was a dual number, would sigma be one too? Or is it more of a fixed external parameter? Because depending on the answer, the right approach differs.
Not sure why I closed this.
It's up to the optimizer, so we cannot tell in general. I guess we could do something like

```julia
function lagrangian(θ, σ, λ, p)
    # keep the cost term while tracing sparsity, or when σ is nonzero
    if eltype(θ) <: SCT.TracerType || !iszero(σ)
        return σ * f.f(θ, p) + dot(λ, cons_oop(θ))
    else
        return dot(λ, cons_oop(θ))
    end
end
```
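For comparison, the multiple-dispatch route mentioned earlier could look roughly like the sketch below; `f`, `cons_oop`, and the convention of passing `nothing` in place of a zero σ are stand-ins for illustration, not the package's actual API:

```julia
using LinearAlgebra: dot

# Stand-in objective and constraint functions (hypothetical)
f(θ, p) = sum(abs2, θ)
cons_oop(θ) = [θ[1] + θ[2]]

# General method: full Lagrangian
lagrangian(θ, σ, λ, p) = σ * f(θ, p) + dot(λ, cons_oop(θ))
# Specialized method: a zero σ is encoded as `nothing`,
# so the cost function is never called on this path
lagrangian(θ, ::Nothing, λ, p) = dot(λ, cons_oop(θ))

θ = [1.0, 2.0]; λ = [3.0]; p = nothing
full = lagrangian(θ, 2.0, λ, p)       # 2 * 5 + 3 * 3 = 19
noσ  = lagrangian(θ, nothing, λ, p)   # 3 * 3 = 9
```

The branch-free dispatch keeps each method type-stable, at the cost of requiring the caller to encode the special case in the argument's type.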
That should be sufficient.
This branch exists for most cases, but it looks like the oop case with autosparse was missed, so I have changed the title to reflect that.
Do we need a compat bound on something based on the test failure?
The new title isn't correct; it's also optimizing the non-autosparse cases...
No, we just need to keep pinging @vchuravy until the type piracy of Base goes away 😅
If you're pointing to this PR, then yes, it does handle the non-sparse case...
`σ = 0` is a common special case, and it makes sense to optimize for it by not calling the cost function in this case.
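The payoff of that special case can be checked directly. This sketch (hypothetical names, not OptimizationBase internals) instruments the objective with a call counter to confirm the `σ = 0` branch never touches the cost function:

```julia
using LinearAlgebra: dot

ncalls = Ref(0)
cost(θ) = (ncalls[] += 1; sum(abs2, θ))   # instrumented stand-in objective
cons(θ) = [θ[1] - θ[2]]                    # stand-in constraint

# Guarded Lagrangian: skip the cost term entirely when σ == 0
lag(θ, σ, λ) = iszero(σ) ? dot(λ, cons(θ)) : σ * cost(θ) + dot(λ, cons(θ))

θ = [1.0, 2.0]; λ = [5.0]
a = lag(θ, 0.0, λ)        # constraint term only; cost() is never called
@assert ncalls[] == 0
b = lag(θ, 1.0, λ)        # full Lagrangian; cost() called once
@assert ncalls[] == 1
```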