This ought to make debugging easier as well. A potential next step would be moving more bits out to NNlib(CUDA).
src/layers/recurrent.jl
Outdated
# TODO move to ChainRulesCore?
@adjoint function Broadcast.broadcasted(f::Recur, args...)
  Zygote.∇map(__context__, f, args...)
end
I think the point of this is that the gradient for map reverses iteration order. That's a little dodgy, since map makes no such promise (and IIRC it only happens for some argument types, vectors but not 1-column matrices?). Should we just make broadcasting an RNN within a gradient an error?
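For concreteness, a minimal sketch (not the PR's code) of what erroring out could look like, mirroring the `@adjoint` form quoted above:

```julia
using Zygote: @adjoint

# Hypothetical: refuse to broadcast a Recur inside a gradient, rather than
# relying on ∇map's reversed, argument-type-dependent iteration order.
@adjoint function Broadcast.broadcasted(f::Recur, args...)
    error("Broadcasting a `Recur` within a gradient is not supported; " *
          "use `map` or an explicit loop over time steps instead.")
end
```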
Moved to here, which I think should give the same results, but also warn on the forward pass:
src/utils.jl
Outdated
-@nograd modules
+ChainRulesCore.@non_differentiable modules(::Any) # is this correct?
If the intention of modules is that something roughly like loss + sum(norm, modules(m)) should work, then doesn't this need to pass gradients through?
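For reference, a sketch of that intended use (hypothetical loss; assumes `Flux.modules(m)` returns the layers of `m`):

```julia
using Flux, LinearAlgebra

# Hypothetical regularised loss: the penalty only makes sense if gradients
# flow through the weights that modules(m) returns.
penalty(m) = sum(norm(l.weight) for l in Flux.modules(m) if l isa Dense)
loss(m, x, y) = Flux.Losses.mse(m(x), y) + 0.01f0 * penalty(m)
```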
Good catch. I have a sinking feeling this might be one of those things that works with implicit gradients but not with explicit ones.
Likewise. Xref FluxML/Functors.jl#35 I guess -- is fmapreduce(x -> norm(x.weight), +, m; exclude = x -> x isa Dense) where we want to end up?
That would be one way of doing things. The big question with any approach is how to prevent AD from balking at the cache mutation + lookup.
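To make that concern concrete, here is a naive sketch of such a function (hypothetical, assuming Functors.jl's `fmap`); the `Ref` mutation below is exactly the kind of thing most ADs balk at:

```julia
using Functors

# Naive fmapreduce: walk the model with fmap, folding f(leaf) into an
# accumulator. The in-place update of acc[] is opaque to most ADs.
function fmapreduce(f, op, x; exclude, init)
    acc = Ref(init)
    fmap(x; exclude) do leaf
        acc[] = op(acc[], f(leaf))
        leaf  # return the leaf unchanged; only the side effect matters
    end
    return acc[]
end
```

Usage would then be something like `fmapreduce(x -> norm(x.weight), +, m; exclude = x -> x isa Dense, init = 0f0)`.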
end

ChainRulesCore.@scalar_rule xlogy(x, y) (log(y), x/y) # is this good enough?
ChainRulesCore.@scalar_rule xlogx(x) (log(x) + true)
Can't literally translate the broadcasted(::typeof(xlogy)) rule to a Zygote-free world, as unbroadcast (which sums as necessary for mismatched shapes) belongs to Zygote.
I hope that Diffractor's broadcasting will work via @scalar_rule. But the rule as written is slightly different, as it doesn't treat Δ==0 as a strong zero, when y==0. Does that matter?
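To illustrate the difference (hypothetical helper names, showing only the ∂x partial of xlogy):

```julia
# What the @scalar_rule amounts to: NaN when Δ == 0 and y == 0,
# since 0 * log(0) = 0 * -Inf = NaN.
partial_scalar(Δ, x, y) = Δ * log(y)

# The Zygote broadcast rule instead treats Δ == 0 as a strong zero:
partial_strong(Δ, x, y) = ifelse(iszero(Δ), zero(float(x)), Δ * log(y))

partial_scalar(0.0, 0.0, 0.0)  # NaN
partial_strong(0.0, 0.0, 0.0)  # 0.0
```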
Are these needed if https://github.com/JuliaStats/LogExpFunctions.jl/blob/c8a4c28ffe7b6e4f8d5253e01cef091bb8d2f42c/src/chainrules.jl#L1-L2 are already loaded through a transitive dep?
Flux could switch to those. LogExpFunctions uses branches rather than ifelse, and has different NaN behaviour; not sure if that matters. And it brings 5 dependencies.
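Roughly the two styles in question (simplified sketches, not the exact source of either package):

```julia
# Flux-style, branch-free: ifelse evaluates both arms but selects the zero,
# so x == 0 yields 0 even when y is NaN.
xlogy_ifelse(x, y) = (r = x * log(y); ifelse(iszero(x), zero(r), r))

# LogExpFunctions-style, with a branch: a NaN y propagates.
xlogy_branch(x, y) = iszero(x) && !isnan(y) ? zero(x * log(one(y))) : x * log(y)

xlogy_ifelse(0.0, NaN)  # 0.0
xlogy_branch(0.0, NaN)  # NaN
```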
But for now perhaps it's evidence that the scalar rules are ok?
Are you looking to do some testing soon with this and Diffractor/not Zygote? Otherwise I think it would be cleaner to have a separate PR that removes all of the code above in favour of https://github.com/FluxML/Zygote.jl/blob/master/src/lib/logexpfunctions.jl and the @scalar_rules in LogExpFunctions.
I can remove these rules for now if you prefer. The functions ought to be differentiable without special rules, mostly. The PR just wants to translate as many things as possible over for now.
I said:
as unbroadcast (which sums as necessary for mismatched shapes)
This is wrong, because _check_sizes demands equal size, simplifying the broadcast:
https://github.com/FluxML/Flux.jl/blob/master/src/losses/utils.jl#L27
While I guess these broadcasts aren't so performance-sensitive (since there will only be one, for the whole model), it would be nice if all loss functions were second-differentiable. Whether that already works, or needs to be done by fiddling with broadcasting, or rules for the loss functions themselves, I don't know.
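One way to probe that (a test sketch, not from the PR):

```julia
using Flux, Zygote

x = softmax(randn(Float32, 3, 5))
y = Flux.onehotbatch([1, 2, 3, 1, 2], 1:3)

# First derivative of the loss w.r.t. the prediction:
d1(p) = Zygote.gradient(q -> Flux.Losses.crossentropy(q, y), p)[1]

# Second derivative: this errors if the rules involved aren't
# twice-differentiable.
H = Zygote.jacobian(d1, x)[1]
```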
istraining() = false

-@adjoint istraining() = true, _ -> nothing
+ChainRulesCore.rrule(::typeof(istraining)) = true, _ -> (NoTangent(),)
I'm surprised there isn't an equivalent for this in ChainRules already.
Somewhere I was writing a function like CRC.order().back > 0... would be good to have.
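For context, the trick being translated above behaves like this (usage sketch; `istraining` is internal to Flux):

```julia
using Flux, Zygote

Flux.istraining()  # false in ordinary code

# Inside a pullback the rule's forward value is used instead, which is how
# layers like Dropout switch behaviour during gradient computation:
y, _ = Zygote.pullback(() -> Flux.istraining())
y                  # true
```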
bors try
Merge conflict.
If you wouldn't mind rebasing, we can get this merged, assuming that fixes the CUDA tests.
Co-authored-by: Brian Chen <ToucheSir@users.noreply.github.com>
To allow use without Zygote, we should move to defining rules via ChainRules.
Most of these are mechanical, but perhaps deserve a quick look to see if there are tests. Comments on particular ones below.
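For anyone skimming, the mechanical pattern is roughly this (illustrative toy function, not from the diff):

```julia
using ChainRulesCore

square(x) = x^2

# Old, Zygote-only style:
#   Zygote.@adjoint square(x) = square(x), Δ -> (2Δ * x,)

# New, AD-agnostic style: any ChainRules-aware AD (Zygote, Diffractor, ...)
# picks this up.
function ChainRulesCore.rrule(::typeof(square), x)
    square_pullback(Δ) = (NoTangent(), 2Δ * x)
    return square(x), square_pullback
end
```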