How to implement a gradient reversal layer (GRL) in PL? #11278
Unanswered
M-A-Hassan asked this question in code help: CV
I am training a domain-adaptation (DA) network in which a GRL sits in front of the discriminator so that the encoder is trained adversarially. Is the GRL layer implemented the same way in PyTorch Lightning (PL) as in plain PyTorch? My current implementation:
```python
from torch.autograd import Function


class GradReverse(Function):
    @staticmethod
    def forward(ctx, x, alpha):
        # Save the scaling factor for the backward pass.
        ctx.alpha = alpha
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Reverse (and scale) the gradient w.r.t. x; alpha gets no gradient.
        return grad_output * -ctx.alpha, None


def grad_reverse(x, alpha):
    return GradReverse.apply(x, alpha)


# In the discriminator forward pass
def forward(self, y, alpha):
    self.alpha = alpha
    y = grad_reverse(y, self.alpha)
```
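Since Lightning delegates the forward and backward passes to plain PyTorch autograd, a custom `Function` like `GradReverse` should work inside a `LightningModule` without any Lightning-specific changes; you simply call `grad_reverse` in `training_step`. Below is a minimal sketch under that assumption — the layer sizes, the `(x, y, domain)` batch format, the head names, and the fixed `alpha` are illustrative, not details from this thread:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
import pytorch_lightning as pl


class DAModel(pl.LightningModule):
    """Toy DA model: shared encoder, task head, and a domain head behind a GRL."""

    def __init__(self, alpha: float = 1.0):
        super().__init__()
        self.alpha = alpha  # gradient-reversal scaling factor (hypothetical fixed value)
        self.encoder = nn.Sequential(nn.Linear(16, 32), nn.ReLU())
        self.classifier = nn.Linear(32, 10)    # task head
        self.discriminator = nn.Linear(32, 2)  # domain head, fed through the GRL

    def forward(self, x):
        return self.classifier(self.encoder(x))

    def training_step(self, batch, batch_idx):
        # Assumed batch format: inputs, task labels, domain labels.
        x, y, domain = batch
        feats = self.encoder(x)
        # Task loss: ordinary gradients flow back into the encoder.
        task_loss = F.cross_entropy(self.classifier(feats), y)
        # Domain loss: the GRL flips the gradient before it reaches the encoder,
        # pushing the encoder toward domain-invariant features.
        dom_logits = self.discriminator(grad_reverse(feats, self.alpha))
        dom_loss = F.cross_entropy(dom_logits, domain)
        loss = task_loss + dom_loss
        self.log("train_loss", loss)
        return loss

    def configure_optimizers(self):
        return torch.optim.Adam(self.parameters(), lr=1e-3)
```

The key point is that no special hook seems to be needed: with automatic optimization, Lightning calls `loss.backward()` for you, and autograd invokes `GradReverse.backward` on the way back through the discriminator branch.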
Replies: 2 comments

- Hi, have you tested GRL in PyTorch Lightning?

- Hi everyone,