Description
```python
attention_vector = torch.cat([self.conv_ex(Z).unsqueeze(dim=1), self.conv_ex(Z).unsqueeze(dim=1)], dim=1)
attention_vector = self.softmax(attention_vector)
```
and `self.softmax = nn.Softmax(dim=1)`
It seems that the two elements concatenated into `attention_vector` are identical (the same `self.conv_ex(Z)` output twice), so applying softmax over dim=1 will always produce 0.5 for each of them.
So why are we doing this? I'm not sure whether I've missed something.
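To illustrate the point, here is a minimal sketch (the tensor shapes are made up, and a plain random tensor stands in for `self.conv_ex(Z)`) showing that softmax over two identical stacked copies is always 0.5:

```python
import torch
import torch.nn as nn

# Stand-in for the output of self.conv_ex(Z); shape is hypothetical.
z = torch.randn(4, 16)

# Stack two identical copies along a new dim=1, as in the issue's snippet.
stacked = torch.cat([z.unsqueeze(dim=1), z.unsqueeze(dim=1)], dim=1)  # shape (4, 2, 16)

# Softmax over the stacking dimension: equal inputs give a uniform distribution.
weights = nn.Softmax(dim=1)(stacked)

print(torch.allclose(weights, torch.full_like(weights, 0.5)))  # True
```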
