Hello, I am reimplementing your Dance Revolution code. I noticed that in your config file, `lambda_v` is set to 0.01. With that value, in a 40-epoch run, `int(epoch_i * self.lambda_v)` is always 0, so `prediction_mask` is empty and every mask in the training forward pass becomes 1 — there is effectively no prediction mask at all. Could you explain the purpose of this setting? Thank you.
```python
groundtruth_mask = torch.ones(seq_len, self.condition_step)
prediction_mask = torch.zeros(seq_len, int(epoch_i * self.lambda_v))  # this line
mask = torch.cat([prediction_mask, groundtruth_mask], 1).view(-1)[:seq_len]  # for random
```
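To illustrate the observation, here is a minimal, self-contained sketch of the masking scheme from the lines quoted above. The wrapper name `build_curriculum_mask` is hypothetical; the body mirrors the original three lines, with `self.condition_step` and `self.lambda_v` passed as plain arguments:

```python
import torch

def build_curriculum_mask(seq_len, condition_step, epoch_i, lambda_v):
    # Mirrors the snippet above: ground-truth steps get mask = 1
    # (teacher forcing), predicted steps get mask = 0, and the number
    # of predicted steps per block grows as int(epoch_i * lambda_v).
    groundtruth_mask = torch.ones(seq_len, condition_step)
    prediction_mask = torch.zeros(seq_len, int(epoch_i * lambda_v))
    mask = torch.cat([prediction_mask, groundtruth_mask], 1).view(-1)[:seq_len]
    return mask

# With lambda_v = 0.01, int(epoch_i * 0.01) == 0 for every epoch_i < 100,
# so prediction_mask has zero columns and the mask is all ones:
print(build_curriculum_mask(seq_len=8, condition_step=2,
                            epoch_i=39, lambda_v=0.01))

# A larger lambda_v makes zero entries appear as training progresses:
print(build_curriculum_mask(seq_len=8, condition_step=2,
                            epoch_i=40, lambda_v=0.1))
```

This shows why, in a 40-epoch run with `lambda_v = 0.01`, the mask never contains a zero entry.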