Hi,
I have fine-tuned my model using supervised mode on my custom data. However, when I switch to selfsupervised_kmeans mode and add the mask file, I notice that the output shapes of the data returned by train_data_loader_iter.next() are inconsistent with those from supervised mode.
Observations:
Supervised mode output:
- First value of item: torch.Size([3, 32, 128])
- Second value of item: torch.Size([1, 25])

Self-supervised KMeans mode output:
- First value of item: torch.Size([3, 3, 32, 128])
- Second value of item: torch.Size([32, 128])
- Third value of item: torch.Size([3, 3])
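For reference, the shapes above were collected with a small helper along these lines; the helper itself is mine, and only train_data_loader_iter comes from the training script:

```python
import torch

def print_item_shapes(data_loader_iter):
    """Print the shape of each value returned by one call to the iterator."""
    item = data_loader_iter.next()  # next(data_loader_iter) on newer PyTorch versions
    for i, value in enumerate(item):
        if torch.is_tensor(value):
            print(f"Value {i} of item: Size: {value.shape}")
        else:
            print(f"Value {i} of item: type {type(value).__name__}")

# Usage inside train.py, right after the iterator is created:
# print_item_shapes(train_data_loader_iter)
```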
Context:
The sizes for each mode were printed at this line of the training script:
Line 269 in 543109a:
image_tensors, label_tensors = train_data_loader_iter.next()
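To illustrate the mismatch, here is a minimal sketch of what happens at that line in each mode; the three-way variable names are my guesses for readability, not identifiers from the repo:

```python
# Supervised mode: each item has two values, so the unpacking on line 269 works.
image_tensors, label_tensors = train_data_loader_iter.next()

# selfsupervised_kmeans mode: each item has three values, so the same line
# raises "ValueError: too many values to unpack (expected 2)".
# Something like the following would be needed instead (names assumed):
augmented_images, mask, affine_matrix = train_data_loader_iter.next()
# augmented_images: torch.Size([3, 3, 32, 128])  # original image + 2 augmented views
# mask:             torch.Size([32, 128])
# affine_matrix:    torch.Size([3, 3])
```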
Questions:
- The paper seems to describe self-supervised learning as creating 2 additional augmented images to form a batch of 3, i.e. torch.Size([3, 3, 32, 128]), where the second value is the mask torch.Size([32, 128]) and the third is the affine matrix torch.Size([3, 3]). This output therefore does not appear to be compatible with the current training script.
- Could you please provide the fine-tuning code for selfsupervised mode?
Thank you!