
scan loss stuck, performs worse than pretext #113

@mazatov


Hello,

I'm trying to train the model on my own dataset. I successfully trained the pretext model with very good top-20 accuracy (95%; the dataset is pretty simple). However, when I run scan.py, the loss gets stuck without any improvement, and the final performance is pretty bad (56%). What could go wrong in scan.py for the loss to get stuck like that? The only things I changed in the config file were the number of clusters and the crop size.

I also wonder if I should be changing anything here.

```yaml
update_cluster_head_only: False # Update full network in SCAN
num_heads: 1 # Only use one head
```
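For reference, my understanding of the SCAN objective is that it combines a consistency term (anchor and its mined nearest neighbor should get the same cluster assignment) with an entropy term that discourages collapsing everything into one cluster. A minimal sketch of that loss, assuming an `entropy_weight` hyperparameter (the function and parameter names here are illustrative, not the repo's exact code):

```python
import torch
import torch.nn.functional as F

def scan_loss(anchor_logits, neighbor_logits, entropy_weight=5.0):
    # Cluster assignment probabilities for anchors and their mined neighbors
    anchor_prob = F.softmax(anchor_logits, dim=1)
    neighbor_prob = F.softmax(neighbor_logits, dim=1)

    # Consistency term: dot product between anchor and neighbor assignments,
    # maximized when both land confidently in the same cluster
    similarity = (anchor_prob * neighbor_prob).sum(dim=1)
    consistency = -torch.log(similarity.clamp(min=1e-8)).mean()

    # Entropy of the mean assignment over the batch; maximizing it
    # penalizes collapse onto a single cluster
    mean_prob = anchor_prob.mean(dim=0)
    entropy = -(mean_prob * torch.log(mean_prob.clamp(min=1e-8))).sum()

    return consistency - entropy_weight * entropy
```

If the loss is stuck, one thing worth checking is whether the entropy term is balancing the consistency term, or whether the mined neighbors from the pretext stage are noisy.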

[screenshot: SCAN training loss curve, flat over training]
