Hi! On Hugging Face, I found the following pre-trained encoders:
I'm not entirely sure whether these models were pre-trained with unsupervised learning or trained for classification. I suspect the latter, since they are described as "image classification models," but I'm not certain: in some of the papers, unsupervised learning is used for pre-training on the same dataset (ImageNet-22k), followed by supervised fine-tuning for downstream tasks. Could anyone clarify?
Those are all supervised pre-training on ImageNet-21k (22k); Google's checkpoints use a slightly different classifier layout than everyone else's copy of full ImageNet. It's safe to assume that if the model has a classifier, specifically an ImageNet classifier (in1k, in12k, in21k, in22k, etc.), and there is no mention in the model card or elsewhere of semi-/unsupervised pre-training, then it was trained with supervised learning.
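
One quick way to check is to load a checkpoint and inspect its classifier head. A minimal sketch, assuming the `timm` library and using one in21k checkpoint as an illustrative example (the model name here is an assumption, not one of the models from the original question):

```python
import timm

# Example in21k checkpoint; the ".augreg_in21k" suffix tags the
# pretraining dataset/recipe.
model = timm.create_model('vit_base_patch16_224.augreg_in21k', pretrained=True)

# A supervised ImageNet-21k checkpoint ships with a classifier head sized
# to the label set (21843 classes for Google's in21k copy), whereas a
# purely self-supervised encoder typically has no classifier at all
# (num_classes == 0).
print(model.num_classes)       # e.g. 21843
print(model.get_classifier())  # the final Linear classification layer
```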