Self-supervised ViT-S/16 (small Vision Transformer, patch size 16) model

Official ViT-S/16 model trained on ImageNet-1k for 100 epochs with DINO self-supervision. Reproduced for the ICCV 2023 SimPool paper.
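
One way to use the checkpoint is to instantiate the official DINO ViT-S/16 architecture and load the weights into it. The snippet below is a minimal sketch, not verified against this repository: the `dino_vits16` hub entry point comes from the official DINO codebase, and the checkpoint filename is an assumption (full DINO training checkpoints may additionally require extracting the teacher or student backbone keys).

```python
import torch

# Sketch only: 'dino_vits16' is the official DINO hub entry point;
# the checkpoint filename below is an assumption, not a file name
# confirmed by this repository.
model = torch.hub.load('facebookresearch/dino:main', 'dino_vits16', pretrained=False)
state_dict = torch.load('vits_dino_official_ep100.pth', map_location='cpu')
model.load_state_dict(state_dict)
model.eval()
```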

SimPool is a simple attention-based pooling method applied at the end of the network, released in this repository. Disclaimer: this model card was written by Bill Psomas, an author of SimPool.
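
For intuition, the core idea of SimPool is cross-attention pooling in which the global average of the patch tokens serves as the query. The sketch below illustrates that idea only; it is not the official implementation (it omits details such as the γ parameter described in the paper), so refer to the SimPool repository for the exact code.

```python
import torch
import torch.nn as nn

class SimPoolSketch(nn.Module):
    """Illustrative sketch of attention-based pooling with a GAP query.

    Not the official SimPool code; see the paper/repository for the
    exact formulation.
    """
    def __init__(self, dim):
        super().__init__()
        self.norm_q = nn.LayerNorm(dim)
        self.norm_k = nn.LayerNorm(dim)
        self.wq = nn.Linear(dim, dim, bias=False)
        self.wk = nn.Linear(dim, dim, bias=False)
        self.scale = dim ** -0.5

    def forward(self, x):                       # x: (B, N, D) patch tokens
        u = x.mean(dim=1, keepdim=True)         # GAP as the initial query, (B, 1, D)
        q = self.wq(self.norm_q(u))             # (B, 1, D)
        k = self.wk(self.norm_k(x))             # (B, N, D)
        attn = (q @ k.transpose(-2, -1)) * self.scale  # (B, 1, N)
        attn = attn.softmax(dim=-1)             # attention over patch tokens
        return (attn @ x).squeeze(1)            # pooled representation, (B, D)
```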

Evaluation with k-NN

k     top-1 (%)   top-5 (%)
10    68.918      85.432
20    68.738      87.278
100   66.746      88.520
200   65.330      88.260
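
The numbers above follow the DINO-style weighted k-NN protocol: frozen features are extracted for training and validation images, and each validation image is classified by a temperature-weighted vote over its k nearest training neighbors. A minimal sketch, assuming L2-normalized feature tensors; k and the temperature T below are common defaults, not necessarily the exact settings used here:

```python
import torch

@torch.no_grad()
def knn_classify(train_feats, train_labels, test_feats, k=20, T=0.07, num_classes=1000):
    """Weighted k-NN classifier in the style of DINO's evaluation.

    Assumes train_feats and test_feats are L2-normalized, so the dot
    product is cosine similarity, and train_labels is a LongTensor.
    """
    sim = test_feats @ train_feats.t()              # (num_test, num_train) similarities
    topk_sim, topk_idx = sim.topk(k, dim=1)         # k nearest neighbors per test image
    topk_labels = train_labels[topk_idx]            # (num_test, k) neighbor labels
    weights = (topk_sim / T).exp()                  # temperature-scaled vote weights
    votes = torch.zeros(test_feats.size(0), num_classes, device=test_feats.device)
    votes.scatter_add_(1, topk_labels, weights)     # accumulate weighted votes per class
    return votes.argmax(dim=1)                      # predicted class per test image
```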

BibTeX entry and citation info

```bibtex
@misc{psomas2023simpool,
      title={Keep It SimPool: Who Said Supervised Transformers Suffer from Attention Deficit?},
      author={Bill Psomas and Ioannis Kakogeorgiou and Konstantinos Karantzalos and Yannis Avrithis},
      year={2023},
      eprint={2309.06891},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}

@inproceedings{caron2021emerging,
  title={Emerging properties in self-supervised vision transformers},
  author={Caron, Mathilde and Touvron, Hugo and Misra, Ishan and J{\'e}gou, Herv{\'e} and Mairal, Julien and Bojanowski, Piotr and Joulin, Armand},
  booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision},
  pages={9650--9660},
  year={2021}
}
```