Bill Psomas committed on
Commit 44c931b
1 Parent(s): 48f55b0

first model commit

Files changed (1): README.md +13 -3
README.md CHANGED
@@ -8,7 +8,7 @@ pipeline_tag: image-classification
 language:
 - en
 tags:
-- resnet
+- convnext
 - convolutional neural network
 - simpool
 - dino
@@ -16,9 +16,9 @@ tags:
 - deep learning
 ---
 
-# Self-supervised ResNet-50 model
+# Self-supervised ConvNeXt-S model
 
-ResNet-50 official model trained on ImageNet-1k for 100 epochs. Self-supervision with [DINO](https://arxiv.org/abs/2104.14294). Reproduced for ICCV 2023 [SimPool](https://arxiv.org/abs/2309.06891) paper.
+[ConvNeXt-S](https://arxiv.org/abs/2201.03545) official model trained on ImageNet-1k for 100 epochs. Self-supervision with [DINO](https://arxiv.org/abs/2104.14294). Reproduced for ICCV 2023 [SimPool](https://arxiv.org/abs/2309.06891) paper.
 
 SimPool is a simple attention-based pooling method at the end of network, released in this [repository](https://github.com/billpsomas/simpool/).
 Disclaimer: This model card is written by the author of SimPool, i.e. [Bill Psomas](http://users.ntua.gr/psomasbill/).
@@ -36,6 +36,16 @@ Disclaimer: This model card is written by the author of SimPool, i.e. [Bill Psom
 
 ## BibTeX entry and citation info
 
+```
+@inproceedings{liu2022convnet,
+title={A convnet for the 2020s},
+author={Liu, Zhuang and Mao, Hanzi and Wu, Chao-Yuan and Feichtenhofer, Christoph and Darrell, Trevor and Xie, Saining},
+booktitle={Proceedings of the IEEE/CVF conference on computer vision and pattern recognition},
+pages={11976--11986},
+year={2022}
+}
+```
+
 ```
 @misc{psomas2023simpool,
 title={Keep It SimPool: Who Said Supervised Transformers Suffer from Attention Deficit?},