zhoupans committed
Commit 585a879
1 Parent(s): d310a98

Update README.md

Files changed (1)
  1. README.md +3 -3
README.md CHANGED
@@ -5,7 +5,7 @@ This is a PyTorch implementation of **Mugs** proposed by our paper "**Mugs: A Mu
 [![PWC](https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/mugs-a-multi-granular-self-supervised/self-supervised-image-classification-on)](https://paperswithcode.com/sota/self-supervised-image-classification-on?p=mugs-a-multi-granular-self-supervised)
 
 <div align="center">
-  <img width="100%" alt="Overall framework of Mugs. " src="https://huggingface.co/zhoupans/Mugs_ViT_large_pretrained/blob/main/exp_illustration/framework.png">
+  <img width="100%" alt="Overall framework of Mugs. " src="https://huggingface.co/zhoupans/Mugs_ViT_large_pretrained/resolve/main/exp_illustration/framework.png">
 </div>
 
 **<p align="center">Fig 1. Overall framework of Mugs.** In (a), for each image, two random crops of one image
@@ -93,7 +93,7 @@ You can choose to download only the weights of the pretrained backbone used for
 </table>
 
 <div align="center">
-  <img width="100%" alt="Comparison of linear probing accuracy on ImageNet-1K." src="./exp_illustration/comparison.png">
+  <img width="100%" alt="Comparison of linear probing accuracy on ImageNet-1K." src="https://huggingface.co/zhoupans/Mugs_ViT_large_pretrained/blob/main/exp_illustration/comparison.png">
 </div>
 
 **<p align="center">Fig 2. Comparison of linear probing accuracy on ImageNet-1K.**</p>
@@ -161,7 +161,7 @@ We show the fish classes in ImageNet-1K, i.e., the first six classes,
 including tench, goldfish, white shark, tiger shark, hammerhead, electric
 ray. See more examples in Appendix.
 <div align="center">
-  <img width="100%" alt="T-SNE visualization of the learned feature by ViT-B/16." src="./exp_illustration/TSNE.png">
+  <img width="100%" alt="T-SNE visualization of the learned feature by ViT-B/16." src="https://huggingface.co/zhoupans/Mugs_ViT_large_pretrained/blob/main/exp_illustration/attention_vis.png">
 </div>
 
 **<p align="center">Fig 4. T-SNE visualization of the learned feature by ViT-B/16.**</p>
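The image `src` attributes in this diff all point at files in the `zhoupans/Mugs_ViT_large_pretrained` repo on the Hugging Face Hub, where `blob/main/...` URLs serve the web file viewer and `resolve/main/...` URLs serve the raw file. As a minimal sketch (assuming the standard `huggingface_hub` client; the repo id and file path are taken from the framework figure above), the same asset can be fetched programmatically:

```python
# Minimal sketch: fetch a file from the Hub repo referenced in the diff above.
# Assumes the standard `huggingface_hub` client (pip install huggingface_hub);
# the repo id and file path are copied from the README's image URLs.
from huggingface_hub import hf_hub_download

# hf_hub_download retrieves the raw file (the same bytes a `resolve/main/...`
# URL returns over HTTP), caches it locally, and returns the local path.
local_path = hf_hub_download(
    repo_id="zhoupans/Mugs_ViT_large_pretrained",
    filename="exp_illustration/framework.png",
)
print(local_path)
```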