---
base_model:
- timm/deit_small_patch16_224.fb_in1k
- timm/deit_tiny_patch16_224.fb_in1k
- timm/cait_xxs24_224.fb_dist_in1k
metrics:
- accuracy
tags:
- Interpretability
- ViT
- Classification
- XAI
---

# ProtoViT: Interpretable Vision Transformer with Prototypical Learning

This repository contains pretrained ProtoViT models for interpretable image classification, as described in our paper "Interpretable Image Classification with Adaptive Prototype-based Vision Transformers".

## Model Description

ProtoViT combines Vision Transformers with prototype-based learning to create models that are both highly accurate and interpretable. Rather than functioning as a black box, ProtoViT learns interpretable prototypes that explain its classification decisions through visual similarities.
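The core idea can be illustrated with a minimal sketch (not the authors' implementation; shapes and the scoring rule are simplified assumptions): each class owns a few prototype vectors, every image patch feature is compared to every prototype by cosine similarity, and the best match per prototype contributes evidence toward that prototype's class.

```python
import numpy as np

rng = np.random.default_rng(0)

def cosine_sim(patches, prototypes):
    """Cosine similarity between rows of `patches` and rows of `prototypes`."""
    p = patches / np.linalg.norm(patches, axis=1, keepdims=True)
    q = prototypes / np.linalg.norm(prototypes, axis=1, keepdims=True)
    return p @ q.T

# Toy dimensions: 196 patch tokens (14x14), 64-d features,
# 3 classes with 2 prototypes each -- all illustrative, not the paper's config.
num_patches, dim, num_classes, protos_per_class = 196, 64, 3, 2
patch_features = rng.standard_normal((num_patches, dim))
prototypes = rng.standard_normal((num_classes * protos_per_class, dim))

sims = cosine_sim(patch_features, prototypes)   # (196, 6) similarity map
best_match = sims.max(axis=0)                   # strongest evidence per prototype
class_logits = best_match.reshape(num_classes, protos_per_class).sum(axis=1)
predicted = int(class_logits.argmax())
```

Because each logit is a sum of per-prototype similarities, the decision can be traced back to which prototypes matched and where in the image they matched best.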
### Supported Architectures

We provide three variants of ProtoViT:

- **ProtoViT-T**: Built on DeiT-Tiny backbone
- **ProtoViT-S**: Built on DeiT-Small backbone
- **ProtoViT-CaiT**: Built on CaiT-XXS24 backbone
## Performance

All models were trained and evaluated on the CUB-200-2011 fine-grained bird species classification dataset.

| Model Version | Backbone | Resolution | Top-1 Accuracy | Checkpoint |
|---------------|----------|------------|----------------|------------|
| ProtoViT-T | DeiT-Tiny | 224×224 | 83.36% | [Download](https://huggingface.co/chiyum609/ProtoViT/blob/main/DeiT_Tiny_finetuned0.8336.pth) |
| ProtoViT-S | DeiT-Small | 224×224 | 85.30% | [Download](https://huggingface.co/chiyum609/ProtoViT/blob/main/DeiT_Small_finetuned0.8530.pth) |
| ProtoViT-CaiT | CaiT-XXS24 | 224×224 | 86.02% | [Download](https://huggingface.co/chiyum609/ProtoViT/blob/main/CaiT_xxs24_224_finetuned0.8602.pth) |
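The checkpoints above can also be fetched programmatically with `huggingface_hub` (a sketch; the repo ID and file names are taken from the table above, so verify them before use):

```python
from huggingface_hub import hf_hub_download
import torch

def load_protovit_state(filename, repo_id="chiyum609/ProtoViT"):
    """Download one of the checkpoints listed above and load it on CPU."""
    path = hf_hub_download(repo_id=repo_id, filename=filename)
    return torch.load(path, map_location="cpu")

# Example (downloads the file on first call):
# state = load_protovit_state("DeiT_Tiny_finetuned0.8336.pth")
```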
## Features

- 🔍 **Interpretable Decisions**: Classifies with self-explanatory reasoning based on the input's similarity to learned prototypes, the key features of each class
- 🎯 **High Accuracy**: Achieves competitive performance on fine-grained classification tasks
- 🚀 **Multiple Architectures**: Supports various Vision Transformer backbones
- 📊 **Analysis Tools**: Comes with tools for both local and global prototype analysis
## Requirements

- Python 3.8+
- PyTorch 1.8+
- timm==0.4.12
- torchvision
- numpy
- pillow
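The dependencies above can be installed with pip (a minimal sketch; note the pinned `timm` version, and adjust the PyTorch install for your CUDA setup if needed):

```shell
pip install torch torchvision timm==0.4.12 numpy pillow
```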
## Citation

If you use this model in your research, please cite:

```bibtex
@article{ma2024interpretable,
  title={Interpretable Image Classification with Adaptive Prototype-based Vision Transformers},
  author={Ma, Chiyu and Donnelly, Jon and Liu, Wenjun and Vosoughi, Soroush and Rudin, Cynthia and Chen, Chaofan},
  journal={arXiv preprint arXiv:2410.20722},
  year={2024}
}
```
## Acknowledgements

This implementation builds upon the following excellent repositories:

- [DeiT](https://github.com/facebookresearch/deit)
- [CaiT](https://github.com/facebookresearch/deit)
- [ProtoPNet](https://github.com/cfchen-duke/ProtoPNet)
## License

This project is released under the MIT license.
## Contact

For any questions or feedback, please:

1. Open an issue in the GitHub repository
2. Contact [Your Contact Information]