---
license: other
library_name: keras
---

## Description

* **Year**: 2022
* **Organisation**: [TensorFlow](https://www.tensorflow.org/)
* **Project Title**: Publish fine-tuned MobileViT in TensorFlow Hub
* **Mentors**: [Luiz Gustavo Martins](https://twitter.com/gusthema) & [Sayak Paul](https://twitter.com/RisingSayak)

TensorFlow Hub is the main TensorFlow model repository, hosting thousands of pre-trained models with documentation and sample code that are ready to use or fine-tune. The idea behind the project is to develop new state-of-the-art models like MobileViT and publish them, pre-trained on the ImageNet-1k dataset, on TensorFlow Hub. MobileViT is a light-weight, general-purpose vision transformer for mobile devices. It presents a different perspective on the global processing of information with transformers, i.e., transformers as convolutions. The paper reports that MobileViT significantly outperforms CNN- and ViT-based networks across different tasks and datasets. On the ImageNet-1k dataset, MobileViT achieves a top-1 accuracy of 78.4% with about 6 million parameters, which is 3.2% and 6.2% more accurate than MobileNetv3 (CNN-based) and DeiT (ViT-based) for a similar number of parameters. On the MS-COCO object detection task, MobileViT is 5.7% more accurate than MobileNetv3 for a similar parameter count.

# Project Report

This repository provides TensorFlow / Keras implementations of the different MobileViT [1] variants. It also provides TensorFlow / Keras models populated with the original pre-trained MobileViT weights available from [2]. These models are not black-box SavedModels, i.e., they can be fully expanded into `tf.keras.Model` objects, and all the usual utility functions can be called on them (for example, `.summary()`).

As of today, all the TensorFlow / Keras variants of the models listed [here](https://github.com/apple/ml-cvnets/blob/main/docs/source/en/general/README-model-zoo.md) are available in this repository. This list includes the ImageNet-1k models.

Refer to the ["Using the models"](https://github.com/sayannath/MobileViT-GSoC#using-the-models) section to get started.

## Conversion

TensorFlow / Keras implementations are available in `mobilevit/models/mobilevit.py`. Conversion utilities are in `convert.py`.

## Models

The converted models are available on [TF-Hub](https://tfhub.dev).

There are a total of three different models, each available in two variants: classifier and feature extractor. You can load any model and get started like so:

```py
import tensorflow as tf

# 'model_path' is the path to a downloaded SavedModel directory.
model = tf.keras.models.load_model('model_path')
print(model.summary())
```

The model names are interpreted as follows:

* `mobilevit_xxs_1k_256`: the XXS variant of MobileViT, pre-trained on the ImageNet-1k dataset at a resolution of 256x256.
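
The classifier variants can also be consumed directly from TF-Hub via `tensorflow_hub`. The handle below is a placeholder (the real handles live on [tfhub.dev](https://tfhub.dev)), and the sketch assumes 256x256 RGB inputs scaled to [0, 1]; check the model documentation for the exact preprocessing:

```py
import tensorflow as tf
import tensorflow_hub as hub

# Hypothetical TF-Hub handle; substitute the actual handle of the published
# MobileViT classifier from tfhub.dev.
handle = "https://tfhub.dev/<publisher>/mobilevit_xxs_1k_256/1"

# Wrap the hub model as a Keras layer and build a callable model around it.
classifier = tf.keras.Sequential([
    tf.keras.layers.InputLayer(input_shape=(256, 256, 3)),
    hub.KerasLayer(handle),
])

# Dummy batch of images; the classifier variant outputs ImageNet-1k class scores.
images = tf.random.uniform((1, 256, 256, 3))
preds = classifier(images)
print(preds.shape)  # Expected: (1, 1000)
```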

## Results

Results are reported on the ImageNet-1k validation set (top-1 accuracy).

| model | original acc@1 | keras acc@1 |
|:-------------:|:--------------:|:-----------:|
| MobileViT_XXS | 69.0 | 68.59 |
| MobileViT_XS | 74.7 | 74.67 |
| MobileViT_S | 78.3 | 78.36 |

The differences in results stem primarily from differences between the library implementations, especially in how image resizing is implemented in PyTorch and TensorFlow. Results can be verified with the code in `imagenet_1k_eval`. Logs are available at [this URL](https://tensorboard.dev/experiment/uyWNZmrwQwW0c87qTjiMOw/#scalars).
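
As a rough illustration of such a verification (the exact resizing, cropping, and input scaling should be taken from `imagenet_1k_eval`), a minimal top-1 accuracy loop over an already preprocessed `tf.data` validation pipeline could look like this:

```py
import tensorflow as tf

def top1_accuracy(model: tf.keras.Model, val_ds: tf.data.Dataset) -> float:
    """Computes top-1 accuracy over batches of (image, label) pairs."""
    metric = tf.keras.metrics.SparseCategoricalAccuracy()
    for images, labels in val_ds:
        preds = model(images, training=False)
        metric.update_state(labels, preds)
    return float(metric.result().numpy())

# Example usage, assuming `model` and `val_ds` are already built:
# print(top1_accuracy(model, val_ds))
```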

## Using the models

### Pre-trained models:

* Off-the-shelf classification: [Colab Notebook](https://colab.research.google.com/github/sayannath/MobileViT-GSoC/blob/main/notebooks/classification.ipynb)
* Fine-tuning: [Colab Notebook]() (in progress; a minimal sketch is shown below)
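
Until the fine-tuning notebook lands, here is a minimal sketch of one way to fine-tune a feature-extractor variant on a new dataset. The TF-Hub handle, the number of target classes, and the optimizer settings are illustrative assumptions rather than values prescribed by this project:

```py
import tensorflow as tf
import tensorflow_hub as hub

NUM_CLASSES = 5  # Example target dataset; adjust to your task.

# Hypothetical handle of a MobileViT feature-extractor variant on TF-Hub.
feature_extractor = hub.KerasLayer(
    "https://tfhub.dev/<publisher>/mobilevit_xxs_1k_256_fe/1",
    trainable=True,  # Set to False to train only the new classification head.
)

model = tf.keras.Sequential([
    tf.keras.layers.InputLayer(input_shape=(256, 256, 3)),
    feature_extractor,
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])

model.compile(
    optimizer=tf.keras.optimizers.Adam(1e-4),
    loss="sparse_categorical_crossentropy",
    metrics=["accuracy"],
)

# train_ds should yield (image, label) batches with images resized to
# 256x256 and preprocessed as the model expects.
# model.fit(train_ds, epochs=5)
```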

### Randomly initialized models:

```py
from mobilevit.models.mobilevit import get_mobilevit_model

model = get_mobilevit_model(
    model_name='mobilevit_xxs',  # [mobilevit_xxs, mobilevit_xs, mobilevit_s]
    image_shape=(256, 256, 3),
    num_classes=1000,
)

print(model.summary())
```
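
Since `get_mobilevit_model` returns a regular `tf.keras.Model`, it can be trained with the standard Keras workflow. A minimal sketch (the loss configuration is an assumption; adapt it to the actual classification head):

```py
import tensorflow as tf

from mobilevit.models.mobilevit import get_mobilevit_model

model = get_mobilevit_model(
    model_name='mobilevit_xxs',
    image_shape=(256, 256, 3),
    num_classes=1000,
)

# Assumption: the classification head applies a softmax; if it returns raw
# logits instead, set `from_logits=True` below.
model.compile(
    optimizer='adam',
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=False),
    metrics=['accuracy'],
)

# Quick smoke test on random data; replace with a real tf.data pipeline.
images = tf.random.uniform((8, 256, 256, 3))
labels = tf.random.uniform((8,), maxval=1000, dtype=tf.int32)
model.fit(images, labels, batch_size=4, epochs=1)
```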

To view different model configurations, refer [here](https://github.com/sayannath/MobileViT-GSoC/blob/main/configs/model_config.py).

## Upcoming Contributions

- [ ] Allow the models to accept more input shapes (useful for downstream tasks)
- [ ] Convert the `saved_models` to TFLite (a conversion sketch is shown below)
- [ ] Fine-tuning notebook
- [x] Off-the-shelf classification notebook
- [x] Publish models on TF-Hub
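
For the TFLite item above, the conversion would presumably go through the standard `tf.lite.TFLiteConverter` workflow. A minimal sketch, assuming a locally exported SavedModel directory (the paths are placeholders):

```py
import tensorflow as tf

# Placeholder path to one of the exported SavedModels.
saved_model_dir = "saved_models/mobilevit_xxs_1k_256"

converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir)
# Optional: apply default optimizations (e.g., dynamic-range quantization).
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()

with open("mobilevit_xxs_1k_256.tflite", "wb") as f:
    f.write(tflite_model)
```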

## References

[1] MobileViT paper: [https://arxiv.org/abs/2110.02178](https://arxiv.org/abs/2110.02178)

[2] Official MobileViT weights: [https://github.com/apple/ml-cvnets](https://github.com/apple/ml-cvnets)

[3] Hugging Face MobileViT: [MobileViT-HF](https://huggingface.co/docs/transformers/v4.22.2/en/model_doc/mobilevit#mobilevit)

## Acknowledgements

* [Luiz Gustavo Martins](https://twitter.com/gusthema)
* [Sayak Paul](https://github.com/RisingSayak)
* [GSoC program](https://summerofcode.withgoogle.com)

## 🔗 Links

[![portfolio](https://img.shields.io/badge/my_portfolio-000?style=for-the-badge&logo=ko-fi&logoColor=white)](https://sayannath.biz/)
[![linkedin](https://img.shields.io/badge/linkedin-0A66C2?style=for-the-badge&logo=linkedin&logoColor=white)](https://www.linkedin.com/in/sayannath235/)
[![twitter](https://img.shields.io/badge/twitter-1DA1F2?style=for-the-badge&logo=twitter&logoColor=white)](https://twitter.com/sayannath2350)