---
datasets:
- imagenet-1k
library_name: transformers
pipeline_tag: image-classification
---

# MobileViTv2 (mobilevitv2-1.0-imagenet1k-256)

<!-- Provide a quick summary of what the model is/does. -->
MobileViTv2 is the second version of MobileViT. It was proposed in [Separable Self-attention for Mobile Vision Transformers](https://arxiv.org/abs/2206.02680) by Sachin Mehta and Mohammad Rastegari, and first released in [this repository](https://github.com/apple/ml-cvnets). The model is released under the [Apple sample code license](https://github.com/apple/ml-cvnets/blob/main/LICENSE).

Disclaimer: The team releasing MobileViT did not write a model card for this model, so this model card has been written by the Hugging Face team.

### Model Description

<!-- Provide a longer summary of what this model is. -->
MobileViTv2 is constructed by replacing the multi-headed self-attention in MobileViT with separable self-attention, which scores each token against a single latent token and thereby reduces attention cost from quadratic to linear in the number of tokens.
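
To make the mechanism concrete, here is a minimal sketch of the separable self-attention idea from the paper. The module and layer names are illustrative assumptions for exposition, not the actual ml-cvnets or transformers implementation:

```python
import torch
import torch.nn as nn


class SeparableSelfAttention(nn.Module):
    """Sketch of separable self-attention (Mehta & Rastegari, 2022).

    Instead of a k x k token-to-token attention matrix, every token is
    scored against one learned latent token, so the cost is O(k) in the
    number of tokens k rather than O(k^2).
    """

    def __init__(self, dim: int):
        super().__init__()
        self.to_scores = nn.Linear(dim, 1)   # branch I: one score per token
        self.to_key = nn.Linear(dim, dim)    # branch K
        self.to_value = nn.Linear(dim, dim)  # branch V
        self.out = nn.Linear(dim, dim)       # output projection O

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, tokens, dim)
        scores = torch.softmax(self.to_scores(x), dim=1)              # (B, k, 1)
        # Weighted sum of keys -> a single global context vector
        context = (scores * self.to_key(x)).sum(dim=1, keepdim=True)  # (B, 1, d)
        # Broadcast the context back over all tokens
        gated = torch.relu(self.to_value(x)) * context                # (B, k, d)
        return self.out(gated)
```

The context vector plays the role of the attention matrix: global information is collected once and then broadcast to every token, which is what makes the operation cheap on mobile hardware.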

### Intended uses & limitations

You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=mobilevitv2) to look for fine-tuned versions on a task that interests you.

### How to use

Here is how to use this model to classify an image from the COCO 2017 dataset into one of the 1,000 ImageNet classes:

```python
from transformers import MobileViTImageProcessor, MobileViTV2ForImageClassification
from PIL import Image
import requests

# Load a sample image from the COCO 2017 validation set
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

image_processor = MobileViTImageProcessor.from_pretrained("shehan97/mobilevitv2-1.0-imagenet1k-256")
model = MobileViTV2ForImageClassification.from_pretrained("shehan97/mobilevitv2-1.0-imagenet1k-256")

inputs = image_processor(images=image, return_tensors="pt")

outputs = model(**inputs)
logits = outputs.logits

# The model predicts one of the 1,000 ImageNet classes
predicted_class_idx = logits.argmax(-1).item()
print("Predicted class:", model.config.id2label[predicted_class_idx])
```

Currently, both the image processor and model support PyTorch.
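
Beyond the single top prediction, the logits can be turned into class probabilities and a top-5 list. The snippet below is a standalone sketch of that post-processing step, using random logits and a placeholder label map instead of the real checkpoint outputs:

```python
import torch

# Toy stand-ins for outputs.logits and model.config.id2label
torch.manual_seed(0)
logits = torch.randn(1, 1000)                      # (batch, num ImageNet classes)
id2label = {i: f"class_{i}" for i in range(1000)}  # placeholder label map

probs = torch.softmax(logits, dim=-1)              # logits -> probabilities
top5 = torch.topk(probs, k=5, dim=-1)              # five highest-scoring classes
for score, idx in zip(top5.values[0], top5.indices[0]):
    print(f"{id2label[idx.item()]}: {score.item():.3f}")
```

With the real model, replace `logits` and `id2label` with `outputs.logits` and `model.config.id2label` from the example above.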

## Training data

The MobileViTv2 model was pretrained on [ImageNet-1k](https://huggingface.co/datasets/imagenet-1k), a dataset consisting of roughly 1.3 million training images spanning 1,000 classes.
### BibTeX entry and citation info

```bibtex
@article{mehta2022separable,
  title  = {Separable Self-attention for Mobile Vision Transformers},
  author = {Sachin Mehta and Mohammad Rastegari},
  year   = {2022},
  url    = {https://arxiv.org/abs/2206.02680}
}
```