---
license: other
library_name: transformers
tags:
- vision
- image-segmentation
---

# MobileViTv2 + DeepLabv3 (shehan97/mobilevitv2-1.0-voc-deeplabv3)

<!-- Provide a quick summary of what the model is/does. -->
MobileViTv2 model pre-trained on PASCAL VOC at resolution 512x512. It was introduced in [Separable Self-attention for Mobile Vision Transformers](https://arxiv.org/abs/2206.02680) by Sachin Mehta and Mohammad Rastegari, and first released in [this repository](https://github.com/apple/ml-cvnets). The license used is the [Apple sample code license](https://github.com/apple/ml-cvnets/blob/main/LICENSE).

Disclaimer: The team releasing MobileViTv2 did not write a model card for this model, so this model card has been written by the Hugging Face team.

### Model Description

<!-- Provide a longer summary of what this model is. -->
MobileViTv2 is constructed by replacing the multi-headed self-attention in MobileViT with separable self-attention.

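Roughly, separable self-attention replaces the quadratic token-to-token attention matrix of standard multi-headed self-attention with a single set of learned context scores, so the cost grows linearly with the number of tokens. The snippet below is a minimal, hypothetical PyTorch sketch of that idea, not the layer implementation shipped in `transformers`; all class names and dimensions are illustrative.

```python
import torch
import torch.nn as nn


class SeparableSelfAttentionSketch(nn.Module):
    """Illustrative sketch of separable self-attention (linear-cost attention)."""

    def __init__(self, embed_dim: int):
        super().__init__()
        # A single scalar "context score" per token replaces the full query-key matrix.
        self.to_scores = nn.Linear(embed_dim, 1)
        self.to_key = nn.Linear(embed_dim, embed_dim)
        self.to_value = nn.Linear(embed_dim, embed_dim)
        self.to_out = nn.Linear(embed_dim, embed_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, num_tokens, embed_dim)
        scores = torch.softmax(self.to_scores(x), dim=1)              # (B, N, 1)
        context = (scores * self.to_key(x)).sum(dim=1, keepdim=True)  # (B, 1, D) global context vector
        # Broadcast the global context back to every token.
        return self.to_out(torch.relu(self.to_value(x)) * context)    # (B, N, D)


tokens = torch.randn(2, 64, 128)
print(SeparableSelfAttentionSketch(128)(tokens).shape)  # torch.Size([2, 64, 128])
```
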
The model in this repo adds a [DeepLabV3](https://arxiv.org/abs/1706.05587) head to the MobileViTv2 backbone for semantic segmentation.

### Intended uses & limitations

You can use the raw model for semantic segmentation. See the [model hub](https://huggingface.co/models?search=mobilevitv2) to look for fine-tuned versions on a task that interests you.

### How to use

Here is how to use this model:

```python
from transformers import MobileViTImageProcessor, MobileViTV2ForSemanticSegmentation
from PIL import Image
import requests

# Load an example image
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

image_processor = MobileViTImageProcessor.from_pretrained("shehan97/mobilevitv2-1.0-voc-deeplabv3")
model = MobileViTV2ForSemanticSegmentation.from_pretrained("shehan97/mobilevitv2-1.0-voc-deeplabv3")

# Preprocess the image and run the model
inputs = image_processor(images=image, return_tensors="pt")
outputs = model(**inputs)

# Logits have shape (batch_size, num_labels, height, width)
logits = outputs.logits

# Per-pixel class prediction at the logits resolution
predicted_mask = logits.argmax(1).squeeze(0)
```

Currently, both the image processor and model support PyTorch.

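Note that the logits come out at a lower spatial resolution than the input image, so `predicted_mask` above is a low-resolution map. A common follow-up step, sketched below under the assumption that `outputs` and `image` from the example above are still in scope, is to upsample the logits to the original image size before taking the argmax:

```python
import torch

# `outputs` and `image` come from the usage example above.
upsampled_logits = torch.nn.functional.interpolate(
    outputs.logits,          # (batch, num_labels, height, width) at reduced resolution
    size=image.size[::-1],   # PIL size is (width, height); interpolate expects (height, width)
    mode="bilinear",
    align_corners=False,
)
segmentation_map = upsampled_logits.argmax(dim=1).squeeze(0)  # (height, width) of class indices
print(segmentation_map.shape)
```

If your version of `transformers` exposes `post_process_semantic_segmentation` on the image processor, that helper performs the same resizing and argmax step.
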
## Training data

The MobileViTv2 + DeepLabV3 model was pretrained on [ImageNet-1k](https://huggingface.co/datasets/imagenet-1k), a dataset consisting of 1 million images and 1,000 classes, and then fine-tuned on the [PASCAL VOC2012](http://host.robots.ox.ac.uk/pascal/VOC/) dataset.

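Fine-tuning on PASCAL VOC means the segmentation head predicts 21 classes (the 20 VOC object categories plus background). As a quick sanity check, sketched here assuming the `model` from the usage example above is loaded, the label set can be read off the checkpoint config:

```python
# `model` comes from the usage example above.
print(model.config.num_labels)  # expected: 21 (20 VOC categories plus background)
print(model.config.id2label)    # label names as stored in the checkpoint config
```
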
### BibTeX entry and citation info

```bibtex
@inproceedings{vision-transformer,
    title = {Separable Self-attention for Mobile Vision Transformers},
    author = {Sachin Mehta and Mohammad Rastegari},
    year = {2022},
    URL = {https://arxiv.org/abs/2206.02680}
}
```