Matthijs Hollemans committed on
Commit
f62ab0e
1 Parent(s): 5e4d780

clone from https://huggingface.co/shehan97/mobilevitv2-1.0-voc-deeplabv3

Files changed (4)
  1. README.md +63 -0
  2. config.json +76 -0
  3. preprocessor_config.json +16 -0
  4. pytorch_model.bin +3 -0
README.md CHANGED
@@ -1,3 +1,66 @@
  ---
  license: other
+ library_name: transformers
+ tags:
+ - vision
+ - image-segmentation
  ---
+
+ # MobileViTv2 + DeepLabv3 (shehan97/mobilevitv2-1.0-voc-deeplabv3)
+
+ <!-- Provide a quick summary of what the model is/does. -->
+ MobileViTv2 model pre-trained on PASCAL VOC at resolution 512x512.
+ It was introduced in [Separable Self-attention for Mobile Vision Transformers](https://arxiv.org/abs/2206.02680) by Sachin Mehta and Mohammad Rastegari, and first released in [this](https://github.com/apple/ml-cvnets) repository. The license used is [Apple sample code license](https://github.com/apple/ml-cvnets/blob/main/LICENSE).
+
+ Disclaimer: The team releasing MobileViT did not write a model card for this model, so this model card has been written by the Hugging Face team.
+
+ ### Model Description
+
+ <!-- Provide a longer summary of what this model is. -->
+ MobileViTv2 is constructed by replacing the multi-headed self-attention in MobileViT with separable self-attention.
+
+ The model in this repo adds a [DeepLabV3](https://arxiv.org/abs/1706.05587) head to the MobileViTv2 backbone for semantic segmentation.
+
+ ### Intended uses & limitations
+
+ You can use the raw model for semantic segmentation. See the [model hub](https://huggingface.co/models?search=mobilevitv2) to look for fine-tuned versions on a task that interests you.
+
+ ### How to use
+
+ Here is how to use this model:
+
+ ```python
+ from transformers import MobileViTv2FeatureExtractor, MobileViTv2ForSemanticSegmentation
+ from PIL import Image
+ import requests
+
+ url = "http://images.cocodataset.org/val2017/000000039769.jpg"
+ image = Image.open(requests.get(url, stream=True).raw)
+
+ feature_extractor = MobileViTv2FeatureExtractor.from_pretrained("shehan97/mobilevitv2-1.0-voc-deeplabv3")
+ model = MobileViTv2ForSemanticSegmentation.from_pretrained("shehan97/mobilevitv2-1.0-voc-deeplabv3")
+
+ inputs = feature_extractor(images=image, return_tensors="pt")
+
+ outputs = model(**inputs)
+ logits = outputs.logits
+
+ predicted_mask = logits.argmax(1).squeeze(0)
+ ```
+
+ Currently, both the feature extractor and model support PyTorch.
+
+ ## Training data
+
+ The MobileViTv2 + DeepLabV3 model was pretrained on [ImageNet-1k](https://huggingface.co/datasets/imagenet-1k), a dataset consisting of 1 million images and 1,000 classes, and then fine-tuned on the [PASCAL VOC2012](http://host.robots.ox.ac.uk/pascal/VOC/) dataset.
+
+ ### BibTeX entry and citation info
+
+ ```bibtex
+ @inproceedings{vision-transformer,
+     title = {Separable Self-attention for Mobile Vision Transformers},
+     author = {Sachin Mehta and Mohammad Rastegari},
+     year = {2022},
+     URL = {https://arxiv.org/abs/2206.02680}
+ }
+ ```
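The README's usage snippet above stops at raw logits, which have a lower spatial resolution than the input image. Here is a minimal, hedged sketch that upsamples the logits to the original image size before taking the per-pixel argmax; it reuses the checkpoint and the class names from the README, so treat those names as assumptions and adjust them if your installed transformers version exposes the MobileViTv2 classes under different names.

```python
import torch
import requests
from PIL import Image
from transformers import MobileViTv2FeatureExtractor, MobileViTv2ForSemanticSegmentation

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

# Class names as used in the README above; treat them as an assumption if your
# transformers version names the MobileViTv2 classes differently.
feature_extractor = MobileViTv2FeatureExtractor.from_pretrained("shehan97/mobilevitv2-1.0-voc-deeplabv3")
model = MobileViTv2ForSemanticSegmentation.from_pretrained("shehan97/mobilevitv2-1.0-voc-deeplabv3")

inputs = feature_extractor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # (batch, num_labels, h, w), smaller than the 512x512 input

# Upsample to the original image size, then take the per-pixel argmax.
upsampled = torch.nn.functional.interpolate(
    logits, size=image.size[::-1], mode="bilinear", align_corners=False
)
predicted_mask = upsampled.argmax(dim=1).squeeze(0)  # (height, width) tensor of VOC class indices
```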
config.json ADDED
@@ -0,0 +1,76 @@
+ {
+   "architectures": [
+     "MobileViTv2ForSemanticSegmentation"
+   ],
+   "aspp_dropout_prob": 0.1,
+   "aspp_out_channels": 512,
+   "atrous_rates": [
+     6,
+     12,
+     18
+   ],
+   "attn_dropout": 0.0,
+   "classifier_dropout_prob": 0.1,
+   "conv_kernel_size": 3,
+   "expand_ratio": 2.0,
+   "ffn_dropout": 0.0,
+   "hidden_act": "swish",
+   "id2label": {
+     "0": "background",
+     "1": "aeroplane",
+     "2": "bicycle",
+     "3": "bird",
+     "4": "boat",
+     "5": "bottle",
+     "6": "bus",
+     "7": "car",
+     "8": "cat",
+     "9": "chair",
+     "10": "cow",
+     "11": "diningtable",
+     "12": "dog",
+     "13": "horse",
+     "14": "motorbike",
+     "15": "person",
+     "16": "pottedplant",
+     "17": "sheep",
+     "18": "sofa",
+     "19": "train",
+     "20": "tvmonitor"
+   },
+   "image_size": 512,
+   "initializer_range": 0.02,
+   "label2id": {
+     "aeroplane": 1,
+     "background": 0,
+     "bicycle": 2,
+     "bird": 3,
+     "boat": 4,
+     "bottle": 5,
+     "bus": 6,
+     "car": 7,
+     "cat": 8,
+     "chair": 9,
+     "cow": 10,
+     "diningtable": 11,
+     "dog": 12,
+     "horse": 13,
+     "motorbike": 14,
+     "person": 15,
+     "pottedplant": 16,
+     "sheep": 17,
+     "sofa": 18,
+     "train": 19,
+     "tvmonitor": 20
+   },
+   "layer_norm_eps": 1e-05,
+   "mlp_ratio": 2.0,
+   "model_type": "mobilevitv2",
+   "num_channels": 3,
+   "output_stride": 16,
+   "patch_size": 2,
+   "semantic_loss_ignore_index": 255,
+   "torch_dtype": "float32",
+   "transformers_version": "4.29.0.dev0",
+   "width_multiplier": 1.0
+ }
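The `id2label` map in config.json assigns the 21 output channels to the PASCAL VOC classes (background plus 20 object categories). As a small, hedged sketch, the mask indices can be translated into class names as shown below; it assumes the `predicted_mask` tensor from the post-processing example above and that your transformers install can load this config through `AutoConfig`.

```python
from transformers import AutoConfig

config = AutoConfig.from_pretrained("shehan97/mobilevitv2-1.0-voc-deeplabv3")

# `predicted_mask` is assumed to be the (height, width) tensor from the earlier sketch.
present_ids = predicted_mask.unique().tolist()
present_labels = [config.id2label[i] for i in present_ids]
print(present_labels)  # e.g. something like ["background", "cat"] for the sample image
```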
preprocessor_config.json ADDED
@@ -0,0 +1,16 @@
+ {
+   "crop_size": {
+     "height": 512,
+     "width": 512
+   },
+   "do_center_crop": true,
+   "do_flip_channel_order": true,
+   "do_rescale": true,
+   "do_resize": true,
+   "image_processor_type": "MobileViTv2ImageProcessor",
+   "resample": 2,
+   "rescale_factor": 0.00392156862745098,
+   "size": {
+     "shortest_edge": 544
+   }
+ }
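preprocessor_config.json describes the expected image pipeline: resize so the shortest edge is 544 pixels (resample=2 is bilinear), center crop to 512x512, rescale pixel values by 1/255, and flip the channel order from RGB to BGR, with no mean/std normalization. Below is a rough, illustrative re-implementation of those steps in plain PIL/NumPy/PyTorch, a sketch of what the settings mean rather than the library's exact code path.

```python
import numpy as np
import torch
from PIL import Image

def preprocess(image: Image.Image) -> torch.Tensor:
    # Resize so the shortest edge becomes 544 pixels (bilinear, matching resample=2).
    w, h = image.size
    scale = 544 / min(w, h)
    image = image.resize((round(w * scale), round(h * scale)), resample=Image.BILINEAR)

    # Center crop to 512x512 (crop_size).
    w, h = image.size
    left, top = (w - 512) // 2, (h - 512) // 2
    image = image.crop((left, top, left + 512, top + 512))

    # Rescale to [0, 1] (rescale_factor = 1/255) and flip RGB -> BGR (do_flip_channel_order).
    pixels = np.asarray(image.convert("RGB"), dtype=np.float32) * 0.00392156862745098
    pixels = pixels[:, :, ::-1].copy()

    # Return a (1, 3, 512, 512) channels-first tensor, as the model expects.
    return torch.from_numpy(pixels).permute(2, 0, 1).unsqueeze(0)
```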
pytorch_model.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:3de4592cb143dd4eb10e4c031e3a7c6db4e626abcb41599899ab8c98a68305d3
+ size 53468241
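The weights file is stored as a Git LFS pointer: the three lines above record the spec version, the SHA-256 digest of the actual binary, and its size in bytes. A small sketch for checking a download against the pointer, assuming the real file has already been fetched to a local `pytorch_model.bin`:

```python
import hashlib
from pathlib import Path

path = Path("pytorch_model.bin")  # assumed local path of the fetched weights

# Compare the file's SHA-256 digest and byte size with the LFS pointer fields.
digest = hashlib.sha256(path.read_bytes()).hexdigest()
print(digest == "3de4592cb143dd4eb10e4c031e3a7c6db4e626abcb41599899ab8c98a68305d3")
print(path.stat().st_size == 53468241)
```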