---
license: other
tags:
- vision
- image-classification
datasets:
- imagenet-1k
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
  example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
  example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
  example_title: Palace
---

# MobileNet V2

MobileNet V2 model pre-trained on ImageNet-1k at resolution 224x224. It was introduced in [MobileNetV2: Inverted Residuals and Linear Bottlenecks](https://arxiv.org/abs/1801.04381) by Mark Sandler, Andrew Howard, Menglong Zhu, Andrey Zhmoginov, and Liang-Chieh Chen, and first released in [this repository](https://github.com/tensorflow/models/tree/master/research/slim/nets/mobilenet).

Disclaimer: The team releasing MobileNet V2 did not write a model card for this model, so this model card has been written by the Hugging Face team.

## Model description

From the [original README](https://github.com/tensorflow/models/blob/master/research/slim/nets/mobilenet_v1.md):

> MobileNets are small, low-latency, low-power models parameterized to meet the resource constraints of a variety of use cases. They can be built upon for classification, detection, embeddings and segmentation similar to how other popular large scale models, such as Inception, are used. MobileNets can be run efficiently on mobile devices [...] MobileNets trade off between latency, size and accuracy while comparing favorably with popular models from the literature.

The checkpoints are named **mobilenet\_v2\_*depth*\_*size***, for example **mobilenet\_v2\_1.0\_224**, where **1.0** is the depth multiplier and **224** is the resolution of the input images the model was trained on.
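
The depth multiplier scales the number of channels in every layer of the network. In the reference TensorFlow implementation, scaled channel counts are additionally rounded to a hardware-friendly multiple of 8; the sketch below mirrors the commonly used `_make_divisible` utility and is an illustration of the rounding rule, not part of `transformers`:

```python
def make_divisible(value, divisor=8, min_value=None):
    """Round a scaled channel count to a multiple of `divisor`,
    without dropping below 90% of the original value."""
    if min_value is None:
        min_value = divisor
    new_value = max(min_value, int(value + divisor / 2) // divisor * divisor)
    if new_value < 0.9 * value:
        new_value += divisor
    return new_value

# the first conv layer of MobileNet V2 has 32 base channels
for multiplier in (0.35, 1.0, 1.4):
    print(multiplier, make_divisible(32 * multiplier))
# 0.35 -> 16, 1.0 -> 32, 1.4 -> 48
```

This rounding is why, for example, the 1.4 variant starts with 48 channels rather than the naively scaled 44.8.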

## Intended uses & limitations

You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=mobilenet_v2) to look for fine-tuned versions on a task that interests you.

### How to use

Here is how to use this model to classify an image from the COCO 2017 dataset into one of the 1,000 ImageNet classes:
39
+ ```python
40
+ from transformers import MobileNetV2FeatureExtractor, MobileNetV2ForImageClassification
41
+ from PIL import Image
42
+ import requests
43
+
44
+ url = "http://images.cocodataset.org/val2017/000000039769.jpg"
45
+ image = Image.open(requests.get(url, stream=True).raw)
46
+
47
+ feature_extractor = MobileNetV2FeatureExtractor.from_pretrained("Matthijs/mobilenet_v2_1.0_224")
48
+ model = MobileNetV2ForImageClassification.from_pretrained("Matthijs/mobilenet_v2_1.0_224")
49
+
50
+ inputs = feature_extractor(images=image, return_tensors="pt")
51
+
52
+ outputs = model(**inputs)
53
+ logits = outputs.logits
54
+
55
+ # model predicts one of the 1000 ImageNet classes
56
+ predicted_class_idx = logits.argmax(-1).item()
57
+ print("Predicted class:", model.config.id2label[predicted_class_idx])
58
+ ```
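
To inspect confidence scores rather than only the top class, the raw logits can be turned into probabilities with a softmax (in practice `torch.nn.functional.softmax(logits, dim=-1)` does this). A dependency-free sketch of the same computation, with a toy logit vector standing in for the model output:

```python
import math

def softmax(logits):
    # subtract the max before exponentiating for numerical stability
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# toy 4-class logit vector; the real MobileNet V2 output has 1,001 entries
probs = softmax([0.5, 2.0, -1.0, 0.1])
print(max(probs))  # probability assigned to the argmax class
```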
59
+
60
+ Note: This model actually predicts 1001 classes, the 1000 classes from ImageNet plus an extra “background” class (index 0).
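
If you need standard 0-based ImageNet class indices (for example, to compare against another label map), the background offset has to be removed. A minimal sketch, assuming index 0 is the background class as described in the note; the helper name is hypothetical:

```python
def to_imagenet_index(predicted_idx):
    # index 0 is the extra "background" class; ImageNet classes start at 1
    if predicted_idx == 0:
        return None  # background has no ImageNet equivalent
    return predicted_idx - 1

# e.g. index 282 in the 1,001-class output maps to ImageNet class 281
print(to_imagenet_index(282))
```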
61
+
62
+ Currently, both the feature extractor and model support PyTorch.
63
+
64
+ ### BibTeX entry and citation info
65
+
66
+ ```bibtex
67
+ @inproceedings{mobilenetv22018,
68
+ title={MobileNetV2: Inverted Residuals and Linear Bottlenecks},
69
+ author={Mark Sandler and Andrew Howard and Menglong Zhu and Andrey Zhmoginov and Liang-Chieh Chen},
70
+ booktitle={CVPR},
71
+ year={2018}
72
+ }
73
+ ```