qaihm-bot committed
Commit cf5a84d (1 parent: 40f96d9)

Upload README.md with huggingface_hub

Files changed (1): README.md (+40, -19)
README.md CHANGED
@@ -19,7 +19,7 @@ tags:
 
MobileNetV3Small is a machine learning model that can classify images from the Imagenet dataset. It can also be used as a backbone in building more complex models for specific use cases.
 
- This model is an implementation of MobileNet-v3-Small found [here](https://github.com/pytorch/vision/blob/main/torchvision/models/mobilenetv3.py).
+ This model is an implementation of MobileNet-v3-Small found [here]({source_repo}).
This repository provides scripts to run MobileNet-v3-Small on Qualcomm® devices.
More details on model performance across various devices, can be found
[here](https://aihub.qualcomm.com/models/mobilenet_v3_small).
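
The torchvision implementation referenced above can be sanity-checked locally before any export. A minimal sketch, assuming torchvision ≥ 0.13 for the weights API; the random tensor stands in for a real photo and is not part of this repo's scripts:

```python
import torch
from torchvision.models import mobilenet_v3_small, MobileNet_V3_Small_Weights

# Pretrained ImageNet weights from torchvision (the source implementation above).
weights = MobileNet_V3_Small_Weights.IMAGENET1K_V1
model = mobilenet_v3_small(weights=weights).eval()
preprocess = weights.transforms()  # resize / center-crop / normalize

# Dummy RGB tensor in place of a real image (3 x H x W, values in [0, 1]).
image = torch.rand(3, 224, 224)
with torch.no_grad():
    logits = model(preprocess(image).unsqueeze(0))  # shape (1, 1000)

top5 = logits.softmax(dim=-1).topk(5)
print([weights.meta["categories"][i] for i in top5.indices[0]])
```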
@@ -34,15 +34,32 @@ More details on model performance across various devices, can be found
- Number of parameters: 2.54M
- Model size: 9.72 MB
 
- | Device | Chipset | Target Runtime | Inference Time (ms) | Peak Memory Range (MB) | Precision | Primary Compute Unit | Target Model
- | ---|---|---|---|---|---|---|---|
- | Samsung Galaxy S23 Ultra (Android 13) | Snapdragon® 8 Gen 2 | TFLite | 0.817 ms | 0 - 141 MB | FP16 | NPU | [MobileNet-v3-Small.tflite](https://huggingface.co/qualcomm/MobileNet-v3-Small/blob/main/MobileNet-v3-Small.tflite)
- | Samsung Galaxy S23 Ultra (Android 13) | Snapdragon® 8 Gen 2 | QNN Model Library | 0.869 ms | 0 - 33 MB | FP16 | NPU | [MobileNet-v3-Small.so](https://huggingface.co/qualcomm/MobileNet-v3-Small/blob/main/MobileNet-v3-Small.so)
+ | Model | Device | Chipset | Target Runtime | Inference Time (ms) | Peak Memory Range (MB) | Precision | Primary Compute Unit | Target Model |
+ |---|---|---|---|---|---|---|---|---|
+ | MobileNet-v3-Small | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 | TFLITE | 0.812 ms | 0 - 1 MB | FP16 | NPU | [MobileNet-v3-Small.tflite](https://huggingface.co/qualcomm/MobileNet-v3-Small/blob/main/MobileNet-v3-Small.tflite) |
+ | MobileNet-v3-Small | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 | QNN | 0.864 ms | 0 - 139 MB | FP16 | NPU | [MobileNet-v3-Small.so](https://huggingface.co/qualcomm/MobileNet-v3-Small/blob/main/MobileNet-v3-Small.so) |
+ | MobileNet-v3-Small | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 | ONNX | 0.813 ms | 0 - 13 MB | FP16 | NPU | [MobileNet-v3-Small.onnx](https://huggingface.co/qualcomm/MobileNet-v3-Small/blob/main/MobileNet-v3-Small.onnx) |
+ | MobileNet-v3-Small | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 | TFLITE | 0.549 ms | 0 - 45 MB | FP16 | NPU | [MobileNet-v3-Small.tflite](https://huggingface.co/qualcomm/MobileNet-v3-Small/blob/main/MobileNet-v3-Small.tflite) |
+ | MobileNet-v3-Small | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 | QNN | 0.59 ms | 1 - 16 MB | FP16 | NPU | [MobileNet-v3-Small.so](https://huggingface.co/qualcomm/MobileNet-v3-Small/blob/main/MobileNet-v3-Small.so) |
+ | MobileNet-v3-Small | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 | ONNX | 0.586 ms | 0 - 47 MB | FP16 | NPU | [MobileNet-v3-Small.onnx](https://huggingface.co/qualcomm/MobileNet-v3-Small/blob/main/MobileNet-v3-Small.onnx) |
+ | MobileNet-v3-Small | QCS8550 (Proxy) | QCS8550 Proxy | TFLITE | 0.814 ms | 0 - 16 MB | FP16 | NPU | [MobileNet-v3-Small.tflite](https://huggingface.co/qualcomm/MobileNet-v3-Small/blob/main/MobileNet-v3-Small.tflite) |
+ | MobileNet-v3-Small | QCS8550 (Proxy) | QCS8550 Proxy | QNN | 0.83 ms | 1 - 2 MB | FP16 | NPU | Use Export Script |
+ | MobileNet-v3-Small | SA8255 (Proxy) | SA8255P Proxy | TFLITE | 0.814 ms | 0 - 17 MB | FP16 | NPU | [MobileNet-v3-Small.tflite](https://huggingface.co/qualcomm/MobileNet-v3-Small/blob/main/MobileNet-v3-Small.tflite) |
+ | MobileNet-v3-Small | SA8255 (Proxy) | SA8255P Proxy | QNN | 0.838 ms | 1 - 2 MB | FP16 | NPU | Use Export Script |
+ | MobileNet-v3-Small | SA8775 (Proxy) | SA8775P Proxy | TFLITE | 0.824 ms | 0 - 2 MB | FP16 | NPU | [MobileNet-v3-Small.tflite](https://huggingface.co/qualcomm/MobileNet-v3-Small/blob/main/MobileNet-v3-Small.tflite) |
+ | MobileNet-v3-Small | SA8775 (Proxy) | SA8775P Proxy | QNN | 0.839 ms | 1 - 2 MB | FP16 | NPU | Use Export Script |
+ | MobileNet-v3-Small | SA8650 (Proxy) | SA8650P Proxy | TFLITE | 0.811 ms | 0 - 1 MB | FP16 | NPU | [MobileNet-v3-Small.tflite](https://huggingface.co/qualcomm/MobileNet-v3-Small/blob/main/MobileNet-v3-Small.tflite) |
+ | MobileNet-v3-Small | SA8650 (Proxy) | SA8650P Proxy | QNN | 0.832 ms | 1 - 2 MB | FP16 | NPU | Use Export Script |
+ | MobileNet-v3-Small | QCS8450 (Proxy) | QCS8450 Proxy | TFLITE | 1.098 ms | 0 - 46 MB | FP16 | NPU | [MobileNet-v3-Small.tflite](https://huggingface.co/qualcomm/MobileNet-v3-Small/blob/main/MobileNet-v3-Small.tflite) |
+ | MobileNet-v3-Small | QCS8450 (Proxy) | QCS8450 Proxy | QNN | 1.157 ms | 1 - 17 MB | FP16 | NPU | Use Export Script |
+ | MobileNet-v3-Small | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite | TFLITE | 0.457 ms | 0 - 21 MB | FP16 | NPU | [MobileNet-v3-Small.tflite](https://huggingface.co/qualcomm/MobileNet-v3-Small/blob/main/MobileNet-v3-Small.tflite) |
+ | MobileNet-v3-Small | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite | QNN | 0.476 ms | 0 - 11 MB | FP16 | NPU | Use Export Script |
+ | MobileNet-v3-Small | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite | ONNX | 0.596 ms | 0 - 22 MB | FP16 | NPU | [MobileNet-v3-Small.onnx](https://huggingface.co/qualcomm/MobileNet-v3-Small/blob/main/MobileNet-v3-Small.onnx) |
+ | MobileNet-v3-Small | Snapdragon X Elite CRD | Snapdragon® X Elite | QNN | 1.007 ms | 1 - 1 MB | FP16 | NPU | Use Export Script |
+ | MobileNet-v3-Small | Snapdragon X Elite CRD | Snapdragon® X Elite | ONNX | 0.97 ms | 6 - 6 MB | FP16 | NPU | [MobileNet-v3-Small.onnx](https://huggingface.co/qualcomm/MobileNet-v3-Small/blob/main/MobileNet-v3-Small.onnx) |
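
The compiled assets listed in the table above can be pulled straight from this repo. A minimal sketch, assuming the `huggingface_hub` package is installed; the repo id and filenames are taken from the links in the table:

```python
from huggingface_hub import hf_hub_download

# Download the precompiled TFLite and ONNX assets referenced in the table.
tflite_path = hf_hub_download(
    repo_id="qualcomm/MobileNet-v3-Small",
    filename="MobileNet-v3-Small.tflite",
)
onnx_path = hf_hub_download(
    repo_id="qualcomm/MobileNet-v3-Small",
    filename="MobileNet-v3-Small.onnx",
)
print(tflite_path, onnx_path)
```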
 
## Installation
 
@@ -97,16 +114,16 @@ device. This script does the following:
```bash
python -m qai_hub_models.models.mobilenet_v3_small.export
```
-
```
- Profile Job summary of MobileNet-v3-Small
- --------------------------------------------------
- Device: Snapdragon X Elite CRD (11)
- Estimated Inference Time: 0.97 ms
- Estimated Peak Memory Range: 0.57-0.57 MB
- Compute Units: NPU (126) | Total (126)
-
-
+ Profiling Results
+ ------------------------------------------------------------
+ MobileNet-v3-Small
+ Device                          : Samsung Galaxy S23 (13)
+ Runtime                         : TFLITE
+ Estimated inference time (ms)   : 0.8
+ Estimated peak memory usage (MB): [0, 1]
+ Total # Ops                     : 115
+ Compute Unit(s)                 : NPU (115 ops)
```
 
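
The export script above produces the on-device profile shown in the block. For a quick local check of the `.tflite` asset itself, a rough sketch using the standard TensorFlow Lite interpreter (CPU only; the NPU timings above come from profiling on real hardware, not from this path):

```python
import numpy as np
import tensorflow as tf

# Load the downloaded / exported TFLite asset and run a single inference on CPU.
interpreter = tf.lite.Interpreter(model_path="MobileNet-v3-Small.tflite")
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

# Random input with the model's expected shape and dtype stands in for a preprocessed image.
dummy = np.random.rand(*inp["shape"]).astype(inp["dtype"])
interpreter.set_tensor(inp["index"], dummy)
interpreter.invoke()

logits = interpreter.get_tensor(out["index"])
print(logits.shape)  # e.g. (1, 1000) ImageNet class scores
```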
 
@@ -205,15 +222,19 @@ provides instructions on how to use the `.so` shared library in an Android application.
Get more details on MobileNet-v3-Small's performance across various devices [here](https://aihub.qualcomm.com/models/mobilenet_v3_small).
Explore all available models on [Qualcomm® AI Hub](https://aihub.qualcomm.com/)
 
## License
- - The license for the original implementation of MobileNet-v3-Small can be found
- [here](https://github.com/pytorch/vision/blob/main/LICENSE).
- - The license for the compiled assets for on-device deployment can be found [here](https://qaihub-public-assets.s3.us-west-2.amazonaws.com/qai-hub-models/Qualcomm+AI+Hub+Proprietary+License.pdf)
+ * The license for the original implementation of MobileNet-v3-Small can be found [here](https://github.com/pytorch/vision/blob/main/LICENSE).
+ * The license for the compiled assets for on-device deployment can be found [here](https://qaihub-public-assets.s3.us-west-2.amazonaws.com/qai-hub-models/Qualcomm+AI+Hub+Proprietary+License.pdf)
 
## References
* [Searching for MobileNetV3](https://arxiv.org/abs/1905.02244)
* [Source Model Implementation](https://github.com/pytorch/vision/blob/main/torchvision/models/mobilenetv3.py)
 
## Community
* Join [our AI Hub Slack community](https://aihub.qualcomm.com/community/slack) to collaborate, post questions and learn more about on-device AI.
* For questions or feedback please [reach out to us](mailto:ai-hub-support@qti.qualcomm.com).
 
19
 
20
  MobileNetV3Small is a machine learning model that can classify images from the Imagenet dataset. It can also be used as a backbone in building more complex models for specific use cases.
21
 
22
+ This model is an implementation of MobileNet-v3-Small found [here]({source_repo}).
23
  This repository provides scripts to run MobileNet-v3-Small on Qualcomm® devices.
24
  More details on model performance across various devices, can be found
25
  [here](https://aihub.qualcomm.com/models/mobilenet_v3_small).
 
34
  - Number of parameters: 2.54M
35
  - Model size: 9.72 MB
36
 
37
+ | Model | Device | Chipset | Target Runtime | Inference Time (ms) | Peak Memory Range (MB) | Precision | Primary Compute Unit | Target Model
38
+ |---|---|---|---|---|---|---|---|---|
39
+ | MobileNet-v3-Small | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 | TFLITE | 0.812 ms | 0 - 1 MB | FP16 | NPU | [MobileNet-v3-Small.tflite](https://huggingface.co/qualcomm/MobileNet-v3-Small/blob/main/MobileNet-v3-Small.tflite) |
40
+ | MobileNet-v3-Small | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 | QNN | 0.864 ms | 0 - 139 MB | FP16 | NPU | [MobileNet-v3-Small.so](https://huggingface.co/qualcomm/MobileNet-v3-Small/blob/main/MobileNet-v3-Small.so) |
41
+ | MobileNet-v3-Small | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 | ONNX | 0.813 ms | 0 - 13 MB | FP16 | NPU | [MobileNet-v3-Small.onnx](https://huggingface.co/qualcomm/MobileNet-v3-Small/blob/main/MobileNet-v3-Small.onnx) |
42
+ | MobileNet-v3-Small | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 | TFLITE | 0.549 ms | 0 - 45 MB | FP16 | NPU | [MobileNet-v3-Small.tflite](https://huggingface.co/qualcomm/MobileNet-v3-Small/blob/main/MobileNet-v3-Small.tflite) |
43
+ | MobileNet-v3-Small | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 | QNN | 0.59 ms | 1 - 16 MB | FP16 | NPU | [MobileNet-v3-Small.so](https://huggingface.co/qualcomm/MobileNet-v3-Small/blob/main/MobileNet-v3-Small.so) |
44
+ | MobileNet-v3-Small | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 | ONNX | 0.586 ms | 0 - 47 MB | FP16 | NPU | [MobileNet-v3-Small.onnx](https://huggingface.co/qualcomm/MobileNet-v3-Small/blob/main/MobileNet-v3-Small.onnx) |
45
+ | MobileNet-v3-Small | QCS8550 (Proxy) | QCS8550 Proxy | TFLITE | 0.814 ms | 0 - 16 MB | FP16 | NPU | [MobileNet-v3-Small.tflite](https://huggingface.co/qualcomm/MobileNet-v3-Small/blob/main/MobileNet-v3-Small.tflite) |
46
+ | MobileNet-v3-Small | QCS8550 (Proxy) | QCS8550 Proxy | QNN | 0.83 ms | 1 - 2 MB | FP16 | NPU | Use Export Script |
47
+ | MobileNet-v3-Small | SA8255 (Proxy) | SA8255P Proxy | TFLITE | 0.814 ms | 0 - 17 MB | FP16 | NPU | [MobileNet-v3-Small.tflite](https://huggingface.co/qualcomm/MobileNet-v3-Small/blob/main/MobileNet-v3-Small.tflite) |
48
+ | MobileNet-v3-Small | SA8255 (Proxy) | SA8255P Proxy | QNN | 0.838 ms | 1 - 2 MB | FP16 | NPU | Use Export Script |
49
+ | MobileNet-v3-Small | SA8775 (Proxy) | SA8775P Proxy | TFLITE | 0.824 ms | 0 - 2 MB | FP16 | NPU | [MobileNet-v3-Small.tflite](https://huggingface.co/qualcomm/MobileNet-v3-Small/blob/main/MobileNet-v3-Small.tflite) |
50
+ | MobileNet-v3-Small | SA8775 (Proxy) | SA8775P Proxy | QNN | 0.839 ms | 1 - 2 MB | FP16 | NPU | Use Export Script |
51
+ | MobileNet-v3-Small | SA8650 (Proxy) | SA8650P Proxy | TFLITE | 0.811 ms | 0 - 1 MB | FP16 | NPU | [MobileNet-v3-Small.tflite](https://huggingface.co/qualcomm/MobileNet-v3-Small/blob/main/MobileNet-v3-Small.tflite) |
52
+ | MobileNet-v3-Small | SA8650 (Proxy) | SA8650P Proxy | QNN | 0.832 ms | 1 - 2 MB | FP16 | NPU | Use Export Script |
53
+ | MobileNet-v3-Small | QCS8450 (Proxy) | QCS8450 Proxy | TFLITE | 1.098 ms | 0 - 46 MB | FP16 | NPU | [MobileNet-v3-Small.tflite](https://huggingface.co/qualcomm/MobileNet-v3-Small/blob/main/MobileNet-v3-Small.tflite) |
54
+ | MobileNet-v3-Small | QCS8450 (Proxy) | QCS8450 Proxy | QNN | 1.157 ms | 1 - 17 MB | FP16 | NPU | Use Export Script |
55
+ | MobileNet-v3-Small | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite | TFLITE | 0.457 ms | 0 - 21 MB | FP16 | NPU | [MobileNet-v3-Small.tflite](https://huggingface.co/qualcomm/MobileNet-v3-Small/blob/main/MobileNet-v3-Small.tflite) |
56
+ | MobileNet-v3-Small | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite | QNN | 0.476 ms | 0 - 11 MB | FP16 | NPU | Use Export Script |
57
+ | MobileNet-v3-Small | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite | ONNX | 0.596 ms | 0 - 22 MB | FP16 | NPU | [MobileNet-v3-Small.onnx](https://huggingface.co/qualcomm/MobileNet-v3-Small/blob/main/MobileNet-v3-Small.onnx) |
58
+ | MobileNet-v3-Small | Snapdragon X Elite CRD | Snapdragon® X Elite | QNN | 1.007 ms | 1 - 1 MB | FP16 | NPU | Use Export Script |
59
+ | MobileNet-v3-Small | Snapdragon X Elite CRD | Snapdragon® X Elite | ONNX | 0.97 ms | 6 - 6 MB | FP16 | NPU | [MobileNet-v3-Small.onnx](https://huggingface.co/qualcomm/MobileNet-v3-Small/blob/main/MobileNet-v3-Small.onnx) |
60
 
61
 
62
 
 
 
 
 
 
 
63
 
64
  ## Installation
65
 
 
114
  ```bash
115
  python -m qai_hub_models.models.mobilenet_v3_small.export
116
  ```
 
117
  ```
118
+ Profiling Results
119
+ ------------------------------------------------------------
120
+ MobileNet-v3-Small
121
+ Device : Samsung Galaxy S23 (13)
122
+ Runtime : TFLITE
123
+ Estimated inference time (ms) : 0.8
124
+ Estimated peak memory usage (MB): [0, 1]
125
+ Total # Ops : 115
126
+ Compute Unit(s) : NPU (115 ops)
127
  ```
128
 
129
 
 
222
  Get more details on MobileNet-v3-Small's performance across various devices [here](https://aihub.qualcomm.com/models/mobilenet_v3_small).
223
  Explore all available models on [Qualcomm® AI Hub](https://aihub.qualcomm.com/)
224
 
225
+
226
  ## License
227
+ * The license for the original implementation of MobileNet-v3-Small can be found [here](https://github.com/pytorch/vision/blob/main/LICENSE).
228
+ * The license for the compiled assets for on-device deployment can be found [here](https://qaihub-public-assets.s3.us-west-2.amazonaws.com/qai-hub-models/Qualcomm+AI+Hub+Proprietary+License.pdf)
229
+
230
+
231
 
232
  ## References
233
  * [Searching for MobileNetV3](https://arxiv.org/abs/1905.02244)
234
  * [Source Model Implementation](https://github.com/pytorch/vision/blob/main/torchvision/models/mobilenetv3.py)
235
 
236
+
237
+
238
  ## Community
239
  * Join [our AI Hub Slack community](https://aihub.qualcomm.com/community/slack) to collaborate, post questions and learn more about on-device AI.
240
  * For questions or feedback please [reach out to us](mailto:ai-hub-support@qti.qualcomm.com).