qaihm-bot committed · Commit ca56478 · verified · 1 Parent(s): 5227818

Upload README.md with huggingface_hub

Files changed (1): README.md (+95, -18)

README.md CHANGED
@@ -20,7 +20,7 @@ tags:

MobileNetV2 is a machine learning model that can classify images from the ImageNet dataset. It can also be used as a backbone in building more complex models for specific use cases.

- This model is an implementation of MobileNet-v2-Quantized found [here](https://github.com/quic/aimet-model-zoo/tree/develop/aimet_zoo_torch/mobilenetv2).
+ This model is an implementation of MobileNet-v2-Quantized found [here]({source_repo}).
This repository provides scripts to run MobileNet-v2-Quantized on Qualcomm® devices.
More details on model performance across various devices can be found
[here](https://aihub.qualcomm.com/models/mobilenet_v2_quantized).
@@ -35,26 +35,39 @@ More details on model performance across various devices can be found
- Number of parameters: 3.49M
- Model size: 3.42 MB

- | Device | Chipset | Target Runtime | Inference Time (ms) | Peak Memory Range (MB) | Precision | Primary Compute Unit | Target Model |
- |---|---|---|---|---|---|---|---|
- | Samsung Galaxy S23 Ultra (Android 13) | Snapdragon® 8 Gen 2 | TFLite | 0.279 ms | 0 - 2 MB | INT8 | NPU | [MobileNet-v2-Quantized.tflite](https://huggingface.co/qualcomm/MobileNet-v2-Quantized/blob/main/MobileNet-v2-Quantized.tflite) |
- | Samsung Galaxy S23 Ultra (Android 13) | Snapdragon® 8 Gen 2 | QNN Model Library | 0.657 ms | 0 - 9 MB | INT8 | NPU | [MobileNet-v2-Quantized.so](https://huggingface.co/qualcomm/MobileNet-v2-Quantized/blob/main/MobileNet-v2-Quantized.so) |
+ | Model | Device | Chipset | Target Runtime | Inference Time (ms) | Peak Memory Range (MB) | Precision | Primary Compute Unit | Target Model |
+ |---|---|---|---|---|---|---|---|---|
+ | MobileNet-v2-Quantized | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 | TFLITE | 0.434 ms | 0 - 9 MB | INT8 | NPU | [MobileNet-v2-Quantized.tflite](https://huggingface.co/qualcomm/MobileNet-v2-Quantized/blob/main/MobileNet-v2-Quantized.tflite) |
+ | MobileNet-v2-Quantized | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 | QNN | 0.665 ms | 0 - 10 MB | INT8 | NPU | [MobileNet-v2-Quantized.so](https://huggingface.co/qualcomm/MobileNet-v2-Quantized/blob/main/MobileNet-v2-Quantized.so) |
+ | MobileNet-v2-Quantized | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 | TFLITE | 0.306 ms | 0 - 43 MB | INT8 | NPU | [MobileNet-v2-Quantized.tflite](https://huggingface.co/qualcomm/MobileNet-v2-Quantized/blob/main/MobileNet-v2-Quantized.tflite) |
+ | MobileNet-v2-Quantized | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 | QNN | 0.487 ms | 0 - 16 MB | INT8 | NPU | [MobileNet-v2-Quantized.so](https://huggingface.co/qualcomm/MobileNet-v2-Quantized/blob/main/MobileNet-v2-Quantized.so) |
+ | MobileNet-v2-Quantized | RB3 Gen 2 (Proxy) | QCS6490 Proxy | TFLITE | 1.067 ms | 0 - 27 MB | INT8 | NPU | [MobileNet-v2-Quantized.tflite](https://huggingface.co/qualcomm/MobileNet-v2-Quantized/blob/main/MobileNet-v2-Quantized.tflite) |
+ | MobileNet-v2-Quantized | RB3 Gen 2 (Proxy) | QCS6490 Proxy | QNN | 1.49 ms | 0 - 7 MB | INT8 | NPU | Use Export Script |
+ | MobileNet-v2-Quantized | RB5 (Proxy) | QCS8250 Proxy | TFLITE | 12.534 ms | 0 - 6 MB | INT8 | NPU | [MobileNet-v2-Quantized.tflite](https://huggingface.co/qualcomm/MobileNet-v2-Quantized/blob/main/MobileNet-v2-Quantized.tflite) |
+ | MobileNet-v2-Quantized | QCS8550 (Proxy) | QCS8550 Proxy | TFLITE | 0.433 ms | 0 - 5 MB | INT8 | NPU | [MobileNet-v2-Quantized.tflite](https://huggingface.co/qualcomm/MobileNet-v2-Quantized/blob/main/MobileNet-v2-Quantized.tflite) |
+ | MobileNet-v2-Quantized | QCS8550 (Proxy) | QCS8550 Proxy | QNN | 0.609 ms | 0 - 1 MB | INT8 | NPU | Use Export Script |
+ | MobileNet-v2-Quantized | SA8255 (Proxy) | SA8255P Proxy | TFLITE | 0.437 ms | 0 - 1 MB | INT8 | NPU | [MobileNet-v2-Quantized.tflite](https://huggingface.co/qualcomm/MobileNet-v2-Quantized/blob/main/MobileNet-v2-Quantized.tflite) |
+ | MobileNet-v2-Quantized | SA8255 (Proxy) | SA8255P Proxy | QNN | 0.617 ms | 0 - 1 MB | INT8 | NPU | Use Export Script |
+ | MobileNet-v2-Quantized | SA8775 (Proxy) | SA8775P Proxy | TFLITE | 0.433 ms | 0 - 1 MB | INT8 | NPU | [MobileNet-v2-Quantized.tflite](https://huggingface.co/qualcomm/MobileNet-v2-Quantized/blob/main/MobileNet-v2-Quantized.tflite) |
+ | MobileNet-v2-Quantized | SA8775 (Proxy) | SA8775P Proxy | QNN | 0.625 ms | 0 - 1 MB | INT8 | NPU | Use Export Script |
+ | MobileNet-v2-Quantized | QCS8450 (Proxy) | QCS8450 Proxy | TFLITE | 0.486 ms | 0 - 43 MB | INT8 | NPU | [MobileNet-v2-Quantized.tflite](https://huggingface.co/qualcomm/MobileNet-v2-Quantized/blob/main/MobileNet-v2-Quantized.tflite) |
+ | MobileNet-v2-Quantized | QCS8450 (Proxy) | QCS8450 Proxy | QNN | 0.725 ms | 0 - 19 MB | INT8 | NPU | Use Export Script |
+ | MobileNet-v2-Quantized | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite | TFLITE | 0.297 ms | 0 - 27 MB | INT8 | NPU | [MobileNet-v2-Quantized.tflite](https://huggingface.co/qualcomm/MobileNet-v2-Quantized/blob/main/MobileNet-v2-Quantized.tflite) |
+ | MobileNet-v2-Quantized | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite | QNN | 0.406 ms | 0 - 14 MB | INT8 | NPU | Use Export Script |
+ | MobileNet-v2-Quantized | Snapdragon X Elite CRD | Snapdragon® X Elite | QNN | 0.757 ms | 1 - 1 MB | INT8 | NPU | Use Export Script |

## Installation

This model can be installed as a Python package via pip.

```bash
- pip install "qai-hub-models[mobilenet_v2_quantized]"
+ pip install qai-hub-models
```
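
After installation, a quick sanity check is to load the pretrained weights in Python. The following is a minimal editorial sketch (not part of this commit), assuming the `Model.from_pretrained()` entry point that the "How does this work?" section below uses:

```python
# Minimal post-install smoke test (editorial sketch).
# `Model.from_pretrained()` is the loader used later in this README.
from qai_hub_models.models.mobilenet_v2_quantized import Model

torch_model = Model.from_pretrained()
print(type(torch_model).__name__)
```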

## Configure Qualcomm® AI Hub to run this model on a cloud-hosted device

Sign-in to [Qualcomm® AI Hub](https://app.aihub.qualcomm.com/) with your

@@ -99,18 +112,78 @@ device. This script does the following:

```bash
python -m qai_hub_models.models.mobilenet_v2_quantized.export
```
- ```
- Profile Job summary of MobileNet-v2-Quantized
- --------------------------------------------------
- Device: Snapdragon X Elite CRD (11)
- Estimated Inference Time: 0.72 ms
- Estimated Peak Memory Range: 0.55-0.55 MB
- Compute Units: NPU (71) | Total (71)
- ```
+ ```
+ Profiling Results
+ ------------------------------------------------------------
+ MobileNet-v2-Quantized
+ Device                          : Samsung Galaxy S23 (13)
+ Runtime                         : TFLITE
+ Estimated inference time (ms)   : 0.4
+ Estimated peak memory usage (MB): [0, 9]
+ Total # Ops                     : 109
+ Compute Unit(s)                 : NPU (109 ops)
+ ```
+
+ ## How does this work?
+
+ This [export script](https://aihub.qualcomm.com/models/mobilenet_v2_quantized/qai_hub_models/models/MobileNet-v2-Quantized/export.py)
+ leverages [Qualcomm® AI Hub](https://aihub.qualcomm.com/) to optimize, validate, and deploy this model
+ on-device. Let's go through each step below in detail:
+
+ Step 1: **Compile model for on-device deployment**
+
+ To compile a PyTorch model for on-device deployment, we first trace the model
+ in memory using `jit.trace` and then call the `submit_compile_job` API.
+
+ ```python
+ import torch
+
+ import qai_hub as hub
+ from qai_hub_models.models.mobilenet_v2_quantized import Model
+
+ # Load the model
+ torch_model = Model.from_pretrained()
+
+ # Device
+ device = hub.Device("Samsung Galaxy S23")
+
+ # Trace the model with its sample inputs, then compile it for the device
+ sample_inputs = torch_model.sample_inputs()
+ pt_model = torch.jit.trace(
+     torch_model, [torch.tensor(data[0]) for _, data in sample_inputs.items()]
+ )
+ compile_job = hub.submit_compile_job(
+     model=pt_model,
+     device=device,
+     input_specs=torch_model.get_input_spec(),
+ )
+
+ # Compiled asset used by the profiling and inference steps below
+ target_model = compile_job.get_target_model()
+ ```
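
The device table above lists both TFLITE and QNN assets; which asset a compile job produces is selected via compile options. A hedged editorial sketch (the `--target_runtime` option string is an assumption drawn from AI Hub's compile options, not something this README states):

```python
# Editorial sketch: explicitly request a TFLite asset when compiling.
# The options string is an assumption about AI Hub's compile options.
tflite_compile_job = hub.submit_compile_job(
    model=pt_model,
    device=device,
    input_specs=torch_model.get_input_spec(),
    options="--target_runtime tflite",
)
```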
+
+ Step 2: **Performance profiling on cloud-hosted device**
+
+ After compiling the model in step 1, it can be profiled on-device using the
+ `target_model`. Note that this script runs the model on a device automatically
+ provisioned in the cloud. Once the job is submitted, you can navigate to the
+ provided job URL to view a variety of on-device performance metrics.
+ ```python
+ profile_job = hub.submit_profile_job(
+     model=target_model,
+     device=device,
+ )
+ ```
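
The job URL is the primary way to inspect results, but the metrics can also be fetched programmatically. A minimal editorial sketch, assuming the `qai_hub` job API's `wait()` and `download_profile()` methods:

```python
# Editorial sketch: wait for the profile job to finish, then pull the
# raw profile data (a Python dict) for offline inspection.
profile_job.wait()
profile = profile_job.download_profile()  # assumed qai_hub API
print(profile.keys())
```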
+ Step 3: **Verify on-device accuracy**
+
+ To verify the accuracy of the model on-device, you can run on-device inference
+ on sample input data on the same cloud-hosted device.
+ ```python
+ input_data = torch_model.sample_inputs()
+ inference_job = hub.submit_inference_job(
+     model=target_model,
+     device=device,
+     inputs=input_data,
+ )
+ on_device_output = inference_job.download_output_data()
+ ```
+ With the output of the model, you can compute metrics like PSNR or relative
+ error, or spot-check the output against the expected output.
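
For concreteness, here is one way such a spot check might look. This is an editorial sketch, not part of the commit; it assumes the on-device output dict holds a single classification tensor comparable to a local PyTorch forward pass of the same sample inputs:

```python
import numpy as np
import torch

# Editorial sketch: PSNR between on-device output and a local forward pass.
reference = torch_model(
    *[torch.tensor(data[0]) for _, data in input_data.items()]
).detach().numpy()
on_device = np.asarray(next(iter(on_device_output.values()))[0])

mse = np.mean((reference - on_device) ** 2)
psnr = 10 * np.log10(np.max(np.abs(reference)) ** 2 / mse)  # in dB
print(f"PSNR: {psnr:.1f} dB")
```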
+
+ **Note**: This on-device profiling and inference requires access to Qualcomm®
+ AI Hub. [Sign up for access](https://myaccount.qualcomm.com/signup).
 
@@ -147,15 +220,19 @@ provides instructions on how to use the `.so` shared library in an Android appl

Get more details on MobileNet-v2-Quantized's performance across various devices [here](https://aihub.qualcomm.com/models/mobilenet_v2_quantized).
Explore all available models on [Qualcomm® AI Hub](https://aihub.qualcomm.com/).

## License
- - The license for the original implementation of MobileNet-v2-Quantized can be found
- [here](https://github.com/quic/aimet-model-zoo/blob/develop/LICENSE.pdf).
- - The license for the compiled assets for on-device deployment can be found [here](https://qaihub-public-assets.s3.us-west-2.amazonaws.com/qai-hub-models/Qualcomm+AI+Hub+Proprietary+License.pdf)
+ * The license for the original implementation of MobileNet-v2-Quantized can be found [here](https://github.com/quic/aimet-model-zoo/blob/develop/LICENSE.pdf).
+ * The license for the compiled assets for on-device deployment can be found [here](https://qaihub-public-assets.s3.us-west-2.amazonaws.com/qai-hub-models/Qualcomm+AI+Hub+Proprietary+License.pdf)

## References
* [MobileNetV2: Inverted Residuals and Linear Bottlenecks](https://arxiv.org/abs/1801.04381)
* [Source Model Implementation](https://github.com/quic/aimet-model-zoo/tree/develop/aimet_zoo_torch/mobilenetv2)

## Community
* Join [our AI Hub Slack community](https://aihub.qualcomm.com/community/slack) to collaborate, post questions and learn more about on-device AI.
* For questions or feedback please [reach out to us](mailto:ai-hub-support@qti.qualcomm.com).