qaihm-bot committed
Commit 0bccf10
1 Parent(s): 127405f

Upload README.md with huggingface_hub

Files changed (1)
README.md +233 -0
README.md ADDED
@@ -0,0 +1,233 @@

---
datasets:
- imagenet-1k
- imagenet-22k
library_name: pytorch
license: bsd-3-clause
pipeline_tag: image-classification
tags:
- quantized
- backbone
- real_time
- android

---

![](https://qaihub-public-assets.s3.us-west-2.amazonaws.com/qai-hub-models/models/mobilenet_v3_large_quantized/web-assets/model_demo.png)

# MobileNet-v3-Large-Quantized: Optimized for Mobile Deployment
## Imagenet classifier and general purpose backbone

MobileNet-v3-Large is a machine learning model that can classify images from the Imagenet dataset. It can also be used as a backbone in building more complex models for specific use cases.

This model is an implementation of MobileNet-v3-Large-Quantized found [here](https://github.com/pytorch/vision/blob/main/torchvision/models/mobilenetv3.py).
This repository provides scripts to run MobileNet-v3-Large-Quantized on Qualcomm® devices.
More details on model performance across various devices can be found
[here](https://aihub.qualcomm.com/models/mobilenet_v3_large_quantized).

### Model Details

- **Model Type:** Image classification
- **Model Stats:**
  - Model checkpoint: Imagenet
  - Input resolution: 224x224
  - Number of parameters: 5.47M
  - Model size: 5.79 MB

| Device | Chipset | Target Runtime | Inference Time (ms) | Peak Memory Range (MB) | Precision | Primary Compute Unit | Target Model |
| ---|---|---|---|---|---|---|---|
| Samsung Galaxy S23 Ultra (Android 13) | Snapdragon® 8 Gen 2 | TFLite | 2.972 ms | 0 - 3 MB | INT8 | NPU | [MobileNet-v3-Large-Quantized.tflite](https://huggingface.co/qualcomm/MobileNet-v3-Large-Quantized/blob/main/MobileNet-v3-Large-Quantized.tflite) |

## Installation

This model can be installed as a Python package via pip.

```bash
pip install qai-hub-models
```
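
After installation, a quick import check confirms the package and this model's module are available. This is a minimal sketch; `Model.from_pretrained()` is the same loader used in the examples below, and it downloads the pre-trained weights on first use:

```python
from qai_hub_models.models.mobilenet_v3_large_quantized import Model

# Loading the model verifies both the install and the weight download.
torch_model = Model.from_pretrained()
print(torch_model.__class__.__name__)
```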

## Configure Qualcomm® AI Hub to run this model on a cloud-hosted device

Sign in to [Qualcomm® AI Hub](https://app.aihub.qualcomm.com/) with your
Qualcomm® ID. Once signed in, navigate to `Account -> Settings -> API Token`.

With this API token, you can configure your client to run models on
cloud-hosted devices.

```bash
qai-hub configure --api_token API_TOKEN
```

Navigate to [docs](https://app.aihub.qualcomm.com/docs/) for more information.
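
To confirm the client is configured correctly, you can list the cloud-hosted devices available to your account. A minimal sketch using the `qai_hub` client's `get_devices()` call:

```python
import qai_hub as hub

# Prints every cloud-hosted device your API token can submit jobs to.
for device in hub.get_devices():
    print(device.name)
```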

## Demo off target

The package contains a simple end-to-end demo that downloads pre-trained
weights and runs this model on a sample input.

```bash
python -m qai_hub_models.models.mobilenet_v3_large_quantized.demo
```

The above demo runs a reference implementation of pre-processing, model
inference, and post-processing.
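
The same flow can be reproduced in plain Python. A minimal sketch, assuming the model returns Imagenet class logits (the `Model` class and `sample_inputs()` helper are the ones used in the export steps below):

```python
import torch

from qai_hub_models.models.mobilenet_v3_large_quantized import Model

# Load the reference PyTorch implementation.
torch_model = Model.from_pretrained()
torch_model.eval()

# The package bundles sample inputs with pre-processing already applied.
sample_inputs = torch_model.sample_inputs()

# Inference: logits over the Imagenet classes.
with torch.no_grad():
    logits = torch_model(*[torch.tensor(data[0]) for _, data in sample_inputs.items()])

# Post-processing: report the top-1 predicted class index.
print("Top-1 class index:", int(logits.argmax(dim=-1)[0]))
```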

**NOTE**: If you want to run this in a Jupyter Notebook or a Google Colab-like
environment, add the following to your cell instead of the command above.
```
%run -m qai_hub_models.models.mobilenet_v3_large_quantized.demo
```

### Run model on a cloud-hosted device

In addition to the demo, you can also run the model on a cloud-hosted Qualcomm®
device. This script does the following:
* Checks performance on a cloud-hosted device
* Downloads compiled assets that can be deployed on-device for Android
* Checks accuracy between PyTorch and on-device outputs

```bash
python -m qai_hub_models.models.mobilenet_v3_large_quantized.export
```

```
Profile Job summary of MobileNet-v3-Large-Quantized
--------------------------------------------------
Device: Samsung Galaxy S24 (14)
Estimated Inference Time: 2.35 ms
Estimated Peak Memory Range: 0.00-44.04 MB
Compute Units: NPU (136), CPU (15) | Total (151)
```

## How does this work?

This [export script](https://github.com/quic/ai-hub-models/blob/main/qai_hub_models/models/mobilenet_v3_large_quantized/export.py)
leverages [Qualcomm® AI Hub](https://aihub.qualcomm.com/) to optimize, validate, and deploy this model
on-device. Let's go through each step below in detail:

Step 1: **Compile model for on-device deployment**

To compile a PyTorch model for on-device deployment, we first trace the model
in memory using `torch.jit.trace` and then call the `submit_compile_job` API.

```python
import torch

import qai_hub as hub
from qai_hub_models.models.mobilenet_v3_large_quantized import Model

# Load the pre-trained model
torch_model = Model.from_pretrained()
torch_model.eval()

# Target device
device = hub.Device("Samsung Galaxy S23")

# Trace the model using sample inputs
input_spec = torch_model.get_input_spec()
sample_inputs = torch_model.sample_inputs()

pt_model = torch.jit.trace(torch_model, [torch.tensor(data[0]) for _, data in sample_inputs.items()])

# Compile the traced model for a specific device
compile_job = hub.submit_compile_job(
    model=pt_model,
    device=device,
    input_specs=input_spec,
)

# Get target model to run on-device
target_model = compile_job.get_target_model()
```
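
Once the compile job finishes, the compiled asset can also be saved locally for deployment. A one-line sketch using the `qai_hub` Model API (the filename is an example):

```python
# Save the compiled target model (a .tflite file for this model) to disk.
target_model.download("MobileNet-v3-Large-Quantized.tflite")
```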

Step 2: **Performance profiling on cloud-hosted device**

After compiling the model in Step 1, it can be profiled on-device using the
`target_model`. Note that this script runs the model on a device automatically
provisioned in the cloud. Once the job is submitted, you can navigate to a
provided job URL to view a variety of on-device performance metrics.

```python
profile_job = hub.submit_profile_job(
    model=target_model,
    device=device,
)
```
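
Besides the job URL, the results can be fetched programmatically. A minimal sketch using the job's `download_profile()` method (inspect the returned dictionary for the metrics you need):

```python
# Blocks until the profile job finishes, then returns the profile as a dict.
profile = profile_job.download_profile()
print(profile.keys())
```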

Step 3: **Verify on-device accuracy**

To verify the accuracy of the model on-device, you can run on-device inference
on sample input data on the same cloud-hosted device.

```python
input_data = torch_model.sample_inputs()
inference_job = hub.submit_inference_job(
    model=target_model,
    device=device,
    inputs=input_data,
)

on_device_output = inference_job.download_output_data()
```

With the output of the model, you can compute metrics like PSNR and relative
error, or spot-check the output against the expected output.
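
For example, PSNR between the local PyTorch output and the on-device output can be computed along these lines. A minimal sketch, assuming the model returns a single tensor and that `download_output_data()` returns a dict mapping each output name to a list of numpy arrays:

```python
import numpy as np
import torch

# Reference output from the local PyTorch model on the same inputs.
with torch.no_grad():
    ref = torch_model(*[torch.tensor(data[0]) for _, data in input_data.items()]).numpy()

# First (and, for this model, only) output tensor from the on-device run.
dev = next(iter(on_device_output.values()))[0]

# PSNR: higher is better; a quantized model typically trades a little off.
mse = np.mean((ref - dev) ** 2)
psnr = 10 * np.log10(np.abs(ref).max() ** 2 / mse)
print(f"PSNR: {psnr:.2f} dB")
```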

**Note**: This on-device profiling and inference requires access to Qualcomm®
AI Hub. [Sign up for access](https://myaccount.qualcomm.com/signup).

## Run demo on a cloud-hosted device

You can also run the demo on-device.

```bash
python -m qai_hub_models.models.mobilenet_v3_large_quantized.demo --on-device
```

**NOTE**: If you want to run this in a Jupyter Notebook or a Google Colab-like
environment, add the following to your cell instead of the command above.
```
%run -m qai_hub_models.models.mobilenet_v3_large_quantized.demo -- --on-device
```

## Deploying compiled model to Android

The models can be deployed using multiple runtimes:
- TensorFlow Lite (`.tflite` export): [This
  tutorial](https://www.tensorflow.org/lite/android/quickstart) provides a
  guide to deploy the .tflite model in an Android application (a quick desktop
  sanity check of this asset is sketched below).

- QNN (`.so` export): This [sample
  app](https://docs.qualcomm.com/bundle/publicresource/topics/80-63442-50/sample_app.html)
  provides instructions on how to use the `.so` shared library in an Android application.
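
Before wiring the `.tflite` asset into an app, you can sanity-check it on a desktop with the TensorFlow Lite Python interpreter. A minimal sketch, assuming `tensorflow` is installed and the compiled asset was downloaded as shown above:

```python
import numpy as np
import tensorflow as tf

# Load the compiled TFLite asset (the filename is an example).
interpreter = tf.lite.Interpreter(model_path="MobileNet-v3-Large-Quantized.tflite")
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Feed a random 224x224 input; the dtype comes from the model itself
# (a quantized model's inputs may be integer typed).
dummy = np.random.rand(*input_details[0]["shape"]).astype(input_details[0]["dtype"])
interpreter.set_tensor(input_details[0]["index"], dummy)
interpreter.invoke()

print("Output shape:", interpreter.get_tensor(output_details[0]["index"]).shape)
```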

## View on Qualcomm® AI Hub
Get more details on MobileNet-v3-Large-Quantized's performance across various devices [here](https://aihub.qualcomm.com/models/mobilenet_v3_large_quantized).
Explore all available models on [Qualcomm® AI Hub](https://aihub.qualcomm.com/).

## License
- The license for the original implementation of MobileNet-v3-Large-Quantized can be found
  [here](https://github.com/pytorch/vision/blob/main/LICENSE).
- The license for the compiled assets for on-device deployment can be found [here]({deploy_license_url}).

## References
* [Searching for MobileNetV3](https://arxiv.org/abs/1905.02244)
* [Source Model Implementation](https://github.com/pytorch/vision/blob/main/torchvision/models/mobilenetv3.py)

## Community
* Join [our AI Hub Slack community](https://join.slack.com/t/qualcomm-ai-hub/shared_invite/zt-2dgf95loi-CXHTDRR1rvPgQWPO~ZZZJg) to collaborate, post questions and learn more about on-device AI.
* For questions or feedback, please [reach out to us](mailto:ai-hub-support@qti.qualcomm.com).