---
library_name: pytorch
license: mit
pipeline_tag: image-classification
tags:
- foundation
- android

---

![](https://qaihub-public-assets.s3.us-west-2.amazonaws.com/qai-hub-models/models/openai_clip/web-assets/banner.png)

# OpenAI-Clip: Optimized for Mobile Deployment
## Multi-modal foundation model for vision and language tasks such as image/text similarity and zero-shot image classification

16
+ Contrastive Language-Image Pre-Training (CLIP) uses a ViT like transformer to get visual features and a causal language model to get the text features. Both the text and visual features can then be used for a variety of zero-shot learning tasks.
17
+
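As an illustration of the zero-shot setup, here is a minimal sketch using the upstream [CLIP package](https://github.com/openai/CLIP/) (it assumes that package is installed; the image path and label set are placeholders):

```python
import clip
import torch
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/16", device=device)

# Placeholder image and candidate labels for zero-shot classification.
image = preprocess(Image.open("cat.jpg")).unsqueeze(0).to(device)
text = clip.tokenize(["a photo of a cat", "a photo of a dog"]).to(device)

with torch.no_grad():
    # Image/text similarity logits; softmax turns them into class probabilities.
    logits_per_image, _ = model(image, text)
    probs = logits_per_image.softmax(dim=-1)
print(probs)  # e.g. [[0.99, 0.01]] if the image shows a cat
```
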
18
+ This model is an implementation of OpenAI-Clip found [here](https://github.com/openai/CLIP/).
19
+ This repository provides scripts to run OpenAI-Clip on Qualcomm® devices.
20
+ More details on model performance across various devices, can be found
21
+ [here](https://aihub.qualcomm.com/models/openai_clip).
22
+
23
+
24
+ ### Model Details
25
+
26
+ - **Model Type:** Image classification
27
+ - **Model Stats:**
28
+ - Model checkpoint: ViT-B/16
29
+ - Image input resolution: 224x224
30
+ - Text context length: 77
31
+ - Number of parameters (CLIPTextEncoder): 76.0M
32
+ - Model size (CLIPTextEncoder): 290 MB
33
+ - Number of parameters (CLIPImageEncoder): 115M
34
+ - Model size (CLIPImageEncoder): 437 MB
35
+
36
+
37
+ | Device | Chipset | Target Runtime | Inference Time (ms) | Peak Memory Range (MB) | Precision | Primary Compute Unit | Target Model
38
+ | ---|---|---|---|---|---|---|---|
39
+ | Samsung Galaxy S23 Ultra (Android 13) | Snapdragon® 8 Gen 2 | TFLite | 15.528 ms | 0 - 3 MB | FP16 | NPU | [CLIPTextEncoder.tflite](https://huggingface.co/qualcomm/OpenAI-Clip/blob/main/CLIPTextEncoder.tflite)
40
+ | Samsung Galaxy S23 Ultra (Android 13) | Snapdragon® 8 Gen 2 | TFLite | 127.729 ms | 0 - 4 MB | FP16 | NPU | [CLIPImageEncoder.tflite](https://huggingface.co/qualcomm/OpenAI-Clip/blob/main/CLIPImageEncoder.tflite)
41
+ | Samsung Galaxy S23 Ultra (Android 13) | Snapdragon® 8 Gen 2 | QNN Model Library | 8.149 ms | 0 - 23 MB | FP16 | NPU | [CLIPTextEncoder.so](https://huggingface.co/qualcomm/OpenAI-Clip/blob/main/CLIPTextEncoder.so)
42
+ | Samsung Galaxy S23 Ultra (Android 13) | Snapdragon® 8 Gen 2 | QNN Model Library | 50.903 ms | 0 - 57 MB | FP16 | NPU | [CLIPImageEncoder.so](https://huggingface.co/qualcomm/OpenAI-Clip/blob/main/CLIPImageEncoder.so)
43
+
44
+
## Installation

This model can be installed as a Python package via pip.

```bash
pip install "qai-hub-models[openai_clip]"
```


## Configure Qualcomm® AI Hub to run this model on a cloud-hosted device

Sign in to [Qualcomm® AI Hub](https://app.aihub.qualcomm.com/) with your
Qualcomm® ID. Once signed in, navigate to `Account -> Settings -> API Token`.

With this API token, you can configure your client to run models on cloud-hosted
devices.
```bash
qai-hub configure --api_token API_TOKEN
```
Navigate to [docs](https://app.aihub.qualcomm.com/docs/) for more information.


## Demo off target

The package contains a simple end-to-end demo that downloads pre-trained
weights and runs this model on a sample input.

```bash
python -m qai_hub_models.models.openai_clip.demo
```

The above demo runs a reference implementation of pre-processing, model
inference, and post-processing.

**NOTE**: If you are running this in a Jupyter Notebook or a Google Colab-like
environment, add the following to a cell instead of the above command.
```
%run -m qai_hub_models.models.openai_clip.demo
```


### Run model on a cloud-hosted device

In addition to the demo, you can also run the model on a cloud-hosted Qualcomm®
device. This script does the following:
* Checks performance on a cloud-hosted device.
* Downloads compiled assets that can be deployed on-device for Android.
* Checks accuracy between PyTorch and on-device outputs.

```bash
python -m qai_hub_models.models.openai_clip.export
```

```
Profile Job summary of CLIPTextEncoder
--------------------------------------------------
Device: Samsung Galaxy S23 Ultra (13)
Estimated Inference Time: 15.53 ms
Estimated Peak Memory Range: 0.04-2.96 MB
Compute Units: NPU (574), CPU (2) | Total (576)

Profile Job summary of CLIPImageEncoder
--------------------------------------------------
Device: Samsung Galaxy S23 Ultra (13)
Estimated Inference Time: 127.73 ms
Estimated Peak Memory Range: 0.15-3.69 MB
Compute Units: NPU (575) | Total (575)

Profile Job summary of CLIPTextEncoder
--------------------------------------------------
Device: Samsung Galaxy S23 Ultra (13)
Estimated Inference Time: 8.15 ms
Estimated Peak Memory Range: 0.04-22.63 MB
Compute Units: NPU (377) | Total (377)

Profile Job summary of CLIPImageEncoder
--------------------------------------------------
Device: Samsung Galaxy S23 Ultra (13)
Estimated Inference Time: 50.90 ms
Estimated Peak Memory Range: 0.08-56.97 MB
Compute Units: NPU (370) | Total (370)
```
## How does this work?

This [export script](https://github.com/quic/ai-hub-models/blob/main/qai_hub_models/models/OpenAI-Clip/export.py)
leverages [Qualcomm® AI Hub](https://aihub.qualcomm.com/) to optimize, validate, and deploy this model
on-device. Let's go through each step below in detail:

Step 1: **Compile model for on-device deployment**

To compile a PyTorch model for on-device deployment, we first trace the model
in memory using `jit.trace` and then call the `submit_compile_job` API.

```python
import torch

import qai_hub as hub
from qai_hub_models.models.openai_clip import Model

# Load the model
torch_model = Model.from_pretrained()
torch_model.eval()

# Device
device = hub.Device("Samsung Galaxy S23")

# Trace model
input_shape = torch_model.get_input_spec()
sample_inputs = torch_model.sample_inputs()

pt_model = torch.jit.trace(torch_model, [torch.tensor(data[0]) for _, data in sample_inputs.items()])

# Compile model on a specific device
compile_job = hub.submit_compile_job(
    model=pt_model,
    device=device,
    input_specs=input_shape,
)

# Get target model to run on-device
target_model = compile_job.get_target_model()
```
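
Optionally, you can pull the compiled asset down for local use; a minimal sketch, assuming the `qai_hub` model object's `download()` helper and a placeholder file name:

```python
# Save the compiled model locally (file name is a placeholder; the actual
# extension depends on the chosen target runtime, e.g. .tflite or .so).
target_model.download("openai_clip_compiled.tflite")
```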


Step 2: **Performance profiling on cloud-hosted device**

After compiling the model in step 1, it can be profiled on-device using the
`target_model`. Note that this script runs the model on a device automatically
provisioned in the cloud. Once the job is submitted, you can navigate to a
provided job URL to view a variety of on-device performance metrics.
```python
profile_job = hub.submit_profile_job(
    model=target_model,
    device=device,
)
```
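
Once the job finishes, you can also pull the raw metrics into Python; a minimal sketch, assuming the client's `download_profile()` helper on profile jobs:

```python
# Fetch the profiling results once the job completes (a sketch; inspect the
# returned structure for the metrics you need).
profile_results = profile_job.download_profile()
print(profile_results)
```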

Step 3: **Verify on-device accuracy**

To verify the accuracy of the model on-device, you can run on-device inference
on sample input data on the same cloud-hosted device.
```python
input_data = torch_model.sample_inputs()
inference_job = hub.submit_inference_job(
    model=target_model,
    device=device,
    inputs=input_data,
)

on_device_output = inference_job.download_output_data()
```
With the output of the model, you can compute metrics like PSNR and relative
error, or spot-check the output against the expected output.

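For example, a minimal PSNR sketch (the output lookup is left generic, since the exact keys in `on_device_output` depend on the model's output names):

```python
import numpy as np

def psnr(reference: np.ndarray, candidate: np.ndarray) -> float:
    """Peak signal-to-noise ratio in dB; higher means closer outputs."""
    mse = np.mean((reference.astype(np.float64) - candidate.astype(np.float64)) ** 2)
    if mse == 0.0:
        return float("inf")
    peak = float(np.abs(reference).max())
    return float(10.0 * np.log10(peak ** 2 / mse))

# e.g. compare the PyTorch reference output for each output name:
# psnr(reference_output, on_device_output[name][0])
```
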
**Note**: On-device profiling and inference require access to Qualcomm®
AI Hub. [Sign up for early access](https://aihub.qualcomm.com/sign-up).


## Deploying compiled model to Android

The models can be deployed using multiple runtimes:
- TensorFlow Lite (`.tflite` export): [This
  tutorial](https://www.tensorflow.org/lite/android/quickstart) provides a
  guide to deploy the `.tflite` model in an Android application (a local
  sanity-check sketch follows this list).

- QNN (`.so` export): This [sample
  app](https://docs.qualcomm.com/bundle/publicresource/topics/80-63442-50/sample_app.html)
  provides instructions on how to use the `.so` shared library in an Android application.

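Before integrating into an app, you can sanity-check a downloaded `.tflite` asset locally with the TensorFlow Lite Python interpreter; a minimal sketch, assuming the `CLIPImageEncoder.tflite` file from this repository is on disk:

```python
import numpy as np
import tensorflow as tf

# Load the compiled model downloaded from this repository.
interpreter = tf.lite.Interpreter(model_path="CLIPImageEncoder.tflite")
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Run inference on a random tensor matching the expected input shape.
dummy = np.random.rand(*input_details[0]["shape"]).astype(input_details[0]["dtype"])
interpreter.set_tensor(input_details[0]["index"], dummy)
interpreter.invoke()
print(interpreter.get_tensor(output_details[0]["index"]).shape)
```
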
## View on Qualcomm® AI Hub
Get more details on OpenAI-Clip's performance across various devices [here](https://aihub.qualcomm.com/models/openai_clip).
Explore all available models on [Qualcomm® AI Hub](https://aihub.qualcomm.com/).

## License
- The license for the original implementation of OpenAI-Clip can be found
  [here](https://github.com/openai/CLIP/blob/main/LICENSE).
- The license for the compiled assets for on-device deployment can be found [here](https://qaihub-public-assets.s3.us-west-2.amazonaws.com/qai-hub-models/Qualcomm+AI+Hub+Proprietary+License.pdf).

## References
* [Learning Transferable Visual Models From Natural Language Supervision](https://arxiv.org/abs/2103.00020)
* [Source Model Implementation](https://github.com/openai/CLIP/)

## Community
* Join [our AI Hub Slack community](https://join.slack.com/t/qualcomm-ai-hub/shared_invite/zt-2dgf95loi-CXHTDRR1rvPgQWPO~ZZZJg) to collaborate, post questions, and learn more about on-device AI.
* For questions or feedback, please [reach out to us](mailto:ai-hub-support@qti.qualcomm.com).