qaihm-bot committed
Commit 659c5db
1 Parent(s): 00aca82

Upload README.md with huggingface_hub

Files changed (1)
  1. README.md +15 -9
README.md CHANGED
@@ -34,12 +34,15 @@ More details on model performance across various devices, can be found
  - Model size (CLIPImageEncoder): 437 MB
 
 
+
+
  | Device | Chipset | Target Runtime | Inference Time (ms) | Peak Memory Range (MB) | Precision | Primary Compute Unit | Target Model
  | ---|---|---|---|---|---|---|---|
- | Samsung Galaxy S23 Ultra (Android 13) | Snapdragon® 8 Gen 2 | TFLite | 13.251 ms | 0 - 3 MB | FP16 | NPU | [CLIPTextEncoder.tflite](https://huggingface.co/qualcomm/OpenAI-Clip/blob/main/CLIPTextEncoder.tflite)
- | Samsung Galaxy S23 Ultra (Android 13) | Snapdragon® 8 Gen 2 | TFLite | 126.637 ms | 0 - 4 MB | FP16 | NPU | [CLIPImageEncoder.tflite](https://huggingface.co/qualcomm/OpenAI-Clip/blob/main/CLIPImageEncoder.tflite)
- | Samsung Galaxy S23 Ultra (Android 13) | Snapdragon® 8 Gen 2 | QNN Model Library | 7.849 ms | 0 - 24 MB | FP16 | NPU | [CLIPTextEncoder.so](https://huggingface.co/qualcomm/OpenAI-Clip/blob/main/CLIPTextEncoder.so)
- | Samsung Galaxy S23 Ultra (Android 13) | Snapdragon® 8 Gen 2 | QNN Model Library | 50.638 ms | 0 - 61 MB | FP16 | NPU | [CLIPImageEncoder.so](https://huggingface.co/qualcomm/OpenAI-Clip/blob/main/CLIPImageEncoder.so)
+ | Samsung Galaxy S23 Ultra (Android 13) | Snapdragon® 8 Gen 2 | TFLite | 13.293 ms | 0 - 3 MB | FP16 | NPU | [CLIPTextEncoder.tflite](https://huggingface.co/qualcomm/OpenAI-Clip/blob/main/CLIPTextEncoder.tflite)
+ | Samsung Galaxy S23 Ultra (Android 13) | Snapdragon® 8 Gen 2 | TFLite | 126.539 ms | 0 - 261 MB | FP16 | NPU | [CLIPImageEncoder.tflite](https://huggingface.co/qualcomm/OpenAI-Clip/blob/main/CLIPImageEncoder.tflite)
+ | Samsung Galaxy S23 Ultra (Android 13) | Snapdragon® 8 Gen 2 | QNN Model Library | 7.81 ms | 0 - 30 MB | FP16 | NPU | [CLIPTextEncoder.so](https://huggingface.co/qualcomm/OpenAI-Clip/blob/main/CLIPTextEncoder.so)
+ | Samsung Galaxy S23 Ultra (Android 13) | Snapdragon® 8 Gen 2 | QNN Model Library | 50.274 ms | 0 - 63 MB | FP16 | NPU | [CLIPImageEncoder.so](https://huggingface.co/qualcomm/OpenAI-Clip/blob/main/CLIPImageEncoder.so)
+
 
 
  ## Installation
@@ -101,22 +104,24 @@ python -m qai_hub_models.models.openai_clip.export
  Profile Job summary of CLIPTextEncoder
  --------------------------------------------------
  Device: Snapdragon X Elite CRD (11)
- Estimated Inference Time: 8.46 ms
- Estimated Peak Memory Range: 0.14-0.14 MB
+ Estimated Inference Time: 8.43 ms
+ Estimated Peak Memory Range: 0.15-0.15 MB
  Compute Units: NPU (377) | Total (377)
 
  Profile Job summary of CLIPImageEncoder
  --------------------------------------------------
  Device: Snapdragon X Elite CRD (11)
- Estimated Inference Time: 48.88 ms
+ Estimated Inference Time: 48.61 ms
  Estimated Peak Memory Range: 0.57-0.57 MB
  Compute Units: NPU (369) | Total (369)
 
 
  ```
+
+
  ## How does this work?
 
- This [export script](https://github.com/quic/ai-hub-models/blob/main/qai_hub_models/models/OpenAI-Clip/export.py)
+ This [export script](https://aihub.qualcomm.com/models/openai_clip/qai_hub_models/models/OpenAI-Clip/export.py)
  leverages [Qualcomm® AI Hub](https://aihub.qualcomm.com/) to optimize, validate, and deploy this model
  on-device. Lets go through each step below in detail:
 
@@ -194,6 +199,7 @@ AI Hub. [Sign up for access](https://myaccount.qualcomm.com/signup).
 
 
 
+
  ## Deploying compiled model to Android
 
 
@@ -215,7 +221,7 @@ Explore all available models on [Qualcomm® AI Hub](https://aihub.qualcomm.com/)
  ## License
  - The license for the original implementation of OpenAI-Clip can be found
  [here](https://github.com/openai/CLIP/blob/main/LICENSE).
- - The license for the compiled assets for on-device deployment can be found [here]({deploy_license_url})
+ - The license for the compiled assets for on-device deployment can be found [here](https://qaihub-public-assets.s3.us-west-2.amazonaws.com/qai-hub-models/Qualcomm+AI+Hub+Proprietary+License.pdf)
 
  ## References
  * [Learning Transferable Visual Models From Natural Language Supervision](https://arxiv.org/abs/2103.00020)
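
For context, the "Profile Job summary" blocks in the hunks above are the output of Qualcomm® AI Hub profile jobs that the export script (`python -m qai_hub_models.models.openai_clip.export`) submits on the user's behalf. Below is a minimal, illustrative sketch of submitting an equivalent profile job directly with the `qai_hub` client; the asset filename and device name are taken from this README's table and summaries, and the exact client arguments may vary across `qai_hub` releases.

```python
# Sketch only: assumes the qai_hub client is installed and already configured
# with an AI Hub API token. The asset name and device name below come from the
# README table and profile summaries above and are illustrative.
import qai_hub as hub

# Upload the downloaded TFLite asset, then profile it on the device that
# appears in the "Profile Job summary" output.
model = hub.upload_model("CLIPTextEncoder.tflite")
profile_job = hub.submit_profile_job(
    model=model,
    device=hub.Device("Snapdragon X Elite CRD"),
)
profile_job.wait()  # results are also visible in the AI Hub web console
```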