qaihm-bot committed
Commit 6c387ef
1 Parent(s): 336af71

Upload README.md with huggingface_hub

Files changed (1)
  1. README.md +19 -3
README.md CHANGED
@@ -31,9 +31,12 @@ More details on model performance across various devices, can be found
- Model size: 363 MB


+
+
| Device | Chipset | Target Runtime | Inference Time (ms) | Peak Memory Range (MB) | Precision | Primary Compute Unit | Target Model
| ---|---|---|---|---|---|---|---|
- | Samsung Galaxy S23 Ultra (Android 13) | Snapdragon® 8 Gen 2 | TFLite | 950.768 ms | 134 - 136 MB | FP32 | CPU | [HuggingFace-WavLM-Base-Plus.tflite](https://huggingface.co/qualcomm/HuggingFace-WavLM-Base-Plus/blob/main/HuggingFace-WavLM-Base-Plus.tflite)
+ | Samsung Galaxy S23 Ultra (Android 13) | Snapdragon® 8 Gen 2 | TFLite | 920.916 ms | 141 - 148 MB | FP32 | CPU | [HuggingFace-WavLM-Base-Plus.tflite](https://huggingface.co/qualcomm/HuggingFace-WavLM-Base-Plus/blob/main/HuggingFace-WavLM-Base-Plus.tflite)
+


## Installation
@@ -91,9 +94,21 @@ device. This script does the following:
python -m qai_hub_models.models.huggingface_wavlm_base_plus.export
```

+ ```
+ Profile Job summary of HuggingFace-WavLM-Base-Plus
+ --------------------------------------------------
+ Device: QCS8550 (Proxy) (12)
+ Estimated Inference Time: 932.00 ms
+ Estimated Peak Memory Range: 142.46-146.71 MB
+ Compute Units: CPU (811) | Total (811)
+
+
+ ```
+
+
## How does this work?

- This [export script](https://github.com/quic/ai-hub-models/blob/main/qai_hub_models/models/HuggingFace-WavLM-Base-Plus/export.py)
+ This [export script](https://aihub.qualcomm.com/models/huggingface_wavlm_base_plus/qai_hub_models/models/HuggingFace-WavLM-Base-Plus/export.py)
leverages [Qualcomm® AI Hub](https://aihub.qualcomm.com/) to optimize, validate, and deploy this model
on-device. Lets go through each step below in detail:

@@ -171,6 +186,7 @@ AI Hub. [Sign up for access](https://myaccount.qualcomm.com/signup).



+
## Deploying compiled model to Android


@@ -192,7 +208,7 @@ Explore all available models on [Qualcomm® AI Hub](https://aihub.qualcomm.com/)
## License
- The license for the original implementation of HuggingFace-WavLM-Base-Plus can be found
  [here](https://github.com/microsoft/unilm/blob/master/LICENSE).
- - The license for the compiled assets for on-device deployment can be found [here]({deploy_license_url})
+ - The license for the compiled assets for on-device deployment can be found [here](https://qaihub-public-assets.s3.us-west-2.amazonaws.com/qai-hub-models/Qualcomm+AI+Hub+Proprietary+License.pdf)

## References
* [WavLM: Large-Scale Self-Supervised Pre-Training for Full Stack Speech Processing](https://arxiv.org/abs/2110.13900)
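For context on the export command shown in the second hunk, here is a minimal sketch (not part of this commit) of loading the underlying PyTorch model with `qai_hub_models`. The import path is an assumption based on the module name in that command, and `Model.from_pretrained()` is the usual qai_hub_models entry point rather than anything introduced by this README change; it assumes the package and this model's optional dependencies are installed.

```python
# Hedged sketch: load the WavLM-Base-Plus source model that the export script
# (python -m qai_hub_models.models.huggingface_wavlm_base_plus.export) starts from.
# Assumes qai_hub_models and this model's extra dependencies are installed.
from qai_hub_models.models.huggingface_wavlm_base_plus import Model

# Downloads the WavLM-Base-Plus checkpoint and wraps it as a torch.nn.Module.
model = Model.from_pretrained()
model.eval()

# Rough cross-check against the "Model size: 363 MB" line in the diff:
# an FP32 parameter count of ~95M corresponds to roughly that asset size.
num_params = sum(p.numel() for p in model.parameters())
print(f"Parameters: {num_params / 1e6:.1f}M (~{num_params * 4 / 2**20:.0f} MiB in FP32)")
```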
 
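Similarly, a hedged sketch of how a profile run like the "Profile Job summary" added in this commit might be reproduced with the `qai_hub` client against the pre-compiled TFLite asset linked in the performance table. The device string and the shape of the downloaded profile are assumptions, not details taken from this README; it requires a configured Qualcomm AI Hub account and API token.

```python
# Hedged sketch: submit a profile job for the pre-compiled TFLite asset,
# analogous to the "Profile Job summary" block added in this commit.
import qai_hub as hub
from huggingface_hub import hf_hub_download

# Fetch the on-device asset referenced in the performance table.
tflite_path = hf_hub_download(
    repo_id="qualcomm/HuggingFace-WavLM-Base-Plus",
    filename="HuggingFace-WavLM-Base-Plus.tflite",
)

# Upload the model and profile it on a cloud-hosted device. The device name is
# illustrative; valid names can be listed with hub.get_devices().
model = hub.upload_model(tflite_path)
profile_job = hub.submit_profile_job(
    model=model,
    device=hub.Device("Samsung Galaxy S23 Ultra"),
)

# Blocks until the job completes, then returns timing/memory metrics as a dict.
profile = profile_job.download_profile()
print(sorted(profile.keys()))
```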