qaihm-bot committed
Commit 0bbb957
1 Parent(s): 413f95f

Upload README.md with huggingface_hub

Files changed (1)
  1. README.md +6 -6
README.md CHANGED
@@ -33,7 +33,7 @@ More details on model performance across various devices, can be found

  | Device | Chipset | Target Runtime | Inference Time (ms) | Peak Memory Range (MB) | Precision | Primary Compute Unit | Target Model
  | ---|---|---|---|---|---|---|---|
- | Samsung Galaxy S23 Ultra (Android 13) | Snapdragon® 8 Gen 2 | TFLite | 463.847 ms | 10 - 13 MB | FP32 | CPU | [HuggingFace-WavLM-Base-Plus.tflite](https://huggingface.co/qualcomm/HuggingFace-WavLM-Base-Plus/blob/main/HuggingFace-WavLM-Base-Plus.tflite)
+ | Samsung Galaxy S23 Ultra (Android 13) | Snapdragon® 8 Gen 2 | TFLite | 237767.939 ms | 11 - 15 MB | FP16 | NPU | [HuggingFace-WavLM-Base-Plus.tflite](https://huggingface.co/qualcomm/HuggingFace-WavLM-Base-Plus/blob/main/HuggingFace-WavLM-Base-Plus.tflite)


  ## Installation
@@ -94,10 +94,10 @@ python -m qai_hub_models.models.huggingface_wavlm_base_plus.export
  ```
  Profile Job summary of HuggingFace-WavLM-Base-Plus
  --------------------------------------------------
- Device: Samsung Galaxy S23 Ultra (13)
- Estimated Inference Time: 463.85 ms
- Estimated Peak Memory Range: 10.22-13.22 MB
- Compute Units: GPU (88),CPU (748) | Total (836)
+ Device: Samsung Galaxy S24 (14)
+ Estimated Inference Time: 174470.19 ms
+ Estimated Peak Memory Range: 10.80-678.70 MB
+ Compute Units: NPU (848) | Total (848)


  ```
@@ -202,7 +202,7 @@ Explore all available models on [Qualcomm® AI Hub](https://aihub.qualcomm.com/)
  ## License
  - The license for the original implementation of HuggingFace-WavLM-Base-Plus can be found
  [here](https://github.com/microsoft/unilm/blob/master/LICENSE).
- - The license for the compiled assets for on-device deployment can be found [here](https://qaihub-public-assets.s3.us-west-2.amazonaws.com/qai-hub-models/Qualcomm+AI+Hub+Proprietary+License.pdf).
+ - The license for the compiled assets for on-device deployment can be found [here]({deploy_license_url})

  ## References
  * [WavLM: Large-Scale Self-Supervised Pre-Training for Full Stack Speech Processing](https://arxiv.org/abs/2110.13900)
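
For context, the updated metrics in this diff come from the model's export/profiling entry point, which is named in the second hunk. Below is a minimal sketch of how that "Profile Job summary" is typically reproduced, assuming `qai_hub_models` is installed and a Qualcomm AI Hub API token is configured; the `--device` argument is an assumption based on the device named in the new summary.

```bash
# Install the package; the README's Installation section may list
# model-specific extras in addition to the base package.
pip install qai_hub_models

# Run the export script referenced in the diff. It submits compile and
# profile jobs to Qualcomm AI Hub and prints a "Profile Job summary"
# like the one quoted above. The --device flag is an assumed way to
# select the target device (e.g. the Samsung Galaxy S24 in the new summary).
python -m qai_hub_models.models.huggingface_wavlm_base_plus.export --device "Samsung Galaxy S24"
```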