qaihm-bot committed
Commit 1c4c6bf (verified) · Parent(s): 9f08b62

Upload README.md with huggingface_hub

Files changed (1)
1. README.md +9 -9
README.md CHANGED
@@ -36,8 +36,8 @@ More details on model performance across various devices, can be found
 
 | Device | Chipset | Target Runtime | Inference Time (ms) | Peak Memory Range (MB) | Precision | Primary Compute Unit | Target Model
 | ---|---|---|---|---|---|---|---|
-| Samsung Galaxy S23 Ultra (Android 13) | Snapdragon® 8 Gen 2 | TFLite | 156.975 ms | 35 - 128 MB | FP16 | GPU | [WhisperEncoder.tflite](https://huggingface.co/qualcomm/Whisper-Base-En/blob/main/WhisperEncoder.tflite)
-| Samsung Galaxy S23 Ultra (Android 13) | Snapdragon® 8 Gen 2 | TFLite | 14.764 ms | 6 - 8 MB | FP16 | NPU | [WhisperDecoder.tflite](https://huggingface.co/qualcomm/Whisper-Base-En/blob/main/WhisperDecoder.tflite)
+| Samsung Galaxy S23 Ultra (Android 13) | Snapdragon® 8 Gen 2 | TFLite | 154.406 ms | 35 - 221 MB | FP16 | GPU | [WhisperEncoder.tflite](https://huggingface.co/qualcomm/Whisper-Base-En/blob/main/WhisperEncoder.tflite)
+| Samsung Galaxy S23 Ultra (Android 13) | Snapdragon® 8 Gen 2 | TFLite | 14.139 ms | 3 - 5 MB | FP16 | NPU | [WhisperDecoder.tflite](https://huggingface.co/qualcomm/Whisper-Base-En/blob/main/WhisperDecoder.tflite)
 
 
 ## Installation
@@ -98,16 +98,16 @@ python -m qai_hub_models.models.whisper_base_en.export
 ```
 Profile Job summary of WhisperEncoder
 --------------------------------------------------
-Device: Samsung Galaxy S23 Ultra (13)
-Estimated Inference Time: 156.97 ms
-Estimated Peak Memory Range: 35.29-128.29 MB
+Device: Samsung Galaxy S24 (14)
+Estimated Inference Time: 120.44 ms
+Estimated Peak Memory Range: 35.07-63.03 MB
 Compute Units: GPU (315) | Total (315)
 
 Profile Job summary of WhisperDecoder
 --------------------------------------------------
-Device: Samsung Galaxy S23 Ultra (13)
-Estimated Inference Time: 14.76 ms
-Estimated Peak Memory Range: 5.52-8.29 MB
+Device: Samsung Galaxy S24 (14)
+Estimated Inference Time: 10.61 ms
+Estimated Peak Memory Range: 1.93-91.60 MB
 Compute Units: NPU (433) | Total (433)
 
 
@@ -213,7 +213,7 @@ Explore all available models on [Qualcomm® AI Hub](https://aihub.qualcomm.com/)
 ## License
 - The license for the original implementation of Whisper-Base-En can be found
   [here](https://github.com/openai/whisper/blob/main/LICENSE).
-- The license for the compiled assets for on-device deployment can be found [here](https://qaihub-public-assets.s3.us-west-2.amazonaws.com/qai-hub-models/Qualcomm+AI+Hub+Proprietary+License.pdf).
+- The license for the compiled assets for on-device deployment can be found [here]({deploy_license_url})
 
 ## References
 * [Robust Speech Recognition via Large-Scale Weak Supervision](https://cdn.openai.com/papers/whisper.pdf)
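
The table rows above link to the compiled TFLite assets hosted in this repo. Below is a minimal sketch of fetching one of them and loading it locally to inspect its I/O signature; the repo id and filenames come from the links in the diff, while the use of `huggingface_hub` and the TFLite interpreter here is illustrative and not the on-device profiling flow that produced the numbers above.

```python
# Minimal sketch: download the encoder asset listed in the performance table
# and open it with the TFLite interpreter to inspect input/output tensors.
# Assumes `huggingface_hub` and `tensorflow` are installed; the profiled
# latencies above still require running on Qualcomm AI Hub target devices.
from huggingface_hub import hf_hub_download
import tensorflow as tf

encoder_path = hf_hub_download(
    repo_id="qualcomm/Whisper-Base-En",
    filename="WhisperEncoder.tflite",
)

interpreter = tf.lite.Interpreter(model_path=encoder_path)
interpreter.allocate_tensors()

# Print tensor names, shapes, and dtypes for the encoder.
for detail in interpreter.get_input_details():
    print("input:", detail["name"], detail["shape"], detail["dtype"])
for detail in interpreter.get_output_details():
    print("output:", detail["name"], detail["shape"], detail["dtype"])
```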