qaihm-bot committed
Commit: f4754e9
Parent(s): 2dbfc43

Upload README.md with huggingface_hub

Files changed (1): README.md (+13, -5)
README.md CHANGED
@@ -38,7 +38,8 @@ More details on model performance across various devices, can be found
 
 | Device | Chipset | Target Runtime | Inference Time (ms) | Peak Memory Range (MB) | Precision | Primary Compute Unit | Target Model
 | ---|---|---|---|---|---|---|---|
-| Samsung Galaxy S23 Ultra (Android 13) | Snapdragon® 8 Gen 2 | TFLite | 0.24 ms | 0 - 1 MB | FP16 | NPU | [MobileNet-v2-Quantized.tflite](https://huggingface.co/qualcomm/MobileNet-v2-Quantized/blob/main/MobileNet-v2-Quantized.tflite)
+| Samsung Galaxy S23 Ultra (Android 13) | Snapdragon® 8 Gen 2 | TFLite | 0.237 ms | 0 - 1 MB | INT8 | NPU | [MobileNet-v2-Quantized.tflite](https://huggingface.co/qualcomm/MobileNet-v2-Quantized/blob/main/MobileNet-v2-Quantized.tflite)
+| Samsung Galaxy S23 Ultra (Android 13) | Snapdragon® 8 Gen 2 | QNN Model Library | 0.352 ms | 0 - 90 MB | INT8 | NPU | [MobileNet-v2-Quantized.so](https://huggingface.co/qualcomm/MobileNet-v2-Quantized/blob/main/MobileNet-v2-Quantized.so)
 
 
 ## Installation
@@ -98,11 +99,18 @@ python -m qai_hub_models.models.mobilenet_v2_quantized.export
 ```
 Profile Job summary of MobileNet-v2-Quantized
 --------------------------------------------------
-Device: Samsung Galaxy S23 Ultra (13)
-Estimated Inference Time: 0.24 ms
-Estimated Peak Memory Range: 0.01-1.49 MB
+Device: Samsung Galaxy S24 (14)
+Estimated Inference Time: 0.17 ms
+Estimated Peak Memory Range: 0.01-34.29 MB
 Compute Units: NPU (70) | Total (70)
 
+Profile Job summary of MobileNet-v2-Quantized
+--------------------------------------------------
+Device: Samsung Galaxy S24 (14)
+Estimated Inference Time: 0.25 ms
+Estimated Peak Memory Range: 0.16-34.32 MB
+Compute Units: NPU (69) | Total (69)
+
 
 ```
 ## How does this work?
@@ -220,7 +228,7 @@ Explore all available models on [Qualcomm® AI Hub](https://aihub.qualcomm.com/)
 ## License
 - The license for the original implementation of MobileNet-v2-Quantized can be found
 [here](https://github.com/quic/aimet-model-zoo/blob/develop/LICENSE.pdf).
-- The license for the compiled assets for on-device deployment can be found [here](https://qaihub-public-assets.s3.us-west-2.amazonaws.com/qai-hub-models/Qualcomm+AI+Hub+Proprietary+License.pdf).
+- The license for the compiled assets for on-device deployment can be found [here]({deploy_license_url})
 
 ## References
 * [MobileNetV2: Inverted Residuals and Linear Bottlenecks](https://arxiv.org/abs/1801.04381)
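The new table rows point at two downloadable target assets (`MobileNet-v2-Quantized.tflite` and `MobileNet-v2-Quantized.so`). Since the commit message says the upload was made with `huggingface_hub`, here is a minimal sketch of fetching the TFLite asset with that same library; the repo id and filename are taken from the table links, and public (unauthenticated) access is assumed.

```python
# Minimal sketch: download the TFLite target asset referenced in the
# updated performance table. Assumes the repo is public; pass a token
# to hf_hub_download if it is gated.
from huggingface_hub import hf_hub_download

tflite_path = hf_hub_download(
    repo_id="qualcomm/MobileNet-v2-Quantized",
    filename="MobileNet-v2-Quantized.tflite",
)
print("Asset downloaded to:", tflite_path)
```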
 
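The second hunk swaps the single Galaxy S23 Ultra summary for two Galaxy S24 summaries, one per target runtime, both produced by the export entry point named in the hunk header. As a rough sketch, that entry point could be driven from Python as below; the `--device` flag is an assumption based on how `qai_hub_models` export scripts are typically invoked, not something confirmed by this diff.

```python
# Minimal sketch: invoke the export/profiling entry point shown in the
# hunk header. Assumes `qai_hub_models` is installed and Qualcomm AI Hub
# credentials are configured; the --device value mirrors the new profile
# summaries and is an assumption, not part of this diff.
import subprocess
import sys

subprocess.run(
    [
        sys.executable,
        "-m",
        "qai_hub_models.models.mobilenet_v2_quantized.export",
        "--device",
        "Samsung Galaxy S24",
    ],
    check=True,
)
```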