Upload README.md with huggingface_hub
README.md CHANGED
@@ -37,8 +37,8 @@ More details on model performance across various devices, can be found
 
 | Device | Chipset | Target Runtime | Inference Time (ms) | Peak Memory Range (MB) | Precision | Primary Compute Unit | Target Model
 | ---|---|---|---|---|---|---|---|
-| Samsung Galaxy S23 Ultra (Android 13) | Snapdragon® 8 Gen 2 | TFLite | 0.
-| Samsung Galaxy S23 Ultra (Android 13) | Snapdragon® 8 Gen 2 | QNN Model Library |
+| Samsung Galaxy S23 Ultra (Android 13) | Snapdragon® 8 Gen 2 | TFLite | 0.974 ms | 0 - 2 MB | FP16 | NPU | [MobileNet-v2.tflite](https://huggingface.co/qualcomm/MobileNet-v2/blob/main/MobileNet-v2.tflite)
+| Samsung Galaxy S23 Ultra (Android 13) | Snapdragon® 8 Gen 2 | QNN Model Library | 1.281 ms | 1 - 7 MB | FP16 | NPU | [MobileNet-v2.so](https://huggingface.co/qualcomm/MobileNet-v2/blob/main/MobileNet-v2.so)
 
 
 ## Installation
@@ -95,23 +95,6 @@ device. This script does the following:
 python -m qai_hub_models.models.mobilenet_v2.export
 ```
 
-```
-Profile Job summary of MobileNet-v2
---------------------------------------------------
-Device: Samsung Galaxy S24 (14)
-Estimated Inference Time: 0.39 ms
-Estimated Peak Memory Range: 0.01-53.52 MB
-Compute Units: NPU (70) | Total (70)
-
-Profile Job summary of MobileNet-v2
---------------------------------------------------
-Device: Samsung Galaxy S24 (14)
-Estimated Inference Time: 0.54 ms
-Estimated Peak Memory Range: 0.59-35.28 MB
-Compute Units: NPU (103) | Total (103)
-
-
-```
 ## How does this work?
 
 This [export script](https://github.com/quic/ai-hub-models/blob/main/qai_hub_models/models/MobileNet-v2/export.py)
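The "Profile Job summary" blocks that this commit removes from the README follow a simple `Key: Value` line layout. As an illustration only, a minimal sketch of pulling those fields out of such a block (`parse_profile_summary` is a hypothetical helper, not part of `qai_hub_models`; the summary text is taken verbatim from the diff):

```python
def parse_profile_summary(text: str) -> dict:
    """Collect the `Key: Value` fields from one profile summary block.

    Hypothetical helper for illustration; not part of qai_hub_models.
    """
    fields = {}
    for line in text.splitlines():
        key, sep, value = line.partition(":")
        if sep:  # skip the title and dashed-rule lines, which have no colon
            fields[key.strip()] = value.strip()
    return fields


# Summary text copied verbatim from the block removed in this commit.
summary = """\
Profile Job summary of MobileNet-v2
--------------------------------------------------
Device: Samsung Galaxy S24 (14)
Estimated Inference Time: 0.39 ms
Estimated Peak Memory Range: 0.01-53.52 MB
Compute Units: NPU (70) | Total (70)
"""

info = parse_profile_summary(summary)
print(info["Estimated Inference Time"])  # prints "0.39 ms"
```

Note that `partition(":")` splits on the first colon only, so values containing further punctuation (such as `NPU (70) | Total (70)`) come through intact.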