qaihm-bot committed
Commit ef81c64
1 Parent(s): 4ac9898

Upload README.md with huggingface_hub

Files changed (1)
  1. README.md +16 -10
README.md CHANGED
@@ -24,16 +24,19 @@ More details on model performance across various devices, can be found
 
 - **Model Type:** Super resolution
 - **Model Stats:**
- - Model checkpoint: quicksrnet_small_4x_checkpoint_float32
- - Input resolution: 128x128
- - Number of parameters: 76.0M
- - Model size: 290 MB
+ - Model checkpoint: quicksrnet_small_3x_checkpoint
+ - Input resolution: 640x360
+ - Number of parameters: 27.2K
+ - Model size: 110 KB
+
+
 
 
 | Device | Chipset | Target Runtime | Inference Time (ms) | Peak Memory Range (MB) | Precision | Primary Compute Unit | Target Model
 | ---|---|---|---|---|---|---|---|
- | Samsung Galaxy S23 Ultra (Android 13) | Snapdragon® 8 Gen 2 | TFLite | 1.375 ms | 0 - 2 MB | FP16 | NPU | [QuickSRNetSmall.tflite](https://huggingface.co/qualcomm/QuickSRNetSmall/blob/main/QuickSRNetSmall.tflite)
- | Samsung Galaxy S23 Ultra (Android 13) | Snapdragon® 8 Gen 2 | QNN Model Library | 0.998 ms | 0 - 55 MB | FP16 | NPU | [QuickSRNetSmall.so](https://huggingface.co/qualcomm/QuickSRNetSmall/blob/main/QuickSRNetSmall.so)
+ | Samsung Galaxy S23 Ultra (Android 13) | Snapdragon® 8 Gen 2 | TFLite | 1.334 ms | 0 - 2 MB | FP16 | NPU | [QuickSRNetSmall.tflite](https://huggingface.co/qualcomm/QuickSRNetSmall/blob/main/QuickSRNetSmall.tflite)
+ | Samsung Galaxy S23 Ultra (Android 13) | Snapdragon® 8 Gen 2 | QNN Model Library | 1.004 ms | 0 - 10 MB | FP16 | NPU | [QuickSRNetSmall.so](https://huggingface.co/qualcomm/QuickSRNetSmall/blob/main/QuickSRNetSmall.so)
+
 
 
 ## Installation
@@ -94,15 +97,17 @@ python -m qai_hub_models.models.quicksrnetsmall.export
 Profile Job summary of QuickSRNetSmall
 --------------------------------------------------
 Device: Snapdragon X Elite CRD (11)
- Estimated Inference Time: 1.15 ms
- Estimated Peak Memory Range: 0.20-0.20 MB
+ Estimated Inference Time: 1.11 ms
+ Estimated Peak Memory Range: 0.21-0.21 MB
 Compute Units: NPU (11) | Total (11)
 
 
 ```
+
+
 ## How does this work?
 
- This [export script](https://github.com/quic/ai-hub-models/blob/main/qai_hub_models/models/QuickSRNetSmall/export.py)
+ This [export script](https://aihub.qualcomm.com/models/quicksrnetsmall/qai_hub_models/models/QuickSRNetSmall/export.py)
 leverages [Qualcomm® AI Hub](https://aihub.qualcomm.com/) to optimize, validate, and deploy this model
 on-device. Lets go through each step below in detail:
 
@@ -179,6 +184,7 @@ spot check the output with expected output.
 AI Hub. [Sign up for access](https://myaccount.qualcomm.com/signup).
 
 
+
 ## Run demo on a cloud-hosted device
 
 You can also run the demo on-device.
@@ -215,7 +221,7 @@ Explore all available models on [Qualcomm® AI Hub](https://aihub.qualcomm.com/)
 ## License
 - The license for the original implementation of QuickSRNetSmall can be found
 [here](https://github.com/quic/aimet-model-zoo/blob/develop/LICENSE.pdf).
- - The license for the compiled assets for on-device deployment can be found [here]({deploy_license_url})
+ - The license for the compiled assets for on-device deployment can be found [here](https://qaihub-public-assets.s3.us-west-2.amazonaws.com/qai-hub-models/Qualcomm+AI+Hub+Proprietary+License.pdf)
 
 ## References
 * [QuickSRNet: Plain Single-Image Super-Resolution Architecture for Faster Inference on Mobile Platforms](https://arxiv.org/abs/2303.04336)
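
As a side note (not part of the committed diff): the updated stats are internally consistent. The new checkpoint is a 3x model and the new input resolution is 640x360, so one inference should produce a 1920x1080 (Full HD) frame. A minimal arithmetic sketch:

```python
# Sanity check on the updated README stats: a 3x super-resolution model
# applied to a 640x360 input yields a 1920x1080 (Full HD) output.

def upscaled_resolution(width: int, height: int, scale: int) -> tuple[int, int]:
    """Output resolution of an integer-scale super-resolution model."""
    return (width * scale, height * scale)

print(upscaled_resolution(640, 360, 3))  # (1920, 1080)
```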