qaihm-bot committed on
Commit
f8d9d4a
1 Parent(s): 8981113

Upload README.md with huggingface_hub

Files changed (1)
  1. README.md +11 -5
README.md CHANGED
@@ -32,10 +32,13 @@ More details on model performance across various devices, can be found
 - Model size: 123 MB
 
 
+
+
 | Device | Chipset | Target Runtime | Inference Time (ms) | Peak Memory Range (MB) | Precision | Primary Compute Unit | Target Model
 | ---|---|---|---|---|---|---|---|
-| Samsung Galaxy S23 Ultra (Android 13) | Snapdragon® 8 Gen 2 | TFLite | 10.505 ms | 1 - 3 MB | FP16 | NPU | [FFNet-122NS-LowRes.tflite](https://huggingface.co/qualcomm/FFNet-122NS-LowRes/blob/main/FFNet-122NS-LowRes.tflite)
-| Samsung Galaxy S23 Ultra (Android 13) | Snapdragon® 8 Gen 2 | QNN Model Library | 10.881 ms | 6 - 29 MB | FP16 | NPU | [FFNet-122NS-LowRes.so](https://huggingface.co/qualcomm/FFNet-122NS-LowRes/blob/main/FFNet-122NS-LowRes.so)
+| Samsung Galaxy S23 Ultra (Android 13) | Snapdragon® 8 Gen 2 | TFLite | 9.538 ms | 0 - 2 MB | FP16 | NPU | [FFNet-122NS-LowRes.tflite](https://huggingface.co/qualcomm/FFNet-122NS-LowRes/blob/main/FFNet-122NS-LowRes.tflite)
+| Samsung Galaxy S23 Ultra (Android 13) | Snapdragon® 8 Gen 2 | QNN Model Library | 10.684 ms | 7 - 22 MB | FP16 | NPU | [FFNet-122NS-LowRes.so](https://huggingface.co/qualcomm/FFNet-122NS-LowRes/blob/main/FFNet-122NS-LowRes.so)
+
 
 
 ## Installation
@@ -97,15 +100,17 @@ python -m qai_hub_models.models.ffnet_122ns_lowres.export
 Profile Job summary of FFNet-122NS-LowRes
 --------------------------------------------------
 Device: Snapdragon X Elite CRD (11)
-Estimated Inference Time: 17.48 ms
+Estimated Inference Time: 17.38 ms
 Estimated Peak Memory Range: 6.01-6.01 MB
 Compute Units: NPU (348) | Total (348)
 
 
 ```
+
+
 ## How does this work?
 
-This [export script](https://github.com/quic/ai-hub-models/blob/main/qai_hub_models/models/FFNet-122NS-LowRes/export.py)
+This [export script](https://aihub.qualcomm.com/models/ffnet_122ns_lowres/qai_hub_models/models/FFNet-122NS-LowRes/export.py)
 leverages [Qualcomm® AI Hub](https://aihub.qualcomm.com/) to optimize, validate, and deploy this model
 on-device. Let's go through each step below in detail:
 
@@ -183,6 +188,7 @@ AI Hub. [Sign up for access](https://myaccount.qualcomm.com/signup).
 
 
 
+
 ## Deploying compiled model to Android
 
 
@@ -204,7 +210,7 @@ Explore all available models on [Qualcomm® AI Hub](https://aihub.qualcomm.com/)
 ## License
 - The license for the original implementation of FFNet-122NS-LowRes can be found
 [here](https://github.com/Qualcomm-AI-research/FFNet/blob/master/LICENSE).
-- The license for the compiled assets for on-device deployment can be found [here]({deploy_license_url})
+- The license for the compiled assets for on-device deployment can be found [here](https://qaihub-public-assets.s3.us-west-2.amazonaws.com/qai-hub-models/Qualcomm+AI+Hub+Proprietary+License.pdf)
 
 ## References
 * [Simple and Efficient Architectures for Semantic Segmentation](https://arxiv.org/abs/2206.08236)
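The substance of this commit is the refreshed Samsung Galaxy S23 Ultra benchmarks. As a quick sanity check, the relative change can be computed directly from the old and new inference times in the table (a minimal sketch; the numbers are exactly those reported in the two README revisions above):

```python
# Inference times (ms) per target runtime, taken from the README table:
# (value before this commit, value after this commit)
benchmarks = {
    "TFLite": (10.505, 9.538),
    "QNN Model Library": (10.881, 10.684),
}

for runtime, (old_ms, new_ms) in benchmarks.items():
    # Relative improvement of the new measurement over the old one.
    speedup_pct = (old_ms - new_ms) / old_ms * 100
    print(f"{runtime}: {old_ms} ms -> {new_ms} ms ({speedup_pct:.1f}% faster)")
```

So the TFLite path improved by roughly 9%, while the QNN Model Library path moved by under 2%, i.e. within the range of ordinary run-to-run variation on device farms.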