qaihm-bot committed
Commit 05e6784
1 parent: bf5071e

Upload README.md with huggingface_hub

Files changed (1):
  1. README.md +11 -5
README.md CHANGED
@@ -33,10 +33,13 @@ More details on model performance across various devices, can be found
 - Model size: 53.1 MB
 
 
+
+
 | Device | Chipset | Target Runtime | Inference Time (ms) | Peak Memory Range (MB) | Precision | Primary Compute Unit | Target Model
 | ---|---|---|---|---|---|---|---|
-| Samsung Galaxy S23 Ultra (Android 13) | Snapdragon® 8 Gen 2 | TFLite | 23.135 ms | 2 - 4 MB | FP16 | NPU | [FFNet-40S.tflite](https://huggingface.co/qualcomm/FFNet-40S/blob/main/FFNet-40S.tflite)
-| Samsung Galaxy S23 Ultra (Android 13) | Snapdragon® 8 Gen 2 | QNN Model Library | 17.2 ms | 22 - 40 MB | FP16 | NPU | [FFNet-40S.so](https://huggingface.co/qualcomm/FFNet-40S/blob/main/FFNet-40S.so)
+| Samsung Galaxy S23 Ultra (Android 13) | Snapdragon® 8 Gen 2 | TFLite | 23.193 ms | 2 - 4 MB | FP16 | NPU | [FFNet-40S.tflite](https://huggingface.co/qualcomm/FFNet-40S/blob/main/FFNet-40S.tflite)
+| Samsung Galaxy S23 Ultra (Android 13) | Snapdragon® 8 Gen 2 | QNN Model Library | 17.411 ms | 24 - 43 MB | FP16 | NPU | [FFNet-40S.so](https://huggingface.co/qualcomm/FFNet-40S/blob/main/FFNet-40S.so)
+
 
 
 ## Installation
@@ -98,15 +101,17 @@ python -m qai_hub_models.models.ffnet_40s.export
 Profile Job summary of FFNet-40S
 --------------------------------------------------
 Device: Snapdragon X Elite CRD (11)
-Estimated Inference Time: 23.24 ms
+Estimated Inference Time: 23.36 ms
 Estimated Peak Memory Range: 24.05-24.05 MB
 Compute Units: NPU (140) | Total (140)
 
 
 ```
+
+
 ## How does this work?
 
-This [export script](https://github.com/quic/ai-hub-models/blob/main/qai_hub_models/models/FFNet-40S/export.py)
+This [export script](https://aihub.qualcomm.com/models/ffnet_40s/qai_hub_models/models/FFNet-40S/export.py)
 leverages [Qualcomm® AI Hub](https://aihub.qualcomm.com/) to optimize, validate, and deploy this model
 on-device. Lets go through each step below in detail:
 
@@ -184,6 +189,7 @@ AI Hub. [Sign up for access](https://myaccount.qualcomm.com/signup).
 
 
 
+
 ## Deploying compiled model to Android
 
 
@@ -205,7 +211,7 @@ Explore all available models on [Qualcomm® AI Hub](https://aihub.qualcomm.com/)
 ## License
 - The license for the original implementation of FFNet-40S can be found
 [here](https://github.com/Qualcomm-AI-research/FFNet/blob/master/LICENSE).
-- The license for the compiled assets for on-device deployment can be found [here]({deploy_license_url})
+- The license for the compiled assets for on-device deployment can be found [here](https://qaihub-public-assets.s3.us-west-2.amazonaws.com/qai-hub-models/Qualcomm+AI+Hub+Proprietary+License.pdf)
 
 ## References
 * [Simple and Efficient Architectures for Semantic Segmentation](https://arxiv.org/abs/2206.08236)
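
The updated profile summary in the second hunk is produced by the export entry point named in that hunk's header (`python -m qai_hub_models.models.ffnet_40s.export`). For context, a minimal run might look like the sketch below; it assumes the `qai-hub-models` package is installable from PyPI under that name and that a Qualcomm AI Hub API token has already been configured as described in the README's installation section:

```shell
# Install the AI Hub Models collection
# (assumption: the plain package name suffices for FFNet-40S; some models need extras)
pip install qai-hub-models

# Compile, profile on a hosted device, and download the target asset via AI Hub.
# Requires a configured AI Hub API token; see the README's installation steps.
python -m qai_hub_models.models.ffnet_40s.export
```

The profile numbers in the diff (e.g. "Estimated Inference Time: 23.36 ms" on Snapdragon X Elite CRD) are what this command prints on completion; exact values vary from run to run, which is why the commit updates them.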