qaihm-bot committed
Commit 40a33d5
1 Parent(s): 838bc4d

Upload README.md with huggingface_hub

Files changed (1):
README.md +9 -9
README.md CHANGED
@@ -35,8 +35,8 @@ More details on model performance across various devices, can be found
 
  | Device | Chipset | Target Runtime | Inference Time (ms) | Peak Memory Range (MB) | Precision | Primary Compute Unit | Target Model
  | ---|---|---|---|---|---|---|---|
- | Samsung Galaxy S23 Ultra (Android 13) | Snapdragon® 8 Gen 2 | TFLite | 22.739 ms | 2 - 5 MB | FP16 | NPU | [FFNet-40S.tflite](https://huggingface.co/qualcomm/FFNet-40S/blob/main/FFNet-40S.tflite)
- | Samsung Galaxy S23 Ultra (Android 13) | Snapdragon® 8 Gen 2 | QNN Model Library | 17.313 ms | 24 - 49 MB | FP16 | NPU | [FFNet-40S.so](https://huggingface.co/qualcomm/FFNet-40S/blob/main/FFNet-40S.so)
+ | Samsung Galaxy S23 Ultra (Android 13) | Snapdragon® 8 Gen 2 | TFLite | 22.513 ms | 2 - 5 MB | FP16 | NPU | [FFNet-40S.tflite](https://huggingface.co/qualcomm/FFNet-40S/blob/main/FFNet-40S.tflite)
+ | Samsung Galaxy S23 Ultra (Android 13) | Snapdragon® 8 Gen 2 | QNN Model Library | 17.466 ms | 24 - 46 MB | FP16 | NPU | [FFNet-40S.so](https://huggingface.co/qualcomm/FFNet-40S/blob/main/FFNet-40S.so)
 
 
  ## Installation
@@ -97,16 +97,16 @@ python -m qai_hub_models.models.ffnet_40s.export
  ```
  Profile Job summary of FFNet-40S
  --------------------------------------------------
- Device: Samsung Galaxy S23 Ultra (13)
- Estimated Inference Time: 22.74 ms
- Estimated Peak Memory Range: 2.45-4.77 MB
+ Device: Samsung Galaxy S24 (14)
+ Estimated Inference Time: 16.61 ms
+ Estimated Peak Memory Range: 0.06-95.83 MB
  Compute Units: NPU (92) | Total (92)
 
  Profile Job summary of FFNet-40S
  --------------------------------------------------
- Device: Samsung Galaxy S23 Ultra (13)
- Estimated Inference Time: 17.31 ms
- Estimated Peak Memory Range: 24.04-48.93 MB
+ Device: Samsung Galaxy S24 (14)
+ Estimated Inference Time: 12.68 ms
+ Estimated Peak Memory Range: 24.02-78.73 MB
  Compute Units: NPU (141) | Total (141)
 
 
@@ -212,7 +212,7 @@ Explore all available models on [Qualcomm® AI Hub](https://aihub.qualcomm.com/)
  ## License
  - The license for the original implementation of FFNet-40S can be found
  [here](https://github.com/Qualcomm-AI-research/FFNet/blob/master/LICENSE).
- - The license for the compiled assets for on-device deployment can be found [here](https://qaihub-public-assets.s3.us-west-2.amazonaws.com/qai-hub-models/Qualcomm+AI+Hub+Proprietary+License.pdf).
+ - The license for the compiled assets for on-device deployment can be found [here]({deploy_license_url})
 
  ## References
  * [Simple and Efficient Architectures for Semantic Segmentation](https://arxiv.org/abs/2206.08236)
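
The second hunk header above references the model's export entry point. A minimal sketch of reproducing profile summaries like those in the diff, assuming the qai-hub-models PyPI package is installed and a Qualcomm AI Hub API token has already been configured, would be:

```bash
# Assumption: a Qualcomm AI Hub account/API token is already set up for this environment.
pip install qai-hub-models

# Export entry point referenced in the hunk header; it compiles FFNet-40S for the
# target runtime and submits profile jobs, printing summaries like the ones in the diff.
python -m qai_hub_models.models.ffnet_40s.export
```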