qaihm-bot committed on
Commit
cc8a2d9
1 Parent(s): 59a3531

Upload README.md with huggingface_hub

Files changed (1): README.md (+9 −9)
README.md CHANGED
@@ -34,8 +34,8 @@ More details on model performance across various devices, can be found

 | Device | Chipset | Target Runtime | Inference Time (ms) | Peak Memory Range (MB) | Precision | Primary Compute Unit | Target Model
 | ---|---|---|---|---|---|---|---|
-| Samsung Galaxy S23 Ultra (Android 13) | Snapdragon® 8 Gen 2 | TFLite | 10.46 ms | 1 - 3 MB | FP16 | NPU | [FFNet-122NS-LowRes.tflite](https://huggingface.co/qualcomm/FFNet-122NS-LowRes/blob/main/FFNet-122NS-LowRes.tflite)
-| Samsung Galaxy S23 Ultra (Android 13) | Snapdragon® 8 Gen 2 | QNN Model Library | 10.778 ms | 6 - 38 MB | FP16 | NPU | [FFNet-122NS-LowRes.so](https://huggingface.co/qualcomm/FFNet-122NS-LowRes/blob/main/FFNet-122NS-LowRes.so)
+| Samsung Galaxy S23 Ultra (Android 13) | Snapdragon® 8 Gen 2 | TFLite | 10.407 ms | 0 - 2 MB | FP16 | NPU | [FFNet-122NS-LowRes.tflite](https://huggingface.co/qualcomm/FFNet-122NS-LowRes/blob/main/FFNet-122NS-LowRes.tflite)
+| Samsung Galaxy S23 Ultra (Android 13) | Snapdragon® 8 Gen 2 | QNN Model Library | 10.785 ms | 6 - 37 MB | FP16 | NPU | [FFNet-122NS-LowRes.so](https://huggingface.co/qualcomm/FFNet-122NS-LowRes/blob/main/FFNet-122NS-LowRes.so)


 ## Installation
@@ -96,16 +96,16 @@ python -m qai_hub_models.models.ffnet_122ns_lowres.export
 ```
 Profile Job summary of FFNet-122NS-LowRes
 --------------------------------------------------
-Device: Samsung Galaxy S23 Ultra (13)
-Estimated Inference Time: 10.46 ms
-Estimated Peak Memory Range: 0.61-2.78 MB
+Device: Samsung Galaxy S24 (14)
+Estimated Inference Time: 7.37 ms
+Estimated Peak Memory Range: 0.61-55.46 MB
 Compute Units: NPU (216) | Total (216)

 Profile Job summary of FFNet-122NS-LowRes
 --------------------------------------------------
-Device: Samsung Galaxy S23 Ultra (13)
-Estimated Inference Time: 10.78 ms
-Estimated Peak Memory Range: 6.04-37.62 MB
+Device: Samsung Galaxy S24 (14)
+Estimated Inference Time: 7.63 ms
+Estimated Peak Memory Range: 6.02-82.00 MB
 Compute Units: NPU (349) | Total (349)


@@ -211,7 +211,7 @@ Explore all available models on [Qualcomm® AI Hub](https://aihub.qualcomm.com/)
 ## License
 - The license for the original implementation of FFNet-122NS-LowRes can be found
 [here](https://github.com/Qualcomm-AI-research/FFNet/blob/master/LICENSE).
-- The license for the compiled assets for on-device deployment can be found [here](https://qaihub-public-assets.s3.us-west-2.amazonaws.com/qai-hub-models/Qualcomm+AI+Hub+Proprietary+License.pdf).
+- The license for the compiled assets for on-device deployment can be found [here]({deploy_license_url})

 ## References
 * [Simple and Efficient Architectures for Semantic Segmentation](https://arxiv.org/abs/2206.08236)
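As a quick sanity check on the updated profile summaries: this commit changes the reference device from the Galaxy S23 Ultra (Android 13) to the Galaxy S24 (Android 14), so the lower times reflect newer hardware, not a faster model. A minimal sketch comparing the old and new estimated inference times taken directly from the diff:

```python
# Estimated inference times from the diff above (milliseconds).
old_ms = {"TFLite": 10.46, "QNN Model Library": 10.78}  # Galaxy S23 Ultra (old)
new_ms = {"TFLite": 7.37, "QNN Model Library": 7.63}    # Galaxy S24 (new)

for runtime, old in old_ms.items():
    speedup = old / new_ms[runtime]
    print(f"{runtime}: {speedup:.2f}x faster on the S24")
# Both runtimes come out to roughly a 1.4x difference between the two devices.
```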