qaihm-bot committed on
Commit 248c407
1 Parent(s): 4f8bb47

Upload README.md with huggingface_hub

Files changed (1)
  1. README.md +4 -4
README.md CHANGED
@@ -39,7 +39,7 @@ More details on model performance across various devices, can be found

  | Device | Chipset | Target Runtime | Inference Time (ms) | Peak Memory Range (MB) | Precision | Primary Compute Unit | Target Model
  | ---|---|---|---|---|---|---|---|
- | Samsung Galaxy S23 Ultra (Android 13) | Snapdragon® 8 Gen 2 | QNN Model Library | 1.711 ms | 0 - 121 MB | INT8 | NPU | [ConvNext-Tiny-w8a8-Quantized.so](https://huggingface.co/qualcomm/ConvNext-Tiny-w8a8-Quantized/blob/main/ConvNext-Tiny-w8a8-Quantized.so)
+ | Samsung Galaxy S23 Ultra (Android 13) | Snapdragon® 8 Gen 2 | QNN Model Library | 1.717 ms | 0 - 131 MB | INT8 | NPU | [ConvNext-Tiny-w8a8-Quantized.so](https://huggingface.co/qualcomm/ConvNext-Tiny-w8a8-Quantized/blob/main/ConvNext-Tiny-w8a8-Quantized.so)



@@ -101,9 +101,9 @@ python -m qai_hub_models.models.convnext_tiny_w8a8_quantized.export
  ```
  Profile Job summary of ConvNext-Tiny-w8a8-Quantized
  --------------------------------------------------
- Device: Snapdragon X Elite CRD (11)
- Estimated Inference Time: 1.81 ms
- Estimated Peak Memory Range: 0.42-0.42 MB
+ Device: SA8255 (Proxy) (13)
+ Estimated Inference Time: 1.72 ms
+ Estimated Peak Memory Range: 0.02-130.98 MB
  Compute Units: NPU (215) | Total (215)

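
The profile summary updated in the second hunk is produced by the model's export entry point named in that hunk's header. A minimal sketch of re-running it, assuming the `qai-hub-models` PyPI package name and already-configured AI Hub credentials (neither is shown in this diff):

```bash
# Sketch only: assumes the qai-hub-models PyPI package and pre-configured
# AI Hub credentials; a model-specific extra may also be required.
pip install qai-hub-models

# Entry point referenced in the second hunk header above; it submits the
# compile/profile jobs whose "Profile Job summary" lines are edited here.
python -m qai_hub_models.models.convnext_tiny_w8a8_quantized.export
```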