qaihm-bot committed on
Commit d0d9922
1 Parent(s): b3b2abf

Upload README.md with huggingface_hub

Files changed (1): README.md (+17 -17)
README.md CHANGED
@@ -36,9 +36,9 @@ More details on model performance across various devices, can be found
 
 | Device | Chipset | Target Runtime | Inference Time (ms) | Peak Memory Range (MB) | Precision | Primary Compute Unit | Target Model
 | ---|---|---|---|---|---|---|---|
-| Samsung Galaxy S23 Ultra (Android 13) | Snapdragon® 8 Gen 2 | QNN Binary | 11.362 ms | 0 - 42 MB | UINT16 | NPU | [TextEncoder_Quantized.bin](https://huggingface.co/qualcomm/Stable-Diffusion/blob/main/TextEncoder_Quantized.bin)
-| Samsung Galaxy S23 Ultra (Android 13) | Snapdragon® 8 Gen 2 | QNN Binary | 393.878 ms | 0 - 11 MB | UINT16 | NPU | [VAEDecoder_Quantized.bin](https://huggingface.co/qualcomm/Stable-Diffusion/blob/main/VAEDecoder_Quantized.bin)
-| Samsung Galaxy S23 Ultra (Android 13) | Snapdragon® 8 Gen 2 | QNN Binary | 256.698 ms | 0 - 12 MB | UINT16 | NPU | [UNet_Quantized.bin](https://huggingface.co/qualcomm/Stable-Diffusion/blob/main/UNet_Quantized.bin)
+| Samsung Galaxy S23 Ultra (Android 13) | Snapdragon® 8 Gen 2 | QNN Binary | 11.371 ms | 0 - 31 MB | UINT16 | NPU | [TextEncoder_Quantized.bin](https://huggingface.co/qualcomm/Stable-Diffusion/blob/main/TextEncoder_Quantized.bin)
+| Samsung Galaxy S23 Ultra (Android 13) | Snapdragon® 8 Gen 2 | QNN Binary | 255.354 ms | 0 - 45 MB | UINT16 | NPU | [UNet_Quantized.bin](https://huggingface.co/qualcomm/Stable-Diffusion/blob/main/UNet_Quantized.bin)
+| Samsung Galaxy S23 Ultra (Android 13) | Snapdragon® 8 Gen 2 | QNN Binary | 392.074 ms | 0 - 25 MB | UINT16 | NPU | [VAEDecoder_Quantized.bin](https://huggingface.co/qualcomm/Stable-Diffusion/blob/main/VAEDecoder_Quantized.bin)
 
 
 ## Installation
@@ -99,25 +99,25 @@ python -m qai_hub_models.models.stable_diffusion_quantized.export
 ```
 Profile Job summary of TextEncoder_Quantized
 --------------------------------------------------
-Device: Samsung Galaxy S23 (13)
-Estimated Inference Time: 11.36 ms
-Estimated Peak Memory Range: 0.05-42.00 MB
+Device: Samsung Galaxy S24 (14)
+Estimated Inference Time: 8.08 ms
+Estimated Peak Memory Range: 0.01-137.09 MB
 Compute Units: NPU (570) | Total (570)
 
-Profile Job summary of VAEDecoder_Quantized
---------------------------------------------------
-Device: Samsung Galaxy S23 (13)
-Estimated Inference Time: 393.88 ms
-Estimated Peak Memory Range: 0.21-11.15 MB
-Compute Units: NPU (409) | Total (409)
-
 Profile Job summary of UNet_Quantized
 --------------------------------------------------
-Device: Samsung Galaxy S23 (13)
-Estimated Inference Time: 256.70 ms
-Estimated Peak Memory Range: 0.14-12.25 MB
+Device: Samsung Galaxy S24 (14)
+Estimated Inference Time: 188.59 ms
+Estimated Peak Memory Range: 0.34-1242.36 MB
 Compute Units: NPU (5421) | Total (5421)
 
+Profile Job summary of VAEDecoder_Quantized
+--------------------------------------------------
+Device: Samsung Galaxy S24 (14)
+Estimated Inference Time: 295.06 ms
+Estimated Peak Memory Range: 0.18-87.59 MB
+Compute Units: NPU (409) | Total (409)
+
 
 ```
 ## How does this work?
@@ -229,7 +229,7 @@ Explore all available models on [Qualcomm® AI Hub](https://aihub.qualcomm.com/)
 ## License
 - The license for the original implementation of Stable-Diffusion can be found
 [here](https://github.com/CompVis/stable-diffusion/blob/main/LICENSE).
-- The license for the compiled assets for on-device deployment can be found [here](https://qaihub-public-assets.s3.us-west-2.amazonaws.com/qai-hub-models/Qualcomm+AI+Hub+Proprietary+License.pdf).
+- The license for the compiled assets for on-device deployment can be found [here]({deploy_license_url})
 
 ## References
 * [High-Resolution Image Synthesis with Latent Diffusion Models](https://arxiv.org/abs/2112.10752)
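As a rough sanity check on the updated table, the three per-model latencies can be combined into an end-to-end estimate for one image. This sketch assumes one text-encoder pass, one UNet pass per denoising step, and one VAE-decoder pass; the step count and the single-pass assumptions are illustrative, not taken from the README:

```python
# Per-inference latencies (ms) for Snapdragon 8 Gen 2 from the updated
# table in this commit. Assumption: one pass each for the text encoder
# and VAE decoder, one UNet pass per denoising step.
TEXT_ENCODER_MS = 11.371
UNET_MS = 255.354
VAE_DECODER_MS = 392.074


def pipeline_latency_s(steps: int) -> float:
    """Rough end-to-end latency in seconds for `steps` denoising steps."""
    total_ms = TEXT_ENCODER_MS + steps * UNET_MS + VAE_DECODER_MS
    return total_ms / 1000.0


# With a hypothetical 20-step schedule, the UNet dominates the total.
print(round(pipeline_latency_s(20), 2))  # → 5.51
```

Since the UNet accounts for nearly all of the per-step cost, reducing the number of denoising steps is the main lever on total generation time.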