qaihm-bot committed
Commit dbabf92 · verified · 1 parent: 73bb268

Upload README.md with huggingface_hub

Files changed (1):
  1. README.md (+40 -19)
README.md CHANGED
@@ -16,7 +16,7 @@ tags:
 
 DeepLabV3 is designed for semantic segmentation at multiple scales, trained on the various datasets. It uses MobileNet as a backbone.
 
-This model is an implementation of DeepLabV3-Plus-MobileNet found [here](https://github.com/jfzhang95/pytorch-deeplab-xception).
+This model is an implementation of DeepLabV3-Plus-MobileNet found [here]({source_repo}).
 This repository provides scripts to run DeepLabV3-Plus-MobileNet on Qualcomm® devices.
 More details on model performance across various devices, can be found
 [here](https://aihub.qualcomm.com/models/deeplabv3_plus_mobilenet).
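As context for the model described in this hunk: DeepLabV3 performs per-pixel classification (21 output classes in this variant), so its raw output is one score per class per pixel, and the predicted segmentation mask is the per-pixel argmax. A minimal pure-Python sketch of that post-processing step, as an illustration only (the repository's actual pipeline operates on tensors, not nested lists):

```python
def scores_to_mask(scores):
    """Collapse per-pixel class scores ([H][W][num_classes] nested lists)
    into an [H][W] mask of winning class indices (per-pixel argmax)."""
    return [
        [max(range(len(pixel)), key=pixel.__getitem__) for pixel in row]
        for row in scores
    ]

# Toy 1x2 "image" with 3 classes instead of the model's 21, for illustration.
toy_scores = [[[0.1, 0.7, 0.2], [0.9, 0.05, 0.05]]]
mask = scores_to_mask(toy_scores)  # [[1, 0]]
```

In practice this is a single argmax over the class axis of the output tensor.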
@@ -32,15 +32,32 @@ More details on model performance across various devices, can be found
 - Model size: 22.2 MB
 - Number of output classes: 21
 
-| Device | Chipset | Target Runtime | Inference Time (ms) | Peak Memory Range (MB) | Precision | Primary Compute Unit | Target Model
-| ---|---|---|---|---|---|---|---|
-| Samsung Galaxy S23 Ultra (Android 13) | Snapdragon® 8 Gen 2 | TFLite | 13.181 ms | 14 - 22 MB | FP16 | NPU | [DeepLabV3-Plus-MobileNet.tflite](https://huggingface.co/qualcomm/DeepLabV3-Plus-MobileNet/blob/main/DeepLabV3-Plus-MobileNet.tflite)
-| Samsung Galaxy S23 Ultra (Android 13) | Snapdragon® 8 Gen 2 | QNN Model Library | 13.15 ms | 3 - 18 MB | FP16 | NPU | [DeepLabV3-Plus-MobileNet.so](https://huggingface.co/qualcomm/DeepLabV3-Plus-MobileNet/blob/main/DeepLabV3-Plus-MobileNet.so)
-
-
+| Model | Device | Chipset | Target Runtime | Inference Time (ms) | Peak Memory Range (MB) | Precision | Primary Compute Unit | Target Model
+|---|---|---|---|---|---|---|---|---|
+| DeepLabV3-Plus-MobileNet | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 | TFLITE | 13.441 ms | 21 - 22 MB | FP16 | NPU | [DeepLabV3-Plus-MobileNet.tflite](https://huggingface.co/qualcomm/DeepLabV3-Plus-MobileNet/blob/main/DeepLabV3-Plus-MobileNet.tflite) |
+| DeepLabV3-Plus-MobileNet | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 | QNN | 13.124 ms | 3 - 20 MB | FP16 | NPU | [DeepLabV3-Plus-MobileNet.so](https://huggingface.co/qualcomm/DeepLabV3-Plus-MobileNet/blob/main/DeepLabV3-Plus-MobileNet.so) |
+| DeepLabV3-Plus-MobileNet | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 | ONNX | 16.946 ms | 46 - 330 MB | FP16 | NPU | [DeepLabV3-Plus-MobileNet.onnx](https://huggingface.co/qualcomm/DeepLabV3-Plus-MobileNet/blob/main/DeepLabV3-Plus-MobileNet.onnx) |
+| DeepLabV3-Plus-MobileNet | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 | TFLITE | 10.784 ms | 21 - 98 MB | FP16 | NPU | [DeepLabV3-Plus-MobileNet.tflite](https://huggingface.co/qualcomm/DeepLabV3-Plus-MobileNet/blob/main/DeepLabV3-Plus-MobileNet.tflite) |
+| DeepLabV3-Plus-MobileNet | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 | QNN | 10.749 ms | 3 - 28 MB | FP16 | NPU | [DeepLabV3-Plus-MobileNet.so](https://huggingface.co/qualcomm/DeepLabV3-Plus-MobileNet/blob/main/DeepLabV3-Plus-MobileNet.so) |
+| DeepLabV3-Plus-MobileNet | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 | ONNX | 15.136 ms | 1 - 82 MB | FP16 | NPU | [DeepLabV3-Plus-MobileNet.onnx](https://huggingface.co/qualcomm/DeepLabV3-Plus-MobileNet/blob/main/DeepLabV3-Plus-MobileNet.onnx) |
+| DeepLabV3-Plus-MobileNet | QCS8550 (Proxy) | QCS8550 Proxy | TFLITE | 13.166 ms | 21 - 65 MB | FP16 | NPU | [DeepLabV3-Plus-MobileNet.tflite](https://huggingface.co/qualcomm/DeepLabV3-Plus-MobileNet/blob/main/DeepLabV3-Plus-MobileNet.tflite) |
+| DeepLabV3-Plus-MobileNet | QCS8550 (Proxy) | QCS8550 Proxy | QNN | 12.047 ms | 3 - 4 MB | FP16 | NPU | Use Export Script |
+| DeepLabV3-Plus-MobileNet | SA8255 (Proxy) | SA8255P Proxy | TFLITE | 13.288 ms | 21 - 33 MB | FP16 | NPU | [DeepLabV3-Plus-MobileNet.tflite](https://huggingface.co/qualcomm/DeepLabV3-Plus-MobileNet/blob/main/DeepLabV3-Plus-MobileNet.tflite) |
+| DeepLabV3-Plus-MobileNet | SA8255 (Proxy) | SA8255P Proxy | QNN | 12.206 ms | 3 - 4 MB | FP16 | NPU | Use Export Script |
+| DeepLabV3-Plus-MobileNet | SA8775 (Proxy) | SA8775P Proxy | TFLITE | 13.223 ms | 14 - 19 MB | FP16 | NPU | [DeepLabV3-Plus-MobileNet.tflite](https://huggingface.co/qualcomm/DeepLabV3-Plus-MobileNet/blob/main/DeepLabV3-Plus-MobileNet.tflite) |
+| DeepLabV3-Plus-MobileNet | SA8775 (Proxy) | SA8775P Proxy | QNN | 12.296 ms | 3 - 4 MB | FP16 | NPU | Use Export Script |
+| DeepLabV3-Plus-MobileNet | SA8650 (Proxy) | SA8650P Proxy | TFLITE | 13.234 ms | 27 - 29 MB | FP16 | NPU | [DeepLabV3-Plus-MobileNet.tflite](https://huggingface.co/qualcomm/DeepLabV3-Plus-MobileNet/blob/main/DeepLabV3-Plus-MobileNet.tflite) |
+| DeepLabV3-Plus-MobileNet | SA8650 (Proxy) | SA8650P Proxy | QNN | 12.164 ms | 3 - 4 MB | FP16 | NPU | Use Export Script |
+| DeepLabV3-Plus-MobileNet | QCS8450 (Proxy) | QCS8450 Proxy | TFLITE | 18.816 ms | 21 - 97 MB | FP16 | NPU | [DeepLabV3-Plus-MobileNet.tflite](https://huggingface.co/qualcomm/DeepLabV3-Plus-MobileNet/blob/main/DeepLabV3-Plus-MobileNet.tflite) |
+| DeepLabV3-Plus-MobileNet | QCS8450 (Proxy) | QCS8450 Proxy | QNN | 18.643 ms | 3 - 30 MB | FP16 | NPU | Use Export Script |
+| DeepLabV3-Plus-MobileNet | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite | TFLITE | 7.831 ms | 19 - 56 MB | FP16 | NPU | [DeepLabV3-Plus-MobileNet.tflite](https://huggingface.co/qualcomm/DeepLabV3-Plus-MobileNet/blob/main/DeepLabV3-Plus-MobileNet.tflite) |
+| DeepLabV3-Plus-MobileNet | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite | QNN | 9.188 ms | 3 - 26 MB | FP16 | NPU | Use Export Script |
+| DeepLabV3-Plus-MobileNet | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite | ONNX | 11.971 ms | 51 - 90 MB | FP16 | NPU | [DeepLabV3-Plus-MobileNet.onnx](https://huggingface.co/qualcomm/DeepLabV3-Plus-MobileNet/blob/main/DeepLabV3-Plus-MobileNet.onnx) |
+| DeepLabV3-Plus-MobileNet | Snapdragon X Elite CRD | Snapdragon® X Elite | QNN | 12.38 ms | 3 - 3 MB | FP16 | NPU | Use Export Script |
+| DeepLabV3-Plus-MobileNet | Snapdragon X Elite CRD | Snapdragon® X Elite | ONNX | 16.661 ms | 66 - 66 MB | FP16 | NPU | [DeepLabV3-Plus-MobileNet.onnx](https://huggingface.co/qualcomm/DeepLabV3-Plus-MobileNet/blob/main/DeepLabV3-Plus-MobileNet.onnx) |
 
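One way to read the performance table above: for a given device, a target runtime can be picked mechanically by lowest measured latency. A small sketch using the Samsung Galaxy S23 inference times transcribed from the table (figures will drift as benchmarks are re-run):

```python
# Inference times (ms) for DeepLabV3-Plus-MobileNet on Samsung Galaxy S23,
# transcribed from the performance table above.
s23_latency_ms = {"TFLITE": 13.441, "QNN": 13.124, "ONNX": 16.946}

# Pick the runtime with the lowest measured latency.
fastest = min(s23_latency_ms, key=s23_latency_ms.get)  # "QNN"
```

Latency is only one axis; the peak-memory and target-model columns matter too when choosing a deployment format.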
  ## Installation
@@ -95,16 +112,16 @@ device. This script does the following:
 ```bash
 python -m qai_hub_models.models.deeplabv3_plus_mobilenet.export
 ```
-
 ```
-Profile Job summary of DeepLabV3-Plus-MobileNet
---------------------------------------------------
-Device: Snapdragon X Elite CRD (11)
-Estimated Inference Time: 12.51 ms
-Estimated Peak Memory Range: 3.02-3.02 MB
-Compute Units: NPU (124) | Total (124)
-
-
+Profiling Results
+------------------------------------------------------------
+DeepLabV3-Plus-MobileNet
+Device : Samsung Galaxy S23 (13)
+Runtime : TFLITE
+Estimated inference time (ms) : 13.4
+Estimated peak memory usage (MB): [21, 22]
+Total # Ops : 98
+Compute Unit(s) : NPU (98 ops)
 ```
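The summary printed by the export script can also be consumed programmatically. A hedged stdlib sketch that scrapes two fields from the sample output above (the line format is assumed from that sample and may change between qai-hub-models releases):

```python
import re

def parse_profile_summary(text):
    """Extract inference time and peak memory range from the printed
    'Profiling Results' block (format assumed from the sample above)."""
    out = {}
    m = re.search(r"Estimated inference time \(ms\)\s*:\s*([\d.]+)", text)
    if m:
        out["inference_ms"] = float(m.group(1))
    m = re.search(r"Estimated peak memory usage \(MB\)\s*:\s*\[(\d+),\s*(\d+)\]", text)
    if m:
        out["peak_mem_mb"] = (int(m.group(1)), int(m.group(2)))
    return out

sample = (
    "Estimated inference time (ms) : 13.4\n"
    "Estimated peak memory usage (MB): [21, 22]\n"
)
summary = parse_profile_summary(sample)
```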
@@ -203,15 +220,19 @@ provides instructions on how to use the `.so` shared library in an Android appl
 Get more details on DeepLabV3-Plus-MobileNet's performance across various devices [here](https://aihub.qualcomm.com/models/deeplabv3_plus_mobilenet).
 Explore all available models on [Qualcomm® AI Hub](https://aihub.qualcomm.com/)
 
+
 ## License
-- The license for the original implementation of DeepLabV3-Plus-MobileNet can be found
-[here](https://github.com/jfzhang95/pytorch-deeplab-xception/blob/master/LICENSE).
-- The license for the compiled assets for on-device deployment can be found [here](https://qaihub-public-assets.s3.us-west-2.amazonaws.com/qai-hub-models/Qualcomm+AI+Hub+Proprietary+License.pdf)
+* The license for the original implementation of DeepLabV3-Plus-MobileNet can be found [here](https://github.com/jfzhang95/pytorch-deeplab-xception/blob/master/LICENSE).
+* The license for the compiled assets for on-device deployment can be found [here](https://qaihub-public-assets.s3.us-west-2.amazonaws.com/qai-hub-models/Qualcomm+AI+Hub+Proprietary+License.pdf)
+
+
 
 ## References
 * [Rethinking Atrous Convolution for Semantic Image Segmentation](https://arxiv.org/abs/1706.05587)
 * [Source Model Implementation](https://github.com/jfzhang95/pytorch-deeplab-xception)
 
+
+
 ## Community
 * Join [our AI Hub Slack community](https://aihub.qualcomm.com/community/slack) to collaborate, post questions and learn more about on-device AI.
 * For questions or feedback please [reach out to us](mailto:ai-hub-support@qti.qualcomm.com).
 