qaihm-bot committed
Commit
10e1d0c
1 Parent(s): 24c95e1

Upload README.md with huggingface_hub

Files changed (1)
  1. README.md +7 -12
README.md CHANGED
@@ -31,10 +31,12 @@ More details on model performance across various devices, can be found
  - Model size: 13.2 MB


+
+
  | Device | Chipset | Target Runtime | Inference Time (ms) | Peak Memory Range (MB) | Precision | Primary Compute Unit | Target Model
  | ---|---|---|---|---|---|---|---|
- | Samsung Galaxy S23 Ultra (Android 13) | Snapdragon® 8 Gen 2 | TFLite | 7.371 ms | 4 - 7 MB | FP16 | NPU | [YOLOv8-Segmentation.tflite](https://huggingface.co/qualcomm/YOLOv8-Segmentation/blob/main/YOLOv8-Segmentation.tflite)
- | Samsung Galaxy S23 Ultra (Android 13) | Snapdragon® 8 Gen 2 | QNN Model Library | 6.414 ms | 4 - 14 MB | FP16 | NPU | [YOLOv8-Segmentation.so](https://huggingface.co/qualcomm/YOLOv8-Segmentation/blob/main/YOLOv8-Segmentation.so)
+ | Samsung Galaxy S23 Ultra (Android 13) | Snapdragon® 8 Gen 2 | TFLite | 7.329 ms | 4 - 7 MB | FP16 | NPU | [YOLOv8-Segmentation.tflite](https://huggingface.co/qualcomm/YOLOv8-Segmentation/blob/main/YOLOv8-Segmentation.tflite)
+


  ## Installation
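
Latency figures like the updated 7.329 ms TFLite number in the table above come from Qualcomm® AI Hub profile jobs run on real devices. A minimal sketch of submitting such a job with the `qai_hub` Python client follows; the local asset path and the device name string are assumptions rather than values taken from this commit.

```python
# Sketch only: profile the downloaded TFLite asset on a cloud-hosted device.
# Assumes qai_hub is installed and configured with an API token, and that
# YOLOv8-Segmentation.tflite has been downloaded from this repository.
import qai_hub as hub

profile_job = hub.submit_profile_job(
    model="YOLOv8-Segmentation.tflite",             # local path to the asset (assumed)
    device=hub.Device("Samsung Galaxy S23 Ultra"),  # device name as listed by AI Hub (assumed)
)
profile_job.wait()                                  # block until the on-device run completes
profile = profile_job.download_profile()            # timing and memory details, as summarized in the table
```
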
@@ -92,19 +94,11 @@ device. This script does the following:
  python -m qai_hub_models.models.yolov8_seg.export
  ```

- ```
- Profile Job summary of YOLOv8-Segmentation
- --------------------------------------------------
- Device: Snapdragon X Elite CRD (11)
- Estimated Inference Time: 7.57 ms
- Estimated Peak Memory Range: 4.70-4.70 MB
- Compute Units: NPU (333) | Total (333)


- ```
  ## How does this work?

- This [export script](https://github.com/quic/ai-hub-models/blob/main/qai_hub_models/models/YOLOv8-Segmentation/export.py)
+ This [export script](https://aihub.qualcomm.com/models/yolov8_seg/qai_hub_models/models/YOLOv8-Segmentation/export.py)
  leverages [Qualcomm® AI Hub](https://aihub.qualcomm.com/) to optimize, validate, and deploy this model
  on-device. Lets go through each step below in detail:

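The export flow referenced in the hunk above (optimize, validate, deploy via Qualcomm® AI Hub) roughly amounts to tracing the pretrained model and submitting it as a compile job. The sketch below is an illustration of that flow, not the contents of `export.py`; the `Model.from_pretrained()` entry point, the input name, and the 640×640 input shape are assumptions.

```python
# Illustrative sketch of the export flow (not the actual export.py):
# load the pretrained model, trace it, and compile it for a target device on AI Hub.
import torch
import qai_hub as hub
from qai_hub_models.models.yolov8_seg import Model   # entry point assumed

model = Model.from_pretrained()                       # pretrained YOLOv8-Segmentation wrapper
sample = torch.rand(1, 3, 640, 640)                   # input resolution is an assumption
traced = torch.jit.trace(model, [sample])

compile_job = hub.submit_compile_job(
    model=traced,
    device=hub.Device("Samsung Galaxy S23 Ultra"),    # device name (assumed)
    input_specs={"image": (1, 3, 640, 640)},          # input name "image" is an assumption
)
target_model = compile_job.get_target_model()         # compiled asset, ready to profile or run
```
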
@@ -181,6 +175,7 @@ spot check the output with expected output.
  AI Hub. [Sign up for access](https://myaccount.qualcomm.com/signup).


+
  ## Run demo on a cloud-hosted device

  You can also run the demo on-device.
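
Running the demo on a cloud-hosted device ultimately means executing the compiled model on real hardware through AI Hub. A minimal sketch using an inference job is below; the input name, layout, and device are assumptions, and real use would feed a preprocessed image and post-process the segmentation outputs.

```python
# Sketch only: run the model on a cloud-hosted device with random input via an
# inference job. The input name and shape are assumptions.
import numpy as np
import qai_hub as hub

uploaded = hub.upload_model("YOLOv8-Segmentation.tflite")    # local path to the downloaded asset (assumed)
inference_job = hub.submit_inference_job(
    model=uploaded,
    device=hub.Device("Samsung Galaxy S23 Ultra"),           # device name (assumed)
    inputs={"image": [np.random.rand(1, 3, 640, 640).astype(np.float32)]},
)
outputs = inference_job.download_output_data()               # dict of output arrays keyed by output name
```
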
@@ -217,7 +212,7 @@ Explore all available models on [Qualcomm® AI Hub](https://aihub.qualcomm.com/)
  ## License
  - The license for the original implementation of YOLOv8-Segmentation can be found
  [here](https://github.com/ultralytics/ultralytics/blob/main/LICENSE).
- - The license for the compiled assets for on-device deployment can be found [here]({deploy_license_url})
+ - The license for the compiled assets for on-device deployment can be found [here](https://github.com/ultralytics/ultralytics/blob/main/LICENSE)

  ## References
  * [Ultralytics YOLOv8 Docs: Instance Segmentation](https://docs.ultralytics.com/tasks/segment/)
 