qaihm-bot committed
Commit: 04ce2f1
Parent: 13b3fba

Upload README.md with huggingface_hub

Files changed (1): README.md (+9 −3)
README.md CHANGED
@@ -31,9 +31,12 @@ More details on model performance across various devices, can be found
 - Model size: 24.4 MB
 
 
+
+
 | Device | Chipset | Target Runtime | Inference Time (ms) | Peak Memory Range (MB) | Precision | Primary Compute Unit | Target Model
 | ---|---|---|---|---|---|---|---|
-| Samsung Galaxy S23 Ultra (Android 13) | Snapdragon® 8 Gen 2 | TFLite | 15.91 ms | 1 - 3 MB | FP16 | NPU | [Yolo-v7.tflite](https://huggingface.co/qualcomm/Yolo-v7/blob/main/Yolo-v7.tflite)
+| Samsung Galaxy S23 Ultra (Android 13) | Snapdragon® 8 Gen 2 | TFLite | 15.912 ms | 0 - 23 MB | FP16 | NPU | [Yolo-v7.tflite](https://huggingface.co/qualcomm/Yolo-v7/blob/main/Yolo-v7.tflite)
+
 
 
 ## Installation
@@ -91,9 +94,11 @@ device. This script does the following:
 python -m qai_hub_models.models.yolov7.export
 ```
 
+
+
 ## How does this work?
 
-This [export script](https://github.com/quic/ai-hub-models/blob/main/qai_hub_models/models/Yolo-v7/export.py)
+This [export script](https://aihub.qualcomm.com/models/yolov7/qai_hub_models/models/Yolo-v7/export.py)
 leverages [Qualcomm® AI Hub](https://aihub.qualcomm.com/) to optimize, validate, and deploy this model
 on-device. Lets go through each step below in detail:
@@ -170,6 +175,7 @@ spot check the output with expected output.
 AI Hub. [Sign up for access](https://myaccount.qualcomm.com/signup).
 
 
+
 ## Run demo on a cloud-hosted device
 
 You can also run the demo on-device.
@@ -206,7 +212,7 @@ Explore all available models on [Qualcomm® AI Hub](https://aihub.qualcomm.com/)
 ## License
 - The license for the original implementation of Yolo-v7 can be found
 [here](https://github.com/WongKinYiu/yolov7/blob/main/LICENSE.md).
-- The license for the compiled assets for on-device deployment can be found [here]({deploy_license_url})
+- The license for the compiled assets for on-device deployment can be found [here](https://github.com/WongKinYiu/yolov7/blob/main/LICENSE.md)
 
 ## References
 * [YOLOv7: Trainable bag-of-freebies sets new state-of-the-art for real-time object detectors](https://arxiv.org/abs/2207.02696)
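For context, the export entry point referenced in the diff can be exercised roughly as sketched below. Only the bare `python -m qai_hub_models.models.yolov7.export` command appears in this diff; the `pip install` line and the `--device` flag are assumptions about the package's installation and CLI, so check the README's Installation section and the script's `--help` output for the exact invocation.

```shell
# Install the Qualcomm AI Hub Models package.
# NOTE: the exact package/extras name is an assumption here; the
# README's Installation section gives the command it actually expects.
pip install "qai_hub_models[yolov7]"

# Run the YOLOv7 export script shown in the diff. The --device flag is
# an assumed option for targeting a specific cloud-hosted device; run
# with --help to list the options your installed version supports.
python -m qai_hub_models.models.yolov7.export --device "Samsung Galaxy S23 Ultra"
```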