Push model using huggingface_hub.
- README.md +62 -0
- config.json +23 -0
- model.safetensors +3 -0
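
This appears to be the default commit message written when a model is uploaded through the `huggingface_hub` integration. For context, a minimal sketch of how such a push might be performed, using the `push_to_hub` helper shown in the README below; the repository id is a placeholder, not necessarily the actual target of this commit:

```python
from ultralytics import YOLOv10

# Load a checkpoint (or a model you have just fine-tuned), then push it.
# 'your-hf-username/yolov10-finetuned' is a hypothetical repo id.
model = YOLOv10.from_pretrained('jameslahm/yolov10n')
model.push_to_hub('your-hf-username/yolov10-finetuned')
```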
README.md
ADDED
@@ -0,0 +1,62 @@
---
license: agpl-3.0
library_name: ultralytics
tags:
- object-detection
- computer-vision
- yolov10
datasets:
- detection-datasets/coco
repo_url: https://github.com/THU-MIG/yolov10
inference: false
---

### Model Description
[YOLOv10: Real-Time End-to-End Object Detection](https://arxiv.org/abs/2405.14458v1)

- arXiv: https://arxiv.org/abs/2405.14458v1
- github: https://github.com/THU-MIG/yolov10

### Installation
```
pip install git+https://github.com/THU-MIG/yolov10.git
```

### Training and validation
```python
from ultralytics import YOLOv10

model = YOLOv10.from_pretrained('jameslahm/yolov10n')
# Training
model.train(...)
# After training, one can push the model to the Hub
model.push_to_hub("your-hf-username/yolov10-finetuned")

# Validation
model.val(...)
```

### Inference

Here's an end-to-end example showcasing inference on an image of cats:

```python
from ultralytics import YOLOv10

model = YOLOv10.from_pretrained('jameslahm/yolov10n')
source = 'http://images.cocodataset.org/val2017/000000039769.jpg'
model.predict(source=source, save=True)
```
which shows:

![image/png](https://cdn-uploads.huggingface.co/production/uploads/628ece6054698ce61d1e7be3/tBwAsKcQA_96HCYQp7BRr.png)

### BibTeX Entry and Citation Info
```
@article{wang2024yolov10,
  title={YOLOv10: Real-Time End-to-End Object Detection},
  author={Wang, Ao and Chen, Hui and Liu, Lihao and Chen, Kai and Lin, Zijia and Han, Jungong and Ding, Guiguang},
  journal={arXiv preprint arXiv:2405.14458},
  year={2024}
}
```
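The usage examples in this card load the base `jameslahm/yolov10n` checkpoint, whereas the weights added in this commit are a fine-tune with the six classes listed in `config.json` below. A minimal sketch of loading the pushed weights instead, assuming a placeholder repository id (the commit view does not name the target repo):

```python
from ultralytics import YOLOv10

# 'your-hf-username/yolov10-baby-monitoring' is a hypothetical repo id standing
# in for the repository this commit was pushed to.
model = YOLOv10.from_pretrained('your-hf-username/yolov10-baby-monitoring')

# Same prediction call as in the card's inference example.
source = 'http://images.cocodataset.org/val2017/000000039769.jpg'
model.predict(source=source, save=True)
```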
config.json
ADDED
@@ -0,0 +1,23 @@
{
  "model": "yolov10m.yaml",
  "names": {
    "0": "baby",
    "1": "close eye",
    "2": "crib",
    "3": "mouth",
    "4": "nose",
    "5": "opened eyes"
  },
  "nc": 6,
  "roboflow": {
    "license": "CC BY 4.0",
    "project": "baby_monitoring",
    "url": "https://universe.roboflow.com/tt-vwpdg/baby_monitoring/dataset/4",
    "version": 4,
    "workspace": "tt-vwpdg"
  },
  "task": "detect",
  "test": "../test/images",
  "train": "../train/images",
  "val": "../valid/images"
}
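The `names` map in this config ties each class index to a label, and `nc` records the class count. A small self-contained sketch, assuming `config.json` has been downloaded locally, of resolving a predicted class index to its human-readable label:

```python
import json

# Path is an assumption: wherever this repo's config.json was downloaded to.
with open('config.json') as f:
    cfg = json.load(f)

names = cfg['names']            # {'0': 'baby', '1': 'close eye', ..., '5': 'opened eyes'}
assert len(names) == cfg['nc']  # the class count (6) should match the map

# Map a hypothetical predicted class index to its label.
predicted_class = 2
print(names[str(predicted_class)])  # -> 'crib'
```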
model.safetensors
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:981461da078802919bbe231d3cbea85d6fe2fcbeafb1df92b28bddf381122fa3
size 66317632
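
`model.safetensors` itself is stored with Git LFS, so the repository holds only the pointer above (spec version, SHA-256 digest, byte size). A hedged sketch of fetching the actual weights with `huggingface_hub` and checking them against the pointer; the repository id is again a placeholder:

```python
import hashlib
import os

from huggingface_hub import hf_hub_download

# Hypothetical repo id for the repository this commit belongs to.
path = hf_hub_download(
    repo_id='your-hf-username/yolov10-baby-monitoring',
    filename='model.safetensors',
)

# Verify byte size and SHA-256 digest against the LFS pointer.
assert os.path.getsize(path) == 66317632
with open(path, 'rb') as f:
    digest = hashlib.sha256(f.read()).hexdigest()
assert digest == '981461da078802919bbe231d3cbea85d6fe2fcbeafb1df92b28bddf381122fa3'
```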