---
license: apache-2.0
pipeline_tag: object-detection
---
# YOLOv9: Learning What You Want to Learn Using Programmable Gradient Information

This is the model repository for YOLOv9, containing the following checkpoints:

- GELAN-C (a newer, lighter architecture)
- GELAN-E
- YOLOv9-C
- YOLOv9-E

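The snippets below refer to these checkpoints by filename. If in doubt, you can list the files actually hosted in this repository with `huggingface_hub` (a minimal sketch):

```python
from huggingface_hub import list_repo_files

# List every file in this model repository to see the exact checkpoint filenames
print(list_repo_files("merve/yolov9"))
```
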
### How to Use

Clone the YOLOv9 repository.

```
git clone https://github.com/WongKinYiu/yolov9.git
cd yolov9
```
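
The code below also assumes the YOLOv9 requirements and `huggingface_hub` are installed. A minimal setup, assuming the repository's `requirements.txt`, could look like this:

```
pip install -r requirements.txt
pip install huggingface_hub
```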

Download the weights with `hf_hub_download` and load them using YOLOv9's helper utilities.

```python
from huggingface_hub import hf_hub_download

# Download the YOLOv9-C checkpoint from this repository into the current directory
hf_hub_download("merve/yolov9", filename="yolov9-c.pt", local_dir="./")
```
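
The same call should work for the other checkpoints listed above. The filenames below are assumptions based on that list; confirm them with the file listing sketched earlier.

```python
from huggingface_hub import hf_hub_download

# Assumed filenames for the remaining checkpoints -- verify with list_repo_files("merve/yolov9")
for ckpt in ["yolov9-e.pt", "gelan-c.pt", "gelan-e.pt"]:
    hf_hub_download("merve/yolov9", filename=ckpt, local_dir="./")
```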

Load the model and run inference.

```python
# run this from inside the cloned yolov9 repository so its modules can be imported
import numpy as np
import PIL.Image
import torch

from models.common import DetectMultiBackend
from utils.augmentations import letterbox
from utils.general import non_max_suppression, scale_boxes
from utils.torch_utils import select_device, smart_inference_mode


@smart_inference_mode()
def predict(image_path, weights='yolov9-c.pt', imgsz=640, conf_thres=0.1, iou_thres=0.45):
    # Initialize ('0' selects the first GPU; pass 'cpu' if no GPU is available)
    device = select_device('0')
    model = DetectMultiBackend(weights=weights, device=device, fp16=False, data='data/coco.yaml')
    stride, names, pt = model.stride, model.names, model.pt

    # Load and preprocess the image (PIL already gives RGB, so only HWC -> CHW is needed)
    image = np.array(PIL.Image.open(image_path).convert("RGB"))
    img = letterbox(image, imgsz, stride=stride, auto=True)[0]
    img = img.transpose(2, 0, 1)
    img = np.ascontiguousarray(img)
    img = torch.from_numpy(img).to(device).float()
    img /= 255.0  # 0-255 -> 0.0-1.0
    if img.ndimension() == 3:
        img = img.unsqueeze(0)  # add batch dimension

    # Inference
    pred = model(img, augment=False, visualize=False)

    # Apply NMS
    pred = non_max_suppression(pred[0][0], conf_thres, iou_thres, classes=None, max_det=1000)

    # Rescale boxes from the letterboxed size back to the original image size
    for det in pred:
        if len(det):
            det[:, :4] = scale_boxes(img.shape[2:], det[:, :4], image.shape).round()
    return pred, names
```
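
For example, with an image of your choice (the path below is just a placeholder; substitute any local image), the detections returned by `predict` can be printed like this:

```python
# Run detection and print one line per detected object
detections, names = predict("data/images/horses.jpg", weights="yolov9-c.pt")

for det in detections:  # one (N, 6) tensor of (x1, y1, x2, y2, conf, cls) rows per image
    for *xyxy, conf, cls in det:
        print(f"{names[int(cls)]} {conf:.2f} at {[int(v) for v in xyxy]}")
```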

### Citation

```
@article{wang2024yolov9,
  title={{YOLOv9}: Learning What You Want to Learn Using Programmable Gradient Information},
  author={Wang, Chien-Yao and Liao, Hong-Yuan Mark},
  journal={arXiv preprint arXiv:2402.13616},
  year={2024}
}
```

The Colab notebook can be found [here](https://colab.research.google.com/drive/1U3rbOmAZOwPUekcvpQS4GGVJQYR7VaQX?usp=sharing#scrollTo=k-JxtpQ_2e0F). 🧡