princeml committed on
Commit 8ea0741 · 1 Parent(s): 03dd878

Upload 2 files

Files changed (2)
  1. README.md +102 -12
  2. best.pt +3 -0
README.md CHANGED
@@ -1,12 +1,102 @@
- ---
- title: Object Detection Using Yolov8
- emoji: 🦀
- colorFrom: red
- colorTo: blue
- sdk: gradio
- sdk_version: 3.18.0
- app_file: app.py
- pinned: false
- ---
-
- Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
+ # Men-wome-detection-using-yolov8
+
+ ![pexels-kaique-rocha-109919](https://user-images.githubusercontent.com/85225054/218306896-3ce9d1a1-96b0-42f7-8725-c3cbbab39280.jpg)
+
+ #### This guide explains how to convert OIDv4 data into the YOLO format for use with the YOLOv8 object detector.
+
+ #### Getting Started
+
+ ``` git clone https://github.com/prince0310/Men-wome-detection-using-yolov8-.git ```
+
+
+ <details open>
+ <summary>Dataset</summary>
+ <br>
+ To train a YOLO model on a custom dataset, the data must be arranged in YOLO format: images together with one annotation file per image.<br>
+
+ ##### Clone the toolkit repository and run it to download the dataset and its annotation files
+
+ ``` git clone https://github.com/prince0310/OIDv4_ToolKit.git ```
+
+ ##### Run the ```convert annotation.ipynb``` notebook (a minimal sketch of the conversion follows the layout below) <br>
+
+ It arranges the data in the following layout:
+
+ ```
+ Custom dataset
+ |
+ |─── train
+ | |
+ | └───Images --- 0fdea8a716155a8e.jpg
+ | └───Labels --- 0fdea8a716155a8e.txt
+ |
+ └─── test
+ | └───Images --- 0b6f22bf3b586889.jpg
+ | └───Labels --- 0b6f22bf3b586889.txt
+ |
+ └─── validation
+ | └───Images --- 0fdea8a716155a8e.jpg
+ | └───Labels --- 0fdea8a716155a8e.txt
+ |
+ └─── data.yaml
+ ```
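The `convert annotation.ipynb` notebook itself is not reproduced in this commit. The sketch below shows the core of an OIDv4-to-YOLO label conversion under the assumption that OIDv4_ToolKit writes one label file per image with lines of the form `ClassName xmin ymin xmax ymax` in absolute pixel coordinates; the function, paths, and class list are illustrative, not taken from the notebook.

```python
# Hypothetical sketch of an OIDv4 -> YOLO label conversion, not the original notebook.
# Assumed OIDv4_ToolKit label lines: "ClassName xmin ymin xmax ymax" in pixels.
from pathlib import Path

from PIL import Image

CLASSES = ["Man", "Woman"]  # assumed class list; the order defines the YOLO class ids


def convert_label(img_path: Path, oid_label_path: Path, out_path: Path) -> None:
    w, h = Image.open(img_path).size
    yolo_lines = []
    for line in oid_label_path.read_text().splitlines():
        parts = line.split()
        # Class names may contain spaces, so the last four fields are the box corners.
        name = " ".join(parts[:-4])
        xmin, ymin, xmax, ymax = map(float, parts[-4:])
        cls = CLASSES.index(name)
        # YOLO format: class x_center y_center width height, all normalised to [0, 1].
        xc = (xmin + xmax) / 2 / w
        yc = (ymin + ymax) / 2 / h
        bw = (xmax - xmin) / w
        bh = (ymax - ymin) / h
        yolo_lines.append(f"{cls} {xc:.6f} {yc:.6f} {bw:.6f} {bh:.6f}")
    out_path.write_text("\n".join(yolo_lines))
```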
+
+ </details>
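The layout above ends with a `data.yaml` file that the README never shows. A minimal example is sketched below; the relative paths follow the tree above, and the two class names are assumptions based on the repository's purpose rather than values taken from the original project.

```yaml
# Hypothetical data.yaml matching the layout above; class names are assumed.
path: /path/to/Custom dataset   # dataset root
train: train/Images
val: validation/Images
test: test/Images

nc: 2                    # number of classes
names: ["Man", "Woman"]  # assumed names; ids must match those used in the label files
```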
+
+
+ <details open>
+ <summary>Install</summary>
+
+ Pip install the ultralytics package, including
+ all [requirements.txt](https://github.com/ultralytics/ultralytics/blob/main/requirements.txt) dependencies, in a
+ [**3.10>=Python>=3.7**](https://www.python.org/) environment with
+ [**PyTorch>=1.7**](https://pytorch.org/get-started/locally/).
+
+ ```bash
+ pip install ultralytics
+ ```
+ </details>
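As a quick sanity check after installation (a suggestion, not part of the original README), recent ultralytics releases expose a `checks()` helper that prints the detected Python, torch, and CUDA setup:

```python
import ultralytics

# Print the ultralytics version plus the detected Python, torch, and CUDA environment
ultralytics.checks()
```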
+
+ <details open>
+ <summary>Train</summary>
+ <br>
+
+ Python
+
+ ```python
+ from ultralytics import YOLO
+
+ # Load a pretrained model to fine-tune
+ model = YOLO("yolov8n.pt")
+
+ # Train the model on the custom dataset described by data.yaml
+ results = model.train(data="data.yaml", epochs=200, workers=1, batch=8, imgsz=640)
+ ```
+ CLI
+
+ ```bash
+ yolo detect train data=data.yaml model=yolov8n.pt epochs=200 imgsz=640
+ ```
+ </details>
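Training writes its outputs to a run directory (by default `runs/detect/train/`), including the best checkpoint. A quick way to sanity-check the trained weights, sketched below, is to run validation against the same `data.yaml`; the weights path assumes the default run name.

```python
from ultralytics import YOLO

# Evaluate the newly trained weights on the validation split defined in data.yaml
model = YOLO("runs/detect/train/weights/best.pt")  # assumes the default run name
metrics = model.val(data="data.yaml")

print(metrics.box.map50)  # mAP at IoU 0.5 over all classes
print(metrics.box.map)    # mAP averaged over IoU 0.5:0.95
```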
+
+ <details open>
+ <summary>Detect</summary>
+ <br>
+
+ Python
+
+ ```python
+ from ultralytics import YOLO
+
+ # Load the custom-trained model
+ model = YOLO("best.pt")
+
+ # Predict on an image and save the annotated output
+ results = model("image.jpg", save=True)
+ ```
+ CLI
+
+ ```bash
+ yolo detect predict model=path/to/best.pt source="images.jpg" # predict with the custom model
+ ```
+
+ </details>
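Beyond the saved annotated image, the predictions can also be read programmatically. The sketch below iterates over the returned `Results` objects and prints each detected box with its class name and confidence; `best.pt` and `image.jpg` are placeholders, as above.

```python
from ultralytics import YOLO

model = YOLO("best.pt")
results = model("image.jpg")

# Each Results object corresponds to one input image
for r in results:
    for box in r.boxes:
        cls_id = int(box.cls[0])               # predicted class id
        conf = float(box.conf[0])              # confidence score
        x1, y1, x2, y2 = box.xyxy[0].tolist()  # box corners in pixels
        print(f"{r.names[cls_id]} {conf:.2f} ({x1:.0f}, {y1:.0f}, {x2:.0f}, {y2:.0f})")
```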
best.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:17ffa274de93f7a9dd047ad3c723346ff4f16e21e260cab47d7141367ca259b9
+ size 6211256