glenn-jocher committed
Commit b60b62e
1 Parent(s): 750465e

PyCharm reformat (#4209)


* PyCharm reformat

* YAML reformat

* Markdown reformat

.github/ISSUE_TEMPLATE/bug-report.md CHANGED
@@ -7,21 +7,24 @@ assignees: ''
 
 ---
 
-Before submitting a bug report, please be aware that your issue **must be reproducible** with all of the following, otherwise it is non-actionable, and we can not help you:
-- **Current repo**: run `git fetch && git status -uno` to check and `git pull` to update repo
-- **Common dataset**: coco.yaml or coco128.yaml
-- **Common environment**: Colab, Google Cloud, or Docker image. See https://github.com/ultralytics/yolov5#environments
-
-If this is a custom dataset/training question you **must include** your `train*.jpg`, `val*.jpg` and `results.png` figures, or we can not help you. You can generate these with `utils.plot_results()`.
+Before submitting a bug report, please be aware that your issue **must be reproducible** with all of the following,
+otherwise it is non-actionable, and we can not help you:
+
+- **Current repo**: run `git fetch && git status -uno` to check and `git pull` to update repo
+- **Common dataset**: coco.yaml or coco128.yaml
+- **Common environment**: Colab, Google Cloud, or Docker image. See https://github.com/ultralytics/yolov5#environments
+
+If this is a custom dataset/training question you **must include** your `train*.jpg`, `val*.jpg` and `results.png`
+figures, or we can not help you. You can generate these with `utils.plot_results()`.
 
 ## 🐛 Bug
-A clear and concise description of what the bug is.
 
+A clear and concise description of what the bug is.
 
 ## To Reproduce (REQUIRED)
 
 Input:
+
 ```
 import torch
 
@@ -30,6 +33,7 @@ c = a / 0
 ```
 
 Output:
+
 ```
 Traceback (most recent call last):
   File "/Users/glennjocher/opt/anaconda3/envs/env1/lib/python3.7/site-packages/IPython/core/interactiveshell.py", line 3331, in run_code
@@ -39,17 +43,17 @@ Traceback (most recent call last):
 RuntimeError: ZeroDivisionError
 ```
 
-
 ## Expected behavior
-A clear and concise description of what you expected to happen.
 
+A clear and concise description of what you expected to happen.
 
 ## Environment
-If applicable, add screenshots to help explain your problem.
 
-- OS: [e.g. Ubuntu]
-- GPU [e.g. 2080 Ti]
+If applicable, add screenshots to help explain your problem.
+
+- OS: [e.g. Ubuntu]
+- GPU [e.g. 2080 Ti]
 
 ## Additional context
+
 Add any other context about the problem here.
.github/ISSUE_TEMPLATE/feature-request.md CHANGED
@@ -13,7 +13,8 @@ assignees: ''
 
 ## Motivation
 
-<!-- Please outline the motivation for the proposal. Is your feature request related to a problem? e.g., I'm always frustrated when [...]. If this is related to another GitHub issue, please link here too -->
+<!-- Please outline the motivation for the proposal. Is your feature request related to a problem?
+e.g., I'm always frustrated when [...]. If this is related to another GitHub issue, please link here too -->
 
 ## Pitch
 
.github/ISSUE_TEMPLATE/question.md CHANGED
@@ -9,5 +9,4 @@ assignees: ''
 
 ## ❔Question
 
-
 ## Additional context
CONTRIBUTING.md CHANGED
@@ -8,32 +8,44 @@ We love your input! We want to make contributing to YOLOv5 as easy and transpare
 - Proposing a new feature
 - Becoming a maintainer
 
-YOLOv5 works so well due to our combined community effort, and for every small improvement you contribute you will be helping push the frontiers of what's possible in AI 😃!
-
+YOLOv5 works so well due to our combined community effort, and for every small improvement you contribute you will be
+helping push the frontiers of what's possible in AI 😃!
 
 ## Submitting a Pull Request (PR) 🛠️
+
 Submitting a PR is easy! This example shows how to submit a PR for updating `requirements.txt` in 4 steps:
 
 ### 1. Select File to Update
+
 Select `requirements.txt` to update by clicking on it in GitHub.
 <p align="center"><img width="800" alt="PR_step1" src="https://user-images.githubusercontent.com/26833433/122260847-08be2600-ced4-11eb-828b-8287ace4136c.png"></p>
 
 ### 2. Click 'Edit this file'
+
 Button is in top-right corner.
 <p align="center"><img width="800" alt="PR_step2" src="https://user-images.githubusercontent.com/26833433/122260844-06f46280-ced4-11eb-9eec-b8a24be519ca.png"></p>
 
 ### 3. Make Changes
+
 Change `matplotlib` version from `3.2.2` to `3.3`.
 <p align="center"><img width="800" alt="PR_step3" src="https://user-images.githubusercontent.com/26833433/122260853-0a87e980-ced4-11eb-9fd2-3650fb6e0842.png"></p>
 
 ### 4. Preview Changes and Submit PR
-Click on the **Preview changes** tab to verify your updates. At the bottom of the screen select 'Create a **new branch** for this commit', assign your branch a descriptive name such as `fix/matplotlib_version` and click the green **Propose changes** button. All done, your PR is now submitted to YOLOv5 for review and approval 😃!
+
+Click on the **Preview changes** tab to verify your updates. At the bottom of the screen select 'Create a **new branch**
+for this commit', assign your branch a descriptive name such as `fix/matplotlib_version` and click the green **Propose
+changes** button. All done, your PR is now submitted to YOLOv5 for review and approval 😃!
 <p align="center"><img width="800" alt="PR_step4" src="https://user-images.githubusercontent.com/26833433/122260856-0b208000-ced4-11eb-8e8e-77b6151cbcc3.png"></p>
 
 ### PR recommendations
 
 To allow your work to be integrated as seamlessly as possible, we advise you to:
-- ✅ Verify your PR is **up-to-date with origin/master.** If your PR is behind origin/master an automatic [GitHub actions](https://github.com/ultralytics/yolov5/blob/master/.github/workflows/rebase.yml) rebase may be attempted by including the /rebase command in a comment body, or by running the following code, replacing 'feature' with the name of your local branch:
+
+- ✅ Verify your PR is **up-to-date with origin/master.** If your PR is behind origin/master an
+  automatic [GitHub actions](https://github.com/ultralytics/yolov5/blob/master/.github/workflows/rebase.yml) rebase may
+  be attempted by including the /rebase command in a comment body, or by running the following code, replacing 'feature'
+  with the name of your local branch:
+
 ```bash
 git remote add upstream https://github.com/ultralytics/yolov5.git
 git fetch upstream
@@ -41,30 +53,42 @@ git checkout feature # <----- replace 'feature' with local branch name
 git merge upstream/master
 git push -u origin -f
 ```
-- ✅ Verify all Continuous Integration (CI) **checks are passing**.
-- ✅ Reduce changes to the absolute **minimum** required for your bug fix or feature addition. _"It is not daily increase but daily decrease, hack away the unessential. The closer to the source, the less wastage there is."_ -Bruce Lee
+
+- ✅ Verify all Continuous Integration (CI) **checks are passing**.
+- ✅ Reduce changes to the absolute **minimum** required for your bug fix or feature addition. _"It is not daily increase
+  but daily decrease, hack away the unessential. The closer to the source, the less wastage there is."_ -Bruce Lee
 
 ## Submitting a Bug Report 🐛
 
 If you spot a problem with YOLOv5 please submit a Bug Report!
 
-For us to start investigating a possibel problem we need to be able to reproduce it ourselves first. We've created a few short guidelines below to help users provide what we need in order to get started.
+For us to start investigating a possibel problem we need to be able to reproduce it ourselves first. We've created a few
+short guidelines below to help users provide what we need in order to get started.
 
-When asking a question, people will be better able to provide help if you provide **code** that they can easily understand and use to **reproduce** the problem. This is referred to by community members as creating a [minimum reproducible example](https://stackoverflow.com/help/minimal-reproducible-example). Your code that reproduces the problem should be:
+When asking a question, people will be better able to provide help if you provide **code** that they can easily
+understand and use to **reproduce** the problem. This is referred to by community members as creating
+a [minimum reproducible example](https://stackoverflow.com/help/minimal-reproducible-example). Your code that reproduces
+the problem should be:
 
 * ✅ **Minimal** – Use as little code as possible that still produces the same problem
 * ✅ **Complete** – Provide **all** parts someone else needs to reproduce your problem in the question itself
 * ✅ **Reproducible** – Test the code you're about to provide to make sure it reproduces the problem
 
-In addition to the above requirements, for [Ultralytics](https://ultralytics.com/) to provide assistance your code should be:
-
-* ✅ **Current** – Verify that your code is up-to-date with current GitHub [master](https://github.com/ultralytics/yolov5/tree/master), and if necessary `git pull` or `git clone` a new copy to ensure your problem has not already been resolved by previous commits.
-* ✅ **Unmodified** – Your problem must be reproducible without any modifications to the codebase in this repository. [Ultralytics](https://ultralytics.com/) does not provide support for custom code ⚠️.
+In addition to the above requirements, for [Ultralytics](https://ultralytics.com/) to provide assistance your code
+should be:
 
-If you believe your problem meets all of the above criteria, please close this issue and raise a new one using the 🐛 **Bug Report** [template](https://github.com/ultralytics/yolov5/issues/new/choose) and providing a [minimum reproducible example](https://stackoverflow.com/help/minimal-reproducible-example) to help us better understand and diagnose your problem.
+* **Current** Verify that your code is up-to-date with current
+  GitHub [master](https://github.com/ultralytics/yolov5/tree/master), and if necessary `git pull` or `git clone` a new
+  copy to ensure your problem has not already been resolved by previous commits.
+* ✅ **Unmodified** – Your problem must be reproducible without any modifications to the codebase in this
+  repository. [Ultralytics](https://ultralytics.com/) does not provide support for custom code ⚠️.
 
+If you believe your problem meets all of the above criteria, please close this issue and raise a new one using the 🐛 **
+Bug Report** [template](https://github.com/ultralytics/yolov5/issues/new/choose) and providing
+a [minimum reproducible example](https://stackoverflow.com/help/minimal-reproducible-example) to help us better
+understand and diagnose your problem.
 
 ## License
 
-By contributing, you agree that your contributions will be licensed under the [GPL-3.0 license](https://choosealicense.com/licenses/gpl-3.0/)
+By contributing, you agree that your contributions will be licensed under
+the [GPL-3.0 license](https://choosealicense.com/licenses/gpl-3.0/)
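
The Minimal / Complete / Reproducible checklist above is easiest to satisfy with a single self-contained script. As a minimal sketch (not part of this commit), using only the `torch.hub` entry point that the README below documents, such an example might look like:

```python
# Sketch of a minimum reproducible example for a YOLOv5 bug report:
# self-contained, no local edits, no custom weights, a public test image.
import torch

model = torch.hub.load('ultralytics/yolov5', 'yolov5s')  # pretrained small model
img = 'https://ultralytics.com/images/zidane.jpg'  # any publicly reachable image
results = model(img)  # if this raises, paste the full traceback in the report
results.print()
```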
README.md CHANGED
@@ -52,31 +52,33 @@ YOLOv5 🚀 is a family of object detection architectures and models pretrained
 
 </div>
 
-
 ## <div align="center">Documentation</div>
 
 See the [YOLOv5 Docs](https://docs.ultralytics.com) for full documentation on training, testing and deployment.
 
-
 ## <div align="center">Quick Start Examples</div>
 
-
 <details open>
 <summary>Install</summary>
 
-[**Python>=3.6.0**](https://www.python.org/) is required with all [requirements.txt](https://github.com/ultralytics/yolov5/blob/master/requirements.txt) installed including [**PyTorch>=1.7**](https://pytorch.org/get-started/locally/):
+[**Python>=3.6.0**](https://www.python.org/) is required with all
+[requirements.txt](https://github.com/ultralytics/yolov5/blob/master/requirements.txt) installed including
+[**PyTorch>=1.7**](https://pytorch.org/get-started/locally/):
 <!-- $ sudo apt update && apt install -y libgl1-mesa-glx libsm6 libxext6 libxrender-dev -->
+
 ```bash
 $ git clone https://github.com/ultralytics/yolov5
 $ cd yolov5
 $ pip install -r requirements.txt
 ```
+
 </details>
 
 <details open>
 <summary>Inference</summary>
 
-Inference with YOLOv5 and [PyTorch Hub](https://github.com/ultralytics/yolov5/issues/36). Models automatically download from the [latest YOLOv5 release](https://github.com/ultralytics/yolov5/releases).
+Inference with YOLOv5 and [PyTorch Hub](https://github.com/ultralytics/yolov5/issues/36). Models automatically download
+from the [latest YOLOv5 release](https://github.com/ultralytics/yolov5/releases).
 
 ```python
 import torch
@@ -85,7 +87,7 @@ import torch
 model = torch.hub.load('ultralytics/yolov5', 'yolov5s') # or yolov5m, yolov5l, yolov5x, custom
 
 # Images
-img = 'https://ultralytics.com/images/zidane.jpg' # or PosixPath, PIL, OpenCV, numpy, list
+img = 'https://ultralytics.com/images/zidane.jpg' # or file, Path, PIL, OpenCV, numpy, list
 
 # Inference
 results = model(img)
@@ -101,7 +103,9 @@ results.print() # or .show(), .save(), .crop(), .pandas(), etc.
 <details>
 <summary>Inference with detect.py</summary>
 
-`detect.py` runs inference on a variety of sources, downloading models automatically from the [latest YOLOv5 release](https://github.com/ultralytics/yolov5/releases) and saving results to `runs/detect`.
+`detect.py` runs inference on a variety of sources, downloading models automatically from
+the [latest YOLOv5 release](https://github.com/ultralytics/yolov5/releases) and saving results to `runs/detect`.
+
 ```bash
 $ python detect.py --source 0 # webcam
                    file.jpg # image
@@ -117,13 +121,18 @@ $ python detect.py --source 0 # webcam
 <details>
 <summary>Training</summary>
 
-Run commands below to reproduce results on [COCO](https://github.com/ultralytics/yolov5/blob/master/data/scripts/get_coco.sh) dataset (dataset auto-downloads on first use). Training times for YOLOv5s/m/l/x are 2/4/6/8 days on a single V100 (multi-GPU times faster). Use the largest `--batch-size` your GPU allows (batch sizes shown for 16 GB devices).
+Run commands below to reproduce results
+on [COCO](https://github.com/ultralytics/yolov5/blob/master/data/scripts/get_coco.sh) dataset (dataset auto-downloads on
+first use). Training times for YOLOv5s/m/l/x are 2/4/6/8 days on a single V100 (multi-GPU times faster). Use the
+largest `--batch-size` your GPU allows (batch sizes shown for 16 GB devices).
+
 ```bash
 $ python train.py --data coco.yaml --cfg yolov5s.yaml --weights '' --batch-size 64
                                          yolov5m                                40
                                          yolov5l                                24
                                          yolov5x                                16
 ```
+
 <img width="800" src="https://user-images.githubusercontent.com/26833433/90222759-949d8800-ddc1-11ea-9fa1-1c97eed2b963.png">
 
 </details>
@@ -132,7 +141,8 @@ $ python train.py --data coco.yaml --cfg yolov5s.yaml --weights '' --batch-size
 <summary>Tutorials</summary>
 
 * [Train Custom Data](https://github.com/ultralytics/yolov5/wiki/Train-Custom-Data)&nbsp; 🚀 RECOMMENDED
-* [Tips for Best Training Results](https://github.com/ultralytics/yolov5/wiki/Tips-for-Best-Training-Results)&nbsp; ☘️ RECOMMENDED
+* [Tips for Best Training Results](https://github.com/ultralytics/yolov5/wiki/Tips-for-Best-Training-Results)&nbsp; ☘️
+  RECOMMENDED
 * [Weights & Biases Logging](https://github.com/ultralytics/yolov5/issues/1289)&nbsp; 🌟 NEW
 * [Supervisely Ecosystem](https://github.com/ultralytics/yolov5/issues/2518)&nbsp; 🌟 NEW
 * [Multi-GPU Training](https://github.com/ultralytics/yolov5/issues/475)
@@ -147,10 +157,11 @@ $ python train.py --data coco.yaml --cfg yolov5s.yaml --weights '' --batch-size
 
 </details>
 
-
 ## <div align="center">Environments and Integrations</div>
 
-Get started in seconds with our verified environments and integrations, including [Weights & Biases](https://wandb.ai/site?utm_campaign=repo_yolo_readme) for automatic YOLOv5 experiment logging. Click each icon below for details.
+Get started in seconds with our verified environments and integrations,
+including [Weights & Biases](https://wandb.ai/site?utm_campaign=repo_yolo_readme) for automatic YOLOv5 experiment
+logging. Click each icon below for details.
 
 <div align="center">
 <a href="https://colab.research.google.com/github/ultralytics/yolov5/blob/master/tutorial.ipynb">
@@ -173,33 +184,33 @@ Get started in seconds with our verified environments and integrations, includin
 </a>
 </div>
 
-
 ## <div align="center">Compete and Win</div>
 
-We are super excited about our first-ever Ultralytics YOLOv5 🚀 EXPORT Competition with **$10,000** in cash prizes!
+We are super excited about our first-ever Ultralytics YOLOv5 🚀 EXPORT Competition with **$10,000** in cash prizes!
 
 <p align="center">
 <a href="https://github.com/ultralytics/yolov5/discussions/3213">
 <img width="850" src="https://github.com/ultralytics/yolov5/releases/download/v1.0/banner-export-competition.png"></a>
 </p>
 
-
 ## <div align="center">Why YOLOv5</div>
 
 <p align="center"><img width="800" src="https://user-images.githubusercontent.com/26833433/114313216-f0a5e100-9af5-11eb-8445-c682b60da2e3.png"></p>
 <details>
 <summary>YOLOv5-P5 640 Figure (click to expand)</summary>
-
+
 <p align="center"><img width="800" src="https://user-images.githubusercontent.com/26833433/114313219-f1d70e00-9af5-11eb-9973-52b1f98d321a.png"></p>
 </details>
 <details>
 <summary>Figure Notes (click to expand)</summary>
-
-* GPU Speed measures end-to-end time per image averaged over 5000 COCO val2017 images using a V100 GPU with batch size 32, and includes image preprocessing, PyTorch FP16 inference, postprocessing and NMS.
-* EfficientDet data from [google/automl](https://github.com/google/automl) at batch size 8.
-* **Reproduce** by `python val.py --task study --data coco.yaml --iou 0.7 --weights yolov5s6.pt yolov5m6.pt yolov5l6.pt yolov5x6.pt`
-</details>
+
+* GPU Speed measures end-to-end time per image averaged over 5000 COCO val2017 images using a V100 GPU with batch size
+  32, and includes image preprocessing, PyTorch FP16 inference, postprocessing and NMS.
+* EfficientDet data from [google/automl](https://github.com/google/automl) at batch size 8.
+* **Reproduce** by
+  `python val.py --task study --data coco.yaml --iou 0.7 --weights yolov5s6.pt yolov5m6.pt yolov5l6.pt yolov5x6.pt`
+
+</details>
 
 ### Pretrained Checkpoints
 
@@ -221,24 +232,30 @@ We are super excited about our first-ever Ultralytics YOLOv5 🚀 EXPORT Competi
 
 <details>
 <summary>Table Notes (click to expand)</summary>
-
-* AP<sup>test</sup> denotes COCO [test-dev2017](http://cocodataset.org/#upload) server results, all other AP results denote val2017 accuracy.
-* AP values are for single-model single-scale unless otherwise noted. **Reproduce mAP** by `python val.py --data coco.yaml --img 640 --conf 0.001 --iou 0.65`
-* Speed<sub>GPU</sub> averaged over 5000 COCO val2017 images using a GCP [n1-standard-16](https://cloud.google.com/compute/docs/machine-types#n1_standard_machine_types) V100 instance, and includes FP16 inference, postprocessing and NMS. **Reproduce speed** by `python val.py --data coco.yaml --img 640 --conf 0.25 --iou 0.45 --half`
-* All checkpoints are trained to 300 epochs with default settings and hyperparameters (no autoaugmentation).
-* Test Time Augmentation ([TTA](https://github.com/ultralytics/yolov5/issues/303)) includes reflection and scale augmentation. **Reproduce TTA** by `python val.py --data coco.yaml --img 1536 --iou 0.7 --augment`
-</details>
+
+* AP<sup>test</sup> denotes COCO [test-dev2017](http://cocodataset.org/#upload) server results, all other AP results
+  denote val2017 accuracy.
+* AP values are for single-model single-scale unless otherwise noted. **Reproduce mAP**
+  by `python val.py --data coco.yaml --img 640 --conf 0.001 --iou 0.65`
+* Speed<sub>GPU</sub> averaged over 5000 COCO val2017 images using a
+  GCP [n1-standard-16](https://cloud.google.com/compute/docs/machine-types#n1_standard_machine_types) V100 instance, and
+  includes FP16 inference, postprocessing and NMS. **Reproduce speed**
+  by `python val.py --data coco.yaml --img 640 --conf 0.25 --iou 0.45 --half`
+* All checkpoints are trained to 300 epochs with default settings and hyperparameters (no autoaugmentation).
+* Test Time Augmentation ([TTA](https://github.com/ultralytics/yolov5/issues/303)) includes reflection and scale
+  augmentation. **Reproduce TTA** by `python val.py --data coco.yaml --img 1536 --iou 0.7 --augment`
+
+</details>
 
-## <div align="center">Contribute</div>
+## <div align="center">Contribute</div>
 
-We love your input! We want to make contributing to YOLOv5 as easy and transparent as possible. Please see our [Contributing Guide](CONTRIBUTING.md) to get started.
+We love your input! We want to make contributing to YOLOv5 as easy and transparent as possible. Please see
+our [Contributing Guide](CONTRIBUTING.md) to get started.
 
 ## <div align="center">Contact</div>
 
-For issues running YOLOv5 please visit [GitHub Issues](https://github.com/ultralytics/yolov5/issues). For business or professional support requests please visit
-[https://ultralytics.com/contact](https://ultralytics.com/contact).
+For issues running YOLOv5 please visit [GitHub Issues](https://github.com/ultralytics/yolov5/issues). For business or
+professional support requests please visit [https://ultralytics.com/contact](https://ultralytics.com/contact).
 
 <br>
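The one substantive edit in the README snippet above is the `img` comment, with `PosixPath` becoming `file, Path`. As an illustrative sketch of those accepted input types (assuming PIL, OpenCV and numpy are installed; `zidane.jpg` is a hypothetical local copy of the test image):

```python
from pathlib import Path

import cv2
import numpy as np
import torch
from PIL import Image

model = torch.hub.load('ultralytics/yolov5', 'yolov5s')

img_url = 'https://ultralytics.com/images/zidane.jpg'  # URL string
img_path = Path('zidane.jpg')  # file / pathlib.Path (hypothetical local file)
img_pil = Image.open(img_path)  # PIL image
img_np = np.asarray(img_pil)  # numpy array (RGB)
img_cv = cv2.imread(str(img_path))[:, :, ::-1]  # OpenCV loads BGR; reverse to RGB

results = model([img_url, img_pil, img_np])  # a list runs batched inference
results.print()
```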
 
data/Argoverse.yaml CHANGED
@@ -15,7 +15,7 @@ test: Argoverse-1.1/images/test/ # test images (optional) https://eval.ai/web/c
 
 # Classes
 nc: 8 # number of classes
-names: [ 'person', 'bicycle', 'car', 'motorcycle', 'bus', 'truck', 'traffic_light', 'stop_sign' ] # class names
+names: ['person', 'bicycle', 'car', 'motorcycle', 'bus', 'truck', 'traffic_light', 'stop_sign'] # class names
 
 
 # Download script/URL (optional) ---------------------------------------------------------------------------------------
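This dataset YAML edit, like the ones in the files that follow, only drops the padding spaces inside the brackets; both spellings parse to the same flow sequence, so downstream code is unaffected. A quick check, assuming PyYAML is available (file path illustrative):

```python
import yaml

# The two bracket styles are the same YAML flow sequence.
old = yaml.safe_load("names: [ 'person', 'bicycle' ]")
new = yaml.safe_load("names: ['person', 'bicycle']")
assert old == new == {'names': ['person', 'bicycle']}

# Typical sanity check on a dataset config such as data/Argoverse.yaml:
with open('data/Argoverse.yaml') as f:
    data = yaml.safe_load(f)
assert len(data['names']) == data['nc'], 'names/nc mismatch'
```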
data/GlobalWheat2020.yaml CHANGED
@@ -27,7 +27,7 @@ test: # test images (optional) 1276 images
 
 # Classes
 nc: 1 # number of classes
-names: [ 'wheat_head' ] # class names
+names: ['wheat_head'] # class names
 
 
 # Download script/URL (optional) ---------------------------------------------------------------------------------------
data/Objects365.yaml CHANGED
@@ -15,47 +15,47 @@ test: # test images (optional)
 
 # Classes
 nc: 365 # number of classes
-names: [ 'Person', 'Sneakers', 'Chair', 'Other Shoes', 'Hat', 'Car', 'Lamp', 'Glasses', 'Bottle', 'Desk', 'Cup',
-         'Street Lights', 'Cabinet/shelf', 'Handbag/Satchel', 'Bracelet', 'Plate', 'Picture/Frame', 'Helmet', 'Book',
-         'Gloves', 'Storage box', 'Boat', 'Leather Shoes', 'Flower', 'Bench', 'Potted Plant', 'Bowl/Basin', 'Flag',
-         'Pillow', 'Boots', 'Vase', 'Microphone', 'Necklace', 'Ring', 'SUV', 'Wine Glass', 'Belt', 'Monitor/TV',
-         'Backpack', 'Umbrella', 'Traffic Light', 'Speaker', 'Watch', 'Tie', 'Trash bin Can', 'Slippers', 'Bicycle',
-         'Stool', 'Barrel/bucket', 'Van', 'Couch', 'Sandals', 'Basket', 'Drum', 'Pen/Pencil', 'Bus', 'Wild Bird',
-         'High Heels', 'Motorcycle', 'Guitar', 'Carpet', 'Cell Phone', 'Bread', 'Camera', 'Canned', 'Truck',
-         'Traffic cone', 'Cymbal', 'Lifesaver', 'Towel', 'Stuffed Toy', 'Candle', 'Sailboat', 'Laptop', 'Awning',
-         'Bed', 'Faucet', 'Tent', 'Horse', 'Mirror', 'Power outlet', 'Sink', 'Apple', 'Air Conditioner', 'Knife',
-         'Hockey Stick', 'Paddle', 'Pickup Truck', 'Fork', 'Traffic Sign', 'Balloon', 'Tripod', 'Dog', 'Spoon', 'Clock',
-         'Pot', 'Cow', 'Cake', 'Dinning Table', 'Sheep', 'Hanger', 'Blackboard/Whiteboard', 'Napkin', 'Other Fish',
-         'Orange/Tangerine', 'Toiletry', 'Keyboard', 'Tomato', 'Lantern', 'Machinery Vehicle', 'Fan',
-         'Green Vegetables', 'Banana', 'Baseball Glove', 'Airplane', 'Mouse', 'Train', 'Pumpkin', 'Soccer', 'Skiboard',
-         'Luggage', 'Nightstand', 'Tea pot', 'Telephone', 'Trolley', 'Head Phone', 'Sports Car', 'Stop Sign',
-         'Dessert', 'Scooter', 'Stroller', 'Crane', 'Remote', 'Refrigerator', 'Oven', 'Lemon', 'Duck', 'Baseball Bat',
-         'Surveillance Camera', 'Cat', 'Jug', 'Broccoli', 'Piano', 'Pizza', 'Elephant', 'Skateboard', 'Surfboard',
-         'Gun', 'Skating and Skiing shoes', 'Gas stove', 'Donut', 'Bow Tie', 'Carrot', 'Toilet', 'Kite', 'Strawberry',
-         'Other Balls', 'Shovel', 'Pepper', 'Computer Box', 'Toilet Paper', 'Cleaning Products', 'Chopsticks',
-         'Microwave', 'Pigeon', 'Baseball', 'Cutting/chopping Board', 'Coffee Table', 'Side Table', 'Scissors',
-         'Marker', 'Pie', 'Ladder', 'Snowboard', 'Cookies', 'Radiator', 'Fire Hydrant', 'Basketball', 'Zebra', 'Grape',
-         'Giraffe', 'Potato', 'Sausage', 'Tricycle', 'Violin', 'Egg', 'Fire Extinguisher', 'Candy', 'Fire Truck',
-         'Billiards', 'Converter', 'Bathtub', 'Wheelchair', 'Golf Club', 'Briefcase', 'Cucumber', 'Cigar/Cigarette',
-         'Paint Brush', 'Pear', 'Heavy Truck', 'Hamburger', 'Extractor', 'Extension Cord', 'Tong', 'Tennis Racket',
-         'Folder', 'American Football', 'earphone', 'Mask', 'Kettle', 'Tennis', 'Ship', 'Swing', 'Coffee Machine',
-         'Slide', 'Carriage', 'Onion', 'Green beans', 'Projector', 'Frisbee', 'Washing Machine/Drying Machine',
-         'Chicken', 'Printer', 'Watermelon', 'Saxophone', 'Tissue', 'Toothbrush', 'Ice cream', 'Hot-air balloon',
-         'Cello', 'French Fries', 'Scale', 'Trophy', 'Cabbage', 'Hot dog', 'Blender', 'Peach', 'Rice', 'Wallet/Purse',
-         'Volleyball', 'Deer', 'Goose', 'Tape', 'Tablet', 'Cosmetics', 'Trumpet', 'Pineapple', 'Golf Ball',
-         'Ambulance', 'Parking meter', 'Mango', 'Key', 'Hurdle', 'Fishing Rod', 'Medal', 'Flute', 'Brush', 'Penguin',
-         'Megaphone', 'Corn', 'Lettuce', 'Garlic', 'Swan', 'Helicopter', 'Green Onion', 'Sandwich', 'Nuts',
-         'Speed Limit Sign', 'Induction Cooker', 'Broom', 'Trombone', 'Plum', 'Rickshaw', 'Goldfish', 'Kiwi fruit',
-         'Router/modem', 'Poker Card', 'Toaster', 'Shrimp', 'Sushi', 'Cheese', 'Notepaper', 'Cherry', 'Pliers', 'CD',
-         'Pasta', 'Hammer', 'Cue', 'Avocado', 'Hamimelon', 'Flask', 'Mushroom', 'Screwdriver', 'Soap', 'Recorder',
-         'Bear', 'Eggplant', 'Board Eraser', 'Coconut', 'Tape Measure/Ruler', 'Pig', 'Showerhead', 'Globe', 'Chips',
-         'Steak', 'Crosswalk Sign', 'Stapler', 'Camel', 'Formula 1', 'Pomegranate', 'Dishwasher', 'Crab',
-         'Hoverboard', 'Meat ball', 'Rice Cooker', 'Tuba', 'Calculator', 'Papaya', 'Antelope', 'Parrot', 'Seal',
-         'Butterfly', 'Dumbbell', 'Donkey', 'Lion', 'Urinal', 'Dolphin', 'Electric Drill', 'Hair Dryer', 'Egg tart',
-         'Jellyfish', 'Treadmill', 'Lighter', 'Grapefruit', 'Game board', 'Mop', 'Radish', 'Baozi', 'Target', 'French',
-         'Spring Rolls', 'Monkey', 'Rabbit', 'Pencil Case', 'Yak', 'Red Cabbage', 'Binoculars', 'Asparagus', 'Barbell',
-         'Scallop', 'Noddles', 'Comb', 'Dumpling', 'Oyster', 'Table Tennis paddle', 'Cosmetics Brush/Eyeliner Pencil',
-         'Chainsaw', 'Eraser', 'Lobster', 'Durian', 'Okra', 'Lipstick', 'Cosmetics Mirror', 'Curling', 'Table Tennis' ]
+names: ['Person', 'Sneakers', 'Chair', 'Other Shoes', 'Hat', 'Car', 'Lamp', 'Glasses', 'Bottle', 'Desk', 'Cup',
+        'Street Lights', 'Cabinet/shelf', 'Handbag/Satchel', 'Bracelet', 'Plate', 'Picture/Frame', 'Helmet', 'Book',
+        'Gloves', 'Storage box', 'Boat', 'Leather Shoes', 'Flower', 'Bench', 'Potted Plant', 'Bowl/Basin', 'Flag',
+        'Pillow', 'Boots', 'Vase', 'Microphone', 'Necklace', 'Ring', 'SUV', 'Wine Glass', 'Belt', 'Monitor/TV',
+        'Backpack', 'Umbrella', 'Traffic Light', 'Speaker', 'Watch', 'Tie', 'Trash bin Can', 'Slippers', 'Bicycle',
+        'Stool', 'Barrel/bucket', 'Van', 'Couch', 'Sandals', 'Basket', 'Drum', 'Pen/Pencil', 'Bus', 'Wild Bird',
+        'High Heels', 'Motorcycle', 'Guitar', 'Carpet', 'Cell Phone', 'Bread', 'Camera', 'Canned', 'Truck',
+        'Traffic cone', 'Cymbal', 'Lifesaver', 'Towel', 'Stuffed Toy', 'Candle', 'Sailboat', 'Laptop', 'Awning',
+        'Bed', 'Faucet', 'Tent', 'Horse', 'Mirror', 'Power outlet', 'Sink', 'Apple', 'Air Conditioner', 'Knife',
+        'Hockey Stick', 'Paddle', 'Pickup Truck', 'Fork', 'Traffic Sign', 'Balloon', 'Tripod', 'Dog', 'Spoon', 'Clock',
+        'Pot', 'Cow', 'Cake', 'Dinning Table', 'Sheep', 'Hanger', 'Blackboard/Whiteboard', 'Napkin', 'Other Fish',
+        'Orange/Tangerine', 'Toiletry', 'Keyboard', 'Tomato', 'Lantern', 'Machinery Vehicle', 'Fan',
+        'Green Vegetables', 'Banana', 'Baseball Glove', 'Airplane', 'Mouse', 'Train', 'Pumpkin', 'Soccer', 'Skiboard',
+        'Luggage', 'Nightstand', 'Tea pot', 'Telephone', 'Trolley', 'Head Phone', 'Sports Car', 'Stop Sign',
+        'Dessert', 'Scooter', 'Stroller', 'Crane', 'Remote', 'Refrigerator', 'Oven', 'Lemon', 'Duck', 'Baseball Bat',
+        'Surveillance Camera', 'Cat', 'Jug', 'Broccoli', 'Piano', 'Pizza', 'Elephant', 'Skateboard', 'Surfboard',
+        'Gun', 'Skating and Skiing shoes', 'Gas stove', 'Donut', 'Bow Tie', 'Carrot', 'Toilet', 'Kite', 'Strawberry',
+        'Other Balls', 'Shovel', 'Pepper', 'Computer Box', 'Toilet Paper', 'Cleaning Products', 'Chopsticks',
+        'Microwave', 'Pigeon', 'Baseball', 'Cutting/chopping Board', 'Coffee Table', 'Side Table', 'Scissors',
+        'Marker', 'Pie', 'Ladder', 'Snowboard', 'Cookies', 'Radiator', 'Fire Hydrant', 'Basketball', 'Zebra', 'Grape',
+        'Giraffe', 'Potato', 'Sausage', 'Tricycle', 'Violin', 'Egg', 'Fire Extinguisher', 'Candy', 'Fire Truck',
+        'Billiards', 'Converter', 'Bathtub', 'Wheelchair', 'Golf Club', 'Briefcase', 'Cucumber', 'Cigar/Cigarette',
+        'Paint Brush', 'Pear', 'Heavy Truck', 'Hamburger', 'Extractor', 'Extension Cord', 'Tong', 'Tennis Racket',
+        'Folder', 'American Football', 'earphone', 'Mask', 'Kettle', 'Tennis', 'Ship', 'Swing', 'Coffee Machine',
+        'Slide', 'Carriage', 'Onion', 'Green beans', 'Projector', 'Frisbee', 'Washing Machine/Drying Machine',
+        'Chicken', 'Printer', 'Watermelon', 'Saxophone', 'Tissue', 'Toothbrush', 'Ice cream', 'Hot-air balloon',
+        'Cello', 'French Fries', 'Scale', 'Trophy', 'Cabbage', 'Hot dog', 'Blender', 'Peach', 'Rice', 'Wallet/Purse',
+        'Volleyball', 'Deer', 'Goose', 'Tape', 'Tablet', 'Cosmetics', 'Trumpet', 'Pineapple', 'Golf Ball',
+        'Ambulance', 'Parking meter', 'Mango', 'Key', 'Hurdle', 'Fishing Rod', 'Medal', 'Flute', 'Brush', 'Penguin',
+        'Megaphone', 'Corn', 'Lettuce', 'Garlic', 'Swan', 'Helicopter', 'Green Onion', 'Sandwich', 'Nuts',
+        'Speed Limit Sign', 'Induction Cooker', 'Broom', 'Trombone', 'Plum', 'Rickshaw', 'Goldfish', 'Kiwi fruit',
+        'Router/modem', 'Poker Card', 'Toaster', 'Shrimp', 'Sushi', 'Cheese', 'Notepaper', 'Cherry', 'Pliers', 'CD',
+        'Pasta', 'Hammer', 'Cue', 'Avocado', 'Hamimelon', 'Flask', 'Mushroom', 'Screwdriver', 'Soap', 'Recorder',
+        'Bear', 'Eggplant', 'Board Eraser', 'Coconut', 'Tape Measure/Ruler', 'Pig', 'Showerhead', 'Globe', 'Chips',
+        'Steak', 'Crosswalk Sign', 'Stapler', 'Camel', 'Formula 1', 'Pomegranate', 'Dishwasher', 'Crab',
+        'Hoverboard', 'Meat ball', 'Rice Cooker', 'Tuba', 'Calculator', 'Papaya', 'Antelope', 'Parrot', 'Seal',
+        'Butterfly', 'Dumbbell', 'Donkey', 'Lion', 'Urinal', 'Dolphin', 'Electric Drill', 'Hair Dryer', 'Egg tart',
+        'Jellyfish', 'Treadmill', 'Lighter', 'Grapefruit', 'Game board', 'Mop', 'Radish', 'Baozi', 'Target', 'French',
+        'Spring Rolls', 'Monkey', 'Rabbit', 'Pencil Case', 'Yak', 'Red Cabbage', 'Binoculars', 'Asparagus', 'Barbell',
+        'Scallop', 'Noddles', 'Comb', 'Dumpling', 'Oyster', 'Table Tennis paddle', 'Cosmetics Brush/Eyeliner Pencil',
+        'Chainsaw', 'Eraser', 'Lobster', 'Durian', 'Okra', 'Lipstick', 'Cosmetics Mirror', 'Curling', 'Table Tennis']
 
 
 # Download script/URL (optional) ---------------------------------------------------------------------------------------
data/SKU-110K.yaml CHANGED
@@ -15,7 +15,7 @@ test: test.txt # test images (optional) 2936 images
 
 # Classes
 nc: 1 # number of classes
-names: [ 'object' ] # class names
+names: ['object'] # class names
 
 
 # Download script/URL (optional) ---------------------------------------------------------------------------------------
data/VOC.yaml CHANGED
@@ -21,8 +21,8 @@ test: # test images (optional)
 
 # Classes
 nc: 20 # number of classes
-names: [ 'aeroplane', 'bicycle', 'bird', 'boat', 'bottle', 'bus', 'car', 'cat', 'chair', 'cow', 'diningtable', 'dog',
-         'horse', 'motorbike', 'person', 'pottedplant', 'sheep', 'sofa', 'train', 'tvmonitor' ] # class names
+names: ['aeroplane', 'bicycle', 'bird', 'boat', 'bottle', 'bus', 'car', 'cat', 'chair', 'cow', 'diningtable', 'dog',
+        'horse', 'motorbike', 'person', 'pottedplant', 'sheep', 'sofa', 'train', 'tvmonitor'] # class names
 
 
 # Download script/URL (optional) ---------------------------------------------------------------------------------------
data/VisDrone.yaml CHANGED
@@ -15,7 +15,7 @@ test: VisDrone2019-DET-test-dev/images # test images (optional) 1610 images
 
 # Classes
 nc: 10 # number of classes
-names: [ 'pedestrian', 'people', 'bicycle', 'car', 'van', 'truck', 'tricycle', 'awning-tricycle', 'bus', 'motor' ]
+names: ['pedestrian', 'people', 'bicycle', 'car', 'van', 'truck', 'tricycle', 'awning-tricycle', 'bus', 'motor']
 
 
 # Download script/URL (optional) ---------------------------------------------------------------------------------------
data/coco.yaml CHANGED
@@ -15,15 +15,15 @@ test: test-dev2017.txt # 20288 of 40670 images, submit to https://competitions.
 
 # Classes
 nc: 80 # number of classes
-names: [ 'person', 'bicycle', 'car', 'motorcycle', 'airplane', 'bus', 'train', 'truck', 'boat', 'traffic light',
-         'fire hydrant', 'stop sign', 'parking meter', 'bench', 'bird', 'cat', 'dog', 'horse', 'sheep', 'cow',
-         'elephant', 'bear', 'zebra', 'giraffe', 'backpack', 'umbrella', 'handbag', 'tie', 'suitcase', 'frisbee',
-         'skis', 'snowboard', 'sports ball', 'kite', 'baseball bat', 'baseball glove', 'skateboard', 'surfboard',
-         'tennis racket', 'bottle', 'wine glass', 'cup', 'fork', 'knife', 'spoon', 'bowl', 'banana', 'apple',
-         'sandwich', 'orange', 'broccoli', 'carrot', 'hot dog', 'pizza', 'donut', 'cake', 'chair', 'couch',
-         'potted plant', 'bed', 'dining table', 'toilet', 'tv', 'laptop', 'mouse', 'remote', 'keyboard', 'cell phone',
-         'microwave', 'oven', 'toaster', 'sink', 'refrigerator', 'book', 'clock', 'vase', 'scissors', 'teddy bear',
-         'hair drier', 'toothbrush' ] # class names
+names: ['person', 'bicycle', 'car', 'motorcycle', 'airplane', 'bus', 'train', 'truck', 'boat', 'traffic light',
+        'fire hydrant', 'stop sign', 'parking meter', 'bench', 'bird', 'cat', 'dog', 'horse', 'sheep', 'cow',
+        'elephant', 'bear', 'zebra', 'giraffe', 'backpack', 'umbrella', 'handbag', 'tie', 'suitcase', 'frisbee',
+        'skis', 'snowboard', 'sports ball', 'kite', 'baseball bat', 'baseball glove', 'skateboard', 'surfboard',
+        'tennis racket', 'bottle', 'wine glass', 'cup', 'fork', 'knife', 'spoon', 'bowl', 'banana', 'apple',
+        'sandwich', 'orange', 'broccoli', 'carrot', 'hot dog', 'pizza', 'donut', 'cake', 'chair', 'couch',
+        'potted plant', 'bed', 'dining table', 'toilet', 'tv', 'laptop', 'mouse', 'remote', 'keyboard', 'cell phone',
+        'microwave', 'oven', 'toaster', 'sink', 'refrigerator', 'book', 'clock', 'vase', 'scissors', 'teddy bear',
+        'hair drier', 'toothbrush'] # class names
 
 
 # Download script/URL (optional)
data/coco128.yaml CHANGED
@@ -15,15 +15,15 @@ test: # test images (optional)
 
 # Classes
 nc: 80 # number of classes
-names: [ 'person', 'bicycle', 'car', 'motorcycle', 'airplane', 'bus', 'train', 'truck', 'boat', 'traffic light',
-         'fire hydrant', 'stop sign', 'parking meter', 'bench', 'bird', 'cat', 'dog', 'horse', 'sheep', 'cow',
-         'elephant', 'bear', 'zebra', 'giraffe', 'backpack', 'umbrella', 'handbag', 'tie', 'suitcase', 'frisbee',
-         'skis', 'snowboard', 'sports ball', 'kite', 'baseball bat', 'baseball glove', 'skateboard', 'surfboard',
-         'tennis racket', 'bottle', 'wine glass', 'cup', 'fork', 'knife', 'spoon', 'bowl', 'banana', 'apple',
-         'sandwich', 'orange', 'broccoli', 'carrot', 'hot dog', 'pizza', 'donut', 'cake', 'chair', 'couch',
-         'potted plant', 'bed', 'dining table', 'toilet', 'tv', 'laptop', 'mouse', 'remote', 'keyboard', 'cell phone',
-         'microwave', 'oven', 'toaster', 'sink', 'refrigerator', 'book', 'clock', 'vase', 'scissors', 'teddy bear',
-         'hair drier', 'toothbrush' ] # class names
+names: ['person', 'bicycle', 'car', 'motorcycle', 'airplane', 'bus', 'train', 'truck', 'boat', 'traffic light',
+        'fire hydrant', 'stop sign', 'parking meter', 'bench', 'bird', 'cat', 'dog', 'horse', 'sheep', 'cow',
+        'elephant', 'bear', 'zebra', 'giraffe', 'backpack', 'umbrella', 'handbag', 'tie', 'suitcase', 'frisbee',
+        'skis', 'snowboard', 'sports ball', 'kite', 'baseball bat', 'baseball glove', 'skateboard', 'surfboard',
+        'tennis racket', 'bottle', 'wine glass', 'cup', 'fork', 'knife', 'spoon', 'bowl', 'banana', 'apple',
+        'sandwich', 'orange', 'broccoli', 'carrot', 'hot dog', 'pizza', 'donut', 'cake', 'chair', 'couch',
+        'potted plant', 'bed', 'dining table', 'toilet', 'tv', 'laptop', 'mouse', 'remote', 'keyboard', 'cell phone',
+        'microwave', 'oven', 'toaster', 'sink', 'refrigerator', 'book', 'clock', 'vase', 'scissors', 'teddy bear',
+        'hair drier', 'toothbrush'] # class names
 
 
 # Download script/URL (optional)
data/scripts/get_coco.sh CHANGED
@@ -12,7 +12,7 @@ d='../datasets' # unzip directory
 url=https://github.com/ultralytics/yolov5/releases/download/v1.0/
 f='coco2017labels.zip' # or 'coco2017labels-segments.zip', 68 MB
 echo 'Downloading' $url$f ' ...'
-curl -L $url$f -o $f && unzip -q $f -d $d && rm $f & # download, unzip, remove in background
+curl -L $url$f -o $f && unzip -q $f -d $d && rm $f &
 
 # Download/unzip images
 d='../datasets/coco/images' # unzip directory
@@ -22,6 +22,6 @@ f2='val2017.zip' # 1G, 5k images
 f3='test2017.zip' # 7G, 41k images (optional)
 for f in $f1 $f2; do
   echo 'Downloading' $url$f '...'
-  curl -L $url$f -o $f && unzip -q $f -d $d && rm $f & # download, unzip, remove in background
+  curl -L $url$f -o $f && unzip -q $f -d $d && rm $f &
 done
 wait # finish background tasks
data/scripts/get_coco128.sh CHANGED
@@ -12,6 +12,6 @@ d='../datasets' # unzip directory
 url=https://github.com/ultralytics/yolov5/releases/download/v1.0/
 f='coco128.zip' # or 'coco2017labels-segments.zip', 68 MB
 echo 'Downloading' $url$f ' ...'
-curl -L $url$f -o $f && unzip -q $f -d $d && rm $f & # download, unzip, remove in background
+curl -L $url$f -o $f && unzip -q $f -d $d && rm $f &
 
 wait # finish background tasks
data/xView.yaml CHANGED
@@ -15,15 +15,15 @@ val: images/autosplit_val.txt # train images (relative to 'path') 10% of 847 tr
 
 # Classes
 nc: 60 # number of classes
-names: [ 'Fixed-wing Aircraft', 'Small Aircraft', 'Cargo Plane', 'Helicopter', 'Passenger Vehicle', 'Small Car', 'Bus',
-         'Pickup Truck', 'Utility Truck', 'Truck', 'Cargo Truck', 'Truck w/Box', 'Truck Tractor', 'Trailer',
-         'Truck w/Flatbed', 'Truck w/Liquid', 'Crane Truck', 'Railway Vehicle', 'Passenger Car', 'Cargo Car',
-         'Flat Car', 'Tank car', 'Locomotive', 'Maritime Vessel', 'Motorboat', 'Sailboat', 'Tugboat', 'Barge',
-         'Fishing Vessel', 'Ferry', 'Yacht', 'Container Ship', 'Oil Tanker', 'Engineering Vehicle', 'Tower crane',
-         'Container Crane', 'Reach Stacker', 'Straddle Carrier', 'Mobile Crane', 'Dump Truck', 'Haul Truck',
-         'Scraper/Tractor', 'Front loader/Bulldozer', 'Excavator', 'Cement Mixer', 'Ground Grader', 'Hut/Tent', 'Shed',
-         'Building', 'Aircraft Hangar', 'Damaged Building', 'Facility', 'Construction Site', 'Vehicle Lot', 'Helipad',
-         'Storage Tank', 'Shipping container lot', 'Shipping Container', 'Pylon', 'Tower' ] # class names
+names: ['Fixed-wing Aircraft', 'Small Aircraft', 'Cargo Plane', 'Helicopter', 'Passenger Vehicle', 'Small Car', 'Bus',
+        'Pickup Truck', 'Utility Truck', 'Truck', 'Cargo Truck', 'Truck w/Box', 'Truck Tractor', 'Trailer',
+        'Truck w/Flatbed', 'Truck w/Liquid', 'Crane Truck', 'Railway Vehicle', 'Passenger Car', 'Cargo Car',
+        'Flat Car', 'Tank car', 'Locomotive', 'Maritime Vessel', 'Motorboat', 'Sailboat', 'Tugboat', 'Barge',
+        'Fishing Vessel', 'Ferry', 'Yacht', 'Container Ship', 'Oil Tanker', 'Engineering Vehicle', 'Tower crane',
+        'Container Crane', 'Reach Stacker', 'Straddle Carrier', 'Mobile Crane', 'Dump Truck', 'Haul Truck',
+        'Scraper/Tractor', 'Front loader/Bulldozer', 'Excavator', 'Cement Mixer', 'Ground Grader', 'Hut/Tent', 'Shed',
+        'Building', 'Aircraft Hangar', 'Damaged Building', 'Facility', 'Construction Site', 'Vehicle Lot', 'Helipad',
+        'Storage Tank', 'Shipping container lot', 'Shipping Container', 'Pylon', 'Tower'] # class names
 
 
 # Download script/URL (optional) ---------------------------------------------------------------------------------------
models/hub/anchors.yaml CHANGED
@@ -4,55 +4,55 @@
4
  # P5 -------------------------------------------------------------------------------------------------------------------
5
  # P5-640:
6
  anchors_p5_640:
7
- - [ 10,13, 16,30, 33,23 ] # P3/8
8
- - [ 30,61, 62,45, 59,119 ] # P4/16
9
- - [ 116,90, 156,198, 373,326 ] # P5/32
10
 
11
 
12
  # P6 -------------------------------------------------------------------------------------------------------------------
13
  # P6-640: thr=0.25: 0.9964 BPR, 5.54 anchors past thr, n=12, img_size=640, metric_all=0.281/0.716-mean/best, past_thr=0.469-mean: 9,11, 21,19, 17,41, 43,32, 39,70, 86,64, 65,131, 134,130, 120,265, 282,180, 247,354, 512,387
14
  anchors_p6_640:
15
- - [ 9,11, 21,19, 17,41 ] # P3/8
16
- - [ 43,32, 39,70, 86,64 ] # P4/16
17
- - [ 65,131, 134,130, 120,265 ] # P5/32
18
- - [ 282,180, 247,354, 512,387 ] # P6/64
19
 
20
  # P6-1280: thr=0.25: 0.9950 BPR, 5.55 anchors past thr, n=12, img_size=1280, metric_all=0.281/0.714-mean/best, past_thr=0.468-mean: 19,27, 44,40, 38,94, 96,68, 86,152, 180,137, 140,301, 303,264, 238,542, 436,615, 739,380, 925,792
21
  anchors_p6_1280:
22
- - [ 19,27, 44,40, 38,94 ] # P3/8
23
- - [ 96,68, 86,152, 180,137 ] # P4/16
24
- - [ 140,301, 303,264, 238,542 ] # P5/32
25
- - [ 436,615, 739,380, 925,792 ] # P6/64
26
 
27
  # P6-1920: thr=0.25: 0.9950 BPR, 5.55 anchors past thr, n=12, img_size=1920, metric_all=0.281/0.714-mean/best, past_thr=0.468-mean: 28,41, 67,59, 57,141, 144,103, 129,227, 270,205, 209,452, 455,396, 358,812, 653,922, 1109,570, 1387,1187
28
  anchors_p6_1920:
29
- - [ 28,41, 67,59, 57,141 ] # P3/8
30
- - [ 144,103, 129,227, 270,205 ] # P4/16
31
- - [ 209,452, 455,396, 358,812 ] # P5/32
32
- - [ 653,922, 1109,570, 1387,1187 ] # P6/64
33
 
34
 
35
  # P7 -------------------------------------------------------------------------------------------------------------------
36
  # P7-640: thr=0.25: 0.9962 BPR, 6.76 anchors past thr, n=15, img_size=640, metric_all=0.275/0.733-mean/best, past_thr=0.466-mean: 11,11, 13,30, 29,20, 30,46, 61,38, 39,92, 78,80, 146,66, 79,163, 149,150, 321,143, 157,303, 257,402, 359,290, 524,372
37
  anchors_p7_640:
38
- - [ 11,11, 13,30, 29,20 ] # P3/8
39
- - [ 30,46, 61,38, 39,92 ] # P4/16
40
- - [ 78,80, 146,66, 79,163 ] # P5/32
41
- - [ 149,150, 321,143, 157,303 ] # P6/64
42
- - [ 257,402, 359,290, 524,372 ] # P7/128
43
 
44
  # P7-1280: thr=0.25: 0.9968 BPR, 6.71 anchors past thr, n=15, img_size=1280, metric_all=0.273/0.732-mean/best, past_thr=0.463-mean: 19,22, 54,36, 32,77, 70,83, 138,71, 75,173, 165,159, 148,334, 375,151, 334,317, 251,626, 499,474, 750,326, 534,814, 1079,818
45
  anchors_p7_1280:
46
- - [ 19,22, 54,36, 32,77 ] # P3/8
47
- - [ 70,83, 138,71, 75,173 ] # P4/16
48
- - [ 165,159, 148,334, 375,151 ] # P5/32
49
- - [ 334,317, 251,626, 499,474 ] # P6/64
50
- - [ 750,326, 534,814, 1079,818 ] # P7/128
51
 
52
  # P7-1920: thr=0.25: 0.9968 BPR, 6.71 anchors past thr, n=15, img_size=1920, metric_all=0.273/0.732-mean/best, past_thr=0.463-mean: 29,34, 81,55, 47,115, 105,124, 207,107, 113,259, 247,238, 222,500, 563,227, 501,476, 376,939, 749,711, 1126,489, 801,1222, 1618,1227
  anchors_p7_1920:
- - [ 29,34, 81,55, 47,115 ] # P3/8
- - [ 105,124, 207,107, 113,259 ] # P4/16
- - [ 247,238, 222,500, 563,227 ] # P5/32
- - [ 501,476, 376,939, 749,711 ] # P6/64
- - [ 1126,489, 801,1222, 1618,1227 ] # P7/128

  # P5 -------------------------------------------------------------------------------------------------------------------
  # P5-640:
  anchors_p5_640:
+ - [10,13, 16,30, 33,23] # P3/8
+ - [30,61, 62,45, 59,119] # P4/16
+ - [116,90, 156,198, 373,326] # P5/32


  # P6 -------------------------------------------------------------------------------------------------------------------
  # P6-640: thr=0.25: 0.9964 BPR, 5.54 anchors past thr, n=12, img_size=640, metric_all=0.281/0.716-mean/best, past_thr=0.469-mean: 9,11, 21,19, 17,41, 43,32, 39,70, 86,64, 65,131, 134,130, 120,265, 282,180, 247,354, 512,387
  anchors_p6_640:
+ - [9,11, 21,19, 17,41] # P3/8
+ - [43,32, 39,70, 86,64] # P4/16
+ - [65,131, 134,130, 120,265] # P5/32
+ - [282,180, 247,354, 512,387] # P6/64

  # P6-1280: thr=0.25: 0.9950 BPR, 5.55 anchors past thr, n=12, img_size=1280, metric_all=0.281/0.714-mean/best, past_thr=0.468-mean: 19,27, 44,40, 38,94, 96,68, 86,152, 180,137, 140,301, 303,264, 238,542, 436,615, 739,380, 925,792
  anchors_p6_1280:
+ - [19,27, 44,40, 38,94] # P3/8
+ - [96,68, 86,152, 180,137] # P4/16
+ - [140,301, 303,264, 238,542] # P5/32
+ - [436,615, 739,380, 925,792] # P6/64

  # P6-1920: thr=0.25: 0.9950 BPR, 5.55 anchors past thr, n=12, img_size=1920, metric_all=0.281/0.714-mean/best, past_thr=0.468-mean: 28,41, 67,59, 57,141, 144,103, 129,227, 270,205, 209,452, 455,396, 358,812, 653,922, 1109,570, 1387,1187
  anchors_p6_1920:
+ - [28,41, 67,59, 57,141] # P3/8
+ - [144,103, 129,227, 270,205] # P4/16
+ - [209,452, 455,396, 358,812] # P5/32
+ - [653,922, 1109,570, 1387,1187] # P6/64


  # P7 -------------------------------------------------------------------------------------------------------------------
  # P7-640: thr=0.25: 0.9962 BPR, 6.76 anchors past thr, n=15, img_size=640, metric_all=0.275/0.733-mean/best, past_thr=0.466-mean: 11,11, 13,30, 29,20, 30,46, 61,38, 39,92, 78,80, 146,66, 79,163, 149,150, 321,143, 157,303, 257,402, 359,290, 524,372
  anchors_p7_640:
+ - [11,11, 13,30, 29,20] # P3/8
+ - [30,46, 61,38, 39,92] # P4/16
+ - [78,80, 146,66, 79,163] # P5/32
+ - [149,150, 321,143, 157,303] # P6/64
+ - [257,402, 359,290, 524,372] # P7/128

  # P7-1280: thr=0.25: 0.9968 BPR, 6.71 anchors past thr, n=15, img_size=1280, metric_all=0.273/0.732-mean/best, past_thr=0.463-mean: 19,22, 54,36, 32,77, 70,83, 138,71, 75,173, 165,159, 148,334, 375,151, 334,317, 251,626, 499,474, 750,326, 534,814, 1079,818
  anchors_p7_1280:
+ - [19,22, 54,36, 32,77] # P3/8
+ - [70,83, 138,71, 75,173] # P4/16
+ - [165,159, 148,334, 375,151] # P5/32
+ - [334,317, 251,626, 499,474] # P6/64
+ - [750,326, 534,814, 1079,818] # P7/128

  # P7-1920: thr=0.25: 0.9968 BPR, 6.71 anchors past thr, n=15, img_size=1920, metric_all=0.273/0.732-mean/best, past_thr=0.463-mean: 29,34, 81,55, 47,115, 105,124, 207,107, 113,259, 247,238, 222,500, 563,227, 501,476, 376,939, 749,711, 1126,489, 801,1222, 1618,1227
  anchors_p7_1920:
+ - [29,34, 81,55, 47,115] # P3/8
+ - [105,124, 207,107, 113,259] # P4/16
+ - [247,238, 222,500, 563,227] # P5/32
+ - [501,476, 376,939, 749,711] # P6/64
+ - [1126,489, 801,1222, 1618,1227] # P7/128
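Note: the `BPR` / `anchors past thr` figures in the comments above are the autoanchor fit metric: every label width-height is compared to every anchor, a pairing counts as usable when its worst w/h ratio stays within the `thr=0.25` bound (i.e. within 4x), and BPR is the fraction of labels whose best anchor passes. A minimal sketch of that computation, following the logic of `utils/autoanchor.py` (the `wh` tensor below is an illustrative stand-in for real label sizes):

```python
import torch

def anchor_metric(k, wh, thr=0.25):
    r = wh[:, None] / k[None]              # (labels, anchors, 2) size ratios
    x = torch.min(r, 1 / r).min(2)[0]      # worst of the w and h ratios per pair
    best = x.max(1)[0]                     # each label's best-matching anchor
    aat = (x > thr).float().sum(1).mean()  # mean anchors past threshold per label
    bpr = (best > thr).float().mean()      # best possible recall
    return bpr, aat

k = torch.tensor([[10., 13.], [16., 30.], [33., 23.]])     # P3/8 anchors from above
wh = torch.tensor([[12., 15.], [40., 30.], [200., 180.]])  # stand-in label sizes
bpr, aat = anchor_metric(k, wh)
print(f'{bpr:.4f} BPR, {aat:.2f} anchors past thr')
```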
models/hub/yolov3-spp.yaml CHANGED
@@ -3,47 +3,47 @@ nc: 80 # number of classes
  depth_multiple: 1.0 # model depth multiple
  width_multiple: 1.0 # layer channel multiple
  anchors:
- - [ 10,13, 16,30, 33,23 ] # P3/8
- - [ 30,61, 62,45, 59,119 ] # P4/16
- - [ 116,90, 156,198, 373,326 ] # P5/32

  # darknet53 backbone
  backbone:
  # [from, number, module, args]
- [ [ -1, 1, Conv, [ 32, 3, 1 ] ], # 0
- [ -1, 1, Conv, [ 64, 3, 2 ] ], # 1-P1/2
- [ -1, 1, Bottleneck, [ 64 ] ],
- [ -1, 1, Conv, [ 128, 3, 2 ] ], # 3-P2/4
- [ -1, 2, Bottleneck, [ 128 ] ],
- [ -1, 1, Conv, [ 256, 3, 2 ] ], # 5-P3/8
- [ -1, 8, Bottleneck, [ 256 ] ],
- [ -1, 1, Conv, [ 512, 3, 2 ] ], # 7-P4/16
- [ -1, 8, Bottleneck, [ 512 ] ],
- [ -1, 1, Conv, [ 1024, 3, 2 ] ], # 9-P5/32
- [ -1, 4, Bottleneck, [ 1024 ] ], # 10
  ]

  # YOLOv3-SPP head
  head:
- [ [ -1, 1, Bottleneck, [ 1024, False ] ],
- [ -1, 1, SPP, [ 512, [ 5, 9, 13 ] ] ],
- [ -1, 1, Conv, [ 1024, 3, 1 ] ],
- [ -1, 1, Conv, [ 512, 1, 1 ] ],
- [ -1, 1, Conv, [ 1024, 3, 1 ] ], # 15 (P5/32-large)

- [ -2, 1, Conv, [ 256, 1, 1 ] ],
- [ -1, 1, nn.Upsample, [ None, 2, 'nearest' ] ],
- [ [ -1, 8 ], 1, Concat, [ 1 ] ], # cat backbone P4
- [ -1, 1, Bottleneck, [ 512, False ] ],
- [ -1, 1, Bottleneck, [ 512, False ] ],
- [ -1, 1, Conv, [ 256, 1, 1 ] ],
- [ -1, 1, Conv, [ 512, 3, 1 ] ], # 22 (P4/16-medium)

- [ -2, 1, Conv, [ 128, 1, 1 ] ],
- [ -1, 1, nn.Upsample, [ None, 2, 'nearest' ] ],
- [ [ -1, 6 ], 1, Concat, [ 1 ] ], # cat backbone P3
- [ -1, 1, Bottleneck, [ 256, False ] ],
- [ -1, 2, Bottleneck, [ 256, False ] ], # 27 (P3/8-small)

- [ [ 27, 22, 15 ], 1, Detect, [ nc, anchors ] ], # Detect(P3, P4, P5)
  ]

  depth_multiple: 1.0 # model depth multiple
  width_multiple: 1.0 # layer channel multiple
  anchors:
+ - [10,13, 16,30, 33,23] # P3/8
+ - [30,61, 62,45, 59,119] # P4/16
+ - [116,90, 156,198, 373,326] # P5/32

  # darknet53 backbone
  backbone:
  # [from, number, module, args]
+ [[-1, 1, Conv, [32, 3, 1]], # 0
+ [-1, 1, Conv, [64, 3, 2]], # 1-P1/2
+ [-1, 1, Bottleneck, [64]],
+ [-1, 1, Conv, [128, 3, 2]], # 3-P2/4
+ [-1, 2, Bottleneck, [128]],
+ [-1, 1, Conv, [256, 3, 2]], # 5-P3/8
+ [-1, 8, Bottleneck, [256]],
+ [-1, 1, Conv, [512, 3, 2]], # 7-P4/16
+ [-1, 8, Bottleneck, [512]],
+ [-1, 1, Conv, [1024, 3, 2]], # 9-P5/32
+ [-1, 4, Bottleneck, [1024]], # 10
  ]

  # YOLOv3-SPP head
  head:
+ [[-1, 1, Bottleneck, [1024, False]],
+ [-1, 1, SPP, [512, [5, 9, 13]]],
+ [-1, 1, Conv, [1024, 3, 1]],
+ [-1, 1, Conv, [512, 1, 1]],
+ [-1, 1, Conv, [1024, 3, 1]], # 15 (P5/32-large)

+ [-2, 1, Conv, [256, 1, 1]],
+ [-1, 1, nn.Upsample, [None, 2, 'nearest']],
+ [[-1, 8], 1, Concat, [1]], # cat backbone P4
+ [-1, 1, Bottleneck, [512, False]],
+ [-1, 1, Bottleneck, [512, False]],
+ [-1, 1, Conv, [256, 1, 1]],
+ [-1, 1, Conv, [512, 3, 1]], # 22 (P4/16-medium)

+ [-2, 1, Conv, [128, 1, 1]],
+ [-1, 1, nn.Upsample, [None, 2, 'nearest']],
+ [[-1, 6], 1, Concat, [1]], # cat backbone P3
+ [-1, 1, Bottleneck, [256, False]],
+ [-1, 2, Bottleneck, [256, False]], # 27 (P3/8-small)

+ [[27, 22, 15], 1, Detect, [nc, anchors]], # Detect(P3, P4, P5)
  ]
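Each row in the `backbone:` and `head:` lists above is a `[from, number, module, args]` record: `from` picks the input (-1 means the previous layer, a list feeds several layers into e.g. `Concat`), `number` is a repeat count scaled by `depth_multiple`, and `args` go to the module constructor. A simplified, illustrative sketch of that interpretation (the real resolution to `nn.Module` classes happens in `parse_model` in models/yolo.py):

```python
def scaled(n, depth_multiple=1.0):
    # repeat counts above 1 are scaled by depth_multiple and kept >= 1
    return max(round(n * depth_multiple), 1) if n > 1 else n

rows = [[-1, 1, 'Conv', [32, 3, 1]],   # single Conv fed by the previous layer
        [-1, 8, 'Bottleneck', [256]],  # 8 stacked Bottlenecks (before scaling)
        [[-1, 8], 1, 'Concat', [1]]]   # concat previous output with layer 8
for f, n, m, args in rows:
    print(f'from={f!s:<8} repeats={scaled(n)} module={m} args={args}')
```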
models/hub/yolov3-tiny.yaml CHANGED
@@ -3,37 +3,37 @@ nc: 80 # number of classes
  depth_multiple: 1.0 # model depth multiple
  width_multiple: 1.0 # layer channel multiple
  anchors:
- - [ 10,14, 23,27, 37,58 ] # P4/16
- - [ 81,82, 135,169, 344,319 ] # P5/32

  # YOLOv3-tiny backbone
  backbone:
  # [from, number, module, args]
- [ [ -1, 1, Conv, [ 16, 3, 1 ] ], # 0
- [ -1, 1, nn.MaxPool2d, [ 2, 2, 0 ] ], # 1-P1/2
- [ -1, 1, Conv, [ 32, 3, 1 ] ],
- [ -1, 1, nn.MaxPool2d, [ 2, 2, 0 ] ], # 3-P2/4
- [ -1, 1, Conv, [ 64, 3, 1 ] ],
- [ -1, 1, nn.MaxPool2d, [ 2, 2, 0 ] ], # 5-P3/8
- [ -1, 1, Conv, [ 128, 3, 1 ] ],
- [ -1, 1, nn.MaxPool2d, [ 2, 2, 0 ] ], # 7-P4/16
- [ -1, 1, Conv, [ 256, 3, 1 ] ],
- [ -1, 1, nn.MaxPool2d, [ 2, 2, 0 ] ], # 9-P5/32
- [ -1, 1, Conv, [ 512, 3, 1 ] ],
- [ -1, 1, nn.ZeroPad2d, [ [ 0, 1, 0, 1 ] ] ], # 11
- [ -1, 1, nn.MaxPool2d, [ 2, 1, 0 ] ], # 12
  ]

  # YOLOv3-tiny head
  head:
- [ [ -1, 1, Conv, [ 1024, 3, 1 ] ],
- [ -1, 1, Conv, [ 256, 1, 1 ] ],
- [ -1, 1, Conv, [ 512, 3, 1 ] ], # 15 (P5/32-large)

- [ -2, 1, Conv, [ 128, 1, 1 ] ],
- [ -1, 1, nn.Upsample, [ None, 2, 'nearest' ] ],
- [ [ -1, 8 ], 1, Concat, [ 1 ] ], # cat backbone P4
- [ -1, 1, Conv, [ 256, 3, 1 ] ], # 19 (P4/16-medium)

- [ [ 19, 15 ], 1, Detect, [ nc, anchors ] ], # Detect(P4, P5)
  ]

  depth_multiple: 1.0 # model depth multiple
  width_multiple: 1.0 # layer channel multiple
  anchors:
+ - [10,14, 23,27, 37,58] # P4/16
+ - [81,82, 135,169, 344,319] # P5/32

  # YOLOv3-tiny backbone
  backbone:
  # [from, number, module, args]
+ [[-1, 1, Conv, [16, 3, 1]], # 0
+ [-1, 1, nn.MaxPool2d, [2, 2, 0]], # 1-P1/2
+ [-1, 1, Conv, [32, 3, 1]],
+ [-1, 1, nn.MaxPool2d, [2, 2, 0]], # 3-P2/4
+ [-1, 1, Conv, [64, 3, 1]],
+ [-1, 1, nn.MaxPool2d, [2, 2, 0]], # 5-P3/8
+ [-1, 1, Conv, [128, 3, 1]],
+ [-1, 1, nn.MaxPool2d, [2, 2, 0]], # 7-P4/16
+ [-1, 1, Conv, [256, 3, 1]],
+ [-1, 1, nn.MaxPool2d, [2, 2, 0]], # 9-P5/32
+ [-1, 1, Conv, [512, 3, 1]],
+ [-1, 1, nn.ZeroPad2d, [[0, 1, 0, 1]]], # 11
+ [-1, 1, nn.MaxPool2d, [2, 1, 0]], # 12
  ]

  # YOLOv3-tiny head
  head:
+ [[-1, 1, Conv, [1024, 3, 1]],
+ [-1, 1, Conv, [256, 1, 1]],
+ [-1, 1, Conv, [512, 3, 1]], # 15 (P5/32-large)

+ [-2, 1, Conv, [128, 1, 1]],
+ [-1, 1, nn.Upsample, [None, 2, 'nearest']],
+ [[-1, 8], 1, Concat, [1]], # cat backbone P4
+ [-1, 1, Conv, [256, 3, 1]], # 19 (P4/16-medium)

+ [[19, 15], 1, Detect, [nc, anchors]], # Detect(P4, P5)
  ]
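Layers 11-12 of the tiny backbone form a size-preserving pooling stage: asymmetric zero padding `[0, 1, 0, 1]` followed by a 2x2 max-pool with stride 1 leaves the P5/32 grid size unchanged. A quick shape check:

```python
import torch
import torch.nn as nn

x = torch.zeros(1, 512, 20, 20)  # e.g. the P5/32 map of a 640-pixel input
y = nn.MaxPool2d(2, 1, 0)(nn.ZeroPad2d([0, 1, 0, 1])(x))
print(y.shape)  # torch.Size([1, 512, 20, 20]) -- spatial size preserved
```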
models/hub/yolov3.yaml CHANGED
@@ -3,47 +3,47 @@ nc: 80 # number of classes
  depth_multiple: 1.0 # model depth multiple
  width_multiple: 1.0 # layer channel multiple
  anchors:
- - [ 10,13, 16,30, 33,23 ] # P3/8
- - [ 30,61, 62,45, 59,119 ] # P4/16
- - [ 116,90, 156,198, 373,326 ] # P5/32

  # darknet53 backbone
  backbone:
  # [from, number, module, args]
- [ [ -1, 1, Conv, [ 32, 3, 1 ] ], # 0
- [ -1, 1, Conv, [ 64, 3, 2 ] ], # 1-P1/2
- [ -1, 1, Bottleneck, [ 64 ] ],
- [ -1, 1, Conv, [ 128, 3, 2 ] ], # 3-P2/4
- [ -1, 2, Bottleneck, [ 128 ] ],
- [ -1, 1, Conv, [ 256, 3, 2 ] ], # 5-P3/8
- [ -1, 8, Bottleneck, [ 256 ] ],
- [ -1, 1, Conv, [ 512, 3, 2 ] ], # 7-P4/16
- [ -1, 8, Bottleneck, [ 512 ] ],
- [ -1, 1, Conv, [ 1024, 3, 2 ] ], # 9-P5/32
- [ -1, 4, Bottleneck, [ 1024 ] ], # 10
  ]

  # YOLOv3 head
  head:
- [ [ -1, 1, Bottleneck, [ 1024, False ] ],
- [ -1, 1, Conv, [ 512, [ 1, 1 ] ] ],
- [ -1, 1, Conv, [ 1024, 3, 1 ] ],
- [ -1, 1, Conv, [ 512, 1, 1 ] ],
- [ -1, 1, Conv, [ 1024, 3, 1 ] ], # 15 (P5/32-large)

- [ -2, 1, Conv, [ 256, 1, 1 ] ],
- [ -1, 1, nn.Upsample, [ None, 2, 'nearest' ] ],
- [ [ -1, 8 ], 1, Concat, [ 1 ] ], # cat backbone P4
- [ -1, 1, Bottleneck, [ 512, False ] ],
- [ -1, 1, Bottleneck, [ 512, False ] ],
- [ -1, 1, Conv, [ 256, 1, 1 ] ],
- [ -1, 1, Conv, [ 512, 3, 1 ] ], # 22 (P4/16-medium)

- [ -2, 1, Conv, [ 128, 1, 1 ] ],
- [ -1, 1, nn.Upsample, [ None, 2, 'nearest' ] ],
- [ [ -1, 6 ], 1, Concat, [ 1 ] ], # cat backbone P3
- [ -1, 1, Bottleneck, [ 256, False ] ],
- [ -1, 2, Bottleneck, [ 256, False ] ], # 27 (P3/8-small)

- [ [ 27, 22, 15 ], 1, Detect, [ nc, anchors ] ], # Detect(P3, P4, P5)
  ]

  depth_multiple: 1.0 # model depth multiple
  width_multiple: 1.0 # layer channel multiple
  anchors:
+ - [10,13, 16,30, 33,23] # P3/8
+ - [30,61, 62,45, 59,119] # P4/16
+ - [116,90, 156,198, 373,326] # P5/32

  # darknet53 backbone
  backbone:
  # [from, number, module, args]
+ [[-1, 1, Conv, [32, 3, 1]], # 0
+ [-1, 1, Conv, [64, 3, 2]], # 1-P1/2
+ [-1, 1, Bottleneck, [64]],
+ [-1, 1, Conv, [128, 3, 2]], # 3-P2/4
+ [-1, 2, Bottleneck, [128]],
+ [-1, 1, Conv, [256, 3, 2]], # 5-P3/8
+ [-1, 8, Bottleneck, [256]],
+ [-1, 1, Conv, [512, 3, 2]], # 7-P4/16
+ [-1, 8, Bottleneck, [512]],
+ [-1, 1, Conv, [1024, 3, 2]], # 9-P5/32
+ [-1, 4, Bottleneck, [1024]], # 10
  ]

  # YOLOv3 head
  head:
+ [[-1, 1, Bottleneck, [1024, False]],
+ [-1, 1, Conv, [512, [1, 1]]],
+ [-1, 1, Conv, [1024, 3, 1]],
+ [-1, 1, Conv, [512, 1, 1]],
+ [-1, 1, Conv, [1024, 3, 1]], # 15 (P5/32-large)

+ [-2, 1, Conv, [256, 1, 1]],
+ [-1, 1, nn.Upsample, [None, 2, 'nearest']],
+ [[-1, 8], 1, Concat, [1]], # cat backbone P4
+ [-1, 1, Bottleneck, [512, False]],
+ [-1, 1, Bottleneck, [512, False]],
+ [-1, 1, Conv, [256, 1, 1]],
+ [-1, 1, Conv, [512, 3, 1]], # 22 (P4/16-medium)

+ [-2, 1, Conv, [128, 1, 1]],
+ [-1, 1, nn.Upsample, [None, 2, 'nearest']],
+ [[-1, 6], 1, Concat, [1]], # cat backbone P3
+ [-1, 1, Bottleneck, [256, False]],
+ [-1, 2, Bottleneck, [256, False]], # 27 (P3/8-small)

+ [[27, 22, 15], 1, Detect, [nc, anchors]], # Detect(P3, P4, P5)
  ]
models/hub/yolov5-fpn.yaml CHANGED
@@ -3,38 +3,38 @@ nc: 80 # number of classes
  depth_multiple: 1.0 # model depth multiple
  width_multiple: 1.0 # layer channel multiple
  anchors:
- - [ 10,13, 16,30, 33,23 ] # P3/8
- - [ 30,61, 62,45, 59,119 ] # P4/16
- - [ 116,90, 156,198, 373,326 ] # P5/32

  # YOLOv5 backbone
  backbone:
  # [from, number, module, args]
- [ [ -1, 1, Focus, [ 64, 3 ] ], # 0-P1/2
- [ -1, 1, Conv, [ 128, 3, 2 ] ], # 1-P2/4
- [ -1, 3, Bottleneck, [ 128 ] ],
- [ -1, 1, Conv, [ 256, 3, 2 ] ], # 3-P3/8
- [ -1, 9, BottleneckCSP, [ 256 ] ],
- [ -1, 1, Conv, [ 512, 3, 2 ] ], # 5-P4/16
- [ -1, 9, BottleneckCSP, [ 512 ] ],
- [ -1, 1, Conv, [ 1024, 3, 2 ] ], # 7-P5/32
- [ -1, 1, SPP, [ 1024, [ 5, 9, 13 ] ] ],
- [ -1, 6, BottleneckCSP, [ 1024 ] ], # 9
  ]

  # YOLOv5 FPN head
  head:
- [ [ -1, 3, BottleneckCSP, [ 1024, False ] ], # 10 (P5/32-large)

- [ -1, 1, nn.Upsample, [ None, 2, 'nearest' ] ],
- [ [ -1, 6 ], 1, Concat, [ 1 ] ], # cat backbone P4
- [ -1, 1, Conv, [ 512, 1, 1 ] ],
- [ -1, 3, BottleneckCSP, [ 512, False ] ], # 14 (P4/16-medium)

- [ -1, 1, nn.Upsample, [ None, 2, 'nearest' ] ],
- [ [ -1, 4 ], 1, Concat, [ 1 ] ], # cat backbone P3
- [ -1, 1, Conv, [ 256, 1, 1 ] ],
- [ -1, 3, BottleneckCSP, [ 256, False ] ], # 18 (P3/8-small)

- [ [ 18, 14, 10 ], 1, Detect, [ nc, anchors ] ], # Detect(P3, P4, P5)
  ]

  depth_multiple: 1.0 # model depth multiple
  width_multiple: 1.0 # layer channel multiple
  anchors:
+ - [10,13, 16,30, 33,23] # P3/8
+ - [30,61, 62,45, 59,119] # P4/16
+ - [116,90, 156,198, 373,326] # P5/32

  # YOLOv5 backbone
  backbone:
  # [from, number, module, args]
+ [[-1, 1, Focus, [64, 3]], # 0-P1/2
+ [-1, 1, Conv, [128, 3, 2]], # 1-P2/4
+ [-1, 3, Bottleneck, [128]],
+ [-1, 1, Conv, [256, 3, 2]], # 3-P3/8
+ [-1, 9, BottleneckCSP, [256]],
+ [-1, 1, Conv, [512, 3, 2]], # 5-P4/16
+ [-1, 9, BottleneckCSP, [512]],
+ [-1, 1, Conv, [1024, 3, 2]], # 7-P5/32
+ [-1, 1, SPP, [1024, [5, 9, 13]]],
+ [-1, 6, BottleneckCSP, [1024]], # 9
  ]

  # YOLOv5 FPN head
  head:
+ [[-1, 3, BottleneckCSP, [1024, False]], # 10 (P5/32-large)

+ [-1, 1, nn.Upsample, [None, 2, 'nearest']],
+ [[-1, 6], 1, Concat, [1]], # cat backbone P4
+ [-1, 1, Conv, [512, 1, 1]],
+ [-1, 3, BottleneckCSP, [512, False]], # 14 (P4/16-medium)

+ [-1, 1, nn.Upsample, [None, 2, 'nearest']],
+ [[-1, 4], 1, Concat, [1]], # cat backbone P3
+ [-1, 1, Conv, [256, 1, 1]],
+ [-1, 3, BottleneckCSP, [256, False]], # 18 (P3/8-small)

+ [[18, 14, 10], 1, Detect, [nc, anchors]], # Detect(P3, P4, P5)
  ]
models/hub/yolov5-p2.yaml CHANGED
@@ -7,46 +7,46 @@ anchors: 3
  # YOLOv5 backbone
  backbone:
  # [from, number, module, args]
- [ [ -1, 1, Focus, [ 64, 3 ] ], # 0-P1/2
- [ -1, 1, Conv, [ 128, 3, 2 ] ], # 1-P2/4
- [ -1, 3, C3, [ 128 ] ],
- [ -1, 1, Conv, [ 256, 3, 2 ] ], # 3-P3/8
- [ -1, 9, C3, [ 256 ] ],
- [ -1, 1, Conv, [ 512, 3, 2 ] ], # 5-P4/16
- [ -1, 9, C3, [ 512 ] ],
- [ -1, 1, Conv, [ 1024, 3, 2 ] ], # 7-P5/32
- [ -1, 1, SPP, [ 1024, [ 5, 9, 13 ] ] ],
- [ -1, 3, C3, [ 1024, False ] ], # 9
  ]

  # YOLOv5 head
  head:
- [ [ -1, 1, Conv, [ 512, 1, 1 ] ],
- [ -1, 1, nn.Upsample, [ None, 2, 'nearest' ] ],
- [ [ -1, 6 ], 1, Concat, [ 1 ] ], # cat backbone P4
- [ -1, 3, C3, [ 512, False ] ], # 13
-
- [ -1, 1, Conv, [ 256, 1, 1 ] ],
- [ -1, 1, nn.Upsample, [ None, 2, 'nearest' ] ],
- [ [ -1, 4 ], 1, Concat, [ 1 ] ], # cat backbone P3
- [ -1, 3, C3, [ 256, False ] ], # 17 (P3/8-small)
-
- [ -1, 1, Conv, [ 128, 1, 1 ] ],
- [ -1, 1, nn.Upsample, [ None, 2, 'nearest' ] ],
- [ [ -1, 2 ], 1, Concat, [ 1 ] ], # cat backbone P2
- [ -1, 1, C3, [ 128, False ] ], # 21 (P2/4-xsmall)
-
- [ -1, 1, Conv, [ 128, 3, 2 ] ],
- [ [ -1, 18 ], 1, Concat, [ 1 ] ], # cat head P3
- [ -1, 3, C3, [ 256, False ] ], # 24 (P3/8-small)
-
- [ -1, 1, Conv, [ 256, 3, 2 ] ],
- [ [ -1, 14 ], 1, Concat, [ 1 ] ], # cat head P4
- [ -1, 3, C3, [ 512, False ] ], # 27 (P4/16-medium)
-
- [ -1, 1, Conv, [ 512, 3, 2 ] ],
- [ [ -1, 10 ], 1, Concat, [ 1 ] ], # cat head P5
- [ -1, 3, C3, [ 1024, False ] ], # 30 (P5/32-large)
-
- [ [ 24, 27, 30 ], 1, Detect, [ nc, anchors ] ], # Detect(P3, P4, P5)
  ]

  # YOLOv5 backbone
  backbone:
  # [from, number, module, args]
+ [[-1, 1, Focus, [64, 3]], # 0-P1/2
+ [-1, 1, Conv, [128, 3, 2]], # 1-P2/4
+ [-1, 3, C3, [128]],
+ [-1, 1, Conv, [256, 3, 2]], # 3-P3/8
+ [-1, 9, C3, [256]],
+ [-1, 1, Conv, [512, 3, 2]], # 5-P4/16
+ [-1, 9, C3, [512]],
+ [-1, 1, Conv, [1024, 3, 2]], # 7-P5/32
+ [-1, 1, SPP, [1024, [5, 9, 13]]],
+ [-1, 3, C3, [1024, False]], # 9
  ]

  # YOLOv5 head
  head:
+ [[-1, 1, Conv, [512, 1, 1]],
+ [-1, 1, nn.Upsample, [None, 2, 'nearest']],
+ [[-1, 6], 1, Concat, [1]], # cat backbone P4
+ [-1, 3, C3, [512, False]], # 13
+
+ [-1, 1, Conv, [256, 1, 1]],
+ [-1, 1, nn.Upsample, [None, 2, 'nearest']],
+ [[-1, 4], 1, Concat, [1]], # cat backbone P3
+ [-1, 3, C3, [256, False]], # 17 (P3/8-small)
+
+ [-1, 1, Conv, [128, 1, 1]],
+ [-1, 1, nn.Upsample, [None, 2, 'nearest']],
+ [[-1, 2], 1, Concat, [1]], # cat backbone P2
+ [-1, 1, C3, [128, False]], # 21 (P2/4-xsmall)
+
+ [-1, 1, Conv, [128, 3, 2]],
+ [[-1, 18], 1, Concat, [1]], # cat head P3
+ [-1, 3, C3, [256, False]], # 24 (P3/8-small)
+
+ [-1, 1, Conv, [256, 3, 2]],
+ [[-1, 14], 1, Concat, [1]], # cat head P4
+ [-1, 3, C3, [512, False]], # 27 (P4/16-medium)
+
+ [-1, 1, Conv, [512, 3, 2]],
+ [[-1, 10], 1, Concat, [1]], # cat head P5
+ [-1, 3, C3, [1024, False]], # 30 (P5/32-large)
+
+ [[24, 27, 30], 1, Detect, [nc, anchors]], # Detect(P3, P4, P5)
  ]
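Unlike the fixed anchor lists in the configs above, the hunk header here shows `anchors: 3`: only an anchor count per output layer, with concrete anchors derived from the training labels. A self-contained sketch of the whitened k-means initialization that `utils/autoanchor.py` uses for this step (the repo additionally refines the result with a genetic evolution pass; the random `wh` below is a stand-in for real label sizes):

```python
import numpy as np
from scipy.cluster.vq import kmeans

def initial_anchors(wh, n=9):
    # cluster label width-heights into n anchors, whitening by std first
    s = wh.std(0)
    k, _ = kmeans(wh / s, n, iter=30)
    return k * s

wh = np.random.uniform(10, 600, size=(1000, 2)).astype(np.float32)  # stand-in labels
k = initial_anchors(wh, n=9)
print(np.round(k[np.argsort(k.prod(1))]))  # sorted small to large: 3 layers x 3 anchors
```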
models/hub/yolov5-p6.yaml CHANGED
@@ -7,48 +7,48 @@ anchors: 3
  # YOLOv5 backbone
  backbone:
  # [from, number, module, args]
- [ [ -1, 1, Focus, [ 64, 3 ] ], # 0-P1/2
- [ -1, 1, Conv, [ 128, 3, 2 ] ], # 1-P2/4
- [ -1, 3, C3, [ 128 ] ],
- [ -1, 1, Conv, [ 256, 3, 2 ] ], # 3-P3/8
- [ -1, 9, C3, [ 256 ] ],
- [ -1, 1, Conv, [ 512, 3, 2 ] ], # 5-P4/16
- [ -1, 9, C3, [ 512 ] ],
- [ -1, 1, Conv, [ 768, 3, 2 ] ], # 7-P5/32
- [ -1, 3, C3, [ 768 ] ],
- [ -1, 1, Conv, [ 1024, 3, 2 ] ], # 9-P6/64
- [ -1, 1, SPP, [ 1024, [ 3, 5, 7 ] ] ],
- [ -1, 3, C3, [ 1024, False ] ], # 11
  ]

  # YOLOv5 head
  head:
- [ [ -1, 1, Conv, [ 768, 1, 1 ] ],
- [ -1, 1, nn.Upsample, [ None, 2, 'nearest' ] ],
- [ [ -1, 8 ], 1, Concat, [ 1 ] ], # cat backbone P5
- [ -1, 3, C3, [ 768, False ] ], # 15
-
- [ -1, 1, Conv, [ 512, 1, 1 ] ],
- [ -1, 1, nn.Upsample, [ None, 2, 'nearest' ] ],
- [ [ -1, 6 ], 1, Concat, [ 1 ] ], # cat backbone P4
- [ -1, 3, C3, [ 512, False ] ], # 19
-
- [ -1, 1, Conv, [ 256, 1, 1 ] ],
- [ -1, 1, nn.Upsample, [ None, 2, 'nearest' ] ],
- [ [ -1, 4 ], 1, Concat, [ 1 ] ], # cat backbone P3
- [ -1, 3, C3, [ 256, False ] ], # 23 (P3/8-small)
-
- [ -1, 1, Conv, [ 256, 3, 2 ] ],
- [ [ -1, 20 ], 1, Concat, [ 1 ] ], # cat head P4
- [ -1, 3, C3, [ 512, False ] ], # 26 (P4/16-medium)
-
- [ -1, 1, Conv, [ 512, 3, 2 ] ],
- [ [ -1, 16 ], 1, Concat, [ 1 ] ], # cat head P5
- [ -1, 3, C3, [ 768, False ] ], # 29 (P5/32-large)
-
- [ -1, 1, Conv, [ 768, 3, 2 ] ],
- [ [ -1, 12 ], 1, Concat, [ 1 ] ], # cat head P6
- [ -1, 3, C3, [ 1024, False ] ], # 32 (P5/64-xlarge)
-
- [ [ 23, 26, 29, 32 ], 1, Detect, [ nc, anchors ] ], # Detect(P3, P4, P5, P6)
  ]

  # YOLOv5 backbone
  backbone:
  # [from, number, module, args]
+ [[-1, 1, Focus, [64, 3]], # 0-P1/2
+ [-1, 1, Conv, [128, 3, 2]], # 1-P2/4
+ [-1, 3, C3, [128]],
+ [-1, 1, Conv, [256, 3, 2]], # 3-P3/8
+ [-1, 9, C3, [256]],
+ [-1, 1, Conv, [512, 3, 2]], # 5-P4/16
+ [-1, 9, C3, [512]],
+ [-1, 1, Conv, [768, 3, 2]], # 7-P5/32
+ [-1, 3, C3, [768]],
+ [-1, 1, Conv, [1024, 3, 2]], # 9-P6/64
+ [-1, 1, SPP, [1024, [3, 5, 7]]],
+ [-1, 3, C3, [1024, False]], # 11
  ]

  # YOLOv5 head
  head:
+ [[-1, 1, Conv, [768, 1, 1]],
+ [-1, 1, nn.Upsample, [None, 2, 'nearest']],
+ [[-1, 8], 1, Concat, [1]], # cat backbone P5
+ [-1, 3, C3, [768, False]], # 15
+
+ [-1, 1, Conv, [512, 1, 1]],
+ [-1, 1, nn.Upsample, [None, 2, 'nearest']],
+ [[-1, 6], 1, Concat, [1]], # cat backbone P4
+ [-1, 3, C3, [512, False]], # 19
+
+ [-1, 1, Conv, [256, 1, 1]],
+ [-1, 1, nn.Upsample, [None, 2, 'nearest']],
+ [[-1, 4], 1, Concat, [1]], # cat backbone P3
+ [-1, 3, C3, [256, False]], # 23 (P3/8-small)
+
+ [-1, 1, Conv, [256, 3, 2]],
+ [[-1, 20], 1, Concat, [1]], # cat head P4
+ [-1, 3, C3, [512, False]], # 26 (P4/16-medium)
+
+ [-1, 1, Conv, [512, 3, 2]],
+ [[-1, 16], 1, Concat, [1]], # cat head P5
+ [-1, 3, C3, [768, False]], # 29 (P5/32-large)
+
+ [-1, 1, Conv, [768, 3, 2]],
+ [[-1, 12], 1, Concat, [1]], # cat head P6
+ [-1, 3, C3, [1024, False]], # 32 (P5/64-xlarge)
+
+ [[23, 26, 29, 32], 1, Detect, [nc, anchors]], # Detect(P3, P4, P5, P6)
  ]
models/hub/yolov5-p7.yaml CHANGED
@@ -7,59 +7,59 @@ anchors: 3
  # YOLOv5 backbone
  backbone:
  # [from, number, module, args]
- [ [ -1, 1, Focus, [ 64, 3 ] ], # 0-P1/2
- [ -1, 1, Conv, [ 128, 3, 2 ] ], # 1-P2/4
- [ -1, 3, C3, [ 128 ] ],
- [ -1, 1, Conv, [ 256, 3, 2 ] ], # 3-P3/8
- [ -1, 9, C3, [ 256 ] ],
- [ -1, 1, Conv, [ 512, 3, 2 ] ], # 5-P4/16
- [ -1, 9, C3, [ 512 ] ],
- [ -1, 1, Conv, [ 768, 3, 2 ] ], # 7-P5/32
- [ -1, 3, C3, [ 768 ] ],
- [ -1, 1, Conv, [ 1024, 3, 2 ] ], # 9-P6/64
- [ -1, 3, C3, [ 1024 ] ],
- [ -1, 1, Conv, [ 1280, 3, 2 ] ], # 11-P7/128
- [ -1, 1, SPP, [ 1280, [ 3, 5 ] ] ],
- [ -1, 3, C3, [ 1280, False ] ], # 13
  ]

  # YOLOv5 head
  head:
- [ [ -1, 1, Conv, [ 1024, 1, 1 ] ],
- [ -1, 1, nn.Upsample, [ None, 2, 'nearest' ] ],
- [ [ -1, 10 ], 1, Concat, [ 1 ] ], # cat backbone P6
- [ -1, 3, C3, [ 1024, False ] ], # 17

- [ -1, 1, Conv, [ 768, 1, 1 ] ],
- [ -1, 1, nn.Upsample, [ None, 2, 'nearest' ] ],
- [ [ -1, 8 ], 1, Concat, [ 1 ] ], # cat backbone P5
- [ -1, 3, C3, [ 768, False ] ], # 21

- [ -1, 1, Conv, [ 512, 1, 1 ] ],
- [ -1, 1, nn.Upsample, [ None, 2, 'nearest' ] ],
- [ [ -1, 6 ], 1, Concat, [ 1 ] ], # cat backbone P4
- [ -1, 3, C3, [ 512, False ] ], # 25

- [ -1, 1, Conv, [ 256, 1, 1 ] ],
- [ -1, 1, nn.Upsample, [ None, 2, 'nearest' ] ],
- [ [ -1, 4 ], 1, Concat, [ 1 ] ], # cat backbone P3
- [ -1, 3, C3, [ 256, False ] ], # 29 (P3/8-small)

- [ -1, 1, Conv, [ 256, 3, 2 ] ],
- [ [ -1, 26 ], 1, Concat, [ 1 ] ], # cat head P4
- [ -1, 3, C3, [ 512, False ] ], # 32 (P4/16-medium)

- [ -1, 1, Conv, [ 512, 3, 2 ] ],
- [ [ -1, 22 ], 1, Concat, [ 1 ] ], # cat head P5
- [ -1, 3, C3, [ 768, False ] ], # 35 (P5/32-large)

- [ -1, 1, Conv, [ 768, 3, 2 ] ],
- [ [ -1, 18 ], 1, Concat, [ 1 ] ], # cat head P6
- [ -1, 3, C3, [ 1024, False ] ], # 38 (P6/64-xlarge)

- [ -1, 1, Conv, [ 1024, 3, 2 ] ],
- [ [ -1, 14 ], 1, Concat, [ 1 ] ], # cat head P7
- [ -1, 3, C3, [ 1280, False ] ], # 41 (P7/128-xxlarge)

- [ [ 29, 32, 35, 38, 41 ], 1, Detect, [ nc, anchors ] ], # Detect(P3, P4, P5, P6, P7)
  ]

  # YOLOv5 backbone
  backbone:
  # [from, number, module, args]
+ [[-1, 1, Focus, [64, 3]], # 0-P1/2
+ [-1, 1, Conv, [128, 3, 2]], # 1-P2/4
+ [-1, 3, C3, [128]],
+ [-1, 1, Conv, [256, 3, 2]], # 3-P3/8
+ [-1, 9, C3, [256]],
+ [-1, 1, Conv, [512, 3, 2]], # 5-P4/16
+ [-1, 9, C3, [512]],
+ [-1, 1, Conv, [768, 3, 2]], # 7-P5/32
+ [-1, 3, C3, [768]],
+ [-1, 1, Conv, [1024, 3, 2]], # 9-P6/64
+ [-1, 3, C3, [1024]],
+ [-1, 1, Conv, [1280, 3, 2]], # 11-P7/128
+ [-1, 1, SPP, [1280, [3, 5]]],
+ [-1, 3, C3, [1280, False]], # 13
  ]

  # YOLOv5 head
  head:
+ [[-1, 1, Conv, [1024, 1, 1]],
+ [-1, 1, nn.Upsample, [None, 2, 'nearest']],
+ [[-1, 10], 1, Concat, [1]], # cat backbone P6
+ [-1, 3, C3, [1024, False]], # 17

+ [-1, 1, Conv, [768, 1, 1]],
+ [-1, 1, nn.Upsample, [None, 2, 'nearest']],
+ [[-1, 8], 1, Concat, [1]], # cat backbone P5
+ [-1, 3, C3, [768, False]], # 21

+ [-1, 1, Conv, [512, 1, 1]],
+ [-1, 1, nn.Upsample, [None, 2, 'nearest']],
+ [[-1, 6], 1, Concat, [1]], # cat backbone P4
+ [-1, 3, C3, [512, False]], # 25

+ [-1, 1, Conv, [256, 1, 1]],
+ [-1, 1, nn.Upsample, [None, 2, 'nearest']],
+ [[-1, 4], 1, Concat, [1]], # cat backbone P3
+ [-1, 3, C3, [256, False]], # 29 (P3/8-small)

+ [-1, 1, Conv, [256, 3, 2]],
+ [[-1, 26], 1, Concat, [1]], # cat head P4
+ [-1, 3, C3, [512, False]], # 32 (P4/16-medium)

+ [-1, 1, Conv, [512, 3, 2]],
+ [[-1, 22], 1, Concat, [1]], # cat head P5
+ [-1, 3, C3, [768, False]], # 35 (P5/32-large)

+ [-1, 1, Conv, [768, 3, 2]],
+ [[-1, 18], 1, Concat, [1]], # cat head P6
+ [-1, 3, C3, [1024, False]], # 38 (P6/64-xlarge)

+ [-1, 1, Conv, [1024, 3, 2]],
+ [[-1, 14], 1, Concat, [1]], # cat head P7
+ [-1, 3, C3, [1280, False]], # 41 (P7/128-xxlarge)

+ [[29, 32, 35, 38, 41], 1, Detect, [nc, anchors]], # Detect(P3, P4, P5, P6, P7)
  ]
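The P3-P7 outputs above correspond to strides 8 through 128, so grid sizes follow directly from the input resolution; this is also why the P7 anchor sets in anchors.yaml are tuned per image size (a 640 input leaves only a 5x5 grid at P7/128):

```python
strides = [8, 16, 32, 64, 128]  # P3, P4, P5, P6, P7
for img_size in (640, 1280, 1920):
    print(img_size, [img_size // s for s in strides])
# 640  -> [80, 40, 20, 10, 5]
# 1280 -> [160, 80, 40, 20, 10]
# 1920 -> [240, 120, 60, 30, 15]
```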
models/hub/yolov5-panet.yaml CHANGED
@@ -3,44 +3,44 @@ nc: 80 # number of classes
  depth_multiple: 1.0 # model depth multiple
  width_multiple: 1.0 # layer channel multiple
  anchors:
- - [ 10,13, 16,30, 33,23 ] # P3/8
- - [ 30,61, 62,45, 59,119 ] # P4/16
- - [ 116,90, 156,198, 373,326 ] # P5/32

  # YOLOv5 backbone
  backbone:
  # [from, number, module, args]
- [ [ -1, 1, Focus, [ 64, 3 ] ], # 0-P1/2
- [ -1, 1, Conv, [ 128, 3, 2 ] ], # 1-P2/4
- [ -1, 3, BottleneckCSP, [ 128 ] ],
- [ -1, 1, Conv, [ 256, 3, 2 ] ], # 3-P3/8
- [ -1, 9, BottleneckCSP, [ 256 ] ],
- [ -1, 1, Conv, [ 512, 3, 2 ] ], # 5-P4/16
- [ -1, 9, BottleneckCSP, [ 512 ] ],
- [ -1, 1, Conv, [ 1024, 3, 2 ] ], # 7-P5/32
- [ -1, 1, SPP, [ 1024, [ 5, 9, 13 ] ] ],
- [ -1, 3, BottleneckCSP, [ 1024, False ] ], # 9
  ]

  # YOLOv5 PANet head
  head:
- [ [ -1, 1, Conv, [ 512, 1, 1 ] ],
- [ -1, 1, nn.Upsample, [ None, 2, 'nearest' ] ],
- [ [ -1, 6 ], 1, Concat, [ 1 ] ], # cat backbone P4
- [ -1, 3, BottleneckCSP, [ 512, False ] ], # 13

- [ -1, 1, Conv, [ 256, 1, 1 ] ],
- [ -1, 1, nn.Upsample, [ None, 2, 'nearest' ] ],
- [ [ -1, 4 ], 1, Concat, [ 1 ] ], # cat backbone P3
- [ -1, 3, BottleneckCSP, [ 256, False ] ], # 17 (P3/8-small)

- [ -1, 1, Conv, [ 256, 3, 2 ] ],
- [ [ -1, 14 ], 1, Concat, [ 1 ] ], # cat head P4
- [ -1, 3, BottleneckCSP, [ 512, False ] ], # 20 (P4/16-medium)

- [ -1, 1, Conv, [ 512, 3, 2 ] ],
- [ [ -1, 10 ], 1, Concat, [ 1 ] ], # cat head P5
- [ -1, 3, BottleneckCSP, [ 1024, False ] ], # 23 (P5/32-large)

- [ [ 17, 20, 23 ], 1, Detect, [ nc, anchors ] ], # Detect(P3, P4, P5)
  ]

  depth_multiple: 1.0 # model depth multiple
  width_multiple: 1.0 # layer channel multiple
  anchors:
+ - [10,13, 16,30, 33,23] # P3/8
+ - [30,61, 62,45, 59,119] # P4/16
+ - [116,90, 156,198, 373,326] # P5/32

  # YOLOv5 backbone
  backbone:
  # [from, number, module, args]
+ [[-1, 1, Focus, [64, 3]], # 0-P1/2
+ [-1, 1, Conv, [128, 3, 2]], # 1-P2/4
+ [-1, 3, BottleneckCSP, [128]],
+ [-1, 1, Conv, [256, 3, 2]], # 3-P3/8
+ [-1, 9, BottleneckCSP, [256]],
+ [-1, 1, Conv, [512, 3, 2]], # 5-P4/16
+ [-1, 9, BottleneckCSP, [512]],
+ [-1, 1, Conv, [1024, 3, 2]], # 7-P5/32
+ [-1, 1, SPP, [1024, [5, 9, 13]]],
+ [-1, 3, BottleneckCSP, [1024, False]], # 9
  ]

  # YOLOv5 PANet head
  head:
+ [[-1, 1, Conv, [512, 1, 1]],
+ [-1, 1, nn.Upsample, [None, 2, 'nearest']],
+ [[-1, 6], 1, Concat, [1]], # cat backbone P4
+ [-1, 3, BottleneckCSP, [512, False]], # 13

+ [-1, 1, Conv, [256, 1, 1]],
+ [-1, 1, nn.Upsample, [None, 2, 'nearest']],
+ [[-1, 4], 1, Concat, [1]], # cat backbone P3
+ [-1, 3, BottleneckCSP, [256, False]], # 17 (P3/8-small)

+ [-1, 1, Conv, [256, 3, 2]],
+ [[-1, 14], 1, Concat, [1]], # cat head P4
+ [-1, 3, BottleneckCSP, [512, False]], # 20 (P4/16-medium)

+ [-1, 1, Conv, [512, 3, 2]],
+ [[-1, 10], 1, Concat, [1]], # cat head P5
+ [-1, 3, BottleneckCSP, [1024, False]], # 23 (P5/32-large)

+ [[17, 20, 23], 1, Detect, [nc, anchors]], # Detect(P3, P4, P5)
  ]
models/hub/yolov5l6.yaml CHANGED
@@ -3,56 +3,56 @@ nc: 80 # number of classes
  depth_multiple: 1.0 # model depth multiple
  width_multiple: 1.0 # layer channel multiple
  anchors:
- - [ 19,27, 44,40, 38,94 ] # P3/8
- - [ 96,68, 86,152, 180,137 ] # P4/16
- - [ 140,301, 303,264, 238,542 ] # P5/32
- - [ 436,615, 739,380, 925,792 ] # P6/64

  # YOLOv5 backbone
  backbone:
  # [from, number, module, args]
- [ [ -1, 1, Focus, [ 64, 3 ] ], # 0-P1/2
- [ -1, 1, Conv, [ 128, 3, 2 ] ], # 1-P2/4
- [ -1, 3, C3, [ 128 ] ],
- [ -1, 1, Conv, [ 256, 3, 2 ] ], # 3-P3/8
- [ -1, 9, C3, [ 256 ] ],
- [ -1, 1, Conv, [ 512, 3, 2 ] ], # 5-P4/16
- [ -1, 9, C3, [ 512 ] ],
- [ -1, 1, Conv, [ 768, 3, 2 ] ], # 7-P5/32
- [ -1, 3, C3, [ 768 ] ],
- [ -1, 1, Conv, [ 1024, 3, 2 ] ], # 9-P6/64
- [ -1, 1, SPP, [ 1024, [ 3, 5, 7 ] ] ],
- [ -1, 3, C3, [ 1024, False ] ], # 11
  ]

  # YOLOv5 head
  head:
- [ [ -1, 1, Conv, [ 768, 1, 1 ] ],
- [ -1, 1, nn.Upsample, [ None, 2, 'nearest' ] ],
- [ [ -1, 8 ], 1, Concat, [ 1 ] ], # cat backbone P5
- [ -1, 3, C3, [ 768, False ] ], # 15
-
- [ -1, 1, Conv, [ 512, 1, 1 ] ],
- [ -1, 1, nn.Upsample, [ None, 2, 'nearest' ] ],
- [ [ -1, 6 ], 1, Concat, [ 1 ] ], # cat backbone P4
- [ -1, 3, C3, [ 512, False ] ], # 19
-
- [ -1, 1, Conv, [ 256, 1, 1 ] ],
- [ -1, 1, nn.Upsample, [ None, 2, 'nearest' ] ],
- [ [ -1, 4 ], 1, Concat, [ 1 ] ], # cat backbone P3
- [ -1, 3, C3, [ 256, False ] ], # 23 (P3/8-small)
-
- [ -1, 1, Conv, [ 256, 3, 2 ] ],
- [ [ -1, 20 ], 1, Concat, [ 1 ] ], # cat head P4
- [ -1, 3, C3, [ 512, False ] ], # 26 (P4/16-medium)
-
- [ -1, 1, Conv, [ 512, 3, 2 ] ],
- [ [ -1, 16 ], 1, Concat, [ 1 ] ], # cat head P5
- [ -1, 3, C3, [ 768, False ] ], # 29 (P5/32-large)
-
- [ -1, 1, Conv, [ 768, 3, 2 ] ],
- [ [ -1, 12 ], 1, Concat, [ 1 ] ], # cat head P6
- [ -1, 3, C3, [ 1024, False ] ], # 32 (P6/64-xlarge)
-
- [ [ 23, 26, 29, 32 ], 1, Detect, [ nc, anchors ] ], # Detect(P3, P4, P5, P6)
  ]

  depth_multiple: 1.0 # model depth multiple
  width_multiple: 1.0 # layer channel multiple
  anchors:
+ - [19,27, 44,40, 38,94] # P3/8
+ - [96,68, 86,152, 180,137] # P4/16
+ - [140,301, 303,264, 238,542] # P5/32
+ - [436,615, 739,380, 925,792] # P6/64

  # YOLOv5 backbone
  backbone:
  # [from, number, module, args]
+ [[-1, 1, Focus, [64, 3]], # 0-P1/2
+ [-1, 1, Conv, [128, 3, 2]], # 1-P2/4
+ [-1, 3, C3, [128]],
+ [-1, 1, Conv, [256, 3, 2]], # 3-P3/8
+ [-1, 9, C3, [256]],
+ [-1, 1, Conv, [512, 3, 2]], # 5-P4/16
+ [-1, 9, C3, [512]],
+ [-1, 1, Conv, [768, 3, 2]], # 7-P5/32
+ [-1, 3, C3, [768]],
+ [-1, 1, Conv, [1024, 3, 2]], # 9-P6/64
+ [-1, 1, SPP, [1024, [3, 5, 7]]],
+ [-1, 3, C3, [1024, False]], # 11
  ]

  # YOLOv5 head
  head:
+ [[-1, 1, Conv, [768, 1, 1]],
+ [-1, 1, nn.Upsample, [None, 2, 'nearest']],
+ [[-1, 8], 1, Concat, [1]], # cat backbone P5
+ [-1, 3, C3, [768, False]], # 15
+
+ [-1, 1, Conv, [512, 1, 1]],
+ [-1, 1, nn.Upsample, [None, 2, 'nearest']],
+ [[-1, 6], 1, Concat, [1]], # cat backbone P4
+ [-1, 3, C3, [512, False]], # 19
+
+ [-1, 1, Conv, [256, 1, 1]],
+ [-1, 1, nn.Upsample, [None, 2, 'nearest']],
+ [[-1, 4], 1, Concat, [1]], # cat backbone P3
+ [-1, 3, C3, [256, False]], # 23 (P3/8-small)
+
+ [-1, 1, Conv, [256, 3, 2]],
+ [[-1, 20], 1, Concat, [1]], # cat head P4
+ [-1, 3, C3, [512, False]], # 26 (P4/16-medium)
+
+ [-1, 1, Conv, [512, 3, 2]],
+ [[-1, 16], 1, Concat, [1]], # cat head P5
+ [-1, 3, C3, [768, False]], # 29 (P5/32-large)
+
+ [-1, 1, Conv, [768, 3, 2]],
+ [[-1, 12], 1, Concat, [1]], # cat head P6
+ [-1, 3, C3, [1024, False]], # 32 (P6/64-xlarge)
+
+ [[23, 26, 29, 32], 1, Detect, [nc, anchors]], # Detect(P3, P4, P5, P6)
  ]
models/hub/yolov5m6.yaml CHANGED
@@ -3,56 +3,56 @@ nc: 80 # number of classes
  depth_multiple: 0.67 # model depth multiple
  width_multiple: 0.75 # layer channel multiple
  anchors:
- - [ 19,27, 44,40, 38,94 ] # P3/8
- - [ 96,68, 86,152, 180,137 ] # P4/16
- - [ 140,301, 303,264, 238,542 ] # P5/32
- - [ 436,615, 739,380, 925,792 ] # P6/64

  # YOLOv5 backbone
  backbone:
  # [from, number, module, args]
- [ [ -1, 1, Focus, [ 64, 3 ] ], # 0-P1/2
- [ -1, 1, Conv, [ 128, 3, 2 ] ], # 1-P2/4
- [ -1, 3, C3, [ 128 ] ],
- [ -1, 1, Conv, [ 256, 3, 2 ] ], # 3-P3/8
- [ -1, 9, C3, [ 256 ] ],
- [ -1, 1, Conv, [ 512, 3, 2 ] ], # 5-P4/16
- [ -1, 9, C3, [ 512 ] ],
- [ -1, 1, Conv, [ 768, 3, 2 ] ], # 7-P5/32
- [ -1, 3, C3, [ 768 ] ],
- [ -1, 1, Conv, [ 1024, 3, 2 ] ], # 9-P6/64
- [ -1, 1, SPP, [ 1024, [ 3, 5, 7 ] ] ],
- [ -1, 3, C3, [ 1024, False ] ], # 11
  ]

  # YOLOv5 head
  head:
- [ [ -1, 1, Conv, [ 768, 1, 1 ] ],
- [ -1, 1, nn.Upsample, [ None, 2, 'nearest' ] ],
- [ [ -1, 8 ], 1, Concat, [ 1 ] ], # cat backbone P5
- [ -1, 3, C3, [ 768, False ] ], # 15
-
- [ -1, 1, Conv, [ 512, 1, 1 ] ],
- [ -1, 1, nn.Upsample, [ None, 2, 'nearest' ] ],
- [ [ -1, 6 ], 1, Concat, [ 1 ] ], # cat backbone P4
- [ -1, 3, C3, [ 512, False ] ], # 19
-
- [ -1, 1, Conv, [ 256, 1, 1 ] ],
- [ -1, 1, nn.Upsample, [ None, 2, 'nearest' ] ],
- [ [ -1, 4 ], 1, Concat, [ 1 ] ], # cat backbone P3
- [ -1, 3, C3, [ 256, False ] ], # 23 (P3/8-small)
-
- [ -1, 1, Conv, [ 256, 3, 2 ] ],
- [ [ -1, 20 ], 1, Concat, [ 1 ] ], # cat head P4
- [ -1, 3, C3, [ 512, False ] ], # 26 (P4/16-medium)
-
- [ -1, 1, Conv, [ 512, 3, 2 ] ],
- [ [ -1, 16 ], 1, Concat, [ 1 ] ], # cat head P5
- [ -1, 3, C3, [ 768, False ] ], # 29 (P5/32-large)
-
- [ -1, 1, Conv, [ 768, 3, 2 ] ],
- [ [ -1, 12 ], 1, Concat, [ 1 ] ], # cat head P6
- [ -1, 3, C3, [ 1024, False ] ], # 32 (P6/64-xlarge)
-
- [ [ 23, 26, 29, 32 ], 1, Detect, [ nc, anchors ] ], # Detect(P3, P4, P5, P6)
  ]

  depth_multiple: 0.67 # model depth multiple
  width_multiple: 0.75 # layer channel multiple
  anchors:
+ - [19,27, 44,40, 38,94] # P3/8
+ - [96,68, 86,152, 180,137] # P4/16
+ - [140,301, 303,264, 238,542] # P5/32
+ - [436,615, 739,380, 925,792] # P6/64

  # YOLOv5 backbone
  backbone:
  # [from, number, module, args]
+ [[-1, 1, Focus, [64, 3]], # 0-P1/2
+ [-1, 1, Conv, [128, 3, 2]], # 1-P2/4
+ [-1, 3, C3, [128]],
+ [-1, 1, Conv, [256, 3, 2]], # 3-P3/8
+ [-1, 9, C3, [256]],
+ [-1, 1, Conv, [512, 3, 2]], # 5-P4/16
+ [-1, 9, C3, [512]],
+ [-1, 1, Conv, [768, 3, 2]], # 7-P5/32
+ [-1, 3, C3, [768]],
+ [-1, 1, Conv, [1024, 3, 2]], # 9-P6/64
+ [-1, 1, SPP, [1024, [3, 5, 7]]],
+ [-1, 3, C3, [1024, False]], # 11
  ]

  # YOLOv5 head
  head:
+ [[-1, 1, Conv, [768, 1, 1]],
+ [-1, 1, nn.Upsample, [None, 2, 'nearest']],
+ [[-1, 8], 1, Concat, [1]], # cat backbone P5
+ [-1, 3, C3, [768, False]], # 15
+
+ [-1, 1, Conv, [512, 1, 1]],
+ [-1, 1, nn.Upsample, [None, 2, 'nearest']],
+ [[-1, 6], 1, Concat, [1]], # cat backbone P4
+ [-1, 3, C3, [512, False]], # 19
+
+ [-1, 1, Conv, [256, 1, 1]],
+ [-1, 1, nn.Upsample, [None, 2, 'nearest']],
+ [[-1, 4], 1, Concat, [1]], # cat backbone P3
+ [-1, 3, C3, [256, False]], # 23 (P3/8-small)
+
+ [-1, 1, Conv, [256, 3, 2]],
+ [[-1, 20], 1, Concat, [1]], # cat head P4
+ [-1, 3, C3, [512, False]], # 26 (P4/16-medium)
+
+ [-1, 1, Conv, [512, 3, 2]],
+ [[-1, 16], 1, Concat, [1]], # cat head P5
+ [-1, 3, C3, [768, False]], # 29 (P5/32-large)
+
+ [-1, 1, Conv, [768, 3, 2]],
+ [[-1, 12], 1, Concat, [1]], # cat head P6
+ [-1, 3, C3, [1024, False]], # 32 (P6/64-xlarge)
+
+ [[23, 26, 29, 32], 1, Detect, [nc, anchors]], # Detect(P3, P4, P5, P6)
  ]
models/hub/yolov5s-transformer.yaml CHANGED
@@ -3,44 +3,44 @@ nc: 80 # number of classes
  depth_multiple: 0.33 # model depth multiple
  width_multiple: 0.50 # layer channel multiple
  anchors:
- - [ 10,13, 16,30, 33,23 ] # P3/8
- - [ 30,61, 62,45, 59,119 ] # P4/16
- - [ 116,90, 156,198, 373,326 ] # P5/32

  # YOLOv5 backbone
  backbone:
  # [from, number, module, args]
- [ [ -1, 1, Focus, [ 64, 3 ] ], # 0-P1/2
- [ -1, 1, Conv, [ 128, 3, 2 ] ], # 1-P2/4
- [ -1, 3, C3, [ 128 ] ],
- [ -1, 1, Conv, [ 256, 3, 2 ] ], # 3-P3/8
- [ -1, 9, C3, [ 256 ] ],
- [ -1, 1, Conv, [ 512, 3, 2 ] ], # 5-P4/16
- [ -1, 9, C3, [ 512 ] ],
- [ -1, 1, Conv, [ 1024, 3, 2 ] ], # 7-P5/32
- [ -1, 1, SPP, [ 1024, [ 5, 9, 13 ] ] ],
- [ -1, 3, C3TR, [ 1024, False ] ], # 9 <-------- C3TR() Transformer module
  ]

  # YOLOv5 head
  head:
- [ [ -1, 1, Conv, [ 512, 1, 1 ] ],
- [ -1, 1, nn.Upsample, [ None, 2, 'nearest' ] ],
- [ [ -1, 6 ], 1, Concat, [ 1 ] ], # cat backbone P4
- [ -1, 3, C3, [ 512, False ] ], # 13

- [ -1, 1, Conv, [ 256, 1, 1 ] ],
- [ -1, 1, nn.Upsample, [ None, 2, 'nearest' ] ],
- [ [ -1, 4 ], 1, Concat, [ 1 ] ], # cat backbone P3
- [ -1, 3, C3, [ 256, False ] ], # 17 (P3/8-small)

- [ -1, 1, Conv, [ 256, 3, 2 ] ],
- [ [ -1, 14 ], 1, Concat, [ 1 ] ], # cat head P4
- [ -1, 3, C3, [ 512, False ] ], # 20 (P4/16-medium)

- [ -1, 1, Conv, [ 512, 3, 2 ] ],
- [ [ -1, 10 ], 1, Concat, [ 1 ] ], # cat head P5
- [ -1, 3, C3, [ 1024, False ] ], # 23 (P5/32-large)

- [ [ 17, 20, 23 ], 1, Detect, [ nc, anchors ] ], # Detect(P3, P4, P5)
  ]

  depth_multiple: 0.33 # model depth multiple
  width_multiple: 0.50 # layer channel multiple
  anchors:
+ - [10,13, 16,30, 33,23] # P3/8
+ - [30,61, 62,45, 59,119] # P4/16
+ - [116,90, 156,198, 373,326] # P5/32

  # YOLOv5 backbone
  backbone:
  # [from, number, module, args]
+ [[-1, 1, Focus, [64, 3]], # 0-P1/2
+ [-1, 1, Conv, [128, 3, 2]], # 1-P2/4
+ [-1, 3, C3, [128]],
+ [-1, 1, Conv, [256, 3, 2]], # 3-P3/8
+ [-1, 9, C3, [256]],
+ [-1, 1, Conv, [512, 3, 2]], # 5-P4/16
+ [-1, 9, C3, [512]],
+ [-1, 1, Conv, [1024, 3, 2]], # 7-P5/32
+ [-1, 1, SPP, [1024, [5, 9, 13]]],
+ [-1, 3, C3TR, [1024, False]], # 9 <-------- C3TR() Transformer module
  ]

  # YOLOv5 head
  head:
+ [[-1, 1, Conv, [512, 1, 1]],
+ [-1, 1, nn.Upsample, [None, 2, 'nearest']],
+ [[-1, 6], 1, Concat, [1]], # cat backbone P4
+ [-1, 3, C3, [512, False]], # 13

+ [-1, 1, Conv, [256, 1, 1]],
+ [-1, 1, nn.Upsample, [None, 2, 'nearest']],
+ [[-1, 4], 1, Concat, [1]], # cat backbone P3
+ [-1, 3, C3, [256, False]], # 17 (P3/8-small)

+ [-1, 1, Conv, [256, 3, 2]],
+ [[-1, 14], 1, Concat, [1]], # cat head P4
+ [-1, 3, C3, [512, False]], # 20 (P4/16-medium)

+ [-1, 1, Conv, [512, 3, 2]],
+ [[-1, 10], 1, Concat, [1]], # cat head P5
+ [-1, 3, C3, [1024, False]], # 23 (P5/32-large)

+ [[17, 20, 23], 1, Detect, [nc, anchors]], # Detect(P3, P4, P5)
  ]
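All of these hub configs, including the C3TR transformer variant above, are plain YAML consumed by the `Model` class; a usage sketch (run from the repo root; weights are randomly initialized, so this only checks that the graph builds):

```python
import torch
from models.yolo import Model

model = Model('models/hub/yolov5s-transformer.yaml', ch=3, nc=80)  # build from YAML
_ = model(torch.zeros(1, 3, 640, 640))  # dry-run forward through the P3-P5 outputs
```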
models/hub/yolov5s6.yaml CHANGED
@@ -3,56 +3,56 @@ nc: 80 # number of classes
  depth_multiple: 0.33 # model depth multiple
  width_multiple: 0.50 # layer channel multiple
  anchors:
- - [ 19,27, 44,40, 38,94 ] # P3/8
- - [ 96,68, 86,152, 180,137 ] # P4/16
- - [ 140,301, 303,264, 238,542 ] # P5/32
- - [ 436,615, 739,380, 925,792 ] # P6/64

  # YOLOv5 backbone
  backbone:
  # [from, number, module, args]
- [ [ -1, 1, Focus, [ 64, 3 ] ], # 0-P1/2
- [ -1, 1, Conv, [ 128, 3, 2 ] ], # 1-P2/4
- [ -1, 3, C3, [ 128 ] ],
- [ -1, 1, Conv, [ 256, 3, 2 ] ], # 3-P3/8
- [ -1, 9, C3, [ 256 ] ],
- [ -1, 1, Conv, [ 512, 3, 2 ] ], # 5-P4/16
- [ -1, 9, C3, [ 512 ] ],
- [ -1, 1, Conv, [ 768, 3, 2 ] ], # 7-P5/32
- [ -1, 3, C3, [ 768 ] ],
- [ -1, 1, Conv, [ 1024, 3, 2 ] ], # 9-P6/64
- [ -1, 1, SPP, [ 1024, [ 3, 5, 7 ] ] ],
- [ -1, 3, C3, [ 1024, False ] ], # 11
  ]

  # YOLOv5 head
  head:
- [ [ -1, 1, Conv, [ 768, 1, 1 ] ],
- [ -1, 1, nn.Upsample, [ None, 2, 'nearest' ] ],
- [ [ -1, 8 ], 1, Concat, [ 1 ] ], # cat backbone P5
- [ -1, 3, C3, [ 768, False ] ], # 15
-
- [ -1, 1, Conv, [ 512, 1, 1 ] ],
- [ -1, 1, nn.Upsample, [ None, 2, 'nearest' ] ],
- [ [ -1, 6 ], 1, Concat, [ 1 ] ], # cat backbone P4
- [ -1, 3, C3, [ 512, False ] ], # 19
-
- [ -1, 1, Conv, [ 256, 1, 1 ] ],
- [ -1, 1, nn.Upsample, [ None, 2, 'nearest' ] ],
- [ [ -1, 4 ], 1, Concat, [ 1 ] ], # cat backbone P3
- [ -1, 3, C3, [ 256, False ] ], # 23 (P3/8-small)
-
- [ -1, 1, Conv, [ 256, 3, 2 ] ],
- [ [ -1, 20 ], 1, Concat, [ 1 ] ], # cat head P4
- [ -1, 3, C3, [ 512, False ] ], # 26 (P4/16-medium)
-
- [ -1, 1, Conv, [ 512, 3, 2 ] ],
- [ [ -1, 16 ], 1, Concat, [ 1 ] ], # cat head P5
- [ -1, 3, C3, [ 768, False ] ], # 29 (P5/32-large)
-
- [ -1, 1, Conv, [ 768, 3, 2 ] ],
- [ [ -1, 12 ], 1, Concat, [ 1 ] ], # cat head P6
- [ -1, 3, C3, [ 1024, False ] ], # 32 (P6/64-xlarge)
-
- [ [ 23, 26, 29, 32 ], 1, Detect, [ nc, anchors ] ], # Detect(P3, P4, P5, P6)
  ]

  depth_multiple: 0.33 # model depth multiple
  width_multiple: 0.50 # layer channel multiple
  anchors:
+ - [19,27, 44,40, 38,94] # P3/8
+ - [96,68, 86,152, 180,137] # P4/16
+ - [140,301, 303,264, 238,542] # P5/32
+ - [436,615, 739,380, 925,792] # P6/64

  # YOLOv5 backbone
  backbone:
  # [from, number, module, args]
+ [[-1, 1, Focus, [64, 3]], # 0-P1/2
+ [-1, 1, Conv, [128, 3, 2]], # 1-P2/4
+ [-1, 3, C3, [128]],
+ [-1, 1, Conv, [256, 3, 2]], # 3-P3/8
+ [-1, 9, C3, [256]],
+ [-1, 1, Conv, [512, 3, 2]], # 5-P4/16
+ [-1, 9, C3, [512]],
+ [-1, 1, Conv, [768, 3, 2]], # 7-P5/32
+ [-1, 3, C3, [768]],
+ [-1, 1, Conv, [1024, 3, 2]], # 9-P6/64
+ [-1, 1, SPP, [1024, [3, 5, 7]]],
+ [-1, 3, C3, [1024, False]], # 11
  ]

  # YOLOv5 head
  head:
+ [[-1, 1, Conv, [768, 1, 1]],
+ [-1, 1, nn.Upsample, [None, 2, 'nearest']],
+ [[-1, 8], 1, Concat, [1]], # cat backbone P5
+ [-1, 3, C3, [768, False]], # 15
+
+ [-1, 1, Conv, [512, 1, 1]],
+ [-1, 1, nn.Upsample, [None, 2, 'nearest']],
+ [[-1, 6], 1, Concat, [1]], # cat backbone P4
+ [-1, 3, C3, [512, False]], # 19
+
+ [-1, 1, Conv, [256, 1, 1]],
+ [-1, 1, nn.Upsample, [None, 2, 'nearest']],
+ [[-1, 4], 1, Concat, [1]], # cat backbone P3
+ [-1, 3, C3, [256, False]], # 23 (P3/8-small)
+
+ [-1, 1, Conv, [256, 3, 2]],
+ [[-1, 20], 1, Concat, [1]], # cat head P4
+ [-1, 3, C3, [512, False]], # 26 (P4/16-medium)
+
+ [-1, 1, Conv, [512, 3, 2]],
+ [[-1, 16], 1, Concat, [1]], # cat head P5
+ [-1, 3, C3, [768, False]], # 29 (P5/32-large)
+
+ [-1, 1, Conv, [768, 3, 2]],
+ [[-1, 12], 1, Concat, [1]], # cat head P6
+ [-1, 3, C3, [1024, False]], # 32 (P6/64-xlarge)
+
+ [[23, 26, 29, 32], 1, Detect, [nc, anchors]], # Detect(P3, P4, P5, P6)
  ]
models/hub/yolov5x6.yaml CHANGED
@@ -3,56 +3,56 @@ nc: 80 # number of classes
  depth_multiple: 1.33 # model depth multiple
  width_multiple: 1.25 # layer channel multiple
  anchors:
- - [ 19,27, 44,40, 38,94 ] # P3/8
- - [ 96,68, 86,152, 180,137 ] # P4/16
- - [ 140,301, 303,264, 238,542 ] # P5/32
- - [ 436,615, 739,380, 925,792 ] # P6/64

  # YOLOv5 backbone
  backbone:
  # [from, number, module, args]
- [ [ -1, 1, Focus, [ 64, 3 ] ], # 0-P1/2
- [ -1, 1, Conv, [ 128, 3, 2 ] ], # 1-P2/4
- [ -1, 3, C3, [ 128 ] ],
- [ -1, 1, Conv, [ 256, 3, 2 ] ], # 3-P3/8
- [ -1, 9, C3, [ 256 ] ],
- [ -1, 1, Conv, [ 512, 3, 2 ] ], # 5-P4/16
- [ -1, 9, C3, [ 512 ] ],
- [ -1, 1, Conv, [ 768, 3, 2 ] ], # 7-P5/32
- [ -1, 3, C3, [ 768 ] ],
- [ -1, 1, Conv, [ 1024, 3, 2 ] ], # 9-P6/64
- [ -1, 1, SPP, [ 1024, [ 3, 5, 7 ] ] ],
- [ -1, 3, C3, [ 1024, False ] ], # 11
  ]

  # YOLOv5 head
  head:
- [ [ -1, 1, Conv, [ 768, 1, 1 ] ],
- [ -1, 1, nn.Upsample, [ None, 2, 'nearest' ] ],
- [ [ -1, 8 ], 1, Concat, [ 1 ] ], # cat backbone P5
- [ -1, 3, C3, [ 768, False ] ], # 15
-
- [ -1, 1, Conv, [ 512, 1, 1 ] ],
- [ -1, 1, nn.Upsample, [ None, 2, 'nearest' ] ],
- [ [ -1, 6 ], 1, Concat, [ 1 ] ], # cat backbone P4
- [ -1, 3, C3, [ 512, False ] ], # 19
-
- [ -1, 1, Conv, [ 256, 1, 1 ] ],
- [ -1, 1, nn.Upsample, [ None, 2, 'nearest' ] ],
- [ [ -1, 4 ], 1, Concat, [ 1 ] ], # cat backbone P3
- [ -1, 3, C3, [ 256, False ] ], # 23 (P3/8-small)
-
- [ -1, 1, Conv, [ 256, 3, 2 ] ],
- [ [ -1, 20 ], 1, Concat, [ 1 ] ], # cat head P4
- [ -1, 3, C3, [ 512, False ] ], # 26 (P4/16-medium)
-
- [ -1, 1, Conv, [ 512, 3, 2 ] ],
- [ [ -1, 16 ], 1, Concat, [ 1 ] ], # cat head P5
- [ -1, 3, C3, [ 768, False ] ], # 29 (P5/32-large)
-
- [ -1, 1, Conv, [ 768, 3, 2 ] ],
- [ [ -1, 12 ], 1, Concat, [ 1 ] ], # cat head P6
- [ -1, 3, C3, [ 1024, False ] ], # 32 (P6/64-xlarge)
-
- [ [ 23, 26, 29, 32 ], 1, Detect, [ nc, anchors ] ], # Detect(P3, P4, P5, P6)
  ]

  depth_multiple: 1.33 # model depth multiple
  width_multiple: 1.25 # layer channel multiple
  anchors:
+ - [19,27, 44,40, 38,94] # P3/8
+ - [96,68, 86,152, 180,137] # P4/16
+ - [140,301, 303,264, 238,542] # P5/32
+ - [436,615, 739,380, 925,792] # P6/64

  # YOLOv5 backbone
  backbone:
  # [from, number, module, args]
+ [[-1, 1, Focus, [64, 3]], # 0-P1/2
+ [-1, 1, Conv, [128, 3, 2]], # 1-P2/4
+ [-1, 3, C3, [128]],
+ [-1, 1, Conv, [256, 3, 2]], # 3-P3/8
+ [-1, 9, C3, [256]],
+ [-1, 1, Conv, [512, 3, 2]], # 5-P4/16
+ [-1, 9, C3, [512]],
+ [-1, 1, Conv, [768, 3, 2]], # 7-P5/32
+ [-1, 3, C3, [768]],
+ [-1, 1, Conv, [1024, 3, 2]], # 9-P6/64
+ [-1, 1, SPP, [1024, [3, 5, 7]]],
+ [-1, 3, C3, [1024, False]], # 11
  ]

  # YOLOv5 head
  head:
+ [[-1, 1, Conv, [768, 1, 1]],
+ [-1, 1, nn.Upsample, [None, 2, 'nearest']],
+ [[-1, 8], 1, Concat, [1]], # cat backbone P5
+ [-1, 3, C3, [768, False]], # 15
+
+ [-1, 1, Conv, [512, 1, 1]],
+ [-1, 1, nn.Upsample, [None, 2, 'nearest']],
+ [[-1, 6], 1, Concat, [1]], # cat backbone P4
+ [-1, 3, C3, [512, False]], # 19
+
+ [-1, 1, Conv, [256, 1, 1]],
+ [-1, 1, nn.Upsample, [None, 2, 'nearest']],
+ [[-1, 4], 1, Concat, [1]], # cat backbone P3
+ [-1, 3, C3, [256, False]], # 23 (P3/8-small)
+
+ [-1, 1, Conv, [256, 3, 2]],
+ [[-1, 20], 1, Concat, [1]], # cat head P4
+ [-1, 3, C3, [512, False]], # 26 (P4/16-medium)
+
+ [-1, 1, Conv, [512, 3, 2]],
+ [[-1, 16], 1, Concat, [1]], # cat head P5
+ [-1, 3, C3, [768, False]], # 29 (P5/32-large)
+
+ [-1, 1, Conv, [768, 3, 2]],
+ [[-1, 12], 1, Concat, [1]], # cat head P6
+ [-1, 3, C3, [1024, False]], # 32 (P6/64-xlarge)
+
+ [[23, 26, 29, 32], 1, Detect, [nc, anchors]], # Detect(P3, P4, P5, P6)
  ]
train.py CHANGED
@@ -74,7 +74,7 @@ def train(hyp, # path/to/hyp.yaml or hyp dictionary
  with open(save_dir / 'opt.yaml', 'w') as f:
  yaml.safe_dump(vars(opt), f, sort_keys=False)
  data_dict = None
-
  # Loggers
  if RANK in [-1, 0]:
  loggers = Loggers(save_dir, weights, opt, hyp, LOGGER).start() # loggers dict
@@ -83,7 +83,6 @@ def train(hyp, # path/to/hyp.yaml or hyp dictionary
  if resume:
  weights, epochs, hyp = opt.weights, opt.epochs, opt.hyp

-
  # Config
  plots = not evolve # create plots
  cuda = device.type != 'cpu'
@@ -96,7 +95,6 @@ def train(hyp, # path/to/hyp.yaml or hyp dictionary
  assert len(names) == nc, f'{len(names)} names found for nc={nc} dataset in {data}' # check
  is_coco = data.endswith('coco.yaml') and nc == 80 # COCO dataset

-
  # Model
  pretrained = weights.endswith('.pt')
  if pretrained:

  with open(save_dir / 'opt.yaml', 'w') as f:
  yaml.safe_dump(vars(opt), f, sort_keys=False)
  data_dict = None
+
  # Loggers
  if RANK in [-1, 0]:
  loggers = Loggers(save_dir, weights, opt, hyp, LOGGER).start() # loggers dict

  if resume:
  weights, epochs, hyp = opt.weights, opt.epochs, opt.hyp

  # Config
  plots = not evolve # create plots
  cuda = device.type != 'cpu'

  assert len(names) == nc, f'{len(names)} names found for nc={nc} dataset in {data}' # check
  is_coco = data.endswith('coco.yaml') and nc == 80 # COCO dataset

  # Model
  pretrained = weights.endswith('.pt')
  if pretrained:
utils/downloads.py CHANGED
@@ -115,7 +115,6 @@ def get_token(cookie="./cookie"):
  return line.split()[-1]
  return ""

-
  # Google utils: https://cloud.google.com/storage/docs/reference/libraries ----------------------------------------------
  #
  #

  return line.split()[-1]
  return ""

  # Google utils: https://cloud.google.com/storage/docs/reference/libraries ----------------------------------------------
  #
  #
utils/loggers/__init__.py CHANGED
@@ -1,7 +1,8 @@
  # YOLOv5 experiment logging utils
- import torch
  import warnings
  from threading import Thread
  from torch.utils.tensorboard import SummaryWriter

  from utils.general import colorstr, emojis

  # YOLOv5 experiment logging utils
  import warnings
  from threading import Thread
+
+ import torch
  from torch.utils.tensorboard import SummaryWriter

  from utils.general import colorstr, emojis
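The Python edits in this commit are mostly import regrouping to the PEP 8 convention that PyCharm's import optimizer applies: standard-library imports first, then third-party packages, separated by a blank line. Schematically:

```python
# standard library first
import warnings
from threading import Thread

# third-party packages after a blank line
import torch
from torch.utils.tensorboard import SummaryWriter
```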
utils/loggers/wandb/log_dataset.py CHANGED
@@ -1,5 +1,4 @@
  import argparse
- import yaml

  from wandb_utils import WandbLogger


  import argparse

  from wandb_utils import WandbLogger

utils/loggers/wandb/sweep.py CHANGED
@@ -1,7 +1,8 @@
  import sys
- import wandb
  from pathlib import Path

  FILE = Path(__file__).absolute()
  sys.path.append(FILE.parents[2].as_posix()) # add utils/ to path


  import sys
  from pathlib import Path

+ import wandb
+
  FILE = Path(__file__).absolute()
  sys.path.append(FILE.parents[2].as_posix()) # add utils/ to path

utils/loggers/wandb/sweep.yaml CHANGED
@@ -25,9 +25,9 @@ parameters:
  data:
  value: "data/coco128.yaml"
  batch_size:
- values: [ 64 ]
  epochs:
- values: [ 10 ]

  lr0:
  distribution: uniform

  data:
  value: "data/coco128.yaml"
  batch_size:
+ values: [64]
  epochs:
+ values: [10]

  lr0:
  distribution: uniform
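For context, sweep.yaml is a standard Weights & Biases sweep config: a `values` list enumerates a discrete choice set for a parameter, while entries like `distribution: uniform` sample continuously. Parsed as plain data (illustrative snippet using only the entries visible in this hunk):

```python
import yaml

cfg = yaml.safe_load("""
batch_size:
  values: [64]
epochs:
  values: [10]
""")
print(cfg['batch_size']['values'])  # [64] -- a one-element discrete grid
```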
utils/loggers/wandb/wandb_utils.py CHANGED
@@ -3,9 +3,10 @@
  import logging
  import os
  import sys
- import yaml
  from contextlib import contextmanager
  from pathlib import Path
  from tqdm import tqdm

  FILE = Path(__file__).absolute()

  import logging
  import os
  import sys
  from contextlib import contextmanager
  from pathlib import Path
+
+ import yaml
  from tqdm import tqdm

  FILE = Path(__file__).absolute()
val.py CHANGED
@@ -13,7 +13,6 @@ from threading import Thread

  import numpy as np
  import torch
- import yaml
  from tqdm import tqdm

  FILE = Path(__file__).absolute()


  import numpy as np
  import torch
  from tqdm import tqdm

  FILE = Path(__file__).absolute()