Add mdformat to precommit checks and update other version (#7529)
* Update .pre-commit-config.yaml
* [pre-commit.ci] auto fixes from pre-commit.com hooks (for more information, see https://pre-commit.ci)
* Update .pre-commit-config.yaml
* [pre-commit.ci] auto fixes from pre-commit.com hooks (for more information, see https://pre-commit.ci)
* Update CONTRIBUTING.md
* [pre-commit.ci] auto fixes from pre-commit.com hooks (for more information, see https://pre-commit.ci)
* Update README.md
* Update README.md
* Update README.md
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Glenn Jocher <glenn.jocher@ultralytics.com>
5 files changed:

- .github/CODE_OF_CONDUCT.md +12 -12
- .pre-commit-config.yaml +13 -11
- CONTRIBUTING.md +11 -7
- README.md +25 -28
- utils/loggers/wandb/README.md +58 -48
.github/CODE_OF_CONDUCT.md
CHANGED
mdformat reformatted the two behavior lists and moved the `[homepage]` link reference to the end of the file:

@@ -17,23 +17,23 @@ diverse, inclusive, and healthy community.
 Examples of behavior that contributes to a positive environment for our
 community include:

+- Demonstrating empathy and kindness toward other people
+- Being respectful of differing opinions, viewpoints, and experiences
+- Giving and gracefully accepting constructive feedback
+- Accepting responsibility and apologizing to those affected by our mistakes,
   and learning from the experience
+- Focusing on what is best not just for us as individuals, but for the
   overall community

 Examples of unacceptable behavior include:

+- The use of sexualized language or imagery, and sexual attention or
   advances of any kind
+- Trolling, insulting or derogatory comments, and personal or political attacks
+- Public or private harassment
+- Publishing others' private information, such as a physical or email
   address, without their explicit permission
+- Other conduct which could reasonably be considered inappropriate in a
   professional setting

 ## Enforcement Responsibilities

@@ -121,8 +121,8 @@ https://www.contributor-covenant.org/version/2/0/code_of_conduct.html.
 Community Impact Guidelines were inspired by [Mozilla's code of conduct
 enforcement ladder](https://github.com/mozilla/diversity).

-[homepage]: https://www.contributor-covenant.org
-
 For answers to common questions about this code of conduct, see the FAQ at
 https://www.contributor-covenant.org/faq. Translations are available at
 https://www.contributor-covenant.org/translations.
+
+[homepage]: https://www.contributor-covenant.org
.pre-commit-config.yaml
CHANGED
Two hook revisions were pinned to newer releases and a new mdformat hook was added, with README.md excluded from Markdown formatting:

@@ -13,7 +13,7 @@ ci:

 repos:
   - repo: https://github.com/pre-commit/pre-commit-hooks
-    rev: v4.
+    rev: v4.2.0
     hooks:
       - id: end-of-file-fixer
       - id: trailing-whitespace

@@ -24,7 +24,7 @@ repos:
       - id: check-docstring-first

   - repo: https://github.com/asottile/pyupgrade
-    rev: v2.
+    rev: v2.32.0
     hooks:
       - id: pyupgrade
         args: [--py36-plus]

@@ -42,15 +42,17 @@ repos:
       - id: yapf
         name: YAPF formatting

+  - repo: https://github.com/executablebooks/mdformat
+    rev: 0.7.14
+    hooks:
+      - id: mdformat
+        additional_dependencies:
+          - mdformat-gfm
+          - mdformat-black
+        exclude: |
+          (?x)^(
+              README.md
+          )$

   - repo: https://github.com/asottile/yesqa
     rev: v1.3.0
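With this config in place, contributors can run the same checks locally before pushing. A minimal sketch using the standard pre-commit CLI (the plugin installs mirror the `additional_dependencies` above):

```shell
pip install pre-commit        # one-time install
pre-commit install            # register the git hook in your clone
pre-commit run --all-files    # run every configured hook, including mdformat

# or format a single Markdown file directly with the same plugins
pip install mdformat-gfm mdformat-black
mdformat CONTRIBUTING.md
```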
CONTRIBUTING.md
CHANGED
mdformat added blank lines before the screenshot images and reformatted the bug-report criteria lists; the unbalanced bold markers on **Bug Report** were also fixed:

@@ -18,16 +18,19 @@ Submitting a PR is easy! This example shows how to submit a PR for updating `requirements.txt`.
 ### 1. Select File to Update

 Select `requirements.txt` to update by clicking on it in GitHub.
+
 <p align="center"><img width="800" alt="PR_step1" src="https://user-images.githubusercontent.com/26833433/122260847-08be2600-ced4-11eb-828b-8287ace4136c.png"></p>

 ### 2. Click 'Edit this file'

 Button is in top-right corner.
+
 <p align="center"><img width="800" alt="PR_step2" src="https://user-images.githubusercontent.com/26833433/122260844-06f46280-ced4-11eb-9eec-b8a24be519ca.png"></p>

 ### 3. Make Changes

 Change `matplotlib` version from `3.2.2` to `3.3`.
+
 <p align="center"><img width="800" alt="PR_step3" src="https://user-images.githubusercontent.com/26833433/122260853-0a87e980-ced4-11eb-9fd2-3650fb6e0842.png"></p>

 ### 4. Preview Changes and Submit PR

@@ -35,6 +38,7 @@ Change `matplotlib` version from `3.2.2` to `3.3`.
 Click on the **Preview changes** tab to verify your updates. At the bottom of the screen select 'Create a **new branch**
 for this commit', assign your branch a descriptive name such as `fix/matplotlib_version` and click the green **Propose
 changes** button. All done, your PR is now submitted to YOLOv5 for review and approval 😃!
+
 <p align="center"><img width="800" alt="PR_step4" src="https://user-images.githubusercontent.com/26833433/122260856-0b208000-ced4-11eb-8e8e-77b6151cbcc3.png"></p>

 ### PR recommendations

@@ -70,21 +74,21 @@ understand and use to **reproduce** the problem. This is referred to by community
 a [minimum reproducible example](https://stackoverflow.com/help/minimal-reproducible-example). Your code that reproduces
 the problem should be:

+- ✅ **Minimal** – Use as little code as possible that still produces the same problem
+- ✅ **Complete** – Provide **all** parts someone else needs to reproduce your problem in the question itself
+- ✅ **Reproducible** – Test the code you're about to provide to make sure it reproduces the problem

 In addition to the above requirements, for [Ultralytics](https://ultralytics.com/) to provide assistance your code
 should be:

+- ✅ **Current** – Verify that your code is up-to-date with current
   GitHub [master](https://github.com/ultralytics/yolov5/tree/master), and if necessary `git pull` or `git clone` a new
   copy to ensure your problem has not already been resolved by previous commits.
+- ✅ **Unmodified** – Your problem must be reproducible without any modifications to the codebase in this
   repository. [Ultralytics](https://ultralytics.com/) does not provide support for custom code ⚠️.

-If you believe your problem meets all of the above criteria, please close this issue and raise a new one using the 🐛
-Bug Report** [template](https://github.com/ultralytics/yolov5/issues/new/choose) and providing
+If you believe your problem meets all of the above criteria, please close this issue and raise a new one using the 🐛
+**Bug Report** [template](https://github.com/ultralytics/yolov5/issues/new/choose) and providing
 a [minimum reproducible example](https://stackoverflow.com/help/minimal-reproducible-example) to help us better
 understand and diagnose your problem.
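As a concrete illustration of the **Current** and **Unmodified** criteria above, a bug report would reproduce the problem against a fresh, unmodified copy of master, for example (the `detect.py` command is just a placeholder for whatever triggers your issue):

```shell
git clone https://github.com/ultralytics/yolov5    # fresh copy of current master
cd yolov5
pip install -r requirements.txt
python detect.py --weights yolov5s.pt --source data/images    # stock command, no local edits
# paste the complete console output into the Bug Report template
```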
README.md
CHANGED
mdformat collapsed duplicate blank lines, reformatted the Tutorials, Figure Notes and Table Notes lists, and moved the `[assets]` and `[tta]` link references to the end of the file:

@@ -103,8 +103,6 @@ results.print()  # or .show(), .save(), .crop(), .pandas(), etc.
 </details>

-
-
 <details>
 <summary>Inference with detect.py</summary>

@@ -149,20 +147,20 @@ python train.py --data coco.yaml --cfg yolov5n.yaml --weights '' --batch-size 128
 <details open>
 <summary>Tutorials</summary>

+- [Train Custom Data](https://github.com/ultralytics/yolov5/wiki/Train-Custom-Data) 🚀 RECOMMENDED
+- [Tips for Best Training Results](https://github.com/ultralytics/yolov5/wiki/Tips-for-Best-Training-Results) ☘️
   RECOMMENDED
+- [Weights & Biases Logging](https://github.com/ultralytics/yolov5/issues/1289) 🌟 NEW
+- [Roboflow for Datasets, Labeling, and Active Learning](https://github.com/ultralytics/yolov5/issues/4975) 🌟 NEW
+- [Multi-GPU Training](https://github.com/ultralytics/yolov5/issues/475)
+- [PyTorch Hub](https://github.com/ultralytics/yolov5/issues/36) ⭐ NEW
+- [TFLite, ONNX, CoreML, TensorRT Export](https://github.com/ultralytics/yolov5/issues/251) 🚀
+- [Test-Time Augmentation (TTA)](https://github.com/ultralytics/yolov5/issues/303)
+- [Model Ensembling](https://github.com/ultralytics/yolov5/issues/318)
+- [Model Pruning/Sparsity](https://github.com/ultralytics/yolov5/issues/304)
+- [Hyperparameter Evolution](https://github.com/ultralytics/yolov5/issues/607)
+- [Transfer Learning with Frozen Layers](https://github.com/ultralytics/yolov5/issues/1314) ⭐ NEW
+- [Architecture Summary](https://github.com/ultralytics/yolov5/issues/6998) ⭐ NEW

 </details>

@@ -203,7 +201,6 @@ Get started in seconds with our verified environments. Click each icon below for details.
 |:-:|:-:|
 |Automatically track and visualize all your YOLOv5 training runs in the cloud with [Weights & Biases](https://wandb.ai/site?utm_campaign=repo_yolo_readme)|Label and export your custom datasets directly to YOLOv5 for training with [Roboflow](https://roboflow.com/?ref=ultralytics) |

-
 <!-- ## <div align="center">Compete and Win</div>

 We are super excited about our first-ever Ultralytics YOLOv5 🚀 EXPORT Competition with **$10,000** in cash prizes!

@@ -224,18 +221,15 @@ We are super excited about our first-ever Ultralytics YOLOv5 🚀 EXPORT Competition with **$10,000** in cash prizes!
 <details>
 <summary>Figure Notes (click to expand)</summary>

+- **COCO AP val** denotes mAP@0.5:0.95 metric measured on the 5000-image [COCO val2017](http://cocodataset.org) dataset over various inference sizes from 256 to 1536.
+- **GPU Speed** measures average inference time per image on [COCO val2017](http://cocodataset.org) dataset using a [AWS p3.2xlarge](https://aws.amazon.com/ec2/instance-types/p3/) V100 instance at batch-size 32.
+- **EfficientDet** data from [google/automl](https://github.com/google/automl) at batch size 8.
+- **Reproduce** by `python val.py --task study --data coco.yaml --iou 0.7 --weights yolov5n6.pt yolov5s6.pt yolov5m6.pt yolov5l6.pt yolov5x6.pt`
+
 </details>

 ### Pretrained Checkpoints

-[assets]: https://github.com/ultralytics/yolov5/releases
-
-[TTA]: https://github.com/ultralytics/yolov5/issues/303
-
 |Model |size<br><sup>(pixels) |mAP<sup>val<br>0.5:0.95 |mAP<sup>val<br>0.5 |Speed<br><sup>CPU b1<br>(ms) |Speed<br><sup>V100 b1<br>(ms) |Speed<br><sup>V100 b32<br>(ms) |params<br><sup>(M) |FLOPs<br><sup>@640 (B)
 |--- |--- |--- |--- |--- |--- |--- |--- |---
 |[YOLOv5n][assets] |640 |28.0 |45.7 |**45** |**6.3**|**0.6**|**1.9**|**4.5**

@@ -253,10 +247,10 @@ We are super excited about our first-ever Ultralytics YOLOv5 🚀 EXPORT Competition with **$10,000** in cash prizes!
 <details>
 <summary>Table Notes (click to expand)</summary>

+- All checkpoints are trained to 300 epochs with default settings. Nano and Small models use [hyp.scratch-low.yaml](https://github.com/ultralytics/yolov5/blob/master/data/hyps/hyp.scratch-low.yaml) hyps, all others use [hyp.scratch-high.yaml](https://github.com/ultralytics/yolov5/blob/master/data/hyps/hyp.scratch-high.yaml).
+- **mAP<sup>val</sup>** values are for single-model single-scale on [COCO val2017](http://cocodataset.org) dataset.<br>Reproduce by `python val.py --data coco.yaml --img 640 --conf 0.001 --iou 0.65`
+- **Speed** averaged over COCO val images using a [AWS p3.2xlarge](https://aws.amazon.com/ec2/instance-types/p3/) instance. NMS times (~1 ms/img) not included.<br>Reproduce by `python val.py --data coco.yaml --img 640 --task speed --batch 1`
+- **TTA** [Test Time Augmentation](https://github.com/ultralytics/yolov5/issues/303) includes reflection and scale augmentations.<br>Reproduce by `python val.py --data coco.yaml --img 1536 --iou 0.7 --augment`

 </details>

@@ -302,3 +296,6 @@ professional support requests please visit [https://ultralytics.com/contact](https://ultralytics.com/contact).
 <img src="https://github.com/ultralytics/yolov5/releases/download/v1.0/logo-social-instagram.png" width="3%"/>
 </a>
 </div>
+
+[assets]: https://github.com/ultralytics/yolov5/releases
+[tta]: https://github.com/ultralytics/yolov5/issues/303
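The Table Notes reproduce commands omit a `--weights` argument; scoring a single checkpoint from the table might look like the sketch below (the choice of `yolov5s.pt` is illustrative, and `coco.yaml` should fetch COCO val2017 automatically on first use):

```shell
# reproduce the mAP column for one model from the Pretrained Checkpoints table
python val.py --data coco.yaml --weights yolov5s.pt --img 640 --conf 0.001 --iou 0.65
```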
utils/loggers/wandb/README.md
CHANGED
The guide was reworked end-to-end (`@@ -1,66 +1,72 @@` and following hunks): a table of contents, feature and logging lists, a new "Disabling wandb" section, and `<h3>` headings for the Advanced Usage subsections were added, while blank lines and `</details>` placement were normalized. The updated guide, with added lines marked `+`:
 📚 This guide explains how to use **Weights & Biases** (W&B) with YOLOv5 🚀. UPDATED 29 September 2021.
+
+- [About Weights & Biases](#about-weights--biases)
+- [First-Time Setup](#first-time-setup)
+- [Viewing runs](#viewing-runs)
+- [Disabling wandb](#disabling-wandb)
+- [Advanced Usage: Dataset Versioning and Evaluation](#advanced-usage)
+- [Reports: Share your work with the world!](#reports)

 ## About Weights & Biases
+
 Think of [W&B](https://wandb.ai/site?utm_campaign=repo_yolo_wandbtutorial) like GitHub for machine learning models. With a few lines of code, save everything you need to debug, compare and reproduce your models — architecture, hyperparameters, git commits, model weights, GPU usage, and even datasets and predictions.

 Used by top researchers including teams at OpenAI, Lyft, GitHub, and MILA, W&B is part of the new standard of best practices for machine learning. How W&B can help you optimize your machine learning workflows:

+- [Debug](https://wandb.ai/wandb/getting-started/reports/Visualize-Debug-Machine-Learning-Models--VmlldzoyNzY5MDk#Free-2) model performance in real time
+- [GPU usage](https://wandb.ai/wandb/getting-started/reports/Visualize-Debug-Machine-Learning-Models--VmlldzoyNzY5MDk#System-4) visualized automatically
+- [Custom charts](https://wandb.ai/wandb/customizable-charts/reports/Powerful-Custom-Charts-To-Debug-Model-Peformance--VmlldzoyNzY4ODI) for powerful, extensible visualization
+- [Share insights](https://wandb.ai/wandb/getting-started/reports/Visualize-Debug-Machine-Learning-Models--VmlldzoyNzY5MDk#Share-8) interactively with collaborators
+- [Optimize hyperparameters](https://docs.wandb.com/sweeps) efficiently
+- [Track](https://docs.wandb.com/artifacts) datasets, pipelines, and production models

 ## First-Time Setup
+
 <details open>
 <summary> Toggle Details </summary>
 When you first train, W&B will prompt you to create a new account and will generate an **API key** for you. If you are an existing user you can retrieve your key from https://wandb.ai/authorize. This key is used to tell W&B where to log your data. You only need to supply your key once, and then it is remembered on the same device.

 W&B will create a cloud **project** (default is 'YOLOv5') for your training runs, and each new training run will be provided a unique run **name** within that project as project/name. You can also manually set your project and run name as:

+```shell
+$ python train.py --project ... --name ...
+```

 YOLOv5 notebook example: <a href="https://colab.research.google.com/github/ultralytics/yolov5/blob/master/tutorial.ipynb"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"></a> <a href="https://www.kaggle.com/ultralytics/yolov5"><img src="https://kaggle.com/static/images/open-in-kaggle.svg" alt="Open In Kaggle"></a>
 <img width="960" alt="Screen Shot 2021-09-29 at 10 23 13 PM" src="https://user-images.githubusercontent.com/26833433/135392431-1ab7920a-c49d-450a-b0b0-0c86ec86100e.png">

+</details>
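One step the guide leaves implicit: installing and authenticating the wandb client before the first run. A minimal sketch using the standard wandb CLI (the project/run names are illustrative):

```shell
pip install wandb       # W&B client; train.py logs to W&B only when it is installed
wandb login             # paste the API key from https://wandb.ai/authorize when prompted
python train.py --project YOLOv5 --name my_first_run
```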
 ## Viewing Runs
+
 <details open>
 <summary> Toggle Details </summary>
 Run information streams from your environment to the W&B cloud console as you train. This allows you to monitor and even cancel runs in <b>realtime</b>. All important information is logged:

+- Training & Validation losses
+- Metrics: Precision, Recall, mAP@0.5, mAP@0.5:0.95
+- Learning Rate over time
+- A bounding box debugging panel, showing the training progress over time
+- GPU: Type, **GPU Utilization**, power, temperature, **CUDA memory usage**
+- System: Disk I/O, CPU utilization, RAM memory usage
+- Your trained model as W&B Artifact
+- Environment: OS and Python types, Git repository and state, **training command**

 <p align="center"><img width="900" alt="Weights & Biases dashboard" src="https://user-images.githubusercontent.com/26833433/135390767-c28b050f-8455-4004-adb0-3b730386e2b2.png"></p>
 </details>
+## Disabling wandb
+
+- Training after running `wandb disabled` inside that directory creates no wandb run
+![Screenshot (84)](https://user-images.githubusercontent.com/15766192/143441777-c780bdd7-7cb4-4404-9559-b4316030a985.png)

+- To enable wandb again, run `wandb online`
+![Screenshot (85)](https://user-images.githubusercontent.com/15766192/143441866-7191b2cb-22f0-4e0f-ae64-2dc47dc13078.png)
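In shell form (both toggles are standard wandb CLI subcommands; the training command is the usual COCO128 quickstart):

```shell
cd yolov5
wandb disabled     # subsequent runs in this directory create no wandb run
python train.py --img 640 --batch 16 --epochs 3 --data coco128.yaml --weights yolov5s.pt
wandb online       # re-enable W&B logging for future runs
```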
 ## Advanced Usage
+
 You can leverage W&B artifacts and Tables integration to easily visualize and manage your datasets, models and training evaluations. Here are some quick examples to get you started.
+
 <details open>
 <h3> 1: Train and Log Evaluation simultaneously </h3>
 This is an extension of the previous section, but it'll also start training after uploading the dataset. <b>This also logs the evaluation table.</b>
 <details>
 <summary> <b>Usage</b> </summary>
 <b>Code</b> <code> $ python train.py --upload_data val</code>

 ![Screenshot from 2021-11-21 17-40-06](https://user-images.githubusercontent.com/15766192/142761183-c1696d8c-3f38-45ab-991a-bb0dfd98ae7d.png)

+</details>
+
+<h3>2. Visualize and Version Datasets</h3>
 Log, visualize, dynamically query, and understand your data with <a href='https://docs.wandb.ai/guides/data-vis/tables'>W&B Tables</a>. You can use the following command to log your dataset as a W&B Table. This will generate a <code>{dataset}_wandb.yaml</code> file which can be used to train from the dataset artifact.
 <details>
 <summary> <b>Usage</b> </summary>
 <b>Code</b> <code> $ python utils/logger/wandb/log_dataset.py --project ... --name ... --data .. </code>

+![Screenshot (64)](https://user-images.githubusercontent.com/15766192/128486078-d8433890-98a3-4d12-8986-b6c0e3fc64b9.png)
+
+</details>

+<h3> 3: Train using dataset artifact </h3>
 When you upload a dataset as described in the first section, you get a new config file with `_wandb` added to its name. This file contains the information that
 can be used to train a model directly from the dataset artifact. <b> This also logs evaluation </b>
 <details>
 <summary> <b>Usage</b> </summary>
 <b>Code</b> <code> $ python train.py --data {data}_wandb.yaml </code>

 ![Screenshot (72)](https://user-images.githubusercontent.com/15766192/128979739-4cf63aeb-a76f-483f-8861-1c0100b938a5.png)

+</details>
+
+<h3> 4: Save model checkpoints as artifacts </h3>
 To enable saving and versioning checkpoints of your experiment, pass `--save_period n` with the base command, where `n` represents the checkpoint interval.
 You can also log both the dataset and model checkpoints simultaneously. If not passed, only the final model will be logged.

+<details>
 <summary> <b>Usage</b> </summary>
 <b>Code</b> <code> $ python train.py --save_period 1 </code>

 ![Screenshot (68)](https://user-images.githubusercontent.com/15766192/128726138-ec6c1f60-639d-437d-b4ee-3acd9de47ef3.png)

 </details>

+</details>
+
+<h3> 5: Resume runs from checkpoint artifacts </h3>
 Any run can be resumed using artifacts if the <code>--resume</code> argument starts with the <code>wandb-artifact://</code> prefix followed by the run path, i.e. <code>wandb-artifact://username/project/runid</code>. This doesn't require the model checkpoint to be present on the local system.

+<details>
 <summary> <b>Usage</b> </summary>
 <b>Code</b> <code> $ python train.py --resume wandb-artifact://{run_path} </code>

 ![Screenshot (70)](https://user-images.githubusercontent.com/15766192/128728988-4e84b355-6c87-41ae-a591-14aecf45343e.png)

+</details>
+
+<h3> 6: Resume runs from dataset artifact & checkpoint artifacts </h3>
 <b> Local dataset or model checkpoints are not required. This can be used to resume runs directly on a different device </b>
 The syntax is the same as the previous section, but you'll need to log both the dataset and model checkpoints as artifacts, i.e. both set <code>--upload_dataset</code> (or
 train from a <code>_wandb.yaml</code> file) and set <code>--save_period</code>.

+<details>
 <summary> <b>Usage</b> </summary>
 <b>Code</b> <code> $ python train.py --resume wandb-artifact://{run_path} </code>

 ![Screenshot (70)](https://user-images.githubusercontent.com/15766192/128728988-4e84b355-6c87-41ae-a591-14aecf45343e.png)

 </details>

+</details>
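Putting sections 4-6 together, the full artifact round-trip might look like this sketch (replace `{run_path}` with your own `username/project/runid`; the flag combination follows the descriptions above):

```shell
# 1) train while uploading the dataset and every-epoch checkpoints as W&B artifacts
python train.py --data coco128.yaml --weights yolov5s.pt --upload_dataset --save_period 1

# 2) later, on any machine, resume the same run directly from those artifacts
python train.py --resume wandb-artifact://{run_path}
```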
+<h3> Reports </h3>
 W&B Reports can be created from your saved runs for sharing online. Once a report is created you will receive a link you can use to publicly share your results. Here is an example report created from the COCO128 tutorial trainings of all four YOLOv5 models ([link](https://wandb.ai/glenn-jocher/yolov5_tutorial/reports/YOLOv5-COCO128-Tutorial-Results--VmlldzozMDI5OTY)).

 <img width="900" alt="Weights & Biases Reports" src="https://user-images.githubusercontent.com/26833433/135394029-a17eaf86-c6c1-4b1d-bb80-b90e83aaffa7.png">

 ## Environments

 YOLOv5 may be run in any of the following up-to-date verified environments (with all dependencies including [CUDA](https://developer.nvidia.com/cuda)/[CUDNN](https://developer.nvidia.com/cudnn), [Python](https://www.python.org/) and [PyTorch](https://pytorch.org/) preinstalled):

 ...
 - **Amazon** Deep Learning AMI. See [AWS Quickstart Guide](https://github.com/ultralytics/yolov5/wiki/AWS-Quickstart)
 - **Docker Image**. See [Docker Quickstart Guide](https://github.com/ultralytics/yolov5/wiki/Docker-Quickstart) <a href="https://hub.docker.com/r/ultralytics/yolov5"><img src="https://img.shields.io/docker/pulls/ultralytics/yolov5?logo=docker" alt="Docker Pulls"></a>

 ## Status

 ![CI CPU testing](https://github.com/ultralytics/yolov5/workflows/CI%20CPU%20testing/badge.svg)