glenn-jocher committed • Commit 720aaa6 • 1 parent: 8ee9fd1

Rename `test.py` to `val.py` (#4000)
Files changed:
- .github/ISSUE_TEMPLATE/bug-report.md (+1 -1)
- .github/workflows/ci-testing.yml (+3 -3)
- .github/workflows/greetings.yml (+1 -1)
- README.md (+4 -4)
- models/yolo.py (+0 -1)
- train.py (+34 -34)
- tutorial.ipynb (+18 -18)
- utils/augmentations.py (+1 -1)
- utils/general.py (+1 -1)
- utils/plots.py (+4 -4)
- test.py → val.py (+10 -10)
.github/ISSUE_TEMPLATE/bug-report.md
CHANGED
@@ -12,7 +12,7 @@ Before submitting a bug report, please be aware that your issue **must be reproducible**
 - **Common dataset**: coco.yaml or coco128.yaml
 - **Common environment**: Colab, Google Cloud, or Docker image. See https://github.com/ultralytics/yolov5#environments

-If this is a custom dataset/training question you **must include** your `train*.jpg`, `test*.jpg` and `results.png` figures, or we can not help you. You can generate these with `utils.plot_results()`.
+If this is a custom dataset/training question you **must include** your `train*.jpg`, `val*.jpg` and `results.png` figures, or we can not help you. You can generate these with `utils.plot_results()`.


 ## 🐛 Bug
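The template's `utils.plot_results()` reference points at the plotting helper in `utils/plots.py`. A minimal sketch of regenerating the requested `results.png` from a finished run, assuming the default `runs/train/exp` output directory that train.py uses:

```python
# Sketch only: regenerate results.png for a bug report, assuming a completed
# training run in runs/train/exp (train.py's default save_dir).
from utils.plots import plot_results

plot_results(save_dir='runs/train/exp')  # reads results*.txt, writes results.png
```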
.github/workflows/ci-testing.yml
CHANGED
@@ -68,9 +68,9 @@ jobs:
         # detect
         python detect.py --weights ${{ matrix.model }}.pt --device $di
         python detect.py --weights runs/train/exp/weights/last.pt --device $di
-        # test
-        python test.py --img 128 --batch 16 --weights ${{ matrix.model }}.pt --device $di
-        python test.py --img 128 --batch 16 --weights runs/train/exp/weights/last.pt --device $di
+        # val
+        python val.py --img 128 --batch 16 --weights ${{ matrix.model }}.pt --device $di
+        python val.py --img 128 --batch 16 --weights runs/train/exp/weights/last.pt --device $di

         python hubconf.py  # hub
         python models/yolo.py --cfg ${{ matrix.model }}.yaml  # inspect
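For anyone mirroring this smoke test outside GitHub Actions, a rough local equivalent of the two new val.py invocations; `yolov5s.pt` and `cpu` below stand in for the CI matrix values `${{ matrix.model }}` and `$di`:

```python
# Local sketch of the updated CI check; model and device are stand-ins for
# the CI matrix variables.
import subprocess

for weights in ('yolov5s.pt', 'runs/train/exp/weights/last.pt'):
    subprocess.run(['python', 'val.py', '--img', '128', '--batch', '16',
                    '--weights', weights, '--device', 'cpu'], check=True)
```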
.github/workflows/greetings.yml
CHANGED
@@ -52,5 +52,5 @@ jobs:

         ![CI CPU testing](https://github.com/ultralytics/yolov5/workflows/CI%20CPU%20testing/badge.svg)

-        If this badge is green, all [YOLOv5 GitHub Actions](https://github.com/ultralytics/yolov5/actions) Continuous Integration (CI) tests are currently passing. CI tests verify correct operation of YOLOv5 training ([train.py](https://github.com/ultralytics/yolov5/blob/master/train.py)), testing ([test.py](https://github.com/ultralytics/yolov5/blob/master/test.py)), inference ([detect.py](https://github.com/ultralytics/yolov5/blob/master/detect.py)) and export ([export.py](https://github.com/ultralytics/yolov5/blob/master/export.py)) on MacOS, Windows, and Ubuntu every 24 hours and on every commit.
+        If this badge is green, all [YOLOv5 GitHub Actions](https://github.com/ultralytics/yolov5/actions) Continuous Integration (CI) tests are currently passing. CI tests verify correct operation of YOLOv5 training ([train.py](https://github.com/ultralytics/yolov5/blob/master/train.py)), testing ([val.py](https://github.com/ultralytics/yolov5/blob/master/val.py)), inference ([detect.py](https://github.com/ultralytics/yolov5/blob/master/detect.py)) and export ([export.py](https://github.com/ultralytics/yolov5/blob/master/export.py)) on MacOS, Windows, and Ubuntu every 24 hours and on every commit.
README.md
CHANGED
@@ -197,7 +197,7 @@ We are super excited about our first-ever Ultralytics YOLOv5 🚀 EXPORT Competition

 * GPU Speed measures end-to-end time per image averaged over 5000 COCO val2017 images using a V100 GPU with batch size 32, and includes image preprocessing, PyTorch FP16 inference, postprocessing and NMS.
 * EfficientDet data from [google/automl](https://github.com/google/automl) at batch size 8.
-* **Reproduce** by `python test.py --task study --data coco.yaml --iou 0.7 --weights yolov5s6.pt yolov5m6.pt yolov5l6.pt yolov5x6.pt`
+* **Reproduce** by `python val.py --task study --data coco.yaml --iou 0.7 --weights yolov5s6.pt yolov5m6.pt yolov5l6.pt yolov5x6.pt`
 </details>


@@ -223,10 +223,10 @@ We are super excited about our first-ever Ultralytics YOLOv5 🚀 EXPORT Competition
 <summary>Table Notes (click to expand)</summary>

 * AP<sup>test</sup> denotes COCO [test-dev2017](http://cocodataset.org/#upload) server results, all other AP results denote val2017 accuracy.
-* AP values are for single-model single-scale unless otherwise noted. **Reproduce mAP** by `python test.py --data coco.yaml --img 640 --conf 0.001 --iou 0.65`
-* Speed<sub>GPU</sub> averaged over 5000 COCO val2017 images using a GCP [n1-standard-16](https://cloud.google.com/compute/docs/machine-types#n1_standard_machine_types) V100 instance, and includes FP16 inference, postprocessing and NMS. **Reproduce speed** by `python test.py --data coco.yaml --img 640 --conf 0.25 --iou 0.45`
+* AP values are for single-model single-scale unless otherwise noted. **Reproduce mAP** by `python val.py --data coco.yaml --img 640 --conf 0.001 --iou 0.65`
+* Speed<sub>GPU</sub> averaged over 5000 COCO val2017 images using a GCP [n1-standard-16](https://cloud.google.com/compute/docs/machine-types#n1_standard_machine_types) V100 instance, and includes FP16 inference, postprocessing and NMS. **Reproduce speed** by `python val.py --data coco.yaml --img 640 --conf 0.25 --iou 0.45`
 * All checkpoints are trained to 300 epochs with default settings and hyperparameters (no autoaugmentation).
-* Test Time Augmentation ([TTA](https://github.com/ultralytics/yolov5/issues/303)) includes reflection and scale augmentation. **Reproduce TTA** by `python test.py --data coco.yaml --img 1536 --iou 0.7 --augment`
+* Test Time Augmentation ([TTA](https://github.com/ultralytics/yolov5/issues/303)) includes reflection and scale augmentation. **Reproduce TTA** by `python val.py --data coco.yaml --img 1536 --iou 0.7 --augment`
 </details>
models/yolo.py
CHANGED
@@ -310,4 +310,3 @@ if __name__ == '__main__':
     # tb_writer = SummaryWriter('.')
     # logger.info("Run 'tensorboard --logdir=models' to view tensorboard at http://localhost:6006/")
     # tb_writer.add_graph(torch.jit.trace(model, img, strict=False), [])  # add model graph
-    # tb_writer.add_image('test', img[0], dataformats='CWH')  # add model to tensorboard
train.py
CHANGED
@@ -32,7 +32,7 @@ from tqdm import tqdm
 FILE = Path(__file__).absolute()
 sys.path.append(FILE.parents[0].as_posix())  # add yolov5/ to path

-import test  # import test.py to get mAP after each epoch
+import val  # for end-of-epoch mAP
 from models.experimental import attempt_load
 from models.yolo import Model
 from utils.autoanchor import check_anchors

@@ -57,9 +57,9 @@ def train(hyp,  # path/to/hyp.yaml or hyp dictionary
           opt,
           device,
           ):
-    save_dir, epochs, batch_size, weights, single_cls, evolve, data, cfg, resume, notest, nosave, workers, = \
+    save_dir, epochs, batch_size, weights, single_cls, evolve, data, cfg, resume, noval, nosave, workers, = \
         opt.save_dir, opt.epochs, opt.batch_size, opt.weights, opt.single_cls, opt.evolve, opt.data, opt.cfg, \
-        opt.resume, opt.notest, opt.nosave, opt.workers
+        opt.resume, opt.noval, opt.nosave, opt.workers

     # Directories
     save_dir = Path(save_dir)

@@ -129,7 +129,7 @@ def train(hyp,  # path/to/hyp.yaml or hyp dictionary
     with torch_distributed_zero_first(RANK):
         check_dataset(data_dict)  # check
     train_path = data_dict['train']
-    test_path = data_dict['val']
+    val_path = data_dict['val']

     # Freeze
     freeze = []  # parameter names to freeze (full or partial)

@@ -207,7 +207,7 @@ def train(hyp,  # path/to/hyp.yaml or hyp dictionary
     # Image sizes
     gs = max(int(model.stride.max()), 32)  # grid size (max stride)
     nl = model.model[-1].nl  # number of detection layers (used for scaling hyp['obj'])
-    imgsz, imgsz_test = [check_img_size(x, gs) for x in opt.img_size]  # verify imgsz are gs-multiples
+    imgsz, imgsz_val = [check_img_size(x, gs) for x in opt.img_size]  # verify imgsz are gs-multiples

     # DP mode
     if cuda and RANK == -1 and torch.cuda.device_count() > 1:

@@ -231,8 +231,8 @@ def train(hyp,  # path/to/hyp.yaml or hyp dictionary

     # Process 0
     if RANK in [-1, 0]:
-        testloader = create_dataloader(test_path, imgsz_test, batch_size // WORLD_SIZE * 2, gs, single_cls,
-                                       hyp=hyp, cache=opt.cache_images and not notest, rect=True, rank=-1,
+        valloader = create_dataloader(val_path, imgsz_val, batch_size // WORLD_SIZE * 2, gs, single_cls,
+                                      hyp=hyp, cache=opt.cache_images and not noval, rect=True, rank=-1,
                                       workers=workers,
                                       pad=0.5, prefix=colorstr('val: '))[0]

@@ -276,7 +276,7 @@ def train(hyp,  # path/to/hyp.yaml or hyp dictionary
     scheduler.last_epoch = start_epoch - 1  # do not move
     scaler = amp.GradScaler(enabled=cuda)
     compute_loss = ComputeLoss(model)  # init loss class
-    logger.info(f'Image sizes {imgsz} train, {imgsz_test} test\n'
+    logger.info(f'Image sizes {imgsz} train, {imgsz_val} val\n'
                 f'Using {dataloader.num_workers} dataloader workers\n'
                 f'Logging results to {save_dir}\n'
                 f'Starting training for {epochs} epochs...')

@@ -384,20 +384,20 @@ def train(hyp,  # path/to/hyp.yaml or hyp dictionary
             # mAP
             ema.update_attr(model, include=['yaml', 'nc', 'hyp', 'gr', 'names', 'stride', 'class_weights'])
             final_epoch = epoch + 1 == epochs
-            if not notest or final_epoch:  # Calculate mAP
+            if not noval or final_epoch:  # Calculate mAP
                 wandb_logger.current_epoch = epoch + 1
-                results, maps, _ = test.run(data_dict,
-                                            batch_size=batch_size // WORLD_SIZE * 2,
-                                            imgsz=imgsz_test,
-                                            model=ema.ema,
-                                            single_cls=single_cls,
-                                            dataloader=testloader,
-                                            save_dir=save_dir,
-                                            save_json=is_coco and final_epoch,
-                                            verbose=nc < 50 and final_epoch,
-                                            plots=plots and final_epoch,
-                                            wandb_logger=wandb_logger,
-                                            compute_loss=compute_loss)
+                results, maps, _ = val.run(data_dict,
+                                           batch_size=batch_size // WORLD_SIZE * 2,
+                                           imgsz=imgsz_val,
+                                           model=ema.ema,
+                                           single_cls=single_cls,
+                                           dataloader=valloader,
+                                           save_dir=save_dir,
+                                           save_json=is_coco and final_epoch,
+                                           verbose=nc < 50 and final_epoch,
+                                           plots=plots and final_epoch,
+                                           wandb_logger=wandb_logger,
+                                           compute_loss=compute_loss)

             # Write
             with open(results_file, 'a') as f:

@@ -454,15 +454,15 @@ def train(hyp,  # path/to/hyp.yaml or hyp dictionary
     if not evolve:
         if is_coco:  # COCO dataset
             for m in [last, best] if best.exists() else [last]:  # speed, mAP tests
-                results, _, _ = test.run(data_dict,
-                                         batch_size=batch_size // WORLD_SIZE * 2,
-                                         imgsz=imgsz_test,
-                                         model=attempt_load(m, device).half(),
-                                         single_cls=single_cls,
-                                         dataloader=testloader,
-                                         save_dir=save_dir,
-                                         save_json=True,
-                                         plots=False)
+                results, _, _ = val.run(data_dict,
+                                        batch_size=batch_size // WORLD_SIZE * 2,
+                                        imgsz=imgsz_val,
+                                        model=attempt_load(m, device).half(),
+                                        single_cls=single_cls,
+                                        dataloader=valloader,
+                                        save_dir=save_dir,
+                                        save_json=True,
+                                        plots=False)

         # Strip optimizers
         for f in last, best:

@@ -486,11 +486,11 @@ def parse_opt(known=False):
     parser.add_argument('--hyp', type=str, default='data/hyps/hyp.scratch.yaml', help='hyperparameters path')
     parser.add_argument('--epochs', type=int, default=300)
     parser.add_argument('--batch-size', type=int, default=16, help='total batch size for all GPUs')
-    parser.add_argument('--img-size', nargs='+', type=int, default=[640, 640], help='[train, test] image sizes')
+    parser.add_argument('--img-size', nargs='+', type=int, default=[640, 640], help='[train, val] image sizes')
     parser.add_argument('--rect', action='store_true', help='rectangular training')
     parser.add_argument('--resume', nargs='?', const=True, default=False, help='resume most recent training')
     parser.add_argument('--nosave', action='store_true', help='only save final checkpoint')
-    parser.add_argument('--notest', action='store_true', help='only test final epoch')
+    parser.add_argument('--noval', action='store_true', help='only validate final epoch')
     parser.add_argument('--noautoanchor', action='store_true', help='disable autoanchor check')
     parser.add_argument('--evolve', type=int, nargs='?', const=300, help='evolve hyperparameters for x generations')
    parser.add_argument('--bucket', type=str, default='', help='gsutil bucket')

@@ -538,7 +538,7 @@ def main(opt):
     # opt.hyp = opt.hyp or ('hyp.finetune.yaml' if opt.weights else 'hyp.scratch.yaml')
     opt.data, opt.cfg, opt.hyp = check_file(opt.data), check_file(opt.cfg), check_file(opt.hyp)  # check files
     assert len(opt.cfg) or len(opt.weights), 'either --cfg or --weights must be specified'
-    opt.img_size.extend([opt.img_size[-1]] * (2 - len(opt.img_size)))  # extend to 2 sizes (train, test)
+    opt.img_size.extend([opt.img_size[-1]] * (2 - len(opt.img_size)))  # extend to 2 sizes (train, val)
     opt.name = 'evolve' if opt.evolve else opt.name
     opt.save_dir = str(increment_path(Path(opt.project) / opt.name, exist_ok=opt.exist_ok or opt.evolve))

@@ -597,7 +597,7 @@ def main(opt):
         if 'anchors' not in hyp:  # anchors commented in hyp.yaml
             hyp['anchors'] = 3
         assert LOCAL_RANK == -1, 'DDP mode not implemented for --evolve'
-        opt.notest, opt.nosave = True, True  # only test/save final epoch
+        opt.noval, opt.nosave = True, True  # only val/save final epoch
         # ei = [isinstance(x, (int, float)) for x in hyp.values()]  # evolvable indices
         yaml_file = Path(opt.save_dir) / 'hyp_evolved.yaml'  # save best result here
         if opt.bucket:
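The flag rename also touches any wrapper scripts that launch training: `--notest` is now `--noval`. A hedged sketch of the new spelling (the dataset, weights and epoch count below are illustrative, not part of this commit):

```python
# Illustrative wrapper using the renamed flag; --noval (formerly --notest)
# defers validation to the final epoch. Arguments here are example values.
import subprocess

subprocess.run(['python', 'train.py',
                '--data', 'coco128.yaml',
                '--weights', 'yolov5s.pt',
                '--epochs', '3',
                '--noval'],  # validate on the final epoch only
               check=True)
```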
tutorial.ipynb
CHANGED
@@ -643,8 +643,8 @@
       "id": "0eq1SMWl6Sfn"
     },
     "source": [
-      "# 2. Test\n",
-      "Test a model's accuracy on [COCO](https://cocodataset.org/#home) val or test-dev datasets. Models are downloaded automatically from the [latest YOLOv5 release](https://github.com/ultralytics/yolov5/releases). To show results by class use the `--verbose` flag. Note that `pycocotools` metrics may be ~1% better than the equivalent repo metrics, as is visible below, due to slight differences in mAP computation."
+      "# 2. Validate\n",
+      "Validate a model's accuracy on [COCO](https://cocodataset.org/#home) val or test-dev datasets. Models are downloaded automatically from the [latest YOLOv5 release](https://github.com/ultralytics/yolov5/releases). To show results by class use the `--verbose` flag. Note that `pycocotools` metrics may be ~1% better than the equivalent repo metrics, as is visible below, due to slight differences in mAP computation."
     ]
   },
   {

@@ -720,14 +720,14 @@
     },
     "source": [
       "# Run YOLOv5x on COCO val2017\n",
-      "!python test.py --weights yolov5x.pt --data coco.yaml --img 640 --iou 0.65 --half"
+      "!python val.py --weights yolov5x.pt --data coco.yaml --img 640 --iou 0.65 --half"
     ],
     "execution_count": null,
     "outputs": [
       {
         "output_type": "stream",
         "text": [
-          "Namespace(augment=False, batch_size=32, conf_thres=0.001, data='./data/coco.yaml', device='', exist_ok=False, half=True, img_size=640, iou_thres=0.65, name='exp', project='runs/test', save_conf=False, save_hybrid=False, save_json=True, save_txt=False, single_cls=False, task='val', verbose=False, weights=['yolov5x.pt'])\n",
+          "Namespace(augment=False, batch_size=32, conf_thres=0.001, data='./data/coco.yaml', device='', exist_ok=False, half=True, img_size=640, iou_thres=0.65, name='exp', project='runs/val', save_conf=False, save_hybrid=False, save_json=True, save_txt=False, single_cls=False, task='val', verbose=False, weights=['yolov5x.pt'])\n",
           "YOLOv5 🚀 v5.0-157-gc6b51f4 torch 1.8.1+cu101 CUDA:0 (Tesla V100-SXM2-16GB, 16160.5MB)\n",
           "\n",
           "Downloading https://github.com/ultralytics/yolov5/releases/download/v5.0/yolov5x.pt to yolov5x.pt...\n",

@@ -741,7 +741,7 @@
           " all 5000 36335 0.746 0.626 0.68 0.49\n",
           "Speed: 5.3/1.5/6.8 ms inference/NMS/total per 640x640 image at batch-size 32\n",
           "\n",
-          "Evaluating pycocotools mAP... saving runs/test/exp/yolov5x_predictions.json...\n",
+          "Evaluating pycocotools mAP... saving runs/val/exp/yolov5x_predictions.json...\n",
           "loading annotations into memory...\n",
           "Done (t=0.44s)\n",
           "creating index...\n",

@@ -767,7 +767,7 @@
           " Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.524\n",
           " Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.735\n",
           " Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.827\n",
-          "Results saved to runs/test/exp\n"
+          "Results saved to runs/val/exp\n"
         ],
         "name": "stdout"
       }

@@ -805,7 +805,7 @@
     },
     "source": [
       "# Run YOLOv5s on COCO test-dev2017 using --task test\n",
-      "!python test.py --weights yolov5s.pt --data coco.yaml --task test"
+      "!python val.py --weights yolov5s.pt --data coco.yaml --task test"
     ],
     "execution_count": null,
     "outputs": []

@@ -976,7 +976,7 @@
       "Plotting labels... \n",
       "\n",
       "\u001b[34m\u001b[1mautoanchor: \u001b[0mAnalyzing anchors... anchors/target = 4.26, Best Possible Recall (BPR) = 0.9946\n",
-      "Image sizes 640 train, 640 test\n",
+      "Image sizes 640 train, 640 val\n",
       "Using 2 dataloader workers\n",
       "Logging results to runs/train/exp\n",
       "Starting training for 3 epochs...\n",

@@ -1036,7 +1036,7 @@
     "source": [
       "## Local Logging\n",
       "\n",
-      "All results are logged by default to `runs/train`, with a new experiment directory created for each new training as `runs/train/exp2`, `runs/train/exp3`, etc. View train and test jpgs to see mosaics, labels, predictions and augmentation effects. Note a **Mosaic Dataloader** is used for training (shown below), a new concept developed by Ultralytics and first featured in [YOLOv4](https://arxiv.org/abs/2004.10934)."
+      "All results are logged by default to `runs/train`, with a new experiment directory created for each new training as `runs/train/exp2`, `runs/train/exp3`, etc. View train and val jpgs to see mosaics, labels, predictions and augmentation effects. Note a **Mosaic Dataloader** is used for training (shown below), a new concept developed by Ultralytics and first featured in [YOLOv4](https://arxiv.org/abs/2004.10934)."
     ]
   },
   {

@@ -1046,8 +1046,8 @@
     },
     "source": [
       "Image(filename='runs/train/exp/train_batch0.jpg', width=800)  # train batch 0 mosaics and labels\n",
-      "Image(filename='runs/train/exp/test_batch0_labels.jpg', width=800)  # test batch 0 labels\n",
-      "Image(filename='runs/train/exp/test_batch0_pred.jpg', width=800)  # test batch 0 predictions"
+      "Image(filename='runs/train/exp/test_batch0_labels.jpg', width=800)  # val batch 0 labels\n",
+      "Image(filename='runs/train/exp/test_batch0_pred.jpg', width=800)  # val batch 0 predictions"
     ],
     "execution_count": null,
     "outputs": []

@@ -1062,10 +1062,10 @@
     "`train_batch0.jpg` shows train batch 0 mosaics and labels\n",
     "\n",
     "> <img src=\"https://user-images.githubusercontent.com/26833433/124931217-4826f080-e002-11eb-87b9-ae0925a8c94b.jpg\" width=\"700\"> \n",
-    "`test_batch0_labels.jpg` shows test batch 0 labels\n",
+    "`test_batch0_labels.jpg` shows val batch 0 labels\n",
     "\n",
     "> <img src=\"https://user-images.githubusercontent.com/26833433/124931209-46f5c380-e002-11eb-9bd5-7a3de2be9851.jpg\" width=\"700\"> \n",
-    "`test_batch0_pred.jpg` shows test batch 0 _predictions_"
+    "`test_batch0_pred.jpg` shows val batch 0 _predictions_"
   ]
   },
   {

@@ -1125,7 +1125,7 @@
     "\n",
     "![CI CPU testing](https://github.com/ultralytics/yolov5/workflows/CI%20CPU%20testing/badge.svg)\n",
     "\n",
-    "If this badge is green, all [YOLOv5 GitHub Actions](https://github.com/ultralytics/yolov5/actions) Continuous Integration (CI) tests are currently passing. CI tests verify correct operation of YOLOv5 training ([train.py](https://github.com/ultralytics/yolov5/blob/master/train.py)), testing ([test.py](https://github.com/ultralytics/yolov5/blob/master/test.py)), inference ([detect.py](https://github.com/ultralytics/yolov5/blob/master/detect.py)) and export ([export.py](https://github.com/ultralytics/yolov5/blob/master/export.py)) on MacOS, Windows, and Ubuntu every 24 hours and on every commit.\n"
+    "If this badge is green, all [YOLOv5 GitHub Actions](https://github.com/ultralytics/yolov5/actions) Continuous Integration (CI) tests are currently passing. CI tests verify correct operation of YOLOv5 training ([train.py](https://github.com/ultralytics/yolov5/blob/master/train.py)), testing ([val.py](https://github.com/ultralytics/yolov5/blob/master/val.py)), inference ([detect.py](https://github.com/ultralytics/yolov5/blob/master/detect.py)) and export ([export.py](https://github.com/ultralytics/yolov5/blob/master/export.py)) on MacOS, Windows, and Ubuntu every 24 hours and on every commit.\n"
   ]
   },
   {

@@ -1147,8 +1147,8 @@
   "source": [
     "# Reproduce\n",
     "for x in 'yolov5s', 'yolov5m', 'yolov5l', 'yolov5x':\n",
-    "  !python test.py --weights {x}.pt --data coco.yaml --img 640 --conf 0.25 --iou 0.45  # speed\n",
-    "  !python test.py --weights {x}.pt --data coco.yaml --img 640 --conf 0.001 --iou 0.65  # mAP"
+    "  !python val.py --weights {x}.pt --data coco.yaml --img 640 --conf 0.25 --iou 0.45  # speed\n",
+    "  !python val.py --weights {x}.pt --data coco.yaml --img 640 --conf 0.001 --iou 0.65  # mAP"
   ],
   "execution_count": null,
   "outputs": []

@@ -1193,8 +1193,8 @@
     " for d in 0 cpu; do  # devices\n",
     "  python detect.py --weights $m.pt --device $d  # detect official\n",
     "  python detect.py --weights runs/train/exp/weights/best.pt --device $d  # detect custom\n",
-    "  python test.py --weights $m.pt --device $d  # test official\n",
-    "  python test.py --weights runs/train/exp/weights/best.pt --device $d  # test custom\n",
+    "  python val.py --weights $m.pt --device $d  # val official\n",
+    "  python val.py --weights runs/train/exp/weights/best.pt --device $d  # val custom\n",
     " done\n",
     " python hubconf.py  # hub\n",
     " python models/yolo.py --cfg $m.yaml  # inspect\n",
utils/augmentations.py
CHANGED
@@ -90,7 +90,7 @@ def letterbox(im, new_shape=(640, 640), color=(114, 114, 114), auto=True, scaleFill=False, scaleup=True, stride=32):

     # Scale ratio (new / old)
     r = min(new_shape[0] / shape[0], new_shape[1] / shape[1])
-    if not scaleup:  # only scale down, do not scale up (for better test mAP)
+    if not scaleup:  # only scale down, do not scale up (for better val mAP)
         r = min(r, 1.0)

     # Compute padding
utils/general.py
CHANGED
@@ -633,7 +633,7 @@ def apply_classifier(x, model, img, im0):
             for j, a in enumerate(d):  # per item
                 cutout = im0[i][int(a[1]):int(a[3]), int(a[0]):int(a[2])]
                 im = cv2.resize(cutout, (224, 224))  # BGR
-                # cv2.imwrite('test%i.jpg' % j, cutout)
+                # cv2.imwrite('example%i.jpg' % j, cutout)

                 im = im[:, :, ::-1].transpose(2, 0, 1)  # BGR to RGB, to 3x416x416
                 im = np.ascontiguousarray(im, dtype=np.float32)  # uint8 to float32
utils/plots.py
CHANGED
@@ -219,9 +219,9 @@ def plot_lr_scheduler(optimizer, scheduler, epochs=300, save_dir=''):
     plt.close()


-def plot_test_txt():  # from utils.plots import *; plot_test()
-    # Plot test.txt histograms
-    x = np.loadtxt('test.txt', dtype=np.float32)
+def plot_val_txt():  # from utils.plots import *; plot_val()
+    # Plot val.txt histograms
+    x = np.loadtxt('val.txt', dtype=np.float32)
     box = xyxy2xywh(x[:, :4])
     cx, cy = box[:, 0], box[:, 1]


@@ -250,7 +250,7 @@ def plot_targets_txt():  # from utils.plots import *; plot_targets_txt()


 def plot_study_txt(path='', x=None):  # from utils.plots import *; plot_study_txt()
-    # Plot study.txt generated by test.py
+    # Plot study.txt generated by val.py
     plot2 = False  # plot additional results
     if plot2:
         ax = plt.subplots(2, 4, figsize=(10, 6), tight_layout=True)[1].ravel()
test.py → val.py
RENAMED
@@ -1,7 +1,7 @@
-"""Test a trained YOLOv5 model accuracy on a custom dataset
+"""Validate a trained YOLOv5 model accuracy on a custom dataset

 Usage:
-    $ python path/to/test.py --data coco128.yaml --weights yolov5s.pt --img 640
+    $ python path/to/val.py --data coco128.yaml --weights yolov5s.pt --img 640
 """

 import argparse

@@ -44,7 +44,7 @@ def run(data,
         save_hybrid=False,  # save label+prediction hybrid results to *.txt
         save_conf=False,  # save confidences in --save-txt labels
         save_json=False,  # save a cocoapi-compatible JSON results file
-        project='runs/test',  # save to project/name
+        project='runs/val',  # save to project/name
         name='exp',  # save to project/name
         exist_ok=False,  # existing project/name ok, do not increment
         half=True,  # use FP16 half-precision inference

@@ -228,9 +228,9 @@ def run(data,

         # Plot images
         if plots and batch_i < 3:
-            f = save_dir / f'test_batch{batch_i}_labels.jpg'  # labels
+            f = save_dir / f'val_batch{batch_i}_labels.jpg'  # labels
             Thread(target=plot_images, args=(img, targets, paths, f, names), daemon=True).start()
-            f = save_dir / f'test_batch{batch_i}_pred.jpg'  # predictions
+            f = save_dir / f'val_batch{batch_i}_pred.jpg'  # predictions
             Thread(target=plot_images, args=(img, output_to_target(out), paths, f, names), daemon=True).start()

     # Compute statistics

@@ -262,7 +262,7 @@ def run(data,
     if plots:
         confusion_matrix.plot(save_dir=save_dir, names=list(names.values()))
         if wandb_logger and wandb_logger.wandb:
-            val_batches = [wandb_logger.wandb.Image(str(f), caption=f.name) for f in sorted(save_dir.glob('test*.jpg'))]
+            val_batches = [wandb_logger.wandb.Image(str(f), caption=f.name) for f in sorted(save_dir.glob('val*.jpg'))]
             wandb_logger.log({"Validation": val_batches})
         if wandb_images:
             wandb_logger.log({"Bounding Box Debugger/Images": wandb_images})

@@ -305,7 +305,7 @@ def run(data,


 def parse_opt():
-    parser = argparse.ArgumentParser(prog='test.py')
+    parser = argparse.ArgumentParser(prog='val.py')
     parser.add_argument('--data', type=str, default='data/coco128.yaml', help='dataset.yaml path')
     parser.add_argument('--weights', nargs='+', type=str, default='yolov5s.pt', help='model.pt path(s)')
     parser.add_argument('--batch-size', type=int, default=32, help='batch size')

@@ -321,7 +321,7 @@ def parse_opt():
     parser.add_argument('--save-hybrid', action='store_true', help='save label+prediction hybrid results to *.txt')
     parser.add_argument('--save-conf', action='store_true', help='save confidences in --save-txt labels')
     parser.add_argument('--save-json', action='store_true', help='save a cocoapi-compatible JSON results file')
-    parser.add_argument('--project', default='runs/test', help='save to project/name')
+    parser.add_argument('--project', default='runs/val', help='save to project/name')
     parser.add_argument('--name', default='exp', help='save to project/name')
     parser.add_argument('--exist-ok', action='store_true', help='existing project/name ok, do not increment')
     parser.add_argument('--half', action='store_true', help='use FP16 half-precision inference')

@@ -334,7 +334,7 @@ def parse_opt():

 def main(opt):
     set_logging()
-    print(colorstr('test: ') + ', '.join(f'{k}={v}' for k, v in vars(opt).items()))
+    print(colorstr('val: ') + ', '.join(f'{k}={v}' for k, v in vars(opt).items()))
     check_requirements(exclude=('tensorboard', 'thop'))

     if opt.task in ('train', 'val', 'test'):  # run normally

@@ -346,7 +346,7 @@ def main(opt):
             save_json=False, plots=False)

     elif opt.task == 'study':  # run over a range of settings and save/plot
-        # python test.py --task study --data coco.yaml --iou 0.7 --weights yolov5s.pt yolov5m.pt yolov5l.pt yolov5x.pt
+        # python val.py --task study --data coco.yaml --iou 0.7 --weights yolov5s.pt yolov5m.pt yolov5l.pt yolov5x.pt
         x = list(range(256, 1536 + 128, 128))  # x axis (image sizes)
         for w in opt.weights if isinstance(opt.weights, list) else [opt.weights]:
             f = f'study_{Path(opt.data).stem}_{Path(w).stem}.txt'  # filename to save to