glenn-jocher committed
Commit 916d4aa
1 Parent(s): 1ddf692

v3.0 Release (#725)


* initial commit

* remove yolov3-spp from test.py study

* update study --img range

* update mAP

* cleanup and speed updates

* update README plot

Dockerfile CHANGED
@@ -23,6 +23,7 @@ COPY . /usr/src/app
 
 # Build and Push
 # t=ultralytics/yolov5:latest && sudo docker build -t $t . && sudo docker push $t
+# for v in {300..303}; do t=ultralytics/coco:v$v && sudo docker build -t $t . && sudo docker push $t; done
 
 # Pull and Run
 # t=ultralytics/yolov5:latest && sudo docker pull $t && sudo docker run -it --ipc=host $t
@@ -43,7 +44,7 @@ COPY . /usr/src/app
 # sudo docker commit 092b16b25c5b usr/resume && sudo docker run -it --gpus all --ipc=host -v "$(pwd)"/coco:/usr/src/coco --entrypoint=sh usr/resume
 
 # Send weights to GCP
-# python -c "from utils.general import *; strip_optimizer('runs/exp0_*/weights/last.pt', 'tmp.pt')" && gsutil cp tmp.pt gs://*
+# python -c "from utils.general import *; strip_optimizer('runs/exp0_*/weights/best.pt', 'tmp.pt')" && gsutil cp tmp.pt gs://*.pt
 
 # Clean up
 # docker system prune -a --volumes
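For reference, the `strip_optimizer` call in the "Send weights to GCP" line above shrinks a checkpoint before upload by dropping training-only state. A minimal sketch of the idea (the checkpoint keys below are assumptions; see `utils/general.py` for the actual implementation):

```python
import torch

def strip_optimizer_sketch(f='last.pt', s='tmp.pt'):
    # Load the checkpoint on CPU, drop optimizer state (the bulk of the file),
    # and save FP16 weights under a new name.
    x = torch.load(f, map_location='cpu')
    x['optimizer'] = None  # assumed key: optimizer state dict
    x['epoch'] = -1        # assumed key: mark training as finished
    x['model'].half()      # FP16 halves the remaining size
    torch.save(x, s or f)
```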
README.md CHANGED
@@ -6,8 +6,9 @@
 
 This repository represents Ultralytics open-source research into future object detection methods, and incorporates our lessons learned and best practices evolved over training thousands of models on custom client datasets with our previous YOLO repository https://github.com/ultralytics/yolov3. **All code and models are under active development, and are subject to modification or deletion without notice.** Use at your own risk.
 
-<img src="https://user-images.githubusercontent.com/26833433/85340570-30360a80-b49b-11ea-87cf-bdf33d53ae15.png" width="1000">** GPU Speed measures end-to-end time per image averaged over 5000 COCO val2017 images using a V100 GPU with batch size 8, and includes image preprocessing, PyTorch FP16 inference, postprocessing and NMS.
+<img src="https://user-images.githubusercontent.com/26833433/90187293-6773ba00-dd6e-11ea-8f90-cd94afc0427f.png" width="1000">** GPU Speed measures end-to-end time per image averaged over 5000 COCO val2017 images using a V100 GPU with batch size 32, and includes image preprocessing, PyTorch FP16 inference, postprocessing and NMS. EfficientDet data from [google/automl](https://github.com/google/automl) at batch size 8.
 
+- **August 13, 2020**: [v3.0 release](https://github.com/ultralytics/yolov5/releases/tag/v3.0): nn.Hardswish() activations, data autodownload, native AMP.
 - **July 23, 2020**: [v2.0 release](https://github.com/ultralytics/yolov5/releases/tag/v2.0): improved model definition, training and mAP.
 - **June 22, 2020**: [PANet](https://arxiv.org/abs/1803.01534) updates: new heads, reduced parameters, improved speed and mAP [364fcfd](https://github.com/ultralytics/yolov5/commit/364fcfd7dba53f46edd4f04c037a039c0a287972).
 - **June 19, 2020**: [FP16](https://pytorch.org/docs/stable/nn.html#torch.nn.Module.half) as new default for smaller checkpoints and faster inference [d4c6674](https://github.com/ultralytics/yolov5/commit/d4c6674c98e19df4c40e33a777610a18d1961145).
@@ -20,19 +21,20 @@ This repository represents Ultralytics open-source research into future object d
 
 | Model | AP<sup>val</sup> | AP<sup>test</sup> | AP<sub>50</sub> | Speed<sub>GPU</sub> | FPS<sub>GPU</sub> || params | FLOPS |
 |---------- |------ |------ |------ | -------- | ------| ------ |------ | :------: |
-| [YOLOv5s](https://drive.google.com/open?id=1Drs_Aiu7xx6S-ix95f9kNsA6ueKRpN2J) | 36.1 | 36.1 | 55.3 | **2.1ms** | **476** || 7.5M | 13.2B
-| [YOLOv5m](https://drive.google.com/open?id=1Drs_Aiu7xx6S-ix95f9kNsA6ueKRpN2J) | 43.5 | 43.5 | 62.5 | 3.0ms | 333 || 21.8M | 39.4B
-| [YOLOv5l](https://drive.google.com/open?id=1Drs_Aiu7xx6S-ix95f9kNsA6ueKRpN2J) | 47.0 | 47.1 | 65.6 | 3.9ms | 256 || 47.8M | 88.1B
-| [YOLOv5x](https://drive.google.com/open?id=1Drs_Aiu7xx6S-ix95f9kNsA6ueKRpN2J) | **49.0** | **49.0** | **67.4** | 6.1ms | 164 || 89.0M | 166.4B
+| [YOLOv5s](https://github.com/ultralytics/yolov5/releases/tag/v3.0) | 37.0 | 37.0 | 56.2 | **2.4ms** | **476** || 7.5M | 13.2B
+| [YOLOv5m](https://github.com/ultralytics/yolov5/releases/tag/v3.0) | 44.3 | 44.3 | 63.2 | 3.4ms | 333 || 21.8M | 39.4B
+| [YOLOv5l](https://github.com/ultralytics/yolov5/releases/tag/v3.0) | 47.7 | 47.7 | 66.5 | 4.4ms | 256 || 47.8M | 88.1B
+| [YOLOv5x](https://github.com/ultralytics/yolov5/releases/tag/v3.0) | **49.2** | **49.2** | **67.7** | 6.9ms | 164 || 89.0M | 166.4B
 | | | | | | || |
-| [YOLOv3-SPP](https://drive.google.com/open?id=1Drs_Aiu7xx6S-ix95f9kNsA6ueKRpN2J) | 45.6 | 45.5 | 65.2 | 4.5ms | 222 || 63.0M | 118.0B
-
+| [YOLOv5x](https://github.com/ultralytics/yolov5/releases/tag/v3.0) + TTA | **50.8** | **50.8** | **68.9** | 25.5ms | 39 || 89.0M | 354.3B
+| | | | | | || |
+| [YOLOv3-SPP](https://github.com/ultralytics/yolov5/releases/tag/v3.0) | 45.6 | 45.5 | 65.2 | 4.5ms | 222 || 63.0M | 118.0B
 
 ** AP<sup>test</sup> denotes COCO [test-dev2017](http://cocodataset.org/#upload) server results, all other AP results in the table denote val2017 accuracy.
-** All AP numbers are for single-model single-scale without ensemble or test-time augmentation. Reproduce by `python test.py --data coco.yaml --img 672 --conf 0.001`
-** Speed<sub>GPU</sub> measures end-to-end time per image averaged over 5000 COCO val2017 images using a GCP [n1-standard-16](https://cloud.google.com/compute/docs/machine-types#n1_standard_machine_types) instance with one V100 GPU, and includes image preprocessing, PyTorch FP16 image inference at --batch-size 32 --img-size 640, postprocessing and NMS. Average NMS time included in this chart is 1-2ms/img. Reproduce by `python test.py --data coco.yaml --img 640 --conf 0.1`
+** All AP numbers are for single-model single-scale without ensemble or test-time augmentation. **Reproduce** by `python test.py --data coco.yaml --img 640 --conf 0.001`
+** Speed<sub>GPU</sub> measures end-to-end time per image averaged over 5000 COCO val2017 images using a GCP [n1-standard-16](https://cloud.google.com/compute/docs/machine-types#n1_standard_machine_types) instance with one V100 GPU, and includes image preprocessing, PyTorch FP16 image inference at --batch-size 32 --img-size 640, postprocessing and NMS. Average NMS time included in this chart is 1-2ms/img. **Reproduce** by `python test.py --data coco.yaml --img 640 --conf 0.1`
 ** All checkpoints are trained to 300 epochs with default settings and hyperparameters (no autoaugmentation).
-
+** Test Time Augmentation ([TTA](https://github.com/ultralytics/yolov5/issues/303)) runs at 3 image sizes. **Reproduce** by `python test.py --data coco.yaml --img 832 --augment`
 
 ## Requirements
 
@@ -98,11 +100,11 @@ Results saved to /content/yolov5/inference/output
 
 ## Training
 
-Download [COCO](https://github.com/ultralytics/yolov5/blob/master/data/get_coco2017.sh) and run command below. Training times for YOLOv5s/m/l/x are 2/4/6/8 days on a single V100 (multi-GPU times faster). Use the largest `--batch-size` your GPU allows (batch sizes shown for 16 GB devices).
+Download [COCO](https://github.com/ultralytics/yolov5/blob/master/data/scripts/get_coco.sh) and run command below. Training times for YOLOv5s/m/l/x are 2/4/6/8 days on a single V100 (multi-GPU times faster). Use the largest `--batch-size` your GPU allows (batch sizes shown for 16 GB devices).
 ```bash
 $ python train.py --data coco.yaml --cfg yolov5s.yaml --weights '' --batch-size 64
-                                         yolov5m                                48
-                                         yolov5l                                32
+                                         yolov5m                                40
+                                         yolov5l                                24
                                          yolov5x                                16
 ```
 <img src="https://user-images.githubusercontent.com/26833433/84186698-c4d54d00-aa45-11ea-9bde-c632c1230ccd.png" width="900">
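The new TTA table row and reproduce command above merge predictions across 3 image sizes. A rough sketch of the multi-scale idea (illustrative only, not the repo's `--augment` implementation; `model` and its output layout are assumptions):

```python
import torch
import torch.nn.functional as F

def tta_sketch(model, img, scales=(0.83, 1.0, 1.25)):
    # Run inference at several scales, map boxes back to the original
    # resolution, and concatenate for a single NMS pass afterwards.
    preds = []
    for s in scales:
        x = F.interpolate(img, scale_factor=s, mode='bilinear', align_corners=False)
        p = model(x)[0]  # assumed output: (batch, n, 5 + classes) with xywh first
        p[..., :4] /= s  # undo the resize on box coordinates
        preds.append(p)
    return torch.cat(preds, 1)
```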
data/hyp.finetune.yaml CHANGED
@@ -18,7 +18,7 @@ hsv_h: 0.015  # image HSV-Hue augmentation (fraction)
 hsv_s: 0.7  # image HSV-Saturation augmentation (fraction)
 hsv_v: 0.4  # image HSV-Value augmentation (fraction)
 degrees: 0.0  # image rotation (+/- deg)
-translate: 0.5  # image translation (+/- fraction)
+translate: 0.1  # image translation (+/- fraction)
 scale: 0.5  # image scale (+/- gain)
 shear: 0.0  # image shear (+/- deg)
 perspective: 0.0  # image perspective (+/- fraction), range 0-0.001
data/hyp.scratch.yaml CHANGED
@@ -18,7 +18,7 @@ hsv_h: 0.015  # image HSV-Hue augmentation (fraction)
 hsv_s: 0.7  # image HSV-Saturation augmentation (fraction)
 hsv_v: 0.4  # image HSV-Value augmentation (fraction)
 degrees: 0.0  # image rotation (+/- deg)
-translate: 0.5  # image translation (+/- fraction)
+translate: 0.1  # image translation (+/- fraction)
 scale: 0.5  # image scale (+/- gain)
 shear: 0.0  # image shear (+/- deg)
 perspective: 0.0  # image perspective (+/- fraction), range 0-0.001
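Both hyperparameter files reduce `translate` from 0.5 to 0.1, so images now shift by at most ±10% of their size during augmentation. A simplified sketch of how the fraction becomes a pixel shift (a stripped-down stand-in for `random_perspective`'s translation step, not the exact repo code):

```python
import random

import cv2
import numpy as np

def translate_sketch(img, translate=0.1):
    # Shift by up to +/- translate * width/height, padding with gray (114).
    h, w = img.shape[:2]
    tx = random.uniform(-translate, translate) * w
    ty = random.uniform(-translate, translate) * h
    M = np.float32([[1, 0, tx], [0, 1, ty]])
    return cv2.warpAffine(img, M, (w, h), borderValue=(114, 114, 114))
```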
models/common.py CHANGED
@@ -23,7 +23,7 @@ class Conv(nn.Module):
         super(Conv, self).__init__()
         self.conv = nn.Conv2d(c1, c2, k, s, autopad(k, p), groups=g, bias=False)
         self.bn = nn.BatchNorm2d(c2)
-        self.act = nn.LeakyReLU(0.1, inplace=True) if act else nn.Identity()
+        self.act = nn.Hardswish() if act else nn.Identity()
 
     def forward(self, x):
         return self.act(self.bn(self.conv(x)))
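This swaps the default Conv activation from LeakyReLU(0.1) to the native `nn.Hardswish()` added in PyTorch 1.6, which computes x · ReLU6(x + 3) / 6. A quick equivalence check against the closed form:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

x = torch.randn(8)
hs = nn.Hardswish()(x)            # native module, PyTorch >= 1.6
ref = x * F.relu6(x + 3.0) / 6.0  # Hardswish closed form
assert torch.allclose(hs, ref, atol=1e-6)
```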
models/yolo.py CHANGED
@@ -15,10 +15,13 @@ from utils.torch_utils import (
 
 logger = logging.getLogger(__name__)
 
+
 class Detect(nn.Module):
+    stride = None  # strides computed during build
+    export = False  # onnx export
+
     def __init__(self, nc=80, anchors=(), ch=()):  # detection layer
         super(Detect, self).__init__()
-        self.stride = None  # strides computed during build
         self.nc = nc  # number of classes
         self.no = nc + 5  # number of outputs per anchor
         self.nl = len(anchors)  # number of detection layers
@@ -28,7 +31,6 @@ class Detect(nn.Module):
         self.register_buffer('anchors', a)  # shape(nl,na,2)
         self.register_buffer('anchor_grid', a.clone().view(self.nl, 1, -1, 1, 1, 2))  # shape(nl,1,na,1,1,2)
         self.m = nn.ModuleList(nn.Conv2d(x, self.no * self.na, 1) for x in ch)  # output conv
-        self.export = False  # onnx export
 
     def forward(self, x):
         # x = x.copy()  # for profiling
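Moving `stride` and `export` out of `__init__` makes them class attributes: they exist as shared defaults on `Detect` itself, readable before strides are filled in during model build, while assignment on an instance still shadows the default per-object. A minimal illustration of the pattern (hypothetical class, not the repo's `Detect`):

```python
class Head:
    export = False  # class attribute: shared default for all instances

    def __init__(self):
        self.n = 0  # instance attribute: per-object state

h = Head()
print(h.export)     # False, resolved through the class
h.export = True     # shadows the class attribute on this instance only
print(Head.export)  # still False for every other instance
```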
test.py CHANGED
@@ -280,9 +280,9 @@ if __name__ == '__main__':
                  opt.verbose)
 
     elif opt.task == 'study':  # run over a range of settings and save/plot
-        for weights in ['yolov5s.pt', 'yolov5m.pt', 'yolov5l.pt', 'yolov5x.pt', 'yolov3-spp.pt']:
+        for weights in ['yolov5s.pt', 'yolov5m.pt', 'yolov5l.pt', 'yolov5x.pt']:
             f = 'study_%s_%s.txt' % (Path(opt.data).stem, Path(weights).stem)  # filename to save to
-            x = list(range(352, 832, 64))  # x axis
+            x = list(range(320, 800, 64))  # x axis
             y = []  # y axis
             for i in x:  # img-size
                 print('\nRunning %s point %s...' % (f, i))
@@ -290,4 +290,4 @@ if __name__ == '__main__':
                 y.append(r + t)  # results and times
             np.savetxt(f, y, fmt='%10.4g')  # save
         os.system('zip -r study.zip study_*.txt')
-        # plot_study_txt(f, x)  # plot
+        # utils.general.plot_study_txt(f, x)  # plot
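The new sweep covers eight image sizes, since `range`'s stop value is exclusive. Plain Python, for reference:

```python
>>> list(range(320, 800, 64))
[320, 384, 448, 512, 576, 640, 704, 768]
```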
train.py CHANGED
@@ -30,6 +30,7 @@ from utils.torch_utils import init_seeds, ModelEMA, select_device, intersect_dic
 
 logger = logging.getLogger(__name__)
 
+
 def train(hyp, opt, device, tb_writer=None):
     logger.info(f'Hyperparameters {hyp}')
     log_dir = Path(tb_writer.log_dir) if tb_writer else Path(opt.logdir) / 'evolve'  # logging directory
@@ -131,7 +132,7 @@ def train(hyp, opt, device, tb_writer=None):
         start_epoch = ckpt['epoch'] + 1
         if epochs < start_epoch:
             logger.info('%s has been trained for %g epochs. Fine-tuning for %g additional epochs.' %
-                (weights, ckpt['epoch'], epochs))
+                        (weights, ckpt['epoch'], epochs))
             epochs += ckpt['epoch']  # finetune additional epochs
 
     del ckpt, state_dict
@@ -404,7 +405,7 @@ if __name__ == '__main__':
     parser.add_argument('--sync-bn', action='store_true', help='use SyncBatchNorm, only available in DDP mode')
     parser.add_argument('--local_rank', type=int, default=-1, help='DDP parameter, do not modify')
     parser.add_argument('--logdir', type=str, default='runs/', help='logging directory')
-    parser.add_argument('--workers', type=int, default=8, help='maximum number of workers for dataloader')
+    parser.add_argument('--workers', type=int, default=8, help='maximum number of dataloader workers')
     opt = parser.parse_args()
 
     # Set DDP variables
@@ -419,7 +420,7 @@ if __name__ == '__main__':
         if last and not opt.weights:
             logger.info(f'Resuming training from {last}')
         opt.weights = last if opt.resume and not opt.weights else opt.weights
-    if opt.global_rank in [-1,0]:
+    if opt.global_rank in [-1, 0]:
         check_git_status()
 
     opt.hyp = opt.hyp or ('data/hyp.finetune.yaml' if opt.weights else 'data/hyp.scratch.yaml')
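The `opt.global_rank in [-1, 0]` guard (here only reformatted for spacing) is the usual DDP idiom: rank -1 means single-process training and rank 0 is the primary distributed process, so one-time side effects like `check_git_status()` run exactly once. A generic sketch of the pattern (hypothetical helper, not from the repo):

```python
import os

def is_main_process():
    # RANK is unset (-1) without torch.distributed; 0 is the primary process.
    return int(os.environ.get('RANK', -1)) in (-1, 0)

if is_main_process():
    print('logging, checkpointing and git checks happen once, here')
```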
utils/activations.py CHANGED
@@ -10,12 +10,6 @@ class Swish(nn.Module):  #
         return x * torch.sigmoid(x)
 
 
-class HardSwish(nn.Module):
-    @staticmethod
-    def forward(x):
-        return x * F.hardtanh(x + 3, 0., 6., True) / 6.
-
-
 class MemoryEfficientSwish(nn.Module):
     class F(torch.autograd.Function):
         @staticmethod
utils/datasets.py CHANGED
@@ -610,7 +610,7 @@ def load_mosaic(self, index):
 
     labels4 = []
     s = self.img_size
-    yc, xc = s, s  # mosaic center x, y
+    yc, xc = [int(random.uniform(-x, 2 * s + x)) for x in self.mosaic_border]  # mosaic center x, y
     indices = [index] + [random.randint(0, len(self.labels) - 1) for _ in range(3)]  # 3 additional image indices
     for i, index in enumerate(indices):
         # Load image
@@ -804,7 +804,7 @@ def random_perspective(img, targets=(), degrees=10, translate=.1, scale=.1, shea
     return img, targets
 
 
-def box_candidates(box1, box2, wh_thr=2, ar_thr=20, area_thr=0.2):  # box1(4,n), box2(4,n)
+def box_candidates(box1, box2, wh_thr=2, ar_thr=20, area_thr=0.1):  # box1(4,n), box2(4,n)
     # Compute candidate boxes: box1 before augment, box2 after augment, wh_thr (pixels), aspect_ratio_thr, area_ratio
     w1, h1 = box1[2] - box1[0], box1[3] - box1[1]
     w2, h2 = box2[2] - box2[0], box2[3] - box2[1]
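The mosaic center was previously pinned to the exact image center (`yc = xc = s`); it is now sampled around it using `self.mosaic_border`, which varies how much of each of the four tiles survives. A standalone view of the sampling, assuming `mosaic_border = [-s // 2, -s // 2]` (the value set elsewhere in `LoadImagesAndLabels`; treat it as an assumption here):

```python
import random

s = 640                             # img_size
mosaic_border = [-s // 2, -s // 2]  # assumed value
yc, xc = [int(random.uniform(-x, 2 * s + x)) for x in mosaic_border]
# With border -s/2, each coordinate is drawn from [s/2, 3s/2] rather than
# always being exactly s.
print(yc, xc)
```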
utils/general.py CHANGED
@@ -1147,7 +1147,7 @@ def plot_study_txt(f='study.txt', x=None):  # from utils.general import *; plot_
     ax = ax.ravel()
 
     fig2, ax2 = plt.subplots(1, 1, figsize=(8, 4), tight_layout=True)
-    for f in ['coco_study/study_coco_yolov5%s.txt' % x for x in ['s', 'm', 'l', 'x']]:
+    for f in ['study/study_coco_yolov5%s.txt' % x for x in ['s', 'm', 'l', 'x']]:
         y = np.loadtxt(f, dtype=np.float32, usecols=[0, 1, 2, 3, 7, 8, 9], ndmin=2).T
         x = np.arange(y.shape[1]) if x is None else np.array(x)
         s = ['P', 'R', 'mAP@.5', 'mAP@.5:.95', 't_inference (ms/img)', 't_NMS (ms/img)', 't_total (ms/img)']
@@ -1159,7 +1159,7 @@ def plot_study_txt(f='study.txt', x=None):  # from utils.general import *; plot_
         ax2.plot(y[6, :j], y[3, :j] * 1E2, '.-', linewidth=2, markersize=8,
                  label=Path(f).stem.replace('study_coco_', '').replace('yolo', 'YOLO'))
 
-    ax2.plot(1E3 / np.array([209, 140, 97, 58, 35, 18]), [33.8, 39.6, 43.0, 47.5, 49.4, 50.7],
+    ax2.plot(1E3 / np.array([209, 140, 97, 58, 35, 18]), [34.6, 40.5, 43.0, 47.5, 49.7, 51.5],
              'k.-', linewidth=2, markersize=8, alpha=.25, label='EfficientDet')
 
     ax2.grid()
@@ -1170,7 +1170,7 @@ def plot_study_txt(f='study.txt', x=None):  # from utils.general import *; plot_
     ax2.set_ylabel('COCO AP val')
     ax2.legend(loc='lower right')
     plt.savefig('study_mAP_latency.png', dpi=300)
-    plt.savefig(f.replace('.txt', '.png'), dpi=200)
+    plt.savefig(f.replace('.txt', '.png'), dpi=300)
 
 
 def plot_labels(labels, save_dir=''):
@@ -1247,8 +1247,11 @@ def plot_results(start=0, stop=0, bucket='', id=(), labels=(),
     s = ['GIoU', 'Objectness', 'Classification', 'Precision', 'Recall',
          'val GIoU', 'val Objectness', 'val Classification', 'mAP@0.5', 'mAP@0.5:0.95']
     if bucket:
-        os.system('rm -rf storage.googleapis.com')
-        files = ['https://storage.googleapis.com/%s/results%g.txt' % (bucket, x) for x in id]
+        # os.system('rm -rf storage.googleapis.com')
+        # files = ['https://storage.googleapis.com/%s/results%g.txt' % (bucket, x) for x in id]
+        files = ['results%g.txt' % x for x in id]
+        c = ('gsutil cp ' + '%s ' * len(files) + '.') % tuple('gs://%s/results%g.txt' % (bucket, x) for x in id)
+        os.system(c)
     else:
         files = glob.glob(str(Path(save_dir) / 'results*.txt')) + glob.glob('../../Downloads/results*.txt')
     for fi, f in enumerate(files):
@@ -1266,8 +1269,8 @@ def plot_results(start=0, stop=0, bucket='', id=(), labels=(),
             ax[i].set_title(s[i])
             # if i in [5, 6, 7]:  # share train and val loss y axes
             #     ax[i].get_shared_y_axes().join(ax[i], ax[i - 5])
-        except:
-            print('Warning: Plotting error for %s, skipping file' % f)
+        except Exception as e:
+            print('Warning: Plotting error for %s; %s' % (f, e))
 
     fig.tight_layout()
     ax[1].legend()
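The rewritten `bucket` branch in `plot_results` downloads all result files with one batched `gsutil cp` call instead of fetching HTTP URLs one by one. For example values `bucket='mybucket'` and `id=(0, 1)`, the constructed command looks like this:

```python
bucket, id = 'mybucket', (0, 1)  # example values
files = ['results%g.txt' % x for x in id]
c = ('gsutil cp ' + '%s ' * len(files) + '.') % tuple('gs://%s/results%g.txt' % (bucket, x) for x in id)
print(c)  # gsutil cp gs://mybucket/results0.txt gs://mybucket/results1.txt .
```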