glenn-jocher committed
Commit: 94e6711
Parent(s): 2a0aff6
updates
README.md CHANGED
@@ -25,15 +25,15 @@ For business inquiries and professional support requests please visit us at http
 
 | Model | AP<sup>val</sup> | AP<sup>test</sup> | AP<sub>50</sub> | Latency<sub>GPU</sub> | FPS<sub>GPU</sub> || params | FLOPs |
 |---------- |------ |------ |------ | -------- | ------| ------ |------ | :------: |
-| YOLOv5-s ([ckpt](https://drive.google.com/open?id=1Drs_Aiu7xx6S-ix95f9kNsA6ueKRpN2J)) | 33.
-| YOLOv5-m ([ckpt](https://drive.google.com/open?id=1Drs_Aiu7xx6S-ix95f9kNsA6ueKRpN2J)) | 41.
-| YOLOv5-l ([ckpt](https://drive.google.com/open?id=1Drs_Aiu7xx6S-ix95f9kNsA6ueKRpN2J)) | 44.
-| YOLOv5-x ([ckpt](https://drive.google.com/open?id=1Drs_Aiu7xx6S-ix95f9kNsA6ueKRpN2J)) | **47.1** | **47.2** | **66.7** | 15.
-| YOLOv3-SPP ([ckpt](https://drive.google.com/open?id=1Drs_Aiu7xx6S-ix95f9kNsA6ueKRpN2J)) | 45.
+| YOLOv5-s ([ckpt](https://drive.google.com/open?id=1Drs_Aiu7xx6S-ix95f9kNsA6ueKRpN2J)) | 33.0 | 33.0 | 53.2 | **2.9ms** | **345** || 7.0M | 14.0B
+| YOLOv5-m ([ckpt](https://drive.google.com/open?id=1Drs_Aiu7xx6S-ix95f9kNsA6ueKRpN2J)) | 41.4 | 41.4 | 61.5 | 5.0ms | 200 || 25.2M | 50.2B
+| YOLOv5-l ([ckpt](https://drive.google.com/open?id=1Drs_Aiu7xx6S-ix95f9kNsA6ueKRpN2J)) | 44.3 | 44.5 | 64.3 | 8.9ms | 112 || 61.8M | 123.1B
+| YOLOv5-x ([ckpt](https://drive.google.com/open?id=1Drs_Aiu7xx6S-ix95f9kNsA6ueKRpN2J)) | **47.1** | **47.2** | **66.7** | 15.2ms | 66 || 123.1M | 245.7B
+| YOLOv3-SPP ([ckpt](https://drive.google.com/open?id=1Drs_Aiu7xx6S-ix95f9kNsA6ueKRpN2J)) | 45.6 | 45.5 | 65.2 | 8.3ms | 120 || 63.0M | 118.0B
 
 ** AP<sup>test</sup> denotes COCO [test-dev2017](http://cocodataset.org/#upload) server results, all other AP results in the table denote val2017 accuracy.
-** All AP numbers are for single-model single-scale without ensemble or test-time augmentation. Reproduce by
-** Latency<sub>GPU</sub> measures end-to-end latency per image averaged over 5000 COCO val2017 images using a
+** All AP numbers are for single-model single-scale without ensemble or test-time augmentation. Reproduce by `python test.py --img 736 --conf 0.001`
+** Latency<sub>GPU</sub> measures end-to-end latency per image averaged over 5000 COCO val2017 images using a GCP [n1-standard-16](https://cloud.google.com/compute/docs/machine-types#n1_standard_machine_types) instance with one V100 GPU, and includes image preprocessing, pytorch FP32 inference at batch size 16, postprocessing and NMS. Average NMS time included in this chart is 1-2ms/img. Reproduce by `python test.py --img 640 --conf 0.1`
 ** All checkpoints are trained to 300 epochs with default settings and hyperparameters (no autoaugmentation).
 
 
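The Latency<sub>GPU</sub> footnote added above defines the measurement as end-to-end FP32 inference at batch size 16, averaged per image over the 5000 val2017 images, with GPU work fully synchronized before and after timing. The snippet below is only a minimal sketch of that style of timing loop in PyTorch, not the repository's `test.py`; the `measure_latency` helper, its arguments, and the placeholder model/data are illustrative assumptions.

```python
# Minimal sketch of a batched GPU latency measurement (assumes a CUDA device
# and a list of equally sized CHW image tensors; not the repo's test.py).
import time
import torch

def measure_latency(model, images, batch_size=16, device="cuda"):
    """Return average FP32 forward-pass latency per image, in seconds."""
    model = model.to(device).eval()
    total_time, total_images = 0.0, 0
    with torch.no_grad():
        for i in range(0, len(images), batch_size):
            batch = torch.stack(images[i:i + batch_size]).to(device)
            torch.cuda.synchronize()          # finish any pending GPU work first
            start = time.time()
            _ = model(batch)                  # FP32 forward pass (NMS would follow here)
            torch.cuda.synchronize()          # wait for this batch to complete
            total_time += time.time() - start
            total_images += batch.shape[0]
    return total_time / total_images
```

The FPS column is just the reciprocal of the latency column: 1000 / 2.9 ms ≈ 345 for YOLOv5-s and 1000 / 15.2 ms ≈ 66 for YOLOv5-x.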