glenn-jocher and pre-commit-ci[bot] committed
Commit ec4b6dd • 1 Parent(s): e1dc894

Update export format docstrings (#6151)


* Update export documentation

* Cleanup

* Update export.py

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* Update README.md

* Update README.md

* Update README.md

* Update train.py

* Update train.py

Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>

Files changed (5)
  1. README.md +31 -27
  2. detect.py +18 -6
  3. export.py +14 -12
  4. train.py +9 -2
  5. val.py +13 -1
README.md CHANGED
@@ -62,15 +62,14 @@ See the [YOLOv5 Docs](https://docs.ultralytics.com) for full documentation on tr
<details open>
<summary>Install</summary>

- [**Python>=3.6.0**](https://www.python.org/) is required with all
- [requirements.txt](https://github.com/ultralytics/yolov5/blob/master/requirements.txt) installed including
- [**PyTorch>=1.7**](https://pytorch.org/get-started/locally/):
- <!-- $ sudo apt update && apt install -y libgl1-mesa-glx libsm6 libxext6 libxrender-dev -->
+ Clone repo and install [requirements.txt](https://github.com/ultralytics/yolov5/blob/master/requirements.txt) in a
+ [**Python>=3.6.0**](https://www.python.org/) environment, including
+ [**PyTorch>=1.7**](https://pytorch.org/get-started/locally/).

```bash
- $ git clone https://github.com/ultralytics/yolov5
- $ cd yolov5
- $ pip install -r requirements.txt
+ git clone https://github.com/ultralytics/yolov5  # clone
+ cd yolov5
+ pip install -r requirements.txt  # install
```

</details>

@@ -78,8 +77,9 @@ $ pip install -r requirements.txt
<details open>
<summary>Inference</summary>

- Inference with YOLOv5 and [PyTorch Hub](https://github.com/ultralytics/yolov5/issues/36). Models automatically download
- from the [latest YOLOv5 release](https://github.com/ultralytics/yolov5/releases).
+ Inference with YOLOv5 and [PyTorch Hub](https://github.com/ultralytics/yolov5/issues/36)
+ . [Models](https://github.com/ultralytics/yolov5/tree/master/models) download automatically from the latest
+ YOLOv5 [release](https://github.com/ultralytics/yolov5/releases).

```python
import torch

@@ -104,17 +104,17 @@ results.print()  # or .show(), .save(), .crop(), .pandas(), etc.
<details>
<summary>Inference with detect.py</summary>

- `detect.py` runs inference on a variety of sources, downloading models automatically from
- the [latest YOLOv5 release](https://github.com/ultralytics/yolov5/releases) and saving results to `runs/detect`.
+ `detect.py` runs inference on a variety of sources, downloading [models](https://github.com/ultralytics/yolov5/tree/master/models) automatically from
+ the latest YOLOv5 [release](https://github.com/ultralytics/yolov5/releases) and saving results to `runs/detect`.

```bash
- $ python detect.py --source 0  # webcam
-                             img.jpg  # image
-                             vid.mp4  # video
-                             path/  # directory
-                             path/*.jpg  # glob
-                             'https://youtu.be/Zgi9g1ksQHc'  # YouTube
-                             'rtsp://example.com/media.mp4'  # RTSP, RTMP, HTTP stream
+ python detect.py --source 0  # webcam
+                           img.jpg  # image
+                           vid.mp4  # video
+                           path/  # directory
+                           path/*.jpg  # glob
+                           'https://youtu.be/Zgi9g1ksQHc'  # YouTube
+                           'rtsp://example.com/media.mp4'  # RTSP, RTMP, HTTP stream
```

</details>

@@ -122,16 +122,20 @@ $ python detect.py --source 0  # webcam
<details>
<summary>Training</summary>

- Run commands below to reproduce results
- on [COCO](https://github.com/ultralytics/yolov5/blob/master/data/scripts/get_coco.sh) dataset (dataset auto-downloads on
- first use). Training times for YOLOv5s/m/l/x are 2/4/6/8 days on a single V100 (multi-GPU times faster). Use the
- largest `--batch-size` your GPU allows (batch sizes shown for 16 GB devices).
+ The commands below reproduce YOLOv5 [COCO](https://github.com/ultralytics/yolov5/blob/master/data/scripts/get_coco.sh)
+ results. [Models](https://github.com/ultralytics/yolov5/tree/master/models)
+ and [datasets](https://github.com/ultralytics/yolov5/tree/master/data) download automatically from the latest
+ YOLOv5 [release](https://github.com/ultralytics/yolov5/releases). Training times for YOLOv5n/s/m/l/x are
+ 1/2/4/6/8 days on a V100 GPU ([Multi-GPU](https://github.com/ultralytics/yolov5/issues/475) times faster). Use the
+ largest `--batch-size` possible, or pass `--batch-size -1` for
+ YOLOv5 [AutoBatch](https://github.com/ultralytics/yolov5/pull/5092). Batch sizes shown for V100-16GB.

```bash
- $ python train.py --data coco.yaml --cfg yolov5s.yaml --weights '' --batch-size 64
-                                          yolov5m                                40
-                                          yolov5l                                24
-                                          yolov5x                                16
+ python train.py --data coco.yaml --cfg yolov5n.yaml --weights '' --batch-size 128
+                                        yolov5s                                64
+                                        yolov5m                                40
+                                        yolov5l                                24
+                                        yolov5x                                16
```

<img width="800" src="https://user-images.githubusercontent.com/26833433/90222759-949d8800-ddc1-11ea-9fa1-1c97eed2b963.png">

@@ -225,6 +229,7 @@ We are super excited about our first-ever Ultralytics YOLOv5 🚀 EXPORT Competi
### Pretrained Checkpoints

[assets]: https://github.com/ultralytics/yolov5/releases
+
[TTA]: https://github.com/ultralytics/yolov5/issues/303

|Model |size<br><sup>(pixels) |mAP<sup>val<br>0.5:0.95 |mAP<sup>val<br>0.5 |Speed<br><sup>CPU b1<br>(ms) |Speed<br><sup>V100 b1<br>(ms) |Speed<br><sup>V100 b32<br>(ms) |params<br><sup>(M) |FLOPs<br><sup>@640 (B)

@@ -257,7 +262,6 @@ We love your input! We want to make contributing to YOLOv5 as easy and transpare

<a href="https://github.com/ultralytics/yolov5/graphs/contributors"><img src="https://opencollective.com/ultralytics/contributors.svg?width=990" /></a>

-
## <div align="center">Contact</div>

  For YOLOv5 bugs and feature requests please visit [GitHub Issues](https://github.com/ultralytics/yolov5/issues). For business inquiries or
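
The README's Inference snippet above is cut off at `import torch` in this diff. For reference, a minimal PyTorch Hub inference sketch along the lines the section describes; the `'ultralytics/yolov5'` Hub entrypoint and the results API follow the PyTorch Hub tutorial linked in the README, and the example image URL is purely illustrative:

```python
import torch

# Load a pretrained model from PyTorch Hub (downloads yolov5s.pt on first use)
model = torch.hub.load('ultralytics/yolov5', 'yolov5s')

# Run inference on an image (file path, URL, PIL/OpenCV image or numpy array)
img = 'https://ultralytics.com/images/zidane.jpg'  # illustrative example image
results = model(img)

results.print()  # or .show(), .save(), .crop(), .pandas(), etc.
```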
detect.py CHANGED
@@ -2,14 +2,26 @@
"""
Run inference on images, videos, directories, streams, etc.

- Usage:
+ Usage - sources:
$ python path/to/detect.py --weights yolov5s.pt --source 0  # webcam
                                                         img.jpg  # image
                                                         vid.mp4  # video
                                                         path/  # directory
                                                         path/*.jpg  # glob
                                                         'https://youtu.be/Zgi9g1ksQHc'  # YouTube
                                                         'rtsp://example.com/media.mp4'  # RTSP, RTMP, HTTP stream
+
+ Usage - formats:
+ $ python path/to/detect.py --weights yolov5s.pt  # PyTorch
+                                      yolov5s.torchscript  # TorchScript
+                                      yolov5s.onnx  # ONNX Runtime or OpenCV DNN with --dnn
+                                      yolov5s.mlmodel  # CoreML (under development)
+                                      yolov5s_openvino_model  # OpenVINO (under development)
+                                      yolov5s_saved_model  # TensorFlow SavedModel
+                                      yolov5s.pb  # TensorFlow protobuf
+                                      yolov5s.tflite  # TensorFlow Lite
+                                      yolov5s_edgetpu.tflite  # TensorFlow Edge TPU
+                                      yolov5s.engine  # TensorRT
"""

  import argparse
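
To illustrate the new `Usage - formats` block, a hedged sketch of calling detect.py programmatically with an exported model. It assumes `detect.run()` mirrors the CLI flags (`weights`, `source`, `dnn`), which is not shown in this diff:

```python
import detect  # yolov5/detect.py, run from the repo root

# Assumption: detect.run() exposes the same options as the CLI flags above.
detect.run(weights='yolov5s.onnx',  # any exported weight from the formats list
           source='data/images',    # same values as the sources list (webcam, file, glob, URL, stream)
           dnn=False)               # True would select OpenCV DNN for ONNX inference (--dnn)
```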
export.py CHANGED
@@ -2,18 +2,19 @@
"""
Export a YOLOv5 PyTorch model to other formats. TensorFlow exports authored by https://github.com/zldrobit

Format                 | Example                  | `--include ...` argument
---                    | ---                      | ---
PyTorch                | yolov5s.pt               | -
TorchScript            | yolov5s.torchscript      | `torchscript`
ONNX                   | yolov5s.onnx             | `onnx`
CoreML                 | yolov5s.mlmodel          | `coreml`
OpenVINO               | yolov5s_openvino_model/  | `openvino`
TensorFlow SavedModel  | yolov5s_saved_model/     | `saved_model`
TensorFlow GraphDef    | yolov5s.pb               | `pb`
TensorFlow Lite        | yolov5s.tflite           | `tflite`
+ TensorFlow Edge TPU    | yolov5s_edgetpu.tflite   | `edgetpu`
TensorFlow.js          | yolov5s_web_model/       | `tfjs`
TensorRT               | yolov5s.engine           | `engine`

Usage:
$ python path/to/export.py --weights yolov5s.pt --include torchscript onnx coreml openvino saved_model tflite tfjs

@@ -27,6 +28,7 @@ Inference:
yolov5s_saved_model
yolov5s.pb
yolov5s.tflite
+ yolov5s_edgetpu.tflite
yolov5s.engine

  TensorFlow.js:
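
The `--include ...` column in the table above maps each export format to an argument name. A hedged sketch of driving the export from Python; an `export.run()` that mirrors the CLI flags is an assumption, not something shown in this diff:

```python
import export  # yolov5/export.py, run from the repo root

# Assumption: export.run() accepts the same weights/include options as the CLI.
export.run(weights='yolov5s.pt',                       # trained PyTorch checkpoint
           include=('torchscript', 'onnx', 'tflite'))  # names from the `--include ...` column

# CLI equivalent (as documented in the docstring above):
# python path/to/export.py --weights yolov5s.pt --include torchscript onnx tflite
```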
train.py CHANGED
@@ -1,10 +1,17 @@
# YOLOv5 🚀 by Ultralytics, GPL-3.0 license
"""
- Train a YOLOv5 model on a custom dataset
+ Train a YOLOv5 model on a custom dataset.
+
+ Models and datasets download automatically from the latest YOLOv5 release.
+ Models: https://github.com/ultralytics/yolov5/tree/master/models
+ Datasets: https://github.com/ultralytics/yolov5/tree/master/data
+ Tutorial: https://github.com/ultralytics/yolov5/wiki/Train-Custom-Data

Usage:
- $ python path/to/train.py --data coco128.yaml --weights yolov5s.pt --img 640
+ $ python path/to/train.py --data coco128.yaml --weights yolov5s.pt --img 640  # from pretrained (RECOMMENDED)
+ $ python path/to/train.py --data coco128.yaml --weights '' --cfg yolov5s.yaml --img 640  # from scratch
"""
+
import argparse
import math
  import os
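
The two usage lines added above distinguish training from a pretrained checkpoint and training from scratch. A hedged programmatic sketch, assuming `train.run()` forwards keyword arguments to the same options as the CLI flags (that helper is not part of this diff):

```python
import train  # yolov5/train.py, run from the repo root

# From pretrained (RECOMMENDED): fine-tune yolov5s.pt on COCO128
train.run(data='coco128.yaml', weights='yolov5s.pt', imgsz=640)

# From scratch: empty --weights, architecture taken from the --cfg YAML
train.run(data='coco128.yaml', weights='', cfg='yolov5s.yaml', imgsz=640)
```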
val.py CHANGED
@@ -3,7 +3,19 @@
Validate a trained YOLOv5 model accuracy on a custom dataset

Usage:
- $ python path/to/val.py --data coco128.yaml --weights yolov5s.pt --img 640
+ $ python path/to/val.py --weights yolov5s.pt --data coco128.yaml --img 640
+
+ Usage - formats:
+ $ python path/to/val.py --weights yolov5s.pt  # PyTorch
+                                   yolov5s.torchscript  # TorchScript
+                                   yolov5s.onnx  # ONNX Runtime or OpenCV DNN with --dnn
+                                   yolov5s.mlmodel  # CoreML (under development)
+                                   yolov5s_openvino_model  # OpenVINO (under development)
+                                   yolov5s_saved_model  # TensorFlow SavedModel
+                                   yolov5s.pb  # TensorFlow protobuf
+                                   yolov5s.tflite  # TensorFlow Lite
+                                   yolov5s_edgetpu.tflite  # TensorFlow Edge TPU
+                                   yolov5s.engine  # TensorRT
"""

  import argparse
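
As with detect.py, the new `Usage - formats` block documents validating any exported weight. A hedged sketch of the programmatic equivalent, assuming `val.run()` keyword names match the CLI flags shown above:

```python
import val  # yolov5/val.py, run from the repo root

# Assumption: val.run() mirrors the CLI options above.
val.run(data='coco128.yaml',
        weights='yolov5s_saved_model',  # any format from the list, e.g. TensorFlow SavedModel
        imgsz=640)
```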