glenn-jocher committed
Commit bfad364
1 Parent(s): dbbc6b5

Created using Colaboratory

Files changed (1)
  1. tutorial.ipynb +8 -8
tutorial.ipynb CHANGED
@@ -415,7 +415,7 @@
      "clear_output()\n",
      "print(f\"Setup complete. Using torch {torch.__version__} ({torch.cuda.get_device_properties(0).name if torch.cuda.is_available() else 'CPU'})\")"
     ],
-    "execution_count": 1,
+    "execution_count": null,
     "outputs": [
      {
       "output_type": "stream",
@@ -461,7 +461,7 @@
      "!python detect.py --weights yolov5s.pt --img 640 --conf 0.25 --source data/images/\n",
      "#Image(filename='runs/detect/exp/zidane.jpg', width=600)"
     ],
-    "execution_count": 4,
+    "execution_count": null,
    "outputs": [
      {
       "output_type": "stream",
@@ -538,7 +538,7 @@
      "torch.hub.download_url_to_file('https://github.com/ultralytics/yolov5/releases/download/v1.0/coco2017val.zip', 'tmp.zip')\n",
      "!unzip -q tmp.zip -d ../datasets && rm tmp.zip"
     ],
-    "execution_count": 5,
+    "execution_count": null,
     "outputs": [
      {
       "output_type": "display_data",
@@ -571,7 +571,7 @@
      "# Run YOLOv5x on COCO val2017\n",
      "!python val.py --weights yolov5x.pt --data coco.yaml --img 640 --iou 0.65 --half"
     ],
-    "execution_count": 6,
+    "execution_count": null,
     "outputs": [
      {
       "output_type": "stream",
@@ -734,7 +734,7 @@
      "# Train YOLOv5s on COCO128 for 3 epochs\n",
      "!python train.py --img 640 --batch 16 --epochs 3 --data coco128.yaml --weights yolov5s.pt --cache"
     ],
-    "execution_count": 8,
+    "execution_count": null,
     "outputs": [
      {
       "output_type": "stream",
@@ -853,13 +853,13 @@
      "\n",
      "All results are logged by default to `runs/train`, with a new experiment directory created for each new training as `runs/train/exp2`, `runs/train/exp3`, etc. View train and val jpgs to see mosaics, labels, predictions and augmentation effects. Note an Ultralytics **Mosaic Dataloader** is used for training (shown below), which combines 4 images into 1 mosaic during training.\n",
      "\n",
-     "> <img src=\"https://user-images.githubusercontent.com/26833433/124931219-48bf8700-e002-11eb-84f0-e05d95b118dd.jpg\" width=\"700\"> \n",
+     "> <img src=\"https://user-images.githubusercontent.com/26833433/131255960-b536647f-7c61-4f60-bbc5-cb2544d71b2a.jpg\" width=\"700\"> \n",
      "`train_batch0.jpg` shows train batch 0 mosaics and labels\n",
      "\n",
-     "> <img src=\"https://user-images.githubusercontent.com/26833433/124931217-4826f080-e002-11eb-87b9-ae0925a8c94b.jpg\" width=\"700\"> \n",
+     "> <img src=\"https://user-images.githubusercontent.com/26833433/131256748-603cafc7-55d1-4e58-ab26-83657761aed9.jpg\" width=\"700\"> \n",
      "`test_batch0_labels.jpg` shows val batch 0 labels\n",
      "\n",
-     "> <img src=\"https://user-images.githubusercontent.com/26833433/124931209-46f5c380-e002-11eb-9bd5-7a3de2be9851.jpg\" width=\"700\"> \n",
+     "> <img src=\"https://user-images.githubusercontent.com/26833433/131256752-3f25d7a5-7b0f-4bb3-ab78-46343c3800fe.jpg\" width=\"700\"> \n",
      "`test_batch0_pred.jpg` shows val batch 0 _predictions_\n",
      "\n",
      "Training results are automatically logged to [Tensorboard](https://www.tensorflow.org/tensorboard) and [CSV](https://github.com/ultralytics/yolov5/pull/4148) as `results.csv`, which is plotted as `results.png` (below) after training completes. You can also plot any `results.csv` file manually:\n",