---
language: en
tags:
- object detection
- computer vision
- darknet
- yolo
datasets:
- coco
- imagenette
license: mit
thumbnail: https://github.com/hunglc007/tensorflow-yolov4-tflite
pipeline_tag: object-detection
---
# YOLOv4
YOLO, short for "You Only Look Once", is a real-time object detection system, introduced in this paper, that recognizes multiple objects in a single pass over an image. It identifies objects faster and more precisely than many other recognition systems. Three authors, Alexey Bochkovskiy (the Russian developer who built the YOLO Windows version), Chien-Yao Wang, and Hong-Yuan Mark Liao, are credited with this work, and the entire code is available on GitHub.
This YOLOv4 library is inspired by previous YOLOv3 implementations.
## Limitations and biases
Object-recognition technology has improved drastically in the past few years across the industry, and it is now part of a huge variety of products and services that millions of people worldwide use. However, errors in object-recognition algorithms can stem from training data that is geographically constrained and/or fails to reflect cultural differences.
The COCO dataset used to train yolov4-tflite has been found to have annotation errors on more than 20% of images. Such errors include captions describing people differently based on skin tone and gender expression. This serves as a reminder to be cognizant that these biases already exist and a warning to be careful about the increasing bias that is likely to come with advancements in image captioning technology.
## How to use yolov4-tflite
You can use this model to detect objects in an image of your choice. Follow the scripts below to run it yourself!
First install git-lfs and clone the model repository:

```bash
# Install git-lfs (Debian/Ubuntu)
curl -s https://packagecloud.io/install/repositories/github/git-lfs/script.deb.sh | sudo bash
sudo apt-get install git-lfs
git lfs install

# Clone this model repository and enter it
git clone https://huggingface.co/SamMorgan/yolo_v4_tflite
cd ./yolo_v4_tflite
```

Then run detection, either on the bundled example image or on an image of your choice:

```bash
# Demo on the example kite image
python detect.py --weights ./checkpoints/yolov4-416 --size 416 --model yolov4 --image ./data/kite.jpg --output ./test.jpg

# Your own image
python detect.py --weights ./checkpoints/yolov4-416 --size 416 --model yolov4 --image <insert path to image of choice> --output <insert path to output location of choice>
```
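If you would rather call the model from Python than through `detect.py`, the checkpoint can be loaded as a TensorFlow SavedModel. The snippet below is a minimal sketch, assuming `./checkpoints/yolov4-416` is a SavedModel export with a `serving_default` signature that takes a 416x416 RGB image scaled to [0, 1]; the exact signature and output tensor names may differ in your export.

```python
# Minimal inference sketch (assumes a SavedModel export with a serving_default signature;
# adjust paths and tensor names to match your actual checkpoint).
import cv2
import numpy as np
import tensorflow as tf

model = tf.saved_model.load("./checkpoints/yolov4-416")
infer = model.signatures["serving_default"]

# Preprocess: BGR -> RGB, resize to the network input size, scale to [0, 1], add batch dim.
image = cv2.imread("./data/kite.jpg")
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
image = cv2.resize(image, (416, 416)).astype(np.float32) / 255.0
batch = tf.constant(image[np.newaxis, ...])  # shape (1, 416, 416, 3)

# The raw outputs are box/score tensors that still need non-max suppression;
# detect.py performs that post-processing step for you.
outputs = infer(batch)
for name, tensor in outputs.items():
    print(name, tensor.shape)
```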
## Evaluate on COCO 2017 Dataset

```bash
# Preprocess the COCO 2017 annotations
cd data
mkdir dataset
cd ..
cd scripts
python coco_convert.py --input ./coco/annotations/instances_val2017.json --output val2017.pkl
python coco_annotation.py --coco_path ./coco
cd ..

# Run evaluation and compute mAP
python evaluate.py --weights ./data/yolov4.weights
cd mAP/extra
python remove_space.py
cd ..
python main.py --output results_yolov4_tf
```
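The `mAP` scripts above produce the final numbers. As a rough cross-check, if you export your detections to a COCO results JSON, the same bbox mAP can also be computed with `pycocotools`; this sketch is not part of the repository, and the detections file name below is hypothetical.

```python
# COCO bbox evaluation with pycocotools (sketch; not one of this repository's scripts).
from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval

coco_gt = COCO("./coco/annotations/instances_val2017.json")    # ground-truth annotations
coco_dt = coco_gt.loadRes("./results/yolov4_detections.json")  # hypothetical detections file

coco_eval = COCOeval(coco_gt, coco_dt, iouType="bbox")
coco_eval.evaluate()
coco_eval.accumulate()
coco_eval.summarize()  # prints AP, AP50, AP75 and the other COCO metrics
```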
### mAP50 on COCO 2017 Dataset

| Detection | 512x512 | 416x416 | 320x320 |
|-----------|---------|---------|---------|
| YoloV3    | 55.43   | 52.32   |         |
| YoloV4    | 61.96   | 57.33   |         |
## Benchmark

```bash
python benchmarks.py --size 416 --model yolov4 --weights ./data/yolov4.weights
```
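`benchmarks.py` measures throughput for you. If you just want a quick sanity check of FPS on your own hardware, a rough sketch along these lines works too, assuming the same SavedModel export used above.

```python
# Rough FPS sanity check (sketch; benchmarks.py is the repository's proper benchmark).
import time
import numpy as np
import tensorflow as tf

model = tf.saved_model.load("./checkpoints/yolov4-416")
infer = model.signatures["serving_default"]
dummy = tf.constant(np.random.rand(1, 416, 416, 3).astype(np.float32))

# Warm up so graph tracing and GPU initialization are not counted.
for _ in range(10):
    infer(dummy)

runs = 100
start = time.perf_counter()
for _ in range(runs):
    infer(dummy)
elapsed = time.perf_counter() - start
print(f"{runs / elapsed:.1f} FPS at batch size 1")
```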
### TensorRT performance

| YoloV4 416 images/s | FP32 | FP16 | INT8 |
|---------------------|------|------|------|
| Batch size 1        | 55   | 116  |      |
| Batch size 8        | 70   | 152  |      |
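These figures come from a TensorRT-optimized engine. One way to build such an engine from the SavedModel is TensorFlow's TF-TRT converter, sketched below with FP16 precision; this illustrates the general approach and is not necessarily the exact conversion used for the table above (argument names vary slightly across TensorFlow versions).

```python
# TF-TRT conversion sketch (illustrative; requires a TensorRT-enabled TensorFlow build).
from tensorflow.python.compiler.tensorrt import trt_convert as trt

converter = trt.TrtGraphConverterV2(
    input_saved_model_dir="./checkpoints/yolov4-416",
    precision_mode=trt.TrtPrecisionMode.FP16,  # FP32 / FP16; INT8 additionally needs calibration data
)
converter.convert()
converter.save("./checkpoints/yolov4-trt-fp16-416")
```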
### Tesla P100

| Detection  | 512x512 | 416x416 | 320x320 |
|------------|---------|---------|---------|
| YoloV3 FPS | 40.6    | 49.4    | 61.3    |
| YoloV4 FPS | 33.4    | 41.7    | 50.0    |
### Tesla K80

| Detection  | 512x512 | 416x416 | 320x320 |
|------------|---------|---------|---------|
| YoloV3 FPS | 10.8    | 12.9    | 17.6    |
| YoloV4 FPS | 9.6     | 11.7    | 16.0    |
### Tesla T4

| Detection  | 512x512 | 416x416 | 320x320 |
|------------|---------|---------|---------|
| YoloV3 FPS | 27.6    | 32.3    | 45.1    |
| YoloV4 FPS | 24.0    | 30.3    | 40.1    |
### Tesla P4

| Detection  | 512x512 | 416x416 | 320x320 |
|------------|---------|---------|---------|
| YoloV3 FPS | 20.2    | 24.2    | 31.2    |
| YoloV4 FPS | 16.2    | 20.2    | 26.5    |
### Macbook Pro 15 (2.3GHz i7)

| Detection  | 512x512 | 416x416 | 320x320 |
|------------|---------|---------|---------|
| YoloV3 FPS |         |         |         |
| YoloV4 FPS |         |         |         |
## Training your own model

In config.py, set FIRST_STAGE_EPOCHS=0, then run:

```bash
# Train from scratch
python train.py

# Or fine-tune from the pretrained Darknet weights
python train.py --weights ./data/yolov4.weights
```

The training performance is not fully reproduced yet, so I recommend using Alex's Darknet to train on your own data, then converting the .weights file to TensorFlow or TFLite.
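Once you have a TensorFlow SavedModel (whether trained here or converted from Darknet .weights with the repository's conversion scripts), a TFLite model can be produced with the standard TensorFlow converter. A minimal sketch, assuming the SavedModel lives at `./checkpoints/yolov4-416`:

```python
# SavedModel -> TFLite conversion sketch using the standard TensorFlow converter
# (the repository also ships its own conversion scripts for this step).
import tensorflow as tf

converter = tf.lite.TFLiteConverter.from_saved_model("./checkpoints/yolov4-416")
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # optional weight quantization
tflite_model = converter.convert()

with open("./checkpoints/yolov4-416.tflite", "wb") as f:
    f.write(tflite_model)
```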
## References

- YOLOv4: Optimal Speed and Accuracy of Object Detection (arXiv:2004.10934)
- darknet