Accuracy evaluation of models in OpenCV Zoo
Make sure you have the following packages installed:
pip install tqdm
pip install scikit-learn
pip install scipy==1.8.1
Generally speaking, evaluation can be done with the following command:
python eval.py -m model_name -d dataset_name -dr dataset_root_dir
Supported datasets:
ImageNet
Prepare data
Please visit https://image-net.org/ to download the ImageNet dataset (only the images in ILSVRC/Data/CLS-LOC/val are needed) and the labels from caffe. Organize the files as follows:
$ tree -L 2 /path/to/imagenet
.
├── caffe_ilsvrc12
│   ├── det_synset_words.txt
│   ├── imagenet.bet.pickle
│   ├── imagenet_mean.binaryproto
│   ├── synsets.txt
│   ├── synset_words.txt
│   ├── test.txt
│   ├── train.txt
│   └── val.txt
├── caffe_ilsvrc12.tar.gz
├── ILSVRC
│   ├── Annotations
│   ├── Data
│   └── ImageSets
├── imagenet_object_localization_patched2019.tar.gz
├── LOC_sample_submission.csv
├── LOC_synset_mapping.txt
├── LOC_train_solution.csv
└── LOC_val_solution.csv
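The Caffe label files map each validation image to an integer class index, one entry per line (e.g. `ILSVRC2012_val_00000001.JPEG 65`). A minimal sketch of parsing that layout (the helper name and sample entries are illustrative, not part of eval.py):

```python
# Sketch: parse Caffe-style "filename class_index" label lines,
# as found in caffe_ilsvrc12/val.txt (sample entries are illustrative).
def parse_val_labels(text):
    """Return a dict mapping image filename -> integer class index."""
    labels = {}
    for line in text.strip().splitlines():
        filename, class_idx = line.split()
        labels[filename] = int(class_idx)
    return labels

sample = """ILSVRC2012_val_00000001.JPEG 65
ILSVRC2012_val_00000002.JPEG 970"""

labels = parse_val_labels(sample)
print(labels["ILSVRC2012_val_00000001.JPEG"])  # 65
```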
Evaluation
Run evaluation with the following command:
python eval.py -m mobilenet -d imagenet -dr /path/to/imagenet
WIDERFace
This script is adapted from WiderFace-Evaluation.
Prepare data
Please visit http://shuoyang1213.me/WIDERFACE to download the WIDERFace dataset (Validation Images, Face annotations and eval_tools). Organize the files as follows:
$ tree -L 2 /path/to/widerface
.
├── eval_tools
│   ├── boxoverlap.m
│   ├── evaluation.m
│   ├── ground_truth
│   ├── nms.m
│   ├── norm_score.m
│   ├── plot
│   ├── read_pred.m
│   └── wider_eval.m
├── wider_face_split
│   ├── readme.txt
│   ├── wider_face_test_filelist.txt
│   ├── wider_face_test.mat
│   ├── wider_face_train_bbx_gt.txt
│   ├── wider_face_train.mat
│   ├── wider_face_val_bbx_gt.txt
│   └── wider_face_val.mat
└── WIDER_val
    └── images
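Each entry in wider_face_val_bbx_gt.txt is an image path, a face count, and then one line of ten integers per face (x, y, w, h followed by attribute flags). A sketch of a parser for that layout (the function is illustrative; it also skips the attribute flags):

```python
# Sketch: parse WIDERFace ground-truth annotation blocks
# (image path, face count, then one "x y w h ..." line per face).
def parse_bbx_gt(text):
    """Return a dict mapping image path -> list of [x, y, w, h] boxes."""
    lines = iter(text.strip().splitlines())
    annotations = {}
    for path in lines:
        num_faces = int(next(lines))
        boxes = []
        for _ in range(num_faces):
            values = next(lines).split()
            boxes.append([int(v) for v in values[:4]])  # keep x, y, w, h only
        annotations[path] = boxes
    return annotations

sample = """0--Parade/0_Parade_marchingband_1_849.jpg
1
449 330 122 149 0 0 0 0 0 0"""

gt = parse_bbx_gt(sample)
print(gt["0--Parade/0_Parade_marchingband_1_849.jpg"])  # [[449, 330, 122, 149]]
```

Note that in the real annotation file an image with zero faces is still followed by a single all-zero line, which this sketch does not handle.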
Evaluation
Run evaluation with the following command:
python eval.py -m yunet -d widerface -dr /path/to/widerface
LFW
This script is adapted from the evaluation code of InsightFace.
This evaluation uses YuNet as the face detector. The structure of the face bounding boxes saved in lfw_face_bboxes.npy is shown below. Each row represents the bounding box of the main face used in each image.
[
[x, y, w, h, x_re, y_re, x_le, y_le, x_nt, y_nt, x_rcm, y_rcm, x_lcm, y_lcm],
...
[x, y, w, h, x_re, y_re, x_le, y_le, x_nt, y_nt, x_rcm, y_rcm, x_lcm, y_lcm]
]
x, y, w, h are the top-left coordinates, width and height of the face bounding box; {x, y}_{re, le, nt, rcm, lcm} are the coordinates of the right eye, left eye, nose tip, and the right and left corners of the mouth, respectively. The data type of this numpy array is np.float32.
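With that layout, one row can be unpacked into box and landmark coordinates as below (the array is synthetic; only the column order comes from the description above):

```python
import numpy as np

# Synthetic stand-in for one row of lfw_face_bboxes.npy:
# x, y, w, h, then five (x, y) landmark pairs.
row = np.arange(14, dtype=np.float32)

x, y, w, h = row[:4]
landmarks = row[4:].reshape(5, 2)  # right eye, left eye, nose tip, right/left mouth corner
right_eye, left_eye, nose_tip, mouth_right, mouth_left = landmarks

print(row.dtype)        # float32
print(landmarks.shape)  # (5, 2)
```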
Prepare data
Please visit http://vis-www.cs.umass.edu/lfw to download the LFW dataset: all images (decompress them) and pairs.txt (place it in the view2 folder). Organize the files as follows:
$ tree -L 2 /path/to/lfw
.
├── lfw
│   ├── Aaron_Eckhart
│   ├── ...
│   └── Zydrunas_Ilgauskas
└── view2
    └── pairs.txt
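pairs.txt lists a matched pair as `name idx1 idx2` and a mismatched pair as `name1 idx1 name2 idx2`, with image files named `name/name_XXXX.jpg`. A sketch of turning one such line into two image paths plus a same-identity flag (the function name is illustrative):

```python
# Sketch: parse one line of LFW pairs.txt.
# "Name i j"        -> two images of the same person
# "Name1 i Name2 j" -> images of two different people
def parse_pair(line):
    parts = line.split()
    if len(parts) == 3:
        name, i, j = parts
        return (f"{name}/{name}_{int(i):04d}.jpg",
                f"{name}/{name}_{int(j):04d}.jpg", True)
    name1, i, name2, j = parts
    return (f"{name1}/{name1}_{int(i):04d}.jpg",
            f"{name2}/{name2}_{int(j):04d}.jpg", False)

print(parse_pair("Aaron_Eckhart 1 2"))
# ('Aaron_Eckhart/Aaron_Eckhart_0001.jpg', 'Aaron_Eckhart/Aaron_Eckhart_0002.jpg', True)
```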
Evaluation
Run evaluation with the following command:
python eval.py -m sface -d lfw -dr /path/to/lfw
ICDAR2003
Prepare data
Please visit http://iapr-tc11.org/mediawiki/index.php/ICDAR_2003_Robust_Reading_Competitions to download the ICDAR2003 dataset and the labels. Only the Robust Word Recognition TrialTrain Set is needed. Organize the files as follows:
$ tree -L 2 /path/to/icdar
.
├── word
│   ├── 1
│   │   ├── self
│   │   ├── ...
│   │   └── willcooks
│   ├── ...
│   └── 12
└── word.xml
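word.xml maps each cropped word image to its ground-truth text. Assuming the ICDAR2003 layout of `<image file="..." tag="..."/>` elements inside a `<tagset>` root (the sample below is illustrative), it can be read with the standard library:

```python
import xml.etree.ElementTree as ET

# Illustrative sample in the ICDAR2003 word.xml layout:
# each <image> element pairs a cropped-image path with its text label.
sample = """<tagset>
  <image file="word/1/self/001.jpg" tag="Self" />
  <image file="word/1/self/002.jpg" tag="Coffee" />
</tagset>"""

root = ET.fromstring(sample)
labels = {img.get("file"): img.get("tag") for img in root.iter("image")}
print(labels["word/1/self/001.jpg"])  # Self
```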
Evaluation
Run evaluation with the following command:
python eval.py -m crnn -d icdar -dr /path/to/icdar
Example
Download the zip file from http://www.iapr-tc11.org/dataset/ICDAR2003_RobustReading/TrialTrain/word.zip
Unzip it to /path/to/icdar
python eval.py -m crnn -d icdar -dr /path/to/icdar
IIIT5K
Prepare data
Please visit https://github.com/cv-small-snails/Text-Recognition-Material to download the IIIT5K dataset and the labels.
Evaluation
Any dataset in lmdb format can be evaluated with this script.
Run evaluation with the following command:
python eval.py -m crnn -d iiit5k -dr /path/to/iiit5k
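Text-recognition lmdb datasets of this kind conventionally store a num-samples entry plus paired image-%09d / label-%09d keys. A sketch of that key scheme (the naming follows the common CRNN lmdb convention; that this exact layout is what the script expects is an assumption):

```python
# Sketch: the key naming convention of CRNN-style lmdb text-recognition
# datasets (1-based indices, zero-padded to nine digits).
def image_key(index):
    return b"image-%09d" % index

def label_key(index):
    return b"label-%09d" % index

print(image_key(1))     # b'image-000000001'
print(label_key(3000))  # b'label-000003000'

# With the lmdb package installed, entries could then be read as:
#   with lmdb.open(path, readonly=True).begin() as txn:
#       n = int(txn.get(b"num-samples"))
#       label = txn.get(label_key(1)).decode()
```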
Mini Supervisely
Prepare data
Please download the mini_supervisely data from here, which includes the validation dataset, and unzip it. Organize the files as follows:
$ tree -L 2 /path/to/mini_supervisely
.
├── Annotations
│   ├── ache-adult-depression-expression-41253.png
│   └── ...
├── Images
│   ├── ache-adult-depression-expression-41253.jpg
│   └── ...
├── test.txt
├── train.txt
└── val.txt
Evaluation
Run evaluation with the following command:
python eval.py -m pphumanseg -d mini_supervisely -dr /path/to/mini_supervisely
Run evaluation on the quantized model with the following command:
python eval.py -m pphumanseg_q -d mini_supervisely -dr /path/to/mini_supervisely
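Segmentation accuracy on this dataset is typically reported as mean IoU over classes. A minimal numpy sketch of that metric on toy masks (illustrative only; the exact metric reported by eval.py may differ):

```python
import numpy as np

# Sketch: mean intersection-over-union between a predicted and a
# ground-truth mask of integer class labels (toy 2x2 example).
def mean_iou(pred, gt, num_classes):
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, gt == c).sum()
        union = np.logical_or(pred == c, gt == c).sum()
        if union > 0:  # skip classes absent from both masks
            ious.append(inter / union)
    return float(np.mean(ious))

pred = np.array([[0, 1], [1, 1]])
gt   = np.array([[0, 1], [0, 1]])
print(mean_iou(pred, gt, num_classes=2))  # (1/2 + 2/3) / 2 = 0.5833...
```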