# Joint Face Detection and Alignment using Multi-task Cascaded Convolutional Networks

This repo contains the code, data and trained models for the paper [Joint Face Detection and Alignment using Multi-task Cascaded Convolutional Networks](https://arxiv.org/ftp/arxiv/papers/1604/1604.02878.pdf).

## Overview

MTCNN is a popular face detection algorithm that passes an image through a cascade of three convolutional networks: P-Net proposes candidate face windows, R-Net refines them, and O-Net produces the final bounding boxes together with five facial landmarks. It handles varied lighting and pose conditions and can detect multiple faces in a single image.

We implement MTCNN in PyTorch, a popular deep learning framework that provides the tools for building and training neural networks.

![](https://img.enderfga.cn/img/image-20221208152130975.png)

![](https://img.enderfga.cn/img/image-20221208152231511.png)
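
For intuition about why the first stage is cheap, the sketch below shows a minimal P-Net-style proposal network. It is a simplified illustration, not the implementation in `utils/models.py`: because the network is fully convolutional, the same weights that classify one 12×12 crop can be slid over every scale of an image pyramid, yielding a dense grid of face scores and bounding-box offsets in a single forward pass.

```python
import torch
import torch.nn as nn

class TinyPNet(nn.Module):
    """Simplified P-Net-style proposal network (illustrative only).

    A 12x12 input produces a single output cell; a larger input produces a
    grid of cells, one prediction per 12x12 window of the image.
    """
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 10, 3), nn.PReLU(10),
            nn.MaxPool2d(2, 2, ceil_mode=True),
            nn.Conv2d(10, 16, 3), nn.PReLU(16),
            nn.Conv2d(16, 32, 3), nn.PReLU(32),
        )
        self.cls = nn.Conv2d(32, 2, 1)  # face / non-face scores
        self.box = nn.Conv2d(32, 4, 1)  # bounding-box regression offsets

    def forward(self, x):
        feat = self.backbone(x)
        return torch.softmax(self.cls(feat), dim=1), self.box(feat)

probs, offsets = TinyPNet()(torch.randn(1, 3, 12, 12))
print(probs.shape, offsets.shape)  # [1, 2, 1, 1] and [1, 4, 1, 1]
```

R-Net and O-Net then re-examine only the surviving candidate windows, at 24×24 and 48×48 resolution respectively, which is what keeps the cascade fast.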
## Description of files
```shell
├── README.md                      # Explanatory document
├── get_data.py                    # Generates the training data for the network specified by "--net"
├── img                            # mid.png is used for testing the visualization; the other images are the corresponding results
│   ├── mid.png
│   ├── onet.png
│   ├── pnet.png
│   ├── rnet.png
│   ├── result.png
│   └── result.jpg
├── model_store                    # Our pre-trained models
│   ├── onet_epoch_20.pt
│   ├── pnet_epoch_20.pt
│   └── rnet_epoch_20.pt
├── requirements.txt               # Environment version requirements
├── test.py                        # Produces the visualization for the network specified by "--net"
├── test.sh                        # Tests mid.png and saves the output visualization of all three networks
├── train.out                      # Our complete training log for this experiment
├── train.py                       # Trains the network specified by "--net"
├── train.sh                       # Generates the data and trains all three networks from start to finish
└── utils                          # Common helper functions and modules
    ├── config.py
    ├── dataloader.py
    ├── detect.py
    ├── models.py
    ├── tool.py
    └── vision.py
```
## Requirements

* numpy==1.21.4
* matplotlib==3.5.0
* opencv-python==4.4.0.42
* torch==1.13.0+cu116

## How to Install

- ```shell
  conda create -n env python=3.8 -y
  conda activate env
  ```
- ```shell
  pip install -r requirements.txt
  ```

## Preprocessing

- Download the [WIDER_FACE](http://shuoyang1213.me/WIDERFACE/) face detection data and store it in `./data_set/face_detection`
- Download the [CNN_FacePoint](http://mmlab.ie.cuhk.edu.hk/archive/CNN_FacePoint.htm) face detection and landmark data and store it in `./data_set/face_landmark`

### Preprocessed Data

```shell
# Before training Pnet
python get_data.py --net=pnet
# Before training Rnet, pass the path of your trained Pnet checkpoint
python get_data.py --net=rnet --pnet_path=./model_store/pnet_epoch_20.pt
# Before training Onet, pass the paths of your trained Pnet and Rnet checkpoints
python get_data.py --net=onet --pnet_path=./model_store/pnet_epoch_20.pt --rnet_path=./model_store/rnet_epoch_20.pt
```
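
Behind the scenes, this data-generation step labels each candidate crop by its IoU with the ground-truth boxes, following the thresholds in the paper: negatives (IoU < 0.3) are used for the face classifier, positives (IoU > 0.65) for both classification and bounding-box regression, and part faces (IoU between 0.4 and 0.65) for box regression only. The sketch below illustrates that rule with a plain IoU helper; it is not the code in `get_data.py`.

```python
def iou(box_a, box_b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / float(area_a + area_b - inter)

def label_crop(crop_box, gt_box):
    """Label a candidate crop following the paper's IoU thresholds."""
    overlap = iou(crop_box, gt_box)
    if overlap < 0.3:
        return "negative"   # used only for the classification loss
    if overlap > 0.65:
        return "positive"   # classification + box regression
    if overlap >= 0.4:
        return "part face"  # box regression only
    return "ignored"        # 0.3-0.4: not used for training

print(label_crop((0, 0, 100, 100), (10, 10, 110, 110)))  # high overlap -> positive
```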

## How to Run

### Train

```shell
python train.py --net=pnet/rnet/onet # Specify which network to train
bash train.sh                        # Alternatively, generate the data and train all three networks in order
```

Checkpoints are saved under `./model_store/`.
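
Whichever network you train, the objective in the paper is a weighted sum of three task losses: face/non-face cross-entropy, bounding-box regression, and facial-landmark regression, weighted 1 / 0.5 / 0.5 for P-Net and R-Net and 1 / 0.5 / 1 for O-Net. The snippet below is only a schematic of that weighting with assumed tensor shapes, not the loss code actually used in `train.py`; in the paper each term is additionally applied only to the sample types it is defined for.

```python
import torch
import torch.nn.functional as F

def mtcnn_loss(cls_logits, cls_target, box_pred, box_target,
               lmk_pred, lmk_target, w_cls=1.0, w_box=0.5, w_lmk=0.5):
    """Schematic multi-task loss (weights shown are the paper's P-Net/R-Net
    setting; O-Net uses w_lmk=1.0). Placeholder shapes: logits (N, 2),
    boxes (N, 4), landmarks (N, 10)."""
    loss_cls = F.cross_entropy(cls_logits, cls_target)  # face vs. non-face
    loss_box = F.mse_loss(box_pred, box_target)          # bbox offset regression
    loss_lmk = F.mse_loss(lmk_pred, lmk_target)          # 5-point landmark regression
    return w_cls * loss_cls + w_box * loss_box + w_lmk * loss_lmk

# Dummy usage with random tensors
N = 8
loss = mtcnn_loss(torch.randn(N, 2), torch.randint(0, 2, (N,)),
                  torch.randn(N, 4), torch.randn(N, 4),
                  torch.randn(N, 10), torch.randn(N, 10))
print(loss.item())
```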

#### Finetuning from an existing checkpoint

```shell
python train.py --net=pnet/rnet/onet --load=[model path]
```

The model path should point to a checkpoint file under `./model_store/`, e.g. `--load=./model_store/pnet_epoch_20.pt`.

### Evaluate

#### Use the shell script to test all three networks in order

```shell
bash test.sh
```

#### To detect a single image

```shell
python test.py --net=pnet/rnet/onet --path=test.jpg
```
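
To call the detector from your own Python code instead of the CLI, the general pattern looks like the sketch below. `detect_faces` is a hypothetical placeholder for the detection routine in `utils/detect.py` (check that file for the real class/function names and return format); it is assumed here to return one `(x1, y1, x2, y2)` box and five `(x, y)` landmark points per face.

```python
import cv2  # opencv-python, already listed in requirements.txt

# Hypothetical import; see utils/detect.py for the real entry point.
# from utils.detect import detect_faces

def draw_detections(image_path, boxes, landmarks, out_path="result.png"):
    """Draw MTCNN-style outputs: one box per face and five landmark points."""
    img = cv2.imread(image_path)
    if img is None:
        raise FileNotFoundError(image_path)
    for (x1, y1, x2, y2), points in zip(boxes, landmarks):
        cv2.rectangle(img, (int(x1), int(y1)), (int(x2), int(y2)), (0, 255, 0), 2)
        for (px, py) in points:
            cv2.circle(img, (int(px), int(py)), 2, (0, 0, 255), -1)
    cv2.imwrite(out_path, img)

# Example with made-up coordinates; replace them with the real detector output:
# boxes, landmarks = detect_faces("test.jpg")
draw_detections("test.jpg", [(50, 40, 180, 200)],
                [[(80, 90), (150, 90), (115, 130), (90, 165), (145, 165)]])
```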

#### To detect a video stream from a camera

```shell
python test.py --input_mode=0
```

#### The result of `--net=pnet`

![](https://img.enderfga.cn/img/20221208160900.png)

#### The result of `--net=rnet`

![](https://img.enderfga.cn/img/image-20221208155022083.png)

#### The result of `--net=onet`

![](https://img.enderfga.cn/img/image-20221208155044451.png)