---
title: face-censor
app_file: src/main.py
sdk: gradio
sdk_version: 5.12.0
---

# Face Detection and Censoring System

A Python-based system for detecting faces in images and videos using YOLOv8, with the ability to censor detected faces. The system is designed to be modular and extensible.

## Features

- Face detection using YOLOv8
- Support for both image and video processing
- Easy-to-use user interface
- Modular censoring system
- Trained on the WIDER FACE dataset via Roboflow
- Multiple masking methods: blur, emoji, and text (see [demo](#demo))

## Local Installation

1. Clone the repository:
```bash
git clone https://github.com/Spring-0/face-censor.git
cd face-censor
```

2. Create a virtual environment and activate it:
```bash
python -m venv .venv
source .venv/bin/activate  # On Windows, use: .venv\Scripts\activate
```

3. Install the required packages:
```bash
pip install -r requirements.txt
```

4. Run:
```bash
python src/main.py
```
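
This should launch the project's Gradio user interface (`sdk: gradio` in the front matter above), letting you process images and videos from the browser.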

## Training the Model (Optional)

The project uses the WIDER FACE dataset from Roboflow for training. I have included a pre-trained model, so there is no need to re-train it unless you want to. If you do, here is how:

1. Update this line in `training/training.py` if required:
```python
device="0"  # "0" trains on the first GPU; set to "cpu" to train on the CPU
```

2. Create a `.env` file in the project root with your Roboflow API key:
```bash
ROBOFLOW_API_KEY=your_api_key_here
```

3. Run the training script:
```bash
cd training
python3 training.py
```
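
For orientation, the overall flow of a script like `training/training.py` is sketched below, assuming the standard Roboflow and Ultralytics APIs. The workspace and project names, dataset version, and hyperparameters are illustrative placeholders, not the script's actual values:

```python
import os

from dotenv import load_dotenv
from roboflow import Roboflow
from ultralytics import YOLO

# Load ROBOFLOW_API_KEY from the .env file created in step 2
load_dotenv()

# Download the WIDER FACE dataset in YOLOv8 format.
# Workspace, project, and version are hypothetical placeholders.
rf = Roboflow(api_key=os.getenv("ROBOFLOW_API_KEY"))
dataset = (
    rf.workspace("example-workspace")
    .project("wider-face")
    .version(1)
    .download("yolov8")
)

# Fine-tune a YOLOv8 model on the downloaded dataset
model = YOLO("yolov8n.pt")
model.train(
    data=f"{dataset.location}/data.yaml",
    epochs=50,   # illustrative value
    imgsz=640,
    device="0",  # "0" = first GPU, "cpu" = CPU (see step 1)
)
```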

## Usage

### API Usage
```python
# Face detection model
from models.yolo_detector import YOLOFaceDetector

# Masking methods (import only the ones you want to use)
from masking.text import TextCensor
from masking.emoji import EmojiCensor
from masking.blur import BlurCensor

# Media processor
from processor import MediaProcessor

# Initialize the face detector model
detector = YOLOFaceDetector()
```
### Creating Masking Object
The masking object determines which effect is applied to detected faces.

#### Using Text Masking
```python
text_censor = TextCensor(
    text="HELLO",  # The text to draw over faces
    draw_background=True,  # Whether to draw a solid background behind the text
    background_color="white",  # The color of the solid background
    text_color="black",  # The color of the text
    scale_factor=0.2  # The text size scaling factor, defaults to 0.5
)
```
#### Using Emoji Masking
```python
emoji_censor = EmojiCensor(
    emoji="😁",  # The emoji to mask faces with
    font="seguiemj.ttf",  # Path to the emoji font file, defaults to "seguiemj.ttf"
    scale_factor=1.0  # The emoji size scaling factor, defaults to 1.0
)
```
#### Using Blur Masking
```python
blur_censor = BlurCensor(
    blur_factor=71  # The strength of the blur effect, defaults to 99
)
```
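
If the blur is implemented with OpenCV's `GaussianBlur`, `blur_factor` would act as the kernel size and should be an odd number; note that both 71 and the default 99 are odd.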

### Create Media Processor
After creating a masking object, pass it to the `MediaProcessor` constructor along with the detector, like so:
```python
processor = MediaProcessor(detector, blur_censor)
```

### Processing Images
```python
# Process an image
processor.process_image("input.jpg", "output.jpg")
```

### Processing Videos
```python
# Process a video
processor.process_video("input.mp4", "output.mp4")
```
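
Putting the pieces together, a minimal end-to-end script using only the calls shown above looks like this:

```python
from models.yolo_detector import YOLOFaceDetector
from masking.blur import BlurCensor
from processor import MediaProcessor

# Detect faces with the bundled YOLOv8 model and blur them
detector = YOLOFaceDetector()
blur_censor = BlurCensor(blur_factor=71)
processor = MediaProcessor(detector, blur_censor)

processor.process_image("input.jpg", "output_blur.jpg")
processor.process_video("input.mp4", "output_blur.mp4")
```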

## Demo

### Input Image/Video
![Input Image/Video](assets/input.jpg)

### Output Image/Video
#### Blur Masking
![Output Blur Image/Video](assets/output_blur.jpg)

#### Emoji Masking
![Output Emoji Image/Video](assets/output_emoji.jpg)

#### Text Masking
![Output Text Image/Video](assets/output_text.jpg)

## Requirements

- Python 3.8+
- PyTorch
- OpenCV
- Ultralytics YOLOv8
- Roboflow

See `requirements.txt` for the complete list.

## License

GNU General Public License - see the LICENSE file for details.

## Contributing

1. Fork the repository
2. Create your feature branch (`git checkout -b feature/AmazingFeature`)
3. Commit your changes (`git commit -m 'Add some AmazingFeature'`)
4. Push to the branch (`git push origin feature/AmazingFeature`)
5. Open a Pull Request

## TODO

- [x] Add emoji face masking
- [ ] Add support for real-time streams
- [x] Add GUI interface
- [ ] Add partial face censoring (eyes)