# face-parsing.PyTorch
<p align="center">
<a href="https://github.com/zllrunning/face-parsing.PyTorch">
<img src="https://github.com/zllrunning/face-parsing.PyTorch/blob/master/6.jpg" alt="face parsing example">
</a>
</p>
### Contents
- [Training](#training)
- [Demo](#demo)
- [Face makeup using parsing maps](#face-makeup-using-parsing-maps)
- [References](#references)
## Training
1. Prepare the training data:
-- download the [CelebAMask-HQ dataset](https://github.com/switchablenorms/CelebAMask-HQ)
-- edit the dataset paths in `prepropess_data.py`, then run (a sketch of what this step produces follows the list):
```Shell
python prepropess_data.py
```
2. Train the model on the CelebAMask-HQ dataset by running the training script:
```Shell
$ CUDA_VISIBLE_DEVICES=0,1 python -m torch.distributed.launch --nproc_per_node=2 train.py
```
If you do not wish to train the model, you can download [our pre-trained model](https://drive.google.com/open?id=154JgKpzCPW82qINcVieuPH3fZ2e0P812) and save it in `res/cp`.
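
For reference, the preprocessing in step 1 merges CelebAMask-HQ's per-attribute binary masks into single-channel label maps. The sketch below illustrates the idea under assumed paths and an assumed attribute-to-label mapping; the authoritative version is `prepropess_data.py` itself:

```Python
import os
import numpy as np
from PIL import Image

# Assumed locations -- adjust to wherever CelebAMask-HQ is unpacked.
ANNO_DIR = "CelebAMask-HQ/CelebAMask-HQ-mask-anno"
OUT_DIR = "CelebAMask-HQ/mask"

# Per-attribute mask names from CelebAMask-HQ; the position of each
# attribute here (1-based) becomes its label id, with 0 = background.
ATTS = ["skin", "l_brow", "r_brow", "l_eye", "r_eye", "eye_g",
        "l_ear", "r_ear", "ear_r", "nose", "mouth", "u_lip",
        "l_lip", "neck", "neck_l", "cloth", "hair", "hat"]

os.makedirs(OUT_DIR, exist_ok=True)
# The annotations ship in 15 folders of 2000 images each.
for folder in range(15):
    for idx in range(folder * 2000, (folder + 1) * 2000):
        merged = np.zeros((512, 512), dtype=np.uint8)
        for label, att in enumerate(ATTS, start=1):
            path = os.path.join(ANNO_DIR, str(folder), f"{idx:05d}_{att}.png")
            if os.path.exists(path):  # not every image has every attribute
                att_mask = np.array(Image.open(path).convert("L"))
                merged[att_mask > 0] = label  # later attributes overwrite earlier ones
        Image.fromarray(merged).save(os.path.join(OUT_DIR, f"{idx}.png"))
```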
## Demo
1. Evaluate the trained model using:
```Shell
# evaluate using GPU
python test.py
```
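
As a minimal sketch of what the demo does for a single image: load the checkpoint into the repo's BiSeNet, resize to 512x512, normalize with ImageNet statistics, and take the per-pixel argmax over the 19 class logits (18 CelebAMask-HQ attributes plus background). The checkpoint filename and image path here are illustrative:

```Python
import torch
from PIL import Image
from torchvision import transforms
from model import BiSeNet  # model definition from this repo

net = BiSeNet(n_classes=19).cuda()  # 18 face attributes + background
net.load_state_dict(torch.load("res/cp/79999_iter.pth"))  # illustrative filename
net.eval()

to_tensor = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize((0.485, 0.456, 0.406), (0.229, 0.224, 0.225)),
])

img = Image.open("imgs/example.jpg").convert("RGB")  # illustrative path
x = to_tensor(img.resize((512, 512), Image.BILINEAR)).unsqueeze(0).cuda()

with torch.no_grad():
    out = net(x)[0]  # main output head, shape (1, 19, 512, 512)
# Per-pixel class ids, shape (512, 512)
parsing = out.squeeze(0).argmax(0).cpu().numpy()
```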
## Face makeup using parsing maps
[**face-makeup.PyTorch**](https://github.com/zllrunning/face-makeup.PyTorch) uses these parsing maps to recolor regions such as hair and lips; a sketch of the idea follows the table below.
<table>
<tr>
<th> </th>
<th>Hair</th>
<th>Lip</th>
</tr>
<!-- Row 1: Original Input -->
<tr>
<td><em>Original Input</em></td>
<td><img src="makeup/116_ori.png" height="256" width="256" alt="Original Input"></td>
<td><img src="makeup/116_lip_ori.png" height="256" width="256" alt="Original Input"></td>
</tr>
<!-- Row 2: Color -->
<tr>
<td><em>Color</em></td>
<td><img src="makeup/116_1.png" height="256" width="256" alt="Color"></td>
<td><img src="makeup/116_3.png" height="256" width="256" alt="Color"></td>
</tr>
</table>
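
The parsing map reduces makeup transfer to masked color editing: every pixel labeled hair (or lip) can be re-tinted while the rest of the image is left untouched. Below is a naive alpha-blend sketch; the label id and color are illustrative, and face-makeup.PyTorch itself uses a more careful color-space manipulation that preserves shading:

```Python
import numpy as np

def recolor(image_bgr, parsing, part_label, color_bgr, alpha=0.7):
    """Blend a target color into the region whose parsing label matches.

    `parsing` is the (H, W) per-pixel label map produced above; `part_label`
    is illustrative (e.g. the id assigned to "hair" in the label mapping).
    """
    region = parsing == part_label
    out = image_bgr.copy()
    out[region] = (alpha * np.asarray(color_bgr, dtype=np.float32)
                   + (1.0 - alpha) * image_bgr[region]).astype(np.uint8)
    return out

# Example: tint the hair region, assuming hair has label 17 in the mapping.
# tinted = recolor(img_bgr, parsing, part_label=17, color_bgr=(230, 50, 20))
```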
## References
- [BiSeNet](https://github.com/CoinCheung/BiSeNet)