# Towards Robust Blind Face Restoration with Codebook Lookup Transformer
Paper | Project Page | Video
Shangchen Zhou, Kelvin C.K. Chan, Chongyi Li, Chen Change Loy
S-Lab, Nanyang Technological University
:star: If CodeFormer is helpful to your images or projects, please help star this repo. Thanks! :hugs:
### Update
- 2022.09.09: Integrated into :rocket: Replicate. Try out the online demo!
- 2022.09.04: Add face upsampling `--face_upsample` for high-resolution AI-created face enhancement.
- 2022.08.23: Some modifications on face detection and fusion for better AI-created face enhancement.
- 2022.08.07: Integrate Real-ESRGAN to support background image enhancement.
- 2022.07.29: Integrate new face detectors of `['RetinaFace' (default), 'YOLOv5']`.
- 2022.07.17: Add Colab demo of CodeFormer.
- 2022.07.16: Release inference code for face restoration. :blush:
- 2022.06.21: This repo is created.
### TODO
- Add checkpoint for face inpainting
- Add training code and config files
- Add background image enhancement
*(Result galleries: Face Restoration, Face Color Enhancement and Restoration, and Face Inpainting; see the Project Page for visual examples.)*
### Dependencies and Installation
- PyTorch >= 1.7.1
- CUDA >= 10.1
- Other required packages in `requirements.txt`
```shell
# git clone this repository
git clone https://github.com/sczhou/CodeFormer
cd CodeFormer

# create new anaconda env
conda create -n codeformer python=3.8 -y
conda activate codeformer

# install python dependencies
pip3 install -r requirements.txt
python basicsr/setup.py develop
```
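To quickly confirm that the environment satisfies the version requirements listed above, you can query PyTorch from the command line (a minimal check, assuming PyTorch is installed in the `codeformer` environment):

```shell
# Sanity check: print the PyTorch and CUDA versions visible to the environment.
python -c "import torch; print('PyTorch:', torch.__version__); print('CUDA runtime:', torch.version.cuda); print('GPU available:', torch.cuda.is_available())"
```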
### Quick Inference
#### Download Pre-trained Models:
Download the facelib pretrained models from [Google Drive | OneDrive] to the `weights/facelib` folder. You can manually download the pretrained models OR download them by running the following command:

```shell
python scripts/download_pretrained_models.py facelib
```
Download the CodeFormer pretrained models from [Google Drive | OneDrive] to the `weights/CodeFormer` folder. You can manually download the pretrained models OR download them by running the following command:

```shell
python scripts/download_pretrained_models.py CodeFormer
```
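After downloading, you can verify that the checkpoints are in place (a minimal check using only the two folder names mentioned above; the exact file names depend on what the download script fetches):

```shell
# List the downloaded pretrained models.
ls weights/facelib weights/CodeFormer
```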
#### Prepare Testing Data:
You can put the testing images in the `inputs/TestWhole` folder. If you would like to test on cropped and aligned faces, you can put them in the `inputs/cropped_faces` folder.
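For example, to use your own photos for whole-image testing (a minimal sketch; `/path/to/your/images` is a placeholder for wherever your images live, and `.jpg` is just an example extension):

```shell
# Copy your own test images into the whole-image testing folder.
mkdir -p inputs/TestWhole
cp /path/to/your/images/*.jpg inputs/TestWhole/
```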
#### Testing on Face Restoration:
```shell
# For cropped and aligned faces
python inference_codeformer.py --w 0.5 --has_aligned --test_path [input folder]

# For whole images
# Add '--bg_upsampler realesrgan' to enhance the background regions with Real-ESRGAN
# Add '--face_upsample' to further upsample the restored faces with Real-ESRGAN
python inference_codeformer.py --w 0.7 --test_path [input folder]
```
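For example, to restore whole images with both background enhancement and face upsampling enabled, combine the flags described in the comments above with the `inputs/TestWhole` folder from the previous step:

```shell
python inference_codeformer.py --w 0.7 --test_path inputs/TestWhole --bg_upsampler realesrgan --face_upsample
```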
NOTE that `w` is in [0, 1]. Generally, a smaller `w` tends to produce a higher-quality result, while a larger `w` yields a higher-fidelity result. The results will be saved in the `results` folder.
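If you are unsure which fidelity weight suits your images, one option is to sweep a few values of `w` and compare the outputs side by side (a minimal sketch for a Unix shell; it renames the default `results` folder after each run so the outputs are kept separate):

```shell
# Run the whole-image pipeline with several fidelity weights and keep each output.
for w in 0.3 0.5 0.7 0.9; do
    python inference_codeformer.py --w $w --test_path inputs/TestWhole
    mv results results_w$w
done
```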
### Citation
If our work is useful for your research, please consider citing:
```bibtex
@article{zhou2022codeformer,
    author = {Zhou, Shangchen and Chan, Kelvin C.K. and Li, Chongyi and Loy, Chen Change},
    title = {Towards Robust Blind Face Restoration with Codebook Lookup TransFormer},
    journal = {arXiv preprint arXiv:2206.11253},
    year = {2022}
}
```
### License
This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.
### Acknowledgement
This project is based on BasicSR. We also borrow some code from Unleashing Transformers, YOLOv5-face, and FaceXLib. Thanks for their awesome work.
### Contact
If you have any questions, please feel free to reach out to me at shangchenzhou@gmail.com.