---
license: apache-2.0
---

# FaceScore

<p align="center">
📃 <a href="https://arxiv.org/abs/2406.17100" target="_blank">Paper</a> • 🌐 <a href="https://github.com/OPPO-Mente-Lab/FaceScore" target="_blank">Repo</a>
</p>

**FaceScore: Benchmarking and Enhancing Face Quality in Human Generation**

Traditional facial quality assessment asks whether a face is suitable for recognition, while image aesthetic scorers emphasize overall aesthetics rather than fine details. FaceScore is the first reward model that focuses on faces in text-to-image generation: it scores the faces in generated images. It is fine-tuned on positive/negative sample pairs produced by an inpainting pipeline over real face images, and it surpasses previous models at predicting human preferences for generated faces.

- [Example Use](#example-use)
- [LoRA based on SDXL](#lora-based-on-sdxl)
- [Citation](#citation)


## Example Use

We provide an example inference script in this repo, along with a real face image for testing. Note that the model can also score real faces in an image, and no specific prompt is required.

Use the following code to get face quality scores from FaceScore:

```python
from FaceScore import FaceScore

face_score_model = FaceScore('FaceScore')
# To load from a local checkpoint instead:
# face_score_model = FaceScore(path_to_checkpoint, med_config=path_to_config)

img_path = 'assets/Lecun.jpg'
face_score, box, confidences = face_score_model.get_reward(img_path)
print(f'The face score of {img_path} is {face_score}, and the bounding box of the face(s) is {box}')
```
You can also load the model locally after downloading the checkpoint from [FaceScore](https://huggingface.co/AIGCer-OPPO/FaceScore/tree/main).

The output should look like the following (the exact numbers may differ slightly depending on the compute device):

```
The face score of assets/Lecun.jpg is 3.993915319442749, and the bounding box of the faces is [[104.02845764160156, 28.232379913330078, 143.57421875, 78.53730773925781]]
```
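The bounding boxes above are `[x1, y1, x2, y2]` pixel coordinates. When an image contains several faces, you may want to focus on the most prominent one; a small helper (hypothetical, not part of the package) can select the largest detected face by area:

```python
# Hypothetical helper: pick the index of the largest face among the
# [x1, y1, x2, y2] boxes returned by get_reward (box format as in the
# example output above).
def largest_face_index(boxes):
    areas = [(x2 - x1) * (y2 - y1) for x1, y1, x2, y2 in boxes]
    return max(range(len(areas)), key=areas.__getitem__)

boxes = [
    [104.03, 28.23, 143.57, 78.54],  # ~39.5 x 50.3 px face
    [10.0, 10.0, 20.0, 20.0],        # small 10 x 10 px face
]
print(largest_face_index(boxes))  # prints 0: the first box is larger
```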

## LoRA based on SDXL
We leverage FaceScore to filter training data and perform direct preference optimization (DPO) on SDXL.
The LoRA weights are available [here](https://huggingface.co/AIGCer-OPPO/FaceScore-dpo-SDXL-LoRA/tree/main).
Here is a quick example:
```python
from diffusers import StableDiffusionXLPipeline, AutoencoderKL
import torch

# Load the SDXL base pipeline in half precision.
inference_dtype = torch.float16
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=inference_dtype,
)
# Use the fp16-safe VAE to avoid numerical issues in half precision.
vae = AutoencoderKL.from_pretrained(
    'madebyollin/sdxl-vae-fp16-fix',
    torch_dtype=inference_dtype,
)
pipe.vae = vae
# Load the DPO LoRA weights (a local path also works).
pipe.load_lora_weights("AIGCer-OPPO/FaceScore-dpo-SDXL-LoRA")
pipe.to('cuda')

generator = torch.Generator(device='cuda').manual_seed(42)
image = pipe(
    prompt='A woman in a costume standing in the desert',
    guidance_scale=5.0,
    generator=generator,
    output_type='pil',
).images[0]
image.save('A woman in a costume standing in the desert.png')
```
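The filtering step mentioned above can be sketched as follows. This is an illustrative reconstruction, not the paper's exact recipe: given FaceScore values for two images generated from the same prompt, keep the pair only when the score gap is decisive, and label winner and loser for DPO (the `min_gap` threshold and the dict layout are assumptions):

```python
# Illustrative sketch of score-based pair filtering for DPO.
# score_a / score_b are FaceScore values for two candidate images
# generated from the same prompt; min_gap is an assumed threshold.
def make_dpo_pair(prompt, score_a, score_b, min_gap=0.5):
    if abs(score_a - score_b) < min_gap:
        return None  # scores too close: skip the ambiguous pair
    winner, loser = ('a', 'b') if score_a > score_b else ('b', 'a')
    return {'prompt': prompt, 'chosen': winner, 'rejected': loser}

pair = make_dpo_pair('A woman in a costume standing in the desert', 4.1, 2.9)
print(pair['chosen'])  # prints 'a': its face score is clearly higher
```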
Below we show examples generated with our LoRA (right) compared with the original SDXL (left).
<div style="display: flex; justify-content: space-around;">
    <div style="text-align: center;">
        <img src="https://huggingface.co/AIGCer-OPPO/FaceScore/resolve/main/assets/desert.jpg" alt="desert comparison" style="width: 600px;" />
        <p>A woman in a costume standing in the desert.</p>
    </div>
    <div style="text-align: center;">
        <img src="https://huggingface.co/AIGCer-OPPO/FaceScore/resolve/main/assets/scarf.jpg" alt="scarf comparison" style="width: 600px;" />
        <p>A woman wearing a blue jacket and scarf.</p>
    </div>
</div>
<div style="display: flex; justify-content: space-around;">
    <div style="text-align: center;">
        <img src="https://huggingface.co/AIGCer-OPPO/FaceScore/resolve/main/assets/stage.jpg" alt="stage comparison" style="width: 600px;" />
        <p>A woman in a costume standing in the desert.</p>
    </div>
    <div style="text-align: center;">
        <img src="https://huggingface.co/AIGCer-OPPO/FaceScore/resolve/main/assets/striped.jpg" alt="striped shirt comparison" style="width: 600px;" />
        <p>A woman with black hair and a striped shirt.</p>
    </div>
</div>
<div style="display: flex; justify-content: space-around;">
    <div style="text-align: center;">
        <img src="https://huggingface.co/AIGCer-OPPO/FaceScore/resolve/main/assets/sword.jpg" alt="sword comparison" style="width: 600px;" />
        <p>A woman with white hair and white armor is holding a sword.</p>
    </div>
    <div style="text-align: center;">
        <img src="https://huggingface.co/AIGCer-OPPO/FaceScore/resolve/main/assets/white.jpg" alt="white shirt comparison" style="width: 600px;" />
        <p>A woman with long black hair and a white shirt.</p>
    </div>
</div>

## Citation

```bibtex
@misc{liao2024facescorebenchmarkingenhancingface,
      title={FaceScore: Benchmarking and Enhancing Face Quality in Human Generation},
      author={Zhenyi Liao and Qingsong Xie and Chen Chen and Hannan Lu and Zhijie Deng},
      year={2024},
      eprint={2406.17100},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2406.17100},
}
```