---
language:
- en
thumbnail: ''
tags:
- controlnet
- laion
- face
- mediapipe
- image-to-image
license: openrail
base_model: stabilityai/stable-diffusion-2-1-base
datasets:
- LAION-Face
- LAION
pipeline_tag: image-to-image
---

This repository is a fork of https://huggingface.co/CrucibleAI/ControlNetMediaPipeFace
# ControlNet LAION Face Dataset

## Table of Contents:
- Overview: Samples, Contents, and Construction
- Usage: Downloading, Training, and Inference
- License
- Credits and Thanks

# Overview:

This dataset is designed to train a ControlNet with human facial expressions. It includes keypoints for pupils so that gaze direction can be controlled. Training has been tested on Stable Diffusion v2.1 base (512) and Stable Diffusion v1.5.

## Samples:

Cherry-picked from ControlNet + Stable Diffusion v2.1 Base

|Input|Face Detection|Output|
|:---:|:---:|:---:|
|<img src="https://huggingface.co/CrucibleAI/ControlNetMediaPipeFace/resolve/main/samples_laion_face_dataset/happy_source.jpg">|<img src="https://huggingface.co/CrucibleAI/ControlNetMediaPipeFace/resolve/main/samples_laion_face_dataset/happy_annotation.png">|<img src="https://huggingface.co/CrucibleAI/ControlNetMediaPipeFace/resolve/main/samples_laion_face_dataset/happy_result.png">|
|<img src="https://huggingface.co/CrucibleAI/ControlNetMediaPipeFace/resolve/main/samples_laion_face_dataset/neutral_source.jpg">|<img src="https://huggingface.co/CrucibleAI/ControlNetMediaPipeFace/resolve/main/samples_laion_face_dataset/neutral_annotation.png">|<img src="https://huggingface.co/CrucibleAI/ControlNetMediaPipeFace/resolve/main/samples_laion_face_dataset/neutral_result.png">|
|<img src="https://huggingface.co/CrucibleAI/ControlNetMediaPipeFace/resolve/main/samples_laion_face_dataset/sad_source.jpg">|<img src="https://huggingface.co/CrucibleAI/ControlNetMediaPipeFace/resolve/main/samples_laion_face_dataset/sad_annotation.png">|<img src="https://huggingface.co/CrucibleAI/ControlNetMediaPipeFace/resolve/main/samples_laion_face_dataset/sad_result.png">|
|<img src="https://huggingface.co/CrucibleAI/ControlNetMediaPipeFace/resolve/main/samples_laion_face_dataset/screaming_source.jpg">|<img src="https://huggingface.co/CrucibleAI/ControlNetMediaPipeFace/resolve/main/samples_laion_face_dataset/screaming_annotation.png">|<img src="https://huggingface.co/CrucibleAI/ControlNetMediaPipeFace/resolve/main/samples_laion_face_dataset/screaming_result.png">|
|<img src="https://huggingface.co/CrucibleAI/ControlNetMediaPipeFace/resolve/main/samples_laion_face_dataset/sideways_source.jpg">|<img src="https://huggingface.co/CrucibleAI/ControlNetMediaPipeFace/resolve/main/samples_laion_face_dataset/sideways_annotation.png">|<img src="https://huggingface.co/CrucibleAI/ControlNetMediaPipeFace/resolve/main/samples_laion_face_dataset/sideways_result.png">|
|<img src="https://huggingface.co/CrucibleAI/ControlNetMediaPipeFace/resolve/main/samples_laion_face_dataset/surprised_source.jpg">|<img src="https://huggingface.co/CrucibleAI/ControlNetMediaPipeFace/resolve/main/samples_laion_face_dataset/surprised_annotation.png">|<img src="https://huggingface.co/CrucibleAI/ControlNetMediaPipeFace/resolve/main/samples_laion_face_dataset/surprised_result.png">|

Images with multiple faces are also supported:

<img src="https://huggingface.co/CrucibleAI/ControlNetMediaPipeFace/resolve/main/samples_laion_face_dataset/family_source.jpg">

<img src="https://huggingface.co/CrucibleAI/ControlNetMediaPipeFace/resolve/main/samples_laion_face_dataset/family_annotation.png">

<img src="https://huggingface.co/CrucibleAI/ControlNetMediaPipeFace/resolve/main/samples_laion_face_dataset/family_result.png">

## Dataset Contents:

- train_laion_face.py - Entrypoint for ControlNet training.
- laion_face_dataset.py - Code for performing dataset iteration. Cropping and resizing happen here.
- tool_download_face_targets.py - A tool to read metadata.json and populate the target folder.
- tool_generate_face_poses.py - The original file used to generate the source images. Included for reproducibility, but not required for training.
- training/laion-face-processed/prompt.jsonl - Read by laion_face_dataset.py. Includes the prompts for the images.
- training/laion-face-processed/metadata.json - Excerpts from LAION for the relevant data. Also used for downloading the target dataset.
- training/laion-face-processed/source/xxxxxxxxx.jpg - Images with face detections drawn on them. Generated from the target images.
- training/laion-face-processed/target/xxxxxxxxx.jpg - Selected images from LAION Face.

## Dataset Construction:

Source images were generated by pulling slice 00000 from LAION Face and passing the images through MediaPipe's face detector with special configuration parameters.

The colors and line thicknesses used for MediaPipe are as follows:

```py
import mediapipe as mp

DrawingSpec = mp.solutions.drawing_utils.DrawingSpec

f_thick = 2
f_rad = 1
right_iris_draw = DrawingSpec(color=(10, 200, 250), thickness=f_thick, circle_radius=f_rad)
right_eye_draw = DrawingSpec(color=(10, 200, 180), thickness=f_thick, circle_radius=f_rad)
right_eyebrow_draw = DrawingSpec(color=(10, 220, 180), thickness=f_thick, circle_radius=f_rad)
left_iris_draw = DrawingSpec(color=(250, 200, 10), thickness=f_thick, circle_radius=f_rad)
left_eye_draw = DrawingSpec(color=(180, 200, 10), thickness=f_thick, circle_radius=f_rad)
left_eyebrow_draw = DrawingSpec(color=(180, 220, 10), thickness=f_thick, circle_radius=f_rad)
mouth_draw = DrawingSpec(color=(10, 180, 10), thickness=f_thick, circle_radius=f_rad)
head_draw = DrawingSpec(color=(10, 200, 10), thickness=f_thick, circle_radius=f_rad)

# Iris center landmarks from MediaPipe's refined face mesh.
iris_landmark_spec = {468: right_iris_draw, 473: left_iris_draw}
```

We have implemented a method named `draw_pupils`, which modifies some functionality from MediaPipe. It exists as a stopgap until some pending upstream changes are merged.

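For reference, the sketch below shows roughly how an annotation image of this kind can be produced with MediaPipe's FaceMesh solution. It is only an approximation: the per-feature colors above and the `draw_pupils` handling live in `tool_generate_face_poses.py`, so this sketch falls back to MediaPipe's default contour style, and the input filename is purely illustrative.

```py
# Rough sketch (not the dataset tool): draw a face-mesh annotation on a black canvas.
# The actual generator applies the DrawingSpec colors listed above via tool_generate_face_poses.py.
import cv2
import numpy as np
import mediapipe as mp

mp_face_mesh = mp.solutions.face_mesh
mp_drawing = mp.solutions.drawing_utils
mp_styles = mp.solutions.drawing_styles

image_bgr = cv2.imread("face.jpg")   # illustrative input photo
canvas = np.zeros_like(image_bgr)    # annotations are drawn on a black background

with mp_face_mesh.FaceMesh(static_image_mode=True,
                           max_num_faces=5,
                           refine_landmarks=True,   # adds the iris landmarks (468/473)
                           min_detection_confidence=0.5) as face_mesh:
    results = face_mesh.process(cv2.cvtColor(image_bgr, cv2.COLOR_BGR2RGB))

for face_landmarks in (results.multi_face_landmarks or []):
    mp_drawing.draw_landmarks(
        image=canvas,
        landmark_list=face_landmarks,
        connections=mp_face_mesh.FACEMESH_CONTOURS,
        landmark_drawing_spec=None,
        connection_drawing_spec=mp_styles.get_default_face_mesh_contours_style(),
    )

cv2.imwrite("face_annotation.png", canvas)
```
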

# Usage:

The containing ZIP file should be decompressed into the root of the ControlNet directory. The `train_laion_face.py`, `laion_face_dataset.py`, and other `.py` files should sit adjacent to `tutorial_train.py` and `tutorial_train_sd21.py`. We are assuming a checkout of the ControlNet repo at 0acb7e5, but there is no direct dependency on the repository.

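As a concrete example of the assumed layout (the archive filename here is illustrative; use whatever name your download has):

```bash
git clone https://github.com/lllyasviel/ControlNet.git
cd ControlNet
git checkout 0acb7e5
unzip ../controlnet_laion_face.zip   # train_laion_face.py etc. end up next to tutorial_train.py
```
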

## Downloading:

For copyright reasons, we cannot include the original target files. We have provided a script (`tool_download_face_targets.py`) which will read `training/laion-face-processed/metadata.json` and populate the target folder. The script has no required dependencies, but will use tqdm if it is installed.

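Assuming the script is run with no arguments from the repository root (where the `training/` folder lives):

```bash
pip install tqdm   # optional; only used for a progress bar if present
python tool_download_face_targets.py
```
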

## Training:

When the target folder is fully populated, training can be run on a machine with at least 24 gigabytes of VRAM. Our model was trained for 200 hours (four epochs) on an A6000.

```bash
python tool_add_control.py ./models/v1-5-pruned-emaonly.ckpt ./models/controlnet_sd15_laion_face.ckpt
python ./train_laion_face_sd15.py
```

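For the SD 2.1-base entrypoint (`train_laion_face.py`), an equivalent invocation would presumably look like the following. The converter script and checkpoint filename come from the upstream ControlNet repository and the standard SD 2.1-base release, not from this card, so treat them as assumptions:

```bash
python tool_add_control_sd21.py ./models/v2-1_512-ema-pruned.ckpt ./models/controlnet_sd21_laion_face.ckpt
python ./train_laion_face.py
```
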

## Inference:

We have provided `gradio_face2image.py`. Update the following two lines to point them at your trained model.

```py
model = create_model('./models/cldm_v21.yaml').cpu()  # If you fine-tune on SD2.1 base, this does not need to change.
model.load_state_dict(load_state_dict('./models/control_sd21_openpose.pth', location='cuda'))
```

The model has some limitations: while it is empirically better at tracking gaze and mouth poses than previous attempts, it may still ignore controls. Adding details to the prompt, such as "looking right", can mitigate this behavior.

## 🧨 Diffusers

It is recommended to use this checkpoint with [Stable Diffusion 2.1 - Base](https://huggingface.co/stabilityai/stable-diffusion-2-1-base), since the checkpoint was trained on it.
Experimentally, the checkpoint can also be used with other diffusion models, such as DreamBooth-fine-tuned Stable Diffusion models.

To use it with Stable Diffusion 1.5, insert `subfolder="diffusion_sd15"` into the `from_pretrained` arguments. A v1.5 half-precision variant is provided but untested.

1. Install `diffusers` and related packages:
```
$ pip install diffusers transformers accelerate
```

2. Run code:
```py
from PIL import Image
import numpy as np
import torch
from diffusers import StableDiffusionControlNetPipeline, ControlNetModel, UniPCMultistepScheduler
from diffusers.utils import load_image

image = load_image(
    "https://huggingface.co/CrucibleAI/ControlNetMediaPipeFace/resolve/main/samples_laion_face_dataset/family_annotation.png"
)

# Stable Diffusion 2.1-base:
controlnet = ControlNetModel.from_pretrained("CrucibleAI/ControlNetMediaPipeFace", torch_dtype=torch.float16, variant="fp16")
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1-base", controlnet=controlnet, safety_checker=None, torch_dtype=torch.float16
)
# OR
# Stable Diffusion 1.5:
controlnet = ControlNetModel.from_pretrained("CrucibleAI/ControlNetMediaPipeFace", subfolder="diffusion_sd15")
pipe = StableDiffusionControlNetPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", controlnet=controlnet, safety_checker=None)

pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config)

# Remove if you do not have xformers installed;
# see https://huggingface.co/docs/diffusers/v0.13.0/en/optimization/xformers#installing-xformers
# for installation instructions.
pipe.enable_xformers_memory_efficient_attention()
pipe.enable_model_cpu_offload()

image = pipe("a happy family at a dentist advertisement", image=image, num_inference_steps=30).images[0]
image.save('./images.png')
```

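If the v1.5 half-precision variant mentioned above follows the same `fp16` variant naming as the SD 2.1 weights, it could be loaded as shown below; this is a sketch of an untested path, per the note above:

```py
# Assumes an fp16 variant exists inside the diffusion_sd15 subfolder.
controlnet = ControlNetModel.from_pretrained(
    "CrucibleAI/ControlNetMediaPipeFace",
    subfolder="diffusion_sd15",
    torch_dtype=torch.float16,
    variant="fp16",
)
```
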

# License:

### Source Images: (/training/laion-face-processed/source/)
This work is marked with CC0 1.0. To view a copy of this license, visit http://creativecommons.org/publicdomain/zero/1.0

### Trained Models:
Our trained ControlNet checkpoints are released under the CreativeML Open RAIL-M license.

### Source Code:
lllyasviel/ControlNet is licensed under the Apache License 2.0.

Our modifications are released under the same license.


# Credits and Thanks:

Greatest thanks to Zhang et al. for ControlNet, Rombach et al. (StabilityAI) for Stable Diffusion, and Schuhmann et al. for LAION.

Sample images for this document were obtained from Unsplash and are CC0.

```
@misc{zhang2023adding,
  title={Adding Conditional Control to Text-to-Image Diffusion Models},
  author={Lvmin Zhang and Maneesh Agrawala},
  year={2023},
  eprint={2302.05543},
  archivePrefix={arXiv},
  primaryClass={cs.CV}
}

@misc{rombach2021highresolution,
  title={High-Resolution Image Synthesis with Latent Diffusion Models},
  author={Robin Rombach and Andreas Blattmann and Dominik Lorenz and Patrick Esser and Björn Ommer},
  year={2021},
  eprint={2112.10752},
  archivePrefix={arXiv},
  primaryClass={cs.CV}
}

@misc{schuhmann2022laion5b,
  title={LAION-5B: An open large-scale dataset for training next generation image-text models},
  author={Christoph Schuhmann and Romain Beaumont and Richard Vencu and Cade Gordon and Ross Wightman and Mehdi Cherti and Theo Coombes and Aarush Katta and Clayton Mullis and Mitchell Wortsman and Patrick Schramowski and Srivatsa Kundurthy and Katherine Crowson and Ludwig Schmidt and Robert Kaczmarczyk and Jenia Jitsev},
  year={2022},
  eprint={2210.08402},
  archivePrefix={arXiv},
  primaryClass={cs.CV}
}
```

This project was made possible by Crucible AI.