JosephCatrambone committed
Commit 497cb11
1 Parent(s): 5035e75

Update README.md


Add instructions for running the v1.5 variant.

Files changed (1): README.md +10 -2
README.md CHANGED
@@ -105,7 +105,7 @@ python ./train_laion_face_sd15.py
 We have provided `gradio_face2image.py`. Update the following two lines to point them to your trained model.
 
 ```
-model = create_model('./models/cldm_v21.yaml').cpu() # If you fine-tuned on SD2.1 base, this does not need to change.
+model = create_model('./models/cldm_v21.yaml').cpu() # If you fine-tune on SD2.1 base, this does not need to change.
 model.load_state_dict(load_state_dict('./models/control_sd21_openpose.pth', location='cuda'))
 ```
 
@@ -116,7 +116,9 @@ The model has some limitations: while it is empirically better at tracking gaze
 It is recommended to use the checkpoint with [Stable Diffusion 2.1 - Base](stabilityai/stable-diffusion-2-1-base) as the checkpoint has been trained on it.
 Experimentally, the checkpoint can be used with other diffusion models such as dreamboothed stable diffusion.
 
-1. Let's install `diffusers` and related packages:
+To use with Stable Diffusion 1.5, insert `subfolder="diffusion_sd15"` into the from_pretrained arguments. A v1.5 half-precision variant is provided but untested.
+
+1. Install `diffusers` and related packages:
 ```
 $ pip install diffusers transformers accelerate
 ```
@@ -133,10 +135,16 @@ image = load_image(
     "https://huggingface.co/CrucibleAI/ControlNetMediaPipeFace/resolve/main/samples_laion_face_dataset/family_annotation.png"
 )
 
+# Stable Diffusion 2.1-base:
 controlnet = ControlNetModel.from_pretrained("CrucibleAI/ControlNetMediaPipeFace", torch_dtype=torch.float16, variant="fp16")
 pipe = StableDiffusionControlNetPipeline.from_pretrained(
     "stabilityai/stable-diffusion-2-1-base", controlnet=controlnet, safety_checker=None, torch_dtype=torch.float16
 )
+# OR
+# Stable Diffusion 1.5:
+controlnet = ControlNetModel.from_pretrained("CrucibleAI/ControlNetMediaPipeFace", subfolder="diffusion_sd15")
+pipe = StableDiffusionControlNetPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", controlnet=controlnet, safety_checker=None)
+
 pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config)
 
 # Remove if you do not have xformers installed
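Pulling the commit's two snippets together, a minimal end-to-end sketch of the new v1.5 path might look like the following. The prompt, seed, step count, output filename, and the `controlnet_source` helper are illustrative assumptions, not part of the commit; only the repo IDs, the `diffusion_sd15` subfolder, and the pipeline calls come from the diff above.

```python
def controlnet_source(base: str) -> tuple[str, dict]:
    """Map a base-model name to the ControlNet repo and its from_pretrained kwargs."""
    if base == "sd15":
        # The v1.5 weights live in the diffusion_sd15 subfolder of the repo.
        return "CrucibleAI/ControlNetMediaPipeFace", {"subfolder": "diffusion_sd15"}
    if base == "sd21-base":
        # The 2.1-base weights sit at the repo root; an fp16 variant is provided
        # (pass torch_dtype=torch.float16 alongside it in practice).
        return "CrucibleAI/ControlNetMediaPipeFace", {"variant": "fp16"}
    raise ValueError(f"unknown base model: {base}")


def main() -> None:
    import torch
    from diffusers import (
        ControlNetModel,
        StableDiffusionControlNetPipeline,
        UniPCMultistepScheduler,
    )
    from diffusers.utils import load_image

    # Conditioning image from the repo's sample annotations.
    image = load_image(
        "https://huggingface.co/CrucibleAI/ControlNetMediaPipeFace/resolve/main/samples_laion_face_dataset/family_annotation.png"
    )

    repo, kwargs = controlnet_source("sd15")
    controlnet = ControlNetModel.from_pretrained(repo, **kwargs)
    pipe = StableDiffusionControlNetPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", controlnet=controlnet, safety_checker=None
    )
    pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config)
    pipe.enable_model_cpu_offload()  # optional; trades speed for lower VRAM use

    generator = torch.Generator(device="cpu").manual_seed(0)  # assumed seed
    result = pipe(
        "a happy family at a dentist advertisement",  # assumed prompt
        image=image,
        num_inference_steps=30,
        generator=generator,
    ).images[0]
    result.save("face_controlnet_sd15.png")  # assumed output path
```

Calling `main()` downloads both models, so nothing heavy runs at import time; swap `"sd15"` for `"sd21-base"` (and the base-model ID) to take the 2.1 path instead.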