text
stringlengths 0
5.54k
|
---|
... guidance_scale=7, |
... editing_prompt=[ |
... "smiling, smile", # Concepts to apply |
... "glasses, wearing glasses", |
... "curls, wavy hair, curly hair", |
... "beard, full beard, mustache", |
... ], |
... reverse_editing_direction=[ |
... False, |
... False, |
... False, |
... False, |
... ], # Direction of guidance i.e. increase all concepts |
... edit_warmup_steps=[10, 10, 10, 10], # Warmup period for each concept |
... edit_guidance_scale=[4, 5, 5, 5.4], # Guidance scale for each concept |
... edit_threshold=[ |
... 0.99, |
... 0.975, |
... 0.925, |
... 0.96, |
... ], # Threshold for each concept. Threshold equals the percentile of the latent space that will be discarded. I.e. threshold=0.99 uses 1% of the latent dimensions |
... edit_momentum_scale=0.3, # Momentum scale that will be added to the latent guidance |
... edit_mom_beta=0.6, # Momentum beta |
... edit_weights=[1, 1, 1, 1, 1], # Weights of the individual concepts against each other |
... ) |
>>> image = out.images[0] StableDiffusionSafePipelineOutput class diffusers.pipelines.semantic_stable_diffusion.pipeline_output.SemanticStableDiffusionPipelineOutput < source > ( images: Union nsfw_content_detected: Optional ) Parameters images (List[PIL.Image.Image] or np.ndarray) — |
List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). nsfw_content_detected (List[bool]) — |
List indicating whether the corresponding generated image contains “not-safe-for-work” (nsfw) content or |
None if safety checking could not be performed. Output class for Stable Diffusion pipelines. |
ControlNet ControlNet is a type of model for controlling image diffusion models by conditioning the model with an additional input image. There are many types of conditioning inputs (canny edge, user sketching, human pose, depth, and more) you can use to control a diffusion model. This is hugely useful because it affords you greater control over image generation, making it easier to generate specific images without experimenting with different text prompts or denoising values as much. Check out Section 3.5 of the ControlNet paper v1 for a list of ControlNet implementations on various conditioning inputs. You can find the official Stable Diffusion ControlNet conditioned models on lllyasviel’s Hub profile, and more community-trained ones on the Hub. For Stable Diffusion XL (SDXL) ControlNet models, you can find them on the 🤗 Diffusers Hub organization, or you can browse community-trained ones on the Hub. A ControlNet model has two sets of weights (or blocks) connected by a zero-convolution layer: a locked copy keeps everything a large pretrained diffusion model has learned a trainable copy is trained on the additional conditioning input Since the locked copy preserves the pretrained model, training and implementing a ControlNet on a new conditioning input is as fast as finetuning any other model because you aren’t training the model from scratch. This guide will show you how to use ControlNet for text-to-image, image-to-image, inpainting, and more! There are many types of ControlNet conditioning inputs to choose from, but in this guide we’ll only focus on several of them. Feel free to experiment with other conditioning inputs! Before you begin, make sure you have the following libraries installed: Copied # uncomment to install the necessary libraries in Colab |
#!pip install -q diffusers transformers accelerate opencv-python Text-to-image For text-to-image, you normally pass a text prompt to the model. But with ControlNet, you can specify an additional conditioning input. Let’s condition the model with a canny image, a white outline of an image on a black background. This way, the ControlNet can use the canny image as a control to guide the model to generate an image with the same outline. Load an image and use the opencv-python library to extract the canny image: Copied from diffusers.utils import load_image, make_image_grid |
from PIL import Image |
import cv2 |
import numpy as np |
original_image = load_image( |
"https://hf.co/datasets/huggingface/documentation-images/resolve/main/diffusers/input_image_vermeer.png" |
) |
image = np.array(original_image) |
low_threshold = 100 |
high_threshold = 200 |
image = cv2.Canny(image, low_threshold, high_threshold) |
image = image[:, :, None] |
image = np.concatenate([image, image, image], axis=2) |
canny_image = Image.fromarray(image) original image canny image Next, load a ControlNet model conditioned on canny edge detection and pass it to the StableDiffusionControlNetPipeline. Use the faster UniPCMultistepScheduler and enable model offloading to speed up inference and reduce memory usage. Copied from diffusers import StableDiffusionControlNetPipeline, ControlNetModel, UniPCMultistepScheduler |
import torch |
controlnet = ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16, use_safetensors=True) |
pipe = StableDiffusionControlNetPipeline.from_pretrained( |
"runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16, use_safetensors=True |
) |
pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config) |
pipe.enable_model_cpu_offload() Now pass your prompt and canny image to the pipeline: Copied output = pipe( |
"the mona lisa", image=canny_image |
).images[0] |
make_image_grid([original_image, canny_image, output], rows=1, cols=3) Image-to-image For image-to-image, you’d typically pass an initial image and a prompt to the pipeline to generate a new image. With ControlNet, you can pass an additional conditioning input to guide the model. Let’s condition the model with a depth map, an image which contains spatial information. This way, the ControlNet can use the depth map as a control to guide the model to generate an image that preserves spatial information. You’ll use the StableDiffusionControlNetImg2ImgPipeline for this task, which is different from the StableDiffusionControlNetPipeline because it allows you to pass an initial image as the starting point for the image generation process. Load an image and use the depth-estimation Pipeline from 🤗 Transformers to extract the depth map of an image: Copied import torch |
import numpy as np |
from transformers import pipeline |
from diffusers.utils import load_image, make_image_grid |
image = load_image( |
"https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/controlnet-img2img.jpg" |
) |
def get_depth_map(image, depth_estimator): |
image = depth_estimator(image)["depth"] |
image = np.array(image) |
image = image[:, :, None] |
image = np.concatenate([image, image, image], axis=2) |
detected_map = torch.from_numpy(image).float() / 255.0 |
depth_map = detected_map.permute(2, 0, 1) |
return depth_map |
depth_estimator = pipeline("depth-estimation") |
depth_map = get_depth_map(image, depth_estimator).unsqueeze(0).half().to("cuda") Next, load a ControlNet model conditioned on depth maps and pass it to the StableDiffusionControlNetImg2ImgPipeline. Use the faster UniPCMultistepScheduler and enable model offloading to speed up inference and reduce memory usage. Copied from diffusers import StableDiffusionControlNetImg2ImgPipeline, ControlNetModel, UniPCMultistepScheduler |
import torch |
controlnet = ControlNetModel.from_pretrained("lllyasviel/control_v11f1p_sd15_depth", torch_dtype=torch.float16, use_safetensors=True) |
pipe = StableDiffusionControlNetImg2ImgPipeline.from_pretrained( |
"runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16, use_safetensors=True |
) |
pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config) |
pipe.enable_model_cpu_offload() Now pass your prompt, initial image, and depth map to the pipeline: Copied output = pipe( |
"lego batman and robin", image=image, control_image=depth_map, |
).images[0] |
make_image_grid([image, output], rows=1, cols=2) original image generated image Inpainting For inpainting, you need an initial image, a mask image, and a prompt describing what to replace the mask with. ControlNet models allow you to add another control image to condition a model with. Let’s condition the model with an inpainting mask. This way, the ControlNet can use the inpainting mask as a control to guide the model to generate an image within the mask area. Load an initial image and a mask image: Copied from diffusers.utils import load_image, make_image_grid |
init_image = load_image( |
"https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/controlnet-inpaint.jpg" |
) |
init_image = init_image.resize((512, 512)) |
mask_image = load_image( |
"https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/controlnet-inpaint-mask.jpg" |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.