Paint by Example: Exemplar-based Image Editing with Diffusion Models is by Binxin Yang, Shuyang Gu, Bo Zhang, Ting Zhang, Xuejin Chen, Xiaoyan Sun, Dong Chen, and Fang Wen.
The abstract of the paper is the following:
Language-guided image editing has achieved great success recently. In this paper, for the first time, we investigate exemplar-guided image editing for more precise control. We achieve this goal by leveraging self-supervised training to disentangle and re-organize the source image and the exemplar. However, the naive approach will cause obvious fusing artifacts. We carefully analyze it and propose an information bottleneck and strong augmentations to avoid the trivial solution of directly copying and pasting the exemplar image. Meanwhile, to ensure the controllability of the editing process, we design an arbitrary shape mask for the exemplar image and leverage the classifier-free guidance to increase the similarity to the exemplar image. The whole framework involves a single forward of the diffusion model without any iterative optimization. We demonstrate that our method achieves an impressive performance and enables controllable editing on in-the-wild images with high fidelity.
The original codebase can be found at https://github.com/Fantasy-Studio/Paint-by-Example.
| Pipeline | Tasks | Demo |
|---|---|---|
| pipeline_paint_by_example.py | Image-Guided Image Painting | - |
- PaintByExample is supported by the official Fantasy-Studio/Paint-by-Example checkpoint. The checkpoint has been warm-started from CompVis/stable-diffusion-v1-4 and trained to inpaint partly masked images conditioned on example / reference images.
- To quickly try out PaintByExample, please have a look at this demo.
- You can run the following code snippet as an example:
```python
# !pip install diffusers transformers
import PIL
import requests
import torch
from io import BytesIO

from diffusers import DiffusionPipeline


def download_image(url):
    response = requests.get(url)
    return PIL.Image.open(BytesIO(response.content)).convert("RGB")


img_url = "https://raw.githubusercontent.com/Fantasy-Studio/Paint-by-Example/main/examples/image/example_1.png"
mask_url = "https://raw.githubusercontent.com/Fantasy-Studio/Paint-by-Example/main/examples/mask/example_1.png"
example_url = "https://raw.githubusercontent.com/Fantasy-Studio/Paint-by-Example/main/examples/reference/example_1.jpg"

init_image = download_image(img_url).resize((512, 512))
mask_image = download_image(mask_url).resize((512, 512))
example_image = download_image(example_url).resize((512, 512))

pipe = DiffusionPipeline.from_pretrained(
    "Fantasy-Studio/Paint-by-Example",
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")

# `.images` is a list; take the first generated image
image = pipe(image=init_image, mask_image=mask_image, example_image=example_image).images[0]
image
```
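Generation can be made reproducible by passing a torch.Generator, and guidance_scale (documented below) controls how closely the result follows the exemplar. A minimal sketch reusing pipe and the images from the snippet above; the seed and output path are arbitrary:

```python
generator = torch.Generator(device="cuda").manual_seed(42)  # arbitrary seed

image = pipe(
    image=init_image,
    mask_image=mask_image,
    example_image=example_image,
    guidance_scale=5.0,      # the default; higher values follow the exemplar more closely
    num_inference_steps=50,  # the default
    generator=generator,
).images[0]
image.save("paint_by_example_output.png")  # hypothetical output path
```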
class diffusers.PaintByExamplePipeline
( vae: AutoencoderKL image_encoder: PaintByExampleImageEncoder unet: UNet2DConditionModel scheduler: typing.Union[diffusers.schedulers.scheduling_ddim.DDIMScheduler, diffusers.schedulers.scheduling_pndm.PNDMScheduler, diffusers.schedulers.scheduling_lms_discrete.LMSDiscreteScheduler] safety_checker: StableDiffusionSafetyChecker feature_extractor: CLIPImageProcessor requires_safety_checker: bool = False )
- vae (AutoencoderKL) — Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations.
- image_encoder (PaintByExampleImageEncoder) — Encodes the example input image. The unet is conditioned on the example image instead of a text prompt.
- unet (UNet2DConditionModel) — Conditional U-Net architecture to denoise the encoded image latents.
- scheduler (SchedulerMixin) — A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler.
- safety_checker (StableDiffusionSafetyChecker) — Classification module that estimates whether generated images could be considered offensive or harmful. Please refer to the model card for details.
- feature_extractor (CLIPImageProcessor) — Model that extracts features from generated images to be used as inputs for the safety_checker.
Pipeline for image-guided image inpainting using Stable Diffusion. This is an experimental feature.
This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.)
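For instance, the generic save and load methods inherited from DiffusionPipeline work as usual. A minimal sketch; the local directory name is arbitrary:

```python
from diffusers import PaintByExamplePipeline

pipe = PaintByExamplePipeline.from_pretrained("Fantasy-Studio/Paint-by-Example")
pipe.save_pretrained("./paint-by-example")  # hypothetical local directory

# Later, reload the pipeline from the local copy
pipe = PaintByExamplePipeline.from_pretrained("./paint-by-example")
```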
__call__
example_image: typing.Union[torch.FloatTensor, PIL.Image.Image]
image: typing.Union[torch.FloatTensor, PIL.Image.Image]
mask_image: typing.Union[torch.FloatTensor, PIL.Image.Image]
height: typing.Optional[int] = None
width: typing.Optional[int] = None
num_inference_steps: int = 50
guidance_scale: float = 5.0
negative_prompt: typing.Union[str, typing.List[str], NoneType] = None
num_images_per_prompt: typing.Optional[int] = 1
eta: float = 0.0
generator: typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None
latents: typing.Optional[torch.FloatTensor] = None
output_type: typing.Optional[str] = 'pil'
return_dict: bool = True
callback: typing.Union[typing.Callable[[int, int, torch.FloatTensor], NoneType], NoneType] = None
callback_steps: int = 1
- example_image (torch.FloatTensor or PIL.Image.Image or List[PIL.Image.Image]) — The exemplar image to guide the image generation.
- image (torch.FloatTensor or PIL.Image.Image) — Image, or tensor representing an image batch, which will be inpainted, i.e. parts of the image will be masked out with mask_image and repainted according to example_image.
- mask_image (torch.FloatTensor or PIL.Image.Image) — Image, or tensor representing an image batch, to mask image. White pixels in the mask will be repainted, while black pixels will be preserved. If mask_image is a PIL image, it will be converted to a single channel (luminance) before use. If it's a tensor, it should contain one color channel (L) instead of 3, so the expected shape would be (B, H, W, 1).
- height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — The height in pixels of the generated image.
- width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — The width in pixels of the generated image.
- num_inference_steps (int, optional, defaults to 50) — The number of denoising steps. More denoising steps usually lead to a higher quality image at the expense of slower inference.
- guidance_scale (float, optional, defaults to 5.0) — Guidance scale as defined in Classifier-Free Diffusion Guidance. guidance_scale is defined as w of equation 2. of the Imagen Paper. Guidance scale is enabled by setting guidance_scale > 1. A higher guidance scale encourages generating images that are closely linked to the example image, usually at the expense of lower image quality.
- negative_prompt (str or List[str], optional) — The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored if guidance_scale is less than 1).
- num_images_per_prompt (int, optional, defaults to 1) — The number of images to generate per prompt.
- eta (float, optional, defaults to 0.0) — Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to schedulers.DDIMScheduler; will be ignored for others.
- generator (torch.Generator or List[torch.Generator], optional) — One or a list of torch generator(s) to make generation deterministic.
- latents (torch.FloatTensor, optional) — Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image generation. Can be used to tweak the same generation with different prompts. If not provided, a latents tensor will be generated by sampling using the supplied random generator.
- output_type (str, optional, defaults to "pil") — The output format of the generated image. Choose between PIL: PIL.Image.Image or np.array.
- return_dict (bool, optional, defaults to True) — Whether or not to return a StableDiffusionPipelineOutput instead of a plain tuple.
- callback (Callable, optional) — A function that will be called every callback_steps steps during inference. The function will be called with the following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). See the sketch after this parameter list for an example.
- callback_steps (int, optional, defaults to 1) — The frequency at which the callback function will be called. If not specified, the callback will be called at every step.
Returns: StableDiffusionPipelineOutput or tuple — StableDiffusionPipelineOutput if return_dict is True, otherwise a tuple. When returning a tuple, the first element is a list with the generated images, and the second element is a list of bools denoting whether the corresponding generated image likely represents "not-safe-for-work" (nsfw) content, according to the safety_checker.
Function invoked when calling the pipeline for generation.
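To illustrate how the callback and callback_steps arguments interact, here is a minimal sketch reusing pipe and the images from the first snippet; the logging function is purely illustrative:

```python
import torch


def log_progress(step: int, timestep: int, latents: torch.FloatTensor):
    # Invoked every `callback_steps` denoising steps with the current latents
    print(f"step={step} timestep={timestep} latents shape={tuple(latents.shape)}")


image = pipe(
    image=init_image,
    mask_image=mask_image,
    example_image=example_image,
    callback=log_progress,
    callback_steps=10,  # call log_progress every 10 steps instead of every step
).images[0]
```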
enable_sequential_cpu_offload
( gpu_id = 0 )
Offloads all models to CPU using accelerate, significantly reducing memory usage. When called, the unet, image_encoder, vae, and safety checker have their state dicts saved to CPU and are then moved to torch.device('meta'), loaded to the GPU only when their specific submodule has its forward method called.
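A minimal sketch of enabling offloading before generation, reusing the images from the first snippet; note that the pipeline should not be moved to CUDA manually, since accelerate handles device placement itself:

```python
import torch
from diffusers import PaintByExamplePipeline

pipe = PaintByExamplePipeline.from_pretrained(
    "Fantasy-Studio/Paint-by-Example",
    torch_dtype=torch.float16,
)
# Do not call pipe.to("cuda") here; offloading manages device placement.
pipe.enable_sequential_cpu_offload(gpu_id=0)

# Submodules are moved to the GPU only for the duration of their forward pass.
image = pipe(image=init_image, mask_image=mask_image, example_image=example_image).images[0]
```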