# RePaint

## Overview

[RePaint: Inpainting using Denoising Diffusion Probabilistic Models](https://arxiv.org/abs/2201.09865) by Andreas Lugmayr, Martin Danelljan, Andres Romero, Fisher Yu, Radu Timofte, and Luc Van Gool.

The abstract of the paper is the following:

Free-form inpainting is the task of adding new content to an image in the regions specified by an arbitrary binary mask. Most existing approaches train for a certain distribution of masks, which limits their generalization capabilities to unseen mask types. Furthermore, training with pixel-wise and perceptual losses often leads to simple textural extensions towards the missing areas instead of semantically meaningful generation. In this work, we propose RePaint: A Denoising Diffusion Probabilistic Model (DDPM) based inpainting approach that is applicable to even extreme masks. We employ a pretrained unconditional DDPM as the generative prior. To condition the generation process, we only alter the reverse diffusion iterations by sampling the unmasked regions using the given image information. Since this technique does not modify or condition the original DDPM network itself, the model produces high-quality and diverse output images for any inpainting form. We validate our method for both faces and general-purpose image inpainting using standard and extreme masks. RePaint outperforms state-of-the-art Autoregressive, and GAN approaches for at least five out of six mask distributions.

The original codebase can be found [here](https://github.com/andreas128/RePaint).

## Available Pipelines:

| Pipeline | Tasks | Colab |
|---|---|:---:|
| [pipeline_repaint.py](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/repaint/pipeline_repaint.py) | *Image Inpainting* | - |

## Usage example

```python
from io import BytesIO

import requests
import torch
from PIL import Image

from diffusers import RePaintPipeline, RePaintScheduler


def download_image(url):
    response = requests.get(url)
    return Image.open(BytesIO(response.content)).convert("RGB")


img_url = "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/repaint/celeba_hq_256.png"
mask_url = "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/repaint/mask_256.png"

# Load the original image and the mask as PIL images
original_image = download_image(img_url).resize((256, 256))
mask_image = download_image(mask_url).resize((256, 256))

# Load the RePaint scheduler and pipeline based on a pretrained DDPM model
scheduler = RePaintScheduler.from_pretrained("google/ddpm-ema-celebahq-256")
pipe = RePaintPipeline.from_pretrained("google/ddpm-ema-celebahq-256", scheduler=scheduler)
pipe = pipe.to("cuda")

generator = torch.Generator(device="cuda").manual_seed(0)
output = pipe(
    original_image=original_image,
    mask_image=mask_image,
    num_inference_steps=250,
    eta=0.0,
    jump_length=10,  # length of each resampling jump (j in the paper)
    jump_n_sample=10,  # number of resampling rounds per jump (r in the paper)
    generator=generator,
)
inpainted_image = output.images[0]
```

## RePaintPipeline
[[autodoc]] RePaintPipeline
	- all
	- __call__
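
The key idea from the abstract, altering only the reverse diffusion iterations by sampling the known (unmasked) region from the given image, can be sketched in a few lines. The snippet below is a minimal, simplified illustration and not the pipeline's actual implementation: `repaint_step`, `reverse_step`, the argument names, and the mask convention (1 = known pixel, 0 = pixel to inpaint) are assumptions made for this sketch, and the resampling schedule controlled by `jump_length`/`jump_n_sample` is omitted.

```python
import torch


def repaint_step(x_t, x_orig, mask, t, alphas_cumprod, reverse_step):
    """One simplified RePaint conditioning step (illustrative sketch only).

    mask == 1 marks known pixels taken from the original image,
    mask == 0 marks the region to be inpainted. `reverse_step` is a
    placeholder for an ordinary reverse-diffusion step of the
    unconditional DDPM; t is assumed to be >= 1.
    """
    # Known region: diffuse the original image forward to noise level t - 1.
    alpha_bar = alphas_cumprod[t - 1]
    noise = torch.randn_like(x_orig)
    x_known = torch.sqrt(alpha_bar) * x_orig + torch.sqrt(1 - alpha_bar) * noise

    # Unknown region: one ordinary reverse (denoising) step from x_t.
    x_unknown = reverse_step(x_t, t)

    # Combine: keep the known pixels, fill the masked-out pixels from the model.
    return mask * x_known + (1 - mask) * x_unknown
```

In the actual method, this per-step composition is combined with a resampling schedule that repeatedly jumps back in time, re-noises, and re-denoises to better harmonize the known and generated regions; that behaviour is exposed through the `jump_length` and `jump_n_sample` arguments shown in the usage example above.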