arxiv:2108.01073

SDEdit: Guided Image Synthesis and Editing with Stochastic Differential Equations

Published on Aug 2, 2021
Authors: Chenlin Meng, Yutong He, Yang Song, Jiaming Song, Jiajun Wu, Jun-Yan Zhu, Stefano Ermon

Abstract

Guided image synthesis enables everyday users to create and edit photo-realistic images with minimal effort. The key challenge is balancing faithfulness to the user input (e.g., hand-drawn colored strokes) and realism of the synthesized image. Existing GAN-based methods attempt to achieve such a balance using either conditional GANs or GAN inversions, which are challenging and often require additional training data or loss functions for individual applications. To address these issues, we introduce a new image synthesis and editing method, Stochastic Differential Editing (SDEdit), based on a diffusion model generative prior, which synthesizes realistic images by iteratively denoising through a stochastic differential equation (SDE). Given an input image with a user guide of any type, SDEdit first adds noise to the input, then denoises the resulting image through the SDE prior to increase its realism. SDEdit does not require task-specific training or inversions and can naturally achieve the balance between realism and faithfulness. According to a human perception study, SDEdit significantly outperforms state-of-the-art GAN-based methods by up to 98.09% on realism and 91.72% on overall satisfaction scores across multiple tasks, including stroke-based image synthesis and editing as well as image compositing.
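
The noise-then-denoise idea can be sketched in a few lines. Below is a minimal, illustrative implementation of the core loop, assuming a hypothetical pretrained score network `score_model(x, sigma)` and an ascending noise schedule `sigmas`; the function name, the `t0` value, and the VE-SDE Euler-Maruyama update are a sketch of the technique, not the paper's exact code.

```python
# Minimal SDEdit-style sketch: perturb the user guide with noise, then run the
# reverse SDE from an intermediate time t0 back to 0. Larger t0 favors realism;
# smaller t0 favors faithfulness to the guide.
import torch

def sdedit(guide: torch.Tensor, score_model, sigmas: torch.Tensor, t0: float = 0.5) -> torch.Tensor:
    """guide: user-provided image or stroke painting, shape (B, C, H, W).
    score_model: hypothetical network estimating grad_x log p_sigma(x).
    sigmas: ascending noise levels, sigmas[0] ~ 0 up to sigma_max."""
    start = int(len(sigmas) * t0)
    # Step 1: add noise at level sigma_{t0} to the guide.
    x = guide + sigmas[start] * torch.randn_like(guide)

    # Step 2: denoise by integrating the reverse-time SDE (Euler-Maruyama).
    for i in range(start, 0, -1):
        sigma, sigma_prev = sigmas[i], sigmas[i - 1]
        step = sigma**2 - sigma_prev**2
        score = score_model(x, sigma)                   # score estimate at this noise level
        x = x + step * score                            # drift toward high-density regions
        x = x + torch.sqrt(step) * torch.randn_like(x)  # stochastic diffusion term
    return x
```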
