arxiv:2312.07409

DiffMorpher: Unleashing the Capability of Diffusion Models for Image Morphing

Published on Dec 12, 2023 · Featured in Daily Papers on Dec 13, 2023
Abstract

Diffusion models have achieved remarkable image generation quality surpassing previous generative models. However, a notable limitation of diffusion models, in comparison to GANs, is their difficulty in smoothly interpolating between two image samples, due to their highly unstructured latent space. Such a smooth interpolation is intriguing as it naturally serves as a solution for the image morphing task with many applications. In this work, we present DiffMorpher, the first approach enabling smooth and natural image interpolation using diffusion models. Our key idea is to capture the semantics of the two images by fitting two LoRAs to them respectively, and interpolate between both the LoRA parameters and the latent noises to ensure a smooth semantic transition, where correspondence automatically emerges without the need for annotation. In addition, we propose an attention interpolation and injection technique and a new sampling schedule to further enhance the smoothness between consecutive images. Extensive experiments demonstrate that DiffMorpher achieves starkly better image morphing effects than previous methods across a variety of object categories, bridging a critical functional gap that distinguished diffusion models from GANs.
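The abstract describes two interpolations performed in tandem: a linear interpolation of the two fitted LoRA parameter sets and a spherical interpolation of the two latent noises. Below is a minimal sketch of those two operations, assuming PyTorch; the helper names in the commented usage (`fit_lora`, `invert_to_noise`, `sample_with_lora`) are hypothetical stand-ins, not the DiffMorpher API.

```python
import torch


def slerp(z0: torch.Tensor, z1: torch.Tensor, t: float, eps: float = 1e-8) -> torch.Tensor:
    """Spherical linear interpolation between two latent noise tensors."""
    z0_flat, z1_flat = z0.flatten(), z1.flatten()
    cos_theta = torch.dot(z0_flat, z1_flat) / (z0_flat.norm() * z1_flat.norm() + eps)
    theta = torch.acos(cos_theta.clamp(-1.0, 1.0))
    if theta.abs() < eps:  # nearly parallel latents: fall back to plain lerp
        return (1 - t) * z0 + t * z1
    return (torch.sin((1 - t) * theta) * z0 + torch.sin(t * theta) * z1) / torch.sin(theta)


def lerp_lora(lora_a: dict, lora_b: dict, t: float) -> dict:
    """Linearly interpolate two LoRA state dicts fitted to the two endpoint images."""
    return {k: (1 - t) * lora_a[k] + t * lora_b[k] for k in lora_a}


# Hypothetical usage, assuming a LoRA has been fitted to each image and each
# image has been inverted to a latent noise:
# frames = []
# for t in torch.linspace(0, 1, 16):
#     lora_t = lerp_lora(lora_img0, lora_img1, float(t))   # semantic interpolation
#     z_t = slerp(noise_img0, noise_img1, float(t))        # latent interpolation
#     frames.append(sample_with_lora(z_t, lora_t))         # run the diffusion sampler
```

The attention interpolation/injection and the adapted sampling schedule mentioned in the abstract are further smoothing steps on top of this and are not shown here.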

Community

Super cool, how to try?



How to use?

Paper author

Stay tuned! We will release a Gradio demo for you to try in a few days.

Does it support Mac M1?


Models citing this paper (0)

No model linking this paper

Cite arxiv.org/abs/2312.07409 in a model README.md to link it from this page.

Datasets citing this paper (0)

No dataset linking this paper

Cite arxiv.org/abs/2312.07409 in a dataset README.md to link it from this page.

Spaces citing this paper (0)

No Space linking this paper

Cite arxiv.org/abs/2312.07409 in a Space README.md to link it from this page.

Collections including this paper (8)