arXiv:2307.10373

TokenFlow: Consistent Diffusion Features for Consistent Video Editing

Published on Jul 19, 2023 · Featured in Daily Papers on Jul 21, 2023
Authors: Michal Geyer, Omer Bar-Tal, Shai Bagon, Tali Dekel

Abstract

The generative AI revolution has recently expanded to videos. Nevertheless, current state-of-the-art video models are still lagging behind image models in terms of visual quality and user control over the generated content. In this work, we present a framework that harnesses the power of a text-to-image diffusion model for the task of text-driven video editing. Specifically, given a source video and a target text-prompt, our method generates a high-quality video that adheres to the target text, while preserving the spatial layout and motion of the input video. Our method is based on a key observation that consistency in the edited video can be obtained by enforcing consistency in the diffusion feature space. We achieve this by explicitly propagating diffusion features based on inter-frame correspondences, readily available in the model. Thus, our framework does not require any training or fine-tuning, and can work in conjunction with any off-the-shelf text-to-image editing method. We demonstrate state-of-the-art editing results on a variety of real-world videos. Webpage: https://diffusion-tokenflow.github.io/
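To make the propagation idea concrete, here is a minimal sketch of correspondence-based feature propagation, assuming per-frame diffusion features have already been extracted as (tokens, channels) tensors. The function names, shapes, and single-keyframe setup are illustrative assumptions, not the authors' implementation; in the actual method, keyframes are sampled jointly and the propagation is applied inside the denoising network at each sampling step.

```python
import torch
import torch.nn.functional as F


def nn_correspondences(src_feats: torch.Tensor, key_feats: torch.Tensor) -> torch.Tensor:
    """For each token of a source frame, find its nearest neighbor among the
    keyframe tokens, using cosine similarity in diffusion feature space.

    src_feats: (N, D) feature tokens of one video frame
    key_feats: (M, D) feature tokens of the keyframe
    returns:   (N,) index of the best-matching keyframe token per source token
    """
    src = F.normalize(src_feats, dim=-1)
    key = F.normalize(key_feats, dim=-1)
    sim = src @ key.t()        # (N, M) cosine similarity matrix
    return sim.argmax(dim=-1)  # nearest keyframe token per source token


def propagate_edited_features(
    frame_feats,        # hypothetical: list of (N, D) source features, one per frame
    key_source_feats,   # hypothetical: (M, D) keyframe features from the source video
    key_edited_feats,   # hypothetical: (M, D) keyframe features after the text-driven edit
):
    """Replace each frame's features with the edited keyframe features of its
    corresponding tokens, so every frame inherits the same consistent edit."""
    edited = []
    for feats in frame_feats:
        # Correspondences are computed on SOURCE features, which encode the
        # original video's layout and motion ...
        idx = nn_correspondences(feats, key_source_feats)
        # ... while the features pulled along them come from the EDITED keyframe.
        edited.append(key_edited_feats[idx])
    return edited
```

The design point the sketch captures: correspondences are found in the source features, which carry the input video's layout and motion, while the propagated values come from the edited keyframe, so the edit is spread across frames with the source video's temporal coherence.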

Community

Congrats! This is the most awesome demo I've ever seen in video generation... 😍

The paper is really interesting! Just wondering: did they test the 'corresponding features are interchangeable for the diffusion model' idea in a latent diffusion model (LDM) for the experiments in Fig. 2 and Fig. 3? And does the LDM's latent space show similarly cool findings?

This looks so good, let's go!

I think EBSynth is another great work that does something like this?

Models citing this paper: 0

No models link this paper.

Datasets citing this paper: 0

No datasets link this paper.

Spaces citing this paper: 1

Collections including this paper: 4