arxiv:2309.00398

VideoGen: A Reference-Guided Latent Diffusion Approach for High Definition Text-to-Video Generation

Published on Sep 1, 2023 · Featured in Daily Papers on Sep 4, 2023
Abstract

In this paper, we present VideoGen, a text-to-video generation approach that generates high-definition video with high frame fidelity and strong temporal consistency using reference-guided latent diffusion. We leverage an off-the-shelf text-to-image generation model, e.g., Stable Diffusion, to generate a high-quality image from the text prompt, which serves as a reference image to guide video generation. We then introduce an efficient cascaded latent diffusion module, conditioned on both the reference image and the text prompt, to generate latent video representations, followed by a flow-based temporal upsampling step to improve the temporal resolution. Finally, we map the latent video representations into a high-definition video through an enhanced video decoder. During training, we use the first frame of a ground-truth video as the reference image for training the cascaded latent diffusion module. The main characteristics of our approach are: the reference image generated by the text-to-image model improves the visual fidelity; using it as the condition makes the diffusion model focus more on learning the video dynamics; and the video decoder is trained over unlabeled video data, thus benefiting from high-quality, easily available videos. VideoGen sets a new state of the art in text-to-video generation in terms of both qualitative and quantitative evaluation.
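For readers mapping the abstract onto code, here is a minimal sketch of the four-stage inference pipeline it describes, not the authors' implementation (no code is released with the paper). Only the reference-image stage uses a real public API, the diffusers Stable Diffusion pipeline; CascadedLatentDiffusion, FlowTemporalUpsampler, and VideoDecoder are hypothetical stubs marking where each described component would sit, and the checkpoint name and prompt are illustrative.

import torch
from diffusers import StableDiffusionPipeline


class CascadedLatentDiffusion:
    """Hypothetical stub: denoises video latents conditioned on a
    reference image and the text prompt, as described in the abstract."""
    def __call__(self, reference_image, prompt):
        raise NotImplementedError  # no public implementation exists


class FlowTemporalUpsampler:
    """Hypothetical stub: flow-based temporal upsampling of latents
    to increase the frame rate of the generated video."""
    def __call__(self, latents):
        raise NotImplementedError


class VideoDecoder:
    """Hypothetical stub: maps video latents to high-definition RGB
    frames; per the abstract, trained on unlabeled video data."""
    def __call__(self, latents):
        raise NotImplementedError


prompt = "a panda playing guitar on a mountain top"  # illustrative prompt

# Stage 1 (real API): an off-the-shelf text-to-image model produces the
# high-quality reference image that guides the rest of the pipeline.
t2i = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
reference_image = t2i(prompt).images[0]

# Stages 2-4 (hypothetical stubs): latent video generation conditioned on
# the reference image and prompt, temporal upsampling, then decoding.
latents = CascadedLatentDiffusion()(reference_image, prompt)
latents = FlowTemporalUpsampler()(latents)
video_frames = VideoDecoder()(latents)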

Community

Any demo video? Or a project page URL?

The paper PDF has two embedded videos that you can view with Adobe Reader. I'd think that if you were presenting SoTA research in text-to-video, you'd publish more results, right?

https://videogen.github.io/VideoGen/

Tons of thanks~


Models citing this paper: 0
Datasets citing this paper: 0
Spaces citing this paper: 0
Collections including this paper: 4