Abstract
Large text-to-image diffusion models have exhibited impressive proficiency in generating high-quality images. However, when applying these models to the video domain, ensuring temporal consistency across video frames remains a formidable challenge. This paper proposes a novel zero-shot text-guided video-to-video translation framework to adapt image models to videos. The framework includes two parts: key frame translation and full video translation. The first part uses an adapted diffusion model to generate key frames, with hierarchical cross-frame constraints applied to enforce coherence in shapes, textures, and colors. The second part propagates the key frames to the other frames with temporal-aware patch matching and frame blending. Our framework achieves global style and local texture temporal consistency at a low cost (without re-training or optimization). The adaptation is compatible with existing image diffusion techniques, allowing our framework to take advantage of them, such as customizing a specific subject with LoRA and introducing extra spatial guidance with ControlNet. Extensive experimental results demonstrate the effectiveness of our proposed framework over existing methods in rendering high-quality and temporally coherent videos.
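The abstract describes a two-stage pipeline: sparse key frames are translated by an adapted diffusion model under hierarchical cross-frame constraints, then propagated to the remaining frames. The sketch below shows only that control flow under stated assumptions; `translate_key_frame` (the constrained diffusion step) and `propagate` (the temporal-aware patch matching and blending step) are hypothetical placeholders, not the authors' released code.

```python
# Minimal structural sketch of the two-stage pipeline described in the abstract.
# `translate_key_frame` and `propagate` are hypothetical callables supplied by
# the user; they stand in for the adapted diffusion model and the
# patch-matching / blending propagation, respectively.
from typing import Callable, Dict, List, Optional, Sequence, TypeVar

Frame = TypeVar("Frame")  # any image representation, e.g. a numpy array


def rerender_video(
    frames: Sequence[Frame],
    translate_key_frame: Callable[[Frame, Optional[Frame], Optional[Frame]], Frame],
    propagate: Callable[[Frame, Frame, Frame], Frame],
    key_interval: int = 10,
) -> List[Frame]:
    # Include the last frame as a key frame so every in-between frame
    # has translated key frames on both sides.
    key_indices = sorted(set(range(0, len(frames), key_interval)) | {len(frames) - 1})

    # Stage 1: key frame translation. The first translated key frame acts as
    # an anchor for global style; the previous key frame constrains local
    # shapes, textures, and colors.
    translated: Dict[int, Frame] = {}
    anchor: Optional[Frame] = None
    prev: Optional[Frame] = None
    for i in key_indices:
        out = translate_key_frame(frames[i], anchor, prev)  # anchor/prev are None for the first key frame
        translated[i] = out
        anchor = anchor if anchor is not None else out
        prev = out

    # Stage 2: propagate translated key frames to the remaining frames by
    # blending the two neighbouring key frames onto each input frame.
    output: List[Frame] = []
    for i, frame in enumerate(frames):
        if i in translated:
            output.append(translated[i])
            continue
        left = max(k for k in key_indices if k < i)
        right = min(k for k in key_indices if k > i)
        output.append(propagate(frame, translated[left], translated[right]))
    return output
```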
Community
Is an implementation of this available yet? I'm really looking forward to trying this!
Would love to try this - great work!
How can I try this?
looks great!
Nice
any code?
@JackieLoong, the code will be released after the paper is published.
From the project website: https://anonymous-31415926.github.io/
Since there are no discussions yet, I hope it's OK to post these here, or maybe even add them to the readme:
52.) Windows - Free: Turn Videos Into Animation With Just 1 Click - ReRender A Video Tutorial - Installer For Windows
53.) RunPod - Cloud - Paid: Turn Videos Into Animation / 3D Just 1 Click - ReRender A Video Tutorial - Installer For RunPod
How can we use this project to make animations?