Fairy: Fast Parallelized Instruction-Guided Video-to-Video Synthesis
Abstract
In this paper, we introduce Fairy, a minimalist yet robust adaptation of image-editing diffusion models, enhancing them for video editing applications. Our approach centers on the concept of anchor-based cross-frame attention, a mechanism that implicitly propagates diffusion features across frames, ensuring superior temporal coherence and high-fidelity synthesis. Fairy not only addresses limitations of previous models, including memory and processing speed, but also improves temporal consistency through a unique data augmentation strategy that renders the model equivariant to affine transformations in both source and target images. Remarkably efficient, Fairy generates 120-frame 512×384 videos (4 seconds at 30 FPS) in just 14 seconds, outpacing prior works by at least 44×. A comprehensive user study involving 1000 generated samples confirms that our approach delivers superior quality, decisively outperforming established methods.
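The paper's implementation is not shown here, but a minimal sketch may help make anchor-based cross-frame attention concrete: queries are computed for every frame, while keys and values come only from a small set of anchor frames, so the anchors' diffusion features are implicitly propagated to all frames. The following PyTorch sketch is an assumption about how such a layer could look; `AnchorCrossFrameAttention`, `anchor_idx`, and all shapes are illustrative and not taken from the paper.

```python
# Hypothetical sketch of anchor-based cross-frame attention (not the
# authors' released code). Queries come from every frame; keys/values
# come only from anchor frames, propagating anchor features frame-wide.
import torch
import torch.nn.functional as F
from torch import nn


class AnchorCrossFrameAttention(nn.Module):
    def __init__(self, dim: int, num_heads: int = 8):
        super().__init__()
        self.num_heads = num_heads
        self.to_q = nn.Linear(dim, dim, bias=False)
        self.to_k = nn.Linear(dim, dim, bias=False)
        self.to_v = nn.Linear(dim, dim, bias=False)
        self.proj = nn.Linear(dim, dim)

    def forward(self, frames: torch.Tensor, anchor_idx: list) -> torch.Tensor:
        # frames: (num_frames, tokens, dim) -- per-frame diffusion features.
        f, t, d = frames.shape
        h = self.num_heads

        # Keys/values are drawn only from the anchor frames, concatenated
        # along the token axis and shared by every query frame.
        anchors = frames[anchor_idx]                         # (a, t, d)
        q = self.to_q(frames)                                # (f, t, d)
        k = self.to_k(anchors).reshape(1, -1, d).expand(f, -1, d)
        v = self.to_v(anchors).reshape(1, -1, d).expand(f, -1, d)

        # Split heads: (f, n, d) -> (f, h, n, d // h).
        def split(x):
            return x.reshape(f, x.shape[1], h, d // h).transpose(1, 2)

        # Standard scaled dot-product attention over the anchor tokens.
        out = F.scaled_dot_product_attention(split(q), split(k), split(v))
        out = out.transpose(1, 2).reshape(f, t, d)
        return self.proj(out)
```

Because every frame attends to the same anchor set, the per-frame computations are independent and can be parallelized across devices, which is consistent with the speedups the abstract reports; the specific anchor-selection strategy here is left unspecified.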
Community
This is an automated message from the Librarian Bot. I found the following papers similar to this paper, recommended by the Semantic Scholar API:
- FlowVid: Taming Imperfect Optical Flows for Consistent Video-to-Video Synthesis (2023)
- MaskINT: Video Editing via Interpolative Non-autoregressive Masked Transformers (2023)
- VidToMe: Video Token Merging for Zero-Shot Video Editing (2023)
- Motion-Conditioned Image Animation for Video Editing (2023)
- RAVE: Randomized Noise Shuffling for Fast and Consistent Video Editing with Diffusion Models (2023)