VividFace: A Diffusion-Based Hybrid Framework for High-Fidelity Video Face Swapping
Abstract
Video face swapping is becoming increasingly popular across various applications, yet existing methods primarily focus on static images and struggle with video because of the demands of temporal consistency and complex real-world scenarios. In this paper, we present the first diffusion-based framework specifically designed for video face swapping. Our approach introduces a novel image-video hybrid training framework that leverages both abundant static image data and temporal video sequences, addressing the inherent limitations of video-only training. The framework incorporates a specially designed diffusion model coupled with our VidFaceVAE, which effectively processes both types of data to better maintain the temporal coherence of the generated videos. To further disentangle identity and pose features, we construct the Attribute-Identity Disentanglement Triplet (AIDT) Dataset, in which each triplet contains three face images: two sharing the same pose and two sharing the same identity. Enhanced with comprehensive occlusion augmentation, this dataset also improves robustness against occlusions. Additionally, we integrate 3D reconstruction techniques as input conditioning for our network to handle large pose variations. Extensive experiments demonstrate that our framework achieves superior identity preservation, temporal consistency, and visual quality compared with existing methods, while requiring fewer inference steps. Our approach effectively mitigates the key challenges of video face swapping, including temporal flickering, identity drift, and sensitivity to occlusions and pose variations.
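To picture the hybrid image-video training idea, a static image can be treated as a one-frame clip so that image batches and video clips flow through the same spatio-temporal pipeline and shared latent space. The sketch below is a minimal PyTorch-style outline of that pattern; `vae.encode` and `model.diffusion_loss` are hypothetical stand-ins for the paper's VidFaceVAE and denoising objective, not its actual interfaces.

```python
import torch

def to_clip(x: torch.Tensor) -> torch.Tensor:
    """Lift a static image batch (B, C, H, W) to a one-frame clip
    (B, C, 1, H, W) so images and videos share one 5-D interface."""
    return x.unsqueeze(2) if x.dim() == 4 else x

def hybrid_step(model, vae, image_batch, video_batch):
    """One hybrid training step: both data types pass through the
    same shared-latent pipeline (hypothetical `vae` and `model`)."""
    loss = 0.0
    for x in (to_clip(image_batch), video_batch):   # each (B, C, T, H, W)
        latents = vae.encode(x)                     # shared latent space
        loss = loss + model.diffusion_loss(latents) # denoising objective
    return loss
```

Routing both data types through one interface is what lets abundant image data regularize the video model, which is the stated motivation for the hybrid framework.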
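The AIDT triplet structure (two images sharing a pose, two sharing an identity) naturally supports a triplet-style disentanglement objective: identity embeddings should agree for the same-identity pair, and pose embeddings for the same-pose pair. Below is a minimal sketch under that assumption; `id_enc` and `pose_enc` are hypothetical encoders, and the paper's actual losses may differ.

```python
import torch.nn.functional as F

def aidt_loss(id_enc, pose_enc, anchor, same_pose, same_id, margin=0.2):
    """Triplet-style disentanglement over one AIDT triplet.

    `anchor` shares its pose with `same_pose` and its identity with
    `same_id`; `id_enc` / `pose_enc` are hypothetical encoders.
    """
    # Identity space: anchor should match same_id, not same_pose.
    za, zp, zi = (F.normalize(id_enc(t), dim=-1)
                  for t in (anchor, same_pose, same_id))
    id_loss = F.triplet_margin_loss(za, zi, zp, margin=margin)

    # Pose space: anchor should match same_pose, not same_id.
    pa, pp, pi = (F.normalize(pose_enc(t), dim=-1)
                  for t in (anchor, same_pose, same_id))
    pose_loss = F.triplet_margin_loss(pa, pp, pi, margin=margin)

    return id_loss + pose_loss
```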
Community
Introducing VividFace – a cutting-edge, diffusion-based framework for high-fidelity video face swapping. Leveraging a hybrid approach that combines the power of static images and dynamic video sequences, VividFace ensures exceptional identity preservation, temporal consistency, and robustness against complex pose variations and occlusions. With our VidFaceVAE and advanced 3D reconstruction techniques, you can achieve realistic, seamless face swaps across video frames with unmatched quality. Say goodbye to flickering and distortion, and experience the future of video face swapping with VividFace.
This is an automated message from the Librarian Bot. I found the following papers similar to this one.
The following papers were recommended by the Semantic Scholar API
- HiFiVFS: High Fidelity Video Face Swapping (2024)
- FuseAnyPart: Diffusion-Driven Facial Parts Swapping via Multiple Reference Images (2024)
- Hallo3: Highly Dynamic and Realistic Portrait Image Animation with Diffusion Transformer Networks (2024)
- Dynamic Try-On: Taming Video Virtual Try-on with Dynamic Attention Mechanism (2024)
- StableAnimator: High-Quality Identity-Preserving Human Image Animation (2024)
- DreamDance: Animating Human Images by Enriching 3D Geometry Cues from 2D Poses (2024)
- DIVE: Taming DINO for Subject-Driven Video Editing (2024)