FlexWorld: Progressively Expanding 3D Scenes for Flexible-View Synthesis
Abstract
Generating flexible-view 3D scenes, including 360° rotation and zooming, from single images is challenging due to a lack of 3D data. To this end, we introduce FlexWorld, a novel framework consisting of two key components: (1) a strong video-to-video (V2V) diffusion model to generate high-quality novel view images from incomplete input rendered from a coarse scene, and (2) a progressive expansion process to construct a complete 3D scene. In particular, leveraging an advanced pre-trained video model and accurate depth-estimated training pairs, our V2V model can generate novel views under large camera pose variations. Building upon it, FlexWorld progressively generates new 3D content and integrates it into the global scene through geometry-aware scene fusion. Extensive experiments demonstrate the effectiveness of FlexWorld in generating high-quality novel view videos and flexible-view 3D scenes from single images, achieving superior visual quality under multiple popular metrics and datasets compared to existing state-of-the-art methods. Qualitatively, we highlight that FlexWorld can generate high-fidelity scenes with flexible views like 360° rotations and zooming. Project page: https://ml-gsai.github.io/FlexWorld.
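The progressive expansion process described above can be sketched as a simple loop: render incomplete views of the current coarse scene from a new camera pose, complete them with the V2V model, and fuse the result back into the global scene. The sketch below is only illustrative; the scene representation (a list of points) and the functions `render_views`, `v2v_refine`, and `fuse_into_scene` are placeholder stubs, not the authors' actual API.

```python
# Illustrative sketch of a progressive 3D scene expansion loop.
# All components below are stand-ins: render_views mimics rendering
# incomplete views, v2v_refine mimics the V2V diffusion model, and
# fuse_into_scene mimics geometry-aware scene fusion.

def render_views(scene, camera):
    # Render (possibly incomplete) views of the coarse scene from a
    # new camera pose; here we just pair each point with the camera.
    return [(camera, point) for point in scene]

def v2v_refine(partial_views):
    # Stand-in for the V2V diffusion model: "completes" the partial
    # render by producing one new item of 3D content.
    camera = partial_views[0][0] if partial_views else None
    return [(camera, len(partial_views))]

def fuse_into_scene(scene, new_content):
    # Geometry-aware fusion placeholder: merge new content into the
    # global scene, skipping items that are already present.
    for item in new_content:
        if item not in scene:
            scene.append(item)
    return scene

def progressive_expansion(initial_scene, camera_trajectory):
    scene = list(initial_scene)
    for camera in camera_trajectory:
        partial = render_views(scene, camera)    # incomplete render
        new_content = v2v_refine(partial)        # complete via V2V model
        scene = fuse_into_scene(scene, new_content)
    return scene

# Starting from a single seed point, the scene grows with each
# camera pose along the trajectory.
scene = progressive_expansion([(0, 0)], camera_trajectory=[1, 2, 3])
print(len(scene))  # → 4
```

The key design point this loop captures is that each iteration conditions generation on renders of the *current* global scene, so newly generated content stays geometrically consistent with everything fused so far.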
Community
Twitter: https://x.com/Luxi_Chen123/status/1902280142812242151
Arxiv Paper: https://arxiv.org/abs/2503.13265
Project: https://ml-gsai.github.io/FlexWorld/
Github: https://github.com/ML-GSAI/FlexWorld
All code and weights are open-sourced. Welcome to try FlexWorld!
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API
- InsTex: Indoor Scenes Stylized Texture Synthesis (2025)
- AuraFusion360: Augmented Unseen Region Alignment for Reference-based 360° Unbounded Scene Inpainting (2025)
- V2Edit: Versatile Video Diffusion Editor for Videos and 3D Scenes (2025)
- Enhancing Monocular 3D Scene Completion with Diffusion Model (2025)
- Generative Gaussian Splatting: Generating 3D Scenes with Video Diffusion Priors (2025)
- WonderVerse: Extendable 3D Scene Generation with Video Generative Models (2025)
- CineMaster: A 3D-Aware and Controllable Framework for Cinematic Text-to-Video Generation (2025)
Models citing this paper: 1
Datasets citing this paper: 0
Spaces citing this paper: 0