arxiv:2305.11588

Text2NeRF: Text-Driven 3D Scene Generation with Neural Radiance Fields

Published on May 19, 2023
· Featured in Daily Papers on May 22, 2023

Abstract

Text-driven 3D scene generation is widely applicable to video gaming, film industry, and metaverse applications that have a large demand for 3D scenes. However, existing text-to-3D generation methods are limited to producing 3D objects with simple geometries and dreamlike styles that lack realism. In this work, we present Text2NeRF, which is able to generate a wide range of 3D scenes with complicated geometric structures and high-fidelity textures purely from a text prompt. To this end, we adopt NeRF as the 3D representation and leverage a pre-trained text-to-image diffusion model to constrain the 3D reconstruction of the NeRF to reflect the scene description. Specifically, we employ the diffusion model to infer the text-related image as the content prior and use a monocular depth estimation method to offer the geometric prior. Both content and geometric priors are utilized to update the NeRF model. To guarantee textural and geometric consistency between different views, we introduce a progressive scene inpainting and updating strategy for novel view synthesis of the scene. Our method requires no additional training data but only a natural language description of the scene as the input. Extensive experiments demonstrate that our Text2NeRF outperforms existing methods in producing photo-realistic, multi-view consistent, and diverse 3D scenes from a variety of natural language prompts.
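The abstract describes two supervision signals for the NeRF: the diffusion-generated image as the content prior and its monocular depth estimate as the geometric prior. Below is a minimal, hypothetical PyTorch sketch of how such per-ray supervision could be combined; it is not the authors' code, and the loss choices and `lambda_depth` weight are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def prior_guided_loss(pred_rgb, pred_depth, target_rgb, target_depth, lambda_depth=0.1):
    """pred_* come from volume-rendering a batch of rays through the NeRF;
    target_* are the diffusion-generated colors (content prior) and the
    monocular depth estimates (geometric prior) at the same pixels."""
    rgb_loss = F.mse_loss(pred_rgb, target_rgb)       # content prior
    depth_loss = F.l1_loss(pred_depth, target_depth)  # geometric prior
    return rgb_loss + lambda_depth * depth_loss

# Dummy ray batch, just to show the call shape; in training the loss is
# backpropagated into the NeRF parameters.
pred_rgb, pred_depth = torch.rand(1024, 3), torch.rand(1024)
target_rgb, target_depth = torch.rand(1024, 3), torch.rand(1024)
print(prior_guided_loss(pred_rgb, pred_depth, target_rgb, target_depth).item())
```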

Community

Generates a 3D scene (represented as NeRF network weights/parameters) from a text prompt by leveraging a text-to-image diffusion model as a constraint. Applies monocular depth estimation to the generated images to obtain a geometric prior. Uses depth-image-based rendering (DIBR) to build a support set (multiple views of the initial image) for training (updating) the NeRF. Performs progressive inpainting with the diffusion model, filling the masked regions in novel views / support-set images. Aligns depth maps for depth consistency across views. Trains the NeRF (implemented using TensoRF) with RGB, depth, and transmittance losses. From City University of Hong Kong and Tencent.
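The DIBR step above can be pictured as a simple forward warp: back-project each pixel of the generated RGB-D view with the camera intrinsics, re-project it into a novel pose, and record which target pixels receive no source pixel. Those holes are the masked regions that the diffusion model then inpaints. The following is a minimal sketch under assumed conventions (pinhole intrinsics `K`, source-to-target pose `R`, `t`, nearest-pixel splatting without z-buffering), not the paper's implementation.

```python
import torch

def dibr_warp(rgb, depth, K, R, t):
    """Forward-warp an RGB-D view into a novel camera.
    rgb: (H, W, 3) float, depth: (H, W) float, K: (3, 3) intrinsics,
    R: (3, 3), t: (3,) source-to-target rigid transform."""
    H, W, _ = rgb.shape
    ys, xs = torch.meshgrid(torch.arange(H), torch.arange(W), indexing="ij")
    pix = torch.stack([xs, ys, torch.ones_like(xs)], dim=-1).reshape(-1, 3).float().T  # (3, N)

    # Back-project to 3D in the source camera, then move to the target camera.
    cam = (torch.linalg.inv(K) @ pix) * depth.reshape(-1)  # (3, N)
    cam_t = R @ cam + t[:, None]

    # Project into the target image plane (nearest-pixel splatting, no z-buffer).
    proj = K @ cam_t
    u = (proj[0] / proj[2].clamp(min=1e-6)).round().long()
    v = (proj[1] / proj[2].clamp(min=1e-6)).round().long()
    valid = (u >= 0) & (u < W) & (v >= 0) & (v < H) & (cam_t[2] > 0)

    warped = torch.zeros_like(rgb)
    hole_mask = torch.ones(H, W, dtype=torch.bool)  # True = disoccluded, needs inpainting
    warped[v[valid], u[valid]] = rgb.reshape(-1, 3)[valid]
    hole_mask[v[valid], u[valid]] = False
    return warped, hole_mask

# Example: warp a random view by a small lateral translation.
H, W = 64, 64
K = torch.tensor([[50.0, 0, W / 2], [0, 50.0, H / 2], [0, 0, 1.0]])
rgb, depth = torch.rand(H, W, 3), torch.full((H, W), 2.0)
warped, mask = dibr_warp(rgb, depth, K, torch.eye(3), torch.tensor([0.1, 0.0, 0.0]))
```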

Links: arxiv, website
