arxiv:2309.16653

DreamGaussian: Generative Gaussian Splatting for Efficient 3D Content Creation

Published on Sep 28, 2023
· Featured in Daily Papers on Sep 29, 2023
Authors:
Jiaxiang Tang, Jiawei Ren, Hang Zhou, Ziwei Liu, Gang Zeng
Abstract

Recent advances in 3D content creation mostly leverage optimization-based 3D generation via score distillation sampling (SDS). Though promising results have been exhibited, these methods often suffer from slow per-sample optimization, limiting their practical usage. In this paper, we propose DreamGaussian, a novel 3D content generation framework that achieves both efficiency and quality simultaneously. Our key insight is to design a generative 3D Gaussian Splatting model with companion mesh extraction and texture refinement in UV space. In contrast to the occupancy pruning used in Neural Radiance Fields, we demonstrate that the progressive densification of 3D Gaussians converges significantly faster for 3D generative tasks. To further enhance the texture quality and facilitate downstream applications, we introduce an efficient algorithm to convert 3D Gaussians into textured meshes and apply a fine-tuning stage to refine the details. Extensive experiments demonstrate the superior efficiency and competitive generation quality of our proposed approach. Notably, DreamGaussian produces high-quality textured meshes in just 2 minutes from a single-view image, achieving approximately 10 times acceleration compared to existing methods.
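
To make the first stage concrete, here is a minimal, hedged sketch of how an SDS-style update could drive the parameters of a 3D Gaussian scene. `render_gaussians` and `denoiser` are hypothetical stand-ins for a differentiable splatting rasterizer and a pretrained 2D diffusion model; the timestep range and weighting are common SDS choices, not necessarily the paper's exact implementation.

```python
# Sketch only: `render_gaussians` and `denoiser` are hypothetical stand-ins
# for a differentiable Gaussian-splatting rasterizer and a pretrained 2D
# diffusion model; they are not part of the paper's released code.
import torch

def sds_step(gaussians, camera, denoiser, text_embed, alphas_cumprod, optimizer):
    """One score-distillation update on the Gaussian parameters."""
    image = render_gaussians(gaussians, camera)            # (1, 3, H, W), differentiable
    t = torch.randint(20, 980, (1,), device=image.device)  # random diffusion timestep
    eps = torch.randn_like(image)
    a_t = alphas_cumprod[t].view(1, 1, 1, 1)
    noisy = a_t.sqrt() * image + (1 - a_t).sqrt() * eps    # forward process q(x_t | x_0)
    with torch.no_grad():
        eps_pred = denoiser(noisy, t, text_embed)          # predicted noise, no U-Net grads
    grad = (1 - a_t) * (eps_pred - eps)                    # SDS gradient with w(t) = 1 - a_t
    optimizer.zero_grad()
    image.backward(gradient=grad)                          # chain rule into Gaussian params
    optimizer.step()
```

Because the gradient is injected directly at the rendered image, no backpropagation through the diffusion U-Net is needed, which is part of what keeps per-step cost low.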

Community

Here is an ML-generated summary

Objective
The paper proposes DreamGaussian, a novel 3D content generation framework that achieves both efficiency and quality for image-to-3D and text-to-3D tasks.

Insights

  • Adapting 3D Gaussian splatting into generative settings significantly reduces optimization time compared to NeRF-based methods.
  • The progressive densification of Gaussian splatting is more aligned with the optimization progress in generative settings than occupancy pruning in NeRF.
  • An efficient algorithm for extracting textured meshes from 3D Gaussians is proposed, enabling explicit mesh and texture refinement.
  • Refining the texture in UV space with a multi-step MSE loss avoids the artifacts that arise from using the SDS loss (a sketch follows this list).
  • The proposed two-stage framework effectively balances speed and quality, generating high-quality textured meshes from images or text in just a few minutes.
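
As a companion to the last two bullets, the sketch below illustrates the second stage under stated assumptions: a coarse render of the extracted mesh is perturbed with noise and then denoised over a few diffusion steps (SDEdit-style) to produce a sharper target image, and a plain pixel-wise MSE loss updates the UV texture. `render_mesh` and `multistep_denoise` are hypothetical placeholders for a differentiable mesh renderer and a short diffusion sampler.

```python
# Sketch only: `render_mesh` and `multistep_denoise` are hypothetical
# placeholders for a differentiable mesh renderer and a short diffusion
# sampler; the linear blend used to noise the render is a crude simplification.
import torch
import torch.nn.functional as F

def refine_texture_step(texture, mesh, camera, denoiser, text_embed,
                        optimizer, noise_level: float = 0.5):
    """One UV-texture refinement step using a denoised image as the MSE target."""
    coarse = render_mesh(mesh, texture, camera)        # differentiable w.r.t. texture
    with torch.no_grad():
        noise = torch.randn_like(coarse)
        noisy = (1 - noise_level) * coarse + noise_level * noise  # mid-level perturbation
        target = multistep_denoise(denoiser, noisy, text_embed)   # sharper, cleaner target
    loss = F.mse_loss(coarse, target)                  # MSE instead of SDS avoids artifacts
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Using a fixed denoised image as a regression target gives a well-behaved loss surface for the texture, which is why this stage converges in far fewer iterations than continued SDS optimization would.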

Results
The experiments demonstrate that DreamGaussian produces photorealistic 3D assets with 10x speedup compared to previous optimization-based image-to-3D and text-to-3D methods while achieving competitive visual quality.

Models citing this paper 0

No model linking this paper

Cite arxiv.org/abs/2309.16653 in a model README.md to link it from this page.

Datasets citing this paper 0

No dataset linking this paper

Cite arxiv.org/abs/2309.16653 in a dataset README.md to link it from this page.

Spaces citing this paper 11

Collections including this paper 23