arxiv:2310.12945

3D-GPT: Procedural 3D Modeling with Large Language Models

Published on Oct 19, 2023
· Featured in Daily Papers on Oct 20, 2023

Abstract

In the pursuit of efficient automated content creation, procedural generation, which leverages modifiable parameters and rule-based systems, emerges as a promising approach. Nonetheless, it can be a demanding endeavor, given its intricate nature requiring a deep understanding of rules, algorithms, and parameters. To reduce this workload, we introduce 3D-GPT, a framework utilizing large language models (LLMs) for instruction-driven 3D modeling. 3D-GPT positions LLMs as proficient problem solvers, dissecting the procedural 3D modeling task into accessible segments and appointing the apt agent for each. 3D-GPT integrates three core agents: the task dispatch agent, the conceptualization agent, and the modeling agent. Together they achieve two objectives. First, they enhance concise initial scene descriptions, evolving them into detailed forms while dynamically adapting the text based on subsequent instructions. Second, they integrate procedural generation, extracting parameter values from the enriched text to effortlessly interface with 3D software for asset creation. Our empirical investigations confirm that 3D-GPT not only interprets and executes instructions, delivering reliable results, but also collaborates effectively with human designers. Furthermore, it seamlessly integrates with Blender, unlocking expanded manipulation possibilities. Our work highlights the potential of LLMs in 3D modeling, offering a basic framework for future advancements in scene generation and animation.
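On the Blender point: Blender exposes a Python API (bpy), and "interfacing with 3D software" amounts to emitting calls against it. A minimal illustration follows; the primitive and names below are my own example, not code from the paper.

```python
import bpy  # Blender's bundled Python API; run from Blender's scripting tab

# The kind of call an instruction-driven pipeline could emit for Blender.
# 3D-GPT itself drives full procedural generators, not single primitives.
bpy.ops.mesh.primitive_uv_sphere_add(radius=0.5, location=(0.0, 0.0, 0.5))
bpy.context.object.name = "llm_placed_asset"
```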

Community

My summary of the day: Researchers propose a new AI system called 3D-GPT that creates 3D models from natural language instructions, using agents specialized for driving existing 3D modeling tools.

3D-GPT works with a library of predefined procedural functions that make 3D shapes; it builds scenes by tweaking their parameters. The key is getting the LLM to understand instructions and pick the right functions.
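As a rough sketch of that setup: the geometry logic lives in a fixed function, and the model's only job is to choose parameter values. Everything below (the names, the parameters, the stub script it emits) is hypothetical, not the paper's actual function library.

```python
from dataclasses import dataclass

@dataclass
class MeadowParams:
    # Hypothetical knobs an agent could set from the text description.
    grass_density: float = 0.8      # normalized blades per unit area
    flower_count: int = 40
    flower_hue: float = 0.95        # HSV hue in [0, 1]
    terrain_roughness: float = 0.2

def generate_meadow(params: MeadowParams) -> str:
    """Emit a (stub) script for the 3D software. The LLM never writes
    geometry code; it only picks the values plugged in here."""
    return (
        f"make_terrain(roughness={params.terrain_roughness})\n"
        f"scatter_grass(density={params.grass_density})\n"
        f"scatter_flowers(count={params.flower_count}, hue={params.flower_hue})\n"
    )

# "lush meadow with flowers" might translate to denser, redder flowers:
print(generate_meadow(MeadowParams(flower_count=120, flower_hue=0.02)))
```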

It has three main agents:

  • A dispatcher that parses the text and picks generation functions
  • A conceptualizer that adds details missing from the description
  • A modeler that sets parameters and outputs code to drive 3D software

By breaking the modeling work down into steps, the agents can collaborate to match the description, much like how a human 3D modeling team would divide the work.
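Here is a minimal sketch of how that three-agent split might look in code, assuming a generic chat-completion client. The function library, prompts, and `call_llm` stub are my own illustrative assumptions, not the authors' implementation (which drives procedural generators inside Blender).

```python
# A minimal sketch of the three-agent pipeline. All names and prompts
# here are illustrative, not the paper's code.

FUNCTION_LIBRARY = {
    "add_terrain": "create ground geometry with roughness controls",
    "add_grass": "scatter grass over the terrain with density controls",
    "add_flowers": "scatter flower instances with count/color controls",
}

def call_llm(prompt: str) -> str:
    # Stand-in for any LLM API; plug in a real client to run end to end.
    raise NotImplementedError

def task_dispatch_agent(instruction: str) -> list[str]:
    """Parse the instruction and pick the required generation functions."""
    menu = "\n".join(f"- {name}: {doc}" for name, doc in FUNCTION_LIBRARY.items())
    reply = call_llm(
        f"Instruction: {instruction}\nFunctions:\n{menu}\n"
        "Answer with the needed function names, comma-separated."
    )
    return [n.strip() for n in reply.split(",") if n.strip() in FUNCTION_LIBRARY]

def conceptualization_agent(instruction: str) -> str:
    """Enrich a terse description with the visual detail modeling needs."""
    return call_llm(f"Expand this scene description with concrete visual detail: {instruction}")

def modeling_agent(detailed_text: str, functions: list[str]) -> str:
    """Choose parameter values and emit code for the 3D software."""
    return call_llm(
        f"Scene: {detailed_text}\nFor each function in {functions}, pick "
        "parameter values and output the corresponding Python calls."
    )

def run_3d_gpt(instruction: str) -> str:
    functions = task_dispatch_agent(instruction)
    detailed = conceptualization_agent(instruction)
    return modeling_agent(detailed, functions)
```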

The authors show it generating simple scenes, like a "lush meadow with flowers", that match the text. It also modifies existing scenes appropriately when given follow-up instructions. I include some GIFs of example outputs in my full summary. They look pretty good; I would say roughly 2005-era graphics quality.
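To make the follow-up-instruction behavior concrete, a session built on the `run_3d_gpt` sketch above might look like this (illustrative only; the paper additionally adapts the enriched scene text as new instructions arrive):

```python
# Hypothetical session reusing the run_3d_gpt sketch above.
script = run_3d_gpt("lush meadow with flowers")
# ...render the emitted script in Blender, inspect, then refine:
script = run_3d_gpt("same meadow, but make the flowers blue and add fog")
```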

There are limits. It relies entirely on existing procedural generators, so output quality is capped by what those can produce. Fine details and curved surfaces are iffy, and it often falls back to default shapes rather than showing true shape understanding. I also doubt the vertex counts and textures are well-optimized.

Agent architectures are really popular right now. This one shows some planning ability, which could extend to more creative tasks someday.

TLDR: AI agents can team up to generate 3D models from text instructions. Works to some degree but limitations remain.

Full summary.

