3DGraphLLM: Combining Semantic Graphs and Large Language Models for 3D Scene Understanding
Abstract
A 3D scene graph is a compact scene model that stores information about objects and the semantic relationships between them, which makes it promising for robotic tasks. When interacting with a user, an embodied intelligent agent should be able to respond to various natural-language queries about the scene. Large Language Models (LLMs) are an attractive solution for user-robot interaction thanks to their natural-language understanding and reasoning abilities. Recent methods for creating learnable representations of 3D scenes have demonstrated the potential to improve the quality of LLM responses by adapting them to the 3D world. However, existing methods do not explicitly use information about the semantic relationships between objects, limiting themselves to object coordinates. In this work, we propose 3DGraphLLM, a method for constructing a learnable representation of a 3D scene graph. This representation serves as input to LLMs for 3D vision-language tasks. In experiments on the popular ScanRefer, RIORefer, Multi3DRefer, ScanQA, SQA3D, and Scan2Cap datasets, we demonstrate the advantage of this approach over baselines that do not use information about the semantic relationships between objects. The code is publicly available at https://github.com/CognitiveAISystems/3DGraphLLM.
Contributions
• We introduce 3DGraphLLM, the first method to create a learnable 3D scene graph representation for LLMs, enabling the mapping of semantic relationships between objects in the scene into the LLM's token embedding space.
• We propose an algorithm that produces a flat sequence of graph embedding tokens using k-nearest-neighbor selection with a minimum-distance filter between objects, optimizing inference speed by reducing the number of tokens required to describe the scene (see the sketch after this list).
• 3DGraphLLM achieves state-of-the-art results for 3D referred object grounding on the Multi3DRefer (+5.8% F1@0.5) and ScanRefer (+4.4% Acc@0.5) benchmarks, as well as for 3D scene captioning on the Scan2Cap dataset (+5.8% CIDEr@0.5).
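The sketch below illustrates the kind of subgraph flattening the second bullet describes: for every detected object, its k nearest neighbors are selected, neighbors that fail a minimum-distance filter are dropped, and the resulting local subgraph is emitted as a flat sequence of embeddings fed to the LLM. This is a minimal sketch, not the authors' implementation: the function name flatten_scene_graph, the inputs centers, obj_emb, and rel_emb, the parameters k and min_dist, and the exact way the distance filter is applied are all assumptions for illustration.

```python
import numpy as np

def flatten_scene_graph(centers, obj_emb, rel_emb, k=2, min_dist=0.2):
    """Flatten a 3D scene graph into a sequence of embeddings for an LLM.

    centers  : (N, 3) array-like of object centroid coordinates.
    obj_emb  : list of N object embeddings already projected into the
               LLM token embedding space.
    rel_emb  : dict mapping an ordered pair (i, j) to the embedding of the
               semantic relationship between objects i and j.
    k        : number of nearest neighbors kept per object.
    min_dist : neighbors closer than this threshold are skipped
               (assumed here to suppress near-duplicate detections).
    """
    centers = np.asarray(centers, dtype=float)
    sequence = []
    for i in range(len(centers)):
        # Euclidean distances from object i to every other object.
        dists = np.linalg.norm(centers - centers[i], axis=1)
        dists[i] = np.inf  # exclude the object itself

        # Keep the k closest neighbors that pass the minimum-distance filter.
        order = np.argsort(dists)
        neighbors = [int(j) for j in order
                     if j != i and dists[j] >= min_dist][:k]

        # Emit the object embedding followed by (relation, neighbor) pairs,
        # so each object contributes a short local subgraph to the sequence.
        sequence.append(obj_emb[i])
        for j in neighbors:
            sequence.append(rel_emb[(i, j)])
            sequence.append(obj_emb[j])
    return sequence
```

Keeping k small limits the sequence to a few tokens per object, which is what keeps the graph representation cheap enough for LLM inference while still exposing local semantic relations.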
The model is available on the HuggingFace Hub: https://huggingface.co/wingrune/3DGraphLLM.
Related papers
The following papers were recommended by the Semantic Scholar API:
- Solving Zero-Shot 3D Visual Grounding as Constraint Satisfaction Problems (2024)
- Online Knowledge Integration for 3D Semantic Mapping: A Survey (2024)
- LayoutVLM: Differentiable Optimization of 3D Layout via Vision-Language Models (2024)
- VLA-3D: A Dataset for 3D Semantic Scene Understanding and Navigation (2024)
- LLaVA-SpaceSGG: Visual Instruct Tuning for Open-vocabulary Scene Graph Generation with Enhanced Spatial Relations (2024)
- 3D Scene Graph Guided Vision-Language Pre-training (2024)
- Open-Vocabulary Octree-Graph for 3D Scene Understanding (2024)