Abstract
We study open-world part segmentation in 3D: segmenting any part in any object based on any text query. Prior methods are limited in object categories and part vocabularies. Recent advances in AI have demonstrated effective open-world recognition capabilities in 2D. Inspired by this progress, we propose an open-world, direct-prediction model for 3D part segmentation that can be applied zero-shot to any object. Our approach, called Find3D, trains a general-category point embedding model on large-scale 3D assets from the internet without any human annotation. It combines a data engine, powered by foundation models for annotating data, with a contrastive training method. We achieve strong performance and generalization across multiple datasets, with up to a 3x improvement in mIoU over the next best method. Our model is 6x to over 300x faster than existing baselines. To encourage research in general-category open-world 3D part segmentation, we also release a benchmark for general objects and parts. Project website: https://ziqi-ma.github.io/find3dsite/
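To make the abstract's two mechanisms concrete — contrastive training that aligns per-point features with text embeddings, and query-time segmentation by nearest text embedding — here is a minimal PyTorch sketch. It assumes a CLIP-style shared embedding space; `info_nce`, `segment_by_text`, `point_model`, and `text_encoder` are illustrative names, not Find3D's actual interface, and the paper's exact training objective may differ.

```python
import torch
import torch.nn.functional as F

def info_nce(point_feats, text_feats, temperature=0.07):
    """Symmetric InfoNCE-style contrastive loss (a common choice; the
    paper's exact objective may differ). point_feats[i] and text_feats[i]
    are a positive pair; other rows in the batch act as negatives."""
    p = F.normalize(point_feats, dim=-1)            # (B, D)
    t = F.normalize(text_feats, dim=-1)             # (B, D)
    logits = p @ t.T / temperature                  # (B, B) similarity matrix
    targets = torch.arange(len(p), device=p.device) # diagonal = positives
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.T, targets))

@torch.no_grad()
def segment_by_text(points, queries, point_model, text_encoder):
    """Open-vocabulary inference: label each point with its closest query.
    point_model: hypothetical, maps an (N, 3) point cloud to (N, D) features.
    text_encoder: hypothetical, maps Q strings to (Q, D) embeddings."""
    p = F.normalize(point_model(points), dim=-1)    # (N, D)
    t = F.normalize(text_encoder(queries), dim=-1)  # (Q, D)
    return (p @ t.T).argmax(dim=-1)                 # (N,) per-point query index
```

Because queries are free-form text matched in the shared embedding space at inference time, no fixed label set is baked into the weights — this is what allows the model to segment any part of any object zero-shot.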
Community
The first open-world 3D part segmentation method that actually works on general objects. Greatly outperforms existing methods (up to 3x mIoU over the next best).
The following similar papers were recommended by the Semantic Scholar API:
- SAMPart3D: Segment Any Part in 3D Objects (2024)
- Towards Open-Vocabulary Semantic Segmentation Without Semantic Labels (2024)
- SA3DIP: Segment Any 3D Instance with Potential 3D Priors (2024)
- BelHouse3D: A Benchmark Dataset for Assessing Occlusion Robustness in 3D Point Cloud Semantic Segmentation (2024)
- Zero-Shot Scene Reconstruction from Single Images with Deep Prior Assembly (2024)
- Open-RGBT: Open-vocabulary RGB-T Zero-shot Semantic Segmentation in Open-world Environments (2024)
- PAVLM: Advancing Point Cloud based Affordance Understanding Via Vision-Language Model (2024)