Masked Scene Modeling: Narrowing the Gap Between Supervised and Self-Supervised Learning in 3D Scene Understanding
Abstract
Self-supervised learning has transformed 2D computer vision by enabling models trained on large, unannotated datasets to provide versatile off-the-shelf features that perform on par with models trained with labels. However, in 3D scene understanding, self-supervised methods are typically used only as a weight-initialization step for task-specific fine-tuning, which limits their utility for general-purpose feature extraction. This paper addresses this shortcoming by proposing a robust evaluation protocol specifically designed to assess the quality of self-supervised features for 3D scene understanding. Our protocol uses multi-resolution feature sampling of hierarchical models to create rich point-level representations that capture the semantic capabilities of the model and, hence, are suitable for evaluation with linear probing and nearest-neighbor methods. Furthermore, we introduce the first self-supervised model that performs on par with supervised models when only off-the-shelf features are used in a linear probing setup. In particular, our model is trained natively in 3D with a novel self-supervised approach based on a Masked Scene Modeling objective, which reconstructs deep features of masked patches in a bottom-up manner and is specifically tailored to hierarchical 3D models. Our experiments demonstrate not only that our method achieves performance competitive with supervised models, but also that it surpasses existing self-supervised approaches by a large margin. The model and training code can be found in our GitHub repository (https://github.com/phermosilla/msm).
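To make the evaluation protocol concrete, below is a minimal sketch (not the authors' code) of multi-resolution feature sampling: features from every level of a hierarchical backbone are interpolated back to the input points, concatenated into one point-level representation, and then evaluated with a frozen-backbone linear probe. The function name `sample_multires_features` and the inverse-distance k-nearest-neighbor interpolation are illustrative assumptions; the paper only specifies that features are sampled across resolutions.

```python
import torch

def sample_multires_features(levels, query_xyz, k=3):
    """Interpolate each hierarchy level's features to the query points and concatenate.

    levels: list of (coords [N_l, 3], feats [N_l, C_l]) pairs, one per level.
    query_xyz: [M, 3] input point coordinates.
    Returns: [M, sum(C_l)] point-level representations.
    """
    per_level = []
    for coords, feats in levels:
        # Inverse-distance weighting over the k nearest points of this level.
        dist = torch.cdist(query_xyz, coords)            # [M, N_l]
        knn_dist, knn_idx = dist.topk(k, largest=False)  # [M, k]
        w = 1.0 / (knn_dist + 1e-8)
        w = w / w.sum(dim=1, keepdim=True)               # normalized weights
        neigh = feats[knn_idx]                           # [M, k, C_l]
        per_level.append((w.unsqueeze(-1) * neigh).sum(dim=1))
    return torch.cat(per_level, dim=-1)                  # [M, sum C_l]

# Toy usage with random tensors standing in for backbone outputs.
torch.manual_seed(0)
query = torch.rand(1024, 3)
levels = [(torch.rand(n, 3), torch.rand(n, c))
          for n, c in [(1024, 96), (256, 192), (64, 384)]]
feats = sample_multires_features(levels, query)   # [1024, 672]
probe = torch.nn.Linear(feats.shape[1], 20)       # e.g., 20 semantic classes
logits = probe(feats.detach())                    # backbone stays frozen
```

Because the concatenated features carry information from all resolutions, a single linear layer (or a nearest-neighbor classifier over the same vectors) is enough to read out per-point semantics without fine-tuning the backbone.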
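The training objective can be sketched in the same spirit. The following is a hedged illustration, not the released training code: the abstract states only that deep features of masked patches are reconstructed bottom-up in a hierarchical model. The momentum (EMA) teacher, the cosine reconstruction loss, and the single-level simplification are assumptions borrowed from common masked-feature-prediction setups.

```python
import torch
import torch.nn.functional as F

def masked_feature_loss(student_feats, teacher_feats, mask):
    """Cosine reconstruction loss computed on masked patches only.

    student_feats: [N, C] features predicted from the masked scene.
    teacher_feats: [N, C] target features from the full, unmasked scene.
    mask: [N] boolean, True where the patch was hidden from the student.
    """
    s = F.normalize(student_feats[mask], dim=-1)
    t = F.normalize(teacher_feats[mask], dim=-1).detach()  # stop-gradient on targets
    return (1.0 - (s * t).sum(dim=-1)).mean()

@torch.no_grad()
def ema_update(teacher, student, m=0.996):
    # Momentum update of the teacher weights (a common choice, assumed here).
    for pt, ps in zip(teacher.parameters(), student.parameters()):
        pt.mul_(m).add_(ps, alpha=1.0 - m)

# Toy check with random tensors.
n, c = 512, 256
mask = torch.rand(n) < 0.6  # e.g., mask 60% of the patches
loss = masked_feature_loss(torch.randn(n, c), torch.randn(n, c), mask)
```

In the hierarchical setting described in the paper, a loss of this form would be applied per level, with shallower levels reconstructed before deeper ones (the "bottom-up" aspect); the sketch collapses this to one level for brevity.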
Community
Are you tired of projecting 2D features into 3D for general problem solving? In this paper, we propose a self-supervised model that generates semantic off-the-shelf features natively in 3D! #CVPR2025
Project: https://phermosilla.github.io/msm/
Arxiv: https://arxiv.org/abs/2504.06719
Github: https://github.com/phermosilla/msm
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API
- Multi-Scale Neighborhood Occupancy Masked Autoencoder for Self-Supervised Learning in LiDAR Point Clouds (2025)
- Sonata: Self-Supervised Learning of Reliable Point Representations (2025)
- SceneSplat: Gaussian Splatting-based Scene Understanding with Vision-Language Pretraining (2025)
- PSA-SSL: Pose and Size-aware Self-Supervised Learning on LiDAR Point Clouds (2025)
- CleverDistiller: Simple and Spatially Consistent Cross-modal Distillation (2025)
- Label-Efficient LiDAR Semantic Segmentation with 2D-3D Vision Transformer Adapters (2025)
- RendBEV: Semantic Novel View Synthesis for Self-Supervised Bird's Eye View Segmentation (2025)