# GEAL: Generalizable 3D Affordance Learning with Cross-Modal Consistency

Dongyue Lu    Lingdong Kong    Tianxin Huang    Gim Hee Lee   
National University of Singapore   

## About 🛠️

**GEAL** is a novel framework designed to enhance the generalization and robustness of 3D affordance learning by leveraging pre-trained 2D models. To facilitate robust 3D affordance learning across diverse real-world scenarios, we establish two 3D affordance robustness benchmarks, **PIAD-C** and **LASO-C**, built on the test sets of the commonly used PIAD and LASO datasets. We apply seven types of corruption:

- **Add Global**
- **Add Local**
- **Drop Global**
- **Drop Local**
- **Rotate**
- **Scale**
- **Jitter**

Each corruption is applied at five severity levels, resulting in a total of **4890 object-affordance pairings**, comprising **17 affordance categories** and **23 object categories** with **2047 distinct object shapes**.
*Figure: GEAL performance demonstration (GIF).*
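To illustrate how a corruption such as **Jitter** might be applied at increasing severity levels, here is a minimal sketch. The function name `jitter_point_cloud` and the per-level noise scales are illustrative assumptions, not the exact parameters used to generate PIAD-C and LASO-C.

```python
import numpy as np

def jitter_point_cloud(points: np.ndarray, severity: int = 1) -> np.ndarray:
    """Add Gaussian noise to a point cloud of shape (N, 3).

    Hypothetical sketch: the noise scales per severity level are assumptions,
    not the exact settings used to build PIAD-C / LASO-C.
    """
    assert 1 <= severity <= 5, "benchmarks use five severity levels"
    # Assumed standard deviations for severity levels 1..5.
    sigma = [0.01, 0.02, 0.03, 0.05, 0.07][severity - 1]
    noise = np.random.normal(loc=0.0, scale=sigma, size=points.shape)
    return points + noise

# Example: corrupt a random point cloud at the highest severity.
pc = np.random.rand(2048, 3).astype(np.float32)
pc_corrupted = jitter_point_cloud(pc, severity=5)
print(pc_corrupted.shape)  # (2048, 3)
```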
## Updates 📰

- **[2024.12]** We have released our **PIAD-C** and **LASO-C** datasets! 🎉📂

## Dataset and Code Release 🚀

We are excited to announce the release of our dataset and dataloader:

- **Dataset**: available in the `PIAD-C` and `LASO-C` files 📜
- **Dataloader**: available in the `dataset.py` file 📜

Stay tuned! Further evaluation code is coming soon. 🔧✨
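As a rough illustration of how the released dataloader in `dataset.py` might be plugged into a PyTorch pipeline, here is a hedged usage sketch. The class name `PIADC_Dataset` and its constructor arguments are hypothetical placeholders; please refer to `dataset.py` for the actual interface.

```python
# Hypothetical usage sketch; `PIADC_Dataset` and its arguments are
# placeholders -- see `dataset.py` for the actual interface.
from torch.utils.data import DataLoader

from dataset import PIADC_Dataset  # assumed class name

# Load the corrupted test split for one corruption type and severity.
test_set = PIADC_Dataset(root="data/PIAD-C", corruption="jitter", severity=3)
test_loader = DataLoader(test_set, batch_size=16, shuffle=False, num_workers=4)

for points, affordance_label in test_loader:
    # points: (B, N, 3) corrupted point clouds
    # affordance_label: affordance annotations for each object
    pass
```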