<p align="center">
  
  <h3 align="center"><strong>GEAL: Generalizable 3D Affordance Learning with Cross-Modal Consistency</strong></h3>

  <p align="center">
      <a href="https://dylanorange.github.io" target='_blank'>Dongyue Lu</a>&nbsp;&nbsp;&nbsp;
      <a href="https://ldkong.com" target='_blank'>Lingdong Kong</a>&nbsp;&nbsp;&nbsp;
      <a href="https://tianxinhuang.github.io/" target='_blank'>Tianxin Huang</a>&nbsp;&nbsp;&nbsp;
      <a href="https://www.comp.nus.edu.sg/~leegh/">Gim Hee Lee</a>&nbsp;&nbsp;&nbsp;
    </br>
  National University of Singapore&nbsp;&nbsp;&nbsp;
  </p>

</p>

<p align="center">
  <a href="https://dylanorange.github.io/projects/geal/static/files/geal.pdf" target='_blank'>
    <img src="https://img.shields.io/badge/Paper-%F0%9F%93%83-lightblue">
  </a>
  <a href="https://dylanorange.github.io/projects/geal" target='_blank'>
    <img src="https://img.shields.io/badge/Project-%F0%9F%94%97-blue">
  </a>
  <a href="https://huggingface.co/datasets/dylanorange/geal" target="_blank">
    <img src="https://img.shields.io/badge/Dataset-%20Hugging%20Face-yellow">
</a>


</p>


## About 🛠️

**GEAL** is a novel framework designed to enhance the generalization and robustness of 3D affordance learning by leveraging pre-trained 2D models.

To facilitate robust 3D affordance learning across diverse real-world scenarios, we establish two 3D affordance robustness benchmarks, **PIAD-C** and **LASO-C**, built from the test sets of the commonly used PIAD and LASO datasets. We apply seven types of corruption:

- **Add Global**
- **Add Local**
- **Drop Global**
- **Drop Local**
- **Rotate**
- **Scale**
- **Jitter**

Each corruption is applied at five severity levels, yielding a total of **4890 object-affordance pairings** that span **17 affordance categories** and **23 object categories** with **2047 distinct object shapes**.


<div style="text-align: center;">
    <img src="supp_benchmark_1.jpg" alt="GEAL Performance GIF" style="max-width: 100%; height: auto; width: 1000px;">
    <img src="supp_benchmark_2.jpg" alt="GEAL Performance GIF" style="max-width: 100%; height: auto; width: 1000px;">
</div>
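
To make the corruption taxonomy concrete, below is a minimal Python sketch of two of the seven corruptions (Jitter and Drop Local) applied to an `(N, 3)` point cloud. The per-severity noise scales and drop ratios are illustrative placeholders, not the parameters used to build PIAD-C and LASO-C.

```python
import numpy as np

def jitter_point_cloud(points: np.ndarray, severity: int = 1) -> np.ndarray:
    """Add Gaussian jitter to an (N, 3) point cloud at a given severity (1-5)."""
    assert 1 <= severity <= 5, "severity levels range from 1 to 5"
    sigma = 0.01 * severity  # placeholder noise scale, not the benchmark's actual value
    return points + np.random.normal(0.0, sigma, size=points.shape)

def drop_local_points(points: np.ndarray, severity: int = 1) -> np.ndarray:
    """Drop a local cluster of points around a randomly chosen seed point."""
    assert 1 <= severity <= 5, "severity levels range from 1 to 5"
    drop_ratio = 0.1 * severity  # placeholder drop ratio, not the benchmark's actual value
    seed = points[np.random.randint(points.shape[0])]
    dist = np.linalg.norm(points - seed, axis=1)
    n_drop = int(points.shape[0] * drop_ratio)
    keep_idx = np.argsort(dist)[n_drop:]  # discard the n_drop points closest to the seed
    return points[keep_idx]
```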

## Updates 📰

- **[2024.12]** - We have released our **PIAD-C** and **LASO-C** datasets! 🎉📂


## Dataset and Code Release 🚀

We are excited to announce the release of our dataset and dataloader:

- **Dataset**: Available in the `PIAD-C` and `LASO-C` files 📜
- **Dataloader**: Available in the `dataset.py` file 📜

Stay tuned! Further evaluation code will be coming soon. 🔧✨
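
In the meantime, here is a minimal usage sketch for pulling the benchmark files from Hugging Face with `huggingface_hub`; the local file layout follows whatever is published on the dataset page.

```python
from huggingface_hub import snapshot_download

# Download the full GEAL dataset repository (PIAD-C / LASO-C) from Hugging Face.
local_dir = snapshot_download(repo_id="dylanorange/geal", repo_type="dataset")
print(f"Benchmark files downloaded to: {local_dir}")
```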