---
dataset_info:
- config_name: Autonomous Driving
  features:
  - name: domain
    dtype: string
  - name: image
    dtype: image
  - name: question
    dtype: string
  - name: actions
    sequence: string
  - name: answer_index
    dtype: int64
  - name: reason
    dtype: string
  - name: key_concept
    sequence: string
  - name: question_prompt
    dtype: string
  - name: answer_with_reason
    dtype: string
  - name: full_meta_data_json
    dtype: string
  splits:
  - name: test_open
    num_bytes: 134659773
    num_examples: 100
  - name: test_closed
    num_bytes: 67549223
    num_examples: 150
  download_size: 270416985
  dataset_size: 202208996
- config_name: Domestic Robot
  features:
  - name: domain
    dtype: string
  - name: image
    dtype: image
  - name: question
    dtype: string
  - name: actions
    sequence: string
  - name: answer_index
    dtype: int64
  - name: reason
    dtype: string
  - name: key_concept
    sequence: string
  - name: question_prompt
    dtype: string
  - name: answer_with_reason
    dtype: string
  - name: full_meta_data_json
    dtype: string
  splits:
  - name: test_open
    num_bytes: 91702060
    num_examples: 100
  - name: test_closed
    num_bytes: 177827577
    num_examples: 200
  download_size: 105390299
  dataset_size: 269529637
- config_name: Open-World Game
  features:
  - name: domain
    dtype: string
  - name: image
    dtype: image
  - name: question
    dtype: string
  - name: actions
    sequence: string
  - name: answer_index
    dtype: int64
  - name: reason
    dtype: string
  - name: key_concept
    sequence: string
  - name: question_prompt
    dtype: string
  - name: answer_with_reason
    dtype: string
  - name: full_meta_data_json
    dtype: string
  splits:
  - name: test_open
    num_bytes: 16139511
    num_examples: 117
  - name: test_closed
    num_bytes: 19069366
    num_examples: 141
  download_size: 34988721
  dataset_size: 35208877
configs:
- config_name: Autonomous Driving
  data_files:
  - split: test_open
    path: Autonomous Driving/test_open-*
  - split: test_closed
    path: Autonomous Driving/test_closed-*
- config_name: Domestic Robot
  data_files:
  - split: test_open
    path: Domestic Robot/test_open-*
  - split: test_closed
    path: Domestic Robot/test_closed-*
- config_name: Open-World Game
  data_files:
  - split: test_open
    path: Open-World Game/test_open-*
  - split: test_closed
    path: Open-World Game/test_closed-*
license: apache-2.0
task_categories:
- multiple-choice
- visual-question-answering
language:
- en
pretty_name: PCA-Bench
---


<h1 align="center">PCA-Bench</h1>

<p align="center">

<a href="https://github.com/pkunlp-icler/PCA-EVAL">
<img alt="Static Badge" src="https://img.shields.io/badge/Github-Online-white">

<a href="https://github.com/pkunlp-icler/PCA-EVAL/blob/main/PCA_Bench_Paper.pdf">
<img alt="Static Badge" src="https://img.shields.io/badge/Paper-PCABench-red">

<a href="https://huggingface.co/datasets/PCA-Bench/PCA-Bench-V1">
<img alt="Static Badge" src="https://img.shields.io/badge/HFDataset-PCABenchV1-yellow">
</a>

<a href="https://docs.qq.com/sheet/DVUd4WUpGRHRqUnNV">
<img alt="Static Badge" src="https://img.shields.io/badge/Leaderboard-Online-blue">
</a>
</p>




*PCA-Bench is a benchmark for evaluating and locating errors in Multimodal LLMs on embodied decision-making tasks, with a focus on perception, cognition, and action.*


## Release
- [2024.02.15] [PCA-Bench-V1](https://github.com/pkunlp-icler/PCA-EVAL) is released. The open- and closed-track data are available on [Hugging Face](https://huggingface.co/datasets/PCA-Bench/PCA-Bench-V1), and an online [leaderboard](https://docs.qq.com/sheet/DVUd4WUpGRHRqUnNV) is now accepting submissions.
- [2023.12.15] [PCA-EVAL](https://arxiv.org/abs/2310.02071) is accepted to the Foundation Models for Decision Making Workshop @ NeurIPS 2023. The PCA-Evaluation tool is released on GitHub.

## Leaderboard
[Leaderboard with Full Metrics](https://docs.qq.com/sheet/DVUd4WUpGRHRqUnNV)



## Submit Results

📢 For closed-track evaluation and PCA-Evaluation, please follow [this file](https://github.com/pkunlp-icler/PCA-EVAL/blob/main/pca-eval/results/chatgpt_holmes_outputs/Autonomous%20Driving.json) to organize your model output. Submit **six JSON files** (one per domain and track), along with your **model name** and **organization**, to us via [email](mailto:leo.liang.chen@stu.pku.edu.cn). Please use the dataset's provided prompt as the default input for a fair comparison.

We will send the PCA-Eval results of your model to you and update the leaderboard.

We provide sample code for producing the six JSON files; you only need to plug in your model's inference code:
```python
# Sample code for PCA-Eval
from datasets import load_dataset
from tqdm import tqdm
import json
import os

def YOUR_INFERENCE_CODE(prompt, image):
    """Simple single-round multimodal conversation call; replace the body with your model's inference."""
    response = YOUR_MODEL.inference(prompt, image)  # placeholder: call your own model here
    return response

output_path = "./Results-DIR-PATH/"
os.mkdir(output_path)

dataset_ad = load_dataset("PCA-Bench/PCA-Bench-V1", "Autonomous Driving")
dataset_dr = load_dataset("PCA-Bench/PCA-Bench-V1", "Domestic Robot")
dataset_og = load_dataset("PCA-Bench/PCA-Bench-V1", "Open-World Game")

test_dataset_dict = {"Autonomous-Driving":dataset_ad,"Domestic-Robot":dataset_dr,"Open-World-Game":dataset_og}
test_split = ["test_closed","test_open"]
test_domain = list(test_dataset_dict.keys())

for domain in test_domain:
    for split in test_split:
        print("testing on %s:%s" % (domain, split))

        prediction_results = []
        output_filename = output_path + "%s-%s.json" % (domain, split)
        prompts = test_dataset_dict[domain][split]['question_prompt']
        images = test_dataset_dict[domain][split]['image']

        for prompt_id in tqdm(range(len(prompts))):
            user_inputs = prompts[prompt_id]  # do not change the prompts, for fair comparison
            index = prompt_id
            image = images[prompt_id]

            outputs = YOUR_INFERENCE_CODE(user_inputs, image)

            prediction_results.append({
                'prompt': user_inputs,
                'model_output': outputs,
                'index': index,
            })

        with open(output_filename, 'w') as f:
            json.dump(prediction_results, f, indent=4)

# submit the 6 json files in the output_path to our email
```
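Before emailing the files, it may help to verify that each one is a JSON list of records with `prompt`, `model_output`, and `index` fields, i.e. exactly what the sample code above writes. A minimal check sketch (the path below assumes the output directory and file naming from the sample code; adjust it to your setup):

```python
# Minimal sanity check for one result file before submission.
import json

# Path assumes the output directory and naming used in the sample code above.
with open("./Results-DIR-PATH/Autonomous-Driving-test_closed.json") as f:
    predictions = json.load(f)

assert isinstance(predictions, list)
for item in predictions:
    # each record should carry the prompt, the raw model output, and its index
    assert {"prompt", "model_output", "index"} <= set(item)

print("%d predictions look well-formed" % len(predictions))
```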

You can also compute the multiple-choice accuracy locally as a quick comparison metric in your own experiments. Note, however, that the online leaderboard ranks models only by the average action score and the Genuine PCA score.
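As a rough illustration, assuming your prompts ask the model to answer with the index of the chosen action, a local accuracy computation could look like the sketch below; `extract_choice_index` is a hypothetical parser that you should adapt to your model's actual output format, and `model_outputs` is the list of raw outputs in dataset order (for example, read back from the JSON files above):

```python
# Rough local multiple-choice accuracy; not the official PCA-Eval metric.
import re
from datasets import load_dataset

def extract_choice_index(model_output):
    """Hypothetical parser: take the first integer in the output as the chosen action index."""
    match = re.search(r"\d+", model_output)
    return int(match.group()) if match else -1

def multiple_choice_accuracy(model_outputs, gold_indices):
    correct = sum(
        extract_choice_index(out) == gold
        for out, gold in zip(model_outputs, gold_indices)
    )
    return correct / len(gold_indices)

# e.g. for the Autonomous Driving closed track:
dataset = load_dataset("PCA-Bench/PCA-Bench-V1", "Autonomous Driving")["test_closed"]
# accuracy = multiple_choice_accuracy(model_outputs, dataset["answer_index"])
```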


For more information, refer to the official [GitHub repo](https://github.com/pkunlp-icler/PCA-EVAL).