(Dataset viewer preview: an `image` column (width ~3.84k px) and a `label` column with 7 classes: 1001_tagging, 5002_legoassemble, 0001_fencing, 4002_basketball, 3001_volleyball, 6029_badminton, 2001_tennis.)
# Seeing from Another Perspective: Evaluating Multi-View Understanding in MLLMs

## Dataset Card for All-Angles Bench

### Dataset Description
All-Angles Bench is a comprehensive benchmark of over 2,100 human-annotated multi-view question-answer (QA) pairs spanning 90 real-world scenes. Each scene is captured from multiple viewpoints, providing diverse perspectives and context for the associated questions.
### Dataset Sources
- EgoHumans - Egocentric multi-view human activity understanding dataset
- Ego-Exo4D - Large-scale egocentric and exocentric video dataset for multi-person interaction understanding
### Direct Usage

```python
from datasets import load_dataset

dataset = load_dataset("ch-chenyu/All-Angles-Bench")
```
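As a quick sanity check, you can inspect the splits and schema before using the data. A minimal sketch (the split name is deliberately not hard-coded, since it may vary):

```python
from datasets import load_dataset

dataset = load_dataset("ch-chenyu/All-Angles-Bench")

print(dataset)                        # available splits and their sizes
split = next(iter(dataset.values()))  # take the first split, whatever its name
print(split.features)                 # column names and types
print(split[0])                       # first example
```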
### Prepare Full Benchmark Data on Local Machine

- Set up Git LFS and clone the benchmark:

```bash
$ conda install git-lfs
$ git lfs install
$ git lfs clone https://huggingface.co/datasets/ch-chenyu/All-Angles-Bench
```
- Download the Ego-Exo4D dataset and extract the frames for the benchmark scenes:

We provide the image files for the EgoHumans dataset. For the Ego-Exo4D dataset, due to licensing restrictions, you will need to first sign the license agreement from the official Ego-Exo4D repository at https://ego4ddataset.com/egoexo-license/. After signing the license, you will receive an `Access ID` and an `Access Key` via email. Then follow the steps below to set up access:

```bash
$ pip install awscli
$ aws configure
```

When prompted, enter the following:

```
AWS Access Key ID [None]: <your Access ID>
AWS Secret Access Key [None]: <your Access Key>
Default region name [None]: us-west-2
Default output format [None]: json
```
Once configured, run the following to download the downscaled takes (`downscaled_takes/448`), and then use the preprocessing script to extract the corresponding images:

```bash
$ pip install ego4d --upgrade
$ egoexo -o All-Angles-Bench/ --parts downscaled_takes/448
$ python All-Angles-Bench/scripts/process_ego4d_exo.py --input All-Angles-Bench
```
- Transform JSON metadata into benchmark TSV format:

To convert the metadata from JSON format into a structured TSV format compatible with the benchmark evaluation scripts in VLMEvalKit, run:

```bash
$ python All-Angles-Bench/scripts/json2tsv_pair.py --input All-Angles-Bench/data.json
```
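For reference, a minimal sketch of the kind of flattening such a conversion performs. This is illustrative only: the authoritative conversion is `scripts/json2tsv_pair.py`, the field names follow the table in Dataset Structure below, and the exact TSV schema expected by VLMEvalKit may differ. It assumes `data.json` holds a JSON array of entries:

```python
import csv
import json

with open("All-Angles-Bench/data.json") as f:
    entries = json.load(f)

fields = ["index", "folder", "category", "pair_idx",
          "image_path", "question", "A", "B", "C",
          "answer", "sourced_dataset"]

with open("All-Angles-Bench/data.tsv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=fields, delimiter="\t")
    writer.writeheader()
    for entry in entries:
        row = {k: entry.get(k, "") for k in fields}
        # image_path is a list of paths; join it into a single TSV cell.
        row["image_path"] = ";".join(entry.get("image_path", []))
        writer.writerow(row)
```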
### Dataset Structure

The JSON data contains the following key-value pairs:

| Key | Type | Description |
|---|---|---|
| `index` | Integer | Unique identifier for the data entry (e.g. `1221`) |
| `folder` | String | Directory name where the scene is stored (e.g. `"05_volleyball"`) |
| `category` | String | Task category (e.g. `"counting"`) |
| `pair_idx` | String | Index of the corresponding paired question (if applicable) |
| `image_path` | List | Array of input image paths |
| `question` | String | Natural language query about the scene |
| `A`/`B`/`C` | String | Multiple-choice options |
| `answer` | String | Correct option label (e.g. `"B"`) |
| `sourced_dataset` | String | Source dataset name (e.g. `"EgoHumans"`) |
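To make the schema concrete, here is a hypothetical example of assembling one entry into a multiple-choice prompt. The field names come from the table above; the prompt format itself is illustrative, not the one used by VLMEvalKit:

```python
import json

with open("All-Angles-Bench/data.json") as f:
    entry = json.load(f)[0]  # first benchmark entry

# Build a simple multiple-choice prompt from one entry.
prompt = (
    f"{entry['question']}\n"
    f"A. {entry['A']}\nB. {entry['B']}\nC. {entry['C']}\n"
    "Answer with the letter of the correct option."
)
print("images:", entry["image_path"])  # the views to feed the model
print(prompt)
print("ground truth:", entry["answer"])
```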
### Citation

```bibtex
@article{yeh2025seeing,
  title={Seeing from Another Perspective: Evaluating Multi-View Understanding in MLLMs},
  author={Yeh, Chun-Hsiao and Wang, Chenyu and Tong, Shengbang and Cheng, Ta-Ying and Wang, Ruoyu and Chu, Tianzhe and Zhai, Yuexiang and Chen, Yubei and Gao, Shenghua and Ma, Yi},
  journal={arXiv preprint arXiv:2504.15280},
  year={2025}
}
```
### Acknowledgements

This benchmark builds on EgoHumans, Ego-Exo4D, and VLMEvalKit, which serve as foundations for our framework and code repository. Thanks for their wonderful work and data.