---
dataset_info:
  features:
  - name: pid
    dtype: int64
  - name: question
    dtype: string
  - name: decoded_image
    dtype: image
  - name: image
    dtype: string
  - name: answer
    dtype: string
  - name: task
    dtype: string
  - name: category
    dtype: string
  - name: complexity
    dtype: int64
  splits:
  - name: GRAB
    num_bytes: 466596459.9
    num_examples: 2170
  download_size: 406793109
  dataset_size: 466596459.9
configs:
- config_name: default
  data_files:
  - split: GRAB
    path: data/GRAB-*
license: mit
---
# GRAB: A Challenging GRaph Analysis Benchmark for Large Multimodal Models
## Dataset Description
- **Homepage:** [https://grab-benchmark.github.io](https://grab-benchmark.github.io)
- **Paper:** [GRAB: A Challenging GRaph Analysis Benchmark for Large Multimodal Models](https://arxiv.org/abs/2408.11817)
- **Repository:** [GRAB](https://github.com/jonathan-roberts1/GRAB)
- **Leaderboard:** [https://grab-benchmark.github.io](https://grab-benchmark.github.io)
### Dataset Summary
Large multimodal models (LMMs) have exhibited proficiencies across many visual tasks. Although numerous benchmarks exist to evaluate model performance, they increasingly have insufficient headroom and are **unfit to evaluate the next generation of frontier LMMs**.
To overcome this, we present **GRAB**, a challenging benchmark focused on the tasks **human analysts** might typically perform when interpreting figures. Such tasks include estimating the means, intercepts, or correlations of functions and data series, and performing transforms.
We evaluate a suite of **20 LMMs** on GRAB, finding it to be a challenging benchmark, with the current best model scoring just **21.7%**.
### Example usage
```python
from datasets import load_dataset
# load dataset
grab_dataset = load_dataset("jonathan-roberts1/GRAB", split='GRAB')
"""
Dataset({
features: ['pid', 'question', 'decoded_image', 'image', 'answer', 'task', 'category', 'complexity'],
num_rows: 2170
})
"""
# query individual questions
grab_dataset[40] # e.g., the 41st element
"""
{'pid': 40, 'question': 'What is the value of the y-intercept of the function? Give your answer as an integer.',
'decoded_image': <PIL.PngImagePlugin.PngImageFile image mode=RGBA size=5836x4842 at 0x12288EA60>,
'image': 'images/40.png', 'answer': '1', 'task': 'properties', 'category': 'Intercepts and Gradients',
'complexity': 0}
"""
question_40 = grab_dataset[40]['question'] # question
answer_40 = grab_dataset[40]['answer'] # ground truth answer
pil_image_40 = grab_dataset[40]['decoded_image'] # decoded PIL image
```
Note: the 'image' feature contains filepaths relative to this repository's `images` directory, available as [images.zip](https://huggingface.co/datasets/jonathan-roberts1/GRAB/resolve/main/images.zip).
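If you prefer the raw image files over the `decoded_image` feature, the following is a minimal sketch (not part of the official tooling) of one way to fetch the archive and resolve those filepaths; it assumes the archive extracts to an `images/` folder matching the stored paths.
```python
from zipfile import ZipFile

from datasets import load_dataset
from huggingface_hub import hf_hub_download
from PIL import Image

grab_dataset = load_dataset("jonathan-roberts1/GRAB", split="GRAB")

# download and extract images.zip from this dataset repository
zip_path = hf_hub_download(
    repo_id="jonathan-roberts1/GRAB",
    filename="images.zip",
    repo_type="dataset",
)
with ZipFile(zip_path) as zf:
    zf.extractall("grab_images")  # assumes the archive contains an 'images/' folder

# resolve a sample's 'image' filepath against the extracted directory
sample = grab_dataset[40]
pil_image = Image.open(f"grab_images/{sample['image']}")  # e.g. grab_images/images/40.png
```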
Please visit our [GitHub repository](https://github.com/jonathan-roberts1/GRAB) for example inference code.
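For quick experimentation, a simplified evaluation loop might look like the sketch below. `query_model` is a hypothetical placeholder for your own LMM call, and the naive exact-match scoring is only an approximation of the official evaluation in the GitHub repository.
```python
from datasets import load_dataset

grab_dataset = load_dataset("jonathan-roberts1/GRAB", split="GRAB")

def query_model(image, question):
    # hypothetical placeholder: call your LMM here and return its answer as a string
    raise NotImplementedError

correct = 0
for sample in grab_dataset:
    prediction = query_model(sample["decoded_image"], sample["question"])
    # naive exact-match scoring; see the GitHub repository for the official evaluation
    correct += prediction.strip() == sample["answer"].strip()

print(f"Accuracy: {correct / len(grab_dataset):.2%}")
```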
### Dataset Curators
This dataset was curated by Jonathan Roberts, Kai Han, and Samuel Albanie.
### Citation Information
```
@article{roberts2024grab,
title={GRAB: A Challenging GRaph Analysis Benchmark for Large Multimodal Models},
author={Roberts, Jonathan and Han, Kai and Albanie, Samuel},
journal={arXiv preprint arXiv:2408.11817},
year={2024}
}
```