jonathan-roberts1 committed e241689 (parent: 44daf70): Update README.md
- **Leaderboard** [https://grab-benchmark.github.io](https://grab-benchmark.github.io)

### Dataset Summary

Large multimodal models (LMMs) have exhibited proficiencies across many visual tasks. Although numerous benchmarks exist to evaluate model performance, they increasingly have insufficient headroom and are **unfit to evaluate the next generation of frontier LMMs**.

To overcome this, we present **GRAB**, a challenging benchmark focused on the tasks **human analysts** might typically perform when interpreting figures. Such tasks include estimating the mean, intercepts, or correlations of functions and data series, and performing transforms.
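To make the task types concrete, here is a minimal synthetic sketch of the kinds of quantities GRAB questions target (the data below is illustrative only, not drawn from the dataset):

```python
# Illustrative only: estimate the mean, intercept, and correlation of a
# plotted data series, as a GRAB-style analyst task. Synthetic data, not GRAB.
import numpy as np

x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = 2.0 * x + 1.0  # points lying on the line y = 2x + 1

slope, intercept = np.polyfit(x, y, 1)  # gradient and y-intercept of best-fit line
r = np.corrcoef(x, y)[0, 1]            # Pearson correlation of the series
mean_y = y.mean()                       # mean of the series

print(round(slope, 2), round(intercept, 2), round(r, 2), round(mean_y, 2))
# → 2.0 1.0 1.0 5.0
```

GRAB poses such questions over rendered figures rather than raw arrays, so models must read these quantities off the plot itself.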

We evaluate a suite of **20 LMMs** on GRAB, finding it to be a challenging benchmark, with the current best model scoring just **21.7%**.

### Example usage

```python