paulgavrikov committed
Commit 400e313 · verified · 1 Parent(s): c4435e5

readme: updating docs

Files changed (1):
  1. README.md +28 -1
README.md CHANGED
@@ -29,6 +29,7 @@ configs:
   - split: test
     path: data/test-*
 license: cc-by-sa-4.0
+arxiv: 2509.25339
 task_categories:
 - visual-question-answering
 language:
@@ -41,6 +42,13 @@ pretty_name: VisualOverload
 <p align="center">
 <img src="https://github.com/paulgavrikov/visualoverload/blob/main/assets/logo.jpg?raw=true" width="400">
 </p>
+
+<p align="center">
+[<a href="http://arxiv.org/abs/2509.25339">📚 Paper</a>]
+[<a href="https://huggingface.co/spaces/paulgavrikov/visualoverload-submit">🏆 Leaderboard</a>]
+[<a href="https://huggingface.co/spaces/paulgavrikov/visualoverload-submit">🎯 Online Evaluator</a>]
+</p>
+
 Is basic visual understanding really solved in state-of-the-art VLMs? We present VisualOverload, a slightly different visual question answering (VQA) benchmark comprising 2,720 question–answer pairs with privately held ground-truth responses. Unlike prior VQA datasets that typically focus on near-global image understanding, VisualOverload challenges models to perform simple, knowledge-free vision tasks in densely populated (or overloaded) scenes. Our dataset consists of high-resolution scans of public-domain paintings that are populated with multiple figures, actions, and unfolding subplots set against elaborately detailed backdrops. We manually annotated these images with questions across six task categories to probe for a thorough understanding of the scene. We hypothesize that current benchmarks overestimate the performance of VLMs and that encoding and reasoning over details remains challenging for them, especially when they are confronted with densely populated scenes. Indeed, we observe that even the best of the 37 tested models (o3) achieves only 19.6% accuracy on our hardest test split and 69.5% accuracy over all questions. Beyond a thorough evaluation, we complement our benchmark with an error analysis that reveals multiple failure modes, including a lack of counting skills, failures in OCR, and striking logical inconsistencies under complex tasks. Altogether, VisualOverload exposes a critical gap in current vision models and offers a crucial resource for the community to develop better models.
 
 
@@ -82,4 +90,23 @@ Example:
 ]
 ```
 ## 🏆 Submit to the leaderboard
-We welcome all submissions of models *or* methods (including prompting-based approaches) to our dataset. Please create a [GitHub issue](https://github.com/paulgavrikov/visualoverload/issues) following the template and include your predictions as JSON.
+We welcome all submissions of models *or* methods (including prompting-based approaches) to our dataset. Please create a [GitHub issue](https://github.com/paulgavrikov/visualoverload/issues) following the template and include your predictions as JSON.
+
+
+## 📝 License
+
+Our dataset is licensed under CC BY-SA 4.0. All images are based on artwork that is royalty-free public domain (CC0).
+
+## 📚 Citation
+
+```latex
+@misc{gavrikov2025visualoverload,
+      title={VisualOverload: Probing Visual Understanding of VLMs in Really Dense Scenes},
+      author={Paul Gavrikov and Wei Lin and M. Jehanzeb Mirza and Soumya Jahagirdar and Muhammad Huzaifa and Sivan Doveh and Serena Yeung-Levy and James Glass and Hilde Kuehne},
+      year={2025},
+      eprint={2509.25339},
+      archivePrefix={arXiv},
+      primaryClass={cs.CV},
+      url={https://arxiv.org/abs/2509.25339},
+}
+```
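
The submission section added by this commit describes the workflow only in prose (run your model on the test split, then attach predictions as JSON to a GitHub issue). Below is a minimal Python sketch of that workflow, assuming the Hugging Face `datasets` library. The dataset id `paulgavrikov/visualoverload`, the field name `question_id`, and the output keys are illustrative assumptions, not the official schema; the `Example:` block in the README and the issue template define the actual format.

```python
import json

from datasets import load_dataset  # pip install datasets


def answer_question(example: dict) -> str:
    """Placeholder: swap in your VLM or prompting pipeline here."""
    return "unknown"


# NOTE: the dataset id and the "question_id" field are illustrative guesses;
# consult the dataset card for the real identifiers.
dataset = load_dataset("paulgavrikov/visualoverload", split="test")

predictions = []
for example in dataset:
    predictions.append({
        "question_id": example.get("question_id", ""),  # assumed identifier field
        "answer": answer_question(example),
    })

# Attach this file to a GitHub issue following the submission template.
with open("predictions.json", "w") as f:
    json.dump(predictions, f, indent=2)
```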