Added paper link to README.md
README.md CHANGED
@@ -113,7 +113,7 @@ configs:
 ---
 # VisOnlyQA
 
-
+VisOnlyQA is a dataset proposed in the paper "[VisOnlyQA: Large Vision Language Models Still Struggle with Visual Perception of Geometric Information](https://arxiv.org/abs/2412.00947)".
 
 VisOnlyQA is designed to evaluate the visual perception capability of large vision language models (LVLMs) on geometric information of scientific figures. The evaluation set includes 1,200 multiple choice questions in 12 visual perception tasks on 4 categories of scientific figures. We also provide a training dataset consisting of 70k instances.
 
@@ -132,6 +132,7 @@ VisOnlyQA is designed to evaluate the visual perception capability of large visi
     title={VisOnlyQA: Large Vision Language Models Still Struggle with Visual Perception of Geometric Information},
     author={Ryo Kamoi and Yusen Zhang and Sarkar Snigdha Sarathi Das and Ranran Haoran Zhang and Rui Zhang},
     year={2024},
+    journal={arXiv preprint arXiv:2412.00947}
 }
 ```
 
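Since the README documents a Hugging Face dataset (the first hunk sits just below its `configs:` section), a minimal sketch of loading it with the `datasets` library is shown below. The repository id is an assumption for illustration only; the actual id, configs, and splits should be taken from the dataset card.

```python
# Minimal sketch: load VisOnlyQA with the Hugging Face `datasets` library.
# NOTE: the repository id below is hypothetical; replace it with the id shown
# on the dataset card, and pick the config/split you need.
from datasets import load_dataset

dataset = load_dataset("ryokamoi/VisOnlyQA")  # hypothetical repository id

# Inspect the available splits and one multiple-choice visual perception example.
print(dataset)
first_split = next(iter(dataset))
print(dataset[first_split][0])
```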