---
license: cc-by-nc-4.0
---
# K-DTCBench

We introduce **K-DTCBench**, a newly developed Korean benchmark featuring both computer-generated and handwritten documents, tables, and charts.
It contains 80 questions per image type, with two questions per image, for a total of 240 questions.
The benchmark is designed to evaluate whether vision-language models can process images in different formats and generalize across diverse domains.
All images were created with made-up values and statements for evaluation purposes only. To build K-DTCBench, we scanned handwritten documents, tables, and charts, or generated digital objects with the matplotlib library.
Digital and handwritten images appear in equal proportions, each constituting 50%.

For more details, please refer to the [VARCO-VISION technical report (coming soon)]().

<table>
<tr>
<th>Category</th>
<th>Image</th>
<th>K-DTCBench</th>
</tr>
<tr>
<td align="center">document</td>
<td width=350><img src="https://cdn-uploads.huggingface.co/production/uploads/624ceaa38746b2f5773c2d1c/Ipi4HR73P-PDC5XcgP3WF.png"></td>
<td>
<strong>question:</strong> 보고서의 주요 내용이 아닌 것은 무엇인가요? (Which of the following is not among the main points of the report?)
<br>
<strong>A:</strong> 안전 인프라 확충 (Expanding safety infrastructure)
<br>
<strong>B:</strong> 재난 및 사고 예방 체계 구축 (Establishing a disaster and accident prevention system)
<br>
<strong>C:</strong> 시민 안전 교육 강화 (Strengthening citizen safety education)
<br>
<strong>D:</strong> 긴급 대응 시스템 개선 (Improving the emergency response system)
</td>
</tr>
<tr>
<td align="center">table</td>
<td width=350><img src="https://cdn-uploads.huggingface.co/production/uploads/624ceaa38746b2f5773c2d1c/dz_FuPnpZ5P4P3LEB5PZ0.png"></td>
<td>
<strong>question:</strong> 인프라 구축 항목의 점수는 몇 점인가요? (What is the score of the infrastructure construction item?)
<br>
<strong>A:</strong> 4
<br>
<strong>B:</strong> 6
<br>
<strong>C:</strong> 8
<br>
<strong>D:</strong> 10
</td>
</tr>
<tr>
<td align="center">chart</td>
<td width=350><img src="https://cdn-uploads.huggingface.co/production/uploads/624ceaa38746b2f5773c2d1c/IbNMPPgd974SbCAsz6zIS.png"></td>
<td>
<strong>question:</strong> 직장인들이 퇴근 후 두 번째로 선호하는 활동은 무엇인가요? (What is the second most preferred after-work activity among office workers?)
<br>
<strong>A:</strong> 운동 (Exercise)
<br>
<strong>B:</strong> 여가활동 (Leisure activities)
<br>
<strong>C:</strong> 자기개발 (Self-improvement)
<br>
<strong>D:</strong> 휴식 (Rest)
</td>
</tr>
</table>
<br>

## Inference Prompt
```
<image>
{question}
Options: A: {A}, B: {B}, C: {C}, D: {D}
주어진 선택지 중 해당 옵션의 문자로 직접 답하세요.
```
(The final Korean line instructs the model to answer directly with the letter of the corresponding option.)

<br>

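Filling the template above can be sketched in Python. Note that the record keys used here (`question`, `A`–`D`) are assumed field names for illustration, not confirmed dataset column names.

```python
# Sketch: build the K-DTCBench inference prompt for a single record.
# The record keys ("question", "A".."D") are assumed field names.
PROMPT_TEMPLATE = (
    "<image>\n"
    "{question}\n"
    "Options: A: {A}, B: {B}, C: {C}, D: {D}\n"
    "주어진 선택지 중 해당 옵션의 문자로 직접 답하세요."
)

def build_prompt(record: dict) -> str:
    """Return the full text prompt for one benchmark question."""
    return PROMPT_TEMPLATE.format(
        question=record["question"],
        A=record["A"], B=record["B"], C=record["C"], D=record["D"],
    )
```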
## Results
Below are the evaluation results of various vision-language models, including [VARCO-VISION-14B](), on K-DTCBench.

| | VARCO-VISION-14B | Pangea-7B | Pixtral-12B | Molmo-7B-D | Qwen2-VL-7B-Instruct | LLaVA-One-Vision-7B |
| :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| K-DTCBench | **84.58** | 48.33 | 27.50 | 45.83 | 75.00 | 52.91 |

<br>

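The table reports a single score per model. Assuming it is percentage accuracy over the 240 multiple-choice questions, and that predictions and gold answers are single option letters, scoring can be sketched as:

```python
def accuracy(predictions: list[str], answers: list[str]) -> float:
    """Percentage of questions where the predicted option letter
    matches the gold letter (case-insensitive)."""
    assert len(predictions) == len(answers)
    correct = sum(
        p.strip().upper() == a.strip().upper()
        for p, a in zip(predictions, answers)
    )
    return 100.0 * correct / len(answers)
```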
## Citation
If you use K-DTCBench in your research, please cite the following (BibTeX will be updated soon):
```
```