---
license: cc-by-nc-4.0
---
# K-DTCBench

We introduce **K-DTCBench**, a newly developed Korean benchmark featuring both computer-generated and handwritten documents, tables, and charts.
It consists of 80 questions for each of the three image types, with two questions per image, for a total of 240 questions.
The benchmark is designed to evaluate whether vision-language models can process images in different formats and generalize across diverse domains.
All images contain made-up values and statements created for evaluation purposes only. To build K-DTCBench, we either scanned handwritten documents, tables, and charts or generated digital versions with the matplotlib library.
Digital and handwritten images appear in equal proportions, each making up 50% of the benchmark.

For more details, please refer to the [VARCO-VISION technical report (coming soon)]().

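The dataset is distributed in parquet format and can be browsed with the Hugging Face `datasets` library. The snippet below is a minimal sketch; the repository id, split name, and `category` column are assumptions and should be adjusted to the actual dataset layout.

```python
from collections import Counter

from datasets import load_dataset

# Both the repository id and the split/column names below are assumptions;
# replace them with the values from this dataset's actual configuration.
REPO_ID = "<org>/K-DTCBench"  # hypothetical Hub id
SPLIT = "test"                # assumed name of the single evaluation split

dataset = load_dataset(REPO_ID, split=SPLIT)

# 240 questions in total: 80 per image type, two per image.
print(len(dataset))
print(Counter(row["category"] for row in dataset))  # assumed 'category' column
```

The table below shows one sample from each category.
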
<table>
<tr>
<th>Category</th>
<th>Image</th>
<th>K-DTCBench</th>
</tr>
<tr>
<td align="center">document</td>
<td width=350><img src="https://cdn-uploads.huggingface.co/production/uploads/624ceaa38746b2f5773c2d1c/Ipi4HR73P-PDC5XcgP3WF.png"></td>
<td>
<strong>question:</strong> Which of the following is not one of the main points of the report?
<br>
<strong>A:</strong> Expanding safety infrastructure
<br>
<strong>B:</strong> Establishing a disaster and accident prevention system
<br>
<strong>C:</strong> Strengthening citizen safety education
<br>
<strong>D:</strong> Improving the emergency response system
</td>
</tr>
<tr>
<td align="center">table</td>
<td width=350><img src="https://cdn-uploads.huggingface.co/production/uploads/624ceaa38746b2f5773c2d1c/dz_FuPnpZ5P4P3LEB5PZ0.png"></td>
<td>
<strong>question:</strong> What is the score for the infrastructure development item?
<br>
<strong>A:</strong> 4
<br>
<strong>B:</strong> 6
<br>
<strong>C:</strong> 8
<br>
<strong>D:</strong> 10
</td>
</tr>
<tr>
<td align="center">chart</td>
<td width=350><img src="https://cdn-uploads.huggingface.co/production/uploads/624ceaa38746b2f5773c2d1c/IbNMPPgd974SbCAsz6zIS.png"></td>
<td>
<strong>question:</strong> What is the second most preferred after-work activity among office workers?
<br>
<strong>A:</strong> Exercise
<br>
<strong>B:</strong> Leisure activities
<br>
<strong>C:</strong> Self-development
<br>
<strong>D:</strong> Rest
</td>
</tr>
</table>
<br>

## Inference Prompt
```
<image>
{question}
Options: A: {A}, B: {B}, C: {C}, D: {D}
주어진 선택지 중 해당 옵션의 문자로 직접 답하세요.
```
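
The final Korean line asks the model to answer directly with the letter of the chosen option. A minimal sketch of filling the template for one sample follows; the field names (`question`, `A`–`D`) are assumptions about the dataset schema.

```python
PROMPT_TEMPLATE = (
    "<image>\n"
    "{question}\n"
    "Options: A: {A}, B: {B}, C: {C}, D: {D}\n"
    "주어진 선택지 중 해당 옵션의 문자로 직접 답하세요."
)


def build_prompt(sample: dict) -> str:
    """Fill the inference template from one sample (field names are assumed)."""
    return PROMPT_TEMPLATE.format(
        question=sample["question"],
        A=sample["A"], B=sample["B"], C=sample["C"], D=sample["D"],
    )


example = {
    "question": "What is the score for the infrastructure development item?",
    "A": "4", "B": "6", "C": "8", "D": "10",
}
print(build_prompt(example))
```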

<br>

## Results
Below are the evaluation results of various vision-language models, including [VARCO-VISION-14B](), on K-DTCBench.

| | VARCO-VISION-14B | Pangea-7B | Pixtral-12B | Molmo-7B-D | Qwen2-VL-7B-Instruct | LLaVA-One-Vision-7B |
| :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| K-DTCBench | **84.58** | 48.33 | 27.50 | 45.83 | 75.00 | 52.91 |
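
Scores are percentages, higher is better. The scoring protocol is not spelled out in this card; the sketch below is only an illustration of a common convention for multiple-choice benchmarks (take the first option letter in the model's reply and compute exact-match accuracy), not the official evaluation code.

```python
import re


def extract_choice(reply: str) -> str | None:
    """Return the first standalone option letter (A-D) found in a model reply."""
    match = re.search(r"\b([ABCD])\b", reply.strip().upper())
    return match.group(1) if match else None


def accuracy(replies: list[str], gold_letters: list[str]) -> float:
    """Exact-match accuracy over option letters, as a percentage."""
    correct = sum(
        extract_choice(reply) == gold for reply, gold in zip(replies, gold_letters)
    )
    return 100.0 * correct / len(gold_letters)


print(accuracy(["B", "The answer is C.", "d"], ["B", "C", "A"]))  # ~66.7
```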

<br>

## Citation
If you use K-DTCBench in your research, please cite the following (BibTeX will be added soon):
```
```