---
dataset_info:
  features:
  - name: test_name
    dtype: string
  - name: question_number
    dtype: int64
  - name: context
    dtype: string
  - name: question
    dtype: string
  - name: gold
    dtype: int64
  - name: option#1
    dtype: string
  - name: option#2
    dtype: string
  - name: option#3
    dtype: string
  - name: option#4
    dtype: string
  - name: option#5
    dtype: string
  - name: Category
    dtype: string
  - name: Human_Peformance
    dtype: float64
  - name: __index_level_0__
    dtype: int64
  splits:
  - name: train
    num_bytes: 4220807
    num_examples: 936
  download_size: 1076028
  dataset_size: 4220807
---
# Dataset Card for "CSAT-QA"

## Dataset Summary
The field of Korean language processing is experiencing a surge in interest,
illustrated by the introduction of open-source models such as Polyglot-Ko and proprietary models like HyperCLOVA.
Yet as larger and more capable language models arrive at an accelerating pace, evaluation methods are not keeping up.
Recognizing this gap, we at HAE-RAE are dedicated to creating tailored benchmarks for the rigorous evaluation of these models.

CSAT-QA comprises 936 multiple-choice question answering (MCQA) questions, manually curated from
the Korean university entrance exam, the College Scholastic Ability Test (CSAT). For a detailed explanation of how CSAT-QA was created,
please see the [accompanying blog post](https://github.com/guijinSON/hae-rae/blob/main/blog/CSAT-QA.md);
for evaluation, see [LM-Eval-Harness](https://github.com/EleutherAI/lm-evaluation-harness) on GitHub.

## Evaluation Results

All scores below are accuracy (%). The category abbreviations denote the CSAT sections: WR (Writing), GR (Grammar), LI (Literature), and RCH, RCS, and RCSS (Reading Comprehension in Humanities, Science, and Social Science, respectively).

| Category | Polyglot-Ko-12.8B | GPT-3.5-16k | GPT-4     | Human_Performance |
|----------|----------------|-------------|-----------|-------------------|
| WR       | 0.09           | 9.09        | 45.45     | **54.0**          |
| GR       | 0.00           | 20.00       | 32.00     | **45.41**         |
| LI       | 21.62          | 35.14       | **59.46** | 54.38             |
| RCH      | 17.14          | 37.14       | **62.86** | 48.7              |
| RCS      | 10.81          | 27.03       | **64.86** | 39.93             |
| RCSS     | 11.9           | 30.95       | **71.43** | 44.54             |
| Average  | 10.26          | 26.56       | **56.01** | 47.8              |
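
As a rough illustration of how such numbers are produced, the sketch below computes per-category accuracy over the dataset. The `predict` function is a hypothetical stand-in for the model under test, and `gold` is assumed to be the 1-based index of the correct option, matching the `option#1`–`option#5` naming; the authoritative evaluation logic lives in LM-Eval-Harness.

```python
from collections import defaultdict
from datasets import load_dataset

def predict(context: str, question: str, options: list[str]) -> int:
    """Hypothetical model call returning the 1-based index of its chosen option."""
    raise NotImplementedError

dataset = load_dataset("EleutherAI/CSAT-QA", split="train")

correct, total = defaultdict(int), defaultdict(int)
for row in dataset:
    if row["Category"] is None:  # only part of the data carries category labels
        continue
    options = [row[f"option#{i}"] for i in range(1, 6)]
    choice = predict(row["context"], row["question"], options)
    total[row["Category"]] += 1
    correct[row["Category"]] += int(choice == row["gold"])

for category, n in total.items():
    print(f"{category}: {100 * correct[category] / n:.2f}%")
```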


## How to Use

CSAT-QA comes in two versions: the full set of 936 questions, and a condensed subset that includes human accuracy data. The full version can be loaded as follows:

```python
from datasets import load_dataset

# The card's metadata declares a single "train" split holding all 936 questions.
dataset = load_dataset("EleutherAI/CSAT-QA", split="train")
```
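
Each example carries the fields declared in the metadata above, so a quick way to sanity-check the data is to print one record:

```python
# Inspect a single record; field names follow the schema in the YAML header.
sample = dataset[0]
print(sample["question"])                            # question text
print([sample[f"option#{i}"] for i in range(1, 6)])  # the five answer choices
print(sample["gold"])                                # index of the correct option
```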

The condensed version, restricted to the questions that carry human accuracy data, can be obtained by filtering the full split:
```python
from datasets import load_dataset

# Load the full split, then keep only the rows annotated with a category,
# i.e. the questions that also include human accuracy data.
dataset = load_dataset("EleutherAI/CSAT-QA", split="train")
condensed = dataset.to_pandas().dropna(subset=["Category"])
```
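
When querying a model directly rather than through the harness, the fields combine naturally into an MCQA prompt. The template below is a minimal illustration; the prompt actually used for the reported scores is defined in LM-Eval-Harness and may differ.

```python
from datasets import load_dataset

def build_prompt(row: dict) -> str:
    """Format one CSAT-QA row as a zero-shot MCQA prompt (illustrative template only)."""
    options = "\n".join(f"{i}. {row[f'option#{i}']}" for i in range(1, 6))
    return f"{row['context']}\n\n{row['question']}\n\n{options}\n\nAnswer:"

dataset = load_dataset("EleutherAI/CSAT-QA", split="train")
print(build_prompt(dataset[0]))
```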

## License

The copyright of this material belongs to the Korea Institute for Curriculum and Evaluation (한국교육과정평가원), and it may be used for research purposes only.

