---
configs:
- config_name: General Knowledge
data_files:
- split: test
path: data/HAERAE-Bench-v1-KGK.csv
- config_name: History
data_files:
- split: test
path: data/HAERAE-Bench-v1-HI.csv
- config_name: Loan Words
data_files:
- split: test
path: data/HAERAE-Bench-v1-LW.csv
- config_name: Reading Comprehension
data_files:
- split: test
path: data/HAERAE-Bench-v1-RC.csv
- config_name: Rare Words
data_files:
- split: test
path: data/HAERAE-Bench-v1-RW.csv
- config_name: Standard Nomenclature
data_files:
- split: test
path: data/HAERAE-Bench-v1-SN.csv
---
HAE_RAE_BENCH 1.0 is the original implementation of the dataset from the paper: [HAE-RAE BENCH paper](https://arxiv.org/abs/2309.02706).
The benchmark is a collection of 1,538 instances across 6 tasks: standard_nomenclature, loan_word, rare_word, general_knowledge, history, and reading_comprehension.
To replicate the studies from the paper, see below.
### Dataset Overview
| Task | Instances | Version | Explanation |
|-----------------------------|-----------|---------|---------------------------------------------------------------------|
| standard_nomenclature | 153 | v1.0 | Multiple-choice questions about Korean standard nomenclatures from NIKL. |
| loan_word | 169 | v1.0 | Multiple-choice questions about Korean loan words from NIKL. |
| rare_word | 405 | v1.0 | Multiple-choice questions about rare Korean words from NIKL. |
| general_knowledge | 176 | v1.0 | Multiple-choice questions on Korean cultural knowledge. |
| history | 188 | v1.0 | Multiple-choice questions on Korean history. |
| reading_comprehension | 447 | v1.0 | Multiple-choice questions on Korean reading comprehension from the Korean Language Ability Test (KLAT). |
| **Total**                   | **1,538** |         |                                                                       |
### Evaluation Code
```bash
git clone https://github.com/guijinSON/lm-evaluation-harness.git
cd lm-evaluation-harness
pip install sentencepiece
pip install -e .
pip install -e ".[multilingual]"
pip install huggingface_hub
lm_eval --model hf \
    --model_args pretrained=EleutherAI/polyglot-ko-12.8b \
    --tasks HRB \
    --device cuda:0 \
    --batch_size auto:4 \
    --write_out
```
*We've observed minor score differences from the original paper; we postulate that these stem mostly from updates to the LM-Eval-Harness repository.*
### Point of Contact
For any questions, contact us via the following email:
```
spthsrbwls123@yonsei.ac.kr
```