---
configs:
  - config_name: General Knowledge
    data_files:
      - split: test
        path: data/HAERAE-Bench-v1-KGK.csv
  - config_name: History
    data_files:
      - split: test
        path: data/HAERAE-Bench-v1-HI.csv
  - config_name: Loan Words
    data_files:
      - split: test
        path: data/HAERAE-Bench-v1-LW.csv
  - config_name: Reading Comprehension
    data_files:
      - split: test
        path: data/HAERAE-Bench-v1-RC.csv
  - config_name: Rare Words
    data_files:
      - split: test
        path: data/HAERAE-Bench-v1-RW.csv
  - config_name: Standard Nomenclature
    data_files:
      - split: test
        path: data/HAERAE-Bench-v1-SN.csv
---

# HAE_RAE_BENCH 1.0

HAE_RAE_BENCH 1.0 is the original implementation of the dataset from the HAE-RAE Bench paper. The benchmark is a collection of 1,538 instances across six tasks: standard_nomenclature, loan_word, rare_word, general_knowledge, history, and reading_comprehension. To reproduce the results from the paper, see the evaluation code below.
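Each task is exposed as a separate config (see the `configs` block in the metadata above). Below is a minimal sketch of loading one task with the `datasets` library; the repo id `HAERAE-HUB/HAE_RAE_BENCH_1.0` is an assumption based on where this card is hosted.

```python
# Minimal sketch: load a single HAE-RAE Bench task with the `datasets` library.
# The config names match the `configs` block in the metadata above; the repo id
# below is an assumption, not confirmed by this card.
from datasets import load_dataset

ds = load_dataset("HAERAE-HUB/HAE_RAE_BENCH_1.0", "General Knowledge", split="test")
print(len(ds))  # expected: 176 instances (see the table below)
print(ds[0])    # one multiple-choice example
```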

## Dataset Overview

| Task | Instances | Version | Explanation |
|---|---|---|---|
| standard_nomenclature | 153 | v1.0 | Multiple-choice questions about Korean standard nomenclatures from NIKL. |
| loan_word | 169 | v1.0 | Multiple-choice questions about Korean loan words from NIKL. |
| rare_word | 405 | v1.0 | Multiple-choice questions about rare Korean words from NIKL. |
| general_knowledge | 176 | v1.0 | Multiple-choice questions on Korean cultural knowledge. |
| history | 188 | v1.0 | Multiple-choice questions on Korean history. |
| reading_comprehension | 447 | v1.0 | Multiple-choice questions on Korean reading comprehension from the Korean Language Ability Test (KLAT). |
| **Total** | **1,538** | | |
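Since every task ships as a plain CSV (paths are listed in the metadata above), the files can also be read directly with pandas. A sketch, again assuming the `HAERAE-HUB/HAE_RAE_BENCH_1.0` repo id; the `hf://` scheme requires `huggingface_hub` to be installed:

```python
# Sketch: read one task's CSV straight from the Hub with pandas.
# The path comes from the `configs` metadata; the repo id is assumed as above.
import pandas as pd

df = pd.read_csv(
    "hf://datasets/HAERAE-HUB/HAE_RAE_BENCH_1.0/data/HAERAE-Bench-v1-RC.csv"
)
print(df.shape)  # expected: 447 rows for reading_comprehension
```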

## Evaluation Code

```bash
# Clone the fork of lm-evaluation-harness that registers the HRB tasks,
# then install it along with its multilingual extras.
git clone https://github.com/guijinSON/lm-evaluation-harness.git
pip install sentencepiece
cd lm-evaluation-harness
pip install -e .
pip install -e ".[multilingual]"
pip install huggingface_hub
```

```bash
lm_eval --model hf \
    --model_args pretrained=EleutherAI/polyglot-ko-12.8b \
    --tasks HRB \
    --device cuda:0 \
    --batch_size auto:4 \
    --write_out
```
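For intuition, the harness scores multiple-choice tasks like these by comparing answer options under the model's log-likelihood. The snippet below is an illustrative sketch of that scoring scheme, not the harness code itself; the smaller `EleutherAI/polyglot-ko-1.3b` checkpoint is used only to keep the example light, and the Korean toy question is made up, not taken from the dataset.

```python
# Illustrative sketch (an assumption about the scoring scheme, not harness code):
# pick the answer option with the highest log-likelihood under the model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL = "EleutherAI/polyglot-ko-1.3b"  # smaller sibling of the 12.8b model above
tok = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(MODEL)
model.eval()

def option_loglik(prompt: str, option: str) -> float:
    """Log-likelihood of `option` conditioned on `prompt` (teacher forcing)."""
    prompt_len = tok(prompt, return_tensors="pt").input_ids.shape[1]
    full = tok(prompt + option, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(full).logits
    logprobs = torch.log_softmax(logits[0, :-1], dim=-1)  # predicts tokens 1..T-1
    targets = full[0, 1:]
    per_token = logprobs[torch.arange(targets.shape[0]), targets]
    return per_token[-(full.shape[1] - prompt_len):].sum().item()

# Hypothetical toy item, not from the dataset.
prompt = "질문: 대한민국의 수도는 어디인가?\n답: "
options = ["서울", "부산", "대전"]
print(max(options, key=lambda o: option_loglik(prompt, o)))  # expected: 서울
```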

We have observed minor differences from the results reported in the original paper; we postulate that these stem mostly from updates to the LM-Eval-Harness repository.

## Point of Contact

For any questions, contact us via the following email:

spthsrbwls123@yonsei.ac.kr