  - split: test
    path: data/HAERAE-Bench-v1-SN.csv
---

HAE_RAE_BENCH 1.0 is the original implementation of the dataset from the paper: [HAE-RAE BENCH paper](https://arxiv.org/abs/2309.02706).
The benchmark is a collection of 1,538 instances across six tasks: standard_nomenclature, loan_word, rare_word, general_knowledge, history, and reading_comprehension.
To replicate the experiments from the paper, see the Evaluation Code section below.

### Dataset Overview

| Task                  | Instances | Version | Explanation                                                                                              |
|-----------------------|-----------|---------|----------------------------------------------------------------------------------------------------------|
| standard_nomenclature | 153       | v1.0    | Multiple-choice questions about Korean standard nomenclature from NIKL.                                  |
| loan_word             | 169       | v1.0    | Multiple-choice questions about Korean loan words from NIKL.                                             |
| rare_word             | 405       | v1.0    | Multiple-choice questions about rare Korean words from NIKL.                                             |
| general_knowledge     | 176       | v1.0    | Multiple-choice questions on Korean cultural knowledge.                                                  |
| history               | 188       | v1.0    | Multiple-choice questions on Korean history.                                                             |
| reading_comprehension | 447       | v1.0    | Multiple-choice questions on Korean reading comprehension from the Korean Language Ability Test (KLAT).  |
| **Total**             | **1,538** |         |                                                                                                          |

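To inspect the data directly, the files can be loaded with the `datasets` library. The snippet below is a minimal sketch: the repository ID (`HAERAE-HUB/HAE_RAE_BENCH_1.0`) and the per-task config name are assumptions for illustration and should be replaced with the values listed in this card's YAML header.

```
from datasets import load_dataset

# Minimal loading sketch. The repo ID and config name are assumptions;
# adjust them to match the configs defined in the dataset card.
ds = load_dataset("HAERAE-HUB/HAE_RAE_BENCH_1.0", "standard_nomenclature", split="test")

print(ds)      # number of rows and column names
print(ds[0])   # a single multiple-choice instance
```
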
### Evaluation Code
```
# Clone the HAE-RAE fork of lm-evaluation-harness and install it (with multilingual extras).
!git clone https://github.com/guijinSON/lm-evaluation-harness.git
!pip install sentencepiece
%cd lm-evaluation-harness
!pip install -e .
!pip install -e ".[multilingual]"
!pip install huggingface_hub

# Evaluate a model on the HAE-RAE Bench (HRB) tasks.
!lm_eval --model hf \
    --model_args pretrained=EleutherAI/polyglot-ko-12.8b \
    --tasks HRB \
    --device cuda:0 \
    --batch_size auto:4 \
    --write_out
```
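
The same evaluation can also be run from Python. This is a sketch that assumes the fork exposes the upstream lm-evaluation-harness `simple_evaluate` entry point (0.4-style API); verify against the fork before relying on it.

```
import lm_eval

# Sketch of a programmatic run, mirroring the CLI command above.
# Assumes the fork tracks the upstream `lm_eval.simple_evaluate` API.
results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=EleutherAI/polyglot-ko-12.8b",
    tasks=["HRB"],
    device="cuda:0",
    batch_size="auto",
)
print(results["results"])  # per-task metrics
```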

### Point of Contact
For any questions, contact us via the following email:
```
spthsrbwls123@yonsei.ac.kr
```