nayeon212 committed
Commit 0df4881 • 1 Parent(s): 26c261f

UPDATE: all files in datacard

Files changed (1)
  1. README.md +43 -5
README.md CHANGED
@@ -53,6 +53,42 @@ configs:
     path: "data/annotations_hf/US_data.json"
   - split: JB
     path: "data/annotations_hf/West_Java_data.json"
+- config_name: short-answer-questions
+  data_files:
+  - split: DZ
+    path: "data/questions/Algeria_questions.csv"
+  - split: AS
+    path: "data/questions/Assam_questions.csv"
+  - split: AZ
+    path: "data/questions/Azerbaijan_questions.csv"
+  - split: CN
+    path: "data/questions/China_questions.csv"
+  - split: ET
+    path: "data/questions/Ethiopia_questions.csv"
+  - split: GR
+    path: "data/questions/Greece_questions.csv"
+  - split: ID
+    path: "data/questions/Indonesia_questions.csv"
+  - split: IR
+    path: "data/questions/Iran_questions.csv"
+  - split: MX
+    path: "data/questions/Mexico_questions.csv"
+  - split: KP
+    path: "data/questions/North_Korea_questions.csv"
+  - split: NG
+    path: "data/questions/Northern_Nigeria_questions.csv"
+  - split: KR
+    path: "data/questions/South_Korea_questions.csv"
+  - split: ES
+    path: "data/questions/Spain_questions.csv"
+  - split: GB
+    path: "data/questions/UK_questions.csv"
+  - split: US
+    path: "data/questions/US_questions.csv"
+  - split: JB
+    path: "data/questions/West_Java_questions.csv"
+- config_name: multiple-choice-questions
+  data_files: "evaluation/mc_data/mc_questions_file.csv"
 ---
 # BLEnD
 
@@ -70,9 +106,11 @@ Furthermore, we find that LLMs perform better in their local languages for mid-t
 
 ## Dataset
 All the data samples for short-answer questions, including the human-annotated answers, can be found in the `data/` directory.
-Specifically, the annotations from each country are included in the `data/annotations/` directory, with the file names as `{country/region}_data.json`. Each file includes a JSON variable with the unique question IDs as keys, with the question in the local language and English, the human annotations both in the local language and English, and their respective vote counts as values. The same dataset for South Korea is shown below:
+Specifically, the annotations from each country are included in the `annotations/` split, and each country/region's data can be accessed by **country codes**.
+Each file includes a JSON variable with question IDs, questions in the local language and English, the human annotations both in the local language and English, and their respective vote counts as values. The same dataset for South Korea is shown below:
 ```JSON
-"Al-en-06": {
+[{
+    "ID": "Al-en-06",
     "question": "대한민국 학교 급식에서 흔히 볼 수 있는 음식은 무엇인가요?",
     "en_question": "What is a common school cafeteria food in your country?",
     "annotations": [
@@ -103,11 +141,10 @@ Specifically, the annotations from each country are included in the `data/annota
         "no-answer": 0,
         "not-applicable": 0
     }
-},
+}],
 ```
-We also include the prompts that we used for LLM evaluation in both local languages and English in the `data/prompts/` directory. Each file is named `{country/region}_prompts.csv`. For our final evaluation, we used the `inst-4` and `pers-3` prompts, but we also provide other possible prompts in each language for future work.
 
-The topics and source language for each question can be found in `questions`.
+The topics and source languages for each question can be found in the `short-answer-questions` split.
 Questions for each country in their local languages and English can be accessed by **country codes**. Each CSV file includes the question ID, topic, source language, and the question in English and in the local language (in the `Translation` column).
 
 | **Country/Region** | **Code** | **Language** | **Code**|
@@ -129,3 +166,4 @@ Questions for each country in their local languages and English can be accessed
 | Northern Nigeria | NG | Hausa | ha
 | Ethiopia | ET | Amharic | am
 
+The current set of multiple-choice questions and their answers can be found in the `multiple-choice-questions` split.
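The format change in this commit replaces the old ID-keyed JSON object (`"Al-en-06": {...}`) with a list of records carrying an explicit `"ID"` field (`[{"ID": "Al-en-06", ...}]`). A minimal sketch of adapting downstream code to the new layout, using only the fields visible in the diff (the contents of `annotations` are truncated here, as in the card):

```python
import json

# One record in the new list-of-objects layout introduced by this commit.
# The "annotations" field is left empty here; its inner structure is
# truncated in the dataset card excerpt above.
new_format = json.loads("""
[{
    "ID": "Al-en-06",
    "en_question": "What is a common school cafeteria food in your country?",
    "annotations": []
}]
""")

# Rebuild the previous ID-keyed mapping for code that still expects it.
by_id = {record["ID"]: record for record in new_format}

print(by_id["Al-en-06"]["en_question"])
# -> What is a common school cafeteria food in your country?
```

The same one-line dict comprehension works for any of the per-country annotation files, since each is a JSON list of such records.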