---
license: cc-by-sa-4.0
task_categories:
- question-answering
language:
- en
- zh
- es
- id
- ko
- el
- fa
- ar
- az
- su
- as
- ha
- am
size_categories:
- 10K<n<100K
configs:
- config_name: annotations
data_files:
- split: DZ
path: "data/annotations_hf/Algeria_data.json"
- split: AS
path: "data/annotations_hf/Assam_data.json"
- split: AZ
path: "data/annotations_hf/Azerbaijan_data.json"
- split: CN
path: "data/annotations_hf/China_data.json"
- split: ET
path: "data/annotations_hf/Ethiopia_data.json"
- split: GR
path: "data/annotations_hf/Greece_data.json"
- split: ID
path: "data/annotations_hf/Indonesia_data.json"
- split: IR
path: "data/annotations_hf/Iran_data.json"
- split: MX
path: "data/annotations_hf/Mexico_data.json"
- split: KP
path: "data/annotations_hf/North_Korea_data.json"
- split: NG
path: "data/annotations_hf/Northern_Nigeria_data.json"
- split: KR
path: "data/annotations_hf/South_Korea_data.json"
- split: ES
path: "data/annotations_hf/Spain_data.json"
- split: GB
path: "data/annotations_hf/UK_data.json"
- split: US
path: "data/annotations_hf/US_data.json"
- split: JB
path: "data/annotations_hf/West_Java_data.json"
- config_name: short-answer-questions
data_files:
- split: DZ
path: "data/questions_hf/Algeria_questions.json"
- split: AS
path: "data/questions_hf/Assam_questions.json"
- split: AZ
path: "data/questions_hf/Azerbaijan_questions.json"
- split: CN
path: "data/questions_hf/China_questions.json"
- split: ET
path: "data/questions_hf/Ethiopia_questions.json"
- split: GR
path: "data/questions_hf/Greece_questions.json"
- split: ID
path: "data/questions_hf/Indonesia_questions.json"
- split: IR
path: "data/questions_hf/Iran_questions.json"
- split: MX
path: "data/questions_hf/Mexico_questions.json"
- split: KP
path: "data/questions_hf/North_Korea_questions.json"
- split: NG
path: "data/questions_hf/Northern_Nigeria_questions.json"
- split: KR
path: "data/questions_hf/South_Korea_questions.json"
- split: ES
path: "data/questions_hf/Spain_questions.json"
- split: GB
path: "data/questions_hf/UK_questions.json"
- split: US
path: "data/questions_hf/US_questions.json"
- split: JB
path: "data/questions_hf/West_Java_questions.json"
- config_name: multiple-choice-questions
data_files:
- split: test
path: "data/mc_questions_hf/mc_questions_file.json"
---
# BLEnD
This is the official repository of **[BLEnD: A Benchmark for LLMs on Everyday Knowledge in Diverse Cultures and Languages](https://arxiv.org/abs/2406.09948)** (submitted to the NeurIPS 2024 Datasets and Benchmarks Track).
## About
![BLEnD Construction & LLM Evaluation Framework](main_figure.png)
Large language models (LLMs) often lack culture-specific everyday knowledge, especially across diverse regions and non-English languages. Existing benchmarks for evaluating LLMs' cultural sensitivity are usually limited to a single language or built from online sources like Wikipedia, which may not reflect the daily habits, customs, and lifestyles of different regions. For example, information about the food people eat at birthday celebrations, the spices they typically use, the musical instruments young people play, or the sports they practice in school is not always explicitly written online.
To address this issue, we introduce **BLEnD**, a hand-crafted benchmark designed to evaluate LLMs' everyday knowledge across diverse cultures and languages.
The benchmark comprises 52.6k question-answer pairs from 16 countries/regions, in 13 different languages, including low-resource ones such as Amharic, Assamese, Azerbaijani, Hausa, and Sundanese.
We evaluate LLMs in two formats: short-answer questions and multiple-choice questions.
We show that LLMs perform better for cultures that are more present online, with a performance gap of up to 57.34% for GPT-4, the best-performing model, on short-answer questions.
Furthermore, we find that for mid-to-high-resource languages, LLMs perform better in the local language. Interestingly, for low-resource languages, LLMs provide better answers in English.
## Requirements
```
datasets >= 2.19.2
pandas >= 2.1.4
```
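To double-check that your environment satisfies these minimums, a quick Python check (nothing BLEnD-specific, just the two packages above):
```Python
import datasets
import pandas

# Both should satisfy the minimum versions listed above.
print(datasets.__version__)  # expect >= 2.19.2
print(pandas.__version__)    # expect >= 2.1.4
```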
## Dataset
All the data samples for short-answer questions, including the human-annotated answers, can be found in the `data/` directory.
Specifically, the annotations from each country are included in the `annotations` configuration, where each country/region's split can be accessed by its **[country code](https://huggingface.co/datasets/nayeon212/BLEnD#countryregion-codes)**.
```Python
from datasets import load_dataset
annotations = load_dataset("nayeon212/BLEnD",'annotations')
# To access data from Assam:
assam_annotations = annotations['AS']
```
Each file contains a JSON list in which each entry holds a question ID, the question in the local language and in English, the human-annotated answers in both the local language and English, and their respective vote counts. An example from the South Korea data is shown below:
```JSON
[{
    "ID": "Al-en-06",
    "question": "대한민국 학교 급식에서 흔히 볼 수 있는 음식은 무엇인가요?",
    "en_question": "What is a common school cafeteria food in your country?",
    "annotations": [
        {
            "answers": [
                "김치"
            ],
            "en_answers": [
                "kimchi"
            ],
            "count": 4
        },
        {
            "answers": [
                "밥",
                "쌀밥",
                "쌀"
            ],
            "en_answers": [
                "rice"
            ],
            "count": 3
        },
        ...
    ],
    "idks": {
        "idk": 0,
        "no-answer": 0,
        "not-applicable": 0
    }
}]
```
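As a sketch of how these fields fit together, the snippet below picks the top-voted English answer for each South Korea question. It assumes only the fields shown in the example above, and that `datasets` preserves the list-of-dicts structure of the `annotations` field (depending on feature inference, it may instead return a dict of lists):
```Python
from datasets import load_dataset

annotations = load_dataset("nayeon212/BLEnD", 'annotations')

# For each question, pick the annotation entry with the highest vote count.
for row in annotations['KR']:
    top = max(row['annotations'], key=lambda a: a['count'])
    print(row['ID'], row['en_question'], '->', top['en_answers'], f"({top['count']} votes)")
```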
The topics and source language for each question can be found in the `short-answer-questions` configuration.
Questions for each country, in both the local language and English, can be accessed by **[country codes](https://huggingface.co/datasets/nayeon212/BLEnD#countryregion-codes)**.
Each file includes the question ID, topic, source language, the question in English, and the question in the local language (in the `Translation` column).
```Python
from datasets import load_dataset
questions = load_dataset("nayeon212/BLEnD",'short-answer-questions')
# To access data from Assam:
assam_questions = questions['AS']
```
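Since `pandas` is a listed requirement, a convenient way to browse a split is to convert it to a DataFrame. A minimal sketch; only the `Translation` column name is documented above, so check the remaining column names via `df.columns`:
```Python
from datasets import load_dataset

questions = load_dataset("nayeon212/BLEnD", 'short-answer-questions')

# Convert the Assam split to a pandas DataFrame for easy inspection.
df = questions['AS'].to_pandas()
print(df.columns)          # check the exact column names
print(df['Translation'])   # questions in the local language
```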
The current set of multiple-choice questions and their answers can be found in the `multiple-choice-questions` configuration.
```Python
from datasets import load_dataset
mcq = load_dataset("nayeon212/BLEnD",'multiple-choice-questions')
```
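The schema of the multiple-choice items is not spelled out above, so a quick way to see the available fields is to print a single item from the `test` split:
```Python
from datasets import load_dataset

mcq = load_dataset("nayeon212/BLEnD", 'multiple-choice-questions')

# Inspect one item to see the available fields.
print(mcq['test'][0])
```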
### Country/Region Codes
| **Country/Region** | **Code** | **Language** | **Code** |
|:------------------:|:--------:|:------------:|:--------:|
| United States | US | English | en |
| United Kingdom | GB | English | en |
| China | CN | Chinese | zh |
| Spain | ES | Spanish | es |
| Mexico | MX | Spanish | es |
| Indonesia | ID | Indonesian | id |
| South Korea | KR | Korean | ko |
| North Korea | KP | Korean | ko |
| Greece | GR | Greek | el |
| Iran | IR | Persian | fa |
| Algeria | DZ | Arabic | ar |
| Azerbaijan | AZ | Azerbaijani | az |
| West Java | JB | Sundanese | su |
| Assam | AS | Assamese | as |
| Northern Nigeria | NG | Hausa | ha |
| Ethiopia | ET | Amharic | am |
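If you need every split programmatically, the codes in this table can be iterated directly. A minimal sketch using the `annotations` configuration, with the codes copied from the table above:
```Python
from datasets import load_dataset

# Country/region split codes, copied from the table above.
COUNTRY_CODES = ["US", "GB", "CN", "ES", "MX", "ID", "KR", "KP",
                 "GR", "IR", "DZ", "AZ", "JB", "AS", "NG", "ET"]

annotations = load_dataset("nayeon212/BLEnD", 'annotations')
for code in COUNTRY_CODES:
    print(code, len(annotations[code]), "questions")
```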