---
license: cc-by-sa-4.0
task_categories:
  - question-answering
language:
  - en
  - zh
  - es
  - id
  - ko
  - el
  - fa
  - ar
  - az
  - su
  - as
  - ha
  - am
size_categories:
  - 10K<n<100K
configs:
  - config_name: annotations
    data_files:
      - split: DZ
        path: data/annotations_hf/Algeria_data.json
      - split: AS
        path: data/annotations_hf/Assam_data.json
      - split: AZ
        path: data/annotations_hf/Azerbaijan_data.json
      - split: CN
        path: data/annotations_hf/China_data.json
      - split: ET
        path: data/annotations_hf/Ethiopia_data.json
      - split: GR
        path: data/annotations_hf/Greece_data.json
      - split: ID
        path: data/annotations_hf/Indonesia_data.json
      - split: IR
        path: data/annotations_hf/Iran_data.json
      - split: MX
        path: data/annotations_hf/Mexico_data.json
      - split: KP
        path: data/annotations_hf/North_Korea_data.json
      - split: NG
        path: data/annotations_hf/Northern_Nigeria_data.json
      - split: KR
        path: data/annotations_hf/South_Korea_data.json
      - split: ES
        path: data/annotations_hf/Spain_data.json
      - split: GB
        path: data/annotations_hf/UK_data.json
      - split: US
        path: data/annotations_hf/US_data.json
      - split: JB
        path: data/annotations_hf/West_Java_data.json
  - config_name: short-answer-questions
    data_files:
      - split: DZ
        path: data/questions_hf/Algeria_questions.json
      - split: AS
        path: data/questions_hf/Assam_questions.json
      - split: AZ
        path: data/questions_hf/Azerbaijan_questions.json
      - split: CN
        path: data/questions_hf/China_questions.json
      - split: ET
        path: data/questions_hf/Ethiopia_questions.json
      - split: GR
        path: data/questions_hf/Greece_questions.json
      - split: ID
        path: data/questions_hf/Indonesia_questions.json
      - split: IR
        path: data/questions_hf/Iran_questions.json
      - split: MX
        path: data/questions_hf/Mexico_questions.json
      - split: KP
        path: data/questions_hf/North_Korea_questions.json
      - split: NG
        path: data/questions_hf/Northern_Nigeria_questions.json
      - split: KR
        path: data/questions_hf/South_Korea_questions.json
      - split: ES
        path: data/questions_hf/Spain_questions.json
      - split: GB
        path: data/questions_hf/UK_questions.json
      - split: US
        path: data/questions_hf/US_questions.json
      - split: JB
        path: data/questions_hf/West_Java_questions.json
  - config_name: multiple-choice-questions
    data_files:
      - split: test
        path: data/mc_questions_hf/mc_questions_file.json
---

BLEnD

This is the official repository of BLEnD: A Benchmark for LLMs on Everyday Knowledge in Diverse Cultures and Languages (Submitted to NeurIPS 2024 Datasets and Benchmarks Track).

About

Figure: BLEnD construction & LLM evaluation framework.

Large language models (LLMs) often lack culture-specific everyday knowledge, especially across diverse regions and non-English languages. Existing benchmarks for evaluating LLMs' cultural sensitivities are usually limited to a single language or online sources like Wikipedia, which may not reflect the daily habits, customs, and lifestyles of different regions. That is, information about the food people eat for their birthday celebrations, spices they typically use, musical instruments youngsters play, or the sports they practice in school is not always explicitly written online. To address this issue, we introduce BLEnD, a hand-crafted benchmark designed to evaluate LLMs' everyday knowledge across diverse cultures and languages. The benchmark comprises 52.6k question-answer pairs from 16 countries/regions, in 13 different languages, including low-resource ones such as Amharic, Assamese, Azerbaijani, Hausa, and Sundanese. We evaluate LLMs in two formats: short-answer questions, and multiple-choice questions. We show that LLMs perform better in cultures that are more present online, with a maximum 57.34% difference in GPT-4, the best-performing model, in the short-answer format. Furthermore, we find that LLMs perform better in their local languages for mid-to-high-resource languages. Interestingly, for languages deemed to be low-resource, LLMs provide better answers in English.

Requirements

datasets >= 2.19.2
pandas >= 2.1.4

Dataset

All the data samples for short-answer questions, including the human-annotated answers, can be found in the data/ directory. Specifically, the annotations from each country/region are included in the annotations configuration, and each country/region's data can be accessed as a split named after its country code.

from datasets import load_dataset

annotations = load_dataset("nayeon212/BLEnD", "annotations")

# To access data from Assam:
assam_annotations = annotations["AS"]

Each split contains the question ID, the question in the local language and in English, the human annotations in both the local language and English, and their respective vote counts. An example from the South Korea data is shown below:

[{
    "ID": "Al-en-06",
    "question": "๋Œ€ํ•œ๋ฏผ๊ตญ ํ•™๊ต ๊ธ‰์‹์—์„œ ํ”ํžˆ ๋ณผ ์ˆ˜ ์žˆ๋Š” ์Œ์‹์€ ๋ฌด์—‡์ธ๊ฐ€์š”?",
    "en_question": "What is a common school cafeteria food in your country?",
    "annotations": [
        {
            "answers": [
                "๊น€์น˜"
            ],
            "en_answers": [
                "kimchi"
            ],
            "count": 4
        },
        {
            "answers": [
                "๋ฐฅ",
                "์Œ€๋ฐฅ",
                "์Œ€"
            ],
            "en_answers": [
                "rice"
            ],
            "count": 3
        },
        ...
    ],
    "idks": {
        "idk": 0,
        "no-answer": 0,
        "not-applicable": 0
    }
}]
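
As a minimal sketch, assuming the record layout shown above, the annotation with the highest vote count can be extracted for each question:

from datasets import load_dataset

annotations = load_dataset("nayeon212/BLEnD", "annotations")

# For each South Korea question, print the most-voted English answer
# (field names follow the sample record shown above)
for example in annotations["KR"]:
    if not example["annotations"]:
        continue
    top = max(example["annotations"], key=lambda a: a["count"])
    print(example["ID"], example["en_question"], top["en_answers"], top["count"])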

The topic and source language for each question can be found in the short-answer-questions configuration. The questions for each country/region, in the local language and in English, can be accessed by country code. Each split includes the question ID, topic, source language, the question in English, and the question in the local language (in the Translation column).

from datasets import load_dataset

questions = load_dataset("nayeon212/BLEnD", "short-answer-questions")

# To access data from Assam:
assam_questions = questions["AS"]
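
Since pandas is listed as a requirement, a split can also be converted to a DataFrame for inspection; a minimal sketch (exact column names, apart from the Translation column described above, are best checked programmatically):

from datasets import load_dataset

questions = load_dataset("nayeon212/BLEnD", "short-answer-questions")

# Convert the South Korea split to a pandas DataFrame and inspect its columns
df = questions["KR"].to_pandas()
print(df.columns.tolist())
print(df.head())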

The current set of multiple-choice questions and their answers can be found in the multiple-choice-questions configuration, under a single test split.

from datasets import load_dataset

mcq = load_dataset("nayeon212/BLEnD", "multiple-choice-questions")
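
To see the exact fields of each multiple-choice record, the test split can be inspected directly, for example:

from datasets import load_dataset

mcq = load_dataset("nayeon212/BLEnD", "multiple-choice-questions")

# Print the number of multiple-choice questions and the first record
print(len(mcq["test"]))
print(mcq["test"][0])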

Country/Region Codes

| Country/Region | Code | Language | Code |
|---|---|---|---|
| United States | US | English | en |
| United Kingdom | GB | English | en |
| China | CN | Chinese | zh |
| Spain | ES | Spanish | es |
| Mexico | MX | Spanish | es |
| Indonesia | ID | Indonesian | id |
| South Korea | KR | Korean | ko |
| North Korea | KP | Korean | ko |
| Greece | GR | Greek | el |
| Iran | IR | Persian | fa |
| Algeria | DZ | Arabic | ar |
| Azerbaijan | AZ | Azerbaijani | az |
| West Java | JB | Sundanese | su |
| Assam | AS | Assamese | as |
| Northern Nigeria | NG | Hausa | ha |
| Ethiopia | ET | Amharic | am |
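
For convenience, the mapping above can be mirrored in code, for example to iterate over every annotation split (a sketch; the codes follow the table above):

from datasets import load_dataset

# Country/region code -> language code, as listed in the table above
COUNTRY_LANGUAGE = {
    "US": "en", "GB": "en", "CN": "zh", "ES": "es", "MX": "es",
    "ID": "id", "KR": "ko", "KP": "ko", "GR": "el", "IR": "fa",
    "DZ": "ar", "AZ": "az", "JB": "su", "AS": "as", "NG": "ha",
    "ET": "am",
}

annotations = load_dataset("nayeon212/BLEnD", "annotations")
for code, lang in COUNTRY_LANGUAGE.items():
    print(f"{code} ({lang}): {len(annotations[code])} questions")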