---
license: cc-by-4.0
dataset_info: null
configs:
  - config_name: CulturalBench-Hard
    default: true
    data_files:
      - split: test
        path: CulturalBench-Hard.csv
  - config_name: CulturalBench-Easy
    data_files:
      - split: test
        path: CulturalBench-Easy.csv
size_categories:
  - 1K<n<10K
pretty_name: CulturalBench
---

# CulturalBench - a Robust, Diverse and Challenging Benchmark on Measuring the (Lack of) Cultural Knowledge of LLMs

📌 Resources: [Paper](https://arxiv.org/abs/2410.02677) | [Leaderboard](https://huggingface.co/spaces/kellycyy/CulturalBench)

## 📘 Description of CulturalBench

- CulturalBench is a set of 1,227 human-written and human-verified questions for effectively assessing LLMs' cultural knowledge, covering 45 global regions including underrepresented ones such as Bangladesh, Zimbabwe, and Peru.

- We evaluate models on two setups, CulturalBench-Easy and CulturalBench-Hard, which share the same questions but ask them differently.
  1. CulturalBench-Easy: multiple-choice questions (output: one of four options, i.e., A, B, C, or D). Accuracy is evaluated at the question level (i.e., per question_idx). There are 1,227 questions in total.
  2. CulturalBench-Hard: binary judgements (output: one of two possibilities, i.e., True or False). Accuracy is also evaluated at the question level (i.e., per question_idx), so a question counts as correct only when all four of its binary judgements are answered correctly; see the scoring sketch after this list. There are 1,227 × 4 = 4,908 binary judgements in total, covering the 1,227 questions.
- See the CulturalBench paper for details: https://arxiv.org/pdf/2410.02677.
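
For concreteness, below is a minimal sketch of question-level scoring on the Hard setup. The column names (`question_idx`, `answer`) and the `predict` callable are assumptions for illustration, not the official evaluation code:

```python
from collections import defaultdict

def hard_question_accuracy(rows, predict):
    """Question-level accuracy for CulturalBench-Hard.

    A question counts as correct only if all four of its binary
    judgements are answered correctly. `rows` is an iterable of dicts
    (e.g., rows of a Hugging Face Dataset); `predict(row) -> bool`
    wraps the model. Column names are assumptions about the CSV schema.
    """
    per_question = defaultdict(list)
    for row in rows:
        gold = str(row["answer"]).strip().lower() == "true"
        per_question[row["question_idx"]].append(predict(row) == gold)
    return sum(all(v) for v in per_question.values()) / len(per_question)
```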

## 🌎 Country distribution

| Continent | Num. of questions | Included countries/regions |
|---|---|---|
| North America | 27 | Canada; United States |
| South America | 150 | Argentina; Brazil; Chile; Mexico; Peru |
| East Europe | 115 | Czech Republic; Poland; Romania; Ukraine; Russia |
| South Europe | 76 | Spain; Italy |
| West Europe | 96 | France; Germany; Netherlands; United Kingdom |
| Africa | 134 | Egypt; Morocco; Nigeria; South Africa; Zimbabwe |
| Middle East/West Asia | 127 | Iran; Israel; Lebanon; Saudi Arabia; Turkey |
| South Asia | 106 | Bangladesh; India; Nepal; Pakistan |
| Southeast Asia | 159 | Indonesia; Malaysia; Philippines; Singapore; Thailand; Vietnam |
| East Asia | 211 | China; Hong Kong; Japan; South Korea; Taiwan |
| Oceania | 26 | Australia; New Zealand |
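
To reproduce these counts from the data itself, something like the following should work; note that the `country` column name is a guess about the CSV schema, so check the actual headers first:

```python
from datasets import load_dataset

# Assumes a `country` column in the Easy config; adjust if the CSV differs.
easy = load_dataset("kellycyy/CulturalBench", "CulturalBench-Easy")["test"]
print(easy.to_pandas()["country"].value_counts())
```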

## 🥇 Leaderboard of CulturalBench

- We evaluated 30 frontier LLMs (updated 2024-10-04) and host the leaderboard at https://huggingface.co/spaces/kellycyy/CulturalBench.
- We find that LLMs are sensitive to the difference between the two setups (e.g., GPT-4o shows a 27.3% accuracy gap between Easy and Hard).
- Compared to human performance (92.6% accuracy), CulturalBench-Hard is challenging for frontier LLMs: the best-performing model (GPT-4o) scores only 61.5% and the worst (Llama3-8b) only 21.4%.

## 📖 Example of CulturalBench

- Examples of questions in the two setups are shown in the figure on the dataset card page (image not reproduced here).

## 💻 How to load the datasets

```python
from datasets import load_dataset

ds_hard = load_dataset("kellycyy/CulturalBench", "CulturalBench-Hard")
ds_easy = load_dataset("kellycyy/CulturalBench", "CulturalBench-Easy")
```
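
Each config exposes a single `test` split (per the `configs` section in the YAML metadata). A quick sanity check after loading:

```python
# Inspect the split and the first row; feature/column names depend on
# the CSV headers, so treat them as something to verify, not assume.
print(ds_hard["test"])
print(ds_hard["test"][0])
```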

## Contact

E-Mail: Kelly Chiu

## Citation

If you find this dataset useful, please cite the following work:

```bibtex
@misc{chiu2024culturalbenchrobustdiversechallenging,
      title={CulturalBench: a Robust, Diverse and Challenging Benchmark on Measuring the (Lack of) Cultural Knowledge of LLMs},
      author={Yu Ying Chiu and Liwei Jiang and Bill Yuchen Lin and Chan Young Park and Shuyue Stella Li and Sahithya Ravi and Mehar Bhatia and Maria Antoniak and Yulia Tsvetkov and Vered Shwartz and Yejin Choi},
      year={2024},
      eprint={2410.02677},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2410.02677},
}
```