---
license: apache-2.0
tags:
- natural-language-understanding
language_creators:
- expert-generated
- machine-generated
multilinguality:
- multilingual
pretty_name: Polyglot or Not? Fact-Completion Benchmark
size_categories:
- 100K<n<1M
task_categories:
- text-generation
- fill-mask
- text2text-generation
dataset_info:
features:
- name: dataset_id
dtype: string
- name: stem
dtype: string
- name: 'true'
dtype: string
- name: 'false'
dtype: string
- name: relation
dtype: string
- name: subject
dtype: string
- name: object
dtype: string
splits:
- name: English
num_bytes: 3474255
num_examples: 26254
- name: Spanish
num_bytes: 3175733
num_examples: 18786
- name: French
num_bytes: 3395566
num_examples: 18395
- name: Russian
num_bytes: 659526
num_examples: 3289
- name: Portuguese
num_bytes: 4158146
num_examples: 22974
- name: German
num_bytes: 2611160
num_examples: 16287
- name: Italian
num_bytes: 3709786
num_examples: 20448
- name: Ukrainian
num_bytes: 1868358
num_examples: 7918
- name: Polish
num_bytes: 1683647
num_examples: 9484
- name: Romanian
num_bytes: 2846002
num_examples: 17568
- name: Czech
num_bytes: 1631582
num_examples: 9427
- name: Bulgarian
num_bytes: 4597410
num_examples: 20577
- name: Swedish
num_bytes: 3226502
num_examples: 21576
- name: Serbian
num_bytes: 1327674
num_examples: 5426
- name: Hungarian
num_bytes: 865409
num_examples: 4650
- name: Croatian
num_bytes: 1195097
num_examples: 7358
- name: Danish
num_bytes: 3580458
num_examples: 23365
- name: Slovenian
num_bytes: 1299653
num_examples: 7873
- name: Dutch
num_bytes: 3732795
num_examples: 22590
- name: Catalan
num_bytes: 3319466
num_examples: 18898
download_size: 27090207
dataset_size: 52358225
language:
- en
- fr
- es
- de
- uk
- bg
- ca
- da
- hr
- hu
- it
- nl
- pl
- pt
- ro
- ru
- sl
- sr
- sv
- cs
---
# Dataset Card
- **Homepage:** https://bit.ly/ischool-berkeley-capstone
- **Repository:** https://github.com/daniel-furman/Capstone
- **Point of Contact:** daniel_furman@berkeley.edu
## Dataset Summary
This is the dataset for **Polyglot or Not? Measuring Multilingual Encyclopedic Knowledge Retrieval from Foundation Language Models**.
## Test Description
Given a factual association such as *The capital of France is **Paris***, we determine whether a model adequately "knows" this information with the following test:
* Step **1**: prompt the model to predict the likelihood of the token **Paris** following the stem *The capital of France is*
* Step **2**: prompt the model to predict the average likelihood of a set of false, counterfactual tokens following the same stem.
If the value from **1** is greater than the value from **2**, we conclude that the model adequately recalls that fact. Formally, this is an application of the Contrastive Knowledge Assessment proposed in [1] (see Bibliography below).
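As a rough illustration, the sketch below applies this test to a single fact with a Hugging Face causal language model, comparing the average log-probability of the true completion against the mean over a handful of counterfactual completions. The model name (`gpt2`), the whitespace handling, and the token-averaging are simplifying assumptions made for illustration; the exact scoring code used in the paper lives in the [repository](https://github.com/daniel-furman/Capstone).
```python
# Minimal sketch of the contrastive test with a Hugging Face causal LM.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder; the paper evaluates larger models such as LLaMA
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

def completion_log_prob(stem: str, completion: str) -> float:
    """Average log-probability of the completion tokens conditioned on the stem."""
    stem_ids = tokenizer(stem, return_tensors="pt").input_ids
    full_ids = tokenizer(stem + " " + completion, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(input_ids=full_ids).logits
    # Log-probability of each token given its prefix
    log_probs = torch.log_softmax(logits[:, :-1, :], dim=-1)
    targets = full_ids[:, 1:]
    token_log_probs = log_probs.gather(2, targets.unsqueeze(-1)).squeeze(-1)
    # Keep only the positions that belong to the completion
    n_completion = full_ids.shape[1] - stem_ids.shape[1]
    return token_log_probs[0, -n_completion:].mean().item()

def knows_fact(stem: str, true_obj: str, false_objs: list) -> bool:
    """True if the model scores the true object above the counterfactual average."""
    true_score = completion_log_prob(stem, true_obj)
    false_score = sum(completion_log_prob(stem, f) for f in false_objs) / len(false_objs)
    return true_score > false_score

print(knows_fact("The capital of France is", "Paris", ["Rome", "Berlin", "Madrid"]))
```
A fact counts as recalled when `knows_fact` returns `True`, i.e. when the true completion outscores the counterfactual average, mirroring the decision rule above.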
For every foundation model of interest (like [LLaMA](https://arxiv.org/abs/2302.13971)), we perform this assessment on a set of facts translated into 20 languages. All told, we score foundation models on 303k fact-completions ([results](https://github.com/daniel-furman/capstone#multilingual-fact-completion-results)).
We also score monolingual models (like [GPT-2](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf)) on English-only fact-completion ([results](https://github.com/daniel-furman/capstone#english-fact-completion-results)).
## Languages
The dataset covers 20 languages, which use either the Latin or Cyrillic scripts: bg, ca, cs, da, de, en, es, fr, hr, hu, it,
nl, pl, pt, ro, ru, sl, sr, sv, uk.
## Data Splits
The dataset splits correspond to the 20 languages above.
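A split can be loaded directly with the `datasets` library; a minimal sketch follows, where the repository ID is assumed from this card's location on the Hugging Face Hub.
```python
from datasets import load_dataset

# Load the French split; each row carries the fields
# dataset_id, stem, true, false, relation, subject, and object.
french = load_dataset("Polyglot-or-Not/Fact-Completion", split="French")
print(french[0])
```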
## Source Data
We sourced the English cut of the dataset from [1] and [2] and used the Google Translate API to produce the other 19 language cuts.
## Licensing Information
The dataset is released under the Apache 2.0 license and may be used under the terms of that license.
## Citation Information
```
@misc{schott2023polyglot,
doi = {10.48550/arXiv.2305.13675},
title={Polyglot or Not? Measuring Multilingual Encyclopedic Knowledge Retrieval from Foundation Language Models},
author={Tim Schott and Daniel Furman and Shreshta Bhat},
year={2023},
eprint={2305.13675},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
## Bibliography
[1] Dong, Qingxiu, Damai Dai, Yifan Song, Jingjing Xu, Zhifang Sui, and Lei Li. "Calibrating Factual Knowledge in Pretrained Language Models". In Findings of the Association for Computational Linguistics: EMNLP 2022. [arXiv:2210.03329](https://arxiv.org/abs/2210.03329) (2022).
```
@misc{dong2022calibrating,
doi = {10.48550/arXiv.2210.03329},
title={Calibrating Factual Knowledge in Pretrained Language Models},
author={Qingxiu Dong and Damai Dai and Yifan Song and Jingjing Xu and Zhifang Sui and Lei Li},
year={2022},
eprint={2210.03329},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
[2] Meng, Kevin, Arnab Sen Sharma, Alex Andonian, Yonatan Belinkov, and David Bau. "Mass-Editing Memory in a Transformer." arXiv preprint [arXiv:2210.07229](https://arxiv.org/abs/2210.07229) (2022).
```
@misc{meng2022massediting,
doi = {10.48550/arXiv.2210.07229},
title={Mass-Editing Memory in a Transformer},
author={Kevin Meng and Arnab Sen Sharma and Alex Andonian and Yonatan Belinkov and David Bau},
year={2022},
eprint={2210.07229},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```