---
license: apache-2.0
tags:
- natural-language-understanding
language_creators:
- expert-generated
- machine-generated
multilinguality:
- multilingual
pretty_name: Polyglot or Not? Fact-Completion Benchmark
size_categories:
- 100K<n<1M
task_categories:
- text-generation
- fill-mask
- text2text-generation
dataset_info:
features:
- name: dataset_id
dtype: string
- name: stem
dtype: string
- name: 'true'
dtype: string
- name: 'false'
dtype: string
- name: relation
dtype: string
- name: subject
dtype: string
- name: object
dtype: string
splits:
- name: English
num_bytes: 3474255
num_examples: 26254
- name: Spanish
num_bytes: 3175733
num_examples: 18786
- name: French
num_bytes: 3395566
num_examples: 18395
- name: Russian
num_bytes: 659526
num_examples: 3289
- name: Portuguese
num_bytes: 4158146
num_examples: 22974
- name: German
num_bytes: 2611160
num_examples: 16287
- name: Italian
num_bytes: 3709786
num_examples: 20448
- name: Ukrainian
num_bytes: 1868358
num_examples: 7918
- name: Polish
num_bytes: 1683647
num_examples: 9484
- name: Romanian
num_bytes: 2846002
num_examples: 17568
- name: Czech
num_bytes: 1631582
num_examples: 9427
- name: Bulgarian
num_bytes: 4597410
num_examples: 20577
- name: Swedish
num_bytes: 3226502
num_examples: 21576
- name: Serbian
num_bytes: 1327674
num_examples: 5426
- name: Hungarian
num_bytes: 865409
num_examples: 4650
- name: Croatian
num_bytes: 1195097
num_examples: 7358
- name: Danish
num_bytes: 3580458
num_examples: 23365
- name: Slovenian
num_bytes: 1299653
num_examples: 7873
- name: Dutch
num_bytes: 3732795
num_examples: 22590
- name: Catalan
num_bytes: 3319466
num_examples: 18898
download_size: 27090222
dataset_size: 52358225
language:
- en
- fr
- es
- de
- uk
- bg
- ca
- da
- hr
- hu
- it
- nl
- pl
- pt
- ro
- ru
- sl
- sr
- sv
- cs
---
# Dataset Card
- **Homepage:** https://bit.ly/ischool-berkeley-capstone
- **Repository:** https://github.com/daniel-furman/Capstone
- **Point of Contact:** daniel_furman@berkeley.edu
## Dataset Summary
This is the dataset for **Polyglot or Not?: Measuring Multilingual Encyclopedic Knowledge Retrieval from Foundation Language Models**.
## Test Description
Given a factual association such as *The capital of France is **Paris***, we determine whether a model adequately "knows" this information with the following test:
* Step **1**: prompt the model to predict the likelihood of the token **Paris** following *The capital of France is*.
* Step **2**: prompt the model to predict the average likelihood of a set of false, counterfactual tokens following the same stem.
If the value from **1** is greater than the value from **2**, we conclude that the model adequately recalls that fact. Formally, this is an application of the Contrastive Knowledge Assessment proposed in [1].
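The comparison in steps **1** and **2** can be approximated with any causal language model. Below is a minimal, illustrative sketch using the Hugging Face `transformers` library; the model name (`gpt2`), the counterfactual objects, and the helper function are assumptions for demonstration only, not the project's evaluation code.
```python
# Illustrative sketch of the contrastive check, assuming a causal LM from
# Hugging Face `transformers`. Model name and false objects are placeholders.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # assumption: any causal LM works for the English cut
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

def completion_log_prob(stem: str, completion: str) -> float:
    """Average log-probability of `completion` tokens following `stem`."""
    stem_ids = tokenizer(stem, return_tensors="pt").input_ids
    full_ids = tokenizer(stem + completion, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(full_ids).logits
    # Log-probability of each token given its preceding context.
    log_probs = torch.log_softmax(logits[:, :-1, :], dim=-1)
    targets = full_ids[:, 1:]
    token_log_probs = log_probs.gather(-1, targets.unsqueeze(-1)).squeeze(-1)
    # Keep only the completion tokens (those appearing after the stem).
    completion_log_probs = token_log_probs[:, stem_ids.shape[1] - 1:]
    return completion_log_probs.mean().item()

stem = "The capital of France is"
true_object = " Paris"
false_objects = [" Rome", " Madrid", " Berlin"]  # counterfactual alternatives

score_true = completion_log_prob(stem, true_object)
score_false = sum(completion_log_prob(stem, o) for o in false_objects) / len(false_objects)
print("fact recalled" if score_true > score_false else "fact not recalled")
```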
For every foundation model of interest (like [LLaMA](https://arxiv.org/abs/2302.13971)), we perform this assessment on a set of facts translated into 20 languages. All told, we score foundation models on 303k fact-completions ([results](https://github.com/daniel-furman/capstone#multilingual-fact-completion-results)).
We also score monolingual models (like [GPT-2](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf)) on English-only fact-completion ([results](https://github.com/daniel-furman/capstone#english-fact-completion-results)).
## Languages
The dataset covers 20 languages, written in either the Latin or Cyrillic script: bg, ca, cs, da, de, en, es, fr, hr, hu, it, nl, pl, pt, ro, ru, sl, sr, sv, uk.
## Data Splits
The dataset splits correspond to the 20 languages above.
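Each split can be loaded individually or all at once. The sketch below assumes access through the Hugging Face `datasets` library; the repository id shown is a placeholder rather than a confirmed Hub path.
```python
from datasets import load_dataset

# Placeholder Hub id; substitute this card's actual repository path.
dataset = load_dataset("Polyglot-or-Not/Fact-Completion")
english = dataset["English"]  # splits are keyed by language name
print(english[0]["stem"], "->", english[0]["true"])
```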
## Source Data
We sourced the English cut of the dataset from [1] and [2] and used the Google Translate API to produce the other 19 language cuts.
## Licensing Information
The dataset is licensed under the Apache 2.0 license and may be used freely under its terms.
## Citation Information
```
@misc{polyglot_or_not,
author = {Daniel Furman and Tim Schott and Shreshta Bhat},
title = {Polyglot or Not?: Measuring Multilingual Encyclopedic Knowledge Retrieval from Foundation Language Models},
year = {2023},
publisher = {GitHub},
howpublished = {\url{https://github.com/daniel-furman/Capstone}},
}
```
## Bibliography
[1] Dong, Qingxiu, Damai Dai, Yifan Song, Jingjing Xu, Zhifang Sui, and Lei Li. "Calibrating Factual Knowledge in Pretrained Language Models". In Findings of the Association for Computational Linguistics: EMNLP 2022. [arXiv:2210.03329](https://arxiv.org/abs/2210.03329) (2022).
```
@misc{dong2022calibrating,
doi = {10.48550/arXiv.2210.03329},
title={Calibrating Factual Knowledge in Pretrained Language Models},
author={Qingxiu Dong and Damai Dai and Yifan Song and Jingjing Xu and Zhifang Sui and Lei Li},
year={2022},
eprint={2210.03329},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
[2] Meng, Kevin, Arnab Sen Sharma, Alex Andonian, Yonatan Belinkov, and David Bau. "Mass Editing Memory in a Transformer." arXiv preprint [arXiv:2210.07229](https://arxiv.org/abs/2210.07229) (2022).
```
@misc{meng2022massediting,
doi = {10.48550/arXiv.2210.07229},
title={Mass-Editing Memory in a Transformer},
author={Kevin Meng and Arnab Sen Sharma and Alex Andonian and Yonatan Belinkov and David Bau},
year={2022},
eprint={2210.07229},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```