system HF staff committed on
Commit
7c52bc4
0 Parent(s):

Update files from the datasets library (from 1.2.0)

Release notes: https://github.com/huggingface/datasets/releases/tag/1.2.0

.gitattributes ADDED
@@ -0,0 +1,27 @@
+ *.7z filter=lfs diff=lfs merge=lfs -text
+ *.arrow filter=lfs diff=lfs merge=lfs -text
+ *.bin filter=lfs diff=lfs merge=lfs -text
+ *.bin.* filter=lfs diff=lfs merge=lfs -text
+ *.bz2 filter=lfs diff=lfs merge=lfs -text
+ *.ftz filter=lfs diff=lfs merge=lfs -text
+ *.gz filter=lfs diff=lfs merge=lfs -text
+ *.h5 filter=lfs diff=lfs merge=lfs -text
+ *.joblib filter=lfs diff=lfs merge=lfs -text
+ *.lfs.* filter=lfs diff=lfs merge=lfs -text
+ *.model filter=lfs diff=lfs merge=lfs -text
+ *.msgpack filter=lfs diff=lfs merge=lfs -text
+ *.onnx filter=lfs diff=lfs merge=lfs -text
+ *.ot filter=lfs diff=lfs merge=lfs -text
+ *.parquet filter=lfs diff=lfs merge=lfs -text
+ *.pb filter=lfs diff=lfs merge=lfs -text
+ *.pt filter=lfs diff=lfs merge=lfs -text
+ *.pth filter=lfs diff=lfs merge=lfs -text
+ *.rar filter=lfs diff=lfs merge=lfs -text
+ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+ *.tar.* filter=lfs diff=lfs merge=lfs -text
+ *.tflite filter=lfs diff=lfs merge=lfs -text
+ *.tgz filter=lfs diff=lfs merge=lfs -text
+ *.xz filter=lfs diff=lfs merge=lfs -text
+ *.zip filter=lfs diff=lfs merge=lfs -text
+ *.zstandard filter=lfs diff=lfs merge=lfs -text
+ *tfevents* filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,270 @@
+ ---
+ annotations_creators:
+ - crowdsourced
+ - expert-generated
+ - machine-generated
+ language_creators:
+ - crowdsourced
+ - expert-generated
+ - machine-generated
+ languages:
+ - en
+ licenses:
+ - cc-by-4-0
+ multilinguality:
+ - monolingual
+ size_categories:
+ - n>1M
+ source_datasets:
+ - extended|conceptnet5
+ - extended|squad
+ task_categories:
+ - text-retrieval
+ - text-scoring
+ task_ids:
+ - fact-checking-retrieval
+ - text-scoring-other-probing
+ ---
+
+ # Dataset Card for LAMA: LAnguage Model Analysis - a dataset for probing and analyzing the factual and commonsense knowledge contained in pretrained language models
+
+ ## Table of Contents
+ - [Dataset Description](#dataset-description)
+   - [Dataset Summary](#dataset-summary)
+   - [Supported Tasks](#supported-tasks-and-leaderboards)
+   - [Languages](#languages)
+ - [Dataset Structure](#dataset-structure)
+   - [Data Instances](#data-instances)
+   - [Data Fields](#data-fields)
+   - [Data Splits](#data-splits)
+ - [Dataset Creation](#dataset-creation)
+   - [Curation Rationale](#curation-rationale)
+   - [Source Data](#source-data)
+   - [Annotations](#annotations)
+   - [Personal and Sensitive Information](#personal-and-sensitive-information)
+ - [Considerations for Using the Data](#considerations-for-using-the-data)
+   - [Social Impact of Dataset](#social-impact-of-dataset)
+   - [Discussion of Biases](#discussion-of-biases)
+   - [Other Known Limitations](#other-known-limitations)
+ - [Additional Information](#additional-information)
+   - [Dataset Curators](#dataset-curators)
+   - [Licensing Information](#licensing-information)
+   - [Citation Information](#citation-information)
+
+ ## Dataset Description
+
+ - **Homepage:**
+ https://github.com/facebookresearch/LAMA
+ - **Repository:**
+ https://github.com/facebookresearch/LAMA
+ - **Paper:**
+ @inproceedings{petroni2019language,
+ title={Language Models as Knowledge Bases?},
+ author={F. Petroni, T. Rockt{\"{a}}schel, A. H. Miller, P. Lewis, A. Bakhtin, Y. Wu and S. Riedel},
+ booktitle={In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing (EMNLP), 2019},
+ year={2019}
+ }
+
+ @inproceedings{petroni2020how,
+ title={How Context Affects Language Models' Factual Predictions},
+ author={Fabio Petroni and Patrick Lewis and Aleksandra Piktus and Tim Rockt{\"a}schel and Yuxiang Wu and Alexander H. Miller and Sebastian Riedel},
+ booktitle={Automated Knowledge Base Construction},
+ year={2020},
+ url={https://openreview.net/forum?id=025X0zPfn}
+ }
+
+ ### Dataset Summary
+
+ This dataset provides the data for LAMA. It includes a subset of
+ Google_RE
+ (https://code.google.com/archive/p/relation-extraction-corpus/), T-REx
+ (a subset of Wikidata triples), ConceptNet
+ (https://github.com/commonsense/conceptnet5/wiki) and SQuAD. There is
+ a config for each of "google_re", "trex", "conceptnet" and "squad".
+
+ The dataset includes some cleanup and the addition of a masked sentence
+ with associated answers for the [MASK] token. The accuracy in
+ predicting the [MASK] token shows how well the language model knows
+ facts and commonsense information. The [MASK] tokens stand in only for
+ the "object" slots.
+
+ This version of the dataset includes "negated" sentences as well as
+ the masked sentences. Certain configs also include "template" and
+ "template_negated" fields of the form "[X] some text [Y]", where
+ [X] and [Y] are the subject and object slots, respectively, of
+ certain relations.
+
+ See the paper for more details. For more information, also see:
+ https://github.com/facebookresearch/LAMA
+
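+ As a quick orientation, the sketch below loads one config with the
+ `datasets` library and probes a masked sentence with a fill-mask
+ pipeline. This is a minimal sketch, not the official LAMA evaluation
+ code; the choice of `bert-base-cased` and the top-1 scoring are
+ illustrative assumptions.
+
+ ```python
+ from datasets import load_dataset
+ from transformers import pipeline
+
+ # Each config ("trex", "squad", "google_re", "conceptnet") exposes a single train split.
+ dataset = load_dataset("lama", "conceptnet", split="train")
+
+ # Probe a pretrained masked language model on one example (model choice is ours).
+ fill_mask = pipeline("fill-mask", model="bert-base-cased")
+ example = dataset[0]
+ sentence = example["masked_sentence"].replace("[MASK]", fill_mask.tokenizer.mask_token)
+ predictions = fill_mask(sentence)
+
+ # Count the probe as correct when the top prediction matches obj_label.
+ top_token = predictions[0]["token_str"].strip()
+ print(top_token, example["obj_label"], top_token == example["obj_label"])
+ ```
+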
+ ### Languages
+ en
+
+ ## Dataset Structure
+
+ ### Data Instances
+
+ An example of an instance from the trex config:
+
+ ```
+ {'description': 'the item (an institution, law, public office ...) or statement belongs to or has power over or applies to the value (a territorial jurisdiction: a country, state, municipality, ...)', 'label': 'applies to jurisdiction', 'masked_sentence': 'It is known as a principality as it is a monarchy headed by two Co-Princes – the Spanish/Roman Catholic Bishop of Urgell and the President of [MASK].', 'obj_label': 'France', 'obj_surface': 'France', 'obj_uri': 'Q142', 'predicate_id': 'P1001', 'sub_label': 'president of the French Republic', 'sub_surface': 'President', 'sub_uri': 'Q191954', 'template': '[X] is a legal term in [Y] .', 'template_negated': '[X] is not a legal term in [Y] .', 'type': 'N-M', 'uuid': '3fe3d4da-9df9-45ba-8109-784ce5fba38a'}
+ ```
+
+ An example of an instance from the conceptnet config:
+
+ ```
+ {'masked_sentence': 'One of the things you do when you are alive is [MASK].', 'negated': '', 'obj': 'think', 'obj_label': 'think', 'pred': 'HasSubevent', 'sub': 'alive', 'uuid': 'd4f11631dde8a43beda613ec845ff7d1'}
+ ```
+
+ An example of an instance from the squad config:
+
+ ```
+ {'id': '56be4db0acb8001400a502f0_0', 'masked_sentence': 'To emphasize the 50th anniversary of the Super Bowl the [MASK] color was used.', 'negated': "['To emphasize the 50th anniversary of the Super Bowl the [MASK] color was not used.']", 'obj_label': 'gold', 'sub_label': 'Squad'}
+ ```
+
+ An example of an instance from the google_re config:
+
+ ```
+ {'evidences': '[{\'url\': \'http://en.wikipedia.org/wiki/Peter_F._Martin\', \'snippet\': "Peter F. Martin (born 1941) is an American politician who is a Democratic member of the Rhode Island House of Representatives. He has represented the 75th District Newport since 6 January 2009. He is currently serves on the House Committees on Judiciary, Municipal Government, and Veteran\'s Affairs. During his first term of office he served on the House Committees on Small Business and Separation of Powers & Government Oversight. In August 2010, Representative Martin was appointed as a Commissioner on the Atlantic States Marine Fisheries Commission", \'considered_sentences\': [\'Peter F Martin (born 1941) is an American politician who is a Democratic member of the Rhode Island House of Representatives .\']}]', 'judgments': "[{'rater': '18349444711114572460', 'judgment': 'yes'}, {'rater': '17595829233063766365', 'judgment': 'yes'}, {'rater': '4593294093459651288', 'judgment': 'yes'}, {'rater': '7387074196865291426', 'judgment': 'yes'}, {'rater': '17154471385681223613', 'judgment': 'yes'}]", 'masked_sentence': 'Peter F Martin (born [MASK]) is an American politician who is a Democratic member of the Rhode Island House of Representatives .', 'obj': '1941', 'obj_aliases': '[]', 'obj_label': '1941', 'obj_w': 'None', 'pred': '/people/person/date_of_birth', 'sub': '/m/09gb0bw', 'sub_aliases': '[]', 'sub_label': 'Peter F. Martin', 'sub_w': 'None', 'template': '[X] (born [Y]).', 'template_negated': '[X] (not born [Y]).', 'uuid': '18af2dac-21d3-4c42-aff5-c247f245e203'}
+ ```
+
+ ### Data Fields
+
+ The trex config has the following fields:
+ * uuid: the id
+ * obj_uri: a uri for the object slot
+ * obj_label: a label for the object slot
+ * sub_uri: a uri for the subject slot
+ * sub_label: a label for the subject slot
+ * predicate_id: the predicate/relationship
+ * sub_surface: the surface text for the subject
+ * obj_surface: the surface text for the object; this is the word that should be predicted by the [MASK] token
+ * masked_sentence: the masked sentence used to probe, with the object word replaced with [MASK]
+ * template: a pattern of text for extracting the relationship, object and subject, of the form "[X] some text [Y]", where [X] and [Y] are the subject and object slots respectively (see the sketch after this list). template may be missing and replaced with an empty string.
+ * template_negated: same as above, except [Y] is not the object. template_negated may be missing and replaced with an empty string.
+ * label: the label for the relationship/predicate. label may be missing and replaced with an empty string.
+ * description: a description of the relationship/predicate. description may be missing and replaced with an empty string.
+ * type: a type id for the relationship/predicate. type may be missing and replaced with an empty string.
+
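+ Where a template is present, it can be turned into a natural-language
+ probe by substituting the subject label and masking the object slot.
+ A minimal sketch (the helper name `fill_template` is ours, not part of
+ the dataset):
+
+ ```python
+ def fill_template(template: str, sub_label: str) -> str:
+     """Build a masked probe sentence from a "[X] some text [Y]" template."""
+     return template.replace("[X]", sub_label).replace("[Y]", "[MASK]")
+
+ # Using the trex instance shown above:
+ print(fill_template("[X] is a legal term in [Y] .", "president of the French Republic"))
+ # -> president of the French Republic is a legal term in [MASK] .
+ ```
+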
+ The conceptnet config has the following fields:
+ * uuid: the id
+ * sub: the subject. sub may be missing and replaced with an empty string.
+ * obj: the object to be predicted. obj may be missing and replaced with an empty string.
+ * pred: the predicate/relationship
+ * obj_label: the object label
+ * masked_sentence: the masked sentence used to probe, with the object word replaced with [MASK]
+ * negated: same as above, except [MASK] is replaced by something that is not the object word. negated may be missing and replaced with an empty string.
+
+ The squad config has the following fields:
+ * id: the id
+ * sub_label: the subject label
+ * obj_label: the object label that is being predicted
+ * masked_sentence: the masked sentence used to probe, with the object word replaced with [MASK]
+ * negated: same as above, except [MASK] is replaced by something that is not the object word. negated may be missing and replaced with an empty string.
+
+ The google_re config has the following fields:
+
+ * uuid: the id
+ * pred: the predicate
+ * sub: the subject. sub may be missing and replaced with an empty string.
+ * obj: the object. obj may be missing and replaced with an empty string.
+ * evidences: a flattened string holding the evidence for the predicate; parse it to recover the 'snippet' and 'considered_sentences' information (see the sketch after this list)
+ * judgments: rater judgments about the fact, as a flattened string
+ * sub_w: unknown
+ * sub_label: label for the subject
+ * sub_aliases: unknown
+ * obj_w: unknown
+ * obj_label: label for the object
+ * obj_aliases: unknown
+ * masked_sentence: the masked sentence used to probe, with the object word replaced with [MASK]
+ * template: a pattern of text for extracting the relationship, object and subject, of the form "[X] some text [Y]", where [X] and [Y] are the subject and object slots respectively.
+ * template_negated: same as above, except [Y] is not the object.
+
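+ As the google_re instance above shows (note the single-quoted keys),
+ the evidences and judgments strings are Python-style reprs of lists of
+ dicts rather than strict JSON, so `ast.literal_eval` is a safer parse
+ than `json.loads`. This is our reading of the serialized format, not
+ documented behavior:
+
+ ```python
+ import ast
+
+ def parse_evidences(example: dict) -> list:
+     """Recover the list of evidence dicts from the flattened string field."""
+     return ast.literal_eval(example["evidences"])
+
+ # for evidence in parse_evidences(example):
+ #     print(evidence["url"], evidence["considered_sentences"])
+ ```
+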
+ ### Data Splits
+
+ There are no predefined test/validation splits; each config provides a single train split.
+
+ ## Dataset Creation
+
+ ### Curation Rationale
+
+ This dataset was gathered and created to probe what language models understand.
+
+ ### Source Data
+
+ #### Initial Data Collection and Normalization
+
+ See the research paper and website for more detail. The dataset was
+ gathered from various other datasets and cleaned up for probing.
+
+ #### Who are the source language producers?
+
+ The LAMA authors and the original authors of the various configs.
+
+ ### Annotations
+
+ #### Annotation process
+
+ Human annotations from the original datasets (e.g. ConceptNet), plus various machine annotations.
+
+ #### Who are the annotators?
+
+ Human annotators and machine annotators.
+
+ ### Personal and Sensitive Information
+
+ Unknown, but the data likely includes the names of famous people.
+
+ ## Considerations for Using the Data
+
+ ### Social Impact of Dataset
+
+ The goal of the work is to probe the understanding of language models.
+
+ ### Discussion of Biases
+
+ Since the data comes from human annotators, it is likely to contain biases.
+
+ [More Information Needed]
+
+ ### Other Known Limitations
+
+ The original documentation for the data fields is limited.
+
+ ## Additional Information
+
+ ### Dataset Curators
+
+ The authors of LAMA at Facebook and the authors of the original datasets.
+
+ ### Licensing Information
+
+ The Creative Commons Attribution-Noncommercial 4.0 International License. See https://github.com/facebookresearch/LAMA/blob/master/LICENSE
+
+ ### Citation Information
+
+ @inproceedings{petroni2019language,
+ title={Language Models as Knowledge Bases?},
+ author={F. Petroni, T. Rockt{\"{a}}schel, A. H. Miller, P. Lewis, A. Bakhtin, Y. Wu and S. Riedel},
+ booktitle={In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing (EMNLP), 2019},
+ year={2019}
+ }
+
+ @inproceedings{petroni2020how,
+ title={How Context Affects Language Models' Factual Predictions},
+ author={Fabio Petroni and Patrick Lewis and Aleksandra Piktus and Tim Rockt{\"a}schel and Yuxiang Wu and Alexander H. Miller and Sebastian Riedel},
+ booktitle={Automated Knowledge Base Construction},
+ year={2020},
+ url={https://openreview.net/forum?id=025X0zPfn}
+ }
+
dataset_infos.json ADDED
@@ -0,0 +1 @@
+ {"trex": {"description": "LAMA is a dataset used to probe and analyze the factual and commonsense knowledge contained in pretrained language models. See https://github.com/facebookresearch/LAMA.\n", "citation": "@inproceedings{petroni2019language,\n title={Language Models as Knowledge Bases?},\n author={F. Petroni, T. Rockt{\"{a}}schel, A. H. Miller, P. Lewis, A. Bakhtin, Y. Wu and S. Riedel},\n booktitle={In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing (EMNLP), 2019},\n year={2019}\n}\n@inproceedings{petroni2020how,\n title={How Context Affects Language Models' Factual Predictions},\n author={Fabio Petroni and Patrick Lewis and Aleksandra Piktus and Tim Rockt{\"a}schel and Yuxiang Wu and Alexander H. Miller and Sebastian Riedel},\n booktitle={Automated Knowledge Base Construction},\n year={2020},\n url={https://openreview.net/forum?id=025X0zPfn}\n}\n", "homepage": "https://github.com/facebookresearch/LAMA", "license": "The Creative Commons Attribution-Noncommercial 4.0 International License. see https://github.com/facebookresearch/LAMA/blob/master/LICENSE", "features": {"uuid": {"dtype": "string", "id": null, "_type": "Value"}, "obj_uri": {"dtype": "string", "id": null, "_type": "Value"}, "obj_label": {"dtype": "string", "id": null, "_type": "Value"}, "sub_uri": {"dtype": "string", "id": null, "_type": "Value"}, "sub_label": {"dtype": "string", "id": null, "_type": "Value"}, "predicate_id": {"dtype": "string", "id": null, "_type": "Value"}, "sub_surface": {"dtype": "string", "id": null, "_type": "Value"}, "obj_surface": {"dtype": "string", "id": null, "_type": "Value"}, "masked_sentence": {"dtype": "string", "id": null, "_type": "Value"}, "template": {"dtype": "string", "id": null, "_type": "Value"}, "template_negated": {"dtype": "string", "id": null, "_type": "Value"}, "label": {"dtype": "string", "id": null, "_type": "Value"}, "description": {"dtype": "string", "id": null, "_type": "Value"}, "type": {"dtype": "string", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "builder_name": "lama", "config_name": "trex", "version": {"version_str": "1.1.0", "description": null, "major": 1, "minor": 1, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 656913189, "num_examples": 1304391, "dataset_name": "lama"}}, "download_checksums": {"https://dl.fbaipublicfiles.com/LAMA/negated_data.tar.gz": {"num_bytes": 74639115, "checksum": "1a151058e6608e47983ea4c99c50bb69248c1c0763a04a3793b0a0b657aa0b61"}}, "download_size": 74639115, "post_processing_size": null, "dataset_size": 656913189, "size_in_bytes": 731552304}, "squad": {"description": "LAMA is a dataset used to probe and analyze the factual and commonsense knowledge contained in pretrained language models. See https://github.com/facebookresearch/LAMA.\n", "citation": "@inproceedings{petroni2019language,\n title={Language Models as Knowledge Bases?},\n author={F. Petroni, T. Rockt{\"{a}}schel, A. H. Miller, P. Lewis, A. Bakhtin, Y. Wu and S. Riedel},\n booktitle={In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing (EMNLP), 2019},\n year={2019}\n}\n@inproceedings{petroni2020how,\n title={How Context Affects Language Models' Factual Predictions},\n author={Fabio Petroni and Patrick Lewis and Aleksandra Piktus and Tim Rockt{\"a}schel and Yuxiang Wu and Alexander H. Miller and Sebastian Riedel},\n booktitle={Automated Knowledge Base Construction},\n year={2020},\n url={https://openreview.net/forum?id=025X0zPfn}\n}\n", "homepage": "https://github.com/facebookresearch/LAMA", "license": "The Creative Commons Attribution-Noncommercial 4.0 International License. see https://github.com/facebookresearch/LAMA/blob/master/LICENSE", "features": {"id": {"dtype": "string", "id": null, "_type": "Value"}, "sub_label": {"dtype": "string", "id": null, "_type": "Value"}, "obj_label": {"dtype": "string", "id": null, "_type": "Value"}, "negated": {"dtype": "string", "id": null, "_type": "Value"}, "masked_sentence": {"dtype": "string", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "builder_name": "lama", "config_name": "squad", "version": {"version_str": "1.1.0", "description": null, "major": 1, "minor": 1, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 57188, "num_examples": 305, "dataset_name": "lama"}}, "download_checksums": {"https://dl.fbaipublicfiles.com/LAMA/negated_data.tar.gz": {"num_bytes": 74639115, "checksum": "1a151058e6608e47983ea4c99c50bb69248c1c0763a04a3793b0a0b657aa0b61"}}, "download_size": 74639115, "post_processing_size": null, "dataset_size": 57188, "size_in_bytes": 74696303}, "google_re": {"description": "LAMA is a dataset used to probe and analyze the factual and commonsense knowledge contained in pretrained language models. See https://github.com/facebookresearch/LAMA.\n", "citation": "@inproceedings{petroni2019language,\n title={Language Models as Knowledge Bases?},\n author={F. Petroni, T. Rockt{\"{a}}schel, A. H. Miller, P. Lewis, A. Bakhtin, Y. Wu and S. Riedel},\n booktitle={In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing (EMNLP), 2019},\n year={2019}\n}\n@inproceedings{petroni2020how,\n title={How Context Affects Language Models' Factual Predictions},\n author={Fabio Petroni and Patrick Lewis and Aleksandra Piktus and Tim Rockt{\"a}schel and Yuxiang Wu and Alexander H. Miller and Sebastian Riedel},\n booktitle={Automated Knowledge Base Construction},\n year={2020},\n url={https://openreview.net/forum?id=025X0zPfn}\n}\n", "homepage": "https://github.com/facebookresearch/LAMA", "license": "The Creative Commons Attribution-Noncommercial 4.0 International License. see https://github.com/facebookresearch/LAMA/blob/master/LICENSE", "features": {"pred": {"dtype": "string", "id": null, "_type": "Value"}, "sub": {"dtype": "string", "id": null, "_type": "Value"}, "obj": {"dtype": "string", "id": null, "_type": "Value"}, "evidences": {"dtype": "string", "id": null, "_type": "Value"}, "judgments": {"dtype": "string", "id": null, "_type": "Value"}, "sub_w": {"dtype": "string", "id": null, "_type": "Value"}, "sub_label": {"dtype": "string", "id": null, "_type": "Value"}, "sub_aliases": {"dtype": "string", "id": null, "_type": "Value"}, "obj_w": {"dtype": "string", "id": null, "_type": "Value"}, "obj_label": {"dtype": "string", "id": null, "_type": "Value"}, "obj_aliases": {"dtype": "string", "id": null, "_type": "Value"}, "uuid": {"dtype": "string", "id": null, "_type": "Value"}, "masked_sentence": {"dtype": "string", "id": null, "_type": "Value"}, "template": {"dtype": "string", "id": null, "_type": "Value"}, "template_negated": {"dtype": "string", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "builder_name": "lama", "config_name": "google_re", "version": {"version_str": "1.1.0", "description": null, "major": 1, "minor": 1, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 7638657, "num_examples": 6106, "dataset_name": "lama"}}, "download_checksums": {"https://dl.fbaipublicfiles.com/LAMA/negated_data.tar.gz": {"num_bytes": 74639115, "checksum": "1a151058e6608e47983ea4c99c50bb69248c1c0763a04a3793b0a0b657aa0b61"}}, "download_size": 74639115, "post_processing_size": null, "dataset_size": 7638657, "size_in_bytes": 82277772}, "conceptnet": {"description": "LAMA is a dataset used to probe and analyze the factual and commonsense knowledge contained in pretrained language models. See https://github.com/facebookresearch/LAMA.\n", "citation": "@inproceedings{petroni2019language,\n title={Language Models as Knowledge Bases?},\n author={F. Petroni, T. Rockt{\"{a}}schel, A. H. Miller, P. Lewis, A. Bakhtin, Y. Wu and S. Riedel},\n booktitle={In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing (EMNLP), 2019},\n year={2019}\n}\n@inproceedings{petroni2020how,\n title={How Context Affects Language Models' Factual Predictions},\n author={Fabio Petroni and Patrick Lewis and Aleksandra Piktus and Tim Rockt{\"a}schel and Yuxiang Wu and Alexander H. Miller and Sebastian Riedel},\n booktitle={Automated Knowledge Base Construction},\n year={2020},\n url={https://openreview.net/forum?id=025X0zPfn}\n}\n", "homepage": "https://github.com/facebookresearch/LAMA", "license": "The Creative Commons Attribution-Noncommercial 4.0 International License. see https://github.com/facebookresearch/LAMA/blob/master/LICENSE", "features": {"uuid": {"dtype": "string", "id": null, "_type": "Value"}, "sub": {"dtype": "string", "id": null, "_type": "Value"}, "obj": {"dtype": "string", "id": null, "_type": "Value"}, "pred": {"dtype": "string", "id": null, "_type": "Value"}, "obj_label": {"dtype": "string", "id": null, "_type": "Value"}, "masked_sentence": {"dtype": "string", "id": null, "_type": "Value"}, "negated": {"dtype": "string", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "builder_name": "lama", "config_name": "conceptnet", "version": {"version_str": "1.1.0", "description": null, "major": 1, "minor": 1, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 4130000, "num_examples": 29774, "dataset_name": "lama"}}, "download_checksums": {"https://dl.fbaipublicfiles.com/LAMA/negated_data.tar.gz": {"num_bytes": 74639115, "checksum": "1a151058e6608e47983ea4c99c50bb69248c1c0763a04a3793b0a0b657aa0b61"}}, "download_size": 74639115, "post_processing_size": null, "dataset_size": 4130000, "size_in_bytes": 78769115}}
dummy/conceptnet/1.1.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:d5f303a647a30c6e321ddb369583fa94d1748ad0013003b11da8a6a9a5febe90
+ size 1591
dummy/google_re/1.1.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a738f8b3767f50b14c08589889115e564ce419ce4dc54ecceeab0b422f5a3b8c
+ size 8658
dummy/squad/1.1.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:f86d9c501587632ed931bcf9079227e850ead223e733f8b1bcb849c8d03451fe
+ size 1672
dummy/trex/1.1.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a27203dba9d33e498c47575e0166d08a2c8f7fa3a83f680ab835d0bfa7073fe4
+ size 2167
lama.py ADDED
@@ -0,0 +1,345 @@
+ # coding=utf-8
+ # Copyright 2020 The HuggingFace Datasets Authors and the current dataset script contributor.
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ #     http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+ """The LAMA Dataset"""
+
+ from __future__ import absolute_import, division, print_function
+
+ import glob
+ import json
+ import os
+
+ import datasets
+
+
+ _CITATION = """@inproceedings{petroni2019language,
+ title={Language Models as Knowledge Bases?},
+ author={F. Petroni, T. Rockt{\"{a}}schel, A. H. Miller, P. Lewis, A. Bakhtin, Y. Wu and S. Riedel},
+ booktitle={In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing (EMNLP), 2019},
+ year={2019}
+ }
+ @inproceedings{petroni2020how,
+ title={How Context Affects Language Models' Factual Predictions},
+ author={Fabio Petroni and Patrick Lewis and Aleksandra Piktus and Tim Rockt{\"a}schel and Yuxiang Wu and Alexander H. Miller and Sebastian Riedel},
+ booktitle={Automated Knowledge Base Construction},
+ year={2020},
+ url={https://openreview.net/forum?id=025X0zPfn}
+ }
+ """
+
+
+ _DESCRIPTION = """LAMA is a dataset used to probe and analyze the factual and commonsense knowledge contained in pretrained language models. See https://github.com/facebookresearch/LAMA.
+ """
+
+ _HOMEPAGE = "https://github.com/facebookresearch/LAMA"
+
+ _LICENSE = "The Creative Commons Attribution-Noncommercial 4.0 International License. see https://github.com/facebookresearch/LAMA/blob/master/LICENSE"
+
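+ # Every config downloads the same negated_data.tar.gz archive; the configs
+ # differ only in which files are read out of it (see _split_generators below).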
+ _URLs = {
+     "trex": "https://dl.fbaipublicfiles.com/LAMA/negated_data.tar.gz",
+     "squad": "https://dl.fbaipublicfiles.com/LAMA/negated_data.tar.gz",
+     "google_re": "https://dl.fbaipublicfiles.com/LAMA/negated_data.tar.gz",
+     "conceptnet": "https://dl.fbaipublicfiles.com/LAMA/negated_data.tar.gz",
+ }
+
+
+ class Lama(datasets.GeneratorBasedBuilder):
+     """Lama Dataset"""
+
+     VERSION = datasets.Version("1.1.0")
+
+     BUILDER_CONFIGS = [
+         datasets.BuilderConfig(name="trex", version=VERSION, description="The TRex part of the Lama dataset"),
+         datasets.BuilderConfig(name="squad", version=VERSION, description="The Squad part of the Lama dataset"),
+         datasets.BuilderConfig(
+             name="google_re", version=VERSION, description="The Google_re part of the Lama dataset"
+         ),
+         datasets.BuilderConfig(
+             name="conceptnet", version=VERSION, description="The Conceptnet part of the Lama dataset"
+         ),
+     ]
+
+     DEFAULT_CONFIG_NAME = "trex"
+
+     def _info(self):
+         if self.config.name == "trex":
+             features = datasets.Features(
+                 {
+                     "uuid": datasets.Value("string"),
+                     "obj_uri": datasets.Value("string"),
+                     "obj_label": datasets.Value("string"),
+                     "sub_uri": datasets.Value("string"),
+                     "sub_label": datasets.Value("string"),
+                     "predicate_id": datasets.Value("string"),
+                     "sub_surface": datasets.Value("string"),
+                     "obj_surface": datasets.Value("string"),
+                     "masked_sentence": datasets.Value("string"),
+                     "template": datasets.Value("string"),
+                     "template_negated": datasets.Value("string"),
+                     "label": datasets.Value("string"),
+                     "description": datasets.Value("string"),
+                     "type": datasets.Value("string"),
+                 }
+             )
+             return datasets.DatasetInfo(
+                 description=_DESCRIPTION,
+                 features=features,
+                 supervised_keys=None,
+                 homepage=_HOMEPAGE,
+                 license=_LICENSE,
+                 citation=_CITATION,
+             )
+         elif self.config.name == "conceptnet":
+             features = datasets.Features(
+                 {
+                     "uuid": datasets.Value("string"),
+                     "sub": datasets.Value("string"),
+                     "obj": datasets.Value("string"),
+                     "pred": datasets.Value("string"),
+                     "obj_label": datasets.Value("string"),
+                     "masked_sentence": datasets.Value("string"),
+                     "negated": datasets.Value("string"),
+                 }
+             )
+             return datasets.DatasetInfo(
+                 description=_DESCRIPTION,
+                 features=features,
+                 supervised_keys=None,
+                 homepage=_HOMEPAGE,
+                 license=_LICENSE,
+                 citation=_CITATION,
+             )
+         elif self.config.name == "squad":
+             features = datasets.Features(
+                 {
+                     "id": datasets.Value("string"),
+                     "sub_label": datasets.Value("string"),
+                     "obj_label": datasets.Value("string"),
+                     "negated": datasets.Value("string"),
+                     "masked_sentence": datasets.Value("string"),
+                 }
+             )
+             return datasets.DatasetInfo(
+                 description=_DESCRIPTION,
+                 features=features,
+                 supervised_keys=None,
+                 homepage=_HOMEPAGE,
+                 license=_LICENSE,
+                 citation=_CITATION,
+             )
+         elif self.config.name == "google_re":
+             features = datasets.Features(
+                 {
+                     "pred": datasets.Value("string"),
+                     "sub": datasets.Value("string"),
+                     "obj": datasets.Value("string"),
+                     "evidences": datasets.Value("string"),
+                     "judgments": datasets.Value("string"),
+                     "sub_w": datasets.Value("string"),
+                     "sub_label": datasets.Value("string"),
+                     "sub_aliases": datasets.Value("string"),
+                     "obj_w": datasets.Value("string"),
+                     "obj_label": datasets.Value("string"),
+                     "obj_aliases": datasets.Value("string"),
+                     "uuid": datasets.Value("string"),
+                     "masked_sentence": datasets.Value("string"),
+                     "template": datasets.Value("string"),
+                     "template_negated": datasets.Value("string"),
+                 }
+             )
+             return datasets.DatasetInfo(
+                 description=_DESCRIPTION,
+                 features=features,
+                 supervised_keys=None,
+                 homepage=_HOMEPAGE,
+                 license=_LICENSE,
+                 citation=_CITATION,
+             )
+
+     def _split_generators(self, dl_manager):
+         """Returns SplitGenerators."""
+         my_urls = _URLs[self.config.name]
+         data_dir = dl_manager.download_and_extract(my_urls)
+         if self.config.name == "trex":
+             return [
+                 datasets.SplitGenerator(
+                     name=datasets.Split.TRAIN,
+                     gen_kwargs={
+                         "filepath": [os.path.join(data_dir, "relations.jsonl")]
+                         + list(glob.glob(os.path.join(data_dir, "TREx", "*"))),
+                         "split": "train",
+                     },
+                 ),
+             ]
+         elif self.config.name == "google_re":
+             return [
+                 datasets.SplitGenerator(
+                     name=datasets.Split.TRAIN,
+                     gen_kwargs={
+                         "filepath": [
+                             os.path.join(data_dir, *f.split("/"))
+                             for f in [
+                                 "Google_RE/date_of_birth_test.jsonl",
+                                 "Google_RE/place_of_birth_test.jsonl",
+                                 "Google_RE/place_of_death_test.jsonl",
+                             ]
+                         ],
+                         "split": "train",
+                     },
+                 ),
+             ]
+         elif self.config.name == "conceptnet":
+             return [
+                 datasets.SplitGenerator(
+                     name=datasets.Split.TRAIN,
+                     gen_kwargs={
+                         "filepath": os.path.join(data_dir, "ConceptNet", "test.jsonl"),
+                         "split": "train",
+                     },
+                 ),
+             ]
+         elif self.config.name == "squad":
+             return [
+                 datasets.SplitGenerator(
+                     name=datasets.Split.TRAIN,
+                     gen_kwargs={
+                         "filepath": os.path.join(data_dir, "Squad", "test.jsonl"),
+                         "split": "train",
+                     },
+                 ),
+             ]
+
+     def _generate_examples(self, filepath, split):
+         """ Yields examples from the LAMA dataset. """
+         if self.config.name == "trex":
+             paths = filepath
+             relations_path = paths[0]
+             paths = paths[1:]
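+             # relations.jsonl (the first path) holds per-predicate metadata
+             # (template, label, description, type) keyed by relation name;
+             # it is merged into each example below via all_rels.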
+             all_rels = {}
+             with open(relations_path, encoding="utf-8") as f:
+                 for row in f:
+                     data = json.loads(row)
+                     all_rels[data["relation"]] = data
+             id_ = -1
+             for filepath in paths:
+                 with open(filepath, encoding="utf-8") as f:
+                     for row in f:
+                         data = json.loads(row)
+                         pred = all_rels.get(data["predicate_id"], {})
+                         for evidences in data["evidences"]:
+                             id_ += 1
+                             yield id_, {
+                                 "uuid": str(data["uuid"]),
+                                 "obj_uri": str(data["obj_uri"]),
+                                 "obj_label": str(data["obj_label"]),
+                                 "sub_uri": str(data["sub_uri"]),
+                                 "sub_label": str(data["sub_label"]),
+                                 "predicate_id": str(data["predicate_id"]),
+                                 "sub_surface": str(evidences["sub_surface"]),
+                                 "obj_surface": str(evidences["obj_surface"]),
+                                 "masked_sentence": str(evidences["masked_sentence"]),
+                                 "template": str(pred.get("template", "")),
+                                 "template_negated": str(pred.get("template_negated", "")),
+                                 "label": str(pred.get("label", "")),
+                                 "description": str(pred.get("description", "")),
+                                 "type": str(pred.get("type", "")),
+                             }
+         elif self.config.name == "conceptnet":
+             id_ = -1
+             with open(filepath, encoding="utf-8") as f:
+                 for row in f:
+                     data = json.loads(row)
+                     if data.get("negated") is not None:
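+                         # masked_sentences and negated are parallel lists, so
+                         # pair each masked sentence with its negated counterpart.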
+                         for masked_sentence, negated in zip(data["masked_sentences"], data["negated"]):
+                             id_ += 1
+                             yield id_, {
+                                 "uuid": str(data["uuid"]),
+                                 "sub": str(data.get("sub", "")),
+                                 "obj": str(data.get("obj", "")),
+                                 "pred": str(data["pred"]),
+                                 "obj_label": str(data["obj_label"]),
+                                 "masked_sentence": str(masked_sentence),
+                                 "negated": str(negated),
+                             }
+                     else:
+                         for masked_sentence in data["masked_sentences"]:
+                             id_ += 1
+                             yield id_, {
+                                 "uuid": str(data["uuid"]),
+                                 "sub": str(data.get("sub", "")),
+                                 "obj": str(data.get("obj", "")),
+                                 "pred": str(data["pred"]),
+                                 "obj_label": str(data["obj_label"]),
+                                 "masked_sentence": str(masked_sentence),
+                                 "negated": str(""),
+                             }
+         elif self.config.name == "squad":
+             id_ = -1
+             with open(filepath, encoding="utf-8") as f:
+                 for row in f:
+                     data = json.loads(row)
+                     for masked_sentence in data["masked_sentences"]:
+                         id_ += 1
+                         yield id_, {
+                             "id": str(data["id"]),
+                             "sub_label": str(data["sub_label"]),
+                             "obj_label": str(data["obj_label"]),
+                             "negated": str(data.get("negated", "")),
+                             "masked_sentence": str(masked_sentence),
+                         }
+         elif self.config.name == "google_re":
+             id_ = -1
+             paths = filepath
+             for filepath in paths:
+                 # from https://github.com/facebookresearch/LAMA/blob/master/scripts/run_experiments.py
+                 if "place_of_birth" in filepath:
+                     pred = {
+                         "relation": "place_of_birth",
+                         "template": "[X] was born in [Y] .",
+                         "template_negated": "[X] was not born in [Y] .",
+                     }
+                 elif "date_of_birth" in filepath:
+                     pred = {
+                         "relation": "date_of_birth",
+                         "template": "[X] (born [Y]).",
+                         "template_negated": "[X] (not born [Y]).",
+                     }
+                 else:
+                     pred = {
+                         "relation": "place_of_death",
+                         "template": "[X] died in [Y] .",
+                         "template_negated": "[X] did not die in [Y] .",
+                     }
+                 with open(filepath, encoding="utf-8") as f:
+                     for row in f:
+                         data = json.loads(row)
+                         for masked_sentence in data["masked_sentences"]:
+                             id_ += 1
+                             yield id_, {
+                                 "pred": str(data["pred"]),
+                                 "sub": str(data["sub"]),
+                                 "obj": str(data["obj"]),
+                                 "evidences": str(data["evidences"]),
+                                 "judgments": str(data["judgments"]),
+                                 "sub_w": str(data["sub_w"]),
+                                 "sub_label": str(data["sub_label"]),
+                                 "sub_aliases": str(data["sub_aliases"]),
+                                 "obj_w": str(data["obj_w"]),
+                                 "obj_label": str(data["obj_label"]),
+                                 "obj_aliases": str(data["obj_aliases"]),
+                                 "uuid": str(data["uuid"]),
+                                 "masked_sentence": str(masked_sentence),
+                                 "template": str(pred["template"]),
+                                 "template_negated": str(pred["template_negated"]),
+                             }