albertvillanova (HF staff) committed
Commit 2408898
1 Parent(s): 32da737

Refactor and add metadata to fever dataset (#4503)


* Refactor description, homepage and citation

* Update dataset card

* Refactor base_url and urls

* Add feverous config

* Update dataset card

* Update metadata JSON

* Update dummy data

* Remove feverous config

* Revert documentation card

* Revert metadata JSON

* Revert dummy data

Commit from https://github.com/huggingface/datasets/commit/d262b95bd17972fba4b46eecd12d5809aff0aa2d

Files changed (3)
  1. README.md +47 -21
  2. dataset_infos.json +1 -1
  3. fever.py +106 -110
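Before the diffs themselves: the three configs this commit touches are consumed through the standard `datasets` API. A minimal usage sketch, not part of the commit; the config and split names are taken from the card tables further down:

```python
# Usage sketch (not part of this commit); config and split names follow the
# dataset card tables below.
from datasets import load_dataset

fever_v1 = load_dataset("fever", "v1.0", split="train")                # claims + evidence pointers
fever_v2 = load_dataset("fever", "v2.0", split="validation")           # adversarial claims
wiki = load_dataset("fever", "wiki_pages", split="wikipedia_pages")    # evidence corpus

print(fever_v1[0]["claim"], fever_v1[0]["label"])
```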
README.md CHANGED
@@ -54,23 +54,37 @@ task_ids:
  - **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
  - **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
  - **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- - **Size of downloaded dataset files:** 1677.26 MB
- - **Size of the generated dataset:** 6959.34 MB
- - **Total amount of disk used:** 8636.60 MB

  ### Dataset Summary

- With billions of individual pages on the web providing information on almost every conceivable topic, we should have the ability to collect facts that answer almost every conceivable question. However, only a small fraction of this information is contained in structured sources (Wikidata, Freebase, etc.) – we are therefore limited by our ability to transform free-form text to structured knowledge. There is, however, another problem that has become the focus of a lot of recent research and media coverage: false information coming from unreliable sources. [1] [2]

  The FEVER workshops are a venue for work in verifiable knowledge extraction and to stimulate progress in this direction.

- It consists of claims generated by altering sentences extracted from Wikipedia and subsequently verified without knowledge of the sentence they were derived from. The claims are classified as SUPPORTED, REFUTED or NOTENOUGHINFO by annotators.

  ### Supported Tasks and Leaderboards

  The task is verification of textual claims against textual sources.

- When compared to textual entailment (TE)/natural language inference, the key difference is that in these tasks the passage to verify each claim is given, and in recent years it typically consists of a single sentence, while in verification systems it is retrieved from a large set of documents in order to form the evidence.

  ### Languages
@@ -83,8 +97,8 @@ The dataset is in English.
  #### v1.0

  - **Size of downloaded dataset files:** 42.78 MB
- - **Size of the generated dataset:** 38.39 MB
- - **Total amount of disk used:** 81.17 MB

  An example of 'train' looks as follows.
  ```
@@ -117,8 +131,8 @@ An example of 'validation' looks as follows.
  #### wiki_pages

  - **Size of downloaded dataset files:** 1634.11 MB
- - **Size of the generated dataset:** 6920.65 MB
- - **Total amount of disk used:** 8554.76 MB

  An example of 'wikipedia_pages' looks as follows.
  ```
@@ -161,21 +175,21 @@ The data fields are the same among all splits.
  #### v1.0

- | |train |unlabelled_dev|labelled_dev|paper_dev|unlabelled_test|paper_test|
- |----|-----:|-------------:|-----------:|--------:|--------------:|---------:|
- |v1.0|311431| 19998| 37566| 18999| 19998| 18567|

  #### v2.0

- | |validation|
- |----|---------:|
- |v2.0| 2384|

  #### wiki_pages

- | |wikipedia_pages|
- |----------|--------------:|
- |wiki_pages| 5416537|

  ## Dataset Creation
@@ -237,16 +251,28 @@ These data annotations incorporate material from Wikipedia, which is licensed pu
  ### Citation Information

  ```bibtex
  @inproceedings{Thorne18Fever,
   author = {Thorne, James and Vlachos, Andreas and Christodoulopoulos, Christos and Mittal, Arpit},
-  title = {{FEVER}: a Large-scale Dataset for Fact Extraction and VERification},
   booktitle = {NAACL-HLT},
   year = {2018}
  }
  ```

  ### Contributions

- Thanks to [@thomwolf](https://github.com/thomwolf), [@lhoestq](https://github.com/lhoestq), [@mariamabarham](https://github.com/mariamabarham), [@lewtun](https://github.com/lewtun) for adding this dataset.
  - **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
  - **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
  - **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

  ### Dataset Summary

+ With billions of individual pages on the web providing information on almost every conceivable topic, we should have
+ the ability to collect facts that answer almost every conceivable question. However, only a small fraction of this
+ information is contained in structured sources (Wikidata, Freebase, etc.) – we are therefore limited by our ability to
+ transform free-form text to structured knowledge. There is, however, another problem that has become the focus of a lot
+ of recent research and media coverage: false information coming from unreliable sources.

  The FEVER workshops are a venue for work in verifiable knowledge extraction and to stimulate progress in this direction.

+ - FEVER Dataset: FEVER (Fact Extraction and VERification) consists of 185,445 claims generated by altering sentences
+ extracted from Wikipedia and subsequently verified without knowledge of the sentence they were derived from. The claims
+ are classified as Supported, Refuted or NotEnoughInfo. For the first two classes, the annotators also recorded the
+ sentence(s) forming the necessary evidence for their judgment.
+
+ - FEVER 2.0 Adversarial Attacks Dataset: The FEVER 2.0 Dataset consists of 1174 claims created by the submissions of
+ participants in the Breaker phase of the 2019 shared task. Participants (Breakers) were tasked with generating
+ adversarial examples that induce classification errors for the existing systems. Breakers submitted a dataset of up to
+ 1000 instances with an equal number of instances for each of the three classes (Supported, Refuted, NotEnoughInfo).
+ Only novel claims (i.e. not contained in the original FEVER dataset) were considered as valid entries to the shared
+ task. The submissions were then manually evaluated for Correctness (grammatical, appropriately labeled and meeting the
+ FEVER annotation guidelines requirements).

  ### Supported Tasks and Leaderboards

  The task is verification of textual claims against textual sources.

+ When compared to textual entailment (TE)/natural language inference, the key difference is that in these tasks the
+ passage to verify each claim is given, and in recent years it typically consists of a single sentence, while in
+ verification systems it is retrieved from a large set of documents in order to form the evidence.

  ### Languages
  #### v1.0

  - **Size of downloaded dataset files:** 42.78 MB
+ - **Size of the generated dataset:** 38.19 MB
+ - **Total amount of disk used:** 80.96 MB

  An example of 'train' looks as follows.
  ```

  #### wiki_pages

  - **Size of downloaded dataset files:** 1634.11 MB
+ - **Size of the generated dataset:** 6918.06 MB
+ - **Total amount of disk used:** 8552.17 MB

  An example of 'wikipedia_pages' looks as follows.
  ```
  #### v1.0

+ | | train | unlabelled_dev | labelled_dev | paper_dev | unlabelled_test | paper_test |
+ |------|-------:|---------------:|-------------:|----------:|----------------:|-----------:|
+ | v1.0 | 311431 | 19998 | 37566 | 18999 | 19998 | 18567 |

  #### v2.0

+ | | validation |
+ |------|-----------:|
+ | v2.0 | 2384 |

  #### wiki_pages

+ | | wikipedia_pages |
+ |------------|----------------:|
+ | wiki_pages | 5416537 |

  ## Dataset Creation
  ### Citation Information

+ If you use the "FEVER Dataset", please cite:
  ```bibtex
  @inproceedings{Thorne18Fever,
   author = {Thorne, James and Vlachos, Andreas and Christodoulopoulos, Christos and Mittal, Arpit},
+  title = {{FEVER}: a Large-scale Dataset for Fact Extraction and {VERification}},
   booktitle = {NAACL-HLT},
   year = {2018}
  }
  ```

+ If you use the "FEVER 2.0 Adversarial Attacks Dataset", please cite:
+ ```bibtex
+ @inproceedings{Thorne19FEVER2,
+  author = {Thorne, James and Vlachos, Andreas and Cocarascu, Oana and Christodoulopoulos, Christos and Mittal, Arpit},
+  title = {The {FEVER2.0} Shared Task},
+  booktitle = {Proceedings of the Second Workshop on {Fact Extraction and VERification (FEVER)}},
+  year = {2019}
+ }
+ ```

  ### Contributions

+ Thanks to [@thomwolf](https://github.com/thomwolf), [@lhoestq](https://github.com/lhoestq),
+ [@mariamabarham](https://github.com/mariamabarham), [@lewtun](https://github.com/lewtun),
+ [@albertvillanova](https://github.com/albertvillanova) for adding this dataset.
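For reference, the split counts in the tables above can be cross-checked after loading. A small sketch, assuming the v1.0 split names from the card:

```python
# Sketch: verify the v1.0 split counts against the card's table.
from datasets import load_dataset

ds = load_dataset("fever", "v1.0")
print({split: ds[split].num_rows for split in ds})
# Per the card: train=311431, unlabelled_dev=19998, labelled_dev=37566,
# paper_dev=18999, unlabelled_test=19998, paper_test=18567
```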
dataset_infos.json CHANGED
@@ -1 +1 @@
- {"v1.0": {"description": "\nWith billions of individual pages on the web providing information on almost every conceivable topic, we should have the ability to collect facts that answer almost every conceivable question. However, only a small fraction of this information is contained in structured sources (Wikidata, Freebase, etc.) \u2013 we are therefore limited by our ability to transform free-form text to structured knowledge. There is, however, another problem that has become the focus of a lot of recent research and media coverage: false information coming from unreliable sources. [1] [2]\n\nThe FEVER workshops are a venue for work in verifiable knowledge extraction and to stimulate progress in this direction.\n\nFEVER V1.0", "citation": "\n@inproceedings{Thorne18Fever,\n author = {Thorne, James and Vlachos, Andreas and Christodoulopoulos, Christos and Mittal, Arpit},\n title = {{FEVER}: a Large-scale Dataset for Fact Extraction and VERification},\n booktitle = {NAACL-HLT},\n year = {2018}\n}\n}\n", "homepage": "https://fever.ai/", "license": "", "features": {"id": {"dtype": "int32", "id": null, "_type": "Value"}, "label": {"dtype": "string", "id": null, "_type": "Value"}, "claim": {"dtype": "string", "id": null, "_type": "Value"}, "evidence_annotation_id": {"dtype": "int32", "id": null, "_type": "Value"}, "evidence_id": {"dtype": "int32", "id": null, "_type": "Value"}, "evidence_wiki_url": {"dtype": "string", "id": null, "_type": "Value"}, "evidence_sentence_id": {"dtype": "int32", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "task_templates": null, "builder_name": "fever", "config_name": "v1.0", "version": {"version_str": "1.0.0", "description": "", "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 29591412, "num_examples": 311431, "dataset_name": "fever"}, "unlabelled_test": {"name": "unlabelled_test", "num_bytes": 1617002, "num_examples": 19998, "dataset_name": "fever"}, "unlabelled_dev": {"name": "unlabelled_dev", "num_bytes": 1548965, "num_examples": 19998, "dataset_name": "fever"}, "labelled_dev": {"name": "labelled_dev", "num_bytes": 3643157, "num_examples": 37566, "dataset_name": "fever"}, "paper_dev": {"name": "paper_dev", "num_bytes": 1821489, "num_examples": 18999, "dataset_name": "fever"}, "paper_test": {"name": "paper_test", "num_bytes": 1821668, "num_examples": 18567, "dataset_name": "fever"}}, "download_checksums": {"https://fever.ai/download/fever/train.jsonl": {"num_bytes": 33024303, "checksum": "eba7e8f87076753f8494718b9a857827af7bf73e76c9e4b75420207d26e588b6"}, "https://fever.ai/download/fever/shared_task_dev.jsonl": {"num_bytes": 4349935, "checksum": "e89865bfe1b4dd054e03dd57d7241a6fde24862905f31117cf0cd719f7c78df7"}, "https://fever.ai/download/fever/shared_task_dev_public.jsonl": {"num_bytes": 1530640, "checksum": "acda01ae5ee7e75c73909a665f465cec20704ea26e9d676cd7423ff2c8ab0e8b"}, "https://fever.ai/download/fever/shared_task_test.jsonl": {"num_bytes": 1599159, "checksum": "76dd0872d8fa1f49efe1194fe8a88b7dd4c715c77d87a142b615d4be583e1e51"}, "https://fever.ai/download/fever/paper_dev.jsonl": {"num_bytes": 2168767, "checksum": "41158707810008747946bf23471e82df53e77a513524b9e3ec1c2e674ef5ef8c"}, "https://fever.ai/download/fever/paper_test.jsonl": {"num_bytes": 2181168, "checksum": "fb7b0280a0adc2302bbb29bfb7af37274fa585de3171bcf908f180642d11d88e"}}, "download_size": 44853972, "post_processing_size": null, "dataset_size": 40043693, "size_in_bytes": 84897665}, "v2.0": {"description": "\nWith billions of individual pages on the web providing information on almost every conceivable topic, we should have the ability to collect facts that answer almost every conceivable question. However, only a small fraction of this information is contained in structured sources (Wikidata, Freebase, etc.) \u2013 we are therefore limited by our ability to transform free-form text to structured knowledge. There is, however, another problem that has become the focus of a lot of recent research and media coverage: false information coming from unreliable sources. [1] [2]\n\nThe FEVER workshops are a venue for work in verifiable knowledge extraction and to stimulate progress in this direction.\n\nFEVER V2.0", "citation": "\n@inproceedings{Thorne18Fever,\n author = {Thorne, James and Vlachos, Andreas and Christodoulopoulos, Christos and Mittal, Arpit},\n title = {{FEVER}: a Large-scale Dataset for Fact Extraction and VERification},\n booktitle = {NAACL-HLT},\n year = {2018}\n}\n}\n", "homepage": "https://fever.ai/", "license": "", "features": {"id": {"dtype": "int32", "id": null, "_type": "Value"}, "label": {"dtype": "string", "id": null, "_type": "Value"}, "claim": {"dtype": "string", "id": null, "_type": "Value"}, "evidence_annotation_id": {"dtype": "int32", "id": null, "_type": "Value"}, "evidence_id": {"dtype": "int32", "id": null, "_type": "Value"}, "evidence_wiki_url": {"dtype": "string", "id": null, "_type": "Value"}, "evidence_sentence_id": {"dtype": "int32", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "task_templates": null, "builder_name": "fever", "config_name": "v2.0", "version": {"version_str": "2.0.0", "description": "", "major": 2, "minor": 0, "patch": 0}, "splits": {"validation": {"name": "validation", "num_bytes": 306243, "num_examples": 2384, "dataset_name": "fever"}}, "download_checksums": {"https://fever.ai/download/fever2.0/fever2-fixers-dev.jsonl": {"num_bytes": 392466, "checksum": "43c3df77cf9bf6022b9356ed1d66df6d8a9a0126c4e4b8d155742e3a9988c814"}}, "download_size": 392466, "post_processing_size": null, "dataset_size": 306243, "size_in_bytes": 698709}, "wiki_pages": {"description": "\nWith billions of individual pages on the web providing information on almost every conceivable topic, we should have the ability to collect facts that answer almost every conceivable question. However, only a small fraction of this information is contained in structured sources (Wikidata, Freebase, etc.) \u2013 we are therefore limited by our ability to transform free-form text to structured knowledge. There is, however, another problem that has become the focus of a lot of recent research and media coverage: false information coming from unreliable sources. [1] [2]\n\nThe FEVER workshops are a venue for work in verifiable knowledge extraction and to stimulate progress in this direction.\n\nWikipedia pages", "citation": "\n@inproceedings{Thorne18Fever,\n author = {Thorne, James and Vlachos, Andreas and Christodoulopoulos, Christos and Mittal, Arpit},\n title = {{FEVER}: a Large-scale Dataset for Fact Extraction and VERification},\n booktitle = {NAACL-HLT},\n year = {2018}\n}\n}\n", "homepage": "https://fever.ai/", "license": "", "features": {"id": {"dtype": "string", "id": null, "_type": "Value"}, "text": {"dtype": "string", "id": null, "_type": "Value"}, "lines": {"dtype": "string", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "task_templates": null, "builder_name": "fever", "config_name": "wiki_pages", "version": {"version_str": "1.0.0", "description": "", "major": 1, "minor": 0, "patch": 0}, "splits": {"wikipedia_pages": {"name": "wikipedia_pages", "num_bytes": 7254115038, "num_examples": 5416537, "dataset_name": "fever"}}, "download_checksums": {"https://fever.ai/download/fever/wiki-pages.zip": {"num_bytes": 1713485474, "checksum": "4b06d95da6adf7fe02d2796176c670dacccb21348da89cba4c50676ab99665f2"}}, "download_size": 1713485474, "post_processing_size": null, "dataset_size": 7254115038, "size_in_bytes": 8967600512}}
+ {"v1.0": {"description": "FEVER v1.0\nFEVER (Fact Extraction and VERification) consists of 185,445 claims generated by altering sentences extracted from Wikipedia and subsequently verified without knowledge of the sentence they were derived from. The claims are classified as Supported, Refuted or NotEnoughInfo. For the first two classes, the annotators also recorded the sentence(s) forming the necessary evidence for their judgment.", "citation": "@inproceedings{Thorne18Fever,\n author = {Thorne, James and Vlachos, Andreas and Christodoulopoulos, Christos and Mittal, Arpit},\n title = {{FEVER}: a Large-scale Dataset for Fact Extraction and {VERification}},\n booktitle = {NAACL-HLT},\n year = {2018}\n}", "homepage": "https://fever.ai/dataset/fever.html", "license": "", "features": {"id": {"dtype": "int32", "id": null, "_type": "Value"}, "label": {"dtype": "string", "id": null, "_type": "Value"}, "claim": {"dtype": "string", "id": null, "_type": "Value"}, "evidence_annotation_id": {"dtype": "int32", "id": null, "_type": "Value"}, "evidence_id": {"dtype": "int32", "id": null, "_type": "Value"}, "evidence_wiki_url": {"dtype": "string", "id": null, "_type": "Value"}, "evidence_sentence_id": {"dtype": "int32", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "task_templates": null, "builder_name": "fever", "config_name": "v1.0", "version": {"version_str": "1.0.0", "description": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 29591412, "num_examples": 311431, "dataset_name": "fever"}, "labelled_dev": {"name": "labelled_dev", "num_bytes": 3643157, "num_examples": 37566, "dataset_name": "fever"}, "unlabelled_dev": {"name": "unlabelled_dev", "num_bytes": 1548965, "num_examples": 19998, "dataset_name": "fever"}, "unlabelled_test": {"name": "unlabelled_test", "num_bytes": 1617002, "num_examples": 19998, "dataset_name": "fever"}, "paper_dev": {"name": "paper_dev", "num_bytes": 1821489, "num_examples": 18999, "dataset_name": "fever"}, "paper_test": {"name": "paper_test", "num_bytes": 1821668, "num_examples": 18567, "dataset_name": "fever"}}, "download_checksums": {"https://fever.ai/download/fever/train.jsonl": {"num_bytes": 33024303, "checksum": "eba7e8f87076753f8494718b9a857827af7bf73e76c9e4b75420207d26e588b6"}, "https://fever.ai/download/fever/shared_task_dev.jsonl": {"num_bytes": 4349935, "checksum": "e89865bfe1b4dd054e03dd57d7241a6fde24862905f31117cf0cd719f7c78df7"}, "https://fever.ai/download/fever/shared_task_dev_public.jsonl": {"num_bytes": 1530640, "checksum": "acda01ae5ee7e75c73909a665f465cec20704ea26e9d676cd7423ff2c8ab0e8b"}, "https://fever.ai/download/fever/shared_task_test.jsonl": {"num_bytes": 1599159, "checksum": "76dd0872d8fa1f49efe1194fe8a88b7dd4c715c77d87a142b615d4be583e1e51"}, "https://fever.ai/download/fever/paper_dev.jsonl": {"num_bytes": 2168767, "checksum": "41158707810008747946bf23471e82df53e77a513524b9e3ec1c2e674ef5ef8c"}, "https://fever.ai/download/fever/paper_test.jsonl": {"num_bytes": 2181168, "checksum": "fb7b0280a0adc2302bbb29bfb7af37274fa585de3171bcf908f180642d11d88e"}}, "download_size": 44853972, "post_processing_size": null, "dataset_size": 40043693, "size_in_bytes": 84897665}, "v2.0": {"description": "FEVER v2.0:\nThe FEVER 2.0 Dataset consists of 1174 claims created by the submissions of participants in the Breaker phase of the 2019 shared task. Participants (Breakers) were tasked with generating adversarial examples that induce classification errors for the existing systems. Breakers submitted a dataset of up to 1000 instances with an equal number of instances for each of the three classes (Supported, Refuted, NotEnoughInfo). Only novel claims (i.e. not contained in the original FEVER dataset) were considered as valid entries to the shared task. The submissions were then manually evaluated for Correctness (grammatical, appropriately labeled and meeting the FEVER annotation guidelines requirements).", "citation": "@inproceedings{Thorne19FEVER2,\n author = {Thorne, James and Vlachos, Andreas and Cocarascu, Oana and Christodoulopoulos, Christos and Mittal, Arpit},\n title = {The {FEVER2.0} Shared Task},\n booktitle = {Proceedings of the Second Workshop on {Fact Extraction and VERification (FEVER)}},\n year = {2019}\n}", "homepage": "https://fever.ai/dataset/adversarial.html", "license": "", "features": {"id": {"dtype": "int32", "id": null, "_type": "Value"}, "label": {"dtype": "string", "id": null, "_type": "Value"}, "claim": {"dtype": "string", "id": null, "_type": "Value"}, "evidence_annotation_id": {"dtype": "int32", "id": null, "_type": "Value"}, "evidence_id": {"dtype": "int32", "id": null, "_type": "Value"}, "evidence_wiki_url": {"dtype": "string", "id": null, "_type": "Value"}, "evidence_sentence_id": {"dtype": "int32", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "task_templates": null, "builder_name": "fever", "config_name": "v2.0", "version": {"version_str": "2.0.0", "description": null, "major": 2, "minor": 0, "patch": 0}, "splits": {"validation": {"name": "validation", "num_bytes": 306243, "num_examples": 2384, "dataset_name": "fever"}}, "download_checksums": {"https://fever.ai/download/fever2.0/fever2-fixers-dev.jsonl": {"num_bytes": 392466, "checksum": "43c3df77cf9bf6022b9356ed1d66df6d8a9a0126c4e4b8d155742e3a9988c814"}}, "download_size": 392466, "post_processing_size": null, "dataset_size": 306243, "size_in_bytes": 698709}, "wiki_pages": {"description": "Wikipedia pages for FEVER v1.0:\nFEVER (Fact Extraction and VERification) consists of 185,445 claims generated by altering sentences extracted from Wikipedia and subsequently verified without knowledge of the sentence they were derived from. The claims are classified as Supported, Refuted or NotEnoughInfo. For the first two classes, the annotators also recorded the sentence(s) forming the necessary evidence for their judgment.", "citation": "@inproceedings{Thorne18Fever,\n author = {Thorne, James and Vlachos, Andreas and Christodoulopoulos, Christos and Mittal, Arpit},\n title = {{FEVER}: a Large-scale Dataset for Fact Extraction and {VERification}},\n booktitle = {NAACL-HLT},\n year = {2018}\n}", "homepage": "https://fever.ai/dataset/fever.html", "license": "", "features": {"id": {"dtype": "string", "id": null, "_type": "Value"}, "text": {"dtype": "string", "id": null, "_type": "Value"}, "lines": {"dtype": "string", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "task_templates": null, "builder_name": "fever", "config_name": "wiki_pages", "version": {"version_str": "1.0.0", "description": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"wikipedia_pages": {"name": "wikipedia_pages", "num_bytes": 7254115038, "num_examples": 5416537, "dataset_name": "fever"}}, "download_checksums": {"https://fever.ai/download/fever/wiki-pages.zip": {"num_bytes": 1713485474, "checksum": "4b06d95da6adf7fe02d2796176c670dacccb21348da89cba4c50676ab99665f2"}}, "download_size": 1713485474, "post_processing_size": null, "dataset_size": 7254115038, "size_in_bytes": 8967600512}}
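The `download_checksums` entries above pair each URL with a file size and what appears to be a SHA-256 digest (64 hex characters). A sketch of verifying a manually downloaded file against them; the local filenames are hypothetical:

```python
import hashlib
import json

# Sketch: check a downloaded FEVER file against the checksum recorded in
# dataset_infos.json (local paths here are hypothetical).
with open("dataset_infos.json", encoding="utf-8") as f:
    infos = json.load(f)

url = "https://fever.ai/download/fever/paper_dev.jsonl"
expected = infos["v1.0"]["download_checksums"][url]["checksum"]

with open("paper_dev.jsonl", "rb") as f:
    actual = hashlib.sha256(f.read()).hexdigest()

assert actual == expected, "checksum mismatch"
```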
fever.py CHANGED
@@ -16,40 +16,31 @@
  # Lint as: python3
  """FEVER dataset."""

-
  import json
  import os

  import datasets


- _CITATION = """
- @inproceedings{Thorne18Fever,
-  author = {Thorne, James and Vlachos, Andreas and Christodoulopoulos, Christos and Mittal, Arpit},
-  title = {{FEVER}: a Large-scale Dataset for Fact Extraction and VERification},
-  booktitle = {NAACL-HLT},
-  year = {2018}
- }
- }
- """
-
- _DESCRIPTION = """
- With billions of individual pages on the web providing information on almost every conceivable topic, we should have the ability to collect facts that answer almost every conceivable question. However, only a small fraction of this information is contained in structured sources (Wikidata, Freebase, etc.) – we are therefore limited by our ability to transform free-form text to structured knowledge. There is, however, another problem that has become the focus of a lot of recent research and media coverage: false information coming from unreliable sources. [1] [2]
-
- The FEVER workshops are a venue for work in verifiable knowledge extraction and to stimulate progress in this direction.
- """
-
-
  class FeverConfig(datasets.BuilderConfig):
      """BuilderConfig for FEVER."""

-     def __init__(self, **kwargs):
-         """BuilderConfig for FEVER

          Args:
-             **kwargs: keyword arguments forwarded to super.
          """
-         super(FeverConfig, self).__init__(**kwargs)


  class Fever(datasets.GeneratorBasedBuilder):
@@ -58,30 +49,100 @@ class Fever(datasets.GeneratorBasedBuilder):
      BUILDER_CONFIGS = [
          FeverConfig(
              name="v1.0",
-             description="FEVER V1.0",
-             version=datasets.Version("1.0.0", ""),
          ),
          FeverConfig(
              name="v2.0",
-             description="FEVER V2.0",
-             version=datasets.Version("2.0.0", ""),
          ),
          FeverConfig(
              name="wiki_pages",
-             description="Wikipedia pages",
-             version=datasets.Version("1.0.0", ""),
          ),
      ]

      def _info(self):
-
          if self.config.name == "wiki_pages":
              features = {
                  "id": datasets.Value("string"),
                  "text": datasets.Value("string"),
                  "lines": datasets.Value("string"),
              }
-         else:
              features = {
                  "id": datasets.Value("int32"),
                  "label": datasets.Value("string"),
@@ -92,91 +153,26 @@ class Fever(datasets.GeneratorBasedBuilder):
                  "evidence_sentence_id": datasets.Value("int32"),
              }
          return datasets.DatasetInfo(
-             description=_DESCRIPTION + "\n" + self.config.description,
              features=datasets.Features(features),
-             homepage="https://fever.ai/",
-             citation=_CITATION,
          )

      def _split_generators(self, dl_manager):
          """Returns SplitGenerators."""
-         if self.config.name == "v2.0":
-             base_url = "https://fever.ai/download/fever2.0"
-             urls = f"{base_url}/fever2-fixers-dev.jsonl"
-             dl_path = dl_manager.download_and_extract(urls)
-             return [
-                 datasets.SplitGenerator(
-                     name=datasets.Split.VALIDATION,
-                     gen_kwargs={
-                         "filepath": dl_path,
-                     },
-                 )
-             ]
-         elif self.config.name == "v1.0":
-             base_url = "https://fever.ai/download/fever"
-             urls = {
-                 "train": f"{base_url}/train.jsonl",
-                 "labelled_dev": f"{base_url}/shared_task_dev.jsonl",
-                 "unlabelled_dev": f"{base_url}/shared_task_dev_public.jsonl",
-                 "unlabelled_test": f"{base_url}/shared_task_test.jsonl",
-                 "paper_dev": f"{base_url}/paper_dev.jsonl",
-                 "paper_test": f"{base_url}/paper_test.jsonl",
-             }
-             dl_path = dl_manager.download_and_extract(urls)
-             return [
-                 datasets.SplitGenerator(
-                     name=datasets.Split.TRAIN,
-                     gen_kwargs={
-                         "filepath": dl_path["train"],
-                     },
-                 ),
-                 datasets.SplitGenerator(
-                     name="unlabelled_test",
-                     gen_kwargs={
-                         "filepath": dl_path["unlabelled_test"],
-                     },
-                 ),
-                 datasets.SplitGenerator(
-                     name="unlabelled_dev",
-                     gen_kwargs={
-                         "filepath": dl_path["unlabelled_dev"],
-                     },
-                 ),
-                 datasets.SplitGenerator(
-                     name="labelled_dev",
-                     gen_kwargs={
-                         "filepath": dl_path["labelled_dev"],
-                     },
-                 ),
-                 datasets.SplitGenerator(
-                     name="paper_dev",
-                     gen_kwargs={
-                         "filepath": dl_path["paper_dev"],
-                     },
-                 ),
-                 datasets.SplitGenerator(
-                     name="paper_test",
-                     gen_kwargs={
-                         "filepath": dl_path["paper_test"],
-                     },
-                 ),
-             ]
-         elif self.config.name == "wiki_pages":
-             base_url = "https://fever.ai/download/fever"
-             urls = f"{base_url}/wiki-pages.zip"
-             dl_path = dl_manager.download_and_extract(urls)
-             files = sorted(os.listdir(os.path.join(dl_path, "wiki-pages")))
-             file_paths = [os.path.join(dl_path, "wiki-pages", file) for file in files]
-             return [
-                 datasets.SplitGenerator(
-                     name="wikipedia_pages",
-                     gen_kwargs={
-                         "filepath": file_paths,
-                     },
-                 ),
-             ]
-         else:
-             raise ValueError("config name not found")

      def _generate_examples(self, filepath):
          """Yields examples."""
  # Lint as: python3
  """FEVER dataset."""

  import json
  import os
+ import textwrap

  import datasets


  class FeverConfig(datasets.BuilderConfig):
      """BuilderConfig for FEVER."""

+     def __init__(self, homepage: str = None, citation: str = None, base_url: str = None, urls: dict = None, **kwargs):
+         """BuilderConfig for FEVER.

          Args:
+             homepage (`str`): Homepage.
+             citation (`str`): Citation reference.
+             base_url (`str`): Data base URL that precedes all data URLs.
+             urls (`dict`): Data URLs (each URL will be preceded by `base_url`).
+             **kwargs: keyword arguments forwarded to super.
          """
+         super().__init__(**kwargs)
+         self.homepage = homepage
+         self.citation = citation
+         self.base_url = base_url
+         self.urls = {key: f"{base_url}/{url}" for key, url in urls.items()}


  class Fever(datasets.GeneratorBasedBuilder):
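Given the `FeverConfig.__init__` above, every relative URL is prefixed with `base_url` at construction time. A quick illustrative check; the `demo` config name is hypothetical, not one of the real configs:

```python
import datasets

# Illustration only: FeverConfig joins base_url with each relative URL.
config = FeverConfig(
    name="demo",  # hypothetical config name, for illustration
    version=datasets.Version("1.0.0"),
    base_url="https://fever.ai/download/fever",
    urls={"train": "train.jsonl"},
)
assert config.urls == {"train": "https://fever.ai/download/fever/train.jsonl"}
```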
      BUILDER_CONFIGS = [
          FeverConfig(
              name="v1.0",
+             version=datasets.Version("1.0.0"),
+             description=textwrap.dedent(
+                 "FEVER v1.0\n"
+                 "FEVER (Fact Extraction and VERification) consists of 185,445 claims generated by altering sentences "
+                 "extracted from Wikipedia and subsequently verified without knowledge of the sentence they were "
+                 "derived from. The claims are classified as Supported, Refuted or NotEnoughInfo. For the first two "
+                 "classes, the annotators also recorded the sentence(s) forming the necessary evidence for their "
+                 "judgment."
+             ),
+             homepage="https://fever.ai/dataset/fever.html",
+             citation=textwrap.dedent(
+                 """\
+                 @inproceedings{Thorne18Fever,
+                  author = {Thorne, James and Vlachos, Andreas and Christodoulopoulos, Christos and Mittal, Arpit},
+                  title = {{FEVER}: a Large-scale Dataset for Fact Extraction and {VERification}},
+                  booktitle = {NAACL-HLT},
+                  year = {2018}
+                 }"""
+             ),
+             base_url="https://fever.ai/download/fever",
+             urls={
+                 datasets.Split.TRAIN: "train.jsonl",
+                 "labelled_dev": "shared_task_dev.jsonl",
+                 "unlabelled_dev": "shared_task_dev_public.jsonl",
+                 "unlabelled_test": "shared_task_test.jsonl",
+                 "paper_dev": "paper_dev.jsonl",
+                 "paper_test": "paper_test.jsonl",
+             },
          ),
          FeverConfig(
              name="v2.0",
+             version=datasets.Version("2.0.0"),
+             description=textwrap.dedent(
+                 "FEVER v2.0:\n"
+                 "The FEVER 2.0 Dataset consists of 1174 claims created by the submissions of participants in the "
+                 "Breaker phase of the 2019 shared task. Participants (Breakers) were tasked with generating "
+                 "adversarial examples that induce classification errors for the existing systems. Breakers submitted "
+                 "a dataset of up to 1000 instances with an equal number of instances for each of the three classes "
+                 "(Supported, Refuted, NotEnoughInfo). Only novel claims (i.e. not contained in the original FEVER "
+                 "dataset) were considered as valid entries to the shared task. The submissions were then manually "
+                 "evaluated for Correctness (grammatical, appropriately labeled and meeting the FEVER annotation "
+                 "guidelines requirements)."
+             ),
+             homepage="https://fever.ai/dataset/adversarial.html",
+             citation=textwrap.dedent(
+                 """\
+                 @inproceedings{Thorne19FEVER2,
+                  author = {Thorne, James and Vlachos, Andreas and Cocarascu, Oana and Christodoulopoulos, Christos and Mittal, Arpit},
+                  title = {The {FEVER2.0} Shared Task},
+                  booktitle = {Proceedings of the Second Workshop on {Fact Extraction and VERification (FEVER)}},
+                  year = {2019}
+                 }"""
+             ),
+             base_url="https://fever.ai/download/fever2.0",
+             urls={
+                 datasets.Split.VALIDATION: "fever2-fixers-dev.jsonl",
+             },
          ),
          FeverConfig(
              name="wiki_pages",
+             version=datasets.Version("1.0.0"),
+             description=textwrap.dedent(
+                 "Wikipedia pages for FEVER v1.0:\n"
+                 "FEVER (Fact Extraction and VERification) consists of 185,445 claims generated by altering sentences "
+                 "extracted from Wikipedia and subsequently verified without knowledge of the sentence they were "
+                 "derived from. The claims are classified as Supported, Refuted or NotEnoughInfo. For the first two "
+                 "classes, the annotators also recorded the sentence(s) forming the necessary evidence for their "
+                 "judgment."
+             ),
+             homepage="https://fever.ai/dataset/fever.html",
+             citation=textwrap.dedent(
+                 """\
+                 @inproceedings{Thorne18Fever,
+                  author = {Thorne, James and Vlachos, Andreas and Christodoulopoulos, Christos and Mittal, Arpit},
+                  title = {{FEVER}: a Large-scale Dataset for Fact Extraction and {VERification}},
+                  booktitle = {NAACL-HLT},
+                  year = {2018}
+                 }"""
+             ),
+             base_url="https://fever.ai/download/fever",
+             urls={
+                 "wikipedia_pages": "wiki-pages.zip",
+             },
          ),
      ]

      def _info(self):
          if self.config.name == "wiki_pages":
              features = {
                  "id": datasets.Value("string"),
                  "text": datasets.Value("string"),
                  "lines": datasets.Value("string"),
              }
+         elif self.config.name == "v1.0" or self.config.name == "v2.0":
              features = {
                  "id": datasets.Value("int32"),
                  "label": datasets.Value("string"),

                  "evidence_sentence_id": datasets.Value("int32"),
              }
          return datasets.DatasetInfo(
+             description=self.config.description,
              features=datasets.Features(features),
+             homepage=self.config.homepage,
+             citation=self.config.citation,
          )

      def _split_generators(self, dl_manager):
          """Returns SplitGenerators."""
+         dl_paths = dl_manager.download_and_extract(self.config.urls)
+         return [
+             datasets.SplitGenerator(
+                 name=split,
+                 gen_kwargs={
+                     "filepath": dl_paths[split]
+                     if self.config.name != "wiki_pages"
+                     else dl_manager.iter_files(os.path.join(dl_paths[split], "wiki-pages")),
+                 },
+             )
+             for split in dl_paths.keys()
+         ]

      def _generate_examples(self, filepath):
          """Yields examples."""