Dataset:
fever

Task Categories: text-classification
Languages: en
Multilinguality: monolingual
Size Categories: 100K<n<1M
Language Creators: found
Annotations Creators: crowdsourced
Source Datasets: extended|wikipedia

Dataset Card for "fever"

Dataset Summary

With billions of individual pages on the web providing information on almost every conceivable topic, we should have the ability to collect facts that answer almost every conceivable question. However, only a small fraction of this information is contained in structured sources (Wikidata, Freebase, etc.) – we are therefore limited by our ability to transform free-form text to structured knowledge. There is, however, another problem that has become the focus of a lot of recent research and media coverage: false information coming from unreliable sources. [1] [2]

The FEVER workshops are a venue for work on verifiable knowledge extraction and a means to stimulate progress in this direction.

The dataset consists of claims generated by altering sentences extracted from Wikipedia, which were subsequently verified without knowledge of the sentences they were derived from. Annotators classified each claim as SUPPORTED, REFUTED or NOTENOUGHINFO.

Supported Tasks and Leaderboards

The task is verification of textual claims against textual sources.

Compared to textual entailment (TE)/natural language inference, the key difference is that in those tasks the passage against which each claim is verified is given, and in recent years it has typically consisted of a single sentence, whereas in verification systems the evidence is retrieved from a large set of documents.

Languages

The dataset is in English.

Dataset Structure

We show detailed information for the three configurations of the dataset: v1.0, v2.0 and wiki_pages.

Data Instances

v1.0

  • Size of downloaded dataset files: 42.78 MB
  • Size of the generated dataset: 38.39 MB
  • Total amount of disk used: 81.17 MB

An example of 'train' looks as follows.

{'claim': 'Nikolaj Coster-Waldau worked with the Fox Broadcasting Company.',
 'evidence_wiki_url': 'Nikolaj_Coster-Waldau',
 'label': 'SUPPORTS',
 'id': 75397,
 'evidence_id': 104971,
 'evidence_sentence_id': 7,
 'evidence_annotation_id': 92206}
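As a minimal sketch, a record like the one above can be turned into a (claim, class id) pair for text classification. The label-to-id mapping below is an arbitrary choice for illustration, not part of the dataset:

```python
# Map FEVER label strings (as they appear in this card's examples)
# to integer class ids. The ordering here is an arbitrary convention.
LABEL2ID = {"SUPPORTS": 0, "REFUTES": 1, "NOT ENOUGH INFO": 2}

def encode_example(example):
    """Return (claim text, integer class id) for one FEVER record."""
    return example["claim"], LABEL2ID[example["label"]]

example = {
    "claim": "Nikolaj Coster-Waldau worked with the Fox Broadcasting Company.",
    "evidence_wiki_url": "Nikolaj_Coster-Waldau",
    "label": "SUPPORTS",
    "id": 75397,
    "evidence_id": 104971,
    "evidence_sentence_id": 7,
    "evidence_annotation_id": 92206,
}
claim, label_id = encode_example(example)
```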

v2.0

  • Size of downloaded dataset files: 0.37 MB
  • Size of the generated dataset: 0.29 MB
  • Total amount of disk used: 0.67 MB

An example of 'validation' looks as follows.

{'claim': "There is a convicted statutory rapist called Chinatown's writer.",
  'evidence_wiki_url': '',
  'label': 'NOT ENOUGH INFO',
  'id': 500000,
  'evidence_id': -1,
  'evidence_sentence_id': -1,
  'evidence_annotation_id': 269158}
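Judging from the examples in this card, NOT ENOUGH INFO records use sentinel values (-1 for the numeric evidence fields, an empty string for evidence_wiki_url). A small sketch of filtering on that assumption:

```python
def has_evidence(example):
    """True if the record carries retrievable evidence.

    Assumes the sentinel convention seen in this card's examples:
    evidence_id == -1 and an empty evidence_wiki_url mark records
    without evidence.
    """
    return example["evidence_id"] != -1 and bool(example["evidence_wiki_url"])

record = {
    "claim": "There is a convicted statutory rapist called Chinatown's writer.",
    "evidence_wiki_url": "",
    "label": "NOT ENOUGH INFO",
    "id": 500000,
    "evidence_id": -1,
    "evidence_sentence_id": -1,
    "evidence_annotation_id": 269158,
}
keep = has_evidence(record)
```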

wiki_pages

  • Size of downloaded dataset files: 1634.11 MB
  • Size of the generated dataset: 6920.65 MB
  • Total amount of disk used: 8554.76 MB

An example of 'wikipedia_pages' looks as follows.

{'text': 'The following are the football -LRB- soccer -RRB- events of the year 1928 throughout the world . ',
  'lines': '0\tThe following are the football -LRB- soccer -RRB- events of the year 1928 throughout the world .\n1\t',
  'id': '1928_in_association_football'}
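The 'lines' field packs the page's sentences into one string. Based on the example above, each sentence appears as '<id>\t<text>' with sentences separated by '\n'; a minimal parser under that assumption:

```python
def parse_lines(lines_field):
    """Split a wiki_pages 'lines' field into (sentence_id, sentence) pairs.

    Format inferred from the example in this card: newline-separated
    entries of the form '<id>\t<sentence>'; empty trailing sentences
    are dropped.
    """
    pairs = []
    for line in lines_field.split("\n"):
        if "\t" not in line:
            continue
        sid, _, sentence = line.partition("\t")
        if sentence.strip():
            pairs.append((int(sid), sentence))
    return pairs

lines = ("0\tThe following are the football -LRB- soccer -RRB- events "
         "of the year 1928 throughout the world .\n1\t")
parsed = parse_lines(lines)
```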

Data Fields

The data fields are the same among all splits.

v1.0

  • id: an int32 feature.
  • label: a string feature.
  • claim: a string feature.
  • evidence_annotation_id: an int32 feature.
  • evidence_id: an int32 feature.
  • evidence_wiki_url: a string feature.
  • evidence_sentence_id: an int32 feature.

v2.0

  • id: an int32 feature.
  • label: a string feature.
  • claim: a string feature.
  • evidence_annotation_id: an int32 feature.
  • evidence_id: an int32 feature.
  • evidence_wiki_url: a string feature.
  • evidence_sentence_id: an int32 feature.

wiki_pages

  • id: a string feature.
  • text: a string feature.
  • lines: a string feature.

Data Splits

v1.0

        train  unlabelled_dev  labelled_dev  paper_dev  unlabelled_test  paper_test
v1.0   311431           19998         37566      18999            19998       18567

v2.0

       validation
v2.0         2384

wiki_pages

            wikipedia_pages
wiki_pages          5416537

Dataset Creation

Curation Rationale

More Information Needed

Source Data

Initial Data Collection and Normalization

More Information Needed

Who are the source language producers?

More Information Needed

Annotations

Annotation process

More Information Needed

Who are the annotators?

More Information Needed

Personal and Sensitive Information

More Information Needed

Considerations for Using the Data

Social Impact of Dataset

More Information Needed

Discussion of Biases

More Information Needed

Other Known Limitations

More Information Needed

Additional Information

Dataset Curators

More Information Needed

Licensing Information

FEVER license:

These data annotations incorporate material from Wikipedia, which is licensed pursuant to the Wikipedia Copyright Policy. These annotations are made available under the license terms described on the applicable Wikipedia article pages, or, where Wikipedia license terms are unavailable, under the Creative Commons Attribution-ShareAlike License (version 3.0), available at http://creativecommons.org/licenses/by-sa/3.0/ (collectively, the “License Terms”). You may not use these files except in compliance with the applicable License Terms.

Citation Information

@inproceedings{Thorne18Fever,
    author = {Thorne, James and Vlachos, Andreas and Christodoulopoulos, Christos and Mittal, Arpit},
    title = {{FEVER}: a Large-scale Dataset for Fact Extraction and VERification},
    booktitle = {NAACL-HLT},
    year = {2018}
}

Contributions

Thanks to @thomwolf, @lhoestq, @mariamabarham, @lewtun for adding this dataset.
