
Dataset Card for "fever"

Dataset Summary

With billions of individual pages on the web providing information on almost every conceivable topic, we should have the ability to collect facts that answer almost every conceivable question. However, only a small fraction of this information is contained in structured sources (Wikidata, Freebase, etc.) – we are therefore limited by our ability to transform free-form text to structured knowledge. There is, however, another problem that has become the focus of a lot of recent research and media coverage: false information coming from unreliable sources.

The FEVER workshops are a venue for work on verifiable knowledge extraction and for stimulating progress in this direction.

  • FEVER Dataset: FEVER (Fact Extraction and VERification) consists of 185,445 claims generated by altering sentences extracted from Wikipedia and subsequently verified without knowledge of the sentence they were derived from. The claims are classified as Supported, Refuted or NotEnoughInfo. For the first two classes, the annotators also recorded the sentence(s) forming the necessary evidence for their judgment.

  • FEVER 2.0 Adversarial Attacks Dataset: The FEVER 2.0 Dataset consists of 1174 claims created by the submissions of participants in the Breaker phase of the 2019 shared task. Participants (Breakers) were tasked with generating adversarial examples that induce classification errors in the existing systems. Breakers submitted a dataset of up to 1000 instances, with an equal number of instances for each of the three classes (Supported, Refuted, NotEnoughInfo). Only novel claims (i.e. not contained in the original FEVER dataset) were considered valid entries to the shared task. The submissions were then manually evaluated for correctness, i.e. whether they were grammatical, appropriately labeled, and met the requirements of the FEVER annotation guidelines.

Supported Tasks and Leaderboards

The task is verification of textual claims against textual sources.

When compared to textual entailment (TE)/natural language inference, the key difference is that in those tasks the passage needed to verify each claim is given, and in recent years it typically consists of a single sentence, while in verification systems the evidence is retrieved from a large collection of documents.

Languages

The dataset is in English.

Dataset Structure

Data Instances

v1.0

  • Size of downloaded dataset files: 42.78 MB
  • Size of the generated dataset: 38.19 MB
  • Total amount of disk used: 80.96 MB

An example of 'train' looks as follows.

{'claim': 'Nikolaj Coster-Waldau worked with the Fox Broadcasting Company.',
 'evidence_wiki_url': 'Nikolaj_Coster-Waldau',
 'label': 'SUPPORTS',
 'id': 75397,
 'evidence_id': 104971,
 'evidence_sentence_id': 7,
 'evidence_annotation_id': 92206}
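Such a record can be loaded with the Hugging Face datasets library. A minimal sketch (depending on your datasets version, script-based datasets such as this one may additionally require trust_remote_code=True):

from datasets import load_dataset

# Load the v1.0 configuration; the first train record matches the example above.
fever_v1 = load_dataset("fever", "v1.0")
print(fever_v1["train"][0]["claim"])
# 'Nikolaj Coster-Waldau worked with the Fox Broadcasting Company.'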

v2.0

  • Size of downloaded dataset files: 0.37 MB
  • Size of the generated dataset: 0.29 MB
  • Total amount of disk used: 0.67 MB

An example of 'validation' looks as follows.

{'claim': "There is a convicted statutory rapist called Chinatown's writer.",
  'evidence_wiki_url': '',
  'label': 'NOT ENOUGH INFO',
  'id': 500000,
  'evidence_id': -1,
  'evidence_sentence_id': -1,
  'evidence_annotation_id': 269158}
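The v2.0 configuration ships only a validation split (see Data Splits below) and can be loaded the same way (sketch, same assumptions as above):

fever_v2 = load_dataset("fever", "v2.0")
print(fever_v2["validation"][0]["label"])
# e.g. 'NOT ENOUGH INFO'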

wiki_pages

  • Size of downloaded dataset files: 1634.11 MB
  • Size of the generated dataset: 6918.06 MB
  • Total amount of disk used: 8552.17 MB

An example of 'wikipedia_pages' looks as follows.

{'text': 'The following are the football -LRB- soccer -RRB- events of the year 1928 throughout the world . ',
  'lines': '0\tThe following are the football -LRB- soccer -RRB- events of the year 1928 throughout the world .\n1\t',
  'id': '1928_in_association_football'}
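The lines field packs one sentence per row, with the zero-based sentence index and the sentence text separated by a tab. A small helper to recover the (index, sentence) pairs might look like this (a sketch based on the format shown above; any additional tab-separated fields after the sentence text are assumed to be anchor texts and are ignored here):

def parse_lines(lines_field):
    """Split a wiki_pages `lines` string into {sentence_id: sentence}."""
    sentences = {}
    for row in lines_field.split("\n"):
        fields = row.split("\t")
        if fields[0].isdigit():
            sentences[int(fields[0])] = fields[1] if len(fields) > 1 else ""
    return sentences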

Data Fields

The data fields are the same among all splits.

v1.0

  • id: an int32 feature.
  • label: a string feature.
  • claim: a string feature.
  • evidence_annotation_id: an int32 feature.
  • evidence_id: an int32 feature.
  • evidence_wiki_url: a string feature.
  • evidence_sentence_id: an int32 feature.

v2.0

  • id: an int32 feature.
  • label: a string feature.
  • claim: a string feature.
  • evidence_annotation_id: an int32 feature.
  • evidence_id: an int32 feature.
  • evidence_wiki_url: a string feature.
  • evidence_sentence_id: an int32 feature.

wiki_pages

  • id: a string feature.
  • text: a string feature.
  • lines: a string feature.
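Taken together, the evidence fields tie a v1.0 record back to a specific wiki_pages sentence: evidence_wiki_url names the page id, and evidence_sentence_id indexes into that page's lines field, with -1 marking records that carry no evidence sentence (e.g. NOT ENOUGH INFO claims). A hedged sketch, reusing the parse_lines helper above and assuming the wiki pages have been indexed into a pages_by_id dict:

def evidence_sentence(record, pages_by_id):
    """Return the evidence sentence for a v1.0 record, or None if absent."""
    if record["evidence_sentence_id"] < 0 or not record["evidence_wiki_url"]:
        return None
    page = pages_by_id.get(record["evidence_wiki_url"])
    if page is None:
        return None
    return parse_lines(page["lines"]).get(record["evidence_sentence_id"])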

Data Splits

v1.0

        train  unlabelled_dev  labelled_dev  paper_dev  unlabelled_test  paper_test
v1.0   311431           19998         37566      18999            19998       18567
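
Each of these split names is a key of the loaded v1.0 dataset dict, so the sizes can be checked directly (a sketch, reusing load_dataset from above):

for name, split in load_dataset("fever", "v1.0").items():
    print(name, split.num_rows)  # e.g. train 311431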

v2.0

        validation
v2.0          2384

wiki_pages

            wikipedia_pages
wiki_pages          5416537

Dataset Creation

Curation Rationale

More Information Needed

Source Data

Initial Data Collection and Normalization

More Information Needed

Who are the source language producers?

More Information Needed

Annotations

Annotation process

More Information Needed

Who are the annotators?

More Information Needed

Personal and Sensitive Information

More Information Needed

Considerations for Using the Data

Social Impact of Dataset

More Information Needed

Discussion of Biases

More Information Needed

Other Known Limitations

More Information Needed

Additional Information

Dataset Curators

More Information Needed

Licensing Information

FEVER license:

These data annotations incorporate material from Wikipedia, which is licensed pursuant to the Wikipedia Copyright Policy. These annotations are made available under the license terms described on the applicable Wikipedia article pages, or, where Wikipedia license terms are unavailable, under the Creative Commons Attribution-ShareAlike License (version 3.0), available at http://creativecommons.org/licenses/by-sa/3.0/ (collectively, the “License Terms”). You may not use these files except in compliance with the applicable License Terms.

Citation Information

If you use "FEVER Dataset", please cite:

@inproceedings{Thorne18Fever,
    author = {Thorne, James and Vlachos, Andreas and Christodoulopoulos, Christos and Mittal, Arpit},
    title = {{FEVER}: a Large-scale Dataset for Fact Extraction and {VERification}},
    booktitle = {NAACL-HLT},
    year = {2018}
}

If you use "FEVER 2.0 Adversarial Attacks Dataset", please cite:

@inproceedings{Thorne19FEVER2,
    author = {Thorne, James and Vlachos, Andreas and Cocarascu, Oana and Christodoulopoulos, Christos and Mittal, Arpit},
    title = {The {FEVER2.0} Shared Task},
    booktitle = {Proceedings of the Second Workshop on {Fact Extraction and VERification (FEVER)}},
    year = {2019}
}

Contributions

Thanks to @thomwolf, @lhoestq, @mariamabarham, @lewtun, @albertvillanova for adding this dataset.
