---
annotations_creators:
- crowdsourced
language_creators:
- crowdsourced
language:
- en-US
license:
- mit
multilinguality:
- monolingual
pretty_name: sufficient_facts
size_categories:
- 1K<n<10K
---

# Dataset Card for sufficient_facts

SufficientFacts accompanies the paper "Fact Checking with Insufficient Evidence" (TACL 2022). It is built by removing information from the evidence of fact-checking instances taken from FEVER, HoVer, and VitaminC, and asking crowd-source workers to re-annotate whether the remaining evidence is still sufficient to verify the claim.

## Dataset Structure

### Data Instances

* FEVER: 600 constituent-level, 400 sentence-level;
* HoVer: 600 constituent-level, 400 sentence-level;
* VitaminC: 600 constituent-level.

### Data Fields

* `claim` - the claim that is being verified
* `evidence` - the augmented evidence for the claim, i.e. the evidence with some information removed
* `label_before` - the original label for the claim-evidence pair, before information was removed from the evidence
* `label_after` - the label for the augmented claim-evidence pair, after information was removed from the evidence, as annotated by crowd-source workers
* `type` - the type of the information removed from the evidence; the types are fine-grained, and their mapping to the general types -- 7 constituent types and 1 sentence type -- can be found in the [types.json](types.json) file
* `removed` - the text of the information removed from the evidence
* `text_orig` - the original text of the evidence, as presented to the crowd-source workers, with the removed information enclosed in `<span>` tags that mark it in red

### Data Splits

| name | test_fever | test_hover | test_vitaminc |
|------|-----------:|-----------:|--------------:|
| test |       1000 |       1000 |           600 |

All splits are augmented from the test splits of the corresponding datasets.
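The snippet below is a minimal sketch of how these fields and splits can be inspected with the Hugging Face `datasets` library. The hub id (`copenlu/sufficient_facts`) and the per-source configuration names (`fever`, `hover`, `vitaminc`) are assumptions inferred from this card rather than verified identifiers.

```python
# Minimal loading sketch -- the dataset id and config names below are
# assumptions based on this card, not verified identifiers.
from datasets import load_dataset

fever = load_dataset("copenlu/sufficient_facts", "fever", split="test")

example = fever[0]
print(example["claim"])                      # claim being verified
print(example["evidence"])                   # evidence with information removed
print(example["label_before"], "->", example["label_after"])
print(example["type"], example["removed"])   # fine-grained type and removed text

# Instances whose gold label changed after information was removed
changed = fever.filter(lambda ex: ex["label_after"] != ex["label_before"])
print(f"{len(changed)} of {len(fever)} FEVER instances change label")
```

The same pattern applies to the other two configurations; as the table above shows, each configuration exposes only a `test` split.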

### Annotations

#### Annotation process

The workers were provided with the following task description:

> For each evidence text, some facts have been removed (marked in red). You should annotate whether, given the remaining facts in the evidence text, the evidence is still enough for verifying the claim.
>
> Note: You should not incorporate your own knowledge or beliefs! You should rely only on the evidence provided for the claim.

The annotators were then given example instance annotations. Finally, annotators were asked to complete a qualification test in order to be allowed to annotate instances for the task. The resulting inter-annotator agreement for SufficientFacts is 0.81 Fleiss' κ from three annotators (an illustrative computation is sketched at the end of this card).

#### Who are the annotators?

The annotations were performed by workers at Amazon Mechanical Turk.

## Additional Information

### Licensing Information

MIT

### Citation Information

```
@article{atanasova2022fact,
    title={Fact Checking with Insufficient Evidence},
    author={Atanasova, Pepa and Simonsen, Jakob Grue and Lioma, Christina and Augenstein, Isabelle},
    journal={Transactions of the Association for Computational Linguistics (TACL)},
    year={2022}
}
```

### Contributions

Thanks to [@apepa](https://github.com/apepa) for adding this dataset.
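As a rough illustration of the agreement statistic reported above (not the authors' evaluation script), the sketch below computes Fleiss' κ for three annotators with `statsmodels`; the rating matrix is made-up toy data.

```python
# Illustrative Fleiss' kappa computation for three annotators using
# statsmodels. The ratings below are made-up toy data, NOT the actual
# SufficientFacts annotations.
import numpy as np
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

# One row per annotated instance, one column per annotator;
# hypothetical encoding: 0 = evidence still sufficient, 1 = not enough.
ratings = np.array([
    [0, 0, 0],
    [1, 1, 1],
    [0, 0, 1],
    [1, 1, 1],
    [0, 1, 0],
])

table, _ = aggregate_raters(ratings)  # instances x categories count table
print(round(fleiss_kappa(table), 2))
```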