Sub-tasks: fact-checking
Languages: English
Multilinguality: monolingual
Size Categories: 1K<n<10K
Language Creators: found
Annotations Creators: expert-generated
Source Datasets: original


Dataset Card for "scifact"

Dataset Summary

SciFact is a dataset of 1.4K expert-written scientific claims, each paired with evidence-containing abstracts and annotated with labels and rationales.

Supported Tasks and Leaderboards

More Information Needed


Dataset Structure

Data Instances


claims

  • Size of downloaded dataset files: 3.12 MB
  • Size of the generated dataset: 262.61 kB
  • Total amount of disk used: 3.38 MB

An example from the 'validation' split looks as follows.

    {
        "cited_doc_ids": [14717500],
        "claim": "1,000 genomes project enables mapping of genetic sequence variation consisting of rare variants with larger penetrance effects than common variants.",
        "evidence_doc_id": "14717500",
        "evidence_label": "SUPPORT",
        "evidence_sentences": [2, 5],
        "id": 3
    }


corpus

  • Size of downloaded dataset files: 3.12 MB
  • Size of the generated dataset: 7.99 MB
  • Total amount of disk used: 11.11 MB

An example from the 'train' split looks as follows.

This example was too long and was cropped:

    {
        "abstract": "[\"Alterations of the architecture of cerebral white matter in the developing human brain can affect cortical development and res...",
        "doc_id": 4983,
        "structured": false,
        "title": "Microstructural development of human newborn cerebral white matter assessed in vivo by diffusion tensor magnetic resonance imaging."
    }

Data Fields

The data fields are the same among all splits.


claims

  • id: an int32 feature.
  • claim: a string feature.
  • evidence_doc_id: a string feature.
  • evidence_label: a string feature.
  • evidence_sentences: a list of int32 features.
  • cited_doc_ids: a list of int32 features.
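As an illustrative sketch (not part of the official loader), the claims schema above can be checked against the 'validation' example shown earlier; once loaded, int32 features arrive as Python int and string features as str:

```python
# The 'validation' example from this card, as a plain Python dict.
record = {
    "id": 3,
    "claim": ("1,000 genomes project enables mapping of genetic sequence "
              "variation consisting of rare variants with larger penetrance "
              "effects than common variants."),
    "evidence_doc_id": "14717500",
    "evidence_label": "SUPPORT",
    "evidence_sentences": [2, 5],
    "cited_doc_ids": [14717500],
}

# Expected Python types for the documented features.
expected = {
    "id": int,
    "claim": str,
    "evidence_doc_id": str,
    "evidence_label": str,
    "evidence_sentences": list,
    "cited_doc_ids": list,
}

for field, typ in expected.items():
    assert isinstance(record[field], typ), field
# The two list features hold integers (sentence indices and document ids).
assert all(isinstance(x, int) for x in record["evidence_sentences"])
assert all(isinstance(x, int) for x in record["cited_doc_ids"])
```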


corpus

  • doc_id: an int32 feature.
  • title: a string feature.
  • abstract: a list of string features.
  • structured: a bool feature.
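Because abstract is stored as a list of sentences, the evidence_sentences indices in the claims configuration can be used to look up a claim's rationale sentences in the matching corpus document. A minimal sketch of that join, using toy records modeled on the examples above (the sentence texts are placeholders, not real SciFact data):

```python
# Toy corpus record: "abstract" is a list of sentences, keyed by doc_id.
corpus = {
    14717500: {
        "doc_id": 14717500,
        "title": "Example title",
        "abstract": ["Sentence 0.", "Sentence 1.", "Sentence 2.",
                     "Sentence 3.", "Sentence 4.", "Sentence 5."],
        "structured": False,
    }
}

# Toy claims record: note evidence_doc_id is a string, doc_id an int.
claim = {
    "id": 3,
    "claim": "Example claim.",
    "evidence_doc_id": "14717500",
    "evidence_label": "SUPPORT",
    "evidence_sentences": [2, 5],
    "cited_doc_ids": [14717500],
}

def rationale_sentences(claim, corpus):
    """Return the abstract sentences cited as evidence for a claim."""
    doc = corpus[int(claim["evidence_doc_id"])]  # string id -> int key
    return [doc["abstract"][i] for i in claim["evidence_sentences"]]

print(rationale_sentences(claim, corpus))  # ['Sentence 2.', 'Sentence 5.']
```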

Data Splits


          train   validation   test
claims     1261          450    300
corpus     5183            -      -
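For a quick sanity check, the split sizes above can be mirrored in code; the usual way to obtain the actual splits is `datasets.load_dataset("allenai/scifact", "claims")` (or `"corpus"`), which downloads the data, so the offline sketch below just records the documented counts:

```python
# Split sizes as documented on this card.
splits = {
    "claims": {"train": 1261, "validation": 450, "test": 300},
    "corpus": {"train": 5183},
}

# Total annotated claim records across all splits.
total_claims = sum(splits["claims"].values())
print(total_claims)  # 2011
```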

Dataset Creation

Curation Rationale

More Information Needed

Source Data

Initial Data Collection and Normalization

More Information Needed

Who are the source language producers?

More Information Needed


Annotation process

More Information Needed

Who are the annotators?

More Information Needed

Personal and Sensitive Information

More Information Needed

Considerations for Using the Data

Social Impact of Dataset

More Information Needed

Discussion of Biases

More Information Needed

Other Known Limitations

More Information Needed

Additional Information

Dataset Curators

More Information Needed

Licensing Information

The SciFact dataset is released under the CC BY-NC 2.0 license. By using the SciFact data, you agree to its usage terms.

Citation Information

    @inproceedings{wadden-etal-2020-fact,
        title = "Fact or Fiction: Verifying Scientific Claims",
        author = "Wadden, David  and
          Lin, Shanchuan  and
          Lo, Kyle  and
          Wang, Lucy Lu  and
          van Zuylen, Madeleine  and
          Cohan, Arman  and
          Hajishirzi, Hannaneh",
        booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
        month = nov,
        year = "2020",
        address = "Online",
        publisher = "Association for Computational Linguistics",
        url = "",
        doi = "10.18653/v1/2020.emnlp-main.609",
        pages = "7534--7550",
    }

Thanks to @thomwolf, @lhoestq, @dwadden, @patrickvonplaten, @mariamabarham, @lewtun for adding this dataset.
