Dataset Card for "scifact"
Dataset Summary
SciFact is a dataset of 1.4K expert-written scientific claims, each paired with evidence-containing abstracts and annotated with labels and rationales.
Supported Tasks and Leaderboards
Languages
Dataset Structure
Data Instances
claims
- Size of downloaded dataset files: 3.12 MB
- Size of the generated dataset: 262.61 kB
- Total amount of disk used: 3.38 MB
An example of 'validation' looks as follows.
{
"cited_doc_ids": [14717500],
"claim": "1,000 genomes project enables mapping of genetic sequence variation consisting of rare variants with larger penetrance effects than common variants.",
"evidence_doc_id": "14717500",
"evidence_label": "SUPPORT",
"evidence_sentences": [2, 5],
"id": 3
}
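The fields of a claims record fit together as follows: `evidence_doc_id` names the abstract that `evidence_label` and `evidence_sentences` refer to, and (judging from the example above) it is drawn from `cited_doc_ids`. A minimal sketch in plain Python that checks this structure; the consistency check is an assumption inferred from the example, not a documented guarantee of the dataset:

```python
# Sketch: validate the shape of a SciFact "claims" record, using the
# 'validation' example shown above. The membership check at the end is
# an assumption based on that example.

def check_claim_record(record: dict) -> bool:
    """Return True if the record has the expected field types and the
    evidence document appears among the cited documents."""
    assert isinstance(record["id"], int)
    assert isinstance(record["claim"], str)
    assert isinstance(record["evidence_label"], str)
    # evidence_doc_id is stored as a string; cited_doc_ids as ints
    assert isinstance(record["evidence_doc_id"], str)
    assert all(isinstance(i, int) for i in record["cited_doc_ids"])
    # sentence indices point into the evidence abstract's sentence list
    assert all(isinstance(i, int) for i in record["evidence_sentences"])
    # assumption: the evidence document is one of the cited documents
    assert int(record["evidence_doc_id"]) in record["cited_doc_ids"]
    return True

example = {
    "cited_doc_ids": [14717500],
    "claim": "1,000 genomes project enables mapping of genetic sequence "
             "variation consisting of rare variants with larger penetrance "
             "effects than common variants.",
    "evidence_doc_id": "14717500",
    "evidence_label": "SUPPORT",
    "evidence_sentences": [2, 5],
    "id": 3,
}
```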
corpus
- Size of downloaded dataset files: 3.12 MB
- Size of the generated dataset: 7.99 MB
- Total amount of disk used: 11.11 MB
An example of 'train' looks as follows.
This example was too long and was cropped:
{
"abstract": "[\"Alterations of the architecture of cerebral white matter in the developing human brain can affect cortical development and res...",
"doc_id": 4983,
"structured": false,
"title": "Microstructural development of human newborn cerebral white matter assessed in vivo by diffusion tensor magnetic resonance imaging."
}
Data Fields
The data fields are the same among all splits.
claims
- `id`: an int32 feature.
- `claim`: a string feature.
- `evidence_doc_id`: a string feature.
- `evidence_label`: a string feature.
- `evidence_sentences`: a list of int32 features.
- `cited_doc_ids`: a list of int32 features.
corpus
- `doc_id`: an int32 feature.
- `title`: a string feature.
- `abstract`: a list of string features.
- `structured`: a bool feature.
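Because claims reference corpus abstracts by document id, the two configurations can be joined on `doc_id`. A minimal sketch in plain Python, using toy records shaped like the schemas above (the values are stand-ins, not real dataset rows), that resolves a claim's rationale sentences against the corpus:

```python
# Sketch: join a claims record to its evidence abstract in the corpus,
# then extract the rationale sentences. Records are toy stand-ins
# following the field schemas above.

corpus = {
    4983: {
        "doc_id": 4983,
        "title": "Example title.",
        "abstract": ["Sentence 0.", "Sentence 1.", "Sentence 2."],
        "structured": False,
    },
}

claim = {
    "id": 1,
    "claim": "An example claim.",
    "evidence_doc_id": "4983",   # note: stored as a string
    "evidence_label": "SUPPORT",
    "evidence_sentences": [0, 2],
    "cited_doc_ids": [4983],
}

def rationale_sentences(claim: dict, corpus: dict) -> list[str]:
    """Return the abstract sentences flagged as evidence for the claim."""
    doc = corpus[int(claim["evidence_doc_id"])]
    return [doc["abstract"][i] for i in claim["evidence_sentences"]]

print(rationale_sentences(claim, corpus))  # ['Sentence 0.', 'Sentence 2.']
```

The `int(...)` cast matters because `evidence_doc_id` is a string feature while `doc_id` is an int32 feature.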
Data Splits
claims
| | train | validation | test |
|---|---|---|---|
| claims | 1261 | 450 | 300 |
corpus
| | train |
|---|---|
| corpus | 5183 |
Dataset Creation
Curation Rationale
Source Data
Initial Data Collection and Normalization
Who are the source language producers?
Annotations
Annotation process
Who are the annotators?
Personal and Sensitive Information
Considerations for Using the Data
Social Impact of Dataset
Discussion of Biases
Other Known Limitations
Additional Information
Dataset Curators
Licensing Information
https://github.com/allenai/scifact/blob/master/LICENSE.md
The SciFact dataset is released under the CC BY-NC 2.0 license. By using the SciFact data, you agree to its usage terms.
Citation Information
@inproceedings{wadden-etal-2020-fact,
title = "Fact or Fiction: Verifying Scientific Claims",
author = "Wadden, David and
Lin, Shanchuan and
Lo, Kyle and
Wang, Lucy Lu and
van Zuylen, Madeleine and
Cohan, Arman and
Hajishirzi, Hannaneh",
booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2020.emnlp-main.609",
doi = "10.18653/v1/2020.emnlp-main.609",
pages = "7534--7550",
}
Contributions
Thanks to @thomwolf, @lhoestq, @dwadden, @patrickvonplaten, @mariamabarham, @lewtun for adding this dataset.