How to load this dataset directly with the 🤗/datasets library:

```python
from datasets import load_dataset

# This dataset has two configurations, "claims" and "corpus" (see the tables below);
# pass the one you want as the second argument.
dataset = load_dataset("scifact", "claims")
```
SciFact is a dataset of 1.4K expert-written scientific claims paired with evidence-containing abstracts, annotated with labels and rationales.
We show detailed information for the two configurations of the dataset: claims and corpus.
An example of 'validation' looks as follows.
```
{
    "cited_doc_ids": [14717500],
    "claim": "1,000 genomes project enables mapping of genetic sequence variation consisting of rare variants with larger penetrance effects than common variants.",
    "evidence_doc_id": "14717500",
    "evidence_label": "SUPPORT",
    "evidence_sentences": [2, 5],
    "id": 3
}
```
An example of 'train' looks as follows.
This example was too long and was cropped:
```
{
    "abstract": "[\"Alterations of the architecture of cerebral white matter in the developing human brain can affect cortical development and res...",
    "doc_id": 4983,
    "structured": false,
    "title": "Microstructural development of human newborn cerebral white matter assessed in vivo by diffusion tensor magnetic resonance imaging."
}
```
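One type detail worth handling in code: `evidence_doc_id` is stored as a string, while `cited_doc_ids` holds integers. A minimal sketch of the needed cast, using the 'validation' example above as a plain Python dict (claim text shortened):

```python
# The 'validation' example above, as a plain Python dict (claim text shortened).
example = {
    "cited_doc_ids": [14717500],
    "claim": "1,000 genomes project enables mapping of genetic sequence variation ...",
    "evidence_doc_id": "14717500",
    "evidence_label": "SUPPORT",
    "evidence_sentences": [2, 5],
    "id": 3,
}

# evidence_doc_id is typed as a string, cited_doc_ids as a list of int32,
# so cast before comparing the two.
assert int(example["evidence_doc_id"]) in example["cited_doc_ids"]
```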
The data fields are the same among all splits.
claims:
- `id`: an `int32` feature.
- `claim`: a `string` feature.
- `evidence_doc_id`: a `string` feature.
- `evidence_label`: a `string` feature.
- `evidence_sentences`: a `list` of `int32` features.
- `cited_doc_ids`: a `list` of `int32` features.

corpus:
- `doc_id`: an `int32` feature.
- `title`: a `string` feature.
- `abstract`: a `list` of `string` features.
- `structured`: a `bool` feature.

| | train | validation | test |
|---|---|---|---|
| claims | 1261 | 450 | 300 |

| | train |
|---|---|
| corpus | 5183 |
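The two configurations link through document IDs: a claim's `evidence_doc_id` and `cited_doc_ids` refer to `doc_id` entries in the corpus, and `evidence_sentences` indexes into that document's `abstract` list. A minimal sketch of the join, using toy records shaped like the examples in this card (the abstract sentences are invented placeholders):

```python
# Toy corpus record shaped like the corpus config; abstract sentences are placeholders.
corpus = {
    14717500: {
        "doc_id": 14717500,
        "abstract": ["Sentence 0.", "Sentence 1.", "Sentence 2.",
                     "Sentence 3.", "Sentence 4.", "Sentence 5."],
    }
}

# Toy claim record shaped like the claims config (claim text shortened).
claim = {
    "claim": "1,000 genomes project enables mapping of genetic sequence variation ...",
    "evidence_doc_id": "14717500",
    "evidence_label": "SUPPORT",
    "evidence_sentences": [2, 5],
}

# evidence_doc_id is a string, so cast it before looking up the corpus document,
# then pull out the rationale sentences by index.
doc = corpus[int(claim["evidence_doc_id"])]
rationale = [doc["abstract"][i] for i in claim["evidence_sentences"]]
print(rationale)  # ['Sentence 2.', 'Sentence 5.']
```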
@inproceedings{scifact2020,
  title={Fact or Fiction: Verifying Scientific Claims},
  author={Wadden, David and Lo, Kyle and Wang, Lucy Lu and Lin, Shanchuan and van Zuylen, Madeleine and Cohan, Arman and Hajishirzi, Hannaneh},
  booktitle={Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)},
  year={2020},
}