---
dataset_info:
- config_name: '16384'
  features:
  - name: input
    dtype: string
  - name: output
    dtype: string
  - name: metadata
    struct:
    - name: domains
      sequence: string
    - name: input_context
      dtype: string
    - name: output_context
      dtype: string
    - name: source_type
      dtype: string
    - name: task_family
      dtype: string
  - name: _instance_id
    dtype: string
  splits:
  - name: train
    num_bytes: 651887545
    num_examples: 72646
  - name: validation
    num_bytes: 316306085
    num_examples: 34621
  - name: test
    num_bytes: 422473879
    num_examples: 41909
  download_size: 623896235
  dataset_size: 1390667509
- config_name: '4096'
  features:
  - name: input
    dtype: string
  - name: output
    dtype: string
  - name: metadata
    struct:
    - name: domains
      sequence: string
    - name: input_context
      dtype: string
    - name: output_context
      dtype: string
    - name: source_type
      dtype: string
    - name: task_family
      dtype: string
  - name: _instance_id
    dtype: string
  splits:
  - name: train
    num_bytes: 388072842
    num_examples: 70521
  - name: validation
    num_bytes: 147030710
    num_examples: 30736
  - name: test
    num_bytes: 186329809
    num_examples: 35875
  download_size: 308815650
  dataset_size: 721433361
- config_name: '8192'
  features:
  - name: input
    dtype: string
  - name: output
    dtype: string
  - name: metadata
    struct:
    - name: domains
      sequence: string
    - name: input_context
      dtype: string
    - name: output_context
      dtype: string
    - name: source_type
      dtype: string
    - name: task_family
      dtype: string
  - name: _instance_id
    dtype: string
  splits:
  - name: train
    num_bytes: 546901470
    num_examples: 72367
  - name: validation
    num_bytes: 252982177
    num_examples: 34001
  - name: test
    num_bytes: 313157272
    num_examples: 40064
  download_size: 491399393
  dataset_size: 1113040919
configs:
- config_name: '16384'
  data_files:
  - split: train
    path: 16384/train-*
  - split: validation
    path: 16384/validation-*
  - split: test
    path: 16384/test-*
- config_name: '4096'
  data_files:
  - split: train
    path: 4096/train-*
  - split: validation
    path: 4096/validation-*
  - split: test
    path: 4096/test-*
- config_name: '8192'
  data_files:
  - split: train
    path: 8192/train-*
  - split: validation
    path: 8192/validation-*
  - split: test
    path: 8192/test-*
license: odc-by
language:
- en
tags:
- chemistry
- biomedicine
- clinical medicine
- artificial intelligence
- materials science
size_categories:
- 100K<n<1M
---
# SciRIFF
The SciRIFF dataset includes 137K instruction-following demonstrations for 54 scientific literature understanding tasks. The tasks cover five essential scientific literature categories and span five domains. The dataset is described in our paper SciRIFF: A Resource to Enhance Language Model Instruction-Following over Scientific Literature.
There are three dataset configurations with different max context lengths: 4096, 8192, and 16384. All experiments in the paper are performed with the 4096 context window. You can load the dataset like:

```python
import datasets

ds = datasets.load_dataset("allenai/SciRIFF", "4096")
```
Code to create the dataset, train models on SciRIFF, and perform evaluation is available at our GitHub repo: https://github.com/allenai/SciRIFF. To train models on SciRIFF data, you should use the SciRIFF train mix dataset.
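If you only need a particular configuration and split, you can also load it directly and inspect individual instances. A minimal sketch (the field names follow the schema described under Dataset details below):

```python
import datasets

# Load only the test split of the 4096-token configuration.
ds_test = datasets.load_dataset("allenai/SciRIFF", "4096", split="test")

# Each instance is a dict with input/output text plus task metadata.
example = ds_test[0]
print(example["_instance_id"])             # formatted as {task_name}:{split}:{instance_id}
print(example["metadata"]["task_family"])  # e.g. "qa.abstractive" or "ie.named_entity_recognition"
print(example["input"][:200])              # first 200 characters of the task input
```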
## Table of Contents

- [Dataset details](#dataset-details)
- [License](#license)
- [Task provenance](#task-provenance)
- [Task metadata](#task-metadata)
## Dataset details
Each instance in SciRIFF has the following fields (a short usage sketch follows the list):

- `input`: Task input (i.e. user message).
- `output`: Task output (i.e. expected model response).
- `_instance_id`: A unique id for the instance, formatted like `{task_name}:{split}:{instance_id}`. For instance, `qasa_abstractive_qa:test:182`.
- `metadata`: Task metadata. More information on the schema for task metadata can be found in the SciRIFF GitHub repo.
  - `task_family`: The category to which this task belongs. Options include `summarization`, `ie`, `qa`, `entailment`, and `classification`. Some categories have sub-categories which are largely self-explanatory; see the repo for more information.
  - `domains`: Scientific field(s) that the task covers. Options include: `clinical_medicine`, `biomedicine`, `chemistry`, `artificial_intelligence`, `materials_science`, and `misc`.
  - `input_context`: Whether the input is a paragraph, full text, etc. Options include: `sentence`, `paragraph`, `multiple_paragraphs` (including full paper text), and `structured` (e.g. code for a LaTeX table).
  - `source_type`: Indicates whether the input comes from a single paper or multiple papers. Options include `single_source` and `multiple_source`.
  - `output_context`: The expected form of the output. Options include: `label`, `sentence`, `paragraph`, `multiple_paragraphs`, `json`, and `jsonlines`.
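As an illustration of how these metadata fields can be used (this is not part of the official tooling), the sketch below keeps only named-entity-recognition instances from the 4096-token train split and counts them by domain:

```python
from collections import Counter

import datasets

ds = datasets.load_dataset("allenai/SciRIFF", "4096", split="train")

# Keep only information-extraction NER instances; task_family values follow
# the "family.subfamily" pattern described above.
ner_only = ds.filter(
    lambda ex: ex["metadata"]["task_family"] == "ie.named_entity_recognition"
)

# Count instances per scientific domain (a single instance may list several).
domain_counts = Counter(d for ex in ner_only for d in ex["metadata"]["domains"])
print(domain_counts)
```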
## License

SciRIFF is licensed under ODC-By. Licenses of the datasets from which SciRIFF is derived are listed below.
## Task provenance

SciRIFF was created by repurposing existing scientific literature understanding datasets. Below we provide information on the source data for each SciRIFF task, including license information on individual datasets where available. Where possible, we leveraged the BigBIO collection as a starting point, rather than reprocessing datasets from scratch. In the table below, we include the name of the BigBIO subset for all tasks available in BigBIO; these can be loaded like `datasets.load_dataset("bigbio/{bigbio_subset}")`.
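For example, a source dataset hosted in the BigBIO collection can be loaded as in the sketch below; `scifact` is used here only as an illustrative subset name, and some BigBIO loaders additionally expect a configuration name:

```python
import datasets

# Sketch: load a source dataset from the BigBIO collection.
# "scifact" is an illustrative subset name; see the provenance information in
# the SciRIFF repo for the subset backing each task. Script-based BigBIO
# loaders may require trust_remote_code=True with recent datasets versions.
bigbio_subset = "scifact"
source_ds = datasets.load_dataset(f"bigbio/{bigbio_subset}", trust_remote_code=True)
print(source_ds)
```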
## Task metadata

Below we include metadata on each task, as described in the metadata fields above.
| SciRIFF Name | Task Family | Domains | Input Context | Source Type | Output Context |
| --- | --- | --- | --- | --- | --- |
| `acl_arc_intent_classification` | classification | artificial_intelligence | multiple_paragraphs | single_source | label |
| `anat_em_ner` | ie.named_entity_recognition | biomedicine | paragraph | single_source | json |
| `annotated_materials_syntheses_events` | ie.event_extraction | materials_science | paragraph | single_source | json |
| `bc7_litcovid_topic_classification` | classification | clinical_medicine | paragraph | single_source | json |
| `bioasq_factoid_qa` | qa.abstractive | biomedicine | multiple_paragraphs | multiple_source | sentence |
| `bioasq_general_qa` | qa.abstractive | biomedicine | multiple_paragraphs | multiple_source | sentence |
| `bioasq_list_qa` | qa.abstractive | biomedicine | multiple_paragraphs | multiple_source | json |
| `bioasq_yesno_qa` | qa.yes_no | biomedicine | multiple_paragraphs | multiple_source | label |
| `biored_ner` | ie.named_entity_recognition | biomedicine | paragraph | single_source | json |
| `cdr_ner` | ie.named_entity_recognition | biomedicine | paragraph | single_source | json |
| `chemdner_ner` | ie.named_entity_recognition | biomedicine | paragraph | single_source | json |
| `chemprot_ner` | ie.named_entity_recognition | biomedicine | paragraph | single_source | json |
| `chemprot_re` | ie.relation_extraction | biomedicine | paragraph | single_source | json |
| `chemsum_single_document_summarization` | summarization | chemistry | multiple_paragraphs | single_source | paragraph |
| `chemtables_te` | ie.structure_to_json | chemistry | structured | single_source | jsonlines |
| `chia_ner` | ie.named_entity_recognition | clinical_medicine | paragraph | single_source | json |
| `covid_deepset_qa` | qa.extractive | biomedicine | paragraph | single_source | sentence |
| `covidfact_entailment` | entailment | biomedicine, clinical_medicine | paragraph | single_source | json |
| `craftchem_ner` | ie.named_entity_recognition | biomedicine | sentence | single_source | json |
| `data_reco_mcq_mc` | qa.multiple_choice | artificial_intelligence | multiple_paragraphs | multiple_source | json |
| `data_reco_mcq_sc` | qa.multiple_choice | artificial_intelligence | multiple_paragraphs | multiple_source | label |
| `ddi_ner` | ie.named_entity_recognition | biomedicine | paragraph | single_source | json |
| `discomat_te` | ie.structure_to_json | materials_science | structured | single_source | jsonlines |
| `drug_combo_extraction_re` | ie.relation_extraction | clinical_medicine | paragraph | single_source | json |
| `evidence_inference` | ie.relation_extraction | clinical_medicine | paragraph | single_source | json |
| `genia_ner` | ie.named_entity_recognition | biomedicine | paragraph | single_source | json |
| `gnormplus_ner` | ie.named_entity_recognition | biomedicine | paragraph | single_source | json |
| `healthver_entailment` | entailment | clinical_medicine | paragraph | single_source | json |
| `linnaeus_ner` | ie.named_entity_recognition | biomedicine | multiple_paragraphs | single_source | json |
| `medmentions_ner` | ie.named_entity_recognition | biomedicine | paragraph | single_source | json |
| `mltables_te` | ie.structure_to_json | artificial_intelligence | structured | single_source | jsonlines |
| `mslr2022_cochrane_multidoc_summarization` | summarization | clinical_medicine | paragraph | multiple_source | paragraph |
| `mslr2022_ms2_multidoc_summarization` | summarization | clinical_medicine | paragraph | multiple_source | paragraph |
| `multicite_intent_classification` | classification | artificial_intelligence | paragraph | single_source | json |
| `multixscience_multidoc_summarization` | summarization | artificial_intelligence, biomedicine, materials_science, misc | multiple_paragraphs | multiple_source | paragraph |
| `mup_single_document_summarization` | summarization | artificial_intelligence | multiple_paragraphs | single_source | paragraph |
| `ncbi_ner` | ie.named_entity_recognition | biomedicine | paragraph | single_source | json |
| `nlmchem_ner` | ie.named_entity_recognition | biomedicine | multiple_paragraphs | single_source | json |
| `nlmgene_ner` | ie.named_entity_recognition | biomedicine | paragraph | single_source | json |
| `pico_ner` | ie.named_entity_recognition | clinical_medicine | paragraph | single_source | json |
| `pubmedqa_qa` | qa.yes_no | biomedicine | paragraph | single_source | label |
| `qasa_abstractive_qa` | qa.abstractive | artificial_intelligence | multiple_paragraphs | single_source | paragraph |
| `qasper_abstractive_qa` | qa.abstractive | artificial_intelligence | multiple_paragraphs | single_source | json |
| `qasper_extractive_qa` | qa.extractive | artificial_intelligence | multiple_paragraphs | single_source | json |
| `scicite_classification` | classification | artificial_intelligence | paragraph | single_source | label |
| `scientific_lay_summarisation_elife_single_doc_summ` | summarization | biomedicine | multiple_paragraphs | single_source | paragraph |
| `scientific_lay_summarisation_plos_single_doc_summ` | summarization | biomedicine | multiple_paragraphs | single_source | paragraph |
| `scientific_papers_summarization_single_doc_arxiv` | summarization | artificial_intelligence, misc | multiple_paragraphs | single_source | paragraph |
| `scientific_papers_summarization_single_doc_pubmed` | summarization | biomedicine | multiple_paragraphs | single_source | paragraph |
| `scierc_ner` | ie.named_entity_recognition | artificial_intelligence | paragraph | single_source | json |
| `scierc_re` | ie.relation_extraction | artificial_intelligence | paragraph | single_source | json |
| `scifact_entailment` | entailment | biomedicine, clinical_medicine | paragraph | single_source | json |
| `scireviewgen_multidoc_summarization` | summarization | artificial_intelligence | multiple_paragraphs | multiple_source | paragraph |
| `scitldr_aic` | summarization | artificial_intelligence | multiple_paragraphs | single_source | sentence |