---
annotations_creators:
- found
language:
- en
language_creators:
- found
license: unknown
multilinguality:
- monolingual
pretty_name: ScientificPapers
size_categories:
- 100K<n<1M
source_datasets:
- original
task_categories:
- summarization
task_ids: []
paperswithcode_id: null
dataset_info:
- config_name: arxiv
  features:
  - name: article
    dtype: string
  - name: abstract
    dtype: string
  - name: section_names
    dtype: string
  splits:
  - name: train
    num_bytes: 7148341992
    num_examples: 203037
  - name: validation
    num_bytes: 217125524
    num_examples: 6436
  - name: test
    num_bytes: 217514961
    num_examples: 6440
  download_size: 4504646347
  dataset_size: 7582982477
- config_name: pubmed
  features:
  - name: article
    dtype: string
  - name: abstract
    dtype: string
  - name: section_names
    dtype: string
  splits:
  - name: train
    num_bytes: 2252027383
    num_examples: 119924
  - name: validation
    num_bytes: 127403398
    num_examples: 6633
  - name: test
    num_bytes: 127184448
    num_examples: 6658
  download_size: 4504646347
  dataset_size: 2506615229
---
# Dataset Card for "scientific_papers"

## Table of Contents
- Dataset Description
- Dataset Structure
- Dataset Creation
- Considerations for Using the Data
- Additional Information
## Dataset Description

- **Homepage:**
- **Repository:** https://github.com/armancohan/long-summarization
- **Paper:** [A Discourse-Aware Attention Model for Abstractive Summarization of Long Documents](https://arxiv.org/abs/1804.05685)
- **Point of Contact:** [More Information Needed]
- **Size of downloaded dataset files:** 9.01 GB
- **Size of the generated dataset:** 10.09 GB
- **Total amount of disk used:** 19.10 GB
### Dataset Summary

The scientific_papers dataset contains two sets of long and structured documents, obtained from the ArXiv and PubMed OpenAccess repositories.

Both "arxiv" and "pubmed" have three features, illustrated by the loading sketch after this list:

- article: the body of the document, paragraphs separated by "\n".
- abstract: the abstract of the document, paragraphs separated by "\n".
- section_names: titles of sections, separated by "\n".
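As a quick orientation, here is a minimal sketch of loading one configuration. The dataset and field names come from this card; the use of the Hugging Face `datasets` library (and the `trust_remote_code` flag that newer releases may require for script-based datasets) is an assumption, not something this card prescribes.

```python
# Minimal loading sketch, assuming the Hugging Face `datasets` library.
# Newer `datasets` releases may also require trust_remote_code=True here.
from datasets import load_dataset

# Each configuration ("arxiv" or "pubmed") is loaded separately.
dataset = load_dataset("scientific_papers", "arxiv")

# Fields documented on this card: article, abstract, section_names.
print(dataset["train"][0]["section_names"])
```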
### Supported Tasks and Leaderboards

### Languages

## Dataset Structure

### Data Instances

#### arxiv
- **Size of downloaded dataset files:** 4.50 GB
- **Size of the generated dataset:** 7.58 GB
- **Total amount of disk used:** 12.09 GB
An example of 'train' looks as follows.
This example was too long and was cropped:
```
{
    "abstract": "\" we have studied the leptonic decay @xmath0 , via the decay channel @xmath1 , using a sample of tagged @xmath2 decays collected...",
    "article": "\"the leptonic decays of a charged pseudoscalar meson @xmath7 are processes of the type @xmath8 , where @xmath9 , @xmath10 , or @...",
    "section_names": "[sec:introduction]introduction\n[sec:detector]data and the cleo- detector\n[sec:analysys]analysis method\n[sec:conclusion]summary"
}
```
#### pubmed
- **Size of downloaded dataset files:** 4.50 GB
- **Size of the generated dataset:** 2.51 GB
- **Total amount of disk used:** 7.01 GB
An example of 'validation' looks as follows.
This example was too long and was cropped:
```
{
    "abstract": "\" background and aim : there is lack of substantial indian data on venous thromboembolism ( vte ) . \\n the aim of this study was...",
    "article": "\"approximately , one - third of patients with symptomatic vte manifests pe , whereas two - thirds manifest dvt alone .\\nboth dvt...",
    "section_names": "\"Introduction\\nSubjects and Methods\\nResults\\nDemographics and characteristics of venous thromboembolism patients\\nRisk factors ..."
}
```
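Because all three fields are single "\n"-separated strings, recovering their structure is a matter of splitting; the snippet below is an illustrative sketch under that assumption, not an official preprocessing recipe.

```python
# Sketch: recover paragraph/section structure from the "\n"-separated
# string fields (assumes the Hugging Face `datasets` library).
from datasets import load_dataset

dataset = load_dataset("scientific_papers", "pubmed")
example = dataset["validation"][0]

paragraphs = example["article"].split("\n")       # body paragraphs
abstract_parts = example["abstract"].split("\n")  # abstract paragraphs
sections = example["section_names"].split("\n")   # section titles

print(f"{len(paragraphs)} paragraphs, {len(sections)} section titles")
```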
### Data Fields

The data fields are the same among all splits.

#### arxiv
- `article`: a `string` feature.
- `abstract`: a `string` feature.
- `section_names`: a `string` feature.

#### pubmed
- `article`: a `string` feature.
- `abstract`: a `string` feature.
- `section_names`: a `string` feature.
### Data Splits

| name   | train  | validation | test |
| ------ | ------ | ---------- | ---- |
| arxiv  | 203037 | 6436       | 6440 |
| pubmed | 119924 | 6633       | 6658 |
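To cross-check the table, the row counts per split can be printed directly; this sketch assumes the `datasets` library and simply reports what the loaded splits contain.

```python
# Sanity-check sketch for the split sizes listed above
# (assumes the Hugging Face `datasets` library).
from datasets import load_dataset

for config in ("arxiv", "pubmed"):
    ds = load_dataset("scientific_papers", config)
    print(config, {split: ds[split].num_rows for split in ds})
```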
## Dataset Creation

### Curation Rationale

### Source Data

#### Initial Data Collection and Normalization

#### Who are the source language producers?

### Annotations

#### Annotation process

#### Who are the annotators?

### Personal and Sensitive Information

## Considerations for Using the Data

### Social Impact of Dataset

### Discussion of Biases

### Other Known Limitations

## Additional Information

### Dataset Curators

### Licensing Information

### Citation Information
```
@article{Cohan_2018,
  title={A Discourse-Aware Attention Model for Abstractive Summarization of Long Documents},
  url={http://dx.doi.org/10.18653/v1/n18-2097},
  DOI={10.18653/v1/n18-2097},
  journal={Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers)},
  publisher={Association for Computational Linguistics},
  author={Cohan, Arman and Dernoncourt, Franck and Kim, Doo Soon and Bui, Trung and Kim, Seokhwan and Chang, Walter and Goharian, Nazli},
  year={2018}
}
```
### Contributions
Thanks to @thomwolf, @jplu, @lewtun, @patrickvonplaten for adding this dataset.