---
annotations_creators:
  - found
language:
  - en
language_creators:
  - found
license:
  - unknown
multilinguality:
  - monolingual
pretty_name: ScientificPapers
size_categories:
  - 100K<n<1M
source_datasets:
  - scientific_papers
task_categories:
  - summarization
task_ids: []
paperswithcode_id: null
tags:
  - abstractive-summarization
dataset_info:
  features:
    - name: article
      dtype: string
    - name: abstract
      dtype: string
    - name: embeddings
      sequence: float64
  splits:
    - name: train
      num_bytes: 8367611540
      num_examples: 203037
    - name: validation
      num_bytes: 256178362
      num_examples: 6440
    - name: test
      num_bytes: 255771184
      num_examples: 6436
  download_size: 4718720913
  dataset_size: 8879561086
---

# Dataset Card for "scientific_papers"

This dataset is derived from https://huggingface.co/datasets/scientific_papers, augmented with document embeddings computed with the Natural Questions-trained base RAG model (https://huggingface.co/docs/transformers/model_doc/rag). It was created for Retrieval-Augmented Generation (RAG) examples and experiments.
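For retrieval experiments, the precomputed embeddings can be indexed directly with the FAISS integration in `datasets`. The sketch below is illustrative rather than authoritative: the dataset id is assumed from this repository's path, and the query encoder is assumed to be the NQ-trained DPR question encoder that the RAG base model pairs with its retriever.

```python
# A minimal retrieval sketch. The dataset id below is assumed from this
# repository's path, and the question encoder is assumed to be the DPR
# checkpoint matching the NQ-trained RAG retriever.
import torch
from datasets import load_dataset
from transformers import DPRQuestionEncoder, DPRQuestionEncoderTokenizer

# Load the train split and index the precomputed document embeddings with FAISS.
ds = load_dataset("reichenbach/arxiv_ppr_embeds", split="train")
ds.add_faiss_index(column="embeddings")

# Embed a query with the (assumed) matching Natural Questions question encoder.
tokenizer = DPRQuestionEncoderTokenizer.from_pretrained(
    "facebook/dpr-question_encoder-single-nq-base"
)
encoder = DPRQuestionEncoder.from_pretrained(
    "facebook/dpr-question_encoder-single-nq-base"
)
inputs = tokenizer(
    "What are leptonic decays of charged pseudoscalar mesons?",
    return_tensors="pt",
)
with torch.no_grad():
    query = encoder(**inputs).pooler_output[0].numpy()

# Retrieve the five nearest articles by inner product.
scores, retrieved = ds.get_nearest_examples("embeddings", query, k=5)
for score, abstract in zip(scores, retrieved["abstract"]):
    print(f"{score:.2f}  {abstract[:80]}...")
```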

## Dataset Description

### Dataset Summary

The scientific_papers dataset contains one set of long and structured documents, obtained from the arXiv repository.

### Supported Tasks and Leaderboards

More Information Needed

### Languages

More Information Needed

## Dataset Structure

### Data Instances

#### arxiv

- **Size of downloaded dataset files:** 4.72 GB
- **Size of the generated dataset:** 8.88 GB
- **Total amount of disk used:** 13.60 GB

An example of 'train' looks as follows (the example was cropped for length):

```json
{
    "abstract": "\" we have studied the leptonic decay @xmath0 , via the decay channel @xmath1 , using a sample of tagged @xmath2 decays collected...",
    "article": "\"the leptonic decays of a charged pseudoscalar meson @xmath7 are processes of the type @xmath8 , where @xmath9 , @xmath10 , or @...",
    "section_names": "[sec:introduction]introduction\n[sec:detector]data and the cleo- detector\n[sec:analysys]analysis method\n[sec:conclusion]summary"
}
```
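To look at a record yourself, a minimal sketch (the dataset id is assumed from this repository's path):

```python
# Inspect one training example; the dataset id is an assumption.
from datasets import load_dataset

ds = load_dataset("reichenbach/arxiv_ppr_embeds", split="train")
example = ds[0]
print(sorted(example.keys()))
print(example["abstract"][:100], "...")
```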

### Data Fields

The data fields are the same among all splits.

#### arxiv

- `article`: a string feature.
- `abstract`: a string feature.
- `section_names`: a string feature.
- `embeddings`: a 768-dimensional `float64` vector (see the sketch below).
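As a quick sanity check on the `embeddings` field, the sketch below (same assumed dataset id) converts two stored vectors to NumPy and compares them by inner product, the similarity these DPR-style embeddings are trained for:

```python
# Check embedding dimensionality and compare two documents by inner product.
import numpy as np
from datasets import load_dataset

ds = load_dataset("reichenbach/arxiv_ppr_embeds", split="validation")
a = np.asarray(ds[0]["embeddings"], dtype=np.float64)
b = np.asarray(ds[1]["embeddings"], dtype=np.float64)
print(a.shape)       # expected: (768,)
print(float(a @ b))  # inner-product similarity between the two articles
```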

### Data Splits

| name  | train  | validation | test |
| ----- | ------ | ---------- | ---- |
| arxiv | 203037 | 6440       | 6436 |
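The split sizes can be confirmed programmatically (same assumed dataset id):

```python
# Print the number of examples per split.
from datasets import load_dataset

ds = load_dataset("reichenbach/arxiv_ppr_embeds")
for split_name, split in ds.items():
    print(split_name, split.num_rows)
# expected: train 203037, validation 6440, test 6436
```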

## Dataset Creation

### Curation Rationale

More Information Needed

### Source Data

#### Initial Data Collection and Normalization

More Information Needed

#### Who are the source language producers?

More Information Needed

### Annotations

#### Annotation process

More Information Needed

#### Who are the annotators?

More Information Needed

### Personal and Sensitive Information

More Information Needed

## Considerations for Using the Data

### Social Impact of Dataset

More Information Needed

### Discussion of Biases

More Information Needed

### Other Known Limitations

More Information Needed

## Additional Information

### Dataset Curators

More Information Needed

### Licensing Information

More Information Needed

### Citation Information

```
@article{Cohan_2018,
   title={A Discourse-Aware Attention Model for Abstractive Summarization of
            Long Documents},
   url={http://dx.doi.org/10.18653/v1/n18-2097},
   DOI={10.18653/v1/n18-2097},
   journal={Proceedings of the 2018 Conference of the North American Chapter of
          the Association for Computational Linguistics: Human Language
          Technologies, Volume 2 (Short Papers)},
   publisher={Association for Computational Linguistics},
   author={Cohan, Arman and Dernoncourt, Franck and Kim, Doo Soon and Bui, Trung and Kim, Seokhwan and Chang, Walter and Goharian, Nazli},
   year={2018}
}
```

### Contributions

Thanks to @thomwolf, @jplu, @lewtun, @patrickvonplaten for adding this dataset.