---
dataset_info:
  features:
    - name: 'Unnamed: 0'
      dtype: int64
    - name: question
      dtype: string
    - name: answer
      dtype: string
    - name: abstract
      dtype: string
    - name: introduction
      dtype: string
  splits:
    - name: train
      num_bytes: 1844987
      num_examples: 421
    - name: validation
      num_bytes: 949747
      num_examples: 211
    - name: test
      num_bytes: 1403003
      num_examples: 320
  download_size: 2341682
  dataset_size: 4197737
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
      - split: validation
        path: data/validation-*
      - split: test
        path: data/test-*
license: mit
task_categories:
  - summarization
  - question-answering
language:
  - en
tags:
  - nlp-research-paper-abstract
  - nlp-research-paper
  - question-generation
pretty_name: NLP_Papers_to_Question_Generation
size_categories:
  - n<1K
---

# Dataset Card for NLP_Papers_to_Question_Generation

This dataset was created by modifying and adapting [allenai/qasper](https://huggingface.co/datasets/allenai/qasper) (QASPER: A Dataset for Question Answering on Scientific Research Papers). It is intended for generating question-answer pairs from the abstract and introduction of an NLP paper.
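The splits can be loaded with the `datasets` library. A minimal sketch; the repo id below is inferred from this card's name and may differ from the actual Hub path:

```python
from datasets import load_dataset

# Repo id inferred from this card -- substitute the actual Hub path if it differs.
dataset = load_dataset("UNIST-Eunchan/NLP_Papers_to_Question_Generation")

print(dataset)  # DatasetDict with train / validation / test splits

example = dataset["train"][0]
print(example["question"])
print(example["answer"])
print(example["abstract"][:200])  # each row also carries the paper's introduction
```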

## Dataset Description

- First, we extracted the abstract and introduction of each NLP paper from the QASPER dataset.
- From the annotations, we kept only the question-answer rows whose answers are abstractive (free-form) rather than extractive; a sketch of this filtering step follows this list.
- train: 421 rows
- validation: 211 rows
- test: 320 rows
- Curated by: @UNIST-Eunchan
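The filtering step above can be approximated as follows. This is a minimal sketch over the original QASPER JSON release, where each answer record carries `free_form_answer` and `extractive_spans` fields; it is not the exact script used to build this dataset:

```python
import json

def extract_abstractive_qas(qasper_path):
    """Yield (question, answer, abstract, introduction) tuples for QASPER
    answers that are free-form (abstractive) rather than extractive."""
    with open(qasper_path) as f:
        papers = json.load(f)

    for paper in papers.values():
        # Pull the introduction section out of the full text, if present.
        introduction = ""
        for section in paper["full_text"]:
            name = section["section_name"]
            if name and "introduction" in name.lower():
                introduction = " ".join(section["paragraphs"])
                break

        for qa in paper["qas"]:
            for annotation in qa["answers"]:
                answer = annotation["answer"]
                # Keep abstractive answers only: a non-empty free-form
                # answer and no extractive spans.
                if answer["free_form_answer"] and not answer["extractive_spans"]:
                    yield (qa["question"], answer["free_form_answer"],
                           paper["abstract"], introduction)
```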

## Dataset Sources

This dataset was built by filtering and processing [allenai/qasper](https://huggingface.co/datasets/allenai/qasper).

## Uses

- Question Generation from Research Papers
- Long-Document Summarization
- Question-based Summarization
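For the question-generation use case, each row can be turned into a seq2seq training pair. A minimal sketch; the prompt prefix and separator below are illustrative assumptions, not the format used by the dataset authors:

```python
def to_seq2seq_example(row):
    # Input: the paper's abstract and introduction; target: the QA pair.
    # The "generate question:" prefix and "[SEP]" separator are
    # illustrative choices, not part of the dataset.
    source = "generate question: " + row["abstract"] + " " + row["introduction"]
    target = row["question"] + " [SEP] " + row["answer"]
    return {"input_text": source, "target_text": target}

# Usage over a split loaded with `datasets`:
# train = dataset["train"].map(to_seq2seq_example)
```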

## Dataset Creation

### Curation Rationale

Long-document summarization datasets, especially those for research-paper summarization, are scarce.

We adapted the existing QASPER data to provide a research-paper domain and QA pairs specific to NLP.

We expect a model trained on this data to generate multiple QA pairs per paper when decoding with sampling, as sketched below.
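A minimal sketch of sampling several candidate QA pairs from a fine-tuned seq2seq model with `transformers`; the checkpoint name is a placeholder, since the fine-tuned model has not been released yet:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "t5-base"  # placeholder -- the fine-tuned checkpoint is not yet released
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

paper_text = "Abstract and introduction of an NLP paper go here."
inputs = tokenizer("generate question: " + paper_text,
                   return_tensors="pt", truncation=True, max_length=1024)

# Sampling (rather than greedy decoding) lets each returned sequence
# be a different candidate QA pair for the same paper.
outputs = model.generate(**inputs, do_sample=True, top_p=0.95,
                         num_return_sequences=5, max_new_tokens=128)
for seq in outputs:
    print(tokenizer.decode(seq, skip_special_tokens=True))
```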

We will release the fine-tuned model in the future.