---
language:
  - en
pretty_name: qangaroo
dataset_info:
  - config_name: masked_medhop
    features:
      - name: query
        dtype: string
      - name: supports
        sequence: string
      - name: candidates
        sequence: string
      - name: answer
        dtype: string
      - name: id
        dtype: string
    splits:
      - name: train
        num_bytes: 95813556
        num_examples: 1620
      - name: validation
        num_bytes: 16800542
        num_examples: 342
    download_size: 58801723
    dataset_size: 112614098
  - config_name: masked_wikihop
    features:
      - name: query
        dtype: string
      - name: supports
        sequence: string
      - name: candidates
        sequence: string
      - name: answer
        dtype: string
      - name: id
        dtype: string
    splits:
      - name: train
        num_bytes: 348073986
        num_examples: 43738
      - name: validation
        num_bytes: 43663600
        num_examples: 5129
    download_size: 211302995
    dataset_size: 391737586
  - config_name: medhop
    features:
      - name: query
        dtype: string
      - name: supports
        sequence: string
      - name: candidates
        sequence: string
      - name: answer
        dtype: string
      - name: id
        dtype: string
    splits:
      - name: train
        num_bytes: 93937294
        num_examples: 1620
      - name: validation
        num_bytes: 16461612
        num_examples: 342
    download_size: 57837760
    dataset_size: 110398906
  - config_name: wikihop
    features:
      - name: query
        dtype: string
      - name: supports
        sequence: string
      - name: candidates
        sequence: string
      - name: answer
        dtype: string
      - name: id
        dtype: string
    splits:
      - name: train
        num_bytes: 325777822
        num_examples: 43738
      - name: validation
        num_bytes: 40843303
        num_examples: 5129
    download_size: 202454962
    dataset_size: 366621125
configs:
  - config_name: masked_medhop
    data_files:
      - split: train
        path: masked_medhop/train-*
      - split: validation
        path: masked_medhop/validation-*
  - config_name: masked_wikihop
    data_files:
      - split: train
        path: masked_wikihop/train-*
      - split: validation
        path: masked_wikihop/validation-*
  - config_name: medhop
    data_files:
      - split: train
        path: medhop/train-*
      - split: validation
        path: medhop/validation-*
  - config_name: wikihop
    data_files:
      - split: train
        path: wikihop/train-*
      - split: validation
        path: wikihop/validation-*
---

# Dataset Card for "qangaroo"

## Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description
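
- **Homepage:** [http://qangaroo.cs.ucl.ac.uk/](http://qangaroo.cs.ucl.ac.uk/)
- **Paper:** [Constructing Datasets for Multi-hop Reading Comprehension Across Documents](https://arxiv.org/abs/1710.06481)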

### Dataset Summary

We have created two new Reading Comprehension datasets focusing on multi-hop (also known as multi-step) inference.

Several pieces of information often jointly imply another fact. In multi-hop inference, a new fact is derived by combining facts via a chain of multiple steps.

Our aim is to build Reading Comprehension methods that perform multi-hop inference on text, where individual facts are spread out across different documents.

The two QAngaroo datasets provide a training and evaluation resource for such methods.
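
As a quick-start sketch: assuming the `datasets` library and the `qangaroo` dataset id on the Hugging Face Hub, each of the four configurations declared in the metadata above can be loaded by name.

```python
from datasets import load_dataset

# Config is one of: "medhop", "masked_medhop", "wikihop", "masked_wikihop".
wikihop = load_dataset("qangaroo", "wikihop")

print(wikihop)  # DatasetDict with "train" and "validation" splits
```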

### Supported Tasks and Leaderboards

More Information Needed

### Languages

The dataset is in English (`en`).

## Dataset Structure

### Data Instances

#### masked_medhop

- **Size of downloaded dataset files:** 339.84 MB
- **Size of the generated dataset:** 112.63 MB
- **Total amount of disk used:** 452.47 MB

An example of 'validation' looks as follows.


#### masked_wikihop

- **Size of downloaded dataset files:** 339.84 MB
- **Size of the generated dataset:** 391.98 MB
- **Total amount of disk used:** 731.82 MB

An example of 'validation' looks as follows.


#### medhop

- **Size of downloaded dataset files:** 339.84 MB
- **Size of the generated dataset:** 110.42 MB
- **Total amount of disk used:** 450.26 MB

An example of 'validation' looks as follows.


#### wikihop

- **Size of downloaded dataset files:** 339.84 MB
- **Size of the generated dataset:** 366.87 MB
- **Total amount of disk used:** 706.71 MB

An example of 'validation' looks as follows.
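
No materialized example ships with this card; the sketch below only illustrates the record shape implied by the features in the metadata. Every value in it (the relation, entities, documents, and the id format) is invented, not taken from the data.

```python
# A hypothetical wikihop validation record; all values are invented for illustration.
example = {
    "id": "WH_dev_0",  # id format assumed, not verified against the data
    "query": "country_of_citizenship john_doe",  # "<relation> <subject entity>"
    "supports": [
        "John Doe is a fictional writer born in Paris ...",
        "Paris is the capital of France ...",
    ],
    "candidates": ["france", "germany", "spain"],
    "answer": "france",  # the gold answer appears among the candidates
}
```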


### Data Fields

The data fields are the same across all configurations and splits.

#### masked_medhop

- `query`: a `string` feature.
- `supports`: a list of `string` features.
- `candidates`: a list of `string` features.
- `answer`: a `string` feature.
- `id`: a `string` feature.

#### masked_wikihop

- `query`: a `string` feature.
- `supports`: a list of `string` features.
- `candidates`: a list of `string` features.
- `answer`: a `string` feature.
- `id`: a `string` feature.

#### medhop

- `query`: a `string` feature.
- `supports`: a list of `string` features.
- `candidates`: a list of `string` features.
- `answer`: a `string` feature.
- `id`: a `string` feature.

#### wikihop

- `query`: a `string` feature.
- `supports`: a list of `string` features.
- `candidates`: a list of `string` features.
- `answer`: a `string` feature.
- `id`: a `string` feature.
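
Because the schema is identical across configurations, one access pattern covers all four. A minimal sketch, under the same `qangaroo` hub-id assumption as above:

```python
from datasets import load_dataset

ds = load_dataset("qangaroo", "wikihop", split="validation")
example = ds[0]

print(example["query"])           # the multi-hop query string
print(len(example["supports"]))   # number of support documents
print(example["candidates"][:5])  # a few of the answer candidates
print(example["answer"])          # the gold answer string
print(example["id"])              # unique example identifier
```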

### Data Splits

| name           | train | validation |
|----------------|------:|-----------:|
| masked_medhop  |  1620 |        342 |
| masked_wikihop | 43738 |       5129 |
| medhop         |  1620 |        342 |
| wikihop        | 43738 |       5129 |
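
The counts above can be cross-checked against the loaded splits (same hub-id assumption):

```python
from datasets import load_dataset

for config in ("medhop", "masked_medhop", "wikihop", "masked_wikihop"):
    splits = load_dataset("qangaroo", config)
    print(config, splits["train"].num_rows, splits["validation"].num_rows)
```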

## Dataset Creation

### Curation Rationale

More Information Needed

### Source Data

#### Initial Data Collection and Normalization

More Information Needed

#### Who are the source language producers?

More Information Needed

### Annotations

#### Annotation process

More Information Needed

#### Who are the annotators?

More Information Needed

### Personal and Sensitive Information

More Information Needed

## Considerations for Using the Data

### Social Impact of Dataset

More Information Needed

### Discussion of Biases

More Information Needed

### Other Known Limitations

More Information Needed

## Additional Information

### Dataset Curators

More Information Needed

### Licensing Information

More Information Needed

### Citation Information
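
```bibtex
@article{welbl-etal-2018-constructing,
    title   = {Constructing Datasets for Multi-hop Reading Comprehension Across Documents},
    author  = {Welbl, Johannes and Stenetorp, Pontus and Riedel, Sebastian},
    journal = {Transactions of the Association for Computational Linguistics},
    volume  = {6},
    pages   = {287--302},
    year    = {2018}
}
```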


### Contributions

Thanks to @thomwolf, @jplu, @lewtun, @lhoestq, @mariamabarham for adding this dataset.