---
annotations_creators:
  - crowdsourced
language_creators:
  - found
language:
  - en
license:
  - cc-by-4.0
multilinguality:
  - monolingual
paperswithcode_id: null
pretty_name: WikiTableQuestions
size_categories:
  - 10K<n<100K
source_datasets:
  - original
task_categories:
  - question-answering
task_ids: []
tags:
  - table-question-answering
dataset_info:
  - config_name: random-split-1
    features:
      - name: id
        dtype: string
      - name: question
        dtype: string
      - name: answers
        sequence: string
      - name: table
        struct:
          - name: header
            sequence: string
          - name: rows
            sequence:
              sequence: string
          - name: name
            dtype: string
    splits:
      - name: train
        num_bytes: 30364389
        num_examples: 11321
      - name: test
        num_bytes: 11423506
        num_examples: 4344
      - name: validation
        num_bytes: 7145768
        num_examples: 2831
    download_size: 29267445
    dataset_size: 48933663
  - config_name: random-split-2
    features:
      - name: id
        dtype: string
      - name: question
        dtype: string
      - name: answers
        sequence: string
      - name: table
        struct:
          - name: header
            sequence: string
          - name: rows
            sequence:
              sequence: string
          - name: name
            dtype: string
    splits:
      - name: train
        num_bytes: 30098954
        num_examples: 11314
      - name: test
        num_bytes: 11423506
        num_examples: 4344
      - name: validation
        num_bytes: 7411203
        num_examples: 2838
    download_size: 29267445
    dataset_size: 48933663
  - config_name: random-split-3
    features:
      - name: id
        dtype: string
      - name: question
        dtype: string
      - name: answers
        sequence: string
      - name: table
        struct:
          - name: header
            sequence: string
          - name: rows
            sequence:
              sequence: string
          - name: name
            dtype: string
    splits:
      - name: train
        num_bytes: 28778697
        num_examples: 11314
      - name: test
        num_bytes: 11423506
        num_examples: 4344
      - name: validation
        num_bytes: 8731460
        num_examples: 2838
    download_size: 29267445
    dataset_size: 48933663
  - config_name: random-split-4
    features:
      - name: id
        dtype: string
      - name: question
        dtype: string
      - name: answers
        sequence: string
      - name: table
        struct:
          - name: header
            sequence: string
          - name: rows
            sequence:
              sequence: string
          - name: name
            dtype: string
    splits:
      - name: train
        num_bytes: 30166421
        num_examples: 11321
      - name: test
        num_bytes: 11423506
        num_examples: 4344
      - name: validation
        num_bytes: 7343736
        num_examples: 2831
    download_size: 29267445
    dataset_size: 48933663
  - config_name: random-split-5
    features:
      - name: id
        dtype: string
      - name: question
        dtype: string
      - name: answers
        sequence: string
      - name: table
        struct:
          - name: header
            sequence: string
          - name: rows
            sequence:
              sequence: string
          - name: name
            dtype: string
    splits:
      - name: train
        num_bytes: 30333964
        num_examples: 11316
      - name: test
        num_bytes: 11423506
        num_examples: 4344
      - name: validation
        num_bytes: 7176193
        num_examples: 2836
    download_size: 29267445
    dataset_size: 48933663
---

# Dataset Card for WikiTableQuestions

## Table of Contents

## Dataset Description

### Dataset Summary

The WikiTableQuestions dataset is a large-scale dataset for the task of question answering on semi-structured tables.
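
All five configurations (`random-split-1` through `random-split-5`) repartition the same questions into train and validation while sharing a common test set. As a minimal sketch, the dataset can be loaded with the Hugging Face `datasets` library, using the configuration names listed in the metadata above:

```python
from datasets import load_dataset

# Load one of the five random train/validation partitions; the 4,344-example
# test split is the same across all configurations (see the metadata above).
dataset = load_dataset("wikitablequestions", "random-split-1")

print(dataset)                          # DatasetDict with train / validation / test
example = dataset["train"][0]
print(example["question"], example["answers"])
```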

### Supported Tasks and Leaderboards

`question-answering`, `table-question-answering`
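
As an illustrative sketch (not part of the dataset itself), an example can be fed to a table-question-answering model through the `transformers` pipeline; the TAPAS checkpoint named below is only an assumed choice, and any table-QA model could be substituted:

```python
import pandas as pd
from datasets import load_dataset
from transformers import pipeline

example = load_dataset("wikitablequestions", "random-split-1", split="validation")[0]

# The table struct (header + rows of strings) converts directly into a pandas
# DataFrame, which is the input format the table-QA pipeline expects.
table = pd.DataFrame(example["table"]["rows"], columns=example["table"]["header"])

# "google/tapas-base-finetuned-wtq" is assumed here as an example checkpoint
# fine-tuned on WikiTableQuestions; swap in any table-question-answering model.
tqa = pipeline("table-question-answering", model="google/tapas-base-finetuned-wtq")
print(tqa(table=table, query=example["question"]))
```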

### Languages

The dataset is in English (`en`).

## Dataset Structure

### Data Instances

#### default

- **Size of downloaded dataset files:** 29.27 MB
- **Size of the generated dataset:** 48.93 MB
- **Total amount of disk used:** 78.20 MB

An example from the 'validation' split looks as follows:

```
{
    "id": "nt-0",
    "question": "what was the last year where this team was a part of the usl a-league?",
    "answers": ["2004"],
    "table": {
        "header": ["Year", "Division", "League", ...],
        "name": "csv/204-csv/590.csv",
        "rows": [
           ["2001", "2", "USL A-League", ...],
           ["2002", "2", "USL A-League", ...],
           ...
        ]
    }
}
```
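
As a quick, hedged sanity check of the structure shown above, the snippet below reports how often the gold answers of a small validation sample appear verbatim among the table cells (answers produced by counting, aggregation, or comparison need not match any cell):

```python
from datasets import load_dataset

validation = load_dataset("wikitablequestions", "random-split-1", split="validation")

found = 0
sample = validation.select(range(100))      # small sample, purely for illustration
for example in sample:
    # Flatten the table into a set of cell strings.
    cells = {cell for row in example["table"]["rows"] for cell in row}
    if all(answer in cells for answer in example["answers"]):
        found += 1
print(f"{found}/{len(sample)} sampled questions have all gold answers verbatim in the table")
```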

### Data Fields

The data fields are the same among all splits.

#### default

- `id`: a `string` feature.
- `question`: a `string` feature.
- `answers`: a list of `string` features.
- `table`: a dictionary feature containing:
  - `header`: a list of `string` features.
  - `rows`: a list of lists of `string` features.
  - `name`: a `string` feature.

### Data Splits

| name    | train | validation | test |
|:--------|------:|-----------:|-----:|
| default | 11321 |       2831 | 4344 |
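
The counts above correspond to `random-split-1`; as the metadata shows, the other configurations repartition the questions between train and validation while keeping the test set fixed. A small sketch for printing the split sizes of every configuration:

```python
from datasets import load_dataset

# Print the number of examples per split for all five configurations.
for i in range(1, 6):
    config = f"random-split-{i}"
    dataset = load_dataset("wikitablequestions", config)
    print(config, {split: len(ds) for split, ds in dataset.items()})
```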

## Dataset Creation

### Curation Rationale

[Needs More Information]

### Source Data

#### Initial Data Collection and Normalization

[Needs More Information]

#### Who are the source language producers?

[Needs More Information]

### Annotations

#### Annotation process

[Needs More Information]

#### Who are the annotators?

[Needs More Information]

### Personal and Sensitive Information

[Needs More Information]

## Considerations for Using the Data

### Social Impact of Dataset

[Needs More Information]

### Discussion of Biases

[Needs More Information]

### Other Known Limitations

[Needs More Information]

## Additional Information

### Dataset Curators

Panupong Pasupat and Percy Liang

### Licensing Information

Creative Commons Attribution Share Alike 4.0 International

### Citation Information

```bibtex
@inproceedings{pasupat-liang-2015-compositional,
    title = "Compositional Semantic Parsing on Semi-Structured Tables",
    author = "Pasupat, Panupong and Liang, Percy",
    booktitle = "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)",
    month = jul,
    year = "2015",
    address = "Beijing, China",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/P15-1142",
    doi = "10.3115/v1/P15-1142",
    pages = "1470--1480",
}
```

### Contributions

Thanks to @SivilTaram for adding this dataset.