---
language:
  - en
license: unknown
task_categories:
  - question-answering
paperswithcode_id: comqa
pretty_name: ComQA
dataset_info:
  features:
    - name: cluster_id
      dtype: string
    - name: questions
      sequence: string
    - name: answers
      sequence: string
  splits:
    - name: train
      num_bytes: 692932
      num_examples: 3966
    - name: test
      num_bytes: 271554
      num_examples: 2243
    - name: validation
      num_bytes: 131129
      num_examples: 966
  download_size: 474169
  dataset_size: 1095615
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
      - split: test
        path: data/test-*
      - split: validation
        path: data/validation-*
---

Dataset Card for "com_qa"

Table of Contents

Dataset Description

Dataset Summary

ComQA is a dataset of 11,214 questions collected from WikiAnswers, a community question-answering website. Collecting questions from such a site ensures that the information needs are those of actual users. Moreover, questions posed there often cannot be answered by commercial search engines or QA technology, making them more interesting for driving future research than questions collected from an engine's query log. The dataset contains questions with various challenging phenomena such as the need for temporal reasoning, comparison (e.g., comparatives, superlatives, ordinals), compositionality (multiple, possibly nested, subquestions with multiple entities), and unanswerable questions (e.g., "Who was the first human being on Mars?"). Through a large crowdsourcing effort, questions in ComQA are grouped into 4,834 paraphrase clusters that express the same information need. Each cluster is annotated with its answer(s). ComQA answers come in the form of Wikipedia entities wherever possible. Wherever the answers are temporal or measurable quantities, TIMEX3 and the International System of Units (SI) are used for normalization.
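As a rough illustration of that answer format, here is a minimal sketch; it assumes answers are stored as plain strings and that Wikipedia-entity answers are full `https://en.wikipedia.org/wiki/...` URLs, as in the example shown under Data Instances below. The helper itself is hypothetical and purely illustrative.

```python
# Hypothetical helper for separating Wikipedia-entity answers from
# normalized literals (e.g. TIMEX3 dates, SI quantities); illustrative only.
WIKI_PREFIX = "https://en.wikipedia.org/wiki/"

def split_answer_types(answers):
    wiki = [a for a in answers if a.startswith(WIKI_PREFIX)]
    literals = [a for a in answers if not a.startswith(WIKI_PREFIX)]
    return wiki, literals

print(split_answer_types(["https://en.wikipedia.org/wiki/north_sea"]))
# -> (['https://en.wikipedia.org/wiki/north_sea'], [])
```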

Supported Tasks and Leaderboards

More Information Needed

Languages

More Information Needed

Dataset Structure

Data Instances

default

  • Size of downloaded dataset files: 1.67 MB
  • Size of the generated dataset: 1.10 MB
  • Total amount of disk used: 2.78 MB

An example of 'validation' looks as follows.

{
    "answers": ["https://en.wikipedia.org/wiki/north_sea"],
    "cluster_id": "cluster-922",
    "questions": ["what sea separates the scandinavia peninsula from britain?", "which sea separates britain from scandinavia?"]
}
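A minimal loading sketch with the 🤗 Datasets library, assuming the dataset is reachable on the Hub under the `com_qa` identifier used by this card:

```python
from datasets import load_dataset

# Load all three splits described in this card (train / validation / test).
ds = load_dataset("com_qa")

# Inspect one validation record: a cluster id, a list of paraphrased
# questions, and a list of answers.
example = ds["validation"][0]
print(example["cluster_id"])
print(example["questions"])
print(example["answers"])
```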

Data Fields

The data fields are the same among all splits.

default

  • cluster_id: a string feature.
  • questions: a list of string features.
  • answers: a list of string features.
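To confirm that the loaded schema matches the fields listed above, a sketch like the following can be used (same `com_qa` identifier assumption as before):

```python
from datasets import load_dataset

ds = load_dataset("com_qa", split="train")

# The features should list cluster_id as a string, and questions and
# answers as sequences of strings.
print(ds.features)
```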

Data Splits

| name    | train | validation | test |
|---------|------:|-----------:|-----:|
| default | 3966  | 966        | 2243 |
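For a quick sanity check of these counts (again assuming the `com_qa` Hub identifier), a sketch like:

```python
from datasets import load_dataset

ds = load_dataset("com_qa")

# The printed sizes should match the split table above.
for split_name, split in ds.items():
    print(split_name, len(split))

# Splits can also be converted to pandas DataFrames for ad-hoc analysis.
train_df = ds["train"].to_pandas()
print(train_df.head())
```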

Dataset Creation

Curation Rationale

More Information Needed

Source Data

Initial Data Collection and Normalization

More Information Needed

Who are the source language producers?

More Information Needed

Annotations

Annotation process

More Information Needed

Who are the annotators?

More Information Needed

Personal and Sensitive Information

More Information Needed

Considerations for Using the Data

Social Impact of Dataset

More Information Needed

Discussion of Biases

More Information Needed

Other Known Limitations

More Information Needed

Additional Information

Dataset Curators

More Information Needed

Licensing Information

More Information Needed

Citation Information

@inproceedings{abujabal-etal-2019-comqa,
    title = {{C}om{QA}: A Community-sourced Dataset for Complex Factoid Question Answering with Paraphrase Clusters},
    author = {Abujabal, Abdalghani  and
      Saha Roy, Rishiraj  and
      Yahya, Mohamed  and
      Weikum, Gerhard},
    booktitle = {Proceedings of the 2019 Conference of the North {A}merican Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)},
    month = jun,
    year = {2019},
    address = {Minneapolis, Minnesota},
    publisher = {Association for Computational Linguistics},
    url = {https://www.aclweb.org/anthology/N19-1027},
    doi = {10.18653/v1/N19-1027},
    pages = {307--317},
}

Contributions

Thanks to @lewtun, @thomwolf, @mariamabarham, @patrickvonplaten, @albertvillanova for adding this dataset.