Dataset: com_qa

How to load this dataset directly with the 🤗/nlp library:

from nlp import load_dataset

dataset = load_dataset("com_qa")

Description

ComQA is a dataset of 11,214 questions collected from WikiAnswers, a community question answering website. Collecting questions from such a site ensures that the information needs are ones of interest to actual users. Moreover, questions posed there often cannot be answered by commercial search engines or QA technology, making them more interesting for driving future research than those collected from an engine's query log. The dataset contains questions with various challenging phenomena such as the need for temporal reasoning, comparison (e.g., comparatives, superlatives, ordinals), compositionality (multiple, possibly nested, subquestions with multiple entities), and unanswerable questions (e.g., Who was the first human being on Mars?). Through a large crowdsourcing effort, questions in ComQA are grouped into 4,834 paraphrase clusters that express the same information need. Each cluster is annotated with its answer(s). ComQA answers come in the form of Wikipedia entities wherever possible. Where the answers are temporal or measurable quantities, TIMEX3 and the International System of Units (SI) are used for normalization.
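
As a minimal sketch of how the paraphrase-cluster structure described above might look once loaded: the split name and the field names used below (cluster_id, questions, answers) are assumptions inferred from the description, so check dataset.features for the actual schema.

from nlp import load_dataset

# Load the ComQA training split (validation and test splits are assumed to exist as well).
dataset = load_dataset("com_qa", split="train")

# Print the schema so the actual field names can be verified.
print(dataset.features)

# Inspect one record. The field names below are assumptions based on the
# description; adjust them to match dataset.features.
example = dataset[0]
print(example["cluster_id"])  # identifier of the paraphrase cluster
print(example["questions"])   # paraphrased questions expressing the same information need
print(example["answers"])     # answer(s), e.g. Wikipedia entities or normalized values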

Citation

@inproceedings{abujabal-etal-2019-comqa,
    title = "{ComQA: A Community-sourced Dataset for Complex Factoid Question Answering with Paraphrase Clusters",
    author = {Abujabal, Abdalghani  and
      Saha Roy, Rishiraj  and
      Yahya, Mohamed  and
      Weikum, Gerhard},
    booktitle = {Proceedings of the 2019 Conference of the North {A}merican Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)},
    month = {jun},
    year = {2019},
    address = {Minneapolis, Minnesota},
    publisher = {Association for Computational Linguistics},
    url = {https://www.aclweb.org/anthology/N19-1027},
    doi = {10.18653/v1/N19-1027},
    pages = {307--317},
}

Models trained or fine-tuned on com_qa

None yet. Start fine-tuning now =)