---
license: cc-by-4.0
task_categories:
- question-answering
- conversational
language:
- en
tags:
- complex
- question answering
- complexQA
- QA
- heterogeneous sources
pretty_name: CompMix
size_categories:
- 1K<n<10K
splits:
- name: train
num_examples: 4966
- name: validation
num_examples: 1680
- name: test
num_examples: 2764
---

# Dataset Card for CompMix
## Dataset Description
- Homepage: CompMix Website
- Point of Contact: Philipp Christmann
### Dataset Summary
CompMix collates the completed (self-contained) versions of the conversational questions in the ConvMix dataset, as provided directly by crowdworkers on Amazon Mechanical Turk (AMT). Questions in CompMix exhibit complex phenomena such as multiple entities, multiple relations, temporal conditions, comparisons, and aggregations. The dataset is aimed at evaluating QA methods that operate over a mixture of heterogeneous input sources (knowledge base, text, tables, and infoboxes). It contains 9,410 questions, split into train (4,966 questions), dev (1,680 questions), and test (2,764 questions) sets. All answers in the CompMix dataset are grounded to the KB, except for dates, which are normalized, and other literals such as names.
Further details will be provided in a dedicated write-up soon.
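The dataset can be loaded with the Hugging Face `datasets` library. A minimal sketch follows; note that the repository id used below (`pchristm/CompMix`) is a hypothetical placeholder, so substitute the actual Hub id of this dataset:

```python
# Minimal loading sketch using the Hugging Face `datasets` library.
# NOTE: "pchristm/CompMix" is a hypothetical repository id; replace it
# with the dataset's actual Hub id.
from datasets import load_dataset

compmix = load_dataset("pchristm/CompMix")

# The card lists three splits: train (4,966), validation (1,680), test (2,764).
for split_name, split in compmix.items():
    print(split_name, len(split))

# Inspect one example to see the available fields.
print(compmix["train"][0])
```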
## Dataset Creation
CompMix collates the completed versions of the conversational questions in ConvMix, as provided directly by the crowdworkers.
The ConvMix benchmark, on which CompMix is based, was created by real humans. We tried to ensure that the collected data is as natural as possible. Master crowdworkers on Amazon Mechanical Turk (AMT) selected an entity of interest in a specific domain and then issued conversational questions about this entity, potentially drifting to other topics of interest over the course of the conversation. By letting users choose the entities themselves, we aimed to ensure that they were genuinely interested in the topics the conversations are based on. After writing a question, users were asked to find the answer in either Wikidata, Wikipedia text, a Wikipedia table, or a Wikipedia infobox, whichever they found most natural for the specific question at hand. Since Wikidata requires some basic understanding of knowledge bases, we provided video guidelines that illustrated, following an example conversation, how Wikidata can be used to find answers. For each conversational question, which might be incomplete on its own, the crowdworker provided a completed question that is intent-explicit and can be answered without the conversational context. These completed questions constitute the CompMix dataset. We also provide the answer source in which the user found the answer, as well as the question entities.
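Since each question is annotated with the source its answer was found in, one natural first analysis is tallying the distribution of answer sources. The sketch below assumes a field named `answer_src`; this field name (and the Hub id, as above) is an assumption for illustration and may differ from the released schema:

```python
# Sketch of tallying answers by source (KB, text, table, infobox).
# NOTE: the field name "answer_src" and the repository id are hypothetical
# placeholders; check the dataset's actual schema before running.
from collections import Counter

from datasets import load_dataset

compmix = load_dataset("pchristm/CompMix")  # hypothetical Hub id

source_counts = Counter(example["answer_src"] for example in compmix["train"])
print(source_counts)
```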