---
license: cc-by-4.0
task_categories:
- question-answering
- conversational
language:
- en
tags:
- complex
- question answering
- complexQA
- QA
- heterogeneous sources
pretty_name: CompMix
size_categories:
- 1K<n<10K
splits:
  - name: train
    num_examples: 4966
  - name: validation
    num_examples: 1680
  - name: test
    num_examples: 2764
---

# Dataset Card for CompMix

## Dataset Description

- **Homepage:** [CompMix Website](https://qa.mpi-inf.mpg.de/compmix)
- **Point of Contact:** [Philipp Christmann](mailto:pchristm@mpi-inf.mpg.de)

### Dataset Summary

CompMix collates the completed (intent-explicit) versions of the conversational questions in the [ConvMix dataset](https://convinse.mpi-inf.mpg.de), which were provided directly by crowdworkers on Amazon Mechanical Turk (AMT). Questions in CompMix exhibit complex phenomena such as multiple entities, relations, temporal conditions, comparisons, and aggregations. The dataset is aimed at evaluating QA methods that operate over a mixture of heterogeneous input sources (KB, text, tables, infoboxes). It contains 9,410 questions, split into train (4,966 questions), dev (1,680), and test (2,764) sets. All answers in CompMix are grounded to the KB, except for dates (which are normalized) and other literals such as names.
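
As a minimal sketch, the dataset can be loaded with the 🤗 `datasets` library; the repository id below is a placeholder and should be replaced with the actual path of this dataset on the Hub:

```python
from datasets import load_dataset

# Placeholder repository id -- substitute the actual Hub path of this dataset card.
dataset = load_dataset("<org>/CompMix")

# The card lists 4,966 train, 1,680 validation, and 2,764 test questions.
for split_name, split in dataset.items():
    print(split_name, len(split))

# Inspect one example to see the available fields.
print(dataset["train"][0])
```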

Further details will be provided in a dedicated write-up soon.


### Dataset Creation
CompMix collates the completed versions of the conversational questions in ConvMix, which were provided directly by the crowdworkers.

The ConvMix benchmark, on which CompMix is based, was created by real humans. We tried to ensure that the collected data is as natural as possible. Master crowdworkers on Amazon Mechanical Turk (AMT) selected an entity of interest in a specific domain and then issued conversational questions about this entity, potentially drifting to other topics of interest over the course of the conversation. By letting users choose the entities themselves, we aimed to ensure that they are more interested in the topics the conversations are based on. After writing a question, users were asked to find the answer in either Wikidata, Wikipedia text, a Wikipedia table, or a Wikipedia infobox, whichever they found most natural for the specific question at hand. Since Wikidata requires some basic understanding of knowledge bases, we provided video guidelines that illustrated how Wikidata can be used for detecting answers, following an example conversation. For each conversational question, which might be incomplete on its own, the crowdworker provided a completed question that is intent-explicit and can be answered without the conversational context. These completed questions constitute the CompMix dataset. We also provide the answer source in which the user found the answer, as well as the question entities, as sketched in the example below.
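
As a minimal sketch of how this per-question metadata might be consumed, the snippet below tallies answers by source; the repository id is a placeholder and the field name `answer_src` is an assumption about the schema, not a confirmed column name:

```python
from collections import Counter

from datasets import load_dataset

# Placeholder repository id; the field name below is assumed, not verified.
dataset = load_dataset("<org>/CompMix", split="train")

# Count how often each heterogeneous source (KB, text, table, infobox)
# was used by crowdworkers to find the answer.
source_counts = Counter(example["answer_src"] for example in dataset)  # assumed field name
print(source_counts)
```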