pchristm committed on
Commit
e83c6fc
1 Parent(s): 5538839

Add dataset summary and description

Files changed (1)
  1. README.md +28 -1
README.md CHANGED
@@ -14,4 +14,31 @@ tags:
  pretty_name: ConvMix
  size_categories:
  - 1K<n<10K
- ---
+ splits:
+ - name: train
+   num_examples: 4966
+ - name: validation
+   num_examples: 1680
+ - name: test
+   num_examples: 2764
+ ---
+
+ # Dataset Card for ConvMix
+
+ ## Dataset Description
+
+ - **Homepage:** [CompMix Website](https://qa.mpi-inf.mpg.de/compmix)
+ - **Point of Contact:** [Philipp Christmann](mailto:pchristm@mpi-inf.mpg.de)
+
+ ### Dataset Summary
+
+ CompMix collates the completed versions of the conversational questions in the [ConvMix dataset](https://convinse.mpi-inf.mpg.de), which were provided directly by crowdworkers on Amazon Mechanical Turk (AMT). Questions in CompMix exhibit complex phenomena such as the presence of multiple entities, relations, temporal conditions, comparisons, aggregations, and more. The dataset is aimed at evaluating QA methods that operate over a mixture of heterogeneous input sources (KB, text, tables, infoboxes). It has 9,410 questions, split into train (4,966 questions), dev (1,680), and test (2,764) sets. All answers in the CompMix dataset are grounded to the KB, except for dates, which are normalized, and other literals such as names.
+
+ Further details will be provided in a dedicated write-up soon.
+
+
+ ### Dataset Creation
+ CompMix collates the completed versions of the conversational questions in ConvMix, which were provided directly by the crowdworkers.
+
+ The ConvMix benchmark, on which CompMix is based, was created by real humans. We tried to ensure that the collected data is as natural as possible. Master crowdworkers on Amazon Mechanical Turk (AMT) selected an entity of interest in a specific domain, and then issued conversational questions about this entity, potentially drifting to other topics of interest over the course of the conversation. By letting users choose the entities themselves, we aimed to ensure that they were genuinely interested in the topics the conversations are based on. After writing a question, users were asked to find the answer in either Wikidata, Wikipedia text, a Wikipedia table, or a Wikipedia infobox, whichever they found most natural for the specific question at hand. Since Wikidata requires some basic understanding of knowledge bases, we provided video guidelines that illustrated, via an example conversation, how Wikidata can be used for detecting answers. For each conversational question, which might be incomplete on its own, the crowdworker provided a completed question that is intent-explicit and can be answered without the conversational context. These completed questions constitute the CompMix dataset. We also provide the answer source in which the user found the answer, as well as the question entities.
+
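Since the card now declares explicit splits, here is a minimal sketch of loading them with the Hugging Face `datasets` library. The dataset id `pchristm/CompMix` is an assumption based on the committer's username and is not stated in this commit.

```python
# A minimal sketch of loading the splits described in this card with the
# Hugging Face `datasets` library. NOTE: the dataset id "pchristm/CompMix"
# is an assumption (based on the committer's username), not confirmed here.
from datasets import load_dataset

dataset = load_dataset("pchristm/CompMix")

# The card lists 4,966 train, 1,680 validation, and 2,764 test examples.
for split in ("train", "validation", "test"):
    print(f"{split}: {len(dataset[split])} examples")
```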