pchristm committed
Commit 61ddb74
1 Parent(s): bd3521b

Update dataset description and summary

Files changed (1):
  1. README.md +41 -0
README.md CHANGED
@@ -1,3 +1,44 @@
  ---
  license: cc-by-4.0
+ task_categories:
+ - question-answering
+ - conversational
+ language:
+ - en
+ tags:
+ - complex
+ - question answering
+ - convQA
+ - conversationalAI
+ - conversational
+ - QA
+ - heterogeneous sources
+ pretty_name: ConvMix
+ size_categories:
+ - 10K<n<100K
+ splits:
+ - name: train
+   num_examples: 8400
+ - name: validation
+   num_examples: 2800
+ - name: test
+   num_examples: 4800
  ---
+
+ # Dataset Card for ConvMix
+
+ ## Dataset Description
+
+ - **Homepage:** [ConvMix Website](https://convinse.mpi-inf.mpg.de/)
+ - **Paper:** [Conversational Question Answering on Heterogeneous Sources](https://dl.acm.org/doi/10.1145/3477495.3531815)
+ - **Leaderboard:** [ConvMix Leaderboard](https://convinse.mpi-inf.mpg.de/)
+ - **Point of Contact:** [Philipp Christmann](mailto:pchristm@mpi-inf.mpg.de)
+
+ ### Dataset Summary
+
+ We construct and release the first benchmark, ConvMix, for conversational question answering (ConvQA) over heterogeneous sources, comprising 3000 real-user conversations with 16000 questions, along with entity annotations, completed question utterances, and question paraphrases.
+ The dataset naturally requires information from multiple sources for answering the individual questions in the conversations.
+
+ ### Dataset Creation
+
+ The ConvMix benchmark was created by real humans. We tried to ensure that the collected data is as natural as possible. Master crowdworkers on Amazon Mechanical Turk (AMT) selected an entity of interest in a specific domain, and then started issuing conversational questions about this entity, potentially drifting to other topics of interest over the course of the conversation. By letting users choose the entities themselves, we aimed to ensure that they are more interested in the topics the conversations are based on. After writing a question, users were asked to find the answer in either Wikidata, Wikipedia text, a Wikipedia table, or a Wikipedia infobox, whichever they found most natural for the specific question at hand. Since Wikidata requires some basic understanding of knowledge bases, we provided video guidelines that illustrated how Wikidata can be used for detecting answers, following an example conversation. For each conversational question, which might be incomplete, the crowdworker provided a completed question that is intent-explicit and can be answered without the conversational context. These completed questions constitute the CompMix dataset. We also provide the answer source in which the user found the answer, as well as the question entities.
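
For quick experimentation with the splits declared in the card metadata, the dataset can typically be loaded with the Hugging Face `datasets` library. The following is a minimal sketch, assuming the data is published under a Hub repository id like `pchristm/ConvMix` and loads with the default configuration (both the id and the format are assumptions, not confirmed by the card):

```python
from datasets import load_dataset

# Minimal loading sketch. The repository id "pchristm/ConvMix" is an
# assumption; adjust it to the actual Hub path of the dataset.
convmix = load_dataset("pchristm/ConvMix")

# The card metadata declares train/validation/test splits with
# 8400/2800/4800 examples, respectively.
for split_name, split in convmix.items():
    print(split_name, len(split))
```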