jpcorb20 committed
Commit 735244f
Parent: ae6c4a8

Update README.md

Files changed (1): README.md (+31, -1)
README.md CHANGED
@@ -1 +1,31 @@
- MultiDoGo dialog dataset : https://aclanthology.org/D19-1460/
+ annotations_creators:
+ - crowdsourced
+ language_creators:
+ - crowdsourced
+ languages:
+ - en
+ licenses:
+ - cdla-permissive-1.0
+ multilinguality:
+ - monolingual
+ pretty_name: multidogo
+ size_categories:
+ - unknown
+ source_datasets:
+ - original
+ task_categories:
+ - text-classification
+ - sequence-modeling
+ - structure-prediction
+ - other
+ task_ids:
+ - intent-classification
+ - dialogue-modeling
+ - slot-filling
+ - named-entity-recognition
+ - other-other-my-task-description
+
+ MultiDoGo dialog dataset: https://aclanthology.org/D19-1460/
+
+ *Abstract*
+ The need for high-quality, large-scale, goal-oriented dialogue datasets continues to grow as virtual assistants become increasingly widespread. However, publicly available datasets useful for this area are limited either in their size, linguistic diversity, domain coverage, or annotation granularity. In this paper, we present strategies toward curating and annotating large-scale, goal-oriented dialogue data. We introduce the MultiDoGo dataset to overcome these limitations. With a total of over 81K dialogues harvested across six domains, MultiDoGo is over 8 times the size of MultiWOZ, otherwise the largest comparable dialogue dataset currently available to the public. Over 54K of these harvested conversations are annotated for intent classes and slot labels. We adopt a Wizard-of-Oz approach wherein a crowd-sourced worker (the “customer”) is paired with a trained annotator (the “agent”). The data curation process was controlled via biases to ensure a diversity in dialogue flows following variable dialogue policies. We provide distinct class label tags for agent vs. customer utterances, along with applicable slot labels. We also compare and contrast our strategies on annotation granularity, i.e. turn vs. sentence level. Furthermore, we compare and contrast annotations curated by leveraging professional annotators vs. the crowd. We believe our strategies for eliciting and annotating such a dialogue dataset scale across modalities and domains, and potentially languages, in the future. To demonstrate the efficacy of our devised strategies, we establish neural baselines for classification on the agent and customer utterances as well as slot labeling for each domain.
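
For reference, a dataset card like this one is typically consumed through the Hugging Face `datasets` library. Below is a minimal sketch; the repository id `jpcorb20/multidogo` (inferred from the committer's namespace) and the presence of a `train` split are assumptions, not confirmed by this commit.

```python
# Minimal sketch of pulling the dataset from the Hugging Face Hub.
# Assumptions (not confirmed by this commit): the repo id is
# "jpcorb20/multidogo" and the dataset exposes a "train" split;
# some Hub datasets also require a domain/config name as a second argument.
from datasets import load_dataset

ds = load_dataset("jpcorb20/multidogo")

print(ds)              # available splits and their features
print(ds["train"][0])  # first annotated example (intent classes / slot labels)
```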