jpcorb20 committed on
Commit
02b2d33
2 Parent(s): 77eec53 1230cfe

correct readme

Files changed (1)
  1. README.md +35 -0
README.md ADDED
@@ -0,0 +1,35 @@
+ ---
+ annotations_creators:
+ - crowdsourced
+ language_creators:
+ - crowdsourced
+ languages:
+ - en
+ licenses:
+ - cdla-permissive-1.0
+ multilinguality:
+ - monolingual
+ pretty_name: multidogo
+ size_categories:
+ - 10k<n<100k
+ source_datasets:
+ - original
+ task_categories:
+ - text-classification
+ - sequence-modeling
+ - structure-prediction
+ - other
+ task_ids:
+ - intent-classification
+ - dialogue-modeling
+ - slot-filling
+ - named-entity-recognition
+ - other-other-my-task-description
+ ---
+
+ MultiDoGo dialog dataset:
+ - paper: https://aclanthology.org/D19-1460/
+ - git repo: https://github.com/awslabs/multi-domain-goal-oriented-dialogues-dataset
+
+ *Abstract*
+ The need for high-quality, large-scale, goal-oriented dialogue datasets continues to grow as virtual assistants become increasingly wide-spread. However, publicly available datasets useful for this area are limited either in their size, linguistic diversity, domain coverage, or annotation granularity. In this paper, we present strategies toward curating and annotating large scale goal oriented dialogue data. We introduce the MultiDoGO dataset to overcome these limitations. With a total of over 81K dialogues harvested across six domains, MultiDoGO is over 8 times the size of MultiWOZ, the other largest comparable dialogue dataset currently available to the public. Over 54K of these harvested conversations are annotated for intent classes and slot labels. We adopt a Wizard-of-Oz approach wherein a crowd-sourced worker (the “customer”) is paired with a trained annotator (the “agent”). The data curation process was controlled via biases to ensure a diversity in dialogue flows following variable dialogue policies. We provide distinct class label tags for agents vs. customer utterances, along with applicable slot labels. We also compare and contrast our strategies on annotation granularity, i.e. turn vs. sentence level. Furthermore, we compare and contrast annotations curated by leveraging professional annotators vs the crowd. We believe our strategies for eliciting and annotating such a dialogue dataset scales across modalities and domains and potentially languages in the future. To demonstrate the efficacy of our devised strategies we establish neural baselines for classification on the agent and customer utterances as well as slot labeling for each domain.