Commit 026fd78, committed by system (HF staff)
0 Parent(s)

Update files from the datasets library (from 1.2.0)

Release notes: https://github.com/huggingface/datasets/releases/tag/1.2.0

.gitattributes ADDED
@@ -0,0 +1,27 @@
+ *.7z filter=lfs diff=lfs merge=lfs -text
+ *.arrow filter=lfs diff=lfs merge=lfs -text
+ *.bin filter=lfs diff=lfs merge=lfs -text
+ *.bin.* filter=lfs diff=lfs merge=lfs -text
+ *.bz2 filter=lfs diff=lfs merge=lfs -text
+ *.ftz filter=lfs diff=lfs merge=lfs -text
+ *.gz filter=lfs diff=lfs merge=lfs -text
+ *.h5 filter=lfs diff=lfs merge=lfs -text
+ *.joblib filter=lfs diff=lfs merge=lfs -text
+ *.lfs.* filter=lfs diff=lfs merge=lfs -text
+ *.model filter=lfs diff=lfs merge=lfs -text
+ *.msgpack filter=lfs diff=lfs merge=lfs -text
+ *.onnx filter=lfs diff=lfs merge=lfs -text
+ *.ot filter=lfs diff=lfs merge=lfs -text
+ *.parquet filter=lfs diff=lfs merge=lfs -text
+ *.pb filter=lfs diff=lfs merge=lfs -text
+ *.pt filter=lfs diff=lfs merge=lfs -text
+ *.pth filter=lfs diff=lfs merge=lfs -text
+ *.rar filter=lfs diff=lfs merge=lfs -text
+ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+ *.tar.* filter=lfs diff=lfs merge=lfs -text
+ *.tflite filter=lfs diff=lfs merge=lfs -text
+ *.tgz filter=lfs diff=lfs merge=lfs -text
+ *.xz filter=lfs diff=lfs merge=lfs -text
+ *.zip filter=lfs diff=lfs merge=lfs -text
+ *.zstandard filter=lfs diff=lfs merge=lfs -text
+ *tfevents* filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,225 @@
+ ---
+ annotations_creators:
+ - crowdsourced
+ language_creators:
+ - crowdsourced
+ languages:
+ - en
+ licenses:
+ - Microsoft Research Data License Agreement
+ multilinguality:
+ - monolingual
+ size_categories:
+ - 10K<n<100K
+ source_datasets:
+ - original
+ task_categories:
+ - sequence-modeling
+ task_ids:
+ - dialogue-modeling
+ ---
+
+ # Dataset Card for MetaLWOz
+
+ ## Table of Contents
+ - [Dataset Description](#dataset-description)
+   - [Dataset Summary](#dataset-summary)
+   - [Supported Tasks](#supported-tasks-and-leaderboards)
+   - [Languages](#languages)
+ - [Dataset Structure](#dataset-structure)
+   - [Data Instances](#data-instances)
+   - [Data Fields](#data-fields)
+   - [Data Splits](#data-splits)
+ - [Dataset Creation](#dataset-creation)
+   - [Curation Rationale](#curation-rationale)
+   - [Source Data](#source-data)
+   - [Annotations](#annotations)
+   - [Personal and Sensitive Information](#personal-and-sensitive-information)
+ - [Considerations for Using the Data](#considerations-for-using-the-data)
+   - [Social Impact of Dataset](#social-impact-of-dataset)
+   - [Discussion of Biases](#discussion-of-biases)
+   - [Other Known Limitations](#other-known-limitations)
+ - [Additional Information](#additional-information)
+   - [Dataset Curators](#dataset-curators)
+   - [Licensing Information](#licensing-information)
+   - [Citation Information](#citation-information)
+
+ ## Dataset Description
+
+ - **Repository:** [MetaLWOz Project Website](https://www.microsoft.com/en-us/research/project/metalwoz/)
+ - **Paper:** [Fast Domain Adaptation for Goal-Oriented Dialogue Using a Hybrid Generative-Retrieval Transformer](https://ieeexplore.ieee.org/abstract/document/9053599) and [Hybrid Generative-Retrieval Transformers for Dialogue Domain Adaptation](https://arxiv.org/pdf/2003.01680.pdf)
+ - **Point of Contact:** [Hannes Schulz](https://www.microsoft.com/en-us/research/people/haschulz/)
+
+ ### Dataset Summary
+
+ MetaLWOz: A Dataset of Multi-Domain Dialogues for the Fast Adaptation of Conversation Models.
+ We introduce the Meta-Learning Wizard of Oz (MetaLWOz) dialogue dataset for developing fast adaptation methods for
+ conversation models. This data can be used to train task-oriented dialogue models, specifically to develop methods to
+ quickly simulate user responses with a small amount of data. Such fast-adaptation models fall into the research areas
+ of transfer learning and meta learning. The dataset consists of 37,884 crowdsourced dialogues recorded between two
+ human users in a Wizard of Oz setup, in which one was instructed to behave like a bot, and the other a true human
+ user. The users are assigned a task belonging to a particular domain, for example booking a reservation at a
+ particular restaurant, and work together to complete the task. Our dataset spans 47 domains having 227 tasks total.
+ Dialogues are a minimum of 10 turns long.
+
+ ### Supported Tasks and Leaderboards
+
+ This dataset supports a range of tasks:
+ - **Generative dialogue modeling** or `dialogue-modeling`: This data can be used to train task-oriented dialogue
+ models, specifically to develop methods to quickly simulate user responses with a small amount of data. Such
+ fast-adaptation models fall into the research areas of transfer learning and meta learning. The text of the dialogues
+ can be used to train a sequence model on the utterances. An example input/output pair is given in the
+ [Data Instances](#data-instances) section, and a minimal loading sketch follows below this list.
+
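For orientation, a minimal loading sketch, assuming the `datasets` library and the Hub dataset id `meta_woz`, with the two configurations defined by the loading script further below:

```python
from datasets import load_dataset

# "dialogues" is the default configuration; "tasks" holds the per-task prompts.
dialogues = load_dataset("meta_woz", "dialogues")

print(dialogues)                       # train (37,884 dialogues) and test (2,319 dialogues)
print(dialogues["train"][0]["turns"])  # alternating bot/user utterances, bot first
```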
+ ### Languages
+
+ The text in the dataset is in English (`en`).
+
+ ## Dataset Structure
+
+ ### Data Instances
+
+ A data instance is a full multi-turn dialogue between two crowd workers: one played the role of the `bot`, and the
+ other played the `user`. Both were given a `domain` and a `task`. Each turn has a single utterance, e.g.:
+ ```
+ Domain: Ski
+ User Task: You want to know if there are good ski hills an
+ hour’s drive from your current location.
+ Bot Task: Tell the user that there are no ski hills in their
+ immediate location.
+ Bot: Hello how may I help you?
+ User: Is there any good ski hills an hour’s drive from my
+ current location?
+ Bot: I’m sorry to inform you that there are no ski hills in your
+ immediate location
+ User: Can you help me find the nearest?
+ Bot: Absolutely! It looks like you’re about 3 hours away from
+ Bear Mountain. That seems to be the closest.
+ User: Hmm.. sounds good
+ Bot: Alright! I can help you get your lift tickets now!When
+ will you be going?
+ User: Awesome! please get me a ticket for 10pax
+ Bot: You’ve got it. Anything else I can help you with?
+ User: None. Thanks again!
+ Bot: No problem!
+ ```
+ An example input/output pair for this dialogue:
+ ```
+ Input: dialog history = Hello how may I help you?; Is there
+ any good ski hills an hour’s drive from my current location?;
+ I’m sorry to inform you that there are no ski hills in your
+ immediate location
+ Output: user response = Can you help me find the nearest?
+ ```
+
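One way to derive such (dialog history, user response) pairs from a dialogue's `turns` is sketched below; it assumes turns alternate bot/user starting with the bot, as described in [Data Fields](#data-fields), and reuses the `dialogues` object from the loading sketch above:

```python
def history_response_pairs(turns):
    """Build (dialog history, next user utterance) pairs from one dialogue's turns."""
    pairs = []
    for i in range(1, len(turns), 2):      # user turns sit at odd indices (the bot speaks first)
        history = "; ".join(turns[:i])     # everything said so far, joined as in the example above
        pairs.append((history, turns[i]))  # the target is the user's next utterance
    return pairs

# `dialogues` as loaded in the earlier sketch
pairs = history_response_pairs(dialogues["train"][0]["turns"])
print(pairs[0])
```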
+ ### Data Fields
+
+ Each dialogue instance has the following fields:
+ - `id`: a unique ID identifying the dialog.
+ - `user_id`: a unique ID identifying the user.
+ - `bot_id`: a unique ID identifying the bot.
+ - `domain`: a unique ID identifying the domain. Provides a mapping to the `tasks` configuration.
+ - `task_id`: a unique ID identifying the task. Provides a mapping to the `tasks` configuration (see the lookup sketch after this section).
+ - `turns`: the sequence of utterances alternating between `bot` and `user`, starting with a prompt from `bot`.
+
+ Each task instance has the following fields:
+ - `task_id`: a unique ID identifying the task.
+ - `domain`: a unique ID identifying the domain.
+ - `bot_prompt`: the task specification for the bot.
+ - `bot_role`: the domain-oriented role of the bot.
+ - `user_prompt`: the task specification for the user.
+ - `user_role`: the domain-oriented role of the user.
+
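Since `task_id` links the two configurations, here is a small lookup sketch, assuming both configurations are loaded with the `datasets` library (`prompts_by_task` is an illustrative helper name):

```python
from datasets import load_dataset

# Index the task prompts by task_id.
tasks = load_dataset("meta_woz", "tasks", split="train")
prompts_by_task = {task["task_id"]: task for task in tasks}

# Look up the task specification behind one training dialogue.
dialog = load_dataset("meta_woz", "dialogues", split="train")[0]
task = prompts_by_task.get(dialog["task_id"])
if task is not None:
    print(task["user_prompt"])
    print(task["bot_prompt"])
```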
+ ### Data Splits
+
+ The dataset is split into `train` and `test` splits with the following sizes:
+
+ | Count | Training MetaLWOz | Evaluation MetaLWOz | Combined |
+ | ----- | ------ | ----- | ---- |
+ | Total Domains | 47 | 4 | 51 |
+ | Total Tasks | 226 | 14 | 240 |
+ | Total Dialogs | 37884 | 2319 | 40203 |
+
+ Additional statistics of the dataset:
+
+ | Statistic | Mean | Minimum | Maximum |
+ | ----- | ------ | ----- | ---- |
+ | Number of tasks per domain | 4.8 | 3 | 11 |
+ | Number of dialogs per domain | 806.0 | 288 | 1990 |
+ | Number of dialogs per task | 167.6 | 32 | 285 |
+ | Number of turns per dialog | 11.4 | 10 | 46 |
+
+
+ ## Dataset Creation
+
+ ### Curation Rationale
+
+ [More Information Needed]
+
+ ### Source Data
+
+ #### Initial Data Collection and Normalization
+
+ [More Information Needed]
+
+ #### Who are the source language producers?
+
+ [More Information Needed]
+
+ ### Annotations
+
+ #### Annotation process
+
+ [More Information Needed]
+
+ #### Who are the annotators?
+
+ [More Information Needed]
+
+ ### Personal and Sensitive Information
+
+ [More Information Needed]
+
+ ## Considerations for Using the Data
+
+ ### Social Impact of Dataset
+
+ [More Information Needed]
+
+ ### Discussion of Biases
+
+ [More Information Needed]
+
+ ### Other Known Limitations
+
+ [More Information Needed]
+
+ ## Additional Information
+
+ ### Dataset Curators
+
+ Version 1 of the dataset was created by a team of researchers from Microsoft Research (Montreal, Canada).
+
+ ### Licensing Information
+
+ The dataset is released under the [Microsoft Research Data License Agreement](https://msropendata-web-api.azurewebsites.net/licenses/2f933be3-284d-500b-7ea3-2aa2fd0f1bb2/view).
+
+ ### Citation Information
+
+ You can cite the following for the various versions of MetaLWOz:
+
+ Version 1.0
+ ```
+ @InProceedings{shalyminov2020fast,
+ author = {Shalyminov, Igor and Sordoni, Alessandro and Atkinson, Adam and Schulz, Hannes},
+ title = {Fast Domain Adaptation For Goal-Oriented Dialogue Using A Hybrid Generative-Retrieval Transformer},
+ booktitle = {2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
+ year = {2020},
+ month = {April},
+ url = {https://www.microsoft.com/en-us/research/publication/fast-domain-adaptation-for-goal-oriented-dialogue-using-a-hybrid-generative-retrieval-transformer/},
+ }
+ ```
dataset_infos.json ADDED
@@ -0,0 +1 @@
+ {"dialogues": {"description": "MetaLWOz: A Dataset of Multi-Domain Dialogues for the Fast Adaptation of Conversation Models. We introduce the Meta-Learning Wizard of Oz (MetaLWOz) dialogue dataset for developing fast adaptation methods for conversation models. This data can be used to train task-oriented dialogue models, specifically to develop methods to quickly simulate user responses with a small amount of data. Such fast-adaptation models fall into the research areas of transfer learning and meta learning. The dataset consists of 37,884 crowdsourced dialogues recorded between two human users in a Wizard of Oz setup, in which one was instructed to behave like a bot, and the other a true human user. The users are assigned a task belonging to a particular domain, for example booking a reservation at a particular restaurant, and work together to complete the task. Our dataset spans 47 domains having 227 tasks total. Dialogues are a minimum of 10 turns long.\n", "citation": "@InProceedings{shalyminov2020fast,\nauthor = {Shalyminov, Igor and Sordoni, Alessandro and Atkinson, Adam and Schulz, Hannes},\ntitle = {Fast Domain Adaptation For Goal-Oriented Dialogue Using A Hybrid Generative-Retrieval Transformer},\nbooktitle = {2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},\nyear = {2020},\nmonth = {April},\nurl = {https://www.microsoft.com/en-us/research/publication/fast-domain-adaptation-for-goal-oriented-dialogue-using-a-hybrid-generative-retrieval-transformer/},\n}\n", "homepage": "https://www.microsoft.com/en-us/research/project/metalwoz/", "license": "Microsoft Research Data License Agreement", "features": {"id": {"dtype": "string", "id": null, "_type": "Value"}, "user_id": {"dtype": "string", "id": null, "_type": "Value"}, "bot_id": {"dtype": "string", "id": null, "_type": "Value"}, "domain": {"dtype": "string", "id": null, "_type": "Value"}, "task_id": {"dtype": "string", "id": null, "_type": "Value"}, "turns": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}}, "post_processed": null, "supervised_keys": null, "builder_name": "meta_woz", "config_name": "dialogues", "version": {"version_str": "0.0.0", "description": null, "major": 0, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 19999218, "num_examples": 37884, "dataset_name": "meta_woz"}, "test": {"name": "test", "num_bytes": 1284287, "num_examples": 2319, "dataset_name": "meta_woz"}}, "download_checksums": {"https://download.microsoft.com/download/E/B/8/EB84CB1A-D57D-455F-B905-3ABDE80404E5/metalwoz-v1.zip": {"num_bytes": 5639228, "checksum": "2a2ae3b25760aa2725e70bc6480562fa5d720c9689a508d28417631496d6764f"}, "https://download.microsoft.com/download/0/c/4/0c4a8893-cbf9-4a43-a44a-09bab9539234/metalwoz-test-v1.zip": {"num_bytes": 2990635, "checksum": "6722d1d9ec05334dd801972767ae3bdefcd15f71bf73fea1d098f214a96a7c6c"}}, "download_size": 8629863, "post_processing_size": null, "dataset_size": 21283505, "size_in_bytes": 29913368}, "tasks": {"description": "MetaLWOz: A Dataset of Multi-Domain Dialogues for the Fast Adaptation of Conversation Models. We introduce the Meta-Learning Wizard of Oz (MetaLWOz) dialogue dataset for developing fast adaptation methods for conversation models. This data can be used to train task-oriented dialogue models, specifically to develop methods to quickly simulate user responses with a small amount of data. 
Such fast-adaptation models fall into the research areas of transfer learning and meta learning. The dataset consists of 37,884 crowdsourced dialogues recorded between two human users in a Wizard of Oz setup, in which one was instructed to behave like a bot, and the other a true human user. The users are assigned a task belonging to a particular domain, for example booking a reservation at a particular restaurant, and work together to complete the task. Our dataset spans 47 domains having 227 tasks total. Dialogues are a minimum of 10 turns long.\n", "citation": "@InProceedings{shalyminov2020fast,\nauthor = {Shalyminov, Igor and Sordoni, Alessandro and Atkinson, Adam and Schulz, Hannes},\ntitle = {Fast Domain Adaptation For Goal-Oriented Dialogue Using A Hybrid Generative-Retrieval Transformer},\nbooktitle = {2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},\nyear = {2020},\nmonth = {April},\nurl = {https://www.microsoft.com/en-us/research/publication/fast-domain-adaptation-for-goal-oriented-dialogue-using-a-hybrid-generative-retrieval-transformer/},\n}\n", "homepage": "https://www.microsoft.com/en-us/research/project/metalwoz/", "license": "Microsoft Research Data License Agreement", "features": {"task_id": {"dtype": "string", "id": null, "_type": "Value"}, "domain": {"dtype": "string", "id": null, "_type": "Value"}, "bot_prompt": {"dtype": "string", "id": null, "_type": "Value"}, "bot_role": {"dtype": "string", "id": null, "_type": "Value"}, "user_prompt": {"dtype": "string", "id": null, "_type": "Value"}, "user_role": {"dtype": "string", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "builder_name": "meta_woz", "config_name": "tasks", "version": "0.0.0", "splits": {"train": {"name": "train", "num_bytes": 73768, "num_examples": 227, "dataset_name": "meta_woz"}, "test": {"name": "test", "num_bytes": 4351, "num_examples": 14, "dataset_name": "meta_woz"}}, "download_checksums": {"https://download.microsoft.com/download/E/B/8/EB84CB1A-D57D-455F-B905-3ABDE80404E5/metalwoz-v1.zip": {"num_bytes": 5639228, "checksum": "2a2ae3b25760aa2725e70bc6480562fa5d720c9689a508d28417631496d6764f"}, "https://download.microsoft.com/download/0/c/4/0c4a8893-cbf9-4a43-a44a-09bab9539234/metalwoz-test-v1.zip": {"num_bytes": 2990635, "checksum": "6722d1d9ec05334dd801972767ae3bdefcd15f71bf73fea1d098f214a96a7c6c"}}, "download_size": 8629863, "post_processing_size": null, "dataset_size": 78119, "size_in_bytes": 8707982}}
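The metadata above can be read back with plain `json`; a short sketch, assuming a local copy of `dataset_infos.json`:

```python
import json

# Read the per-configuration metadata file.
with open("dataset_infos.json", encoding="utf-8") as f:
    infos = json.load(f)

print(sorted(infos))                                          # ['dialogues', 'tasks']
print(infos["dialogues"]["splits"]["train"]["num_examples"])  # 37884
print(infos["tasks"]["splits"]["test"]["num_examples"])       # 14
```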
dummy/dialogues/0.0.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:cb8adb221ac474cbfcef27a9daf3e34d3f8403c617d37202db077723dae09124
+ size 16460
dummy/tasks/0.0.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:60faf495c7c6ae2ed2807ec091886b299a2a7a5168becc61cd8b43fb79c1c44e
+ size 8084
meta_woz.py ADDED
@@ -0,0 +1,154 @@
+ # coding=utf-8
+ # Copyright 2020 The HuggingFace Datasets Authors and the current dataset script contributor.
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ #     http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+ """MetaLWOz: A Dataset of Multi-Domain Dialogues for the Fast Adaptation of Conversation Models"""
+
+ from __future__ import absolute_import, division, print_function
+
+ import json
+ import os
+
+ import datasets
+
+
+ _CITATION = """\
+ @InProceedings{shalyminov2020fast,
+ author = {Shalyminov, Igor and Sordoni, Alessandro and Atkinson, Adam and Schulz, Hannes},
+ title = {Fast Domain Adaptation For Goal-Oriented Dialogue Using A Hybrid Generative-Retrieval Transformer},
+ booktitle = {2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
+ year = {2020},
+ month = {April},
+ url = {https://www.microsoft.com/en-us/research/publication/fast-domain-adaptation-for-goal-oriented-dialogue-using-a-hybrid-generative-retrieval-transformer/},
+ }
+ """
+
+ _DESCRIPTION = """\
+ MetaLWOz: A Dataset of Multi-Domain Dialogues for the Fast Adaptation of Conversation Models. \
+ We introduce the Meta-Learning Wizard of Oz (MetaLWOz) dialogue dataset for developing fast adaptation methods for \
+ conversation models. This data can be used to train task-oriented dialogue models, specifically to develop methods to \
+ quickly simulate user responses with a small amount of data. Such fast-adaptation models fall into the research areas \
+ of transfer learning and meta learning. The dataset consists of 37,884 crowdsourced dialogues recorded between two \
+ human users in a Wizard of Oz setup, in which one was instructed to behave like a bot, and the other a true human \
+ user. The users are assigned a task belonging to a particular domain, for example booking a reservation at a \
+ particular restaurant, and work together to complete the task. Our dataset spans 47 domains having 227 tasks total. \
+ Dialogues are a minimum of 10 turns long.
+ """
+
+ _HOMEPAGE = "https://www.microsoft.com/en-us/research/project/metalwoz/"
+
+ _LICENSE = "Microsoft Research Data License Agreement"
+
+ _URLs = {
+     "train": "https://download.microsoft.com/download/E/B/8/EB84CB1A-D57D-455F-B905-3ABDE80404E5/metalwoz-v1.zip",
+     "test": "https://download.microsoft.com/download/0/c/4/0c4a8893-cbf9-4a43-a44a-09bab9539234/metalwoz-test-v1.zip",
+ }
+
+
+ class MetaWoz(datasets.GeneratorBasedBuilder):
+     VERSION = datasets.Version("1.0.0")
+
+     BUILDER_CONFIGS = [
+         datasets.BuilderConfig(name="dialogues", description="The dataset of dialogues from various domains."),
+         datasets.BuilderConfig(
+             name="tasks", description="The metadata for tasks corresponding to dialogues from various domains."
+         ),
+     ]
+
+     DEFAULT_CONFIG_NAME = "dialogues"
+
+     def _info(self):
+         if self.config.name == "tasks":
+             features = datasets.Features(
+                 {
+                     "task_id": datasets.Value("string"),
+                     "domain": datasets.Value("string"),
+                     "bot_prompt": datasets.Value("string"),
+                     "bot_role": datasets.Value("string"),
+                     "user_prompt": datasets.Value("string"),
+                     "user_role": datasets.Value("string"),
+                 }
+             )
+         else:
+             features = datasets.Features(
+                 {
+                     "id": datasets.Value("string"),
+                     "user_id": datasets.Value("string"),
+                     "bot_id": datasets.Value("string"),
+                     "domain": datasets.Value("string"),
+                     "task_id": datasets.Value("string"),
+                     "turns": datasets.Sequence(datasets.Value("string")),
+                 }
+             )
+         return datasets.DatasetInfo(
+             description=_DESCRIPTION,
+             features=features,
+             supervised_keys=None,
+             homepage=_HOMEPAGE,
+             license=_LICENSE,
+             citation=_CITATION,
+         )
+
+     def _split_generators(self, dl_manager):
+         """Returns SplitGenerators."""
+         data_dir = dl_manager.download_and_extract(_URLs)
+         # The test archive nests the heldout dialogues in a second zip; extract that one as well.
+         data_dir["test"] = dl_manager.extract(os.path.join(data_dir["test"], "dstc8_metalwoz_heldout.zip"))
+
+         return [
+             datasets.SplitGenerator(
+                 name=datasets.Split.TRAIN,
+                 # These kwargs will be passed to _generate_examples
+                 gen_kwargs={"data_dir": data_dir["train"]},
+             ),
+             datasets.SplitGenerator(
+                 name=datasets.Split.TEST,
+                 # These kwargs will be passed to _generate_examples
+                 gen_kwargs={"data_dir": data_dir["test"]},
+             ),
+         ]
+
+     def _generate_examples(self, data_dir):
+         """Yields examples."""
+         if self.config.name == "tasks":
+             # Each line of tasks.txt is a JSON record describing one task.
+             filepath = os.path.join(data_dir, "tasks.txt")
+             with open(filepath, encoding="utf-8") as f:
+                 for id_, row in enumerate(f):
+                     data = json.loads(row)
+                     yield id_, {
+                         "task_id": data["task_id"],
+                         "domain": data["domain"],
+                         "bot_prompt": data["bot_prompt"],
+                         "bot_role": data["bot_role"],
+                         "user_prompt": data["user_prompt"],
+                         "user_role": data["user_role"],
+                     }
+         else:
+             # Dialogues are stored as JSON records, one per line, across the files in dialogues/.
+             id_ = -1
+             base_path = os.path.join(data_dir, "dialogues")
+             file_list = sorted(
+                 [os.path.join(base_path, file) for file in os.listdir(base_path) if file.endswith(".txt")]
+             )
+             for filepath in file_list:
+                 with open(filepath, encoding="utf-8") as f:
+                     for row in f:
+                         id_ += 1
+                         data = json.loads(row)
+                         yield id_, {
+                             "id": data["id"],
+                             "user_id": data["user_id"],
+                             "bot_id": data["bot_id"],
+                             "domain": data["domain"],
+                             "task_id": data["task_id"],
+                             "turns": data["turns"],
+                         }
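A brief usage sketch for the builder above, assuming the script is saved locally as `meta_woz.py` and passed directly to `load_dataset`:

```python
from datasets import load_dataset

# Pointing load_dataset at the script exercises _split_generators and _generate_examples.
heldout_tasks = load_dataset("./meta_woz.py", "tasks", split="test")
print(len(heldout_tasks))          # 14 heldout tasks, matching dataset_infos.json
print(heldout_tasks[0]["domain"])
```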