system (HF staff) committed
Commit da99b7b
0 Parent(s):

Update files from the datasets library (from 1.2.0)

Release notes: https://github.com/huggingface/datasets/releases/tag/1.2.0
Files changed (5)
  1. .gitattributes +27 -0
  2. README.md +258 -0
  3. dataset_infos.json +1 -0
  4. dummy/1.0.0/dummy_data.zip +3 -0
  5. taskmaster3.py +140 -0
.gitattributes ADDED
@@ -0,0 +1,27 @@
+ *.7z filter=lfs diff=lfs merge=lfs -text
+ *.arrow filter=lfs diff=lfs merge=lfs -text
+ *.bin filter=lfs diff=lfs merge=lfs -text
+ *.bin.* filter=lfs diff=lfs merge=lfs -text
+ *.bz2 filter=lfs diff=lfs merge=lfs -text
+ *.ftz filter=lfs diff=lfs merge=lfs -text
+ *.gz filter=lfs diff=lfs merge=lfs -text
+ *.h5 filter=lfs diff=lfs merge=lfs -text
+ *.joblib filter=lfs diff=lfs merge=lfs -text
+ *.lfs.* filter=lfs diff=lfs merge=lfs -text
+ *.model filter=lfs diff=lfs merge=lfs -text
+ *.msgpack filter=lfs diff=lfs merge=lfs -text
+ *.onnx filter=lfs diff=lfs merge=lfs -text
+ *.ot filter=lfs diff=lfs merge=lfs -text
+ *.parquet filter=lfs diff=lfs merge=lfs -text
+ *.pb filter=lfs diff=lfs merge=lfs -text
+ *.pt filter=lfs diff=lfs merge=lfs -text
+ *.pth filter=lfs diff=lfs merge=lfs -text
+ *.rar filter=lfs diff=lfs merge=lfs -text
+ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+ *.tar.* filter=lfs diff=lfs merge=lfs -text
+ *.tflite filter=lfs diff=lfs merge=lfs -text
+ *.tgz filter=lfs diff=lfs merge=lfs -text
+ *.xz filter=lfs diff=lfs merge=lfs -text
+ *.zip filter=lfs diff=lfs merge=lfs -text
+ *.zstandard filter=lfs diff=lfs merge=lfs -text
+ *tfevents* filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,258 @@
+ ---
+ annotations_creators:
+ - crowdsourced
+ language_creators:
+ - crowdsourced
+ languages:
+ - en
+ licenses:
+ - cc-by-4.0
+ multilinguality:
+ - monolingual
+ size_categories:
+ - 10K<n<100K
+ source_datasets:
+ - original
+ task_categories:
+ - sequence-modeling
+ task_ids:
+ - dialogue-modeling
+ ---
+
+ # Dataset Card for Taskmaster-3
+
+ ## Table of Contents
+ - [Dataset Description](#dataset-description)
+   - [Dataset Summary](#dataset-summary)
+   - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
+   - [Languages](#languages)
+ - [Dataset Structure](#dataset-structure)
+   - [Data Instances](#data-instances)
+   - [Data Fields](#data-fields)
+   - [Data Splits](#data-splits)
+ - [Dataset Creation](#dataset-creation)
+   - [Curation Rationale](#curation-rationale)
+   - [Source Data](#source-data)
+   - [Annotations](#annotations)
+   - [Personal and Sensitive Information](#personal-and-sensitive-information)
+ - [Considerations for Using the Data](#considerations-for-using-the-data)
+   - [Social Impact of Dataset](#social-impact-of-dataset)
+   - [Discussion of Biases](#discussion-of-biases)
+   - [Other Known Limitations](#other-known-limitations)
+ - [Additional Information](#additional-information)
+   - [Dataset Curators](#dataset-curators)
+   - [Licensing Information](#licensing-information)
+   - [Citation Information](#citation-information)
+
+ ## Dataset Description
+
+ - **Homepage:** [Taskmaster](https://research.google/tools/datasets/taskmaster-1/)
+ - **Repository:** [GitHub](https://github.com/google-research-datasets/Taskmaster/tree/master/TM-3-2020)
+ - **Paper:** [Taskmaster-1: Toward a Realistic and Diverse Dialog Dataset](https://arxiv.org/abs/1909.05358)
+ - **Leaderboard:** N/A
+ - **Point of Contact:** [Taskmaster Googlegroup](mailto:taskmaster-datasets@googlegroups.com)
+
+ ### Dataset Summary
+
+ Taskmaster is a dataset of goal-oriented conversations. The Taskmaster-3 dataset consists of 23,757 movie ticketing dialogs.
+ By "movie ticketing" we mean conversations where the customer's goal is to purchase tickets after deciding
+ on theater, time, movie name, number of tickets, and date, or to opt out of the transaction. This collection
+ was created using the "self-dialog" method, in which a single crowdsourced worker is
+ paid to write a conversation, composing the turns for both speakers, i.e. the customer and the ticketing agent.
+
+ ### Supported Tasks and Leaderboards
+
+ [More Information Needed]
+
+ ### Languages
+
+ The dataset is in English.
+
+ ## Dataset Structure
+
+ ### Data Instances
+
+ A typical example looks like this:
+
+ ```json
+ {
+     "conversation_id": "dlg-ddee80da-9ffa-4773-9ce7-f73f727cb79c",
+     "instructions": "SCENARIO: Pretend you’re *using a digital assistant to purchase tickets for a movie currently showing in theaters*. ...",
+     "scenario": "4 exchanges with 1 error and predefined variables",
+     "utterances": [
+         {
+             "apis": [],
+             "index": 0,
+             "segments": [
+                 {
+                     "annotations": [
+                         {
+                             "name": "num.tickets"
+                         }
+                     ],
+                     "end_index": 21,
+                     "start_index": 20,
+                     "text": "2"
+                 },
+                 {
+                     "annotations": [
+                         {
+                             "name": "name.movie"
+                         }
+                     ],
+                     "end_index": 42,
+                     "start_index": 37,
+                     "text": "Mulan"
+                 }
+             ],
+             "speaker": "user",
+             "text": "I would like to buy 2 tickets to see Mulan."
+         },
+         {
+             "index": 6,
+             "segments": [],
+             "speaker": "user",
+             "text": "Yes.",
+             "apis": [
+                 {
+                     "args": [
+                         {
+                             "arg_name": "name.movie",
+                             "arg_value": "Mulan"
+                         },
+                         {
+                             "arg_name": "name.theater",
+                             "arg_value": "Mountain AMC 16"
+                         }
+                     ],
+                     "index": 6,
+                     "name": "book_tickets",
+                     "response": [
+                         {
+                             "response_name": "status",
+                             "response_value": "success"
+                         }
+                     ]
+                 }
+             ]
+         }
+     ],
+     "vertical": "Movie Tickets"
+ }
+ ```
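+
+ The example above can be reproduced with the Hugging Face `datasets` library. A minimal sketch, assuming this loader is available under the `taskmaster3` identifier:
+
+ ```python
+ from datasets import load_dataset
+
+ # Downloads the 20 JSON shards and builds a single "train" split.
+ dataset = load_dataset("taskmaster3")
+
+ # Each example mirrors the JSON structure shown above.
+ example = dataset["train"][0]
+ print(example["conversation_id"])
+ print(example["scenario"])
+ print(example["utterances"][0]["speaker"], "->", example["utterances"][0]["text"])
+ ```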
+
+ ### Data Fields
+
+ Each conversation in the data file has the following structure:
+
+ - `conversation_id`: A universally unique identifier with the prefix 'dlg-'. The ID has no semantic meaning.
+ - `utterances`: A list of utterances that make up the conversation.
+ - `instructions`: Instructions given to the crowdsourced worker who created the conversation.
+ - `vertical`: In this dataset the vertical for all dialogs is "Movie Tickets".
+ - `scenario`: The title of the instructions for each dialog.
+
+ Each utterance has the following fields:
+
+ - `index`: A 0-based index indicating the order of the utterances in the conversation.
+ - `speaker`: Either USER or ASSISTANT, indicating which role generated this utterance.
+ - `text`: The raw text of the utterance. In the case of self-dialogs (one_person_dialogs), this is written by the crowdsourced worker. In the case of WOz dialogs, 'ASSISTANT' turns are written and 'USER' turns are transcribed from the spoken recordings of crowdsourced workers.
+ - `segments`: A list of text spans with semantic annotations.
+ - `apis`: An array of API invocations made during the utterance.
+
+ Each API has the following structure:
+
+ - `name`: The name of the API invoked (e.g. find_movies).
+ - `index`: The index of the parent utterance.
+ - `args`: A `list` of `dict`s with keys `arg_name` and `arg_value`, which hold the name and value of each argument respectively.
+ - `response`: A `list` of `dict`s with keys `response_name` and `response_value`, which hold the name and value of each response field respectively.
+
+ Each segment has the following fields:
+
+ - `start_index`: The position of the start of the annotation in the utterance text.
+ - `end_index`: The position of the end of the annotation in the utterance text.
+ - `text`: The raw text that has been annotated.
+ - `annotations`: A list of annotation details for this segment.
+
+ Each annotation has a single field:
+
+ - `name`: The annotation name.
+
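+ As an illustration of how these fields fit together, the segment indices can be used to recover each annotated span from the utterance text, and API arguments can be read off the `args` list. A sketch, reusing the `example` dialog loaded above:
+
+ ```python
+ # Recover annotated spans and API arguments from one dialog.
+ for utterance in example["utterances"]:
+     for segment in utterance["segments"]:
+         # start_index/end_index are character offsets into the utterance text.
+         span = utterance["text"][segment["start_index"]:segment["end_index"]]
+         names = [a["name"] for a in segment["annotations"]]
+         print(f"{span!r} -> {names}")
+     for api in utterance["apis"]:
+         args = {a["arg_name"]: a["arg_value"] for a in api["args"]}
+         print(f"API {api['name']} called with {args}")
+ ```
+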
+ ### Data Splits
+
+ There are no predefined train/validation/test splits; all dialogs belong to a single `train` split. The table below lists the number of examples.
+
+ |             | Train  |
+ |-------------|--------|
+ | n_instances | 23,757 |
+
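+ A quick check of the split layout, reusing the `dataset` object from the earlier sketch:
+
+ ```python
+ # There is exactly one split, holding all 23,757 dialogs.
+ print(list(dataset.keys()))       # ['train']
+ print(dataset["train"].num_rows)  # 23757
+ ```
+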
+ ## Dataset Creation
+
+ ### Curation Rationale
+
+ [More Information Needed]
+
+ ### Source Data
+
+ [More Information Needed]
+
+ #### Initial Data Collection and Normalization
+
+ [More Information Needed]
+
+ #### Who are the source language producers?
+
+ [More Information Needed]
+
+ ### Annotations
+
+ [More Information Needed]
+
+ #### Annotation process
+
+ [More Information Needed]
+
+ #### Who are the annotators?
+
+ [More Information Needed]
+
+ ### Personal and Sensitive Information
+
+ [More Information Needed]
+
+ ## Considerations for Using the Data
+
+ ### Social Impact of Dataset
+
+ [More Information Needed]
+
+ ### Discussion of Biases
+
+ [More Information Needed]
+
+ ### Other Known Limitations
+
+ [More Information Needed]
+
+ ## Additional Information
+
+ ### Dataset Curators
+
+ [More Information Needed]
+
+ ### Licensing Information
+
+ The dataset is licensed under the Creative Commons Attribution 4.0 (CC BY 4.0) license.
+
+ ### Citation Information
+
+ ```
+ @inproceedings{48484,
+ title = {Taskmaster-1: Toward a Realistic and Diverse Dialog Dataset},
+ author = {Bill Byrne and Karthik Krishnamoorthi and Chinnadhurai Sankar and Arvind Neelakantan and Daniel Duckworth and Semih Yavuz and Ben Goodrich and Amit Dubey and Kyu-Young Kim and Andy Cedilnik},
+ year = {2019}
+ }
+ ```
dataset_infos.json ADDED
@@ -0,0 +1 @@
+ {"default": {"description": "Taskmaster is dataset for goal oriented conversations. The Taskmaster-3 dataset consists of 23,757 movie ticketing dialogs. By \"movie ticketing\" we mean conversations where the customer's goal is to purchase tickets after deciding on theater, time, movie name, number of tickets, and date, or opt out of the transaction. This collection was created using the \"self-dialog\" method. This means a single, crowd-sourced worker is paid to create a conversation writing turns for both speakers, i.e. the customer and the ticketing agent.\n", "citation": "@inproceedings{48484,\ntitle\t= {Taskmaster-1: Toward a Realistic and Diverse Dialog Dataset},\nauthor\t= {Bill Byrne and Karthik Krishnamoorthi and Chinnadhurai Sankar and Arvind Neelakantan and Daniel Duckworth and Semih Yavuz and Ben Goodrich and Amit Dubey and Kyu-Young Kim and Andy Cedilnik},\nyear\t= {2019}\n}\n", "homepage": "https://github.com/google-research-datasets/Taskmaster/tree/master/TM-3-2020", "license": "", "features": {"conversation_id": {"dtype": "string", "id": null, "_type": "Value"}, "vertical": {"dtype": "string", "id": null, "_type": "Value"}, "instructions": {"dtype": "string", "id": null, "_type": "Value"}, "scenario": {"dtype": "string", "id": null, "_type": "Value"}, "utterances": [{"index": {"dtype": "int32", "id": null, "_type": "Value"}, "speaker": {"dtype": "string", "id": null, "_type": "Value"}, "text": {"dtype": "string", "id": null, "_type": "Value"}, "apis": [{"name": {"dtype": "string", "id": null, "_type": "Value"}, "index": {"dtype": "int32", "id": null, "_type": "Value"}, "args": [{"arg_name": {"dtype": "string", "id": null, "_type": "Value"}, "arg_value": {"dtype": "string", "id": null, "_type": "Value"}}], "response": [{"response_name": {"dtype": "string", "id": null, "_type": "Value"}, "response_value": {"dtype": "string", "id": null, "_type": "Value"}}]}], "segments": [{"start_index": {"dtype": "int32", "id": null, "_type": "Value"}, "end_index": {"dtype": "int32", "id": null, "_type": "Value"}, "text": {"dtype": "string", "id": null, "_type": "Value"}, "annotations": [{"name": {"dtype": "string", "id": null, "_type": "Value"}}]}]}]}, "post_processed": null, "supervised_keys": null, "builder_name": "taskmaster3", "config_name": "default", "version": {"version_str": "1.0.0", "description": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 143609327, "num_examples": 23757, "dataset_name": "taskmaster3"}}, "download_checksums": {"https://raw.githubusercontent.com/google-research-datasets/Taskmaster/master/TM-3-2020/data/data_00.json": {"num_bytes": 15832328, "checksum": "4edf97557e1aa7f654bf97994b5eae42653ef6d4f5e50136f7664d7e5cdff7b6"}, "https://raw.githubusercontent.com/google-research-datasets/Taskmaster/master/TM-3-2020/data/data_01.json": {"num_bytes": 15881778, "checksum": "a4fef75ec7824bb3fb29b8afe9ed63f3354b81077bb9b6f58f2349bb60d660fc"}, "https://raw.githubusercontent.com/google-research-datasets/Taskmaster/master/TM-3-2020/data/data_02.json": {"num_bytes": 15487631, "checksum": "25cb4788c4c857740152612397b0e687dfeade14c7dbfeed501a53290725aedf"}, "https://raw.githubusercontent.com/google-research-datasets/Taskmaster/master/TM-3-2020/data/data_03.json": {"num_bytes": 15728910, "checksum": "5e1f15544f259c804438458f34a6c7f00400e30dba7648c37a0b31886c5dd903"}, "https://raw.githubusercontent.com/google-research-datasets/Taskmaster/master/TM-3-2020/data/data_04.json": {"num_bytes": 15602257, "checksum": 
"17d8deb3bc6c3551c564cb9003a47202a835e011477b1e3c81711b083bf4064b"}, "https://raw.githubusercontent.com/google-research-datasets/Taskmaster/master/TM-3-2020/data/data_05.json": {"num_bytes": 15654421, "checksum": "12e58018be7b42aa16968ab914d03c56af9fae9fb5feec7448e01f738c50d2fe"}, "https://raw.githubusercontent.com/google-research-datasets/Taskmaster/master/TM-3-2020/data/data_06.json": {"num_bytes": 15590810, "checksum": "a4e290dafc6362ca8d4d786d6e70f6339297b0414cca4fe0ec79c1d383d65673"}, "https://raw.githubusercontent.com/google-research-datasets/Taskmaster/master/TM-3-2020/data/data_07.json": {"num_bytes": 15661246, "checksum": "becec1c0e9b88781fbb81f5d032826f85d77756cfae7c37acbca9b4b2a470fdb"}, "https://raw.githubusercontent.com/google-research-datasets/Taskmaster/master/TM-3-2020/data/data_08.json": {"num_bytes": 15643179, "checksum": "c2718197cedfb1ff7b830492ddbdc36c213ac98c83ba974ff0ea0eeeffdfe9b5"}, "https://raw.githubusercontent.com/google-research-datasets/Taskmaster/master/TM-3-2020/data/data_09.json": {"num_bytes": 15681874, "checksum": "a8867eb49010fa735b3c0b2f47df4cc225c79b6ca919009db857f454cccc986d"}, "https://raw.githubusercontent.com/google-research-datasets/Taskmaster/master/TM-3-2020/data/data_10.json": {"num_bytes": 15600102, "checksum": "88a8ce81f2c843c7a52a1a6c016d790e6547d5a3ffdd000ca00f0676c02e077b"}, "https://raw.githubusercontent.com/google-research-datasets/Taskmaster/master/TM-3-2020/data/data_11.json": {"num_bytes": 15686138, "checksum": "10e5d2dadb7a55b08d41605498b46320fa2df7b87c71a820df4245e9eeeba4d1"}, "https://raw.githubusercontent.com/google-research-datasets/Taskmaster/master/TM-3-2020/data/data_12.json": {"num_bytes": 15684546, "checksum": "be255d29439d10fc33bb222a1294da0af7ed4bdf8ede84c389d9b744e9cdf343"}, "https://raw.githubusercontent.com/google-research-datasets/Taskmaster/master/TM-3-2020/data/data_13.json": {"num_bytes": 15608345, "checksum": "a477d4668398b86bc7f68f86c20a423e81b4dbc41002a5d605dee0c6cee250b8"}, "https://raw.githubusercontent.com/google-research-datasets/Taskmaster/master/TM-3-2020/data/data_14.json": {"num_bytes": 15730197, "checksum": "92c7a0ca1fce8845610a7fdb00972750228351b2c2065e76120c328c806cc1e5"}, "https://raw.githubusercontent.com/google-research-datasets/Taskmaster/master/TM-3-2020/data/data_15.json": {"num_bytes": 15666588, "checksum": "e67e9d649d29f3edc12796cb4b1308cc7cb89f676c12e96d8fea558510b6ee18"}, "https://raw.githubusercontent.com/google-research-datasets/Taskmaster/master/TM-3-2020/data/data_16.json": {"num_bytes": 15573228, "checksum": "f0d3f22ee8b9224ea4efc57182f1620f849f3b90050cc04992319aab04e7b453"}, "https://raw.githubusercontent.com/google-research-datasets/Taskmaster/master/TM-3-2020/data/data_17.json": {"num_bytes": 15734442, "checksum": "15c469a030aae16a98969675b238f368436ec0a5074f3eb79458fdc81cf489d4"}, "https://raw.githubusercontent.com/google-research-datasets/Taskmaster/master/TM-3-2020/data/data_18.json": {"num_bytes": 15697850, "checksum": "28dfe3f65c1cd7bab8ed9da0e667ee1aec1561cf41237531f30c889cafb0997d"}, "https://raw.githubusercontent.com/google-research-datasets/Taskmaster/master/TM-3-2020/data/data_19.json": {"num_bytes": 15656271, "checksum": "7ae99d1237c25519269c3e6969ab0ecd88509828c436cbecb8a86831007c3868"}}, "download_size": 313402141, "post_processing_size": null, "dataset_size": 143609327, "size_in_bytes": 457011468}}
dummy/1.0.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:b3e4bb5a9fdc9812501c8d0d7ee8b8f7ffba4b75753ba4e4b2380d1e9b36dfa3
+ size 7070
taskmaster3.py ADDED
@@ -0,0 +1,140 @@
+ # coding=utf-8
+ # Copyright 2020 The HuggingFace Datasets Authors and the current dataset script contributor.
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ #     http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+ """Taskmaster-3: A goal-oriented conversation dataset for the movie ticketing domain."""
+
+ from __future__ import absolute_import, division, print_function
+
+ import json
+
+ import datasets
+
+
+ _CITATION = """\
+ @inproceedings{48484,
+ title = {Taskmaster-1: Toward a Realistic and Diverse Dialog Dataset},
+ author = {Bill Byrne and Karthik Krishnamoorthi and Chinnadhurai Sankar and Arvind Neelakantan and Daniel Duckworth and Semih Yavuz and Ben Goodrich and Amit Dubey and Kyu-Young Kim and Andy Cedilnik},
+ year = {2019}
+ }
+ """
+
+ _DESCRIPTION = """\
+ Taskmaster is a dataset of goal-oriented conversations. The Taskmaster-3 dataset consists of 23,757 movie ticketing dialogs. \
+ By "movie ticketing" we mean conversations where the customer's goal is to purchase tickets after deciding \
+ on theater, time, movie name, number of tickets, and date, or to opt out of the transaction. This collection \
+ was created using the "self-dialog" method, in which a single crowdsourced worker is \
+ paid to write a conversation, composing the turns for both speakers, i.e. the customer and the ticketing agent.
+ """
+
+ _HOMEPAGE = "https://github.com/google-research-datasets/Taskmaster/tree/master/TM-3-2020"
+
+ _BASE_URL = "https://raw.githubusercontent.com/google-research-datasets/Taskmaster/master/TM-3-2020/data"
+
+
+ class Taskmaster3(datasets.GeneratorBasedBuilder):
+     """Taskmaster-3: A goal-oriented conversation dataset for the movie ticketing domain."""
+
+     VERSION = datasets.Version("1.0.0")
+
+     def _info(self):
+         features = {
+             "conversation_id": datasets.Value("string"),
+             "vertical": datasets.Value("string"),
+             "instructions": datasets.Value("string"),
+             "scenario": datasets.Value("string"),
+             "utterances": [
+                 {
+                     "index": datasets.Value("int32"),
+                     "speaker": datasets.Value("string"),
+                     "text": datasets.Value("string"),
+                     "apis": [
+                         {
+                             "name": datasets.Value("string"),
+                             "index": datasets.Value("int32"),
+                             "args": [
+                                 {
+                                     "arg_name": datasets.Value("string"),
+                                     "arg_value": datasets.Value("string"),
+                                 }
+                             ],
+                             "response": [
+                                 {
+                                     "response_name": datasets.Value("string"),
+                                     "response_value": datasets.Value("string"),
+                                 }
+                             ],
+                         }
+                     ],
+                     "segments": [
+                         {
+                             "start_index": datasets.Value("int32"),
+                             "end_index": datasets.Value("int32"),
+                             "text": datasets.Value("string"),
+                             "annotations": [{"name": datasets.Value("string")}],
+                         }
+                     ],
+                 }
+             ],
+         }
+         return datasets.DatasetInfo(
+             description=_DESCRIPTION,
+             features=datasets.Features(features),
+             supervised_keys=None,
+             homepage=_HOMEPAGE,
+             citation=_CITATION,
+         )
+
+     def _split_generators(self, dl_manager):
+         # The corpus is sharded into 20 JSON files: data_00.json ... data_19.json.
+         urls = [f"{_BASE_URL}/data_{i:02}.json" for i in range(20)]
+         dialog_files = dl_manager.download(urls)
+         return [
+             datasets.SplitGenerator(
+                 name=datasets.Split.TRAIN,
+                 gen_kwargs={"dialog_files": dialog_files},
+             ),
+         ]
+
+     def _generate_examples(self, dialog_files):
+         for filepath in dialog_files:
+             with open(filepath, encoding="utf-8") as f:
+                 dialogs = json.load(f)
+             for dialog in dialogs:
+                 example = self._prepare_example(dialog)
+                 yield example["conversation_id"], example
+
+     def _prepare_example(self, dialog):
+         # Fill in optional fields so every utterance matches the declared features.
+         utterances = dialog["utterances"]
+         for utterance in utterances:
+             if "segments" not in utterance:
+                 utterance["segments"] = []
+
+             if "apis" in utterance:
+                 utterance["apis"] = self._transform_apis(utterance["apis"])
+             else:
+                 utterance["apis"] = []
+         return dialog
+
+     def _transform_apis(self, apis):
+         # Flatten the raw {name: value} mappings into lists of name/value pairs
+         # so that args and responses fit the fixed feature schema.
+         for api in apis:
+             if "args" in api:
+                 api["args"] = [{"arg_name": k, "arg_value": v} for k, v in api["args"].items()]
+             else:
+                 api["args"] = []
+
+             if "response" in api:
+                 api["response"] = [{"response_name": k, "response_value": v} for k, v in api["response"].items()]
+             else:
+                 api["response"] = []
+
+         return apis
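
For illustration, this is the reshaping `_transform_apis` performs: in the raw Taskmaster-3 JSON, `args` and `response` arrive as plain mappings, which the script flattens into lists of name/value pairs to fit the fixed feature schema. A standalone sketch (the values echo the README example):

```python
# Standalone sketch of the flattening done by _transform_apis.
# The input mimics the raw Taskmaster-3 JSON; values are from the README example.
raw_api = {
    "name": "book_tickets",
    "index": 6,
    "args": {"name.movie": "Mulan", "name.theater": "Mountain AMC 16"},
    "response": {"status": "success"},
}

raw_api["args"] = [{"arg_name": k, "arg_value": v} for k, v in raw_api.get("args", {}).items()]
raw_api["response"] = [{"response_name": k, "response_value": v} for k, v in raw_api.get("response", {}).items()]

print(raw_api["args"][0])      # {'arg_name': 'name.movie', 'arg_value': 'Mulan'}
print(raw_api["response"][0])  # {'response_name': 'status', 'response_value': 'success'}
```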