Sebastian Gehrmann committed on
Commit d6f1aa1
Parent(s): df13018

data card.

Files changed (1):
  1. README.md +404 -180
README.md CHANGED
@@ -1,21 +1,85 @@
- ## Dataset Overview

- ### Where to find the data and its documentation

- #### What is the webpage for the dataset (if it exists)?

- https://github.com/google-research-datasets/Taskmaster/tree/master/TM-3-2020

- #### What is the link to where the original dataset is hosted?

- https://github.com/google-research-datasets/Taskmaster/tree/master/TM-3-2020

- #### What is the link to the paper describing the dataset (open access preferred)?

- https://arxiv.org/pdf/2012.12458.pdf

- #### Provide the BibTex-formatted reference for the dataset. Please use the correct published version (ACL anthology, etc.) instead of google scholar created Bibtex.

```
@article{byrne2020tickettalk,
title={TicketTalk: Toward human-level performance with end-to-end, transaction-based dialog systems},
@@ -23,333 +87,493 @@ https://arxiv.org/pdf/2012.12458.pdf
journal={arXiv preprint arXiv:2012.12458},
year={2020}
}
- ```

- @article{byrne2020tickettalk,
- title={TicketTalk: Toward human-level performance with end-to-end, transaction-based dialog systems},
- author={Byrne, Bill and Krishnamoorthi, Karthik and Ganesh, Saravanan and Kale, Mihir Sanjay},
- journal={arXiv preprint arXiv:2012.12458},
- year={2020}
- }

- #### If known, provide the name of at least one person the reader can contact for questions about the dataset.

- Karthik Krishnamoorthi

- #### If known, provide the email of at least one person the reader can contact for questions about the dataset.

- krishnamoorthi@google.com

- #### Does the dataset have an active leaderboard?

- no

- ### Languages and Intended Use

- #### Is the dataset multilingual?

- no

- #### What dialects are covered? Are there multiple dialects per language?

- NA

- #### What languages/dialects are covered in the dataset?

- English

- #### Whose language is in the dataset?

- NA

- #### What is the license of the dataset?

- cc-by-4.0: Creative Commons Attribution 4.0 International

- #### What is the intended use of the dataset?

- Dialogues

- #### What primary task does the dataset support?

- Dialog Response Generation

- #### Provide a short description of the communicative goal of a model trained for this task on this dataset.

- a movie ticketing dialog dataset with 23,789 annotated conversations.

- ### Credit

- #### In what kind of organization did the dataset curation happen?

- other

- #### Name the organization(s).

- NA

- #### Who created the original dataset? List the people involved in collecting the dataset and their affiliation(s).

- Google researchers

- #### Who funded the data creation?

- Google

- #### Who contributed to the data card and adding the dataset to GEM? List the people+affiliations involved in creating this data card and who helped integrate this dataset into GEM.

- Tosin Adewumi (Luleå University of Technology)

- ### Structure

- #### List and describe the fields present in the dataset.

- gem_id
- utterances
- instruction_id
- conversation_id

- #### How was the dataset structure determined?

- NA

- #### How were the labels chosen?

- NA

- #### Provide a JSON formatted example of a typical instance in the dataset.

- ```json
- GEM-TASKMASTER-TEST-0,0,"<U>correct<C><U>i'll love to see a movie tonight<A>Sure. What movie would you like to see?<U>i havn't decided yet.can you help with any?<A>No problem. Can you tell me what city you’re in?<U>in currently in oak valley,arkansas.<PN>find_movies<PAN>location<PAV>Oak Valley, Arkansas<PR>find_movies<PRAN>name.movie<PRAV>No Time To Die<PRAV>Emma<A>Ok. There are a couple of popular films out in your area: No Time To Die and Emma. Do either of those sound interesting?<U>i'm quite familiar with no time to die but tell me more about emma.<PN>get_movie_attribute<PAN>attribute<PAV>description.plot<PAN>name.movie<PAV>Emma<PR>get_movie_attribute<PRAN>description.plot<PRAV>In this adaptation of Jane Austen's classic novel, pretty socialite Emma Woodhouse, played by Gwyneth Paltrow, entertains herself by playing matchmaker for those around her<A>OK. Here’s the description of Emma: In this adaptation of Jane Austen's classic novel, pretty socialite Emma Woodhouse, played by Gwyneth Paltrow, entertains herself by playing matchmaker for those around her.<U>hum i love that description,i'll love to book two tickets to emma<PN>find_theaters<PAN>location<PAV>Oak Valley, Arkansas<PAN>name.movie<PAV>Emma<PR>find_theaters<PRAN>name.theater<PRAV>AMC Mercado 24<PN>find_showtimes<PAN>date.showing<PAV>tonight<PAN>name.movie<PAV>Emma<PAN>name.theater<PAV>AMC Mercado 24<PR>find_showtimes<PRAN>time.showing<PRAV>5:00pm<PRAV>7:10pm<PRAV>9:45pm<A>OK. Emma is playing at AMC Mercado 24 today at 5:00pm, 7:10pm, and 9:45pm. Which time would you like?<U>7:10pm is fine<A>All right. Let me confirm that you’d like two tickets for Emma at AMC Mercado 24 tonight at 7:10pm. Is that all correct?",[A]Is it OK to go ahead and purchase these tickets?,dlg-32b67f97-6fcb-4135-b0bc-ac6aa248b889
- ```
- #### Describe and name the splits in the dataset if there are more than one.

- train
- dev
- test

- #### Describe any criteria for splitting the data, if used. If there are differences between the splits (e.g., if the training annotations are machine-generated and the dev and test ones are created by humans, or if different numbers of annotators contributed to each example), describe them here.

- NA

- #### What does an outlier of the dataset in terms of length/perplexity/embedding look like?

- NA

- ## Dataset in GEM

- ### Rationale

- #### What does this dataset contribute toward better generation evaluation and why is it part of GEM?

- Dialogue generation that makes sense

- #### Do other datasets for the high level task exist?

- yes

- #### Does this dataset cover other languages than other datasets for the same task?

- no

- #### What else sets this dataset apart from other similar datasets in GEM?

- NA

- #### What aspect of model ability can be measured with this dataset?

- NA

- ### GEM Additional Curation

- #### Has the GEM version of the dataset been modified in any way (data, processing, splits) from the original curated data?

- yes

- #### What changes have been made to he original dataset?

- other

- #### For each of these changes, described them in more details and provided the intended purpose of the modification

- gem_id field was added to the 3 data splits

- #### Does GEM provide additional splits to the dataset?

- no

- ### Getting Started

- #### Getting started with in-depth research on the task. Add relevant pointers to resources that researchers can consult when they want to get started digging deeper into the task.

- https://github.com/google-research-datasets/Taskmaster/tree/master/TM-3-2020

- #### Technical terms used in this card and the dataset and their definitions

- NA

- ## Previous Results

- ### Previous Results

- #### What aspect of model ability can be measured with this dataset?

- BLEU: 60

- #### What metrics are typically used for this task?

- BLEU

- #### List and describe the purpose of the metrics and evaluation methodology (including human evaluation) that the dataset creators used when introducing this task.

- automatic evaluation

- #### Are previous results available?

- yes

- #### What evaluation approaches have others used?

- NA

- #### What are the most relevant previous results for this task/dataset?

- NA

- ## Dataset Curation

- ### Original Curation

- #### Original curation rationale

- NA

- #### What was the communicative goal?

- a movie ticketing dialog dataset with 23,789 annotated conversations.

- #### Is the dataset aggregated from different data sources?

- no

- ### Language Data

- #### How was the language data obtained?

- Crowdsourced

- #### If crowdsourced, where from?

- Participatory experiment

- #### What further information do we have on the language producers?

- NA

- #### Does the language in the dataset focus on specific topics? How would you describe them?

- Ticketing

- #### Was the text validated by a different worker or a data curator?

- not validated

- #### Were text instances selected or filtered?

- not filtered

- ### Structured Annotations

- #### Does the dataset have additional annotations for each instance?

- none

- #### Was an annotation service used?

- no

- ### Consent

- #### Was there a consent policy involved when gathering the data?

- no

- #### If not, what is the justification for reusing the data?

- NA

- ### Private Identifying Information (PII)

- #### Does the source language data likely contain Personal Identifying Information about the data creators or subjects?

- no PII

- #### Provide a justification for selecting `no PII` above.

- It's based on ticketing without personal information

- ### Maintenance

- #### Does the original dataset have a maintenance plan?

- no

- ## Broader Social Context

- ### Previous Work on the Social Impact of the Dataset

- #### Are you aware of cases where models trained on the task featured in this dataset ore related tasks have been used in automated systems?

- no

- ### Impact on Under-Served Communities

- #### Does this dataset address the needs of communities that are traditionally underserved in language technology, and particularly language generation technology? Communities may be underserved for exemple because their language, language variety, or social or geographical context is underepresented in NLP and NLG resources (datasets and models).

- no

- ### Discussion of Biases

- #### Are there documented social biases in the dataset? Biases in this context are variations in the ways members of different social categories are represented that can have harmful downstream consequences for members of the more disadvantaged group.

- unsure

- #### Does the distribution of language producers in the dataset accurately represent the full distribution of speakers of the language world-wide? If not, how does it differ?

- NA

- ## Considerations for Using the Data

- ### PII Risks and Liability

- #### Considering your answers to the PII part of the Data Curation Section, describe any potential privacy to the data subjects and creators risks when using the dataset.

- NA

- ### Licenses

- #### Based on your answers in the Intended Use part of the Data Overview Section, which of the following best describe the copyright and licensing status of the dataset?

- open license - commercial use allowed

- #### Based on your answers in the Language part of the Data Curation Section, which of the following best describe the copyright and licensing status of the underlying language data?

- public domain

- ### Known Technical Limitations

- #### Describe any known technical limitations, such as spurrious correlations, train/test overlap, annotation biases, or mis-annotations, and cite the works that first identified these limitations when possible.

- NA

- #### When using a model trained on this dataset in a setting where users or the public may interact with its predictions, what are some pitfalls to look out for? In particular, describe some applications of the general task featured in this dataset that its curation or properties make it less suitable for.

- NA

- #### What are some discouraged use cases of a model trained to maximize the proposed metrics on this dataset? In particular, think about settings where decisions made by a model that performs reasonably well on the metric my still have strong negative consequences for user or members of the public.

- NA

+ ---
+ annotations_creators:
+ - none
+ language_creators:
+ - unknown
+ languages:
+ - unknown
+ licenses:
+ - cc-by-4.0
+ multilinguality:
+ - unknown
+ pretty_name: Taskmaster
+ size_categories:
+ - unknown
+ source_datasets:
+ - original
+ task_categories:
+ - dialog-response-generation
+ task_ids:
+ - unknown
+ ---

+ # Dataset Card for GEM/Taskmaster

+ ## Dataset Description

+ - **Homepage:** https://github.com/google-research-datasets/Taskmaster/tree/master/TM-3-2020
+ - **Repository:** https://github.com/google-research-datasets/Taskmaster/tree/master/TM-3-2020
+ - **Paper:** https://arxiv.org/abs/2012.12458
+ - **Leaderboard:** N/A
+ - **Point of Contact:** Karthik Krishnamoorthi

+ ### Link to Main Data Card

+ You can find the main data card on the [GEM Website](https://gem-benchmark.com/data_cards/Taskmaster).

+ ### Dataset Summary

+ This is a large task-oriented dialog dataset in which a model has to produce the response. The input contains the context and a structured representation of what the model is supposed to generate. The input is already pre-formatted as a string, turning this into a pure text-to-text problem.

+ You can load the dataset via:
+ ```
+ import datasets
+
+ # Returns a DatasetDict keyed by split name.
+ data = datasets.load_dataset('GEM/Taskmaster')
+ ```
+ The data loader can be found [here](https://huggingface.co/datasets/GEM/Taskmaster).

+ #### website
+ [Github](https://github.com/google-research-datasets/Taskmaster/tree/master/TM-3-2020)

+ #### paper
+ [Arxiv](https://arxiv.org/abs/2012.12458)

+ #### authors
+ Google researchers

+ ## Dataset Overview

+ ### Where to find the Data and its Documentation

+ #### Webpage

+ <!-- info: What is the webpage for the dataset (if it exists)? -->
+ <!-- scope: telescope -->
+ [Github](https://github.com/google-research-datasets/Taskmaster/tree/master/TM-3-2020)

+ #### Download

+ <!-- info: What is the link to where the original dataset is hosted? -->
+ <!-- scope: telescope -->
+ [Github](https://github.com/google-research-datasets/Taskmaster/tree/master/TM-3-2020)

+ #### Paper

+ <!-- info: What is the link to the paper describing the dataset (open access preferred)? -->
+ <!-- scope: telescope -->
+ [Arxiv](https://arxiv.org/abs/2012.12458)

+ #### BibTex

+ <!-- info: Provide the BibTex-formatted reference for the dataset. Please use the correct published version (ACL anthology, etc.) instead of google scholar created Bibtex. -->
+ <!-- scope: microscope -->
```
@article{byrne2020tickettalk,
title={TicketTalk: Toward human-level performance with end-to-end, transaction-based dialog systems},
author={Byrne, Bill and Krishnamoorthi, Karthik and Ganesh, Saravanan and Kale, Mihir Sanjay},
journal={arXiv preprint arXiv:2012.12458},
year={2020}
}
+ ```

+ #### Contact Name

+ <!-- quick -->
+ <!-- info: If known, provide the name of at least one person the reader can contact for questions about the dataset. -->
+ <!-- scope: periscope -->
+ Karthik Krishnamoorthi

+ #### Contact Email

+ <!-- info: If known, provide the email of at least one person the reader can contact for questions about the dataset. -->
+ <!-- scope: periscope -->
+ krishnamoorthi@google.com

+ #### Has a Leaderboard?

+ <!-- info: Does the dataset have an active leaderboard? -->
+ <!-- scope: telescope -->
+ no

+ ### Languages and Intended Use

+ #### Multilingual?

+ <!-- quick -->
+ <!-- info: Is the dataset multilingual? -->
+ <!-- scope: telescope -->
+ no

+ #### Covered Dialects

+ <!-- info: What dialects are covered? Are there multiple dialects per language? -->
+ <!-- scope: periscope -->
+ NA

+ #### Covered Languages

+ <!-- quick -->
+ <!-- info: What languages/dialects are covered in the dataset? -->
+ <!-- scope: telescope -->
+ `English`

+ #### Whose Language?

+ <!-- info: Whose language is in the dataset? -->
+ <!-- scope: periscope -->
+ NA

+ #### License

+ <!-- quick -->
+ <!-- info: What is the license of the dataset? -->
+ <!-- scope: telescope -->
+ cc-by-4.0: Creative Commons Attribution 4.0 International

+ #### Intended Use

+ <!-- info: What is the intended use of the dataset? -->
+ <!-- scope: microscope -->
+ Dialogues

+ #### Primary Task

+ <!-- info: What primary task does the dataset support? -->
+ <!-- scope: telescope -->
+ Dialog Response Generation

+ #### Communicative Goal

+ <!-- quick -->
+ <!-- info: Provide a short description of the communicative goal of a model trained for this task on this dataset. -->
+ <!-- scope: periscope -->
+ A movie ticketing dialog dataset with 23,789 annotated conversations.

+ ### Credit

+ #### Curation Organization Type(s)

+ <!-- info: In what kind of organization did the dataset curation happen? -->
+ <!-- scope: telescope -->
+ `other`

+ #### Curation Organization(s)

+ <!-- info: Name the organization(s). -->
+ <!-- scope: periscope -->
+ NA

+ #### Dataset Creators

+ <!-- info: Who created the original dataset? List the people involved in collecting the dataset and their affiliation(s). -->
+ <!-- scope: microscope -->
+ Google researchers

+ #### Funding

+ <!-- info: Who funded the data creation? -->
+ <!-- scope: microscope -->
+ Google

+ #### Who added the Dataset to GEM?

+ <!-- info: Who contributed to the data card and adding the dataset to GEM? List the people+affiliations involved in creating this data card and who helped integrate this dataset into GEM. -->
+ <!-- scope: microscope -->
+ Tosin Adewumi (Luleå University of Technology)

+ ### Dataset Structure

+ #### Data Fields

+ <!-- info: List and describe the fields present in the dataset. -->
+ <!-- scope: telescope -->
+ - `gem_id`: The unique example id
+ - `context`: The context of the conversation
+ - `target`: A string representing the target
+ - `references`: A list representing the target(s)
+ - `conversation_id`: A unique ID of the conversation
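
+ As a quick illustration, here is a minimal sketch (assuming the GEM loader above and the field names as listed) of how to inspect these fields on one instance:

+ ```
+ import datasets
+
+ # Load the GEM version of the dataset and take one training example.
+ data = datasets.load_dataset('GEM/Taskmaster')
+ example = data['train'][0]
+
+ # Print each documented field; the context string is long, so truncate it.
+ for field in ['gem_id', 'conversation_id', 'context', 'target', 'references']:
+     print(field, '->', str(example[field])[:100])
+ ```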
 
+ #### Reason for Structure

+ <!-- info: How was the dataset structure determined? -->
+ <!-- scope: microscope -->
+ NA

+ #### How were labels chosen?

+ <!-- info: How were the labels chosen? -->
+ <!-- scope: microscope -->
+ NA

+ #### Example Instance

+ <!-- info: Provide a JSON formatted example of a typical instance in the dataset. -->
+ <!-- scope: periscope -->
+ ```
+ {'context': "<PR>get_movie_attribute<PRAN>rating.movie<PRAV>rated R<C><U>I wanna see a movie<A>where are you?<U>spring hills kansas<PN>find_theaters<PAN>location<PAV>spring hills kansas<PR>find_theaters<PRAN>name.theater<PRAV>AMC Holiday Theater<PRAV>Cinemark Downtown<A>there are 2 theaters near you, the AMC Holiday Theater and Cinemark Downtown. Did you know which movie you'd like to see?<U>funny one please<PN>find_movies<PAN>location<PAV>spring hills kansas<PR>find_movies<PRAN>name.movie<PRAV>Not My Problem<PRAV>Family Jewels<PN>get_movie_attribute<PAN>name.movie<PAV>Not My Problem<PAN>attribute<PAV>name.genre<PR>get_movie_attribute<PRAN>name.genre<PRAV>comedy<PN>get_movie_attribute<PAN>name.movie<PAV>Not My Problem<PAN>attribute<PAV>name.person<PR>get_movie_attribute<PRAN>name.person<PRAV>Matt Damon<PN>get_movie_attribute<PAN>name.movie<PAV>Not My Problem<PAN>attribute<PAV>name.person<PR>get_movie_attribute<PRAN>name.person<PRAV>Noah Schnapp<PN>get_movie_attribute<PAN>name.movie<PAV>Family Jewels<PAN>attribute<PAV>name.genre<PR>get_movie_attribute<PRAN>name.genre<PRAV>romantic comedy<PN>get_movie_attribute<PAN>name.movie<PAV>Family Jewels<PAN>attribute<PAV>name.person<PR>get_movie_attribute<PRAN>name.person<PRAV>Melissa McCarthy<PN>get_movie_attribute<PAN>name.movie<PAV>Family Jewels<PAN>attribute<PAV>name.person<PR>get_movie_attribute<PRAN>name.person<PRAV>Ryan Reynolds<A>There's the comedy film called Not My Problem starring Matt Damon and Noah Schnapp. There's also a romantic comedy called Family Jewels starring Melissa McCarthy and Ryan Reynolds.<U>what ratings are there?<PN>get_movie_attribute<PAN>name.movie<PAV>Not My Problem<PAN>attribute<PAV>rating.movie<PR>get_movie_attribute<PRAN>rating.movie<PRAV>rated PG-13<PN>get_movie_attribute<PAN>name.movie<PAV>Family Jewels<PAN>attribute<PAV>rating.movie",
+ 'conversation_id': 'dlg-d1f52e7e-c34c-4e85-b406-85ed138b5068',
+ 'gem_id': 'Taskmaster-train-0',
+ 'references': ['Not My Problem is rated PG-13 and Family Jewels is rated R.'],
+ 'target': 'Not My Problem is rated PG-13 and Family Jewels is rated R.'}
+ ```
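
+ The `context` string interleaves text with special tokens: `<U>` and `<A>` mark user and assistant turns, `<PN>`/`<PAN>`/`<PAV>` appear to mark API call names, argument names, and argument values, `<PR>`/`<PRAN>`/`<PRAV>` the corresponding API responses, and `<C>` a context boundary. These token meanings are inferred from the example above rather than from an official specification; under that assumption, a minimal sketch for splitting a context into (token, text) segments:

+ ```
+ import re
+
+ # Special tokens as they occur in the context string, listed longest-first
+ # for readability; the <...> delimiters keep the match unambiguous.
+ TOKEN = re.compile(r'<(PRAN|PRAV|PAN|PAV|PN|PR|U|A|C)>')
+
+ def split_context(context):
+     # re.split with one capture group yields [text, token, text, token, ...]
+     parts = TOKEN.split(context)
+     pairs = iter(parts[1:])
+     return list(zip(pairs, pairs))
+ ```

+ Applied to the example above, the first segment comes out as `('PR', 'get_movie_attribute')`.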
 
+ #### Data Splits

+ <!-- info: Describe and name the splits in the dataset if there are more than one. -->
+ <!-- scope: periscope -->
+ - `train`: 187182 examples
+ - `dev`: 23406 examples
+ - `test`: 23316 examples
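
+ A quick sanity check of these counts (a sketch assuming the loader above; the dev split may be exposed as `validation` in the loader):

+ ```
+ import datasets
+
+ data = datasets.load_dataset('GEM/Taskmaster')
+ for split in data:
+     print(split, len(data[split]))  # expected: 187182 / 23406 / 23316
+ ```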
 
+ #### Splitting Criteria

+ <!-- info: Describe any criteria for splitting the data, if used. If there are differences between the splits (e.g., if the training annotations are machine-generated and the dev and test ones are created by humans, or if different numbers of annotators contributed to each example), describe them here. -->
+ <!-- scope: microscope -->
+ NA

+ #### Example Outlier

+ <!-- info: What does an outlier of the dataset in terms of length/perplexity/embedding look like? -->
+ <!-- scope: microscope -->
+ NA

+ ## Dataset in GEM

+ ### Rationale for Inclusion in GEM

+ #### Why is the Dataset in GEM?

+ <!-- info: What does this dataset contribute toward better generation evaluation and why is it part of GEM? -->
+ <!-- scope: microscope -->
+ Dialogue generation that makes sense

+ #### Similar Datasets

+ <!-- info: Do other datasets for the high level task exist? -->
+ <!-- scope: telescope -->
+ yes

+ #### Unique Language Coverage

+ <!-- info: Does this dataset cover other languages than other datasets for the same task? -->
+ <!-- scope: periscope -->
+ no

+ #### Difference from other GEM datasets

+ <!-- info: What else sets this dataset apart from other similar datasets in GEM? -->
+ <!-- scope: microscope -->
+ NA

+ #### Ability that the Dataset measures

+ <!-- info: What aspect of model ability can be measured with this dataset? -->
+ <!-- scope: periscope -->
+ NA

+ ### GEM-Specific Curation

+ #### Modified for GEM?

+ <!-- info: Has the GEM version of the dataset been modified in any way (data, processing, splits) from the original curated data? -->
+ <!-- scope: telescope -->
+ yes

+ #### GEM Modifications

+ <!-- info: What changes have been made to the original dataset? -->
+ <!-- scope: periscope -->
+ `other`

+ #### Modification Details

+ <!-- info: For each of these changes, describe them in more detail and provide the intended purpose of the modification -->
+ <!-- scope: microscope -->
+ A `gem_id` field was added to each of the 3 data splits.
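
+ Judging from the example instance above (`'gem_id': 'Taskmaster-train-0'`), the added IDs follow a simple dataset-split-index pattern; a hypothetical reconstruction:

+ ```
+ # Hypothetical helper mirroring the observed gem_id pattern;
+ # the actual GEM processing code may differ.
+ def make_gem_id(split, index):
+     return f"Taskmaster-{split}-{index}"
+
+ assert make_gem_id("train", 0) == "Taskmaster-train-0"
+ ```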
 
+ #### Additional Splits?

+ <!-- info: Does GEM provide additional splits to the dataset? -->
+ <!-- scope: telescope -->
+ no

+ ### Getting Started with the Task

+ #### Pointers to Resources

+ <!-- info: Getting started with in-depth research on the task. Add relevant pointers to resources that researchers can consult when they want to get started digging deeper into the task. -->
+ <!-- scope: microscope -->
+ https://github.com/google-research-datasets/Taskmaster/tree/master/TM-3-2020

+ #### Technical Terms

+ <!-- info: Technical terms used in this card and the dataset and their definitions -->
+ <!-- scope: microscope -->
+ NA

+ ## Previous Results

+ ### Previous Results

+ #### Measured Model Abilities

+ <!-- info: What aspect of model ability can be measured with this dataset? -->
+ <!-- scope: telescope -->
+ BLEU: 60

+ #### Metrics

+ <!-- info: What metrics are typically used for this task? -->
+ <!-- scope: periscope -->
+ `BLEU`

+ #### Proposed Evaluation

+ <!-- info: List and describe the purpose of the metrics and evaluation methodology (including human evaluation) that the dataset creators used when introducing this task. -->
+ <!-- scope: microscope -->
+ automatic evaluation
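
+ As a pointer for reproducing the automatic evaluation, here is a minimal BLEU sketch using `sacrebleu` (an assumed implementation choice; the card does not name a specific toolkit), scoring predictions against the `references` field:

+ ```
+ import sacrebleu
+
+ # One system output per instance; refs holds one reference stream
+ # aligned with the predictions.
+ preds = ["Not My Problem is rated PG-13 and Family Jewels is rated R."]
+ refs = [["Not My Problem is rated PG-13 and Family Jewels is rated R."]]
+ bleu = sacrebleu.corpus_bleu(preds, refs)
+ print(bleu.score)  # 100.0 for an exact match
+ ```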
 
+ #### Previous results available?

+ <!-- info: Are previous results available? -->
+ <!-- scope: telescope -->
+ yes

+ #### Other Evaluation Approaches

+ <!-- info: What evaluation approaches have others used? -->
+ <!-- scope: periscope -->
+ NA

+ #### Relevant Previous Results

+ <!-- info: What are the most relevant previous results for this task/dataset? -->
+ <!-- scope: microscope -->
+ NA

+ ## Dataset Curation

+ ### Original Curation

+ #### Original Curation Rationale

+ <!-- info: Original curation rationale -->
+ <!-- scope: telescope -->
+ NA

+ #### Communicative Goal

+ <!-- info: What was the communicative goal? -->
+ <!-- scope: periscope -->
+ A movie ticketing dialog dataset with 23,789 annotated conversations.

+ #### Sourced from Different Sources

+ <!-- info: Is the dataset aggregated from different data sources? -->
+ <!-- scope: telescope -->
+ no

+ ### Language Data

+ #### How was Language Data Obtained?

+ <!-- info: How was the language data obtained? -->
+ <!-- scope: telescope -->
+ `Crowdsourced`

+ #### Where was it crowdsourced?

+ <!-- info: If crowdsourced, where from? -->
+ <!-- scope: periscope -->
+ `Participatory experiment`

+ #### Language Producers

+ <!-- info: What further information do we have on the language producers? -->
+ <!-- scope: microscope -->
+ NA

+ #### Topics Covered

+ <!-- info: Does the language in the dataset focus on specific topics? How would you describe them? -->
+ <!-- scope: periscope -->
+ Ticketing

+ #### Data Validation

+ <!-- info: Was the text validated by a different worker or a data curator? -->
+ <!-- scope: telescope -->
+ not validated

+ #### Was Data Filtered?

+ <!-- info: Were text instances selected or filtered? -->
+ <!-- scope: telescope -->
+ not filtered

+ ### Structured Annotations

+ #### Additional Annotations?

+ <!-- quick -->
+ <!-- info: Does the dataset have additional annotations for each instance? -->
+ <!-- scope: telescope -->
+ none

+ #### Annotation Service?

+ <!-- info: Was an annotation service used? -->
+ <!-- scope: telescope -->
+ no

+ ### Consent

+ #### Any Consent Policy?

+ <!-- info: Was there a consent policy involved when gathering the data? -->
+ <!-- scope: telescope -->
+ no

+ #### Justification for Using the Data

+ <!-- info: If not, what is the justification for reusing the data? -->
+ <!-- scope: microscope -->
+ NA

+ ### Private Identifying Information (PII)

+ #### Contains PII?

+ <!-- quick -->
+ <!-- info: Does the source language data likely contain Personal Identifying Information about the data creators or subjects? -->
+ <!-- scope: telescope -->
+ no PII

+ #### Justification for no PII

+ <!-- info: Provide a justification for selecting `no PII` above. -->
+ <!-- scope: periscope -->
+ It's based on ticketing without personal information

+ ### Maintenance

+ #### Any Maintenance Plan?

+ <!-- info: Does the original dataset have a maintenance plan? -->
+ <!-- scope: telescope -->
+ no

+ ## Broader Social Context

+ ### Previous Work on the Social Impact of the Dataset

+ #### Usage of Models based on the Data

+ <!-- info: Are you aware of cases where models trained on the task featured in this dataset or related tasks have been used in automated systems? -->
+ <!-- scope: telescope -->
+ no

+ ### Impact on Under-Served Communities

+ #### Addresses needs of underserved Communities?

+ <!-- info: Does this dataset address the needs of communities that are traditionally underserved in language technology, and particularly language generation technology? Communities may be underserved for example because their language, language variety, or social or geographical context is underrepresented in NLP and NLG resources (datasets and models). -->
+ <!-- scope: telescope -->
+ no

+ ### Discussion of Biases

+ #### Any Documented Social Biases?

+ <!-- info: Are there documented social biases in the dataset? Biases in this context are variations in the ways members of different social categories are represented that can have harmful downstream consequences for members of the more disadvantaged group. -->
+ <!-- scope: telescope -->
+ unsure

+ #### Are the Language Producers Representative of the Language?

+ <!-- info: Does the distribution of language producers in the dataset accurately represent the full distribution of speakers of the language world-wide? If not, how does it differ? -->
+ <!-- scope: periscope -->
+ NA

+ ## Considerations for Using the Data

+ ### PII Risks and Liability

+ #### Potential PII Risk

+ <!-- info: Considering your answers to the PII part of the Data Curation Section, describe any potential privacy risks to the data subjects and creators when using the dataset. -->
+ <!-- scope: microscope -->
+ NA

+ ### Licenses

+ #### Copyright Restrictions on the Dataset

+ <!-- info: Based on your answers in the Intended Use part of the Data Overview Section, which of the following best describes the copyright and licensing status of the dataset? -->
+ <!-- scope: periscope -->
+ `open license - commercial use allowed`

+ #### Copyright Restrictions on the Language Data

+ <!-- info: Based on your answers in the Language part of the Data Curation Section, which of the following best describes the copyright and licensing status of the underlying language data? -->
+ <!-- scope: periscope -->
+ `public domain`

+ ### Known Technical Limitations

+ #### Technical Limitations

+ <!-- info: Describe any known technical limitations, such as spurious correlations, train/test overlap, annotation biases, or mis-annotations, and cite the works that first identified these limitations when possible. -->
+ <!-- scope: microscope -->
+ NA

+ #### Unsuited Applications

+ <!-- info: When using a model trained on this dataset in a setting where users or the public may interact with its predictions, what are some pitfalls to look out for? In particular, describe some applications of the general task featured in this dataset that its curation or properties make it less suitable for. -->
+ <!-- scope: microscope -->
+ NA

+ #### Discouraged Use Cases

+ <!-- info: What are some discouraged use cases of a model trained to maximize the proposed metrics on this dataset? In particular, think about settings where decisions made by a model that performs reasonably well on the metric may still have strong negative consequences for users or members of the public. -->
+ <!-- scope: microscope -->
+ NA