annotations_creators:
- none
language_creators:
- unknown
language:
- en
license:
- cc-by-4.0
multilinguality:
- monolingual
pretty_name: Taskmaster
size_categories:
- unknown
source_datasets:
- original
task_categories:
- dialog-response-generation
task_ids:
- unknown
Dataset Card for GEM/Taskmaster
Dataset Description
- Homepage: https://github.com/google-research-datasets/Taskmaster/tree/master/TM-3-2020
- Repository: https://github.com/google-research-datasets/Taskmaster/tree/master/TM-3-2020
- Paper: https://arxiv.org/abs/2012.12458
- Leaderboard: N/A
- Point of Contact: Karthik Krishnamoorthi
Link to Main Data Card
You can find the main data card on the GEM Website.
Dataset Summary
This is a large task-oriented dialog dataset in which a model has to produce the response. The input contains the context and a structured representation of what the model is supposed to generate. The input is already pre-formatted as a string, turning this into a pure text-to-text problem.
You can load the dataset via:
import datasets
data = datasets.load_dataset('GEM/Taskmaster')
The data loader can be found here.
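Once loaded, individual examples can be inspected directly. A minimal sketch, assuming the split names follow the card's train/dev/test naming (the loader may expose dev as validation):
import datasets

data = datasets.load_dataset('GEM/Taskmaster')
train = data['train']
print(train.num_rows)            # number of training examples
example = train[0]
print(example['gem_id'])         # e.g. 'Taskmaster-train-0'
print(example['context'][:200])  # pre-formatted dialog context (truncated)
print(example['target'])         # reference system response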
website
https://github.com/google-research-datasets/Taskmaster/tree/master/TM-3-2020
paper
https://arxiv.org/abs/2012.12458
authors
Google researchers
Dataset Overview
Where to find the Data and its Documentation
Webpage
https://github.com/google-research-datasets/Taskmaster/tree/master/TM-3-2020
Download
https://github.com/google-research-datasets/Taskmaster/tree/master/TM-3-2020
Paper
https://arxiv.org/abs/2012.12458
BibTex
@article{byrne2020tickettalk,
title={TicketTalk: Toward human-level performance with end-to-end, transaction-based dialog systems},
author={Byrne, Bill and Krishnamoorthi, Karthik and Ganesh, Saravanan and Kale, Mihir Sanjay},
journal={arXiv preprint arXiv:2012.12458},
year={2020}
}
Contact Name
Karthik Krishnamoorthi
Contact Email
Has a Leaderboard?
no
Languages and Intended Use
Multilingual?
no
Covered Dialects
NA
Covered Languages
English
Whose Language?
NA
License
cc-by-4.0: Creative Commons Attribution 4.0 International
Intended Use
Dialogues
Primary Task
Dialog Response Generation
Communicative Goal
Response generation for movie ticketing dialogs; the dataset contains 23,789 annotated conversations.
Credit
Curation Organization Type(s)
other
Curation Organization(s)
NA
Dataset Creators
Google researchers
Funding
Who added the Dataset to GEM?
Tosin Adewumi (Luleå University of Technology)
Dataset Structure
Data Fields
- gem_id: The unique example id
- context: The context of the conversation
- target: A string representing the target
- references: A list representing the target(s)
- conversation_id: A unique ID of the conversation
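Because the task is already cast as text-to-text, these fields map directly onto an (input, output) pair. A minimal sketch using the datasets map API; the field names are taken from the list above, while the output column names are arbitrary placeholders:
import datasets

data = datasets.load_dataset('GEM/Taskmaster')

def to_text2text(example):
    # The pre-formatted context is the model input; the target is the reference response.
    return {'input_text': example['context'], 'output_text': example['target']}

pairs = data['train'].map(to_text2text)
print(pairs[0]['input_text'][:80])
print(pairs[0]['output_text'])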
Reason for Structure
NA
How were labels chosen?
NA
Example Instance
{'context': "<PR>get_movie_attribute<PRAN>rating.movie<PRAV>rated R<C><U>I wanna see a movie<A>where are you?<U>spring hills kansas<PN>find_theaters<PAN>location<PAV>spring hills kansas<PR>find_theaters<PRAN>name.theater<PRAV>AMC Holiday Theater<PRAV>Cinemark Downtown<A>there are 2 theaters near you, the AMC Holiday Theater and Cinemark Downtown. Did you know which movie you'd like to see?<U>funny one please<PN>find_movies<PAN>location<PAV>spring hills kansas<PR>find_movies<PRAN>name.movie<PRAV>Not My Problem<PRAV>Family Jewels<PN>get_movie_attribute<PAN>name.movie<PAV>Not My Problem<PAN>attribute<PAV>name.genre<PR>get_movie_attribute<PRAN>name.genre<PRAV>comedy<PN>get_movie_attribute<PAN>name.movie<PAV>Not My Problem<PAN>attribute<PAV>name.person<PR>get_movie_attribute<PRAN>name.person<PRAV>Matt Damon<PN>get_movie_attribute<PAN>name.movie<PAV>Not My Problem<PAN>attribute<PAV>name.person<PR>get_movie_attribute<PRAN>name.person<PRAV>Noah Schnapp<PN>get_movie_attribute<PAN>name.movie<PAV>Family Jewels<PAN>attribute<PAV>name.genre<PR>get_movie_attribute<PRAN>name.genre<PRAV>romantic comedy<PN>get_movie_attribute<PAN>name.movie<PAV>Family Jewels<PAN>attribute<PAV>name.person<PR>get_movie_attribute<PRAN>name.person<PRAV>Melissa McCarthy<PN>get_movie_attribute<PAN>name.movie<PAV>Family Jewels<PAN>attribute<PAV>name.person<PR>get_movie_attribute<PRAN>name.person<PRAV>Ryan Reynolds<A>There's the comedy film called Not My Problem starring Matt Damon and Noah Schnapp. There's also a romantic comedy called Family Jewels starring Melissa McCarthy and Ryan Reynolds.<U>what ratings are there?<PN>get_movie_attribute<PAN>name.movie<PAV>Not My Problem<PAN>attribute<PAV>rating.movie<PR>get_movie_attribute<PRAN>rating.movie<PRAV>rated PG-13<PN>get_movie_attribute<PAN>name.movie<PAV>Family Jewels<PAN>attribute<PAV>rating.movie",
'conversation_id': 'dlg-d1f52e7e-c34c-4e85-b406-85ed138b5068',
'gem_id': 'Taskmaster-train-0',
'references': ['Not My Problem is rated PG-13 and Family Jewels is rated R.'],
'target': 'Not My Problem is rated PG-13 and Family Jewels is rated R.'}
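The context string in the example above is built from inline tags such as <U>, <A>, <PN>, <PAN>, <PAV>, <PR>, <PRAN>, <PRAV> and <C>. A minimal sketch for splitting such a string into tagged segments; the tag roles noted in the comments are inferred from this example, not taken from an official schema:
import re

# Inferred roles (assumption): <U> user turn, <A> agent turn,
# <PN>/<PAN>/<PAV> API call name / argument name / argument value,
# <PR>/<PRAN>/<PRAV> API response name / argument name / argument value,
# <C> separator before the conversation turns.
TAG_PATTERN = re.compile(r'<(PRAN|PRAV|PAN|PAV|PN|PR|U|A|C)>')

def split_context(context):
    # re.split with a capturing group yields [prefix, tag1, text1, tag2, text2, ...]
    parts = TAG_PATTERN.split(context)
    return list(zip(parts[1::2], parts[2::2]))

# For the example above, split_context(context)[:4] gives
# [('PR', 'get_movie_attribute'), ('PRAN', 'rating.movie'), ('PRAV', 'rated R'), ('C', '')]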
Data Splits
- train: 187182 examples
- dev: 23406 examples
- test: 23316 examples
Splitting Criteria
NA
Dataset in GEM
Rationale for Inclusion in GEM
Why is the Dataset in GEM?
Dialogue generation that makes sense in the context of the conversation.
Similar Datasets
yes
Unique Language Coverage
no
Difference from other GEM datasets
NA
Ability that the Dataset measures
NA
GEM-Specific Curation
Modified for GEM?
yes
GEM Modifications
other
Modification Details
A gem_id field was added to each of the three data splits.
Additional Splits?
no
Getting Started with the Task
Pointers to Resources
https://github.com/google-research-datasets/Taskmaster/tree/master/TM-3-2020
Technical Terms
NA
Previous Results
Previous Results
Measured Model Abilities
BLEU: 60
Metrics
BLEU
Proposed Evaluation
automatic evaluation
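A minimal sketch of such an automatic evaluation with sacrebleu; the exact BLEU configuration used in the paper is not specified here, so this is illustrative rather than the official setup:
import sacrebleu

predictions = ['Not My Problem is rated PG-13 and Family Jewels is rated R.']
references = [['Not My Problem is rated PG-13 and Family Jewels is rated R.']]  # one reference stream

bleu = sacrebleu.corpus_bleu(predictions, references)
print(bleu.score)  # 100.0 for an exact match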
Previous results available?
yes
Other Evaluation Approaches
NA
Relevant Previous Results
NA
Dataset Curation
Original Curation
Original Curation Rationale
NA
Communicative Goal
Response generation for movie ticketing dialogs; the dataset contains 23,789 annotated conversations.
Sourced from Different Sources
no
Language Data
How was Language Data Obtained?
Crowdsourced
Where was it crowdsourced?
Participatory experiment
Language Producers
NA
Topics Covered
Ticketing
Data Validation
not validated
Was Data Filtered?
not filtered
Structured Annotations
Additional Annotations?
none
Annotation Service?
no
Consent
Any Consent Policy?
no
Justification for Using the Data
NA
Private Identifying Information (PII)
Contains PII?
no PII
Justification for no PII
The dialogs are about movie ticketing and do not contain personal information.
Maintenance
Any Maintenance Plan?
no
Broader Social Context
Previous Work on the Social Impact of the Dataset
Usage of Models based on the Data
no
Impact on Under-Served Communities
Addresses needs of underserved Communities?
no
Discussion of Biases
Any Documented Social Biases?
unsure
Are the Language Producers Representative of the Language?
NA
Considerations for Using the Data
PII Risks and Liability
Potential PII Risk
NA
Licenses
Copyright Restrictions on the Dataset
open license - commercial use allowed
Copyright Restrictions on the Language Data
public domain
Known Technical Limitations
Technical Limitations
NA
Unsuited Applications
NA
Discouraged Use Cases
NA