Dataset Card for EmoWOZ Dataset

Dataset Summary

EmoWOZ is based on MultiWOZ, a multi-domain task-oriented dialogue dataset. It contains more than 11K task-oriented dialogues with more than 83K emotion annotations of user utterances. In addition to Wizard-of-Oz dialogues from MultiWOZ, we collect human-machine dialogues (DialMAGE) within the same set of domains to sufficiently cover the space of emotions that can arise during the lifetime of a data-driven dialogue system. There are 7 emotion labels, adapted from the OCC emotion model: Neutral, Satisfied, Dissatisfied, Excited, Apologetic, Fearful, Abusive.

Some of the statistics about the dataset:

Metric Value
# Dialogues 11,434
# Turns 167,234
# Annotations 83,617
# Unique Tokens 28,417
Average Turns per Dialogue 14.63
Average Tokens per Turn 12.78

Emotion Distribution in EmoWOZ and subsets:

Emotion EmoWOZ (total) MultiWOZ DialMAGE
Neutral 58,656 51,426 7,230
Satisfied 17,532 17,061 471
Dissatisfied 5,117 914 4,203
Excited 971 860 111
Apologetic 840 838 2
Fearful 396 381 15
Abusive 105 44 61

Supported Tasks and Leaderboards

  • 'Emotion Recognition in Conversations': See the Papers With Code leaderboard for more models.
  • 'Additional Classification Tasks': According to the initial benchmark paper, emotion labels in EmoWOZ can be mapped to sentiment polarities, so sentiment classification can also be performed. Since EmoWOZ has two subsets, MultiWOZ (human-to-human) and DialMAGE (human-to-machine), it is also possible to perform cross-domain emotion/sentiment recognition. A possible label-to-sentiment mapping is sketched below.
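
As a concrete illustration, here is a minimal sketch of such a label-to-sentiment mapping, assuming the emotion label ids listed under Data Fields. The grouping follows the label definitions in this card; the exact mapping used in the benchmark paper should be verified against the publication.

# Hedged sketch: collapse EmoWOZ emotion ids into sentiment polarities.
# The grouping follows the label definitions in this card; verify it against
# the mapping used in the benchmark paper before comparing results.
EMOTION_TO_SENTIMENT = {
    0: "neutral",   # neutral
    1: "negative",  # fearful
    2: "negative",  # dissatisfied
    3: "negative",  # apologetic
    4: "negative",  # abusive
    5: "positive",  # excited
    6: "positive",  # satisfied
}

def to_sentiment(emotion_labels):
    """Map a dialogue's emotion label sequence to sentiment polarities,
    keeping None for unlabelled system turns (label -1)."""
    return [EMOTION_TO_SENTIMENT.get(label) for label in emotion_labels]

print(to_sentiment([0, -1, 6, -1, 2, -1]))
# ['neutral', None, 'positive', None, 'negative', None]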

Languages

Only English is represented in the data.

Dataset Structure

Data Instances

For each instance, there is a string id for the dialogue, a list of strings for the dialogue utterances, and a list of integers for the emotion labels. An example instance is shown below, followed by a loading sketch.

{
    'dialogue_id': 'PMUL4725.json',
    'log': {
        'text': [
            'Hi, i am looking for some museums that I could visit when in town, could you help me find some?', 
            'Is there an area of town you prefer?', 
            "No, I don't care.", 
            "I recommend the Cafe Jello Gallery in the west. It's free to enter!", 
            'I also need a place to stay', 
            'Great! There are 33 hotels in the area. What area of town would you like to stay in? What is your preference on price?', 
            " The attraction should be in the type of museum. I don't care about the price range or the area", 
            'Just to clarify - did you need a different museum? Or a hotel?', 
            'That museum from earlier is fine, I just need their postalcode.  I need a hotel two in the west and moderately priced.  ', 
            "The postal code for Cafe Jello Gallery is cb30af. Okay, Hobson's House matches your request. ", 
            'Do they have internet?', 
            'Yes they do. Would you like me to book a room for you?', 
            "No thanks. I will do that later. Can you please arrange for taxi service from Cafe Jello to Hobson's House sometime after 04:00?", 
            'I was able to book that for you. Be expecting a grey Tesla. If you need to reach them, please call 07615015749. ', 
            'Well that you that is all i need for today', 
            'Your welcome.  Have a great day!'
        ],
        'emotion': [0, -1, 0, -1, 0, -1, 0, -1, 0, -1, 0, -1, 0, -1, 0, -1]
    }
}
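
A minimal loading sketch with the datasets library is shown below. The repository id hhu-dsml/emowoz and the configuration name emowoz are assumptions and should be checked against the Hub page; because the dataset ships with a loading script, recent versions of datasets may also require trust_remote_code=True.

from datasets import load_dataset

# Repository id and configuration name are assumed; adjust to the actual Hub page.
# trust_remote_code is needed by recent `datasets` versions for script-based datasets.
emowoz = load_dataset("hhu-dsml/emowoz", "emowoz", trust_remote_code=True)

example = emowoz["train"][0]
print(example["dialogue_id"])           # e.g. 'PMUL4725.json'
print(example["log"]["text"][0])        # first user utterance
print(example["log"]["emotion"][:4])    # e.g. [0, -1, 0, -1]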

Data Fields

  • dialogue_id: a string representing the unique id of the dialogue. For MultiWOZ dialogues, the original id is kept. For DialMAGE dialogues, all ids follow the format DMAGExxx.json, where xxx is an integer with a variable number of digits.
  • text: a list of strings containing the dialogue turns.
  • emotion: a list of integers containing the sequence of emotion labels for the dialogue (see the extraction sketch after this list). Specifically,
    • -1: system turns with unlabelled emotion
    • 0: neutral, no emotion expressed
    • 1: fearful, or sad/disappointed, negative emotion elicited by facts/events, which is out of the system's control
    • 2: dissatisfied, negative emotion elicited by the system, usually after the system's poor performance
    • 3: apologetic, negative emotion from the user, usually expressing apologies for causing confusion or changing search criteria
    • 4: abusive, negative emotion elicited by the system, expressed in an impolite way
    • 5: excited, positive emotion elicited by facts/events
    • 6: satisfied, positive emotion elicited by the system
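
To make the field layout concrete, the sketch below (referenced in the emotion field above) pairs each labelled user utterance with its emotion name for a single instance. The helper name is illustrative; the id-to-name mapping simply restates the list above.

# Illustrative helper: extract (utterance, emotion name) pairs for labelled user turns.
ID2LABEL = {
    0: "neutral",
    1: "fearful",
    2: "dissatisfied",
    3: "apologetic",
    4: "abusive",
    5: "excited",
    6: "satisfied",
}

def annotated_user_turns(instance):
    """Yield (utterance, emotion_name) for every labelled user turn;
    system turns carry the placeholder label -1 and are skipped."""
    for utterance, emotion in zip(instance["log"]["text"], instance["log"]["emotion"]):
        if emotion == -1:
            continue
        yield utterance, ID2LABEL[emotion]

# With the instance shown under Data Instances, the first two pairs would be
# ('Hi, i am looking for some museums ...', 'neutral') and ("No, I don't care.", 'neutral').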

Data Splits

The EmoWOZ dataset has 3 splits: train, validation, and test. Below are the statistics for the dataset.

Dataset Split Emotion Annotations in Split Of Which from MultiWOZ Of Which from DialMAGE
Train 66,474 56,778 9,696
Validation 8,509 7,374 1,135
Test 8,634 7,372 1,262
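
The annotation counts in the table above can be reproduced by counting non-placeholder (non -1) labels per split. A minimal sketch, reusing the assumed repository id from the loading example under Data Instances:

from datasets import load_dataset

# Repository id and configuration name are assumed; see the loading sketch above.
emowoz = load_dataset("hhu-dsml/emowoz", "emowoz", trust_remote_code=True)

for split_name, split in emowoz.items():
    n_annotations = sum(
        sum(1 for e in dialogue["log"]["emotion"] if e != -1)
        for dialogue in split
    )
    print(f"{split_name}: {n_annotations} emotion annotations")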

Dataset Creation

Curation Rationale

EmoWOZ was built on top of MultiWOZ because MultiWOZ is a well-established dataset for task-oriented dialogue modelling, allowing further study of the impact of user emotions on downstream tasks. The additional 1,000 human-machine dialogues (DialMAGE) were collected to improve the emotion coverage and the diversity of emotional expressions.

Source Data

Initial Data Collection and Normalization

MultiWOZ dialogues were inherited from the work of MultiWOZ - A Large-Scale Multi-Domain Wizard-of-Oz Dataset for Task-Oriented Dialogue Modelling.

DialMAGE dialogues were collected from a human evaluation of an RNN-based policy trained on MultiWOZ, conducted on the Amazon Mechanical Turk platform.

Who are the source language producers?

The text of both MultiWOZ and DialMAGE was written by workers on the Amazon Mechanical Turk platform. For detailed data collection set-ups, please refer to their respective publications.

Annotations

All dialogues take place between a user and a system (or an operator). Each dialogue starts with a user turn, which is always followed by a system response, and ends with a system turn. Only user turns are annotated with an emotion label.

Annotation process

Each user utterance was annotated by three annotators. The final label was determined by majority voting. If there was no agreement, the final label would be resolved manually.

For details such as annotator selection process and quality assurance methods, please refer to the EmoWOZ publication.

Who are the annotators?

Annotators are crowdsourced workers on the Amazon Mechanical Turk platform.

Personal and Sensitive Information

All annotators are anonymised. There is no personal information in EmoWOZ.

Considerations for Using the Data

Social Impact of Dataset

The purpose of this dataset is to help develop task-oriented dialogue systems that can perceive human emotions and avoid abusive behaviours. This task is useful for building more human-like dialogue agents.

Discussion of Biases

There is a bias in emotion distribution between the MultiWOZ (human-human) and DialMAGE (human-machine) subsets of EmoWOZ. The linguistic styles of the two subsets also differ.

As pointed out in Reevaluating Data Partitioning for Emotion Detection in EmoWOZ, there is also a shift in emotion distribution across the train-validation-test split of the MultiWOZ subset. EmoWOZ keeps the original data split of MultiWOZ, which is suitable for task-oriented dialogue modelling, but the emotion distributions in these splits differ. Further investigation is needed.

Other Known Limitations

The emotion distribution is highly imbalanced: neutral, satisfied, and dissatisfied together make up more than 95% of the labels.

Additional Information

Dataset Curators

The collection and annotation of EmoWOZ were conducted by the Chair for Dialog Systems and Machine Learning at Heinrich Heine Universität Düsseldorf.

Licensing Information

The EmoWOZ dataset is released under the CC-BY-NC-4.0 License.

Citation Information

@inproceedings{feng-etal-2022-emowoz,
    title = "{E}mo{WOZ}: A Large-Scale Corpus and Labelling Scheme for Emotion Recognition in Task-Oriented Dialogue Systems",
    author = "Feng, Shutong  and
      Lubis, Nurul  and
      Geishauser, Christian  and
      Lin, Hsien-chin  and
      Heck, Michael  and
      van Niekerk, Carel  and
      Gasic, Milica",
    booktitle = "Proceedings of the Thirteenth Language Resources and Evaluation Conference",
    month = jun,
    year = "2022",
    address = "Marseille, France",
    publisher = "European Language Resources Association",
    url = "https://aclanthology.org/2022.lrec-1.436",
    pages = "4096--4113",
    abstract = "The ability to recognise emotions lends a conversational artificial intelligence a human touch. While emotions in chit-chat dialogues have received substantial attention, emotions in task-oriented dialogues remain largely unaddressed. This is despite emotions and dialogue success having equally important roles in a natural system. Existing emotion-annotated task-oriented corpora are limited in size, label richness, and public availability, creating a bottleneck for downstream tasks. To lay a foundation for studies on emotions in task-oriented dialogues, we introduce EmoWOZ, a large-scale manually emotion-annotated corpus of task-oriented dialogues. EmoWOZ is based on MultiWOZ, a multi-domain task-oriented dialogue dataset. It contains more than 11K dialogues with more than 83K emotion annotations of user utterances. In addition to Wizard-of-Oz dialogues from MultiWOZ, we collect human-machine dialogues within the same set of domains to sufficiently cover the space of various emotions that can happen during the lifetime of a data-driven dialogue system. To the best of our knowledge, this is the first large-scale open-source corpus of its kind. We propose a novel emotion labelling scheme, which is tailored to task-oriented dialogues. We report a set of experimental results to show the usability of this corpus for emotion recognition and state tracking in task-oriented dialogues.",
}