---
license: other
multilinguality:
  - monolingual
language:
  - en
pretty_name: Dialog2Flow Training Corpus
size_categories:
  - 1M<n<10M
source_datasets:
  - Salesforce/dialogstudio
task_categories:
  - sentence-similarity
  - feature-extraction
  - text2text-generation
  - text-generation
tags:
  - task-oriented-dialog
  - task-oriented-dialogues
  - dialog-flow
  - dialog-modeling
  - dialogue-flow
  - dialogue-modeling
  - conversational-ia
  - dialog-acts
  - slots
---


# Dialog2Flow Training Corpus

This page hosts the dataset introduced in the paper "Dialog2Flow: Pre-training Soft-Contrastive Action-Driven Sentence Embeddings for Automatic Dialog Flow Extraction", published in the EMNLP 2024 main conference. In addition to the full corpus, we also make available each of the 20 (standardized) task-oriented dialogue datasets used to build it.

The corpus consists of 3.4 million utterances/sentences annotated with dialog act and slot labels across 52 different domains. Domain names and dialog act labels were manually standardized across the 20 datasets.

## Load Training Datasets

From this corpus, three datasets were created in the paper to train the sentence encoders: one for single-target training (D2F_single), containing the subset annotated with both dialog acts and slots; and two for joint-target training (D2F_joint), one containing the subset annotated with dialog acts and the other with slots only. To load them, use one of the following configuration names, respectively:

  1. "dialog-acts+slots": (utterance, action label) pairs.
  2. "dialog-acts": (utterance, dialog act label) pairs.
  3. "slots": (utterance, slots label) pairs.

For instance, to load the "dialog-acts+slots" dataset:

```python
from datasets import load_dataset

dataset = load_dataset('sergioburdisso/dialog2flow-dataset', 'dialog-acts+slots', trust_remote_code=True)

print(dataset)
```

Output:

```
DatasetDict({
    train: Dataset({
        features: ['utterance', 'label'],
        num_rows: 1577184
    })
    validation: Dataset({
        features: ['utterance', 'label'],
        num_rows: 4695
    })
})
```
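As a rough sketch of how such (utterance, label) pairs might feed a contrastive training pipeline, the snippet below groups utterances by their action label to form positive pairs (same label). The utterances and labels here are toy examples for illustration only, not taken verbatim from the corpus:

```python
from collections import defaultdict

# Toy (utterance, label) pairs mimicking the structure of the
# "dialog-acts+slots" configuration (illustrative examples only).
pairs = [
    ("Are there any eritrean restaurants in town?", "inform food"),
    ("I want cheap food.", "inform food"),
    ("What area do you prefer?", "request area"),
]

# Group utterances by action label, as a contrastive objective would
# to sample positive pairs (utterances sharing the same label).
by_label = defaultdict(list)
for utterance, label in pairs:
    by_label[label].append(utterance)

positives = [tuple(v) for v in by_label.values() if len(v) > 1]
print(positives)
```

With the toy pairs above, the two "inform food" utterances form the only positive pair.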

## Load (Individual) Task-Oriented Dialog Datasets

We also provide access to each of the 20 task-oriented dialogue datasets, with standardized annotations and format, used to build the corpus. To load a dataset, simply use its name as given in the following table, which also lists the license and the number of dialogues per split:

| Dataset Name | Train | Validation | Test | Total | License |
|---|---|---|---|---|---|
| ABCD | 8034 | 1004 | 1004 | 10042 | MIT License |
| BiTOD | 2952 | 295 | 442 | 3689 | Apache License 2.0 |
| Disambiguation | 8433 | 999 | 1000 | 10432 | MIT License |
| DSTC2-Clean | 1612 | 506 | 1117 | 3235 | GNU General Public License Version 3 |
| FRAMES | 1329 | - | 40 | 1369 | GNU General Public License Version 3 |
| GECOR | 676 | - | - | 676 | CC BY 4.0 |
| HDSA-Dialog | 8438 | 1000 | 1000 | 10438 | MIT License |
| KETOD | 4247 | 545 | 532 | 5324 | MIT License |
| MS-DC | 10000 | - | - | 10000 | Microsoft Research License Terms |
| MulDoGO | 59939 | 1150 | 2319 | 63408 | Community Data License Agreement – Permissive – Version 1.0 |
| MultiWOZ_2.1 | 8434 | 999 | 1000 | 10433 | MIT License |
| MULTIWOZ2_2 | 8437 | 1000 | 1000 | 10437 | MIT License |
| SGD | 16142 | 2482 | 4201 | 22825 | CC BY-SA 4.0 |
| SimJointGEN | 100000 | 10000 | 10000 | 120000 | No license |
| SimJointMovie | 384 | 120 | 264 | 768 | No license |
| SimJointRestaurant | 1116 | 349 | 775 | 2240 | No license |
| Taskmaster1 | 6170 | 769 | 769 | 7708 | CC BY 4.0 |
| Taskmaster2 | 17304 | - | - | 17304 | CC BY 4.0 |
| Taskmaster3 | 22724 | 17019 | 17903 | 57646 | CC BY 4.0 |
| WOZ2_0 | 600 | 200 | 400 | 1200 | Apache License 2.0 |

For instance, to load the "WOZ2_0" dataset:

```python
from datasets import load_dataset

dataset = load_dataset('sergioburdisso/dialog2flow-dataset', 'WOZ2_0', trust_remote_code=True)

print(dataset)
```

Output:

```
DatasetDict({
    test: Dataset({
        features: ['dialog'],
        num_rows: 400
    })
    train: Dataset({
        features: ['dialog'],
        num_rows: 600
    })
    validation: Dataset({
        features: ['dialog'],
        num_rows: 200
    })
})
```

Note that, unlike the previous datasets, which contain utterance-label pairs, these individual datasets have a single feature, "dialog", since they are collections of dialogs (not utterances). Each dialog, in turn, follows the JSON structure described in Appendix A of the paper. For instance, let's get the first dialog of the train split:

```python
print(dataset["train"][0]["dialog"])
```

Output:

```
[
   {
      "speaker":"user",
      "text":"Are there any eritrean restaurants in town?",
      "domains":[
         "restaurant"
      ],
      "labels":{
         "dialog_acts":{
            "acts":[
               "inform"
            ],
            "main_acts":[
               "inform"
            ],
            "original_acts":[
               "inform"
            ]
         },
         "slots":[
            "food"
         ],
         "intents":"None"
      }
   },
    ...
   {
      "speaker":"system",
      "text":"There is a wide variety of Chinese restaurants, do you have an area preference or a price preference to narrow it down?",
      "domains":[
         "restaurant"
      ],
      "labels":{
         "dialog_acts":{
            "acts":[
               "request"
            ],
            "main_acts":[
               "request"
            ],
            "original_acts":[
               "request"
            ]
         },
         "slots":[
            "area"
         ],
         "intents":"None"
      }
   },
  ...
]
```
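To connect this per-turn structure with the action labels used for training, the sketch below derives a combined "action" label (dialog act + slots) from a turn shaped like the one above. The exact label format used to build the corpus is an assumption here (see Appendix A of the paper for the authoritative description):

```python
# A single turn with the same JSON structure shown above (trimmed to the
# fields needed for this sketch).
turn = {
    "speaker": "user",
    "text": "Are there any eritrean restaurants in town?",
    "labels": {
        "dialog_acts": {"acts": ["inform"]},
        "slots": ["food"],
    },
}

def action_label(turn):
    """Join the turn's dialog acts and slots into one action label.

    The space-joined format is an assumption for illustration; the
    corpus may encode actions differently.
    """
    acts = turn["labels"]["dialog_acts"]["acts"]
    slots = turn["labels"]["slots"]
    return " ".join(acts + slots)

print(action_label(turn))  # -> "inform food"
```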

## Corpus Details

### Stats

- Utterances: 3.4M
- Domains: 52
- Dialogs: 369,174
- Labels:
  - Dialog acts: 18
  - Slots: 524
  - Actions (dialog act + slots): 3,982

### Full List of Dialog Acts

List of the final 18 dialog act labels along with their proportion in the corpus:

inform (64.66%) · request (12.62%) · offer (6.62%) · inform_success (3.07%) · good_bye (2.67%) · agreement (2.45%) · thank_you (2.25%) · confirm (2.10%) · disagreement (1.60%) · request_more (1.06%) · request_alternative (0.90%) · recommendation (0.70%) · inform_failure (0.64%) · greeting (0.31%) · confirm_answer (0.18%) · confirm_question (0.17%) · request_update (0.02%) · request_compare (0.01%)

### Full List of Domains

List of the final 52 domain names along with their proportion in the corpus:

movie (32.98%) · restaurant (13.48%) · hotel (10.15%) · train (4.52%) · flight (4.30%) · event (3.56%) · attraction (3.50%) · service (2.44%) · bus (2.28%) · taxi (2.21%) · rentalcars (2.20%) · travel (2.16%) · music (1.81%) · medium (1.66%) · ridesharing (1.30%) · booking (1.21%) · home (1.01%) · finance (0.79%) · airline (0.69%) · calendar (0.69%) · fastfood (0.68%) · insurance (0.61%) · weather (0.58%) · bank (0.47%) · hkmtr (0.36%) · mlb (0.35%) · ml (0.31%) · food (0.30%) · epl (0.30%) · pizza (0.25%) · coffee (0.24%) · uber (0.24%) · software (0.23%) · auto (0.21%) · nba (0.20%) · product_defect (0.17%) · shipping_issue (0.16%) · alarm (0.13%) · order_issue (0.13%) · messaging (0.13%) · hospital (0.11%) · subscription_inquiry (0.11%) · account_access (0.11%) · payment (0.10%) · purchase_dispute (0.10%) · nfl (0.09%) · chat (0.08%) · police (0.07%) · single_item_query (0.06%) · storewide_query (0.06%) · troubleshoot_site (0.06%) · manage_account (0.06%)

More details about the corpus can be found in Section 4 and Appendix A of the original paper.

## Citation

If you found the paper and/or this repository useful, please consider citing our work :)

EMNLP paper: https://aclanthology.org/2024.emnlp-main.310

```bibtex
@inproceedings{burdisso-etal-2024-dialog2flow,
    title = "{D}ialog2{F}low: Pre-training Soft-Contrastive Action-Driven Sentence Embeddings for Automatic Dialog Flow Extraction",
    author = "Burdisso, Sergio  and
      Madikeri, Srikanth  and
      Motlicek, Petr",
    editor = "Al-Onaizan, Yaser  and
      Bansal, Mohit  and
      Chen, Yun-Nung",
    booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
    month = nov,
    year = "2024",
    address = "Miami, Florida, USA",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2024.emnlp-main.310",
    pages = "5421--5440",
}
```

## License

Individual datasets were originally loaded from DialogStudio and, therefore, this project follows their licensing structure. For detailed licensing information, please refer to the specific license accompanying each dataset, as listed in the table above.

All extra content purely authored by us is released under the MIT license:

Copyright (c) 2024 Idiap Research Institute.

MIT License.