---
annotations_creators:
  - crowdsourced
language_creators:
  - crowdsourced
language:
  - en
license:
  - cc-by-nc-4.0
multilinguality:
  - monolingual
size_categories:
  - 10K<n<100K
source_datasets:
  - original
task_categories:
  - text-generation
  - fill-mask
task_ids:
  - dialogue-modeling
paperswithcode_id: curiosity
pretty_name: Curiosity Dataset
tags:
  - conversational-curiosity
dataset_info:
  features:
    - name: messages
      sequence:
        - name: message
          dtype: string
        - name: liked
          dtype:
            class_label:
              names:
                '0': 'False'
                '1': 'True'
        - name: sender
          dtype:
            class_label:
              names:
                '0': user
                '1': assistant
        - name: facts
          sequence:
            - name: fid
              dtype: int32
            - name: used
              dtype:
                class_label:
                  names:
                    '0': 'False'
                    '1': 'True'
            - name: source
              dtype:
                class_label:
                  names:
                    '0': section
                    '1': known
                    '2': random
        - name: message_id
          dtype: string
        - name: dialog_acts
          sequence: string
    - name: known_entities
      sequence: string
    - name: focus_entity
      dtype: string
    - name: dialog_id
      dtype: int32
    - name: inferred_steps
      dtype:
        class_label:
          names:
            '0': 'False'
            '1': 'True'
    - name: created_time
      dtype: int64
    - name: aspects
      sequence: string
    - name: first_aspect
      dtype: string
    - name: second_aspect
      dtype: string
    - name: shuffle_facts
      dtype:
        class_label:
          names:
            '0': 'False'
            '1': 'True'
    - name: related_entities
      sequence: string
    - name: tag
      dtype: string
    - name: user_id
      dtype: int32
    - name: assistant_id
      dtype: int32
    - name: is_annotated
      dtype:
        class_label:
          names:
            '0': 'False'
            '1': 'True'
    - name: user_dialog_rating
      dtype: int32
    - name: user_other_agent_rating
      dtype: int32
    - name: assistant_dialog_rating
      dtype: int32
    - name: assistant_other_agent_rating
      dtype: int32
    - name: reported
      dtype:
        class_label:
          names:
            '0': 'False'
            '1': 'True'
    - name: annotated
      dtype:
        class_label:
          names:
            '0': 'False'
            '1': 'True'
  config_name: curiosity_dialogs
  splits:
    - name: train
      num_bytes: 37198297
      num_examples: 10287
    - name: val
      num_bytes: 4914487
      num_examples: 1287
    - name: test
      num_bytes: 4915613
      num_examples: 1287
    - name: test_zero
      num_bytes: 4333191
      num_examples: 1187
  download_size: 92169165
  dataset_size: 51361588
---

# Dataset Card for Curiosity Dataset

## Table of Contents

- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Additional Information](#additional-information)

## Dataset Description

### Dataset Summary

The Curiosity dataset consists of 14K English dialogs (181K utterances) in which users and assistants converse about geographic topics such as geopolitical entities and locations. The dataset is annotated with pre-existing user knowledge, message-level dialog acts, grounding to Wikipedia, and user reactions to messages.
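
For orientation, the snippet below is a minimal loading sketch using the `datasets` library; it assumes the dataset is available on the Hugging Face Hub under the id `curiosity_dialogs`, as this repository suggests.

```python
from datasets import load_dataset

# Assumption: the dataset id is "curiosity_dialogs"; adjust it if the dataset
# is hosted under a namespaced id.
curiosity = load_dataset("curiosity_dialogs")

# The card lists four splits: train, val, test, and test_zero.
print(curiosity)
```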

### Supported Tasks and Leaderboards

- `text-generation-other-conversational-curiosity`: The dataset can be used to train a model for Conversational Curiosity, which tests the hypothesis that engagement increases when users are presented with facts related to what they already know. Success on this task is typically measured by achieving a high accuracy and F1 score; a minimal evaluation sketch follows.
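
As a hedged illustration of such an evaluation, the sketch below scores predictions of the message-level `liked` signal with scikit-learn's accuracy and F1; the all-zeros baseline is only a placeholder for a real model, and the dataset id is assumed to be `curiosity_dialogs`.

```python
from datasets import load_dataset
from sklearn.metrics import accuracy_score, f1_score

curiosity = load_dataset("curiosity_dialogs")  # assumed dataset id

# Flatten the message-level "liked" labels of the validation split ("val").
gold = [label for dialog in curiosity["val"]["messages"] for label in dialog["liked"]]

# Placeholder predictions: a majority-class baseline that never predicts "liked".
pred = [0] * len(gold)

print("accuracy:", accuracy_score(gold, pred))
print("F1:", f1_score(gold, pred, zero_division=0))
```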

### Languages

The text in the dataset is in English and was collected through crowdsourcing. The associated BCP-47 code is `en`.

## Dataset Structure

### Data Instances

A typical data point consists of a dialog between a user and an assistant, followed by the attributes of that particular dialog.

An example from the Curiosity Dataset train set looks as follows:

```python
{'annotated': 1,
 'aspects': ['Media', 'Politics and government'],
 'assistant_dialog_rating': 5,
 'assistant_id': 341,
 'assistant_other_agent_rating': 5,
 'created_time': 1571783665,
 'dialog_id': 21922,
 'first_aspect': 'Media',
 'focus_entity': 'Namibia',
 'inferred_steps': 1,
 'is_annotated': 0,
 'known_entities': ['South Africa', 'United Kingdom', 'Portugal'],
 'messages': {'dialog_acts': [['request_topic'],
   ['inform_response'],
   ['request_aspect'],
   ['inform_response'],
   ['request_followup'],
   ['inform_response'],
   ['request_aspect', 'feedback_positive'],
   ['inform_response'],
   ['request_followup'],
   ['inform_response'],
   [],
   []],
  'facts': [{'fid': [], 'source': [], 'used': []},
   {'fid': [77870, 77676, 77816, 77814, 77775, 77659, 77877, 77785, 77867],
    'source': [0, 1, 2, 2, 0, 2, 0, 1, 1],
    'used': [0, 0, 0, 0, 0, 0, 0, 0, 0]},
   {'fid': [], 'source': [], 'used': []},
   {'fid': [77725, 77870, 77676, 77863, 77814, 77775, 77659, 77877, 77867],
    'source': [2, 0, 1, 1, 2, 0, 2, 0, 1],
    'used': [0, 0, 0, 0, 0, 0, 0, 0, 0]},
   {'fid': [], 'source': [], 'used': []},
   {'fid': [77694, 77661, 77863, 77780, 77671, 77704, 77869, 77693, 77877],
    'source': [1, 2, 1, 0, 2, 2, 0, 1, 0],
    'used': [0, 0, 0, 0, 0, 0, 0, 0, 1]},
   {'fid': [], 'source': [], 'used': []},
   {'fid': [77816, 77814, 77864, 77659, 77877, 77803, 77738, 77784, 77789],
    'source': [2, 2, 0, 2, 0, 1, 1, 0, 1],
    'used': [0, 0, 0, 0, 0, 0, 0, 0, 0]},
   {'fid': [], 'source': [], 'used': []},
   {'fid': [77694, 77776, 77780, 77696, 77707, 77693, 77778, 77702, 77743],
    'source': [1, 0, 0, 2, 1, 1, 0, 2, 2],
    'used': [0, 0, 0, 0, 0, 0, 0, 0, 0]},
   {'fid': [], 'source': [], 'used': []},
   {'fid': [77662, 77779, 77742, 77734, 77663, 77777, 77702, 77731, 77778],
    'source': [1, 0, 2, 1, 2, 0, 2, 1, 0],
    'used': [0, 0, 0, 0, 0, 0, 0, 0, 1]}],
  'liked': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
  'message': ['Hi. I want information about Namibia.',
   'Nmbia is a country in southern Africa.',
   'Do you have information about the media there?',
   'A mentional amount of foriegn',
   'What about it?',
   "Media and journalists in Namibia are represented by the Namibia chapter of the Media Institute of 'southern Africa and the Editors Forum of Namibia.",
   'Interesting! What can you tell me about the politics and government?',
   'Namibia formed the Namibian Defence Force, comprising former enemies in a 23-year bush war.',
   'Do you have more information about it?',
   "With a small army and a fragile economy , the Namibian government's principal foreign policy concern is developing strengthened ties within the Southern African region.",
   "That's all I wanted to know. Thank you!",
   'My pleasure!'],
  'message_id': ['617343895',
   '2842515356',
   '4240816985',
   '520711081',
   '1292358002',
   '3677078227',
   '1563061125',
   '1089028270',
   '1607063839',
   '113037558',
   '1197873991',
   '1399017322'],
  'sender': [0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1]},
 'related_entities': ['Western Roman Empire',
  'United Kingdom',
  'Portuguese language',
  'Southern African Development Community',
  'South Africa',
  'Kalahari Desert',
  'Namib Desert',
  'League of Nations',
  'Afrikaans',
  'Sub-Saharan Africa',
  'Portugal',
  'South-West Africa',
  'Warmbad, Namibia',
  'German language',
  'NBC'],
 'reported': 0,
 'second_aspect': 'Politics and government',
 'shuffle_facts': 1,
 'tag': 'round_2',
 'user_dialog_rating': 5,
 'user_id': 207,
 'user_other_agent_rating': 5}
```
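
The integer codes in this example follow the class labels declared in the YAML schema above (for instance `sender`: 0 = user, 1 = assistant; `liked`: 0 = False, 1 = True). The sketch below walks through one dialog, again assuming the dataset id `curiosity_dialogs`.

```python
from datasets import load_dataset

curiosity = load_dataset("curiosity_dialogs")  # assumed dataset id
example = curiosity["train"][0]

# Per the schema above: sender 0 = user, 1 = assistant.
sender_names = {0: "user", 1: "assistant"}

print("focus entity:", example["focus_entity"])
for sender, text, liked in zip(
    example["messages"]["sender"],
    example["messages"]["message"],
    example["messages"]["liked"],
):
    marker = " (liked)" if liked == 1 else ""
    print(f"{sender_names[sender]}: {text}{marker}")
```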

### Data Fields

- `messages`: List of messages exchanged between the user and the assistant, along with their associated attributes
  - `dialog_acts`: List of dialog acts performed in each message
  - `facts`: List of candidate facts available to the assistant for each message
    - `fid`: Fact ID
    - `source`: Source of the fact (`section`, `known`, or `random`)
    - `used`: Whether the fact was used in the dialog
  - `liked`: Whether each message was liked
  - `message`: Text of each message exchanged between the user and the assistant
  - `message_id`: Message ID
  - `sender`: Sender of each message (`user` or `assistant`)
- `known_entities`: Entities the user already knows about, in which presented facts can be rooted
- `focus_entity`: Entity the dialog focuses on
- `dialog_id`: Dialog ID
- `inferred_steps`: Whether the dialog contains inferred steps (stored as a boolean class label)
- `created_time`: Creation time of the dialog (Unix timestamp)
- `aspects`: List of the two aspects the dialog is about
- `first_aspect`: First aspect
- `second_aspect`: Second aspect
- `shuffle_facts`: Whether the facts were shuffled
- `related_entities`: List of fifteen entities related to the focus entity
- `tag`: Conversation tag
- `user_id`: User ID
- `assistant_id`: Assistant ID
- `is_annotated`: 0 or 1 (More Information Needed)
- `user_dialog_rating`: 1 - 5 (More Information Needed)
- `user_other_agent_rating`: 1 - 5 (More Information Needed)
- `assistant_dialog_rating`: 1 - 5 (More Information Needed)
- `assistant_other_agent_rating`: 1 - 5 (More Information Needed)
- `reported`: Whether the dialog was reported as inappropriate
- `annotated`: 0 or 1 (More Information Needed)
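
Several of the fields above are stored as integer class labels. The sketch below decodes the per-message fact grounding using the label names from the schema in this card; the dataset id is again assumed to be `curiosity_dialogs`.

```python
from datasets import load_dataset

curiosity = load_dataset("curiosity_dialogs")  # assumed dataset id
example = curiosity["train"][0]

# Label names as declared in the dataset schema above.
source_names = {0: "section", 1: "known", 2: "random"}

for turn, facts in enumerate(example["messages"]["facts"]):
    if not facts["fid"]:
        continue  # user turns carry no candidate facts
    sources = [source_names[s] for s in facts["source"]]
    used_fids = [fid for fid, used in zip(facts["fid"], facts["used"]) if used == 1]
    print(f"turn {turn}: {len(facts['fid'])} candidate facts, used={used_fids}, sources={sources}")
```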

### Data Splits

The data is split into train, validation, test, and test_zero sets, following the original dataset split (the validation split is named `val` in the loaded dataset).

|                       | train | validation | test | test_zero |
| --------------------- | ----- | ---------- | ---- | --------- |
| Input dialog examples | 10287 | 1287       | 1287 | 1187      |
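
A quick sanity-check sketch that reproduces these dialog counts and tallies utterances per split (same dataset id assumption as in the earlier snippets):

```python
from datasets import load_dataset

curiosity = load_dataset("curiosity_dialogs")  # assumed dataset id

total_utterances = 0
for split_name, split in curiosity.items():
    n_utterances = sum(len(m["message"]) for m in split["messages"])
    total_utterances += n_utterances
    print(f"{split_name}: {len(split)} dialogs, {n_utterances} utterances")

print("total utterances:", total_utterances)
```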

## Dataset Creation

### Curation Rationale

[More Information Needed]

### Source Data

#### Initial Data Collection and Normalization

[More Information Needed]

#### Who are the source language producers?

[More Information Needed]

### Annotations

#### Annotation process

[More Information Needed]

#### Who are the annotators?

[More Information Needed]

### Personal and Sensitive Information

[More Information Needed]

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

[More Information Needed]

## Additional Information

### Dataset Curators

[More Information Needed]

### Licensing Information

The dataset is licensed under the [Creative Commons Attribution-NonCommercial 4.0 International](https://creativecommons.org/licenses/by-nc/4.0/) license (CC BY-NC 4.0).

### Citation Information

```bibtex
@inproceedings{rodriguez2020curiosity,
    title = {Information Seeking in the Spirit of Learning: a Dataset for Conversational Curiosity},
    author = {Pedro Rodriguez and Paul Crook and Seungwhan Moon and Zhiguang Wang},
    year = 2020,
    booktitle = {Empirical Methods in Natural Language Processing}
}
```

### Contributions

Thanks to @vineeths96 for adding this dataset.