---
license: cc-by-nc-4.0
task_categories:
  - text-generation
  - image-to-text
language:
  - en
multilinguality:
  - monolingual
pretty_name: IMAD
size_categories:
  - 1K<n<10K
tags:
  - multi-modal
  - dialogue
---

## Dataset Description

### Dataset Summary

This dataset accompanies the paper IMAD: IMage-Augmented multi-modal Dialogue. The main feature of the dataset is the novelty of the task: it was built specifically for interpreting images in a dialogue context. Some dialogue utterances have been replaced with images, so a generative model can be trained to restore the original utterance. The dialogues are drawn from several dialogue datasets (DailyDialog, Commonsense, PersonaChat, MuTual, Empathetic Dialogues, DREAM) and filtered with the technique described in the paper. A significant portion of the data was labeled by assessors, with high inter-annotator reliability. The combination of these methods yields a well-filtered dataset and, consequently, a high BLEU score. We hope this dataset will be useful for the development of multi-modal deep learning.

### Data Fields

The dataset contains five fields (a minimal loading sketch follows the list):

- image_id: a string with the ID of the image in the full Unsplash Dataset
- source_data: a string with the name of the source dialogue dataset
- utter: a string with the utterance that was replaced by an image in this dialogue
- context: a list of strings with the dialogue utterances preceding the replaced utterance
- image_like: an integer indicating whether the sample was labeled by assessors or collected via the filtering technique
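
A minimal loading sketch with the standard `datasets` library. The Hub repo id `VityaVitalich/IMAD`, the `train` split name, and the 1/0 encoding of `image_like` are assumptions, not stated in this card:

```python
# Sketch: load the textual part of IMAD and inspect one sample.
from datasets import load_dataset

dataset = load_dataset("VityaVitalich/IMAD")  # repo id assumed
sample = dataset["train"][0]                  # split name assumed

print(sample["image_id"])     # ID of the image in the full Unsplash Dataset
print(sample["source_data"])  # name of the source dialogue dataset
print(sample["utter"])        # utterance that was replaced with an image
print(sample["context"])      # utterances preceding the replaced one
print(sample["image_like"])   # assessor-labeled vs. filtered flag (encoding assumed)
```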

## Licensing Information

The textual part of IMAD is licensed under CC BY-NC-SA 4.0. The full dataset with images can be requested by contacting the authors directly, or obtained by matching image_id with the full Unsplash Dataset.
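
A rough sketch of that matching step, assuming the Unsplash photo table is available locally as a tab-separated file with photo_id and photo_image_url columns (file and column names are assumptions based on the public Unsplash Dataset distribution), and reusing the assumed Hub repo id from above:

```python
# Sketch: attach image URLs to IMAD samples by joining image_id with the
# Unsplash photo table. File name and column names are assumptions.
import pandas as pd
from datasets import load_dataset

imad = load_dataset("VityaVitalich/IMAD")["train"].to_pandas()  # repo id / split assumed
photos = pd.read_csv("photos.tsv000", sep="\t", usecols=["photo_id", "photo_image_url"])

merged = imad.merge(photos, left_on="image_id", right_on="photo_id", how="left")
print(merged[["image_id", "photo_image_url", "utter"]].head())
```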

## Contacts

Feel free to reach out to us at vvmoskvoretskiy@yandex.ru for inquiries, collaboration suggestions, or data requests related to our work.

## Citation Information

To cite this work, please use the following BibTeX reference:

@misc{viktor2023imad,
      title={IMAD: IMage-Augmented multi-modal Dialogue}, 
      author={Moskvoretskii Viktor and Frolov Anton and Kuznetsov Denis},
      year={2023},
      eprint={2305.10512},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}

Or the MLA citation:

Moskvoretskii, Viktor, et al. “IMAD: IMage-Augmented multi-modal Dialogue.” (2023).