Dataset: cornell_movie_dialog

How to load this dataset directly with the 🤗/nlp library:

from nlp import load_dataset
dataset = load_dataset("cornell_movie_dialog")
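
A minimal follow-up sketch for inspecting what was loaded, assuming the call above succeeded and that the dataset exposes the usual "train" split (an assumption here, not something the page states):

print(dataset)                 # shows the available splits and their columns
train = dataset["train"]       # assumed split name; adjust if the splits differ
print(train.num_rows)          # number of examples in that split
print(train[0])                # the first example as a plain Python dict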

Description

This corpus contains a large metadata-rich collection of fictional conversations extracted from raw movie scripts:
- 220,579 conversational exchanges between 10,292 pairs of movie characters
- 9,035 characters from 617 movies
- 304,713 utterances in total
- movie metadata included: genres, release year, IMDB rating, number of IMDB votes
- character metadata included: gender (for 3,774 characters), position on movie credits (3,321 characters)
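
The movie- and character-level metadata above are exposed as columns on each example. The sketch below shows how to inspect the schema; the commented column names are hypothetical placeholders only, so rely on the printed features for the real keys:

features = dataset["train"].features   # authoritative mapping of column names to types
print(features)

example = dataset["train"][0]
# Hypothetical keys, shown only to illustrate access; use the names printed above.
# genres = example["movieGenres"]
# year   = example["movieYear"]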

Citation

@InProceedings{Danescu-Niculescu-Mizil+Lee:11a,
  author={Cristian Danescu-Niculescu-Mizil and Lillian Lee},
  title={Chameleons in imagined conversations: A new approach to understanding coordination of linguistic style in dialogs.},
  booktitle={Proceedings of the Workshop on Cognitive Modeling and Computational Linguistics, ACL 2011},
  year={2011}
}

Models trained or fine-tuned on cornell_movie_dialog

None yet. Start fine-tuning now =)