
Task Categories: sequence-modeling
Multilinguality: multilingual
Size Categories: unknown
Licenses: unknown
Language Creators: found
Annotations Creators: machine-generated
Source Datasets: original

Dataset Card for open_subtitles_monolingual

Dataset Summary

This is a new collection of translated movie subtitles from http://www.opensubtitles.org/.

IMPORTANT: If you use the OpenSubtitles corpus, please add a link to http://www.opensubtitles.org/ to your website and to any reports and publications produced with the data.

This is a slightly cleaner version of the subtitle collection, using improved sentence alignment and better language checking.

  • 62 languages, 1,782 bitexts
  • Total number of files: 3,735,070
  • Total number of tokens: 22.10G
  • Total number of sentence fragments: 3.35G

This dataset focuses only on monolingual subtitles, with each document corresponding to a single subtitle file.

Supported Tasks and Leaderboards

The dataset is tagged for sequence modeling. Each document is the full text of one subtitle file, so the data can be used to train monolingual language models, one language at a time; success on such tasks is typically measured by perplexity on held-out text.
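
As an illustration, here is a minimal sketch of preparing subtitle documents for causal language modeling. The repository identifier, the per-language configuration name ("en"), and the GPT-2 tokenizer are assumptions made for the example, not part of this card.

# Minimal sketch: tokenize subtitle documents for causal language modeling.
# The repository id, configuration name ("en"), and tokenizer are assumptions.
from datasets import load_dataset
from transformers import AutoTokenizer

dataset = load_dataset("bigscience/open_subtitles_monolingual", "en", split="train")
tokenizer = AutoTokenizer.from_pretrained("gpt2")

def tokenize(batch):
    # "subtitle" holds the full text of one subtitle file (see Data Fields below).
    return tokenizer(batch["subtitle"], truncation=True, max_length=1024)

tokenized = dataset.map(tokenize, batched=True, remove_columns=dataset.column_names)
print(tokenized[0]["input_ids"][:20])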

Languages

The collection covers 62 languages of movie and TV subtitles from OpenSubtitles. The text is informal, dialogue-heavy language written or translated by volunteer contributors, and may also contain the output of OCR systems. Languages are identified with BCP-47 codes, consisting of a primary language subtag with a script and/or region subtag where relevant (e.g. en, fr, es, pt, ar, zh-CN).

Dataset Structure

Data Instances

Each example corresponds to a subtitle file.

{
  "subtitle": "\"Happy birthday to you.\"\n\"Happy birthday to you.\"\n\"Happy birthday, dear...\"\nMemory is always there.\n17 years old,\nI was young, vulnerable, and powerless, making the same mistakes over and over again.\nAnd yet she was strong.\nBut that is always where my memory ends.\nAt that place, when we were 17.\nAnd as it ends there, my life also comes to a stop.\n\"We Were There\n- Last Part \" ...",
  "meta": {
    "year": 2012,
    "imdbId": 2194724,
    "subtitleId": 4786461
  }
}
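
Below is a minimal sketch of loading the dataset and inspecting one example; the repository identifier and the per-language configuration name ("en") are assumptions based on this card.

# Minimal sketch: load one language configuration and inspect a single example.
# The repository id and configuration name ("en") are assumptions based on this card.
from datasets import load_dataset

dataset = load_dataset("bigscience/open_subtitles_monolingual", "en", split="train")
example = dataset[0]

print(example["meta"])            # {"year": ..., "imdbId": ..., "subtitleId": ...}
print(example["subtitle"][:200])  # first 200 characters of the subtitle text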

Data Fields

Each example includes the text of the subtitle file as well as its metadata.

  • subtitle: The subtitle text. Individual subtitle entries are separated by escaped line break characters (\n).
  • year: Year the subtitle file was added.
  • imdbId: Unique movie identifier from the Internet Movie Database (IMDb).
  • subtitleId: Subtitle file identifier. There may be multiple examples referring to the same movie for a given language.
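
Because subtitle entries are separated by line breaks, a document can be split back into individual lines. A small self-contained sketch follows; it uses a truncated copy of the sample instance above rather than the real data.

# Minimal sketch: split a subtitle document back into its individual lines.
# The example dict below is a truncated copy of the sample instance above.
example = {"subtitle": "\"Happy birthday to you.\"\nMemory is always there.\n17 years old,"}

for line in example["subtitle"].split("\n"):
    print(line)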

Data Splits

The dataset is split by language; each language is provided as a separate configuration.

Language | Number of documents | Average document length (tokens) | Total number of tokens | File size
fr       | 120,000             | 5,002                            | 600M                   | 1.1G
en       | 440,000             | 5,575                            | 2,453M                 | 3.5G
zh-CN    | 20,000              | 2,168                            | 43M                    | 269M
pt       | 130,000             | 4,932                            | 641M                   | 1.2G
es       | 230,000             | 5,020                            | 1,155M                 | 2.2G
ar       | 90,000              | 4,379                            | 394M                   | 1.3G
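
A minimal sketch of loading the per-language configurations listed above is shown below; the repository identifier and configuration names are assumptions based on this card.

# Minimal sketch: load each language configuration and report its size.
# The repository id and configuration names are assumptions based on this card.
from datasets import load_dataset

for lang in ["fr", "en", "zh-CN", "pt", "es", "ar"]:
    ds = load_dataset("bigscience/open_subtitles_monolingual", lang, split="train")
    print(f"{lang}: {len(ds)} documents")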

Dataset Creation

Curation Rationale

The dataset extracts the monolingual side of the OpenSubtitles collection: rather than aligned bitexts, each document is a single subtitle file in one language, which makes the corpus directly usable for monolingual sequence modeling.

Source Data

The dataset is based on the OpenSubtitles database.

Initial Data Collection and Normalization

Raw subtitle files go through a series of pre-processing operations:

  • Subtitle conversion: First, the encoding is detected and converted to UTF-8.
  • Sentence segmentation and tokenisation: Sentences are then reconstructed, since raw subtitle files correspond to blocks of text that do not align with sentence boundaries. Sentences are then tokenized with specific tools for Japanese and Chinese and the default Moses tokenizer otherwise.
  • Correction of OCR and spelling errors: Some subtitles are automatically generated using Optical Character Recognition (OCR). This leads to recurring errors, which are automatically detected and corrected using a statistical language model.
  • Inclusion of meta-data: Each file is associated with meta-data.
  • Post-processing: In the current dataset, we add some basic post-processing steps: we parse the XML files and detokenize the sentences (a sketch is shown after this list).
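
For illustration, here is a minimal sketch of the post-processing step: parsing an OpenSubtitles XML file and rebuilding its text as detokenized sentences. It assumes the standard OpenSubtitles layout with <s> sentence and <w> token elements and uses the sacremoses detokenizer; the actual pipeline used to build the dataset may differ.

# Illustrative sketch of the post-processing step: parse an OpenSubtitles XML file
# and rebuild its text as newline-separated, detokenized sentences.
# The XML layout (<s> sentences containing <w> tokens) and the file name are assumptions.
import xml.etree.ElementTree as ET
from sacremoses import MosesDetokenizer

detok = MosesDetokenizer(lang="en")

def extract_subtitle_text(xml_path):
    tree = ET.parse(xml_path)
    sentences = []
    for s in tree.getroot().iter("s"):                    # one <s> element per sentence
        tokens = [w.text for w in s.iter("w") if w.text]  # one <w> element per token
        if tokens:
            sentences.append(detok.detokenize(tokens))
    return "\n".join(sentences)

print(extract_subtitle_text("4786461.xml"))  # hypothetical file name from the sample above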

Who are the source language producers?

Subtitles are written by contributors to the OpenSubtitles database. They may be human-written or automatically generated using OCR.

Citation Information

@inproceedings{lison_16,
  author    = {Pierre Lison and
               J{\"{o}}rg Tiedemann},
  editor    = {Nicoletta Calzolari and
               Khalid Choukri and
               Thierry Declerck and
               Sara Goggi and
               Marko Grobelnik and
               Bente Maegaard and
               Joseph Mariani and
               H{\'{e}}l{\`{e}}ne Mazo and
               Asunci{\'{o}}n Moreno and
               Jan Odijk and
               Stelios Piperidis},
  title     = {OpenSubtitles2016: Extracting Large Parallel Corpora from Movie and
               {TV} Subtitles},
  booktitle = {Proceedings of the Tenth International Conference on Language Resources
               and Evaluation {LREC} 2016, Portoro{\v{z}}, Slovenia, May 23-28, 2016},
  publisher = {European Language Resources Association {(ELRA)}},
  year      = {2016},
  url       = {http://www.lrec-conf.org/proceedings/lrec2016/summaries/947.html},
}

Contributions

Thanks to @AntoineSimoulin for adding this dataset.
