---
annotations_creators:
- machine-generated
language_creators:
- found
languages:
- fr
- en
- zh-CN
- pt
- es
- ar
licenses:
- unknown
multilinguality:
- multilingual
pretty_name: OpenSubtitles
size_categories:
- unknown
source_datasets:
- original
task_categories:
- sequence-modeling
task_ids:
- language-modeling
---

# Dataset Card for OpenSubtitles

## Table of Contents
- [Dataset Card for OpenSubtitles](#dataset-card-for-opensubtitles)
  - [Table of Contents](#table-of-contents)
  - [Dataset Description](#dataset-description)
    - [Dataset Summary](#dataset-summary)
    - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
    - [Languages](#languages)
  - [Dataset Structure](#dataset-structure)
    - [Data Instances](#data-instances)
    - [Data Fields](#data-fields)
    - [Data Splits](#data-splits)
  - [Dataset Creation](#dataset-creation)
    - [Curation Rationale](#curation-rationale)
    - [Source Data](#source-data)
      - [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
      - [Who are the source language producers?](#who-are-the-source-language-producers)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

- **Homepage:** [OPUS OpenSubtitles](https://opus.nlpl.eu/OpenSubtitles-v2018.php)
- **Repository:** [More Information Needed]
- **Paper:** [OpenSubtitles2016: Extracting Large Parallel Corpora from Movie and TV Subtitles](http://www.lrec-conf.org/proceedings/lrec2016/pdf/947_Paper.pdf)
- **Leaderboard:** [More Information Needed]
- **Point of Contact:** [More Information Needed]

### Dataset Summary

This is a new collection of translated movie subtitles from [http://www.opensubtitles.org/](http://www.opensubtitles.org/).

**IMPORTANT**: If you use the OpenSubtitles corpus, please add a link to [http://www.opensubtitles.org/](http://www.opensubtitles.org/) on your website and in any reports and publications produced with the data!

This is a slightly cleaner version of the subtitle collection, using improved sentence alignment and better language checking. The full collection covers:

- 62 languages, 1,782 bitexts
- total number of files: 3,735,070
- total number of tokens: 22.10G
- total number of sentence fragments: 3.35G

This dataset focuses only on monolingual subtitles, with each document corresponding to a subtitle file.
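
As a quick-start illustration, the dataset can be loaded with the `datasets` library. This is only a sketch: the Hub path `path/to/opensubtitles-monolingual` and the per-language configuration names are assumptions based on the splits described below, not confirmed identifiers.

```
from datasets import load_dataset

# Sketch: load the English configuration. The Hub path and the per-language
# configuration names are assumptions, not confirmed identifiers.
dataset = load_dataset("path/to/opensubtitles-monolingual", "en")
print(dataset)                                 # available splits and sizes
print(dataset["train"][0]["subtitle"][:200])   # start of the first document
```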

### Supported Tasks and Leaderboards

- `language-modeling`: The dataset can be used to train a language model, i.e. a model that predicts the next token given the preceding context. Success on this task is typically measured by achieving a low [perplexity](https://huggingface.co/metrics/perplexity). The documents can be tokenized and fed to a causal language model, as in the sketch below.
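
A minimal sketch of preparing the corpus for causal language modeling with a `transformers` tokenizer; the `gpt2` checkpoint is only an example, and `dataset` refers to the object loaded in the earlier sketch:

```
from transformers import AutoTokenizer

# Sketch: tokenize the subtitle documents for causal language modeling.
# "gpt2" is an example checkpoint; `dataset` is the object loaded above.
tokenizer = AutoTokenizer.from_pretrained("gpt2")

def tokenize(batch):
    # Truncation keeps each document within the model's context window.
    return tokenizer(batch["subtitle"], truncation=True, max_length=1024)

tokenized = dataset.map(tokenize, batched=True, remove_columns=["subtitle", "meta"])
```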

### Languages

The dataset covers six languages, identified by their [BCP-47](https://tools.ietf.org/html/bcp47) codes: French (`fr`), English (`en`), Simplified Chinese (`zh-CN`), Portuguese (`pt`), Spanish (`es`), and Arabic (`ar`). Since the documents are movie and TV subtitles, the text is mostly conversational dialogue.

## Dataset Structure

### Data Instances

Each example corresponds to a subtitle file.

```
{
  "subtitle": "\"Happy birthday to you.\"\n\"Happy birthday to you.\"\n\"Happy birthday, dear...\"\nMemory is always there.\n17 years old,\nI was young, vulnerable, and powerless, making the same mistakes over and over again.\nAnd yet she was strong.\nBut that is always where my memory ends.\nAt that place, when we were 17.\nAnd as it ends there, my life also comes to a stop.\n\"We Were There\n- Last Part\" ...",
  "meta": {
    "year": 2012,
    "imdbId": 2194724,
    "subtitleId": "4786461.xml"
  }
}
```

### Data Fields

Each example includes the text in the `subtitle` entry as well as metadata (illustrated in the sketch after this list).

- `subtitle`: The subtitle text; line breaks within a document are kept as escaped `\n` characters.
- `year`: Year the subtitle file was added.
- `imdbId`: Unique movie identifier following the reference from the [Internet Movie Database](http://www.imdb.com).
- `subtitleId`: Subtitle file identifier. There may be multiple examples referring to the same movie for a given language.
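
For illustration, here is how an example's fields might be accessed once the dataset is loaded as sketched earlier; the field names follow the list above, and the `train` split name is an assumption:

```
# Sketch: inspect one example loaded as above; field names follow this list,
# the "train" split name is an assumption.
example = dataset["train"][0]
lines = example["subtitle"].split("\n")   # recover individual subtitle lines
meta = example["meta"]
print(f"{len(lines)} lines | year={meta['year']} | imdbId={meta['imdbId']} "
      f"| subtitleId={meta['subtitleId']}")
```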

### Data Splits

The dataset is split by language.

| Language | Number of documents | Average document length (tokens) | Total number of tokens | File size |
| -------- | ------------------- | -------------------------------- | ---------------------- | --------- |
| fr       | 120,000             | 5,002                            | 600M                   | 1.1G      |
| en       | 440,000             | 5,575                            | 2,453M                 | 3.5G      |
| zh-CN    | 20,000              | 2,168                            | 43M                    | 269M      |
| pt       | 130,000             | 4,932                            | 641M                   | 1.2G      |
| es       | 230,000             | 5,020                            | 1,155M                 | 2.2G      |
| ar       | 90,000              | 4,379                            | 394M                   | 1.3G      |
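
The counts above could be re-derived approximately with a sketch like the following; the dataset path and configuration names are assumptions, and whitespace splitting is only a crude token proxy (notably for `zh-CN`):

```
from datasets import load_dataset

# Sketch: recompute rough per-language statistics. Whitespace splitting is
# only a crude token proxy (especially for zh-CN); path/config names assumed.
for lang in ["fr", "en", "zh-CN", "pt", "es", "ar"]:
    ds = load_dataset("path/to/opensubtitles-monolingual", lang, split="train")
    n_tokens = sum(len(ex["subtitle"].split()) for ex in ds)
    print(f"{lang}: {len(ds):,} documents, ~{n_tokens:,} tokens")
```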

## Dataset Creation

### Curation Rationale

Subtitles provide large volumes of conversational, document-level text in many languages. While OPUS primarily distributes OpenSubtitles as parallel bitexts for machine translation, this version keeps each subtitle file as a single monolingual document, making it suitable for training language models.

### Source Data

The dataset is based on the [OpenSubtitles](http://www.opensubtitles.org) database.

#### Initial Data Collection and Normalization

Raw subtitle files go through a series of pre-processing operations:
- `Subtitle conversion`: First, the encoding is detected and converted to utf-8.
- `Sentence segmentation and tokenisation`: Sentences are then reconstructed, since raw subtitle files consist of blocks of text that do not align with sentence boundaries. Sentences are then tokenized, with language-specific tools for Japanese and Chinese and the default Moses tokenizer otherwise.
- `Correction of OCR and spelling errors`: Some subtitles are automatically generated using Optical Character Recognition (OCR). This leads to recurring errors, which are automatically detected and corrected using a statistical language model.
- `Inclusion of meta-data`: Each file is associated with metadata.
- `Post-processing`: In the current dataset, we add some basic post-processing steps: we parse the `xml` files and detokenize the sentences (see the sketch after this list).
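
A minimal sketch of that post-processing step, assuming the tokenized OPUS `xml` layout (one `<s>` element per sentence, one `<w>` element per token) and using the `sacremoses` Moses detokenizer; this is an illustration, not the exact script used:

```
import xml.etree.ElementTree as ET
from sacremoses import MosesDetokenizer

detok = MosesDetokenizer(lang="en")

def read_subtitle(path):
    """Parse one tokenized OPUS subtitle file into a single document string."""
    tree = ET.parse(path)
    sentences = []
    for s in tree.getroot().iter("s"):        # one <s> per sentence (assumed)
        tokens = [w.text for w in s.iter("w") if w.text]  # one <w> per token
        if tokens:
            sentences.append(detok.detokenize(tokens))
    # Sentences are joined with line breaks, matching the `subtitle` field.
    return "\n".join(sentences)
```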

#### Who are the source language producers?

Subtitles are provided by contributors to the [OpenSubtitles](http://www.opensubtitles.org) database. They may be human-written or automatically generated using OCR methods.

### Citation Information

```
@inproceedings{lison_16,
  author    = {Pierre Lison and
               J{\"{o}}rg Tiedemann},
  editor    = {Nicoletta Calzolari and
               Khalid Choukri and
               Thierry Declerck and
               Sara Goggi and
               Marko Grobelnik and
               Bente Maegaard and
               Joseph Mariani and
               H{\'{e}}l{\`{e}}ne Mazo and
               Asunci{\'{o}}n Moreno and
               Jan Odijk and
               Stelios Piperidis},
  title     = {OpenSubtitles2016: Extracting Large Parallel Corpora from Movie and
               {TV} Subtitles},
  booktitle = {Proceedings of the Tenth International Conference on Language Resources
               and Evaluation, {LREC} 2016, Portoro{\v{z}}, Slovenia, May 23-28, 2016},
  publisher = {European Language Resources Association ({ELRA})},
  year      = {2016},
  url       = {http://www.lrec-conf.org/proceedings/lrec2016/summaries/947.html},
}
```


### Contributions

Thanks to [@AntoineSimoulin](https://github.com/AntoineSimoulin) for adding this dataset.
174