asi committed on
Commit
e9ec941
1 Parent(s): 3631c0b

First version of the open_subtitles_monolingual dataset.

Files changed (2)
  1. README.md +173 -0
  2. open_subtitles_monolingual.py +127 -0
README.md ADDED
@@ -0,0 +1,173 @@
---
annotations_creators:
- machine-generated
language_creators:
- found
languages:
- fr
- en
- zh-CN
- pt
- es
- ar
licenses:
- unknown
multilinguality:
- multilingual
pretty_name: OpenSubtitles Monolingual
size_categories:
- unknown
source_datasets:
- original
task_categories:
- sequence-modeling
task_ids:
- language-modeling
---

# Dataset Card for open_subtitles_monolingual

## Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
    - [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
    - [Who are the source language producers?](#who-are-the-source-language-producers)
- [Citation Information](#citation-information)
- [Contributions](#contributions)

## Dataset Description

- **Homepage:** [Opus OpenSubtitles](https://opus.nlpl.eu/OpenSubtitles-v2018.php)
- **Repository:** [If the dataset is hosted on github or has a github homepage, add URL here]()
- **Paper:** [OpenSubtitles2016: Extracting Large Parallel Corpora from Movie and TV Subtitles](http://www.lrec-conf.org/proceedings/lrec2016/pdf/947_Paper.pdf)
- **Leaderboard:** [If the dataset supports an active leaderboard, add link here]()
- **Point of Contact:** [If known, name and email of at least one person the reader can contact for questions about the dataset.]()

### Dataset Summary

This is a new collection of translated movie subtitles from [http://www.opensubtitles.org/](http://www.opensubtitles.org/).

**IMPORTANT**: If you use the OpenSubtitles corpus, please add a link to [http://www.opensubtitles.org/](http://www.opensubtitles.org/) to your website and to the reports and publications produced with the data!

This is a slightly cleaner version of the subtitle collection, using improved sentence alignment and better language checking. The full OPUS OpenSubtitles collection comprises:

- 62 languages, 1,782 bitexts
- total number of files: 3,735,070
- total number of tokens: 22.10G
- total number of sentence fragments: 3.35G

This dataset focuses only on monolingual subtitles, with each document corresponding to a subtitle file.

### Supported Tasks and Leaderboards

- `language-modeling`: The dataset can be used to train a language model, i.e., a model that predicts tokens of the subtitle text given their context. Since each example is a complete subtitle file, the data is well suited to document-level language modeling. Success on this task is typically measured by achieving a low [perplexity](https://huggingface.co/metrics/perplexity). A minimal usage sketch is given below.

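The snippet below is a minimal sketch of preparing the data for causal language modeling. It assumes the dataset can be loaded from this repository under the hub id `asi/open_subtitles_monolingual` (adjust if it is hosted elsewhere) and uses `gpt2` purely as an illustrative tokenizer choice:

```python
from datasets import load_dataset
from transformers import AutoTokenizer

# Hub id assumed from this repository's owner/name; adjust if needed.
dataset = load_dataset("asi/open_subtitles_monolingual", "fr", split="train")

# Illustrative tokenizer choice; any causal LM tokenizer works the same way.
tokenizer = AutoTokenizer.from_pretrained("gpt2")

def tokenize(batch):
    # Each example is a full subtitle file, so truncate to the model context size.
    return tokenizer(batch["subtitle"], truncation=True, max_length=1024)

tokenized = dataset.map(tokenize, batched=True, remove_columns=dataset.column_names)
print(tokenized)
```
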
### Languages

The dataset covers six languages, identified with [BCP-47 codes](https://tools.ietf.org/html/bcp47): French (`fr`), English (`en`), Chinese (`zh-CN`), Portuguese (`pt`), Spanish (`es`), and Arabic (`ar`). The text consists of movie and TV subtitles contributed to [OpenSubtitles](http://www.opensubtitles.org), so it is largely conversational, dialogue-style language.

## Dataset Structure

### Data Instances

Each example corresponds to a subtitle file (the text below is truncated for readability):

```
{
    "subtitle": "\"Happy birthday to you.\"\n\"Happy birthday to you.\"\n\"Happy birthday, dear...\"\nMemory is always there.\n17 years old,\nI was young, vulnerable, and powerless, making the same mistakes over and over again.\nAnd yet she was strong.\nBut that is always where my memory ends.\nAt that place, when we were 17.\nAnd as it ends there, my life also comes to a stop.\n\"We Were There\n- Last Part\" ...",
    "meta": {
        "year": 2012,
        "imdbId": 2194724,
        "subtitleId": 4786461
    }
}
```

### Data Fields

Each example includes the text in the `subtitle` entry as well as metadata in the `meta` entry.

- `subtitle`: The subtitle text. Successive subtitle blocks are separated by escaped line-break (`\n`) characters.
- `year`: Year the subtitle file was added.
- `imdbId`: Unique movie identifier from the [Internet Movie Database](http://www.imdb.com).
- `subtitleId`: Subtitle file identifier. There may be multiple examples referring to the same movie for a given language.

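As a quick check, a single record and its fields can be inspected as follows (same hub-id assumption as in the sketch above):

```python
from datasets import load_dataset

# Each language is exposed as a separate configuration with a single train split.
dataset = load_dataset("asi/open_subtitles_monolingual", "en", split="train")

example = dataset[0]
print(example["subtitle"][:200])      # beginning of the subtitle text
print(example["meta"]["year"])        # year the subtitle file was added
print(example["meta"]["imdbId"])      # IMDb identifier of the movie
print(example["meta"]["subtitleId"])  # identifier of the subtitle file
```
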
### Data Splits

The dataset is split by language: each language is a separate configuration with a single `train` split.

| Language | Number of documents | Average document length (tokens) | Total number of tokens | File size |
| -------- | ------------------- | -------------------------------- | ---------------------- | --------- |
| fr       | 120,000             | 5,002                            | 600M                   | 1.1G      |
| en       | 440,000             | 5,575                            | 2,453M                 | 3.5G      |
| zh-CN    | 20,000              | 2,168                            | 43M                    | 269M      |
| pt       | 130,000             | 4,932                            | 641M                   | 1.2G      |
| es       | 230,000             | 5,020                            | 1,155M                 | 2.2G      |
| ar       | 90,000              | 4,379                            | 394M                   | 1.3G      |

## Dataset Creation

### Curation Rationale

The original OPUS OpenSubtitles release is distributed as sentence-aligned bitexts aimed at machine translation. This dataset repackages the collection as monolingual, document-level text (one document per subtitle file) so that it can be used for language modeling.

### Source Data

The dataset is based on the [OpenSubtitles](http://www.opensubtitles.org) database.

#### Initial Data Collection and Normalization

Raw subtitle files go through a series of pre-processing operations:

- `Subtitle conversion`: First, the encoding is detected and the file is converted to UTF-8.
- `Sentence segmentation and tokenisation`: Sentences are then reconstructed, since raw subtitle files correspond to blocks of text that do not align with sentence boundaries. Sentences are tokenized with language-specific tools for Japanese and Chinese and with the default Moses tokenizer otherwise.
- `Correction of OCR and spelling errors`: Some subtitles are automatically generated using Optical Character Recognition (OCR). This leads to recurring errors, which are automatically detected and corrected using a statistical language model.
- `Inclusion of meta-data`: Each file is associated with meta-data (year, IMDb identifier, subtitle identifier).
- `Post-processing`: For the current dataset, we add some basic post-processing steps: we parse the `xml` files and detokenize the sentences (a sketch of this step is given after this list).

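The exact post-processing code is not distributed with this dataset; the sketch below only illustrates the idea, assuming the OPUS OpenSubtitles XML layout in which each `<s>` sentence element contains `<w>` word tokens. The `detokenize` helper is a simplified stand-in, not the detokenizer actually used:

```python
import re
import xml.etree.ElementTree as ET

def detokenize(tokens):
    """Very rough detokenization: join tokens and re-attach punctuation."""
    text = " ".join(tokens)
    text = re.sub(r"\s+([,.!?;:%)\]])", r"\1", text)  # no space before closing punctuation
    text = re.sub(r"([(\[])\s+", r"\1", text)         # no space after opening brackets
    return text

def parse_subtitle_xml(path):
    """Extract sentences from an OPUS OpenSubtitles XML file and join them with newlines."""
    root = ET.parse(path).getroot()
    sentences = []
    for s in root.iter("s"):
        tokens = [w.text for w in s.iter("w") if w.text]
        if tokens:
            sentences.append(detokenize(tokens))
    return "\n".join(sentences)
```
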
#### Who are the source language producers?

Subtitles are written by contributors to the [OpenSubtitles](http://www.opensubtitles.org) database. They may be human-written or automatically generated using OCR.

### Citation Information

```
@inproceedings{lison_16,
  author    = {Pierre Lison and
               J{\"{o}}rg Tiedemann},
  editor    = {Nicoletta Calzolari and
               Khalid Choukri and
               Thierry Declerck and
               Sara Goggi and
               Marko Grobelnik and
               Bente Maegaard and
               Joseph Mariani and
               H{\'{e}}l{\`{e}}ne Mazo and
               Asunci{\'{o}}n Moreno and
               Jan Odijk and
               Stelios Piperidis},
  title     = {OpenSubtitles2016: Extracting Large Parallel Corpora from Movie and
               {TV} Subtitles},
  booktitle = {Proceedings of the Tenth International Conference on Language Resources
               and Evaluation {LREC} 2016, Portoro{\v{z}}, Slovenia, May 23-28, 2016},
  publisher = {European Language Resources Association {(ELRA)}},
  year      = {2016},
  url       = {http://www.lrec-conf.org/proceedings/lrec2016/summaries/947.html},
}
```

### Contributions

Thanks to [@AntoineSimoulin](https://github.com/AntoineSimoulin) for adding this dataset.
open_subtitles_monolingual.py ADDED
@@ -0,0 +1,127 @@
# coding=utf-8
# Copyright 2020 The HuggingFace Datasets Authors and Antoine Simoulin.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import json

import datasets


_CITATION = r"""
@inproceedings{lison_16,
  author    = {Pierre Lison and
               J{\"{o}}rg Tiedemann},
  editor    = {Nicoletta Calzolari and
               Khalid Choukri and
               Thierry Declerck and
               Sara Goggi and
               Marko Grobelnik and
               Bente Maegaard and
               Joseph Mariani and
               H{\'{e}}l{\`{e}}ne Mazo and
               Asunci{\'{o}}n Moreno and
               Jan Odijk and
               Stelios Piperidis},
  title     = {OpenSubtitles2016: Extracting Large Parallel Corpora from Movie and
               {TV} Subtitles},
  booktitle = {Proceedings of the Tenth International Conference on Language Resources
               and Evaluation {LREC} 2016, Portoro{\v{z}}, Slovenia, May 23-28, 2016},
  publisher = {European Language Resources Association {(ELRA)}},
  year      = {2016},
  url       = {http://www.lrec-conf.org/proceedings/lrec2016/summaries/947.html},
}
"""

_DESCRIPTION = """\
This is a new collection of translated movie subtitles from http://www.opensubtitles.org/.
IMPORTANT: If you use the OpenSubtitle corpus: Please, add a link to http://www.opensubtitles.org/ to your website and to your reports and publications produced with the data!
This is a slightly cleaner version of the subtitle collection using improved sentence alignment and better language checking.
62 languages, 1,782 bitexts
total number of files: 3,735,070
total number of tokens: 22.10G
total number of sentence fragments: 3.35G
"""

_HOMEPAGE_URL = "http://opus.nlpl.eu/OpenSubtitles.php"

# One JSON-lines archive per language, resolved relative to this dataset repository.
_URLs = {
    language: "./{}.jsonl.gz".format(language)
    for language in ["fr", "en", "zh_cn", "pt", "es", "ar"]
}


class OpenSubtitlesMonolingual(datasets.GeneratorBasedBuilder):
    """Collection of translated movie subtitles from http://www.opensubtitles.org/."""

    VERSION = datasets.Version("1.1.0")

    # One configuration per language.
    BUILDER_CONFIGS = [
        datasets.BuilderConfig(name=language, description="{} subtitles".format(language))
        for language in _URLs.keys()
    ]

    def _info(self):
        features = datasets.Features(
            {
                "subtitle": datasets.Value("string"),
                "meta": {
                    "year": datasets.Value("int32"),
                    "imdbId": datasets.Value("int32"),
                    "subtitleId": datasets.Value("int32"),
                },
            }
        )
        return datasets.DatasetInfo(
            description=_DESCRIPTION,
            features=features,
            supervised_keys=None,
            homepage=_HOMEPAGE_URL,
            citation=_CITATION,
        )

    def _split_generators(self, dl_manager):
        """Returns SplitGenerators."""
        my_url = _URLs[self.config.name]
        data_file = dl_manager.download_and_extract(my_url)
        return [
            datasets.SplitGenerator(
                name=datasets.Split.TRAIN,
                gen_kwargs={
                    "filepath": data_file,
                },
            )
        ]

    def _generate_examples(self, filepath):
        """Yields examples as (key, example) tuples."""
        # Each line of the extracted JSON-lines file describes one subtitle file,
        # with the text under 'subtitles' and metadata under 'year', 'IMDbs' and 'filename'.
        with open(filepath, encoding="utf-8") as f:
            for id_, row in enumerate(f):
                data = json.loads(row)
                yield id_, {
                    "subtitle": data['subtitles'],
                    "meta": {
                        "year": data['year'],
                        "imdbId": data['IMDbs'],
                        "subtitleId": int(data['filename'][:-len('.xml')]),
                    },
                }
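For reference, `_generate_examples` above expects each line of the per-language `./{lang}.jsonl.gz` files to be a JSON object with `subtitles`, `year`, `IMDbs`, and `filename` keys. A minimal sketch of producing a compatible file (field values are illustrative only):

```python
import gzip
import json

record = {
    "subtitles": "Happy birthday to you.\nHappy birthday to you.",  # full subtitle text
    "year": 2012,                # year the subtitle file was added
    "IMDbs": 2194724,            # IMDb identifier
    "filename": "4786461.xml",   # the numeric prefix becomes subtitleId
}

with gzip.open("fr.jsonl.gz", "wt", encoding="utf-8") as f:
    f.write(json.dumps(record, ensure_ascii=False) + "\n")
```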