Yeb Havinga committed on
Commit 501f190
1 Parent(s): 09f5119

Add dataset

README.md ADDED
@@ -0,0 +1,156 @@
+ ---
+ annotations_creators:
+ - found
+ language_creators:
+ - found
+ language:
+ - en
+ - nl
+ license:
+ - unknown
+ multilinguality:
+ - multilingual
+ size_categories:
+ - 10K<n<100K
+ - 1M<n<10M
+ - n<1K
+ source_datasets:
+ - original
+ task_categories:
+ - translation
+ task_ids: []
+ pretty_name: OpenSubtitles En Nl
+ ---
+
+ # Dataset Card for OpenSubtitles
+
+ ## Table of Contents
+ - [Dataset Description](#dataset-description)
+   - [Dataset Summary](#dataset-summary)
+   - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
+   - [Languages](#languages)
+ - [Dataset Structure](#dataset-structure)
+   - [Data Instances](#data-instances)
+   - [Data Fields](#data-fields)
+   - [Data Splits](#data-splits)
+ - [Dataset Creation](#dataset-creation)
+   - [Curation Rationale](#curation-rationale)
+   - [Source Data](#source-data)
+   - [Annotations](#annotations)
+   - [Personal and Sensitive Information](#personal-and-sensitive-information)
+ - [Considerations for Using the Data](#considerations-for-using-the-data)
+   - [Social Impact of Dataset](#social-impact-of-dataset)
+   - [Discussion of Biases](#discussion-of-biases)
+   - [Other Known Limitations](#other-known-limitations)
+ - [Additional Information](#additional-information)
+   - [Dataset Curators](#dataset-curators)
+   - [Licensing Information](#licensing-information)
+   - [Citation Information](#citation-information)
+   - [Contributions](#contributions)
+
+ ## Dataset Description
+
+ - **Homepage:** http://opus.nlpl.eu/OpenSubtitles.php
+ - **Repository:** None
+ - **Paper:** http://www.lrec-conf.org/proceedings/lrec2016/pdf/62_Paper.pdf
+ - **Leaderboard:** [More Information Needed]
+ - **Point of Contact:** [More Information Needed]
+
+ ### Dataset Summary
+
+ This dataset is a subset of the en-nl open_subtitles dataset.
+ It contains only subtitles of TV show episodes with an IMDB rating above 8.0 and more than 1000 votes.
+ The subtitles are ordered and concatenated into buffers of several lengths, with a maximum of 370 tokens
+ as tokenized by the 'yhavinga/ul2-base-dutch' tokenizer.
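+
+ A minimal loading sketch (this assumes you work from a local clone of this repository, since the examples are stored as gzipped JSON lines in `train.jsonl.gz`):
+
+ ```python
+ from datasets import load_dataset
+
+ # Load the gzipped JSON-lines file as a Hugging Face dataset
+ ds = load_dataset("json", data_files="train.jsonl.gz", split="train")
+ print(ds[0]["translation"]["nl"])
+ ```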
+
+ ### Supported Tasks and Leaderboards
+
+ [More Information Needed]
+
+ ### Languages
+
+ The languages in the dataset are:
+ - en
+ - nl
+
+ ## Dataset Structure
+
+ ### Data Instances
+
+ Each instance pairs a Dutch and an English subtitle fragment and records their token lengths; an example is shown below.
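+
+ The schema is produced by `emit()` in `src/create_dataset.py`; the field values in this example are illustrative, not taken from the data:
+
+ ```json
+ {
+   "id": "tt0903747-b1-42",
+   "translation": {
+     "nl": "Zeg mijn naam.",
+     "en": "Say my name."
+   },
+   "nl_len": 6,
+   "en_len": 5
+ }
+ ```
+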
+ ### Data Fields
+
+ - `id`: identifier built from the episode's IMDB `tconst` plus sentence or buffer ids
+ - `translation`: a dictionary with the `en` (English) and `nl` (Dutch) texts
+ - `nl_len`: number of tokens in the Dutch text, as counted by the 'yhavinga/ul2-base-dutch' tokenizer
+ - `en_len`: number of tokens in the English text, counted with the same tokenizer
+
+ ### Data Splits
+
+ The dataset has a single train split (`train.jsonl.gz`).
+
+ ## Dataset Creation
+
+ ### Curation Rationale
+
+ [More Information Needed]
+
+ ### Source Data
+
+ The en-nl OpenSubtitles corpus from OPUS, joined with the IMDB metadata files (`title.basics`, `title.episode`, `title.ratings`) to select episodes of highly rated TV shows.
+
+ #### Initial Data Collection and Normalization
+
+ See `src/create_opensub_imdb_joined.py` and `src/create_dataset.py` in this repository for the filtering and collation code.
+
+ #### Who are the source language producers?
+
+ [More Information Needed]
+
+ ### Annotations
+
+ [More Information Needed]
+
+ #### Annotation process
+
+ [More Information Needed]
+
+ #### Who are the annotators?
+
+ [More Information Needed]
+
+ ### Personal and Sensitive Information
+
+ [More Information Needed]
+
+ ## Considerations for Using the Data
+
+ ### Social Impact of Dataset
+
+ [More Information Needed]
+
+ ### Discussion of Biases
+
+ [More Information Needed]
+
+ ### Other Known Limitations
+
+ [More Information Needed]
+
+ ## Additional Information
+
+ ### Dataset Curators
+
+ [More Information Needed]
+
+ ### Licensing Information
+
+ [More Information Needed]
+
+ ### Citation Information
+
+ [More Information Needed]
+
+ ### Contributions
+
+ Thanks to [@abhishekkrthakur](https://github.com/abhishekkrthakur) for adding the open_subtitles dataset.
src/create_dataset.py ADDED
@@ -0,0 +1,168 @@
+ import gzip
+ import json
+
+ import numpy as np
+ import pandas as pd
+ from transformers import AutoTokenizer
+
+ COLLATE_LENGTH = 370
+
+
+ def emit(line_id, nl_str, en_str, nl_l, en_l):
+     """Write one translation pair as a JSON line to the module-level gzip writer."""
+     obj = {
+         "id": line_id,
+         "translation": {
+             "nl": nl_str.strip(),
+             "en": en_str.strip(),
+         },
+         "nl_len": nl_l,
+         "en_len": en_l,
+     }
+     writer.write(json.dumps(obj).encode("utf-8"))
+     writer.write(b"\n")
+
+
+ class TokenLength:
+     """Callable that counts the tokens in a text with the given tokenizer."""
+
+     def __init__(self, tokenizer):
+         self.tokenizer = AutoTokenizer.from_pretrained(
+             tokenizer, max_length=4096, truncation=False, use_fast=False
+         )
+
+     def __call__(self, text: str):
+         return len(self.tokenizer.encode(text, max_length=4096, truncation=False))
+
+
+ class Counter:
+     """Callable that returns 1, 2, 3, ... on successive calls."""
+
+     def __init__(self, start=0):
+         self.count = start
+
+     def __call__(self):
+         self.count += 1
+         return self.count
+
+
+ class Buffer:
+     def __init__(
+         self,
+         id: int,
+         emit_lines: bool,
+         max_length: int,
+         en_prefix="",
+     ):
+         self.id = id
+         self.emit_lines = emit_lines
+         self.max_length = max_length
+         self.en_prefix = en_prefix
+         self.counter = Counter()
+         self.nl_l = None
+         self.en_l = None
+         self.nl_buf = None
+         self.en_buf = None
+         self.cur_max_length = None
+         self.reset()
+
+     def set_cur_max_length(self):
+         """Draw a fresh target length from a beta(20, 8) distribution, which has
+         mean 20/28 ≈ 0.71, so a buffer targets roughly 71% of max_length on average.
+
+         You can check the distribution with the following code:
+         %matplotlib notebook
+         import numpy as np
+         import matplotlib.pyplot as plt
+
+         plt.rcParams['figure.figsize'] = [9.5,6]
+         fig, ax = plt.subplots(1, 1)
+
+         r = np.random.beta(20,8,102000)
+         ax.hist(r, density=True, histtype='stepfilled', alpha=0.2, bins=200)
+         ax.legend(loc='best', frameon=False)
+         plt.show()
+         """
+         self.cur_max_length = int(self.max_length * np.random.beta(20, 8))
+
+     def reset(self):
+         self.nl_l = None
+         self.en_l = None
+         self.nl_buf = None
+         self.en_buf = None
+         self.set_cur_max_length()
+
+     def add_ok(self, nl_str, en_str, separator="\n"):
+         """If the combined text still fits within cur_max_length tokens, add it and
+         return True, else leave the buffer unchanged and return False."""
+         nl_new = self.nl_buf + f"{separator}{nl_str}" if self.nl_buf else nl_str
+         en_new = self.en_buf + f"{separator}{en_str}" if self.en_buf else en_str
+         nl_new_l = token_length(nl_new)
+         en_new_l = token_length(en_new)
+         # Check if we can add it or if the result would be too long
+         if (
+             nl_new_l > self.cur_max_length
+             or token_length(self.en_prefix + en_new) > self.cur_max_length
+         ):
+             return False
+         else:
+             self.nl_buf = nl_new
+             self.en_buf = en_new
+             self.nl_l = nl_new_l
+             self.en_l = en_new_l
+             return True
+
+     def emit(self, row, separator):
+         nl_str = row.translation["nl"]
+         en_str = row.translation["en"]
+         nl_id = row.meta["sentenceIds"]["nl"]
+         en_id = row.meta["sentenceIds"]["en"]
+
+         # If one of the sentences ends on a "." but the other doesn't, add a dot to the other
+         if nl_str.endswith(".") and not en_str.endswith("."):
+             en_str += "."
+         elif en_str.endswith(".") and not nl_str.endswith("."):
+             nl_str += "."
+         # Strip any leading "- " (dialogue dashes) from the sentences
+         nl_str = nl_str.lstrip("- ")
+         en_str = en_str.lstrip("- ")
+
+         nl_len = token_length(nl_str)
+         en_len = token_length(en_str)
+         if self.emit_lines and nl_len <= COLLATE_LENGTH and en_len <= COLLATE_LENGTH:
+             emit(
+                 line_id=f"{row.tconst}-nl{nl_id}-en{en_id}-l-",
+                 nl_str=nl_str,
+                 en_str=en_str,
+                 nl_l=nl_len,
+                 en_l=en_len,
+             )
+         if self.add_ok(nl_str.strip(), en_str.strip(), separator):
+             return
+
+         # If add_ok returns False, we've hit the maximum length boundary, so emit
+         # the current buffer, if it is not empty
+         if self.nl_buf:
+             emit(
+                 line_id=f"{row.tconst}-b{self.id}-{self.counter()}",
+                 nl_str=self.nl_buf,
+                 en_str=self.en_buf,
+                 nl_l=self.nl_l,
+                 en_l=self.en_l,
+             )
+         # After emitting the buffer, we reset it (drawing a new target length)
+         self.reset()
+
+         # Add the first line to this new buffer
+         result = self.add_ok(nl_str.strip(), en_str.strip())
+         if not result:
+             self.reset()
+
+
+ if __name__ == "__main__":
+     # token_length and writer are module-level globals used by emit() and Buffer
+     token_length = TokenLength(tokenizer="yhavinga/ul2-base-dutch")
+     line_counter = Counter()
+
+     # Two buffers: one targeting at most 222 (0.6 * 370) tokens, one at most 370;
+     # only the first one also emits the individual subtitle lines
+     buffers = [
+         Buffer(
+             id=index, emit_lines=(index == 0), max_length=buf_max_length, en_prefix=""
+         )
+         for index, buf_max_length in enumerate([0.6 * 370, 370])
+     ]
+
+     df = pd.read_json("episode_opensubtitles.json.gz", lines=True)
+
+     # Note: any text still held in the buffers after the last row is not emitted
+     with gzip.open("outfile", mode="wb") as writer:
+         for row in df.itertuples():
+             for buffer in buffers:
+                 buffer.emit(row, separator="\n")
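+
+     # A quick sanity check (a sketch): re-read the gzipped JSONL output and
+     # verify that no emitted example exceeds COLLATE_LENGTH tokens on either side
+     with gzip.open("outfile", mode="rb") as reader:
+         for line in reader:
+             example = json.loads(line)
+             assert example["nl_len"] <= COLLATE_LENGTH
+             assert example["en_len"] <= COLLATE_LENGTH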
src/create_opensub_imdb_joined.py ADDED
@@ -0,0 +1,126 @@
+ import duckdb
+ import pandas as pd
+ import tabulate
+ from datasets import load_dataset
+
+ cursor = duckdb.connect()
+ cursor.execute("PRAGMA threads=4")
+
+ NROWS = 100000000
+ NA_VALUES = "\\N"
+
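+ # The IMDB *.tsv.gz files read below are not part of this repository. A sketch
+ # for fetching them, assuming the standard datasets.imdbws.com download location:
+ import os
+ import urllib.request
+
+ for name in ["title.basics.tsv.gz", "title.episode.tsv.gz", "title.ratings.tsv.gz"]:
+     if not os.path.exists(name):
+         urllib.request.urlretrieve(f"https://datasets.imdbws.com/{name}", name)
+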
+ dataset = load_dataset("open_subtitles", lang1="en", lang2="nl", split="train")
+ # DuckDB can query the in-memory Arrow table (and the DataFrames below) by
+ # their Python variable names
+ open_subtitles = dataset.data.table
+ print(
+     tabulate.tabulate(
+         cursor.execute("SELECT * FROM open_subtitles LIMIT 5").fetchdf(),
+         headers="keys",
+         tablefmt="psql",
+     )
+ )
+
+ # title_akas = pd.read_csv('title.akas.tsv.gz', sep='\t', na_values=NA_VALUES, nrows=NROWS)
+ # title_df = cursor.execute("SELECT * from title_akas limit 5").fetch_df()
+ # print(tabulate.tabulate(title_df, headers="keys", tablefmt="psql"))
+
+ title_basics = pd.read_csv(
+     "title.basics.tsv.gz", sep="\t", na_values=NA_VALUES, nrows=NROWS
+ )
+ basics_df = cursor.execute("SELECT * from title_basics limit 5").fetch_df()
+ print(tabulate.tabulate(basics_df, headers="keys", tablefmt="psql"))
+
+ title_episodes = pd.read_csv(
+     "title.episode.tsv.gz", sep="\t", na_values=NA_VALUES, nrows=NROWS
+ )
+ episodes_df = cursor.execute("SELECT * from title_episodes limit 5").fetch_df()
+ print(tabulate.tabulate(episodes_df, headers="keys", tablefmt="psql"))
+
+ title_ratings = pd.read_csv(
+     "title.ratings.tsv.gz", sep="\t", na_values=NA_VALUES, nrows=NROWS
+ )
+ ratings_df = cursor.execute("SELECT * from title_ratings limit 5").fetch_df()
+ print(tabulate.tabulate(ratings_df, headers="keys", tablefmt="psql"))
+
+ # # FIGURE OUT HOW WE CAN JOIN THE SUBTITLE DATASET WITH THE IMDB DATASET
+ # count_join_subtitle_title_akas = cursor.execute(
+ #     """
+ #     SELECT COUNT(*) FROM open_subtitles JOIN title_akas ON 'tt' || open_subtitles.meta.imdbId = title_akas.titleId
+ #     """
+ # ).fetchall()
+ # print(f"Count join subtitle title akas: {count_join_subtitle_title_akas}")
+ #
+ # count_join_subtitle_title_basics = cursor.execute(
+ #     """
+ #     SELECT COUNT(*) FROM open_subtitles JOIN title_basics ON 'tt' || open_subtitles.meta.imdbId = title_basics.tconst
+ #     """
+ # ).fetchdf()
+ # print(f"Count join subtitle title basics: {count_join_subtitle_title_basics}")
+ #
+ # count_join_subtitle_title_episodes = cursor.execute(
+ #     """
+ #     SELECT COUNT(*) FROM open_subtitles JOIN title_episodes ON 'tt' || open_subtitles.meta.imdbId = title_episodes.tconst
+ #     """
+ # ).fetchdf()
+ # print(f"Count join subtitle title episodes: {count_join_subtitle_title_episodes}")
+ #
+ # count_join_subtitle_title_episodes_parent = cursor.execute(
+ #     """
+ #     SELECT COUNT(*) FROM open_subtitles JOIN title_episodes ON 'tt' || open_subtitles.meta.imdbId = title_episodes.parentTconst
+ #     """
+ # ).fetchdf()
+ # print(f"Count join subtitle title episodes parent: {count_join_subtitle_title_episodes_parent}")
+ #
+ # count_join_subtitle_title_ratings = cursor.execute(
+ #     """
+ #     SELECT COUNT(*) FROM open_subtitles JOIN title_ratings ON 'tt' || open_subtitles.meta.imdbId = title_ratings.tconst
+ #     """
+ # ).fetchdf()
+ # print(f"Count join subtitle title ratings: {count_join_subtitle_title_ratings}")
+
+
+ # Join each episode with its parent show (title_basics), the episode's rating
+ # (title_ratings) and the subtitles, keeping only non-adult episodes rated
+ # above 8.0 with more than 1000 votes
+ episode_detail = cursor.execute(
+     """
+     SELECT
+         open_subtitles.id,
+         open_subtitles.translation,
+         open_subtitles.meta,
+         title_basics.tconst,
+         title_basics.primaryTitle,
+         title_basics.startYear,
+         title_basics.endYear,
+         title_basics.genres,
+         title_basics.runtimeMinutes,
+         title_basics.titleType,
+         title_basics.isAdult,
+         title_ratings.tconst AS rating_tconst,
+         title_ratings.averageRating,
+         title_ratings.numVotes,
+         title_episodes.tconst AS episode_tconst,
+         title_episodes.parentTconst,
+         title_episodes.seasonNumber,
+         title_episodes.episodeNumber
+     FROM
+         title_episodes
+     INNER JOIN
+         title_basics
+     ON
+         title_episodes.parentTconst = title_basics.tconst
+     INNER JOIN
+         title_ratings
+     ON
+         title_episodes.tconst = title_ratings.tconst
+     INNER JOIN
+         open_subtitles
+     ON
+         title_episodes.tconst = 'tt' || open_subtitles.meta.imdbId
+     WHERE isAdult = 0
+         AND averageRating > 8.0
+         AND numVotes > 1000
+     ORDER BY startYear, episode_tconst, seasonNumber, episodeNumber, meta.sentenceIds.en
+     """
+ ).fetch_df()
+ print(tabulate.tabulate(episode_detail[:5], headers="keys", tablefmt="psql"))
+
+ # Write episode_detail to a JSON-lines file; it is gzipped afterwards to
+ # episode_opensubtitles.json.gz, which src/create_dataset.py reads
+ episode_detail.to_json("episode_opensubtitles.json", orient="records", lines=True)
src/episode_opensubtitles.json.gz ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:612bc5d454dbbe15fd03200be5a174cd22a02fae628b7f7391db874a1786b186
+ size 139127032
train.jsonl.gz ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:aceec2f2def4ce2ffedea50b1899116adbf16c49cf565c049908a4b523bd5a4f
+ size 155466680