system (HF staff) committed on
Commit ac06b6e
0 Parent(s):

Update files from the datasets library (from 1.2.0)


Release notes: https://github.com/huggingface/datasets/releases/tag/1.2.0

.gitattributes ADDED
@@ -0,0 +1,27 @@
+ *.7z filter=lfs diff=lfs merge=lfs -text
+ *.arrow filter=lfs diff=lfs merge=lfs -text
+ *.bin filter=lfs diff=lfs merge=lfs -text
+ *.bin.* filter=lfs diff=lfs merge=lfs -text
+ *.bz2 filter=lfs diff=lfs merge=lfs -text
+ *.ftz filter=lfs diff=lfs merge=lfs -text
+ *.gz filter=lfs diff=lfs merge=lfs -text
+ *.h5 filter=lfs diff=lfs merge=lfs -text
+ *.joblib filter=lfs diff=lfs merge=lfs -text
+ *.lfs.* filter=lfs diff=lfs merge=lfs -text
+ *.model filter=lfs diff=lfs merge=lfs -text
+ *.msgpack filter=lfs diff=lfs merge=lfs -text
+ *.onnx filter=lfs diff=lfs merge=lfs -text
+ *.ot filter=lfs diff=lfs merge=lfs -text
+ *.parquet filter=lfs diff=lfs merge=lfs -text
+ *.pb filter=lfs diff=lfs merge=lfs -text
+ *.pt filter=lfs diff=lfs merge=lfs -text
+ *.pth filter=lfs diff=lfs merge=lfs -text
+ *.rar filter=lfs diff=lfs merge=lfs -text
+ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+ *.tar.* filter=lfs diff=lfs merge=lfs -text
+ *.tflite filter=lfs diff=lfs merge=lfs -text
+ *.tgz filter=lfs diff=lfs merge=lfs -text
+ *.xz filter=lfs diff=lfs merge=lfs -text
+ *.zip filter=lfs diff=lfs merge=lfs -text
+ *.zstandard filter=lfs diff=lfs merge=lfs -text
+ *tfevents* filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,244 @@
+ ---
+ annotations_creators:
+ - no-annotation
+ language_creators:
+ - expert-generated
+ languages:
+ - es
+ licenses:
+   DGT:
+   - mit
+   DOGC:
+   - mit
+   ECB:
+   - mit
+   EMEA:
+   - mit
+   EUBookShop:
+   - mit
+   Europarl:
+   - mit
+   GlobalVoices:
+   - mit
+   JRC:
+   - mit
+   NewsCommentary11:
+   - mit
+   OpenSubtitles2018:
+   - mit
+   ParaCrawl:
+   - mit
+   TED:
+   - mit
+   UN:
+   - mit
+   all_wikis:
+   - mit
+   combined:
+   - mit
+   multiUN:
+   - mit
+ multilinguality:
+ - monolingual
+ size_categories:
+   DGT:
+   - n>1M
+   DOGC:
+   - n>1M
+   ECB:
+   - n>1M
+   EMEA:
+   - n>1M
+   EUBookShop:
+   - n>1M
+   Europarl:
+   - n>1M
+   GlobalVoices:
+   - 100K<n<1M
+   JRC:
+   - n>1M
+   NewsCommentary11:
+   - 100K<n<1M
+   OpenSubtitles2018:
+   - n>1M
+   ParaCrawl:
+   - n>1M
+   TED:
+   - 100K<n<1M
+   UN:
+   - 10K<n<100K
+   all_wikis:
+   - n>1M
+   combined:
+   - n>1M
+   multiUN:
+   - n>1M
+ source_datasets:
+ - original
+ task_categories:
+ - other
+ task_ids:
+ - other-other-pretraining-language-models
+ ---
+
+ # Dataset Card for the Large Spanish Corpus
+
+ ## Table of Contents
+ - [Dataset Card for the Large Spanish Corpus](#dataset-card-for-the-large-spanish-corpus)
+   - [Table of Contents](#table-of-contents)
+   - [Dataset Description](#dataset-description)
+     - [Dataset Summary](#dataset-summary)
+     - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
+     - [Languages](#languages)
+   - [Dataset Structure](#dataset-structure)
+     - [Data Instances](#data-instances)
+     - [Data Fields](#data-fields)
+     - [Data Splits](#data-splits)
+   - [Dataset Creation](#dataset-creation)
+     - [Curation Rationale](#curation-rationale)
+     - [Source Data](#source-data)
+       - [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
+       - [Who are the source language producers?](#who-are-the-source-language-producers)
+     - [Annotations](#annotations)
+       - [Annotation process](#annotation-process)
+       - [Who are the annotators?](#who-are-the-annotators)
+     - [Personal and Sensitive Information](#personal-and-sensitive-information)
+   - [Considerations for Using the Data](#considerations-for-using-the-data)
+     - [Social Impact of Dataset](#social-impact-of-dataset)
+     - [Discussion of Biases](#discussion-of-biases)
+     - [Other Known Limitations](#other-known-limitations)
+   - [Additional Information](#additional-information)
+     - [Dataset Curators](#dataset-curators)
+     - [Licensing Information](#licensing-information)
+     - [Citation Information](#citation-information)
+
+ ## Dataset Description
+
+ - **Homepage:** [https://github.com/josecannete/spanish-corpora](https://github.com/josecannete/spanish-corpora)
+ - **Repository:** [https://github.com/josecannete/spanish-corpora](https://github.com/josecannete/spanish-corpora)
+ - **Paper:**
+ - **Leaderboard:**
+ - **Point of Contact:** [José Cañete](mailto:jose.canete@ug.uchile.cl) (corpus creator) or [Lewis Tunstall](mailto:lewis.c.tunstall@gmail.com) (corpus submitter)
+
+ ### Dataset Summary
+
+ The Large Spanish Corpus is a compilation of 15 unlabelled Spanish corpora spanning sources from Wikipedia to European parliament notes. Each config contains the data corresponding to a different corpus. For example, `all_wikis` only includes examples from the Spanish wikis (Wikipedia, Wikinews, Wikiquotes, etc.):
+
+ ```python
+ from datasets import load_dataset
+ all_wikis = load_dataset('large_spanish_corpus', name='all_wikis')
+ ```
+
+ By default, the config is set to "combined", which loads all the corpora.
+
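+ As noted in the loading script's description, the `split` argument also accepts slice syntax, so you can cap the number of examples returned (a minimal sketch; the full ~4.1 GB source archive is downloaded either way):
+
+ ```python
+ from datasets import load_dataset
+
+ # Keep only the first 1,000 examples of the combined corpora via split slicing.
+ sample = load_dataset('large_spanish_corpus', split='train[:1000]')
+ print(sample)
+ ```
+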
+ ### Supported Tasks and Leaderboards
+
+ [More Information Needed]
+
+ ### Languages
+
+ Spanish
+
+ ## Dataset Structure
+
+ ### Data Instances
+
+ [More Information Needed]
+
+ ### Data Fields
+
+ - `text`: a `string` feature containing one line of raw text.
+
+ ### Data Splits
+
+ Each config has a single `train` split. The following description of the constituent corpora is taken from the corpus's source repository:
+
+ * Spanish Wikis: which include Wikipedia, Wikinews, Wikiquotes and more. These were first processed with wikiextractor (https://github.com/josecannete/wikiextractorforBERT) using the wikis dump of 20/04/2019.
+
+ * ParaCrawl: Spanish portion of ParaCrawl (http://opus.nlpl.eu/ParaCrawl.php)
+
+ * EUBookshop: Spanish portion of EUBookshop (http://opus.nlpl.eu/EUbookshop.php)
+
+ * MultiUN: Spanish portion of MultiUN (http://opus.nlpl.eu/MultiUN.php)
+
+ * OpenSubtitles: Spanish portion of OpenSubtitles2018 (http://opus.nlpl.eu/OpenSubtitles-v2018.php)
+
+ * DGT: Spanish portion of DGT (http://opus.nlpl.eu/DGT.php)
+
+ * DOGC: Spanish portion of DOGC (http://opus.nlpl.eu/DOGC.php)
+
+ * ECB: Spanish portion of ECB (http://opus.nlpl.eu/ECB.php)
+
+ * EMEA: Spanish portion of EMEA (http://opus.nlpl.eu/EMEA.php)
+
+ * Europarl: Spanish portion of Europarl (http://opus.nlpl.eu/Europarl.php)
+
+ * GlobalVoices: Spanish portion of GlobalVoices (http://opus.nlpl.eu/GlobalVoices.php)
+
+ * JRC: Spanish portion of JRC (http://opus.nlpl.eu/JRC-Acquis.php)
+
+ * News-Commentary11: Spanish portion of NCv11 (http://opus.nlpl.eu/News-Commentary-v11.php)
+
+ * TED: Spanish portion of TED (http://opus.nlpl.eu/TED2013.php)
+
+ * UN: Spanish portion of UN (http://opus.nlpl.eu/UN.php)
+
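+ Each corpus above corresponds to a config in the loading script (see `_CORPORA` in `large_spanish_corpus.py`; for instance, the OpenSubtitles entry is exposed as the `OpenSubtitles2018` config). A minimal sketch of loading a single corpus:
+
+ ```python
+ from datasets import load_dataset
+
+ # Load only the OpenSubtitles2018 portion; each example is one line of text.
+ subs = load_dataset('large_spanish_corpus', name='OpenSubtitles2018', split='train')
+ print(subs[0]['text'])
+ ```
+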
+ ## Dataset Creation
+
+ ### Curation Rationale
+
+ [More Information Needed]
+
+ ### Source Data
+
+ #### Initial Data Collection and Normalization
+
+ [More Information Needed]
+
+ #### Who are the source language producers?
+
+ [More Information Needed]
+
+ ### Annotations
+
+ #### Annotation process
+
+ [More Information Needed]
+
+ #### Who are the annotators?
+
+ [More Information Needed]
+
+ ### Personal and Sensitive Information
+
+ [More Information Needed]
+
+ ## Considerations for Using the Data
+
+ ### Social Impact of Dataset
+
+ [More Information Needed]
+
+ ### Discussion of Biases
+
+ [More Information Needed]
+
+ ### Other Known Limitations
+
+ [More Information Needed]
+
+ ## Additional Information
+
+ ### Dataset Curators
+
+ [More Information Needed]
+
+ ### Licensing Information
+
+ The corpus is released under the MIT License: the loading script sets `_LICENSE = "MIT"`, and every constituent corpus is tagged `mit` in the card metadata above.
+
+ ### Citation Information
+
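+ From the loading script's `_CITATION` field:
+
+ ```
+ @dataset{jose_canete_2019_3247731,
+ author = {José Cañete},
+ title = {Compilation of Large Spanish Unannotated Corpora},
+ month = may,
+ year = 2019,
+ publisher = {Zenodo},
+ doi = {10.5281/zenodo.3247731},
+ url = {https://doi.org/10.5281/zenodo.3247731}
+ }
+ ```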
dataset_infos.json ADDED
@@ -0,0 +1 @@
+ {"JRC": {"description": "The Large Spanish Corpus is a compilation of 15 unlabelled Spanish corpora spanning Wikipedia to European parliament notes. Each config contains the data corresponding to a different corpus. For example, \"all_wiki\" only includes examples from Spanish Wikipedia. By default, the config is set to \"combined\" which loads all the corpora; with this setting you can also specify the number of samples to return per corpus by configuring the \"split\" argument.\n", "citation": "@dataset{jose_canete_2019_3247731,\n author = {Jos\u00e9 Ca\u00f1ete},\n title = {Compilation of Large Spanish Unannotated Corpora},\n month = may,\n year = 2019,\n publisher = {Zenodo},\n doi = {10.5281/zenodo.3247731},\n url = {https://doi.org/10.5281/zenodo.3247731}\n}\n", "homepage": "https://github.com/josecannete/spanish-corpora", "license": "MIT", "features": {"text": {"dtype": "string", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "builder_name": "large_spanish_corpus", "config_name": "JRC", "version": {"version_str": "1.1.0", "description": "", "major": 1, "minor": 1, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 380895504, "num_examples": 3410620, "dataset_name": "large_spanish_corpus"}}, "download_checksums": {"https://zenodo.org/record/3247731/files/raw.tar.bz2": {"num_bytes": 4099166669, "checksum": "4f934fbb1b9ecd1cdd3145f5817415c4722a0bc05b0874e47e62303c367b3a95"}}, "download_size": 4099166669, "post_processing_size": null, "dataset_size": 380895504, "size_in_bytes": 4480062173}, "EMEA": {"description": "The Large Spanish Corpus is a compilation of 15 unlabelled Spanish corpora spanning Wikipedia to European parliament notes. Each config contains the data corresponding to a different corpus. For example, \"all_wiki\" only includes examples from Spanish Wikipedia. By default, the config is set to \"combined\" which loads all the corpora; with this setting you can also specify the number of samples to return per corpus by configuring the \"split\" argument.\n", "citation": "@dataset{jose_canete_2019_3247731,\n author = {Jos\u00e9 Ca\u00f1ete},\n title = {Compilation of Large Spanish Unannotated Corpora},\n month = may,\n year = 2019,\n publisher = {Zenodo},\n doi = {10.5281/zenodo.3247731},\n url = {https://doi.org/10.5281/zenodo.3247731}\n}\n", "homepage": "https://github.com/josecannete/spanish-corpora", "license": "MIT", "features": {"text": {"dtype": "string", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "builder_name": "large_spanish_corpus", "config_name": "EMEA", "version": {"version_str": "1.1.0", "description": "", "major": 1, "minor": 1, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 100259598, "num_examples": 1221233, "dataset_name": "large_spanish_corpus"}}, "download_checksums": {"https://zenodo.org/record/3247731/files/raw.tar.bz2": {"num_bytes": 4099166669, "checksum": "4f934fbb1b9ecd1cdd3145f5817415c4722a0bc05b0874e47e62303c367b3a95"}}, "download_size": 4099166669, "post_processing_size": null, "dataset_size": 100259598, "size_in_bytes": 4199426267}, "GlobalVoices": {"description": "The Large Spanish Corpus is a compilation of 15 unlabelled Spanish corpora spanning Wikipedia to European parliament notes. Each config contains the data corresponding to a different corpus. For example, \"all_wiki\" only includes examples from Spanish Wikipedia. By default, the config is set to \"combined\" which loads all the corpora; with this setting you can also specify the number of samples to return per corpus by configuring the \"split\" argument.\n", "citation": "@dataset{jose_canete_2019_3247731,\n author = {Jos\u00e9 Ca\u00f1ete},\n title = {Compilation of Large Spanish Unannotated Corpora},\n month = may,\n year = 2019,\n publisher = {Zenodo},\n doi = {10.5281/zenodo.3247731},\n url = {https://doi.org/10.5281/zenodo.3247731}\n}\n", "homepage": "https://github.com/josecannete/spanish-corpora", "license": "MIT", "features": {"text": {"dtype": "string", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "builder_name": "large_spanish_corpus", "config_name": "GlobalVoices", "version": {"version_str": "1.1.0", "description": "", "major": 1, "minor": 1, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 114435784, "num_examples": 897075, "dataset_name": "large_spanish_corpus"}}, "download_checksums": {"https://zenodo.org/record/3247731/files/raw.tar.bz2": {"num_bytes": 4099166669, "checksum": "4f934fbb1b9ecd1cdd3145f5817415c4722a0bc05b0874e47e62303c367b3a95"}}, "download_size": 4099166669, "post_processing_size": null, "dataset_size": 114435784, "size_in_bytes": 4213602453}, "ECB": {"description": "The Large Spanish Corpus is a compilation of 15 unlabelled Spanish corpora spanning Wikipedia to European parliament notes. Each config contains the data corresponding to a different corpus. For example, \"all_wiki\" only includes examples from Spanish Wikipedia. By default, the config is set to \"combined\" which loads all the corpora; with this setting you can also specify the number of samples to return per corpus by configuring the \"split\" argument.\n", "citation": "@dataset{jose_canete_2019_3247731,\n author = {Jos\u00e9 Ca\u00f1ete},\n title = {Compilation of Large Spanish Unannotated Corpora},\n month = may,\n year = 2019,\n publisher = {Zenodo},\n doi = {10.5281/zenodo.3247731},\n url = {https://doi.org/10.5281/zenodo.3247731}\n}\n", "homepage": "https://github.com/josecannete/spanish-corpora", "license": "MIT", "features": {"text": {"dtype": "string", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "builder_name": "large_spanish_corpus", "config_name": "ECB", "version": {"version_str": "1.1.0", "description": "", "major": 1, "minor": 1, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 336285757, "num_examples": 1875738, "dataset_name": "large_spanish_corpus"}}, "download_checksums": {"https://zenodo.org/record/3247731/files/raw.tar.bz2": {"num_bytes": 4099166669, "checksum": "4f934fbb1b9ecd1cdd3145f5817415c4722a0bc05b0874e47e62303c367b3a95"}}, "download_size": 4099166669, "post_processing_size": null, "dataset_size": 336285757, "size_in_bytes": 4435452426}, "DOGC": {"description": "The Large Spanish Corpus is a compilation of 15 unlabelled Spanish corpora spanning Wikipedia to European parliament notes. Each config contains the data corresponding to a different corpus. For example, \"all_wiki\" only includes examples from Spanish Wikipedia. By default, the config is set to \"combined\" which loads all the corpora; with this setting you can also specify the number of samples to return per corpus by configuring the \"split\" argument.\n", "citation": "@dataset{jose_canete_2019_3247731,\n author = {Jos\u00e9 Ca\u00f1ete},\n title = {Compilation of Large Spanish Unannotated Corpora},\n month = may,\n year = 2019,\n publisher = {Zenodo},\n doi = {10.5281/zenodo.3247731},\n url = {https://doi.org/10.5281/zenodo.3247731}\n}\n", "homepage": "https://github.com/josecannete/spanish-corpora", "license": "MIT", "features": {"text": {"dtype": "string", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "builder_name": "large_spanish_corpus", "config_name": "DOGC", "version": {"version_str": "1.1.0", "description": "", "major": 1, "minor": 1, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 898279656, "num_examples": 10917053, "dataset_name": "large_spanish_corpus"}}, "download_checksums": {"https://zenodo.org/record/3247731/files/raw.tar.bz2": {"num_bytes": 4099166669, "checksum": "4f934fbb1b9ecd1cdd3145f5817415c4722a0bc05b0874e47e62303c367b3a95"}}, "download_size": 4099166669, "post_processing_size": null, "dataset_size": 898279656, "size_in_bytes": 4997446325}, "all_wikis": {"description": "The Large Spanish Corpus is a compilation of 15 unlabelled Spanish corpora spanning Wikipedia to European parliament notes. Each config contains the data corresponding to a different corpus. For example, \"all_wiki\" only includes examples from Spanish Wikipedia. By default, the config is set to \"combined\" which loads all the corpora; with this setting you can also specify the number of samples to return per corpus by configuring the \"split\" argument.\n", "citation": "@dataset{jose_canete_2019_3247731,\n author = {Jos\u00e9 Ca\u00f1ete},\n title = {Compilation of Large Spanish Unannotated Corpora},\n month = may,\n year = 2019,\n publisher = {Zenodo},\n doi = {10.5281/zenodo.3247731},\n url = {https://doi.org/10.5281/zenodo.3247731}\n}\n", "homepage": "https://github.com/josecannete/spanish-corpora", "license": "MIT", "features": {"text": {"dtype": "string", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "builder_name": "large_spanish_corpus", "config_name": "all_wikis", "version": {"version_str": "1.1.0", "description": "", "major": 1, "minor": 1, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 3782280549, "num_examples": 28109484, "dataset_name": "large_spanish_corpus"}}, "download_checksums": {"https://zenodo.org/record/3247731/files/raw.tar.bz2": {"num_bytes": 4099166669, "checksum": "4f934fbb1b9ecd1cdd3145f5817415c4722a0bc05b0874e47e62303c367b3a95"}}, "download_size": 4099166669, "post_processing_size": null, "dataset_size": 3782280549, "size_in_bytes": 7881447218}, "TED": {"description": "The Large Spanish Corpus is a compilation of 15 unlabelled Spanish corpora spanning Wikipedia to European parliament notes. Each config contains the data corresponding to a different corpus. For example, \"all_wiki\" only includes examples from Spanish Wikipedia. By default, the config is set to \"combined\" which loads all the corpora; with this setting you can also specify the number of samples to return per corpus by configuring the \"split\" argument.\n", "citation": "@dataset{jose_canete_2019_3247731,\n author = {Jos\u00e9 Ca\u00f1ete},\n title = {Compilation of Large Spanish Unannotated Corpora},\n month = may,\n year = 2019,\n publisher = {Zenodo},\n doi = {10.5281/zenodo.3247731},\n url = {https://doi.org/10.5281/zenodo.3247731}\n}\n", "homepage": "https://github.com/josecannete/spanish-corpora", "license": "MIT", "features": {"text": {"dtype": "string", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "builder_name": "large_spanish_corpus", "config_name": "TED", "version": {"version_str": "1.1.0", "description": "", "major": 1, "minor": 1, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 15858148, "num_examples": 157910, "dataset_name": "large_spanish_corpus"}}, "download_checksums": {"https://zenodo.org/record/3247731/files/raw.tar.bz2": {"num_bytes": 4099166669, "checksum": "4f934fbb1b9ecd1cdd3145f5817415c4722a0bc05b0874e47e62303c367b3a95"}}, "download_size": 4099166669, "post_processing_size": null, "dataset_size": 15858148, "size_in_bytes": 4115024817}, "multiUN": {"description": "The Large Spanish Corpus is a compilation of 15 unlabelled Spanish corpora spanning Wikipedia to European parliament notes. Each config contains the data corresponding to a different corpus. For example, \"all_wiki\" only includes examples from Spanish Wikipedia. By default, the config is set to \"combined\" which loads all the corpora; with this setting you can also specify the number of samples to return per corpus by configuring the \"split\" argument.\n", "citation": "@dataset{jose_canete_2019_3247731,\n author = {Jos\u00e9 Ca\u00f1ete},\n title = {Compilation of Large Spanish Unannotated Corpora},\n month = may,\n year = 2019,\n publisher = {Zenodo},\n doi = {10.5281/zenodo.3247731},\n url = {https://doi.org/10.5281/zenodo.3247731}\n}\n", "homepage": "https://github.com/josecannete/spanish-corpora", "license": "MIT", "features": {"text": {"dtype": "string", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "builder_name": "large_spanish_corpus", "config_name": "multiUN", "version": {"version_str": "1.1.0", "description": "", "major": 1, "minor": 1, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 2327269369, "num_examples": 13127490, "dataset_name": "large_spanish_corpus"}}, "download_checksums": {"https://zenodo.org/record/3247731/files/raw.tar.bz2": {"num_bytes": 4099166669, "checksum": "4f934fbb1b9ecd1cdd3145f5817415c4722a0bc05b0874e47e62303c367b3a95"}}, "download_size": 4099166669, "post_processing_size": null, "dataset_size": 2327269369, "size_in_bytes": 6426436038}, "Europarl": {"description": "The Large Spanish Corpus is a compilation of 15 unlabelled Spanish corpora spanning Wikipedia to European parliament notes. Each config contains the data corresponding to a different corpus. For example, \"all_wiki\" only includes examples from Spanish Wikipedia. By default, the config is set to \"combined\" which loads all the corpora; with this setting you can also specify the number of samples to return per corpus by configuring the \"split\" argument.\n", "citation": "@dataset{jose_canete_2019_3247731,\n author = {Jos\u00e9 Ca\u00f1ete},\n title = {Compilation of Large Spanish Unannotated Corpora},\n month = may,\n year = 2019,\n publisher = {Zenodo},\n doi = {10.5281/zenodo.3247731},\n url = {https://doi.org/10.5281/zenodo.3247731}\n}\n", "homepage": "https://github.com/josecannete/spanish-corpora", "license": "MIT", "features": {"text": {"dtype": "string", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "builder_name": "large_spanish_corpus", "config_name": "Europarl", "version": {"version_str": "1.1.0", "description": "", "major": 1, "minor": 1, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 359897865, "num_examples": 2174141, "dataset_name": "large_spanish_corpus"}}, "download_checksums": {"https://zenodo.org/record/3247731/files/raw.tar.bz2": {"num_bytes": 4099166669, "checksum": "4f934fbb1b9ecd1cdd3145f5817415c4722a0bc05b0874e47e62303c367b3a95"}}, "download_size": 4099166669, "post_processing_size": null, "dataset_size": 359897865, "size_in_bytes": 4459064534}, "NewsCommentary11": {"description": "The Large Spanish Corpus is a compilation of 15 unlabelled Spanish corpora spanning Wikipedia to European parliament notes. Each config contains the data corresponding to a different corpus. For example, \"all_wiki\" only includes examples from Spanish Wikipedia. By default, the config is set to \"combined\" which loads all the corpora; with this setting you can also specify the number of samples to return per corpus by configuring the \"split\" argument.\n", "citation": "@dataset{jose_canete_2019_3247731,\n author = {Jos\u00e9 Ca\u00f1ete},\n title = {Compilation of Large Spanish Unannotated Corpora},\n month = may,\n year = 2019,\n publisher = {Zenodo},\n doi = {10.5281/zenodo.3247731},\n url = {https://doi.org/10.5281/zenodo.3247731}\n}\n", "homepage": "https://github.com/josecannete/spanish-corpora", "license": "MIT", "features": {"text": {"dtype": "string", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "builder_name": "large_spanish_corpus", "config_name": "NewsCommentary11", "version": {"version_str": "1.1.0", "description": "", "major": 1, "minor": 1, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 48350573, "num_examples": 288771, "dataset_name": "large_spanish_corpus"}}, "download_checksums": {"https://zenodo.org/record/3247731/files/raw.tar.bz2": {"num_bytes": 4099166669, "checksum": "4f934fbb1b9ecd1cdd3145f5817415c4722a0bc05b0874e47e62303c367b3a95"}}, "download_size": 4099166669, "post_processing_size": null, "dataset_size": 48350573, "size_in_bytes": 4147517242}, "UN": {"description": "The Large Spanish Corpus is a compilation of 15 unlabelled Spanish corpora spanning Wikipedia to European parliament notes. Each config contains the data corresponding to a different corpus. For example, \"all_wiki\" only includes examples from Spanish Wikipedia. By default, the config is set to \"combined\" which loads all the corpora; with this setting you can also specify the number of samples to return per corpus by configuring the \"split\" argument.\n", "citation": "@dataset{jose_canete_2019_3247731,\n author = {Jos\u00e9 Ca\u00f1ete},\n title = {Compilation of Large Spanish Unannotated Corpora},\n month = may,\n year = 2019,\n publisher = {Zenodo},\n doi = {10.5281/zenodo.3247731},\n url = {https://doi.org/10.5281/zenodo.3247731}\n}\n", "homepage": "https://github.com/josecannete/spanish-corpora", "license": "MIT", "features": {"text": {"dtype": "string", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "builder_name": "large_spanish_corpus", "config_name": "UN", "version": {"version_str": "1.1.0", "description": "", "major": 1, "minor": 1, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 23654590, "num_examples": 74067, "dataset_name": "large_spanish_corpus"}}, "download_checksums": {"https://zenodo.org/record/3247731/files/raw.tar.bz2": {"num_bytes": 4099166669, "checksum": "4f934fbb1b9ecd1cdd3145f5817415c4722a0bc05b0874e47e62303c367b3a95"}}, "download_size": 4099166669, "post_processing_size": null, "dataset_size": 23654590, "size_in_bytes": 4122821259}, "EUBookShop": {"description": "The Large Spanish Corpus is a compilation of 15 unlabelled Spanish corpora spanning Wikipedia to European parliament notes. Each config contains the data corresponding to a different corpus. For example, \"all_wiki\" only includes examples from Spanish Wikipedia. By default, the config is set to \"combined\" which loads all the corpora; with this setting you can also specify the number of samples to return per corpus by configuring the \"split\" argument.\n", "citation": "@dataset{jose_canete_2019_3247731,\n author = {Jos\u00e9 Ca\u00f1ete},\n title = {Compilation of Large Spanish Unannotated Corpora},\n month = may,\n year = 2019,\n publisher = {Zenodo},\n doi = {10.5281/zenodo.3247731},\n url = {https://doi.org/10.5281/zenodo.3247731}\n}\n", "homepage": "https://github.com/josecannete/spanish-corpora", "license": "MIT", "features": {"text": {"dtype": "string", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "builder_name": "large_spanish_corpus", "config_name": "EUBookShop", "version": {"version_str": "1.1.0", "description": "", "major": 1, "minor": 1, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 1326861077, "num_examples": 8214959, "dataset_name": "large_spanish_corpus"}}, "download_checksums": {"https://zenodo.org/record/3247731/files/raw.tar.bz2": {"num_bytes": 4099166669, "checksum": "4f934fbb1b9ecd1cdd3145f5817415c4722a0bc05b0874e47e62303c367b3a95"}}, "download_size": 4099166669, "post_processing_size": null, "dataset_size": 1326861077, "size_in_bytes": 5426027746}, "ParaCrawl": {"description": "The Large Spanish Corpus is a compilation of 15 unlabelled Spanish corpora spanning Wikipedia to European parliament notes. Each config contains the data corresponding to a different corpus. For example, \"all_wiki\" only includes examples from Spanish Wikipedia. By default, the config is set to \"combined\" which loads all the corpora; with this setting you can also specify the number of samples to return per corpus by configuring the \"split\" argument.\n", "citation": "@dataset{jose_canete_2019_3247731,\n author = {Jos\u00e9 Ca\u00f1ete},\n title = {Compilation of Large Spanish Unannotated Corpora},\n month = may,\n year = 2019,\n publisher = {Zenodo},\n doi = {10.5281/zenodo.3247731},\n url = {https://doi.org/10.5281/zenodo.3247731}\n}\n", "homepage": "https://github.com/josecannete/spanish-corpora", "license": "MIT", "features": {"text": {"dtype": "string", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "builder_name": "large_spanish_corpus", "config_name": "ParaCrawl", "version": {"version_str": "1.1.0", "description": "", "major": 1, "minor": 1, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 1840430234, "num_examples": 15510649, "dataset_name": "large_spanish_corpus"}}, "download_checksums": {"https://zenodo.org/record/3247731/files/raw.tar.bz2": {"num_bytes": 4099166669, "checksum": "4f934fbb1b9ecd1cdd3145f5817415c4722a0bc05b0874e47e62303c367b3a95"}}, "download_size": 4099166669, "post_processing_size": null, "dataset_size": 1840430234, "size_in_bytes": 5939596903}, "OpenSubtitles2018": {"description": "The Large Spanish Corpus is a compilation of 15 unlabelled Spanish corpora spanning Wikipedia to European parliament notes. Each config contains the data corresponding to a different corpus. For example, \"all_wiki\" only includes examples from Spanish Wikipedia. By default, the config is set to \"combined\" which loads all the corpora; with this setting you can also specify the number of samples to return per corpus by configuring the \"split\" argument.\n", "citation": "@dataset{jose_canete_2019_3247731,\n author = {Jos\u00e9 Ca\u00f1ete},\n title = {Compilation of Large Spanish Unannotated Corpora},\n month = may,\n year = 2019,\n publisher = {Zenodo},\n doi = {10.5281/zenodo.3247731},\n url = {https://doi.org/10.5281/zenodo.3247731}\n}\n", "homepage": "https://github.com/josecannete/spanish-corpora", "license": "MIT", "features": {"text": {"dtype": "string", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "builder_name": "large_spanish_corpus", "config_name": "OpenSubtitles2018", "version": {"version_str": "1.1.0", "description": "", "major": 1, "minor": 1, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 7477281776, "num_examples": 213508602, "dataset_name": "large_spanish_corpus"}}, "download_checksums": {"https://zenodo.org/record/3247731/files/raw.tar.bz2": {"num_bytes": 4099166669, "checksum": "4f934fbb1b9ecd1cdd3145f5817415c4722a0bc05b0874e47e62303c367b3a95"}}, "download_size": 4099166669, "post_processing_size": null, "dataset_size": 7477281776, "size_in_bytes": 11576448445}, "DGT": {"description": "The Large Spanish Corpus is a compilation of 15 unlabelled Spanish corpora spanning Wikipedia to European parliament notes. Each config contains the data corresponding to a different corpus. For example, \"all_wiki\" only includes examples from Spanish Wikipedia. By default, the config is set to \"combined\" which loads all the corpora; with this setting you can also specify the number of samples to return per corpus by configuring the \"split\" argument.\n", "citation": "@dataset{jose_canete_2019_3247731,\n author = {Jos\u00e9 Ca\u00f1ete},\n title = {Compilation of Large Spanish Unannotated Corpora},\n month = may,\n year = 2019,\n publisher = {Zenodo},\n doi = {10.5281/zenodo.3247731},\n url = {https://doi.org/10.5281/zenodo.3247731}\n}\n", "homepage": "https://github.com/josecannete/spanish-corpora", "license": "MIT", "features": {"text": {"dtype": "string", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "builder_name": "large_spanish_corpus", "config_name": "DGT", "version": {"version_str": "1.1.0", "description": "", "major": 1, "minor": 1, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 396217351, "num_examples": 3168368, "dataset_name": "large_spanish_corpus"}}, "download_checksums": {"https://zenodo.org/record/3247731/files/raw.tar.bz2": {"num_bytes": 4099166669, "checksum": "4f934fbb1b9ecd1cdd3145f5817415c4722a0bc05b0874e47e62303c367b3a95"}}, "download_size": 4099166669, "post_processing_size": null, "dataset_size": 396217351, "size_in_bytes": 4495384020}, "combined": {"description": "The Large Spanish Corpus is a compilation of 15 unlabelled Spanish corpora spanning Wikipedia to European parliament notes. Each config contains the data corresponding to a different corpus. For example, \"all_wiki\" only includes examples from Spanish Wikipedia. By default, the config is set to \"combined\" which loads all the corpora; with this setting you can also specify the number of samples to return per corpus by configuring the \"split\" argument.\n", "citation": "@dataset{jose_canete_2019_3247731,\n author = {Jos\u00e9 Ca\u00f1ete},\n title = {Compilation of Large Spanish Unannotated Corpora},\n month = may,\n year = 2019,\n publisher = {Zenodo},\n doi = {10.5281/zenodo.3247731},\n url = {https://doi.org/10.5281/zenodo.3247731}\n}\n", "homepage": "https://github.com/josecannete/spanish-corpora", "license": "MIT", "features": {"text": {"dtype": "string", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "builder_name": "large_spanish_corpus", "config_name": "combined", "version": {"version_str": "1.1.0", "description": "", "major": 1, "minor": 1, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 19428257807, "num_examples": 302656160, "dataset_name": "large_spanish_corpus"}}, "download_checksums": {"https://zenodo.org/record/3247731/files/raw.tar.bz2": {"num_bytes": 4099166669, "checksum": "4f934fbb1b9ecd1cdd3145f5817415c4722a0bc05b0874e47e62303c367b3a95"}}, "download_size": 4099166669, "post_processing_size": null, "dataset_size": 19428257807, "size_in_bytes": 23527424476}}
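
The per-config statistics recorded above can be read back with a few lines of Python (a minimal sketch; it assumes dataset_infos.json is in the current directory):

```python
import json

# Print the number of training examples recorded for each config.
with open("dataset_infos.json", encoding="utf-8") as f:
    infos = json.load(f)

for name, info in infos.items():
    print(f'{name}: {info["splits"]["train"]["num_examples"]:,} examples')
```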
dummy/DGT/1.1.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:d04da708863337a4e23dd5d6d8513fd942dabb2535b0e9a5a1a6977dd3ed3c63
+ size 1210
dummy/DOGC/1.1.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:99e9f55c4b8d5a80f64aaf1ad26ca14817b3ea57423550751a349bb544ae6416
+ size 1074
dummy/ECB/1.1.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:7bd2d4bba63956338c62b2d322e927fcfdd8e79609939e8cbab88022cb4b395e
+ size 716
dummy/EMEA/1.1.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:0966dd9e9c1998d37e9a234daa8afee1409bdee18445f44d08873e069016f889
+ size 996
dummy/EUBookShop/1.1.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:7e11c56ac5c2043289a405134cf51716fd0348eb0a784041dded93f45842edaa
+ size 712
dummy/Europarl/1.1.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:796cef6a2ebc659139e6c1b4ebdefa092d0bf1dd2f43b7800f5d7b4bfc206580
+ size 991
dummy/GlobalVoices/1.1.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c76ed579a89eee3754c809a4efa9a2c14999499c1efde498f7708e527a75c76c
+ size 1376
dummy/JRC/1.1.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:507209a0b2b28f88671f3ce8ab77c8fc672418a942cb1a2bcd2c595be4248951
+ size 1122
dummy/NewsCommentary11/1.1.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:2cb6712f3a2238af4fc7e7b0a120eff0a2e338dc364eff757cd2e4de92665440
+ size 1562
dummy/OpenSubtitles2018/1.1.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:3d00ca90a08adad8840fc40ef0f6fab503db583652e95a20d01d1a8e0bef6cfc
+ size 818
dummy/ParaCrawl/1.1.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:19c4962286410008284020c3012537e4c37b065bd2d3e246040095ea5101b129
+ size 1645
dummy/TED/1.1.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:b62cb21ebeaf8615080c722f5a4524c0e2b82c057cdeeb86ffd3ad8ac276cde6
+ size 1088
dummy/UN/1.1.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:0cd9aa732b83bfd460debbee8a553e729e066fd07d1709619c0f4538eca8a6d4
+ size 2439
dummy/all_wikis/1.1.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:bfa8df33aa3dcba610d882a53f6f31e1272b83336be8d5c2950fc32028a17e15
+ size 861
dummy/combined/1.1.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:672500f3a519db3bbd53d2be59f5fd44bfb9f7b61d11b95a0991368ff6f27957
+ size 12018
dummy/multiUN/1.1.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:2447792d5f4eb6779a6e59346817c83d64ec5f929c4d8d8a2c1683f53096c3f7
+ size 840
large_spanish_corpus.py ADDED
@@ -0,0 +1,123 @@
+ # coding=utf-8
+ # Copyright 2020 The HuggingFace Datasets Authors and the current dataset script contributor.
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ #     http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+ """The Large Spanish Corpus is a compilation of Spanish corpora spanning Wikipedia to European parliament notes."""
+
+ from __future__ import absolute_import, division, print_function
+
+ import os
+
+ import datasets
+
+
+ _CITATION = """\
+ @dataset{jose_canete_2019_3247731,
+ author = {José Cañete},
+ title = {Compilation of Large Spanish Unannotated Corpora},
+ month = may,
+ year = 2019,
+ publisher = {Zenodo},
+ doi = {10.5281/zenodo.3247731},
+ url = {https://doi.org/10.5281/zenodo.3247731}
+ }
+ """
+
+ _DESCRIPTION = """\
+ The Large Spanish Corpus is a compilation of 15 unlabelled Spanish corpora spanning Wikipedia to European parliament \
+ notes. Each config contains the data corresponding to a different corpus. For example, "all_wikis" only includes \
+ examples from Spanish Wikipedia. By default, the config is set to "combined" which loads all the corpora; with this \
+ setting you can also specify the number of samples to return per corpus by configuring the "split" argument.
+ """
+
+ _HOMEPAGE = "https://github.com/josecannete/spanish-corpora"
+
+ _LICENSE = "MIT"
+
+ _URL = "https://zenodo.org/record/3247731/files/raw.tar.bz2"
+
+ _CORPORA = [
+     "JRC",
+     "EMEA",
+     "GlobalVoices",
+     "ECB",
+     "DOGC",
+     "all_wikis",
+     "TED",
+     "multiUN",
+     "Europarl",
+     "NewsCommentary11",
+     "UN",
+     "EUBookShop",
+     "ParaCrawl",
+     "OpenSubtitles2018",
+     "DGT",
+ ]
+
+ # Each corpus is a single text file under spanish-corpora/raw/ inside the downloaded archive.
+ _CORPORA_FILEPATHS = {corpus: os.path.join("spanish-corpora", "raw", f"{corpus}.txt") for corpus in _CORPORA}
+
+ _VERSION = "1.1.0"
+
+ _COMBINED = "combined"
+
+
+ class LargeSpanishCorpusConfig(datasets.BuilderConfig):
+     """BuilderConfig that tracks which of the raw corpora a given config covers."""
+
+     def __init__(self, corpora=None, **kwargs):
+         super(LargeSpanishCorpusConfig, self).__init__(version=datasets.Version(_VERSION, ""), **kwargs)
+         self.corpora = corpora
+
+     @property
+     def filepaths(self):
+         return [_CORPORA_FILEPATHS[corpus] for corpus in self.corpora]
+
+
+ class LargeSpanishCorpus(datasets.GeneratorBasedBuilder):
+     """The Large Spanish Corpus."""
+
+     # One config per corpus, plus a "combined" config covering all of them.
+     BUILDER_CONFIGS = [
+         LargeSpanishCorpusConfig(name=corpus, corpora=[corpus], description=f"Spanish examples in corpus {corpus}.")
+         for corpus in _CORPORA
+     ] + [
+         LargeSpanishCorpusConfig(
+             name=_COMBINED, corpora=_CORPORA, description="Complete Spanish dataset with all corpora."
+         )
+     ]
+     BUILDER_CONFIG_CLASS = LargeSpanishCorpusConfig
+     DEFAULT_CONFIG_NAME = _COMBINED
+
+     def _info(self):
+         return datasets.DatasetInfo(
+             description=_DESCRIPTION,
+             features=datasets.Features(
+                 {
+                     "text": datasets.Value("string"),
+                 }
+             ),
+             supervised_keys=None,
+             homepage=_HOMEPAGE,
+             license=_LICENSE,
+             citation=_CITATION,
+         )
+
+     def _split_generators(self, dl_manager):
+         data_dir = dl_manager.download_and_extract(_URL)
+         return [datasets.SplitGenerator(name=datasets.Split.TRAIN, gen_kwargs={"data_dir": data_dir})]
+
+     def _generate_examples(self, data_dir):
+         # Use a single running id so that example keys stay unique across the
+         # multiple corpus files read by the "combined" config.
+         _id = 0
+         for filepath in self.config.filepaths:
+             filepath = os.path.join(data_dir, filepath)
+             with open(filepath, mode="r", encoding="utf-8") as f:
+                 for line in f:
+                     yield _id, {"text": line.strip()}
+                     _id += 1
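
As a quick sanity check of what the loading script yields, each example is a `{"text": ...}` dict holding one stripped line of the corpus (a minimal sketch; `TED` is one of the smaller configs, though the full ~4.1 GB archive is downloaded regardless):

```python
from datasets import load_dataset

# Load the TED config and inspect one example produced by _generate_examples.
ted = load_dataset('large_spanish_corpus', name='TED', split='train')
print(ted.num_rows)  # 157910, matching dataset_infos.json
print(ted[0])        # {'text': '...'}
```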