parquet-converter committed
Commit cecce9f
1 Parent(s): 36186a3

Update parquet files
README.md DELETED
@@ -1,201 +0,0 @@
- ---
- annotations_creators:
- - found
- language_creators:
- - found
- language:
- - en
- - es
- - pt
- license:
- - unknown
- multilinguality:
- - multilingual
- size_categories:
- - 100K<n<1M
- source_datasets:
- - original
- task_categories:
- - translation
- task_ids: []
- paperswithcode_id: null
- pretty_name: SciELO
- configs:
- - en-es
- - en-pt
- - en-pt-es
- dataset_info:
- - config_name: en-es
-   features:
-   - name: translation
-     dtype:
-       translation:
-         languages:
-         - en
-         - es
-   splits:
-   - name: train
-     num_bytes: 71777213
-     num_examples: 177782
-   download_size: 22965217
-   dataset_size: 71777213
- - config_name: en-pt
-   features:
-   - name: translation
-     dtype:
-       translation:
-         languages:
-         - en
-         - pt
-   splits:
-   - name: train
-     num_bytes: 1032669686
-     num_examples: 2828917
-   download_size: 322726075
-   dataset_size: 1032669686
- - config_name: en-pt-es
-   features:
-   - name: translation
-     dtype:
-       translation:
-         languages:
-         - en
-         - pt
-         - es
-   splits:
-   - name: train
-     num_bytes: 147472132
-     num_examples: 255915
-   download_size: 45556562
-   dataset_size: 147472132
- ---
-
- # Dataset Card for SciELO
-
- ## Table of Contents
- - [Dataset Description](#dataset-description)
-   - [Dataset Summary](#dataset-summary)
-   - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
-   - [Languages](#languages)
- - [Dataset Structure](#dataset-structure)
-   - [Data Instances](#data-instances)
-   - [Data Fields](#data-fields)
-   - [Data Splits](#data-splits)
- - [Dataset Creation](#dataset-creation)
-   - [Curation Rationale](#curation-rationale)
-   - [Source Data](#source-data)
-   - [Annotations](#annotations)
-   - [Personal and Sensitive Information](#personal-and-sensitive-information)
- - [Considerations for Using the Data](#considerations-for-using-the-data)
-   - [Social Impact of Dataset](#social-impact-of-dataset)
-   - [Discussion of Biases](#discussion-of-biases)
-   - [Other Known Limitations](#other-known-limitations)
- - [Additional Information](#additional-information)
-   - [Dataset Curators](#dataset-curators)
-   - [Licensing Information](#licensing-information)
-   - [Citation Information](#citation-information)
-   - [Contributions](#contributions)
-
- ## Dataset Description
-
- - **Homepage:** [SciELO](https://sites.google.com/view/felipe-soares/datasets#h.p_92uSCyAjWSRB)
- - **Repository:**
- - **Paper:** [A Large Parallel Corpus of Full-Text Scientific Articles](https://arxiv.org/abs/1905.01852)
- - **Leaderboard:**
- - **Point of Contact:**
-
- ### Dataset Summary
-
- A parallel corpus of full-text scientific articles collected from the SciELO database in the following languages: English, Portuguese and Spanish.
- The corpus is sentence aligned for all language pairs, as well as trilingual aligned for a small subset of sentences.
- Alignment was carried out using the Hunalign algorithm.
-
- ### Supported Tasks and Leaderboards
-
- The underlying task is machine translation.
-
- ### Languages
-
- [More Information Needed]
-
- ## Dataset Structure
-
- ### Data Instances
-
- [More Information Needed]
-
- ### Data Fields
-
- [More Information Needed]
-
- ### Data Splits
-
- [More Information Needed]
-
- ## Dataset Creation
-
- ### Curation Rationale
-
- [More Information Needed]
-
- ### Source Data
-
- #### Initial Data Collection and Normalization
-
- [More Information Needed]
-
- #### Who are the source language producers?
-
- [More Information Needed]
-
- ### Annotations
-
- #### Annotation process
-
- [More Information Needed]
-
- #### Who are the annotators?
-
- [More Information Needed]
-
- ### Personal and Sensitive Information
-
- [More Information Needed]
-
- ## Considerations for Using the Data
-
- ### Social Impact of Dataset
-
- [More Information Needed]
-
- ### Discussion of Biases
-
- [More Information Needed]
-
- ### Other Known Limitations
-
- [More Information Needed]
-
- ## Additional Information
-
- ### Dataset Curators
-
- [More Information Needed]
-
- ### Licensing Information
-
- [More Information Needed]
-
- ### Citation Information
-
- ```
- @inproceedings{soares2018large,
-   title={A Large Parallel Corpus of Full-Text Scientific Articles},
-   author={Soares, Felipe and Moreira, Viviane and Becker, Karin},
-   booktitle={Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC-2018)},
-   year={2018}
- }
- ```
- ### Contributions
-
- Thanks to [@patil-suraj](https://github.com/patil-suraj) for adding this dataset.
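The train-split sizes recorded in the deleted card's YAML metadata can still be sanity-checked after the parquet conversion. A minimal sketch, using only the counts and byte sizes stated in the metadata above:

```python
# Per-config train-split counts and byte sizes, copied from the
# dataset card's YAML metadata (dataset_info section).
num_examples = {"en-es": 177782, "en-pt": 2828917, "en-pt-es": 255915}
dataset_bytes = {"en-es": 71777213, "en-pt": 1032669686, "en-pt-es": 147472132}

total_examples = sum(num_examples.values())
avg_bytes = {k: dataset_bytes[k] / num_examples[k] for k in num_examples}

print(total_examples)                 # 3262614 aligned pairs/triples overall
print(round(avg_bytes["en-es"], 1))  # ~403.7 bytes per en-es pair
```

Note that the en-pt config alone (2.8M examples) exceeds the `100K<n<1M` size category declared in the front matter; that inconsistency was present in the original card.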
 
dataset_infos.json DELETED
@@ -1 +0,0 @@
- {"en-es": {"description": "A parallel corpus of full-text scientific articles collected from Scielo database in the following languages: English, Portuguese and Spanish. The corpus is sentence aligned for all language pairs, as well as trilingual aligned for a small subset of sentences. Alignment was carried out using the Hunalign algorithm.\n", "citation": "@inproceedings{soares2018large,\n title={A Large Parallel Corpus of Full-Text Scientific Articles},\n author={Soares, Felipe and Moreira, Viviane and Becker, Karin},\n booktitle={Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC-2018)},\n year={2018}\n}\n", "homepage": "http://www.euromatrixplus.net/multi-un/", "license": "", "features": {"translation": {"languages": ["en", "es"], "id": null, "_type": "Translation"}}, "post_processed": null, "supervised_keys": null, "builder_name": "scielo", "config_name": "en-es", "version": {"version_str": "1.0.0", "description": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 71777213, "num_examples": 177782, "dataset_name": "scielo"}}, "download_checksums": {"https://ndownloader.figstatic.com/files/14019287": {"num_bytes": 22965217, "checksum": "a56de2ee24727b42817a88339913b2741e10c37347b567ae4bf239894c6e1fca"}}, "download_size": 22965217, "post_processing_size": null, "dataset_size": 71777213, "size_in_bytes": 94742430}, "en-pt": {"description": "A parallel corpus of full-text scientific articles collected from Scielo database in the following languages: English, Portuguese and Spanish. The corpus is sentence aligned for all language pairs, as well as trilingual aligned for a small subset of sentences. Alignment was carried out using the Hunalign algorithm.\n", "citation": "@inproceedings{soares2018large,\n title={A Large Parallel Corpus of Full-Text Scientific Articles},\n author={Soares, Felipe and Moreira, Viviane and Becker, Karin},\n booktitle={Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC-2018)},\n year={2018}\n}\n", "homepage": "http://www.euromatrixplus.net/multi-un/", "license": "", "features": {"translation": {"languages": ["en", "pt"], "id": null, "_type": "Translation"}}, "post_processed": null, "supervised_keys": null, "builder_name": "scielo", "config_name": "en-pt", "version": {"version_str": "1.0.0", "description": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 1032669686, "num_examples": 2828917, "dataset_name": "scielo"}}, "download_checksums": {"https://ndownloader.figstatic.com/files/14019308": {"num_bytes": 322726075, "checksum": "d494e3566019f67c56cfa937c6ceb26dcad8f1454e0154c2da192de9bfff6a0c"}}, "download_size": 322726075, "post_processing_size": null, "dataset_size": 1032669686, "size_in_bytes": 1355395761}, "en-pt-es": {"description": "A parallel corpus of full-text scientific articles collected from Scielo database in the following languages: English, Portuguese and Spanish. The corpus is sentence aligned for all language pairs, as well as trilingual aligned for a small subset of sentences. Alignment was carried out using the Hunalign algorithm.\n", "citation": "@inproceedings{soares2018large,\n title={A Large Parallel Corpus of Full-Text Scientific Articles},\n author={Soares, Felipe and Moreira, Viviane and Becker, Karin},\n booktitle={Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC-2018)},\n year={2018}\n}\n", "homepage": "http://www.euromatrixplus.net/multi-un/", "license": "", "features": {"translation": {"languages": ["en", "pt", "es"], "id": null, "_type": "Translation"}}, "post_processed": null, "supervised_keys": null, "builder_name": "scielo", "config_name": "en-pt-es", "version": {"version_str": "1.0.0", "description": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 147472132, "num_examples": 255915, "dataset_name": "scielo"}}, "download_checksums": {"https://ndownloader.figstatic.com/files/14019293": {"num_bytes": 45556562, "checksum": "ad943502c027e0c8ac804e15529ffab2ceefa919e63127635f67ff31e51da32f"}}, "download_size": 45556562, "post_processing_size": null, "dataset_size": 147472132, "size_in_bytes": 193028694}}
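The deleted dataset_infos.json records a sha256 checksum and byte count for each figstatic download, which lets a local copy be verified. A minimal sketch, assuming the archive has already been downloaded to a local path:

```python
import hashlib
from typing import Iterable


def sha256_hex(chunks: Iterable[bytes]) -> str:
    """Stream byte chunks through SHA-256 and return the hex digest."""
    h = hashlib.sha256()
    for chunk in chunks:
        h.update(chunk)
    return h.hexdigest()


# Expected checksum for the en-es download, copied from dataset_infos.json above.
EN_ES_CHECKSUM = "a56de2ee24727b42817a88339913b2741e10c37347b567ae4bf239894c6e1fca"


def verify_download(path: str, expected: str = EN_ES_CHECKSUM) -> bool:
    """Hash a local file in 1 MiB chunks and compare with the recorded checksum."""
    with open(path, "rb") as f:
        return sha256_hex(iter(lambda: f.read(1 << 20), b"")) == expected
```

Streaming in chunks keeps memory flat even for the 322 MB en-pt archive.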
 
 
en-es/scielo-train.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:76d46b2dd2f8401089b1d35b828b6ff874445fddc1491508ac8369a74f96d5d0
+ size 39938802
en-pt-es/scielo-train.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:6a7bce41f96c52f0d06bef3c01f472cbffe115148918437fe1b5c943f7937809
+ size 80329521
en-pt/scielo-train-00000-of-00003.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:1e4777ebf7652f721ce4ee7023753518c4640c5d163e48accad820b6c44b84db
+ size 274014368
en-pt/scielo-train-00001-of-00003.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:bbbdbf59c7fc81d6960a8a4aea74215799243fbf39da37ede50c4fa45567b940
+ size 274007368
en-pt/scielo-train-00002-of-00003.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:f6141e973600092a3f04645e5b8b48e63ce4b884081dcc83893fc675f161e7e0
+ size 17698048
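The `+` lines above are Git LFS pointer files, not the parquet data itself: each records the LFS spec version, a sha256 object id, and the blob size in bytes. Parsing one is straightforward; a sketch using the en-es pointer shown above:

```python
def parse_lfs_pointer(text: str) -> dict:
    """Split a Git LFS pointer file into its "key value" fields."""
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields


pointer = """\
version https://git-lfs.github.com/spec/v1
oid sha256:76d46b2dd2f8401089b1d35b828b6ff874445fddc1491508ac8369a74f96d5d0
size 39938802
"""
info = parse_lfs_pointer(pointer)
# info["oid"] carries the sha256 of the real parquet blob; info["size"] its byte count.
```

Note the en-pt config is split into three shards (`00000`..`00002-of-00003`) because its data exceeds a single shard's target size.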
scielo.py DELETED
@@ -1,121 +0,0 @@
- # coding=utf-8
- # Copyright 2020 The HuggingFace Datasets Authors.
- #
- # Licensed under the Apache License, Version 2.0 (the "License");
- # you may not use this file except in compliance with the License.
- # You may obtain a copy of the License at
- #
- #     http://www.apache.org/licenses/LICENSE-2.0
- #
- # Unless required by applicable law or agreed to in writing, software
- # distributed under the License is distributed on an "AS IS" BASIS,
- # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- # See the License for the specific language governing permissions and
- # limitations under the License.
- """Parallel corpus of full-text articles in Portuguese, English and Spanish from SciELO"""
-
-
- import datasets
-
-
- _CITATION = """\
- @inproceedings{soares2018large,
-   title={A Large Parallel Corpus of Full-Text Scientific Articles},
-   author={Soares, Felipe and Moreira, Viviane and Becker, Karin},
-   booktitle={Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC-2018)},
-   year={2018}
- }
- """
-
- _DESCRIPTION = """\
- A parallel corpus of full-text scientific articles collected from Scielo database in the following languages: \
- English, Portuguese and Spanish. The corpus is sentence aligned for all language pairs, \
- as well as trilingual aligned for a small subset of sentences. Alignment was carried out using the Hunalign algorithm.
- """
-
- _HOMEPAGE = "https://sites.google.com/view/felipe-soares/datasets#h.p_92uSCyAjWSRB"
-
- _LANGUAGES = ["en-es", "en-pt", "en-pt-es"]
-
- _URLS = {
-     "en-es": "https://ndownloader.figstatic.com/files/14019287",
-     "en-pt": "https://ndownloader.figstatic.com/files/14019308",
-     "en-pt-es": "https://ndownloader.figstatic.com/files/14019293",
- }
-
-
- class Scielo(datasets.GeneratorBasedBuilder):
-     """Parallel corpus of full-text articles in Portuguese, English and Spanish from SciELO"""
-
-     VERSION = datasets.Version("1.0.0")
-
-     BUILDER_CONFIGS = [
-         datasets.BuilderConfig(name="en-es", version=datasets.Version("1.0.0"), description="English-Spanish"),
-         datasets.BuilderConfig(name="en-pt", version=datasets.Version("1.0.0"), description="English-Portuguese"),
-         datasets.BuilderConfig(
-             name="en-pt-es", version=datasets.Version("1.0.0"), description="English-Portuguese-Spanish"
-         ),
-     ]
-
-     def _info(self):
-         return datasets.DatasetInfo(
-             description=_DESCRIPTION,
-             features=datasets.Features(
-                 {"translation": datasets.features.Translation(languages=tuple(self.config.name.split("-")))}
-             ),
-             supervised_keys=None,
-             homepage=_HOMEPAGE,
-             citation=_CITATION,
-         )
-
-     def _split_generators(self, dl_manager):
-         """Returns SplitGenerators."""
-         archive = dl_manager.download(_URLS[self.config.name])
-         lang_pair = self.config.name.split("-")
-         fname = self.config.name.replace("-", "_")
-
-         if self.config.name == "en-pt-es":
-             return [
-                 datasets.SplitGenerator(
-                     name=datasets.Split.TRAIN,
-                     gen_kwargs={
-                         "source_file": f"{fname}.en",
-                         "target_file": f"{fname}.pt",
-                         "target_file_2": f"{fname}.es",
-                         "files": dl_manager.iter_archive(archive),
-                     },
-                 ),
-             ]
-
-         return [
-             datasets.SplitGenerator(
-                 name=datasets.Split.TRAIN,
-                 gen_kwargs={
-                     "source_file": f"{fname}.{lang_pair[0]}",
-                     "target_file": f"{fname}.{lang_pair[1]}",
-                     "files": dl_manager.iter_archive(archive),
-                 },
-             ),
-         ]
-
-     def _generate_examples(self, source_file, target_file, files, target_file_2=None):
-         for path, f in files:
-             if path == source_file:
-                 source_sentences = f.read().decode("utf-8").split("\n")
-             elif path == target_file:
-                 target_sentences = f.read().decode("utf-8").split("\n")
-             elif self.config.name == "en-pt-es" and path == target_file_2:
-                 target_sentences_2 = f.read().decode("utf-8").split("\n")
-
-         if self.config.name == "en-pt-es":
-             source, target, target_2 = tuple(self.config.name.split("-"))
-             for idx, (l1, l2, l3) in enumerate(zip(source_sentences, target_sentences, target_sentences_2)):
-                 result = {"translation": {source: l1, target: l2, target_2: l3}}
-                 yield idx, result
-         else:
-             source, target = tuple(self.config.name.split("-"))
-             for idx, (l1, l2) in enumerate(zip(source_sentences, target_sentences)):
-                 result = {"translation": {source: l1, target: l2}}
-                 yield idx, result