system HF staff committed on
Commit
6d34429
0 Parent(s):

Update files from the datasets library (from 1.2.0)

Release notes: https://github.com/huggingface/datasets/releases/tag/1.2.0

Files changed (5)
  1. .gitattributes +27 -0
  2. README.md +150 -0
  3. capes.py +102 -0
  4. dataset_infos.json +1 -0
  5. dummy/en-pt/1.0.0/dummy_data.zip +3 -0
.gitattributes ADDED
@@ -0,0 +1,27 @@
+ *.7z filter=lfs diff=lfs merge=lfs -text
+ *.arrow filter=lfs diff=lfs merge=lfs -text
+ *.bin filter=lfs diff=lfs merge=lfs -text
+ *.bin.* filter=lfs diff=lfs merge=lfs -text
+ *.bz2 filter=lfs diff=lfs merge=lfs -text
+ *.ftz filter=lfs diff=lfs merge=lfs -text
+ *.gz filter=lfs diff=lfs merge=lfs -text
+ *.h5 filter=lfs diff=lfs merge=lfs -text
+ *.joblib filter=lfs diff=lfs merge=lfs -text
+ *.lfs.* filter=lfs diff=lfs merge=lfs -text
+ *.model filter=lfs diff=lfs merge=lfs -text
+ *.msgpack filter=lfs diff=lfs merge=lfs -text
+ *.onnx filter=lfs diff=lfs merge=lfs -text
+ *.ot filter=lfs diff=lfs merge=lfs -text
+ *.parquet filter=lfs diff=lfs merge=lfs -text
+ *.pb filter=lfs diff=lfs merge=lfs -text
+ *.pt filter=lfs diff=lfs merge=lfs -text
+ *.pth filter=lfs diff=lfs merge=lfs -text
+ *.rar filter=lfs diff=lfs merge=lfs -text
+ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+ *.tar.* filter=lfs diff=lfs merge=lfs -text
+ *.tflite filter=lfs diff=lfs merge=lfs -text
+ *.tgz filter=lfs diff=lfs merge=lfs -text
+ *.xz filter=lfs diff=lfs merge=lfs -text
+ *.zip filter=lfs diff=lfs merge=lfs -text
+ *.zstandard filter=lfs diff=lfs merge=lfs -text
+ *tfevents* filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,150 @@
+ ---
+ annotations_creators:
+ - found
+ language_creators:
+ - found
+ languages:
+ - en
+ - pt
+ licenses:
+ - unknown
+ multilinguality:
+ - multilingual
+ size_categories:
+ - n>1M
+ source_datasets:
+ - original
+ task_categories:
+ - conditional-text-generation
+ task_ids:
+ - machine-translation
+ ---
+ 
+ # Dataset Card for CAPES
+ 
+ ## Table of Contents
+ - [Dataset Description](#dataset-description)
+ - [Dataset Summary](#dataset-summary)
+ - [Supported Tasks](#supported-tasks-and-leaderboards)
+ - [Languages](#languages)
+ - [Dataset Structure](#dataset-structure)
+ - [Data Instances](#data-instances)
+ - [Data Fields](#data-fields)
+ - [Data Splits](#data-splits)
+ - [Dataset Creation](#dataset-creation)
+ - [Curation Rationale](#curation-rationale)
+ - [Source Data](#source-data)
+ - [Annotations](#annotations)
+ - [Personal and Sensitive Information](#personal-and-sensitive-information)
+ - [Considerations for Using the Data](#considerations-for-using-the-data)
+ - [Social Impact of Dataset](#social-impact-of-dataset)
+ - [Discussion of Biases](#discussion-of-biases)
+ - [Other Known Limitations](#other-known-limitations)
+ - [Additional Information](#additional-information)
+ - [Dataset Curators](#dataset-curators)
+ - [Licensing Information](#licensing-information)
+ - [Citation Information](#citation-information)
+ 
+ ## Dataset Description
+ 
+ - **Homepage:** [Parallel corpus of theses and dissertation abstracts in Portuguese and English from CAPES](https://sites.google.com/view/felipe-soares/datasets#h.p_kxOR6EhHm2a6)
+ - **Repository:**
+ - **Paper:**
+ - **Leaderboard:**
+ - **Point of Contact:**
+ 
+ ### Dataset Summary
+ 
+ A parallel corpus of theses and dissertation abstracts in English and Portuguese was collected from the
+ CAPES website (Coordenação de Aperfeiçoamento de Pessoal de Nível Superior) - Brazil.
+ The corpus is sentence aligned for all language pairs. Approximately 240,000 documents were
+ collected and aligned using the Hunalign algorithm.
+ 
+ ### Supported Tasks and Leaderboards
+ 
+ The underlying task is machine translation between English and Portuguese; a minimal loading sketch follows.
+ 
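+ A minimal sketch of loading the corpus with the `datasets` library (the dataset id `capes`, the `en-pt` config, and the single `train` split come from `capes.py` and `dataset_infos.json` below):
+ 
+ ```python
+ from datasets import load_dataset
+ 
+ # Build the en-pt configuration defined in capes.py (downloads roughly 160 MB).
+ capes = load_dataset("capes", "en-pt")
+ 
+ # Each example is one aligned sentence pair.
+ print(capes["train"][0]["translation"])
+ # {'en': '...', 'pt': '...'}
+ ```
+ 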
+ ### Languages
+ 
+ English (`en`) and Portuguese (`pt`).
+ 
+ ## Dataset Structure
+ 
+ ### Data Instances
+ 
+ Each instance holds one aligned sentence pair under a single `translation` field; an illustrative example follows.
+ 
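+ A sketch of one instance, following the `Translation` feature declared in `capes.py` (the sentence text here is invented for illustration):
+ 
+ ```python
+ {
+     "translation": {
+         "en": "This work investigates methods for aligning abstracts.",
+         "pt": "Este trabalho investiga métodos para alinhar resumos.",
+     }
+ }
+ ```
+ 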
+ ### Data Fields
+ 
+ - `translation`: a dictionary containing the aligned pair, keyed by the language codes `en` and `pt`.
+ 
+ ### Data Splits
+ 
+ The corpus ships as a single `train` split with 1,157,611 sentence pairs (see `dataset_infos.json`).
+ 
+ ## Dataset Creation
+ 
+ ### Curation Rationale
+ 
+ [More Information Needed]
+ 
+ ### Source Data
+ 
+ #### Initial Data Collection and Normalization
+ 
+ [More Information Needed]
+ 
+ #### Who are the source language producers?
+ 
+ [More Information Needed]
+ 
+ ### Annotations
+ 
+ #### Annotation process
+ 
+ [More Information Needed]
+ 
+ #### Who are the annotators?
+ 
+ [More Information Needed]
+ 
+ ### Personal and Sensitive Information
+ 
+ [More Information Needed]
+ 
+ ## Considerations for Using the Data
+ 
+ ### Social Impact of Dataset
+ 
+ [More Information Needed]
+ 
+ ### Discussion of Biases
+ 
+ [More Information Needed]
+ 
+ ### Other Known Limitations
+ 
+ [More Information Needed]
+ 
+ ## Additional Information
+ 
+ ### Dataset Curators
+ 
+ [More Information Needed]
+ 
+ ### Licensing Information
+ 
+ [More Information Needed]
+ 
+ ### Citation Information
+ 
+ ```
+ @inproceedings{soares2018parallel,
+   title={A Parallel Corpus of Theses and Dissertations Abstracts},
+   author={Soares, Felipe and Yamashita, Gabrielli Harumi and Anzanello, Michel Jose},
+   booktitle={International Conference on Computational Processing of the Portuguese Language},
+   pages={345--352},
+   year={2018},
+   organization={Springer}
+ }
+ ```
capes.py ADDED
@@ -0,0 +1,102 @@
+ # coding=utf-8
+ # Copyright 2020 The HuggingFace Datasets Authors.
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ #     http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+ """Capes: Parallel corpus of theses and dissertation abstracts in Portuguese and English from CAPES"""
+ 
+ from __future__ import absolute_import, division, print_function
+ 
+ import os
+ 
+ import datasets
+ 
+ 
+ _CITATION = """\
+ @inproceedings{soares2018parallel,
+   title={A Parallel Corpus of Theses and Dissertations Abstracts},
+   author={Soares, Felipe and Yamashita, Gabrielli Harumi and Anzanello, Michel Jose},
+   booktitle={International Conference on Computational Processing of the Portuguese Language},
+   pages={345--352},
+   year={2018},
+   organization={Springer}
+ }
+ """
+ 
+ 
+ _DESCRIPTION = """\
+ A parallel corpus of theses and dissertation abstracts in English and Portuguese was collected from the \
+ CAPES website (Coordenação de Aperfeiçoamento de Pessoal de Nível Superior) - Brazil. \
+ The corpus is sentence aligned for all language pairs. Approximately 240,000 documents were \
+ collected and aligned using the Hunalign algorithm.
+ """
+ 
+ 
+ _HOMEPAGE = "https://sites.google.com/view/felipe-soares/datasets#h.p_kxOR6EhHm2a6"
+ 
+ _URL = "https://ndownloader.figstatic.com/files/14015837"
+ 
+ 
+ class Capes(datasets.GeneratorBasedBuilder):
+     """Capes: Parallel corpus of theses and dissertation abstracts in Portuguese and English from CAPES"""
+ 
+     VERSION = datasets.Version("1.0.0")
+ 
+     BUILDER_CONFIGS = [
+         datasets.BuilderConfig(
+             name="en-pt",
+             version=datasets.Version("1.0.0"),
+             description="Parallel corpus of theses and dissertation abstracts in Portuguese and English from CAPES",
+         )
+     ]
+ 
+     def _info(self):
+         return datasets.DatasetInfo(
+             description=_DESCRIPTION,
+             # The config name "en-pt" doubles as the language pair of the Translation feature.
+             features=datasets.Features(
+                 {"translation": datasets.features.Translation(languages=tuple(self.config.name.split("-")))}
+             ),
+             supervised_keys=None,
+             homepage=_HOMEPAGE,
+             citation=_CITATION,
+         )
+ 
+     def _split_generators(self, dl_manager):
+         """Returns SplitGenerators."""
+         data_dir = dl_manager.download_and_extract(_URL)
+         return [
+             datasets.SplitGenerator(
+                 name=datasets.Split.TRAIN,
+                 gen_kwargs={
+                     "source_file": os.path.join(data_dir, "en_pt.en"),
+                     "target_file": os.path.join(data_dir, "en_pt.pt"),
+                 },
+             ),
+         ]
+ 
+     def _generate_examples(self, source_file, target_file):
+         # The two files are line-aligned: line i of the source file is the
+         # translation of line i of the target file.
+         with open(source_file, encoding="utf-8") as f:
+             source_sentences = f.read().split("\n")
+         with open(target_file, encoding="utf-8") as f:
+             target_sentences = f.read().split("\n")
+ 
+         assert len(target_sentences) == len(source_sentences), "Sizes do not match: %d vs %d for %s vs %s." % (
+             len(source_sentences),
+             len(target_sentences),
+             source_file,
+             target_file,
+         )
+ 
+         source, target = tuple(self.config.name.split("-"))
+         for idx, (l1, l2) in enumerate(zip(source_sentences, target_sentences)):
+             result = {"translation": {source: l1, target: l2}}
+             yield idx, result
dataset_infos.json ADDED
@@ -0,0 +1 @@
+ {"en-pt": {"description": "A parallel corpus of theses and dissertations abstracts in English and Portuguese were collected from the CAPES website (Coordena\u00e7\u00e3o de Aperfei\u00e7oamento de Pessoal de N\u00edvel Superior) - Brazil. The corpus is sentence aligned for all language pairs. Approximately 240,000 documents were collected and aligned using the Hunalign algorithm.\n", "citation": "@inproceedings{soares2018parallel,\n title={A Parallel Corpus of Theses and Dissertations Abstracts},\n author={Soares, Felipe and Yamashita, Gabrielli Harumi and Anzanello, Michel Jose},\n booktitle={International Conference on Computational Processing of the Portuguese Language},\n pages={345--352},\n year={2018},\n organization={Springer}\n}\n", "homepage": "https://sites.google.com/view/felipe-soares/datasets#h.p_kxOR6EhHm2a6", "license": "", "features": {"translation": {"languages": ["en", "pt"], "id": null, "_type": "Translation"}}, "post_processed": null, "supervised_keys": null, "builder_name": "capes", "config_name": "en-pt", "version": {"version_str": "1.0.0", "description": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 472484376, "num_examples": 1157611, "dataset_name": "capes"}}, "download_checksums": {"https://ndownloader.figstatic.com/files/14015837": {"num_bytes": 162229298, "checksum": "08e5739e78cd5b68ca6b29507f2a746fd3a5fbdec8dde2700a4141030d21e143"}}, "download_size": 162229298, "post_processing_size": null, "dataset_size": 472484376, "size_in_bytes": 634713674}}
dummy/en-pt/1.0.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:8d8a4bc399d058434e2ebeb63ba0ed453ed8de42e95a0e1bc83d40e653351ab6
+ size 1811