system HF staff committed on
Commit
a6a82ef
1 Parent(s): f05cef3

Update files from the datasets library (from 1.3.0)


Release notes: https://github.com/huggingface/datasets/releases/tag/1.3.0

---
---

# Dataset Card for "wmt_t2t"

## Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks](#supported-tasks)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits Sample Size](#data-splits-sample-size)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## [Dataset Description](#dataset-description)

- **Homepage:** [https://github.com/tensorflow/tensor2tensor/blob/master/tensor2tensor/data_generators/translate_ende.py](https://github.com/tensorflow/tensor2tensor/blob/master/tensor2tensor/data_generators/translate_ende.py)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 1647.72 MB
- **Size of the generated dataset:** 1322.40 MB
- **Total amount of disk used:** 2970.12 MB

### [Dataset Summary](#dataset-summary)

Translation dataset based on the data from statmt.org.

Versions exist for the different years, using a combination of multiple data
sources. The base `wmt_translate` builder allows you to create your own config and choose
your own data/language pair by creating a custom `datasets.translate.wmt.WmtConfig`.

```python
config = datasets.wmt.WmtConfig(
    version="0.0.1",
    language_pair=("fr", "de"),
    subsets={
        datasets.Split.TRAIN: ["commoncrawl_frde"],
        datasets.Split.VALIDATION: ["euelections_dev2019"],
    },
)
builder = datasets.builder("wmt_translate", config=config)
```
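For typical use, a named configuration of this dataset can also be loaded directly through the library's standard `load_dataset` entry point; a minimal sketch, assuming the `datasets` package is installed and network access is available (the first call downloads roughly 1.6 GB):

```python
from datasets import load_dataset

# Load the de-en configuration of wmt_t2t; the first call downloads
# and caches the data, later calls reuse the local cache.
dataset = load_dataset("wmt_t2t", "de-en")

# Splits are accessed by name, e.g. dataset["train"], dataset["validation"].
print(dataset["validation"][0])
```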

### [Supported Tasks](#supported-tasks)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### [Languages](#languages)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

## [Dataset Structure](#dataset-structure)

We show detailed information for up to 5 configurations of the dataset.

### [Data Instances](#data-instances)

#### de-en

- **Size of downloaded dataset files:** 1647.72 MB
- **Size of the generated dataset:** 1322.40 MB
- **Total amount of disk used:** 2970.12 MB

An example of 'validation' looks as follows.

```json
{
    "translation": {
        "de": "Just a test sentence.",
        "en": "Just a test sentence."
    }
}
```
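Each record is a plain Python dict, so the parallel sentences are pulled out by indexing with the language codes. A minimal sketch working on the example record shown above:

```python
# Structure of a single wmt_t2t record, as shown in the card above.
example = {
    "translation": {
        "de": "Just a test sentence.",
        "en": "Just a test sentence.",
    }
}

# Source and target sides are selected by language code.
source = example["translation"]["de"]
target = example["translation"]["en"]
print(source, "->", target)
```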

### [Data Fields](#data-fields)

The data fields are the same among all splits.

#### de-en
- `translation`: a multilingual `string` variable, with possible languages including `de`, `en`.

### [Data Splits Sample Size](#data-splits-sample-size)

|name | train |validation|test|
|-----|------:|---------:|---:|
|de-en|4592289| 3000|3003|
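The per-split counts above can be used to sanity-check a local copy of the data; a small sketch using only the numbers from the table:

```python
# Split sizes for the de-en configuration, taken from the table above.
splits = {"train": 4_592_289, "validation": 3_000, "test": 3_003}

# Total number of sentence pairs across all splits.
total = sum(splits.values())
print(f"{total:,} examples in total")
```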

## [Dataset Creation](#dataset-creation)

### [Curation Rationale](#curation-rationale)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### [Source Data](#source-data)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### [Annotations](#annotations)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### [Personal and Sensitive Information](#personal-and-sensitive-information)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

## [Considerations for Using the Data](#considerations-for-using-the-data)

### [Social Impact of Dataset](#social-impact-of-dataset)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### [Discussion of Biases](#discussion-of-biases)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### [Other Known Limitations](#other-known-limitations)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

## [Additional Information](#additional-information)

### [Dataset Curators](#dataset-curators)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### [Licensing Information](#licensing-information)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### [Citation Information](#citation-information)

```bibtex
@InProceedings{bojar-EtAl:2014:W14-33,
  author    = {Bojar, Ondrej and Buck, Christian and Federmann, Christian and Haddow, Barry and Koehn, Philipp and Leveling, Johannes and Monz, Christof and Pecina, Pavel and Post, Matt and Saint-Amand, Herve and Soricut, Radu and Specia, Lucia and Tamchyna, Ale{\v{s}}},
  title     = {Findings of the 2014 Workshop on Statistical Machine Translation},
  booktitle = {Proceedings of the Ninth Workshop on Statistical Machine Translation},
  month     = {June},
  year      = {2014},
  address   = {Baltimore, Maryland, USA},
  publisher = {Association for Computational Linguistics},
  pages     = {12--58},
  url       = {http://www.aclweb.org/anthology/W/W14/W14-3302}
}
```

### [Contributions](#contributions)

Thanks to [@thomwolf](https://github.com/thomwolf) and [@patrickvonplaten](https://github.com/patrickvonplaten) for adding this dataset.