Commit 6ea4446 (parent: bc15b7f), committed by system (HF staff)

Update files from the datasets library (from 1.3.0)

Release notes: https://github.com/huggingface/datasets/releases/tag/1.3.0

Files changed: README.md (added)
---
---

# Dataset Card for "xnli"

## Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks](#supported-tasks)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits Sample Size](#data-splits-sample-size)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## [Dataset Description](#dataset-description)

- **Homepage:** [https://www.nyu.edu/projects/bowman/xnli/](https://www.nyu.edu/projects/bowman/xnli/)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 7384.70 MB
- **Size of the generated dataset:** 3076.99 MB
- **Total amount of disk used:** 10461.69 MB

### [Dataset Summary](#dataset-summary)

XNLI is a subset of a few thousand examples from MNLI that has been translated
into 14 different languages (some of them low-resource). As with MNLI, the goal
is to predict textual entailment: given two sentences, does sentence A entail,
contradict, or neither entail nor contradict sentence B? It is a three-way
classification task.

### [Supported Tasks](#supported-tasks)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### [Languages](#languages)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

## [Dataset Structure](#dataset-structure)

We show detailed information for up to 5 configurations of the dataset.

### [Data Instances](#data-instances)

#### all_languages

- **Size of downloaded dataset files:** 461.54 MB
- **Size of the generated dataset:** 1535.82 MB
- **Total amount of disk used:** 1997.37 MB

An example of 'train' looks as follows.
```
This example was too long and was cropped:

{
    "hypothesis": "{\"language\": [\"ar\", \"bg\", \"de\", \"el\", \"en\", \"es\", \"fr\", \"hi\", \"ru\", \"sw\", \"th\", \"tr\", \"ur\", \"vi\", \"zh\"], \"translation\": [\"احد اع...",
    "label": 0,
    "premise": "{\"ar\": \"واحدة من رقابنا ستقوم بتنفيذ تعليماتك كلها بكل دقة\", \"bg\": \"един от нашите номера ще ви даде инструкции .\", \"de\": \"Eine ..."
}
```
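
In the `all_languages` config, one example bundles every translation. A minimal sketch of pulling out a single (premise, hypothesis) pair, assuming the layout shown above: `premise` is a dict keyed by language code, and `hypothesis` holds parallel `language`/`translation` lists (the `example` dict below is illustrative, not real data):

```python
# Sketch: extract one language pair from an `all_languages`-style example.
def get_pair(example, lang):
    premise = example["premise"][lang]                   # dict: lang -> text
    hyp = example["hypothesis"]                          # parallel lists
    hypothesis = hyp["translation"][hyp["language"].index(lang)]
    return premise, hypothesis

# Hypothetical two-language example mimicking the structure above.
example = {
    "premise": {"en": "One of our number will carry out your instructions minutely.",
                "de": "Einer von uns wird Ihre Anweisungen genau ausfuehren."},
    "hypothesis": {"language": ["de", "en"],
                   "translation": ["Einer von uns wird gehorchen.",
                                   "A member of my team will execute your orders."]},
    "label": 0,  # entailment
}

print(get_pair(example, "en"))
```

The per-language configs below avoid this indirection: each row already holds plain `premise` and `hypothesis` strings.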

#### ar

- **Size of downloaded dataset files:** 461.54 MB
- **Size of the generated dataset:** 104.26 MB
- **Total amount of disk used:** 565.81 MB

An example of 'validation' looks as follows.
```
{
    "hypothesis": "اتصل بأمه حالما أوصلته حافلة المدرسية.",
    "label": 1,
    "premise": "وقال، ماما، لقد عدت للمنزل."
}
```

#### bg

- **Size of downloaded dataset files:** 461.54 MB
- **Size of the generated dataset:** 122.38 MB
- **Total amount of disk used:** 583.92 MB

An example of 'train' looks as follows.
```
This example was too long and was cropped:

{
    "hypothesis": "\"губиш нещата на следното ниво , ако хората си припомнят .\"...",
    "label": 0,
    "premise": "\"по време на сезона и предполагам , че на твоето ниво ще ги загубиш на следващото ниво , ако те решат да си припомнят отбора на ..."
}
```

#### de

- **Size of downloaded dataset files:** 461.54 MB
- **Size of the generated dataset:** 82.18 MB
- **Total amount of disk used:** 543.73 MB

An example of 'train' looks as follows.
```
This example was too long and was cropped:

{
    "hypothesis": "Man verliert die Dinge auf die folgende Ebene , wenn sich die Leute erinnern .",
    "label": 0,
    "premise": "\"Du weißt , während der Saison und ich schätze , auf deiner Ebene verlierst du sie auf die nächste Ebene , wenn sie sich entschl..."
}
```

#### el

- **Size of downloaded dataset files:** 461.54 MB
- **Size of the generated dataset:** 135.71 MB
- **Total amount of disk used:** 597.25 MB

An example of 'validation' looks as follows.
```
This example was too long and was cropped:

{
    "hypothesis": "\"Τηλεφώνησε στη μαμά του μόλις το σχολικό λεωφορείο τον άφησε.\"...",
    "label": 1,
    "premise": "Και είπε, Μαμά, έφτασα στο σπίτι."
}
```

### [Data Fields](#data-fields)

The data fields are the same among all splits.

#### all_languages
- `premise`: a multilingual `string` variable, with possible languages including `ar`, `bg`, `de`, `el`, `en`.
- `hypothesis`: a multilingual `string` variable, with possible languages including `ar`, `bg`, `de`, `el`, `en`.
- `label`: a classification label, with possible values including `entailment` (0), `neutral` (1), `contradiction` (2).

#### ar
- `premise`: a `string` feature.
- `hypothesis`: a `string` feature.
- `label`: a classification label, with possible values including `entailment` (0), `neutral` (1), `contradiction` (2).

#### bg
- `premise`: a `string` feature.
- `hypothesis`: a `string` feature.
- `label`: a classification label, with possible values including `entailment` (0), `neutral` (1), `contradiction` (2).

#### de
- `premise`: a `string` feature.
- `hypothesis`: a `string` feature.
- `label`: a classification label, with possible values including `entailment` (0), `neutral` (1), `contradiction` (2).

#### el
- `premise`: a `string` feature.
- `hypothesis`: a `string` feature.
- `label`: a classification label, with possible values including `entailment` (0), `neutral` (1), `contradiction` (2).
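
In every config the `label` column stores the integer id, not the name. A minimal sketch of converting between the two, assuming the id order listed above (`entailment` = 0, `neutral` = 1, `contradiction` = 2):

```python
# Label names in id order, as listed in the field descriptions above.
LABEL_NAMES = ["entailment", "neutral", "contradiction"]

def id_to_name(label_id):
    """Map an integer label id to its class name."""
    return LABEL_NAMES[label_id]

def name_to_id(name):
    """Map a class name back to its integer label id."""
    return LABEL_NAMES.index(name)

print(id_to_name(1))  # neutral
```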

### [Data Splits Sample Size](#data-splits-sample-size)

| name          |  train | validation | test |
|---------------|-------:|-----------:|-----:|
| all_languages | 392702 |       2490 | 5010 |
| ar            | 392702 |       2490 | 5010 |
| bg            | 392702 |       2490 | 5010 |
| de            | 392702 |       2490 | 5010 |
| el            | 392702 |       2490 | 5010 |

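Every config shares the same split sizes, so the per-config total is easy to check. A quick sketch of the arithmetic, using the numbers from the table above:

```python
# Split sizes per config, from the table above.
splits = {"train": 392702, "validation": 2490, "test": 5010}

total = sum(splits.values())
print(total)  # 400202 examples per config
```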
## [Dataset Creation](#dataset-creation)

### [Curation Rationale](#curation-rationale)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### [Source Data](#source-data)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### [Annotations](#annotations)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### [Personal and Sensitive Information](#personal-and-sensitive-information)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

## [Considerations for Using the Data](#considerations-for-using-the-data)

### [Social Impact of Dataset](#social-impact-of-dataset)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### [Discussion of Biases](#discussion-of-biases)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### [Other Known Limitations](#other-known-limitations)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

## [Additional Information](#additional-information)

### [Dataset Curators](#dataset-curators)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### [Licensing Information](#licensing-information)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### [Citation Information](#citation-information)

```
@InProceedings{conneau2018xnli,
  author    = {Conneau, Alexis
               and Rinott, Ruty
               and Lample, Guillaume
               and Williams, Adina
               and Bowman, Samuel R.
               and Schwenk, Holger
               and Stoyanov, Veselin},
  title     = {XNLI: Evaluating Cross-lingual Sentence Representations},
  booktitle = {Proceedings of the 2018 Conference on Empirical Methods
               in Natural Language Processing},
  year      = {2018},
  publisher = {Association for Computational Linguistics},
  location  = {Brussels, Belgium},
}
```

### Contributions

Thanks to [@lewtun](https://github.com/lewtun), [@mariamabarham](https://github.com/mariamabarham), [@thomwolf](https://github.com/thomwolf), [@lhoestq](https://github.com/lhoestq), [@patrickvonplaten](https://github.com/patrickvonplaten) for adding this dataset.