loicmagne committed
Commit 40999f7
1 Parent(s): cfe4917

Upload 46 files

README.md ADDED
@@ -0,0 +1,842 @@
---
language:
- ar
- bg
- de
- el
- en
- es
- fr
- hi
- ru
- sw
- th
- tr
- ur
- vi
- zh
paperswithcode_id: xnli
pretty_name: Cross-lingual Natural Language Inference
dataset_info:
- config_name: all_languages
  features:
  - name: premise
    dtype:
      translation:
        languages:
        - ar
        - bg
        - de
        - el
        - en
        - es
        - fr
        - hi
        - ru
        - sw
        - th
        - tr
        - ur
        - vi
        - zh
  - name: hypothesis
    dtype:
      translation_variable_languages:
        languages:
        - ar
        - bg
        - de
        - el
        - en
        - es
        - fr
        - hi
        - ru
        - sw
        - th
        - tr
        - ur
        - vi
        - zh
        num_languages: 15
  - name: label
    dtype:
      class_label:
        names:
          '0': entailment
          '1': neutral
          '2': contradiction
  splits:
  - name: train
    num_bytes: 1581471691
    num_examples: 392702
  - name: test
    num_bytes: 19387432
    num_examples: 5010
  - name: validation
    num_bytes: 9566179
    num_examples: 2490
  download_size: 963942271
  dataset_size: 1610425302
- config_name: ar
  features:
  - name: premise
    dtype: string
  - name: hypothesis
    dtype: string
  - name: label
    dtype:
      class_label:
        names:
          '0': entailment
          '1': neutral
          '2': contradiction
  splits:
  - name: train
    num_bytes: 107399614
    num_examples: 392702
  - name: test
    num_bytes: 1294553
    num_examples: 5010
  - name: validation
    num_bytes: 633001
    num_examples: 2490
  download_size: 59215902
  dataset_size: 109327168
- config_name: bg
  features:
  - name: premise
    dtype: string
  - name: hypothesis
    dtype: string
  - name: label
    dtype:
      class_label:
        names:
          '0': entailment
          '1': neutral
          '2': contradiction
  splits:
  - name: train
    num_bytes: 125973225
    num_examples: 392702
  - name: test
    num_bytes: 1573034
    num_examples: 5010
  - name: validation
    num_bytes: 774061
    num_examples: 2490
  download_size: 66117878
  dataset_size: 128320320
- config_name: de
  features:
  - name: premise
    dtype: string
  - name: hypothesis
    dtype: string
  - name: label
    dtype:
      class_label:
        names:
          '0': entailment
          '1': neutral
          '2': contradiction
  splits:
  - name: train
    num_bytes: 84684140
    num_examples: 392702
  - name: test
    num_bytes: 996488
    num_examples: 5010
  - name: validation
    num_bytes: 494604
    num_examples: 2490
  download_size: 55973883
  dataset_size: 86175232
- config_name: el
  features:
  - name: premise
    dtype: string
  - name: hypothesis
    dtype: string
  - name: label
    dtype:
      class_label:
        names:
          '0': entailment
          '1': neutral
          '2': contradiction
  splits:
  - name: train
    num_bytes: 139753358
    num_examples: 392702
  - name: test
    num_bytes: 1704785
    num_examples: 5010
  - name: validation
    num_bytes: 841226
    num_examples: 2490
  download_size: 74551247
  dataset_size: 142299369
- config_name: en
  features:
  - name: premise
    dtype: string
  - name: hypothesis
    dtype: string
  - name: label
    dtype:
      class_label:
        names:
          '0': entailment
          '1': neutral
          '2': contradiction
  splits:
  - name: train
    num_bytes: 74444026
    num_examples: 392702
  - name: test
    num_bytes: 875134
    num_examples: 5010
  - name: validation
    num_bytes: 433463
    num_examples: 2490
  download_size: 50627367
  dataset_size: 75752623
- config_name: es
  features:
  - name: premise
    dtype: string
  - name: hypothesis
    dtype: string
  - name: label
    dtype:
      class_label:
        names:
          '0': entailment
          '1': neutral
          '2': contradiction
  splits:
  - name: train
    num_bytes: 81383284
    num_examples: 392702
  - name: test
    num_bytes: 969813
    num_examples: 5010
  - name: validation
    num_bytes: 478422
    num_examples: 2490
  download_size: 53677157
  dataset_size: 82831519
- config_name: fr
  features:
  - name: premise
    dtype: string
  - name: hypothesis
    dtype: string
  - name: label
    dtype:
      class_label:
        names:
          '0': entailment
          '1': neutral
          '2': contradiction
  splits:
  - name: train
    num_bytes: 85808779
    num_examples: 392702
  - name: test
    num_bytes: 1029239
    num_examples: 5010
  - name: validation
    num_bytes: 510104
    num_examples: 2490
  download_size: 55968680
  dataset_size: 87348122
- config_name: hi
  features:
  - name: premise
    dtype: string
  - name: hypothesis
    dtype: string
  - name: label
    dtype:
      class_label:
        names:
          '0': entailment
          '1': neutral
          '2': contradiction
  splits:
  - name: train
    num_bytes: 170593964
    num_examples: 392702
  - name: test
    num_bytes: 2073073
    num_examples: 5010
  - name: validation
    num_bytes: 1023915
    num_examples: 2490
  download_size: 70908548
  dataset_size: 173690952
- config_name: ru
  features:
  - name: premise
    dtype: string
  - name: hypothesis
    dtype: string
  - name: label
    dtype:
      class_label:
        names:
          '0': entailment
          '1': neutral
          '2': contradiction
  splits:
  - name: train
    num_bytes: 129859615
    num_examples: 392702
  - name: test
    num_bytes: 1603466
    num_examples: 5010
  - name: validation
    num_bytes: 786442
    num_examples: 2490
  download_size: 70702606
  dataset_size: 132249523
- config_name: sw
  features:
  - name: premise
    dtype: string
  - name: hypothesis
    dtype: string
  - name: label
    dtype:
      class_label:
        names:
          '0': entailment
          '1': neutral
          '2': contradiction
  splits:
  - name: train
    num_bytes: 69285725
    num_examples: 392702
  - name: test
    num_bytes: 871651
    num_examples: 5010
  - name: validation
    num_bytes: 429850
    num_examples: 2490
  download_size: 45564152
  dataset_size: 70587226
- config_name: th
  features:
  - name: premise
    dtype: string
  - name: hypothesis
    dtype: string
  - name: label
    dtype:
      class_label:
        names:
          '0': entailment
          '1': neutral
          '2': contradiction
  splits:
  - name: train
    num_bytes: 176062892
    num_examples: 392702
  - name: test
    num_bytes: 2147015
    num_examples: 5010
  - name: validation
    num_bytes: 1061160
    num_examples: 2490
  download_size: 77222045
  dataset_size: 179271067
- config_name: tr
  features:
  - name: premise
    dtype: string
  - name: hypothesis
    dtype: string
  - name: label
    dtype:
      class_label:
        names:
          '0': entailment
          '1': neutral
          '2': contradiction
  splits:
  - name: train
    num_bytes: 71637140
    num_examples: 392702
  - name: test
    num_bytes: 934934
    num_examples: 5010
  - name: validation
    num_bytes: 459308
    num_examples: 2490
  download_size: 48509680
  dataset_size: 73031382
- config_name: ur
  features:
  - name: premise
    dtype: string
  - name: hypothesis
    dtype: string
  - name: label
    dtype:
      class_label:
        names:
          '0': entailment
          '1': neutral
          '2': contradiction
  splits:
  - name: train
    num_bytes: 96441486
    num_examples: 392702
  - name: test
    num_bytes: 1416241
    num_examples: 5010
  - name: validation
    num_bytes: 699952
    num_examples: 2490
  download_size: 46682785
  dataset_size: 98557679
- config_name: vi
  features:
  - name: premise
    dtype: string
  - name: hypothesis
    dtype: string
  - name: label
    dtype:
      class_label:
        names:
          '0': entailment
          '1': neutral
          '2': contradiction
  splits:
  - name: train
    num_bytes: 101417430
    num_examples: 392702
  - name: test
    num_bytes: 1190217
    num_examples: 5010
  - name: validation
    num_bytes: 590680
    num_examples: 2490
  download_size: 57690058
  dataset_size: 103198327
- config_name: zh
  features:
  - name: premise
    dtype: string
  - name: hypothesis
    dtype: string
  - name: label
    dtype:
      class_label:
        names:
          '0': entailment
          '1': neutral
          '2': contradiction
  splits:
  - name: train
    num_bytes: 72224841
    num_examples: 392702
  - name: test
    num_bytes: 777929
    num_examples: 5010
  - name: validation
    num_bytes: 384851
    num_examples: 2490
  download_size: 48269855
  dataset_size: 73387621
configs:
- config_name: default
  data_files:
  - path: test/*.parquet
    split: test
  - path: train/*.parquet
    split: train
  - path: validation/*.parquet
    split: validation
- config_name: ar
  data_files:
  - path: xnli/test/ar.parquet
    split: test
  - path: xnli/train/ar.parquet
    split: train
  - path: xnli/validation/ar.parquet
    split: validation
- config_name: ru
  data_files:
  - path: xnli/test/ru.parquet
    split: test
  - path: xnli/train/ru.parquet
    split: train
  - path: xnli/validation/ru.parquet
    split: validation
- config_name: el
  data_files:
  - path: xnli/test/el.parquet
    split: test
  - path: xnli/train/el.parquet
    split: train
  - path: xnli/validation/el.parquet
    split: validation
- config_name: th
  data_files:
  - path: xnli/test/th.parquet
    split: test
  - path: xnli/train/th.parquet
    split: train
  - path: xnli/validation/th.parquet
    split: validation
- config_name: fr
  data_files:
  - path: xnli/test/fr.parquet
    split: test
  - path: xnli/train/fr.parquet
    split: train
  - path: xnli/validation/fr.parquet
    split: validation
- config_name: de
  data_files:
  - path: xnli/test/de.parquet
    split: test
  - path: xnli/train/de.parquet
    split: train
  - path: xnli/validation/de.parquet
    split: validation
- config_name: zh
  data_files:
  - path: xnli/test/zh.parquet
    split: test
  - path: xnli/train/zh.parquet
    split: train
  - path: xnli/validation/zh.parquet
    split: validation
- config_name: ur
  data_files:
  - path: xnli/test/ur.parquet
    split: test
  - path: xnli/train/ur.parquet
    split: train
  - path: xnli/validation/ur.parquet
    split: validation
- config_name: sw
  data_files:
  - path: xnli/test/sw.parquet
    split: test
  - path: xnli/train/sw.parquet
    split: train
  - path: xnli/validation/sw.parquet
    split: validation
- config_name: bg
  data_files:
  - path: xnli/test/bg.parquet
    split: test
  - path: xnli/train/bg.parquet
    split: train
  - path: xnli/validation/bg.parquet
    split: validation
- config_name: es
  data_files:
  - path: xnli/test/es.parquet
    split: test
  - path: xnli/train/es.parquet
    split: train
  - path: xnli/validation/es.parquet
    split: validation
- config_name: en
  data_files:
  - path: xnli/test/en.parquet
    split: test
  - path: xnli/train/en.parquet
    split: train
  - path: xnli/validation/en.parquet
    split: validation
- config_name: vi
  data_files:
  - path: xnli/test/vi.parquet
    split: test
  - path: xnli/train/vi.parquet
    split: train
  - path: xnli/validation/vi.parquet
    split: validation
- config_name: tr
  data_files:
  - path: xnli/test/tr.parquet
    split: test
  - path: xnli/train/tr.parquet
    split: train
  - path: xnli/validation/tr.parquet
    split: validation
- config_name: hi
  data_files:
  - path: xnli/test/hi.parquet
    split: test
  - path: xnli/train/hi.parquet
    split: train
  - path: xnli/validation/hi.parquet
    split: validation
---

# Dataset Card for "xnli"

## Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

- **Homepage:** [https://www.nyu.edu/projects/bowman/xnli/](https://www.nyu.edu/projects/bowman/xnli/)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 7.74 GB
- **Size of the generated dataset:** 3.23 GB
- **Total amount of disk used:** 10.97 GB

### Dataset Summary

XNLI is a subset of a few thousand examples from MNLI that has been translated into 14 different languages (some of them relatively low-resource). As with MNLI, the goal is to predict textual entailment (does sentence A imply, contradict, or neither imply nor contradict sentence B?), framed as a classification task: given two sentences, predict one of three labels.

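For orientation, here is a minimal sketch of how one configuration of this dataset can be loaded with the `datasets` library. The configuration names (`all_languages` plus the 15 language codes) and the train/validation/test splits come from the metadata above; using `"xnli"` as the dataset identifier is an assumption based on this card's title, so substitute this repository's id if you want to load the parquet files uploaded here.

```python
from datasets import load_dataset

# Minimal sketch, assuming the canonical "xnli" identifier from this card's title.
# Swap in this repository's id on the Hub if you want these parquet files specifically.
xnli_en = load_dataset("xnli", "en")

print(xnli_en)              # DatasetDict with train / validation / test splits
print(xnli_en["train"][0])  # {'premise': ..., 'hypothesis': ..., 'label': 0, 1 or 2}
```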

### Supported Tasks and Leaderboards

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Languages

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

## Dataset Structure

### Data Instances

#### all_languages

- **Size of downloaded dataset files:** 483.96 MB
- **Size of the generated dataset:** 1.61 GB
- **Total amount of disk used:** 2.09 GB

An example of 'train' looks as follows.
```
This example was too long and was cropped:

{
    "hypothesis": "{\"language\": [\"ar\", \"bg\", \"de\", \"el\", \"en\", \"es\", \"fr\", \"hi\", \"ru\", \"sw\", \"th\", \"tr\", \"ur\", \"vi\", \"zh\"], \"translation\": [\"احد اع...",
    "label": 0,
    "premise": "{\"ar\": \"واحدة من رقابنا ستقوم بتنفيذ تعليماتك كلها بكل دقة\", \"bg\": \"един от нашите номера ще ви даде инструкции .\", \"de\": \"Eine ..."
}
```

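In the `all_languages` configuration, `premise` is a translation dictionary keyed by language code, while `hypothesis` holds two parallel lists (`language` and `translation`), as the cropped example shows. A small sketch of how those nested fields can be unpacked, under the same loading assumption as above:

```python
from datasets import load_dataset

# Minimal sketch, assuming the canonical "xnli" identifier as in the earlier snippet.
ds = load_dataset("xnli", "all_languages", split="validation")
example = ds[0]

premise_en = example["premise"]["en"]                          # premise: dict of language code -> text
hypotheses = dict(zip(example["hypothesis"]["language"],       # hypothesis: parallel lists of
                      example["hypothesis"]["translation"]))   # language codes and translations
print(premise_en, "|", hypotheses["en"], "|", example["label"])
```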

#### ar

- **Size of downloaded dataset files:** 483.96 MB
- **Size of the generated dataset:** 109.32 MB
- **Total amount of disk used:** 593.29 MB

An example of 'validation' looks as follows.
```
{
    "hypothesis": "اتصل بأمه حالما أوصلته حافلة المدرسية.",
    "label": 1,
    "premise": "وقال، ماما، لقد عدت للمنزل."
}
```

#### bg

- **Size of downloaded dataset files:** 483.96 MB
- **Size of the generated dataset:** 128.32 MB
- **Total amount of disk used:** 612.28 MB

An example of 'train' looks as follows.
```
This example was too long and was cropped:

{
    "hypothesis": "\"губиш нещата на следното ниво , ако хората си припомнят .\"...",
    "label": 0,
    "premise": "\"по време на сезона и предполагам , че на твоето ниво ще ги загубиш на следващото ниво , ако те решат да си припомнят отбора на ..."
}
```

#### de

- **Size of downloaded dataset files:** 483.96 MB
- **Size of the generated dataset:** 86.17 MB
- **Total amount of disk used:** 570.14 MB

An example of 'train' looks as follows.
```
This example was too long and was cropped:

{
    "hypothesis": "Man verliert die Dinge auf die folgende Ebene , wenn sich die Leute erinnern .",
    "label": 0,
    "premise": "\"Du weißt , während der Saison und ich schätze , auf deiner Ebene verlierst du sie auf die nächste Ebene , wenn sie sich entschl..."
}
```

#### el

- **Size of downloaded dataset files:** 483.96 MB
- **Size of the generated dataset:** 142.30 MB
- **Total amount of disk used:** 626.26 MB

An example of 'validation' looks as follows.
```
This example was too long and was cropped:

{
    "hypothesis": "\"Τηλεφώνησε στη μαμά του μόλις το σχολικό λεωφορείο τον άφησε.\"...",
    "label": 1,
    "premise": "Και είπε, Μαμά, έφτασα στο σπίτι."
}
```

### Data Fields

The data fields are the same among all splits.

#### all_languages
- `premise`: a multilingual `string` variable, with possible languages including `ar`, `bg`, `de`, `el`, `en`.
- `hypothesis`: a multilingual `string` variable, with possible languages including `ar`, `bg`, `de`, `el`, `en`.
- `label`: a classification label, with possible values including `entailment` (0), `neutral` (1), `contradiction` (2).

#### ar
- `premise`: a `string` feature.
- `hypothesis`: a `string` feature.
- `label`: a classification label, with possible values including `entailment` (0), `neutral` (1), `contradiction` (2).

#### bg
- `premise`: a `string` feature.
- `hypothesis`: a `string` feature.
- `label`: a classification label, with possible values including `entailment` (0), `neutral` (1), `contradiction` (2).

#### de
- `premise`: a `string` feature.
- `hypothesis`: a `string` feature.
- `label`: a classification label, with possible values including `entailment` (0), `neutral` (1), `contradiction` (2).

#### el
- `premise`: a `string` feature.
- `hypothesis`: a `string` feature.
- `label`: a classification label, with possible values including `entailment` (0), `neutral` (1), `contradiction` (2).

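The integer labels can be mapped back to their names through the `ClassLabel` feature attached to the `label` column; a brief sketch, again assuming the `xnli` identifier:

```python
from datasets import load_dataset

# Minimal sketch: inspect the label mapping of a single-language configuration.
ds = load_dataset("xnli", "ar", split="validation")
label_feature = ds.features["label"]

print(label_feature.names)                    # ['entailment', 'neutral', 'contradiction']
print(label_feature.int2str(ds[0]["label"]))  # e.g. 'neutral' for a label value of 1
```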

### Data Splits

| name          |  train | validation | test |
|---------------|-------:|-----------:|-----:|
| all_languages | 392702 |       2490 | 5010 |
| ar            | 392702 |       2490 | 5010 |
| bg            | 392702 |       2490 | 5010 |
| de            | 392702 |       2490 | 5010 |
| el            | 392702 |       2490 | 5010 |

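The table lists only the first few configurations; per the metadata above, every configuration shares the same split sizes. A short sketch for checking them on a loaded configuration, under the same loading assumption as the earlier snippets:

```python
from datasets import load_dataset

# Minimal sketch: confirm the split sizes reported in the table above.
ds = load_dataset("xnli", "de")
print({split: ds[split].num_rows for split in ds})
# Expected per the table: {'train': 392702, 'validation': 2490, 'test': 5010}
```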

## Dataset Creation

### Curation Rationale

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Source Data

#### Initial Data Collection and Normalization

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

#### Who are the source language producers?

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Annotations

#### Annotation process

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

#### Who are the annotators?

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Personal and Sensitive Information

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Discussion of Biases

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Other Known Limitations

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

## Additional Information

### Dataset Curators

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Licensing Information

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Citation Information

```
@InProceedings{conneau2018xnli,
  author = {Conneau, Alexis
            and Rinott, Ruty
            and Lample, Guillaume
            and Williams, Adina
            and Bowman, Samuel R.
            and Schwenk, Holger
            and Stoyanov, Veselin},
  title = {XNLI: Evaluating Cross-lingual Sentence Representations},
  booktitle = {Proceedings of the 2018 Conference on Empirical Methods
               in Natural Language Processing},
  year = {2018},
  publisher = {Association for Computational Linguistics},
  location = {Brussels, Belgium},
}
```

### Contributions

Thanks to [@lewtun](https://github.com/lewtun), [@mariamabarham](https://github.com/mariamabarham), [@thomwolf](https://github.com/thomwolf), [@lhoestq](https://github.com/lhoestq), [@patrickvonplaten](https://github.com/patrickvonplaten) for adding this dataset.
test/ar.parquet ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:57bd2e5d600b39fa4cfde461ecef56710a20fce60544aaaacc9d0193d82b99af
size 233002

test/bg.parquet ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:718075303deecc173d9ac9c57381f9c1dc1bf5e3af1ae494bd9897d5ca2474bd
size 258320

test/de.parquet ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:7d4b959be2b4eae34382fa85d90ecdd763df0e8bd1fea251ef26669341e6c796
size 220967

test/el.parquet ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:ac87aa1b82f1b8c74a08cb91463ebb9891f7197eb55122678c129327730a1459
size 280964

test/en.parquet ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:513ed50cdd7466ed95bddccc3d556603524ec44211a7e7adcebb1f53bb9c2f74
size 193135

test/es.parquet ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:3dbc39043460d8177ff00828e3bbe79931b7cbc0e53f288b9882c899852332ed
size 209541

test/fr.parquet ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:6965ea7b3bbd99c0d56628154e1e0d715a8e43203e309a04ee8cbf762d947359
size 219638

test/hi.parquet ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:5edb3e7283982448c2a577178ea86996b92a793175ae8ece8e24e07efc8f68a5
size 300127

test/ru.parquet ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:86a33be9156bd0b54f1400ef9d107e445c1fe0de1a0e56185a838bfd9648e9f4
size 275621

test/sw.parquet ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:702d64a6aa15691c09499df915cfcbc15b30a0608391fd9cd81605320c88c8f8
size 193773

test/th.parquet ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:27dbfd1abbcb583f7690a0aacbe411998247bd5363e61a749293f108c35d3ad9
size 294030

test/tr.parquet ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:d08a7a73a2be26a3830bcdf4eefe3a39d5298c60e744f975cc25595ce9cd0553
size 208048

test/ur.parquet ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:f584ba23fe1e30df1716c8998792bfd1faee95ba2a38ce7607d4b2a7faf6bc52
size 254065

test/vi.parquet ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:cd7a07864bf1872b85722f26b86f4e8b49694d56dbb215954ba35543d40fc699
size 220438

test/zh.parquet ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:7b2b978cbdb2ee19a7775a39ce541998923a68033082b408d846100c40179762
size 205071
train/ar.parquet ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:c6617cb3a1c2e337c57ad6d4ddea4a632ba9b309e7e3817401d4fced8bff1b90
size 18929058

train/bg.parquet ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:ca65f443d9066cc249a27800e46c1dbaba65facc5e7c9eccb5c10430fef6afff
size 20545413

train/de.parquet ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:b20baa33ff8ee80464ae3b69d590b2d2c2c427d9a727f89e7b686fa565b3c222
size 17768864

train/el.parquet ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:23a64c8d6dfa7b777c3935c6770dc2003ee8250514900a9e0bcf96acc9d76f46
size 22757125

train/en.parquet ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:ab1b5f1d53de417dc88498844b3e1d7d4633601a5be43b050698b962a8814db0
size 16470620

train/es.parquet ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:d9c5280e3b86bfd524bd5ea16170edc2c6214adadf6f8ab9db338bc7376cc5c4
size 16916130

train/fr.parquet ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:3b8b9c6b194ad0d3a2b92dde64558faae1d8c6d20407bc2a7b1df659f626b236
size 17445657

train/hi.parquet ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:c731a30a10c5250df5f237184d91f7c42001647e72dcc920194c144ce9d89029
size 23964118

train/ru.parquet ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:81e45c4bc86d38c76f419978c6f716e03e73b8d3d47b540ecbcda1fc10f56f27
size 21709852

train/sw.parquet ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:876d90874b6fca1b9f05554bebd3f71c2c25ded28d727bc34933f14424a0ede8
size 14867169

train/th.parquet ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:d6e827f9f087d00d70a4813eac3da6f06e26ddb20633c1e04ddc88cb1d0263b5
size 24428714

train/tr.parquet ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:8085f5177381ab872398486aec5e064e3a6930d8a4b2cb8031acc80d9a2d5b34
size 15605199

train/ur.parquet ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:02214b020397667d9f30345f147413b741c149944a7529e2008410b8fe38dd6d
size 14765512

train/vi.parquet ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:d459426d8e7d9e2488aa69cfee1209f2907512ca0cacfdf1f03512ed17fe2294
size 17962036

train/zh.parquet ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:44c2139b466dd0001ef870a502b5b80cb7597226fea0052bebbdb5846aa1306e
size 16127621
validation/ar.parquet ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:7c1c33814c47595c47def926d174150c7342529d67b5aef950868bb05f23f999
size 122792

validation/bg.parquet ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:164b21516f20fca149f1999adc6332b3ba03fc331ee9312861d51034aff1b76f
size 137688

validation/de.parquet ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:8e3685282391d83a15470ceb94671a6b4238818af6d2dcb584cd37d5abe0766c
size 117007

validation/el.parquet ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:fc6ef7484e9b96028f371bd090d20f3397303fa7a7481a37d90dd207ed4d09ec
size 147924

validation/en.parquet ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:5a6206ed0b8973aca561773316372104d6116f18ec1f9422555d2ab8155239d5
size 101056

validation/es.parquet ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:ab43da3fe978df1e0432999fd100a1cf442ad4872d209b3fc842913aa7699c6a
size 110061

validation/fr.parquet ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:d9cb3b60d9c1ea2e241d4ce52df73cc87276cfc6b1ffdc92425473fe28ad4aed
size 115966

validation/hi.parquet ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:95a4de348dc5ef55b710219bb34655381cc20b0705d5319d95f017f590b6bd76
size 155184

validation/ru.parquet ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:4138d2ccf1adc0e77d8e84c88360cd413ffd75b620ce45d839f1a9fbf756a173
size 146084

validation/sw.parquet ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:96aaf35f7940770202b2510073c9ea5bce1157dbd39eba1531917420f5c2ea07
size 100888

validation/th.parquet ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:510d1a23991b16a7ee7087b8afa4b093a21437bbfdcb527d05595d7c32bc99d4
size 152484

validation/tr.parquet ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:d39eac5519a91fd5458291955578c74f342a3b59bc345084bc2f59020d3a3bba
size 108470

validation/ur.parquet ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:2c550bc52943f251280508a59a960dda92028b250293a8a2b57198042d7b7bf0
size 134087

validation/vi.parquet ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:5f62bd2772b605bceaf305799f5ddf27076b8e77bdaeea17cc1965bea71a66b0
size 117265

validation/zh.parquet ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:854383295007efd210611d0df85b2a08285924422329e7ff920e7ed5894d01c1
size 109233
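
Since each split is stored as plain parquet, the files listed above can also be read without the `datasets` loader. A minimal sketch using pandas over the `hf://` filesystem (requires the `huggingface_hub` package); the repository id is left as a placeholder because it is not stated in this card:

```python
import pandas as pd

# Minimal sketch: read one of the parquet files from this commit directly.
# "<user>/<dataset>" is a placeholder for this repository's id on the Hugging Face Hub.
df = pd.read_parquet("hf://datasets/<user>/<dataset>/test/en.parquet")

print(df.columns.tolist())  # expected columns: ['premise', 'hypothesis', 'label']
print(len(df))              # expected: 5010 rows in a test split
```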