Xieyiyiyi committed
Commit 413db44
1 Parent(s): 08583ba

Update README.md

Files changed (1):
  1. README.md +611 -3
README.md CHANGED
@@ -1,5 +1,613 @@
 ---
-license: other
 language:
-- aa
----
 ---
+annotations_creators:
+- expert-generated
+language_creators:
+- other
 language:
+- en
+license:
+- unknown
+multilinguality:
+- monolingual
+size_categories:
+- 10K<n<100K
+source_datasets:
+- extended|other
+task_categories:
+- text-classification
+- token-classification
+- question-answering
+task_ids:
+- natural-language-inference
+- word-sense-disambiguation
+- coreference-resolution
+- extractive-qa
+paperswithcode_id: superglue
+pretty_name: SuperGLUE
+tags:
+- superglue
+- NLU
+- natural language understanding
+dataset_info:
+- config_name: boolq
+  features:
+  - name: question
+    dtype: string
+  - name: passage
+    dtype: string
+  - name: idx
+    dtype: int32
+  - name: label
+    dtype:
+      class_label:
+        names:
+          '0': 'False'
+          '1': 'True'
+  splits:
+  - name: test
+    num_bytes: 2107997
+    num_examples: 3245
+  - name: train
+    num_bytes: 6179206
+    num_examples: 9427
+  - name: validation
+    num_bytes: 2118505
+    num_examples: 3270
+  download_size: 4118001
+  dataset_size: 10405708
+- config_name: cb
+  features:
+  - name: premise
+    dtype: string
+  - name: hypothesis
+    dtype: string
+  - name: idx
+    dtype: int32
+  - name: label
+    dtype:
+      class_label:
+        names:
+          '0': entailment
+          '1': contradiction
+          '2': neutral
+  splits:
+  - name: test
+    num_bytes: 93660
+    num_examples: 250
+  - name: train
+    num_bytes: 87218
+    num_examples: 250
+  - name: validation
+    num_bytes: 21894
+    num_examples: 56
+  download_size: 75482
+  dataset_size: 202772
+- config_name: copa
+  features:
+  - name: premise
+    dtype: string
+  - name: choice1
+    dtype: string
+  - name: choice2
+    dtype: string
+  - name: question
+    dtype: string
+  - name: idx
+    dtype: int32
+  - name: label
+    dtype:
+      class_label:
+        names:
+          '0': choice1
+          '1': choice2
+  splits:
+  - name: test
+    num_bytes: 60303
+    num_examples: 500
+  - name: train
+    num_bytes: 49599
+    num_examples: 400
+  - name: validation
+    num_bytes: 12586
+    num_examples: 100
+  download_size: 43986
+  dataset_size: 122488
+- config_name: multirc
+  features:
+  - name: paragraph
+    dtype: string
+  - name: question
+    dtype: string
+  - name: answer
+    dtype: string
+  - name: idx
+    struct:
+    - name: paragraph
+      dtype: int32
+    - name: question
+      dtype: int32
+    - name: answer
+      dtype: int32
+  - name: label
+    dtype:
+      class_label:
+        names:
+          '0': 'False'
+          '1': 'True'
+  splits:
+  - name: test
+    num_bytes: 14996451
+    num_examples: 9693
+  - name: train
+    num_bytes: 46213579
+    num_examples: 27243
+  - name: validation
+    num_bytes: 7758918
+    num_examples: 4848
+  download_size: 1116225
+  dataset_size: 68968948
+- config_name: record
+  features:
+  - name: passage
+    dtype: string
+  - name: query
+    dtype: string
+  - name: entities
+    sequence: string
+  - name: entity_spans
+    sequence:
+    - name: text
+      dtype: string
+    - name: start
+      dtype: int32
+    - name: end
+      dtype: int32
+  - name: answers
+    sequence: string
+  - name: idx
+    struct:
+    - name: passage
+      dtype: int32
+    - name: query
+      dtype: int32
+  splits:
+  - name: train
+    num_bytes: 179232052
+    num_examples: 100730
+  - name: validation
+    num_bytes: 17479084
+    num_examples: 10000
+  - name: test
+    num_bytes: 17200575
+    num_examples: 10000
+  download_size: 51757880
+  dataset_size: 213911711
+- config_name: rte
+  features:
+  - name: premise
+    dtype: string
+  - name: hypothesis
+    dtype: string
+  - name: idx
+    dtype: int32
+  - name: label
+    dtype:
+      class_label:
+        names:
+          '0': entailment
+          '1': not_entailment
+  splits:
+  - name: test
+    num_bytes: 975799
+    num_examples: 3000
+  - name: train
+    num_bytes: 848745
+    num_examples: 2490
+  - name: validation
+    num_bytes: 90899
+    num_examples: 277
+  download_size: 750920
+  dataset_size: 1915443
+- config_name: wic
+  features:
+  - name: word
+    dtype: string
+  - name: sentence1
+    dtype: string
+  - name: sentence2
+    dtype: string
+  - name: start1
+    dtype: int32
+  - name: start2
+    dtype: int32
+  - name: end1
+    dtype: int32
+  - name: end2
+    dtype: int32
+  - name: idx
+    dtype: int32
+  - name: label
+    dtype:
+      class_label:
+        names:
+          '0': 'False'
+          '1': 'True'
+  splits:
+  - name: test
+    num_bytes: 180593
+    num_examples: 1400
+  - name: train
+    num_bytes: 665183
+    num_examples: 5428
+  - name: validation
+    num_bytes: 82623
+    num_examples: 638
+  download_size: 396213
+  dataset_size: 928399
+- config_name: wsc
+  features:
+  - name: text
+    dtype: string
+  - name: span1_index
+    dtype: int32
+  - name: span2_index
+    dtype: int32
+  - name: span1_text
+    dtype: string
+  - name: span2_text
+    dtype: string
+  - name: idx
+    dtype: int32
+  - name: label
+    dtype:
+      class_label:
+        names:
+          '0': 'False'
+          '1': 'True'
+  splits:
+  - name: test
+    num_bytes: 31572
+    num_examples: 146
+  - name: train
+    num_bytes: 89883
+    num_examples: 554
+  - name: validation
+    num_bytes: 21637
+    num_examples: 104
+  download_size: 32751
+  dataset_size: 143092
+- config_name: wsc.fixed
+  features:
+  - name: text
+    dtype: string
+  - name: span1_index
+    dtype: int32
+  - name: span2_index
+    dtype: int32
+  - name: span1_text
+    dtype: string
+  - name: span2_text
+    dtype: string
+  - name: idx
+    dtype: int32
+  - name: label
+    dtype:
+      class_label:
+        names:
+          '0': 'False'
+          '1': 'True'
+  splits:
+  - name: test
+    num_bytes: 31568
+    num_examples: 146
+  - name: train
+    num_bytes: 89883
+    num_examples: 554
+  - name: validation
+    num_bytes: 21637
+    num_examples: 104
+  download_size: 32751
+  dataset_size: 143088
+- config_name: axb
+  features:
+  - name: sentence1
+    dtype: string
+  - name: sentence2
+    dtype: string
+  - name: idx
+    dtype: int32
+  - name: label
+    dtype:
+      class_label:
+        names:
+          '0': entailment
+          '1': not_entailment
+  splits:
+  - name: test
+    num_bytes: 238392
+    num_examples: 1104
+  download_size: 33950
+  dataset_size: 238392
+- config_name: axg
+  features:
+  - name: premise
+    dtype: string
+  - name: hypothesis
+    dtype: string
+  - name: idx
+    dtype: int32
+  - name: label
+    dtype:
+      class_label:
+        names:
+          '0': entailment
+          '1': not_entailment
+  splits:
+  - name: test
+    num_bytes: 53581
+    num_examples: 356
+  download_size: 10413
+  dataset_size: 53581
+---
+
+# Dataset Card for "super_glue"
+
+## Table of Contents
+- [Dataset Description](#dataset-description)
+  - [Dataset Summary](#dataset-summary)
+  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
+  - [Languages](#languages)
+- [Dataset Structure](#dataset-structure)
+  - [Data Instances](#data-instances)
+  - [Data Fields](#data-fields)
+  - [Data Splits](#data-splits)
+- [Dataset Creation](#dataset-creation)
+  - [Curation Rationale](#curation-rationale)
+  - [Source Data](#source-data)
+  - [Annotations](#annotations)
+  - [Personal and Sensitive Information](#personal-and-sensitive-information)
+- [Considerations for Using the Data](#considerations-for-using-the-data)
+  - [Social Impact of Dataset](#social-impact-of-dataset)
+  - [Discussion of Biases](#discussion-of-biases)
+  - [Other Known Limitations](#other-known-limitations)
+- [Additional Information](#additional-information)
+  - [Dataset Curators](#dataset-curators)
+  - [Licensing Information](#licensing-information)
+  - [Citation Information](#citation-information)
+  - [Contributions](#contributions)
+
+## Dataset Description
+
+- **Homepage:** [https://github.com/google-research-datasets/boolean-questions](https://github.com/google-research-datasets/boolean-questions)
+- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+- **Size of downloaded dataset files:** 55.66 MB
+- **Size of the generated dataset:** 238.01 MB
+- **Total amount of disk used:** 293.67 MB
+
+### Dataset Summary
+
+SuperGLUE (https://super.gluebenchmark.com/) is a new benchmark styled after
+GLUE with a new set of more difficult language understanding tasks, improved
+resources, and a new public leaderboard.
+
+BoolQ (Boolean Questions, Clark et al., 2019a) is a QA task where each example consists of a short
+passage and a yes/no question about the passage. The questions are provided anonymously and
+unsolicited by users of the Google search engine, and afterwards paired with a paragraph from a
+Wikipedia article containing the answer. Following the original work, we evaluate with accuracy.
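Each SuperGLUE task in this card is a named configuration of the `super_glue` dataset. A minimal sketch of how the configs would be selected (the `CONFIGS` list mirrors the `config_name` entries in the YAML metadata above; the `load_dataset` usage in the comment assumes the `datasets` library is installed):

```python
# Config names as declared in this card's YAML metadata.
CONFIGS = [
    "boolq", "cb", "copa", "multirc", "record",
    "rte", "wic", "wsc", "wsc.fixed", "axb", "axg",
]

# With `pip install datasets`, any config loads by name, e.g.:
#     from datasets import load_dataset
#     boolq = load_dataset("super_glue", "boolq")
#     print(boolq["train"][0])

print(len(CONFIGS))  # 11
```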
+
+### Supported Tasks and Leaderboards
+
+[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+
+### Languages
+
+[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+
+## Dataset Structure
+
+### Data Instances
+
+#### axb
+
+- **Size of downloaded dataset files:** 0.03 MB
+- **Size of the generated dataset:** 0.23 MB
+- **Total amount of disk used:** 0.26 MB
+
+An example of 'test' looks as follows.
+```
+
+```
+
+#### axg
+
+- **Size of downloaded dataset files:** 0.01 MB
+- **Size of the generated dataset:** 0.05 MB
+- **Total amount of disk used:** 0.06 MB
+
+An example of 'test' looks as follows.
+```
+
+```
+
+#### boolq
+
+- **Size of downloaded dataset files:** 3.93 MB
+- **Size of the generated dataset:** 9.92 MB
+- **Total amount of disk used:** 13.85 MB
+
+An example of 'train' looks as follows.
+```
+
+```
+
+#### cb
+
+- **Size of downloaded dataset files:** 0.07 MB
+- **Size of the generated dataset:** 0.19 MB
+- **Total amount of disk used:** 0.27 MB
+
+An example of 'train' looks as follows.
+```
+
+```
+
+#### copa
+
+- **Size of downloaded dataset files:** 0.04 MB
+- **Size of the generated dataset:** 0.12 MB
+- **Total amount of disk used:** 0.16 MB
+
+An example of 'train' looks as follows.
+```
+
+```
+
+### Data Fields
+
+The data fields are the same among all splits.
+
+#### axb
+- `sentence1`: a `string` feature.
+- `sentence2`: a `string` feature.
+- `idx`: an `int32` feature.
+- `label`: a classification label, with possible values including `entailment` (0), `not_entailment` (1).
+
+#### axg
+- `premise`: a `string` feature.
+- `hypothesis`: a `string` feature.
+- `idx`: an `int32` feature.
+- `label`: a classification label, with possible values including `entailment` (0), `not_entailment` (1).
+
+#### boolq
+- `question`: a `string` feature.
+- `passage`: a `string` feature.
+- `idx`: an `int32` feature.
+- `label`: a classification label, with possible values including `False` (0), `True` (1).
+
+#### cb
+- `premise`: a `string` feature.
+- `hypothesis`: a `string` feature.
+- `idx`: an `int32` feature.
+- `label`: a classification label, with possible values including `entailment` (0), `contradiction` (1), `neutral` (2).
+
+#### copa
+- `premise`: a `string` feature.
+- `choice1`: a `string` feature.
+- `choice2`: a `string` feature.
+- `question`: a `string` feature.
+- `idx`: an `int32` feature.
+- `label`: a classification label, with possible values including `choice1` (0), `choice2` (1).
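The integer labels above follow the usual class-label convention: the integer is an index into the config's list of class names. A stdlib-only sketch of that mapping, with names copied from the field lists above (`LABEL_NAMES` and `label_name` are illustrative names, not part of any API):

```python
# Label names per config, copied from the field lists above
# (position in the list == integer label stored in the data).
LABEL_NAMES = {
    "axb":   ["entailment", "not_entailment"],
    "axg":   ["entailment", "not_entailment"],
    "boolq": ["False", "True"],
    "cb":    ["entailment", "contradiction", "neutral"],
    "copa":  ["choice1", "choice2"],
}

def label_name(config: str, label: int) -> str:
    """Map an integer label to its class name for a given config."""
    return LABEL_NAMES[config][label]

print(label_name("cb", 1))  # contradiction
```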
+
+### Data Splits
+
+#### axb
+
+|   |test|
+|---|---:|
+|axb|1104|
+
+#### axg
+
+|   |test|
+|---|---:|
+|axg| 356|
+
+#### boolq
+
+|     |train|validation|test|
+|-----|----:|---------:|---:|
+|boolq| 9427|      3270|3245|
+
+#### cb
+
+|   |train|validation|test|
+|---|----:|---------:|---:|
+|cb |  250|        56| 250|
+
+#### copa
+
+|    |train|validation|test|
+|----|----:|---------:|---:|
+|copa|  400|       100| 500|
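As a quick sanity check, the split counts above can be kept next to the code that consumes them. A stdlib-only sketch using only numbers from the tables in this card (`SPLITS` and `total_examples` are illustrative names):

```python
# Examples per split, copied from the tables above.
SPLITS = {
    "axb":   {"test": 1104},
    "axg":   {"test": 356},
    "boolq": {"train": 9427, "validation": 3270, "test": 3245},
    "cb":    {"train": 250, "validation": 56, "test": 250},
    "copa":  {"train": 400, "validation": 100, "test": 500},
}

def total_examples(config: str) -> int:
    """Total number of examples across all splits of a config."""
    return sum(SPLITS[config].values())

print(total_examples("copa"))  # 1000
```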
+
+## Dataset Creation
+
+### Curation Rationale
+
+[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+
+### Source Data
+
+#### Initial Data Collection and Normalization
+
+[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+
+#### Who are the source language producers?
+
+[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+
+### Annotations
+
+#### Annotation process
+
+[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+
+#### Who are the annotators?
+
+[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+
+### Personal and Sensitive Information
+
+[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+
+## Considerations for Using the Data
+
+### Social Impact of Dataset
+
+[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+
+### Discussion of Biases
+
+[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+
+### Other Known Limitations
+
+[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+
+## Additional Information
+
+### Dataset Curators
+
+[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+
+### Licensing Information
+
+[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+
+### Citation Information
+
+```
+@inproceedings{clark2019boolq,
+  title={BoolQ: Exploring the Surprising Difficulty of Natural Yes/No Questions},
+  author={Clark, Christopher and Lee, Kenton and Chang, Ming-Wei and Kwiatkowski, Tom and Collins, Michael and Toutanova, Kristina},
+  booktitle={NAACL},
+  year={2019}
+}
+@article{wang2019superglue,
+  title={SuperGLUE: A Stickier Benchmark for General-Purpose Language Understanding Systems},
+  author={Wang, Alex and Pruksachatkun, Yada and Nangia, Nikita and Singh, Amanpreet and Michael, Julian and Hill, Felix and Levy, Omer and Bowman, Samuel R},
+  journal={arXiv preprint arXiv:1905.00537},
+  year={2019}
+}
+```
+
+Note that each SuperGLUE dataset has its own citation. Please see the source to
+get the correct citation for each contained dataset.
+
+### Contributions
+
+Thanks to [@thomwolf](https://github.com/thomwolf), [@lewtun](https://github.com/lewtun), [@patrickvonplaten](https://github.com/patrickvonplaten) for adding this dataset.