quincyqiang committed 1904eb1 (1 parent: a8d0c85)

Update README.md

Files changed (1): README.md (+611, -1)
Previous front matter (removed):

```
---
license: apache-2.0
---
```
---
annotations_creators:
- other
language_creators:
- other
language:
- en
license:
- cc-by-4.0
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- acceptability-classification
- natural-language-inference
- semantic-similarity-scoring
- sentiment-classification
- text-scoring
paperswithcode_id: glue
pretty_name: GLUE (General Language Understanding Evaluation benchmark)
train-eval-index:
- config: cola
  task: text-classification
  task_id: binary_classification
  splits:
    train_split: train
    eval_split: validation
  col_mapping:
    sentence: text
    label: target
- config: sst2
  task: text-classification
  task_id: binary_classification
  splits:
    train_split: train
    eval_split: validation
  col_mapping:
    sentence: text
    label: target
- config: mrpc
  task: text-classification
  task_id: natural_language_inference
  splits:
    train_split: train
    eval_split: validation
  col_mapping:
    sentence1: text1
    sentence2: text2
    label: target
- config: qqp
  task: text-classification
  task_id: natural_language_inference
  splits:
    train_split: train
    eval_split: validation
  col_mapping:
    question1: text1
    question2: text2
    label: target
- config: stsb
  task: text-classification
  task_id: natural_language_inference
  splits:
    train_split: train
    eval_split: validation
  col_mapping:
    sentence1: text1
    sentence2: text2
    label: target
- config: mnli
  task: text-classification
  task_id: natural_language_inference
  splits:
    train_split: train
    eval_split: validation_matched
  col_mapping:
    premise: text1
    hypothesis: text2
    label: target
- config: mnli_mismatched
  task: text-classification
  task_id: natural_language_inference
  splits:
    train_split: train
    eval_split: validation
  col_mapping:
    premise: text1
    hypothesis: text2
    label: target
- config: mnli_matched
  task: text-classification
  task_id: natural_language_inference
  splits:
    train_split: train
    eval_split: validation
  col_mapping:
    premise: text1
    hypothesis: text2
    label: target
- config: qnli
  task: text-classification
  task_id: natural_language_inference
  splits:
    train_split: train
    eval_split: validation
  col_mapping:
    question: text1
    sentence: text2
    label: target
- config: rte
  task: text-classification
  task_id: natural_language_inference
  splits:
    train_split: train
    eval_split: validation
  col_mapping:
    sentence1: text1
    sentence2: text2
    label: target
- config: wnli
  task: text-classification
  task_id: natural_language_inference
  splits:
    train_split: train
    eval_split: validation
  col_mapping:
    sentence1: text1
    sentence2: text2
    label: target
configs:
- ax
- cola
- mnli
- mnli_matched
- mnli_mismatched
- mrpc
- qnli
- qqp
- rte
- sst2
- stsb
- wnli
tags:
- qa-nli
- coreference-nli
- paraphrase-identification
---

# Dataset Card for GLUE

## Table of Contents
- [Dataset Card for GLUE](#dataset-card-for-glue)
  - [Table of Contents](#table-of-contents)
  - [Dataset Description](#dataset-description)
    - [Dataset Summary](#dataset-summary)
    - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
      - [ax](#ax)
      - [cola](#cola)
      - [mnli](#mnli)
      - [mnli_matched](#mnli_matched)
      - [mnli_mismatched](#mnli_mismatched)
      - [mrpc](#mrpc)
      - [qnli](#qnli)
      - [qqp](#qqp)
      - [rte](#rte)
      - [sst2](#sst2)
      - [stsb](#stsb)
      - [wnli](#wnli)
    - [Languages](#languages)
  - [Dataset Structure](#dataset-structure)
    - [Data Instances](#data-instances)
      - [ax](#ax-1)
      - [cola](#cola-1)
      - [mnli](#mnli-1)
      - [mnli_matched](#mnli_matched-1)
      - [mnli_mismatched](#mnli_mismatched-1)
      - [mrpc](#mrpc-1)
      - [qnli](#qnli-1)
      - [qqp](#qqp-1)
      - [rte](#rte-1)
      - [sst2](#sst2-1)
      - [stsb](#stsb-1)
      - [wnli](#wnli-1)
    - [Data Fields](#data-fields)
      - [ax](#ax-2)
      - [cola](#cola-2)
      - [mnli](#mnli-2)
      - [mnli_matched](#mnli_matched-2)
      - [mnli_mismatched](#mnli_mismatched-2)
      - [mrpc](#mrpc-2)
      - [qnli](#qnli-2)
      - [qqp](#qqp-2)
      - [rte](#rte-2)
      - [sst2](#sst2-2)
      - [stsb](#stsb-2)
      - [wnli](#wnli-2)
    - [Data Splits](#data-splits)
      - [ax](#ax-3)
      - [cola](#cola-3)
      - [mnli](#mnli-3)
      - [mnli_matched](#mnli_matched-3)
      - [mnli_mismatched](#mnli_mismatched-3)
      - [mrpc](#mrpc-3)
      - [qnli](#qnli-3)
      - [qqp](#qqp-3)
      - [rte](#rte-3)
      - [sst2](#sst2-3)
      - [stsb](#stsb-3)
      - [wnli](#wnli-3)
  - [Dataset Creation](#dataset-creation)
    - [Curation Rationale](#curation-rationale)
    - [Source Data](#source-data)
      - [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
      - [Who are the source language producers?](#who-are-the-source-language-producers)
    - [Annotations](#annotations)
      - [Annotation process](#annotation-process)
      - [Who are the annotators?](#who-are-the-annotators)
    - [Personal and Sensitive Information](#personal-and-sensitive-information)
  - [Considerations for Using the Data](#considerations-for-using-the-data)
    - [Social Impact of Dataset](#social-impact-of-dataset)
    - [Discussion of Biases](#discussion-of-biases)
    - [Other Known Limitations](#other-known-limitations)
  - [Additional Information](#additional-information)
    - [Dataset Curators](#dataset-curators)
    - [Licensing Information](#licensing-information)
    - [Citation Information](#citation-information)
    - [Contributions](#contributions)

## Dataset Description

- **Homepage:** [https://nyu-mll.github.io/CoLA/](https://nyu-mll.github.io/CoLA/)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 955.33 MB
- **Size of the generated dataset:** 229.68 MB
- **Total amount of disk used:** 1185.01 MB

### Dataset Summary

GLUE, the General Language Understanding Evaluation benchmark (https://gluebenchmark.com/), is a collection of resources for training, evaluating, and analyzing natural language understanding systems.

### Supported Tasks and Leaderboards

The leaderboard for the GLUE benchmark can be found [at this address](https://gluebenchmark.com/). It comprises the following tasks:

#### ax

A manually-curated evaluation dataset for fine-grained analysis of system performance on a broad range of linguistic phenomena. This dataset evaluates sentence understanding through Natural Language Inference (NLI) problems. Use a model trained on MultiNLI to produce predictions for this dataset.

#### cola

The Corpus of Linguistic Acceptability consists of English acceptability judgments drawn from books and journal articles on linguistic theory. Each example is a sequence of words annotated with whether it is a grammatical English sentence.

#### mnli

The Multi-Genre Natural Language Inference Corpus is a crowdsourced collection of sentence pairs with textual entailment annotations. Given a premise sentence and a hypothesis sentence, the task is to predict whether the premise entails the hypothesis (entailment), contradicts the hypothesis (contradiction), or neither (neutral). The premise sentences are gathered from ten different sources, including transcribed speech, fiction, and government reports. The authors of the benchmark use the standard test set, for which they obtained private labels from the RTE authors, and evaluate on both the matched (in-domain) and mismatched (cross-domain) sections. They also use and recommend the SNLI corpus as 550k examples of auxiliary training data.

#### mnli_matched

The matched validation and test splits from MNLI. See the "mnli" BuilderConfig for additional information.

#### mnli_mismatched

The mismatched validation and test splits from MNLI. See the "mnli" BuilderConfig for additional information.

#### mrpc

The Microsoft Research Paraphrase Corpus (Dolan & Brockett, 2005) is a corpus of sentence pairs automatically extracted from online news sources, with human annotations for whether the sentences in the pair are semantically equivalent.

#### qnli

The Stanford Question Answering Dataset is a question-answering dataset consisting of question-paragraph pairs, where one of the sentences in the paragraph (drawn from Wikipedia) contains the answer to the corresponding question (written by an annotator). The authors of the benchmark convert the task into sentence pair classification by forming a pair between each question and each sentence in the corresponding context, and filtering out pairs with low lexical overlap between the question and the context sentence. The task is to determine whether the context sentence contains the answer to the question. This modified version of the original task removes the requirement that the model select the exact answer, but also removes the simplifying assumptions that the answer is always present in the input and that lexical overlap is a reliable cue.
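
The lexical-overlap filter described above can be sketched in a few lines. This is an illustrative reconstruction, not the benchmark's actual preprocessing code; the whitespace tokenizer and the 0.1 threshold are assumptions:

```python
def lexical_overlap(question: str, sentence: str) -> float:
    """Fraction of question tokens that also occur in the candidate sentence."""
    q_tokens = set(question.lower().split())
    s_tokens = set(sentence.lower().split())
    if not q_tokens:
        return 0.0
    return len(q_tokens & s_tokens) / len(q_tokens)


def keep_pair(question: str, sentence: str, threshold: float = 0.1) -> bool:
    # Pairs with low lexical overlap between question and sentence are dropped.
    return lexical_overlap(question, sentence) >= threshold
```

Dropping low-overlap pairs removes examples that would be trivially classifiable as "no answer" from surface cues alone.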

#### qqp

The Quora Question Pairs dataset is a collection of question pairs from the community question-answering website Quora. The task is to determine whether a pair of questions are semantically equivalent.

#### rte

The Recognizing Textual Entailment (RTE) datasets come from a series of annual textual entailment challenges. The authors of the benchmark combined the data from RTE1 (Dagan et al., 2006), RTE2 (Bar Haim et al., 2006), RTE3 (Giampiccolo et al., 2007), and RTE5 (Bentivogli et al., 2009). Examples are constructed based on news and Wikipedia text. The authors of the benchmark convert all datasets to a two-class split, where for three-class datasets they collapse neutral and contradiction into not entailment, for consistency.
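
The two-class conversion is a simple label collapse; a minimal sketch, assuming the usual NLI label names:

```python
def to_two_class(label: str) -> str:
    """Collapse three-way NLI labels into RTE's two classes."""
    # neutral and contradiction are merged into not_entailment for consistency.
    return "entailment" if label == "entailment" else "not_entailment"
```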

#### sst2

The Stanford Sentiment Treebank consists of sentences from movie reviews and human annotations of their sentiment. The task is to predict the sentiment of a given sentence. It uses the two-way (positive/negative) class split, with only sentence-level labels.

#### stsb

The Semantic Textual Similarity Benchmark (Cer et al., 2017) is a collection of sentence pairs drawn from news headlines, video and image captions, and natural language inference data. Each pair is human-annotated with a similarity score from 1 to 5.
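
Because stsb predictions are real-valued similarity scores rather than class labels, GLUE scores them against the human annotations with Pearson and Spearman correlation. A minimal Pearson correlation, for illustration only:

```python
import math


def pearson(preds: list[float], golds: list[float]) -> float:
    """Pearson correlation between predicted and gold similarity scores."""
    n = len(preds)
    mp, mg = sum(preds) / n, sum(golds) / n
    cov = sum((p - mp) * (g - mg) for p, g in zip(preds, golds))
    sp = math.sqrt(sum((p - mp) ** 2 for p in preds))
    sg = math.sqrt(sum((g - mg) ** 2 for g in golds))
    return cov / (sp * sg)
```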

#### wnli

The Winograd Schema Challenge (Levesque et al., 2011) is a reading comprehension task in which a system must read a sentence with a pronoun and select the referent of that pronoun from a list of choices. The examples are manually constructed to foil simple statistical methods: each one is contingent on contextual information provided by a single word or phrase in the sentence. To convert the problem into sentence pair classification, the authors of the benchmark construct sentence pairs by replacing the ambiguous pronoun with each possible referent. The task is to predict if the sentence with the pronoun substituted is entailed by the original sentence. They use a small evaluation set consisting of new examples derived from fiction books that was shared privately by the authors of the original corpus. While the included training set is balanced between two classes, the test set is imbalanced between them (65% not entailment). Also, due to a data quirk, the development set is adversarial: hypotheses are sometimes shared between training and development examples, so if a model memorizes the training examples, it will predict the wrong label on the corresponding development set example. As with QNLI, each example is evaluated separately, so there is not a systematic correspondence between a model's score on this task and its score on the unconverted original task. The authors of the benchmark call the converted dataset WNLI (Winograd NLI).
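
The sentence-pair construction can be sketched as follows. This illustrates the idea only; the benchmark's examples were constructed manually, and `make_wnli_pairs` is a hypothetical helper:

```python
import re


def make_wnli_pairs(sentence: str, pronoun: str, referents: list[str]) -> list[tuple[str, str]]:
    """Pair the original sentence with copies whose ambiguous pronoun is
    replaced by each candidate referent (first whole-word occurrence only)."""
    pattern = re.compile(r"\b" + re.escape(pronoun) + r"\b")
    return [(sentence, pattern.sub(referent, sentence, count=1)) for referent in referents]
```

For "The trophy didn't fit in the suitcase because it was too big." with candidate referents "the trophy" and "the suitcase", this yields two pairs, of which only the "the trophy" substitution is entailed by the original sentence.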

### Languages

The language data in GLUE is in English (BCP-47 `en`).

## Dataset Structure

### Data Instances

#### ax

- **Size of downloaded dataset files:** 0.21 MB
- **Size of the generated dataset:** 0.23 MB
- **Total amount of disk used:** 0.44 MB

An example of 'test' looks as follows.
```
{
  "premise": "The cat sat on the mat.",
  "hypothesis": "The cat did not sit on the mat.",
  "label": -1,
  "idx": 0
}
```
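
A `label` of `-1` denotes a missing gold label: `ax` ships as a test-only split, and held-out test labels are conventionally encoded as `-1`. Separating labeled from unlabeled records can be sketched as follows (the records below are invented for illustration):

```python
examples = [
    {"premise": "The cat sat on the mat.", "hypothesis": "The cat did not sit on the mat.", "label": -1, "idx": 0},
    {"premise": "It is raining.", "hypothesis": "Water is falling from the sky.", "label": 0, "idx": 1},
]

# Keep only examples that carry a usable gold label.
labeled = [ex for ex in examples if ex["label"] != -1]
unlabeled = [ex for ex in examples if ex["label"] == -1]
```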

#### cola

- **Size of downloaded dataset files:** 0.36 MB
- **Size of the generated dataset:** 0.58 MB
- **Total amount of disk used:** 0.94 MB

An example of 'train' looks as follows.
```
{
  "sentence": "Our friends won't buy this analysis, let alone the next one we propose.",
  "label": 1,
  "idx": 0
}
```

#### mnli

- **Size of downloaded dataset files:** 298.29 MB
- **Size of the generated dataset:** 78.65 MB
- **Total amount of disk used:** 376.95 MB

An example of 'train' looks as follows.
```
{
  "premise": "Conceptually cream skimming has two basic dimensions - product and geography.",
  "hypothesis": "Product and geography are what make cream skimming work.",
  "label": 1,
  "idx": 0
}
```

#### mnli_matched

- **Size of downloaded dataset files:** 298.29 MB
- **Size of the generated dataset:** 3.52 MB
- **Total amount of disk used:** 301.82 MB

An example of 'test' looks as follows.
```
{
  "premise": "Hierbas, ans seco, ans dulce, and frigola are just a few names worth keeping a look-out for.",
  "hypothesis": "Hierbas is a name worth looking out for.",
  "label": -1,
  "idx": 0
}
```

#### mnli_mismatched

- **Size of downloaded dataset files:** 298.29 MB
- **Size of the generated dataset:** 3.73 MB
- **Total amount of disk used:** 302.02 MB

An example of 'test' looks as follows.
```
{
  "premise": "What have you decided, what are you going to do?",
  "hypothesis": "So what's your decision?",
  "label": -1,
  "idx": 0
}
```

#### mrpc

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

#### qnli

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

#### qqp

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

#### rte

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

#### sst2

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

#### stsb

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

#### wnli

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Data Fields

The data fields are the same among all splits.

#### ax
- `premise`: a `string` feature.
- `hypothesis`: a `string` feature.
- `label`: a classification label, with possible values including `entailment` (0), `neutral` (1), `contradiction` (2).
- `idx`: an `int32` feature.

#### cola
- `sentence`: a `string` feature.
- `label`: a classification label, with possible values including `unacceptable` (0), `acceptable` (1).
- `idx`: an `int32` feature.

#### mnli
- `premise`: a `string` feature.
- `hypothesis`: a `string` feature.
- `label`: a classification label, with possible values including `entailment` (0), `neutral` (1), `contradiction` (2).
- `idx`: an `int32` feature.

#### mnli_matched
- `premise`: a `string` feature.
- `hypothesis`: a `string` feature.
- `label`: a classification label, with possible values including `entailment` (0), `neutral` (1), `contradiction` (2).
- `idx`: an `int32` feature.

#### mnli_mismatched
- `premise`: a `string` feature.
- `hypothesis`: a `string` feature.
- `label`: a classification label, with possible values including `entailment` (0), `neutral` (1), `contradiction` (2).
- `idx`: an `int32` feature.
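
The integer codes above index into a fixed list of label names; the mapping can be sketched in plain Python (a simplified stand-in for the `ClassLabel` feature the loader uses):

```python
NLI_LABELS = ["entailment", "neutral", "contradiction"]


def int2str(code: int) -> str:
    """Map an integer label code to its name."""
    return NLI_LABELS[code]


def str2int(name: str) -> int:
    """Map a label name back to its integer code."""
    return NLI_LABELS.index(name)
```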

#### mrpc

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

#### qnli

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

#### qqp

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

#### rte

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

#### sst2

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

#### stsb

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

#### wnli

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Data Splits

#### ax

|   |test|
|---|---:|
|ax |1104|

#### cola

|    |train|validation|test|
|----|----:|---------:|---:|
|cola| 8551|      1043|1063|

#### mnli

|    |train |validation_matched|validation_mismatched|test_matched|test_mismatched|
|----|-----:|-----------------:|--------------------:|-----------:|--------------:|
|mnli|392702|              9815|                 9832|        9796|           9847|

#### mnli_matched

|            |validation|test|
|------------|---------:|---:|
|mnli_matched|      9815|9796|

#### mnli_mismatched

|               |validation|test|
|---------------|---------:|---:|
|mnli_mismatched|      9832|9847|

#### mrpc

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

#### qnli

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

#### qqp

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

#### rte

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

#### sst2

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

#### stsb

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

#### wnli

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

## Dataset Creation

### Curation Rationale

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Source Data

#### Initial Data Collection and Normalization

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

#### Who are the source language producers?

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Annotations

#### Annotation process

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

#### Who are the annotators?

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Personal and Sensitive Information

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Discussion of Biases

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Other Known Limitations

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

## Additional Information

### Dataset Curators

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Licensing Information

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Citation Information

```
@article{warstadt2018neural,
  title={Neural Network Acceptability Judgments},
  author={Warstadt, Alex and Singh, Amanpreet and Bowman, Samuel R},
  journal={arXiv preprint arXiv:1805.12471},
  year={2018}
}
@inproceedings{wang2019glue,
  title={{GLUE}: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding},
  author={Wang, Alex and Singh, Amanpreet and Michael, Julian and Hill, Felix and Levy, Omer and Bowman, Samuel R.},
  note={In the Proceedings of ICLR.},
  year={2019}
}
```

Note that each GLUE dataset has its own citation. Please see the source to see the correct citation for each contained dataset.

### Contributions

Thanks to [@patpizio](https://github.com/patpizio), [@jeswan](https://github.com/jeswan), [@thomwolf](https://github.com/thomwolf), [@patrickvonplaten](https://github.com/patrickvonplaten), [@mariamabarham](https://github.com/mariamabarham) for adding this dataset.