system (HF staff) committed
Commit f021ae4
1 Parent(s): 8a92be9

Update files from the datasets library (from 1.7.0)


Release notes: https://github.com/huggingface/datasets/releases/tag/1.7.0

Files changed (1)
  1. README.md +22 -9
README.md CHANGED
```diff
@@ -63,6 +63,7 @@ task_ids:
   - semantic-similarity-scoring
   wnli:
   - text-classification-other-coreference-nli
+paperswithcode_id: glue
 ---
 
 # Dataset Card for "glue"
@@ -70,12 +71,12 @@ task_ids:
 ## Table of Contents
 - [Dataset Description](#dataset-description)
   - [Dataset Summary](#dataset-summary)
-  - [Supported Tasks](#supported-tasks)
+  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
   - [Languages](#languages)
 - [Dataset Structure](#dataset-structure)
   - [Data Instances](#data-instances)
   - [Data Fields](#data-fields)
-  - [Data Splits Sample Size](#data-splits-sample-size)
+  - [Data Splits](#data-splits)
 - [Dataset Creation](#dataset-creation)
   - [Curation Rationale](#curation-rationale)
   - [Source Data](#source-data)
@@ -105,6 +106,10 @@ task_ids:
 
 GLUE, the General Language Understanding Evaluation benchmark (https://gluebenchmark.com/) is a collection of resources for training, evaluating, and analyzing natural language understanding systems.
 
+### Supported Tasks and Leaderboards
+
+The leaderboard for the GLUE benchmark can be found [at this address](https://gluebenchmark.com/). It comprises the following tasks:
+
 #### ax
 
 A manually-curated evaluation dataset for fine-grained analysis of system performance on a broad range of linguistic phenomena. This dataset evaluates sentence understanding through Natural Language Inference (NLI) problems. Use a model trained on MulitNLI to produce predictions for this dataset.
@@ -153,13 +158,9 @@ The Semantic Textual Similarity Benchmark (Cer et al., 2017) is a collection of
 
 The Winograd Schema Challenge (Levesque et al., 2011) is a reading comprehension task in which a system must read a sentence with a pronoun and select the referent of that pronoun from a list of choices. The examples are manually constructed to foil simple statistical methods: Each one is contingent on contextual information provided by a single word or phrase in the sentence. To convert the problem into sentence pair classification, the authors of the benchmark construct sentence pairs by replacing the ambiguous pronoun with each possible referent. The task is to predict if the sentence with the pronoun substituted is entailed by the original sentence. They use a small evaluation set consisting of new examples derived from fiction books that was shared privately by the authors of the original corpus. While the included training set is balanced between two classes, the test set is imbalanced between them (65% not entailment). Also, due to a data quirk, the development set is adversarial: hypotheses are sometimes shared between training and development examples, so if a model memorizes the training examples, they will predict the wrong label on corresponding development set example. As with QNLI, each example is evaluated separately, so there is not a systematic correspondence between a model's score on this task and its score on the unconverted original task. The authors of the benchmark call converted dataset WNLI (Winograd NLI).
 
-### Supported Tasks
-
-[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
 ### Languages
 
-[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+The language data in GLUE is in English (BCP-47 `en`)
 
 ## Dataset Structure
 
@@ -335,7 +336,7 @@ The data fields are the same among all splits.
 
 [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
 
-### Data Splits Sample Size
+### Data Splits
 
 #### ax
 
@@ -403,10 +404,22 @@ The data fields are the same among all splits.
 
 ### Source Data
 
+#### Initial Data Collection and Normalization
+
+[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+
+#### Who are the source language producers?
+
 [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
 
 ### Annotations
 
+#### Annotation process
+
+[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+
+#### Who are the annotators?
+
 [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
 
 ### Personal and Sensitive Information
@@ -460,4 +473,4 @@ the correct citation for each contained dataset.
 
 ### Contributions
 
-Thanks to [@patpizio](https://github.com/patpizio), [@jeswan](https://github.com/jeswan), [@thomwolf](https://github.com/thomwolf), [@patrickvonplaten](https://github.com/patrickvonplaten), [@mariamabarham](https://github.com/mariamabarham) for adding this dataset.
+Thanks to [@patpizio](https://github.com/patpizio), [@jeswan](https://github.com/jeswan), [@thomwolf](https://github.com/thomwolf), [@patrickvonplaten](https://github.com/patrickvonplaten), [@mariamabarham](https://github.com/mariamabarham) for adding this dataset.
```
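As a quick orientation for this commit, here is a minimal sketch of loading the GLUE configurations named in the updated card with the `datasets` library; version 1.7.0 is the release this commit tracks, and the config names (`ax`, `wnli`) come straight from the card:

```python
# Minimal sketch, assuming `datasets>=1.7.0` is installed (the release
# this commit tracks). Config names ("ax", "wnli") come from the card.
from datasets import load_dataset

# "ax" is a diagnostic, test-only configuration; "wnli" ships
# train/validation/test splits.
ax = load_dataset("glue", "ax")
wnli = load_dataset("glue", "wnli")

print(ax)                      # DatasetDict listing splits and row counts
print(wnli["train"].features)  # sentence1, sentence2, label, idx
```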
 
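The new "Data Splits" heading and the WNLI description in the diff mention a balanced training set and an imbalanced (65% not entailment) test set. Below is a hedged sketch of inspecting those splits locally, assuming the same `datasets` install; note that GLUE test labels are withheld (stored as -1), so the test-set imbalance itself cannot be recomputed from the download:

```python
# Sketch of inspecting the splits behind the card's "Data Splits" section,
# assuming `datasets` is installed. GLUE test labels are withheld
# (stored as -1), so only train/validation label balance is visible.
from collections import Counter

from datasets import load_dataset

wnli = load_dataset("glue", "wnli")

for name, split in wnli.items():
    print(name, split.num_rows)

# For this configuration the ClassLabel maps 0 -> not_entailment,
# 1 -> entailment.
print(wnli["train"].features["label"].names)
print(Counter(wnli["train"]["label"]))
```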