system HF staff committed on
Commit
fd460ac
1 Parent(s): 68da3e4

Update files from the datasets library (from 1.7.0)


Release notes: https://github.com/huggingface/datasets/releases/tag/1.7.0

Files changed (1)
  1. README.md +36 -41
README.md CHANGED
@@ -18,38 +18,33 @@ task_categories:
  - text-classification
  task_ids:
  - natural-language-inference
  ---
  # Dataset Card for SNLI

  ## Table of Contents
- - [Dataset Card for SNLI](#dataset-card-for-snli)
- - [Table of Contents](#table-of-contents)
- - [Dataset Description](#dataset-description)
- - [Dataset Summary](#dataset-summary)
- - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- - [Languages](#languages)
- - [Dataset Structure](#dataset-structure)
- - [Data Instances](#data-instances)
- - [Data Fields](#data-fields)
- - [Data Splits](#data-splits)
- - [Dataset Creation](#dataset-creation)
- - [Curation Rationale](#curation-rationale)
- - [Source Data](#source-data)
- - [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
- - [Who are the source language producers?](#who-are-the-source-language-producers)
- - [Annotations](#annotations)
- - [Annotation process](#annotation-process)
- - [Who are the annotators?](#who-are-the-annotators)
- - [Personal and Sensitive Information](#personal-and-sensitive-information)
- - [Considerations for Using the Data](#considerations-for-using-the-data)
- - [Social Impact of Dataset](#social-impact-of-dataset)
- - [Discussion of Biases](#discussion-of-biases)
- - [Other Known Limitations](#other-known-limitations)
- - [Additional Information](#additional-information)
- - [Dataset Curators](#dataset-curators)
- - [Licensing Information](#licensing-information)
- - [Citation Information](#citation-information)
- - [Contributions](#contributions)

  ## Dataset Description

@@ -61,7 +56,7 @@ task_ids:
  ### Dataset Summary

- The SNLI corpus (version 1.0) is a collection of 570k human-written English sentence pairs manually labeled for balanced classification with the labels entailment, contradiction, and neutral, supporting the task of natural language inference (NLI), also known as recognizing textual entailment (RTE).

  ### Supported Tasks and Leaderboards

@@ -69,7 +64,7 @@ The SNLI corpus (version 1.0) is a collection of 570k human-written English sent
  ### Languages

- The language in the dataset is English as spoken by users of the website Flickr and as spoken by crowdworkers from Amazon Mechanical Turk. The BCP-47 code for English is en.

  ## Dataset Structure

@@ -94,7 +89,7 @@ The average token count for the premises and hypotheses are given below:
  - `premise`: a string used to determine the truthfulness of the hypothesis
  - `hypothesis`: a string that may be true, false, or whose truth conditions may not be knowable when compared to the premise
- - `label`: an integer whose value may be either _0_, indicating that the premise entails the hypothesis, _1_, indicating that the premise and hypothesis neither entail nor contradict each other, or _2_, indicating that the hypothesis contradicts the premise.
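
For reference, in the Hugging Face `datasets` library the `label` column of SNLI is exposed as a `ClassLabel` feature, so the integer codes can be mapped back to class names. A minimal sketch, assuming the `datasets` library is installed and the corpus is loaded under its `snli` identifier:

```python
from datasets import load_dataset

# Load only the train split to inspect how the integer labels are encoded.
snli_train = load_dataset("snli", split="train")

# `label` is a ClassLabel feature; `names` and `int2str` recover readable names.
label_feature = snli_train.features["label"]
print(label_feature.names)       # ['entailment', 'neutral', 'contradiction']
print(label_feature.int2str(0))  # 'entailment'
```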

  ### Data Splits
@@ -111,25 +106,25 @@ The SNLI dataset has 3 splits: _train_, _validation_, and _test_. All of the exa
  ### Curation Rationale

- The [SNLI corpus (version 1.0)](https://nlp.stanford.edu/projects/snli/) was developed as a benchmark for natural language inference (NLI), also known as recognizing textual entailment (RTE), with the goal of producing a dataset large enough to train models using neural methodologies.

  ### Source Data

  #### Initial Data Collection and Normalization

- The hypotheses were elicited by presenting crowdworkers with captions from preexisting datasets without the associated photos, but the vocabulary of the hypotheses still reflects the content of the photos as well as the caption style of writing (e.g. mostly present tense). The dataset developers report 37,026 distinct words in the corpus, ignoring case. They allowed bare NPs as well as full sentences. Using the Stanford PCFG Parser 3.5.2 (Klein and Manning, 2003) trained on the standard training set as well as on the Brown Corpus (Francis and Kucera 1979), the authors report that 74% of the premises and 88.9% of the hypotheses result in a parse rooted with an 'S'. The corpus was developed between 2014 and 2015.

  Crowdworkers were presented with a caption without the associated photo and asked to produce three alternate captions: one that is definitely true, one that might be true, and one that is definitely false. See Section 2.1 and Figure 1 for details (Bowman et al., 2015).

  The corpus includes content from the [Flickr 30k corpus](http://shannon.cs.illinois.edu/DenotationGraph/) and the [VisualGenome corpus](https://visualgenome.org/). The photo captions used to prompt the data creation were collected on Flickr by [Young et al. (2014)](https://www.aclweb.org/anthology/Q14-1006.pdf), who extended the Flickr 8K dataset developed by [Hodosh et al. (2013)](https://www.jair.org/index.php/jair/article/view/10833). Hodosh et al. collected photos from the following Flickr groups: strangers!, Wild-Child (Kids in Action), Dogs in Action (Read the Rules), Outdoor Activities, Action Photography, Flickr-Social (two or more people in the photo). Young et al. do not list the specific groups they collected photos from. The VisualGenome corpus also contains images from Flickr, originally collected in [MS-COCO](https://cocodataset.org/#home) and [YFCC100M](http://projects.dfki.uni-kl.de/yfcc100m/).

- The premises from the Flickr 30k corpus were corrected for spelling using the Linux spell checker, and ungrammatical sentences were removed. Bowman et al. do not report any normalization, though they note that punctuation and capitalization are often omitted.

  #### Who are the source language producers?

- A large portion of the premises (160k) were produced in the [Flickr 30k corpus](http://shannon.cs.illinois.edu/DenotationGraph/) by an unknown number of crowdworkers. About 2,500 crowdworkers from Amazon Mechanical Turk produced the associated hypotheses. The premises from the Flickr 30k project describe people and animals whose photos were collected and presented to the Flickr 30k crowdworkers, but the SNLI corpus did not present the photos to the hypothesis creators.

- The Flickr 30k corpus did not report crowdworker or photo subject demographic information or crowdworker compensation. The SNLI crowdworkers were compensated per HIT at rates between $0.10 and $0.50 with no incentives. Workers who ignored the guidelines were disqualified, and automated bulk submissions were rejected. No demographic information was collected from the SNLI crowdworkers.

  An additional 4,000 premises come from the pilot study of the [VisualGenome corpus](https://visualgenome.org/static/paper/Visual_Genome.pdf). Though the pilot study itself is not described, the location information of the 33,000 AMT crowdworkers who participated over the course of the 6 months of data collection is aggregated. Most of the workers were located in the United States (93%), with others from the Philippines, Kenya, India, Russia, and Canada. Workers were paid $6-$8 per hour.

@@ -137,9 +132,9 @@ An additional 4,000 premises come from the pilot study of the [VisualGenome corp
  #### Annotation process

- 56,941 of the total sentence pairs were further annotated in a validation task. Four annotators each labeled a premise-hypothesis pair as entailment, contradiction, or neither, resulting in 5 total judgements, including the original hypothesis author's judgement. See Section 2.2 for more details (Bowman et al., 2015).

- The authors report 3/5 annotator agreement on 98% of the validation set and unanimous annotator agreement on 58.3% of the validation set. If a label was chosen by three annotators, that label was made the gold label. Following from this, 2% of the data did not have a consensus label and was labeled '-' by the authors.

  | Label | Fleiss κ |
  | --------------- |--------- |
@@ -154,17 +149,17 @@ The annotators of the validation task were a closed set of about 30 trusted crow
  ### Personal and Sensitive Information

- The dataset does not contain any personal information about the authors or the crowdworkers, but may contain descriptions of the people in the original Flickr photos.

  ## Considerations for Using the Data

  ### Social Impact of Dataset

- This dataset was developed as a benchmark for evaluating representational systems for text, especially including those induced by representation learning methods, in the task of predicting truth conditions in a given context. (It should be noted that the truth conditions of a hypothesis given a premise do not necessarily match the truth conditions of the hypothesis in the real world.) Systems that are successful at such a task may be more successful in modeling semantic representations.

  ### Discussion of Biases

- The language reflects the content of the photos collected from Flickr, as described in the [Data Collection](#initial-data-collection-and-normalization) section. [Rudinger et al. (2017)](https://www.aclweb.org/anthology/W17-1609.pdf) use pointwise mutual information to calculate a measure of association between a manually selected list of tokens corresponding to identity categories and the other words in the corpus, showing strong evidence of stereotypes across gender categories. They also provide examples in which crowdworkers reproduced harmful stereotypes or pejorative language in the hypotheses.

  ### Other Known Limitations

 
  - text-classification
  task_ids:
  - natural-language-inference
+ paperswithcode_id: snli
  ---
  # Dataset Card for SNLI

  ## Table of Contents
+ - [Dataset Description](#dataset-description)
+ - [Dataset Summary](#dataset-summary)
+ - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
+ - [Languages](#languages)
+ - [Dataset Structure](#dataset-structure)
+ - [Data Instances](#data-instances)
+ - [Data Fields](#data-fields)
+ - [Data Splits](#data-splits)
+ - [Dataset Creation](#dataset-creation)
+ - [Curation Rationale](#curation-rationale)
+ - [Source Data](#source-data)
+ - [Annotations](#annotations)
+ - [Personal and Sensitive Information](#personal-and-sensitive-information)
+ - [Considerations for Using the Data](#considerations-for-using-the-data)
+ - [Social Impact of Dataset](#social-impact-of-dataset)
+ - [Discussion of Biases](#discussion-of-biases)
+ - [Other Known Limitations](#other-known-limitations)
+ - [Additional Information](#additional-information)
+ - [Dataset Curators](#dataset-curators)
+ - [Licensing Information](#licensing-information)
+ - [Citation Information](#citation-information)
+ - [Contributions](#contributions)

  ## Dataset Description

  ### Dataset Summary

+ The SNLI corpus (version 1.0) is a collection of 570k human-written English sentence pairs manually labeled for balanced classification with the labels entailment, contradiction, and neutral, supporting the task of natural language inference (NLI), also known as recognizing textual entailment (RTE).

  ### Supported Tasks and Leaderboards

  ### Languages

+ The language in the dataset is English as spoken by users of the website Flickr and as spoken by crowdworkers from Amazon Mechanical Turk. The BCP-47 code for English is en.

  ## Dataset Structure

  - `premise`: a string used to determine the truthfulness of the hypothesis
  - `hypothesis`: a string that may be true, false, or whose truth conditions may not be knowable when compared to the premise
+ - `label`: an integer whose value may be either _0_, indicating that the premise entails the hypothesis, _1_, indicating that the premise and hypothesis neither entail nor contradict each other, or _2_, indicating that the hypothesis contradicts the premise. Dataset instances without a gold label are marked with a label of -1. Make sure to filter these out before training, for example with `datasets.Dataset.filter`.
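
A minimal sketch of that filtering step, assuming the Hugging Face `datasets` library is installed and the corpus is loaded under its `snli` identifier:

```python
from datasets import load_dataset

# Load all three SNLI splits (train / validation / test).
snli = load_dataset("snli")

# Examples without a consensus gold label carry label -1; drop them before training.
snli = snli.filter(lambda example: example["label"] != -1)

# The remaining labels are 0 = entailment, 1 = neutral, 2 = contradiction.
print(snli)
```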

  ### Data Splits

  ### Curation Rationale

+ The [SNLI corpus (version 1.0)](https://nlp.stanford.edu/projects/snli/) was developed as a benchmark for natural language inference (NLI), also known as recognizing textual entailment (RTE), with the goal of producing a dataset large enough to train models using neural methodologies.

  ### Source Data

  #### Initial Data Collection and Normalization

+ The hypotheses were elicited by presenting crowdworkers with captions from preexisting datasets without the associated photos, but the vocabulary of the hypotheses still reflects the content of the photos as well as the caption style of writing (e.g. mostly present tense). The dataset developers report 37,026 distinct words in the corpus, ignoring case. They allowed bare NPs as well as full sentences. Using the Stanford PCFG Parser 3.5.2 (Klein and Manning, 2003) trained on the standard training set as well as on the Brown Corpus (Francis and Kucera 1979), the authors report that 74% of the premises and 88.9% of the hypotheses result in a parse rooted with an 'S'. The corpus was developed between 2014 and 2015.

  Crowdworkers were presented with a caption without the associated photo and asked to produce three alternate captions: one that is definitely true, one that might be true, and one that is definitely false. See Section 2.1 and Figure 1 for details (Bowman et al., 2015).

  The corpus includes content from the [Flickr 30k corpus](http://shannon.cs.illinois.edu/DenotationGraph/) and the [VisualGenome corpus](https://visualgenome.org/). The photo captions used to prompt the data creation were collected on Flickr by [Young et al. (2014)](https://www.aclweb.org/anthology/Q14-1006.pdf), who extended the Flickr 8K dataset developed by [Hodosh et al. (2013)](https://www.jair.org/index.php/jair/article/view/10833). Hodosh et al. collected photos from the following Flickr groups: strangers!, Wild-Child (Kids in Action), Dogs in Action (Read the Rules), Outdoor Activities, Action Photography, Flickr-Social (two or more people in the photo). Young et al. do not list the specific groups they collected photos from. The VisualGenome corpus also contains images from Flickr, originally collected in [MS-COCO](https://cocodataset.org/#home) and [YFCC100M](http://projects.dfki.uni-kl.de/yfcc100m/).

+ The premises from the Flickr 30k corpus were corrected for spelling using the Linux spell checker, and ungrammatical sentences were removed. Bowman et al. do not report any normalization, though they note that punctuation and capitalization are often omitted.

  #### Who are the source language producers?

+ A large portion of the premises (160k) were produced in the [Flickr 30k corpus](http://shannon.cs.illinois.edu/DenotationGraph/) by an unknown number of crowdworkers. About 2,500 crowdworkers from Amazon Mechanical Turk produced the associated hypotheses. The premises from the Flickr 30k project describe people and animals whose photos were collected and presented to the Flickr 30k crowdworkers, but the SNLI corpus did not present the photos to the hypothesis creators.

+ The Flickr 30k corpus did not report crowdworker or photo subject demographic information or crowdworker compensation. The SNLI crowdworkers were compensated per HIT at rates between $0.10 and $0.50 with no incentives. Workers who ignored the guidelines were disqualified, and automated bulk submissions were rejected. No demographic information was collected from the SNLI crowdworkers.

  An additional 4,000 premises come from the pilot study of the [VisualGenome corpus](https://visualgenome.org/static/paper/Visual_Genome.pdf). Though the pilot study itself is not described, the location information of the 33,000 AMT crowdworkers who participated over the course of the 6 months of data collection is aggregated. Most of the workers were located in the United States (93%), with others from the Philippines, Kenya, India, Russia, and Canada. Workers were paid $6-$8 per hour.

  #### Annotation process

+ 56,941 of the total sentence pairs were further annotated in a validation task. Four annotators each labeled a premise-hypothesis pair as entailment, contradiction, or neither, resulting in 5 total judgements, including the original hypothesis author's judgement. See Section 2.2 for more details (Bowman et al., 2015).

+ The authors report 3/5 annotator agreement on 98% of the validation set and unanimous annotator agreement on 58.3% of the validation set. If a label was chosen by three annotators, that label was made the gold label. Following from this, 2% of the data did not have a consensus label and was labeled '-' by the authors.

  | Label | Fleiss κ |
  | --------------- |--------- |

  ### Personal and Sensitive Information

+ The dataset does not contain any personal information about the authors or the crowdworkers, but may contain descriptions of the people in the original Flickr photos.

  ## Considerations for Using the Data

  ### Social Impact of Dataset

+ This dataset was developed as a benchmark for evaluating representational systems for text, especially including those induced by representation learning methods, in the task of predicting truth conditions in a given context. (It should be noted that the truth conditions of a hypothesis given a premise do not necessarily match the truth conditions of the hypothesis in the real world.) Systems that are successful at such a task may be more successful in modeling semantic representations.

  ### Discussion of Biases

+ The language reflects the content of the photos collected from Flickr, as described in the [Data Collection](#initial-data-collection-and-normalization) section. [Rudinger et al. (2017)](https://www.aclweb.org/anthology/W17-1609.pdf) use pointwise mutual information to calculate a measure of association between a manually selected list of tokens corresponding to identity categories and the other words in the corpus, showing strong evidence of stereotypes across gender categories. They also provide examples in which crowdworkers reproduced harmful stereotypes or pejorative language in the hypotheses.

  ### Other Known Limitations