---
annotations_creators:
- crowdsourced
language_creators:
- crowdsourced
language:
- en
license:
- cc-by-sa-4.0
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
source_datasets:
- extended|other-flicker-30k
- extended|other-visual-genome
task_categories:
- text-classification
task_ids:
- natural-language-inference
- multi-input-text-classification
paperswithcode_id: snli
pretty_name: Stanford Natural Language Inference
dataset_info:
  config_name: plain_text
  features:
  - name: premise
    dtype: string
  - name: hypothesis
    dtype: string
  - name: label
    dtype:
      class_label:
        names:
          '0': entailment
          '1': neutral
          '2': contradiction
  splits:
  - name: test
    num_bytes: 1258904
    num_examples: 10000
  - name: validation
    num_bytes: 1263036
    num_examples: 10000
  - name: train
    num_bytes: 65884386
    num_examples: 550152
  download_size: 20439300
  dataset_size: 68406326
configs:
- config_name: plain_text
  data_files:
  - split: test
    path: plain_text/test-*
  - split: validation
    path: plain_text/validation-*
  - split: train
    path: plain_text/train-*
---
# Dataset Card for SNLI

## Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

- **Homepage:** https://nlp.stanford.edu/projects/snli/
- **Repository:** [More Information Needed]
- **Paper:** https://aclanthology.org/D15-1075/
- **Paper:** https://arxiv.org/abs/1508.05326
- **Leaderboard:** https://nlp.stanford.edu/projects/snli/
- **Point of Contact:** [Samuel Bowman](mailto:bowman@nyu.edu)
- **Point of Contact:** [Gabor Angeli](mailto:angeli@stanford.edu)
- **Point of Contact:** [Chris Manning](mailto:manning@stanford.edu)

### Dataset Summary

The SNLI corpus (version 1.0) is a collection of 570k human-written English sentence pairs manually labeled for balanced classification with the labels entailment, contradiction, and neutral, supporting the task of natural language inference (NLI), also known as recognizing textual entailment (RTE).

### Supported Tasks and Leaderboards

Natural Language Inference (NLI), also known as Recognizing Textual Entailment (RTE), is the task of determining the inference relation between two (short, ordered) texts: entailment, contradiction, or neutral ([MacCartney and Manning 2008](https://aclanthology.org/C08-1066/)).

See the [corpus webpage](https://nlp.stanford.edu/projects/snli/) for a list of published results.

### Languages

The language in the dataset is English as spoken by users of the website Flickr and as spoken by crowdworkers from Amazon Mechanical Turk. The BCP-47 code for English is en.

## Dataset Structure

### Data Instances

For each instance, there is a string for the premise, a string for the hypothesis, and an integer for the label. Note that each premise may appear three times with a different hypothesis and label. See the [SNLI corpus viewer](https://huggingface.co/datasets/viewer/?dataset=snli) to explore more examples.

```
{'premise': 'Two women are embracing while holding to go packages.',
 'hypothesis': 'The sisters are hugging goodbye while holding to go packages after just eating lunch.',
 'label': 1}
```
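
The integer label in an instance maps onto a class name via the `ClassLabel` feature declared in this card's `dataset_info`. A minimal sketch of that mapping (the `LABEL_NAMES` list below is written out by hand for illustration; with the `datasets` library the same mapping is available as `dataset.features["label"].int2str`):

```python
# The class names in the order declared by the dataset's ClassLabel feature.
LABEL_NAMES = ["entailment", "neutral", "contradiction"]

example = {
    "premise": "Two women are embracing while holding to go packages.",
    "hypothesis": "The sisters are hugging goodbye while holding to go packages after just eating lunch.",
    "label": 1,
}

# Resolve the integer label to its class name.
print(LABEL_NAMES[example["label"]])  # neutral
```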

The average token count for the premises and hypotheses are given below:

| Feature    | Mean Token Count |
| ---------- | ---------------- |
| Premise    | 14.1             |
| Hypothesis | 8.3              |

### Data Fields

- `premise`: a string used to determine the truthfulness of the hypothesis
- `hypothesis`: a string that may be true, false, or whose truth conditions may not be knowable when compared to the premise
- `label`: an integer encoding the relation between premise and hypothesis: _0_ indicates that the premise entails the hypothesis, _1_ indicates that the premise neither entails nor contradicts the hypothesis, and _2_ indicates that the premise contradicts the hypothesis. Instances without a gold label are marked with the label _-1_; make sure to filter them out before training, for example with `datasets.Dataset.filter`.
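
The filtering step can be sketched with plain Python dicts standing in for dataset rows (with the real corpus, the equivalent call would be `snli.filter(lambda ex: ex["label"] != -1)`; the rows below are illustrative, not taken from the corpus):

```python
# Toy rows standing in for SNLI examples; label -1 marks a missing gold label.
rows = [
    {"premise": "Two women are embracing.", "hypothesis": "The sisters are hugging.", "label": 1},
    {"premise": "A man inspects a uniform.", "hypothesis": "The man is sleeping.", "label": 2},
    {"premise": "A soccer game is occurring.", "hypothesis": "Some men are playing a sport.", "label": -1},
]

# Keep only examples that received a gold label.
gold = [row for row in rows if row["label"] != -1]
print(len(gold))  # 2
```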


### Data Splits

The SNLI dataset has 3 splits: _train_, _validation_, and _test_. All of the examples in the _validation_ and _test_ sets come from the set that was annotated in the validation task with no-consensus examples removed. The remaining multiply-annotated examples are in the training set with no-consensus examples removed. Each unique premise/caption shows up in only one split, even though they usually appear in at least three different examples.

| Dataset Split | Number of Instances in Split |
| ------------- |----------------------------- |
| Train         | 550,152                      |
| Validation    | 10,000                       |
| Test          | 10,000                       |

## Dataset Creation

### Curation Rationale

The [SNLI corpus (version 1.0)](https://nlp.stanford.edu/projects/snli/) was developed as a benchmark for natural language inference (NLI), also known as recognizing textual entailment (RTE), with the goal of producing a dataset large enough to train models using neural methodologies.

### Source Data

#### Initial Data Collection and Normalization

The hypotheses were elicited by presenting crowdworkers with captions from preexisting datasets without the associated photos, but the vocabulary of the hypotheses still reflects the content of the photos as well as the caption style of writing (e.g. mostly present tense). The dataset developers report 37,026 distinct words in the corpus, ignoring case. They allowed bare NPs as well as full sentences. Using the Stanford PCFG Parser 3.5.2 (Klein and Manning, 2003) trained on the standard training set as well as on the Brown Corpus (Francis and Kucera 1979), the authors report that 74% of the premises and 88.9% of the hypotheses result in a parse rooted with an 'S'. The corpus was developed between 2014 and 2015.

Crowdworkers were presented with a caption without the associated photo and asked to produce three alternate captions, one that is definitely true, one that might be true, and one that is definitely false. See Section 2.1 and Figure 1 for details (Bowman et al., 2015).

The corpus includes content from the [Flickr 30k corpus](http://shannon.cs.illinois.edu/DenotationGraph/) and the [VisualGenome corpus](https://visualgenome.org/). The photo captions used to prompt the data creation were collected on Flickr by [Young et al. (2014)](https://aclanthology.org/Q14-1006/), who extended the Flickr 8K dataset developed by [Hodosh et al. (2013)](https://www.jair.org/index.php/jair/article/view/10833). Hodosh et al. collected photos from the following Flickr groups: strangers!, Wild-Child (Kids in Action), Dogs in Action (Read the Rules), Outdoor Activities, Action Photography, Flickr-Social (two or more people in the photo). Young et al. do not list the specific groups they collected photos from. The VisualGenome corpus also contains images from Flickr, originally collected in [MS-COCO](https://cocodataset.org/#home) and [YFCC100M](http://projects.dfki.uni-kl.de/yfcc100m/).

The premises from the Flickr 30k corpus were corrected for spelling using the Linux spell checker, and ungrammatical sentences were removed. Bowman et al. do not report any normalization, though they note that punctuation and capitalization are often omitted.

#### Who are the source language producers?

A large portion of the premises (160k) were produced in the [Flickr 30k corpus](http://shannon.cs.illinois.edu/DenotationGraph/) by an unknown number of crowdworkers. About 2,500 crowdworkers from Amazon Mechanical Turk produced the associated hypotheses. The premises from the Flickr 30k project describe people and animals whose photos were collected and presented to the Flickr 30k crowdworkers, but the SNLI corpus did not present the photos to the hypotheses creators.

The Flickr 30k corpus did not report crowdworker or photo subject demographic information or crowdworker compensation. The SNLI crowdworkers were compensated per HIT at rates between $0.10 and $0.50 with no incentives. Workers who ignored the guidelines were disqualified, and automated bulk submissions were rejected. No demographic information was collected from the SNLI crowdworkers.

An additional 4,000 premises come from the pilot study of the [VisualGenome corpus](https://homes.cs.washington.edu/~ranjay/visualgenome/index.html). Though the pilot study itself is not described, the location information of the 33,000 AMT crowdworkers who participated over the 6 months of data collection is aggregated. Most of the workers were located in the United States (93%), with others from the Philippines, Kenya, India, Russia, and Canada. Workers were paid $6-$8 per hour.

### Annotations

#### Annotation process

56,941 of the total sentence pairs were further annotated in a validation task. Four annotators each labeled a premise-hypothesis pair as entailment, contradiction, or neither, resulting in 5 total judgements including the original hypothesis author judgement. See Section 2.2 for more details (Bowman et al., 2015).

The authors report 3/5 annotator agreement on 98% of the validation set and unanimous annotator agreement on 58.3% of the validation set. If a label was chosen by three annotators, that label was made the gold label. Following from this, 2% of the data did not have a consensus label and was labeled '-' by the authors.
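The gold-labeling rule described above (a label becomes gold when at least 3 of the 5 judgements agree, otherwise the example is marked '-') can be sketched as follows; the function name `gold_label` is illustrative, not part of the original pipeline:

```python
from collections import Counter

def gold_label(judgements):
    """Illustrative majority-vote rule: a label becomes gold when chosen by at
    least 3 of the 5 judgements; otherwise the example gets '-' (no consensus)."""
    label, count = Counter(judgements).most_common(1)[0]
    return label if count >= 3 else "-"

print(gold_label(["entailment"] * 4 + ["neutral"]))  # entailment
print(gold_label(["entailment", "entailment", "neutral", "neutral", "contradiction"]))  # -
```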

| Label           | Fleiss κ |
| --------------- |--------- |
| _contradiction_ | 0.77     |
| _entailment_    | 0.72     |
| _neutral_       | 0.60     |
| overall         | 0.70     |

#### Who are the annotators?

The annotators of the validation task were a closed set of about 30 trusted crowdworkers on Amazon Mechanical Turk. No demographic information was collected. Annotators were compensated per HIT at rates between $0.10 and $0.50, with $1 bonuses in cases where annotator labels agreed with the curators' labels for 250 randomly distributed examples.

### Personal and Sensitive Information

The dataset does not contain any personal information about the authors or the crowdworkers, but may contain descriptions of the people in the original Flickr photos.

## Considerations for Using the Data

### Social Impact of Dataset

This dataset was developed as a benchmark for evaluating representational systems for text, especially including those induced by representation learning methods, in the task of predicting truth conditions in a given context. (It should be noted that the truth conditions of a hypothesis given a premise does not necessarily match the truth conditions of the hypothesis in the real world.) Systems that are successful at such a task may be more successful in modeling semantic representations.

### Discussion of Biases

The language reflects the content of the photos collected from Flickr, as described in the [Data Collection](#initial-data-collection-and-normalization) section. [Rudinger et al (2017)](https://aclanthology.org/W17-1609/) use pointwise mutual information to calculate a measure of association between a manually selected list of tokens corresponding to identity categories and the other words in the corpus, showing strong evidence of stereotypes across gender categories. They also provide examples in which crowdworkers reproduced harmful stereotypes or pejorative language in the hypotheses.

### Other Known Limitations

[Gururangan et al (2018)](https://aclanthology.org/N18-2017/), [Poliak et al (2018)](https://aclanthology.org/S18-2023/), and [Tsuchiya (2018)](https://aclanthology.org/L18-1239/) show that the SNLI corpus has a number of annotation artifacts. Using various classifiers, Poliak et al correctly predicted the label of the hypothesis 69% of the time without using the premise, Gururangan et al 67% of the time, and Tsuchiya 63% of the time.

## Additional Information

### Dataset Curators

The SNLI corpus was developed by Samuel R. Bowman, Gabor Angeli, Christopher Potts, and Christopher D. Manning as part of the [Stanford NLP group](https://nlp.stanford.edu/).

It was supported by a Google Faculty Research Award, a gift from Bloomberg L.P., the Defense Advanced Research Projects Agency (DARPA) Deep Exploration and Filtering of Text (DEFT) Program under Air Force Research Laboratory (AFRL) contract no. FA8750-13-2-0040, the National Science Foundation under grant no. IIS 1159679, and the Department of the Navy, Office of Naval Research, under grant no. N00014-10-1-0109.

### Licensing Information

The Stanford Natural Language Inference Corpus by The Stanford NLP Group is licensed under a [Creative Commons Attribution-ShareAlike 4.0 International License](http://creativecommons.org/licenses/by-sa/4.0/).

The corpus includes content from the [Flickr 30k corpus](http://shannon.cs.illinois.edu/DenotationGraph/), also released under an Attribution-ShareAlike license.

### Citation Information

The following paper introduces the corpus in detail. If you use the corpus in published work, please cite it: 
```bibtex
@inproceedings{bowman-etal-2015-large,
    title = "A large annotated corpus for learning natural language inference",
    author = "Bowman, Samuel R.  and
      Angeli, Gabor  and
      Potts, Christopher  and
      Manning, Christopher D.",
    editor = "M{\`a}rquez, Llu{\'\i}s  and
      Callison-Burch, Chris  and
      Su, Jian",
    booktitle = "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing",
    month = sep,
    year = "2015",
    address = "Lisbon, Portugal",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/D15-1075",
    doi = "10.18653/v1/D15-1075",
    pages = "632--642",
}
```

The corpus includes content from the [Flickr 30k corpus](http://shannon.cs.illinois.edu/DenotationGraph/), which can be cited by way of this paper: 
```bibtex
@article{young-etal-2014-image,
    title = "From image descriptions to visual denotations: New similarity metrics for semantic inference over event descriptions",
    author = "Young, Peter  and
      Lai, Alice  and
      Hodosh, Micah  and
      Hockenmaier, Julia",
    editor = "Lin, Dekang  and
      Collins, Michael  and
      Lee, Lillian",
    journal = "Transactions of the Association for Computational Linguistics",
    volume = "2",
    year = "2014",
    address = "Cambridge, MA",
    publisher = "MIT Press",
    url = "https://aclanthology.org/Q14-1006",
    doi = "10.1162/tacl_a_00166",
    pages = "67--78",
}
```

### Contact Information

For any comments or questions, please email [Samuel Bowman](mailto:bowman@nyu.edu), [Gabor Angeli](mailto:angeli@stanford.edu) and [Chris Manning](mailto:manning@stanford.edu).

### Contributions

Thanks to [@mariamabarham](https://github.com/mariamabarham), [@thomwolf](https://github.com/thomwolf), [@lewtun](https://github.com/lewtun), [@patrickvonplaten](https://github.com/patrickvonplaten) and [@mcmillanmajora](https://github.com/mcmillanmajora) for adding this dataset.