
Datasets:

premise (string) | hypothesis (string) | label (class label)
---|---|---
"This church choir sings to the masses as they sing joyous songs from the book at a church." | "The church has cracks in the ceiling." | 1 (neutral)
"This church choir sings to the masses as they sing joyous songs from the book at a church." | "The church is filled with song." | 0 (entailment)
"This church choir sings to the masses as they sing joyous songs from the book at a church." | "A choir singing at a baseball game." | 2 (contradiction)
"A woman with a green headscarf, blue shirt and a very big grin." | "The woman is young." | 1 (neutral)
"A woman with a green headscarf, blue shirt and a very big grin." | "The woman is very happy." | 0 (entailment)
"A woman with a green headscarf, blue shirt and a very big grin." | "The woman has been shot." | 2 (contradiction)
"An old man with a package poses in front of an advertisement." | "A man poses in front of an ad." | 0 (entailment)
"An old man with a package poses in front of an advertisement." | "A man poses in front of an ad for beer." | 1 (neutral)
"An old man with a package poses in front of an advertisement." | "A man walks by an ad." | 2 (contradiction)
"A statue at a museum that no seems to be looking at." | "The statue is offensive and people are mad that it is on display." | 1 (neutral)
"A statue at a museum that no seems to be looking at." | "There is a statue that not many people seem to be interested in." | 0 (entailment)
"A statue at a museum that no seems to be looking at." | "Tons of people are gathered around the statue." | 2 (contradiction)
"A land rover is being driven across a river." | "A Land Rover is splashing water as it crosses a river." | 0 (entailment)
"A land rover is being driven across a river." | "A vehicle is crossing a river." | 0 (entailment)
"A land rover is being driven across a river." | "A sedan is stuck in the middle of a river." | 2 (contradiction)
"A man playing an electric guitar on stage." | "A man playing banjo on the floor." | 2 (contradiction)
"A man playing an electric guitar on stage." | "A man playing guitar on stage." | 0 (entailment)
"A man playing an electric guitar on stage." | "A man is performing for cash." | 1 (neutral)
"A blond-haired doctor and her African american assistant looking threw new medical manuals." | "A doctor is looking at a book" | 0 (entailment)
"A blond-haired doctor and her African american assistant looking threw new medical manuals." | "A man is eating pb and j" | 2 (contradiction)
"A blond-haired doctor and her African american assistant looking threw new medical manuals." | "A doctor is studying" | 1 (neutral)
"One tan girl with a wool hat is running and leaning over an object, while another person in a wool hat is sitting on the ground." | "A boy runs into a wall" | 2 (contradiction)
"One tan girl with a wool hat is running and leaning over an object, while another person in a wool hat is sitting on the ground." | "A tan girl runs leans over an object" | 0 (entailment)
"One tan girl with a wool hat is running and leaning over an object, while another person in a wool hat is sitting on the ground." | "A man watches his daughter leap" | 1 (neutral)
"A young family enjoys feeling ocean waves lap at their feet." | "A young man and woman take their child to the beach for the first time." | 1 (neutral)
"A young family enjoys feeling ocean waves lap at their feet." | "A family is out at a restaurant." | 2 (contradiction)
"A young family enjoys feeling ocean waves lap at their feet." | "A family is at the beach." | 0 (entailment)
"A couple walk hand in hand down a street." | "A couple is walking together." | 0 (entailment)
"A couple walk hand in hand down a street." | "The couple is married." | 1 (neutral)
"A couple walk hand in hand down a street." | "A couple is sitting on a bench." | 2 (contradiction)
"3 young man in hoods standing in the middle of a quiet street facing the camera." | "Three people sit by a busy street bareheaded." | 2 (contradiction)
"3 young man in hoods standing in the middle of a quiet street facing the camera." | "Three hood wearing people pose for a picture." | 0 (entailment)
"3 young man in hoods standing in the middle of a quiet street facing the camera." | "Three hood wearing people stand in a street." | 0 (entailment)
"A man reads the paper in a bar with green lighting." | "The man is inside." | 0 (entailment)
"A man reads the paper in a bar with green lighting." | "The man is reading the sportspage." | 1 (neutral)
"A man reads the paper in a bar with green lighting." | "The man is climbing a mountain." | 2 (contradiction)
"Three firefighter come out of subway station." | "Three firefighters putting out a fire inside of a subway station." | 1 (neutral)
"Three firefighter come out of subway station." | "Three firefighters coming up from a subway station." | 0 (entailment)
"Three firefighter come out of subway station." | "Three firefighters playing cards inside a fire station." | 2 (contradiction)
"A person wearing a straw hat, standing outside working a steel apparatus with a pile of coconuts on the ground." | "A person is near a pile of coconuts." | 0 (entailment)
"A person wearing a straw hat, standing outside working a steel apparatus with a pile of coconuts on the ground." | "A person is selling coconuts." | 1 (neutral)
"A person wearing a straw hat, standing outside working a steel apparatus with a pile of coconuts on the ground." | "A person is burning a straw hat." | 2 (contradiction)
"Male in a blue jacket decides to lay in the grass." | "The guy in yellow is rolling on the grass" | 2 (contradiction)
"Male in a blue jacket decides to lay in the grass." | "The guy wearing a blue jacket is laying on the green grass" | 0 (entailment)
"Male in a blue jacket decides to lay in the grass." | "The guy wearing a blue jacket is laying on the green grass taking a nap." | 1 (neutral)
"During calf roping a cowboy calls off his horse." | "A first time roper falls off his horse." | 1 (neutral)
"During calf roping a cowboy calls off his horse." | "Cowboy falling off horse." | -1 (no gold label)
"During calf roping a cowboy calls off his horse." | "A man ropes a calf successfully." | 2 (contradiction)
"A little boy in a gray and white striped sweater and tan pants is playing on a piece of playground equipment." | "A boy is on a playground." | 0 (entailment)
"A little boy in a gray and white striped sweater and tan pants is playing on a piece of playground equipment." | "The boy is playing on the swings after school." | 1 (neutral)
"A little boy in a gray and white striped sweater and tan pants is playing on a piece of playground equipment." | "The boy is sitting on the school bus on his way home." | 2 (contradiction)
"A woman wearing a ball cap squats down to touch the cracked earth." | "An archeologist wearing a hat squats to examine the site for a dig" | 1 (neutral)
"A woman wearing a ball cap squats down to touch the cracked earth." | "A squatting woman wearing a hat touching the ground." | 0 (entailment)
"A woman wearing a ball cap squats down to touch the cracked earth." | "A woman wearing a sun bonnet planting a garden." | 2 (contradiction)
"Two children re laying on a rug with some wooden bricks laid out in a square between them." | "Two children are building a brick furnace." | 1 (neutral)
"Two children re laying on a rug with some wooden bricks laid out in a square between them." | "Two children are playing catch at a park." | 2 (contradiction)
"Two children re laying on a rug with some wooden bricks laid out in a square between them." | "Two children are on a rug." | 0 (entailment)
"A man standing in front of a building on the phone as two men to the side pain on the side." | "a guy near a building stands by two other men" | 0 (entailment)
"A man standing in front of a building on the phone as two men to the side pain on the side." | "two girls walk through a hall" | 2 (contradiction)
"A man standing in front of a building on the phone as two men to the side pain on the side." | "a busy man stands with bodyguards" | 1 (neutral)
"The two young girls are dressed as fairies, and are playing in the leaves outdoors." | "Girls are playing outdoors." | 0 (entailment)
"The two young girls are dressed as fairies, and are playing in the leaves outdoors." | "Two girls play dress up indoors." | 2 (contradiction)
"The two young girls are dressed as fairies, and are playing in the leaves outdoors." | "The two girls play in the Autumn." | 1 (neutral)
"People jump over a mountain crevasse on a rope." | "Some people look visually afraid to jump." | 1 (neutral)
"People jump over a mountain crevasse on a rope." | "People are jumping outside." | 0 (entailment)
"People jump over a mountain crevasse on a rope." | "People slide over a mountain crevasse on a slide." | 2 (contradiction)
"A snowboarder on a wide plain of snow" | "A snow field with a snowboarder on it" | 0 (entailment)
"A snowboarder on a wide plain of snow" | "A snowboarder gliding over a field of snow" | 1 (neutral)
"A snowboarder on a wide plain of snow" | "A snowmobile in a blizzard" | 1 (neutral)
"An older women tending to a garden." | "The lady is cooking dinner" | 2 (contradiction)
"An older women tending to a garden." | "The lady is weeding her garden" | 1 (neutral)
"An older women tending to a garden." | "The lady has a garden" | 0 (entailment)
"A man in a black shirt overlooking bike maintenance." | "A man destroys a bike." | 2 (contradiction)
"A man in a black shirt overlooking bike maintenance." | "A man watches bike repairs." | 0 (entailment)
"A man in a black shirt overlooking bike maintenance." | "A man learns bike maintenance." | 1 (neutral)
"A man in a black shirt is looking at a bike in a workshop." | "A man is wearing a red shirt" | 2 (contradiction)
"A man in a black shirt is looking at a bike in a workshop." | "A man is in a black shirt" | 0 (entailment)
"A man in a black shirt is looking at a bike in a workshop." | "A man is deciding which bike to buy" | 1 (neutral)
"A man looking over a bicycle's rear wheel in the maintenance garage with various tools visible in the background." | "A person is in a garage." | 0 (entailment)
"A man looking over a bicycle's rear wheel in the maintenance garage with various tools visible in the background." | "A man repairs bicycles." | 0 (entailment)
"A man looking over a bicycle's rear wheel in the maintenance garage with various tools visible in the background." | "A man waits outside a garage." | 2 (contradiction)
"Three people sit on a bench at a station, the man looks oddly at the two women, the redheaded women looks up and forward in an awkward position, and the yellow blond girl twiddles with her hair." | "Some people stand around." | 2 (contradiction)
"Three people sit on a bench at a station, the man looks oddly at the two women, the redheaded women looks up and forward in an awkward position, and the yellow blond girl twiddles with her hair." | "People run together." | 2 (contradiction)
"Three people sit on a bench at a station, the man looks oddly at the two women, the redheaded women looks up and forward in an awkward position, and the yellow blond girl twiddles with her hair." | "People wait at a station." | 0 (entailment)
"A child wearing a red top is standing behind a blond headed child sitting in a wheelbarrow." | "A child wearing a red top is standing behind a blond headed child" | 0 (entailment)
"A child wearing a red top is standing behind a blond headed child sitting in a wheelbarrow." | "A child wearing a red top is standing on top of a blond headed child" | 2 (contradiction)
"A child wearing a red top is standing behind a blond headed child sitting in a wheelbarrow." | "A child wearing a red top is standing behind a pretty blond headed child" | 1 (neutral)
"A person dressed in a dress with flowers and a stuffed bee attached to it, is pushing a baby stroller down the street." | "A lady sitting on a bench in the park." | 2 (contradiction)
"A person dressed in a dress with flowers and a stuffed bee attached to it, is pushing a baby stroller down the street." | "An old lady pushing a stroller down a busy street." | 1 (neutral)
"A person dressed in a dress with flowers and a stuffed bee attached to it, is pushing a baby stroller down the street." | "A person outside pushing a stroller." | 0 (entailment)
"A dog jumping for a Frisbee in the snow." | "A pet is enjoying a game of fetch with his owner." | 1 (neutral)
"A dog jumping for a Frisbee in the snow." | "A cat washes his face and whiskers with his front paw." | 2 (contradiction)
"A dog jumping for a Frisbee in the snow." | "An animal is outside in the cold weather, playing with a plastic toy." | 0 (entailment)
"People are conversing at a dining table under a canopy." | "People at a party are seated for dinner on the lawn." | 1 (neutral)
"People are conversing at a dining table under a canopy." | "People are talking underneath a covering." | 0 (entailment)
"People are conversing at a dining table under a canopy." | "People are screaming at a boxing match." | 2 (contradiction)
"A girl playing a violin along with a group of people" | "A girl is washing a load of laundry." | 2 (contradiction)
"A girl playing a violin along with a group of people" | "A girl is playing an instrument." | 0 (entailment)
"A girl playing a violin along with a group of people" | "A group of people are playing in a symphony." | 1 (neutral)
"A woman within an orchestra is playing a violin." | "A woman is playing a concert." | 1 (neutral)
Dataset Card for SNLI
Dataset Summary
The SNLI corpus (version 1.0) is a collection of 570k human-written English sentence pairs manually labeled for balanced classification with the labels entailment, contradiction, and neutral, supporting the task of natural language inference (NLI), also known as recognizing textual entailment (RTE).
Supported Tasks and Leaderboards
SemBERT (Zhang et al., 2019) is currently listed as state of the art, achieving 91.9% accuracy on the test set. See the corpus webpage for a list of published results.
Languages
The language in the dataset is English as spoken by users of the website Flickr and as spoken by crowdworkers from Amazon Mechanical Turk. The BCP-47 code for English is en.
Dataset Structure
Data Instances
For each instance, there is a string for the premise, a string for the hypothesis, and an integer for the label. Note that each premise may appear three times with a different hypothesis and label. See the SNLI corpus viewer to explore more examples.
{'premise': 'Two women are embracing while holding to go packages.',
 'hypothesis': 'The sisters are hugging goodbye while holding to go packages after just eating lunch.',
 'label': 1}
The average token counts for the premises and hypotheses are given below:
Feature | Mean Token Count |
---|---|
Premise | 14.1 |
Hypothesis | 8.3 |
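These averages can be recomputed with a simple whitespace tokenizer (a sketch; the card does not specify the exact tokenization the authors used):

```python
def mean_token_count(texts):
    """Mean whitespace-token count over a list of strings."""
    return sum(len(text.split()) for text in texts) / len(texts)

# Two premises from the corpus as a tiny illustration.
premises = [
    "A man reads the paper in a bar with green lighting.",
    "A couple walk hand in hand down a street.",
]
print(mean_token_count(premises))  # → 10.0
```

Running the same function over the full premise and hypothesis columns would reproduce figures close to the table above.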
Data Fields
premise: a string used to determine the truthfulness of the hypothesis
hypothesis: a string that may be true, false, or whose truth conditions may not be knowable when compared to the premise
label: an integer whose value may be 0, indicating that the premise entails the hypothesis; 1, indicating that the premise and hypothesis neither entail nor contradict each other; or 2, indicating that the hypothesis contradicts the premise. Dataset instances which don't have any gold label are marked with a -1 label. Make sure you filter them out before starting the training, e.g. using `datasets.Dataset.filter`.
Data Splits
The SNLI dataset has 3 splits: train, validation, and test. All of the examples in the validation and test sets come from the set that was annotated in the validation task, with no-consensus examples removed. The remaining multiply-annotated examples are in the training set, again with no-consensus examples removed. Each unique premise/caption appears in only one split, even though it usually appears in at least three different examples.
Dataset Split | Number of Instances in Split |
---|---|
Train | 550,152 |
Validation | 10,000 |
Test | 10,000 |
Dataset Creation
Curation Rationale
The SNLI corpus (version 1.0) was developed as a benchmark for natural language inference (NLI), also known as recognizing textual entailment (RTE), with the goal of producing a dataset large enough to train models using neural methodologies.
Source Data
Initial Data Collection and Normalization
The hypotheses were elicited by presenting crowdworkers with captions from preexisting datasets without the associated photos, but the vocabulary of the hypotheses still reflects the content of the photos as well as the caption style of writing (e.g. mostly present tense). The dataset developers report 37,026 distinct words in the corpus, ignoring case. They allowed bare NPs as well as full sentences. Using the Stanford PCFG Parser 3.5.2 (Klein and Manning, 2003) trained on the standard training set as well as on the Brown Corpus (Francis and Kucera 1979), the authors report that 74% of the premises and 88.9% of the hypotheses result in a parse rooted with an 'S'. The corpus was developed between 2014 and 2015.
Crowdworkers were presented with a caption without the associated photo and asked to produce three alternate captions, one that is definitely true, one that might be true, and one that is definitely false. See Section 2.1 and Figure 1 for details (Bowman et al., 2015).
The corpus includes content from the Flickr 30k corpus and the VisualGenome corpus. The photo captions used to prompt the data creation were collected on Flickr by Young et al. (2014), who extended the Flickr 8K dataset developed by Hodosh et al. (2013). Hodosh et al. collected photos from the following Flickr groups: strangers!, Wild-Child (Kids in Action), Dogs in Action (Read the Rules), Outdoor Activities, Action Photography, Flickr-Social (two or more people in the photo). Young et al. do not list the specific groups they collected photos from. The VisualGenome corpus also contains images from Flickr, originally collected in MS-COCO and YFCC100M.
The premises from the Flickr 30k corpus were corrected for spelling using the Linux spell checker, and ungrammatical sentences were removed. Bowman et al. do not report any other normalization, though they note that punctuation and capitalization are often omitted.
Who are the source language producers?
A large portion of the premises (160k) were produced in the Flickr 30k corpus by an unknown number of crowdworkers. About 2,500 crowdworkers from Amazon Mechanical Turk produced the associated hypotheses. The premises from the Flickr 30k project describe people and animals whose photos were collected and presented to the Flickr 30k crowdworkers, but the SNLI corpus did not show the photos to the hypothesis creators.
The Flickr 30k corpus did not report crowdworker or photo subject demographic information or crowdworker compensation. The SNLI crowdworkers were compensated per HIT at rates between $0.10 and $0.50 with no incentives. Workers who ignored the guidelines were disqualified, and automated bulk submissions were rejected. No demographic information was collected from the SNLI crowdworkers.
An additional 4,000 premises come from the pilot study of the VisualGenome corpus. Though the pilot study itself is not described, the location information of the 33,000 AMT crowdworkers who participated over the 6 months of data collection is aggregated. Most of the workers were located in the United States (93%), with others from the Philippines, Kenya, India, Russia, and Canada. Workers were paid $6-$8 per hour.
Annotations
Annotation process
56,941 of the total sentence pairs were further annotated in a validation task. Four annotators each labeled a premise-hypothesis pair as entailment, contradiction, or neither, resulting in 5 total judgements including the original hypothesis author judgement. See Section 2.2 for more details (Bowman et al., 2015).
The authors report 3/5 annotator agreement on 98% of the validation set and unanimous annotator agreement on 58.3% of the validation set. If a label was chosen by three annotators, that label was made the gold label. Following from this, 2% of the data did not have a consensus label and was labeled '-' by the authors.
Label | Fleiss κ |
---|---|
contradiction | 0.77 |
entailment | 0.72 |
neutral | 0.60 |
overall | 0.70 |
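The κ values above come from the 5-way validation annotations; for reference, Fleiss' κ can be computed from a table of per-item category counts. A minimal sketch (not the authors' code):

```python
def fleiss_kappa(ratings):
    """Fleiss' kappa for a table ratings[i][j] = number of raters who
    assigned item i to category j (every row must sum to the same n)."""
    n_items = len(ratings)
    n_raters = sum(ratings[0])
    # Mean per-item agreement P-bar.
    p_bar = sum(
        (sum(c * c for c in row) - n_raters) / (n_raters * (n_raters - 1))
        for row in ratings
    ) / n_items
    # Chance agreement from the marginal category proportions.
    n_cats = len(ratings[0])
    p_j = [sum(row[j] for row in ratings) / (n_items * n_raters)
           for j in range(n_cats)]
    p_e = sum(p * p for p in p_j)
    return (p_bar - p_e) / (1 - p_e)

# Perfect agreement among 5 raters on 3 items gives kappa = 1.
print(fleiss_kappa([[5, 0, 0], [0, 5, 0], [0, 0, 5]]))  # → 1.0
```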
Who are the annotators?
The annotators of the validation task were a closed set of about 30 trusted crowdworkers on Amazon Mechanical Turk. No demographic information was collected. Annotators were compensated per HIT between $0.10 and $0.50 with $1 bonuses in cases where annotator labels agreed with the curators' labels for 250 randomly distributed examples.
Personal and Sensitive Information
The dataset does not contain any personal information about the authors or the crowdworkers, but may contain descriptions of the people in the original Flickr photos.
Considerations for Using the Data
Social Impact of Dataset
This dataset was developed as a benchmark for evaluating representational systems for text, especially including those induced by representation learning methods, in the task of predicting truth conditions in a given context. (It should be noted that the truth conditions of a hypothesis given a premise does not necessarily match the truth conditions of the hypothesis in the real world.) Systems that are successful at such a task may be more successful in modeling semantic representations.
Discussion of Biases
The language reflects the content of the photos collected from Flickr, as described in the Data Collection section. Rudinger et al. (2017) use pointwise mutual information to calculate a measure of association between a manually selected list of tokens corresponding to identity categories and the other words in the corpus, showing strong evidence of stereotypes across gender categories. They also provide examples in which crowdworkers reproduced harmful stereotypes or pejorative language in the hypotheses.
Other Known Limitations
Gururangan et al. (2018), Poliak et al. (2018), and Tsuchiya (2018) show that the SNLI corpus has a number of annotation artifacts. Using various classifiers, Poliak et al. correctly predicted the label of the hypothesis 69% of the time without using the premise, Gururangan et al. 67% of the time, and Tsuchiya 63% of the time.
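The hypothesis-only effect can be illustrated with a crude probe that maps each hypothesis token to the label it most often co-occurs with (a toy sketch, not the classifiers used in those papers):

```python
from collections import Counter, defaultdict

def hypothesis_label_cues(pairs):
    """For each token seen in a hypothesis, return the label it
    co-occurs with most often in (hypothesis, label) pairs."""
    by_token = defaultdict(Counter)
    for hypothesis, label in pairs:
        for token in set(hypothesis.lower().replace(".", "").split()):
            by_token[token][label] += 1
    return {token: counts.most_common(1)[0][0]
            for token, counts in by_token.items()}

# Toy data echoing reported artifacts: negation/inactivity words skew
# toward contradiction (2), unverifiable attributes toward neutral (1).
pairs = [
    ("A man is sleeping.", 2),
    ("Nobody is outside.", 2),
    ("A man is outside.", 0),
    ("The woman is young.", 1),
]
cues = hypothesis_label_cues(pairs)
print(cues["sleeping"], cues["young"])  # sleeping → contradiction, young → neutral
```

On the real corpus, such single-token cues are what allow premise-free classifiers to beat the majority-class baseline by a wide margin.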
Additional Information
Dataset Curators
The SNLI corpus was developed by Samuel R. Bowman, Gabor Angeli, Christopher Potts, and Christopher D. Manning as part of the Stanford NLP group.
It was supported by a Google Faculty Research Award, a gift from Bloomberg L.P., the Defense Advanced Research Projects Agency (DARPA) Deep Exploration and Filtering of Text (DEFT) Program under Air Force Research Laboratory (AFRL) contract no. FA8750-13-2-0040, the National Science Foundation under grant no. IIS 1159679, and the Department of the Navy, Office of Naval Research, under grant no. N00014-10-1-0109.
Licensing Information
The Stanford Natural Language Inference Corpus is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.
Citation Information
@inproceedings{snli:emnlp2015,
Author = {Bowman, Samuel R. and Angeli, Gabor and Potts, Christopher and Manning, Christopher D.},
Booktitle = {Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing (EMNLP)},
Publisher = {Association for Computational Linguistics},
Title = {A large annotated corpus for learning natural language inference},
Year = {2015}
}
Contributions
Thanks to @mariamabarham, @thomwolf, @lewtun, @patrickvonplaten and @mcmillanmajora for adding this dataset.
