|
{ |
|
"paper_id": "I17-1011", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T07:37:32.284381Z" |
|
}, |
|
"title": "Natural Language Inference from Multiple Premises", |
|
"authors": [ |
|
{ |
|
"first": "Alice", |
|
"middle": [], |
|
"last": "Lai", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "University of Illinois at Urbana-Champaign", |
|
"location": {} |
|
}, |
|
"email": "aylai2@illinois.edu" |
|
}, |
|
{ |
|
"first": "Yonatan", |
|
"middle": [], |
|
"last": "Bisk", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Univ. of Washington", |
|
"location": {} |
|
}, |
|
"email": "ybisk@cs.washington.edu" |
|
}, |
|
{ |
|
"first": "Julia", |
|
"middle": [], |
|
"last": "Hockenmaier", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "University of Illinois at Urbana-Champaign", |
|
"location": {} |
|
}, |
|
"email": "juliahmr@illinois.edu" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "We define a novel textual entailment task that requires inference over multiple premise sentences. We present a new dataset for this task that minimizes trivial lexical inferences, emphasizes knowledge of everyday events, and presents a more challenging setting for textual entailment. We evaluate several strong neural baselines and analyze how the multiple premise task differs from standard textual entailment.", |
|
"pdf_parse": { |
|
"paper_id": "I17-1011", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "We define a novel textual entailment task that requires inference over multiple premise sentences. We present a new dataset for this task that minimizes trivial lexical inferences, emphasizes knowledge of everyday events, and presents a more challenging setting for textual entailment. We evaluate several strong neural baselines and analyze how the multiple premise task differs from standard textual entailment.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Standard textual entailment recognition is concerned with deciding whether one statement (the hypothesis) follows from another statement (the premise). However, in some situations, multiple independent descriptions of the same event are available, e.g. multiple news articles describing the same story, social media posts by different people about a single event, or multiple witness reports for a crime. In these cases, we want to use multiple independent reports to infer what really happened.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "We therefore introduce a variant of the standard textual entailment task in which the premise text consists of multiple independently written sentences, all describing the same scene (see examples in Figure 1 ). The task is to decide whether the hypothesis sentence 1) can be used to describe the same scene (entailment), 2) cannot be used to describe the same scene (contradiction), or 3) may or may not describe the same scene (neutral). The main challenge is to infer what happened in the scene from the multiple premise statements, in some cases aggregating information across multiple sentences into a coherent whole.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 200, |
|
"end": 208, |
|
"text": "Figure 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{

"text": "Figure 1: The Multiple Premise Entailment Task. Premises: 1. Two girls sitting down and looking at a book. 2. A couple laughs together as they read a book on a train. 3. Two travelers on a train or bus reading a book together. 4. A woman wearing glasses and a brown beanie next to a girl with long brown hair holding a book. Hypothesis: Women smiling. \u21d2ENTAILMENT Premises: 1. Three men are working construction on top of a building. 2. Three male construction workers on a roof working in the sun. 3. One man is shirtless while the other two men work on construction. 4. Two construction workers working on infrastructure, while one worker takes a break. Hypothesis: A man smoking a cigarette. \u21d2NEUTRAL Premises: 1. A group of individuals performed in front of a seated crowd. 2. Woman standing in front of group with black folders in hand. 3. A group of women with black binders stand in front of a group of people. 4. A group of people are standing at the front of the room, preparing to sing. Hypothesis: A group having a meeting. \u21d2CONTRADICTION",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Figure 1: The Multiple Premise Entailment Task",

"sec_num": null

},
|
{ |
|
"text": "Similar to the SICK and SNLI datasets (Marelli et al., 2014; Bowman et al., 2015) , each premise sentence in our data is a single sentence describing everyday events, rather than news paragraphs as in the RTE datasets (Dagan et al., 2006) , which require named entity recognition and coreference resolution. Instead of soliciting humans to write new hypotheses, as SNLI did, we use simplified versions of existing image captions, and use a word overlap filter and the structure of the denotation graph of Young et al. (2014) to minimize the presence of trivial lexical relationships.", |
|
"cite_spans": [ |
|
{ |
|
"start": 38, |
|
"end": 60, |
|
"text": "(Marelli et al., 2014;", |
|
"ref_id": "BIBREF8" |
|
}, |
|
{ |
|
"start": 61, |
|
"end": 81, |
|
"text": "Bowman et al., 2015)", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 218, |
|
"end": 238, |
|
"text": "(Dagan et al., 2006)", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 505, |
|
"end": 524, |
|
"text": "Young et al. (2014)", |
|
"ref_id": "BIBREF13" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Figure 1: The Multiple Premise Entailment Task", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "In the following datasets, premises are single sentences drawn from image or video caption data that describe concrete, everyday activities.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Standard Entailment Tasks", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "The SICK dataset (Marelli et al., 2014) consists of 10K sentence pairs. The premise sentences come from the FLICKR8K image caption corpus (Rashtchian et al., 2010) and the MSR Video Paraphrase Corpus (Agirre et al., 2012) , while the hypotheses were automatically generated. This process introduced some errors (e.g. \"A motorcycle is riding standing up on the seat of the vehicle\") and an uneven distribution of phenomena across entailment classes that is easy to exploit (e.g. negation ).", |
|
"cite_spans": [ |
|
{ |
|
"start": 17, |
|
"end": 39, |
|
"text": "(Marelli et al., 2014)", |
|
"ref_id": "BIBREF8" |
|
}, |
|
{ |
|
"start": 138, |
|
"end": 163, |
|
"text": "(Rashtchian et al., 2010)", |
|
"ref_id": "BIBREF11" |
|
}, |
|
{ |
|
"start": 200, |
|
"end": 221, |
|
"text": "(Agirre et al., 2012)", |
|
"ref_id": "BIBREF0" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Standard Entailment Tasks", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "The SNLI dataset (Bowman et al., 2015) contains over 570K sentence pairs. The premises come from the FLICKR30K image caption corpus (Young et al., 2014) and VisualGenome (Krishna et al., 2016) . The hypotheses were written by Mechanical Turk workers who were given the premise and asked to write one definitely true sentence, one possibly true sentence, and one definitely false sentence. The task design prompted workers to write hypotheses that frequently parallel the premise in structure and vocabulary, and therefore the semantic relationships between premise and hypothesis are often limited to synonym/hyponym lexical substitution, replacement of short phrases, or exact word matching.", |
|
"cite_spans": [ |
|
{ |
|
"start": 132, |
|
"end": 152, |
|
"text": "(Young et al., 2014)", |
|
"ref_id": "BIBREF13" |
|
}, |
|
{ |
|
"start": 170, |
|
"end": 192, |
|
"text": "(Krishna et al., 2016)", |
|
"ref_id": "BIBREF6" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Standard Entailment Tasks", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "In this paper, we propose a variant of entailment where each hypothesis sentence is paired with an unordered set of independently written premise sentences that describe the same event. The premises may contain overlapping information, but are typically not paraphrases. The majority of our dataset requires consideration of multiple premises, including aggregation of information from multiple sentences.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The Multiple Premise Entailment Task", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "This Multiple Premise Entailment (MPE) task is inspired by the Approximate Textual Entailment (ATE) task of Young et al. (2014) . Each item in the ATE dataset consists of a premise set of four captions from FLICKR30K, and a short phrase as the hypothesis. The ATE data was created automatically, under the assumption that items are positive (approximately entailing) if the hypothesis comes from the same image as the four premises, and negative otherwise. However, Young et al. found that this assumption was only true for just over half of the positive items. For MPE, we also start with four FLICKR30K captions as the premises and a related/unrelated sentence as the hypothesis, but we restrict the hypothesis to have low word overlap with the premises, and we collect human judgments to label the items as entailing, contradictory, or neutral.", |
|
"cite_spans": [ |
|
{ |
|
"start": 108, |
|
"end": 127, |
|
"text": "Young et al. (2014)", |
|
"ref_id": "BIBREF13" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The Multiple Premise Entailment Task", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "The MPE dataset (Figure 1 ) contains 10,000 items (8,000 training, 1,000 development and 1,000 test), each consisting of four premise sentences (captions from the same FLICKR30K image), one hypothesis sentence (a simplified FLICKR30K caption), and one label (entailment, neutral, or contradiction) that indicates the relationship between the set of four premises and the hypothesis. This label is based on a consensus of five crowdsourced judgments. To analyze the difference between multiple premise and single premise entailment (Section 5.2), we also collected pair label annotations for each individual premise-hypothesis pair in the development data. This section describes how we selected the premise and hypothesis sentences, and how we labeled the items via crowdsourcing.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 16, |
|
"end": 25, |
|
"text": "(Figure 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "The MPE Dataset", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "Hypothesis simplification The four premise sentences of each MPE item consist of four original FLICKR30K captions from the same image. Since complete captions are too specific and are likely to introduce new details that are not entailed by the premises, the hypotheses sentences are simplified versions of FLICKR30K captions. Each hypothesis sentence is either a simplified variant of the fifth caption of the same image as the premises, or a simplified variant of one of the captions of a random, unrelated image.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Generating the Items", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "Our simplification process relies on the denotation graph of Young et al. (2014) , a subsumption hierarchy over phrases, constructed from the captions in FLICKR30K. They define a set of normalization and reduction rules (e.g. lemmatization, dropping modifiers and prepositional phrases, replacing nouns with their hypernyms, extracting noun phrases) to transform the original captions into shorter, more generic phrases that are still true descriptions of the original image.", |
|
"cite_spans": [ |
|
{ |
|
"start": 61, |
|
"end": 80, |
|
"text": "Young et al. (2014)", |
|
"ref_id": "BIBREF13" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Generating the Items", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "To simplify a hypothesis caption, we consider all sentence nodes in the denotation graph that are ancestors (more generic versions) of this caption, but exclude nodes that are also ancestors of any of the premises. This ensures that the simplified hypothesis cannot be trivially obtained from a premise via the same automatic simplification procedure. Therefore, we avoid some obvious semantic relationships between premises and hypothesis, such as hypernym replacement, dropping modifiers or PPs, etc.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Generating the Items", |
|
"sec_num": "4.1" |
|
}, |
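
{

"text": "In set terms, the restriction just described is an ancestor-set difference over the denotation graph. A minimal sketch, assuming an ancestors(node) lookup that returns the more generic sentence nodes of a caption; the function names are illustrative and not taken from the authors' code:\n\ndef candidate_hypotheses(caption, premises, ancestors):\n    # Sentence nodes that generalize the held-out caption.\n    allowed = set(ancestors(caption))\n    # Remove anything that also generalizes a premise, so the simplified hypothesis\n    # cannot be obtained from a premise by the same normalization and reduction rules.\n    for premise in premises:\n        allowed -= set(ancestors(premise))\n    return allowed",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Generating the Items",

"sec_num": "4.1"

},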
|
{ |
|
"text": "Limiting lexical overlap Given the set of simplified, restricted hypotheses, we further restrict the pool of potential items to contain only pairings where the hypothesis has a word overlap \u2264 0.5 with the premise set. We compute word overlap as the fraction of hypothesis tokens that appear in at least one premise (after stopword removal). This eliminates trivial cases of entailment where the hypothesis is simply a subset of the premise text. Table 1 shows that the mean word overlap for our training data is much lower than SNLI. Data selection From this constrained pool of premises-hypothesis pairings, we randomly sampled 8000 items from the FLICKR30K training split for our training data. For test and development data, we sample 1000 items from FLICKR30K test and 1000 from dev. The hypotheses in the training data must be associated with at least two captions in the FLICKR30K train split, while the hypotheses in dev/test must be associated with at least two captions in the union of the training and dev/test, and with at least one caption in dev/test alone. Since the test and dev splits of FLICKR30K are smaller than the training split, this threshold selects hypotheses that are rare enough to be interesting and frequent enough to be reasonable sentences.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 446, |
|
"end": 453, |
|
"text": "Table 1", |
|
"ref_id": "TABREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Generating the Items", |
|
"sec_num": "4.1" |
|
}, |
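
{

"text": "A minimal sketch of the word-overlap filter, assuming tokenized, lowercased captions and a stopword list; the function names and the explicit threshold argument are ours for illustration:\n\ndef word_overlap(hypothesis_tokens, premise_token_lists, stopwords):\n    # Fraction of non-stopword hypothesis tokens that appear in at least one premise.\n    hyp = [w for w in hypothesis_tokens if w not in stopwords]\n    premise_vocab = {w for tokens in premise_token_lists for w in tokens if w not in stopwords}\n    if not hyp:\n        return 0.0\n    return sum(1 for w in hyp if w in premise_vocab) / len(hyp)\n\ndef keep_pairing(hypothesis_tokens, premise_token_lists, stopwords, threshold=0.5):\n    # A premises-hypothesis pairing stays in the candidate pool only if overlap <= threshold.\n    return word_overlap(hypothesis_tokens, premise_token_lists, stopwords) <= threshold",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Generating the Items",

"sec_num": "4.1"

},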
|
{ |
|
"text": "Crowdsourcing procedure For each item, we solicited five responses from Crowdflower and Amazon Mechanical Turk as to whether the hypothesis was entailed, contradictory, or neither given a set of four premises. Instructions are shown in Table 2 . We provided labeled examples to illustrate the kinds of assumptions we expected.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 236, |
|
"end": 243, |
|
"text": "Table 2", |
|
"ref_id": "TABREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Assigning Entailment Labels", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "We assume three labels (entailment, neutral, contradiction). For entailment, we deliberately asked annotators to judge whether the hypothesis could very probably describe the same scene as the premises, rather than specifying that the hypothesis must definitely be true, as Bowman et al. (2015) did for SNLI. Our instructions align with the standard definition of textual entailment: \"T entails H if humans reading T would typically infer that H is most likely true\" (Dagan et al., 2013). We are not only interested in what is logically required for a hypothesis to be true, but also in what human readers assume is true, given their own world knowledge.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Entailment labels", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Final label assignment Of the 10,000 items for which we collected full label annotations, 90% had a majority label based on the five judgments, including 16% with a 3-2 split between entailment and contradiction. The remaining 10% had a 2-2-1 split across the three classes. We manually adjudicated the latter two cases. As a result, 82% of the final labels in the dataset correspond to a majority vote over the judgments (the remaining 18% differ due to our manual correction). The released dataset contains both our final labels and the crowdsourced judgments for all items.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Entailment labels", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Image IDs Premises in the our dataset have corresponding image IDs from FLICKR30K. We are interested in the information present in linguistic descriptions of a scene, so our labels reflect the textual entailment relationship between the premise text and the hypothesis. Future work could apply multi-modal representations to this task, with the caveat that the image would likely resolve many neutral items to either entailment or contradiction.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Entailment labels", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "5 Data Analysis", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Entailment labels", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The dataset contains 8000 training items, 1000 development items, and 1000 test items. Table 3 shows overall type and token counts and sentence lengths as well as the label distribution. The mean annotator agreement, i.e. the fraction of annotators who agreed with the final label, is 0.70 for the full dataset, or 0.82 for the entailment Instructions:", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 87, |
|
"end": 94, |
|
"text": "Table 3", |
|
"ref_id": "TABREF3" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Statistics", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "We will show you four caption sentences that describe the same scene, and one proposed sentence. Your task is to decide whether or not the scene described by the four captions can also be described by the proposed sentence. The four captions were written by four different people. All four people were shown the same image, and then wrote a sentence describing the scene in this image. Therefore, there may be slight disagreements among the captions. The images are photographs from Flickr that show everyday scenes, activities, and events. You will not be given the image that the caption writers saw.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Statistics", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "Read the four caption sentences and then read the proposed sentence. Choose 1 of 3 possible responses to the question Can the scene described by the four captions also be described by the proposed sentence? Yes: The scene described by the captions can definitely (or very probably) be described by the proposed sentence. The proposed sentence may leave out details that are mentioned in the captions. If the proposed sentence describes something that is not mentioned in the captions, it is probably safe to assume the extra information is true, given what you know from the captions. If there are disagreements among the captions about the details of the scene, the proposed sentence is consistent with at least one caption. Unknown: There is not enough information to decide whether or not the scene described by the captions can be described by the proposed sentence. There may be scenes that can be described by the proposed sentence and the captions, but you don't know whether this is the case here. No: The scene described by the captions can probably not be described by the proposed sentence. The proposed sentence and the captions either contradict each other or describe what appear to be two completely separate events. class, 0.42 for neutral, and 0.78 for contradiction. That is, on average, four of the five crowdsourced judgments agree with the final label for the entailment and contradiction items, whereas for the neutral items, only an average of two of the five original annotators assigned the neutral label, and the other three were split between contradiction and entailment.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Process:", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Multiple premise entailment (MPE) differs from standard single premise entailment (SPE) in that each premise consists of four independently written sentences about the same scene. To understand how MPE differs from SPE, we used crowdsourcing to collect pairwise single-premise entailment labels for each individual premise-hypothesis pair in the development data. Each consensus label is based on three judgments.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "MPE vs. Standard Entailment", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "In Table 4 , we compare the full MPE entailment labels (bold \u21d2E, \u21d2N, \u21d2C), to the four pair SPE labels (E, N, C). The number of SPE labels that agree with the MPE label yields the five categories in Table 4 , ranging from the most difficult case where none of the SPE labels agree with the MPE label (21.8% of the data) to the simplest case where all four SPE labels agree with the MPE label (9.8% of the data).", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 3, |
|
"end": 10, |
|
"text": "Table 4", |
|
"ref_id": "TABREF5" |
|
}, |
|
{ |
|
"start": 198, |
|
"end": 205, |
|
"text": "Table 4", |
|
"ref_id": "TABREF5" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "MPE vs. Standard Entailment", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "We observe that a simple majority voting scheme over the gold standard SPE labels would not be sufficient, since it assigns the correct MPE label to only 34.6% of the development items (i.e. those cases where three or four SPE pairs agree with the MPE label). We also evaluate a slightly more sophisticated voting scheme that applies the following heuristic (here, E, N , C are the number of SPE labels of each class):", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "MPE vs. Standard Entailment", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "If E > C, predict entailment. Else if C > E, predict contradiction. Otherwise, predict neutral.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "MPE vs. Standard Entailment", |
|
"sec_num": "5.2" |
|
}, |
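
{

"text": "Both voting baselines can be stated compactly. A sketch over the four gold SPE labels of one item (the label strings are illustrative):\n\nfrom collections import Counter\n\ndef majority_vote(spe_labels):\n    # Plain majority over the four single-premise labels (ties broken arbitrarily).\n    return Counter(spe_labels).most_common(1)[0][0]\n\ndef entailment_contradiction_vote(spe_labels):\n    # The heuristic above: E > C -> entailment, C > E -> contradiction, otherwise neutral.\n    e = spe_labels.count('entailment')\n    c = spe_labels.count('contradiction')\n    if e > c:\n        return 'entailment'\n    if c > e:\n        return 'contradiction'\n    return 'neutral'",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "MPE vs. Standard Entailment",

"sec_num": "5.2"

},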
|
{ |
|
"text": "This baseline achieves an accuracy of 41.7%. These results indicate that MPE cannot be trivially reduced to SPE. That is, even if a model had access to the correct SPE label for each individual premise (an unrealistic assumption), it would require more than simple voting heuristics to obtain the correct MPE label from these pairwise labels. Table 4 illustrates that the majority of MPE items require aggregation of information about the described entities and events across multiple premises. In the first example, the first premise is consistent with a scene that involves a team of football players, while only the last premise indi- cates that the team may be waiting. Moreover, the simple majority voting would work on the fourth example but fail on the second example, while the more sophisticated voting scheme would work on the second example and fail on the fourth.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 343, |
|
"end": 350, |
|
"text": "Table 4", |
|
"ref_id": "TABREF5" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "MPE vs. Standard Entailment", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "We used a random sample of 100 development items to examine the types of semantic phenomena that are useful for inference in this dataset. We categorized each item by type of knowledge or reasoning necessary to predict the correct label for the hypothesis given the premises. An item belongs to a category if at least one premise in that item exhibits that semantic phenomenon in relation to the hypothesis, and an item may belong to multiple categories. For each category, Table 5 contains its frequency, an illustrative example containing the relevant premise, and the distribution over entailment labels. We did our analysis on full items (four premises and the corresponding hypothesis), but the examples in Table 5 have been simplified to a single premise for simplicity.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 474, |
|
"end": 482, |
|
"text": "Table 5", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 713, |
|
"end": 720, |
|
"text": "Table 5", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Semantic Phenomena", |
|
"sec_num": "5.3" |
|
}, |
|
{ |
|
"text": "Word equivalence Items in this category contain a pair of equivalent words (synonyms or paraphrases). The word in the hypothesis can be exchanged for the word in the premise without significantly changing the meaning of the hypothesis.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Semantic Phenomena", |
|
"sec_num": "5.3" |
|
}, |
|
{ |
|
"text": "Word hypernymy These items involve lexical hypernyms: someone who is a man is also a person (entailment), but a person may or may not be a man (neutral), and somebody who is a man is not a child (contradiction).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Semantic Phenomena", |
|
"sec_num": "5.3" |
|
}, |
|
{ |
|
"text": "Phrase equivalence These items involve equivalent phrases, i.e. synonyms or paraphrases. The phrase in the hypothesis can be replaced by the phrase in the premise without significantly changing the meaning of the hypothesis.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Semantic Phenomena", |
|
"sec_num": "5.3" |
|
}, |
|
{ |
|
"text": "Phrase hypernymy Items in this category involve a specific phrase and a general phrase: the more general phrase \"doing exercises\" can refer to multiple types of exercises in addition to \"stretching their legs.\"", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Semantic Phenomena", |
|
"sec_num": "5.3" |
|
}, |
|
{ |
|
"text": "Mutual exclusion Distinguishing between contradiction and neutral items involves identifying Table 5 : Analysis of 100 random dev items. For each phenomenon, we show the distribution over labels and an example. The label is indicated with E, N, C. We use color and underlining to indicate the relevant comparisons. The indicated span of text is part of the necessary information to predict the correct label, but may not be sufficient on its own.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 93, |
|
"end": 100, |
|
"text": "Table 5", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Semantic Phenomena", |
|
"sec_num": "5.3" |
|
}, |
|
{ |
|
"text": "actions that are mutually exclusive, i.e. cannot be performed simultaneously by the same agent (\"Two doctors perform surgery\" vs. \"Two surgeons are having lunch\").", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Semantic Phenomena", |
|
"sec_num": "5.3" |
|
}, |
|
{ |
|
"text": "Compatibility The opposite of mutual exclusion is compatibility: two actions that can be performed simultaneously by the same agent (e.g. \"A boy flying a red and white kite\" vs. \"A boy is smiling\").", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Semantic Phenomena", |
|
"sec_num": "5.3" |
|
}, |
|
{ |
|
"text": "World knowledge These items require extralinguistic knowledge about the relative frequency and co-occurrence of events in the world (not overlapping with the mutual exclusion or compatibility phenomena). A human reader can infer that children in a potato sack race are having fun (while a marathon runner competing in a race might not be described as having fun).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Semantic Phenomena", |
|
"sec_num": "5.3" |
|
}, |
|
{ |
|
"text": "In addition to the semantic phenomena we have just discussed, the data presents the challenge of how to combine information across multiple premises. We examined examples from the development data to analyze the different types of information aggregation present in our dataset.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Combining Information Across Premises", |
|
"sec_num": "5.4" |
|
}, |
|
{ |
|
"text": "Coreference resolution This case requires cross-caption coreference resolution of entity mentions from multiple premises and the hypothesis. In this example, a human reader can recognize that \"two men\" and \"two senior citizens\" refer to the same entities, i.e. the \"two older men\" in the hypothesis. Given that information, the reader can additionally infer that the two older men on the street are likely to be standing.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Combining Information Across Premises", |
|
"sec_num": "5.4" |
|
}, |
|
{ |
|
"text": "1. Two men in tan coats exchange looks on the city sidewalk. 2. Two senior citizens talking on a public street. 3. Two men in brown coats on the street. 4. Two men in beige coats, talking.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Combining Information Across Premises", |
|
"sec_num": "5.4" |
|
}, |
|
{ |
|
"text": "Two older men stand.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Combining Information Across Premises", |
|
"sec_num": "5.4" |
|
}, |
|
{ |
|
"text": "\u21d2ENTAILMENT Event resolution This case requires resolving various event descriptions from multiple premises and the hypothesis. In the following example, a human reader recognizes that the man is sitting on scaffolding so that he can repair the building, and therefore he is doing construction work. Visual ambiguity resolution This case involves reconciling apparently contradictory information across premises. These discrepancies are largely due to the fact that the premise captions were written to describe an image. Sometimes the image contained visually ambiguous entities or events that are then described by different caption writers. In this example, in order to resolve the discrepancy, the reader must recognize from context that \"woman\" and \"young child\" (also \"person\") refer to the same entity.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Combining Information Across Premises", |
|
"sec_num": "5.4" |
|
}, |
|
{ |
|
"text": "1. A person in a green jacket and pants appears to be digging in a wooded field with several cars in the background. 2.A young child in a green jacket rakes leaves. 3. A young child rakes leaves in a wooded area. 4. A woman cleaning up a park.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Combining Information Across Premises", |
|
"sec_num": "5.4" |
|
}, |
|
{ |
|
"text": "A woman standing in the forest. \u21d2ENTAILMENT Scene resolution These examples require the reader to build a mental representation of the scene from the premises in order to assess the probability that the hypothesis is true. In the first example, specific descriptions -a jumping horse, a cowboy balancing, a rodeo -combine to assign a high probability that the specific event described by the hypothesis is true.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Combining Information Across Premises", |
|
"sec_num": "5.4" |
|
}, |
|
{ |
|
"text": "1. A man with a cowboy hat is riding a horse that is jumping. 2. A cowboy riding on his horse that is jumping in the air. 3. A cowboy balances on his horse in a rodeo. 4. Man wearing a cowboy hat riding a horse.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Combining Information Across Premises", |
|
"sec_num": "5.4" |
|
}, |
|
{ |
|
"text": "An animal bucking a man. \u21d2ENTAILMENT", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Combining Information Across Premises", |
|
"sec_num": "5.4" |
|
}, |
|
{ |
|
"text": "In the next example, the hypothesis does not contradict any individual premise sentence. However, a reader who understands the generic scene described knows that the very specific hypothesis description is unlikely to go unmentioned. Shirtlessness would be a salient detail in the this scene, so the fact that none of the premises mention it means that the hypothesis is likely to be false. In the final example, the premises present a somewhat generic description of the scene. While some premises lean towards entailment (a woman and a man in discussion could be having a work meeting) and others lean towards contradiction (two people conversing outdoors at a restaurant are probably not working), none of them contain overwhelming evidence that the scene entails or contradicts the hypothesis. Therefore, the hypothesis is neutral given the premises.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Combining Information Across Premises", |
|
"sec_num": "5.4" |
|
}, |
|
{ |
|
"text": "1. A blond woman wearing a gray jacket converses with an older man in a green shirt and glasses while sitting on a restaurant patio. 2. A blond pony-tailed woman and a gray-haired man converse while seated at a restaurant's outdoor area. 3. A woman with blond hair is sitting at a table and talking to a man with glasses. 4. A woman discusses something with an older man at a table outside a restaurant.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Combining Information Across Premises", |
|
"sec_num": "5.4" |
|
}, |
|
{ |
|
"text": "A woman doing work. \u21d2NEUTRAL", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Combining Information Across Premises", |
|
"sec_num": "5.4" |
|
}, |
|
{ |
|
"text": "We apply several neural models from the entailment literature to our data. We also present a model designed to handle multiple premises, as this is unique to our dataset.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Models", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "LSTM In our experiments, we found that the conditional LSTM (Hochreiter and Schmidhuber, 1997) model of Rockt\u00e4schel et al. (2016) outperformed a Siamese LSTM network (e.g. Bowman et al. (2015)), so we report results using the conditional LSTM. This model consists of two LSTMs that process the hypothesis conditioned on the premise. The first LSTM reads the premise. Its final cell state is used to initialize the cell state of the second LSTM, which reads the hypothesis. The resulting premise vector and hypothesis vector are concatenated and passed through a hidden layer and a softmax prediction layer. When handling four MPE premise sentences, we concatenate them into a single sequence (in the order of the caption IDs) that we pass to the first LSTM. When we only have a single premise sentence, we simply pass it to the first LSTM.", |
|
"cite_spans": [ |
|
{ |
|
"start": 60, |
|
"end": 94, |
|
"text": "(Hochreiter and Schmidhuber, 1997)", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 104, |
|
"end": 129, |
|
"text": "Rockt\u00e4schel et al. (2016)", |
|
"ref_id": "BIBREF12" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Models", |
|
"sec_num": "6" |
|
}, |
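
{

"text": "A minimal sketch of this conditional setup, written in PyTorch purely for illustration; layer sizes, initialization, and other details of the original implementation may differ:\n\nimport torch\nimport torch.nn as nn\n\nclass ConditionalLSTM(nn.Module):\n    def __init__(self, vocab_size, embed_dim=300, hidden_dim=100, num_classes=3):\n        super().__init__()\n        self.embed = nn.Embedding(vocab_size, embed_dim)\n        self.premise_lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)\n        self.hypothesis_lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)\n        self.classifier = nn.Sequential(nn.Linear(2 * hidden_dim, hidden_dim), nn.Tanh(), nn.Linear(hidden_dim, num_classes))\n\n    def forward(self, premise_ids, hypothesis_ids):\n        # The first LSTM reads the (concatenated) premise sequence.\n        _, (p_h, p_c) = self.premise_lstm(self.embed(premise_ids))\n        # Its final cell state initializes the second LSTM, which reads the hypothesis.\n        h0 = torch.zeros_like(p_h)\n        _, (h_h, _) = self.hypothesis_lstm(self.embed(hypothesis_ids), (h0, p_c))\n        # Concatenate the premise and hypothesis vectors, then a hidden layer and a prediction layer.\n        pair = torch.cat([p_h[-1], h_h[-1]], dim=-1)\n        return self.classifier(pair)  # logits over entailment / neutral / contradiction",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Models",

"sec_num": "6"

},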
|
{ |
|
"text": "Word-to-word attention Neural attention models have shown a lot of success on SNLI. We evaluate the word-to-word attention model of Rockt\u00e4schel et al. (2016) . 1 This model learns a soft alignment of words in the premise and hypothesis. One LSTM reads the premise and produces an output vector after each word. A second LSTM, initialized by the final cell state of the first, reads the hypothesis one word at a time. For each word w t in the hypothesis, the model produces attention weights \u03b1 t over the premise output vectors. The final sentence pair representation is a nonlinear combination of the attention-weighted representation of the premise and the final output vector from the hypothesis LSTM. This final sentence pair representation is passed through a softmax layer to compute the cross-entropy loss. Again, when training on MPE, we concatenate the premise sentences into a single sequence as input to the premise LSTM.", |
|
"cite_spans": [ |
|
{ |
|
"start": 132, |
|
"end": 157, |
|
"text": "Rockt\u00e4schel et al. (2016)", |
|
"ref_id": "BIBREF12" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Models", |
|
"sec_num": "6" |
|
}, |
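
{

"text": "A single attention step of this kind can be written as follows; this is a simplified sketch that omits the recurrence over previous attention states used in the full model, and the parameter names are ours:\n\nimport torch\nimport torch.nn.functional as F\n\ndef attend_over_premise(premise_outputs, hyp_output, w_y, w_h, w_alpha):\n    # premise_outputs: (L, d) premise LSTM outputs; hyp_output: (d,) output for the current hypothesis word.\n    # w_y, w_h are (d, d) and w_alpha is (d,); all are learned parameters.\n    scores = torch.tanh(premise_outputs @ w_y + hyp_output @ w_h) @ w_alpha  # (L,)\n    alpha = F.softmax(scores, dim=0)  # attention weights over premise positions\n    weighted_premise = alpha @ premise_outputs  # (d,) attention-weighted premise representation\n    return alpha, weighted_premise",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Models",

"sec_num": "6"

},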
|
{ |
|
"text": "Premise-wise sum of experts (SE) The previous models all assume that the premise is a single sentence, so in order to apply them naively to our dataset, we have to concatenate the four premises. To capture what distinguishes our task from standard entailment, we also consider a premise-wise sum of experts (SE) model that makes four independent decisions for each premise paired with the hypothesis. This model can adjust how it processes each premise based on the relative predictions of the other premises. We apply the conditional LSTM repeatedly to read each premise and the hypothesis, producing four premise vectors p 1 ... p 4 and four hypothesis vectors h 1 ... h 4 (conditioned on each premise). Each premise vector p i is concatenated with its hypothesis vector h i and passed through a feedforward layer to produce logit prediction l i . We sum l 1 ... l 4 to obtain the final prediction, which we use to compute the cross-entropy loss.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Models", |
|
"sec_num": "6" |
|
}, |
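
{

"text": "Continuing the illustrative PyTorch sketch above, the premise-wise aggregation amounts to summing per-pair logits; pair_model stands for any module that maps a premise-hypothesis pair to class logits:\n\nimport torch\n\ndef sum_of_experts_logits(pair_model, premise_batches, hypothesis_batch):\n    # One independent prediction per premise, each conditioned on the same hypothesis.\n    logits = [pair_model(premise, hypothesis_batch) for premise in premise_batches]\n    # Summing the four logit vectors gives the item-level prediction used for the cross-entropy loss.\n    return torch.stack(logits, dim=0).sum(dim=0)",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Models",

"sec_num": "6"

},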
|
{ |
|
"text": "When training on SNLI, we apply the conditional LSTM only once to read the premise and hypothesis and produce p 1 and h 1 . We pass the concatenation of p 1 and h 1 through the feedforward layer to produce l 1 , which we use to compute the cross-entropy loss.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Models", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "For the LSTM and SE models, we use 300d GloVe vectors (Pennington et al., 2014) trained on 840B tokens as the input. The attention model uses word2vec vectors (Mikolov et al., 2013 ) (replacing with GloVe had almost no effect on performance). We use the Adam optimizer (Kingma and Ba, 2014) with the default configuration. We train each model for 10 epochs based on convergence on dev. For joint SNLI+MPE training, we use the same parameters and pretrain for 10 epochs on SNLI, then train for 10 epochs on MPE. This was the best joint training approach we found.", |
|
"cite_spans": [ |
|
{ |
|
"start": 54, |
|
"end": 79, |
|
"text": "(Pennington et al., 2014)", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 159, |
|
"end": 180, |
|
"text": "(Mikolov et al., 2013", |
|
"ref_id": "BIBREF9" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Training Details", |
|
"sec_num": "7" |
|
}, |
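
{

"text": "A schematic of the SNLI+MPE regime described above, again assuming the illustrative PyTorch sketches from Section 6 and standard data loaders; the settings mirror those reported here (Adam, learning rate 0.001, 10 epochs per stage):\n\nimport torch\n\ndef train_snli_then_mpe(model, snli_loader, mpe_loader, epochs=10, lr=0.001):\n    optimizer = torch.optim.Adam(model.parameters(), lr=lr)\n    loss_fn = torch.nn.CrossEntropyLoss()\n    for loader in (snli_loader, mpe_loader):  # pretrain on SNLI, then continue on MPE\n        for _ in range(epochs):\n            for premise, hypothesis, label in loader:\n                optimizer.zero_grad()\n                loss = loss_fn(model(premise, hypothesis), label)\n                loss.backward()\n                optimizer.step()",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Training Details",

"sec_num": "7"

},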
|
{ |
|
"text": "When training on SNLI, we use the best parameters reported for the word-to-word attention model. 2 When training on MPE only, we set dropout, learning rate, and LSTM dimensionality as the result of a grid search on dev. 3 Table 6 contains the test accuracies of the models from Section 6: LSTM, sum of experts (SE), and word-to-word attention under three training regimes: SNLI only, MPE only, and SNLI+MPE.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 222, |
|
"end": 229, |
|
"text": "Table 6", |
|
"ref_id": "TABREF9" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Training Details", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "We train only on SNLI to see whether models can generalize from one entailment task to the other. Interestingly, the attention model's accuracy on MPE is higher after training only on SNLI than training on MPE, perhaps because it requires much more data to learn reasonable attention weighting parameters.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Overall Performance", |
|
"sec_num": "8.1" |
|
}, |
|
{ |
|
"text": "When training on SNLI or MPE alone, the best model is SE, the only model that handles the four premises. It is not surprising that the LSTM model performs poorly, as it is forced to reduce a very long sequence of words to a single vector. The LSTM performs on par with SE when training on SNLI+MPE, but our analysis (Section 5.3) shows that their errors are quite different.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Overall Performance", |
|
"sec_num": "8.1" |
|
}, |
|
{ |
|
"text": "The attention model trained on SNLI+MPE has the highest accuracy overall. We reason that pretraining on SNLI is necessary to learn reasonable parameters for the attention weights before training on MPE, a smaller dataset where wordto-word inferences may be less obvious. When trained only on MPE, the attention model performs much worse than SE, with particularly low accuracy on entailing items.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Overall Performance", |
|
"sec_num": "8.1" |
|
}, |
|
{ |
|
"text": "We implemented a model that adds attention to the SE model, but it overfit on SNLI and could not match other models' accuracy, reaching only about 58% on dev compared to 59-63%. Future work will investigate other approaches to combining the benefits of the SE and attention models.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Overall Performance", |
|
"sec_num": "8.1" |
|
}, |
|
{ |
|
"text": "To get a better understanding of how our task differs from standard entailment, we analyze how Table 7 : Accuracy for each model (trained on SNLI+MPE) on the dev data subsets that have 0-4 SPE labels that match the MPE label (Table 4) .", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 95, |
|
"end": 102, |
|
"text": "Table 7", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 225, |
|
"end": 234, |
|
"text": "(Table 4)", |
|
"ref_id": "TABREF5" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Performance by Pair Agreement", |
|
"sec_num": "8.2" |
|
}, |
|
{ |
|
"text": "performance is affected by the number of premises whose SPE label agrees with the MPE label. Table 7 shows the accuracy of each SNLI+MPEtrained model on the dev data grouped by SPE-MPE label agreement (as in Table 4 ). The attention model has the highest accuracy on three of five categories, including the most difficult category where none of the SPE labels match the MPE label. SE has the highest accuracy in the remaining two categories. The attention model demonstrates large gains in the easiest categories, perhaps because there is less advantage to aggregating individual premise predictions (as SE does) and more cases where attention weighting of individual words is useful. On the other hand, the attention model also does well on the most difficult category, indicating that it may be able to partially aggregate premises by increasing attention weights on phrases from multiple sentences. Attention and SE exhibit complementary strengths that we hope to combine in future work. Table 8 shows the performance of the three SNLI+MPE-trained models over semantic phenomena, based on the 100 annotated dev items (see Section 5.3 and Table 5 ). It may not be informative to analyze performance on smaller classes (e.g. phrase equivalence and phrase hypernymy), but we can still look at other noticeable differences between models. Although the attention model outperformed both LSTM and SE models in overall accuracy, it is not the best in every category. Both SE and attention have access to the same information, but the attention model does better on items that contain relationships like hypernyms and synonyms for both words and short phrases. The SE model is best at mutual exclusion, compatibility, and world knowledge categories, e.g. knowing that a man who is resting is not kayaking, and a bride is not also a cheerleader. In cases that require analy- ", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 208, |
|
"end": 215, |
|
"text": "Table 4", |
|
"ref_id": "TABREF5" |
|
}, |
|
{ |
|
"start": 991, |
|
"end": 998, |
|
"text": "Table 8", |
|
"ref_id": "TABREF12" |
|
}, |
|
{ |
|
"start": 1141, |
|
"end": 1148, |
|
"text": "Table 5", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Performance by Pair Agreement", |
|
"sec_num": "8.2" |
|
}, |
|
{ |
|
"text": "We presented a novel textual entailment task that involves inference over longer premise texts and aggregation of information from multiple independent premise sentences. This task is an important step towards a system that can create a coherent scene representation from longer texts, such as multiple independent reports. We introduced a dataset for this task (http://nlp.cs.illinois.edu/ HockenmaierGroup/data.html) which presents a more challenging, realistic entailment problem and cannot be solved by majority voting or related heuristics. We presented the results of several strong neural entailment baselines on this dataset, including one model that aggregates information from the predictions of separate premise sentences. Future work will investigate aggregating information at earlier stages to address the cases that require explicit reasoning about the interaction of multiple premises.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "9" |
|
}, |
|
{ |
|
"text": "Our experiments use a reimplementation of their model https://github.com/junfenglx/reasoning_attention", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Dropout: 0.8, learning rate: 0.001, vector dim: 100, batch size: 323 LSTM: dropout: 0.8, vector dim: 75. SE: dropout: 0.8, vector dim: 100. Attention: dropout: 0.6, vector dim: 100. Learning rate: 0.001 for all models", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "This project is supported by NSF grants 1053856, 1205627, 1405883 and 1563727, a Google Research Award, and Contract W911NF-15-1-0461 with the US Defense Advanced Research Projects Agency (DARPA) and the Army Research Office (ARO). Approved for Public Release, Distribution Unlimited. The views expressed are those of the authors and do not reflect the official policy or position of the Department of Defense or the U.S. Government.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Acknowledgments", |
|
"sec_num": null |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Semeval-2012 task 6: A pilot on semantic textual similarity", |
|
"authors": [ |
|
{ |
|
"first": "Eneko", |
|
"middle": [], |
|
"last": "Agirre", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mona", |
|
"middle": [], |
|
"last": "Diab", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Daniel", |
|
"middle": [], |
|
"last": "Cer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Aitor", |
|
"middle": [], |
|
"last": "Gonzalez-Agirre", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "Proceedings of the First Joint Conference on Lexical and Computational Semantics", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "385--393", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Eneko Agirre, Mona Diab, Daniel Cer, and Aitor Gonzalez-Agirre. 2012. Semeval-2012 task 6: A pi- lot on semantic textual similarity. In Proceedings of the First Joint Conference on Lexical and Computa- tional Semantics-Volume 1: Proceedings of the main conference and the shared task, and Volume 2: Pro- ceedings of the Sixth International Workshop on Se- mantic Evaluation, pages 385-393. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "A large annotated corpus for learning natural language inference", |
|
"authors": [ |
|
{

"first": "Samuel",

"middle": [

"R"

],

"last": "Bowman",

"suffix": ""

},

{

"first": "Gabor",

"middle": [],

"last": "Angeli",

"suffix": ""

},

{

"first": "Christopher",

"middle": [],

"last": "Potts",

"suffix": ""

},

{

"first": "Christopher",

"middle": [

"D"

],

"last": "Manning",

"suffix": ""

}
|
], |
|
"year": 2015, |
|
"venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "632--642", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Samuel R. Bowman, Gabor Angeli, Christopher Potts, and Christopher D. Manning. 2015. A large anno- tated corpus for learning natural language inference. In Proceedings of the 2015 Conference on Empiri- cal Methods in Natural Language Processing, pages 632-642.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "The PASCAL recognising textual entailment challenge", |
|
"authors": [ |
|
{

"first": "Ido",

"middle": [],

"last": "Dagan",

"suffix": ""

},

{

"first": "Oren",

"middle": [],

"last": "Glickman",

"suffix": ""

},

{

"first": "Bernardo",

"middle": [],

"last": "Magnini",

"suffix": ""

}
|
], |
|
"year": 2006, |
|
"venue": "Proceedings of the First International Conference on Machine Learning Challenges: Evaluating Predictive Uncertainty Visual Object Classification, and Recognizing Textual Entailment, MLCW'05", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "177--190", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ido Dagan, Oren Glickman, and Bernardo Magnini. 2006. The PASCAL recognising textual entail- ment challenge. In Proceedings of the First In- ternational Conference on Machine Learning Chal- lenges: Evaluating Predictive Uncertainty Visual Object Classification, and Recognizing Textual En- tailment, MLCW'05, pages 177-190, Berlin, Hei- delberg. Springer-Verlag.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Recognizing textual entailment: Models and applications", |
|
"authors": [ |
|
{

"first": "Ido",

"middle": [],

"last": "Dagan",

"suffix": ""

},

{

"first": "Dan",

"middle": [],

"last": "Roth",

"suffix": ""

},

{

"first": "Mark",

"middle": [],

"last": "Sammons",

"suffix": ""

},

{

"first": "Fabio",

"middle": [

"Massimo"

],

"last": "Zanzotto",

"suffix": ""

}
|
], |
|
"year": 2013, |
|
"venue": "Synthesis Lectures on Human Language Technologies", |
|
"volume": "6", |
|
"issue": "4", |
|
"pages": "1--220", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ido Dagan, Dan Roth, Mark Sammons, and Fabio Mas- simo Zanzotto. 2013. Recognizing textual entail- ment: Models and applications. Synthesis Lectures on Human Language Technologies, 6(4):1-220.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Long short-term memory", |
|
"authors": [ |
|
{ |
|
"first": "Sepp", |
|
"middle": [], |
|
"last": "Hochreiter", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J\u00fcrgen", |
|
"middle": [], |
|
"last": "Schmidhuber", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1997, |
|
"venue": "Neural Comput", |
|
"volume": "9", |
|
"issue": "8", |
|
"pages": "1735--1780", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sepp Hochreiter and J\u00fcrgen Schmidhuber. 1997. Long short-term memory. Neural Comput., 9(8):1735- 1780.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Adam: A method for stochastic optimization", |
|
"authors": [ |
|
{

"first": "Diederik",

"middle": [

"P"

],

"last": "Kingma",

"suffix": ""

},

{

"first": "Jimmy",

"middle": [],

"last": "Ba",

"suffix": ""

}
|
], |
|
"year": 2014, |
|
"venue": "International Conference on Learning Representations (ICLR", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Diederik P. Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. In International Conference on Learning Representations (ICLR).", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Visual genome: Connecting language and vision using crowdsourced dense image annotations", |
|
"authors": [ |
|
{ |
|
"first": "Ranjay", |
|
"middle": [], |
|
"last": "Krishna", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yuke", |
|
"middle": [], |
|
"last": "Zhu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Oliver", |
|
"middle": [], |
|
"last": "Groth", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Justin", |
|
"middle": [], |
|
"last": "Johnson", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kenji", |
|
"middle": [], |
|
"last": "Hata", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Joshua", |
|
"middle": [], |
|
"last": "Kravitz", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Stephanie", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yannis", |
|
"middle": [], |
|
"last": "Kalantidis", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Li-Jia", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "David", |
|
"middle": [ |
|
"A" |
|
], |
|
"last": "Shamma", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Michael", |
|
"middle": [], |
|
"last": "Bernstein", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Li", |
|
"middle": [], |
|
"last": "Fei-Fei", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ranjay Krishna, Yuke Zhu, Oliver Groth, Justin John- son, Kenji Hata, Joshua Kravitz, Stephanie Chen, Yannis Kalantidis, Li-Jia Li, David A Shamma, Michael Bernstein, and Li Fei-Fei. 2016. Visual genome: Connecting language and vision using crowdsourced dense image annotations.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Illinois-LH: A denotational and distributional approach to semantics", |
|
"authors": [ |
|
{ |
|
"first": "Alice", |
|
"middle": [], |
|
"last": "Lai", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Julia", |
|
"middle": [], |
|
"last": "Hockenmaier", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Proceedings of the 8th International Workshop on Semantic Evaluation", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "329--334", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Alice Lai and Julia Hockenmaier. 2014. Illinois-LH: A denotational and distributional approach to seman- tics. In Proceedings of the 8th International Work- shop on Semantic Evaluation (SemEval 2014), pages 329-334.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "A SICK cure for the evaluation of compositional distributional semantic models", |
|
"authors": [ |
|
{ |
|
"first": "Marco", |
|
"middle": [], |
|
"last": "Marelli", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Stefano", |
|
"middle": [], |
|
"last": "Menini", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Marco", |
|
"middle": [], |
|
"last": "Baroni", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Luisa", |
|
"middle": [], |
|
"last": "Bentivogli", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Raffaella", |
|
"middle": [], |
|
"last": "Bernardi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Roberto", |
|
"middle": [], |
|
"last": "Zamparelli", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Proceedings of the Ninth International Conference on Language Resources and Evaluation", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Marco Marelli, Stefano Menini, Marco Baroni, Luisa Bentivogli, Raffaella Bernardi, and Roberto Zam- parelli. 2014. A SICK cure for the evaluation of compositional distributional semantic models. In Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC- 2014).", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Distributed representations of words and phrases and their compositionality", |
|
"authors": [ |
|
{ |
|
"first": "Tomas", |
|
"middle": [], |
|
"last": "Mikolov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ilya", |
|
"middle": [], |
|
"last": "Sutskever", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kai", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Greg", |
|
"middle": [ |
|
"S" |
|
], |
|
"last": "Corrado", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jeff", |
|
"middle": [], |
|
"last": "Dean", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Advances in neural information processing systems", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "3111--3119", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Cor- rado, and Jeff Dean. 2013. Distributed representa- tions of words and phrases and their compositional- ity. In Advances in neural information processing systems, pages 3111-3119.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "GloVe: Global vectors for word representation", |
|
"authors": [ |
|
{ |
|
"first": "Jeffrey", |
|
"middle": [], |
|
"last": "Pennington", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Richard", |
|
"middle": [], |
|
"last": "Socher", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christopher", |
|
"middle": [ |
|
"D" |
|
], |
|
"last": "Manning", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Empirical Methods in Natural Language Processing (EMNLP)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1532--1543", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jeffrey Pennington, Richard Socher, and Christo- pher D. Manning. 2014. GloVe: Global vectors for word representation. In Empirical Methods in Nat- ural Language Processing (EMNLP), pages 1532- 1543.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Collecting image annotations using Amazon's Mechanical Turk", |
|
"authors": [ |
|
{ |
|
"first": "Cyrus", |
|
"middle": [], |
|
"last": "Rashtchian", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Peter", |
|
"middle": [], |
|
"last": "Young", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Micah", |
|
"middle": [], |
|
"last": "Hodosh", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Julia", |
|
"middle": [], |
|
"last": "Hockenmaier", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Proceedings of the NAACL HLT 2010 Workshop on Creating Speech and Language Data with Amazon's Mechanical Turk", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "139--147", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Cyrus Rashtchian, Peter Young, Micah Hodosh, and Julia Hockenmaier. 2010. Collecting image annota- tions using Amazon's Mechanical Turk. In Proceed- ings of the NAACL HLT 2010 Workshop on Creating Speech and Language Data with Amazon's Mechan- ical Turk, pages 139-147. Association for Computa- tional Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Reasoning about entailment with neural attention", |
|
"authors": [ |
|
{ |
|
"first": "Tim", |
|
"middle": [], |
|
"last": "Rockt\u00e4schel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Edward", |
|
"middle": [], |
|
"last": "Grefenstette", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Karl", |
|
"middle": [ |
|
"Moritz" |
|
], |
|
"last": "Hermann", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tomas", |
|
"middle": [], |
|
"last": "Kocisky", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Phil", |
|
"middle": [], |
|
"last": "Blunsom", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "International Conference on Learning Representations", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Tim Rockt\u00e4schel, Edward Grefenstette, Karl Moritz Hermann, Tomas Kocisky, and Phil Blunsom. 2016. Reasoning about entailment with neural attention. In International Conference on Learning Represen- tations (ICLR).", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "From image descriptions to visual denotations", |
|
"authors": [ |
|
{ |
|
"first": "Peter", |
|
"middle": [], |
|
"last": "Young", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alice", |
|
"middle": [], |
|
"last": "Lai", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Micah", |
|
"middle": [], |
|
"last": "Hodosh", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Julia", |
|
"middle": [], |
|
"last": "Hockenmaier", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Transactions of the Association of Computational Linguistics", |
|
"volume": "2", |
|
"issue": "1", |
|
"pages": "67--78", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Peter Young, Alice Lai, Micah Hodosh, and Julia Hockenmaier. 2014. From image descriptions to vi- sual denotations. Transactions of the Association of Computational Linguistics -Volume 2, Issue 1, pages 67-78.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"uris": null, |
|
"text": "1. A young couple sits in a park eating ice cream as children play and other people enjoy themselves around them. 2. Couple in park eating ice cream cones with three other adults and two children in background. 3. A couple enjoying ice cream outside on a nice day. 4. A couple eats ice cream in the park. A shirtless man sitting. \u21d2CONTRADICTION", |
|
"type_str": "figure", |
|
"num": null |
|
}, |
|
"TABREF1": { |
|
"content": "<table><tr><td>: Mean word overlap for full training data</td></tr><tr><td>and each label, original and lemmatized sentences.</td></tr><tr><td>MPE has much lower word overlap than SNLI.</td></tr></table>", |
|
"type_str": "table", |
|
"html": null, |
|
"text": "", |
|
"num": null |
|
}, |
|
"TABREF2": { |
|
"content": "<table><tr><td/><td>SNLI</td><td>MPE</td></tr><tr><td>#Lexical types</td><td>36,616</td><td>9,254</td></tr><tr><td>#Lexical tokens</td><td>12 million</td><td>468,524</td></tr><tr><td>Mean premise length</td><td colspan=\"2\">14.0 \u00b1 6.0 53.2 \u00b1 12.8</td></tr><tr><td>Mean hypothesis length</td><td>8.3 \u00b1 3.2</td><td>5.3 \u00b1 1.8</td></tr><tr><td>Label distribution</td><td/><td/></tr><tr><td>Entailment</td><td>33.3%</td><td>32.3%</td></tr><tr><td>Neutral</td><td>33.3%</td><td>26.3%</td></tr><tr><td>Contradiction</td><td>33.3%</td><td>41.6%</td></tr></table>", |
|
"type_str": "table", |
|
"html": null, |
|
"text": "The annotation instructions we provided to Crowdflower and Mechanical Turk annotators.", |
|
"num": null |
|
}, |
|
"TABREF3": { |
|
"content": "<table/>", |
|
"type_str": "table", |
|
"html": null, |
|
"text": "Type and token counts, sentence lengths, and label distributions for training data.", |
|
"num": null |
|
}, |
|
"TABREF4": { |
|
"content": "<table><tr><td># pairs</td><td colspan=\"2\">% of data Pair</td><td>Example Hypothesis and Four Premises</td></tr><tr><td>agree</td><td/><td>Label</td><td/></tr><tr><td colspan=\"4\">0 A football player 1 21.8 N N N N 26.9 N A person is half submerged in water in their yellow kayak.</td></tr><tr><td/><td/><td>C</td><td>A woman has positioned her kayak nose down in the water.</td></tr><tr><td/><td/><td>N</td><td>A person in a canoe is rafting in wild waters.</td></tr><tr><td/><td/><td>N</td><td>A kayaker plunges into the river.</td></tr><tr><td/><td/><td/><td>\u21d2C A man in a boat paddling through waters.</td></tr><tr><td>2</td><td>16.7</td><td>E</td><td>A batter playing cricket missed the ball and the person behind him is catching it.</td></tr><tr><td/><td/><td>E</td><td>A cricket player misses the pitch.</td></tr><tr><td/><td/><td>N</td><td>The three men are playing cricket.</td></tr><tr><td/><td/><td>N</td><td>A man struck out playing cricket.</td></tr><tr><td/><td/><td/><td>\u21d2E A man swings a bat.</td></tr><tr><td>3</td><td>24.8</td><td>N</td><td>A young gymnast, jumps high in the air, while performing on a balance beam.</td></tr><tr><td/><td/><td>N</td><td>A gymnast performing on the balance beam in front of an audience.</td></tr><tr><td/><td/><td>E</td><td>The young gymnast's supple body soars above the balance beam.</td></tr><tr><td/><td/><td>N</td><td>A gymnast is performing on the balance beam.</td></tr><tr><td/><td/><td/><td>\u21d2N A woman doing gymnastics.</td></tr><tr><td>4</td><td>9.8</td><td>C</td><td/></tr><tr><td/><td/><td>C</td><td/></tr><tr><td/><td/><td>C</td><td/></tr><tr><td/><td/><td>C</td><td/></tr></table>", |
|
"type_str": "table", |
|
"html": null, |
|
"text": "in a red uniform is standing in front of other football players in a stadium. A football player facing off against two others. A football player wearing a red shirt. Defensive player waiting for the snap. \u21d2E The team waiting.", |
|
"num": null |
|
}, |
|
"TABREF5": { |
|
"content": "<table/>", |
|
"type_str": "table", |
|
"html": null, |
|
"text": "", |
|
"num": null |
|
}, |
|
"TABREF6": { |
|
"content": "<table><tr><td/><td># E N C Example Premise and Hypothesis Pair</td></tr><tr><td>Total</td><td>100 31 29 40</td></tr><tr><td colspan=\"2\">Word equivalence 16 12 4 0 Word hypernymy 19 6 6 7 Phrase 7 6 1 0 A couple in their wedding attire stand behind a table with a wedding cake and flowers.</td></tr><tr><td>equivalence</td><td>Newlyweds standing. \u21d2E</td></tr><tr><td>Phrase</td><td>8 6 2 0 A group of young boys wearing track jackets stretch their legs on a gym floor as they</td></tr><tr><td>hypernymy</td><td>sit in a circle.</td></tr><tr><td/><td>A group doing exercises. \u21d2E</td></tr><tr><td>Mutual</td><td>25 0 0 25 A woman in a red vest working at a computer.</td></tr><tr><td>exclusion</td><td>Lady doing yoga. \u21d2C</td></tr><tr><td>Compatibility</td><td>18 0 18 0 Onlookers watch.</td></tr><tr><td/><td>A girl at bat in a softball game. \u21d2N</td></tr><tr><td>World</td><td>35 14 9 12 A young woman gives directions to an older woman outside a subway station.</td></tr><tr><td>knowledge</td><td>Women standing. \u21d2E</td></tr></table>", |
|
"type_str": "table", |
|
"html": null, |
|
"text": "A person climbing a rock face. A rock climber scales a cliff. \u21d2E Girl in a blue sweater painting while looking at a bird in a book. A child painting a picture. \u21d2E", |
|
"num": null |
|
}, |
|
"TABREF9": { |
|
"content": "<table><tr><td>: Entailment accuracy on MPE (test). SE is</td></tr><tr><td>best when training only on SNLI or MPE. Atten-</td></tr><tr><td>tion is best when training on SNLI+MPE.</td></tr></table>", |
|
"type_str": "table", |
|
"html": null, |
|
"text": "", |
|
"num": null |
|
}, |
|
"TABREF12": { |
|
"content": "<table/>", |
|
"type_str": "table", |
|
"html": null, |
|
"text": "Accuracy for each semantic phenomenon on 100 dev items. While attention was the best model overall, it does not have the highest accuracy for all phenomena. sis of mutually exclusive or compatible events, a model like SE has an advantage since it can reinforce its weighted combination prediction by examining each premise separately.", |
|
"num": null |
|
} |
|
} |
|
} |
|
} |