{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T07:13:06.493534Z"
},
"title": "The Extraordinary Failure of Complement Coercion Crowdsourcing",
"authors": [
{
"first": "Yanai",
"middle": [],
"last": "Elazar",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Bar Ilan University Allen Institute for Artificial Intelligence",
"location": {}
},
"email": "yanaiela@gmail.com"
},
{
"first": "Victoria",
"middle": [],
"last": "Basmov",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Bar Ilan University Allen Institute for Artificial Intelligence",
"location": {}
},
"email": ""
},
{
"first": "Shauli",
"middle": [],
"last": "Ravfogel",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Bar Ilan University Allen Institute for Artificial Intelligence",
"location": {}
},
"email": "shauli.ravfogel@gmail.com"
},
{
"first": "Yoav",
"middle": [],
"last": "Goldberg",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Bar Ilan University Allen Institute for Artificial Intelligence",
"location": {}
},
"email": "yoav.goldberg@gmail.com"
},
{
"first": "Reut",
"middle": [],
"last": "Tsarfaty",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Bar Ilan University Allen Institute for Artificial Intelligence",
"location": {}
},
"email": "reut.tsarfaty@gmail.com"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Crowdsourcing has eased and scaled up the collection of linguistic annotation in recent years. In this work, we follow known methodologies of collecting labeled data for the complement coercion phenomenon. These are constructions with an implied action-e.g., \"I started a new book I bought last week\", where the implied action is reading. We aim to collect annotated data for this phenomenon by reducing it to either of two known tasks: Explicit Completion and Natural Language Inference. However, in both cases, crowdsourcing resulted in low agreement scores, even though we followed the same methodologies as in previous work. Why does the same process fail to yield high agreement scores? We specify our modeling schemes, highlight the differences with previous work and provide some insights about the task and possible explanations for the failure. We conclude that specific phenomena require tailored solutions, not only in specialized algorithms, but also in data collection methods.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "Crowdsourcing has eased and scaled up the collection of linguistic annotation in recent years. In this work, we follow known methodologies of collecting labeled data for the complement coercion phenomenon. These are constructions with an implied action-e.g., \"I started a new book I bought last week\", where the implied action is reading. We aim to collect annotated data for this phenomenon by reducing it to either of two known tasks: Explicit Completion and Natural Language Inference. However, in both cases, crowdsourcing resulted in low agreement scores, even though we followed the same methodologies as in previous work. Why does the same process fail to yield high agreement scores? We specify our modeling schemes, highlight the differences with previous work and provide some insights about the task and possible explanations for the failure. We conclude that specific phenomena require tailored solutions, not only in specialized algorithms, but also in data collection methods.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Crowdsourcing has become extremely popular in recent years for annotating datasets. Many works use frameworks like Amazon Mechanical Turk (AMT) by converting complex linguistic tasks into easy-to-grasp presentations which make it possible to crowdsource linguistically-annotated data at scale (Bowman et al., 2015; FitzGerald et al., 2018; Dasigi et al., 2019; Wolfson et al., 2020) .",
"cite_spans": [
{
"start": 293,
"end": 314,
"text": "(Bowman et al., 2015;",
"ref_id": "BIBREF2"
},
{
"start": 315,
"end": 339,
"text": "FitzGerald et al., 2018;",
"ref_id": "BIBREF8"
},
{
"start": 340,
"end": 360,
"text": "Dasigi et al., 2019;",
"ref_id": "BIBREF6"
},
{
"start": 361,
"end": 382,
"text": "Wolfson et al., 2020)",
"ref_id": "BIBREF55"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this work, we attempt to use existing methodologies for crowdsourcing linguistic annotations in order to collect annotations for complement coercion (Pustejovsky, 1991 (Pustejovsky, , 1995 , a phenomenon involving an implied action triggered by an eventselecting verb. Specifically, certain verb classes require an event-denoting complement, as in: \"I started reading a book\", \"I finished eating the",
"cite_spans": [
{
"start": 152,
"end": 170,
"text": "(Pustejovsky, 1991",
"ref_id": "BIBREF41"
},
{
"start": 171,
"end": 191,
"text": "(Pustejovsky, , 1995",
"ref_id": "BIBREF42"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Explicit After a heartfelt vow, she agrees {officiating}, \u03c6 and the two begin kissing as the preacher tries to continue the ceremony.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Task Annotations",
"sec_num": null
},
{
"text": "Hunter waited for max to finish his burger before asking him again. ;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Entailment",
"sec_num": null
},
{
"text": "ENT NEU CON Hunter waited for max to finish swallowing his burger before asking him again. Table 1 : Examples for the two modeling and annotation schemes used in this work. Both examples are labeled with different (disagreeing) answers. In the Explicit modeling, each label is a set, which can be empty (\u03c6) (meaning that no event is implied), or not (and thus the context suggests an implied event). The second modeling follows the NLI scheme, a standard approach for evaluating language understanding. The ENT, NEU and CON labels refer to the entail, neutral and contradict labels accordingly.",
"cite_spans": [],
"ref_spans": [
{
"start": 91,
"end": 98,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Entailment",
"sec_num": null
},
{
"text": "cake\", etc. However, such event-denoting complements might remain implicit, not appearing in the surface form. Consider for instance, the sentence \"I started a new book.\" Here the event that was started remains implicit. Our task is then, first, to detect that the verb 'started' in this context implies some unmentioned event, and that probable events in this context are reading or writing. Furthermore, we wish to predict that for \"I started the book I bought yesterday\", the more probable event is reading, rather than writing.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Entailment",
"sec_num": null
},
{
"text": "This phenomenon (described in detail in Section 2) seems intuitive at first, and easy-to-grasp by non-experts. However, we find that collecting annotated data for this task via crowdsourcing is very challenging, achieving low agreement scores between annotators ( \u00a73), despite using two common collection methods in frequently used setups. The two framings we use for data collection along with examples for them are presented in Table 1 . These low agreement scores come as a surprise, given the large body of previous work on crowdsourcing linguistic annotations. Why do such issues arise when collecting data for complement coercion, while for similar phenomena the same approaches yield successful results? Although it is difficult to answer this question, we aim to highlight the similarities and the differences with other tasks, and provide some insights into this question.",
"cite_spans": [],
"ref_spans": [
{
"start": 430,
"end": 437,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Entailment",
"sec_num": null
},
{
"text": "Complement Coercion We are interested in the linguistic phenomenon of complement coercion. 1 In complement coercion, there is a clash between an expectation for a verb argument denoting an event, and the appearance of a noun argument denoting an entity. Uncovering the covert event requires the comprehender to infer the implied event by invoking the comprehender's lexical semantics and/or world knowledge (Zarcone et al., 2017) .",
"cite_spans": [
{
"start": 407,
"end": 429,
"text": "(Zarcone et al., 2017)",
"ref_id": "BIBREF59"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "2"
},
{
"text": "Consider Examples 1 and 2 below, with an implicit event of reading or writing missing in the surface form. Inferring the implicit event (marked ) is necessary in order to construe the full semantics of this sentence.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "2"
},
{
"text": "1. I started a new book.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "2"
},
{
"text": "The reconstruction of the covert event requires an interplay between semantics 2 and world knowledge. In example 1 above, the prefix \"I started \" with the event-selecting verb started triggers expectations for some event-denoting object (reading, writing, eating, watching, etc) . The object that follows, \"a new book\", narrows down the expectations -based on world knowledge. As McGregor et al. (2017) puts it, \"Different nouns grant privileged access to different activities, particularly those which are most frequently performed with the entities they denote\". Although the entity narrows down the set of possible events, the implied event might remain ambiguous (in Example 1, both reading and writing are plausible, but eating is not). As can be seen in Example 2, additional context, as in \"I bought last week\", provides further world-knowledge cues, towards accessing a more specific event (in this case reading is more likely than writing), thus resolving the remaining ambiguity.",
"cite_spans": [
{
"start": 237,
"end": 278,
"text": "(reading, writing, eating, watching, etc)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "I started a new book I bought last week.",
"sec_num": "2."
},
{
"text": "Complement coercion is particularly frequent with certain verb classes, including aspectual verbs -verbs that \"describe the initiation, termination, or continuation of an activity\" (Levin, 1993 ) -such as: 'start', 'begin', 'continue' and 'finish' (McGregor et al., 2017) . This set of verbs is the focus of our work. Note however, that such verbs may appear in similar constructions that do not imply any covert action or event. For instance, in the following sentence:",
"cite_spans": [
{
"start": 181,
"end": 193,
"text": "(Levin, 1993",
"ref_id": "BIBREF27"
},
{
"start": 248,
"end": 271,
"text": "(McGregor et al., 2017)",
"ref_id": "BIBREF31"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "I started a new book I bought last week.",
"sec_num": "2."
},
{
"text": "3. I started a new company.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "I started a new book I bought last week.",
"sec_num": "2."
},
{
"text": "Here, the verb 'start' is used as an entity-selecting (and not event-selecting) verb, a synonym of 'found' or 'establish'. See more examples of similar non-coercive constructions in Appendix B.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "I started a new book I bought last week.",
"sec_num": "2."
},
{
"text": "Annotated data for complement coercion (Pustejovsky et al., 2010) was collected in the past, based on a tailor-made annotation methodology (Pustejovsky et al., 2009) , consisting of a multi-step process that includes word-sense disambiguation by experts. The annotation focused on coercion detection (as well as labeling the arguments type) and did not involve identifying the implied action. Here, we aim to collect complement coercion data via non-expert annotation, at scale, to test whether models can recover the implicit events and resolve the emerging ambiguities.",
"cite_spans": [
{
"start": 139,
"end": 165,
"text": "(Pustejovsky et al., 2009)",
"ref_id": "BIBREF44"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "I started a new book I bought last week.",
"sec_num": "2."
},
{
"text": "Crowdsourcing NLI NLI, originally framed as Recognizing Textual Entailment (RTE), has become a standard framework for testing reasoning capabilities of models. It originated from the work by Dagan et al. (2005) , where a small dataset was curated by experts using precise guidelines with a specific focus on lexical and syntactic variability rather than delicate logical issues, while dismissing cases of disagreements or ambiguity. Bowman et al. (2015); Williams et al. (2018) then scaled up the task and crowdsourced large-scale NLI datasets. In contrast to Dagan et al. (2005) , the task definitions were short and loose, relying on the annotators' common sense understanding. Many works since have been using the NLI framework and the crowdsourcing procedure associated with it to test models for different language phenomena (Marelli et al., 2014; Lai et al., 2017; Naik et al., 2018; Ross and Pavlick, 2019; Yanaka et al., 2020) .",
"cite_spans": [
{
"start": 191,
"end": 210,
"text": "Dagan et al. (2005)",
"ref_id": "BIBREF5"
},
{
"start": 455,
"end": 477,
"text": "Williams et al. (2018)",
"ref_id": "BIBREF54"
},
{
"start": 560,
"end": 579,
"text": "Dagan et al. (2005)",
"ref_id": "BIBREF5"
},
{
"start": 830,
"end": 852,
"text": "(Marelli et al., 2014;",
"ref_id": "BIBREF29"
},
{
"start": 853,
"end": 870,
"text": "Lai et al., 2017;",
"ref_id": "BIBREF57"
},
{
"start": 871,
"end": 889,
"text": "Naik et al., 2018;",
"ref_id": "BIBREF32"
},
{
"start": 890,
"end": 913,
"text": "Ross and Pavlick, 2019;",
"ref_id": "BIBREF49"
},
{
"start": 914,
"end": 934,
"text": "Yanaka et al., 2020)",
"ref_id": "BIBREF56"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "I started a new book I bought last week.",
"sec_num": "2."
},
{
"text": "We begin by directly modeling the phenomenon. For a set of sentences containing possibly-coercive verbs, we wish to determine for each verb if it entails an implicit event, and if so, to figure out what the event is. This direct task-definition approach is reminiscent of studies that collected annotated data for other missing elements phenomena, such as Verb-Phrase Ellipsis (Bos and Spenader, 2011) , Numeric Fused-Heads (Elazar and Goldberg, 2019) , Bridging (Roesiger, 2018; Hou et al., 2018) and Sluicing (Hansen and S\u00f8gaard, 2020). However, when attempting to crowdsource and label complement coercion instances, we reach very low agreement scores in the first step: determining whether there is an implied event or not. We discuss this experiment in greater detail in Appendix C.",
"cite_spans": [
{
"start": 377,
"end": 401,
"text": "(Bos and Spenader, 2011)",
"ref_id": "BIBREF1"
},
{
"start": 424,
"end": 451,
"text": "(Elazar and Goldberg, 2019)",
"ref_id": "BIBREF7"
},
{
"start": 463,
"end": 479,
"text": "(Roesiger, 2018;",
"ref_id": "BIBREF47"
},
{
"start": 480,
"end": 497,
"text": "Hou et al., 2018)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Explicit Completion Attempt",
"sec_num": "3.1"
},
{
"text": "In light of the low agreements on explicit modeling of the task of complement coercion, we turn to a different crowdsourcing approach which was proven successful for many linguistic phenomena -using NLI as discussed above ( \u00a72). NLI was used to collect data for a wide range of linguistic phenomena: Paraphrase Inference, Anaphora Resolution, Numerical Reasoning, Implicatures and more (White et al., 2017; Poliak et al., 2018; Jeretic et al., 2020; Yanaka et al., 2020; Naik et al., 2018 ) (see Poliak (2020) ). Therefore, we take a similar approach, with similar methodologies, and make use of NLI as an evaluation setup for the complement coercion phenomenon.",
"cite_spans": [
{
"start": 386,
"end": 406,
"text": "(White et al., 2017;",
"ref_id": "BIBREF53"
},
{
"start": 407,
"end": 427,
"text": "Poliak et al., 2018;",
"ref_id": "BIBREF40"
},
{
"start": 428,
"end": 449,
"text": "Jeretic et al., 2020;",
"ref_id": "BIBREF21"
},
{
"start": 450,
"end": 470,
"text": "Yanaka et al., 2020;",
"ref_id": "BIBREF56"
},
{
"start": 471,
"end": 488,
"text": "Naik et al., 2018",
"ref_id": "BIBREF32"
},
{
"start": 496,
"end": 509,
"text": "Poliak (2020)",
"ref_id": "BIBREF39"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "NLI for Complement Coercion",
"sec_num": "3.2"
},
{
"text": "Here we do not directly model the identification and recovery of event verbs, but rather, we reduce it to an NLI task. Intuitively, if in Example 2 the semantically plausible implied event is reading, we expect the sentence \"I started a book I bought last week\" to entail a sentence that contains the event explicitly: \"I started reading a book I bought last week\" (Table 2) . 3 In contrast, we expect \"I started a book\" to be neutral with respect to \"I started reading a book\", since both reading and writing are plausible in that context, and there is no reason to prefer one of these complements over the other. Examples of this format, along with the different labels we employ, are shown in Table 2 . Table 2 : Examples for NLI pairs with a complement coercion structure. The ENT, NEU and CON labels refers to entail, neutral and contradict accordingly.",
"cite_spans": [
{
"start": 377,
"end": 378,
"text": "3",
"ref_id": null
}
],
"ref_spans": [
{
"start": 365,
"end": 374,
"text": "(Table 2)",
"ref_id": null
},
{
"start": 696,
"end": 703,
"text": "Table 2",
"ref_id": null
},
{
"start": 706,
"end": 713,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "NLI for Complement Coercion",
"sec_num": "3.2"
},
{
"text": "Corpus Candidates In order to keep the task simple, we avoid complexities of lexical, semantic and grammatical differences. Each example is composed of a minimal-pair (Kaushik et al., 2019; Gardner et al., 2020) consisting of two sentences; one as the premise and the other as the hypothesis. We construct minimal pairs as follows: First, we extract dependencyparsed sentences from the Book Corpus (Zhu et al., 2015) containing the lemma of one of the verbs: 'start', 'begin', 'continue' and 'finish'. 4 Then, we keep sentences where the anchor verb is attached to another verb with an 'xcomp' dependency 5 (e.g. 'started' in \"started reading\"). These sentences are used as the hypotheses. To construct the premises, we remove the dependent verb (e.g. 'read'), as well as all the words between the anchor and the dependent verb (e.g. 'to' in the infinitive form: \"to read\"). Additional examples are provided in Appendix D. Note that this procedure sometimes generates ungrammatical or implausible sentences, which are flagged by the annotators.",
"cite_spans": [
{
"start": 167,
"end": 189,
"text": "(Kaushik et al., 2019;",
"ref_id": "BIBREF23"
},
{
"start": 190,
"end": 211,
"text": "Gardner et al., 2020)",
"ref_id": "BIBREF55"
},
{
"start": 398,
"end": 416,
"text": "(Zhu et al., 2015)",
"ref_id": "BIBREF61"
},
{
"start": 502,
"end": 503,
"text": "4",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "NLI for Complement Coercion",
"sec_num": "3.2"
},
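To make the extraction recipe above concrete, the following is a minimal Python sketch of the minimal-pair construction, assuming spaCy's English pipeline; the function name, the sentence source, and the exact token-removal rule are illustrative rather than the authors' released code.

import spacy

nlp = spacy.load("en_core_web_sm")
ANCHORS = {"start", "begin", "continue", "finish"}

def make_minimal_pair(sentence):
    # Return (premise, hypothesis) if the sentence matches the pattern, else None.
    doc = nlp(sentence)
    for tok in doc:
        if tok.lemma_ in ANCHORS and tok.pos_ == "VERB":
            # Dependent verb attached with 'xcomp', e.g. 'reading' in "started reading".
            xcomps = [c for c in tok.children if c.dep_ == "xcomp" and c.pos_ == "VERB"]
            if not xcomps:
                continue
            dep = xcomps[0]
            # Drop the dependent verb and all words between the anchor and it
            # (e.g. 'to' in "started to read"); joining loses original spacing.
            kept = [t.text for t in doc if t.i <= tok.i or t.i > dep.i]
            return " ".join(kept), sentence  # (premise, hypothesis)
    return None

# e.g. make_minimal_pair("I started to read a book I bought last week.")
# -> ("I started a book I bought last week .", "I started to read a book I bought last week.")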
{
"text": "Crowdsourcing Procedure We follow the standard procedure of collecting NLI data with crowdsourcing and collect annotations from Amazon Mechanical Turk (AMT). Specifically, we follow the instruction from Glockner et al. (2018) , which involves three questions:",
"cite_spans": [
{
"start": 203,
"end": 225,
"text": "Glockner et al. (2018)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "NLI for Complement Coercion",
"sec_num": "3.2"
},
{
"text": "1. Do the sentences describe the same event? 2. Does the new sentence add new information to the original sentence?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "NLI for Complement Coercion",
"sec_num": "3.2"
},
{
"text": "3. Is the new sentence incorrect/ungrammatical?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "NLI for Complement Coercion",
"sec_num": "3.2"
},
{
"text": "We discard any example which at least one worker marked as incorrect/ungrammatical. If the answer to the first question was negative, we considered the label as contradict. Otherwise, we considered the label as entail if the answer to the second question was negative, and neutral if it was positive. A screenshot of the interface is displayed in Figure 2 in the Appendix. We require an approved rate of at least 99%, at least 5000 completed HITs, and filter workers to be from English-speaking countries. We also condition the turkers to pass a validation test with a perfect score. We pay 8 cents per HIT.",
"cite_spans": [],
"ref_spans": [
{
"start": 347,
"end": 355,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "NLI for Complement Coercion",
"sec_num": "3.2"
},
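The decision rule above maps the three answers to a label; a compact sketch with hypothetical function and argument names, where only the decision logic is taken from the text:

def derive_label(same_event: bool, adds_new_info: bool, ungrammatical: bool):
    # Any example flagged as incorrect/ungrammatical by at least one worker is discarded.
    if ungrammatical:
        return None
    # Question 1 negative ("not the same event") -> contradict.
    if not same_event:
        return "contradict"
    # Question 2 decides between neutral (adds information) and entail (does not).
    return "neutral" if adds_new_info else "entail"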
{
"text": "We collect 76 6 pairs (after filtering ungrammatical sentences), each labeled by three different annotators. The Fleiss Kappa (Fleiss, 1971) agreement is k = 0.24. This score is remarkably low, compared to previous work that similarly collected NLI labels and achieved scores between 0.61 and 0.7. Why does this happen? Consider the following examples, along with their labels: 4. \"We finished Letterman and I got up from the couch and said, I'm going to bed.\" ; \"We finished watching Letterman and I got up from the couch and said, I'm going to bed.\"",
"cite_spans": [
{
"start": 126,
"end": 140,
"text": "(Fleiss, 1971)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": null
},
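For reference, the Fleiss Kappa computation has an off-the-shelf implementation; a sketch assuming statsmodels, with toy labels standing in for the collected annotations (E = entail, N = neutral, C = contradict):

import numpy as np
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

# One row per pair, one column per worker (three workers, as above).
labels = np.array([
    ["E", "E", "E"],  # full agreement, as in Example 4
    ["E", "N", "C"],  # all three labels, as in Example 5
    ["N", "C", "C"],  # partial agreement, as in Example 6
])
codes = np.vectorize({"E": 0, "N": 1, "C": 2}.get)(labels).astype(int)
table, _ = aggregate_raters(codes)  # item-by-category count table
print(fleiss_kappa(table))          # the paper reports kappa = 0.24 on the real data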
{
"text": "ENT ENT ENT 5. \"Flo set the sack of sausage and egg biscuits on the counter right as the young man finished his case.\" ; \"Flo set the sack of sausage and egg biscuits on the counter right as the young man finished pleading his case.\" ENT NEU CON 6. \"We start the interviews later today.\" ; \"We start shooting the interviews later today.\"",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": null
},
{
"text": "Example 4 was labeled by all three annotators as entail. However, annotators were in disagreement on examples 5, 6. Example 5 was annotated with all three possible labels (entail, contradict and neutral). Indeed, different readings of this phrase are possible -more formally, different readers construe the meaning of the utterance differently; \"[Construal] is a dynamic process of meaning construction, in which speakers and hearers encode and decode, respectively\" (Trott et al., 2020 ). An annotator who understands the word 'case' as a legal case, will choose entail, while an annotator who interprets 'case' as a bag and imagines a different background story (for example, a young man packing a brief-case), will choose contradict. Finally, an annotator who thinks of both scenarios will choose neutral, which can be argued to be the correct answer. However, we find that for a human hearer, holding both scenarios in mind at the same time is hard, which we attribute to the construal of meanings. When a human construes an interpretation, they construes it in a single fashion until primed otherwise. So, it is not natural to conceive competing meaning scenarios when one is already \"locked in\" on a specific construal.",
"cite_spans": [
{
"start": 467,
"end": 486,
"text": "(Trott et al., 2020",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "NEU CON CON",
"sec_num": null
},
{
"text": "Although the sentence pairs were carefully built to exclude lexical and syntactic variances, ambiguous sentences such as the above recur throughout the dataset. We believe that these disagreements are inherent to this type of problem, and are not due to other factors such as poor annotations. As evidence, the authors of this work also annotated a subset of these examples and reached a similar (low) agreement.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "NEU CON CON",
"sec_num": null
},
{
"text": "Inherent Disagreements in Human Textual Inferences Recently, Pavlick and Kwiatkowski (2019) discussed a similar trend of disagreements in five popular NLI datasets (RTE (Dagan et al., 2005) , SNLI (Bowman et al., 2015), MNLI (Williams et al., 2018) , JOCI (Zhang et al., 2017) and DNC (Poliak et al., 2018) ). In their study, annotators had to select the degree to which a premise entails a hypothesis, on a scale (Chen et al., 2020 ) (instead of discrete labels). Pavlick and Kwiatkowski (2019) show that even though these datasets are reported to have high agreement scores, specific examples suffer from inherent disagreements. For instance, in about 20% of the inspected examples, \"there is a nontrivial second component\" (e.g. entailment and neutral). Our findings are related to theirs, although not identical: while the disagreements they report are due to the individuals' interpretations of a situation, in our case, disagreements are due to the difficulty in imagining a different scenario. While some works propose to collect annotator disagreements and use them as inputs (Plank et al., 2014; Palomaki et al., 2018 ) (see Pavlick and Kwiatkowski (2019) for an elaborated overview), this will not hold in our case, because only one of the labels is typically correct.",
"cite_spans": [
{
"start": 164,
"end": 189,
"text": "(RTE (Dagan et al., 2005)",
"ref_id": null
},
{
"start": 225,
"end": 248,
"text": "(Williams et al., 2018)",
"ref_id": "BIBREF54"
},
{
"start": 256,
"end": 276,
"text": "(Zhang et al., 2017)",
"ref_id": "BIBREF60"
},
{
"start": 285,
"end": 306,
"text": "(Poliak et al., 2018)",
"ref_id": "BIBREF40"
},
{
"start": 414,
"end": 432,
"text": "(Chen et al., 2020",
"ref_id": "BIBREF4"
},
{
"start": 465,
"end": 495,
"text": "Pavlick and Kwiatkowski (2019)",
"ref_id": "BIBREF35"
},
{
"start": 1084,
"end": 1104,
"text": "(Plank et al., 2014;",
"ref_id": "BIBREF38"
},
{
"start": 1105,
"end": 1126,
"text": "Palomaki et al., 2018",
"ref_id": "BIBREF34"
},
{
"start": 1134,
"end": 1164,
"text": "Pavlick and Kwiatkowski (2019)",
"ref_id": "BIBREF35"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "4"
},
{
"text": "However, the bottom-line is the same: these dis-agreements cannot be dismissed as 'noise', they are more profound. We hypothesize that when tackling specific phenomena like the one we address in this work, which involve sources of disagreements that are often 'ignored' (not intentionally) during the collection of large datasets, 7 these sources of disagreements are highlighted and manifest themselves more clearly. This results in low agreement scores as we see in our study. Scale Annotations Recent works have proposed to collect labels for NLI pairs on a scale (Pavlick and Kwiatkowski, 2019; Chen et al., 2020; Nie et al., 2020) . Although we agree that this technique may produce a more fine-grained understanding of human judgments, Pavlick and Kwiatkowski (2019) ; Nie et al. (2020) observed that scale annotations may result in a multi-modality of the distribution. The different distributions can be viewed as different construals, where each individual interprets the example differently. Task Definition Another issue might arise from the task definition itself. As opposed to annotation efforts for linguistic tasks such as parsing (Marcus et al., 1993) and semantic role labeling (Carreras and M\u00e0rquez, 2005 ) that are carried out by expert annotators and often have annotation guidelines of dozens of pages, the transition to crowdsourcing has reduced the guidelines to a few phrases, and expert annotators have been replaced by laymen. This transition required to simplify the guidelines and to avoid complex definition and corner-cases. Even though crowdsourcing enabled an easier annotation process and collection of huge amounts of data, it also came with a cost: lack of refined definitions and relying on people's \"common sense\" and \"intuition\". However, as we see in this work, such intuitions are not consistent across individuals and are not sufficient for some tasks. We believe that, similar to the issues mentioned above, the lack of proper definitions tends to amplify disagreements when dealing with specific phenomena, which was often the reason behind the elaborated and long guidelines in classic datasets (Kalouli et al., 2019) . Possible Solution As we approach \"solving\" current NLP dataset, which were once perceived as complicated, we also reach an understanding that the datasets at hand do not reflect the full capacity of language, and specific linguistic phenomena, which may posses specific challenges, are lost in the crowds. Some phenomena turn out to be more complex, and require specific solutions. In this work we show that, like we do with algorithmic solutions we need to reconsider the data collection process. We hold that data collection for these phenomena also require training of the annotators (Roit et al., 2020; Pyatkin et al., 2020) , whether experts or crowdsourcing workers, and may also require coming up with novel annotation protocols. Another potential solution is to use deliberation between the workers as a mean to improve agreement (Schaekermann et al., 2018) . With respect to the disagreements we observed, a deliberation between workers would allow them to share the construals each individual had imagined, thus reaching a consensus on the labels. It would also serve as a training for recovering more construals, allowing them to better identify the neutral cases.",
"cite_spans": [
{
"start": 567,
"end": 598,
"text": "(Pavlick and Kwiatkowski, 2019;",
"ref_id": "BIBREF35"
},
{
"start": 599,
"end": 617,
"text": "Chen et al., 2020;",
"ref_id": "BIBREF4"
},
{
"start": 618,
"end": 635,
"text": "Nie et al., 2020)",
"ref_id": "BIBREF33"
},
{
"start": 742,
"end": 772,
"text": "Pavlick and Kwiatkowski (2019)",
"ref_id": "BIBREF35"
},
{
"start": 775,
"end": 792,
"text": "Nie et al. (2020)",
"ref_id": "BIBREF33"
},
{
"start": 1147,
"end": 1168,
"text": "(Marcus et al., 1993)",
"ref_id": "BIBREF28"
},
{
"start": 1196,
"end": 1223,
"text": "(Carreras and M\u00e0rquez, 2005",
"ref_id": "BIBREF3"
},
{
"start": 2140,
"end": 2162,
"text": "(Kalouli et al., 2019)",
"ref_id": "BIBREF22"
},
{
"start": 2752,
"end": 2771,
"text": "(Roit et al., 2020;",
"ref_id": "BIBREF48"
},
{
"start": 2772,
"end": 2793,
"text": "Pyatkin et al., 2020)",
"ref_id": "BIBREF46"
},
{
"start": 3003,
"end": 3030,
"text": "(Schaekermann et al., 2018)",
"ref_id": "BIBREF50"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "4"
},
{
"text": "In this work, we attempt to crowdsource annotations for complement coercion constructions. We use two modeling methods, which were successful in similar settings, but resulted in low agreement scores in our setup. We highlight some of the issues we believe are causing the disagreements. The main one being different construals (Trott et al., 2020) of the utterances by different people -as well as the difficulty to consider a different one, once fixating on a specific construal -that led to different answers. We connect our findings to previous work that observed some inherent disagreement in human judgments in popular datasets, such as SNLI and MNLI (Pavlick and Kwiatkowski, 2019) . Although this issue is less prominent in these datasets (which is manifested as higher agreement scores), we notice that when tackling a specific phenomenon, e.g. involving implicit elements, these issues may arise.",
"cite_spans": [
{
"start": 328,
"end": 348,
"text": "(Trott et al., 2020)",
"ref_id": null
},
{
"start": 657,
"end": 688,
"text": "(Pavlick and Kwiatkowski, 2019)",
"ref_id": "BIBREF35"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "5"
},
{
"text": "We also argue that the lack of detailed definitions in the commonly used NLI tasks may lead to poor performance on small buckets of language-specific phenomena. This drop might be lost in large-scale datasets, but may have critical effects when modeling and studying specific phenomena. As a community, we claim, we should seek to identify those buckets and further investigate them, using more profound approaches for data collection, with clear and grounded definitions. We hope that our attempted trial in data collection will allow others to learn from our failure.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "5"
},
{
"text": "Complement coercion has been studied in linguistics from many theoretical viewpoints. Lexical semantic accounts (such as Pustejovsky 1991, 1995 and others) and Construction Grammar accounts (e.g. Goldberg 1995 ) \"attempt to formalize what semantic features of a lexical item have been changed to conform to those of the construction\" (Yoon, 2012) . One of the main approaches is the Type-Shifting analysis (Pustejovsky, 1991 (Pustejovsky, , 1995 Jackendoff, 1996 Jackendoff, , 2002 , \"which asserts that complement coercion involves a type-shifting operation that coerces the entity-denoting complement to an event\"(Yao-Ying, 2017). Another approach (de Almeida and Dwivedi 2008 and others) \"claims that complement coercion involves a hidden VP structure with an empty verb head, which is saturated by pragmatical inference in context\" (Yao-Ying, 2017). Cognitive linguistics accounts (such as K\u00f6vecses and Radden 1998) exploit metonymy as the mechanism behind coercion constructions (Yoon, 2012) . Complement coercion has been also extensively investigated in the framework of neurolinguistic research (for example, Kuperberg et al. 2010) and psycholinguistic studies (e.g., McElree et al. 2006) . The latter often show that \"coercion sentences elicit increased processing times\" (Husband et al., 2011) compared with non-coercion sentences. Such theories as the Type-Shifting Hypothesis mentioned above and the Structured-Individual Hypothesis (Pi\u00f1ango and Deo, 2016) suggest different explanations for this associated processing cost (Yao-Ying, 2017).",
"cite_spans": [
{
"start": 121,
"end": 147,
"text": "Pustejovsky 1991, 1995 and",
"ref_id": null
},
{
"start": 148,
"end": 155,
"text": "others)",
"ref_id": null
},
{
"start": 196,
"end": 209,
"text": "Goldberg 1995",
"ref_id": "BIBREF13"
},
{
"start": 334,
"end": 346,
"text": "(Yoon, 2012)",
"ref_id": "BIBREF58"
},
{
"start": 406,
"end": 424,
"text": "(Pustejovsky, 1991",
"ref_id": "BIBREF41"
},
{
"start": 425,
"end": 445,
"text": "(Pustejovsky, , 1995",
"ref_id": "BIBREF42"
},
{
"start": 446,
"end": 462,
"text": "Jackendoff, 1996",
"ref_id": "BIBREF19"
},
{
"start": 463,
"end": 481,
"text": "Jackendoff, , 2002",
"ref_id": "BIBREF20"
},
{
"start": 894,
"end": 919,
"text": "K\u00f6vecses and Radden 1998)",
"ref_id": "BIBREF24"
},
{
"start": 984,
"end": 996,
"text": "(Yoon, 2012)",
"ref_id": "BIBREF58"
},
{
"start": 1117,
"end": 1139,
"text": "Kuperberg et al. 2010)",
"ref_id": "BIBREF25"
},
{
"start": 1176,
"end": 1196,
"text": "McElree et al. 2006)",
"ref_id": "BIBREF30"
},
{
"start": 1445,
"end": 1468,
"text": "(Pi\u00f1ango and Deo, 2016)",
"ref_id": "BIBREF36"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "A Linguistic Background",
"sec_num": null
},
{
"text": "Here we provide some additional examples of constructions that are similar to the ones in Examples 1,2 (the verb 'start' is followed by a non-eventdenoting complement) but do not function as complement coercion constructions. Consider the following sentences:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "B Complement Coercion: Counter Examples",
"sec_num": null
},
{
"text": "7. I started a new company.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "B Complement Coercion: Counter Examples",
"sec_num": null
},
{
"text": "8. His name started the list.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "B Complement Coercion: Counter Examples",
"sec_num": null
},
{
"text": "9. Her wedding dress started a new tradition among brides.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "B Complement Coercion: Counter Examples",
"sec_num": null
},
{
"text": "In example 7 the verb 'start' is used as an entityselecting (and not event-selecting) verb, a synonym of 'found', 'establish', so that there is no type clash. In example 8 the verb 'start' is used in its 'non-eventive' (Zarcone et al., 2017) or 'stative' (Pi\u00f1ango and Deo, 2014) sense ('constitute the initial part of something'). When used this way, the verb 'start' does not exclusively select for eventive complements, so, again, there is no type clash. Also, some authors (Godard and Jayez, 1993; Yao-Ying, 2017; Pustejovsky and Bouillon, 1994) argue that in coercion constructions the subject should be an \"intentional controller of the event\" (Godard and Jayez, 1993) . In example 9 this condition does not hold, therefore there is no coercion.",
"cite_spans": [
{
"start": 219,
"end": 241,
"text": "(Zarcone et al., 2017)",
"ref_id": "BIBREF59"
},
{
"start": 255,
"end": 278,
"text": "(Pi\u00f1ango and Deo, 2014)",
"ref_id": "BIBREF37"
},
{
"start": 476,
"end": 500,
"text": "(Godard and Jayez, 1993;",
"ref_id": "BIBREF12"
},
{
"start": 501,
"end": 516,
"text": "Yao-Ying, 2017;",
"ref_id": "BIBREF57"
},
{
"start": 517,
"end": 548,
"text": "Pustejovsky and Bouillon, 1994)",
"ref_id": "BIBREF43"
},
{
"start": 649,
"end": 673,
"text": "(Godard and Jayez, 1993)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "B Complement Coercion: Counter Examples",
"sec_num": null
},
{
"text": "In the Explicit Completion approach, the goal is to add the implicit argument of the coercion construction, if such completion exists. For instance, in the sentence \"I started a new book\", possible completions are 'reading' and 'writing', and in Example 7 no completion fits. Concretely, given a sentence with a complement coercion verb candidate, the task is to complete it with a set of possible verbs that describe the covert event. As not all candidates function as parts of complement coercion constructions, annotators can mark that no additional verb is adequate in the context. In cases where there is more than one semantically plausible answer (e.g. Ex. 1), we ask annotators to provide two completion sets, each consisting of a group of semantic equivalent verbs, which correspond to different possible understandings of the text. A screenshot of the task presented to the turkers is shown in Figure 1 . This approach to task definition is reminiscent of those used for other missing elements phenom-ena, such as Verb Phrase Ellipsis (Bos and Spenader, 2011) , Numeric Fused-Heads (Elazar and Goldberg, 2019) , Bridging (Roesiger, 2018; Hou et al., 2018) and Sluicing (Hansen and S\u00f8gaard, 2020) . However, in contrast to these tasks, where the answers can usually be found in the context, 8 the answers in our case are more open-ended (although still bounded by some restrictions (Godard and Jayez, 1993; Pustejovsky and Bouillon, 1994) ). This makes this task more challenging for annotation.",
"cite_spans": [
{
"start": 1045,
"end": 1069,
"text": "(Bos and Spenader, 2011)",
"ref_id": "BIBREF1"
},
{
"start": 1092,
"end": 1119,
"text": "(Elazar and Goldberg, 2019)",
"ref_id": "BIBREF7"
},
{
"start": 1131,
"end": 1147,
"text": "(Roesiger, 2018;",
"ref_id": "BIBREF47"
},
{
"start": 1148,
"end": 1165,
"text": "Hou et al., 2018)",
"ref_id": "BIBREF17"
},
{
"start": 1170,
"end": 1205,
"text": "Sluicing (Hansen and S\u00f8gaard, 2020)",
"ref_id": null
},
{
"start": 1391,
"end": 1415,
"text": "(Godard and Jayez, 1993;",
"ref_id": "BIBREF12"
},
{
"start": 1416,
"end": 1447,
"text": "Pustejovsky and Bouillon, 1994)",
"ref_id": "BIBREF43"
}
],
"ref_spans": [
{
"start": 904,
"end": 912,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "C Explicit Modeling",
"sec_num": null
},
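As a rough illustration of the label space just described, here is a hypothetical Python structure (our naming, not the paper's): an annotation holds a list of completion sets, where an empty list encodes \u03c6 (no implied event) and two sets encode two distinct readings.

from dataclasses import dataclass, field
from typing import List, Set

@dataclass
class ExplicitAnnotation:
    sentence: str
    completion_sets: List[Set[str]] = field(default_factory=list)  # [] encodes phi

    def implies_event(self) -> bool:
        return bool(self.completion_sets)

# "I started a new book": two readings, each a set of equivalent verbs.
ambiguous = ExplicitAnnotation("I started a new book.", [{"reading"}, {"writing"}])
# "I started a new company": no covert event, labeled phi.
non_coercive = ExplicitAnnotation("I started a new company.")
print(ambiguous.implies_event(), non_coercive.implies_event())  # True False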
{
"text": "Corpus Candidates In the explicit completion setting, we look for natural sentences that contain one of the following anchor verbs: 'start', 'begin', 'continue' and 'finish', -immediately followed by a direct object without any dependent verb in between.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "C Explicit Modeling",
"sec_num": null
},
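A minimal sketch of this candidate filter, again assuming spaCy; the concrete heuristic (a 'dobj' child of the anchor verb with no verb in between) is our reading of the description, not the authors' exact code.

import spacy

nlp = spacy.load("en_core_web_sm")
ANCHORS = {"start", "begin", "continue", "finish"}

def is_explicit_candidate(sentence):
    doc = nlp(sentence)
    for tok in doc:
        if tok.lemma_ in ANCHORS and tok.pos_ == "VERB":
            dobjs = [c for c in tok.children if c.dep_ == "dobj"]
            # Keep the sentence only if a direct object follows the anchor
            # with no dependent verb between them.
            if dobjs and dobjs[0].i > tok.i and not any(
                t.pos_ == "VERB" for t in doc[tok.i + 1 : dobjs[0].i]
            ):
                return True
    return False

print(is_explicit_candidate("I started a new book."))      # True: possible coercion
print(is_explicit_candidate("I started reading a book."))  # False: event is explicit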
{
"text": "Annotation Procedure We use the same restrictions from the previous procedure and create a new validation test, tailored for the new task. We pay 4 cents per Hit.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "C Explicit Modeling",
"sec_num": null
},
{
"text": "We collect annotations for 200 sentences, with two annotations per sentence. We compute the Fleiss Kappa (Fleiss, 1971) after a relaxation of the annotations into two labels: added a complement or not. Similarly to the previous modeling, the agreement score is k = 0.18, which is considered to be low. Consider the following examples: 9. \"In 2011, Old Navy began a second rebranding to emphasize a family-oriented environment, known as Project ONE.\", -{advertising, promoting, endorsing}, \u03c6 10. \"After he had finished his studies Sadra began to explore unorthodox doctrines and as a result was both condemned and excommunicated by some Shi'i 'ulam\u0101'.\", -{pursuing, doing}, \u03c6",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": null
},
{
"text": "According to the definition of complement coercion, these examples do not require a complement. However, as can be seen from these examples, the proposed complements do contribute to an easier understanding of the sentence. We note that this concept of 'missing' is hard to explain and can be also subjective. Another obstacle is that strict adherence to the linguistic definition does not always 8 Although not always. Some of the answers in the NFH work by Elazar and Goldberg (2019) are also open-ended, but those are relatively rare. Furthermore, the answers in sluicing are sometimes a modification of the text. Figure 2 : Screenshot of the interface shown to the turkers for collecting labels. This setup follows the instructions used for labeling NLI data in Glockner et al. (2018) .",
"cite_spans": [
{
"start": 397,
"end": 398,
"text": "8",
"ref_id": null
},
{
"start": 459,
"end": 485,
"text": "Elazar and Goldberg (2019)",
"ref_id": "BIBREF7"
},
{
"start": 766,
"end": 788,
"text": "Glockner et al. (2018)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [
{
"start": 617,
"end": 625,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results",
"sec_num": null
},
{
"text": "contribute to potential usefulness of the task for downstream applications. For this phenomenon, we did not follow the strict linguistic definition and used a more relaxed one. Additional examples along with their annotations are provided in Table 4 .",
"cite_spans": [],
"ref_spans": [
{
"start": 242,
"end": 250,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results",
"sec_num": null
},
{
"text": "We provide a screenshot of the NLI interface shown to the turkers in Figure 2 .",
"cite_spans": [],
"ref_spans": [
{
"start": 69,
"end": 77,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "D NLI Framing: Additional Material",
"sec_num": null
},
{
"text": "NLI Data We provide additional examples for the original and the modified sentences (hypotheses and premises accordingly) used in the NLI framing ( \u00a73.2), along with the three obtained labels, in Table 3. ",
"cite_spans": [],
"ref_spans": [
{
"start": 196,
"end": 205,
"text": "Table 3.",
"ref_id": null
}
],
"eq_spans": [],
"section": "D NLI Framing: Additional Material",
"sec_num": null
},
{
"text": "Complement coercion has been studied in linguistics from many theoretical viewpoints. See Appendix A for background.2 E.g., understanding the difference between entitydenoting and event-denoting elements.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "We follow Bowman et al.(2015), who modeled entailment based on event coreference.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "These are frequent verbs that often appear in complement coercion constructions(McGregor et al., 2017).5 We use spaCy's parser(Honnibal and Johnson, 2015;Honnibal and Montani, 2017).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "We stopped at 76 examples since we did not see fit to annotate more data with the low agreements we obtained.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Due to large scale annotations, 'marginal' phenomena might be ignored to keep the instructions clear and concise.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We would like to thank Adam Poliak and Abhilasha Ravichander for providing valuable feedback on this paper. Moreover, we would like to thank the reviewers, as well as the workshop organizers for their constructive reviews. Yanai Elazar is grateful to be partially supported by the PBC fellowship for outstanding Phd candidates in Data Science. This project has received funding from the Europoean Research Council (ERC) under the Europoean Union's Horizon 2020 research and innovation programme, grant agreement No. 802774 (iEXTRACT) and grant agreement No. 677362 (NLPRO).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
},
{
"text": "Hypothesis Annotations that gives us something to work with if he starts trouble. that gives us something to work with if he starts making trouble.ENT ENT ENT I do hope you will continue mrs. cox's incredible hospitality. I do hope you will continue to enjoy mrs. cox's incredible hospitality. CON .. it will likely travel in a parabola, continuing its stabilizing spin, ... \u03c6, \u03c6 Afterwards, they decide to continue the pub crawl to avoid attracting suspicion.{doing}, {doing} I was surprised he did not continue his openness at the RFPERM.{embue}, {showing, displaying, ...} In 1994, he joined Motilal Oswal to start their institutional desk before moving to UBS in 1996. {employ} 1 , {work} 2 , {working} In 1943 she started a career as an actress with the stage name Sheila Scott a name ... \u03c6, {pursuing} ..., giving him the opportunity to continue the work left by his predecessors as well as ... \u03c6, {researching, studying} In the Middle Ages it was a battle cry , which was used to start a Feud or a Combat reenactment. \u03c6, {f ighting} In addition, deductions are taken if the man finishes the element on two feet ... \u03c6, {competing} Table 4 : Examples for the Explicit modeling. \u03c6 denotes the empty set, meaning no event is implied. When a subscript is present it denotes the different interpretation of the sentence, by the same annotator.",
"cite_spans": [
{
"start": 294,
"end": 297,
"text": "CON",
"ref_id": null
}
],
"ref_spans": [
{
"start": 1137,
"end": 1144,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Premise",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Coercion without lexical decomposition: Type-shifting effects revisited. The Canadian Journal of Linguistics / La revue canadienne de linguistique",
"authors": [
{
"first": "Roberto",
"middle": [
"G"
],
"last": "De Almeida",
"suffix": ""
},
{
"first": "Veena",
"middle": [
"D"
],
"last": "Dwivedi",
"suffix": ""
}
],
"year": 2008,
"venue": "",
"volume": "53",
"issue": "",
"pages": "301--326",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Roberto G. de Almeida and Roberto G. Veena D. Dwivedi. 2008. Coercion without lexical decompo- sition: Type-shifting effects revisited. The Cana- dian Journal of Linguistics / La revue canadienne de linguistique, 53:301 -326.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "An annotated corpus for the analysis of vp ellipsis. Language Resources and Evaluation",
"authors": [
{
"first": "Johan",
"middle": [],
"last": "Bos",
"suffix": ""
},
{
"first": "Jennifer",
"middle": [],
"last": "Spenader",
"suffix": ""
}
],
"year": 2011,
"venue": "",
"volume": "45",
"issue": "",
"pages": "463--494",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Johan Bos and Jennifer Spenader. 2011. An annotated corpus for the analysis of vp ellipsis. Language Re- sources and Evaluation, 45(4):463-494.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "A large annotated corpus for learning natural language inference",
"authors": [
{
"first": "R",
"middle": [],
"last": "Samuel",
"suffix": ""
},
{
"first": "Gabor",
"middle": [],
"last": "Bowman",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Angeli",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Potts",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Samuel R. Bowman, Gabor Angeli, Christopher Potts, and Christopher D. Manning. 2015. A large anno- tated corpus for learning natural language inference. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing (EMNLP). Association for Computational Linguistics.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Introduction to the conll-2005 shared task: Semantic role labeling",
"authors": [
{
"first": "Xavier",
"middle": [],
"last": "Carreras",
"suffix": ""
},
{
"first": "Llu\u00eds",
"middle": [],
"last": "M\u00e0rquez",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the ninth conference on computational natural language learning (CoNLL-2005)",
"volume": "",
"issue": "",
"pages": "152--164",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xavier Carreras and Llu\u00eds M\u00e0rquez. 2005. Introduc- tion to the conll-2005 shared task: Semantic role la- beling. In Proceedings of the ninth conference on computational natural language learning (CoNLL- 2005), pages 152-164.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Uncertain natural language inference",
"authors": [
{
"first": "Tongfei",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Zhengping",
"middle": [],
"last": "Jiang",
"suffix": ""
},
{
"first": "Adam",
"middle": [],
"last": "Poliak",
"suffix": ""
},
{
"first": "Keisuke",
"middle": [],
"last": "Sakaguchi",
"suffix": ""
},
{
"first": "Benjamin",
"middle": [],
"last": "Van Durme",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of The 58th Annual Meeting of the Association for Computational Linguistics (ACL)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tongfei Chen, Zhengping Jiang, Adam Poliak, Keisuke Sakaguchi, and Benjamin Van Durme. 2020. Un- certain natural language inference. In Proceedings of The 58th Annual Meeting of the Association for Computational Linguistics (ACL).",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "The pascal recognising textual entailment challenge",
"authors": [
{
"first": "Ido",
"middle": [],
"last": "Dagan",
"suffix": ""
},
{
"first": "Oren",
"middle": [],
"last": "Glickman",
"suffix": ""
},
{
"first": "Bernardo",
"middle": [],
"last": "Magnini",
"suffix": ""
}
],
"year": 2005,
"venue": "Machine Learning Challenges Workshop",
"volume": "",
"issue": "",
"pages": "177--190",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ido Dagan, Oren Glickman, and Bernardo Magnini. 2005. The pascal recognising textual entailment challenge. In Machine Learning Challenges Work- shop, pages 177-190. Springer.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Quoref: A reading comprehension dataset with questions requiring coreferential reasoning",
"authors": [
{
"first": "Pradeep",
"middle": [],
"last": "Dasigi",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Nelson",
"suffix": ""
},
{
"first": "Ana",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Marasovic",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Noah",
"suffix": ""
},
{
"first": "Matt",
"middle": [],
"last": "Smith",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Gardner",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
"volume": "",
"issue": "",
"pages": "5927--5934",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pradeep Dasigi, Nelson F Liu, Ana Marasovic, Noah A Smith, and Matt Gardner. 2019. Quoref: A read- ing comprehension dataset with questions requir- ing coreferential reasoning. In Proceedings of the 2019 Conference on Empirical Methods in Natu- ral Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 5927-5934.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Where's my head? definition, data set, and models for numeric fused-head identification and resolution",
"authors": [
{
"first": "Yanai",
"middle": [],
"last": "Elazar",
"suffix": ""
},
{
"first": "Yoav",
"middle": [],
"last": "Goldberg",
"suffix": ""
}
],
"year": 2019,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "7",
"issue": "",
"pages": "519--535",
"other_ids": {
"DOI": [
"10.1162/tacl_a_00280"
]
},
"num": null,
"urls": [],
"raw_text": "Yanai Elazar and Yoav Goldberg. 2019. Where's my head? definition, data set, and models for numeric fused-head identification and resolution. Transac- tions of the Association for Computational Linguis- tics, 7:519-535.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Large-scale qa-srl parsing",
"authors": [
{
"first": "Nicholas",
"middle": [],
"last": "Fitzgerald",
"suffix": ""
},
{
"first": "Julian",
"middle": [],
"last": "Michael",
"suffix": ""
},
{
"first": "Luheng",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "2051--2060",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nicholas FitzGerald, Julian Michael, Luheng He, and Luke Zettlemoyer. 2018. Large-scale qa-srl parsing. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2051-2060.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Measuring nominal scale agreement among many raters",
"authors": [
{
"first": "Joseph",
"middle": [
"L"
],
"last": "Fleiss",
"suffix": ""
}
],
"year": 1971,
"venue": "Psychological bulletin",
"volume": "76",
"issue": "5",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Joseph L Fleiss. 1971. Measuring nominal scale agree- ment among many raters. Psychological bulletin, 76(5):378.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Breaking NLI systems with sentences that require simple lexical inferences",
"authors": [
{
"first": "Max",
"middle": [],
"last": "Glockner",
"suffix": ""
},
{
"first": "Vered",
"middle": [],
"last": "Shwartz",
"suffix": ""
},
{
"first": "Yoav",
"middle": [],
"last": "Goldberg",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "650--655",
"other_ids": {
"DOI": [
"10.18653/v1/P18-2103"
]
},
"num": null,
"urls": [],
"raw_text": "Max Glockner, Vered Shwartz, and Yoav Goldberg. 2018. Breaking NLI systems with sentences that re- quire simple lexical inferences. In Proceedings of the 56th Annual Meeting of the Association for Com- putational Linguistics (Volume 2: Short Papers), pages 650-655, Melbourne, Australia. Association for Computational Linguistics.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Towards a proper treatment of coercion phenomena",
"authors": [
{
"first": "Daniele",
"middle": [],
"last": "Godard",
"suffix": ""
},
{
"first": "Jacques",
"middle": [],
"last": "Jayez",
"suffix": ""
}
],
"year": 1993,
"venue": "Sixth Conference of the European Chapter of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daniele Godard and Jacques Jayez. 1993. Towards a proper treatment of coercion phenomena. In Sixth Conference of the European Chapter of the Associ- ation for Computational Linguistics, Utrecht, The Netherlands. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Constructions: A construction grammar approach to argument structure",
"authors": [
{
"first": "A",
"middle": [
"E"
],
"last": "Goldberg",
"suffix": ""
}
],
"year": 1995,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A. E. Goldberg. 1995. Constructions: A construction grammar approach to argument structure. Chicago: University of Chicago Press.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "What do you mean'why?': Resolving sluices in conversations",
"authors": [
{
"first": "Victor",
"middle": [
"Petr\u00e9n",
"Bach"
],
"last": "Hansen",
"suffix": ""
},
{
"first": "Anders",
"middle": [],
"last": "S\u00f8gaard",
"suffix": ""
}
],
"year": 2020,
"venue": "AAAI",
"volume": "",
"issue": "",
"pages": "7887--7894",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Victor Petr\u00e9n Bach Hansen and Anders S\u00f8gaard. 2020. What do you mean'why?': Resolving sluices in con- versations. In AAAI, pages 7887-7894.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "An improved non-monotonic transition system for dependency parsing",
"authors": [
{
"first": "Matthew",
"middle": [],
"last": "Honnibal",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Johnson",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2015 conference on empirical methods in natural language processing",
"volume": "",
"issue": "",
"pages": "1373--1378",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matthew Honnibal and Mark Johnson. 2015. An im- proved non-monotonic transition system for depen- dency parsing. In Proceedings of the 2015 confer- ence on empirical methods in natural language pro- cessing, pages 1373-1378.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "spacy 2: Natural language understanding with bloom embeddings, convolutional neural networks and incremental parsing",
"authors": [
{
"first": "Matthew",
"middle": [],
"last": "Honnibal",
"suffix": ""
},
{
"first": "Ines",
"middle": [],
"last": "Montani",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "7",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matthew Honnibal and Ines Montani. 2017. spacy 2: Natural language understanding with bloom embed- dings, convolutional neural networks and incremen- tal parsing. To appear, 7(1).",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Unrestricted bridging resolution",
"authors": [
{
"first": "Yufang",
"middle": [],
"last": "Hou",
"suffix": ""
},
{
"first": "Katja",
"middle": [],
"last": "Markert",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Strube",
"suffix": ""
}
],
"year": 2018,
"venue": "Computational Linguistics",
"volume": "44",
"issue": "2",
"pages": "237--284",
"other_ids": {
"DOI": [
"10.1162/COLI{_}a{_}00315"
]
},
"num": null,
"urls": [],
"raw_text": "Yufang Hou, Katja Markert, and Michael Strube. 2018. Unrestricted bridging resolution. Computational Linguistics, 44(2):237-284.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Using complement coercion to understand the neural basis of semantic composition: Evidence from an fmri study",
"authors": [
{
"first": "E",
"middle": [
"Matthew"
],
"last": "Husband",
"suffix": ""
},
{
"first": "Lisa",
"middle": [
"A"
],
"last": "Kelly",
"suffix": ""
},
{
"first": "David",
"middle": [
"C"
],
"last": "Zhu",
"suffix": ""
}
],
"year": 2011,
"venue": "Journal of Cognitive Neuroscience",
"volume": "23",
"issue": "",
"pages": "3254--3266",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "E. Matthew Husband, Lisa A. Kelly, and David C. Zhu. 2011. Using complement coercion to understand the neural basis of semantic composition: Evidence from an fmri study. Journal of Cognitive Neuro- science, 23:3254-3266.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "The architecture of the language faculty",
"authors": [
{
"first": "Ray",
"middle": [],
"last": "Jackendoff",
"suffix": ""
}
],
"year": 1996,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ray Jackendoff. 1996. The architecture of the lan- guage faculty. MIT Press.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Foundations of language: Brain, meaning, grammar, evolution",
"authors": [
{
"first": "Ray",
"middle": [],
"last": "Jackendoff",
"suffix": ""
}
],
"year": 2002,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ray Jackendoff. 2002. Foundations of language: Brain, meaning, grammar, evolution.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Are natural language inference models IMPPRESsive? Learning IMPlicature and PRESupposition",
"authors": [
{
"first": "Paloma",
"middle": [],
"last": "Jeretic",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Warstadt",
"suffix": ""
},
{
"first": "Suvrat",
"middle": [],
"last": "Bhooshan",
"suffix": ""
},
{
"first": "Adina",
"middle": [],
"last": "Williams",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "8690--8705",
"other_ids": {
"DOI": [
"10.18653/v1/2020.acl-main.768"
]
},
"num": null,
"urls": [],
"raw_text": "Paloma Jeretic, Alex Warstadt, Suvrat Bhooshan, and Adina Williams. 2020. Are natural language infer- ence models IMPPRESsive? Learning IMPlicature and PRESupposition. In Proceedings of the 58th An- nual Meeting of the Association for Computational Linguistics, pages 8690-8705, Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Livy Real, Martha Palmer, and Valeria dePaiva",
"authors": [
{
"first": "Aikaterini-Lida",
"middle": [],
"last": "Kalouli",
"suffix": ""
},
{
"first": "Annebeth",
"middle": [],
"last": "Buis",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 13th Linguistic Annotation Workshop",
"volume": "",
"issue": "",
"pages": "132--143",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Aikaterini-Lida Kalouli, Annebeth Buis, Livy Real, Martha Palmer, and Valeria dePaiva. 2019. Explain- ing simple natural language inference. In Proceed- ings of the 13th Linguistic Annotation Workshop, pages 132-143.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Learning the difference that makes a difference with counterfactually-augmented data",
"authors": [
{
"first": "Divyansh",
"middle": [],
"last": "Kaushik",
"suffix": ""
},
{
"first": "Eduard",
"middle": [],
"last": "Hovy",
"suffix": ""
},
{
"first": "Zachary",
"middle": [],
"last": "Lipton",
"suffix": ""
}
],
"year": 2019,
"venue": "International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Divyansh Kaushik, Eduard Hovy, and Zachary Lipton. 2019. Learning the difference that makes a differ- ence with counterfactually-augmented data. In Inter- national Conference on Learning Representations.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Metonymy: Developing a cognitive linguistic view. Cognitive linguistics",
"authors": [
{
"first": "Zolt\u00e1n",
"middle": [],
"last": "K\u00f6vecses",
"suffix": ""
},
{
"first": "G\u00fcnter",
"middle": [],
"last": "Radden",
"suffix": ""
}
],
"year": 1998,
"venue": "",
"volume": "9",
"issue": "",
"pages": "37--77",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zolt\u00e1n K\u00f6vecses and G\u00fcnter Radden. 1998. Metonymy: Developing a cognitive linguistic view. Cognitive linguistics, 9(1):37-77.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Electrophysiological correlates of complement coercion",
"authors": [
{
"first": "Gina",
"middle": [
"R"
],
"last": "Kuperberg",
"suffix": ""
},
{
"first": "Arim",
"middle": [],
"last": "Choi",
"suffix": ""
},
{
"first": "Neil",
"middle": [],
"last": "Cohn",
"suffix": ""
},
{
"first": "Martin",
"middle": [],
"last": "Paczynski",
"suffix": ""
},
{
"first": "Ray",
"middle": [],
"last": "Jackendoff",
"suffix": ""
}
],
"year": 2010,
"venue": "Journal of Cognitive Neuroscience",
"volume": "22",
"issue": "",
"pages": "2685--2701",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gina R. Kuperberg, Arim Choi, Neil Cohn, Martin Paczynski, and Ray Jackendoff. 2010. Electrophysi- ological correlates of complement coercion. Journal of Cognitive Neuroscience, 22:2685-2701.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Natural language inference from multiple premises",
"authors": [
{
"first": "Alice",
"middle": [],
"last": "Lai",
"suffix": ""
},
{
"first": "Yonatan",
"middle": [],
"last": "Bisk",
"suffix": ""
},
{
"first": "Julia",
"middle": [],
"last": "Hockenmaier",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the Eighth International Joint Conference on Natural Language Processing",
"volume": "",
"issue": "",
"pages": "100--109",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alice Lai, Yonatan Bisk, and Julia Hockenmaier. 2017. Natural language inference from multiple premises. In Proceedings of the Eighth International Joint Conference on Natural Language Processing (Vol- ume 1: Long Papers), pages 100-109.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "English Verb Classes and Alternations",
"authors": [
{
"first": "Beth",
"middle": [],
"last": "Levin",
"suffix": ""
}
],
"year": 1993,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Beth Levin. 1993. English Verb Classes and Alterna- tions. The University of Chicago Press.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Building a large annotated corpus of english: The penn treebank",
"authors": [
{
"first": "Mitchell",
"middle": [],
"last": "Marcus",
"suffix": ""
},
{
"first": "Beatrice",
"middle": [],
"last": "Santorini",
"suffix": ""
},
{
"first": "Mary",
"middle": [
"Ann"
],
"last": "Marcinkiewicz",
"suffix": ""
}
],
"year": 1993,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mitchell Marcus, Beatrice Santorini, and Mary Ann Marcinkiewicz. 1993. Building a large annotated corpus of english: The penn treebank.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "A sick cure for the evaluation of compositional distributional semantic models",
"authors": [
{
"first": "Marco",
"middle": [],
"last": "Marelli",
"suffix": ""
},
{
"first": "Stefano",
"middle": [],
"last": "Menini",
"suffix": ""
},
{
"first": "Marco",
"middle": [],
"last": "Baroni",
"suffix": ""
},
{
"first": "Luisa",
"middle": [],
"last": "Bentivogli",
"suffix": ""
},
{
"first": "Raffaella",
"middle": [],
"last": "Bernardi",
"suffix": ""
},
{
"first": "Roberto",
"middle": [],
"last": "Zamparelli",
"suffix": ""
}
],
"year": 2014,
"venue": "LREC",
"volume": "",
"issue": "",
"pages": "216--223",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marco Marelli, Stefano Menini, Marco Baroni, Luisa Bentivogli, Raffaella Bernardi, Roberto Zamparelli, et al. 2014. A sick cure for the evaluation of com- positional distributional semantic models. In LREC, pages 216-223.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "A time course analysis of enriched composition",
"authors": [
{
"first": "Brian",
"middle": [],
"last": "Mcelree",
"suffix": ""
},
{
"first": "Liina",
"middle": [],
"last": "Pylkk\u00e4nen",
"suffix": ""
},
{
"first": "Martin",
"middle": [
"J"
],
"last": "Pickering",
"suffix": ""
},
{
"first": "Matthew",
"middle": [
"J"
],
"last": "Traxler",
"suffix": ""
}
],
"year": 2006,
"venue": "Psychonomic Bulletin & Review",
"volume": "13",
"issue": "",
"pages": "53--59",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Brian McElree, Liina Pylkk\u00e4nen, Martin J. Pickering, and Matthew J. Traxler. 2006. A time course analy- sis of enriched composition. Psychonomic Bulletin & Review, 13:53-59.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "A geometric method for detecting semantic coercion",
"authors": [
{
"first": "Stephen",
"middle": [],
"last": "Mcgregor",
"suffix": ""
},
{
"first": "Elisabetta",
"middle": [],
"last": "Jezek",
"suffix": ""
},
{
"first": "Matthew",
"middle": [],
"last": "Purver",
"suffix": ""
},
{
"first": "Geraint",
"middle": [],
"last": "Wiggins",
"suffix": ""
}
],
"year": 2017,
"venue": "IWCS 2017 -12th International Conference on Computational Semantics -Long papers",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stephen McGregor, Elisabetta Jezek, Matthew Purver, and Geraint Wiggins. 2017. A geometric method for detecting semantic coercion. In IWCS 2017 -12th International Conference on Computational Seman- tics -Long papers.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Stress test evaluation for natural language inference",
"authors": [
{
"first": "Aakanksha",
"middle": [],
"last": "Naik",
"suffix": ""
},
{
"first": "Abhilasha",
"middle": [],
"last": "Ravichander",
"suffix": ""
},
{
"first": "Norman",
"middle": [],
"last": "Sadeh",
"suffix": ""
},
{
"first": "Carolyn",
"middle": [],
"last": "Rose",
"suffix": ""
},
{
"first": "Graham",
"middle": [],
"last": "Neubig",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 27th International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "2340--2353",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Aakanksha Naik, Abhilasha Ravichander, Norman Sadeh, Carolyn Rose, and Graham Neubig. 2018. Stress test evaluation for natural language inference. In Proceedings of the 27th International Conference on Computational Linguistics, pages 2340-2353.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "What can we learn from collective human opinions on natural language inference data?",
"authors": [
{
"first": "Yixin",
"middle": [],
"last": "Nie",
"suffix": ""
},
{
"first": "Xiang",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Mohit",
"middle": [],
"last": "Bansal",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yixin Nie, Xiang Zhou, and Mohit Bansal. 2020. What can we learn from collective human opinions on nat- ural language inference data?",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "A case for a range of acceptable annotations",
"authors": [
{
"first": "Jennimaria",
"middle": [],
"last": "Palomaki",
"suffix": ""
},
{
"first": "Olivia",
"middle": [],
"last": "Rhinehart",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Tseng",
"suffix": ""
}
],
"year": 2018,
"venue": "SAD/CrowdBias@ HCOMP",
"volume": "",
"issue": "",
"pages": "19--31",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jennimaria Palomaki, Olivia Rhinehart, and Michael Tseng. 2018. A case for a range of acceptable anno- tations. In SAD/CrowdBias@ HCOMP, pages 19- 31.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Inherent disagreements in human textual inferences",
"authors": [
{
"first": "Ellie",
"middle": [],
"last": "Pavlick",
"suffix": ""
},
{
"first": "Tom",
"middle": [],
"last": "Kwiatkowski",
"suffix": ""
}
],
"year": 2019,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "7",
"issue": "",
"pages": "677--694",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ellie Pavlick and Tom Kwiatkowski. 2019. Inherent disagreements in human textual inferences. Transac- tions of the Association for Computational Linguis- tics, 7:677-694.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Reanalyzing the complement coercion effect through a generalized lexical semantics for aspectual verbs",
"authors": [
{
"first": "Maria",
"middle": [
"Mercedes"
],
"last": "Pi\u00f1ango",
"suffix": ""
},
{
"first": "Ashwini",
"middle": [],
"last": "Deo",
"suffix": ""
}
],
"year": 2016,
"venue": "J. Semantics",
"volume": "33",
"issue": "",
"pages": "359--408",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Maria Mercedes Pi\u00f1ango and Ashwini Deo. 2016. Re- analyzing the complement coercion effect through a generalized lexical semantics for aspectual verbs. J. Semantics, 33:359-408.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "Reanalyzing the complement coercion effect through a generalized lexical semantics for aspectual verbs",
"authors": [
{
"first": "Maria",
"middle": [],
"last": "Pi\u00f1ango",
"suffix": ""
},
{
"first": "Ashwini",
"middle": [],
"last": "Deo",
"suffix": ""
}
],
"year": 2014,
"venue": "Journal of Semantics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.1093/jos/ffv003"
]
},
"num": null,
"urls": [],
"raw_text": "Maria Pi\u00f1ango and Ashwini Deo. 2014. Reanalyzing the complement coercion effect through a general- ized lexical semantics for aspectual verbs. Journal of Semantics, 33.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "Learning part-of-speech taggers with inter-annotator agreement loss",
"authors": [
{
"first": "Barbara",
"middle": [],
"last": "Plank",
"suffix": ""
},
{
"first": "Dirk",
"middle": [],
"last": "Hovy",
"suffix": ""
},
{
"first": "Anders",
"middle": [],
"last": "S\u00f8gaard",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 14th Conference of the European Chapter of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "742--751",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Barbara Plank, Dirk Hovy, and Anders S\u00f8gaard. 2014. Learning part-of-speech taggers with inter-annotator agreement loss. In Proceedings of the 14th Confer- ence of the European Chapter of the Association for Computational Linguistics, pages 742-751.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "A survey on recognizing textual entailment as an nlp evaluation",
"authors": [
{
"first": "Adam",
"middle": [],
"last": "Poliak",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Adam Poliak. 2020. A survey on recognizing textual entailment as an nlp evaluation.",
"links": null
},
"BIBREF40": {
"ref_id": "b40",
"title": "Collecting diverse natural language inference problems for sentence representation evaluation",
"authors": [
{
"first": "Adam",
"middle": [],
"last": "Poliak",
"suffix": ""
},
{
"first": "Aparajita",
"middle": [],
"last": "Haldar",
"suffix": ""
},
{
"first": "Rachel",
"middle": [],
"last": "Rudinger",
"suffix": ""
},
{
"first": "Edward",
"middle": [],
"last": "Hu",
"suffix": ""
},
{
"first": "Ellie",
"middle": [],
"last": "Pavlick",
"suffix": ""
},
{
"first": "Aaron",
"middle": [
"Steven"
],
"last": "White",
"suffix": ""
},
{
"first": "Benjamin",
"middle": [],
"last": "Van Durme",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "67--81",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Adam Poliak, Aparajita Haldar, Rachel Rudinger, J Ed- ward Hu, Ellie Pavlick, Aaron Steven White, and Benjamin Van Durme. 2018. Collecting diverse nat- ural language inference problems for sentence rep- resentation evaluation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Lan- guage Processing, pages 67-81.",
"links": null
},
"BIBREF41": {
"ref_id": "b41",
"title": "The generative lexicon",
"authors": [
{
"first": "James",
"middle": [],
"last": "Pustejovsky",
"suffix": ""
}
],
"year": 1991,
"venue": "Comput. Linguistics",
"volume": "17",
"issue": "",
"pages": "409--441",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "James Pustejovsky. 1991. The generative lexicon. Comput. Linguistics, 17:409-441.",
"links": null
},
"BIBREF42": {
"ref_id": "b42",
"title": "The Generative Lexicon",
"authors": [
{
"first": "James",
"middle": [],
"last": "Pustejovsky",
"suffix": ""
}
],
"year": 1995,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "James Pustejovsky. 1995. The Generative Lexicon. MIT Press, Cambridge, MA.",
"links": null
},
"BIBREF43": {
"ref_id": "b43",
"title": "On the proper role of coercion in semantic typing",
"authors": [
{
"first": "James",
"middle": [],
"last": "Pustejovsky",
"suffix": ""
},
{
"first": "Pierrette",
"middle": [],
"last": "Bouillon",
"suffix": ""
}
],
"year": 1994,
"venue": "The 15th International Conference on Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "James Pustejovsky and Pierrette Bouillon. 1994. On the proper role of coercion in semantic typing. In COLING 1994 Volume 2: The 15th International Conference on Computational Linguistics.",
"links": null
},
"BIBREF44": {
"ref_id": "b44",
"title": "Glml: Annotating argument selection and coercion",
"authors": [
{
"first": "James",
"middle": [],
"last": "Pustejovsky",
"suffix": ""
},
{
"first": "Jessica",
"middle": [],
"last": "Moszkowicz",
"suffix": ""
},
{
"first": "Olga",
"middle": [],
"last": "Batiukova",
"suffix": ""
},
{
"first": "Anna",
"middle": [],
"last": "Rumshisky",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the Eight International Conference on Computational Semantics",
"volume": "",
"issue": "",
"pages": "169--180",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "James Pustejovsky, Jessica Moszkowicz, Olga Batiukova, and Anna Rumshisky. 2009. Glml: Annotating argument selection and coercion. In Proceedings of the Eight International Conference on Computational Semantics, pages 169-180.",
"links": null
},
"BIBREF45": {
"ref_id": "b45",
"title": "Semeval-2010 task 7: Argument selection and coercion",
"authors": [
{
"first": "James",
"middle": [],
"last": "Pustejovsky",
"suffix": ""
},
{
"first": "Anna",
"middle": [],
"last": "Rumshisky",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Plotnick",
"suffix": ""
},
{
"first": "Elisabetta",
"middle": [],
"last": "Jezek",
"suffix": ""
},
{
"first": "Olga",
"middle": [],
"last": "Batiukova",
"suffix": ""
},
{
"first": "Valeria",
"middle": [],
"last": "Quochi",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 5th international workshop on semantic evaluation",
"volume": "",
"issue": "",
"pages": "27--32",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "James Pustejovsky, Anna Rumshisky, Alex Plotnick, Elisabetta Jezek, Olga Batiukova, and Valeria Quochi. 2010. Semeval-2010 task 7: Argument se- lection and coercion. In Proceedings of the 5th in- ternational workshop on semantic evaluation, pages 27-32.",
"links": null
},
"BIBREF46": {
"ref_id": "b46",
"title": "Qadiscourse -discourse relations as qa pairs: Representation, crowdsourcing and baselines",
"authors": [
{
"first": "Valentina",
"middle": [],
"last": "Pyatkin",
"suffix": ""
},
{
"first": "Ayal",
"middle": [],
"last": "Klein",
"suffix": ""
},
{
"first": "Reut",
"middle": [],
"last": "Tsarfaty",
"suffix": ""
},
{
"first": "Ido",
"middle": [],
"last": "Dagan",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Valentina Pyatkin, Ayal Klein, Reut Tsarfaty, and Ido Dagan. 2020. Qadiscourse -discourse relations as qa pairs: Representation, crowdsourcing and base- lines.",
"links": null
},
"BIBREF47": {
"ref_id": "b47",
"title": "BASHI: A Corpus of Wall Street Journal Articles Annotated with Bridging Links",
"authors": [
{
"first": "Ina",
"middle": [],
"last": "Roesiger",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ina Roesiger. 2018. BASHI: A Corpus of Wall Street Journal Articles Annotated with Bridging Links. In Proceedings of the Eleventh International Confer- ence on Language Resources and Evaluation (LREC 2018), Miyazaki, Japan. European Language Re- sources Association (ELRA).",
"links": null
},
"BIBREF48": {
"ref_id": "b48",
"title": "Controlled crowdsourcing for high-quality qa-srl annotation",
"authors": [
{
"first": "Paul",
"middle": [],
"last": "Roit",
"suffix": ""
},
{
"first": "Ayal",
"middle": [],
"last": "Klein",
"suffix": ""
},
{
"first": "Daniela",
"middle": [],
"last": "Stepanov",
"suffix": ""
},
{
"first": "Jonathan",
"middle": [],
"last": "Mamou",
"suffix": ""
},
{
"first": "Julian",
"middle": [],
"last": "Michael",
"suffix": ""
},
{
"first": "Gabriel",
"middle": [],
"last": "Stanovsky",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
},
{
"first": "Ido",
"middle": [],
"last": "Dagan",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "7008--7013",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Paul Roit, Ayal Klein, Daniela Stepanov, Jonathan Mamou, Julian Michael, Gabriel Stanovsky, Luke Zettlemoyer, and Ido Dagan. 2020. Controlled crowdsourcing for high-quality qa-srl annotation. In Proceedings of the 58th Annual Meeting of the Asso- ciation for Computational Linguistics, pages 7008- 7013.",
"links": null
},
"BIBREF49": {
"ref_id": "b49",
"title": "How well do nli models capture verb veridicality?",
"authors": [
{
"first": "Alexis",
"middle": [],
"last": "Ross",
"suffix": ""
},
{
"first": "Ellie",
"middle": [],
"last": "Pavlick",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
"volume": "",
"issue": "",
"pages": "2230--2240",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alexis Ross and Ellie Pavlick. 2019. How well do nli models capture verb veridicality? In Proceedings of the 2019 Conference on Empirical Methods in Nat- ural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 2230-2240.",
"links": null
},
"BIBREF50": {
"ref_id": "b50",
"title": "Resolvable vs. irresolvable disagreement: A study on worker deliberation in crowd work",
"authors": [
{
"first": "Mike",
"middle": [],
"last": "Schaekermann",
"suffix": ""
},
{
"first": "Joslin",
"middle": [],
"last": "Goh",
"suffix": ""
},
{
"first": "Kate",
"middle": [],
"last": "Larson",
"suffix": ""
},
{
"first": "Edith",
"middle": [],
"last": "Law",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the ACM on Human-Computer Interaction",
"volume": "2",
"issue": "",
"pages": "1--19",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mike Schaekermann, Joslin Goh, Kate Larson, and Edith Law. 2018. Resolvable vs. irresolvable dis- agreement: A study on worker deliberation in crowd work. Proceedings of the ACM on Human- Computer Interaction, 2(CSCW):1-19.",
"links": null
},
"BIBREF51": {
"ref_id": "b51",
"title": "2020. (re)construing meaning in NLP",
"authors": [
{
"first": "Sean",
"middle": [],
"last": "Trott",
"suffix": ""
},
{
"first": "Tiago",
"middle": [
"Timponi"
],
"last": "Torrent",
"suffix": ""
},
{
"first": "Nancy",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Nathan",
"middle": [],
"last": "Schneider",
"suffix": ""
}
],
"year": null,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "5170--5184",
"other_ids": {
"DOI": [
"10.18653/v1/2020.acl-main.462"
]
},
"num": null,
"urls": [],
"raw_text": "Sean Trott, Tiago Timponi Torrent, Nancy Chang, and Nathan Schneider. 2020. (re)construing meaning in NLP. In Proceedings of the 58th Annual Meet- ing of the Association for Computational Linguistics, pages 5170-5184, Online. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF52": {
"ref_id": "b52",
"title": "Blimp: The benchmark of linguistic minimal pairs for english",
"authors": [
{
"first": "Alex",
"middle": [],
"last": "Warstadt",
"suffix": ""
},
{
"first": "Alicia",
"middle": [],
"last": "Parrish",
"suffix": ""
},
{
"first": "Haokun",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Anhad",
"middle": [],
"last": "Mohananey",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Peng",
"suffix": ""
},
{
"first": "Sheng-Fu",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Samuel R",
"middle": [],
"last": "Bowman",
"suffix": ""
}
],
"year": 2020,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "8",
"issue": "",
"pages": "377--392",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alex Warstadt, Alicia Parrish, Haokun Liu, Anhad Mo- hananey, Wei Peng, Sheng-Fu Wang, and Samuel R Bowman. 2020. Blimp: The benchmark of linguis- tic minimal pairs for english. Transactions of the As- sociation for Computational Linguistics, 8:377-392.",
"links": null
},
"BIBREF53": {
"ref_id": "b53",
"title": "Inference is everything: Recasting semantic resources into a unified evaluation framework",
"authors": [
{
"first": "Aaron",
"middle": [],
"last": "Steven White",
"suffix": ""
},
{
"first": "Pushpendre",
"middle": [],
"last": "Rastogi",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Duh",
"suffix": ""
},
{
"first": "Benjamin",
"middle": [],
"last": "Van Durme",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the Eighth International Joint Conference on Natural Language Processing",
"volume": "1",
"issue": "",
"pages": "996--1005",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Aaron Steven White, Pushpendre Rastogi, Kevin Duh, and Benjamin Van Durme. 2017. Inference is ev- erything: Recasting semantic resources into a uni- fied evaluation framework. In Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 996-1005.",
"links": null
},
"BIBREF54": {
"ref_id": "b54",
"title": "A broad-coverage challenge corpus for sentence understanding through inference",
"authors": [
{
"first": "Adina",
"middle": [],
"last": "Williams",
"suffix": ""
},
{
"first": "Nikita",
"middle": [],
"last": "Nangia",
"suffix": ""
},
{
"first": "Samuel",
"middle": [],
"last": "Bowman",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "1112--1122",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Adina Williams, Nikita Nangia, and Samuel Bowman. 2018. A broad-coverage challenge corpus for sen- tence understanding through inference. In Proceed- ings of the 2018 Conference of the North American Chapter of the Association for Computational Lin- guistics: Human Language Technologies, Volume 1 (Long Papers), pages 1112-1122. Association for Computational Linguistics.",
"links": null
},
"BIBREF55": {
"ref_id": "b55",
"title": "Break it down: A question understanding benchmark",
"authors": [
{
"first": "Tomer",
"middle": [],
"last": "Wolfson",
"suffix": ""
},
{
"first": "Mor",
"middle": [],
"last": "Geva",
"suffix": ""
},
{
"first": "Ankit",
"middle": [],
"last": "Gupta",
"suffix": ""
},
{
"first": "Matt",
"middle": [],
"last": "Gardner",
"suffix": ""
},
{
"first": "Yoav",
"middle": [],
"last": "Goldberg",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Deutch",
"suffix": ""
},
{
"first": "Jonathan",
"middle": [],
"last": "Berant",
"suffix": ""
}
],
"year": 2020,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tomer Wolfson, Mor Geva, Ankit Gupta, Matt Gard- ner, Yoav Goldberg, Daniel Deutch, and Jonathan Berant. 2020. Break it down: A question under- standing benchmark. Transactions of the Associa- tion for Computational Linguistics.",
"links": null
},
"BIBREF56": {
"ref_id": "b56",
"title": "Do neural models learn systematicity of monotonicity inference in natural language?",
"authors": [
{
"first": "Hitomi",
"middle": [],
"last": "Yanaka",
"suffix": ""
},
{
"first": "Koji",
"middle": [],
"last": "Mineshima",
"suffix": ""
},
{
"first": "Daisuke",
"middle": [],
"last": "Bekki",
"suffix": ""
},
{
"first": "Kentaro",
"middle": [],
"last": "Inui",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics (ACL2020)",
"volume": "",
"issue": "",
"pages": "6105--6117",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hitomi Yanaka, Koji Mineshima, Daisuke Bekki, and Kentaro Inui. 2020. Do neural models learn sys- tematicity of monotonicity inference in natural lan- guage? In Proceedings of the 58th Annual Meet- ing of the Association for Computational Linguistics (ACL2020), pages 6105--6117.",
"links": null
},
"BIBREF57": {
"ref_id": "b57",
"title": "The complement coercion phenomenon: Implications for models of sentence processing",
"authors": [
{
"first": "Lai",
"middle": [],
"last": "Yao-Ying",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lai Yao-Ying. 2017. The complement coercion phe- nomenon: Implications for models of sentence pro- cessing. Ph.D. thesis, Yale University.",
"links": null
},
"BIBREF58": {
"ref_id": "b58",
"title": "Constructions, semantic compatibility, and coercion: An empirical usage-based approach",
"authors": [
{
"first": "Soyeon",
"middle": [],
"last": "Yoon",
"suffix": ""
}
],
"year": 2012,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Soyeon Yoon. 2012. Constructions, semantic compat- ibility, and coercion: An empirical usage-based ap- proach.",
"links": null
},
"BIBREF59": {
"ref_id": "b59",
"title": "Complement coercion: The joint effects of type and typicality",
"authors": [
{
"first": "Alessandra",
"middle": [],
"last": "Zarcone",
"suffix": ""
},
{
"first": "Ken",
"middle": [],
"last": "Mcrae",
"suffix": ""
},
{
"first": "Alessandro",
"middle": [],
"last": "Lenci",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Pad\u00f3",
"suffix": ""
}
],
"year": 1987,
"venue": "Frontiers in Psychology",
"volume": "8",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.3389/fpsyg.2017.01987"
]
},
"num": null,
"urls": [],
"raw_text": "Alessandra Zarcone, Ken McRae, Alessandro Lenci, and Sebastian Pad\u00f3. 2017. Complement coercion: The joint effects of type and typicality. Frontiers in Psychology, 8:1987.",
"links": null
},
"BIBREF60": {
"ref_id": "b60",
"title": "Ordinal common-sense inference",
"authors": [
{
"first": "Sheng",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Rachel",
"middle": [],
"last": "Rudinger",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Duh",
"suffix": ""
},
{
"first": "Benjamin",
"middle": [],
"last": "Van Durme",
"suffix": ""
}
],
"year": 2017,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "5",
"issue": "",
"pages": "379--395",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sheng Zhang, Rachel Rudinger, Kevin Duh, and Ben- jamin Van Durme. 2017. Ordinal common-sense in- ference. Transactions of the Association for Compu- tational Linguistics, 5:379-395.",
"links": null
},
"BIBREF61": {
"ref_id": "b61",
"title": "Aligning books and movies: Towards story-like visual explanations by watching movies and reading books",
"authors": [
{
"first": "Yukun",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "Ryan",
"middle": [],
"last": "Kiros",
"suffix": ""
},
{
"first": "Rich",
"middle": [],
"last": "Zemel",
"suffix": ""
},
{
"first": "Ruslan",
"middle": [],
"last": "Salakhutdinov",
"suffix": ""
},
{
"first": "Raquel",
"middle": [],
"last": "Urtasun",
"suffix": ""
},
{
"first": "Antonio",
"middle": [],
"last": "Torralba",
"suffix": ""
},
{
"first": "Sanja",
"middle": [],
"last": "Fidler",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the IEEE international conference on computer vision",
"volume": "",
"issue": "",
"pages": "19--27",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yukun Zhu, Ryan Kiros, Rich Zemel, Ruslan Salakhut- dinov, Raquel Urtasun, Antonio Torralba, and Sanja Fidler. 2015. Aligning books and movies: Towards story-like visual explanations by watching movies and reading books. In Proceedings of the IEEE inter- national conference on computer vision, pages 19- 27.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"num": null,
"type_str": "figure",
"text": "A screenshot of the explicit task presented to the annotators."
},
"TABREF0": {
"content": "<table/>",
"type_str": "table",
"text": "Example Label I started a book I bought last week. ; I started reading a book I bought last week. ENT I started a book. ; I started reading a book.",
"num": null,
"html": null
}
}
}
}