{
"paper_id": "I17-1010",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T07:39:45.646600Z"
},
"title": "Improving Implicit Semantic Role Labeling by Predicting Semantic Frame Arguments",
"authors": [
{
"first": "Quynh",
"middle": [
"Ngoc",
"Thi"
],
"last": "Do",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Katholieke Universiteit Leuven",
"location": {
"country": "Belgium"
}
},
"email": ""
},
{
"first": "Steven",
"middle": [],
"last": "Bethard",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Arizona",
"location": {
"country": "United States"
}
},
"email": "bethard@email.arizona.edu"
},
{
"first": "Marie-Francine",
"middle": [],
"last": "Moens",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Katholieke Universiteit Leuven",
"location": {
"country": "Belgium"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Implicit semantic role labeling (iSRL) is the task of predicting the semantic roles of a predicate that do not appear as explicit arguments, but rather regard common sense knowledge or are mentioned earlier in the discourse. We introduce an approach to iSRL based on a predictive recurrent neural semantic frame model (PRNSFM) that uses a large unannotated corpus to learn the probability of a sequence of semantic arguments given a predicate. We leverage the sequence probabilities predicted by the PRNSFM to estimate selectional preferences for predicates and their arguments. On the NomBank iSRL test set, our approach improves state-of-the-art performance on implicit semantic role labeling with less reliance than prior work on manually constructed language resources.",
"pdf_parse": {
"paper_id": "I17-1010",
"_pdf_hash": "",
"abstract": [
{
"text": "Implicit semantic role labeling (iSRL) is the task of predicting the semantic roles of a predicate that do not appear as explicit arguments, but rather regard common sense knowledge or are mentioned earlier in the discourse. We introduce an approach to iSRL based on a predictive recurrent neural semantic frame model (PRNSFM) that uses a large unannotated corpus to learn the probability of a sequence of semantic arguments given a predicate. We leverage the sequence probabilities predicted by the PRNSFM to estimate selectional preferences for predicates and their arguments. On the NomBank iSRL test set, our approach improves state-of-the-art performance on implicit semantic role labeling with less reliance than prior work on manually constructed language resources.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Semantic role labeling (SRL) has traditionally focused on semantic frames consisting of verbal or nominal predicates and explicit arguments that occur within the clause or sentence that contains the predicate. However, many predicates, especially nominal ones, may bear arguments that are left implicit because they regard common sense knowledge or because they are mentioned earlier in a discourse (Ruppenhofer et al., 2010; Gerber et al., 2009) . These arguments, called implicit arguments, are resolved by another semantic task, implicit semantic role labeling (iSRL). Consider a NomBank (Meyers et al., 2004) annotation example: The predicate loss in the first sentence has two arguments annotated explicitly: A0, the entity losing something, and A1, the thing lost. Meanwhile, the other instance of the same predicate in the second sentence has no associated arguments. However, for a good reader, a reasonable interpretation of the second loss should be that it receives the same A0 and A1 as the first instance. These arguments are implicit to the second loss.",
"cite_spans": [
{
"start": 399,
"end": 425,
"text": "(Ruppenhofer et al., 2010;",
"ref_id": "BIBREF17"
},
{
"start": 426,
"end": 446,
"text": "Gerber et al., 2009)",
"ref_id": "BIBREF5"
},
{
"start": 591,
"end": 612,
"text": "(Meyers et al., 2004)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "[",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "As an emerging task, implicit semantic role labeling faces a lack of resources. First, hand-crafted implicit role annotations for use as training data are seriously limited: SemEval 2010 Task 10 (Baker et al., 1998) provided FrameNet-style (Baker et al., 1998) annotations for a fairly large number of predicates but with few annotations per predicate, while Gerber and Chai (2010) provided PropBank-style (Palmer et al., 2005) data with many more annotations per predicate but covering just 10 predicates. Second, most existing iSRL systems depend on other systems (explicit semantic role labelers, named entity taggers, lexical resources, etc.), and as a result not only need iSRL annotations to train the iSRL system, but annotations or manually built resources for all of their sub-systems as well.",
"cite_spans": [
{
"start": 195,
"end": 215,
"text": "(Baker et al., 1998)",
"ref_id": "BIBREF0"
},
{
"start": 240,
"end": 260,
"text": "(Baker et al., 1998)",
"ref_id": "BIBREF0"
},
{
"start": 359,
"end": 381,
"text": "Gerber and Chai (2010)",
"ref_id": "BIBREF6"
},
{
"start": 406,
"end": 427,
"text": "(Palmer et al., 2005)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We propose an iSRL approach that addresses these challenges, requiring no manually annotated iSRL data and only a single sub-system, an explicit semantic role labeler. We introduce a predictive recurrent neural semantic frame model (PRNSFM), which can estimate the probability of a sequence of semantic arguments given a predicate, and can be trained on unannotated data drawn from the Wikipedia, Reuters, and Brown corpora, coupled with the predictions of the MATE (Bj\u00f6rkelund et al., 2010) explicit semantic role labeler on these texts. The PRNSFM forms the foundation for our iSRL system, where we use its probability estimates over sequences of semantic arguments to predict selectional preferences for associating predicates with their implicit semantic roles. Our PRNSFM-based iSRL model improves state-of-the-art performance, outperforming the only other system that depends on just an explicit semantic role labeler by 10 % F1, and achieving equal or better F1 score than several other models that require many more lexical resources.",
"cite_spans": [
{
"start": 466,
"end": 491,
"text": "(Bj\u00f6rkelund et al., 2010)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our work fits today's interest in natural language understanding, which is hampered by the fact that content in a discourse is often not expressed explicitly because it was mentioned earlier or because it regards common sense or world knowledge that resides in the mind of the communicator or the audience. In contrast, humans easily combine relevant evidence to infer meaning, determine hidden meanings and make explicit what was left implicit in the text, using the anticipatory power of the brain that predicts or \"imagines\" circumstantial situations and outcomes of actions (Friston, 2010; Vernon, 2014) which makes language processing extremely effective and fast (Kurby and Zacks, 2015; Schacter and Madore, 2016) . The neural semantic frame representations inferred by our PRNSFM take a first step towards encoding something like anticipatory power for natural language understanding systems.",
"cite_spans": [
{
"start": 578,
"end": 593,
"text": "(Friston, 2010;",
"ref_id": "BIBREF4"
},
{
"start": 594,
"end": 607,
"text": "Vernon, 2014)",
"ref_id": "BIBREF21"
},
{
"start": 669,
"end": 692,
"text": "(Kurby and Zacks, 2015;",
"ref_id": "BIBREF8"
},
{
"start": 693,
"end": 719,
"text": "Schacter and Madore, 2016)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The remainder of the paper is organized as follows: First, section 2 describes the related work. Second, section 3 proposes the predictive recurrent neural semantic frame model including the formal definition, architecture, and an algorithm to extract selectional preferences from the trained model. Third, in section 4, we introduce the application of our PRNSFM in implicit semantic role labeling. Fourth, the experimental results and discussions are presented in section 5. Finally, we conclude our work and suggest some future work in section 6.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Language Modeling Language models, from ngram models to continuous space language models (Mikolov et al., 2013; Pennington et al., 2014) , provide probability distributions over sequences of words and have shown their usefulness in many natural language processing tasks. However, to our knowledge, they have not yet been used to model semantic frames. Recently, Peng and Roth (2016) developed two distinct models that capture semantic frame chains and discourse information while abstracting over the specific mentions of predicates and entities, but these models focus on discourse processing tasks, not semantic frame processing.",
"cite_spans": [
{
"start": 89,
"end": 111,
"text": "(Mikolov et al., 2013;",
"ref_id": "BIBREF11"
},
{
"start": 112,
"end": 136,
"text": "Pennington et al., 2014)",
"ref_id": "BIBREF15"
},
{
"start": 363,
"end": 383,
"text": "Peng and Roth (2016)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related work",
"sec_num": "2"
},
{
"text": "In unsupervised SRL, Woodsend and Lapata (2015) and Titov and Khoddam (2015) induce embeddings to represent a predicate and its arguments from unannotated texts, but in their approaches, the arguments are words only, not the semantic role labels, while in our models, both are considered.",
"cite_spans": [
{
"start": 21,
"end": 47,
"text": "Woodsend and Lapata (2015)",
"ref_id": "BIBREF22"
},
{
"start": 52,
"end": 76,
"text": "Titov and Khoddam (2015)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Semantic Role Labeling",
"sec_num": null
},
{
"text": "Low-resource Implicit Semantic Role Labeling Several approaches have attempted to address the lack of resources for training iSRL systems. Laparra and Rigau (2013) proposed an approach based on exploiting argument coherence over different instances of a predicate, which did not require any manual iSRL annotations but did require many other manually-constructed resources: an explicit SRL system, WordNet super-senses, a named entity tagger, and a manual categorization of Super-SenseTagger semantic classes. Roth and Frank (2015) generated additional training data for iSRL through comparable texts, but the resulting model performed below the previous state-of-the-art of Laparra and Rigau (2013) . Schenk and Chiarcos (2016) proposed an approach to induce prototypical roles using distributed word representations, which required only an explicit SRL system and a large unannotated corpus, but their model performance was almost 10 points lower than the state-of-the-art of Laparra and Rigau (2013) . Similar to Schenk and Chiarcos (2016) , our model requires only an explicit SRL system and a large unannotated corpus, but we take a very different approach to leveraging these, and as a result improve state-of-the-art performance.",
"cite_spans": [
{
"start": 510,
"end": 531,
"text": "Roth and Frank (2015)",
"ref_id": "BIBREF16"
},
{
"start": 675,
"end": 699,
"text": "Laparra and Rigau (2013)",
"ref_id": "BIBREF9"
},
{
"start": 702,
"end": 728,
"text": "Schenk and Chiarcos (2016)",
"ref_id": "BIBREF19"
},
{
"start": 978,
"end": 1002,
"text": "Laparra and Rigau (2013)",
"ref_id": "BIBREF9"
},
{
"start": 1016,
"end": 1042,
"text": "Schenk and Chiarcos (2016)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Semantic Role Labeling",
"sec_num": null
},
{
"text": "Our goal is to use unlabeled data to acquire selectional preferences that characterize how likely a phrase is to be an argument of a semantic frame. We rely on the fact that current explicit SRL systems achieve high performance on verbal predicates, and run a state-of-the-art explicit SRL system on unlabeled data. We then construct a predictive recurrent neural semantic frame model (PRNSFM) from these explicit frames and roles. Our PRNSFM views semantic frames as a sequence: a predicate, followed by the arguments in their textual order, and terminated by a special EOS symbol. We draw predicates from PropBank verbal semantic frames, and represent arguments with their nominal/pronominal heads. For example, Michael Phelps swam at the Olympics is represented as [swam:PRED, Phelps:A0, Olympics:AM-LOC, EOS], where the predicate is labeled PRED and the arguments Phelps and Olympics are labeled A0 and AM-LOC, respectively. Our PRNSFM's task is thus to take a predicate and zero or more arguments, and predict the next argument in the sequence, or EOS if no more arguments will follow. We choose to model semantic frames as a sequence (rather than, say, a bag of arguments) because in English, there are often fairly strict constraints on the order in which arguments of a verb may appear. A sequential model should thus be able to capture these constraints and use them to improve its probability estimates. Moreover, a sequential model has the ability to learn the interaction between arguments in the same semantic frame. For example, considering a swimming event, if Phelps is A0, then Olympics is more likely to be the AM-LOC than lake.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Predictive Recurrent Neural Semantic Frame Model",
"sec_num": "3"
},
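{
"text": "To make the sequence encoding concrete, here is a minimal Python sketch (our illustration, not the authors' released code; the function name and the toy frame are ours) that turns an explicit SRL frame into the token sequence the PRNSFM is trained on:\n\ndef encode_frame(predicate, arguments):\n    # predicate: the predicate word, e.g. 'swam'\n    # arguments: (head_word, role_label) pairs in their textual order\n    tokens = [predicate + ':PRED']\n    for word, label in arguments:\n        tokens.append(word + ':' + label)\n    tokens.append('EOS')  # end-of-sequence marker\n    return tokens\n\n# 'Michael Phelps swam at the Olympics'\nprint(encode_frame('swam', [('Phelps', 'A0'), ('Olympics', 'AM-LOC')]))\n# ['swam:PRED', 'Phelps:A0', 'Olympics:AM-LOC', 'EOS']",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Predictive Recurrent Neural Semantic Frame Model",
"sec_num": "3"
},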
{
"text": "Formally, for each t th argument of a semantic frame f , we denote its word (e.g., Phelps) as w f,t , its semantic label (e.g., A0) as l f,t , where w \u2208 V, the word vocabulary, and l \u2208 L \u222a [PRED], the set of semantic labels. We denote the predicate word and label, which are always at the 0 th position in the sequence, in the same way as arguments: w f,0 and l f,0 . We denote the sequence [w f,0 , w f,1 , . . . , w f,t\u22121 ] as w f,<t , and the sequence [l f,0 , l f,1 , . . . , l f,t\u22121 ] as l f,<t . Our model aims to estimate the conditional probability of the occurrence of w f,t as semantic role l f,t given the preceding words and their labels:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Predictive Recurrent Neural Semantic Frame Model",
"sec_num": "3"
},
{
"text": "P (w f,t :l f,t |w f,<t :l f,<t )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Predictive Recurrent Neural Semantic Frame Model",
"sec_num": "3"
},
{
"text": "We use a recurrent neural network to learn this probability distribution over sequences of semantic frame arguments. For a semantic frame f with N arguments, at each time step 0 \u2264 t \u2264 N , given the input w f,t :l f,t , the model computes the distribution P (w f,t+1 :l f,t+1 |w f,<t+1 :l f,<t+1 ) and predicts the next most likely argument (or EOS). During training, model parameters are optimized by minimizing prediction errors over all time steps.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Predictive Recurrent Neural Semantic Frame Model",
"sec_num": "3"
},
{
"text": "We consider two versions of this model that differ in input (V in ) and output (V out ) vocabularies.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Predictive Recurrent Neural Semantic Frame Model",
"sec_num": "3"
},
{
"text": "We adopt the standard recurrent neural network language model (Mikolov et al., 2010) , which is a natural architecture to deal with a sequence prediction problem.",
"cite_spans": [
{
"start": 62,
"end": 84,
"text": "(Mikolov et al., 2010)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Model 1: Joint Embedding LSTM",
"sec_num": "3.1"
},
{
"text": "Model 1 consists of three layers (see Figure 1 ): an embedding layer that learns vector representations for input values; a long short-term memory (LSTM) layer that controls the sequential information receiving the vector representation as input; and a softmax layer to predict the next argument using the output of the LSTM layer as input.",
"cite_spans": [],
"ref_spans": [
{
"start": 38,
"end": 46,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Model 1: Joint Embedding LSTM",
"sec_num": "3.1"
},
{
"text": "This model treats the word and semantic label as a single unit in both input and output layers. The model, therefore, learns joint embeddings for the word and its corresponding semantic label. For example, if we take \"Michael Phelps swam at the Olympics\" as training data, the three input values would be swam:PRED, Phelps:A0 and Olympics:AM-LOC, and the three expected outputs would be Phelps:A0, Olympics:AM-LOC, EOS. Since each word:label is considered as a single unit, the embedding layer will learn three vector representations, one for swam:PRED, one for Phelps:A0, and one for Olympics:AM-LOC. As can be seen, an important difference between our problem and the traditional language model is that we have to deal with two different types of information -word and label. By concatenating word and label, the standard recurrent neural network model can be applied directly to our data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model 1: Joint Embedding LSTM",
"sec_num": "3.1"
},
{
"text": "The detail of Model 1 is as following:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model 1: Joint Embedding LSTM",
"sec_num": "3.1"
},
{
"text": "Embedding Layer is a matrix of size |V in | \u00d7 d that maps each unit of input into an d-dimensional vector. The matrix is initialized randomly and updated during network training.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model 1: Joint Embedding LSTM",
"sec_num": "3.1"
},
{
"text": "LSTM Layer consists of m standard LSTM units which take as input the output of the embedding layer, x t , and produce an output h t by updating at every time step 0 \u2264 t \u2264 T :",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model 1: Joint Embedding LSTM",
"sec_num": "3.1"
},
{
"text": "i t = sigmoid(W i x t + U i h t\u22121 + b i ) C t = tanh(W c x t + U c h t\u22121 + b c ) f t = sigmoid(W f x t + U f h t\u22121 + b f ) C t = i t * \u0108 t + f t * C t\u22121 o t = sigmoid(W o x t + U o h t\u22121 + b o ) h t = o t * tanh(C t )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model 1: Joint Embedding LSTM",
"sec_num": "3.1"
},
{
"text": "where Softmax Layer computes the probability distribution of the next argument given the preceding arguments at time step t:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model 1: Joint Embedding LSTM",
"sec_num": "3.1"
},
{
"text": "W i , W c , W f , W o are weight matrices of size d \u00d7 m; U i , U c , U f , U",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model 1: Joint Embedding LSTM",
"sec_num": "3.1"
},
{
"text": "P (w f,t+1 :l f,t+1 |w f,<t+1 :l f,<t+1 ) = sof tmax(h t W + b) (1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model 1: Joint Embedding LSTM",
"sec_num": "3.1"
},
{
"text": "where W is a weight matrix of size m \u00d7 |V out |, and b is a bias vector of size |V out |. The predicted next argument is:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model 1: Joint Embedding LSTM",
"sec_num": "3.1"
},
{
"text": "argmax w f,t+1 :l f,t+1 P (w f,t+1 :l f,t+1 |w f,<t+1 :l f,<t+1 )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model 1: Joint Embedding LSTM",
"sec_num": "3.1"
},
{
"text": "The network is trained using the negative loglikelihood loss function.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model 1: Joint Embedding LSTM",
"sec_num": "3.1"
},
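{
"text": "As a rough illustration of this architecture, the following PyTorch sketch wires together the three layers described above; the class name, vocabulary size, and toy batch are ours, the layer sizes are only indicative, and cross entropy over the output scores is used as an equivalent of the negative log-likelihood of the softmax:\n\nimport torch\nimport torch.nn as nn\nimport torch.nn.functional as F\n\nclass JointEmbeddingLSTM(nn.Module):\n    # Sketch of Model 1: a single vocabulary of word:label units feeds an\n    # embedding layer, an LSTM layer, and a softmax output layer.\n    def __init__(self, vocab_size, d=50, m=50):\n        super().__init__()\n        self.embed = nn.Embedding(vocab_size, d)   # |V_in| x d, random init\n        self.lstm = nn.LSTM(d, m, batch_first=True)\n        self.out = nn.Linear(m, vocab_size)        # m x |V_out| plus bias\n\n    def forward(self, token_ids):\n        # token_ids: (batch, time) indices of word:label units\n        h, _ = self.lstm(self.embed(token_ids))\n        return self.out(h)  # unnormalized scores for the next unit\n\n# Training step sketch: predict unit t+1 from units up to t.\nmodel = JointEmbeddingLSTM(vocab_size=1000)\nseq = torch.randint(0, 1000, (4, 6))  # toy batch of id sequences\nlogits = model(seq[:, :-1])\nloss = F.cross_entropy(logits.reshape(-1, 1000), seq[:, 1:].reshape(-1))\nloss.backward()",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model 1: Joint Embedding LSTM",
"sec_num": "3.1"
},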
{
"text": "Model 2 shares the same basic structure as Model 1, but considers the word and the semantic label as two different units in the input layer. As shown in Figure 2 , we use two different embedding layers, one for word values and one for semantic labels, and the two embedding vectors are concatenated before being passed to the LSTM layer. The LSTM and softmax layers are then the same as in Model 1. For example, if we take \"Michael Phelps swam at the Olympics\" as training data, the three input words would be swam, Phelps, and Olympics, the three input roles would be PRED, A0 and AM-LOC, and the three expected outputs would be Phelps:A0, Olympics:AM-LOC, EOS. A total of six different vector representations will be learned: a word embedding for each of swam, Phelps, and Olympics, and a label embedding for each of PRED, A0 and AM-LOC. In this model, the embedding layer for labels is initialized randomly (as in Model 1), but the embedding layer for word values is initialized with publicly available word embeddings that have been trained on a large dataset (Mikolov et al., 2013) .",
"cite_spans": [
{
"start": 1064,
"end": 1086,
"text": "(Mikolov et al., 2013)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [
{
"start": 153,
"end": 161,
"text": "Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Model 2: Separate Embedding LSTM",
"sec_num": "3.2"
},
{
"text": "As compared to the joint-embedding Model 1, the separate-embedding Model 2 gives up a little power to represent the interaction between words and labels, but has a less sparse input vocabulary and gains the ability to incorporate pre-trained word embeddings.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model 2: Separate Embedding LSTM",
"sec_num": "3.2"
},
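{
"text": "A corresponding sketch of Model 2, again only an illustration under our own naming, differs from the Model 1 sketch above in that word and label indices are embedded separately and concatenated before the LSTM layer, and in that the word embedding matrix can be initialized from pre-trained vectors:\n\nimport torch\nimport torch.nn as nn\n\nclass SeparateEmbeddingLSTM(nn.Module):\n    # Sketch of Model 2: separate word and label embeddings are concatenated\n    # before the LSTM; the output layer still predicts word:label units.\n    def __init__(self, n_words, n_labels, n_out, d_word=50, d_label=16):\n        super().__init__()\n        self.word_embed = nn.Embedding(n_words, d_word)\n        self.label_embed = nn.Embedding(n_labels, d_label)\n        self.lstm = nn.LSTM(d_word + d_label, d_word + d_label, batch_first=True)\n        self.out = nn.Linear(d_word + d_label, n_out)\n\n    def forward(self, word_ids, label_ids):\n        x = torch.cat([self.word_embed(word_ids), self.label_embed(label_ids)], dim=-1)\n        h, _ = self.lstm(x)\n        return self.out(h)\n\n# Pre-trained skip-gram vectors could be loaded into the word embeddings with:\n# model.word_embed.weight.data.copy_(pretrained_matrix)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model 2: Separate Embedding LSTM",
"sec_num": "3.2"
},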
{
"text": "While the PRNSFM can predict the probability of an argument given the predicate and the preceding arguments, P (w f,t :l f,t |w f,<t :l f,<t ), an iSRL P(w:l|p) ~ P w +P w1 P 1 +P w2 P 2 +P w11 P 11 P 1 + P w12 P 12 P 1 + P w21 P 21 P 2 + P w22 P 22 P 2 Figure 3 : Selectional Preference Inference example: k=2, T =3. The possible sequences are represented as a tree. Each arrow label is the probability of the target node to be predicted given the path from the tree root to the parent of the target node.",
"cite_spans": [],
"ref_spans": [
{
"start": 254,
"end": 262,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Selectional Preferences",
"sec_num": "3.3"
},
{
"text": "system needs a selectional preference score representing the probability of a word w being the l argument of predicate p, P (w:l|p:P RED). Thus, to convert our PRNSFM probabilities to selectional preferences, we need to marginalize over the possible argument sequences. We approximate this marginalization by constructing a tree where the root is the predicate, p, the branches are likely sequences of arguments, and the leaves are the word and label for which we need to estimate a probability, w:l. Formally, we define this tree of possible sequences as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Selectional Preferences",
"sec_num": "3.3"
},
{
"text": "S t = \uf8f1 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f2 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f3 {[p:P RED]} if t = 0 {[q, w t :l t ] : q \u2208 S t\u22121 , w t :l t \u2208 argmax k (q)} if 0 < t < T {[q, w:l] : q \u2208 S t\u22121 } if t = T",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Selectional Preferences",
"sec_num": "3.3"
},
{
"text": "where w f,0 :l f :0 = p:P RED; k and T are thresholds; and argmax k (q) is the k word:label pairs that have the highest probability of being the next argument given the sequence q according to the PRNSFM. We then estimate P (w:l|p:P RED) as the sum of the probabilities of all the sequences encoded in the tree. Formally:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Selectional Preferences",
"sec_num": "3.3"
},
{
"text": "P (w:l|p:P RED) \u2248 0\u2264t\u2264T P (w:l|w f,<t+1 :l f,<t+1 ) \u2248 0\u2264t\u2264T q\u2208S t P (w:l|q) \u00d7 P (q)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Selectional Preferences",
"sec_num": "3.3"
},
{
"text": "where the probability of an argument sequence q is the product of the PRNSFM's estimates for each step in the sequence:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Selectional Preferences",
"sec_num": "3.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "P (q) = P (w t :l t |w t\u22121 :l t\u22121 , . . . , p:PRED) \u00d7 P (w t\u22121 :l t\u22121 |w t\u22122 :l t\u22122 , . . . , p:PRED) \u00d7 . . . \u00d7 P (w 1 :l 1 |p:PRED)",
"eq_num": "(2)"
}
],
"section": "Selectional Preferences",
"sec_num": "3.3"
},
{
"text": "An example of the calculation of P (w:l|p:P RED) is shown in Figure 3 . Intuitively, the tree enumerates all possible argument sequences that start with the predicate, have zero or more intervening arguments, and end with the word and label of interest, w:l. The probability of w:l given the predicate is the sum of the probabilities of all branches in this tree, i.e., of all possible sequences that end with w:l. In reality, we do not have the computational power to explore all possible sequences, so we must limit the tree somehow. Thus, we only ask the PRNSFM for its top k predictions at each branch point, and we only explore sequences with a maximum length of T .",
"cite_spans": [],
"ref_spans": [
{
"start": 61,
"end": 69,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Selectional Preferences",
"sec_num": "3.3"
},
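{
"text": "The tree-limited marginalization can be sketched in plain Python as follows. The callback next_probs is an assumed interface standing in for the trained PRNSFM: given a partial sequence, it returns the model's probability for every candidate next token 'w:l' (and 'EOS'). The function name and data structures are ours:\n\ndef selectional_preference(next_probs, predicate, word, label, k=1, T=4):\n    # Approximates P(word:label | predicate:PRED) by summing, over the tree\n    # of likely argument sequences, the probability of reaching word:label.\n    target = word + ':' + label\n    beams = [([predicate + ':PRED'], 1.0)]  # pairs (sequence q, P(q))\n    total = 0.0\n    for _ in range(T):\n        for seq, p_seq in beams:\n            # probability that the target is the next argument after q\n            total += next_probs(seq).get(target, 0.0) * p_seq\n        new_beams = []\n        for seq, p_seq in beams:\n            probs = next_probs(seq)\n            # keep only the k most probable continuations (argmax_k in the text)\n            for tok in sorted(probs, key=probs.get, reverse=True)[:k]:\n                new_beams.append((seq + [tok], p_seq * probs[tok]))\n        beams = new_beams\n    return total",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Selectional Preferences",
"sec_num": "3.3"
},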
{
"text": "As you will recall from previous sections, implicit semantic role labeling is the task of identifying discourse-level arguments of a semantic frame, which are missed by standard semantic role labeling, which operates on individual sentences. For instance, in \"This house has a new owner. The sale was finalized 10 days ago.\", the semantic frame evoked by \"sale\" in the second sentence should receive \"the house\" as an implicit A1 semantic role. Humans easily resolve the object of the sale given the candidates (in our example: \"house\" and \"owner\"), but for a machine this is more difficult unless it has knowledge on what the likely objects of a sale are. This kind of knowledge of selectional preferences can be extracted from our trained PRNSFM.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Implicit Semantic Role Labeling",
"sec_num": "4"
},
{
"text": "The previous section described how to extract selectional preferences from our PRNSFM. However, that model is trained on verbal predicates, and the test data that we use (Gerber and Chai, 2010) contains nominal predicates. Thus, for each triple of a nominal predicate np, a word candidate w, and a label l, we approximate the selectional preference score of w being the implicit argument role l of np as: P (w:l|np) = max p\u2208V (np) P (w:l|p:PRED)",
"cite_spans": [
{
"start": 170,
"end": 193,
"text": "(Gerber and Chai, 2010)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Implicit Semantic Role Labeling",
"sec_num": "4"
},
{
"text": "where P (w:l|p) is the selectional preference score described in Section 3.3, and V (np) is set of verbal forms of np. Here, we use the NomBank lexicon to get verbs associated with each nominal predicate, and then find instances of those verbs in the explicit SRL training data. For example, for the noun funds, V (funds) = {funds, fund, funding, funded}.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Implicit Semantic Role Labeling",
"sec_num": "4"
},
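{
"text": "Continuing the sketch above, the back-off from a nominal predicate to its verbal forms is a simple maximum; verbal_score is an assumed callback (for instance, the selectional_preference function sketched in Section 3.3) and the example set V(funds) is taken from the text:\n\ndef nominal_preference(verbal_score, verbal_forms, word, label):\n    # verbal_score(verb, word, label) returns P(word:label | verb:PRED);\n    # verbal_forms is V(np), e.g. {'funds', 'fund', 'funding', 'funded'}.\n    return max(verbal_score(v, word, label) for v in verbal_forms)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Implicit Semantic Role Labeling",
"sec_num": "4"
},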
{
"text": "We apply selectional preferences to iSRL following (Laparra and Rigau, 2013) . For each nominal predicate np and implicit label l, the current and previous two sentences are designated the context window. Each sentence in the context window is annotated with the explicit SRL system. If any instances of np or V (np) in the text have an explicit argument of type l, we deterministically predict the closest such argument as the implicit l argument of np. Otherwise, we run the PRNSFM over each word in the context window, and select the word with the highest selectional preference score above a threshold s. If all the candidates' scores are less than s, the system leaves the missing argument unfilled. We optimized this threshold on the development data, resulting in s = 0.0003.",
"cite_spans": [
{
"start": 51,
"end": 76,
"text": "(Laparra and Rigau, 2013)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Implicit Semantic Role Labeling",
"sec_num": "4"
},
{
"text": "As in Laparra and Rigau (2013) , we apply a sentence recency factor to emphasize recent candidates. The selectional preference score x is updated as x = x \u2212 z + z \u00d7 \u03b1 d where d is the sentence distance, and \u03b1 and z are parameters. We set z = 0.00005 based on the development set and set \u03b1 = 0.5 as in (Laparra and Rigau, 2013) .",
"cite_spans": [
{
"start": 6,
"end": 30,
"text": "Laparra and Rigau (2013)",
"ref_id": "BIBREF9"
},
{
"start": 301,
"end": 326,
"text": "(Laparra and Rigau, 2013)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Implicit Semantic Role Labeling",
"sec_num": "4"
},
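{
"text": "The candidate selection step can then be sketched as below. The callback np_score stands in for the nominal selectional preference P (w:l|np) described above; applying the recency discount before thresholding is our reading of the procedure, and the deterministic step that copies an explicit argument of the same predicate is omitted:\n\ndef fill_implicit_argument(candidates, np_score, s=0.0003, z=0.00005, alpha=0.5):\n    # candidates: (word, sentence_distance) pairs from the current and the two\n    # preceding sentences; np_score(word) returns P(word:label | np) for the\n    # implicit role being filled.\n    best_word, best_score = None, s\n    for word, d in candidates:\n        x = np_score(word)\n        x = x - z + z * (alpha ** d)  # sentence recency factor\n        if x > best_score:\n            best_word, best_score = word, x\n    return best_word  # None means: leave the implicit argument unfilled",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Implicit Semantic Role Labeling",
"sec_num": "4"
},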
{
"text": "We evaluate the two PRNSFM models on the iSRL task. The tools, resources, and settings we used are as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5"
},
{
"text": "Semantic Role Labeling We used the full pipeline from MATE (https://code.google.com/ archive/p/mate-tools/) (Bj\u00f6rkelund et al., 2010) as the explicit SRL system, retraining it on just the CoNLL 2009 training portion.",
"cite_spans": [
{
"start": 108,
"end": 133,
"text": "(Bj\u00f6rkelund et al., 2010)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5"
},
{
"text": "The unannotated data used in the experiments was drawn from Wikipedia (http://corpus.byu.edu/wiki/), Reuters (http://about. reuters.com/researchandstandards/corpus/), and Brown (https://catalog.ldc.upenn.edu/ldc99t42).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Unannotated Data",
"sec_num": null
},
{
"text": "Dataset for PRNSFM The first 15 milion short and medium (less than 100 words) sentences from the unannotated data (described above) were annotated automatically by the explicit SRL system. The obtained annotations were then used together with the gold standard CoNLL 2009 SRL training data to train the PRNSFM.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Unannotated Data",
"sec_num": null
},
{
"text": "Neural network training and inference Parameters were selected using the CoNLL 2009 development set. We set the dimensions of word and label embeddings in the PRNSFM to 50 and 16, respectively. The hidden sizes of LSTM layers are the same as their input sizes. Word embedding layers are initialized by Skip-gram embeddings learned by training the word2vec tool (Mikolov et al., 2013) on the unannotated data. Our models were trained for 120 epochs using the AdaDelta optimization algorithm (Zeiler, 2012). For fast selectional preference computing, we set k = 1 and T = 4 1 .",
"cite_spans": [
{
"start": 361,
"end": 383,
"text": "(Mikolov et al., 2013)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Unannotated Data",
"sec_num": null
},
{
"text": "Evaluation We follow the evaluation setting in Gerber and Chai (2010) ; Laparra and Rigau (2013); Schenk and Chiarcos (2016) 2 : the method is evaluated on the evaluation portion of the nominal iSRL data by Dice coefficient metrics. For each missing argument position of a predicate instance, the system is required to either (1) identify a single constituent that fills the missing argument position or (2) make no prediction and leave the missing argument position unfilled. To give partial credit for inexact argument boundaries, predictions are scored by using the Dice coefficient, which is defined as follows:",
"cite_spans": [
{
"start": 47,
"end": 69,
"text": "Gerber and Chai (2010)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Unannotated Data",
"sec_num": null
},
{
"text": "Dice(predicted, true) = 2 |predicted \u2229 true| |predicted| + |true| P redicted contains the tokens that the model has identified as the filler of the implicit argument position. T rue is the set of tokens from a single annotated constituent that truely fill the missing argument position. The model's prediction receives a score equal to the maximum Dice overlap across any of the annotated fillers (AF) 3 :",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Unannotated Data",
"sec_num": null
},
{
"text": "Score(predicted) = max true\u2208AF Dice(predicted, true)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Unannotated Data",
"sec_num": null
},
{
"text": "Precision is equal to the summed prediction scores divided by the number of argument positions filled by the model. Recall is equal to the summed prediction scores divided by the number of argument positions filled in the annotated data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Unannotated Data",
"sec_num": null
},
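{
"text": "A small Python sketch of this scoring (the token lists in the example are invented for illustration):\n\ndef dice(predicted, true):\n    # Dice coefficient between two token sets: partial credit for\n    # inexact argument boundaries.\n    predicted, true = set(predicted), set(true)\n    if not predicted and not true:\n        return 0.0\n    return 2 * len(predicted & true) / (len(predicted) + len(true))\n\ndef prediction_score(predicted, annotated_fillers):\n    # A prediction is scored by its best Dice overlap with any annotated filler.\n    return max(dice(predicted, f) for f in annotated_fillers)\n\nprint(prediction_score(['the', 'house'], [['this', 'house'], ['a', 'new', 'owner']]))\n# 2*1/(2+2) = 0.5",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Unannotated Data",
"sec_num": null
},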
{
"text": "In the baseline mode, instead of using the PRNSFM, we only use the deterministic prediction by the explicit SRL system. We refer to this mode as Baseline in Table 1 . In the main mode, the joint embedding LSTM model (Model 1) and the separate embedding LSTM model (Model 2) were trained on the same dataset which is a combination of the automatic SRL annotations and the gold standard CoNLL small values reported in the article achieved similar results with faster processing times.",
"cite_spans": [],
"ref_spans": [
{
"start": 157,
"end": 164,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "5.1"
},
{
"text": "2 Following Schenk and Chiarcos (2016), we do not perform the alternative evaluation of Gerber and Chai (2012) that evaluates systems on the iSRL training set, since the iSRL training set overlaps with the CoNLL 2009 explicit semantic role training set on which MATE is trained.",
"cite_spans": [
{
"start": 88,
"end": 110,
"text": "Gerber and Chai (2012)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "5.1"
},
{
"text": "3 For iSRL, one implicit role may receive more than one annotated filler across a coreference chain in the discourse. 2009 training data as described in the previous section. We denote this mode as gold CoNLL 2009 + unlabeled in Table 1 .",
"cite_spans": [],
"ref_spans": [
{
"start": 229,
"end": 236,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "5.1"
},
{
"text": "To evaluate how well the system acquires knowledge from unlabeled data, we also train the PRNSFM only on the gold standard CoNLL 2009 training data. We denote this mode as CoNLL 2009 in Table 1 .",
"cite_spans": [],
"ref_spans": [
{
"start": 186,
"end": 193,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "5.1"
},
{
"text": "In order to compare the performance of our sequential model to a non-sequential model, we train a skip-gram neural language model on the same unlabeled and labeled data as the PRNSFM in the main mode. The skip-gram model treats the predicates and arguments as a bag of labeled words rather than a sequence. The P (w:l|p) is computed at the output layer of the skip-gram model by considering w:l as the context of p. We denote this mode as Skip-gram in Table 1 . Table 1 shows the prior state-of-the-art and the performance of the baseline, skip-gram and our PRNSFM-based methods.",
"cite_spans": [],
"ref_spans": [
{
"start": 452,
"end": 459,
"text": "Table 1",
"ref_id": "TABREF1"
},
{
"start": 462,
"end": 469,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "5.1"
},
{
"text": "Our Model 2 achieves the highest precision and F1 score. This is notable because the first two models require many more language resources than just an explicit SRL system: Gerber and Chai (2010) use WordNet and manually annotated iSRL data, while Laparra and Rigau (2013) use WordNet, named entity annotations, and manual semantic category mappings. Schenk and Chiarcos (2016) , like our approach, use only an explicit SRL system, but both our models strongly outperform their results. We assume that the difference here is caused by our proposed neural semantic frame model (PRNSFM). Schenk and Chiarcos (2016) measure the selectional preference of a predicate and a role as a cosine between a standard word2vec embedding for the candidate word, and the average of all word2vec embeddings for all words that appear in that role. Our algorithms are very different: we take a language modeling approach and leverage the sequence of semantic roles, we learn custom word/role embeddings tuned for SRL, and then marginalize over many possible argument sequences. We assume that the learned PRNSFM representations are better informed about semantic frames than simple word embeddings, which only capture knowledge of contextual words. pared to training on only the CoNLL 2009 labeled data, providing evidence that the models have acquired linguistic knowledge from the unlabeled data. Although the automatically annotated data used to train the PRNSFM can be noisy, using a large amount of data has smoothed out the noise.",
"cite_spans": [
{
"start": 173,
"end": 195,
"text": "Gerber and Chai (2010)",
"ref_id": "BIBREF6"
},
{
"start": 248,
"end": 272,
"text": "Laparra and Rigau (2013)",
"ref_id": "BIBREF9"
},
{
"start": 351,
"end": 377,
"text": "Schenk and Chiarcos (2016)",
"ref_id": "BIBREF19"
},
{
"start": 586,
"end": 612,
"text": "Schenk and Chiarcos (2016)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Results and Discussion",
"sec_num": "5.2"
},
{
"text": "Moreover, the better performance of our models over the standard skip-gram neural language model proves the effectiveness of modeling semantic frames as sequential data. The intuition here is that explicit semantic arguments have typical orderings in which they occur, so a sequential model should be a good fit for this problem. Modeling this sequential aspect of the problem is effective, but requires us to marginalize out positional information to compute selectional preferences, since implicit semantic arguments can occur anywhere in the discourse and do not have a typical position.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results and Discussion",
"sec_num": "5.2"
},
{
"text": "Among our two models, Model 2, which learns separate vector representations for words and semantic roles, is better than Model 1, which learns a single vector representation of each (word, semantic role) pair. The separate representation of words and roles means that Model 2 can share information across multiple occurrences of a word even if the semantic roles of that word are different, and this model can use publicly available embeddings pre-trained from even larger unannotated corpora when initializing its embeddings. Gerber and Chai (2012) report an inter-annotator agreement of 64.3% using Cohen's kappa measure on the annotated NomBank-based iSRL data. This value is borderline between low and moderate agreement indicating the sheer complexity of the Table 2 : A comparison on F1 scores (%). 2010: (Gerber and Chai, 2010) , 2013: (Laparra and Rigau, 2013) , 2016: Best model from (Schenk and Chiarcos, 2016) , 2017: Our best model (Model 2).",
"cite_spans": [
{
"start": 527,
"end": 549,
"text": "Gerber and Chai (2012)",
"ref_id": "BIBREF7"
},
{
"start": 811,
"end": 834,
"text": "(Gerber and Chai, 2010)",
"ref_id": "BIBREF6"
},
{
"start": 843,
"end": 868,
"text": "(Laparra and Rigau, 2013)",
"ref_id": "BIBREF9"
},
{
"start": 893,
"end": 920,
"text": "(Schenk and Chiarcos, 2016)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [
{
"start": 764,
"end": 771,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results and Discussion",
"sec_num": "5.2"
},
{
"text": "In Table 2 , we compare the F1 scores over all the ten predicates of our Model 2 to other stateof-the-art systems 4 . Our system obtains relatively high scores (> 50%) on three predicates including \"sale\", \"plan\" and \"loss\". These three are the most frequent predicates (among the 10 defined in the nominal iSRL dataset) in the CoNLL 2009 training data -they occur 1016, 318 and 275 times in verbal forms, respectively. In contrast, irregular predicates such as \"bid\" or \"loan\" usually have low performance. This is possibly caused by the de-pendence of our PRNSFM on the performance of the explicit semantic role labeling system on verbal predicates.",
"cite_spans": [],
"ref_spans": [
{
"start": 3,
"end": 10,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results and Discussion",
"sec_num": "5.2"
},
{
"text": "It is important to consider how iSRL can be extended beyond the 10 annotated predicates of Gerber and Chai (2010). Our models do not require any handcrafted iSRL annotations for training, and thus can be applied to all predicates observed in large unannotated data on which they are trained.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results and Discussion",
"sec_num": "5.2"
},
{
"text": "However, as other work in iSRL, our approach still relies on a resource-heavy SRL system to learn selectional preferences. It would be interesting to investigate in further studies whether this SRL system can be replaced by a low-resource system (Collobert et al., 2011; Connor et al., 2012) .",
"cite_spans": [
{
"start": 246,
"end": 270,
"text": "(Collobert et al., 2011;",
"ref_id": "BIBREF2"
},
{
"start": 271,
"end": 291,
"text": "Connor et al., 2012)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Results and Discussion",
"sec_num": "5.2"
},
{
"text": "We have presented recurrent neural semantic frame models for learning probability distributions over semantic argument sequences. By modeling selectional preferences from these probability distributions, we have improved state-of-the-art performance on the NomBank iSRL task while using fewer language resources. In the future, we believe that our semantic frame models are valuable in many language processing tasks that require discourse-level understanding of language, such as summarization, question answering and machine translation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "6"
},
{
"text": "We selected relatively small values for the parameters to reduce the training and prediction time. We tried some larger values of the parameters on a small dataset, but found that the",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "As an overly conservative estimate, we take a t-test over the 10 predicate-level F1 scores as can be seen inTable 2. Comparing against Model 2, this yields p=0.28 forGerber and Chai (2010), p=0.46 forLaparra and Rigau (2013), and most importantly p=0.058 forSchenk and Chiarcos (2016).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "This work is carried out in the frame of the EU CHIST-ERA project \"MUltimodal processing of Spatial and TEmporal expRessions\" (MUSTER), and the \"MAchine Reading of patient recordS\" project (MARS, KU Leuven, C22/015/016).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgment",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "The berkeley framenet project",
"authors": [
{
"first": "Collin",
"middle": [
"F"
],
"last": "Baker",
"suffix": ""
},
{
"first": "Charles",
"middle": [
"J"
],
"last": "Fillmore",
"suffix": ""
},
{
"first": "John",
"middle": [
"B"
],
"last": "Lowe",
"suffix": ""
}
],
"year": 1998,
"venue": "Proceedings of the 36th Annual Meeting of the Association for Computational Linguistics and 17th International Conference on Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "86--90",
"other_ids": {
"DOI": [
"10.3115/980845.980860"
]
},
"num": null,
"urls": [],
"raw_text": "Collin F. Baker, Charles J. Fillmore, and John B. Lowe. 1998. The berkeley framenet project. In Proceed- ings of the 36th Annual Meeting of the Associa- tion for Computational Linguistics and 17th Inter- national Conference on Computational Linguistics -Volume 1. Association for Computational Linguis- tics, Stroudsburg, PA, USA, ACL '98, pages 86-90. https://doi.org/10.3115/980845.980860.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "A high-performance syntactic and semantic dependency parser",
"authors": [
{
"first": "Anders",
"middle": [],
"last": "Bj\u00f6rkelund",
"suffix": ""
},
{
"first": "Bernd",
"middle": [],
"last": "Bohnet",
"suffix": ""
},
{
"first": "Love",
"middle": [],
"last": "Hafdell",
"suffix": ""
},
{
"first": "Pierre",
"middle": [],
"last": "Nugues",
"suffix": ""
}
],
"year": 2010,
"venue": "Coling 2010: Demonstrations. Coling",
"volume": "",
"issue": "",
"pages": "33--36",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Anders Bj\u00f6rkelund, Bernd Bohnet, Love Hafdell, and Pierre Nugues. 2010. A high-performance syn- tactic and semantic dependency parser. In Col- ing 2010: Demonstrations. Coling 2010 Orga- nizing Committee, Beijing, China, pages 33-36. http://www.aclweb.org/anthology/C10-3009.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Natural language processing (almost) from scratch",
"authors": [
{
"first": "Ronan",
"middle": [],
"last": "Collobert",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Weston",
"suffix": ""
},
{
"first": "L\u00e9on",
"middle": [],
"last": "Bottou",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Karlen",
"suffix": ""
},
{
"first": "Koray",
"middle": [],
"last": "Kavukcuoglu",
"suffix": ""
},
{
"first": "Pavel",
"middle": [],
"last": "Kuksa",
"suffix": ""
}
],
"year": 2011,
"venue": "J. Mach. Learn. Res",
"volume": "12",
"issue": "",
"pages": "2493--2537",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ronan Collobert, Jason Weston, L\u00e9on Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa. 2011. Natural language processing (almost) from scratch. J. Mach. Learn. Res. 12:2493-2537.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Starting from Scratch in Semantic Role Labeling: Early Indirect Supervision. Cognitive Aspects of Computational Language Acquisition",
"authors": [
{
"first": "M",
"middle": [],
"last": "Connor",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Fisher",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Roth",
"suffix": ""
}
],
"year": 2012,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. Connor, C. Fisher, and D. Roth. 2012. Starting from Scratch in Semantic Role Labeling: Early Indirect Supervision. Cognitive Aspects of Computational Language Acquisition .",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "The free-energy principle: a unified brain theory?",
"authors": [
{
"first": "Karl",
"middle": [],
"last": "Friston",
"suffix": ""
}
],
"year": 2010,
"venue": "Nature Reviews Neuroscience",
"volume": "11",
"issue": "2",
"pages": "127--138",
"other_ids": {
"DOI": [
"10.1038/nrn2787"
]
},
"num": null,
"urls": [],
"raw_text": "Karl Friston. 2010. The free-energy principle: a uni- fied brain theory? Nature Reviews Neuroscience 11(2):127-138. https://doi.org/10.1038/nrn2787.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "The role of implicit argumentation in nominal srl",
"authors": [
{
"first": "Matt",
"middle": [],
"last": "Gerber",
"suffix": ""
},
{
"first": "Joyce",
"middle": [
"Y"
],
"last": "Chai",
"suffix": ""
},
{
"first": "Adam",
"middle": [],
"last": "Meyers",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "146--154",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matt Gerber, Joyce Y. Chai, and Adam Meyers. 2009. The role of implicit argumentation in nominal srl. In Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Lin- guistics. Association for Computational Linguistics, Stroudsburg, PA, USA, NAACL '09, pages 146-154.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Beyond NomBank: A Study of Implicit Arguments for Nominal Predicates",
"authors": [
{
"first": "Matthew",
"middle": [],
"last": "Gerber",
"suffix": ""
},
{
"first": "Joyce",
"middle": [],
"last": "Chai",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1583--1592",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matthew Gerber and Joyce Chai. 2010. Beyond NomBank: A Study of Implicit Arguments for Nominal Predicates. In Proceedings of the 48th Annual Meeting of the Association for Computa- tional Linguistics. Association for Computational Linguistics, Uppsala, Sweden, pages 1583-1592. http://www.aclweb.org/anthology-new/P/P10/P10- 1160.bib.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Semantic role labeling of implicit arguments for nominal predicates",
"authors": [
{
"first": "Matthew",
"middle": [],
"last": "Gerber",
"suffix": ""
},
{
"first": "Joyce",
"middle": [
"Y"
],
"last": "Chai",
"suffix": ""
}
],
"year": 2012,
"venue": "Computational Linguistics",
"volume": "38",
"issue": "4",
"pages": "755--798",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matthew Gerber and Joyce Y. Chai. 2012. Semantic role labeling of implicit arguments for nominal pred- icates. Computational Linguistics 38(4):755-798.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Situation Models in Naturalistic Comprehension",
"authors": [
{
"first": "C",
"middle": [
"A"
],
"last": "Kurby",
"suffix": ""
},
{
"first": "J",
"middle": [
"M"
],
"last": "Zacks",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "59--76",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "C.A. Kurby and J.M. Zacks. 2015. Situation Models in Naturalistic Comprehension, Cambridge University Press, pages 59-76.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Impar: A deterministic algorithm for implicit semantic role labelling",
"authors": [
{
"first": "Egoitz",
"middle": [],
"last": "Laparra",
"suffix": ""
},
{
"first": "German",
"middle": [],
"last": "Rigau",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, ACL",
"volume": "1",
"issue": "",
"pages": "1180--1189",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Egoitz Laparra and German Rigau. 2013. Impar: A deterministic algorithm for implicit semantic role labelling. In Proceedings of the 51st Annual Meeting of the Association for Computational Lin- guistics, ACL 2013, 4-9 August 2013, Sofia, Bul- garia, Volume 1: Long Papers. pages 1180-1189. http://aclweb.org/anthology/P/P13/P13-1116.pdf.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "The nombank project: An interim report",
"authors": [
{
"first": "A",
"middle": [],
"last": "Meyers",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Reeves",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Macleod",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Szekely",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "Zielinska",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Young",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Grishman",
"suffix": ""
}
],
"year": 2004,
"venue": "Workshop: Frontiers in Corpus Annotation",
"volume": "",
"issue": "",
"pages": "24--31",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A. Meyers, R. Reeves, C. Macleod, R. Szekely, V. Zielinska, B. Young, and R. Grishman. 2004. The nombank project: An interim report. In A. Meyers, editor, HLT-NAACL 2004 Workshop: Frontiers in Corpus Annotation. Association for Computational Linguistics, Boston, Massachusetts, USA, pages 24- 31.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Efficient estimation of word representations in vector space",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Greg",
"middle": [],
"last": "Corrado",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tomas Mikolov, Kai Chen, Greg Corrado, and Jef- frey Dean. 2013. Efficient estimation of word rep- resentations in vector space. CoRR abs/1301.3781. http://arxiv.org/abs/1301.3781.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Recurrent neural network based language model",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Martin",
"middle": [],
"last": "Karafit",
"suffix": ""
},
{
"first": "Lukas",
"middle": [],
"last": "Burget",
"suffix": ""
},
{
"first": "Jan",
"middle": [],
"last": "Cernock",
"suffix": ""
},
{
"first": "Sanjeev",
"middle": [],
"last": "Khudanpur",
"suffix": ""
}
],
"year": 2010,
"venue": "INTERSPEECH. ISCA",
"volume": "",
"issue": "",
"pages": "1045--1048",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tomas Mikolov, Martin Karafit, Lukas Burget, Jan Cer- nock, and Sanjeev Khudanpur. 2010. Recurrent neural network based language model. In Takao Kobayashi, Keikichi Hirose, and Satoshi Nakamura, editors, INTERSPEECH. ISCA, pages 1045-1048.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "The Proposition Bank: An annotated corpus of semantic roles",
"authors": [
{
"first": "Martha",
"middle": [],
"last": "Palmer",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Gildea",
"suffix": ""
},
{
"first": "Paul",
"middle": [],
"last": "Kingsbury",
"suffix": ""
}
],
"year": 2005,
"venue": "Computational Linguistics",
"volume": "31",
"issue": "1",
"pages": "71--106",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Martha Palmer, Daniel Gildea, and Paul Kingsbury. 2005. The Proposition Bank: An annotated corpus of semantic roles. Computational Linguistics 31(1):71-106.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Two discourse driven language models for semantics",
"authors": [
{
"first": "Haoruo",
"middle": [],
"last": "Peng",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Roth",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, ACL 2016",
"volume": "1",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Haoruo Peng and Dan Roth. 2016. Two discourse driven language models for semantics. In Proceed- ings of the 54th Annual Meeting of the Association for Computational Linguistics, ACL 2016, August 7- 12, 2016, Berlin, Germany, Volume 1: Long Papers. http://aclweb.org/anthology/P/P16/P16-1028.pdf.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Glove: Global vectors for word representation",
"authors": [
{
"first": "Jeffrey",
"middle": [],
"last": "Pennington",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2014,
"venue": "Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "1532--1543",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. Glove: Global vectors for word representation. In Empirical Methods in Natural Language Processing (EMNLP). pages 1532-1543. http://www.aclweb.org/anthology/D14-1162.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Inducing Implicit Arguments from Comparable Texts: A Framework and its Applications",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Roth",
"suffix": ""
},
{
"first": "Anette",
"middle": [],
"last": "Frank",
"suffix": ""
}
],
"year": 2015,
"venue": "Computational Linguistics",
"volume": "41",
"issue": "",
"pages": "625--664",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael Roth and Anette Frank. 2015. Inducing Im- plicit Arguments from Comparable Texts: A Frame- work and its Applications. Computational Linguis- tics 41:625-664.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Semeval-2010 task 10: Linking events and their participants in discourse",
"authors": [
{
"first": "Josef",
"middle": [],
"last": "Ruppenhofer",
"suffix": ""
},
{
"first": "Caroline",
"middle": [],
"last": "Sporleder",
"suffix": ""
},
{
"first": "Roser",
"middle": [],
"last": "Morante",
"suffix": ""
},
{
"first": "Collin",
"middle": [],
"last": "Baker",
"suffix": ""
},
{
"first": "Martha",
"middle": [],
"last": "Palmer",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 5th International Workshop on Semantic Evaluation",
"volume": "",
"issue": "",
"pages": "45--50",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Josef Ruppenhofer, Caroline Sporleder, Roser Morante, Collin Baker, and Martha Palmer. 2010. Semeval-2010 task 10: Linking events and their participants in discourse. In Proceedings of the 5th International Workshop on Semantic Evalua- tion. Association for Computational Linguistics, Stroudsburg, PA, USA, SemEval '10, pages 45-50.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Remembering the past and imagining the future: Identifying and enhancing the contribution of episodic memory",
"authors": [
{
"first": "D",
"middle": [
"L"
],
"last": "Schacter",
"suffix": ""
},
{
"first": "K",
"middle": [
"P"
],
"last": "Madore",
"suffix": ""
}
],
"year": 2016,
"venue": "Memory Studies",
"volume": "9",
"issue": "3",
"pages": "245--255",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "D.L. Schacter and K.P. Madore. 2016. Remember- ing the past and imagining the future: Identifying and enhancing the contribution of episodic memory. Memory Studies 9(3):245-255.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Unsupervised learning of prototypical fillers for implicit semantic role labeling",
"authors": [
{
"first": "Niko",
"middle": [],
"last": "Schenk",
"suffix": ""
},
{
"first": "Christian",
"middle": [],
"last": "Chiarcos",
"suffix": ""
}
],
"year": 2016,
"venue": "The 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "1473--1479",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Niko Schenk and Christian Chiarcos. 2016. Unsuper- vised learning of prototypical fillers for implicit se- mantic role labeling. In NAACL HLT 2016, The 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, San Diego Cali- fornia, USA, June 12-17, 2016. pages 1473-1479. http://aclweb.org/anthology/N/N16/N16-1173.pdf.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Unsupervised induction of semantic roles within a reconstructionerror minimization framework",
"authors": [
{
"first": "Ivan",
"middle": [],
"last": "Titov",
"suffix": ""
},
{
"first": "Ehsan",
"middle": [],
"last": "Khoddam",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the North American chapter of the Association for Computational Linguistics (NAACL)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ivan Titov and Ehsan Khoddam. 2015. Unsupervised induction of semantic roles within a reconstruction- error minimization framework. In Proceedings of the North American chapter of the Association for Computational Linguistics (NAACL).",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Artificial Cognitive Systems: A Primer",
"authors": [
{
"first": "David",
"middle": [],
"last": "Vernon",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David Vernon. 2014. Artificial Cognitive Systems: A Primer. The MIT Press.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Distributed representations for unsupervised semantic role labeling",
"authors": [
{
"first": "Kristian",
"middle": [],
"last": "Woodsend",
"suffix": ""
},
{
"first": "Mirella",
"middle": [],
"last": "Lapata",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "2482--2491",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kristian Woodsend and Mirella Lapata. 2015. Dis- tributed representations for unsupervised semantic role labeling. In Proceedings of the 2015 Con- ference on Empirical Methods in Natural Lan- guage Processing. Association for Computational Linguistics, Lisbon, Portugal, pages 2482-2491. http://aclweb.org/anthology/D15-1295.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "ADADELTA: an adaptive learning rate method",
"authors": [
{
"first": "D",
"middle": [],
"last": "Matthew",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Zeiler",
"suffix": ""
}
],
"year": 2012,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matthew D. Zeiler. 2012. ADADELTA: an adap- tive learning rate method. CoRR abs/1212.5701. http://arxiv.org/abs/1212.5701.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "A0 The network] had been expected to have [ NP losses] [ A1 of $20 million] . . . Those [ NP losses] may widen because of the short Series.",
"uris": null,
"num": null,
"type_str": "figure"
},
"FIGREF1": {
"text": "o are weight matrices of size m \u00d7 m; b i , b c , b f , b o are bias vectors of size m; and * is element-wise multiplication. As per the stan-dard LSTM formulation, i t ,\u0108 t , f t , C t , o t representModel 2 -Separate Embedding LSTM the input gate, states of the memory cells, activation of the memory cells' forget gates, memory cells' new state, and output gates' values, respectively.",
"uris": null,
"num": null,
"type_str": "figure"
},
"TABREF1": {
"html": null,
"type_str": "table",
"text": "",
"content": "<table><tr><td>also shows that training on large unla-</td></tr><tr><td>beled data results in a marked improvement com-</td></tr></table>",
"num": null
}
}
}
}