{ "paper_id": "D19-1033", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T16:09:03.806922Z" }, "title": "Event Detection with Trigger-Aware Lattice Neural Network", "authors": [ { "first": "Ning", "middle": [], "last": "Ding", "suffix": "", "affiliation": { "laboratory": "", "institution": "Tsinghua University", "location": {} }, "email": "" }, { "first": "Ziran", "middle": [], "last": "Li", "suffix": "", "affiliation": { "laboratory": "", "institution": "Tsinghua University", "location": {} }, "email": "" }, { "first": "Zhiyuan", "middle": [], "last": "Liu", "suffix": "", "affiliation": { "laboratory": "", "institution": "Tsinghua University", "location": { "settlement": "Beijing", "country": "China" } }, "email": "" }, { "first": "Hai-Tao", "middle": [], "last": "Zheng", "suffix": "", "affiliation": { "laboratory": "", "institution": "Tsinghua University", "location": {} }, "email": "zheng.haitao@sz.tsinghua.edu.cn" }, { "first": "Zibo", "middle": [], "last": "Lin", "suffix": "", "affiliation": { "laboratory": "", "institution": "Tsinghua University", "location": {} }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Event detection (ED) aims to locate trigger words in raw text and then classify them into correct event types. In this task, neural network based models became mainstream in recent years. However, two problems arise when it comes to languages without natural delimiters, such as Chinese. First, word-based models severely suffer from the problem of wordtrigger mismatch, limiting the performance of the methods. In addition, even if trigger words could be accurately located, the ambiguity of polysemy of triggers could still affect the trigger classification stage. To address the two issues simultaneously, we propose the Trigger-aware Lattice Neural Network (TLNN). (1) The framework dynamically incorporates word and character information so that the trigger-word mismatch issue can be avoided. (2) Moreover, for polysemous characters and words, we model all senses of them with the help of an external linguistic knowledge base, so as to alleviate the problem of ambiguous triggers. Experiments on two benchmark datasets show that our model could effectively tackle the two issues and outperforms previous state-of-the-art methods significantly, giving the best results. The source code of this paper can be obtained from https://github.com/thunlp/TLNN.", "pdf_parse": { "paper_id": "D19-1033", "_pdf_hash": "", "abstract": [ { "text": "Event detection (ED) aims to locate trigger words in raw text and then classify them into correct event types. In this task, neural network based models became mainstream in recent years. However, two problems arise when it comes to languages without natural delimiters, such as Chinese. First, word-based models severely suffer from the problem of wordtrigger mismatch, limiting the performance of the methods. In addition, even if trigger words could be accurately located, the ambiguity of polysemy of triggers could still affect the trigger classification stage. To address the two issues simultaneously, we propose the Trigger-aware Lattice Neural Network (TLNN). (1) The framework dynamically incorporates word and character information so that the trigger-word mismatch issue can be avoided. (2) Moreover, for polysemous characters and words, we model all senses of them with the help of an external linguistic knowledge base, so as to alleviate the problem of ambiguous triggers. 
Experiments on two benchmark datasets show that our model effectively tackles the two issues and significantly outperforms previous state-of-the-art methods, giving the best results. The source code of this paper can be obtained from https://github.com/thunlp/TLNN.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Event Detection (ED) is a pivotal part of Event Extraction, which aims to detect the position of event triggers in raw text and classify them into corresponding event types. Conventionally, the stage of locating trigger words is known as Trigger Identification (TI), and the stage of classifying trigger words into particular event types is called Trigger Classification (TC). Although neural network methods have achieved significant progress in event detection (Nguyen and Grishman, 2015; Chen et al., 2015; Zeng et al., 2016) , both steps are still exposed to the following two issues.", "cite_spans": [ { "start": 463, "end": 490, "text": "(Nguyen and Grishman, 2015;", "ref_id": "BIBREF19" }, { "start": 491, "end": 509, "text": "Chen et al., 2015;", "ref_id": "BIBREF1" }, { "start": 510, "end": 528, "text": "Zeng et al., 2016)", "ref_id": "BIBREF28" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In the TI stage, the problem of trigger-word mismatch can severely impact the performance of event detection systems. This is because in languages without natural delimiters, such as Chinese, mainstream approaches are mostly word-based models, in which segmentation must first be performed as a necessary preprocessing step. Unfortunately, these word-wise methods neglect the important fact that a trigger can be a specific part of one word or span multiple words. As shown in Figure 1 (a), \"\u5c04\" (shoot) and \"\u6740\" (kill) are two triggers that are both parts of the word \"\u5c04 \u6740\" (shoot and kill) . In the other case, \"\u793a\u5a01\u6e38 \u884c\" (demonstration) is a trigger that crosses two words. Under this circumstance, triggers cannot be located correctly by word-based methods, which seriously limits performance on the task. Some feature-based methods have been proposed (Chen and Ji, 2009; Qin et al., 2010; Li and Zhou, 2012) to alleviate the issue, but they rely heavily on hand-crafted features. Lin et al. (2018) propose the nugget proposal networks (NPN) to address this issue, using a neural network to model the character compositional structure of trigger words in a fixed-size window. However, the mechanism of the NPN limits the scope of trigger candidates to a fixed-size window, which is inflexible and suffers from the problem of trigger overlaps. Even if the locations of triggers can be correctly detected in the TI step, the TC step can still be severely affected by the inherent ambiguity of polysemous triggers, because a trigger word with multiple word senses can be classified into different event types. Take Figure 1(b) as an example: a polysemous trigger word \"\u91ca\u653e\" (release) can represent two distinctly different event types. In the first case, the word 'release' triggers an Attack event (release tear gas). 
But in the second case, the event triggered by 'release' becomes Release-Parole (release a man in court).", "cite_spans": [ { "start": 861, "end": 880, "text": "(Chen and Ji, 2009;", "ref_id": "BIBREF3" }, { "start": 881, "end": 898, "text": "Qin et al., 2010;", "ref_id": "BIBREF23" }, { "start": 899, "end": 917, "text": "Li and Zhou, 2012)", "ref_id": "BIBREF10" }, { "start": 994, "end": 1011, "text": "Lin et al. (2018)", "ref_id": "BIBREF13" } ], "ref_spans": [ { "start": 485, "end": 493, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "To further illustrate that the two problems mentioned above do exist, we manually compute statistics on the proportions of mismatched triggers and polysemous triggers on two widely used datasets. The statistics are illustrated in Table 1 , and we can observe that instances with trigger-word mismatch and trigger polysemy account for a considerable proportion of the data and thus affect the task.", "cite_spans": [], "ref_spans": [ { "start": 222, "end": 229, "text": "Table 1", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this paper, we propose the Trigger-aware Lattice Neural Network (TLNN), a comprehensive model that can simultaneously tackle both issues. To avoid error propagation from NLP tools such as word segmenters, we take characters as the basic units of the input sequence. Moreover, we utilize HowNet (Dong and Dong, 2003) , an external knowledge base that manually annotates polysemous Chinese and English words, to obtain sense-level information. Further, we develop the trigger-aware lattice LSTM as the feature extractor of our model, which can leverage character-level, word-level and sense-level information at the same time. More specifically, in order to address the trigger-word mismatch issue, we construct shortcut paths that link the cell states between the start and the end characters of each word. It is worth mentioning that the paths are sense-level, which means all the sense information of words that end in one specific character will flow into the memory cell of that character. Hence, by utilizing multiple granularities of information (character, word and word sense), the problem of polysemous triggers can be effectively alleviated.", "cite_spans": [ { "start": 279, "end": 300, "text": "(Dong and Dong, 2003)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We conduct sets of experiments on two real-world datasets for the task of event detection. Empirical results of the main experiments show that our model can effectively address both issues. With comprehensive comparisons against previously proposed methods, our model achieves state-of-the-art results on both datasets. Further, sets of subsidiary experiments are conducted to analyze how TLNN addresses the two issues. ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this paper, event detection is regarded as a sequence labelling task. 
For each character, the model should identify whether it is part of a trigger and correctly classify the trigger into one specific event type.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Methodology", "sec_num": "2" }, { "text": "The architecture of our model is shown in Figure 2, which primarily includes the following three parts: (1) Hierarchical Representation Learning, which learns the character-level, word-level and sense-level embedding vectors in an unsupervised way. (2) Trigger-aware Feature Extractor, which automatically extracts different levels of semantic features with a tree-structured LSTM model. (3) Sequence Tagger, which calculates the probability of being a trigger for each character candidate.", "cite_spans": [], "ref_spans": [ { "start": 42, "end": 48, "text": "Figure", "ref_id": null } ], "eq_spans": [], "section": "Methodology", "sec_num": "2" }, { "text": "Consider an input sequence S = {c_1, c_2, ..., c_N}, where c_i represents the ith character in the sequence. At the character level, each character is represented as an embedding vector x^c learned by the Skip-Gram method (Mikolov et al., 2013) .", "cite_spans": [ { "start": 211, "end": 233, "text": "(Mikolov et al., 2013)", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "Hierarchical Representation Learning", "sec_num": "2.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "x_i^c = e(c_i)", "eq_num": "(1)" } ], "section": "Hierarchical Representation Learning", "sec_num": "2.1" }, { "text": "At the word level, the input sequence S can also be written as S = {w_1, w_2, ..., w_M}, where the basic unit is a single word w_i. In this paper, we use two indices b and e to represent the start and the end of a word. In this case, the word embeddings are:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Hierarchical Representation Learning", "sec_num": "2.1" }, { "text": "x_{b,e}^w = e(w_{b,e}) (2)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Hierarchical Representation Learning", "sec_num": "2.1" }, { "text": "However, the Skip-Gram method maps each word to only one single embedding, ignoring the fact that many words have multiple senses. Hence, representations of finer granularity are still necessary to capture deep semantics. With the help of HowNet (Dong and Dong, 2003) , we can obtain the representation of each sense of a character or a word. For each character c_i, there are possibly multiple senses sen^{(c_i)} \u2208 S^{(c)} annotated in HowNet. Similarly, for each word w_i, the senses are sen^{(w_i)} \u2208 S^{(w)} . Consequently, we can obtain the sense embeddings by jointly learning word and sense embeddings in a Skip-Gram manner. 
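To make the three granularities concrete, here is a minimal sketch (in PyTorch) of the lookups in Eqs. 1-4; the vocabulary sizes, ids and tensor names are hypothetical illustrations rather than the released TLNN implementation, and only the 50-dimensional embedding size comes from this paper:

```python
import torch
import torch.nn as nn

# Hypothetical vocabulary sizes; the paper only fixes the embedding size (50).
NUM_CHARS, NUM_WORDS, NUM_SENSES, DIM = 6000, 50000, 80000, 50

# In practice these tables would be initialized from pre-trained
# Skip-Gram vectors rather than randomly, as described above.
char_emb = nn.Embedding(NUM_CHARS, DIM)    # x_i^c = e(c_i), Eq. (1)
word_emb = nn.Embedding(NUM_WORDS, DIM)    # x_{b,e}^w = e(w_{b,e}), Eq. (2)
sense_emb = nn.Embedding(NUM_SENSES, DIM)  # s_j^{c_i}, s_j^{w_{b,e}}, Eqs. (3)-(4)

char_ids = torch.tensor([17, 203, 4, 99])  # a toy sentence of N = 4 characters
x_c = char_emb(char_ids)                   # (N, DIM) character embeddings

word_id = torch.tensor([1024])             # one lexicon word w_{b,e}
x_w = word_emb(word_id)                    # (1, DIM) word embedding

# A character with K senses annotated in HowNet (hypothetical sense ids):
sense_ids = torch.tensor([301, 302])       # K = 2
s_c = sense_emb(sense_ids)                 # (K, DIM) sense embeddings
```

Loading the three tables from jointly pre-trained word/sense Skip-Gram vectors would replace the random initialization in this sketch. 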
A similar mechanism is applied in (Niu et al., 2017) .", "cite_spans": [ { "start": 245, "end": 266, "text": "(Dong and Dong, 2003)", "ref_id": "BIBREF4" }, { "start": 502, "end": 505, "text": "(w)", "ref_id": null }, { "start": 663, "end": 681, "text": "(Niu et al., 2017)", "ref_id": "BIBREF21" } ], "ref_spans": [], "eq_spans": [], "section": "Hierarchical Representation Learning", "sec_num": "2.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "s_j^{c_i} = e(sen_j^{(c_i)}) (3) s_j^{w_{b,e}} = e(sen_j^{(w_{b,e})})", "eq_num": "(4)" } ], "section": "Hierarchical Representation Learning", "sec_num": "2.1" }, { "text": "where sen_j^{(c_i)} and sen_j^{(w_{b,e})} denote the jth sense of the character c_i and of the word w_{b,e}, respectively.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Hierarchical Representation Learning", "sec_num": "2.1" }, { "text": "The trigger-aware feature extractor is the core component of our model. After training, the outputs of the extractor are the hidden state vectors h of an input sentence.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Trigger-Aware Feature Extractor", "sec_num": "2.2" }, { "text": "Conventional LSTM. The LSTM (Hochreiter and Schmidhuber, 1997) is an extension of the recurrent neural network (RNN) with additional gates to control the information flow. Traditionally, an LSTM contains the following basic gates: an input gate i, an output gate o and a forget gate f . Together they control which information is retained, forgotten and output. All three gates are accompanied by corresponding weight matrices W . The current cell state c records all historical information flow up to the current time step. The character-based LSTM functions are therefore:", "cite_spans": [ { "start": 24, "end": 58, "text": "(Hochreiter and Schmidhuber, 1997)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Trigger-Aware Feature Extractor", "sec_num": "2.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "[i_i^c, o_i^c, f_i^c, c\u0303_i^c]^T = [\u03c3, \u03c3, \u03c3, tanh]^T (W^c [x_i^c; h_{i-1}^c] + b^c) (5) c_i^c = f_i^c \u2299 c_{i-1}^c + i_i^c \u2299 c\u0303_i^c (6) h_i^c = o_i^c \u2299 tanh(c_i^c)", "eq_num": "(7)" } ], "section": "Trigger-Aware Feature Extractor", "sec_num": "2.2" }, { "text": "where h_i^c is the hidden state vector. Trigger-Aware Lattice LSTM. The trigger-aware lattice LSTM is the core feature extractor of our framework; it is an extension of the LSTM and the lattice LSTM. In this subsection, we will derive and theoretically analyze the model in detail.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Trigger-Aware Feature Extractor", "sec_num": "2.2" }, { "text": "In this section, characters and words are assumed to have K senses. As mentioned in Section 2.1, for the jth sense of the ith character c_i, the embedding is s_j^{c_i}. 
Then an additional LSTM cell is utilized to integrate all senses of the character; hence the cell-gate computation for the multi-sense character c_i is:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Trigger-Aware Feature Extractor", "sec_num": "2.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "[i_j^{c_i}, f_j^{c_i}, c\u0303_j^{c_i}]^T = [\u03c3, \u03c3, tanh]^T (W^c [s_j^{c_i}; h_{i-1}^c] + b^c) (8) c_j^{c_i} = f_j^{c_i} \u2299 c_{i-1}^c + i_j^{c_i} \u2299 c\u0303_j^{c_i}", "eq_num": "(9)" } ], "section": "Trigger-Aware Feature Extractor", "sec_num": "2.2" }, { "text": "Figure 3: The structure of the Trigger-Aware Feature Extractor; the input of the example is part of the sentence \"\u82e5\u7f6a\u540d\u6210\u7acb\uff0c\u4ed6\u5c06\u88ab\u902e\u6355\" (If convicted, he will be arrested) . In this case, \"\u7f6a\u540d\u6210\u7acb\" (convicted) is a trigger with event type Justice: Sentence. \"\u6210\u7acb\" (convicted/found) and \"\u7acb\" (stand/conclude) are polysemous words.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Trigger-Aware Feature Extractor", "sec_num": "2.2" }, { "text": "To keep the figure concise, we (1) only show two senses for each polysemous word;", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Trigger-Aware Feature Extractor", "sec_num": "2.2" }, { "text": "(2) only show the forward direction.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Trigger-Aware Feature Extractor", "sec_num": "2.2" }, { "text": "where c_j^{c_i} is the cell state of the jth sense of the ith character, and c_{i-1}^c is the final cell state of the (i-1)th character. In order to obtain the cell state of the character, an additional gate is used:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Trigger-Aware Feature Extractor", "sec_num": "2.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "g_j^{c_i} = \u03c3(W [x_i^c; c_j^{c_i}] + b)", "eq_num": "(10)" } ], "section": "Trigger-Aware Feature Extractor", "sec_num": "2.2" }, { "text": "Then all the senses are dynamically integrated into a temporary cell state:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Trigger-Aware Feature Extractor", "sec_num": "2.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "c_i^{*c} = \u03a3_j^K \u03b1_j^{c_i} \u2299 c_j^{c_i}", "eq_num": "(11)" } ], "section": "Trigger-Aware Feature Extractor", "sec_num": "2.2" }, { "text": "where \u03b1_j^{c_i} is the character sense gate after normalization:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Trigger-Aware Feature Extractor", "sec_num": "2.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\u03b1_j^{c_i} = exp(g_j^{c_i}) / \u03a3_k^K exp(g_k^{c_i})", "eq_num": "(12)" } ], "section": "Trigger-Aware Feature Extractor", "sec_num": "2.2" }, { "text": "Eq. 11 obtains the temporary cell state c_i^{*c} of the character by incorporating all the sense information of the character. However, word-level information needs to be considered as well. 
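As an illustration, a minimal sketch of this sense-integration step (Eqs. 8-12) is given below; it is written in PyTorch, the module names and shapes are hypothetical, and it simplifies initialization and batching relative to the released code:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

DIM = 50   # embedding/cell size used in the paper
K = 3      # number of senses of the current character (illustrative)

# Eqs. (8)-(9): an LSTM-style cell without an output gate, driven by the
# sense embedding s_j^{c_i} and the previous character hidden state.
W_c = nn.Linear(2 * DIM, 3 * DIM)  # jointly produces i, f and the candidate cell
W_g = nn.Linear(2 * DIM, DIM)      # sense gate g_j^{c_i} of Eq. (10)

def sense_cells(s, h_prev, c_prev):
    """s: (K, DIM) sense embeddings; h_prev, c_prev: (DIM,) from position i-1."""
    z = W_c(torch.cat([s, h_prev.expand_as(s)], dim=-1))
    i, f, c_tilde = z.chunk(3, dim=-1)
    i, f, c_tilde = torch.sigmoid(i), torch.sigmoid(f), torch.tanh(c_tilde)
    return f * c_prev + i * c_tilde                  # c_j^{c_i}: (K, DIM)

def integrate_senses(x_c, c_senses):
    """Eqs. (10)-(12): softmax-normalized gates over the K sense cells."""
    g = W_g(torch.cat([x_c.expand_as(c_senses), c_senses], dim=-1))
    alpha = F.softmax(g, dim=0)                      # normalize across the senses
    return (alpha * c_senses).sum(dim=0)             # temporary cell c_i^{*c}

h_prev, c_prev = torch.zeros(DIM), torch.zeros(DIM)
s = torch.randn(K, DIM)     # stand-in sense embeddings s_j^{c_i}
x_c = torch.randn(DIM)      # character embedding x_i^c
c_star = integrate_senses(x_c, sense_cells(s, h_prev, c_prev))   # (DIM,)
```

The same gating pattern is reused for word senses in Eqs. 13-17 below. 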
As mentioned in Section 2.1, s_j^{w_{b,e}} is the embedding of the jth sense of the word w_{b,e}.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Trigger-Aware Feature Extractor", "sec_num": "2.2" }, { "text": "Similar to characters, an extra LSTM cell is used to calculate the cell state of each word that matches the lexicon D.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Trigger-Aware Feature Extractor", "sec_num": "2.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "[i_j^{w_{b,e}}, f_j^{w_{b,e}}, c\u0303_j^{w_{b,e}}]^T = [\u03c3, \u03c3, tanh]^T (W^c [s_j^{w_{b,e}}; h_{b-1}^c] + b^c) (13) c_j^{w_{b,e}} = f_j^{w_{b,e}} \u2299 c_{b-1}^c + i_j^{w_{b,e}} \u2299 c\u0303_j^{w_{b,e}}", "eq_num": "(14)" } ], "section": "Trigger-Aware Feature Extractor", "sec_num": "2.2" }, { "text": "Similar to Eq. 11, the cell state of the word is computed by incorporating the cells of all its senses.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Trigger-Aware Feature Extractor", "sec_num": "2.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "g_j^{w_{b,e}} = \u03c3(W [x_b^c; c_j^{w_{b,e}}] + b) (15) c_{b,e}^w = \u03a3_j^K \u03b1_j^{w_{b,e}} \u2299 c_j^{w_{b,e}}", "eq_num": "(16)" } ], "section": "Trigger-Aware Feature Extractor", "sec_num": "2.2" }, { "text": "where \u03b1_j^{w_{b,e}} is the word sense gate after normalization:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Trigger-Aware Feature Extractor", "sec_num": "2.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\u03b1_j^{w_{b,e}} = exp(g_j^{w_{b,e}}) / \u03a3_k^K exp(g_k^{w_{b,e}})", "eq_num": "(17)" } ], "section": "Trigger-Aware Feature Extractor", "sec_num": "2.2" }, { "text": "For a character c_i, the temporary cell state c_i^{*c} that contains sense information is calculated by Eq. 11. Moreover, we can calculate the cell states of all words that end at index i by Eq. 16, represented as {c_{b,i}^w | b \u2208 [1, i], w_{b,i} \u2208 D}. 
In order to ensure that the corresponding information flows into the final cell state of c_i, an extra gate g_{b,i}^m is used to merge the character and word cells:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Trigger-Aware Feature Extractor", "sec_num": "2.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "g_{b,i}^m = \u03c3(W^l [x_i^c; c_{b,i}^w] + b^l)", "eq_num": "(18)" } ], "section": "Trigger-Aware Feature Extractor", "sec_num": "2.2" }, { "text": "and the computation of the final cell state c_i^c of the character is:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Trigger-Aware Feature Extractor", "sec_num": "2.2" }, { "text": "c_i^c = \u03a3_{b \u2208 {b' | w_{b',i} \u2208 D}} \u03b1_{b,i}^w \u2299 c_{b,i}^w + \u03b1_i^c \u2299 c_i^{*c} (19)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Trigger-Aware Feature Extractor", "sec_num": "2.2" }, { "text": "where \u03b1_{b,i}^w and \u03b1_i^c are the word gate and the character gate after normalization; the computation is similar to Eq. 12 and Eq. 17. Therefore, the final cell state c_i^c can represent ambiguous characters and words in a dynamic manner. Similar to Eq. 7, hidden state vectors are then calculated and transmitted to the sequence tagger layer.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Trigger-Aware Feature Extractor", "sec_num": "2.2" }, { "text": "In this paper, the event detection task is regarded as a sequence tagging problem. For an input sequence S = {c_1, c_2, ..., c_N}, there is a corresponding label sequence L = {y_1, y_2, ..., y_N}. The hidden vectors h for each character obtained in Section 2.2 are used as the input. We use a classic CRF layer to perform the sequence tagging, so the probability distribution is:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sequence Tagger", "sec_num": "2.3" }, { "text": "P(L|S) = exp(\u03a3_{i=1}^N (S(y_i) + T(y_{i-1}, y_i))) / \u03a3_{L' \u2208 C} exp(\u03a3_{i=1}^N (S(y'_i) + T(y'_{i-1}, y'_i))) , (20)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sequence Tagger", "sec_num": "2.3" }, { "text": "where S is the score function that computes the emission score from the hidden vector h_i to the label y_i:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sequence Tagger", "sec_num": "2.3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "S(y_i) = W_{CRF}^{y_i} h_i + b_{CRF}^{y_i}", "eq_num": "(21)" } ], "section": "Sequence Tagger", "sec_num": "2.3" }, { "text": "W_{CRF}^{y_i} and b_{CRF}^{y_i} are learned parameters specific to y_i. In Eq. 20, T is the transition function that computes the transition score from y_{i-1} to y_i. C contains all possible label sequences on sequence S, and L' is an arbitrary label sequence in C.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sequence Tagger", "sec_num": "2.3" }, { "text": "We use the standard Viterbi (Viterbi, 1967) algorithm as the decoder to find the highest-scoring label sequence. 
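For illustration, a minimal sketch of such a Viterbi decoder over the emission scores S(y_i) and the transition scores T(y_{i-1}, y_i) is given below (pure PyTorch; the function name and shapes are hypothetical, not taken from the paper's code):

```python
import torch

def viterbi_decode(emissions, transitions):
    """emissions: (N, T) scores S(y_i) for N characters and T labels;
    transitions: (T, T) scores T(y_{i-1}, y_i). Returns the best label path."""
    N, T = emissions.shape
    score = emissions[0]        # best score of paths ending in each label so far
    backpointers = []
    for i in range(1, N):
        # score[prev] + transitions[prev, cur] + emissions[i, cur], max over prev
        total = score.unsqueeze(1) + transitions + emissions[i].unsqueeze(0)
        score, best_prev = total.max(dim=0)
        backpointers.append(best_prev)
    # Follow the backpointers from the best final label.
    path = [int(score.argmax())]
    for best_prev in reversed(backpointers):
        path.append(int(best_prev[path[-1]]))
    return list(reversed(path))

emissions = torch.randn(6, 5)    # e.g. 6 characters, 5 IOB-style labels
transitions = torch.randn(5, 5)
print(viterbi_decode(emissions, transitions))
```

Dynamic programming keeps decoding exact in O(N T^2) time rather than enumerating all T^N label sequences. 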
The loss function of our model is the sentence-level log-likelihood.", "cite_spans": [ { "start": 24, "end": 39, "text": "(Viterbi, 1967)", "ref_id": "BIBREF25" } ], "ref_spans": [], "eq_spans": [], "section": "Sequence Tagger", "sec_num": "2.3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "Loss = \u03a3_{i=1}^M log(P(L_i|S_i))", "eq_num": "(22)" } ], "section": "Sequence Tagger", "sec_num": "2.3" }, { "text": "where M is the number of sentences and L_i is the gold label sequence for the sentence S_i.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sequence Tagger", "sec_num": "2.3" }, { "text": "3 Experiments", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sequence Tagger", "sec_num": "2.3" }, { "text": "Datasets. In this paper, we conduct a series of experiments on two real-world datasets: the ACE 2005 Chinese dataset (ACE2005) and the TAC KBP 2017 Event Nugget Detection Evaluation dataset (KBP2017). For fair comparison, we use the same data split as previous works (Chen and Ji, 2009; Zeng et al., 2016; Feng et al., 2018; Lin et al., 2018) . Specifically, ACE2005 (LDC2006T06) contains 697 articles, with 569 articles for training, 64 for validation and the remaining 64 for testing. For the KBP2017 Chinese dataset (LDC2017E55), we follow the same setup as Lin et al. (2018), using 506/20/167 documents as the training/development/test sets respectively. Evaluation Metrics. Standard micro-averaged Precision, Recall and F1 are used as evaluation metrics. For ACE2005 the computation is the same as in Chen and Ji (2009) . To remain rigorous, we use the official evaluation toolkit 1 to compute the metrics for KBP2017.", "cite_spans": [ { "start": 261, "end": 280, "text": "(Chen and Ji, 2009;", "ref_id": "BIBREF3" }, { "start": 281, "end": 299, "text": "Zeng et al., 2016;", "ref_id": "BIBREF28" }, { "start": 300, "end": 318, "text": "Feng et al., 2018;", "ref_id": "BIBREF5" }, { "start": 319, "end": 336, "text": "Lin et al., 2018)", "ref_id": "BIBREF13" }, { "start": 785, "end": 803, "text": "Chen and Ji (2009)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Datasets and Experimental Settings", "sec_num": "3.1" }, { "text": "Hyper-Parameter Settings. We tune the parameters of our models by grid search on the validation dataset. Adam (Kingma and Ba, 2014) with learning rate decay is utilized as the optimizer. The embedding sizes of characters and senses are all 50. To avoid overfitting, the dropout mechanism (Srivastava et al., 2014) is used in the system, with the dropout rate set to 0.5. We select the best models by early stopping based on the F1 results on the validation dataset. Because of their limited influence, we follow empirical settings for the other hyper-parameters.", "cite_spans": [ { "start": 289, "end": 314, "text": "(Srivastava et al., 2014)", "ref_id": "BIBREF24" } ], "ref_spans": [], "eq_spans": [], "section": "Datasets and Experimental Settings", "sec_num": "3.1" }, { "text": "In this section, we compare our model with previous state-of-the-art methods. The compared models are as follows: Table 2 : Overall results of the compared methods and TLNN on ACE2005 and KBP2017. * indicates results adapted from the original paper. 
For KBP2017, \"Trigger Identification\" and \"Trigger Classification\" correspond to the \"Span\" and \"Type\" metrics in the official evaluation.", "cite_spans": [], "ref_spans": [ { "start": 114, "end": 121, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "Overall Results", "sec_num": "3.2" }, { "text": "DMCNN (Chen et al., 2015) put forward a dynamic Multi-pooling CNN as a sentence-level feature extractor. Moreover, we add a classifier to DMCNN using IOB encoding.", "cite_spans": [ { "start": 6, "end": 25, "text": "(Chen et al., 2015)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Overall Results", "sec_num": "3.2" }, { "text": "C-BiLSTM (Zeng et al., 2016) put forward the Convolutional Bi-LSTM model for the event detection task.", "cite_spans": [ { "start": 9, "end": 28, "text": "(Zeng et al., 2016)", "ref_id": "BIBREF28" } ], "ref_spans": [], "eq_spans": [], "section": "Overall Results", "sec_num": "3.2" }, { "text": "HNN (Feng et al., 2018) designed a Hybrid Neural Network model which combines CNN with Bi-LSTM.", "cite_spans": [ { "start": 4, "end": 23, "text": "(Feng et al., 2018)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Overall Results", "sec_num": "3.2" }, { "text": "HBTNGMA (Chen et al., 2018) put forward a Hierarchical and Bias Tagging Networks with Gated Multi-level Attention Mechanisms to integrate sentence-level and document-level information collectively.", "cite_spans": [ { "start": 8, "end": 27, "text": "(Chen et al., 2018)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Overall Results", "sec_num": "3.2" }, { "text": "NPN (Lin et al., 2018) proposed a comprehensive model by automatically learning the inner compositional structures of triggers to solve the trigger mismatch problem.", "cite_spans": [ { "start": 4, "end": 22, "text": "(Lin et al., 2018)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Overall Results", "sec_num": "3.2" }, { "text": "The results of all the models are shown in Table 2 . From the results, we can observe that:", "cite_spans": [], "ref_spans": [ { "start": 43, "end": 51, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "Overall Results", "sec_num": "3.2" }, { "text": "(1) Both for ACE2005 and KBP2017, TLNN outperform other proposed models significantly, achieving the best results on two datasets. This demonstrates that the trigger-aware lattice structure could enhance the accuracy of locating triggers. Further, thanks to the usage of sense-level information, triggers could be more precisely classified into correct event types.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Overall Results", "sec_num": "3.2" }, { "text": "(2) On the TI stage, TLNN gives the best performance. By linking shortcut paths of all word candidates with the current character, the model could effectively exploit both character and word information, and then alleviates the issue of triggerword mismatch. (3) On the TC stage, TLNN still maintain its advantages. The results indicate that the linguistic knowledge of HowNet and the unique structure to dynamically utilize sense-level information could enhance the performance on the TC stage. 
More located triggers can be classified into correct event types by considering the ambiguity of triggers.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Overall Results", "sec_num": "3.2" }, { "text": "In this section, we design a set of experiments to explore the effect of the trigger-aware feature extractor. We implement strong character-based and word-based baselines by replacing the trigger-aware lattice LSTM with a standard Bi-LSTM. For the word-based baselines, the input is first segmented into word sequences. Furthermore, we implement extra CNN and LSTM modules to learn character-level features. For the character-based baselines, the basic units of the input sequence are characters. Table 4 : Recall rates of the two trigger-word match splits of the two datasets on the Trigger Identification task.", "cite_spans": [], "ref_spans": [ { "start": 475, "end": 482, "text": "Table 4", "ref_id": null } ], "eq_spans": [], "section": "Effect of Trigger-aware Feature Extractor", "sec_num": "3.3" }, { "text": "Then we enhance the character representation by adding external word-level features, including bigrams and softword (the word in which the current character is located). Hence, both baselines can collectively utilize character and word information. As shown in Table 3 , experiments with the two types of baselines and our model are conducted on ACE2005 and KBP2017. For the word baseline, although adding character-level features improves performance, the effects are relatively limited. The char baseline gains considerable improvements when word-level features are taken into account. The results of the baselines indicate that integrating different levels of information is an effective strategy for improving model performance. Compared with the baselines, TLNN achieves the best F1-score on both datasets, showing remarkable superiority and robustness. The results show that by dynamically combining multi-grained information, the trigger-aware feature extractor can explore deeper semantic features than the feature-based strategies used in the baselines.", "cite_spans": [], "ref_spans": [ { "start": 288, "end": 295, "text": "Table 3", "ref_id": "TABREF4" } ], "eq_spans": [], "section": "Effect of Trigger-aware Feature Extractor", "sec_num": "3.3" }, { "text": "In order to explore the influence of the trigger mismatch problem, we split the test data of ACE2005 and KBP2017 into two parts: match and mismatch. Table 1 shows the proportions of word-trigger match and mismatch on the two datasets.", "cite_spans": [], "ref_spans": [ { "start": 145, "end": 152, "text": "Table 1", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Influence of Trigger Mismatch", "sec_num": "3.4" }, { "text": "The recall of the different methods on each split of the Trigger Identification task is shown in Table 4 . We can observe that:", "cite_spans": [], "ref_spans": [ { "start": 89, "end": 96, "text": "Table 4", "ref_id": null } ], "eq_spans": [], "section": "Influence of Trigger Mismatch", "sec_num": "3.4" }, { "text": "(1) The results indicate that the word-trigger mismatch problem can severely impact the performance of the task. All approaches except ours give lower recall rates on the trigger-mismatch part than on the trigger-match part. In contrast, our model robustly addresses the word-trigger mismatch problem, reaching the best results on both parts of the two datasets. Table 6 : F1-score of the two splits of the two datasets on the Trigger Classification task. 
The splits are based on the polysemy of triggers. \"Poly\" and \"Mono\" correspond to the polysemous and monosemous trigger splits.", "cite_spans": [], "ref_spans": [ { "start": 369, "end": 376, "text": "Table 6", "ref_id": null } ], "eq_spans": [], "section": "Influence of Trigger Mismatch", "sec_num": "3.4" }, { "text": "(2) To a certain extent, the NPN model can alleviate the problem by utilizing hybrid representation learning and a nugget generator in a fixed-size window. However, this mechanism is still not flexible and robust enough to integrate character and word information.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Influence of Trigger Mismatch", "sec_num": "3.4" }, { "text": "(3) The word-based baseline is the most severely affected by the trigger-word mismatch problem. This phenomenon is explainable: if a trigger is not segmented as a single word in the preprocessing stage, it cannot be located correctly.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Influence of Trigger Mismatch", "sec_num": "3.4" }, { "text": "In this section, we mainly focus on the influence of polysemous triggers. We select the NPN model for comparison, and we implement a version of TLNN without sense information, denoted as TLNN -w/o Sense info in Table 5 and Table 6 .", "cite_spans": [], "ref_spans": [ { "start": 216, "end": 236, "text": "Table 5 and Table 6", "ref_id": "TABREF7" } ], "eq_spans": [], "section": "Influence of Trigger Polysemy", "sec_num": "3.5" }, { "text": "Empirical results in Table 5 show the overall performance on ACE2005 and KBP2017. We can observe that TLNN is weakened by removing sense information, which indicates the effectiveness of using sense-level information. Even without sense information, our model still outperforms the NPN model on both datasets.", "cite_spans": [], "ref_spans": [ { "start": 21, "end": 28, "text": "Table 5", "ref_id": "TABREF7" } ], "eq_spans": [], "section": "Influence of Trigger Polysemy", "sec_num": "3.5" }, { "text": "To further explore and analyze the effect of word sense information, we split the KBP2017 dataset into two parts based on the polysemy of triggers and their contexts. The F1-score of each split is shown in Table 6 , where TLNN yields the best results on both \"Poly\" parts. Table 7 : Two model prediction examples. The first sentence is an example of trigger-word mismatch, while the second one involves polysemous triggers. For each prediction result, (A, B) indicates that A is a trigger with event type B. Without sense information, TLNN -w/o sense info gives F1-scores comparable to TLNN on the \"Mono\" parts. The results indicate that the trigger-aware feature extractor can dynamically learn all the senses of characters and words, gaining significant improvements under the condition of polysemy. Table 7 shows two examples comparing the TLNN model with other ED methods. The former example concerns trigger-word mismatch, in which the correct trigger \"\u6297\" (resist) is part of the idiom word \"\u6297\u654c\u63f4\u53cb\" (resist the enemies and aid the allies). In this case, the word baseline gives the whole word \"\u6297\u654c\u63f4\u53cb\" as the prediction, because it is impossible for word-based methods to detect part-of-word triggers. Additionally, the NPN model recognizes a non-existent word \"\u523b\u6297\". 
The reason is that the NPN enumerates the combinations of all characters within a window as trigger candidates, which is likely to generate invalid words. In contrast, our model detects the event trigger \"\u6297\" accurately.", "cite_spans": [], "ref_spans": [ { "start": 206, "end": 213, "text": "Table 6", "ref_id": null }, { "start": 286, "end": 293, "text": "Table 7", "ref_id": null }, { "start": 819, "end": 826, "text": "Table 7", "ref_id": null } ], "eq_spans": [], "section": "Influence of Trigger Polysemy", "sec_num": "3.5" }, { "text": "In the latter example, the trigger \"\u9001\" (send) is a polysemous word with two different meanings: \"\u9001\u884c\" (see him off) and \"\u9001\u94b1\" (give him money). Without considering the multiple word senses of polysemes, NPN and TLNN (w/o Sense info) classify the trigger \"\u9001\" into the wrong event type TransferPerson. In contrast, TLNN can dynamically select the word sense of polysemous triggers by utilizing context information. Thus the correct event type TransferMoney is predicted.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Case Study", "sec_num": "3.6" }, { "text": "Event Detection (ED) is a crucial subtask of Event Extraction. Feature-based methods (Ahn, 2006; Ji and Grishman, 2008; Liao and Grishman, 2010; Huang and Riloff, 2012; Patwardhan and Riloff, 2009; McClosky et al., 2011) were widely used for the ED task, but these traditional methods rely heavily on manual features, limiting their scalability and robustness.", "cite_spans": [ { "start": 90, "end": 101, "text": "(Ahn, 2006;", "ref_id": "BIBREF0" }, { "start": 102, "end": 124, "text": "Ji and Grishman, 2008;", "ref_id": "BIBREF8" }, { "start": 125, "end": 149, "text": "Liao and Grishman, 2010;", "ref_id": "BIBREF12" }, { "start": 150, "end": 173, "text": "Huang and Riloff, 2012;", "ref_id": "BIBREF7" }, { "start": 174, "end": 202, "text": "Patwardhan and Riloff, 2009;", "ref_id": "BIBREF22" }, { "start": 203, "end": 225, "text": "McClosky et al., 2011)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "4" }, { "text": "Recent developments in deep learning have led to renewed interest in neural event detection. Neural networks can automatically learn features of the input sequence and conduct token-level classification. CNN-based models are the seminal neural network models in ED (Nguyen and Grishman, 2015; Chen et al., 2015) . However, these models can only capture local context features in a fixed-size window. Some approaches design comprehensive models to explore the interdependency among trigger words (Chen et al., 2018; Feng et al., 2018) . To further improve the ED task, some joint models have been designed (Lu and Nguyen, 2018; Yang and Mitchell, 2016) . 
These methods have achieved great success on English datasets.", "cite_spans": [ { "start": 267, "end": 294, "text": "(Nguyen and Grishman, 2015;", "ref_id": "BIBREF19" }, { "start": 295, "end": 313, "text": "Chen et al., 2015;", "ref_id": "BIBREF1" }, { "start": 501, "end": 520, "text": "(Chen et al., 2018;", "ref_id": "BIBREF2" }, { "start": 521, "end": 539, "text": "Feng et al., 2018)", "ref_id": "BIBREF5" }, { "start": 605, "end": 625, "text": "Lu and Nguyen, 2018;", "ref_id": "BIBREF14" }, { "start": 626, "end": 650, "text": "Yang and Mitchell, 2016)", "ref_id": "BIBREF26" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "4" }, { "text": "However, in languages without delimiters, such as Chinese, the word-trigger mismatch problem becomes significantly more severe. Some feature-based methods have been proposed to solve the problem (Chen and Ji, 2009; Qin et al., 2010; Li and Zhou, 2012) , but they rely heavily on hand-crafted features. Lin et al. (2018) propose NPN, a neural network based method, to address the issue. However, the mechanism of NPN limits the scope of trigger candidates to a fixed-size window, which causes two problems. First, NPN still cannot take all the possible trigger candidates into account, while the window-based enumeration leads to wasteful computation. Furthermore, trigger overlap is a serious problem for NPN. Lattice-based models have been used in other fields to combine character and word information (Li et al., 2019) . Mainstream methods also suffer from the problem of trigger polysemy. Lu and Nguyen (2018) propose a multi-task learning model that uses word sense disambiguation to alleviate the effect of the trigger polysemy problem, but this approach requires word sense disambiguation datasets. In contrast, our model can solve both the word-trigger mismatch and trigger polysemy problems at the same time.", "cite_spans": [ { "start": 178, "end": 197, "text": "(Chen and Ji, 2009;", "ref_id": "BIBREF3" }, { "start": 198, "end": 215, "text": "Qin et al., 2010;", "ref_id": "BIBREF23" }, { "start": 216, "end": 234, "text": "Li and Zhou, 2012)", "ref_id": "BIBREF10" }, { "start": 289, "end": 306, "text": "Lin et al. (2018)", "ref_id": "BIBREF13" }, { "start": 783, "end": 800, "text": "(Li et al., 2019;", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "4" }, { "text": "We propose a novel framework, TLNN, for event detection, which can simultaneously address the problems of trigger-word mismatch and polysemous triggers. With hierarchical representation learning and the trigger-aware feature extractor, TLNN effectively exploits multi-grained information and learns deep semantic features. Sets of experiments on two real-world datasets show that TLNN can efficiently address the two issues and yields better empirical results than a variety of neural network models.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion and Future Work", "sec_num": "5" }, { "text": "In future work, we will conduct experiments on more languages with and without explicit word delimiters. 
In addition, we will try to develop a dynamic mechanism that selectively considers sense-level information rather than taking all the senses of characters and words into account.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion and Future Work", "sec_num": "5" }, { "text": "github.com/hunterhector/EvmEval", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "The stages of event extraction", "authors": [ { "first": "David", "middle": [], "last": "Ahn", "suffix": "" } ], "year": 2006, "venue": "Proceedings of the Workshop on Annotating and Reasoning about Time and Events", "volume": "", "issue": "", "pages": "1--8", "other_ids": {}, "num": null, "urls": [], "raw_text": "David Ahn. 2006. The stages of event extraction. In Proceedings of the Workshop on Annotating and Reasoning about Time and Events, pages 1-8.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Event extraction via dynamic multi-pooling convolutional neural networks", "authors": [ { "first": "Yubo", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Liheng", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Kang", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Daojian", "middle": [], "last": "Zeng", "suffix": "" }, { "first": "Jun", "middle": [], "last": "Zhao", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing", "volume": "1", "issue": "", "pages": "167--176", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yubo Chen, Liheng Xu, Kang Liu, Daojian Zeng, and Jun Zhao. 2015. Event extraction via dynamic multi-pooling convolutional neural networks. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), volume 1, pages 167-176.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Collective event detection via a hierarchical and bias tagging networks with gated multilevel attention mechanisms", "authors": [ { "first": "Yubo", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Hang", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Kang", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Jun", "middle": [], "last": "Zhao", "suffix": "" }, { "first": "Yantao", "middle": [], "last": "Jia", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "1267--1276", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yubo Chen, Hang Yang, Kang Liu, Jun Zhao, and Yantao Jia. 2018. Collective event detection via a hierarchical and bias tagging networks with gated multi-level attention mechanisms. 
In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1267-1276.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Language specific issue and feature exploration in Chinese event extraction", "authors": [ { "first": "Zheng", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Heng", "middle": [], "last": "Ji", "suffix": "" } ], "year": 2009, "venue": "The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics, Companion Volume: Short Papers", "volume": "", "issue": "", "pages": "209--212", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zheng Chen and Heng Ji. 2009. Language specific issue and feature exploration in Chinese event extraction. In Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics, Companion Volume: Short Papers, pages 209-212.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Hownet-a hybrid language and knowledge resource", "authors": [ { "first": "Zhendong", "middle": [], "last": "Dong", "suffix": "" }, { "first": "Qiang", "middle": [], "last": "Dong", "suffix": "" } ], "year": 2003, "venue": "Proceedings of NLP-KE", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zhendong Dong and Qiang Dong. 2003. Hownet-a hybrid language and knowledge resource. In Proceedings of NLP-KE.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "A language-independent neural network for event detection", "authors": [ { "first": "Xiaocheng", "middle": [], "last": "Feng", "suffix": "" }, { "first": "Bing", "middle": [], "last": "Qin", "suffix": "" }, { "first": "Ting", "middle": [], "last": "Liu", "suffix": "" } ], "year": 2018, "venue": "Science China Information Sciences", "volume": "61", "issue": "9", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xiaocheng Feng, Bing Qin, and Ting Liu. 2018. A language-independent neural network for event detection. Science China Information Sciences, 61(9):092106.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Long short-term memory", "authors": [ { "first": "Sepp", "middle": [], "last": "Hochreiter", "suffix": "" }, { "first": "J\u00fcrgen", "middle": [], "last": "Schmidhuber", "suffix": "" } ], "year": 1997, "venue": "Neural computation", "volume": "9", "issue": "8", "pages": "1735--1780", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sepp Hochreiter and J\u00fcrgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735-1780.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Modeling textual cohesion for event extraction", "authors": [ { "first": "Ruihong", "middle": [], "last": "Huang", "suffix": "" }, { "first": "Ellen", "middle": [], "last": "Riloff", "suffix": "" } ], "year": 2012, "venue": "Twenty-Sixth AAAI Conference on Artificial Intelligence", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ruihong Huang and Ellen Riloff. 2012. Modeling textual cohesion for event extraction. 
In Twenty-Sixth AAAI Conference on Artificial Intelligence.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Refining event extraction through cross-document inference", "authors": [ { "first": "Heng", "middle": [], "last": "Ji", "suffix": "" }, { "first": "Ralph", "middle": [], "last": "Grishman", "suffix": "" } ], "year": 2008, "venue": "Proceedings of ACL-08: HLT", "volume": "", "issue": "", "pages": "254--262", "other_ids": {}, "num": null, "urls": [], "raw_text": "Heng Ji and Ralph Grishman. 2008. Refining event extraction through cross-document inference. Proceedings of ACL-08: HLT, pages 254-262.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Adam: A method for stochastic optimization", "authors": [ { "first": "Diederik", "middle": [ "P" ], "last": "Kingma", "suffix": "" }, { "first": "Jimmy", "middle": [], "last": "Ba", "suffix": "" } ], "year": 2014, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1412.6980" ] }, "num": null, "urls": [], "raw_text": "Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Employing morphological structures and sememes for Chinese event extraction", "authors": [ { "first": "Peifeng", "middle": [], "last": "Li", "suffix": "" }, { "first": "Guodong", "middle": [], "last": "Zhou", "suffix": "" } ], "year": 2012, "venue": "Proceedings of COLING 2012", "volume": "", "issue": "", "pages": "1619--1634", "other_ids": {}, "num": null, "urls": [], "raw_text": "Peifeng Li and Guodong Zhou. 2012. Employing morphological structures and sememes for Chinese event extraction. Proceedings of COLING 2012, pages 1619-1634.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Chinese relation extraction with multi-grained information and external linguistic knowledge", "authors": [ { "first": "Ziran", "middle": [], "last": "Li", "suffix": "" }, { "first": "Ning", "middle": [], "last": "Ding", "suffix": "" }, { "first": "Zhiyuan", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Haitao", "middle": [], "last": "Zheng", "suffix": "" }, { "first": "Ying", "middle": [], "last": "Shen", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th Conference of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "4377--4386", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ziran Li, Ning Ding, Zhiyuan Liu, Haitao Zheng, and Ying Shen. 2019. Chinese relation extraction with multi-grained information and external linguistic knowledge. In Proceedings of the 57th Conference of the Association for Computational Linguistics, pages 4377-4386.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Using document level cross-event inference to improve event extraction", "authors": [ { "first": "Shasha", "middle": [], "last": "Liao", "suffix": "" }, { "first": "Ralph", "middle": [], "last": "Grishman", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "789--797", "other_ids": {}, "num": null, "urls": [], "raw_text": "Shasha Liao and Ralph Grishman. 2010. Using document level cross-event inference to improve event extraction. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 789-797. 
Association for Computational Linguistics.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Nugget proposal networks for Chinese event detection", "authors": [ { "first": "Hongyu", "middle": [], "last": "Lin", "suffix": "" }, { "first": "Yaojie", "middle": [], "last": "Lu", "suffix": "" }, { "first": "Xianpei", "middle": [], "last": "Han", "suffix": "" }, { "first": "Le", "middle": [], "last": "Sun", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1805.00249" ] }, "num": null, "urls": [], "raw_text": "Hongyu Lin, Yaojie Lu, Xianpei Han, and Le Sun. 2018. Nugget proposal networks for Chinese event detection. arXiv preprint arXiv:1805.00249.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Similar but not the same: Word sense disambiguation improves event detection via neural representation matching", "authors": [ { "first": "Weiyi", "middle": [], "last": "Lu", "suffix": "" }, { "first": "Thien", "middle": [], "last": "Huu Nguyen", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Weiyi Lu and Thien Huu Nguyen. 2018. Similar but not the same: Word sense disambiguation improves event detection via neural representation matching.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing", "authors": [], "year": null, "venue": "", "volume": "", "issue": "", "pages": "4822--4828", "other_ids": {}, "num": null, "urls": [], "raw_text": "In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 4822-4828.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Event extraction as dependency parsing", "authors": [ { "first": "David", "middle": [], "last": "McClosky", "suffix": "" }, { "first": "Mihai", "middle": [], "last": "Surdeanu", "suffix": "" }, { "first": "Christopher D", "middle": [], "last": "Manning", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "1626--1635", "other_ids": {}, "num": null, "urls": [], "raw_text": "David McClosky, Mihai Surdeanu, and Christopher D Manning. 2011. Event extraction as dependency parsing. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies-Volume 1, pages 1626-1635. Association for Computational Linguistics.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Efficient estimation of word representations in vector space", "authors": [ { "first": "Tomas", "middle": [], "last": "Mikolov", "suffix": "" }, { "first": "Kai", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Greg", "middle": [], "last": "Corrado", "suffix": "" }, { "first": "Jeffrey", "middle": [], "last": "Dean", "suffix": "" } ], "year": 2013, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1301.3781" ] }, "num": null, "urls": [], "raw_text": "Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Efficient estimation of word representations in vector space. 
arXiv preprint arXiv:1301.3781.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Joint event extraction via recurrent neural networks", "authors": [ { "first": "Thien Huu", "middle": [], "last": "Nguyen", "suffix": "" }, { "first": "Kyunghyun", "middle": [], "last": "Cho", "suffix": "" }, { "first": "Ralph", "middle": [], "last": "Grishman", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "", "issue": "", "pages": "300--309", "other_ids": {}, "num": null, "urls": [], "raw_text": "Thien Huu Nguyen, Kyunghyun Cho, and Ralph Grishman. 2016. Joint event extraction via recurrent neural networks. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 300-309.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Event detection and domain adaptation with convolutional neural networks", "authors": [ { "first": "Thien Huu", "middle": [], "last": "Nguyen", "suffix": "" }, { "first": "Ralph", "middle": [], "last": "Grishman", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing", "volume": "2", "issue": "", "pages": "365--371", "other_ids": {}, "num": null, "urls": [], "raw_text": "Thien Huu Nguyen and Ralph Grishman. 2015. Event detection and domain adaptation with convolutional neural networks. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 2: Short Papers), volume 2, pages 365-371.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Modeling skip-grams for event detection with convolutional neural networks", "authors": [ { "first": "Thien Huu", "middle": [], "last": "Nguyen", "suffix": "" }, { "first": "Ralph", "middle": [], "last": "Grishman", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "886--891", "other_ids": { "DOI": [ "10.18653/v1/D16-1085" ] }, "num": null, "urls": [], "raw_text": "Thien Huu Nguyen and Ralph Grishman. 2016. Modeling skip-grams for event detection with convolutional neural networks. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 886-891, Austin, Texas. Association for Computational Linguistics.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Improved word representation learning with sememes", "authors": [ { "first": "Yilin", "middle": [], "last": "Niu", "suffix": "" }, { "first": "Ruobing", "middle": [], "last": "Xie", "suffix": "" }, { "first": "Zhiyuan", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Maosong", "middle": [], "last": "Sun", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "2049--2058", "other_ids": { "DOI": [ "10.18653/v1/P17-1187" ] }, "num": null, "urls": [], "raw_text": "Yilin Niu, Ruobing Xie, Zhiyuan Liu, and Maosong Sun. 2017.
Improved word representation learning with sememes. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2049-2058, Vancouver, Canada. Association for Computational Linguistics.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "A unified model of phrasal and sentential evidence for information extraction", "authors": [ { "first": "Siddharth", "middle": [], "last": "Patwardhan", "suffix": "" }, { "first": "Ellen", "middle": [], "last": "Riloff", "suffix": "" } ], "year": 2009, "venue": "Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing", "volume": "1", "issue": "", "pages": "151--160", "other_ids": {}, "num": null, "urls": [], "raw_text": "Siddharth Patwardhan and Ellen Riloff. 2009. A unified model of phrasal and sentential evidence for information extraction. In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing: Volume 1-Volume 1, pages 151-160. Association for Computational Linguistics.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Event type recognition based on trigger expansion", "authors": [ { "first": "Bing", "middle": [], "last": "Qin", "suffix": "" }, { "first": "Yanyan", "middle": [], "last": "Zhao", "suffix": "" }, { "first": "Xiao", "middle": [], "last": "Ding", "suffix": "" }, { "first": "Ting", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Guofu", "middle": [], "last": "Zhai", "suffix": "" } ], "year": 2010, "venue": "Tsinghua Science and Technology", "volume": "15", "issue": "3", "pages": "251--258", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bing Qin, Yanyan Zhao, Xiao Ding, Ting Liu, and Guofu Zhai. 2010. Event type recognition based on trigger expansion. Tsinghua Science and Technology, 15(3):251-258.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Dropout: a simple way to prevent neural networks from overfitting", "authors": [ { "first": "Nitish", "middle": [], "last": "Srivastava", "suffix": "" }, { "first": "Geoffrey", "middle": [], "last": "Hinton", "suffix": "" }, { "first": "Alex", "middle": [], "last": "Krizhevsky", "suffix": "" }, { "first": "Ilya", "middle": [], "last": "Sutskever", "suffix": "" }, { "first": "Ruslan", "middle": [], "last": "Salakhutdinov", "suffix": "" } ], "year": 2014, "venue": "The Journal of Machine Learning Research", "volume": "15", "issue": "1", "pages": "1929--1958", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: a simple way to prevent neural networks from overfitting. The Journal of Machine Learning Research, 15(1):1929-1958.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Error bounds for convolutional codes and an asymptotically optimum decoding algorithm", "authors": [ { "first": "Andrew", "middle": [], "last": "Viterbi", "suffix": "" } ], "year": 1967, "venue": "IEEE Transactions on Information Theory", "volume": "13", "issue": "2", "pages": "260--269", "other_ids": {}, "num": null, "urls": [], "raw_text": "Andrew Viterbi. 1967. Error bounds for convolutional codes and an asymptotically optimum decoding algorithm.
IEEE Transactions on Information Theory, 13(2):260-269.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Joint extraction of events and entities within a document context", "authors": [ { "first": "Bishan", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Tom", "middle": [], "last": "Mitchell", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1609.03632" ] }, "num": null, "urls": [], "raw_text": "Bishan Yang and Tom Mitchell. 2016. Joint extraction of events and entities within a document context. arXiv preprint arXiv:1609.03632.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Subword encoding in lattice LSTM for Chinese word segmentation", "authors": [ { "first": "Jie", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Yue", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Shuailong", "middle": [], "last": "Liang", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1810.12594" ] }, "num": null, "urls": [], "raw_text": "Jie Yang, Yue Zhang, and Shuailong Liang. 2018. Subword encoding in lattice LSTM for Chinese word segmentation. arXiv preprint arXiv:1810.12594.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "A convolution BiLSTM neural network model for Chinese event extraction", "authors": [ { "first": "Ying", "middle": [], "last": "Zeng", "suffix": "" }, { "first": "Honghui", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Yansong", "middle": [], "last": "Feng", "suffix": "" }, { "first": "Zheng", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Dongyan", "middle": [], "last": "Zhao", "suffix": "" } ], "year": 2016, "venue": "Natural Language Understanding and Intelligent Applications", "volume": "", "issue": "", "pages": "275--287", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ying Zeng, Honghui Yang, Yansong Feng, Zheng Wang, and Dongyan Zhao. 2016. A convolution BiLSTM neural network model for Chinese event extraction. In Natural Language Understanding and Intelligent Applications, pages 275-287. Springer.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Chinese NER using lattice LSTM", "authors": [ { "first": "Yue", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Jie", "middle": [], "last": "Yang", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1805.02023" ] }, "num": null, "urls": [], "raw_text": "Yue Zhang and Jie Yang. 2018. Chinese NER using lattice LSTM. arXiv preprint arXiv:1805.02023.", "links": null } }, "ref_entries": { "FIGREF0": { "text": "Examples of trigger-word mismatch and polysemous triggers in Event Detection.", "num": null, "uris": null, "type_str": "figure" }, "FIGREF1": { "text": "The architecture of TLNN.", "num": null, "uris": null, "type_str": "figure" }, "FIGREF2": { "text": "sense for the character $c_i$ and word $w_{b,e}$ in the sequence. Then $s^{c_i}_j$ and $s^{w_{b,e}}_j$ are the embeddings of $c_i$ and $w_{b,e}$.", "num": null, "uris": null, "type_str": "figure" }, "TABREF1": { "content": "", "text": "Proportion of Trigger-word mismatch and Polysemous Triggers on ACE 2005 and KBP 2017.", "num": null, "type_str": "table", "html": null }, "TABREF2": { "content": "
Model | ACE2005 TI (P R F1) | ACE2005 TC (P R F1) | KBP2017 TI (P R F1) | KBP2017 TC (P R F1)
Char DMCNN | 60.10 61.60 60.90 | 57.10 58.50 57.80 | 53.67 49.92 51.73 | 50.03 46.53 48.22
Char C-BiLSTM* | 65.60 66.70 66.10 | 60.00 60.90 60.40 | - - - | - - -
Char HBTNGMA | 41.67 59.29 48.94 | 38.74 55.13 45.50 | 40.52 46.76 43.41 | 35.93 41.47 38.50
Word DMCNN | 66.60 63.60 65.10 | 61.60 58.80 60.20 | 60.43 51.64 55.69 | 54.81 46.84 50.51
Word HNN* | 74.20 63.10 68.20 | 77.10 53.10 63.00 | - - - | - - -
Word HBTNGMA | 54.29 62.82 58.25 | 49.86 57.69 53.49 | 46.92 53.57 50.02 | 37.54 42.86 40.03
Word FeatureRich-C* | 62.20 71.90 66.70 | 58.90 68.10 63.20 | - - - | - - -
Word KBP2017 Best* | - - - | - - - | 67.76 45.92 54.74 | 62.69 42.48 50.64
Hybrid NPN | 70.63 64.74 67.56 | 67.13 61.54 64.21 | 58.03 59.91 58.96 | 52.04 53.73 52.87
Hybrid TLNN (Ours) | 67.34 74.68 70.82 | 64.45 71.47 67.78 | 65.93 59.07 62.31 | 60.72 54.41 57.39
", "text": "P, R and F1-score of Trigger Identification (TI) and Trigger Classification (TC) on ACE2005 and KBP2017.", "num": null, "type_str": "table", "html": null }, "TABREF4": { "content": "", "text": "F1-score of Word-based and Character-based baselines and TLNN on ACE2005 and KBP2017.", "num": null, "type_str": "table", "html": null }, "TABREF6": { "content": "
Model | ACE2005 TI | ACE2005 TC | KBP2017 TI | KBP2017 TC
NPN | 67.56 | 64.21 | 58.96 | 52.87
TLNN | 70.82 | 67.78 | 62.31 | 57.39
TLNN -w/o Sense info | 68.65 | 65.83 | 61.17 | 56.55
", "text": "F1-score of TI and TC for NPN, TLNN and TLNN -w/o Sense info on ACE2005 and KBP2017.", "num": null, "type_str": "table", "html": null }, "TABREF7": { "content": "
Model | ACE2005 Poly | ACE2005 Mono | KBP2017 Poly | KBP2017 Mono
NPN | 66.27 | 61.08 | 51.15 | 48.63
TLNN | 69.11 | 62.56 | 58.61 | 56.56
TLNN -w/o Sense info | 67.53 | 62.89 | 56.01 | 55.96
", "text": "F1-score of NPN, TLNN and TLNN -w/o Sense info on polysemous (Poly) and monosemous (Mono) triggers on ACE2005 and KBP2017.", "num": null, "type_str": "table", "html": null }, "TABREF8": { "content": "
Sentence 1 | Word baseline | NPN | TLNN | Answer
立刻/抗敌援友 (Resist the enemies and aid the allies at once) | (抗敌援友, Attack) | (刻抗, Attack) | (抗, Attack) | (抗, Attack)
Sentence 2 | NPN | TLNN -w/o Sense info | TLNN | Answer
送/他/一笔/赴/欧洲/的/旅费 (Send him money for European tourism) | (送, TransferPerson) | (送, TransferPerson) | (送, TransferMoney) | (送, TransferMoney)
", "text": "Case study: triggers and event types predicted by different models on two example sentences.", "num": null, "type_str": "table", "html": null } } } }