{ "paper_id": "D19-1022", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T16:12:55.194960Z" }, "title": "Improving Relation Extraction with Knowledge-attention", "authors": [ { "first": "Pengfei", "middle": [], "last": "Li", "suffix": "", "affiliation": { "laboratory": "", "institution": "Nanyang Technological University", "location": { "country": "Singapore" } }, "email": "" }, { "first": "Kezhi", "middle": [], "last": "Mao", "suffix": "", "affiliation": { "laboratory": "", "institution": "Nanyang Technological University", "location": { "country": "Singapore" } }, "email": "ekzmao@ntu.edu.sg" }, { "first": "Xuefeng", "middle": [], "last": "Yang", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Qi", "middle": [], "last": "Li", "suffix": "", "affiliation": { "laboratory": "", "institution": "Nanyang Technological University", "location": { "country": "Singapore" } }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "While attention mechanisms have been proven to be effective in many NLP tasks, majority of them are data-driven. We propose a novel knowledge-attention encoder which incorporates prior knowledge from external lexical resources into deep neural networks for relation extraction task. Furthermore, we present three effective ways of integrating knowledge-attention with self-attention to maximize the utilization of both knowledge and data. The proposed relation extraction system is end-to-end and fully attention-based. Experiment results show that the proposed knowledge-attention mechanism has complementary strengths with self-attention, and our integrated models outperform existing CNN, RNN, and self-attention based models. Stateof-the-art performance is achieved on TA-CRED, a complex and large-scale relation extraction dataset.", "pdf_parse": { "paper_id": "D19-1022", "_pdf_hash": "", "abstract": [ { "text": "While attention mechanisms have been proven to be effective in many NLP tasks, majority of them are data-driven. We propose a novel knowledge-attention encoder which incorporates prior knowledge from external lexical resources into deep neural networks for relation extraction task. Furthermore, we present three effective ways of integrating knowledge-attention with self-attention to maximize the utilization of both knowledge and data. The proposed relation extraction system is end-to-end and fully attention-based. Experiment results show that the proposed knowledge-attention mechanism has complementary strengths with self-attention, and our integrated models outperform existing CNN, RNN, and self-attention based models. Stateof-the-art performance is achieved on TA-CRED, a complex and large-scale relation extraction dataset.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Relation extraction aims to detect the semantic relationship between two entities in a sentence. For example, given the sentence: \"James Dobson has resigned as chairman of Focus On The Family, which he founded thirty years ago.\", the goal is to recognize the organization-founder relation held between \"Focus On The Family\" and \"James Dobson\". 
The various relations between entities extracted from large-scale unstructured texts can be used for ontology and knowledge base population (Chen et al., 2018a; Fossati et al., 2018) , as well as facilitating downstream tasks that requires relational understanding of texts such as question answering (Yu et al., 2017) and dialogue systems (Young et al., 2018) .", "cite_spans": [ { "start": 484, "end": 504, "text": "(Chen et al., 2018a;", "ref_id": "BIBREF2" }, { "start": 505, "end": 526, "text": "Fossati et al., 2018)", "ref_id": "BIBREF7" }, { "start": 645, "end": 662, "text": "(Yu et al., 2017)", "ref_id": "BIBREF33" }, { "start": 684, "end": 704, "text": "(Young et al., 2018)", "ref_id": "BIBREF32" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Traditional feature-based and kernel-based approaches require extensive feature engineering (Suchanek et al., 2006; Qian et al., 2008; Rink and Harabagiu, 2010) . Deep neural networks such as Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs) have the ability of exploring more complex semantics and extracting features automatically from raw texts for relation extraction tasks Vu et al., 2016; Lee et al., 2017) . Recently, attention mechanisms have been introduced to deep neural networks to improve their performance (Zhou et al., 2016; Wang et al., 2016; . Especially, the Transformer proposed by Vaswani et al. (2017) is based solely on self-attention and has demonstrated better performance than traditional RNNs (Bilan and Roth, 2018; Verga et al., 2018) . However, deep neural networks normally require sufficient labeled data to train their numerous model parameters. The scarcity or low quality of training data will limit the model's ability to recognize complex relations and also cause overfitting issue.", "cite_spans": [ { "start": 92, "end": 115, "text": "(Suchanek et al., 2006;", "ref_id": "BIBREF24" }, { "start": 116, "end": 134, "text": "Qian et al., 2008;", "ref_id": "BIBREF17" }, { "start": 135, "end": 160, "text": "Rink and Harabagiu, 2010)", "ref_id": "BIBREF18" }, { "start": 402, "end": 418, "text": "Vu et al., 2016;", "ref_id": "BIBREF28" }, { "start": 419, "end": 436, "text": "Lee et al., 2017)", "ref_id": "BIBREF11" }, { "start": 544, "end": 563, "text": "(Zhou et al., 2016;", "ref_id": "BIBREF38" }, { "start": 564, "end": 582, "text": "Wang et al., 2016;", "ref_id": "BIBREF30" }, { "start": 625, "end": 646, "text": "Vaswani et al. (2017)", "ref_id": "BIBREF26" }, { "start": 743, "end": 765, "text": "(Bilan and Roth, 2018;", "ref_id": "BIBREF1" }, { "start": 766, "end": 785, "text": "Verga et al., 2018)", "ref_id": "BIBREF27" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "A recent study (Li and Mao, 2019) shows that incorporating prior knowledge from external lexical resources into deep neural network can reduce the reliance on training data and improve relation extraction performance. Motivated by this, we propose a novel knowledge-attention mechanism, which transforms texts from word semantic space into relational semantic space by attending to relation indicators that are useful in recognizing different relations. The relation indicators are automatically generated from lexical knowledge bases which represent keywords and cue phrases of different relation expressions. 
While the existing self-attention encoder learns internal semantic features by attending to the input texts themselves, the proposed knowledge-attention encoder captures the linguistic clues of different relations based on external knowledge. Since the two attention mechanisms complement each other, we integrate them into a single model to maximize the uti-lization of both knowledge and data, and achieve optimal performance for relation extraction.", "cite_spans": [ { "start": 15, "end": 33, "text": "(Li and Mao, 2019)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In summary, the main contributions of the paper are: (1) We propose knowledge-attention encoder, a novel attention mechanism which incorporates prior knowledge from external lexical resources to effectively capture the informative linguistic clues for relation extraction. (2) To take the advantages of both knowledge-attention and self-attention, we propose three integration strategies: multi-channel attention, softmax interpolation, and knowledgeinformed self-attention. Our final models are fully attention-based and can be easily set up for end-toend training. 3We present detailed analysis on knowledge-attention encoder. Results show that it has complementary strengths with self-attention encoder, and the integrated models achieve startof-the-art results for relation extraction.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We focus here on deep neural networks for relation extraction since they have demonstrated better performance than traditional feature-based and kernel-based approaches.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related Works", "sec_num": "2" }, { "text": "Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs) are the earliest and commonly used approaches for relation extraction. Zeng et al. (2014) showed that CNN with position embeddings is effective for relation extraction. Similarly, CNN with multiple filter sizes (Nguyen and Grishman, 2015), pairwise ranking loss function (dos Santos et al., 2015) and auxiliary embeddings (Lee et al., 2017) were proposed to improve performance. Zhang and Wang (2015) proposed bi-directional RNN with max pooling to model the sequential relations. Instead of modeling the whole sentence, performing RNN on sub-dependency trees (e.g. shortest dependency path between two entities) has demonstrated to be effective in capturing longdistance relation patterns Miwa and Bansal, 2016) . Zhang et al. (2018) proposed graph convolution over dependency trees and achieved state-of-the-art results on TACRED dataset.", "cite_spans": [ { "start": 145, "end": 163, "text": "Zeng et al. (2014)", "ref_id": "BIBREF34" }, { "start": 345, "end": 370, "text": "(dos Santos et al., 2015)", "ref_id": "BIBREF20" }, { "start": 396, "end": 414, "text": "(Lee et al., 2017)", "ref_id": "BIBREF11" }, { "start": 453, "end": 474, "text": "Zhang and Wang (2015)", "ref_id": "BIBREF35" }, { "start": 764, "end": 786, "text": "Miwa and Bansal, 2016)", "ref_id": "BIBREF14" }, { "start": 789, "end": 808, "text": "Zhang et al. (2018)", "ref_id": "BIBREF36" } ], "ref_spans": [], "eq_spans": [], "section": "Related Works", "sec_num": "2" }, { "text": "Recently, attention mechanisms have been widely applied to CNNs (Wang et al., 2016; and RNNs (Zhou et al., 2016; . 
The improved performance demonstrated the effectiveness of attention mechanisms in deep neural networks. Particu-larly, Vaswani et al. (2017) proposed a solely selfattention-based model called Transformer, which is more effective than RNNs in capturing longdistance features since it is able to draw global dependencies without regard to their distances in the sequences. Bilan and Roth (2018) first applied self-attention encoder to relation extraction task and achieved competitive results on TACRED dataset. Verga et al. (2018) used self-attention to encode long contexts spanning multiple sentences for biological relation extraction. However, more attention heads and layers are required for self-attention encoder to capture complex semantic and syntactic information since learning is solely based on training data. Hence, more high quality training data and computational power are needed. Our work utilizes the knowledge from external lexical resources to improve deep neural network's ability of capturing informative linguistic clues.", "cite_spans": [ { "start": 64, "end": 83, "text": "(Wang et al., 2016;", "ref_id": "BIBREF30" }, { "start": 93, "end": 112, "text": "(Zhou et al., 2016;", "ref_id": "BIBREF38" }, { "start": 235, "end": 256, "text": "Vaswani et al. (2017)", "ref_id": "BIBREF26" }, { "start": 487, "end": 508, "text": "Bilan and Roth (2018)", "ref_id": "BIBREF1" }, { "start": 626, "end": 645, "text": "Verga et al. (2018)", "ref_id": "BIBREF27" } ], "ref_spans": [], "eq_spans": [], "section": "Related Works", "sec_num": "2" }, { "text": "External knowledge has shown to be effective in neural networks for many NLP tasks. Existing works focus on utilizing external knowledge to improve embedding representations Liu et al., 2015; Sinoara et al., 2019) , CNNs (Toutanova et al., 2015; Wang et al., 2017; Mao, 2019), and RNNs (Ahn et al., 2016; Chen et al., , 2018b Shen et al., 2018) . Our work is the first to incorporate knowledge into Transformer through a novel knowledge-attention mechanism to improve its performance on relation extraction task.", "cite_spans": [ { "start": 174, "end": 191, "text": "Liu et al., 2015;", "ref_id": "BIBREF13" }, { "start": 192, "end": 213, "text": "Sinoara et al., 2019)", "ref_id": "BIBREF22" }, { "start": 221, "end": 245, "text": "(Toutanova et al., 2015;", "ref_id": "BIBREF25" }, { "start": 246, "end": 264, "text": "Wang et al., 2017;", "ref_id": "BIBREF29" }, { "start": 265, "end": 280, "text": "Mao, 2019), and", "ref_id": "BIBREF12" }, { "start": 281, "end": 304, "text": "RNNs (Ahn et al., 2016;", "ref_id": null }, { "start": 305, "end": 325, "text": "Chen et al., , 2018b", "ref_id": "BIBREF3" }, { "start": 326, "end": 344, "text": "Shen et al., 2018)", "ref_id": "BIBREF21" } ], "ref_spans": [], "eq_spans": [], "section": "Related Works", "sec_num": "2" }, { "text": "We present the proposed knowledge-attention encoder in this section. 
Relation indicators are first generated from external lexical resources (Section 3.1); Then the input texts are transformed from word semantic space into relational semantic space by attending to the relation indicators using knowledge-attention mechanism (Section 3.2); Finally, position-aware attention is used to summarize the input sequence by taking both relation semantics and relative positions into consideration (Section 3.3).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Knowledge-attention Encoder", "sec_num": "3" }, { "text": "Relation indicators represent the keywords or cue phrases of various relation types, which are essential for knowledge-attention encoder to capture the linguistic clues of certain relation from texts. We utilize two publicly available lexical resources including FrameNet 1 and Thesaurus.com 2 to find such lexical units. FrameNet is a large lexical knowledge base which categorizes English words and sentences into higher level semantic frames (Ruppenhofer et al., 2006) . Each frame is a conceptual structure describing a type of event, object or relation. FrameNet contains over 1200 semantic frames, many of which represent various semantic relations. For each relation type in our relation extraction task, we first find all the relevant semantic frames by searching from FrameNet (refer Appendix for detailed semantic frames used). Then we extract all the lexical units involved in these frames, which are exactly the keywords or phrases that often used to express such relation. Thesaurus.com is the largest online thesaurus which has over 3 million synonyms and antonyms. It also has the flexibility to filter search results by relevance, POS tag, word length, and complexity. To broaden the coverage of relation indicators, we utilize the synonyms in Thesaurus.com to extend the lexical units extracted from FrameNet. To reduce noise, only the most relevant synonyms with the same POS tag are selected.", "cite_spans": [ { "start": 445, "end": 471, "text": "(Ruppenhofer et al., 2006)", "ref_id": "BIBREF19" } ], "ref_spans": [], "eq_spans": [], "section": "Relation Indicators Generation", "sec_num": "3.1" }, { "text": "Relation indicators are generated based on the word embeddings and POS tags of lexical units. Formally, given a word in a lexical unit, we find its word embedding w i \u2208 R dw and POS embedding t i \u2208 R dt by looking up the word embedding matrix W wrd \u2208 R dw\u00d7V wrd and POS embedding matrix W pos \u2208 R dt\u00d7V pos respectively, where d w and d t are the dimensions of word and POS embeddings, V wrd is vocabulary size 3 and V pos is total number of POS tags. The corresponding relation indicator is formed by concatenating word embedding and POS embedding,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Relation Indicators Generation", "sec_num": "3.1" }, { "text": "k i = [w i , t i ]", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Relation Indicators Generation", "sec_num": "3.1" }, { "text": ". If a lexical unit contains multiple words (i.e. phrase), the corresponding relation indicator is formed by averaging the embeddings of all words. 
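To make this construction concrete, below is a minimal sketch of how relation indicators could be assembled from lexical units. The embedding tables, dimensions, vocabulary, and the example lexical units are illustrative placeholders rather than the paper's actual resources.

```python
import numpy as np

# Placeholder embedding tables standing in for W_wrd and W_pos
# (in the paper these are shared with the input texts and trainable).
d_w, d_t = 300, 30
vocab = {"graduate": 0, "found": 1, "executive": 2, "director": 3}
pos_tags = {"VERB": 0, "NOUN": 1}
W_wrd = np.random.randn(len(vocab), d_w)
W_pos = np.random.randn(len(pos_tags), d_t)

def relation_indicator(words, tags):
    """k_i = [w_i, t_i]; multi-word lexical units (phrases) are averaged."""
    vecs = [np.concatenate([W_wrd[vocab[w]], W_pos[pos_tags[t]]])
            for w, t in zip(words, tags)]
    return np.mean(vecs, axis=0)

# One indicator per lexical unit (keyword or cue phrase from FrameNet
# plus its Thesaurus.com synonyms).
lexical_units = [(["graduate"], ["VERB"]),
                 (["executive", "director"], ["NOUN", "NOUN"])]
K = np.stack([relation_indicator(w, t) for w, t in lexical_units])
print(K.shape)  # (2, 330): m indicators of dimension d_w + d_t
```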
Eventually, around 3000 relation indicators (including 2000 synonyms) are generated:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Relation Indicators Generation", "sec_num": "3.1" }, { "text": "K = {k 1 , k 2 , ..., k m }.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Relation Indicators Generation", "sec_num": "3.1" }, { "text": "In a typical attention mechanism, a query (q) is compared with the keys (K) in a set of key-value pairs and the corresponding attention weights are calculated. The attention output is weighted sum of values (V ) using the attention weights. In our proposed knowledge-attention encoder, the queries are input texts and the key-value pairs are both relation indicators. The detailed process of knowledge-attention is shown in Figure 1 (left).", "cite_spans": [], "ref_spans": [ { "start": 424, "end": 432, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Knowledge-attention process", "sec_num": "3.2.1" }, { "text": "Formally, given text input x = {x 1 , x 2 , ..., x n }, the input embeddings Q = {q 1 , q 2 , ..., q n } are generated by concatenating each word's word embedding and POS embedding in the same way as relation indicator generation in Section 3.1. The hidden representations H = {h 1 , h 2 , ..., h n } are obtained by attending to the knowledge indicators K, as shown in Equation 1. The final knowledgeattention outputs are obtained by subtracting the hidden representations with the relation indicators mean, as shown in Equation 2.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Knowledge-attention process", "sec_num": "3.2.1" }, { "text": "H = sof tmax( QK T \u221a d k )V (1) knwl(Q, K, V) = H \u2212 K/m (2)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Knowledge-attention process", "sec_num": "3.2.1" }, { "text": "where knwl indicates knowledge-attention process, m is the number of relation indicators, and d k is dimension of key/query vectors which is a scaling factor same as in Vaswani et al. (2017) . The subtraction of relation indicators mean will result in small outputs for irrelevant words. More importantly, the resulted output will be close to the related relation indicators and further apart from other relation indicators in relational semantic space. Therefore, the proposed knowledgeattention mechanism is effective in capturing the linguistic clues of relations represented by relation indicators in the relational semantic space.", "cite_spans": [ { "start": 169, "end": 190, "text": "Vaswani et al. 
(2017)", "ref_id": "BIBREF26" } ], "ref_spans": [], "eq_spans": [], "section": "Knowledge-attention process", "sec_num": "3.2.1" }, { "text": "Inspired by the multi-head attention in Transformer (Vaswani et al., 2017) , we also have multi-head knowledge-attention which first linearly transforms Q, K and V h times, and then perform h knowledge-attentions simultaneously, as shown in Figure 1 (right) .", "cite_spans": [ { "start": 52, "end": 74, "text": "(Vaswani et al., 2017)", "ref_id": "BIBREF26" } ], "ref_spans": [ { "start": 241, "end": 257, "text": "Figure 1 (right)", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Multi-head knowledge-attention", "sec_num": "3.2.2" }, { "text": "Different from the Transformer encoder, we use the same linear transformation for Q and K in each head to keep the correspondence between queries and keys.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Multi-head knowledge-attention", "sec_num": "3.2.2" }, { "text": "head i = knwl(QW Q i , KW Q i , VW V i ) (3)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Multi-head knowledge-attention", "sec_num": "3.2.2" }, { "text": "where", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Multi-head knowledge-attention", "sec_num": "3.2.2" }, { "text": "W Q i , W V i \u2208 R d k \u00d7(d k /h) and i \u2208 [1, 2, ...h]", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Multi-head knowledge-attention", "sec_num": "3.2.2" }, { "text": ". Besides, only one residual connection from input embeddings to outputs of position-wise feed forward networks is used. We also mask the outputs of padding tokens using zero vectors.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Multi-head knowledge-attention", "sec_num": "3.2.2" }, { "text": "The multi-head structure in knowledgeattention allows the model to jointly attend inputs to different relational semantic subspaces with different contributions of relation indicators. This is beneficial in recognizing complex relations where various compositions of relation indicators are needed.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Multi-head knowledge-attention", "sec_num": "3.2.2" }, { "text": "It has been proven that the relative position information of each token with respective to the two target entities is beneficial for relation extraction task (Zeng et al., 2014) . We modify the positionaware attention originally proposed by to incorporate such relative position information and find the importance of each token to the final sentence representation.", "cite_spans": [ { "start": 158, "end": 177, "text": "(Zeng et al., 2014)", "ref_id": "BIBREF34" } ], "ref_spans": [], "eq_spans": [], "section": "Position-aware Attention", "sec_num": "3.3" }, { "text": "Assume the relative position of token x i to target entity isp i . 
We apply a position binning function (Equation 4) to make it easier for the model to distinguish long and short relative distances.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Position-aware Attention", "sec_num": "3.3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "p_i = \\begin{cases} p_i & |p_i| \\le 2 \\\\ \\frac{p_i}{|p_i|} \\left( \\log_2 |p_i| + 1 \\right) & |p_i| > 2 \\end{cases}", "eq_num": "(4)" } ], "section": "Position-aware Attention", "sec_num": "3.3" }, { "text": "After getting the relative positions p_i^s and p_i^o to the two entities of interest (subject and object respectively), we map them to position embeddings based on a shared position embedding matrix W_p. The two embeddings are concatenated to form the final position embedding for token x_i: p_i = [p_i^s, p_i^o]. Position-aware attention is then performed on the outputs of knowledge-attention O \u2208 R^{n \u00d7 d_k}, taking the corresponding relative position embeddings P \u2208 R^{n \u00d7 d_p} into consideration:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Position-aware Attention", "sec_num": "3.3" }, { "text": "f = O^T softmax(tanh(O W_o + P W_p) c) (5) where W_o \u2208 R^{d_k \u00d7 d_a}, W_p \u2208 R^{d_p \u00d7 d_a}, d_a is the attention dimension, and c \u2208 R^{d_a} is a context vector learned by the neural network.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Position-aware Attention", "sec_num": "3.3" },
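A small sketch of the position binning of Equation 4 and the position-aware pooling of Equation 5. The integer flooring in the binning, the dimensions, the entity positions, and all randomly initialized parameters are assumptions made for illustration.

```python
import numpy as np

def bin_position(p):
    """Eq. 4: keep short distances exact, compress long ones logarithmically
    (floored here so that bins stay integers -- an assumption)."""
    return p if abs(p) <= 2 else int(np.sign(p) * (np.floor(np.log2(abs(p))) + 1))

n, d_k, d_p, d_a = 10, 330, 60, 200
subj_idx, obj_idx = 2, 7                       # hypothetical entity positions
O = np.random.randn(n, d_k)                    # knowledge-attention outputs

# Shared position embedding table, indexed by binned relative position.
max_bin = 10
table = np.random.randn(2 * max_bin + 1, d_p // 2)
embed = lambda p: table[int(np.clip(p, -max_bin, max_bin)) + max_bin]
P = np.stack([np.concatenate([embed(bin_position(i - subj_idx)),
                              embed(bin_position(i - obj_idx))])
              for i in range(n)])              # n x d_p

def position_aware_attention(O, P, W_o, W_p, c):
    """Eq. 5: f = O^T softmax(tanh(O W_o + P W_p) c)."""
    scores = np.tanh(O @ W_o + P @ W_p) @ c
    a = np.exp(scores - scores.max()); a /= a.sum()   # softmax over tokens
    return O.T @ a                             # d_k-dimensional summary f

f = position_aware_attention(O, P, np.random.randn(d_k, d_a),
                             np.random.randn(d_p, d_a), np.random.randn(d_a))
print(f.shape)                                 # (330,)
```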
{ "text": "Self-attention", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Integrate Knowledge-attention with", "sec_num": "4" }, { "text": "The self-attention encoder proposed by Vaswani et al. (2017) learns internal semantic features by modeling pair-wise interactions within the texts themselves, which is effective in capturing long-distance dependencies. Our proposed knowledge-attention encoder has the complementary strength of capturing the linguistic clues of relations precisely based on external knowledge. Therefore, it is beneficial to integrate the two models to maximize the utilization of both external knowledge and training data. In this section, we propose three integration approaches as shown in Figure 2 , and each approach has its own advantages.", "cite_spans": [ { "start": 39, "end": 60, "text": "Vaswani et al. (2017)", "ref_id": "BIBREF26" } ], "ref_spans": [ { "start": 571, "end": 579, "text": "Figure 2", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Integrate Knowledge-attention with", "sec_num": "4" }, { "text": "In this approach, self-attention and knowledge-attention are treated as two separate channels that model the sentence from different perspectives. After applying position-aware attention, two feature vectors f_1 and f_2 are obtained from self-attention and knowledge-attention respectively. We apply another attention mechanism, called multi-channel attention, to integrate the feature vectors.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Multi-channel Attention", "sec_num": "4.1" }, { "text": "In multi-channel attention, the feature vectors are first fed into a fully connected neural network to get their hidden representations h_i. Then attention weights are calculated using a learnable context vector c, which reflects the importance of each feature vector to the final relation classification. Finally, the feature vectors are integrated based on the attention weights, as shown in Equation 6.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Multi-channel Attention", "sec_num": "4.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "r = \\sum_i softmax(h_i^T c) h_i", "eq_num": "(6)" } ], "section": "Multi-channel Attention", "sec_num": "4.1" }, { "text": "After obtaining the integrated feature vector r, we pass it to a softmax classifier to determine the relation class. The model is trained using stochastic gradient descent with momentum and learning rate decay to minimize the cross-entropy loss. The main advantage of this approach is flexibility. Since the two channels process information independently, the input components need not be the same. Besides, we can add more features from other sources (e.g. subject and object categories) to multi-channel attention to make the final decision based on all the information sources.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Multi-channel Attention", "sec_num": "4.1" }, { "text": "Similar to multi-channel attention, softmax interpolation also uses two independent channels for self-attention and knowledge-attention. Instead of integrating the feature vectors, we make two independent predictions using two softmax classifiers based on the feature vectors from the two channels. The loss function is defined as the total cross-entropy loss of the two classifiers. The final prediction is obtained using an interpolation function of the two softmax distributions:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Softmax Interpolation", "sec_num": "4.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "p = \u03b2 \u2022 p_1 + (1 \u2212 \u03b2) \u2022 p_2", "eq_num": "(7)" } ], "section": "Softmax Interpolation", "sec_num": "4.2" }, { "text": "where p_1 and p_2 are the softmax distributions obtained from self-attention and knowledge-attention respectively, and \u03b2 is the priority weight assigned to self-attention.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Softmax Interpolation", "sec_num": "4.2" }, { "text": "Since knowledge-attention focuses on capturing the keywords and cue phrases of relations, its precision will be higher than that of self-attention while its recall is lower. The proposed softmax interpolation approach is able to take advantage of both attention mechanisms and to balance precision and recall by adjusting the priority weight \u03b2.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Softmax Interpolation", "sec_num": "4.2" }, { "text": "Since knowledge-attention and self-attention share similar structures, it is also possible to integrate them into a single channel. 
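Before detailing that single-channel model, here is a toy sketch of the two two-channel strategies just described (Equations 6 and 7) and of the summed single-channel head defined in Equation 8 of Section 4.3 below. Feature dimensions, the hidden layer, the value of beta, and all random matrices are illustrative assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

# --- Multi-channel attention (Eq. 6) ----------------------------------------
def multi_channel_attention(features, W, b, c):
    """Hidden representation per channel, attention weights from a learned
    context vector c, then the weighted sum r = sum_i softmax(h_i^T c) h_i."""
    H = np.tanh(features @ W + b)          # h_i, one row per channel
    alpha = softmax(H @ c, axis=0)         # channel weights
    return alpha @ H

d_f, d_h = 330, 200
f_self, f_knwl = np.random.randn(d_f), np.random.randn(d_f)
r = multi_channel_attention(np.stack([f_self, f_knwl]),
                            np.random.randn(d_f, d_h), np.zeros(d_h),
                            np.random.randn(d_h))
print(r.shape)                             # (200,)

# --- Softmax interpolation (Eq. 7) ------------------------------------------
def interpolate(p_self, p_knwl, beta=0.8):
    """Final class distribution: beta * self-attention + (1-beta) * knowledge."""
    return beta * p_self + (1.0 - beta) * p_knwl

# --- Knowledge-informed self-attention head (Eq. 8, Section 4.3) ------------
def attend(Q, K, V):
    return softmax(Q @ K.T / np.sqrt(Q.shape[-1])) @ V

def kisa_head(Q, K, W_Q, W_V, W_Qs, W_Ks, W_Vs):
    """One head = knowledge-attention over indicators K (shared W_Q for
    queries/keys, minus the indicator mean) + ordinary self-attention."""
    knowledge = attend(Q @ W_Q, K @ W_Q, K @ W_V) - (K @ W_V).mean(axis=0)
    return knowledge + attend(Q @ W_Qs, Q @ W_Ks, Q @ W_Vs)
```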
We propose knowledge-informed self-attention encoder which incorporates knowledge-attention into every self-attention head to jointly model the semantic relations based on both knowledge and data.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Knowledge-informed Self-attention", "sec_num": "4.3" }, { "text": "The structure of knowledge-informed selfattention is shown in Figure 3 . Formally, given texts input matrix Q \u2208 R n\u00d7d k and knowledge indicators K \u2208 R m\u00d7d k . The output of each attention head is calculated as follows: where knwl and self indicate knowledgeattention and self-attention respectively, and all the linear transformation weight matrices have the dimensionality of", "cite_spans": [], "ref_spans": [ { "start": 62, "end": 70, "text": "Figure 3", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "Knowledge-informed Self-attention", "sec_num": "4.3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "head i = knwl(QW Q i , KW Q i , KW V i )+ self (QW Qs i , QW Ks i , QW Vs i )", "eq_num": "(8)" } ], "section": "Knowledge-informed Self-attention", "sec_num": "4.3" }, { "text": "W \u2208 R d k \u00d7(d k /h) .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Knowledge-informed Self-attention", "sec_num": "4.3" }, { "text": "Since each self-attention head is aided with prior knowledge in knowledge-attention, the knowledge-informed self-attention encoder is able to capture more lexical and semantic information than single attention encoder.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Knowledge-informed Self-attention", "sec_num": "4.3" }, { "text": "To study the performance of our proposed models, the following baseline models are used for comparison: CNN-based models including: (1) CNN: the classical convolutional neural network for sentence classification (Kim, 2014) . (2) CNN-PE: CNN with position embeddings dedicated for relation classification (Nguyen and Grishman, 2015). (3) GCN: a graph convolutional network over the pruned dependency trees of the sentence (Zhang et al., 2018) . RNN-based models including: (1) LSTM: long short-term memory network to sequentially model the texts. Classification is based on the last hidden output. (2) PA-LSTM: Similar position-aware attention mechanism as our work is used to summarize the LSTM outputs . CNN-RNN hybrid model including contextualized GCN (C-GCN) where the input vectors are obtained using bi-directional LSTM network (Zhang et al., 2018) . Self-attention-based model (Self-attn) which uses self-attention encoder to model the input sentence. 
Our implementation is based on Bilan and Roth (2018) where several modifications are made on the original Transformer encoder, including the use of relative positional encodings instead of absolute sinusoidal encodings, as well as other configurations such as residual connection, activation function and normalization.", "cite_spans": [ { "start": 212, "end": 223, "text": "(Kim, 2014)", "ref_id": "BIBREF10" }, { "start": 422, "end": 442, "text": "(Zhang et al., 2018)", "ref_id": "BIBREF36" }, { "start": 835, "end": 855, "text": "(Zhang et al., 2018)", "ref_id": "BIBREF36" }, { "start": 991, "end": 1012, "text": "Bilan and Roth (2018)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Baseline Models", "sec_num": "5.1" }, { "text": "For our model, we evaluate both the proposed knowledge-attention encoder (Knwl-attn) as well as the integrated models with self-attention including multi-channel attention (MCA), softmax interpolation (SI) and knowledge-informed selfattention (KISA).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Baseline Models", "sec_num": "5.1" }, { "text": "We conduct our main experiments on TACRED, a large-scale relation extraction dataset introduced by . TACRED contains over 106k sentences with hand-annotated subject and object entities as well as the relations between them. It is a very complex relation extraction dataset with 41 relation types and a no relation class when no relation is hold between entities. The dataset is suited for real-word relation extraction since it is unbalanced with 79.5% no relation samples, and multiple relations between different entity pairs can be exist in one sentence. Besides, the samples are normally long sentences with an average of 36.2 words.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiment Settings", "sec_num": "5.2" }, { "text": "Since the dataset is already partitioned into train (68124 samples), dev (22631 samples) and test (15509 samples) sets, we tune model hyperparameters using dev set and evaluate model using test set. The evaluation metrics are microaveraged precision, recall and F 1 score. For fair comparison, we select the model with median F 1 score on dev set from 5 independent runs, same as . The same \"entity mask\" strategy is used which replaces subject (or object) entity with special NER -SUBJ (or NER -OBJ) tokens to avoid overfittting on specific entities and provide entity type information.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiment Settings", "sec_num": "5.2" }, { "text": "Besides TACRED, another dataset called SemEval2010-Task8 (Hendrickx et al., 2009) is used to evaluate the generalization ability of our proposed model. The dataset is significantly smaller and simpler than TACRED, which has 8000 training samples and 2717 testing samples. It contains 9 directed relations and 1 other relation (19 relation classes in total). We use the official macro-averaged F 1 score as evaluation metric.", "cite_spans": [ { "start": 57, "end": 81, "text": "(Hendrickx et al., 2009)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Experiment Settings", "sec_num": "5.2" }, { "text": "We use one layer encoder with 6 attention heads for both knowledge-attention and self-attention since further increasing the number of layers and attention heads will degrade the performance. For softmax interpolation, we choose \u03b2 = 0.8 to balance precision and recall. 
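As a side illustration of the "entity mask" strategy described earlier in this section, a small sketch follows. The token spans and NER types are invented for the example, and whether each entity token is replaced individually or the span is collapsed to one token is an assumption here.

```python
def mask_entities(tokens, subj_span, subj_type, obj_span, obj_type):
    """Replace every token inside the subject/object span with a special
    <NER>-SUBJ / <NER>-OBJ token: hides the surface form, keeps the type."""
    out = list(tokens)
    for (start, end), tag in ((subj_span, subj_type + "-SUBJ"),
                              (obj_span, obj_type + "-OBJ")):
        for i in range(start, end):
            out[i] = tag
    return out

tokens = "James Dobson has resigned as chairman of Focus On The Family".split()
print(mask_entities(tokens, (0, 2), "PERSON", (7, 11), "ORGANIZATION"))
# ['PERSON-SUBJ', 'PERSON-SUBJ', 'has', 'resigned', 'as', 'chairman', 'of',
#  'ORGANIZATION-OBJ', 'ORGANIZATION-OBJ', 'ORGANIZATION-OBJ', 'ORGANIZATION-OBJ']
```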
Word embeddings are fine-tuned based on pre-trained GloVe (Pennington et al., 2014) with dimensionality of 300. Dropout (Srivastava et al., 2014) is used during trianing to alleviate overfitting. Other model hyperparameters and training details are described in Appendix due to space limitations. Table 1 shows the results of baseline as well as our proposed models on TACRED dataset. It is observed that our proposed knowledge-attention encoder outperforms all CNN-based and RNNbased models by at least 1.3 F 1 . Meanwhile, it achieves comparable results with C-GCN and selfattention encoder, which are the current start-ofthe-art single-model systems.", "cite_spans": [ { "start": 328, "end": 353, "text": "(Pennington et al., 2014)", "ref_id": "BIBREF16" }, { "start": 390, "end": 415, "text": "(Srivastava et al., 2014)", "ref_id": "BIBREF23" } ], "ref_spans": [ { "start": 567, "end": 574, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Experiment Settings", "sec_num": "5.2" }, { "text": "Comparing with self-attention encoder, it is observed that knowledge-attention encoder results in higher precision but lower recall. This is reasonable since knowledge-attention encoder focuses on capturing the significant linguistic clues of relations based on external knowledge, it will result in high precision for the predicted relations similar to rule-based systems. Self-attention encoder is able to capture more long-distance dependency features by learning from data, resulting in better recall. By integrating self-attention and knowledge-attention using the proposed approaches, a more balanced precision and recall can be obtained, suggesting the complementary effects of self-attention and knowledge-attention mechanisms. The integrated models improve performance by at least 0.9 F 1 score and achieve new state-of-the-art results among all the single endto-end models.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results on TACRED dataset", "sec_num": "5.3.1" }, { "text": "Comparing the three integrated models, softmax interpolation (SI) achieves the best performance. More interestingly, we found that the precision and recall can be controlled by adjusting the priority weight \u03b2. Figure 4 shows impact of \u03b2 on precision, recall and F 1 score. As \u03b2 increases, precision decreases and recall increases. Therefore, we can choose a small \u03b2 for relation extraction system which requires high precision, and a large \u03b2 for the system requiring better recall. F 1 score reaches the highest value when precision and re- Table 1 : Micro-averaged precision (P), recall (R) and F 1 score on TACRED dataset. \u2020, \u2021 and \u2020 \u2020 mark the results reported in , (Zhang et al., 2018) and (Bilan and Roth, 2018) respectively. * marks statistically significant improvements over Selfattn with p < 0.01 under one-tailed t-test. 
call are balanced (\u03b2 = 0.8).", "cite_spans": [ { "start": 669, "end": 689, "text": "(Zhang et al., 2018)", "ref_id": "BIBREF36" }, { "start": 694, "end": 716, "text": "(Bilan and Roth, 2018)", "ref_id": "BIBREF1" } ], "ref_spans": [ { "start": 210, "end": 218, "text": "Figure 4", "ref_id": "FIGREF3" }, { "start": 541, "end": 548, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Results on TACRED dataset", "sec_num": "5.3.1" }, { "text": "Knowledge-informed self-attention (KISA) has comparable performance with softmax interpolation, and without the need of hyper-parameter tuning since knowledge-attention and self-attention are integrated into a single channel. The performance gain over self-attention encoder is 1.2 F 1 with much improved precision, demonstrating the effectiveness of incorporating knowledgeattention into self-attention to jointly model the sentence based on both knowledge and data.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results on TACRED dataset", "sec_num": "5.3.1" }, { "text": "Performance gain is the lowest for multichannel attention (MCA) . However, the model is more flexible in the way that features from other information sources can be easily added to the model to further improve its performance. Table 2 shows the results of adding NER embeddings of each token to self-attention channel, and entity (subject and object) categorical embeddings to multi-channel attention as additional feature vectors. We use dimensionality of 30 and 60 for NER and entity categorical embeddings respectively, and the two embedding matrixes are learned by the neural network. Results show that adding NER and entity categorical information to MCA integrated model improves F 1 score by 0.2 and 0.5 respectively, and adding both improves precision significantly, resulting a new best F 1 score.", "cite_spans": [ { "start": 58, "end": 63, "text": "(MCA)", "ref_id": null } ], "ref_spans": [ { "start": 227, "end": 235, "text": "Table 2", "ref_id": "TABREF5" } ], "eq_spans": [], "section": "Results on TACRED dataset", "sec_num": "5.3.1" }, { "text": "We use SemEval2010-Task8 dataset to evaluate the generalization ability of our proposed model. Experiments are conducted in two manners: mask or keep the entities of interest. Results in Table 3 show that the \"entity mask\" strategy degrades the performance, indicating that there exist strong correlations between entities of interest and relation classes in SemEval2010-Task8 dataset. Although the results of keeping the entities are better, the model tends to remember these entities instead of focusing on learning the linguistic clues of relations. This will result in bad generalization for sentences with unseen entities. Regardless of whether the entity mask is used, by incorporating knowledge-attention mechanism, our model improves the performance of selfattention by a statistically significant margin, especially the softmax interpolation integrated model. The results on SemEval2010-Task8 are consistent with that of TACRED, demonstrating the effectiveness and robustness of our proposed method.", "cite_spans": [], "ref_spans": [ { "start": 187, "end": 195, "text": "Table 3", "ref_id": "TABREF7" } ], "eq_spans": [], "section": "Results on SemEval2010-Task8 dataset", "sec_num": "5.3.2" }, { "text": "To study the contributions of specific components of knowledge-attention encoder, we perform ablation experiments on the dev set of TACRED. without certain components are shown in Table 4 . 
It is observed that: (1) The proposed multihead knowledge-attention structure outperforms single-head significantly. This demonstrates the effectiveness of jointly attending texts to different relational semantic subspaces in the multi-head structure. (2) The synonyms improve the performance of knowledge-attention since they are able to broaden the coverage of relation indicators and form a robust relational semantic space. (3) The subtraction of relation indicators mean vector from attention hidden representations helps to suppress the activation of irrelevant words and results in a better representation for each word to capture the linguistic clues of relations. (4-5) The two masking strategies are helpful for our model: the output masking eliminates the effects of the padding tokens and the entity masking avoids entity overfitting while providing entity type information. (6) The relative position embedding term in position-aware attention contributes a significant amount of F 1 score. This shows that positional information is particularly important for relation extraction task.", "cite_spans": [], "ref_spans": [ { "start": 180, "end": 187, "text": "Table 4", "ref_id": "TABREF8" } ], "eq_spans": [], "section": "Ablation study", "sec_num": "5.3.3" }, { "text": "To verify the complementary effects of knowledge-attention encoder and self-attention encoder, we compare the attention weights assigned to words from the two encoders. Table 5 presents the attention visualization results on sample sentences. For each sample sentence, attention weights from knowledge-attention encoder are visualized first, followed by self-attention encoder. It is observed that knowledge-attention encoder focuses more on the specific keywords or cue phrases of certain relations, such as \"graduated\", \"executive director\" and \"founded\"; while self-attention encoder attends to a wide range of words in the sentence and pays more attention to the surrounding words of target entities especially the words indicating the syntactic structure, such as \"is\", \"in\" and \"of\". Therefore, knowledgeattention encoder and self-attention encoder have complementary strengths that focus on different perspectives for relation extraction.", "cite_spans": [], "ref_spans": [ { "start": 169, "end": 176, "text": "Table 5", "ref_id": "TABREF10" } ], "eq_spans": [], "section": "Attention visualization", "sec_num": "5.3.4" }, { "text": "To investigate the limitations of our proposed model and provide insights for future research, we analyze the errors produced by the system on the test set of TACRED. For knowledge-attention encoder, 58% errors are false negative (FN) due to the limited ability in capturing long-distance dependencies and some unseen linguistic clues during training. For our integrated model 4 that takes the benefits of both self-attention and knowledgeattention, FN is reduced by 10%. However, false positive (FP) is not improved due to overfitting that leads to wrong predictions. Many errors are caused by multiple entities with different relations co-occurred in one sentence. Our model may mistake irrelevant entities as a relation pair. We also observed that many FP errors are due to the confusions between related relations such as \"city of death\"and \"city of residence\". More data or knowledge is needed to distinguish \"death\" and \"residence\". 
Besides, some errors are caused by imperfect annotations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Error analysis", "sec_num": "5.3.5" }, { "text": "We introduce knowledge-attention encoder which effectively incorporates prior knowledge from external lexical resources for relation extraction. The proposed knowledge-attention mechanism transforms texts from word space into relational semantic space and captures the informative linguistic clues of relations effectively. Furthermore, we show the complementary strengths of knowledgeattention and self-attention, and propose three different ways of integrating them to maximize the utilization of both knowledge and data. The proposed models are fully attention-based end-toend systems and achieve state-of-the-art results on TACRED dataset, outperforming existing CNN, RNN, and self-attention based models. In future work, besides lexical knowledge, we will incorporate conceptual knowledge from encyclopedic knowledge bases into knowledgeattention encoder to capture the high-level semantics of texts. We will also apply knowledgeattention in other tasks such as text classification, sentiment analysis and question answering.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion and Future Work", "sec_num": "6" }, { "text": "https://framenet.icsi.berkeley.edu/ fndrupal 2 https://www.thesaurus.com", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Same word embedding matrix is used for relation indicators and input texts, hence the vocabulary also includes all the words in the training corpus.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "We observed similar error behaviors of the three proposed integrated models.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "A neural knowledge language model", "authors": [ { "first": "Heeyoul", "middle": [], "last": "Sungjin Ahn", "suffix": "" }, { "first": "Tanel", "middle": [], "last": "Choi", "suffix": "" }, { "first": "Yoshua", "middle": [], "last": "P\u00e4rnamaa", "suffix": "" }, { "first": "", "middle": [], "last": "Bengio", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1608.00318" ] }, "num": null, "urls": [], "raw_text": "Sungjin Ahn, Heeyoul Choi, Tanel P\u00e4rnamaa, and Yoshua Bengio. 2016. A neural knowledge lan- guage model. arXiv preprint arXiv:1608.00318.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Position-aware self-attention with relative positional encodings for slot filling", "authors": [ { "first": "Ivan", "middle": [], "last": "Bilan", "suffix": "" }, { "first": "Benjamin", "middle": [], "last": "Roth", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1807.03052" ] }, "num": null, "urls": [], "raw_text": "Ivan Bilan and Benjamin Roth. 2018. Position-aware self-attention with relative positional encodings for slot filling. 
arXiv preprint arXiv:1807.03052.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "On2vec: Embeddingbased relation prediction for ontology population", "authors": [ { "first": "Muhao", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Yingtao", "middle": [], "last": "Tian", "suffix": "" }, { "first": "Xuelu", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Zijun", "middle": [], "last": "Xue", "suffix": "" }, { "first": "Carlo", "middle": [], "last": "Zaniolo", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 SIAM International Conference on Data Mining", "volume": "", "issue": "", "pages": "315--323", "other_ids": {}, "num": null, "urls": [], "raw_text": "Muhao Chen, Yingtao Tian, Xuelu Chen, Zijun Xue, and Carlo Zaniolo. 2018a. On2vec: Embedding- based relation prediction for ontology population. In Proceedings of the 2018 SIAM International Confer- ence on Data Mining, pages 315-323. SIAM.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Neural natural language inference models enhanced with external knowledge", "authors": [ { "first": "Qian", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Xiaodan", "middle": [], "last": "Zhu", "suffix": "" }, { "first": "Zhen-Hua", "middle": [], "last": "Ling", "suffix": "" }, { "first": "Diana", "middle": [], "last": "Inkpen", "suffix": "" }, { "first": "Si", "middle": [], "last": "Wei", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "2406--2417", "other_ids": {}, "num": null, "urls": [], "raw_text": "Qian Chen, Xiaodan Zhu, Zhen-Hua Ling, Diana Inkpen, and Si Wei. 2018b. Neural natural language inference models enhanced with external knowl- edge. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Vol- ume 1: Long Papers), pages 2406-2417.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Knowledge as a teacher: Knowledgeguided structural attention networks", "authors": [ { "first": "Yun-Nung", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Dilek", "middle": [], "last": "Hakkani-Tur", "suffix": "" }, { "first": "Gokhan", "middle": [], "last": "Tur", "suffix": "" }, { "first": "Asli", "middle": [], "last": "Celikyilmaz", "suffix": "" }, { "first": "Jianfeng", "middle": [], "last": "Gao", "suffix": "" }, { "first": "Li", "middle": [], "last": "Deng", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1609.03286" ] }, "num": null, "urls": [], "raw_text": "Yun-Nung Chen, Dilek Hakkani-Tur, Gokhan Tur, Asli Celikyilmaz, Jianfeng Gao, and Li Deng. 2016. Knowledge as a teacher: Knowledge- guided structural attention networks. 
arXiv preprint arXiv:1609.03286.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Revisiting word embedding for contrasting meaning", "authors": [ { "first": "Zhigang", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Wei", "middle": [], "last": "Lin", "suffix": "" }, { "first": "Qian", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Xiaoping", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Si", "middle": [], "last": "Wei", "suffix": "" }, { "first": "Hui", "middle": [], "last": "Jiang", "suffix": "" }, { "first": "Xiaodan", "middle": [], "last": "Zhu", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing", "volume": "1", "issue": "", "pages": "106--115", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zhigang Chen, Wei Lin, Qian Chen, Xiaoping Chen, Si Wei, Hui Jiang, and Xiaodan Zhu. 2015. Re- visiting word embedding for contrasting meaning. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Lan- guage Processing (Volume 1: Long Papers), pages 106-115.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Multi-level structured self-attentions for distantly supervised relation extraction", "authors": [ { "first": "Jinhua", "middle": [], "last": "Du", "suffix": "" }, { "first": "Jingguang", "middle": [], "last": "Han", "suffix": "" }, { "first": "Andy", "middle": [], "last": "Way", "suffix": "" }, { "first": "Dadong", "middle": [], "last": "Wan", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "2216--2225", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jinhua Du, Jingguang Han, Andy Way, and Dadong Wan. 2018. Multi-level structured self-attentions for distantly supervised relation extraction. In Proceed- ings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2216-2225.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "N-ary relation extraction for simultaneous tbox and a-box knowledge base augmentation", "authors": [ { "first": "Marco", "middle": [], "last": "Fossati", "suffix": "" }, { "first": "Emilio", "middle": [], "last": "Dorigatti", "suffix": "" }, { "first": "Claudio", "middle": [], "last": "Giuliano", "suffix": "" } ], "year": 2018, "venue": "Semantic Web", "volume": "9", "issue": "4", "pages": "413--439", "other_ids": {}, "num": null, "urls": [], "raw_text": "Marco Fossati, Emilio Dorigatti, and Claudio Giuliano. 2018. N-ary relation extraction for simultaneous t- box and a-box knowledge base augmentation. 
Se- mantic Web, 9(4):413-439.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Hierarchical relation extraction with coarse-to-fine grained attention", "authors": [ { "first": "Xu", "middle": [], "last": "Han", "suffix": "" }, { "first": "Pengfei", "middle": [], "last": "Yu", "suffix": "" }, { "first": "Zhiyuan", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Maosong", "middle": [], "last": "Sun", "suffix": "" }, { "first": "Peng", "middle": [], "last": "Li", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "2236--2245", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xu Han, Pengfei Yu, Zhiyuan Liu, Maosong Sun, and Peng Li. 2018. Hierarchical relation extraction with coarse-to-fine grained attention. In Proceedings of the 2018 Conference on Empirical Methods in Nat- ural Language Processing, pages 2236-2245.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Semeval-2010 task 8: Multi-way classification of semantic relations between pairs of nominals", "authors": [ { "first": "Iris", "middle": [], "last": "Hendrickx", "suffix": "" }, { "first": "Su", "middle": [ "Nam" ], "last": "Kim", "suffix": "" }, { "first": "Zornitsa", "middle": [], "last": "Kozareva", "suffix": "" }, { "first": "Preslav", "middle": [], "last": "Nakov", "suffix": "" }, { "first": "Diarmuid\u00f3", "middle": [], "last": "S\u00e9aghdha", "suffix": "" }, { "first": "Sebastian", "middle": [], "last": "Pad\u00f3", "suffix": "" }, { "first": "Marco", "middle": [], "last": "Pennacchiotti", "suffix": "" }, { "first": "Lorenza", "middle": [], "last": "Romano", "suffix": "" }, { "first": "Stan", "middle": [], "last": "Szpakowicz", "suffix": "" } ], "year": 2009, "venue": "Proceedings of the Workshop on Semantic Evaluations: Recent Achievements and Future Directions", "volume": "", "issue": "", "pages": "94--99", "other_ids": {}, "num": null, "urls": [], "raw_text": "Iris Hendrickx, Su Nam Kim, Zornitsa Kozareva, Preslav Nakov, Diarmuid\u00d3 S\u00e9aghdha, Sebastian Pad\u00f3, Marco Pennacchiotti, Lorenza Romano, and Stan Szpakowicz. 2009. Semeval-2010 task 8: Multi-way classification of semantic relations be- tween pairs of nominals. In Proceedings of the Workshop on Semantic Evaluations: Recent Achievements and Future Directions, pages 94-99. Association for Computational Linguistics.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Convolutional neural networks for sentence classification", "authors": [ { "first": "Yoon", "middle": [], "last": "Kim", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)", "volume": "", "issue": "", "pages": "1746--1751", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yoon Kim. 2014. Convolutional neural networks for sentence classification. 
In Proceedings of the 2014 Conference on Empirical Methods in Natural Lan- guage Processing (EMNLP), pages 1746-1751.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Mit at semeval-2017 task", "authors": [ { "first": "Ji", "middle": [ "Young" ], "last": "Lee", "suffix": "" }, { "first": "Franck", "middle": [], "last": "Dernoncourt", "suffix": "" }, { "first": "Peter", "middle": [], "last": "Szolovits", "suffix": "" } ], "year": 2017, "venue": "Relation extraction with convolutional neural networks", "volume": "10", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1704.01523" ] }, "num": null, "urls": [], "raw_text": "Ji Young Lee, Franck Dernoncourt, and Peter Szolovits. 2017. Mit at semeval-2017 task 10: Rela- tion extraction with convolutional neural networks. arXiv preprint arXiv:1704.01523.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Knowledge-oriented convolutional neural network for causal relation extraction from natural language texts", "authors": [ { "first": "Pengfei", "middle": [], "last": "Li", "suffix": "" }, { "first": "Kezhi", "middle": [], "last": "Mao", "suffix": "" } ], "year": 2019, "venue": "Expert Systems with Applications", "volume": "115", "issue": "", "pages": "512--523", "other_ids": {}, "num": null, "urls": [], "raw_text": "Pengfei Li and Kezhi Mao. 2019. Knowledge-oriented convolutional neural network for causal relation ex- traction from natural language texts. Expert Systems with Applications, 115:512-523.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Learning semantic word embeddings based on ordinal knowledge constraints", "authors": [ { "first": "Quan", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Hui", "middle": [], "last": "Jiang", "suffix": "" }, { "first": "Si", "middle": [], "last": "Wei", "suffix": "" }, { "first": "Zhen-Hua", "middle": [], "last": "Ling", "suffix": "" }, { "first": "Yu", "middle": [], "last": "Hu", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing", "volume": "1", "issue": "", "pages": "1501--1511", "other_ids": {}, "num": null, "urls": [], "raw_text": "Quan Liu, Hui Jiang, Si Wei, Zhen-Hua Ling, and Yu Hu. 2015. Learning semantic word embeddings based on ordinal knowledge constraints. In Pro- ceedings of the 53rd Annual Meeting of the Associ- ation for Computational Linguistics and the 7th In- ternational Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1501- 1511.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "End-to-end relation extraction using lstms on sequences and tree structures", "authors": [ { "first": "Makoto", "middle": [], "last": "Miwa", "suffix": "" }, { "first": "Mohit", "middle": [], "last": "Bansal", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "1105--1116", "other_ids": {}, "num": null, "urls": [], "raw_text": "Makoto Miwa and Mohit Bansal. 2016. End-to-end re- lation extraction using lstms on sequences and tree structures. 
In Proceedings of the 54th Annual Meet- ing of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 1105- 1116.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Relation extraction: Perspective from convolutional neural networks", "authors": [ { "first": "Huu", "middle": [], "last": "Thien", "suffix": "" }, { "first": "Ralph", "middle": [], "last": "Nguyen", "suffix": "" }, { "first": "", "middle": [], "last": "Grishman", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 1st Workshop on Vector Space Modeling for Natural Language Processing", "volume": "", "issue": "", "pages": "39--48", "other_ids": {}, "num": null, "urls": [], "raw_text": "Thien Huu Nguyen and Ralph Grishman. 2015. Rela- tion extraction: Perspective from convolutional neu- ral networks. In Proceedings of the 1st Workshop on Vector Space Modeling for Natural Language Pro- cessing, pages 39-48.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Glove: Global vectors for word representation", "authors": [ { "first": "Jeffrey", "middle": [], "last": "Pennington", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Socher", "suffix": "" }, { "first": "Christopher", "middle": [], "last": "Manning", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP)", "volume": "", "issue": "", "pages": "1532--1543", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation. In Proceedings of the 2014 confer- ence on empirical methods in natural language pro- cessing (EMNLP), pages 1532-1543.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Exploiting constituent dependencies for tree kernel-based semantic relation extraction", "authors": [ { "first": "Longhua", "middle": [], "last": "Qian", "suffix": "" }, { "first": "Guodong", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "Fang", "middle": [], "last": "Kong", "suffix": "" }, { "first": "Qiaoming", "middle": [], "last": "Zhu", "suffix": "" }, { "first": "Peide", "middle": [], "last": "Qian", "suffix": "" } ], "year": 2008, "venue": "Proceedings of the 22nd International Conference on Computational Linguistics", "volume": "1", "issue": "", "pages": "697--704", "other_ids": {}, "num": null, "urls": [], "raw_text": "Longhua Qian, Guodong Zhou, Fang Kong, Qiaoming Zhu, and Peide Qian. 2008. Exploiting constituent dependencies for tree kernel-based semantic relation extraction. In Proceedings of the 22nd International Conference on Computational Linguistics-Volume 1, pages 697-704. Association for Computational Lin- guistics.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Utd: Classifying semantic relations by combining lexical and semantic resources", "authors": [ { "first": "Bryan", "middle": [], "last": "Rink", "suffix": "" }, { "first": "Sanda", "middle": [], "last": "Harabagiu", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the 5th International Workshop on Semantic Evaluation", "volume": "", "issue": "", "pages": "256--259", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bryan Rink and Sanda Harabagiu. 2010. Utd: Clas- sifying semantic relations by combining lexical and semantic resources. In Proceedings of the 5th Inter- national Workshop on Semantic Evaluation, pages 256-259. 
Association for Computational Linguis- tics.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Framenet ii: Extended theory and practice", "authors": [ { "first": "Josef", "middle": [], "last": "Ruppenhofer", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Ellsworth", "suffix": "" }, { "first": "Myriam", "middle": [], "last": "Schwarzer-Petruck", "suffix": "" }, { "first": "R", "middle": [], "last": "Christopher", "suffix": "" }, { "first": "", "middle": [], "last": "Johnson", "suffix": "" } ], "year": 2006, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Josef Ruppenhofer, Michael Ellsworth, Myriam Schwarzer-Petruck, Christopher R Johnson, and Jan Scheffczyk. 2006. Framenet ii: Extended theory and practice.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Classifying relations by ranking with convolutional neural networks", "authors": [ { "first": "Santos", "middle": [], "last": "Cicero Dos", "suffix": "" }, { "first": "Bing", "middle": [], "last": "Xiang", "suffix": "" }, { "first": "Bowen", "middle": [], "last": "Zhou", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing", "volume": "", "issue": "", "pages": "626--634", "other_ids": {}, "num": null, "urls": [], "raw_text": "Cicero dos Santos, Bing Xiang, and Bowen Zhou. 2015. Classifying relations by ranking with con- volutional neural networks. In Proceedings of the 53rd Annual Meeting of the Association for Compu- tational Linguistics and the 7th International Joint Conference on Natural Language Processing (Vol- ume 1: Long Papers), pages 626-634.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Knowledge-aware attentive neural network for ranking question answer pairs", "authors": [ { "first": "Ying", "middle": [], "last": "Shen", "suffix": "" }, { "first": "Yang", "middle": [], "last": "Deng", "suffix": "" }, { "first": "Min", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Yaliang", "middle": [], "last": "Li", "suffix": "" }, { "first": "Nan", "middle": [], "last": "Du", "suffix": "" }, { "first": "Wei", "middle": [], "last": "Fan", "suffix": "" }, { "first": "Kai", "middle": [], "last": "Lei", "suffix": "" } ], "year": 2018, "venue": "The 41st International ACM SIGIR Conference on Research & Development in Information Retrieval", "volume": "", "issue": "", "pages": "901--904", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ying Shen, Yang Deng, Min Yang, Yaliang Li, Nan Du, Wei Fan, and Kai Lei. 2018. Knowledge-aware at- tentive neural network for ranking question answer pairs. In The 41st International ACM SIGIR Con- ference on Research & Development in Information Retrieval, pages 901-904. ACM.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Knowledge-enhanced document embeddings for text classification. 
Knowledge-Based Systems", "authors": [ { "first": "A", "middle": [], "last": "Roberta", "suffix": "" }, { "first": "Jose", "middle": [], "last": "Sinoara", "suffix": "" }, { "first": "", "middle": [], "last": "Camacho-Collados", "suffix": "" }, { "first": "G", "middle": [], "last": "Rafael", "suffix": "" }, { "first": "Roberto", "middle": [], "last": "Rossi", "suffix": "" }, { "first": "Solange", "middle": [ "O" ], "last": "Navigli", "suffix": "" }, { "first": "", "middle": [], "last": "Rezende", "suffix": "" } ], "year": 2019, "venue": "", "volume": "163", "issue": "", "pages": "955--971", "other_ids": {}, "num": null, "urls": [], "raw_text": "Roberta A Sinoara, Jose Camacho-Collados, Rafael G Rossi, Roberto Navigli, and Solange O Rezende. 2019. Knowledge-enhanced document embeddings for text classification. Knowledge-Based Systems, 163:955-971.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Dropout: a simple way to prevent neural networks from overfitting", "authors": [ { "first": "Nitish", "middle": [], "last": "Srivastava", "suffix": "" }, { "first": "Geoffrey", "middle": [], "last": "Hinton", "suffix": "" }, { "first": "Alex", "middle": [], "last": "Krizhevsky", "suffix": "" }, { "first": "Ilya", "middle": [], "last": "Sutskever", "suffix": "" }, { "first": "Ruslan", "middle": [], "last": "Salakhutdinov", "suffix": "" } ], "year": 2014, "venue": "The Journal of Machine Learning Research", "volume": "15", "issue": "1", "pages": "1929--1958", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: a simple way to prevent neural networks from overfitting. The Journal of Machine Learning Research, 15(1):1929-1958.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Combining linguistic and statistical analysis to extract relations from web documents", "authors": [ { "first": "M", "middle": [], "last": "Fabian", "suffix": "" }, { "first": "Georgiana", "middle": [], "last": "Suchanek", "suffix": "" }, { "first": "Gerhard", "middle": [], "last": "Ifrim", "suffix": "" }, { "first": "", "middle": [], "last": "Weikum", "suffix": "" } ], "year": 2006, "venue": "Proceedings of the 12th ACM SIGKDD international conference on Knowledge discovery and data mining", "volume": "", "issue": "", "pages": "712--717", "other_ids": {}, "num": null, "urls": [], "raw_text": "Fabian M Suchanek, Georgiana Ifrim, and Gerhard Weikum. 2006. Combining linguistic and statistical analysis to extract relations from web documents. In Proceedings of the 12th ACM SIGKDD interna- tional conference on Knowledge discovery and data mining, pages 712-717. 
ACM.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Representing text for joint embedding of text and knowledge bases", "authors": [ { "first": "Kristina", "middle": [], "last": "Toutanova", "suffix": "" }, { "first": "Danqi", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Patrick", "middle": [], "last": "Pantel", "suffix": "" }, { "first": "Hoifung", "middle": [], "last": "Poon", "suffix": "" }, { "first": "Pallavi", "middle": [], "last": "Choudhury", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Gamon", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "1499--1509", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kristina Toutanova, Danqi Chen, Patrick Pantel, Hoi- fung Poon, Pallavi Choudhury, and Michael Gamon. 2015. Representing text for joint embedding of text and knowledge bases. In Proceedings of the 2015 Conference on Empirical Methods in Natural Lan- guage Processing, pages 1499-1509.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Attention is all you need", "authors": [ { "first": "Ashish", "middle": [], "last": "Vaswani", "suffix": "" }, { "first": "Noam", "middle": [], "last": "Shazeer", "suffix": "" }, { "first": "Niki", "middle": [], "last": "Parmar", "suffix": "" }, { "first": "Jakob", "middle": [], "last": "Uszkoreit", "suffix": "" }, { "first": "Llion", "middle": [], "last": "Jones", "suffix": "" }, { "first": "Aidan", "middle": [ "N" ], "last": "Gomez", "suffix": "" }, { "first": "\u0141ukasz", "middle": [], "last": "Kaiser", "suffix": "" }, { "first": "Illia", "middle": [], "last": "Polosukhin", "suffix": "" } ], "year": 2017, "venue": "Advances in neural information processing systems", "volume": "", "issue": "", "pages": "5998--6008", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in neural information pro- cessing systems, pages 5998-6008.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Simultaneously self-attending to all mentions for full-abstract biological relation extraction", "authors": [ { "first": "Patrick", "middle": [], "last": "Verga", "suffix": "" }, { "first": "Emma", "middle": [], "last": "Strubell", "suffix": "" }, { "first": "Andrew", "middle": [], "last": "Mccallum", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "872--884", "other_ids": {}, "num": null, "urls": [], "raw_text": "Patrick Verga, Emma Strubell, and Andrew McCallum. 2018. Simultaneously self-attending to all mentions for full-abstract biological relation extraction. 
In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computa- tional Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 872-884.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Combining recurrent and convolutional neural networks for relation classification", "authors": [ { "first": "Ngoc", "middle": [ "Thang" ], "last": "Vu", "suffix": "" }, { "first": "Heike", "middle": [], "last": "Adel", "suffix": "" }, { "first": "Pankaj", "middle": [], "last": "Gupta", "suffix": "" } ], "year": 2016, "venue": "Proceedings of NAACL-HLT", "volume": "", "issue": "", "pages": "534--539", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ngoc Thang Vu, Heike Adel, Pankaj Gupta, et al. 2016. Combining recurrent and convolutional neural net- works for relation classification. In Proceedings of NAACL-HLT, pages 534-539.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Combining knowledge with deep convolutional neural networks for short text classification", "authors": [ { "first": "Jin", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Zhongyuan", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Dawei", "middle": [], "last": "Zhang", "suffix": "" } ], "year": 2017, "venue": "IJCAI", "volume": "", "issue": "", "pages": "2915--2921", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jin Wang, Zhongyuan Wang, Dawei Zhang, and Jun Yan. 2017. Combining knowledge with deep convo- lutional neural networks for short text classification. In IJCAI, pages 2915-2921.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "Relation classification via multi-level attention cnns", "authors": [ { "first": "Linlin", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Zhu", "middle": [], "last": "Cao", "suffix": "" }, { "first": "Gerard", "middle": [ "De" ], "last": "Melo", "suffix": "" }, { "first": "Zhiyuan", "middle": [], "last": "Liu", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Linlin Wang, Zhu Cao, Gerard De Melo, and Zhiyuan Liu. 2016. Relation classification via multi-level at- tention cnns.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "Improved relation classification by deep recurrent neural networks with data augmentation", "authors": [ { "first": "Yan", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Ran", "middle": [], "last": "Jia", "suffix": "" }, { "first": "Lili", "middle": [], "last": "Mou", "suffix": "" }, { "first": "Ge", "middle": [], "last": "Li", "suffix": "" }, { "first": "Yunchuan", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Yangyang", "middle": [], "last": "Lu", "suffix": "" }, { "first": "Zhi", "middle": [], "last": "Jin", "suffix": "" } ], "year": 2016, "venue": "Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers", "volume": "", "issue": "", "pages": "1461--1470", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yan Xu, Ran Jia, Lili Mou, Ge Li, Yunchuan Chen, Yangyang Lu, and Zhi Jin. 2016. Improved rela- tion classification by deep recurrent neural networks with data augmentation. 
In Proceedings of COLING 2016, the 26th International Conference on Compu- tational Linguistics: Technical Papers, pages 1461- 1470.", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "Augmenting end-to-end dialogue systems with commonsense knowledge", "authors": [ { "first": "Tom", "middle": [], "last": "Young", "suffix": "" }, { "first": "Erik", "middle": [], "last": "Cambria", "suffix": "" }, { "first": "Iti", "middle": [], "last": "Chaturvedi", "suffix": "" }, { "first": "Hao", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "Subham", "middle": [], "last": "Biswas", "suffix": "" }, { "first": "Minlie", "middle": [], "last": "Huang", "suffix": "" } ], "year": 2018, "venue": "Thirty-Second AAAI Conference on Artificial Intelligence", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tom Young, Erik Cambria, Iti Chaturvedi, Hao Zhou, Subham Biswas, and Minlie Huang. 2018. Aug- menting end-to-end dialogue systems with common- sense knowledge. In Thirty-Second AAAI Confer- ence on Artificial Intelligence.", "links": null }, "BIBREF33": { "ref_id": "b33", "title": "Improved neural relation detection for knowledge base question answering", "authors": [ { "first": "Mo", "middle": [], "last": "Yu", "suffix": "" }, { "first": "Wenpeng", "middle": [], "last": "Yin", "suffix": "" }, { "first": "", "middle": [], "last": "Kazi Saidul Hasan", "suffix": "" }, { "first": "Bing", "middle": [], "last": "Cicero Dos Santos", "suffix": "" }, { "first": "Bowen", "middle": [], "last": "Xiang", "suffix": "" }, { "first": "", "middle": [], "last": "Zhou", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1704.06194" ] }, "num": null, "urls": [], "raw_text": "Mo Yu, Wenpeng Yin, Kazi Saidul Hasan, Ci- cero dos Santos, Bing Xiang, and Bowen Zhou. 2017. Improved neural relation detection for knowl- edge base question answering. arXiv preprint arXiv:1704.06194.", "links": null }, "BIBREF34": { "ref_id": "b34", "title": "Relation classification via convolutional deep neural network", "authors": [ { "first": "Daojian", "middle": [], "last": "Zeng", "suffix": "" }, { "first": "Kang", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Siwei", "middle": [], "last": "Lai", "suffix": "" }, { "first": "Guangyou", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "Jun", "middle": [], "last": "Zhao", "suffix": "" } ], "year": 2014, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Daojian Zeng, Kang Liu, Siwei Lai, Guangyou Zhou, Jun Zhao, et al. 2014. Relation classification via convolutional deep neural network.", "links": null }, "BIBREF35": { "ref_id": "b35", "title": "Relation classification via recurrent neural network", "authors": [ { "first": "Dongxu", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Dong", "middle": [], "last": "Wang", "suffix": "" } ], "year": 2015, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1508.01006" ] }, "num": null, "urls": [], "raw_text": "Dongxu Zhang and Dong Wang. 2015. Relation classi- fication via recurrent neural network. 
arXiv preprint arXiv:1508.01006.", "links": null }, "BIBREF36": { "ref_id": "b36", "title": "Graph convolution over pruned dependency trees improves relation extraction", "authors": [ { "first": "Yuhao", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Peng", "middle": [], "last": "Qi", "suffix": "" }, { "first": "Christopher D", "middle": [], "last": "Manning", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "2205--2215", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yuhao Zhang, Peng Qi, and Christopher D Manning. 2018. Graph convolution over pruned dependency trees improves relation extraction. In Proceedings of the 2018 Conference on Empirical Methods in Nat- ural Language Processing, pages 2205-2215.", "links": null }, "BIBREF37": { "ref_id": "b37", "title": "Positionaware attention and supervised data improve slot filling", "authors": [ { "first": "Yuhao", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Victor", "middle": [], "last": "Zhong", "suffix": "" }, { "first": "Danqi", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Gabor", "middle": [], "last": "Angeli", "suffix": "" }, { "first": "Christopher D", "middle": [], "last": "Manning", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "35--45", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yuhao Zhang, Victor Zhong, Danqi Chen, Gabor An- geli, and Christopher D Manning. 2017. Position- aware attention and supervised data improve slot fill- ing. In Proceedings of the 2017 Conference on Em- pirical Methods in Natural Language Processing, pages 35-45.", "links": null }, "BIBREF38": { "ref_id": "b38", "title": "Attentionbased bidirectional long short-term memory networks for relation classification", "authors": [ { "first": "Peng", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "Wei", "middle": [], "last": "Shi", "suffix": "" }, { "first": "Jun", "middle": [], "last": "Tian", "suffix": "" }, { "first": "Zhenyu", "middle": [], "last": "Qi", "suffix": "" }, { "first": "Bingchen", "middle": [], "last": "Li", "suffix": "" }, { "first": "Hongwei", "middle": [], "last": "Hao", "suffix": "" }, { "first": "Bo", "middle": [], "last": "Xu", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics", "volume": "2", "issue": "", "pages": "207--212", "other_ids": {}, "num": null, "urls": [], "raw_text": "Peng Zhou, Wei Shi, Jun Tian, Zhenyu Qi, Bingchen Li, Hongwei Hao, and Bo Xu. 2016. Attention- based bidirectional long short-term memory net- works for relation classification. In Proceedings of the 54th Annual Meeting of the Association for Com- putational Linguistics (Volume 2: Short Papers), volume 2, pages 207-212.", "links": null } }, "ref_entries": { "FIGREF0": { "text": "Knowledge-attention process (left) and multi-head structure (right) of knowledge-attention encoder.", "num": null, "uris": null, "type_str": "figure" }, "FIGREF1": { "text": "Three ways of integrating knowledge-attention with self-attention: multi-channel attention and softmax interpolation (top), as well as knowledge-informed self-attention (bottom).", "num": null, "uris": null, "type_str": "figure" }, "FIGREF2": { "text": "Knowledge-informed self-attention structure. 
Q and K represent the input matrix and the knowledge indicators, respectively; h is the number of attention heads.", "num": null, "uris": null, "type_str": "figure" }, "FIGREF3": { "text": "Change of precision, recall, and F1 score on the dev set as the priority weight \u03b2 in softmax interpolation changes.", "num": null, "uris": null, "type_str": "figure" }, "TABREF5": { "text": "", "html": null, "num": null, "content": "
Results of adding NER embeddings and entity categorical embeddings to the multi-channel attention (MCA) integrated model.
", "type_str": "table" }, "TABREF7": { "text": "", "html": null, "num": null, "content": "
Macro-averaged F1 score on the SemEval2010-Task8 dataset. * marks statistically significant improvements over Self-attn with p < 0.01 under a one-tailed t-test.
Model | Dev F1
Knwl-attn Encoder | 66.5
1. \u2212 Multi-head structure | 64.6
2. \u2212 Synonym relation indicators | 64.7
3. \u2212 Relation indicators mean | 65.0
4. \u2212 Output masking | 65.8
5. \u2212 Entity masking | 65.4
6. \u2212 Relative positions | 63.0
", "type_str": "table" }, "TABREF8": { "text": "Ablation study on knowledge-attention encoder. Results are the median F 1 scores of 5 independent runs on dev set of TACRED.", "html": null, "num": null, "content": "", "type_str": "table" }, "TABREF9": { "text": "SUBJ-PERSON graduated in 1992 from the OBJ-ORGANIZATION OBJ-ORGANIZATION OBJ-ORGANIZATION with a degree in computer science and had worked as a systems analyst at a Pittsburgh law firm since 1999 . PERSON graduated in 1992 from the OBJ-ORGANIZATION OBJ-ORGANIZATION OBJ-ORGANIZATION with a degree in computer science and had worked as a systems analyst at a Pittsburgh law firm since 1999 . correct OBJ-PERSON OBJ-PERSON , a public affairs and government relations strategist , was executive director of the SUBJ-ORGANIZATION SUBJ-ORGANIZATION Policy Institute from 2005 to 2010 .", "html": null, "num": null, "content": "
Sample Sentences (each shown twice: knowledge-attention encoder first, self-attention encoder second) | True Relation | Predict
SUBJ-PERSON graduated in 1992 from the OBJ-ORGANIZATION OBJ-ORGANIZATION OBJ-ORGANIZATION with a degree in computer science and had worked as a systems analyst at a Pittsburgh law firm since 1999 . | per:schools_attended | correct
SUBJ-PERSON graduated in 1992 from the OBJ-ORGANIZATION OBJ-ORGANIZATION OBJ-ORGANIZATION with a degree in computer science and had worked as a systems analyst at a Pittsburgh law firm since 1999 . | per:schools_attended | correct
OBJ-PERSON OBJ-PERSON , a public affairs and government relations strategist , was executive director of the SUBJ-ORGANIZATION SUBJ-ORGANIZATION Policy Institute from 2005 to 2010 . | org:top_members/employees | correct
OBJ-PERSON OBJ-PERSON , a public affairs and government relations strategist , was executive director of the SUBJ-ORGANIZATION SUBJ-ORGANIZATION Policy Institute from 2005 to 2010 . | org:top_members/employees | wrong
Founded in 1992 in Schaumburg , Illinois , the SUBJ-ORGANIZATION is one of the largest Chinese-American associations of professionals in the OBJ-COUNTRY OBJ-COUNTRY . | org:country_of_headquarters | wrong
Founded in 1992 in Schaumburg , Illinois , the SUBJ-ORGANIZATION is one of the largest Chinese-American associations of professionals in the OBJ-COUNTRY OBJ-COUNTRY . | org:country_of_headquarters | correct
", "type_str": "table" }, "TABREF10": { "text": "Attention visualization for knowledge-attention encoder (first) and self-attention encoder (second). Words are highlighted based on the attention weights assigned to them. Best viewed in color.", "html": null, "num": null, "content": "", "type_str": "table" } } } }