{ "paper_id": "D19-1005", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T16:12:47.186082Z" }, "title": "Knowledge Enhanced Contextual Word Representations", "authors": [ { "first": "Matthew", "middle": [ "E" ], "last": "Peters", "suffix": "", "affiliation": { "laboratory": "", "institution": "Allen Institute for Artificial Intelligence", "location": { "settlement": "Seattle", "region": "WA", "country": "USA" } }, "email": "matthewp@allenai.org" }, { "first": "Mark", "middle": [], "last": "Neumann", "suffix": "", "affiliation": { "laboratory": "", "institution": "Allen Institute for Artificial Intelligence", "location": { "settlement": "Seattle", "region": "WA", "country": "USA" } }, "email": "markn@allenai.org" }, { "first": "Robert", "middle": [ "L" ], "last": "Logan Iv", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of California", "location": { "settlement": "Irvine", "region": "CA", "country": "USA" } }, "email": "" }, { "first": "Roy", "middle": [], "last": "Schwartz", "suffix": "", "affiliation": { "laboratory": "", "institution": "Allen Institute for Artificial Intelligence", "location": { "settlement": "Seattle", "region": "WA", "country": "USA" } }, "email": "roys@allenai.org" }, { "first": "Vidur", "middle": [], "last": "Joshi", "suffix": "", "affiliation": { "laboratory": "", "institution": "Allen Institute for Artificial Intelligence", "location": { "settlement": "Seattle", "region": "WA", "country": "USA" } }, "email": "" }, { "first": "Sameer", "middle": [], "last": "Singh", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of California", "location": { "settlement": "Irvine", "region": "CA", "country": "USA" } }, "email": "sameer@uci.edu" }, { "first": "Noah", "middle": [ "A" ], "last": "Smith", "suffix": "", "affiliation": { "laboratory": "", "institution": "Allen Institute for Artificial Intelligence", "location": { "settlement": "Seattle", "region": "WA", "country": "USA" } }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Contextual word representations, typically trained on unstructured, unlabeled text, do not contain any explicit grounding to real world entities and are often unable to remember facts about those entities. We propose a general method to embed multiple knowledge bases (KBs) into large scale models, and thereby enhance their representations with structured, human-curated knowledge. For each KB, we first use an integrated entity linker to retrieve relevant entity embeddings, then update contextual word representations via a form of word-to-entity attention. In contrast to previous approaches, the entity linkers and selfsupervised language modeling objective are jointly trained end-to-end in a multitask setting that combines a small amount of entity linking supervision with a large amount of raw text. After integrating WordNet and a subset of Wikipedia into BERT, the knowledge enhanced BERT (KnowBert) demonstrates improved perplexity, ability to recall facts as measured in a probing task and downstream performance on relationship extraction, entity typing, and word sense disambiguation. KnowBert's runtime is comparable to BERT's and it scales to large KBs.", "pdf_parse": { "paper_id": "D19-1005", "_pdf_hash": "", "abstract": [ { "text": "Contextual word representations, typically trained on unstructured, unlabeled text, do not contain any explicit grounding to real world entities and are often unable to remember facts about those entities. 
We propose a general method to embed multiple knowledge bases (KBs) into large scale models, and thereby enhance their representations with structured, human-curated knowledge. For each KB, we first use an integrated entity linker to retrieve relevant entity embeddings, then update contextual word representations via a form of word-to-entity attention. In contrast to previous approaches, the entity linkers and selfsupervised language modeling objective are jointly trained end-to-end in a multitask setting that combines a small amount of entity linking supervision with a large amount of raw text. After integrating WordNet and a subset of Wikipedia into BERT, the knowledge enhanced BERT (KnowBert) demonstrates improved perplexity, ability to recall facts as measured in a probing task and downstream performance on relationship extraction, entity typing, and word sense disambiguation. KnowBert's runtime is comparable to BERT's and it scales to large KBs.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Large pretrained models such as ELMo (Peters et al., 2018) , GPT (Radford et al., 2018) , and BERT (Devlin et al., 2019) have significantly improved the state of the art for a wide range of NLP tasks. These models are trained on large amounts of raw text using self-supervised objectives. However, they do not contain any explicit grounding to real world entities and as a result have difficulty recovering factual knowledge (Logan et al., 2019) .", "cite_spans": [ { "start": 37, "end": 58, "text": "(Peters et al., 2018)", "ref_id": "BIBREF35" }, { "start": 65, "end": 87, "text": "(Radford et al., 2018)", "ref_id": "BIBREF37" }, { "start": 99, "end": 120, "text": "(Devlin et al., 2019)", "ref_id": "BIBREF13" }, { "start": 425, "end": 445, "text": "(Logan et al., 2019)", "ref_id": "BIBREF25" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Knowledge bases (KBs) provide a rich source of high quality, human-curated knowledge that can be used to ground these models. In addition, they often include complementary information to that found in raw text, and can encode factual knowledge that is difficult to learn from selectional preferences either due to infrequent mentions of commonsense knowledge or long range dependencies.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We present a general method to insert multiple KBs into a large pretrained model with a Knowledge Attention and Recontextualization (KAR) mechanism. The key idea is to explicitly model entity spans in the input text and use an entity linker to retrieve relevant entity embeddings from a KB to form knowledge enhanced entity-span representations. Then, the model recontextualizes the entity-span representations with word-toentity attention to allow long range interactions between contextual word representations and all entity spans in the context. The entire KAR is inserted between two layers in the middle of a pretrained model such as BERT.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In contrast to previous approaches that integrate external knowledge into task-specific models with task supervision (e.g., Yang and Mitchell, 2017; Chen et al., 2018) , our approach learns the entity linkers with self-supervision on unlabeled data. 
This results in general purpose knowledge enhanced representations that can be applied to a wide range of downstream tasks.", "cite_spans": [ { "start": 124, "end": 148, "text": "Yang and Mitchell, 2017;", "ref_id": "BIBREF60" }, { "start": 149, "end": 167, "text": "Chen et al., 2018)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Our approach has several other benefits. First, it leaves the top layers of the original model unchanged so we may retain the output loss layers and fine-tune on unlabeled corpora while training the KAR. This also allows us to simply swap out BERT for KnowBert in any downstream application. Second, by taking advantage of the existing high capacity layers in the original model, the KAR is lightweight, adding minimal additional parameters and runtime. Finally, it is easy to incorporate additional KBs by simply inserting them at other locations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "KnowBert is agnostic to the form of the KB, subject to a small set of requirements (see Sec. 3.2) . We experiment with integrating both WordNet (Miller, 1995) and Wikipedia, thus explicitly adding word sense knowledge and facts about named entities (including those unseen at training time). However, the method could be extended to commonsense KBs such as ConceptNet (Speer et al., 2017) or domain specific ones (e.g., UMLS; Bodenreider, 2004) . We evaluate KnowBert with a mix of intrinsic and extrinsic tasks. Despite being based on the smaller BERT BASE model, the experiments demonstrate improved masked language model perplexity and ability to recall facts over BERT LARGE . The extrinsic evaluations demonstrate improvements for challenging relationship extraction, entity typing and word sense disambiguation datasets, and often outperform other contemporaneous attempts to incorporate external knowledge into BERT.", "cite_spans": [ { "start": 88, "end": 97, "text": "Sec. 3.2)", "ref_id": null }, { "start": 144, "end": 158, "text": "(Miller, 1995)", "ref_id": "BIBREF30" }, { "start": 368, "end": 388, "text": "(Speer et al., 2017)", "ref_id": "BIBREF42" }, { "start": 420, "end": 425, "text": "UMLS;", "ref_id": null }, { "start": 426, "end": 444, "text": "Bodenreider, 2004)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Pretrained word representations Initial work learning word vectors focused on static word embeddings using multi-task learning objectives (Collobert and Weston, 2008) or corpus level cooccurence statistics (Mikolov et al., 2013a; Pennington et al., 2014) . Recently the field has shifted toward learning context-sensitive embeddings (Dai and Le, 2015; Peters et al., 2018; Devlin et al., 2019) . 
We build upon these by incorporating structured knowledge into these models.", "cite_spans": [ { "start": 138, "end": 166, "text": "(Collobert and Weston, 2008)", "ref_id": "BIBREF9" }, { "start": 206, "end": 229, "text": "(Mikolov et al., 2013a;", "ref_id": "BIBREF28" }, { "start": 230, "end": 254, "text": "Pennington et al., 2014)", "ref_id": "BIBREF34" }, { "start": 333, "end": 351, "text": "(Dai and Le, 2015;", "ref_id": "BIBREF10" }, { "start": 352, "end": 372, "text": "Peters et al., 2018;", "ref_id": "BIBREF35" }, { "start": 373, "end": 393, "text": "Devlin et al., 2019)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Entity embeddings Entity embedding methods produce continuous vector representations from external knowledge sources. Knowledge graphbased methods optimize the score of observed triples in a knowledge graph. These methods broadly fall into two categories: translational distance models (Bordes et al., 2013; Wang et al., 2014b; Lin et al., 2015; Xiao et al., 2016) which use a distance-based scoring function, and linear models (Nickel et al., 2011; Yang et al., 2014; Trouillon et al., 2016; Dettmers et al., 2018) which use a similarity-based scoring function. We experiment with TuckER (Balazevic et al., 2019) embeddings, a recent linear model which generalizes many of the aforecited models. Other methods combine entity metadata with the graph (Xie et al., 2016) , use entity contexts (Chen et al., 2014; Ganea and Hofmann, 2017) , or a combination of contexts and the KB (Wang et al., 2014a; Gupta et al., 2017) . Our approach is agnostic to the details of the entity embedding method and as a result is able to use any of these methods.", "cite_spans": [ { "start": 286, "end": 307, "text": "(Bordes et al., 2013;", "ref_id": "BIBREF5" }, { "start": 308, "end": 327, "text": "Wang et al., 2014b;", "ref_id": "BIBREF55" }, { "start": 328, "end": 345, "text": "Lin et al., 2015;", "ref_id": "BIBREF23" }, { "start": 346, "end": 364, "text": "Xiao et al., 2016)", "ref_id": "BIBREF57" }, { "start": 428, "end": 449, "text": "(Nickel et al., 2011;", "ref_id": "BIBREF33" }, { "start": 450, "end": 468, "text": "Yang et al., 2014;", "ref_id": "BIBREF61" }, { "start": 469, "end": 492, "text": "Trouillon et al., 2016;", "ref_id": "BIBREF47" }, { "start": 493, "end": 515, "text": "Dettmers et al., 2018)", "ref_id": "BIBREF12" }, { "start": 589, "end": 613, "text": "(Balazevic et al., 2019)", "ref_id": "BIBREF2" }, { "start": 750, "end": 768, "text": "(Xie et al., 2016)", "ref_id": "BIBREF58" }, { "start": 791, "end": 810, "text": "(Chen et al., 2014;", "ref_id": "BIBREF7" }, { "start": 811, "end": 835, "text": "Ganea and Hofmann, 2017)", "ref_id": "BIBREF16" }, { "start": 878, "end": 898, "text": "(Wang et al., 2014a;", "ref_id": "BIBREF54" }, { "start": 899, "end": 918, "text": "Gupta et al., 2017)", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Entity-aware language models Some previous work has focused on adding KBs to generative language models (LMs) (Ahn et al., 2017; Logan et al., 2019) or building entity-centric LMs (Ji et al., 2017) . However, these methods introduce latent variables that require full annotation for training, or marginalization. 
In contrast, we adopt a method that allows training with large amounts of unannotated text.", "cite_spans": [ { "start": 110, "end": 128, "text": "(Ahn et al., 2017;", "ref_id": "BIBREF0" }, { "start": 129, "end": 148, "text": "Logan et al., 2019)", "ref_id": "BIBREF25" }, { "start": 180, "end": 197, "text": "(Ji et al., 2017)", "ref_id": "BIBREF20" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Task-specific KB architectures Other work has focused on integrating KBs into neural architectures for specific downstream tasks (Yang and Mitchell, 2017; Sun et al., 2018; Chen et al., 2018; Bauer et al., 2018; Mihaylov and Frank, 2018; Wang and Jiang, 2019; Yang et al., 2019) . Our approach instead uses KBs to learn more generally transferable representations that can be used to improve a variety of downstream tasks.", "cite_spans": [ { "start": 129, "end": 154, "text": "(Yang and Mitchell, 2017;", "ref_id": "BIBREF60" }, { "start": 155, "end": 172, "text": "Sun et al., 2018;", "ref_id": null }, { "start": 173, "end": 191, "text": "Chen et al., 2018;", "ref_id": "BIBREF6" }, { "start": 192, "end": 211, "text": "Bauer et al., 2018;", "ref_id": "BIBREF3" }, { "start": 212, "end": 237, "text": "Mihaylov and Frank, 2018;", "ref_id": "BIBREF27" }, { "start": 238, "end": 259, "text": "Wang and Jiang, 2019;", "ref_id": "BIBREF51" }, { "start": 260, "end": 278, "text": "Yang et al., 2019)", "ref_id": "BIBREF59" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "KnowBert incorporates knowledge bases into BERT using the Knowledge Attention and Recontextualization component (KAR). We start by describing the BERT and KB components. We then move to introducing KAR. Finally, we describe the training procedure, including the multitask training regime for jointly training KnowBert and an entity linker.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "KnowBert", "sec_num": "3" }, { "text": "We describe KnowBert as an extension to (and candidate replacement for) BERT, although the method is general and can be applied to any deep pretrained model including left-to-right and rightto-left LMs such as ELMo and GPT. Formally, BERT accepts as input a sequence of N Word-Piece tokens (Sennrich et al., 2016; Wu et al., 2016) , (x 1 , . . . , x N ), and computes L layers of D-dimensional contextual representations H i \u2208 R N \u00d7D by successively applying non-linear functions H i = F i (H i\u22121 ). 
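A minimal sketch of this layer-stack computation, with a hook for the KAR of Sec. 3.3 spliced in after a chosen layer, is shown below; the module and argument names are illustrative, not the released implementation.

```python
# Illustrative sketch (not the authors' code): a BERT-style stack of L
# transformer blocks with an optional KAR module applied after layer
# `kar_layer`, as KnowBert does between two middle layers.
import torch
import torch.nn as nn

class EncoderWithKAR(nn.Module):
    def __init__(self, transformer_blocks, kar=None, kar_layer=None):
        super().__init__()
        self.blocks = nn.ModuleList(transformer_blocks)  # F_1 ... F_L
        self.kar = kar              # knowledge attention + recontextualization
        self.kar_layer = kar_layer  # apply KAR after this (0-based) layer index

    def forward(self, h, candidates=None):
        # h: word-piece representations H_0, shape (N, D)
        for i, block in enumerate(self.blocks):
            h = block(h)                          # H_i = F_i(H_{i-1})
            if self.kar is not None and i == self.kar_layer:
                h = self.kar(h, candidates)       # H_i' = KAR(H_i, C)
        return h

# e.g., a dummy 12-layer stack with a pass-through "KAR" between layers 10 and 11
enc = EncoderWithKAR([nn.Identity() for _ in range(12)],
                     kar=lambda h, c: h, kar_layer=9)
out = enc(torch.randn(128, 768))
```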
The non-linear function is a multi-headed self-attention layer followed by a position-wise multilayer perceptron (MLP) (Vaswani et al., 2017) :", "cite_spans": [ { "start": 290, "end": 313, "text": "(Sennrich et al., 2016;", "ref_id": "BIBREF39" }, { "start": 314, "end": 330, "text": "Wu et al., 2016)", "ref_id": "BIBREF56" }, { "start": 619, "end": 641, "text": "(Vaswani et al., 2017)", "ref_id": "BIBREF48" } ], "ref_spans": [], "eq_spans": [], "section": "Pretrained BERT", "sec_num": "3.1" }, { "text": "F i (H i\u22121 ) = TransformerBlock(H i\u22121 ) = MLP(MultiHeadAttn(H i\u22121 , H i\u22121 , H i\u22121 )).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Pretrained BERT", "sec_num": "3.1" }, { "text": "The multi-headed self-attention uses H i\u22121 as the query, key, and value to allow each vector to attend to every other vector.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Pretrained BERT", "sec_num": "3.1" }, { "text": "BERT is trained to minimize an objective function that combines both next-sentence prediction (NSP) and masked LM log-likelihood (MLM):", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Pretrained BERT", "sec_num": "3.1" }, { "text": "L BERT = L NSP + L MLM .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Pretrained BERT", "sec_num": "3.1" }, { "text": "Given two inputs x A and x B , the next-sentence prediction task is binary classification to predict whether x B is the next sentence following x A . The masked LM objective randomly replaces a percentage of input word pieces with a special [MASK] token and computes the negative loglikelihood of the missing token with a linear layer and softmax over all possible word pieces.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Pretrained BERT", "sec_num": "3.1" }, { "text": "The key contribution of this paper is a method to incorporate knowledge bases (KB) into a pretrained BERT model. To encompass as wide a selection of prior knowledge as possible, we adopt a broad definition for a KB in the most general sense as fixed collection of K entity nodes, e k , from which it is possible to compute entity embeddings, e k \u2208 R E . This includes KBs with a typical (subj, rel, obj) graph structure, KBs that contain only entity metadata without a graph, and those that combine both a graph and entity metadata, as long as there is some method for embedding the entities in a low dimensional vector space. We also do not make any assumption that the entities are typed. As we show in Sec. 4.1 this flexibility is beneficial, where we compute entity embeddings from WordNet using both the graph and synset definitions, but link directly to Wikipedia pages without a graph by using embeddings computed from the entity description.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Knowledge Bases", "sec_num": "3.2" }, { "text": "We also assume that the KB is accompanied by an entity candidate selector that takes as input some text and returns a list of C potential entity links, each consisting of the start and end indices of the potential mention span and M m candidate entities in the KG:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Knowledge Bases", "sec_num": "3.2" }, { "text": "C = { (start m , end m ), (e m,1 , . . . , e m,Mm ) | m \u2208 1 . . . C, e k \u2208 1 . . . 
K}.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Knowledge Bases", "sec_num": "3.2" }, { "text": "In practice, these are often implemented using precomputed dictionaries (e.g., CrossWikis; Spitkovsky and Chang, 2012), KB specific rules (e.g., a WordNet lemmatizer), or other heuristics (e.g., string match; Mihaylov and Frank, 2018) . Ling et al. (2015) showed that incorporating candidate priors into entity linkers can be a powerful signal, so we optionally allow for the candidate selector to return an associated prior probability for each entity candidate. In some cases, it is beneficial to over-generate potential candidates and add a special NULL entity to each candidate list, thereby allowing the linker to discriminate between actual links and false positive candidates. In this work, the entity candidate selectors are fixed but their output is passed to a learned context dependent entity linker to disambiguate the candidate mentions.", "cite_spans": [ { "start": 209, "end": 234, "text": "Mihaylov and Frank, 2018)", "ref_id": "BIBREF27" }, { "start": 237, "end": 255, "text": "Ling et al. (2015)", "ref_id": "BIBREF24" } ], "ref_spans": [], "eq_spans": [], "section": "Knowledge Bases", "sec_num": "3.2" }, { "text": "Finally, by restricting the number of candidate entities to a fixed small number (we use 30), KnowBert's runtime is independent of the size the KB, as it only considers a small subset of all possible entities for any given text. As the candidate selection is rule-based and fixed, it is fast and in our implementation is performed asynchronously on CPU. The only overhead for scaling up the size of the KB is the memory footprint to store the entity embeddings.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Knowledge Bases", "sec_num": "3.2" }, { "text": "The Knowledge Attention and Recontextualization component (KAR) is the heart of KnowBert. The KAR accepts as input the contextual representations at a particular layer, H i , and computes knowledge enhanced representations H i = KAR(H i , C). This is fed into the next pretrained layer, H i+1 = TransformerBlock(H i ), and the remainder of BERT is run as usual.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "KAR", "sec_num": "3.3" }, { "text": "In this section, we describe the KAR's key components: mention-span representations, retrieval of relevant entity embeddings using an entity linker, update of mention-span embeddings with retrieved information, and recontextualization of entity-span embeddings with word-to-entity-span attention. We describe the KAR for a single KB, but extension to multiple KBs at different layers is straightforward. See Fig. 1 for an overview.", "cite_spans": [], "ref_spans": [ { "start": 408, "end": 414, "text": "Fig. 1", "ref_id": null } ], "eq_spans": [], "section": "KAR", "sec_num": "3.3" }, { "text": "Mention-span representations The KAR starts with the KB entity candidate selector that provides a list of candidate mentions which it uses to compute mention-span representations. H i is first pro- (1), then pooled over candidate mentions spans (2) to compute S, and contextualized into S e using mention-span self-attention (3). An integrated entity linker computes weighted average entity embeddings\u1ebc (4), which are used to enhance the span representations with knowledge from the KB (5), computing S e . 
Finally, the BERT word piece representations are recontextualized with word-to-entity-span attention (6) and projected back to the BERT dimension (7) resulting in H i .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "KAR", "sec_num": "3.3" }, { "text": "jected to the entity dimension (E, typically 200 or 300, see Sec. 4.1) with a linear projection,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "KAR", "sec_num": "3.3" }, { "text": "H proj i = H i W proj 1 + b proj 1 .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "KAR", "sec_num": "3.3" }, { "text": "( 1)Then, the KAR computes C mention-span representations s m \u2208 R E , one for each candidate mention, by pooling over all word pieces in a mentionspan using the self-attentive span pooling from Lee et al. (2017) . The mention-spans are stacked into a matrix S \u2208 R C\u00d7E .", "cite_spans": [ { "start": 194, "end": 211, "text": "Lee et al. (2017)", "ref_id": "BIBREF22" } ], "ref_spans": [], "eq_spans": [], "section": "KAR", "sec_num": "3.3" }, { "text": "The entity linker is responsible for performing entity disambiguation for each potential mention from among the available candidates. It first runs mention-span self-attention to compute S e = TransformerBlock(S).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Entity linker", "sec_num": null }, { "text": "The span self-attention is identical to the typical transformer layer, exception that the self-attention is between mention-span vectors instead of word piece vectors. This allows KnowBert to incorporate global information into each linking decision so that it can take advantage of entity-entity cooccurrence and resolve which of several overlapping candidate mentions should be linked. 1 Following Kolitsas et al. (2018) , S e is used to score each of the candidate entities while incorporating the candidate entity prior from the KB. Each candidate span m has an associated mention-span vector s e m (computed via Eq. 2), M m candidate entities with embeddings e mk (from the KB), and prior probabilities p mk . We compute M m scores using the prior and dot product between the entityspan vectors and entity embeddings,", "cite_spans": [ { "start": 400, "end": 422, "text": "Kolitsas et al. (2018)", "ref_id": "BIBREF21" } ], "ref_spans": [], "eq_spans": [], "section": "Entity linker", "sec_num": null }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\u03c8 mk = MLP(p mk , s e m \u2022 e mk ),", "eq_num": "(3)" } ], "section": "Entity linker", "sec_num": null }, { "text": "with a two-layer MLP (100 hidden dimensions). If entity linking (EL) supervision is available, we can compute a loss with the gold entity e mg . 
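A minimal sketch of this scoring step follows, assuming the two scalar features (the prior and the dot product) are simply concatenated before the two-layer MLP; the activation and other details are our assumptions.

```python
# Sketch of the candidate scorer (Eq. 3); illustrative, assuming 100 hidden
# units as stated in the text and a ReLU non-linearity (not specified there).
import torch
import torch.nn as nn

class CandidateScorer(nn.Module):
    def __init__(self, hidden_dim=100):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(2, hidden_dim), nn.ReLU(),
                                 nn.Linear(hidden_dim, 1))

    def forward(self, span_vecs, cand_embs, priors):
        # span_vecs: (C, E)  cand_embs: (C, M, E)  priors: (C, M)
        dot = torch.einsum('ce,cme->cm', span_vecs, cand_embs)   # s^e_m . e_mk
        feats = torch.stack([priors, dot], dim=-1)               # (C, M, 2)
        return self.mlp(feats).squeeze(-1)                       # psi_mk: (C, M)

# e.g., 5 candidate spans, 30 candidate entities each, 200-dim embeddings
scores = CandidateScorer()(torch.randn(5, 200), torch.randn(5, 30, 200),
                           torch.rand(5, 30))
```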
The exact form of the loss depends on the KB, and we use both log-likelihood,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Entity linker", "sec_num": null }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "L EL = \u2212 m log exp(\u03c8 mg ) k exp(\u03c8 mk ) ,", "eq_num": "(4)" } ], "section": "Entity linker", "sec_num": null }, { "text": "and max-margin,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Entity linker", "sec_num": null }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "L EL = max(0, \u03b3 \u2212 \u03c8 mg ) + e mk =emg max(0, \u03b3 + \u03c8 mk ),", "eq_num": "(5)" } ], "section": "Entity linker", "sec_num": null }, { "text": "formulations (see Sec. 4.1 for details).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Entity linker", "sec_num": null }, { "text": "Knowledge enhanced entity-span representations KnowBert next injects the KB entity information into the mention-span representations computed from BERT vectors (s e m ) to form entityspan representations. For a given span m, we first disregard all candidate entities with score \u03c8 below a fixed threshold, and softmax normalize the remaining scores:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Entity linker", "sec_num": null }, { "text": "\u03c8 mk = \uf8f1 \uf8f4 \uf8f4 \uf8f2 \uf8f4 \uf8f4 \uf8f3 exp(\u03c8 mk ) \u03c8 mk \u2265\u03b4 exp(\u03c8 mk ) , \u03c8 mk \u2265 \u03b4 0, \u03c8 mk < \u03b4.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Entity linker", "sec_num": null }, { "text": "Then the weighted entity embedding is", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Entity linker", "sec_num": null }, { "text": "e m = k\u03c8 mk e mk .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Entity linker", "sec_num": null }, { "text": "If all entity linking scores are below the threshold \u03b4, we substitute a special NULL embedding for\u1ebd m . Finally, the entity-span representations are updated with the weighted entity embeddings s e m = s e m +\u1ebd m ,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Entity linker", "sec_num": null }, { "text": "which are packed into a matrix S e \u2208 R C\u00d7E .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Entity linker", "sec_num": null }, { "text": "Recontextualization After updating the entityspan representations with the weighted entity vectors, KnowBert uses them to recontextualize the word piece representations. This is accomplished using a modified transformer layer that substitutes the multi-headed self-attention with a multiheaded attention between the projected word piece representations and knowledge enhanced entityspan vectors. As introduced by Vaswani et al. for the query, and S e for both the key and value:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Entity linker", "sec_num": null }, { "text": "H proj i = MLP(MultiHeadAttn(H proj i , S e , S e )).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Entity linker", "sec_num": null }, { "text": "This allows each word piece to attend to all entity-spans in the context, so that it can propagate entity information over long contexts. 
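A minimal sketch of the enhancement and attention steps is given below; the threshold value, batching, and NULL-embedding handling are simplified, and this is not the released implementation.

```python
# Illustrative sketch: threshold and renormalize the candidate scores, form a
# weighted entity embedding, add it to the mention-span vector, then let word
# pieces attend to the enhanced spans.
import torch
import torch.nn as nn

def enhance_spans(span_vecs, cand_embs, psi, null_emb, delta=0.0):
    # span_vecs: (C, E)  cand_embs: (C, M, E)  psi: (C, M)  null_emb: (E,)
    # delta is the fixed score threshold (value here is illustrative).
    keep = (psi >= delta).float()
    exp = torch.exp(psi - psi.max(dim=-1, keepdim=True).values) * keep
    denom = exp.sum(dim=-1, keepdim=True)
    weights = exp / denom.clamp(min=1e-9)         # psi~_mk, zero below threshold
    e_tilde = torch.einsum('cm,cme->ce', weights, cand_embs)
    no_link = denom == 0                          # every candidate under delta
    e_tilde = torch.where(no_link, null_emb.expand_as(e_tilde), e_tilde)
    return span_vecs + e_tilde                    # s'^e_m = s^e_m + e~_m

# Word-to-entity-span attention: the projected word pieces are the query, the
# enhanced span matrix is both key and value.
C, N, E = 5, 25, 200
s_e = enhance_spans(torch.randn(C, E), torch.randn(C, 4, E),
                    torch.randn(C, 4), torch.zeros(E))
attn = nn.MultiheadAttention(embed_dim=E, num_heads=4, batch_first=True)
h_proj = torch.randn(1, N, E)
recontextualized, _ = attn(h_proj, s_e.unsqueeze(0), s_e.unsqueeze(0))
```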
After the multi-headed word-to-entity-span attention, we run a position-wise MLP analogous to the standard transformer layer. 2 Finally, H proj i is projected back to the BERT dimension with a linear transformation and a residual connection added:", "cite_spans": [ { "start": 264, "end": 265, "text": "2", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Entity linker", "sec_num": null }, { "text": "H i = H proj i W proj 2 + b proj 2 + H i (7)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Entity linker", "sec_num": null }, { "text": "Alignment of BERT and entity vectors As KnowBert does not place any restrictions on the entity embeddings, it is essential to align them with the pretrained BERT contextual representations. To encourage this alignment we initialize W proj 2 as the matrix inverse of W proj 1 (Eq. 1). The use of dot product similarity (Eq. 3) and residual connection (Eq. 7) further aligns the entity-span representations with entity embeddings. ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Entity linker", "sec_num": null }, { "text": "L KnowBert = L BERT + j i=1 L EL i end", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Entity linker", "sec_num": null }, { "text": "Our training regime incrementally pretrains increasingly larger portions of KnowBert before fine-tuning all trainable parameters in a multitask setting with any available EL supervision. It is similar in spirit to the \"chain-thaw\" approach in Felbo et al. (2017) , and is summarized in Alg. 1.", "cite_spans": [ { "start": 243, "end": 262, "text": "Felbo et al. (2017)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Training Procedure", "sec_num": "3.4" }, { "text": "We assume access to a pretrained BERT model and one or more KBs with their entity candidate selectors. To add the first KB, we begin by pretraining entity embeddings (if not already provided from another source), then freeze them in all subsequent training, including task-specific finetuning. If EL supervision is available, it is used to pretrain the KB specific EL parameters, while freezing the remainder of the network. Finally, the entire network is fine-tuned to convergence by minimizing", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training Procedure", "sec_num": "3.4" }, { "text": "L KnowBert = L BERT + L EL .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training Procedure", "sec_num": "3.4" }, { "text": "We apply gradient updates to homogeneous batches randomly sampled from either the unlabeled corpus or EL supervision.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training Procedure", "sec_num": "3.4" }, { "text": "To add a second KB, we repeat the process, inserting it in any layer above the first one. When adding a KB, the BERT layers above it will experience large gradients early in training, as they are subject to the randomly initialized parameters associated with the new KB. They are thus expected to move further from their pretrained values before convergence compared to parameters below the KB. By adding KBs from bottom to top, we minimize disruption of the network and decrease the likelihood that training will fail. See Sec. 
4.1 for details of where each KB was added.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training Procedure", "sec_num": "3.4" }, { "text": "The entity embeddings and selected candidates contain lexical information (especially in the case of WordNet), that will make the masked LM predictions significantly easier. To prevent leaking into the masked word pieces, we adopt the BERT strategy and replace all entity candidates from the selectors with a special [MASK] entity if the candidate mention span overlaps with a masked word piece. 3 This prevents KnowBert from relying on the selected candidates to predict masked word pieces.", "cite_spans": [ { "start": 396, "end": 397, "text": "3", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Training Procedure", "sec_num": "3.4" }, { "text": "We used the English uncased BERT BASE model (Devlin et al., 2019) to train three versions of KnowBert:", "cite_spans": [ { "start": 44, "end": 65, "text": "(Devlin et al., 2019)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Experimental Setup", "sec_num": "4.1" }, { "text": "KnowBert-Wiki, KnowBert-WordNet, and KnowBert-W+W that includes both Wikipedia and WordNet.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Setup", "sec_num": "4.1" }, { "text": "The entity linker in KnowBert-Wiki borrows both the entity candidate selectors and embeddings from Ganea and Hofmann (2017). The candidate selectors and priors are a combination of CrossWikis, a large, precomputed dictionary that combines statistics from Wikipedia and a web corpus (Spitkovsky and Chang, 2012), and the YAGO dictionary (Hoffart et al., 2011) . The entity embeddings use a skipgram like objective (Mikolov et al., 2013b) to learn 300-dimensional embeddings of Wikipedia page titles directly from Wikipedia descriptions without using any explicit graph structure between nodes. As such, nodes in the KB are Wikipedia page titles, e.g., Prince (musician). Ganea and Hofmann (2017) provide pretrained embeddings for a subset of approximately 470K entities. Early experiments with embeddings derived from Wikidata relations 4 did not improve results.", "cite_spans": [ { "start": 336, "end": 358, "text": "(Hoffart et al., 2011)", "ref_id": "BIBREF19" }, { "start": 413, "end": 436, "text": "(Mikolov et al., 2013b)", "ref_id": "BIBREF29" } ], "ref_spans": [], "eq_spans": [], "section": "KnowBert-Wiki", "sec_num": null }, { "text": "We used the AIDA-CoNLL dataset (Hoffart et al., 2011) for supervision, adopting the standard splits. This dataset exhaustively annotates entity links for named entities of person, organization and location types, as well as a miscellaneous type. It does not annotate links to common nouns or other Wikipedia pages. At both train and test time, we consider all selected candidate spans and the top 30 entities, to which we add the special NULL entity to allow KnowBert to discriminate between actual links and false positive links from the selector. As such, KnowBert models both entity mention detection and disambiguation in an end-to-end manner. Eq. 5 was used as the objective.", "cite_spans": [ { "start": 31, "end": 53, "text": "(Hoffart et al., 2011)", "ref_id": "BIBREF19" } ], "ref_spans": [], "eq_spans": [], "section": "KnowBert-Wiki", "sec_num": null }, { "text": "KnowBert-WordNet Our WordNet KB combines synset metadata, lemma metadata and the relational graph. 
To construct the graph, we first extracted all synsets, lemmas, and their relationships from WordNet 3.0 using the nltk interface. After disregarding certain symmetric relationships (e.g., we kept the hypernym relationship, but removed the inverse hyponym relationship) we were left with 28 synset-synset and lemma-lemma relationships. From these, we constructed a graph where each node is either a synset or lemma, and intro- duced the special lemma in synset relationship to link synsets and lemmas. The candidate selector uses a rule-based lemmatizer without partof-speech (POS) information. 5 Our embeddings combine both the graph and synset glosses (definitions), as early experiments indicated improved perplexity when using both vs. just graph-based embeddings.", "cite_spans": [ { "start": 694, "end": 695, "text": "5", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "KnowBert-Wiki", "sec_num": null }, { "text": "We used TuckER (Balazevic et al., 2019) to compute 200dimensional vectors for each synset and lemma using the relationship graph. Then, we extracted the gloss for each synset and used an off-theshelf state-of-the-art sentence embedding method (Subramanian et al., 2018) to produce 2048dimensional vectors. These are concatenated to the TuckER embeddings. To reduce the dimensionality for use in KnowBert, the frozen 2248dimensional embeddings are projected to 200dimensions with a learned linear transformation.", "cite_spans": [ { "start": 15, "end": 39, "text": "(Balazevic et al., 2019)", "ref_id": "BIBREF2" }, { "start": 243, "end": 269, "text": "(Subramanian et al., 2018)", "ref_id": "BIBREF44" } ], "ref_spans": [], "eq_spans": [], "section": "KnowBert-Wiki", "sec_num": null }, { "text": "For supervision, we combined the SemCor word sense disambiguation (WSD) dataset (Miller et al., 1994) with all lemma example usages from WordNet 6 and link directly to synsets. The loss function is Eq. 4. At train time, we did not provide gold lemmas or POS tags, so KnowBert must learn to implicitly model coarse grained POS tags to disambiguate each word. At test time when evaluating we restricted candidate entities to just those matching the gold lemma and POS tag, consistent with the standard WSD evaluation.", "cite_spans": [ { "start": 80, "end": 101, "text": "(Miller et al., 1994)", "ref_id": "BIBREF31" } ], "ref_spans": [], "eq_spans": [], "section": "KnowBert-Wiki", "sec_num": null }, { "text": "Training details To control for the unlabeled corpus, we concatenated Wikipedia and the Books Corpus and followed the data preparation process in BERT with the exception of heavily biasing our dataset to shorter sequences of 128 word pieces for efficiency. Both KnowBert-", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "KnowBert-Wiki", "sec_num": null }, { "text": "AIDA-A AIDA-B Daiber et al. (2013) 49.9 52.0 Hoffart et al. (2011) 68.8 71.9 Kolitsas et al. (2018) 86.6 82.6 KnowBert-Wiki 80.2 74.4 KnowBert-W+W 82.1 73.7 Table 3 : End-to-end entity linking strong match, micro averaged F 1 .", "cite_spans": [ { "start": 14, "end": 34, "text": "Daiber et al. (2013)", "ref_id": "BIBREF11" }, { "start": 45, "end": 66, "text": "Hoffart et al. (2011)", "ref_id": "BIBREF19" }, { "start": 77, "end": 99, "text": "Kolitsas et al. 
(2018)", "ref_id": "BIBREF21" } ], "ref_spans": [ { "start": 157, "end": 164, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "System", "sec_num": null }, { "text": "Wiki and KnowBert-WordNet insert the KB between layers 10 and 11 of the 12-layer BERT BASE model. KnowBert-W+W adds the Wikipedia KB between layers 10 and 11, with WordNet between layers 11 and 12. Earlier experiments with KnowBert-WordNet in a lower layer had worse perplexity. We generally followed the fine-tuning procedure in Devlin et al. (2019) ; see supplemental materials for details.", "cite_spans": [ { "start": 330, "end": 350, "text": "Devlin et al. (2019)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "System", "sec_num": null }, { "text": "Perplexity Table 1 compares masked LM perplexity for KnowBert with BERT BASE and BERT LARGE . To rule out minor differences due to our data preparation, the BERT models are finetuned on our training data before being evaluated. Overall, KnowBert improves the masked LM perplexity, with all KnowBert models outperforming BERT LARGE , despite being derived from BERT BASE .", "cite_spans": [], "ref_spans": [ { "start": 11, "end": 18, "text": "Table 1", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Intrinsic Evaluation", "sec_num": "4.2" }, { "text": "Factual recall To test KnowBert's ability to recall facts from the KBs, we extracted 90K tuples from Wikidata (Vrande\u010di\u0107 and Kr\u00f6tzsch, 2014) for 17 different relationships such as companyFoundedBy. Each tuple was written into natural language such as \"Adidas was founded by Adolf Dassler\" and used to construct two test instances, one that masks out the subject and one that masks the object. Then, we evaluated whether a model could recover the masked entity by computing the mean reciprocal rank (MRR) of the masked word pieces. of (frozen) parameters in the entity embeddings (Table 1) . KnowBert is much faster than BERT LARGE . By taking advantage of the already high capacity model, the number of trainable parameters added by KnowBert is a fraction of the total parameters in BERT. The faster speed is partially due to the entity parameter efficiency in KnowBert as only as small fraction of parameters in the entity embeddings are used for any given input due to the sparse linker. Our candidate generators consider the top 30 candidates and produce approximately O(number tokens) candidate spans. For a typical 25 token sentence, approximately 2M entity embedding parameters are actually used. In contrast, BERT LARGE uses the majority of its 336M parameters for each input.", "cite_spans": [ { "start": 110, "end": 140, "text": "(Vrande\u010di\u0107 and Kr\u00f6tzsch, 2014)", "ref_id": "BIBREF49" } ], "ref_spans": [ { "start": 579, "end": 588, "text": "(Table 1)", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Intrinsic Evaluation", "sec_num": "4.2" }, { "text": "Integrated EL It is also possible to evaluate the performance of the integrated entity linkers inside KnowBert using diagnostic probes without any further fine-tuning. As these were trained in a multitask setting primarily with raw text, we do not a priori expect high performance as they must balance specializing for the entity linking task and learning general purpose representations suitable for language modeling. Table 2 displays fine-grained WSD F 1 using the evaluation framework from and the ALL dataset (combing SemEval 2007 . 
By linking to nodes in our WordNet graph and restricting to gold lemmas at test time we can recast the WSD task under our general entity linking framework. The ELMo and BERT baselines use a nearest neighbor approach trained on the SemCor dataset, similar to the evaluation in Melamud et al. (2016) , which has previously been shown to be competitive with task-specific architectures . As can be seen, KnowBert provides competitive performance, and KnowBert-W+W is able to match the performance of KnowBert-WordNet despite incorporating both Wikipedia and WordNet. Table 3 reports end-to-end entity linking performance for the AIDA-A and AIDA-B datasets. Here, KnowBert's performance lags behind the current state-of-the-art model from Kolitsas et al. (2018) , but still provides strong performance compared to other established systems such as AIDA (Hoffart et al., 2011) and DBpedia Spotlight (Daiber et al., 2013) . We believe this is due to the selective annotation in the AIDA data that only annotates named entities. The CrossWikisbased candidate selector used in KnowBert generates candidate mentions for all entities including common nouns from which KnowBert may be learning to extract information, at the detriment of specializing to maximize linking performance for AIDA.", "cite_spans": [ { "start": 514, "end": 535, "text": "(combing SemEval 2007", "ref_id": null }, { "start": 814, "end": 835, "text": "Melamud et al. (2016)", "ref_id": "BIBREF26" }, { "start": 1273, "end": 1295, "text": "Kolitsas et al. (2018)", "ref_id": "BIBREF21" }, { "start": 1387, "end": 1409, "text": "(Hoffart et al., 2011)", "ref_id": "BIBREF19" }, { "start": 1432, "end": 1453, "text": "(Daiber et al., 2013)", "ref_id": "BIBREF11" } ], "ref_spans": [ { "start": 420, "end": 427, "text": "Table 2", "ref_id": "TABREF5" }, { "start": 1102, "end": 1109, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "Intrinsic Evaluation", "sec_num": "4.2" }, { "text": "This section evaluates KnowBert on downstream tasks to validate that the addition of knowledge improves performance on tasks expected to benefit from it. Given the overall superior performance of KnowBert-W+W on the intrinsic evaluations, we focus on it exclusively for evaluation in this section. The main results are included in this section; see the supplementary material for full details.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Downstream Tasks", "sec_num": "4.3" }, { "text": "The baselines we compare against are BERT BASE , BERT LARGE , the pre-BERT state of the art, and two contemporaneous papers that add similar types of knowledge to BERT. ERNIE (Zhang et al., 2019) uses TAGME (Ferragina and Scaiella, 2010) to link entities to Wikidata, retrieves the associated entity embeddings, and fuses them into BERT BASE by fine-tuning. Soares et al. (2019) learns relationship representations by fine-tuning BERT LARGE with large scale \"matching the blanks\" (MTB) pretraining using entity linked text. Relation extraction Our first task is relation extraction using the TACRED (Zhang et al., 2017) and SemEval 2010 Task 8 (Hendrickx et al., 2009) datasets. Systems are given a sentence with marked a subject and object, and asked to predict which of several different relations (or no relation) holds. Following Soares et al. 2019 ] to mark the location of the subject and object in the input sentence, then concatenates the contextual word representations for [E1] and [E2] to predict the relationship. 
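A minimal sketch of this marking and classification scheme follows; the helper names, the classification head, and the assumption that the subject span precedes the object span are ours.

```python
# Illustrative sketch of entity-marker fine-tuning for relation extraction:
# wrap the subject and object in [E1] ... [/E1] and [E2] ... [/E2], then
# concatenate the contextual vectors at the [E1] and [E2] positions and classify.
import torch
import torch.nn as nn

class RelationHead(nn.Module):
    def __init__(self, hidden_dim, num_relations):
        super().__init__()
        self.classifier = nn.Linear(2 * hidden_dim, num_relations)

    def forward(self, hidden_states, e1_index, e2_index):
        # hidden_states: (B, N, D); e1_index / e2_index: (B,) marker positions
        batch = torch.arange(hidden_states.size(0))
        pair = torch.cat([hidden_states[batch, e1_index],
                          hidden_states[batch, e2_index]], dim=-1)
        return self.classifier(pair)              # (B, num_relations)

def mark_entities(tokens, subj_span, obj_span):
    # Insert marker tokens around the subject and object word pieces, e.g.
    # "[E1] Adidas [/E1] was founded by [E2] Adolf Dassler [/E2]".
    (s0, s1), (o0, o1) = subj_span, obj_span
    out = list(tokens)
    # insert from the right so earlier indices stay valid (assumes subj before obj)
    out[o1:o1] = ['[/E2]']
    out[o0:o0] = ['[E2]']
    out[s1:s1] = ['[/E1]']
    out[s0:s0] = ['[E1]']
    return out

marked = mark_entities(['Adidas', 'was', 'founded', 'by', 'Adolf', 'Dassler'],
                       subj_span=(0, 1), obj_span=(4, 6))
logits = RelationHead(hidden_dim=768, num_relations=42)(
    torch.randn(1, len(marked), 768),
    e1_index=torch.tensor([0]), e2_index=torch.tensor([5]))
```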
For TACRED, we also encode the subject and object types with special tokens and concatenate them to the end of the sentence.", "cite_spans": [ { "start": 175, "end": 195, "text": "(Zhang et al., 2019)", "ref_id": "BIBREF65" }, { "start": 207, "end": 237, "text": "(Ferragina and Scaiella, 2010)", "ref_id": "BIBREF15" }, { "start": 599, "end": 619, "text": "(Zhang et al., 2017)", "ref_id": "BIBREF64" }, { "start": 644, "end": 668, "text": "(Hendrickx et al., 2009)", "ref_id": "BIBREF18" } ], "ref_spans": [], "eq_spans": [], "section": "Downstream Tasks", "sec_num": "4.3" }, { "text": "For TACRED (Table 4) , KnowBert-W+W significantly outperforms the comparable BERT BASE systems including ERNIE by 3.5%, improves over BERT LARGE by 1.4%, and is able to match the performance of the relationship specific MTB pretraining in Soares et al. (2019) . For SemEval 2010 Task 8 (Table 5) , KnowBert-W+W F 1 falls between the entity aware BERT BASE model from Wang et al. (2019b) , and the BERT LARGE model from Soares et al. (2019).", "cite_spans": [ { "start": 239, "end": 259, "text": "Soares et al. (2019)", "ref_id": "BIBREF41" }, { "start": 367, "end": 386, "text": "Wang et al. (2019b)", "ref_id": "BIBREF52" } ], "ref_spans": [ { "start": 11, "end": 20, "text": "(Table 4)", "ref_id": null }, { "start": 286, "end": 295, "text": "(Table 5)", "ref_id": "TABREF7" } ], "eq_spans": [], "section": "Downstream Tasks", "sec_num": "4.3" }, { "text": "Words in Context (WiC) WiC (Pilehvar and Camacho-Collados, 2019) is a challenging task that presents systems with two sentences both containing a word with the same lemma and asks them to determine if they are from the same sense or not. It is designed to test the quality of contextual word representations. We follow standard practice and concatenate both sentences with a [SEP] token and fine-tune the [CLS] embedding. As shown in Table 6 , KnowBert-W+W sets a new state of the art for this task, improving over BERT LARGE by 1.4% and reducing the relative gap to 80% human performance by 13.3%. 76.4 71.0 73.6 ERNIE 78.4 72.9 75.6 KnowBert-W+W 78.6 73.7 76.1 Table 7 : Test set results for entity typing using the nine general types from (Choi et al., 2018) .", "cite_spans": [ { "start": 742, "end": 761, "text": "(Choi et al., 2018)", "ref_id": "BIBREF8" } ], "ref_spans": [ { "start": 434, "end": 441, "text": "Table 6", "ref_id": "TABREF9" }, { "start": 663, "end": 670, "text": "Table 7", "ref_id": null } ], "eq_spans": [], "section": "Downstream Tasks", "sec_num": "4.3" }, { "text": "Entity typing We also evaluated KnowBert-W+W using the entity typing dataset from Choi et al. (2018) . To directly compare to ERNIE, we adopted the evaluation protocol in Zhang et al. (2019) which considers the nine general entity types. 7 Our model marks the location of a target span with the special [E] and [/E] tokens and uses the representation of the [E] token to predict the type. As shown in Table 7 , KnowBert-W+W shows an improvement of 0.6% F 1 over ERNIE and 2.5% over BERT BASE .", "cite_spans": [ { "start": 82, "end": 100, "text": "Choi et al. (2018)", "ref_id": "BIBREF8" }, { "start": 171, "end": 190, "text": "Zhang et al. (2019)", "ref_id": "BIBREF65" } ], "ref_spans": [ { "start": 401, "end": 408, "text": "Table 7", "ref_id": null } ], "eq_spans": [], "section": "Downstream Tasks", "sec_num": "4.3" }, { "text": "We have presented an efficient and general method to insert prior knowledge into a deep neural model. 
Intrinsic evaluations demonstrate that the addition of WordNet and Wikipedia to BERT improves the quality of the masked LM and significantly improves its ability to recall facts. Downstream evaluations demonstrate improvements for relationship extraction, entity typing and word sense disambiguation datasets. Future work will involve incorporating a diverse set of domain specific KBs for specialized NLP applications.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "5" }, { "text": "We found a small transformer layer with four attention heads and a 1024 feed-forward hidden dimension was sufficient, significantly smaller than each of the layers in BERT. Early experiments demonstrated the effectiveness of this layer with improved entity linking performance.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "As for the multi-headed entity-span self-attention, we found a small transformer layer to be sufficient, with four attention heads and 1024 hidden units in the MLP.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Following BERT, for 80% of masked word pieces all candidates are replaced with [MASK], 10% are replaced with random candidates and 10% left unmasked.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://github.com/facebookresearch/ PyTorch-BigGraph", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://spacy.io/ 6 To provide a fair evaluation on the WiC dataset which is partially based on the same source, we excluded all WiC train, development and test instances.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Data obtained from https://github.com/ thunlp/ERNIE", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "The authors acknowledge helpful feedback from anonymous reviewers and the AllenNLP team. This research was funded in part by the NSF under awards IIS-1817183 and CNS-1730158.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgements", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "A neural knowledge language model", "authors": [ { "first": "Heeyoul", "middle": [], "last": "Sungjin Ahn", "suffix": "" }, { "first": "Tanel", "middle": [], "last": "Choi", "suffix": "" }, { "first": "Yoshua", "middle": [], "last": "P\u00e4rnamaa", "suffix": "" }, { "first": "", "middle": [], "last": "Bengio", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1608.00318" ] }, "num": null, "urls": [], "raw_text": "Sungjin Ahn, Heeyoul Choi, Tanel P\u00e4rnamaa, and Yoshua Bengio. 2017. A neural knowledge lan- guage model. arXiv:1608.00318.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Improving relation extraction by pre-trained language representations", "authors": [ { "first": "Christoph", "middle": [], "last": "Alt", "suffix": "" }, { "first": "Marc", "middle": [], "last": "H\u00fcbner", "suffix": "" }, { "first": "Leonhard", "middle": [], "last": "Hennig", "suffix": "" } ], "year": 2019, "venue": "AKBC", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Christoph Alt, Marc H\u00fcbner, and Leonhard Hennig. 2019. 
Improving relation extraction by pre-trained language representations. In AKBC.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "TuckER: Tensor factorization for knowledge graph completion", "authors": [ { "first": "Ivana", "middle": [], "last": "Balazevic", "suffix": "" }, { "first": "Carl", "middle": [], "last": "Allen", "suffix": "" }, { "first": "Timothy", "middle": [ "M" ], "last": "Hospedales", "suffix": "" } ], "year": 2019, "venue": "EMNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ivana Balazevic, Carl Allen, and Timothy M. Hospedales. 2019. TuckER: Tensor factorization for knowledge graph completion. In EMNLP.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Commonsense for generative multi-hop question answering tasks", "authors": [ { "first": "Lisa", "middle": [], "last": "Bauer", "suffix": "" }, { "first": "Yicheng", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Mohit", "middle": [], "last": "Bansal", "suffix": "" } ], "year": 2018, "venue": "EMNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lisa Bauer, Yicheng Wang, and Mohit Bansal. 2018. Commonsense for generative multi-hop question an- swering tasks. In EMNLP.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "The Unified Medical Language System (UMLS): integrating biomedical terminology", "authors": [ { "first": "Olivier", "middle": [], "last": "Bodenreider", "suffix": "" } ], "year": 2004, "venue": "Nucleic Acids Research", "volume": "32", "issue": "", "pages": "267--70", "other_ids": {}, "num": null, "urls": [], "raw_text": "Olivier Bodenreider. 2004. The Unified Medical Lan- guage System (UMLS): integrating biomedical ter- minology. Nucleic Acids Research, 32 Database issue:D267-70.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Translating embeddings for modeling multirelational data", "authors": [ { "first": "Antoine", "middle": [], "last": "Bordes", "suffix": "" }, { "first": "Nicolas", "middle": [], "last": "Usunier", "suffix": "" }, { "first": "Alberto", "middle": [], "last": "Garcia-Duran", "suffix": "" }, { "first": "Jason", "middle": [], "last": "Weston", "suffix": "" }, { "first": "Oksana", "middle": [], "last": "Yakhnenko", "suffix": "" } ], "year": 2013, "venue": "NeurIPS", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Antoine Bordes, Nicolas Usunier, Alberto Garcia- Duran, Jason Weston, and Oksana Yakhnenko. 2013. Translating embeddings for modeling multi- relational data. In NeurIPS.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Neural natural language inference models enhanced with external knowledge", "authors": [ { "first": "Qian", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Xiaodan", "middle": [], "last": "Zhu", "suffix": "" }, { "first": "Zhen-Hua", "middle": [], "last": "Ling", "suffix": "" }, { "first": "Diana", "middle": [], "last": "Inkpen", "suffix": "" }, { "first": "Si", "middle": [], "last": "Wei", "suffix": "" } ], "year": 2018, "venue": "ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Qian Chen, Xiaodan Zhu, Zhen-Hua Ling, Diana Inkpen, and Si Wei. 2018. Neural natural language inference models enhanced with external knowl- edge. 
In ACL.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "A unified model for word sense representation and disambiguation", "authors": [ { "first": "Xinxiong", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Zhiyuan", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Maosong", "middle": [], "last": "Sun", "suffix": "" } ], "year": 2014, "venue": "EMNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xinxiong Chen, Zhiyuan Liu, and Maosong Sun. 2014. A unified model for word sense representation and disambiguation. In EMNLP.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Ultra-fine entity typing", "authors": [ { "first": "Eunsol", "middle": [], "last": "Choi", "suffix": "" }, { "first": "Omer", "middle": [], "last": "Levy", "suffix": "" }, { "first": "Yejin", "middle": [], "last": "Choi", "suffix": "" }, { "first": "Luke", "middle": [], "last": "Zettlemoyer", "suffix": "" } ], "year": 2018, "venue": "ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Eunsol Choi, Omer Levy, Yejin Choi, and Luke Zettle- moyer. 2018. Ultra-fine entity typing. In ACL.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "A unified architecture for natural language processing: Deep neural networks with multitask learning", "authors": [ { "first": "Ronan", "middle": [], "last": "Collobert", "suffix": "" }, { "first": "Jason", "middle": [], "last": "Weston", "suffix": "" } ], "year": 2008, "venue": "ICML", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ronan Collobert and Jason Weston. 2008. A unified architecture for natural language processing: Deep neural networks with multitask learning. In ICML.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Semisupervised sequence learning", "authors": [ { "first": "M", "middle": [], "last": "Andrew", "suffix": "" }, { "first": "", "middle": [], "last": "Dai", "suffix": "" }, { "first": "V", "middle": [], "last": "Quoc", "suffix": "" }, { "first": "", "middle": [], "last": "Le", "suffix": "" } ], "year": 2015, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Andrew M. Dai and Quoc V. Le. 2015. Semi- supervised sequence learning. In NeurIPS.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Improving efficiency and accuracy in multilingual entity extraction", "authors": [ { "first": "Joachim", "middle": [], "last": "Daiber", "suffix": "" }, { "first": "Max", "middle": [], "last": "Jakob", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Hokamp", "suffix": "" }, { "first": "Pablo", "middle": [ "N" ], "last": "Mendes", "suffix": "" } ], "year": 2013, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Joachim Daiber, Max Jakob, Chris Hokamp, and Pablo N. Mendes. 2013. Improving efficiency and accuracy in multilingual entity extraction. 
In I- SEMANTICS.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Convolutional 2d knowledge graph embeddings", "authors": [ { "first": "Tim", "middle": [], "last": "Dettmers", "suffix": "" }, { "first": "Pasquale", "middle": [], "last": "Minervini", "suffix": "" }, { "first": "Pontus", "middle": [], "last": "Stenetorp", "suffix": "" }, { "first": "Sebastian", "middle": [], "last": "Riedel", "suffix": "" } ], "year": 2018, "venue": "AAAI", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tim Dettmers, Pasquale Minervini, Pontus Stenetorp, and Sebastian Riedel. 2018. Convolutional 2d knowledge graph embeddings. In AAAI.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "BERT: Pre-training of deep bidirectional transformers for language understanding", "authors": [ { "first": "Jacob", "middle": [], "last": "Devlin", "suffix": "" }, { "first": "Ming-Wei", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Kristina", "middle": [], "last": "Toutanova", "suffix": "" } ], "year": 2019, "venue": "NAACL-HLT", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. In NAACL-HLT.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Using millions of emoji occurrences to learn any-domain representations for detecting sentiment, emotion and sarcasm", "authors": [ { "first": "Bjarke", "middle": [], "last": "Felbo", "suffix": "" }, { "first": "Alan", "middle": [], "last": "Mislove", "suffix": "" }, { "first": "Anders", "middle": [], "last": "S\u00f8gaard", "suffix": "" }, { "first": "Iyad", "middle": [], "last": "Rahwan", "suffix": "" }, { "first": "Sune", "middle": [], "last": "Lehmann", "suffix": "" } ], "year": 2017, "venue": "EMNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bjarke Felbo, Alan Mislove, Anders S\u00f8gaard, Iyad Rahwan, and Sune Lehmann. 2017. Using millions of emoji occurrences to learn any-domain represen- tations for detecting sentiment, emotion and sar- casm. In EMNLP.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "TAGME: on-the-fly annotation of short text fragments (by wikipedia entities)", "authors": [ { "first": "Paolo", "middle": [], "last": "Ferragina", "suffix": "" }, { "first": "Ugo", "middle": [], "last": "Scaiella", "suffix": "" } ], "year": 2010, "venue": "CIKM", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Paolo Ferragina and Ugo Scaiella. 2010. TAGME: on-the-fly annotation of short text fragments (by wikipedia entities). In CIKM.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Deep joint entity disambiguation with local neural attention", "authors": [ { "first": "Eugen", "middle": [], "last": "Octavian", "suffix": "" }, { "first": "Thomas", "middle": [], "last": "Ganea", "suffix": "" }, { "first": "", "middle": [], "last": "Hofmann", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Octavian-Eugen Ganea and Thomas Hofmann. 2017. Deep joint entity disambiguation with local neural attention. 
In EMNLP.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Entity linking via joint encoding of types, descriptions, and context", "authors": [ { "first": "Nitish", "middle": [], "last": "Gupta", "suffix": "" }, { "first": "Sameer", "middle": [], "last": "Singh", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Roth", "suffix": "" } ], "year": 2017, "venue": "EMNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nitish Gupta, Sameer Singh, and Dan Roth. 2017. En- tity linking via joint encoding of types, descriptions, and context. In EMNLP.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "SemEval-2010 task 8: Multi-way classification of semantic relations between pairs of nominals", "authors": [ { "first": "Iris", "middle": [], "last": "Hendrickx", "suffix": "" }, { "first": "Su", "middle": [ "Nam" ], "last": "Kim", "suffix": "" }, { "first": "Zornitsa", "middle": [], "last": "Kozareva", "suffix": "" }, { "first": "Preslav", "middle": [], "last": "Nakov", "suffix": "" }, { "first": "Diarmuid\u00f3", "middle": [], "last": "S\u00e9aghdha", "suffix": "" }, { "first": "Sebastian", "middle": [], "last": "Pad\u00f3", "suffix": "" }, { "first": "Marco", "middle": [], "last": "Pennacchiotti", "suffix": "" }, { "first": "Lorenza", "middle": [], "last": "Romano", "suffix": "" }, { "first": "Stan", "middle": [], "last": "Szpakowicz", "suffix": "" } ], "year": 2009, "venue": "HLT-NAACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Iris Hendrickx, Su Nam Kim, Zornitsa Kozareva, Preslav Nakov, Diarmuid\u00d3 S\u00e9aghdha, Sebastian Pad\u00f3, Marco Pennacchiotti, Lorenza Romano, and Stan Szpakowicz. 2009. SemEval-2010 task 8: Multi-way classification of semantic relations be- tween pairs of nominals. In HLT-NAACL.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Robust disambiguation of named entities in text", "authors": [ { "first": "Johannes", "middle": [], "last": "Hoffart", "suffix": "" }, { "first": "Mohamed", "middle": [ "Amir" ], "last": "Yosef", "suffix": "" }, { "first": "Ilaria", "middle": [], "last": "Bordino", "suffix": "" }, { "first": "Hagen", "middle": [], "last": "F\u00fcrstenau", "suffix": "" }, { "first": "Manfred", "middle": [], "last": "Pinkal", "suffix": "" }, { "first": "Marc", "middle": [], "last": "Spaniol", "suffix": "" }, { "first": "Bilyana", "middle": [], "last": "Taneva", "suffix": "" }, { "first": "Stefan", "middle": [], "last": "Thater", "suffix": "" }, { "first": "Gerhard", "middle": [], "last": "Weikum", "suffix": "" } ], "year": 2011, "venue": "EMNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Johannes Hoffart, Mohamed Amir Yosef, Ilaria Bor- dino, Hagen F\u00fcrstenau, Manfred Pinkal, Marc Span- iol, Bilyana Taneva, Stefan Thater, and Gerhard Weikum. 2011. Robust disambiguation of named entities in text. 
In EMNLP.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Dynamic entity representations in neural language models", "authors": [ { "first": "Yangfeng", "middle": [], "last": "Ji", "suffix": "" }, { "first": "Chenhao", "middle": [], "last": "Tan", "suffix": "" }, { "first": "Sebastian", "middle": [], "last": "Martschat", "suffix": "" }, { "first": "Yejin", "middle": [], "last": "Choi", "suffix": "" }, { "first": "Noah A", "middle": [], "last": "Smith", "suffix": "" } ], "year": 2017, "venue": "EMNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yangfeng Ji, Chenhao Tan, Sebastian Martschat, Yejin Choi, and Noah A Smith. 2017. Dynamic entity rep- resentations in neural language models. In EMNLP.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "End-to-end neural entity linking", "authors": [ { "first": "Nikolaos", "middle": [], "last": "Kolitsas", "suffix": "" }, { "first": "", "middle": [], "last": "Octavian-Eugen", "suffix": "" }, { "first": "Thomas", "middle": [], "last": "Ganea", "suffix": "" }, { "first": "", "middle": [], "last": "Hofmann", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nikolaos Kolitsas, Octavian-Eugen Ganea, and Thomas Hofmann. 2018. End-to-end neural entity linking. In CoNLL.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "End-to-end neural coreference resolution", "authors": [ { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Luheng", "middle": [], "last": "He", "suffix": "" }, { "first": "Mike", "middle": [], "last": "Lewis", "suffix": "" }, { "first": "Luke", "middle": [ "S" ], "last": "Zettlemoyer", "suffix": "" } ], "year": 2017, "venue": "EMNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kenton Lee, Luheng He, Mike Lewis, and Luke S. Zettlemoyer. 2017. End-to-end neural coreference resolution. In EMNLP.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Learning entity and relation embeddings for knowledge graph completion", "authors": [ { "first": "Yankai", "middle": [], "last": "Lin", "suffix": "" }, { "first": "Zhiyuan", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Maosong", "middle": [], "last": "Sun", "suffix": "" }, { "first": "Yang", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Xuan", "middle": [], "last": "Zhu", "suffix": "" } ], "year": 2015, "venue": "AAAI", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yankai Lin, Zhiyuan Liu, Maosong Sun, Yang Liu, and Xuan Zhu. 2015. Learning entity and relation em- beddings for knowledge graph completion. In AAAI.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Design challenges for entity linking", "authors": [ { "first": "Xiao", "middle": [], "last": "Ling", "suffix": "" }, { "first": "Sameer", "middle": [], "last": "Singh", "suffix": "" }, { "first": "Daniel", "middle": [ "S" ], "last": "Weld", "suffix": "" } ], "year": 2015, "venue": "Transactions of the Association for Computational Linguistics", "volume": "3", "issue": "", "pages": "315--328", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xiao Ling, Sameer Singh, and Daniel S. Weld. 2015. Design challenges for entity linking. 
Transactions of the Association for Computational Linguistics, 3:315-328.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Barack's wife hillary: Using knowledge graphs for fact-aware language modeling", "authors": [ { "first": "L", "middle": [], "last": "Robert", "suffix": "" }, { "first": "Nelson", "middle": [ "F" ], "last": "Logan", "suffix": "" }, { "first": "Matthew", "middle": [ "E" ], "last": "Liu", "suffix": "" }, { "first": "Matthew", "middle": [ "Ph" ], "last": "Peters", "suffix": "" }, { "first": "Sameer", "middle": [], "last": "Gardner", "suffix": "" }, { "first": "", "middle": [], "last": "Singh", "suffix": "" } ], "year": 2019, "venue": "ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Robert L Logan, Nelson F. Liu, Matthew E. Peters, Matthew Ph Gardner, and Sameer Singh. 2019. Barack's wife hillary: Using knowledge graphs for fact-aware language modeling. In ACL.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "context2vec: Learning generic context embedding with bidirectional LSTM", "authors": [ { "first": "Oren", "middle": [], "last": "Melamud", "suffix": "" }, { "first": "Jacob", "middle": [], "last": "Goldberger", "suffix": "" }, { "first": "Ido", "middle": [], "last": "Dagan", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Oren Melamud, Jacob Goldberger, and Ido Dagan. 2016. context2vec: Learning generic context em- bedding with bidirectional LSTM. In CoNLL.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Knowledgeable reader: Enhancing cloze-style reading comprehension with external commonsense knowledge", "authors": [ { "first": "Todor", "middle": [], "last": "Mihaylov", "suffix": "" }, { "first": "Anette", "middle": [], "last": "Frank", "suffix": "" } ], "year": 2018, "venue": "ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Todor Mihaylov and Anette Frank. 2018. Knowledge- able reader: Enhancing cloze-style reading compre- hension with external commonsense knowledge. In ACL.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Efficient estimation of word representations in vector space", "authors": [ { "first": "Tomas", "middle": [], "last": "Mikolov", "suffix": "" }, { "first": "Kai", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Greg", "middle": [], "last": "Corrado", "suffix": "" }, { "first": "Jeffrey", "middle": [], "last": "Dean", "suffix": "" } ], "year": 2013, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1301.3781" ] }, "num": null, "urls": [], "raw_text": "Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013a. Efficient estimation of word represen- tations in vector space. 
arXiv:1301.3781.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Distributed representations of words and phrases and their compositionality", "authors": [ { "first": "Tomas", "middle": [], "last": "Mikolov", "suffix": "" }, { "first": "Ilya", "middle": [], "last": "Sutskever", "suffix": "" }, { "first": "Kai", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Greg", "middle": [ "S" ], "last": "Corrado", "suffix": "" }, { "first": "Jeff", "middle": [], "last": "Dean", "suffix": "" } ], "year": 2013, "venue": "NeurIPS", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Cor- rado, and Jeff Dean. 2013b. Distributed representa- tions of words and phrases and their compositional- ity. In NeurIPS.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "WordNet: a lexical database for English", "authors": [ { "first": "A", "middle": [], "last": "George", "suffix": "" }, { "first": "", "middle": [], "last": "Miller", "suffix": "" } ], "year": 1995, "venue": "Communications of the ACM", "volume": "38", "issue": "11", "pages": "39--41", "other_ids": {}, "num": null, "urls": [], "raw_text": "George A Miller. 1995. WordNet: a lexical database for English. Communications of the ACM, 38(11):39-41.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "Using a semantic concordance for sense identification", "authors": [ { "first": "George", "middle": [ "A" ], "last": "Miller", "suffix": "" }, { "first": "Martin", "middle": [], "last": "Chodorow", "suffix": "" }, { "first": "Shari", "middle": [], "last": "Landes", "suffix": "" }, { "first": "Claudia", "middle": [], "last": "Leacock", "suffix": "" }, { "first": "Robert", "middle": [ "G" ], "last": "Thomas", "suffix": "" } ], "year": 1994, "venue": "HLT", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "George A. Miller, Martin Chodorow, Shari Landes, Claudia Leacock, and Robert G. Thomas. 1994. Us- ing a semantic concordance for sense identification. In HLT.", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "Word sense disambiguation: A unified evaluation framework and empirical comparison", "authors": [ { "first": "Roberto", "middle": [], "last": "Navigli", "suffix": "" }, { "first": "Jos\u00e9", "middle": [], "last": "Camacho-Collados", "suffix": "" }, { "first": "Alessandro", "middle": [], "last": "Raganato", "suffix": "" } ], "year": 2017, "venue": "EACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Roberto Navigli, Jos\u00e9 Camacho-Collados, and Alessandro Raganato. 2017. Word sense disam- biguation: A unified evaluation framework and empirical comparison. In EACL.", "links": null }, "BIBREF33": { "ref_id": "b33", "title": "A three-way model for collective learning on multi-relational data", "authors": [ { "first": "Maximilian", "middle": [], "last": "Nickel", "suffix": "" }, { "first": "Hans-Peter", "middle": [], "last": "Volker Tresp", "suffix": "" }, { "first": "", "middle": [], "last": "Kriegel", "suffix": "" } ], "year": 2011, "venue": "ICML", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Maximilian Nickel, Volker Tresp, and Hans-Peter Kriegel. 2011. A three-way model for collective learning on multi-relational data. 
In ICML.", "links": null }, "BIBREF34": { "ref_id": "b34", "title": "GloVe: Global vectors for word representation", "authors": [ { "first": "Jeffrey", "middle": [], "last": "Pennington", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Socher", "suffix": "" }, { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "" } ], "year": 2014, "venue": "EMNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jeffrey Pennington, Richard Socher, and Christo- pher D. Manning. 2014. GloVe: Global vectors for word representation. In EMNLP.", "links": null }, "BIBREF35": { "ref_id": "b35", "title": "Deep contextualized word representations", "authors": [ { "first": "Matthew", "middle": [ "E" ], "last": "Peters", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Neumann", "suffix": "" }, { "first": "Mohit", "middle": [], "last": "Iyyer", "suffix": "" }, { "first": "Matt", "middle": [], "last": "Gardner", "suffix": "" }, { "first": "Christopher", "middle": [], "last": "Clark", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Luke", "middle": [], "last": "Zettlemoyer", "suffix": "" } ], "year": 2018, "venue": "NAACL-HLT", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word rep- resentations. In NAACL-HLT.", "links": null }, "BIBREF36": { "ref_id": "b36", "title": "WiC: the word-in-context dataset for evaluating context-sensitive meaning representations", "authors": [ { "first": "Mohammad", "middle": [], "last": "Taher Pilehvar", "suffix": "" }, { "first": "Jos\u00e9", "middle": [], "last": "Camacho-Collados", "suffix": "" } ], "year": 2019, "venue": "NAACL-HLT", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mohammad Taher Pilehvar and Jos\u00e9 Camacho- Collados. 2019. WiC: the word-in-context dataset for evaluating context-sensitive meaning representa- tions. In NAACL-HLT.", "links": null }, "BIBREF37": { "ref_id": "b37", "title": "Improving language understanding by generative pre-training", "authors": [ { "first": "Alec", "middle": [], "last": "Radford", "suffix": "" }, { "first": "Karthik", "middle": [], "last": "Narasimhan", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. 2018. Improving language under- standing by generative pre-training.", "links": null }, "BIBREF38": { "ref_id": "b38", "title": "Neural sequence learning models for word sense disambiguation", "authors": [ { "first": "Alessandro", "middle": [], "last": "Raganato", "suffix": "" }, { "first": "Claudio", "middle": [ "Delli" ], "last": "Bovi", "suffix": "" }, { "first": "Roberto", "middle": [], "last": "Navigli", "suffix": "" } ], "year": 2017, "venue": "EMNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alessandro Raganato, Claudio Delli Bovi, and Roberto Navigli. 2017. Neural sequence learning models for word sense disambiguation. 
In EMNLP.", "links": null }, "BIBREF39": { "ref_id": "b39", "title": "Neural machine translation of rare words with subword units", "authors": [ { "first": "Rico", "middle": [], "last": "Sennrich", "suffix": "" }, { "first": "Barry", "middle": [], "last": "Haddow", "suffix": "" }, { "first": "Alexandra", "middle": [], "last": "Birch", "suffix": "" } ], "year": 2016, "venue": "ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural machine translation of rare words with subword units. In ACL.", "links": null }, "BIBREF40": { "ref_id": "b40", "title": "Simple BERT models for relation extraction and semantic role labeling", "authors": [ { "first": "Peng", "middle": [], "last": "Shi", "suffix": "" }, { "first": "Jimmy", "middle": [], "last": "Lin", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Peng Shi and Jimmy Lin. 2019. Simple BERT models for relation extraction and semantic role labeling.", "links": null }, "BIBREF41": { "ref_id": "b41", "title": "Matching the blanks: Distributional similarity for relation learning", "authors": [ { "first": "B", "middle": [], "last": "Livio", "suffix": "" }, { "first": "Nicholas", "middle": [], "last": "Soares", "suffix": "" }, { "first": "Jeffrey", "middle": [], "last": "Fitzgerald", "suffix": "" }, { "first": "Tom", "middle": [], "last": "Ling", "suffix": "" }, { "first": "", "middle": [], "last": "Kwiatkowski", "suffix": "" } ], "year": 2019, "venue": "ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Livio B. Soares, Nicholas FitzGerald, Jeffrey Ling, and Tom Kwiatkowski. 2019. Matching the blanks: Dis- tributional similarity for relation learning. In ACL.", "links": null }, "BIBREF42": { "ref_id": "b42", "title": "ConceptNet 5.5: An open multilingual graph of general knowledge", "authors": [ { "first": "Robert", "middle": [], "last": "Speer", "suffix": "" }, { "first": "Joshua", "middle": [], "last": "Chin", "suffix": "" }, { "first": "Catherine", "middle": [], "last": "Havasi", "suffix": "" } ], "year": 2017, "venue": "AAAI", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Robert Speer, Joshua Chin, and Catherine Havasi. 2017. ConceptNet 5.5: An open multilingual graph of general knowledge. In AAAI.", "links": null }, "BIBREF43": { "ref_id": "b43", "title": "A cross-lingual dictionary for English Wikipedia concepts", "authors": [ { "first": "I", "middle": [], "last": "Valentin", "suffix": "" }, { "first": "Angel", "middle": [ "X" ], "last": "Spitkovsky", "suffix": "" }, { "first": "", "middle": [], "last": "Chang", "suffix": "" } ], "year": 2012, "venue": "LREC", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Valentin I. Spitkovsky and Angel X. Chang. 2012. A cross-lingual dictionary for English Wikipedia con- cepts. 
In LREC.", "links": null }, "BIBREF44": { "ref_id": "b44", "title": "Learning general purpose distributed sentence representations via large scale multi-task learning", "authors": [ { "first": "Sandeep", "middle": [], "last": "Subramanian", "suffix": "" }, { "first": "Adam", "middle": [], "last": "Trischler", "suffix": "" }, { "first": "Yoshua", "middle": [], "last": "Bengio", "suffix": "" }, { "first": "Christopher", "middle": [ "J" ], "last": "Pal", "suffix": "" } ], "year": 2018, "venue": "ICLR", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sandeep Subramanian, Adam Trischler, Yoshua Ben- gio, and Christopher J Pal. 2018. Learning gen- eral purpose distributed sentence representations via large scale multi-task learning. In ICLR.", "links": null }, "BIBREF46": { "ref_id": "b46", "title": "Open domain question answering using early fusion of knowledge bases and text", "authors": [ { "first": "", "middle": [], "last": "Cohen", "suffix": "" } ], "year": 2018, "venue": "EMNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Cohen. 2018. Open domain question answering us- ing early fusion of knowledge bases and text. In EMNLP.", "links": null }, "BIBREF47": { "ref_id": "b47", "title": "Complex embeddings for simple link prediction", "authors": [ { "first": "Th\u00e9o", "middle": [], "last": "Trouillon", "suffix": "" }, { "first": "Johannes", "middle": [], "last": "Welbl", "suffix": "" }, { "first": "Sebastian", "middle": [], "last": "Riedel", "suffix": "" }, { "first": "\u00c9ric", "middle": [], "last": "Gaussier", "suffix": "" }, { "first": "Guillaume", "middle": [], "last": "Bouchard", "suffix": "" } ], "year": 2016, "venue": "ICML", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Th\u00e9o Trouillon, Johannes Welbl, Sebastian Riedel,\u00c9ric Gaussier, and Guillaume Bouchard. 2016. Complex embeddings for simple link prediction. In ICML.", "links": null }, "BIBREF48": { "ref_id": "b48", "title": "Attention is all you need", "authors": [ { "first": "Ashish", "middle": [], "last": "Vaswani", "suffix": "" }, { "first": "Noam", "middle": [], "last": "Shazeer", "suffix": "" }, { "first": "Niki", "middle": [], "last": "Parmar", "suffix": "" }, { "first": "Jakob", "middle": [], "last": "Uszkoreit", "suffix": "" }, { "first": "Llion", "middle": [], "last": "Jones", "suffix": "" }, { "first": "Aidan", "middle": [ "N" ], "last": "Gomez", "suffix": "" }, { "first": "Lukasz", "middle": [], "last": "Kaiser", "suffix": "" }, { "first": "Illia", "middle": [], "last": "Polosukhin", "suffix": "" } ], "year": 2017, "venue": "NeurIPS", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In NeurIPS.", "links": null }, "BIBREF49": { "ref_id": "b49", "title": "Wikidata: A free collaborative knowledgebase. Commun", "authors": [ { "first": "Denny", "middle": [], "last": "Vrande\u010di\u0107", "suffix": "" }, { "first": "Markus", "middle": [], "last": "Kr\u00f6tzsch", "suffix": "" } ], "year": 2014, "venue": "", "volume": "57", "issue": "", "pages": "78--85", "other_ids": { "DOI": [ "10.1145/2629489" ] }, "num": null, "urls": [], "raw_text": "Denny Vrande\u010di\u0107 and Markus Kr\u00f6tzsch. 2014. Wiki- data: A free collaborative knowledgebase. Com- mun. 
ACM, 57(10):78-85.", "links": null }, "BIBREF50": { "ref_id": "b50", "title": "SuperGLUE: A stickier benchmark for general-purpose language understanding systems", "authors": [ { "first": "Alex", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Yada", "middle": [], "last": "Pruksachatkun", "suffix": "" }, { "first": "Nikita", "middle": [], "last": "Nangia", "suffix": "" }, { "first": "Amanpreet", "middle": [], "last": "Singh", "suffix": "" }, { "first": "Julian", "middle": [], "last": "Michael", "suffix": "" }, { "first": "Felix", "middle": [], "last": "Hill", "suffix": "" }, { "first": "Omer", "middle": [], "last": "Levy", "suffix": "" }, { "first": "Samuel", "middle": [ "R" ], "last": "Bowman", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1905.00537" ] }, "num": null, "urls": [], "raw_text": "Alex Wang, Yada Pruksachatkun, Nikita Nangia, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R. Bowman. 2019a. SuperGLUE: A stickier benchmark for general-purpose language understanding systems. arXiv:1905.00537.", "links": null }, "BIBREF51": { "ref_id": "b51", "title": "Explicit utilization of general knowledge in machine reading comprehension", "authors": [ { "first": "Chao", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Hui", "middle": [], "last": "Jiang", "suffix": "" } ], "year": 2019, "venue": "ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chao Wang and Hui Jiang. 2019. Explicit utilization of general knowledge in machine reading comprehen- sion. In ACL.", "links": null }, "BIBREF52": { "ref_id": "b52", "title": "Extracting multiple-relations in one-pass with pre-trained transformers", "authors": [ { "first": "Haoyu", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Ming", "middle": [], "last": "Tan", "suffix": "" }, { "first": "Mo", "middle": [], "last": "Yu", "suffix": "" }, { "first": "Shiyu", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Dakuo", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Kun", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Xiaoxiao", "middle": [], "last": "Guo", "suffix": "" }, { "first": "Saloni", "middle": [], "last": "Potdar", "suffix": "" } ], "year": 2019, "venue": "ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Haoyu Wang, Ming Tan, Mo Yu, Shiyu Chang, Dakuo Wang, Kun Xu, Xiaoxiao Guo, and Saloni Potdar. 2019b. Extracting multiple-relations in one-pass with pre-trained transformers. In ACL.", "links": null }, "BIBREF53": { "ref_id": "b53", "title": "Relation classification via multi-level attention CNNs", "authors": [ { "first": "Linlin", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Zhu", "middle": [], "last": "Cao", "suffix": "" }, { "first": "Gerard", "middle": [], "last": "De Melo", "suffix": "" }, { "first": "Zhiyuan", "middle": [], "last": "Liu", "suffix": "" } ], "year": 2016, "venue": "ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Linlin Wang, Zhu Cao, Gerard de Melo, and Zhiyuan Liu. 2016. Relation classification via multi-level at- tention CNNs. 
In ACL.", "links": null }, "BIBREF54": { "ref_id": "b54", "title": "Knowledge graph and text jointly embedding", "authors": [ { "first": "Zhen", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Jianwen", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Jianlin", "middle": [], "last": "Feng", "suffix": "" }, { "first": "Zheng", "middle": [], "last": "Chen", "suffix": "" } ], "year": 2014, "venue": "EMNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zhen Wang, Jianwen Zhang, Jianlin Feng, and Zheng Chen. 2014a. Knowledge graph and text jointly em- bedding. In EMNLP.", "links": null }, "BIBREF55": { "ref_id": "b55", "title": "Knowledge graph embedding by translating on hyperplanes", "authors": [ { "first": "Zhen", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Jianwen", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Jianlin", "middle": [], "last": "Feng", "suffix": "" }, { "first": "Zheng", "middle": [], "last": "Chen", "suffix": "" } ], "year": 2014, "venue": "AAAI", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zhen Wang, Jianwen Zhang, Jianlin Feng, and Zheng Chen. 2014b. Knowledge graph embedding by translating on hyperplanes. In AAAI.", "links": null }, "BIBREF56": { "ref_id": "b56", "title": "Google's neural machine translation system: Bridging the gap between human and machine translation", "authors": [ { "first": "Yonghui", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Mike", "middle": [], "last": "Schuster", "suffix": "" }, { "first": "Zhifeng", "middle": [], "last": "Chen", "suffix": "" }, { "first": "V", "middle": [], "last": "Quoc", "suffix": "" }, { "first": "Mohammad", "middle": [], "last": "Le", "suffix": "" }, { "first": "Wolfgang", "middle": [], "last": "Norouzi", "suffix": "" }, { "first": "Maxim", "middle": [], "last": "Macherey", "suffix": "" }, { "first": "Yuan", "middle": [], "last": "Krikun", "suffix": "" }, { "first": "Qin", "middle": [], "last": "Cao", "suffix": "" }, { "first": "Klaus", "middle": [], "last": "Gao", "suffix": "" }, { "first": "", "middle": [], "last": "Macherey", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1609.08144" ] }, "num": null, "urls": [], "raw_text": "Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, et al. 2016. Google's neural machine translation system: Bridging the gap between human and machine translation. arXiv:1609.08144.", "links": null }, "BIBREF57": { "ref_id": "b57", "title": "From one point to a manifold: knowledge graph embedding for precise link prediction", "authors": [ { "first": "Han", "middle": [], "last": "Xiao", "suffix": "" }, { "first": "Minlie", "middle": [], "last": "Huang", "suffix": "" }, { "first": "Xiaoyan", "middle": [], "last": "Zhu", "suffix": "" } ], "year": 2016, "venue": "AAAI", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Han Xiao, Minlie Huang, and Xiaoyan Zhu. 2016. From one point to a manifold: knowledge graph em- bedding for precise link prediction. 
In AAAI.", "links": null }, "BIBREF58": { "ref_id": "b58", "title": "Representation learning of knowledge graphs with entity descriptions", "authors": [ { "first": "Ruobing", "middle": [], "last": "Xie", "suffix": "" }, { "first": "Zhiyuan", "middle": [], "last": "Liu", "suffix": "" }, { "first": "J", "middle": [ "J" ], "last": "Jia", "suffix": "" }, { "first": "Huanbo", "middle": [], "last": "Luan", "suffix": "" }, { "first": "Maosong", "middle": [], "last": "Sun", "suffix": "" } ], "year": 2016, "venue": "AAAI", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ruobing Xie, Zhiyuan Liu, J. J. Jia, Huanbo Luan, and Maosong Sun. 2016. Representation learning of knowledge graphs with entity descriptions. In AAAI.", "links": null }, "BIBREF59": { "ref_id": "b59", "title": "Enhancing pre-trained language representations with rich knowledge for machine reading comprehension", "authors": [ { "first": "An", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Quan", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Jing", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Kai", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Yajuan", "middle": [], "last": "Lyu", "suffix": "" }, { "first": "Hua", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Qiaoqiao", "middle": [], "last": "She", "suffix": "" }, { "first": "Sujian", "middle": [], "last": "Li", "suffix": "" } ], "year": 2019, "venue": "ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "An Yang, Quan Wang, Jing Liu, Kai Liu, Yajuan Lyu, Hua Wu, Qiaoqiao She, and Sujian Li. 2019. En- hancing pre-trained language representations with rich knowledge for machine reading comprehension. In ACL.", "links": null }, "BIBREF60": { "ref_id": "b60", "title": "Leveraging knowledge bases in LSTMs for improving machine reading", "authors": [ { "first": "Bishan", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Tom", "middle": [ "Michael" ], "last": "Mitchell", "suffix": "" } ], "year": 2017, "venue": "ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bishan Yang and Tom Michael Mitchell. 2017. Lever- aging knowledge bases in LSTMs for improving ma- chine reading. In ACL.", "links": null }, "BIBREF61": { "ref_id": "b61", "title": "Embedding entities and relations for learning and inference in knowledge bases", "authors": [ { "first": "Bishan", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Wen-Tau", "middle": [], "last": "Yih", "suffix": "" }, { "first": "Xiaodong", "middle": [], "last": "He", "suffix": "" }, { "first": "Jianfeng", "middle": [], "last": "Gao", "suffix": "" }, { "first": "Li", "middle": [], "last": "Deng", "suffix": "" } ], "year": 2014, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1412.6575" ] }, "num": null, "urls": [], "raw_text": "Bishan Yang, Wen-tau Yih, Xiaodong He, Jianfeng Gao, and Li Deng. 2014. Embedding entities and relations for learning and inference in knowledge bases. 
arXiv:1412.6575.", "links": null }, "BIBREF62": { "ref_id": "b62", "title": "Reference-aware language models", "authors": [ { "first": "Zichao", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Phil", "middle": [], "last": "Blunsom", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Dyer", "suffix": "" }, { "first": "Wang", "middle": [], "last": "Ling", "suffix": "" } ], "year": 2017, "venue": "EMNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zichao Yang, Phil Blunsom, Chris Dyer, and Wang Ling. 2017. Reference-aware language models. In EMNLP.", "links": null }, "BIBREF63": { "ref_id": "b63", "title": "Graph convolution over pruned dependency trees improves relation extraction", "authors": [ { "first": "Yuhao", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Peng", "middle": [], "last": "Qi", "suffix": "" }, { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "" } ], "year": 2018, "venue": "EMNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yuhao Zhang, Peng Qi, and Christopher D. Manning. 2018. Graph convolution over pruned dependency trees improves relation extraction. In EMNLP.", "links": null }, "BIBREF64": { "ref_id": "b64", "title": "Positionaware attention and supervised data improve slot filling", "authors": [ { "first": "Yuhao", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Victor", "middle": [], "last": "Zhong", "suffix": "" }, { "first": "Danqi", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Gabor", "middle": [], "last": "Angeli", "suffix": "" }, { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "" } ], "year": 2017, "venue": "EMNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yuhao Zhang, Victor Zhong, Danqi Chen, Gabor An- geli, and Christopher D. Manning. 2017. Position- aware attention and supervised data improve slot fill- ing. In EMNLP.", "links": null }, "BIBREF65": { "ref_id": "b65", "title": "ERNIE: Enhanced language representation with informative entities", "authors": [ { "first": "Zhengyan", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Xu", "middle": [], "last": "Han", "suffix": "" }, { "first": "Zhiyuan", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Xin", "middle": [], "last": "Jiang", "suffix": "" }, { "first": "Maosong", "middle": [], "last": "Sun", "suffix": "" }, { "first": "Qun", "middle": [], "last": "Liu", "suffix": "" } ], "year": 2019, "venue": "ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zhengyan Zhang, Xu Han, Zhiyuan Liu, Xin Jiang, Maosong Sun, and Qun Liu. 2019. ERNIE: En- hanced language representation with informative en- tities. 
In ACL.", "links": null }, "BIBREF66": { "ref_id": "b66", "title": "Aligning books and movies: Towards story-like visual explanations by watching movies and reading books", "authors": [ { "first": "Yukun", "middle": [], "last": "Zhu", "suffix": "" }, { "first": "Ryan", "middle": [], "last": "Kiros", "suffix": "" }, { "first": "Richard", "middle": [ "S" ], "last": "Zemel", "suffix": "" }, { "first": "Ruslan", "middle": [ "R" ], "last": "Salakhutdinov", "suffix": "" }, { "first": "Raquel", "middle": [], "last": "Urtasun", "suffix": "" }, { "first": "Antonio", "middle": [], "last": "Torralba", "suffix": "" }, { "first": "Sanja", "middle": [], "last": "Fidler", "suffix": "" } ], "year": 2015, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yukun Zhu, Ryan Kiros, Richard S. Zemel, Ruslan R. Salakhutdinov, Raquel Urtasun, Antonio Torralba, and Sanja Fidler. 2015. Aligning books and movies: Towards story-like visual explanations by watching movies and reading books. ICCV.", "links": null } }, "ref_entries": { "FIGREF0": { "text": "et al. (2019) BERT LARGE 89.2 Soares et al. (2019) BERT LARGE \u2020 89.5 KnowBert-W+W BERT BASE 89.1", "uris": null, "num": null, "type_str": "figure" }, "TABREF2": { "text": "KnowBert training method Input: Pretrained BERT and J KBs Output: KnowBert for j = 1 . . . J do Compute entity embeddings for KB j if EL supervision available then", "content": "
    Freeze all network parameters except those in (Eq. 1-3)
    Train to convergence using (Eq. 4) or (Eq. 5)
  end
  Initialize W_2^proj as (W_1^proj)^{-1}
  Unfreeze all parameters except entity embeddings
  Minimize
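To make the staged optimization above concrete, the following is a minimal PyTorch-style sketch of one pass through the outer loop: freeze everything but the new linker parameters, initialize W_2^proj as the (pseudo-)inverse of W_1^proj, then unfreeze all parameters except the entity embedding table before resuming joint training. The class and attribute names (KARStub, w_proj_1, w_proj_2, entity_embeddings, integrate_kb) are hypothetical placeholders, and the stub collapses the Eq. 1-3 parameters into a single projection; it is an illustration of the schedule, not the released implementation.

```python
# Hedged sketch of the staged KnowBert optimization shown in the algorithm above.
# Names are placeholders; the inner training-to-convergence loops are omitted.
import torch
import torch.nn as nn


class KARStub(nn.Module):
    """Toy stand-in for one Knowledge Attention and Recontextualization block."""

    def __init__(self, hidden_dim: int = 768, entity_dim: int = 200, num_entities: int = 1000):
        super().__init__()
        # Projections between the contextual (BERT) space and the KB entity space.
        self.w_proj_1 = nn.Linear(hidden_dim, entity_dim, bias=False)
        self.w_proj_2 = nn.Linear(entity_dim, hidden_dim, bias=False)
        self.entity_embeddings = nn.Embedding(num_entities, entity_dim)


def integrate_kb(model: nn.Module, kar: KARStub, has_el_supervision: bool) -> None:
    """One iteration of the outer loop, integrating a single KB."""
    if has_el_supervision:
        # Freeze everything except the newly added linker parameters (Eq. 1-3 in the
        # paper; reduced here to w_proj_1). The linker would then be trained to
        # convergence with Eq. 4 or Eq. 5 -- that inner loop is not shown.
        for p in model.parameters():
            p.requires_grad_(False)
        for p in kar.w_proj_1.parameters():
            p.requires_grad_(True)

    # Initialize W_2^proj as the pseudo-inverse of W_1^proj so the fresh KAR block
    # perturbs the pretrained activations as little as possible when inserted.
    with torch.no_grad():
        kar.w_proj_2.weight.copy_(torch.linalg.pinv(kar.w_proj_1.weight))

    # Unfreeze everything except the (large, precomputed) entity embedding table,
    # then minimize the joint masked LM + entity linking objective (not shown).
    for p in model.parameters():
        p.requires_grad_(True)
    kar.entity_embeddings.weight.requires_grad_(False)


if __name__ == "__main__":
    kar = KARStub()
    backbone = nn.ModuleList([nn.Linear(768, 768), kar])  # toy stand-in for BERT + KAR
    integrate_kb(backbone, kar, has_el_supervision=True)
```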
", "html": null, "type_str": "table", "num": null }, "TABREF3": { "text": "Comparison of masked LM perplexity, Wikidata probing MRR, and number of parameters (in millions) in the masked LM (word piece embeddings, transformer layers, and output layers), KAR, and entity embeddings for BERT and KnowBert. The table also includes the total time to run one forward and backward pass (in seconds) on a TITAN Xp GPU (12 GB RAM) for a batch of 32 sentence pairs with total length 80 word pieces. Due to memory constraints, the BERT LARGE batch is accumulated over two smaller batches.", "content": "
System | PPL | Wikidata MRR | # params. masked LM (M) | # params. KAR (M) | # params. entity embed. (M) | Fwd. / Bwd. time (s)
BERT BASE | 5.5 | 0.09 | 110 | 0 | 0 | 0.25
BERT LARGE | 4.5 | 0.11 | 336 | 0 | 0 | 0.75
KnowBert-Wiki | 4.3 | 0.26 | 110 | 2.4 | 141 | 0.27
KnowBert-WordNet | 4.1 | 0.22 | 110 | 4.9 | 265 | 0.31
KnowBert-W+W | 3.5 | 0.31 | 110 | 7.3 | 406 | 0.33
Table 1:
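The Fwd. / Bwd. time column reports wall-clock seconds for a single forward and backward pass under the batch configuration described in the caption. A generic way to take such a measurement is sketched below; the model and batch construction are placeholder assumptions (any module returning a scalar loss), not the script used to produce the table, and the key points are the warm-up passes and the CUDA synchronization around the timed region.

```python
# Generic GPU timing sketch for one forward + backward pass; model/batch are assumptions.
import time
import torch


def time_fwd_bwd(model: torch.nn.Module, batch: dict, n_warmup: int = 5, n_trials: int = 20) -> float:
    """Average seconds per forward + backward pass of `model` on `batch`."""
    model.cuda().train()
    batch = {k: v.cuda() for k, v in batch.items()}

    def step() -> None:
        model.zero_grad(set_to_none=True)
        loss = model(**batch)          # assumes the model returns a scalar loss
        loss.backward()

    for _ in range(n_warmup):          # warm up CUDA kernels and the allocator
        step()
    torch.cuda.synchronize()

    start = time.perf_counter()
    for _ in range(n_trials):
        step()
    torch.cuda.synchronize()           # CUDA is asynchronous; flush queued work before stopping the clock
    return (time.perf_counter() - start) / n_trials
```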
", "html": null, "type_str": "table", "num": null }, "TABREF5": { "text": "Fine-grained WSD F 1 .", "content": "", "html": null, "type_str": "table", "num": null }, "TABREF6": { "text": "", "content": "
", "html": null, "type_str": "table", "num": null }, "TABREF7": { "text": "Test set F 1 for SemEval 2010 Task 8 relationship extraction. \u2020 with MTB pretraining.", "content": "", "html": null, "type_str": "table", "num": null }, "TABREF9": { "text": "Test set results for the WiC dataset (v1.0).", "content": "
\u2020 Pilehvar and Camacho-Collados (2019)
\u2020\u2020 Wang et al. (2019a)
", "html": null, "type_str": "table", "num": null } } } }