{
"paper_id": "D19-1040",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T16:11:35.570263Z"
},
"title": "EntEval: A Holistic Evaluation Benchmark for Entity Representations",
"authors": [
{
"first": "Mingda",
"middle": [],
"last": "Chen",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Toyota Technological Institute at Chicago",
"location": {
"region": "IL",
"country": "USA"
}
},
"email": ""
},
{
"first": "Zewei",
"middle": [],
"last": "Chu",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Chicago",
"location": {
"region": "IL",
"country": "USA"
}
},
"email": "zeweichu@uchicago.edu"
},
{
"first": "Yang",
"middle": [],
"last": "Chen",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Ohio State University",
"location": {
"region": "OH",
"country": "USA"
}
},
"email": ""
},
{
"first": "Karl",
"middle": [],
"last": "Stratos",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Rutgers University",
"location": {
"region": "NJ",
"country": "USA"
}
},
"email": "stratos@cs.rutgers.edu"
},
{
"first": "Kevin",
"middle": [],
"last": "Gimpel",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Toyota Technological Institute at Chicago",
"location": {
"region": "IL",
"country": "USA"
}
},
"email": "kgimpel@ttic.edu"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Rich entity representations are useful for a wide class of problems involving entities. Despite their importance, there is no standardized benchmark that evaluates the overall quality of entity representations. In this work, we propose EntEval: a test suite of diverse tasks that require nontrivial understanding of entities including entity typing, entity similarity, entity relation prediction, and entity disambiguation. In addition, we develop training techniques for learning better entity representations by using natural hyperlink annotations in Wikipedia. We identify effective objectives for incorporating the contextual information in hyperlinks into state-of-the-art pretrained language models (Peters et al., 2018a) and show that they improve strong baselines on multiple EntEval tasks. 1 * Equal contribution. Listed in alphabetical order. \u2020 Work done while the author was at Toyota Technological Institute at Chicago.",
"pdf_parse": {
"paper_id": "D19-1040",
"_pdf_hash": "",
"abstract": [
{
"text": "Rich entity representations are useful for a wide class of problems involving entities. Despite their importance, there is no standardized benchmark that evaluates the overall quality of entity representations. In this work, we propose EntEval: a test suite of diverse tasks that require nontrivial understanding of entities including entity typing, entity similarity, entity relation prediction, and entity disambiguation. In addition, we develop training techniques for learning better entity representations by using natural hyperlink annotations in Wikipedia. We identify effective objectives for incorporating the contextual information in hyperlinks into state-of-the-art pretrained language models (Peters et al., 2018a) and show that they improve strong baselines on multiple EntEval tasks. 1 * Equal contribution. Listed in alphabetical order. \u2020 Work done while the author was at Toyota Technological Institute at Chicago.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Entity representations play a key role in numerous important problems including language modeling (Ji et al., 2017) , dialogue generation (He et al., 2017) , entity linking (Gupta et al., 2017) , and story generation . One successful line of work on learning entity representations has been learning static embeddings: that is, assign a unique vector to each entity in the training data (Gupta et al., 2017; Yamada et al., 2016 Yamada et al., , 2017 . While these embeddings are useful in many applications, they have the obvious drawback of not accommodating unknown entities. Another limiting factor is the lack of an evaluation benchmark: it is often difficult to know which entity representations are better for which tasks.",
"cite_spans": [
{
"start": 98,
"end": 115,
"text": "(Ji et al., 2017)",
"ref_id": "BIBREF28"
},
{
"start": 138,
"end": 155,
"text": "(He et al., 2017)",
"ref_id": "BIBREF20"
},
{
"start": 173,
"end": 193,
"text": "(Gupta et al., 2017)",
"ref_id": "BIBREF19"
},
{
"start": 387,
"end": 407,
"text": "(Gupta et al., 2017;",
"ref_id": "BIBREF19"
},
{
"start": 408,
"end": 427,
"text": "Yamada et al., 2016",
"ref_id": "BIBREF68"
},
{
"start": 428,
"end": 449,
"text": "Yamada et al., , 2017",
"ref_id": "BIBREF69"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We introduce EntEval: a carefully designed benchmark for holistically evaluating entity representations. It is a test suite of diverse tasks that require nontrivial understanding of entities, including entity typing, entity similarity, entity relation prediction, and entity disambiguation. Motivated by the recent success of contextualized word representations (henceforth: CWRs) from pretrained models (McCann et al., 2017; Peters et al., 2018a; Devlin et al., 2018; Yang et al., 2019; Liu et al., 2019b) , we propose to encode the mention context or the description to dynamically represent an entity. In addition, we perform an in-depth comparison of ELMo and BERT-based embeddings and find that they show different characteristics on different tasks. We analyze each layer of the CWRs and make the following observations:",
"cite_spans": [
{
"start": 404,
"end": 425,
"text": "(McCann et al., 2017;",
"ref_id": "BIBREF41"
},
{
"start": 426,
"end": 447,
"text": "Peters et al., 2018a;",
"ref_id": "BIBREF49"
},
{
"start": 448,
"end": 468,
"text": "Devlin et al., 2018;",
"ref_id": "BIBREF13"
},
{
"start": 469,
"end": 487,
"text": "Yang et al., 2019;",
"ref_id": "BIBREF70"
},
{
"start": 488,
"end": 506,
"text": "Liu et al., 2019b)",
"ref_id": "BIBREF36"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 The dynamically encoded entity representations show a strong improvement on the entity disambiguation task compared to prior work using static entity embeddings.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 BERT-based entity representations require further supervised training to perform well on downstream tasks, while ELMo-based representations are more capable of performing zeroshot tasks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 In general, higher layers of ELMo and BERTbased CWRs are more transferable to EntEval tasks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "To further improve contextualized and descriptive entity representations (CER/DER), we leverage natural hyperlink annotations in Wikipedia. We identify effective objectives for incorporating the contextual information in hyperlinks and improve ELMo-based CWRs on a variety of entity related tasks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "EntEval and the training objectives considered in this work are built on previous works that involve reasoning over entities. We give a brief overview of relevant works.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Entity linking/disambiguation. Entity linking is a fundamental task in information extraction with a wealth of literature (He et al., 2013; Guo and Barbosa, 2014; Ling et al., 2015; Huang et al., 2015; Francis-Landau et al., 2016; Le and Titov, 2018; Martins et al., 2019) . The goal of this task is to map a mention in context to the corresponding entity in a database. A natural approach is to learn entity representations that enable this mapping. Recent works focused on learning a fixed embedding for each entity using Wikipedia hyperlinks (Yamada et al., 2016; Ganea and Hofmann, 2017; Le and Titov, 2019) . Gupta et al. (2017) additionally train context and description embeddings jointly, but this mainly aims to improve the quality of the fixed entity embeddings rather than using the context and description embeddings directly; we find that their context and description encoders perform poorly on EntEval tasks.",
"cite_spans": [
{
"start": 122,
"end": 139,
"text": "(He et al., 2013;",
"ref_id": "BIBREF21"
},
{
"start": 140,
"end": 162,
"text": "Guo and Barbosa, 2014;",
"ref_id": "BIBREF18"
},
{
"start": 163,
"end": 181,
"text": "Ling et al., 2015;",
"ref_id": "BIBREF34"
},
{
"start": 182,
"end": 201,
"text": "Huang et al., 2015;",
"ref_id": "BIBREF26"
},
{
"start": 202,
"end": 230,
"text": "Francis-Landau et al., 2016;",
"ref_id": "BIBREF16"
},
{
"start": 231,
"end": 250,
"text": "Le and Titov, 2018;",
"ref_id": "BIBREF30"
},
{
"start": 251,
"end": 272,
"text": "Martins et al., 2019)",
"ref_id": "BIBREF40"
},
{
"start": 545,
"end": 566,
"text": "(Yamada et al., 2016;",
"ref_id": "BIBREF68"
},
{
"start": 567,
"end": 591,
"text": "Ganea and Hofmann, 2017;",
"ref_id": "BIBREF17"
},
{
"start": 592,
"end": 611,
"text": "Le and Titov, 2019)",
"ref_id": "BIBREF31"
},
{
"start": 614,
"end": 633,
"text": "Gupta et al. (2017)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "A closely related concurrent work by (Logeswaran et al., 2019) jointly encodes a mention in context and an entity description from Wikia to perform zero-shot entity linking. In contrast, here we seek to pretrain a general purpose entity representations that can function well either given or not given entity descriptions or mention contexts.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Other entity-related tasks involve entity typing (Yaghoobzadeh and Sch\u00fctze, 2015; Murty et al., 2017; Del Corro et al., 2015; Rabinovich and Klein, 2017; Onoe and Durrett, 2019; Obeidat et al., 2019) and coreference resolution (Durrett and Klein, 2013; Wiseman et al., 2016; Lee et al., 2017; Webster et al., 2018; Kantor and Globerson, 2019) .",
"cite_spans": [
{
"start": 49,
"end": 81,
"text": "(Yaghoobzadeh and Sch\u00fctze, 2015;",
"ref_id": "BIBREF67"
},
{
"start": 82,
"end": 101,
"text": "Murty et al., 2017;",
"ref_id": "BIBREF44"
},
{
"start": 102,
"end": 125,
"text": "Del Corro et al., 2015;",
"ref_id": "BIBREF12"
},
{
"start": 126,
"end": 153,
"text": "Rabinovich and Klein, 2017;",
"ref_id": "BIBREF51"
},
{
"start": 154,
"end": 177,
"text": "Onoe and Durrett, 2019;",
"ref_id": "BIBREF47"
},
{
"start": 178,
"end": 199,
"text": "Obeidat et al., 2019)",
"ref_id": "BIBREF46"
},
{
"start": 227,
"end": 252,
"text": "(Durrett and Klein, 2013;",
"ref_id": "BIBREF14"
},
{
"start": 253,
"end": 274,
"text": "Wiseman et al., 2016;",
"ref_id": "BIBREF66"
},
{
"start": 275,
"end": 292,
"text": "Lee et al., 2017;",
"ref_id": "BIBREF32"
},
{
"start": 293,
"end": 314,
"text": "Webster et al., 2018;",
"ref_id": "BIBREF65"
},
{
"start": 315,
"end": 342,
"text": "Kantor and Globerson, 2019)",
"ref_id": "BIBREF29"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Evaluating pretrained representations. Recent work has sought to evaluate the knowledge acquired by pretrained language models (Shi et al., 2016; Adi et al., 2017; Peters et al., 2018b; Conneau and Kiela, 2018; Wang et al., 2018; Liu et al., 2019a; Chen et al., 2019a, inter alia) . In this work, we focus on evaluating their capabilities in modeling entities.",
"cite_spans": [
{
"start": 127,
"end": 145,
"text": "(Shi et al., 2016;",
"ref_id": "BIBREF54"
},
{
"start": 146,
"end": 163,
"text": "Adi et al., 2017;",
"ref_id": "BIBREF0"
},
{
"start": 164,
"end": 185,
"text": "Peters et al., 2018b;",
"ref_id": "BIBREF50"
},
{
"start": 186,
"end": 210,
"text": "Conneau and Kiela, 2018;",
"ref_id": "BIBREF10"
},
{
"start": 211,
"end": 229,
"text": "Wang et al., 2018;",
"ref_id": "BIBREF63"
},
{
"start": 230,
"end": 248,
"text": "Liu et al., 2019a;",
"ref_id": "BIBREF35"
},
{
"start": 249,
"end": 280,
"text": "Chen et al., 2019a, inter alia)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Part of EntEval involves evaluating world knowledge about entities, relating them to fact checking (Vlachos and Riedel, 2014; Wang, 2017; Thorne et al., 2018; Yin and Roth, 2018; , and commonsense learning (Angeli and Manning, 2014; Bowman et al., 2015; Li et al., 2016; Mihaylov et al., 2018; Zellers et al., 2018; Trinh and Le, 2018; Talmor et al., 2019; Zellers et al., 2019; Sap et al., 2019; Rajani et al., 2019) . Another related line of work is to integrate entityrelated knowledge into the training of language models (Logan et al., 2019; .",
"cite_spans": [
{
"start": 99,
"end": 125,
"text": "(Vlachos and Riedel, 2014;",
"ref_id": "BIBREF62"
},
{
"start": 126,
"end": 137,
"text": "Wang, 2017;",
"ref_id": "BIBREF64"
},
{
"start": 138,
"end": 158,
"text": "Thorne et al., 2018;",
"ref_id": "BIBREF60"
},
{
"start": 159,
"end": 178,
"text": "Yin and Roth, 2018;",
"ref_id": "BIBREF71"
},
{
"start": 206,
"end": 232,
"text": "(Angeli and Manning, 2014;",
"ref_id": "BIBREF1"
},
{
"start": 233,
"end": 253,
"text": "Bowman et al., 2015;",
"ref_id": "BIBREF4"
},
{
"start": 254,
"end": 270,
"text": "Li et al., 2016;",
"ref_id": "BIBREF33"
},
{
"start": 271,
"end": 293,
"text": "Mihaylov et al., 2018;",
"ref_id": "BIBREF43"
},
{
"start": 294,
"end": 315,
"text": "Zellers et al., 2018;",
"ref_id": "BIBREF72"
},
{
"start": 316,
"end": 335,
"text": "Trinh and Le, 2018;",
"ref_id": "BIBREF61"
},
{
"start": 336,
"end": 356,
"text": "Talmor et al., 2019;",
"ref_id": "BIBREF59"
},
{
"start": 357,
"end": 378,
"text": "Zellers et al., 2019;",
"ref_id": "BIBREF73"
},
{
"start": 379,
"end": 396,
"text": "Sap et al., 2019;",
"ref_id": "BIBREF53"
},
{
"start": 397,
"end": 417,
"text": "Rajani et al., 2019)",
"ref_id": "BIBREF52"
},
{
"start": 526,
"end": 546,
"text": "(Logan et al., 2019;",
"ref_id": "BIBREF37"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Contextualized word representations. Contextualized word representations and pretrained language representation models, such as ELMo (Peters et al., 2018a) and BERT (Devlin et al., 2018) , are powerful pretrained models that have been shown to be effective for a variety of downstream tasks such as text classification, sentence relation prediction, named entity recognition, and question answering. Recent work has sought to evaluate the knowledge acquired by such models (Shi et al., 2016; Adi et al., 2017; Conneau and Kiela, 2018; Liu et al., 2019a) . In this work, we focus on evaluating their capabilities in modeling entities.",
"cite_spans": [
{
"start": 133,
"end": 155,
"text": "(Peters et al., 2018a)",
"ref_id": "BIBREF49"
},
{
"start": 165,
"end": 186,
"text": "(Devlin et al., 2018)",
"ref_id": "BIBREF13"
},
{
"start": 473,
"end": 491,
"text": "(Shi et al., 2016;",
"ref_id": "BIBREF54"
},
{
"start": 492,
"end": 509,
"text": "Adi et al., 2017;",
"ref_id": "BIBREF0"
},
{
"start": 510,
"end": 534,
"text": "Conneau and Kiela, 2018;",
"ref_id": "BIBREF10"
},
{
"start": 535,
"end": 553,
"text": "Liu et al., 2019a)",
"ref_id": "BIBREF35"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "We are interested in two approaches: contextualized entity representations (henceforth: CER) and descriptive entity representations (henceforth: DER), both encoding fixed-length vector representations for entities.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "EntEval",
"sec_num": "3"
},
{
"text": "The contextualized entity representations encodes an entity based on the context it appears regardless of whether the entity is seen before. The motivation behind contextualized entity representations is that we want an entity encoder that does not depend on entries in a knowledge base, but is capable of inferring knowledge about an entity from the context it appears.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "EntEval",
"sec_num": "3"
},
{
"text": "As opposed to contextualized entity representations, descriptive entity representations do rely on entries in Wikipedia. We use a model-specific function f to obtain a fixed-length vector representation from the entity's textual description.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "EntEval",
"sec_num": "3"
},
{
"text": "To evaluate CERs and DERs, we propose a wide range of entity related tasks. Since our purpose is for examining the learned entity representations, we only use a linear classifier and freeze the entity representations when performing the follow- ing tasks. Unless otherwise noted, when the task involves a pair of entities, the input to the classifier are the entity representations x 1 and x 2 , concatenated with their element-wise product and absolute difference:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "EntEval",
"sec_num": "3"
},
{
"text": "[x 1 , x 2 , x 1 x 2 , |x 1 \u2212 x 2 |].",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "EntEval",
"sec_num": "3"
},
{
"text": "This input format has been used in SentEval (Conneau and Kiela, 2018) .",
"cite_spans": [
{
"start": 44,
"end": 69,
"text": "(Conneau and Kiela, 2018)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "EntEval",
"sec_num": "3"
},
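{
"text": "As an illustration of this input construction (our sketch, not the authors' code), assuming the two frozen entity representations are PyTorch tensors:\n\nimport torch\n\ndef pair_features(x1, x2):\n    # [x1, x2, x1 * x2, |x1 - x2|], the same pair input format as SentEval.\n    return torch.cat([x1, x2, x1 * x2, (x1 - x2).abs()], dim=-1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "EntEval",
"sec_num": "3"
},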
{
"text": "The datasets used in EntEval tasks are summarized in table 1. It shows the number of instances in train/valid/test split for each dataset, and the number of target classes if this is a classification task. We describe the proposed tasks in the following subsections.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "EntEval",
"sec_num": "3"
},
{
"text": "The task of entity typing (ET) is to assign types to an entity given only the context of the entity mention. ET is context-sensitive, making it an effective approach to probe the knowledge of context encoded in pretrained representations. For example, in the sentence \"Bill Gates has donated billions to eradicate malaria\", \"Bill Gates\" has the type of \"philanthropist\" instead of \"inventor\" . In this task, we will contextualized entity representations, followed by a linear layer to make predictions. We use the annotated ultra-fine entity typing dataset of with standard data splits. As shown in Figure 1, there can be multiple labels for an instance. We use binary log loss for training using all positive and negative entity types, and report F 1 score. Thresholds are tuned based on validation set accuracy.",
"cite_spans": [],
"ref_spans": [
{
"start": 599,
"end": 605,
"text": "Figure",
"ref_id": null
}
],
"eq_spans": [],
"section": "Entity Typing (ET)",
"sec_num": "3.1"
},
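{
"text": "A minimal sketch of this probing setup (ours; the representation size is an assumption, and 10331 is the ultra-fine type inventory size):\n\nimport torch\nimport torch.nn as nn\n\nrepr_dim, num_types = 1024, 10331  # assumed dims: representation size, type count\nprobe = nn.Linear(repr_dim, num_types)  # linear probe: one logit per entity type\nloss_fn = nn.BCEWithLogitsLoss()  # binary log loss over all types\n\ndef predict(entity_repr, threshold):\n    # A type is predicted when its probability exceeds the threshold,\n    # which is tuned on the validation set.\n    return torch.sigmoid(probe(entity_repr)) > threshold",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Entity Typing (ET)",
"sec_num": "3.1"
},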
{
"text": "Given two entities and the associated context, the task is to determine whether they refer to the same entity. Solving this task may require the knowledge of entities. For example, in the sentence \"Revenues of $14.5 billion were posted by Dell 1 . The company 1 ...\", there is no prior context of \"Dell\", so having known \"Dell\" is a company instead of the people \"Michael Dell\" will surely ben-efit the model (Durrett and Klein, 2014) . Unlike other tasks, coreference typically involves longer context. To restrict the effect of broad context, we only keep two groups of coreference arcs from smaller context. One includes mentions that are in the same sentence (\"same\") for examining the model capability of encoding local context. The other includes mentions that are in consecutive sentences (\"next\") for the broader context. We create this task from the PreCo dataset (Chen et al., 2018) , which has mentions annotated even when they are not part of coreference chains. We filter out instances in which both mentions are pronouns. All non-coreferent mention pairs are considered to be negative samples.",
"cite_spans": [
{
"start": 409,
"end": 434,
"text": "(Durrett and Klein, 2014)",
"ref_id": "BIBREF15"
},
{
"start": 873,
"end": 892,
"text": "(Chen et al., 2018)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Coreference Arc Prediction (CAP)",
"sec_num": "3.2"
},
{
"text": "To make this task more challenging, for each instance we compute cosine similarity of mentions by averaging GloVe word vectors. We group the instances into bins by cosine similarity, and randomly select the same number of positive and negative instances from each bin to ensure that models do not solve this task by simply comparing similarity of mention names.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Coreference Arc Prediction (CAP)",
"sec_num": "3.2"
},
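{
"text": "A sketch of this binning procedure (ours; the bin count and field names are assumptions):\n\nimport random\n\ndef balanced_sample(instances, num_bins=10, seed=0):\n    # Each instance carries the cosine similarity of its two mention names\n    # under averaged GloVe vectors and a binary coreference label.\n    rng = random.Random(seed)\n    bins = [[] for _ in range(num_bins)]\n    for inst in instances:\n        idx = min(int(max(inst['cosine'], 0.0) * num_bins), num_bins - 1)\n        bins[idx].append(inst)\n    kept = []\n    for b in bins:\n        pos = [i for i in b if i['label'] == 1]\n        neg = [i for i in b if i['label'] == 0]\n        k = min(len(pos), len(neg))  # same number of positives and negatives\n        kept += rng.sample(pos, k) + rng.sample(neg, k)\n    return kept",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Coreference Arc Prediction (CAP)",
"sec_num": "3.2"
},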
{
"text": "We use the contextualized entity representations of the two mentions to infer coreference arcs with supervised training and report the averaged accuracy of \"same\" and \"next\".",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Coreference Arc Prediction (CAP)",
"sec_num": "3.2"
},
{
"text": "The entity factuality prediction (EFP) task involves determining the correctness of statements regarding entities. We use the manually-annotated FEVER dataset (Thorne et al., 2018) for this task. FEVER is a task to verify whether a statement is supported by evidences. The original FEVER dataset includes three classes, namely \"Supports\", \"Refutes\", and \"NotEnoughInfo\" and evidences are additionally available for each instance. As our purpose is to examine the knowledge encoded in entity representations, we discard the last category (\"NotEnoughInfo\") and the evidence. In rare cases, instances in FEVER may include multiple entity mentions, so we randomly pick one. We randomly sample 10000, 2000, and 2000 instances for our training, validation, and test sets, respectively.",
"cite_spans": [
{
"start": 159,
"end": 180,
"text": "(Thorne et al., 2018)",
"ref_id": "BIBREF60"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Entity Factuality Prediction (EFP)",
"sec_num": "3.3"
},
{
"text": "In this task, entity representations can be obtained either by contextualized entity representations or descriptive entity representations. In practice, we observe descriptive entity representations give better performance, which presumably is be- cause these statements are more similar to descriptions than entity mentions. As shown in Figure 2 , without providing additional evidences, solving this task requires knowledge of entities encoded in representations. We directly use entity representations as input to the classifier.",
"cite_spans": [],
"ref_spans": [
{
"start": 338,
"end": 346,
"text": "Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Entity Factuality Prediction (EFP)",
"sec_num": "3.3"
},
{
"text": "The task of contexualized entity relationship prediction (CERP) modeling determines the connection between two entities appeared in the same context. We use sentences from Con-ceptNet (Speer et al., 2017) with automatically parsed mentions and templates used to construct the dataset. We filter out non-English concepts and relations such as 'related', 'translation', 'synonym', and 'likely to find' since we seek to evaluate more complicated knowledge of entities encoded in representations. We further filter out nonentity mentions and entities with type 'DATE', 'TIME', 'PERCENT', 'MONEY', 'QUANTITY', 'ORDINAL', and 'CARDINAL' according to SpaCy (Honnibal and Montani, 2017) . After filtering, we have 13374 assertions. Negative samples are generated based on the following rules:",
"cite_spans": [
{
"start": 184,
"end": 204,
"text": "(Speer et al., 2017)",
"ref_id": "BIBREF56"
},
{
"start": 650,
"end": 678,
"text": "(Honnibal and Montani, 2017)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Contexualized Entity Relationship Prediction (CERP)",
"sec_num": "3.4"
},
{
"text": "1. For each relationship, we replace an entity with similar negative entities based on cosine similarity of averaged GloVe embeddings (Pennington et al., 2014 2. We change the relationship in positive samples from affirmation to negation (e.g., 'is' to 'is not'). These serve as negative samples.",
"cite_spans": [
{
"start": 134,
"end": 158,
"text": "(Pennington et al., 2014",
"ref_id": "BIBREF48"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Contexualized Entity Relationship Prediction (CERP)",
"sec_num": "3.4"
},
{
"text": "3. We further sample positive samples from (1) in an attempt to prevent the 'not' token from being biased towards negative samples. Therefore, for negative samples we get from (1), we change the relationship from affirmation to negation as in (2) to get positive samples.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Contexualized Entity Relationship Prediction (CERP)",
"sec_num": "3.4"
},
{
"text": "For example, let 'A is B' be the positive sample. (1) changes it to 'C is B' which serves as a negative sample and (2) changes it to 'A is not B' as another negative sample. (3) changes it to 'C is not B' as a positive example. In the end, we randomly sample 7000 instances from each class. This ends up yielding a 10000/2000/2000 train/dev/test dataset. As shown in Figure 3 , this task cannot be solved by relying on surface form of sentences, instead it requires the input representations to encode knowledge of entities based on the context. We use contextualized entity representations in this task.",
"cite_spans": [],
"ref_spans": [
{
"start": 367,
"end": 375,
"text": "Figure 3",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Contexualized Entity Relationship Prediction (CERP)",
"sec_num": "3.4"
},
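{
"text": "The three rules can be sketched as follows (ours; the 'is' template stands in for the dataset's relation templates, and the negative entity is a GloVe nearest neighbor of the original entity):\n\ndef expand(subj, obj, neighbor):\n    # subj = 'A', obj = 'B', neighbor = 'C' in the example above.\n    yield (subj + ' is ' + obj, 1)  # original positive\n    yield (neighbor + ' is ' + obj, 0)  # rule 1: entity swap\n    yield (subj + ' is not ' + obj, 0)  # rule 2: negation\n    yield (neighbor + ' is not ' + obj, 1)  # rule 3: swap plus negation",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Contextualized Entity Relationship Prediction (CERP)",
"sec_num": "3.4"
},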
{
"text": "Given two entities with their descriptions from Wikipedia, the task is to determine their similarity or relatedness. After the entity descriptions are encoded into vector representations, we compute their cosine similarity as predictions. We use the KORE (Hoffart et al., 2012) and Wik-iSRS (Newman-Griffis et al., 2018) datasets in this task. Since the original datasets only provide entity names, we automatically add Wikipedia descriptions to each entity and manually ensure that every entity is matched to a Wikipedia description. We use Spearman's rank correlation coefficient between our computed cosine similarity and the gold standard similarity/relatedness scores to measure the performance of entity representations.",
"cite_spans": [
{
"start": 255,
"end": 277,
"text": "(Hoffart et al., 2012)",
"ref_id": "BIBREF23"
},
{
"start": 282,
"end": 320,
"text": "Wik-iSRS (Newman-Griffis et al., 2018)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Entity Similarity and Relatedness (ESR)",
"sec_num": "3.5"
},
{
"text": "As KORE does not provide similarity scores of entity pairs, but simply ranks the candidate entities by their similarities to a target entity, we assign scores from 20 to 1 accordingly to each entity in the order of similarity. Table 2 shows an example from KORE. The fact that \"Apple Inc.\" is more related to \"Steve Jobs\" than \"Microsoft\" requires multiple steps of inference, which motivates this task. Since the predictor we use is cosine similarity, which does not introduce additional parameters, we directly use encoded representations on the test set without any supervised training.",
"cite_spans": [],
"ref_spans": [
{
"start": 227,
"end": 234,
"text": "Table 2",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Entity Similarity and Relatedness (ESR)",
"sec_num": "3.5"
},
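{
"text": "A sketch of this zero-shot evaluation (ours), assuming the description encodings are NumPy arrays:\n\nimport numpy as np\nfrom scipy.stats import spearmanr\n\ndef esr_score(desc1, desc2, gold_scores):\n    # Cosine similarity of the paired description encodings is the\n    # prediction; Spearman correlation against gold scores is reported.\n    sims = (desc1 * desc2).sum(-1) / (\n        np.linalg.norm(desc1, axis=-1) * np.linalg.norm(desc2, axis=-1))\n    return spearmanr(sims, gold_scores).correlation",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Entity Similarity and Relatedness (ESR)",
"sec_num": "3.5"
},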
{
"text": "As another popular resource for common knowledge, we consider using Freebase (Bollacker et al., 2008) for probing the encoded knowledge by classifying the types of relations between pair of entities. First, we extract entity relation tuples (entity1, relation, entity2) from Freebase and then filter out easy tuples based on training a classifier using averaged GloVe vectors of entity names as input, which leaves us 626 types of relations, including \"internet.website.owner\", \"film.film art director.films art directed\", and \"comic books.comic book series.genre\". We randomly sample 5 instances for each relation type to form our training set and 10 instances per type the for validation and test sets. We use Wikipedia descriptions for each entity in the pair whose relation we are predicting and we use descriptive entity representations for each entity with supervised training.",
"cite_spans": [
{
"start": 77,
"end": 101,
"text": "(Bollacker et al., 2008)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Entity Relationship Typing (ERT)",
"sec_num": "3.6"
},
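{
"text": "A sketch of the name-only filter for easy tuples (ours; the accuracy cutoff is an assumption, not a value from the paper):\n\nimport numpy as np\nfrom sklearn.linear_model import LogisticRegression\n\ndef hard_relations(X_names, y_rel, cutoff=0.9):\n    # X_names: concatenated averaged GloVe vectors of the two entity names.\n    clf = LogisticRegression(max_iter=1000).fit(X_names, y_rel)\n    preds = clf.predict(X_names)\n    kept = []\n    for rel in np.unique(y_rel):\n        mask = y_rel == rel\n        if (preds[mask] == rel).mean() < cutoff:\n            kept.append(rel)  # not solvable from names alone; keep it\n    return kept",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Entity Relationship Typing (ERT)",
"sec_num": "3.6"
},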
{
"text": "Named entity disambiguation is the task of linking a named-entity mention to its corresponding instance in a knowledge base such as Wikipedia. In this task, we consider CoNLL-YAGO (CoNLL; Hoffart et al., 2011) and Rare Entity Prediction (Rare; Long et al., 2017) . 27,816 mentions with valid entries in the knowledge base. For each entity mention m in its context, we generate a set of (at most) its top 30 candidate entities C m = {c j } using Cross-Wikis (Spitkovsky and Chang, 2012). Some gold standard candidates c are not present in Cross-Wikis, so we set the prior probability p prior (y) for those to 1e-6 and normalize the resulting priors for the candidate entities. When adding Wikipedia descriptions, we manually ensure gold standard mentions are attached to a description, however, we discard candidate mentions that cannot be aligned to a Wikipedia page. We use contextualized entity representations for entity mentions and use descriptive entity representations for candidate entities. Training minimizes binary log loss using all negative examples. At test time, we use arg max c\u2208Cm [p prior (c)+p classifier (c)] as the prediction. We note that directly using prior as predictions yields an accuracy of 58.2%. Long et al. (2017) introduce the task of rare entity prediction. The task has a similar format to CoNLL-YAGO entity linking. Given a document with a blank in it, the task is to select an entity from a provided list of entities with descriptions. Only rare entities are used in this dataset so that performing well on the task requires the ability to effectively represent entity descriptions. We randomly select 10k/4k/4k examples to construct train/valid/test sets. For simplicity, we only keep instances with four candidate entities. Figure 4 shows an example from CoNLL-YAGO, where the \"China\" in context has many deceptive meanings. Here the candidate \"China\" has exact string match of the entity name but it should not be selected as it is an after-game report on soccer. To match the entities, this task requires both effective contextualize entity representations and descriptive entity representation.",
"cite_spans": [
{
"start": 188,
"end": 209,
"text": "Hoffart et al., 2011)",
"ref_id": "BIBREF24"
},
{
"start": 244,
"end": 262,
"text": "Long et al., 2017)",
"ref_id": "BIBREF39"
},
{
"start": 1226,
"end": 1244,
"text": "Long et al. (2017)",
"ref_id": "BIBREF39"
}
],
"ref_spans": [
{
"start": 1762,
"end": 1770,
"text": "Figure 4",
"ref_id": "FIGREF3"
}
],
"eq_spans": [],
"section": "Named Entity Disambiguation (NED)",
"sec_num": "3.7"
},
{
"text": "Practically, we encode the context using CER to be x 1 , and encode each entity description using DER to be x 2 , and",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Named Entity Disambiguation (NED)",
"sec_num": "3.7"
},
{
"text": "pass [x 1 , x 2 , x 1 x 2 , |x 1 \u2212 x 2 |",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Named Entity Disambiguation (NED)",
"sec_num": "3.7"
},
{
"text": "] to a linear model to predict whether it is the correct entity to fill in. The model is trained with cross entropy loss.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Named Entity Disambiguation (NED)",
"sec_num": "3.7"
},
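{
"text": "A sketch of the test-time decision rule (ours; pair_features is the pair input construction from Section 3, and classifier is the trained linear model returning a correctness score):\n\nimport numpy as np\n\ndef predict_entity(context_repr, candidates, classifier):\n    # candidates: (entity, prior, description_repr) triples from Cross-Wikis.\n    scores = [prior + float(classifier(pair_features(context_repr, desc)))\n              for _, prior, desc in candidates]\n    return candidates[int(np.argmax(scores))][0]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Named Entity Disambiguation (NED)",
"sec_num": "3.7"
},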
{
"text": "We first describe how we define encoders for contextualized entity representations (Section 4.1) and descriptive entity representations (Section 4.2), then we discuss how we train new encoders tailored to capture information from the hyperlink structure of Wikipedia (Section 4.3).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methods",
"sec_num": "4"
},
{
"text": "For defining these encoders, we assume we have a sentence s = (w 1 , . . . , w T ) where span (w i , . . . , w j ) refers to an entity mention. When using ELMo, we first encode the sentence:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Encoders for Contextualized Entity Representations",
"sec_num": "4.1"
},
{
"text": "(c 1 , . . . , c T ) = ELMo(w 1 , \u2022 \u2022 \u2022 , w T )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Encoders for Contextualized Entity Representations",
"sec_num": "4.1"
},
{
"text": ", and we use the average of contextualized hidden states corresponding to the entity span as the contextualized entity representation. That is, f ELMo (w 1:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Encoders for Contextualized Entity Representations",
"sec_num": "4.1"
},
{
"text": "T , i, j) = j k=i c k j\u2212i+1 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Encoders for Contextualized Entity Representations",
"sec_num": "4.1"
},
{
"text": "With BERT, following Onoe and Durrett (2019), we concatenate the full sentence with the entity mention, starting with [CLS] and separating the two by [SEP], i.e., [CLS] ",
"cite_spans": [
{
"start": 163,
"end": 168,
"text": "[CLS]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Encoders for Contextualized Entity Representations",
"sec_num": "4.1"
},
{
"text": ", w 1 , . . . , w T , [SEP], w i , . . . , w j , [SEP].",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Encoders for Contextualized Entity Representations",
"sec_num": "4.1"
},
{
"text": "We encode the full sequence using BERT and use the output from the [CLS] token as the entity mention representation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Encoders for Contextualized Entity Representations",
"sec_num": "4.1"
},
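{
"text": "Minimal sketches of the two mention encoders (ours; tokenization and batching details omitted):\n\ndef elmo_entity_repr(states, i, j):\n    # states: (T, dim) contextualized hidden states; average over the span.\n    return states[i:j + 1].mean(0)\n\ndef bert_entity_input(sentence, mention):\n    # [CLS] sentence [SEP] mention [SEP]; the output at [CLS] is used as\n    # the mention representation.\n    return ['[CLS]'] + sentence + ['[SEP]'] + mention + ['[SEP]']",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Encoders for Contextualized Entity Representations",
"sec_num": "4.1"
},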
{
"text": "We encode an entity description by treating the entity description as a sentence, and use the average of the hidden states from ELMo as the entity description representation. With BERT, we use the output from the [CLS] token as the description representation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Encoders for Descriptive Entity Representations",
"sec_num": "4.2"
},
{
"text": "An entity mentioned in a Wikipedia article is often linked to its Wikipedia page, which provides a useful description of the mentioned entity. The same Wikipedia page may correspond to many different entity mentions. Likewise, the same entity mention may refer to different Wikipedia pages de-France won the match 4-2 to claim their second World Cup title.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Hyperlink-Based Training",
"sec_num": "4.3"
},
{
"text": "The France national football team represents France in international foot ball Figure 5 : An example of hyperlinks in Wikipedia. \"France\" is linked to the Wikipedia page of \"France national football team\" instead of the country France.",
"cite_spans": [],
"ref_spans": [
{
"start": 79,
"end": 87,
"text": "Figure 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Hyperlink-Based Training",
"sec_num": "4.3"
},
{
"text": "pending on its context. For instance, as shown in Figure 5 , based on the context, \"France\" is linked to the Wikipedia page of \"France national football team\" instead of the country. The specific entity in the knowledge base can be inferred from the context information. In such cases, we believe Wikipedia provides valuable complementary information to the current pretrained CWRs such as BERT and ELMo.",
"cite_spans": [],
"ref_spans": [
{
"start": 50,
"end": 58,
"text": "Figure 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Hyperlink-Based Training",
"sec_num": "4.3"
},
{
"text": "To incorporate such information during training, we automatically construct a hyperlinkenriched dataset from Wikipedia that we will refer to as WIKIENT. Prior work has used similar resources (Singh et al., 2012; Gupta et al., 2017 ), but we aim to standardize the process and will release the dataset.",
"cite_spans": [
{
"start": 191,
"end": 211,
"text": "(Singh et al., 2012;",
"ref_id": "BIBREF55"
},
{
"start": 212,
"end": 230,
"text": "Gupta et al., 2017",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Hyperlink-Based Training",
"sec_num": "4.3"
},
{
"text": "The WIKIENT dataset consists of sentences with contextualized entity mentions and their corresponding descriptions obtained via hyperlinked Wikipedia pages. When processing descriptions, we only keep the first 100 word tokens at most as the description of a Wikipedia page; similar truncation has been done in prior work (Gupta et al., 2017) . For context sentences, we remove those without hyperlinks from the training data and duplicate those with multiple hyperlinks. We also remove context sentences for which we cannot find matched Wikipedia descriptions. These processing steps result in a training set of approximately 92 million instances and over 3 million unique entities.",
"cite_spans": [
{
"start": 321,
"end": 341,
"text": "(Gupta et al., 2017)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Hyperlink-Based Training",
"sec_num": "4.3"
},
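{
"text": "A sketch of this preprocessing (ours; the data structures are assumptions):\n\ndef wikient_examples(sentence, links, descriptions, max_len=100):\n    # links: (mention_span, page_title) pairs found in the sentence.\n    # Sentences without links are dropped upstream; a sentence with k\n    # links yields k examples.\n    examples = []\n    for span, title in links:\n        desc = descriptions.get(title)\n        if desc is None:\n            continue  # no matched Wikipedia description\n        examples.append((sentence, span, desc.split()[:max_len]))\n    return examples",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Hyperlink-Based Training",
"sec_num": "4.3"
},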
{
"text": "We define a hyperlink-based training objective and add it to ELMo. In particular, we use contextualized entity representations to decode the hyperlinked Wikipedia description, and also use the descriptive entity representations to decode the linked context. We use bag-of-words decoders in both decoding processes. More specifically, given a context sentence x 1:Tx with mention span (i, j) and a description sentence y 1:Ty , we use the same bidirectional language modeling loss l lang (x 1:Tx ) + l lang (y 1:Ty ) in ELMo where",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Hyperlink-Based Training",
"sec_num": "4.3"
},
{
"text": "l lang (u 1:T ) = \u2212 T t=1 log p(u t+1 |u 1 , . . . , u t )+ log p(u t\u22121 |u t , . . . , u T )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Hyperlink-Based Training",
"sec_num": "4.3"
},
{
"text": "and p is defined by the ELMo parameters. In addition, we define the two bag-of-words reconstruction losses:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Hyperlink-Based Training",
"sec_num": "4.3"
},
{
"text": "l ctx = \u2212 t log q(x t |f ELMo ([BOD]y 1:Ty , 1, T y )) l desc = \u2212 t log q(y t |f ELMo ([BOC]x 1:Tx , i, j))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Hyperlink-Based Training",
"sec_num": "4.3"
},
{
"text": "where [BOD] and [BOC] are special symbols prepended to sentences to distinguish descriptions from contexts. The distribution q is parameterized by a linear layer that transforms the conditioning embedding into weights over the vocabulary. The final training loss is",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Hyperlink-Based Training",
"sec_num": "4.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "l lang (x 1:Tx ) + l lang (y 1:Ty ) + l ctx + l desc",
"eq_num": "(1)"
}
],
"section": "Hyperlink-Based Training",
"sec_num": "4.3"
},
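{
"text": "A sketch of one bag-of-words reconstruction term (ours; the paper approximates these softmaxes with negative sampling, shown here in full for clarity):\n\nimport torch.nn.functional as F\n\ndef bow_loss(cond_vec, target_ids, vocab_proj):\n    # vocab_proj: linear layer defining q over the vocabulary, conditioned\n    # on the encoder output; the loss sums the negative log-probabilities\n    # of the target tokens.\n    log_q = F.log_softmax(vocab_proj(cond_vec), dim=-1)\n    return -log_q[target_ids].sum()",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Hyperlink-Based Training",
"sec_num": "4.3"
},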
{
"text": "Same as the original ELMo, each log loss is approximated with negative sampling (Jean et al., 2015) . We write EntELMo to denote the model trained by Eq. (1). When using EntELMo for contextualized entity representations and descriptive entity representations, we use it analogously to ELMo.",
"cite_spans": [
{
"start": 80,
"end": 99,
"text": "(Jean et al., 2015)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Hyperlink-Based Training",
"sec_num": "4.3"
},
{
"text": "As a baseline for hyperlink-based training, we train EntELMo on the WIKIENT dataset with only a bidirectional language model loss. Due to the limitation of computational resources, both variants of EntELMo are trained for one epoch (3 weeks time) with smaller dimensions than ELMo. We set the hidden dimension of each directional long short-term memory network (LSTM; Hochreiter and Schmidhuber, 1997) layer to be 600, and project it to 300 dimensions. The resulting vectors from each layer are thus 600 dimensional. We use 1024 as the negative sampling size for each positive word token. For bag-of-words reconstruction, we randomly sample at most 50 word tokens as positive samples from the the target word tokens. Other hyperparameters are the same as ELMo. EntELMo is implemented based on the official ELMo implementation. 2",
"cite_spans": [
{
"start": 368,
"end": 401,
"text": "Hochreiter and Schmidhuber, 1997)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Setup",
"sec_num": "5.1"
},
{
"text": "As a baseline for contextualized and descriptive entity representations, we use GloVe word averaging of the entity mention as the \"contextualized\" entity representation, and use word averaging of the truncated entity description text as its description representation. We also experiment two variants of EntELMo, namely EntELMo w/o l ctx and EntELMo with l etn . For second variant, we replace l ctx with l etn , where we only decode entity mentions instead of the whole context from descriptions. We lowercased all training data as well as the evaluation benchmarks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Setup",
"sec_num": "5.1"
},
{
"text": "We evaluate the transferrability of ELMo, Ent-ELMo, and BERT by using trainable mixing weights for each layer. For ELMo and EntELMo, we follow the recommendation from Peters et al. (2018a) to first pass mixing weights through a softmax layer and then multiply the weightedsummed representations by a scalar. For BERT, we find it better to just use unnormalized mixing weights. In addition, we investigate per-layer performance for both models in Section 6. Table 3 shows the performance of our models on the EntEval tasks. Our findings are detailed below:",
"cite_spans": [
{
"start": 167,
"end": 188,
"text": "Peters et al. (2018a)",
"ref_id": "BIBREF49"
}
],
"ref_spans": [
{
"start": 457,
"end": 464,
"text": "Table 3",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Setup",
"sec_num": "5.1"
},
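{
"text": "A sketch of this layer-mixing scheme (ours): softmax-normalized weights plus a scalar for ELMo/EntELMo, and unnormalized weights for BERT:\n\nimport torch\nimport torch.nn as nn\n\nclass ScalarMix(nn.Module):\n    def __init__(self, num_layers, normalize=True):\n        super().__init__()\n        self.w = nn.Parameter(torch.zeros(num_layers))\n        self.gamma = nn.Parameter(torch.ones(1))\n        self.normalize = normalize\n\n    def forward(self, layers):\n        # layers: (num_layers, ..., dim) stacked per-layer representations.\n        w = torch.softmax(self.w, 0) if self.normalize else self.w\n        w = w.view(-1, *([1] * (layers.dim() - 1)))\n        return self.gamma * (w * layers).sum(0)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Setup",
"sec_num": "5.1"
},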
{
"text": "\u2022 Pretrained CWRs (ELMo, BERT) perform the best on EntEval overall, indicating that they capture knowledge about entities in contextual mentions or as entity descriptions. \u2022 BERT performs poorly on entity similarity and relatedness tasks. Since this task is zero-shot, it validates the recommended setting of finetuning BERT (Devlin et al., 2018) on downstream tasks, while the embedding of the [CLS] token does not necessarily capture the semantics of the entity. \u2022 BERT Large is better than BERT Base on average, showing large improvements in ERT and NED. To perform well at ERT, a model must either glean particular relationships from pairs of lengthy entity descriptions or else leverage knowledge from pretraining about the entities considered. Relatedly, performance on NED is expected to increase with both the ability to extract knowledge from descriptions and by starting with increased knowledge from pretraining.",
"cite_spans": [
{
"start": 325,
"end": 346,
"text": "(Devlin et al., 2018)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "5.2"
},
{
"text": "The Large model appears to be handling these capabilities better than the Base model. Table 4 : Accuracies (%) in comparing the use of description encoder (Des.) to entity name (Name).",
"cite_spans": [],
"ref_spans": [
{
"start": 86,
"end": 93,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "5.2"
},
{
"text": "tasks but suffers on others. The hyperlink-based training helps on CERP, EFP, ET, and NED. Since the hyperlink loss is closely-associated to the NED problem, it is unsurprising that NED performance is improved. Overall, we believe that hyperlink-based training benefits contextualized entity representations but does not benefit descriptive entity representations (see, for example, the drop of nearly 2 points on ESR, which is based solely on descriptive representations). This pattern may be due to the difficulty of using descriptive entity representations to reconstruct their appearing context.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "5.2"
},
{
"text": "Is descriptive entity representation necessary? A natural question to ask is whether the entity description is needed, as for humans, the entity names carry sufficient amount of information for a lot of tasks. To answer this question, we experiment with encoding entity names by the descriptive entity encoder for ERT (entity relationship typing) and NED (named entity disambiguation) tasks. The results in Table 4 show that encoding the entity names by themselves already captures a great deal of knowledge regarding entities, especially for CoNLL-YAGO. However, in tasks like ERT, the entity descriptions are crucial as the CoNLL ELMo 71.2 Gupta et al. (2017) 65.1 Deep ED 66.7 Table 5 : Accuracies (%) on CoNLL-YAGO with static or non-static entity representations.",
"cite_spans": [
{
"start": 642,
"end": 661,
"text": "Gupta et al. (2017)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [
{
"start": 407,
"end": 414,
"text": "Table 4",
"ref_id": null
},
{
"start": 680,
"end": 687,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Analysis",
"sec_num": "6"
},
{
"text": "names do not reveal enough information to categorize their relationships. Table 5 reports the performance of different descriptive entity representations on the CoNLL-YAGO task. The three models all use ELMo as the context encoder. \"ELMo\" encodes the entity name with ELMo as descriptive encoder, while both Gupta et al. (2017) and Deep ED (Ganea and Hofmann, 2017) use their trained static entity embeddings. 3 As Gupta et al. (2017) and Deep ED have different embedding sizes from ELMo, we add an extra linear layer after them to map to the same dimension. These two models are designed for entity linking, which gives them potential advantages. Even so, ELMo outperforms them both by a wide margin.",
"cite_spans": [
{
"start": 308,
"end": 327,
"text": "Gupta et al. (2017)",
"ref_id": "BIBREF19"
},
{
"start": 410,
"end": 411,
"text": "3",
"ref_id": null
},
{
"start": 415,
"end": 434,
"text": "Gupta et al. (2017)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [
{
"start": 74,
"end": 81,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Analysis",
"sec_num": "6"
},
{
"text": "Per-Layer Analysis. We evaluate each ELMo and EntELMo layer, i.e., the character CNN layer and two bidirectional LSTM layers, as well as each BERT layer on the EntEval tasks. Figure 6 reveals that for ELMo models, the first and second LSTM layers capture most of the entity knowledge from context and descriptions. The BERT layers show more diversity. Lower layers perform better on ESR (entity similarity and relatedness), while 429 for other tasks higher layers are more effective.",
"cite_spans": [],
"ref_spans": [
{
"start": 175,
"end": 183,
"text": "Figure 6",
"ref_id": "FIGREF4"
}
],
"eq_spans": [],
"section": "Analysis",
"sec_num": "6"
},
{
"text": "Our proposed EntEval test suite provides a standardized evaluation method for entity representations. We demonstrate that EntEval tasks can benefit from the success of contextualized word representations such as ELMo and BERT. Augmenting encoding-decoding loss leveraging natural hyperlinks from Wikipedia further improves ELMo on some EntEval tasks. As shown by our experimental results, the contextualized entity encoder benefits more from this hyperlink-based training objective, suggesting future works to prioritize encoding entity description from its mention context.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7"
},
{
"text": "Our implementation is available at https: //github.com/mingdachen/bilm-tf",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "We note that the numbers reported here are not strictly comparable to the ones in their original paper since we keep all the top 30 candidates from Crosswiki while prior work employs different pruning heuristics.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "This research was supported in part by a Bloomberg data science research grant to K. Stratos and K. Gimpel.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Fine-grained analysis of sentence embeddings using auxiliary prediction tasks",
"authors": [
{
"first": "Yossi",
"middle": [],
"last": "Adi",
"suffix": ""
},
{
"first": "Einat",
"middle": [],
"last": "Kermany",
"suffix": ""
},
{
"first": "Yonatan",
"middle": [],
"last": "Belinkov",
"suffix": ""
},
{
"first": "Ofer",
"middle": [],
"last": "Lavi",
"suffix": ""
},
{
"first": "Yoav",
"middle": [],
"last": "Goldberg",
"suffix": ""
}
],
"year": 2017,
"venue": "ICLR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yossi Adi, Einat Kermany, Yonatan Belinkov, Ofer Lavi, and Yoav Goldberg. 2017. Fine-grained anal- ysis of sentence embeddings using auxiliary predic- tion tasks. In ICLR.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Nat-uralLI: Natural logic inference for common sense reasoning",
"authors": [
{
"first": "Gabor",
"middle": [],
"last": "Angeli",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Christopher",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "534--545",
"other_ids": {
"DOI": [
"10.3115/v1/D14-1059"
]
},
"num": null,
"urls": [],
"raw_text": "Gabor Angeli and Christopher D. Manning. 2014. Nat- uralLI: Natural logic inference for common sense reasoning. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Pro- cessing (EMNLP), pages 534-545, Doha, Qatar. As- sociation for Computational Linguistics.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Evaluating layers of representation in neural machine translation on part-of-speech and semantic tagging tasks",
"authors": [
{
"first": "Yonatan",
"middle": [],
"last": "Belinkov",
"suffix": ""
},
{
"first": "Llu\u00eds",
"middle": [],
"last": "M\u00e0rquez",
"suffix": ""
},
{
"first": "Hassan",
"middle": [],
"last": "Sajjad",
"suffix": ""
},
{
"first": "Nadir",
"middle": [],
"last": "Durrani",
"suffix": ""
},
{
"first": "Fahim",
"middle": [],
"last": "Dalvi",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Glass",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the Eighth International Joint Conference on Natural Language Processing",
"volume": "1",
"issue": "",
"pages": "1--10",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yonatan Belinkov, Llu\u00eds M\u00e0rquez, Hassan Sajjad, Nadir Durrani, Fahim Dalvi, and James Glass. 2017. Evaluating layers of representation in neural ma- chine translation on part-of-speech and semantic tagging tasks. In Proceedings of the Eighth In- ternational Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1-10, Taipei, Taiwan. Asian Federation of Natural Lan- guage Processing.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Freebase: a collaboratively created graph database for structuring human knowledge",
"authors": [
{
"first": "Kurt",
"middle": [],
"last": "Bollacker",
"suffix": ""
},
{
"first": "Colin",
"middle": [],
"last": "Evans",
"suffix": ""
},
{
"first": "Praveen",
"middle": [],
"last": "Paritosh",
"suffix": ""
},
{
"first": "Tim",
"middle": [],
"last": "Sturge",
"suffix": ""
},
{
"first": "Jamie",
"middle": [],
"last": "Taylor",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the 2008 ACM SIGMOD international conference on Management of data",
"volume": "",
"issue": "",
"pages": "1247--1250",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kurt Bollacker, Colin Evans, Praveen Paritosh, Tim Sturge, and Jamie Taylor. 2008. Freebase: a collab- oratively created graph database for structuring hu- man knowledge. In Proceedings of the 2008 ACM SIGMOD international conference on Management of data, pages 1247-1250. AcM.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "A large annotated corpus for learning natural language inference",
"authors": [
{
"first": "R",
"middle": [],
"last": "Samuel",
"suffix": ""
},
{
"first": "Gabor",
"middle": [],
"last": "Bowman",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Angeli",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Potts",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "632--642",
"other_ids": {
"DOI": [
"10.18653/v1/D15-1075"
]
},
"num": null,
"urls": [],
"raw_text": "Samuel R. Bowman, Gabor Angeli, Christopher Potts, and Christopher D. Manning. 2015. A large anno- tated corpus for learning natural language inference. In Proceedings of the 2015 Conference on Empiri- cal Methods in Natural Language Processing, pages 632-642, Lisbon, Portugal. Association for Compu- tational Linguistics.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "PreCo: A large-scale dataset in preschool vocabulary for coreference resolution",
"authors": [
{
"first": "Hong",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Zhenhua",
"middle": [],
"last": "Fan",
"suffix": ""
},
{
"first": "Hao",
"middle": [],
"last": "Lu",
"suffix": ""
},
{
"first": "Alan",
"middle": [],
"last": "Yuille",
"suffix": ""
},
{
"first": "Shu",
"middle": [],
"last": "Rong",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "172--181",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hong Chen, Zhenhua Fan, Hao Lu, Alan Yuille, and Shu Rong. 2018. PreCo: A large-scale dataset in preschool vocabulary for coreference resolution. In Proceedings of the 2018 Conference on Empiri- cal Methods in Natural Language Processing, pages 172-181, Brussels, Belgium. Association for Com- putational Linguistics.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Evaluation benchmarks and learning criteriafor discourse-aware sentence representations",
"authors": [
{
"first": "Mingda",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Zewei",
"middle": [],
"last": "Chu",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Gimpel",
"suffix": ""
}
],
"year": 2019,
"venue": "Proc. of EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mingda Chen, Zewei Chu, and Kevin Gimpel. 2019a. Evaluation benchmarks and learning criteriafor discourse-aware sentence representations. In Proc. of EMNLP.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Seeing things from a different angle:discovering diverse perspectives about claims",
"authors": [
{
"first": "Sihao",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Khashabi",
"suffix": ""
},
{
"first": "Wenpeng",
"middle": [],
"last": "Yin",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Callison-Burch",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Roth",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "542--557",
"other_ids": {
"DOI": [
"10.18653/v1/N19-1053"
]
},
"num": null,
"urls": [],
"raw_text": "Sihao Chen, Daniel Khashabi, Wenpeng Yin, Chris Callison-Burch, and Dan Roth. 2019b. Seeing things from a different angle:discovering diverse perspectives about claims. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Hu- man Language Technologies, Volume 1 (Long and Short Papers), pages 542-557, Minneapolis, Min- nesota. Association for Computational Linguistics.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Ultra-fine entity typing",
"authors": [
{
"first": "Eunsol",
"middle": [],
"last": "Choi",
"suffix": ""
},
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Yejin",
"middle": [],
"last": "Choi",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "87--96",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eunsol Choi, Omer Levy, Yejin Choi, and Luke Zettle- moyer. 2018. Ultra-fine entity typing. In Proceed- ings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Pa- pers), pages 87-96, Melbourne, Australia. Associa- tion for Computational Linguistics.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Neural text generation in stories using entity representations as context",
"authors": [
{
"first": "Elizabeth",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "Yangfeng",
"middle": [],
"last": "Ji",
"suffix": ""
},
{
"first": "Noah",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "2250--2260",
"other_ids": {
"DOI": [
"10.18653/v1/N18-1204"
]
},
"num": null,
"urls": [],
"raw_text": "Elizabeth Clark, Yangfeng Ji, and Noah A. Smith. 2018. Neural text generation in stories using en- tity representations as context. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Hu- man Language Technologies, Volume 1 (Long Pa- pers), pages 2250-2260, New Orleans, Louisiana. Association for Computational Linguistics.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Senteval: An evaluation toolkit for universal sentence representations",
"authors": [
{
"first": "Alexis",
"middle": [],
"last": "Conneau",
"suffix": ""
},
{
"first": "Douwe",
"middle": [],
"last": "Kiela",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Eleventh International Conference on Language Resources and Evaluation",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alexis Conneau and Douwe Kiela. 2018. Senteval: An evaluation toolkit for universal sentence representa- tions. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC-2018).",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "What you can cram into a single $&!#* vector: Probing sentence embeddings for linguistic properties",
"authors": [
{
"first": "Alexis",
"middle": [],
"last": "Conneau",
"suffix": ""
},
{
"first": "Germ\u00e1n",
"middle": [],
"last": "Kruszewski",
"suffix": ""
},
{
"first": "Guillaume",
"middle": [],
"last": "Lample",
"suffix": ""
},
{
"first": "Lo\u00efc",
"middle": [],
"last": "Barrault",
"suffix": ""
},
{
"first": "Marco",
"middle": [],
"last": "Baroni",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "2126--2136",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alexis Conneau, Germ\u00e1n Kruszewski, Guillaume Lample, Lo\u00efc Barrault, and Marco Baroni. 2018. What you can cram into a single $&!#* vector: Probing sentence embeddings for linguistic proper- ties. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Vol- ume 1: Long Papers), pages 2126-2136, Melbourne, Australia. Association for Computational Linguis- tics.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "FINET: Context-aware fine-grained named entity typing",
"authors": [
{
"first": "Luciano",
"middle": [],
"last": "Del Corro",
"suffix": ""
},
{
"first": "Abdalghani",
"middle": [],
"last": "Abujabal",
"suffix": ""
},
{
"first": "Rainer",
"middle": [],
"last": "Gemulla",
"suffix": ""
},
{
"first": "Gerhard",
"middle": [],
"last": "Weikum",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "868--878",
"other_ids": {
"DOI": [
"10.18653/v1/D15-1103"
]
},
"num": null,
"urls": [],
"raw_text": "Luciano Del Corro, Abdalghani Abujabal, Rainer Gemulla, and Gerhard Weikum. 2015. FINET: Context-aware fine-grained named entity typing. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 868-878, Lisbon, Portugal. Association for Compu- tational Linguistics.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Bert: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1810.04805"
]
},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understand- ing. arXiv preprint arXiv:1810.04805.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Easy victories and uphill battles in coreference resolution",
"authors": [
{
"first": "Greg",
"middle": [],
"last": "Durrett",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Klein",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1971--1982",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Greg Durrett and Dan Klein. 2013. Easy victories and uphill battles in coreference resolution. In Proceed- ings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1971-1982, Seattle, Washington, USA. Association for Compu- tational Linguistics.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "A joint model for entity analysis: Coreference, typing, and linking",
"authors": [
{
"first": "Greg",
"middle": [],
"last": "Durrett",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Klein",
"suffix": ""
}
],
"year": 2014,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "477--490",
"other_ids": {
"DOI": [
"10.1162/tacl_a_00197"
]
},
"num": null,
"urls": [],
"raw_text": "Greg Durrett and Dan Klein. 2014. A joint model for entity analysis: Coreference, typing, and linking. Transactions of the Association for Computational Linguistics, 2:477-490.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Capturing semantic similarity for entity linking with convolutional neural networks",
"authors": [
{
"first": "Matthew",
"middle": [],
"last": "Francis-Landau",
"suffix": ""
},
{
"first": "Greg",
"middle": [],
"last": "Durrett",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Klein",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "1256--1261",
"other_ids": {
"DOI": [
"10.18653/v1/N16-1150"
]
},
"num": null,
"urls": [],
"raw_text": "Matthew Francis-Landau, Greg Durrett, and Dan Klein. 2016. Capturing semantic similarity for en- tity linking with convolutional neural networks. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computa- tional Linguistics: Human Language Technologies, pages 1256-1261, San Diego, California. Associa- tion for Computational Linguistics.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Deep joint entity disambiguation with local neural attention",
"authors": [
{
"first": "Octavian-Eugen",
"middle": [],
"last": "Ganea",
"suffix": ""
},
{
"first": "Thomas",
"middle": [],
"last": "Hofmann",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "2619--2629",
"other_ids": {
"DOI": [
"10.18653/v1/D17-1277"
]
},
"num": null,
"urls": [],
"raw_text": "Octavian-Eugen Ganea and Thomas Hofmann. 2017. Deep joint entity disambiguation with local neural attention. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Process- ing, pages 2619-2629, Copenhagen, Denmark. As- sociation for Computational Linguistics.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Robust entity linking via random walks",
"authors": [
{
"first": "Zhaochen",
"middle": [],
"last": "Guo",
"suffix": ""
},
{
"first": "Denilson",
"middle": [],
"last": "Barbosa",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 23rd ACM International Conference on Conference on Information and Knowledge Management, CIKM '14",
"volume": "",
"issue": "",
"pages": "499--508",
"other_ids": {
"DOI": [
"10.1145/2661829.2661887"
]
},
"num": null,
"urls": [],
"raw_text": "Zhaochen Guo and Denilson Barbosa. 2014. Robust entity linking via random walks. In Proceedings of the 23rd ACM International Conference on Confer- ence on Information and Knowledge Management, CIKM '14, pages 499-508, New York, NY, USA. ACM.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Entity linking via joint encoding of types, descriptions, and context",
"authors": [
{
"first": "Nitish",
"middle": [],
"last": "Gupta",
"suffix": ""
},
{
"first": "Sameer",
"middle": [],
"last": "Singh",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Roth",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "2681--2690",
"other_ids": {
"DOI": [
"10.18653/v1/D17-1284"
]
},
"num": null,
"urls": [],
"raw_text": "Nitish Gupta, Sameer Singh, and Dan Roth. 2017. En- tity linking via joint encoding of types, descriptions, and context. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Pro- cessing, pages 2681-2690, Copenhagen, Denmark. Association for Computational Linguistics.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Learning symmetric collaborative dialogue agents with dynamic knowledge graph embeddings",
"authors": [
{
"first": "He",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Anusha",
"middle": [],
"last": "Balakrishnan",
"suffix": ""
},
{
"first": "Mihail",
"middle": [],
"last": "Eric",
"suffix": ""
},
{
"first": "Percy",
"middle": [],
"last": "Liang",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1766--1776",
"other_ids": {
"DOI": [
"10.18653/v1/P17-1162"
]
},
"num": null,
"urls": [],
"raw_text": "He He, Anusha Balakrishnan, Mihail Eric, and Percy Liang. 2017. Learning symmetric collaborative dia- logue agents with dynamic knowledge graph embed- dings. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Vol- ume 1: Long Papers), pages 1766-1776, Vancouver, Canada. Association for Computational Linguistics.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Learning entity representation for entity disambiguation",
"authors": [
{
"first": "Zhengyan",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Shujie",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Mu",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Ming",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Longkai",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Houfeng",
"middle": [],
"last": "Wang",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "30--34",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhengyan He, Shujie Liu, Mu Li, Ming Zhou, Longkai Zhang, and Houfeng Wang. 2013. Learning entity representation for entity disambiguation. In Pro- ceedings of the 51st Annual Meeting of the Associa- tion for Computational Linguistics (Volume 2: Short Papers), pages 30-34, Sofia, Bulgaria. Association for Computational Linguistics.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Long short-term memory",
"authors": [
{
"first": "Sepp",
"middle": [],
"last": "Hochreiter",
"suffix": ""
},
{
"first": "J\u00fcrgen",
"middle": [],
"last": "Schmidhuber",
"suffix": ""
}
],
"year": 1997,
"venue": "Neural computation",
"volume": "9",
"issue": "8",
"pages": "1735--1780",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sepp Hochreiter and J\u00fcrgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735-1780.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Kore: keyphrase overlap relatedness for entity disambiguation",
"authors": [
{
"first": "Johannes",
"middle": [],
"last": "Hoffart",
"suffix": ""
},
{
"first": "Stephan",
"middle": [],
"last": "Seufert",
"suffix": ""
},
{
"first": "Dat",
"middle": [],
"last": "Ba Nguyen",
"suffix": ""
},
{
"first": "Martin",
"middle": [],
"last": "Theobald",
"suffix": ""
},
{
"first": "Gerhard",
"middle": [],
"last": "Weikum",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 21st ACM international conference on Information and knowledge management",
"volume": "",
"issue": "",
"pages": "545--554",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Johannes Hoffart, Stephan Seufert, Dat Ba Nguyen, Martin Theobald, and Gerhard Weikum. 2012. Kore: keyphrase overlap relatedness for entity dis- ambiguation. In Proceedings of the 21st ACM inter- national conference on Information and knowledge management, pages 545-554. ACM.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Robust disambiguation of named entities in text",
"authors": [
{
"first": "Johannes",
"middle": [],
"last": "Hoffart",
"suffix": ""
},
{
"first": "Mohamed",
"middle": [
"Amir"
],
"last": "Yosef",
"suffix": ""
},
{
"first": "Ilaria",
"middle": [],
"last": "Bordino",
"suffix": ""
},
{
"first": "Hagen",
"middle": [],
"last": "F\u00fcrstenau",
"suffix": ""
},
{
"first": "Manfred",
"middle": [],
"last": "Pinkal",
"suffix": ""
},
{
"first": "Marc",
"middle": [],
"last": "Spaniol",
"suffix": ""
},
{
"first": "Bilyana",
"middle": [],
"last": "Taneva",
"suffix": ""
},
{
"first": "Stefan",
"middle": [],
"last": "Thater",
"suffix": ""
},
{
"first": "Gerhard",
"middle": [],
"last": "Weikum",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "782--792",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Johannes Hoffart, Mohamed Amir Yosef, Ilaria Bor- dino, Hagen F\u00fcrstenau, Manfred Pinkal, Marc Span- iol, Bilyana Taneva, Stefan Thater, and Gerhard Weikum. 2011. Robust disambiguation of named entities in text. In Proceedings of the Conference on Empirical Methods in Natural Language Process- ing, pages 782-792. Association for Computational Linguistics.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "2017. spacy 2: Natural language understanding with bloom embeddings, convolutional neural networks and incremental parsing",
"authors": [
{
"first": "Matthew",
"middle": [],
"last": "Honnibal",
"suffix": ""
},
{
"first": "Ines",
"middle": [],
"last": "Montani",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matthew Honnibal and Ines Montani. 2017. spacy 2: Natural language understanding with bloom embed- dings, convolutional neural networks and incremen- tal parsing. To appear.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Leveraging deep neural networks and knowledge graphs for entity disambiguation",
"authors": [
{
"first": "Hongzhao",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Larry",
"middle": [],
"last": "Heck",
"suffix": ""
},
{
"first": "Heng",
"middle": [],
"last": "Ji",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1504.07678"
]
},
"num": null,
"urls": [],
"raw_text": "Hongzhao Huang, Larry Heck, and Heng Ji. 2015. Leveraging deep neural networks and knowledge graphs for entity disambiguation. arXiv preprint arXiv:1504.07678.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "On using very large target vocabulary for neural machine translation",
"authors": [
{
"first": "S\u00e9bastien",
"middle": [],
"last": "Jean",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Roland",
"middle": [],
"last": "Memisevic",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing",
"volume": "1",
"issue": "",
"pages": "1--10",
"other_ids": {
"DOI": [
"10.3115/v1/P15-1001"
]
},
"num": null,
"urls": [],
"raw_text": "S\u00e9bastien Jean, Kyunghyun Cho, Roland Memisevic, and Yoshua Bengio. 2015. On using very large target vocabulary for neural machine translation. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Lan- guage Processing (Volume 1: Long Papers), pages 1-10, Beijing, China. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Dynamic entity representations in neural language models",
"authors": [
{
"first": "Yangfeng",
"middle": [],
"last": "Ji",
"suffix": ""
},
{
"first": "Chenhao",
"middle": [],
"last": "Tan",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Martschat",
"suffix": ""
},
{
"first": "Yejin",
"middle": [],
"last": "Choi",
"suffix": ""
},
{
"first": "Noah",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1830--1839",
"other_ids": {
"DOI": [
"10.18653/v1/D17-1195"
]
},
"num": null,
"urls": [],
"raw_text": "Yangfeng Ji, Chenhao Tan, Sebastian Martschat, Yejin Choi, and Noah A. Smith. 2017. Dynamic entity representations in neural language models. In Pro- ceedings of the 2017 Conference on Empirical Meth- ods in Natural Language Processing, pages 1830- 1839, Copenhagen, Denmark. Association for Com- putational Linguistics.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Coreference resolution with entity equalization",
"authors": [
{
"first": "Ben",
"middle": [],
"last": "Kantor",
"suffix": ""
},
{
"first": "Amir",
"middle": [],
"last": "Globerson",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "673--677",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ben Kantor and Amir Globerson. 2019. Coreference resolution with entity equalization. In Proceed- ings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 673-677, Flo- rence, Italy. Association for Computational Linguis- tics.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Improving entity linking by modeling latent relations between mentions",
"authors": [
{
"first": "Phong",
"middle": [],
"last": "Le",
"suffix": ""
},
{
"first": "Ivan",
"middle": [],
"last": "Titov",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "1595--1604",
"other_ids": {
"DOI": [
"10.18653/v1/P18-1148"
]
},
"num": null,
"urls": [],
"raw_text": "Phong Le and Ivan Titov. 2018. Improving entity link- ing by modeling latent relations between mentions. In Proceedings of the 56th Annual Meeting of the As- sociation for Computational Linguistics (Volume 1: Long Papers), pages 1595-1604, Melbourne, Aus- tralia. Association for Computational Linguistics.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Boosting entity linking performance by leveraging unlabeled documents",
"authors": [
{
"first": "Phong",
"middle": [],
"last": "Le",
"suffix": ""
},
{
"first": "Ivan",
"middle": [],
"last": "Titov",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1935--1945",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Phong Le and Ivan Titov. 2019. Boosting entity linking performance by leveraging unlabeled documents. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1935-1945, Florence, Italy. Association for Compu- tational Linguistics.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "End-to-end neural coreference resolution",
"authors": [
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Luheng",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "188--197",
"other_ids": {
"DOI": [
"10.18653/v1/D17-1018"
]
},
"num": null,
"urls": [],
"raw_text": "Kenton Lee, Luheng He, Mike Lewis, and Luke Zettle- moyer. 2017. End-to-end neural coreference reso- lution. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Process- ing, pages 188-197, Copenhagen, Denmark. Asso- ciation for Computational Linguistics.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Commonsense knowledge base completion",
"authors": [
{
"first": "Xiang",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Aynaz",
"middle": [],
"last": "Taheri",
"suffix": ""
},
{
"first": "Lifu",
"middle": [],
"last": "Tu",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Gimpel",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "1445--1455",
"other_ids": {
"DOI": [
"10.18653/v1/P16-1137"
]
},
"num": null,
"urls": [],
"raw_text": "Xiang Li, Aynaz Taheri, Lifu Tu, and Kevin Gimpel. 2016. Commonsense knowledge base completion. In Proceedings of the 54th Annual Meeting of the As- sociation for Computational Linguistics (Volume 1: Long Papers), pages 1445-1455, Berlin, Germany. Association for Computational Linguistics.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Design challenges for entity linking",
"authors": [
{
"first": "Xiao",
"middle": [],
"last": "Ling",
"suffix": ""
},
{
"first": "Sameer",
"middle": [],
"last": "Singh",
"suffix": ""
},
{
"first": "Daniel",
"middle": [
"S"
],
"last": "Weld",
"suffix": ""
}
],
"year": 2015,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "3",
"issue": "",
"pages": "315--328",
"other_ids": {
"DOI": [
"10.1162/tacl_a_00141"
]
},
"num": null,
"urls": [],
"raw_text": "Xiao Ling, Sameer Singh, and Daniel S. Weld. 2015. Design challenges for entity linking. Transactions of the Association for Computational Linguistics, 3:315-328.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Linguistic knowledge and transferability of contextual representations",
"authors": [
{
"first": "Nelson",
"middle": [
"F"
],
"last": "Liu",
"suffix": ""
},
{
"first": "Matt",
"middle": [],
"last": "Gardner",
"suffix": ""
},
{
"first": "Yonatan",
"middle": [],
"last": "Belinkov",
"suffix": ""
},
{
"first": "Matthew",
"middle": [
"E"
],
"last": "Peters",
"suffix": ""
},
{
"first": "Noah",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nelson F. Liu, Matt Gardner, Yonatan Belinkov, Matthew E. Peters, and Noah A. Smith. 2019a. Lin- guistic knowledge and transferability of contextual representations. In Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Tech- nologies.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Roberta: A robustly optimized bert pretraining approach",
"authors": [
{
"first": "Yinhan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Myle",
"middle": [],
"last": "Ott",
"suffix": ""
},
{
"first": "Naman",
"middle": [],
"last": "Goyal",
"suffix": ""
},
{
"first": "Jingfei",
"middle": [],
"last": "Du",
"suffix": ""
},
{
"first": "Mandar",
"middle": [],
"last": "Joshi",
"suffix": ""
},
{
"first": "Danqi",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
},
{
"first": "Veselin",
"middle": [],
"last": "Stoyanov",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1907.11692"
]
},
"num": null,
"urls": [],
"raw_text": "Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Man- dar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019b. Roberta: A robustly optimized bert pretraining ap- proach. arXiv preprint arXiv:1907.11692.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "Barack's wife hillary: Using knowledge graphs for fact-aware language modeling",
"authors": [
{
"first": "Robert",
"middle": [],
"last": "Logan",
"suffix": ""
},
{
"first": "Nelson",
"middle": [
"F"
],
"last": "Liu",
"suffix": ""
},
{
"first": "Matthew",
"middle": [
"E"
],
"last": "Peters",
"suffix": ""
},
{
"first": "Matt",
"middle": [],
"last": "Gardner",
"suffix": ""
},
{
"first": "Sameer",
"middle": [],
"last": "Singh",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "5962--5971",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Robert Logan, Nelson F. Liu, Matthew E. Peters, Matt Gardner, and Sameer Singh. 2019. Barack's wife hillary: Using knowledge graphs for fact-aware lan- guage modeling. In Proceedings of the 57th Annual Meeting of the Association for Computational Lin- guistics, pages 5962-5971, Florence, Italy. Associa- tion for Computational Linguistics.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "Zero-shot entity linking by reading entity descriptions",
"authors": [
{
"first": "Lajanugen",
"middle": [],
"last": "Logeswaran",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
},
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Honglak",
"middle": [],
"last": "Lee",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "3449--3460",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lajanugen Logeswaran, Ming-Wei Chang, Kenton Lee, Kristina Toutanova, Jacob Devlin, and Honglak Lee. 2019. Zero-shot entity linking by reading en- tity descriptions. In Proceedings of the 57th Annual Meeting of the Association for Computational Lin- guistics, pages 3449-3460, Florence, Italy. Associa- tion for Computational Linguistics.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "World knowledge for reading comprehension: Rare entity prediction with hierarchical lstms using external descriptions",
"authors": [
{
"first": "Teng",
"middle": [],
"last": "Long",
"suffix": ""
},
{
"first": "Emmanuel",
"middle": [],
"last": "Bengio",
"suffix": ""
},
{
"first": "Ryan",
"middle": [],
"last": "Lowe",
"suffix": ""
},
{
"first": "Jackie Chi Kit",
"middle": [],
"last": "Cheung",
"suffix": ""
},
{
"first": "Doina",
"middle": [],
"last": "Precup",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "825--834",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Teng Long, Emmanuel Bengio, Ryan Lowe, Jackie Chi Kit Cheung, and Doina Precup. 2017. World knowledge for reading comprehension: Rare entity prediction with hierarchical lstms using external de- scriptions. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Pro- cessing, pages 825-834.",
"links": null
},
"BIBREF40": {
"ref_id": "b40",
"title": "Joint learning of named entity recognition and entity linking",
"authors": [
{
"first": "Pedro",
"middle": [
"Henrique"
],
"last": "Martins",
"suffix": ""
},
{
"first": "Zita",
"middle": [],
"last": "Marinho",
"suffix": ""
},
{
"first": "Andr\u00e9",
"middle": [
"F T"
],
"last": "Martins",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics: Student Research Workshop",
"volume": "",
"issue": "",
"pages": "190--196",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pedro Henrique Martins, Zita Marinho, and Andr\u00e9 F. T. Martins. 2019. Joint learning of named en- tity recognition and entity linking. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics: Student Research Work- shop, pages 190-196, Florence, Italy. Association for Computational Linguistics.",
"links": null
},
"BIBREF41": {
"ref_id": "b41",
"title": "Learned in translation: Contextualized word vectors",
"authors": [
{
"first": "Bryan",
"middle": [],
"last": "Mccann",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Bradbury",
"suffix": ""
},
{
"first": "Caiming",
"middle": [],
"last": "Xiong",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bryan McCann, James Bradbury, Caiming Xiong, and Richard Socher. 2017. Learned in translation: Con- textualized word vectors. In I. Guyon, U. V.",
"links": null
},
"BIBREF42": {
"ref_id": "b42",
"title": "Advances in Neural Information Processing Systems",
"authors": [
{
"first": "U",
"middle": [
"V"
],
"last": "Luxburg",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Bengio",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Wallach",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Fergus",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Vishwanathan",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Garnett",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "30",
"issue": "",
"pages": "6294--6305",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vish- wanathan, and R. Garnett, editors, Advances in Neu- ral Information Processing Systems 30, pages 6294- 6305. Curran Associates, Inc.",
"links": null
},
"BIBREF43": {
"ref_id": "b43",
"title": "Can a suit of armor conduct electricity? a new dataset for open book question answering",
"authors": [
{
"first": "Todor",
"middle": [],
"last": "Mihaylov",
"suffix": ""
},
{
"first": "Peter",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "Tushar",
"middle": [],
"last": "Khot",
"suffix": ""
},
{
"first": "Ashish",
"middle": [],
"last": "Sabharwal",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "2381--2391",
"other_ids": {
"DOI": [
"10.18653/v1/D18-1260"
]
},
"num": null,
"urls": [],
"raw_text": "Todor Mihaylov, Peter Clark, Tushar Khot, and Ashish Sabharwal. 2018. Can a suit of armor conduct elec- tricity? a new dataset for open book question an- swering. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Process- ing, pages 2381-2391, Brussels, Belgium. Associ- ation for Computational Linguistics.",
"links": null
},
"BIBREF44": {
"ref_id": "b44",
"title": "Finer grained entity typing with typenet",
"authors": [
{
"first": "Shikhar",
"middle": [],
"last": "Murty",
"suffix": ""
},
{
"first": "Patrick",
"middle": [],
"last": "Verga",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Vilnis",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Mccallum",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1711.05795"
]
},
"num": null,
"urls": [],
"raw_text": "Shikhar Murty, Patrick Verga, Luke Vilnis, and Andrew McCallum. 2017. Finer grained entity typing with typenet. arXiv preprint arXiv:1711.05795.",
"links": null
},
"BIBREF45": {
"ref_id": "b45",
"title": "Jointly embedding entities and text with distant supervision",
"authors": [
{
"first": "Denis",
"middle": [],
"last": "Newman-Griffis",
"suffix": ""
},
{
"first": "Albert",
"middle": [
"M"
],
"last": "Lai",
"suffix": ""
},
{
"first": "Eric",
"middle": [],
"last": "Fosler-Lussier",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of The Third Workshop on Representation Learning for NLP",
"volume": "",
"issue": "",
"pages": "195--206",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Denis Newman-Griffis, Albert M. Lai, and Eric Fosler- Lussier. 2018. Jointly embedding entities and text with distant supervision. In Proceedings of The Third Workshop on Representation Learning for NLP, pages 195-206, Melbourne, Australia. Asso- ciation for Computational Linguistics.",
"links": null
},
"BIBREF46": {
"ref_id": "b46",
"title": "Description-based zero-shot fine-grained entity typing",
"authors": [
{
"first": "Rasha",
"middle": [],
"last": "Obeidat",
"suffix": ""
},
{
"first": "Xiaoli",
"middle": [],
"last": "Fern",
"suffix": ""
},
{
"first": "Hamed",
"middle": [],
"last": "Shahbazi",
"suffix": ""
},
{
"first": "Prasad",
"middle": [],
"last": "Tadepalli",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "807--814",
"other_ids": {
"DOI": [
"10.18653/v1/N19-1087"
]
},
"num": null,
"urls": [],
"raw_text": "Rasha Obeidat, Xiaoli Fern, Hamed Shahbazi, and Prasad Tadepalli. 2019. Description-based zero-shot fine-grained entity typing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Hu- man Language Technologies, Volume 1 (Long and Short Papers), pages 807-814, Minneapolis, Min- nesota. Association for Computational Linguistics.",
"links": null
},
"BIBREF47": {
"ref_id": "b47",
"title": "Learning to denoise distantly-labeled data for entity typing",
"authors": [
{
"first": "Yasumasa",
"middle": [],
"last": "Onoe",
"suffix": ""
},
{
"first": "Greg",
"middle": [],
"last": "Durrett",
"suffix": ""
}
],
"year": 2019,
"venue": "NAACL-HLT",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yasumasa Onoe and Greg Durrett. 2019. Learning to denoise distantly-labeled data for entity typing. In NAACL-HLT.",
"links": null
},
"BIBREF48": {
"ref_id": "b48",
"title": "Glove: Global vectors for word representation",
"authors": [
{
"first": "Jeffrey",
"middle": [],
"last": "Pennington",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2014,
"venue": "Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "1532--1543",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jeffrey Pennington, Richard Socher, and Christo- pher D. Manning. 2014. Glove: Global vectors for word representation. In Empirical Methods in Nat- ural Language Processing (EMNLP), pages 1532- 1543.",
"links": null
},
"BIBREF49": {
"ref_id": "b49",
"title": "Deep contextualized word representations",
"authors": [
{
"first": "Matthew",
"middle": [],
"last": "Peters",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Neumann",
"suffix": ""
},
{
"first": "Mohit",
"middle": [],
"last": "Iyyer",
"suffix": ""
},
{
"first": "Matt",
"middle": [],
"last": "Gardner",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "2227--2237",
"other_ids": {
"DOI": [
"10.18653/v1/N18-1202"
]
},
"num": null,
"urls": [],
"raw_text": "Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018a. Deep contextualized word rep- resentations. In Proceedings of the 2018 Confer- ence of the North American Chapter of the Associ- ation for Computational Linguistics: Human Lan- guage Technologies, Volume 1 (Long Papers), pages 2227-2237, New Orleans, Louisiana. Association for Computational Linguistics.",
"links": null
},
"BIBREF50": {
"ref_id": "b50",
"title": "Dissecting contextual word embeddings: Architecture and representation",
"authors": [
{
"first": "Matthew",
"middle": [],
"last": "Peters",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Neumann",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
},
{
"first": "Wen-Tau",
"middle": [],
"last": "Yih",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1499--1509",
"other_ids": {
"DOI": [
"10.18653/v1/D18-1179"
]
},
"num": null,
"urls": [],
"raw_text": "Matthew Peters, Mark Neumann, Luke Zettlemoyer, and Wen-tau Yih. 2018b. Dissecting contextual word embeddings: Architecture and representation. In Proceedings of the 2018 Conference on Em- pirical Methods in Natural Language Processing, pages 1499-1509, Brussels, Belgium. Association for Computational Linguistics.",
"links": null
},
"BIBREF51": {
"ref_id": "b51",
"title": "Fine-grained entity typing with high-multiplicity assignments",
"authors": [
{
"first": "Maxim",
"middle": [],
"last": "Rabinovich",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Klein",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "330--334",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Maxim Rabinovich and Dan Klein. 2017. Fine-grained entity typing with high-multiplicity assignments. In Proceedings of the 55th Annual Meeting of the As- sociation for Computational Linguistics (Volume 2: Short Papers), pages 330-334.",
"links": null
},
"BIBREF52": {
"ref_id": "b52",
"title": "Explain yourself! leveraging language models for commonsense reasoning",
"authors": [
{
"first": "Nazneen",
"middle": [
"Fatema"
],
"last": "Rajani",
"suffix": ""
},
{
"first": "Bryan",
"middle": [],
"last": "McCann",
"suffix": ""
},
{
"first": "Caiming",
"middle": [],
"last": "Xiong",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1906.02361"
]
},
"num": null,
"urls": [],
"raw_text": "Nazneen Fatema Rajani, Bryan McCann, Caiming Xiong, and Richard Socher. 2019. Explain yourself! leveraging language models for commonsense rea- soning. arXiv preprint arXiv:1906.02361.",
"links": null
},
"BIBREF53": {
"ref_id": "b53",
"title": "Socialiqa: Commonsense reasoning about social interactions",
"authors": [
{
"first": "Maarten",
"middle": [],
"last": "Sap",
"suffix": ""
},
{
"first": "Hannah",
"middle": [],
"last": "Rashkin",
"suffix": ""
},
{
"first": "Derek",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Ronan",
"middle": [],
"last": "Lebras",
"suffix": ""
},
{
"first": "Yejin",
"middle": [],
"last": "Choi",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1904.09728"
]
},
"num": null,
"urls": [],
"raw_text": "Maarten Sap, Hannah Rashkin, Derek Chen, Ronan LeBras, and Yejin Choi. 2019. Socialiqa: Com- monsense reasoning about social interactions. arXiv preprint arXiv:1904.09728.",
"links": null
},
"BIBREF54": {
"ref_id": "b54",
"title": "Does string-based neural MT learn source syntax?",
"authors": [
{
"first": "Xing",
"middle": [],
"last": "Shi",
"suffix": ""
},
{
"first": "Inkit",
"middle": [],
"last": "Padhi",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Knight",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1526--1534",
"other_ids": {
"DOI": [
"10.18653/v1/D16-1159"
]
},
"num": null,
"urls": [],
"raw_text": "Xing Shi, Inkit Padhi, and Kevin Knight. 2016. Does string-based neural MT learn source syntax? In Pro- ceedings of the 2016 Conference on Empirical Meth- ods in Natural Language Processing, pages 1526- 1534, Austin, Texas. Association for Computational Linguistics.",
"links": null
},
"BIBREF55": {
"ref_id": "b55",
"title": "Wikilinks: A large-scale cross-document coreference corpus labeled via links to Wikipedia",
"authors": [
{
"first": "Sameer",
"middle": [],
"last": "Singh",
"suffix": ""
},
{
"first": "Amarnag",
"middle": [],
"last": "Subramanya",
"suffix": ""
},
{
"first": "Fernando",
"middle": [],
"last": "Pereira",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Mccallum",
"suffix": ""
}
],
"year": 2012,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sameer Singh, Amarnag Subramanya, Fernando Pereira, and Andrew McCallum. 2012. Wikilinks: A large-scale cross-document coreference corpus la- beled via links to Wikipedia. Technical Report UM- CS-2012-015.",
"links": null
},
"BIBREF56": {
"ref_id": "b56",
"title": "Conceptnet 5.5: An open multilingual graph of general knowledge",
"authors": [
{
"first": "Robert",
"middle": [],
"last": "Speer",
"suffix": ""
},
{
"first": "Joshua",
"middle": [],
"last": "Chin",
"suffix": ""
},
{
"first": "Catherine",
"middle": [],
"last": "Havasi",
"suffix": ""
}
],
"year": 2017,
"venue": "Thirty-First AAAI Conference on Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Robert Speer, Joshua Chin, and Catherine Havasi. 2017. Conceptnet 5.5: An open multilingual graph of general knowledge. In Thirty-First AAAI Confer- ence on Artificial Intelligence.",
"links": null
},
"BIBREF57": {
"ref_id": "b57",
"title": "A cross-lingual dictionary for English Wikipedia concepts",
"authors": [
{
"first": "Valentin",
"middle": [
"I"
],
"last": "Spitkovsky",
"suffix": ""
},
{
"first": "Angel",
"middle": [
"X"
],
"last": "Chang",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC-2012)",
"volume": "",
"issue": "",
"pages": "3168--3175",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Valentin I. Spitkovsky and Angel X. Chang. 2012. A cross-lingual dictionary for English Wikipedia concepts. In Proceedings of the Eighth Interna- tional Conference on Language Resources and Eval- uation (LREC-2012), pages 3168-3175, Istanbul, Turkey. European Language Resources Association (ELRA).",
"links": null
},
"BIBREF58": {
"ref_id": "b58",
"title": "Ernie: Enhanced representation through knowledge integration",
"authors": [
{
"first": "Yu",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Shuohuan",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Yukun",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Shikun",
"middle": [],
"last": "Feng",
"suffix": ""
},
{
"first": "Xuyi",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Han",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Xin",
"middle": [],
"last": "Tian",
"suffix": ""
},
{
"first": "Danxiang",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "Hao",
"middle": [],
"last": "Tian",
"suffix": ""
},
{
"first": "Hua",
"middle": [],
"last": "Wu",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1904.09223"
]
},
"num": null,
"urls": [],
"raw_text": "Yu Sun, Shuohuan Wang, Yukun Li, Shikun Feng, Xuyi Chen, Han Zhang, Xin Tian, Danxiang Zhu, Hao Tian, and Hua Wu. 2019. Ernie: Enhanced rep- resentation through knowledge integration. arXiv preprint arXiv:1904.09223.",
"links": null
},
"BIBREF59": {
"ref_id": "b59",
"title": "CommonsenseQA: A question answering challenge targeting commonsense knowledge",
"authors": [
{
"first": "Alon",
"middle": [],
"last": "Talmor",
"suffix": ""
},
{
"first": "Jonathan",
"middle": [],
"last": "Herzig",
"suffix": ""
},
{
"first": "Nicholas",
"middle": [],
"last": "Lourie",
"suffix": ""
},
{
"first": "Jonathan",
"middle": [],
"last": "Berant",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "4149--4158",
"other_ids": {
"DOI": [
"10.18653/v1/N19-1421"
]
},
"num": null,
"urls": [],
"raw_text": "Alon Talmor, Jonathan Herzig, Nicholas Lourie, and Jonathan Berant. 2019. CommonsenseQA: A ques- tion answering challenge targeting commonsense knowledge. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4149-4158, Minneapolis, Minnesota. Associ- ation for Computational Linguistics.",
"links": null
},
"BIBREF60": {
"ref_id": "b60",
"title": "Fever: a large-scale dataset for fact extraction and verification",
"authors": [
{
"first": "James",
"middle": [],
"last": "Thorne",
"suffix": ""
},
{
"first": "Andreas",
"middle": [],
"last": "Vlachos",
"suffix": ""
},
{
"first": "Christos",
"middle": [],
"last": "Christodoulopoulos",
"suffix": ""
},
{
"first": "Arpit",
"middle": [],
"last": "Mittal",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "809--819",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "James Thorne, Andreas Vlachos, Christos Christodoulopoulos, and Arpit Mittal. 2018. Fever: a large-scale dataset for fact extraction and verification. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 809-819.",
"links": null
},
"BIBREF61": {
"ref_id": "b61",
"title": "A simple method for commonsense reasoning",
"authors": [
{
"first": "Trieu",
"middle": [
"H"
],
"last": "Trinh",
"suffix": ""
},
{
"first": "Quoc",
"middle": [
"V"
],
"last": "Le",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1806.02847"
]
},
"num": null,
"urls": [],
"raw_text": "Trieu H Trinh and Quoc V Le. 2018. A simple method for commonsense reasoning. arXiv preprint arXiv:1806.02847.",
"links": null
},
"BIBREF62": {
"ref_id": "b62",
"title": "Fact checking: Task definition and dataset construction",
"authors": [
{
"first": "Andreas",
"middle": [],
"last": "Vlachos",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Riedel",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the ACL 2014 Workshop on Language Technologies and Computational Social Science",
"volume": "",
"issue": "",
"pages": "18--22",
"other_ids": {
"DOI": [
"10.3115/v1/W14-2508"
]
},
"num": null,
"urls": [],
"raw_text": "Andreas Vlachos and Sebastian Riedel. 2014. Fact checking: Task definition and dataset construction. In Proceedings of the ACL 2014 Workshop on Lan- guage Technologies and Computational Social Sci- ence, pages 18-22, Baltimore, MD, USA. Associa- tion for Computational Linguistics.",
"links": null
},
"BIBREF63": {
"ref_id": "b63",
"title": "GLUE: A multi-task benchmark and analysis platform for natural language understanding",
"authors": [
{
"first": "Alex",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Amanpreet",
"middle": [],
"last": "Singh",
"suffix": ""
},
{
"first": "Julian",
"middle": [],
"last": "Michael",
"suffix": ""
},
{
"first": "Felix",
"middle": [],
"last": "Hill",
"suffix": ""
},
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Samuel",
"middle": [],
"last": "Bowman",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 EMNLP Workshop Black-boxNLP: Analyzing and Interpreting Neural Networks for NLP",
"volume": "",
"issue": "",
"pages": "353--355",
"other_ids": {
"DOI": [
"10.18653/v1/W18-5446"
]
},
"num": null,
"urls": [],
"raw_text": "Alex Wang, Amanpreet Singh, Julian Michael, Fe- lix Hill, Omer Levy, and Samuel Bowman. 2018. GLUE: A multi-task benchmark and analysis plat- form for natural language understanding. In Pro- ceedings of the 2018 EMNLP Workshop Black- boxNLP: Analyzing and Interpreting Neural Net- works for NLP, pages 353-355, Brussels, Belgium. Association for Computational Linguistics.",
"links": null
},
"BIBREF64": {
"ref_id": "b64",
"title": "liar, liar pants on fire\": A new benchmark dataset for fake news detection",
"authors": [
{
"first": "William",
"middle": [
"Yang"
],
"last": "Wang",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "422--426",
"other_ids": {
"DOI": [
"10.18653/v1/P17-2067"
]
},
"num": null,
"urls": [],
"raw_text": "William Yang Wang. 2017. \"liar, liar pants on fire\": A new benchmark dataset for fake news detection. In Proceedings of the 55th Annual Meeting of the As- sociation for Computational Linguistics (Volume 2: Short Papers), pages 422-426, Vancouver, Canada. Association for Computational Linguistics.",
"links": null
},
"BIBREF65": {
"ref_id": "b65",
"title": "Mind the GAP: A balanced corpus of gendered ambiguous pronouns",
"authors": [
{
"first": "Kellie",
"middle": [],
"last": "Webster",
"suffix": ""
},
{
"first": "Marta",
"middle": [],
"last": "Recasens",
"suffix": ""
},
{
"first": "Vera",
"middle": [],
"last": "Axelrod",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Baldridge",
"suffix": ""
}
],
"year": 2018,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "6",
"issue": "",
"pages": "605--617",
"other_ids": {
"DOI": [
"10.1162/tacl_a_00240"
]
},
"num": null,
"urls": [],
"raw_text": "Kellie Webster, Marta Recasens, Vera Axelrod, and Ja- son Baldridge. 2018. Mind the GAP: A balanced corpus of gendered ambiguous pronouns. Transac- tions of the Association for Computational Linguis- tics, 6:605-617.",
"links": null
},
"BIBREF66": {
"ref_id": "b66",
"title": "Learning global features for coreference resolution",
"authors": [
{
"first": "Sam",
"middle": [],
"last": "Wiseman",
"suffix": ""
},
{
"first": "Alexander",
"middle": [
"M"
],
"last": "Rush",
"suffix": ""
},
{
"first": "Stuart",
"middle": [
"M"
],
"last": "Shieber",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "994--1004",
"other_ids": {
"DOI": [
"10.18653/v1/N16-1114"
]
},
"num": null,
"urls": [],
"raw_text": "Sam Wiseman, Alexander M. Rush, and Stuart M. Shieber. 2016. Learning global features for coref- erence resolution. In Proceedings of the 2016 Con- ference of the North American Chapter of the Asso- ciation for Computational Linguistics: Human Lan- guage Technologies, pages 994-1004, San Diego, California. Association for Computational Linguis- tics.",
"links": null
},
"BIBREF67": {
"ref_id": "b67",
"title": "Corpus-level fine-grained entity typing using contextual information",
"authors": [
{
"first": "Yadollah",
"middle": [],
"last": "Yaghoobzadeh",
"suffix": ""
},
{
"first": "Hinrich",
"middle": [],
"last": "Sch\u00fctze",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "715--725",
"other_ids": {
"DOI": [
"10.18653/v1/D15-1083"
]
},
"num": null,
"urls": [],
"raw_text": "Yadollah Yaghoobzadeh and Hinrich Sch\u00fctze. 2015. Corpus-level fine-grained entity typing using con- textual information. In Proceedings of the 2015 Conference on Empirical Methods in Natural Lan- guage Processing, pages 715-725, Lisbon, Portugal. Association for Computational Linguistics.",
"links": null
},
"BIBREF68": {
"ref_id": "b68",
"title": "Joint learning of the embedding of words and entities for named entity disambiguation",
"authors": [
{
"first": "Ikuya",
"middle": [],
"last": "Yamada",
"suffix": ""
},
{
"first": "Hiroyuki",
"middle": [],
"last": "Shindo",
"suffix": ""
},
{
"first": "Hideaki",
"middle": [],
"last": "Takeda",
"suffix": ""
},
{
"first": "Yoshiyasu",
"middle": [],
"last": "Takefuji",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of The 20th SIGNLL Conference on Computational Natural Language Learning",
"volume": "",
"issue": "",
"pages": "250--259",
"other_ids": {
"DOI": [
"10.18653/v1/K16-1025"
]
},
"num": null,
"urls": [],
"raw_text": "Ikuya Yamada, Hiroyuki Shindo, Hideaki Takeda, and Yoshiyasu Takefuji. 2016. Joint learning of the em- bedding of words and entities for named entity dis- ambiguation. In Proceedings of The 20th SIGNLL Conference on Computational Natural Language Learning, pages 250-259, Berlin, Germany. Asso- ciation for Computational Linguistics.",
"links": null
},
"BIBREF69": {
"ref_id": "b69",
"title": "Learning distributed representations of texts and entities from knowledge base",
"authors": [
{
"first": "Ikuya",
"middle": [],
"last": "Yamada",
"suffix": ""
},
{
"first": "Hiroyuki",
"middle": [],
"last": "Shindo",
"suffix": ""
},
{
"first": "Hideaki",
"middle": [],
"last": "Takeda",
"suffix": ""
},
{
"first": "Yoshiyasu",
"middle": [],
"last": "Takefuji",
"suffix": ""
}
],
"year": 2017,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "5",
"issue": "1",
"pages": "397--411",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ikuya Yamada, Hiroyuki Shindo, Hideaki Takeda, and Yoshiyasu Takefuji. 2017. Learning distributed rep- resentations of texts and entities from knowledge base. Transactions of the Association for Compu- tational Linguistics, 5(1):397-411.",
"links": null
},
"BIBREF70": {
"ref_id": "b70",
"title": "Xlnet: Generalized autoregressive pretraining for language understanding",
"authors": [
{
"first": "Zhilin",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Zihang",
"middle": [],
"last": "Dai",
"suffix": ""
},
{
"first": "Yiming",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Jaime",
"middle": [],
"last": "Carbonell",
"suffix": ""
},
{
"first": "Ruslan",
"middle": [],
"last": "Salakhutdinov",
"suffix": ""
},
{
"first": "Quoc V",
"middle": [],
"last": "Le",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1906.08237"
]
},
"num": null,
"urls": [],
"raw_text": "Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Car- bonell, Ruslan Salakhutdinov, and Quoc V Le. 2019. Xlnet: Generalized autoregressive pretrain- ing for language understanding. arXiv preprint arXiv:1906.08237.",
"links": null
},
"BIBREF71": {
"ref_id": "b71",
"title": "TwoWingOS: A two-wing optimization strategy for evidential claim verification",
"authors": [
{
"first": "Wenpeng",
"middle": [],
"last": "Yin",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Roth",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "105--114",
"other_ids": {
"DOI": [
"10.18653/v1/D18-1010"
]
},
"num": null,
"urls": [],
"raw_text": "Wenpeng Yin and Dan Roth. 2018. TwoWingOS: A two-wing optimization strategy for evidential claim verification. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Pro- cessing, pages 105-114, Brussels, Belgium. Asso- ciation for Computational Linguistics.",
"links": null
},
"BIBREF72": {
"ref_id": "b72",
"title": "Swag: A large-scale adversarial dataset for grounded commonsense inference",
"authors": [
{
"first": "Rowan",
"middle": [],
"last": "Zellers",
"suffix": ""
},
{
"first": "Yonatan",
"middle": [],
"last": "Bisk",
"suffix": ""
},
{
"first": "Roy",
"middle": [],
"last": "Schwartz",
"suffix": ""
},
{
"first": "Yejin",
"middle": [],
"last": "Choi",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1808.05326"
]
},
"num": null,
"urls": [],
"raw_text": "Rowan Zellers, Yonatan Bisk, Roy Schwartz, and Yejin Choi. 2018. Swag: A large-scale adversarial dataset for grounded commonsense inference. arXiv preprint arXiv:1808.05326.",
"links": null
},
"BIBREF73": {
"ref_id": "b73",
"title": "HellaSwag: Can a machine really finish your sentence?",
"authors": [
{
"first": "Rowan",
"middle": [],
"last": "Zellers",
"suffix": ""
},
{
"first": "Ari",
"middle": [],
"last": "Holtzman",
"suffix": ""
},
{
"first": "Yonatan",
"middle": [],
"last": "Bisk",
"suffix": ""
},
{
"first": "Ali",
"middle": [],
"last": "Farhadi",
"suffix": ""
},
{
"first": "Yejin",
"middle": [],
"last": "Choi",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "4791--4800",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rowan Zellers, Ari Holtzman, Yonatan Bisk, Ali Farhadi, and Yejin Choi. 2019. HellaSwag: Can a machine really finish your sentence? In Proceed- ings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4791-4800, Florence, Italy. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF74": {
"ref_id": "b74",
"title": "ERNIE: Enhanced language representation with informative entities",
"authors": [
{
"first": "Zhengyan",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Xu",
"middle": [],
"last": "Han",
"suffix": ""
},
{
"first": "Zhiyuan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Xin",
"middle": [],
"last": "Jiang",
"suffix": ""
},
{
"first": "Maosong",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Qun",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1441--1451",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhengyan Zhang, Xu Han, Zhiyuan Liu, Xin Jiang, Maosong Sun, and Qun Liu. 2019. ERNIE: En- hanced language representation with informative en- tities. In Proceedings of the 57th Annual Meet- ing of the Association for Computational Linguis- tics, pages 1441-1451, Florence, Italy. Association for Computational Linguistics.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "An example taken from ET. Targeted entity mention is bold. Candidate categories are on the right. Gold standard categories are in gray.",
"type_str": "figure",
"num": null,
"uris": null
},
"FIGREF1": {
"text": "Two examples from the EFP.TRUE: Gin and vermouth can make a martini FALSE: Connecticut is not a state",
"type_str": "figure",
"num": null,
"uris": null
},
"FIGREF2": {
"text": "Examples from the CERP.",
"type_str": "figure",
"num": null,
"uris": null
},
"FIGREF3": {
"text": "For CoNLL-YAGO, followingHoffart et al. (2011) andYamada et al. (2016), we used theSOCCER -JAPAN GET LUCKY WIN, CHINA IN SURPRISE DEFEAT. country in East Asia and the world's most populous country \u2026 The Chinese men's national basketball team represents the People's Republic of China and \u2026 The Chinese national football team recognized as China PR by FIFA \u2026 Porcelain is a ceramic material made by heating materials, generally including \u2026 An example from CoNLL-YAGO. Only four candidates are shown due to space constraints. The target mention is underlined. Sentences in gray are Wikipedia descriptions. The gold standard is boldfaced.",
"type_str": "figure",
"num": null,
"uris": null
},
"FIGREF4": {
"text": "Heatmap showing per-layer performances for ELMo, EntELMo baseline, EntELMo, BERT Base, and BERT Large.",
"type_str": "figure",
"num": null,
"uris": null
},
"TABREF1": {
"content": "<table/>",
"num": null,
"type_str": "table",
"text": "Statistics of datasets used in EntEval tasks. CAP: coreference arc prediction, CERP: contexualized entity relationship prediction, EFP: entity factuality prediction, ET: entity typing, ESR: entity similarity and relatedness, ERT: entity relationship typing, NED: named entity disambiguation, Rare: rare entity prediction, CoNLL: CoNLL-YAGO named entity disambiguation. The New York City Landmarks Preservation Commission consists of zero commissioners. SUPPORTS: TD Garden has held Bruins games.",
"html": null
},
"TABREF2": {
"content": "<table><tr><td>Score</td><td>Entity Name</td></tr><tr><td>-</td><td>Apple Inc.</td></tr><tr><td>20</td><td>Steve Jobs</td></tr><tr><td>...</td><td>...</td></tr><tr><td>11</td><td>Microsoft</td></tr><tr><td>...</td><td>...</td></tr><tr><td>1</td><td>Ford Motor Company</td></tr></table>",
"num": null,
"type_str": "table",
"text": ").",
"html": null
},
"TABREF3": {
"content": "<table/>",
"num": null,
"type_str": "table",
"text": "",
"html": null
},
"TABREF5": {
"content": "<table><tr><td/><td>Rare</td><td>CoNLL</td><td>ERT</td></tr><tr><td/><td colspan=\"3\">Des. Name Des. Name Des. Name</td></tr><tr><td>ELMo</td><td colspan=\"3\">38.1 36.7 63.4 71.2 46.8 31.5</td></tr><tr><td colspan=\"4\">BERT Base 42.2 36.6 64.7 74.3 42.2 34.3</td></tr><tr><td colspan=\"4\">BERT Large 48.8 44.0 64.6 74.8 48.8 32.6</td></tr></table>",
"num": null,
"type_str": "table",
"text": "Performances of entity representations on EntEval tasks. Best performing model in each task is boldfaced. CAP: coreference arc prediction, CERP: contexualized entity relationship prediction, EFP: entity factuality prediction, ET: entity typing, ESR: entity similarity and relatedness, ERT: entity relationship typing, NED: named entity disambiguation. EntELMo baseline is trained on the same dataset as EntELMo but not using the hyperlink-based training. EntELMo w/ l etn is trained with a modified version of l ctx , where we only decode entity mentions instead of the whole context.",
"html": null
}
}
}
}