{
"paper_id": "I17-1042",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T07:39:31.923596Z"
},
"title": "WiNER: A Wikipedia Annotated Corpus for Named Entity Recognition",
"authors": [
{
"first": "Abbas",
"middle": [],
"last": "Ghaddar",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "RALI-DIRO Universit\u00e9 de Montr\u00e9al Montr\u00e9al",
"location": {
"country": "Canada"
}
},
"email": "abbas.ghaddar@umontreal.ca"
},
{
"first": "Philippe",
"middle": [],
"last": "Langlais",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Universit\u00e9 de Montr\u00e9al Montr\u00e9al",
"location": {
"country": "Canada"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We revisit the idea of mining Wikipedia in order to generate named-entity annotations. We propose a new methodology that we applied to the English Wikipedia to build WiNER, a large, high quality, annotated corpus. We evaluate its usefulness on 6 NER tasks, comparing 4 popular state-of-the art approaches. We show that LSTM-CRF is the approach that benefits the most from our corpus. We report impressive gains with this model when using a small portion of WiNER on top of the CONLL training material. Last, we propose a simple but efficient method for exploiting the full range of WiNER, leading to further improvements.",
"pdf_parse": {
"paper_id": "I17-1042",
"_pdf_hash": "",
"abstract": [
{
"text": "We revisit the idea of mining Wikipedia in order to generate named-entity annotations. We propose a new methodology that we applied to the English Wikipedia to build WiNER, a large, high quality, annotated corpus. We evaluate its usefulness on 6 NER tasks, comparing 4 popular state-of-the art approaches. We show that LSTM-CRF is the approach that benefits the most from our corpus. We report impressive gains with this model when using a small portion of WiNER on top of the CONLL training material. Last, we propose a simple but efficient method for exploiting the full range of WiNER, leading to further improvements.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Named-Entity Recognition (NER) is the task of identifying textual mentions and classifying them into a predefined set of types. It is an important pre-processing step in NLP and Information Extraction. Various approaches have been proposed to tackle the task, including conditional random fields (Finkel et al., 2005) , perceptrons (Ratinov and Roth, 2009) , and neural network approaches (Collobert et al., 2011; Lample et al., 2016; Chiu and Nichols, 2016) .",
"cite_spans": [
{
"start": 296,
"end": 317,
"text": "(Finkel et al., 2005)",
"ref_id": "BIBREF9"
},
{
"start": 332,
"end": 356,
"text": "(Ratinov and Roth, 2009)",
"ref_id": "BIBREF21"
},
{
"start": 389,
"end": 413,
"text": "(Collobert et al., 2011;",
"ref_id": "BIBREF8"
},
{
"start": 414,
"end": 434,
"text": "Lample et al., 2016;",
"ref_id": "BIBREF13"
},
{
"start": 435,
"end": 458,
"text": "Chiu and Nichols, 2016)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "One issue with NER is the small amount of annotated data available for training, and their limited scope (see Section 4.1). Furthermore, some studies (Onal and Karagoz, 2015; Augenstein et al., 2017) have demonstrated that namedentity systems trained on news-wire data perform poorly when tested on other text genres. This motivated some researchers to create a named-entity labelled corpus from Wikipedia. This was notably attempted by and more re-cently revisited by Al-Rfou et al. (2015) in a multilingual context. Both studies leverage the link structure of Wikipedia to generate named-entity annotations. Because only a tiny portion of texts in Wikipedia are anchored, some strategies are typically needed to infer more annotations (Ghaddar and Langlais, 2016b) . Such a process typically yields a noisy corpus for which filtering is required.",
"cite_spans": [
{
"start": 150,
"end": 174,
"text": "(Onal and Karagoz, 2015;",
"ref_id": "BIBREF16"
},
{
"start": 175,
"end": 199,
"text": "Augenstein et al., 2017)",
"ref_id": "BIBREF2"
},
{
"start": 469,
"end": 490,
"text": "Al-Rfou et al. (2015)",
"ref_id": "BIBREF0"
},
{
"start": 737,
"end": 766,
"text": "(Ghaddar and Langlais, 2016b)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we revisit the idea of automatically extracting named-entity annotations out of Wikipedia. Similarly to the aforementioned works, we gather anchored strings in a page as well as their type according to Freebase (Bollacker et al., 2008) but, more importantly, we also generate annotations for texts not anchored in Wikipedia. We do this by considering coreference mentions of anchored strings as candidate annotations, and by exploiting the out-link structure of Wikipedia. We applied our methodology on a 2013 English Wikipedia dump, leading to a large annotated corpus called WiNER, which contains more annotations than similar corpora and, as we shall see, is more useful for training NER systems.",
"cite_spans": [
{
"start": 226,
"end": 250,
"text": "(Bollacker et al., 2008)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We discuss related work in Section 2 and present the methodology we used to automatically extract annotations from Wikipedia in Section 3. The remainder of the article describes the experiment we conducted to measure the impact of WiNER for training NER systems. We describe the datasets and the different NER systems we trained in Section 4. We report the experiments we conducted in Section 5. We propose a simple but efficient two stage strategy we designed in order to benefit the full WiNER corpus in Section 6. We report error analysis in Section 7 and conclude in Section 8.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Turning Wikipedia into a corpus of named entities annotated with types is a task that received some attention in a monolingual setting (Toral and Munoz, 2006; , as well as in a multilingual one (Richman and Schone, 2004; Al-Rfou et al., 2015) .",
"cite_spans": [
{
"start": 135,
"end": 158,
"text": "(Toral and Munoz, 2006;",
"ref_id": "BIBREF25"
},
{
"start": 194,
"end": 220,
"text": "(Richman and Schone, 2004;",
"ref_id": "BIBREF22"
},
{
"start": 221,
"end": 242,
"text": "Al-Rfou et al., 2015)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "In the authors describe an approach that exploits links between articles in Wikipedia in order to detect entity mentions. They describe a pipeline able to detect their types (ORG, PER, LOC, MISC), making use of handcrafted rules specific to Wikipedia, and a bootstrapping approach for identifying a subset of Wikipedia articles where the type of the entity can be predicted with confidence. Since anchored strings in Wikipedia lack coverage (in part because Wikipedia rules recommend that only the first mention of a given concept be anchored in a page), the authors also describe heuristics based on redirects to identify more named-entity mentions. They tested several variants of their corpus on three NER benchmarks and showed that systems trained on Wikipedia data may perform better than domain-specific systems in an out-domain setting.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Al-Rfou et al. (2015), follow a similar path albeit in a multilingual setting. They use Freebase to identify categories (PER, LOC, ORG), and trained a neural network on the annotations extracted. In order to deal with non-anchored mentions in Wikipedia, they propose a first-order coreference resolution algorithm where they link mentions in a text using exact string matching (thus Obama will be linked to the concept Barack Obama and labelled PER). They still had to perform some sentence selection, based on an oversampling strategy, in order to construct a subset of the original training data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Our work revisits the idea developed in these two studies. Our main contribution consists in dealing specifically with non anchored strings in Wikipedia pages. We do this by analyzing the outlink structure in Wikipedia, coupled to the information of all the surface forms that have been used in a Wikipedia article to mention the main concept being described by this article. This process, detailed in the next section, leads to a much larger set of annotations, whose quality obviates the need for ad-hoc filtering or oversampling strategies.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "We applied the pipeline described hereafter to a dump of English Wikipedia from 2013, and obtained WiNER, a resource built out of 3.2M Wikipedia articles, comprising more than 1.3G tokens accounting for 54M sentences, 41M of which contain at least one named-entity annotation. We generated a total of 106M annotations (an average of 2 entities per sentence).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "WiNER",
"sec_num": "3"
},
{
"text": "The pipeline used to extract named-entity annotations from Wikipedia is illustrated in Figure 1 , for an excerpt of the Wikipedia article Chilly Gonzales, hereafter named the target article. Similarly to Al-Rfou et al., 2015) , the anchored strings of out-links in the target article are elected mentions of named entities. For instance, we identify Warner Bros. Records and Paris as mentions in our target article. In general, a Wikipedia article has an equivalent page in Freebase. We remove mentions that do not have such a page. This way, we filter out anchored strings that are not named entities (such as List of Presidents of the United States). We associate a category with each mention by a simple strategy, similar to (Al-Rfou et al., 2015) , which consists in mapping Freebase attributes to entity types. For instance, we map organization/organization, location/location and people/person attributes to ORG, LOC and PER, respectively. If an entry does not belong to any of the previous classes, we tag it as MISC.",
"cite_spans": [
{
"start": 204,
"end": 225,
"text": "Al-Rfou et al., 2015)",
"ref_id": "BIBREF0"
},
{
"start": 728,
"end": 750,
"text": "(Al-Rfou et al., 2015)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [
{
"start": 87,
"end": 95,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Annotation Pipeline",
"sec_num": "3.1"
},
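{
"text": "A minimal Python sketch of the Freebase-attribute-to-type mapping just described. This is not the authors' code: only the three attributes named above come from the paper; the function name and the example attribute are assumptions.\n\nFREEBASE_TO_CONLL = {\n    'people/person': 'PER',\n    'location/location': 'LOC',\n    'organization/organization': 'ORG',\n}\n\ndef entity_type(freebase_attributes):\n    # Map the Freebase entry of a mention to a CONLL-style type, defaulting to MISC.\n    for attribute in freebase_attributes:\n        if attribute in FREEBASE_TO_CONLL:\n            return FREEBASE_TO_CONLL[attribute]\n    return 'MISC'\n\n# e.g. entity_type(['music/artist', 'people/person']) returns 'PER'",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Annotation Pipeline",
"sec_num": "3.1"
},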
{
"text": "Because the number of anchored strings in Wikipedia is rather small -less than 3% of the text tokens according to (Al-Rfou et al., 2015)we propose to leverage: (1) the out-link structure of Wikipedia, (2) the information of all the surface strings used to describe the main concept of a Wikipedia article. For the latter, we rely on the resource 1 described in (Ghaddar and Langlais, 2016a) Figure 1 : Illustration of the process with which we gather annotations into WiNER for the target page https://en.wikipedia.org/wiki/Chilly_Gonzales. Bracketed segments are the annotations, underlined text are anchored strings in the corresponding Wikipedia page. OLT represents the out-link table (which is compiled from the Wikipedia out-link graph structure), and CT represents the coreference table we gathered from the resource. former) and pronominal (e.g. he) mentions that refer to Chilly Gonzales. From this resource, we consider proper name mentions, along with their Freebase type.",
"cite_spans": [
{
"start": 361,
"end": 390,
"text": "(Ghaddar and Langlais, 2016a)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [
{
"start": 391,
"end": 399,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Annotation Pipeline",
"sec_num": "3.1"
},
{
"text": "Our strategy for collecting extra annotations is a 3-step process, where:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Annotation Pipeline",
"sec_num": "3.1"
},
{
"text": "1. We consider direct out-links of the target article. We search in its text the titles of the articles we reach that way. We also search for their coreferences as listed in the aforementioned resource. For instance, we search (exact match) Warner Bros. Records and its coreferences (e.g. Warner, Warner Bros.) in the target article. Each match is labelled with the type associated (in Freebase) with the out-linked article (in our example, ORG).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Annotation Pipeline",
"sec_num": "3.1"
},
{
"text": "2. We follow out-links of out-links, and search in the target article (by an exact string match) the titles of the articles reached. For instance, we search for the strings Europe, France, Napoleon, as well as other article titles from the out-link list of the article Paris. The matched strings are elected named entities and are labeled with their Freebase type.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Annotation Pipeline",
"sec_num": "3.1"
},
{
"text": "3. For the titles matched at step 2, we also match their coreferent mentions. For instance, because we matched France, we also search its coreferences as listed in the coreference table (CT).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Annotation Pipeline",
"sec_num": "3.1"
},
{
"text": "During this process, some collisions may occur. We solve the issue of overlapping annotations by applying the steps exactly in the order presented above. Our steps have been ordered in such a way that the earlier the step, the more confidence we have in the strings matched at that step. It may also happen that two out-link articles contain the same mention (for instance Washington State and George Washington both contain the mention Washington), in which case we annotate this ambiguous mention with the type of the closest 2 unambiguous mention.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Annotation Pipeline",
"sec_num": "3.1"
},
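{
"text": "A compact sketch of how the three steps and the collision rule above could be combined (illustrative only: the data structures and helper names are assumptions, not the authors' implementation; only the first exact match of each string is kept for brevity, and the closest-unambiguous-mention tie-breaking is omitted).\n\ndef overlaps(start, end, spans):\n    # True if [start, end) intersects an already accepted annotation span.\n    return any(start < e and s < end for s, e, _ in spans)\n\ndef annotate(target_text, out_links, olt, ct, freebase_type):\n    # out_links: titles directly linked from the target article (step 1)\n    # olt[title]: out-links of that article (out-link table, steps 2 and 3)\n    # ct[title]: proper-name coreferences of the article (coreference table)\n    # freebase_type[title]: PER / LOC / ORG / MISC (titles without a Freebase page are assumed filtered out)\n    candidates = []  # (step, surface_string, entity_type)\n    for title in out_links:\n        for surface in [title] + ct.get(title, []):\n            candidates.append((1, surface, freebase_type[title]))\n    for title in out_links:\n        for linked in olt.get(title, []):\n            candidates.append((2, linked, freebase_type[linked]))\n            for surface in ct.get(linked, []):\n                candidates.append((3, surface, freebase_type[linked]))\n    annotations = []  # earlier steps win when matches collide\n    for step, surface, etype in sorted(candidates, key=lambda c: c[0]):\n        start = target_text.find(surface)\n        if start >= 0 and not overlaps(start, start + len(surface), annotations):\n            annotations.append((start, start + len(surface), etype))\n    return annotations",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Annotation Pipeline",
"sec_num": "3.1"
},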
{
"text": "Step 1 of our pipeline raises the coverage 3 from less than 3% to 9.5%, while step 2 and 3 increase it to 11.3% and 15% respectively. This is actually very close to the coverage of the manually annotated CONLL-2003 dataset, which is 17%. Considering that we do not apply any specific filtering, as is done for instance in , our corpus contains many more annotations than existing Wikipedia-based named-entity annotated corpora.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Annotation Pipeline",
"sec_num": "3.1"
},
{
"text": "We assessed the annotation quality of a random subset of 1000 mentions. While we measure an accuracy of 92% for mentions detected during step 1, the accuracy decreases to 88% and 77% during step 2 and 3 respectively. We identified two main sources for errors in the coreferent mentions detection procedure. One source of error comes from the resource used to identify the mentions of the main concept. We measured in a previous work (Ghaddar and Langlais, 2016a) , that the process we rely on for this (a binary classifier) has an accuracy of 89%. Example (a) of Figure 2 illus-trates such a mistake where the family name Pope is wrongly assumed coreferent to the brewery Eldridge Pope. We also found that our 3-step process and the disambiguation rule fails in 15% of the cases. Figure 2 illustrates an example where we erroneously recognize the mention Toronto (referring to the town) as a coreferent of the (non ambiguous mention) Toronto FC, simply because the latter is close to the former. Table 1 shows the counts of token strings annotated with at least two types. For instance, there are 230k entities that are annotated in WiNER as PER and LOC. It is reassuring that different mentions with the same string are labelled differently. The cells on the diagonal indicate the number of mentions labelled with a given tag. We further examined a random subset of 100 strings that were annotated differently (in different contexts) and found that 89% of the time, the correct type was identified. For instance, in example Figure 2c ) -a sentence of the Chilly Gonzales article -the mention Rolodex is labelled as ORG, while the correct type is MISC. Our pipeline fails to disambiguate the company from its product.",
"cite_spans": [
{
"start": 433,
"end": 462,
"text": "(Ghaddar and Langlais, 2016a)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [
{
"start": 563,
"end": 571,
"text": "Figure 2",
"ref_id": "FIGREF0"
},
{
"start": 780,
"end": 788,
"text": "Figure 2",
"ref_id": "FIGREF0"
},
{
"start": 996,
"end": 1003,
"text": "Table 1",
"ref_id": "TABREF2"
},
{
"start": 1525,
"end": 1534,
"text": "Figure 2c",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Manual Evaluation",
"sec_num": "3.2"
},
{
"text": "PER LOC ORG MISC PER 28M 230k 80k 250k LOC - 29M 120k 190k ORG - - 13M 206k MISC - - - 36M",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Manual Evaluation",
"sec_num": "3.2"
},
{
"text": "We used a number of datasets in our experiments. For CONLL, MUC and ONTO, that are often used to benchmark NER, we used the test sets distributed in official splits. For the other test sets, that are typically smaller, we used the full dataset as a test material. MUC the MUC-6 (Chinchor and Sundheim, 2003) dataset consists of newswire articles from the Wall Street Journal annotated with PER, LOC, ORG, as well as a number of temporal and numerical entities that we excluded from our evaluation for the sake of homogeneity.",
"cite_spans": [
{
"start": 278,
"end": 307,
"text": "(Chinchor and Sundheim, 2003)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data Sets",
"sec_num": "4.1"
},
{
"text": "ONTO the OntoNotes 5.0 dataset (Pradhan et al., 2012) includes texts from five different text genres: broadcast conversation (200k), broadcast news (200k), magazine (120k), newswire (625k), and web data (300k). This dataset is annotated with 18 fine grained NE categories. Following (Nothman, 2008) , we applied the procedure for mapping annotations to the CONLL tag set. We used the CONLL 2012 (Pradhan et al., 2013) standard test set for evaluation.",
"cite_spans": [
{
"start": 283,
"end": 298,
"text": "(Nothman, 2008)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data Sets",
"sec_num": "4.1"
},
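{
"text": "The mapping itself is not reproduced here; the sketch below shows a plausible reduction of the 18 OntoNotes types to the CONLL tag set. The concrete assignments are assumptions made for illustration; the paper follows the procedure of (Nothman, 2008), which may differ in its details.\n\nONTO_TO_CONLL = {\n    'PERSON': 'PER',\n    'ORG': 'ORG',\n    'GPE': 'LOC', 'LOC': 'LOC', 'FAC': 'LOC',\n    'NORP': 'MISC', 'PRODUCT': 'MISC', 'EVENT': 'MISC',\n    'WORK_OF_ART': 'MISC', 'LAW': 'MISC', 'LANGUAGE': 'MISC',\n    # numerical and temporal types are not entities in the CONLL sense\n    'DATE': None, 'TIME': None, 'PERCENT': None, 'MONEY': None,\n    'QUANTITY': None, 'ORDINAL': None, 'CARDINAL': None,\n}\n\ndef to_conll(onto_label):\n    return ONTO_TO_CONLL.get(onto_label)  # None means the mention is dropped",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Sets",
"sec_num": "4.1"
},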
{
"text": "WGOLD WikiGold (Balasuriya et al., 2009 ) is a set of Wikipedia articles (40k tokens) manually annotated with CONLL-2003 NE classes. The articles were randomly selected from a 2008 English dump and cover a number of topics.",
"cite_spans": [
{
"start": 15,
"end": 39,
"text": "(Balasuriya et al., 2009",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data Sets",
"sec_num": "4.1"
},
{
"text": "WEB Ratinov and Roth (2009) annotated 20 web pages (8k tokens) on different topics with the CONLL-2003 tag set.",
"cite_spans": [
{
"start": 4,
"end": 27,
"text": "Ratinov and Roth (2009)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data Sets",
"sec_num": "4.1"
},
{
"text": "TWEET Ritter et al. (2011) annotated 2400 tweets (comprising 34k tokens) with 10 named-entity classes, which we mapped to the CONLL-2003 NE classes.",
"cite_spans": [
{
"start": 6,
"end": 26,
"text": "Ritter et al. (2011)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data Sets",
"sec_num": "4.1"
},
{
"text": "Since we use many test sets in this work, we are confronted with a number of inconsistencies.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Metrics",
"sec_num": "4.2"
},
{
"text": "One is the definition of the MISC class, which differs from a dataset to another, in addition to not being annotated in MUC. This led us to report token-level F1 score for 3 classes only (LOC, ORG and PER). We computed this metric with the conlleval script. 4 We further report OD F 1 , a score that measures how well a named-entity recognizer performs on out-domain material. We compute it by randomly sampling 500 sentences 5 for each out-domain test set, on which we measure the token-level F1. Sampling the same number of sentences per test set allows to weight each corpus equally. This process is repeated 10 times, and we report the average over those 10 folds. On average, the newly assembled test set contains 50k tokens and roughly 3.5k entity mentions. We excluded the CONLL-2003 test set from the computation since this corpus is in-domain 6 (see section 5.2).",
"cite_spans": [
{
"start": 258,
"end": 259,
"text": "4",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Metrics",
"sec_num": "4.2"
},
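{
"text": "A short sketch of the OD F1 computation described above (illustrative; token_f1 stands in for the token-level F1 of the conlleval script and is assumed to be provided, as are the data structures).\n\nimport random\n\ndef od_f1(out_domain_sets, token_f1, n_sentences=500, n_folds=10, seed=0):\n    # out_domain_sets: one list of scored sentences per out-domain test set\n    # token_f1: function returning the token-level F1 over a list of sentences\n    rng = random.Random(seed)\n    fold_scores = []\n    for _ in range(n_folds):\n        pooled = []\n        for test_set in out_domain_sets:\n            # the same number of sentences per corpus weights each corpus equally\n            pooled.extend(rng.sample(test_set, n_sentences))\n        fold_scores.append(token_f1(pooled))\n    return sum(fold_scores) / len(fold_scores)  # average over the 10 folds",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Metrics",
"sec_num": "4.2"
},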
{
"text": "We chose two feature-based models: the StanfordNER (Finkel et al., 2005) CRF classifier, and the perceptron-based Illinois NE Tagger (Ratinov and Roth, 2009) . Those systems have been shown to yield good performance overall. Both systems use handcrafted features; the latter includes gazetteer features as well.",
"cite_spans": [
{
"start": 51,
"end": 72,
"text": "(Finkel et al., 2005)",
"ref_id": "BIBREF9"
},
{
"start": 133,
"end": 157,
"text": "(Ratinov and Roth, 2009)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Reference systems",
"sec_num": "4.3"
},
{
"text": "We also deployed two neural network systems: the one of (Collobert et al., 2011) , as implemented by Attardi (2015) , and the LSTM-CRF system of Lample et al. (2016) . Both systems capitalize on representations learnt from large quantities of unlabeled text 7 . We use the default configuration for each system.",
"cite_spans": [
{
"start": 56,
"end": 80,
"text": "(Collobert et al., 2011)",
"ref_id": "BIBREF8"
},
{
"start": 101,
"end": 115,
"text": "Attardi (2015)",
"ref_id": "BIBREF1"
},
{
"start": 145,
"end": 165,
"text": "Lample et al. (2016)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Reference systems",
"sec_num": "4.3"
},
{
"text": "We compare WiNER to existing Wikipedia-based annotated corpora. released two versions of their corpus, WP2 and WP3, each containing 3.5 million tokens. Both versions enrich the annotations deduced from anchored strings in Wikipedia by identifying coreferences among NE mentions. They differ by the rules used to conduct coreference resolution. We randomly generated 10 equally-sized subsets of WiNER (of 3.5 million tokens each). On each subset, we trained the Illinois NER tagger and compared the performances obtained on the CONLL test set by the resulting models, compared to those trained on WP2 and WP3. Phrase-level F1 score are reported in Table 2 . We also report the results published in (Al-Rfou et al., 2015) Using WiNER as a source of annotations systematically leads to better performance, which validates the approach we described in Section 3. Note that in order to generate WP2 and WP3, the authors applied filtering rules that are responsible for the loss of 60% of the annotations. Al-Rfou et al. (2015) also perform sentence selection. We have no such heuristics here, but we still observe a competitive performance. This is a satisfactory result considering that WiNER is much larger.",
"cite_spans": [
{
"start": 697,
"end": 719,
"text": "(Al-Rfou et al., 2015)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [
{
"start": 647,
"end": 654,
"text": "Table 2",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Other Wikipedia-based corpora",
"sec_num": "5.1"
},
{
"text": "In this experiment, we conduct a cross-domain evaluation of the reference systems described in Section 4.3 on the six different test sets presented in Section 4.1. Following a common trend in the field, we evaluate the performance of those systems when they are trained on the CONLL material. We also consider systems trained on CONLL plus a subset of WiNER. We report results obtained with a subset of randomly chosen sentences summing up to 3 million tokens, as well as a variant where we use as much as possible of the training material available in WiNER. Larger datasets CONLL were created by randomly appending material to smaller ones. Datasets were chosen once (no cross-validation, as that would have required too much time for some models). Moreover, for the comparison to be meaningful, each model was trained on the same 3M dataset. The results are reported in Table 3 . First, we observe the best overall performance with the LSTM-CRF system (73% OD F 1 ), the second best system being a variant of the Illinois system (69.5% OD F 1 ). We also observe that the former system is the one that benefits the most from WiNER (an absolute gain of 8% in OD F 1 ). This may be attributed to the fact that this model can explore the context on both sides of a word with (at least in theory) no limit on the context size considered. Still, it is outperformed by the Illinois system on the WEB and the TWEET test sets. Arguably, those two test sets have a NE distribution which differs greatly from the training material.",
"cite_spans": [],
"ref_spans": [
{
"start": 873,
"end": 880,
"text": "Table 3",
"ref_id": "TABREF7"
}
],
"eq_spans": [],
"section": "Cross-domain evaluation",
"sec_num": "5.2"
},
{
"text": "ONTO MUC TWEET WEB WGOLD OD F",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cross-domain evaluation",
"sec_num": "5.2"
},
{
"text": "Second, on the CONLL setting, our results are satisfyingly similar to those reported in (Ratinov and Roth, 2009) and (Lample et al., 2016) . The former reports 91.06 phrasal-level F1 score on 4 classes, while our score is 90.8 .The latter reports an F1 score of 90.94 while we have 90.76. The best results reported far on the CONLL setting are those of (Chiu and Nichols, 2016) with a BiLSTM-CNN model, and a phrasal-level F1 score of 91.62 on 4 classes. So while the models we tested are slightly behind on CONLL, they definitely are competitive. For other tasks, the comparison with other studies is difficult since the performance is typically reported with the full tagset.",
"cite_spans": [
{
"start": 88,
"end": 112,
"text": "(Ratinov and Roth, 2009)",
"ref_id": "BIBREF21"
},
{
"start": 117,
"end": 138,
"text": "(Lample et al., 2016)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Cross-domain evaluation",
"sec_num": "5.2"
},
{
"text": "Third, the best performances are obtained by configurations that use WiNER, with the exception of CONLL. That this does not carry over to CONLL confirms the observations made by several authors (Finkel et al., 2005; Al-Rfou et al., 2015) , who highlight the specificity of CONLL's annotation guidelines as well as the very nature of the annotated text, where sport teams are overrepresented. These teams add to the confusion because they are often referred to with a city name. We observe that, on CONLL, the LSTM-CRF model is the one that registers the lowest drop in performance. The drop is also modest for the CRF model. The WiNER's impact is particularly observable on TWEET (an absolute gain of 8.8 points) and WEB (a gain of 5.5), again two very different test sets. This suggests that WiNER helps models to generalize.",
"cite_spans": [
{
"start": 194,
"end": 215,
"text": "(Finkel et al., 2005;",
"ref_id": "BIBREF9"
},
{
"start": 216,
"end": 237,
"text": "Al-Rfou et al., 2015)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Cross-domain evaluation",
"sec_num": "5.2"
},
{
"text": "Last, we observe that systems differ in their ability to exploit large training sets. For the two feature-based models we tested, the bottleneck is memory. We did train models with less features, but with a significantly lower performance. With the CRF model, we could only digest a subset of WiNER of 1 million tokens, while Illinois could handle 30 times more. As far as neural network systems are concerned, the is-sue is training time. On the computer we used for this work -a Linux cluster equipped with a GPU -training Senna and LSTM-CRF required over a month each for 7 and 5 millions WiNER tokens respectively. This prevents us from measuring the benefit of the complete WiNER resource.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cross-domain evaluation",
"sec_num": "5.2"
},
{
"text": "6 Scaling up to WiNER 6.1 Our 2-stage approach Because we were not able to employ the full WiNER corpus with the NER systems mentioned above, we resorted to a simple method to leverage all the annotations available in the corpus. It consists in decoupling the segmentation of NEs in a sentence -we leave this to a reference NER system -from their labelling, for which we train a local classifier based on contextual features computed from WiNER. Decoupling the two decision processes is not exactly satisfying, but allows us to scale very efficiently to the full size of WiNER, our main motivation here.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cross-domain evaluation",
"sec_num": "5.2"
},
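{
"text": "A minimal sketch of this decoupling (illustrative; segment and classify_type are hypothetical stand-ins for the reference NER system and for the classifier described in the next sections).\n\ndef two_stage_tag(sentence, segment, classify_type):\n    # Stage 1: a reference NER system proposes entity segments (spans only).\n    spans = segment(sentence)  # e.g. [(start, end), ...]\n    # Stage 2: a local classifier labels each segment using WiNER-based contextual features.\n    return [(start, end, classify_type(sentence, start, end)) for start, end in spans]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cross-domain evaluation",
"sec_num": "5.2"
},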
{
"text": "Our classifier exploits a small number of features computed from two representations of WiNER. In one of them, each named-entity is bounded by a beginning and end token tags -both encoding its type -as illustrated on line MIX of Figure 3 . In the second representation, the words of the namedentity are replaced with its type, as illustrated on line CONT. The former representation encodes information from both the context and the the words of the segment we wish to label while the second one only encodes the context of a segment.",
"cite_spans": [],
"ref_spans": [
{
"start": 229,
"end": 237,
"text": "Figure 3",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Contextual representations",
"sec_num": "6.1.1"
},
{
"text": "WiNER [Gonzales] PER will be featured on [Daft Punk] MISC .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Contextual representations",
"sec_num": "6.1.1"
},
{
"text": "MIX B-PER Gonzales L-PER will be featured on B-MISC Daft Punk L-MISC CONT PER will be featured on MISC . With each representation, we train a 6-gram backoff language model using kenLM (Heafield et al., 2013) . For the MIX one, we also train word embeddings of dimension 50 using Glove (Pennington et al., 2014). 8 Thus, we have the embed-dings of plain words, as well as those of token tags. The language and embedding models are used to provide features to our classifier.",
"cite_spans": [
{
"start": 184,
"end": 207,
"text": "(Heafield et al., 2013)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Contextual representations",
"sec_num": "6.1.1"
},
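{
"text": "A small sketch of how the two representations above can be produced from a WiNER-annotated sentence (the B-/L- tag names and the example follow Figure 3; the function itself is an illustration, not the authors' code). A 6-gram language model would then be estimated on each rewritten corpus, e.g. with kenLM.\n\ndef mix_and_cont(tokens, annotations):\n    # tokens: list of words; annotations: list of (start, end, type) token spans\n    mix, cont, i = [], [], 0\n    spans = {s: (e, t) for s, e, t in sorted(annotations)}\n    while i < len(tokens):\n        if i in spans:\n            end, etype = spans[i]\n            mix += ['B-' + etype] + tokens[i:end] + ['L-' + etype]\n            cont.append(etype)  # in CONT, the entity words are replaced by the type\n            i = end\n        else:\n            mix.append(tokens[i])\n            cont.append(tokens[i])\n            i += 1\n    return ' '.join(mix), ' '.join(cont)\n\n# mix_and_cont('Gonzales will be featured on Daft Punk .'.split(), [(0, 1, 'PER'), (5, 7, 'MISC')])\n# MIX : B-PER Gonzales L-PER will be featured on B-MISC Daft Punk L-MISC .\n# CONT: PER will be featured on MISC .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Contextual representations",
"sec_num": "6.1.1"
},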
{
"text": "Given a sentence and its hypothesized segmentation into named-entities (as provided by another NER system), we compute with the Viterbi algorithm the sequence of token tags that leads to the smallest perplexity according to each language model. Given this sequence, we modify the tagging of each segment in turn, leading to a total of 4 perplexity values per segment and per language model. We normalize those perplexity values so as to interpret them as probabilities. Table 4 shows the probability given by both language models to the segment Gonzales of the sentence of our running example. We observe that both models agree that the segment should be labelled PER. We also generate features thanks to the embedding model. This time, however, this is done without considering the context: we represent a segment as the sum of the representation of its words. We then compute the cosine similarity between this segment representation and that of each of the 4 possible tag pairs (the sum of the representation of the begin and end tags); leading to 4 similarity scores per segment. Those similarities are reported on line EMB in Table 4 Table 4 : Features for the segment Gonzales in the sentence Gonzales will be featured on Daft Punk.",
"cite_spans": [],
"ref_spans": [
{
"start": 470,
"end": 477,
"text": "Table 4",
"ref_id": null
},
{
"start": 1131,
"end": 1138,
"text": "Table 4",
"ref_id": null
},
{
"start": 1139,
"end": 1146,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Features",
"sec_num": "6.1.2"
},
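{
"text": "A sketch of the embedding-based (EMB) features just described (illustrative; the embeddings dictionary, assumed to map both plain words and the MIX tag tokens to 50-dimensional numpy vectors, and the function name are assumptions).\n\nimport numpy as np\n\nTYPES = ['PER', 'LOC', 'ORG', 'MISC']\n\ndef emb_features(segment_words, emb):\n    # A segment is represented as the sum of the vectors of its words.\n    seg = sum(emb[w] for w in segment_words)\n    scores = []\n    for t in TYPES:\n        tag_pair = emb['B-' + t] + emb['L-' + t]  # sum of the begin and end tag vectors\n        cos = float(np.dot(seg, tag_pair) / (np.linalg.norm(seg) * np.linalg.norm(tag_pair)))\n        scores.append(cos)  # one cosine similarity per candidate type\n    return scores  # the 4 EMB scores of the segment",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Features",
"sec_num": "6.1.2"
},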
{
"text": "To these 4 scores provided by each model, we add 16 binary features that encode the rank of each token tag according to one model (does tag have rank i ?). We also compute the score difference given by a model to any two possible tag pairs, leading to 6 more scores. Since we have 3 models, we end up with 78 features.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Features",
"sec_num": "6.1.2"
},
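{
"text": "A sketch of the 26-features-per-model vector described above (illustrative; scores is assumed to hold the 4 normalized scores of one model, ordered PER, LOC, ORG, MISC).\n\nfrom itertools import combinations\n\ndef per_model_features(scores):\n    # scores: the 4 normalized scores given by one model (MIX LM, CONT LM or EMB)\n    ranks = sorted(range(4), key=lambda i: -scores[i])  # tag indices, best score first\n    rank_bits = [0] * 16\n    for rank, tag_index in enumerate(ranks):\n        rank_bits[tag_index * 4 + rank] = 1  # does tag t have rank i?\n    diffs = [scores[i] - scores[j] for i, j in combinations(range(4), 2)]  # 6 pairwise differences\n    return list(scores) + rank_bits + diffs  # 4 + 16 + 6 = 26 features\n\n# Concatenating the vectors of the 3 models yields the 78 features fed to the classifier.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Features",
"sec_num": "6.1.2"
},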
{
"text": "We use scikit-learn (Pedregosa et al., 2011) to train a Random Forest classifier 9 on the 29k mentions of the CONLL training data. We adopted this training material to ensure a fair comparison with other systems that are typically trained on this dataset. Another possibility would be to split WiNER into two parts, one for computing features, and the other for training the classifier. We leave this investigation as future work. Because of the small feature set we have, training such a classifier is very fast.",
"cite_spans": [
{
"start": 20,
"end": 44,
"text": "(Pedregosa et al., 2011)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Training",
"sec_num": "6.1.3"
},
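{
"text": "A minimal sketch of this training step with scikit-learn (the use of a Random Forest on the 29k CONLL mentions is stated in the paper; the hyper-parameters and variable names below are assumptions).\n\nfrom sklearn.ensemble import RandomForestClassifier\n\ndef train_type_classifier(X, y, seed=0):\n    # X: array of shape (n_mentions, 78) built as sketched above; y: gold CONLL types\n    clf = RandomForestClassifier(n_estimators=100, random_state=seed)\n    clf.fit(X, y)\n    return clf\n\n# At test time, clf.predict re-labels each segment proposed by the reference NER system.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training",
"sec_num": "6.1.3"
},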
{
"text": "We measure the usefulness of the complete WiNER resource by varying the size of the training material of both language models and word embeddings, from 5M tokens (the maximum size the LSTM-CRF mode could process) to the full WiNER resource size. All 90.5 76.9 85.9 46.6 65.3 77.0 74.7 Table 5 : Influence of the portion of WiNER used in our 2-stage approach for the CONLL test set, using the segmentation produced by LSTM-CRF+WiNER(5M). These results have to be contrasted with the last line of Table 3 .",
"cite_spans": [],
"ref_spans": [
{
"start": 285,
"end": 292,
"text": "Table 5",
"ref_id": null
},
{
"start": 495,
"end": 502,
"text": "Table 3",
"ref_id": "TABREF7"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "6.2"
},
{
"text": "To this end, we provide the performance of our 2-stage approach on CONLL, using the segmentation output by LSTM-CRF+WiNER(5M) 10 . Results are reported in Table 5 . As expected, we observe that computing features on the same WiNER(5M) dataset exploited by LSTM-CRF leads to a notable loss overall (OD F 1 of 68.1 versus 73.0), while still outperforming LSTM-CRF trained on CONLL only (OD F 1 of 65.0). More interestingly, we observe that for all test sets, using more of WiNER leads to better performance, even if a plateau effect emerges. Our approach does improve systematically across all test sets by considering 100 times more WiNER data than what LSTM-CRF can handle in our case. Using all of WiNER leads to an OD F 1 score of 74.7, an increase of 1.7 absolute points over LSTM-CRF+WiNER(5M). Table 6 reports the improvements in OD F 1 of our 2-stage approach (RF), which uses all of 10 The best configuration according to the WiNER material and the segmentation produced by several native systems. Applying our 2-stage approach systematically improves the performance of the native configuration. Gains are larger for native configurations that cannot exploit a large quantity of WiNER. We also observe that the 2-stage approach delivers roughly the same level of performance (OD F 1 74) when using the segmentation produced by the Illinois or the LSTM-CRF systems. Table 7 indicates the number of disagreements between the LSTM-CRF+WiNER(5M) system (columns) and the 2-stage approach (rows). The table also shows the percentage of times the latter system was correct. For instance, the bottom left cell indicates that, on 38 distinct occasions, the classifier changed the tag PER proposed by the native system to ORG and that is was right in 85% of these occasions. We exclude errors made by both systems, which explains the low counts observed (1.7% is the absolute difference between the two approaches). We observe that in most cases the classifier makes the right decision when an entity tag is changed from PER to either LOC or ORG (86% and Table 7 : Percentage of correctness of the 2stage system (rows) when tagging a named-entity differently than the LSTM-CRF+WiNER(5M) (columns). Bracketed figures indicate the average number of differences over the out-domain test sets. 85% respectively). Most often, re-classified entities are ambiguous ones. Our approach chooses correctly mostly by examining the context of the mention. For instance, the entity Olin in example (a) of Figure 4 is commonly known as a last name. It was correctly re-classified as ORG thanks to its surrounding context. Replacing its by his in the sentence makes the classifier tag the entity as PER. Similarly, the entity Piedmont in example (b) was re-classified as ORG, although it is mostly used as the region name (even in Wikipedia), thanks to the context-based CONT and MIX features that identify the entity as ORG (0.61 and 0.63 respectively). Misclassification errors do occur, especially when the native system tagged an entity as ORG. In such cases, the classifier is often misled by a strong signal emerging from one family of features. For instance, in example (c) of Figure 4 , both MIX -p(ORG) = 0.39 vs. p(LOC) = 0.33 -and EMB -p(ORG) = 0.39 vs. p(LOC) = 0.38 -features are suggesting that the entity should be tagged as LOC, but the CONT signalp(LOC) = 0.63 vs. p(ORG) = 0.1 -strongly impacts the final decision. This was to be expected considering the simplicity of our classifier, and leaves room for further improvements.",
"cite_spans": [
{
"start": 890,
"end": 892,
"text": "10",
"ref_id": null
}
],
"ref_spans": [
{
"start": 155,
"end": 162,
"text": "Table 5",
"ref_id": null
},
{
"start": 799,
"end": 806,
"text": "Table 6",
"ref_id": "TABREF11"
},
{
"start": 1373,
"end": 1380,
"text": "Table 7",
"ref_id": null
},
{
"start": 2054,
"end": 2061,
"text": "Table 7",
"ref_id": null
},
{
"start": 2490,
"end": 2498,
"text": "Figure 4",
"ref_id": "FIGREF3"
},
{
"start": 3167,
"end": 3175,
"text": "Figure 4",
"ref_id": "FIGREF3"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "6.2"
},
{
"text": "We revisited the task of using Wikipedia for generating annotated data suitable for training NER systems. We significantly extended the number of annotations of non anchored strings, thanks to coreference information and an analysis of the Wikipedia's link structure. We applied our approach to a dump of English Wikipedia from 2013, leading to WiNER, a corpus which surpasses other similar corpora, both in terms of quantity and of annotation quality. We evaluated the impact of our corpus on 4 reference NER systems with 6 different NER benchmarks. The LTSM-CRF system of (Lample et al., 2016) seems to be the one that benefits the most from WiNER overall. Still, shortage of memory or lengthy training times prevent us from measuring the full potential of our corpus. Thus, we proposed an entity-type classifier that exploits a set of features computed over an arbitrary large part of WiNER. Using this classifier for labelling the types of segments identified by a reference NER system yields a 2-stage process that further improves overall performance. WiNER and the classifier we trained are available at http://rali.iro.umontreal.ca/ rali/en/winer-wikipedia-for-ner. As future work, we want to study the usefulness of WiNER on a fine-grained entity type task, possibly revisiting the simple classifier we resorted to in this work, and testing its benefits for other currently successful models.",
"cite_spans": [
{
"start": 574,
"end": 595,
"text": "(Lample et al., 2016)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "8"
},
{
"text": "http://rali.iro.umontreal.ca/rali/en/ wikipedia-main-concept",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Before or after the named-entity.3 Ratio of annotated tokens.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "http://www.cnts.ua.ac.be/conll2000/ chunking/conlleval.txt 5 The smallest test set has 617 sentences.6 Figures including this test set do not change drastically from what we observe hereafter.7 We use the pre-trained representations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "We used a window size of 5 in this work.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "We tried other algorithms provided by the platform with less success.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "This work has been partly funded by the TRIBE Natural Sciences and Engineering Research Council of Canada CREATE program and Nuance Foundation. We are grateful to the reviewers for their helpful comments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Polyglot-NER: Massive multilingual named entity recognition",
"authors": [
{
"first": "Rami",
"middle": [],
"last": "Al-Rfou",
"suffix": ""
},
{
"first": "Vivek",
"middle": [],
"last": "Kulkarni",
"suffix": ""
},
{
"first": "Bryan",
"middle": [],
"last": "Perozzi",
"suffix": ""
},
{
"first": "Steven",
"middle": [],
"last": "Skiena",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2015 SIAM International Conference on Data Mining",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rami Al-Rfou, Vivek Kulkarni, Bryan Perozzi, and Steven Skiena. 2015. Polyglot-NER: Massive mul- tilingual named entity recognition. In Proceed- ings of the 2015 SIAM International Conference on Data Mining, Vancouver, British Columbia, Canada. SIAM.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Deepnl: A Deep Learning NLP Pipeline",
"authors": [
{
"first": "Giuseppe",
"middle": [],
"last": "Attardi",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of Workshop on Vector Space Modeling for NLP, NAACL-HLT",
"volume": "",
"issue": "",
"pages": "109--115",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Giuseppe Attardi. 2015. Deepnl: A Deep Learning NLP Pipeline. In Proceedings of Workshop on Vec- tor Space Modeling for NLP, NAACL-HLT, pages 109-115.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Generalisation in named entity recognition: A quantitative analysis",
"authors": [
{
"first": "Isabelle",
"middle": [],
"last": "Augenstein",
"suffix": ""
},
{
"first": "Leon",
"middle": [],
"last": "Derczynski",
"suffix": ""
},
{
"first": "Kalina",
"middle": [],
"last": "Bontcheva",
"suffix": ""
}
],
"year": 2017,
"venue": "Computer Speech & Language",
"volume": "44",
"issue": "",
"pages": "61--83",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Isabelle Augenstein, Leon Derczynski, and Kalina Bontcheva. 2017. Generalisation in named entity recognition: A quantitative analysis. Computer Speech & Language, 44:61-83.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Named Entity Recognition in Wikipedia",
"authors": [
{
"first": "Dominic",
"middle": [],
"last": "Balasuriya",
"suffix": ""
},
{
"first": "Nicky",
"middle": [],
"last": "Ringland",
"suffix": ""
},
{
"first": "Joel",
"middle": [],
"last": "Nothman",
"suffix": ""
},
{
"first": "Tara",
"middle": [],
"last": "Murphy",
"suffix": ""
},
{
"first": "James R",
"middle": [],
"last": "Curran",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the 2009 Workshop on The People's Web Meets NLP: Collaboratively Constructed Semantic Resources",
"volume": "",
"issue": "",
"pages": "10--18",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dominic Balasuriya, Nicky Ringland, Joel Nothman, Tara Murphy, and James R Curran. 2009. Named Entity Recognition in Wikipedia. In Proceed- ings of the 2009 Workshop on The People's Web Meets NLP: Collaboratively Constructed Semantic Resources, pages 10-18. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Freebase: A Collaboratively Created Graph Database for Structuring Human Knowledge",
"authors": [
{
"first": "Kurt",
"middle": [],
"last": "Bollacker",
"suffix": ""
},
{
"first": "Colin",
"middle": [],
"last": "Evans",
"suffix": ""
},
{
"first": "Praveen",
"middle": [],
"last": "Paritosh",
"suffix": ""
},
{
"first": "Tim",
"middle": [],
"last": "Sturge",
"suffix": ""
},
{
"first": "Jamie",
"middle": [],
"last": "Taylor",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kurt Bollacker, Colin Evans, Praveen Paritosh, Tim Sturge, and Jamie Taylor. 2008. Freebase: A Col- laboratively Created Graph Database for Structur- ing Human Knowledge. In Proceedings of the 2008",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "ACM SIGMOD International Conference on Management of Data, SIGMOD '08",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "1247--1250",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "ACM SIGMOD International Conference on Man- agement of Data, SIGMOD '08, pages 1247-1250.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Message understanding conference (MUC) 6. LDC2003T13",
"authors": [
{
"first": "Nancy",
"middle": [],
"last": "Chinchor",
"suffix": ""
},
{
"first": "Beth",
"middle": [],
"last": "Sundheim",
"suffix": ""
}
],
"year": 2003,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nancy Chinchor and Beth Sundheim. 2003. Message understanding conference (MUC) 6. LDC2003T13.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Named entity recognition with bidirectional LSTM-CNNs",
"authors": [
{
"first": "P",
"middle": [
"C"
],
"last": "Jason",
"suffix": ""
},
{
"first": "Eric",
"middle": [],
"last": "Chiu",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Nichols",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54st Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jason PC Chiu and Eric Nichols. 2016. Named en- tity recognition with bidirectional LSTM-CNNs. In Proceedings of the 54st Annual Meeting of the Asso- ciation for Computational Linguistics.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Natural language processing (almost) from scratch",
"authors": [
{
"first": "Ronan",
"middle": [],
"last": "Collobert",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Weston",
"suffix": ""
},
{
"first": "L\u00e9on",
"middle": [],
"last": "Bottou",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Karlen",
"suffix": ""
},
{
"first": "Koray",
"middle": [],
"last": "Kavukcuoglu",
"suffix": ""
},
{
"first": "Pavel",
"middle": [],
"last": "Kuksa",
"suffix": ""
}
],
"year": 2011,
"venue": "Journal of Machine Learning Research",
"volume": "12",
"issue": "",
"pages": "2493--2537",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ronan Collobert, Jason Weston, L\u00e9on Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa. 2011. Natural language processing (almost) from scratch. Journal of Machine Learning Research, 12(Aug):2493-2537.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Incorporating Non-local Information into Information Extraction Systems by Gibbs Sampling",
"authors": [
{
"first": "Jenny",
"middle": [
"Rose"
],
"last": "Finkel",
"suffix": ""
},
{
"first": "Trond",
"middle": [],
"last": "Grenager",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the 43rd Annual Meeting on Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "363--370",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jenny Rose Finkel, Trond Grenager, and Christopher Manning. 2005. Incorporating Non-local Informa- tion into Information Extraction Systems by Gibbs Sampling. In Proceedings of the 43rd Annual Meet- ing on Association for Computational Linguistics, pages 363-370. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Coreference in Wikipedia: Main Concept Resolution",
"authors": [
{
"first": "Abbas",
"middle": [],
"last": "Ghaddar",
"suffix": ""
},
{
"first": "Philippe",
"middle": [],
"last": "Langlais",
"suffix": ""
}
],
"year": 2016,
"venue": "CoNLL",
"volume": "",
"issue": "",
"pages": "229--238",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Abbas Ghaddar and Philippe Langlais. 2016a. Coref- erence in Wikipedia: Main Concept Resolution. In CoNLL, pages 229-238.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Wiki-Coref: An English Coreference-annotated Corpus of Wikipedia Articles",
"authors": [
{
"first": "Abbas",
"middle": [],
"last": "Ghaddar",
"suffix": ""
},
{
"first": "Philippe",
"middle": [],
"last": "Langlais",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC 2016)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Abbas Ghaddar and Philippe Langlais. 2016b. Wiki- Coref: An English Coreference-annotated Corpus of Wikipedia Articles. In Proceedings of the Tenth In- ternational Conference on Language Resources and Evaluation (LREC 2016), Portoro\u017e, Slovenia.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Scalable Modified Kneser-Ney Language Model Estimation",
"authors": [
{
"first": "Kenneth",
"middle": [],
"last": "Heafield",
"suffix": ""
},
{
"first": "Ivan",
"middle": [],
"last": "Pouzyrevsky",
"suffix": ""
},
{
"first": "Jonathan",
"middle": [
"H"
],
"last": "Clark",
"suffix": ""
},
{
"first": "Philipp",
"middle": [],
"last": "Koehn",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "690--696",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kenneth Heafield, Ivan Pouzyrevsky, Jonathan H. Clark, and Philipp Koehn. 2013. Scalable Modified Kneser-Ney Language Model Estimation. In Pro- ceedings of the 51st Annual Meeting of the Associa- tion for Computational Linguistics, pages 690-696, Sofia, Bulgaria.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Neural architectures for Named Entity Recognition",
"authors": [
{
"first": "Guillaume",
"middle": [],
"last": "Lample",
"suffix": ""
},
{
"first": "Miguel",
"middle": [],
"last": "Ballesteros",
"suffix": ""
},
{
"first": "Sandeep",
"middle": [],
"last": "Subramanian",
"suffix": ""
},
{
"first": "Kazuya",
"middle": [],
"last": "Kawakami",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Dyer",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1603.01360"
]
},
"num": null,
"urls": [],
"raw_text": "Guillaume Lample, Miguel Ballesteros, Sandeep Sub- ramanian, Kazuya Kawakami, and Chris Dyer. 2016. Neural architectures for Named Entity Recognition. arXiv preprint arXiv:1603.01360.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Learning named entity recognition from Wikipedia",
"authors": [
{
"first": "Joel",
"middle": [],
"last": "Nothman",
"suffix": ""
}
],
"year": 2008,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Joel Nothman. 2008. Learning named entity recogni- tion from Wikipedia. Ph.D. thesis, The University of Sydney Australia 7.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Transforming Wikipedia into named entity training data",
"authors": [
{
"first": "Joel",
"middle": [],
"last": "Nothman",
"suffix": ""
},
{
"first": "Tara",
"middle": [],
"last": "James R Curran",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Murphy",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the Australian Language Technology Workshop",
"volume": "",
"issue": "",
"pages": "124--132",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Joel Nothman, James R Curran, and Tara Murphy. 2008. Transforming Wikipedia into named entity training data. In Proceedings of the Australian Lan- guage Technology Workshop, pages 124-132.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Named entity recognition from scratch on social media",
"authors": [
{
"first": "Dilek",
"middle": [],
"last": "Kezban",
"suffix": ""
},
{
"first": "Pinar",
"middle": [],
"last": "Onal",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Karagoz",
"suffix": ""
}
],
"year": 2015,
"venue": "ECML-PKDD",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kezban Dilek Onal and Pinar Karagoz. 2015. Named entity recognition from scratch on social media. In ECML-PKDD, MUSE Workshop.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Scikit-learn: Machine learning in Python",
"authors": [
{
"first": "F",
"middle": [],
"last": "Pedregosa",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Varoquaux",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Gramfort",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "Michel",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Thirion",
"suffix": ""
},
{
"first": "O",
"middle": [],
"last": "Grisel",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Blondel",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Prettenhofer",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Weiss",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "Dubourg",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Vanderplas",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Passos",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Cournapeau",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Brucher",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Perrot",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Duchesnay",
"suffix": ""
}
],
"year": 2011,
"venue": "Journal of Machine Learning Research",
"volume": "12",
"issue": "",
"pages": "2825--2830",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Pretten- hofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Pas- sos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duchesnay. 2011. Scikit-learn: Machine learning in Python. Journal of Machine Learning Research, 12:2825-2830.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Glove: Global Vectors for Word Representation",
"authors": [
{
"first": "Jeffrey",
"middle": [],
"last": "Pennington",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Christopher D",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2014,
"venue": "EMNLP",
"volume": "14",
"issue": "",
"pages": "1532--1543",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jeffrey Pennington, Richard Socher, and Christopher D Manning. 2014. Glove: Global Vectors for Word Representation. In EMNLP, volume 14, pages 1532-1543.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Towards Robust Linguistic Analysis using OntoNotes",
"authors": [
{
"first": "Alessandro",
"middle": [],
"last": "Sameer Pradhan",
"suffix": ""
},
{
"first": "Nianwen",
"middle": [],
"last": "Moschitti",
"suffix": ""
},
{
"first": "Hwee Tou",
"middle": [],
"last": "Xue",
"suffix": ""
},
{
"first": "Anders",
"middle": [],
"last": "Ng",
"suffix": ""
},
{
"first": "Olga",
"middle": [],
"last": "Bj\u00f6rkelund",
"suffix": ""
},
{
"first": "Yuchen",
"middle": [],
"last": "Uryupina",
"suffix": ""
},
{
"first": "Zhi",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Zhong",
"suffix": ""
}
],
"year": 2013,
"venue": "CoNLL",
"volume": "",
"issue": "",
"pages": "143--152",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sameer Pradhan, Alessandro Moschitti, Nianwen Xue, Hwee Tou Ng, Anders Bj\u00f6rkelund, Olga Uryupina, Yuchen Zhang, and Zhi Zhong. 2013. Towards Robust Linguistic Analysis using OntoNotes. In CoNLL, pages 143-152.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "CoNLL-2012 shared task: Modeling multilingual unrestricted coreference in OntoNotes",
"authors": [
{
"first": "Sameer",
"middle": [],
"last": "Pradhan",
"suffix": ""
},
{
"first": "Alessandro",
"middle": [],
"last": "Moschitti",
"suffix": ""
},
{
"first": "Nianwen",
"middle": [],
"last": "Xue",
"suffix": ""
},
{
"first": "Olga",
"middle": [],
"last": "Uryupina",
"suffix": ""
},
{
"first": "Yuchen",
"middle": [],
"last": "Zhang",
"suffix": ""
}
],
"year": 2012,
"venue": "Joint Conference on EMNLP and CoNLL-Shared Task",
"volume": "",
"issue": "",
"pages": "1--40",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sameer Pradhan, Alessandro Moschitti, Nianwen Xue, Olga Uryupina, and Yuchen Zhang. 2012. CoNLL- 2012 shared task: Modeling multilingual unre- stricted coreference in OntoNotes. In Joint Confer- ence on EMNLP and CoNLL-Shared Task, pages 1- 40.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Design challenges and misconceptions in named entity recognition",
"authors": [
{
"first": "Lev",
"middle": [],
"last": "Ratinov",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Roth",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the Thirteenth Conference on Computational Natural Language Learning",
"volume": "",
"issue": "",
"pages": "147--155",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lev Ratinov and Dan Roth. 2009. Design challenges and misconceptions in named entity recognition. In Proceedings of the Thirteenth Conference on Com- putational Natural Language Learning, pages 147- 155. Association for Computational Linguistics.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Mining Wiki Resources for Multilingual Named Entity Recognition In proceedings",
"authors": [
{
"first": "Alexander",
"middle": [
"E"
],
"last": "Richman",
"suffix": ""
},
{
"first": "Patrick",
"middle": [],
"last": "Schone",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the 46th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1--9",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alexander E. Richman and Patrick Schone. 2004. Min- ing Wiki Resources for Multilingual Named Entity Recognition In proceedings. In Proceedings of the 46th Annual Meeting of the Association for Compu- tational Linguistics, pages 1-9.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Named Entity Recognition in Tweets: An Experimental Study",
"authors": [
{
"first": "Alan",
"middle": [],
"last": "Ritter",
"suffix": ""
},
{
"first": "Sam",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "Mausam",
"middle": [],
"last": "",
"suffix": ""
},
{
"first": "Oren",
"middle": [],
"last": "Etzioni",
"suffix": ""
}
],
"year": 2011,
"venue": "EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alan Ritter, Sam Clark, Mausam, and Oren Etzioni. 2011. Named Entity Recognition in Tweets: An Ex- perimental Study. In EMNLP.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Introduction to the CoNLL-2003 shared task: Language-independent named entity recognition",
"authors": [
{
"first": "Erik F Tjong Kim",
"middle": [],
"last": "Sang",
"suffix": ""
},
{
"first": "Fien",
"middle": [],
"last": "De Meulder",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of the seventh conference on Natural language learning at HLT-NAACL 2003",
"volume": "4",
"issue": "",
"pages": "142--147",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Erik F Tjong Kim Sang and Fien De Meulder. 2003. Introduction to the CoNLL-2003 shared task: Language-independent named entity recognition. In Proceedings of the seventh conference on Natural language learning at HLT-NAACL 2003-Volume 4, pages 142-147. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "A proposal to automatically build and maintain gazetteers for Named Entity Recognition by using Wikipedia",
"authors": [
{
"first": "A",
"middle": [],
"last": "Toral",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Munoz",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the EACL-2006 Workshop on New Text: Wikis and blogs and other dynamic text sourcesEACL Workhop on NEW TEXT-Wikis and blogs and ther dynamic text sources",
"volume": "",
"issue": "",
"pages": "56--61",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A. Toral and R. Munoz. 2006. A proposal to automati- cally build and maintain gazetteers for Named Entity Recognition by using Wikipedia. In Proceedings of the EACL-2006 Workshop on New Text: Wikis and blogs and other dynamic text sourcesEACL Workhop on NEW TEXT-Wikis and blogs and ther dynamic text sources, pages 56-61.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"num": null,
"type_str": "figure",
"text": "Examples of errors in our annotation pipeline. Faulty annotations are marked with a star."
},
"FIGREF1": {
"uris": null,
"num": null,
"type_str": "figure",
"text": "Two representations of WiNER's annotation used for feature extraction."
},
"FIGREF2": {
"uris": null,
"num": null,
"type_str": "figure",
"text": "(a) . . . would give [Olin] PER\u2192ORG access to its production processes . . . (b) Wall Street traders said [Piedmont] LOC\u2192ORG shares fell partly . . . (c) . . . performed as a tenor at New York City 's [Carnegie Hall] ORG\u2192LOC ."
},
"FIGREF3": {
"uris": null,
"num": null,
"type_str": "figure",
"text": "Example of entities re-classified by our 2-stage approach."
},
"TABREF0": {
"type_str": "table",
"text": "Canadian] MISC musician who resided in [Paris] LOC , [France] LOC for several years, and now lives in [Cologne] LOC , [Germany] LOC . Though best known for his first MC [...], he is a pianist, producer, and songwriter. He was signed to a three-album deal with Warner Music Canada in 1995, a subsidiary of [Warner Bros. Records] ORG . . . While the album's production values were limited [Warner Bros.] ORG simply . . .",
"num": null,
"content": "<table><tr><td colspan=\"2\">[Chilly Gonzales] PER (born [Jason Charles Beck] PER ; 20 March 1972) is a [Paris LOC \u2192 Europe, France, Napoleon, . . .</td></tr><tr><td>Cologne LOC</td><td/></tr><tr><td>\u2192 Germany, Alsace, . . .</td><td>OLT</td></tr><tr><td colspan=\"2\">Warner Bros. Records ORG</td></tr><tr><td>\u2192 Warner, Warner Bros., . . .</td><td/></tr><tr><td>France LOC</td><td>CT</td></tr><tr><td colspan=\"2\">\u2192 French Republic, Kingdom. . .</td></tr><tr><td colspan=\"2\">that lists, for all the articles in Wikipedia</td></tr><tr><td colspan=\"2\">(those that have a Freebase counterpart), all the</td></tr><tr><td colspan=\"2\">text mentions that are coreferring to the main con-</td></tr><tr><td colspan=\"2\">cept of an article. For instance, for the article</td></tr><tr><td colspan=\"2\">Chilly Gonzales, the resource lists proper names</td></tr><tr><td colspan=\"2\">(e.g. Gonzales, Beck), nominal (e.g. the per-</td></tr></table>",
"html": null
},
"TABREF1": {
"type_str": "table",
"text": "a) [Eldridge Pope] ORG was a traditional brewery.....Sixteen years later the [Pope] ORG brothers floated the business...",
"num": null,
"content": "<table><tr><td>b) Montreal Impact's biggest rival is [Toronto</td></tr><tr><td>FC] ORG because Canada's two largest cities</td></tr><tr><td>have rivalries in and out of sport. Mon-</td></tr><tr><td>treal and [Toronto] ORG professional soccer</td></tr><tr><td>teams have competed against each other for</td></tr><tr><td>over 40 years.</td></tr></table>",
"html": null
},
"TABREF2": {
"type_str": "table",
"text": "Number of times a text string (mention) is labelled with (at least) two types in WiNER. The cells on the diagonal indicate the number of annotations.",
"num": null,
"content": "<table/>",
"html": null
},
"TABREF5": {
"type_str": "table",
"text": "Performance of the Illinois toolkit on CONLL, as a function of the Wikipedia-based training material used. The figures on the last line are averaged over the 10 subsets of WiNER we randomly sampled. Bracketed figures indicate the minimum and maximum values.",
"num": null,
"content": "<table/>",
"html": null
},
"TABREF7": {
"type_str": "table",
"text": "Cross-domain evaluation of NER systems trained on different mixes of CONLL and WiNER. Figures are token-level F1 score on 3 classes, while figures in parentheses indicate absolute gains over the configuration using only the CONLL training material. Bold figures highlight column-wise best results.",
"num": null,
"content": "<table/>",
"html": null
},
"TABREF8": {
"type_str": "table",
"text": ".",
"num": null,
"content": "<table><tr><td/><td>LOC MISC ORG</td><td>PER</td></tr><tr><td colspan=\"2\">CONT 0.11 0.35 0.06</td><td>0.48</td></tr><tr><td>MIX</td><td>0.26 0.19 0.18</td><td>0.37</td></tr><tr><td>EMB</td><td colspan=\"2\">0.39 0.23 0.258 0.46</td></tr></table>",
"html": null
},
"TABREF10": {
"type_str": "table",
"text": "",
"num": null,
"content": "<table><tr><td/><td colspan=\"2\">Native RF</td></tr><tr><td>CRF</td><td/><td/></tr><tr><td>CONLL</td><td>67.0</td><td>73.6 (+6.6)</td></tr><tr><td>+WiNER(3M)</td><td>-</td><td>-</td></tr><tr><td>+WiNER(1M)</td><td>69.2</td><td>73.0 (+2.8)</td></tr><tr><td>Illinois</td><td/><td/></tr><tr><td>CONLL</td><td>68.3</td><td>74.4 (+6.1)</td></tr><tr><td>+WiNER(3M)</td><td>69.5</td><td>74.2 (+4.7)</td></tr><tr><td colspan=\"2\">+WiNER(30M) 69.0</td><td>74.3 (+4.3)</td></tr><tr><td>Senna</td><td/><td/></tr><tr><td>CONLL</td><td>64.3</td><td>70.1 (+5.8)</td></tr><tr><td>+WiNER(3M)</td><td>67.0</td><td>70.8 (+3.8)</td></tr><tr><td>+WiNER(7M)</td><td>66.2</td><td>72.0 (+5.8)</td></tr><tr><td>LSTM-CRF</td><td/><td/></tr><tr><td>CONLL</td><td>65.0</td><td>69.7 (+4.7)</td></tr><tr><td>+WiNER(3M)</td><td>72.0</td><td>74.8 (+2.8)</td></tr><tr><td>+WiNER(5M)</td><td>73.0</td><td>74.7 (+1.7)</td></tr></table>",
"html": null
},
"TABREF11": {
"type_str": "table",
"text": "OD F 1 score of native configurations, and of our two-stage approach (RF) which exploits the full WiNER corpus. Figures in parenthesis indicate absolute gains over the native configuration.",
"num": null,
"content": "<table/>",
"html": null
}
}
}
}