{ "paper_id": "I13-1045", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T07:14:34.800035Z" }, "title": "Automatically Developing a Fine-grained Arabic Named Entity Corpus and Gazetteer by utilizing Wikipedia", "authors": [ { "first": "Fahd", "middle": [], "last": "Alotaibi", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Birmingham", "location": { "country": "UK" } }, "email": "" }, { "first": "Mark", "middle": [], "last": "Lee", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Birmingham", "location": { "country": "UK" } }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "This paper presents a methodology to exploit the potential of Arabic Wikipedia to assist in the automatic development of a large Fine-grained Named Entity (NE) corpus and gazetteer. The corner stone of this approach is efficient classification of Wikipedia articles to target NE classes. The resources developed were thoroughly evaluated to ensure reliability and a high quality. Results show the developed gazetteer boosts the performance of the NE classifier on a news-wire domain by at least 2 points F-measure. Moreover, by combining a learning NE classifier with the developed corpus the score achieved is a high F-measure of 85.18%. The developed resources overcome the limitations of traditional Arabic NE tasks by more fine-grained analysis and providing a beneficial route for further studies.", "pdf_parse": { "paper_id": "I13-1045", "_pdf_hash": "", "abstract": [ { "text": "This paper presents a methodology to exploit the potential of Arabic Wikipedia to assist in the automatic development of a large Fine-grained Named Entity (NE) corpus and gazetteer. The corner stone of this approach is efficient classification of Wikipedia articles to target NE classes. The resources developed were thoroughly evaluated to ensure reliability and a high quality. Results show the developed gazetteer boosts the performance of the NE classifier on a news-wire domain by at least 2 points F-measure. Moreover, by combining a learning NE classifier with the developed corpus the score achieved is a high F-measure of 85.18%. The developed resources overcome the limitations of traditional Arabic NE tasks by more fine-grained analysis and providing a beneficial route for further studies.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Previous efforts that have been made to develop an Arabic NER either focused on traditional NE classes (Benajiba et al., 2010) or sought to expand only one class at a time (Shaalan and Raza, 2007) . Applications such as Question Answering (QA) receive more benefits when a fine-grained NER is developed. This is true when we consider that, the majority of factoid questions are about named entities (Noguera et al., 2005) . Having a finer NER, results in the possibility of extracting more semantic knowledge from the context. 
For example, if we consider the following sentence: \u202b\ufeed\ufeb3\ufe8e\ufe8b\ufede(\u202c \u202b\ufeb7\ufeae\ufedb\ufe8e\ufe95\u202c \u202b\ufe83\ufedb\ufe92\ufeae\u202c \u202b\ufeeb\ufef2\u202c \u202b\u062f\ufef3\ufeb0\ufee7\ufef2\u202c \u202b\ufeed\ufe8d\ufedf\ufe96\u202c \u202b\ufeb7\ufeae\ufedb\ufe94\u202c \u202b\ufe8d\ufedf\ufecc\ufe8e\ufedf\ufee2\u202c \u202b\ufed3\ufef2\u202c \u202b\ufeed\ufe8d\ufedf\ufe98\ufeae\ufed3\ufef4\ufeea\u202c \u202b\ufe8d\ufef9\ufecb\ufefc\ufee1\u202c /\u0161rk wAlt dyzny hy Okb\u0159 srkAt wsA\u0177l AlI\u03c2lAm wAltrfyh fy Al\u03c2Alm/ 'Walt Disney is the largest media company in the entertainment world') 1 , we would have more semantic information if we could tag \u202b\u062f\ufef3\ufeb0\ufee7\ufef2(\u202c \u202b\ufeed\ufe8d\ufedf\ufe96\u202c /wAlt dyzny/ 'Walt Disney') as [ORG-ENTERTAINMENT] rather than just [ORG] . This deeper semantics is very helpful when answering factoid questions like \"What is the largest entertainment company?\" (Footnote 1: Throughout this paper, Arabic words are represented in three variants: Arabic word /HSB transliteration scheme (Habash et al., 2007) / \"English translation\".)", "cite_spans": [ { "start": 103, "end": 126, "text": "(Benajiba et al., 2010)", "ref_id": "BIBREF5" }, { "start": 172, "end": 196, "text": "(Shaalan and Raza, 2007)", "ref_id": "BIBREF24" }, { "start": 399, "end": 421, "text": "(Noguera et al., 2005)", "ref_id": "BIBREF17" }, { "start": 798, "end": 799, "text": "1", "ref_id": null }, { "start": 900, "end": 901, "text": "1", "ref_id": null }, { "start": 1014, "end": 1035, "text": "(Habash et al., 2007)", "ref_id": "BIBREF13" }, { "start": 1101, "end": 1106, "text": "[ORG]", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Supervised machine learning technologies have been successfully adopted for several natural language tasks, including NER. These technologies require a reasonable amount of data to be accessible in the training phase, containing a number of positive and negative examples to learn from and to circumvent the problem of data sparseness. Traditional methods for compiling such data involve recruiting individuals to annotate a corpus manually. This is tedious work, as well as costly and time consuming. Moreover, manually annotating a large portion of a relatively open-domain corpus, beyond news-wire and across various genres, is not easy for an individual to achieve.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Therefore, when developing a reasonable fine-grained NE corpus, two questions should be answered. First, what proper fine-grained semantic classes should be established? Second, how can a reasonably sized fine-grained NE corpus be developed at minimum cost? This work answers those questions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "To these ends, we devised a methodology that utilises the availability and growth of Arabic Wikipedia to develop a large and extendable fine-grained named entity corpus and gazetteer with minimum human intervention. The contributions of this paper are:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "1. It introduces a two-level tagset for Wikipedia NEs;", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "2. 
It automatically develops a large fine-grained NE corpus with minimum human intervention;", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "3. It develops a large fine-grained gazetteer; and 4. It thoroughly evaluates the resulting corpus and gazetteer.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Wikipedia is an extensive collaborative project on the web in which articles are published and reviewed by volunteers from around the world. Wikipedia exists in 271 different languages, with the Arabic version ranked 27th with more than 210,000 articles. The annual increase in the number of articles is 30% (Wikipedia, 2013) . The relationship between named entities and Wikipedia is that a large percentage of Wikipedia articles are about named entities (Alotaibi and Lee, 2012) . This provided the motivation to utilise Wikipedia's underlying structure to produce the target resources.", "cite_spans": [ { "start": 307, "end": 324, "text": "(Wikipedia, 2013)", "ref_id": "BIBREF27" }, { "start": 464, "end": 488, "text": "(Alotaibi and Lee, 2012)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Arabic Wikipedia and Named Entity", "sec_num": "2" }, { "text": "To this end, it is beneficial to provide an overview of the critical aspects of the Wikipedia structure:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Arabic Wikipedia and Named Entity", "sec_num": "2" }, { "text": "\u2022 Articles: These can be one of the following:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Arabic Wikipedia and Named Entity", "sec_num": "2" }, { "text": "1. Normal article: Each article has a unique title and contains authentic content, i.e. textual data, images, tables, items and links related to the concept represented in the title. These are in the majority.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Arabic Wikipedia and Named Entity", "sec_num": "2" }, { "text": "2. Redirected article: These contain a specific tag to redirect the enquirer to a normal article. For example: for the redirected article titled \u202b\ufe91\ufeae\ufef3\ufec4\ufe8e\ufee7\ufef4\ufe8e(\u202c \u202b\ufe8d\ufedf\ufecc\ufec8\ufee4\ufef0\u202c /bryTAnyA Al\u03c2\u010em\u00fd/ 'Great Britain'), there is a redirect tag pointing to \u202b\ufe8d\ufedf\ufee4\ufe98\ufea4\ufeaa\ufe93(\u202c \u202b\ufe8d\ufedf\ufee4\ufee4\ufee0\ufedc\ufe94\u202c /Almmlk AlmtHd / 'United Kingdom'). This tag is written thus: #REDIRECT [[\u202b\ufe8d\ufedf\ufee4\ufee4\ufee0\ufedc\ufe94\u202c", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Arabic Wikipedia and Named Entity", "sec_num": "2" }, { "text": "3. Disambiguation article: These are used to list all the article titles that share an ambiguous name.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "\u202b.]]\ufe8d\ufedf\ufee4\ufe98\ufea4\ufeaa\ufe93\u202c", "sec_num": null }, { "text": "\u2022 Link types: There are two types of links in Wikipedia, and they are described below:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "\u202b.]]\ufe8d\ufedf\ufee4\ufe98\ufea4\ufeaa\ufe93\u202c", "sec_num": null }, { "text": "1. Non-piped links: this type of link denotes that the display phrase of the link and the article's title are the same. 
For example:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "\u202b.]]\ufe8d\ufedf\ufee4\ufe98\ufea4\ufeaa\ufe93\u202c", "sec_num": null }, { "text": "[[London]].", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "\u202b.]]\ufe8d\ufedf\ufee4\ufe98\ufea4\ufeaa\ufe93\u202c", "sec_num": null }, { "text": "2. Piped links: this type of link allows for the text that appears in the contextual data to be different from the actual article it refers to. For example:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "\u202b.]]\ufe8d\ufedf\ufee4\ufe98\ufea4\ufeaa\ufe93\u202c", "sec_num": null }, { "text": "[[UK|United Kingdom]]", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "\u202b.]]\ufe8d\ufedf\ufee4\ufe98\ufea4\ufeaa\ufe93\u202c", "sec_num": null }, { "text": ", where \"UK\" appears in the display text, while \"United Kingdom\" refers to the titles of the article.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "\u202b.]]\ufe8d\ufedf\ufee4\ufe98\ufea4\ufeaa\ufe93\u202c", "sec_num": null }, { "text": "Throughout this paper, the terms \"link\" and \"link phrase\" are used interchangeably to refer to the same thing.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "\u202b.]]\ufe8d\ufedf\ufee4\ufe98\ufea4\ufeaa\ufe93\u202c", "sec_num": null }, { "text": "\u2022 Connectivity: Used links, of any type, in the contextual data of any normal article, provide connectivity and thereby an underlying structure for Wikipedia; we are seeking to utilise to achieve our goal.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "\u202b.]]\ufe8d\ufedf\ufee4\ufe98\ufea4\ufeaa\ufe93\u202c", "sec_num": null }, { "text": "In this section we present in detail the approach advised to automatically develop a tagged fine-grained named entity corpus and gazetteer based on Arabic Wikipedia.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Transforming Arabic Wikipedia into a Fine-grained NE corpus and Gazetteer", "sec_num": "3" }, { "text": "Our assumption regarding this work is as follows: If we are able to classify Wikipedia articles into NE classes, we will then be able to map the resultant labelling back into contextualised linked phrases. This involves the following steps:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Conciseness of the Approach", "sec_num": "3.1" }, { "text": "1. Defining a fine-grained taxonomy suitable to Wikipedia; 2. Classifying Arabic Wikipedia articles into a predefined set of fine-grained NE classes;", "cite_spans": [ { "start": 48, "end": 58, "text": "Wikipedia;", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "The Conciseness of the Approach", "sec_num": "3.1" }, { "text": "3. Mapping the results of the classification back to the linked phrases in the text;", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Conciseness of the Approach", "sec_num": "3.1" }, { "text": "4. Detecting successive mentions of NE that have not been associated with links, while taking into account the Arabic morphological variation of the NE phrase; and 5. Selecting sentences to be included in the final corpus.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Conciseness of the Approach", "sec_num": "3.1" }, { "text": "Sekine et al. 2002proposed a hierarchical named entity taxonomy that is very fine, with 150 subclasses. 
The methodology they used to construct the semantic classes relies on analysing the named entities in a newswire corpus, in addition to analysing the answer types for a set of questions used in a Text Retrieval Conference TREC-QA task. The WordNet noun hierarchy is also used to shape the classes further. Two years later, Sekine and Nobat (2004) added an extra 50 classes and decomposed some existing classes, such as \"disease\" and the numeric expressions. Although the spectrum of classes is very wide, the specific descriptions and definitions for each class strive to avoid the overlap and ambiguity that would otherwise make the classes difficult to define. This taxonomy has been applied to both English and Japanese. Some NLP applications, such as QA, have designed their own named entity classes, based on the criteria they believe to be the most valuable. Harabagiu et al. (2003) developed a named entity recognition component in which one level consists of 20 fine-grained classes. Knowing that factoid-type questions require named entities, Li and Roth (2006) defined a fine-grained taxonomy to answer certain types of questions. Although their two-layer taxonomy covered 50 fine-grained classes of different types, some types were unrelated to named entities, such as definition, description, manner and reason. Following the same trend, Brunstein (2002) presented a two-level taxonomy in which 29 answer types are subdivided into 105 subtypes. Other researchers have since adopted this taxonomy for named entity annotation (Nothman et al., 2008) .", "cite_spans": [ { "start": 418, "end": 441, "text": "Sekine and Nobat (2004)", "ref_id": "BIBREF22" }, { "start": 928, "end": 951, "text": "Harabagiu et al. (2003)", "ref_id": "BIBREF14" }, { "start": 1123, "end": 1141, "text": "Li and Roth (2006)", "ref_id": "BIBREF15" }, { "start": 1418, "end": 1434, "text": "Brunstein (2002)", "ref_id": "BIBREF6" }, { "start": 1606, "end": 1628, "text": "(Nothman et al., 2008)", "ref_id": "BIBREF18" } ], "ref_spans": [], "eq_spans": [], "section": "Defining Fine-grained Semantic NE Classes", "sec_num": "3.2" }, { "text": "It is evident that there is no widely agreed fine-grained taxonomy that can be directly adopted for Arabic, although the ACE taxonomy is a reasonable choice in the sense that it organises granularity into two layers, i.e. coarse and fine-grained. In the ACE (2008) evaluation, the number of fine-grained classes is 45. This taxonomy is designed in two levels of granularity and is frequently used in the news-wire domain. Moreover, a two-level taxonomy allows us to map the tagset easily into traditional schemes such as CoNLL or MUC.", "cite_spans": [ { "start": 265, "end": 275, "text": "ACE (2008)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Defining Fine-grained Semantic NE Classes", "sec_num": "3.2" }, { "text": "Thus, the ACE (2008) taxonomy was selected, and because it is designed for the news-wire domain, we applied some amendments to tailor it for use in a relatively open-domain corpus such as Wikipedia. For example, there are many articles in Wikipedia about people in different subclasses, such as scientists, athletes, artists, politicians, etc. These fine classes are not included in ACE, as it only involves three sub-classes: individual, group and indeterminate. Another modification was made: a new class called \"Product\" was added. This modified taxonomy is presented in Table 1 . 
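To make the two-level structure concrete, the fragment below sketches how the modified tagset might be represented programmatically. This is a minimal illustration only: the subclass names shown are a small, partly assumed subset rather than the full 45-class inventory.

# A minimal sketch of the two-level tagset (coarse class -> fine-grained subclasses).
# Subclass lists are illustrative; the full modified taxonomy contains many more.
TAXONOMY = {
    'PER': ['INDIVIDUAL', 'GROUP', 'SCIENTIST', 'ATHLETE', 'ARTIST', 'POLITICIAN'],
    'ORG': ['GOVERNMENT', 'COMMERCIAL', 'MEDIA', 'ENTERTAINMENT'],
    'LOC': ['REGION', 'WATER-BODY', 'CELESTIAL'],
    'GPE': ['NATION', 'STATE', 'CITY'],
    'FAC': ['BUILDING', 'AIRPORT', 'PATH'],
    'VEH': ['LAND', 'AIR', 'WATER'],
    'PRODUCT': [],  # the class added to the original ACE inventory
}

def to_coarse(tag):
    # Map a fine-grained tag back to its coarse class, e.g. for CoNLL- or MUC-style schemes.
    for coarse, fine_tags in TAXONOMY.items():
        if tag == coarse or tag in fine_tags:
            return coarse
    return 'O'  # not a named entity tag

Collapsing fine-grained tags in this way is what makes the two-level design easy to align with traditional tagsets. 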
", "cite_spans": [], "ref_spans": [ { "start": 574, "end": 581, "text": "Table 1", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Defining Fine-grained Semantic NE Classes", "sec_num": "3.2" }, { "text": "The aim of classifying Wikipedia articles is to produce a list of two tuples, like . The following sub sections describe the steps taken to achieve this goal.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Wikipedia Document Classification", "sec_num": "3.3" }, { "text": "Quality Evaluation In order to classify Arabic Wikipedia articles into named entity classes, we manually annotated 4000 articles into two levels of granularity, i.e. coarse and fine grained, using the modified taxonomy shown in Table 1 . Two Arabic natives were involved in the annotation process and the inter-annotator agreement between the annotators was calculated using Kappa Statistic (Carletta, 1996) . Table 2 shows that the inter-annotator agreement was calculated for different sizes of documents, i.e. 500, 2000 and 4000. This revealed difficulties that might be encountered during the annotation process.", "cite_spans": [ { "start": 392, "end": 408, "text": "(Carletta, 1996)", "ref_id": "BIBREF7" } ], "ref_spans": [ { "start": 228, "end": 236, "text": "Table 1", "ref_id": "TABREF0" }, { "start": 411, "end": 418, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "Fine-grained Document Annotation and", "sec_num": "3.3.1" }, { "text": "Kappa: n=500", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Level", "sec_num": null }, { "text": "Kappa: n=2000", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Level", "sec_num": null }, { "text": "Kappa: n=4000", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Level", "sec_num": null }, { "text": "Coarse-grained 92 98 99 Fine-grained 80 95 97 Table 2 : Inter-annotator agreement in coarse and fine grained levels", "cite_spans": [], "ref_spans": [ { "start": 46, "end": 53, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "Level", "sec_num": null }, { "text": "We developed our classification model relying on the set of features proposed by Alotaibi and Lee (2012) as these score 90% on the F-measure for coarse grained level. The features were:", "cite_spans": [ { "start": 81, "end": 104, "text": "Alotaibi and Lee (2012)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Features Engineering and Representation", "sec_num": "3.3.2" }, { "text": "1. Simple Features (SF): which represent the raw dataset as a simple bag of words without further processing.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Features Engineering and Representation", "sec_num": "3.3.2" }, { "text": "involving removing the punctuation and symbols, filtering stop words and normalising digits.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Filtered Features (FF):", "sec_num": "2." }, { "text": "represent the tokens in their stem form.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Language-dependent Features (LF):", "sec_num": "3." }, { "text": "involving tokenising the sentence and assigning parts of speech for each token. This allows filtering of the dataset by involving only nouns (for instance) in the classifier.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Enhanced Language-dependent Features (ELF):", "sec_num": "4." 
}, { "text": "In addition, we extended this set of features by extracting two more features:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Enhanced Language-dependent Features (ELF):", "sec_num": "4." }, { "text": "1. First paragraph: Instead of just relying on the first sentence as in (Alotaibi and Lee, 2012), we identified useful features spread across the first paragraph.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Enhanced Language-dependent Features (ELF):", "sec_num": "4." }, { "text": "2. Bigram: By using this feature, we aim to examine the effects of the collocation of tokens. Here we added the representation of a bigram while still preserving the unigram.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Enhanced Language-dependent Features (ELF):", "sec_num": "4." }, { "text": "We represent the feature space using the term frequency-inverse document frequency (tf-idf).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Enhanced Language-dependent Features (ELF):", "sec_num": "4." }, { "text": "The annotated dataset was divided into training and test at 80% and 20% respectively. We chose the Support Vector Machine (SVM) and Stochastic Gradient Decent (SGD) as a probabilistic model for the classifier. In each round of the classification, we tested one set of features and selected the one that performed best. Table 3 shows the overall results for the fine-grained classification. There are three main findings. First, both classifiers tend to perform in a very similar way; therefore, in practice, use of either classifier to perform the final classification for the whole Wikipedia dataset will be expected to deliver very similar results. The second finding is that, the bigram features have little effect when different features are set. Finally, the best result for both classifiers was achieved using the ELFUni feature. Table 3 : The average fine-grained classification results when using SGD and SVM over different features sets where (tf-idf) is applied", "cite_spans": [], "ref_spans": [ { "start": 319, "end": 326, "text": "Table 3", "ref_id": null }, { "start": 836, "end": 843, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "Fine-grained Document Classification Results", "sec_num": "3.3.3" }, { "text": "Compilation of the final corpus was achieved according to the pipeline steps as follows: As a convention, a linking phrase in the text of any Wikipedia article should only be assigned the first time it appears in context; successive mentions of the phrase appear with no link. Therefore, not all NE phrases are linked every time. Detecting successive mentions works by finding and matching possible NE phrases in the text that share similarity, to a certain extent, with each phrase in the list of linked NE phrases. The main goal of this step is to augment the plain text with NE tags and to address some of the lexical and morphological variations that arise when a named entity is contextualised. For example, a named entity of \u202b\ufe8d\ufedf\ufed4\ufef4\ufebc\ufede(\u202c \u202b\ufeb3\ufecc\ufeee\u062f\u202c /s\u03c2wd AlfySl/ 'Saud Alfaisal') is expected to be repeated in context with either the first name \u202b\ufeb3\ufecc\ufeee\u062f(\u202c /s\u03c2wd/ 'Saud') or the last name \u202b\ufe8d\ufedf\ufed4\ufef4\ufebc\ufede(\u202c /AlfySl/ 'Alfaisal') or both together. This can also be difficult when prefixes are used. 
Such a variant list would cover, for example, \u202b\ufeed\ufedf\ufeb4\ufecc\ufeee\u062f(\u202c /wls\u03c2wd/ 'and for Saud'). We therefore prepare for and match all the variations of prefixes that can be attached to the NE.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Compiling the Corpus", "sec_num": "3.4" }, { "text": "6. Produce the NE annotated corpus by selecting sentences to be included in the final corpus.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Compiling the Corpus", "sec_num": "3.4" }, { "text": "We decided to compile two versions of the developed corpus. The first version is called \"WikiFANE Whole \", which means that we retrieved all the sentences from the articles. On the other hand, the second version, i.e. WikiFANE Selective , is compiled by selecting only the sentences that have at least one named entity phrase. This creates a Wikipedia corpus that has as high a density of tags as possible.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "To What Extent Should Sentences Be Selected for the Final Corpus?", "sec_num": "3.4.1" }, { "text": "In this paper, and for evaluation purposes, we compiled a corpus of more than 2 million tokens, as shown in Table 4 . Meanwhile, this methodology allows all of Arabic Wikipedia to become a tagged fine-grained NE corpus. Moreover, both versions of this dataset were freely distributed to the research community 2 . ", "cite_spans": [], "ref_spans": [ { "start": 109, "end": 116, "text": "Table 4", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "To What Extent Should Sentences Be Selected for the Final Corpus?", "sec_num": "3.4.1" }, { "text": "The process of classifying Wikipedia articles into NE classes provides the benefit of compiling a large Arabic NE gazetteer at two levels of granularity. To the best of our knowledge, the only Arabic NE gazetteer currently available is that produced by Benajiba et al. (2007) , covering only three traditional NE classes, i.e. PER, ORG and LOC. The size of this gazetteer is 4132 entities. ", "cite_spans": [ { "start": 252, "end": 274, "text": "Benajiba et al. (2007)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Introducing a Fine-grained Arabic NE Gazetteer", "sec_num": "4" }, { "text": "To evaluate the fine-grained NE corpus and gazetteer produced, we conducted a set of thorough experiments. The aims of the evaluation were to answer the following questions:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation and Results", "sec_num": "5" }, { "text": "\u2022 What is the quality of the corpus and gazetteer produced, in terms of annotation?", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation and Results", "sec_num": "5" }, { "text": "\u2022 How efficient is the NE classifier when trained with WikiFANE Whole and WikiFANE Selective and tested over cross-domain and within-domain datasets?", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation and Results", "sec_num": "5" }, { "text": "The performance of document classification across all Wikipedia articles is crucial to avoid error propagation from the document classification stage when compiling the final version of the annotated corpus. Therefore, the first evaluation focused on this aspect. After classifying all articles into the target NE classes, we drew another 4000 articles as a representative sample of all Wikipedia articles and manually annotated them. 
The articles were selected by taking the first 4000 articles whose titles match the glyphs used most frequently in other Wikipedia articles. This criterion ensured that the most frequent NEs were classified properly with a minimum error rate. After this, we calculated the inter-annotator agreement between the manually annotated, gold-standard documents and those classified automatically, as in step 3 of Section 3.4. Table 6 shows the result for both levels of granularity. The overall Kappa for the fine-grained level is 82.6%, and this is consistent with the results shown in Section 3.3.3. This suggests that the error rate is minimal, even when performing the classification across all Wikipedia articles with small amounts of training data. (Table 7 : The set of language-dependent and independent features extracted to be used by the classifier.)", "cite_spans": [], "ref_spans": [ { "start": 862, "end": 869, "text": "Table 6", "ref_id": "TABREF6" }, { "start": 985, "end": 992, "text": "Table 7", "ref_id": null } ], "eq_spans": [], "section": "Evaluating the Annotation Quality", "sec_num": "5.1" }, { "text": "This evaluation was designed to assess the corpus developed by using it as training data and testing over cross-domain and within-domain datasets. Moreover, this assists in evaluating the efficiency of using the gazetteer as an external knowledge resource. We parsed the different datasets and tokenised the sentences using AMIRA (Diab, 2009) , relying on the scheme (Conjunction + Preposition + Prefix). The concept behind using this tokenisation scheme is that the notable sparseness issues regarding Arabic NE are caused by agglutination of the prefixes. In this scheme, we guaranteed that named entities like \u202b\ufea7\ufe8e\ufedf\ufeaa(\u202c /xAld/ 'Khalid') in the training data also match \u202b\ufeed\ufedf\ufea8\ufe8e\ufedf\ufeaa(\u202c /wlxAld/ 'and for Khalid') in the test data. This happens by tokenising the words and splitting the prefixes, so the result will be three different tokens: \u202b\ufeed(\u202c /w/ 'and'), \u202b\ufedd(\u202c /l/ 'for') and \u202b\ufea7\ufe8e\ufedf\ufeaa(\u202c /xAld/ 'Khalid').", "cite_spans": [ { "start": 323, "end": 335, "text": "(Diab, 2009)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Evaluating the Corpus Developed by Learning NE Classifier", "sec_num": "5.2" }, { "text": "We extracted traditional sets of features at different levels, including lexical, morphological, syntactic and external knowledge. Table 7 summarises the features used, where a window of five tokens, including the current position, is encoded in the classifier.", "cite_spans": [], "ref_spans": [ { "start": 133, "end": 140, "text": "Table 7", "ref_id": null } ], "eq_spans": [], "section": "Evaluating the Corpus Developed by Learning NE Classifier", "sec_num": "5.2" }, { "text": "The following set of experiments was conducted relying on the Conditional Random Field (CRF) probabilistic model to perform the sequence labelling. In all the experiments, we divided the datasets into training and test sets at 80% and 20% respectively. 
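As a rough sketch of this setup (assuming the sklearn-crfsuite package and a simplified feature template of just the token window plus a gazetteer flag):

import sklearn_crfsuite  # assumed CRF implementation; any CRF toolkit would do

def token_features(tokens, i, in_gazetteer):
    # Encode a window of five tokens (i-2 .. i+2), including the current position,
    # plus an external-knowledge flag from the gazetteer lookup.
    feats = {'bias': 1.0, 'in_gazetteer': in_gazetteer(tokens[i])}
    for offset in range(-2, 3):
        j = i + offset
        if 0 <= j < len(tokens):
            feats['tok[%d]' % offset] = tokens[j]
    return feats

def sent_to_features(tokens, in_gazetteer):
    return [token_features(tokens, i, in_gazetteer) for i in range(len(tokens))]

# X_train / y_train would hold per-sentence feature dicts and BIO-style NE
# labels built from the 80% training portion:
crf = sklearn_crfsuite.CRF(algorithm='lbfgs', max_iterations=100)
# crf.fit(X_train, y_train); y_pred = crf.predict(X_test)
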
We used three metrics, precision, recall and F-measure, to evaluate the results.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluating the Corpus Developed by Learning NE Classifier", "sec_num": "5.2" }, { "text": "Newswire-based NE Corpora and WikiFANE", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Tags Distribution for the Gold-standard", "sec_num": "5.2.1" }, { "text": "Different corpora have been used by researchers to develop NER. The first one is ANERcorp, which was developed by Benajiba et al. (2007) and is freely accessible. It is a 150K news-wire based corpus tagged with the traditional CoNLL coarse classes, i.e. PER, ORG, LOC and MISC. ACE produced two datasets, named ACE 2004 and ACE 2005, which are subject to a costly licence. This prevented us from using those corpora in the evaluation. However, ACE also produced a small multilingual corpus called REFLEX Entity Translation Training/DevTest (REFLEX for short), which consists of about 60K tokens with two levels of classes. This is divided according to its origin into news-wire (NW), treebank (TB) and web blogs (WL). We used both ANERcorp and the Arabic portion of REFLEX as gold-standard corpora to conduct the evaluation. Table 8 shows the tag distribution for each corpus per class and the total per token and phrase. We use (NA) to indicate non-availability in the dataset. It is clearly shown that WikiFANE Selective has a wider tag distribution compared with WikiFANE Whole .", "cite_spans": [ { "start": 110, "end": 132, "text": "Benajiba et al. (2007)", "ref_id": "BIBREF4" } ], "ref_spans": [ { "start": 819, "end": 826, "text": "Table 8", "ref_id": "TABREF8" } ], "eq_spans": [], "section": "Tags Distribution for the Gold-standard", "sec_num": "5.2.1" }, { "text": "Using a gazetteer as an external knowledge source helps to boost the performance of NER (Carreras et al., 2002) . To evaluate the gazetteer produced, we trained the classifier on one news-wire dataset at a time. Each time, we evaluated performance in the presence and absence of WikiFANE Gazet . Because the ANERcorp dataset is annotated at the coarse-grained level, we decided to map the REFLEX dataset to the same scheme used by ANERcorp. In addition, we eliminated the MISC class used by ANERcorp because there is no direct equivalent in REFLEX. Three main points arose from this experiment. First, the F-measure increased by at least 2 points for all datasets, showing the overall positive effect of the developed gazetteer. Second, the recall metric clearly improved, enabling the classifier to retrieve more NE phrases than would be possible without WikiFANE Gazet . Third, the TB sub-dataset of REFLEX showed dramatic improvement in comparison with other datasets, because the TB dataset had comparatively less noise. To elaborate further on cross-domain evaluation, we evaluated merging WikiFANE Selective , since it performed best in the previous experiment, with both ANERcorp and REFLEX. The idea behind this experiment was to understand how the classifier performs when different domains and genres are combined. The most notable finding, as shown in Table 11 , is that the recall metric shows a sharp drop in all datasets. 
However, the precision shows high scores, suggesting that the Wikipedia corpus is markedly different in character from the news-wire domain.", "cite_spans": [ { "start": 93, "end": 116, "text": "(Carreras et al., 2002)", "ref_id": "BIBREF8" } ], "ref_spans": [ { "start": 1347, "end": 1356, "text": "Table 11", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Gazetteer Evaluation", "sec_num": "5.2.2" }, { "text": "The traditional practice when learning an NE classifier is to draw the training and test datasets from a single domain. Therefore, we divided WikiFANE Whole and WikiFANE Selective into training and test sets at 80% and 20% respectively, and then trained the CRF classifier on WikiFANE Whole and WikiFANE Selective separately, with and without the injection of WikiFANE Gazet as an external knowledge source. Table 12 shows that the use of WikiFANE Gazet creates a notable improvement across datasets of at least 3 points on the F-measure. In addition, WikiFANE Selective has a slight superiority over WikiFANE Whole , suggesting that both datasets perform at a promising level of accuracy. (Table 12 : The result for within-domain evaluation.)", "cite_spans": [], "ref_spans": [ { "start": 402, "end": 410, "text": "Table 12", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Within-domain Evaluation", "sec_num": "5.2.4" }, { "text": "A promising trend in the research is towards automatically developing an annotated NE corpus that extends beyond both traditional classes and the domain of newswire, in order to create novel resources. One of the earliest of these approaches was presented by An et al. (2003) , in which the web was used to build a target corpus, using bootstrapping to build an annotated NE corpus. A further approach utilises parallel corpora to build an NE corpus automatically. This relies on the observation that once one corpus is annotated, other parallel corpora can be easily annotated using projection. Ehrmann et al. (2011) developed multilingual NE corpora for English, French, Spanish, German and Czech. Similarly, Fu et al. (2011) developed a Chinese annotated NE corpus exploiting an English aligned corpus. The difference here is that the alignment is conducted between both corpora at the word level.", "cite_spans": [ { "start": 581, "end": 602, "text": "Ehrmann et al. (2011)", "ref_id": "BIBREF11" }, { "start": 696, "end": 712, "text": "Fu et al. (2011)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "6" }, { "text": "Beyond the newswire-based corpora, Wikipedia has become more attractive for different NLP tasks. Some researchers have exploited the unrestricted accessibility of Wikipedia to establish automatically a fully annotated NE corpus at different granularities, while others focus merely on partially utilising Wikipedia to achieve specific goals, such as developing an NE gazetteer (Attia et al., 2010) or classifying Wikipedia articles into NE semantic classes (Saleh et al., 2010) . Tkatchenko et al. (2011) expanded the classification into an 18-class fine-grained taxonomy extracted from BBN. To prepare training data for use in the classification stage, a small set of seeds is constructed, as undertaken by Nadeau et al. (2006) , in which a semi-supervised bootstrapping approach was used to construct long lists of entities in different fine-grained NE classes from the web. After the list is constructed, the entities are then intersected with Wikipedia articles so as to classify each article according to its target class. 
Therefore, a set of 40 articles per fine-grained class was produced for use in training with Na\u00efve Bayes and Support Vector Machine (SVM) classifiers. Several similar features have been selected (e.g. (Saleh et al., 2010; Dakka and Cucerzan, 2008) ).", "cite_spans": [ { "start": 381, "end": 401, "text": "(Attia et al., 2010)", "ref_id": "BIBREF3" }, { "start": 461, "end": 481, "text": "(Saleh et al., 2010)", "ref_id": "BIBREF21" }, { "start": 484, "end": 508, "text": "Tkatchenko et al. (2011)", "ref_id": "BIBREF26" }, { "start": 706, "end": 726, "text": "Nadeau et al. (2006)", "ref_id": "BIBREF16" }, { "start": 1215, "end": 1235, "text": "(Saleh et al., 2010;", "ref_id": "BIBREF21" }, { "start": 1236, "end": 1261, "text": "Dakka and Cucerzan, 2008)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "6" }, { "text": "Instead of relying on machine learning, Richman and Schon (2008) defined a set of heuristics that use the assigned category links to classify articles. Phrasal patterns were specified for each semantic NE class; when a pattern matched, the article was classified accordingly, and otherwise the procedure searched the upper levels of the category hierarchy to find candidates. These articles are still classified according to traditional coarse-grained classes.", "cite_spans": [ { "start": 58, "end": 82, "text": "Richman and Schon (2008)", "ref_id": "BIBREF20" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "6" }, { "text": "Closely related to our work are attempts to build a completely annotated NE corpus free from human intervention. The first attempt to transform Wikipedia into an annotated NE corpus was made by Nothman et al. (2008) ; they assumed that many NEs are associated with Wikipedia inter-links, i.e. the hyperlinks associated with a phrase in context pointing to another article. Therefore, the procedure first identified NEs using heuristics to exploit capitalisation, and then the target articles were classified into NE semantic classes. A bootstrapping approach is then used to extract seeds from a set of 1300 articles. Two distinguishing features were extracted per article, i.e. the head noun of the category links and the head noun of the definitional sentence. The corpus produced covered 60 fine-grained classes in two layers. An alternative approach to the same dataset is presented by Tardif et al. (2009) , in which the classification relies on supervised machine learning. Like Dakka and Cucerzan (2008) , both Na\u00efve Bayes and the Support Vector Machine (SVM) have been used as statistical classifiers. A total of 2311 articles were manually annotated and a combination of structured and unstructured features extracted.", "cite_spans": [ { "start": 247, "end": 268, "text": "Nothman et al. (2008)", "ref_id": "BIBREF18" }, { "start": 947, "end": 967, "text": "Tardif et al. (2009)", "ref_id": "BIBREF25" }, { "start": 1042, "end": 1067, "text": "Dakka and Cucerzan (2008)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "6" }, { "text": "The corpus produced by Nothman et al. (2008) has been thoroughly experimented with to evaluate its performance. Three different gold-standard corpora, i.e. MUC, CoNLL and BBN, were used for comparative purposes, and separate models were built for each corpus. 
The experiments showed that, when used in conjunction with other gold-standard corpora, the Wikipedia-based corpus could raise their performance; it also performs well on non-Wikipedia texts (Nothman et al., 2009) .", "cite_spans": [ { "start": 23, "end": 44, "text": "Nothman et al. (2008)", "ref_id": "BIBREF18" }, { "start": 453, "end": 475, "text": "(Nothman et al., 2009)", "ref_id": "BIBREF19" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "6" }, { "text": "We presented a methodology to develop a large fine-grained named entity corpus and gazetteer using an automatic approach. This was built around the classification of Wikipedia documents into fine-grained NE classes. Using this methodology, we produced constantly evolving NE resources that can exploit the annual growth of Arabic Wikipedia.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "7" }, { "text": "The freely available fine-grained NE corpus and gazetteer produced are, even when used on their own, of very promising quality, and they extend the scope of research beyond traditional NE tasks.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "7" }, { "text": "The fine-grained Arabic NE corpora, i.e. WikiFANE Whole and WikiFANE Selective , are freely available at http://www.cs.bham.ac.uk/\u02dcfsa081/resources.html", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Ace (automatic content extraction) english annotation guidelines for entities, 06", "authors": [], "year": 2008, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "ACE. 2008. Ace (automatic content extraction) en- glish annotation guidelines for entities, 06. [ac- cessed 10 April 2013].", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Mapping Arabic Wikipedia into the named entities taxonomy", "authors": [ { "first": "Fahd", "middle": [], "last": "Alotaibi", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Lee", "suffix": "" } ], "year": 2012, "venue": "The COLING 2012 Organizing Committee", "volume": "", "issue": "", "pages": "43--52", "other_ids": {}, "num": null, "urls": [], "raw_text": "Fahd Alotaibi and Mark Lee. 2012. Mapping Ara- bic Wikipedia into the named entities taxonomy. In Proceedings of COLING 2012: Posters, pages 43- 52, Mumbai, India, December. The COLING 2012 Organizing Committee.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Automatic acquisition of named entity tagged corpus from world wide web", "authors": [ { "first": "Joohui", "middle": [], "last": "An", "suffix": "" }, { "first": "Seungwoo", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Gary Geunbae", "middle": [], "last": "Lee", "suffix": "" } ], "year": 2003, "venue": "Proceedings of the 41st Annual Meeting on Association for Computational Linguistics", "volume": "", "issue": "", "pages": "165--168", "other_ids": {}, "num": null, "urls": [], "raw_text": "Joohui An, Seungwoo Lee, and Gary Geunbae Lee. 2003. Automatic acquisition of named entity tagged corpus from world wide web. In Proceedings of the 41st Annual Meeting on Association for Computa- tional Linguistics, pages 165-168. 
Association for Computational Linguistics.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "An automatically built named entity lexicon for arabic", "authors": [ { "first": "Mohammed", "middle": [], "last": "Attia", "suffix": "" }, { "first": "Antonio", "middle": [], "last": "Toral", "suffix": "" }, { "first": "Lamia", "middle": [], "last": "Tounsi", "suffix": "" }, { "first": "Monica", "middle": [], "last": "Monachini", "suffix": "" }, { "first": "Josef", "middle": [], "last": "Van Genabith", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the Seventh conference on International Language Resources and Evaluation (LREC10)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mohammed Attia, Antonio Toral, Lamia Tounsi, Mon- ica Monachini, and Josef van Genabith. 2010. An automatically built named entity lexicon for ara- bic. In Proceedings of the Seventh conference on International Language Resources and Evaluation (LREC10), Valletta, Malta. European Language Re- sources Association (ELRA).", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Anersys: An arabic named entity recognition system based on maximum entropy", "authors": [ { "first": "Yassine", "middle": [], "last": "Benajiba", "suffix": "" }, { "first": "Paolo", "middle": [], "last": "Rosso", "suffix": "" }, { "first": "Jos\u00e9 Miguel", "middle": [], "last": "Bened\u00edruiz", "suffix": "" } ], "year": 2007, "venue": "Computational Linguistics and Intelligent Text Processing", "volume": "4394", "issue": "", "pages": "143--153", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yassine Benajiba, Paolo Rosso, and Jos\u00e9 Miguel Bened\u00edruiz. 2007. Anersys: An arabic named entity recognition system based on maximum entropy. In Alexander Gelbukh, editor, Computational Linguis- tics and Intelligent Text Processing, volume 4394 of Lecture Notes in Computer Science, pages 143-153.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Arabic named entity recognition: Using features extracted from noisy data", "authors": [ { "first": "Y", "middle": [], "last": "Benajiba", "suffix": "" }, { "first": "I", "middle": [], "last": "Zitouni", "suffix": "" }, { "first": "M", "middle": [], "last": "Diab", "suffix": "" }, { "first": "P", "middle": [], "last": "Rosso", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the ACL 2010 Conference Short Papers", "volume": "", "issue": "", "pages": "281--285", "other_ids": {}, "num": null, "urls": [], "raw_text": "Y. Benajiba, I. Zitouni, M. Diab, and P. Rosso. 2010. Arabic named entity recognition: Using features ex- tracted from noisy data. In Proceedings of the ACL 2010 Conference Short Papers, pages 281-285, Up- psala, Sweden. Association for Computational Lin- guistics.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Annotation guidelines for answer types. LDC2005T33 [accessed 02", "authors": [ { "first": "Ada", "middle": [], "last": "Brunstein", "suffix": "" } ], "year": 2002, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ada Brunstein. 2002. Annotation guidelines for an- swer types. LDC2005T33 [accessed 02 January 2012].", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Assessing agreement on classification tasks: the kappa statistic", "authors": [ { "first": "Jean", "middle": [], "last": "Carletta", "suffix": "" } ], "year": 1996, "venue": "Comput. 
Linguist", "volume": "22", "issue": "2", "pages": "249--254", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jean Carletta. 1996. Assessing agreement on classifi- cation tasks: the kappa statistic. Comput. Linguist., 22(2):249-254, June.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Named entity extraction using adaboost", "authors": [ { "first": "Xavier", "middle": [], "last": "Carreras", "suffix": "" }, { "first": "Lluis", "middle": [], "last": "Marquez", "suffix": "" }, { "first": "Llu\u00eds", "middle": [], "last": "Padr\u00f3", "suffix": "" } ], "year": 2002, "venue": "proceedings of the 6th conference on Natural language learning", "volume": "20", "issue": "", "pages": "1--4", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xavier Carreras, Lluis Marquez, and Llu\u00eds Padr\u00f3. 2002. Named entity extraction using adaboost. In proceedings of the 6th conference on Natural lan- guage learning-Volume 20, pages 1-4. Association for Computational Linguistics.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Augmenting wikipedia with named entity tags", "authors": [ { "first": "Wisam", "middle": [], "last": "Dakka", "suffix": "" }, { "first": "Silviu", "middle": [], "last": "Cucerzan", "suffix": "" } ], "year": 2008, "venue": "Proceedings of the 3rd International Joint Conference on Natural Language Processing", "volume": "", "issue": "", "pages": "545--552", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wisam Dakka and Silviu Cucerzan. 2008. Augment- ing wikipedia with named entity tags. In Proceed- ings of the 3rd International Joint Conference on Natural Language Processing, pages 545-552, Hy- derabad, India. Asian Federation of Natural Lan- guage Processing.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Second generation amira tools for arabic processing: Fast and robust tokenization, pos tagging, and base phrase chunking", "authors": [ { "first": "Mona", "middle": [], "last": "Diab", "suffix": "" } ], "year": 2009, "venue": "2nd International Conference on Arabic Language Resources and Tools", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mona Diab. 2009. Second generation amira tools for arabic processing: Fast and robust tokenization, pos tagging, and base phrase chunking. In 2nd Inter- national Conference on Arabic Language Resources and Tools.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Building a multilingual named entityannotated corpus using annotation projection", "authors": [ { "first": "Maud", "middle": [], "last": "Ehrmann", "suffix": "" }, { "first": "Marco", "middle": [], "last": "Turchi", "suffix": "" }, { "first": "Ralf", "middle": [], "last": "Steinberger", "suffix": "" } ], "year": 2011, "venue": "Proceedings of Recent Advances in Natural Language Processing (RANLP)", "volume": "", "issue": "", "pages": "118--124", "other_ids": {}, "num": null, "urls": [], "raw_text": "Maud Ehrmann, Marco Turchi, and Ralf Steinberger. 2011. Building a multilingual named entity- annotated corpus using annotation projection. 
In Proceedings of Recent Advances in Natural Lan- guage Processing (RANLP), pages 118-124, Hissar, Bulgaria.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Generating chinese named entity data from a parallel corpus", "authors": [ { "first": "Ruiji", "middle": [], "last": "Fu", "suffix": "" }, { "first": "Bing", "middle": [], "last": "Qin", "suffix": "" }, { "first": "Ting", "middle": [], "last": "Liu", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the 5th International Joint Conference on Natural Language Processing", "volume": "", "issue": "", "pages": "264--272", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ruiji Fu, Bing Qin, and Ting Liu. 2011. Generat- ing chinese named entity data from a parallel cor- pus. In Proceedings of the 5th International Joint Conference on Natural Language Processing, pages 264-272, Chiang Mai, Thailand. Asian Federation of Natural Language Processing (AFNLP).", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "On arabic transliteration", "authors": [ { "first": "Nizar", "middle": [], "last": "Habash", "suffix": "" }, { "first": "Abdelhadi", "middle": [], "last": "Soudi", "suffix": "" }, { "first": "Tim", "middle": [], "last": "Buckwalter", "suffix": "" } ], "year": 2007, "venue": "Arabic Computational Morphology", "volume": "38", "issue": "", "pages": "15--22", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nizar Habash, Abdelhadi Soudi, and Tim Buckwalter. 2007. On arabic transliteration. In Arabic Com- putational Morphology, volume 38 of Text, Speech and Language Technology, pages 15-22. Springer Netherlands.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Answer mining by combining extraction techniques with abductive reasoning", "authors": [ { "first": "Sanda", "middle": [], "last": "Harabagiu", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Moldovan", "suffix": "" }, { "first": "Christine", "middle": [], "last": "Clark", "suffix": "" }, { "first": "Mitchell", "middle": [], "last": "Bowden", "suffix": "" }, { "first": "John", "middle": [], "last": "Williams", "suffix": "" }, { "first": "Jeremy", "middle": [], "last": "Bensley", "suffix": "" } ], "year": 2003, "venue": "Proceedings of 12th Text Retrieval Conference", "volume": "", "issue": "", "pages": "375--382", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sanda Harabagiu, Dan Moldovan, Christine Clark, Mitchell Bowden, John Williams, and Jeremy Bens- ley. 2003. Answer mining by combining extraction techniques with abductive reasoning. In Proceed- ings of 12th Text Retrieval Conference, volume 2003, pages 375-382. NIST.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Learning question classifiers: the role of semantic information", "authors": [ { "first": "Xin", "middle": [], "last": "Li", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Roth", "suffix": "" } ], "year": 2006, "venue": "Natural Language Engineering", "volume": "12", "issue": "03", "pages": "229--249", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xin Li and Dan Roth. 2006. Learning question clas- sifiers: the role of semantic information. Natural Language Engineering, 12(03):229-249.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Unsupervised named-entity recognition: Generating gazetteers and resolving ambiguity. 
Advances in Artificial Intelligence", "authors": [ { "first": "David", "middle": [], "last": "Nadeau", "suffix": "" }, { "first": "Peter", "middle": [ "D" ], "last": "Turney", "suffix": "" }, { "first": "Stan", "middle": [], "last": "Matwin", "suffix": "" } ], "year": 2006, "venue": "", "volume": "", "issue": "", "pages": "266--277", "other_ids": {}, "num": null, "urls": [], "raw_text": "David Nadeau, Peter D. Turney, and Stan Matwin. 2006. Unsupervised named-entity recognition: Gen- erating gazetteers and resolving ambiguity. Ad- vances in Artificial Intelligence, pages 266-277.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Reducing question answering input data using named entity recognition", "authors": [ { "first": "Elisa", "middle": [], "last": "Noguera", "suffix": "" }, { "first": "Antonio", "middle": [], "last": "Toral", "suffix": "" }, { "first": "Fernando", "middle": [], "last": "Llopis", "suffix": "" }, { "first": "Rafael", "middle": [], "last": "Mu\u0144oz", "suffix": "" } ], "year": 2005, "venue": "Text, Speech and Dialogue", "volume": "", "issue": "", "pages": "428--434", "other_ids": {}, "num": null, "urls": [], "raw_text": "Elisa Noguera, Antonio Toral, Fernando Llopis, and Rafael Mu\u0144oz. 2005. Reducing question answering input data using named entity recognition. In Text, Speech and Dialogue, pages 428-434. Springer.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Transforming wikipedia into named entity training data", "authors": [ { "first": "Joel", "middle": [], "last": "Nothman", "suffix": "" }, { "first": "James", "middle": [], "last": "Curran", "suffix": "" }, { "first": "Tara", "middle": [], "last": "Murphy", "suffix": "" } ], "year": 2008, "venue": "Proceedings of the Australian Language Technology Association Workshop", "volume": "", "issue": "", "pages": "124--132", "other_ids": {}, "num": null, "urls": [], "raw_text": "Joel Nothman, James Curran, and Tara Murphy. 2008. Transforming wikipedia into named entity training data. In Proceedings of the Australian Language Technology Association Workshop, pages 124-132, Hobart, Australia. ALTA.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Analysing wikipedia and gold-standard corpora for ner training", "authors": [ { "first": "Joel", "middle": [], "last": "Nothman", "suffix": "" }, { "first": "Tara", "middle": [], "last": "Murphy", "suffix": "" }, { "first": "James", "middle": [], "last": "Curran", "suffix": "" } ], "year": 2009, "venue": "Proceedings of the 12th Conference of the European Chapter of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "612--620", "other_ids": {}, "num": null, "urls": [], "raw_text": "Joel Nothman, Tara Murphy, and James Curran. 2009. Analysing wikipedia and gold-standard corpora for ner training. In Proceedings of the 12th Confer- ence of the European Chapter of the Association for Computational Linguistics, pages 612-620, Athens, Greece. 
Association for Computational Linguistics.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Mining wiki resources for multilingual named entity recognition", "authors": [ { "first": "Alexander", "middle": [], "last": "Richman", "suffix": "" }, { "first": "Patrick", "middle": [], "last": "Schon", "suffix": "" } ], "year": 2008, "venue": "Proceedings of the 46th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies", "volume": "", "issue": "", "pages": "1--9", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alexander Richman and Patrick Schon. 2008. Mining wiki resources for multilingual named entity recog- nition. In Proceedings of the 46th Annual Meeting of the Association for Computational Linguistics: Hu- man Language Technologies, pages 1-9, Columbus, Ohio, USA. Association for Computational Linguis- tics.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Classifying wikipedia articles into ne's using svm's with threshold adjustment", "authors": [ { "first": "Iman", "middle": [], "last": "Saleh", "suffix": "" }, { "first": "Kareem", "middle": [], "last": "Darwish", "suffix": "" }, { "first": "Aly", "middle": [], "last": "Fahmy", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the 2010 Named Entities Workshop", "volume": "", "issue": "", "pages": "85--92", "other_ids": {}, "num": null, "urls": [], "raw_text": "Iman Saleh, Kareem Darwish, and Aly Fahmy. 2010. Classifying wikipedia articles into ne's using svm's with threshold adjustment. In Proceedings of the 2010 Named Entities Workshop, pages 85-92, Up- psala, Sweden. Association for Computational Lin- guistics.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Definition, dictionaries and tagger for extended named entity hierarchy", "authors": [ { "first": "Satoshi", "middle": [], "last": "Sekine", "suffix": "" }, { "first": "Chikashi", "middle": [], "last": "Nobat", "suffix": "" } ], "year": 2004, "venue": "Proceedings of the 4th International Conference on Language Resources And Evaluation", "volume": "", "issue": "", "pages": "1977--1980", "other_ids": {}, "num": null, "urls": [], "raw_text": "Satoshi Sekine and Chikashi Nobat. 2004. Definition, dictionaries and tagger for extended named entity hierarchy. In Proceedings of the 4th International Conference on Language Resources And Evaluation, pages 1977-1980, Lisbon, Portugal. ELRA.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Extended named entity hierarchy", "authors": [ { "first": "Satoshi", "middle": [], "last": "Sekine", "suffix": "" }, { "first": "Kiyoshi", "middle": [], "last": "Sudo", "suffix": "" }, { "first": "Chikashi", "middle": [], "last": "Nobata", "suffix": "" } ], "year": 2002, "venue": "Proceedings of the third International Conference on Language Resources and Evaluation", "volume": "2", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Satoshi Sekine, Kiyoshi Sudo, and Chikashi Nobata. 2002. Extended named entity hierarchy. In Proceed- ings of the third International Conference on Lan- guage Resources and Evaluation, volume 2, Las Pal- mas, Spain. 
ELRA.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Person name entity recognition for arabic", "authors": [ { "first": "Khaled", "middle": [], "last": "Shaalan", "suffix": "" }, { "first": "Hafsa", "middle": [], "last": "Raza", "suffix": "" } ], "year": 2007, "venue": "Proceedings of the 2007 Workshop on Computational Approaches to Semitic Languages: Common Issues and Resources", "volume": "", "issue": "", "pages": "17--24", "other_ids": {}, "num": null, "urls": [], "raw_text": "Khaled Shaalan and Hafsa Raza. 2007. Person name entity recognition for arabic. In Proceedings of the 2007 Workshop on Computational Approaches to Semitic Languages: Common Issues and Resources, pages 17-24, Prague, Czech Republic. Association for Computational Linguistics.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Improved text categorisation for wikipedia named entities", "authors": [ { "first": "Sam", "middle": [], "last": "Tardif", "suffix": "" }, { "first": "James", "middle": [], "last": "Curran", "suffix": "" }, { "first": "Tara", "middle": [], "last": "Murphy", "suffix": "" } ], "year": 2009, "venue": "Australasian Language Technology Association Workshop", "volume": "", "issue": "", "pages": "104--108", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sam Tardif, James Curran, and Tara Murphy. 2009. Improved text categorisation for wikipedia named entities. In Australasian Language Technology As- sociation Workshop 2009, pages 104-108, Sydney, Australia.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Classifying wikipedia entities into fine-grained classes", "authors": [ { "first": "Maksim", "middle": [], "last": "Tkatchenko", "suffix": "" }, { "first": "Alexander", "middle": [], "last": "Ulanov", "suffix": "" }, { "first": "Andrey", "middle": [], "last": "Simanovsky", "suffix": "" } ], "year": 2011, "venue": "IEEE 27th International Conference on", "volume": "", "issue": "", "pages": "212--217", "other_ids": {}, "num": null, "urls": [], "raw_text": "Maksim Tkatchenko, Alexander Ulanov, and Andrey Simanovsky. 2011. Classifying wikipedia enti- ties into fine-grained classes. In Data Engineering Workshops (ICDEW), 2011 IEEE 27th International Conference on, pages 212-217. IEEE.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "The statistic of arabic wikipedia, 05", "authors": [ { "first": "", "middle": [], "last": "Wikipedia", "suffix": "" } ], "year": 2013, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wikipedia. 2013. The statistic of arabic wikipedia, 05. [accessed 10 May 2013].", "links": null } }, "ref_entries": { "FIGREF0": { "num": null, "text": "0.79 0.78 0.78 0.79 0.78 SFUni+Bigram 0.80 0.81 0.80 0.80 0.81 0.79 FFUni 0.80 0.81 0.80 0.81 0.82 0.80 FFUni+Bigram 0.81 0.82 0.81 0.81 0.82 0.81 LFUni 0.77 0.78 0.77 0.78 0.79 0.78 LFUni+Bigram 0.79 0.80 0.79 0.79 0.80 0.79 ELFUni 0.82 0.83 0.82 0.82 0.83 0.82 ELFUni+Bigram 0.81 0.82 0.81 0.82 0.82 0.81", "type_str": "figure", "uris": null }, "TABREF0": { "type_str": "table", "num": null, "content": "", "html": null, "text": "ACE (2008) modified taxonomy. The modified or added classes are represented with italics and asterisks" }, "TABREF3": { "type_str": "table", "num": null, "content": "
", "html": null, "text": "The total number of sentences and tokens for the compiled corpora" }, "TABREF4": { "type_str": "table", "num": null, "content": "
Class ANERgazet WikiFANE_Gazet
PER 1920 30821
ORG 262 6664
LOC 1950 1424
GPE NA 20785
FAC NA 2182
VEH NA 518
WEA NA 274
PRO NA 5624
Total 4132 68355
", "html": null, "text": "compares the distribution between ANERgazet and WikiFANE Gazet . Due to the space limitation, we only present the coarse level distribution of WikiFANE Gazet . It is clearly shown that, WikiFANE Gazet has superiority in the sense of type and coverage. The gazetteer produced is freely available to the research community to use and extend 3 ." }, "TABREF5": { "type_str": "table", "num": null, "content": "", "html": null, "text": "" }, "TABREF6": { "type_str": "table", "num": null, "content": "
3 The fine-grained Arabic NE gazetteer WikiFANE_Gazet is freely available at http://www.cs.bham.ac.uk/~fsa081/resources.html
", "html": null, "text": "Inter-annotation agreement between the classified articles and the gold-standard" }, "TABREF8": { "type_str": "table", "num": null, "content": "
The distribution of the coarse-grained NE tags across different corpora
Corpus | No Gazetteer: P R F | WikiFANE_Gazet: P R F
ANERcorp | 87.13 69.27 77.18 | 87.86 72.34 79.35
REFLEX NW | 88.51 69.37 77.78 | 88.21 72.79 79.76
REFLEX TB | 79.09 70.16 74.36 | 89.20 76.61 82.43
REFLEX WL | 83.78 62.23 71.41 | 84.69 66.61 74.57
", "html": null, "text": "" }, "TABREF9": { "type_str": "table", "num": null, "content": "
5.2.3 Cross-domain Evaluation
In cross-domain evaluation, the classifier is trained on one domain and then tested on datasets drawn from different domains or genres. The aim of this experiment is to assess the effect of using WikiFANE_Whole and WikiFANE_Selective as training data compared with news-wire datasets, and thus to clarify the suitability of WikiFANE as a relatively open-domain corpus. Table 10 shows that training and testing on the same corpus (ANERcorp or REFLEX) yields the best performance. Nevertheless, there are some interesting findings. Although REFLEX is a news-wire corpus, its performance drops dramatically when it is used as training data and tested on ANERcorp; the same holds when training on ANERcorp and testing on REFLEX. This implies that, even within the news-wire domain, the current news-wire datasets generalise poorly to one another. Another interesting finding is that WikiFANE_Selective outperforms WikiFANE_Whole on every test set except ANERcorp. This is likely because WikiFANE_Selective has a greater tag density than WikiFANE_Whole, which yields more positive examples in the training data. We sketch this evaluation protocol in code below.
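To make the protocol concrete, here is a minimal Python sketch of the cross-domain grid, under the following assumptions: each corpus is available as a list of (tokens, tags) sentence pairs; train_ner and tag_sentence are hypothetical stand-ins for the actual learning classifier used in the experiments; and the scorer is a simplified token-level P/R/F rather than the exact evaluation metric applied here.

from itertools import chain

def prf(gold_seqs, pred_seqs):
    # Token-level precision/recall/F1 over non-"O" tags; assumes the
    # predicted sequences are aligned one-to-one with the gold sequences.
    gold = list(chain.from_iterable(gold_seqs))
    pred = list(chain.from_iterable(pred_seqs))
    tp = sum(1 for g, p in zip(gold, pred) if p == g != "O")
    fp = sum(1 for g, p in zip(gold, pred) if p != "O" and p != g)
    fn = sum(1 for g, p in zip(gold, pred) if g != "O" and p != g)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f

def cross_domain_grid(corpora, train_ner, tag_sentence):
    # Train on each corpus in turn and evaluate on every corpus; the
    # resulting matrix corresponds to the rows/columns of Table 10.
    results = {}
    for train_name, train_data in corpora.items():
        model = train_ner(train_data)  # fit on (tokens, tags) pairs
        for test_name, test_data in corpora.items():
            gold = [tags for _, tags in test_data]
            pred = [tag_sentence(model, tokens) for tokens, _ in test_data]
            results[(train_name, test_name)] = prf(gold, pred)
    return results

The diagonal of the resulting matrix gives the self-training scores, while the off-diagonal cells give the cross-domain scores discussed above.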
", "html": null, "text": "The comparison for using WikiFANE Gazet as external knowledge over news-wire dataset 72.34 79.35 80.60 58.38 67.71 79.31 64.92 71.40 74.23 52.55 61.54 REFLEX 73.57 50.07 59.59 88.21 72.79 79.76 89.20 76.61 82.43 84.69 66.61 74.57 WikiFANE W hole 81.53 43.10 56.39 71.43 37.84 49.47 84.11 51.21 63.66 71.43 36.50 48.31 WikiFANE Selective 88.10 37.52 52.62 86.99 42.16 56.80 86.49 51.61 64.65 84.43 37.59 52.02" }, "TABREF10": { "type_str": "table", "num": null, "content": "
Corpus P R F
ANERcorp + WikiFANE_Selective 90.40 58.21 70.81
REFLEX + WikiFANE_Selective (NW) 90.55 62.16 73.72
REFLEX + WikiFANE_Selective (TB) 86.52 62.10 72.30
REFLEX + WikiFANE_Selective (WL) 86.01 52.74 65.38
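As a quick sanity check on these figures, F is the harmonic mean of P and R. For the first row, F = 2 x 90.40 x 58.21 / (90.40 + 58.21) = 70.82, which agrees with the reported 70.81 once the rounding of P and R is taken into account.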
", "html": null, "text": "The result of cross-domain evaluation" }, "TABREF11": { "type_str": "table", "num": null, "content": "", "html": null, "text": "The result of combining WikiFANE Selective with news-wire corpora" }, "TABREF12": { "type_str": "table", "num": null, "content": "
Corpus | PER: P R F | ORG: P R F | LOC: P R F | Overall: P R F
WikiFANE_Whole (no gaz) | 93.15 85.41 89.11 | 93.69 89.34 91.46 | 83.39 66.81 74.19 | 88.51 76.18 81.88
WikiFANE_Selective (no gaz) | 92.82 85.80 89.17 | 93.41 88.83 91.06 | 81.76 72.24 76.70 | 86.92 78.62 82.56
WikiFANE_Whole | 97.35 88.61 92.78 | 97.74 93.10 95.36 | 84.58 70.37 76.83 | 91.10 79.62 84.98
WikiFANE_Selective | 96.37 88.75 92.40 | 96.12 91.73 93.87 | 82.55 75.73 78.99 | 88.77 81.86 85.18
", "html": null, "text": "" } } } }