{
"paper_id": "I13-1034",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T07:15:48.925037Z"
},
"title": "Bootstrapping Large-scale Named Entities using URL-Text Hybrid Patterns",
"authors": [
{
"first": "Chao",
"middle": [],
"last": "Zhang",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Baidu Inc.",
"location": {
"addrLine": "No. 10, Shangdi 10th Street, Haidian District",
"postCode": "100085",
"settlement": "Beijing",
"country": "China"
}
},
"email": "zhangchao01@baidu.com"
},
{
"first": "Shiqi",
"middle": [],
"last": "Zhao",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Baidu Inc.",
"location": {
"addrLine": "No. 10, Shangdi 10th Street, Haidian District",
"postCode": "100085",
"settlement": "Beijing",
"country": "China"
}
},
"email": "zhaoshiqi@baidu.com"
},
{
"first": "Haifeng",
"middle": [],
"last": "Wang",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Baidu Inc.",
"location": {
"addrLine": "No. 10, Shangdi 10th Street, Haidian District",
"postCode": "100085",
"settlement": "Beijing",
"country": "China"
}
},
"email": "wanghaifeng@baidu.com"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Automatically mining named entities (NE) is an important but challenging task, for which pattern-based methods with a bootstrapping strategy are the most widely accepted solution. In this paper, we propose a novel method for NE mining using web document titles. In addition to the traditional text patterns, we propose url-text hybrid patterns, which introduce a url criterion to better pinpoint high-quality NEs. We also design a multiclass collaborative learning mechanism for bootstrapping, in which different patterns and different classes work together to determine better patterns and NE instances. Experimental results show that the precision of NEs mined with the proposed method is 0.96 and 0.94 on Chinese and English corpora, respectively. Comparison results also show that the proposed method significantly outperforms a representative method that mines NEs from large-scale query logs.",
"pdf_parse": {
"paper_id": "I13-1034",
"_pdf_hash": "",
"abstract": [
{
"text": "Automatically mining named entities (NE) is an important but challenging task, for which pattern-based methods with a bootstrapping strategy are the most widely accepted solution. In this paper, we propose a novel method for NE mining using web document titles. In addition to the traditional text patterns, we propose url-text hybrid patterns, which introduce a url criterion to better pinpoint high-quality NEs. We also design a multiclass collaborative learning mechanism for bootstrapping, in which different patterns and different classes work together to determine better patterns and NE instances. Experimental results show that the precision of NEs mined with the proposed method is 0.96 and 0.94 on Chinese and English corpora, respectively. Comparison results also show that the proposed method significantly outperforms a representative method that mines NEs from large-scale query logs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "The task of named entity mining (NEM) aims to mine named entities (NE) of given categories from raw data. NEM is essential in many applications. For example, NEM can generate NE gazetteers necessary for the task of named entity recognition (NER) (Cohen and Sarawagi, 2004; Kazama and Torisawa, 2008; Talukdar et al., 2006). It can also help improve the search results in web search (Pa\u015fca, 2004), and increase the coverage of knowledge graphs.",
"cite_spans": [
{
"start": 246,
"end": 272,
"text": "(Cohen and Sarawagi, 2004;",
"ref_id": "BIBREF1"
},
{
"start": 273,
"end": 299,
"text": "Kazama and Torisawa, 2008;",
"ref_id": "BIBREF6"
},
{
"start": 300,
"end": 322,
"text": "Talukdar et al., 2006)",
"ref_id": "BIBREF12"
},
{
"start": 383,
"end": 396,
"text": "(Pa\u015fca, 2004)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Extensive research has been conducted on NEM, in which pattern-based methods are the most popular. Handcrafted or automatically learnt patterns are usually used to extract NE instances from various corpora, such as web documents, search engines' retrieved snippets, and query logs. A bootstrapping strategy is often applied to generate more patterns and instances iteratively, so as to improve the coverage of the system.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The method we propose also belongs to the family of pattern-based NEM. However, our method is a departure from the previous ones, and makes contributions in the following aspects: First, we design url-text hybrid patterns instead of the traditional text patterns. We take a url criterion into account, so as to measure the quality of the source webpages. Second, we propose a Multiclass Collaborative Learning (MCL) mechanism, which globally scores and ranks the patterns and NE instances within multiple classes in bootstrapping.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We evaluate our method in two languages, i.e., Chinese and English, so as to demonstrate the language-independent nature of the method. We mine NEs using the system for five categories in both languages, including star, film, TV play, song, and PC game. Experimental results show that the average precision of the extracted NEs is 96% in Chinese and 94% in English. Meanwhile, the average coverage computed against a benchmark repository is 61% and 55% for the two languages. Comparative experiments further show that our method significantly outperforms a representative conventional method.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this section, we review the previous studies on NEM from three aspects: the data resource used, the proposed methods, and particularly the bootstrapping strategy.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Various resources have been exploited for NEM. Many researchers make use of large-scale web corpora and learn NEs surrounded by certain context patterns (Pa\u015fca, 2004; Downey et al., 2007). Others mine NEs using web search engines: they submit extraction patterns as queries to search engines, and extract NEs matching the patterns from the retrieved snippets (Etzioni et al., 2005; Etzioni et al., 2004; ?; Kozareva and Hovy, 2010). There are also studies extracting NEs from structured HTML tables (Dalvi et al., 2012). Besides web documents, NEs as well as their attributes can also be mined from search engine query logs, since many users tend to search for named entities in their queries (Pa\u015fca, 2007a; Pa\u015fca, 2007b).",
"cite_spans": [
{
"start": 153,
"end": 166,
"text": "(Pa\u015fca, 2004;",
"ref_id": "BIBREF8"
},
{
"start": 167,
"end": 187,
"text": "Downey et al., 2007)",
"ref_id": "BIBREF3"
},
{
"start": 360,
"end": 382,
"text": "(Etzioni et al., 2005;",
"ref_id": "BIBREF5"
},
{
"start": 383,
"end": 404,
"text": "Etzioni et al., 2004;",
"ref_id": "BIBREF4"
},
{
"start": 405,
"end": 407,
"text": "?;",
"ref_id": null
},
{
"start": 408,
"end": 432,
"text": "Kozareva and Hovy, 2010)",
"ref_id": "BIBREF7"
},
{
"start": 496,
"end": 516,
"text": "(Dalvi et al., 2012)",
"ref_id": "BIBREF2"
},
{
"start": 691,
"end": 705,
"text": "(Pa\u015fca, 2007a;",
"ref_id": "BIBREF9"
},
{
"start": 706,
"end": 719,
"text": "Pa\u015fca, 2007b)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "As an alternative, this paper proposes to mine NEs from the titles of vertical websites, based on our observation that NEs of a class c can generally be found in webpage titles of some vertical websites of class c. Our statistics show that 99% of 10,000 random NEs appear in webpage titles. Besides, webpage titles have the advantage that they are of better quality than free-text documents, while being less noisy than user queries.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Pattern-based methods are the most popular ones in NEM (Riloff and Jones, 1999; Thelen and Riloff, 2002; Etzioni et al., 2004; Pa\u015fca, 2004; Talukdar et al., 2006; Pa\u015fca, 2007b; Wang and Cohen, 2009; Kozareva and Hovy, 2010). NE extraction patterns in previous papers can be roughly classified into two types, i.e., Hearst patterns and class-specific wrappers. Hearst patterns are named after Hearst (1992), who was among the first to design patterns, such as \"E is a C\" and \"C including E\", to extract hyponyms / hypernyms. The surface patterns were later extended to lexico-syntactic patterns (Thelen and Riloff, 2002; Pa\u015fca, 2004), so that pattern-filling instances can be identified more accurately via additional constraints.",
"cite_spans": [
{
"start": 55,
"end": 79,
"text": "(Riloff and Jones, 1999;",
"ref_id": "BIBREF11"
},
{
"start": 80,
"end": 104,
"text": "Thelen and Riloff, 2002;",
"ref_id": "BIBREF13"
},
{
"start": 105,
"end": 126,
"text": "Etzioni et al., 2004;",
"ref_id": "BIBREF4"
},
{
"start": 127,
"end": 139,
"text": "Pa\u015fca, 2004;",
"ref_id": "BIBREF8"
},
{
"start": 140,
"end": 162,
"text": "Talukdar et al., 2006;",
"ref_id": "BIBREF12"
},
{
"start": 163,
"end": 176,
"text": "Pa\u015fca, 2007b;",
"ref_id": "BIBREF10"
},
{
"start": 177,
"end": 198,
"text": "Wang and Cohen, 2009;",
"ref_id": "BIBREF15"
},
{
"start": 199,
"end": 223,
"text": "Kozareva and Hovy, 2010)",
"ref_id": "BIBREF7"
},
{
"start": 588,
"end": 613,
"text": "(Thelen and Riloff, 2002;",
"ref_id": "BIBREF13"
},
{
"start": 614,
"end": 626,
"text": "Pa\u015fca, 2004)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Hearst patterns are binary patterns containing two slots. In contrast, class-specific wrappers are unary patterns with a single slot (Pa\u015fca, 2007b; Wang and Cohen, 2008). For example, the pattern \"the film * was directed by\" is a wrapper for the film class, in which the placeholder \"*\" can be replaced by any film name. Wrappers need to be learnt for each NE class of interest. The method proposed in this paper is also a pattern-based one. However, we design a novel type of url-text hybrid pattern, which not only benefits from the conventional textual wrappers, but also takes advantage of a url constraint.",
"cite_spans": [
{
"start": 133,
"end": 147,
"text": "(Pa\u015fca, 2007b;",
"ref_id": "BIBREF10"
},
{
"start": 148,
"end": 169,
"text": "Wang and Cohen, 2008)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Most methods mentioned above are weakly supervised: a few patterns, heuristic rules or instances are fed to the system as seeds, and the system enriches patterns and NE instances iteratively. Bootstrapping is widely used in these methods (Riloff and Jones, 1999; Thelen and Riloff, 2002; Pa\u015fca, 2007b; Wang and Cohen, 2008). The bootstrapping algorithm can effectively reduce manual intervention in building the system. However, it is prone to noise brought in during iterations. We therefore design a Multiclass Collaborative Learning (MCL, detailed in Section 3.4) mechanism in this paper, which guarantees the quality of the generated new patterns and instances by introducing inter-class and intra-class scoring criteria. Named entities of a category are often organized in corresponding vertical websites, where each named entity is displayed in a single webpage. For example, it is easy to extract film NEs from IMDB 1 web titles with regular expressions.",
"cite_spans": [
{
"start": 246,
"end": 270,
"text": "(Riloff and Jones, 1999;",
"ref_id": "BIBREF11"
},
{
"start": 271,
"end": 296,
"text": "Thelen and Riloff, 2002;",
"ref_id": null
},
{
"start": 297,
"end": 310,
"text": "Pa\u015fca, 2007b;",
"ref_id": "BIBREF10"
},
{
"start": 311,
"end": 332,
"text": "Wang and Cohen, 2008)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Our method learns NE extraction patterns from web titles (text pattern) and introduces a url constraint (url pattern) to make the extraction results more precise. As described in Figure 1 , our method uses a bootstrapping strategy in pattern generation and seed extraction. We also propose a Multiclass Collaborative Learning (MCL) mechanism to filter the noise introduced during iterations.",
"cite_spans": [],
"ref_spans": [
{
"start": 177,
"end": 185,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Text patterns are widely used as wrappers in tasks like information extraction and relation extraction. To improve the accuracy of wrappers, constraints such as part-of-speech tags (Etzioni et al., 2005) and trigger words (Talukdar et al., 2006) were introduced to handle tricky cases. However, simple wrappers can also acquire high-quality NEs under specific conditions. For example, \"^(.+?)$\" is sufficient to extract person names from titles whose corresponding urls match the regular expression \"http://www.nndb.com/people/\\d+/\\d+/\". Based on this observation, we take the quality of urls into account when using wrappers: we use simple text patterns if the websites are of high quality, and more complicated text patterns if the website quality is low. We therefore design url-text hybrid patterns to guarantee the capability of the patterns from both url and text aspects.",
"cite_spans": [
{
"start": 190,
"end": 212,
"text": "(Etzioni et al., 2005)",
"ref_id": "BIBREF5"
},
{
"start": 231,
"end": 254,
"text": "(Talukdar et al., 2006)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Motivation",
"sec_num": "3.2.1"
},
{
"text": "Similar urls share the same pattern in many websites (Blanco et al., 2011) . For example, all IMDB webpages describing video information match pattern \"http://www.imdb.com/title/tt\\d+/$\". Therefore, we can take the url pattern as the identity of the website. Url patterns are globally learned using a large-scale url database. The process is as follows:",
"cite_spans": [
{
"start": 53,
"end": 74,
"text": "(Blanco et al., 2011)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "URL Patterns",
"sec_num": "3.2.2"
},
{
"text": "1. Given a url, we generate candidate url patterns by replacing, one at a time, each segment (separated by \"/\") of its non-domain part with a slot. For example, for the url \"www.AAA.com/BBB/123\", in which \"/BBB/123\" is the non-domain part, we can generate two candidate patterns: \"www.AAA.com/SLOT/123\" and \"www.AAA.com/BBB/SLOT\".",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "URL Patterns",
"sec_num": "3.2.2"
},
{
"text": "the url database. The ones with a frequency above a pre-defined threshold k are retained.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "All candidate patterns are accumulated on",
"sec_num": "2."
},
{
"text": "3. For each retained candidate pattern, we generate the final url pattern by replacing the slot with a regular expression based on the statistics of its slot fillers. For instance, for the candidate pattern \"www.AAA.com/BBB/SLOT\", if the slot can be filled with \"123\", \"234\", and \"456\", then the slot can be replaced with \"\\d+\", meaning that this slot can be filled with any number sequence. Accordingly, the final url pattern should be \"www.AAA.com/BBB/\\d+\".",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "All candidate patterns are accumulated on",
"sec_num": "2."
},
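The three steps above can be sketched as follows. This is a minimal illustration: the function names, and the fact that only the all-digits generalization from the paper's example is handled, are our own assumptions rather than the paper's implementation.

```python
def candidate_patterns(url):
    """Step 1: replace each non-domain segment of a URL with SLOT, one at a time."""
    domain, _, path = url.partition("/")
    segments = path.split("/")
    cands = []
    for i in range(len(segments)):
        slotted = segments[:i] + ["SLOT"] + segments[i + 1:]
        cands.append(domain + "/" + "/".join(slotted))
    return cands

def finalize_pattern(candidate, fillers):
    """Step 3: generalize the slot from its observed fillers.
    Only the digits-only case from the paper's example is covered here."""
    if all(f.isdigit() for f in fillers):
        return candidate.replace("SLOT", r"\d+")
    return candidate.replace("SLOT", "(.+?)")

cands = candidate_patterns("www.AAA.com/BBB/123")
# -> ['www.AAA.com/SLOT/123', 'www.AAA.com/BBB/SLOT']
final = finalize_pattern("www.AAA.com/BBB/SLOT", ["123", "234", "456"])
# -> 'www.AAA.com/BBB/\d+'
```

Step 2 (frequency thresholding over the url database) would simply count identical candidates and keep those above k.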
{
"text": "Text patterns are commonly used in NEM. Here we use a classical method to generate text patterns. Given a seed NE s in a category c, and a title t containing s, the text patterns are generated as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Text Patterns",
"sec_num": "3.2.3"
},
{
"text": "1. Segment the title t into a word sequence.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Text Patterns",
"sec_num": "3.2.3"
},
{
"text": "2. Match the seed s in t, and replace s with the slot \"(.+?)\" 2 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Text Patterns",
"sec_num": "3.2.3"
},
{
"text": "3. Generate patterns that contain the slot as well as words preceding and succeeding the slot within a pre-defined window size. Several patterns can be yielded in this way given different window sizes. We set the window size to 2, 3, 4, and 5 in our experiment.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Text Patterns",
"sec_num": "3.2.3"
},
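The three steps above can be sketched as below, with plain whitespace tokenization standing in for real word segmentation; the example title is ours, not from the paper.

```python
def text_patterns(title, seed, window_sizes=(2, 3, 4, 5)):
    """Segment the title, replace the seed with a slot, and keep context
    words within each window size; duplicate patterns collapse into one."""
    words = title.split()
    if seed not in words:
        return []
    idx = words.index(seed)
    patterns = set()
    for w in window_sizes:
        left = words[max(0, idx - w):idx]
        right = words[idx + 1:idx + 1 + w]
        patterns.add(" ".join(left + ["(.+?)"] + right))
    return sorted(patterns)

pats = text_patterns("watch the film Avatar online free here", "Avatar")
# two distinct patterns survive after deduplication across window sizes
```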
{
"text": "A url-text hybrid pattern (utp), combining both url and text patterns, is defined as a 4-tuple: utp = (up, tp, c, f), where up and tp are the url pattern and text pattern respectively, c indicates the category that utp belongs to, and f (scored by Eq.(5)) denotes the confidence of utp. We use UTP to denote a set of utps, and UTP_i to denote the UTP set of the i-th category c_i. A hybrid pattern is stricter than either a url pattern or a text pattern alone. As we will show, the NEs it extracts are of better quality and coverage.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Hybrid Patterns",
"sec_num": "3.2.4"
},
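The 4-tuple can be modeled directly; the sketch below shows how the text pattern fires only when the url pattern also matches. The IMDb-style url and title are our own illustrative example.

```python
import re
from collections import namedtuple

# The paper's 4-tuple (up, tp, c, f): url pattern, text pattern, class, confidence.
UTP = namedtuple("UTP", ["up", "tp", "c", "f"])

def extract(utp, url, title):
    """Apply a hybrid pattern: the title is only matched when the URL
    satisfies the url pattern, making extraction stricter than text alone."""
    if not re.search(utp.up, url):
        return None
    m = re.search(utp.tp, title)
    return m.group(1) if m else None

utp = UTP(up=r"http://www\.imdb\.com/title/tt\d+/$",
          tp=r"^(.+?) \(\d{4}\) - IMDb$", c="film", f=1.0)
extract(utp, "http://www.imdb.com/title/tt0499549/", "Avatar (2009) - IMDb")
# -> 'Avatar'
```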
{
"text": "As described in Algorithm 1, in the k-th iteration our method GenerateUTP generates raw patterns (r_UTP_k) from web titles (WT) using seeds (Seeds_{k-1}). Raw NE instances (r_ins_k) are then extracted by ExtractNE. SelectUTP and SelectNE output high-quality patterns and NE instances respectively; these two functions are based on the MCL mechanism described in Section 3.4. During these processes, each pattern is scored by Eq.(5) and kept if its score is above a threshold, and the instances yielded in each iteration are ranked according to Eq.(6); those ranked in the top 1/k are selected and added (with function AddSeeds) into the seed set.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bootstrapping",
"sec_num": "3.3"
},
{
"text": "We use #(ins_k) to denote the number of instances after the k-th iteration. The iterations terminate if #(ins_k)/#(ins_{k-1}) < \u03b7, where \u03b7 is a threshold. Algorithm 1: Bootstrapping for NE Mining",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bootstrapping",
"sec_num": "3.3"
},
{
"text": "Require: Seeds_0 for n categories: {S_1^0, S_2^0, ..., S_n^0}; webpage titles (WT); iteration count k = 1. Ensure: named entities for n categories {NE_1, NE_2, ..., NE_n}. 1: while the termination criterion is not met do 2: r_UTP_k = GenerateUTP(Seeds_{k-1}, WT); 3: UTP_k = SelectUTP(r_UTP_k, Seeds_{k-1}, WT); 4: r_ins_k = ExtractNE(UTP_k, WT); 5: ins_k = SelectNE(r_ins_k); 6: Seeds_k = AddSeeds(ins_k, Seeds_{k-1}); 7: k = k + 1; 8: end while 9: return ins_k.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Require:",
"sec_num": null
},
{
"text": "a threshold (\u03b7 = 1.01 in our experiments). All the extracted NEs after the last iteration are output along with their confidence scores computed according to Eq.(6). One can set a threshold w.r.t. the confidence score, so as to select high-quality named entities for certain applications.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Require:",
"sec_num": null
},
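The loop of Algorithm 1 can be skeletonized as below. GenerateUTP, ExtractNE, and the instance-scoring function are passed in as callables; only the loop wiring, the top-1/k seed selection, and the growth-ratio stopping rule are shown. This is our simplification, not the full system.

```python
def bootstrap(seeds, titles, generate_utp, extract_ne, score, eta=1.01, max_iter=10):
    """Bootstrapping skeleton: grow an instance set until the growth ratio
    #(ins_k)/#(ins_{k-1}) drops below eta (1.01 in the paper)."""
    instances = set(seeds)
    k = 1
    while k <= max_iter:
        utps = generate_utp(seeds, titles)           # raw patterns from seeds
        raw = extract_ne(utps, titles)               # raw NE instances
        ranked = sorted(raw, key=score, reverse=True)
        new_seeds = ranked[:max(1, len(ranked) // k)]  # top 1/k become seeds
        prev = len(instances)
        instances |= set(raw)
        seeds = list(set(seeds) | set(new_seeds))
        if len(instances) / max(prev, 1) < eta:      # growth stalled: stop
            break
        k += 1
    return instances
```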
{
"text": "In this section, we design a collaborative learning mechanism, which contains inter-class and intra-class scoring criteria, to better control the quality of the patterns and NE instances generated in the bootstrapping iterations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multiclass Collaborative Learning (MCL)",
"sec_num": "3.4"
},
{
"text": "If an NE of category c i can also be extracted with patterns from other categories, it is likely that it is noise, or at least is an ambiguous NE that is unsuitable to be used as a seed of c i . Likewise, if a pattern of class c i can also be generated by seeds from other categories, this pattern is obviously not a high-quality pattern for category c i . Thus, we can score the patterns and seeds of the target category with the help of the other classes, which is termed \"inter-class scoring\". The inter-class score for patterns is defined as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inter-class Scoring",
"sec_num": "3.4.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "P_1(c_i|utp) = \\frac{P(c_i) \\times P(utp|c_i)}{\\sum_j P(c_j) \\times P(utp|c_j)}",
"eq_num": "(1)"
}
],
"section": "Inter-class Scoring",
"sec_num": "3.4.1"
},
{
"text": "where:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inter-class Scoring",
"sec_num": "3.4.1"
},
{
"text": "P(c_i) = |S_i| / |\\bigcup_j S_j|,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inter-class Scoring",
"sec_num": "3.4.1"
},
{
"text": "in which S i denotes the seed set of category c i and | \u2022 |means the size of a set. During initialization, we prepare approximately the same number of seeds for each class.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inter-class Scoring",
"sec_num": "3.4.1"
},
{
"text": "P(utp|c_i) = |S_i(utp)| / |S_i|,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inter-class Scoring",
"sec_num": "3.4.1"
},
{
"text": "in which S_i(utp) denotes the set of seeds in class c_i that generate the pattern utp.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inter-class Scoring",
"sec_num": "3.4.1"
},
{
"text": "The inter-class score for instances 3 is defined as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inter-class Scoring",
"sec_num": "3.4.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "P_1(c_i|s) = \\frac{P(c_i) \\times P(s|c_i)}{\\sum_j P(c_j) \\times P(s|c_j)}",
"eq_num": "(2)"
}
],
"section": "Inter-class Scoring",
"sec_num": "3.4.1"
},
{
"text": "where: P(c_i) is defined as above, and",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inter-class Scoring",
"sec_num": "3.4.1"
},
{
"text": "P(s|c_i) = \\frac{Freq_i(s)}{\\sum_{s' \\in S_i} Freq_i(s')}",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inter-class Scoring",
"sec_num": "3.4.1"
},
{
"text": "where Freq_i(s) means the number of c_i's patterns that can extract instance s, and S_i means all instances of category c_i.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inter-class Scoring",
"sec_num": "3.4.1"
},
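Eq. (1) can be computed directly from seed-set sizes and pattern co-occurrence counts. The sketch below uses invented toy classes; Eq. (2) is analogous, with P(s|c_i) in place of P(utp|c_i).

```python
def inter_class_score(target, seed_sets, generated_by):
    """Eq. (1): P1(c_i | utp), with P(c_i) proportional to seed-set size and
    P(utp | c_i) the fraction of c_i's seeds that generate the pattern.
    generated_by[c] is the set of class-c seeds generating this pattern."""
    def joint(c):
        p_c = len(seed_sets[c]) / sum(len(s) for s in seed_sets.values())
        p_utp_c = len(generated_by.get(c, ())) / len(seed_sets[c])
        return p_c * p_utp_c
    total = sum(joint(c) for c in seed_sets)
    return joint(target) / total if total else 0.0

seed_sets = {"film": {"a", "b", "c", "d"}, "song": {"e", "f", "g", "h"}}
generated_by = {"film": {"a", "b"}, "song": {"e"}}
inter_class_score("film", seed_sets, generated_by)
# twice as many film seeds generate the pattern, so film wins 2/3 of the mass
```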
{
"text": "Besides inter-class scoring, we also design an intra-class scoring criterion. The basic hypothesis is that, if a pattern generates a lot of instances that cannot be recalled by other patterns in this class, the pattern is likely to be incorrect.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Intra-class Scoring",
"sec_num": "3.4.2"
},
{
"text": "For each class c_i, with the set of m patterns in the current iteration UTP_i = {utp_i^1, utp_i^2, ..., utp_i^m}, we compute the intra-class score for utp (say, utp is the j-th pattern utp_i^j) as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Intra-class Scoring",
"sec_num": "3.4.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "P_2(c_i|utp) = \\frac{|S_i(utp) \\cap S_i^H|}{|S_i(utp)|}",
"eq_num": "(3)"
}
],
"section": "Intra-class Scoring",
"sec_num": "3.4.2"
},
{
"text": "where S_i(utp) means the set of instances extracted by utp in class c_i, and S_i^H is a set of high-quality instances extracted with all patterns in class c_i. Here \"high-quality\" is guaranteed by discarding the instances whose frequency is lower than a threshold T.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Intra-class Scoring",
"sec_num": "3.4.2"
},
{
"text": "Likewise, intra-class scoring can also be defined for instances: the instances matching more patterns in class c i are more likely to be correct instances of this class. The intra-class score for a seed s is computed as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Intra-class Scoring",
"sec_num": "3.4.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "P_2(c_i|s) = \\frac{|UTP_i(s)|}{|UTP_i|}",
"eq_num": "(4)"
}
],
"section": "Intra-class Scoring",
"sec_num": "3.4.2"
},
{
"text": "where UTP_i(s) denotes the set of patterns in class c_i that can extract instance s, while UTP_i denotes the set of all patterns in c_i.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Intra-class Scoring",
"sec_num": "3.4.2"
},
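Eqs. (3) and (4) are simple set ratios; a direct sketch with toy sets of our own:

```python
def intra_pattern_score(extracted_by_utp, high_quality):
    """Eq. (3): fraction of a pattern's extractions that also belong to the
    class's high-quality instance set S^H_i."""
    if not extracted_by_utp:
        return 0.0
    return len(extracted_by_utp & high_quality) / len(extracted_by_utp)

def intra_instance_score(patterns_extracting_s, all_patterns):
    """Eq. (4): fraction of the class's patterns that can extract s."""
    return len(patterns_extracting_s) / len(all_patterns)
```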
{
"text": "The final score for patterns and instances linearly combines both inter-and intra-class scores as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Linearly-combined Scoring",
"sec_num": "3.4.3"
},
{
"text": "For patterns:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Linearly-combined Scoring",
"sec_num": "3.4.3"
},
{
"text": "P(c_i|utp) = \u03bb P_1(c_i|utp) + (1 \u2212 \u03bb) P_2(c_i|utp) (5) For instances: P(c_i|s) = \u03bb P_1(c_i|s) + (1 \u2212 \u03bb) P_2(c_i|s) (6)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Linearly-combined Scoring",
"sec_num": "3.4.3"
},
{
"text": "In our experiments, \u03bb is set to 0.5.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Linearly-combined Scoring",
"sec_num": "3.4.3"
},
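The interpolation in Eqs. (5) and (6) is identical for patterns and instances; a one-line sketch with the paper's \u03bb = 0.5 as default:

```python
def combined_score(p_inter, p_intra, lam=0.5):
    """Eqs. (5)/(6): linear interpolation of the inter-class score P1 and
    the intra-class score P2; the paper fixes lambda at 0.5."""
    return lam * p_inter + (1.0 - lam) * p_intra
```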
{
"text": "We evaluate our NE extraction method on five classes, i.e., star, film, TV play, song, and PC game. The reason to select these classes is that they are among the most frequently searched in search engines.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "4"
},
{
"text": "In the experiments, we mainly evaluate the proposed method on Chinese. However, we also test the effectiveness of the method on English (Section 5.3). We therefore prepare experimental data for both languages.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Data",
"sec_num": "4.1"
},
{
"text": "We run our model on approximately 9.7 billion Chinese web titles and 13 billion English web titles respectively. The Chinese web titles were collected from high-quality webpages after spam filtering and page ranking, while the English web titles were taken from all of our crawled English webpages. Note that, although the English corpus is larger than the Chinese one, it is noisier and sparser, given that there are many more English webpages (56.6%) than Chinese ones (4.5%) on the whole internet 4 . The English titles are lowercased in preprocessing.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Data",
"sec_num": "4.1"
},
{
"text": "We evaluate the methods based on precision (P), coverage (C), and the volume (V) of the extracted NEs. In particular, precision is defined as the percentage of correct NEs of a given class among the automatically extracted ones. Precision is evaluated manually: we randomly sample 100 NEs from each resulting NE set of a given class, and ask two annotators to independently annotate whether each extracted NE belongs to the target class. The samples with different annotations are then reviewed by both annotators to produce the final result.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Metrics",
"sec_num": "4.2"
},
{
"text": "Recall is more difficult to assess. Inspired by (Etzioni et al., 2004) , we evaluate coverage against benchmark NE repositories. More specifically, we select a popular website for each given category in the corresponding language. For example, we use IMDB as the benchmark NE repository for categories star, film and TV play in English. All of the websites for constructing benchmark data on both Chinese (CH) and English (EN) are summarized in Table 1 .",
"cite_spans": [
{
"start": 48,
"end": 70,
"text": "(Etzioni et al., 2004)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [
{
"start": 445,
"end": 452,
"text": "Table 1",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Metrics",
"sec_num": "4.2"
},
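Both metrics reduce to simple ratios; a sketch of their computation, with toy data of our own:

```python
def precision(judgments):
    """Precision over a manually judged sample: fraction marked correct.
    judgments is a list of 0/1 labels from the annotators."""
    return sum(judgments) / len(judgments)

def coverage(extracted, benchmark):
    """Coverage against a benchmark repository: fraction of benchmark NEs
    that appear among the automatically extracted NEs."""
    return len(benchmark & set(extracted)) / len(benchmark)
```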
{
"text": "The benchmark NEs were extracted from the websites using handcrafted patterns. Postprocessing is done for the Chinese data, including discarding films and TV plays scored by only one viewer and songs played no more than 10 times. These filtering clues are extracted from the websites along with the NEs. For the English data, NEs are limited to those beginning with English characters and consisting of only English characters and some specific symbols ( .\u2212 :, !#). All English NEs are lowercased. The last column of Table 1 shows the statistics of the benchmark data. Coverage is computed as the percentage of NEs in a given benchmark set covered by the automatically extracted NEs. Please note that the websites used for constructing benchmark data are not used in url patterns in the following experiments.",
"cite_spans": [],
"ref_spans": [
{
"start": 525,
"end": 532,
"text": "Table 1",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Class",
"sec_num": null
},
{
"text": "To extract Chinese NEs for the 5 examined classes, we first select 200 seeding NEs for each class. These seeds are randomly sampled from the top-5000 hot NEs for each class from Baidu 5 query logs. The results are shown in Table 2 . As can be seen from Table 2 , the precision of the extracted NEs is quite high, exceeding 0.92 on all five classes. On the other hand, the coverage varies across classes. In particular, the coverage on songs is very low, at only 0.12. After inspecting the extraction patterns, we found that the low coverage on songs is mainly due to the complexity of the patterns. Specifically, the titles of music websites usually contain not only the song's name but also the singer's name. For example, the title \"\u9752\u82b1\u74f7-\u5468\u6770\u4f26-\u5728\u7ebf\u8bd5\u542c,mp3\u4e0b\u8f7d-\u9177\u6211\u97f3\u4e50 (Green Flower Porcelain, Jay Chou, online audition, mp3 download, Kuwo Music)\" is from the music website \"www.kuwo.cn\", in which \"\u9752\u82b1\u74f7 (Green Flower Porcelain)\" is the song's name, while \"\u5468\u6770\u4f26 (Jay Chou)\" is the singer's name. The singer's name may seriously influence the generality of the induced text patterns.",
"cite_spans": [],
"ref_spans": [
{
"start": 223,
"end": 230,
"text": "Table 2",
"ref_id": "TABREF4"
},
{
"start": 257,
"end": 264,
"text": "Table 2",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Results on Chinese",
"sec_num": "4.3"
},
{
"text": "We further evaluate our method's coverage of hot NEs. Here an NE is deemed hot if its daily search frequency is no less than 10 according to our query logs. The fourth column (C hot ) of Table 2 shows the results. We can see that the coverage of hot NEs is evidently higher than that of random NEs for all five categories. The volume of extracted NEs for each class is listed in the last column of Table 2 . Furthermore, row 1 of Table 3 gives the percentage of the extracted Chinese NEs that fall outside the benchmark dataset, from which we can see that our method actually mines many NEs that are not covered by the benchmark data. This demonstrates the importance of extracting NEs from multiple websites.",
"cite_spans": [],
"ref_spans": [
{
"start": 400,
"end": 407,
"text": "Table 2",
"ref_id": "TABREF4"
},
{
"start": 432,
"end": 439,
"text": "Table 3",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Results on Chinese",
"sec_num": "4.3"
},
{
"text": "In this section, we compare our method with the method proposed in (Pa\u015fca, 2007b). Pa\u015fca's method is guided by a small set of seed instances for each class. The method extracts NEs from user queries in 5 steps: (1) generating query patterns that match the seed instances, (2) identifying candidate NEs using the patterns, (3) representing each candidate NE with a vector of the patterns extracting it, (4) representing each class with a vector of the patterns extracting its seeds, (5) computing the similarity between the vectors of each candidate NE and each class, and ranking the candidate NEs by this similarity. Extracting NEs from query logs is a promising direction since search queries reflect users' real needs. In our experiments, we implement Pa\u015fca's method using our query log data, which contains a total of 100 million Chinese queries from the Baidu search engine. The seeds used here are the same as in our method. We compare the two methods on precision at different numbers of extracted NEs, by annotating 100 NEs sampled from the first 500, the first 5k, and all extracted NEs respectively, as well as on coverage. The comparison results are shown in Table 4 . We can see that our method significantly outperforms Pa\u015fca's in both precision and coverage (C). In particular, the precision of NEs extracted by Pa\u015fca's method drops sharply when lower ranked NEs are examined, whereas the quality of NEs extracted by our method remains quite stable.",
"cite_spans": [
{
"start": 67,
"end": 80,
"text": "(Pa\u015fca, 2007b",
"ref_id": "BIBREF10"
}
],
"ref_spans": [
{
"start": 1160,
"end": 1167,
"text": "Table 4",
"ref_id": "TABREF7"
}
],
"eq_spans": [],
"section": "Comparison Results",
"sec_num": "4.4"
},
{
"text": "This section analyzes the influence of the experimental settings. We first present the performance when using text patterns only, then examine the contribution of inter- and intra-class scoring in MCL learning, and finally show how the performance varies with the number of iterations. Table 5 shows the P@500, P@5k, and P@all performance when only text patterns are used. The precision seems relatively good but the coverage is generally low. The precision falls rapidly as the number of selected NEs grows, except for the film category. This indicates that url patterns play an important role in our method, without which the quality of the extracted NEs cannot be guaranteed. Table 6 shows the P@500 and P@5k performance of our method when we only use intra-class or inter-class scoring in MCL learning. There is a dramatic decrease in performance in both settings, suggesting that both the inter-class and intra-class scoring criteria are necessary to guarantee the accuracy of the extracted NEs, and they should be used together. Table 7 shows the performance after 1, 3, and 5 iterations. The number of url patterns is also listed along with precision and coverage. As can be seen, the average precision only slightly drops from 0.97 to 0.96 after 5 iterations, whereas the average coverage increases significantly from 0.53 to 0.61. This is mainly because the extraction sources grow almost 3 times, from 314 url patterns to 1129 per category on average.",
"cite_spans": [],
"ref_spans": [
{
"start": 304,
"end": 311,
"text": "Table 5",
"ref_id": null
},
{
"start": 334,
"end": 341,
"text": "Table 5",
"ref_id": null
},
{
"start": 745,
"end": 752,
"text": "Table 6",
"ref_id": null
},
{
"start": 1118,
"end": 1125,
"text": "Table 7",
"ref_id": "TABREF11"
}
],
"eq_spans": [],
"section": "Analysis of Experiment Settings",
"sec_num": "5.1"
},
{
"text": "We have analyzed the erroneous NEs extracted by our method. This paragraph analyzes errors regarding precision, while the following paragraph discusses errors regarding recall. It turns out that ambiguity is a main source of errors. We find it quite common that an NE belongs to more than one class. For example, a TV play might be adapted from a novel with the same name, a biographical film might be named after its protagonist, etc. Statistics reveal that in \"www.mtime.com\", the benchmark source for Chinese films and TV plays in our work, 12.8% of the TV plays have homonymic films on the same website, while the percentage is 14% on its English counterpart \"www.imdb.com\". Our method suffers from this ambiguity problem since the homonymic NEs might yield url-text patterns belonging to other classes, and thereby bring in noisy NEs. Besides, as pointed out in Section 4.3, title complexity is the main problem that hinders NE extraction. In particular, some titles contain more than one NE, which makes it difficult to induce text patterns for a certain class from these titles. Another reason for mismatches with benchmark NEs is that some NEs have different forms in different sources. For instance, we extract the song \"New Years Project\", but the correct form in the benchmark data is \"New Year's Project\".",
"cite_spans": [],
"ref_spans": [
{
"start": 261,
"end": 268,
"text": "Table 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Error Analysis",
"sec_num": "5.2"
},
{
"text": "Our method is language-independent. This section presents the evaluation on English. The English data is described in Section 4.1. Table 8 shows the performance of our method. We can see from the table that the precision of the extracted English NEs is also high. Compared with Table 2 above, the coverage of the English NEs is lower than that of the Chinese NEs. However, the volume of the extracted NEs is almost 30 times larger. This is unsurprising, since there are many more NEs written in English than in Chinese on the internet. Given that the English corpus used in our experiments is only 1.4 times larger than the Chinese corpus, we believe that data sparseness might be a major cause of the low coverage. Likewise, from row 2 of Table 3 , our method also acquires a large proportion of English NEs that do not exist on the benchmark websites.",
"cite_spans": [],
"ref_spans": [
{
"start": 131,
"end": 138,
"text": "Table 8",
"ref_id": "TABREF12"
},
{
"start": 278,
"end": 285,
"text": "Table 2",
"ref_id": "TABREF4"
},
{
"start": 755,
"end": 762,
"text": "Table 3",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Language Adaptation",
"sec_num": "5.3"
},
{
"text": "To better understand the coverage problem, we examined the cases not extracted by our method. Since the song class has already been analyzed, we randomly sampled 1000 NEs missed by our method for the other 3 classes with low coverage on the Chinese test dataset, i.e., star, film, and TV play. We then examined whether the missed cases contain many hot NEs according to the following heuristics: (1) If a star has no picture on the IMDB page, then it should not be deemed hot. Our statistics show that 97.3% of the missed stars have no pictures. (2) If a film's duration is no longer than one hour and the number of viewers grading it on IMDB is less than 10, then the film should not be deemed hot. 90.6% of the missed films are not hot according to this criterion. (3) Similarly, if the number of reviewers grading a TV play is less than 10, then it is not hot. 69.9% of the missed TV plays are not hot accordingly. On the whole, these numbers suggest that the NEs not covered by our method are mostly unpopular ones, which may seldom be used in real applications.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Language Adaptation",
"sec_num": "5.3"
},
{
"text": "In this paper, we propose to extract NEs from web document titles using url-text hybrid patterns. A multiclass collaborative learning mechanism is introduced into the bootstrapping algorithm for better quality control. We evaluate our method on five categories popular in real applications, in both Chinese and English. The results show that the precision and coverage (against benchmark data) of the extracted NEs are 0.96 / 0.61 in Chinese, and 0.94 / 0.55 in English. Detailed analysis demonstrates that the url-text hybrid patterns are superior to conventional text wrappers, and that the multiclass collaborative learning mechanism is effective. Further comparison also shows that our method significantly outperforms a representative method that learns NEs from query logs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "6"
},
{
"text": "Our future work will proceed along two directions: improving the text-pattern induction approach and testing the method in more languages.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "6"
},
{
"text": "This work is supported by the National High-tech R&D Program of China (863 Program) under grant number 2011AA01A207. We warmly thank Prof. Jian-Yun Nie and the other anonymous reviewers for their comments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgment",
"sec_num": "7"
},
{
"text": "www.imdb.com",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "\"(.+?)\" is a regular expression used to extract arbitrary strings",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "We also use s to denote an instance generated during bootstrapping.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "http://en.wikipedia.org/wiki/Languages_used_on_the_Internet",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "www.baidu.com, the largest Chinese search engine in the world.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Highly efficient algorithms for structural clustering of large websites",
"authors": [
{
"first": "Lorenzo",
"middle": [],
"last": "Blanco",
"suffix": ""
},
{
"first": "Nilesh",
"middle": [],
"last": "Dalvi",
"suffix": ""
},
{
"first": "Ashwin",
"middle": [],
"last": "Machanavajjhala",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 20th international conference on World wide web, WWW '11",
"volume": "",
"issue": "",
"pages": "437--446",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lorenzo Blanco, Nilesh Dalvi, and Ashwin Machanavajjhala. 2011. Highly efficient algorithms for structural clustering of large websites. In Proceedings of the 20th international conference on World wide web, WWW '11, pages 437-446, New York, NY, USA. ACM.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Exploiting dictionaries in named entity extraction: combining semi-markov extraction processes and data integration methods",
"authors": [
{
"first": "William",
"middle": [
"W"
],
"last": "Cohen",
"suffix": ""
},
{
"first": "Sunita",
"middle": [],
"last": "Sarawagi",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the tenth ACM SIGKDD international conference on Knowledge discovery and data mining, KDD '04",
"volume": "",
"issue": "",
"pages": "89--98",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "William W. Cohen and Sunita Sarawagi. 2004. Exploiting dictionaries in named entity extraction: combining semi-markov extraction processes and data integration methods. In Proceedings of the tenth ACM SIGKDD international conference on Knowledge discovery and data mining, KDD '04, pages 89-98, New York, NY, USA. ACM.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Websets: extracting sets of entities from the web using unsupervised information extraction",
"authors": [
{
"first": "Bhavana",
"middle": [
"Bharat"
],
"last": "Dalvi",
"suffix": ""
},
{
"first": "William",
"middle": [
"W"
],
"last": "Cohen",
"suffix": ""
},
{
"first": "Jamie",
"middle": [],
"last": "Callan",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the fifth ACM international conference on Web search and data mining, WSDM '12",
"volume": "",
"issue": "",
"pages": "243--252",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bhavana Bharat Dalvi, William W. Cohen, and Jamie Callan. 2012. Websets: extracting sets of entities from the web using unsupervised information extraction. In Proceedings of the fifth ACM international conference on Web search and data mining, WSDM '12, pages 243-252, New York, NY, USA. ACM.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Locating complex named entities in web text",
"authors": [
{
"first": "Doug",
"middle": [],
"last": "Downey",
"suffix": ""
},
{
"first": "Matthew",
"middle": [],
"last": "Broadhead",
"suffix": ""
},
{
"first": "Oren",
"middle": [],
"last": "Etzioni",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the 20th international joint conference on Artifical intelligence, IJCAI'07",
"volume": "",
"issue": "",
"pages": "2733--2739",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Doug Downey, Matthew Broadhead, and Oren Etzioni. 2007. Locating complex named entities in web text. In Proceedings of the 20th international joint conference on Artificial intelligence, IJCAI'07, pages 2733-2739, San Francisco, CA, USA. Morgan Kaufmann Publishers Inc.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Web-scale information extraction in knowitall: (preliminary results)",
"authors": [
{
"first": "Oren",
"middle": [],
"last": "Etzioni",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Cafarella",
"suffix": ""
},
{
"first": "Doug",
"middle": [],
"last": "Downey",
"suffix": ""
},
{
"first": "Stanley",
"middle": [],
"last": "Kok",
"suffix": ""
},
{
"first": "Ana-Maria",
"middle": [],
"last": "Popescu",
"suffix": ""
},
{
"first": "Tal",
"middle": [],
"last": "Shaked",
"suffix": ""
},
{
"first": "Stephen",
"middle": [],
"last": "Soderland",
"suffix": ""
},
{
"first": "Daniel",
"middle": [
"S"
],
"last": "Weld",
"suffix": ""
},
{
"first": "Alexander",
"middle": [],
"last": "Yates",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the 13th international conference on World Wide Web, WWW '04",
"volume": "",
"issue": "",
"pages": "100--110",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Oren Etzioni, Michael Cafarella, Doug Downey, Stanley Kok, Ana-Maria Popescu, Tal Shaked, Stephen Soderland, Daniel S. Weld, and Alexander Yates. 2004. Web-scale information extraction in knowitall: (preliminary results). In Proceedings of the 13th international conference on World Wide Web, WWW '04, pages 100-110, New York, NY, USA. ACM.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Unsupervised named-entity extraction from the web: An experimental study",
"authors": [
{
"first": "Oren",
"middle": [],
"last": "Etzioni",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Cafarella",
"suffix": ""
},
{
"first": "Doug",
"middle": [],
"last": "Downey",
"suffix": ""
},
{
"first": "Ana-Maria",
"middle": [],
"last": "Popescu",
"suffix": ""
},
{
"first": "Tal",
"middle": [],
"last": "Shaked",
"suffix": ""
},
{
"first": "Stephen",
"middle": [],
"last": "Soderland",
"suffix": ""
},
{
"first": "Daniel",
"middle": [
"S"
],
"last": "Weld",
"suffix": ""
},
{
"first": "Alexander",
"middle": [],
"last": "Yates",
"suffix": ""
}
],
"year": 2005,
"venue": "Artif. Intell",
"volume": "165",
"issue": "1",
"pages": "91--134",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Oren Etzioni, Michael Cafarella, Doug Downey, Ana-Maria Popescu, Tal Shaked, Stephen Soderland, Daniel S. Weld, and Alexander Yates. 2005. Unsupervised named-entity extraction from the web: An experimental study. Artif. Intell., 165(1):91-134, June.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Inducing gazetteers for named entity recognition by largescale clustering of dependency relations",
"authors": [
{
"first": "Jun'ichi",
"middle": [],
"last": "Kazama",
"suffix": ""
},
{
"first": "Kentaro",
"middle": [],
"last": "Torisawa",
"suffix": ""
}
],
"year": 2008,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "407--415",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jun'ichi Kazama and Kentaro Torisawa. 2008. Inducing gazetteers for named entity recognition by large-scale clustering of dependency relations. In ACL, pages 407-415.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "A semi-supervised method to learn and construct taxonomies using the web",
"authors": [
{
"first": "Zornitsa",
"middle": [],
"last": "Kozareva",
"suffix": ""
},
{
"first": "Eduard",
"middle": [],
"last": "Hovy",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing, EMNLP '10",
"volume": "",
"issue": "",
"pages": "1110--1118",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zornitsa Kozareva and Eduard Hovy. 2010. A semi-supervised method to learn and construct taxonomies using the web. In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing, EMNLP '10, pages 1110-1118, Stroudsburg, PA, USA. Association for Computational Linguistics.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Acquisition of categorized named entities for web search",
"authors": [
{
"first": "Marius",
"middle": [],
"last": "Pa\u015fca",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the thirteenth ACM international conference on Information and knowledge management, CIKM '04",
"volume": "",
"issue": "",
"pages": "137--145",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marius Pa\u015fca. 2004. Acquisition of categorized named entities for web search. In Proceedings of the thirteenth ACM international conference on Information and knowledge management, CIKM '04, pages 137-145, New York, NY, USA. ACM.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Organizing and searching the world wide web of facts -step two: harnessing the wisdom of the crowds",
"authors": [
{
"first": "Marius",
"middle": [],
"last": "Pa\u015fca",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the 16th international conference on World Wide Web, WWW '07",
"volume": "",
"issue": "",
"pages": "101--110",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marius Pa\u015fca. 2007a. Organizing and searching the world wide web of facts - step two: harnessing the wisdom of the crowds. In Proceedings of the 16th international conference on World Wide Web, WWW '07, pages 101-110, New York, NY, USA. ACM.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Weakly-supervised discovery of named entities using web search queries",
"authors": [
{
"first": "Marius",
"middle": [],
"last": "Pa\u015fca",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the sixteenth ACM conference on Conference on information and knowledge management, CIKM '07",
"volume": "",
"issue": "",
"pages": "683--690",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marius Pa\u015fca. 2007b. Weakly-supervised discovery of named entities using web search queries. In Proceedings of the sixteenth ACM conference on Conference on information and knowledge management, CIKM '07, pages 683-690, New York, NY, USA. ACM.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Learning dictionaries for information extraction by multi-level bootstrapping",
"authors": [
{
"first": "Ellen",
"middle": [],
"last": "Riloff",
"suffix": ""
},
{
"first": "Rosie",
"middle": [],
"last": "Jones",
"suffix": ""
}
],
"year": 1999,
"venue": "Proceedings of the sixteenth national conference on Artificial intelligence and the eleventh Innovative applications of artificial intelligence conference innovative applications of artificial intelligence, AAAI '99/IAAI '99",
"volume": "",
"issue": "",
"pages": "474--479",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ellen Riloff and Rosie Jones. 1999. Learning dictionaries for information extraction by multi-level bootstrapping. In Proceedings of the sixteenth national conference on Artificial intelligence and the eleventh Innovative applications of artificial intelligence conference, AAAI '99/IAAI '99, pages 474-479, Menlo Park, CA, USA. American Association for Artificial Intelligence.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "A context pattern induction method for named entity extraction",
"authors": [
{
"first": "Partha",
"middle": [
"Pratim"
],
"last": "Talukdar",
"suffix": ""
},
{
"first": "Thorsten",
"middle": [],
"last": "Brants",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Liberman",
"suffix": ""
},
{
"first": "Fernando",
"middle": [],
"last": "Pereira",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the Tenth Conference on Computational Natural Language Learning, CoNLL-X 06",
"volume": "",
"issue": "",
"pages": "141--148",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Partha Pratim Talukdar, Thorsten Brants, Mark Liberman, and Fernando Pereira. 2006. A context pattern induction method for named entity extraction. In Proceedings of the Tenth Conference on Computational Natural Language Learning, CoNLL-X '06, pages 141-148, Stroudsburg, PA, USA. Association for Computational Linguistics.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "A bootstrapping method for learning semantic lexicons using extraction pattern contexts",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Thelen",
"suffix": ""
},
{
"first": "Ellen",
"middle": [],
"last": "Riloff",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the ACL-02 conference on Empirical methods in natural language processing",
"volume": "10",
"issue": "",
"pages": "214--221",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael Thelen and Ellen Riloff. 2002. A bootstrapping method for learning semantic lexicons using extraction pattern contexts. In Proceedings of the ACL-02 conference on Empirical methods in natural language processing - Volume 10, EMNLP '02, pages 214-221, Stroudsburg, PA, USA. Association for Computational Linguistics.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Iterative set expansion of named entities using the web",
"authors": [
{
"first": "Richard",
"middle": [
"C"
],
"last": "Wang",
"suffix": ""
},
{
"first": "William",
"middle": [
"W"
],
"last": "Cohen",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the 2008 Eighth IEEE International Conference on Data Mining, ICDM '08",
"volume": "",
"issue": "",
"pages": "1091--1096",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Richard C. Wang and William W. Cohen. 2008. Iterative set expansion of named entities using the web. In Proceedings of the 2008 Eighth IEEE International Conference on Data Mining, ICDM '08, pages 1091-1096, Washington, DC, USA. IEEE Computer Society.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Automatic set instance extraction using the web",
"authors": [
{
"first": "Richard",
"middle": [
"C"
],
"last": "Wang",
"suffix": ""
},
{
"first": "William",
"middle": [
"W"
],
"last": "Cohen",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP",
"volume": "1",
"issue": "",
"pages": "441--449",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Richard C. Wang and William W. Cohen. 2009. Automatic set instance extraction using the web. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP: Volume 1, ACL '09, pages 441-449, Stroudsburg, PA, USA. Association for Computational Linguistics.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"text": "Framework of named entity mining",
"num": null,
"type_str": "figure"
},
"TABREF2": {
"content": "<table/>",
"html": null,
"text": "",
"type_str": "table",
"num": null
},
"TABREF4": {
"content": "<table><tr><td>star</td><td>film</td><td colspan=\"2\">TV play song</td><td>PC game</td></tr><tr><td colspan=\"3\">CH 0.52 0.53 0.86</td><td colspan=\"2\">0.11 0.79</td></tr><tr><td colspan=\"3\">EN 0.78 0.82 0.87</td><td colspan=\"2\">0.78 0.84</td></tr></table>",
"html": null,
"text": "NEM results on the Chinese corpus",
"type_str": "table",
"num": null
},
"TABREF5": {
"content": "<table/>",
"html": null,
"text": "Percentage of NEs out of benchmark dataset",
"type_str": "table",
"num": null
},
"TABREF7": {
"content": "<table/>",
"html": null,
"text": "Comparison with Pa\u015fca (2007b)'s method",
"type_str": "table",
"num": null
},
"TABREF11": {
"content": "<table><tr><td colspan=\"2\">Category P</td><td>C</td><td>Vol</td></tr><tr><td>star</td><td colspan=\"3\">0.98 0.65 1,589,002</td></tr><tr><td>film</td><td colspan=\"3\">0.92 0.40 352,152</td></tr><tr><td>TV play</td><td colspan=\"3\">0.89 0.59 71,273</td></tr><tr><td>song</td><td colspan=\"3\">0.95 0.31 240,335</td></tr><tr><td colspan=\"4\">PC game 0.97 0.80 29,166</td></tr><tr><td>average</td><td colspan=\"3\">0.94 0.55 456,386</td></tr></table>",
"html": null,
"text": "Performance for varying number of iterations",
"type_str": "table",
"num": null
},
"TABREF12": {
"content": "<table/>",
"html": null,
"text": "",
"type_str": "table",
"num": null
}
}
}
}