{
"paper_id": "I11-1019",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T07:30:32.050423Z"
},
"title": "Keyphrase Extraction from Online News Using Binary Integer Programming",
"authors": [
{
"first": "Zhuoye",
"middle": [],
"last": "Ding",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Fudan University",
"location": {
"addrLine": "{09110240024,qz"
}
},
"email": ""
},
{
"first": "Qi",
"middle": [],
"last": "Zhang",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Fudan University",
"location": {
"addrLine": "{09110240024,qz"
}
},
"email": ""
},
{
"first": "Xuanjing",
"middle": [],
"last": "Huang",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Fudan University",
"location": {
"addrLine": "{09110240024,qz"
}
},
"email": "xjhuang@fudan.edu.cn"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "In recent years, keyphrase extraction has received great attention, and been successfully employed by various applications. Keyphrases extracted from news articles can be used to concisely represent main contents of news events. Keyphrases can help users to speed up browsing and find the desired contents more quickly. In this paper, we first present several criteria of high-quality news keyphrases. After that, in order to integrate those criteria into the keyphrase extraction task, we propose a novel formulation which converts the task to a binary integer programming problem. The formulation cannot only encode the prior knowledge as constraints, but also learn constraints from data. We evaluate the proposed approach on a manually labeled corpus. Experimental results demonstrate that our approach achieves better performances compared with the state-of-the-art methods.",
"pdf_parse": {
"paper_id": "I11-1019",
"_pdf_hash": "",
"abstract": [
{
"text": "In recent years, keyphrase extraction has received great attention, and been successfully employed by various applications. Keyphrases extracted from news articles can be used to concisely represent main contents of news events. Keyphrases can help users to speed up browsing and find the desired contents more quickly. In this paper, we first present several criteria of high-quality news keyphrases. After that, in order to integrate those criteria into the keyphrase extraction task, we propose a novel formulation which converts the task to a binary integer programming problem. The formulation cannot only encode the prior knowledge as constraints, but also learn constraints from data. We evaluate the proposed approach on a manually labeled corpus. Experimental results demonstrate that our approach achieves better performances compared with the state-of-the-art methods.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Keyphrase extraction is a long studied topic in natural language processing. A keyphrase, which consists a word or a group of words, is defined as a precise and concise expression of one or more documents. It has been widely used in various applications such as summarization, clustering, categorizing, browsing, and so on. In recent years, keyphrase extraction has received much attention Zha, 2002; Hulth, 2003; Tomokiyo and Hurst, 2003; Chen et al., 2005; Medelyan et al., 2009; Liu et al., 2009) .",
"cite_spans": [
{
"start": 390,
"end": 400,
"text": "Zha, 2002;",
"ref_id": "BIBREF21"
},
{
"start": 401,
"end": 413,
"text": "Hulth, 2003;",
"ref_id": "BIBREF4"
},
{
"start": 414,
"end": 439,
"text": "Tomokiyo and Hurst, 2003;",
"ref_id": "BIBREF14"
},
{
"start": 440,
"end": 458,
"text": "Chen et al., 2005;",
"ref_id": "BIBREF2"
},
{
"start": 459,
"end": 481,
"text": "Medelyan et al., 2009;",
"ref_id": "BIBREF10"
},
{
"start": 482,
"end": 499,
"text": "Liu et al., 2009)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Keyphrases are usually manually chosen by authors, for scientific publications, magazine articles, books, et al. Due to the expensive and time consuming effort of manually assigning keyphrase, web pages and online news rarely contain keyphrases. It should be useful to automatically extract keyphrases from online news to represent their main contents. There are already a number of studies which focus on extracting keyphrases from scientific publications or single news article Turney, 2000; Wan and Xiao, 2008; Jiang et al., 2009) . We also notice that, currently, many websites provide the service which group related news together to facilitate users' browsing. In this paper, we focus on extracting keyphrases from a group of news articles which describe the same news event by different publishers.",
"cite_spans": [
{
"start": 480,
"end": 493,
"text": "Turney, 2000;",
"ref_id": "BIBREF15"
},
{
"start": 494,
"end": 513,
"text": "Wan and Xiao, 2008;",
"ref_id": "BIBREF17"
},
{
"start": 514,
"end": 533,
"text": "Jiang et al., 2009)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Previous studies on keyphrase extraction can be roughly categorized into two groups: supervised and unsupervised. Unsupervised approaches usually select a set of candidates and use different ranking methods to select the candidates with the highest scores as keyphrases. Most of ranking methods are based on the information extracted from the document, such as TF\u2022IDF, position, syntactic relation with other words, and so on. Supervised methods convert the task into a binary classification problem, which categorizes phrases as keyphrases or non-keyphrases. Similar as other tasks applied by supervised methods, a large amount of domain dependent training data is required. When the domain is changed, the labeled corpus should also be changed. And corpus labeling is a time-consuming and tedious task.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Most of the current methods focus on judging the importance of each phrase, and individually extract phrases with the highest scores. After analyzing the human assigned keyphrases, we observe that the keyphrases of news should satisfy the following properties:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "1. Relevance. The keyphrases should be semantically relevant to the news theme. The most important ones should be selected as keyphrases.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Coverage. The keyphrases should be indicative of the whole news event. The extracted keyphrases should cover most of the aspects of the news event.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "2.",
"sec_num": null
},
{
"text": "Coherence. The keyphrases should be semantically related to each other, and logically consistent and holding together as a harmonious whole.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "3.",
"sec_num": null
},
{
"text": "Conciseness. The keyphrases should not contain keyphrases with redundant information.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "4.",
"sec_num": null
},
{
"text": "In order to automatically select keyphrases which can satisfy the above properties, in this paper, we propose a novel formulation which converts keyphrase extraction to a binary integer programming problem (BIP) (Alevras and Padberg, 2001 ). An objective function and a number of constraints which high-quality keyphrases should satisfy are specified. BIP, which is the special case of integer programming and a well-studied optimization framework, is used to efficiently search the entire space to extract keyphrases. The formulation provides a flexible framework for integrating different criteria as objective functions or constraints.",
"cite_spans": [
{
"start": 212,
"end": 238,
"text": "(Alevras and Padberg, 2001",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "4.",
"sec_num": null
},
{
"text": "The major contributions of this work can be summarized as follows: 1) We propose a novel formulation of keyphrase extraction as a binary integer programming problem; 2) Several criteria which high-quality keyphrases should satisfy are converted to the objective function and a set of constraints in order to fit the formulation; 3) Keyphrases are extracted as a set with consideration of their relationships; 4) Experimental results on the dataset consisting of 150 groups of news articles with human annotated keyphrases demonstrate that the proposed method performs better than the state-of-the-art algorithms.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "4.",
"sec_num": null
},
{
"text": "The rest of this paper is organized as follows: Section 2 reviews some related studies. We propose our approach in Section 3. In Section 4, the experimental results are shown and discussed. Finally, we conclude this paper in Section 5.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "4.",
"sec_num": null
},
{
"text": "As mentioned in the previous section, most of current studies on keyphrase extraction can be roughly divided into two categories: supervised and unsupervised approaches.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Unsupervised approaches usually select general sets of candidates and use a ranking step to select the most important candidates. For example, Mihalcea and Tarau proposed a graphbased approach called TextRank, where the graph nodes are tokens and the edges reflect cooccurrence relations between tokens in the document (Mihalcea and Tarau, 2004) . Wan and Xiao expanded TextRank by using a small number of topic-related documents to provide more knowledge, which improved results compared with standard TextRank and a tf.idf baseline (Wan and Xiao, 2008) . Tomokiyo and Hurst used pointwise KL-divergence between language models derived from the documents and a reference corpus (Tomokiyo and Hurst, 2003) . Matsuo and Ishizuka presented a statistical keyphrases extraction approach that did not make use of a reference corpus, but was based on cooccurrences of terms in a single document (Y. Matsuo and M.Ishizuka, 2004) . In this paper the proposed BIP based method can combine those unsupervised methods as assignment value in the objective function. TF\u2022IDF and locality information are used in our approach.",
"cite_spans": [
{
"start": 319,
"end": 345,
"text": "(Mihalcea and Tarau, 2004)",
"ref_id": "BIBREF11"
},
{
"start": 534,
"end": 554,
"text": "(Wan and Xiao, 2008)",
"ref_id": "BIBREF17"
},
{
"start": 679,
"end": 705,
"text": "(Tomokiyo and Hurst, 2003)",
"ref_id": "BIBREF14"
},
{
"start": 893,
"end": 921,
"text": "Matsuo and M.Ishizuka, 2004)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Supervised approaches use a corpus of training data to learn a keyphrase extraction model that is able to classify candidates as keyphrases or non-keyphrases. A well known supervised system is KEA that uses all n-grams of a certain length as candidates, and ranks them based on a Naive Bayes classifier using tf.idf and position as its features . Then Medelyan and Witten presented the improved KEA++ that selected candidates with reference to a controlled vocabulary from a thesaurus or Wikipedia (Medelyan and Witten, 2006) . \"Extractor\" was another supervised system that used stems and stemmed n-grams as candidates (Turney, 2000) . Its features are tuned using a genetic algorithm. Turney introduced a feature set based on statistical word association to ensure that the returned keyphrases set is coherent (Turney, 2003) . Experimental results showed that coherence features can significantly improve the performance and they were not domain-specific. Nguyen and Kan presented a keyphrase extrac-tion algorithm for scientific publications and introduced novel features towards scientific publications such as section information and certain morphological phenomena often found in scientific papers (T.D. Nguyen and Kan., 2007) .",
"cite_spans": [
{
"start": 498,
"end": 525,
"text": "(Medelyan and Witten, 2006)",
"ref_id": "BIBREF9"
},
{
"start": 620,
"end": 634,
"text": "(Turney, 2000)",
"ref_id": "BIBREF15"
},
{
"start": 812,
"end": 826,
"text": "(Turney, 2003)",
"ref_id": "BIBREF16"
},
{
"start": 1210,
"end": 1232,
"text": "Nguyen and Kan., 2007)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Since integer linear programming (Alevras and Padberg, 2001 ) can be used to incorporate both local features and non-local features, which are difficult to handle with traditional algorithms, it has received much attention in various NLP problems in recent years. Roth and Yih (2005) extended CRF models by applying inference procedure based on ILP to naturally and efficiently support general constraint structures. They applied their model on semantic role labeling (SRL) task. Martin et al. (2009) formulated the problem of nonprojective dependency parsing as a polynomial-sized integer linear program. Woodsend and Lapata (2010) presented a joint content selection and compression model for singledocument summarization using an integer linear programming formulation.",
"cite_spans": [
{
"start": 33,
"end": 59,
"text": "(Alevras and Padberg, 2001",
"ref_id": "BIBREF0"
},
{
"start": 264,
"end": 283,
"text": "Roth and Yih (2005)",
"ref_id": "BIBREF12"
},
{
"start": 480,
"end": 500,
"text": "Martin et al. (2009)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "The objective of keyphrase extraction is to select the most informative group of phrases, which are relevant to the news event and subject to constraints including the number of phrases, topic/aspect coverage, and coherence. Since these constraints are global, and cannot be adequately satisfied by optimizing each of them individually, our approach uses the BIP formulation, a wellstudied optimization framework, which can be efficiently solved using standard optimization tools, to extract keyphrases.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Keyphrase Extraction Using BIP",
"sec_num": "3"
},
{
"text": "Integer Linear Programming (ILP) denotes a set of constraint optimization problems which have a linear objective function, subject to linear equality and linear inequality constraints, and require the objective variables to be integers. ILP can be expressed in canonical form:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Keyphrase Extraction Using BIP",
"sec_num": "3"
},
{
"text": "maximize c T x subject to Ax \u2264 b (1) Gx = d x \u2208 Z n",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Keyphrase Extraction Using BIP",
"sec_num": "3"
},
{
"text": "Binary Integer Programming (BIP) is the special case of ILP where variables are either 0 or 1. In this paper, we treat the keyphrase extraction task as a two class labeling problem. Given a group of documents D, for each word w \u2208 D, we decide to select this word as a keyphrase (assign label \"1\" to the word), or non-keyphrase (assign label \"0\"). We use a vector of binary variables x = (x 1 , x 2 , ..., x n ) over word w i \u2208 D, to indicate whether the corresponding word should be selected or not. With the objective variables x and word w i \u2208 D, c = (c 1 , c 2 , ..., c n ) is defined as the assignment value. The variable c i gives the expected value of labeling w i as a keyphrase. The basic extraction model is shown in Eq.(2). Our goal is to find the optimal point of weights x * satisfying the constraints.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Keyphrase Extraction Using BIP",
"sec_num": "3"
},
{
"text": "maximize c T x subject to 0 \u2264 x i \u2264 1, x \u2208 Z n (2)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Keyphrase Extraction Using BIP",
"sec_num": "3"
},
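To make the formulation concrete, here is a minimal sketch of the basic extraction model in Eq.(2) using the open-source PuLP library (an assumption of this sketch; the paper itself solves the program with CPLEX via AIMMS, as noted in Section 3.3). The values in `c` and the budget `K` are illustrative:

```python
# Minimal sketch of the basic BIP extraction model (Eq. 2), assuming
# the open-source PuLP library; the paper uses CPLEX via AIMMS instead.
import pulp

c = [0.42, 0.17, 0.35, 0.08]  # illustrative assignment values c_i
K = 2                         # illustrative keyphrase budget

prob = pulp.LpProblem("keyphrase_bip", pulp.LpMaximize)
# one binary variable x_i per candidate word w_i
x = [pulp.LpVariable(f"x_{i}", cat="Binary") for i in range(len(c))]
prob += pulp.lpSum(ci * xi for ci, xi in zip(c, x))  # maximize c^T x
prob += pulp.lpSum(x) <= K                           # at most K keyphrases

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print([int(xi.value()) for xi in x])  # selects the two highest-valued words
```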
{
"text": "With the BIP formulation, objective function c T x = k c k x k denotes the expected informative scores over all the words of a solution x. Maximizing the expected scores biases the words with highest c i values as keyphrases. Various features can be considered as the values c. In this work, we use two basic features TF\u2022IDF and locality. They have also been widely used in existing keyphrase extraction methods. The objective function is given in the Eq.(3).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Objective Function",
"sec_num": "3.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "c T x, c i = \u03b1 \u2022 d\u2208D T F \u2022IDF (w i ,d) |D| +\u03b2 \u2022 \u00b5 i + \u03b3 \u2022 \u03bd i",
"eq_num": "(3)"
}
],
"section": "Objective Function",
"sec_num": "3.1"
},
{
"text": "Three parameters \u03b1,\u03b2, and \u03b3 are used to tradeoff among the different parts, |D| is the number of documents in this news group. The latter section provides detailed description of this equation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Objective Function",
"sec_num": "3.1"
},
{
"text": "TF\u2022IDF compares the frequency of a phrase in a particular document with that in general corpus. The TF\u2022IDF for word w i is computed as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "TF\u2022IDF",
"sec_num": "3.1.1"
},
{
"text": "TF\u2022IDF(w i , d) = (freq(w i , d) / |d|) \u2022 log 2 (N / df(w i )), where freq(w i , d) is the number of times w i occurs in d; df(w i ) is the number of documents containing w i in the global corpus; N is the size of the global corpus; and |d| is the length of document d.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "TF\u2022IDF",
"sec_num": "3.1.1"
},
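A direct transcription of this definition into code (a sketch; the tokenized documents `group_docs`, the global document frequencies `df`, and the corpus size `N` are assumed inputs):

```python
import math
from collections import Counter

def tf_idf(word, doc_tokens, df, N):
    # TF*IDF as defined above: (freq(w, d) / |d|) * log2(N / df(w));
    # assumes the word occurs in the global corpus (df[word] > 0).
    freq = Counter(doc_tokens)[word]
    return (freq / len(doc_tokens)) * math.log2(N / df[word])

def avg_tf_idf(word, group_docs, df, N):
    # Average TF*IDF over all news articles in the same group, as used
    # for the assignment values in Eq. (3).
    return sum(tf_idf(word, d, df, N) for d in group_docs) / len(group_docs)
```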
{
"text": "In this paper, we use the average TF\u2022IDF over all the news articles belonging to the same group. TF\u2022IDF has also been used as features by almost all the keyphrase extraction algorithms.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "TF\u2022IDF",
"sec_num": "3.1.1"
},
{
"text": "The first occurrence position of the candidate phrase is an important feature for keyphrase extraction. It has also been used by many existing methods Zha, 2002; Liu et al., 2009) . In this paper, we also incorporate the information as parts of objective function.",
"cite_spans": [
{
"start": 151,
"end": 161,
"text": "Zha, 2002;",
"ref_id": "BIBREF21"
},
{
"start": 162,
"end": 179,
"text": "Liu et al., 2009)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Locality",
"sec_num": "3.1.2"
},
{
"text": "For the words in the title of news articles, we define a bonus \u00b5 for their informative scores. It is the second component in the Eq.(3). The \u00b5 i is defined as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Locality",
"sec_num": "3.1.2"
},
{
"text": "\u00b5 i = \u00b5, w i \u2208 T 0, otherwise",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Locality",
"sec_num": "3.1.2"
},
{
"text": ", where T represents the set of all the title words. Similarly, we define \u03bd for those words which occur in the first sentences. It is the third component of the objective function. The \u03bd i is defined as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Locality",
"sec_num": "3.1.2"
},
{
"text": "\u03bd i = \u03bd, w i \u2208 F S 0, otherwise",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Locality",
"sec_num": "3.1.2"
},
{
"text": ",where F S represents the set of words which occur in the first sentences.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Locality",
"sec_num": "3.1.2"
},
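Combining the three components, the assignment value c i of Eq.(3) can be sketched as follows (the `avg_tf_idf` helper is the one sketched in Section 3.1.1; the title and first-sentence word sets are assumed inputs, and the default weights follow the paper's tuned values from Section 4.2):

```python
def assignment_value(word, group_docs, df, N, title_words, first_sent_words,
                     alpha=0.4, beta=0.3, gamma=0.3, mu=0.1, nu=0.05):
    # c_i = alpha * avg TF*IDF + beta * mu_i + gamma * nu_i  (Eq. 3);
    # default weights are the paper's tuned values.
    mu_i = mu if word in title_words else 0.0       # title bonus
    nu_i = nu if word in first_sent_words else 0.0  # first-sentence bonus
    return (alpha * avg_tf_idf(word, group_docs, df, N)
            + beta * mu_i + gamma * nu_i)
```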
{
"text": "One limitation of existing keyphrase extraction methods is that they usually separately make judgment of individual phrase instead of considering the qualities of the set of phrases as a whole. In this section, we define several constraints converted from the coverage and coherence criteria, and the number of extracted phrases.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Constraints",
"sec_num": "3.2"
},
{
"text": "From both observations we make, and the properties proposed by Liu et al.(2009) , we believe that high-quality keyphrases should cover the whole document or group of documents well. For example, if we have a document describing \"Toyota recalls Prius\" from various aspects of \"reason\", \"scope\", \"influence\" and so on., the extracted keyphrases should cover as many aspects as possible.",
"cite_spans": [
{
"start": 63,
"end": 79,
"text": "Liu et al.(2009)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Coverage",
"sec_num": "3.2.1"
},
{
"text": "In order to satisfy this criterion, topic model is used to estimate words distribution over topics. In this paper, we use latent Dirichlet allocation (LDA) (Blei et al., 2003) to do it 1 . LDA is a three-level hierarchical Bayesian model, in which each word is modeled as a finite mixture over an underlying set of topics. Each topic is, in turn, modeled as an infinite mixture over an underlying set of topic probabilities.",
"cite_spans": [
{
"start": 156,
"end": 175,
"text": "(Blei et al., 2003)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Coverage",
"sec_num": "3.2.1"
},
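The paper fits LDA with MALLET 2.0.6 (see footnote 1); as a hedged stand-in, the following sketch estimates the topic-word matrix p(w|z) with scikit-learn, whose API is an assumption of this example:

```python
# Sketch: estimate p(w|z) with scikit-learn's LDA as a stand-in for the
# MALLET 2.0.6 setup used in the paper.
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

docs = [
    "toyota recalls prius over a brake problem",
    "the recall covers hybrid models sold worldwide",
    "dealers will update the brake software for free",
]  # illustrative news group

vec = CountVectorizer(stop_words="english")
counts = vec.fit_transform(docs)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(counts)

# components_ holds unnormalized topic-word weights; normalizing each row
# yields p(w|z), i.e. the matrix G used in the coverage constraint below.
G = lda.components_ / lda.components_.sum(axis=1, keepdims=True)
```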
{
"text": "From LDA model, we can get p(w|z), which represents aspect distributions over words. It indicates which words are important to an aspect. We use matrix G to represent p(w|z). The vector g i denote the distribution over words of aspect i. The projection g T i x gives us the aspect coverage of topic i under current solution x. We want the coverage of every aspect to exceed the same threshold \u03b6. The constraint can be expressed as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Coverage",
"sec_num": "3.2.1"
},
{
"text": "G T x \u03b6",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Coverage",
"sec_num": "3.2.1"
},
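Continuing the PuLP sketch from Section 3, the coverage constraint adds one inequality per aspect (assuming the columns of `G` follow the same word order as the variables `x`):

```python
# Sketch: one coverage constraint per aspect, G^T x >= zeta elementwise;
# assumes columns of G are aligned with the candidate-word order of x.
zeta = 0.005  # the paper's tuned coverage threshold
for topic_row in G:  # topic_row is g_i, the word distribution of aspect i
    prob += pulp.lpSum(float(w) * xi for w, xi in zip(topic_row, x)) >= zeta
```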
{
"text": "According to the properties which high-quality keyphrases should satisfy, the keyphrases should be semantically related and coherent.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Coherence",
"sec_num": "3.2.2"
},
{
"text": "Turney (2003) also mentioned this issue and pointed out that incoherent keyphrases might highly impact the quality and user experience.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Coherence",
"sec_num": "3.2.2"
},
{
"text": "An intuitive method for measuring word relations is based on word cooccurrence relations within the document. It indicates that word pairs with high cooccurrence frequency should be selected together. For instance, the words \"economy\", \"unemployment\", and \"loan\" are likely to cooccur in documents about \"financial crisis\". And we are aiming to extract them together to ensure coherence property. In this paper, we use mutual information (MI) to measure the word's coherence. MI is a measure of association which quantifies the discrepancy between the dependent joint distribution and the independent individual distributions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Coherence",
"sec_num": "3.2.2"
},
{
"text": "For each word pair < w i , w j >, whose mutual information I(w i , w j ) is bigger than a pre-defined threshold \u03be, we add the following constraint:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Coherence",
"sec_num": "3.2.2"
},
{
"text": "x i \u2212 x j = 0",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Coherence",
"sec_num": "3.2.2"
},
{
"text": "It encodes the fact that keyphrases pairs with high cooccurrence frequency should be selected together.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Coherence",
"sec_num": "3.2.2"
},
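A sketch of these pairing constraints follows. The paper does not spell out its exact MI estimator, so the document-level pointwise formulation below (and the scale of its scores relative to the tuned threshold ξ = 16.5) is an assumption; `words`, `doc_token_sets`, `prob`, and `x` continue the earlier sketches:

```python
import math
from itertools import combinations

def mutual_info(wi, wj, doc_token_sets):
    # One possible document-level pointwise MI estimator (an assumption:
    # the paper does not give its exact formula or scale).
    n = len(doc_token_sets)
    p_i = sum(wi in d for d in doc_token_sets) / n
    p_j = sum(wj in d for d in doc_token_sets) / n
    p_ij = sum(wi in d and wj in d for d in doc_token_sets) / n
    return math.log2(p_ij / (p_i * p_j)) if p_ij > 0 else float("-inf")

xi_threshold = 16.5  # the paper's tuned value; its scale depends on the estimator
for (i, wi), (j, wj) in combinations(enumerate(words), 2):
    if mutual_info(wi, wj, doc_token_sets) >= xi_threshold:
        prob += x[i] - x[j] == 0  # force high-MI pairs in or out together
```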
{
"text": "According to the limitations of space or other constraints given by applications, the number of extracted phrases should also be constrained. Since we use a vector of binary variables x = (x 1 , x 2 , ..., x n ) over words w i \u2208 D, the constraint can be represented as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Number of Extracted Phrases",
"sec_num": "3.2.3"
},
{
"text": "n i=1 x i \u2264 K",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Number of Extracted Phrases",
"sec_num": "3.2.3"
},
{
"text": ",where K is the pre-defined threshold.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Number of Extracted Phrases",
"sec_num": "3.2.3"
},
{
"text": "Putting the objective function and all the constraints together, we obtain the BIP program to extract keyphrases as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "BIP Problem",
"sec_num": "3.3"
},
{
"text": "maximize c T x subject to G T x \u03b6 x i \u2212 x j = 0 , if I(w i , w j ) \u2265 \u03be n i=1 x i \u2264 K (4) x i \u2208 {0, 1} , i = 1 \u2022 \u2022 \u2022 n",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "BIP Problem",
"sec_num": "3.3"
},
{
"text": "Binary integer programming is a popular optimization technique and many effective solvers have been developed. In this paper we use CPLEX solver, which is part of AIMMS 2 system, to estimate the optimal solution from the Eq.(4).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "BIP Problem",
"sec_num": "3.3"
},
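End to end, Eq.(4) can be assembled as in the following sketch, which uses PuLP's bundled CBC solver as an open-source stand-in for the CPLEX/AIMMS toolchain named above; the inputs `c`, `G`, and `high_mi_pairs` are assumed to come from the preceding steps:

```python
import pulp

def extract_keyphrases(words, c, G, high_mi_pairs, K=6, zeta=0.005):
    # Sketch of the full BIP of Eq. (4); CBC stands in for CPLEX/AIMMS.
    # c: assignment values (Eq. 3); G: p(w|z) rows aligned with `words`;
    # high_mi_pairs: index pairs (i, j) with I(w_i, w_j) >= xi.
    prob = pulp.LpProblem("keyphrase_extraction", pulp.LpMaximize)
    x = [pulp.LpVariable(f"x_{i}", cat="Binary") for i in range(len(words))]
    prob += pulp.lpSum(ci * xi for ci, xi in zip(c, x))             # objective
    for g in G:                                                     # coverage
        prob += pulp.lpSum(float(gw) * xi for gw, xi in zip(g, x)) >= zeta
    for i, j in high_mi_pairs:                                      # coherence
        prob += x[i] - x[j] == 0
    prob += pulp.lpSum(x) <= K                                      # budget
    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    return [w for w, xi in zip(words, x) if xi.value() == 1]
```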
{
"text": "In this section, we perform evaluations of the proposed method. The data sets we used in the experiments are described in the first part. After that, experimental results are given and detailedly described in the following sections.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "There are almost no publicly available datasets with manually annotated gold standard keyphrases for news, due to the high expense of labor and time for manual annotation. In this experiment, we randomly selected 150 groups of online news articles from Goolge News. Three annotators participated in the annotation task. They were asked to manually assign keyphrases for each group of news. The keyphrases which at least two annotators have agreed on are selected as the \"Golden\" ones. Statistics on the dataset are shown in Table 1 . The corpus data is divided into development set and test set. The development set, which contains 50 groups of news, is used to tune the parameters. The other 100 groups of news are used as test set.",
"cite_spans": [],
"ref_spans": [
{
"start": 524,
"end": 532,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Dataset and Evaluation Metric",
"sec_num": "4.1"
},
{
"text": "We regard an extracted keyphrase as \"correct\" if it matches one of the ground truth. We measure the 2 http://www.aimms.com/ Description value # News articles 1103 # Words 345K # News articles per group 7.35 # Labeled keyphrases per group 5.83 Table 1 : Statistics on the dataset performance by Precision (the percentage of correct extracted keyphrases out of all the extracted ones), Recall (the percentage of correct extracted keyphrases out of the ground truth) and F-Measure (the harmonic mean of the precision and recall).",
"cite_spans": [],
"ref_spans": [
{
"start": 243,
"end": 250,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Dataset and Evaluation Metric",
"sec_num": "4.1"
},
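The three metrics reduce to a few lines of code (a sketch, using exact match against the gold set as described above):

```python
def precision_recall_f1(extracted, gold):
    # Exact-match evaluation against the manually labeled "golden" set.
    correct = len(set(extracted) & set(gold))
    p = correct / len(extracted) if extracted else 0.0
    r = correct / len(gold) if gold else 0.0
    f1 = 2 * p * r / (p + r) if p + r > 0 else 0.0
    return p, r, f1
```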
{
"text": "Since the dataset used in this paper is manually labeled by ourselves, we implement three baseline methods on the same dataset for comparison.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Comparisons with Other Methods",
"sec_num": "4.2"
},
{
"text": "BL-1: The titles of news articles provide a reasonable summary or keyphrase sequence. So baseline 1 is performed based on the titles of news articles. We sort the phrases in multi-news titles according to the TF\u2022IDF scores and select top-k as keyphrases. We assign K to 6 after tuning the parameter.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Comparisons with Other Methods",
"sec_num": "4.2"
},
{
"text": "BL-2: Many existing methods converted the keyphrase extraction as a classification problem. In this paper, we used SVM 3 as baseline 2. The features include TF\u2022IDF, \"First occurrence\", and \"Is in title or not\". Those feature sets are similar to our objective function. We divided the dataset into five subsets and conducted a 5-fold crossvalidation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Comparisons with Other Methods",
"sec_num": "4.2"
},
{
"text": "BL-3: We re-implemented the ranking approach proposed by Jiang et al. (2009) as baseline 3. This method employed Ranking SVM (Joachims, 2006) , the learning to rank method, to perform keyphrase extraction. Feature sets are the same as the feature sets used in the BL-2. We also conducted a 5-fold cross-validation.",
"cite_spans": [
{
"start": 57,
"end": 76,
"text": "Jiang et al. (2009)",
"ref_id": "BIBREF5"
},
{
"start": 125,
"end": 141,
"text": "(Joachims, 2006)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Comparisons with Other Methods",
"sec_num": "4.2"
},
{
"text": "We used the following default values for the parameters of our method: \u03b1 = 0.4, \u03b2 = 0.3, \u03b3 = 0.3, \u00b5 = 0.1, \u03bd = 0.05, \u03b6 = 0.005, \u03be = 16.5, and K = 6. The meaning of these parameters are described in the previous section. And how to learning the optimal values will be discussed in section 4.4. The test set is used in this experiment. Since the average number of manual-",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Comparisons with Other Methods",
"sec_num": "4.2"
},
{
"text": "3 SV M light",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Comparisons with Other Methods",
"sec_num": "4.2"
},
{
"text": "is used in our experiments, which can be downloaded from http://www.cs.cornell.edu/People/tj/svm light. labeled keyphrases is six, we selected the top 6 ones as keyphrases in all three baseline methods. Figure 1 shows the performance comparison of BIP-based method with baselines. From the figure, we have the following observations. Firstly, BIP-based method consistantly outperforms all baselines under all evaluation metrics -Precision, Recall, and F1-Score. This indicates the robustness and effectiveness of our method. Furthermore, compared with the supervised methods, BIP-based method does not need any labeled corpus. Secondly, Ranking SVM performs slightly better than SVM. This is congruence with the previous conclusion given by Jiang et al. (2009) . However, the improvement of BL-3 over BL-2 is not significant. We also observe that the performances of BL-1 are quite good. The precision, recall, and F1-score achieved by it are comparable with results of SVM and Ranking SVM.",
"cite_spans": [
{
"start": 741,
"end": 760,
"text": "Jiang et al. (2009)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [
{
"start": 203,
"end": 211,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Comparisons with Other Methods",
"sec_num": "4.2"
},
{
"text": "To determine the contribution of different components of objective function and individual constraints, we omit components and constraints one by one to identify its contribution to the performance. Table 2 shows the results on development set. The first row represents the performance of the BIP-based method with all constraints and objective function with all three components. The parameters are default ones listed in the previous section.",
"cite_spans": [],
"ref_spans": [
{
"start": 199,
"end": 206,
"text": "Table 2",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Contribution of Constraints and Objective Function",
"sec_num": "4.3"
},
{
"text": "The contribution of different components in the objective function is shown from the second row to fourth row. From the results we can observe that, TF\u2022IDF is the most important feature in the objective function. Without the feature of TF\u2022IDF, the evaluation metrics drop sharply from 72.68% to 59.82%. The candidate occurs in the title is also an important feature. It is consistent with the observations given by the results of BL-1. It gives about 17.27% relative improvement over the performance without it. Compared with the two features, the occurrence in the first sentence gives less contribution. The improvement given by it is not significant.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Contribution of Constraints and Objective Function",
"sec_num": "4.3"
},
{
"text": "The fifth row shows the results without the coverage constraint. From the result, we observe that the coverage constraint is effective, which can give more than 4.2% relative improvement. The contribution of coherence constraint is shown in the end of the table. Althoug its contribution is less than that of coverage constraint, an outlier keyphrase may highly impact the user experience. Coherence constraint is important to improve user experience.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Contribution of Constraints and Objective Function",
"sec_num": "4.3"
},
{
"text": "As we mentioned in the previous section, there are eight parameters which should be adjusted in our method. One may concern the problem of parameters tuning. In order to answer this question, in this section, we explore the impact of different parameters on our approach's performance in the development set. Except the parameter under investigation, the other parameters are set to the default values which are listed in the Section 4.2. Figure 2 presents the performance varying the number of keyphrases, which ranges from 1 to 10. K is one of the most important arguments leading to the trade-off between precision and recall. Larger K increases recall but decreases precision. From this figure, we observe that the best result is achieved at the point K = 6, which is similar to the average number of manually selected keyphrases. We also observe that the F1-Score drops quickly when K is bigger than 7. The main reason is that only a small number of phrases which should be selected are ranked after the top 10. ",
"cite_spans": [],
"ref_spans": [
{
"start": 439,
"end": 447,
"text": "Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Varying Parameters",
"sec_num": "4.4"
},
{
"text": "In the objective function, there are three parameters \u03b1,\u03b2, and \u03b3 , which are used to trade off among TF\u2022IDF and two locality features. Figure 3 gives the F1-Score surface varying \u03b1 and \u03b2. Since \u03b1 + \u03b2 + \u03b3 equals to 1, \u03b1 and \u03b2 are used as xaxis and y-axis in the figure. We have found that the surfaces are almost concave around a number of areas. Therefore, a simple hill-climbing search can be used to optimize F1-Score. Since the surface is almost concave, the global maximum can be easily achieved though a few initial seeds. For example, the optimal parameters for this experiment are \u03b1 = 0.4, \u03b2 = 0.3. The \u03b3 can be calculated through function 1 \u2212 \u03b1 \u2212 \u03b2.",
"cite_spans": [],
"ref_spans": [
{
"start": 135,
"end": 143,
"text": "Figure 3",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "\u03b1, \u03b2, \u03b3 in the Objective Function",
"sec_num": "4.4.2"
},
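The hill-climbing search mentioned here can be sketched as follows; the `evaluate` callback, assumed to run extraction on the development set and return F1 for a given (alpha, beta) with gamma = 1 - alpha - beta, is a hypothetical input:

```python
def hill_climb(evaluate, alpha=0.3, beta=0.3, step=0.05, max_iters=100):
    # Greedy local search over (alpha, beta); gamma = 1 - alpha - beta.
    # `evaluate(alpha, beta)` is an assumed callback returning dev-set F1.
    best = evaluate(alpha, beta)
    for _ in range(max_iters):
        neighbors = [(alpha + da, beta + db)
                     for da in (-step, 0.0, step) for db in (-step, 0.0, step)
                     if (da, db) != (0.0, 0.0)]
        feasible = [(a, b) for a, b in neighbors
                    if a >= 0 and b >= 0 and a + b <= 1]
        score, a, b = max((evaluate(a, b), a, b) for a, b in feasible)
        if score <= best:
            break  # local (here, near-global) maximum reached
        best, alpha, beta = score, a, b
    return alpha, beta, 1 - alpha - beta
```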
{
"text": "Coverage threshold \u03b6 represents the property that the extracted keyphrases should cover most of the important aspects of a news event. We want the aspect coverage for all topics to exceed the threshold. Table 3 shows the results when \u03b6 ranges from 0.001 to 0.009. All of them perform better than the result without coverage constraint, and the best result is achieved at \u03b6 = 0.005. From the results, we observe that the coverage threshold \u03b6 can also be easily selected. From 0.005 to 0.008, the changes of F1-score are not significant. When the coverage threshold is above 0.02, in order to get the solution of the ILP program, the impact of objective function would be limited. We think that it is the main reason of why best result is achieved at a small value threshold.",
"cite_spans": [],
"ref_spans": [
{
"start": 203,
"end": 210,
"text": "Table 3",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Coverage threshold \u03b6",
"sec_num": "4.4.3"
},
{
"text": "Finally, we explore the inference of \u03be, which is used to represent the word coherence. When the threshold is below 12, there would be too many coherence constraints. More than 30.05% word pairs can satisfy the threshold. Under this condition, no solution can be estimated in some cases. When the threshold is above 32, there are rarely word pairs satisfying the threshold. In other words, there would be no coherence constraints. In table 4 we show the influence of \u03be, which ranges from 15 to 18. Similar to the results of coverage threshold, a large range of \u03be's value can achieve satisfactory result. \u03be = 16.5 achieves the best result 72.53%. Table 5 shows examples of extracted keyphrases by different methods from a group of news articles about \"Master Kong applies for TDR listing in Taiwan\". Top 6 extracted keyphrases for each method are shown in the table, and the correct ones are marked with \"(+)\". From table 5, we observe that keyphrases extracted through BIP-based method are relevant, coherent, with good coverage. Without the coverage constraint, \"Taiwan Depositary Receipt\" and it's abbreviation \"TDR\" are both selected. And, the topic coverage cannot be well satisfied through the top keyphrases. For SVM and Ranking SVM, they separately consider each word, some of the high frequency words are selected as keyphrases, such as \"billion\", and \"issue\". However, those words are not meaningful.",
"cite_spans": [],
"ref_spans": [
{
"start": 645,
"end": 652,
"text": "Table 5",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Coherence threshold \u03be",
"sec_num": "4.4.4"
},
{
"text": "In this paper, we have presented a novel keyphrase extraction approach. It adapts the integer linear programming methods to the keyphrase extraction problem by casting features and criteria as objective function and constraints.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "5"
},
{
"text": "By integrating TF\u2022IDF and two locality features as objective function, and the coverage and coherence properties as constraints, the proposed ILPbased unsupervised approach achieves better performance than the state-of-the-art supervised approaches, SVM and Ranking SVM. Contributions of constraints and different components of the object function are experimental evaluated. In the objective function, the TF\u2022IDF is the most important feature. Locality features can further improve the performance. Results also demonstrate that both the coverage and coherence constraints are useful to keyphrase extraction task. We also detail the impact of parameters used in our approach. Through experimental results, we demon- strate that the parameters are not sensitive. The value of them can be easily estimated using simple hill-climbing search methods.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "5"
},
{
"text": "We use MALLET 2.0.6 in the experiments",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "The author wishes to thank the anonymous reviewers for their helpful comments. This work was partially funded by 973 Program 2010CB327906 ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Linear Optimization and Extensions: Problems and Solutions",
"authors": [
{
"first": "Dimitris",
"middle": [],
"last": "Alevras",
"suffix": ""
},
{
"first": "Manfred",
"middle": [
"W"
],
"last": "Padberg",
"suffix": ""
}
],
"year": 2001,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dimitris Alevras and Manfred W. Padberg. 2001. Lin- ear Optimization and Extensions: Problems and So- lutions. Springer.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Latent dirichlet allocation",
"authors": [
{
"first": "David",
"middle": [
"M"
],
"last": "Blei",
"suffix": ""
},
{
"first": "Andrew",
"middle": [
"Y"
],
"last": "Ng",
"suffix": ""
},
{
"first": "Michael",
"middle": [
"I"
],
"last": "Jordan",
"suffix": ""
}
],
"year": 2003,
"venue": "J. Mach. Learn. Res",
"volume": "3",
"issue": "",
"pages": "993--1022",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David M. Blei, Andrew Y. Ng, and Michael I. Jordan. 2003. Latent dirichlet allocation. J. Mach. Learn. Res., 3:993-1022, March.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "A practical system of keyphrase extraction for web pages",
"authors": [
{
"first": "Mo",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Jian-Tao",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Hua-Jun",
"middle": [],
"last": "Zeng",
"suffix": ""
},
{
"first": "Kwok-Yan",
"middle": [],
"last": "Lam",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the 14th ACM international conference on Information and knowledge management, CIKM '05",
"volume": "",
"issue": "",
"pages": "277--278",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mo Chen, Jian-Tao Sun, Hua-Jun Zeng, and Kwok-Yan Lam. 2005. A practical system of keyphrase extrac- tion for web pages. In Proceedings of the 14th ACM international conference on Information and knowl- edge management, CIKM '05, pages 277-278, New York, NY, USA. ACM.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Domain-specific keyphrase extraction",
"authors": [
{
"first": "Eibe",
"middle": [],
"last": "Frank",
"suffix": ""
},
{
"first": "Gordon",
"middle": [
"W"
],
"last": "Paynter",
"suffix": ""
},
{
"first": "Ian",
"middle": [
"H"
],
"last": "Witten",
"suffix": ""
},
{
"first": "Carl",
"middle": [],
"last": "Gutwin",
"suffix": ""
},
{
"first": "Craig",
"middle": [
"G"
],
"last": "Nevill-Manning",
"suffix": ""
}
],
"year": 1999,
"venue": "Proceedings of the 16th international joint conference on Artificial intelligence",
"volume": "2",
"issue": "",
"pages": "668--673",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eibe Frank, Gordon W. Paynter, Ian H. Witten, Carl Gutwin, and Craig G. Nevill-Manning. 1999. Domain-specific keyphrase extraction. In Proceed- ings of the 16th international joint conference on Ar- tificial intelligence -Volume 2, pages 668-673, San Francisco, CA, USA. Morgan Kaufmann Publishers Inc.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Improved automatic keyword extraction given more linguistic knowledge",
"authors": [
{
"first": "Anette",
"middle": [],
"last": "Hulth",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of the 2003 conference on Empirical methods in natural language processing",
"volume": "10",
"issue": "",
"pages": "216--223",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Anette Hulth. 2003. Improved automatic keyword ex- traction given more linguistic knowledge. In Pro- ceedings of the 2003 conference on Empirical meth- ods in natural language processing -Volume 10, pages 216-223, Morristown, NJ, USA. Association for Computational Linguistics.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "A ranking approach to keyphrase extraction",
"authors": [
{
"first": "Xin",
"middle": [],
"last": "Jiang",
"suffix": ""
},
{
"first": "Yunhua",
"middle": [],
"last": "Hu",
"suffix": ""
},
{
"first": "Hang",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the 32nd international ACM SIGIR conference on Research and development in information retrieval, SIGIR '09",
"volume": "",
"issue": "",
"pages": "756--757",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xin Jiang, Yunhua Hu, and Hang Li. 2009. A ranking approach to keyphrase extraction. In Proceedings of the 32nd international ACM SIGIR conference on Research and development in information retrieval, SIGIR '09, pages 756-757, New York, NY, USA. ACM.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Training linear svms in linear time",
"authors": [
{
"first": "Thorsten",
"middle": [],
"last": "Joachims",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the 12th ACM SIGKDD international conference on Knowledge discovery and data mining, KDD '06",
"volume": "",
"issue": "",
"pages": "217--226",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thorsten Joachims. 2006. Training linear svms in lin- ear time. In Proceedings of the 12th ACM SIGKDD international conference on Knowledge discovery and data mining, KDD '06, pages 217-226, New York, NY, USA. ACM.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Clustering to find exemplar terms for keyphrase extraction",
"authors": [
{
"first": "Zhiyuan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Peng",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Yabin",
"middle": [],
"last": "Zheng",
"suffix": ""
},
{
"first": "Maosong",
"middle": [],
"last": "Sun",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing",
"volume": "1",
"issue": "",
"pages": "257--266",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhiyuan Liu, Peng Li, Yabin Zheng, and Maosong Sun. 2009. Clustering to find exemplar terms for keyphrase extraction. In Proceedings of the 2009 Conference on Empirical Methods in Natural Lan- guage Processing: Volume 1 -Volume 1, EMNLP '09, pages 257-266, Morristown, NJ, USA. Associ- ation for Computational Linguistics.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Concise integer linear programming formulations for dependency parsing",
"authors": [
{
"first": "F",
"middle": [
"T"
],
"last": "Andr\u00e9",
"suffix": ""
},
{
"first": "Noah",
"middle": [
"A"
],
"last": "Martins",
"suffix": ""
},
{
"first": "Eric",
"middle": [
"P"
],
"last": "Smith",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Xing",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP",
"volume": "1",
"issue": "",
"pages": "342--350",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andr\u00e9 F. T. Martins, Noah A. Smith, and Eric P. Xing. 2009. Concise integer linear programming formula- tions for dependency parsing. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP: Vol- ume 1 -Volume 1, ACL-IJCNLP '09, pages 342- 350, Morristown, NJ, USA. Association for Compu- tational Linguistics.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Thesaurus based automatic keyphrase indexing",
"authors": [
{
"first": "Olena",
"middle": [],
"last": "Medelyan",
"suffix": ""
},
{
"first": "Ian",
"middle": [
"H"
],
"last": "Witten",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the 6th ACM/IEEE-CS joint conference on Digital libraries, JCDL '06",
"volume": "",
"issue": "",
"pages": "296--297",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Olena Medelyan and Ian H. Witten. 2006. Thesaurus based automatic keyphrase indexing. In Proceed- ings of the 6th ACM/IEEE-CS joint conference on Digital libraries, JCDL '06, pages 296-297, New York, NY, USA. ACM.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Human-competitive tagging using automatic keyphrase extraction",
"authors": [
{
"first": "Olena",
"middle": [],
"last": "Medelyan",
"suffix": ""
},
{
"first": "Eibe",
"middle": [],
"last": "Frank",
"suffix": ""
},
{
"first": "Ian",
"middle": [
"H"
],
"last": "Witten",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing",
"volume": "3",
"issue": "",
"pages": "1318--1327",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Olena Medelyan, Eibe Frank, and Ian H. Witten. 2009. Human-competitive tagging using automatic keyphrase extraction. In Proceedings of the 2009 Conference on Empirical Methods in Natural Lan- guage Processing: Volume 3 -Volume 3, EMNLP '09, pages 1318-1327, Morristown, NJ, USA. As- sociation for Computational Linguistics.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Textrank: Bringing order into texts",
"authors": [
{
"first": "Rada",
"middle": [],
"last": "Mihalcea",
"suffix": ""
},
{
"first": "Paul",
"middle": [],
"last": "Tarau",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of EMNLP 2004",
"volume": "",
"issue": "",
"pages": "404--411",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rada Mihalcea and Paul Tarau. 2004. Textrank: Bringing order into texts. In Dekang Lin and Dekai Wu, editors, Proceedings of EMNLP 2004, pages 404-411, Barcelona, Spain, July. Association for Computational Linguistics.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Integer linear programming inference for conditional random fields",
"authors": [
{
"first": "Dan",
"middle": [],
"last": "Roth",
"suffix": ""
},
{
"first": "Wen-Tau",
"middle": [],
"last": "Yih",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the 22nd international conference on Machine learning, ICML '05",
"volume": "",
"issue": "",
"pages": "736--743",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dan Roth and Wen-tau Yih. 2005. Integer linear pro- gramming inference for conditional random fields. In Proceedings of the 22nd international conference on Machine learning, ICML '05, pages 736-743, New York, NY, USA. ACM.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Keyphrase extraction in scientific publications",
"authors": [
{
"first": "T",
"middle": [
"D"
],
"last": "Nguyen",
"suffix": ""
},
{
"first": "M.-Y",
"middle": [],
"last": "Kan",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of International Conference on Asian Digital Libraries",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "T.D.Nguyen and M.-Y. Kan. 2007. Keyphrase extrac- tion in scientific publications. In Proceedings of In- ternational Conference on Asian Digital Libraries.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "A language model approach to keyphrase extraction",
"authors": [
{
"first": "Takashi",
"middle": [],
"last": "Tomokiyo",
"suffix": ""
},
{
"first": "Matthew",
"middle": [],
"last": "Hurst",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of the ACL 2003 workshop on Multiword expressions: analysis, acquisition and treatment",
"volume": "18",
"issue": "",
"pages": "33--40",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Takashi Tomokiyo and Matthew Hurst. 2003. A lan- guage model approach to keyphrase extraction. In Proceedings of the ACL 2003 workshop on Multi- word expressions: analysis, acquisition and treat- ment -Volume 18, pages 33-40, Morristown, NJ, USA. Association for Computational Linguistics.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Learning algorithms for keyphrase extraction",
"authors": [
{
"first": "D",
"middle": [],
"last": "Peter",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Turney",
"suffix": ""
}
],
"year": 2000,
"venue": "",
"volume": "2",
"issue": "",
"pages": "303--336",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peter D. Turney. 2000. Learning algorithms for keyphrase extraction. volume 2, pages 303-336, Hingham, MA, USA, May. Kluwer Academic Pub- lishers.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Coherent keyphrase extraction via web mining",
"authors": [
{
"first": "D",
"middle": [],
"last": "Peter",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Turney",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of the 18th international joint conference on Artificial intelligence",
"volume": "",
"issue": "",
"pages": "434--439",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peter D. Turney. 2003. Coherent keyphrase extraction via web mining. In Proceedings of the 18th inter- national joint conference on Artificial intelligence, pages 434-439, San Francisco, CA, USA. Morgan Kaufmann Publishers Inc.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Single document keyphrase extraction using neighborhood knowledge",
"authors": [
{
"first": "Xiaojun",
"middle": [],
"last": "Wan",
"suffix": ""
},
{
"first": "Jianguo",
"middle": [],
"last": "Xiao",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the 23rd national conference on Artificial intelligence",
"volume": "2",
"issue": "",
"pages": "855--860",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xiaojun Wan and Jianguo Xiao. 2008. Single document keyphrase extraction using neighborhood knowledge. In Proceedings of the 23rd national conference on Artificial intelligence -Volume 2, pages 855-860. AAAI Press.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Kea: practical automatic keyphrase extraction",
"authors": [
{
"first": "Ian",
"middle": [
"H"
],
"last": "Witten",
"suffix": ""
},
{
"first": "Gordon",
"middle": [
"W"
],
"last": "Paynter",
"suffix": ""
},
{
"first": "Eibe",
"middle": [],
"last": "Frank",
"suffix": ""
},
{
"first": "Carl",
"middle": [],
"last": "Gutwin",
"suffix": ""
},
{
"first": "Craig",
"middle": [
"G"
],
"last": "Nevill-Manning",
"suffix": ""
}
],
"year": 1999,
"venue": "Proceedings of the fourth ACM conference on Digital libraries, DL '99",
"volume": "",
"issue": "",
"pages": "254--255",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ian H. Witten, Gordon W. Paynter, Eibe Frank, Carl Gutwin, and Craig G. Nevill-Manning. 1999. Kea: practical automatic keyphrase extraction. In Pro- ceedings of the fourth ACM conference on Digital libraries, DL '99, pages 254-255, New York, NY, USA. ACM.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Automatic generation of story highlights",
"authors": [
{
"first": "Kristian",
"middle": [],
"last": "Woodsend",
"suffix": ""
},
{
"first": "Mirella",
"middle": [],
"last": "Lapata",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, ACL '10",
"volume": "",
"issue": "",
"pages": "565--574",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kristian Woodsend and Mirella Lapata. 2010. Auto- matic generation of story highlights. In Proceed- ings of the 48th Annual Meeting of the Association for Computational Linguistics, ACL '10, pages 565- 574, Morristown, NJ, USA. Association for Compu- tational Linguistics.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Keyword extraction from a single document using word co-occurrence statistical information",
"authors": [
{
"first": "Y",
"middle": [],
"last": "Matsuo",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Ishizuka",
"suffix": ""
}
],
"year": 2004,
"venue": "In International Journal on Artificial Intelligence Tools",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Y.Matsuo and M.Ishizuka. 2004. Keyword extraction from a single document using word co-occurrence statistical information. In International Journal on Artificial Intelligence Tools.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Generic summarization and keyphrase extraction using mutual reinforcement principle and sentence clustering",
"authors": [
{
"first": "Hongyuan",
"middle": [],
"last": "Zha",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the 25th annual international ACM SIGIR conference on Research and development in information retrieval, SIGIR '02",
"volume": "",
"issue": "",
"pages": "113--120",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hongyuan Zha. 2002. Generic summarization and keyphrase extraction using mutual reinforcement principle and sentence clustering. In Proceedings of the 25th annual international ACM SIGIR confer- ence on Research and development in information retrieval, SIGIR '02, pages 113-120, New York, NY, USA. ACM.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"uris": null,
"text": "Comparison results of Title, SVM, Ranking SVM and our BIP-based methods .",
"type_str": "figure"
},
"FIGREF1": {
"num": null,
"uris": null,
"text": "Results of varying the number of extracted keyphrases using the proposed BIP-based extraction method.",
"type_str": "figure"
},
"FIGREF2": {
"num": null,
"uris": null,
"text": "Results of varying the parameters \u03b1, \u03b2, \u03b3 in the objective function.",
"type_str": "figure"
},
"FIGREF3": {
"num": null,
"uris": null,
"text": "Masker Kong(+), Ting Hsin International Group(+), Taiwan Depositary Receipt(+), instant noodle, Taiwan Stock Exchange(+), NT$30 billion BIP without Coverage constraint Masker Kong(+), Taiwan Depositary Receipt(+), TDR, lunch, Taiwan Stock Exchange(+), Taiwan BIP without Coherence constraint Masker Kong(+), Taiwan Depositary Receipt(+), Taiwanese-invested food producer(+), IPO, issue, Taiwan Stock Exchange(+) SVM Taiwan Depositary Receipt(+), Masker Kong(+), China market, TDR, Hong Kong Exchanges, billion Ranking SVM Masker Kong(+), China market, TDR, Taiwan Depositary Receipt(+), issue, Taiwan * The keyphrases are translated from Chinese.",
"type_str": "figure"
},
"TABREF0": {
"html": null,
"content": "<table><tr><td/><td>Pre.</td><td>Rec.</td><td>F.</td></tr><tr><td>All</td><td colspan=\"3\">71.45% 73.96% 72.68%</td></tr><tr><td>All -TF\u2022IDF</td><td colspan=\"3\">58.86% 60.82% 59.82%</td></tr><tr><td>All -InTitle</td><td colspan=\"3\">60.96% 62.77% 61.85%</td></tr><tr><td colspan=\"4\">All -InFirstSentence 71.00% 73.20% 72.08%</td></tr><tr><td>All -Coverage</td><td colspan=\"3\">68.56% 70.68% 69.60%</td></tr><tr><td>All -Coherence</td><td colspan=\"3\">70.67% 72.85% 71.74%</td></tr></table>",
"num": null,
"type_str": "table",
"text": "Contribution of different components of objective function (TFIDF, InTitle, InFirstSenetence) and two constraints (Coverage and Coherence) under Precision, Recall, and F1-Score."
},
"TABREF1": {
"html": null,
"content": "<table><tr><td>\u03b6</td><td>Pre.</td><td>Rec.</td><td>F.</td></tr><tr><td>0.001</td><td/><td/><td/></tr></table>",
"num": null,
"type_str": "table",
"text": "Influence of the Coverage threshold \u03b6 69.44% 71.59% 70.50% 0.002 70.33% 72.50% 71.40% 0.003 71.22% 73.43% 72.31% 0.004 71.00% 73.20% 72.08% 0.005 71.45% 73.65% 72.53% 0.006 71.33% 73.54% 72.42% 0.007 71.22% 73.43% 72.31% 0.008 71.11% 73.31% 72.19% 0.009 70.67% 72.85% 71.74%"
},
"TABREF2": {
"html": null,
"content": "<table><tr><td>\u03be</td><td>Pre.</td><td>Rec.</td><td>F.</td></tr><tr><td>15.0</td><td/><td/><td/></tr></table>",
"num": null,
"type_str": "table",
"text": "Influence of the Coherence threshold \u03be 68.00% 70.10% 69.04% 15.5 69.89% 72.05% 70.95% 16.0 70.78% 72.97% 71.85% 16.5 71.45% 73.65% 72.53% 17.0 71.22% 73.43% 72.30% 17.5 71.00% 73.20% 72.08% 18.0 71.00% 73.20% 72.08%"
},
"TABREF3": {
"html": null,
"content": "<table/>",
"num": null,
"type_str": "table",
"text": "Example of extracted keyphrases by SVM, Ranking SVM and BIP-based method * ."
}
}
}
}