{ "paper_id": "D09-1027", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T16:38:42.755746Z" }, "title": "Clustering to Find Exemplar Terms for Keyphrase Extraction", "authors": [ { "first": "Zhiyuan", "middle": [], "last": "Liu", "suffix": "", "affiliation": { "laboratory": "", "institution": "Tsinghua University", "location": { "postCode": "100084", "settlement": "Beijing", "country": "China" } }, "email": "" }, { "first": "Peng", "middle": [], "last": "Li", "suffix": "", "affiliation": { "laboratory": "", "institution": "Tsinghua University", "location": { "postCode": "100084", "settlement": "Beijing", "country": "China" } }, "email": "pengli09@gmail.com" }, { "first": "Yabin", "middle": [], "last": "Zheng", "suffix": "", "affiliation": { "laboratory": "", "institution": "Tsinghua University", "location": { "postCode": "100084", "settlement": "Beijing", "country": "China" } }, "email": "yabin.zheng@gmail.com" }, { "first": "Maosong", "middle": [], "last": "Sun", "suffix": "", "affiliation": { "laboratory": "", "institution": "Tsinghua University", "location": { "postCode": "100084", "settlement": "Beijing", "country": "China" } }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Keyphrases are widely used as a brief summary of documents. Since manual assignment is time-consuming, various unsupervised ranking methods based on importance scores are proposed for keyphrase extraction. In practice, the keyphrases of a document should not only be statistically important in the document, but also have a good coverage of the document. Based on this observation, we propose an unsupervised method for keyphrase extraction. Firstly, the method finds exemplar terms by leveraging clustering techniques, which guarantees the document to be semantically covered by these exemplar terms. Then the keyphrases are extracted from the document using the exemplar terms. Our method outperforms sate-of-the-art graphbased ranking methods (TextRank) by 9.5% in F1-measure.", "pdf_parse": { "paper_id": "D09-1027", "_pdf_hash": "", "abstract": [ { "text": "Keyphrases are widely used as a brief summary of documents. Since manual assignment is time-consuming, various unsupervised ranking methods based on importance scores are proposed for keyphrase extraction. In practice, the keyphrases of a document should not only be statistically important in the document, but also have a good coverage of the document. Based on this observation, we propose an unsupervised method for keyphrase extraction. Firstly, the method finds exemplar terms by leveraging clustering techniques, which guarantees the document to be semantically covered by these exemplar terms. Then the keyphrases are extracted from the document using the exemplar terms. Our method outperforms sate-of-the-art graphbased ranking methods (TextRank) by 9.5% in F1-measure.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "With the development of Internet, information on the web is emerging exponentially. How to effectively seek and manage information becomes an important research issue. 
Keyphrases, as a brief summary of a document, provide a solution to help organize, manage and retrieve documents, and are widely used in digital libraries and information retrieval.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Keyphrases in articles of journals and books are usually assigned by authors.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "However, most articles on the web usually do not have human-assigned keyphrases. Therefore, automatic keyphrase extraction is an important research task. Existing methods can be divided into supervised and unsupervised approaches.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The supervised approach (Turney, 1999) regards keyphrase extraction as a classification task.", "cite_spans": [ { "start": 24, "end": 38, "text": "(Turney, 1999)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this approach, a model is trained to determine whether a candidate term of the document is a keyphrase, based on statistical and linguistic features. For the supervised keyphrase extraction approach, a document set with human-assigned keyphrases is required as training set. However, human labelling is time-consuming. Therefore, in this study we focus on unsupervised approach.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "As an example of an unsupervised keyphrase extraction approach, the graph-based ranking (Mihalcea and Tarau, 2004) regards keyphrase extraction as a ranking task, where a document is represented by a term graph based on term relatedness, and then a graph-based ranking algorithm is used to assign importance scores to each term. Existing methods usually use term cooccurrences within a specified window size in the given document as an approximation of term relatedness (Mihalcea and Tarau, 2004) .", "cite_spans": [ { "start": 88, "end": 114, "text": "(Mihalcea and Tarau, 2004)", "ref_id": "BIBREF15" }, { "start": 470, "end": 496, "text": "(Mihalcea and Tarau, 2004)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "As we know, none of these existing works gives an explicit definition on what are appropriate keyphrases for a document. In fact, the existing methods only judge the importance of each term, and extract the most important ones as keyphrases.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "From the observation of human-assigned keyphrases, we conclude that good keyphrases of a document should satisfy the following properties:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "1. Understandable. The keyphrases are understandable to people. This indicates the extracted keyphrases should be grammatical. For example, \"machine learning\" is a grammatical phrase, but \"machine learned\" is not.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "2. Relevant. The keyphrases are semantically relevant with the document theme. 
For example, for a document about \"machine learning\", we want the keyphrases all about this theme.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "3. Good coverage. The keyphrases should cover the whole document well.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Suppose we have a document describing \"Beijing\" from various aspects of \"location\", \"atmosphere\" and \"culture\", the extracted keyphrases should cover all the three aspects, instead of just a partial subset of them.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The classification-based approach determines whether a term is a keyphrase in isolation, which could not guarantee Property 3. Neither does the graph-based approach guarantee the top-ranked keyphrases could cover the whole document. This may cause the resulting keyphrases to be inappropriate or badly-grouped.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "To extract the appropriate keyphrases for a document, we suggest an unsupervised clusteringbased method. Firstly the terms in a document are grouped into clusters based on semantic relatedness. Each cluster is represented by an exemplar term, which is also the centroid of each cluster. Then the keyphrases are extracted from the document using these exemplar terms.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this method, we group terms based on semantic relatedness, which guarantees a good coverage of the document and meets Property 2 and 3. Moreover, we only extract the keyphrases in accordance with noun group (chunk) patterns, which guarantees the keyphrases satisfy Property 1.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Experiments show that the clustering-based method outperforms the state-of-the-art graphbased approach on precision, recall and F1measure. Moreover, this method is unsupervised and language-independent, which is applicable in the web era with enormous information.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The rest of the paper is organized as follows. In Section 2, we introduce and discuss the related work in this area. In Section 3, we give an overview of our method for keyphrase extraction. From Section 4 to Section 7, the algorithm is described in detail. Empirical experiment results are demonstrated in Section 8, followed by our conclusions and plans for future work in Section 9.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "A straightforward method for keyphrase extraction is to select keyphrases according to frequency criteria. However, the poor performance of this method drives people to explore other methods. A pioneering achievement is carried out in (Turney, 1999) , as mentioned in Section 1, a supervised machine learning method was suggested in this paper which regards keyphrase extraction as a classification task. In this work, parameterized heuristic rules are combined with a genetic algorithm into a system for keyphrase extraction. A different learning algorithm, Naive Bayes method, is applied in (Frank et al., 1999) with improved results on the same data used in (Turney, 1999) . 
Hulth (Hulth, 2004) adds more linguistic knowledge, such as syntactic features, to enrich term representation, which significantly improves the performance. Generally, the supervised methods need a manually annotated training set, which may sometimes not be practical, especially in the web scenario.", "cite_spans": [ { "start": 235, "end": 249, "text": "(Turney, 1999)", "ref_id": "BIBREF16" }, { "start": 593, "end": 613, "text": "(Frank et al., 1999)", "ref_id": "BIBREF5" }, { "start": 661, "end": 675, "text": "(Turney, 1999)", "ref_id": "BIBREF16" }, { "start": 684, "end": 696, "text": "Hulth, 2004)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Starting with TextRank (Mihalcea and Tarau, 2004) , graph-based ranking methods are becoming the most widely used unsupervised approach for keyphrase extraction. The work in (Litvak and Last, 2008) applies the HITS algorithm on the word graph of a document under the assumption that the top-ranked nodes should be the document keywords. Experiments show that the classification-based supervised method provides the highest keyword identification accuracy, while the HITS algorithm gets the highest F-measure. Work in (Huang et al., 2006 ) also considers each document as a term graph where the structural dynamics of these graphs can be used to identify keyphrases. Wan and Xiao (Wan and Xiao, 2008b ) use a small number of nearest neighbor documents to provide more knowledge to improve the graph-based keyphrase extraction algorithm for a single document. Motivated by a similar idea, Wan and Xiao (Wan and Xiao, 2008a) propose to adopt clustering methods to find a small number of similar documents to provide more knowledge for building word graphs for keyword extraction. Moreover, after our submission of this paper, we find that a method using community detection on semantic term graphs is proposed for keyphrase extraction from multi-theme documents (Grineva et al., 2009) . In addition, some practical systems, such as KP-Miner (Elbeltagy and Rafea, 2009) , also do not need to be trained on a particular human-annotated document set.", "cite_spans": [ { "start": 23, "end": 49, "text": "(Mihalcea and Tarau, 2004)", "ref_id": "BIBREF15" }, { "start": 174, "end": 197, "text": "(Litvak and Last, 2008)", "ref_id": "BIBREF14" }, { "start": 508, "end": 527, "text": "(Huang et al., 2006", "ref_id": "BIBREF10" }, { "start": 657, "end": 690, "text": "Wan and Xiao (Wan and Xiao, 2008b", "ref_id": "BIBREF19" }, { "start": 883, "end": 904, "text": "(Wan and Xiao, 2008a)", "ref_id": "BIBREF18" }, { "start": 1242, "end": 1264, "text": "(Grineva et al., 2009)", "ref_id": "BIBREF8" }, { "start": 1321, "end": 1348, "text": "(Elbeltagy and Rafea, 2009)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "In recent years, a number of systems have been developed for extracting keyphrases from web documents (Kelleher and Luz, 2005; Chen et al., 2005) , email (Dredze et al., 2008) and some other specific sources, which indicates the importance of keyphrase extraction in the web era. 
However, none of these previous works has overall consideration on the essential properties of appropriate keyphrases mentioned in Section 1.", "cite_spans": [ { "start": 96, "end": 120, "text": "(Kelleher and Luz, 2005;", "ref_id": "BIBREF13" }, { "start": 121, "end": 139, "text": "Chen et al., 2005)", "ref_id": "BIBREF0" }, { "start": 148, "end": 169, "text": "(Dredze et al., 2008)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "We should also note that, although the precision and recall of most current keyphrase extractors are still much lower compared to other NLPtasks, it does not indicate the performance is poor because even different annotators may assign different keyphrases to the same document. As described in (Wan and Xiao, 2008b) , when two annotators were asked to label keyphrases on 308 documents, the Kappa statistic for measuring interagreement among them was only 0.70.", "cite_spans": [ { "start": 295, "end": 316, "text": "(Wan and Xiao, 2008b)", "ref_id": "BIBREF19" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "The method proposed in this paper is mainly inspired by the nature of appropriate keyphrases mentioned in Section 1, namely understandable, semantically relevant with the document and high coverage of the whole document.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Algorithm Overview", "sec_num": "3" }, { "text": "Let's analyze the document describing \"Beijing\" from the aspects of \"location\", \"atmosphere\" and \"culture\". Under the bag-of-words assumption, each term in the document, except for function words, is used to describe an aspect of the theme. Based on these aspects, terms are grouped into different clusters. The terms in the same cluster are more relevant with each other than with the ones in other clusters. Taking the terms \"temperature\", \"cold\" and \"winter\" for example, they may serve the aspect \"atmosphere\" instead of \"location\" or some other aspects when talking about \"Beijing\".", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Algorithm Overview", "sec_num": "3" }, { "text": "Based on above description, it is thus reasonable to propose a clustering-based method for keyphrase extraction. The overview of the method is:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Algorithm Overview", "sec_num": "3" }, { "text": "1. Candidate term selection. We first filter out the stop words and select candidate terms for keyphrase extraction.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Algorithm Overview", "sec_num": "3" }, { "text": "Not all words in a document are possible to be selected as keyphrases. In order to filter out the noisy words in advance, we select candidate terms using some heuristic rules. This step proceeds as follows. Firstly the text is tokenized for English or segmented into words for Chinese and other languages without word-separators. Then we remove the stop words and consider the remaining single terms as candidates for calculating semantic relatedness and clustering.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Candidate Term Selection", "sec_num": "4" }, { "text": "In methods like (Turney, 1999; Elbeltagy and Rafea, 2009) , candidate keyphrases were first found using n-gram. Instead, in this method, we just find the single-word terms as the candidate terms at the beginning. 
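To make the candidate-selection step concrete, here is a minimal Python sketch (not the authors' code): it tokenizes the text, drops stop words to obtain single-word candidate terms as described above, and also counts window-based co-occurrences between candidates, which Section 5.1 below uses as one of the two relatedness options. The tokenizer, the tiny stop-word list and the window handling are illustrative assumptions.

```python
# Illustrative sketch of candidate term selection (Section 4) and the
# window-based co-occurrence counts used later in Section 5.1.
# The tokenizer and STOP_WORDS below are simplified stand-ins.
import re
from collections import defaultdict

STOP_WORDS = {"the", "a", "an", "of", "and", "to", "in", "is", "are", "for", "on", "with"}

def tokenize(text):
    # crude English tokenizer; Chinese text would be word-segmented instead
    return re.findall(r"[A-Za-z][A-Za-z-]*", text.lower())

def candidate_terms(tokens):
    # single-word candidates: every token that is not a stop word
    return {t for t in tokens if t not in STOP_WORDS}

def cooccurrence_counts(tokens, candidates, w=5):
    # count pairs of candidate terms that occur within a window of w words,
    # over the ORIGINAL word sequence (stop words are kept while counting)
    counts = defaultdict(int)
    for i, ti in enumerate(tokens):
        if ti not in candidates:
            continue
        for j in range(i + 1, min(i + w, len(tokens))):
            tj = tokens[j]
            if tj in candidates and tj != ti:
                counts[tuple(sorted((ti, tj)))] += 1
    return counts

if __name__ == "__main__":
    text = "Beijing has a cold winter. The winter temperature in Beijing is low."
    toks = tokenize(text)
    cands = candidate_terms(toks)
    print(sorted(cooccurrence_counts(toks, cands, w=5).items()))
```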
After identifying the exemplar terms within the candidate terms, we extract multi-word keyphrases using the exemplars.", "cite_spans": [ { "start": 16, "end": 30, "text": "(Turney, 1999;", "ref_id": "BIBREF16" }, { "start": 31, "end": 57, "text": "Elbeltagy and Rafea, 2009)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Candidate Term Selection", "sec_num": "4" }, { "text": "After selecting candidate terms, it is important to measure term relatedness for clustering. In this paper, we propose two approaches to calculate term relatedness: one is based on term cooccurrence within the document, and the other by leveraging human knowledge bases.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Calculating Term Relatedness", "sec_num": "5" }, { "text": "An intuitive method for measuring term relatedness is based on term cooccurrence relations within the given document. The cooccurrence relation expresses the cohesion relationships between terms.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Cooccurrence-based Term Relatedness", "sec_num": "5.1" }, { "text": "In this paper, cooccurrence-based relatedness is simply set to the count of cooccurrences within a window of maximum w words in the whole document. In the following experiments, the window size w is set from 2 to 10 words.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Cooccurrence-based Term Relatedness", "sec_num": "5.1" }, { "text": "Each document can be regarded as a word sequence for computing cooccurrence-based relatedness. There are two types of word sequence for counting term cooccurrences. One is the original word sequence without filtering out any words, and the other is after filtering out the stop words or the words with specified part-of-speech (POS) tags. In this paper we select the first type because each word in the sequence takes important role for measuring term cooccurrences, no matter whether it is a stop word or something else. If we filter out some words, the term relatedness will not be as precise as before.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Cooccurrence-based Term Relatedness", "sec_num": "5.1" }, { "text": "In experiments, we will investigate how the window size influences the performance of keyphrase extraction.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Cooccurrence-based Term Relatedness", "sec_num": "5.1" }, { "text": "Many methods have been proposed for measuring the relatedness between terms using external resources. One principled method is leveraging human knowledge bases. Inspired by (Gabrilovich and Markovitch, 2007) , we adopt Wikipedia, the largest encyclopedia collected and organized by human on the web, as the knowledge base to measure term relatedness.", "cite_spans": [ { "start": 173, "end": 207, "text": "(Gabrilovich and Markovitch, 2007)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Wikipedia-based Term Relatedness", "sec_num": "5.2" }, { "text": "The basic idea of computing term relatedness by leveragting Wikipedia is to consider each Wikipedia article as a concept. Then the semantic meaning of a term could be represented as a weighted vector of Wikipedia concepts, of which the values are the term's TFIDF within corresponding Wikipedia articles. We could compute the term relatedness by comparing the concept vectors of the terms. 
Empirical evaluations confirm that the idea is effective and practical for computing term relatedness (Gabrilovich and Markovitch, 2007) .", "cite_spans": [ { "start": 492, "end": 526, "text": "(Gabrilovich and Markovitch, 2007)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Wikipedia-based Term Relatedness", "sec_num": "5.2" }, { "text": "In this paper, we select cosine similarity, Euclidean distance, Point-wise Mutual Information and Normalized Google Similarity Distance (Cilibrasi and Vitanyi, 2007) for measuring term relatedness based on the vector of Wikipedia concepts.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Wikipedia-based Term Relatedness", "sec_num": "5.2" }, { "text": "Denote the Wikipedia-concept vector of the term t i as C i = {c i1 , c i2 , ..., c iN }, where N indicates the number of Wikipedia articles, and c ik is the TFIDF value of w i in the kth Wikipedia article. The cosine similarity is defined as", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Wikipedia-based Term Relatedness", "sec_num": "5.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "cos(i, j) = (C_i \u2022 C_j) / (||C_i|| ||C_j||)", "eq_num": "(1)" } ], "section": "Wikipedia-based Term Relatedness", "sec_num": "5.2" }, { "text": "The definition of Euclidean distance is", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Wikipedia-based Term Relatedness", "sec_num": "5.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "euc(i, j) = sqrt( \u2211_{k=1..N} (c_ik \u2212 c_jk)^2 )", "eq_num": "(2)" } ], "section": "Wikipedia-based Term Relatedness", "sec_num": "5.2" }, { "text": "Point-wise Mutual Information (PMI) is a common approach to quantify relatedness. Here we take three ways to measure term relatedness using PMI. One is based on Wikipedia page count,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Wikipedia-based Term Relatedness", "sec_num": "5.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "pmi_p(i, j) = log_2 ( N \u00d7 p(i, j) / (p(i) \u00d7 p(j)) )", "eq_num": "(3)" } ], "section": "Wikipedia-based Term Relatedness", "sec_num": "5.2" }, { "text": "where p(i, j) is the number of Wikipedia articles containing both t i and t j , while p(i) is the number of articles which contain t i . The second is based on the term count in Wikipedia articles,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Wikipedia-based Term Relatedness", "sec_num": "5.2" }, { "text": "pmi_t(i, j) = log_2 ( T \u00d7 t(i, j) / (t(i) \u00d7 t(j)) ) (4)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Wikipedia-based Term Relatedness", "sec_num": "5.2" }, { "text": "where T is the number of terms in Wikipedia, t(i, j) is the number of t i and t j occurred adjacently in Wikipedia, and t(i) is the number of t i in Wikipedia. 
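As a hedged illustration of Equations (1)-(4), the sketch below assumes each term already comes with a sparse Wikipedia-concept vector (article id mapped to the term's TFIDF in that article) and with simple article/term counts; building these from an actual Wikipedia dump is not shown, and the combined PMI and the NGD measure introduced next follow the same pattern.

```python
# Sketch of the Wikipedia-based relatedness measures in Eqs. (1)-(4); the concept
# vectors and counts are toy inputs, not values computed from a real Wikipedia dump.
import math

def cosine(ci, cj):                   # Eq. (1): cosine over concept vectors
    dot = sum(v * cj.get(k, 0.0) for k, v in ci.items())
    ni = math.sqrt(sum(v * v for v in ci.values()))
    nj = math.sqrt(sum(v * v for v in cj.values()))
    return dot / (ni * nj) if ni and nj else 0.0

def euclidean(ci, cj):                # Eq. (2): Euclidean distance over concept vectors
    keys = set(ci) | set(cj)
    return math.sqrt(sum((ci.get(k, 0.0) - cj.get(k, 0.0)) ** 2 for k in keys))

def pmi_page(N, p_i, p_j, p_ij):      # Eq. (3): PMI from Wikipedia article counts
    return math.log2(N * p_ij / (p_i * p_j)) if p_ij else float("-inf")

def pmi_term(T, t_i, t_j, t_ij):      # Eq. (4): PMI from adjacent-term counts
    return math.log2(T * t_ij / (t_i * t_j)) if t_ij else float("-inf")

if __name__ == "__main__":
    ci = {1: 0.8, 2: 0.1}             # toy sparse concept vectors {article id: TFIDF}
    cj = {1: 0.5, 3: 0.4}
    print(cosine(ci, cj), euclidean(ci, cj))
    print(pmi_page(N=10000, p_i=120, p_j=90, p_ij=30))
```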
The third one is a combination of the above two PMI ways,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Wikipedia-based Term Relatedness", "sec_num": "5.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "pmi_c(i, j) = log_2 ( N \u00d7 pt(i, j) / (p(i) \u00d7 p(j)) )", "eq_num": "(5)" } ], "section": "Wikipedia-based Term Relatedness", "sec_num": "5.2" }, { "text": "where pt(i, j) indicates the number of Wikipedia articles containing t i and t j as adjacency. It is obvious that pmi c (i, j) \u2264 pmi p (i, j), and pmi c (i, j) is more strict and accurate for measuring relatedness.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Wikipedia-based Term Relatedness", "sec_num": "5.2" }, { "text": "Normalized Google Similarity Distance (NGD) is a new measure for measuring similarity between terms proposed by (Cilibrasi and Vitanyi, 2007) based on information distance and Kolmogorov complexity. It could be applied to compute term similarity from the World Wide Web or any large enough corpus using the page counts of terms. NGD used in this paper is based on Wikipedia article count, defined as", "cite_spans": [ { "start": 112, "end": 141, "text": "(Cilibrasi and Vitanyi, 2007)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Wikipedia-based Term Relatedness", "sec_num": "5.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "ngd(i, j) = ( max(log p(i), log p(j)) \u2212 log p(i, j) ) / ( log N \u2212 min(log p(i), log p(j)) )", "eq_num": "(6)" } ], "section": "Wikipedia-based Term Relatedness", "sec_num": "5.2" }, { "text": "where N is the number of Wikipedia articles used as normalized factor.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Wikipedia-based Term Relatedness", "sec_num": "5.2" }, { "text": "Once we get the term relatedness, we could then group the terms using clustering techniques and find exemplar terms for each cluster.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Wikipedia-based Term Relatedness", "sec_num": "5.2" }, { "text": "Clustering is an important unsupervised learning problem, which is the assignment of objects into groups so that objects from the same cluster are more similar to each other than objects from different clusters (Han and Kamber, 2005) . In this paper, we use three widely used clustering algorithms, hierarchical clustering, spectral clustering and Affinity Propagation, to cluster the candidate terms of a given document based on the semantic relatedness between them.", "cite_spans": [ { "start": 211, "end": 233, "text": "(Han and Kamber, 2005)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Term Clustering", "sec_num": "6" }, { "text": "Hierarchical clustering groups data over a variety of scales by creating a cluster tree. The tree is a multilevel hierarchy, where clusters at one level are joined as clusters at the next level. The hierarchical clustering follows this procedure:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Hierarchical Clustering", "sec_num": "6.1" }, { "text": "1. Find the distance or similarity between every pair of data points in the dataset;", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Hierarchical Clustering", "sec_num": "6.1" }, { "text": "2. 
Group the data points into a binary and hierarchical cluster tree;", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Hierarchical Clustering", "sec_num": "6.1" }, { "text": "3. Determine where to cut the hierarchical tree into clusters. In hierarchical clustering, we have to specify the cluster number m in advance.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Hierarchical Clustering", "sec_num": "6.1" }, { "text": "In this paper, we use the hierarchical clustering implemented in Matlab Statistics Toolbox. Note that although we use hierarchical clustering here, the cluster hierarchy is not necessary for the clustering-based method.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Hierarchical Clustering", "sec_num": "6.1" }, { "text": "In recent years, spectral clustering has become one of the most popular modern clustering algorithms. Spectral clustering makes use of the spectrum of the similarity matrix of the data to perform dimensionality reduction for clustering into fewer dimensions, which is simple to implement and often outperforms traditional clustering methods such as k-means. Detailed introduction to spectral clustering could be found in (von Luxburg, 2006) .", "cite_spans": [ { "start": 421, "end": 440, "text": "(von Luxburg, 2006)", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "Spectral Clustering", "sec_num": "6.2" }, { "text": "In this paper, we use the spectral clustering toolbox developed by Wen-Yen Chen, et al. (Chen et al., 2008) 1 . Since the cooccurrence-based term relatedness is usually sparse, the traditional eigenvalue decomposition in spectral clustering will sometimes get run-time error. In this paper, we use the singular value decomposition (SVD) technique for spectral clustering instead.", "cite_spans": [ { "start": 67, "end": 107, "text": "Wen-Yen Chen, et al. (Chen et al., 2008)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Spectral Clustering", "sec_num": "6.2" }, { "text": "For spectral clustering, two parameters are required to be set by the user: the cluster number m, and \u03c3 which is used in computing similarities from object distances", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Spectral Clustering", "sec_num": "6.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "s(i, j) = exp( \u2212d(i, j) 2 2\u03c3 2 )", "eq_num": "(7)" } ], "section": "Spectral Clustering", "sec_num": "6.2" }, { "text": "where s(i, j) and d(i, j) are the similarity and distance between i and j respectively.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Spectral Clustering", "sec_num": "6.2" }, { "text": "Another powerful clustering method, Affinity Propagation, is based on message passing techniques. AP was proposed in (Frey and Dueck, 2007) , where AP was reported to find clusters with much lower error than those found by other methods. In this paper, we use the toolbox developed by Frey, et al. 2 . Detailed description of the algorithm could be found in (Frey and Dueck, 2007) . 
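As a rough stand-in for the Matlab toolbox used by the authors, the sketch below runs Affinity Propagation from scikit-learn on a precomputed term-term similarity matrix and reads the exemplar terms directly from the result; the preference and damping values mirror the parameters discussed next, and the toy similarity matrix is invented for illustration.

```python
# Hedged sketch (not the authors' toolbox code) of clustering candidate terms with
# Affinity Propagation on a precomputed similarity matrix; exemplars come directly
# from the clustering result, as described for AP in this section.
import numpy as np
from sklearn.cluster import AffinityPropagation

def ap_exemplars(terms, sim, damping=0.9, preference=None):
    """terms: list of n candidate terms; sim: n x n similarity matrix s(i, j)."""
    sim = np.asarray(sim, dtype=float)
    if preference is None:
        preference = np.median(sim)   # the paper also tries min, mean and max of s(i, j)
    ap = AffinityPropagation(affinity="precomputed", damping=damping,
                             preference=preference)
    labels = ap.fit_predict(sim)
    exemplars = [terms[i] for i in ap.cluster_centers_indices_]
    return labels, exemplars

if __name__ == "__main__":
    terms = ["temperature", "cold", "winter", "location", "district"]
    sim = np.array([[1.0, 0.8, 0.7, 0.1, 0.1],
                    [0.8, 1.0, 0.6, 0.1, 0.2],
                    [0.7, 0.6, 1.0, 0.2, 0.1],
                    [0.1, 0.1, 0.2, 1.0, 0.7],
                    [0.1, 0.2, 0.1, 0.7, 1.0]])
    print(ap_exemplars(terms, sim))
```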
Here we introduced three parameters for AP:", "cite_spans": [ { "start": 117, "end": 139, "text": "(Frey and Dueck, 2007)", "ref_id": "BIBREF6" }, { "start": 298, "end": 299, "text": "2", "ref_id": null }, { "start": 358, "end": 380, "text": "(Frey and Dueck, 2007)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Affinity Propagation", "sec_num": "6.3" }, { "text": "\u2022 Preference. Rather than requiring predefined number of clusters, Affinity Propagation takes as input a real number p for each term, so that the terms with larger p are more likely to be chosen as exemplars, i.e., centroids of clusters. These values are referred to as \"preferences\". The preferences are usually be set as the maximum, minimum, mean or median of s(i, j), i = j.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Affinity Propagation", "sec_num": "6.3" }, { "text": "\u2022 Convergence criterion. AP terminates if (1) the local decisions stay constant for I 1 iterations; or (2) the number of iterations reaches I 2 . In this work, we set I 1 to 100 and I 2 to 1, 000.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Affinity Propagation", "sec_num": "6.3" }, { "text": "\u2022 Damping factor. When updating the messages, it is important to avoid numerical oscillations by using damping factor. Each message is set to \u03bb times its value from the previous iteration plus 1 \u2212 \u03bb times its prescribed updated value, where the damping factor \u03bb is between 0 and 1. In this paper we set \u03bb = 0.9.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Affinity Propagation", "sec_num": "6.3" }, { "text": "After term clustering, we select the exemplar terms of each clusters as seed terms. In Affinity Propagation, the exemplar terms are directly obtained from the clustering results. In hierarchical clustering, exemplar terms could also be obtained by the Matlab toolbox. While in spectral clustering, we select the terms that are most close to the centroid of a cluster as exemplar terms. As reported in , most manually assigned keyphrases turn out to be noun groups. Therefore, we annotate the document with POS tags using Stanford Log-Linear Tagger 3 , and then extract the noun groups whose pattern is zero or more adjectives followed by one or more nouns. The pattern can be represented using regular expressions as follows", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "From Exemplar Terms to Keyphrases", "sec_num": "7" }, { "text": "(JJ) * (N N |N N S|N N P )+", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "From Exemplar Terms to Keyphrases", "sec_num": "7" }, { "text": "where JJ indicates adjectives and various forms of nouns are represented using N N , N N S and N N P . From these noun groups, we select the ones that contain one or more exemplar terms to be the keyphrases of the document.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "From Exemplar Terms to Keyphrases", "sec_num": "7" }, { "text": "In this process, we may find single-word keyphrases. In practice, only a small fraction of keyphrases are single-word. 
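The sketch below is an illustrative (not the authors') rendering of this keyphrase-formation step: given POS-tagged tokens from an external tagger, it extracts noun groups matching (JJ)*(NN|NNS|NNP)+, keeps only those containing at least one exemplar term, and drops single-word phrases found in a frequent-word list, which is the filtering discussed just below. The example input and word lists are assumptions.

```python
# Sketch of Section 7: noun-group extraction over POS-tagged tokens, then filtering
# by exemplar terms and by a frequent-word list for single-word phrases.
ADJ = {"JJ"}
NOUN = {"NN", "NNS", "NNP"}

def noun_groups(tagged):
    """tagged: list of (word, pos) pairs -> list of noun-group strings."""
    groups, i, n = [], 0, len(tagged)
    while i < n:
        j = i
        while j < n and tagged[j][1] in ADJ:       # zero or more adjectives
            j += 1
        k = j
        while k < n and tagged[k][1] in NOUN:      # one or more nouns
            k += 1
        if k > j:                                  # at least one noun matched
            groups.append(" ".join(w.lower() for w, _ in tagged[i:k]))
            i = k
        else:
            i += 1
    return groups

def keyphrases(tagged, exemplars, frequent_words):
    phrases = []
    for g in noun_groups(tagged):
        words = g.split()
        if not any(w in exemplars for w in words):
            continue                               # must contain an exemplar term
        if len(words) == 1 and words[0] in frequent_words:
            continue                               # too common to be a keyphrase
        phrases.append(g)
    return phrases

if __name__ == "__main__":
    tagged = [("Unsupervised", "JJ"), ("keyphrase", "NN"), ("extraction", "NN"),
              ("uses", "VBZ"), ("exemplar", "NN"), ("terms", "NNS")]
    print(keyphrases(tagged, exemplars={"keyphrase", "exemplar"}, frequent_words={"terms"}))
```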
Thus, as a part of postprocessing process, we have to use a frequent word list to filter out the terms that are too common to be keyphrases.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "From Exemplar Terms to Keyphrases", "sec_num": "7" }, { "text": "The dataset used in the experiments is a collection of scientific publication abstracts from the Inspec database and the corresponding manually assigned keyphrases 4 . The dataset is used in both and (Mihalcea and Tarau, 2004) . Each abstract has two kinds of keyphrases: controlled keyphrases, restricted to a given dictionary, and uncontrolled keyphrases, freely assigned by the experts. We use the uncontrolled keyphrases for evaluation as proposed in and followed by (Mihalcea and Tarau, 2004) .", "cite_spans": [ { "start": 200, "end": 226, "text": "(Mihalcea and Tarau, 2004)", "ref_id": "BIBREF15" }, { "start": 471, "end": 497, "text": "(Mihalcea and Tarau, 2004)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Datasets and Evaluation Metric", "sec_num": "8.1" }, { "text": "As indicated in Mihalcea and Tarau, 2004) , in uncontrolled manually assigned keyphrases, only the ones that occur in the corresponding abstracts are considered in evaluation. The extracted keyphrases of various methods and manually assigned keyphrases are compared after stemming.", "cite_spans": [ { "start": 16, "end": 41, "text": "Mihalcea and Tarau, 2004)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Datasets and Evaluation Metric", "sec_num": "8.1" }, { "text": "In the experiments of , for her supervised method, Hulth splits a total of 2, 000 abstracts into 1, 000 for training, 500 for validation and 500 for test. In (Mihalcea and Tarau, 2004) , due to the unsupervised method, only the test set was used for comparing the performance of Tex-tRank and Hulth's method.", "cite_spans": [ { "start": 158, "end": 184, "text": "(Mihalcea and Tarau, 2004)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Datasets and Evaluation Metric", "sec_num": "8.1" }, { "text": "For computing Wikipedia-based relatedness, we use a snapshot on November 11, 2005 5 . The frequent word list used in the postprocessing step for filtering single-word phrases is also computed from Wikipedia. In the experiments of this paper, we add the words that occur more than 1, 000 times in Wikipedia into the list.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Datasets and Evaluation Metric", "sec_num": "8.1" }, { "text": "The clustering-based method is completely unsupervised. Here, we mainly run our method on test set and investigate the influence of relatedness measurements and clustering methods with different parameters. Then we compare our method with two baseline methods: Hulth's method and TextRank. Finally, we analyze and discuss the performance of the method by taking the abstract of this paper as a demonstration.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Datasets and Evaluation Metric", "sec_num": "8.1" }, { "text": "We first investigate the influence of semantic relatedness measurements. By systematic experiments, we find that Wikipedia-based relatedness outperforms cooccurrence-based relatedness for keyphrase extraction, though the improvement is not significant. In Table 1 , we list the performance of spectral clustering with various relatedness measurements for demonstration. 
In this table, the w indicates the window size for counting cooccurrences in cooccurrence-based relatedness. cos, euc, etc. are different measures for computing Wikipedia-based relatedness which we presented in Section 5.2. We use spectral clustering here because it outperforms other clustering techniques, which will be shown in the next subsection. The results in Table 1 are obtained when the cluster number m = 2 3 n, where n is the number of candidate terms obtained in Section 5. Besides, for Euclidean distance and Google distance, we set \u03c3 = 36 of Formula 7 to convert them to corresponding similarities, where we get the best result when we conduct different trails with \u03c3 = 9, 18, 36, 54, though there are only a small margin among them.", "cite_spans": [], "ref_spans": [ { "start": 256, "end": 263, "text": "Table 1", "ref_id": "TABREF0" }, { "start": 737, "end": 744, "text": "Table 1", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Influence of Relatedness Measurements", "sec_num": "8.2" }, { "text": "As shown in Table 1 , although the method using Wikipedia-based relatedness outperforms that using cooccurrence-based relatedness, the improvement is not prominent. Wikipedia-based relatedness is computed according to global statistical information on Wikipedia. Therefore it is more precise than cooccurrence-based relatedness, which is reflected in the performance of the keyphrase extraction. However, on the other hand, Wikipediabased relatedness does not catch the documentspecific relatedness, which is represented by the cooccurrence-based relatedness. It will be an interesting future work to combine these two types of relatedness measurements.", "cite_spans": [], "ref_spans": [ { "start": 12, "end": 19, "text": "Table 1", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Influence of Relatedness Measurements", "sec_num": "8.2" }, { "text": "From this subsection, we conclude that, although the method using Wikipedia-based relatedness performs better than cooccurrence-based one, due to the expensive computation of Wikipediabased relatedness, the cooccurrence-based one is good enough for practical applications.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Influence of Relatedness Measurements", "sec_num": "8.2" }, { "text": "To demonstrate the influence of clustering methods for keyphrase extraction, we fix the relatedness measurement as Wikipedia-based pmi c , which has been shown in Section 8.2 to be the best relatedness measurement.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Influence of Clustering Methods and Their Parameters", "sec_num": "8.3" }, { "text": "In Table 2 , we show the performance of three clustering techniques for keyphrase extraction. For hierarchical clustering and spectral clustering, the cluster number m are set explicitly as the proportion of candidate terms n, while for Affinity Propagation, we set preferences as the minimum, mean, median and maximum of s(i, j) to get different number of clusters, denoted as min, mean, median and max in the table respectively.", "cite_spans": [], "ref_spans": [ { "start": 3, "end": 10, "text": "Table 2", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Influence of Clustering Methods and Their Parameters", "sec_num": "8.3" }, { "text": "As shown in the table, when cluster number m is large, spectral clustering outperforms hierarchical clustering and Affinity Propagation. 
Table 3 lists the results of the clustering-based method compared with the best results reported in (Hulth, 2003) and (Mihalcea and Tarau, 2004) on the same dataset. For each method, the table lists the total number of assigned keyphrases, the mean number of keyphrases per abstract, the total number of correct keyphrases, and the mean number of correct keyphrases. The table also lists precision, recall and F1-measure. In this table, hierarchical clustering, spectral clustering and Affinity Propagation are abbreviated by \"HC\", \"SC\" and \"AP\" respectively. The result of Hulth's method listed in this table is the best one reported in (Hulth, 2003) on the same dataset. This is a supervised classification-based method, which takes more linguistic features into consideration for keyphrase extraction. The best result is obtained using n-grams as candidate keyphrases and adding POS tags as candidate features for classification.", "cite_spans": [ { "start": 243, "end": 268, "text": "Mihalcea and Tarau, 2004)", "ref_id": "BIBREF15" } ], "ref_spans": [ { "start": 143, "end": 150, "text": "Table 3", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Influence of Clustering Methods and Their Parameters", "sec_num": "8.3" }, { "text": "The result of TextRank listed here is the best one reported in (Mihalcea and Tarau, 2004) on the same dataset. To obtain the best result, the authors built an undirected graph using window w = 2 on the word sequence of the given document, and ran the graph-based ranking algorithm on it. In this table, the best result of hierarchical clustering is obtained by setting the cluster number m = 2/3 n and using Euclidean distance for computing Wikipedia-based relatedness. The parameters of spectral clustering are the same as in the last subsection. For Affinity Propagation, the best result is obtained under p = max and using Wikipedia-based Euclidean distance as the relatedness measure.", "cite_spans": [ { "start": 63, "end": 89, "text": "(Mihalcea and Tarau, 2004)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Comparing with Other Algorithms", "sec_num": "8.4" }, { "text": "From this table, we can see the clustering-based method outperforms TextRank and Hulth's method. For spectral clustering, F1-measure achieves an approximately 9.5% improvement as compared to TextRank.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Comparing with Other Algorithms", "sec_num": "8.4" }, { "text": "Furthermore, since the clustering-based method is unsupervised, we do not need any training or validation set. In this paper, we also carry out an experiment on the whole Hulth dataset with 2,000 abstracts. The performance is similar to that on the 500 abstracts shown above. The best result is obtained when we use spectral clustering with m = 2/3 n and Wikipedia-based pmi c relatedness, the same setting as on the 500 abstracts. In this result, we extract 29,517 keyphrases, among which 9,655 are correctly extracted. The precision, recall and F1-measure are 0.327, 0.653 and 0.436 respectively. 
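For reference, the precision, recall and F1 numbers reported here can be computed as in the sketch below, which compares extracted and gold keyphrases after stemming as described in Section 8.1; the crude suffix-stripping stemmer is only a stand-in for a real stemmer such as Porter's.

```python
# Sketch of the evaluation protocol: extracted and manually assigned keyphrases are
# normalized by stemming, then precision/recall/F1 are computed over exact matches.
def stem(word):
    # toy suffix stripper standing in for a real stemmer (e.g. Porter)
    for suf in ("ing", "ed", "es", "s"):
        if word.endswith(suf) and len(word) > len(suf) + 2:
            return word[: -len(suf)]
    return word

def normalize(phrase):
    return " ".join(stem(w) for w in phrase.lower().split())

def prf(extracted, gold):
    ext = {normalize(p) for p in extracted}
    ref = {normalize(p) for p in gold}
    correct = len(ext & ref)
    p = correct / len(ext) if ext else 0.0
    r = correct / len(ref) if ref else 0.0
    f1 = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f1

if __name__ == "__main__":
    extracted = ["exemplar terms", "keyphrase extraction", "ranking methods"]
    gold = ["keyphrase extraction", "exemplar term"]
    print(prf(extracted, gold))   # roughly (0.667, 1.0, 0.8)
```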
The experiment results show that the clustering-based method is stable.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Comparing with Other Algorithms", "sec_num": "8.4" }, { "text": "From the above experiment results, we can see the clustering-based method is both robust and effective for keyphrase extraction as an unsupervised method.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Analysis and Discussions", "sec_num": "8.5" }, { "text": "Here, as an demonstration, we use spectral clustering and Wikipedia-based pmi c relatedness to extract keyphrases from the abstract of this paper. The extracted stemmed keyphrases under various cluster numbers are shown in Figure 1 . In this figure, we find that when m = 1 4 n, 1 3 n, 1 2 n, the extracted keyphrases are identical, where the exemplar terms under m = 1 3 n are marked in boldface. We find several aspects like \"unsupervised\", \"exemplar term\" and \"keyphrase extraction\" are extracted correctly. In fact, \"clustering technique\" in the abstract should also be extracted as a keyphrase. However, since \"clustering\" is tagged as a verb that ends in -ing, which disagrees the noun group patterns, thus the phrase is not among the extracted keyphrases.", "cite_spans": [], "ref_spans": [ { "start": 223, "end": 231, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Analysis and Discussions", "sec_num": "8.5" }, { "text": "When m = 2 3 n, the extracted keyphrases are noisy with many single-word phrases. As the cluster number increases, more exemplar terms are identified from these clusters, and more keyphrases will be extracted from the document based on exemplar terms. If we set the cluster number to m = n, all terms will be selected as exemplar terms. In this extreme case, all noun groups will be extracted as keyphrases, which is obviously not proper for keyphrase extraction. Thus, it is important for this method to appropriately specify the cluster number.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Analysis and Discussions", "sec_num": "8.5" }, { "text": "In the experiments, we also notice that frequent word list is important for keyphrase extraction. Without the list for filtering, the best F1-measure will decrease by about 5 percent to 40%. However, the solution of using frequent word list is somewhat too simple, and in future work, we plan to investigate a better combination of clusteringbased method with traditional methods using term frequency as the criteria.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Analysis and Discussions", "sec_num": "8.5" }, { "text": "In this paper, we propose an unsupervised clustering-based keyphrase extraction algorithm. This method groups candidate terms into clusters and identify the exemplar terms. Then keyphrases are extracted from the document based on the exemplar terms. The clustering based on term semantic relatedness guarantees the extracted keyphrases have a good coverage of the document. 
Experiment results show the method has good effectiveness and robustness, and outperforms baselines significantly. [Content of Figure 1, referenced in Section 8.5: keyphrases extracted from this paper's abstract when m = 1/4 n, 1/3 n, 1/2 n are \"unsupervis method; various unsupervis rank method; exemplar term; state-of-the-art graph-bas rank method; keyphras; keyphras extract\"; keyphrases when m = 2/3 n are \"unsupervis method; manual assign; brief summari; various unsupervis rank method; exemplar term; document; state-of-the-art graph-bas rank method; experi; keyphras; import score; keyphras extract\".]", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion and Future Work", "sec_num": "9" }, { "text": "Future work may include:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion and Future Work", "sec_num": "9" }, { "text": "1. Investigate the feasibility of clustering directly on noun groups;", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion and Future Work", "sec_num": "9" }, { "text": "2. Investigate the feasibility of combining cooccurrence-based and Wikipedia-based relatedness for clustering;", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion and Future Work", "sec_num": "9" }, { "text": "3. Investigate the performance of the method on other types of documents, such as long articles, product reviews and news;", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion and Future Work", "sec_num": "9" }, { "text": "4. The solution of using a frequent word list for filtering out too common single-word keyphrases is undoubtedly simple, and we plan to make a better combination of the clustering-based method with traditional frequency-based methods for keyphrase extraction.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion and Future Work", "sec_num": "9" }, { "text": "2. Calculating term relatedness. We use some measures to calculate the semantic relatedness of candidate terms. 3. Term clustering. Based on term relatedness, we group candidate terms into clusters and find the exemplar terms of each cluster. 4. From exemplar terms to keyphrases. Finally, we use these exemplar terms to extract keyphrases from the document. In the next four sections we describe the algorithm in detail.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "The package could be accessed via http://www.cs.ucsb.edu/~wychen/sc.html.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "The package could be accessed via http://www.psi.toronto.edu/affinitypropagation/.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "The package could be accessed via http://nlp.stanford.edu/software/tagger.shtml. 4 Many thanks to Anette Hulth for providing us the dataset.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "The dataset can be obtained from http://www.cs.
technion.ac.il/\u02dcgabr/resources/code/ wikiprep/.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "A practical system of keyphrase extraction for web pages", "authors": [ { "first": "Mo", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Jian-Tao", "middle": [], "last": "Sun", "suffix": "" }, { "first": "Hua-Jun", "middle": [], "last": "Zeng", "suffix": "" }, { "first": "Kwok-Yan", "middle": [], "last": "Lam", "suffix": "" } ], "year": 2005, "venue": "Proceedings of the 14th ACM international conference on Information and knowledge management", "volume": "", "issue": "", "pages": "277--278", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mo Chen, Jian-Tao Sun, Hua-Jun Zeng, and Kwok-Yan Lam. 2005. A practical system of keyphrase extrac- tion for web pages. In Proceedings of the 14th ACM international conference on Information and knowl- edge management, pages 277-278.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Psc: Paralel spectral clustering", "authors": [ { "first": "Y", "middle": [], "last": "Wen", "suffix": "" }, { "first": "Yangqiu", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Hongjie", "middle": [], "last": "Song", "suffix": "" }, { "first": "Chih", "middle": [ "J" ], "last": "Bai", "suffix": "" }, { "first": "Edward", "middle": [], "last": "Lin", "suffix": "" }, { "first": "", "middle": [], "last": "Chang", "suffix": "" } ], "year": 2008, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wen Y. Chen, Yangqiu Song, Hongjie Bai, Chih J. Lin, and Edward Chang. 2008. Psc: Paralel spectral clustering. Submitted.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "The google similarity distance", "authors": [ { "first": "L", "middle": [], "last": "Rudi", "suffix": "" }, { "first": "", "middle": [], "last": "Cilibrasi", "suffix": "" }, { "first": "M", "middle": [ "B" ], "last": "Paul", "suffix": "" }, { "first": "", "middle": [], "last": "Vitanyi", "suffix": "" } ], "year": 2007, "venue": "IEEE Transactions on Knowledge and Data Engineering", "volume": "19", "issue": "3", "pages": "370--383", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rudi L. Cilibrasi and Paul M. B. Vitanyi. 2007. The google similarity distance. IEEE Transactions on Knowledge and Data Engineering, 19(3):370-383.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Generating summary keywords for emails using topics", "authors": [ { "first": "Mark", "middle": [], "last": "Dredze", "suffix": "" }, { "first": "Hanna", "middle": [ "M" ], "last": "Wallach", "suffix": "" }, { "first": "Danny", "middle": [], "last": "Puller", "suffix": "" }, { "first": "Fernando", "middle": [], "last": "Pereira", "suffix": "" } ], "year": 2008, "venue": "Proceedings of the 13th international conference on Intelligent user interfaces", "volume": "", "issue": "", "pages": "199--206", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mark Dredze, Hanna M. Wallach, Danny Puller, and Fernando Pereira. 2008. Generating summary key- words for emails using topics. 
In Proceedings of the 13th international conference on Intelligent user in- terfaces, pages 199-206.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Kp-miner: A keyphrase extraction system for english and arabic documents", "authors": [ { "first": "S", "middle": [], "last": "Elbeltagy", "suffix": "" }, { "first": "A", "middle": [], "last": "Rafea", "suffix": "" } ], "year": 2009, "venue": "Information Systems", "volume": "34", "issue": "1", "pages": "132--144", "other_ids": {}, "num": null, "urls": [], "raw_text": "S. Elbeltagy and A. Rafea. 2009. Kp-miner: A keyphrase extraction system for english and arabic documents. Information Systems, 34(1):132-144.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Domain-specific keyphrase extraction", "authors": [ { "first": "Eibe", "middle": [], "last": "Frank", "suffix": "" }, { "first": "Gordon", "middle": [ "W" ], "last": "Paynter", "suffix": "" }, { "first": "Ian", "middle": [ "H" ], "last": "Witten", "suffix": "" }, { "first": "Carl", "middle": [], "last": "Gutwin", "suffix": "" }, { "first": "Craig", "middle": [ "G" ], "last": "Nevill-Manning", "suffix": "" } ], "year": 1999, "venue": "Proceedings of the 16th International Joint Conference on Artificial Intelligence", "volume": "", "issue": "", "pages": "668--673", "other_ids": {}, "num": null, "urls": [], "raw_text": "Eibe Frank, Gordon W. Paynter, Ian H. Witten, Carl Gutwin, and Craig G. Nevill-Manning. 1999. Domain-specific keyphrase extraction. In Proceed- ings of the 16th International Joint Conference on Artificial Intelligence, pages 668-673.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Clustering by passing messages between data points", "authors": [ { "first": "J", "middle": [ "J" ], "last": "Brendan", "suffix": "" }, { "first": "Delbert", "middle": [], "last": "Frey", "suffix": "" }, { "first": "", "middle": [], "last": "Dueck", "suffix": "" } ], "year": 2007, "venue": "Science", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Brendan J J. Frey and Delbert Dueck. 2007. Clustering by passing messages between data points. Science.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Computing semantic relatedness using wikipedia-based explicit semantic analysis", "authors": [ { "first": "E", "middle": [], "last": "Gabrilovich", "suffix": "" }, { "first": "S", "middle": [], "last": "Markovitch", "suffix": "" } ], "year": 2007, "venue": "Proceedings of the 20th International Joint Conference on Artificial Intelligence", "volume": "", "issue": "", "pages": "6--12", "other_ids": {}, "num": null, "urls": [], "raw_text": "E. Gabrilovich and S. Markovitch. 2007. Computing semantic relatedness using wikipedia-based explicit semantic analysis. In Proceedings of the 20th Inter- national Joint Conference on Artificial Intelligence, pages 6-12.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Extracting key terms from noisy and multi-theme documents", "authors": [ { "first": "M", "middle": [], "last": "Grineva", "suffix": "" }, { "first": "M", "middle": [], "last": "Grinev", "suffix": "" }, { "first": "D", "middle": [], "last": "Lizorkin", "suffix": "" } ], "year": 2009, "venue": "Proceedings of the 18th international conference on World wide web", "volume": "", "issue": "", "pages": "661--670", "other_ids": {}, "num": null, "urls": [], "raw_text": "M. Grineva, M. Grinev, and D. Lizorkin. 2009. Ex- tracting key terms from noisy and multi-theme docu- ments. 
In Proceedings of the 18th international con- ference on World wide web, pages 661-670. ACM New York, NY, USA.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Data Mining: Concepts and Techniques", "authors": [ { "first": "Jiawei", "middle": [], "last": "Han", "suffix": "" }, { "first": "Micheline", "middle": [], "last": "Kamber", "suffix": "" } ], "year": 2005, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jiawei Han and Micheline Kamber. 2005. Data Min- ing: Concepts and Techniques, second edition. Mor- gan Kaufmann.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Keyphrase extraction using semantic networks structure analysis", "authors": [ { "first": "Chong", "middle": [], "last": "Huang", "suffix": "" }, { "first": "Yonghong", "middle": [], "last": "Tian", "suffix": "" }, { "first": "Zhi", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "Charles", "middle": [ "X" ], "last": "Ling", "suffix": "" }, { "first": "Tiejun", "middle": [], "last": "Huang", "suffix": "" } ], "year": 2006, "venue": "Proceedings of the 6th International Conference on Data Mining", "volume": "", "issue": "", "pages": "275--284", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chong Huang, Yonghong Tian, Zhi Zhou, Charles X. Ling, and Tiejun Huang. 2006. Keyphrase extrac- tion using semantic networks structure analysis. In Proceedings of the 6th International Conference on Data Mining, pages 275-284.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Improved automatic keyword extraction given more linguistic knowledge", "authors": [ { "first": "Anette", "middle": [], "last": "Hulth", "suffix": "" } ], "year": 2003, "venue": "Proceedings of the 2003 conference on Empirical methods in natural language processing", "volume": "", "issue": "", "pages": "216--223", "other_ids": {}, "num": null, "urls": [], "raw_text": "Anette Hulth. 2003. Improved automatic keyword ex- traction given more linguistic knowledge. In Pro- ceedings of the 2003 conference on Empirical meth- ods in natural language processing, pages 216-223.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Reducing false positives by expert combination in automatic keyword indexing. Recent Advances in Natural Language Processing III: Selected Papers from RANLP", "authors": [ { "first": "A", "middle": [], "last": "Hulth", "suffix": "" } ], "year": 2003, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "A. Hulth. 2004. Reducing false positives by expert combination in automatic keyword indexing. Re- cent Advances in Natural Language Processing III: Selected Papers from RANLP 2003, page 367.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Automatic hypertext keyphrase detection", "authors": [ { "first": "Daniel", "middle": [], "last": "Kelleher", "suffix": "" }, { "first": "Saturnino", "middle": [], "last": "Luz", "suffix": "" } ], "year": 2005, "venue": "Proceedings of the 19th International Joint Conference on Artificial Intelligence", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Daniel Kelleher and Saturnino Luz. 2005. Automatic hypertext keyphrase detection. 
In Proceedings of the 19th International Joint Conference on Artificial In- telligence.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Graph-based keyword extraction for single-document summarization", "authors": [ { "first": "Marina", "middle": [], "last": "Litvak", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Last", "suffix": "" } ], "year": 2008, "venue": "Proceedings of the workshop Multi-source Multilingual Information Extraction and Summarization", "volume": "", "issue": "", "pages": "17--24", "other_ids": {}, "num": null, "urls": [], "raw_text": "Marina Litvak and Mark Last. 2008. Graph-based keyword extraction for single-document summariza- tion. In Proceedings of the workshop Multi-source Multilingual Information Extraction and Summa- rization, pages 17-24.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Textrank: Bringing order into texts", "authors": [ { "first": "Rada", "middle": [], "last": "Mihalcea", "suffix": "" }, { "first": "Paul", "middle": [], "last": "Tarau", "suffix": "" } ], "year": 2004, "venue": "Proceedings of the 2004 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rada Mihalcea and Paul Tarau. 2004. Textrank: Bringing order into texts. In Proceedings of the 2004 Conference on Empirical Methods in Natural Language Processing.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Learning to Extract Keyphrases from Text. National Research Council Canada, Institute for Information Technology", "authors": [ { "first": "D", "middle": [], "last": "Peter", "suffix": "" }, { "first": "", "middle": [], "last": "Turney", "suffix": "" } ], "year": 1999, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Peter D. Turney. 1999. Learning to Extract Keyphrases from Text. National Research Council Canada, In- stitute for Information Technology, Technical Report ERB-1057.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "A tutorial on spectral clustering", "authors": [ { "first": "U", "middle": [], "last": "Luxburg", "suffix": "" } ], "year": 2006, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "U. von Luxburg. 2006. A tutorial on spectral clus- tering. Technical report, Max Planck Institute for Biological Cybernetics.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Collabrank: Towards a collaborative approach to singledocument keyphrase extraction", "authors": [ { "first": "Xiaojun", "middle": [], "last": "Wan", "suffix": "" }, { "first": "Jianguo", "middle": [], "last": "Xiao", "suffix": "" } ], "year": 2008, "venue": "Proceedings of COLING", "volume": "", "issue": "", "pages": "969--976", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xiaojun Wan and Jianguo Xiao. 2008a. Col- labrank: Towards a collaborative approach to single- document keyphrase extraction. 
In Proceedings of COLING, pages 969-976.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Single document keyphrase extraction using neighborhood knowledge", "authors": [ { "first": "Xiaojun", "middle": [], "last": "Wan", "suffix": "" }, { "first": "Jianguo", "middle": [], "last": "Xiao", "suffix": "" } ], "year": 2008, "venue": "Proceedings of the Twenty-Third AAAI Conference on Artificial Intelligence", "volume": "", "issue": "", "pages": "855--860", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xiaojun Wan and Jianguo Xiao. 2008b. Single document keyphrase extraction using neighborhood knowledge. In Proceedings of the Twenty-Third AAAI Conference on Artificial Intelligence, pages 855-860.", "links": null } }, "ref_entries": { "FIGREF0": { "uris": null, "type_str": "figure", "text": "Keyphrases in stemmed form extracted from this paper's abstract.", "num": null }, "TABREF0": { "type_str": "table", "content": "
Influence of relatedness measurements for keyphrase extraction.

Parameters    Precision    Recall    F1-measure
Cooccurrence-based Relatedness
w = 2         0.331        0.626     0.433
w = 4         0.333        0.621     0.434
w = 6         0.331        0.630     0.434
w = 8         0.330        0.623     0.432
w = 10        0.333        0.632     0.436
Wikipedia-based Relatedness
cos           0.348        0.655     0.455
euc           0.344        0.634     0.446
pmi_p         0.344        0.621     0.443
pmi_t         0.344        0.619     0.442
pmi_c         0.350        0.660     0.457
ngd           0.343        0.620     0.442
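To make the window parameter w above concrete, the following minimal Python sketch (our own illustration under stated assumptions, not the authors' code; the function name and the toy token list are hypothetical) counts how often two terms cooccur within a window of w consecutive tokens:

    # Toy sketch: a cooccurrence is counted whenever two terms fall inside the
    # same window of w consecutive tokens (each position pair counted once).
    from collections import Counter

    def cooccurrence_counts(tokens, w=2):
        counts = Counter()
        for i, t in enumerate(tokens):
            for u in tokens[i + 1:i + w]:   # remaining tokens in the w-token window at position i
                if u != t:
                    counts[tuple(sorted((t, u)))] += 1
        return counts

    tokens = "clustering finds exemplar terms for keyphrase extraction".split()
    print(cooccurrence_counts(tokens, w=2))   # with w = 2, only adjacent tokens cooccur

As the table suggests, the extraction results are fairly insensitive to the exact choice of w.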
", "num": null, "text": "", "html": null }, "TABREF1": { "type_str": "table", "content": "
Parameters    Precision    Recall    F1-measure
Hierarchical Clustering
m = (1/4)n    0.365        0.369     0.367
m = (1/3)n    0.365        0.369     0.367
m = (1/2)n    0.351        0.562     0.432
m = (2/3)n    0.346        0.629     0.446
m = (4/5)n    0.340        0.657     0.448
Spectral Clustering
m = (1/4)n    0.385        0.409     0.397
m = (1/3)n    0.374        0.497     0.427
m = (1/2)n    0.374        0.497     0.427
m = (2/3)n    0.350        0.660     0.457
m = (4/5)n    0.340        0.679     0.453
Affinity Propagation
p = max       0.331        0.688     0.447
p = mean      0.433        0.070     0.121
p = median    0.422        0.078     0.132
p = min       0.419        0.059     0.103
Among these methods, only Affinity Propagation under some parameter settings performs poorly.
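To make the settings in this table concrete, here is a rough Python sketch using scikit-learn; the library, the random placeholder vectors, and all variable names are our assumptions for illustration, not the implementation used in the paper. m is the number of clusters for hierarchical and spectral clustering, and p is the Affinity Propagation preference, set here to the median of the pairwise similarities:

    # Hypothetical illustration (not the paper's code) of the three clustering settings.
    import numpy as np
    from sklearn.cluster import AgglomerativeClustering, SpectralClustering, AffinityPropagation

    X = np.random.rand(30, 8)          # placeholder term vectors: n terms, 8 features each
    n = X.shape[0]
    m = n // 3                         # e.g. m = (1/3)n clusters, one setting from the table

    hc_labels = AgglomerativeClustering(n_clusters=m).fit_predict(X)             # hierarchical
    sc_labels = SpectralClustering(n_clusters=m, random_state=0).fit_predict(X)  # spectral

    # Affinity Propagation picks exemplars itself; the table's p is its "preference",
    # here the median of the pairwise similarities (negative Euclidean distances).
    S = -np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    ap = AffinityPropagation(preference=np.median(S), random_state=0).fit(X)
    exemplar_indices = ap.cluster_centers_indices_   # one exemplar term per cluster

Note that only Affinity Propagation chooses the number of exemplars automatically; for the other two methods m must be fixed in advance, which is what the m = (1/4)n ... (4/5)n rows vary.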
", "num": null, "text": "Influence of clustering methods for keyphrase extraction.", "html": null }, "TABREF2": { "type_str": "table", "content": "
                Assigned        Correct
Method          Total   Mean    Total   Mean    Precision    Recall    F1-measure
Hulth's         7,815   15.6    1,973   3.9     0.252        0.517     0.339
TextRank        6,784   13.7    2,116   4.2     0.312        0.431     0.362
HC              7,303   14.6    2,494   5.0     0.342        0.657     0.449
SC              7,158   14.3    2,505   5.0     0.350        0.660     0.457
AP              8,013   16.0    2,648   5.3     0.330        0.697     0.448
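Reading guide (our interpretation, stated as an assumption): "Assigned" counts the keyphrases each method extracts and "Correct" counts those matching the standard answers, so Precision should equal Correct Total / Assigned Total. A quick Python check for the first row:

    # Assumption check: Precision = Correct Total / Assigned Total (Hulth's row).
    assigned_total, correct_total = 7815, 1973
    print(round(correct_total / assigned_total, 3))   # -> 0.252, matching the table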
", "num": null, "text": "Comparison results of Hulth's method, TextRank and our clustering-based method.", "html": null } } } }