{ "paper_id": "I13-1002", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T07:15:33.651395Z" }, "title": "WordTopic-MultiRank : A New Method for Automatic Keyphrase Extraction", "authors": [ { "first": "Fan", "middle": [], "last": "Zhang", "suffix": "", "affiliation": { "laboratory": "The Shenzhen Key Lab for Cloud Computing Technology and Applications Peking University Shenzhen Graduate School", "institution": "", "location": { "postCode": "518055", "settlement": "Shenzhen", "country": "P.R.China" } }, "email": "" }, { "first": "\u2020", "middle": [], "last": "Lian'en Huang", "suffix": "", "affiliation": { "laboratory": "The Shenzhen Key Lab for Cloud Computing Technology and Applications Peking University Shenzhen Graduate School", "institution": "", "location": { "postCode": "518055", "settlement": "Shenzhen", "country": "P.R.China" } }, "email": "" }, { "first": "Bo", "middle": [], "last": "Peng", "suffix": "", "affiliation": {}, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Automatic keyphrase extraction aims to pick out a set of terms as a representation of a document without manual assignment efforts. Supervised and unsupervised graph-based ranking methods have been studied for this task. However, previous methods usually computed importance scores of words under the assumption of single relation between words. In this work, we propose WordTopic-MultiRank as a new method for keyphrase extraction, based on the idea that words relate with each other via multiple relations. First we treat various latent topics in documents as heterogeneous relations between words and construct a multi-relational word network. Then, a novel ranking algorithm, named Biased-MultiRank, is applied to score the importance of words and topics simultaneously, as words and topics are considered to have mutual influence on each other. Experimental results on two different data sets show the outstanding performance and robustness of our proposed approach in automatic keyphrase extraction task.", "pdf_parse": { "paper_id": "I13-1002", "_pdf_hash": "", "abstract": [ { "text": "Automatic keyphrase extraction aims to pick out a set of terms as a representation of a document without manual assignment efforts. Supervised and unsupervised graph-based ranking methods have been studied for this task. However, previous methods usually computed importance scores of words under the assumption of single relation between words. In this work, we propose WordTopic-MultiRank as a new method for keyphrase extraction, based on the idea that words relate with each other via multiple relations. First we treat various latent topics in documents as heterogeneous relations between words and construct a multi-relational word network. Then, a novel ranking algorithm, named Biased-MultiRank, is applied to score the importance of words and topics simultaneously, as words and topics are considered to have mutual influence on each other. Experimental results on two different data sets show the outstanding performance and robustness of our proposed approach in automatic keyphrase extraction task.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Keyphrases refer to the meaningful words and phrases that can precisely and compactly represent documents. Appropriate keyphrases help users a lot in better grasping and remembering key ideas of articles, as well as fast browsing and reading. 
Moreover, the quality of some information retrieval and natural language processing tasks has been improved with the help of document keyphrases, such as document indexing, categorizing, clustering and summarizing (Gutwin et al., 1999; Krulwich and Burkey, 1996; Hammouda et al., 2005).", "cite_spans": [ { "start": 20, "end": 41, "text": "(Gutwin et al., 1999;", "ref_id": "BIBREF2" }, { "start": 42, "end": 68, "text": "Krulwich and Burkey, 1996;", "ref_id": "BIBREF7" }, { "start": 69, "end": 91, "text": "Hammouda et al., 2005)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Usually, keyphrases are manually assigned by authors, which is time-consuming. With the fast development of the Internet, it becomes impractical to label them by human effort as articles on the Web increase exponentially. Therefore, automatic keyphrase extraction plays an important role in the keyphrase assignment task.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In most existing work, words are assumed to interact under a single relation and are then scored or judged within it. Consider the famous TextRank (Mihalcea and Tarau, 2004): a term graph under a single relatedness measure was built first, and then a graph-based ranking algorithm, such as PageRank (Page et al., 1999), was used to determine the importance score of each term. Another compelling example is (Liu et al., 2010), where words were scored under each topic separately.", "cite_spans": [ { "start": 134, "end": 160, "text": "(Mihalcea and Tarau, 2004)", "ref_id": "BIBREF12" }, { "start": 275, "end": 294, "text": "(Page et al., 1999)", "ref_id": "BIBREF15" }, { "start": 385, "end": 403, "text": "(Liu et al., 2010)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this study, inspired by multi-relational data mining techniques such as (Ng et al., 2011), we treat each topic as a single relation type and construct an intra-topic word network for each relation type. In other words, we map word relatedness within multiple topics to heterogeneous relations, meaning that words interact with one another based on different topics.", "cite_spans": [ { "start": 81, "end": 98, "text": "(Ng et al., 2011)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "An example of multi-relational words in our proposed WordTopic-MultiRank model is shown in Figure 1(a). There are four words and three relations in this example, implying that three potential topics are contained in the document. Further, we represent such multi-relational data as a tensor in Figure 1(b), where each two-dimensional plane represents an adjacency matrix for one type of topic.
Then the heterogeneous network can be depicted as a tensor of size 4 × 4 × 3, where the (i, j, k) entry is nonzero if the i-th word is related to the j-th word under the k-th topic.", "cite_spans": [], "ref_spans": [ { "start": 87, "end": 95, "text": "Figure 1", "ref_id": "FIGREF0" }, { "start": 303, "end": 314, "text": "Figure 1(b)", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "After that, we propose a novel measurement of word relatedness that considers different topics, and then apply the Biased-MultiRank algorithm to co-rank the multi-relational words, based on the idea that words and topics have mutual influence on each other. More specifically, a word connected with highly scored words via highly scored topics should receive a high score itself, and similarly, a topic connecting highly scored words should get a high score as well.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Experiments have been performed on two different data sets. One is a collection of scientific publication abstracts, while the other consists of news articles with human-annotated keyphrases. Experimental results demonstrate that our WordTopic-MultiRank method outperforms representative baseline approaches on the specified evaluation metrics. We have also investigated how different parameter values influence the performance of our method.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The rest of this paper is organized as follows. Section 2 introduces related work. In Section 3, details of constructing and applying the WordTopic-MultiRank model are presented. Section 4 shows experiments and results on two different data sets. Finally, in Section 5, conclusions and future work are discussed.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Existing methods for the keyphrase extraction task can be divided into supervised and unsupervised approaches. The supervised methods mainly treat keyphrase extraction as a classification task, so a model needs to be trained before classifying whether a candidate phrase is a keyphrase or not. Turney (1999) first utilized a genetic algorithm with parameterized heuristic rules for keyphrase extraction, and Hulth (2003) then added more linguistic knowledge as features to achieve better performance. Later, Jiang et al. (2009) employed a linear Ranking SVM, a learning-to-rank method, to extract keyphrases. However, supervised methods require a training set, which demands time-consuming human annotation, making them impractical at the scale of the Web. In this work, we principally concentrate on unsupervised methods.", "cite_spans": [ { "start": 407, "end": 419, "text": "Hulth (2003)", "ref_id": "BIBREF5" }, { "start": 496, "end": 515, "text": "Jiang et al. (2009)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Among the unsupervised approaches, clustering and graph-based ranking methods have shown good performance on this task. Representative studies of clustering approaches are (Liu et al., 2009) and (Grineva et al., 2009). Liu et al. (2009) made use of clustering methods to find exemplar terms and then selected terms from each cluster as keyphrases. Grineva et al.
(2009) applied graph community detection techniques to partition the term graph into thematically cohesive groups and selected the groups that contained key terms, discarding groups with unimportant terms. But as is widely known, one of the major difficulties in clustering is to predefine the number of clusters, which influences performance heavily.", "cite_spans": [ { "start": 170, "end": 188, "text": "(Liu et al., 2009)", "ref_id": "BIBREF10" }, { "start": 193, "end": 215, "text": "(Grineva et al., 2009)", "ref_id": "BIBREF1" }, { "start": 218, "end": 235, "text": "Liu et al. (2009)", "ref_id": "BIBREF10" }, { "start": 347, "end": 368, "text": "Grineva et al. (2009)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "As for basic graph-based approaches, such as (Mihalcea and Tarau, 2004) and (Litvak and Last, 2008), a graph based on word linkage or word similarity was first constructed, and then a ranking algorithm was used to determine the importance score of each term. Wan et al. (2007) presented an idea of extracting the summary and keywords simultaneously, under the assumption that the summary and keywords of the same document can mutually boost each other. Moreover, Wan and Xiao (2008a) used a small number of nearest-neighbor documents to provide more knowledge and improve performance, and similarly, Wan and Xiao (2008b) made use of multiple documents within a cluster context. Recently, topical information has been considered in combination with graph-based approaches. One of the outstanding studies was Topic-sensitive PageRank (Haveliwala, 2002), which computed scores of web pages by incorporating the topics of the context. As another representative, Topical PageRank (Liu et al., 2010) applied a Biased PageRank to assign an importance score to each term under every latent topic separately.", "cite_spans": [ { "start": 45, "end": 71, "text": "(Mihalcea and Tarau, 2004)", "ref_id": "BIBREF12" }, { "start": 76, "end": 99, "text": "(Litvak and Last, 2008)", "ref_id": "BIBREF9" }, { "start": 256, "end": 273, "text": "Wan et al. (2007)", "ref_id": "BIBREF19" }, { "start": 445, "end": 465, "text": "Wan and Xiao (2008a)", "ref_id": "BIBREF17" }, { "start": 583, "end": 603, "text": "Wan and Xiao (2008b)", "ref_id": "BIBREF18" }, { "start": 816, "end": 834, "text": "(Haveliwala, 2002)", "ref_id": "BIBREF4" }, { "start": 956, "end": 974, "text": "(Liu et al., 2010)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "To the best of our knowledge, previous graph-based research is based on the assumption that all words exist under a unified relation, while in this work, we view latent topics within documents as word relations and words as multi-relational data, in order to make full use of word-word relatedness, word-topic interactions and inter-topic impacts.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "In this section, we introduce our proposed WordTopic-MultiRank method in detail, including topic decomposition, word relatedness measurement, heterogeneous network construction and the Biased-MultiRank algorithm.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "WordTopic-MultiRank Method", "sec_num": "3" },
{ "text": "There are some existing methods to infer the latent topics of words and documents. Latent Dirichlet Allocation (LDA) (Blei et al., 2003) is adopted in our work, as it is more feasible for inference and it can reduce the risk of over-fitting.", "cite_spans": [ { "start": 113, "end": 132, "text": "(Blei et al., 2003)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Topic Detection via Latent Dirichlet Allocation", "sec_num": "3.1" }, { "text": "Firstly, we denote the learning corpus for LDA as C, and |C| represents the total number of documents in C. The i-th document in the corpus is denoted as d_i, in which i = 1, 2, ..., |C|. Then, words are denoted as w_ij, where i indicates that word w_ij appears in document d_i and j refers to the j-th position in d_i (j = 1, 2, ..., |d_i|, where |d_i| is the total number of words in d_i). Further, the topics inferred from C are z_k, k = 1, 2, ..., |T|, where T stands for the topic set detected from C and |T| is the total number of topics.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Topic Detection via Latent Dirichlet Allocation", "sec_num": "3.1" }, { "text": "According to LDA, the observed words in each document are supposed to be generated by a document-specific mixture of corpus-wide latent topics. More specifically, each word w_ij in document d_i is generated by first sampling a topic z_k from d_i's document-topic multinomial distribution θ_{d_i}, and then sampling a word from z_k's topic-word multinomial distribution φ_{z_k}. Each θ_{d_i} is generated from a conjugate Dirichlet prior with parameter α, while each φ_{z_k} is generated from a conjugate Dirichlet prior with parameter β.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Topic Detection via Latent Dirichlet Allocation", "sec_num": "3.1" },
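{ "text": "The generative process just described can be made concrete with a small numpy sketch (ours, purely illustrative; the paper itself infers the distributions with GibbsLDA++, introduced below). The toy sizes, the random seed and the names theta_d and phi are our own assumptions:
```python
import numpy as np

rng = np.random.default_rng(0)
n_topics, vocab_size = 3, 8    # toy sizes, purely illustrative
alpha, beta = 1.0, 0.01        # the Dirichlet priors used later in Section 4.1.3

# document-topic and topic-word multinomials drawn from their Dirichlet priors
theta_d = rng.dirichlet(alpha * np.ones(n_topics))         # p(z_k | d_i)
phi = rng.dirichlet(beta * np.ones(vocab_size), n_topics)  # p(w | z_k), one row per topic

# generate one word w_ij: first sample a topic, then a word from that topic
z = rng.choice(n_topics, p=theta_d)
w = rng.choice(vocab_size, p=phi[z])
```", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Topic Detection via Latent Dirichlet Allocation", "sec_num": null },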
{ "text": "The full generative model for w_ij is given by:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Topic Detection via Latent Dirichlet Allocation", "sec_num": "3.1" }, { "text": "p(w_ij | d_i, α, β) = ∑_{k=1}^{|T|} p(w_ij | z_k, β) p(z_k | d_i, α) (1)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Topic Detection via Latent Dirichlet Allocation", "sec_num": "3.1" }, { "text": "Using LDA, we finally obtain the document-topic distribution, namely p(z_k | d_i) for all topics z_k on each document d_i, as well as the topic-word distribution, namely p(w_ij | z_k) for all words w_ij on each topic z_k.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Topic Detection via Latent Dirichlet Allocation", "sec_num": "3.1" }, { "text": "In this work, we use GibbsLDA++ 1 , a C/C++ implementation of LDA using Gibbs sampling, to detect latent topics.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Topic Detection via Latent Dirichlet Allocation", "sec_num": "3.1" }, { "text": "Next, we apply Bayes' theorem to get the word-topic distribution p(z_k | w_ij) for every word in a given document d_i:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Measurement of Word Relatedness under Multi-relations", "sec_num": "3.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "p(z_k | w_ij) = p(w_ij | z_k, β) p(z_k | d_i, α) / ∑_{k'=1}^{|T|} p(w_ij | z_{k'}, β) p(z_{k'} | d_i, α)", "eq_num": "(2)" } ], "section": "Measurement of Word Relatedness under Multi-relations", "sec_num": "3.2" }, { "text": "Therefore, we can obtain the word relatedness as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Measurement of Word Relatedness under Multi-relations", "sec_num": "3.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "p(w_im | w_in, z_k) = p(w_im | z_k) p(z_k | w_in)", "eq_num": "(3)" } ], "section": "Measurement of Word Relatedness under Multi-relations", "sec_num": "3.2" }, { "text": "where m, n = 1, 2, ..., |d_i|, and p(w_im | w_in, z_k) represents the relatedness of word w_im and word w_in under the k-th topic. From the view of probability, p(z_k | w_in) is the probability of word w_in being assigned to topic z_k, and p(w_im | z_k) is the probability of generating word w_im from the same topic z_k. Therefore, p(w_im | w_in, z_k) gives the probability of generating word w_im if we have observed word w_in under topic z_k. This point of view is consistent with LDA, and it connects words via topics.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Measurement of Word Relatedness under Multi-relations", "sec_num": "3.2" },
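{ "text": "Formulas (2) and (3) above reduce to a few array operations once the LDA outputs are available. Below is a minimal numpy sketch (ours, not the paper's code), assuming phi is the |T| × V topic-word matrix and theta_d is the topic distribution of the current document:
```python
import numpy as np

def word_relatedness(phi, theta_d):
    # joint[k, j] = p(w_j | z_k) * p(z_k | d_i)
    joint = phi * theta_d[:, None]
    # Formula (2): Bayes' theorem, normalizing over topics for each word
    p_z_given_w = joint / joint.sum(axis=0, keepdims=True)
    # Formula (3): rel[m, n, k] = p(w_m | z_k) * p(z_k | w_n)
    rel = np.einsum('km,kn->mnk', phi, p_z_given_w)
    return p_z_given_w, rel
```", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Measurement of Word Relatedness under Multi-relations", "sec_num": null },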
{ "text": "As shown in Figure 1(a) in the Introduction, we now construct a multi-relational network on words. In the same way as typical graph-based methods, for every document d_i in corpus C, we treat every single word as a vertex and make use of word co-occurrence to construct a word graph, as co-occurrence indicates the cohesion relationship between words in the context of document d_i. In this process, a sliding window of at most W words is moved over the word sequences of the documents. Words appearing in the same window are linked to each other under all the relations in the network. Further, we obtain the word relatedness under every topic from Formula (3) and use these values as edge weights when constructing the heterogeneous network. For instance, p(w_im | w_in, z_k) is regarded as the weight of the edge from w_in to w_im under the k-th relation if there is a co-occurrence relation between the two words in document d_i.", "cite_spans": [], "ref_spans": [ { "start": 5, "end": 13, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Constructing a Heterogeneous Network on Words", "sec_num": "3.3" }, { "text": "As (Hulth, 2003) pointed out, most manually assigned keyphrases are noun groups whose pattern is zero or more adjectives followed by one or more nouns. We therefore only take adjectives and nouns into consideration when constructing the networks in our experiments.", "cite_spans": [ { "start": 3, "end": 16, "text": "(Hulth, 2003)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Constructing a Heterogeneous Network on Words", "sec_num": "3.3" }, { "text": "In our proposed method, we employ the Biased-MultiRank algorithm to co-rank the importance of words and topics. It is obtained by adding prior knowledge of words and topics to Basic-MultiRank, a co-ranking scheme designed for objects and relations in multi-relational data. Therefore, we present Basic-MultiRank first and then derive the Biased-MultiRank algorithm from it.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Ranking Algorithm", "sec_num": "3.4" }, { "text": "In this subsection, we take document d_i into discussion for convenience. First, we call A = (a_{w_im, w_in, z_k}) a real (2, 1)th order (|d_i| × |T|)-dimensional rectangular tensor, where a_{w_im, w_in, z_k} denotes the relatedness p(w_im | w_in, z_k) obtained in the last subsection, in which m, n = 1, 2, ..., |d_i| and k = 1, 2, ..., |T|. For example, Figure 1(b) is a (2, 1)th order (4 × 3)-dimensional tensor representation of a document in which there are 4 words and 3 topics.", "cite_spans": [], "ref_spans": [ { "start": 89, "end": 97, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Basic-MultiRank Algorithm", "sec_num": "3.4.1" },
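{ "text": "A minimal sketch (ours, not the authors' implementation) of how the tensor A could be assembled by combining the co-occurrence links of Section 3.3 with the relatedness scores of Formula (3); the inputs tokens, vocab and rel are hypothetical:
```python
import numpy as np

def build_tensor_A(tokens, vocab, rel, W=2):
    # tokens: POS-filtered word sequence of document d_i (assumed input)
    # vocab:  word -> index within d_i; rel[m, n, k] comes from Formula (3)
    A = np.zeros_like(rel)
    for i in range(len(tokens)):
        for j in range(i + 1, min(i + W, len(tokens))):
            m, n = vocab[tokens[i]], vocab[tokens[j]]
            if m != n:
                # co-occurring words are linked under every topic,
                # weighted by their topic-conditioned relatedness
                A[m, n, :] = rel[m, n, :]
                A[n, m, :] = rel[n, m, :]
    return A
```", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Basic-MultiRank Algorithm", "sec_num": null },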
{ "text": "Then two transition probability tensors O = (o_{w_im, w_in, z_k}) and R = (r_{w_im, w_in, z_k}) are constructed with respect to words and topics by normalizing all the entries of A:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Basic-MultiRank Algorithm", "sec_num": "3.4.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "o_{w_im, w_in, z_k} = a_{w_im, w_in, z_k} / ∑_{m=1}^{|d_i|} a_{w_im, w_in, z_k} (4)    r_{w_im, w_in, z_k} = a_{w_im, w_in, z_k} / ∑_{k=1}^{|T|} a_{w_im, w_in, z_k}", "eq_num": "(5)" } ], "section": "Basic-MultiRank Algorithm", "sec_num": "3.4.1" }, { "text": "Here we deal with the dangling node problem in the same way as PageRank (Page et al., 1999). Namely, if a_{w_im, w_in, z_k} is equal to 0 for all words w_im, which means that word w_in has no link out to any other word via topic z_k, we set o_{w_im, w_in, z_k} to 1/|d_i|. Likewise, if a_{w_im, w_in, z_k} is equal to 0 for all z_k, which means that word w_in has no link out to word w_im via any topic, we set r_{w_im, w_in, z_k} to 1/|T|. In this way, we ensure that", "cite_spans": [ { "start": 68, "end": 87, "text": "(Page et al., 1999)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Basic-MultiRank Algorithm", "sec_num": "3.4.1" }, { "text": "0 ≤ o_{w_im, w_in, z_k} ≤ 1, ∑_{m=1}^{|d_i|} o_{w_im, w_in, z_k} = 1 and 0 ≤ r_{w_im, w_in, z_k} ≤ 1, ∑_{k=1}^{|T|} r_{w_im, w_in, z_k} = 1", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Basic-MultiRank Algorithm", "sec_num": "3.4.1" }, { "text": "Following the rule of a Markov chain, we derive the probabilities as:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Basic-MultiRank Algorithm", "sec_num": "3.4.1" }, { "text": "P[X_t = w_im] = ∑_{n=1}^{|d_i|} ∑_{k=1}^{|T|} o_{w_im, w_in, z_k} × P[X_{t-1} = w_in, Y_t = z_k] (6)    P[Y_t = z_k] = ∑_{m=1}^{|d_i|} ∑_{n=1}^{|d_i|} r_{w_im, w_in, z_k} × P[X_t = w_im, X_{t-1} = w_in] (7)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Basic-MultiRank Algorithm", "sec_num": "3.4.1" }, { "text": "where the subscript t denotes the iteration number.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Basic-MultiRank Algorithm", "sec_num": "3.4.1" }, { "text": "Notice that Formulas (6) and (7) accord with our basic idea: a word connected with high-probability words via high-probability relations should itself have a high probability, so that it is more likely to be visited, and a topic connecting high-probability words should also get a high probability.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Basic-MultiRank Algorithm", "sec_num": "3.4.1" }, { "text": "Employing a product form of the individual probability distributions, we decouple the two joint probability distributions in Formulas (6) and (7) as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Basic-MultiRank Algorithm", "sec_num": "3.4.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "P[X_{t-1} = w_in, Y_t = z_k] = P[X_{t-1} = w_in] P[Y_t = z_k] (8)    P[X_t = w_im, X_{t-1} = w_in] = P[X_t = w_im] P[X_{t-1} = w_in]", "eq_num": "(9)" } ], "section": "Basic-MultiRank Algorithm", "sec_num": "3.4.1" },
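{ "text": "The normalization in Formulas (4) and (5), together with the dangling-node fixes, can be expressed compactly in numpy (again our sketch, under the same assumptions as above, not the paper's implementation):
```python
import numpy as np

def transition_tensors(A):
    n_words, _, n_topics = A.shape
    # Formula (4): normalize over the first word index m; columns that
    # sum to zero (dangling) fall back to the uniform value 1/|d_i|
    col = A.sum(axis=0, keepdims=True)
    O = np.divide(A, col, out=np.full_like(A, 1.0 / n_words), where=col > 0)
    # Formula (5): normalize over the topic index k; dangling fibers fall
    # back to the uniform value 1/|T|
    lay = A.sum(axis=2, keepdims=True)
    R = np.divide(A, lay, out=np.full_like(A, 1.0 / n_topics), where=lay > 0)
    return O, R
```", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Basic-MultiRank Algorithm", "sec_num": null },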
{ "text": "Considering the stationary distributions of words and topics as t goes to infinity, the WordTopic-MultiRank values are given by:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Basic-MultiRank Algorithm", "sec_num": "3.4.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "x = [x_{w_i1}, x_{w_i2}, ..., x_{w_i|d_i|}]^T (10)    y = [y_{z_1}, y_{z_2}, ..., y_{z_|T|}]^T", "eq_num": "(11)" } ], "section": "Basic-MultiRank Algorithm", "sec_num": "3.4.1" }, { "text": "with", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Basic-MultiRank Algorithm", "sec_num": "3.4.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "x_{w_im} = lim_{t→∞} P[X_t = w_im]", "eq_num": "(12)" } ], "section": "Basic-MultiRank Algorithm", "sec_num": "3.4.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "y_{z_k} = lim_{t→∞} P[Y_t = z_k]", "eq_num": "(13)" } ], "section": "Basic-MultiRank Algorithm", "sec_num": "3.4.1" }, { "text": "Under the assumptions in Formulas (8) to (13), we can derive the following from Formulas (6) and (7):", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Basic-MultiRank Algorithm", "sec_num": "3.4.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "x_{w_im} = ∑_{n=1}^{|d_i|} ∑_{k=1}^{|T|} o_{w_im, w_in, z_k} x_{w_in} y_{z_k}", "eq_num": "(14)" } ], "section": "Basic-MultiRank Algorithm", "sec_num": "3.4.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "y_{z_k} = ∑_{m=1}^{|d_i|} ∑_{n=1}^{|d_i|} r_{w_im, w_in, z_k} x_{w_im} x_{w_in}", "eq_num": "(15)" } ], "section": "Basic-MultiRank Algorithm", "sec_num": "3.4.1" }, { "text": "which means that the score of w_im depends on its weighted links with other words via all topics, and the score of z_k depends on the scores of the words it connects. Now we are able to solve the two tensor equations shown below, which express Formulas (14) and (15) as tensor operations, to obtain the WordTopic-MultiRank values of words and relations:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Basic-MultiRank Algorithm", "sec_num": "3.4.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "O x y = x (16)    R x² = y", "eq_num": "(17)" } ], "section": "Basic-MultiRank Algorithm", "sec_num": "3.4.1" },
{ "text": "Ng et al. (2011) show the existence and uniqueness of the stationary probability distributions x and y, and propose MultiRank, an iterative algorithm that solves Formulas (16) and (17) by utilizing Formulas (14) and (15). We refer to it as the Basic-MultiRank algorithm, shown as Algorithm 1, because it will be modified in the following subsection.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Basic-MultiRank Algorithm", "sec_num": "3.4.1" }, { "text": "Require: tensor A, initial probability distributions x_0 and y_0 (with ∑_{m=1}^{|d_i|} [x_0]_{w_m} = 1 and ∑_{k=1}^{|T|} [y_0]_{z_k} = 1), tolerance ε. Ensure: two stationary probability distributions x and y. 1: compute tensors O and R; 2: set t = 1; 3: compute x_t = O x_{t-1} y_{t-1}; 4: compute y_t = R x_t²; 5: if ||x_t - x_{t-1}|| + ||y_t - y_{t-1}|| < ε, then stop, otherwise set t = t + 1 and go to Step 3; 6: return x_t and y_t.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Algorithm 1 Basic-MultiRank algorithm", "sec_num": null }, { "text": "Inspired by the idea of Biased PageRank (Liu et al., 2010), we treat the document-word distribution p(w_ij | d_i), which can be computed from Formula (1), and the document-topic distribution p(z_k | d_i), acquired from the topic decomposition, as prior knowledge about the words and topics in each document d_i. Therefore, we modify Formulas (16) and (17) by adding this prior knowledge as follows:", "cite_spans": [ { "start": 40, "end": 58, "text": "(Liu et al., 2010)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Biased-MultiRank Algorithm", "sec_num": "3.4.2" }, { "text": "(1-λ) O x y + λ x_p = x (18)    (1-γ) R x² + γ y_p = y (19), where x_p = [p(w_i1 | d_i), p(w_i2 | d_i), ..., p(w_i|d_i| | d_i)]^T and y_p = [p(z_1 | d_i), p(z_2 | d_i), ..., p(z_|T| | d_i)]^T.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Biased-MultiRank Algorithm", "sec_num": "3.4.2" }, { "text": "Then we propose Biased-MultiRank, shown as Algorithm 2, as a new algorithm to solve Formulas (18) and (19) with these prior distributions. It is the algorithm finally used in our WordTopic-MultiRank model.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Biased-MultiRank Algorithm", "sec_num": "3.4.2" },
{ "text": "Algorithm 2 Biased-MultiRank algorithm. Require: tensor A, initial probability distributions x_0 and y_0 (with ∑_{m=1}^{|d_i|} [x_0]_{w_m} = 1 and ∑_{k=1}^{|T|} [y_0]_{z_k} = 1), prior distributions of words x_p and topics y_p, parameters λ and γ (0 ≤ λ, γ < 1), tolerance ε. Ensure: two stationary probability distributions x and y. 1: compute tensors O and R; 2: set t = 1; 3: compute x_t = (1-λ) O x_{t-1} y_{t-1} + λ x_p; 4: compute y_t = (1-γ) R x_t² + γ y_p; 5: if ||x_t - x_{t-1}|| + ||y_t - y_{t-1}|| < ε, then stop, otherwise set t = t + 1 and go to Step 3; 6: return x_t and y_t.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Biased-MultiRank Algorithm", "sec_num": "3.4.2" },
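{ "text": "Algorithm 2 translates directly into a few lines of numpy (our sketch, not the authors' code; the defaults lam=0.5 and gamma=0.9 follow the settings used later in the experiments). With lam = gamma = 0 it reduces to the Basic-MultiRank iteration of Algorithm 1:
```python
import numpy as np

def biased_multirank(O, R, x_p, y_p, lam=0.5, gamma=0.9, eps=1e-8, max_iter=1000):
    x = np.full(O.shape[0], 1.0 / O.shape[0])  # uniform x_0
    y = np.full(O.shape[2], 1.0 / O.shape[2])  # uniform y_0
    for _ in range(max_iter):
        # Step 3 / Formula (18): x_t = (1 - lam) * O x_{t-1} y_{t-1} + lam * x_p
        x_new = (1 - lam) * np.einsum('mnk,n,k->m', O, x, y) + lam * x_p
        # Step 4 / Formula (19): y_t = (1 - gamma) * R x_t^2 + gamma * y_p
        y_new = (1 - gamma) * np.einsum('mnk,m,n->k', R, x_new, x_new) + gamma * y_p
        # Step 5: stop once both distributions are stationary
        done = np.abs(x_new - x).sum() + np.abs(y_new - y).sum() < eps
        x, y = x_new, y_new
        if done:
            break
    return x, y
```
The words of a document can then be ranked by x, e.g. with np.argsort(-x).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Biased-MultiRank Algorithm", "sec_num": null },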
(2009)", "ref_id": "BIBREF10" }, { "start": 97, "end": 114, "text": "Liu et al. (2010)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Baselines and Evaluation Metrics", "sec_num": "4.1.2" }, { "text": "Evaluation metrics are precision, recall, F1measure shown as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Baselines and Evaluation Metrics", "sec_num": "4.1.2" }, { "text": "P = T P T P +F P , R= T P T P +F N , F 1= 2P R P +R (20)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Baselines and Evaluation Metrics", "sec_num": "4.1.2" }, { "text": "where T P is the total number of correctly extracted keyphrases, F P is the number of incorrectly extracted keyphrases, and F N is the number of those keyphrases which are not extracted.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Baselines and Evaluation Metrics", "sec_num": "4.1.2" }, { "text": "Documents are pre-processed by removing stop words and annotated with POS tags using Stanford Log-Linear Tagger 3 . Based on the research result of (Hulth, 2003) , only adjectives and nouns are used in constructing multi-relational words network for ranking, and keyphrases corresponding with following pattern are considered as candidates:", "cite_spans": [ { "start": 148, "end": 161, "text": "(Hulth, 2003)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Data Pre-processing and Configuration", "sec_num": "4.1.3" }, { "text": "(JJ) * (N N |N N S|N N P )+", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data Pre-processing and Configuration", "sec_num": "4.1.3" }, { "text": "in which, JJ indicates adjectives while NN, NNS and NNP represent various forms of nouns.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data Pre-processing and Configuration", "sec_num": "4.1.3" }, { "text": "At last, top-M keyphrases, which have highest sum scores of words contained in them, are extracted and compared with standard answers after stemming by Porter stemmer 4 .", "cite_spans": [ { "start": 167, "end": 168, "text": "4", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Data Pre-processing and Configuration", "sec_num": "4.1.3" }, { "text": "In experiments, we set \u03b1=1, \u03b2=0.01 for Formula (1) to (3) empirically, and \u03bb=0.5, \u03b3=0.9 for Formula (18), (19) indicated by (Li et al., 2012) . Influences of these parameters will not be discussed further in this work as they have been studied intensively in previous researches.", "cite_spans": [ { "start": 124, "end": 141, "text": "(Li et al., 2012)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Data Pre-processing and Configuration", "sec_num": "4.1.3" }, { "text": "In this subsection, we investigate how different parameter values influence performance of our proposed model first, then compare the best results obtained by baseline methods and our model. First of all, we inspect influences of topic number |T | on our model performance. Table 1 shows experimental results when |T | ranges from 20 to 100 while setting window size W =2 and max extracted number M =10. Table 1 , we observe that the performance does not change much when the number of topics varies, showing our model's robustness under the situation that the actual number of topics is unknown, which is commonly seen in Information Retrieval and Natural Language Processing applications. 
{ "text": "In this subsection, we first investigate how different parameter values influence the performance of our proposed model, and then compare the best results obtained by the baseline methods and our model. First of all, we inspect the influence of the topic number |T| on our model's performance. Table 1 shows the experimental results when |T| ranges from 20 to 100, with window size W=2 and max extracted number M=10. From Table 1, we observe that the performance does not change much when the number of topics varies, showing our model's robustness in the situation where the actual number of topics is unknown, which is common in Information Retrieval and Natural Language Processing applications. We can see that |T|=60 produces the best result for this corpus, so we choose |T|=60 for the comparison with the baselines.", "cite_spans": [], "ref_spans": [ { "start": 274, "end": 281, "text": "Table 1", "ref_id": null }, { "start": 404, "end": 411, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Experimental Results", "sec_num": "4.1.4" }, { "text": "Then, we fix |T|=60 and M=10 to demonstrate how our model is affected by the window size W. Table 2 presents the metrics when W ranges from 2 to 10. Our results are consistent with the findings reported by Liu et al. (2009) and Liu et al. (2010), indicating that performance usually does not vary much as W changes. A closer look shows that W=2 is the best.", "cite_spans": [ { "start": 96, "end": 113, "text": "Liu et al. (2009)", "ref_id": "BIBREF10" }, { "start": 118, "end": 135, "text": "Liu et al. (2010)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Experimental Results", "sec_num": "4.1.4" }, { "text": "Moreover, we explore the influence of the max extracted number M by setting W=2 and |T|=60. Table 3 indicates that as M increases, precision falls while recall rises, and M=10 performs best in F1-measure.", "cite_spans": [], "ref_spans": [ { "start": 90, "end": 97, "text": "Table 3", "ref_id": "TABREF4" } ], "eq_spans": [], "section": "Experimental Results", "sec_num": "4.1.4" }, { "text": "At last, Table 4 shows the best results of the baseline methods and our proposed model.", "cite_spans": [], "ref_spans": [ { "start": 9, "end": 16, "text": "Table 4", "ref_id": "TABREF5" } ], "eq_spans": [], "section": "Experimental Results", "sec_num": "4.1.4" }, { "text": "Table 4: Comparison on Scientific Abstracts
Method | Precision | Recall | F1
Hulth's (Hulth, 2003) | 0.252 | 0.517 | 0.339
TextRank (Mihalcea and Tarau, 2004) | 0.312 | 0.431 | 0.362
Topical PageRank (Liu et al., 2010) | 0.354 | 0.183 | 0.242
Clustering (Liu et al., 2009) | 0.350 | 0.660 | 0.457
WordTopic-MultiRank | 0.465 | 0.502 | 0.482

Table 5: Comparison on DUC2001
Method | Precision | Recall | F1
ExpandRank (Wan and Xiao, 2008a) | 0.288 | 0.354 | 0.317
CollabRank (Wan and Xiao, 2008b) | 0.283 | 0.348 | 0.312
Topical PageRank (Liu et al., 2010) | 0.282 | 0.348 | 0.312
WordTopic-MultiRank | 0.296 | 0.399 | 0.340

In fact, the best result of (Hulth, 2003) was obtained by adding POS tags as features for classification, while the best result of (Mihalcea and Tarau, 2004) came from running PageRank on an undirected graph built with window size W=2 over the word sequence. According to (Liu et al., 2009), the spectral clustering method got the best performance in precision and F1-measure. On the other hand, Topical PageRank (Liu et al., 2010) performed best when setting window size W=10 and topic number |T|=1,000. Since the influences of the parameters have been discussed above, we set W=2, |T|=60 and M=10, as they result in the best performance of our model on the same data set. Table 4 demonstrates that our proposed model outperforms all baselines in both precision and F1-measure.
Note that the baseline methods are all under a single-relation-type assumption for word relatedness, so the estimates of their word ranking scores are limited, while WordTopic-MultiRank treats words as multi-relational data and considers the interactions between words and topics more comprehensively.", "cite_spans": [ { "start": 28, "end": 41, "text": "(Hulth, 2003)", "ref_id": "BIBREF5" }, { "start": 69, "end": 95, "text": "(Mihalcea and Tarau, 2004)", "ref_id": "BIBREF12" }, { "start": 131, "end": 149, "text": "(Liu et al., 2010)", "ref_id": "BIBREF11" }, { "start": 179, "end": 197, "text": "(Liu et al., 2009)", "ref_id": "BIBREF10" }, { "start": 254, "end": 275, "text": "(Wan and Xiao, 2008a)", "ref_id": "BIBREF17" }, { "start": 304, "end": 325, "text": "(Wan and Xiao, 2008b)", "ref_id": "BIBREF18" }, { "start": 361, "end": 379, "text": "(Liu et al., 2010)", "ref_id": "BIBREF11" }, { "start": 490, "end": 503, "text": "(Hulth, 2003)", "ref_id": "BIBREF5" }, { "start": 688, "end": 714, "text": "(Mihalcea and Tarau, 2004)", "ref_id": "BIBREF12" }, { "start": 730, "end": 748, "text": "(Liu et al., 2009)", "ref_id": "BIBREF10" }, { "start": 864, "end": 882, "text": "(Liu et al., 2010)", "ref_id": "BIBREF11" } ], "ref_spans": [ { "start": 436, "end": 443, "text": "Table 5", "ref_id": null }, { "start": 1118, "end": 1125, "text": "Table 4", "ref_id": "TABREF5" } ], "eq_spans": [], "section": "Method", "sec_num": null }, { "text": "In order to show the generalization ability of our model, we also conduct experiments on another data set for the automatic keyphrase extraction task and describe them briefly in this subsection.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments on DUC2001", "sec_num": "4.2" }, { "text": "Following (Wan and Xiao, 2008a), (Wan and Xiao, 2008b) and (Liu et al., 2010), a data set annotated by Wan and Xiao 5 was used in this experiment for evaluation. This data set is the testing part of DUC2001 (Over and Yen, 2004), containing 308 news articles with 2,488 manually labeled keyphrases; at most 10 keyphrases were assigned to each document. Again, we choose precision, recall and F1-measure as evaluation metrics and use the training part of DUC2001 for topic detection. At last, the keyphrases extracted by our WordTopic-MultiRank model are compared with the labeled ones occurring in the corresponding articles after stemming.", "cite_spans": [ { "start": 10, "end": 31, "text": "(Wan and Xiao, 2008a)", "ref_id": "BIBREF17" }, { "start": 34, "end": 55, "text": "(Wan and Xiao, 2008b)", "ref_id": "BIBREF18" }, { "start": 60, "end": 78, "text": "(Liu et al., 2010)", "ref_id": "BIBREF11" }, { "start": 209, "end": 229, "text": "(Over and Yen, 2004)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Experiments on DUC2001", "sec_num": "4.2" }, { "text": "As indicated in (Wan and Xiao, 2008b), performance on the test set does not change much when the co-occurrence window size W ranges from 5 to 20, and (Liu et al., 2010) also reports that it does not change much when the topic number ranges from 50 to 1,500. Therefore, we pick co-occurrence window size W=10 and topic number |T|=60 to run the WordTopic-MultiRank model. As for the keyphrase number M, we vary it from 1 to 20 to obtain different performance levels. Results are shown in Figure 2. From Figure 2, we can observe how the performance of our model changes with M: as M increases from 1 to 20, precision decreases from 0.528 to 0.201 in our experiment, while recall increases from 0.065 to 0.551. As for the F1-measure, it reaches its maximum value of 0.340 when M=10 and decreases gradually as M moves away from 10.
Therefore, W=10, |T|=60 and M=10 are optimal for our proposed method on this test set. Table 5 lists the best-performance comparison between our method and previous ones. All previous methods perform best on the DUC2001 test set when setting co-occurrence window size W=10 and keyphrase number M=10, which is consistent with our model. The experimental results on this data set demonstrate the effectiveness of our proposed model again, as it outperforms the baseline methods on all three metrics.", "cite_spans": [ { "start": 16, "end": 37, "text": "(Wan and Xiao, 2008b)", "ref_id": "BIBREF18" }, { "start": 142, "end": 160, "text": "(Liu et al., 2010)", "ref_id": "BIBREF11" } ], "ref_spans": [ { "start": 463, "end": 472, "text": "Figure 2", "ref_id": "FIGREF1" }, { "start": 805, "end": 812, "text": "Table 5", "ref_id": null } ], "eq_spans": [], "section": "Experiments on DUC2001", "sec_num": "4.2" }, { "text": "In this study, we propose a new method named WordTopic-MultiRank for the automatic keyphrase extraction task. It treats the words in documents as objects and the latent topics as relations, assuming that words exist under multiple relations. Based on the idea that words and topics have mutual influence on each other, our model ranks the importance of words and topics simultaneously and then extracts highly scored phrases as keyphrases. In this way, it makes full use of word-word relatedness, word-topic interactions and inter-topic impacts. Experiments demonstrate that WordTopic-MultiRank achieves better performance than the baseline methods on two different data sets, and our exploration of different parameter values also shows the effectiveness and strong robustness of the method.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion and Future Work", "sec_num": "5" }, { "text": "In future work, for one thing, we would like to investigate how different corpora influence our method and to adopt a large-scale, general corpus, such as Wikipedia, for the experiments.
For another, exploring more algorithms for dealing with heterogeneous relation networks may help to unearth more knowledge about words and topics and further improve our model's performance.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion and Future Work", "sec_num": "5" }, { "text": "GibbsLDA++: http://gibbslda.sourceforge.net", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "It can be obtained from https://github.com/snkim/AutomaticKeyphraseExtraction", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "http://nlp.stanford.edu/software/tagger.shtml 4 http://tartarus.org/martin/PorterStemmer/", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "http://wanxiaojun1979.googlepages.com/", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "This research is financially supported by NSFC Grant 61073082 and NSFC Grant 61272340.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null } ], "bib_entries": { "BIBREF1": { "ref_id": "b1", "title": "Extracting key terms from noisy and multitheme documents", "authors": [ { "first": "Maria", "middle": [], "last": "Grineva", "suffix": "" }, { "first": "Maxim", "middle": [], "last": "Grinev", "suffix": "" }, { "first": "Dmitry", "middle": [], "last": "Lizorkin", "suffix": "" } ], "year": 2009, "venue": "Proceedings of the 18th international conference on World wide web", "volume": "", "issue": "", "pages": "661--670", "other_ids": {}, "num": null, "urls": [], "raw_text": "Maria Grineva, Maxim Grinev, and Dmitry Lizorkin. 2009. Extracting key terms from noisy and multi-theme documents. In Proceedings of the 18th International Conference on World Wide Web, pages 661-670. ACM.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Improving browsing in digital libraries with keyphrase indexes", "authors": [ { "first": "Carl", "middle": [], "last": "Gutwin", "suffix": "" }, { "first": "Gordon", "middle": [], "last": "Paynter", "suffix": "" }, { "first": "Ian", "middle": [], "last": "Witten", "suffix": "" } ], "year": 1999, "venue": "", "volume": "27", "issue": "", "pages": "81--104", "other_ids": {}, "num": null, "urls": [], "raw_text": "Carl Gutwin, Gordon Paynter, Ian Witten, Craig Nevill-Manning, and Eibe Frank. 1999. Improving browsing in digital libraries with keyphrase indexes. Decision Support Systems, 27(1):81-104.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Corephrase: Keyphrase extraction for document clustering", "authors": [ { "first": "M", "middle": [], "last": "Khaled", "suffix": "" }, { "first": "Diego", "middle": [ "N" ], "last": "Hammouda", "suffix": "" }, { "first": "Mohamed", "middle": [ "S" ], "last": "Matute", "suffix": "" }, { "first": "", "middle": [], "last": "Kamel", "suffix": "" } ], "year": 2005, "venue": "Machine Learning and Data Mining in Pattern Recognition", "volume": "", "issue": "", "pages": "265--274", "other_ids": {}, "num": null, "urls": [], "raw_text": "Khaled M Hammouda, Diego N Matute, and Mohamed S Kamel. 2005. Corephrase: Keyphrase extraction for document clustering. In Machine Learning and Data Mining in Pattern Recognition, pages 265-274.
Springer.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Topic-sensitive pagerank", "authors": [ { "first": "H", "middle": [], "last": "Taher", "suffix": "" }, { "first": "", "middle": [], "last": "Haveliwala", "suffix": "" } ], "year": 2002, "venue": "Proceedings of the 11th international conference on World Wide Web", "volume": "", "issue": "", "pages": "517--526", "other_ids": {}, "num": null, "urls": [], "raw_text": "Taher H Haveliwala. 2002. Topic-sensitive PageRank. In Proceedings of the 11th International Conference on World Wide Web, pages 517-526. ACM.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Improved automatic keyword extraction given more linguistic knowledge", "authors": [ { "first": "A", "middle": [], "last": "Hulth", "suffix": "" } ], "year": 2003, "venue": "Proceedings of EMNLP", "volume": "", "issue": "", "pages": "216--223", "other_ids": {}, "num": null, "urls": [], "raw_text": "A. Hulth. 2003. Improved automatic keyword extraction given more linguistic knowledge. In Proceedings of EMNLP, pages 216-223.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "A ranking approach to keyphrase extraction", "authors": [ { "first": "Xin", "middle": [], "last": "Jiang", "suffix": "" }, { "first": "Yunhua", "middle": [], "last": "Hu", "suffix": "" }, { "first": "Hang", "middle": [], "last": "Li", "suffix": "" } ], "year": 2009, "venue": "Proceedings of the 32nd international ACM SIGIR conference on Research and development in information retrieval", "volume": "", "issue": "", "pages": "756--757", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xin Jiang, Yunhua Hu, and Hang Li. 2009. A ranking approach to keyphrase extraction. In Proceedings of the 32nd International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 756-757. ACM.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Learning user information interests through extraction of semantically significant phrases", "authors": [ { "first": "Bruce", "middle": [], "last": "Krulwich", "suffix": "" }, { "first": "Chad", "middle": [], "last": "Burkey", "suffix": "" } ], "year": 1996, "venue": "Proceedings of the AAAI spring symposium on machine learning in information access", "volume": "", "issue": "", "pages": "100--112", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bruce Krulwich and Chad Burkey. 1996. Learning user information interests through extraction of semantically significant phrases. In Proceedings of the AAAI Spring Symposium on Machine Learning in Information Access, pages 100-112.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Har: Hub, authority and relevance scores in multirelational data for query search", "authors": [ { "first": "Xutao", "middle": [], "last": "Li", "suffix": "" }, { "first": "K", "middle": [], "last": "Michael", "suffix": "" }, { "first": "Yunming", "middle": [], "last": "Ng", "suffix": "" }, { "first": "", "middle": [], "last": "Ye", "suffix": "" } ], "year": 2012, "venue": "SDM", "volume": "", "issue": "", "pages": "141--152", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xutao Li, Michael K Ng, and Yunming Ye. 2012. HAR: Hub, authority and relevance scores in multi-relational data for query search.
In SDM, pages 141-152.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Graph-based keyword extraction for single-document summarization", "authors": [ { "first": "Marina", "middle": [], "last": "Litvak", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Last", "suffix": "" } ], "year": 2008, "venue": "Proceedings of the workshop on multisource multilingual information extraction and summarization", "volume": "", "issue": "", "pages": "17--24", "other_ids": {}, "num": null, "urls": [], "raw_text": "Marina Litvak and Mark Last. 2008. Graph-based keyword extraction for single-document summarization. In Proceedings of the Workshop on Multi-source Multilingual Information Extraction and Summarization, pages 17-24. Association for Computational Linguistics.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Clustering to find exemplar terms for keyphrase extraction", "authors": [ { "first": "Z", "middle": [], "last": "Liu", "suffix": "" }, { "first": "P", "middle": [], "last": "Li", "suffix": "" }, { "first": "Y", "middle": [], "last": "Zheng", "suffix": "" }, { "first": "M", "middle": [], "last": "Sun", "suffix": "" } ], "year": 2009, "venue": "Proceedings of EMNLP", "volume": "", "issue": "", "pages": "257--266", "other_ids": {}, "num": null, "urls": [], "raw_text": "Z. Liu, P. Li, Y. Zheng, and M. Sun. 2009. Clustering to find exemplar terms for keyphrase extraction. In Proceedings of EMNLP, pages 257-266.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Automatic keyphrase extraction via topic decomposition", "authors": [ { "first": "Z", "middle": [], "last": "Liu", "suffix": "" }, { "first": "W", "middle": [], "last": "Huang", "suffix": "" }, { "first": "Y", "middle": [], "last": "Zheng", "suffix": "" }, { "first": "M", "middle": [], "last": "Sun", "suffix": "" } ], "year": 2010, "venue": "Proceedings of EMNLP", "volume": "", "issue": "", "pages": "366--376", "other_ids": {}, "num": null, "urls": [], "raw_text": "Z. Liu, W. Huang, Y. Zheng, and M. Sun. 2010. Automatic keyphrase extraction via topic decomposition. In Proceedings of EMNLP, pages 366-376.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Textrank: Bringing order into texts", "authors": [ { "first": "R", "middle": [], "last": "Mihalcea", "suffix": "" }, { "first": "P", "middle": [], "last": "Tarau", "suffix": "" } ], "year": 2004, "venue": "Proceedings of EMNLP", "volume": "", "issue": "", "pages": "404--411", "other_ids": {}, "num": null, "urls": [], "raw_text": "R. Mihalcea and P. Tarau. 2004. TextRank: Bringing order into texts. In Proceedings of EMNLP, pages 404-411.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Multirank: coranking for objects and relations in multi-relational data", "authors": [ { "first": "M", "middle": [ "K P" ], "last": "Ng", "suffix": "" }, { "first": "X", "middle": [], "last": "Li", "suffix": "" }, { "first": "Y", "middle": [], "last": "Ye", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the 17th ACM SIGKDD", "volume": "", "issue": "", "pages": "1217--1225", "other_ids": {}, "num": null, "urls": [], "raw_text": "M.K.P. Ng, X. Li, and Y. Ye. 2011. MultiRank: co-ranking for objects and relations in multi-relational data.
In Proceedings of the 17th ACM SIGKDD, pages 1217-1225.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Introduction to duc-2001: an intrinsic evaluation of generic news text summarization systems", "authors": [ { "first": "Paul", "middle": [], "last": "Over", "suffix": "" }, { "first": "James", "middle": [], "last": "Yen", "suffix": "" } ], "year": 2004, "venue": "Proceedings of DUC 2004 Document Understanding Workshop", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Paul Over and James Yen. 2004. Introduction to DUC-2001: an intrinsic evaluation of generic news text summarization systems. In Proceedings of the DUC 2004 Document Understanding Workshop, Boston.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "The pagerank citation ranking: bringing order to the web", "authors": [ { "first": "Lawrence", "middle": [], "last": "Page", "suffix": "" }, { "first": "Sergey", "middle": [], "last": "Brin", "suffix": "" }, { "first": "Rajeev", "middle": [], "last": "Motwani", "suffix": "" }, { "first": "Terry", "middle": [], "last": "Winograd", "suffix": "" } ], "year": 1999, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lawrence Page, Sergey Brin, Rajeev Motwani, and Terry Winograd. 1999. The PageRank citation ranking: bringing order to the web.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Learning to extract keyphrases from text. national research council. Institute for Information Technology", "authors": [ { "first": "D", "middle": [], "last": "Peter", "suffix": "" }, { "first": "", "middle": [], "last": "Turney", "suffix": "" } ], "year": 1999, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Peter D Turney. 1999. Learning to extract keyphrases from text. National Research Council, Institute for Information Technology, Technical Report ERB-1057.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Single document keyphrase extraction using neighborhood knowledge", "authors": [ { "first": "X", "middle": [], "last": "Wan", "suffix": "" }, { "first": "J", "middle": [], "last": "Xiao", "suffix": "" } ], "year": 2008, "venue": "Proceedings of AAAI", "volume": "", "issue": "", "pages": "855--860", "other_ids": {}, "num": null, "urls": [], "raw_text": "X. Wan and J. Xiao. 2008a. Single document keyphrase extraction using neighborhood knowledge. In Proceedings of AAAI, pages 855-860.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Collabrank: towards a collaborative approach to singledocument keyphrase extraction", "authors": [ { "first": "Xiaojun", "middle": [], "last": "Wan", "suffix": "" }, { "first": "Jianguo", "middle": [], "last": "Xiao", "suffix": "" } ], "year": 2008, "venue": "Proceedings of the 22nd International Conference on Computational Linguistics", "volume": "1", "issue": "", "pages": "969--976", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xiaojun Wan and Jianguo Xiao. 2008b. CollabRank: towards a collaborative approach to single-document keyphrase extraction. In Proceedings of the 22nd International Conference on Computational Linguistics - Volume 1, pages 969-976.
Association for Computational Linguistics.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Towards an iterative reinforcement approach for simultaneous document summarization and keyword extraction", "authors": [ { "first": "X", "middle": [], "last": "Wan", "suffix": "" }, { "first": "J", "middle": [], "last": "Yang", "suffix": "" }, { "first": "J", "middle": [], "last": "Xiao", "suffix": "" } ], "year": 2007, "venue": "ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "X. Wan, J. Yang, and J. Xiao. 2007. Towards an iterative reinforcement approach for simultaneous document summarization and keyword extraction. In ACL, page 552.", "links": null } }, "ref_entries": { "FIGREF0": { "uris": null, "type_str": "figure", "text": "(a) An example of multi-relational words in graph representation and (b) the corresponding tensor representation.", "num": null }, "FIGREF1": { "uris": null, "type_str": "figure", "text": "Performance vs. keyphrase number M.", "num": null }, "TABREF1": { "content": "
Window Size | Precision | Recall | F1
2 | 0.465 | 0.502 | 0.482
4 | 0.461 | 0.496 | 0.477
6 | 0.462 | 0.500 | 0.480
8 | 0.461 | 0.499 | 0.479
10 | 0.461 | 0.498 | 0.478
", "text": "", "html": null, "num": null, "type_str": "table" }, "TABREF2": { "content": "
", "text": "Influence of Window Size W", "html": null, "num": null, "type_str": "table" }, "TABREF4": { "content": "", "text": "Influence of Max Extracted Number M", "html": null, "num": null, "type_str": "table" }, "TABREF5": { "content": "
", "text": "", "html": null, "num": null, "type_str": "table" } } } }