{ "paper_id": "O14-1012", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T08:04:23.264027Z" }, "title": "Collaborative Ranking between Supervised and Unsupervised Approaches for Keyphrase Extraction", "authors": [ { "first": "Gerardo", "middle": [], "last": "Figueroa", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Yi-Shin", "middle": [], "last": "Chen", "suffix": "", "affiliation": {}, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Automatic keyphrase extraction methods have generally taken either supervised or unsupervised approaches. Supervised methods extract keyphrases by using a training document set, thus acquiring knowledge from a global collection of texts. Conversely, unsupervised methods extract keyphrases by determining their relevance in a single-document context, without prior learning. We present a hybrid keyphrase extraction method for short articles, HybridRank, which leverages the benefits of both approaches. Our system implements modified versions of the TextRank (Mihalcea and Tarau, 2004)-unsupervised-and KEA (Witten et al., 1999)-supervised-methods, and applies a merging algorithm to produce an overall list of keyphrases. We have tested HybridRank on more than 900 abstracts belonging to a wide variety of subjects, and show its superior effectiveness. We conclude that knowledge collaboration between supervised and unsupervised methods can produce higher-quality keyphrases than applying these methods individually.", "pdf_parse": { "paper_id": "O14-1012", "_pdf_hash": "", "abstract": [ { "text": "Automatic keyphrase extraction methods have generally taken either supervised or unsupervised approaches. Supervised methods extract keyphrases by using a training document set, thus acquiring knowledge from a global collection of texts. Conversely, unsupervised methods extract keyphrases by determining their relevance in a single-document context, without prior learning. 
We present a hybrid keyphrase extraction method for short articles, HybridRank, which leverages the benefits of both approaches. Our system implements modified versions of the TextRank (Mihalcea and Tarau, 2004) (unsupervised) and KEA (Witten et al., 1999) (supervised) methods, and applies a merging algorithm to produce an overall list of keyphrases. We have tested HybridRank on more than 900 abstracts belonging to a wide variety of subjects, and show its superior effectiveness. We conclude that knowledge collaboration between supervised and unsupervised methods can produce higher-quality keyphrases than applying these methods individually.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Keyphrases (also called keywords 1 ) are highly condensed summaries that describe the contents of a document. They help readers quickly grasp what a document is about, and are generally assigned by the document's author or by a human indexer. However, with the massive growth of documents on the Web each day, it has become impractical to manually assign keywords to each document. Software applications that automatically assign keywords to documents have therefore become a necessity.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "\uf02a Institute of Information Systems and Applications, National Tsing Hua University, Hsinchu, Taiwan E-mail: {gerardo.ofc, yishin}@gmail.com 1 A keyphrase is a phrase composed of one or more keywords. We will use the terms keyphrase and keyword interchangeably in this paper.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "In this work we apply efficient and effective practices from supervised and unsupervised methods to produce a hybrid system, HybridRank. 
On the supervised side, we implement an extension of the Na\u00efve Bayes classifier originally proposed in KEA (Witten et al., 1999) . This classifier has been shown to be practical to implement and can be extended for improved effectiveness. On the unsupervised side, we apply the well-known TextRank (Mihalcea and Tarau, 2004) algorithm with some modifications. TextRank is similarly practical to implement, and can effectively extract keyphrases from texts regardless of their size or domain.", "cite_spans": [ { "start": 239, "end": 264, "text": "KEA (Witten et al., 1999)", "ref_id": null }, { "start": 429, "end": 455, "text": "(Mihalcea and Tarau, 2004)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "Each method contributes by providing a list of keyphrases for a particular text, sorted by their rank or relevance as seen from each approach. Finally, a collaborative algorithm is executed, in which the two keyphrase lists are merged to create an overall list of keyphrases for that text. The merging algorithm thus takes into account the ranks given by both approaches to each keyphrase and produces a final, collaborative score that reflects these ranks.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "We have tested HybridRank on a large number of abstracts belonging to scientific papers across different domains. The results of our experiments show the effectiveness of the proposed method and of the improvements made to the KEA and TextRank algorithms. Our system obtained a higher precision and recall than both KEA and TextRank in most cases, and obtained a higher precision and recall than at least one of these two methods in all the cases.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." 
}, { "text": "The evaluation of our system also shows how knowledge from supervised and unsupervised approaches can be shared to produce keyphrases of better quality.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "Recent work on the automatic generation of keyphrases has been categorized as either supervised or unsupervised.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2." }, { "text": "Supervised methods for keyphrase extraction, in essence, make use of training datasets, i.e. a large corpus consisting of texts and their corresponding (previously assigned) keyphrases, to classify candidate terms as keyphrases. Two traditional methods in this category are KEA (Witten et al., 1999) and GenEx (Turney, 2000) . KEA uses a Na\u00efve Bayes classifier constructed from two features extracted from phrases in documents: the TFIDF and the relative position of the phrase. GenEx uses a steady-state genetic algorithm to build an equation consisting of 12 low-level parameters. Even though KEA and GenEx perform similarly well, KEA has been shown to be more practical to implement, and has served as the base for other supervised keyphrase extraction methods (Turney, 1999; Hulth, 2003; Nguyen and Kan, 2007) .", "cite_spans": [ { "start": 267, "end": 292, "text": "KEA (Witten et al., 1999)", "ref_id": null }, { "start": 297, "end": 317, "text": "GenEx (Turney, 2000)", "ref_id": null }, { "start": 752, "end": 766, "text": "(Turney, 1999;", "ref_id": "BIBREF5" }, { "start": 767, "end": 779, "text": "Hulth, 2003;", "ref_id": "BIBREF2" }, { "start": 780, "end": 801, "text": "Nguyen and Kan, 2007)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2." 
}, { "text": "Other innovative supervised approaches have been proposed in recent years, ranging from the application of neural networks (Jo, 2003; Wang et al., 2006; Jo et al., 2006; Sarkar et al., 2010) to conditional random fields (Zhang, 2008) . Yih et al. (Yih et al., 2006) proposed a multi-class, logistic regression classifier for finding keywords on web pages.", "cite_spans": [ { "start": 123, "end": 133, "text": "(Jo, 2003;", "ref_id": "BIBREF10" }, { "start": 134, "end": 152, "text": "Wang et al., 2006;", "ref_id": "BIBREF17" }, { "start": 153, "end": 169, "text": "Jo et al., 2006;", "ref_id": "BIBREF11" }, { "start": 170, "end": 190, "text": "Sarkar et al., 2010)", "ref_id": "BIBREF15" }, { "start": 220, "end": 233, "text": "(Zhang, 2008)", "ref_id": "BIBREF18" }, { "start": 236, "end": 265, "text": "Yih et al. (Yih et al., 2006)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2." }, { "text": "Unsupervised methods for keyphrase extraction rely solely on implicit information found in individual texts. Simple approaches are based on statistics, using information such as term specificity (Kireyev, 2009) , word frequency (Luhn, 1957) , n-grams (Cohen, 1995) , word co-occurrence (Matsuo and Ishizuka, 2004) and TFIDF (Salton et al., 1975) . Other approaches are graph-based, where a text is converted into a graph whose nodes represent text units (e.g. words, phrases, and sentences) and whose edges represent the relationships between these units. 
The graph is then recursively iterated and saliency scores are assigned to each node using different approaches.", "cite_spans": [ { "start": 195, "end": 210, "text": "(Kireyev, 2009)", "ref_id": null }, { "start": 228, "end": 240, "text": "(Luhn, 1957)", "ref_id": "BIBREF12" }, { "start": 251, "end": 264, "text": "(Cohen, 1995)", "ref_id": "BIBREF9" }, { "start": 286, "end": 313, "text": "(Matsuo and Ishizuka, 2004)", "ref_id": "BIBREF13" }, { "start": 324, "end": 345, "text": "(Salton et al., 1975)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2." }, { "text": "Mihalcea and Tarau (Mihalcea and Tarau, 2004) developed TextRank, a graph-based ranking model that applies the PageRank (Brin and Page, 1998) formula into texts for assigning scores to phrases and sentences. Wan et al. (Wan et al., 2007) proposed a method that fuses three kinds of relationships between sentences and words: relationships between words, relationships between sentences, and relationships between words and sentences. Wan and Xiao (Wan and Xiao, 2008) also developed CollabRank, which improves the keyphrase extraction task by making use of mutual influences of multiple documents within a cluster context.", "cite_spans": [ { "start": 19, "end": 45, "text": "(Mihalcea and Tarau, 2004)", "ref_id": "BIBREF3" }, { "start": 120, "end": 141, "text": "(Brin and Page, 1998)", "ref_id": "BIBREF0" }, { "start": 208, "end": 237, "text": "Wan et al. (Wan et al., 2007)", "ref_id": "BIBREF7" }, { "start": 447, "end": 467, "text": "(Wan and Xiao, 2008)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2." }, { "text": "To our knowledge, all previous work has been either supervised or unsupervised.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2." 
}, { "text": "Supervised methods have the advantage of learning from an already classified collection of documents in order to find keyphrases for a new document, but, unlike unsupervised methods, they make no analysis of the structure of the individual text. HybridRank leverages the benefits of both approaches for keyphrase extraction, applying a supervised keyphrase extraction algorithm (KEA) and an unsupervised graph-based algorithm (TextRank).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2." }, { "text": "HybridRank makes use of two well-known and effective keyphrase extraction methods: KEA and TextRank (Mihalcea and Tarau, 2004) . Each of these methods extracts a list of keyphrases ranked according to each method's approach. A final list of keyphrases is constructed from the collaboration between these two methods and the application of a merging algorithm. This section will explain the general frameworks of the KEA and TextRank algorithms.", "cite_spans": [ { "start": 100, "end": 126, "text": "(Mihalcea and Tarau, 2004)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Background", "sec_num": "3." }, { "text": "The modifications made to these two methods in our work will be described in Section 4. For brevity, we present only a short explanation of each algorithm and refer the reader to the original papers for more details.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Background", "sec_num": "3." }, { "text": "The KEA algorithm consists of a Na\u00efve Bayes classifier that ranks phrases in order of their probability of being keyphrases as learned from a training document set. 
KEA is divided into four stages: candidate phrase generation, feature extraction, training and ranking.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The KEA Algorithm", "sec_num": "3.1" }, { "text": "The first stage in the KEA algorithm is the selection of phrases that are suitable for training and extraction. To avoid overfitting, this filtering process is applied on both the training document set and the input text to be analyzed.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Candidate phrase generation", "sec_num": "3.1.1" }, { "text": "The features extracted from the candidate phrases generated in the previous stage are the heart of the KEA algorithm; they serve as the learning base for the Na\u00efve Bayes classifier and are used for the extraction of keyphrases. The features originally extracted by Witten et al. for each phrase in their KEA algorithm were the TFIDF and the relative position in the text.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Feature extraction", "sec_num": "3.1.2" }, { "text": "The training stage uses the training document set, which is composed of a collection of documents with their manually-assigned keyphrases. First, phrases are generated from each document in the set. The features for each phrase are then extracted and stored in a training model.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training", "sec_num": "3.1.3" }, { "text": "Once the model has been trained, the Na\u00efve Bayes classifier can extract keyphrases from a new text by first selecting its candidate phrases and then extracting each phrase's features. 
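The two-feature Bayes ranking described here can be sketched as follows. This is an illustrative sketch, not the authors' implementation: the count-table layout and the `train_counts`/`keyphrase_probability` names are our assumptions, and features are assumed to be already discretized (e.g. rounded).

```python
# Hedged sketch of a KEA-style two-feature Naive Bayes ranker.
# Assumes TFIDF (T) and relative position (R) are already discretized
# (e.g. rounded), so probabilities can be estimated from count tables.
from collections import Counter

def train_counts(training_phrases):
    """training_phrases: iterable of (tfidf, rel_pos, is_keyphrase) tuples."""
    pos, neg = Counter(), Counter()
    y = n = 0
    for tfidf, rel_pos, is_kp in training_phrases:
        table = pos if is_kp else neg
        table[("T", tfidf)] += 1
        table[("R", rel_pos)] += 1
        if is_kp:
            y += 1
        else:
            n += 1
    return pos, neg, y, n

def keyphrase_probability(tfidf, rel_pos, pos, neg, y, n):
    # P(k|T,R) is proportional to P(T|k) * P(R|k) * Y/(Y+N), and is
    # normalized against the not-a-keyphrase case, as in the paper's
    # equations (1) and (2).
    p_yes = (pos[("T", tfidf)] / max(y, 1)) * (pos[("R", rel_pos)] / max(y, 1)) * y / (y + n)
    p_no = (neg[("T", tfidf)] / max(n, 1)) * (neg[("R", rel_pos)] / max(n, 1)) * n / (y + n)
    return p_yes / (p_yes + p_no) if (p_yes + p_no) > 0 else 0.0
```

Phrases would then be sorted by this probability in descending order.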
The model determines the probability of each phrase being a keyphrase using Bayes' formula with the two extracted features.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Ranking", "sec_num": "3.1.4" }, { "text": "The probability that a phrase is a keyphrase given that it has TFIDF T and relative position R is then calculated as:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Ranking", "sec_num": "3.1.4" }, { "text": "P(k | T, R) = P(T | k) \u00b7 P(R | k) \u00b7 Y / (Y + N), (1) where P(T | k)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Ranking", "sec_num": "3.1.4" }, { "text": "is the probability that a keyphrase has TFIDF score T and P(R | k) is the probability that it has relative position R. Y is the number of phrases that were manually assigned as keyphrases in the training document set and N is the number of phrases that were not. An expression similar to equation (1) is used to calculate the probability that a phrase is not", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Ranking", "sec_num": "3.1.4" }, { "text": "a keyphrase, P(\u00ack | T, R).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Ranking", "sec_num": "3.1.4" }, { "text": "The overall probability that a phrase is a keyphrase is then calculated with the following formula:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Ranking", "sec_num": "3.1.4" }, { "text": "P = P(k | T, R) / ( P(k | T, R) + P(\u00ack | T, R) ) (2)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Ranking", "sec_num": "3.1.4" }, { "text": "The phrases are finally sorted in descending order of their probability scores.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Ranking", "sec_num": "3.1.4" }, { "text": "The TextRank algorithm was proposed by Mihalcea and Tarau (Mihalcea and Tarau, 2004) . 
It is a graph-based, unsupervised method for keyphrase extraction. We have divided the TextRank algorithm into two stages to allow an easier comparison with our modifications: graph construction and phrase ranking.", "cite_spans": [ { "start": 39, "end": 84, "text": "Mihalcea and Tarau (Mihalcea and Tarau, 2004)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "The TextRank Algorithm", "sec_num": "3.2" }, { "text": "The first step carried out in the TextRank algorithm is the construction of a graph that represents a text. The resulting graph is an interconnection of words and phrases (the vertices) linked by significant relations (the edges).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Graph construction", "sec_num": "3.2.1" }, { "text": "With the constructed graph in hand, a recursive algorithm is applied to it, assigning scores to each node of the graph at each iteration until convergence is reached. This algorithm is derived from Google's PageRank (Brin and Page, 1998) , which determines the importance of a vertex within a graph by recursively taking into account global information. In other words, the score of one vertex in the graph will affect the scores of all vertices connected to that vertex, and vice-versa.", "cite_spans": [ { "start": 219, "end": 240, "text": "(Brin and Page, 1998)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Ranking", "sec_num": "3.2.2" }, { "text": "Before starting the recursive ranking algorithm, all vertices in the graph are initialized with a score of 1. Next, the algorithm is run on the graph for several iterations until it converges within a certain threshold. 
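The initialization-and-iteration procedure just described can be sketched as a minimal unweighted PageRank. This is an illustrative sketch, not the authors' code: the dict-of-sets graph layout and the function name are our assumptions, and a real TextRank graph would be built from co-occurring words and phrases.

```python
# Minimal unweighted PageRank over an undirected graph, as used by
# TextRank for ranking. For an undirected graph, In(V) == Out(V) ==
# the set of neighbors of V.
def textrank_scores(neighbors, d=0.85, tol=1e-6, max_iter=100):
    """neighbors: dict mapping each vertex to its set of adjacent vertices."""
    scores = {v: 1.0 for v in neighbors}  # every vertex starts at 1
    for _ in range(max_iter):
        new = {
            v: (1 - d) + d * sum(scores[u] / len(neighbors[u]) for u in neighbors[v])
            for v in neighbors
        }
        converged = max(abs(new[v] - scores[v]) for v in neighbors) < tol
        scores = new
        if converged:
            break
    return sorted(scores, key=scores.get, reverse=True)
```

Highly connected vertices accumulate higher scores, so they surface at the top of the returned ranking.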
In each iteration, the original PageRank formula is calculated for each", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Ranking", "sec_num": "3.2.2" }, { "text": "vertex V_i in graph G, as follows: S(V_i) = (1 - d) + d \u00b7 \u03a3_{V_j \u2208 In(V_i)} S(V_j) / |Out(V_j)|, (3) where In(V_i) is the set of vertices that point to V_i, and Out(V_i)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Ranking", "sec_num": "3.2.2" }, { "text": "is the set of vertices that V_i points to, and d is a damping factor, which is usually set to 0.85.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Ranking", "sec_num": "3.2.2" }, { "text": "With final scores assigned to each vertex, the vertices are sorted in descending order of this score.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Ranking", "sec_num": "3.2.2" }, { "text": "HybridRank is divided into four main components: preprocessing, supervised ranking, unsupervised ranking, and merging.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Framework", "sec_num": "4." }, { "text": "All documents in the training document set, as well as the input text, are cleaned before being processed by the other components. The following steps are performed in this stage:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Preprocessing", "sec_num": "4.1" }, { "text": "1. HTML tags are removed.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Preprocessing", "sec_num": "4.1" }, { "text": "2. All non-alphanumeric characters are removed, with the exception of punctuation marks relevant to text structure and word meaning.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Preprocessing", "sec_num": "4.1" }, { "text": "3. 
The cleaned text is sent to the supervised and unsupervised components.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Preprocessing", "sec_num": "4.1" }, { "text": "The supervised component of our system consists of a modified and extended version of the KEA algorithm proposed by Witten et al. This section will describe the modifications we made in each of the stages of KEA.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Supervised Ranking", "sec_num": "4.2" }, { "text": "The way candidate phrases are selected in HybridRank has some variations from the procedure followed by the original KEA method. We have carefully inspected the training document set and have used this knowledge to construct a more effective filter for phrase selection (as is later shown in the experimental evaluation). The following procedure is carried out: f. One-word phrases cannot be an adjective or a verb.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Candidate phrase generation", "sec_num": "4.2.1" }, { "text": "2. Each word in the extracted phrases is then converted to its stemmed form.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Candidate phrase generation", "sec_num": "4.2.1" }, { "text": "3. The phrases are passed as candidate phrases to the Feature Extraction stage.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Candidate phrase generation", "sec_num": "4.2.1" }, { "text": "We have included two additional features to the learning scheme as proposed in other works: the keyphrase frequency in the whole collection of texts and the PoS tag pattern (Hulth, 2003) . 
Adding these two features produced better overall results in our experiments.", "cite_spans": [ { "start": 173, "end": 186, "text": "(Hulth, 2003)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Feature extraction", "sec_num": "4.2.2" }, { "text": "The keyphrase frequency of phrase P in document D is the number of times P is manually assigned as a keyphrase in the training document set G , excluding D .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Keyphrase frequency", "sec_num": null }, { "text": "The PoS (Part-of-Speech) tag pattern of a phrase P is the sequence of PoS tags that belong to P . These tags are assigned to each word in P using a Part-of-Speech Tagger.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "PoS tag pattern", "sec_num": null }, { "text": "Unlike the original KEA method, we do not discretize real-valued features (TFIDF and relative position) into numeric ranges; we instead round these values to one decimal place. Experiments with both discretization tables and rounding to one decimal gave similar results, so we decided to use rounding due to its simpler implementation and faster performance.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training", "sec_num": "4.2.3" }, { "text": "With the two additional features (keyphrase frequency and PoS tag pattern) used in HybridRank, an expression similar to equation (1) can be constructed. 
The probability that a phrase is a keyphrase using all four features would then be calculated as:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Ranking", "sec_num": "4.2.4" }, { "text": "P(k | T, R, S, F) = P(T | k) \u00b7 P(R | k) \u00b7 P(S | k) \u00b7 P(F | k) \u00b7 Y / (Y + N), (4) where P(S | k)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Ranking", "sec_num": "4.2.4" }, { "text": "is the probability that it has PoS tag pattern S and P(F | k) the probability that it has keyphrase frequency F. An expression similar to equation (4) is used to calculate the probability that a phrase is not a keyphrase,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Ranking", "sec_num": "4.2.4" }, { "text": "P(\u00ack | T, R, S, F).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Ranking", "sec_num": "4.2.4" }, { "text": "The TFIDF and relative position values are rounded to one decimal place in both the trained model and in the current phrase. Since the keyphrase frequency is a non-negative integer, no rounding is performed. Finally, the PoS tag pattern value has to be an exact string match with the one in the trained model.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Ranking", "sec_num": "4.2.4" }, { "text": "The unsupervised ranking component of HybridRank is an implementation of the TextRank algorithm proposed by Mihalcea and Tarau for keyphrase extraction. This section will detail the configuration used in our system for the first stage (graph construction) of the TextRank algorithm. No modifications were made to the ranking stage described in Section 3.2.2.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Unsupervised Ranking", "sec_num": "4.3" }, { "text": "The parameters we have used for the graph construction in our implementation of TextRank presented the best results in our experiments. 
The following configuration was used:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Graph construction", "sec_num": "4.3.1" }, { "text": "\uf0b7 The graph is unweighted and undirected.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Graph construction", "sec_num": "4.3.1" }, { "text": "\uf0b7 Two types of vertices are added to the graph: words and phrases.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Graph construction", "sec_num": "4.3.1" }, { "text": "\uf0b7 Maximum phrase size is 4 words; phrases can only be composed of nouns and adjectives.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Graph construction", "sec_num": "4.3.1" }, { "text": "\uf0b7 The words added to the graph and those in the phrases cannot be any of the 539 predetermined stopwords.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Graph construction", "sec_num": "4.3.1" }, { "text": "\uf0b7 Words and phrases are related by co-occurrence, i.e. two text units are linked if they appear within a maximum distance (in words) of each other. The value used for this co-occurrence window is 2.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Graph construction", "sec_num": "4.3.1" }, { "text": "The merging component is the core of HybridRank. Once the two keyphrase lists are generated by KEA and TextRank, they are combined into a single list using a merging algorithm. The overall list is the result of the collaboration between a supervised and an unsupervised approach for keyphrase extraction.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Merging", "sec_num": "4.4" }, { "text": "The two main stages in the merging component are keyphrase list merging and post-processing. 
We illustrate the procedure with an example for easier understanding.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Merging", "sec_num": "4.4" }, { "text": "The first step performed in the merging stage is to add missing keyphrases to each keyphrase list, which results in two lists of the same size and with the same keyphrases, but in different order. In other words, keyphrases that appear in the KEA list but not in the TextRank list are appended to the TextRank list, and vice-versa. Missing keyphrases are added to each list in the same order as in their original list; their corresponding scores are marked with a flag to indicate that these phrases were not in that list before. Next, a reordering of the two lists is done by giving more priority to those phrases that appear in both lists. Assuming that the two lists are already sorted, the reordering is done by applying the following algorithm to each list L : ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Keyphrase list merging", "sec_num": "4.4.1" }, { "text": "The previous algorithm partitions each list into two sections, leaving phrases that appear in both lists on top, and phrases that only appear in one list on the bottom. It is worth pointing out that the original order of the phrases is maintained in each partition.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Keyphrase list merging", "sec_num": null }, { "text": "Finally, the two keyphrase lists are merged into a single list based on the order in which each phrase appears in both lists. 
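The merging steps just described (fill in missing phrases, reorder so shared phrases come first, then combine the two positions) can be sketched as follows. This is an illustrative sketch under our own naming; alphabetical order stands in for the full tie-breaker chain (KEA score, TextRank score, TFIDF, alphabetical), which needs scores not shown here.

```python
# Hedged sketch of the HybridRank list-merging step.
def merge_positions(kea_list, textrank_list, method="max"):
    """kea_list / textrank_list: phrases sorted by each method's rank.
    Returns a single list ordered by the combined position k."""
    # 1. Append missing phrases so both lists contain the same phrases,
    #    preserving the order of the list they came from.
    kea = kea_list + [p for p in textrank_list if p not in kea_list]
    tr = textrank_list + [p for p in kea_list if p not in textrank_list]
    # 2. Reorder each list: phrases present in both original lists first,
    #    keeping the relative order within each partition.
    both = set(kea_list) & set(textrank_list)
    kea = [p for p in kea if p in both] + [p for p in kea if p not in both]
    tr = [p for p in tr if p in both] + [p for p in tr if p not in both]
    # 3. Combine the two positions into one (average / min / max).
    combine = {"average": lambda i, j: (i + j) / 2, "min": min, "max": max}[method]
    k = {p: combine(kea.index(p), tr.index(p)) for p in kea}
    # Ties are broken alphabetically here as a stand-in for the full chain.
    return sorted(kea, key=lambda p: (k[p], p))
```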
Given phrase P with position i in the KEA list and position j in the TextRank list, three different merging methods can be used to assign an overall position k to P :", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Keyphrase list merging", "sec_num": null }, { "text": "\uf0b7 Average: k = (i + j)/2 \uf0b7 Min: k = min(i, j) \uf0b7 Max: k = max(i, j)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Keyphrase list merging", "sec_num": null }, { "text": "Once the new HybridRank position k has been calculated for every phrase in the text, the phrases are sorted according to this new position. If two phrases have the same value for k , as often occurs, a tie-breaker is used. The tie-breakers have the following precedence: KEA score, TextRank score, TFIDF value, and finally alphabetical order.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Keyphrase list merging", "sec_num": null }, { "text": "In the final stage of HybridRank, a post-processing filter is applied on the final list of keyphrases.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Post-processing", "sec_num": "4.4.2" }, { "text": "First, any phrase that is a subphrase of a higher-ranking phrase is removed from the list. For example, if the phrase bass diffusion has a higher ranking than the phrase bass, then the latter is eliminated.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Post-processing", "sec_num": "4.4.2" }, { "text": "Second, any phrase that exists in a predetermined stop-phrase list is removed. The stop-phrase list is a list of words and phrases that will rarely or never be keyphrases by themselves. We have identified 28 stop-phrases, which consist of frequent nouns and noun phrases found in the training documents that were never assigned as author keyphrases. These phrases differ from stopwords in that, when combined with other words, they may become keyphrases. 
Stopwords, on the other hand, are removed in a previous stage because they will rarely or never be part of a keyphrase. For example, the words research and method are stop-phrases and not stopwords, because they are too general to be keyphrases unless combined with other word(s), such as in photonics research or kernel method.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Post-processing", "sec_num": "4.4.2" }, { "text": "Two different document collections were used for our experiments: the IEEE Xplore collection (1,606 documents) and the Hulth 2003 collection (2,000 documents). The documents consist of abstracts in English from journal and conference papers of various disciplines with their corresponding, manually-assigned keyphrases. Of the total number of abstracts, 1,822 were used for training (to construct the trained model), 917 for testing, and 867 for validation (to evaluate different parameters in the methods used and select the values with the best performance); this assignment was made by random sampling.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Corpora", "sec_num": "5.1" }, { "text": "Some statistics relevant to the analysis of our experiments were extracted from the collections used. The statistics show that, in general, only 51% of the manually-assigned keyphrases are actually contained in the abstract text in their stemmed forms. From this, it can be deduced that the precision of any keyphrase extraction method will rarely surpass this percentage on these corpora, which presents a difficulty for adequate evaluation. For the purpose of carrying out a fairer evaluation, a utopian subset was selected from the testing set.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Corpora", "sec_num": "5.1" }, { "text": "Each of this subset's abstracts must contain at least one of the manually-assigned keyphrases in the text. 
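The stemmed-containment test behind the utopian subset can be illustrated as follows. All names here are ours, and the crude suffix-stripping stemmer is only a stand-in for a real stemmer (e.g. Porter); it exists solely to keep the sketch self-contained.

```python
# Illustrative check for the "utopian" subset: keep an abstract only if at
# least one manually assigned keyphrase appears in the text in stemmed form.
def crude_stem(word):
    # Toy suffix stripper standing in for a real stemmer.
    for suffix in ("ing", "es", "s"):
        if word.endswith(suffix) and len(word) > len(suffix) + 2:
            return word[: -len(suffix)]
    return word

def in_utopian_subset(abstract, keyphrases):
    text_stems = " ".join(crude_stem(w) for w in abstract.lower().split())
    for phrase in keyphrases:
        stemmed = " ".join(crude_stem(w) for w in phrase.lower().split())
        if stemmed in text_stems:
            return True
    return False
```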
Additionally, an average of 7 keyphrases were manually assigned for each abstract by either authors or other human annotators, which corresponds to roughly 6% of the total number of words per abstract.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Corpora", "sec_num": "5.1" }, { "text": "To evaluate the performance of HybridRank, we have performed experiments on the utopian subset using two other keyphrase extraction methods: KEA and TextRank. HybridRank has been separated into three different merging methods, which we evaluate individually: average, min and max.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Setup", "sec_num": "5.2" }, { "text": "To further break down our evaluation, we have performed experiments using the original procedures stated in the KEA and TextRank papers, and compared their performance with our modified versions. Additionally, we separated the evaluation of the KEA and HybridRank methods by using two different feature sets for the Na\u00efve Bayes classifier: the Base Feature Set (New) and the PoS Tag Feature Set. The three measures used in our evaluation were the precision, recall and F-score. We compare the output keyphrases of each method with those in the manually-assigned list; the keyphrases in both lists are stemmed beforehand. The number of keyphrases extracted per abstract was fixed at six, as this setting yielded the best results.", "cite_spans": [ { "start": 361, "end": 366, "text": "(New)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Experimental Setup", "sec_num": "5.2" }, { "text": "The results for the Hulth 2003 dataset are shown in Figure 1. For this dataset, HybridRank obtained the highest precision, recall and F-score when using the Max merging method. This best performance was obtained when applying either the Base Feature Set (New) or the PoS Tag Feature Set on KEA.
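The evaluation protocol above (stem both lists, then compare the extracted keyphrases against the manually assigned ones) can be illustrated with a rough sketch. The toy stemmer and the sample phrases are invented stand-ins, not the paper's actual data or stemming algorithm:

```python
def crude_stem(word):
    # Toy stand-in for a real stemmer (e.g. Porter): strip a plural "s".
    return word[:-1] if word.endswith("s") and len(word) > 3 else word

def stem_phrase(phrase):
    return " ".join(crude_stem(w) for w in phrase.lower().split())

def prf(extracted, assigned):
    """Precision, recall and F-score over stemmed keyphrase sets."""
    ext = {stem_phrase(p) for p in extracted}
    gold = {stem_phrase(p) for p in assigned}
    hits = len(ext & gold)
    precision = hits / len(ext) if ext else 0.0
    recall = hits / len(gold) if gold else 0.0
    f = (2 * precision * recall / (precision + recall)
         if precision + recall else 0.0)
    return precision, recall, f

# Toy example: six extracted phrases, four manually assigned (invented).
extracted = ["neural networks", "keyphrase extraction", "graphs",
             "training", "abstracts", "rankings"]
assigned = ["keyphrase extraction", "neural network", "ranking", "corpora"]
print(prf(extracted, assigned))  # three stemmed matches -> (0.5, 0.75, 0.6)
```

Stemming both sides is what lets "neural networks" count as a match for the assigned "neural network".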
It can also be observed in Figure 1 that our modified versions of both KEA and TextRank performed better than the original ones. Figure 2 displays the results for the IEEE Xplore dataset. In this dataset, when applying the Base Feature Set (New) on KEA and using the Min merging method, HybridRank performed better than the other methods. However, when applying the PoS Tag Feature Set, the original KEA method outperformed the others. This is probably because the IEEE Xplore dataset has a greater variety of subjects than the Hulth 2003 dataset. This wide range of subjects causes the Keyphrase Frequency attribute, applied in the Base Feature Set (New), to become less meaningful, thus allowing the PoS Tag Feature Set to predict a phrase's class (keyphrase or non-keyphrase) with higher accuracy. Overall, our method performed better than either KEA or TextRank in all cases.", "cite_spans": [], "ref_spans": [ { "start": 52, "end": 60, "text": "Figure 1", "ref_id": "FIGREF4" }, { "start": 323, "end": 331, "text": "Figure 1", "ref_id": "FIGREF4" }, { "start": 425, "end": 433, "text": "Figure 2", "ref_id": "FIGREF5" } ], "eq_spans": [], "section": "Evaluation and Discussion", "sec_num": "5.3" }, { "text": "In this paper, we have described and evaluated a hybrid keyphrase extraction method:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions and Future Work", "sec_num": "6." }, { "text": "HybridRank. Our results show that collaboration between a supervised and an unsupervised approach can produce high-quality keyphrase lists for short articles. We have compared the performance of HybridRank with two other well-known keyphrase extraction methods, KEA and TextRank, and showed that HybridRank obtained a higher precision, recall and F-score when applied to the Hulth 2003 dataset.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions and Future Work", "sec_num": "6."
}, { "text": "On our second dataset (IEEE Xplore), the original KEA algorithm performed better than HybridRank and TextRank when using PoS Tag Patterns because this dataset contains a wide range of domains, affecting the performance of the Na\u00efve Bayes classifier when using the Base Feature Set (New). Our method, however, outperformed in all cases either the supervised (KEA) or unsupervised (TextRank) approaches. Furthermore, doing some modifications to KEA and TextRank improved their performance in most cases as compared to the original methods proposed by their authors.", "cite_spans": [ { "start": 357, "end": 362, "text": "(KEA)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Conclusions and Future Work", "sec_num": "6." }, { "text": "We can conclude that HybridRank performs the best when the unsupervised component outperforms the supervised component. Additionally, merging KEA's and TextRank's keyphrases with the Min or Max methods produced better results than using the Average.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions and Future Work", "sec_num": "6." }, { "text": "Among our planned future work is adopting a weighting mechanism to both components, so as to have biased merging, either towards the supervised component or towards the unsupervised one. Another approach we have considered is to implement different (and newer) methods for the supervised and unsupervised components (see Section 2), so as to maximize the overall performance of the HybridRank system.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions and Future Work", "sec_num": "6." } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "The anatomy of a large-scale hypertextual Web search engine. 
Computer networks and ISDN systems", "authors": [ { "first": "S", "middle": [], "last": "Brin", "suffix": "" }, { "first": "L", "middle": [], "last": "Page", "suffix": "" } ], "year": 1998, "venue": "", "volume": "30", "issue": "", "pages": "107--117", "other_ids": {}, "num": null, "urls": [], "raw_text": "S. Brin and L. Page. The anatomy of a large-scale hypertextual Web search engine. Computer networks and ISDN systems, 30(1-7):107--117, 1998.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Domain-specific keyphrase extraction", "authors": [ { "first": "Eibe", "middle": [], "last": "Frank", "suffix": "" }, { "first": "Gordon", "middle": [ "W" ], "last": "Paynter", "suffix": "" }, { "first": "Ian", "middle": [ "H" ], "last": "Witten", "suffix": "" } ], "year": 1999, "venue": "IJCAI", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Eibe Frank and Gordon W. Paynter and Ian H. Witten. Domain-specific keyphrase extraction. IJCAI, 1999.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Improved automatic keyword extraction given more linguistic knowledge", "authors": [ { "first": "Anette", "middle": [], "last": "Hulth", "suffix": "" } ], "year": 2003, "venue": "Proceedings of the 2003 conference on Empirical methods in natural language processing", "volume": "", "issue": "", "pages": "216--223", "other_ids": {}, "num": null, "urls": [], "raw_text": "Anette Hulth. Improved automatic keyword extraction given more linguistic knowledge. 
Proceedings of the 2003 conference on Empirical methods in natural language processing, :216--223, 2003.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "TextRank: Bringing order into texts", "authors": [ { "first": "R", "middle": [], "last": "Mihalcea", "suffix": "" }, { "first": "P", "middle": [], "last": "Tarau", "suffix": "" } ], "year": 2004, "venue": "Proceedings of EMNLP", "volume": "", "issue": "", "pages": "404--411", "other_ids": {}, "num": null, "urls": [], "raw_text": "R. Mihalcea and P. Tarau. TextRank: Bringing order into texts. Proceedings of EMNLP, :404--411, 2004.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Keyphrase extraction in scientific publications", "authors": [ { "first": "T", "middle": [ "D" ], "last": "Nguyen", "suffix": "" }, { "first": "M", "middle": [ "Y" ], "last": "Kan", "suffix": "" } ], "year": 2007, "venue": "Proceedings of ICADL2007", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "T.D. Nguyen and M.Y. Kan. Keyphrase extraction in scientific publications. Proceedings of ICADL2007, 2007.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Coherent Keyphrase Extraction via Web Mining", "authors": [ { "first": "Peter", "middle": [ "D" ], "last": "Turney", "suffix": "" } ], "year": 2003, "venue": "Proceedings of the Eighteenth International Joint Conference on Artificial Intelligence (IJCAI-03)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Peter D. Turney. Coherent Keyphrase Extraction via Web Mining. Proceedings of the Eighteenth International Joint Conference on Artificial Intelligence (IJCAI-03), 2003.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Learning Algorithms for Keyphrase Extraction", "authors": [ { "first": "Peter", "middle": [ "D" ], "last": "Turney", "suffix": "" } ], "year": 2000, "venue": "Inf.
Retr", "volume": "2", "issue": "4", "pages": "303--336", "other_ids": {}, "num": null, "urls": [], "raw_text": "Peter D. Turney. Learning Algorithms for Keyphrase Extraction. Inf. Retr., 2(4):303--336, 2000.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "CollabRank: towards a collaborative approach to single-document keyphrase extraction", "authors": [ { "first": "X", "middle": [], "last": "Wan", "suffix": "" }, { "first": "J", "middle": [], "last": "Xiao", "suffix": "" } ], "year": 2007, "venue": "Proceedings of the 22nd International Conference on Computational Linguistics", "volume": "1", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "X. Wan and J. Xiao. CollabRank: towards a collaborative approach to single-document keyphrase extraction. Proceedings of the 22nd International Conference on Computational Linguistics, 1:969--976, 2008. Computational Linguistics, 45(1):552, 2007.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "KEA: practical automatic keyphrase extraction. DL '99", "authors": [ { "first": "H", "middle": [], "last": "Ian", "suffix": "" }, { "first": "Gordon", "middle": [ "W" ], "last": "Witten", "suffix": "" }, { "first": "Eibe", "middle": [], "last": "Paynter", "suffix": "" }, { "first": "Carl", "middle": [], "last": "Frank", "suffix": "" }, { "first": "Craig", "middle": [ "G" ], "last": "Gutwin", "suffix": "" }, { "first": "", "middle": [], "last": "Nevill-Manning", "suffix": "" } ], "year": 1999, "venue": "Proceedings of the fourth ACM conference on Digital libraries", "volume": "", "issue": "", "pages": "254--255", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ian H. Witten and Gordon W. Paynter and Eibe Frank and Carl Gutwin and Craig G. Nevill-Manning. KEA: practical automatic keyphrase extraction. 
DL '99: Proceedings of the fourth ACM conference on Digital libraries, :254--255, 1999.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Highlights: Language-and domain-independent automatic indexing terms for abstracting", "authors": [ { "first": "J", "middle": [ "D" ], "last": "Cohen", "suffix": "" } ], "year": 1995, "venue": "JASIS", "volume": "46", "issue": "3", "pages": "162--174", "other_ids": {}, "num": null, "urls": [], "raw_text": "J. D. Cohen. Highlights: Language-and domain-independent automatic indexing terms for abstracting. JASIS, 46(3):162--174, 1995.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Neural based approach to keyword extraction from documents", "authors": [ { "first": "T", "middle": [], "last": "Jo", "suffix": "" } ], "year": 2003, "venue": "Computational Science and Its Applications--ICCSA 2003", "volume": "", "issue": "", "pages": "456--461", "other_ids": {}, "num": null, "urls": [], "raw_text": "T. Jo. Neural based approach to keyword extraction from documents. In Computational Science and Its Applications--ICCSA 2003, pages 456--461. Springer, 2003.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Keyword extraction from documents using a neural network model", "authors": [ { "first": "T", "middle": [], "last": "Jo", "suffix": "" }, { "first": "M", "middle": [], "last": "Lee", "suffix": "" }, { "first": "T", "middle": [ "M" ], "last": "Gatton", "suffix": "" } ], "year": 2006, "venue": "Hybrid Information Technology, 2006. ICHIT'06. International Conference on", "volume": "2", "issue": "", "pages": "194--197", "other_ids": {}, "num": null, "urls": [], "raw_text": "T. Jo, M. Lee, and T. M. Gatton. Keyword extraction from documents using a neural network model. In Hybrid Information Technology, 2006. ICHIT'06. International Conference on, volume 2, pages 194--197. 
IEEE, 2006.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "A statistical approach to mechanized encoding and searching of literary information", "authors": [ { "first": "H", "middle": [ "P" ], "last": "Luhn", "suffix": "" } ], "year": 1957, "venue": "IBM Journal of research and development", "volume": "1", "issue": "4", "pages": "309--317", "other_ids": {}, "num": null, "urls": [], "raw_text": "H. P. Luhn. A statistical approach to mechanized encoding and searching of literary information. IBM Journal of research and development, 1(4):309--317, 1957.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Keyword extraction from a single document using word co-occurrence statistical information", "authors": [ { "first": "Y", "middle": [], "last": "Matsuo", "suffix": "" }, { "first": "M", "middle": [], "last": "Ishizuka", "suffix": "" } ], "year": 2004, "venue": "International Journal on Artificial Intelligence Tools", "volume": "13", "issue": "01", "pages": "157--169", "other_ids": {}, "num": null, "urls": [], "raw_text": "Y. Matsuo and M. Ishizuka. Keyword extraction from a single document using word co-occurrence statistical information. International Journal on Artificial Intelligence Tools, 13(01):157--169, 2004.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "A theory of term importance in automatic text analysis", "authors": [ { "first": "G", "middle": [], "last": "Salton", "suffix": "" }, { "first": "C.-S", "middle": [], "last": "Yang", "suffix": "" }, { "first": "C", "middle": [ "T" ], "last": "Yu", "suffix": "" } ], "year": 1975, "venue": "Journal of the American society for Information Science", "volume": "26", "issue": "1", "pages": "33--44", "other_ids": {}, "num": null, "urls": [], "raw_text": "G. Salton, C.-S. Yang, and C. T. Yu. A theory of term importance in automatic text analysis. 
Journal of the American society for Information Science, 26(1):33--44, 1975.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "A new approach to keyphrase extraction using neural networks", "authors": [ { "first": "K", "middle": [], "last": "Sarkar", "suffix": "" }, { "first": "M", "middle": [], "last": "Nasipuri", "suffix": "" }, { "first": "S", "middle": [], "last": "Ghose", "suffix": "" } ], "year": 2010, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1004.3274" ] }, "num": null, "urls": [], "raw_text": "K. Sarkar, M. Nasipuri, and S. Ghose. A new approach to keyphrase extraction using neural networks. arXiv preprint arXiv:1004.3274, 2010.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Finding advertising keywords on web pages", "authors": [ { "first": "W.-T", "middle": [], "last": "Yih", "suffix": "" }, { "first": "J", "middle": [], "last": "Goodman", "suffix": "" }, { "first": "V", "middle": [ "R" ], "last": "Carvalho", "suffix": "" } ], "year": 2006, "venue": "Proceedings of the 15th international conference on World Wide Web", "volume": "", "issue": "", "pages": "213--222", "other_ids": {}, "num": null, "urls": [], "raw_text": "W.-t. Yih, J. Goodman, and V. R. Carvalho. Finding advertising keywords on web pages. In Proceedings of the 15th international conference on World Wide Web, pages 213--222, 2006.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Automatic keyphrases extraction from document using neural network", "authors": [ { "first": "J", "middle": [], "last": "Wang", "suffix": "" }, { "first": "H", "middle": [], "last": "Peng", "suffix": "" }, { "first": "J. Song", "middle": [], "last": "Hu", "suffix": "" } ], "year": 2006, "venue": "Advances in Machine Learning and Cybernetics", "volume": "", "issue": "", "pages": "633--641", "other_ids": {}, "num": null, "urls": [], "raw_text": "J. Wang, H. Peng, and J. Song Hu. Automatic keyphrases extraction from document using neural network.
In Advances in Machine Learning and Cybernetics, pages 633--641. Springer, 2006.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Automatic keyword extraction from documents using conditional random fields", "authors": [ { "first": "C", "middle": [], "last": "Zhang", "suffix": "" } ], "year": 2008, "venue": "Journal of Computational Information Systems", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "C. Zhang. Automatic keyword extraction from documents using conditional random fields. Journal of Computational Information Systems, 2008.", "links": null } }, "ref_entries": { "FIGREF0": { "type_str": "figure", "text": "This section will describe the Preprocessing and Merging components in detail. For the Supervised ranking and Unsupervised ranking components, only the specific modifications made in our work will be detailed.", "uris": null, "num": null }, "FIGREF1": { "type_str": "figure", "text": "Phrases composed of 1 to 4 words are extracted from each sentence when they comply with the following criteria: a. They do not contain any of a list of 539 predetermined stopwords. b. They are composed of nouns, adjectives and/or verbs in their gerund or past participle forms. c. They do not contain words with less than 3 letters. d. They do not contain words composed only of numbers and/or other non-letters. e. They do not end with an adjective.", "uris": null, "num": null }, "FIGREF2": { "type_str": "figure", "text": "11: reorderedK .append( existentK ) 12: reorderedK .append(", "uris": null, "num": null }, "FIGREF3": { "type_str": "figure", "text": "and the PoS Tag Feature Set. Base Feature Set (New) Only the TFIDF, relative position and keyphrase frequency are taken into account when calculating equation 4 in Section 4.2.4. 
PoS Tag Feature Set: Only the TFIDF, relative position and PoS tag pattern are taken into account when calculating equation 4 in Section 4.2.4.", "uris": null, "num": null }, "FIGREF4": { "type_str": "figure", "text": "Precision, recall and F-score on the Hulth 2003 dataset. The left corresponds to the Base Feature Set (New), the right to the PoS Tag Feature Set.", "uris": null, "num": null }, "FIGREF5": { "type_str": "figure", "text": "Precision, recall and F-score on the IEEE Xplore dataset. The left corresponds to the Base Feature Set (New), the right to the PoS Tag Feature Set.", "uris": null, "num": null } } } }