{ "paper_id": "I11-1046", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T07:32:05.641959Z" }, "title": "Japanese Abbreviation Expansion with Query and Clickthrough Logs", "authors": [ { "first": "Kei", "middle": [], "last": "Uchiumi", "suffix": "", "affiliation": {}, "email": "kuchiumi@yahoo-corp.jp" }, { "first": "Mamoru", "middle": [], "last": "Komachi", "suffix": "", "affiliation": {}, "email": "kmachina@yahoo-corp.jp" }, { "first": "Keigo", "middle": [], "last": "Machinaga", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Toshiyuki", "middle": [], "last": "Maezawa", "suffix": "", "affiliation": {}, "email": "tmaezawa@yahoo-corp.jp" }, { "first": "Toshinori", "middle": [], "last": "Satou", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Yoshinori", "middle": [], "last": "Kobayashi", "suffix": "", "affiliation": {}, "email": "ykobayas@google.com" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "A novel reranking method has been developed to refine web search queries. A label propagation algorithm was applied on a clickthrough graph, and the candidates were reranked using a query language model. Our method first enumerates query candidates with common landing pages with regard to the given query to create a clickthrough graph. Second, it calculates the likelihood of the candidates, using a language model generated from web search query logs. Finally, the candidates are sorted by the score calculated from the likelihood and label propagation. As a result, high precision and coverage were achieved in the task of Japanese abbreviation expansion, without using handcrafted training data.", "pdf_parse": { "paper_id": "I11-1046", "_pdf_hash": "", "abstract": [ { "text": "A novel reranking method has been developed to refine web search queries. A label propagation algorithm was applied on a clickthrough graph, and the candidates were reranked using a query language model. Our method first enumerates query candidates with common landing pages with regard to the given query to create a clickthrough graph. Second, it calculates the likelihood of the candidates, using a language model generated from web search query logs. Finally, the candidates are sorted by the score calculated from the likelihood and label propagation. As a result, high precision and coverage were achieved in the task of Japanese abbreviation expansion, without using handcrafted training data.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "The query expansion technique has been widely used in recent web-search engines. Query expansion significantly improves recall in information retrieval operations. It uses a thesaurus or synonym dictionary to reformulate a query, or to correct spelling errors in search queries.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In the early days of the speller, the dictionary was manually compiled by lexicographers. However, it is time consuming to construct a broad coverage dictionary, and domain knowledge is required to achieve high quality. Moreover, the rapid growth of the web makes it even harder to maintain an up-to-date dictionary for the web.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "To alleviate this problem, web-search engines often exploit web search query logs to automatically generate a thesaurus. 
A web search query is a query that a web user types into a web search engine to find information. It is noisy and sometimes ambiguous to detect query intent, but it is a great way to create a fresh web dictionary at low cost. Hence, the web search queries are widely used in the NLP field. For instance, Hagiwara and Suzuki (2009) used them for a query alteration task, and Sekine and Suzuki (2007) leveraged them for acquiring semantic categories.", "cite_spans": [ { "start": 425, "end": 451, "text": "Hagiwara and Suzuki (2009)", "ref_id": "BIBREF11" }, { "start": 495, "end": 519, "text": "Sekine and Suzuki (2007)", "ref_id": "BIBREF27" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "More recently, web search clickthrough logs have been explored in the field of lexical acquisition. A web clickthrough is the process of clicking a URL and going to the page it refers. This ensures that the landing page is appropriate since the web user follows the hyperlink after checking the information displayed, such as 'title', 'URL', and 'summary' of their search. Two distinct queries landing on the same 'URL' are possibly input for the same purpose, meaning that they are likely to be related. In the NLP literature, clickthrough logs have been used to learn semantic categories (Komachi et al., 2009) and named entities (Jain and Pennacchiotti, 2010) .", "cite_spans": [ { "start": 590, "end": 612, "text": "(Komachi et al., 2009)", "ref_id": "BIBREF17" }, { "start": 632, "end": 662, "text": "(Jain and Pennacchiotti, 2010)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The main contribution of this work is two fold:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 We propose a novel method to combine web search query logs and clickthrough logs.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 To the best our knowledge, this work is the first attempt to automatically recognize full spellings given Japanese abbreviations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "This is a very first step of Japanese abbreviation expansion task using search logs. For evaluation of query expansion method, it is desirable to use a set of queries for evaluation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "However, it is difficult to obtain them beforehand, because we have to check query logs to find incorrect queries and make necessary changes to define their corrections.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Therefore, in this paper, we focus on query abbreviation and evaluate our proposed approach in an abbreviation expansion task. Abbreviation expansion itself is difficult for many query expansion methods based on edit distance, because the input and output have only a few, if any, characters in common. Our clickthroughlog-based approach can expand even queries that do not share any characters at all with the abbreviated ones 1 . 
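To make this contrast concrete, here is a small, purely illustrative Python sketch (the query pairs are hypothetical examples, not data from this paper) of why edit-distance-based correction can catch spelling variants but not abbreviation/expansion pairs:

```python
# Purely illustrative: a minimal edit-distance check showing why
# edit-distance-based correction handles spelling variants but not
# abbreviation/expansion pairs. The query pairs are hypothetical examples.

def edit_distance(a: str, b: str) -> int:
    """Plain Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + cost))  # substitution
        prev = curr
    return prev[-1]

# A spelling error stays close to its correction ...
print(edit_distance("goggle", "google"))                          # 1
# ... but an abbreviation shares almost nothing with its expansion.
print(edit_distance("abc", "american broadcasting corporation"))  # 30
```

With distances this large, no reasonable edit-distance threshold can reach the expansion, which is why a different signal is needed.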
Since our method does not rely on any particular language, it is applicable to other languages, including Chinese and English.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The rest of this paper is organized as follows. Section 2 describes previous work on query expansion tasks. In Section 3, we formulate the query expansion task in a noisy channel model framework. In Section 4, we show that label propagation on a clickthrough graph can be used as a query abbreviation model to extract candidates for query correction without preparing correct candidates in advance. Section 5 explains the query language model we use. In Section 6, we evaluate our method on an abbreviation expansion task and show its effectiveness. Section 7 offers conclusions and directions for future work.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Query expansion for web-search queries has to handle neologisms and slang on the web. Thus, it is labor-intensive to maintain a list of correctly spelled words for search queries. Additionally, Japanese query expansion includes several subtasks, such as word segmentation, word stemming, and acronym expansion. Much of the previous work has focused on each individual task (Ahmad and Kondrak, 2005; Chen et al., 2007; Bergsma and Wang, 2007; Li et al., 2006; Peng et al., 2007; Risvik et al., 2003). Cucerzan and Brill (2004) clarified the problems of spelling correction for search queries and addressed them using a noisy channel model with a language model created from query logs. Later work applied a reranking method based on a neural network to the search-query spelling correction candidates obtained from Cucerzan's method; this reranking had the advantage of incorporating clickthrough logs into a translation model learned as a ranking feature. However, these methods are based on edit distance, and thus they do not handle synonym replacement or acronym expansion. Wei et al. (2009) addressed synonym extraction using a similarity based on the Jensen-Shannon divergence between the clicked-URL distributions of queries. Their approach is similar to our proposed method, except that they did not use a language model. Also, their method is not scalable and cannot be applied to our task, which uses large-scale data. Jain and Pennacchiotti (2010) proposed an unsupervised method for named entity extraction from web search query logs. They applied a clustering method using a combination of features based on query logs, web documents, and clickthrough logs. They showed that clickthrough logs give higher accuracy than query logs as a corpus. Guo et al. (2008) proposed a unified approach to query expansion using a discriminative model. They extended the feature function of CRFs (Lafferty et al., 2001) by adding an 'operation' variable, so that features are defined over the triplet 'feature', 'label', and 'operation'. An 'operation' represents a query expansion process; for spelling correction, for example, it can take four states ('deletion', 'insertion', 'substitution', and 'transposition'). However, their method needs supervised data for training and cannot deal with words that do not occur in the corpus. In fact, they used only 10,000 queries to learn the query expansion model. Unlike their method, our approach takes advantage of an enormous amount of clickthrough logs for learning the query abbreviation model. Query suggestion is another task that uses search logs (Mei et al., 2008; Cao et al., 2008). 
Query suggestion differs from our task in that it allows queries to be suggested that are different from the one that the search user types.", "cite_spans": [ { "start": 370, "end": 395, "text": "(Ahmad and Kondrak, 2005;", "ref_id": "BIBREF0" }, { "start": 396, "end": 414, "text": "Chen et al., 2007;", "ref_id": "BIBREF7" }, { "start": 415, "end": 438, "text": "Bergsma and Wang, 2007;", "ref_id": "BIBREF4" }, { "start": 439, "end": 455, "text": "Li et al., 2006;", "ref_id": "BIBREF19" }, { "start": 456, "end": 474, "text": "Peng et al., 2007;", "ref_id": "BIBREF25" }, { "start": 475, "end": 495, "text": "Risvik et al., 2003)", "ref_id": "BIBREF26" }, { "start": 498, "end": 523, "text": "Cucerzan and Brill (2004)", "ref_id": "BIBREF8" }, { "start": 1092, "end": 1109, "text": "Wei et al. (2009)", "ref_id": "BIBREF30" }, { "start": 1435, "end": 1464, "text": "Jain and Pennacchiotti (2010)", "ref_id": "BIBREF13" }, { "start": 1764, "end": 1781, "text": "Guo et al. (2008)", "ref_id": "BIBREF10" }, { "start": 1899, "end": 1921, "text": "(Lafferty et al., 2001", "ref_id": "BIBREF18" }, { "start": 2582, "end": 2600, "text": "(Mei et al., 2008;", "ref_id": "BIBREF20" }, { "start": 2601, "end": 2618, "text": "Cao et al., 2008)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Furthermore, some previous works have addressed acquiring a Japanese abbreviation task. Murayama and Okumura (2008) formulated the process of generating Japanese abbreviations by noisy channel model but they did not handle abbreviation expansion. Okazaki et al. (2008) Figure 1 : Combining clickthrough logs and search logs for query abbreviation expansion pairs of words from the newspaper corpus using a heuristic and then classified them as \"abbreviation\" or \"not-abbreviation\". However, their heuristic for obtaining abbreviation candidates cannot be applied to web search queries.", "cite_spans": [ { "start": 88, "end": 115, "text": "Murayama and Okumura (2008)", "ref_id": "BIBREF22" }, { "start": 247, "end": 268, "text": "Okazaki et al. (2008)", "ref_id": "BIBREF23" } ], "ref_spans": [ { "start": 269, "end": 277, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "In this section, we explain our noisy channel based approach to query expansion. We define the query expansion problem as follows: Given a user's query q and a set of search logs L, find a correct query c \u2208 C that is most relevant to the input q. In a probabilistic framework, this can be formulated as finding the argmax P (c|q). Applying Bayes' Rule and dropping the constant denominator, we obtain a unnormalized posterior: argmax P (c)P (q|c)(Eq.1). We now have a noisy channel model for query expansion, with two components: the source model P (c) and the channel model P (q|c).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Noisy Channel Model for Abbreviation Expansion", "sec_num": "3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "c * = argmax c P (c|q) = argmax c P (c)P (q|c) P (q) = argmax c P (c)P (q|c)", "eq_num": "(1)" } ], "section": "Noisy Channel Model for Abbreviation Expansion", "sec_num": "3" }, { "text": "We use a language model estimated from search query logs as the source model, thus P (c) represents likelihood of c as a query. 
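For concreteness, a minimal sketch of this noisy-channel reranking is shown below; the two probability functions are placeholders for the source and channel models (the channel model is described next and in Section 4, the language model in Section 5), not code from the paper.

```python
# A minimal sketch of the noisy-channel reranking in Equation (1).
# `source_prob` (the query language model P(c)) and `channel_prob` (P(q|c))
# are placeholders for the models described in Sections 4 and 5; they are
# not part of the paper's code.

from typing import Callable, List, Tuple

def rerank_candidates(
    q: str,
    candidates: List[str],
    source_prob: Callable[[str], float],        # P(c): query language model
    channel_prob: Callable[[str, str], float],  # P(q|c): abbreviation/channel model
) -> List[Tuple[str, float]]:
    """Score each candidate c by P(c) * P(q|c) and sort in descending order."""
    scored = [(c, source_prob(c) * channel_prob(q, c)) for c in candidates]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)
```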
As for the channel model, we use a label propagation method on a clickthrough graph as proposed by Komachi et al. (2009) . Figure 1 shows the framework of our approach.", "cite_spans": [ { "start": 227, "end": 248, "text": "Komachi et al. (2009)", "ref_id": "BIBREF17" } ], "ref_spans": [ { "start": 251, "end": 259, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Noisy Channel Model for Abbreviation Expansion", "sec_num": "3" }, { "text": "To find candidates to the input query, we construct a bipartite graph from a query and a clicked URL using the web search logs. We calculate the relatedness between the queries on this graph to select a set of candidates C. Since the label propagation is mathematically identical to the random walk with restart, probability of the label propagation can be regarded as the conditional probability P (q|c). If we assume that the relatedness score represents the conditional probability of the typed query q given a candidate c \u2208 C, P (q|c), the c * is calculated by argmax P (c) \u00d7 P (q|c). As a consequence, we propose reranking in accordance with the follow equation using two probabilistic models P QLM and P LP and then output ranked candidates. In this paper, we will define P LP interchangeably as a query abbreviation model, P QAM .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Noisy Channel Model for Abbreviation Expansion", "sec_num": "3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "score(q, c) = P QLM (c) \u00d7 P QAM (q|c)", "eq_num": "(2)" } ], "section": "Noisy Channel Model for Abbreviation Expansion", "sec_num": "3" }, { "text": "An advantage of our proposed method is that it can correct a query by only using search logs without a manually labeled-corpora or any heuristics. Our approach is a versatile framework for query expansion and thus is not specialized for any tasks. We explain the label propagation algorithm on a clickthrough graph and the query language model below.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Noisy Channel Model for Abbreviation Expansion", "sec_num": "3" }, { "text": "In this section, we describe a label propagation algorithm on a clickthrough graph. It is based on a previous work by Komachi et al. (2009) . The main difference between their method and ours is that we use the normalized pointwise mutual information and the 1-step approximation of a clickthrough graph. Graph-based semi-supervised methods such as label propagation can performance well with only a few seeds and scale up to a large dataset. Figure 2 illustrates the process of label propagation using a seed term \"abc\". This is a bipartite graph whose left-hand side nodes are terms and right-hand side nodes are patterns. Starting from \"abc\", the label propagates to other term nodes through the pattern \"http://abcnews.go.com\" that is strongly connected to \"abc\" and thus the label \"abc\" will be propagated to \"american broadcasting corporation\".", "cite_spans": [ { "start": 118, "end": 139, "text": "Komachi et al. 
(2009)", "ref_id": "BIBREF17" } ], "ref_spans": [ { "start": 443, "end": 452, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "Query Abbreviation Model from Clickthrough Logs", "sec_num": "4" }, { "text": "In this way, label propagation gradually propagates the label of the seed instance to neighboring The strength of lines indicates relatedness between each node, whereas the depth of the color of nodes represents relatedness to the seed. The darker a left-hand side node, the more likely it is similar to \"abc\". The darker a right-hand side node, the more likely it is the characteristic pattern of \"abc\". nodes, and optimal labels are given as the labels at which the label propagation process has converged.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Query Abbreviation Model from Clickthrough Logs", "sec_num": "4" }, { "text": "However, the seed instance like that in Figure 2 possibly causes a result to be worse in a task of lexical acquisition, due to an ambiguous instance \"abc\", which belongs to more than one domain, e.g. \"mass media\" and \"dance\". It is expected that the label propagates to unrelated instances if we have highly frequent ambiguous nodes. This problem is called \"semantic drift\" and has received a lot of attention in NLP research (Komachi et al., 2008) . Komachi et al. (2008) have reported that bootstrapping algorithms like Espresso (Pantel and Pennacchiotti, 2006) can be viewed as Kleinberg's HITS algorithm (Kleinberg, 1999) and the \"semantic drift\" problem on the graph is the same phenomenon as \"topic drift\" in HITS, which converges to the eigenvector of the instance-instance similarity graph created from instance-pattern cooccurrence graph as described in the next subsection.", "cite_spans": [ { "start": 427, "end": 449, "text": "(Komachi et al., 2008)", "ref_id": "BIBREF16" }, { "start": 452, "end": 473, "text": "Komachi et al. (2008)", "ref_id": "BIBREF16" }, { "start": 532, "end": 564, "text": "(Pantel and Pennacchiotti, 2006)", "ref_id": "BIBREF24" }, { "start": 609, "end": 626, "text": "(Kleinberg, 1999)", "ref_id": "BIBREF15" } ], "ref_spans": [ { "start": 40, "end": 49, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "Query Abbreviation Model from Clickthrough Logs", "sec_num": "4" }, { "text": "Our label propagation method based on Komachi et al. (2009) can be used as a relatedness measure that returns a similarity score relative to the seed instance, and thus is suitable for a query correction task.", "cite_spans": [ { "start": 38, "end": 59, "text": "Komachi et al. (2009)", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "Query Abbreviation Model from Clickthrough Logs", "sec_num": "4" }, { "text": "Seed instance vector F (0) Instance similarity matrix A Output Instance score vector F (t) 1: Construct the normalized Laplacian matrix ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Input", "sec_num": null }, { "text": "L = I \u2212 D \u22121/2 AD \u22121/2 2: Iterate F (t + 1) = \u03b1(\u2212L)F (t) + (1 \u2212 \u03b1)F (0) until convergence", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Input", "sec_num": null }, { "text": "In this paper, we extract queries landing on the same URL as the one related with input query by stopping label propagation after 1-hop. These queries are possibly synonyms with the input query and thus possible to correct without semantic transformation. 
Figure 3 shows the label propagation algorithm on a clickthrough graph.", "cite_spans": [], "ref_spans": [ { "start": 256, "end": 264, "text": "Figure 3", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "One-step approximation of clickthrough graph", "sec_num": "4.1" }, { "text": "an instance set", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Given", "sec_num": null }, { "text": "X = {x 1 , . . . , x l , x l+1 , . . . , x n } and a label set L = {1 , ..., c}, the first l instances x i (i < l) are labeled as y i \u2208 L.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Given", "sec_num": null }, { "text": "The goal is to predict the labels of the unlabeled instances x u (l + 1 \u2264 u \u2264 n).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Given", "sec_num": null }, { "text": "Let F denote the set of n \u00d7 c matrices with nonnegative entries. A matrix F = [F 1 , . . . , F n ] T \u2208 F corresponds to a classification on the dataset X by labeling each instance x i as a label y i = argmax j\u2264c F ij . Define F 0 as the initial F with F ij = 1 if x i is labeled as a label y i = j and F ij = 0 otherwise. The (i, j)-th element of the final matrix F represents a similarity to the labeled instances. We use these similarities as P (q|c) in Equation 2, where q is a seed instance, c is a labeled instance by label propagation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Given", "sec_num": null }, { "text": "The instance-instance similarity matrix A in Figure 3 is defined as A = W T W where W is an instance-pattern matrix. The (i, j)-th element of W ij contains the relative frequency of occurrence of instance x i and pattern p j .", "cite_spans": [], "ref_spans": [ { "start": 45, "end": 53, "text": "Figure 3", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Given", "sec_num": null }, { "text": "D is a diagonal degree matrix of A where the (i, j)-th element of D is given as D ii = \u2211 j A ij . Label propagation has a parameter \u03b1 (0 \u2264 \u03b1 \u2264 \u03bb \u22121 , where \u03bb is a principal eigenvalue of normalized Laplacian matrix L) that controls the effect of clamping the label distribution of labeled data. Komachi et al. (2009) suggested that the normalized frequency causes semantic drift (Jurafsky and Martin, 2009) , and we confirmed this phenomenon in our preliminary experiment. They suggested using relative frequency such as pointwise mutual information (PMI) and log-likelihood ratio as countermeasure against semantic drift. Therefore, we used pointwise mutual information (PMI) shown below to handle the aforementioned semantic drift problem.", "cite_spans": [ { "start": 295, "end": 316, "text": "Komachi et al. (2009)", "ref_id": "BIBREF17" }, { "start": 379, "end": 406, "text": "(Jurafsky and Martin, 2009)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Given", "sec_num": null }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "P M I(x, p) = ln P (x, p) P (x)P (p)", "eq_num": "(3)" } ], "section": "Normalized PMI", "sec_num": "4.2" }, { "text": "PMI assigns high scores to low-frequency events. Moreover, using PMI naively makes sparse matrix W dense. 
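To tie the pieces together, here is a compact, illustrative NumPy sketch of the weight matrix W and the propagation of Figure 3. It uses dense arrays for clarity even though the real clickthrough graph is large and sparse, and it uses plain PMI clipped at zero as the weight; the thresholded, normalized variant introduced next in the text would take its place.

```python
# Illustrative sketch tying together the instance-pattern matrix W, the
# similarity matrix A = W^T W, the normalized Laplacian, and the iteration
# of Figure 3. Dense NumPy arrays are used for clarity, although the real
# clickthrough graph is large and sparse. Plain PMI clipped at zero is used
# as the weight here; the thresholded, normalized variant introduced next
# in the text would take its place.

import math
from collections import Counter
from typing import Iterable, List, Tuple

import numpy as np

def pmi_matrix(clicks: Iterable[Tuple[str, str]],
               queries: List[str], urls: List[str]) -> np.ndarray:
    """W[i, j] = max(0, PMI(query_i, url_j)) computed from raw click counts."""
    pair = Counter(clicks)
    total = sum(pair.values())
    q_cnt = Counter(q for q, _ in pair.elements())
    u_cnt = Counter(u for _, u in pair.elements())
    W = np.zeros((len(queries), len(urls)))
    for i, q in enumerate(queries):
        for j, u in enumerate(urls):
            c = pair[(q, u)]
            if c:
                pmi = math.log((c / total) / ((q_cnt[q] / total) * (u_cnt[u] / total)))
                W[i, j] = max(0.0, pmi)  # keep positive associations only
    return W

def propagate(W: np.ndarray, seed_index: int,
              alpha: float = 1e-4, n_iter: int = 30) -> np.ndarray:
    """Relatedness of every instance to the seed instance, following Figure 3."""
    A = W @ W.T                                   # instance-instance similarity
    d = A.sum(axis=1)
    d[d == 0] = 1.0                               # guard against isolated nodes
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    L = np.eye(A.shape[0]) - D_inv_sqrt @ A @ D_inv_sqrt  # normalized Laplacian
    F0 = np.zeros(A.shape[0])
    F0[seed_index] = 1.0                          # seed instance vector F(0)
    F = F0.copy()
    for _ in range(n_iter):                       # iterate to (approximate) convergence
        F = alpha * (-L) @ F + (1.0 - alpha) * F0  # alpha = 0.0001 in Section 6
    return F
```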
Therefore, we used normalized PMI (NPMI) (Bouma, 2009) , shown below, as the relative frequency, and cut off values lower than a threshold \u03b8 (\u03b8 \u2265 0).", "cite_spans": [ { "start": 147, "end": 160, "text": "(Bouma, 2009)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Normalized PMI", "sec_num": "4.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "NPMI(x, p) = ln( P(x, p) / (P(x) P(p)) ) / ( \u2212ln P(x, p) )", "eq_num": "(4)" } ], "section": "Normalized PMI", "sec_num": "4.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "W_ij = NPMI(x_i, p_j) if NPMI(x_i, p_j) > \u03b8, and W_ij = 0 if NPMI(x_i, p_j) \u2264 \u03b8, where \u03b8 \u2265 0", "eq_num": "(5)" } ], "section": "Normalized PMI", "sec_num": "4.2" }, { "text": "NPMI prevents low-frequency events from being assigned scores that are too high, by dividing by \u2212ln P(x, p), and thus heads off excess label propagation through them. By cutting off negative values, the range of W_ij is normalized to [0,1]. Additionally, this prevents the sparse matrix W from becoming dense and reduces the noise in the data.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Normalized PMI", "sec_num": "4.2" }, { "text": "In this paper, we use a character n-gram language model to obtain the likelihood of the candidates for query expansion in Equation 2.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Query Language Model", "sec_num": "5" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "P(c) = \u220f_{i=0}^{n\u22121} P(x_i | x_{i\u2212N+1}, . . . , x_{i\u22121}) = \u220f_{i=0}^{n\u22121} freq(x_{i\u2212N+1}, . . . , x_i) / freq(x_{i\u2212N+1}, . . . , x_{i\u22121})", "eq_num": "(6)" } ], "section": "Query Language Model", "sec_num": "5" }, { "text": "where c = {x_0, x_1, . . . , x_{n\u22121}} is a contiguous sequence of n characters and N is the n-gram order.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Query Language Model", "sec_num": "5" }, { "text": "In web search, neologisms appear continuously, which makes it hard to compute the likelihood of queries with a word n-gram language model. Moreover, characters themselves carry essential semantic information in Chinese and Japanese. Therefore, we build a character language model for the search query logs, following observations on the usefulness of character n-grams for Japanese (Asahara and Matsumoto, 2004) and Chinese (Huang and Zhao, 2006) . Asahara and Matsumoto used a window of two characters to the right and to the left of the focus character, which amounts to using character 5-grams. Based on a preliminary experiment, we also used 5-grams for the query language model.", "cite_spans": [ { "start": 381, "end": 410, "text": "(Asahara and Matsumoto, 2004)", "ref_id": null }, { "start": 423, "end": 445, "text": "(Huang and Zhao, 2006)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Query Language Model", "sec_num": "5" }, { "text": "We collected abbreviations of the 'Acronym', 'Kanji', and 'Kana' types from the Japanese version of Wikipedia, and then removed single letters and duplicates. 
Finally, we gathered 1,916 terms and used them in our evaluation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Test Set", "sec_num": "6.1" }, { "text": "We used queries and clicked links in Japanese clickthrough logs as instances and patterns, respectively. We tallied them in the below conditions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Construction of a Clickthrough Graph", "sec_num": "6.2" }, { "text": "1. Query and clickthrough are unique with respect to each cookie each day. If a user input the same query and clicked the same URL any number of times, we do not count it as occurring multiple times, i.e. we do not increase the number of clickthrough.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Construction of a Clickthrough Graph", "sec_num": "6.2" }, { "text": "2. Alphanumeric characters in a query are unified to one-byte lower-case characters 3. A sequence of white space in a query is unified to single one-byte white space character 4. All the URLs included in clickthrough logs are unique, i.e., we did not generalize URLs as did.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Construction of a Clickthrough Graph", "sec_num": "6.2" }, { "text": "The Japanese clickthrough logs were collected from October 22 to November 9, 2009 2 and from January 1 to 16 in Yahoo Japan web search logs.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Construction of a Clickthrough Graph", "sec_num": "6.2" }, { "text": "Links clicked less than 10 times were removed for efficiency reasons. Finally, we obtained 4, 428,430 nodes, 16,841,683 patterns, and 16,988,516 edges. The threshold \u03b8 of elements W ij was set to 0.1 on the basis of preliminary experimental results.", "cite_spans": [ { "start": 94, "end": 144, "text": "428,430 nodes, 16,841,683 patterns, and 16,988,516", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Construction of a Clickthrough Graph", "sec_num": "6.2" }, { "text": "The parameter \u03b1 for label propagation was set to 0.0001.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Construction of a Clickthrough Graph", "sec_num": "6.2" }, { "text": "We used web search query logs for constructing a language model. The search query logs were collected from August 1, 2009, to January 27, 2010, in Yahoo Japan web search logs. We removed queries that occurred fewer than 10 times. Finally, we obtained 52,399,621 unique queries as a training corpus.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Construction of Query Language Model", "sec_num": "6.3" }, { "text": "In this experiment, we constructed a character 5-gram language model using the query logs, all normalized by the length of the candidate's string.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Construction of Query Language Model", "sec_num": "6.3" }, { "text": "The system output was shown to five search evaluation specialists. We evaluated all systems using precision and coverage at k. Coverage is defined as the percentage of queries for which the system returned at least one relevant query. Precision at k is the number of relevant queries amongst the top k returned. They are computed as follows: In our experiment, the average number of candidates for each query is about 53. 
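The formulas announced by "computed as follows" were lost in this version of the text; the sketch below follows a plain reading of the definitions above (dividing by k and macro-averaging over the test queries are our assumptions, not statements from the paper).

```python
# The formulas announced by "computed as follows" were lost in this version
# of the text. This sketch follows a plain reading of the definitions above;
# dividing by k and macro-averaging over test queries are assumptions.

from typing import Dict, List, Set

def precision_at_k(ranked: List[str], relevant: Set[str], k: int) -> float:
    """Fraction of the top-k returned queries that are relevant."""
    return sum(1 for c in ranked[:k] if c in relevant) / k

def coverage_at_k(results: Dict[str, List[str]], gold: Dict[str, Set[str]], k: int) -> float:
    """Percentage of queries with at least one relevant query in the top k."""
    hit = sum(1 for q, ranked in results.items()
              if any(c in gold.get(q, set()) for c in ranked[:k]))
    return 100.0 * hit / len(results)
```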
Therefore, we extracted 50 candidates from clickthrough logs and then reranked using three methods:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation", "sec_num": "6.4" }, { "text": "1. Ranking using abbreviation model (AM) only 2. Ranking using language model (LM) only 3. Ranking using both language model and abbreviation model.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation", "sec_num": "6.4" }, { "text": "Micro average of edit distance between input abbreviations and its correct expansions is 4.03, while the average length of queries is 3.01. These statistics show that input queries should be replaced by totally different characters and it is difficult to use edit distance for extracting correct candidates from web search logs. This is another reason clickthrough logs are essential to the query abbreviation task.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation", "sec_num": "6.4" }, { "text": "We describe our guidelines to judge system outputs below. We defined four correction patterns for abbreviation expansion:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Judgment Guideline", "sec_num": "6.4.1" }, { "text": "1. acronym for its English expansion 2. acronym for its Japanese orthography 4 3. Japanese abbreviation for its Japanese orthography 4. Japanese abbreviation for its English orthography", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Judgment Guideline", "sec_num": "6.4.1" }, { "text": "We collected abbreviation/expansion pairs if and only if they were one of these three types:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Judgment Guideline", "sec_num": "6.4.1" }, { "text": "(1) named entity, (2) common expression, (3) Japanese meaning of the common expression. Table 1 shows examples of each correction pattern along with its output type.", "cite_spans": [], "ref_spans": [ { "start": 88, "end": 95, "text": "Table 1", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Judgment Guideline", "sec_num": "6.4.1" }, { "text": "Ambiguous cases were discarded in the study as exceptions after discussion with experts. To calculate the agreement rate, system outputs for a hundred randomly sampled queries from test set were evaluated by two judges. The agreement rate of judgment of abbreviation/expansion pair is 47.0 percentage and Cohen's kappa measure \u03ba = 0.63. Thus, it is considered as an upper bound of the system, and the abbreviation expansion is not considered to be a trivial task. Table 2 shows precision at k and coverage for three systems with k ranging from 1 to 50. Table 3 shows examples of inputs and outputs. The baseline without reranking is shown at the bottom line (k=50). The result of using only QAM in Table 2 is equivalent to the method of Komachi et al. (2009) using NPMI instead of raw frequency as elements of an instance-pattern matrix. To our knowledge, their algorithm is the state-of-theart algorithm in acquiring synonyms using web search logs.", "cite_spans": [ { "start": 737, "end": 758, "text": "Komachi et al. 
(2009)", "ref_id": "BIBREF17" } ], "ref_spans": [ { "start": 464, "end": 471, "text": "Table 2", "ref_id": "TABREF2" }, { "start": 553, "end": 560, "text": "Table 3", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Judgment Guideline", "sec_num": "6.4.1" }, { "text": "The proposed ranking method using a query language model and abbreviation model learned from clickthrough logs shows the best precision and coverage within 1 \u2264 k \u2264 10. This is because the language and abbreviation model use different sources of information to complement each other.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Results", "sec_num": "6.5" }, { "text": "The language model estimates probability of the candidate as a query, and it assigns high probability to candidates that appear frequently in query logs. Those candidates tend to co-occur with many clickthrough patterns, which results in creating generic patterns that may cause semantic drift (Komachi et al., 2009 ). Because we used NPMI instead of raw frequency, our label propagation method assigns high weight to instances connected to a seed instance through a few specific patterns. Consequently, low-frequency instances tend to be ranked in higher positions. Table 4 shows the significance level between two baselines and the proposed model. We applied Wilcoxon's signed rank test to compare harmonic mean between precision and coverage of each model with k ranging from 1 to 50. The improvement of adding QAM to QLM is made statistically significant by the Wilcoxon's signed rank test at level p < 0.00001. Our approach outperforms the QAM without QLM although not as significant (p < 0.06). These mean that the ranking of our methods is similar to that of QAM. We consider the reason or this result to be that QLM introduces more information about queries under this experimental setting because the reranking process is performed after narrowing candidates down to 50 by QAM, even though we do not use QAM scores at all when evaluating QLM.", "cite_spans": [ { "start": 294, "end": 315, "text": "(Komachi et al., 2009", "ref_id": "BIBREF17" } ], "ref_spans": [ { "start": 567, "end": 574, "text": "Table 4", "ref_id": "TABREF4" } ], "eq_spans": [], "section": "Experimental Results", "sec_num": "6.5" }, { "text": "Due to time constraints and human resources for evaluation, we were unable to compare NPMI with raw frequency. There is still much room for improvement for assigning appropriate weights to edges in a clickthrough graph.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Results", "sec_num": "6.5" }, { "text": "We conducted error analysis of our proposed method and found that errors can be divided into three types: (1) a partial correct query, (2) a correct query but with an additional attribute word, and (3) a related but not abbreviated term.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Error Analysis", "sec_num": "6.6" }, { "text": "A partial correct query The main reason for this error is that the likelihood of the partial query becomes higher than that of its correct spelling. Although we normalized the likelihood of candidates by their string length, we still fail to filter fragments of queries. We consider that this issue can be solved by modeling popularity of candidates using PageRank from web search logs. Partial correct queries do not co-occur with attribute words frequently, while correct queries co-occur with diverse attribute words. 
Therefore, PageRank on a query graph whose edges represent common co-occurring words between queries, will assign higher scores to correct queries than a query language model and abbreviation model. A correct query but with an additional attribute word Examples of this error type include the combination of correct queries and commonly used attribute words in the search (e.g.\"* \"(what does * mean?), \"* \"(* meaning),\"* \" (how to use *)), etc.). There were 857 queries that were classified as incorrect that cooccurred with these attribute words. The similarity of these candidates and input query tend to be higher than that of others because these attribute words frequently appear in search query logs, so the likelihood of these candidate being calculated by a language model tends to be higher too.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Error Analysis", "sec_num": "6.6" }, { "text": "We consider that this issue can be solved at some level by generating a language model using the first term only, after splitting queries separated by a space in search query logs. However, attribute words are not always separated by a space, and sometimes appear as the first term in the query 5 . Another way to handle this problem is to use PageRank described earlier to decrease likelihood of candidates including attribute words.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Error Analysis", "sec_num": "6.6" }, { "text": "Related but not abbreviated term A number of abbreviations coincide with other general nouns (e.g. \"dog\" 6 ). It is hard to expand these abbreviations correctly at present. In future work, session logs and geo-location information from IP address and GPS can be used to disambiguate the intent of the query.", "cite_spans": [ { "start": 105, "end": 106, "text": "6", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Error Analysis", "sec_num": "6.6" }, { "text": "Besides above reasons, 280 out of 1,916 queries did not exist in clickthrough logs, resulting in our system not being able to extract the correct query. To solve this problem, we will increase clickthrough logs to improve the coverage of our corpus.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Error Analysis", "sec_num": "6.6" }, { "text": "We have proposed a query expansion method using the web search query and clickthrough logs.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "7" }, { "text": "Our noisy channel based method uses character 5-gram of query logs as a language model and label propagation on a clickthrough graph as a channel model. In our experiment, we found that a combination of label propagation and language model outperformed other methods using either label propagation or language model in reranking of query abbreviation candidates extracted from the web search clickthrough logs.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "7" }, { "text": "In fact, a modified implementation of this method is currently in production use as an assistance tool for making a synonym dictionary at Yahoo Japan.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "7" }, { "text": "In evaluation of IR systems, Mizzaro (2008) has proposed a normalized mean average precision (NMAP) for considering difficulty of topics in data sets. 
However, identifying topics in our test set queries and measuring their difficulty are beyond the scope of this paper. Evaluation criteria are important for making production services.", "cite_spans": [ { "start": 29, "end": 43, "text": "Mizzaro (2008)", "ref_id": "BIBREF21" } ], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "7" }, { "text": "In the future, we are going to address this task using discriminative learning as a ranking problem.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "7" }, { "text": "Note that our method can be applied to query expansion as well.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "A storage device in our experimental environment became full when tallying clickthrough logs. As a result, we were not able to use clickthrough logs of some periods.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Underlined words are correct.4 Some corrections were dealt with as exceptions. For example, acronym for its Japanese Hiragana was treated as incorrect, but acronym for its Japanese meaning was treated as correct.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Some attributes, (e.g. \" \"(Movie),\"' '(Ani-", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Learning a spelling error model from search query logs. In mation),\" \"(Picture),etc.), occur often at first token in a search query", "authors": [ { "first": "Farooq", "middle": [], "last": "Ahmad", "suffix": "" }, { "first": "Grzegorz", "middle": [], "last": "Kondrak", "suffix": "" } ], "year": 2005, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Farooq Ahmad and Grzegorz Kondrak. 2005. Learn- ing a spelling error model from search query logs. In mation),\" \"(Picture),etc.), occur often at first token in a search query, but some attributes, (e.g. \" \", ,\" \",etc.) almost never occur at first token.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Original Group Proceedings of the conference on Human Language Technology and Empirical Methods in Natural Language Processing", "authors": [], "year": null, "venue": "", "volume": "", "issue": "", "pages": "955--962", "other_ids": {}, "num": null, "urls": [], "raw_text": "DOG: Disk Original Group Proceedings of the conference on Human Language Technology and Empirical Methods in Natural Lan- guage Processing, pages 955-962.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Japanese unknown word identification by characterbased chunking", "authors": [], "year": null, "venue": "Proceedings of the 20th international conference on Computational Linguistics", "volume": "", "issue": "", "pages": "459--465", "other_ids": {}, "num": null, "urls": [], "raw_text": "Japanese unknown word identification by character- based chunking. 
In Proceedings of the 20th inter- national conference on Computational Linguistics, pages 459-465.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Learning noun phrase query segmentation", "authors": [ { "first": "Shane", "middle": [], "last": "Bergsma", "suffix": "" }, { "first": "Qin Iris", "middle": [], "last": "Wang", "suffix": "" } ], "year": 2007, "venue": "Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Shane Bergsma and Qin Iris Wang. 2007. Learning noun phrase query segmentation. In Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Normalized (pointwise) mutual information in collocation extraction", "authors": [ { "first": "Geolof", "middle": [], "last": "Bouma", "suffix": "" } ], "year": 2009, "venue": "Proceedings of the Biennial GSCL Conference", "volume": "", "issue": "", "pages": "31--40", "other_ids": {}, "num": null, "urls": [], "raw_text": "Geolof Bouma. 2009. Normalized (pointwise) mutual information in collocation extraction. In Proceed- ings of the Biennial GSCL Conference, pages 31-40.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Contextaware query suggestion by mining click-through and session data", "authors": [ { "first": "Huanhuan", "middle": [], "last": "Cao", "suffix": "" }, { "first": "Daxin", "middle": [], "last": "Jiang", "suffix": "" }, { "first": "Jian", "middle": [], "last": "Pei", "suffix": "" }, { "first": "Qi", "middle": [], "last": "He", "suffix": "" }, { "first": "Zhen", "middle": [], "last": "Liao", "suffix": "" }, { "first": "Enhong", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Hang", "middle": [], "last": "Li", "suffix": "" } ], "year": 2008, "venue": "Proceeding of the 14th ACM SIGKDD international conference on Knowledge Discovery and Data Mining", "volume": "", "issue": "", "pages": "875--883", "other_ids": {}, "num": null, "urls": [], "raw_text": "Huanhuan Cao, Daxin Jiang, Jian Pei, Qi He, Zhen Liao, Enhong Chen, and Hang Li. 2008. Context- aware query suggestion by mining click-through and session data. In Proceeding of the 14th ACM SIGKDD international conference on Knowledge Discovery and Data Mining, pages 875-883.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Improving query spelling correction using web search results", "authors": [ { "first": "Qing", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Mu", "middle": [], "last": "Li", "suffix": "" }, { "first": "Ming", "middle": [], "last": "Zhou", "suffix": "" } ], "year": 2007, "venue": "Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning", "volume": "", "issue": "", "pages": "181--189", "other_ids": {}, "num": null, "urls": [], "raw_text": "Qing Chen, Mu Li, and Ming Zhou. 2007. Improving query spelling correction using web search results. 
In Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Process- ing and Computational Natural Language Learning, pages 181-189.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Spelling correction as an iterative process that exploits the collective knowledge of web users", "authors": [ { "first": "Silviu", "middle": [], "last": "Cucerzan", "suffix": "" }, { "first": "Eric", "middle": [], "last": "Brill", "suffix": "" } ], "year": 2004, "venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "293--300", "other_ids": {}, "num": null, "urls": [], "raw_text": "Silviu Cucerzan and Eric Brill. 2004. Spelling correc- tion as an iterative process that exploits the collective knowledge of web users. In Proceedings of the Con- ference on Empirical Methods in Natural Language Processing, pages 293-300.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "A large scale ranker-based system for search query spelling correction", "authors": [ { "first": "Jianfeng", "middle": [], "last": "Gao", "suffix": "" }, { "first": "Xiaolong", "middle": [], "last": "Li", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Micol", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Quirk", "suffix": "" }, { "first": "Xu", "middle": [], "last": "Sun", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the 23rd International Conference on Computational Linguistics", "volume": "", "issue": "", "pages": "358--366", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jianfeng Gao, Xiaolong Li, Daniel Micol, Chris Quirk, and Xu Sun. 2010. A large scale ranker-based sys- tem for search query spelling correction. In Pro- ceedings of the 23rd International Conference on Computational Linguistics, pages 358-366.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "A unified and discriminative model for query refinement", "authors": [ { "first": "Jianfeng", "middle": [], "last": "Guo", "suffix": "" }, { "first": "Gu", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Hang", "middle": [], "last": "Li", "suffix": "" }, { "first": "Xueqi", "middle": [], "last": "Cheng", "suffix": "" } ], "year": 2008, "venue": "Proceedings of the 31st annual international ACM SIGIR conference on Research and Development in Information Retrieval", "volume": "", "issue": "", "pages": "379--386", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jianfeng Guo, Gu Xu, Hang Li, and Xueqi Cheng. 2008. A unified and discriminative model for query refinement. In Proceedings of the 31st annual inter- national ACM SIGIR conference on Research and Development in Information Retrieval, pages 379- 386.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Japanese query alteration based on seamntic similarity", "authors": [ { "first": "Masato", "middle": [], "last": "Hagiwara", "suffix": "" }, { "first": "Hisami", "middle": [], "last": "Suzuki", "suffix": "" } ], "year": 2009, "venue": "Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "191--199", "other_ids": {}, "num": null, "urls": [], "raw_text": "Masato Hagiwara and Hisami Suzuki. 2009. Japanese query alteration based on seamntic similarity. 
Figure 2: An illustrative example of an Instance-Pattern co-occurrence graph and the label propagation process. (Figure omitted; only the caption was recovered.)

Figure: Laplacian label propagation. (Figure omitted; only the caption was recovered.)

\[
\text{precision} = \frac{\#\ \text{of correct outputs at rank}\ k}{\#\ \text{of outputs at rank}\ k},
\qquad
\text{coverage} = \frac{\#\ \text{of queries for which the system gives at least one correct output}}{\#\ \text{of all input queries}}
\tag{7}
\]
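As a rough illustration of how these two metrics could be computed, here is a minimal Python sketch. The data structures, function name, and toy data are assumptions made for illustration, not the paper's implementation, and the counts are assumed to be taken over the top-k candidates of each query.

```python
# Minimal sketch (assumption): precision and coverage at rank k, computed over
# ranked expansion candidates and a set of acceptable expansions per query.
from typing import Dict, List, Set, Tuple


def precision_and_coverage(
    ranked: Dict[str, List[str]],   # abbreviation -> candidates, best first
    gold: Dict[str, Set[str]],      # abbreviation -> acceptable expansions
    k: int,
) -> Tuple[float, float]:
    emitted = 0   # candidates output within the top k, over all queries
    correct = 0   # of those, how many match an acceptable expansion
    covered = 0   # queries with at least one correct candidate in the top k

    for query, candidates in ranked.items():
        top_k = candidates[:k]
        hits = sum(1 for c in top_k if c in gold.get(query, set()))
        emitted += len(top_k)
        correct += hits
        covered += int(hits > 0)

    precision = correct / emitted if emitted else 0.0
    coverage = covered / len(ranked) if ranked else 0.0
    return precision, coverage


# Hypothetical usage with two of the abbreviations listed in the table below.
ranked = {"fyi": ["for your information", "fyi magazine"],
          "ny": ["new york", "new year"]}
gold = {"fyi": {"for your information"}, "ny": {"new york"}}
print(precision_and_coverage(ranked, gold, k=1))  # (1.0, 1.0)
```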
Table: Abbreviations and their corrections

abbreviation | correct candidates (in descending order of rank) | output type | correction pattern
adf | asian dub foundation | Named Entity: Organization | acronym for its expansion
ana | [Japanese orthography] (All Nippon Airways) | Named Entity: Organization | acronym for its Japanese orthography
ny | [Japanese orthography] (New York) | Named Entity: Location | acronym for its Japanese orthography
tos | [Japanese orthography] (Tales of Symphonia) | Named Entity: Product | acronym for its Japanese orthography
[Japanese abbreviation] | illustrator | Named Entity: Product | Japanese abbreviation for its English orthography
[Japanese abbreviation] | [Japanese orthography] (Hunger Strike) | Common expression | Japanese abbreviation for its Japanese orthography
[Japanese abbreviation] | [Japanese orthography] | Named Entity: Organization | Japanese abbreviation for its Japanese orthography
fyi | for your information | Common expression | acronym for its expansion
(Bracketed cells stand for Japanese strings that are not recoverable from this copy.)
Table: Precision and coverage at k
(QAM = query abbreviation model; QLM = query language model)

k | QAM precision | QAM coverage | QLM precision | QLM coverage | QLM+QAM precision | QLM+QAM coverage
1 | 0.114 | 0.114 | 0.157 | 0.157 | 0.161 | 0.161
3 | 0.122 | 0.256 | 0.142 | 0.278 | 0.157 | 0.321
5 | 0.121 | 0.341 | 0.128 | 0.346 | 0.142 | 0.392
10 | 0.114 | 0.453 | 0.102 | 0.425 | 0.115 | 0.465
30 | 0.087 | 0.536 | 0.078 | 0.529 | 0.082 | 0.542
50 | 0.073 | 0.557 | 0.073 | 0.557 | 0.073 | 0.557
Table: Examples of inputs and candidates for their correction (footnote 3); bracketed cells mark Japanese strings that are not recoverable from this copy.

Input | Candidates
vod | [Japanese candidates]
iloilo | iloilo, [Japanese candidates]
pr | prohoo!pr [Japanese candidates]
Table: p-values of Wilcoxon's signed-rank test

 | QAM and QAM+QLM | QLM and QAM+QLM
p-value | 0.055 | 7.79e-10