{ "paper_id": "I11-1031", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T07:32:36.607873Z" }, "title": "Learning the Latent Topics for Question Retrieval in Community QA", "authors": [ { "first": "Li", "middle": [], "last": "Cai", "suffix": "", "affiliation": { "laboratory": "National Laboratory of Pattern Recognition", "institution": "Chinese Academy of Sciences", "location": { "addrLine": "95 Zhongguancun East Road", "postCode": "100190", "settlement": "Beijing", "country": "China" } }, "email": "lcai@nlpr.ia.ac.cn" }, { "first": "Guangyou", "middle": [], "last": "Zhou", "suffix": "", "affiliation": { "laboratory": "National Laboratory of Pattern Recognition", "institution": "Chinese Academy of Sciences", "location": { "addrLine": "95 Zhongguancun East Road", "postCode": "100190", "settlement": "Beijing", "country": "China" } }, "email": "gyzhou@nlpr.ia.ac.cn" }, { "first": "Kang", "middle": [], "last": "Liu", "suffix": "", "affiliation": { "laboratory": "National Laboratory of Pattern Recognition", "institution": "Chinese Academy of Sciences", "location": { "addrLine": "95 Zhongguancun East Road", "postCode": "100190", "settlement": "Beijing", "country": "China" } }, "email": "kliu@nlpr.ia.ac.cn" }, { "first": "Jun", "middle": [], "last": "Zhao", "suffix": "", "affiliation": { "laboratory": "National Laboratory of Pattern Recognition", "institution": "Chinese Academy of Sciences", "location": { "addrLine": "95 Zhongguancun East Road", "postCode": "100190", "settlement": "Beijing", "country": "China" } }, "email": "jzhao@nlpr.ia.ac.cn" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Community-based Question Answering (cQA) is a popular online service where users can ask and answer questions on any topics. This paper is concerned with the problem of question retrieval. Question retrieval in cQA aims to find historical questions that are semantically equivalent or relevant to the queried questions. 
Although the translation-based language model (Xue et al., 2008) has achieved state-of-the-art performance for question retrieval, it ignores the latent topic information when calculating the semantic similarity between questions. In this paper, we propose a topic model that incorporates the category information into the process of discovering the latent topics in the content of questions. We then combine the semantic similarity based on the latent topics with the translation-based language model in a unified framework for question retrieval. Experiments are carried out on a real-world cQA data set from Yahoo! Answers. The results show that our proposed method can significantly improve the question retrieval performance of the translation-based language model.", "pdf_parse": { "paper_id": "I11-1031", "_pdf_hash": "", "abstract": [ { "text": "Community-based Question Answering (cQA) is a popular online service where users can ask and answer questions on any topic. This paper is concerned with the problem of question retrieval. Question retrieval in cQA aims to find historical questions that are semantically equivalent or relevant to the queried questions. Although the translation-based language model (Xue et al., 2008) has achieved state-of-the-art performance for question retrieval, it ignores the latent topic information when calculating the semantic similarity between questions. In this paper, we propose a topic model that incorporates the category information into the process of discovering the latent topics in the content of questions. We then combine the semantic similarity based on the latent topics with the translation-based language model in a unified framework for question retrieval. Experiments are carried out on a real-world cQA data set from Yahoo! Answers.
The results show that our proposed method can significantly improve the question retrieval performance of the translation-based language model.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Over the past few years, large-scale question and answer archives have become an important information resource on the Web. These include the traditional FAQ archives constructed by experts or companies for their products and the emerging community-based online services, such as Yahoo! Answers 1 and Live QnA 2 .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The major challenge for cQA retrieval is the lexical gap (or lexical chasm) between the queried questions and the question-answer pairs in the archives (Jeon et al., 2005; Xue et al., 2008) . To solve the lexical gap problem, most researchers regarded the question retrieval task as a statistical machine translation problem, using IBM model 1 (Brown et al., 1993) to learn the word-to-word translation probabilities (Berger and Lafferty, 1999; Jeon et al., 2005; Xue et al., 2008; Lee et al., 2008; Bernhard and Gurevych, 2009; Cao et al., 2010) .
Although the translation-based language model (TRLM) has yielded state-of-the-art performance for question retrieval, it models the word translation probabilities without taking into account the distribution of words over the whole content.", "cite_spans": [ { "start": 152, "end": 171, "text": "(Jeon et al., 2005;", "ref_id": "BIBREF13" }, { "start": 172, "end": 189, "text": "Xue et al., 2008)", "ref_id": "BIBREF21" }, { "start": 346, "end": 366, "text": "(Brown et al., 1993)", "ref_id": "BIBREF4" }, { "start": 418, "end": 445, "text": "(Berger and Lafferty, 1999;", "ref_id": "BIBREF1" }, { "start": 446, "end": 464, "text": "Jeon et al., 2005;", "ref_id": "BIBREF13" }, { "start": 465, "end": 482, "text": "Xue et al., 2008;", "ref_id": "BIBREF21" }, { "start": 483, "end": 500, "text": "Lee et al., 2008;", "ref_id": "BIBREF15" }, { "start": 501, "end": 529, "text": "Bernhard and Gurevych, 2009;", "ref_id": "BIBREF2" }, { "start": 530, "end": 547, "text": "Cao et al., 2010)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this paper, we argue that it is beneficial to exploit the latent topic information for question retrieval. The basic idea is as follows: first, we employ a topic model (e.g., LDA) to discover the latent topics in the content of questions, and then calculate the semantic similarity between questions based on the latent topic information. Moreover, a distinctive feature of question-answer archives in cQA is that cQA services always organize questions into a hierarchy of categories. We propose an improved latent topic model that introduces the category information of questions. To solve the lexical gap problem, the translation-based language model extracts knowledge from question-answer pairs collected from the cQA service, while the latent topic model extracts knowledge from the distribution of words and categories over the whole cQA archives.
We assume that the two sources of knowledge are complementary to each other, as we will show in the experiments.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In order to illustrate the above ideas clearly, we give an example of retrieving questions that are semantically equivalent or relevant to the queried question in Figure 1. Given question Q1 (\"Where is the best website for economic car?\"), we get a ranked list of semantically similar questions (Q2, Q3, Q4, Q5) using the state-of-the-art translation-based language model. [Figure 1, top table (ranking by the translation-based language model): 1. Q2: \"What is the best website for economic info?\" (C2: Business & Finance>Investing); 2. Q3: \"What do you think are some of the reasons for the American car companies' economic distress?\" (C3: Business & Finance>Corporations); 3. Q4: \"Is the car economic to buy from this wetsite?\" (C4: Cars & Transportation>Car Audio); 4. Q5: \"What is a cheap website to get car parts from?\" (C5: Cars & Transportation>Maintenance & Repairs)]", "cite_spans": [], "ref_spans": [ { "start": 145, "end": 151, "text": "Figure", "ref_id": null }, { "start": 345, "end": 1045, "text": "Figure 1, top table", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "[Figure 1, bottom table (ranking after combining with the latent topic model): 1. Q4 (C4: Cars & Transportation>Car Audio); 2. Q5 (C5: Cars & Transportation>Maintenance & Repairs); 3. Q2 (C2: Business & Finance>Investing); 4. Q3 (C3: Business & Finance>Corporations)] All the semantically similar questions are shown with their corresponding categories. Our proposed latent topic model models the distribution of words and categories over the whole content. We illustrate in Figure 1 a matching of the top words and categories from a few topics.", "cite_spans": [], "ref_spans": [ { "start": 46, "end": 780, "text": "Figure 1, bottom table", "ref_id": "TABREF1" }, { "start": 1049, "end": 1057, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We can see that the word \"car\" is more closely related to the categories \"Cars & Transportation>Car Audio\" and \"Cars & Transportation>Maintenance & Repairs\" than the word \"economic\" is to the categories \"Business & Finance>Investing\" and \"Business & Finance>Corporations\" in the latent topics. Using this information from the latent topic model, we can rerank the retrieved questions. Therefore, by combining the translation-based language model with the latent topic model with categories, we can get the ranked list of semantically similar questions (Q4, Q5, Q2, Q3), which is better than the previous retrieval result. Specifically, our contributions are as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "1. We employ the topic model to discover the latent topic information in the content of questions for cQA retrieval (in Section 4.1).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "2. We introduce the category information into the process of discovering the latent topics (in Section 4.2).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "3.
We propose to combine the semantic similarity based on the latent topics with the translation-based language model into a unified framework to further improve the retrieval performance (in Section 4.4).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "4. Finally, we conduct experiments on a cQA data set from Yahoo! Answers for question retrieval. The results show that our proposed approach significantly outperforms the state-of-the-art translation-based language model (in Section 5).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The remainder of this paper is organized as follows. Section 2 reviews the related work on community-based question retrieval. Section 3 presents the existing question retrieval models. Section 4 presents the topic model incorporated with category information for question retrieval. Section 5 presents the experimental results. Finally, we conclude and discuss future work in Section 6.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Recently, research on question retrieval has been further extended to cQA data. Jeon et al. (2005) proposed a word-based translation model for automatically fixing the lexical gap problem. Experimental results demonstrated that the translation model significantly outperformed the traditional methods (i.e., VSM, BM25, LM). Xue et al. (2008) proposed a translation-based language model for question retrieval. The results indicated that the translation-based language model further improved the retrieval results and obtained state-of-the-art performance.", "cite_spans": [ { "start": 88, "end": 106, "text": "Jeon et al. (2005)", "ref_id": "BIBREF13" }, { "start": 328, "end": 345, "text": "Xue et al.
(2008)", "ref_id": "BIBREF21" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Subsequent work on translation models focused on providing suitable parallel data from which to learn the translation probabilities. Lee et al. (2008) tried to further improve the translation probabilities based on question-answer pairs by selecting the most important terms to build compact translation models. Bernhard and Gurevych (2009) proposed to use as a parallel training data set the definitions and glosses provided for the same term by different lexical semantic resources. Cao et al. (2010) explored adding the category information into the translation model for question retrieval. Zhou et al. (2011) proposed a phrase-based translation model for question retrieval and obtained state-of-the-art performance.", "cite_spans": [ { "start": 122, "end": 139, "text": "Lee et al. (2008)", "ref_id": "BIBREF15" }, { "start": 301, "end": 329, "text": "Bernhard and Gurevych (2009)", "ref_id": "BIBREF2" }, { "start": 474, "end": 491, "text": "Cao et al. (2010)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "However, all the existing methods ignore the latent topic information in calculating the semantic similarity between questions. In this paper, we present a new approach that discovers the latent topics of questions to improve the performance of translation-based language models for question retrieval. Moreover, we introduce the category information into the process of discovering the latent topics. To the best of our knowledge, none of the existing studies addressed question retrieval in cQA by learning the latent topics.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "The unigram language model has been widely used for question retrieval on community-based Q&A data (Jeon et al., 2005; Xue et al., 2008; Cao et al., 2010) .
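The paper gives no code, so the following is only a sketch of this unigram query-likelihood model with the Jelinek-Mercer smoothing of equations (1)-(2); all data (the toy collection, query, and historical questions) are invented for illustration.

```python
from collections import Counter

def score_lm(query, question, collection, lam=0.2):
    # Query likelihood with Jelinek-Mercer smoothing, equations (1)-(2):
    #   P_LM(q|Q) = prod_{w in q} [(1 - lam) * P_ml(w|Q) + lam * P_ml(w|C)]
    q_tf, c_tf = Counter(question), Counter(collection)
    score = 1.0
    for w in query:
        p_ml_q = q_tf[w] / len(question)    # P_ml(w|Q) = #(w,Q) / |Q|
        p_ml_c = c_tf[w] / len(collection)  # P_ml(w|C) = #(w,C) / |C|
        score *= (1 - lam) * p_ml_q + lam * p_ml_c
    return score

# Hypothetical toy archive: the historical question sharing more query
# terms receives the higher likelihood.
collection = 'what is the best website for cheap car parts online'.split()
query = 'best car website'.split()
q_similar = 'best website for car parts'.split()
q_unrelated = 'what is the best recipe'.split()
```

Smoothing with the background collection keeps unseen-but-plausible words from zeroing out the product, which is exactly why the smoothing term is needed.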
To avoid zero probabilities, we use Jelinek-Mercer smoothing (Zhai and Lafferty, 2001) due to its good performance and cheap computational cost. The ranking function for the query-likelihood language model with Jelinek-Mercer smoothing can thus be written as:", "cite_spans": [ { "start": 99, "end": 118, "text": "(Jeon et al., 2005;", "ref_id": "BIBREF13" }, { "start": 119, "end": 136, "text": "Xue et al., 2008;", "ref_id": "BIBREF21" }, { "start": 137, "end": 154, "text": "Cao et al., 2010)", "ref_id": "BIBREF7" }, { "start": 216, "end": 241, "text": "(Zhai and Lafferty, 2001)", "ref_id": "BIBREF22" } ], "ref_spans": [], "eq_spans": [], "section": "Language Model", "sec_num": "3.1" }, { "text": "P_LM(q|Q) = \u220f_{w\u2208q} [(1 \u2212 \u03bb)P_ml(w|Q) + \u03bbP_ml(w|C)] (1) P_ml(w|Q) = #(w, Q)/|Q| , P_ml(w|C) = #(w, C)/|C| (2)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Language Model", "sec_num": "3.1" }, { "text": "where q is the queried question, Q is a historical question, C is the background collection, and \u03bb is the smoothing parameter. #(w, Q) is the frequency of word w in Q; |Q| and |C| denote the lengths of Q and C, respectively. ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Language Model", "sec_num": "3.1" }, { "text": "Previous work (Berger et al., 2000; Jeon et al., 2005; Xue et al., 2008) consistently reported that the word-based translation models (TR) yielded better performance than the traditional methods (VSM, Okapi and LM) for question retrieval. These models exploited the word translation probabilities in a language modeling framework. According to Jeon et al. (2005) and Xue et al.
(2008) , the ranking function can be written as:", "cite_spans": [ { "start": 14, "end": 35, "text": "(Berger et al., 2000;", "ref_id": "BIBREF0" }, { "start": 36, "end": 54, "text": "Jeon et al., 2005;", "ref_id": "BIBREF13" }, { "start": 55, "end": 72, "text": "Xue et al., 2008)", "ref_id": "BIBREF21" }, { "start": 344, "end": 362, "text": "Jeon et al. (2005)", "ref_id": "BIBREF13" }, { "start": 367, "end": 384, "text": "Xue et al. (2008)", "ref_id": "BIBREF21" } ], "ref_spans": [], "eq_spans": [], "section": "Translation Model", "sec_num": "3.2" }, { "text": "P_TR(q|Q) = \u220f_{w\u2208q} [(1 \u2212 \u03bb)P_tr(w|Q) + \u03bbP_ml(w|C)] (3)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Translation Model", "sec_num": "3.2" }, { "text": "P_tr(w|Q) = \u2211_{t\u2208Q} P(w|t)P_ml(t|Q), P_ml(t|Q) = #(t, Q)/|Q| (4)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Translation Model", "sec_num": "3.2" }, { "text": "where P(w|t) denotes the translation probability from word t to word w.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Translation Model", "sec_num": "3.2" }, { "text": "Xue et al. (2008) proposed to linearly mix the two estimations by combining the language model and the translation model into a unified framework, called TRLM. The experiments show that this model achieves better performance than both the language model and the translation model. Following Xue et al.
(2008)", "ref_id": "BIBREF21" }, { "start": 336, "end": 357, "text": "(Salton et al., 1975)", "ref_id": "BIBREF18" }, { "start": 363, "end": 387, "text": "(Zhai and Lafferty, 2001", "ref_id": "BIBREF22" }, { "start": 394, "end": 413, "text": "(Jeon et al., 2005)", "ref_id": "BIBREF13" }, { "start": 423, "end": 441, "text": "(Xue et al., 2008)", "ref_id": "BIBREF21" } ], "ref_spans": [], "eq_spans": [], "section": "Translation-Based Language Model", "sec_num": "3.3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "PT RLM (q|Q) = \u220f w\u2208q (1\u2212\u03bb)Pmx(w|Q)+\u03bbP ml (w|C) (5) Pmx(w|Q) = \u03b4 \u2211 t\u2208Q P (w|t)P ml (t|Q) + (1 \u2212 \u03b4)P ml (w|Q)", "eq_num": "(6)" } ], "section": "Translation-Based Language Model", "sec_num": "3.3" }, { "text": "Before introducing our proposed method, we first briefly describe the basic Latent Dirichlet Allocation (LDA) model (Blei et al., 2003) . The notations we used in this paper are presented in Table 1 , and the graphic model representations of LDA model is shown in Figure 2 . LDA models the generation of document content as two independent stochastic processes by introducing latent topic space. For an arbitrary word w in document d, (1) a topic z is first sampled from the multinomial distribution \u03b8 d , which is generated from the Dirichlet prior parameterized by \u03b1;", "cite_spans": [ { "start": 116, "end": 135, "text": "(Blei et al., 2003)", "ref_id": "BIBREF3" } ], "ref_spans": [ { "start": 191, "end": 198, "text": "Table 1", "ref_id": "TABREF1" }, { "start": 264, "end": 272, "text": "Figure 2", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Topic Model for Question Retrieval", "sec_num": "4.1" }, { "text": "(2) and then the word w is generated from multinomial distribution \u03c8 z , which is generated from the Dirichlet prior parameterized by \u03b2. 
The two Dirichlet priors for the document-topic distribution \u03b8 d and the topic-word distribution \u03c8 z reduce the probability of overfitting the training documents and enhance the ability to infer topic distributions for new documents.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Topic Model for Question Retrieval", "sec_num": "4.1" }, { "text": "In cQA, the historical questions in the archives can be considered as documents. In this paper, we employ the state-of-the-art topic model, LDA (Blei et al., 2003) , to discover the latent topics in the content of questions. We assume that a queried question q and the historical questions Q in the cQA archives are represented by a distribution over topics. We obtain the topic distribution of a question by merging the topic distributions of the words in the question. Formally, we have", "cite_spans": [ { "start": 146, "end": 165, "text": "(Blei et al., 2003)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Topic Model for Question Retrieval", "sec_num": "4.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "P_TM(z|q) = (1/|q|) (\u03bb1 \u2211_{w\u2208q} P(z|w))", "eq_num": "(7)" } ], "section": "Topic Model for Question Retrieval", "sec_num": "4.1" }, { "text": "Then, we assume that a question Q in the archives and a queried question q have the same prior probability, so the score function between the two questions can be written as:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Topic Model for Question Retrieval", "sec_num": "4.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "P_TM(q|Q) = \u2211_z P(q|z)P_TM(z|Q) = \u2211_{z\u2208K} [P(z|q)P(q)/p(z)] P_TM(z|Q) = (K/|q|) \u2211_{z\u2208K} P_TM(z|q)P_TM(z|Q)", "eq_num": "(8)" } ], "section": "Topic Model for Question
Retrieval", "sec_num": "4.1" }, { "text": "In cQA, the questions are organized into a hierarchy of categories. For example, the subcategory \"Computer Networking\" is a child category of \"Computers & Internet\" in Yahoo! Answers. When a user asks a question, the user chooses a category for the question and then posts the question in that category. For example, the questions in the subcategory \"Computer Networking\" are mainly related to computer software or networking equipment.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Topic Model Incorporated with Category Information", "sec_num": "4.2" }, { "text": "To utilize the category information provided by cQA, we propose a topic model incorporated with category information (TMC) to discover the latent topics in the content of questions. The graphical representation of our proposed TMC model is presented in Figure 3 . Inspired by the related work on topic analysis (Blei et al., 2003; Griffiths and Steyvers, 2004; Zhou et al., 2008; Wang and Mc-Callum, 2006; Guo et al., 2008; Celikyilmaz et al., 2010; Jo and Oh, 2011) , we make the following assumptions about the probabilistic structure of the TMC model. First, each question is modeled as a multinomial distribution over latent topics, and each topic is modeled as a multinomial distribution over words and a multinomial distribution over categories. Second, the prior distributions for topics, words and categories follow differently parameterized Dirichlet distributions, which are conjugate priors for the multinomial distribution. In Figure 3 , for each word w in question q, a topic z is first drawn from the multinomial distribution \u03b8 q , and then a word is sampled from the multinomial distribution \u03d5 z and a category c is also sampled from the multinomial distribution \u03c8 z for the word. Repeating this process N q times, we get the words and the category for a question. We obtain the whole question set by repeating the above process N times. 
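A minimal sketch of this TMC generative process under the stated assumptions; the topic-word (phi), topic-category (psi), and question-topic (theta_q) distributions below are invented for illustration only.

```python
import random

def generate_question(theta_q, phi, psi, vocab, categories, n_words, seed=7):
    # TMC generative process for one question q: for each word position,
    # draw a topic z ~ Multinomial(theta_q), then a word from phi[z] and a
    # category observation from psi[z] -- the category channel is what
    # distinguishes TMC from plain LDA.
    rng = random.Random(seed)
    words, cats = [], []
    topics = list(range(len(theta_q)))
    for _ in range(n_words):
        z = rng.choices(topics, weights=theta_q)[0]
        words.append(rng.choices(vocab, weights=phi[z])[0])
        cats.append(rng.choices(categories, weights=psi[z])[0])
    return words, cats

vocab = ['car', 'website', 'stock']
categories = ['Cars & Transportation', 'Business & Finance']
phi = [[0.7, 0.3, 0.0], [0.1, 0.3, 0.6]]  # topic-word distributions
psi = [[0.9, 0.1], [0.2, 0.8]]            # topic-category distributions
theta_q = [0.8, 0.2]
words, cats = generate_question(theta_q, phi, psi, vocab, categories, 10)
```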
After that, we obtain the topic distribution of a question by merging the topic distributions of its words and its category. So equation (7) can be rewritten as:", "cite_spans": [ { "start": 309, "end": 328, "text": "(Blei et al., 2003;", "ref_id": "BIBREF3" }, { "start": 329, "end": 358, "text": "Griffiths and Steyvers, 2004;", "ref_id": "BIBREF11" }, { "start": 359, "end": 377, "text": "Zhou et al., 2008;", "ref_id": "BIBREF23" }, { "start": 378, "end": 403, "text": "Wang and Mc-Callum, 2006;", "ref_id": null }, { "start": 404, "end": 421, "text": "Guo et al., 2008;", "ref_id": "BIBREF12" }, { "start": 422, "end": 447, "text": "Celikyilmaz et al., 2010;", "ref_id": "BIBREF8" }, { "start": 448, "end": 464, "text": "Jo and Oh, 2011)", "ref_id": "BIBREF14" } ], "ref_spans": [ { "start": 251, "end": 259, "text": "Figure 3", "ref_id": "FIGREF2" }, { "start": 924, "end": 932, "text": "Figure 3", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "Topic Model Incorporated with Category Information", "sec_num": "4.2" }, { "text": "P_TMC(z|q) = (1/(1 + |q|)) (\u03bb2 P(z|c) + \u03bb3 \u2211_{w\u2208q} P(z|w)) (9)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Topic Model Incorporated with Category Information", "sec_num": "4.2" }, { "text": "In equation (9), the topic distribution of the question category is modeled by \u03bb2 P(z|c), and the topic distribution of the words in the question is modeled by \u03bb3 \u2211_{w\u2208q} P(z|w). 
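Equation (9) can be sketched directly; the per-word posteriors P(z|w) and per-category posteriors P(z|c) below are hypothetical placeholders standing in for values a trained model would provide.

```python
def p_tmc(topic, question_words, category, p_z_w, p_z_c, lam2=0.5, lam3=0.5):
    # Equation (9):
    #   P_TMC(z|q) = (lam2 * P(z|c) + lam3 * sum_{w in q} P(z|w)) / (1 + |q|)
    word_part = sum(p_z_w.get((topic, w), 0.0) for w in question_words)
    cat_part = p_z_c.get((topic, category), 0.0)
    return (lam2 * cat_part + lam3 * word_part) / (1 + len(question_words))

# Hypothetical topic posteriors; keys are (topic, word) / (topic, category).
p_z_w = {(0, 'car'): 0.9, (1, 'car'): 0.1,
         (0, 'website'): 0.4, (1, 'website'): 0.6}
p_z_c = {(0, 'Cars & Transportation'): 0.8, (1, 'Cars & Transportation'): 0.2}
score = p_tmc(0, ['car', 'website'], 'Cars & Transportation', p_z_w, p_z_c)
```

With these toy numbers, topic 0 gets (0.5*0.8 + 0.5*(0.9+0.4)) / 3 = 0.35, showing how the category evidence and the word evidence are pooled.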
The relative importance of these two parts is adjusted through \u03bb 2 and \u03bb 3 .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Topic Model Incorporated with Category Information", "sec_num": "4.2" }, { "text": "Introducing the category information into the process of discovering the latent topics, equation (8) can be rewritten as:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Topic Model Incorporated with Category Information", "sec_num": "4.2" }, { "text": "P_TMC(q|Q) = \u2211_z P(q|z)P_TMC(z|Q) = \u2211_{z\u2208K} [P(z|q)P(q)/p(z)] P_TMC(z|Q) = (K/|q|) \u2211_{z\u2208K} P_TMC(z|q)P_TMC(z|Q) (10)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Topic Model Incorporated with Category Information", "sec_num": "4.2" }, { "text": "After introducing our proposed TMC method, we now describe how to estimate the parameters used in the model. In TMC, we introduce new parameters, so the inference cannot be done exactly. The Expectation-Maximization (EM) algorithm is a possible choice for estimating the parameters of models with latent variables. However, EM suffers from the possibility of running into local maxima and from a high computational burden. Therefore, we employ an alternative approach, Gibbs sampling (Griffiths, 2002) , which is gaining popularity in recent work on latent topic analysis (Griffiths and Steyvers, 2004; Zhou et al., 2008; Wang and Mc-Callum, 2006; Guo et al., 2008; Jo and Oh, 2011) . 
After training the model, we can get the following parameter estimates:", "cite_spans": [ { "start": 485, "end": 502, "text": "(Griffiths, 2002)", "ref_id": null }, { "start": 573, "end": 603, "text": "(Griffiths and Steyvers, 2004;", "ref_id": "BIBREF11" }, { "start": 604, "end": 622, "text": "Zhou et al., 2008;", "ref_id": "BIBREF23" }, { "start": 623, "end": 648, "text": "Wang and Mc-Callum, 2006;", "ref_id": null }, { "start": 649, "end": 666, "text": "Guo et al., 2008;", "ref_id": "BIBREF12" }, { "start": 667, "end": 683, "text": "Jo and Oh, 2011)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Parameter Estimation for TMC", "sec_num": "4.3" }, { "text": "\u03b8_qz = (n_qz + \u03b1_z \u2212 1) / (\u2211_{z\u2032=1}^{K} (n_qz\u2032 + \u03b1_z\u2032) \u2212 1), \u03d5_zw = (n_zw + \u03b2_w \u2212 1) / (\u2211_{v=1}^{|V|} (n_zv + \u03b2_v) \u2212 1), \u03c8_zc = (n_zc + \u03b3_c \u2212 1) / (\u2211_{c\u2032=1}^{|C|} (n_zc\u2032 + \u03b3_c\u2032) \u2212 1)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Parameter Estimation for TMC", "sec_num": "4.3" }, { "text": "Since the TMC model and the translation-based language model use different strategies for question retrieval, it is interesting to explore how to combine their strengths. In this section, we propose an approach to linearly combine the TMC model with the TRLM model for question retrieval. In this paper, we choose the translation-based language model (TRLM) (Xue et al., 2008) as the foundation of our solution, since TRLM has achieved state-of-the-art performance for question retrieval (Xue et al., 2008; Cao et al., 2010) . 
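The theta estimator above (phi and psi are computed analogously from topic-word and topic-category counts) can be sketched as follows; the counts and the hyperparameter alpha are toy values, and the formula is implemented exactly as the paper states it.

```python
def estimate_theta(n_qz, alpha):
    # The paper's estimator:
    #   theta_{qz} = (n_{qz} + alpha_z - 1)
    #                / (sum_{z'} (n_{qz'} + alpha_{z'}) - 1)
    # n_qz[z] is the number of words in question q assigned to topic z
    # during Gibbs sampling; alpha[z] is the Dirichlet hyperparameter.
    denom = sum(n + a for n, a in zip(n_qz, alpha)) - 1
    return [(n + a - 1) / denom for n, a in zip(n_qz, alpha)]

# Toy counts for one question with K = 2 topics and alpha = (1.5, 1.5):
# 7 of its 10 word tokens were assigned to topic 0, 3 to topic 1.
theta = estimate_theta([7, 3], [1.5, 1.5])
```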
Formally, we have", "cite_spans": [ { "start": 353, "end": 371, "text": "(Xue et al., 2008)", "ref_id": "BIBREF21" }, { "start": 484, "end": 502, "text": "(Xue et al., 2008;", "ref_id": "BIBREF21" }, { "start": 503, "end": 520, "text": "Cao et al., 2010)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Combining the TMC with the TRLM for Question Retrieval", "sec_num": "4.4" }, { "text": "P_TMC\u2212TRLM(q|Q) = \u00b5P_TRLM(q|Q) + (1 \u2212 \u00b5)P_TMC(q|Q) (11)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Combining the TMC with the TRLM for Question Retrieval", "sec_num": "4.4" }, { "text": "In equation (11), the relative importance of TMC and TRLM is adjusted through \u00b5. When \u00b5 = 1, the retrieval model is based on TRLM alone; when \u00b5 = 0, it is based on TMC alone.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Combining the TMC with the TRLM for Question Retrieval", "sec_num": "4.4" }, { "text": "We collect the questions from Yahoo! Answers and use the getByCategory function provided in the Yahoo! Answers API 3 to obtain Q&A threads from the Yahoo! site. More specifically, we utilize the resolved questions, and the resulting question repository that we use for question retrieval contains 2,288,607 questions. Each resolved question consists of four parts: \"question title\", \"question description\", \"question answers\" and \"question category\". For question retrieval, we only use the \"question title\" and \"question category\" parts. It is assumed that the titles and categories of the questions already provide enough semantic information. There are 26 categories at the first level and 1,262 categories at the leaf level. Each question belongs to a unique leaf category. Table 2 shows the distribution across first-level categories of the questions in the training data set. To learn the translation probabilities, we use about one million question-answer pairs from another data set. 
4 We randomly select 252 questions for the test set and another 252 questions for the development set. We select the test and development sets in proportion to the number of questions per category in the whole collection, to better control for possible imbalance. To obtain the ground truth for question retrieval, we employ the Vector Space Model (VSM) (Salton et al., 1975) to retrieve the top 20 results and obtain manual judgements. The top 20 results do not include the queried question itself. Given a result returned by VSM, an annotator is asked to label it as \"relevant\" or \"irrelevant\". If a returned result is considered semantically equivalent to the queried question, the annotator labels it as \"relevant\"; otherwise, as \"irrelevant\". Two annotators are involved in the annotation process; if they conflict, a third annotator makes the final judgement. In the process of manually judging questions, the annotators are presented with only the questions. Metrics: We evaluate the performance of our approach using the following metrics: Mean Average Precision (MAP) and Precision@n (P@n). MAP rewards methods that return relevant questions early and also rewards correct ranking of the results. P@n reports the fraction of the top-n retrieved questions that are relevant. We perform a significance test, i.e., a t-test, with a significance level of 0.05.", "cite_spans": [ { "start": 991, "end": 992, "text": "4", "ref_id": null }, { "start": 1357, "end": 1378, "text": "(Salton et al., 1975)", "ref_id": "BIBREF18" } ], "ref_spans": [ { "start": 777, "end": 784, "text": "Table 2", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Data Set and Evaluation Metrics", "sec_num": "5.1" }, { "text": "Parameter Selection: The experiments involve several parameters. 
Following the literature, we set the smoothing parameter \u03bb in equations (1), (3) and (5) to 0.2 (Cao et al., 2010), and the parameter \u03b4 in equation (6), which controls the impact of the translation component, to 0.8 (Xue et al., 2008; Cao et al., 2010). Other parameters are tuned on the development set, as we will show in the experiments.", "cite_spans": [ { "start": 154, "end": 172, "text": "(Cao et al., 2010)", "ref_id": "BIBREF7" }, { "start": 215, "end": 233, "text": "(Xue et al., 2008;", "ref_id": "BIBREF21" }, { "start": 234, "end": 251, "text": "Cao et al., 2010)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Data Set and Evaluation Metrics", "sec_num": "5.1" }, { "text": "In this section, we concentrate on how to select a proper number of topics, so that our model achieves its best performance on the test set, and a sufficient number of iterations of Algorithm 1 to prevent overfitting. Here, following (Guo et al., 2008), we use perplexity to estimate the performance of our model. 
We calculate the perplexity on the development set, which is a sequence of tuples (q, w, c) \u2208 D_dev:", "cite_spans": [ { "start": 213, "end": 231, "text": "(Guo et al., 2008)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Topic Number Selection", "sec_num": "5.2" }, { "text": "Perplexity(D_dev) = exp{\u2212 \u2211_{(q,w,c)\u2208D_dev} ln P(w, c|q) / |D_dev|}", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Topic Number Selection", "sec_num": "5.2" }, { "text": "Here, the probability P(w, c|q) is calculated according to the parameters trained from the historical question-answer pairs:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Topic Number Selection", "sec_num": "5.2" }, { "text": "P(w, c|q) = \u2211_{z=1}^{K} P(w|z)P(c|z)P(z|q)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Topic Number Selection", "sec_num": "5.2" }, { "text": "Figure 4(a) shows the influence of the number of Gibbs sampling iterations on the model's generalization ability. Empirically, we fix the topic number at 100 and vary the iteration number in the experiments. Note that a lower perplexity value indicates better generalization ability on the held-out set. From Figure 4(a), we see that the perplexity value decreases dramatically when the number of iterations is below 200. Figure 4(b) shows the perplexity values for different numbers of topics: the perplexity decreases when the number of topics starts to increase; however, after a certain point, the perplexity values start to increase. Based on the above experiments, we train our model using 100 topics and 200 iterations.", "cite_spans": [], "ref_spans": [ { "start": 313, "end": 324, "text": "Figure 4(a)", "ref_id": null } ], "eq_spans": [], "section": "Topic Number Selection", "sec_num": "5.2" }, { "text": "In equation (11), we use the parameter \u00b5 to adjust the relative importance of the TMC and the TRLM. Figure 5 illustrates the influence of the value of \u00b5 on the performance of question retrieval in terms of MAP and P@10, respectively. 
The TMC and TRLM are used for reference. The results are obtained with the 252 questions on the development set. From Figure 5, we see that for both MAP and P@10, the combined model TMC-TRLM performs better than the TMC and TRLM when \u00b5 is between 0 and 0.7. In both cases, a relatively broad range of good parameter values is observed. Table 3 shows the main results of question retrieval using the baseline methods and our proposed TMC-TRLM. In Table 3, VSM refers to the vector space model of (Salton et al., 1975); BM25 refers to the model of (Robertson et al., 1994); LM refers to the language model of (Zhai and Lafferty, 2001); TR refers to the translation model of (Jeon et al., 2005; Xue et al., 2008); TRLM refers to the translation-based language model of (Xue et al., 2008); and TRLM+CE refers to the method of (Cao et al., 2010). 5 In row 7, we show our approach and choose the best parameter K = 100. There are some clear trends in the results of Table 3: (1) The simple unigram language model (LM) performs slightly better than the classical retrieval models, VSM and BM25 (row 1 vs. row 3; row 2 vs. 
row 3).", "cite_spans": [ { "start": 642, "end": 663, "text": "(Salton et al., 1975)", "ref_id": "BIBREF18" }, { "start": 694, "end": 718, "text": "(Robertson et al., 1994)", "ref_id": "BIBREF17" }, { "start": 756, "end": 781, "text": "(Zhai and Lafferty, 2001)", "ref_id": "BIBREF22" }, { "start": 822, "end": 841, "text": "(Jeon et al., 2005;", "ref_id": "BIBREF13" }, { "start": 842, "end": 859, "text": "Xue et al., 2008)", "ref_id": "BIBREF21" }, { "start": 917, "end": 935, "text": "(Xue et al., 2008)", "ref_id": "BIBREF21" }, { "start": 972, "end": 990, "text": "(Cao et al., 2010)", "ref_id": "BIBREF7" }, { "start": 993, "end": 994, "text": "5", "ref_id": null } ], "ref_spans": [ { "start": 98, "end": 106, "text": "Figure 5", "ref_id": null }, { "start": 360, "end": 368, "text": "Figure 5", "ref_id": null }, { "start": 592, "end": 599, "text": "Table 3", "ref_id": "TABREF4" }, { "start": 1111, "end": 1119, "text": "Table 3:", "ref_id": "TABREF4" } ], "eq_spans": [], "section": "The Relative Importance of Parameter \u00b5", "sec_num": "5.3" }, { "text": "(2) The translation model (TR) outperforms the LM by significant margins (row 3 vs. row 4).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Effectiveness of Our Proposed TMC Model", "sec_num": "5.4" }, { "text": "(3) The translation-based language model (TRLM) significantly outperforms the translation model (TR) (row 4 vs. row 5); a similar observation was made by Xue et al. (2008).", "cite_spans": [ { "start": 155, "end": 172, "text": "Xue et al. (2008)", "ref_id": "BIBREF21" } ], "ref_spans": [], "eq_spans": [], "section": "The Effectiveness of Our Proposed TMC Model", "sec_num": "5.4" }, { "text": "(4) Incorporating the category information of questions into the translation-based language model (TRLM) can significantly improve the question retrieval performance (row 5 vs. row 6); a similar observation was made by Cao et al. 
(2010).", "cite_spans": [ { "start": 217, "end": 234, "text": "Cao et al. (2010)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "The Effectiveness of Our Proposed TMC Model", "sec_num": "5.4" }, { "text": "(5) Our proposed approach TMC does not outperform the baseline methods TRLM and TRLM+CE (row 5 vs. row 7; row 6 vs. row 7). This demonstrates that the knowledge extracted from TMC is not as effective as that extracted from TRLM for question retrieval. TRLM learns the word-to-word translation probabilities from a parallel corpus collected from question-answer archives. However, TMC models the word-category-topic distribution over the whole question-answer content. The knowledge extracted from TMC is thus much noisier than that of TRLM, which we suspect leads to the poor performance of TMC.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Effectiveness of Our Proposed TMC Model", "sec_num": "5.4" }, { "text": "(6) Our proposed approach TMC-TRLM significantly outperforms the baseline methods TRLM and TRLM+CE (row 5 vs. row 8; row 6 vs. row 8). We conduct a significance test (t-test) on the improvements of our approach over TRLM and TRLM+CE. The result indicates that the improvements are statistically significant in terms of all the evaluation measures. 6 This demonstrates that the knowledge extracted from TMC is complementary to the knowledge extracted from TRLM+CE for question retrieval.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Effectiveness of Our Proposed TMC Model", "sec_num": "5.4" }, { "text": "Like previous approaches, we treat each question as a multinomial distribution over latent topics, and each topic as a multinomial distribution over words. Different from previous work on topic analysis (Blei et al., 2003; Griffiths and Steyvers, 2004; Zhou et al., 2008; Wang and McCallum, 2006; Guo et al., 2008; Celikyilmaz et al., 2010; Jo and Oh, 2011), we introduce the category information of questions, which is predefined by cQA services, into the process of discovering latent topics. 6 The comparisons are significant at p < 0.05. 
Table 4 : The effectiveness of category information for question retrieval.", "cite_spans": [ { "start": 211, "end": 230, "text": "(Blei et al., 2003;", "ref_id": "BIBREF3" }, { "start": 231, "end": 260, "text": "Griffiths and Steyvers, 2004;", "ref_id": "BIBREF11" }, { "start": 261, "end": 279, "text": "Zhou et al., 2008;", "ref_id": "BIBREF23" }, { "start": 280, "end": 305, "text": "Wang and McCallum, 2006;", "ref_id": null }, { "start": 306, "end": 323, "text": "Guo et al., 2008;", "ref_id": "BIBREF12" }, { "start": 324, "end": 340, "text": "Celikyilmaz et al., 2010; Jo and Oh, 2011)", "ref_id": null } ], "ref_spans": [ { "start": 386, "end": 393, "text": "Table 4", "ref_id": null } ], "eq_spans": [], "section": "The Effectiveness of Category Information", "sec_num": "5.5" }, { "text": "To see how much the category information benefits question retrieval, we introduce a baseline method for comparison. The baseline method, denoted TM-TRLM, is the proposed method without the category information. Table 5 provides the comparison. From the Table, we see that exploring the category information can significantly improve the performance of question retrieval (row 1 vs. row 2).", "cite_spans": [], "ref_spans": [ { "start": 404, "end": 411, "text": "Table 5", "ref_id": null }, { "start": 446, "end": 452, "text": "Table,", "ref_id": null } ], "eq_spans": [], "section": "The Effectiveness of Category Information", "sec_num": "5.5" }, { "text": "In this paper, we present a new approach to discovering the latent topics of questions to improve the performance of the translation-based language model for question retrieval. Experiments conducted on real cQA data demonstrate that our proposed approach significantly outperforms the state-of-the-art methods (TRLM and TRLM+CE). 
This research could be continued in several ways. First, question structure should be considered; it would therefore be worthwhile to combine the proposed approach with other question retrieval methods (e.g., (Duan et al., 2008; Wang et al., 2009; Bunescu and Huang, 2010)) to further improve the performance. Second, we will investigate applying the proposed approach to other kinds of data sets, such as categorized questions from forum sites and FAQ sites.", "cite_spans": [ { "start": 534, "end": 553, "text": "(Duan et al., 2008;", "ref_id": "BIBREF9" }, { "start": 554, "end": 572, "text": "Wang et al., 2009;", "ref_id": "BIBREF20" }, { "start": 573, "end": 597, "text": "Bunescu and Huang, 2010)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Conclusions and Future Work", "sec_num": "6" }, { "text": "http://answers.yahoo.com 2 http://qna.live.com", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "http://developer.yahoo.com/answers", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "The Yahoo! Webscope dataset Yahoo answers comprehensive questions and answers version 1.0.2, available at http://reseach.yahoo.com/Academic Relations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Here, we implement the method of (Cao et al., 2010) and use the TRLM to compute the global relevance and local relevance.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "This work was supported by the National Natural Science Foundation of China (No. 60875041 and No. 61070106). 
We thank the anonymous reviewers for their insightful comments.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Bridging the lexical chasm: statistical approach to answer-finding", "authors": [ { "first": "A", "middle": [], "last": "Berger", "suffix": "" }, { "first": "R", "middle": [], "last": "Caruana", "suffix": "" }, { "first": "D", "middle": [], "last": "Cohn", "suffix": "" }, { "first": "D", "middle": [], "last": "Freitag", "suffix": "" }, { "first": "V", "middle": [], "last": "Mittal", "suffix": "" } ], "year": 2000, "venue": "Proceedings of SIGIR", "volume": "", "issue": "", "pages": "192--199", "other_ids": {}, "num": null, "urls": [], "raw_text": "A. Berger, R. Caruana, D. Cohn, D. Freitag, and V. Mit- tal. 2000. Bridging the lexical chasm: statistical ap- proach to answer-finding. In Proceedings of SIGIR, pages 192-199.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Information retrieval as statistical translation", "authors": [ { "first": "A", "middle": [], "last": "Berger", "suffix": "" }, { "first": "J", "middle": [], "last": "Lafferty", "suffix": "" } ], "year": 1999, "venue": "Proceedings of SIGIR", "volume": "", "issue": "", "pages": "222--229", "other_ids": {}, "num": null, "urls": [], "raw_text": "A. Berger and J. Lafferty. 1999. Information retrieval as statistical translation. In Proceedings of SIGIR, pages 222-229.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Combining lexical semantic resources with question & answer archives for translation-based answer finding", "authors": [ { "first": "D", "middle": [], "last": "Bernhard", "suffix": "" }, { "first": "I", "middle": [], "last": "Gurevych", "suffix": "" } ], "year": 2009, "venue": "Proceedings of ACL", "volume": "", "issue": "", "pages": "728--736", "other_ids": {}, "num": null, "urls": [], "raw_text": "D. Bernhard and I. Gurevych. 2009. 
Combining lexical semantic resources with question & answer archives for translation-based answer finding. In Proceedings of ACL, pages 728-736.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Latent dirichlet allocation", "authors": [ { "first": "D", "middle": [ "M" ], "last": "Blei", "suffix": "" }, { "first": "A", "middle": [], "last": "Ng", "suffix": "" }, { "first": "M", "middle": [], "last": "Jordan", "suffix": "" } ], "year": 2003, "venue": "Journal of Machine Learning Research", "volume": "3", "issue": "", "pages": "993--1022", "other_ids": {}, "num": null, "urls": [], "raw_text": "D. M. Blei, A. Ng, and M. Jordan. 2003. Latent dirichlet allocation. Journal of Machine Learning Research, 3:993-1022.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "The mathematics of statistical machine translation: parameter estimation", "authors": [ { "first": "P", "middle": [ "F" ], "last": "Brown", "suffix": "" }, { "first": "V", "middle": [ "J D" ], "last": "Pietra", "suffix": "" }, { "first": "S", "middle": [ "A D" ], "last": "Pietra", "suffix": "" }, { "first": "R", "middle": [ "L" ], "last": "Mercer", "suffix": "" } ], "year": 1993, "venue": "Computational Linguistics", "volume": "19", "issue": "2", "pages": "263--311", "other_ids": {}, "num": null, "urls": [], "raw_text": "P. F. Brown, V. J. D. Pietra, S. A. D. Pietra, and R. L. Mercer. 1993. The mathematics of statistical ma- chine translation: parameter estimation. Computa- tional Linguistics, 19(2):263-311.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Learing the relative usefulness of questions in community QA", "authors": [ { "first": "R", "middle": [], "last": "Bunescu", "suffix": "" }, { "first": "Y", "middle": [], "last": "Huang", "suffix": "" } ], "year": 2010, "venue": "Proceedings of EMNLP", "volume": "", "issue": "", "pages": "97--107", "other_ids": {}, "num": null, "urls": [], "raw_text": "R. Bunescu and Y. Huang. 2010. 
Learing the relative usefulness of questions in community QA. In Pro- ceedings of EMNLP, pages 97-107.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "The use of categorization information in language models for question retrieval", "authors": [ { "first": "X", "middle": [], "last": "Cao", "suffix": "" }, { "first": "G", "middle": [], "last": "Cong", "suffix": "" }, { "first": "B", "middle": [], "last": "Cui", "suffix": "" }, { "first": "C", "middle": [ "S" ], "last": "Jensen", "suffix": "" }, { "first": "C", "middle": [], "last": "Zhang", "suffix": "" } ], "year": 2009, "venue": "Proceedings of CIKM", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "X. Cao, G. Cong, B. Cui, C. S. Jensen, and C. Zhang. 2009. The use of categorization information in lan- guage models for question retrieval. In Proceedings of CIKM.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "A generalized framework of exploring category information for question retrieval in community question answer archives", "authors": [ { "first": "X", "middle": [], "last": "Cao", "suffix": "" }, { "first": "G", "middle": [], "last": "Cong", "suffix": "" }, { "first": "B", "middle": [], "last": "Cui", "suffix": "" }, { "first": "C", "middle": [ "S" ], "last": "Jensen", "suffix": "" } ], "year": 2010, "venue": "Proceedings of WWW", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "X. Cao, G. Cong, B. Cui, and C. S. Jensen. 2010. A generalized framework of exploring category infor- mation for question retrieval in community question answer archives. 
In Proceedings of WWW.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "LDA based similarity modeling for question answering", "authors": [ { "first": "A", "middle": [], "last": "Celikyilmaz", "suffix": "" }, { "first": "D", "middle": [], "last": "Hakkani-Tur", "suffix": "" }, { "first": "G", "middle": [], "last": "Tur", "suffix": "" } ], "year": 2010, "venue": "Proceedings of ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "A. Celikyilmaz, D. Hakkani-Tur, and G. Tur. 2010. LDA based similarity modeling for question answer- ing. In Proceedings of ACL.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Searching questions by identifying questions topics and question focus", "authors": [ { "first": "H", "middle": [], "last": "Duan", "suffix": "" }, { "first": "Y", "middle": [], "last": "Cao", "suffix": "" }, { "first": "C", "middle": [ "Y" ], "last": "Lin", "suffix": "" }, { "first": "Y", "middle": [], "last": "Yu", "suffix": "" } ], "year": 2008, "venue": "Proceedings of ACL", "volume": "", "issue": "", "pages": "156--164", "other_ids": {}, "num": null, "urls": [], "raw_text": "H. Duan, Y. Cao, C. Y. Lin, and Y. Yu. 2008. Searching questions by identifying questions topics and ques- tion focus. In Proceedings of ACL, pages 156-164.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Gibbs sampling in the generative model of latent dirichlet allocation", "authors": [ { "first": "T", "middle": [], "last": "Griffiths", "suffix": "" } ], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "T. Griffiths. Gibbs sampling in the generative model of latent dirichlet allocation. 
http://www- psych.stanford.edu/ gruffydd/cogsci02/lda.ps.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Finding scientific topics", "authors": [ { "first": "T", "middle": [], "last": "Griffiths", "suffix": "" }, { "first": "M", "middle": [], "last": "Steyvers", "suffix": "" } ], "year": 2004, "venue": "National Academy of Sciences", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "T. Griffiths and M. Steyvers. 2004. Finding scientific topics. In National Academy of Sciences.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Tapping on the potential of Q&A community by recommending answer providers", "authors": [ { "first": "J", "middle": [], "last": "Guo", "suffix": "" }, { "first": "S", "middle": [], "last": "Xu", "suffix": "" }, { "first": "S", "middle": [], "last": "Bao", "suffix": "" }, { "first": "Y", "middle": [], "last": "Yu", "suffix": "" } ], "year": 2008, "venue": "Proceedings of CIKM", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "J. Guo, S. Xu, S. Bao, and Y. Yu. 2008. Tapping on the potential of Q&A community by recommending answer providers. In Proceedings of CIKM.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Finding similar questions in large question and answer archives", "authors": [ { "first": "J", "middle": [], "last": "Jeon", "suffix": "" }, { "first": "W", "middle": [ "Bruce" ], "last": "Croft", "suffix": "" }, { "first": "J", "middle": [ "H" ], "last": "Lee", "suffix": "" } ], "year": 2005, "venue": "Proceedings of CIKM", "volume": "", "issue": "", "pages": "84--90", "other_ids": {}, "num": null, "urls": [], "raw_text": "J. Jeon, W. Bruce Croft, and J. H. Lee. 2005. Find- ing similar questions in large question and answer archives. 
In Proceedings of CIKM, pages 84-90.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Aspect and sentiment unification model for online review analysis", "authors": [ { "first": "Y", "middle": [], "last": "Jo", "suffix": "" }, { "first": "A", "middle": [], "last": "Oh", "suffix": "" } ], "year": 2011, "venue": "Proceedings of WSDM", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Y. Jo and A. Oh. 2011. Aspect and sentiment unifica- tion model for online review analysis. In Proceed- ings of WSDM.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Bridge lexical gaps between queries and questions on large online Q&A collections with compact translation models", "authors": [ { "first": "J. -T", "middle": [], "last": "Lee", "suffix": "" }, { "first": "S. -B", "middle": [], "last": "Kim", "suffix": "" }, { "first": "Y. -I", "middle": [], "last": "Song", "suffix": "" }, { "first": "H. -C", "middle": [], "last": "Rim", "suffix": "" } ], "year": 2008, "venue": "Proceedings of EMNLP", "volume": "", "issue": "", "pages": "410--418", "other_ids": {}, "num": null, "urls": [], "raw_text": "J. -T. Lee, S. -B. Kim, Y. -I. Song, and H. -C. Rim. 2008. Bridge lexical gaps between queries and ques- tions on large online Q&A collections with com- pact translation models. In Proceedings of EMNLP, pages 410-418.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "A language modeling approach to information retrieval", "authors": [ { "first": "J", "middle": [ "M" ], "last": "Ponte", "suffix": "" }, { "first": "W", "middle": [ "B" ], "last": "Croft", "suffix": "" } ], "year": 1998, "venue": "Proceedings of SIGIR", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "J. M. Ponte and W. B. Croft. 1998. A language mod- eling approach to information retrieval. 
In Proceed- ings of SIGIR.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Okapi at trec-3", "authors": [ { "first": "S", "middle": [], "last": "Robertson", "suffix": "" }, { "first": "S", "middle": [], "last": "Walker", "suffix": "" }, { "first": "S", "middle": [], "last": "Jones", "suffix": "" }, { "first": "M", "middle": [], "last": "Hancock-Beaulieu", "suffix": "" }, { "first": "M", "middle": [], "last": "Gatford", "suffix": "" } ], "year": 1994, "venue": "Proceedings of TREC", "volume": "", "issue": "", "pages": "109--126", "other_ids": {}, "num": null, "urls": [], "raw_text": "S. Robertson, S. Walker, S. Jones, M. Hancock- Beaulieu, and M. Gatford. 1994. Okapi at trec-3. In Proceedings of TREC, pages 109-126.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "A vector space model for automatic indexing", "authors": [ { "first": "G", "middle": [], "last": "Salton", "suffix": "" }, { "first": "A", "middle": [], "last": "Wong", "suffix": "" }, { "first": "C", "middle": [ "S" ], "last": "Yang", "suffix": "" } ], "year": 1975, "venue": "Communications of the ACM", "volume": "18", "issue": "11", "pages": "613--620", "other_ids": {}, "num": null, "urls": [], "raw_text": "G. Salton, A. Wong, and C. S. Yang. 1975. A vector space model for automatic indexing. Communica- tions of the ACM, 18(11):613-620.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Topic over time: a non-markov conditionals-time model of topical trends", "authors": [ { "first": "X", "middle": [], "last": "Wang", "suffix": "" }, { "first": "A", "middle": [], "last": "Mccallum", "suffix": "" } ], "year": 2006, "venue": "Proceedings of SIGKDD", "volume": "", "issue": "", "pages": "424--433", "other_ids": {}, "num": null, "urls": [], "raw_text": "X. Wang and A. McCallum. 2006. Topic over time: a non-markov conditionals-time model of topical trends. 
In Proceedings of SIGKDD, pages 424-433.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "A syntactic tree matching approach to finding similar questions in community-based qa services", "authors": [ { "first": "K", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Z", "middle": [], "last": "Ming", "suffix": "" }, { "first": "T-S", "middle": [], "last": "Chua", "suffix": "" } ], "year": 2009, "venue": "Proceedings of SIGIR", "volume": "", "issue": "", "pages": "187--194", "other_ids": {}, "num": null, "urls": [], "raw_text": "K. Wang, Z. Ming, and T-S. Chua. 2009. A syntactic tree matching approach to finding similar questions in community-based qa services. In Proceedings of SIGIR, pages 187-194.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Retrieval models for question and answer archives", "authors": [ { "first": "X", "middle": [], "last": "Xue", "suffix": "" }, { "first": "J", "middle": [], "last": "Jeon", "suffix": "" }, { "first": "W", "middle": [ "B" ], "last": "Croft", "suffix": "" } ], "year": 2008, "venue": "Proceedings of SIGIR", "volume": "", "issue": "", "pages": "475--482", "other_ids": {}, "num": null, "urls": [], "raw_text": "X. Xue, J. Jeon, and W. B. Croft. 2008. Retrieval mod- els for question and answer archives. In Proceedings of SIGIR, pages 475-482.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "A study of smooth methods for language models applied to ad hoc information retrieval", "authors": [ { "first": "C", "middle": [], "last": "Zhai", "suffix": "" }, { "first": "J", "middle": [], "last": "Lafferty", "suffix": "" } ], "year": 2001, "venue": "Proceedings of SIGIR", "volume": "", "issue": "", "pages": "334--342", "other_ids": {}, "num": null, "urls": [], "raw_text": "C. Zhai and J. Lafferty. 2001. A study of smooth meth- ods for language models applied to ad hoc informa- tion retrieval. 
In Proceedings of SIGIR, pages 334- 342.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Exploring social annotation for information retrieval", "authors": [ { "first": "D", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "J", "middle": [], "last": "Bian", "suffix": "" }, { "first": "S", "middle": [], "last": "Zheng", "suffix": "" }, { "first": "H", "middle": [], "last": "Zha", "suffix": "" }, { "first": "C", "middle": [ "L" ], "last": "Giles", "suffix": "" } ], "year": 2008, "venue": "Proceedings of WWW", "volume": "", "issue": "", "pages": "715--724", "other_ids": {}, "num": null, "urls": [], "raw_text": "D. Zhou, J. Bian, S. Zheng, H. Zha, and C. L. Giles. 2008. Exploring social annotation for information retrieval. In Proceedings of WWW, pages 715-724.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Phrasebased translation model for question retrieval in community question answer archives", "authors": [ { "first": "G", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "L", "middle": [], "last": "Cai", "suffix": "" }, { "first": "J", "middle": [], "last": "Zhao", "suffix": "" }, { "first": "K", "middle": [], "last": "Liu", "suffix": "" } ], "year": 2011, "venue": "Proceedings of ACL-HLT", "volume": "", "issue": "", "pages": "653--662", "other_ids": {}, "num": null, "urls": [], "raw_text": "G. Zhou, L. Cai, J. Zhao, and K. Liu. 2011. Phrase- based translation model for question retrieval in community question answer archives. 
In Proceed- ings of ACL-HLT, pages 653-662.", "links": null } }, "ref_entries": { "FIGREF0": { "uris": null, "type_str": "figure", "text": "Illustration of our proposed approach.", "num": null }, "FIGREF1": { "uris": null, "type_str": "figure", "text": "Latent Dirichlet Allocation.", "num": null }, "FIGREF2": { "uris": null, "type_str": "figure", "text": "Topic model incorporated with category information.", "num": null }, "FIGREF3": { "uris": null, "type_str": "figure", "text": "Figure 4(b) shows the perplexity values for different settings of topic number. From theFigure, we see that the perplexity decreases when the Perplexity on different iteration numbers(a) and topic number selection(b). The relative importance of \u00b5 on the performance of TMC-TRLM.", "num": null }, "TABREF1": { "html": null, "content": "", "num": null, "type_str": "table", "text": "Meanings of the notations used in this paper VSM" }, "TABREF3": { "html": null, "content": "
", "num": null, "type_str": "table", "text": "Number of questions in each first-level category" }, "TABREF4": { "html": null, "content": "
# Models MAP P@10
1 VSM 0.242 0.226
2 BM25 0.301 0.294
3 LM 0.352 0.327
4 TR 0.383 0.330
5 TRLM 0.415 0.342
6 TRLM+CE 0.437 0.358
7 TMC 0.385 0.331
8 TMC-TRLM (K = 100) 0.475 0.371
", "num": null, "type_str": "table", "text": "shows the main results of question retrieval using the baseline methods and our pro-" }, "TABREF5": { "html": null, "content": "", "num": null, "type_str": "table", "text": "Comparison with different methods for question retrieval." } } } }