{ "paper_id": "D19-1049", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T15:57:17.307812Z" }, "title": "PaRe: A Paper-Reviewer Matching Approach Using a Common Topic Space", "authors": [ { "first": "Omer", "middle": [], "last": "Anjum", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Illinois at Urbana-Champaign", "location": { "country": "USA" } }, "email": "oanjum@illinois.edu" }, { "first": "Hongyu", "middle": [], "last": "Gong", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Illinois at Urbana-Champaign", "location": { "country": "USA" } }, "email": "hgong6@illinois.edu" }, { "first": "Suma", "middle": [], "last": "Bhat", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Illinois at Urbana-Champaign", "location": { "country": "USA" } }, "email": "spbhat2@illinois.edu" }, { "first": "Jinjun", "middle": [], "last": "Xiong", "suffix": "", "affiliation": { "laboratory": "", "institution": "IBM Thomas J. Watson Research Center", "location": { "country": "USA" } }, "email": "jinjun@us.ibm.com" }, { "first": "Wen-Mei", "middle": [], "last": "Hwu", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Illinois at Urbana-Champaign", "location": { "country": "USA" } }, "email": "w-hwu@illinois.edu" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Finding the right reviewers to assess the quality of conference submissions is a time consuming process for conference organizers. Given the importance of this step, various automated reviewer-paper matching solutions have been proposed to alleviate the burden. Prior approaches, including bag-ofwords models and probabilistic topic models have been inadequate to deal with the vocabulary mismatch and partial topic overlap between a paper submission and the reviewer's expertise. Our approach, the common topic model, jointly models the topics common to the submission and the reviewer's profile while relying on abstract topic vectors. Experiments and insightful evaluations on two datasets demonstrate that the proposed method achieves consistent improvements compared to available state-of-the-art implementations of paper-reviewer matching.", "pdf_parse": { "paper_id": "D19-1049", "_pdf_hash": "", "abstract": [ { "text": "Finding the right reviewers to assess the quality of conference submissions is a time consuming process for conference organizers. Given the importance of this step, various automated reviewer-paper matching solutions have been proposed to alleviate the burden. Prior approaches, including bag-ofwords models and probabilistic topic models have been inadequate to deal with the vocabulary mismatch and partial topic overlap between a paper submission and the reviewer's expertise. Our approach, the common topic model, jointly models the topics common to the submission and the reviewer's profile while relying on abstract topic vectors. Experiments and insightful evaluations on two datasets demonstrate that the proposed method achieves consistent improvements compared to available state-of-the-art implementations of paper-reviewer matching.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "The peer review mechanism constitutes the bedrock of today's academic research landscape spanning submissions to conferences, journals, and funding bodies across numerous disciplines. 
Matching a paper (or a proposal) to an expert in the topic presented in the paper requires knowledge of the diverse topics of both the submission and the reviewer's expertise, in addition to knowledge of recent affiliations and co-authorships to resolve conflicts of interest. Considering the scale of current conference submissions, performing the task of paper-reviewer matching manually incurs significant overhead for the program committee (PC) and calls for automating the process.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Faced with a record number of paper submissions, essentially interdisciplinary in nature, the inadequacy of available reviewer matching systems to scale to the current needs is being expressed by many conference program committees. It is also notable that the approaches to address the challenges seem ad hoc and non-scalable, as described in a few of the PC blogs: \"Looking at the abstracts for many of the submissions it also quickly became clear that there was disparity in how authors chose topic keywords for submissions with many only using a single keyword and others using over half a dozen keywords. As such relying on the keywords for submissions became difficult. The combined effect of these problems made any automatic or semi-automatic assignment using HotCRP suboptimal...So, we chose to hand assign the papers.\" (Falsafi et al., 2018), and again in \"Our plan was to rely on the Toronto Paper Matching System (TPMS) in allocating papers to reviewers. Unfortunately, this system didn't prove as useful as we had hoped for (it requires more extensive reviewer profiles for optimal performance than what we had available) and the work had to rely largely on the manual effort...\" (ACL, 2019). Noting the urgent need to advance research to address this problem, we study this challenge of matching a paper with a reviewer from a list of potential reviewers for the purpose of assessing the quality of the submission.", "cite_spans": [ { "start": 723, "end": 745, "text": "(Falsafi et al., 2018)", "ref_id": "BIBREF7" }, { "start": 1087, "end": 1098, "text": "(ACL, 2019)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "* Omer Anjum and Hongyu Gong contributed equally.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Aside from the long precedent of research in the related area of expertise retrieval, namely expert finding and expert profiling (Balog et al., 2012), several recent attempts have been made to automate the process (Price and Flach, 2017). These include the Toronto Paper Matching System (Laurent and Zemel, 2013), the IEEE INFOCOM review assignment system (Li and Hou, 2016), and the online reviewer recommendation system. Central to these systems is a module that performs the paper-reviewer assignment, which can be broken down into its matching and constraint satisfaction constituents. The constraint satisfaction component typically handles the constraints that each paper be reviewed by at least a few reviewers, each reviewer be assigned no more than a few papers, and that reviewers not be assigned papers for which they have a conflict of interest. A second constituent is that of finding a reviewer from a list of reviewers based on the relevance of the person's expertise to the topic of the submission.
This latter aspect will be the focus of our study.", "cite_spans": [ { "start": 131, "end": 151, "text": "(Balog et al., 2012)", "ref_id": "BIBREF1" }, { "start": 217, "end": 240, "text": "(Price and Flach, 2017)", "ref_id": "BIBREF26" }, { "start": 292, "end": 317, "text": "(Laurent and Zemel, 2013)", "ref_id": "BIBREF16" }, { "start": 362, "end": 380, "text": "(Li and Hou, 2016)", "ref_id": "BIBREF18" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Available approaches to solve this matching problem can be broadly classified into the following categories (Price and Flach, 2017): a) Feature-based matching, where a set of topic keywords is collected for each paper and each reviewer, and the reviewers are then ranked by the number of keyword matches with the paper; b) Automatic feature construction with profile-based matching, where the relevance is decided by building automatic topic representations of both papers and reviewers; c) Bidding, a more recent method, where the reviewers are given access to all the papers and asked to bid on papers of their choice. The approaches used in this study are of the profile-based matching kind, where we rely on the use of abstract topic vectors and word embeddings (Mikolov et al., 2013b) to derive the semantic representations of the paper and the expertise area of the reviewer. This is a departure from the bag-of-words approach taken in related prior approaches, e.g., (Laurent and Zemel, 2013), which rely on automatic topic extraction using keywords - a ranked list of terms taken from the paper and the reviewer's profile.", "cite_spans": [ { "start": 108, "end": 131, "text": "(Price and Flach, 2017)", "ref_id": "BIBREF26" }, { "start": 778, "end": 801, "text": "(Mikolov et al., 2013b)", "ref_id": "BIBREF22" }, { "start": 985, "end": 1010, "text": "(Laurent and Zemel, 2013)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In general, we assume that a reviewer can be represented by a collection of the abstracts of her past publications (termed the reviewer's profile) and a submission by its abstract. While attempting to match the paper with the reviewer via their profile representations, the obvious difference in the document lengths gives rise to a mismatch due to the small overlap in vocabulary and a consequent scarcity of shared contexts for these overlapping terms. This is because, while the past publications of a reviewer may be sufficient to provide a reasonable context for a topical word, a submission abstract provides a very limited context for that topical word.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "To alleviate this problem of mismatched 'profiles', we use the idea of a shared topic space between the submission and the reviewer's profile.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In our experiments, we compare our approach of matching the profiles using abstract topic vectors with the hidden topic model (also a set of abstract topic vectors) (Gong et al., 2018). The two approaches primarily differ in the way the shared topic space is constructed, which we describe in Section 4.
We also include other baseline comparisons where the matching is done on the basis of common topic words (keywords) and word- or document-embeddings.", "cite_spans": [ { "start": 166, "end": 185, "text": "(Gong et al., 2018)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "This study makes the following contributions: (1) Instead of relying on a collection of topic words (keywords chosen by the authors or experts), our approach relies on abstract topic vectors to represent the common topics shared by the submission and the reviewer.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "(2) We propose a model that outperforms state-of-the-art approaches in the task of paper-reviewer matching on a benchmark dataset (Mimno and McCallum, 2007). Additionally, a field evaluation of our approach performed by the program committee of a tier-1 conference showed that it was highly useful.", "cite_spans": [ { "start": 129, "end": 155, "text": "(Mimno and McCallum, 2007)", "ref_id": "BIBREF23" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The paper-reviewer matching task lays the basis for the peer review process ubiquitous in academic conferences and journals. Existing automatic approaches can be broadly categorized into the following types according to the type of model used for comparing documents: feature-based models, probabilistic models, embedding-based models, graph models, and neural network models. Feature-based models. A list of keywords summarizing the topics of a submission is used as informative features in the matching process (Dumais and Nielsen, 1992; Basu et al., 1999). Automatic extraction of these features achieves higher efficiency, and one commonly used feature is a bag of words weighted by the words' TF-IDF scores (Jin et al., 2018; Tang et al., 2010; Li and Hou, 2016; Nguyen et al., 2018). Probabilistic models. The Latent Dirichlet Allocation (LDA) model is the most commonly used probabilistic model in expertise matching, where each topic is represented as a distribution over a given vocabulary and each document is a mixture of hidden topics (Blei et al., 2003). The popular Toronto Paper Matching System (TPMS) (Laurent and Zemel, 2013) uses LDA to generate the similarity score between a reviewer and a submission (Li and Hou, 2016). One limitation of LDA is that it does not make use of the potential semantic relatedness between words in a topic because of its assumption that words are generated independently (Xie et al., 2015). Variants of LDA have been proposed to incorporate notions of semantic coherence for more effective topic modeling (Hu and Tsujii, 2016; Das et al., 2015; Xun et al., 2017). Beyond having probabilistic models for topics, Jin et al.
sought to capture the temporal changes of reviewer interest as well as the stability of their interest trend with probabilistic modeling (Jin et al., 2017).", "cite_spans": [ { "start": 513, "end": 539, "text": "(Dumais and Nielsen, 1992;", "ref_id": "BIBREF6" }, { "start": 540, "end": 558, "text": "Basu et al., 1999)", "ref_id": "BIBREF2" }, { "start": 712, "end": 730, "text": "(Jin et al., 2018;", "ref_id": "BIBREF11" }, { "start": 731, "end": 749, "text": "Tang et al., 2010;", "ref_id": "BIBREF30" }, { "start": 750, "end": 767, "text": "Li and Hou, 2016;", "ref_id": "BIBREF18" }, { "start": 768, "end": 788, "text": "Nguyen et al., 2018)", "ref_id": "BIBREF24" }, { "start": 1048, "end": 1067, "text": "(Blei et al., 2003)", "ref_id": "BIBREF3" }, { "start": 1224, "end": 1242, "text": "(Li and Hou, 2016)", "ref_id": "BIBREF18" }, { "start": 1424, "end": 1442, "text": "(Xie et al., 2015)", "ref_id": "BIBREF33" }, { "start": 1559, "end": 1580, "text": "(Hu and Tsujii, 2016;", "ref_id": "BIBREF10" }, { "start": 1581, "end": 1598, "text": "Das et al., 2015;", "ref_id": "BIBREF4" }, { "start": 1599, "end": 1616, "text": "Xun et al., 2017)", "ref_id": "BIBREF34" }, { "start": 1814, "end": 1832, "text": "(Jin et al., 2017)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "In addition to their inherent limiting assumptions, such as the independence of semantically related words, probabilistic models, including LDA, require a large corpus to accurately identify the topics and topic distribution in each document, which can be problematic when applied to short documents, such as abstracts. Embedding-based models. Latent Semantic Indexing (LSI) proposes to represent a document as a single dense vector (Deerwester et al., 1990). The documents corresponding to reviewers and submissions can thus be transformed into their respective vector representations. The relevance of a reviewer to a given submission would then be measured using a distance metric in the vector space, such as the cosine similarity. Other approaches have used word or document embeddings as document representations in order to compare two documents.", "cite_spans": [ { "start": 428, "end": 453, "text": "(Deerwester et al., 1990)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Kou et al. derived topic vectors by treating each topic as a distribution of words (Kou et al., 2015). In comparison, the key improvement in our work is that the topics are derived based on word embeddings instead of word distributions. Moreover, we derive common topics for each submission-reviewer pair, and as a result, the topics can vary from pair to pair.", "cite_spans": [ { "start": 83, "end": 101, "text": "(Kou et al., 2015)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Another approach to capturing the similarity between documents is the Word Mover's Distance (WMD). It relies on the alignment of word pairs from two texts, and the textual dissimilarity is measured as the total distance between the vectors of the word pairs (Kusner et al., 2015).
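 For illustration, a minimal sketch of WMD with gensim, which implements this metric (the toy corpus and variable names here are ours, and gensim's wmdistance assumes an optimal-transport backend such as POT or pyemd is installed):

from gensim.models import Word2Vec

# Toy corpus; in practice the embeddings would be pre-trained on a large
# collection of publication abstracts.
corpus = [['reviewer', 'expertise', 'matching', 'model'],
          ['topic', 'model', 'for', 'matching', 'documents']]
model = Word2Vec(sentences=corpus, vector_size=100, min_count=1)

# Lower distance means more similar documents.
distance = model.wv.wmdistance(corpus[0], corpus[1])
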
More recently, a hidden topic model has been used to compare two documents via extracted abstract topic vectors, and it showed strong performance in comparing documents for semantic similarity (Gong et al., 2018).", "cite_spans": [ { "start": 266, "end": 287, "text": "(Kusner et al., 2015)", "ref_id": "BIBREF14" }, { "start": 483, "end": 502, "text": "(Gong et al., 2018)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Extending the models in this category, we propose the common topic model. Similar to the hidden topic model, we extract topic vectors using word embeddings and match documents at the topic level. The hidden topic model extracts topics relying purely on the reviewer profile, so the topic vectors can be regarded as a summary of the reviewer's research interests. In contrast, the common topic vectors are selected based on the knowledge of both the submission and the reviewer's profile, and they are expected to capture the topical overlap between the two. As we will show in the qualitative evaluation, the hidden topic model is likely to miss some important topics when a reviewer has broad research interests, resulting in an underestimation of the paper-reviewer relevance. The common topic model is able to overcome this limitation by extracting topics with reference to submissions. Graph models. All of the models mentioned above only assume access to the texts of submissions and reviewers' publications. Some works also make use of external information, such as co-authorship, to improve the matching performance. For instance, Liu et al. capture academic connections between reviewers using a graph model, and show that such information improves the matching quality. Each node in their graph model represents a reviewer (Liu et al., 2014). There is an edge between two nodes if the corresponding reviewers have co-authored papers, and the edge weight is the number of publications. This work also uses LDA to measure the similarity between the submission and the reviewer. Neural network models. Dense vectors are learned by neural networks as the semantic representation of documents (Socher et al., 2011; Le and Mikolov, 2014; Lin et al., 2015; Lau and Baldwin, 2016). When it comes to the task of expertise matching, the reviewer-submission relevance can thus be measured by the similarity of the vector representations of their textual descriptions.
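 As a brief, hedged illustration of this idea (the profiles, tags, and hyperparameters are invented for the example and are not the setup of any system cited above), reviewer-submission relevance via gensim's Doc2Vec could look like:

from numpy import dot
from numpy.linalg import norm
from gensim.models.doc2vec import Doc2Vec, TaggedDocument

# Each reviewer profile is one document tagged with the reviewer id.
profiles = {'r1': ['bayesian', 'inference', 'topic', 'models'],
            'r2': ['speech', 'recognition', 'neural', 'networks']}
docs = [TaggedDocument(words, [tag]) for tag, words in profiles.items()]
model = Doc2Vec(docs, vector_size=100, min_count=1, epochs=40)

submission = ['variational', 'bayesian', 'mixture', 'models']
v = model.infer_vector(submission)
for tag in profiles:
    u = model.dv[tag]  # learned profile vector
    print(tag, dot(u, v) / (norm(u) * norm(v)))  # cosine relevance
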
", "cite_spans": [ { "start": 1325, "end": 1343, "text": "(Liu et al., 2014)", "ref_id": "BIBREF20" }, { "start": 1691, "end": 1712, "text": "(Socher et al., 2011;", "ref_id": "BIBREF29" }, { "start": 1713, "end": 1734, "text": "Le and Mikolov, 2014;", "ref_id": "BIBREF17" }, { "start": 1735, "end": 1752, "text": "Lin et al., 2015;", "ref_id": "BIBREF19" }, { "start": 1753, "end": 1775, "text": "Lau and Baldwin, 2016)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "For the purpose of our study, we consider a reviewer's profile to be the concatenation of the abstracts from their previous publications. Let $m$ be the number of words in the reviewer's profile and let the normalized word embeddings of these words be stacked as a reviewer matrix $R \in \mathbb{R}^{d \times m}$, where $d$ is the embedding dimension. Since the embeddings are normalized, we have $\|R_i\|_2 = 1$ for each column $R_i$ of $R$. Next suppose that the submission is represented by the $n$-word sequence of its abstract. Similar to the case of the reviewer, we stack its normalized embeddings as a submission matrix $S \in \mathbb{R}^{d \times n}$. Also, for each column we have $\|S_j\|_2 = 1, \forall\, 1 \le j \le n$. Common topic selection. Inspired by the compositionality of embeddings (Gong et al., 2017) and the hidden topic model in document matching (Gong et al., 2018), our intention is to extract topics from reviewer profiles and submissions to summarize their topical overlap. We would like to remind the reader that the topics extracted are neither words nor distributions, but only abstractions, and constitute a set of numeric vectors that do not necessarily have a textual representation. To establish the connection between topics, reviewer profiles, and submissions, we assume that the topic vectors can be written as a linear combination of the embeddings of component words in either the reviewer profiles or the submissions. This assumption is supported by the geometric property of word embeddings that the weighted sum of the component word embeddings has been shown to be a robust and efficient representation of sentences and documents (Mikolov et al., 2013b). Intuitively, the extracted common topics would be highly correlated, in terms of semantic similarity, with a subset of the words in the reviewer profile or in the submission.", "cite_spans": [ { "start": 719, "end": 738, "text": "(Gong et al., 2017)", "ref_id": "BIBREF8" }, { "start": 787, "end": 806, "text": "(Gong et al., 2018)", "ref_id": "BIBREF9" }, { "start": 1590, "end": 1613, "text": "(Mikolov et al., 2013b)", "ref_id": "BIBREF22" } ], "ref_spans": [], "eq_spans": [], "section": "Modeling", "sec_num": "4" }, { "text": "Let both the reviewer and the submission have $K$ research topics, with each topic represented by a $d$-dimensional vector. This vector is an abstract topic vector and does not necessarily correspond to a specific word or a word distribution as in LDA (Blei et al., 2003). Suppose that these topic vectors of the reviewer are stacked as a matrix $P \in \mathbb{R}^{d \times K}$, and those of the submission as $Q \in \mathbb{R}^{d \times K}$.
Therefore, these matrices can be represented as linear combinations of the underlying word vectors.", "cite_spans": [ { "start": 248, "end": 267, "text": "(Blei et al., 2003)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Modeling", "sec_num": "4" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "P = Ra, \quad Q = Sb,", "eq_num": "(1)" } ], "section": "Modeling", "sec_num": "4" }, { "text": "where $a \in \mathbb{R}^{m \times K}$ and $b \in \mathbb{R}^{n \times K}$ are the coefficients of the linear combinations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Modeling", "sec_num": "4" }, { "text": "Our goal is to find the common topics shared by a given reviewer and a given submission, to account for the overlap of their research areas. We consider a pair of topics from a reviewer and a submission, respectively, to constitute a pair of common topics if they are semantically similar. For example, if the reviewer's research areas are machine learning and theory of computation, and the submission is about classification in natural language processing, then (machine learning, classification) can be regarded as a pair of common topics, while the other pairs, corresponding to the areas theory of computation and natural language processing, are much less similar. We use cosine similarity to measure the semantic similarity of two topic vectors*. The similarity $\mathrm{sim}(P_k, Q_k)$ between reviewer topic $P_k$ and submission topic $Q_k$ is shown below.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Modeling", "sec_num": "4" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\mathrm{sim}(P_k, Q_k) = \frac{P_k^T Q_k}{\|P_k\| \cdot \|Q_k\|}.", "eq_num": "(2)" } ], "section": "Modeling", "sec_num": "4" }, { "text": "For $K$ pairs of topic vectors $\{P_k, Q_k\}_{k=1}^{K}$, their similarity is the sum of the pairwise similarities:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Modeling", "sec_num": "4" }, { "text": "\mathrm{sim}(P, Q) = \sum_{k=1}^{K} \mathrm{sim}(P_k, Q_k).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Modeling", "sec_num": "4" }, { "text": "(3)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Modeling", "sec_num": "4" }, { "text": "This in turn translates to identifying the common research topics between the reviewer and the submission, i.e., we need to find $K$ such pairs of topics that have the maximum similarity. Based on the discussion above, the approach of common topic extraction can be formulated as an optimization problem:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Modeling", "sec_num": "4" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\max_{a,b} \; \mathrm{sim}(P, Q) \quad \text{s.t.} \quad P = Ra, \; Q = Sb, \; P^T P = Q^T Q = I", "eq_num": "(4)" } ], "section": "Modeling", "sec_num": "4" }, { "text": "The first two constraints are based on the linear assumption shown in Eq. 1. Without loss of generality, we add the third constraint in Eq. 4 that the topic vectors be orthonormal, to avoid generating multiple similar topic vectors.
The closed-form solution to this optimization problem can be derived via singular value decomposition on the correlation matrix of R and S (Wegelin, 2000).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Modeling", "sec_num": "4" }, { "text": "Let topic vectors $P^*$ and $Q^*$ be the optimal solution, both describing the common topics shared by the reviewer and the submission. In the following discussion, we use $P^*$ as the common topic vectors. Common topic scoring. To further quantify the reviewer-submission relevance, we need to evaluate how significant these common topics are for the reviewer and the submission, respectively. Reusing the example where a reviewer's areas are machine learning and theory of computation, we know that machine learning is the common topic between the reviewer and the submission. If the topic of machine learning were only a small part of the reviewer's publications, the reviewer may not be a good match for the submission, since the reviewer is more of an expert in theory of computation than in machine learning.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Modeling", "sec_num": "4" }, { "text": "To evaluate how well the topics reflect a reviewer's expertise, we define the importance of the common topics $P^*$ for both the reviewer and the submission. Consider the vector of the $i$-th word in the reviewer's profile, $R_i$, and the $k$-th topic vector $P^*_k$. The relevance between $R_i$ and $P^*_k$ is defined as their squared cosine similarity.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Modeling", "sec_num": "4" }, { "text": "\mathrm{rel}(R_i, P^*_k) = \cos^2(R_i, P^*_k) = (R_i^T P^*_k)^2. \quad (5)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Modeling", "sec_num": "4" }, { "text": "Note that we do not use the cosine similarity as is, since $R_i$ and $P^*_k$ might be negatively correlated and the cosine similarity can be negative. Instead, we use the square of the cosine similarity to reflect the strength of their correlation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Modeling", "sec_num": "4" }, { "text": "The relevance between word $R_i$ and a set of topic vectors $P^*$ is defined as the sum of the relevance between the word and each topic vector.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Modeling", "sec_num": "4" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\mathrm{rel}(R_i, P^*) = \sum_{k=1}^{K} \mathrm{rel}(R_i, P^*_k).", "eq_num": "(6)" } ], "section": "Modeling", "sec_num": "4" }, { "text": "We can think of the word vector $R_i$ as being projected onto the $K$-dimensional linear subspace spanned by the topic vectors in $P^*$. If $R_i$ lies in this linear subspace, then it can be represented as a linear combination of the topic vectors; in this case, $\mathrm{rel}(R_i, P^*)$ achieves its maximum of 1. If the word vector is orthogonal to all topic vectors in $P^*$, the relevance attains its minimum of 0. Thus, $\mathrm{rel}(R_i, P^*)$ ranges from 0 to 1.
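 To make the procedure concrete, the following is a minimal numpy sketch of one standard way to obtain a closed-form solution of Eq. 4 and to compute the relevance scores (the averaging and harmonic-mean scoring used at the end are defined next); the function names are ours, and K is assumed not to exceed the rank of either matrix. This is an illustration, not the released implementation.

import numpy as np

def common_topics(R, S, K):
    # Solve Eq. 4: find orthonormal P (in the column space of R) and Q
    # (in the column space of S) maximizing the summed pairwise similarity.
    Ur = np.linalg.svd(R, full_matrices=False)[0]  # orthonormal basis of col(R)
    Us = np.linalg.svd(S, full_matrices=False)[0]  # orthonormal basis of col(S)
    W, sigma, Vt = np.linalg.svd(Ur.T @ Us)        # SVD of the cross-correlation
    P = Ur @ W[:, :K]   # reviewer-side common topic vectors (d x K)
    Q = Us @ Vt[:K].T   # submission-side common topic vectors (d x K)
    return P, Q

def relevance(X, P):
    # Eqs. 5-7: average over words of the summed squared cosines between
    # each unit-norm word vector (a column of X) and each topic vector.
    return float(np.mean(np.sum((X.T @ P) ** 2, axis=1)))

def match_score(R, S, K=10):
    P, _ = common_topics(R, S, K)
    rel_r, rel_s = relevance(R, P), relevance(S, P)
    return 2 * rel_r * rel_s / (rel_r + rel_s)  # harmonic mean, Eq. 8
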
Furthermore, we define the relevance between the reviewer and the topics as the average of the relevance between the words and the topics.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Modeling", "sec_num": "4" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\mathrm{rel}(R, P^*) = \frac{1}{m} \sum_{i=1}^{m} \mathrm{rel}(R_i, P^*).", "eq_num": "(7)" } ], "section": "Modeling", "sec_num": "4" }, { "text": "The reviewer-topic relevance $\mathrm{rel}(R, P^*)$ also ranges from 0 to 1. Similarly, we measure the relevance between a submission and a set of common topics, $\mathrm{rel}(S, P^*)$, by measuring the relevance between the words in the submission and the common topics. The submission-topic relevance reflects the importance of the common topics for a submission. We define the reviewer-submission matching score as the harmonic mean (f-measure) of the reviewer-topic and submission-topic relevance (Powers, 2015).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Modeling", "sec_num": "4" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\mathrm{rel}(R, S) = \frac{2 \cdot \mathrm{rel}(R, P^*) \cdot \mathrm{rel}(S, P^*)}{\mathrm{rel}(R, P^*) + \mathrm{rel}(S, P^*)}.", "eq_num": "(8)" } ], "section": "rel(R, S)", "sec_num": null }, { "text": "The reviewer-submission relevance is high when the common topic vectors $P^*$ are highly relevant to both the reviewer and the submission.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "rel(R, S)", "sec_num": null }, { "text": "It indicates that the submission has a substantial overlap with the reviewer's research area, and that the reviewer is considered to be a good match for the submission.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "rel(R, S)", "sec_num": null }, { "text": "In this section, we empirically compare our proposed common topic model approach against a variety of models in the task of expertise matching.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments and Results", "sec_num": "5" }, { "text": "For our experiments, we use the two datasets described below. NIPS dataset. This is a benchmark dataset described in (Mimno and McCallum, 2007) and commonly used in the evaluation of expertise matching. It consists of 148 NIPS papers accepted in 2006 and abstracts from the publications of 364 reviewers. It includes annotations from 9 annotators on the relevance of 650 reviewer-paper pairs. Each pair is rated on a scale from 0 to 3, where \"0\" means irrelevant, \"1\" means slightly relevant, \"2\" means relevant, and \"3\" means very relevant. A new dataset. Our proposed paper-reviewer matching system was applied to a tier-1 conference in the area of computer architecture. We created a new dataset for the evaluation of expertise matching from the submissions to this conference. We first collected a pool of 2284 candidate reviewers with publications in top conferences of computer architecture. A reviewer selection policy was adopted by the conference program committee to select reviewers still active in relevant areas.
Reviewers were excluded if 1) they started publishing 40 years ago but had no publications in the last ten years; or 2) they had no publications in the last ten years and had fewer than three papers before that.", "cite_spans": [ { "start": 117, "end": 143, "text": "(Mimno and McCallum, 2007)", "ref_id": "BIBREF23" } ], "ref_spans": [], "eq_spans": [], "section": "Dataset", "sec_num": "5.1" }, { "text": "The publications of these reviewers were collected from Microsoft Academic Graph (Sinha et al., 2015). Each reviewer had at least one publication, and some reviewers had as many as 34 publications. Again, the abstracts were used as the reviewers' profiles.", "cite_spans": [ { "start": 81, "end": 101, "text": "(Sinha et al., 2015)", "ref_id": "BIBREF28" } ], "ref_spans": [], "eq_spans": [], "section": "Dataset", "sec_num": "5.1" }, { "text": "We then used our proposed common topic model to assist the program committee of the conference on computer architecture, and recommended the most relevant reviewers for all submissions to the conference. We randomly selected 20 submissions and, with the help of the committee, collected feedback from 33 reviewers on their relevance to the assigned submissions. These 33 reviewers were among the top reviewers recommended by our system for each of the 20 submissions. The relevance was rated on a scale from 1 to 5, where a score of \"1\" meant that the paper was not relevant at all, \"2\" meant that the reviewer had passing familiarity with the topic of the submission, \"3\" meant that the reviewer knew the material well, \"4\" meant that the reviewer had a deep understanding of the submission, and \"5\" meant that the reviewer was a champion candidate for the submission.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dataset", "sec_num": "5.1" }, { "text": "We include previous approaches to paper-reviewer matching as our baselines.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Baselines", "sec_num": "5.2" }, { "text": "\u2022 APT 200. Author-Person-Topic (Mimno and McCallum, 2007) is a generative probabilistic topic model which groups the documents of an author into different clusters with the author's topic distribution. Clusters represent different areas of a reviewer's research. \u2022 Single Doc. The Single Doc model is a probabilistic model which takes the idea of language modeling and estimates the likelihood that a submission is assigned to a reviewer given the reviewer's previous works (Mimno and McCallum, 2007). \u2022 Latent Dirichlet Allocation (LDA). LDA and its variants are the most popular topic models in expertise matching systems (Blei et al., 2003). LDA models assume that each document is a mixture of topics, where each topic is a multinomial distribution over the words. \u2022 Hierarchical Dirichlet Process (HDP). The HDP model is an extension of LDA (Teh et al., 2006). It is a non-parametric mixed-membership Bayesian model with a variable number of topics. It is effective in choosing the number of topics to characterize a given corpus. \u2022 Random Walk with Restart (RWR). RWR is a graph model with sparsity constraints in expertise matching (Liu et al., 2014). It relies on LDA to capture reviewer-submission relevance and also takes diversity into consideration in the matching process. \u2022 Word Mover's Distance (WMD). WMD is a distance metric between two documents on the basis of pre-trained word embeddings (Kusner et al., 2015).
It calculates the dissimilarity between two documents, measured by the embedding distance of aligned words in these documents.", "cite_spans": [ { "start": 31, "end": 57, "text": "(Mimno and McCallum, 2007)", "ref_id": "BIBREF23" }, { "start": 470, "end": 495, "text": "(Mimno and McCallum, 2007", "ref_id": "BIBREF23" }, { "start": 621, "end": 640, "text": "(Blei et al., 2003)", "ref_id": "BIBREF3" }, { "start": 839, "end": 857, "text": "(Teh et al., 2006)", "ref_id": "BIBREF31" }, { "start": 1131, "end": 1149, "text": "(Liu et al., 2014)", "ref_id": "BIBREF20" }, { "start": 1401, "end": 1422, "text": "(Kusner et al., 2015)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Baselines", "sec_num": "5.2" }, { "text": "\u2022 Hidden Topic Model. This model proposes to learn hidden topic vectors to measure document similarity based on word embeddings (Gong et al., 2018). \u2022 Doc2Vec. Doc2Vec is a neural network model which trains document embeddings to predict the component words in the documents (Le and Mikolov, 2014). In expertise matching, the Doc2Vec model is pre-trained on the corpus consisting of reviewers' previous publications.", "cite_spans": [ { "start": 286, "end": 304, "text": "(Gong et al., 2018", "ref_id": "BIBREF9" }, { "start": 430, "end": 452, "text": "(Le and Mikolov, 2014)", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "Method", "sec_num": null }, { "text": "We use the trained model to generate representations for reviewers and submissions, respectively. The reviewer-submission relevance is quantified by the cosine similarity of their embeddings.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Method", "sec_num": null }, { "text": "Setting. Since our model relies on word embeddings, we pre-train embeddings on all papers published in the NIPS conference until 2017 for the matching task on the NIPS dataset. Similarly, for our new dataset, we collected a corpus of publications until 2018 from top computer architecture conferences for embedding training. The embedding dimension was set to 100, and these word embeddings were also used in the two embedding-based baselines: Word Mover's Distance and the hidden topic model. For a fair comparison, the corpora used for word embedding training were also used to train the Doc2Vec model to generate document embeddings.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Method", "sec_num": null }, { "text": "The NIPS dataset provides ground-truth relevance for reviewer-submission pairs, and the relevance scales from 0 to 3. A score of 0 is assigned when the reviewer is considered not relevant, and a score of 3 is assigned when the reviewer is considered highly relevant. We set a relevance threshold of 2, and considered reviewers with a score equal to or higher than this threshold to be relevant reviewers for the given submission. In our matching system, we sorted reviewers in decreasing order of the predicted relevance score for a given submission. Evaluation Metric. Precision at k (P@k) is a commonly used ranking metric on the NIPS dataset. P@k is defined to be the percentage of relevant reviewers in the top-k recommendations made by the model for a submission. It is likely that the top-k recommendations made by the model contain reviewers whose relevance information is not available in the ground truth. To address this issue, we first discard reviewers that do not have relevance information prior to calculating P@k.
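 As a concrete illustration, P@k with this filtering step can be computed as in the following sketch (the function and variable names are ours and hypothetical, not from any released evaluation script):

def precision_at_k(ranked_reviewers, gold, k, threshold=2):
    # gold maps a reviewer to its annotated relevance (0-3); reviewers
    # without ground-truth annotations are discarded before computing P@k.
    annotated = [r for r in ranked_reviewers if r in gold]
    top_k = annotated[:k]
    relevant = sum(1 for r in top_k if gold[r] >= threshold)
    return relevant / k

# Example ranking produced by a matching model for one submission.
ranking = ['r7', 'r2', 'r9', 'r4', 'r1', 'r5']
gold = {'r2': 3, 'r4': 2, 'r1': 0, 'r5': 1}
print(precision_at_k(ranking, gold, k=2))  # top-2 are r2 and r4, so P@2 = 1.0
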
In our experiments, we set k to be 5 and 10. We report the average P@k over all submissions in Table 1 . We note that not all submissions in the NIPS dataset have the same number of relevant reviewers, and a failure to account for this discrepancy would negatively impact the reported performance of a system. For example, a submission with only one relevant reviewer would result in a P@5 no higher than 20% for any model. In order to take this discrepancy into consideration, we report the performance only on submissions with at least two relevant reviewers in the \"GT2\" columns, and on submissions with at least three relevant reviewers in the \"GT3\" columns. In \"GT1\", we report the performance without making this distinction.", "cite_spans": [], "ref_spans": [ { "start": 1136, "end": 1143, "text": "Table 1", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Results on NIPS Data", "sec_num": "5.3" }, { "text": "The reviewer-submission matching results of our model on the NIPS dataset are presented in Table 1 alongside those of our chosen baselines. We note that the results for APT 200 and Single Doc were only available for GT1, and we report them as such. Some approaches, including the Common Topic Model, the Hidden Topic Model, and LDA, required a hyperparameter (the number of topics) to be specified. We performed experiments on the NIPS data with different numbers of topics, as reported in Table 1 . As is shown, our proposed approach consistently outperforms the strong baselines. We also note that the Hidden Topic Model and Doc2Vec are competitive approaches in expertise matching compared against the probabilistic models.", "cite_spans": [], "ref_spans": [ { "start": 91, "end": 98, "text": "Table 1", "ref_id": "TABREF1" }, { "start": 456, "end": 463, "text": "Table 1", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Results on NIPS Data", "sec_num": "5.3" }, { "text": "Our proposed approach has been used to assist in the paper-reviewer matching process of a tier-1 computer architecture conference. We evaluated our approach on a new dataset constructed from reviewers' feedback on their assigned submissions. Based on the optimal number of topics on the NIPS dataset, we set the number of common topics to be 10 in this experiment. We report the percentage of reviewers whose reported expertise level falls in the given range in Table 2. We note that all recommendations made by our system are reasonable, considering that all reviewers had expertise levels no lower than 2. The majority (87.9%) of reviewers reported that they were familiar with the topics of the submissions assigned to them, and 63.6% of the reviewers had a deep understanding of the submissions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results on the New Dataset", "sec_num": "5.4" }, { "text": "We perform a qualitative analysis on the NIPS dataset to analyze how the different algorithms differ in expertise matching. For the clarity of our discussion, we sample a submission whose abstract is shown in Table 4 . We consider five models: common topic modeling (CT), hidden topic modeling (HT), LDA, Doc2Vec, and WMD. We list the reviewers who were considered top candidates for this submission by the five models in Table 3 . For the analyses, we used research topics from the publications of the reviewers as well as their relevance scores assigned by human annotators (i.e., their TREC scores in the NIPS dataset).
Reviewers are sorted in decreasing order of their relevance to the submission by the five models. For example, rank 1 corresponds to the highest relevance. In Table 3, we also present the rank of each reviewer given by each of the models.", "cite_spans": [], "ref_spans": [ { "start": 208, "end": 215, "text": "Table 4", "ref_id": null }, { "start": 419, "end": 426, "text": "Table 3", "ref_id": null }, { "start": 771, "end": 778, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "Error Analysis", "sec_num": "6" }, { "text": "Common topic model. According to the common topic model, reviewers 3, 4, and 5 are included as its top-3 recommendations, but we note that it ranks reviewer 3 higher than reviewers 4 and 5. The relevance scoring of the common topic model is based on the relevance between the common topic \"Bayesian method\" and the reviewers' profiles. Since reviewer 3 is more focused on Bayesian models, this reviewer's topic-reviewer relevance is higher than that of reviewers 4 and 5, who have broader research interests beyond Bayesian models and more publications. One limitation of the common topic model reflected in this case is that it does not capture the authority and experience of reviewers.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Error Analysis", "sec_num": "6" }, { "text": "Hidden topic model. It incorrectly considered reviewer 1 more relevant to the submission than reviewer 5. We note that reviewer 5 works on a broad set of research topics ranging from Bayesian models to active learning. Since the hidden topic model extracts a reviewer's topics based on topic importance without any knowledge of the submission, it is likely that the Bayesian topic was not selected among the representative hidden topics, which results in a low relevance of reviewer 5 to the given submission.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Error Analysis", "sec_num": "6" }, { "text": "LDA model. LDA assigns higher relevance to reviewer 1 than to reviewer 4. Reviewer 1 used a Bayesian approach, whereas it was not his research focus according to his publications. Reviewer 4 had done extensive research in general graphical models, including Bayesian models. We observed that LDA fails to capture the relevance between graphical models and Bayesian models since it ignores the semantic similarity between words.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Error Analysis", "sec_num": "6" }, { "text": "Reviewer | TREC | Research topics | CT | HT | LDA | Doc2Vec | WMD
1 | 1 | Speech recognition with Bayesian approach, Neural network | 6 | 2 | 4 | 1 | 7
2 | 0 | Online learning, Sequential prediction, Bayes point machine | 10 | 10 | 8 | 9 | 1
3 | 2 | Bayesian network, Variational Bayes estimation, Mixture models | 1 | 4 | 3 | 2 | 9
4 | 3 | Variational method, Bayesian learning, Markov model | 2 | 3 | 9 | 4 | 8
5 | 3 | Bayesian learning, Variational method, Active learning | 3 | 5 | 1 | 7 | 6
Table 3: Examples of reviewers and their relevance to the submission as ranked by the different algorithms.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Error Analysis", "sec_num": "6" }, { "text": "Dirichlet Process (DP) mixture models are candidates for clustering applications where the number of clusters is unknown a priori. [...] The speedup is achieved by incorporating kd-trees into a variational Bayesian algorithm for DP mixture [...] 
Table 4: An example of an abstract from a submission.", "cite_spans": [ { "start": 131, "end": 136, "text": "[...]", "ref_id": null } ], "ref_spans": [ { "start": 246, "end": 253, "text": "Table 4", "ref_id": null } ], "eq_spans": [], "section": "Error Analysis", "sec_num": "6" }, { "text": "Doc2Vec model. Doc2Vec assigned the highest relevance to reviewer 1 among all reviewers. The document representation it generates for reviewer 1's profile is similar to the representation for the submission, possibly because the keywords \"Bayesian\" and \"mixture\" in the submission also occur frequently in the profile. This suggests that the Doc2Vec model might be limited to lexical overlap.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Error Analysis", "sec_num": "6" }, { "text": "WMD. Reviewer 2 is included as WMD's top recommendation, whereas the research focus of reviewer 2 is sequential prediction, which is irrelevant to the submission. Moreover, the actually relevant reviewers 4 and 5 were excluded from WMD's top recommendations. This may have resulted from WMD's word-level similarity measure. Reviewer 2's publications had some lexical overlap with the submission (e.g., the words \"Bayes\", \"algorithm\", and \"learning\", which have high frequency in the submission). WMD tends to assign high relevance scores due to such lexical overlap.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Error Analysis", "sec_num": "6" }, { "text": "This study used a basic version of a reviewer's profile, namely the concatenation of the abstracts from their previous publications. A concrete direction for future work would be to consider enhancements in representing reviewers' profiles. Such efforts could consider, for instance, the temporal variation of research interests in order to capture the relevance of a given reviewer to a given topic based on the recency of the contributions to a given area. Other efforts could involve the use of a variable number of research topics for each reviewer and exploring ways to render reviewer profiles human-interpretable.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Future Work", "sec_num": "7" }, { "text": "We proposed an automated reviewer-paper matching algorithm that jointly finds the common research topics between submissions and reviewers' publications. Our model is based on word embeddings and efficiently captures the reviewer-paper relevance. It is robust to cases of vocabulary mismatch and partial topic overlap between submissions and reviewers - factors that have posed problems for previous approaches. The common topic model showed strong empirical performance on a benchmark and a newly collected dataset.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "8" }, { "text": "System Overview. The different stages of our system, together called PaRe, are briefly explained below: Data collection. At this stage, we collect previous publications from one or more tier-1 conferences in the same domain as the one to which reviewer-submission matching is applied.
This data is used to create our pool of candidate reviewers and domain knowledge of the research area. The source of the data is Microsoft Academic Graph (MAG) (Sinha et al., 2015). All the abstracts of a reviewer are concatenated as one document, which is then used to profile the reviewer. Reviewers' profiles reflect their research topics, which are later used in the reviewer-submission matching process. Data processing. Since our proposed model is based on word embeddings, we pre-train embeddings using the CBOW model of word2vec on the collected publications (Mikolov et al., 2013a). The dense word representations are intended to capture domain-specific lexical semantics. The data collection and processing are detailed in Section 5. Reviewer-submission matching. A common topic modeling approach is proposed in this work to match reviewers with submissions. The model compares the abstracts of submissions and reviewers' past abstracts during the matching process to decide the reviewer-submission relevance by finding their common research topics. The algorithm is described in Section 4.", "cite_spans": [ { "start": 2055, "end": 2075, "text": "(Sinha et al., 2015)", "ref_id": "BIBREF28" }, { "start": 2459, "end": 2482, "text": "(Mikolov et al., 2013a)", "ref_id": "BIBREF21" } ], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "* We leave it to future work to experiment with other useful measures of semantic similarity.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "This work is supported by the IBM-ILLINOIS Center for Cognitive Computing Systems Research (C3SR) - a research collaboration as part of the IBM AI Horizons Network. We thank the anonymous EMNLP reviewers for their constructive suggestions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "What's new, different and challenging in ACL 2019?", "authors": [], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "PC Chairs ACL. 2019. What's new, different and challenging in ACL 2019? http://acl2019pcblog.fileli.unipi.it/?p=156. Accessed: 2019-05-19.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Expertise retrieval", "authors": [ { "first": "Krisztian", "middle": [], "last": "Balog", "suffix": "" }, { "first": "Yi", "middle": [], "last": "Fang", "suffix": "" }, { "first": "Maarten", "middle": [], "last": "De Rijke", "suffix": "" }, { "first": "Pavel", "middle": [], "last": "Serdyukov", "suffix": "" }, { "first": "Luo", "middle": [], "last": "Si", "suffix": "" } ], "year": 2012, "venue": "Foundations and Trends in Information Retrieval", "volume": "6", "issue": "2-3", "pages": "127--256", "other_ids": {}, "num": null, "urls": [], "raw_text": "Krisztian Balog, Yi Fang, Maarten de Rijke, Pavel Serdyukov, Luo Si, et al. 2012. Expertise retrieval. Foundations and Trends in Information Retrieval, 6(2-3):127-256.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Recommending papers by mining the web", "authors": [ { "first": "Chumki", "middle": [], "last": "Basu", "suffix": "" }, { "first": "Haym", "middle": [], "last": "Hirsh", "suffix": "" }, { "first": "William", "middle": [ "W" ], "last": "Cohen", "suffix": "" }, { "first": "Craig", "middle": [], "last": "Nevill-Manning", "suffix": "" } ], "year": 1999, "venue": "Proceedings of the IJCAI99 Workshop on Learning about Users", "volume": "", "issue": "", "pages": "1--11", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chumki Basu, Haym Hirsh, William W. Cohen, and Craig Nevill-Manning. 1999. Recommending papers by mining the web. In Proceedings of the IJCAI99 Workshop on Learning about Users, pages 1-11.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Latent Dirichlet allocation", "authors": [ { "first": "David", "middle": [ "M" ], "last": "Blei", "suffix": "" }, { "first": "Andrew", "middle": [ "Y" ], "last": "Ng", "suffix": "" }, { "first": "Michael", "middle": [ "I" ], "last": "Jordan", "suffix": "" } ], "year": 2003, "venue": "The Journal of Machine Learning Research", "volume": "3", "issue": "", "pages": "993--1022", "other_ids": {}, "num": null, "urls": [], "raw_text": "David M. Blei, Andrew Y. Ng, and Michael I. Jordan. 2003. Latent Dirichlet allocation. 
The Journal of Machine Learning Research, 3:993-1022.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Gaussian LDA for topic models with word embeddings", "authors": [ { "first": "Rajarshi", "middle": [], "last": "Das", "suffix": "" }, { "first": "Manzil", "middle": [], "last": "Zaheer", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Dyer", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing", "volume": "", "issue": "", "pages": "795--804", "other_ids": { "DOI": [ "10.3115/v1/P15-1077" ] }, "num": null, "urls": [], "raw_text": "Rajarshi Das, Manzil Zaheer, and Chris Dyer. 2015. Gaussian LDA for topic models with word embeddings. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing, pages 795-804.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Indexing by latent semantic analysis", "authors": [ { "first": "Scott", "middle": [], "last": "Deerwester", "suffix": "" }, { "first": "Susan", "middle": [ "T" ], "last": "Dumais", "suffix": "" }, { "first": "George", "middle": [ "W" ], "last": "Furnas", "suffix": "" }, { "first": "Thomas", "middle": [ "K" ], "last": "Landauer", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Harshman", "suffix": "" } ], "year": 1990, "venue": "Journal of the American Society for Information Science", "volume": "41", "issue": "6", "pages": "391--407", "other_ids": {}, "num": null, "urls": [], "raw_text": "Scott Deerwester, Susan T. Dumais, George W. Furnas, Thomas K. Landauer, and Richard Harshman. 1990. Indexing by latent semantic analysis. Journal of the American Society for Information Science, 41(6):391-407.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Automating the assignment of submitted manuscripts to reviewers", "authors": [ { "first": "Susan", "middle": [ "T" ], "last": "Dumais", "suffix": "" }, { "first": "Jakob", "middle": [], "last": "Nielsen", "suffix": "" } ], "year": 1992, "venue": "Proceedings of the 15th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR '92", "volume": "", "issue": "", "pages": "233--244", "other_ids": { "DOI": [ "10.1145/133160.133205" ] }, "num": null, "urls": [], "raw_text": "Susan T. Dumais and Jakob Nielsen. 1992. Automating the assignment of submitted manuscripts to reviewers. In Proceedings of the 15th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR '92, pages 233-244, New York, NY, USA. ACM.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "ISCA'18 review process reflections", "authors": [ { "first": "Babak", "middle": [], "last": "Falsafi", "suffix": "" }, { "first": "Mario", "middle": [], "last": "Drumond", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Sutherland", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Babak Falsafi, Mario Drumond, and Mark Sutherland. 2018. ISCA'18 review process reflections. https://www.sigarch.org/isca18-review-process-reflections/. 
Accessed: 2019-05-19.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Geometry of compositionality", "authors": [ { "first": "Hongyu", "middle": [], "last": "Gong", "suffix": "" }, { "first": "Suma", "middle": [], "last": "Bhat", "suffix": "" }, { "first": "Pramod", "middle": [], "last": "Viswanath", "suffix": "" } ], "year": 2017, "venue": "Thirty-First AAAI Conference on Artificial Intelligence", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hongyu Gong, Suma Bhat, and Pramod Viswanath. 2017. Geometry of compositionality. In Thirty-First AAAI Conference on Artificial Intelligence.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Document similarity for texts of varying lengths via hidden topics", "authors": [ { "first": "Hongyu", "middle": [], "last": "Gong", "suffix": "" }, { "first": "Tarek", "middle": [], "last": "Sakakini", "suffix": "" }, { "first": "Suma", "middle": [], "last": "Bhat", "suffix": "" }, { "first": "Jinjun", "middle": [], "last": "Xiong", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "2341--2351", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hongyu Gong, Tarek Sakakini, Suma Bhat, and Jinjun Xiong. 2018. Document similarity for texts of varying lengths via hidden topics. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 2341-2351.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "A latent concept topic model for robust topic inference using word embeddings", "authors": [ { "first": "Weihua", "middle": [], "last": "Hu", "suffix": "" }, { "first": "Jun'ichi", "middle": [], "last": "Tsujii", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics", "volume": "2", "issue": "", "pages": "380--386", "other_ids": { "DOI": [ "10.18653/v1/P16-2062" ] }, "num": null, "urls": [], "raw_text": "Weihua Hu and Jun'ichi Tsujii. 2016. A latent concept topic model for robust topic inference using word embeddings. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 380-386, Berlin, Germany. Association for Computational Linguistics.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Author-subject-topic model for reviewer recommendation", "authors": [ { "first": "Jian", "middle": [], "last": "Jin", "suffix": "" }, { "first": "Qian", "middle": [], "last": "Geng", "suffix": "" }, { "first": "Haikun", "middle": [], "last": "Mou", "suffix": "" }, { "first": "Chong", "middle": [], "last": "Chen", "suffix": "" } ], "year": 2018, "venue": "Journal of Information Science", "volume": "45", "issue": "4", "pages": "554--570", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jian Jin, Qian Geng, Haikun Mou, and Chong Chen. 2018. Author-subject-topic model for reviewer recommendation. 
Journal of Information Science, 45(4):554-570.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Integrating the trend of research interest for reviewer assignment", "authors": [ { "first": "Jian", "middle": [], "last": "Jin", "suffix": "" }, { "first": "Qian", "middle": [], "last": "Geng", "suffix": "" }, { "first": "Qian", "middle": [], "last": "Zhao", "suffix": "" }, { "first": "Lixue", "middle": [], "last": "Zhang", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 26th International Conference on World Wide Web Companion", "volume": "", "issue": "", "pages": "1233--1241", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jian Jin, Qian Geng, Qian Zhao, and Lixue Zhang. 2017. Integrating the trend of research interest for reviewer assignment. In Proceedings of the 26th International Conference on World Wide Web Companion, pages 1233-1241. International World Wide Web Conferences Steering Committee.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "A topic-based reviewer assignment system", "authors": [ { "first": "Ngai Meng", "middle": [], "last": "Kou", "suffix": "" }, { "first": "Nikos", "middle": [], "last": "Mamoulis", "suffix": "" }, { "first": "Yuhong", "middle": [], "last": "Li", "suffix": "" }, { "first": "Ye", "middle": [], "last": "Li", "suffix": "" }, { "first": "Zhiguo", "middle": [], "last": "Gong", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the VLDB Endowment", "volume": "8", "issue": "12", "pages": "1852--1855", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ngai Meng Kou, Nikos Mamoulis, Yuhong Li, Ye Li, Zhiguo Gong, et al. 2015. A topic-based reviewer assignment system. Proceedings of the VLDB Endowment, 8(12):1852-1855.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "From word embeddings to document distances", "authors": [ { "first": "Matt", "middle": [ "J" ], "last": "Kusner", "suffix": "" }, { "first": "Yu", "middle": [], "last": "Sun", "suffix": "" }, { "first": "Nicholas", "middle": [ "I" ], "last": "Kolkin", "suffix": "" }, { "first": "Kilian", "middle": [ "Q" ], "last": "Weinberger", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 32nd International Conference on International Conference on Machine Learning", "volume": "37", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Matt J. Kusner, Yu Sun, Nicholas I. Kolkin, and Kilian Q. Weinberger. 2015. From word embeddings to document distances. In Proceedings of the 32nd International Conference on International Conference on Machine Learning - Volume 37.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "An empirical evaluation of doc2vec with practical insights into document embedding generation", "authors": [ { "first": "Jey Han", "middle": [], "last": "Lau", "suffix": "" }, { "first": "Timothy", "middle": [], "last": "Baldwin", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1607.05368" ] }, "num": null, "urls": [], "raw_text": "Jey Han Lau and Timothy Baldwin. 2016. An empirical evaluation of doc2vec with practical insights into document embedding generation.
arXiv preprint arXiv:1607.05368.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "The Toronto paper matching system: An automated paper-reviewer assignment system", "authors": [ { "first": "Charlin", "middle": [], "last": "Laurent", "suffix": "" }, { "first": "Richard", "middle": [ "S" ], "last": "Zemel", "suffix": "" } ], "year": 2013, "venue": "Proceedings of 30th International Conference on Machine Learning", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Charlin Laurent and Richard S. Zemel. 2013. The Toronto paper matching system: An automated paper-reviewer assignment system. In Proceedings of 30th International Conference on Machine Learning.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Distributed representations of sentences and documents", "authors": [ { "first": "Quoc", "middle": [], "last": "Le", "suffix": "" }, { "first": "Tomas", "middle": [], "last": "Mikolov", "suffix": "" } ], "year": 2014, "venue": "International conference on machine learning", "volume": "", "issue": "", "pages": "1188--1196", "other_ids": {}, "num": null, "urls": [], "raw_text": "Quoc Le and Tomas Mikolov. 2014. Distributed representations of sentences and documents. In International conference on machine learning, pages 1188-1196.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "The new automated IEEE INFOCOM review assignment system", "authors": [ { "first": "Baochun", "middle": [], "last": "Li", "suffix": "" }, { "first": "Y", "middle": [ "Thomas" ], "last": "Hou", "suffix": "" } ], "year": 2016, "venue": "IEEE Network", "volume": "30", "issue": "5", "pages": "18--24", "other_ids": {}, "num": null, "urls": [], "raw_text": "Baochun Li and Y. Thomas Hou. 2016. The new automated IEEE INFOCOM review assignment system. IEEE Network, 30(5):18-24.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Hierarchical recurrent neural network for document modeling", "authors": [ { "first": "Rui", "middle": [], "last": "Lin", "suffix": "" }, { "first": "Shujie", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Muyun", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Mu", "middle": [], "last": "Li", "suffix": "" }, { "first": "Ming", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "Sheng", "middle": [], "last": "Li", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "899--907", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rui Lin, Shujie Liu, Muyun Yang, Mu Li, Ming Zhou, and Sheng Li. 2015. Hierarchical recurrent neural network for document modeling. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 899-907.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "A robust model for paper reviewer assignment", "authors": [ { "first": "Xiang", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Torsten", "middle": [], "last": "Suel", "suffix": "" }, { "first": "Nasir", "middle": [], "last": "Memon", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the 8th ACM Conference on Recommender Systems, RecSys '14", "volume": "", "issue": "", "pages": "25--32", "other_ids": { "DOI": [ "10.1145/2645710.2645749" ] }, "num": null, "urls": [], "raw_text": "Xiang Liu, Torsten Suel, and Nasir Memon. 2014. A robust model for paper reviewer assignment.
In Proceedings of the 8th ACM Conference on Recommender Systems, RecSys '14, pages 25-32, New York, NY, USA. ACM.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Efficient estimation of word representations in vector space", "authors": [ { "first": "Tomas", "middle": [], "last": "Mikolov", "suffix": "" }, { "first": "Kai", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Greg", "middle": [], "last": "Corrado", "suffix": "" }, { "first": "Jeffrey", "middle": [], "last": "Dean", "suffix": "" } ], "year": 2013, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1301.3781" ] }, "num": null, "urls": [], "raw_text": "Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013a. Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Distributed representations of words and phrases and their compositionality", "authors": [ { "first": "Tomas", "middle": [], "last": "Mikolov", "suffix": "" }, { "first": "Ilya", "middle": [], "last": "Sutskever", "suffix": "" }, { "first": "Kai", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Greg", "middle": [], "last": "Corrado", "suffix": "" }, { "first": "Jeffrey", "middle": [], "last": "Dean", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the 26th International Conference on Neural Information Processing Systems", "volume": "2", "issue": "", "pages": "3111--3119", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013b. Distributed representations of words and phrases and their compositionality. In Proceedings of the 26th International Conference on Neural Information Processing Systems - Volume 2, NIPS'13, pages 3111-3119, USA. Curran Associates Inc.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Expertise modeling for matching papers with reviewers", "authors": [ { "first": "David", "middle": [], "last": "Mimno", "suffix": "" }, { "first": "Andrew", "middle": [], "last": "McCallum", "suffix": "" } ], "year": 2007, "venue": "Proceedings of the 13th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD '07", "volume": "", "issue": "", "pages": "500--509", "other_ids": { "DOI": [ "10.1145/1281192.1281247" ] }, "num": null, "urls": [], "raw_text": "David Mimno and Andrew McCallum. 2007. Expertise modeling for matching papers with reviewers. In Proceedings of the 13th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD '07, pages 500-509, New York, NY, USA. ACM.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "A decision support tool using order weighted averaging for conference review assignment", "authors": [ { "first": "Jennifer", "middle": [], "last": "Nguyen", "suffix": "" }, { "first": "Germán", "middle": [], "last": "Sánchez-Hernández", "suffix": "" }, { "first": "Núria", "middle": [], "last": "Agell", "suffix": "" }, { "first": "Xari", "middle": [], "last": "Rovira", "suffix": "" }, { "first": "Cecilio", "middle": [], "last": "Angulo", "suffix": "" } ], "year": 2018, "venue": "Pattern Recognition Letters", "volume": "105", "issue": "", "pages": "114--120", "other_ids": { "DOI": [ "10.1016/j.patrec.2017.09.020" ] }, "num": null, "urls": [], "raw_text": "Jennifer Nguyen, Germán Sánchez-Hernández, Núria Agell, Xari Rovira, and Cecilio Angulo. 2018. A decision support tool using order weighted averaging for conference review assignment.
Pattern Recognition Letters, 105(C):114-120.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "What the f-measure doesn't measure: Features, flaws, fallacies and fixes", "authors": [ { "first": "David", "middle": [ "M", "W" ], "last": "Powers", "suffix": "" } ], "year": 2015, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1503.06410" ] }, "num": null, "urls": [], "raw_text": "David M. W. Powers. 2015. What the f-measure doesn't measure: Features, flaws, fallacies and fixes. arXiv preprint arXiv:1503.06410.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Computational support for academic peer review: A perspective from artificial intelligence", "authors": [ { "first": "Simon", "middle": [], "last": "Price", "suffix": "" }, { "first": "Peter", "middle": [ "A" ], "last": "Flach", "suffix": "" } ], "year": 2017, "venue": "Communications of the ACM", "volume": "60", "issue": "3", "pages": "70--79", "other_ids": {}, "num": null, "urls": [], "raw_text": "Simon Price and Peter A. Flach. 2017. Computational support for academic peer review: A perspective from artificial intelligence. Communications of the ACM, 60(3):70-79.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Weakly learning to match experts in online community", "authors": [ { "first": "Yujie", "middle": [], "last": "Qian", "suffix": "" }, { "first": "Jie", "middle": [], "last": "Tang", "suffix": "" }, { "first": "Kan", "middle": [], "last": "Wu", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 27th International Joint Conference on Artificial Intelligence, IJCAI'18", "volume": "", "issue": "", "pages": "3841--3847", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yujie Qian, Jie Tang, and Kan Wu. 2018. Weakly learning to match experts in online community. In Proceedings of the 27th International Joint Conference on Artificial Intelligence, IJCAI'18, pages 3841-3847. AAAI Press.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "An overview of Microsoft Academic Service (MAS) and applications", "authors": [ { "first": "Arnab", "middle": [], "last": "Sinha", "suffix": "" }, { "first": "Zhihong", "middle": [], "last": "Shen", "suffix": "" }, { "first": "Yang", "middle": [], "last": "Song", "suffix": "" }, { "first": "Hao", "middle": [], "last": "Ma", "suffix": "" }, { "first": "Darrin", "middle": [], "last": "Eide", "suffix": "" }, { "first": "Bo-June (Paul)", "middle": [], "last": "Hsu", "suffix": "" }, { "first": "Kuansan", "middle": [], "last": "Wang", "suffix": "" } ], "year": 2015, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "DOI": [ "dx.doi.org/10.1145/2740908.2742839" ] }, "num": null, "urls": [], "raw_text": "Arnab Sinha, Zhihong Shen, Yang Song, Hao Ma, Darrin Eide, Bo-June (Paul) Hsu, and Kuansan Wang. 2015. An overview of Microsoft Academic Service (MAS) and applications.
WWW '15 Companion.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Parsing natural scenes and natural language with recursive neural networks", "authors": [ { "first": "Richard", "middle": [], "last": "Socher", "suffix": "" }, { "first": "Cliff", "middle": [ "C" ], "last": "Lin", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Manning", "suffix": "" }, { "first": "Andrew", "middle": [ "Y" ], "last": "Ng", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the 28th international conference on machine learning (ICML-11)", "volume": "", "issue": "", "pages": "129--136", "other_ids": {}, "num": null, "urls": [], "raw_text": "Richard Socher, Cliff C. Lin, Chris Manning, and Andrew Y. Ng. 2011. Parsing natural scenes and natural language with recursive neural networks. In Proceedings of the 28th international conference on machine learning (ICML-11), pages 129-136.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "Expertise matching via constraint-based optimization", "authors": [ { "first": "Wenbin", "middle": [], "last": "Tang", "suffix": "" }, { "first": "Jie", "middle": [], "last": "Tang", "suffix": "" }, { "first": "Chenhao", "middle": [], "last": "Tan", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the 2010 IEEE/WIC/ACM International Conference on Web Intelligence and Intelligent Agent Technology", "volume": "01", "issue": "", "pages": "34--41", "other_ids": { "DOI": [ "10.1109/WI-IAT.2010.133" ] }, "num": null, "urls": [], "raw_text": "Wenbin Tang, Jie Tang, and Chenhao Tan. 2010. Expertise matching via constraint-based optimization. In Proceedings of the 2010 IEEE/WIC/ACM International Conference on Web Intelligence and Intelligent Agent Technology - Volume 01, WI-IAT '10, pages 34-41, Washington, DC, USA. IEEE Computer Society.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "Hierarchical Dirichlet processes", "authors": [ { "first": "Yee Whye", "middle": [], "last": "Teh", "suffix": "" }, { "first": "Michael", "middle": [ "I" ], "last": "Jordan", "suffix": "" }, { "first": "Matthew", "middle": [ "J" ], "last": "Beal", "suffix": "" }, { "first": "David", "middle": [ "M" ], "last": "Blei", "suffix": "" } ], "year": 2006, "venue": "Journal of the American Statistical Association", "volume": "101", "issue": "476", "pages": "1566--1581", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yee Whye Teh, Michael I. Jordan, Matthew J. Beal, and David M. Blei. 2006. Hierarchical Dirichlet processes. Journal of the American Statistical Association, 101(476):1566-1581.", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "A survey of partial least squares (PLS) methods, with emphasis on the two-block case", "authors": [ { "first": "Jacob", "middle": [ "A" ], "last": "Wegelin", "suffix": "" } ], "year": 2000, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jacob A. Wegelin. 2000. A survey of partial least squares (PLS) methods, with emphasis on the two-block case.
Technical Report 371, Department of Statistics, University of Washington.", "links": null }, "BIBREF33": { "ref_id": "b33", "title": "Incorporating word correlation knowledge into topic modeling", "authors": [ { "first": "Pengtao", "middle": [], "last": "Xie", "suffix": "" }, { "first": "Diyi", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Eric", "middle": [], "last": "Xing", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "", "issue": "", "pages": "725--734", "other_ids": {}, "num": null, "urls": [], "raw_text": "Pengtao Xie, Diyi Yang, and Eric Xing. 2015. Incorporating word correlation knowledge into topic modeling. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 725-734.", "links": null }, "BIBREF34": { "ref_id": "b34", "title": "A correlated topic model using word embeddings", "authors": [ { "first": "Guangxu", "middle": [], "last": "Xun", "suffix": "" }, { "first": "Yaliang", "middle": [], "last": "Li", "suffix": "" }, { "first": "Wayne", "middle": [ "Xin" ], "last": "Zhao", "suffix": "" }, { "first": "Jing", "middle": [], "last": "Gao", "suffix": "" }, { "first": "Aidong", "middle": [], "last": "Zhang", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 26th International Joint Conference on Artificial Intelligence, IJCAI'17", "volume": "", "issue": "", "pages": "4207--4213", "other_ids": {}, "num": null, "urls": [], "raw_text": "Guangxu Xun, Yaliang Li, Wayne Xin Zhao, Jing Gao, and Aidong Zhang. 2017. A correlated topic model using word embeddings. In Proceedings of the 26th International Joint Conference on Artificial Intelligence, IJCAI'17, pages 4207-4213. AAAI Press.", "links": null } }, "ref_entries": { "TABREF1": { "num": null, "type_str": "table", "content": "", "html": null, "text": "The mean precision of different baselines with optimal hyperparameters on the NIPS dataset. A reviewer is classified as relevant with a TREC score \u2265 2." }, "TABREF3": { "num": null, "type_str": "table", "content": "
", "html": null, "text": "Percentage of reviewers in levels of expertise to the submissions recommended by our model." } } } }