arXiv paper metadata. Each record below lists, in order: Title, Authors, Abstract, entry_id (arXiv URL), Date, Categories, and year.
Diversity in Machine Learning
Zhiqiang Gong, Ping Zhong, Weidong Hu
Machine learning methods have achieved good performance and are widely applied in real-world applications. They can learn a model adaptively and fit the specific requirements of different tasks. Generally, a good machine learning system is composed of plentiful training data, a good model training process, and accurate inference. Many factors affect the performance of the machine learning process, among which diversity is an important one. Diversity helps each stage guarantee a good overall machine learning outcome: diversity of the training data ensures that the data provide more discriminative information for the model; diversity of the learned model (diversity in the parameters of each model, or diversity among different base models) lets each parameter or model capture unique or complementary information; and diversity in inference provides multiple choices, each corresponding to a specific plausible local optimum. Even though diversity plays an important role in the machine learning process, there is no systematic analysis of diversification in machine learning systems. In this paper, we systematically summarize methods for data diversification, model diversification, and inference diversification in the machine learning process. In addition, we survey typical applications where diversity technology improves machine learning performance, including remote sensing imaging tasks, machine translation, camera relocalization, image segmentation, object detection, topic modeling, and others. Finally, we discuss some challenges of diversity technology in machine learning and point out directions for future work.
http://arxiv.org/abs/1807.01477v2
"2018-07-04T08:25:17Z"
cs.CV
2018
Topic Discovery in Massive Text Corpora Based on Min-Hashing
Gibran Fuentes-Pineda, Ivan Vladimir Meza-Ruiz
The task of discovering topics in text corpora has been dominated by Latent Dirichlet Allocation and other topic models for over a decade. In order to apply these approaches to massive text corpora, the vocabulary needs to be reduced considerably and large computer clusters and/or GPUs are typically required. Moreover, the number of topics must be provided beforehand, but this depends on the corpus characteristics and is often difficult to estimate, especially for massive text corpora. Unfortunately, both topic quality and time complexity are sensitive to this choice. This paper describes an alternative approach to discover topics based on Min-Hashing, which can handle massive text corpora and large vocabularies using modest computer hardware and does not require fixing the number of topics in advance. The basic idea is to generate multiple random partitions of the corpus vocabulary to find sets of highly co-occurring words, which are then clustered to produce the final topics. In contrast to probabilistic topic models, where topics are distributions over the complete vocabulary, the topics discovered by the proposed approach are sets of highly co-occurring words. Interestingly, these topics cover various themes at different levels of granularity. An extensive qualitative and quantitative evaluation using the 20 Newsgroups (18K), Reuters (800K), Spanish Wikipedia (1M), and English Wikipedia (5M) corpora shows that the proposed approach is able to consistently discover meaningful and coherent topics. Remarkably, the time complexity of the proposed approach is linear with respect to corpus and vocabulary size; a non-parallel implementation was able to discover topics from the entire English edition of Wikipedia, with over 5 million documents and 1 million words, in less than 7 hours.
http://arxiv.org/abs/1807.00938v2
"2018-07-03T00:52:50Z"
cs.CL
2018
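The core mechanism of the paper above is simple enough to sketch. Below is a minimal, illustrative Python sketch (not the authors' implementation) of the Min-Hashing idea: words whose document sets have high Jaccard similarity tend to agree on MinHash values, so bucketing words by a few MinHash components yields random partitions of the vocabulary into candidate sets of co-occurring words. The toy corpus and the built-in `hash`-based MinHash are assumptions for illustration.

```python
# Toy corpus: word -> set of ids of documents containing the word.
import random
from collections import defaultdict

def minhash(doc_ids, seeds):
    # One MinHash component per seed, computed over a word's document set.
    return tuple(min(hash((s, d)) for d in doc_ids) for s in seeds)

def random_partition(word_to_docs, r=2, seed=0):
    # Words agreeing on all r MinHash components land in the same cell,
    # which happens with probability roughly Jaccard^r.
    rng = random.Random(seed)
    seeds = [rng.random() for _ in range(r)]
    cells = defaultdict(list)
    for w, docs in word_to_docs.items():
        cells[minhash(docs, seeds)].append(w)
    return [ws for ws in cells.values() if len(ws) > 1]

word_to_docs = {
    "goal": {0, 1, 2}, "match": {0, 1, 2}, "league": {1, 2},
    "stock": {3, 4}, "market": {3, 4, 5}, "price": {4, 5},
}
for p in range(5):  # several random partitions; recurring cells become topic candidates
    print(random_partition(word_to_docs, seed=p))
```

In the paper's pipeline, word sets that recur across many such random partitions are then clustered into the final topics.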
A Multimodal Recommender System for Large-scale Assortment Generation in E-commerce
Murium Iqbal, Adair Kovac, Kamelia Aryafar
E-commerce platforms surface interesting products largely through product recommendations that capture users' styles and aesthetic preferences. Curating recommendations as a complete complementary set, or assortment, is critical for a successful e-commerce experience, especially for product categories such as furniture, where items are selected together with the overall theme, style, or ambiance of a space in mind. In this paper, we propose two visually-aware recommender systems that can automatically curate an assortment of living room furniture around a couple of pre-selected seed pieces for the room. The first system aims to maximize the visual-based style compatibility of the entire selection by making use of transfer learning and topic modeling. The second system extends the first by incorporating text data and applying polylingual topic modeling to infer style over both modalities. We review the production pipeline for surfacing these visually-aware recommender systems and compare them through offline validations and large-scale online A/B tests on Overstock. Our experimental results show that complementary style is best discovered over product sets when both visual and textual data are incorporated.
http://arxiv.org/abs/1806.11226v1
"2018-06-28T23:11:54Z"
cs.IR, cs.CV, cs.LG
2018
Mutual-Excitation of Cryptocurrency Market Returns and Social Media Topics
Ross C. Phillips, Denise Gorse
Cryptocurrencies have recently experienced a new wave of price volatility and interest; activity within social media communities relating to cryptocurrencies has increased significantly. There is currently limited documented knowledge of factors which could indicate future price movements. This paper aims to decipher relationships between cryptocurrency price changes and topic discussion on social media to provide, among other things, an understanding of which topics are indicative of future price movements. To achieve this a well-known dynamic topic modelling approach is applied to social media communication to retrieve information about the temporal occurrence of various topics. A Hawkes model is then applied to find interactions between topics and cryptocurrency prices. The results show particular topics tend to precede certain types of price movements, for example the discussion of 'risk and investment vs trading' being indicative of price falls, the discussion of 'substantial price movements' being indicative of volatility, and the discussion of 'fundamental cryptocurrency value' by technical communities being indicative of price rises. The knowledge of topic relationships gained here could be built into a real-time system, providing trading or alerting signals.
http://arxiv.org/abs/1806.11093v1
"2018-06-28T17:34:13Z"
cs.SI
2018
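As a rough illustration of the modeling step in the paper above, here is a minimal sketch of a multivariate Hawkes intensity with exponential kernels, the kind of mutual-excitation structure fit between topic activity and price-movement events; the parameter values and event times are hypothetical, and fitting is omitted.

```python
import numpy as np

def hawkes_intensity(t, mu, alpha, beta, event_times):
    """lambda_i(t) = mu_i + sum_j sum_{t_k in events_j, t_k < t} alpha[i, j] * exp(-beta * (t - t_k))."""
    lam = mu.copy()
    for j, times in enumerate(event_times):
        past = times[times < t]
        lam += alpha[:, j] * np.exp(-beta * (t - past)).sum()
    return lam

# Dimension 0: bursts of a discussion topic; dimension 1: price falls (hypothetical).
mu = np.array([0.2, 0.1])          # baseline rates
alpha = np.array([[0.0, 0.0],      # nothing excites the topic here
                  [0.5, 0.3]])     # alpha[1, 0] > 0: topic bursts excite price falls
event_times = [np.array([1.0, 2.5]), np.array([3.0])]
print(hawkes_intensity(4.0, mu, alpha, beta=1.0, event_times=event_times))
```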
Unveiling the semantic structure of text documents using paragraph-aware Topic Models
Simón Roca-Sotelo, Jerónimo Arenas-García
Classic topic models are built under the bag-of-words assumption, in which word position is ignored for simplicity, and symmetric priors are typically used in most applications. In order to easily learn topics with different properties within the same corpus, we propose a new line of work in which the paragraph structure is exploited. Our proposal is based on the following assumption: in many text document corpora there are formal constraints shared across the whole collection, e.g. sections. When this assumption is satisfied, some paragraphs may be related to general concepts shared by all documents in the corpus, while others contain the genuine description of a document. Assuming each paragraph can be semantically more general, more specific, or hybrid, we look for ways to measure this, transfer the distinction to topics, and thereby learn what we call specific and general topics. Experiments show that this methodology highlights certain paragraphs in structured documents while learning interesting and more diverse topics.
http://arxiv.org/abs/1806.09827v1
"2018-06-26T07:50:37Z"
cs.CL, cs.IR, cs.LG, stat.ML
2018
A NoSQL Data-based Personalized Recommendation System for C2C e-Commerce
Khanh Dang, Khuong Vo, Josef Küng
With the considerable development of customer-to-customer (C2C) e-commerce in recent years, there is strong demand for an effective recommendation system that suggests suitable websites for users to sell their items with some specified needs. Nonetheless, e-commerce recommendation systems are mostly designed for business-to-customer (B2C) websites, where the systems offer consumers the products that they might like to buy. Almost none of the related research works focus on choosing selling sites for target items. In this paper, we introduce an approach that recommends selling websites based upon an item's description, category, and desired selling price. This approach employs NoSQL data-based machine learning techniques for building and training topic models and classification models. The trained models can then be used to rank the websites dynamically with respect to the user's needs. Experimental results with real-world datasets from Vietnamese C2C websites demonstrate the effectiveness of our proposed method.
http://arxiv.org/abs/1806.09793v1
"2018-06-26T05:02:30Z"
cs.IR, cs.DB, cs.LG
2018
Computational Analysis of Insurance Complaints: GEICO Case Study
Amir Karami, Noelle M. Pendergraft
The online environment has provided a great opportunity for insurance policyholders to share their complaints with respect to different services. These complaints can reveal valuable information for insurance companies that seek to improve their services; however, analyzing a huge number of online complaints is a complicated task for humans and must involve computational methods to create an efficient process. This research proposes a computational approach to characterize the major topics of a large number of online complaints. Our approach is based on topic modeling to disclose the latent semantics of complaints. The proposed approach was deployed on thousands of negative GEICO reviews. Analyzing 1,371 GEICO complaints indicates that there are 30 major complaints in four categories: (1) customer service; (2) insurance coverage, paperwork, policy, and reports; (3) legal issues; and (4) costs, estimates, and payments. This approach can be used in other applications to explore a large number of reviews.
http://arxiv.org/abs/1806.09736v1
"2018-06-26T00:12:14Z"
stat.AP, cs.CL, cs.IR, stat.ML
2018
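The topic modeling step described above follows a generic LDA pipeline. A hedged sketch using scikit-learn is shown below; the complaint strings are hypothetical stand-ins for the 1,371 GEICO reviews, and 2 topics stand in for the paper's 30.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

complaints = [
    "claim denied after months of paperwork",
    "rude customer service representative on the phone",
    "repair estimate far below the actual cost",
]  # in the study: 1,371 GEICO complaints

vec = CountVectorizer(stop_words="english")
X = vec.fit_transform(complaints)                 # document-term counts
lda = LatentDirichletAllocation(n_components=2,   # paper uses 30 topics
                                random_state=0).fit(X)

terms = vec.get_feature_names_out()
for k, comp in enumerate(lda.components_):
    top = comp.argsort()[-5:][::-1]               # top words per topic
    print(f"topic {k}:", [terms[i] for i in top])
```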
Concentration in the Generalized Chinese Restaurant Process
Alan Pereira, Roberto I. Oliveira, Rodrigo Ribeiro
The Generalized Chinese Restaurant Process (GCRP) describes a sequence of exchangeable random partitions of the numbers $\{1,\dots,n\}$. This process is related to the Ewens sampling model in Genetics and to Bayesian nonparametric methods such as topic models. In this paper, we study the GCRP in a regime where the number of parts grows like $n^\alpha$ with $\alpha>0$. We prove a non-asymptotic concentration result for the number of parts of size $k=o(n^{\alpha/(2\alpha+4)}/(\log n)^{1/(2+\alpha)})$. In particular, we show that these random variables concentrate around $c_{k}\,V_*\,n^\alpha$ where $V_*\,n^\alpha$ is the asymptotic number of parts and $c_k\approx k^{-(1+\alpha)}$ is a positive value depending on $k$. We also obtain finite-$n$ bounds for the total number of parts. Our theorems complement asymptotic statements by Pitman and more recent results on large and moderate deviations by Favaro, Feng and Gao.
http://arxiv.org/abs/1806.09656v1
"2018-06-25T18:36:21Z"
math.PR
2018
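To make the regime above concrete: in a two-parameter (Pitman-Yor-type) Chinese restaurant process with discount sigma in (0,1), the number of parts grows like n^sigma, matching the n^alpha regime studied in the paper. A minimal simulation sketch follows; the parameterization is the common Pitman-Yor one and is an assumption for illustration, not necessarily the paper's exact GCRP.

```python
import random

def gcrp(n, theta, sigma, seed=0):
    """Two-parameter CRP: after i customers, the next joins an existing part of
    size s w.p. (s - sigma)/(i + theta) and opens a new part w.p.
    (theta + sigma * k)/(i + theta), where k is the current number of parts."""
    rng = random.Random(seed)
    sizes = []
    for i in range(n):
        k = len(sizes)
        weights = [s - sigma for s in sizes] + [theta + sigma * k]
        j = rng.choices(range(k + 1), weights=weights)[0]
        if j == k:
            sizes.append(1)   # new part
        else:
            sizes[j] += 1
    return sizes

sizes = gcrp(20000, theta=1.0, sigma=0.5, seed=1)
print(len(sizes))                       # number of parts, on the order of n^sigma
print(sum(1 for s in sizes if s == 1))  # parts of size 1, the k=1 count the paper bounds
```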
Personalized Thread Recommendation for MOOC Discussion Forums
Andrew S. Lan, Jonathan C. Spencer, Ziqi Chen, Christopher G. Brinton, Mung Chiang
Social learning, i.e., students learning from each other through social interactions, has the potential to significantly scale up instruction in online education. In many cases, such as in massive open online courses (MOOCs), social learning is facilitated through discussion forums hosted by course providers. In this paper, we propose a probabilistic model for the process of learners posting on such forums, using point processes. Different from existing work, our method integrates topic modeling of the post text, timescale modeling of the decay in post activity over time, and learner topic interest modeling into a single model, and infers this information from user data. Our method also varies the excitation levels induced by posts according to the thread structure, to reflect typical notification settings in discussion forums. We experimentally validate the proposed model on three real-world MOOC datasets, the largest one containing up to 6,000 learners making 40,000 posts in 5,000 threads. Results show that our model excels at thread recommendation, achieving significant improvement over a number of baselines, thus showing promise for directing learners more efficiently to threads that interest them. Moreover, we demonstrate the analytics that our model parameters can provide, such as the timescales of different topic categories in a course.
http://arxiv.org/abs/1806.08468v1
"2018-06-22T02:16:14Z"
cs.SI, cs.CY, stat.AP
2018
Large-Scale Stochastic Sampling from the Probability Simplex
Jack Baker, Paul Fearnhead, Emily B Fox, Christopher Nemeth
Stochastic gradient Markov chain Monte Carlo (SGMCMC) has become a popular method for scalable Bayesian inference. These methods are based on sampling a discrete-time approximation to a continuous-time process, such as the Langevin diffusion. When applied to distributions defined on a constrained space, the time-discretization error can dominate when we are near the boundary of the space. We demonstrate that, because of this, current SGMCMC methods for the simplex struggle with sparse simplex spaces, i.e., when many of the components are close to zero. Unfortunately, many popular large-scale Bayesian models, such as network or topic models, require inference on sparse simplex spaces. To avoid the biases caused by this discretization error, we propose the stochastic Cox-Ingersoll-Ross process (SCIR), which removes all discretization error, and we prove that samples from the SCIR process are asymptotically unbiased. We discuss how this idea can be extended to target other constrained spaces. Use of the SCIR process within an SGMCMC algorithm is shown to give substantially better performance for a topic model and a Dirichlet process mixture model than existing SGMCMC approaches.
http://arxiv.org/abs/1806.07137v2
"2018-06-19T10:08:37Z"
stat.CO, cs.LG, stat.ML
2018
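The key property exploited above is that the Cox-Ingersoll-Ross diffusion admits exact transition sampling, so no discretization error arises. Below is a hedged sketch of textbook exact CIR simulation via its noncentral chi-squared transition; this illustrates the exactness property only, not the full SCIR-within-SGMCMC algorithm, and the parameter values are hypothetical.

```python
import numpy as np

def cir_exact_step(x, h, a, b, sigma, rng):
    """Exact transition of dX = a*(b - X) dt + sigma*sqrt(X) dW over time h:
    with c = 2a / ((1 - e^{-a h}) sigma^2),
    X_{t+h} = Y / (2c), Y ~ noncentral chi^2(df = 4ab/sigma^2, nonc = 2c x e^{-a h})."""
    c = 2 * a / ((1 - np.exp(-a * h)) * sigma ** 2)
    df = 4 * a * b / sigma ** 2
    nonc = 2 * c * x * np.exp(-a * h)
    return rng.noncentral_chisquare(df, nonc) / (2 * c)

rng = np.random.default_rng(0)
x = np.full(3, 0.1)               # three unnormalized components, always positive
for _ in range(1000):
    x = cir_exact_step(x, h=0.01, a=2.0, b=0.5, sigma=1.0, rng=rng)
theta = x / x.sum()               # normalizing yields a point on the simplex
print(theta)
```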
Overlapping Clustering Models, and One (class) SVM to Bind Them All
Xueyu Mao, Purnamrita Sarkar, Deepayan Chakrabarti
People belong to multiple communities, words belong to multiple topics, and books cover multiple genres; overlapping clusters are commonplace. Many existing overlapping clustering methods model each person (or word, or book) as a non-negative weighted combination of "exemplars" who belong solely to one community, with some small noise. Geometrically, each person is a point on a cone whose corners are these exemplars. This basic form encompasses the widely used Mixed Membership Stochastic Blockmodel of networks (Airoldi et al., 2008) and its degree-corrected variants (Jin et al., 2017), as well as topic models such as LDA (Blei et al., 2003). We show that a simple one-class SVM yields provably consistent parameter inference for all such models, and scales to large datasets. Experimental results on several simulated and real datasets show our algorithm (called SVM-cone) is both accurate and scalable.
http://arxiv.org/abs/1806.06945v2
"2018-06-18T21:00:00Z"
stat.ML, cs.LG, math.ST, stat.TH
2018
Nonparametric Topic Modeling with Neural Inference
Xuefei Ning, Yin Zheng, Zhuxi Jiang, Yu Wang, Huazhong Yang, Junzhou Huang
This work focuses on combining nonparametric topic models with Auto-Encoding Variational Bayes (AEVB). Specifically, we first propose iTM-VAE, in which the topics are treated as trainable parameters and the document-specific topic proportions are obtained by a stick-breaking construction. The inference of iTM-VAE is modeled by neural networks such that it can be computed in a simple feed-forward manner. We also describe how to introduce a hyper-prior into iTM-VAE so as to model the uncertainty of the prior parameter. In fact, the hyper-prior technique is quite general, and we show that it can be applied to other AEVB-based models to elegantly alleviate the collapse-to-prior problem. Moreover, we propose HiTM-VAE, in which the document-specific topic distributions are generated in a hierarchical manner. HiTM-VAE is even more flexible and can generate topic distributions with better variability. Experimental results on the 20News and Reuters RCV1-V2 datasets show that the proposed models outperform the state-of-the-art baselines significantly. The advantages of the hyper-prior technique and the hierarchical model construction are also confirmed by experiments.
http://arxiv.org/abs/1806.06583v1
"2018-06-18T10:22:18Z"
cs.CL, cs.IR, cs.LG
2018
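The stick-breaking construction mentioned above maps a sequence of fractions in (0,1) to a point on the simplex. A minimal numpy sketch follows; here the fractions are Beta draws for illustration, whereas in iTM-VAE they would come from the inference network.

```python
import numpy as np

def stick_breaking(v):
    """Map stick fractions v_k in (0,1) to proportions pi_k = v_k * prod_{j<k}(1 - v_j);
    a final remainder term closes the simplex so the output sums to 1."""
    v = np.asarray(v)
    remainder = np.concatenate([[1.0], np.cumprod(1 - v)])  # prod_{j<k} (1 - v_j)
    return np.append(v, 1.0) * remainder

rng = np.random.default_rng(0)
v = rng.beta(1.0, 5.0, size=10)   # stand-in for inference-network outputs
pi = stick_breaking(v)
print(pi, pi.sum())               # topic proportions, summing to 1
```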
Aspect Sentiment Model for Micro Reviews
Reinald Kim Amplayo, Seung-won Hwang
This paper develops an aspect sentiment model for aspect-based sentiment analysis (ABSA) focused on micro reviews. This task is important for understanding the short reviews that the majority of users write, while existing topic models target expert-level long reviews with sufficient co-occurrence patterns to observe. Current methods for aggregating micro reviews using metadata information may not be effective either, due to metadata absence, topical heterogeneity, and cold-start problems. To this end, we propose a model called the Micro Aspect Sentiment Model (MicroASM). MicroASM is based on the observation that short reviews (1) are viewed with sentiment-aspect word pairs as building blocks of information and (2) can be clustered into larger reviews. Compared to current state-of-the-art aspect sentiment models, experiments show that our model provides better performance on aspect-level tasks such as aspect term extraction and on document-level tasks such as sentiment classification.
http://arxiv.org/abs/1806.05499v1
"2018-06-14T12:28:43Z"
cs.CL
2018
Learning Multilingual Topics from Incomparable Corpus
Shudong Hao, Michael J. Paul
Multilingual topic models enable crosslingual tasks by extracting consistent topics from multilingual corpora. Most models require parallel or comparable training corpora, which limits their ability to generalize. In this paper, we first demystify the knowledge transfer mechanism behind multilingual topic models by defining an alternative but equivalent formulation. Based on this analysis, we then relax the assumption of training data required by most existing models, creating a model that only requires a dictionary for training. Experiments show that our new method effectively learns coherent multilingual topics from partially and fully incomparable corpora with limited amounts of dictionary resources.
http://arxiv.org/abs/1806.04270v1
"2018-06-11T23:51:18Z"
cs.CL
2018
Measuring Conversational Productivity in Child Forensic Interviews
Victor Ardulov, Manoj Kumar, Shanna Williams, Thomas Lyon, Shrikanth Narayanan
Child Forensic Interviewing (FI) presents a challenge for effective information retrieval and decision making. The high stakes associated with the process demand that expert legal interviewers be able to effectively establish a channel of communication and elicit substantive knowledge from the child-client while minimizing the potential for experiencing trauma. As a first step toward computationally modeling and producing quality spoken interviewing strategies and a generalized understanding of interview dynamics, we propose a novel methodology to computationally model effectiveness criteria by applying summarization and topic modeling techniques to objectively measure and rank the responsiveness and conversational productivity of a child during FI. We score information retrieval by constructing an agenda that represents general topics of interest and measuring a given response's alignment with it, and we leverage lexical entrainment to score responsiveness. For comparison, we present our methods alongside traditional metrics of evaluation and discuss the use of prior information for generating situational awareness.
http://arxiv.org/abs/1806.03357v1
"2018-06-08T21:21:19Z"
cs.CL, cs.CY
2018
Topic Modelling of Empirical Text Corpora: Validity, Reliability, and Reproducibility in Comparison to Semantic Maps
Tobias Hecking, Loet Leydesdorff
Using the 6,638 case descriptions of societal impact submitted for evaluation in the Research Excellence Framework (REF 2014), we replicate the topic model (Latent Dirichlet Allocation, or LDA) made in this context and compare the results with factor-analytic results using a traditional word-document matrix (Principal Component Analysis, or PCA). Removing a small fraction of documents from the sample, for example, has on average a much larger impact on LDA than on PCA-based models, to the extent that the largest distortion in the case of PCA has less effect than the smallest distortion of LDA-based models. In terms of semantic coherence, however, LDA models outperform PCA-based models. The topic models inform us about the statistical properties of the document sets under study, but the results are statistical and should not be used for a semantic interpretation (for example, in grant selection, micro-decision making, or scholarly work) without follow-up using domain-specific semantic maps.
http://arxiv.org/abs/1806.01045v1
"2018-06-04T11:03:11Z"
cs.CL
2018
Transfer Topic Labeling with Domain-Specific Knowledge Base: An Analysis of UK House of Commons Speeches 1935-2014
Alexander Herzog, Peter John, Slava Jankin Mikhaylov
Topic models are widely used in natural language processing, allowing researchers to estimate the underlying themes in a collection of documents. Most topic models use unsupervised methods and hence require the additional step of attaching meaningful labels to estimated topics. This process of manual labeling is not scalable and suffers from human bias. We present a semi-automatic transfer topic labeling method that seeks to remedy these problems. Domain-specific codebooks form the knowledge base for automated topic labeling. We demonstrate our approach with a dynamic topic model analysis of the complete corpus of UK House of Commons speeches from 1935 to 2014, using the coding instructions of the Comparative Agendas Project to label topics. We show that our method works well for a majority of the topics we estimate, but we also find that institution-specific topics, in particular on subnational governance, require manual input. We validate our results using human expert coding.
http://arxiv.org/abs/1806.00793v2
"2018-06-03T13:22:10Z"
cs.CL, cs.CY
2018
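A minimal sketch of the general transfer-labeling idea follows, assuming (hypothetically) that each codebook category is represented by keywords and each topic by its word distribution; topics whose best codebook score is weak get flagged for the manual input the paper describes. This illustrates the concept, not the authors' scoring function.

```python
import numpy as np

def label_topics(topic_word, vocab, codebook, threshold=0.1):
    """Assign each topic the codebook label whose keywords carry the most
    topic probability mass; low-scoring topics are flagged for manual review."""
    labels = []
    for dist in topic_word:  # one word distribution per topic
        scores = {lab: sum(dist[vocab[w]] for w in kws if w in vocab)
                  for lab, kws in codebook.items()}
        lab, score = max(scores.items(), key=lambda kv: kv[1])
        labels.append(lab if score > threshold else "MANUAL REVIEW")
    return labels

# Hypothetical toy vocabulary, fitted topics, and codebook categories.
vocab = {"tax": 0, "budget": 1, "school": 2, "teacher": 3, "army": 4}
topic_word = np.array([[0.50, 0.40, 0.05, 0.03, 0.02],
                       [0.05, 0.05, 0.45, 0.40, 0.05]])
codebook = {"Macroeconomics": ["tax", "budget"], "Education": ["school", "teacher"]}
print(label_topics(topic_word, vocab, codebook))
```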
Learning Restricted Boltzmann Machines via Influence Maximization
Guy Bresler, Frederic Koehler, Ankur Moitra, Elchanan Mossel
Graphical models are a rich language for describing high-dimensional distributions in terms of their dependence structure. While there are algorithms with provable guarantees for learning undirected graphical models in a variety of settings, there has been much less progress in the important scenario when there are latent variables. Here we study Restricted Boltzmann Machines (RBMs), which are a popular model with wide-ranging applications in dimensionality reduction, collaborative filtering, topic modeling, feature extraction, and deep learning. The main message of our paper is a strong dichotomy in the feasibility of learning RBMs, depending on the nature of the interactions between variables: ferromagnetic models can be learned efficiently, while general models cannot. In particular, we give a simple greedy algorithm based on influence maximization to learn ferromagnetic RBMs with bounded degree. In fact, we learn a description of the distribution on the observed variables as a Markov Random Field. Our analysis is based on tools from mathematical physics that were developed to show the concavity of magnetization. Our algorithm extends straightforwardly to general ferromagnetic Ising models with latent variables. Conversely, we show that even for a constant number of latent variables with constant degree, without ferromagneticity the problem is as hard as sparse parity with noise. This hardness result is based on a sharp and surprising characterization of the representational power of bounded-degree RBMs: the distribution on their observed variables can simulate any bounded-order MRF. This result is of independent interest since RBMs are the building blocks of deep belief networks.
http://arxiv.org/abs/1805.10262v2
"2018-05-25T17:32:19Z"
cs.LG, cs.DS, math.PR, stat.ML
2018
A fast algorithm with minimax optimal guarantees for topic models with an unknown number of topics
Xin Bing, Florentina Bunea, Marten Wegkamp
We propose a new method of estimation in topic models that is not a variation on the existing simplex-finding algorithms and that estimates the number of topics K from the observed data. We derive new finite-sample minimax lower bounds for the estimation of A (the word-topic matrix), as well as new upper bounds for our proposed estimator. We describe the scenarios where our estimator is minimax adaptive. Our finite-sample analysis is valid for any number of documents (n), individual document length (N_i), dictionary size (p), and number of topics (K), and both p and K are allowed to increase with n, a situation not handled well by previous analyses. We complement our theoretical results with a detailed simulation study. We illustrate that the new algorithm is faster and more accurate than the current ones, even though we start out with the computational and theoretical disadvantage of not knowing the correct number of topics K while providing the competing methods with its correct value in our simulations.
http://arxiv.org/abs/1805.06837v3
"2018-05-17T16:07:32Z"
stat.ML, cs.LG
2018
News Sentiment as Leading Indicators for Recessions
Melody Y. Huang, Randall R. Rojas, Patrick D. Convery
In this paper, we use a topic modeling algorithm and sentiment scoring methods to construct a novel metric that serves as a leading indicator in recession prediction models. We hypothesize that the inclusion of such a sentiment indicator, derived purely from unstructured news data, will improve our ability to forecast future recessions, because it provides a direct measure of the polarity of the information consumers and producers are exposed to. We go on to show that including our proposed news sentiment indicator, together with traditional sentiment data, such as the Michigan Index of Consumer Sentiment and the Purchasing Managers' Index, and common factors derived from a large panel of economic and financial indicators, significantly improves model performance.
http://arxiv.org/abs/1805.04160v2
"2018-05-10T20:21:28Z"
stat.AP, econ.EM
2018
The Evolution of Popularity and Images of Characters in Marvel Cinematic Universe Fanfictions
Fan Bu
This analysis proposes a new topic model to study the yearly trends in Marvel Cinematic Universe fanfictions on three levels: character popularity, character images/topics, and the vocabulary patterns of topics. It is found that character appearances in fanfictions have become more diverse over the years thanks to the constant introduction of new characters in feature films, and, in the case of Captain America, multi-dimensional character development is well received by the fanfiction world.
http://arxiv.org/abs/1805.03774v1
"2018-05-10T01:27:58Z"
cs.CL
2018
Investor Reaction to Financial Disclosures Across Topics: An Application of Latent Dirichlet Allocation
Stefan Feuerriegel, Nicolas Pröllochs
This paper provides a holistic study of how stock prices vary in their response to financial disclosures across different topics. In particular, we shed light on the extensive number of filings for which no a priori categorization of content exists. For this purpose, we utilize an approach from data mining, namely latent Dirichlet allocation, as a means of topic modeling. This technique facilitates our task of automatically categorizing, ex ante, the content of more than 70,000 regulatory 8-K filings from U.S. companies. We then evaluate the subsequent stock market reaction. Our empirical evidence suggests a considerable discrepancy among various types of news stories in terms of their relevance and impact on financial markets. For instance, we find a statistically significant abnormal return in response to earnings results and credit ratings, but also to disclosures regarding business strategy, the health sector, and mergers and acquisitions. Our findings benefit managers, investors, and policy-makers by indicating how regulatory filings should be structured and which topics are most likely to precede changes in stock valuations.
http://arxiv.org/abs/1805.03308v1
"2018-05-08T22:22:26Z"
cs.CL, q-fin.GN
2018
Dynamic and Static Topic Model for Analyzing Time-Series Document Collections
Rem Hida, Naoya Takeishi, Takehisa Yairi, Koichi Hori
For extracting meaningful topics from texts, their structures should be considered properly. In this paper, we aim to analyze structured time-series documents, such as a collection of news articles or a series of scientific papers, wherein topics evolve over time depending on multiple topics in the past and are also related to each other at each time. To this end, we propose a dynamic and static topic model, which simultaneously considers the dynamic structures of temporal topic evolution and the static structures of the topic hierarchy at each time. We show the results of experiments on collections of scientific papers, in which the proposed method outperformed conventional models. Moreover, we show an example of extracted topic structures, which we found helpful for analyzing research activities.
http://arxiv.org/abs/1805.02203v1
"2018-05-06T12:41:47Z"
cs.CL
2018
When Politicians Talk About Politics: Identifying Political Tweets of Brazilian Congressmen
Lucas S. Oliveira, Pedro O. S. Vaz de Melo, Marcelo S. Amaral, José Antônio. G. Pinho
Since June 2013, when Brazil faced the largest and most significant mass protests in a generation, a political crisis has been underway. In the midst of this crisis, Brazilian politicians use social media to communicate with the electorate in order to retain or grow their political capital. The problem is that many controversial topics are under discussion, and deputies may prefer to avoid such themes in their messages. To characterize this behavior, we propose a method to accurately identify political and non-political tweets, independently of the deputy who posted them and of the time they were posted. Moreover, we collected the tweets of all congressmen who were active on Twitter and served in the Brazilian parliament from October 2013 to October 2017. To evaluate our method, we used word clouds and a topic model to identify the main political and non-political latent topics in parliamentary tweets. Both results indicate that our proposal is able to accurately distinguish political from non-political tweets. Moreover, our analyses revealed a striking fact: more than half of the messages posted by Brazilian deputies are non-political.
http://arxiv.org/abs/1805.01589v1
"2018-05-04T02:26:21Z"
cs.SI, cs.CL
2018
Viscovery: Trend Tracking in Opinion Forums based on Dynamic Topic Models
Ignacio Espinoza, Marcelo Mendoza, Pablo Ortega, Daniel Rivera, Fernanda Weiss
Opinions in forums and social networks are released by millions of people due to the increasing number of users of Web 2.0 platforms who opine about brands and organizations. For enterprises and government agencies it is almost impossible to track what people say, producing a gap between user needs/expectations and organizational actions. To bridge this gap we created Viscovery, a platform for opinion summarization and trend tracking that is able to analyze a stream of opinions recovered from forums. To do this we use dynamic topic models, which allow us to uncover the hidden structure of topics behind opinions and to characterize vocabulary dynamics. We extend dynamic topic models for incremental learning, a key aspect needed in Viscovery for model updating in near-real time. In addition, we include sentiment analysis in Viscovery, allowing positive and negative words to be separated for a specific topic at different levels of granularity. Viscovery allows users to visualize representative opinions and terms in each topic. At a coarse level of granularity, the dynamics of the topics can be analyzed using a 2D topic embedding, suggesting longitudinal topic merging or segmentation. In this paper we report our experience developing this platform, sharing lessons learned and opportunities that arise from the use of sentiment analysis and topic modeling in real-world applications.
http://arxiv.org/abs/1805.00457v1
"2018-05-01T17:48:19Z"
cs.IR
2018
"I ain't tellin' white folks nuthin": A quantitative exploration of the race-related problem of candour in the WPA slave narratives
Soumya Kambhampati
From 1936 to 1938, the Works Progress Administration interviewed thousands of former slaves about their life experiences. While these interviews are crucial to understanding the "peculiar institution" from the standpoint of the slave himself, issues relating to bias cloud analyses of these interviews. The problem I investigate is the problem of candour in the WPA slave narratives: it is widely held in the historical community that the strict racial caste system of the Deep South compelled black ex-slaves to tell white interviewers what they thought they wanted to hear, suggesting that there was a significant difference in candour depending on whether the interviewer was white or black. In this work, I attempt to quantitatively characterise this race-related problem of candour. Prior work has either been of an impressionistic, qualitative nature or utilised exceedingly simple quantitative methodology. In contrast, I use more sophisticated statistical methods: in particular, word frequency and sentiment analysis and comparative topic modelling with LDA to try to identify differences in the content and sentiment expressed by ex-slaves in front of white versus black interviewers. While my sentiment analysis methodology was ultimately unsuccessful due to the complexity of the task, my word frequency analysis and comparative topic modelling methods both showed strong evidence that the content expressed in front of white interviewers differed from that expressed in front of black interviewers. In particular, I found that the ex-slaves spoke much more about unfavourable aspects of slavery, like whipping and slave patrollers, in front of interviewers of their own race. I hope that my more sophisticated statistical methodology helps improve the robustness of the argument for the existence of this problem of candour in the slave narratives, which some would seek to deny for revisionist purposes.
http://arxiv.org/abs/1805.00471v1
"2018-05-01T05:24:40Z"
cs.CL
2018
Lessons from the Bible on Modern Topics: Low-Resource Multilingual Topic Model Evaluation
Shudong Hao, Jordan Boyd-Graber, Michael J. Paul
Multilingual topic models enable document analysis across languages through coherent multilingual summaries of the data. However, there is no standard and effective metric to evaluate the quality of multilingual topics. We introduce a new intrinsic evaluation of multilingual topic models that correlates well with human judgments of multilingual topic coherence as well as performance in downstream applications. Importantly, we also study evaluation for low-resource languages. Because standard metrics fail to accurately measure topic quality when robust external resources are unavailable, we propose an adaptation model that improves the accuracy and reliability of these metrics in low-resource settings.
http://arxiv.org/abs/1804.10184v1
"2018-04-26T17:35:15Z"
cs.CL
2018
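Standard intrinsic topic-quality metrics of the family discussed above score a topic by the co-occurrence statistics of its top words in a reference corpus. A minimal sketch of NPMI coherence follows; NPMI is a common choice in this literature, shown here for illustration rather than as the paper's proposed adapted metric.

```python
import math
from itertools import combinations

def npmi_coherence(topic, word_docs, n_docs):
    """Mean normalized PMI over word pairs:
    npmi(w1, w2) = log(p(w1,w2) / (p(w1) p(w2))) / -log p(w1,w2),
    with probabilities estimated by document co-occurrence in a reference corpus."""
    scores = []
    for w1, w2 in combinations(topic, 2):
        p1 = len(word_docs[w1]) / n_docs
        p2 = len(word_docs[w2]) / n_docs
        p12 = len(word_docs[w1] & word_docs[w2]) / n_docs
        if p12 == 0:
            scores.append(-1.0)   # never co-occur: minimum NPMI
        else:
            scores.append(math.log(p12 / (p1 * p2)) / -math.log(p12))
    return sum(scores) / len(scores)

# Hypothetical reference corpus: word -> set of documents containing it.
word_docs = {"bread": {0, 1}, "wine": {0, 1, 2}, "sword": {3}}
print(npmi_coherence(["bread", "wine"], word_docs, n_docs=4))   # coherent pair
print(npmi_coherence(["bread", "sword"], word_docs, n_docs=4))  # incoherent pair
```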
Variational Inference In Pachinko Allocation Machines
Akash Srivastava, Charles Sutton
The Pachinko Allocation Machine (PAM) is a deep topic model that represents rich correlation structures among topics via a directed acyclic graph over topics. Because of the flexibility of the model, however, approximate inference is very difficult. Perhaps for this reason, only a small number of potential PAM architectures have been explored in the literature. In this paper we present an efficient and flexible amortized variational inference method for PAM, using a deep inference network to parameterize the approximate posterior distribution in a manner similar to the variational autoencoder. Our inference method produces more coherent topics than state-of-the-art inference methods for PAM while being an order of magnitude faster, which allows exploration of a wider range of PAM architectures than have previously been studied.
http://arxiv.org/abs/1804.07944v1
"2018-04-21T11:12:25Z"
cs.CL, cs.LG, stat.ML
2018
A Deep Representation Empowered Distant Supervision Paradigm for Clinical Information Extraction
Yanshan Wang, Sunghwan Sohn, Sijia Liu, Feichen Shen, Liwei Wang, Elizabeth J. Atkinson, Shreyasee Amin, Hongfang Liu
Objective: To automatically create large labeled training datasets and reduce the effort of feature engineering for training accurate machine learning models for clinical information extraction. Materials and Methods: We propose a distant supervision paradigm empowered by deep representation for extracting information from clinical text. In this paradigm, rule-based NLP algorithms are utilized to generate weak labels and create large training datasets automatically. Additionally, we use pre-trained word embeddings as deep representation to eliminate the need for task-specific feature engineering for machine learning. We evaluated the effectiveness of the proposed paradigm on two clinical information extraction tasks: smoking status extraction and proximal femur (hip) fracture extraction. We tested three prevalent machine learning models, namely, Convolutional Neural Networks (CNN), Support Vector Machine (SVM), and Random Forest (RF). Results: The results indicate that CNN is the best fit for the proposed distant supervision paradigm. It outperforms the rule-based NLP algorithms given large datasets by capturing additional extraction patterns. We also verified the advantage of word embedding feature representation in the paradigm over term frequency-inverse document frequency (tf-idf) and topic modeling representations. Discussion: In the clinical domain, the limited amount of labeled data is always a bottleneck for applying machine learning. Additionally, the performance of machine learning approaches highly depends on task-specific feature engineering. The proposed paradigm could alleviate those problems by leveraging rule-based NLP algorithms to automatically assign weak labels and by eliminating the need for task-specific feature engineering using word embedding feature representation.
http://arxiv.org/abs/1804.07814v1
"2018-04-20T20:18:46Z"
cs.IR
2018
Structuring Wikipedia Articles with Section Recommendations
Tiziano Piccardi, Michele Catasta, Leila Zia, Robert West
Sections are the building blocks of Wikipedia articles. They enhance readability and can be used as a structured entry point for creating and expanding articles. Structuring a new or already existing Wikipedia article with sections is a hard task for humans, especially for newcomers or less experienced editors, as it requires significant knowledge about what a well-written article looks like for each possible topic. Inspired by this need, the present paper defines the problem of section recommendation for Wikipedia articles and proposes several approaches for tackling it. Our systems can help editors by recommending what sections to add to already existing or newly created Wikipedia articles. Our basic paradigm is to generate recommendations by sourcing sections from articles that are similar to the input article. We explore several ways of defining similarity for this purpose (based on topic modeling, collaborative filtering, and Wikipedia's category system). We use both automatic and human evaluation approaches for assessing the performance of our recommendation system, concluding that the category-based approach works best, achieving precision@10 of about 80% in the human evaluation.
http://arxiv.org/abs/1804.05995v2
"2018-04-17T00:47:41Z"
cs.IR
2018
Overlapping Coalition Formation via Probabilistic Topic Modeling
Michalis Mamakos, Georgios Chalkiadakis
Research in cooperative games often assumes that agents know the coalitional values with certainty, and that they can belong to one coalition only. By contrast, this work assumes that the value of a coalition is based on an underlying collaboration structure emerging due to existing but unknown relations among the agents; and that agents can form overlapping coalitions. Specifically, we first propose Relational Rules, a novel representation scheme for cooperative games with overlapping coalitions, which encodes the aforementioned relations, and which extends the well-known MC-nets representation to this setting. We then present a novel decision-making method for decentralized overlapping coalition formation, which exploits probabilistic topic modeling, and in particular, online Latent Dirichlet Allocation. By interpreting formed coalitions as documents, agents can effectively learn topics that correspond to profitable collaboration structures.
http://arxiv.org/abs/1804.05235v1
"2018-04-14T15:08:20Z"
cs.GT
2018
Are Abstracts Enough for Hypothesis Generation?
Justin Sybrandt, Angelo Carrabba, Alexander Herzog, Ilya Safro
The potential for automatic hypothesis generation (HG) systems to improve research productivity keeps pace with the growing set of publicly available scientific information. But as data becomes easier to acquire, we must understand the effect different textual data sources have on our resulting hypotheses. Are abstracts enough for HG, or are full-text papers needed? How many papers does an HG system need to make valuable predictions? How sensitive is a general-purpose HG system to hyperparameter values or input quality? What effect do corpus size and document length have on HG results? To answer these questions we train multiple versions of the knowledge-network-based HG system Moliere on varying corpora in order to compare challenges and trade-offs in terms of result quality and computational requirements. Moliere generalizes main principles of similar knowledge-network-based HG systems and reinforces them with topic modeling components. The corpora include the abstract and full-text versions of PubMed Central, as well as iterative halves of MEDLINE, which allows us to compare the effects that document length and count have on the results. We find that, quantitatively, corpora with a higher median document length produce marginally higher-quality results, yet require substantially longer to process. However, qualitatively, full-length papers introduce a significant number of intruder terms into the resulting topics, which decreases human interpretability. Additionally, we find that the effect of document length is greater than that of document count, even if both sets contain only paper abstracts. Reproducibility: Our code and data are available at github.com/jsybran/moliere and bit.ly/2GxghpM, respectively.
http://arxiv.org/abs/1804.05942v3
"2018-04-13T17:08:45Z"
cs.IR, cs.DL
2018
Predicting Good Configurations for GitHub and Stack Overflow Topic Models
Christoph Treude, Markus Wagner
Software repositories contain large amounts of textual data, ranging from source code comments and issue descriptions to questions, answers, and comments on Stack Overflow. To make sense of this textual data, topic modelling is frequently used as a text-mining tool for the discovery of hidden semantic structures in text bodies. Latent Dirichlet allocation (LDA) is a commonly used topic model that aims to explain the structure of a corpus by grouping texts. LDA requires multiple parameters to work well, and there are only rough and sometimes conflicting guidelines available on how these parameters should be set. In this paper, we contribute (i) a broad study of parameters to arrive at good local optima for GitHub and Stack Overflow text corpora, (ii) an a-posteriori characterisation of text corpora related to eight programming languages, and (iii) an analysis of corpus feature importance via per-corpus LDA configuration. We find that (1) popular rules of thumb for topic modelling parameter configuration are not applicable to the corpora used in our experiments, (2) corpora sampled from GitHub and Stack Overflow have different characteristics and require different configurations to achieve good model fit, and (3) we can predict good configurations for unseen corpora reliably. These findings support researchers and practitioners in efficiently determining suitable configurations for topic modelling when analysing textual data contained in software repositories.
http://arxiv.org/abs/1804.04749v3
"2018-04-13T00:09:48Z"
cs.CL, cs.NE
2018
Learning Topics using Semantic Locality
Ziyi Zhao, Krittaphat Pugdeethosapol, Sheng Lin, Zhe Li, Caiwen Ding, Yanzhi Wang, Qinru Qiu
Topic modeling discovers the latent topic probabilities of given text documents. To generate more meaningful topics that better represent the given documents, we propose a new feature extraction technique that can be used in the data preprocessing stage. The method consists of three steps. First, it generates the words/word pairs from every single document. Second, it applies a two-way TF-IDF algorithm to the words/word pairs for semantic filtering. Third, it uses the K-means algorithm to merge the word pairs that have similar semantic meanings. Experiments are carried out on the Open Movie Database (OMDb), Reuters, and 20NewsGroup datasets, with the mean Average Precision score used as the evaluation metric. Compared with other state-of-the-art topic models, such as latent Dirichlet allocation and traditional Restricted Boltzmann Machines, our proposed data preprocessing improves the generated topic accuracy by up to 12.99%.
http://arxiv.org/abs/1804.04205v1
"2018-04-11T20:23:23Z"
cs.LG, cs.CL, cs.IR, stat.ML
2018
Towards Training Probabilistic Topic Models on Neuromorphic Multi-chip Systems
Zihao Xiao, Jianfei Chen, Jun Zhu
Probabilistic topic models are popular unsupervised learning methods, including probabilistic latent semantic indexing (pLSI) and latent Dirichlet allocation (LDA). To date, their training has been implemented on general-purpose computers (GPCs), which are flexible in programming but energy-consuming. Toward low-energy implementations, this paper investigates their training on an emerging hardware technology called neuromorphic multi-chip systems (NMSs). NMSs are very effective for a family of algorithms called spiking neural networks (SNNs). We present three SNNs to train topic models. The first SNN is a batch algorithm combining the conventional collapsed Gibbs sampling (CGS) algorithm and an inference SNN to train LDA. The other two SNNs are online algorithms targeting both energy- and storage-limited environments. The two online algorithms are equivalent to training LDA by using maximum-a-posteriori estimation and by maximizing the semi-collapsed likelihood, respectively. They use novel, tailored ordinary differential equations for stochastic optimization. We simulate the new algorithms and show that they are comparable with the GPC algorithms, while being suitable for NMS implementation. We also propose an extension to train pLSI and a method to prune the network to obey the limited fan-in of some NMSs.
http://arxiv.org/abs/1804.03578v1
"2018-04-10T15:01:50Z"
cs.LG, cs.AI, cs.ET, stat.ML
2018
Microblog Topic Identification using Linked Open Data
A. Yıldırım, S. Uskudarli
The extensive use of social media for sharing and obtaining information has resulted in the development of topic detection models to facilitate comprehension of the overwhelming number of short and distributed posts. Probabilistic topic models, such as Latent Dirichlet Allocation, and matrix factorization based approaches, such as Latent Semantic Analysis and Non-negative Matrix Factorization, represent topics as sets of terms that are useful for many automated processes. However, determining what a topic is about is left as a further task. Alternatively, techniques that produce summaries are human-comprehensible but less suitable for automated processing. This work proposes an approach that utilizes Linked Open Data (LOD) resources to extract semantically represented topics from collections of microposts. The proposed approach utilizes entity linking to identify the elements of topics from microposts. The elements are related through co-occurrence graphs, which are processed to yield topics. The topics are represented using an ontology introduced for this purpose. A prototype of the approach is used to identify topics from 11 datasets consisting of more than one million posts collected from Twitter during various events, such as the 2016 US election debates and the death of Carrie Fisher. The characteristics of the approach, with more than 5 thousand generated topics, are described in detail. The potential of semantic topics to reveal information that is not otherwise easily observable is demonstrated with semantic queries of various complexities. A human evaluation of topics from 36 randomly selected intervals resulted in a precision of 81.0% and an F1 score of 93.3%. Furthermore, the topics are compared with those generated from the same datasets by an approach that produces human-readable topics from microblog post collections.
http://arxiv.org/abs/1804.02158v4
"2018-04-06T07:44:13Z"
cs.IR, cs.SI, H.3.3; I.2.4; I.2.7
2018
Analyzing Self-Driving Cars on Twitter
Rizwan Sadiq, Mohsin Khan
This paper studies users' perceptions regarding a controversial product, namely self-driving (autonomous) cars. To find people's opinions regarding this new technology, we used an annotated Twitter dataset and extracted the topics in positive and negative tweets using an unsupervised probabilistic model known as topic modeling. We then used the topics, as well as linguistic and Twitter-specific features, to classify the sentiment of the tweets. Regarding the opinions, the results of our analysis show that people are optimistic and excited about the future technology, but at the same time they find it dangerous and unreliable. For the classification task, we found Twitter-specific features, such as hashtags, as well as linguistic features, such as emphatic words, among the top attributes in classifying the sentiment of the tweets.
http://arxiv.org/abs/1804.04058v1
"2018-04-05T23:31:44Z"
cs.LG, cs.CL, cs.SI, stat.ML
2018
Computer-Assisted Text Analysis for Social Science: Topic Models and Beyond
Ryan Wesslen
Topic models are a family of statistical algorithms to summarize, explore, and index large collections of text documents. After a decade of research led by computer scientists, topic models have spread to social science as a new generation of data-driven social scientists has searched for tools to explore large collections of unstructured text. Recently, social scientists have contributed to the topic model literature with developments in causal inference and tools for handling the problem of multi-modality. In this paper, I provide a literature review on the evolution of topic modeling, including extensions for document covariates, methods for evaluation and interpretation, and advances in interactive visualizations, along with each aspect's relevance and application for social science research.
http://arxiv.org/abs/1803.11045v2
"2018-03-29T13:11:32Z"
cs.CL
2018
Topic Modeling Based Multi-modal Depression Detection
Yuan Gong, Christian Poellabauer
Major depressive disorder is a common mental disorder that affects almost 7% of the adult U.S. population. The 2017 Audio/Visual Emotion Challenge (AVEC) asks participants to build a model to predict depression levels based on the audio, video, and text of an interview ranging between 7 and 33 minutes. Since averaging features over the entire interview loses most temporal information, discovering, capturing, and preserving useful temporal details in such a long interview are significant challenges. Therefore, we propose a novel topic-modeling-based approach to perform context-aware analysis of the recording. Our experiments show that the proposed approach outperforms context-unaware methods and the challenge baselines on all metrics.
http://arxiv.org/abs/1803.10384v1
"2018-03-28T02:12:48Z"
cs.CL, cs.IR, cs.LG, cs.SD, eess.AS
2018
Scalable Generalized Dynamic Topic Models
Patrick Jähnichen, Florian Wenzel, Marius Kloft, Stephan Mandt
Dynamic topic models (DTMs) model the evolution of prevalent themes in literature, online media, and other forms of text over time. DTMs assume that word co-occurrence statistics change continuously and therefore impose continuous stochastic process priors on their model parameters. These dynamical priors make inference much harder than in regular topic models and also limit scalability. In this paper, we present several new results around DTMs. First, we extend the class of tractable priors from Wiener processes to the generic class of Gaussian processes (GPs). This allows us to explore topics that develop smoothly over time, that have long-term memory, or that are temporally concentrated (for event detection). Second, we show how to perform scalable approximate inference in these models based on ideas around stochastic variational inference and sparse Gaussian processes. In this way, we can fit a rich family of DTMs to massive data. Our experiments on several large-scale datasets show that our generalized model allows us to find interesting patterns that were not accessible to previous approaches.
http://arxiv.org/abs/1803.07868v1
"2018-03-21T11:50:35Z"
stat.ML, cs.LG
2018
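The generalization above replaces Wiener-process priors with generic Gaussian-process priors on topic trajectories. A minimal sketch of what such a prior buys: drawing smooth per-word trajectories from a GP with an RBF kernel and mapping them to time-varying topic-word probabilities via a softmax. The kernel choice and sizes are hypothetical, and inference is omitted.

```python
import numpy as np

def rbf_kernel(t, lengthscale=3.0, var=1.0):
    # Squared-exponential kernel; larger lengthscale -> smoother trajectories.
    d = t[:, None] - t[None, :]
    return var * np.exp(-0.5 * (d / lengthscale) ** 2)

rng = np.random.default_rng(0)
t = np.arange(20.0)                          # 20 time slices
K = rbf_kernel(t) + 1e-8 * np.eye(len(t))    # jitter for numerical stability

# One smooth latent trajectory per word of one topic (5 words here);
# a softmax over words at each slice gives the topic-word distribution.
f = rng.multivariate_normal(np.zeros(len(t)), K, size=5)
beta = np.exp(f) / np.exp(f).sum(axis=0, keepdims=True)
print(beta.shape, beta[:, 0].sum())          # (5, 20); each column sums to 1
```

Swapping the kernel (e.g., for one with long-term memory or a concentrated bump) changes the temporal behavior of the topics, which is the flexibility the paper exploits.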
CuLDA_CGS: Solving Large-scale LDA Problems on GPUs
Xiaolong Xie, Yun Liang, Xiuhong Li, Wei Tan
Latent Dirichlet Allocation (LDA) is a popular topic model. Given the fact that the input corpus of LDA algorithms consists of millions to billions of tokens, the LDA training process is very time-consuming, which may prevent the usage of LDA in many scenarios, e.g., online services. GPUs have benefited modern machine learning algorithms and big data analysis as they can provide high memory bandwidth and computation power. Therefore, many frameworks, e.g., TensorFlow, Caffe, and CNTK, support the use of GPUs for accelerating popular data-intensive machine learning algorithms. However, we observe that LDA solutions on GPUs are not satisfying. In this paper, we present CuLDA_CGS, a GPU-based efficient and scalable approach to accelerate large-scale LDA problems. CuLDA_CGS is designed to efficiently solve LDA problems at high throughput. To this end, we first delicately design the workload partition and synchronization mechanism to exploit the benefits of multiple GPUs. Then, we offload the LDA sampling process to each individual GPU by optimizing from the sampling-algorithm, parallelization, and data-compression perspectives. Evaluations show that, compared with state-of-the-art LDA solutions, CuLDA_CGS outperforms them by a large margin (up to 7.3X) on a single GPU. CuLDA_CGS is able to achieve an extra 3.0X speedup on 4 GPUs. The source code is publicly available at https://github.com/cuMF/CuLDA_CGS.
http://arxiv.org/abs/1803.04631v1
"2018-03-13T05:44:40Z"
cs.DC
2018
WHAI: Weibull Hybrid Autoencoding Inference for Deep Topic Modeling
Hao Zhang, Bo Chen, Dandan Guo, Mingyuan Zhou
To train an inference network jointly with a deep generative topic model, making it both scalable to big corpora and fast in out-of-sample prediction, we develop Weibull hybrid autoencoding inference (WHAI) for deep latent Dirichlet allocation, which infers posterior samples via a hybrid of stochastic-gradient MCMC and autoencoding variational Bayes. The generative network of WHAI has a hierarchy of gamma distributions, while the inference network of WHAI is a Weibull upward-downward variational autoencoder, which integrates a deterministic-upward deep neural network and a stochastic-downward deep generative model based on a hierarchy of Weibull distributions. The Weibull distribution can closely approximate a gamma distribution, has an analytic Kullback-Leibler divergence from it, and has a simple reparameterization via uniform noise; these properties help to efficiently compute the gradients of the evidence lower bound with respect to the parameters of the inference network. The effectiveness and efficiency of WHAI are illustrated with experiments on big corpora.
http://arxiv.org/abs/1803.01328v2
"2018-03-04T09:53:59Z"
stat.ML, stat.AP, stat.CO
2,018
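The Weibull reparameterization the abstract relies on is simple enough to show directly; the sketch below (our code, with illustrative shape and scale values) draws Weibull samples as a deterministic, differentiable transform of uniform noise and checks the empirical mean against the closed-form mean.

```python
# Reparameterized Weibull sampling: if u ~ Uniform(0, 1), then
# lam * (-log(1 - u))**(1/k) ~ Weibull(shape=k, scale=lam).
import numpy as np
from scipy.special import gamma

def sample_weibull(k, lam, rng):
    """Weibull draw as a differentiable transform of uniform noise."""
    u = rng.uniform(size=np.shape(k))
    return lam * (-np.log1p(-u)) ** (1.0 / k)

rng = np.random.default_rng(0)
k, lam = 2.0, 1.5                       # illustrative shape and scale
x = sample_weibull(np.full(100000, k), lam, rng)
# The Weibull mean is lam * Gamma(1 + 1/k); the empirical mean should match.
print(x.mean(), lam * gamma(1 + 1 / k))
```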
Application of Rényi and Tsallis Entropies to Topic Modeling Optimization
Koltcov Sergei
This is a full-length article (draft version) discussing the problem of choosing the number of topics in topic modeling. We propose that the Rényi and Tsallis entropies can be used to identify the optimal number of topics in large textual collections. We also report the results of numerical experiments on the semantic stability of four topic models, which show that semantic stability plays a very important role in the topic-number problem. The calculation of the Rényi and Tsallis entropies is based on a thermodynamic approach.
http://arxiv.org/abs/1802.10526v1
"2018-02-28T16:41:14Z"
stat.ML
2,018
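The two entropies involved are standard quantities; a small sketch (our code, with an illustrative order q) of computing them for, say, a topic-word distribution:

```python
# Renyi and Tsallis entropies of a probability vector; both reduce to
# Shannon entropy in the limit q -> 1.
import numpy as np

def renyi_entropy(p, q):
    """H_q(p) = log(sum_i p_i^q) / (1 - q), for q != 1."""
    p = np.asarray(p, float); p = p[p > 0]
    return np.log(np.sum(p ** q)) / (1.0 - q)

def tsallis_entropy(p, q):
    """S_q(p) = (1 - sum_i p_i^q) / (q - 1), for q != 1."""
    p = np.asarray(p, float); p = p[p > 0]
    return (1.0 - np.sum(p ** q)) / (q - 1.0)

p = np.array([0.5, 0.3, 0.15, 0.05])    # toy topic-word distribution
print(renyi_entropy(p, 2.0), tsallis_entropy(p, 2.0))
```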
ADMM-based Networked Stochastic Variational Inference
Hamza Anwar, Quanyan Zhu
Owing to the recent advances in "Big Data" modeling and prediction tasks, variational Bayesian estimation has gained popularity due to its ability to provide exact solutions to approximate posteriors. One key technique for approximate inference is stochastic variational inference (SVI). SVI poses variational inference as a stochastic optimization problem and solves it iteratively using noisy gradient estimates. It aims to handle massive data for predictive and classification tasks by applying complex Bayesian models that have observed as well as latent variables. This paper aims to decentralize SVI, allowing parallel computation and offering secure-learning and robustness benefits. We use the Alternating Direction Method of Multipliers (ADMM) in a top-down setting to develop a distributed SVI algorithm such that independent learners running inference algorithms only need to share the estimated model parameters instead of their private datasets. Our work extends the distributed SVI-ADMM algorithm that we first propose to an ADMM-based networked SVI algorithm in which the learners not only work distributively but also share information according to the rules of a graph by which they form a network. This kind of work lies under the umbrella of `deep learning over networks', and we verify our algorithm on a topic-modeling problem for a corpus of Wikipedia articles. We illustrate the results on a latent Dirichlet allocation (LDA) topic model for large document classification, compare performance with the centralized algorithm, and use numerical experiments to corroborate the analytical results.
http://arxiv.org/abs/1802.10168v1
"2018-02-27T21:11:56Z"
cs.LG, stat.ML
2,018
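As a rough illustration of the consensus-ADMM pattern this work builds on, the sketch below shows independent learners that share only parameter estimates, never their private data. The quadratic local losses and all names are our assumptions for a toy example, not the paper's SVI objective.

```python
# Consensus ADMM: N learners with private losses f_i(x) = 0.5*||x - a_i||^2
# agree on a shared parameter z while only exchanging estimates.
import numpy as np

rng = np.random.default_rng(0)
a = rng.normal(size=(4, 3))        # 4 learners, each with private 3-dim data
rho = 1.0
x = np.zeros_like(a)               # local primal variables
u = np.zeros_like(a)               # scaled dual variables
z = np.zeros(3)                    # shared consensus variable

for _ in range(100):
    # local x-update: argmin_x f_i(x) + (rho/2)*||x - z + u_i||^2
    x = (a + rho * (z - u)) / (1 + rho)
    # z-update: average of what the learners are willing to share
    z = (x + u).mean(axis=0)
    # dual update enforces consensus x_i = z
    u = u + x - z

print(z, a.mean(axis=0))           # consensus converges to the global average
```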
Classifying Idiomatic and Literal Expressions Using Topic Models and Intensity of Emotions
Jing Peng, Anna Feldman, Ekaterina Vylomova
We describe an algorithm for automatic classification of idiomatic and literal expressions. Our starting point is that words in a given text segment, such as a paragraph, that are high-ranking representatives of a common topic of discussion are less likely to be a part of an idiomatic expression. Our additional hypothesis is that the contexts in which idioms occur are typically more affective; therefore, we incorporate a simple analysis of the intensity of the emotions expressed by the contexts. We investigate the bag-of-words topic representation of one to three paragraphs containing an expression that should be classified as idiomatic or literal (a target phrase). We extract topics from paragraphs containing idioms and from paragraphs containing literals using an unsupervised clustering method, Latent Dirichlet Allocation (LDA) (Blei et al., 2003). Since idiomatic expressions exhibit the property of non-compositionality, we assume that they usually present different semantics than the words used in the local topic. We treat idioms as semantic outliers, and the identification of a semantic shift as outlier detection. Thus, this topic representation allows us to differentiate idioms from literals using local semantic contexts. Our results are encouraging.
http://arxiv.org/abs/1802.09961v1
"2018-02-27T15:20:43Z"
cs.CL
2,018
The Development of Darwin's Origin of Species
Jaimie Murdock, Colin Allen, Simon DeDeo
From 1837, when he returned to England aboard the $\textit{HMS Beagle}$, to 1860, just after publication of $\textit{The Origin of Species}$, Charles Darwin kept detailed notes of each book he read or wanted to read. His notes and manuscripts provide information about decades of individual scientific practice. Previously, we trained topic models on the full texts of each reading, and applied information-theoretic measures to detect that changes in his reading patterns coincided with the boundaries of his three major intellectual projects in the period 1837-1860. In this new work we apply the reading model to five additional documents, four of them by Darwin: the first edition of $\textit{The Origin of Species}$, two private essays stating intermediate forms of his theory in 1842 and 1844, a third essay of disputed dating, and Alfred Russel Wallace's essay, which Darwin received in 1858. We address three historical inquiries, previously treated qualitatively: 1) the mythology of "Darwin's Delay," that despite completing an extensive draft in 1844, Darwin waited until 1859 to publish $\textit{The Origin of Species}$ due to external pressures; 2) the relationship between Darwin and Wallace's contemporaneous theories, especially in light of their joint presentation; and 3) dating of the "Outline and Draft" which was rediscovered in 1975 and postulated first as an 1839 draft preceding the Sketch of 1842, then as an interstitial draft between the 1842 and 1844 essays.
http://arxiv.org/abs/1802.09944v1
"2018-02-26T16:22:14Z"
cs.CL, cs.DL
2,018
Learning Topic Models by Neighborhood Aggregation
Ryohei Hisano
Topic models are frequently used in machine learning owing to their high interpretability and modular structure. However, extending a topic model to include a supervisory signal, to incorporate pre-trained word embedding vectors and to include a nonlinear output function is not an easy task because one has to resort to a highly intricate approximate inference procedure. The present paper shows that topic modeling with pre-trained word embedding vectors can be viewed as implementing a neighborhood aggregation algorithm where messages are passed through a network defined over words. From the network view of topic models, nodes correspond to words in a document and edges correspond to either a relationship describing co-occurring words in a document or a relationship describing the same word in the corpus. The network view allows us to extend the model to include supervisory signals, incorporate pre-trained word embedding vectors and include a nonlinear output function in a simple manner. In experiments, we show that our approach outperforms the state-of-the-art supervised Latent Dirichlet Allocation implementation in terms of held-out document classification tasks.
http://arxiv.org/abs/1802.08012v6
"2018-02-22T12:39:59Z"
stat.ML, cs.LG
2,018
Aspect-Aware Latent Factor Model: Rating Prediction with Ratings and Reviews
Zhiyong Cheng, Ying Ding, Lei Zhu, Mohan Kankanhalli
Although latent factor models (e.g., matrix factorization) achieve good accuracy in rating prediction, they suffer from several problems including cold-start, non-transparency, and suboptimal recommendation for local users or items. In this paper, we employ textual review information with ratings to tackle these limitations. Firstly, we apply a proposed aspect-aware topic model (ATM) on the review text to model user preferences and item features from different aspects, and estimate the aspect importance of a user towards an item. The aspect importance is then integrated into a novel aspect-aware latent factor model (ALFM), which learns users' and items' latent factors based on ratings. In particular, ALFM introduces a weighted matrix to associate those latent factors with the same set of aspects discovered by ATM, such that the latent factors can be used to estimate aspect ratings. Finally, the overall rating is computed via a linear combination of the aspect ratings, which are weighted by the corresponding aspect importance. As a result, our model can alleviate the data sparsity problem and gains good interpretability for recommendation. Besides, an aspect rating is weighted by an aspect importance, which is dependent on the targeted user's preferences and the targeted item's features. Therefore, it is expected that the proposed method can model a user's preferences on an item more accurately for each user-item pair locally. Comprehensive experimental studies have been conducted on 19 datasets from Amazon and the Yelp 2017 Challenge dataset. Results show that our method achieves significant improvement compared with strong baseline methods, especially for users with only a few ratings. Moreover, our model can interpret the recommendation results in depth.
http://arxiv.org/abs/1802.07938v1
"2018-02-22T08:23:51Z"
cs.IR
2,018
Discovering Hidden Topical Hubs and Authorities in Online Social Networks
Roy Ka-Wei Lee, Tuan-Anh Hoang, Ee-Peng Lim
Finding influential users in online social networks is an important problem with many possible useful applications. HITS and other link analysis methods, in particular, have often been used to identify hub and authority users in web graphs and online social networks. These works, however, have not considered the topical aspect of links in their analysis. A straightforward approach to overcome this limitation is to first apply topic models to learn the user topics before applying the HITS algorithm. In this paper, we instead propose a novel topic model known as the Hub and Authority Topic (HAT) model to combine the two processes so as to jointly learn hub and authority roles together with topical interests. We evaluate HAT against several existing state-of-the-art methods in two aspects: (i) modeling of topics, and (ii) link recommendation. We conduct experiments on two real-world datasets from Twitter and Instagram. Our experiment results show that HAT is comparable to state-of-the-art topic models in learning topics and it outperforms the state-of-the-art in the link recommendation task.
http://arxiv.org/abs/1802.07022v1
"2018-02-20T09:17:01Z"
cs.SI, cs.IR
2,018
Learning Hidden Markov Models from Pairwise Co-occurrences with Application to Topic Modeling
Kejun Huang, Xiao Fu, Nicholas D. Sidiropoulos
We present a new algorithm for identifying the transition and emission probabilities of a hidden Markov model (HMM) from the emitted data. Expectation-maximization becomes computationally prohibitive for long observation records, which are often required for identification. The new algorithm is particularly suitable for cases where the available sample size is large enough to accurately estimate second-order output probabilities, but not higher-order ones. We show that if one is only able to obtain a reliable estimate of the pairwise co-occurrence probabilities of the emissions, it is still possible to uniquely identify the HMM if the emission probability is \emph{sufficiently scattered}. We apply our method to hidden topic Markov modeling, and demonstrate that we can learn topics with higher quality if documents are modeled as observations of HMMs sharing the same emission (topic) probability, compared to the simple but widely used bag-of-words model.
http://arxiv.org/abs/1802.06894v2
"2018-02-19T22:33:56Z"
cs.CL, cs.LG, eess.SP, stat.ML
2,018
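The second-order statistic at the heart of the preceding method is easy to estimate; the sketch below (names ours) forms the pairwise co-occurrence matrix of consecutive emissions from observed sequences, which is the input the identification step would consume.

```python
# Estimate Omega[i, j] ~= P(y_t = i, y_{t+1} = j) by counting consecutive
# emission pairs across all observation sequences and normalizing.
import numpy as np

def pairwise_cooccurrence(sequences, n_symbols):
    omega = np.zeros((n_symbols, n_symbols))
    for seq in sequences:
        for y_t, y_next in zip(seq[:-1], seq[1:]):
            omega[y_t, y_next] += 1.0
    return omega / omega.sum()      # normalize counts to a joint probability

seqs = [[0, 1, 1, 2, 0], [2, 2, 1, 0], [1, 0, 2, 2, 2, 1]]
omega = pairwise_cooccurrence(seqs, n_symbols=3)
print(omega)
```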
Framing Matters: Predicting Framing Changes and Legislation from Topic News Patterns
Karthik Sheshadri, Chung-Wei Hang, Munindar Singh
News has traditionally been well researched, with studies ranging from sentiment analysis to event detection and topic tracking. We extend the focus to two surprisingly under-researched aspects of news: \emph{framing} and \emph{predictive utility}. We demonstrate that framing influences public opinion and behavior, and present a simple entropic algorithm to characterize and detect framing changes. We introduce a dataset of news topics with framing changes, harvested from manual surveys in previous research. Our approach achieves an F-measure of $F_1=0.96$ on our data, whereas dynamic topic modeling returns $F_1=0.1$. We also establish that news has \emph{predictive utility}, by showing that legislation in topics of current interest can be foreshadowed and predicted from news patterns.
http://arxiv.org/abs/1802.05762v1
"2018-02-15T21:06:24Z"
cs.CY
2,018
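The abstract above does not spell out its entropic algorithm, but one plausible reading is a comparison of topic-distribution entropies over time; the sketch below is purely our guess at that flavor, with an arbitrary threshold, and should not be taken as the paper's method.

```python
# A hedged sketch of entropy-based framing-change detection: track the
# Shannon entropy of per-period topic mixtures for a news issue and flag
# periods where the entropy jumps.
import numpy as np

def shannon_entropy(p):
    p = np.asarray(p, float); p = p[p > 0]
    return -np.sum(p * np.log(p))

def framing_changes(topic_dists, threshold=0.2):
    """Flag time steps where entropy changes by more than the threshold."""
    H = np.array([shannon_entropy(p) for p in topic_dists])
    return np.where(np.abs(np.diff(H)) > threshold)[0] + 1

# toy per-week topic mixtures for one news issue
weeks = [[0.7, 0.2, 0.1], [0.68, 0.22, 0.1], [0.34, 0.33, 0.33]]
print(framing_changes(weeks))       # the third week (index 2) is flagged
```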
Attention based Sentence Extraction from Scientific Articles using Pseudo-Labeled data
Parth Mehta, Gaurav Arora, Prasenjit Majumder
In this work, we present a weakly supervised sentence extraction technique for identifying important sentences in scientific papers that are worthy of inclusion in the abstract. We propose a new attention-based deep learning architecture that jointly learns to identify important content, as well as the cue phrases that are indicative of summary-worthy sentences. We propose a new context embedding technique for determining the focus of a given paper using topic models and use it jointly with an LSTM-based sequence encoder to learn attention weights across the sentence words. We use a collection of articles publicly available through the ACL anthology for our experiments. Our system achieves better performance, in terms of several ROUGE metrics, than several state-of-the-art extractive techniques. It also generates more coherent summaries and preserves the overall structure of the document.
http://arxiv.org/abs/1802.04675v1
"2018-02-13T15:13:28Z"
cs.IR, cs.AI, cs.CL
2,018
Large-Scale Validation of Hypothesis Generation Systems via Candidate Ranking
Justin Sybrandt, Michael Shtutman, Ilya Safro
The first step of many research projects is to define and rank a short list of candidates for study. In the modern rapidity of scientific progress, some turn to automated hypothesis generation (HG) systems to aid this process. These systems can identify implicit or overlooked connections within a large scientific corpus, and while their importance grows alongside the pace of science, they lack thorough validation. Without any standard numerical evaluation method, many validate general-purpose HG systems by rediscovering a handful of historical findings, and some wishing to be more thorough may run laboratory experiments based on automatic suggestions. These methods are expensive, time consuming, and cannot scale. Thus, we present a numerical evaluation framework for the purpose of validating HG systems that leverages thousands of validation hypotheses. This method evaluates an HG system by its ability to rank hypotheses by plausibility, a process reminiscent of human candidate selection. Because HG systems, specifically those that produce topic models, do not produce a ranking criterion, we additionally present novel metrics to quantify the plausibility of hypotheses given topic model system output. Finally, we demonstrate that our proposed validation method aligns with real-world research goals by deploying our method within Moliere, our recent topic-driven HG system, in order to automatically generate a set of candidate genes related to HIV-associated neurodegenerative disease (HAND). By performing laboratory experiments based on this candidate set, we discover a new connection between HAND and Dead Box RNA Helicase 3 (DDX3). Reproducibility: code, validation data, and results can be found at sybrandt.com/2018/validation.
http://arxiv.org/abs/1802.03793v4
"2018-02-11T19:04:49Z"
cs.IR, cs.CL
2,018
Mining Public Opinion about Economic Issues: Twitter and the U.S. Presidential Election
Amir Karami, London S. Bennett, Xiaoyun He
Opinion polls have been the bridge between public opinion and politicians in elections. However, developing surveys to disclose people's feedback with respect to economic issues is limited, expensive, and time-consuming. In recent years, social media such as Twitter has enabled people to share their opinions regarding elections. Social media has provided a platform for collecting a large amount of social media data. This paper proposes a computational public opinion mining approach to explore the discussion of economic issues in social media during an election. Current related studies use text mining methods independently for election analysis and election prediction; this research combines two text mining methods: sentiment analysis and topic modeling. The proposed approach has effectively been deployed on millions of tweets to analyze economic concerns of people during the 2012 US presidential election.
http://arxiv.org/abs/1802.01786v1
"2018-02-06T03:55:37Z"
cs.SI, cs.CL, cs.IR, stat.AP, stat.ML
2,018
Robust Vertex Enumeration for Convex Hulls in High Dimensions
Pranjal Awasthi, Bahman Kalantari, Yikai Zhang
Computation of the vertices of the convex hull of a set $S$ of $n$ points in $\mathbb{R} ^m$ is a fundamental problem in computational geometry, optimization, machine learning and more. We present the "All Vertex Triangle Algorithm" (AVTA), a robust and efficient algorithm for computing the subset $\overline S$ of all $K$ vertices of $conv(S)$, the convex hull of $S$. If $\Gamma_*$ is the minimum of the distances from each vertex to the convex hull of the remaining vertices, given any $\gamma \leq \gamma_* = \Gamma_*/R$, $R$ the diameter of $S$, $AVTA$ computes $\overline S$ in $O(nK(m+ \gamma^{-2}))$ operations. If $\gamma_*$ is unknown but $K$ is known, AVTA computes $\overline S$ in $O(nK(m+ \gamma_*^{-2})) \log(\gamma_*^{-1})$ operations. More generally, given $t \in (0,1)$, AVTA computes a subset $\overline S^t$ of $\overline S$ in $O(n |\overline S^t|(m+ t^{-2}))$ operations, where the distance between any $p \in conv(S)$ to $conv(\overline S^t)$ is at most $t R$. Next we consider AVTA where the input is $S_\varepsilon$, an $\varepsilon$ perturbation of $S$. Assuming a bound on $\varepsilon$ in terms of the minimum of the distances of vertices of $conv(S)$ to the convex hull of the remaining points of $S$, we derive analogous complexity bounds for computing $\overline S_\varepsilon$. We also analyze AVTA under random projections of $S$ or $S_\varepsilon$. Finally, via AVTA we design new practical algorithms for two popular machine learning problems: topic modeling and non-negative matrix factorization. For topic models, AVTA leads to significantly better reconstruction of the topic-word matrix than state-of-the-art approaches~\cite{arora2013practical, bansal2014provable}. For non-negative matrix factorization, AVTA is competitive with existing methods~\cite{arora2012computing}. Empirically, AVTA is robust and can handle larger amounts of noise than existing methods.
http://arxiv.org/abs/1802.01515v2
"2018-02-05T17:13:16Z"
cs.CG, 90C05, 90C25, 65D18, 32C37, G.1.6; I.3.5; I.2.0
2,018
An Instability in Variational Inference for Topic Models
Behrooz Ghorbani, Hamid Javadi, Andrea Montanari
Topic models are Bayesian models that are frequently used to capture the latent structure of certain corpora of documents or images. Each data element in such a corpus (for instance each item in a collection of scientific articles) is regarded as a convex combination of a small number of vectors corresponding to `topics' or `components'. The weights are assumed to have a Dirichlet prior distribution. The standard approach towards approximating the posterior is to use variational inference algorithms, and in particular a mean field approximation. We show that this approach suffers from an instability that can produce misleading conclusions. Namely, for certain regimes of the model parameters, variational inference outputs a non-trivial decomposition into topics. However --for the same parameter values-- the data contain no actual information about the true decomposition, and hence the output of the algorithm is uncorrelated with the true topic decomposition. Among other consequences, the estimated posterior mean is significantly wrong, and estimated Bayesian credible regions do not achieve the nominal coverage. We discuss how this instability is remedied by more accurate mean field approximations.
http://arxiv.org/abs/1802.00568v1
"2018-02-02T05:52:48Z"
stat.ML
2,018
On the Topic of Jets: Disentangling Quarks and Gluons at Colliders
Eric M. Metodiev, Jesse Thaler
We introduce jet topics: a framework to identify underlying classes of jets from collider data. Because of a close mathematical relationship between distributions of observables in jets and emergent themes in sets of documents, we can apply recent techniques in "topic modeling" to extract jet topics from data with minimal or no input from simulation or theory. As a proof of concept with parton shower samples, we apply jet topics to determine separate quark and gluon jet distributions for constituent multiplicity. We also determine separate quark and gluon rapidity spectra from a mixed Z-plus-jet sample. While jet topics are defined directly from hadron-level multi-differential cross sections, one can also predict jet topics from first-principles theoretical calculations, with potential implications for how to define quark and gluon jets beyond leading-logarithmic accuracy. These investigations suggest that jet topics will be useful for extracting underlying jet distributions and fractions in a wide range of contexts at the Large Hadron Collider.
http://arxiv.org/abs/1802.00008v2
"2018-01-31T19:00:00Z"
hep-ph, hep-ex, stat.ML
2,018
Netizen-Style Commenting on Fashion Photos: Dataset and Diversity Measures
Wen Hua Lin, Kuan-Ting Chen, Hung Yueh Chiang, Winston Hsu
Recently, deep neural network models have achieved promising results in the image captioning task. Yet, the "vanilla" sentences generated by current works, which only describe shallow appearances (e.g., types, colors), do not satisfy the netizen style, resulting in a lack of engagement, context, and user intention. To tackle this problem, we propose Netizen Style Commenting (NSC), to automatically generate characteristic comments for a user-contributed fashion photo. We are devoted to modulating the comments in a vivid "netizen" style that reflects the culture of a designated social community, hoping to facilitate more engagement with users. In this work, we design a novel framework that consists of three major components: (1) We construct a large-scale clothing dataset named NetiLook, which contains 300K posts (photos) with 5M comments, to discover netizen-style comments. (2) We propose three unique measures to estimate the diversity of comments. (3) We bring diversity by marrying topic models with neural networks to make up for the insufficiency of conventional image captioning works. Experimenting over Flickr30k and our NetiLook datasets, we demonstrate that our proposed approach benefits fashion photo commenting and improves image captioning in both accuracy and diversity.
http://arxiv.org/abs/1801.10300v1
"2018-01-31T05:08:58Z"
cs.CV
2,018
Creative Exploration Using Topic Based Bisociative Networks
Faez Ahmed, Mark Fuge
Bisociative knowledge discovery is an approach that combines elements from two or more "incompatible" domains to generate creative solutions and insight. Inspired by Koestler's notion of bisociation, in this paper we propose a computational framework for the discovery of new connections between domains to promote creative discovery and inspiration in design. Specifically, we propose using topic models on a large collection of unstructured text ideas from multiple domains to discover creative sources of inspiration. We use these topics to generate a Bisociative Information Network--- a graph that captures conceptual similarity between ideas--- that helps designers find creative links within that network. Using a dataset of thousands of ideas from OpenIDEO, an online collaborative community, our results show usefulness of representing conceptual bridges through collections of words (topics) in finding cross-domain inspiration. We show that the discovered links between domains, whether presented on their own or via ideas they inspired, are perceived to be more novel and can also be used as creative stimuli for new idea generation.
http://arxiv.org/abs/1801.10084v1
"2018-01-30T16:20:14Z"
cs.SI, cs.IR
2,018
Bayesian Nonparametric Modeling of Driver Behavior using HDP Split-Merge Sampling Algorithm
Vadim Smolyakov, Julian Straub, Sue Zheng, John W. Fisher III
Modern vehicles are equipped with increasingly complex sensors. These sensors generate large volumes of data that provide opportunities for modeling and analysis. Here, we are interested in exploiting this data to learn aspects of behaviors and the road network associated with individual drivers. Our dataset is collected on a standard vehicle used to commute to work and for personal trips. A Hidden Markov Model (HMM) trained on the GPS position and orientation data is utilized to compress the large amount of position information into a small number of road segment states. Each state has a set of observations, i.e. car signals, associated with it that are quantized and modeled as draws from a Hierarchical Dirichlet Process (HDP). The inference for the topic distributions is carried out using the HDP split-merge sampling algorithm. The topic distributions over joint quantized car signals characterize the driving situation in the respective road state. In a novel manner, we demonstrate how the sparsity of the personal road network of a driver in conjunction with a hierarchical topic model allows data driven predictions about destinations as well as likely road conditions.
http://arxiv.org/abs/1801.09150v1
"2018-01-27T23:25:08Z"
stat.ML
2,018
A Large-Scale Empirical Comparison of Static and Dynamic Test Case Prioritization Techniques
Qi Luo, Kevin Moran, Denys Poshyvanyk
The large body of existing research in Test Case Prioritization (TCP) techniques can be broadly classified into two categories: dynamic techniques (that rely on run-time execution information) and static techniques (that operate directly on source and test code). Absent from this current body of work is a comprehensive study aimed at understanding and evaluating the static approaches and comparing them to dynamic approaches on a large set of projects. In this work, we perform the first extensive study aimed at empirically evaluating four static TCP techniques, comparing them with state-of-research dynamic TCP techniques at different test-case granularities (e.g., method and class-level) in terms of effectiveness, efficiency and similarity of faults detected. This study was performed on 30 real-world Java programs encompassing 431 KLoC. In terms of effectiveness, we find that the static call-graph-based technique outperforms the other static techniques at test-class level, but the topic-model-based technique performs better at test-method level. In terms of efficiency, the static call-graph-based technique is also the most efficient when compared to other static techniques. When examining the similarity of faults detected for the four static techniques compared to the four dynamic ones, we find that on average, the faults uncovered by these two groups of techniques are quite dissimilar, with the top 10% of test cases agreeing on only 25% - 30% of detected faults. This prompts further research into the severity/importance of faults uncovered by these techniques, and into the potential for combining static and dynamic information for more effective approaches.
http://arxiv.org/abs/1801.05917v1
"2018-01-18T02:58:11Z"
cs.SE
2,018
ProvThreads: Analytic Provenance Visualization and Segmentation
Sina Mohseni, Alyssa Pena, Eric D. Ragan
Our work aims to generate visualizations to enable meta-analysis of analytic provenance and aid better understanding of analysts' strategies during exploratory text analysis. We introduce ProvThreads, a visual analytics approach that incorporates interactive topic modeling outcomes to illustrate relationships between user interactions and the data topics under investigation. ProvThreads uses a series of continuous analysis paths called topic threads to demonstrate both topic coverage and the progression of an investigation over time. As an analyst interacts with different pieces of data during the analysis, interactions are logged and used to track user interests in topics over time. A line chart shows different amounts of interest in multiple topics over the duration of the analysis. We discuss how different configurations of ProvThreads can be used to reveal changes in focus throughout an analysis.
http://arxiv.org/abs/1801.05469v1
"2018-01-16T20:06:03Z"
cs.HC
2,018
Latent nested nonparametric priors
Federico Camerlenghi, David B. Dunson, Antonio Lijoi, Igor Prünster, Abel Rodríguez
Discrete random structures are important tools in Bayesian nonparametrics and the resulting models have proven effective in density estimation, clustering, topic modeling and prediction, among others. In this paper, we consider nested processes and study the dependence structures they induce. Dependence ranges between homogeneity, corresponding to full exchangeability, and maximum heterogeneity, corresponding to (unconditional) independence across samples. The popular nested Dirichlet process is shown to degenerate to the fully exchangeable case when there are ties across samples at the observed or latent level. To overcome this drawback, inherent to nesting general discrete random measures, we introduce a novel class of latent nested processes. These are obtained by adding common and group-specific completely random measures and, then, normalising to yield dependent random probability measures. We provide results on the partition distributions induced by latent nested processes, and develop a Markov chain Monte Carlo sampler for Bayesian inference. A test for distributional homogeneity across groups is obtained as a by-product. The results and their inferential implications are showcased on synthetic and real data.
http://arxiv.org/abs/1801.05048v1
"2018-01-15T22:12:52Z"
math.ST, stat.ME, stat.TH
2,018
Topic Modeling on Health Journals with Regularized Variational Inference
Robert Giaquinto, Arindam Banerjee
Topic modeling enables exploration and compact representation of a corpus. The CaringBridge (CB) dataset is a massive collection of journals written by patients and caregivers during a health crisis. Topic modeling on the CB dataset, however, is challenging due to the asynchronous nature of multiple authors writing about their health journeys. To overcome this challenge we introduce the Dynamic Author-Persona topic model (DAP), a probabilistic graphical model designed for temporal corpora with multiple authors. The novelty of the DAP model lies in its representation of authors by a persona --- where personas capture the propensity to write about certain topics over time. Further, we present a regularized variational inference algorithm, which we use to encourage the DAP model's personas to be distinct. Our results show significant improvements over competing topic models --- particularly after regularization, and highlight the DAP model's unique ability to capture common journeys shared by different authors.
http://arxiv.org/abs/1801.04958v1
"2018-01-15T19:23:21Z"
cs.CL, cs.LG, stat.ML
2,018
Between an Arena and a Sports Bar: Online Chats of eSports Spectators
Denis Bulygin, Ilya Musabirov, Alena Suvorova, Ksenia Konstantinova, Pavel Okopnyi
Hundreds of thousands of spectators use Twitch.tv to watch The International, a Dota 2 eSports tournament, and communicate in massive chats. In this paper, we analyse crowd behavior in these chats and disentangle features of social communication, such as the contextual meanings of emojis and short messages. We apply structural topic modelling and cross-correlation analysis to investigate topical and temporal patterns of chat messages and their relation to in-game events. We show that in-game events drive the communication in the massive chat and define its emergent topical structure to varying extents. Following the discussion in the communication and social computing literature, we discuss these findings in the framework of analysis of communication among physical sports crowds and outline some limitations of the 'stadium' metaphor, suggesting a complementary metaphor of the 'sports bar' as a useful analytical and design device.
http://arxiv.org/abs/1801.02862v3
"2018-01-09T10:12:06Z"
cs.HC
2,018
Knowledge-based Word Sense Disambiguation using Topic Models
Devendra Singh Chaplot, Ruslan Salakhutdinov
Word Sense Disambiguation is an open problem in Natural Language Processing which is particularly challenging and useful in the unsupervised setting where all the words in any given text need to be disambiguated without using any labeled data. Typically WSD systems use the sentence or a small window of words around the target word as the context for disambiguation because their computational complexity scales exponentially with the size of the context. In this paper, we leverage the formalism of topic model to design a WSD system that scales linearly with the number of words in the context. As a result, our system is able to utilize the whole document as the context for a word to be disambiguated. The proposed method is a variant of Latent Dirichlet Allocation in which the topic proportions for a document are replaced by synset proportions. We further utilize the information in the WordNet by assigning a non-uniform prior to synset distribution over words and a logistic-normal prior for document distribution over synsets. We evaluate the proposed method on Senseval-2, Senseval-3, SemEval-2007, SemEval-2013 and SemEval-2015 English All-Word WSD datasets and show that it outperforms the state-of-the-art unsupervised knowledge-based WSD system by a significant margin.
http://arxiv.org/abs/1801.01900v1
"2018-01-05T19:20:24Z"
cs.CL, cs.LG
2,018
Advice from the Oracle: Really Intelligent Information Retrieval
Michael J. Kurtz
What is "intelligent" information retrieval? Essentially this is asking what intelligence is; in this article I will attempt to show some of the aspects of human intelligence as related to information retrieval. I will do this by the device of a semi-imaginary Oracle. Every Observatory has an oracle, someone who is a distinguished scientist, has great administrative responsibilities, acts as mentor to a number of less senior people, and as trusted advisor to even the most accomplished scientists, and knows essentially everyone in the field. In an appendix I will present a brief summary of the Statistical Factor Space method for text indexing and retrieval, and indicate how it will be used in the Astrophysics Data System Abstract Service. Keywords: Personal Digital Assistant; Supervised Topic Models
http://arxiv.org/abs/1801.00815v1
"2018-01-02T19:36:22Z"
cs.AI, astro-ph.IM, physics.soc-ph
2,018
Topic Compositional Neural Language Model
Wenlin Wang, Zhe Gan, Wenqi Wang, Dinghan Shen, Jiaji Huang, Wei Ping, Sanjeev Satheesh, Lawrence Carin
We propose a Topic Compositional Neural Language Model (TCNLM), a novel method designed to simultaneously capture both the global semantic meaning and the local word ordering structure in a document. The TCNLM learns the global semantic coherence of a document via a neural topic model, and the probability of each learned latent topic is further used to build a Mixture-of-Experts (MoE) language model, where each expert (corresponding to one topic) is a recurrent neural network (RNN) that accounts for learning the local structure of a word sequence. In order to train the MoE model efficiently, a matrix factorization method is applied, by extending each weight matrix of the RNN to be an ensemble of topic-dependent weight matrices. The degree to which each member of the ensemble is used is tied to the document-dependent probability of the corresponding topics. Experimental results on several corpora show that the proposed approach outperforms both a pure RNN-based model and other topic-guided language models. Further, our model yields sensible topics, and also has the capacity to generate meaningful sentences conditioned on given topics.
http://arxiv.org/abs/1712.09783v3
"2017-12-28T08:05:48Z"
cs.LG, cs.CL
2,017
Multilingual Topic Models
Kriste Krstovski, Michael J. Kurtz, David A. Smith, Alberto Accomazzi
Scientific publications have evolved several features for mitigating vocabulary mismatch when indexing, retrieving, and computing similarity between articles. These mitigation strategies range from simply focusing on high-value article sections, such as titles and abstracts, to assigning keywords, often from controlled vocabularies, either manually or through automatic annotation. Various document representation schemes possess different cost-benefit tradeoffs. In this paper, we propose to model different representations of the same article as translations of each other, all generated from a common latent representation in a multilingual topic model. We start with a methodological overview on latent variable models for parallel document representations that could be used across many information science tasks. We then show how solving the inference problem of mapping diverse representations into a shared topic space allows us to evaluate representations based on how topically similar they are to the original article. In addition, our proposed approach provides means to discover where different concept vocabularies require improvement.
http://arxiv.org/abs/1712.06704v1
"2017-12-18T22:45:20Z"
stat.ML, cs.CL, cs.IR
2,017
Exchangeable modelling of relational data: checking sparsity, train-test splitting, and sparse exchangeable Poisson matrix factorization
Victor Veitch, Ekansh Sharma, Zacharie Naulet, Daniel M. Roy
A variety of machine learning tasks---e.g., matrix factorization, topic modelling, and feature allocation---can be viewed as learning the parameters of a probability distribution over bipartite graphs. Recently, a new class of models for networks, the sparse exchangeable graphs, have been introduced to resolve some important pathologies of traditional approaches to statistical network modelling; most notably, the inability to model sparsity (in the asymptotic sense). The present paper explains some practical insights arising from this work. We first show how to check if sparsity is relevant for modelling a given (fixed size) dataset by using network subsampling to identify a simple signature of sparsity. We discuss the implications of the (sparse) exchangeable subsampling theory for test-train dataset splitting; we argue common approaches can lead to biased results, and we propose a principled alternative. Finally, we study sparse exchangeable Poisson matrix factorization as a worked example. In particular, we show how to adapt mean field variational inference to the sparse exchangeable setting, allowing us to scale inference to huge datasets.
http://arxiv.org/abs/1712.02311v1
"2017-12-06T18:20:14Z"
stat.ML
2,017
Measuring the Popularity of Job Skills in Recruitment Market: A Multi-Criteria Approach
Tong Xu, Hengshu Zhu, Chen Zhu, Pan Li, Hui Xiong
To cope with the accelerating pace of technological changes, talents are urged to add and refresh their skills for staying in active and gainful employment. This raises a natural question: what are the right skills to learn? Indeed, it is a nontrivial task to measure the popularity of job skills due to the diversified criteria of jobs and the complicated connections within job skills. To that end, in this paper, we propose a data driven approach for modeling the popularity of job skills based on the analysis of large-scale recruitment data. Specifically, we first build a job skill network by exploring a large corpus of job postings. Then, we develop a novel Skill Popularity based Topic Model (SPTM) for modeling the generation of the skill network. In particular, SPTM can integrate different criteria of jobs (e.g., salary levels, company size) as well as the latent connections within skills, thus we can effectively rank the job skills based on their multi-faceted popularity. Extensive experiments on real-world recruitment data validate the effectiveness of SPTM for measuring the popularity of job skills, and also reveal some interesting rules, such as the popular job skills which lead to high-paid employment.
http://arxiv.org/abs/1712.03087v1
"2017-12-06T11:18:40Z"
cs.CY, 68T30
2,017
Topics and Label Propagation: Best of Both Worlds for Weakly Supervised Text Classification
Sachin Pawar, Nitin Ramrakhiyani, Swapnil Hingmire, Girish K. Palshikar
We propose a Label Propagation based algorithm for weakly supervised text classification. We construct a graph where each document is represented by a node and edge weights represent similarities among the documents. Additionally, we discover underlying topics using Latent Dirichlet Allocation (LDA) and enrich the document graph by including the topics in the form of additional nodes. The edge weights between a topic and a text document represent level of "affinity" between them. Our approach does not require document level labelling, instead it expects manual labels only for topic nodes. This significantly minimizes the level of supervision needed as only a few topics are observed to be enough for achieving sufficiently high accuracy. The Label Propagation Algorithm is employed on this enriched graph to propagate labels among the nodes. Our approach combines the advantages of Label Propagation (through document-document similarities) and Topic Modelling (for minimal but smart supervision). We demonstrate the effectiveness of our approach on various datasets and compare with state-of-the-art weakly supervised text classification approaches.
http://arxiv.org/abs/1712.02767v1
"2017-12-04T06:05:21Z"
cs.CL, cs.LG
2,017
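To illustrate the propagation step on the enriched document-plus-topic graph above, here is a generic Zhou et al.-style label propagation iteration; we assume this flavor only for illustration (the paper's exact LPA may differ), and the graph, seed labels, and names are ours.

```python
# Label propagation on a weighted graph where only a few "topic" nodes
# carry seed labels; labels spread to the unlabeled document nodes.
import numpy as np

def propagate(W, Y, alpha=0.8, iters=100):
    """Iterate F <- alpha * S @ F + (1 - alpha) * Y, S symmetric-normalized."""
    d = W.sum(axis=1)
    S = W / np.sqrt(np.outer(d, d))
    F = Y.copy()
    for _ in range(iters):
        F = alpha * S @ F + (1 - alpha) * Y
    return F.argmax(axis=1)                   # predicted class per node

# 5 nodes: 3 documents (unlabeled) + 2 topic nodes (manually labeled)
W = np.array([[0, 1, 0, 1, 0],
              [1, 0, 1, 1, 0],
              [0, 1, 0, 0, 1],
              [1, 1, 0, 0, 0],
              [0, 0, 1, 0, 0]], float)
Y = np.zeros((5, 2)); Y[3, 0] = 1; Y[4, 1] = 1   # seed labels on topic nodes
print(propagate(W, Y))                            # labels inferred for all nodes
```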
Joint Topic-Semantic-aware Social Recommendation for Online Voting
Hongwei Wang, Jia Wang, Miao Zhao, Jiannong Cao, Minyi Guo
Online voting is an emerging feature in social networks, in which users can express their attitudes toward various issues and show their unique interests. Online voting imposes new challenges on recommendation, because the propagation of votings heavily depends on the structure of social networks as well as the content of votings. In this paper, we investigate how to utilize these two factors in a comprehensive manner when doing voting recommendation. First, because existing text mining methods such as topic models and semantic models cannot properly process voting content, which is typically short and ambiguous, we propose a novel Topic-Enhanced Word Embedding (TEWE) method to learn word and document representations by jointly considering their topics and semantics. Then we propose our Joint Topic-Semantic-aware social Matrix Factorization (JTS-MF) model for voting recommendation. The JTS-MF model calculates similarity among users and votings by combining their TEWE representation and structural information of social networks, and preserves this topic-semantic-social similarity during matrix factorization. To evaluate the performance of the TEWE representation and the JTS-MF model, we conduct extensive experiments on a real online voting dataset. The results prove the efficacy of our approach against several state-of-the-art baselines.
http://arxiv.org/abs/1712.00731v1
"2017-12-03T08:11:43Z"
stat.ML, cs.IR, cs.SI
2,017
Survival-Supervised Topic Modeling with Anchor Words: Characterizing Pancreatitis Outcomes
George H. Chen, Jeremy C. Weiss
We introduce a new approach for topic modeling that is supervised by survival analysis. Specifically, we build on recent work on unsupervised topic modeling with so-called anchor words by providing supervision through an elastic-net regularized Cox proportional hazards model. In short, an anchor word being present in a document provides strong indication that the document is partially about a specific topic. For example, by seeing "gallstones" in a document, we are fairly certain that the document is partially about medicine. Our proposed method alternates between learning a topic model and learning a survival model to find a local minimum of a block convex optimization problem. We apply our proposed approach to predicting how long patients with pancreatitis admitted to an intensive care unit (ICU) will stay in the ICU. Our approach is as accurate as the best of a variety of baselines while being more interpretable than any of the baselines.
http://arxiv.org/abs/1712.00535v2
"2017-12-02T01:57:35Z"
stat.ML
2,017
Detection and Characterization of Illegal Marketing and Promotion of Prescription Drugs on Twitter
Janani Kalyanam, Timothy Mackey
Illicit online pharmacies allow the purchase of prescription drugs online without a prescription. Such pharmacies leverage social media platforms such as Twitter as a promotion and marketing tool with the intent of reaching out to a larger, potentially younger demographic of the population. Given the serious negative health effects that arise from abusing such drugs, it is important to identify the relevant content on social media and exterminate their presence as quickly as possible. In response, we collected all the tweets that contained the names of certain preselected controlled substances over a period of 5 months. We found that an unsupervised topic modeling based methodology is able to identify tweets that promote and market controlled substances with high precision. We also study the meta-data characteristics of such tweets and the users who post them, and find that they have several distinguishing characteristics that set them apart. We were able to train supervised methods and achieve high performance in detecting such content and the users who post it.
http://arxiv.org/abs/1712.00507v1
"2017-12-01T22:34:07Z"
cs.CY
2,017
Prediction-Constrained Topic Models for Antidepressant Recommendation
Michael C. Hughes, Gabriel Hope, Leah Weiner, Thomas H. McCoy, Roy H. Perlis, Erik B. Sudderth, Finale Doshi-Velez
Supervisory signals can help topic models discover low-dimensional data representations that are more interpretable for clinical tasks. We propose a framework for training supervised latent Dirichlet allocation that balances two goals: faithful generative explanations of high-dimensional data and accurate prediction of associated class labels. Existing approaches fail to balance these goals by not properly handling a fundamental asymmetry: the intended task is always predicting labels from data, not data from labels. Our new prediction-constrained objective trains models that predict labels from heldout data well while also producing good generative likelihoods and interpretable topic-word parameters. In a case study on predicting depression medications from electronic health records, we demonstrate improved recommendations compared to previous supervised topic models and high-dimensional logistic regression from words alone.
http://arxiv.org/abs/1712.00499v1
"2017-12-01T21:24:26Z"
cs.LG, stat.ML
2,017
Feature discovery and visualization of robot mission data using convolutional autoencoders and Bayesian nonparametric topic models
Genevieve Flaspohler, Nicholas Roy, Yogesh Girdhar
The gap between our ability to collect interesting data and our ability to analyze these data is growing at an unprecedented rate. Recent algorithmic attempts to fill this gap have employed unsupervised tools to discover structure in data. Some of the most successful approaches have used probabilistic models to uncover latent thematic structure in discrete data. Despite the success of these models on textual data, they have not generalized as well to image data, in part because of the spatial and temporal structure that may exist in an image stream. We introduce a novel unsupervised machine learning framework that incorporates the ability of convolutional autoencoders to discover features from images that directly encode spatial information, within a Bayesian nonparametric topic model that discovers meaningful latent patterns within discrete data. By using this hybrid framework, we overcome the fundamental dependency of traditional topic models on rigidly hand-coded data representations, while simultaneously encoding spatial dependency in our topics without adding model complexity. We apply this model to the motivating application of high-level scene understanding and mission summarization for exploratory marine robots. Our experiments on a seafloor dataset collected by a marine robot show that the proposed hybrid framework outperforms current state-of-the-art approaches on the task of unsupervised seafloor terrain characterization.
http://arxiv.org/abs/1712.00028v1
"2017-11-30T19:02:34Z"
cs.LG, stat.ML
2,017
KIBS Innovative Entrepreneurship Networks on Social Media
José N. Franco-Riquelme, Isaac Lemus-Aguilar, Joaquín Ordieres-Meré
The use of social media for innovative entrepreneurship has received little attention in the literature, especially in the context of Knowledge Intensive Business Services (KIBS). Therefore, this paper focuses on bridging this gap by applying text mining and sentiment analysis techniques to identify the innovative entrepreneurship reflected by these companies in their social media. Finally, we present and analyze the results of our quantitative analysis of 23,483 posts based on eleven Spanish and Italian consultancy KIBS Twitter usernames and keywords, using data interpretation techniques such as clustering and topic modeling. This paper suggests that there is a significant gap between the perceived potential of social media and the entrepreneurial behaviors in the social context of business-to-business (B2B) companies.
http://arxiv.org/abs/1711.11403v1
"2017-11-30T14:08:19Z"
cs.SI, cs.CY
2,017
State Space LSTM Models with Particle MCMC Inference
Xun Zheng, Manzil Zaheer, Amr Ahmed, Yuan Wang, Eric P Xing, Alexander J Smola
Long Short-Term Memory (LSTM) is one of the most powerful sequence models. Despite its strong performance, however, it lacks the interpretability of state space models. In this paper, we present a way to combine the best of both worlds by introducing State Space LSTM (SSL) models that generalize the earlier work \cite{zaheer2017latent} on combining topic models with LSTMs. However, unlike \cite{zaheer2017latent}, we do not make any factorization assumptions in our inference algorithm. We present an efficient sampler based on the sequential Monte Carlo (SMC) method that draws from the joint posterior directly. Experimental results confirm the superiority and stability of this SMC inference algorithm on a variety of domains.
http://arxiv.org/abs/1711.11179v1
"2017-11-30T01:42:26Z"
cs.LG, stat.ML
2,017
TweetIT- Analyzing Topics for Twitter Users to garner Maximum Attention
Dhanasekar Sundararaman, Priya Arora, Vishwanath Seshagiri
Twitter, a microblogging service, is today's most popular platform for communication in the form of short text messages, called Tweets. Users use Twitter to publish content, whether expressing concerns about news or sharing views in daily conversations. These expressions are experienced by the worldwide distribution network of users, and not only by the interlocutor(s). Based on the impact of a tweet, measured by likes, retweets, and the increase in the user's percentage of followers within a window of time, we compute an attention factor for each tweet for the selected user profiles. This factor is used to select the top 1000 Tweets from each user profile to form a document. Topic modelling is then applied to this document to determine the intent of the user behind the Tweets. After topics are modelled, the similarity is determined between the BBC news dataset containing the modelled topic and the user document under evaluation. Finally, we determine the top words for a user, which enables us to find the topics that garnered attention and were posted recently. The experiment is performed using more than 1.1M Tweets from around 500 Twitter profiles spanning Politics, Entertainment, Sports etc. and hundreds of BBC news articles. The results show that our analysis is able to find topics that can act as suggestions for users seeking a higher popularity rating in the future.
http://arxiv.org/abs/1711.10002v1
"2017-11-27T21:10:48Z"
cs.SI
2,017
Topic Modelling of Everyday Sexism Project Entries
Sophie Melville, Kathryn Eccles, Taha Yasseri
The Everyday Sexism Project documents everyday examples of sexism reported by volunteer contributors from all around the world. It collected 100,000 entries in 13+ languages within the first 3 years of its existence. The content of reports in various languages submitted to Everyday Sexism is a valuable source of crowdsourced information with great potential for feminist and gender studies. In this paper, we take a computational approach to analyze the content of reports. We use topic-modelling techniques to extract emerging topics and concepts from the reports, and to map the semantic relations between those topics. The resulting picture closely resembles and adds to that arrived at through qualitative analysis, showing that this form of topic modeling could be useful for sifting through datasets that had not previously been subject to any analysis. More precisely, we come up with a map of topics for two different resolutions of our topic model and discuss the connection between the identified topics. In the low resolution picture, for instance, we found Public space/Street, Online, Work related/Office, Transport, School, Media harassment, and Domestic abuse. Among these, the strongest connection is between Public space/Street harassment and Domestic abuse and sexism in personal relationships. The strength of the relationships between topics illustrates the fluid and ubiquitous nature of sexism, with no single experience being unrelated to another.
http://arxiv.org/abs/1711.09074v3
"2017-11-24T18:40:41Z"
cs.CY, cs.SI, physics.soc-ph
2,017
Continuous Semantic Topic Embedding Model Using Variational Autoencoder
Namkyu Jung, Hyeong In Choi
This paper proposes the continuous semantic topic embedding model (CSTEM), which finds latent topic variables in documents using a continuous semantic distance function between topics and words by means of the variational autoencoder (VAE). The semantic distance can be represented by any symmetric bell-shaped geometric distance function on the Euclidean space, for which the Mahalanobis distance is used in this paper. In order for the semantic distance to perform more properly, we newly introduce an additional model parameter for each word to take out the global factor from this distance, indicating how likely the word occurs regardless of its topic. This mitigates the problem that the Gaussian distribution used in previous topic models with continuous word embeddings could not explain semantic relations correctly, and it helps obtain higher topic coherence. Through experiments on the 20 Newsgroups dataset, NIPS papers, and the CNN/Dailymail corpus, our model matches the performance of recent state-of-the-art models while also generating topic embedding vectors, which makes it possible to observe where the topic vectors are embedded alongside the word vectors in the real Euclidean space and how the topics are semantically related to each other.
http://arxiv.org/abs/1711.08870v1
"2017-11-24T05:37:35Z"
stat.ML, cs.CL, cs.LG
2,017
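The bell-shaped Mahalanobis affinity described above, including the per-word global factor, can be sketched directly; the sketch below is our reading of the construction, and all names, shapes, and values are assumptions for illustration.

```python
# Affinity between a topic vector t and word vectors w_v decays with the
# squared Mahalanobis distance, plus a per-word global bias b_v capturing
# how likely the word occurs regardless of topic.
import numpy as np

def affinity(topic, word_vecs, Sigma_inv, word_bias):
    """Unnormalized p(word | topic) ~ exp(-0.5 * d_M^2 + b_v)."""
    diff = word_vecs - topic                       # (V, dim)
    d2 = np.einsum('vi,ij,vj->v', diff, Sigma_inv, diff)
    logits = -0.5 * d2 + word_bias
    return np.exp(logits - logits.max())           # stable softmax numerator

rng = np.random.default_rng(0)
V, dim = 6, 4
word_vecs = rng.normal(size=(V, dim))
topic = rng.normal(size=dim)
Sigma_inv = np.eye(dim)                            # identity reduces to the Euclidean case
bias = rng.normal(scale=0.1, size=V)               # global occurrence factor
p = affinity(topic, word_vecs, Sigma_inv, bias)
print(p / p.sum())                                 # resulting topic-word distribution
```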
Parametric Instabilities in Resonantly-Driven Bose-Einstein Condensates
S. Lellouch, N. Goldman
Shaking optical lattices in a resonant manner offers an efficient and versatile method to devise artificial gauge fields and topological band structures for ultracold atomic gases. This was recently demonstrated through the experimental realization of the Harper-Hofstadter model, which combined optical superlattices and resonant time-modulations. Adding inter-particle interactions to these engineered band systems is expected to lead to strongly-correlated states with topological features, such as fractional Chern insulators. However, the interplay between interactions and external time-periodic drives typically triggers violent instabilities and uncontrollable heating, hence potentially ruling out the possibility of accessing such intriguing states of matter in experiments. In this work, we study the early-stage parametric instabilities that occur in systems of resonantly-driven Bose-Einstein condensates in optical lattices. We apply and extend an approach based on Bogoliubov theory [PRX 7, 021015 (2017)] to a variety of resonantly-driven band models, from a simple shaken Wannier-Stark ladder to the more intriguing driven-induced Harper-Hofstadter model. In particular, we provide ab initio numerical and analytical predictions for the stability properties of these topical models. This work sheds light on general features that could guide current experiments to stable regimes of operation.
http://arxiv.org/abs/1711.08832v1
"2017-11-23T21:26:16Z"
cond-mat.quant-gas, cond-mat.mes-hall, quant-ph
2017
Application of Natural Language Processing to Determine User Satisfaction in Public Services
Radoslaw Kowalski, Marc Esteve, Slava J. Mikhaylov
Research on customer satisfaction has increased substantially in recent years. However, the relative importance of, and relationships between, different determinants of satisfaction remain uncertain. Moreover, quantitative studies to date tend to test for the significance of pre-determined factors thought to have an influence, with no scalable means to identify other causes of user satisfaction. These gaps make it difficult to apply existing knowledge of user preferences to public service improvement. Meanwhile, digital technology has enabled new methods of collecting user feedback, for example through online forums where users can comment freely on their experience. New tools are needed to analyze large volumes of such feedback. We propose topic models as a feasible solution for aggregating open-ended user opinions that can be easily deployed in the public sector. The generated insights can contribute to a more inclusive decision-making process in public service provision. We apply this novel methodological approach to a case of service reviews of publicly-funded primary care practices in England. Findings from the analysis of 145,000 reviews covering almost 7,700 primary care centers indicate that the quality of interactions with staff and bureaucratic exigencies are the key issues driving user satisfaction across England.
http://arxiv.org/abs/1711.08083v1
"2017-11-21T23:22:15Z"
cs.CL
2017
A Double Parametric Bootstrap Test for Topic Models
Skyler Seto, Sarah Tan, Giles Hooker, Martin T. Wells
Non-negative matrix factorization (NMF) is a technique for finding latent representations of data. The method has been applied to corpora to construct topic models. However, NMF makes likelihood assumptions that are often violated by real document corpora. We present a double parametric bootstrap test for evaluating the fit of an NMF-based topic model, exploiting the duality between KL-divergence minimization and Poisson maximum likelihood estimation. The test correctly identifies whether an NMF-based topic model yields reliable results on simulated and real data.
http://arxiv.org/abs/1711.07104v2
"2017-11-19T23:33:27Z"
stat.ML
2017
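The bootstrap logic here lends itself to a short sketch: fit NMF with a KL loss, resample counts from Poisson(WH), refit, and compare fit statistics. The snippet below shows a single bootstrap layer for brevity (the paper's test is doubled), and the toy data, component count, and one-sided check are illustrative assumptions.

```python
# Minimal parametric-bootstrap check for a KL-loss NMF topic model,
# using the KL/Poisson duality mentioned in the abstract. One layer only;
# all data and settings are toy placeholders.
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
X = rng.poisson(3.0, size=(50, 30)).astype(float)   # toy count matrix

def fit_stat(X, k):
    nmf = NMF(n_components=k, beta_loss="kullback-leibler",
              solver="mu", max_iter=500, random_state=0)
    W = nmf.fit_transform(X)
    return nmf.reconstruction_err_, W @ nmf.components_

obs_err, rate = fit_stat(X, k=5)
# Resample counts from the fitted Poisson rates and refit the model.
boot_errs = [fit_stat(rng.poisson(rate).astype(float), k=5)[0]
             for _ in range(20)]
p_value = np.mean([b >= obs_err for b in boot_errs])  # crude one-sided check
print(f"observed err {obs_err:.1f}, bootstrap p-value {p_value:.2f}")
```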
Learning Seasonal Phytoplankton Communities with Topic Models
Arnold Kalmbach, Heidi M. Sosik, Gregory Dudek, Yogesh Girdhar
In this work we develop and demonstrate a probabilistic generative model for phytoplankton communities. The proposed model takes as training data the counts of a set of phytoplankton taxa in a time series, and models communities by learning sparse co-occurrence structure between the taxa. Our model is probabilistic: communities are represented by probability distributions over the taxa, and each time step is represented by a probability distribution over the communities. The proposed approach uses a non-parametric, spatiotemporal topic model to encourage the communities to form an interpretable representation of the data, without making strong assumptions about the communities. We demonstrate the quality and interpretability of our method through its ability to improve the performance of a simple regression model. We show that simple linear regression is sufficient to predict the community distribution learned by our method, and therefore the taxon distributions, from a set of naively chosen environmental variables. In contrast, a similar regression model is insufficient to predict the taxon distributions directly, or through PCA, with the same level of accuracy.
http://arxiv.org/abs/1711.09013v2
"2017-11-19T21:20:26Z"
stat.AP, cs.CE
2017
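The evaluation idea above — if the learned community mixtures are ecologically meaningful, a plain linear regression from naive environmental variables should predict them well — can be sketched in a few lines. All data below are synthetic placeholders, and scikit-learn's LinearRegression stands in for the paper's regression.

```python
# Sketch of the regression-based evaluation: predict learned community
# mixtures from environmental covariates and report held-out R^2.
# Everything here is synthetic; only the evaluation shape is illustrated.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
env = rng.normal(size=(300, 4))        # e.g., temperature, light, salinity, depth
true_W = rng.random((4, 3))
communities = np.exp(env @ true_W)
communities /= communities.sum(axis=1, keepdims=True)  # mixtures over 3 communities

reg = LinearRegression().fit(env[:200], communities[:200])
r2 = reg.score(env[200:], communities[200:])
print(f"held-out R^2 predicting community mixtures: {r2:.2f}")
```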
Prior-aware Dual Decomposition: Document-specific Topic Inference for Spectral Topic Models
Moontae Lee, David Bindel, David Mimno
Spectral topic modeling algorithms operate on matrices/tensors of word co-occurrence statistics to learn topic-specific word distributions. This approach removes the dependence on the original documents and produces substantial gains in efficiency and provable topic inference, but at a cost: the model can no longer provide information about the topic composition of individual documents. Recently, Thresholded Linear Inverse (TLI) was proposed to map the observed words of each document back to its topic composition. However, its linear characteristics limit inference quality because it does not consider the important prior information over topics. In this paper, we evaluate the Simple Probabilistic Inverse (SPI) method and a novel Prior-aware Dual Decomposition (PADD) that is capable of learning document-specific topic compositions in parallel. Experiments show that PADD successfully leverages topic correlations as a prior, notably outperforming TLI and learning topic compositions of quality comparable to Gibbs sampling on various data.
http://arxiv.org/abs/1711.07065v1
"2017-11-19T19:56:23Z"
cs.CL, cs.IR, cs.LG
2017
The Cultural Evolution of National Constitutions
Daniel N. Rockmore, Chen Fang, Nicholas J. Foti, Tom Ginsburg, David C. Krakauer
We explore how ideas from infectious disease and genetics can be used to uncover patterns of cultural inheritance and innovation in a corpus of 591 national constitutions spanning 1789-2008. Legal "ideas" are encoded as "topics" (words statistically linked in documents) derived from topic modeling the corpus of constitutions. Using these topics we derive a diffusion network for borrowing from ancestral constitutions back to the US Constitution of 1789, and reveal that constitutions are complex cultural recombinants. We find systematic variation in patterns of borrowing from ancestral texts and "biological"-like behavior in patterns of inheritance, with the distribution of "offspring" arising through a bounded preferential-attachment process. This process leads to a small number of highly innovative (influential) constitutions, some of which have yet to be identified as such in the current literature. Our findings thus shed new light on the critical nodes of the constitution-making network. The constitutional network structure reflects periods of intense constitution creation, and systematic patterns of variation in constitutional life-span and temporal influence.
http://arxiv.org/abs/1711.06899v1
"2017-11-18T17:10:25Z"
cs.SI, cs.CL, physics.soc-ph, 68.U99, J.4
2017
Low-dimensional Embeddings for Interpretable Anchor-based Topic Inference
Moontae Lee, David Mimno
The anchor words algorithm performs provably efficient topic model inference by finding an approximate convex hull in a high-dimensional word co-occurrence space. However, the existing greedy algorithm often selects poor anchor words, reducing topic quality and interpretability. Rather than finding an approximate convex hull in a high-dimensional space, we propose to find an exact convex hull in a visualizable 2- or 3-dimensional space. Such low-dimensional embeddings both improve topics and clearly show users why the algorithm selects certain words.
http://arxiv.org/abs/1711.06826v1
"2017-11-18T07:52:40Z"
cs.CL
2017
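The core geometric move in this abstract — replace an approximate high-dimensional convex hull with an exact hull in a 2- or 3-D embedding — is easy to illustrate. The snippet below uses t-SNE as an illustrative embedding and random data as a stand-in for word co-occurrence rows; the paper's actual embedding may differ.

```python
# Sketch: embed word co-occurrence rows into 2-D, then take an *exact*
# convex hull there; the hull vertices are candidate anchor words that a
# user can inspect visually. Inputs are random stand-ins.
import numpy as np
from sklearn.manifold import TSNE
from scipy.spatial import ConvexHull

rng = np.random.default_rng(0)
Q = rng.random((200, 50))               # stand-in word co-occurrence rows
Q /= Q.sum(axis=1, keepdims=True)       # row-normalize as in anchor methods

xy = TSNE(n_components=2, random_state=0).fit_transform(Q)
hull = ConvexHull(xy)                   # exact hull in the 2-D embedding
anchor_candidates = hull.vertices       # indices of hull-vertex words
print(f"{len(anchor_candidates)} candidate anchors on the exact 2-D hull")
```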
Political Polarization in Social Media: Analysis of the "Twitter Political Field" in Japan
Hiroki Takikawa, Kikuko Nagayoshi
There is an ongoing debate about whether the Internet is like a public sphere or an echo chamber. Among many forms of social media, Twitter is one of the most crucial online places for political debate. Most previous studies focus on the formal structure of the Twitter political field, such as its homophilic tendency, or otherwise limit the analysis to a few topics. In order to explore whether Twitter functions as an echo chamber in general, however, we have to investigate not only the structure but also the contents of Twitter's political field. Accordingly, we conducted both large-scale social network analysis and natural language processing. We first applied a community detection method to the reciprocal-following network on Twitter and found five politically distinct communities in the field. We further examined the dominant topics discussed therein by employing a topic model to analyze the content of the tweets, and we found that a topic related to xenophobia circulates solely in right-wing communities. To our knowledge, this is the first study to address echo chambers in the Japanese Twitter political field and to examine both the formal structure and the contents of tweets through a combination of large-scale social network analysis and natural language processing.
http://arxiv.org/abs/1711.06752v1
"2017-11-17T22:25:04Z"
cs.CY, cs.SI
2017
Identifying Patterns of Associated-Conditions through Topic Models of Electronic Medical Records
Moumita Bhattacharya, Claudine Jurkovitz, Hagit Shatkay
Multiple adverse health conditions co-occurring in a patient are typically associated with poor prognosis and increased office or hospital visits. Developing methods to identify patterns of co-occurring conditions can assist in diagnosis, so identifying patterns of associations among co-occurring conditions is of growing interest. In this paper, we report preliminary results from a data-driven study in which we apply a machine learning method, namely topic modeling, to electronic medical records (EMRs), aiming to identify patterns of associated conditions. Specifically, we use the well-established Latent Dirichlet Allocation (LDA), a method based on the idea that documents can be modeled as mixtures of latent topics, where each topic is a distribution over words. In our study, we adapt the LDA model to identify latent topics in patients' EMRs. We evaluate the performance of our method qualitatively, and show that the obtained topics indeed align well with distinct medical phenomena characterized by co-occurring conditions.
http://arxiv.org/abs/1711.10960v1
"2017-11-17T21:39:19Z"
cs.CL
2017
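The adaptation described above — treating each patient record as a "document" whose "words" are condition codes — can be sketched with scikit-learn's LDA. The patients, codes, and topic count below are invented for illustration, not data from the study.

```python
# Sketch: fit LDA over patient "documents" of condition codes, so each
# topic becomes a distribution over co-occurring conditions.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

patients = [                      # hypothetical patient records
    "diabetes hypertension ckd",
    "diabetes obesity hypertension",
    "asthma copd smoking",
    "copd smoking hypertension",
]
vec = CountVectorizer()
X = vec.fit_transform(patients)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)

vocab = vec.get_feature_names_out()
for t, comp in enumerate(lda.components_):
    top = comp.argsort()[::-1][:3]        # top conditions per topic
    print(f"topic {t}:", [vocab[i] for i in top])
```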
Deep Temporal-Recurrent-Replicated-Softmax for Topical Trends over Time
Pankaj Gupta, Subburam Rajaram, Hinrich Schütze, Bernt Andrassy
Dynamic topic modeling facilitates the identification of topical trends over time in temporal collections of unstructured documents. We introduce a novel unsupervised neural dynamic topic model, the Recurrent Neural Network-Replicated Softmax Model (RNN-RSM), in which the topics discovered at each time step influence topic discovery in subsequent time steps. We account for the temporal ordering of documents by explicitly modeling a joint distribution of latent topical dependencies over time, using distributional estimators with temporal recurrent connections. Applying RNN-RSM to 19 years of articles on NLP research, we demonstrate that, compared to state-of-the-art topic models, RNN-RSM shows better generalization, topic interpretation, evolution, and trends. We also introduce a metric (termed SPAN) to quantify the capability of a dynamic topic model to capture word evolution in topics over time.
http://arxiv.org/abs/1711.05626v2
"2017-11-15T15:33:59Z"
cs.CL, cs.AI, cs.IR, cs.LG
2017
Latent Dirichlet Allocation (LDA) and Topic modeling: models, applications, a survey
Hamed Jelodar, Yongli Wang, Chi Yuan, Xia Feng, Xiahui Jiang, Yanchao Li, Liang Zhao
Topic modeling is one of the most powerful text mining techniques for data mining, latent data discovery, and finding relationships among text documents. Researchers have published many articles on topic modeling and applied it in various fields such as software engineering, political science, medical science, and linguistics. Among the various methods for topic modeling, Latent Dirichlet Allocation (LDA) is one of the most popular, and researchers have proposed numerous models based on it. This paper is therefore intended as a useful introduction to LDA-based approaches to topic modeling. We investigated scholarly articles (published between 2003 and 2016) highly related to LDA-based topic modeling in order to discover the research developments, current trends, and intellectual structure of the field. We also summarize open challenges and introduce well-known tools and datasets for LDA-based topic modeling.
http://arxiv.org/abs/1711.04305v2
"2017-11-12T14:50:14Z"
cs.IR
2017
Interpretable probabilistic embeddings: bridging the gap between topic models and neural networks
Anna Potapenko, Artem Popov, Konstantin Vorontsov
We consider probabilistic topic models and more recent word embedding techniques from the perspective of learning hidden semantic representations. Inspired by a striking similarity between the two approaches, we merge them and learn probabilistic embeddings with an online EM algorithm on word co-occurrence data. The resulting embeddings perform on par with Skip-Gram Negative Sampling (SGNS) on word similarity tasks while offering more interpretable components. Next, we learn probabilistic document embeddings that outperform paragraph2vec on a document similarity task and require less memory and time for training. Finally, we employ multimodal Additive Regularization of Topic Models (ARTM) to obtain high sparsity and to learn embeddings for other modalities, such as timestamps and categories. We observe further improvement in word similarity performance and meaningful inter-modality similarities.
http://arxiv.org/abs/1711.04154v1
"2017-11-11T15:35:22Z"
cs.CL
2017
Discovering conversational topics and emotions associated with Demonetization tweets in India
Mitodru Niyogi, Asim K. Pal
Social media platforms contain a great wealth of information, providing opportunities to explore hidden patterns and unknown correlations and to understand people's satisfaction with what they are discussing. As one showcase, in this paper we summarize a dataset of Twitter messages related to the recent demonetization of all Rs. 500 and Rs. 1000 notes in India and explore insights from the Twitter data. Our proposed system automatically extracts the popular latent topics in Twitter conversations about demonetization via a Latent Dirichlet Allocation (LDA) based topic model, and identifies correlated topics across different categories. Additionally, it discovers people's opinions expressed in their tweets about the event via an emotion analyzer. The system also employs an intuitive and informative visualization to show the uncovered insights. Furthermore, we use an evaluation measure, Normalized Mutual Information (NMI), to select the best LDA models. The obtained LDA results show that the tool can be effectively used to extract discussion topics and summarize them for further manual analysis.
http://arxiv.org/abs/1711.04115v1
"2017-11-11T10:55:38Z"
cs.CL
2017
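The abstract selects LDA models with NMI but does not spell out the recipe. One common reading (assumed here, not taken from the paper) is a stability check: fit the model twice with different seeds, assign each tweet to its most probable topic, and score the agreement of the two assignments with NMI.

```python
# Sketch of NMI-based model selection as a seed-stability check.
# The toy tweets and K value are placeholders.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.metrics import normalized_mutual_info_score

tweets = ["atm queue cash", "bank queue note", "black money raid",
          "raid cash seized", "atm cash limit", "bank note exchange"]
X = CountVectorizer().fit_transform(tweets)   # toy stand-in for the corpus

def hard_assignments(seed, k=2):
    lda = LatentDirichletAllocation(n_components=k, random_state=seed)
    return np.argmax(lda.fit_transform(X), axis=1)  # most probable topic

stability = normalized_mutual_info_score(hard_assignments(0),
                                         hard_assignments(1))
print(f"NMI stability across seeds: {stability:.2f}")
```

Repeating this across candidate K values and keeping the most stable model is one plausible way the NMI criterion could be applied.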
Joint Sentiment/Topic Modeling on Text Data Using Boosted Restricted Boltzmann Machine
Masoud Fatemi, Mehran Safayani
With the recent development of the Internet and the Web, different types of social media, such as blogs, have become an immense source of text data. By processing these data, it is possible to discover practical information about different topics, individuals' opinions, and the society at large. Models that can automatically extract such subjective information from documents are therefore efficient and helpful. Topic modeling and sentiment analysis are among the most active topics in natural language processing and text mining. In this paper, a new structure for joint sentiment-topic modeling based on the Restricted Boltzmann Machine (RBM), a type of neural network, is proposed. By modifying the structure of the RBM and appending a layer analogous to the sentiment of the text data, we obtain a generative structure for joint sentiment-topic modeling based on neural networks. The proposed method is supervised and trained with the Contrastive Divergence algorithm. The newly attached layer has a multinomial probability distribution and can be used for sentiment classification of text data or any other supervised application. The proposed model is compared with existing models in experiments on generative modeling, sentiment classification, and information retrieval, and the corresponding results demonstrate the efficiency of the method.
http://arxiv.org/abs/1711.03736v1
"2017-11-10T09:17:02Z"
cs.CL, cs.IR, cs.LG
2017
Enhanced Movie Content Similarity Based on Textual, Auditory and Visual Information
Konstantinos Bougiatiotis, Theodore Giannakopoulos
In this paper we examine the ability of low-level multimodal features to capture movie similarity, in the context of a content-based movie recommendation approach. In particular, we demonstrate the extraction of multimodal representation models of movies, based on textual information from subtitles as well as cues from the audio and visual channels. With regard to the textual domain, we focus our research on topic modeling of movies based on their subtitles, in order to extract topics that discriminate between movies. Regarding the visual domain, we focus on the extraction of semantically useful features that model camera movements, colors, and faces, while for the audio domain we adopt simple classification aggregates based on pretrained models. The three domains are combined with static metadata (e.g. directors, actors) to prove that the content-based movie similarity procedure can be enhanced with low-level multimodal information. In order to demonstrate the proposed content representation approach, we have built a small dataset of 160 widely known movies. We express movie similarities, as propagated by the individual modalities and fusion models, in the form of recommendation rankings. Extensive experimentation proves that all three low-level modalities (text, audio, and visual) boost the performance of a content-based recommendation system, compared to the typical metadata-based content representation, by a relative increase of more than $50\%$. To our knowledge, this is the first approach that utilizes a wide range of features from all involved modalities in order to enhance content similarity estimation, compared to the metadata-based approaches.
http://arxiv.org/abs/1711.03889v1
"2017-11-09T15:21:21Z"
cs.IR
2017
Cross-National Measurement of Polarization in Political Discourse: Analyzing floor debate in the U.S. and the Japanese legislatures
Takuto Sakamoto, Hiroki Takikawa
Political polarization in public space can seriously hamper the function and the integrity of contemporary democratic societies. In this paper, we propose a novel measure of such polarization, which, by way of simple topic modelling, quantifies differences in collective articulation of public agendas among relevant political actors. Unlike most other polarization measures, our measure allows cross-national comparison. Analyzing a large amount of speech records of legislative debate in the United States Congress and the Japanese Diet over a long period of time, we have reached two intriguing findings. First, on average, Japanese political actors are far more polarized in their issue articulation than their counterparts in the U.S., which is somewhat surprising given the recent notion of U.S. politics as highly polarized. Second, the polarization in each country shows its own temporal dynamics in response to a different set of factors. In Japan, structural factors such as the roles of the ruling party and the opposition often dominate such dynamics, whereas the U.S. legislature suffers from persistent ideological differences over particular issues between major political parties. The analysis confirms a strong influence of institutional differences on legislative debate in parliamentary democracies.
http://arxiv.org/abs/1711.02977v1
"2017-11-08T14:46:12Z"
cs.CY, cs.SI
2017
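The polarization measure described above — represent each actor by the topic mixture of its aggregated speeches, then quantify how far apart those mixtures are — can be sketched schematically. Jensen-Shannon distance is used below as an illustrative, cross-comparable metric; the paper's exact formula may differ, and the mixtures are invented.

```python
# Schematic polarization score: divergence between two actors' topic
# mixtures over a shared set of topics. All numbers are hypothetical.
import numpy as np
from scipy.spatial.distance import jensenshannon

# Hypothetical topic mixtures for two parties over 5 shared topics.
party_a = np.array([0.40, 0.25, 0.15, 0.10, 0.10])
party_b = np.array([0.05, 0.10, 0.20, 0.30, 0.35])

polarization = jensenshannon(party_a, party_b, base=2)  # bounded in [0, 1]
print(f"polarization score: {polarization:.3f}")
```

Because the score is bounded and unit-free, it can be computed per legislature and per time period, which is what makes the cross-national comparison in the paper possible.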
Multi-label Dataless Text Classification with Topic Modeling
Daochen Zha, Chenliang Li
Manually labeling documents is tedious and expensive, but it is essential for training a traditional text classifier. In recent years, a few dataless text classification techniques have been proposed to address this problem. However, existing works mainly center on single-label classification problems, that is, each document is restricted to belonging to a single category. In this paper, we propose a novel Seed-guided Multi-label Topic Model, named SMTM. With a few seed words relevant to each category, SMTM conducts multi-label classification for a collection of documents without any labeled document. In SMTM, each category is associated with a single category-topic which covers the meaning of the category. To accommodate multi-labeled documents, we explicitly model category sparsity in SMTM using a spike-and-slab prior and a weak smoothing prior. That is, without any threshold tuning, SMTM automatically selects the relevant categories for each document. To incorporate the supervision of the seed words, we propose a seed-guided biased GPU (i.e., generalized Polya urn) sampling procedure to guide the topic inference of SMTM. Experiments on two public datasets show that SMTM achieves better classification accuracy than state-of-the-art alternatives and even outperforms supervised solutions in some scenarios.
http://arxiv.org/abs/1711.01563v1
"2017-11-05T11:34:46Z"
cs.IR, cs.CL
2017
Convergence Rates of Latent Topic Models Under Relaxed Identifiability Conditions
Yining Wang
In this paper we study the frequentist convergence rate of the Latent Dirichlet Allocation (Blei et al., 2003) topic model. We show that the maximum likelihood estimator converges to one of the finitely many equivalent parameters in the Wasserstein distance at a rate of $n^{-1/4}$, without assuming separability or non-degeneracy of the underlying topics and/or the existence of more than three words per document, thus generalizing the previous works of Anandkumar et al. (2012, 2014) from an information-theoretic perspective. We also show that the $n^{-1/4}$ convergence rate is optimal in the worst case.
http://arxiv.org/abs/1710.11070v2
"2017-10-30T17:05:28Z"
stat.ML, cs.LG
2017