Dataset schema (each record below lists these fields in order):
Title: string (length 14 to 179)
Authors: string (length 6 to 464)
Abstract: string (length 83 to 1.93k)
entry_id: string (length 32 to 34)
Date: unknown
Categories: string (length 5 to 168)
year: int32 (values 2.01k to 2.02k)
Transform-Invariant Non-Parametric Clustering of Covariance Matrices and its Application to Unsupervised Joint Segmentation and Action Discovery
Nadia Figueroa, Aude Billard
In this work, we tackle the problem of transform-invariant unsupervised learning in the space of Covariance matrices and applications thereof. We begin by introducing the Spectral Polytope Covariance Matrix (SPCM) Similarity function, a similarity function for Covariance matrices that is invariant to any type of transformation. We then derive the SPCM-CRP mixture model, a transform-invariant non-parametric clustering approach for Covariance matrices that leverages the proposed similarity function, spectral embedding and the distance-dependent Chinese Restaurant Process (dd-CRP) (Blei and Frazier, 2011). The scalability and applicability of these two contributions are extensively validated on real-world Covariance matrix datasets from diverse research fields. Finally, we couple the SPCM-CRP mixture model with the Bayesian non-parametric Indian Buffet Process (IBP) Hidden Markov Model (HMM) (Fox et al., 2009) to jointly segment and discover transform-invariant action primitives from complex sequential data. This results in a topic-modeling-inspired hierarchical model for unsupervised time-series data analysis, which we call the ICSC-HMM (IBP Coupled SPCM-CRP Hidden Markov Model). The ICSC-HMM is validated on kinesthetic demonstrations of uni-manual and bi-manual cooking tasks, achieving unsupervised human-level decomposition of complex sequential tasks.
http://arxiv.org/abs/1710.10060v1
"2017-10-27T10:25:35Z"
cs.LG
2017
Classifying Web Exploits with Topic Modeling
Jukka Ruohonen
This short empirical paper investigates how well topic modeling and database meta-data characteristics can classify web and other proof-of-concept (PoC) exploits for publicly disclosed software vulnerabilities. Using a dataset comprising over 36 thousand PoC exploits, an accuracy rate of nearly 0.9 is obtained in the empirical experiment. Text mining and topic modeling are a significant boost factor behind this classification performance. In addition to these empirical results, the paper contributes to the research tradition of enhancing software vulnerability information with text mining, providing also a few scholarly observations about the potential for semi-automatic classification of exploits in the existing tracking infrastructures.
http://arxiv.org/abs/1710.05561v1
"2017-10-16T08:34:24Z"
cs.CR, cs.IR, cs.SE
2017
Conic Scan-and-Cover algorithms for nonparametric topic modeling
Mikhail Yurochkin, Aritra Guha, XuanLong Nguyen
We propose new algorithms for topic modeling when the number of topics is unknown. Our approach relies on an analysis of the concentration of mass and angular geometry of the topic simplex, a convex polytope constructed by taking the convex hull of vertices representing the latent topics. In practice, our algorithms are shown to have topic-estimation accuracy comparable to that of a Gibbs sampler, which requires the number of topics to be given. Moreover, they are among the fastest of several state-of-the-art parametric techniques. Statistical consistency of our estimator is established under some conditions.
http://arxiv.org/abs/1710.02952v1
"2017-10-09T06:28:03Z"
stat.ML
2017
Topic Modeling based on Keywords and Context
Johannes Schneider
Current topic models often suffer from discovering topics that do not match human intuition, unnatural switching of topics within documents, and high computational demands. We address these concerns by proposing a topic model and an inference algorithm based on automatically identifying characteristic keywords for topics. Keywords influence the topic assignments of nearby words. Our algorithm learns (key)word-topic scores and self-regulates the number of topics. Inference is simple and easily parallelizable. Qualitative analysis yields results comparable to state-of-the-art models (e.g., LDA), but with different strengths and weaknesses. Quantitative analysis using 9 datasets shows gains in terms of classification accuracy, PMI score, computational performance, and consistency of topic assignments within documents, while most often using fewer topics.
http://arxiv.org/abs/1710.02650v2
"2017-10-07T08:18:12Z"
cs.CL, cs.IR, cs.LG
2017
Quantitative Perspectives on Fifty Years of the Journal of the History of Biology
B. R. Erick Peirson, Erin Bottino, Julia L. Damerow, Manfred D. Laubichler
Journal of the History of Biology provides a fifty-year long record for examining the evolution of the history of biology as a scholarly discipline. In this paper, we present a new dataset and preliminary quantitative analysis of the thematic content of JHB from the perspectives of geography, organisms, and thematic fields. The geographic diversity of authors whose work appears in JHB has increased steadily since 1968, but the geographic coverage of the content of JHB articles remains strongly lopsided toward the United States, United Kingdom, and western Europe and has diversified much less dramatically over time. The taxonomic diversity of organisms discussed in JHB increased steadily between 1968 and the late 1990s but declined in later years, mirroring broader patterns of diversification previously reported in the biomedical research literature. Finally, we used a combination of topic modeling and nonlinear dimensionality reduction techniques to develop a model of multi-article fields within JHB. We found evidence for directional changes in the representation of fields on multiple scales. The diversity of JHB with regard to the representation of thematic fields has increased overall, with most of that diversification occurring in recent years. Drawing on the dataset generated in the course of this analysis, as well as web services in the emerging digital history and philosophy of science ecosystem, we have developed an interactive web platform for exploring the content of JHB, and we provide a brief overview of the platform in this article. As a whole, the data and analyses presented here provide a starting-place for further critical reflection on the evolution of the history of biology over the past half-century.
http://arxiv.org/abs/1710.01966v1
"2017-10-05T11:13:16Z"
cs.DL, cs.CY, cs.HC
2017
Crisis Communication Patterns in Social Media during Hurricane Sandy
Arif Mohaimin Sadri, Samiul Hasan, Satish V. Ukkusuri, Manuel Cebrian
Hurricane Sandy was one of the deadliest and costliest hurricanes of the past few decades. Many states experienced significant power outages; however, many people used social media to communicate while having limited or no access to traditional information sources. In this study, we explored the evolution of various communication patterns using machine learning techniques and determined user concerns that emerged over the course of Hurricane Sandy. The original data included ~52M tweets coming from ~13M users between October 14, 2012 and November 12, 2012. We ran a topic model on ~763K tweets from the 4,029 most frequent users, each of whom tweeted about Sandy at least 100 times. We identified 250 well-defined communication patterns based on perplexity. Conversations of the most frequent and relevant users indicate the evolution of numerous storm-phase-specific (warning, response, and recovery) topics. People were also concerned about storm location and time, media coverage, and the activities of political leaders and celebrities. We also present each relevant keyword that contributed to a particular pattern of user concerns. Such keywords would be particularly meaningful for targeted information spreading and effective crisis communication in similar major disasters. Each of these words can also be helpful for efficient hash-tagging to reach the target audience as needed via social media. The pattern recognition approach of this study can be used to identify real-time user needs in future crises.
http://arxiv.org/abs/1710.01887v1
"2017-10-05T05:32:07Z"
cs.SI
2017
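For readers curious about the perplexity-based choice of topic count described in the Sadri et al. abstract above, here is a minimal sketch using scikit-learn's LDA; the paper does not specify its implementation, and the corpus, candidate counts, and preprocessing below are placeholders.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Toy stand-in for the tweet corpus; the study used ~763K tweets.
tweets = [
    "storm surge floods lower manhattan",
    "power outage across new jersey tonight",
    "fema response teams arrive for recovery",
    "subway closed ahead of hurricane sandy",
]

X = CountVectorizer(stop_words="english").fit_transform(tweets)

best_k, best_perp = None, float("inf")
for k in (2, 3, 4):  # the paper swept far larger counts, settling on 250
    lda = LatentDirichletAllocation(n_components=k, random_state=0).fit(X)
    perp = lda.perplexity(X)  # lower perplexity suggests a better-fitting model
    if perp < best_perp:
        best_k, best_perp = k, perp
print(best_k, best_perp)
```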
A Bimodal Network Approach to Model Topic Dynamics
Luigi Di Caro, Marco Guerzoni, Massimiliano Nuccio, Giovanni Siragusa
This paper presents an intertemporal bimodal network to analyze the evolution of the semantic content of a scientific field within the framework of topic modeling, namely using the Latent Dirichlet Allocation (LDA). The main contribution is the conceptualization of the topic dynamics and its formalization and codification into an algorithm. To benchmark the effectiveness of this approach, we propose three indexes which track the transformation of topics over time, their rate of birth and death, and the novelty of their content. Applying the LDA, we test the algorithm both on a controlled experiment and on a corpus of several thousand scientific papers spanning a period of more than 100 years, which covers the history of economic thought.
http://arxiv.org/abs/1709.09373v1
"2017-09-27T07:49:03Z"
cs.CL, econ.GN, q-fin.EC
2017
Computational Content Analysis of Negative Tweets for Obesity, Diet, Diabetes, and Exercise
George Shaw Jr., Amir Karami
Social media based digital epidemiology has the potential to support faster response to and deeper understanding of public health related threats. This study proposes a new framework to analyze unstructured health-related textual data from Twitter users' posts (tweets) in order to characterize negative health sentiments and non-health-related concerns in relation to the corpus of negative sentiments regarding Diet, Diabetes, Exercise, and Obesity (DDEO). Through the collection of 6 million tweets over one month, this study identified the prominent topics of users as they relate to negative sentiments. Our proposed framework uses two text mining methods, sentiment analysis and topic modeling, to discover negative topics. The negative sentiments of Twitter users support the literature narratives and the many morbidity issues that are associated with DDEO, as well as the linkage between obesity and diabetes. The framework offers a potential method to understand the public's opinions and sentiments regarding DDEO. More importantly, this research provides new opportunities for computational social scientists, medical experts, and public health professionals to collectively address DDEO-related issues.
http://arxiv.org/abs/1709.07915v1
"2017-09-22T19:18:42Z"
cs.SI, cs.CL, stat.AP, stat.CO, stat.ML
2017
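A hedged sketch of the two-step framework in the Shaw & Karami abstract above (sentiment analysis, then topic modeling on the negative subset), using NLTK's VADER and scikit-learn's LDA as stand-ins for whichever tools the authors used; the tweets and the threshold are illustrative only.

```python
import nltk
from nltk.sentiment.vader import SentimentIntensityAnalyzer
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

nltk.download("vader_lexicon", quiet=True)
sia = SentimentIntensityAnalyzer()

tweets = [
    "diabetes is ruining my diet and my mood",
    "great workout today feeling strong",
    "terrified of obesity complications",
    "exercise routine going well",
]

# Keep only tweets VADER scores as negative (compound < -0.05 is a common cutoff).
negative = [t for t in tweets if sia.polarity_scores(t)["compound"] < -0.05]

# Topic-model the negative subset to surface the negative topics.
X = CountVectorizer(stop_words="english").fit_transform(negative)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)
```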
SpectralLeader: Online Spectral Learning for Single Topic Models
Tong Yu, Branislav Kveton, Zheng Wen, Hung Bui, Ole J. Mengshoel
We study the problem of learning a latent variable model from a stream of data. Latent variable models are popular in practice because they can explain observed data in terms of unobserved concepts. These models have been traditionally studied in the offline setting. In the online setting, on the other hand, the online EM is arguably the most popular algorithm for learning latent variable models. Although the online EM is computationally efficient, it typically converges to a local optimum. In this work, we develop a new online learning algorithm for latent variable models, which we call SpectralLeader. SpectralLeader always converges to the global optimum, and we derive a sublinear upper bound on its $n$-step regret in the bag-of-words model. In both synthetic and real-world experiments, we show that SpectralLeader performs similarly to or better than the online EM with tuned hyper-parameters.
http://arxiv.org/abs/1709.07172v4
"2017-09-21T06:24:51Z"
cs.LG, stat.ML
2017
MetaLDA: a Topic Model that Efficiently Incorporates Meta information
He Zhao, Lan Du, Wray Buntine, Gang Liu
Besides the text content, documents and their associated words usually come with rich sets of meta information, such as categories of documents and semantic/syntactic features of words, like those encoded in word embeddings. Incorporating such meta information directly into the generative process of topic models can improve modelling accuracy and topic quality, especially in the case where the word-occurrence information in the training data is insufficient. In this paper, we present a topic model, called MetaLDA, which is able to leverage either document or word meta information, or both of them jointly. With two data augmentation techniques, we can derive an efficient Gibbs sampling algorithm, which benefits from the fully local conjugacy of the model. Moreover, the algorithm is favoured by the sparsity of the meta information. Extensive experiments on several real world datasets demonstrate that our model achieves comparable or improved performance in terms of both perplexity and topic quality, particularly in handling sparse texts. In addition, compared with other models using meta information, our model runs significantly faster.
http://arxiv.org/abs/1709.06365v1
"2017-09-19T12:09:21Z"
cs.CL, stat.AP
2017
Early prediction of the duration of protests using probabilistic Latent Dirichlet Allocation and Decision Trees
Satyakama Paul, Madhur Hasija, Tshilidzi Marwala
Protests and agitations are an integral part of every democratic civil society. In recent years, South Africa has seen a large increase in its protests. The objective of this paper is to provide an early prediction of the duration of protests from their free-flowing English text descriptions. Free-flowing descriptions of the protests help us capture their various nuances, such as multiple causes, courses of action, etc. Next, we use a combination of unsupervised learning (topic modeling) and supervised learning (decision trees) to predict the duration of the protests. Our results show a high degree (close to 90%) of accuracy in early prediction of the duration of protests. We expect the work to help police and other security services in planning and managing their resources to better handle protests in the future.
http://arxiv.org/abs/1711.00462v1
"2017-09-18T06:03:09Z"
cs.SI, cs.AI
2017
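A minimal sketch of the pipeline in the Paul et al. abstract above, with LDA document-topic proportions feeding a decision tree; the descriptions, labels, and topic count are hypothetical, and the paper does not state which LDA implementation it used.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.tree import DecisionTreeClassifier

descriptions = [
    "wage strike blocks the highway for days",
    "students march briefly over tuition fees",
    "union pickets the mine through the week",
    "short sit-in at the municipal office",
]
durations = ["long", "short", "long", "short"]  # toy duration labels

# Unsupervised step: topic proportions per protest description.
X = CountVectorizer(stop_words="english").fit_transform(descriptions)
theta = LatentDirichletAllocation(n_components=2, random_state=0).fit_transform(X)

# Supervised step: a decision tree over the topic proportions.
tree = DecisionTreeClassifier(random_state=0).fit(theta, durations)
print(tree.predict(theta[:1]))  # predicted duration class for the first protest
```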
Bug or Not? Bug Report Classification Using N-Gram IDF
Pannavat Terdchanakul, Hideaki Hata, Passakorn Phannachitta, Kenichi Matsumoto
Previous studies have found that a significant number of bug reports are misclassified between bugs and non-bugs, and that manually classifying bug reports is a time-consuming task. To address this problem, we propose a bug report classification model with N-gram IDF, a theoretical extension of Inverse Document Frequency (IDF) for handling words and phrases of any length. N-gram IDF enables us to extract key terms of any length from texts; these key terms can be used as features to classify bug reports. We build classification models with logistic regression and random forest using features from N-gram IDF and topic modeling, which is widely used in various software engineering tasks. With a publicly available dataset, our results show that our N-gram IDF-based models outperform the topic-based models in all of the evaluated cases. Our models show promising results and have the potential to be extended to other software engineering tasks.
http://arxiv.org/abs/1709.05763v1
"2017-09-18T03:36:12Z"
cs.SE
2017
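For orientation, a rough stand-in for the setup in the Terdchanakul et al. abstract above: scikit-learn's TF-IDF over word n-grams (which is not the paper's N-gram IDF, a distinct key-term weighting scheme) feeding logistic regression and random forest classifiers. The reports and labels are toy data.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier

reports = [
    "app crashes with null pointer exception on startup",
    "please add a dark mode to the settings page",
    "crash when saving a file twice in a row",
    "update the contributor guide wording",
]
labels = [1, 0, 1, 0]  # 1 = bug, 0 = non-bug (toy labels)

# Word n-grams up to length 3 as an approximation of phrase-level key terms.
X = TfidfVectorizer(ngram_range=(1, 3)).fit_transform(reports)
for clf in (LogisticRegression(max_iter=1000), RandomForestClassifier(random_state=0)):
    clf.fit(X, labels)
    print(type(clf).__name__, clf.score(X, labels))
```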
Social Media Text Processing and Semantic Analysis for Smart Cities
João Filipe Figueiredo Pereira
With the rise of Social Media, people obtain and share information almost instantly on a 24/7 basis. Many research areas have tried to gain valuable insights from these large volumes of freely available user generated content. With the goal of extracting knowledge from social media streams that might be useful in the context of intelligent transportation systems and smart cities, we designed and developed a framework that provides functionalities for parallel collection of geo-located tweets from multiple pre-defined bounding boxes (cities or regions), including filtering of non-complying tweets, text pre-processing for the Portuguese and English languages, topic modeling, and transportation-specific text classifiers, as well as aggregation and data visualization. We performed an exploratory data analysis of geo-located tweets in 5 different cities: Rio de Janeiro, São Paulo, New York City, London and Melbourne, comprising a total of more than 43 million tweets in a period of 3 months. Furthermore, we performed a large scale topic modelling comparison between Rio de Janeiro and São Paulo. Interestingly, most of the topics are shared between both cities, which, despite being in the same country, are considered very different regarding population, economy and lifestyle. We take advantage of recent developments in word embeddings and train such representations from the collections of geo-located tweets. We then use a combination of bag-of-embeddings and traditional bag-of-words to train travel-related classifiers in both Portuguese and English to filter travel-related content from non-related content. We created specific gold-standard data to perform empirical evaluation of the resulting classifiers. Results are in line with research work in other application areas by showing the robustness of using word embeddings to learn word similarities that bag-of-words is not able to capture.
http://arxiv.org/abs/1709.03406v1
"2017-09-11T14:30:35Z"
cs.SI, cs.CL, cs.CY
2017
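A sketch of the feature combination in the Pereira abstract above: a "bag-of-embeddings" (mean of Word2Vec vectors) concatenated with bag-of-words counts for a travel-related classifier. The tiny corpus, labels, and dimensions are invented for illustration.

```python
import numpy as np
from gensim.models import Word2Vec
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

tweets = [
    ["stuck", "in", "traffic", "on", "the", "bridge"],
    ["best", "pizza", "downtown", "tonight"],
    ["subway", "line", "delayed", "again"],
    ["watching", "the", "game", "at", "home"],
]
labels = [1, 0, 1, 0]  # 1 = travel-related (toy labels)

w2v = Word2Vec(tweets, vector_size=25, min_count=1, seed=0)

def bag_of_embeddings(tokens):
    # Mean of the word vectors in the tweet.
    return np.mean([w2v.wv[t] for t in tokens], axis=0)

bow = CountVectorizer(analyzer=lambda toks: toks).fit_transform(tweets).toarray()
emb = np.vstack([bag_of_embeddings(t) for t in tweets])
X = np.hstack([emb, bow])  # concatenated bag-of-embeddings + bag-of-words

clf = LogisticRegression(max_iter=1000).fit(X, labels)
```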
Data-Driven Dialogue Systems for Social Agents
Kevin K. Bowden, Shereen Oraby, Amita Misra, Jiaqi Wu, Stephanie Lukin
In order to build dialogue systems to tackle the ambitious task of holding social conversations, we argue that we need a data driven approach that includes insight into human conversational chit chat, and which incorporates different natural language processing modules. Our strategy is to analyze and index large corpora of social media data, including Twitter conversations, online debates, dialogues between friends, and blog posts, and then to couple this data retrieval with modules that perform tasks such as sentiment and style analysis, topic modeling, and summarization. We aim for personal assistants that can learn more nuanced human language, and to grow from task-oriented agents to more personable social bots.
http://arxiv.org/abs/1709.03190v1
"2017-09-10T22:37:18Z"
cs.CL
2017
Combining LSTM and Latent Topic Modeling for Mortality Prediction
Yohan Jo, Lisa Lee, Shruti Palaskar
There is a great need for technologies that can predict the mortality of patients in intensive care units with both high accuracy and accountability. We present joint end-to-end neural network architectures that combine long short-term memory (LSTM) and a latent topic model to simultaneously train a classifier for mortality prediction and learn latent topics indicative of mortality from textual clinical notes. For topic interpretability, the topic modeling layer has been carefully designed as a single-layer network with constraints inspired by LDA. Experiments on the MIMIC-III dataset show that our models significantly outperform prior models that are based on LDA topics in mortality prediction. However, we achieve limited success with our method for interpreting topics from the trained models by looking at the neural network weights.
http://arxiv.org/abs/1709.02842v1
"2017-09-08T19:30:09Z"
cs.CL
2017
Characterizing Geo-located Tweets in Brazilian Megacities
João Pereira, Arian Pasquali, Pedro Saleiro, Rosaldo Rossetti, Nélio Cacho
This work presents a framework for collecting, processing and mining geo-located tweets in order to extract meaningful and actionable knowledge in the context of smart cities. We collected and characterized more than 9M tweets from the two biggest cities in Brazil, Rio de Janeiro and São Paulo. We performed topic modeling using the Latent Dirichlet Allocation model to produce an unsupervised distribution of semantic topics over the stream of geo-located tweets as well as a distribution of words over those topics. We manually labeled and aggregated similar topics obtaining a total of 29 different topics across both cities. Results showed similarities in the majority of topics for both cities, reflecting similar interests and concerns among the population of Rio de Janeiro and São Paulo. Nevertheless, some specific topics are more predominant in one of the cities.
http://arxiv.org/abs/1709.01981v1
"2017-09-06T20:20:26Z"
cs.CY
2017
IAD: Interaction-Aware Diffusion Framework in Social Networks
Xi Zhang, Yuan Su, Siyu Qu, Sihong Xie, Binxing Fang, Philip S. Yu
In networks, multiple contagions, such as information and purchasing behaviors, may interact with each other as they spread simultaneously. However, most of the existing information diffusion models are built on the assumption that each individual contagion spreads independently, regardless of their interactions. Gaining insights into such interactions is crucial to understanding contagion adoption behaviors, and thus to making better predictions. In this paper, we study the contagion adoption behavior under a set of interactions, specifically, the interactions among users, contagions' contents and sentiments, which are learned from social network structures and texts. We then develop an effective and efficient interaction-aware diffusion (IAD) framework, incorporating these interactions into a unified model. We also present a generative process to distinguish user roles, a co-training method to determine contagions' categories and a new topic model to obtain topic-specific sentiments. Evaluation on a large-scale Weibo dataset demonstrates that our proposal can learn how different users, contagion categories and sentiments interact with each other efficiently. With these interactions, we can make a more accurate prediction than the state-of-the-art baselines. Moreover, we can better understand how the interactions influence the propagation process and thus can suggest useful directions for information promotion or suppression in viral marketing.
http://arxiv.org/abs/1709.01773v2
"2017-09-06T11:22:28Z"
cs.SI, physics.soc-ph
2017
Unsupervised Terminological Ontology Learning based on Hierarchical Topic Modeling
Xiaofeng Zhu, Diego Klabjan, Patrick Bless
In this paper, we present hierarchical relation-based latent Dirichlet allocation (hrLDA), a data-driven hierarchical topic model for extracting terminological ontologies from a large number of heterogeneous documents. In contrast to traditional topic models, hrLDA relies on noun phrases instead of unigrams, considers syntax and document structures, and enriches topic hierarchies with topic relations. Through a series of experiments, we demonstrate the superiority of hrLDA over existing topic models, especially for building hierarchies. Furthermore, we illustrate the robustness of hrLDA in the settings of noisy data sets, which are likely to occur in many practical scenarios. Our ontology evaluation results show that ontologies extracted from hrLDA are very competitive with the ontologies created by domain experts.
http://arxiv.org/abs/1708.09025v1
"2017-08-29T21:04:11Z"
cs.CL, cs.IR, cs.LG
2017
Online Interactive Collaborative Filtering Using Multi-Armed Bandit with Dependent Arms
Qing Wang, Chunqiu Zeng, Wubai Zhou, Tao Li, Larisa Shwartz, Genady Ya. Grabarnik
Online interactive recommender systems strive to promptly suggest to consumers appropriate items (e.g., movies, news articles) according to the current context including both the consumer and item content information. However, such context information is often unavailable in practice for the recommendation, where only the users' interaction data on items can be utilized. Moreover, the lack of interaction records, especially for new users and items, worsens the performance of recommendation further. To address these issues, collaborative filtering (CF), one of the recommendation techniques relying on the interaction data only, as well as the online multi-armed bandit mechanisms, capable of achieving the balance between exploitation and exploration, are adopted in the online interactive recommendation settings, by assuming independent items (i.e., arms). Nonetheless, the assumption rarely holds in reality, since the real-world items tend to be correlated with each other (e.g., two articles with similar topics). In this paper, we study online interactive collaborative filtering problems by considering the dependencies among items. We explicitly formulate the item dependencies as the clusters on arms, where the arms within a single cluster share the similar latent topics. In light of the topic modeling techniques, we come up with a generative model to generate the items from their underlying topics. Furthermore, an efficient online algorithm based on particle learning is developed for inferring both latent parameters and states of our model. Additionally, our inferred model can be naturally integrated with existing multi-armed selection strategies in the online interactive collaborating setting. Empirical studies on two real-world applications, online recommendations of movies and news, demonstrate both the effectiveness and efficiency of the proposed approach.
http://arxiv.org/abs/1708.03058v2
"2017-08-10T02:52:57Z"
cs.IR, cs.LG, H.3.3; I.2.6
2017
Communication-Free Parallel Supervised Topic Models
Lee Gao, Ronghuo Zheng
Embarrassingly (communication-free) parallel Markov chain Monte Carlo (MCMC) methods are commonly used in learning graphical models. However, MCMC cannot be directly applied in learning topic models because of the quasi-ergodicity problem caused by multimodal distribution of topics. In this paper, we develop an embarrassingly parallel MCMC algorithm for sLDA. Our algorithm works by switching the order of sampled topics combination and labeling variable prediction in sLDA, in which it overcomes the quasi-ergodicity problem because high-dimension topics that follow a multimodal distribution are projected into one-dimension document labels that follow a unimodal distribution. Our empirical experiments confirm that the out-of-sample prediction performance using our embarrassingly parallel algorithm is comparable to non-parallel sLDA while the computation time is significantly reduced.
http://arxiv.org/abs/1708.03052v1
"2017-08-10T02:03:52Z"
cs.LG, cs.CL, cs.IR, stat.ML
2017
Identifying Reference Spans: Topic Modeling and Word Embeddings help IR
Luis Moraes, Shahryar Baki, Rakesh Verma, Daniel Lee
The CL-SciSumm 2016 shared task introduced an interesting problem: given a document D and a piece of text that cites D, how do we identify the text spans of D being referenced by the piece of text? The shared task provided the first annotated dataset for studying this problem. We present an analysis of our continued work in improving our system's performance on this task. We demonstrate how topic models and word embeddings can be used to surpass the previously best performing system.
http://arxiv.org/abs/1708.02989v1
"2017-08-09T19:58:55Z"
cs.CL
2017
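A minimal sketch of the core retrieval step behind systems like the one in the Moraes et al. abstract above: rank candidate spans of the referenced document by TF-IDF cosine similarity to the citing sentence. The paper's additions, topic models and word embeddings, are left out, and all strings here are invented.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

spans = [
    "we propose a new collapsed gibbs sampler",
    "our method outperforms all prior baselines",
    "the corpus contains thirty thousand documents",
]
citance = "their sampler outperforms earlier baselines"

vec = TfidfVectorizer().fit(spans + [citance])
scores = cosine_similarity(vec.transform([citance]), vec.transform(spans))[0]
print(spans[scores.argmax()])  # most similar span = predicted reference span
```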
Semi-Automatic Terminology Ontology Learning Based on Topic Modeling
Monika Rani, Amit Kumar Dhar, O. P. Vyas
Ontologies provide features like a common vocabulary, reusability, and machine-readable content, and they also allow for semantic search, facilitate agent interaction, and support the ordering and structuring of knowledge for Semantic Web (Web 3.0) applications. However, the challenge in ontology engineering is automatic learning, i.e., there is still no fully automatic approach that builds an ontology from a text corpus or dataset of various topics using machine learning techniques. In this paper, two topic modeling algorithms are explored, namely LSI & SVD and Mr.LDA, for learning topic ontologies. The objective is to determine the statistical relationship between documents and terms in order to build a topic ontology and ontology graph with minimum human intervention. Experimental analysis on building a topic ontology and semantically retrieving the corresponding topic ontology for a user's query demonstrates the effectiveness of the proposed approach.
http://arxiv.org/abs/1709.01991v1
"2017-08-05T08:30:48Z"
cs.IR, cs.CL
2017
A network approach to topic models
Martin Gerlach, Tiago P. Peixoto, Eduardo G. Altmann
One of the main computational and scientific challenges in the modern age is to extract useful information from unstructured texts. Topic models are one popular machine-learning approach which infers the latent topical structure of a collection of documents. Despite their success, in particular that of the most widely used variant, Latent Dirichlet Allocation (LDA), and numerous applications in sociology, history, and linguistics, topic models are known to suffer from severe conceptual and practical problems, e.g. a lack of justification for the Bayesian priors, discrepancies with statistical properties of real texts, and the inability to properly choose the number of topics. Here we obtain a fresh view on the problem of identifying topical structures by relating it to the problem of finding communities in complex networks. This is achieved by representing text corpora as bipartite networks of documents and words. By adapting existing community-detection methods, using a stochastic block model (SBM) with non-parametric priors, we obtain a more versatile and principled framework for topic modeling (e.g., it automatically detects the number of topics and hierarchically clusters both the words and documents). The analysis of artificial and real corpora demonstrates that our SBM approach leads to better topic models than LDA in terms of statistical model selection. More importantly, our work shows how to formally relate methods from community detection and topic modeling, opening the possibility of cross-fertilization between these two fields.
http://arxiv.org/abs/1708.01677v2
"2017-08-04T22:35:50Z"
stat.ML, cs.CL, physics.data-an, physics.soc-ph
2017
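To make the representation in the Gerlach et al. abstract above concrete, a sketch that builds the bipartite document-word network with networkx; the SBM community detection itself (a hierarchical SBM, via graph-tool in the paper) is beyond this snippet, and the tiny corpus is invented.

```python
import networkx as nx

docs = {
    "d1": ["topic", "model", "text", "text"],
    "d2": ["network", "community", "text"],
}

B = nx.Graph()
for doc, words in docs.items():
    B.add_node(doc, bipartite=0)  # document side
    for w in words:
        B.add_node(w, bipartite=1)  # word side
        if B.has_edge(doc, w):
            B[doc][w]["weight"] += 1  # edge weight = word count in the document
        else:
            B.add_edge(doc, w, weight=1)

print(B.edges(data=True))
```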
SenGen: Sentence Generating Neural Variational Topic Model
Ramesh Nallapati, Igor Melnyk, Abhishek Kumar, Bowen Zhou
We present a new topic model that generates documents by sampling a topic for one whole sentence at a time, and generating the words in the sentence using an RNN decoder that is conditioned on the topic of the sentence. We argue that this novel formalism will help us not only visualize and model the topical discourse structure in a document better, but also potentially lead to more interpretable topics since we can now illustrate topics by sampling representative sentences instead of bag of words or phrases. We present a variational auto-encoder approach for learning in which we use a factorized variational encoder that independently models the posterior over topical mixture vectors of documents using a feed-forward network, and the posterior over topic assignments to sentences using an RNN. Our preliminary experiments on two different datasets indicate early promise, but also expose many challenges that remain to be addressed.
http://arxiv.org/abs/1708.00308v1
"2017-08-01T13:31:24Z"
cs.CL, cs.LG, stat.ML
2017
Familia: An Open-Source Toolkit for Industrial Topic Modeling
Di Jiang, Zeyu Chen, Rongzhong Lian, Siqi Bao, Chen Li
Familia is an open-source toolkit for pragmatic topic modeling in industry. Familia abstracts the utilities of topic modeling in industry as two paradigms: semantic representation and semantic matching. Efficient implementations of the two paradigms are made publicly available for the first time. Furthermore, we provide off-the-shelf topic models trained on large-scale industrial corpora, including Latent Dirichlet Allocation (LDA), SentenceLDA and Topical Word Embedding (TWE). We further describe typical applications which are successfully powered by topic modeling, in order to ease the confusion and difficulties of software engineers during topic model selection and utilization.
http://arxiv.org/abs/1707.09823v1
"2017-07-31T12:48:45Z"
cs.IR, cs.CL
2017
Combining Thesaurus Knowledge and Probabilistic Topic Models
Natalia Loukachevitch, Michael Nokel, Kirill Ivanov
In this paper we present an approach for introducing thesaurus knowledge into probabilistic topic models. The main idea of the approach is based on the assumption that the frequencies of semantically related words and phrases which co-occur in the same texts should be enhanced: this leads to their larger contribution to the topics found in those texts. We have conducted experiments with several thesauri and found that for improving topic models, it is useful to utilize domain-specific knowledge. If a general thesaurus, such as WordNet, is used, the thesaurus-based improvement of topic models can be achieved by excluding hyponymy relations in combined topic models.
http://arxiv.org/abs/1707.09816v1
"2017-07-31T12:32:16Z"
cs.CL
2017
Structural Regularities in Text-based Entity Vector Spaces
Christophe Van Gysel, Maarten de Rijke, Evangelos Kanoulas
Entity retrieval is the task of finding entities such as people or products in response to a query, based solely on the textual documents they are associated with. Recent semantic entity retrieval algorithms represent queries and experts in finite-dimensional vector spaces, where both are constructed from text sequences. We investigate entity vector spaces and the degree to which they capture structural regularities. Such vector spaces are constructed in an unsupervised manner without explicit information about structural aspects. For concreteness, we address these questions for a specific type of entity: experts in the context of expert finding. We discover how clusterings of experts correspond to committees in organizations, the ability of expert representations to encode the co-author graph, and the degree to which they encode academic rank. We compare latent, continuous representations created using methods based on distributional semantics (LSI), topic models (LDA) and neural networks (word2vec, doc2vec, SERT). Vector spaces created using neural methods, such as doc2vec and SERT, systematically perform better at clustering than LSI, LDA and word2vec. When it comes to encoding entity relations, SERT performs best.
http://arxiv.org/abs/1707.07930v1
"2017-07-25T11:54:19Z"
cs.IR, cs.AI, cs.CL
2017
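A hedged sketch adjacent to the Van Gysel et al. abstract above: represent each entity by a doc2vec vector over its associated text and cluster with k-means. SERT, the paper's own model, is not reproduced, and the experts and texts below are invented.

```python
from gensim.models.doc2vec import Doc2Vec, TaggedDocument
from sklearn.cluster import KMeans

experts = {
    "e1": "database systems query optimization indexing",
    "e2": "neural networks computer vision features",
    "e3": "information retrieval ranking evaluation",
    "e4": "deep learning image recognition",
}

# One tagged document per entity, built from its associated text.
corpus = [TaggedDocument(text.split(), [eid]) for eid, text in experts.items()]
model = Doc2Vec(corpus, vector_size=16, min_count=1, epochs=60, seed=0)

vectors = [model.dv[eid] for eid in experts]  # one vector per expert
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)
print(dict(zip(experts, clusters)))
```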
Prediction-Constrained Training for Semi-Supervised Mixture and Topic Models
Michael C. Hughes, Leah Weiner, Gabriel Hope, Thomas H. McCoy Jr., Roy H. Perlis, Erik B. Sudderth, Finale Doshi-Velez
Supervisory signals have the potential to make low-dimensional data representations, like those learned by mixture and topic models, more interpretable and useful. We propose a framework for training latent variable models that explicitly balances two goals: recovery of faithful generative explanations of high-dimensional data, and accurate prediction of associated semantic labels. Existing approaches fail to achieve these goals due to an incomplete treatment of a fundamental asymmetry: the intended application is always predicting labels from data, not data from labels. Our prediction-constrained objective for training generative models coherently integrates loss-based supervisory signals while enabling effective semi-supervised learning from partially labeled data. We derive learning algorithms for semi-supervised mixture and topic models using stochastic gradient descent with automatic differentiation. We demonstrate improved prediction quality compared to several previous supervised topic models, achieving predictions competitive with high-dimensional logistic regression on text sentiment analysis and electronic health records tasks while simultaneously learning interpretable topics.
http://arxiv.org/abs/1707.07341v1
"2017-07-23T20:19:06Z"
stat.ML, cs.AI, cs.LG
2017
Cooperative Hierarchical Dirichlet Processes: Superposition vs. Maximization
Junyu Xuan, Jie Lu, Guangquan Zhang, Richard Yi Da Xu
The cooperative hierarchical structure is a common and significant data structure observed in, or adopted by, many research areas, such as: text mining (author-paper-word) and multi-label classification (label-instance-feature). Renowned Bayesian approaches for cooperative hierarchical structure modeling are mostly based on topic models. However, these approaches suffer from a serious issue in that the number of hidden topics/factors needs to be fixed in advance and an inappropriate number may lead to overfitting or underfitting. One elegant way to resolve this issue is Bayesian nonparametric learning, but existing work in this area still cannot be applied to cooperative hierarchical structure modeling. In this paper, we propose a cooperative hierarchical Dirichlet process (CHDP) to fill this gap. Each node in a cooperative hierarchical structure is assigned a Dirichlet process to model its weights on the infinite hidden factors/topics. Together with measure inheritance from hierarchical Dirichlet process, two kinds of measure cooperation, i.e., superposition and maximization, are defined to capture the many-to-many relationships in the cooperative hierarchical structure. Furthermore, two constructive representations for CHDP, i.e., stick-breaking and international restaurant process, are designed to facilitate the model inference. Experiments on synthetic and real-world data with cooperative hierarchical structures demonstrate the properties and the ability of CHDP for cooperative hierarchical structure modeling and its potential for practical application scenarios.
http://arxiv.org/abs/1707.05420v1
"2017-07-18T00:42:10Z"
cs.LG, stat.ML
2017
Learning the Latent "Look": Unsupervised Discovery of a Style-Coherent Embedding from Fashion Images
Wei-Lin Hsiao, Kristen Grauman
What defines a visual style? Fashion styles emerge organically from how people assemble outfits of clothing, making them difficult to pin down with a computational model. Low-level visual similarity can be too specific to detect stylistically similar images, while manually crafted style categories can be too abstract to capture subtle style differences. We propose an unsupervised approach to learn a style-coherent representation. Our method leverages probabilistic polylingual topic models based on visual attributes to discover a set of latent style factors. Given a collection of unlabeled fashion images, our approach mines for the latent styles, then summarizes outfits by how they mix those styles. Our approach can organize galleries of outfits by style without requiring any style labels. Experiments on over 100K images demonstrate its promise for retrieving, mixing, and summarizing fashion images by their style.
http://arxiv.org/abs/1707.03376v2
"2017-07-11T17:28:59Z"
cs.CV
2017
Document Retrieval for Large Scale Content Analysis using Contextualized Dictionaries
Gregor Wiedemann, Andreas Niekler
This paper presents a procedure to retrieve subsets of relevant documents from large text collections for Content Analysis, e.g. in social sciences. Document retrieval for this purpose needs to take account of the fact that analysts often cannot describe their research objective with a small set of key terms, especially when dealing with theoretical or rather abstract research interests. Instead, it is much easier to define a set of paradigmatic documents which reflect topics of interest as well as targeted manner of speech. Thus, in contrast to classic information retrieval tasks we employ manually compiled collections of reference documents to compose large queries of several hundred key terms, called dictionaries. We extract dictionaries via Topic Models and also use co-occurrence data from reference collections. Evaluations show that the procedure improves retrieval results for this purpose compared to alternative methods of key term extraction as well as neglecting co-occurrence data.
http://arxiv.org/abs/1707.03217v1
"2017-07-11T11:00:44Z"
cs.IR
2017
Look Who's Talking: Bipartite Networks as Representations of a Topic Model of New Zealand Parliamentary Speeches
Ben Curran, Kyle Higham, Elisenda Ortiz, Demival Vasques Filho
Quantitative methods to measure the participation of elected Members of Parliament (MPs), and of the parties they belong to, in parliamentary debate and discourse are lacking. This is an exploratory study in which we propose the development of a new approach for a quantitative analysis of such participation. We utilize the New Zealand government's digital Hansard database to construct a topic model of parliamentary speeches consisting of nearly 40 million words in the period 2003-2016. A Latent Dirichlet Allocation topic model is implemented in order to reveal the thematic structure of our set of documents. This generative statistical model enables the detection of major themes or topics that are publicly discussed in the New Zealand parliament, as well as permitting their classification by MP. Information on topic proportions is subsequently analyzed using a combination of statistical methods. We observe patterns arising from time-series analysis of topic frequencies which can be related to specific social, economic and legislative events. We then construct a bipartite network representation, linking MPs to topics, for each of four parliamentary terms in this time frame. We build projected networks (onto the set of nodes represented by MPs) and proceed to the study of the dynamical changes of their topology, including community structure. By performing this longitudinal network analysis, we can observe the evolution of the New Zealand parliamentary topic network and its main parties in the period studied.
http://arxiv.org/abs/1707.03095v3
"2017-07-11T01:25:31Z"
cs.CL, cs.DL, cs.SI, physics.soc-ph
2017
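A sketch of the network construction in the Curran et al. abstract above: a bipartite MP-topic graph projected onto the MPs, so that MPs who speak on common topics become connected by weighted edges. The membership data here is invented.

```python
import networkx as nx
from networkx.algorithms import bipartite

mp_topics = {
    "MP_A": ["health", "education"],
    "MP_B": ["health", "defence"],
    "MP_C": ["education", "defence"],
}

B = nx.Graph()
B.add_nodes_from(mp_topics, bipartite=0)
for mp, topics in mp_topics.items():
    B.add_edges_from((mp, t) for t in topics)

# Project onto MPs: edge weight = number of topics two MPs share.
G = bipartite.weighted_projected_graph(B, list(mp_topics))
print(G.edges(data=True))
```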
What Works Better? A Study of Classifying Requirements
Zahra Shakeri Hossein Abad, Oliver Karras, Parisa Ghazi, Martin Glinz, Guenther Ruhe, Kurt Schneider
Classifying requirements into functional requirements (FR) and non-functional ones (NFR) is an important task in requirements engineering. However, automated classification of requirements written in natural language is not straightforward, due to the variability of natural language and the absence of a controlled vocabulary. This paper investigates how automated classification of requirements into FR and NFR can be improved and how well several machine learning approaches work in this context. We contribute an approach for preprocessing requirements that standardizes and normalizes requirements before applying classification algorithms. Further, we report on how well several existing machine learning methods perform for automated classification of NFRs into sub-categories such as usability, availability, or performance. Our study is performed on 625 requirements provided by the OpenScience tera-PROMISE repository. We found that our preprocessing improved the performance of an existing classification method. We further found significant differences in the performance of approaches such as Latent Dirichlet Allocation, Biterm Topic Modeling, or Naive Bayes for the sub-classification of NFRs.
http://arxiv.org/abs/1707.02358v1
"2017-07-07T20:54:22Z"
cs.SE
2017
Structured Black Box Variational Inference for Latent Time Series Models
Robert Bamler, Stephan Mandt
Continuous latent time series models are prevalent in Bayesian modeling; examples include the Kalman filter, dynamic collaborative filtering, or dynamic topic models. These models often benefit from structured, non-mean-field variational approximations that capture correlations between time steps. Black box variational inference with reparameterization gradients (BBVI) allows us to explore a rich new class of Bayesian non-conjugate latent time series models; however, a naive application of BBVI to such structured variational models would scale quadratically in the number of time steps. We describe a BBVI algorithm analogous to the forward-backward algorithm which instead scales linearly in time. It allows us to efficiently sample from the variational distribution and estimate the gradients of the ELBO. Finally, we show results on the recently proposed dynamic word embedding model, which was trained using our method.
http://arxiv.org/abs/1707.01069v1
"2017-07-04T17:03:59Z"
stat.ML, cs.LG
2017
Efficient Correlated Topic Modeling with Topic Embedding
Junxian He, Zhiting Hu, Taylor Berg-Kirkpatrick, Ying Huang, Eric P. Xing
Correlated topic modeling has been limited to small model and problem sizes due to its high computational cost and poor scaling. In this paper, we propose a new model which learns compact topic embeddings and captures topic correlations through the closeness between the topic vectors. Our method enables efficient inference in the low-dimensional embedding space, reducing previous cubic or quadratic time complexity to linear w.r.t. the topic size. We further speed up variational inference with a fast sampler to exploit the sparsity of topic occurrence. Extensive experiments show that our approach is capable of handling model and data scales which are several orders of magnitude larger than existing correlation results, without sacrificing modeling quality, providing competitive or superior performance in document classification and retrieval.
http://arxiv.org/abs/1707.00206v1
"2017-07-01T21:10:15Z"
cs.LG, cs.CL, stat.ML
2017
Jointly Learning Word Embeddings and Latent Topics
Bei Shi, Wai Lam, Shoaib Jameel, Steven Schockaert, Kwun Ping Lai
Word embedding models such as Skip-gram learn a vector-space representation for each word, based on the local word collocation patterns that are observed in a text corpus. Latent topic models, on the other hand, take a more global view, looking at the word distributions across the corpus to assign a topic to each word occurrence. These two paradigms are complementary in how they represent the meaning of word occurrences. While some previous works have already looked at using word embeddings for improving the quality of latent topics, and conversely, at using latent topics for improving word embeddings, such "two-step" methods cannot capture the mutual interaction between the two paradigms. In this paper, we propose STE, a framework which can learn word embeddings and latent topics in a unified manner. STE naturally obtains topic-specific word embeddings, and thus addresses the issue of polysemy. At the same time, it also learns the term distributions of the topics, and the topic distributions of the documents. Our experimental results demonstrate that the STE model can indeed generate useful topic-specific word embeddings and coherent latent topics in an effective and efficient way.
http://arxiv.org/abs/1706.07276v1
"2017-06-21T06:19:24Z"
cs.CL, cs.IR, cs.LG
2017
Topic Modeling for Classification of Clinical Reports
Efsun Sarioglu Kayi, Kabir Yadav, James M. Chamberlain, Hyeong-Ah Choi
Electronic health records (EHRs) contain important clinical information about patients. Efficient and effective use of this information could supplement or even replace manual chart review as a means of studying and improving the quality and safety of healthcare delivery. However, some of these clinical data are in the form of free text and require pre-processing before use in automated systems. A common free text data source is radiology reports, typically dictated by radiologists to explain their interpretations. We sought to demonstrate machine learning classification of computed tomography (CT) imaging reports into binary outcomes, i.e., positive or negative for fracture, using regular text classification and classifiers based on topic modeling. Topic modeling provides interpretable themes (topic distributions) in reports, a representation that is more compact than the commonly used bag-of-words representation and can be processed faster than raw text in subsequent automated processes. We demonstrate new classifiers based on this topic modeling representation of the reports. The aggregate topic classifier (ATC) and confidence-based topic classifier (CTC) use a single topic, determined from the training dataset based on different measures, to classify the reports in the test dataset. Alternatively, the similarity-based topic classifier (STC) measures the similarity between the reports' topic distributions to determine the predicted class. Our proposed topic modeling-based classifier systems are shown to be competitive with existing text classification techniques and provide an efficient and interpretable representation.
http://arxiv.org/abs/1706.06177v1
"2017-06-19T21:04:22Z"
cs.CL
2017
Bayesian Joint Modelling for Object Localisation in Weakly Labelled Images
Zhiyuan Shi, Timothy M. Hospedales, Tao Xiang
We address the problem of localisation of objects as bounding boxes in images and videos with weak labels. This weakly supervised object localisation problem has been tackled in the past using discriminative models where each object class is localised independently from other classes. In this paper, a novel framework based on Bayesian joint topic modelling is proposed, which differs significantly from the existing ones in that: (1) All foreground object classes are modelled jointly in a single generative model that encodes multiple object co-existence so that "explaining away" inference can resolve ambiguity and lead to better learning and localisation. (2) Image backgrounds are shared across classes to better learn varying surroundings and "push out" objects of interest. (3) Our model can be learned with a mixture of weakly labelled and unlabelled data, allowing the large volume of unlabelled images on the Internet to be exploited for learning. Moreover, the Bayesian formulation enables the exploitation of various types of prior knowledge to compensate for the limited supervision offered by weakly labelled data, as well as Bayesian domain adaptation for transfer learning. Extensive experiments on the PASCAL VOC, ImageNet and YouTube-Object videos datasets demonstrate the effectiveness of our Bayesian joint model for weakly supervised object localisation.
http://arxiv.org/abs/1706.05952v1
"2017-06-19T13:59:48Z"
cs.CV
2017
An Automatic Approach for Document-level Topic Model Evaluation
Shraey Bhatia, Jey Han Lau, Timothy Baldwin
Topic models jointly learn topics and document-level topic distribution. Extrinsic evaluation of topic models tends to focus exclusively on topic-level evaluation, e.g. by assessing the coherence of topics. We demonstrate that there can be large discrepancies between topic- and document-level model quality, and that basing model evaluation on topic-level analysis can be highly misleading. We propose a method for automatically predicting topic model quality based on analysis of document-level topic allocations, and provide empirical evidence for its robustness.
http://arxiv.org/abs/1706.05140v1
"2017-06-16T03:53:38Z"
cs.CL
2017
Topic supervised non-negative matrix factorization
Kelsey MacMillan, James D. Wilson
Topic models have been extensively used to organize and interpret the contents of large, unstructured corpora of text documents. Although topic models often perform well on traditional training vs. test set evaluations, it is often the case that the results of a topic model do not align with human interpretation. This interpretability fallacy is largely due to the unsupervised nature of topic models, which prohibits any user guidance on the results of a model. In this paper, we introduce a semi-supervised method called topic supervised non-negative matrix factorization (TS-NMF) that enables the user to provide labeled example documents to promote the discovery of more meaningful semantic structure of a corpus. In this way, the results of TS-NMF better match the intuition and desired labeling of the user. The core of TS-NMF relies on solving a non-convex optimization problem for which we derive an iterative algorithm that is shown to be monotonic and convergent to a local optimum. We demonstrate the practical utility of TS-NMF on the Reuters and PubMed corpora, and find that TS-NMF is especially useful for conceptual or broad topics, where topic key terms are not well understood. Although identifying an optimal latent structure for the data is not a primary objective of the proposed approach, we find that TS-NMF achieves higher weighted Jaccard similarity scores than the contemporary methods, (unsupervised) NMF and latent Dirichlet allocation, at supervision rates as low as 10% to 20%.
http://arxiv.org/abs/1706.05084v2
"2017-06-12T04:20:04Z"
cs.CL, cs.IR, cs.LG, stat.ML
2017
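For context on the MacMillan & Wilson abstract above, a sketch of the plain, unsupervised NMF factorization that TS-NMF builds on; the paper's topic-supervision constraints are not implemented here, and the corpus is a toy.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import NMF

docs = [
    "stocks fall on interest rate fears",
    "the home team wins the league final",
    "markets rally after strong earnings",
    "striker scores twice in cup match",
]

X = TfidfVectorizer(stop_words="english").fit_transform(docs)
nmf = NMF(n_components=2, init="nndsvd", random_state=0)
W = nmf.fit_transform(X)  # document-topic weights
H = nmf.components_       # topic-term weights
```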
Joint Modeling of Topics, Citations, and Topical Authority in Academic Corpora
Jooyeon Kim, Dongwoo Kim, Alice Oh
Much of scientific progress stems from previously published findings, but searching through the vast sea of scientific publications is difficult. We often rely on metrics of scholarly authority to find the prominent authors but these authority indices do not differentiate authority based on research topics. We present Latent Topical-Authority Indexing (LTAI) for jointly modeling the topics, citations, and topical authority in a corpus of academic papers. Compared to previous models, LTAI differs in two main aspects. First, it explicitly models the generative process of the citations, rather than treating the citations as given. Second, it models each author's influence on citations of a paper based on the topics of the cited papers, as well as the citing papers. We fit LTAI to four academic corpora: CORA, Arxiv Physics, PNAS, and Citeseer. We compare the performance of LTAI against various baselines, starting with the latent Dirichlet allocation, to the more advanced models including author-link topic model and dynamic author citation topic model. The results show that LTAI achieves improved accuracy over other similar models when predicting words, citations and authors of publications.
http://arxiv.org/abs/1706.00593v1
"2017-06-02T08:52:47Z"
cs.CL, cs.DL, cs.SI
2017
Discovering Discrete Latent Topics with Neural Variational Inference
Yishu Miao, Edward Grefenstette, Phil Blunsom
Topic models have been widely explored as probabilistic generative models of documents. Traditional inference methods have sought closed-form derivations for updating the models, however as the expressiveness of these models grows, so does the difficulty of performing fast and accurate inference over their parameters. This paper presents alternative neural approaches to topic modelling by providing parameterisable distributions over topics which permit training by backpropagation in the framework of neural variational inference. In addition, with the help of a stick-breaking construction, we propose a recurrent network that is able to discover a notionally unbounded number of topics, analogous to Bayesian non-parametric topic models. Experimental results on the MXM Song Lyrics, 20NewsGroups and Reuters News datasets demonstrate the effectiveness and efficiency of these neural topic models.
http://arxiv.org/abs/1706.00359v2
"2017-06-01T15:55:42Z"
cs.CL, cs.AI, cs.IR, cs.LG
2017
Neural Models for Documents with Metadata
Dallas Card, Chenhao Tan, Noah A. Smith
Most real-world document collections involve various types of metadata, such as author, source, and date, and yet the most commonly-used approaches to modeling text corpora ignore this information. While specialized models have been developed for particular applications, few are widely used in practice, as customization typically requires derivation of a custom inference algorithm. In this paper, we build on recent advances in variational inference methods and propose a general neural framework, based on topic models, to enable flexible incorporation of metadata and allow for rapid exploration of alternative models. Our approach achieves strong performance, with a manageable tradeoff between perplexity, coherence, and sparsity. Finally, we demonstrate the potential of our framework through an exploration of a corpus of articles about US immigration.
http://arxiv.org/abs/1705.09296v2
"2017-05-25T18:00:03Z"
stat.ML, cs.CL
2017
Self-supervised learning of visual features through embedding images into text topic spaces
Lluis Gomez, Yash Patel, Marçal Rusiñol, Dimosthenis Karatzas, C. V. Jawahar
End-to-end training from scratch of current deep architectures for new computer vision problems would require Imagenet-scale datasets, and this is not always possible. In this paper we present a method that is able to take advantage of freely available multi-modal content to train computer vision algorithms without human supervision. We put forward the idea of performing self-supervised learning of visual features by mining a large scale corpus of multi-modal (text and image) documents. We show that discriminative visual features can be learnt efficiently by training a CNN to predict the semantic context in which a particular image is more probable to appear as an illustration. For this we leverage the hidden semantic structures discovered in the text corpus with a well-known topic modeling technique. Our experiments demonstrate state-of-the-art performance in image classification, object detection, and multi-modal retrieval compared to recent self-supervised or naturally-supervised approaches.
http://arxiv.org/abs/1705.08631v1
"2017-05-24T06:59:30Z"
cs.CV
2,017
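A hedged sketch of the text side of the pipeline described in the entry above: topic proportions from an off-the-shelf LDA become the soft targets a CNN would be trained to predict from the accompanying images. The toy texts, the choice of sklearn's LDA, and the loss function are our own illustrative assumptions.

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Toy stand-in for the text half of a multi-modal corpus (the paper
# mines illustrated Wikipedia articles).
texts = [
    "goal match player football league season",
    "election parliament vote minister government",
    "match season league player goal",
]

counts = CountVectorizer().fit_transform(texts)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(counts)

# Per-document topic proportions: the soft targets a CNN would learn
# to predict from the article's images.
targets = lda.transform(counts)

def soft_cross_entropy(pred_probs, target_probs, eps=1e-12):
    """Loss between CNN softmax outputs and the LDA topic targets."""
    return -np.sum(target_probs * np.log(pred_probs + eps), axis=1).mean()

uniform = np.full_like(targets, 1.0 / targets.shape[1])   # untrained CNN
print(soft_cross_entropy(uniform, targets))
```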
TwiInsight: Discovering Topics and Sentiments from Social Media Datasets
Zhengkui Wang, Guangdong Bai, Soumyadeb Chowdhury, Quanqing Xu, Zhi Lin Seow
Social media platforms contain a great wealth of information, which provides opportunities for us to explore hidden patterns or unknown correlations, and to understand people's satisfaction with what they are discussing. As one showcase, in this paper we present TwiInsight, a system that explores the insights hidden in Twitter data. Different from other Twitter analysis systems, TwiInsight automatically extracts the popular topics under different categories (e.g., healthcare, food, technology, sports and transport) discussed on Twitter via topic modeling, and also identifies the correlated topics across different categories. Additionally, it discovers people's opinions on the tweets and topics via sentiment analysis. The system also employs an intuitive and informative visualization to show the uncovered insights. Furthermore, we develop and compare six of the most popular algorithms - three for sentiment analysis and three for topic modeling.
http://arxiv.org/abs/1705.08094v1
"2017-05-23T06:49:12Z"
cs.IR, cs.CL
2,017
W2VLDA: Almost Unsupervised System for Aspect Based Sentiment Analysis
Aitor García-Pablos, Montse Cuadros, German Rigau
With the increase of online customer opinions in specialised websites and social networks, the necessity of automatic systems that help to organise and classify customer reviews by domain-specific aspects/categories and sentiment polarity is more important than ever. Supervised approaches to Aspect Based Sentiment Analysis obtain good results for the domain/language they are trained on, but manually labelling data to train supervised systems for all domains and languages is usually very costly and time consuming. In this work we describe W2VLDA, an almost unsupervised system based on topic modelling that, combined with some other unsupervised methods and a minimal configuration, performs aspect/category classification, aspect-term/opinion-word separation and sentiment polarity classification for any given domain and language. We evaluate the performance of the aspect and sentiment classification in the multilingual SemEval 2016 task 5 (ABSA) dataset. We show competitive results for several languages (English, Spanish, French and Dutch) and domains (hotels, restaurants, electronic devices).
http://arxiv.org/abs/1705.07687v2
"2017-05-22T12:01:10Z"
cs.CL
2,017
Why are Big Data Matrices Approximately Low Rank?
Madeleine Udell, Alex Townsend
Matrices of (approximate) low rank are pervasive in data science, appearing in recommender systems, movie preferences, topic models, medical records, and genomics. While there is a vast literature on how to exploit low rank structure in these datasets, there is less attention on explaining why the low rank structure appears in the first place. Here, we explain the effectiveness of low rank models in data science by considering a simple generative model for these matrices: we suppose that each row or column is associated to a (possibly high dimensional) bounded latent variable, and entries of the matrix are generated by applying a piecewise analytic function to these latent variables. These matrices are in general full rank. However, we show that we can approximate every entry of an $m \times n$ matrix drawn from this model to within a fixed absolute error by a low rank matrix whose rank grows as $\mathcal O(\log(m + n))$. Hence any sufficiently large matrix from such a latent variable model can be approximated, up to a small entrywise error, by a low rank matrix.
http://arxiv.org/abs/1705.07474v2
"2017-05-21T16:49:36Z"
cs.LG, stat.ML
2,017
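A small numerical illustration (not the paper's proof) of the claim in the entry above: a matrix generated by applying an analytic function to bounded one-dimensional latent variables is full rank in exact arithmetic, yet its epsilon-rank is tiny. The Gaussian-kernel choice of f is our assumption.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 500, 400

# Each row and column carries a bounded latent value; entries apply an
# analytic function to the pair (a Gaussian kernel here, standing in
# for the general piecewise-analytic f of the paper).
x = rng.uniform(-1, 1, m)
y = rng.uniform(-1, 1, n)
A = np.exp(-(x[:, None] - y[None, :]) ** 2)   # full rank in exact arithmetic

s = np.linalg.svd(A, compute_uv=False)
eps_rank = int((s > 1e-6 * s[0]).sum())
print(f"epsilon-rank {eps_rank} vs min(m, n) = {min(m, n)}")
# the epsilon-rank is tiny, in line with the O(log(m + n)) growth
```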
Mixed Membership Word Embeddings for Computational Social Science
James Foulds
Word embeddings improve the performance of NLP systems by revealing the hidden structural relationships between words. Despite their success in many applications, word embeddings have seen very little use in computational social science NLP tasks, presumably due to their reliance on big data, and to a lack of interpretability. I propose a probabilistic model-based word embedding method which can recover interpretable embeddings, without big data. The key insight is to leverage mixed membership modeling, in which global representations are shared, but individual entities (i.e. dictionary words) are free to use these representations to uniquely differing degrees. I show how to train the model using a combination of state-of-the-art training techniques for word embeddings and topic models. The experimental results show an improvement in predictive language modeling of up to 63% in MRR over the skip-gram, and demonstrate that the representations are beneficial for supervised learning. I illustrate the interpretability of the models with computational social science case studies on State of the Union addresses and NIPS articles.
http://arxiv.org/abs/1705.07368v3
"2017-05-20T23:45:54Z"
cs.CL, cs.AI, cs.LG
2,017
Effective Representations of Clinical Notes
Sebastien Dubois, Nathanael Romano, David C. Kale, Nigam Shah, Kenneth Jung
Clinical notes are a rich source of information about patient state. However, using them to predict clinical events with machine learning models is challenging. They are very high dimensional, sparse and have complex structure. Furthermore, training data is often scarce because it is expensive to obtain reliable labels for many clinical events. These difficulties have traditionally been addressed by manual feature engineering encoding task specific domain knowledge. We explored the use of neural networks and transfer learning to learn representations of clinical notes that are useful for predicting future clinical events of interest, such as all causes mortality, inpatient admissions, and emergency room visits. Our data comprised 2.7 million notes and 115 thousand patients at Stanford Hospital. We used the learned representations, along with commonly used bag of words and topic model representations, as features for predictive models of clinical events. We evaluated the effectiveness of these representations with respect to the performance of the models trained on small datasets. Models using the neural network derived representations performed significantly better than models using the baseline representations with small ($N < 1000$) training datasets. The learned representations offer significant performance gains over commonly used baseline representations for a range of predictive modeling tasks and cohort sizes, offering an effective alternative to task specific feature engineering when plentiful labeled training data is not available.
http://arxiv.org/abs/1705.07025v3
"2017-05-19T14:42:48Z"
stat.ML, cs.LG
2,017
Bayesian Joint Topic Modelling for Weakly Supervised Object Localisation
Zhiyuan Shi, Timothy M. Hospedales, Tao Xiang
We address the problem of localisation of objects as bounding boxes in images with weak labels. This weakly supervised object localisation problem has been tackled in the past using discriminative models where each object class is localised independently from other classes. We propose a novel framework based on Bayesian joint topic modelling. Our framework has three distinctive advantages over previous works: (1) All object classes and image backgrounds are modelled jointly together in a single generative model so that "explaining away" inference can resolve ambiguity and lead to better learning and localisation. (2) The Bayesian formulation of the model enables easy integration of prior knowledge about object appearance to compensate for limited supervision. (3) Our model can be learned with a mixture of weakly labelled and unlabelled data, allowing the large volume of unlabelled images on the Internet to be exploited for learning. Extensive experiments on the challenging VOC dataset demonstrate that our approach outperforms the state-of-the-art competitors.
http://arxiv.org/abs/1705.03372v1
"2017-05-09T15:00:07Z"
cs.CV
2,017
Credible Review Detection with Limited Information using Consistency Analysis
Subhabrata Mukherjee, Sourav Dutta, Gerhard Weikum
Online reviews provide viewpoints on the strengths and shortcomings of products/services, influencing potential customers' purchasing decisions. However, the proliferation of non-credible reviews -- either fake (promoting/ demoting an item), incompetent (involving irrelevant aspects), or biased -- entails the problem of identifying credible reviews. Prior works involve classifiers harnessing rich information about items/users -- which might not be readily available in several domains -- that provide only limited interpretability as to why a review is deemed non-credible. This paper presents a novel approach to address the above issues. We utilize latent topic models leveraging review texts, item ratings, and timestamps to derive consistency features without relying on item/user histories, unavailable for "long-tail" items/users. We develop models, for computing review credibility scores to provide interpretable evidence for non-credible reviews, that are also transferable to other domains -- addressing the scarcity of labeled data. Experiments on real-world datasets demonstrate improvements over state-of-the-art baselines.
http://arxiv.org/abs/1705.02668v1
"2017-05-07T17:43:01Z"
cs.AI, cs.CL, cs.IR, cs.SI, stat.ML
2,017
KATE: K-Competitive Autoencoder for Text
Yu Chen, Mohammed J. Zaki
Autoencoders have been successful in learning meaningful representations from image datasets. However, their performance on text datasets has not been widely studied. Traditional autoencoders tend to learn possibly trivial representations of text documents due to their confounding properties such as high-dimensionality, sparsity and power-law word distributions. In this paper, we propose a novel k-competitive autoencoder, called KATE, for text documents. Due to the competition between the neurons in the hidden layer, each neuron becomes specialized in recognizing specific data patterns, and overall the model can learn meaningful representations of textual data. A comprehensive set of experiments show that KATE can learn better representations than traditional autoencoders including denoising, contractive, variational, and k-sparse autoencoders. Our model also outperforms deep generative models, probabilistic topic models, and even word representation models (e.g., Word2Vec) in terms of several downstream tasks such as document classification, regression, and retrieval.
http://arxiv.org/abs/1705.02033v2
"2017-05-04T22:04:17Z"
stat.ML, cs.LG
2,017
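A simplified sketch of the k-competitive activation at the heart of KATE, as we read the entry above: the k strongest hidden units survive and absorb the amplified "energy" of the silenced ones. The paper's full version splits winners between positive and negative neurons; this stripped-down variant and the alpha value are illustrative.

```python
import numpy as np

def k_competitive(z, k, alpha=6.0):
    """Keep the k strongest activations; reallocate the rest's energy.

    alpha is the amplification hyperparameter; the losers' summed
    magnitude is fed to the winners (with matching signs) before the
    layer output is passed on to the decoder.
    """
    z = np.asarray(z, dtype=float)
    winners = np.argsort(np.abs(z))[-k:]
    mask = np.zeros_like(z, dtype=bool)
    mask[winners] = True
    energy = np.abs(z[~mask]).sum()          # "energy" of silenced units
    out = np.where(mask, z, 0.0)
    out[winners] += alpha * energy * np.sign(z[winners]) / k
    return out

print(k_competitive([0.3, -2.0, 0.1, 1.5, -0.2], k=2))
```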
Improving fitness: Mapping research priorities against societal needs on obesity
Lorenzo Cassi, Agénor Lahatte, Ismael Rafols, Pierre Sautier, Élisabeth de Turckheim
Science policy is increasingly shifting towards an emphasis on societal problems or grand challenges. As a result, new evaluative tools are needed to help assess not only the knowledge production side of research programmes or organisations, but also the articulation of research agendas with societal needs. In this paper, we present an exploratory investigation of science supply and societal needs on the grand challenge of obesity - an emerging health problem with enormous social costs. We illustrate a potential approach that uses topic modelling to explore: (a) how scientific publications can be used to describe existing priorities in science production; (b) how records of questions posed in the European parliament can be used as an instance of mapping the discourse of social needs; (c) how the comparison between the two may show (mis)alignments between societal concerns and scientific outputs. While this is a technical exercise, we propose that this type of mapping method can be useful for informing strategic planning and evaluation in funding agencies.
http://arxiv.org/abs/1705.01151v2
"2017-05-02T19:29:44Z"
cs.DL, 67.02, H.3.7
2,017
Fuzzy Approach Topic Discovery in Health and Medical Corpora
Amir Karami, Aryya Gangopadhyay, Bin Zhou, Hadi Kharrazi
The majority of medical documents and electronic health records (EHRs) are in text format, which poses a challenge for data processing and finding relevant documents. Looking for ways to automatically retrieve the enormous amount of health and medical knowledge has always been an intriguing topic. Powerful methods have been developed in recent years to make text processing automatic. One of the popular approaches to retrieving information by discovering the themes in health and medical corpora is topic modeling; however, this approach still needs new perspectives. In this research we describe fuzzy latent semantic analysis (FLSA), a novel approach to topic modeling from a fuzzy perspective. FLSA can handle the redundancy issue in health and medical corpora and provides a new method to estimate the number of topics. The quantitative evaluations show that FLSA produces superior performance and features compared to latent Dirichlet allocation (LDA), the most popular topic model.
http://arxiv.org/abs/1705.00995v2
"2017-05-02T14:29:14Z"
stat.ML, cs.CL, cs.IR, 62-07, 62-09, 68T50, 03B52, 03E72, H.3.1; H.3.3; I.2.7; I.7; I.5; I.2.3
2,017
Stochastic Divergence Minimization for Biterm Topic Model
Zhenghang Cui, Issei Sato, Masashi Sugiyama
With the emergence and rapid development of social networks, a huge number of short texts are accumulated and need to be processed. Inferring the latent topics of collected short texts is useful for understanding their hidden structure and predicting new content. Unlike conventional topic models such as latent Dirichlet allocation (LDA), the biterm topic model (BTM) was recently proposed for short texts to overcome the sparseness of document-level word co-occurrences by directly modeling the generation process of word pairs. Stochastic inference algorithms based on collapsed Gibbs sampling (CGS) and collapsed variational inference have been proposed for BTM. However, they either incur high computational complexity or rely on very crude estimation. In this work, we develop a stochastic divergence minimization inference algorithm for BTM to estimate latent topics more accurately in a scalable way. Experiments demonstrate the superiority of our proposed algorithm compared with existing inference algorithms.
http://arxiv.org/abs/1705.00394v1
"2017-05-01T01:05:09Z"
stat.ML
2,017
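A minimal sketch of the biterm representation that BTM models, per the entry above: every unordered word pair co-occurring in a short text becomes one observation, sidestepping sparse document-level co-occurrences. Deduplicating words within a text is our simplification, and the inference algorithm itself is not shown.

```python
from itertools import combinations
from collections import Counter

def extract_biterms(short_texts):
    """Count unordered word pairs co-occurring within each short text."""
    biterms = Counter()
    for text in short_texts:
        words = sorted(set(text.lower().split()))   # dedup within a text
        biterms.update(combinations(words, 2))
    return biterms

tweets = ["apple releases new phone",
          "new phone battery life",
          "apple stock rises again"]
print(extract_biterms(tweets).most_common(3))
```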
Optimal client recommendation for market makers in illiquid financial products
Dieter Hendricks, Stephen J. Roberts
The process of liquidity provision in financial markets can result in prolonged exposure to illiquid instruments for market makers. In this case, where a proprietary position is not desired, pro-actively targeting the right client who is likely to be interested can be an effective means to offset this position, rather than relying on commensurate interest arising through natural demand. In this paper, we consider the inference of a client profile for the purpose of corporate bond recommendation, based on typical recorded information available to the market maker. Given a historical record of corporate bond transactions and bond meta-data, we use a topic-modelling analogy to develop a probabilistic technique for compiling a curated list of client recommendations for a particular bond that needs to be traded, ranked by probability of interest. We show that a model based on Latent Dirichlet Allocation offers promising performance to deliver relevant recommendations for sales traders.
http://arxiv.org/abs/1704.08488v1
"2017-04-27T09:28:50Z"
q-fin.CP, cs.LG, stat.ML
2,017
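A hedged sketch of the recommendation idea in the entry above, with hypothetical client and bond names: each client's transaction history is treated as a document of bond tokens, LDA is fit over these documents, and clients are ranked for a target bond by the model's probability of that bond under each client's topic mixture. The authors' actual features and model details may differ.

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Hypothetical data: each "document" is one client's history of traded
# bond identifiers.
client_histories = {
    "client_A": "bond1 bond2 bond2 bond7",
    "client_B": "bond7 bond8 bond8 bond9",
    "client_C": "bond1 bond1 bond2 bond3",
}

vec = CountVectorizer()
X = vec.fit_transform(client_histories.values())
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)

theta = lda.transform(X)                                  # client-topic mixtures
phi = lda.components_ / lda.components_.sum(axis=1, keepdims=True)  # topic-bond probs

def rank_clients(bond):
    """Rank clients by the model's probability of interest in `bond`."""
    j = vec.vocabulary_[bond]
    scores = theta @ phi[:, j]              # p(bond | client) under the model
    names = list(client_histories)
    return sorted(zip(names, scores), key=lambda t: -t[1])

print(rank_clients("bond2"))   # curated list, most likely client first
```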
OMNIRank: Risk Quantification for P2P Platforms with Deep Learning
Honglun Zhang, Haiyang Wang, Xiaming Chen, Yongkun Wang, Yaohui Jin
P2P lending presents an innovative and flexible alternative to conventional lending institutions like banks, where lenders and borrowers directly make transactions and benefit each other without complicated verifications. However, due to a lack of specialized laws, delegated monitoring and effective management, P2P platforms may spawn potential risks, such as withdrawal failures, investigation involvements and even runaway bosses, which cause great losses to lenders and are especially serious and notorious in China. Although there are abundant public information and data available on the Internet related to P2P platforms, the challenges of multi-sourcing and heterogeneity remain. In this paper, we propose a novel deep learning model, OMNIRank, which comprehends multi-dimensional features of P2P platforms for risk quantification and produces scores for ranking. We first construct a large-scale flexible crawling framework and obtain great amounts of multi-source heterogeneous data on domestic P2P platforms since 2007 from the Internet. Purification steps such as duplication and noise removal, null handling, format unification and fusion are applied to improve data quality. We then extract deep features of P2P platforms via text comprehension, topic modeling, knowledge graphs and sentiment analysis, which are delivered as inputs to OMNIRank, a deep learning model for risk quantification of P2P platforms. Finally, according to the rankings generated by OMNIRank, we build rich data visualizations and interactions, providing lenders with comprehensive information support, decision suggestions and safety guarantees.
http://arxiv.org/abs/1705.03497v1
"2017-04-27T03:15:38Z"
cs.CY, cs.LG
2,017
Topically Driven Neural Language Model
Jey Han Lau, Timothy Baldwin, Trevor Cohn
Language models are typically applied at the sentence level, without access to the broader document context. We present a neural language model that incorporates document context in the form of a topic model-like architecture, thus providing a succinct representation of the broader document context outside of the current sentence. Experiments over a range of datasets demonstrate that our model outperforms a pure sentence-based model in terms of language model perplexity, and leads to topics that are potentially more coherent than those produced by a standard LDA topic model. Our model also has the ability to generate related sentences for a topic, providing another way to interpret topics.
http://arxiv.org/abs/1704.08012v2
"2017-04-26T08:33:14Z"
cs.CL
2,017
User Profile Based Research Paper Recommendation
Harshita Sahijwani, Sourish Dasgupta
We design a recommender system for research papers based on topic modeling. The user's feedback on the results is used to make the results more relevant the next time they issue a query. The user's needs are understood by observing the change in the themes that the user shows a preference for over time.
http://arxiv.org/abs/1704.07757v1
"2017-04-25T16:01:50Z"
cs.IR
2,017
Using SVD for Topic Modeling
Zheng Tracy Ke, Minzhe Wang
The probabilistic topic model imposes a low-rank structure on the expectation of the corpus matrix. Therefore, singular value decomposition (SVD) is a natural tool of dimension reduction. We propose an SVD-based method for estimating a topic model. Our method constructs an estimate of the topic matrix from only a few leading singular vectors of the corpus matrix, and has a great advantage in memory use and computational cost for large-scale corpora. The core ideas behind our method include a pre-SVD normalization to tackle severe word frequency heterogeneity, a post-SVD normalization to create a low-dimensional word embedding that manifests a simplex geometry, and a post-SVD procedure to construct an estimate of the topic matrix directly from the embedded word cloud. We provide the explicit rate of convergence of our method. We show that our method attains the optimal rate in the case of long and moderately long documents, and it improves the rates of existing methods in the case of short documents. The key of our analysis is a sharp row-wise large-deviation bound for empirical singular vectors, which is technically demanding to derive and potentially useful for other problems. We apply our method to a corpus of Associated Press news articles and a corpus of abstracts of statistical papers.
http://arxiv.org/abs/1704.07016v3
"2017-04-24T01:45:51Z"
stat.ME, 62H12, 62H25, 62C20, 62P25
2,017
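A rough sketch of the SVD pipeline outlined in the entry above. The pre- and post-SVD normalizations here are crude stand-ins for the paper's calibrated ones, and the final vertex-hunting step that reads the topic matrix off the embedded word cloud is omitted.

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import TruncatedSVD

docs = ["stocks fall as markets react", "team wins the league match",
        "markets rally on bank news", "league match ends in a draw",
        "bank stocks rally as markets calm", "the team draws the match"]

D = CountVectorizer().fit_transform(docs).T.toarray().astype(float)  # word x doc
D /= D.sum(axis=0, keepdims=True)            # columns become word frequencies

# Pre-SVD normalization: damp high-frequency words (a rough stand-in
# for the paper's weighting against word frequency heterogeneity).
M = D / np.sqrt(D.mean(axis=1, keepdims=True) + 1e-12)

K = 2
U = TruncatedSVD(n_components=K, random_state=0).fit_transform(M)  # word x K

# Post-SVD normalization: rescale rows so the embedded word cloud
# exhibits the simplex geometry from which the topic matrix is read.
U_norm = U / (np.linalg.norm(U, axis=1, keepdims=True) + 1e-12)
print(U_norm.shape)
```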
Analysis of Computational Science Papers from ICCS 2001-2016 using Topic Modeling and Graph Theory
Tesfamariam M. Abuhay, Sergey V. Kovalchuk, Klavdiya O. Bochenina, George Kampis, Valeria V. Krzhizhanovskaya, Michael H. Lees
This paper presents results of topic modeling and network models of topics using the International Conference on Computational Science corpus, which contains domain-specific (computational science) papers over sixteen years (a total of 5695 papers). We discuss topical structures of International Conference on Computational Science, how these topics evolve over time in response to the topicality of various problems, technologies and methods, and how all these topics relate to one another. This analysis illustrates multidisciplinary research and collaborations among scientific communities, by constructing static and dynamic networks from the topic modeling results and the keywords of authors. The results of this study give insights about the past and future trends of core discussion topics in computational science. We used the Non-negative Matrix Factorization topic modeling algorithm to discover topics and labeled and grouped results hierarchically.
http://arxiv.org/abs/1705.02203v1
"2017-04-18T13:24:41Z"
cs.DL, cs.CL, cs.IR, cs.SI
2,017
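A minimal sketch of the NMF topic-discovery step named in the entry above, on stand-in abstracts; the hierarchical labeling and the network construction from topics and author keywords are beyond this snippet.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import NMF

abstracts = [
    "parallel algorithms for sparse linear solvers",
    "agent based simulation of crowd evacuation",
    "gpu accelerated sparse matrix computations",
    "simulation of pedestrian and crowd dynamics",
]

tfidf = TfidfVectorizer(stop_words="english")
X = tfidf.fit_transform(abstracts)

nmf = NMF(n_components=2, random_state=0).fit(X)
terms = tfidf.get_feature_names_out()
for k, topic in enumerate(nmf.components_):
    top = topic.argsort()[::-1][:4]
    print(f"topic {k}:", ", ".join(terms[i] for i in top))
```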
Pólya Urn Latent Dirichlet Allocation: a doubly sparse massively parallel sampler
Alexander Terenin, Måns Magnusson, Leif Jonsson, David Draper
Latent Dirichlet Allocation (LDA) is a topic model widely used in natural language processing and machine learning. Most approaches to training the model rely on iterative algorithms, which makes it difficult to run LDA on big corpora that are best analyzed in parallel and distributed computational environments. Indeed, current approaches to parallel inference either don't converge to the correct posterior or require storage of large dense matrices in memory. We present a novel sampler that overcomes both problems, and we show that this sampler is faster, both empirically and theoretically, than previous Gibbs samplers for LDA. We do so by employing a novel Pólya-urn-based approximation in the sparse partially collapsed sampler for LDA. We prove that the approximation error vanishes with data size, making our algorithm asymptotically exact, a property of importance for large-scale topic models. In addition, we show, via an explicit example, that - contrary to popular belief in the topic modeling literature - partially collapsed samplers can be more efficient than fully collapsed samplers. We conclude by comparing the performance of our algorithm with that of other approaches on well-known corpora.
http://arxiv.org/abs/1704.03581v7
"2017-04-12T01:02:27Z"
stat.ML, stat.CO
2,017
Conceptualization Topic Modeling
Yi-Kun Tang, Xian-Ling Mao, Heyan Huang, Guihua Wen
Recently, topic modeling has been widely used to discover the abstract topics in text corpora. Most of the existing topic models are based on the assumption of a three-layer hierarchical Bayesian structure, i.e. each document is modeled as a probability distribution over topics, and each topic is a probability distribution over words. However, this assumption is not optimal. Intuitively, it is more reasonable to assume that each topic is a probability distribution over concepts, and each concept is in turn a probability distribution over words, i.e. adding a latent concept layer between the topic layer and the word layer of the traditional three-layer assumption. In this paper, we verify the proposed assumption by incorporating it into two representative topic models, obtaining two novel topic models. Extensive experiments were conducted comparing the proposed models with the corresponding baselines, and the results show that the proposed models significantly outperform the baselines in terms of case study and perplexity, which means the new assumption is more reasonable than the traditional one.
http://arxiv.org/abs/1704.02090v1
"2017-04-07T05:12:38Z"
cs.CL, cs.IR
2,017
Topic modeling of public repositories at scale using names in source code
Vadim Markovtsev, Eiso Kant
Programming languages themselves have a limited number of reserved keywords and character-based tokens that define the language specification. However, programmers make rich use of natural language within their code through comments, text literals and naming entities. The programmer-defined names that can be found in source code are a rich source of information for building a high-level understanding of a project. The goal of this paper is to apply topic modeling to names used in over 13.6 million repositories and interpret the inferred topics. One of the problems in such a study is the occurrence of duplicate repositories not officially marked as forks (obscure forks). We show how to address it using the same identifiers which are extracted for topic modeling. We open with a discussion of naming in source code; we then elaborate on our approach to removing exact-duplicate and fuzzy-duplicate repositories using Locality Sensitive Hashing on the bag-of-words model; we then discuss our work on topic modeling; and finally we present the results from our data analysis together with open access to the source code, tools and datasets.
http://arxiv.org/abs/1704.00135v2
"2017-04-01T08:16:20Z"
cs.PL, cs.CL
2,017
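A self-contained sketch of MinHash-style fuzzy duplicate detection in the spirit of the entry above, over bags of source-code identifiers. Banding the signatures into an LSH index is omitted, the repositories are hypothetical, and Python's salted hash() makes signatures comparable only within one process.

```python
import random

random.seed(0)
P = (1 << 61) - 1                       # Mersenne prime for universal hashing
PERMS = [(random.randrange(1, P), random.randrange(P)) for _ in range(64)]

def minhash(identifiers):
    """64-permutation MinHash signature of a bag of code identifiers."""
    hashes = [hash(t) % P for t in set(identifiers)]
    return [min((a * h + b) % P for h in hashes) for a, b in PERMS]

def estimated_jaccard(sig1, sig2):
    return sum(x == y for x, y in zip(sig1, sig2)) / len(sig1)

repo_a = "parse_args read_config main run_server handle_request log_event".split()
repo_b = "parse_args read_config main run_server handle_request send_reply".split()

sim = estimated_jaccard(minhash(repo_a), minhash(repo_b))
print(f"estimated Jaccard: {sim:.2f}")  # flag as an obscure fork above a threshold
```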
Phytoplankton Hotspot Prediction With an Unsupervised Spatial Community Model
Arnold Kalmbach, Yogesh Girdhar, Heidi M. Sosik, Gregory Dudek
Many interesting natural phenomena are sparsely distributed and discrete. Locating the hotspots of such sparsely distributed phenomena is often difficult because their density gradient is likely to be very noisy. We present a novel approach to this search problem, where we model the co-occurrence relations between a robot's observations with a Bayesian nonparametric topic model. This approach makes it possible to produce a robust estimate of the spatial distribution of the target, even in the absence of direct target observations. We apply the proposed approach to the problem of finding the spatial locations of the hotspots of a specific phytoplankton taxon in the ocean. We use classified image data from Imaging FlowCytobot (IFCB), which automatically measures individual microscopic cells and colonies of cells. Given these individual taxon-specific observations, we learn a phytoplankton community model that characterizes the co-occurrence relations between taxa. We present experiments with simulated robot missions drawn from real observation data collected during a research cruise traversing the US Atlantic coast. Our results show that the proposed approach outperforms nearest neighbor and k-means based methods for predicting the spatial distribution of hotspots from in-situ observations.
http://arxiv.org/abs/1703.07309v1
"2017-03-21T16:48:50Z"
cs.RO, stat.AP
2,017
Automatic Text Summarization Approaches to Speed up Topic Model Learning Process
Mohamed Morchid, Juan-Manuel Torres-Moreno, Richard Dufour, Javier Ramírez-Rodríguez, Georges Linarès
The number of documents available on the Internet grows every day. For this reason, processing this amount of information effectively and expressively becomes a major concern for companies and scientists. Methods that represent a textual document by a topic representation are widely used in Information Retrieval (IR) to process big data such as Wikipedia articles. One of the main difficulties in using topic models on huge data collections is related to the material resources (CPU time and memory) required for model estimation. To deal with this issue, we propose to build topic spaces from summarized documents. In this paper, we present a study of topic space representation in the context of big data. The topic space representation behavior is analyzed on different languages. Experiments show that topic spaces estimated from text summaries are as relevant as those estimated from the complete documents. The real advantage of such an approach is the gain in processing time: we show that processing time can be drastically reduced using summarized documents (by more than 60% in general). This study finally points out the differences between thematic representations of documents depending on the targeted language, such as English or Latin-derived languages.
http://arxiv.org/abs/1703.06630v1
"2017-03-20T08:19:43Z"
cs.IR, cs.CL
2,017
Automated U.S Diplomatic Cables Security Classification: Topic Model Pruning vs. Classification Based on Clusters
Khudran Alzhrani, Ethan M. Rudd, C. Edward Chow, Terrance E. Boult
The U.S. Government has been the target of cyber-attacks from all over the world. Just recently, former President Obama accused the Russian government of leaking emails to Wikileaks and declared that the U.S. might be forced to respond. While Russia denied involvement, it is clear that the U.S. has to take some defensive measures to protect its data infrastructure. Insider threats have been the cause of other sensitive information leaks too, including the infamous Edward Snowden incident. Most of the recent leaks were in the form of text. Due to the nature of text data, security classifications are assigned manually. In an adversarial environment, insiders can leak texts through E-mail, printers, or any untrusted channel. The optimal defense is to automatically detect the security class of unstructured text and enforce the appropriate protection mechanism without degrading services or daily tasks. Unfortunately, existing Data Leak Prevention (DLP) systems are not well suited for detecting unstructured texts. In this paper, we compare two recent approaches in the literature for text security classification, evaluating them on actual sensitive text data from the WikiLeaks dataset.
http://arxiv.org/abs/1703.02248v1
"2017-03-07T07:29:56Z"
cs.CR
2,017
An Unsupervised Approach for Discovering Relevant Tutorial Fragments for APIs
He Jiang, Jingxuan Zhang, Zhilei Ren, Tao Zhang
Developers increasingly rely on API tutorials to facilitate software development. However, it remains a challenging task for them to discover relevant API tutorial fragments explaining unfamiliar APIs. Existing supervised approaches suffer from the heavy burden of manually preparing corpus-specific annotated data and features. In this study, we propose a novel unsupervised approach, namely Fragment Recommender for APIs with PageRank and Topic model (FRAPT). FRAPT can well address two main challenges of the task and effectively determine relevant tutorial fragments for APIs. In FRAPT, a Fragment Parser is proposed to identify APIs in tutorial fragments and replace ambiguous pronouns and variables with related ontologies and API names, so as to address the pronoun and variable resolution challenge. Then, a Fragment Filter employs a set of non-explanatory detection rules to remove non-explanatory fragments, thus addressing the non-explanatory fragment identification challenge. Finally, two correlation scores are computed and aggregated to determine relevant fragments for APIs, by applying both the topic model and the PageRank algorithm to the retained fragments. Extensive experiments over two publicly open tutorial corpora show that FRAPT improves the state-of-the-art approach by 8.77% and 12.32% respectively in terms of F-Measure. The effectiveness of key components of FRAPT is also validated.
http://arxiv.org/abs/1703.01552v1
"2017-03-05T03:38:50Z"
cs.SE
2,017
Autoencoding Variational Inference For Topic Models
Akash Srivastava, Charles Sutton
Topic models are one of the most popular methods for learning representations of text, but a major challenge is that any change to the topic model requires mathematically deriving a new inference algorithm. A promising approach to address this problem is autoencoding variational Bayes (AEVB), but it has proven difficult to apply to topic models in practice. We present what is to our knowledge the first effective AEVB based inference method for latent Dirichlet allocation (LDA), which we call Autoencoded Variational Inference For Topic Model (AVITM). This model tackles the problems caused for AEVB by the Dirichlet prior and by component collapsing. We find that AVITM matches traditional methods in accuracy with much better inference time. Indeed, because of the inference network, we find that it is unnecessary to pay the computational cost of running variational optimization on test data. Because AVITM is black box, it is readily applied to new topic models. As a dramatic illustration of this, we present a new topic model called ProdLDA, that replaces the mixture model in LDA with a product of experts. By changing only one line of code from LDA, we find that ProdLDA yields much more interpretable topics, even if LDA is trained via collapsed Gibbs sampling.
http://arxiv.org/abs/1703.01488v1
"2017-03-04T16:28:15Z"
stat.ML
2,017
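A numpy sketch of two ingredients described in the entry above, as we read the paper: the Laplace-style approximation that replaces the Dirichlet prior with a logistic normal (so the reparameterization trick applies), and ProdLDA's product-of-experts decoder that combines unnormalized topic vectors in logit space. Dimensions and the random beta are illustrative, not trained parameters.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def dirichlet_to_logistic_normal(alpha):
    """Moment-match a Dirichlet(alpha) with a diagonal logistic normal,
    so topic proportions can be sampled via the reparameterization
    trick instead of sampling the Dirichlet directly."""
    alpha = np.asarray(alpha, dtype=float)
    K = len(alpha)
    mu = np.log(alpha) - np.log(alpha).mean()
    var = (1.0 / alpha) * (1 - 2.0 / K) + (1.0 / alpha).sum() / K**2
    return mu, var

rng = np.random.default_rng(0)
K, V = 50, 2000
mu, var = dirichlet_to_logistic_normal(np.full(K, 0.02))
theta = softmax(mu + np.sqrt(var) * rng.standard_normal(K))  # sampled proportions

# ProdLDA decoder: unnormalized topic vectors combined in logit space
# (a product of experts) and normalized once, instead of LDA's mixture
# of per-topic word distributions -- the "one line" changed from LDA.
beta = rng.standard_normal((K, V))           # topics x vocabulary, unnormalized
word_probs = softmax(theta @ beta)           # p(w | theta)
print(word_probs.shape, word_probs.sum())
```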
Finding Likely Errors with Bayesian Specifications
Vijayaraghavan Murali, Swarat Chaudhuri, Chris Jermaine
We present a Bayesian framework for learning probabilistic specifications from large, unstructured code corpora, and a method to use this framework to statically detect anomalous, hence likely buggy, program behavior. The distinctive insight here is to build a statistical model that correlates all specifications hidden inside a corpus with the syntax and observed behavior of programs that implement these specifications. During the analysis of a particular program, this model is conditioned into a posterior distribution that prioritizes specifications that are relevant to this program. This allows accurate program analysis even if the corpus is highly heterogeneous. The problem of finding anomalies is now framed quantitatively, as a problem of computing a distance between a "reference distribution" over program behaviors that our model expects from the program, and the distribution over behaviors that the program actually produces. We present a concrete embodiment of our framework that combines a topic model and a neural network model to learn specifications, and queries the learned models to compute anomaly scores. We evaluate this implementation on the task of detecting anomalous usage of Android APIs. Our encouraging experimental results show that the method can automatically discover subtle errors in Android applications in the wild, and has high precision and recall compared to competing probabilistic approaches.
http://arxiv.org/abs/1703.01370v1
"2017-03-04T00:58:10Z"
cs.SE
2,017
The CKM Parameters
Sébastien Descotes-Genon, Patrick Koppenburg
The Cabibbo-Kobayashi-Maskawa matrix is a key element to describe flavour dynamics in the Standard Model. With only four parameters, this matrix is able to describe a large range of phenomena in the quark sector, such as CP violation and rare decays. It can thus be constrained by many different processes, which have to be measured experimentally with a high accuracy and computed with a good theoretical control. With the advent of the B factories and the LHCb experiment taking data, the precision has significantly improved recently. The most relevant experimental constraints and theoretical inputs are reviewed and fits to the CKM matrix are presented for the Standard Model and for some topical model-independent studies of New Physics.
http://arxiv.org/abs/1702.08834v4
"2017-02-28T16:05:35Z"
hep-ex, hep-ph
2,017
Stability of Topic Modeling via Matrix Factorization
Mark Belford, Brian Mac Namee, Derek Greene
Topic models can provide us with an insight into the underlying latent structure of a large corpus of documents. A range of methods have been proposed in the literature, including probabilistic topic models and techniques based on matrix factorization. However, in both cases, standard implementations rely on stochastic elements in their initialization phase, which can potentially lead to different results being generated on the same corpus when using the same parameter values. This corresponds to the concept of "instability" which has previously been studied in the context of $k$-means clustering. In many applications of topic modeling, this problem of instability is not considered and topic models are treated as being definitive, even though the results may change considerably if the initialization process is altered. In this paper we demonstrate the inherent instability of popular topic modeling approaches, using a number of new measures to assess stability. To address this issue in the context of matrix factorization for topic modeling, we propose the use of ensemble learning strategies. Based on experiments performed on annotated text corpora, we show that a K-Fold ensemble strategy, combining both ensembles and structured initialization, can significantly reduce instability, while simultaneously yielding more accurate topic models.
http://arxiv.org/abs/1702.07186v2
"2017-02-23T12:00:10Z"
cs.IR, cs.CL, cs.LG, stat.ML
2,017
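A hedged sketch of one way to quantify the instability discussed in the entry above: fit NMF under different random initializations and score pairwise agreement as the mean Jaccard similarity of top-term sets under an optimal one-to-one topic matching. The random nonnegative matrix stands in for a real document-term matrix, and the paper's stability measures are more refined than this.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from sklearn.decomposition import NMF

def top_terms(H, n=10):
    return [set(row.argsort()[::-1][:n]) for row in H]

def agreement(topics_a, topics_b):
    """Mean Jaccard similarity of top-term sets under the best
    one-to-one matching of topics between two runs."""
    J = np.array([[len(a & b) / len(a | b) for b in topics_b] for a in topics_a])
    rows, cols = linear_sum_assignment(-J)        # maximize total similarity
    return J[rows, cols].mean()

# Stand-in for a document-term matrix; instability is expected here.
X = np.abs(np.random.default_rng(0).standard_normal((100, 300)))
runs = [NMF(n_components=5, init="random", random_state=seed).fit(X).components_
        for seed in range(3)]
tops = [top_terms(H) for H in runs]
pairs = [(i, j) for i in range(3) for j in range(i + 1, 3)]
print("mean pairwise stability:",
      np.mean([agreement(tops[i], tops[j]) for i, j in pairs]))
```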
LTSG: Latent Topical Skip-Gram for Mutually Learning Topic Model and Vector Representations
Jarvan Law, Hankz Hankui Zhuo, Junhua He, Erhu Rong
Topic models have been widely used for discovering latent topics which are shared across documents in text mining. Vector representations, word embeddings and topic embeddings, map words and topics into a low-dimensional and dense real-valued vector space, and have obtained high performance in NLP tasks. However, most of the existing models assume that the results trained by one of them are perfectly correct and use them as prior knowledge for improving the other model. Some other models use information trained from an external large corpus to help improve smaller corpora. In this paper, we aim to build an algorithm framework that makes topic models and vector representations mutually improve each other within the same corpus. An EM-style algorithm framework is employed to iteratively optimize both the topic model and the vector representations. Experimental results show that our model outperforms state-of-the-art methods on various NLP tasks.
http://arxiv.org/abs/1702.07117v1
"2017-02-23T07:16:03Z"
cs.CL
2,017
Scalable Inference for Nested Chinese Restaurant Process Topic Models
Jianfei Chen, Jun Zhu, Jie Lu, Shixia Liu
Nested Chinese Restaurant Process (nCRP) topic models are powerful nonparametric Bayesian methods to extract a topic hierarchy from a given text corpus, where the hierarchical structure is automatically determined by the data. Hierarchical Latent Dirichlet Allocation (hLDA) is a popular instance of nCRP topic models. However, hLDA has only been evaluated at small scale, because the existing collapsed Gibbs sampling and instantiated weight variational inference algorithms either are not scalable or sacrifice inference quality with mean-field assumptions. Moreover, an efficient distributed implementation of the data structures, such as dynamically growing count matrices and trees, is challenging. In this paper, we propose a novel partially collapsed Gibbs sampling (PCGS) algorithm, which combines the advantages of collapsed and instantiated weight algorithms to achieve good scalability as well as high model quality. An initialization strategy is presented to further improve the model quality. Finally, we propose an efficient distributed implementation of PCGS through vectorization, pre-processing, and a careful design of the concurrent data structures and communication strategy. Empirical studies show that our algorithm is 111 times more efficient than the previous open-source implementation for hLDA, with comparable or even better model quality. Our distributed implementation can extract 1,722 topics from a 131-million-document corpus with 28 billion tokens, which is 4-5 orders of magnitude larger than the previous largest corpus, with 50 machines in 7 hours.
http://arxiv.org/abs/1702.07083v1
"2017-02-23T03:34:07Z"
stat.ML, cs.DC, cs.IR, cs.LG
2,017
Triaging Content Severity in Online Mental Health Forums
Arman Cohan, Sydney Young, Andrew Yates, Nazli Goharian
Mental health forums are online communities where people express their issues and seek help from moderators and other users. In such forums, there are often posts with severe content indicating that the user is in acute distress and there is a risk of attempted self-harm. Moderators need to respond to these severe posts in a timely manner to prevent potential self-harm. However, the large volume of daily posted content makes it difficult for the moderators to locate and respond to these critical posts. We present a framework for triaging user content into four severity categories which are defined based on indications of self-harm ideation. Our models are based on a feature-rich classification framework which includes lexical, psycholinguistic, contextual and topic modeling features. Our approaches improve the state of the art in triaging the content severity in mental health forums by large margins (up to 17% improvement over the F-1 scores). Using the proposed model, we analyze the mental state of users and we show that overall, long-term users of the forum demonstrate a decreased severity of risk over time. Our analysis on the interaction of the moderators with the users further indicates that without an automatic way to identify critical content, it is indeed challenging for the moderators to provide timely response to the users in need.
http://arxiv.org/abs/1702.06875v1
"2017-02-22T16:14:12Z"
cs.CL, cs.IR, cs.SI
2,017
Multimodal Content Representation and Similarity Ranking of Movies
Konstantinos Bougiatiotis, Theodore Giannakopoulos
In this paper we examine the existence of correlation between movie similarity and low-level features from the respective movie content. In particular, we demonstrate the extraction of multi-modal representation models of movies based on subtitles, audio and metadata mining. We focus our research on topic modeling of movies based on their subtitles. In order to demonstrate the proposed content representation approach, we have built a small dataset of 160 widely known movies. We assess movie similarities, as propagated by the singular modalities and fusion models, in the form of recommendation rankings. We showcase a novel topic model browser for movies that allows for exploration of the different aspects of similarities between movies, and an information retrieval system for movie similarity based on multi-modal content.
http://arxiv.org/abs/1702.04815v2
"2017-02-15T23:31:44Z"
cs.IR
2,017
Automated Phrase Mining from Massive Text Corpora
Jingbo Shang, Jialu Liu, Meng Jiang, Xiang Ren, Clare R Voss, Jiawei Han
As one of the fundamental tasks in text analysis, phrase mining aims at extracting quality phrases from a text corpus. Phrase mining is important in various tasks such as information extraction/retrieval, taxonomy construction, and topic modeling. Most existing methods rely on complex, trained linguistic analyzers, and thus likely have unsatisfactory performance on text corpora of new domains and genres without extra but expensive adaption. Recently, a few data-driven methods have been developed successfully for extraction of phrases from massive domain-specific text. However, none of the state-of-the-art models is fully automated because they require human experts for designing rules or labeling phrases. Since one can easily obtain many quality phrases from public knowledge bases to a scale that is much larger than that produced by human experts, in this paper, we propose a novel framework for automated phrase mining, AutoPhrase, which leverages this large amount of high-quality phrases in an effective way and achieves better performance compared to limited human labeled phrases. In addition, we develop a POS-guided phrasal segmentation model, which incorporates the shallow syntactic information in part-of-speech (POS) tags to further enhance the performance, when a POS tagger is available. Note that, AutoPhrase can support any language as long as a general knowledge base (e.g., Wikipedia) in that language is available, while benefiting from, but not requiring, a POS tagger. Compared to the state-of-the-art methods, the new method has shown significant improvements in effectiveness on five real-world datasets across different domains and languages.
http://arxiv.org/abs/1702.04457v2
"2017-02-15T03:35:03Z"
cs.CL
2,017
Multi-level computational methods for interdisciplinary research in the HathiTrust Digital Library
Jaimie Murdock, Colin Allen, Katy Börner, Robert Light, Simon McAlister, Andrew Ravenscroft, Robert Rose, Doori Rose, Jun Otsuka, David Bourget, John Lawrence, Chris Reed
We show how faceted search using a combination of traditional classification systems and mixed-membership topic models can go beyond keyword search to inform resource discovery, hypothesis formulation, and argument extraction for interdisciplinary research. Our test domain is the history and philosophy of scientific work on animal mind and cognition. The methods can be generalized to other research areas and ultimately support a system for semi-automatic identification of argument structures. We provide a case study for the application of the methods to the problem of identifying and extracting arguments about anthropomorphism during a critical period in the development of comparative psychology. We show how a combination of classification systems and mixed-membership models trained over large digital libraries can inform resource discovery in this domain. Through a novel approach of "drill-down" topic modeling---simultaneously reducing both the size of the corpus and the unit of analysis---we are able to reduce a large collection of fulltext volumes to a much smaller set of pages within six focal volumes containing arguments of interest to historians and philosophers of comparative psychology. The volumes identified in this way did not appear among the first ten results of the keyword search in the HathiTrust digital library and the pages bear the kind of "close reading" needed to generate original interpretations that is the heart of scholarly work in the humanities. Zooming back out, we provide a way to place the books onto a map of science originally constructed from very different data and for different purposes. The multilevel approach advances understanding of the intellectual and societal contexts in which writings are interpreted.
http://arxiv.org/abs/1702.01090v2
"2017-02-03T17:36:19Z"
cs.DL, cs.CL, cs.IR
2,017
Topic Modeling the Hàn diăn Ancient Classics
Colin Allen, Hongliang Luo, Jaimie Murdock, Jianghuai Pu, Xiaohong Wang, Yanjie Zhai, Kun Zhao
Ancient Chinese texts present an area of enormous challenge and opportunity for humanities scholars interested in exploiting computational methods to assist in the development of new insights and interpretations of culturally significant materials. In this paper we describe a collaborative effort between Indiana University and Xi'an Jiaotong University to support exploration and interpretation of a digital corpus of over 18,000 ancient Chinese documents, which we refer to as the "Handian" ancient classics corpus (Hàn diăn gŭ jí, i.e., the "Han canon" or "Chinese classics"). It contains classics of ancient Chinese philosophy, documents of historical and biographical significance, and literary works. We begin by describing the Digital Humanities context of this joint project, and the advances in humanities computing that made this project feasible. We describe the corpus and introduce our application of probabilistic topic modeling to this corpus, with attention to the particular challenges posed by modeling ancient Chinese documents. We give a specific example of how the software we have developed can be used to aid discovery and interpretation of themes in the corpus. We outline more advanced forms of computer-aided interpretation that are also made possible by the programming interface provided by our system, and the general implications of these methods for understanding the nature of meaning in these texts.
http://arxiv.org/abs/1702.00860v1
"2017-02-02T22:51:04Z"
cs.CL, cs.CY, cs.DL, cs.HC, cs.IR
2,017
Discriminative Neural Topic Models
Gaurav Pandey, Ambedkar Dukkipati
We propose a neural network based approach for learning topics from text and image datasets. The model makes no assumptions about the conditional distribution of the observed features given the latent topics. This allows us to perform topic modelling efficiently using sentences of documents and patches of images as observed features, rather than limiting ourselves to words. Moreover, the proposed approach is online, and hence can be used for streaming data. Furthermore, since the approach utilizes neural networks, it can be implemented on GPU with ease, and hence it is very scalable.
http://arxiv.org/abs/1701.06796v2
"2017-01-24T10:29:31Z"
cs.LG
2,017
Hierarchical Re-estimation of Topic Models for Measuring Topical Diversity
Hosein Azarbonyad, Mostafa Dehghani, Tom Kenter, Maarten Marx, Jaap Kamps, Maarten de Rijke
A high degree of topical diversity is often considered to be an important characteristic of interesting text documents. A recent proposal for measuring topical diversity identifies three elements for assessing diversity: words, topics, and documents as collections of words. Topic models play a central role in this approach. Using standard topic models for measuring diversity of documents is suboptimal due to generality and impurity. General topics only include common information from a background corpus and are assigned to most of the documents in the collection. Impure topics contain words that are not related to the topic; impurity lowers the interpretability of topic models and impure topics are likely to get assigned to documents erroneously. We propose a hierarchical re-estimation approach for topic models to combat generality and impurity; the proposed approach operates at three levels: words, topics, and documents. Our re-estimation approach for measuring documents' topical diversity outperforms the state of the art on the PubMed dataset, which is commonly used for diversity experiments.
http://arxiv.org/abs/1701.04273v1
"2017-01-16T12:59:47Z"
cs.IR
2,017
The Quantified City: Sensing Dynamics in Urban Setting
Tahar Zanouda, Noora AL Emadi, Sofiane Abbar, Jaideep Srivastava
The world is witnessing a period of extreme growth and urbanization; cities in the 21st century have become nerve centers creating economic opportunities and cultural values, which makes cities grow exponentially. With this rapid urban population growth, city infrastructure is facing major problems, from the need to scale urban systems to sustaining the quality of services for citizens at scale. Understanding the dynamics of cities is critical for informed strategic urban planning. This paper showcases QuantifiedCity, a system aimed at understanding the complex dynamics taking place in cities. Often, these dynamics involve humans, services, and infrastructures and are observed in different spaces: physical (IoT-based) sensing and human (social-based) sensing. The main challenges the system strives to address are related to data integration and fusion to enable effective and semantically relevant data grouping. This is achieved by considering the spatio-temporal space as a blocking function for any data generated in the city. Our system consists of three layers for data acquisition, data analysis, and data visualization, each of which embeds a variety of modules to better achieve its purpose (e.g., data crawling, data cleaning, topic modeling, sentiment analysis, named entity recognition, event detection, time series analysis, etc.). End users can browse the dynamics through three main dimensions: location, time, and event. For each dimension, the system renders a set of map-centric widgets that summarize the underlying related dynamics. This paper highlights the need for such a holistic platform, identifies the strengths of the "Quantified City" concept, and showcases a working demo through a real-life scenario.
http://arxiv.org/abs/1701.04253v1
"2017-01-16T11:44:53Z"
cs.SI, cs.CY
2,017
Prior matters: simple and general methods for evaluating and improving topic quality in topic modeling
Angela Fan, Finale Doshi-Velez, Luke Miratrix
Latent Dirichlet Allocation (LDA) models trained without stopword removal often produce topics with high posterior probabilities on uninformative words, obscuring the underlying corpus content. Even when canonical stopwords are manually removed, uninformative words common in that corpus will still dominate the most probable words in a topic. In this work, we first show how the standard topic quality measures of coherence and pointwise mutual information act counter-intuitively in the presence of common but irrelevant words, making it difficult to even quantitatively identify situations in which topics may be dominated by stopwords. We propose an additional topic quality metric that targets the stopword problem, and show that it, unlike the standard measures, correctly correlates with human judgements of quality. We also propose a simple-to-implement strategy for generating topics that are evaluated to be of much higher quality by both human assessment and our new metric. This approach, a collection of informative priors easily introduced into most LDA-style inference methods, automatically promotes terms with domain relevance and demotes domain-specific stop words. We demonstrate this approach's effectiveness in three very different domains: Department of Labor accident reports, online health forum posts, and NIPS abstracts. Overall we find that current practices thought to solve this problem do not do so adequately, and that our proposal offers a substantial improvement for those interested in interpreting their topics as objects in their own right.
http://arxiv.org/abs/1701.03227v3
"2017-01-12T04:26:00Z"
cs.CL, cs.IR, cs.LG
2,017
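A small gensim sketch in the spirit of the entry above: an informative, per-term prior on the topic-word distributions (eta) that demotes corpus-ubiquitous terms. The inverse-document-frequency weighting is our heuristic stand-in for the paper's priors, and the toy documents loosely mimic accident-report language.

```python
import numpy as np
from gensim.corpora import Dictionary
from gensim.models import LdaModel

docs = [d.split() for d in [
    "worker fell from the ladder during the roof repair",
    "the worker was on the site when the ladder slipped",
    "chemical spill exposure reported at the plant",
    "the plant had a spill and the exposure was severe",
]]

dictionary = Dictionary(docs)
corpus = [dictionary.doc2bow(d) for d in docs]

# Informative prior on the topic-word distributions: terms appearing in
# many documents (domain stop words like "the") get little prior mass,
# nudging topics toward domain-relevant vocabulary.
doc_freq = np.array([dictionary.dfs[i] for i in range(len(dictionary))], dtype=float)
eta = 1.0 / (1.0 + doc_freq)

lda = LdaModel(corpus=corpus, id2word=dictionary, num_topics=2,
               eta=eta, random_state=0, passes=10)
for k in range(2):
    print(lda.show_topic(k, topn=5))
```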
Crime Topic Modeling
Da Kuang, P. Jeffrey Brantingham, Andrea L. Bertozzi
The classification of crime into discrete categories entails a massive loss of information. Crimes emerge out of a complex mix of behaviors and situations, yet most of these details cannot be captured by singular crime type labels. This information loss impacts our ability to not only understand the causes of crime, but also how to develop optimal crime prevention strategies. We apply machine learning methods to short narrative text descriptions accompanying crime records with the goal of discovering ecologically more meaningful latent crime classes. We term these latent classes "crime topics" in reference to text-based topic modeling methods that produce them. We use topic distributions to measure clustering among formally recognized crime types. Crime topics replicate broad distinctions between violent and property crime, but also reveal nuances linked to target characteristics, situational conditions and the tools and methods of attack. Formal crime types are not discrete in topic space. Rather, crime types are distributed across a range of crime topics. Similarly, individual crime topics are distributed across a range of formal crime types. Key ecological groups include identity theft, shoplifting, burglary and theft, car crimes and vandalism, criminal threats and confidence crimes, and violent crimes. Though not a replacement for formal legal crime classifications, crime topics provide a unique window into the heterogeneous causal processes underlying crime.
http://arxiv.org/abs/1701.01505v2
"2017-01-05T23:35:12Z"
cs.CL
2,017
Partial Membership Latent Dirichlet Allocation
Chao Chen, Alina Zare, Huy Trinh, Gbeng Omotara, J. Tory Cobb, Timotius Lagaunne
Topic models (e.g., pLSA, LDA, sLDA) have been widely used for segmenting imagery. However, these models are confined to crisp segmentation, forcing a visual word (i.e., an image patch) to belong to one and only one topic. Yet, there are many images in which some regions cannot be assigned a crisp categorical label (e.g., transition regions between a foggy sky and the ground or between sand and water at a beach). In these cases, a visual word is best represented with partial memberships across multiple topics. To address this, we present a partial membership latent Dirichlet allocation (PM-LDA) model and an associated parameter estimation algorithm. This model can be useful for imagery where a visual word may be a mixture of multiple topics. Experimental results on visual and sonar imagery show that PM-LDA can produce both crisp and soft semantic image segmentations; a capability previous topic modeling methods do not have.
http://arxiv.org/abs/1612.08936v1
"2016-12-28T17:32:52Z"
cs.CV, stat.ML
2,016
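PM-LDA's full observation model and estimation algorithm are richer than the abstract states; the sketch below only illustrates the partial-membership mixing rule typical of such models, where a visual word with membership vector pi on the simplex is scored under the normalized geometric mean of the topic distributions (multinomial topics are assumed here for simplicity).

```python
import numpy as np

def partial_membership_prob(word_id, topics, pi):
    """p(w | pi) ∝ prod_k topics[k, w] ** pi[k]: a normalized geometric
    mean of the K topic-word distributions, so a one-hot pi recovers a
    crisp LDA-style assignment while interior pi gives soft membership.
    topics is a (K, V) row-stochastic matrix, pi a length-K simplex point."""
    log_p = pi @ np.log(topics + 1e-12)   # (V,) unnormalized log-probs
    p = np.exp(log_p - log_p.max())       # stable exponentiation
    return (p / p.sum())[word_id]
```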
Provable learning of Noisy-or Networks
Sanjeev Arora, Rong Ge, Tengyu Ma, Andrej Risteski
Many machine learning applications use latent variable models to explain structure in data, whereby visible variables (= coordinates of the given datapoint) are explained as a probabilistic function of some hidden variables. Finding maximum-likelihood parameters is NP-hard even in very simple settings. In recent years, provably efficient algorithms have nevertheless been developed for models with linear structure: topic models, mixture models, hidden Markov models, etc. These algorithms use matrix or tensor decomposition, and make some reasonable assumptions about the parameters of the underlying model. But matrix or tensor decomposition seems of little use when the latent variable model has nonlinearities. The current paper shows how to make progress: tensor decomposition is applied to learn the single-layer noisy-or network, which is a textbook example of a Bayes net, used for example in the classic QMR-DT software for diagnosing which disease(s) a patient may have by observing the symptoms he/she exhibits. The technical novelty here, which should be useful in other settings in the future, is the analysis of tensor decomposition in the presence of systematic error (i.e., where the noise/error is correlated with the signal and does not decrease as the number of samples goes to infinity). This requires rethinking all steps of tensor decomposition methods from the ground up. For simplicity, our analysis is stated assuming that the network parameters were chosen from a probability distribution, but the method seems more generally applicable.
http://arxiv.org/abs/1612.08795v1
"2016-12-28T03:35:59Z"
cs.LG, cs.DS, stat.ML
2,016
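The paper's contribution is the learning algorithm (tensor decomposition under systematic error), which is too involved to sketch here; the snippet below only shows the single-layer noisy-or generative model being learned, in the QMR-DT spirit described above. The leak probability present in the real QMR-DT network is omitted, and all names are illustrative.

```python
import numpy as np

def sample_noisy_or(W, prior, rng):
    """One draw from a single-layer noisy-or network: hidden diseases
    d_i ~ Bernoulli(prior_i); symptom j stays off only if every active
    parent independently fails to trigger it, so
    P(s_j = 1 | d) = 1 - prod_i (1 - W[i, j]) ** d_i.
    W is an (n_diseases, n_symptoms) matrix of trigger probabilities."""
    d = rng.random(W.shape[0]) < prior                       # diseases
    fail = np.prod(np.where(d[:, None], 1.0 - W, 1.0), axis=0)
    s = rng.random(W.shape[1]) < (1.0 - fail)                # symptoms
    return d, s

# e.g. sample_noisy_or(np.full((3, 5), 0.4), np.full(3, 0.2),
#                      np.random.default_rng(0))
```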
ScienceWISE: Topic Modeling over Scientific Literature Networks
Andrea Martini, Artem Lutov, Valerio Gemmetto, Andrii Magalich, Alessio Cardillo, Alex Constantin, Vasyl Palchykov, Mourad Khayati, Philippe Cudré-Mauroux, Alexey Boyarsky, Oleg Ruchayskiy, Diego Garlaschelli, Paolo De Los Rios, Karl Aberer
We provide an up-to-date view on the knowledge management system ScienceWISE (SW) and address issues related to the automatic assignment of articles to research topics. So far, SW has proven to be an effective platform for managing large volumes of technical articles by means of ontological concept-based browsing. However, as the publication of research articles accelerates, the expressivity and the richness of the SW ontology turn into a double-edged sword: a more fine-grained characterization of articles is possible, but at the cost of introducing more spurious relations among them. In this context, the challenge of continuously recommending relevant articles to users lies in tackling a network partitioning problem, where nodes represent articles and co-occurring concepts create edges between them. In this paper, we discuss the three research directions we have taken for solving this issue: i) the identification of generic concepts to reinforce inter-article similarities; ii) the adoption of a bipartite network representation to improve scalability; iii) the design of a clustering algorithm to identify concepts for cross-disciplinary articles and obtain fine-grained topics for all articles.
http://arxiv.org/abs/1612.07636v1
"2016-12-22T15:11:38Z"
cs.DL, cs.SI, physics.soc-ph
2,016
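A minimal sketch of the bipartite representation mentioned in direction ii) above: articles on one side, ontology concepts on the other, with partitioning then run on this graph rather than on a dense article-article projection. The networkx usage is illustrative, not the ScienceWISE implementation.

```python
import networkx as nx

def article_concept_graph(article_concepts):
    """Build the bipartite article-concept network: an edge links an
    article to each ontology concept it mentions. article_concepts is a
    dict {article_id: set of concept names}; clustering/partitioning
    algorithms can then be run directly on this sparse structure."""
    G = nx.Graph()
    for article, concepts in article_concepts.items():
        G.add_node(article, bipartite=0)
        for concept in concepts:
            G.add_node(concept, bipartite=1)
            G.add_edge(article, concept)
    return G

# e.g. article_concept_graph({"a1": {"dark matter", "neutrino"},
#                             "a2": {"neutrino", "axion"}})
```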
Inverted Bilingual Topic Models for Lexicon Extraction from Non-parallel Data
Tengfei Ma, Tetsuya Nasukawa
Topic models have been successfully applied in lexicon extraction. However, most previous methods are limited to document-aligned data. In this paper, we address two challenges of applying topic models to lexicon extraction from non-parallel data: 1) the difficulty of modeling word relationships, and 2) the noisiness of the seed dictionary. To solve these two challenges, we propose two new bilingual topic models that better capture the semantic information of each word while discriminating among the multiple translations in a noisy seed dictionary. We extend the scope of topic models by inverting the roles of "word" and "document". In addition, to handle the noise in the seed dictionary, we incorporate the probability of translation selection in our models. Moreover, we propose an effective measure to evaluate the similarity of words in different languages and select the optimal translation pairs. Experimental results using real-world data demonstrate the utility and efficacy of the proposed models.
http://arxiv.org/abs/1612.07215v2
"2016-12-21T16:12:45Z"
cs.CL
2,016
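The core inversion is easy to illustrate. In the sketch below (a deliberate simplification; the bilingual coupling through the noisy seed dictionary is not reproduced), each vocabulary item becomes a pseudo-document whose tokens are the ids of the documents it occurs in, so a topic model run on the inverted corpus groups words by where they occur rather than grouping documents by what they contain.

```python
from collections import defaultdict

def invert_corpus(docs):
    """Invert the roles of 'word' and 'document': each word becomes a
    pseudo-document whose tokens are the ids of the documents it occurs
    in (with multiplicity). docs is a list of token lists; the output can
    be fed to any standard topic model over the 'document-id vocabulary'."""
    inverted = defaultdict(list)
    for doc_id, doc in enumerate(docs):
        for word in doc:
            inverted[word].append(doc_id)
    return dict(inverted)

# e.g. invert_corpus([["bank", "loan"], ["bank", "river"]])
#      -> {"bank": [0, 1], "loan": [0], "river": [1]}
```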
Automatic Labelling of Topics with Neural Embeddings
Shraey Bhatia, Jey Han Lau, Timothy Baldwin
Topics generated by topic models are typically represented as lists of terms. To reduce the cognitive overhead of interpreting these topics for end-users, we propose labelling a topic with a succinct phrase that summarises its theme or idea. Using Wikipedia document titles as label candidates, we compute neural embeddings for documents and words to select the most relevant labels for topics. Compared to a state-of-the-art topic labelling system, our methodology is simpler, more efficient, and finds better topic labels.
http://arxiv.org/abs/1612.05340v2
"2016-12-16T02:49:53Z"
cs.CL
2,016
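A rough sketch of the ranking step, under the simplifying assumption that a label is embedded as the mean of its in-vocabulary word vectors; the paper instead trains document and word embeddings over Wikipedia jointly with its own candidate generation, which this does not reproduce.

```python
import numpy as np

def rank_labels(topic_terms, candidate_labels, emb):
    """Rank candidate labels (e.g., Wikipedia titles) for one topic by
    cosine similarity between the label's embedding and the mean
    embedding of the topic's top terms. emb maps a token to a vector."""
    def mean_vec(tokens):
        vs = [emb[t] for t in tokens if t in emb]
        return np.mean(vs, axis=0) if vs else None

    t = mean_vec(topic_terms)
    if t is None:
        return []
    t = t / np.linalg.norm(t)
    scored = []
    for label in candidate_labels:
        v = mean_vec(label.lower().split())
        if v is not None:
            scored.append((float(v @ t / np.linalg.norm(v)), label))
    return [label for _, label in sorted(scored, reverse=True)]

# e.g. rank_labels(["stock", "market", "trading"],
#                  ["Financial market", "Baseball"], word_vectors)
```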
You Are What You Eat... Listen to, Watch, and Read
Mason Bretan
This article describes a data-driven method for deriving the relationship between personality and media preferences. A quantifiable representation of such a relationship can be leveraged for use in recommendation systems and can ameliorate the "cold start" problem. Here, the data comprise an original collection of 1,316 OkCupid dating profiles. Of these profiles, 800 are labeled with one of 16 possible Myers-Briggs Type Indicators (MBTI). A personality-specific topic model describing a person's favorite books, movies, shows, music, and food was generated using latent Dirichlet allocation (LDA). There were several significant findings; for example, intuitive thinking types preferred sci-fi/fantasy entertainment, extraversion correlated positively with upbeat dance music, and jazz, folk, and international cuisine correlated positively with those characterized by openness to experience. Many other correlations confirmed previous findings describing the relationship among personality, writing style, and personal preferences. (For complete word/personality-type associations, see the Appendix.)
http://arxiv.org/abs/1612.04403v1
"2016-12-13T21:29:05Z"
cs.SI, cs.CL, cs.IR
2,016
Monte Carlo Structured SVI for Two-Level Non-Conjugate Models
Rishit Sheth, Roni Khardon
The stochastic variational inference (SVI) paradigm, which combines variational inference, natural gradients, and stochastic updates, was recently proposed for large-scale data analysis in conjugate Bayesian models and demonstrated to be effective in several problems. This paper studies a family of Bayesian latent variable models with two levels of hidden variables but without any conjugacy requirements, making several contributions in this context. The first is observing that SVI, with an improved structured variational approximation, is applicable under more general conditions than previously thought with the only requirement being that the approximating variational distribution be in the same family as the prior. The resulting approach, Monte Carlo Structured SVI (MC-SSVI), significantly extends the scope of SVI, enabling large-scale learning in non-conjugate models. For models with latent Gaussian variables we propose a hybrid algorithm, using both standard and natural gradients, which is shown to improve stability and convergence. Applications in mixed effects models, sparse Gaussian processes, probabilistic matrix factorization and correlated topic models demonstrate the generality of the approach and the advantages of the proposed algorithms.
http://arxiv.org/abs/1612.03957v3
"2016-12-12T22:36:04Z"
stat.ML
2,016
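MC-SSVI itself combines a structured variational approximation with natural- and standard-gradient updates; the sketch below shows only the basic Monte Carlo ingredient that removes the conjugacy requirement, namely a reparameterized ELBO gradient for a mean-field Gaussian q. All names are illustrative.

```python
import numpy as np

def mc_elbo_grad(mu, log_sigma, logp_grad, n_samples=10, rng=None):
    """Monte Carlo gradient of the ELBO for q(z) = N(mu, sigma^2) using
    the reparameterization z = mu + sigma * eps, eps ~ N(0, I). Only
    logp_grad(z) = d/dz log p(x, z) is required, so no conjugacy is
    needed. Returns gradients w.r.t. mu and log_sigma."""
    rng = rng or np.random.default_rng(0)
    sigma = np.exp(log_sigma)
    g_mu = np.zeros_like(mu)
    g_ls = np.zeros_like(log_sigma)
    for _ in range(n_samples):
        eps = rng.standard_normal(mu.shape)
        z = mu + sigma * eps
        g = logp_grad(z)
        g_mu += g                    # dz/dmu = 1
        g_ls += g * eps * sigma      # dz/dlog_sigma = sigma * eps
    g_mu /= n_samples
    # +1 per dimension: derivative of the Gaussian entropy term
    g_ls = g_ls / n_samples + 1.0
    return g_mu, g_ls
```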
Unraveling reported dreams with text analytics
Iris Hendrickx, Louis Onrust, Florian Kunneman, Ali Hürriyetoğlu, Antal van den Bosch, Wessel Stoop
We investigate what distinguishes reported dreams from other personal narratives. The continuity hypothesis, stemming from psychological dream analysis work, states that most dreams refer to a person's daily life and personal concerns, similar to other personal narratives such as diary entries. Differences between the two texts may reveal the linguistic markers of dream text, which could be the basis for new dream analysis work and for the automatic detection of dream descriptions. We used three text analytics methods: text classification, topic modeling, and text coherence analysis, and applied these methods to a balanced set of texts representing dreams, diary entries, and other personal stories. We observed that dream texts could be distinguished from other personal narratives nearly perfectly, mostly based on the presence of uncertainty markers and descriptions of scenes. Important markers for non-dream narratives are specific time expressions and conversational expressions. Dream texts also exhibit a lower discourse coherence than other personal narratives.
http://arxiv.org/abs/1612.03659v1
"2016-12-12T13:08:55Z"
cs.CL
2,016
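Of the three methods, the text-classification step is easy to sketch; a plain TF-IDF plus logistic-regression baseline (an assumption, not necessarily the authors' classifier) with word n-grams can pick up the kinds of markers the abstract reports, such as uncertainty words and time expressions.

```python
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

def dream_vs_diary_classifier():
    """Baseline pipeline for separating dream reports from other personal
    narratives; unigram and bigram features let the linear model surface
    discriminative markers via its coefficients."""
    return make_pipeline(
        TfidfVectorizer(ngram_range=(1, 2), min_df=2),
        LogisticRegression(max_iter=1000),
    )

# usage: clf = dream_vs_diary_classifier(); clf.fit(texts, labels)
```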
Connection Discovery using Shared Images by Gaussian Relational Topic Model
Xiaopeng Li, Ming Cheung, James She
Social graphs, representing online friendships among users, are one of the fundamental types of data for many applications, such as recommendation, virality prediction and marketing in social media. However, this data may be unavailable due to the privacy concerns of users, or kept private by social network operators, which makes such applications difficult. Inferring user interests and discovering user connections through their shared multimedia content has attracted more and more attention in recent years. This paper proposes a Gaussian relational topic model for connection discovery using user shared images in social media. The proposed model not only models user interests as latent variables through their shared images, but also considers the connections between users as a result of their shared images. It explicitly relates user shared images to user connections in a hierarchical, systematic and supervisory way and provides an end-to-end solution for the problem. This paper also derives efficient variational inference and learning algorithms for the posterior of the latent variables and model parameters. It is demonstrated through experiments with over 200k images from Flickr that the proposed method significantly outperforms the methods in previous works.
http://arxiv.org/abs/1612.03639v1
"2016-12-12T12:10:28Z"
cs.SI, cs.IR
2,016
A New Spectral Method for Latent Variable Models
Matteo Ruffini, Marta Casanellas, Ricard Gavaldà
This paper presents an algorithm for the unsupervised learning of latent variable models from unlabeled sets of data. We base our technique on spectral decomposition, which proves to be robust both in theory and in practice. We also describe how to use this algorithm to learn the parameters of two well-known text mining models: the single topic model and Latent Dirichlet Allocation, providing in both cases an efficient procedure for retrieving the parameters to feed the algorithm. We compare the results of our algorithm with those of existing algorithms on synthetic data, and we provide examples of applications to real-world text corpora for both the single topic model and LDA, obtaining meaningful results.
http://arxiv.org/abs/1612.03409v2
"2016-12-11T13:31:58Z"
stat.ML
2,016
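For the single topic model, the input to such spectral methods is a low-order moment of the data. The sketch below estimates the second cross-moment E[x1 x2^T] from a count matrix, where x1 and x2 are two distinct tokens of the same document; under the model this equals sum_k w_k mu_k mu_k^T, the matrix whose spectral decomposition recovers the topics. It is a toy O(D V^2) estimator for small vocabularies, not the paper's implementation.

```python
import numpy as np

def empirical_second_moment(doc_counts):
    """Unbiased estimate of E[x1 x2^T] for the single topic model, where
    x1, x2 are one-hot vectors of two distinct tokens drawn from a
    document. doc_counts is an (n_docs, V) count matrix; every document
    is assumed to contain at least two tokens."""
    X = np.asarray(doc_counts, dtype=float)
    n = X.sum(axis=1, keepdims=True)                  # tokens per doc
    pairs = X[:, :, None] * X[:, None, :]             # ordered word pairs
    diag = np.einsum('dv,vw->dvw', X, np.eye(X.shape[1]))
    # subtract same-position pairs, normalize by n(n-1) ordered pairs
    M2 = ((pairs - diag) / (n * (n - 1))[:, :, None]).mean(axis=0)
    return M2
```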
Supervised topic models for clinical interpretability
Michael C. Hughes, Huseyin Melih Elibol, Thomas McCoy, Roy Perlis, Finale Doshi-Velez
Supervised topic models can help clinical researchers find interpretable co-occurrence patterns in count data that are relevant for diagnostics. However, standard formulations of supervised Latent Dirichlet Allocation have two problems. First, when documents have many more words than labels, the influence of the labels will be negligible. Second, due to conditional independence assumptions in the graphical model, the impact of supervised labels on the learned topic-word probabilities is often minimal, leading to poor predictions on heldout data. We investigate penalized optimization methods for training sLDA that produce interpretable topic-word parameters and useful heldout predictions, using recognition networks to speed up inference. We report preliminary results on synthetic data and on predicting successful anti-depressant medication given a patient's diagnostic history.
http://arxiv.org/abs/1612.01678v1
"2016-12-06T06:07:55Z"
stat.ML
2,016
Diagnostic Prediction Using Discomfort Drawings
Cheng Zhang, Hedvig Kjellstrom, Bo C. Bertilson
In this paper, we explore the possibility of applying machine learning to make diagnostic predictions from discomfort drawings. A discomfort drawing is an intuitive way for patients to express discomfort and pain-related symptoms. These drawings have proven to be an effective method for collecting patient data and making diagnostic decisions in real-life practice. A dataset from real-world patient cases is collected, for which medical experts provide diagnostic labels. Next, we extend a factorized multimodal topic model, the Inter-Battery Topic Model (IBTM), to train a system that can make diagnostic predictions given an unseen discomfort drawing. Experimental results show reasonable predictions of diagnostic labels given an unseen discomfort drawing. This positive result indicates significant potential for machine learning to be used in parts of the pain-diagnosis process and as a decision support system for physicians and other health care personnel.
http://arxiv.org/abs/1612.01356v1
"2016-12-05T14:11:20Z"
cs.LG
2,016
Anchored Correlation Explanation: Topic Modeling with Minimal Domain Knowledge
Ryan J. Gallagher, Kyle Reing, David Kale, Greg Ver Steeg
While generative models such as Latent Dirichlet Allocation (LDA) have proven fruitful in topic modeling, they often require detailed assumptions and careful specification of hyperparameters. Such model complexity issues only compound when trying to generalize generative models to incorporate human input. We introduce Correlation Explanation (CorEx), an alternative approach to topic modeling that does not assume an underlying generative model, and instead learns maximally informative topics through an information-theoretic framework. This framework naturally generalizes to hierarchical and semi-supervised extensions with no additional modeling assumptions. In particular, word-level domain knowledge can be flexibly incorporated within CorEx through anchor words, allowing topic separability and representation to be promoted with minimal human intervention. Across a variety of datasets, metrics, and experiments, we demonstrate that CorEx produces topics that are comparable in quality to those produced by unsupervised and semi-supervised variants of LDA.
http://arxiv.org/abs/1611.10277v4
"2016-11-30T17:32:17Z"
cs.CL, cs.IR, cs.IT, math.IT, stat.ML
2,016
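Anchoring is exposed directly in the authors' open-source corextopic package; assuming its documented interface, usage looks roughly like the sketch below, where the anchor words and strength value are arbitrary examples rather than recommendations.

```python
import scipy.sparse as ss
from corextopic import corextopic as ct   # pip install corextopic

def anchored_topics(doc_word, words, anchors, n_topics=10):
    """Fit semi-supervised CorEx topics. anchors is a list of word lists,
    one per anchored topic (the remaining topics stay unsupervised);
    anchor_strength controls how strongly anchor words bind to their
    topic. doc_word is a binary document-word matrix, words the
    vocabulary strings aligned with its columns."""
    model = ct.Corex(n_hidden=n_topics)
    model.fit(ss.csr_matrix(doc_word), words=words,
              anchors=anchors, anchor_strength=3)
    return model.get_topics()

# e.g. anchored_topics(X, vocab, anchors=[["protest", "riot"], ["vote"]])
```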
Less is More: Learning Prominent and Diverse Topics for Data Summarization
Jian Tang, Cheng Li, Ming Zhang, Qiaozhu Mei
Statistical topic models efficiently facilitate the exploration of large-scale data sets. Many models have been developed and broadly used to summarize the semantic structure in news, science, social media, and digital humanities. However, a common and practical objective in data exploration tasks is not to enumerate all existing topics, but to quickly extract representative ones that broadly cover the content of the corpus, i.e., a few topics that serve as a good summary of the data. Most existing topic models fit exactly the number of topics a user specifies, which imposes an unnecessary burden on users with limited prior knowledge. We instead propose new models that are able to learn fewer but more representative topics for the purpose of data summarization. We propose a reinforced random walk that allows prominent topics to absorb tokens from similar and smaller topics, thus enhancing the diversity among the top topics extracted. With this reinforced random walk embedded as a general process in classical topic models, we obtain diverse topic models that are able to extract the most prominent and diverse topics from data. The inference procedures of these diverse topic models remain as simple and efficient as those of the classical models. Experimental results demonstrate that the diverse topic models not only discover topics that better summarize the data, but also require minimal prior knowledge from the users.
http://arxiv.org/abs/1611.09921v2
"2016-11-29T22:24:30Z"
cs.LG, cs.CL, cs.IR
2,016
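The paper embeds its reinforced random walk inside topic-model inference; the toy below is emphatically not the authors' algorithm, and only illustrates the absorption dynamic it describes: mass moves from a topic to a similar topic with probability reinforced by the receiver's current size, so prominent topics grow at the expense of similar, smaller ones.

```python
import numpy as np

def reinforce_topics(phi, sizes, gamma=1.0, iters=500, rng=None):
    """Toy reinforced random walk over topics. phi is a (K, V) topic-word
    matrix used only for a crude similarity; at each step a random topic
    j passes a fraction of its mass to topic k with probability
    proportional to sim(j, k) * sizes[k] ** gamma (the reinforcement),
    so small topics are gradually absorbed by similar prominent ones."""
    rng = rng or np.random.default_rng(0)
    sizes = np.asarray(sizes, dtype=float).copy()
    sim = phi @ phi.T                  # crude topic-topic similarity
    np.fill_diagonal(sim, 0.0)
    for _ in range(iters):
        j = rng.integers(len(sizes))
        p = sim[j] * sizes ** gamma
        if p.sum() == 0:
            continue
        k = rng.choice(len(sizes), p=p / p.sum())
        move = 0.1 * sizes[j]          # illustrative transfer fraction
        sizes[j] -= move
        sizes[k] += move
    return sizes
```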
Learning Concept Hierarchies through Probabilistic Topic Modeling
V. S. Anoop, S. Asharaf, P. Deepak
With the advent of the semantic web, various tools and techniques have been introduced for presenting and organizing knowledge. Concept hierarchies are one such technique, and they have gained significant attention due to their usefulness in creating domain ontologies, which are considered an integral part of the semantic web. Automated concept hierarchy learning algorithms focus on extracting relevant concepts from unstructured text corpora and connecting them by identifying potential relations that exist between them. In this paper, we propose a novel approach for identifying relevant concepts from plain text and then learning a hierarchy of concepts by exploiting the subsumption relation between them. To start with, we model topics using a probabilistic topic model and then apply lightweight linguistic processing to extract semantically rich concepts. We then connect concepts by identifying an "is-a" relationship between pairs of concepts. The proposed method is completely unsupervised, and there is no need for a domain-specific training corpus for concept extraction and learning. Experiments on large, real-world text corpora such as the BBC News dataset and the Reuters News corpus show that the proposed method outperforms some of the existing methods for concept extraction, and that efficient concept hierarchy learning is possible when the overall task is guided by a probabilistic topic modeling algorithm.
http://arxiv.org/abs/1611.09573v1
"2016-11-29T11:28:59Z"
cs.AI, cs.CL, cs.IR
2,016
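The "is-a" step can be sketched with the classic document-subsumption test (in the style of Sanderson and Croft, which may differ in detail from the paper's criterion): concept x is taken as a parent of y when x occurs in most documents containing y, but not vice versa. The topic-model-guided concept extraction is assumed already done, and the threshold is illustrative.

```python
from collections import defaultdict
from itertools import combinations

def subsumption_hierarchy(doc_concepts, t=0.8):
    """Learn 'is-a' edges via document subsumption: x is a parent of y
    when x appears in at least a fraction t of the documents containing
    y (P(x|y) >= t) while y does not do the same for x (P(y|x) < t).
    doc_concepts is a list of per-document concept sets."""
    df = defaultdict(int)    # document frequency per concept
    co = defaultdict(int)    # co-occurrence count per unordered pair
    for concepts in doc_concepts:
        for c in concepts:
            df[c] += 1
        for x, y in combinations(sorted(concepts), 2):
            co[(x, y)] += 1
    edges = []
    for (x, y), n in co.items():
        if n / df[y] >= t and n / df[x] < t:
            edges.append((x, y))       # x is-a parent of y
        elif n / df[x] >= t and n / df[y] < t:
            edges.append((y, x))       # y is-a parent of x
    return edges
```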
Poisson Random Fields for Dynamic Feature Models
Valerio Perrone, Paul A. Jenkins, Dario Spano, Yee Whye Teh
We present the Wright-Fisher Indian buffet process (WF-IBP), a probabilistic model for time-dependent data assumed to have been generated by an unknown number of latent features. This model is suitable as a prior in Bayesian nonparametric feature allocation models in which the features underlying the observed data exhibit a dependency structure over time. More specifically, we establish a new framework for generating dependent Indian buffet processes, where the Poisson random field model from population genetics is used as a way of constructing dependent beta processes. Inference in the model is complex, and we describe a sophisticated Markov Chain Monte Carlo algorithm for exact posterior simulation. We apply our construction to develop a nonparametric focused topic model for collections of time-stamped text documents and test it on the full corpus of NIPS papers published from 1987 to 2015.
http://arxiv.org/abs/1611.07460v1
"2016-11-22T18:53:32Z"
stat.ML
2,016