Each record lists seven fields: Title, Authors, Abstract, entry_id, Date, Categories, year.
Conversational Structure Aware and Context Sensitive Topic Model for Online Discussions
Yingcheng Sun, Kenneth Loparo, Richard Kolacinski
Millions of online discussions are generated every day on social media platforms. Topic modelling is an efficient way of better understanding large text datasets at scale. Conventional topic models have had limited success in online discussions; to overcome their limitations, we use the discussion thread tree structure and propose a "popularity" metric that quantifies the number of replies to a comment and extends the frequency of word occurrences, along with a "transitivity" concept that characterizes topic dependency among nodes in a nested discussion thread. We build a Conversational Structure Aware Topic Model (CSATM) based on popularity and transitivity to infer topics and their assignments to comments. Experiments on real forum datasets demonstrate improved performance for topic extraction under six different coherence measures and impressive accuracy for topic assignments.
http://arxiv.org/abs/2002.02353v1
"2020-02-06T16:57:27Z"
cs.CL
2020
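A minimal sketch of the "popularity" weighting idea from the CSATM record above: scaling a comment's word counts by its number of replies before aggregation. The weight 1 + n_replies and the whitespace tokenizer are illustrative assumptions, not details taken from the paper.

    from collections import Counter

    def popularity_weighted_counts(thread):
        """Aggregate word counts over a discussion thread, scaling each
        comment's counts by an assumed popularity weight of 1 + reply count."""
        totals = Counter()
        for comment in thread:  # comment: {"text": str, "n_replies": int}
            weight = 1 + comment["n_replies"]
            for word in comment["text"].lower().split():
                totals[word] += weight
        return totals

    thread = [
        {"text": "the camera is great", "n_replies": 3},
        {"text": "battery drains fast", "n_replies": 0},
    ]
    print(popularity_weighted_counts(thread))  # words in heavily-replied comments count more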
A Neural Topical Expansion Framework for Unstructured Persona-oriented Dialogue Generation
Minghong Xu, Piji Li, Haoran Yang, Pengjie Ren, Zhaochun Ren, Zhumin Chen, Jun Ma
Unstructured Persona-oriented Dialogue Systems (UPDS) have been demonstrated to be effective in generating persona-consistent responses by utilizing predefined natural language user persona descriptions (e.g., "I am a vegan"). However, the predefined user persona descriptions are usually short and limited to only a few descriptive words, which makes it hard to correlate them with the dialogues. As a result, existing methods either fail to use the persona descriptions or use them improperly when generating persona-consistent responses. To address this, we propose a neural topical expansion framework, namely Persona Exploration and Exploitation (PEE), which is able to extend the predefined user persona description with semantically correlated content before utilizing it to generate dialogue responses. PEE consists of two main modules: persona exploration and persona exploitation. The former learns to extend the predefined user persona description by mining and correlating it with an existing dialogue corpus using a variational auto-encoder (VAE) based topic model. The latter learns to generate persona-consistent responses by utilizing the predefined and extended user persona descriptions. To make persona exploitation utilize user persona descriptions more properly, we also introduce two persona-oriented loss functions: a Persona-oriented Matching (P-Match) loss and a Persona-oriented Bag-of-Words (P-BoWs) loss, which supervise persona selection in the encoder and decoder respectively. Experimental results show that our approach outperforms state-of-the-art baselines in terms of both automatic and human evaluations.
http://arxiv.org/abs/2002.02153v1
"2020-02-06T08:24:33Z"
cs.CL, cs.AI, cs.LG
2020
A Method and Analysis to Elicit User-reported Problems in Intelligent Everyday Applications
Malin Eiband, Sarah Theres Völkel, Daniel Buschek, Sophia Cook, Heinrich Hussmann
The complex nature of intelligent systems motivates work on supporting users during interaction, for example through explanations. However, there is as yet little empirical evidence regarding the specific problems users face when applying such systems in everyday situations. This paper contributes a novel method and analysis to investigate such problems as reported by users: we analysed 45,448 reviews of four apps on the Google Play Store (Facebook, Netflix, Google Maps and Google Assistant) with sentiment analysis and topic modelling to reveal problems during interaction that can be attributed to the apps' algorithmic decision-making. We enriched this data with users' coping and support strategies through a follow-up online survey (N=286). In particular, we found problems and strategies related to content, algorithm, user choice, and feedback. We discuss corresponding implications for designing user support, highlighting the importance of user control and explanations of output rather than processes.
http://arxiv.org/abs/2002.01288v1
"2020-02-04T14:05:43Z"
cs.HC, H.m
2020
Crash Themes in Automated Vehicles: A Topic Modeling Analysis of the California Department of Motor Vehicles Automated Vehicle Crash Database
Hananeh Alambeigi, Anthony D. McDonald, Srinivas R. Tankasala
Automated vehicle technology promises to reduce the societal impact of traffic crashes. Early investigations of this technology suggest that significant safety issues remain during control transfers between the automation and human drivers and in the automation's interactions with the transportation system. In order to address these issues, it is critical to understand both the behavior of human drivers during these events and the environments where they occur. This article analyzes automated vehicle crash narratives from the California Department of Motor Vehicles automated vehicle crash database to identify safety concerns and gaps between crash types and areas of focus in current research. The database was analyzed using probabilistic topic modeling of open-ended crash narratives. Topic modeling analysis identified five themes in the database: driver-initiated transition crashes, sideswipe crashes during left-side overtakes, and rear-end collisions while the vehicle was stopped at an intersection, in a turn lane, and when the crash involved oncoming traffic. Many crashes represented by the driver-initiated transitions topic were also associated with sideswipe collisions. A substantial portion of the sideswipe collisions also involved motorcycles. These findings highlight previously raised safety concerns with transitions of control and interactions between vehicles in automated mode and the transportation social network. In response to these findings, future empirical work should focus on driver-initiated transitions, overtakes, silent failures, complex traffic situations, and adverse driving environments. Beyond this future work, the topic modeling analysis method may be used as a tool to monitor emergent safety issues.
http://arxiv.org/abs/2001.11087v1
"2020-01-29T20:53:01Z"
stat.AP
2020
Charting the Landscape of Online Cryptocurrency Manipulation
Leonardo Nizzoli, Serena Tardelli, Marco Avvenuti, Stefano Cresci, Maurizio Tesconi, Emilio Ferrara
Cryptocurrencies represent one of the most attractive markets for financial speculation. As a consequence, they have attracted unprecedented attention on social media. Besides genuine discussions and legitimate investment initiatives, several deceptive activities have flourished. In this work, we chart the online cryptocurrency landscape across multiple platforms. To reach our goal, we collected a large dataset, composed of more than 50M messages published by almost 7M users on Twitter, Telegram and Discord, over three months. We performed bot detection on Twitter accounts sharing invite links to Telegram and Discord channels, and we discovered that more than 56% of them were bots or suspended accounts. Then, we applied topic modeling techniques to Telegram and Discord messages, unveiling two different deception schemes - "pump-and-dump" and "Ponzi" - and identifying the channels involved in these frauds. Whereas on Discord we found a negligible level of deception, on Telegram we retrieved 296 channels involved in pump-and-dump and 432 involved in Ponzi schemes, accounting for a striking 20% of the total. Moreover, we observed that 93% of the invite links shared by Twitter bots point to Telegram pump-and-dump channels, shedding light on a little-known social bot activity. Charting the landscape of online cryptocurrency manipulation can inform actionable policies to fight such abuse.
http://arxiv.org/abs/2001.10289v1
"2020-01-28T12:19:09Z"
cs.CY
2020
Keyword-based Topic Modeling and Keyword Selection
Xingyu Wang, Lida Zhang, Diego Klabjan
Certain types of documents, such as tweets, are collected by specifying a set of keywords. As topics of interest change with time, it is beneficial to adjust keywords dynamically. The challenge is that keywords need to be specified ahead of knowing the forthcoming documents and the underlying topics. The future topics should mimic past topics of interest, yet there should be some novelty in them. We develop a keyword-based topic model that dynamically selects a subset of keywords to be used to collect future documents. The generative process first selects keywords and then the underlying documents based on the specified keywords. The model is trained by using a variational lower bound and stochastic gradient optimization. The inference consists of finding a subset of keywords where, given that subset, the model predicts the underlying topic-word matrix for the unknown forthcoming documents. We compare the keyword topic model against a benchmark model that combines viral predictions of tweets with a topic model. The keyword-based topic model outperforms this sophisticated baseline model by 67%.
http://arxiv.org/abs/2001.07866v1
"2020-01-22T03:41:10Z"
stat.ML, cs.IR, cs.LG
2020
Optimal estimation of sparse topic models
Xin Bing, Florentina Bunea, Marten Wegkamp
Topic models have become popular tools for dimension reduction and exploratory analysis of text data, which consist of observed frequencies of a vocabulary of $p$ words in $n$ documents, stored in a $p\times n$ matrix. The main premise is that the mean of this data matrix can be factorized into a product of two non-negative matrices: a $p\times K$ word-topic matrix $A$ and a $K\times n$ topic-document matrix $W$. This paper studies the estimation of a possibly element-wise sparse $A$ when the number of topics $K$ is unknown. In this under-explored context, we derive a new minimax lower bound for the estimation of such $A$ and propose a new computationally efficient algorithm for its recovery. We derive a finite sample upper bound for our estimator, and show that it matches the minimax lower bound in many scenarios. Our estimate adapts to the unknown sparsity of $A$ and our analysis is valid for any finite $n$, $p$, $K$ and document lengths. Empirical results on both synthetic and semi-synthetic data show that our proposed estimator is a strong competitor of the existing state-of-the-art algorithms for both non-sparse $A$ and sparse $A$, and has superior performance in many scenarios of interest.
http://arxiv.org/abs/2001.07861v1
"2020-01-22T03:19:50Z"
stat.ML, cs.IR, cs.LG
2020
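The factorization premise in the record above can be illustrated with scikit-learn's generic NMF as a stand-in on synthetic counts. This is not the paper's estimator (which adapts to the sparsity of A and an unknown K, with minimax guarantees); it only shows the X ≈ A W setup, with all dimensions chosen arbitrarily.

    import numpy as np
    from sklearn.decomposition import NMF

    p, n, K = 50, 200, 3
    rng = np.random.default_rng(0)
    A = rng.dirichlet(np.full(p, 0.05), size=K).T   # p x K word-topic matrix, sparse columns
    W = rng.dirichlet(np.ones(K), size=n).T         # K x n topic-document matrix
    X = rng.poisson(100 * (A @ W))                  # observed p x n word counts

    # generic stand-in estimator: recovers A and W up to permutation and scaling
    nmf = NMF(n_components=K, init="nndsvda", max_iter=500, random_state=0)
    A_hat = nmf.fit_transform(X)                    # p x K estimate
    W_hat = nmf.components_                         # K x n estimate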
Random-walk Based Generative Model for Classifying Document Networks
Takafumi J. Suzuki
Document networks are found in various collections of real-world data, such as citation networks, hyperlinked web pages, and online social networks. A large number of generative models have been proposed because they offer intuitive and useful pictures for analyzing document networks. Prominent examples are relational topic models, where documents are linked according to their topic similarities. However, existing generative models do not make full use of network structures because they are largely dependent on topic modeling of documents. In particular, centrality of graph nodes is missing in generative processes of previous models. In this paper, we propose a novel generative model for document networks by introducing random walkers on networks to integrate the node centrality into link generation processes. The developed method is evaluated in semi-supervised classification tasks with real-world citation networks. We show that the proposed model outperforms existing probabilistic approaches especially in detecting communities in connected networks.
http://arxiv.org/abs/2001.07380v1
"2020-01-21T08:26:06Z"
physics.soc-ph, cs.IR, cs.SI
2020
Crude oil price forecasting incorporating news text
Yun Bai, Xixi Li, Hao Yu, Suling Jia
Sparse and short news headlines can be arbitrary, noisy, and ambiguous, making it difficult for the classic topic model LDA (latent Dirichlet allocation), designed to accommodate long texts, to discover knowledge from them. Nonetheless, some of the existing research on text-based crude oil forecasting employs LDA to explore topics from news headlines, resulting in a mismatch between the short text and the topic model and further affecting the forecasting performance. Exploiting advanced and appropriate methods to construct high-quality features from news headlines becomes crucial in crude oil forecasting. To tackle this issue, this paper introduces two novel indicators of topic and sentiment for short and sparse text data. Empirical experiments show that AdaBoost.RT with our proposed text indicators, which give a more comprehensive view and characterization of the short and sparse text data, outperforms the other benchmarks. Another significant merit is that our method also yields good forecasting performance when applied to other futures commodities.
http://arxiv.org/abs/2002.02010v4
"2020-01-19T16:58:05Z"
q-fin.ST
2020
VSEC-LDA: Boosting Topic Modeling with Embedded Vocabulary Selection
Yuzhen Ding, Baoxin Li
Topic modeling has found wide application in many problems where latent structures of the data are crucial for typical inference tasks. When applying a topic model, a relatively standard pre-processing step is to first build a vocabulary of frequent words. Such a general pre-processing step is often independent of the topic modeling stage, and thus there is no guarantee that the pre-generated vocabulary can support the inference of some optimal (or even meaningful) topic models appropriate for a given task, especially for computer vision applications involving "visual words". In this paper, we propose a new approach to topic modeling, termed Vocabulary-Selection-Embedded Correspondence-LDA (VSEC-LDA), which learns the latent model while simultaneously selecting the most relevant words. The selection of words is driven by an entropy-based metric that measures the relative contribution of the words to the underlying model, and is done dynamically while the model is learned. We present three variants of VSEC-LDA and evaluate the proposed approach with experiments on both synthetic and real databases from different applications. The results demonstrate the effectiveness of built-in vocabulary selection and its importance in improving the performance of topic modeling.
http://arxiv.org/abs/2001.05578v1
"2020-01-15T22:16:24Z"
cs.CV
2020
CATVI: Conditional and Adaptively Truncated Variational Inference for Hierarchical Bayesian Nonparametric Models
Yirui Liu, Xinghao Qiao, Jessica Lam
Current variational inference methods for hierarchical Bayesian nonparametric models can neither characterize the correlation structure among latent variables due to the mean-field setting, nor infer the true posterior dimension because of the universal truncation. To overcome these limitations, we propose the conditional and adaptively truncated variational inference method (CATVI) by maximizing the nonparametric evidence lower bound and integrating Monte Carlo into the variational inference framework. CATVI enjoys several advantages over traditional methods, including a smaller divergence between variational and true posteriors, reduced risk of underfitting or overfitting, and improved prediction accuracy. Empirical studies on three large datasets reveal that CATVI applied in Bayesian nonparametric topic models substantially outperforms competing models, providing lower perplexity and clearer topic-word clustering.
http://arxiv.org/abs/2001.04508v2
"2020-01-13T19:27:11Z"
stat.ML, cs.LG
2020
Detecting New Word Meanings: A Comparison of Word Embedding Models in Spanish
Andrés Torres-Rivera, Juan-Manuel Torres-Moreno
Semantic neologisms (SN) are defined as words that acquire a new meaning while maintaining their form. Given the nature of this kind of neologisms, the task of identifying these new word meanings is currently performed manually by specialists at observatories of neology. To detect SN in a semi-automatic way, we developed a system that implements a combination of the following strategies: topic modeling, keyword extraction, and word sense disambiguation. The role of topic modeling is to detect the themes that are treated in the input text. Themes within a text give clues about the particular meaning of the words that are used, for example: viral has one meaning in the context of computer science (CS) and another when talking about health. To extract keywords, we used TextRank with POS tag filtering. With this method, we can obtain relevant words that are already part of the Spanish lexicon. We use a deep learning model to determine if a given keyword could have a new meaning. Embeddings that are different from all the known meanings (or topics) indicate that a word might be a valid SN candidate. In this study, we examine the following word embedding models: Word2Vec, Sense2Vec, and FastText. The models were trained with equivalent parameters using Wikipedia in Spanish as the corpus. Then we used a list of words and their concordances (obtained from our database of neologisms) to show the different embeddings that each model yields. Finally, we present a comparison of these outcomes with the concordances of each word to show how we can determine if a word could be a valid candidate for SN.
http://arxiv.org/abs/2001.05285v1
"2020-01-12T21:54:52Z"
cs.CL
2020
Review of Probability Distributions for Modeling Count Data
F. William Townes
Count data take on non-negative integer values and are challenging to properly analyze using standard linear-Gaussian methods such as linear regression and principal components analysis. Generalized linear models enable direct modeling of counts in a regression context using distributions such as the Poisson and negative binomial. When counts contain only relative information, multinomial or Dirichlet-multinomial models can be more appropriate. We review some of the fundamental connections between multinomial and count models from probability theory, providing detailed proofs. These relationships are useful for methods development in applications such as topic modeling of text data and genomics.
http://arxiv.org/abs/2001.04343v1
"2020-01-10T18:28:19Z"
stat.ME, stat.ML
2020
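For instance, the count regressions the record above mentions take one line each in statsmodels; a minimal sketch on simulated data (the coefficients 0.5 and 0.8 are arbitrary):

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    x = rng.normal(size=200)
    y = rng.poisson(np.exp(0.5 + 0.8 * x))   # counts with a log-linear mean

    X = sm.add_constant(x)
    poisson_fit = sm.GLM(y, X, family=sm.families.Poisson()).fit()
    negbin_fit = sm.GLM(y, X, family=sm.families.NegativeBinomial()).fit()
    print(poisson_fit.params)                 # close to the true (0.5, 0.8)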
A Correspondence Analysis Framework for Author-Conference Recommendations
Rahul Radhakrishnan Iyer, Manish Sharma, Vijaya Saradhi
For many years, achievements and discoveries made by scientists have been made known through research papers published in appropriate journals or conferences. Established scientists, and especially newcomers, are often caught in the dilemma of choosing an appropriate conference for their work. Every scientific conference and journal is inclined towards a particular field of research, and there is a vast multitude of them for any particular field. Choosing an appropriate venue is vital as it helps in reaching the right audience and furthers one's chance of getting the paper published. In this work, we address the problem of recommending appropriate conferences to authors to increase their chances of acceptance. We present three different approaches, involving the use of the authors' social network and the content of the paper in the settings of dimensionality reduction and topic modeling. In all these approaches, we apply Correspondence Analysis (CA) to derive appropriate relationships between the entities in question, such as conferences and papers. Our models show promising results when compared with existing methods such as content-based filtering, collaborative filtering and hybrid filtering.
http://arxiv.org/abs/2001.02669v1
"2020-01-08T18:52:39Z"
cs.IR, cs.CL, cs.LG, cs.SI, stat.ML
2020
Topic Extraction of Crawled Documents Collection using Correlated Topic Model in MapReduce Framework
Mi Khine Oo, May Aye Khine
The tremendous increase in the amount of available research documents impels researchers to propose topic models to extract the latent semantic themes of a document collection. However, how to extract the hidden topics of a document collection has become a crucial task for many topic model applications. Moreover, conventional topic modeling approaches suffer from scalability problems as the size of the document collection increases. In this paper, the Correlated Topic Model with a variational Expectation-Maximization algorithm is implemented in the MapReduce framework to solve the scalability problem. The proposed approach utilizes a dataset crawled from a public digital library. In addition, the full texts of the crawled documents are analysed to enhance the accuracy of MapReduce CTM. Experiments are conducted to demonstrate the performance of the proposed algorithm. From the evaluation, the proposed approach has comparable performance, in terms of topic coherence, with LDA implemented in the MapReduce framework.
http://arxiv.org/abs/2001.01669v1
"2020-01-06T17:09:21Z"
cs.IR, cs.CL, cs.LG, stat.ML
2020
Measuring the Diversity of Facebook Reactions to Research
Cole Freeman, Hamed Alhoori, Murtuza Shahzad
Online and in the real world, communities are bonded together by emotional consensus around core issues. Emotional responses to scientific findings often play a pivotal role in these core issues. When there is too much diversity of opinion on topics of science, emotions flare up and give rise to conflict. This conflict threatens positive outcomes for research. Emotions have the power to shape how people process new information. They can color the public's understanding of science, motivate policy positions, even change lives. And yet little work has been done to evaluate the public's emotional response to science using quantitative methods. In this paper, we use a dataset of responses to scholarly articles on Facebook to analyze the dynamics of emotional valence, intensity, and diversity. We present a novel way of weighting click-based reactions that increases their comprehensibility, and use these weighted reactions to develop new metrics of aggregate emotional responses. We use our metrics along with LDA topic models and statistical testing to investigate how users' emotional responses differ from one scientific topic to another. We find that research articles related to gender, genetics, or agricultural/environmental sciences elicit significantly different emotional responses from users than other research topics. We also find that there is generally a positive response to scientific research on Facebook, and that articles generating a positive emotional response are more likely to be widely shared---a conclusion that contradicts previous studies of other social media platforms.
http://arxiv.org/abs/2001.01029v1
"2020-01-04T03:41:44Z"
cs.SI, cs.CY
2020
On Large-Scale Dynamic Topic Modeling with Nonnegative CP Tensor Decomposition
Miju Ahn, Nicole Eikmeier, Jamie Haddock, Lara Kassab, Alona Kryshchenko, Kathryn Leonard, Deanna Needell, R. W. M. A. Madushani, Elena Sizikova, Chuntian Wang
There is currently an unprecedented demand for large-scale temporal data analysis due to the explosive growth of data. Dynamic topic modeling has been widely used in social and data sciences with the goal of learning latent topics that emerge, evolve, and fade over time. Previous work on dynamic topic modeling primarily employs the method of nonnegative matrix factorization (NMF), where slices of the data tensor are each factorized into the product of lower-dimensional nonnegative matrices. With this approach, however, information contained in the temporal dimension of the data is often neglected or underutilized. To overcome this issue, we propose instead adopting the method of nonnegative CANDECOMP/PARAFAC (CP) tensor decomposition (NNCPD), where the data tensor is directly decomposed into a minimal sum of outer products of nonnegative vectors, thereby preserving the temporal information. The viability of NNCPD is demonstrated through application to both synthetic and real data, where significantly improved results are obtained compared to those of typical NMF-based methods. The advantages of NNCPD over such approaches are studied and discussed. To the best of our knowledge, this is the first time that NNCPD has been utilized for the purpose of dynamic topic modeling, and our findings will be transformative for both applications and further developments.
http://arxiv.org/abs/2001.00631v2
"2020-01-02T21:28:10Z"
cs.LG, stat.ML
2020
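A minimal sketch of nonnegative CP decomposition on a (documents x terms x time) tensor using TensorLy; the rank, tensor layout, and Poisson toy data are illustrative assumptions, not the authors' experimental setup.

    import numpy as np
    import tensorly as tl
    from tensorly.decomposition import non_negative_parafac

    rng = np.random.default_rng(0)
    X = tl.tensor(rng.poisson(2.0, size=(100, 500, 12)).astype(float))  # docs x terms x time

    weights, (docs_f, terms_f, time_f) = non_negative_parafac(X, rank=5, n_iter_max=200)
    # terms_f[:, k] is topic k's word profile and time_f[:, k] its temporal
    # trajectory; slice-wise NMF would not recover these jointly.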
Domain-topic models with chained dimensions: charting an emergent domain of a major oncology conference
Alexandre Hannud Abdo, Jean-Philippe Cointet, Pascale Bourret, Alberto Cambrosio
This paper presents a contribution to the study of bibliographic corpora in the context of science mapping. Starting from a graph representation of documents and their textual dimension, we observe that stochastic block models (SBMs) can provide a simultaneous clustering of documents and words that we call a domain-topic model. Previous work (Gerlach et al., 2018) investigated the resulting topics, or word clusters, while ours focuses on the study of the document clusters, which we call domains. To enable the synthetic description and interactive navigation of domains, we introduce measures and interfaces relating both types of clusters, which reflect the structure of the graph and the model. We then present a procedure that, starting from the document clusters, extends the block model to also cluster arbitrary metadata attributes of the documents. We call this procedure a domain-chained model, and our previous measures and interfaces can be directly transposed to read the metadata clusters. We provide an example application to a corpus that is relevant to current STS research, and an interesting case for our approach: the 1995-2017 collection of abstracts presented at ASCO, the main annual oncology research conference. Through a sequence of domain-topic and domain-chained models, we identify and describe a particular group of domains in ASCO that have notably grown through the last decades, and which we relate to the establishment of "oncopolicy" as a major concern in oncology.
http://arxiv.org/abs/1912.13349v3
"2019-12-31T15:17:26Z"
cs.DL, physics.data-an, physics.soc-ph
2019
Recurrent Hierarchical Topic-Guided RNN for Language Generation
Dandan Guo, Bo Chen, Ruiying Lu, Mingyuan Zhou
To simultaneously capture syntax and global semantics from a text corpus, we propose a new larger-context recurrent neural network (RNN) based language model, which extracts recurrent hierarchical semantic structure via a dynamic deep topic model to guide natural language generation. Moving beyond a conventional RNN-based language model that ignores long-range word dependencies and sentence order, the proposed model captures not only intra-sentence word dependencies, but also temporal transitions between sentences and inter-sentence topic dependencies. For inference, we develop a hybrid of stochastic-gradient Markov chain Monte Carlo and recurrent autoencoding variational Bayes. Experimental results on a variety of real-world text corpora demonstrate that the proposed model not only outperforms larger-context RNN-based language models, but also learns interpretable recurrent multilayer topics and generates diverse sentences and paragraphs that are syntactically correct and semantically coherent.
http://arxiv.org/abs/1912.10337v2
"2019-12-21T21:11:35Z"
cs.CL, cs.LG, stat.ME, stat.ML
2019
Topic subject creation using unsupervised learning for topic modeling
Rashid Mehdiyev, Jean Nava, Karan Sodhi, Saurav Acharya, Annie Ibrahim Rana
We describe the use of the Non-Negative Matrix Factorization (NMF) and Latent Dirichlet Allocation (LDA) algorithms to perform topic mining and labelling, applied to retail customer communications, in an attempt to characterize the subject of customer inquiries. In this paper we compare the topic mining performance of both algorithms and propose methods to assign topic subject labels in an automated way.
http://arxiv.org/abs/1912.08868v1
"2019-12-18T20:11:03Z"
cs.LG, cs.CL, cs.CY, cs.IR, stat.ML
2019
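A compact sketch of the NMF-versus-LDA comparison from the record above, with automated labels taken as each topic's top-weighted words. The toy inquiries and library defaults are assumptions, not the paper's configuration.

    from sklearn.decomposition import NMF, LatentDirichletAllocation
    from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer

    docs = ["my order never arrived", "refund for my damaged item",
            "please change my delivery address", "order arrived damaged"]

    def top_words(model, names, n=3):
        """Label each topic with its n highest-weighted vocabulary terms."""
        return [" ".join(names[i] for i in comp.argsort()[-n:][::-1])
                for comp in model.components_]

    tfidf = TfidfVectorizer()
    nmf = NMF(n_components=2, random_state=0).fit(tfidf.fit_transform(docs))
    counts = CountVectorizer()
    lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(counts.fit_transform(docs))

    print("NMF topic labels:", top_words(nmf, tfidf.get_feature_names_out()))
    print("LDA topic labels:", top_words(lda, counts.get_feature_names_out()))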
Minority report detection in refugee-authored community-driven journalism using RBMs
Bogdana Rakova, Nick DePalma
Our work seeks to gather and distribute sensitive information from refugee settlements to stakeholders to help shape policy and guide action networks. In this paper, we propose the following: 1) a method of data collection through stakeholder organizations experienced in working with displaced and refugee communities; 2) a method of topic modeling based on Deep Boltzmann Machines that identifies topics and issues of interest within the population, to help enable mapping of human rights violations; and 3) a secondary analysis component that uses the probability of fit to isolate minority reports within these stories using anomaly detection techniques.
http://arxiv.org/abs/1912.04953v1
"2019-12-10T19:58:30Z"
cs.CY
2019
JNET: Learning User Representations via Joint Network Embedding and Topic Embedding
Lin Gong, Lu Lin, Weihao Song, Hongning Wang
User representation learning is vital to capture diverse user preferences, while it is also challenging as user intents are latent and scattered among complex and different modalities of user-generated data, and thus not directly measurable. Inspired by the concept of user schema in social psychology, we take a new perspective to perform user representation learning by constructing a shared latent space to capture the dependency among different modalities of user-generated data. Both users and topics are embedded into the same space to encode users' social connections and text content, to facilitate joint modeling of different modalities via a probabilistic generative framework. We evaluated the proposed solution on large collections of Yelp reviews and StackOverflow discussion posts, with their associated network structures. The proposed model outperformed several state-of-the-art topic modeling based user models with better predictive power in unseen documents, and state-of-the-art network embedding based user models with improved link prediction quality in unseen nodes. The learnt user representations also prove useful in content recommendation, e.g., expert finding in StackOverflow.
http://arxiv.org/abs/1912.00465v1
"2019-12-01T18:21:42Z"
cs.SI, cs.IR
2019
Macross: Urban Dynamics Modeling based on Metapath Guided Cross-Modal Embedding
Yunan Zhang, Heting Gao, Tarek Abdelzaher
As the ongoing rapid urbanization takes place at an ever-increasing speed, fully modeling urban dynamics becomes more and more challenging, but also a necessity for socioeconomic development. It is challenging because human activities and constructions are ubiquitous; the urban landscape and the content of urban life change anywhere and anytime. It is crucial because only up-to-date urban dynamics enable governments to optimize their city planning strategies and help individuals organize their daily lives more efficiently. Previous methods based on geographic topic models attempt to solve this problem but suffer from high computational cost and memory consumption, limiting their scalability to city-level applications. Also, strong prior assumptions make such models fail to capture certain patterns by nature. To bridge the gap, we propose Macross, a metapath-guided embedding approach to jointly model location, time and text information. Given a dataset of geo-tagged social media posts, we extract and aggregate location and time and construct a heterogeneous information network using the aggregated space and time. A metapath2vec-based approach is used to construct vector representations for times, locations and frequent words such that co-occurring pairs of nodes are closer in the latent space. The vector representations are used to infer related times, locations or keywords for a user query. Experiments on large datasets show that our model generates query results of comparable, if not better, quality than state-of-the-art models, and outperforms some cutting-edge models for activity recovery and classification.
http://arxiv.org/abs/1911.12866v1
"2019-11-28T21:09:52Z"
cs.IR
2019
Legal document retrieval across languages: topic hierarchies based on synsets
Carlos Badenes-Olmedo, Jose-Luis Redondo-Garcia, Oscar Corcho
Cross-lingual annotations of legislative texts enable us to explore major themes covered in multilingual legal data and are a key facilitator of semantic similarity when searching for similar documents. Multilingual probabilistic topic models have recently emerged as a group of semi-supervised machine learning models that can be used to perform thematic explorations on collections of texts in multiple languages. However, these approaches require theme-aligned training data to create a language-independent space, which limits the amount of scenarios where this technique can be used. In this work, we provide an unsupervised document similarity algorithm based on hierarchies of multi-lingual concepts to describe topics across languages. The algorithm does not require parallel or comparable corpora, or any other type of translation resource. Experiments performed on the English, Spanish, French and Portuguese editions of JCR-Acquis corpora reveal promising results on classifying and sorting documents by similar content.
http://arxiv.org/abs/1911.12637v1
"2019-11-28T10:49:36Z"
cs.IR, cs.DL
2019
Conditional Hierarchical Bayesian Tucker Decomposition for Genetic Data Analysis
Adam Sandler, Diego Klabjan, Yuan Luo
We develop methods for reducing the dimensionality of large data sets, common in biomedical applications. Learning about patients using genetic data often includes more features than observations, which makes direct supervised learning difficult. One method of reducing the feature space is to use latent Dirichlet allocation to group genetic variants in an unsupervised manner. Latent Dirichlet allocation describes a patient as a mixture of topics corresponding to genetic variants. This can be generalized as a Bayesian tensor decomposition to account for multiple feature variables. Our most significant contributions involve hierarchical topic modeling. We design distinct methods of incorporating hierarchical topic modeling, based on nested Chinese restaurant processes and the Pachinko Allocation Machine, into Bayesian tensor decomposition. We apply these models to examine patients with one of four common types of cancer (breast, lung, prostate, and colorectal) and siblings with and without autism spectrum disorder. We link the genes with their biological pathways and combine this information into a tensor of patients, counts of their genetic variants, and the genes' membership in pathways. We find that our trained models outperform baseline models, with respect to coherence, by up to 40%.
http://arxiv.org/abs/1911.12426v6
"2019-11-27T21:22:04Z"
cs.LG, stat.ME, stat.ML
2019
Tracing State-Level Obesity Prevalence from Sentence Embeddings of Tweets: A Feasibility Study
Xiaoyi Zhang, Rodoniki Athanasiadou, Narges Razavian
Twitter data has been shown broadly applicable for public health surveillance. Previous public health studies based on Twitter data have largely relied on keyword-matching or topic models for clustering relevant tweets. However, both methods suffer from the short-length of texts and unpredictable noise that naturally occurs in user-generated contexts. In response, we introduce a deep learning approach that uses hashtags as a form of supervision and learns tweet embeddings for extracting informative textual features. In this case study, we address the specific task of estimating state-level obesity from dietary-related textual features. Our approach yields an estimation that strongly correlates the textual features to government data and outperforms the keyword-matching baseline. The results also demonstrate the potential of discovering risk factors using the textual features. This method is general-purpose and can be applied to a wide range of Twitter-based public health studies.
http://arxiv.org/abs/1911.11324v2
"2019-11-26T03:57:15Z"
cs.CL, cs.SI
2019
My Approach = Your Apparatus? Entropy-Based Topic Modeling on Multiple Domain-Specific Text Collections
Julian Risch, Ralf Krestel
Comparative text mining extends from genre analysis and political bias detection to the revelation of cultural and geographic differences, through to the search for prior art across patents and scientific papers. These applications use cross-collection topic modeling for the exploration, clustering, and comparison of large sets of documents, such as digital libraries. However, topic modeling on documents from different collections is challenging because of domain-specific vocabulary. We present a cross-collection topic model combined with automatic domain term extraction and phrase segmentation. This model distinguishes collection-specific and collection-independent words based on information entropy and reveals commonalities and differences of multiple text collections. We evaluate our model on patents, scientific papers, newspaper articles, forum posts, and Wikipedia articles. In comparison to state-of-the-art cross-collection topic modeling, our model achieves up to 13% higher topic coherence, up to 4% lower perplexity, and up to 31% higher document classification accuracy. More importantly, our approach is the first topic model that ensures disjunct general and specific word distributions, resulting in clear-cut topic representations.
http://arxiv.org/abs/1911.11240v1
"2019-11-25T21:29:59Z"
cs.IR, cs.DL
2019
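One plausible reading of the entropy criterion in the record above (an illustrative assumption, not the paper's exact formulation): a word whose occurrences are spread evenly across collections has high normalized entropy and is treated as collection-independent, while a concentrated word is collection-specific.

    import numpy as np

    def normalized_entropy(counts):
        """Entropy of a word's distribution over collections, scaled to [0, 1]."""
        p = np.asarray(counts, dtype=float)
        p = p / p.sum()
        p = p[p > 0]
        return float(-(p * np.log(p)).sum() / np.log(len(counts)))

    # counts of a word in (patents, papers, news) collections
    print(normalized_entropy([40, 38, 41]))  # near 1.0 -> collection-independent
    print(normalized_entropy([95, 2, 1]))    # low -> collection-specific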
FLATM: A Fuzzy Logic Approach Topic Model for Medical Documents
Amir Karami, Aryya Gangopadhyay, Bin Zhou, Hadi Kharrazi
One of the challenges for text analysis in medical domains is analyzing large-scale medical documents. As a consequence, finding relevant documents has become more difficult. One of the popular methods to retrieve information based on discovering the themes in the documents is topic modeling. The themes in the documents help to retrieve documents on the same topic with and without a query. In this paper, we present a novel approach to topic modeling using fuzzy clustering. To evaluate our model, we experiment with two text datasets of medical documents. The evaluations, carried out through document classification and document modeling, show that our model produces better performance than LDA, indicating that fuzzy set theory can improve the performance of topic models in medical domains.
http://arxiv.org/abs/1911.10953v1
"2019-11-25T14:55:11Z"
cs.IR, cs.CL
2019
Discovering topics with neural topic models built from PLSA assumptions
Sileye O. Ba
In this paper we present a model for unsupervised topic discovery in text corpora. The proposed model uses document, word, and topic lookup-table embeddings as neural network parameters to build probabilities of words given topics and probabilities of topics given documents. These probabilities are used to recover, by marginalization, probabilities of words given documents. For very large corpora, where the number of documents can be on the order of billions, using a neural auto-encoder based document embedding is more scalable than using a lookup-table embedding as classically done. We thus extend the lookup-based document embedding model to a continuous auto-encoder based model. Our models are trained using probabilistic latent semantic analysis (PLSA) assumptions. We evaluated our models on six datasets with a rich variety of contents. The conducted experiments demonstrate that the proposed neural topic models are very effective in capturing relevant topics. Furthermore, on the perplexity metric, our evaluation benchmarks show that our topic models outperform the latent Dirichlet allocation (LDA) model classically used to address topic discovery tasks.
http://arxiv.org/abs/1911.10924v1
"2019-11-25T13:59:05Z"
cs.CL, cs.LG, stat.ML
2019
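The marginalization at the core of the model above, p(w|d) = sum over t of p(w|t) p(t|d), with both factors produced from lookup-table embeddings, can be sketched in a few lines; all dimensions here are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(0)
    D, T, V, E = 100, 8, 1000, 32     # docs, topics, vocab, embedding dim (assumed)

    doc_emb   = rng.normal(size=(D, E))   # lookup-table parameters, normally learned
    topic_emb = rng.normal(size=(T, E))
    word_emb  = rng.normal(size=(V, E))

    def softmax(x, axis=-1):
        x = x - x.max(axis=axis, keepdims=True)
        e = np.exp(x)
        return e / e.sum(axis=axis, keepdims=True)

    p_t_given_d = softmax(doc_emb @ topic_emb.T)    # D x T
    p_w_given_t = softmax(topic_emb @ word_emb.T)   # T x V
    p_w_given_d = p_t_given_d @ p_w_given_t         # D x V, recovered by marginalization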
Topical Phrase Extraction from Clinical Reports by Incorporating both Local and Global Context
Gabriele Pergola, Yulan He, David Lowe
Making sense of words often requires simultaneously examining the surrounding context of a term as well as the global themes characterizing the overall corpus. Several topic models have already exploited word embeddings to recognize local context; however, this information has been only weakly combined with the global context during topic inference. This paper proposes to extract topical phrases by corroborating the word embedding information with the global context detected by Latent Semantic Analysis, and then combining them by means of the P\'{o}lya urn model. To highlight the effectiveness of this combined approach, the model was assessed on clinical reports, a challenging scenario characterized by technical jargon and limited word statistics. Results show it outperforms the state-of-the-art approaches in terms of both topic coherence and computational cost.
http://arxiv.org/abs/1911.10180v1
"2019-11-22T18:29:19Z"
cs.CL
2019
Topic Model for four-top at the LHC
Ezequiel Alvarez, Federico Lamagna, Manuel Szewc
We study the implementation of a Topic Model algorithm in four-top searches at the LHC as a test probe of a non-ideal system for applying this technique. We study the behavior of this Topic Model as its hypotheses, such as mutual irreducibility and equal distribution in all samples, shift away from holding true. The four-top final state at the LHC is relevant not only because it does not fulfill these conditions, but also because it is a difficult and inefficient system to reconstruct, and current Monte Carlo modeling of signal and backgrounds suffers from non-negligible uncertainties. We implement this Topic Model algorithm in the same-sign lepton channel, where S/B is of order one and no background can have more than two b-jets at parton level. We define different mixtures according to the number of b-jets and use the total number of jets to demix. Since only the background has an anchor bin, we find that we can reconstruct the background in the signal region independently of Monte Carlo. We propose to use this information to tune the Monte Carlo in the signal region and then compare the signal prediction with data. We also explore Machine Learning techniques applied to this Topic Model algorithm and find slight improvements as well as potential roads to investigate. Although our findings indicate that the implementation would still be challenging with the full LHC Run 3 data, through this work we pursue ways to reduce the impact of Monte Carlo simulations in four-top searches at the LHC.
http://arxiv.org/abs/1911.09699v2
"2019-11-21T19:02:25Z"
hep-ph, hep-ex
2019
Graph-Driven Generative Models for Heterogeneous Multi-Task Learning
Wenlin Wang, Hongteng Xu, Zhe Gan, Bai Li, Guoyin Wang, Liqun Chen, Qian Yang, Wenqi Wang, Lawrence Carin
We propose a novel graph-driven generative model that unifies multiple heterogeneous learning tasks into the same framework. The proposed model is based on the fact that heterogeneous learning tasks, which correspond to different generative processes, often rely on data with a shared graph structure. Accordingly, our model combines a graph convolutional network (GCN) with multiple variational autoencoders, thus embedding the nodes of the graph (i.e., samples for the tasks) in a uniform manner while specializing their organization and usage to different tasks. With a focus on healthcare applications (tasks), including clinical topic modeling, procedure recommendation and admission-type prediction, we demonstrate that our method successfully leverages information across different tasks, boosting performance in all tasks and outperforming existing state-of-the-art approaches.
http://arxiv.org/abs/1911.08709v1
"2019-11-20T05:14:35Z"
cs.LG, stat.ML
2019
A Coefficient of Determination for Probabilistic Topic Models
Tommy Jones
This research proposes a new (old) metric for evaluating goodness of fit in topic models, the coefficient of determination, or $R^2$. Within the context of topic modeling, $R^2$ has the same interpretation that it does when used in a broader class of statistical models. Reporting $R^2$ with topic models addresses two current problems in topic modeling: a lack of standard cross-contextual evaluation metrics for topic modeling and ease of communication with lay audiences. The author proposes that $R^2$ should be reported as a standard metric when constructing topic models.
http://arxiv.org/abs/1911.11061v2
"2019-11-20T00:55:30Z"
cs.IR, cs.LG, stat.ML
2019
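A sketch of the coefficient of determination computed for a fitted topic model, treating expected word counts reconstructed from theta (document-topic) and phi (topic-word) as the model's predictions. Scaling by document length and measuring squared distance to the mean document are assumptions of this sketch; the paper's exact construction may differ.

    import numpy as np

    def topic_model_r2(dtm, theta, phi):
        """R^2 = 1 - SS_res / SS_tot for a topic model.
        dtm: docs x words observed counts; theta: docs x K; phi: K x words."""
        expected = dtm.sum(axis=1, keepdims=True) * (theta @ phi)  # reconstructed counts
        ss_res = ((dtm - expected) ** 2).sum()
        ss_tot = ((dtm - dtm.mean(axis=0)) ** 2).sum()             # distance to mean document
        return 1.0 - ss_res / ss_tot

    # toy check: scoring the true factorization yields a high R^2
    rng = np.random.default_rng(0)
    theta = rng.dirichlet(np.ones(3), size=20)          # 20 docs, 3 topics
    phi = rng.dirichlet(np.ones(50), size=3)            # 3 topics, 50 words
    dtm = np.array([rng.multinomial(500, theta[d] @ phi) for d in range(20)])
    print(topic_model_r2(dtm, theta, phi))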
Prediction Focused Topic Models for Electronic Health Records
Jason Ren, Russell Kunes, Finale Doshi-Velez
Electronic Health Record (EHR) data can be represented as discrete counts over a high dimensional set of possible procedures, diagnoses, and medications. Supervised topic models present an attractive option for incorporating EHR data as features into a prediction problem: given a patient's record, we estimate a set of latent factors that are predictive of the response variable. However, existing methods for supervised topic modeling struggle to balance prediction quality and coherence of the latent factors. We introduce a novel approach, the prediction-focused topic model, that uses the supervisory signal to retain only features that improve, or do not hinder, prediction performance. By removing features with irrelevant signal, the topic model is able to learn task-relevant, interpretable topics. We demonstrate on an EHR dataset and a movie review dataset that, compared to existing approaches, prediction-focused topic models are able to learn much more coherent topics while maintaining competitive predictions.
http://arxiv.org/abs/1911.08551v1
"2019-11-15T23:19:43Z"
cs.LG, stat.ML
2019
Sato: Contextual Semantic Type Detection in Tables
Dan Zhang, Yoshihiko Suhara, Jinfeng Li, Madelon Hulsebos, Çağatay Demiralp, Wang-Chiew Tan
Detecting the semantic types of data columns in relational tables is important for various data preparation and information retrieval tasks such as data cleaning, schema matching, data discovery, and semantic search. However, existing detection approaches either perform poorly with dirty data, support only a limited number of semantic types, fail to incorporate the table context of columns or rely on large sample sizes for training data. We introduce Sato, a hybrid machine learning model to automatically detect the semantic types of columns in tables, exploiting the signals from the context as well as the column values. Sato combines a deep learning model trained on a large-scale table corpus with topic modeling and structured prediction to achieve support-weighted and macro average F1 scores of 0.925 and 0.735, respectively, exceeding the state-of-the-art performance by a significant margin. We extensively analyze the overall and per-type performance of Sato, discussing how individual modeling components, as well as feature categories, contribute to its performance.
http://arxiv.org/abs/1911.06311v3
"2019-11-14T18:51:59Z"
cs.DB, cs.CL, cs.LG
2019
'Warriors of the Word' -- Deciphering Lyrical Topics in Music and Their Connection to Audio Feature Dimensions Based on a Corpus of Over 100,000 Metal Songs
Isabella Czedik-Eysenberg, Oliver Wieczorek, Christoph Reuter
We look into the connection between the musical and lyrical content of metal music by combining automated extraction of high-level audio features and quantitative text analysis on a corpus of 124,288 song lyrics from this genre. Based on this text corpus, a topic model was first constructed using Latent Dirichlet Allocation (LDA). For a subsample of 503 songs, scores for predicting perceived musical hardness/heaviness and darkness/gloominess were extracted using audio feature models. By combining both audio feature and text analysis, we (1) offer a comprehensive overview of the lyrical topics present within the metal genre and (2) establish whether levels of hardness and other music dimensions are associated with the occurrence of particularly harsh (and other) textual topics. Twenty typical topics were identified and projected into a topic space using multidimensional scaling (MDS). After Bonferroni correction, positive correlations were found between musical hardness and darkness and textual topics dealing with 'brutal death', 'dystopia', 'archaisms and occultism', 'religion and satanism', 'battle' and '(psychological) madness', while there are negative associations with topics like 'personal life' and 'love and romance'.
http://arxiv.org/abs/1911.04952v2
"2019-11-12T15:53:40Z"
eess.AS, cs.CL, cs.SD, H.5.5
2019
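A rough sketch of the LDA-plus-MDS projection step from the record above on toy lyrics, using gensim and scikit-learn defaults rather than the study's settings; the Hellinger distance between topic-word distributions is an assumed choice of dissimilarity.

    import numpy as np
    from gensim import corpora, models
    from gensim.matutils import hellinger
    from sklearn.manifold import MDS

    lyrics = [["death", "blood", "war"], ["love", "heart", "night"],
              ["satan", "ritual", "dark"], ["battle", "sword", "war"]]
    dictionary = corpora.Dictionary(lyrics)
    corpus = [dictionary.doc2bow(doc) for doc in lyrics]
    lda = models.LdaModel(corpus, id2word=dictionary, num_topics=3, random_state=0)

    topics = lda.get_topics()                        # num_topics x vocab probabilities
    dist = np.array([[hellinger(a, b) for b in topics] for a in topics])
    coords = MDS(n_components=2, dissimilarity="precomputed",
                 random_state=0).fit_transform(dist)  # topics projected into a 2-D space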
Analysing Russian Trolls via NLP tools
Bokun Kong
The fifty-eighth American presidential election, held in 2016, still arouses fierce controversy at present. Some politicians, as well as media and voters, believe that the Russian government interfered with the election of 2016 by controlling malicious social media accounts on Twitter, such as troll and bot accounts, which broadcast fake news, derail conversations about the election, and mislead people. Therefore, this paper focuses on analysing a Twitter dataset about the election of 2016 using NLP methods, looking for patterns indicating whether the Russian government interfered with the election or not. We apply a topic model to the given Twitter dataset to extract some interesting topics and analyse their meaning; we then implement a supervised topic model to retrieve the relationship between topics and the left-troll or right-troll category, and analyse the pattern. Additionally, we perform sentiment analysis to analyse the attitude of the tweets. After extracting typical tweets from an interesting topic, sentiment analysis offers the ability to know whether a tweet supports the topic or not. Based on comprehensive analysis and evaluation, we find interesting patterns in the dataset as well as some meaningful topics.
http://arxiv.org/abs/1911.11067v1
"2019-11-11T08:30:54Z"
cs.IR, cs.CY
2019
A Latent Topic Model with Markovian Transition for Process Data
Haochen Xu, Guanhua Fang, Zhiliang Ying
We propose a latent topic model with a Markovian transition for process data, which consist of time-stamped events recorded in a log file. Such data are becoming more widely available in computer-based educational assessment with complex problem solving items. The proposed model can be viewed as an extension of the hierarchical Bayesian topic model with a hidden Markov structure to accommodate the underlying evolution of an examinee's latent state. Using topic transition probabilities along with response times enables us to capture examinees' learning trajectories, making clustering/classification more efficient. A forward-backward variational expectation-maximization (FB-VEM) algorithm is developed to tackle the challenging computational problem. Useful theoretical properties are established under certain asymptotic regimes. The proposed method is applied to a complex problem solving item in 2012 Programme for International Student Assessment (PISA 2012).
http://arxiv.org/abs/1911.01583v1
"2019-11-05T02:59:17Z"
stat.ME, math.ST, stat.AP, stat.TH
2019
Statistical Model Aggregation via Parameter Matching
Mikhail Yurochkin, Mayank Agarwal, Soumya Ghosh, Kristjan Greenewald, Trong Nghia Hoang
We consider the problem of aggregating models learned from sequestered, possibly heterogeneous datasets. Exploiting tools from Bayesian nonparametrics, we develop a general meta-modeling framework that learns shared global latent structures by identifying correspondences among local model parameterizations. Our proposed framework is model-independent and is applicable to a wide range of model types. After verifying our approach on simulated data, we demonstrate its utility in aggregating Gaussian topic models, hierarchical Dirichlet process based hidden Markov models, and sparse Gaussian processes with applications spanning text summarization, motion capture analysis, and temperature forecasting.
http://arxiv.org/abs/1911.00218v1
"2019-11-01T06:24:38Z"
stat.ML, cs.LG
2019
L2RS: A Learning-to-Rescore Mechanism for Automatic Speech Recognition
Yuanfeng Song, Di Jiang, Xuefang Zhao, Qian Xu, Raymond Chi-Wing Wong, Lixin Fan, Qiang Yang
Modern Automatic Speech Recognition (ASR) systems primarily rely on scores from an Acoustic Model (AM) and a Language Model (LM) to rescore the N-best lists. With the abundance of recent natural language processing advances, the information utilized by current ASR for evaluating the linguistic and semantic legitimacy of the N-best hypotheses is rather limited. In this paper, we propose a novel Learning-to-Rescore (L2RS) mechanism, which is specialized for utilizing a wide range of textual information from the state-of-the-art NLP models and automatically deciding their weights to rescore the N-best lists for ASR systems. Specifically, we incorporate features including BERT sentence embedding, topic vector, and perplexity scores produced by n-gram LM, topic modeling LM, BERT LM and RNNLM to train a rescoring model. We conduct extensive experiments based on a public dataset, and experimental results show that L2RS outperforms not only traditional rescoring methods but also its deep neural network counterparts by a substantial improvement of 20.67% in terms of NDCG@10. L2RS paves the way for developing more effective rescoring models for ASR.
http://arxiv.org/abs/1910.11496v1
"2019-10-25T02:25:34Z"
cs.CL, cs.SD, eess.AS
2019
Deep topic modeling by multilayer bootstrap network and lasso
Jianyu Wang, Xiao-Lei Zhang
Topic modeling is widely studied for the dimension reduction and analysis of documents. However, it is formulated as a difficult optimization problem, and current approximate solutions also suffer from inaccurate model or data assumptions. To deal with these problems, we propose a polynomial-time deep topic model with no model or data assumptions. Specifically, we first apply the multilayer bootstrap network (MBN), an unsupervised deep model, to reduce the dimension of documents, and then use the low-dimensional data representations or their clustering results as the target of supervised Lasso for topic word discovery. To our knowledge, this is the first time that MBN and Lasso have been applied to unsupervised topic modeling. Experimental comparisons with five representative topic models on the 20-newsgroups and TDT2 corpora illustrate the effectiveness of the proposed algorithm.
http://arxiv.org/abs/1910.10953v1
"2019-10-24T07:35:28Z"
cs.LG, stat.ML
2019
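A sketch of the supervised-Lasso step for topic word discovery described above, with TruncatedSVD plus k-means standing in for MBN (an assumption for illustration; MBN is the authors' own unsupervised deep model). Nonzero Lasso coefficients pick out each cluster's characteristic words.

    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.decomposition import TruncatedSVD
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import Lasso

    docs = ["stars orbit the galaxy core", "galaxy clusters and dark matter",
            "parliament passed the budget", "the budget vote in parliament"]
    vec = TfidfVectorizer()
    X = vec.fit_transform(docs)

    # stand-in for MBN: a low-dimensional representation, then clustering
    Z = TruncatedSVD(n_components=2, random_state=0).fit_transform(X)
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(Z)

    # supervised Lasso: regress each cluster indicator on the term features
    for k in range(2):
        lasso = Lasso(alpha=0.01).fit(X.toarray(), (labels == k).astype(float))
        top = np.argsort(np.abs(lasso.coef_))[::-1][:3]
        print(f"cluster {k} topic words:", [vec.get_feature_names_out()[i] for i in top])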
A bibliometric study of research topics, collaboration and centrality in the Iterated Prisoner's Dilemma
Nikoleta E. Glynatsi, Vincent A. Knight
This manuscript explores the research topics and collaborative behaviour of authors in the field of the Prisoner's Dilemma, using topic modeling and a graph-theoretic analysis of the co-authorship network. The analysis identified five research topics in the Prisoner's Dilemma which have remained relevant over the course of time: human subject research, biological studies, strategies, evolutionary dynamics on networks, and modeling problems as a Prisoner's Dilemma game. Moreover, the results demonstrated that the Prisoner's Dilemma is a field of continued interest, and that it is a collaborative field compared to other game-theoretic fields. The co-authorship network suggests that authors are focused on their communities and that few connections are made across communities. The most central authors of the network are those connected to the main cluster. By examining the networks of topics, we found that the main cluster is characterised by the collaboration of authors within a single topic. These findings add a bibliometric study of another field and present new questions and avenues of research to understand the reasons for the measured behaviours.
http://arxiv.org/abs/1911.06128v3
"2019-10-16T15:03:58Z"
physics.soc-ph
2019
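The graph-theoretic part of such an analysis can be sketched with networkx on toy co-authorship data; the author labels here are placeholders, not taken from the study.

    import networkx as nx
    from networkx.algorithms.community import greedy_modularity_communities

    # toy co-authorship network: one edge per collaborating author pair
    G = nx.Graph()
    G.add_edges_from([("A", "B"), ("B", "C"), ("C", "A"),   # community 1
                      ("D", "E"), ("E", "F"),               # community 2
                      ("C", "D")])                          # lone bridge across communities

    print(list(greedy_modularity_communities(G)))           # detected communities
    centrality = nx.betweenness_centrality(G)
    print(max(centrality, key=centrality.get))              # most central author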
NLPExplorer: Exploring the Universe of NLP Papers
Monarch Parmar, Naman Jain, Pranjali Jain, P Jayakrishna Sahit, Soham Pachpande, Shruti Singh, Mayank Singh
Understanding the current research trends, problems, and their innovative solutions remains a bottleneck due to the ever-increasing volume of scientific articles. In this paper, we propose NLPExplorer, a completely automatic portal for indexing, searching, and visualizing Natural Language Processing (NLP) research volume. NLPExplorer presents interesting insights from papers, authors, venues, and topics. In contrast to previous topic-modelling based approaches, we manually curate five coarse-grained non-exclusive topical categories, namely Linguistic Target (Syntax, Discourse, etc.), Tasks (Tagging, Summarization, etc.), Approaches (unsupervised, supervised, etc.), Languages (English, Chinese, etc.) and Dataset types (news, clinical notes, etc.). Some of the novel features include a list of young popular authors, popular URLs and datasets, a list of topically diverse papers, and recent popular papers. It also provides temporal statistics such as the year-wise popularity of topics, datasets, and seminal papers. To facilitate future research and system development, we make all the processed datasets accessible through API calls. The current system is available at http://lingo.iitgn.ac.in:5001/
http://arxiv.org/abs/1910.07351v2
"2019-10-16T13:57:15Z"
cs.IR, cs.DL
2,019
HiGitClass: Keyword-Driven Hierarchical Classification of GitHub Repositories
Yu Zhang, Frank F. Xu, Sha Li, Yu Meng, Xuan Wang, Qi Li, Jiawei Han
GitHub has become an important platform for code sharing and scientific exchange. With the massive number of repositories available, there is a pressing need for topic-based search. Even though the topic label functionality has been introduced, the majority of GitHub repositories do not have any labels, impeding the utility of search and topic-based analysis. This work frames automatic repository classification as keyword-driven hierarchical classification, in which users only need to provide a label hierarchy with keywords as supervision. This setting is flexible, adapts to the users' needs, accounts for the different granularity of topic labels, and requires minimal human effort. We identify three key challenges of this problem, namely (1) the presence of multi-modal signals; (2) supervision scarcity and bias; and (3) supervision format mismatch. In recognition of these challenges, we propose the HiGitClass framework, comprising three modules: heterogeneous information network embedding, keyword enrichment, and topic modeling and pseudo-document generation. Experimental results on two GitHub repository collections confirm that HiGitClass is superior to existing weakly-supervised and dataless hierarchical classification methods, especially in its ability to integrate both structured and unstructured data for repository classification.
http://arxiv.org/abs/1910.07115v2
"2019-10-16T01:05:26Z"
cs.LG, cs.CL, cs.SE, stat.ML
2,019
Prediction Focused Topic Models via Feature Selection
Jason Ren, Russell Kunes, Finale Doshi-Velez
Supervised topic models are often sought to balance prediction quality and interpretability. However, when models are (inevitably) misspecified, standard approaches rarely deliver on both. We introduce a novel approach, the prediction-focused topic model, that uses the supervisory signal to retain only vocabulary terms that improve, or at least do not hinder, prediction performance. By removing terms with irrelevant signal, the topic model is able to learn task-relevant, coherent topics. We demonstrate on several data sets that compared to existing approaches, prediction-focused topic models learn much more coherent topics while maintaining competitive predictions.
http://arxiv.org/abs/1910.05495v2
"2019-10-12T05:08:43Z"
cs.LG, cs.CL, stat.ML
2,019
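One way to picture the vocabulary filtering idea above, under our assumption (not the paper's) that predictive signal is scored with mutual information: rank each term by how informative it is for the supervisory label, drop the uninformative terms, then fit the topic model on the reduced vocabulary.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_selection import mutual_info_classif
from sklearn.decomposition import LatentDirichletAllocation
import numpy as np

docs = ["great plot and fine acting", "boring plot and weak acting",
        "the soundtrack was great", "terrible pacing and boring scenes"]
labels = [1, 0, 1, 0]                        # the supervisory signal

vec = CountVectorizer()
X = vec.fit_transform(docs)
mi = mutual_info_classif(X, labels, discrete_features=True)
keep = np.argsort(mi)[X.shape[1] // 2:]      # drop the least predictive half
print("retained:", np.array(vec.get_feature_names_out())[keep])

# Fit the topic model only on the prediction-relevant vocabulary.
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X[:, keep])
```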
Learning beyond Predefined Label Space via Bayesian Nonparametric Topic Modelling
Changying Du, Fuzhen Zhuang, Jia He, Qing He, Guoping Long
In real-world machine learning applications, testing data may contain some meaningful new categories that have not been seen in the labeled training data. To simultaneously recognize new data categories and assign the most appropriate category labels to data actually from known categories, existing models assume that the number of unknown new categories is pre-specified, though it is difficult to determine in advance. In this paper, we propose a Bayesian nonparametric topic model to automatically infer this number, based on the hierarchical Dirichlet process and the notion of latent Dirichlet allocation. Exact inference in our model is intractable, so we provide an efficient collapsed Gibbs sampling algorithm for approximate posterior inference. Extensive experiments on various text datasets show that: (a) compared with parametric approaches that use the pre-specified true number of new categories, the proposed nonparametric approach yields comparable performance; and (b) when the exact number of new categories is unavailable, i.e., when the parametric approaches have only a rough idea about the new categories, our approach has evident performance advantages.
http://arxiv.org/abs/1910.04420v1
"2019-10-10T08:15:08Z"
cs.LG, stat.ML
2,019
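The core idea, inferring the number of topics rather than fixing it in advance, can be tried with gensim's off-the-shelf HDP implementation. This is a generic HDP demo on toy data, not the authors' collapsed Gibbs sampler.

```python
from gensim.corpora import Dictionary
from gensim.models import HdpModel

texts = [["cell", "gene", "protein"], ["stocks", "market", "trade"],
         ["gene", "expression", "cell"], ["market", "rally", "stocks"]]
dictionary = Dictionary(texts)
corpus = [dictionary.doc2bow(t) for t in texts]

hdp = HdpModel(corpus, id2word=dictionary, random_state=1)
# The number of topics carrying real probability mass is inferred, not fixed.
for topic_id, words in hdp.show_topics(num_topics=5, formatted=False):
    print(topic_id, [w for w, _ in words[:3]])
```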
BioNLP-OST 2019 RDoC Tasks: Multi-grain Neural Relevance Ranking Using Topics and Attention Based Query-Document-Sentence Interactions
Yatin Chaudhary, Pankaj Gupta, Hinrich Schütze
This paper presents our system details and results of participation in the RDoC Tasks of BioNLP-OST 2019. The Research Domain Criteria (RDoC) construct is a multi-dimensional and broad framework to describe mental health disorders by combining knowledge from genomics to behaviour. The non-availability of an RDoC-labelled dataset and the tedious labelling process hinder the RDoC framework from reaching its full potential in the biomedical research community and healthcare industry. Therefore, Task-1 aims at retrieval and ranking of PubMed abstracts relevant to a given RDoC construct, and Task-2 aims at extraction of the most relevant sentence from a given PubMed abstract. We investigate (1) an attention-based supervised neural topic model and SVM for retrieval and ranking of PubMed abstracts, further utilizing BM25 and other relevance measures for re-ranking, and (2) supervised and unsupervised sentence ranking models utilizing multi-view representations comprising query-aware attention-based sentence representations (QAR), bag-of-words (BoW) and TF-IDF. Our best systems achieved 1st rank and scored 0.86 mean average precision (mAP) and 0.58 macro average accuracy (MAA) in Task-1 and Task-2 respectively.
http://arxiv.org/abs/1910.00314v2
"2019-10-01T11:47:36Z"
cs.LG, cs.CL, cs.IR, stat.ML
2,019
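The BM25 ranking component mentioned above can be sketched with the rank_bm25 package; the query and abstracts below are invented stand-ins for an RDoC construct and PubMed abstracts.

```python
from rank_bm25 import BM25Okapi

abstracts = ["reward processing deficits observed in depression",
             "jet fragmentation measured at the large hadron collider",
             "neural circuits of reward and motivation in mice"]
tokenized = [a.split() for a in abstracts]
bm25 = BM25Okapi(tokenized)

query = "reward processing".split()      # stand-in for an RDoC construct
scores = bm25.get_scores(query)
best = max(range(len(abstracts)), key=scores.__getitem__)
print(abstracts[best])                   # the top-ranked abstract
```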
Lifelong Neural Topic Learning in Contextualized Autoregressive Topic Models of Language via Informative Transfers
Yatin Chaudhary, Pankaj Gupta, Thomas Runkler
Topic models such as LDA, DocNADE and iDocNADEe have been popular in document analysis. However, traditional topic models have several limitations, including: (1) the bag-of-words (BoW) assumption, under which they ignore word ordering; (2) data sparsity, which makes applying topic models challenging due to limited word co-occurrences, leading to incoherent topics; and (3) the lack of a continual learning framework for lifelong topic learning that exploits historical knowledge (or latent topics) and minimizes catastrophic forgetting. This thesis focuses on addressing the above challenges within a neural topic modeling framework. We propose: (1) a contextualized topic model that combines a topic model and a language model and introduces linguistic structures (such as word ordering, syntactic and semantic features, etc.) into topic modeling, and (2) a novel lifelong learning mechanism for the neural topic modeling framework that demonstrates continual learning over sequential document collections while minimizing catastrophic forgetting. Additionally, we perform selective data augmentation to alleviate the need for complete historical corpora during data hallucination or replay.
http://arxiv.org/abs/1909.13315v1
"2019-09-29T16:43:30Z"
cs.IR, cs.CL, cs.LG
2,019
PaRe: A Paper-Reviewer Matching Approach Using a Common Topic Space
Omer Anjum, Hongyu Gong, Suma Bhat, Wen-Mei Hwu, Jinjun Xiong
Finding the right reviewers to assess the quality of conference submissions is a time-consuming process for conference organizers. Given the importance of this step, various automated reviewer-paper matching solutions have been proposed to alleviate the burden. Prior approaches, including bag-of-words models and probabilistic topic models, have been inadequate to deal with the vocabulary mismatch and partial topic overlap between a paper submission and the reviewer's expertise. Our approach, the common topic model, jointly models the topics common to the submission and the reviewer's profile while relying on abstract topic vectors. Experiments and insightful evaluations on two datasets demonstrate that the proposed method achieves consistent improvements compared to available state-of-the-art implementations of paper-reviewer matching.
http://arxiv.org/abs/1909.11258v1
"2019-09-25T02:25:23Z"
cs.CL, cs.IR, cs.LG
2,019
Diachronic Topics in New High German Poetry
Thomas N. Haider
Statistical topic models are increasingly and popularly used by Digital Humanities scholars to perform distant reading tasks on literary data. They allow us to estimate what people write about. Latent Dirichlet Allocation (LDA) in particular has shown its usefulness, as it is unsupervised, robust, easy to use, scalable, and offers interpretable results. In a preliminary study, we apply LDA to a corpus of New High German poetry (textgrid, with 51k poems, 8M tokens), and use the distribution of topics over documents for the classification of poems into time periods and for authorship attribution.
http://arxiv.org/abs/1909.11189v1
"2019-09-24T21:19:01Z"
cs.CL, cs.IR
2,019
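A minimal sketch of the study's setup, on invented toy data: fit LDA, then feed each poem's document-topic distribution to a classifier for period prediction.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.linear_model import LogisticRegression

poems = ["mond nacht stille wald", "fabrik stadt rauch arbeit",
         "nacht wald traum mond", "stadt arbeit maschine rauch"]
periods = ["romantik", "moderne", "romantik", "moderne"]   # invented labels

X = CountVectorizer().fit_transform(poems)
theta = LatentDirichletAllocation(n_components=2, random_state=0).fit_transform(X)
clf = LogisticRegression().fit(theta, periods)   # topic mixtures as features
print(clf.predict(theta[:1]))                    # predicted period
```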
Natural Language Processing via LDA Topic Model in Recommendation Systems
Hamed Jelodar, Yongli Wang, Mahdi Rabbani, SeyedValyAllah Ayobi
Today, the Internet is one of the most widely available media worldwide. Recommendation systems are increasingly being used in various applications such as movie recommendation, mobile recommendation, and article recommendation. Collaborative Filtering (CF) and Content-Based (CB) filtering are well-known techniques for building recommendation systems. Topic modeling based on LDA is a powerful technique for semantic mining and topic extraction. In the past few years, many articles have been published that build recommendation systems on the LDA technique. In this paper, we present a taxonomy of recommendation systems and applications based on LDA. In addition, we utilize LDA and the Gibbs sampling algorithm to evaluate ISWC and WWW conference publications in computer science. Our study suggests that recommendation systems based on LDA could be effective in building smart recommendation systems for online communities.
http://arxiv.org/abs/1909.09551v1
"2019-09-20T15:08:51Z"
cs.IR, cs.CL
2,019
Corporate IT-support Help-Desk Process Hybrid-Automation Solution with Machine Learning Approach
Kuruparan Shanmugalingam, Nisal Chandrasekara, Calvin Hindle, Gihan Fernando, Chanaka Gunawardhana
Comprehensive IT support teams in large-scale organizations require substantial manpower to handle employee engagement and requests from different channels on a 24/7 basis. We propose an automated help desk for technical email queries that provides instant, real-time solutions and email categorisation. Various machine learning and deep learning approaches to email topic modelling are compared with different features, alongside sure-shot static rules, to obtain a scalable, generalised solution. An email's title, body, attachments, OCR text, and some custom feature-engineered attributes are given as input elements. Cascaded hierarchical XGBoost models and a Bi-LSTM model with word embeddings perform well, achieving 77.3% overall accuracy on a real-world corporate email dataset. By introducing thresholding techniques, the overall automation system architecture reaches 85.6% accuracy on real-world corporate emails. The combination of quick fixes, static rules, and ML categorisation as a low-cost inference solution reduces human effort in the automated, real-time process by 81%.
http://arxiv.org/abs/1909.09018v1
"2019-09-18T10:07:01Z"
cs.LG, cs.AI, cs.CL, cs.CV
2,019
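The thresholding step above can be pictured as a simple confidence gate on the classifier's output; the threshold value and class probabilities below are illustrative, not taken from the paper.

```python
import numpy as np

def route(probabilities, threshold=0.8):
    """Return (category_index, automated?) for one email's class probabilities."""
    best = int(np.argmax(probabilities))
    if probabilities[best] >= threshold:
        return best, True      # confident: answer automatically
    return best, False         # uncertain: escalate to a human agent

print(route(np.array([0.05, 0.90, 0.05])))  # (1, True)
print(route(np.array([0.40, 0.35, 0.25])))  # (0, False)
```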
Two Computational Models for Analyzing Political Attention in Social Media
Libby Hemphill, Angela M. Schöpke-Gonzalez
Understanding how political attention is divided and over what subjects is crucial for research on areas such as agenda setting, framing, and political rhetoric. Existing methods for measuring attention, such as manual labeling according to established codebooks, are expensive and can be restrictive. We describe two computational models that automatically distinguish topics in politicians' social media content. Our models---one supervised classifier and one unsupervised topic model---provide different benefits. The supervised classifier reduces the labor required to classify content according to a pre-determined topic list. However, tweets do more than communicate policy positions. Our unsupervised model uncovers both political topics and other Twitter uses (e.g., constituent service). These models are effective, inexpensive computational tools for political communication and social media research. We demonstrate their utility and discuss the different analyses they afford by applying both models to the tweets posted by members of the 115th U.S. Congress.
http://arxiv.org/abs/1909.08189v1
"2019-09-17T15:00:14Z"
cs.SI, cs.LG, stat.ML
2,019
Multi Sense Embeddings from Topic Models
Shobhit Jain, Sravan Babu Bodapati, Ramesh Nallapati, Anima Anandkumar
Distributed word embeddings have yielded state-of-the-art performance in many NLP tasks, mainly due to their success in capturing useful semantic information. These representations assign only a single vector to each word, whereas a large number of words are polysemous (i.e., have multiple meanings). In this work, we approach this critical problem in lexical semantics, namely that of representing the various senses of polysemous words in vector spaces. We propose a topic modeling based skip-gram approach for learning multi-prototype word embeddings. We also introduce a method to prune the embeddings determined by the probabilistic representation of the word in each topic. We show that our embeddings strongly capture context and word similarity, outperforming various state-of-the-art implementations.
http://arxiv.org/abs/1909.07746v2
"2019-09-17T12:23:33Z"
cs.LG, cs.CL, cs.IR
2,019
Automatic Detection and Classification of Cognitive Distortions in Mental Health Text
Benjamin Shickel, Scott Siegel, Martin Heesacker, Sherry Benton, Parisa Rashidi
In cognitive psychology, automatic and self-reinforcing irrational thought patterns are known as cognitive distortions. Left unchecked, patients exhibiting these types of thoughts can become stuck in negative feedback loops of unhealthy thinking, leading to inaccurate perceptions of reality commonly associated with anxiety and depression. In this paper, we present a machine learning framework for the automatic detection and classification of 15 common cognitive distortions in two novel mental health free text datasets collected from both crowdsourcing and a real-world online therapy program. When differentiating between distorted and non-distorted passages, our model achieved a weighted F1 score of 0.88. For classifying distorted passages into one of 15 distortion categories, our model yielded weighted F1 scores of 0.68 in the larger crowdsourced dataset and 0.45 in the smaller online counseling dataset, both of which outperformed random baseline metrics by a large margin. For both tasks, we also identified the most discriminative words and phrases between classes to highlight common thematic elements for improving targeted and therapist-guided mental health treatment. Furthermore, we performed an exploratory analysis using unsupervised content-based clustering and topic modeling algorithms as first efforts towards a data-driven perspective on the thematic relationship between similar cognitive distortions traditionally deemed unique. Finally, we highlight the difficulties in applying mental health-based machine learning in a real-world setting and comment on the implications and benefits of our framework for improving automated delivery of therapeutic treatment in conjunction with traditional cognitive-behavioral therapy.
http://arxiv.org/abs/1909.07502v2
"2019-09-16T22:21:27Z"
cs.HC, cs.CL, cs.LG, stat.ML
2,019
Cross-domain recommender system using Generalized Canonical Correlation Analysis
Seyed Mohammad Hashemi, Mohammad Rahmati
Recommender systems provide personalized recommendations to users from a large number of possible options in online stores. Matrix factorization is a well-known and accurate collaborative filtering approach for recommender systems, but it suffers from the cold-start problem for new users and items. When a new user joins the system, there are not enough interactions, and therefore not enough ratings in the user-item matrix, to learn the matrix factorization model. Using auxiliary data, such as user demographics, ratings, and reviews in relevant domains, is an effective way to alleviate the new-user problem. In this paper, we use data about users from other domains and build a common space to represent the latent factors of users from different domains. For this representation we propose an iterative method that applies MAX-VAR generalized canonical correlation analysis (GCCA) to user latent factors learned by matrix factorization in each domain. To improve GCCA's ability to learn latent factors for new users, we also propose generalized canonical correlation analysis by inverse sum of selection matrices (GCCA-ISSM), which provides better recommendations in cold-start scenarios. The proposed approach is extended using content-based features from topic modeling extracted from users' reviews. We demonstrate the accuracy and effectiveness of the proposed approaches on cross-domain rating prediction through comprehensive experiments on the Amazon and MovieLens datasets.
http://arxiv.org/abs/1909.12746v1
"2019-09-15T22:27:53Z"
cs.IR
2,019
Multi-view and Multi-source Transfers in Neural Topic Modeling with Pretrained Topic and Word Embeddings
Pankaj Gupta, Yatin Chaudhary, Hinrich Schütze
Though word embeddings and topics are complementary representations, several past works have only used pre-trained word embeddings in (neural) topic modeling to address the data sparsity problem in short texts or small collections of documents. However, no prior work has employed (pre-trained latent) topics in a transfer learning paradigm. In this paper, we propose an approach to (1) perform knowledge transfer using latent topics obtained from a large source corpus, and (2) jointly transfer knowledge via the two representations (or views) in neural topic modeling to improve topic quality and better deal with polysemy and data sparsity issues in a target corpus. In doing so, we first accumulate topics and word representations from one or many source corpora to build a pool of topics and word vectors. Then, we identify one or multiple relevant source domain(s) and take advantage of the corresponding topics and word features via the respective pools to guide meaningful learning in the sparse target domain. We quantify the quality of topic and document representations via generalization (perplexity), interpretability (topic coherence) and information retrieval (IR) using short-text, long-text, small and large document collections from the news and medical domains. We demonstrate state-of-the-art results on topic modeling with the proposed framework.
http://arxiv.org/abs/1909.06563v2
"2019-09-14T09:16:05Z"
cs.CL, cs.IR, cs.LG
2,019
Mixed Hamiltonian Monte Carlo for Mixed Discrete and Continuous Variables
Guangyao Zhou
Hamiltonian Monte Carlo (HMC) has emerged as a powerful Markov Chain Monte Carlo (MCMC) method to sample from complex continuous distributions. However, a fundamental limitation of HMC is that it cannot be applied to distributions with mixed discrete and continuous variables. In this paper, we propose mixed HMC (M-HMC) as a general framework to address this limitation. M-HMC is a novel family of MCMC algorithms that evolves the discrete and continuous variables in tandem, allowing more frequent updates of discrete variables while maintaining HMC's ability to suppress random-walk behavior. We establish M-HMC's theoretical properties, and present an efficient implementation with Laplace momentum that introduces minimal overhead compared to existing HMC methods. The superior performance of M-HMC over existing methods is demonstrated with numerical experiments on Gaussian mixture models (GMMs), variable selection in Bayesian logistic regression (BLR), and correlated topic models (CTMs).
http://arxiv.org/abs/1909.04852v6
"2019-09-11T05:03:08Z"
stat.CO
2,019
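For background, here is a compact sketch of the standard HMC proposal (leapfrog integration plus a Metropolis correction) that M-HMC generalizes to mixed spaces; the target is a toy standard normal, not any model from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def grad_log_p(x):
    return -x                        # gradient of log N(0, 1), up to a constant

def hmc_step(x, step=0.1, n_leapfrog=20):
    p = rng.standard_normal()        # resample the auxiliary momentum
    x_new, p_new = x, p
    p_new += 0.5 * step * grad_log_p(x_new)      # leapfrog: half momentum step
    for _ in range(n_leapfrog):
        x_new += step * p_new                    # full position step
        p_new += step * grad_log_p(x_new)        # full momentum step
    p_new -= 0.5 * step * grad_log_p(x_new)      # correct the extra half step
    # Metropolis correction with H(x, p) = x^2/2 + p^2/2 for this target
    h_old, h_new = 0.5 * x**2 + 0.5 * p**2, 0.5 * x_new**2 + 0.5 * p_new**2
    return x_new if np.log(rng.uniform()) < h_old - h_new else x

x, draws = 0.0, []
for _ in range(2000):
    x = hmc_step(x)
    draws.append(x)
print(np.mean(draws), np.std(draws))             # roughly 0 and 1
```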
Neural Embedding Allocation: Distributed Representations of Topic Models
Kamrun Naher Keya, Yannis Papanikolaou, James R. Foulds
Word embedding models such as the skip-gram learn vector representations of words' semantic relationships, and document embedding models learn similar representations for documents. On the other hand, topic models provide latent representations of the documents' topical themes. To get the benefits of these representations simultaneously, we propose a unifying algorithm, called neural embedding allocation (NEA), which deconstructs topic models into interpretable vector-space embeddings of words, topics, documents, authors, and so on, by learning neural embeddings to mimic the topic models. We showcase NEA's effectiveness and generality on LDA, author-topic models and the recently proposed mixed membership skip gram topic model and achieve better performance with the embeddings compared to several state-of-the-art models. Furthermore, we demonstrate that using NEA to smooth out the topics improves coherence scores over the original topic models when the number of topics is large.
http://arxiv.org/abs/1909.04702v1
"2019-09-10T18:39:26Z"
cs.CL, cs.IR, cs.LG
2,019
Poster Abstract: A Dynamic Data-Driven Prediction Model for Sparse Collaborative Sensing Applications
Daniel Zhang, Yang Zhang, Dong Wang
A fundamental problem in collaborative sensing lies in providing an accurate prediction of critical events (e.g., hazardous environmental condition, urban abnormalities, economic trends). However, due to the resource constraints, collaborative sensing applications normally only collect measurements from a subset of physical locations and predict the measurements for the rest of locations. This problem is referred to as sparse collaborative sensing prediction. In this poster, we present a novel closed-loop prediction model by leveraging topic modeling and online learning techniques. We evaluate our scheme using a real-world collaborative sensing dataset. The initial results show that our proposed scheme outperforms the state-of-the-art baselines.
http://arxiv.org/abs/1909.04111v1
"2019-09-09T19:12:10Z"
eess.SP
2,019
Evaluating Topic Quality with Posterior Variability
Linzi Xing, Michael J. Paul, Giuseppe Carenini
Probabilistic topic models such as latent Dirichlet allocation (LDA) are popularly used with Bayesian inference methods such as Gibbs sampling to learn posterior distributions over topic model parameters. We derive a novel measure of LDA topic quality using the variability of the posterior distributions. Compared to several existing baselines for automatic topic evaluation, the proposed metric achieves state-of-the-art correlations with human judgments of topic quality in experiments on three corpora. We additionally demonstrate that topic quality estimation can be further improved using a supervised estimator that combines multiple metrics.
http://arxiv.org/abs/1909.03524v2
"2019-09-08T18:25:48Z"
cs.CL
2,019
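The underlying intuition can be illustrated crudely: given several posterior samples of a topic's word distribution, summarize how much it fluctuates. This toy computation is our simplification, not the paper's derived metric.

```python
import numpy as np

# Hypothetical Gibbs samples: 5 draws of one topic over a 4-word vocabulary.
samples = np.array([[0.70, 0.20, 0.05, 0.05],
                    [0.68, 0.22, 0.06, 0.04],
                    [0.72, 0.18, 0.05, 0.05],
                    [0.69, 0.21, 0.05, 0.05],
                    [0.71, 0.19, 0.06, 0.04]])

variability = samples.std(axis=0).mean()        # mean per-word posterior std
print(f"topic variability: {variability:.4f}")  # low -> stable topic
```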
Improved Hierarchical Patient Classification with Language Model Pretraining over Clinical Notes
Jonas Kemp, Alvin Rajkomar, Andrew M. Dai
Clinical notes in electronic health records contain highly heterogeneous writing styles, including non-standard terminology or abbreviations. Using these notes in predictive modeling has traditionally required preprocessing (e.g. taking frequent terms or topic modeling) that removes much of the richness of the source data. We propose a pretrained hierarchical recurrent neural network model that parses minimally processed clinical notes in an intuitive fashion, and show that it improves performance for discharge diagnosis classification tasks on the Medical Information Mart for Intensive Care III (MIMIC-III) dataset, compared to models that treat the notes as an unordered collection of terms or that conduct no pretraining. We also apply an attribution technique to examples to identify the words that the model uses to make its prediction, and show the importance of the words' nearby context.
http://arxiv.org/abs/1909.03039v3
"2019-09-06T17:49:56Z"
cs.LG, cs.CL, stat.ML
2,019
Identifying Editor Roles in Argumentative Writing from Student Revision Histories
Tazin Afrin, Diane Litman
We present a method for identifying editor roles from students' revision behaviors during argumentative writing. We first develop a method for applying a topic modeling algorithm to identify a set of editor roles from a vocabulary capturing three aspects of student revision behaviors: operation, purpose, and position. We validate the identified roles by showing that modeling the editor roles that students take when revising a paper not only accounts for the variance in revision purposes in our data, but also relates to writing improvement.
http://arxiv.org/abs/1909.05308v1
"2019-09-03T20:47:32Z"
cs.CL
2,019
Discriminative Topic Modeling with Logistic LDA
Iryna Korshunova, Hanchen Xiong, Mateusz Fedoryszak, Lucas Theis
Despite many years of research into latent Dirichlet allocation (LDA), applying LDA to collections of non-categorical items is still challenging. Yet many problems with much richer data share a similar structure and could benefit from the vast literature on LDA. We propose logistic LDA, a novel discriminative variant of latent Dirichlet allocation which is easy to apply to arbitrary inputs. In particular, our model can easily be applied to groups of images, arbitrary text embeddings, and integrates well with deep neural networks. Although it is a discriminative model, we show that logistic LDA can learn from unlabeled data in an unsupervised manner by exploiting the group structure present in the data. In contrast to other recent topic models designed to handle arbitrary inputs, our model does not sacrifice the interpretability and principled motivation of LDA.
http://arxiv.org/abs/1909.01436v2
"2019-09-03T20:25:49Z"
stat.ML, cs.IR, cs.LG
2,019
Greedy clustering of count data through a mixture of multinomial PCA
Nicolas Jouvin, Pierre Latouche, Charles Bouveyron, Guillaume Bataillon, Alain Livartowski
Count data is becoming more and more ubiquitous in a wide range of applications, with datasets growing both in size and in dimension. In this context, an increasing amount of work is dedicated to the construction of statistical models directly accounting for the discrete nature of the data. Moreover, it has been shown that integrating dimension reduction to clustering can drastically improve performance and stability. In this paper, we rely on the mixture of multinomial PCA, a mixture model for the clustering of count data, also known as the probabilistic clustering-projection model in the literature. Related to the latent Dirichlet allocation model, it offers the flexibility of topic modeling while being able to assign each observation to a unique cluster. We introduce a greedy clustering algorithm, where inference and clustering are jointly done by mixing a classification variational expectation maximization algorithm, with a branch & bound like strategy on a variational lower bound. An integrated classification likelihood criterion is derived for model selection, and a thorough study with numerical experiments is proposed to assess both the performance and robustness of the method. Finally, we illustrate the qualitative interest of the latter in a real-world application, for the clustering of anatomopathological medical reports, in partnership with expert practitioners from the Institut Curie hospital.
http://arxiv.org/abs/1909.00721v3
"2019-09-02T13:56:09Z"
stat.ME
2,019
Leveraging Just a Few Keywords for Fine-Grained Aspect Detection Through Weakly Supervised Co-Training
Giannis Karamanolakis, Daniel Hsu, Luis Gravano
User-generated reviews can be decomposed into fine-grained segments (e.g., sentences, clauses), each evaluating a different aspect of the principal entity (e.g., price, quality, appearance). Automatically detecting these aspects can be useful for both users and downstream opinion mining applications. Current supervised approaches for learning aspect classifiers require many fine-grained aspect labels, which are labor-intensive to obtain. And, unfortunately, unsupervised topic models often fail to capture the aspects of interest. In this work, we consider weakly supervised approaches for training aspect classifiers that only require the user to provide a small set of seed words (i.e., weakly positive indicators) for the aspects of interest. First, we show that current weakly supervised approaches do not effectively leverage the predictive power of seed words for aspect detection. Next, we propose a student-teacher approach that effectively leverages seed words in a bag-of-words classifier (teacher); in turn, we use the teacher to train a second model (student) that is potentially more powerful (e.g., a neural network that uses pre-trained word embeddings). Finally, we show that iterative co-training can be used to cope with noisy seed words, leading to both improved teacher and student models. Our proposed approach consistently outperforms previous weakly supervised approaches (by 14.1 absolute F1 points on average) in six different domains of product reviews and six multilingual datasets of restaurant reviews.
http://arxiv.org/abs/1909.00415v1
"2019-09-01T15:12:23Z"
cs.LG, cs.CL, stat.ML
2,019
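A condensed sketch of the teacher-student idea above, on invented review segments: a seed-word matcher (teacher) produces pseudo-labels, on which a more expressive model (student) is trained; the iterative co-training step is omitted.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

seeds = {"price": {"cheap", "expensive", "cost"},
         "quality": {"durable", "broke", "sturdy"}}
segments = ["really cheap for what you get", "it broke after a week",
            "sturdy and durable build", "the cost was too high"]

def teacher(segment):
    """Label a segment with the aspect whose seed words occur most often."""
    hits = {a: sum(w in s for w in segment.split()) for a, s in seeds.items()}
    return max(hits, key=hits.get)

pseudo_labels = [teacher(s) for s in segments]

# Student: can generalize beyond the seed words via all TF-IDF features.
vec = TfidfVectorizer()
student = LogisticRegression().fit(vec.fit_transform(segments), pseudo_labels)
print(student.predict(vec.transform(["the build is sturdy"])))
```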
Online influence, offline violence: Language Use on YouTube surrounding the 'Unite the Right' rally
Isabelle van der Vegt, Maximilian Mozes, Paul Gill, Bennett Kleinberg
The media frequently describes the 2017 Charlottesville 'Unite the Right' rally as a turning point for the alt-right and white supremacist movements. Social movement theory suggests that the media attention and public discourse concerning the rally may have influenced the alt-right, but this has yet to be empirically tested. The current study investigates whether there are differences in language use between 7,142 alt-right and progressive YouTube channels, in addition to measuring possible changes as a result of the rally. To do so, we create structural topic models and measure bigram proportions in video transcripts, spanning eight weeks before to eight weeks after the rally. We observe differences in topics between the two groups, with the 'alternative influencers' for example discussing topics related to race and free speech to an increasing and larger extent than progressive channels. We also observe structural breakpoints in the use of bigrams at the time of the rally, suggesting there are changes in language use within the two groups as a result of the rally. While most changes relate to mentions of the rally itself, the alternative group also shows an increase in promotion of their YouTube channels. Results are discussed in light of social movement theory, followed by a discussion of potential implications for understanding the alt-right and their language use on YouTube.
http://arxiv.org/abs/1908.11599v2
"2019-08-30T08:58:11Z"
cs.CL
2,019
Learning document embeddings along with their uncertainties
Santosh Kesiraju, Oldřich Plchot, Lukáš Burget, Suryakanth V Gangashetty
The majority of text modelling techniques yield only point estimates of document embeddings and fail to capture the uncertainty of those estimates. These uncertainties give a notion of how well the embeddings represent a document. We present the Bayesian subspace multinomial model (Bayesian SMM), a generative log-linear model that learns to represent documents in the form of Gaussian distributions, thereby encoding the uncertainty in its covariance. Additionally, in the proposed Bayesian SMM, we address a commonly encountered problem of intractability that appears during variational inference in mixed-logit models. We also present a generative Gaussian linear classifier for topic identification that exploits the uncertainty in document embeddings. Our intrinsic evaluation using the perplexity measure shows that the proposed Bayesian SMM fits the data better than the state-of-the-art neural variational document model on Fisher speech and 20Newsgroups text corpora. Our topic identification experiments show that the proposed systems are robust to over-fitting on unseen test data, and that the proposed model outperforms state-of-the-art unsupervised topic models and achieves results comparable to state-of-the-art fully supervised discriminative models.
http://arxiv.org/abs/1908.07599v3
"2019-08-20T20:31:51Z"
cs.CL, cs.LG
2,019
Discriminative Topic Mining via Category-Name Guided Text Embedding
Yu Meng, Jiaxin Huang, Guangyuan Wang, Zihan Wang, Chao Zhang, Yu Zhang, Jiawei Han
Mining a set of meaningful and distinctive topics automatically from massive text corpora has broad applications. Existing topic models, however, typically work in a purely unsupervised way, which often generate topics that do not fit users' particular needs and yield suboptimal performance on downstream tasks. We propose a new task, discriminative topic mining, which leverages a set of user-provided category names to mine discriminative topics from text corpora. This new task not only helps a user understand clearly and distinctively the topics he/she is most interested in, but also benefits directly keyword-driven classification tasks. We develop CatE, a novel category-name guided text embedding method for discriminative topic mining, which effectively leverages minimal user guidance to learn a discriminative embedding space and discover category representative terms in an iterative manner. We conduct a comprehensive set of experiments to show that CatE mines high-quality set of topics guided by category names only, and benefits a variety of downstream applications including weakly-supervised classification and lexical entailment direction identification.
http://arxiv.org/abs/1908.07162v2
"2019-08-20T04:32:30Z"
cs.CL, cs.IR
2,019
Topic Augmented Generator for Abstractive Summarization
Melissa Ailem, Bowen Zhang, Fei Sha
Steady progress has been made in abstractive summarization with attention-based sequence-to-sequence learning models. In this paper, we propose a new decoder where the output summary is generated by conditioning on both the input text and the latent topics of the document. The latent topics, identified by a topic model such as LDA, reveal more global semantic information that can be used to bias the decoder when generating words. In particular, they enable the decoder to access additional word co-occurrence statistics captured at the document corpus level. We empirically validate the advantage of the proposed approach on both the CNN/Daily Mail and the WikiHow datasets. Concretely, we attain strongly improved ROUGE scores when compared to state-of-the-art models.
http://arxiv.org/abs/1908.07026v1
"2019-08-19T19:02:14Z"
cs.LG, stat.ML
2,019
Generating an Overview Report over Many Documents
Jingwen Wang, Hao Zhang, Cheng Zhang, Wenjing Yang, Liqun Shao, Jie Wang
How to efficiently generate an accurate, well-structured overview report (ORPT) over thousands of related documents is challenging. A well-structured ORPT consists of sections of multiple levels (e.g., sections and subsections). None of the existing multi-document summarization (MDS) algorithms is directed toward this task. To overcome this obstacle, we present NDORGS (Numerous Documents' Overview Report Generation Scheme), which integrates text filtering, keyword scoring, single-document summarization (SDS), topic modeling, MDS, and title generation to generate a coherent, well-structured ORPT. We then devise a multi-criteria evaluation method using techniques of text mining and multi-attribute decision making on a combination of human judgments, running time, information coverage, and topic diversity. We evaluate ORPTs generated by NDORGS on two large corpora of documents, one classified and the other unclassified. We show that, using Saaty's 9-point pairwise comparison scale and TOPSIS, the ORPTs generated from SDSs whose length is 20% of the original documents are the best overall on both datasets.
http://arxiv.org/abs/1908.06216v1
"2019-08-17T01:11:04Z"
cs.CL
2,019
The Hitchhiker's Guide to LDA
Chen Ma
The Latent Dirichlet Allocation (LDA) model is a famous model in the topic modeling field; it has been studied for years owing to its extensive application value in industry and academia. However, the mathematical derivation of the LDA model is challenging, which makes it hard for beginners to learn. To help beginners learn LDA, this book analyzes the mathematical derivation of LDA in detail and introduces all the necessary background knowledge to make it easy to understand. Thus, this book contains the author's unique insights. It should be noted that this book is written in Chinese.
http://arxiv.org/abs/1908.03142v2
"2019-08-07T03:59:19Z"
cs.IR, cs.CL, cs.LG
2,019
Semantic Concept Spaces: Guided Topic Model Refinement using Word-Embedding Projections
Mennatallah El-Assady, Rebecca Kehlbeck, Christopher Collins, Daniel Keim, Oliver Deussen
We present a framework that allows users to incorporate the semantics of their domain knowledge for topic model refinement while remaining model-agnostic. Our approach enables users to (1) understand the semantic space of the model, (2) identify regions of potential conflicts and problems, and (3) readjust the semantic relation of concepts based on their understanding, directly influencing the topic modeling. These tasks are supported by an interactive visual analytics workspace that uses word-embedding projections to define concept regions which can then be refined. The user-refined concepts are independent of a particular document collection and can be transferred to related corpora. All user interactions within the concept space directly affect the semantic relations of the underlying vector space model, which, in turn, change the topic modeling. In addition to direct manipulation, our system guides the users' decision-making process through recommended interactions that point out potential improvements. This targeted refinement aims at minimizing the feedback required for an efficient human-in-the-loop process. We confirm the improvements achieved through our approach in two user studies that show topic model quality improvements through our visual knowledge externalization and learning process.
http://arxiv.org/abs/1908.00475v1
"2019-08-01T16:02:04Z"
cs.HC, cs.CL, cs.IR
2,019
Convolutional Auto-encoding of Sentence Topics for Image Paragraph Generation
Jing Wang, Yingwei Pan, Ting Yao, Jinhui Tang, Tao Mei
Image paragraph generation is the task of producing a coherent story (usually a paragraph) that describes the visual content of an image. The problem is nevertheless non-trivial, especially when there are multiple descriptive and diverse gists to be considered for paragraph generation, as often happens in real images. A valid question is how to encapsulate such gists/topics that are worthy of mention from an image, and then describe the image from one topic to another but holistically with a coherent structure. In this paper, we present a new design --- Convolutional Auto-Encoding (CAE) --- that purely employs a convolutional and deconvolutional auto-encoding framework for topic modeling on the region-level features of an image. Furthermore, we propose an architecture, namely CAE plus Long Short-Term Memory (dubbed CAE-LSTM), that integrates the learnt topics in support of paragraph generation. Technically, CAE-LSTM capitalizes on a two-level LSTM-based paragraph generation framework with an attention mechanism. The paragraph-level LSTM captures the inter-sentence dependency in a paragraph, while the sentence-level LSTM generates one sentence conditioned on each learnt topic. Extensive experiments are conducted on the Stanford image paragraph dataset, and superior results are reported when comparing to state-of-the-art approaches. More remarkably, CAE-LSTM increases CIDEr performance from 20.93% to 25.15%.
http://arxiv.org/abs/1908.00249v1
"2019-08-01T07:58:50Z"
cs.CV, cs.CL
2,019
Personalized, Health-Aware Recipe Recommendation: An Ensemble Topic Modeling Based Approach
Mansura A. Khan, Ellen Rushe, Barry Smyth, David Coyle
Food choices are personal and complex and have a significant impact on our long-term health and quality of life. By helping users to make informed and satisfying decisions, Recommender Systems (RS) have the potential to support users in making healthier food choices. Intelligent user modeling is a key challenge in achieving this potential. This paper investigates Ensemble Topic Modelling (EnsTM) based feature identification techniques for efficient user modeling and recipe recommendation. It builds on findings in EnsTM to propose a reduced data representation format and a smart user-modeling strategy that makes capturing user preferences fast, efficient and interactive. This approach enables personalization even in a cold-start scenario. This paper proposes two different EnsTM-based recommenders and one hybrid EnsTM-based recommender. We compared all three EnsTM-based variations through a user study with 48 participants, using a large-scale, real-world corpus of 230,876 recipes, and compared against a conventional Content Based (CB) approach. EnsTM-based recommenders performed significantly better than the CB approach. Besides accounting for multi-domain content such as taste, demographics and cost, our proposed approach also considers users' nutritional preferences and assists them in finding recipes under diverse nutritional categories. Furthermore, it provides excellent coverage and enables an implicit understanding of users' food practices. Subsequent analysis also exposed correlations between certain features and a healthier lifestyle.
http://arxiv.org/abs/1908.00148v1
"2019-07-31T23:51:51Z"
cs.IR, cs.LG, I.2
2,019
VASSL: A Visual Analytics Toolkit for Social Spambot Labeling
Mosab Khayat, Morteza Karimzadeh, Jieqiong Zhao, David S. Ebert
Social media platforms such as Twitter are filled with social spambots. Detecting these malicious accounts is essential, yet challenging, as they continually evolve and evade traditional detection techniques. In this work, we propose VASSL, a visual analytics system that assists in the process of detecting and labeling spambots. Our tool enhances the performance and scalability of manual labeling by providing multiple connected views and utilizing dimensionality reduction, sentiment analysis and topic modeling techniques, which offer new insights that enable the identification of spambots. The system allows users to select and analyze groups of accounts in an interactive manner, which enables the detection of spambots that may not be identified when examined individually. We conducted a user study to objectively evaluate the performance of VASSL users, as well as capturing subjective opinions about the usefulness and the ease of use of the tool.
http://arxiv.org/abs/1907.13319v2
"2019-07-31T06:05:55Z"
cs.HC, cs.SI
2,019
Confirmatory Aspect-based Opinion Mining Processes
Jongho Im, Taikgun Song, Youngsu Lee, Jewoo Kim
A new opinion extraction method is proposed to summarize unstructured, user-generated content (i.e., online customer reviews) in fixed topic domains. To differentiate the current approach from other opinion extraction approaches, which are often exposed to a sparsity problem and lack sentiment scores, a confirmatory aspect-based opinion mining framework is introduced along with its practical algorithm, DiSSBUS. In this procedure, 1) each customer review is disintegrated into a set of clauses; 2) each clause is summarized into bi-terms (a topic word and an evaluation word) using a part-of-speech (POS) tagger; and 3) each bi-term is matched to a pre-specified topic relevant to the specific domain. The proposed process has two primary advantages over existing methods: 1) it can decompose a single review into a set of bi-terms related to pre-specified topics in the domain of interest and, therefore, 2) it allows identification of the reviewer's opinions on those topics via the evaluation words within the bi-terms. The proposed aspect-based opinion mining is applied to customer reviews of restaurants in Hawaii obtained from TripAdvisor, and the empirical findings validate the effectiveness of the method. Keywords: Clause-based sentiment analysis, Customer review, Opinion mining, Topic modeling, User-generated content.
http://arxiv.org/abs/1907.12850v1
"2019-07-30T12:00:03Z"
cs.CL
2,019
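Step 2) of the pipeline above, bi-term extraction, can be roughly sketched with nltk's POS tagger, pairing nouns (topic words) with adjectives (evaluation words); clause splitting and topic matching are omitted, and the example sentences are made up.

```python
import nltk
nltk.download("punkt", quiet=True)
nltk.download("averaged_perceptron_tagger", quiet=True)

def biterms(clause):
    """Pair topic words (nouns) with evaluation words (adjectives)."""
    tags = nltk.pos_tag(nltk.word_tokenize(clause))
    nouns = [w for w, t in tags if t.startswith("NN")]
    adjectives = [w for w, t in tags if t.startswith("JJ")]
    return [(n, a) for n in nouns for a in adjectives]

print(biterms("the poke bowl was fresh"))   # e.g. [('bowl', 'fresh'), ...]
print(biterms("the service felt slow"))
```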
TopicSifter: Interactive Search Space Reduction Through Targeted Topic Modeling
Hannah Kim, Dongjin Choi, Barry Drake, Alex Endert, Haesun Park
Topic modeling is commonly used to analyze and understand large document collections. However, in practice, users want to focus on specific aspects or "targets" rather than the entire corpus. For example, given a large collection of documents, users may want only a smaller subset which more closely aligns with their interests, tasks, and domains. In particular, our paper focuses on large-scale document retrieval with high recall where any missed relevant documents can be critical. A simple keyword matching search is generally not effective nor efficient as 1) it is difficult to find a list of keyword queries that can cover the documents of interest before exploring the dataset, 2) some documents may not contain the exact keywords of interest but may still be highly relevant, and 3) some words have multiple meanings, which would result in irrelevant documents included in the retrieved subset. In this paper, we present TopicSifter, a visual analytics system for interactive search space reduction. Our system utilizes targeted topic modeling based on nonnegative matrix factorization and allows users to give relevance feedback in order to refine their target and guide the topic modeling to the most relevant results.
http://arxiv.org/abs/1907.12079v1
"2019-07-28T13:27:18Z"
cs.IR, cs.HC
2,019
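A bare-bones sketch of the NMF-based topic modeling that TopicSifter builds on, using scikit-learn on toy documents; the targeted modeling and relevance-feedback loop are the paper's contribution and are not shown here.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import NMF
import numpy as np

docs = ["court ruling on patent law", "new vaccine trial results",
        "appeals court patent decision", "vaccine efficacy study published"]
vec = TfidfVectorizer()
X = vec.fit_transform(docs)

model = NMF(n_components=2, init="nndsvd", random_state=0)
W = model.fit_transform(X)          # document-topic weights
H = model.components_               # topic-word weights
vocab = np.array(vec.get_feature_names_out())
for k, row in enumerate(H):
    print(f"topic {k}:", vocab[np.argsort(row)[-3:]])
```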
Topic Modeling with Wasserstein Autoencoders
Feng Nan, Ran Ding, Ramesh Nallapati, Bing Xiang
We propose a novel neural topic model in the Wasserstein autoencoders (WAE) framework. Unlike existing variational autoencoder based models, we directly enforce a Dirichlet prior on the latent document-topic vectors. We exploit the structure of the latent space and apply a suitable kernel in minimizing the Maximum Mean Discrepancy (MMD) to perform distribution matching. We discover that MMD performs much better than the Generative Adversarial Network (GAN) in matching high-dimensional Dirichlet distributions. We further discover that incorporating randomness in the encoder output during training leads to significantly more coherent topics. To measure the diversity of the produced topics, we propose a simple topic uniqueness metric. Together with the widely used coherence measure NPMI, we offer a more holistic evaluation of topic quality. Experiments on several real datasets show that our model produces significantly better topics than existing topic models.
http://arxiv.org/abs/1907.12374v2
"2019-07-24T14:08:23Z"
cs.IR, cs.AI, cs.LG
2,019
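The MMD quantity being minimized can be computed directly. Below is a small numpy sketch with an RBF kernel comparing samples from a Dirichlet prior against stand-in encoder outputs; it is a V-statistic estimate for illustration, not the paper's training code.

```python
import numpy as np

def mmd_rbf(X, Y, gamma=1.0):
    """Biased (V-statistic) estimate of MMD^2 under an RBF kernel."""
    def k(A, B):
        sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * sq)
    return k(X, X).mean() + k(Y, Y).mean() - 2 * k(X, Y).mean()

rng = np.random.default_rng(0)
prior = rng.dirichlet([0.1, 0.1, 0.1], size=200)    # sparse Dirichlet prior
encoded = rng.dirichlet([1.0, 1.0, 1.0], size=200)  # stand-in encoder outputs
print(mmd_rbf(prior, encoded))     # larger value -> distributions differ more
```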
The Dynamic Embedded Topic Model
Adji B. Dieng, Francisco J. R. Ruiz, David M. Blei
Topic modeling analyzes documents to learn meaningful patterns of words. For documents collected in sequence, dynamic topic models capture how these patterns vary over time. We develop the dynamic embedded topic model (D-ETM), a generative model of documents that combines dynamic latent Dirichlet allocation (D-LDA) and word embeddings. The D-ETM models each word with a categorical distribution parameterized by the inner product between the word embedding and a per-time-step embedding representation of its assigned topic. The D-ETM learns smooth topic trajectories by defining a random walk prior over the embedding representations of the topics. We fit the D-ETM using structured amortized variational inference with a recurrent neural network. On three different corpora---a collection of United Nations debates, a set of ACL abstracts, and a dataset of Science Magazine articles---we found that the D-ETM outperforms D-LDA on a document completion task. We further found that the D-ETM learns more diverse and coherent topics than D-LDA while requiring significantly less time to fit.
http://arxiv.org/abs/1907.05545v2
"2019-07-12T01:55:36Z"
cs.CL, stat.ML
2,019
Topic Modeling in Embedding Spaces
Adji B. Dieng, Francisco J. R. Ruiz, David M. Blei
Topic modeling analyzes documents to learn meaningful patterns of words. However, existing topic models fail to learn interpretable topics when working with large and heavy-tailed vocabularies. To this end, we develop the Embedded Topic Model (ETM), a generative model of documents that marries traditional topic models with word embeddings. In particular, it models each word with a categorical distribution whose natural parameter is the inner product between a word embedding and an embedding of its assigned topic. To fit the ETM, we develop an efficient amortized variational inference algorithm. The ETM discovers interpretable topics even with large vocabularies that include rare words and stop words. It outperforms existing document models, such as latent Dirichlet allocation (LDA), in terms of both topic quality and predictive performance.
http://arxiv.org/abs/1907.04907v1
"2019-07-08T03:50:57Z"
cs.IR, cs.CL, cs.LG, stat.ML
2,019
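The ETM's central equation is easy to render in a few lines: each topic's distribution over words is a softmax, over the vocabulary, of inner products between word embeddings and the topic embedding. The sizes and random values below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
V, D, K = 1000, 50, 10
rho = rng.standard_normal((V, D))      # word embeddings
alpha = rng.standard_normal((K, D))    # topic embeddings

logits = rho @ alpha.T                 # (V, K) inner products
beta = np.exp(logits - logits.max(0))  # softmax over the vocabulary axis
beta /= beta.sum(axis=0)               # beta[:, k] is topic k's word dist.
print(beta[:, 0].sum())                # 1.0
```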
Joint Lifelong Topic Model and Manifold Ranking for Document Summarization
Jianying Lin, Rui Liu, Quanye Jia
Because the manifold ranking method can effectively rank unknown data based on known data using a weighted network, many researchers use it to solve the document summarization task. However, their models consider only the original features and ignore the semantic features of sentences when constructing the weighted networks for the manifold ranking method. To solve this problem, we propose two improved models based on the manifold ranking method. One combines a topic model with the manifold ranking method (JTMMR) to solve the document summarization task. This model uses not only the original features but also semantic features to represent the document, which can improve the accuracy of the manifold ranking method. The other combines a lifelong topic model with the manifold ranking method (JLTMMR). On the basis of JTMMR, this model adds a knowledge constraint to improve the quality of the topics. At the same time, we also add a constraint on the relationship between documents to extract better document semantic features. The JTMMR model improves the effect of the manifold ranking method by using these better semantic features. Experiments show that our models achieve better results than other baseline models on the multi-document summarization task, and they also perform well on the single-document summarization task. After combining with a few basic surface features, our model significantly outperforms some recent deep learning based models. We also explore lifelong machine learning by analyzing the effect of adding feedback; experiments show that adding feedback to our model has a significant effect.
http://arxiv.org/abs/1907.03224v1
"2019-07-07T05:41:55Z"
cs.CL
2,019
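Both models build on the classic manifold ranking iteration, which diffuses relevance scores over a sentence similarity graph: f <- alpha * S f + (1 - alpha) * y, with S a normalized affinity matrix. A compact numpy sketch with a made-up affinity matrix:

```python
import numpy as np

W = np.array([[0., .8, .1], [.8, 0., .5], [.1, .5, 0.]])  # sentence affinities
d = W.sum(axis=1)
S = W / np.sqrt(np.outer(d, d))   # symmetric normalization D^-1/2 W D^-1/2
y = np.array([1., 0., 0.])        # prior relevance (e.g., query match)

f = y.copy()
for _ in range(50):               # iterate to (near) convergence
    f = 0.85 * S @ f + 0.15 * y
print(f)                          # ranking scores for each sentence
```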
TEAGS: Time-aware Text Embedding Approach to Generate Subgraphs
Saeid Hosseini, Saeed Najafipour, Ngai-Man Cheung, Hongzhi Yin, Mohammad Reza Kangavari, Xiaofang Zhou
Contagions (e.g. viruses, gossip) spread over the nodes in propagation graphs. We can use the temporal and textual data of the nodes to compute edge weights and then generate subgraphs with highly relevant nodes. This is beneficial to many applications. Yet, challenges abound. First, the propagation pattern between each pair of nodes may change over time. Second, the same contagion does not always propagate. Hence, state-of-the-art text mining approaches, including topic modeling, cannot effectively compute the edge weights. Third, since the propagation is affected by time, word-word co-occurrence patterns may differ across temporal dimensions, which can decrease the effectiveness of word embedding approaches. We argue that multi-aspect temporal dimensions (hour, day, etc.) should be considered to better calculate the correlation weights between nodes. In this work, we devise a novel framework that, on the one hand, integrates a neural network based time-aware word embedding component to construct word vectors through multiple temporal facets and, on the other hand, uses a temporal generative model to compute the weights. Subsequently, we propose a Max-Heap Graph cutting algorithm to generate subgraphs. We validate our model through comprehensive experiments on real-world datasets. The results show that our model retrieves subgraphs more effectively than its rivals and that temporal dynamics should be considered in both the word embedding and propagation processes.
http://arxiv.org/abs/1907.03191v3
"2019-07-06T21:26:22Z"
cs.IR, cs.DB, cs.LG
2,019
Mining Twitter to Assess the Determinants of Health Behavior towards Human Papillomavirus Vaccination in the United States
Hansi Zhang, Christopher Wheldon, Adam G. Dunn, Cui Tao, Jinhai Huo, Rui Zhang, Mattia Prosperi, Yi Guo, Jiang Bian
Objectives To test the feasibility of using Twitter data to assess determinants of consumers' health behavior towards Human papillomavirus (HPV) vaccination, informed by the Integrated Behavior Model (IBM). Methods We used three Twitter datasets spanning from 2014 to 2018. We preprocessed and geocoded the tweets, and then built a rule-based model that classified each tweet as either promotional information or consumers' discussions. We applied topic modeling to discover major themes, and subsequently explored the associations between the topics learned from consumers' discussions and the responses to HPV-related questions in the Health Information National Trends Survey (HINTS). Results We collected 2,846,495 tweets and analyzed 335,681 geocoded tweets. Through topic modeling, we identified 122 high-quality topics. The most discussed consumer topic is "cervical cancer screening", while in promotional tweets the most popular topic is raising awareness that "HPV causes cancer". 87 of the 122 topics are correlated between promotional information and consumers' discussions. Guided by the IBM, we examined the alignment between our Twitter findings and the results obtained from HINTS. 35 topics can be mapped to HINTS questions by keywords, 112 topics can be mapped to IBM constructs, and 45 topics have statistically significant correlations with HINTS responses in terms of geographic distributions. Conclusion Mining Twitter to assess consumers' health behaviors can not only obtain results comparable to surveys but also yield additional insights via a theory-driven approach. Limitations exist; nevertheless, these encouraging results impel us to develop innovative ways of leveraging social media in the changing health communication landscape.
http://arxiv.org/abs/1907.11624v1
"2019-07-06T18:51:51Z"
cs.SI, cs.CY, cs.LG
2,019
Topic Modeling the Reading and Writing Behavior of Information Foragers
Jaimie Murdock
The general problem of "information foraging" in an environment about which agents have incomplete information has been explored in many fields, including cognitive psychology, neuroscience, economics, finance, ecology, and computer science. In all of these areas, the searcher aims to enhance future performance by surveying enough of existing knowledge to orient themselves in the information space. Individuals can be viewed as conducting a cognitive search in which they must balance exploration of ideas that are novel to them against exploitation of knowledge in domains in which they are already expert. In this dissertation, I present several case studies that demonstrate how reading and writing behaviors interact to construct personal knowledge bases. These studies use LDA topic modeling to represent the information environment of the texts each author read and wrote. Three studies revolve around Charles Darwin. Darwin left detailed records of every book he read for 23 years, from disembarking from the H.M.S. Beagle to just after publication of The Origin of Species. Additionally, he left copies of his drafts before publication. I characterize his reading behavior, then show how that reading behavior interacted with the drafts and subsequent revisions of The Origin of Species, and expand the dataset to include later readings and writings. Then, through a study of Thomas Jefferson's correspondence, I expand the study to non-book data. Finally, through an examination of neuroscience citation data, I move from individual behavior to collective behavior in constructing an information environment. Together, these studies reveal "the interplay between individual and collective phenomena where innovation takes place" (Tria et al. 2014).
http://arxiv.org/abs/1907.00488v1
"2019-06-30T22:40:37Z"
cs.CL, cs.CY, cs.DL, cs.IR
2,019
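The exploration-versus-exploitation framing above lends itself to a simple quantitative sketch: score each new reading by how far its topic mixture lies from the centroid of everything read before. The topic mixtures and the Jensen-Shannon novelty measure below are illustrative assumptions, not the dissertation's exact analysis.

```python
import numpy as np
from scipy.spatial.distance import jensenshannon

# Hypothetical LDA topic mixtures for a sequence of readings (rows sum to 1).
readings = np.array([
    [0.70, 0.20, 0.10],
    [0.65, 0.25, 0.10],   # close to prior readings: exploitation
    [0.10, 0.15, 0.75],   # a jump to a new region of topic space: exploration
])

centroid = readings[0]
for i, theta in enumerate(readings[1:], start=1):
    novelty = jensenshannon(theta, centroid)  # 0 = familiar, higher = novel
    print(f"reading {i}: novelty = {novelty:.3f}")
    centroid = (centroid * i + theta) / (i + 1)  # running mean of past readings
```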
Learning with fuzzy hypergraphs: a topical approach to query-oriented text summarization
Hadrien Van Lierde, Tommy W. S. Chow
Existing graph-based methods for extractive document summarization represent sentences of a corpus as the nodes of a graph or a hypergraph in which edges depict relationships of lexical similarity between sentences. Such approaches fail to capture semantic similarities between sentences when they express similar information but have few words in common and are thus lexically dissimilar. To overcome this issue, we propose to extract semantic similarities based on topical representations of sentences. Inspired by the Hierarchical Dirichlet Process, we propose a probabilistic topic model in order to infer topic distributions of sentences. As each topic defines a semantic connection among a group of sentences with a certain degree of membership for each sentence, we propose a fuzzy hypergraph model in which nodes are sentences and fuzzy hyperedges are topics. To produce an informative summary, we extract a set of sentences from the corpus by simultaneously maximizing their relevance to a user-defined query, their centrality in the fuzzy hypergraph and their coverage of topics present in the corpus. We formulate a polynomial-time algorithm building on the theory of submodular functions to solve the associated optimization problem. A thorough comparative analysis with other graph-based summarization systems is included in the paper. Our results show the superiority of our method in terms of content coverage of the summaries.
http://arxiv.org/abs/1906.09445v1
"2019-06-22T13:28:32Z"
cs.CL, cs.AI
2,019
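The selection step above, maximizing query relevance plus topic coverage, is a monotone submodular objective, so a greedy algorithm comes with approximation guarantees. The sketch below uses stand-in scoring functions rather than the paper's exact objective.

```python
import numpy as np

def greedy_summary(relevance, sent_topics, budget):
    """relevance: (n,) query-relevance scores; sent_topics: (n, k) fuzzy topic
    memberships in [0, 1]; budget: number of sentences to select."""
    n, _ = sent_topics.shape
    covered = np.zeros(sent_topics.shape[1])
    chosen = []
    for _ in range(budget):
        best, best_gain = None, -np.inf
        for s in range(n):
            if s in chosen:
                continue
            # Coverage uses an elementwise max, so the set function is monotone
            # submodular and greedy selection is provably near-optimal.
            gain = relevance[s] + np.maximum(covered, sent_topics[s]).sum() - covered.sum()
            if gain > best_gain:
                best, best_gain = s, gain
        chosen.append(best)
        covered = np.maximum(covered, sent_topics[best])
    return chosen

relevance = np.array([0.9, 0.2, 0.7, 0.4])                   # toy scores
topics = np.array([[0.8, 0.1], [0.2, 0.3], [0.1, 0.9], [0.5, 0.5]])
print(greedy_summary(relevance, topics, budget=2))           # picks [0, 2]
```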
An Online Topic Modeling Framework with Topics Automatically Labeled
Fenglei Jin, Cuiyun Gao, Michael R. Lyu
In this paper, we propose a novel online topic tracking framework, named IEDL, for tracking the topic changes related to deep learning techniques on Stack Exchange and automatically interpreting each identified topic. The proposed framework incorporates the prior topic distributions from a time window when inferring the topics in the current time slice, and introduces a new ranking scheme to select the most representative phrases and sentences for each inferred topic in each time slice. Experiments on 7,076 Stack Exchange posts show the effectiveness of IEDL in tracking topic changes and labeling topics.
http://arxiv.org/abs/1907.01638v1
"2019-06-22T02:42:44Z"
cs.IR, cs.CL
2,019
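The topic-chaining idea, carrying the previous slice's topics into the current slice's priors, can be sketched as below. The gensim backend, the blending weight, and the toy slices are illustrative assumptions, not the IEDL implementation.

```python
from gensim import corpora
from gensim.models import LdaModel

def fit_slice(docs, dictionary, num_topics, prior_topics=None, carry=0.5):
    bow = [dictionary.doc2bow(d) for d in docs]
    eta = "symmetric"
    if prior_topics is not None:
        # Blend the previous slice's topic-word matrix into the word prior so
        # that topic identities stay aligned across time slices.
        eta = carry * prior_topics + (1 - carry) / len(dictionary)
    lda = LdaModel(bow, num_topics=num_topics, id2word=dictionary,
                   eta=eta, random_state=0)
    return lda, lda.get_topics()  # (num_topics, num_terms), fed to the next slice

slices = [  # toy time slices of tokenized posts
    [["cnn", "training", "loss"], ["gpu", "training", "batch"]],
    [["transformer", "training", "loss"], ["gpu", "memory", "batch"]],
]
dictionary = corpora.Dictionary(d for s in slices for d in s)
prior = None
for t, docs in enumerate(slices):
    lda, prior = fit_slice(docs, dictionary, num_topics=2, prior_topics=prior)
    print(f"slice {t}:", lda.show_topics(num_words=3))
```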
Properties of jet fragmentation using charged particles measured with the ATLAS detector in $pp$ collisions at $\sqrt{s}=13$ TeV
ATLAS Collaboration
This paper presents a measurement of quantities related to the formation of jets from high-energy quarks and gluons (fragmentation). Jets with transverse momentum 100 GeV $<p_T<$ 2.5 TeV and pseudorapidity $|\eta| < 2.1$ from an integrated luminosity of 33 fb$^{-1}$ of $\sqrt{s}=13$ TeV proton-proton collisions are reconstructed with the ATLAS detector at the Large Hadron Collider. Charged-particle tracks with $p_T > 500$ MeV and $|\eta| < 2.5$ are used to probe the detailed structure of the jet. The fragmentation properties of the more forward and the more central of the two leading jets from each event are studied. The data are unfolded to correct for detector resolution and acceptance effects. Comparisons with parton shower Monte Carlo generators indicate that existing models provide a reasonable description of the data across a wide range of phase space, but there are also significant differences. Furthermore, the data are interpreted in the context of quark- and gluon-initiated jets by exploiting the rapidity dependence of the jet flavor fraction. A first measurement of the charged-particle multiplicity using model-independent jet labels (topic modeling) provides a promising alternative to traditional quark and gluon extractions using input from simulation. The simulations provide a reasonable description of the quark-like data across the jet $p_T$ range presented in this measurement, but the gluon-like data have systematically fewer charged particles than the simulations.
http://arxiv.org/abs/1906.09254v2
"2019-06-21T17:40:23Z"
hep-ex
2,019
Stuck? No worries!: Task-aware Command Recommendation and Proactive Help for Analysts
Aadhavan M. Nambhi, Bhanu Prakash Reddy, Aarsh Prakash Agarwal, Gaurav Verma, Harvineet Singh, Iftikhar Ahamath Burhanuddin
Data analytics software applications have become an integral part of the decision-making process of analysts. Users of such software face challenges due to insufficient product and domain knowledge, and find themselves in need of help. To alleviate this, we propose a task-aware command recommendation system to guide the user on which commands could be executed next. We rely on topic modeling techniques to incorporate information about the user's task into our models. We also present a help prediction model to detect if a user is in need of help, in which case the system proactively provides the aforementioned command recommendations. We leverage the log data of a web-based analytics software to quantify the superior performance of our neural models, in comparison to competitive baselines.
http://arxiv.org/abs/1906.08973v1
"2019-06-21T06:30:08Z"
cs.HC, cs.IR
2,019
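One plausible reading of the task-aware recommendation idea: treat each session's command log as a document, infer its dominant topic (a proxy for the task), and recommend that topic's most probable commands the user has not yet issued. Everything below (gensim LDA, the toy sessions) is an illustrative assumption, not the paper's neural model.

```python
from gensim import corpora
from gensim.models import LdaModel

sessions = [  # toy command logs, one list per analyst session
    ["filter", "sort", "pivot", "export"],
    ["filter", "pivot", "chart", "export"],
    ["import", "join", "dedupe", "join"],
]
dictionary = corpora.Dictionary(sessions)
bow = [dictionary.doc2bow(s) for s in sessions]
lda = LdaModel(bow, num_topics=2, id2word=dictionary, random_state=0)

def recommend(partial_session, top_n=3):
    doc = dictionary.doc2bow(partial_session)
    # The session's dominant topic serves as a proxy for the analyst's task.
    topic_id = max(lda.get_document_topics(doc), key=lambda x: x[1])[0]
    ranked = lda.show_topic(topic_id, topn=10)  # [(command, probability), ...]
    return [c for c, _ in ranked if c not in partial_session][:top_n]

print(recommend(["filter", "sort"]))
```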
Interactive Topic Modeling with Anchor Words
Sanjoy Dasgupta, Stefanos Poulis, Christopher Tosh
The formalism of anchor words has enabled the development of fast topic modeling algorithms with provable guarantees. In this paper, we introduce a protocol that allows users to interact with anchor words to build customized and interpretable topic models. Experimental evidence validating the usefulness of our approach is also presented.
http://arxiv.org/abs/1907.04919v1
"2019-06-18T23:42:23Z"
cs.IR, cs.LG, stat.ML
2,019
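For readers unfamiliar with the anchor-word formalism, here is a compact sketch of anchor-based topic recovery in the spirit of the "recover" algorithms this line of work builds on: each word's row of the row-normalized co-occurrence matrix is written as a convex combination of the anchor words' rows. Nonnegative least squares stands in for the KL-based recovery used in practice, and the anchors here are chosen by hand.

```python
import numpy as np
from scipy.optimize import nnls

def recover_topics(Q, anchors):
    """Q: (V, V) word co-occurrence counts; anchors: one word id per topic.
    Returns a (V, K) matrix of per-word topic weights on the simplex."""
    Qn = Q / Q.sum(axis=1, keepdims=True)      # row-normalize co-occurrences
    A = Qn[anchors].T                          # anchor rows span the topic space
    C = np.zeros((Q.shape[0], len(anchors)))
    for w in range(Q.shape[0]):
        coef, _ = nnls(A, Qn[w])               # nonnegative reconstruction
        C[w] = coef / max(coef.sum(), 1e-12)   # project onto the simplex
    return C

rng = np.random.default_rng(0)
Q = rng.random((6, 6)) + np.eye(6)             # toy co-occurrence counts
print(recover_topics(Q, anchors=[0, 3]).round(2))
```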
Analyses of Multi-collection Corpora via Compound Topic Modeling
Clint P. George, Wei Xia, George Michailidis
As electronically stored data grow in daily life, obtaining novel and relevant information becomes challenging in text mining. Thus people have sought statistical methods based on term frequency, matrix algebra, or topic modeling for text mining. Popular topic models have centered on a single text collection, which is deficient for comparative text analyses. We consider a setting where one can partition the corpus into subcollections. Each subcollection shares a common set of topics, but there exists relative variation in topic proportions among collections. Incorporating prior knowledge about the corpus (e.g., organizational structure), we propose the compound latent Dirichlet allocation (cLDA) model, improving on previous work, encouraging generalizability, and depending less on user-input parameters. To identify the parameters of interest in cLDA, we study Markov chain Monte Carlo (MCMC) and variational inference approaches extensively, and suggest an efficient MCMC method. We evaluate cLDA qualitatively and quantitatively using both synthetic and real-world corpora. The usability study on real-world corpora illustrates that cLDA not only explores the underlying topics automatically but also models their connections and variations across multiple collections.
http://arxiv.org/abs/1907.01636v1
"2019-06-17T06:59:25Z"
cs.IR, cs.CL, cs.LG, stat.ML, 62F15, 60J22, I.2; I.7; G.3
2,019
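The cLDA generative assumption, one shared set of topics with collection-specific centers for topic proportions, can be made concrete with a small simulation. All hyperparameters below are arbitrary illustrations.

```python
import numpy as np

rng = np.random.default_rng(0)
V, K = 50, 3
topics = rng.dirichlet(np.full(V, 0.1), size=K)   # topics shared by all collections

def sample_collection(n_docs, doc_len, concentration=20.0):
    center = rng.dirichlet(np.ones(K))            # collection-level topic proportions
    docs = []
    for _ in range(n_docs):
        theta = rng.dirichlet(concentration * center)  # doc proportions vary around center
        z = rng.choice(K, size=doc_len, p=theta)       # a topic per token
        docs.append([rng.choice(V, p=topics[k]) for k in z])
    return center, docs

for name in ("collection_A", "collection_B"):
    center, docs = sample_collection(n_docs=4, doc_len=30)
    print(name, "topic center:", center.round(2))
```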
Yoga-Veganism: Correlation Mining of Twitter Health Data
Tunazzina Islam
Social media platforms host enormous amounts of data. People share their interests and thoughts via discussions, tweets, and status updates. It is not possible to go through all the data manually; we need to mine the data to explore hidden patterns and unknown correlations, find the dominant topics, and understand people's interests through their discussions. In this work, we explore Twitter data related to health. We extract the popular topics under different categories (e.g. diet, exercise) discussed on Twitter via topic modeling, observe model behavior on new tweets, and discover an interesting correlation (Yoga-Veganism). We evaluate accuracy by comparing against ground truth obtained via manual annotation of both training and test data.
http://arxiv.org/abs/1906.07668v1
"2019-06-15T20:56:48Z"
cs.CL, cs.AI, cs.CY, cs.LG
2,019
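A minimal sketch of the kind of correlation mining described above: measure how strongly two terms co-occur across tweets via pointwise mutual information (PMI). The toy tweets stand in for the Twitter health corpus, and PMI stands in for the paper's topic-model-based analysis.

```python
import math

tweets = [  # toy stand-ins for the Twitter health corpus
    "morning yoga then a vegan smoothie",
    "vegan meal prep and evening yoga flow",
    "leg day at the gym",
    "yoga retreat with plant based vegan food",
    "keto diet week two",
]

def pmi(term_a, term_b, docs):
    n = len(docs)
    pa = sum(term_a in d for d in docs) / n
    pb = sum(term_b in d for d in docs) / n
    pab = sum(term_a in d and term_b in d for d in docs) / n
    return math.log(pab / (pa * pb)) if pab > 0 else float("-inf")

print(f"PMI(yoga, vegan) = {pmi('yoga', 'vegan', tweets):.2f}")  # positive
print(f"PMI(yoga, keto)  = {pmi('yoga', 'keto', tweets):.2f}")   # -inf (never co-occur)
```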
Topic Modeling via Full Dependence Mixtures
Dan Fisher, Mark Kozdoba, Shie Mannor
In this paper we introduce a new approach to topic modelling that scales to large datasets by using a compact representation of the data and by leveraging the GPU architecture. In this approach, topics are learned directly from the co-occurrence data of the corpus. In particular, we introduce a novel mixture model which we term the Full Dependence Mixture (FDM) model. FDMs model the second moment of the data under general generative assumptions. While there is previous work on topic modeling using second moments, we develop a direct stochastic optimization procedure for fitting an FDM with a single Kullback-Leibler objective. Moment methods in general have the benefit that an iteration no longer needs to scale with the size of the corpus. Our approach allows us to leverage standard optimizers and GPUs for the problem of topic modeling. In particular, we evaluate the approach on two large datasets, NeurIPS papers and a Twitter corpus, with a large number of topics, and show that the approach performs comparably or better than the standard benchmarks.
http://arxiv.org/abs/1906.06181v3
"2019-06-13T10:47:41Z"
cs.IR, cs.LG, stat.ML
2,019
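The core FDM fitting idea, matching the model's second moment to the empirical word co-occurrence matrix under a KL objective with a standard optimizer, can be sketched in a few lines of PyTorch. This is a simplified stand-in (full-batch gradient descent, toy dimensions), not the authors' procedure.

```python
import torch

V, K = 30, 4
emp = torch.rand(V, V)
emp = (emp + emp.T) / 2
emp = emp / emp.sum()                        # empirical co-occurrence distribution

topic_logits = torch.randn(K, V, requires_grad=True)
weight_logits = torch.zeros(K, requires_grad=True)
opt = torch.optim.Adam([topic_logits, weight_logits], lr=0.05)

for step in range(500):
    t = torch.softmax(topic_logits, dim=1)   # (K, V) topics on the simplex
    w = torch.softmax(weight_logits, dim=0)  # (K,) mixture weights
    # Full-dependence second moment: sum_k w_k * t_k t_k^T.
    model = torch.einsum("k,ki,kj->ij", w, t, t)
    loss = (emp * (emp.clamp_min(1e-12).log() - model.clamp_min(1e-12).log())).sum()
    opt.zero_grad()
    loss.backward()
    opt.step()

print("final KL divergence:", loss.item())
```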
Multiway clustering via tensor block models
Miaoyan Wang, Yuchen Zeng
We consider the problem of identifying multiway block structure from a large noisy tensor. Such problems arise frequently in applications such as genomics, recommendation systems, topic modeling, and sensor network localization. We propose a tensor block model, develop a unified least-squares estimation procedure, and obtain theoretical accuracy guarantees for multiway clustering. The statistical convergence of the estimator is established, and we show that the associated clustering procedure achieves partition consistency. A sparse regularization is further developed for identifying important blocks with elevated means. The proposal handles a broad range of data types, including binary, continuous, and hybrid observations. Through simulation and application to two real datasets, we demonstrate that our approach outperforms previous methods.
http://arxiv.org/abs/1906.03807v4
"2019-06-10T06:07:41Z"
stat.ML, cs.LG, math.ST, stat.ME, stat.TH, 62H25, 62H12
2,019
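A least-squares tensor block model can be fit with a k-means-style alternating scheme: re-estimate block means given cluster labels, then reassign each mode's labels given the means. The sketch below is a simplified toy version of such an estimator, not the paper's exact algorithm or theory.

```python
import numpy as np

rng = np.random.default_rng(0)
Y = rng.normal(size=(12, 12, 12))
Y[:6, :6, :6] += 3.0                          # plant one elevated block
R = (2, 2, 2)                                 # clusters per mode
labels = [rng.integers(r, size=n) for n, r in zip(Y.shape, R)]

def block_means(Y, labels, R):
    M = np.zeros(R)
    for a in range(R[0]):
        for b in range(R[1]):
            for c in range(R[2]):
                cell = Y[np.ix_(labels[0] == a, labels[1] == b, labels[2] == c)]
                M[a, b, c] = cell.mean() if cell.size else 0.0
    return M

def reassign(Y, labels, M, mode):
    other = [l for m, l in enumerate(labels) if m != mode]
    for i in range(Y.shape[mode]):
        slice_i = np.take(Y, i, axis=mode)    # 2-D slice at index i of this mode
        costs = [((slice_i - np.take(M, a, axis=mode)[np.ix_(other[0], other[1])]) ** 2).sum()
                 for a in range(M.shape[mode])]
        labels[mode][i] = int(np.argmin(costs))

for _ in range(10):                           # alternate until (toy) convergence
    M = block_means(Y, labels, R)
    for mode in range(3):
        reassign(Y, labels, M, mode)

print("mode-0 labels:", labels[0])            # first six indices should cluster together
```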
Analyzing Social Media Data to Understand Consumers' Information Needs on Dietary Supplements
Rubina F. Rizvi, Yefeng Wang, Thao Nguyen, Jake Vasilakes, Jiang Bian, Zhe He, Rui Zhang
Despite the high consumption of dietary supplements (DS), there are not many reliable, relevant, and comprehensive online resources that could satisfy information seekers. The purpose of this research study is to understand consumers' information needs on DS using topic modeling and to evaluate its accuracy in correctly identifying topics from social media. We retrieved 16,095 unique questions posted on Yahoo! Answers relating to 438 unique DS ingredients mentioned in the sub-section "Alternative medicine" under the section "Health". We implemented an unsupervised topic modeling method, Correlation Explanation (CorEx), to unveil the various topics consumers are most interested in. We manually reviewed the keywords of all 200 topics generated by CorEx and assigned them to 38 health-related categories, corresponding to 12 higher-level groups. We found high accuracy (90-100%) in identifying questions that correctly align with the selected topics. The results could be used to guide us in generating a more comprehensive and structured DS resource based on consumers' information needs.
http://arxiv.org/abs/1906.03171v1
"2019-06-07T15:38:27Z"
cs.CY
2,019
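A usage sketch of CorEx topic modeling as applied above, assuming the open-source corextopic package (gregversteeg/corex_topic); the toy questions, topic count, and vocabulary are placeholders for the Yahoo! Answers data.

```python
import scipy.sparse as ss
from sklearn.feature_extraction.text import CountVectorizer
from corextopic import corextopic as ct

questions = [  # placeholders for the Yahoo! Answers questions
    "is fish oil good for heart health",
    "does melatonin help with sleep problems",
    "can turmeric reduce joint pain",
    "best vitamin d dose in winter",
]
vectorizer = CountVectorizer(binary=True)     # CorEx works on binary counts
X = ss.csr_matrix(vectorizer.fit_transform(questions))
words = list(vectorizer.get_feature_names_out())

model = ct.Corex(n_hidden=2, seed=1)          # the study used 200 topics
model.fit(X, words=words)
for i, topic in enumerate(model.get_topics(n_words=4)):
    print(f"topic {i}:", [t[0] for t in topic])  # first element of each tuple is the word
```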
Sparse Parallel Training of Hierarchical Dirichlet Process Topic Models
Alexander Terenin, Måns Magnusson, Leif Jonsson
To scale non-parametric extensions of probabilistic topic models such as Latent Dirichlet allocation to larger data sets, practitioners rely increasingly on parallel and distributed systems. In this work, we study data-parallel training for the hierarchical Dirichlet process (HDP) topic model. Based upon a representation of certain conditional distributions within an HDP, we propose a doubly sparse data-parallel sampler for the HDP topic model. This sampler utilizes all available sources of sparsity found in natural language - an important way to make computation efficient. We benchmark our method on a well-known corpus (PubMed) with 8m documents and 768m tokens, using a single multi-core machine in under four days.
http://arxiv.org/abs/1906.02416v2
"2019-06-06T05:04:08Z"
stat.ML, cs.CL, cs.IR, cs.LG
2,019
On Privacy Protection of Latent Dirichlet Allocation Model Training
Fangyuan Zhao, Xuebin Ren, Shusen Yang, Xinyu Yang
Latent Dirichlet Allocation (LDA) is a popular topic modeling technique for discovering the hidden semantic structure of text datasets, and plays a fundamental role in many machine learning applications. However, like many other machine learning algorithms, the process of training an LDA model may leak sensitive information about the training datasets and pose significant privacy risks. To mitigate the privacy issues in LDA, we focus on studying privacy-preserving algorithms for LDA model training in this paper. In particular, we first develop a privacy monitoring algorithm to investigate the privacy guarantee obtained from the inherent randomness of the Collapsed Gibbs Sampling (CGS) process in a typical LDA training algorithm on centralized curated datasets. Then, we further propose a locally private LDA training algorithm on crowdsourced data to provide local differential privacy for individual data contributors. The experimental results on real-world datasets demonstrate the effectiveness of our proposed algorithms.
http://arxiv.org/abs/1906.01178v2
"2019-06-04T03:25:17Z"
cs.LG, cs.AI, stat.ML
2,019
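The local-privacy direction can be illustrated with the classic randomized-response mechanism: each contributor flips the bits of their own bag-of-words vector before it leaves the device, and the curator debiases the aggregate. This is a generic epsilon-LDP sketch, not the paper's specific LDA training algorithm.

```python
import math
import numpy as np

def randomized_response(bits, epsilon, rng):
    """Keep each bit with probability e^eps / (1 + e^eps); otherwise flip it."""
    p_keep = math.exp(epsilon) / (1 + math.exp(epsilon))
    keep = rng.random(len(bits)) < p_keep
    return np.where(keep, bits, 1 - bits)

rng = np.random.default_rng(0)
vocab_size, epsilon = 8, 1.0
true_doc = rng.integers(0, 2, size=vocab_size)    # which vocabulary words appear
print("true:   ", true_doc)
print("private:", randomized_response(true_doc, epsilon, rng))

# The curator can debias aggregates: E[report] = (1 - p) + bit * (2p - 1).
p = math.exp(epsilon) / (1 + math.exp(epsilon))
reports = np.vstack([randomized_response(true_doc, epsilon, rng) for _ in range(2000)])
estimate = (reports.mean(axis=0) - (1 - p)) / (2 * p - 1)
print("debiased word-presence estimate:", estimate.round(2))
```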
Multimodal Ensemble Approach to Incorporate Various Types of Clinical Notes for Predicting Readmission
Bonggun Shin, Julien Hogan, Andrew B. Adams, Raymond J. Lynch, Rachel E. Patzer, Jinho D. Choi
Electronic Health Records (EHRs) have been heavily used to predict various downstream clinical tasks such as readmission or mortality. One of the modalities in EHRs, clinical notes, has not been fully explored for these tasks due to its unstructured and opaque nature. Although recent advances in deep learning (DL) enable models to extract interpretable features from unstructured data, they often require a large amount of training data. However, many tasks in medical domains inherently consist of small samples with lengthy documents; taking kidney transplants as an example, data from only a few thousand patients are available at major hospitals, and each patient's record consists of a couple of million words. Thus, complex DL methods cannot be applied to these kinds of domains. In this paper, we present a comprehensive ensemble model using vector space modeling and topic modeling. Our proposed model is evaluated on the readmission task for kidney transplant patients and improves the c-statistic by 0.0211 over the previous state-of-the-art approach using structured data, while typical DL methods fail to beat this approach. The proposed architecture provides an interpretable score for each feature from both modalities, structured and unstructured data, which is shown to be meaningful through a physician's evaluation.
http://arxiv.org/abs/1906.01498v1
"2019-05-31T20:25:06Z"
cs.CL, cs.LG, stat.ML
2,019
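The multimodal idea, deriving topic features from notes and combining them with structured features in an interpretable linear model, can be sketched as below. The toy notes, features, and scikit-learn components are illustrative assumptions, not the paper's ensemble.

```python
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

notes = [  # toy clinical notes
    "creatinine rising after transplant followup",
    "stable graft function routine visit",
    "readmitted with infection and fever",
    "doing well no complaints",
]
structured = np.array([[62, 1], [45, 0], [70, 1], [38, 0]])  # e.g. age, diabetes flag
readmitted = np.array([1, 0, 1, 0])

counts = CountVectorizer().fit_transform(notes)
topic_feats = LatentDirichletAllocation(n_components=2, random_state=0).fit_transform(counts)
X = np.hstack([structured, topic_feats])      # structured + unstructured features

clf = LogisticRegression(max_iter=1000).fit(X, readmitted)
print("per-feature coefficients:", clf.coef_.round(2))  # interpretable scores
```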
Adapting Text Embeddings for Causal Inference
Victor Veitch, Dhanya Sridhar, David M. Blei
Does adding a theorem to a paper affect its chance of acceptance? Does labeling a post with the author's gender affect the post popularity? This paper develops a method to estimate such causal effects from observational text data, adjusting for confounding features of the text such as the subject or writing quality. We assume that the text suffices for causal adjustment but that, in practice, it is prohibitively high-dimensional. To address this challenge, we develop causally sufficient embeddings, low-dimensional document representations that preserve sufficient information for causal identification and allow for efficient estimation of causal effects. Causally sufficient embeddings combine two ideas. The first is supervised dimensionality reduction: causal adjustment requires only the aspects of text that are predictive of both the treatment and outcome. The second is efficient language modeling: representations of text are designed to dispose of linguistically irrelevant information, and this information is also causally irrelevant. Our method adapts language models (specifically, word embeddings and topic models) to learn document embeddings that are able to predict both treatment and outcome. We study causally sufficient embeddings with semi-synthetic datasets and find that they improve causal estimation over related embedding methods. We illustrate the methods by answering the two motivating questions: the effect of a theorem on paper acceptance and the effect of a gender label on post popularity. Code and data available at https://github.com/vveitch/causal-text-embeddings-tf2
http://arxiv.org/abs/1905.12741v2
"2019-05-29T21:29:37Z"
cs.LG, cs.CL, stat.ML
2,019
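Once causally sufficient embeddings are in hand, the downstream step is standard regression adjustment: fit an outcome model per treatment arm on the embeddings and average the predicted difference. The sketch below uses random vectors as stand-ins for the learned embeddings.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n, d = 500, 8
Z = rng.normal(size=(n, d))                        # stand-in "causally sufficient" embeddings
T = rng.binomial(1, 1 / (1 + np.exp(-Z[:, 0])))    # treatment confounded by the text
Y = 2.0 * T + Z[:, 0] + rng.normal(size=n)         # true effect = 2.0

q1 = LinearRegression().fit(Z[T == 1], Y[T == 1])  # outcome model, treated arm
q0 = LinearRegression().fit(Z[T == 0], Y[T == 0])  # outcome model, control arm
ate = (q1.predict(Z) - q0.predict(Z)).mean()       # regression adjustment
print(f"estimated ATE: {ate:.2f} (truth: 2.00)")
```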
Tweeting your Destiny: Profiling Users in the Twitter Landscape around an Online Game
Günter Wallner, Simone Kriglstein, Anders Drachen
Social media has become a major communication channel for communities centered around video games. Consequently, social media offers a rich data source to study online communities and the discussions evolving around games. Towards this end, we explore a large-scale dataset consisting of over 1 million tweets related to the online multiplayer shooter Destiny and spanning a time period of about 14 months using unsupervised clustering and topic modelling. Furthermore, we correlate Twitter activity of over 3,000 players with their playtime. Our results contribute to the understanding of online player communities by identifying distinct player groups with respect to their Twitter characteristics, describing subgroups within the Destiny community, and uncovering broad topics of community interest.
http://arxiv.org/abs/1905.12694v1
"2019-05-29T19:32:32Z"
cs.HC
2,019