Fields per record: Title (string), Authors (string), Abstract (string), entry_id (string), Date (timestamp), Categories (string), year (int32). Records follow in this field order.
The Increased Effect of Elections and Changing Prime Ministers on Topics Discussed in the Australian Federal Parliament between 1901 and 2018
Rohan Alexander, Monica Alexander
Parliamentary discussion is likely to be influenced by the party in power and the associated election cycles. However, little is known about the extent to which these events affect discussion and how this has changed over time. We systematically analyse how discussion in the Australian Federal Parliament changes in response to two types of political events: elections and changes of prime minister. We use a newly constructed dataset of what was said in the Australian Federal Parliament from 1901 through 2018, based on extracting and cleaning available public records. We reduce the dimensionality of discussion in this dataset by using a correlated topic model to obtain a set of topics that are comparable over time. We relate those topics to the Comparative Agendas Project and then analyse the effect of the two types of events using a Bayesian hierarchical Dirichlet model. We find that changes in prime minister tend to be associated with topic changes even when the party in power does not change, and that the effect of elections has been increasing since the 1980s, regardless of whether the election results in a change of prime minister.
http://arxiv.org/abs/2111.09299v1
"2021-11-17T18:55:07Z"
stat.AP
2021
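A minimal sketch of the correlated-topic-model step described in the abstract above, assuming the tomotopy library; the tokenized speeches are made-up stand-ins, and this is not the authors' pipeline:

```python
import tomotopy as tp

# toy stand-ins for tokenized parliamentary speeches
docs = [
    "budget tax revenue treasury".split(),
    "immigration border visa policy".split(),
    "budget deficit spending tax".split(),
]

mdl = tp.CTModel(k=2, seed=42)   # correlated topic model with 2 topics
for words in docs:
    mdl.add_doc(words)

for _ in range(10):              # train in chunks of Gibbs iterations
    mdl.train(100)

for k in range(mdl.k):
    top = [w for w, _ in mdl.get_topic_words(k, top_n=4)]
    print(f"topic {k}:", " ".join(top))
```

On a real corpus, the per-document topic proportions could then feed the downstream event-effect model.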
Community-Detection via Hashtag-Graphs for Semi-Supervised NMF Topic Models
Mattias Luber, Anton Thielmann, Christoph Weisser, Benjamin Säfken
Extracting topics from large collections of unstructured text documents has become a central task in current NLP applications, and algorithms like NMF and LDA, as well as their generalizations, are the well-established state of the art. However, especially for short text documents like tweets, these approaches often lead to unsatisfying results due to the sparsity of the document-feature matrices. Even though several approaches have been proposed to overcome this sparsity by taking additional information into account, these focus mainly on the aggregation of similar documents and the estimation of word co-occurrences. This neglects the fact that considerable topical information can be retrieved from so-called hashtag graphs by applying common community detection algorithms. This paper therefore outlines a novel approach for integrating the topic structure of hashtag graphs into the estimation of topic models by connecting graph-based community detection and semi-supervised NMF. Applying this approach to recently streamed Twitter data shows that the procedure leads to more intuitive and humanly interpretable topics.
http://arxiv.org/abs/2111.10401v1
"2021-11-17T12:52:16Z"
cs.SI, cs.LG
2021
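A rough sketch of the hashtag-graph idea: detect communities on a hashtag co-occurrence graph and use them to seed NMF topics. sklearn's custom initialization stands in here for the paper's semi-supervised coupling, and the tweets, graph construction, and seeding weights are illustrative assumptions:

```python
import numpy as np
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import NMF

tweets = ["#climate #green energy transition", "#climate warming data",
          "#football #goal match tonight", "#goal highlights #football"]

# hashtag co-occurrence graph: edge whenever two hashtags share a tweet
G = nx.Graph()
for t in tweets:
    tags = [w for w in t.split() if w.startswith("#")]
    G.add_edges_from((a, b) for i, a in enumerate(tags) for b in tags[i + 1:])
communities = list(greedy_modularity_communities(G))

vec = TfidfVectorizer()
X = vec.fit_transform(tweets).toarray()
vocab = vec.vocabulary_          # note: the default tokenizer strips '#'

k = len(communities)
rng = np.random.default_rng(0)
W0 = np.abs(rng.random((X.shape[0], k)))
H0 = np.abs(rng.random((k, X.shape[1]))) * 1e-2
for j, comm in enumerate(communities):   # boost each community's hashtags in topic j
    for tag in comm:
        col = vocab.get(tag.lstrip("#"))
        if col is not None:
            H0[j, col] = 1.0

nmf = NMF(n_components=k, init="custom", max_iter=500)
W = nmf.fit_transform(X, W=W0, H=H0)
print("document-topic weights:\n", W.round(2))
```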
Utilizing Textual Reviews in Latent Factor Models for Recommender Systems
Tatev Karen Aslanyan, Flavius Frasincar
Most existing recommender systems are based only on rating data, and they ignore other sources of information that might increase the quality of recommendations, such as textual reviews or user and item characteristics. Moreover, the majority of those systems are applicable only to small datasets (with thousands of observations) and are unable to handle large datasets (with millions of observations). We propose a recommender algorithm that combines a rating modelling technique (i.e., the Latent Factor Model) with a topic modelling method based on textual reviews (i.e., Latent Dirichlet Allocation), and we extend the algorithm so that it allows adding extra user- and item-specific information to the system. We evaluate the performance of the algorithm using Amazon.com datasets of different sizes, corresponding to 23 product categories. After comparing the built model to four other models, we found that combining textual reviews with ratings leads to better recommendations. Moreover, we found that adding extra user and item features to the model increases its prediction accuracy, especially for medium and large datasets.
http://arxiv.org/abs/2111.08538v1
"2021-11-16T15:07:51Z"
cs.IR, cs.LG, stat.ML
2021
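A compact sketch of the core combination described above (rating-based latent factors anchored on LDA topics from reviews); the toy data, learning rates, and the choice to initialize item factors from topic proportions are assumptions, not the paper's exact formulation:

```python
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

# toy data: (user, item, rating) triples and one concatenated review text per item
ratings = [(0, 0, 5.0), (0, 1, 3.0), (1, 0, 4.0), (1, 2, 2.0), (2, 1, 4.0)]
item_reviews = ["great battery life and screen",
                "poor battery but a nice price",
                "fast shipping, average quality"]

K = 2  # number of latent factors == number of topics
X = CountVectorizer().fit_transform(item_reviews)
item_topics = LatentDirichletAllocation(n_components=K, random_state=0).fit_transform(X)

n_users, n_items = 3, 3
mu = np.mean([r for _, _, r in ratings])
b_u, b_i = np.zeros(n_users), np.zeros(n_items)
P = 0.1 * np.random.RandomState(0).randn(n_users, K)
Q = item_topics.copy()                   # item factors start at LDA topic proportions

lr, reg = 0.02, 0.05
for _ in range(200):                     # SGD over observed ratings
    for u, i, r in ratings:
        err = r - (mu + b_u[u] + b_i[i] + P[u] @ Q[i])
        b_u[u] += lr * (err - reg * b_u[u])
        b_i[i] += lr * (err - reg * b_i[i])
        P[u], Q[i] = (P[u] + lr * (err * Q[i] - reg * P[u]),
                      Q[i] + lr * (err * P[u] - reg * Q[i]))

print("predicted rating (user 2, item 0):", round(mu + b_u[2] + b_i[0] + P[2] @ Q[0], 2))
```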
Regional Topics in British Grocery Retail Transactions
Mariflor Vega Carrasco, Mirco Musolesi, Jason O'Sullivan, Rosie Prior, Ioanna Manolopoulou
Understanding the customer behaviours behind transactional data has high commercial value in the grocery retail industry. Customers generate millions of transactions every day, choosing and buying products to satisfy specific shopping needs. Product availability may vary geographically due to local demand and local supply, which makes it important to analyse transactions within their corresponding store and regional context. Topic models provide a powerful tool for the analysis of transactional data, identifying topics that display frequently-bought-together products and summarising transactions as mixtures of topics. We use the Segmented Topic Model (STM) to capture customer behaviours that are nested within stores. STM not only provides topics and transaction summaries but also topical summaries at the store level that can be used to identify regional topics. We summarise the posterior distribution of STM by post-processing multiple posterior samples and selecting semantic modes represented as recurrent topics. We use linear Gaussian process regression to model topic prevalence across British territory while accounting for spatial autocorrelation. We apply our methods to a dataset of transactions from a major UK grocery retailer and demonstrate that shopping behaviours may vary regionally and that nearby stores tend to exhibit similar regional demand.
http://arxiv.org/abs/2111.08078v1
"2021-11-15T20:55:53Z"
stat.AP, stat.ME
2021
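A minimal sketch of the final spatial step, regressing one topic's prevalence on store coordinates with a Gaussian process; sklearn's RBF kernel stands in for the paper's linear GP, and the coordinates and prevalences are invented:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# (latitude, longitude) of stores and one topic's prevalence at each store
coords = np.array([[51.5, -0.1], [53.5, -2.2], [55.9, -3.2], [52.5, -1.9]])
prevalence = np.array([0.30, 0.22, 0.11, 0.25])

# the RBF kernel encodes spatial autocorrelation; WhiteKernel absorbs noise
kernel = 1.0 * RBF(length_scale=1.0) + WhiteKernel(noise_level=1e-3)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(coords, prevalence)

mean, std = gp.predict(np.array([[54.0, -1.5]]), return_std=True)
print(f"predicted prevalence at new location: {mean[0]:.3f} +/- {std[0]:.3f}")
```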
Forecasting Crude Oil Price Using Event Extraction
Jiangwei Liu, Xiaohong Huang
Research on crude oil price forecasting has attracted tremendous attention from scholars and policymakers due to its significant effect on the global economy. Besides supply and demand, crude oil prices are largely influenced by various factors, such as economic development, financial markets, conflicts, wars, and political events. Most previous research treats crude oil price forecasting as a time series or econometric variable prediction problem. Although there has recently been research considering the effects of real-time news events, most of these works use raw news headlines or topic models to extract text features without deeply exploring the event information. In this study, a novel crude oil price forecasting framework, AGESL, is proposed to deal with this problem. In our approach, an open-domain event extraction algorithm is used to extract underlying related events, and a text sentiment analysis algorithm is used to extract sentiment from massive news. Then a deep neural network integrating the news event features, sentiment features, and historical price features is built to predict future crude oil prices. Empirical experiments are performed on West Texas Intermediate (WTI) crude oil price data, and the results show that our approach obtains superior performance compared with several benchmark methods.
http://arxiv.org/abs/2111.09111v1
"2021-11-14T08:48:43Z"
cs.LG, cs.AI, cs.CL, econ.GN, q-fin.EC, stat.AP
2021
RATE: Overcoming Noise and Sparsity of Textual Features in Real-Time Location Estimation
Yu Zhang, Wei Wei, Binxuan Huang, Kathleen M. Carley, Yan Zhang
Real-time location inference for social media users is fundamental to spatial applications such as localized search and event detection. While tweet text is the most commonly used feature in location estimation, most prior works suffer from either the noise or the sparsity of textual features. In this paper, we aim to tackle these two problems. We use topic modeling as a building block to characterize geographic topic variation and lexical variation so that "one-hot" encoding vectors are no longer used directly. We also incorporate other features that can be extracted through the Twitter streaming API to overcome the noise problem. Experimental results show that our RATE algorithm outperforms several benchmark methods, both in the precision of region classification and in the mean distance error of latitude and longitude regression.
http://arxiv.org/abs/2111.06515v1
"2021-11-12T00:57:42Z"
cs.CL, cs.LG
2021
A quantitative and qualitative open citation analysis of retracted articles in the humanities
Ivan Heibi, Silvio Peroni
In this article, we show and discuss the results of a quantitative and qualitative analysis of open citations to retracted publications in the humanities domain. Our study was conducted by selecting retracted papers in the humanities domain and marking their main characteristics (e.g., retraction reason). Then, we gathered the citing entities and annotated their basic metadata (e.g., title, venue, subject) and the characteristics of their in-text citations (e.g., intent, sentiment). Using these data, we performed a quantitative and qualitative study of retractions in the humanities, presenting descriptive statistics and a topic modeling analysis of the citing entities' abstracts and the in-text citation contexts. Among our main findings, we noticed that there was no drop in the overall number of citations after the year of retraction, with few entities either mentioning the retraction or expressing a negative sentiment toward the cited publication. In addition, on several occasions we noticed a higher concern/awareness about citing a retracted publication among citing entities belonging to the health sciences domain, compared with the humanities and social science domains. Philosophy, arts, and history are the humanities areas that showed the highest concern toward retraction.
http://arxiv.org/abs/2111.05223v3
"2021-11-09T16:02:16Z"
cs.DL, cs.IR
2021
Trend and Thoughts: Understanding Climate Change Concern using Machine Learning and Social Media Data
Zhongkai Shangguan, Zihe Zheng, Lei Lin
Social media platforms such as Twitter nowadays provide a great opportunity to understand public opinion on climate change compared to traditional survey methods. In this paper, we construct a massive climate change Twitter dataset and conduct comprehensive analysis using machine learning. Through topic modeling and natural language processing, we show the relationship between the number of tweets about climate change and major climate events; the common topics people discuss in relation to climate change; and the trend of sentiment. Our dataset was published on Kaggle (\url{https://www.kaggle.com/leonshangguan/climate-change-tweets-ids-until-aug-2021}) and can be used in further research.
http://arxiv.org/abs/2111.14929v1
"2021-11-06T19:59:03Z"
cs.CL
2021
Investigation of Topic Modelling Methods for Understanding the Reports of the Mining Projects in Queensland
Yasuko Okamoto, Thirunavukarasu Balasubramaniam, Richi Nayak
In the mining industry, many reports are generated in the project management process. These past documents are a great resource of knowledge for future success. However, retrieving the necessary information is a tedious and challenging task if the documents are unorganized and unstructured. Document clustering is a powerful approach to cope with this problem, and many methods have been introduced in past studies. Nonetheless, there is no silver bullet that performs best for all types of documents, so exploratory studies are required when applying clustering methods to new datasets. In this study, we investigate multiple topic modelling (TM) methods. The objectives are to find an appropriate approach for mining project reports, using the dataset of the Geological Survey of Queensland, Department of Resources, Queensland Government, and to understand the contents so as to get an idea of how to organise them. Three TM methods, Latent Dirichlet Allocation (LDA), Nonnegative Matrix Factorization (NMF), and Nonnegative Tensor Factorization (NTF), are compared statistically and qualitatively. After the evaluation, we conclude that LDA performs best for this dataset; however, the possibility remains that the other methods could be adopted with some improvements.
http://arxiv.org/abs/2111.03576v1
"2021-11-05T15:52:03Z"
cs.IR, cs.LG
2021
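A small sketch of the kind of statistical comparison the study describes, fitting LDA (gensim) and NMF (sklearn) on the same tokenized corpus and scoring both with a coherence measure; the toy report snippets and the u_mass choice are assumptions:

```python
from gensim.corpora import Dictionary
from gensim.models import LdaModel, CoherenceModel
from sklearn.decomposition import NMF
from sklearn.feature_extraction.text import TfidfVectorizer

texts = [["drill", "core", "sample", "assay"], ["gold", "assay", "grade", "ore"],
         ["permit", "tenure", "report", "survey"], ["survey", "geophysics", "magnetic", "report"]]

dictionary = Dictionary(texts)
corpus = [dictionary.doc2bow(t) for t in texts]

lda = LdaModel(corpus=corpus, id2word=dictionary, num_topics=2, passes=20, random_state=0)
lda_topics = [[w for w, _ in lda.show_topic(k, topn=4)] for k in range(2)]

vec = TfidfVectorizer(tokenizer=lambda d: d, preprocessor=lambda d: d, token_pattern=None)
X = vec.fit_transform(texts)
nmf = NMF(n_components=2, random_state=0).fit(X)
terms = vec.get_feature_names_out()
nmf_topics = [[terms[i] for i in comp.argsort()[-4:][::-1]] for comp in nmf.components_]

for name, topics in [("LDA", lda_topics), ("NMF", nmf_topics)]:
    cm = CoherenceModel(topics=topics, corpus=corpus, dictionary=dictionary, coherence="u_mass")
    print(name, "u_mass coherence:", round(cm.get_coherence(), 3))
```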
A Case Study and Qualitative Analysis of Simple Cross-Lingual Opinion Mining
Gerhard Johann Hagerer, Wing Sheung Leung, Qiaoxi Liu, Hannah Danner, Georg Groh
User-generated content from social media is produced in many languages, making it technically challenging to compare the discussed themes from one domain across different cultures and regions. This is relevant for domains in a globalized world, such as market research, where people from two nations and markets might have different requirements for a product. We propose a simple, modern, and effective method for building a single topic model with sentiment analysis capable of covering multiple languages simultaneously, based on a pre-trained state-of-the-art deep neural network for natural language understanding. To demonstrate its feasibility, we apply the model to newspaper articles and user comments from a specific domain, i.e., organic food products and related consumption behavior. The themes match across languages. Additionally, we obtain a high proportion of stable and domain-relevant topics, a meaningful relation between topics and their respective textual contents, and an interpretable representation for social media documents. Marketing can potentially benefit from our method, since it provides an easy-to-use means of addressing specific customer interests from different market regions around the globe. For reproducibility, we provide the code, data, and results of our study.
http://arxiv.org/abs/2111.02259v3
"2021-11-03T14:49:50Z"
cs.CL, cs.IR
2021
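One plausible reading of the approach is to place all languages in a single embedding space and cluster there; a minimal sketch with the sentence-transformers library (the specific checkpoint and the k-means step are assumptions, not necessarily the authors' network):

```python
from sentence_transformers import SentenceTransformer
from sklearn.cluster import KMeans

comments = [
    "Organic vegetables taste better.",
    "Bio-Gemüse schmeckt einfach besser.",    # German: organic vegetables taste better
    "Organic food is too expensive for me.",
    "Bio-Lebensmittel sind mir zu teuer.",    # German: organic food is too expensive
]

# one shared multilingual embedding space for all comments
model = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")
emb = model.encode(comments)

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(emb)
for label in range(2):
    print(f"topic {label}:", [c for c, l in zip(comments, km.labels_) if l == label])
```

Because semantically similar comments land near each other regardless of language, each cluster can serve as a cross-lingual topic.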
Word embeddings for topic modeling: an application to the estimation of the economic policy uncertainty index
Hairo U. Miranda Belmonte, Victor Muñiz-Sánchez, Francisco Corona
Quantification of economic uncertainty is a key concept for the prediction of macroeconomic variables such as gross domestic product (GDP), and it becomes particularly relevant in real-time or short-term prediction methodologies such as nowcasting, which require large amounts of time series data, commonly with different structures and frequencies. Most of the data come from official statistical agencies and non-public institutions; however, relying on such traditional data alone has some disadvantages. One is that economic uncertainty may not be properly represented or measured based solely on financial or macroeconomic data; another is that such data are susceptible to lack of information due to extraordinary events, such as the current COVID-19 pandemic. For these reasons, it is now very common to complement the traditional data from official sources with non-traditional data from sources such as social networks or digital newspapers. The economic policy uncertainty (EPU) index is the most widely used newspaper-based indicator of uncertainty and is based on topic modeling of newspapers. In this paper, we propose a methodology to estimate the EPU index that incorporates a fast and efficient method for topic modeling of digital news based on semantic clustering with word embeddings, allowing the index to be updated in real time; this is a drawback of other proposals that use computationally intensive topic modeling methods, such as Latent Dirichlet Allocation (LDA). We show that our proposal allows us to update the index and significantly reduces the time required to assign new documents to topics.
http://arxiv.org/abs/2111.00057v1
"2021-10-29T19:31:03Z"
cs.LG, cs.IR
2021
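A stripped-down sketch of the key speed argument: once topic centroids exist, new articles are assigned to the nearest centroid without refitting anything. TF-IDF stands in for the paper's word embeddings, and the toy archive and cluster labelling are assumptions:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

archive = ["tariff uncertainty worries investors",
           "election results spark policy uncertainty",
           "local team wins championship final",
           "new stadium opens downtown"]

vec = TfidfVectorizer()
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(vec.fit_transform(archive))

UNCERTAINTY_CLUSTER = 0   # suppose manual inspection labels cluster 0 as "policy uncertainty"

def epu_share(new_docs):
    """Fraction of new articles in the uncertainty topic; assignment is a
    nearest-centroid lookup, so the index updates without refitting (unlike LDA)."""
    labels = km.predict(vec.transform(new_docs))
    return float(np.mean(labels == UNCERTAINTY_CLUSTER))

print(epu_share(["markets fear new trade policy uncertainty"]))
```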
Cognitive network science quantifies feelings expressed in suicide letters and Reddit mental health communities
Simmi Marina Joseph, Salvatore Citraro, Virginia Morini, Giulio Rossetti, Massimo Stella
Writing messages is key to expressing feelings. This study adopts cognitive network science to reconstruct how individuals report their feelings in clinical narratives like suicide notes or mental health posts. We achieve this by reconstructing syntactic/semantic associations between concepts in texts as co-occurrences enriched with affective data. We transform 142 suicide notes and 77,000 Reddit posts from the r/anxiety, r/depression, r/schizophrenia, and r/do-it-your-own (r/DIY) forums into 5 cognitive networks, each expressing meanings and emotions as reported by the authors. These networks reconstruct the semantic frames surrounding 'feel', enabling a quantification of prominent associations and emotions focused around feelings. We find strong feelings of sadness across all clinical Reddit boards, supplemented by fear in r/depression, and replaced by joy/anticipation in r/DIY. Semantic communities and topic modelling both highlight key narrative topics of 'regret', 'unhealthy lifestyle' and 'low mental well-being'. Importantly, negative associations and emotions co-existed with trustful/positive language focused on 'getting better'. This emotional polarisation provides quantitative evidence that online clinical boards possess a complex structure, where users mix both positive and negative outlooks. This dichotomy is absent in the r/DIY reference board and in suicide notes, where negative emotional associations about regret and pain persist but are overwhelmed by positive jargon addressing loved ones. Our quantitative comparisons provide strong evidence that suicide notes encapsulate different ways of expressing feelings compared to online Reddit boards, the latter acting more like personal diaries and relief valves. Our findings provide an interpretable, quantitative aid for supporting psychological inquiries into human feelings in digital and clinical settings.
http://arxiv.org/abs/2110.15269v2
"2021-10-28T16:26:50Z"
cs.CL, cs.CY
2021
TopicNet: Semantic Graph-Guided Topic Discovery
Zhibin Duan, Yishi Xu, Bo Chen, Dongsheng Wang, Chaojie Wang, Mingyuan Zhou
Existing deep hierarchical topic models are able to extract semantically meaningful topics from a text corpus in an unsupervised manner and automatically organize them into a topic hierarchy. However, it is unclear how to incorporate prior beliefs, such as a knowledge graph, to guide the learning of the topic hierarchy. To address this issue, we introduce TopicNet, a deep hierarchical topic model that can inject prior structural knowledge as an inductive bias to influence learning. TopicNet represents each topic as a Gaussian-distributed embedding vector, projects the topics of all layers into a shared embedding space, and explores both the symmetric and asymmetric similarities between Gaussian embedding vectors to incorporate prior semantic hierarchies. With an auto-encoding variational inference network, the model parameters are optimized by minimizing the evidence lower bound and a regularization term via stochastic gradient descent. Experiments on widely used benchmarks show that TopicNet outperforms related deep topic models at discovering deeper interpretable topics and mining better document representations.
http://arxiv.org/abs/2110.14286v1
"2021-10-27T09:07:14Z"
cs.LG, cs.IR
2021
Contrastive Learning for Neural Topic Model
Thong Nguyen, Anh Tuan Luu
Recent empirical studies show that adversarial topic models (ATM) can successfully capture semantic patterns of a document by differentiating it from another, dissimilar sample. However, this discriminative-generative architecture has two important drawbacks: (1) the architecture does not relate similar documents, which share the same document-word distribution of salient words; (2) it restricts the ability to integrate external information, such as the sentiment of the document, which has been shown to benefit the training of neural topic models. To address those issues, we revisit the adversarial topic architecture from the viewpoint of mathematical analysis, propose a novel approach to reformulate the discriminative goal as an optimization problem, and design a novel sampling method that facilitates the integration of external variables. The reformulation encourages the model to incorporate the relations among similar samples and enforces the constraint on the similarity among dissimilar ones, while the sampling method, which is based on the internal input and reconstructed output, helps inform the model of salient words contributing to the main topic. Experimental results show that our framework outperforms other state-of-the-art neural topic models in terms of topic coherence on three common benchmark datasets that vary in domain, vocabulary size, and document length.
http://arxiv.org/abs/2110.12764v1
"2021-10-25T09:46:26Z"
cs.CL
2021
Recommender Systems meet Mechanism Design
Yang Cai, Constantinos Daskalakis
Machine learning has developed a variety of tools for learning and representing high-dimensional distributions with structure. Recent years have also seen big advances in designing multi-item mechanisms. Akin to overfitting, however, these mechanisms can be extremely sensitive to the Bayesian prior that they target, which becomes problematic when that prior is only approximately known. At the same time, even if access to the exact Bayesian prior is given, it is known that optimal or even approximately optimal multi-item mechanisms run into sample, computational, representation and communication intractability barriers. We consider a natural class of multi-item mechanism design problems with very large numbers of items, but where the bidders' value distributions can be well-approximated by a topic model akin to those used in recommendation systems with very large numbers of possible recommendations. We propose a mechanism design framework for this setting, building on a recent robustification framework by Brustle et al., which disentangles the statistical challenge of estimating a multi-dimensional prior from the task of designing a good mechanism for it, and robustifies the performance of the latter against the estimation error of the former. We provide an extension of this framework appropriate for our setting, which allows us to exploit the expressive power of topic models to reduce the effective dimensionality of the mechanism design problem and remove the dependence of its computational, communication and representation complexity on the number of items.
http://arxiv.org/abs/2110.12558v2
"2021-10-25T00:03:30Z"
cs.GT, cs.IR, cs.LG, stat.ML
2021
Topic-Guided Abstractive Multi-Document Summarization
Peng Cui, Le Hu
A critical point of multi-document summarization (MDS) is to learn the relations among various documents. In this paper, we propose a novel abstractive MDS model in which we represent multiple documents as a heterogeneous graph, taking semantic nodes of different granularities into account, and then apply a graph-to-sequence framework to generate summaries. Moreover, we employ a neural topic model to jointly discover latent topics that can act as cross-document semantic units to bridge different documents and provide global information to guide the summary generation. Since topic extraction can be viewed as a special type of summarization that "summarizes" texts into a more abstract format, i.e., a topic distribution, we adopt a multi-task learning strategy to jointly train the topic and summarization modules, allowing them to promote each other. Experimental results on the Multi-News dataset demonstrate that our model outperforms previous state-of-the-art MDS models on both Rouge metrics and human evaluation, while also learning high-quality topics.
http://arxiv.org/abs/2110.11207v1
"2021-10-21T15:32:30Z"
cs.CL, cs.AI
2021
SocialVisTUM: An Interactive Visualization Toolkit for Correlated Neural Topic Models on Social Media Opinion Mining
Gerhard Johann Hagerer, Martin Kirchhoff, Hannah Danner, Robert Pesch, Mainak Ghosh, Archishman Roy, Jiaxi Zhao, Georg Groh
Recent research in opinion mining proposed word embedding-based topic modeling methods that provide superior coherence compared to traditional topic modeling. In this paper, we demonstrate how these methods can be used to display correlated topic models on social media texts using SocialVisTUM, our proposed interactive visualization toolkit. It displays a graph with topics as nodes and their correlations as edges. Further details are displayed interactively to support the exploration of large text collections, e.g., representative words and sentences of topics, topic and sentiment distributions, hierarchical topic clustering, and customizable, predefined topic labels. The toolkit optimizes automatically on custom data for optimal coherence. We show a working instance of the toolkit on data crawled from English social media discussions about organic food consumption. The visualization confirms findings of a qualitative consumer research study. SocialVisTUM and its training procedures are accessible online.
http://arxiv.org/abs/2110.10575v2
"2021-10-20T14:04:13Z"
cs.CL
2021
Uncertainty-aware Topic Modeling Visualization
Valerie Müller, Christian Sieg, Lars Linsen
Topic modeling is a state-of-the-art technique for analyzing text corpora. It uses a statistical model, most commonly Latent Dirichlet Allocation (LDA), to discover abstract topics that occur in the document collection. However, the LDA-based topic modeling procedure is based on a randomly selected initial configuration as well as a number of parameter values that need to be chosen. This induces uncertainties in the topic modeling results, and visualization methods should convey these uncertainties during the analysis process. We propose a visual uncertainty-aware topic modeling analysis. We capture the uncertainty by computing topic modeling ensembles and propose measures for estimating topic modeling uncertainty from the ensemble. We then enhance state-of-the-art topic modeling visualization methods to convey this uncertainty in the topic modeling process. We visualize the entire ensemble of topic modeling results at different levels for topic and document analysis. We apply our visualization methods to a text corpus to document the impact of uncertainty on the analysis.
http://arxiv.org/abs/2110.09247v1
"2021-10-18T12:48:33Z"
cs.HC
2021
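A small sketch of the ensemble idea: fit the same LDA under different seeds, align topics across runs with the Hungarian method, and treat the residual disagreement as an uncertainty estimate. The alignment-by-dot-product and the spread measure are assumptions, not the paper's exact measures:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

docs = ["stars galaxy telescope night sky", "galaxy cluster telescope survey",
        "election vote campaign policy", "policy debate vote senate"]
X = CountVectorizer().fit_transform(docs)

# ensemble: same data, different random initializations
runs = [LatentDirichletAllocation(n_components=2, random_state=s).fit(X).components_
        for s in range(5)]
runs = [r / r.sum(axis=1, keepdims=True) for r in runs]   # topic-word distributions

ref, spreads = runs[0], []
for r in runs[1:]:
    sim = ref @ r.T                          # similarity of topics across two runs
    row, col = linear_sum_assignment(-sim)   # optimal one-to-one topic matching
    spreads.append(np.linalg.norm(ref[row] - r[col], axis=1))

print("per-topic disagreement across the ensemble:", np.mean(spreads, axis=0).round(3))
```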
n-stage Latent Dirichlet Allocation: A Novel Approach for LDA
Zekeriya Anil Guven, Banu Diri, Tolgahan Cakaloglu
Nowadays, data analysis has become a problem as the amount of data is constantly increasing. To overcome this problem for textual data, many models and methods are used in natural language processing, among them topic modeling. Topic modeling allows determining the semantic structure of a text document. Latent Dirichlet Allocation (LDA) is the most common topic modeling method. In this article, the proposed n-stage LDA method, which can enable the LDA method to be used more effectively, is explained in detail. The positive effect of the method has been demonstrated in studies on English and Turkish corpora. Since the method focuses on reducing the word count of the dictionary, it can be used language-independently. The open-source code of the method and an example are available at: https://github.com/anil1055/n-stage_LDA
http://arxiv.org/abs/2110.08591v2
"2021-10-16T15:26:53Z"
cs.CL, cs.IR, H.3.3; I.2.7; I.7.0
2021
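The abstract's key idea (shrinking the dictionary stage by stage) can be sketched as below; the authors' released code is linked above, so this is only an illustrative reconstruction with assumed details such as the top-word cutoff:

```python
from gensim.corpora import Dictionary
from gensim.models import LdaModel

def n_stage_lda(texts, num_topics=2, stages=3, topn=10):
    """Each stage refits LDA on a vocabulary reduced to the previous
    stage's top topic words (a sketch of the n-stage idea)."""
    lda = None
    for stage in range(stages):
        dictionary = Dictionary(texts)
        corpus = [dictionary.doc2bow(t) for t in texts]
        lda = LdaModel(corpus=corpus, id2word=dictionary,
                       num_topics=num_topics, passes=20, random_state=0)
        keep = {w for k in range(num_topics)
                for w, _ in lda.show_topic(k, topn=topn)}
        texts = [[w for w in t if w in keep] for t in texts]
        print(f"stage {stage + 1}: vocabulary size {len(dictionary)}")
    return lda

docs = [["happy", "joy", "smile", "party"], ["sad", "cry", "tears", "loss"],
        ["joy", "laugh", "happy", "fun"], ["fear", "cry", "sad", "dark"]]
n_stage_lda(docs)
```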
Neural Attention-Aware Hierarchical Topic Model
Yuan Jin, He Zhao, Ming Liu, Lan Du, Wray Buntine
Neural topic models (NTMs) apply deep neural networks to topic modelling. Despite their success, NTMs generally ignore two important aspects: (1) only document-level word count information is utilized for training, while more fine-grained sentence-level information is ignored, and (2) external semantic knowledge regarding documents, sentences, and words is not exploited for training. To address these issues, we propose a variational autoencoder (VAE) NTM model that jointly reconstructs the sentence and document word counts using combinations of bag-of-words (BoW) topical embeddings and pre-trained semantic embeddings. The pre-trained embeddings are first transformed into a common latent topical space to align their semantics with the BoW embeddings. Our model also features a hierarchical KL divergence that leverages the embeddings of each document to regularize those of its sentences, thereby paying more attention to semantically relevant sentences. Both quantitative and qualitative experiments have shown the efficacy of our model in (1) lowering the reconstruction errors at both the sentence and document levels, and (2) discovering more coherent topics from real-world datasets.
http://arxiv.org/abs/2110.07161v1
"2021-10-14T05:42:32Z"
cs.CL, cs.LG
2021
Topic Model Supervised by Understanding Map
Gangli Liu
Inspired by the notion of Center of Mass in physics, an extension called the Semantic Center of Mass (SCOM) is proposed and used to discover the abstract "topic" of a document. The notion falls under a framework model called the Understanding Map Supervised Topic Model (UM-S-TM). The aim of UM-S-TM is to let both the document content and a semantic network -- specifically, an Understanding Map -- play a role in interpreting the meaning of a document. Based on different justifications, three possible methods are devised to discover the SCOM of a document. Experiments on artificial documents and Understanding Maps are conducted to test their outcomes. In addition, the model's ability to vectorize documents and capture sequential information is tested. We also compare UM-S-TM with probabilistic topic models like Latent Dirichlet Allocation (LDA) and probabilistic Latent Semantic Analysis (pLSA).
http://arxiv.org/abs/2110.06043v12
"2021-10-12T14:42:33Z"
cs.CL
2021
Topic Modeling, Clade-assisted Sentiment Analysis, and Vaccine Brand Reputation Analysis of COVID-19 Vaccine-related Facebook Comments in the Philippines
Jasper Kyle Catapang, Jerome V. Cleofas
Vaccine hesitancy and other COVID-19-related concerns and complaints in the Philippines are evident on social media. It is important to identify these different topics and sentiments in order to gauge public opinion, use the insights to develop policies, and make necessary adjustments or actions to improve the public image and reputation of the administering agency and of the COVID-19 vaccines themselves. This paper proposes a semi-supervised machine learning pipeline to perform topic modeling, sentiment analysis, and vaccine brand reputation analysis to obtain an in-depth understanding of national public opinion of Filipinos on Facebook. The methodology makes use of a multilingual version of Bidirectional Encoder Representations from Transformers (BERT) for topic modeling, hierarchical clustering, five different classifiers for sentiment analysis, and cosine similarity of BERT topic embeddings for vaccine brand reputation analysis. Results suggest that any type of COVID-19 misinformation is an emergent property of COVID-19 public opinion, and that the detection of COVID-19 misinformation can be an unsupervised task. Sentiment analysis aided by hierarchical clustering reveals that 21 of the 25 topics extracted by topic modeling are negative topics. Such negative comments spike in number whenever the Department of Health in the Philippines posts about the COVID-19 situation in other countries. Additionally, the high numbers of laugh reactions on the Facebook posts by the same agency -- without any humorous content -- suggest that the reactors of these posts tend to react the way they do, not because of what the posts are about, but because of who posted them.
http://arxiv.org/abs/2111.04416v1
"2021-10-11T11:08:38Z"
cs.CY, cs.CL
2021
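The brand-reputation step boils down to cosine similarity between embedding centroids; a deliberately tiny sketch with made-up vectors (real topic and brand embeddings would be means of BERT comment embeddings):

```python
import numpy as np
from sklearn.metrics.pairwise import cosine_similarity

# hypothetical topic embeddings (e.g., mean BERT vectors of comments per topic)
topic_embeddings = {"side_effects": np.array([[0.8, 0.1, 0.2]]),
                    "supply_shortage": np.array([[0.2, 0.9, 0.1]])}
# hypothetical brand embeddings (mean vectors of comments mentioning each brand)
brand_embeddings = {"BrandA": np.array([[0.7, 0.2, 0.3]]),
                    "BrandB": np.array([[0.1, 0.8, 0.2]])}

# reputation signal: how strongly each brand is associated with each topic
for brand, b in brand_embeddings.items():
    for topic, t in topic_embeddings.items():
        print(f"{brand} ~ {topic}: {cosine_similarity(b, t)[0, 0]:.2f}")
```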
Hotel Preference Rank based on Online Customer Review
Muhammad Apriandito Arya Saputra, Andry Alamsyah, Fajar Ibnu Fatihan
Topline hotels are now shifting to digital methods of understanding their customers in order to maintain and ensure satisfaction. Rather than the conventional approach of written reviews or interviews, hotels are now heavily investing in Artificial Intelligence, particularly Machine Learning solutions. Analysis of online customer reviews changes the way companies make decisions, more effectively than conventional analysis. The purpose of this research is to measure hotel service quality. The proposed approach emphasizes service quality dimensions in reviews of the top five luxury hotels in Indonesia that appear on the online travel site TripAdvisor in its Best of 2018 section. We use a model based on a simple Bayesian classifier to classify each customer review into one of the service quality dimensions. Our model separated the classes properly, as measured by accuracy, kappa, recall, precision, and F-measure. To uncover latent topics in customers' opinions, we use topic modeling. We found that the most common issue concerns responsiveness, which received the lowest percentage compared to the other dimensions. Our research provides end customers with a faster overview of hotel rank based on service quality, built on a summary of previous online reviews.
http://arxiv.org/abs/2110.06133v1
"2021-10-10T15:59:01Z"
cs.IR, cs.SI, econ.GN, q-fin.EC
2021
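The classification step maps naturally to a multinomial naive Bayes pipeline; a minimal sketch with invented reviews and two of the service-quality dimensions (the paper's full label set and data are not shown in the abstract):

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

reviews = ["staff replied quickly to every request",
           "the room was spotless and beautiful",
           "waited an hour at the front desk",
           "lovely pool and a very clean lobby"]
dimensions = ["responsiveness", "tangibles", "responsiveness", "tangibles"]

clf = make_pipeline(CountVectorizer(), MultinomialNB()).fit(reviews, dimensions)
print(clf.predict(["nobody answered our calls for room service"]))
```

On held-out data, the accuracy, kappa, recall, precision, and F-measure cited in the abstract could be computed with sklearn.metrics.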
Sentiment Analysis and Topic Modeling for COVID-19 Vaccine Discussions
Hui Yin, Xiangyu Song, Shuiqiao Yang, Jianxin Li
The outbreak of the novel Coronavirus Disease 2019 (COVID-19) has lasted for nearly two years and caused unprecedented impacts on people's daily lives around the world. Even worse, the emergence of the COVID-19 Delta variant has once again put the world in danger. Fortunately, many countries and companies have been developing coronavirus vaccines since the beginning of this disaster. To date, more than 20 vaccines have been approved by the World Health Organization (WHO), bringing light to people besieged by the pandemic. The promotion of COVID-19 vaccination around the world has also sparked many discussions on social media about different aspects of vaccines, such as efficacy and safety. However, little research has systematically analyzed public opinion toward COVID-19 vaccines. In this study, we conduct an in-depth analysis of tweets related to coronavirus vaccines on Twitter to understand the trending topics and their corresponding sentiment polarities at the country and vaccine levels. The results show that a majority of people are confident in the effectiveness of vaccines and are willing to get vaccinated. In contrast, the negative tweets are often associated with complaints about vaccine shortages, side effects after injection, and possible death after being vaccinated. Overall, this study exploits popular NLP and topic modeling methods to mine people's opinions on COVID-19 vaccines on social media and to analyse and visualise them objectively. Our findings can improve the readability of the noisy information on social media and provide effective data support for governments and policymakers.
http://arxiv.org/abs/2111.04415v1
"2021-10-08T23:30:17Z"
cs.CY, cs.CL
2021
Learning Topic Models: Identifiability and Finite-Sample Analysis
Yinyin Chen, Shishuang He, Yun Yang, Feng Liang
Topic models provide a useful text-mining tool for learning, extracting, and discovering latent structures in large text corpora. Although a plethora of methods have been proposed for topic modeling, lacking in the literature is a formal theoretical investigation of the statistical identifiability and accuracy of latent topic estimation. In this paper, we propose a maximum likelihood estimator (MLE) of latent topics based on a specific integrated likelihood that is naturally connected to the concept, in computational geometry, of volume minimization. Our theory introduces a new set of geometric conditions for topic model identifiability, conditions that are weaker than conventional separability conditions, which typically rely on the existence of pure topic documents or of anchor words. Weaker conditions allow a wider and thus potentially more fruitful investigation. We conduct finite-sample error analysis for the proposed estimator and discuss connections between our results and those of previous investigations. We conclude with empirical studies employing both simulated and real datasets.
http://arxiv.org/abs/2110.04232v2
"2021-10-08T16:35:42Z"
stat.ML, cs.IR, cs.LG, stat.ME
2021
Analysis of the influence of political polarization in the vaccination stance: the Brazilian COVID-19 scenario
Régis Ebeling, Carlos Abel Córdova Sáenz, Jeferson Nobre, Karin Becker
The outbreak of COVID-19 had a huge global impact, and non-scientific beliefs and political polarization have significantly influenced the population's behavior. In this context, COVID vaccines were made available in an unprecedented time, but a high level of hesitancy has been observed that can undermine community immunization. Traditionally, anti-vaccination attitudes are related more to conspiratorial thinking than to political bias. In Brazil, a country with an exemplary tradition of large-scale vaccination programs, all COVID-related topics have also been discussed under a strong political bias. In this paper, we use a multidimensional analysis framework to understand whether anti/pro-vaccination stances expressed by Brazilians on social media are influenced by political polarization. The analysis framework incorporates techniques to automatically infer users' political orientation, topic modeling to discover their concerns, network analysis to characterize their social behavior, and the characterization of information sources and external influence. Our main findings confirm that anti/pro stances are biased by political polarization, right and left, respectively. While a significant proportion of pro-vaxxers display haste for an immunization program and criticize the government's actions, the anti-vaxxers distrust a vaccine developed in record time. The anti-vaccination stance is also related to prejudice against China (anti-sinovaxxers), revealing conspiracy theories related to communism. All groups display "echo chamber" behavior, revealing that they are not open to distinct views.
http://arxiv.org/abs/2110.03382v1
"2021-10-07T12:21:33Z"
cs.SI
2021
Investigating Health-Aware Smart-Nudging with Machine Learning to Help People Pursue Healthier Eating-Habits
Mansura A Khan, Khalil Muhammad, Barry Smyth, David Coyle
Food choices and eating habits directly contribute to our long-term health. This makes the food recommender system a potential tool for addressing the global crisis of obesity and malnutrition. Over the past decade, artificial intelligence and medical researchers have become more invested in researching tools that can guide and help people make healthy and thoughtful decisions around food and diet. In many typical recommender system (RS) domains, smart nudges have been proven effective in shaping users' consumption patterns. In recent years, knowledgeable nudging and incentivizing choices started getting attention in the food domain as well. To develop smart nudging for promoting healthier food choices, we combined machine learning and RS technology with food-healthiness guidelines from recognized health organizations, such as the World Health Organization, the Food Standards Agency, and the National Health Service United Kingdom. In this paper, we discuss our research on persuasive visualization for making users aware of the healthiness of recommended recipes. We propose three novel nudging technologies, the WHO-BubbleSlider, the FSA-ColorCoding, and the DRCI-MLCP, that encourage users to choose healthier recipes. We also propose a topic-modeling-based portion-size recommendation algorithm. To evaluate our proposed smart nudges, we conducted an online user study with 96 participants and 92,250 recipes. Results showed that, during the food decision-making process, appropriate healthiness cues make users more likely to click, browse, and choose healthier recipes over less healthy ones.
http://arxiv.org/abs/2110.07045v1
"2021-10-05T10:56:02Z"
cs.HC, cs.AI, cs.IR, cs.LG, 68U35 (Primary), 68T35 (Secondary), 68T50(Secondary), I.2.1
2021
Extracting Major Topics of COVID-19 Related Tweets
Faezeh Azizi, Hamed Vahdat-Nejad, Hamideh Hajiabadi, Mohammad Hossein Khosravi
With the outbreak of COVID-19, the activity of users on Twitter has significantly increased. Some studies have investigated the hot topics of tweets in this period; however, little attention has been paid to presenting and analyzing the spatial and temporal trends of COVID-19 topics. In this study, we use topic modeling to extract global topics from COVID-19 tweets posted during the nationwide quarantine period (March 23 to June 23, 2020). We implement the Latent Dirichlet Allocation (LDA) algorithm to extract the topics and then label them "reopening", "death cases", "telecommuting", "protests", "anger expression", "masking", "medication", "social distance", "second wave", and "peak of the disease". We additionally analyze the temporal trends of the topics for the whole world and for four countries. Analysis of the resulting trends reveals interesting shifts in users' focus on topics over time.
http://arxiv.org/abs/2110.01876v1
"2021-10-05T08:40:51Z"
cs.SI, cs.IR, cs.LG
2021
Beyond Topics: Discovering Latent Healthcare Objectives from Event Sequences
Adrian Caruana, Madhushi Bandara, Daniel Catchpoole, Paul J Kennedy
A meaningful understanding of clinical protocols and patient pathways helps improve healthcare outcomes. Electronic health records (EHR) reflect real-world treatment behaviours that are used to enhance healthcare management, but they present challenges: protocols and pathways are often loosely defined, with elements frequently not recorded in EHRs, complicating their enhancement. To address this challenge, healthcare objectives associated with healthcare management activities can be indirectly observed in EHRs as latent topics. Topic models, such as Latent Dirichlet Allocation (LDA), are used to identify latent patterns in EHR data. However, they neither examine the ordered nature of EHR sequences nor appraise individual events in isolation. Our novel approach, the Categorical Sequence Encoder (CaSE), addresses these shortcomings. The sequential nature of EHRs is captured by CaSE's event-level representations, revealing latent healthcare objectives. On synthetic EHR sequences, CaSE outperforms LDA by up to 37% at identifying healthcare objectives. On the real-world MIMIC-III dataset, CaSE identifies meaningful representations that could critically enhance protocol and pathway development.
http://arxiv.org/abs/2110.01160v1
"2021-10-04T02:52:14Z"
cs.LG, cs.AI, cs.CL
2021
Artificial intelligence for Sustainable Energy: A Contextual Topic Modeling and Content Analysis
Tahereh Saheb, Mohammad Dehghani
Parallel to the rising debates over sustainable energy and artificial intelligence solutions, the world is currently discussing the ethics of artificial intelligence and its possible negative effects on society and the environment. In these arguments, sustainable AI is proposed, which aims at advancing the pathway toward sustainability, such as sustainable energy. In this paper, we offer a novel contextual topic modeling approach combining LDA, BERT, and clustering. We then combine these computational analyses with a content analysis of related scientific publications to identify the main scholarly topics, sub-themes, and cross-topic themes within scientific research on sustainable AI in energy. Our research identified eight dominant topics: sustainable buildings, AI-based DSSs for urban water management, climate artificial intelligence, Agriculture 4.0, the convergence of AI with IoT, AI-based evaluation of renewable technologies, smart campus and engineering education, and AI-based optimization. We then recommend 14 potential future research strands based on the observed theoretical gaps. Theoretically, this analysis contributes to the existing literature on sustainable AI and sustainable energy; practically, it is intended as a general guide for energy engineers and scientists, AI scientists, and social scientists seeking to widen their knowledge of sustainability in AI and energy convergence research.
http://arxiv.org/abs/2110.00828v1
"2021-10-02T15:51:51Z"
cs.AI
2021
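The LDA-BERT-clustering combination is commonly realized by concatenating the two document representations before clustering; a minimal sketch under that assumption (checkpoint name, weighting factor, and data are illustrative, not the authors' exact setup):

```python
import numpy as np
from sentence_transformers import SentenceTransformer
from sklearn.cluster import KMeans
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

abstracts = ["AI optimizes energy use in smart buildings",
             "deep learning forecasts wind power output",
             "IoT sensors and AI manage urban water systems",
             "reinforcement learning for building climate control"]

# probabilistic view: LDA document-topic proportions
X = CountVectorizer().fit_transform(abstracts)
lda_vecs = LatentDirichletAllocation(n_components=2, random_state=0).fit_transform(X)

# contextual view: BERT-style sentence embeddings
bert_vecs = SentenceTransformer("all-MiniLM-L6-v2").encode(abstracts)

# concatenate both views (gamma balances their scales) and cluster
gamma = 15.0
combined = np.hstack([lda_vecs * gamma, bert_vecs])
print(KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(combined))
```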
A Generalized Hierarchical Nonnegative Tensor Decomposition
Joshua Vendrow, Jamie Haddock, Deanna Needell
Nonnegative matrix factorization (NMF) has found many applications, including topic modeling and document analysis. Hierarchical NMF (HNMF) variants are able to learn topics at various levels of granularity and illustrate their hierarchical relationships. Recently, nonnegative tensor factorization (NTF) methods have been applied in a similar fashion in order to handle data sets with complex, multi-modal structure. Hierarchical NTF (HNTF) methods have been proposed; however, these methods do not naturally generalize their matrix-based counterparts. Here, we propose a new HNTF model which directly generalizes an HNMF model as a special case, and we provide a supervised extension. We also provide a multiplicative-updates training method for this model. Our experimental results show that this model more naturally illuminates the topic hierarchy than previous HNMF and HNTF methods.
http://arxiv.org/abs/2109.14820v2
"2021-09-30T03:00:41Z"
cs.LG, stat.ML
2021
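For readers unfamiliar with multiplicative updates, the flat (non-hierarchical) NMF case looks like the sketch below; the paper's HNTF training generalizes updates of this form to tensors and hierarchies:

```python
import numpy as np

def nmf_multiplicative(X, k, iters=500, eps=1e-9):
    """Classic multiplicative-update NMF (Frobenius loss)."""
    rng = np.random.default_rng(0)
    W = rng.random((X.shape[0], k))
    H = rng.random((k, X.shape[1]))
    for _ in range(iters):
        H *= (W.T @ X) / (W.T @ W @ H + eps)   # update topics
        W *= (X @ H.T) / (W @ H @ H.T + eps)   # update document weights
    return W, H

X = np.random.default_rng(1).random((6, 8))    # toy nonnegative data matrix
W, H = nmf_multiplicative(X, k=2)
print("reconstruction error:", round(float(np.linalg.norm(X - W @ H)), 4))
```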
Evaluation of Non-Negative Matrix Factorization and n-stage Latent Dirichlet Allocation for Emotion Analysis in Turkish Tweets
Zekeriya Anil Guven, Banu Diri, Tolgahan Cakaloglu
With the development of technology, the use of social media has become quite common. Analyzing comments on social media in areas such as media and advertising plays an important role today. For this reason, new and traditional natural language processing methods are used to detect the emotion of such posts. In this paper, the Latent Dirichlet Allocation (LDA) and Non-Negative Matrix Factorization methods from topic modeling were used to determine which emotion Turkish tweets posted on Twitter express. In addition, the accuracy of the proposed n-stage method based on LDA was analyzed. The dataset consists of five emotions: angry, fear, happy, sad, and confused. NMF was the most successful of all topic modeling methods in this study. Then, the F1-measures of the Random Forest, Naive Bayes, and Support Vector Machine methods were analyzed by generating a Weka-compatible file from the word weights and class labels of the topics. Among the Weka results, the most successful method was n-stage LDA, and the most successful algorithm was Random Forest.
http://arxiv.org/abs/2110.00418v1
"2021-09-27T18:43:52Z"
cs.CL, cs.IR, cs.LG, H.3.3; I.2.7; I.7.0
2021
Topic Model Robustness to Automatic Speech Recognition Errors in Podcast Transcripts
Raluca Alexandra Fetic, Mikkel Jordahn, Lucas Chaves Lima, Rasmus Arpe Fogh Egebæk, Martin Carsten Nielsen, Benjamin Biering, Lars Kai Hansen
For a multilingual podcast streaming service, it is critical to be able to deliver relevant content to all users, independent of language. Podcast content relevance is conventionally determined using various metadata sources. However, with the increasing quality of speech recognition in many languages, utilizing automatic transcriptions to provide better content recommendations becomes possible. In this work, we explore the robustness of a Latent Dirichlet Allocation topic model when applied to transcripts created by an automatic speech recognition engine. Specifically, we explore how increasing transcription noise influences topics obtained from transcriptions in Danish, a low-resource language. First, we establish a baseline of cosine similarity scores between topic embeddings from automatic transcriptions and the descriptions of the podcasts written by the podcast creators. We then observe how the cosine similarities decrease as transcription noise increases and conclude that even when automatic speech recognition transcripts are erroneous, it is still possible to obtain high-quality topic embeddings from the transcriptions.
http://arxiv.org/abs/2109.12306v1
"2021-09-25T07:59:31Z"
cs.IR, cs.LG
2021
A Unified Graph-Based Approach to Disinformation Detection using Contextual and Semantic Relations
Marius Paraschiv, Nikos Salamanos, Costas Iordanou, Nikolaos Laoutaris, Michael Sirivianos
As recent events have demonstrated, disinformation spread through social networks can have dire political, economic and social consequences. Detecting disinformation must inevitably rely on the structure of the network, on users' particularities and on event occurrence patterns. We present a graph data structure, which we denote as a meta-graph, that combines underlying users' relational event information, as well as semantic and topical modeling. We detail the construction of an example meta-graph using Twitter data covering the 2016 US election campaign and then compare the detection of disinformation at the cascade level, using well-known graph neural network algorithms, to the same algorithms applied to the meta-graph nodes. The comparison shows a consistent 3%-4% improvement in accuracy when using the meta-graph, over all considered algorithms, compared to basic cascade classification, and a further 1% increase when topic modeling and sentiment analysis are considered. We carry out the same experiment on two other datasets, HealthRelease and HealthStory, part of the FakeHealth dataset repository, with consistent results. Finally, we discuss further advantages of our approach, such as the ability to augment the graph structure using external data sources and the ease with which multiple meta-graphs can be combined, and we compare our method to other graph-based disinformation detection frameworks.
http://arxiv.org/abs/2109.11781v1
"2021-09-24T07:23:59Z"
cs.SI
2021
Enriching and Controlling Global Semantics for Text Summarization
Thong Nguyen, Anh Tuan Luu, Truc Lu, Tho Quan
Recently, Transformer-based models have been proven effective in the abstractive summarization task by creating fluent and informative summaries. Nevertheless, these models still suffer from the short-range dependency problem, causing them to produce summaries that miss the key points of the document. In this paper, we attempt to address this issue by introducing a neural topic model, empowered with normalizing flow, to capture the global semantics of the document, which are then integrated into the summarization model. In addition, to avoid the overwhelming effect of global semantics on contextualized representations, we introduce a mechanism to control the amount of global semantics supplied to the text generation module. Our method outperforms state-of-the-art summarization models on five common text summarization datasets, namely CNN/DailyMail, XSum, Reddit TIFU, arXiv, and PubMed.
http://arxiv.org/abs/2109.10616v1
"2021-09-22T09:31:50Z"
cs.CL
2021
Tecnologica cosa: Modeling Storyteller Personalities in Boccaccio's Decameron
A. Feder Cooper, Maria Antoniak, Christopher De Sa, Marilyn Migiel, David Mimno
We explore Boccaccio's Decameron to see how digital humanities tools can be used for tasks that have limited data in a language no longer in contemporary use: medieval Italian. We focus our analysis on the question: Do the different storytellers in the text exhibit distinct personalities? To answer this question, we curate and release a dataset based on the authoritative edition of the text. We use supervised classification methods to predict storytellers based on the stories they tell, confirming the difficulty of the task, and demonstrate that topic modeling can extract thematic storyteller "profiles."
http://arxiv.org/abs/2109.10506v1
"2021-09-22T03:42:14Z"
cs.CL, cs.LG
2021
Towards Explainable Scientific Venue Recommendations
Bastian Schäfermeier, Gerd Stumme, Tom Hanika
Selecting the best scientific venue (i.e., conference or journal) for the submission of a research article constitutes a multifaceted challenge. Important aspects to consider are the suitability of research topics, a venue's prestige, and the probability of acceptance. The selection problem is exacerbated by the continuous emergence of additional venues. Previously proposed approaches for supporting authors in this process rely on complex recommender systems, e.g., based on Word2Vec or TextCNN. These, however, often lack an explanation for their recommendations. In this work, we propose an unsophisticated method that advances the state of the art in two respects: first, we enhance the interpretability of recommendations through topic models based on non-negative matrix factorization; second, we can, surprisingly, obtain competitive recommendation performance while using simpler learning methods.
http://arxiv.org/abs/2109.11343v1
"2021-09-21T10:25:26Z"
cs.IR, cs.AI
2021
Evolution of topics in central bank speech communication
Magnus Hansson
This paper studies the content of central bank speech communication from 1997 through 2020 and asks the following questions: (i) What global topics do central banks talk about? (ii) How do these topics evolve over time? I turn to natural language processing, and more specifically Dynamic Topic Models, to answer these questions. The analysis consists of an aggregate study of nine major central banks and a case study of the Federal Reserve, which allows for region specific control variables. I show that: (i) Central banks address a broad range of topics. (ii) The topics are well captured by Dynamic Topic Models. (iii) The global topics exhibit strong and significant autoregressive properties not easily explained by financial control variables.
http://arxiv.org/abs/2109.10058v1
"2021-09-21T09:57:18Z"
econ.GN, q-fin.EC
2021
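A minimal sketch of fitting a dynamic topic model of the kind used in the paper above, assuming gensim's LdaSeqModel; the toy speeches and time slicing are invented:

```python
from gensim.corpora import Dictionary
from gensim.models import LdaSeqModel

# tokenized speeches ordered by time, two per period
texts = [["inflation", "target", "rates"], ["rates", "inflation", "policy"],
         ["crisis", "liquidity", "banks"], ["banks", "stability", "crisis"]]
dictionary = Dictionary(texts)
corpus = [dictionary.doc2bow(t) for t in texts]

# time_slice gives the number of documents in each time period
ldaseq = LdaSeqModel(corpus=corpus, id2word=dictionary,
                     time_slice=[2, 2], num_topics=2)
print(ldaseq.print_topic_times(topic=0))   # one topic's top words per period
```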
Not All Comments are Equal: Insights into Comment Moderation from a Topic-Aware Model
Elaine Zosa, Ravi Shekhar, Mladen Karan, Matthew Purver
Moderation of reader comments is a significant problem for online news platforms. Here, we experiment with models for automatic moderation, using a dataset of comments from a popular Croatian newspaper. Our analysis shows that while comments that violate the moderation rules mostly share common linguistic and thematic features, their content varies across the different sections of the newspaper. We therefore make our models topic-aware, incorporating semantic features from a topic model into the classification decision. Our results show that topic information improves the performance of the model, increases its confidence in correct outputs, and helps us understand the model's outputs.
http://arxiv.org/abs/2109.10033v1
"2021-09-21T08:57:17Z"
cs.CL
2021
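A compact sketch of making a moderation classifier topic-aware by appending document-topic proportions to lexical features, as described above; the toy comments, label scheme, and feature mix are assumptions:

```python
from scipy.sparse import hstack
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
from sklearn.linear_model import LogisticRegression

comments = ["great article, thanks for sharing",
            "you are an idiot, go away",
            "interesting point about the economy",
            "idiots like you should be banned"]
blocked = [0, 1, 0, 1]   # moderator decisions

tfidf = TfidfVectorizer().fit(comments)
cv = CountVectorizer().fit(comments)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(cv.transform(comments))

def features(texts):
    # lexical features plus document-topic proportions from the topic model
    return hstack([tfidf.transform(texts), lda.transform(cv.transform(texts))])

clf = LogisticRegression().fit(features(comments), blocked)
print(clf.predict(features(["what an idiot wrote this"])))
```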
Co-occurrence of medical conditions: Exposing patterns through probabilistic topic modeling of SNOMED codes
Moumita Bhattacharya, Claudine Jurkovitz, Hagit Shatkay
Patients with multiple co-occurring health conditions often face aggravated complications and less favorable outcomes. Co-occurring conditions are especially prevalent among individuals suffering from kidney disease, an increasingly widespread condition affecting 13% of the general population in the US. This study aims to identify and characterize patterns of co-occurring medical conditions in patients employing a probabilistic framework. Specifically, we apply topic modeling in a non-traditional way to find associations across SNOMED CT codes assigned and recorded in the EHRs of more than 13,000 patients diagnosed with kidney disease. Unlike most prior work on topic modeling, we apply the method to codes rather than to natural language. Moreover, we quantitatively evaluate the topics, assessing their tightness and distinctiveness, and also assess the medical validity of our results. Our experiments show that each topic is succinctly characterized by a few highly probable and unique disease codes, indicating that the topics are tight. Furthermore, the inter-topic distance between each pair of topics is typically high, illustrating distinctiveness. Last, most coded conditions grouped together within a topic are indeed reported to co-occur in the medical literature. Notably, our results uncover a few indirect associations among conditions that have hitherto not been reported as correlated in the medical literature.
http://arxiv.org/abs/2109.09199v1
"2021-09-19T19:34:21Z"
cs.LG
2,021
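A sketch of the non-traditional use of topic modeling described above: each patient's set of recorded condition codes is treated as a "document" of code tokens (the codes below are invented placeholders, not real SNOMED CT identifiers).

from gensim.corpora import Dictionary
from gensim.models import LdaModel

# Each "document" is one patient's list of recorded condition codes.
patients = [["C001", "C017", "C042"],
            ["C001", "C017", "C099"],
            ["C300", "C311", "C042"],
            ["C300", "C311", "C318"]]

dictionary = Dictionary(patients)
corpus = [dictionary.doc2bow(p) for p in patients]
lda = LdaModel(corpus, id2word=dictionary, num_topics=2, random_state=0)

# Each topic is a distribution over codes; codes sharing a topic tend to co-occur.
for k in range(2):
    print(lda.show_topic(k, topn=3))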
Introducing an Abusive Language Classification Framework for Telegram to Investigate the German Hater Community
Maximilian Wich, Adrian Gorniak, Tobias Eder, Daniel Bartmann, Burak Enes Çakici, Georg Groh
Since traditional social media platforms continue to ban actors spreading hate speech or other forms of abusive language (a process known as deplatforming), these actors migrate to alternative platforms that do not moderate user content. One popular platform relevant to the German hater community is Telegram, for which only limited research efforts have been made so far. This study aims to develop a broad framework comprising (i) an abusive language classification model for German Telegram messages and (ii) a classification model for the hatefulness of Telegram channels. For the first part, we use existing abusive language datasets containing posts from other platforms to develop our classification models. For the channel classification model, we develop a method that combines channel-specific content information, collected from a topic model, with a social graph to predict the hatefulness of channels. Furthermore, we complement these two approaches for hate speech detection with insightful results on the evolution of the hater community on Telegram in Germany. We also propose methods to the hate speech research community for conducting scalable network analyses of social media platforms. As an additional output of this study, we provide an annotated abusive language dataset containing 1,149 annotated Telegram messages.
http://arxiv.org/abs/2109.07346v2
"2021-09-15T14:58:46Z"
cs.CL
2,021
Semantics of European poetry is shaped by conservative forces: The relationship between poetic meter and meaning in accentual-syllabic verse
Artjoms Šeļa, Petr Plecháč, Alie Lassche
Recent advances in cultural analytics and large-scale computational studies of art, literature and film often show that long-term change in the features of artistic works happens gradually. These findings suggest that conservative forces that shape creative domains might be underestimated. To this end, we provide the first large-scale formal evidence of the persistent association between poetic meter and semantics in 18th-19th-century European literatures, using Czech, German and Russian collections with additional data from English poetry and early modern Dutch songs. Our study traces this association through a series of clustering experiments using the abstracted semantic features of 150,000 poems. With the aid of topic modeling we infer semantic features for individual poems. Texts were also lexically simplified across collections to increase generalizability and decrease the sparseness of word frequency distributions. Topics alone enable recognition of the meters in each observed language, as may be seen from highly robust clustering of same-meter samples (median Adjusted Rand Index between 0.48 and 1). In addition, this study shows that the strength of the association between form and meaning tends to decrease over time. This may reflect a shift in aesthetic conventions between the 18th and 19th centuries as individual innovation was increasingly favored in literature. Despite this decline, it remains possible to recognize the semantics of a meter from the past or the future, which suggests the continuity of semantic traditions while also revealing the historical variability of conditions across languages. This paper argues that distinct metrical forms, which are often copied in a language over centuries, also maintain long-term semantic inertia in poetry. Our findings thus highlight the role of the formal features of cultural items in influencing the pace and shape of cultural evolution.
http://arxiv.org/abs/2109.07148v1
"2021-09-15T08:20:01Z"
cs.CL
2,021
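A hedged sketch of the clustering evaluation reported above: represent each poem sample by its topic proportions, cluster, and score recovery of the known meters with the Adjusted Rand Index (the topic vectors below are random stand-ins for the inferred features).

import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import adjusted_rand_score

rng = np.random.default_rng(0)
meter_a = rng.dirichlet([8, 1, 1], size=20)   # meter A skews toward topic 0
meter_b = rng.dirichlet([1, 8, 1], size=20)   # meter B skews toward topic 1
X = np.vstack([meter_a, meter_b])
true_meter = np.array([0] * 20 + [1] * 20)

pred = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(adjusted_rand_score(true_meter, pred))  # near 1.0 when meters separate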
What are the attackers doing now? Automating cyber threat intelligence extraction from text on pace with the changing threat landscape: A survey
Md Rayhanur Rahman, Rezvan Mahdavi-Hezaveh, Laurie Williams
Cybersecurity researchers have contributed to the automated extraction of CTI from textual sources, such as threat reports and online articles, where cyberattack strategies, procedures, and tools are described. The goal of this article is to aid cybersecurity researchers in understanding the current techniques used for cyberthreat intelligence extraction from text through a survey of relevant studies in the literature. We systematically collect "CTI extraction from text"-related studies from the literature and categorize the CTI extraction purposes. We propose a CTI extraction pipeline abstracted from these studies. We identify the data sources, techniques, and CTI sharing formats utilized in the context of the proposed pipeline. Our work finds ten types of extraction purposes, such as extraction of indicators of compromise, TTPs (tactics, techniques, and procedures of attack), and cybersecurity keywords. We also identify seven types of textual sources for CTI extraction, and textual data obtained from hacker forums, threat reports, social media posts, and online news articles have been used by almost 90% of the studies. Natural language processing, along with both supervised and unsupervised machine learning techniques such as named entity recognition, topic modelling, dependency parsing, supervised classification, and clustering, is used for CTI extraction. We observe the technical challenges associated with these studies related to obtaining available clean, labelled data which could assure replication, validation, and further extension of the studies. As we find the studies focusing on CTI information extraction from text, we advocate for building upon the current CTI extraction work to help cybersecurity practitioners with proactive decision making such as threat prioritization and automated threat modelling that utilizes knowledge from past cybersecurity incidents.
http://arxiv.org/abs/2109.06808v1
"2021-09-14T16:38:41Z"
cs.CR, cs.CL, cs.LG
2,021
Phrase-BERT: Improved Phrase Embeddings from BERT with an Application to Corpus Exploration
Shufan Wang, Laure Thompson, Mohit Iyyer
Phrase representations derived from BERT often do not exhibit complex phrasal compositionality, as the model relies instead on lexical similarity to determine semantic relatedness. In this paper, we propose a contrastive fine-tuning objective that enables BERT to produce more powerful phrase embeddings. Our approach (Phrase-BERT) relies on a dataset of diverse phrasal paraphrases, which is automatically generated using a paraphrase generation model, as well as a large-scale dataset of phrases in context mined from the Books3 corpus. Phrase-BERT outperforms baselines across a variety of phrase-level similarity tasks, while also demonstrating increased lexical diversity between nearest neighbors in the vector space. Finally, as a case study, we show that Phrase-BERT embeddings can be easily integrated with a simple autoencoder to build a phrase-based neural topic model that interprets topics as mixtures of words and phrases by performing a nearest neighbor search in the embedding space. Crowdsourced evaluations demonstrate that this phrase-based topic model produces more coherent and meaningful topics than baseline word and phrase-level topic models, further validating the utility of Phrase-BERT.
http://arxiv.org/abs/2109.06304v2
"2021-09-13T20:31:57Z"
cs.CL
2,021
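A sketch of the nearest-neighbor use of phrase embeddings described above via the sentence-transformers API; a generic public checkpoint stands in for Phrase-BERT, so the model name is our assumption, not the paper's release.

from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")   # stand-in phrase encoder

phrases = ["machine translation", "neural MT system",
           "stock market crash", "financial collapse"]
emb = model.encode(phrases, convert_to_tensor=True)

# Nearest neighbors in the embedding space should group paraphrases together.
hits = util.semantic_search(emb, emb, top_k=2)
for phrase, h in zip(phrases, hits):
    print(phrase, "->", phrases[h[1]["corpus_id"]])  # h[0] is the phrase itself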
Multiscale Analysis of Count Data through Topic Alignment
Julia Fukuyama, Kris Sankaran, Laura Symul
Topic modeling is a popular method used to describe biological count data. With topic models, the user must specify the number of topics $K$. Since there is no definitive way to choose $K$ and since a true value might not exist, we develop techniques to study the relationships across models with different $K$. This can show how many topics are consistently present across different models, if a topic is only transiently present, or if a topic splits in two when $K$ increases. This strategy gives more insight into the process generating the data than choosing a single value of $K$ would. We design a visual representation of these cross-model relationships, which we call a topic alignment, and present three diagnostics based on it. We show the effectiveness of these tools for interpreting the topics on simulated and real data, and we release an accompanying R package, alto, available at https://lasy.github.io/alto.
http://arxiv.org/abs/2109.05541v2
"2021-09-12T15:49:37Z"
stat.AP, stat.CO
2,021
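The accompanying implementation is the R package alto; as a rough Python sketch of the underlying idea (data and model choice below are assumptions), one can fit models at several values of K and align topics across models by the similarity of their topic-word vectors.

import numpy as np
from sklearn.decomposition import NMF
from sklearn.metrics.pairwise import cosine_similarity

rng = np.random.default_rng(0)
counts = rng.poisson(1.0, size=(50, 30))      # toy count matrix (samples x taxa)

models = {k: NMF(n_components=k, random_state=0).fit(counts) for k in (2, 3, 4)}

# Align topics of the K=3 model with topics of the K=4 model.
sim = cosine_similarity(models[3].components_, models[4].components_)
for i, j in enumerate(sim.argmax(axis=1)):
    print(f"K=3 topic {i} best matches K=4 topic {j} (cos={sim[i, j]:.2f})")
# A K=3 topic matching two K=4 topics about equally suggests a topic split.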
Bayesian Topic Regression for Causal Inference
Maximilian Ahrens, Julian Ashwin, Jan-Peter Calliess, Vu Nguyen
Causal inference using observational text data is becoming increasingly popular in many research areas. This paper presents the Bayesian Topic Regression (BTR) model that uses both text and numerical information to model an outcome variable. It allows estimation of both discrete and continuous treatment effects. Furthermore, it allows for the inclusion of additional numerical confounding factors next to text data. To this end, we combine a supervised Bayesian topic model with a Bayesian regression framework and perform supervised representation learning for the text features jointly with the regression parameter training, respecting the Frisch-Waugh-Lovell theorem. Our paper makes two main contributions. First, we provide a regression framework that allows causal inference in settings where both text and numerical confounders are of relevance. We show with synthetic and semi-synthetic datasets that our joint approach recovers ground truth with lower bias than any benchmark model when text and numerical features are correlated. Second, experiments on two real-world datasets demonstrate that a joint and supervised learning strategy also yields superior prediction results compared to strategies that estimate regression weights for text and non-text features separately, and is even competitive with more complex deep neural networks.
http://arxiv.org/abs/2109.05317v1
"2021-09-11T16:40:43Z"
stat.ML, cs.CL, cs.LG
2,021
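For intuition only, a sketch of the simpler two-step strategy that BTR is benchmarked against (estimate topics first, then regress the outcome on topic proportions plus numerical covariates); the paper's contribution is joint Bayesian estimation, which this sketch deliberately does not implement, and all data below are toy values.

import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.linear_model import LinearRegression

texts = ["service was slow", "great food and service",
         "slow kitchen tonight", "friendly staff great food"]
price = np.array([[20.0], [35.0], [22.0], [30.0]])   # numerical confounder
y = np.array([2.0, 5.0, 2.5, 4.5])                   # outcome, e.g. a rating

# Step 1: estimate document-topic proportions from text alone.
X = CountVectorizer().fit_transform(texts)
theta = LatentDirichletAllocation(n_components=2, random_state=0).fit_transform(X)

# Step 2: regress the outcome on topic shares and the numerical covariate.
Z = np.hstack([theta, price])
reg = LinearRegression().fit(Z, y)
print(reg.coef_)   # BTR would instead learn theta and these weights jointly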
Enhancing Self-Disclosure In Neural Dialog Models By Candidate Re-ranking
Mayank Soni, Benjamin Cowan, Vincent Wade
Neural language modelling has progressed the state of the art in different downstream Natural Language Processing (NLP) tasks. One such area is open-domain dialog modelling; neural dialog models based on GPT-2, such as DialoGPT, have shown promising performance in single-turn conversation. However, such (neural) dialog models have been criticized for generating responses which, although possibly relevant to the previous human response, tend to quickly dissipate human interest and descend into trivial conversation. One reason for such performance is the lack of an explicit conversation strategy in human-machine conversation. Humans employ a range of conversation strategies while engaging in a conversation; one key social strategy is self-disclosure (SD), the phenomenon of revealing information about oneself to others. Social penetration theory (SPT) proposes that communication between two people moves from shallow to deeper levels as the relationship progresses, primarily through self-disclosure. Disclosure helps in creating rapport among the participants engaged in a conversation. In this paper, a Self-disclosure Enhancement Architecture (SDEA) is introduced that utilizes a Self-disclosure Topic Model (SDTM) during the inference stage of a neural dialog model to re-rank response candidates and thereby enhance self-disclosure in single-turn responses from the model.
http://arxiv.org/abs/2109.05090v3
"2021-09-10T20:06:27Z"
cs.CL
2,021
Narratives in economics
Michael Roos, Matthias Reccius
There is growing awareness within the economics profession of the important role narratives play in the economy. Even though empirical approaches that try to quantify economic narratives are getting increasingly popular, there is no theory or even a universally accepted definition of economic narratives underlying this research. First, we review and categorize the economic literature concerned with narratives and work out the different paradigms that are at play. Only a subset of the literature considers narratives to be active drivers of economic activity. In order to solidify the foundation of narrative economics, we propose a definition of collective economic narratives, isolating five important characteristics. We argue that, for a narrative to be economically relevant, it must be a sense-making story that emerges in a social context and suggests action to a social group. We also systematize how a collective economic narrative differs from a topic and from other kinds of narratives that are likely to have less impact on the economy. With regard to the popular use of topic modeling as an empirical strategy, we suggest that the complementary use of other canonical methods from the natural language processing toolkit and the development of new methods is inevitable to go beyond identifying topics and be able to move towards true empirical narrative economics.
http://arxiv.org/abs/2109.02331v2
"2021-09-06T10:05:08Z"
econ.GN, q-fin.EC
2,021
Recommending Researchers in Machine Learning based on Author-Topic Model
Deepak Sharma, Bijendra Kumar, Satish Chand
The aim of this paper is to uncover the researchers in machine learning using the author-topic model (ATM). We collect 16,855 scientific papers from six top journals in the field of machine learning published from 1997 to 2016 and analyze them using ATM. The dataset is broken down into 4 intervals to identify the top researchers and to find similar researchers using their similarity scores. The similarity score is calculated using the Hellinger distance. The researchers are plotted using t-SNE, which reduces the dimensionality of the data while approximately preserving the distances between points. The analysis in our study helps upcoming researchers find the top researchers in their areas of interest.
http://arxiv.org/abs/2109.02022v1
"2021-09-05T08:16:10Z"
cs.IR, H.4, I.7
2,021
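A sketch of the similarity computation described above: given author-topic distributions, researcher similarity is scored with the Hellinger distance between topic vectors (the distributions below are toy values).

import numpy as np

def hellinger(p, q):
    # Hellinger distance between discrete distributions; 0 means identical.
    return np.sqrt(0.5 * np.sum((np.sqrt(p) - np.sqrt(q)) ** 2))

# Toy author-topic distributions over four topics (each row sums to 1).
authors = {"A": np.array([0.7, 0.1, 0.1, 0.1]),
           "B": np.array([0.6, 0.2, 0.1, 0.1]),
           "C": np.array([0.1, 0.1, 0.1, 0.7])}

for name in ("B", "C"):
    print(f"d(A, {name}) = {hellinger(authors['A'], authors[name]):.3f}")
# Smaller distance means a more similar research profile: A is closer to B.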
Effective user intent mining with unsupervised word representation models and topic modelling
Bencheng Wei
Understanding the intent behind chat between customers and customer service agents has become a crucial problem nowadays due to an exponential increase in the use of the Internet by people from different cultures and educational backgrounds. More importantly, the explosion of e-commerce has led to a significant increase in text conversation between customers and agents. In this paper, we propose an approach to mining the conversation intents behind the textual data. Using a customer service dataset, we train unsupervised text representation models and then develop an intent mapping model that ranks the predefined intents based on cosine similarity between sentences and intents. Topic-modeling techniques are used to define intents, and domain experts are also involved to interpret the topic modelling results. With this approach, we can get a good understanding of the user intentions behind unlabelled customer service textual data.
http://arxiv.org/abs/2109.01765v1
"2021-09-04T01:52:12Z"
cs.AI
2,021
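A hedged sketch of the intent-mapping step described above: rank predefined intents by cosine similarity between a sentence embedding and intent embeddings (the encoder checkpoint and intent labels are illustrative assumptions).

from sentence_transformers import SentenceTransformer
from sklearn.metrics.pairwise import cosine_similarity

model = SentenceTransformer("all-MiniLM-L6-v2")   # stand-in sentence encoder

intents = ["refund request", "delivery status", "product complaint"]
intent_emb = model.encode(intents)

def rank_intents(utterance):
    sims = cosine_similarity(model.encode([utterance]), intent_emb)[0]
    return sorted(zip(intents, sims), key=lambda t: -t[1])

print(rank_intents("where is my package, it was supposed to arrive Monday"))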
Dynamic Games in Empirical Industrial Organization
Victor Aguirregabiria, Allan Collard-Wexler, Stephen P. Ryan
This survey is organized around three main topics: models, econometrics, and empirical applications. Section 2 presents the theoretical framework, introduces the concept of Markov Perfect Nash Equilibrium, discusses existence and multiplicity, and describes the representation of this equilibrium in terms of conditional choice probabilities. We also discuss extensions of the basic framework, including models in continuous time, the concepts of oblivious equilibrium and experience-based equilibrium, and dynamic games where firms have non-equilibrium beliefs. In section 3, we first provide an overview of the types of data used in this literature, before turning to a discussion of identification issues and results, and estimation methods. We review different methods to deal with multiple equilibria and large state spaces. We also describe recent developments for estimating games in continuous time and incorporating serially correlated unobservables, and discuss the use of machine learning methods to solving and estimating dynamic games. Section 4 discusses empirical applications of dynamic games in IO. We start describing the first empirical applications in this literature during the early 2000s. Then, we review recent applications dealing with innovation, antitrust and mergers, dynamic pricing, regulation, product repositioning, advertising, uncertainty and investment, airline network competition, dynamic matching, and natural resources. We conclude with our view of the progress made in this literature and the remaining challenges.
http://arxiv.org/abs/2109.01725v2
"2021-09-03T20:45:43Z"
econ.EM
2,021
Chronic Pain and Language: A Topic Modelling Approach to Personal Pain Descriptions
Diogo A. P. Nunes, Joana Ferreira Gomes, Fani Neto, David Martins de Matos
Chronic pain is recognized as a major health problem, with impacts not only at the economic, but also at the social, and individual levels. Being a private and subjective experience, it is impossible to externally and impartially experience, describe, and interpret chronic pain as a purely noxious stimulus that would directly point to a causal agent and facilitate its mitigation, contrary to acute pain, the assessment of which is usually straightforward. Verbal communication is, thus, key to convey relevant information to health professionals that would otherwise not be accessible to external entities, namely, intrinsic qualities about the painful experience and the patient. We propose and discuss a topic modelling approach to recognize patterns in verbal descriptions of chronic pain, and use these patterns to quantify and qualify experiences of pain. Our approaches allow for the extraction of novel insights on chronic pain experiences from the obtained topic models and latent spaces. We argue that our results are clinically relevant for the assessment and management of chronic pain.
http://arxiv.org/abs/2109.00402v2
"2021-09-01T14:31:16Z"
cs.CL, cs.IR, q-bio.QM, I.2.7; I.5.3; I.5.4; J.3; J.4
2,021
STFT-LDA: An Algorithm to Facilitate the Visual Analysis of Building Seismic Responses
Zhenge Zhao, Danilo Motta, Matthew Berger, Joshua A. Levine, Ismail B. Kuzucu, Robert B. Fleischman, Afonso Paiva, Carlos Scheidegger
Civil engineers use numerical simulations of a building's responses to seismic forces to understand the nature of building failures, the limitations of building codes, and how to determine the latter to prevent the former. Such simulations generate large ensembles of multivariate, multiattribute time series. Comprehensive understanding of this data requires techniques that support the multivariate nature of the time series and can compare behaviors that are both periodic and non-periodic across multiple time scales and multiple time series themselves. In this paper, we present a novel technique to extract such patterns from time series generated from simulations of seismic responses. The core of our approach is the use of topic modeling, where topics correspond to interpretable and discriminative features of the earthquakes. We transform the raw time series data into a time series of topics, and use this visual summary to compare temporal patterns in earthquakes, query earthquakes via the topics across arbitrary time scales, and enable details on demand by linking the topic visualization with the original earthquake data. We show, through a surrogate task and an expert study, that this technique allows analysts to more easily identify recurring patterns in such time series. By integrating this technique in a prototype system, we show how it enables novel forms of visual interaction.
http://arxiv.org/abs/2109.00197v1
"2021-09-01T05:47:05Z"
cs.HC
2,021
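A loose sketch of the pipeline as we read the abstract (the quantization and windowing details are our assumptions): short-time Fourier frames of a response signal are clustered into a vocabulary of spectral "words", and LDA then summarizes each time window as a mixture of topics.

import numpy as np
from scipy.signal import stft
from sklearn.cluster import KMeans
from sklearn.decomposition import LatentDirichletAllocation

rng = np.random.default_rng(0)
signal = np.sin(np.linspace(0, 200 * np.pi, 8192)) + 0.1 * rng.standard_normal(8192)

_, _, Z = stft(signal, nperseg=128)
frames = np.abs(Z).T                          # one spectral vector per frame

words = KMeans(n_clusters=16, n_init=10, random_state=0).fit_predict(frames)

win = 8                                       # frames per "document"
docs = np.array([np.bincount(words[i:i + win], minlength=16)
                 for i in range(0, len(words) - win, win)])
topics = LatentDirichletAllocation(n_components=3, random_state=0).fit_transform(docs)
print(topics[:5])                             # topic mixture per time window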
Public sentiment analysis and topic modeling regarding COVID-19 vaccines on the Reddit social media platform: A call to action for strengthening vaccine confidence
Chad A Melton, Olufunto A Olusanya, Nariman Ammar, Arash Shaban-Nejad
The COVID-19 pandemic fueled one of the most rapid vaccine developments in history. However, misinformation spread through online social media often leads to negative vaccine sentiment and hesitancy. To investigate COVID-19 vaccine-related discussion in social media, we conducted a sentiment analysis and Latent Dirichlet Allocation topic modeling on textual data collected from 13 Reddit communities focusing on the COVID-19 vaccine from Dec 1, 2020, to May 15, 2021. Data were aggregated and analyzed by month to detect changes in sentiment and latent topics. Sentiment analysis suggested these communities expressed more positive than negative sentiment regarding vaccine-related discussions, and that this has remained stable over time. Topic modeling revealed community members mainly focused on side effects rather than outlandish conspiracy theories. COVID-19 vaccine-related content from the 13 subreddits shows that the sentiments expressed in these communities are overall more positive than negative and have not meaningfully changed since December 2020. Keywords indicating vaccine hesitancy were detected throughout the LDA topic modeling. Public sentiment and topic modeling analysis regarding vaccines could facilitate the implementation of appropriate messaging, digital interventions, and new policies to promote vaccine confidence.
http://arxiv.org/abs/2108.13293v1
"2021-08-22T00:11:19Z"
cs.IR, cs.SI, 68T50, I.2.7; J.3
2,021
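A minimal sketch of the sentiment step described above using VADER's compound score aggregated by month (the posts and dates are toy placeholders).

from collections import defaultdict
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

posts = [("2020-12", "Got my first dose, feeling hopeful!"),
         ("2020-12", "Worried about side effects honestly."),
         ("2021-01", "Second shot done, only a sore arm.")]

analyzer = SentimentIntensityAnalyzer()
by_month = defaultdict(list)
for month, text in posts:
    by_month[month].append(analyzer.polarity_scores(text)["compound"])

for month, scores in sorted(by_month.items()):
    print(month, sum(scores) / len(scores))   # mean compound sentiment per month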
A Framework for Neural Topic Modeling of Text Corpora
Shayan Fazeli, Majid Sarrafzadeh
Topic Modeling refers to the problem of discovering the main topics that have occurred in corpora of textual data, with solutions finding crucial applications in numerous fields. In this work, inspired by the recent advancements in the Natural Language Processing domain, we introduce FAME, an open-source framework enabling an efficient mechanism for extracting and incorporating textual features and utilizing them in discovering topics and clustering text documents that are semantically similar in a corpus. These features range from traditional approaches (e.g., frequency-based) to the most recent auto-encoding embeddings from transformer-based language models such as the BERT model family. To demonstrate the effectiveness of this library, we conducted experiments on the well-known Newsgroups dataset. The library is available online.
http://arxiv.org/abs/2108.08946v1
"2021-08-19T23:32:38Z"
cs.CL, cs.LG
2,021
MigrationsKB: A Knowledge Base of Public Attitudes towards Migrations and their Driving Factors
Yiyi Chen, Harald Sack, Mehwish Alam
With the increasing salience of the topic of migration in Europe, the public is now more engaged in expressing their opinions through various platforms such as Twitter. Understanding the online discourses is therefore essential to capture public opinion. The goal of this study is the analysis of social media platforms to quantify public attitudes towards migrations and to identify the different factors causing these attitudes. Tweets spanning from 2013 to July 2021, posted in the European countries that are hosts to immigrants, are collected, pre-processed, and filtered using an advanced topic modeling technique. BERT-based entity linking and sentiment analysis, and attention-based hate speech detection, are performed to annotate the curated tweets. Moreover, external databases are used to identify the potential social and economic factors causing negative public attitudes towards migration. To further promote research in the interdisciplinary fields of social science and computer science, the outcomes are integrated into a Knowledge Base (KB), i.e., MigrationsKB, which significantly extends existing models to take into account public attitudes towards migrations and the economic indicators. This KB is made public using FAIR principles and can be queried through a SPARQL endpoint. Data dumps are made available on Zenodo.
http://arxiv.org/abs/2108.07593v1
"2021-08-17T12:50:39Z"
cs.CL, cs.AI, 68T50, 68T07
2,021
Generating Cyber Threat Intelligence to Discover Potential Security Threats Using Classification and Topic Modeling
Md Imran Hossen, Ashraful Islam, Farzana Anowar, Eshtiak Ahmed, Mohammad Masudur Rahman, Xiali Hei
Due to the variety of cyber-attacks or threats, the cybersecurity community has been enhancing the traditional security control mechanisms to an advanced level so that automated tools can encounter potential security threats. Very recently, Cyber Threat Intelligence (CTI) has been presented as one of the proactive and robust mechanisms because of its automated cybersecurity threat prediction. Generally, CTI collects and analyses data from various sources, e.g., online security forums and social media, where cyber enthusiasts, analysts, and even cybercriminals discuss cyber or computer security-related topics, and discovers potential threats based on the analysis. As the manual analysis of every such discussion (posts on online platforms) is time-consuming, inefficient, and susceptible to errors, CTI as an automated tool can perform uniquely to detect cyber threats. In this paper, we identify and explore relevant CTI from hacker forums utilizing different supervised (classification) and unsupervised learning (topic modeling) techniques. To this end, we collect data from a real hacker forum and construct two datasets: a binary dataset and a multi-class dataset. We then apply several classifiers, along with deep neural network-based classifiers, on the datasets to compare their performances. We also employ the classifiers on a labeled leaked dataset as our ground truth. We further explore the datasets using unsupervised techniques. For this purpose, we leverage two topic modeling algorithms, namely Latent Dirichlet Allocation (LDA) and Non-negative Matrix Factorization (NMF).
http://arxiv.org/abs/2108.06862v3
"2021-08-16T02:30:29Z"
cs.LG, cs.CR
2,021
A Random Matrix Perspective on Random Tensors
José Henrique de Morais Goulart, Romain Couillet, Pierre Comon
Tensor models play an increasingly prominent role in many fields, notably in machine learning. In several applications, such as community detection, topic modeling and Gaussian mixture learning, one must estimate a low-rank signal from a noisy tensor. Hence, understanding the fundamental limits of estimators of that signal inevitably calls for the study of random tensors. Substantial progress has been recently achieved on this subject in the large-dimensional limit. Yet, some of the most significant among these results--in particular, a precise characterization of the abrupt phase transition (with respect to signal-to-noise ratio) that governs the performance of the maximum likelihood (ML) estimator of a symmetric rank-one model with Gaussian noise--were derived based on mean-field spin glass theory, which is not easily accessible to non-experts. In this work, we develop a sharply distinct and more elementary approach, relying on standard but powerful tools brought by years of advances in random matrix theory. The key idea is to study the spectra of random matrices arising from contractions of a given random tensor. We show how this gives access to spectral properties of the random tensor itself. For the aforementioned rank-one model, our technique yields a hitherto unknown fixed-point equation whose solution precisely matches the asymptotic performance of the ML estimator above the phase transition threshold in the third-order case. A numerical verification provides evidence that the same holds for orders 4 and 5, leading us to conjecture that, for any order, our fixed-point equation is equivalent to the known characterization of the ML estimation performance that had been obtained by relying on spin glasses. Moreover, our approach sheds light on certain properties of the ML problem landscape in large dimensions and can be extended to other models, such as asymmetric and non-Gaussian.
http://arxiv.org/abs/2108.00774v2
"2021-08-02T10:42:22Z"
stat.ML, cs.LG, math.PR, 15A69, 60B20
2,021
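A numerical toy illustrating the paper's key idea, under our own simplifications: contract a spiked random third-order tensor along one mode with the signal direction and inspect the spectrum of the resulting random matrix.

import numpy as np

rng = np.random.default_rng(0)
n, beta = 150, 3.0                            # dimension and signal strength
x = rng.standard_normal(n)
x /= np.linalg.norm(x)

W = rng.standard_normal((n, n, n)) / np.sqrt(n)    # Gaussian noise tensor
T = beta * np.einsum("i,j,k->ijk", x, x, x) + W    # rank-one spike plus noise

M = np.einsum("i,ijk->jk", x, T)              # contraction along the first mode
M = (M + M.T) / 2                             # symmetrize for a real spectrum
eig = np.linalg.eigvalsh(M)
print(eig[-1], eig[-2])   # the largest eigenvalue (roughly beta) separates
                          # from the bulk of the spectrum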
An Empirical Study of Developers' Discussions about Security Challenges of Different Programming Languages
Roland Croft, Yongzheng Xie, Mansooreh Zahedi, M. Ali Babar, Christoph Treude
Given programming languages can provide different types and levels of security support, it is critically important to consider security aspects while selecting programming languages for developing software systems. Inadequate consideration of security in the choice of a programming language may lead to potential ramifications for secure development. Whilst theoretical analysis of the supposed security properties of different programming languages has been conducted, there has been relatively little effort to empirically explore the actual security challenges experienced by developers. We have performed a large-scale study of the security challenges of 15 programming languages by quantitatively and qualitatively analysing the developers' discussions from Stack Overflow and GitHub. By leveraging topic modelling, we have derived a taxonomy of 18 major security challenges for 6 topic categories. We have also conducted comparative analysis to understand how the identified challenges vary regarding the different programming languages and data sources. Our findings suggest that the challenges and their characteristics differ substantially for different programming languages and data sources, i.e., Stack Overflow and GitHub. The findings provide evidence-based insights and understanding of security challenges related to different programming languages to software professionals (i.e., practitioners or researchers). The reported taxonomy of security challenges can assist both practitioners and researchers in better understanding and traversing the secure development landscape. This study highlights the importance of the choice of technology, e.g., programming language, in secure software engineering. Hence, the findings are expected to motivate practitioners to consider the potential impact of the choice of programming languages on software security.
http://arxiv.org/abs/2107.13723v2
"2021-07-29T03:19:52Z"
cs.SE, cs.CR
2,021
Measuring daily-life fear perception change: a computational study in the context of COVID-19
Yuchen Chai, Juan Palacios, Jianghao Wang, Yichun Fan, Siqi Zheng
COVID-19, as a global health crisis, has triggered the fear emotion with unprecedented intensity. Besides the fear of getting infected, the outbreak of COVID-19 also created significant disruptions in people's daily lives and thus evoked intensive psychological responses not directly tied to COVID-19 infections. Here, we construct an expressed-fear database using 16 million social media posts generated by 536 thousand users between January 1st, 2019 and August 31st, 2020 in China. We employ deep learning techniques to detect the fear emotion within each post and apply topic models to extract the central fear topics. Based on this database, we find that sleep disorders ("nightmare" and "insomnia") take up the largest share of fear-labeled posts in the pre-pandemic period (January 2019-December 2019) and significantly increase during COVID-19. We identify health and work-related concerns as the two major sources of fear induced by COVID-19. We also detect gender differences, with females generating more posts containing the daily-life fear sources during the COVID-19 period. This research adopts a data-driven approach to trace back public emotion, which can be used to complement traditional surveys and achieve real-time emotion monitoring to discern societal concerns and support policy decision-making.
http://arxiv.org/abs/2107.12606v1
"2021-07-27T05:17:09Z"
cs.CL
2,021
Out of the Shadows: Analyzing Anonymous' Twitter Resurgence during the 2020 Black Lives Matter Protests
Keenan Jones, Jason R. C. Nurse, Shujun Li
Recently, there had been little notable activity from the once prominent hacktivist group, Anonymous. The group, responsible for activist-based cyber attacks on major businesses and governments, appeared to have fragmented after key members were arrested in 2013. In response to the major Black Lives Matter (BLM) protests that occurred after the killing of George Floyd, however, reports indicated that the group was back. To examine this apparent resurgence, we conduct a large-scale study of Anonymous affiliates on Twitter. To this end, we first use machine learning to identify a significant network of more than 33,000 Anonymous accounts. Through topic modelling of tweets collected from these accounts, we find evidence of sustained interest in topics related to BLM. We then use sentiment analysis on tweets focused on these topics, finding evidence of a united approach amongst the group, with positive tweets typically being used to express support towards BLM, and negative tweets typically being used to criticize police actions. Finally, we examine the presence of automation in the network, identifying indications of bot-like behavior across the majority of Anonymous accounts. These findings show that whilst the group has seen a resurgence during the protests, bot activity may be responsible for exaggerating the extent of this resurgence.
http://arxiv.org/abs/2107.10554v1
"2021-07-22T10:18:32Z"
cs.CY, cs.LG
2,021
Hamiltonian Monte Carlo for Regression with High-Dimensional Categorical Data
Szymon Sacher, Laura Battaglia, Stephen Hansen
Latent variable models are increasingly used in economics for high-dimensional categorical data such as text and surveys. We demonstrate the effectiveness of Hamiltonian Monte Carlo (HMC) with parallelized automatic differentiation for analyzing such data in a computationally efficient and methodologically sound manner. Our new model, the Supervised Topic Model with Covariates, shows that carefully modeling this type of data can have significant implications for conclusions compared to a simpler, frequently used, yet methodologically problematic, two-step approach. A simulation study and a revisit of Bandiera et al. (2020)'s study of executive time use demonstrate these results. The approach accommodates thousands of parameters and does not require custom algorithms specific to each model, making it accessible for applied researchers.
http://arxiv.org/abs/2107.08112v2
"2021-07-16T20:40:54Z"
econ.EM, stat.ME
2,021
Modeling User Behaviour in Research Paper Recommendation System
Arpita Chaudhuri, Debasis Samanta, Monalisa Sarma
User intention, which often changes dynamically, is considered to be an important factor for modeling users in the design of recommendation systems. Recent studies are starting to focus on predicting user intention (what users want) beyond user preference (what users like). In this work, a user intention model is proposed based on deep sequential topic analysis. The model predicts a user's intention in terms of the topic of interest. The Hybrid Topic Model (HTM), comprising Latent Dirichlet Allocation (LDA) and Word2Vec, is proposed to derive the topic of interest of users and the history of preferences. HTM finds the true topics of papers by estimating the word-topic distribution, which includes syntactic and semantic correlations among words. Next, to model user intention, a Long Short Term Memory (LSTM) based sequential deep learning model is proposed. This model takes into account temporal context, namely the time difference between clicks on two consecutive papers seen by a user. Extensive experiments with a real-world research paper dataset indicate that the proposed approach significantly outperforms the state-of-the-art methods. Further, the proposed approach introduces a new road map to model user activity suitable for the design of a research paper recommendation system.
http://arxiv.org/abs/2107.07831v1
"2021-07-16T11:31:03Z"
cs.IR, cs.LG
2,021
Tales of a City: Sentiment Analysis of Urban Green Space in Dublin
Mohammadhossein Ghahramani, Nadina Galle, Carlo Ratti, Francesco Pilla
Social media services such as TripAdvisor and Foursquare provide opportunities for users to exchange their opinions about urban green space (UGS). Visitors can exchange their experiences with parks, woods, and wetlands in social communities via social networks. In this work, we implement a unified topic modeling approach to reveal UGS characteristics. Leveraging Artificial Intelligence techniques for opinion mining on reviews from the mentioned platforms (e.g., TripAdvisor and Foursquare) is a novel application to UGS quality assessments. We show how specific characteristics of different green spaces can be explored by using a tailor-optimized sentiment analysis model. Such an application can support local authorities and stakeholders in understanding, and justifying, future urban green space investments.
http://arxiv.org/abs/2107.06041v1
"2021-07-13T12:51:46Z"
cs.SI
2,021
Semiparametric Latent Topic Modeling on Consumer-Generated Corpora
Dominic B. Dayta, Erniel B. Barrios
Legacy procedures for topic modelling have generally suffered from overfitting and a weakness in reconstructing sparse topic structures. Motivated by consumer-generated corpora, this paper proposes a semiparametric topic model, a two-step approach utilizing nonnegative matrix factorization and semiparametric regression in topic modeling. The model enables the reconstruction of sparse topic structures in the corpus and provides a generative model for predicting topics in new documents entering the corpus. Assuming the presence of auxiliary information related to the topics, this approach exhibits better performance in discovering underlying topic structures in cases where the corpora are small and limited in vocabulary. In an actual consumer feedback corpus, the model also demonstrably provides interpretable and useful topic definitions comparable with those produced by other methods.
http://arxiv.org/abs/2107.10651v1
"2021-07-13T00:22:02Z"
cs.CL, cs.LG
2,021
Likelihood estimation of sparse topic distributions in topic models and its applications to Wasserstein document distance calculations
Xin Bing, Florentina Bunea, Seth Strimas-Mackey, Marten Wegkamp
This paper studies the estimation of high-dimensional, discrete, possibly sparse, mixture models in topic models. The data consists of observed multinomial counts of $p$ words across $n$ independent documents. In topic models, the $p\times n$ expected word frequency matrix is assumed to be factorized as a $p\times K$ word-topic matrix $A$ and a $K\times n$ topic-document matrix $T$. Since columns of both matrices represent conditional probabilities belonging to probability simplices, columns of $A$ are viewed as $p$-dimensional mixture components that are common to all documents while columns of $T$ are viewed as the $K$-dimensional mixture weights that are document specific and are allowed to be sparse. The main interest is to provide sharp, finite sample, $\ell_1$-norm convergence rates for estimators of the mixture weights $T$ when $A$ is either known or unknown. For known $A$, we suggest MLE estimation of $T$. Our non-standard analysis of the MLE not only establishes its $\ell_1$ convergence rate, but reveals a remarkable property: the MLE, with no extra regularization, can be exactly sparse and contain the true zero pattern of $T$. We further show that the MLE is both minimax optimal and adaptive to the unknown sparsity in a large class of sparse topic distributions. When $A$ is unknown, we estimate $T$ by optimizing the likelihood function corresponding to a plug in, generic, estimator $\hat{A}$ of $A$. For any estimator $\hat{A}$ that satisfies carefully detailed conditions for proximity to $A$, the resulting estimator of $T$ is shown to retain the properties established for the MLE. The ambient dimensions $K$ and $p$ are allowed to grow with the sample sizes. Our application is to the estimation of 1-Wasserstein distances between document generating distributions. We propose, estimate and analyze new 1-Wasserstein distances between two probabilistic document representations.
http://arxiv.org/abs/2107.05766v2
"2021-07-12T22:22:32Z"
math.ST, stat.ME, stat.ML, stat.TH
2,021
Investor Behavior Modeling by Analyzing Financial Advisor Notes: A Machine Learning Perspective
Cynthia Pagliaro, Dhagash Mehta, Han-Tai Shiao, Shaofei Wang, Luwei Xiong
Modeling investor behavior is crucial to identifying behavioral coaching opportunities for financial advisors. With the help of natural language processing (NLP) we analyze an unstructured (textual) dataset of financial advisors' summary notes, taken after every investor conversation, to gain first ever insights into advisor-investor interactions. These insights are used to predict investor needs during adverse market conditions; thus allowing advisors to coach investors and help avoid inappropriate financial decision-making. First, we perform topic modeling to gain insight into the emerging topics and trends. Based on this insight, we construct a supervised classification model to predict the probability that an advised investor will require behavioral coaching during volatile market periods. To the best of our knowledge, ours is the first work on exploring the advisor-investor relationship using unstructured data. This work may have far-reaching implications for both traditional and emerging financial advisory service models like robo-advising.
http://arxiv.org/abs/2107.05592v1
"2021-07-12T17:12:30Z"
q-fin.ST, q-fin.CP, stat.AP
2,021
Assigning Topics to Documents by Successive Projections
Olga Klopp, Maxim Panov, Suzanne Sigalla, Alexandre Tsybakov
Topic models provide a useful tool to organize and understand the structure of large corpora of text documents, in particular, to discover hidden thematic structure. Clustering documents from big unstructured corpora into topics is an important task in various areas, such as image analysis, e-commerce, social networks, population genetics. A common approach to topic modeling is to associate each topic with a probability distribution on the dictionary of words and to consider each document as a mixture of topics. Since the number of topics is typically substantially smaller than the size of the corpus and of the dictionary, the methods of topic modeling can lead to a dramatic dimension reduction. In this paper, we study the problem of estimating the topic distribution of each document in the given corpus, that is, we focus on the clustering aspect of the problem. We introduce an algorithm that we call Successive Projection Overlapping Clustering (SPOC), inspired by the Successive Projection Algorithm for separable matrix factorization. This algorithm is simple to implement and computationally fast. We establish theoretical guarantees on the performance of the SPOC algorithm, in particular, near matching minimax upper and lower bounds on its estimation risk. We also propose a new method that estimates the number of topics. We complement our theoretical results with a numerical study on synthetic and semi-synthetic data to analyze the performance of this new algorithm in practice. One of the conclusions is that the error of the algorithm grows at most logarithmically with the size of the dictionary, in contrast to what one observes for Latent Dirichlet Allocation.
http://arxiv.org/abs/2107.03684v1
"2021-07-08T08:58:35Z"
math.ST, stat.TH
2,021
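A hedged sketch of the successive-projection idea underlying SPOC (the full SPOC algorithm includes steps omitted here): repeatedly pick the row of largest norm and project the remaining rows onto its orthogonal complement.

import numpy as np

def successive_projections(X, K):
    # Greedily select K near-pure rows (e.g., near-vertex topic mixtures).
    R = X.astype(float).copy()
    anchors = []
    for _ in range(K):
        j = int(np.argmax(np.linalg.norm(R, axis=1)))
        anchors.append(j)
        u = R[j] / np.linalg.norm(R[j])
        R = R - np.outer(R @ u, u)            # project onto the complement of u
    return anchors

rng = np.random.default_rng(0)
pure = np.eye(3)                              # three pure-topic documents
mixed = rng.dirichlet([1, 1, 1], size=20)     # mixed-topic documents
X = np.vstack([pure, mixed])
print(successive_projections(X, 3))           # recovers the pure rows 0, 1, 2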
Evaluating Sensitivity to the Stick-Breaking Prior in Bayesian Nonparametrics
Ryan Giordano, Runjing Liu, Michael I. Jordan, Tamara Broderick
Bayesian models based on the Dirichlet process and other stick-breaking priors have been proposed as core ingredients for clustering, topic modeling, and other unsupervised learning tasks. Prior specification is, however, relatively difficult for such models, given that their flexibility implies that the consequences of prior choices are often relatively opaque. Moreover, these choices can have a substantial effect on posterior inferences. Thus, considerations of robustness need to go hand in hand with nonparametric modeling. In the current paper, we tackle this challenge by exploiting the fact that variational Bayesian methods, in addition to having computational advantages in fitting complex nonparametric models, also yield sensitivities with respect to parametric and nonparametric aspects of Bayesian models. In particular, we demonstrate how to assess the sensitivity of conclusions to the choice of concentration parameter and stick-breaking distribution for inferences under Dirichlet process mixtures and related mixture models. We provide both theoretical and empirical support for our variational approach to Bayesian sensitivity analysis.
http://arxiv.org/abs/2107.03584v3
"2021-07-08T03:40:18Z"
stat.ME, stat.CO, stat.ML
2,021
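For concreteness, a sketch of the stick-breaking construction whose prior sensitivity the paper analyzes: weights are Beta draws scaled by the remaining stick length, and the concentration parameter alpha controls how quickly the stick is used up.

import numpy as np

def stick_breaking(alpha, num_sticks, rng):
    # Truncated stick-breaking weights for a Dirichlet process prior.
    v = rng.beta(1.0, alpha, size=num_sticks)
    remaining = np.concatenate([[1.0], np.cumprod(1.0 - v[:-1])])
    return v * remaining

rng = np.random.default_rng(0)
for alpha in (0.5, 5.0):
    w = stick_breaking(alpha, num_sticks=10, rng=rng)
    print(alpha, np.round(w, 3))   # small alpha concentrates mass on few
                                   # clusters; large alpha spreads it out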
Topic Modeling in the Voynich Manuscript
Rachel Sterneck, Annie Polish, Claire Bowern
This article presents the results of investigations using topic modeling of the Voynich Manuscript (Beinecke MS408). Topic modeling is a set of computational methods which are used to identify clusters of subjects within text. We use latent Dirichlet allocation, latent semantic analysis, and non-negative matrix factorization to cluster Voynich pages into 'topics'. We then compare the topics derived from the computational models to clusters derived from the Voynich illustrations and from paleographic analysis. We find that computationally derived clusters match closely to a conjunction of scribe and subject matter (as per the illustrations), providing further evidence that the Voynich Manuscript contains meaningful text.
http://arxiv.org/abs/2107.02858v1
"2021-07-06T19:50:03Z"
cs.CL
2,021
Is Automated Topic Model Evaluation Broken?: The Incoherence of Coherence
Alexander Hoyle, Pranav Goel, Denis Peskov, Andrew Hian-Cheong, Jordan Boyd-Graber, Philip Resnik
Topic model evaluation, like evaluation of other unsupervised methods, can be contentious. However, the field has coalesced around automated estimates of topic coherence, which rely on the frequency of word co-occurrences in a reference corpus. Contemporary neural topic models surpass classical ones according to these metrics. At the same time, topic model evaluation suffers from a validation gap: automated coherence, developed for classical models, has not been validated using human experimentation for neural models. In addition, a meta-analysis of topic modeling literature reveals a substantial standardization gap in automated topic modeling benchmarks. To address the validation gap, we compare automated coherence with the two most widely accepted human judgment tasks: topic rating and word intrusion. To address the standardization gap, we systematically evaluate a dominant classical model and two state-of-the-art neural models on two commonly used datasets. Automated evaluations declare a winning model when corresponding human evaluations do not, calling into question the validity of fully automatic evaluations independent of human judgments.
http://arxiv.org/abs/2107.02173v3
"2021-07-05T17:58:52Z"
cs.CL, cs.LG
2,021
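The automated coherence scores the paper scrutinizes are typically computed along these lines (a minimal sketch with toy data; the choice of reference corpus matters, which is part of the paper's point).

from gensim.corpora import Dictionary
from gensim.models import CoherenceModel

reference_texts = [["cell", "protein", "dna"],
                   ["market", "stock", "price"],
                   ["protein", "dna", "gene"],
                   ["price", "market", "trade"]]
dictionary = Dictionary(reference_texts)

topics = [["protein", "dna", "gene"], ["market", "price", "stock"]]
cm = CoherenceModel(topics=topics, texts=reference_texts,
                    dictionary=dictionary, coherence="c_npmi")
print(cm.get_coherence())          # NPMI-based automated coherence estimate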
Evaluation of Thematic Coherence in Microblogs
Iman Munire Bilal, Bo Wang, Maria Liakata, Rob Procter, Adam Tsakalidis
Collecting together microblogs representing opinions about the same topics within the same timeframe is useful to a number of different tasks and practitioners. A major question is how to evaluate the quality of such thematic clusters. Here we create a corpus of microblog clusters from three different domains and time windows and define the task of evaluating thematic coherence. We provide annotation guidelines and human annotations of thematic coherence by journalist experts. We subsequently investigate the efficacy of different automated evaluation metrics for the task. We consider a range of metrics including surface level metrics, ones for topic model coherence and text generation metrics (TGMs). While surface level metrics perform well, outperforming topic coherence metrics, they are not as consistent as TGMs. TGMs are more reliable than all other metrics considered for capturing thematic coherence in microblog clusters due to being less sensitive to the effect of time windows.
http://arxiv.org/abs/2106.15971v1
"2021-06-30T10:32:59Z"
cs.CL
2,021
Sawtooth Factorial Topic Embeddings Guided Gamma Belief Network
Zhibin Duan, Dongsheng Wang, Bo Chen, Chaojie Wang, Wenchao Chen, Yewen Li, Jie Ren, Mingyuan Zhou
Hierarchical topic models such as the gamma belief network (GBN) have delivered promising results in mining multi-layer document representations and discovering interpretable topic taxonomies. However, they often assume in the prior that the topics at each layer are independently drawn from the Dirichlet distribution, ignoring the dependencies between the topics both at the same layer and across different layers. To relax this assumption, we propose sawtooth factorial topic embedding guided GBN, a deep generative model of documents that captures the dependencies and semantic similarities between the topics in the embedding space. Specifically, both the words and topics are represented as embedding vectors of the same dimension. The topic matrix at a layer is factorized into the product of a factor loading matrix and a topic embedding matrix, the transpose of which is set as the factor loading matrix of the layer above. Repeating this particular type of factorization, which shares components between adjacent layers, leads to a structure referred to as sawtooth factorization. An auto-encoding variational inference network is constructed to optimize the model parameters via stochastic gradient descent. Experiments on big corpora show that our models outperform other neural topic models in extracting deeper interpretable topics and deriving better document representations.
http://arxiv.org/abs/2107.02757v1
"2021-06-30T10:14:57Z"
cs.IR, cs.CL, cs.LG
2,021
The rise of populism and the reconfiguration of the German political space
Eckehard Olbrich, Sven Banisch
The paper explores the notion of a reconfiguration of political space in the context of the rise of populism and its effects on the political system. We focus on Germany and the appearance of the new right-wing party "Alternative for Germany" (AfD). Many scholars of politics discuss the rise of the new populism in Western Europe and the US with respect to a new political cleavage related to globalization, which is assumed to mainly affect the cultural dimension of the political space. As such, it might replace the older economic cleavage based on class divisions in defining the dominant dimension of political conflict. An explanation along these lines suggests a reconfiguration of the political space in the sense that (1) the main cleavage within the political space changes its direction from the economic axis towards the cultural axis, but (2) the semantics of the cultural axis itself is also changing towards globalization-related topics. Using the electoral manifestos from the Manifesto project database, we empirically address this reconfiguration of the political space by comparing political spaces for Germany built using topic modeling with the spaces based on the content analysis of the Manifesto project and the corresponding categories of political goals. We find that both spaces have a similar structure and that the AfD appears on a new dimension. In order to characterize this new dimension we employ a novel technique, inter-issue consistency networks (IICNs), which allow us to analyze the evolution of the correlations between political positions on different issues over several elections. We find that the new dimension introduced by the AfD can be related to the split-off of a new "cultural right" issue bundle from the previously existing center-right bundle.
http://arxiv.org/abs/2106.15717v2
"2021-06-29T20:43:45Z"
physics.soc-ph, cs.SI
2,021
Topic Modeling Based Extractive Text Summarization
Kalliath Abdul Rasheed Issam, Shivam Patel, Subalalitha C. N
Text summarization is an approach for identifying important information present within text documents. This computational technique aims to generate shorter versions of the source text by including only the relevant and salient information present within the source text. In this paper, we propose a novel method to summarize a text document by clustering its contents based on latent topics produced using topic modeling techniques and by generating extractive summaries for each of the identified text clusters. All extractive sub-summaries are later combined to generate a summary for any given source document. We utilize the lesser used and challenging WikiHow dataset in our approach to text summarization. This dataset is unlike the commonly used news datasets which are available for text summarization. The well-known news datasets present their most important information in the first few lines of their source texts, which makes their summarization a less challenging task when compared to summarizing the WikiHow dataset. Contrary to these news datasets, the documents in the WikiHow dataset are written using a generalized approach and have lower abstractedness and a higher compression ratio, thus posing a greater challenge to generate summaries. Many of the current state-of-the-art text summarization techniques tend to eliminate important information present in source documents in favor of brevity. Our proposed technique aims to capture all the varied information present in source documents. Although the dataset proved challenging, after performing extensive tests within our experimental setup, we have discovered that our model produces encouraging ROUGE results and summaries when compared to other published extractive and abstractive text summarization models.
http://arxiv.org/abs/2106.15313v1
"2021-06-29T12:28:19Z"
cs.CL, cs.IR
2,021
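A hedged sketch of the proposed pipeline as described: assign sentences to latent topics, then extract a representative sentence per topic cluster and concatenate (the selection criterion is simplified here to the highest topic probability).

import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

sentences = ["First, mix the flour and water into a dough.",
             "Knead the dough for ten minutes.",
             "Preheat the oven to 220 degrees.",
             "Bake the loaf until golden brown."]

X = CountVectorizer(stop_words="english").fit_transform(sentences)
theta = LatentDirichletAllocation(n_components=2, random_state=0).fit_transform(X)

clusters = theta.argmax(axis=1)               # topic assignment per sentence
summary = []
for k in range(2):
    members = np.where(clusters == k)[0]
    if len(members):                          # best sentence in each topic cluster
        summary.append(sentences[members[np.argmax(theta[members, k])]])
print(" ".join(summary))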
Integrating topic modeling and word embedding to characterize violent deaths
Alina Arseniev-Koehler, Susan D. Cochran, Vickie M. Mays, Kai-Wei Chang, Jacob Gates Foster
There is an escalating need for methods to identify latent patterns in text data from many domains. We introduce a new method to identify topics in a corpus and represent documents as topic sequences. Discourse Atom Topic Modeling draws on advances in theoretical machine learning to integrate topic modeling and word embedding, capitalizing on the distinct capabilities of each. We first identify a set of vectors ("discourse atoms") that provide a sparse representation of an embedding space. Atom vectors can be interpreted as latent topics: Through a generative model, atoms map onto distributions over words; one can also infer the topic that generated a sequence of words. We illustrate our method with a prominent example of underutilized text: the U.S. National Violent Death Reporting System (NVDRS). The NVDRS summarizes violent death incidents with structured variables and unstructured narratives. We identify 225 latent topics in the narratives (e.g., preparation for death and physical aggression); many of these topics are not captured by existing structured variables. Motivated by known patterns in suicide and homicide by gender, and recent research on gender biases in semantic space, we identify the gender bias of our topics (e.g., a topic about pain medication is feminine). We then compare the gender bias of topics to their prevalence in narratives of female versus male victims. Results provide a detailed quantitative picture of reporting about lethal violence and its gendered nature. Our method offers a flexible and broadly applicable approach to model topics in text data.
http://arxiv.org/abs/2106.14365v1
"2021-06-28T01:53:20Z"
cs.CL, cs.CY, cs.LG
2,021
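A rough sketch of the discourse-atom step as we understand it (the embedding source and sparsity settings are assumptions): learn a sparse dictionary over word vectors so that each vector is approximated by a few atoms, which can then be read as latent topics via their nearest words.

import numpy as np
from sklearn.decomposition import DictionaryLearning

rng = np.random.default_rng(0)
# Stand-in for pretrained word embeddings (vocabulary x dimension); real usage
# would load vectors trained on the corpus, e.g. word2vec or GloVe.
word_vectors = rng.standard_normal((200, 50))

dl = DictionaryLearning(n_components=20, transform_algorithm="omp",
                        transform_n_nonzero_coefs=3, random_state=0)
codes = dl.fit_transform(word_vectors)        # sparse codes over the atoms
atoms = dl.components_                        # candidate "discourse atoms"

# Interpret an atom as a topic via the words closest to it in embedding space.
sims = word_vectors @ atoms[0]
print(np.argsort(-sims)[:5])                  # indices of top words for atom 0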
Recurrent Coupled Topic Modeling over Sequential Documents
Jinjin Guo, Longbing Cao, Zhiguo Gong
Abundant sequential documents such as online archives, social media and news feeds are continuously updated, where each chunk of documents is incorporated with smoothly evolving yet dependent topics. Such digital texts have attracted extensive research on dynamic topic modeling to infer hidden evolving topics and their temporal dependencies. However, most of the existing approaches focus on single-topic-thread evolution and ignore the fact that a current topic may be coupled with multiple relevant prior topics. In addition, these approaches also incur an intractable inference problem when inferring latent parameters, resulting in a high computational cost and performance degradation. In this work, we assume that a current topic evolves from all prior topics with corresponding coupling weights, forming a multi-topic-thread evolution. Our method models the dependencies between evolving topics and thoroughly encodes their complex multi-couplings across time steps. To conquer the intractable inference challenge, a new solution with a set of novel data augmentation techniques is proposed, which successfully decomposes the multi-couplings between evolving topics. A fully conjugate model is thus obtained to guarantee the effectiveness and efficiency of the inference technique. A novel Gibbs sampler with a backward-forward filter algorithm efficiently learns latent time-evolving parameters in closed form. In addition, the latent Indian Buffet Process (IBP) compound distribution is exploited to automatically infer the overall topic number and customize the sparse topic proportions for each sequential document without bias. The proposed method is evaluated on both synthetic and real-world datasets against competitive baselines, demonstrating its superiority in terms of low per-word perplexity, highly coherent topics, and better document time prediction.
http://arxiv.org/abs/2106.13732v1
"2021-06-23T08:58:13Z"
cs.IR, cs.LG
2,021
Towards a corpus for credibility assessment in software practitioner blog articles
Ashley Williams, Matthew Shardlow, Austen Rainer
Blogs are a source of grey literature which is widely adopted by software practitioners for disseminating opinion and experience. Analysing such articles can provide useful insights into the state of practice for software engineering research. However, there are challenges in identifying higher-quality content from the large quantity of articles available. Credibility assessment can help in identifying quality content, though there is a lack of existing corpora. Credibility is typically measured through a series of conceptual criteria, with 'argumentation' and 'evidence' being two important criteria. We create a corpus labelled for argumentation and evidence that can aid the credibility community. The corpus consists of articles from the blog of a single software practitioner and is publicly available. Three annotators label the corpus with a series of conceptual credibility criteria, reaching an agreement of 0.82 (Fleiss' Kappa). We present a preliminary analysis of the corpus by using it to investigate the identification of claim sentences (one of our ten labels). We train four systems (BERT, KNN, Decision Tree and SVM) using three feature sets (Bag of Words, Topic Modelling and InferSent), achieving an F1 score of 0.64 using InferSent and a Linear SVM. Our preliminary results are promising, indicating that the corpus can help future studies in detecting the credibility of grey literature. Future research will investigate the degree to which the sentence-level annotations can infer the credibility of the overall document.
http://arxiv.org/abs/2106.11159v1
"2021-06-21T14:57:13Z"
cs.SE
2,021
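The claim-detection baseline described above, a linear SVM over simple lexical features, can be sketched as follows. The two training sentences and their labels are invented placeholders, not the paper's annotated corpus, and the InferSent and topic-model feature sets are omitted.

```python
# Minimal claim-sentence classifier: bag-of-words features + linear SVM.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline

sentences = ["Microservices always reduce deployment risk.",   # claim
             "We installed version 2.3 on Tuesday."]           # non-claim
labels = [1, 0]

clf = make_pipeline(CountVectorizer(ngram_range=(1, 2)), LinearSVC())
clf.fit(sentences, labels)
print(clf.predict(["Refactoring improves maintainability."]))
```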
Deriving Word Vectors from Contextualized Language Models using Topic-Aware Mention Selection
Yixiao Wang, Zied Bouraoui, Luis Espinosa Anke, Steven Schockaert
One of the long-standing challenges in lexical semantics consists in learning representations of words which reflect their semantic properties. The remarkable success of word embeddings for this purpose suggests that high-quality representations can be obtained by summarizing the sentence contexts of word mentions. In this paper, we propose a method for learning word representations that follows this basic strategy, but differs from standard word embeddings in two important ways. First, we take advantage of contextualized language models (CLMs) rather than bags of word vectors to encode contexts. Second, rather than learning a word vector directly, we use a topic model to partition the contexts in which words appear, and then learn different topic-specific vectors for each word. Finally, we use a task-specific supervision signal to make a soft selection of the resulting vectors. We show that this simple strategy leads to high-quality word vectors, which are more predictive of semantic properties than word embeddings and existing CLM-based strategies.
http://arxiv.org/abs/2106.07947v1
"2021-06-15T08:02:42Z"
cs.CL
2,021
Insight from NLP Analysis: COVID-19 Vaccines Sentiments on Social Media
Tao Na, Wei Cheng, Dongming Li, Wanyu Lu, Hongjiang Li
Social media is an appropriate source for analyzing public attitudes towards the COVID-19 vaccine and various vaccine brands. Nevertheless, there are few relevant studies. In this research, we collected tweets posted by UK and US residents from the Twitter API during the pandemic and designed experiments to answer three main questions concerning vaccination. To capture the dominant sentiment of the public, we performed sentiment analysis with VADER and proposed a new method that accounts for each individual's influence. This allows us to go a step further in sentiment analysis and explain some of the fluctuations in the data. The results indicated that celebrities could lead opinion shifts on social media as vaccination progressed. Moreover, at the peak, nearly 40\% of the population in both countries held a negative attitude towards COVID-19 vaccines. We also investigated people's opinions toward different vaccine brands and found that the Pfizer vaccine is the most popular. By applying the sentiment analysis tool, we discovered that most people hold positive views of the COVID-19 vaccines manufactured by most brands. Finally, we carried out topic modelling using the LDA model and found that residents in the two countries are willing to share their views and feelings concerning the vaccines. Several deaths occurred after vaccination; due to these negative events, US residents are more worried about the side effects and safety of the vaccines.
http://arxiv.org/abs/2106.04081v1
"2021-06-08T03:37:22Z"
cs.CL, cs.SI
2,021
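The VADER scoring step is straightforward to reproduce. A minimal sketch with invented tweets follows; the study's influence-weighting method is omitted, and the ±0.05 compound-score thresholds are the common VADER convention rather than something the paper specifies.

```python
# Sentiment scoring with VADER; tweets are placeholders.
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

analyzer = SentimentIntensityAnalyzer()
tweets = ["Got my second dose today, feeling great!",
          "Worried about the side effects of this vaccine."]
for t in tweets:
    scores = analyzer.polarity_scores(t)   # neg/neu/pos plus a compound score
    label = ("positive" if scores["compound"] >= 0.05 else
             "negative" if scores["compound"] <= -0.05 else "neutral")
    print(label, scores["compound"], t)
```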
Surveillance of COVID-19 Pandemic using Social Media: A Reddit Study in North Carolina
Christopher Whitfield, Yang Liu, Mohd Anwar
The coronavirus disease (COVID-19) pandemic has changed various aspects of people's lives and behaviors. At this stage, there are no ways to control the natural progression of the disease other than adopting mitigation strategies such as wearing masks, keeping physical distance, and washing hands. Moreover, at this time of social distancing, social media plays a key role in connecting people and providing a platform for expressing their feelings. In this study, we tap into social media to surveil the uptake of mitigation and detection strategies, and capture issues and concerns about the pandemic. In particular, we explore the research question, "how much can be learned regarding the public uptake of mitigation strategies and concerns about the COVID-19 pandemic by using natural language processing on Reddit posts?" After extracting COVID-related posts from the four largest subreddit communities of North Carolina over six months, we performed NLP-based preprocessing to clean the noisy data. We employed a custom Named-Entity Recognition (NER) system and a Latent Dirichlet Allocation (LDA) method for topic modeling on the Reddit corpus. We observed that 'mask', 'flu', and 'testing' are the most prevalent named entities for the "Personal Protective Equipment", "symptoms", and "testing" categories, respectively. We also observed that the most discussed topics are related to testing, masks, and employment. The mitigation measures are the most prevalent theme of discussion across all subreddits.
http://arxiv.org/abs/2106.04515v3
"2021-06-07T06:55:25Z"
cs.SI, cs.IR, cs.LG
2,021
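The LDA topic-modeling step can be sketched with scikit-learn. The posts below are invented placeholders, and the study's custom NER system is not reproduced here.

```python
# LDA over a toy document-term matrix built from cleaned Reddit-style posts.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

posts = ["where can I get a covid test in raleigh",
         "mask mandate extended for another month",
         "my employer requires weekly testing now"]
vec = CountVectorizer(stop_words="english")
dtm = vec.fit_transform(posts)

lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(dtm)
terms = vec.get_feature_names_out()
for k, topic in enumerate(lda.components_):
    print(f"topic {k}:", [terms[i] for i in topic.argsort()[:-6:-1]])
```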
Network-based Topic Interaction Map for Big Data Mining of COVID-19 Biomedical Literature
Yeseul Jeon, Dongjun Chung, Jina Park, Ick Hoon Jin
Since the emergence of the worldwide COVID-19 pandemic, relevant research has been published at a dazzling pace, yielding an abundant amount of big data in the biomedical literature. Due to the high volume of relevant literature, it is practically impossible to follow up on the research manually. Topic modeling is a well-known unsupervised learning approach that aims to reveal latent topics in text data. In this paper, we propose a novel analytical framework for estimating topic interactions, together with an effective visualization of the topics' relationships. We first estimate topic-word distributions using the biterm topic model and then estimate the topics' interactions based on the word distributions using the latent space item response model. We map these latent topics onto networks to visualize the relationships among the topics. Moreover, we develop a score that helps select meaningful words that characterize each topic. We examine how the topics are related by tracking how their relationships change across different levels of word richness, visualized with a "trajectory plot". These findings provide a thoroughly mined and intuitive representation of the relationships between topics related to a specific research area. The application of the proposed framework to the PubMed literature demonstrates the utility of our approach in understanding the topic composition of COVID-19 studies at the stage of their emergence.
http://arxiv.org/abs/2106.07374v4
"2021-06-07T06:01:17Z"
cs.IR, stat.AP
2,021
A protocol to gather, characterize and analyze incoming citations of retracted articles
Ivan Heibi, Silvio Peroni
In this article, we present a methodology which takes as input a collection of retracted articles, gathers the entities citing them, characterizes these entities along multiple dimensions (discipline, year of publication, sentiment, etc.), and applies a quantitative and qualitative analysis to the collected values. The methodology is composed of four phases: (1) identifying, retrieving, and extracting basic metadata of the entities which have cited a retracted article, (2) extracting and labeling additional features based on the textual content of the citing entities, (3) building a descriptive statistical summary based on the collected data, and finally (4) running a topic modeling analysis. The goal of the methodology is to generate data and visualizations that help in understanding citing behaviors related to retraction cases. We present the methodology in a structured step-by-step form following its four phases, discuss its limits and possible workarounds, and list the planned future improvements.
http://arxiv.org/abs/2106.01781v1
"2021-06-03T12:09:41Z"
cs.DL
2,021
T-BERT -- Model for Sentiment Analysis of Micro-blogs Integrating Topic Model and BERT
Sarojadevi Palani, Prabhu Rajagopal, Sidharth Pancholi
Sentiment analysis (SA) has become an extensive research area in recent years, impacting diverse fields including e-commerce, consumer business, and politics, driven by the increasing adoption and usage of social media platforms. It is challenging to extract topics and sentiments from unsupervised short texts emerging in such contexts, as they may contain figurative words, noisy data, and many co-existing meanings for a single word or phrase, all of which contribute to incorrectly inferred topics. Most prior research is based on a specific theme, rhetoric, or focused content, and on a clean dataset. In the work reported here, the effectiveness of BERT (Bidirectional Encoder Representations from Transformers) in sentiment classification tasks on a raw, live dataset taken from a popular microblogging platform is demonstrated. A novel T-BERT framework is proposed to show the enhanced performance obtainable by combining latent topics with contextual BERT embeddings. Numerical experiments were conducted on about 42,000 samples using the NimbleBox.ai platform, with a hardware configuration consisting of an Nvidia Tesla K80 (CUDA), a 4-core CPU, and 15GB RAM running on an isolated Google Cloud Platform instance. The empirical results show that adding topics to BERT improves the model's performance, with the proposed approach achieving an accuracy of 90.81% on sentiment classification.
http://arxiv.org/abs/2106.01097v1
"2021-06-02T12:01:47Z"
cs.CL, cs.AI
2,021
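A hedged sketch of the general T-BERT idea follows: concatenate a document's latent topic proportions with its BERT embedding before classification. The model choice, topic count, mean pooling, and downstream classifier are assumptions for illustration rather than the paper's exact architecture.

```python
# Combine LDA topic proportions with BERT embeddings for classification.
import numpy as np
import torch
from transformers import AutoTokenizer, AutoModel
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

docs = ["the new phone camera is brilliant", "worst delivery experience ever"]
labels = [1, 0]

# Latent topic features.
vec = CountVectorizer()
theta = LatentDirichletAllocation(n_components=5, random_state=0) \
    .fit_transform(vec.fit_transform(docs))

# Contextual BERT features (mean-pooled last hidden states).
tok = AutoTokenizer.from_pretrained("bert-base-uncased")
bert = AutoModel.from_pretrained("bert-base-uncased")
with torch.no_grad():
    enc = tok(docs, padding=True, return_tensors="pt")
    emb = bert(**enc).last_hidden_state.mean(dim=1).numpy()

features = np.hstack([theta, emb])          # topics + context, side by side
clf = LogisticRegression(max_iter=1000).fit(features, labels)
```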
A Query-Driven Topic Model
Zheng Fang, Yulan He, Rob Procter
Topic modeling is an unsupervised method for revealing the hidden semantic structure of a corpus. It has been increasingly adopted as a tool in the social sciences, including political science, digital humanities, and sociological research in general. One desirable property of topic models is to allow users to find topics describing a specific aspect of the corpus. A possible solution is to incorporate domain-specific knowledge into topic modeling, but this requires a specification from domain experts. We propose a novel query-driven topic model that allows users to specify a simple query in words or phrases and returns query-related topics, thus avoiding tedious work by domain experts. Our proposed approach is particularly attractive when the user-specified query has a low occurrence in the text corpus, making it difficult for traditional topic models built on word co-occurrence patterns to identify relevant topics. Experimental results demonstrate the effectiveness of our model in comparison with both classical topic models and neural topic models.
http://arxiv.org/abs/2106.07346v2
"2021-05-28T22:49:42Z"
cs.IR, cs.LG
2,021
Non-negative matrix factorization algorithms greatly improve topic model fits
Peter Carbonetto, Abhishek Sarkar, Zihao Wang, Matthew Stephens
We report on the potential for using algorithms for non-negative matrix factorization (NMF) to improve parameter estimation in topic models. While several papers have studied connections between NMF and topic models, none have suggested leveraging these connections to develop new algorithms for fitting topic models. NMF avoids the "sum-to-one" constraints on the topic model parameters, resulting in an optimization problem with simpler structure and more efficient computations. Building on recent advances in optimization algorithms for NMF, we show that first solving the NMF problem then recovering the topic model fit can produce remarkably better fits, and in less time, than standard algorithms for topic models. While we focus primarily on maximum likelihood estimation, we show that this approach also has the potential to improve variational inference for topic models. Our methods are implemented in the R package fastTopics.
http://arxiv.org/abs/2105.13440v2
"2021-05-27T20:34:46Z"
stat.ML, cs.LG, stat.CO
2,021
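The core recipe above, fit an NMF and then rescale its factors into valid topic model parameters, can be sketched in Python. Note the authors' fastTopics implementation is in R and uses different optimization algorithms than scikit-learn's defaults; the counts below are synthetic.

```python
# Fit NMF to counts, then recover topic model parameters by rescaling:
# X ~ L F  =  (L diag(s)) (diag(1/s) F), where s makes rows of F sum to one.
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
X = rng.poisson(1.0, size=(100, 300)).astype(float)   # toy document-term counts

nmf = NMF(n_components=8, init="nndsvda", max_iter=500, random_state=0)
L = nmf.fit_transform(X)      # n_docs x k loadings
F = nmf.components_           # k x n_terms factors

s = F.sum(axis=1)             # per-topic scale
F_prob = F / s[:, None]                                 # topic-word distributions
L_scaled = L * s[None, :]
theta = L_scaled / L_scaled.sum(axis=1, keepdims=True)  # document-topic proportions
```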
On the Globalization of the QAnon Conspiracy Theory Through Telegram
Mohamad Hoseini, Philipe Melo, Fabricio Benevenuto, Anja Feldmann, Savvas Zannettou
QAnon is a far-right conspiracy theory that became popular and mainstream over the past few years. Worryingly, the QAnon conspiracy theory has implications in the real world, with supporters of the theory participating in real-world violent acts like the US Capitol attack in 2021. At the same time, the QAnon theory started evolving into a global phenomenon by attracting followers across the globe and, in particular, in Europe. Therefore, it is imperative to understand how the QAnon theory became a worldwide phenomenon and how this dissemination has been happening in the online space. This paper performs a large-scale data analysis of QAnon through Telegram by collecting 4.5M messages posted in 161 QAnon groups/channels. Using Google's Perspective API, we analyze the toxicity of QAnon content across languages and over time. Also, using a BERT-based topic modeling approach, we analyze the QAnon discourse across multiple languages. Among other things, we find that the German language is prevalent in QAnon groups/channels on Telegram, even overshadowing English after 2020. Also, we find that content posted in German and Portuguese tends to be more toxic compared to English. Our topic modeling indicates that QAnon supporters discuss various topics of interest within far-right movements, including world politics, conspiracy theories, COVID-19, and the anti-vaccination movement. Taken together, we perform the first multilingual study on QAnon through Telegram and paint a nuanced overview of the globalization of the QAnon theory.
http://arxiv.org/abs/2105.13020v1
"2021-05-27T09:24:25Z"
cs.CY, cs.SI
2,021
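The BERT-based topic-modeling step can be sketched with the BERTopic library as one concrete choice; the paper's exact pipeline and multilingual embedding model are assumptions, and the messages below are invented placeholders.

```python
# Multilingual BERT-based topic modeling with BERTopic (one possible choice).
from bertopic import BERTopic

messages = ["the great awakening is coming",
            "vaccine passports are the next step of control",
            "trust the plan, patriots"] * 10   # BERTopic needs enough documents

topic_model = BERTopic(language="multilingual", min_topic_size=5)
topics, probs = topic_model.fit_transform(messages)
print(topic_model.get_topic_info())
```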
Topic Modeling and Progression of American Digital News Media During the Onset of the COVID-19 Pandemic
Xiangpeng Wan, Michael C. Lucic, Hakim Ghazzai, Yehia Massoud
Currently, the world is in the midst of a severe global pandemic, which has affected all aspects of people's lives. As a result, there is a deluge of COVID-related digital media articles published in the United States, due to the disparate effects of the pandemic. This large volume of information is difficult for an audience to consume in a reasonable amount of time. In this paper, we develop a Natural Language Processing (NLP) pipeline that is capable of automatically distilling digital articles into manageable pieces of information, while also modelling the progression of topics discussed over time, in order to help readers rapidly gain holistic perspectives on pressing issues (i.e., the COVID-19 pandemic) from a diverse array of sources. We achieve these goals by first collecting a large corpus of COVID-related articles during the onset of the pandemic. Afterwards, we apply unsupervised and semi-supervised learning procedures to summarize articles, and then cluster them based on their similarities using community detection methods. Next, we identify the topic of each cluster of articles using the BART algorithm. Finally, we provide a detailed digital media analysis based on the NLP-pipeline outputs and show how the conversation surrounding COVID-19 evolved over time.
http://arxiv.org/abs/2106.09572v1
"2021-05-25T14:27:47Z"
cs.CL
2,021
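The summarization stage can be sketched with a BART model via the transformers pipeline; the article text is a placeholder, and the clustering and BART-based topic-labeling stages are omitted.

```python
# Distill an article into a short summary with a pretrained BART model.
from transformers import pipeline

summarizer = pipeline("summarization", model="facebook/bart-large-cnn")
article = ("Officials announced new public health measures on Monday as "
           "hospitalizations continued to rise across several states. "
           "Local businesses voiced concerns about renewed restrictions, "
           "while health experts urged residents to follow the guidance.")
summary = summarizer(article, max_length=60, min_length=15, do_sample=False)
print(summary[0]["summary_text"])
```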
Have you tried Neural Topic Models? Comparative Analysis of Neural and Non-Neural Topic Models with Application to COVID-19 Twitter Data
Andrew Bennett, Dipendra Misra, Nga Than
Topic models are widely used in studying social phenomena. We conduct a comparative study examining state-of-the-art neural versus non-neural topic models, performing a rigorous quantitative and qualitative assessment on a dataset of tweets about the COVID-19 pandemic. Our results show that not only do neural topic models outperform their classical counterparts on standard evaluation metrics, but they also produce more coherent topics, which are of great benefit when studying complex social problems. We also propose a novel regularization term for neural topic models, which is designed to address the well-documented problem of mode collapse, and demonstrate its effectiveness.
http://arxiv.org/abs/2105.10165v1
"2021-05-21T07:24:09Z"
cs.CL, cs.CY, cs.IR, cs.LG
2,021
Variational Gaussian Topic Model with Invertible Neural Projections
Rui Wang, Deyu Zhou, Yuxuan Xiong, Haiping Huang
Neural topic models have triggered a surge of interest in extracting topics from text automatically, since they avoid the sophisticated derivations of conventional topic models. However, few neural topic models incorporate the word relatedness information captured in word embeddings into the modeling process. To address this issue, we propose a novel topic modeling approach, called the Variational Gaussian Topic Model (VaGTM). Based on the variational auto-encoder, the proposed VaGTM models each topic with a multivariate Gaussian in the decoder to incorporate word relatedness. Furthermore, to address the limitation that pre-trained word embeddings of topic-associated words do not follow a multivariate Gaussian, we extend VaGTM to the Variational Gaussian Topic Model with Invertible neural Projections (VaGTM-IP). Three benchmark text corpora are used in experiments to verify the effectiveness of VaGTM and VaGTM-IP. The experimental results show that VaGTM and VaGTM-IP outperform several competitive baselines and obtain more coherent topics.
http://arxiv.org/abs/2105.10095v1
"2021-05-21T02:23:02Z"
cs.AI
2,021
Learning a Latent Simplex in Input-Sparsity Time
Ainesh Bakshi, Chiranjib Bhattacharyya, Ravi Kannan, David P. Woodruff, Samson Zhou
We consider the problem of learning a latent $k$-vertex simplex $K\subset\mathbb{R}^d$, given access to $A\in\mathbb{R}^{d\times n}$, which can be viewed as a data matrix with $n$ points that are obtained by randomly perturbing latent points in the simplex $K$ (potentially beyond $K$). A large class of latent variable models, such as adversarial clustering, mixed membership stochastic block models, and topic models can be cast as learning a latent simplex. Bhattacharyya and Kannan (SODA, 2020) give an algorithm for learning such a latent simplex in time roughly $O(k\cdot\textrm{nnz}(A))$, where $\textrm{nnz}(A)$ is the number of non-zeros in $A$. We show that the dependence on $k$ in the running time is unnecessary given a natural assumption about the mass of the top $k$ singular values of $A$, which holds in many of these applications. Further, we show this assumption is necessary, as otherwise an algorithm for learning a latent simplex would imply an algorithmic breakthrough for spectral low rank approximation. At a high level, Bhattacharyya and Kannan provide an adaptive algorithm that makes $k$ matrix-vector product queries to $A$ and each query is a function of all queries preceding it. Since each matrix-vector product requires $\textrm{nnz}(A)$ time, their overall running time appears unavoidable. Instead, we obtain a low-rank approximation to $A$ in input-sparsity time and show that the column space thus obtained has small $\sin\Theta$ (angular) distance to the right top-$k$ singular space of $A$. Our algorithm then selects $k$ points in the low-rank subspace with the largest inner product with $k$ carefully chosen random vectors. By working in the low-rank subspace, we avoid reading the entire matrix in each iteration and thus circumvent the $\Theta(k\cdot\textrm{nnz}(A))$ running time.
http://arxiv.org/abs/2105.08005v1
"2021-05-17T16:40:48Z"
cs.LG, cs.DS, stat.ML
2,021
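The high-level recipe, form a low-rank approximation and then select the k points whose projections score highest against random directions, can be illustrated numerically. This toy version uses a dense truncated SVD rather than the paper's input-sparsity-time routine, and the data-generation parameters are arbitrary.

```python
# Toy latent-simplex recovery: low-rank projection + random-direction argmax.
import numpy as np

rng = np.random.default_rng(0)
k, d, n = 3, 20, 1000
V = rng.uniform(size=(k, d))                     # latent simplex vertices
W = rng.dirichlet(np.ones(k), size=n)            # mixture weights per point
A = (W @ V + 0.01 * rng.normal(size=(n, d))).T   # d x n perturbed data matrix

# Rank-k approximation of A via truncated SVD (dense stand-in).
U, s, Vt = np.linalg.svd(A, full_matrices=False)
P = U[:, :k]                                     # top-k left singular space

# Select k columns maximizing inner products with random directions;
# a linear functional over a (noisy) simplex is maximized at a vertex.
proj = P.T @ A                                   # k x n coordinates
picks = [np.argmax(r @ proj) for r in rng.normal(size=(k, k))]
vertices_est = A[:, picks].T                     # candidate simplex vertices
```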
Matching Visual Features to Hierarchical Semantic Topics for Image Paragraph Captioning
Dandan Guo, Ruiying Lu, Bo Chen, Zequn Zeng, Mingyuan Zhou
Given a set of images and their corresponding paragraph captions, a challenging task is to learn how to produce a semantically coherent paragraph that describes the visual content of an image. Inspired by recent successes in integrating semantic topics into this task, this paper develops a plug-and-play hierarchical-topic-guided image paragraph generation framework, which couples a visual extractor with a deep topic model to guide the learning of a language model. To capture the correlations between the image and text at multiple levels of abstraction and learn the semantic topics from images, we design a variational inference network to build the mapping from image features to textual captions. To guide the paragraph generation, the learned hierarchical topics and visual features are integrated into the language model, including Long Short-Term Memory (LSTM) and Transformer, and jointly optimized. Experiments on public datasets demonstrate that the proposed models, which are competitive with many state-of-the-art approaches in terms of standard evaluation metrics, can be used both to distill interpretable multi-layer semantic topics and to generate diverse and coherent captions. We release our code at https://github.com/DandanGuo1993/VTCM-based-image-paragraph-caption.git
http://arxiv.org/abs/2105.04143v2
"2021-05-10T06:55:39Z"
cs.CV, stat.ML
2,021
GraphTMT: Unsupervised Graph-based Topic Modeling from Video Transcripts
Lukas Stappen, Jason Thies, Gerhard Hagerer, Björn W. Schuller, Georg Groh
To make sense of the tremendous amount of multimedia data uploaded daily to social media platforms, effective topic modeling techniques are needed. Existing work tends to apply topic models to written text datasets. In this paper, we propose a topic extractor for video transcripts. Exploiting neural word embeddings through graph-based clustering, we aim to improve usability and semantic coherence. Unlike most topic models, this approach works without knowing the true number of topics, which is important when no such assumption can or should be made. Experimental results on the real-life multimodal dataset MuSe-CaR demonstrate that our approach, GraphTMT, extracts coherent and meaningful topics and outperforms baseline methods. Furthermore, we successfully demonstrate the applicability of our approach on the popular Citysearch corpus.
http://arxiv.org/abs/2105.01466v4
"2021-05-04T12:48:17Z"
cs.CL, cs.MM
2,021
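The graph-based clustering idea can be sketched by thresholding embedding similarities into a word graph and reading the detected communities as topics. The embeddings, vocabulary, and similarity threshold below are assumptions; the paper's exact graph construction and clustering algorithm may differ.

```python
# Word-similarity graph + Louvain communities as topics (networkx built-in).
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)
vocab = [f"term_{i}" for i in range(60)]
E = rng.normal(size=(60, 32))                    # stand-in word embeddings
E /= np.linalg.norm(E, axis=1, keepdims=True)

G = nx.Graph()
sims = E @ E.T
for i in range(len(vocab)):
    for j in range(i + 1, len(vocab)):
        if sims[i, j] > 0.3:                     # similarity threshold (assumed)
            G.add_edge(vocab[i], vocab[j], weight=float(sims[i, j]))

# Each community is one topic; the topic count is not fixed in advance,
# matching the property highlighted in the abstract.
communities = nx.community.louvain_communities(G, weight="weight", seed=0)
for c in communities[:3]:
    print(sorted(c)[:8])
```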
Supervised multi-specialist topic model with applications on large-scale electronic health record data
Ziyang Song, Xavier Sumba Toral, Yixin Xu, Aihua Liu, Liming Guo, Guido Powell, Aman Verma, David Buckeridge, Ariane Marelli, Yue Li
Motivation: Electronic health record (EHR) data provides a new avenue to elucidate disease comorbidities and latent phenotypes for precision medicine. To fully exploit its potential, a realistic generative process of the EHR data needs to be modelled. We present MixEHR-S to jointly infer specialist-disease topics from EHR data. As the key contribution, we model the specialist assignments and ICD-coded diagnoses as latent topics based on a patient's underlying disease topic mixture in a novel unified supervised hierarchical Bayesian topic model. For efficient inference, we developed a closed-form collapsed variational inference algorithm to learn the model distributions of MixEHR-S. We applied MixEHR-S to two independent large-scale EHR databases in Quebec with three targeted applications: (1) Congenital Heart Disease (CHD) diagnostic prediction among 154,775 patients; (2) Chronic Obstructive Pulmonary Disease (COPD) diagnostic prediction among 73,791 patients; (3) future insulin treatment prediction among 78,712 patients diagnosed with diabetes, as a means of assessing disease exacerbation. In all three applications, MixEHR-S identified clinically meaningful latent topics among the most predictive ones and achieved superior target prediction accuracy compared to existing methods, providing opportunities for prioritizing high-risk patients for healthcare services. The MixEHR-S source code and scripts for the experiments are freely available at https://github.com/li-lab-mcgill/mixehrS
http://arxiv.org/abs/2105.01238v1
"2021-05-04T01:27:11Z"
cs.LG, q-bio.QM
2,021
Adapting CRISP-DM for Idea Mining: A Data Mining Process for Generating Ideas Using a Textual Dataset
W. Y. Ayele
Data mining project managers can benefit from using standard data mining process models. The benefits of using standard process models for data mining, such as the de facto standard and most popular Cross-Industry Standard Process for Data Mining (CRISP-DM), are reduced cost and time. Standard models also facilitate knowledge transfer and the reuse of best practices, and minimize knowledge requirements. On the other hand, to unlock the potential of ever-growing textual data such as publications, patents, social media data, and documents of various forms, digital innovation is increasingly needed. Furthermore, the introduction of cutting-edge machine learning tools and techniques enables the elicitation of ideas. The processing of unstructured textual data to generate new and useful ideas is referred to as idea mining. The existing literature on idea mining largely overlooks the utilization of standard data mining process models. Therefore, the purpose of this paper is to propose a reusable model for generating ideas: CRISP-DM for Idea Mining (CRISP-IM). The design and development of CRISP-IM follow the design science approach. CRISP-IM facilitates idea generation through the use of Dynamic Topic Modeling (DTM), unsupervised machine learning, and subsequent statistical analysis on a dataset of scholarly articles. The adapted CRISP-IM can be used to guide the process of identifying trends in scholarly literature, temporally organized patents, or any other textual dataset of any domain in order to elicit ideas. The ex-post evaluation of CRISP-IM is left for future study.
http://arxiv.org/abs/2105.00574v1
"2021-05-02T23:24:25Z"
cs.IR, cs.CL, cs.LG
2,021
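The Dynamic Topic Modeling step at the heart of CRISP-IM can be sketched with gensim's LdaSeqModel; the documents and time slices below are tiny placeholders for a temporally ordered corpus of scholarly texts, and the topic count is arbitrary.

```python
# Dynamic topic modeling over two time slices with gensim's LdaSeqModel.
from gensim.corpora import Dictionary
from gensim.models import LdaSeqModel

docs = [["solar", "panel", "efficiency"], ["battery", "storage", "grid"],
        ["solar", "cell", "perovskite"], ["grid", "battery", "electric"]]
dictionary = Dictionary(docs)
corpus = [dictionary.doc2bow(d) for d in docs]

# Two time slices of two documents each; topic-word distributions evolve
# smoothly from one slice to the next.
model = LdaSeqModel(corpus=corpus, id2word=dictionary,
                    time_slice=[2, 2], num_topics=2)
print(model.print_topics(time=1))
```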
Improving Response Quality with Backward Reasoning in Open-domain Dialogue Systems
Ziming Li, Julia Kiseleva, Maarten de Rijke
Being able to generate informative and coherent dialogue responses is crucial when designing human-like open-domain dialogue systems. Encoder-decoder-based dialogue models tend to produce generic and dull responses during the decoding step because the most predictable response is likely to be a non-informative response instead of the most suitable one. To alleviate this problem, we propose to train the generation model in a bidirectional manner by adding a backward reasoning step to the vanilla encoder-decoder training. The proposed backward reasoning step pushes the model to produce more informative and coherent content because the forward generation step's output is used to infer the dialogue context in the backward direction. The advantage of our method is that the forward generation and backward reasoning steps are trained simultaneously through the use of a latent variable to facilitate bidirectional optimization. Our method can improve response quality without introducing side information (e.g., a pre-trained topic model). The proposed bidirectional response generation method achieves state-of-the-art performance for response quality.
http://arxiv.org/abs/2105.00079v1
"2021-04-30T20:38:27Z"
cs.CL
2,021
Analysis of Legal Documents via Non-negative Matrix Factorization Methods
Ryan Budahazy, Lu Cheng, Yihuan Huang, Andrew Johnson, Pengyu Li, Joshua Vendrow, Zhoutong Wu, Denali Molitor, Elizaveta Rebrova, Deanna Needell
The California Innocence Project (CIP), a clinical law school program aiming to free wrongfully convicted prisoners, evaluates thousands of mails containing new requests for assistance and corresponding case files. Processing and interpreting this large amount of information presents a significant challenge for CIP officials, which can be successfully aided by topic modeling techniques. In this paper, we apply the Non-negative Matrix Factorization (NMF) method and various offshoots of it to the important and previously unstudied dataset compiled by CIP. We identify underlying topics of existing case files and classify request files by crime type and case status (decision type). The results uncover the semantic structure of current case files and can provide CIP officials with a general understanding of newly received case files before further examination. We also provide an exposition of popular variants of NMF with their experimental results and discuss the benefits and drawbacks of each variant through this real-world application.
http://arxiv.org/abs/2104.14028v2
"2021-04-28T21:32:22Z"
cs.LG, cs.CY
2,021
A Comprehensive Attempt to Research Statement Generation
Wenhao Wu, Sujian Li
For a researcher, writing a good research statement is crucial but costs a lot of time and effort. To help researchers, in this paper, we propose the research statement generation (RSG) task, which aims to summarize one's research achievements and help prepare a formal research statement. For this task, we make a comprehensive attempt encompassing corpus construction, method design, and performance evaluation. First, we construct an RSG dataset with 62 research statements and the corresponding 1,203 publications. Due to the limitation of our resources, we propose a practical RSG method which identifies a researcher's research directions by topic modeling and clustering techniques and extracts salient sentences with a neural text summarizer. Finally, experiments show that our method outperforms all the baselines with better content coverage and coherence.
http://arxiv.org/abs/2104.14339v1
"2021-04-25T03:57:00Z"
cs.IR, cs.CL
2,021
Deep Probabilistic Graphical Modeling
Adji B. Dieng
Probabilistic graphical modeling (PGM) provides a framework for formulating an interpretable generative process of data and expressing uncertainty about unknowns, but it lacks flexibility. Deep learning (DL) is an alternative framework for learning from data that has achieved great empirical success in recent years. DL offers great flexibility, but it lacks the interpretability and calibration of PGM. This thesis develops deep probabilistic graphical modeling (DPGM). DPGM leverages DL to make PGM more flexible. DPGM brings about new methods for learning from data that exhibit the advantages of both PGM and DL. We use DL within PGM to build flexible models endowed with an interpretable latent structure. One model class we develop extends exponential family PCA using neural networks to improve predictive performance while enforcing the interpretability of the latent factors. Another model class we introduce enables accounting for long-term dependencies when modeling sequential data, which is a challenge when using purely DL or PGM approaches. Finally, DPGM successfully solves several outstanding problems of probabilistic topic models, a widely used family of models in PGM. DPGM also brings about new algorithms for learning with complex data. We develop reweighted expectation maximization, an algorithm that unifies several existing maximum likelihood-based algorithms for learning models parameterized by neural networks. This unifying view is made possible using expectation maximization, a canonical inference algorithm in PGM. We also develop entropy-regularized adversarial learning, a learning paradigm that deviates from the traditional maximum likelihood approach used in PGM. From the DL perspective, entropy-regularized adversarial learning provides a solution to the long-standing mode collapse problem of generative adversarial networks, a widely used DL approach.
http://arxiv.org/abs/2104.12053v1
"2021-04-25T03:48:02Z"
stat.ML, cs.LG
2,021
Clustering Introductory Computer Science Exercises Using Topic Modeling Methods
Laura O. Moraes, Carlos Eduardo Pedreira
Manually determining the concepts present in a group of questions is a challenging and time-consuming process. However, the process is an essential step in modeling a virtual learning environment, since mastery-level assessment and recommendation engines require a mapping between concepts and questions. We investigated unsupervised semantic models (known as topic modeling techniques) to assist computer science teachers in this task and propose a method to transform teacher-provided code solutions for Computer Science 1 into representative text documents that include code structure information. By applying non-negative matrix factorization and latent Dirichlet allocation techniques, we extract the underlying relationships between questions and validate the results using an external dataset. We assess the interpretability of the learned concepts using data from 14 university professors, and the results confirm six semantically coherent clusters in the current dataset. Moreover, the six topics comprise the main concepts present in the test dataset, achieving 0.75 on the normalized pointwise mutual information metric. This metric correlates with human ratings, making the proposed method useful and providing semantics for large amounts of unannotated code.
http://arxiv.org/abs/2104.10748v1
"2021-04-21T20:23:53Z"
cs.LG, cs.CL, cs.IR
2,021
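The normalized pointwise mutual information (NPMI) metric used for validation above can be computed from document-level word co-occurrences. A minimal sketch with invented documents follows; NPMI is PMI divided by -log of the joint probability, giving scores in [-1, 1].

```python
# NPMI(w1, w2) = log(p12 / (p1 * p2)) / -log(p12), from document co-occurrence.
import numpy as np

def npmi(docs, w1, w2, eps=1e-12):
    n = len(docs)
    p1 = sum(w1 in d for d in docs) / n
    p2 = sum(w2 in d for d in docs) / n
    p12 = sum(w1 in d and w2 in d for d in docs) / n
    if p12 == 0:
        return -1.0                      # never co-occur: minimum NPMI
    return np.log(p12 / (p1 * p2 + eps)) / -np.log(p12 + eps)

docs = [{"loop", "array", "index"}, {"loop", "while", "break"},
        {"array", "sort", "index"}, {"recursion", "base", "case"}]
print(npmi(docs, "loop", "array"))       # co-occur at chance level -> ~0
print(npmi(docs, "loop", "recursion"))   # never co-occur -> -1.0
```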