Each record below lists, in order: Title, Authors, Abstract, entry_id (arXiv URL), Date, Categories, and year.
Improving Neural Topic Models using Knowledge Distillation
Alexander Hoyle, Pranav Goel, Philip Resnik
Topic models are often used to identify human-interpretable topics to help make sense of large document collections. We use knowledge distillation to combine the best attributes of probabilistic topic models and pretrained transformers. Our modular method can be straightforwardly applied with any neural topic model to improve topic quality, which we demonstrate using two models having disparate architectures, obtaining state-of-the-art topic coherence. We show that our adaptable framework not only improves performance in the aggregate over all estimated topics, as is commonly reported, but also in head-to-head comparisons of aligned topics.
http://arxiv.org/abs/2010.02377v1
"2020-10-05T22:49:16Z"
cs.CL, cs.IR, cs.LG
2020
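The distillation idea above pairs a pretrained transformer (teacher) with a neural topic model (student). Below is a minimal PyTorch sketch of one such objective, assuming the teacher supplies a per-document word distribution `teacher_probs`; it illustrates the general recipe, not the authors' exact loss:

```python
import torch
import torch.nn.functional as F

def distilled_topic_loss(student_logits, doc_bow, teacher_probs, lam=0.5, T=2.0):
    # student_logits: (V,) unnormalized word reconstruction from the topic model
    # doc_bow:        (V,) observed word counts for the document
    # teacher_probs:  (V,) teacher's word distribution for the same document (assumed given)
    recon = -(doc_bow * F.log_softmax(student_logits, dim=-1)).sum()   # usual NTM reconstruction term
    distill = F.kl_div(F.log_softmax(student_logits / T, dim=-1),
                       teacher_probs, reduction="sum") * T * T         # soft-target distillation term
    return recon + lam * distill   # lam balances reconstruction vs. distillation
```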
Unification of HDP and LDA Models for Optimal Topic Clustering of Subject Specific Question Banks
Nikhil Fernandes, Alexandra Gkolia, Nicolas Pizzo, James Davenport, Akshar Nair
There has been an increasingly popular trend in universities toward curriculum transformation to make teaching more interactive and better suited to online courses. An increase in the popularity of online courses would bring an increase in the number of course-related queries for academics, and if lectures were delivered in a video-on-demand format, there would be no fixed time at which the majority of students could ask questions. When questions are asked live in a lecture, there is a negligible chance of similar questions being asked repeatedly, but asynchronously this is far more likely. To reduce the time spent answering each individual question, clustering them is an ideal choice. There are different unsupervised models suited to text clustering, of which the Latent Dirichlet Allocation (LDA) model is the most commonly used. We use the Hierarchical Dirichlet Process (HDP) to determine an optimal topic-number input for our LDA model runs. Due to the probabilistic nature of these topic models, their outputs vary across runs. The general trend we found is that not all topics are used for clustering on the first run of the LDA model, which results in less effective clustering. To tackle this probabilistic output, we recursively rerun the LDA model on the topics actually in use until we obtain an efficiency ratio of 1. Through our experimental results we also explain how Zeno's paradox is avoided.
http://arxiv.org/abs/2011.01035v1
"2020-10-04T18:21:20Z"
cs.IR, cs.CL, cs.LG, stat.ML
2020
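The HDP-then-LDA workflow described above can be sketched in a few lines with gensim; `questions` (a list of raw question strings) and the cutoff heuristic are assumptions for illustration:

```python
from gensim.corpora import Dictionary
from gensim.models import HdpModel, LdaModel

docs = [q.lower().split() for q in questions]   # `questions`: list of strings (assumed)
dictionary = Dictionary(docs)
corpus = [dictionary.doc2bow(d) for d in docs]

# Step 1: HDP infers topic weights without a fixed k.
hdp = HdpModel(corpus, id2word=dictionary)
alpha, _ = hdp.hdp_to_lda()                     # per-topic weights of the equivalent LDA
k = int((alpha > 0.01 * alpha.max()).sum())     # count "active" topics (heuristic cutoff)

# Step 2: fit LDA with the inferred topic number.
lda = LdaModel(corpus, num_topics=k, id2word=dictionary, passes=10, random_state=0)
```

gensim also offers `HdpModel.suggested_lda_model()`, which returns an LDA model matching the HDP posterior directly.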
Sparseness-constrained Nonnegative Tensor Factorization for Detecting Topics at Different Time Scales
Lara Kassab, Alona Kryshchenko, Hanbaek Lyu, Denali Molitor, Deanna Needell, Elizaveta Rebrova, Jiahong Yuan
Temporal data (such as news articles or Twitter feeds) often consists of a mixture of long-lasting trends and popular but short-lasting topics of interest. A truly successful topic modeling strategy should be able to detect both types of topics and clearly locate them in time. In this paper, we first show that nonnegative CANDECOMP/PARAFAC decomposition (NCPD) is able to discover topics of variable persistence automatically. Then, we propose sparseness-constrained NCPD (S-NCPD) and its online variant in order to actively control the length of the learned topics effectively and efficiently. Further, we propose quantitative ways to measure the topic length and demonstrate the ability of S-NCPD (as well as its online variant) to discover short and long-lasting temporal topics in a controlled manner in semi-synthetic and real-world data including news headlines. We also demonstrate that the online variant of S-NCPD reduces the reconstruction error more rapidly than S-NCPD.
http://arxiv.org/abs/2010.01600v3
"2020-10-04T15:20:05Z"
cs.IR, cs.NA, cs.SI, math.NA
2020
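Plain NCPD (without the paper's sparseness constraint) can be run with TensorLy; the tensor construction here is hypothetical:

```python
import tensorly as tl
from tensorly.decomposition import non_negative_parafac

# X: a (time x source x term) count tensor, e.g. news headlines binned by week
# (`counts` is an assumed, pre-built numpy array).
X = tl.tensor(counts.astype(float))
weights, factors = non_negative_parafac(X, rank=10, n_iter_max=200)
time_factor, source_factor, term_factor = factors

# Column j of time_factor traces topic j's intensity over time:
# a narrow spike suggests a short-lived topic, a flat profile a long-lasting trend.
```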
Towards a Multi-modal, Multi-task Learning based Pre-training Framework for Document Representation Learning
Subhojeet Pramanik, Shashank Mujumdar, Hima Patel
Recent approaches in the literature have exploited the multi-modal information in documents (text, layout, image) to serve specific downstream document tasks. However, they are limited by (i) their inability to learn cross-modal representations across the text, layout and image dimensions of documents and (ii) their inability to process multi-page documents. Pre-training techniques have been shown in the Natural Language Processing (NLP) domain to learn generic textual representations from large unlabelled datasets, applicable to various downstream NLP tasks. In this paper, we propose a multi-task learning-based framework that utilizes a combination of self-supervised and supervised pre-training tasks to learn a generic document representation applicable to various downstream document tasks. Specifically, we introduce Document Topic Modelling and Document Shuffle Prediction as novel pre-training tasks to learn rich image representations along with the text and layout representations for documents. We utilize the Longformer network architecture as the backbone to encode the multi-modal information from multi-page documents in an end-to-end fashion. We showcase the applicability of our pre-training framework on a variety of different real-world document tasks such as document classification, document information extraction, and document retrieval. We evaluate our framework on different standard document datasets and conduct exhaustive experiments to compare performance against various ablations of our framework and state-of-the-art baselines.
http://arxiv.org/abs/2009.14457v2
"2020-09-30T05:39:04Z"
cs.CL, cs.AI, cs.LG
2020
Neural Topic Modeling by Incorporating Document Relationship Graph
Deyu Zhou, Xuemeng Hu, Rui Wang
Graph Neural Networks (GNNs) that capture the relationships between graph nodes via message passing have been a hot research direction in the natural language processing community. In this paper, we propose Graph Topic Model (GTM), a GNN based neural topic model that represents a corpus as a document relationship graph. Documents and words in the corpus become nodes in the graph and are connected based on document-word co-occurrences. By introducing the graph structure, the relationships between documents are established through their shared words and thus the topical representation of a document is enriched by aggregating information from its neighboring nodes using graph convolution. Extensive experiments on three datasets were conducted and the results demonstrate the effectiveness of the proposed approach.
http://arxiv.org/abs/2009.13972v1
"2020-09-29T12:45:55Z"
cs.CL
2020
Neural Topic Modeling with Cycle-Consistent Adversarial Training
Xuemeng Hu, Rui Wang, Deyu Zhou, Yuxuan Xiong
Advances in deep generative models have attracted significant research interest in neural topic modeling. The recently proposed Adversarial-neural Topic Model models topics with an adversarially trained generator network and employs a Dirichlet prior to capture the semantic patterns in latent topics. It is effective in discovering coherent topics but unable to infer topic distributions for given documents or utilize available document labels. To overcome these limitations, we propose Topic Modeling with Cycle-consistent Adversarial Training (ToMCAT) and its supervised version sToMCAT. ToMCAT employs a generator network to interpret topics and an encoder network to infer document topics. Adversarial training and cycle-consistent constraints are used to encourage the generator and the encoder to produce realistic samples that coordinate with each other. sToMCAT extends ToMCAT by incorporating document labels into the topic modeling process to help discover more coherent topics. The effectiveness of the proposed models is evaluated on unsupervised/supervised topic modeling and text classification. The experimental results show that our models can produce both coherent and informative topics, outperforming a number of competitive baselines.
http://arxiv.org/abs/2009.13971v1
"2020-09-29T12:41:27Z"
cs.CL
2020
Visual Exploration and Knowledge Discovery from Biomedical Dark Data
Shashwat Aggarwal, Ramesh Singh
Data visualization techniques proffer efficient means to organize and present data in graphically appealing formats, which not only speeds up the process of decision making and pattern recognition but also enables decision-makers to fully understand data insights and make informed decisions. Over time, with the rise in technological and computational resources, there has been an exponential increase in the world's scientific knowledge. However, most of it lacks structure and cannot be easily categorized and imported into regular databases. This type of data is often termed Dark Data. Data visualization techniques provide a promising solution to explore such data by allowing quick comprehension of information, the discovery of emerging trends, identification of relationships and patterns, etc. In this empirical research study, we use the rich corpus of PubMed, comprising more than 30 million citations from biomedical literature, to visually explore and understand the underlying key insights using various information visualization techniques. We employ a natural language processing based pipeline to discover knowledge out of the biomedical dark data. The pipeline comprises different lexical analysis techniques, such as Topic Modeling, to extract inherent topics and major focus areas, and Network Graphs, to study the relationships between entities such as scientific documents, journals, researchers, and keywords and terms. With this analytical research, we aim to proffer a potential solution to overcome the problem of analyzing overwhelming amounts of information and to diminish the limitations of human cognition and perception in handling and examining such large volumes of data.
http://arxiv.org/abs/2009.13059v1
"2020-09-28T04:27:05Z"
cs.DL, cs.CL, cs.IR, cs.LG
2020
Modeling Topical Relevance for Multi-Turn Dialogue Generation
Hainan Zhang, Yanyan Lan, Liang Pang, Hongshen Chen, Zhuoye Ding, Dawei Yin
Topic drift is a common phenomenon in multi-turn dialogue. Therefore, an ideal dialogue generation model should be able to capture the topic information of each context, detect the relevant contexts, and produce appropriate responses accordingly. However, existing models usually use word- or sentence-level similarities to detect the relevant contexts, which fail to capture topic-level relevance well. In this paper, we propose a new model, named STAR-BTM, to tackle this problem. First, the Biterm Topic Model is pre-trained on the whole training dataset. Then, topic-level attention weights are computed based on the topic representation of each context. Finally, the attention weights and the topic distribution are utilized in the decoding process to generate the corresponding responses. Experimental results on both Chinese customer service data and English Ubuntu dialogue data show that STAR-BTM significantly outperforms several state-of-the-art methods, in terms of both metric-based and human evaluations.
http://arxiv.org/abs/2009.12735v1
"2020-09-27T03:33:22Z"
cs.CL, cs.HC, cs.LG
2020
Crosslingual Topic Modeling with WikiPDA
Tiziano Piccardi, Robert West
We present Wikipedia-based Polyglot Dirichlet Allocation (WikiPDA), a crosslingual topic model that learns to represent Wikipedia articles written in any language as distributions over a common set of language-independent topics. It leverages the fact that Wikipedia articles link to each other and are mapped to concepts in the Wikidata knowledge base, such that, when represented as bags of links, articles are inherently language-independent. WikiPDA works in two steps, by first densifying bags of links using matrix completion and then training a standard monolingual topic model. A human evaluation shows that WikiPDA produces more coherent topics than monolingual text-based LDA, thus offering crosslinguality at no cost. We demonstrate WikiPDA's utility in two applications: a study of topical biases in 28 Wikipedia editions, and crosslingual supervised classification. Finally, we highlight WikiPDA's capacity for zero-shot language transfer, where a model is reused for new languages without any fine-tuning. Researchers can benefit from WikiPDA as a practical tool for studying Wikipedia's content across its 299 language editions in interpretable ways, via an easy-to-use library publicly available at https://github.com/epfl-dlab/WikiPDA.
http://arxiv.org/abs/2009.11207v2
"2020-09-23T15:19:27Z"
cs.CL, cs.DL
2020
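Stripped of the matrix-completion densification step, the core of WikiPDA reduces to running a standard topic model on bags of links; a sketch with gensim, where `articles_links` (lists of Wikidata item IDs per article) is an assumed input:

```python
from gensim.corpora import Dictionary
from gensim.models import LdaModel

# articles_links: e.g. [["Q42", "Q5", ...], ...], one list of link targets per article
dictionary = Dictionary(articles_links)
corpus = [dictionary.doc2bow(links) for links in articles_links]

# Because link targets are language-independent concepts, the same model
# applies to articles from any Wikipedia edition.
lda = LdaModel(corpus, num_topics=40, id2word=dictionary, passes=5)
```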
Can questions summarize a corpus? Using question generation for characterizing COVID-19 research
Gabriela Surita, Rodrigo Nogueira, Roberto Lotufo
What are the latent questions on some textual data? In this work, we investigate using question generation models for exploring a collection of documents. Our method, dubbed corpus2question, consists of applying a pre-trained question generation model over a corpus and aggregating the resulting questions by frequency and time. This technique is an alternative to methods such as topic modelling and word cloud for summarizing large amounts of textual data. Results show that applying corpus2question on a corpus of scientific articles related to COVID-19 yields relevant questions about the topic. The most frequent questions are "what is covid 19" and "what is the treatment for covid". Among the 1000 most frequent questions are "what is the threshold for herd immunity" and "what is the role of ace2 in viral entry". We show that the proposed method generated similar questions for 13 of the 27 expert-made questions from the CovidQA question answering dataset. The code to reproduce our experiments and the generated questions are available at: https://github.com/unicamp-dl/corpus2question
http://arxiv.org/abs/2009.09290v1
"2020-09-19T19:57:44Z"
cs.IR, cs.CL, cs.LG, H.3.3
2020
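The generate-then-aggregate step of corpus2question is easy to sketch with Hugging Face transformers; the checkpoint name is an assumption (any doc2query-style T5 model would do), and `passages` is an assumed list of text chunks:

```python
from collections import Counter
from transformers import T5ForConditionalGeneration, T5Tokenizer

name = "castorini/doc2query-t5-base-msmarco"      # assumed doc2query-style checkpoint
tok = T5Tokenizer.from_pretrained(name)
model = T5ForConditionalGeneration.from_pretrained(name)

def questions_for(passage, n=3):
    ids = tok(passage, return_tensors="pt", truncation=True).input_ids
    outs = model.generate(ids, do_sample=True, top_k=10,
                          num_return_sequences=n, max_length=64)
    return [tok.decode(o, skip_special_tokens=True).lower() for o in outs]

# Aggregate generated questions by frequency across the corpus.
freq = Counter(q for p in passages for q in questions_for(p))
print(freq.most_common(20))
```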
A Simple and Effective Self-Supervised Contrastive Learning Framework for Aspect Detection
Tian Shi, Liuqing Li, Ping Wang, Chandan K. Reddy
Unsupervised aspect detection (UAD) aims at automatically extracting interpretable aspects and identifying aspect-specific segments (such as sentences) from online reviews. However, recent deep learning-based topic models, specifically aspect-based autoencoder, suffer from several problems, such as extracting noisy aspects and poorly mapping aspects discovered by models to the aspects of interest. To tackle these challenges, in this paper, we first propose a self-supervised contrastive learning framework and an attention-based model equipped with a novel smooth self-attention (SSA) module for the UAD task in order to learn better representations for aspects and review segments. Secondly, we introduce a high-resolution selective mapping (HRSMap) method to efficiently assign aspects discovered by the model to aspects of interest. We also propose using a knowledge distilling technique to further improve the aspect detection performance. Our methods outperform several recent unsupervised and weakly supervised approaches on publicly available benchmark user review datasets. Aspect interpretation results show that extracted aspects are meaningful, have good coverage, and can be easily mapped to aspects of interest. Ablation studies and attention weight visualization also demonstrate the effectiveness of SSA and the knowledge distilling method.
http://arxiv.org/abs/2009.09107v2
"2020-09-18T22:13:49Z"
cs.CL, cs.IR
2020
ReviewViz: Assisting Developers Perform Empirical Study on Energy Consumption Related Reviews for Mobile Applications
Mohammad Abdul Hadi, Fatemeh H Fard
Improving the energy efficiency of mobile applications is a topic that has gained a lot of attention recently. It has been addressed in a number of ways, such as identifying energy bugs and developing a catalog of energy patterns. Previous work shows that users discuss the battery-related issues (energy inefficiency or energy consumption) of apps in their reviews. However, no prior work addresses the automatic extraction of battery-related issues from users' feedback. In this paper, we report on a visualization tool developed to empirically study machine learning algorithms and text features that automatically identify energy-consumption-specific reviews with the highest accuracy. Beyond the common machine learning algorithms, we utilize deep learning models with different word embeddings to compare the results. Furthermore, to help developers extract the main topics discussed in the reviews, two state-of-the-art topic modeling algorithms are applied. The visualizations of the topics represent the keywords extracted for each topic along with a comparison with the results of string matching. The developed web-browser-based interactive visualization tool is a novel framework created with the intention of giving app developers insights about the running time and accuracy of machine learning and deep learning models, as well as the extracted topics. The tool makes it easier for developers to traverse the extensive result set generated by the text classification and topic modeling algorithms. The dynamic data structure used by the tool stores the baseline results of the discussed approaches and is updated when they are applied to new datasets. The tool is open-sourced to allow replication of the research results.
http://arxiv.org/abs/2009.06027v2
"2020-09-13T15:47:46Z"
cs.LG
2020
AOBTM: Adaptive Online Biterm Topic Modeling for Version Sensitive Short-texts Analysis
Mohammad Abdul Hadi, Fatemeh H Fard
Analysis of mobile app reviews has shown its important role in requirement engineering, software maintenance, and the evolution of mobile apps. Mobile app developers check their users' reviews frequently to clarify the issues experienced by users or capture new issues introduced by a recent app update. App reviews have a dynamic nature, and the topics they discuss change over time. Changes in the topics among reviews collected for different versions of an app can reveal important issues about an app update. A main technique in this analysis is the use of topic modeling algorithms. However, app reviews are short texts, and it is challenging to unveil their latent topics over time. Conventional topic models suffer from the sparsity of word co-occurrence patterns when inferring topics for short texts. Furthermore, these algorithms cannot capture topics over numerous consecutive time-slices. Online topic modeling algorithms speed up the inference of topic models for texts collected in the latest time-slice by saving a fraction of the data from the previous time-slice, but they do not analyze the statistics of all previous time-slices, which can contribute to the topic distribution of the current time-slice. We propose the Adaptive Online Biterm Topic Model (AOBTM) to model topics in short texts adaptively. AOBTM alleviates the sparsity problem in short texts and considers the statistics of an optimal number of previous time-slices. We also propose parallel algorithms to automatically determine the optimal number of topics and the best number of previous versions to consider in the topic inference phase. Automatic evaluation on collections of app reviews and real-world short-text datasets confirms that AOBTM can find more coherent topics and outperforms state-of-the-art baselines.
http://arxiv.org/abs/2009.09930v1
"2020-09-13T09:50:44Z"
cs.IR, cs.LG, stat.ML
2020
Non-Pharmaceutical Intervention Discovery with Topic Modeling
Jonathan Smith, Borna Ghotbi, Seungeun Yi, Mahboobeh Parsapoor
We consider the task of discovering categories of non-pharmaceutical interventions during the evolving COVID-19 pandemic. We explore topic modeling on two corpora with national and international scope. These models discover existing categories when compared with human intervention labels, while reducing the human effort needed.
http://arxiv.org/abs/2009.13602v1
"2020-09-10T11:37:00Z"
cs.CL, cs.LG
2020
Topic, Sentiment and Impact Analysis: COVID19 Information Seeking on Social Media
Md Abul Bashar, Richi Nayak, Thirunavukarasu Balasubramaniam
When people notice something unusual, they discuss it on social media. They leave traces of their emotions via text expressions. A systematic collection, analysis, and interpretation of social media data across time and space can give insights into local outbreaks, mental health, and social issues. Such timely insights can help in developing strategies and resources for an appropriate and efficient response. This study analysed a large spatio-temporal tweet dataset of the Australian sphere related to COVID19. The methodology included volume analysis, dynamic topic modelling, sentiment detection, and semantic brand score to obtain insights into the COVID19 pandemic outbreak and public discussion in different states and cities of Australia over time. The obtained insights are compared with independently observed phenomena such as government-reported instances.
http://arxiv.org/abs/2008.12435v1
"2020-08-28T02:03:18Z"
cs.SI, cs.IR, cs.LG
2020
A Pipeline to Understand Emerging Illness via Social Media Data Analysis: A Case Study on Breast Implant Illness
Vishal Dey, Peter Krasniak, Minh Nguyen, Clara Lee, Xia Ning
Background: A new illness could first come to the public attention over social media before it is medically defined, formally documented or systematically studied. One example is a phenomenon known as breast implant illness (BII) that has been extensively discussed on social media, though vaguely defined in medical literature. Objectives: The objective of this study is to construct a data analysis pipeline to understand emerging illness using social media data, and to apply the pipeline to understand key attributes of BII. Methods: We conducted a pipeline of social media data analysis using Natural Language Processing (NLP) and topic modeling. We extracted mentions related to signs/symptoms, diseases/disorders and medical procedures using the Clinical Text Analysis and Knowledge Extraction System (cTAKES) from social media data. We mapped the mentions to standard medical concepts. We summarized mapped concepts to topics using Latent Dirichlet Allocation (LDA). Finally, we applied this pipeline to understand BII from several BII-dedicated social media sites. Results: Our pipeline identified topics related to toxicity, cancer and mental health issues that are highly associated with BII. Our pipeline also shows that cancers, autoimmune disorders and mental health problems are emerging concerns associated with breast implants based on social media discussions. The pipeline also identified mentions such as rupture, infection, pain and fatigue as common self-reported issues among the public, as well as toxicity from silicone implants. Conclusions: Our study could inspire future work studying the suggested symptoms and factors of BII. Our study provides the first analysis and derived knowledge of BII from social media using NLP techniques, and demonstrates the potential of using social media information to better understand similar emerging illnesses.
http://arxiv.org/abs/2008.11238v2
"2020-08-25T19:00:25Z"
cs.IR, cs.CY, cs.SI
2020
ETC-NLG: End-to-end Topic-Conditioned Natural Language Generation
Ginevra Carbone, Gabriele Sarti
Plug-and-play language models (PPLMs) enable topic-conditioned natural language generation by pairing large pre-trained generators with attribute models used to steer the predicted token distribution towards the selected topic. Despite their computational efficiency, PPLMs require large amounts of labeled texts to effectively balance generation fluency and proper conditioning, making them unsuitable for low-resource settings. We present ETC-NLG, an approach leveraging topic modeling annotations to enable fully-unsupervised End-to-end Topic-Conditioned Natural Language Generation over emergent topics in unlabeled document collections. We first test the effectiveness of our approach in a low-resource setting for Italian, evaluating the conditioning for both topic models and gold annotations. We then perform a comparative evaluation of ETC-NLG for Italian and English using a parallel corpus. Finally, we propose an automatic approach to estimate the effectiveness of conditioning on the generated utterances.
http://arxiv.org/abs/2008.10875v3
"2020-08-25T08:22:38Z"
cs.CL
2020
Emerging App Issue Identification via Online Joint Sentiment-Topic Tracing
Cuiyun Gao, Jichuan Zeng, Zhiyuan Wen, David Lo, Xin Xia, Irwin King, Michael R. Lyu
Millions of mobile apps are available in app stores, such as Apple's App Store and Google Play. For a mobile app, it would be increasingly challenging to stand out from the enormous competitors and become prevalent among users. Good user experience and well-designed functionalities are the keys to a successful app. To achieve this, popular apps usually schedule their updates frequently. If we can capture the critical app issues faced by users in a timely and accurate manner, developers can make timely updates, and good user experience can be ensured. There exist prior studies on analyzing reviews for detecting emerging app issues. These studies are usually based on topic modeling or clustering techniques. However, the short-length characteristics and sentiment of user reviews have not been considered. In this paper, we propose a novel emerging issue detection approach named MERIT to take into consideration the two aforementioned characteristics. Specifically, we propose an Adaptive Online Biterm Sentiment-Topic (AOBST) model for jointly modeling topics and corresponding sentiments that takes into consideration app versions. Based on the AOBST model, we infer the topics negatively reflected in user reviews for one app version, and automatically interpret the meaning of the topics with most relevant phrases and sentences. Experiments on popular apps from Google Play and Apple's App Store demonstrate the effectiveness of MERIT in identifying emerging app issues, improving the state-of-the-art method by 22.3% in terms of F1-score. In terms of efficiency, MERIT can return results within acceptable time.
http://arxiv.org/abs/2008.09976v1
"2020-08-23T06:34:05Z"
cs.SE, cs.CL
2020
Top2Vec: Distributed Representations of Topics
Dimo Angelov
Topic modeling is used for discovering latent semantic structure, usually referred to as topics, in a large collection of documents. The most widely used methods are Latent Dirichlet Allocation and Probabilistic Latent Semantic Analysis. Despite their popularity, they have several weaknesses. In order to achieve optimal results they often require the number of topics to be known, custom stop-word lists, stemming, and lemmatization. Additionally, these methods rely on bag-of-words representations of documents, which ignore the ordering and semantics of words. Distributed representations of documents and words have gained popularity due to their ability to capture the semantics of words and documents. We present $\texttt{top2vec}$, which leverages joint document and word semantic embedding to find $\textit{topic vectors}$. This model does not require stop-word lists, stemming or lemmatization, and it automatically finds the number of topics. The resulting topic vectors are jointly embedded with the document and word vectors, with the distance between them representing semantic similarity. Our experiments demonstrate that $\texttt{top2vec}$ finds topics that are significantly more informative and representative of the corpus they are trained on than those of probabilistic generative models.
http://arxiv.org/abs/2008.09470v1
"2020-08-19T20:58:27Z"
cs.CL, cs.LG, stat.ML
2020
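The top2vec library exposes this directly; a minimal usage sketch, with `docs` an assumed list of raw document strings:

```python
from top2vec import Top2Vec

model = Top2Vec(docs, speed="learn", workers=4)   # tokenization handled internally
print(model.get_num_topics())                     # k is discovered, not supplied
topic_words, word_scores, topic_nums = model.get_topics()
print(topic_words[0][:10])                        # top words of the first topic
```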
Data-driven quark and gluon jet modification in heavy-ion collisions
Jasmine Brewer, Jesse Thaler, Andrew P. Turner
Whether quark- and gluon-initiated jets are modified differently by the quark-gluon plasma produced in heavy-ion collisions is a long-standing question that has thus far eluded a definitive experimental answer. A crucial complication for quark-gluon discrimination in both proton-proton and heavy-ion collisions is that all measurements necessarily average over the (unknown) quark-gluon composition of a jet sample. In the heavy-ion context, the simultaneous modification of both the fractions and substructure of quark and gluon jets by the quark-gluon plasma further obscures the interpretation. Here, we demonstrate a fully data-driven method for separating quark and gluon contributions to jet observables using a statistical technique called topic modeling. Assuming that jet distributions are a mixture of underlying "quark-like" and "gluon-like" distributions, we show how to extract quark and gluon jet fractions and constituent multiplicity distributions as a function of the jet transverse momentum. This proof-of-concept study is based on proton-proton and heavy-ion collision events from the Monte Carlo event generator Jewel with statistics accessible in Run 4 of the Large Hadron Collider. These results suggest the potential for an experimental determination of quark and gluon jet modifications.
http://arxiv.org/abs/2008.08596v1
"2020-08-19T18:00:03Z"
hep-ph, nucl-th
2020
OpenFraming: We brought the ML; you bring the data. Interact with your data and discover its frames
Alyssa Smith, David Assefa Tofu, Mona Jalal, Edward Edberg Halim, Yimeng Sun, Vidya Akavoor, Margrit Betke, Prakash Ishwar, Lei Guo, Derry Wijaya
When journalists cover a news story, they can cover the story from multiple angles or perspectives. A news article written about COVID-19 for example, might focus on personal preventative actions such as mask-wearing, while another might focus on COVID-19's impact on the economy. These perspectives are called "frames," which when used may influence public perception and opinion of the issue. We introduce a Web-based system for analyzing and classifying frames in text documents. Our goal is to make effective tools for automatic frame discovery and labeling based on topic modeling and deep learning widely accessible to researchers from a diverse array of disciplines. To this end, we provide both state-of-the-art pre-trained frame classification models on various issues as well as a user-friendly pipeline for training novel classification models on user-provided corpora. Researchers can submit their documents and obtain frames of the documents. The degree of user involvement is flexible: they can run models that have been pre-trained on select issues; submit labeled documents and train a new model for frame classification; or submit unlabeled documents and obtain potential frames of the documents. The code making up our system is also open-sourced and well-documented, making the system transparent and expandable. The system is available on-line at http://www.openframing.org and via our GitHub page https://github.com/davidatbu/openFraming .
http://arxiv.org/abs/2008.06974v1
"2020-08-16T18:59:30Z"
cs.CL, cs.IR, cs.LG
2020
Neural Topic Model via Optimal Transport
He Zhao, Dinh Phung, Viet Huynh, Trung Le, Wray Buntine
Recently, Neural Topic Models (NTMs) inspired by variational autoencoders have attracted increasing research interest due to their promising results on text analysis. However, it is usually hard for existing NTMs to achieve good document representation and coherent/diverse topics at the same time. Moreover, their performance often degrades severely on short documents. The requirement of reparameterisation could also compromise their training quality and model flexibility. To address these shortcomings, we present a new neural topic model based on the theory of optimal transport (OT). Specifically, we propose to learn the topic distribution of a document by directly minimising its OT distance to the document's word distribution. Importantly, the cost matrix of the OT distance models the weights between topics and words and is constructed from the distances between topics and words in an embedding space. Our proposed model can be trained efficiently with a differentiable loss. Extensive experiments show that our framework significantly outperforms state-of-the-art NTMs in discovering more coherent and diverse topics and deriving better document representations for both regular and short texts.
http://arxiv.org/abs/2008.13537v3
"2020-08-12T06:37:09Z"
cs.IR, cs.CL, cs.LG, stat.ML
2020
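The central quantity, an OT distance between a document's topic distribution and its word distribution under an embedding-based cost matrix, can be sketched with the POT library; the embeddings are assumed inputs, and this entropic Sinkhorn version is an illustration rather than the paper's exact differentiable training loss:

```python
import ot  # POT: Python Optimal Transport

# topic_emb: (K, d) topic embeddings; word_emb: (V, d) word embeddings (assumed given)
M = ot.dist(topic_emb, word_emb, metric="euclidean")    # topic-word cost matrix

def ot_loss(doc_topic, doc_words, reg=0.1):
    # doc_topic: (K,) topic distribution; doc_words: (V,) empirical word distribution
    return ot.sinkhorn2(doc_topic, doc_words, M, reg)   # entropic OT distance
```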
A Neural Generative Model for Joint Learning Topics and Topic-Specific Word Embeddings
Lixing Zhu, Yulan He, Deyu Zhou
We propose a novel generative model to explore both local and global context for joint learning topics and topic-specific word embeddings. In particular, we assume that global latent topics are shared across documents, a word is generated by a hidden semantic vector encoding its contextual semantic meaning, and its context words are generated conditional on both the hidden semantic vector and global latent topics. Topics are trained jointly with the word embeddings. The trained model maps words to topic-dependent embeddings, which naturally addresses the issue of word polysemy. Experimental results show that the proposed model outperforms the word-level embedding methods in both word similarity evaluation and word sense disambiguation. Furthermore, the model also extracts more coherent topics compared with existing neural topic models or other models for joint learning of topics and word embeddings. Finally, the model can be easily integrated with existing deep contextualized word embedding learning methods to further improve the performance of downstream tasks such as sentiment classification.
http://arxiv.org/abs/2008.04702v1
"2020-08-11T13:54:11Z"
cs.CL
2020
Context Reinforced Neural Topic Modeling over Short Texts
Jiachun Feng, Zusheng Zhang, Cheng Ding, Yanghui Rao, Haoran Xie
As one of the prevalent topic mining tools, neural topic modeling has attracted a lot of interest due to its high training efficiency and strong generalisation ability. However, due to the lack of context in each short text, existing neural topic models may suffer from feature sparsity on such documents. To alleviate this issue, we propose a Context Reinforced Neural Topic Model (CRNTM), whose characteristics can be summarized as follows. First, by assuming that each short text covers only a few salient topics, CRNTM infers the topic for each word within a narrow range. Second, our model exploits pre-trained word embeddings by treating topics as multivariate Gaussian distributions or Gaussian mixture distributions in the embedding space. Extensive experiments on two benchmark datasets validate the effectiveness of the proposed model on both topic discovery and text classification.
http://arxiv.org/abs/2008.04545v1
"2020-08-11T06:41:53Z"
cs.IR, cs.CL
2020
Challenges in Docker Development: A Large-scale Study Using Stack Overflow
Mubin Ul Haque, Leonardo Horn Iwaya, M. Ali Babar
Docker technology has been increasingly used among software developers in a multitude of projects. This growing interest is due to the fact that Docker technology supports a convenient process for creating and building containers, promoting close cooperation between developer and operations teams, and enabling continuous software delivery. As a fast-growing technology, it is important to identify the Docker-related topics that are most popular as well as existing challenges and difficulties that developers face. This paper presents a large-scale empirical study identifying practitioners' perspectives on Docker technology by mining posts from the Stack Overflow (SoF) community. Method: A dataset of 113,922 Docker-related posts was created based on a set of relevant tags and contents. The dataset was cleaned and prepared. Topic modelling was conducted using Latent Dirichlet Allocation (LDA), allowing the identification of dominant topics in the domain. Our results show that most developers use SoF to ask about a broad spectrum of Docker topics including framework development, application deployment, continuous integration, web-server configuration and many more. We determined that 30 topics that developers discuss can be grouped into 13 main categories. Most of the posts belong to categories of application development, configuration, and networking. On the other hand, we find that the posts on monitoring status, transferring data, and authenticating users are more popular among developers compared to the other topics. Specifically, developers face challenges in web browser issues, networking error and memory management. Besides, there is a lack of experts in this domain. Our research findings will guide future work on the development of new tools and techniques, helping the community to focus efforts and understand existing trade-offs on Docker topics.
http://arxiv.org/abs/2008.04467v1
"2020-08-11T01:19:23Z"
cs.SE, cs.IR
2020
A Large-scale Study of Security Vulnerability Support on Developer Q&A Websites
Triet H. M. Le, Roland Croft, David Hin, M. Ali Babar
Context: Security Vulnerabilities (SVs) pose many serious threats to software systems. Developers usually seek solutions to addressing these SVs on developer Question and Answer (Q&A) websites. However, there is still little known about on-going SV-specific discussions on different developer Q&A sites. Objective: We present a large-scale empirical study to understand developers' SV discussions and how these discussions are being supported by Q&A sites. Method: We first curate 71,329 SV posts from two large Q&A sites, namely Stack Overflow (SO) and Security StackExchange (SSE). We then use topic modeling to uncover the topics of SV-related discussions and analyze the popularity, difficulty, and level of expertise for each topic. We also perform a qualitative analysis to identify the types of solutions to SV-related questions. Results: We identify 13 main SV discussion topics on Q&A sites. Many topics do not follow the distributions and trends in expert-based security sources such as Common Weakness Enumeration (CWE) and Open Web Application Security Project (OWASP). We also discover that SV discussions attract more experts to answer than many other domains, but some difficult SV topics (e.g., Vulnerability Scanning Tools) still receive quite limited support from experts. Moreover, we identify seven key types of answers given to SV questions on Q&A sites, in which SO often provides code and instructions, while SSE usually gives experience-based advice and explanations. Conclusion: Our findings provide support for researchers and practitioners to effectively acquire, share and leverage SV knowledge on Q&A sites.
http://arxiv.org/abs/2008.04176v2
"2020-08-10T14:58:23Z"
cs.SE, cs.CR
2020
BATS: A Spectral Biclustering Approach to Single Document Topic Modeling and Segmentation
Qiong Wu, Adam Hare, Sirui Wang, Yuwei Tu, Zhenming Liu, Christopher G. Brinton, Yanhua Li
Existing topic modeling and text segmentation methodologies generally require large datasets for training, limiting their capabilities when only small collections of text are available. In this work, we reexamine the inter-related problems of "topic identification" and "text segmentation" for sparse document learning, when there is a single new text of interest. In developing a methodology to handle single documents, we face two major challenges. First is sparse information: with access to only one document, we cannot train traditional topic models or deep learning algorithms. Second is significant noise: a considerable portion of words in any single document will produce only noise and not help discern topics or segments. To tackle these issues, we design an unsupervised, computationally efficient methodology called BATS: Biclustering Approach to Topic modeling and Segmentation. BATS leverages three key ideas to simultaneously identify topics and segment text: (i) a new mechanism that uses word order information to reduce sample complexity, (ii) a statistically sound graph-based biclustering technique that identifies latent structures of words and sentences, and (iii) a collection of effective heuristics that remove noise words and award important words to further improve performance. Experiments on four datasets show that our approach outperforms several state-of-the-art baselines when considering topic coherence, topic diversity, segmentation, and runtime comparison metrics.
http://arxiv.org/abs/2008.02218v3
"2020-08-05T16:34:33Z"
cs.IR, cs.LG
2020
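Generic spectral biclustering of a single document's sentence-term matrix, the starting point that BATS builds on (the paper's word-order mechanism and noise heuristics are omitted here), can be sketched with scikit-learn:

```python
from sklearn.cluster import SpectralBiclustering
from sklearn.feature_extraction.text import CountVectorizer

sentences = document.split(". ")   # naive sentence split; `document` is an assumed string
X = CountVectorizer(stop_words="english").fit_transform(sentences).toarray()

model = SpectralBiclustering(n_clusters=(4, 4), random_state=0).fit(X)
segment_of_sentence = model.row_labels_   # sentence -> segment assignment
topic_of_word = model.column_labels_      # vocabulary word -> topic assignment
```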
Deep Learning based Topic Analysis on Financial Emerging Event Tweets
Shaan Aryaman, Nguwi Yok Yen
Financial analyses of stock markets rely heavily on quantitative approaches in an attempt to predict subsequent market movements based on historical prices and other measurable metrics. These quantitative analyses may miss un-quantifiable aspects like sentiment and speculation that also impact the market. Analyzing vast amounts of qualitative text data to understand public opinion on social media platforms is one approach to addressing this gap. This work carried out topic analysis on 28,264 financial tweets [1] via clustering to discover emerging events in the stock market. Three main topics were found to be discussed frequently within the period. First, the financial ratio EPS is a measure that was discussed frequently by investors. Second, short selling of shares was discussed heavily and often mentioned together with Morgan Stanley. Third, the oil and energy sectors were often discussed together with policy. The tweets were semantically clustered by a method consisting of the word2vec algorithm, used to obtain word embeddings that map words to vectors, from which semantic word clusters were formed. Each tweet was then vectorized using the Term Frequency-Inverse Document Frequency (TF-IDF) values of the words it consisted of, and based on which clusters its words were in. Tweet vectors were then converted to compressed representations by training a deep autoencoder, and k-means clusters were formed. This method reduces dimensionality and produces dense vectors, in contrast to the usual Vector Space Model. Topic modelling with Latent Dirichlet Allocation (LDA) and top frequent words were used to analyze the clusters and reveal emerging events.
http://arxiv.org/abs/2008.00670v1
"2020-08-03T06:43:11Z"
cs.CL
2020
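A condensed version of that pipeline, skipping the deep-autoencoder compression and collapsing the representation to a TF-IDF-weighted average of word2vec vectors, might look like this (with `tweets` an assumed list of strings):

```python
import numpy as np
from gensim.models import Word2Vec
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

tokens = [t.lower().split() for t in tweets]
w2v = Word2Vec(tokens, vector_size=100, min_count=2)

tfidf = TfidfVectorizer()
T = tfidf.fit_transform(tweets)
vocab = tfidf.get_feature_names_out()

def embed(i):                                  # TF-IDF-weighted mean word vector
    cols = T[i].tocoo().col
    vecs = [T[i, j] * w2v.wv[vocab[j]] for j in cols if vocab[j] in w2v.wv]
    return np.mean(vecs, axis=0) if vecs else np.zeros(100)

X = np.vstack([embed(i) for i in range(len(tweets))])
labels = KMeans(n_clusters=3, random_state=0).fit_predict(X)  # emerging-event clusters
```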
Using LDA and LSTM Models to Study Public Opinions and Critical Groups Towards Congestion Pricing in New York City through 2007 to 2019
Qian Ye, Xiaohong Chen, Onur Kalan, Kaan Ozbay
This study explores how people's views of and responses to NYC congestion pricing proposals evolved over time. To understand these responses, Twitter data is collected and analyzed. Critical groups in the recurrent process are detected by statistically analyzing the active users and the most-mentioned accounts, and the trends in people's attitudes and concerns over the years are identified with text mining and hybrid Natural Language Processing techniques, including LDA topic modeling and LSTM sentiment classification. The results show that multiple interest groups were involved and played crucial roles during the proposals, especially the Mayor and Governor, the MTA, and outer-borough representatives. The public shifted its focus from the plan's details to the city's wider sustainability and fairness. Furthermore, the plan's approval relied on several elements: the joint agreement reached in the political process, strong real-world motivation, a scheme based on balancing multiple interests, and the groups' awareness of tolling's benefits and necessity.
http://arxiv.org/abs/2008.07366v1
"2020-08-01T02:59:29Z"
cs.CY, cs.LG
2020
Is there something I'm missing? Topic Modeling in eDiscovery
Herbert L. Roitblat
In legal eDiscovery, the parties are required to search through their electronically stored information to find documents that are relevant to a specific case. Negotiations over the scope of these searches are often based on a fear that something will be missed. This paper continues an argument that discovery should be based on identifying the facts of a case. If a search process is less than complete (if it has Recall less than 100%), it may still be complete in presenting all of the relevant available topics. In this study, Latent Dirichlet Allocation was used to identify 100 topics from all of the known relevant documents. The documents were then categorized to about 80% Recall (i.e., 80% of the relevant documents were found by the categorizer, designated the hit set, and 20% were missed, designated the missed set). Despite the fact that less than all of the relevant documents were identified by the categorizer, the documents that were identified contained all of the topics derived from the full set of documents. This same pattern held whether the categorizer was a naïve Bayes categorizer trained on a random selection of documents or a Support Vector Machine trained with Continuous Active Learning (which focuses evaluation on the most-likely-to-be-relevant documents). No topics were identified in either categorizer's missed set that were not already seen in the hit set. Not only is a computer-assisted search process reasonable (as required by the Federal Rules of Civil Procedure), it is also complete when measured by topics.
http://arxiv.org/abs/2007.15731v1
"2020-07-30T20:37:27Z"
cs.IR, H.3.3
2020
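The paper's topic-coverage check is straightforward to reproduce in outline with gensim; `relevant_docs` (tokenized documents), `hit_ids`, `missed_ids`, and the 0.05 presence threshold are assumptions for illustration:

```python
from gensim.corpora import Dictionary
from gensim.models import LdaModel

dictionary = Dictionary(relevant_docs)
corpus = [dictionary.doc2bow(d) for d in relevant_docs]
lda = LdaModel(corpus, num_topics=100, id2word=dictionary, passes=5)

def topics_present(doc_ids, threshold=0.05):
    present = set()
    for i in doc_ids:
        present |= {t for t, p in lda.get_document_topics(corpus[i]) if p >= threshold}
    return present

# The paper's finding corresponds to this set being empty.
missed_only = topics_present(missed_ids) - topics_present(hit_ids)
```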
Better Early than Late: Fusing Topics with Word Embeddings for Neural Question Paraphrase Identification
Nicole Peinelt, Dong Nguyen, Maria Liakata
Question paraphrase identification is a key task in Community Question Answering (CQA) to determine if an incoming question has been previously asked. Many current models use word embeddings to identify duplicate questions, but the use of topic models in feature-engineered systems suggests that they can be helpful for this task, too. We therefore propose two ways of merging topics with word embeddings (early vs. late fusion) in a new neural architecture for question paraphrase identification. Our results show that our system outperforms neural baselines on multiple CQA datasets, while an ablation study highlights the importance of topics and especially early topic-embedding fusion in our architecture.
http://arxiv.org/abs/2007.11314v1
"2020-07-22T10:09:26Z"
cs.CL
2020
Welcome to Gab Alt Right Discourses
Nga Than, Maria Y. Rodriguez, Diane Yoong, Friederike Windel
Social media has become an important venue for diverse groups to share information, discuss political issues, and organize social movements. Recent scholarship has shown that the social media ecosystem can affect political thinking and expression. Individuals and groups across the political spectrum have engaged extensively with these platforms, even creating their own forums, with varying approaches to content moderation, in pursuit of freer standards of speech. The Gab social media platform arose in this context. Gab is a social media platform for the so-called alt right, and much of the popular press has opined about the thematic content of discourses on Gab and platforms like it, but little research has examined the content itself. Using a publicly available dataset of all Gab posts from August 2016 until July 2019, the current paper analyzes a five percent random sample of the dataset to explore its thematic content. We run multiple structural topic models, using standard procedures to arrive at an optimal number of topics k. The final model specifies 85 topics for 403,469 documents. We include as prevalence variables whether the source account has been flagged as a bot and the number of followers of the source account. Results suggest the most nodal topics in the dataset pertain to the authenticity of the Holocaust, the meaning of red pill, and the journalistic merit of mainstream media. We conclude by discussing the implications of our findings for work in ethical content moderation, online community development, political polarization, and avenues for future research.
http://arxiv.org/abs/2007.09685v1
"2020-07-19T15:07:25Z"
cs.SI
2020
Hierarchical Topic Mining via Joint Spherical Tree and Text Embedding
Yu Meng, Yunyi Zhang, Jiaxin Huang, Yu Zhang, Chao Zhang, Jiawei Han
Mining a set of meaningful topics organized into a hierarchy is intuitively appealing since topic correlations are ubiquitous in massive text corpora. To account for potential hierarchical topic structures, hierarchical topic models generalize flat topic models by incorporating latent topic hierarchies into their generative modeling process. However, due to their purely unsupervised nature, the learned topic hierarchy often deviates from users' particular needs or interests. To guide the hierarchical topic discovery process with minimal user supervision, we propose a new task, Hierarchical Topic Mining, which takes a category tree described by category names only, and aims to mine a set of representative terms for each category from a text corpus to help a user comprehend his/her interested topics. We develop a novel joint tree and text embedding method along with a principled optimization procedure that allows simultaneous modeling of the category tree structure and the corpus generative process in the spherical space for effective category-representative term discovery. Our comprehensive experiments show that our model, named JoSH, mines a high-quality set of hierarchical topics with high efficiency and benefits weakly-supervised hierarchical text classification tasks.
http://arxiv.org/abs/2007.09536v1
"2020-07-18T23:30:47Z"
cs.CL, cs.IR, cs.LG
2020
Semi-Supervised Learning Approach to Discover Enterprise User Insights from Feedback and Support
Xin Deng, Ross Smith, Genevieve Quintin
With the evolution of the cloud and a customer-centric culture, we inherently accumulate huge repositories of textual reviews, feedback, and support data. This has driven enterprises to research engagement patterns, user network analysis, topic detection, etc. However, huge manual work is still necessary to mine the data for actionable outcomes. In this paper, we propose and develop an innovative semi-supervised learning approach utilizing deep learning and topic modeling to better understand the user's voice. This approach combines a BERT-based multiclassification algorithm, trained with supervised learning, with a novel Probabilistic and Semantic Hybrid Topic Inference (PSHTI) model trained with unsupervised learning, aiming to automate the process of identifying the main topics or areas as well as the sub-topics in textual feedback and support data. There are three major breakthroughs: 1. With the advancement of deep learning technology, there have been tremendous innovations in the NLP field, yet traditional topic modeling, as one NLP application, lags behind the tide of deep learning. Methodologically and technically, we adopt transfer learning to fine-tune a BERT-based multiclassification system to categorize the main topics and then utilize the novel PSHTI model to infer the sub-topics under the predicted main topics. 2. Traditional unsupervised-learning-based topic models and clustering methods suffer from the difficulty of automatically generating meaningful topic labels, but our system enables mapping the top words to self-help issues by utilizing domain knowledge about the product through web-crawling. 3. This work provides a prominent showcase of leveraging state-of-the-art methodology in real production to help discover user insights and drive business investment priorities.
http://arxiv.org/abs/2007.09303v3
"2020-07-18T01:18:00Z"
cs.LG, cs.CL, stat.ML
2020
EZLDA: Efficient and Scalable LDA on GPUs
Shilong Wang, Hang Liu, Anil Gaihre, Hengyong Yu
LDA is a statistical approach for topic modeling with a wide range of applications. However, there exist very few attempts to accelerate LDA on GPUs, which come with exceptional computing and memory throughput capabilities. To this end, we introduce EZLDA, which achieves efficient and scalable LDA training on GPUs with the following three contributions. First, EZLDA introduces a three-branch sampling method that takes advantage of the convergence heterogeneity of various tokens to reduce redundant sampling work. Second, to enable a sparsity-aware format for both D and W on GPUs with fast sampling and updating, we introduce a hybrid format for W along with a corresponding token partition for T and inverted-index designs. Third, we design a hierarchical workload-balancing solution to address the extremely skewed workload imbalance problem on a GPU and scale EZLDA across multiple GPUs. Taken together, EZLDA achieves superior performance over state-of-the-art attempts with lower memory consumption.
http://arxiv.org/abs/2007.08725v1
"2020-07-17T02:40:03Z"
cs.DC, cs.IR, cs.LG
2020
The Sparse Hausdorff Moment Problem, with Application to Topic Models
Spencer Gordon, Bijan Mazaheri, Leonard J. Schulman, Yuval Rabani
We consider the problem of identifying, from its first $m$ noisy moments, a probability distribution on $[0,1]$ of support $k<\infty$. This is equivalent to the problem of learning a distribution on $m$ observable binary random variables $X_1,X_2,\dots,X_m$ that are iid conditional on a hidden random variable $U$ taking values in $\{1,2,\dots,k\}$. Our focus is on accomplishing this with $m=2k$, which is the minimum $m$ for which verifying that the source is a $k$-mixture is possible (even with exact statistics). This problem, so simply stated, is quite useful: e.g., by a known reduction, any algorithm for it lifts to an algorithm for learning pure topic models. We give an algorithm for identifying a $k$-mixture using samples of $m=2k$ iid binary random variables using a sample of size $\left(1/w_{\min}\right)^2 \cdot\left(1/\zeta\right)^{O(k)}$ and post-sampling runtime of only $O(k^{2+o(1)})$ arithmetic operations. Here $w_{\min}$ is the minimum probability of an outcome of $U$, and $\zeta$ is the minimum separation between the distinct success probabilities of the $X_i$s. Stated in terms of the moment problem, it suffices to know the moments to additive accuracy $w_{\min}\cdot\zeta^{O(k)}$. It is known that the sample complexity of any solution to the identification problem must be at least exponential in $k$. Previous results demonstrated either worse sample complexity and worse $O(k^c)$ runtime for some $c$ substantially larger than $2$, or similar sample complexity and much worse $k^{O(k^2)}$ runtime.
http://arxiv.org/abs/2007.08101v3
"2020-07-16T04:23:57Z"
cs.LG, cs.DS, stat.ML
2020
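In the moment-problem language used above, the learner sees noisy versions of the quantities

$$m_j = \mathbb{E}\Big[\prod_{i=1}^{j} X_i\Big] = \sum_{u=1}^{k} w_u\, p_u^{\,j}, \qquad j = 1,\dots,2k,$$

where $w_u = \Pr[U = u]$ and $p_u$ is the shared success probability of the $X_i$ conditional on $U = u$; identifying the mixture means recovering the pairs $(w_u, p_u)$ from these $2k$ moments.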
Neural Topic Models with Survival Supervision: Jointly Predicting Time-to-Event Outcomes and Learning How Clinical Features Relate
Linhong Li, Ren Zuo, Amanda Coston, Jeremy C. Weiss, George H. Chen
In time-to-event prediction problems, a standard approach to estimating an interpretable model is to use Cox proportional hazards, where features are selected based on lasso regularization or stepwise regression. However, these Cox-based models do not learn how different features relate. As an alternative, we present an interpretable neural network approach that jointly learns a survival model to predict time-to-event outcomes while simultaneously learning how features relate in terms of a topic model. In particular, we model each subject as a distribution over "topics", which are learned from clinical features so as to help predict a time-to-event outcome. From a technical standpoint, we extend existing neural topic modeling approaches to also minimize a survival analysis loss function. We study the effectiveness of this approach on seven healthcare datasets for predicting time until death as well as hospital ICU length of stay, where we find that neural survival-supervised topic models achieve competitive accuracy with existing approaches while yielding interpretable clinical "topics" that explain feature relationships.
http://arxiv.org/abs/2007.07796v1
"2020-07-15T16:20:04Z"
cs.LG, stat.ML
2020
COVID-19 Twitter Dataset with Latent Topics, Sentiments and Emotions Attributes
Raj Kumar Gupta, Ajay Vishwanath, Yinping Yang
This paper describes a large global dataset on people's discourse and responses to the COVID-19 pandemic over the Twitter platform. From 28 January 2020 to 1 June 2022, we collected and processed over 252 million Twitter posts from more than 29 million unique users using four keywords: "corona", "wuhan", "nCov" and "covid". Leveraging probabilistic topic modelling and pre-trained machine learning-based emotion recognition algorithms, we labelled each tweet with seventeen attributes, including a) ten binary attributes indicating the tweet's relevance (1) or irrelevance (0) to the top ten detected topics, b) five quantitative emotion attributes indicating the degree of intensity of the valence or sentiment (from 0: extremely negative to 1: extremely positive) and the degree of intensity of fear, anger, sadness and happiness emotions (from 0: not at all to 1: extremely intense), and c) two categorical attributes indicating the sentiment (very negative, negative, neutral or mixed, positive, very positive) and the dominant emotion (fear, anger, sadness, happiness, no specific emotion) the tweet is mainly expressing. We discuss the technical validity and report the descriptive statistics of these attributes, their temporal distribution, and geographic representation. The paper concludes with a discussion of the dataset's usage in communication, psychology, public health, economics, and epidemiology.
http://arxiv.org/abs/2007.06954v8
"2020-07-14T10:30:47Z"
cs.CL, cs.IR
2,020
Model Fusion with Kullback--Leibler Divergence
Sebastian Claici, Mikhail Yurochkin, Soumya Ghosh, Justin Solomon
We propose a method to fuse posterior distributions learned from heterogeneous datasets. Our algorithm relies on a mean field assumption for both the fused model and the individual dataset posteriors and proceeds using a simple assign-and-average approach. The components of the dataset posteriors are assigned to the proposed global model components by solving a regularized variant of the assignment problem. The global components are then updated based on these assignments by their mean under a KL divergence. For exponential family variational distributions, our formulation leads to an efficient non-parametric algorithm for computing the fused model. Our algorithm is easy to describe and implement, efficient, and competitive with state-of-the-art on motion capture analysis, topic modeling, and federated learning of Bayesian neural networks.
http://arxiv.org/abs/2007.06168v1
"2020-07-13T03:27:45Z"
cs.LG, stat.ML
2,020
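A minimal sketch of the assign-and-average step described above, using squared distance between Gaussian component means as a stand-in for the KL-based assignment cost (for spherical Gaussians with shared covariance the two agree up to constants); the regularization term of the paper's assignment problem is omitted:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def fuse(global_means, local_means):
    """Assign local components to global ones (Hungarian algorithm on a
    squared-distance cost) and update matched components by averaging."""
    cost = ((global_means[:, None, :] - local_means[None, :, :]) ** 2).sum(-1)
    g_idx, l_idx = linear_sum_assignment(cost)
    fused = global_means.copy()
    fused[g_idx] = 0.5 * (global_means[g_idx] + local_means[l_idx])
    return fused

rng = np.random.default_rng(0)
g = rng.normal(size=(5, 3))                               # global components
l = g[rng.permutation(5)] + 0.1 * rng.normal(size=(5, 3)) # one dataset's posterior
print(fuse(g, l))
```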
Tracing Complexity in Food Blogging Entries
Maija Kāle, Ebenezer Agbozo
Within this paper, we focus on the concept of complexity and how it is represented in food blogging entries on Twitter. We turn specific attention to capturing complexity in relation to healthy foods, focusing on food blogging entries that entail the notions of health/healthiness/healthy. We do so because we consider that complexity manifests hedonism - that is, the irrational determinant of food choice above rational considerations of nutrition and healthiness. Using text as a platform for our analysis, we derive bigrams and topic models that illustrate the frequencies of words and bigrams, thus directing our attention to the current discourse in food blogging entries on Twitter. The results show that, contrary to complexity, the dominant characteristics in the healthy food domain are ease and speed of preparation; however, rational and health-related considerations may not always take precedence when the choice is made. Food blogging entries show surprisingly little account of healthy food as being tasty and enjoyable. With this we aim to contribute to the knowledge of how to shape healthier consumer behaviors. Having discovered the scarcity of hedonic connotations, this work invites further research into text-based information about food.
http://arxiv.org/abs/2007.05552v1
"2020-07-10T18:13:48Z"
cs.CY, 68U15 (Primary), 91F20 (Secondary), I.2.7
2,020
Topic Modeling on User Stories using Word Mover's Distance
Kim Julian Gülle, Nicholas Ford, Patrick Ebel, Florian Brokhausen, Andreas Vogelsang
Requirements elicitation has recently been complemented with crowd-based techniques, which continuously involve large, heterogeneous groups of users who express their feedback through a variety of media. Crowd-based elicitation has great potential for engaging with (potential) users early on but also results in large sets of raw and unstructured feedback. Consolidating and analyzing this feedback is a key challenge for turning it into sensible user requirements. In this paper, we focus on topic modeling as a means to identify topics within a large set of crowd-generated user stories and compare three approaches: (1) a traditional approach based on Latent Dirichlet Allocation, (2) a combination of word embeddings and principal component analysis, and (3) a combination of word embeddings and Word Mover's Distance. We evaluate the approaches on a publicly available set of 2,966 user stories written and categorized by crowd workers. We found that a combination of word embeddings and Word Mover's Distance is most promising. Depending on the word embeddings we use in our approaches, we manage to cluster the user stories in two ways: one that is closer to the original categorization and another that allows new insights into the dataset, e.g. to find potentially new categories. Unfortunately, no measure exists to rate the quality of our results objectively. Still, our findings provide a basis for future work towards analyzing crowd-sourced user stories.
http://arxiv.org/abs/2007.05302v2
"2020-07-10T11:05:42Z"
cs.CL, cs.IR
2,020
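A minimal sketch of the third approach above, scoring user-story similarity with Word Mover's Distance over pretrained embeddings via gensim; the GloVe model and toy stories are illustrative, and gensim's wmdistance additionally requires an optimal-transport backend such as POT:

```python
import gensim.downloader as api

# Load pretrained word vectors (a small GloVe model keeps the demo light).
vectors = api.load("glove-wiki-gigaword-50")

s1 = "as a user i want to filter search results by date".split()
s2 = "as a user i want to sort the result list by time".split()
s3 = "the app crashes when i upload a large photo".split()

# Lower Word Mover's Distance = more similar user stories.
print(vectors.wmdistance(s1, s2))  # semantically close
print(vectors.wmdistance(s1, s3))  # semantically distant
```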
Handling Collocations in Hierarchical Latent Tree Analysis for Topic Modeling
Leonard K. M. Poon, Nevin L. Zhang, Haoran Xie, Gary Cheng
Topic modeling has been one of the most active research areas in machine learning in recent years. Hierarchical latent tree analysis (HLTA) has been recently proposed for hierarchical topic modeling and has shown superior performance over state-of-the-art methods. However, the models used in HLTA have a tree structure and cannot represent the different meanings of multiword expressions sharing the same word appropriately. Therefore, we propose a method for extracting and selecting collocations as a preprocessing step for HLTA. The selected collocations are replaced with single tokens in the bag-of-words model before running HLTA. Our empirical evaluation shows that the proposed method led to better performance of HLTA on three of the four data sets tested.
http://arxiv.org/abs/2007.05163v1
"2020-07-10T04:56:36Z"
cs.CL, cs.IR, cs.LG
2,020
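A sketch of the preprocessing idea described above using gensim's Phrases, which scores co-occurring word pairs and merges frequent collocations into single tokens before bag-of-words modeling; the paper's own extraction and selection criteria may differ:

```python
from gensim.models.phrases import Phrases, Phraser

docs = [
    "machine learning models need training data".split(),
    "deep machine learning advances topic modeling".split(),
    "training data quality matters in machine learning".split(),
]

# Score bigrams by co-occurrence; pairs above the threshold become one token.
bigrams = Phraser(Phrases(docs, min_count=2, threshold=0.1))
print([bigrams[d] for d in docs])  # e.g. "machine_learning" as a single token
```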
The impact of political party/candidate on the election results from a sentiment analysis perspective using #AnambraDecides2017 tweets
Ikechukwu Onyenwe, Samuel Nwagbo, Njideka Mbeledogu, Ebele Onyedinma
This work investigates empirically the impact of political party control over its candidates, or vice versa, on winning an election, using a natural language processing technique called sentiment analysis (SA). To do this, a set of 7430 tweets bearing or related to #AnambraDecides2017 was streamed during the November 18, 2017, Anambra State gubernatorial election. These are Twitter discussions on the top five political parties and their candidates, termed political actors in this paper. We conduct polarity and subjectivity sentiment analyses on all the tweets, considering time as a useful dimension of SA. Furthermore, we use word frequency to find the words most associated with the political actors in a given time. We find the most talked-about topics using a topic modeling algorithm and examine how the computed sentiments and most frequent words are related to the topics per political actor. Among other things, we deduced from the experimental results that even though a political party serves as a platform that sells the personality of a candidate, the acceptance of the candidate/party contributes to winning an election. For example, we found the winner of the election, Willie Obiano, benefiting from the values his party shares among the people of the State. Associating his name with his party, the All Progressives Grand Alliance (APGA), displays more positive sentiments, and the subjectivity analysis indicates that Twitter users mentioning APGA are less emotionally subjective in their tweets than those mentioning the other parties.
http://arxiv.org/abs/2007.03824v1
"2020-07-07T23:41:56Z"
cs.SI, cs.IR
2,020
Cultural Convergence: Insights into the behavior of misinformation networks on Twitter
Liz McQuillan, Erin McAweeney, Alicia Bargar, Alex Ruch
How can the birth and evolution of ideas and communities in a network be studied over time? We use a multimodal pipeline, consisting of network mapping, topic modeling, bridging centrality, and divergence to analyze Twitter data surrounding the COVID-19 pandemic. We use network mapping to detect accounts creating content surrounding COVID-19, then Latent Dirichlet Allocation to extract topics, and bridging centrality to identify topical and non-topical bridges, before examining the distribution of each topic and bridge over time and applying Jensen-Shannon divergence of topic distributions to show communities that are converging in their topical narratives.
http://arxiv.org/abs/2007.03443v1
"2020-07-07T13:50:24Z"
cs.SI, cs.CL, physics.soc-ph, H.1.2; H.4.3; I.2.1; I.2.6; I.2.7; I.5
2,020
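The divergence step of the pipeline above can be illustrated with scipy, comparing communities' topic distributions (toy vectors below); note that scipy returns the Jensen-Shannon distance, i.e. the square root of the divergence:

```python
import numpy as np
from scipy.spatial.distance import jensenshannon

# Topic distributions of three communities over the same 5 LDA topics.
community_a = np.array([0.50, 0.20, 0.15, 0.10, 0.05])
community_b = np.array([0.45, 0.25, 0.15, 0.10, 0.05])
community_c = np.array([0.05, 0.10, 0.15, 0.20, 0.50])

# Square the JS distance to get the divergence.
print(jensenshannon(community_a, community_b) ** 2)  # small: converging narratives
print(jensenshannon(community_a, community_c) ** 2)  # large: divergent narratives
```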
Exploratory Analysis of COVID-19 Related Tweets in North America to Inform Public Health Institutes
Hyeju Jang, Emily Rempel, Giuseppe Carenini, Naveed Janjua
Social media is a rich source where we can learn about people's reactions to social issues. As COVID-19 has significantly impacted people's lives, it is essential to capture how people react to public health interventions and understand their concerns. In this paper, we aim to investigate people's reactions and concerns about COVID-19 in North America, especially focusing on Canada. We analyze COVID-19 related tweets using topic modeling and aspect-based sentiment analysis, and interpret the results with public health experts. We compare the timeline of topics discussed with the timing of implementation of public health interventions for COVID-19. We also examine people's sentiment about COVID-19 related issues. We discuss how the results can be helpful for public health agencies when designing a policy for new interventions. Our work shows how Natural Language Processing (NLP) techniques could be applied to public health questions with domain expert involvement.
http://arxiv.org/abs/2007.02452v1
"2020-07-05T21:38:28Z"
cs.CL, cs.CY, cs.SI
2,020
Source Code Comments: Overlooked in the Realm of Code Clone Detection
Sandeep Kaur Kuttal, Akash Ghosh
Reusing code can produce duplicate or near-duplicate code clones in code repositories. Current code clone detection techniques, like Program Dependence Graphs, rely on code structure and its dependencies to detect clones. These techniques are expensive, using large amounts of processing power, time, and memory. In practice, programmers often utilize code comments to comprehend and reuse code, as comments carry important domain knowledge. But current clone detection techniques ignore code comments, mainly due to the ambiguity of the English language. Recent advances in information retrieval techniques may have the potential to utilize code comments for clone detection. We investigated this by empirically comparing the accuracy of detecting clones with solely comments versus solely source code (without comments) on the JHotDraw package, which contains 315 classes and 27K lines of code. To detect clones at the file level, we used a topic modeling technique, Latent Dirichlet Allocation, to analyze code comments and GRAPLE -- utilizing the Program Dependency Graph -- to analyze code. Our results show 94.86% recall and 84.21% precision with Latent Dirichlet Allocation and 28.7% recall and 55.39% precision using GRAPLE. We found Latent Dirichlet Allocation generated false positives in cases where programs lacked quality comments. But this limitation can be addressed by using a hybrid approach: utilizing code comments at the file level to reduce the clone set and then using Program Dependency Graph-based techniques at the method level to detect precise clones. Our further analysis across Java and Python packages, Java Swing and PyGUI, found a recall of 74.86% and a precision of 84.21%. Our findings call for reexamining the assumptions regarding the use of code comments in current clone detection techniques.
http://arxiv.org/abs/2006.14505v1
"2020-06-25T15:53:14Z"
cs.SE
2,020
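A minimal sketch of the comment-based side of the comparison above: fit LDA to file-level comment text and treat files with highly similar topic distributions as clone candidates. The comments are toy stand-ins, and the paper's GRAPLE side is not reproduced:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.metrics.pairwise import cosine_similarity

# Comments extracted from three files (toy stand-ins for JHotDraw classes).
comments = [
    "draws a rectangle figure and handles resize events",
    "renders a rectangle shape and responds to resizing",
    "parses command line arguments for the batch importer",
]

X = CountVectorizer(stop_words="english").fit_transform(comments)
lda = LatentDirichletAllocation(n_components=2, random_state=0)
doc_topics = lda.fit_transform(X)  # file-level topic distributions

# Files whose comment topics are highly similar are clone candidates.
print(cosine_similarity(doc_topics))
```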
Graph Structural-topic Neural Network
Qingqing Long, Yilun Jin, Guojie Song, Yi Li, Wei Lin
Graph Convolutional Networks (GCNs) achieved tremendous success by effectively gathering local features for nodes. However, GCNs commonly focus more on node features and less on graph structures within the neighborhood, especially higher-order structural patterns. Yet such local structural patterns are shown to be indicative of node properties in numerous fields. In addition, it is not just single patterns but the distribution over all these patterns that matters, because networks are complex and the neighborhood of each node consists of a mixture of various nodes and structural patterns. Correspondingly, in this paper, we propose the Graph Structural-topic Neural Network, abbreviated GraphSTONE, a GCN model that utilizes topic models of graphs, such that the structural topics capture indicative graph structures broadly from a probabilistic aspect rather than merely a few structures. Specifically, we build topic models upon graphs using anonymous walks and Graph Anchor LDA, an LDA variant that selects significant structural patterns first, so as to alleviate the complexity and generate structural topics efficiently. In addition, we design multi-view GCNs to unify node features and structural topic features and utilize structural topics to guide the aggregation. We evaluate our model through both quantitative and qualitative experiments, where our model exhibits promising performance, high efficiency, and clear interpretability.
http://arxiv.org/abs/2006.14278v2
"2020-06-25T09:47:21Z"
cs.LG, cs.SI, stat.ML
2,020
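A sketch of the anonymous walks that GraphSTONE uses as its "words": each random walk is relabelled by order of first node appearance, so structurally equivalent walks map to the same token regardless of node identity. The Graph Anchor LDA step built on top of these is not shown:

```python
import random
import networkx as nx

def anonymous_walk(graph, start, length, rng=random):
    """Random walk relabelled by order of first appearance, so walks
    capture structure (e.g. returning to a seen node) independently
    of node identity: (v3, v8, v3, v5) -> (0, 1, 0, 2)."""
    walk, labels = [start], {start: 0}
    for _ in range(length - 1):
        nxt = rng.choice(list(graph.neighbors(walk[-1])))
        labels.setdefault(nxt, len(labels))
        walk.append(nxt)
    return tuple(labels[v] for v in walk)

G = nx.karate_club_graph()
rng = random.Random(0)
# The multiset of anonymous walks around a node is its structural "document".
print([anonymous_walk(G, 0, 4, rng) for _ in range(5)])
```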
Neural Topic Modeling with Continual Lifelong Learning
Pankaj Gupta, Yatin Chaudhary, Thomas Runkler, Hinrich Schütze
Lifelong learning has recently attracted attention in building machine learning systems that continually accumulate and transfer knowledge to help future learning. Unsupervised topic modeling has been popularly used to discover topics from document collections. However, the application of topic modeling is challenging due to data sparsity, e.g., in a small collection of (short) documents, and it thus generates incoherent topics and sub-optimal document representations. To address the problem, we propose a lifelong learning framework for neural topic modeling that can continuously process streams of document collections, accumulate topics, and guide future topic modeling tasks by knowledge transfer from several sources to better deal with sparse data. In the lifelong process, we particularly investigate jointly: (1) sharing generative homologies (latent topics) over the lifetime to transfer prior knowledge, and (2) minimizing catastrophic forgetting to retain past learning via novel selective data augmentation, co-training and topic regularization approaches. Given a stream of document collections, we apply the proposed Lifelong Neural Topic Modeling (LNTM) framework in modeling three sparse document collections as future tasks and demonstrate improved performance quantified by perplexity, topic coherence and an information retrieval task.
http://arxiv.org/abs/2006.10909v2
"2020-06-19T00:43:23Z"
cs.CL, cs.IR, cs.LG, cs.NE
2,020
Explainable and Discourse Topic-aware Neural Language Understanding
Yatin Chaudhary, Hinrich Schütze, Pankaj Gupta
Marrying topic models and language models exposes language understanding to a broader source of document-level context beyond sentences via topics. While introducing topical semantics in language models, existing approaches incorporate latent document topic proportions and ignore topical discourse in the sentences of the document. This work extends this line of research by additionally introducing an explainable topic representation in language understanding, obtained from a set of key terms corresponding to each latent topic of the proportion. Moreover, we retain sentence-topic associations along with document-topic associations by modeling topical discourse for every sentence in the document. We present a novel neural composite language model that exploits both the latent and explainable topics along with topical discourse at the sentence level in a joint learning framework of topic and language models. Experiments over a range of tasks such as language modeling, word sense disambiguation, document classification, retrieval and text generation demonstrate the ability of the proposed model to improve language understanding.
http://arxiv.org/abs/2006.10632v3
"2020-06-18T15:53:58Z"
cs.CL, cs.AI, cs.LG
2,020
Improving unsupervised neural aspect extraction for online discussions using out-of-domain classification
Anton Alekseev, Elena Tutubalina, Valentin Malykh, Sergey Nikolenko
Deep learning architectures based on self-attention have recently achieved and surpassed state-of-the-art results in the task of unsupervised aspect extraction and topic modeling. While models such as neural attention-based aspect extraction (ABAE) have been successfully applied to user-generated texts, they are less coherent when applied to traditional data sources such as news articles and newsgroup documents. In this work, we introduce a simple approach based on sentence filtering in order to improve topical aspects learned from newsgroups-based content without modifying the basic mechanism of ABAE. We train a probabilistic classifier to distinguish between out-of-domain texts (outer dataset) and in-domain texts (target dataset). Then, during data preparation we filter out sentences that have a low probability of being in-domain and train the neural model on the remaining sentences. The positive effect of sentence filtering on topic coherence is demonstrated in comparison to aspect extraction models trained on unfiltered texts.
http://arxiv.org/abs/2006.09766v1
"2020-06-17T10:34:16Z"
cs.CL
2,020
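A minimal sketch of the filtering step described above, assuming a TF-IDF logistic-regression classifier and a 0.5 probability cutoff; the authors' classifier and threshold may differ:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Tiny stand-ins: target = in-domain newsgroup sentences, outer = unrelated text.
target = ["the space shuttle carried a new satellite into orbit",
          "nasa announced the launch window for the mission"]
outer = ["buy cheap tickets now limited offer",
         "my cat refuses to eat in the morning"]

vec = TfidfVectorizer().fit(target + outer)
clf = LogisticRegression().fit(vec.transform(target + outer), [1, 1, 0, 0])

# Keep only sentences likely to be in-domain before training the neural model.
candidates = ["the rocket engines ignited at dawn", "great discount on shoes"]
probs = clf.predict_proba(vec.transform(candidates))[:, 1]
kept = [s for s, p in zip(candidates, probs) if p >= 0.5]
print(list(zip(candidates, probs)), kept)
```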
Deep Autoencoding Topic Model with Scalable Hybrid Bayesian Inference
Hao Zhang, Bo Chen, Yulai Cong, Dandan Guo, Hongwei Liu, Mingyuan Zhou
To build a flexible and interpretable model for document analysis, we develop deep autoencoding topic model (DATM) that uses a hierarchy of gamma distributions to construct its multi-stochastic-layer generative network. In order to provide scalable posterior inference for the parameters of the generative network, we develop topic-layer-adaptive stochastic gradient Riemannian MCMC that jointly learns simplex-constrained global parameters across all layers and topics, with topic and layer specific learning rates. Given a posterior sample of the global parameters, in order to efficiently infer the local latent representations of a document under DATM across all stochastic layers, we propose a Weibull upward-downward variational encoder that deterministically propagates information upward via a deep neural network, followed by a Weibull distribution based stochastic downward generative model. To jointly model documents and their associated labels, we further propose supervised DATM that enhances the discriminative power of its latent representations. The efficacy and scalability of our models are demonstrated on both unsupervised and supervised learning tasks on big corpora.
http://arxiv.org/abs/2006.08804v1
"2020-06-15T22:22:56Z"
cs.LG, stat.AP, stat.CO, stat.ML
2,020
Behind the Mask: A Computational Study of Anonymous' Presence on Twitter
Keenan Jones, Jason R. C. Nurse, Shujun Li
The hacktivist group Anonymous is unusual in its public-facing nature. Unlike other cybercriminal groups, which rely on secrecy and privacy for protection, Anonymous is prevalent on the social media site, Twitter. In this paper we re-examine some key findings reported in previous small-scale qualitative studies of the group using a large-scale computational analysis of Anonymous' presence on Twitter. We specifically refer to reports which reject the group's claims of leaderlessness, and indicate a fracturing of the group after the arrests of prominent members in 2011-2013. In our research, we present the first attempts to use machine learning to identify and analyse the presence of a network of over 20,000 Anonymous accounts spanning 2008 to 2019 on the Twitter platform. In turn, this research utilises social network analysis (SNA) and centrality measures to examine the distribution of influence within this large network, identifying the presence of a small number of highly influential accounts. Moreover, we present the first study of tweets from some of the identified key influencer accounts and, through the use of topic modelling, demonstrate a similarity in overarching subjects of discussion between these prominent accounts. These findings provide robust, quantitative evidence to support the claims of smaller-scale, qualitative studies of the Anonymous collective.
http://arxiv.org/abs/2006.08273v1
"2020-06-15T10:26:12Z"
cs.SI, cs.LG
2,020
Collective response to the media coverage of COVID-19 Pandemic on Reddit and Wikipedia
Nicolò Gozzi, Michele Tizzani, Michele Starnini, Fabio Ciulla, Daniela Paolotti, André Panisson, Nicola Perra
The exposure and consumption of information during epidemic outbreaks may alter risk perception, trigger behavioural changes, and ultimately affect the evolution of the disease. It is thus of the utmost importance to map information dissemination by mainstream media outlets and the public response. However, our understanding of this exposure-response dynamic during the COVID-19 pandemic is still limited. In this paper, we provide a characterization of media coverage and online collective attention to the COVID-19 pandemic in four countries: Italy, United Kingdom, United States, and Canada. For this purpose, we collect a heterogeneous dataset including 227,768 online news articles and 13,448 Youtube videos published by mainstream media, 107,898 user posts and 3,829,309 comments on the social media platform Reddit, and 278,456,892 views of COVID-19 related Wikipedia pages. Our results show that public attention, quantified as user activity on Reddit and active searches on Wikipedia pages, is mainly driven by media coverage and declines rapidly, while news exposure and COVID-19 incidence remain high. Furthermore, by using an unsupervised, dynamical topic modeling approach, we show that while the attention dedicated to different topics by media and online users is in good accordance, interesting deviations emerge in their temporal patterns. Overall, our findings offer an additional key to interpret public perception/response to the current global health emergency and raise questions about the effects of attention saturation on collective awareness, risk perception and thus on tendencies towards behavioural changes.
http://arxiv.org/abs/2006.06446v1
"2020-06-08T16:10:13Z"
cs.SI, physics.soc-ph
2,020
StackOverflow vs Kaggle: A Study of Developer Discussions About Data Science
David Hin
Software developers are increasingly required to understand fundamental data science (DS) concepts. Recently, the presence of machine learning (ML) and deep learning (DL) has dramatically increased in the development of user applications, whether they are leveraged through frameworks or implemented from scratch. These topics attract much discussion on online platforms. This paper conducts large-scale qualitative and quantitative experiments to study the characteristics of 197,836 posts from StackOverflow and Kaggle. Latent Dirichlet Allocation topic modelling is used to extract twenty-four DS discussion topics. The main findings include that TensorFlow-related topics were most prevalent in StackOverflow, while meta discussion topics were the prevalent ones on Kaggle. StackOverflow tends to include lower-level troubleshooting, while Kaggle focuses on practicality and optimising leaderboard performance. In addition, across both communities, DS discussion is increasing at a dramatic rate. While TensorFlow discussion on StackOverflow is slowing, interest in Keras is rising. Finally, ensemble algorithms are the most mentioned ML/DL algorithms in Kaggle but are rarely discussed on StackOverflow. These findings can help educators and researchers to more effectively tailor and prioritise efforts in researching and communicating DS concepts towards different developer communities.
http://arxiv.org/abs/2006.08334v1
"2020-06-06T06:51:11Z"
cs.CL, cs.SE
2,020
Classification Aware Neural Topic Model and its Application on a New COVID-19 Disinformation Corpus
Xingyi Song, Johann Petrak, Ye Jiang, Iknoor Singh, Diana Maynard, Kalina Bontcheva
The explosion of disinformation accompanying the COVID-19 pandemic has overloaded fact-checkers and media worldwide, and has brought a major new challenge to government responses. Not only is disinformation creating confusion about medical science amongst citizens, but it is also amplifying distrust in policy makers and governments. To help tackle this, we developed computational methods to categorise COVID-19 disinformation. The COVID-19 disinformation categories could be used for a) focusing fact-checking efforts on the most damaging kinds of COVID-19 disinformation; b) guiding policy makers who are trying to deliver effective public health messages and counter COVID-19 disinformation effectively. This paper presents: 1) a corpus containing what is currently the largest available set of manually annotated COVID-19 disinformation categories; 2) a classification-aware neural topic model (CANTM) designed for COVID-19 disinformation category classification and topic discovery; 3) an extensive analysis of COVID-19 disinformation categories with respect to time, volume, false type, media type and origin source.
http://arxiv.org/abs/2006.03354v2
"2020-06-05T10:32:18Z"
cs.LG, cs.CL, cs.SI, stat.ML
2,020
Stopwords in Technical Language Processing
Serhad Sarica, Jianxi Luo
Natural language processing techniques are increasingly applied to information retrieval, indexing and topic modelling in engineering contexts. A standard component of such tasks is the removal of stopwords, which are uninformative components of the data. While researchers use readily available stopword lists derived for the general English language, the technical jargon of engineering fields contains its own highly frequent and uninformative words, and no standard stopword list exists for technical language processing applications. Here we address this gap by rigorously identifying generic, insignificant, uninformative stopwords in engineering texts beyond the stopwords in general texts, based on the synthesis of alternative data-driven approaches, and by curating a stopword list ready for technical language processing applications.
http://arxiv.org/abs/2006.02633v1
"2020-06-04T03:52:59Z"
cs.IR, cs.CL
2,020
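One of the data-driven signals such a synthesis could build on is document frequency: terms that occur in nearly every engineering document are uninformative even when absent from general-English stopword lists. A toy sketch, with an illustrative 0.9 threshold:

```python
from collections import Counter

docs = [
    "the invention comprises a sensor module wherein the sensor detects torque",
    "the method comprises a step wherein the controller adjusts voltage",
    "a gearbox assembly wherein said assembly comprises helical gears",
]

# Terms appearing in nearly every document carry little information;
# document frequency is one of several signals one could combine.
df = Counter(term for d in docs for term in set(d.split()))
candidates = {t for t, c in df.items() if c / len(docs) >= 0.9}
print(candidates)  # picks up technical boilerplate like "comprises", "wherein"
```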
Streaming Coresets for Symmetric Tensor Factorization
Rachit Chhaya, Jayesh Choudhari, Anirban Dasgupta, Supratim Shit
Factorizing tensors has recently become an important optimization module in a number of machine learning pipelines, especially in latent variable models. We show how to do this efficiently in the streaming setting. Given a set of $n$ vectors, each in $\mathbb{R}^d$, we present algorithms to select a sublinear number of these vectors as a coreset, while guaranteeing that the CP decomposition of the $p$-moment tensor of the coreset approximates the corresponding decomposition of the $p$-moment tensor computed from the full data. We introduce two novel algorithmic techniques: online filtering and kernelization. Using these two, we present six algorithms that achieve different tradeoffs of coreset size, update time and working space, beating or matching various state-of-the-art algorithms. In the case of matrices (order-$2$ tensors), our online row sampling algorithm guarantees $(1 \pm \epsilon)$ relative-error spectral approximation. We show applications of our algorithms to learning single-topic models.
http://arxiv.org/abs/2006.01225v2
"2020-06-01T19:55:34Z"
cs.LG, cs.DS, stat.ML
2,020
Detecting Group Beliefs Related to 2018's Brazilian Elections in Tweets: A Combined Study on Modeling Topics and Sentiment Analysis
Brenda Salenave Santana, Aline Aver Vanin
2018's Brazilian presidential elections highlighted the influence of alternative media and social networks, such as Twitter. In this work, we perform an analysis covering politically motivated discourses related to the second round of the Brazilian elections. In order to verify whether similar discourses reinforce group engagement with personal beliefs, we collected a set of tweets related to political hashtags at that moment. To this end, we used a combination of a topic modeling approach and opinion mining techniques to analyze the motivated political discourses. Using SentiLex-PT, a Portuguese sentiment lexicon, we extracted from the dataset the top 5 most frequent groups of words related to opinions. Applying a bag-of-words model, the cosine similarity was calculated between each opinion and the observed groups. This study allowed us to observe an exacerbated use of passionate discourses in the digital political scenario as a form of appreciation of, and engagement with, the groups which convey similar beliefs.
http://arxiv.org/abs/2006.00490v1
"2020-05-31T10:58:35Z"
cs.CL
2,020
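A minimal sketch of the similarity step above: represent an opinion and each frequent word group as bag-of-words vectors and compute cosine similarity (toy Portuguese tokens, accents stripped for brevity):

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# One opinion tweet versus two groups of frequent opinion words (toy data).
opinion = ["o melhor candidato para o brasil"]
groups = ["melhor candidato brasil mudanca", "pior governo corrupcao crise"]

vec = CountVectorizer().fit(opinion + groups)
sims = cosine_similarity(vec.transform(opinion), vec.transform(groups))
print(sims)  # higher similarity = opinion aligns with that group's discourse
```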
Topic Detection and Summarization of User Reviews
Pengyuan Li, Lei Huang, Guang-jie Ren
A massive number of reviews is generated daily from various platforms. It is impossible for people to read through tons of reviews and obtain useful information. Automatically summarizing customer reviews is thus important for identifying and extracting the essential information to help users obtain the gist of the data. However, as customer reviews are typically short, informal, and multifaceted, it is extremely challenging to generate topic-wise summarization. While several studies aim to solve this issue, they are heuristic methods developed using only customer reviews. Unlike existing methods, we propose an effective new summarization method by analyzing both reviews and summaries. To do that, we first segment reviews and summaries into individual sentiments. As the sentiments are typically short, we combine sentiments talking about the same aspect into a single document and apply a topic modeling method to identify hidden topics among customer reviews and summaries. Sentiment analysis is employed to distinguish positive and negative opinions among each detected topic. A classifier is also introduced to distinguish the writing pattern of summaries from that of customer reviews. Finally, sentiments are selected to generate the summarization based on their topic relevance, sentiment analysis score and the writing pattern. To test our method, a new dataset comprising product reviews and summaries about 1028 products was collected from Amazon and CNET. Experimental results show the effectiveness of our method compared with other methods.
http://arxiv.org/abs/2006.00148v1
"2020-05-30T02:19:08Z"
cs.CL
2,020
Automatic Generation of Topic Labels
Areej Alokaili, Nikolaos Aletras, Mark Stevenson
Topic modelling is a popular unsupervised method for identifying the underlying themes in document collections that has many applications in information retrieval. A topic is usually represented by a list of terms ranked by their probability but, since these can be difficult to interpret, various approaches have been developed to assign descriptive labels to topics. Previous work on the automatic assignment of labels to topics has relied on a two-stage approach: (1) candidate labels are retrieved from a large pool (e.g. Wikipedia article titles); and then (2) re-ranked based on their semantic similarity to the topic terms. However, these extractive approaches can only assign candidate labels from a restricted set that may not include any suitable ones. This paper proposes using a sequence-to-sequence neural-based approach to generate labels that does not suffer from this limitation. The model is trained over a new large synthetic dataset created using distant supervision. The method is evaluated by comparing the labels it generates to ones rated by humans.
http://arxiv.org/abs/2006.00127v1
"2020-05-29T23:33:13Z"
cs.IR
2,020
Examining Racial Bias in an Online Abuse Corpus with Structural Topic Modeling
Thomas Davidson, Debasmita Bhattacharya
We use structural topic modeling to examine racial bias in data collected to train models to detect hate speech and abusive language in social media posts. We augment the abusive language dataset by adding an additional feature indicating the predicted probability of the tweet being written in African-American English. We then use structural topic modeling to examine the content of the tweets and how the prevalence of different topics is related to both abusiveness annotation and dialect prediction. We find that certain topics are disproportionately racialized and considered abusive. We discuss how topic modeling may be a useful approach for identifying bias in annotated data.
http://arxiv.org/abs/2005.13041v1
"2020-05-26T21:02:43Z"
cs.CL, cs.SI
2,020
MPSUM: Entity Summarization with Predicate-based Matching
Dongjun Wei, Shiyuan Gao, Yaxin Liu, Zhibing Liu, Longtao Hang
With the development of the Semantic Web, entity summarization has become an emerging task to generate concrete summaries for real-world entities. To solve this problem, we propose an approach named MPSUM that extends a probabilistic topic model by integrating the ideas of predicate-uniqueness and object-importance for ranking triples. The approach aims at generating brief but representative summaries for entities. We compare our approach with the state-of-the-art methods using the DBpedia and LinkedMDB datasets. The experimental results show that our work improves the quality of entity summarization.
http://arxiv.org/abs/2005.11992v1
"2020-05-25T09:22:32Z"
cs.IR
2,020
Symptom extraction from the narratives of personal experiences with COVID-19 on Reddit
Curtis Murray, Lewis Mitchell, Jonathan Tuke, Mark Mackay
Social media discussion of COVID-19 provides a rich source of information into how the virus affects people's lives that is qualitatively different from traditional public health datasets. In particular, when individuals self-report their experiences over the course of the virus on social media, it can allow for identification of the emotions each stage of symptoms engenders in the patient. Posts to the Reddit forum r/COVID19Positive contain first-hand accounts from COVID-19 positive patients, giving insight into personal struggles with the virus. These posts often feature a temporal structure indicating the number of days after developing symptoms the text refers to. Using topic modelling and sentiment analysis, we quantify the change in discussion of COVID-19 throughout individuals' experiences for the first 14 days since symptom onset. Discourse on early symptoms such as fever, cough, and sore throat was concentrated towards the beginning of the posts, while language indicating breathing issues peaked around ten days. Some conversation around critical cases was also identified and appeared at a roughly constant rate. We identified two clear clusters of positive and negative emotions associated with the evolution of these symptoms and mapped their relationships. Our results provide a perspective on the patient experience of COVID-19 that complements other medical data streams and can potentially reveal when mental health issues might appear.
http://arxiv.org/abs/2005.10454v1
"2020-05-21T03:54:51Z"
cs.CL, cs.SI, stat.AP
2,020
Uncovering Spatiotemporal and Semantic Aspects of Tourists Mobility Using Social Sensing
Ana P G Ferreira, Thiago H Silva, Antonio A F Loureiro
Tourism fosters economic activity, employment and revenue, and plays a significant role in development; thus, the improvement of this activity is a strategic task. In this work, we show how social sensing can be used to understand the key characteristics of the behavior of tourists and residents. We observe distinct behavioral patterns in those classes, considering the spatial and temporal dimensions, where cultural and regional aspects might play an important role. Besides, we investigate how tourists move and the factors that influence their movements in London, New York, Rio de Janeiro and Tokyo. In addition, we propose a new approach based on a topic model that enables the automatic identification of mobility pattern themes, ultimately leading to a better understanding of users' profiles. The applicability of our results is broad, helping to provide better applications and services in the tourism segment.
http://arxiv.org/abs/2005.09033v1
"2020-05-18T19:05:22Z"
cs.SI
2,020
Public discourse and sentiment during the COVID-19 pandemic: using Latent Dirichlet Allocation for topic modeling on Twitter
Jia Xue, Junxiang Chen, Chen Chen, Chengda Zheng, Sijia Li, Tingshao Zhu
The study aims to understand Twitter users' discourse and psychological reactions to COVID-19. We use machine learning techniques to analyze about 1.9 million tweets (written in English) related to coronavirus collected from January 23 to March 7, 2020. A total of 11 salient topics are identified and then categorized into ten themes, including "updates about confirmed cases," "COVID-19 related death," "cases outside China (worldwide)," "COVID-19 outbreak in South Korea," "early signs of the outbreak in New York," "Diamond Princess cruise," "economic impact," "preventive measures," "authorities," and "supply chain." Results do not reveal treatment- or symptom-related messages as prevalent topics on Twitter. Sentiment analysis shows that fear of the unknown nature of the coronavirus is dominant in all topics. Implications and limitations of the study are also discussed.
http://arxiv.org/abs/2005.08817v3
"2020-05-18T15:50:38Z"
cs.SI, cs.CL, cs.CY
2,020
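A minimal gensim sketch of the LDA pipeline this kind of study relies on, using toy tokenized tweets in place of the 1.9 million collected ones:

```python
from gensim import corpora
from gensim.models import LdaModel

tweets = [
    ["confirmed", "cases", "rise", "in", "new", "york"],
    ["cruise", "ship", "passengers", "quarantined"],
    ["confirmed", "death", "toll", "update"],
    ["cruise", "ship", "docks", "after", "quarantine"],
]

dictionary = corpora.Dictionary(tweets)            # token -> id mapping
corpus = [dictionary.doc2bow(t) for t in tweets]   # bag-of-words per tweet
lda = LdaModel(corpus, num_topics=2, id2word=dictionary,
               random_state=0, passes=10)
for tid in range(2):
    print(lda.print_topic(tid, topn=4))  # top terms label each theme
```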
Content analysis of Persian/Farsi Tweets during COVID-19 pandemic in Iran using NLP
Pedram Hosseini, Poorya Hosseini, David A. Broniatowski
Iran, along with China, South Korea, and Italy, was among the countries hit hard in the first wave of the COVID-19 spread. Twitter is one of the online platforms widely used by Iranians inside and outside the country for sharing their opinions, thoughts, and feelings about a wide range of issues. In this study, using more than 530,000 original tweets in Persian/Farsi on COVID-19, we analyzed the topics discussed among users, who are mainly Iranians, to gauge and track the response to the pandemic and how it evolved over time. We applied a combination of manual annotation of a random sample of tweets and topic modeling tools to classify the contents and frequency of each category of topics. We identified the top 25 topics, among which living experience under home quarantine emerged as a major talking point. Our broader categorization of tweet content shows that satire, followed by news, is the dominant tweet type among Iranian users. While this framework and methodology can be used to track the public response to ongoing developments related to COVID-19, a generalization of the framework can also become a useful tool to gauge Iranian public reaction to ongoing policy measures or events, locally and internationally.
http://arxiv.org/abs/2005.08400v1
"2020-05-17T23:47:08Z"
cs.SI, cs.CL, cs.CY
2,020
India nudges to contain COVID-19 pandemic: a reactive public policy analysis using machine-learning based topic modelling
Ramit Debnath, Ronita Bardhan
India locked down 1.3 billion people on March 25, 2020 in the wake of the COVID-19 pandemic. The economic cost of it was estimated at USD 98 billion, while the social costs are still unknown. This study investigated how the government formed reactive policies to fight coronavirus across its policy sectors. Primary data was collected from the Press Information Bureau (PIB) in the form of press releases of government plans, policies, programme initiatives and achievements. A text corpus of 260,852 words was created from 396 documents from the PIB. Unsupervised machine-based topic modelling using the Latent Dirichlet Allocation (LDA) algorithm was performed on the text corpus to extract high-probability topics in the policy sectors. The extracted topics were interpreted through a nudge-theoretic lens to derive the critical policy heuristics of the government. Results showed that most interventions were targeted to generate an endogenous nudge by using external triggers. Notably, the nudges from the Prime Minister of India were critical in creating a herd effect on lockdown and social distancing norms across the nation. A similar effect was also observed around public health (e.g., masks in public spaces; Yoga and Ayurveda for immunity), transport (e.g., old trains converted to isolation wards), micro, small and medium enterprises (e.g., rapid production of PPE and masks), the science and technology sector (e.g., diagnostic kits, robots and nano-technology), home affairs (e.g., surveillance and lockdown), urban affairs (e.g., drones, GIS tools) and education (e.g., online learning). It was concluded that leveraging these heuristics is crucial for lockdown easement planning.
http://arxiv.org/abs/2005.06619v2
"2020-05-14T04:14:09Z"
cs.CY, cs.CL, cs.LG, J.4; K.4.1
2,020
SCAT: Second Chance Autoencoder for Textual Data
Somaieh Goudarzvand, Gharib Gharibi, Yugyung Lee
We present a k-competitive learning approach for textual autoencoders named Second Chance Autoencoder (SCAT). SCAT selects the $k$ largest and smallest positive activations as the winner neurons, which gain the activation values of the loser neurons during the learning process, and thus focus on retrieving well-representative features for topics. Our experiments show that SCAT achieves outstanding performance in classification, topic modeling, and document visualization compared to LDA, K-Sparse, NVCTM, and KATE.
http://arxiv.org/abs/2005.06632v3
"2020-05-11T19:04:31Z"
cs.CL, cs.LG
2,020
Text-Based Ideal Points
Keyon Vafa, Suresh Naidu, David M. Blei
Ideal point models analyze lawmakers' votes to quantify their political positions, or ideal points. But votes are not the only way to express a political position. Lawmakers also give speeches, release press statements, and post tweets. In this paper, we introduce the text-based ideal point model (TBIP), an unsupervised probabilistic topic model that analyzes texts to quantify the political positions of its authors. We demonstrate the TBIP with two types of politicized text data: U.S. Senate speeches and senator tweets. Though the model does not analyze their votes or political affiliations, the TBIP separates lawmakers by party, learns interpretable politicized topics, and infers ideal points close to the classical vote-based ideal points. One benefit of analyzing texts, as opposed to votes, is that the TBIP can estimate ideal points of anyone who authors political texts, including non-voting actors. To this end, we use it to study tweets from the 2020 Democratic presidential candidates. Using only the texts of their tweets, it identifies them along an interpretable progressive-to-moderate spectrum.
http://arxiv.org/abs/2005.04232v2
"2020-05-08T21:16:42Z"
cs.CL, cs.LG, stat.ML
2,020
Exploratory Analysis of Covid-19 Tweets using Topic Modeling, UMAP, and DiGraphs
Catherine Ordun, Sanjay Purushotham, Edward Raff
This paper illustrates five different techniques to assess the distinctiveness of topics, key terms and features, speed of information dissemination, and network behaviors for Covid19 tweets. First, we use pattern matching and second, topic modeling through Latent Dirichlet Allocation (LDA) to generate twenty different topics that discuss case spread, healthcare workers, and personal protective equipment (PPE). One topic specific to U.S. cases would start to uptick immediately after live White House Coronavirus Task Force briefings, implying that many Twitter users are paying attention to government announcements. We contribute machine learning methods not previously reported in the Covid19 Twitter literature. This includes our third method, Uniform Manifold Approximation and Projection (UMAP), that identifies unique clustering-behavior of distinct topics to improve our understanding of important themes in the corpus and help assess the quality of generated topics. Fourth, we calculated retweeting times to understand how fast information about Covid19 propagates on Twitter. Our analysis indicates that the median retweeting time of Covid19 for a sample corpus in March 2020 was 2.87 hours, approximately 50 minutes faster than repostings from Chinese social media about H7N9 in March 2013. Lastly, we sought to understand retweet cascades, by visualizing the connections of users over time from fast to slow retweeting. As the time to retweet increases, the density of connections also increase where in our sample, we found distinct users dominating the attention of Covid19 retweeters. One of the simplest highlights of this analysis is that early-stage descriptive methods like regular expressions can successfully identify high-level themes which were consistently verified as important through every subsequent analysis.
http://arxiv.org/abs/2005.03082v1
"2020-05-06T19:16:38Z"
cs.SI, cs.LG
2,020
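A sketch of the UMAP step described above, assuming documents are represented by their LDA topic proportions (random Dirichlet stand-ins here) and projected to 2-D for inspecting cluster separation; the umap-learn package is assumed:

```python
import numpy as np
import umap  # pip install umap-learn

# Rows: documents; columns: 20 LDA topic proportions (random stand-ins here).
rng = np.random.default_rng(0)
doc_topics = rng.dirichlet(np.ones(20) * 0.1, size=500)

# Project to 2-D; tight, well-separated clusters suggest distinct topics.
embedding = umap.UMAP(n_neighbors=15, metric="cosine", random_state=0)
coords = embedding.fit_transform(doc_topics)
print(coords.shape)  # (500, 2), ready for a scatter plot
```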
Modelling Grocery Retail Topic Distributions: Evaluation, Interpretability and Stability
Mariflor Vega-Carrasco, Jason O'sullivan, Rosie Prior, Ioanna Manolopoulou, Mirco Musolesi
Understanding the shopping motivations behind market baskets has high commercial value in the grocery retail industry. Analyzing shopping transactions demands techniques that can cope with the volume and dimensionality of grocery transactional data while keeping interpretable outcomes. Latent Dirichlet Allocation (LDA) provides a suitable framework to process grocery transactions and to discover a broad representation of customers' shopping motivations. However, summarizing the posterior distribution of an LDA model is challenging, while individual LDA draws may not be coherent and cannot capture topic uncertainty. Moreover, the evaluation of LDA models is dominated by model-fit measures which may not adequately capture the qualitative aspects such as interpretability and stability of topics. In this paper, we introduce clustering methodology that post-processes posterior LDA draws to summarise the entire posterior distribution and identify semantic modes represented as recurrent topics. Our approach is an alternative to standard label-switching techniques and provides a single posterior summary set of topics, as well as associated measures of uncertainty. Furthermore, we establish a more holistic definition for model evaluation, which assesses topic models based not only on their likelihood but also on their coherence, distinctiveness and stability. By means of a survey, we set thresholds for the interpretation of topic coherence and topic similarity in the domain of grocery retail data. We demonstrate that the selection of recurrent topics through our clustering methodology not only improves model likelihood but also outperforms the qualitative aspects of LDA such as interpretability and stability. We illustrate our methods on an example from a large UK supermarket chain.
http://arxiv.org/abs/2005.10125v2
"2020-05-04T21:23:36Z"
stat.AP, cs.CL, stat.ME
2,020
An Algebraic Approach for High-level Text Analytics
Xiuwen Zheng, Amarnath Gupta
Text analytical tasks like word embedding, phrase mining, and topic modeling, are placing increasing demands as well as challenges to existing database management systems. In this paper, we provide a novel algebraic approach based on associative arrays. Our data model and algebra can bring together relational operators and text operators, which enables interesting optimization opportunities for hybrid data sources that have both relational and textual data. We demonstrate its expressive power in text analytics using several real-world tasks.
http://arxiv.org/abs/2005.00993v1
"2020-05-03T05:41:36Z"
cs.DB
2,020
Tired of Topic Models? Clusters of Pretrained Word Embeddings Make for Fast and Good Topics too!
Suzanna Sia, Ayush Dalmia, Sabrina J. Mielke
Topic models are a useful analysis tool to uncover the underlying themes within document collections. The dominant approach is to use probabilistic topic models that posit a generative story, but in this paper we propose an alternative way to obtain topics: clustering pre-trained word embeddings while incorporating document information for weighted clustering and reranking top words. We provide benchmarks for the combination of different word embeddings and clustering algorithms, and analyse their performance under dimensionality reduction with PCA. The best performing combination for our approach performs as well as classical topic models, but with lower runtime and computational complexity.
http://arxiv.org/abs/2004.14914v2
"2020-04-30T16:18:18Z"
cs.CL
2,020
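A minimal sketch of the clustering alternative described above: k-means over pretrained GloVe vectors, reading each cluster's nearest words as its topic. The paper's document-weighted clustering and top-word reranking steps are omitted:

```python
import gensim.downloader as api
import numpy as np
from sklearn.cluster import KMeans

vectors = api.load("glove-wiki-gigaword-50")
vocab = list(vectors.key_to_index)[:5000]  # most frequent words
X = np.stack([vectors[w] for w in vocab])

km = KMeans(n_clusters=20, random_state=0, n_init=10).fit(X)
# Words closest to each centroid serve as the cluster's "top words".
for c in range(3):
    dists = np.linalg.norm(X - km.cluster_centers_[c], axis=1)
    print([vocab[i] for i in np.argsort(dists)[:8]])
```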
Indirect Identification of Psychosocial Risks from Natural Language
Kristen C. Allen, Alex Davis, Tamar Krishnamurti
During the perinatal period, psychosocial health risks, including depression and intimate partner violence, are associated with serious adverse health outcomes for parents and children. To appropriately intervene, healthcare professionals must first identify those at risk, yet stigma often prevents people from directly disclosing the information needed to prompt an assessment. We examine indirect methods of eliciting and analyzing information that could indicate psychosocial risks. Short diary entries by peripartum women exhibit thematic patterns, extracted by topic modeling, and emotional perspective, drawn from dictionary-informed sentiment features. Using these features, we use regularized regression to predict screening measures of depression and psychological aggression by an intimate partner. Journal text entries quantified through topic models and sentiment features show promise for depression prediction, with performance almost as good as closed-form questions. Text-based features were less useful for prediction of intimate partner violence, but moderately indirect multiple-choice questioning allowed for detection without explicit disclosure. Both methods may serve as an initial or complementary screening approach to detecting stigmatized risks.
http://arxiv.org/abs/2004.14554v1
"2020-04-30T03:13:28Z"
cs.CL, cs.CY, J.3; J.4; H.5.2
2,020
CoronaVis: A Real-time COVID-19 Tweets Data Analyzer and Data Repository
Md. Yasin Kabir, Sanjay Madria
Due to the nature of the data and public interaction, Twitter is becoming more and more useful for understanding and modeling various events. The goal of CoronaVis is to use tweets as information shared by people to visualize topics, study subjectivity, and model human emotions during the COVID-19 pandemic. The main objective is to explore the psychology and behavior of societies at large, which can assist in managing the economic and social crisis during the ongoing pandemic, as well as its after-effects. The novel coronavirus (COVID-19) pandemic forced people to stay at home to reduce the spread of the virus by maintaining social distancing. However, social media is keeping people connected both locally and globally. People are sharing information (e.g. personal opinions, facts, news, status, etc.) on social media platforms, which can be helpful for understanding public behaviors such as emotions, sentiments, and mobility during the ongoing pandemic. In this work, we develop a live application to observe the tweets on COVID-19 generated from the USA. In this paper, we generate various data analytics over a period of time to study the changes in topics, subjectivity, and human emotions. We also share a cleaned and processed dataset named the CoronaVis Twitter dataset (focused on the United States), available to the research community at https://github.com/mykabir/COVID19. This will enable the community to find more useful insights and create different applications and models to fight the COVID-19 pandemic and future pandemics.
http://arxiv.org/abs/2004.13932v2
"2020-04-29T02:52:53Z"
cs.SI
2,020
Neural Topic Modeling with Bidirectional Adversarial Training
Rui Wang, Xuemeng Hu, Deyu Zhou, Yulan He, Yuxuan Xiong, Chenchen Ye, Haiyang Xu
Recent years have witnessed a surge of interest in using neural topic models for automatic topic extraction from text, since they avoid the complicated mathematical derivations for model inference required in traditional topic models such as Latent Dirichlet Allocation (LDA). However, these models either typically assume an improper prior (e.g. Gaussian or logistic normal) over the latent topic space or cannot infer the topic distribution for a given document. To address these limitations, we propose a neural topic modeling approach, called the Bidirectional Adversarial Topic (BAT) model, which represents the first attempt at applying bidirectional adversarial training to neural topic modeling. The proposed BAT builds a two-way projection between the document-topic distribution and the document-word distribution. It uses a generator to capture the semantic patterns from texts and an encoder for topic inference. Furthermore, to incorporate word relatedness information, the Bidirectional Adversarial Topic model with Gaussian (Gaussian-BAT) is extended from BAT. To verify the effectiveness of BAT and Gaussian-BAT, three benchmark corpora are used in our experiments. The experimental results show that BAT and Gaussian-BAT obtain more coherent topics, outperforming several competitive baselines. Moreover, when performing text clustering based on the extracted topics, our models outperform all the baselines, with more significant improvements achieved by Gaussian-BAT, where an increase of nearly 6% in accuracy is observed.
http://arxiv.org/abs/2004.12331v1
"2020-04-26T09:41:17Z"
cs.CL, cs.IR, cs.LG
2,020
Deep Sentiment Classification and Topic Discovery on Novel Coronavirus or COVID-19 Online Discussions: NLP Using LSTM Recurrent Neural Network Approach
Hamed Jelodar, Yongli Wang, Rita Orji, Hucheng Huang
Internet forums and public social media, such as online healthcare forums, provide a convenient channel for users (people/patients) concerned about health issues to discuss and share information with each other. In late December 2019, an outbreak of a novel coronavirus (infection from which results in the disease named COVID-19) was reported, and, due to the rapid spread of the virus in other parts of the world, the World Health Organization declared a state of emergency. In this paper, we used automated extraction of COVID-19 related discussions from social media and a natural language processing (NLP) method based on topic modeling to uncover various issues related to COVID-19 from public opinions. Moreover, we also investigate how to use an LSTM recurrent neural network for sentiment classification of COVID-19 comments. Our findings shed light on the importance of using public opinions and suitable computational techniques to understand issues surrounding COVID-19 and to guide related decision-making.
http://arxiv.org/abs/2004.11695v1
"2020-04-24T16:29:13Z"
cs.IR, cs.CL
2,020
A Gamma-Poisson Mixture Topic Model for Short Text
Jocelyn Mazarura, Alta de Waal, Pieter de Villiers
Most topic models are constructed under the assumption that documents follow a multinomial distribution. The Poisson distribution is an alternative distribution to describe the probability of count data. For topic modelling, the Poisson distribution describes the number of occurrences of a word in documents of fixed length. The Poisson distribution has been successfully applied in text classification, but its application to topic modelling is not well documented, specifically in the context of a generative probabilistic model. Furthermore, the few Poisson topic models in literature are admixture models, making the assumption that a document is generated from a mixture of topics. In this study, we focus on short text. Many studies have shown that the simpler assumption of a mixture model fits short text better. With mixture models, as opposed to admixture models, the generative assumption is that a document is generated from a single topic. One topic model, which makes this one-topic-per-document assumption, is the Dirichlet-multinomial mixture model. The main contributions of this work are a new Gamma-Poisson mixture model, as well as a collapsed Gibbs sampler for the model. The benefit of the collapsed Gibbs sampler derivation is that the model is able to automatically select the number of topics contained in the corpus. The results show that the Gamma-Poisson mixture model performs better than the Dirichlet-multinomial mixture model at selecting the number of topics in labelled corpora. Furthermore, the Gamma-Poisson mixture produces better topic coherence scores than the Dirichlet-multinomial mixture model, thus making it a viable option for the challenging task of topic modelling of short text.
http://arxiv.org/abs/2004.11464v1
"2020-04-23T21:13:53Z"
cs.CL, cs.IR, cs.LG, stat.ML
2,020
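The generative assumptions of the model proposed above are easy to state in code: one topic per document, with word counts drawn from topic-specific Poisson rates that themselves have Gamma priors. A toy numpy sampler (inference via the paper's collapsed Gibbs sampler is not shown):

```python
import numpy as np

rng = np.random.default_rng(0)
n_topics, vocab, n_docs = 3, 50, 10

# Topic-word rates lambda[t, v] ~ Gamma(shape, scale); shared topic weights.
lam = rng.gamma(shape=0.5, scale=2.0, size=(n_topics, vocab))
topic_weights = rng.dirichlet(np.ones(n_topics))

docs, topics = [], []
for _ in range(n_docs):
    z = rng.choice(n_topics, p=topic_weights)  # a single topic per document
    counts = rng.poisson(lam[z])               # word counts ~ Poisson(lambda_z)
    topics.append(z)
    docs.append(counts)
print(topics[0], docs[0][:10])
```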
Privacy at Scale: Introducing the PrivaSeer Corpus of Web Privacy Policies
Mukund Srinath, Shomir Wilson, C. Lee Giles
Organisations disclose their privacy practices by posting privacy policies on their websites. Even though users often care about their digital privacy, they rarely read privacy policies, since reading them requires a significant investment of time and effort. Although natural language processing can help in privacy policy understanding, there has been a lack of large-scale privacy policy corpora that could be used to analyse, understand, and simplify privacy policies. Thus, we create PrivaSeer, a corpus of over one million English-language website privacy policies, which is significantly larger than any previously available corpus. We design a corpus creation pipeline which consists of crawling the web followed by filtering documents using language detection, document classification, duplicate and near-duplicate removal, and content extraction. We investigate the composition of the corpus, show results from readability tests, document similarity and keyphrase extraction, and explore the corpus through topic modeling.
http://arxiv.org/abs/2004.11131v2
"2020-04-23T13:21:00Z"
cs.IR, cs.CR
2,020
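A toy sketch of the kind of filtering pipeline the abstract describes, assuming the langdetect package for language identification; document classification, near-duplicate detection, and content extraction are omitted, and nothing here reflects PrivaSeer's actual implementation.

```python
import hashlib
from langdetect import detect  # pip install langdetect

def filter_policies(raw_docs):
    """Language filtering plus exact-duplicate removal over raw policy
    texts. (The real pipeline also classifies documents, removes
    near-duplicates, and extracts main content, all omitted here.)"""
    seen, kept = set(), []
    for text in raw_docs:
        try:
            if detect(text) != "en":          # keep English policies only
                continue
        except Exception:
            continue                          # undetectable -> drop
        digest = hashlib.sha1(text.encode("utf-8")).hexdigest()
        if digest in seen:                    # exact-duplicate removal
            continue
        seen.add(digest)
        kept.append(text)
    return kept
```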
In the Eyes of the Beholder: Analyzing Social Media Use of Neutral and Controversial Terms for COVID-19
Long Chen, Hanjia Lyu, Tongyu Yang, Yu Wang, Jiebo Luo
During the COVID-19 pandemic, "Chinese Virus" emerged as a controversial term for the coronavirus. To some, it may seem like a neutral term referring to the physical origin of the virus. To many others, however, the term in fact attaches ethnicity to the virus. While both arguments appear reasonable, quantitative analysis of the term's real-world usage that could shed light on the issues behind the controversy has been lacking. In this paper, we attempt to fill this gap. To model the substantive difference between tweets with controversial terms and those with non-controversial terms, we apply topic modeling and LIWC-based sentiment analysis. To test whether "Chinese Virus" and "COVID-19" are interchangeable, we formulate the question as a classification task, mask out these terms, and classify them using state-of-the-art transformer models. Our experiments consistently show that the term "Chinese Virus" is associated with substantively different topics and sentiment compared with "COVID-19", and that the two terms are easily distinguishable by looking at their context.
http://arxiv.org/abs/2004.10225v3
"2020-04-21T18:15:45Z"
cs.SI, cs.IR
2,020
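The paper classifies masked tweets with transformer models; the sketch below swaps in a TF-IDF plus logistic regression stand-in to keep the example self-contained, so only the masked-term classification setup itself is faithful to the description above. The toy tweets and labels are invented.

```python
import re
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Label 1 = tweet originally contained "Chinese Virus",
# label 0 = tweet originally contained "COVID-19".
tweets = ["the Chinese Virus is spreading fast",
          "new COVID-19 testing sites opened today"]
labels = [1, 0]

def mask_term(text):
    # Hide the term itself so the classifier must rely on context alone.
    return re.sub(r"chinese virus|covid-19", "[MASK]", text, flags=re.I)

clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit([mask_term(t) for t in tweets], labels)
```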
Dynamic topic modeling of the COVID-19 Twitter narrative among U.S. governors and cabinet executives
Hao Sha, Mohammad Al Hasan, George Mohler, P. Jeffrey Brantingham
A combination of federal and state-level decision making has shaped the response to COVID-19 in the United States. In this paper we analyze the Twitter narratives around this decision making by applying a dynamic topic model to COVID-19 related tweets by U.S. Governors and Presidential cabinet members. We use a network Hawkes binomial topic model to track evolving sub-topics around risk, testing and treatment. We also construct influence networks amongst government officials using Granger causality inferred from the network Hawkes process.
http://arxiv.org/abs/2004.11692v1
"2020-04-19T16:22:06Z"
cs.SI, physics.soc-ph
2,020
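The Granger-causality step can be illustrated with statsmodels on synthetic activity counts; the network Hawkes binomial topic model itself is beyond a short sketch, and the two-series setup below is purely hypothetical.

```python
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(0)
# Hypothetical daily topic-activity counts for two officials.
a = rng.poisson(5, 120).astype(float)
b = np.roll(a, 2) + rng.poisson(2, 120)  # b echoes a with a 2-day lag

# Tests whether the second column (a) Granger-causes the first (b).
results = grangercausalitytests(np.column_stack([b, a]), maxlag=4)
```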
CO.ME.T.A. -- covid-19 media textual analysis. A dashboard for media monitoring
Emma Zavarrone, Maria Gabriella Grassia, Marina Marino, Rasanna Cataldo, Rocco Mazza, Nicola Canestrari
The focus of this paper is to trace how mass media, particularly newspapers, have addressed the issues of containing contagion and explaining the epidemiological evolution. We propose an interactive dashboard: CO.ME.T.A. During crises it is important to shape the best communication strategies in order to respond to critical situations. In this regard, it is important to monitor the information that mass media and social platforms convey. The dashboard allows users to explore the extracted content and to study the lexical structure that links the main discussion topics. The dashboard merges four methods: text mining, sentiment analysis, textual network analysis, and latent topic models. Results obtained on a subset of documents show not only a health-related semantic dimension but also socio-economic dimensions.
http://arxiv.org/abs/2004.07742v1
"2020-04-16T16:24:56Z"
cs.CY, stat.AP
2,020
Cross-lingual Contextualized Topic Models with Zero-shot Learning
Federico Bianchi, Silvia Terragni, Dirk Hovy, Debora Nozza, Elisabetta Fersini
Many data sets (e.g., reviews, forums, news, etc.) exist in parallel in multiple languages. They all cover the same content, but the linguistic differences make it impossible to use traditional, bag-of-words-based topic models. Models have to be either single-language or suffer from a huge but extremely sparse vocabulary. Both issues can be addressed by transfer learning. In this paper, we introduce a zero-shot cross-lingual topic model. Our model learns topics in one language (here, English) and predicts them for unseen documents in different languages (here, Italian, French, German, and Portuguese). We evaluate the quality of the topic predictions for the same document in different languages. Our results show that the transferred topics are coherent and stable across languages, which suggests exciting future research directions.
http://arxiv.org/abs/2004.07737v2
"2020-04-16T16:21:17Z"
cs.CL
2,020
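A related implementation exists as the contextualized-topic-models package; rather than guess at its API, the sketch below illustrates only the zero-shot transfer idea, using multilingual sentence embeddings with k-means as a deliberate simplification, not the paper's neural topic model.

```python
from sentence_transformers import SentenceTransformer
from sklearn.cluster import KMeans

# Multilingual encoder gives a shared representation space across languages.
enc = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")

english_docs = ["the stock market fell sharply", "the team won the final"]
italian_docs = ["la borsa è crollata", "la squadra ha vinto la finale"]

# "Train" topics on English only, then assign unseen Italian documents.
km = KMeans(n_clusters=2, n_init=10, random_state=0)
km.fit(enc.encode(english_docs))
print(km.predict(enc.encode(italian_docs)))  # zero-shot topic assignment
```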
Framing COVID-19: How we conceptualize and discuss the pandemic on Twitter
Philipp Wicke, Marianna M. Bolognesi
Doctors and nurses in these weeks are busy in the trenches, fighting against a new invisible enemy: Covid-19. Cities are locked down and civilians are besieged in their own homes to prevent the spreading of the virus. War-related terminology is commonly used to frame the discourse around epidemics and diseases. Arguably, the discourse around the current epidemic makes use of war-related metaphors too, not only in public discourse and the media but also in the tweets written by non-experts of mass communication. We hereby present an analysis of the discourse around #Covid-19, based on a corpus of 200k tweets posted on Twitter during March and April 2020. Using topic modelling, we first analyze the topics around which the discourse can be classified. Then, we show that the WAR framing is used to talk about specific topics, such as the treatment of the virus, but not others, such as the effects of social distancing on the population. We then measure and compare the popularity of the WAR frame to three alternative figurative frames (MONSTER, STORM, and TSUNAMI) and a literal frame used as control (FAMILY). The results show that while the FAMILY literal frame covers a wider portion of the corpus, among the figurative framings WAR is the most frequently used, and thus arguably the most conventional one. However, this frame is not apt for elaborating the discourse around many aspects of the current situation. We therefore conclude, in line with previous suggestions, that a plethora of framing options, or a metaphor menu, may facilitate the communication of the various aspects involved in Covid-19-related discourse on social media, and thus support civilians in expressing their feelings, opinions, and ideas during the current pandemic.
http://arxiv.org/abs/2004.06986v2
"2020-04-15T10:14:15Z"
cs.CL, cs.SI, J.5; I.7.0
2,020
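Frame popularity of the kind measured above can be approximated by lexicon matching; the mini-lexicons below are invented for illustration and do not reproduce the paper's frame vocabularies.

```python
from collections import Counter

# Hypothetical mini-lexicons for each frame; the paper's lexicons differ.
FRAMES = {
    "WAR":    {"fight", "battle", "enemy", "frontline", "war"},
    "STORM":  {"storm", "wave", "surge"},
    "FAMILY": {"family", "home", "together"},
}

def frame_counts(tweets):
    # Count how many tweets evoke each frame via simple lexicon matching.
    counts = Counter()
    for t in tweets:
        tokens = set(t.lower().split())
        for frame, lexicon in FRAMES.items():
            if tokens & lexicon:
                counts[frame] += 1
    return counts

print(frame_counts(["We will win this war against the virus",
                    "Stay home with your family"]))
```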
Probabilistic Model of Narratives Over Topical Trends in Social Media: A Discrete Time Model
Toktam A. Oghaz, Ece C. Mutlu, Jasser Jasser, Niloofar Yousefi, Ivan Garibay
Online social media platforms are turning into the prime source of news and narratives about worldwide events. However, a systematic summarization-based narrative extraction that can facilitate communicating the main underlying events is lacking. To address this issue, we propose a novel event-based narrative summary extraction framework. Our proposed framework is designed as a probabilistic topic model with a categorical time distribution, followed by extractive text summarization. Our topic model identifies topics' recurrence over time with a varying time resolution. This framework not only captures the topic distributions from the data, but also approximates the user activity fluctuations over time. Furthermore, we define the significance-dispersity trade-off (SDT) as a comparison measure to identify the topic with the highest lifetime attractiveness in a timestamped corpus. We evaluate our model on a large corpus of Twitter data, including more than one million tweets in the domain of the disinformation campaigns conducted against the White Helmets of Syria. Our results indicate that the proposed framework is effective in identifying topical trends, as well as extracting narrative summaries from a text corpus with timestamped data.
http://arxiv.org/abs/2004.06793v1
"2020-04-14T20:18:21Z"
cs.SI, cs.CL, cs.IR
2,020
Keyword Assisted Topic Models
Shusei Eshima, Kosuke Imai, Tomoya Sasaki
In recent years, fully automated content analysis based on probabilistic topic models has become popular among social scientists because of their scalability. The unsupervised nature of the models makes them suitable for exploring topics in a corpus without prior knowledge. However, researchers find that these models often fail to measure specific concepts of substantive interest by inadvertently creating multiple topics with similar content and combining distinct themes into a single topic. In this paper, we empirically demonstrate that providing a small number of keywords can substantially enhance the measurement performance of topic models. An important advantage of the proposed keyword assisted topic model (keyATM) is that the specification of keywords requires researchers to label topics prior to fitting a model to the data. This contrasts with a widespread practice of post-hoc topic interpretation and adjustments that compromises the objectivity of empirical findings. In our application, we find that keyATM provides more interpretable results, has better document classification performance, and is less sensitive to the number of topics than the standard topic models. Finally, we show that keyATM can also incorporate covariates and model time trends. An open-source software package is available for implementing the proposed methodology.
http://arxiv.org/abs/2004.05964v3
"2020-04-13T14:35:28Z"
cs.CL, stat.AP, stat.ME
2,020
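keyATM itself is distributed as an R package with its own estimation procedure; as a language-agnostic illustration of the keyword-seeding idea, here is a small collapsed Gibbs LDA sketch in which analyst-supplied keywords receive a boosted topic-word prior. All parameter values are assumptions.

```python
import numpy as np

def seeded_lda_gibbs(docs, V, K, keywords, iters=200, alpha=0.1,
                     beta=0.01, boost=1.0):
    """Collapsed Gibbs sampler for LDA with keyword-seeded priors.

    docs: list of lists of word ids in [0, V); keywords: {topic: iterable
    of word ids}. Boosting beta for (topic, keyword) pairs nudges the
    sampler toward the analyst's labels, in the spirit of keyATM.
    """
    rng = np.random.default_rng(0)
    B = np.full((K, V), beta)              # asymmetric topic-word prior
    for k, words in keywords.items():
        B[k, list(words)] += boost
    n_kv, n_dk, n_k = np.zeros((K, V)), np.zeros((len(docs), K)), np.zeros(K)
    z = []
    for d, doc in enumerate(docs):         # random initialisation
        zd = rng.integers(0, K, len(doc))
        z.append(zd)
        for w, k in zip(doc, zd):
            n_kv[k, w] += 1; n_dk[d, k] += 1; n_k[k] += 1
    Bsum = B.sum(axis=1)
    for _ in range(iters):
        for d, doc in enumerate(docs):
            for i, w in enumerate(doc):
                k = z[d][i]                # remove the current assignment
                n_kv[k, w] -= 1; n_dk[d, k] -= 1; n_k[k] -= 1
                p = (n_dk[d] + alpha) * (n_kv[:, w] + B[:, w]) / (n_k + Bsum)
                k = rng.choice(K, p=p / p.sum())
                z[d][i] = k                # record the new assignment
                n_kv[k, w] += 1; n_dk[d, k] += 1; n_k[k] += 1
    return n_kv + B                        # unnormalised topic-word weights

# Hypothetical usage: seed topic 0 with word ids 0 and 1.
docs = [[0, 1, 2, 0], [3, 4, 3], [0, 2, 4]]
phi = seeded_lda_gibbs(docs, V=5, K=2, keywords={0: [0, 1]}, iters=50)
```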
Vintage Factor Analysis with Varimax Performs Statistical Inference
Karl Rohe, Muzhe Zeng
Psychologists developed Multiple Factor Analysis to decompose multivariate data into a small number of interpretable factors without any a priori knowledge about those factors. In this form of factor analysis, the Varimax "factor rotation" is a key step to make the factors interpretable. Charles Spearman and many others objected to factor rotations because the factors seem to be rotationally invariant. These objections are still reported in all contemporary multivariate statistics textbooks. This is an enigma, because this vintage form of factor analysis has survived and is widely popular because, empirically, the factor rotation often makes the factors easier to interpret. We argue that the rotation makes the factors easier to interpret because, in fact, the Varimax factor rotation performs statistical inference. We show that Principal Components Analysis (PCA) with the Varimax rotation provides a unified spectral estimation strategy for a broad class of modern factor models, including the Stochastic Blockmodel and a natural variation of Latent Dirichlet Allocation (i.e., "topic modeling"). In addition, we show that Thurstone's widely employed sparsity diagnostics implicitly assess a key "leptokurtic" condition that makes the rotation statistically identifiable in these models. Taken together, this shows that the know-how of Vintage Factor Analysis performs statistical inference, reversing nearly a century of statistical thinking on the topic. With a sparse eigensolver, PCA with Varimax is both fast and stable. Combined with Thurstone's straightforward diagnostics, this vintage approach is suitable for a wide array of modern applications.
http://arxiv.org/abs/2004.05387v2
"2020-04-11T12:45:06Z"
stat.ME, math.ST, stat.TH
2,020
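The PCA-plus-Varimax estimator discussed above is easy to reproduce with numpy; below is the classic Kaiser Varimax iteration applied to principal component loadings of random data. The data and the choice of five components are placeholders.

```python
import numpy as np

def varimax(Phi, gamma=1.0, max_iter=100, tol=1e-6):
    # Classic Varimax rotation (Kaiser, 1958) of a p x k loadings matrix.
    p, k = Phi.shape
    R = np.eye(k)
    d = 0.0
    for _ in range(max_iter):
        L = Phi @ R
        u, s, vt = np.linalg.svd(
            Phi.T @ (L**3 - (gamma / p) * L @ np.diag(np.sum(L**2, axis=0)))
        )
        R = u @ vt
        d_old, d = d, s.sum()
        if d_old != 0 and d / d_old < 1 + tol:
            break
    return Phi @ R

# PCA loadings via SVD of the centered data, then rotate.
X = np.random.default_rng(0).normal(size=(500, 20))
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
loadings = Vt[:5].T * S[:5]     # top-5 principal component loadings
rotated = varimax(loadings)     # sparser, more interpretable loadings
```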
Measuring Emotions in the COVID-19 Real World Worry Dataset
Bennett Kleinberg, Isabelle van der Vegt, Maximilian Mozes
The COVID-19 pandemic is having a dramatic impact on societies and economies around the world. With various measures of lockdowns and social distancing in place, it becomes important to understand emotional responses on a large scale. In this paper, we present the first ground truth dataset of emotional responses to COVID-19. We asked participants to indicate their emotions and express these in text. This resulted in the Real World Worry Dataset of 5,000 texts (2,500 short + 2,500 long texts). Our analyses suggest that emotional responses correlated with linguistic measures. Topic modeling further revealed that people in the UK worry about their family and the economic situation. Tweet-sized texts functioned as a call for solidarity, while longer texts shed light on worries and concerns. Using predictive modeling approaches, we were able to approximate the emotional responses of participants from text within 14% of their actual value. We encourage others to use the dataset and improve how we can use automated methods to learn about emotional responses and worries about an urgent problem.
http://arxiv.org/abs/2004.04225v2
"2020-04-08T19:52:14Z"
cs.CL, cs.IR, cs.SI
2,020
Pre-training is a Hot Topic: Contextualized Document Embeddings Improve Topic Coherence
Federico Bianchi, Silvia Terragni, Dirk Hovy
Topic models extract groups of words from documents, whose interpretation as a topic hopefully allows for a better understanding of the data. However, the resulting word groups are often not coherent, making them harder to interpret. Recently, neural topic models have shown improvements in overall coherence. Concurrently, contextual embeddings have advanced the state of the art of neural models in general. In this paper, we combine contextualized representations with neural topic models. We find that our approach produces more meaningful and coherent topics than traditional bag-of-words topic models and recent neural models. Our results indicate that future improvements in language models will translate into better topic models.
http://arxiv.org/abs/2004.03974v2
"2020-04-08T12:37:51Z"
cs.CL
2,020
Is it feasible to detect FLOSS version release events from textual messages? A case study on Stack Overflow
A. Sokolovsky, T. Gross, J. Bacardit
Topic Detection and Tracking (TDT) is a very active research area within text mining, generally applied to news feeds and Twitter datasets, where topics and events are detected. The notion of "event" is broad, but typically it applies to occurrences that can be detected from a single post or message. Little attention has been drawn to what we call "micro-events", which, due to their nature, cannot be detected from a single piece of textual information. This study investigates the feasibility of micro-event detection on textual data using a sample of messages from the Stack Overflow Q&A platform and Free/Libre Open Source Software (FLOSS) version releases from the Libraries.io dataset. We build pipelines for the detection of micro-events using three different estimators whose parameters are optimized using a grid search approach. We consider two feature spaces: LDA topic modeling with sentiment analysis, and hSBM topics with sentiment analysis. The feature spaces are optimized using recursive feature elimination with cross-validation (RFECV). In our experiments we investigate whether there is a characteristic change in the topic distribution or sentiment features before or after micro-events take place, and we thoroughly evaluate the capacity of each variant of our analysis pipeline to detect micro-events. Additionally, we perform a detailed statistical analysis of the models, including influential cases, variance inflation factors, validation of the linearity assumption, pseudo-R-squared measures, and the no-information rate. Finally, in order to study the limits of micro-event detection, we design a method for generating synthetic micro-event datasets with properties similar to the real-world data, and use them to identify the micro-event detectability threshold for each of the evaluated classifiers.
http://arxiv.org/abs/2003.14257v3
"2020-03-30T16:55:38Z"
cs.SE, cs.IR, cs.LG, stat.ML
2,020
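A minimal scikit-learn sketch of the pipeline structure described above: RFECV feature selection inside a classifier pipeline tuned by grid search. The synthetic features stand in for topic-proportion and sentiment features, and the estimator choices and grids are assumptions.

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import RFECV
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, StratifiedKFold
from sklearn.pipeline import Pipeline

# Toy feature matrix standing in for topic-proportion + sentiment features.
X, y = make_classification(n_samples=300, n_features=25, n_informative=5,
                           random_state=0)

pipe = Pipeline([
    ("select", RFECV(LogisticRegression(max_iter=1000),
                     cv=StratifiedKFold(5), scoring="f1")),
    ("clf", LogisticRegression(max_iter=1000)),
])
grid = GridSearchCV(pipe, {"clf__C": [0.1, 1.0, 10.0]}, cv=5, scoring="f1")
grid.fit(X, y)
print(grid.best_params_, grid.best_score_)
```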
Unsupervised Cross-Modal Audio Representation Learning from Unstructured Multilingual Text
Alexander Schindler, Sergiu Gordea, Peter Knees
We present an approach to unsupervised audio representation learning. Based on a triplet neural network architecture, we harness semantically related cross-modal information to estimate audio track relatedness. By applying Latent Semantic Indexing (LSI), we embed corresponding textual information into a latent vector space from which we derive track relatedness for online triplet selection. This LSI topic modelling facilitates fine-grained selection of similar and dissimilar audio-track pairs to learn the audio representation using a Convolutional Recurrent Neural Network (CRNN). In this way, we directly project the semantic context of the unstructured text modality onto the learned representation space of the audio modality without deriving structured ground-truth annotations from it. We evaluate our approach on the Europeana Sounds collection and show how to improve search in digital audio libraries by harnessing the multilingual metadata provided by numerous European digital libraries. We show that our approach is invariant to the variety of annotation styles as well as to the different languages of this collection. The learned representations perform comparably to the baseline of handcrafted features, and exceed this baseline in similarity retrieval precision at higher cut-offs with only 15\% of the baseline's feature-vector length.
http://arxiv.org/abs/2003.12265v1
"2020-03-27T07:37:15Z"
cs.MM, cs.IR, cs.LG
2,020
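A sketch of LSI-based triplet selection under stated assumptions: TF-IDF followed by truncated SVD gives the latent text space, and cosine similarity picks positive and negative tracks for an anchor. The toy metadata strings are invented.

```python
import numpy as np
from sklearn.decomposition import TruncatedSVD
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

texts = ["baroque violin concerto", "field recording of birdsong",
         "symphony orchestra rehearsal", "urban street soundscape"]
# LSI: TF-IDF followed by truncated SVD yields a latent topic space.
Z = TruncatedSVD(n_components=2, random_state=0).fit_transform(
    TfidfVectorizer().fit_transform(texts))
S = cosine_similarity(Z)
np.fill_diagonal(S, -1)                  # ignore self-similarity
anchor = 0
positive = int(S[anchor].argmax())       # most related track by metadata
negative = int(S[anchor].argmin())       # least related -> triplet (a, p, n)
```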
Know thy corpus! Robust methods for digital curation of Web corpora
Serge Sharoff
This paper proposes a novel framework for digital curation of Web corpora in order to provide robust estimation of their parameters, such as their composition and the lexicon. In recent years language models pre-trained on large corpora emerged as clear winners in numerous NLP tasks, but no proper analysis of the corpora which led to their success has been conducted. The paper presents a procedure for robust frequency estimation, which helps in establishing the core lexicon for a given corpus, as well as a procedure for estimating the corpus composition via unsupervised topic models and via supervised genre classification of Web pages. The results of the digital curation study applied to several Web-derived corpora demonstrate their considerable differences. First, this concerns different frequency bursts which impact the core lexicon obtained from each corpus. Second, this concerns the kinds of texts they contain. For example, OpenWebText contains considerably more topical news and political argumentation in comparison to ukWac or Wikipedia. The tools and the results of analysis have been released.
http://arxiv.org/abs/2003.06389v1
"2020-03-13T17:21:57Z"
cs.CL
2,020
A Graph Convolutional Topic Model for Short and Noisy Text Streams
Ngo Van Linh, Tran Xuan Bach, Khoat Than
Learning hidden topics from data streams has become absolutely necessary but poses challenging problems, such as concept drift as well as short and noisy data. Using prior knowledge to enrich a topic model is one potential solution to cope with these challenges. Prior knowledge derived from human knowledge (e.g. WordNet) or a pre-trained model (e.g. Word2vec) is very valuable and useful for helping topic models work better. However, in a streaming environment where data arrives continually and indefinitely, existing studies are limited in their ability to exploit these resources effectively. In particular, knowledge graphs, which contain meaningful word relations, are ignored. In this paper, aiming to exploit a knowledge graph effectively, we propose a novel graph convolutional topic model (GCTM), which integrates graph convolutional networks (GCNs) into a topic model, together with a learning method that learns the networks and the topic model simultaneously for data streams. In each minibatch, our method can not only exploit an external knowledge graph but also balance the external and old knowledge to perform well on new data. We conduct extensive experiments to evaluate our method with both a human knowledge graph (WordNet) and a graph built from pre-trained word embeddings (Word2vec). The experimental results show that our method achieves significantly better performance than state-of-the-art baselines in terms of probabilistic predictive measure and topic coherence. In particular, our method can work well when dealing with short texts as well as concept drift. The implementation of GCTM is available at \url{https://github.com/bachtranxuan/GCTM.git}.
http://arxiv.org/abs/2003.06112v4
"2020-03-13T05:09:00Z"
cs.LG, stat.ML
2,020
Style-compatible Object Recommendation for Multi-room Indoor Scene Synthesis
Yu He, Yun Cai, Yuan-Chen Guo, Zheng-Ning Liu, Shao-Kui Zhang, Song-Hai Zhang, Hong-Bo Fu, Sheng-Yong Chen
Traditional indoor scene synthesis methods often take a two-step approach: object selection and object arrangement. Current state-of-the-art object selection approaches are based on convolutional neural networks (CNNs) and can produce realistic scenes for a single room. However, they cannot be directly extended to synthesize style-compatible scenes for multiple rooms with different functions. To address this issue, we treat the object selection problem as combinatorial optimization based on a Labeled LDA (L-LDA) model. We first calculate the occurrence probability distribution of object categories according to a topic model, and then sample objects from each category considering their function diversity along with style compatibility, regarding not only separate rooms but also associations among rooms. A user study shows that our method outperforms the baselines by incorporating multi-function and multi-room settings with style constraints, and sometimes even produces plausible scenes comparable to those produced by professional designers.
http://arxiv.org/abs/2003.04187v2
"2020-03-09T15:04:25Z"
cs.GR
2,020
Contrastive estimation reveals topic posterior information to linear models
Christopher Tosh, Akshay Krishnamurthy, Daniel Hsu
Contrastive learning is an approach to representation learning that utilizes naturally occurring similar and dissimilar pairs of data points to find useful embeddings of data. In the context of document classification under topic modeling assumptions, we prove that contrastive learning is capable of recovering a representation of documents that reveals their underlying topic posterior information to linear models. We apply this procedure in a semi-supervised setup and demonstrate empirically that linear classifiers with these representations perform well in document classification tasks with very few training examples.
http://arxiv.org/abs/2003.02234v1
"2020-03-04T18:20:55Z"
cs.LG, stat.ML
2,020
Generalized Gumbel-Softmax Gradient Estimator for Generic Discrete Random Variables
Weonyoung Joo, Dongjun Kim, Seungjae Shin, Il-Chul Moon
Estimating the gradients of stochastic nodes in stochastic computational graphs is one of the crucial research questions in the deep generative modeling community, as it enables gradient-descent optimization of neural network parameters. Stochastic gradient estimators of discrete random variables are widely explored, for example, the Gumbel-Softmax reparameterization trick for Bernoulli and categorical distributions. Meanwhile, other discrete distributions, such as the Poisson, geometric, binomial, multinomial, and negative binomial, have not been explored. This paper proposes a generalized version of the Gumbel-Softmax estimator, which is able to reparameterize generic discrete distributions, not restricted to the Bernoulli and the categorical. The proposed estimator utilizes the truncation of discrete random variables, the Gumbel-Softmax trick, and a special form of linear transformation. Our experiments consist of (1) synthetic examples and applications on VAEs, which show the efficacy of our method; and (2) topic models, which demonstrate the value of the proposed estimation in practice.
http://arxiv.org/abs/2003.01847v4
"2020-03-04T01:13:15Z"
cs.LG, stat.ML
2,020
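The standard Gumbel-Softmax relaxation is easy to state in PyTorch; extending it to a truncated Poisson, as below, follows the abstract's idea of truncating the support, though the truncation point and temperature are assumptions, not the paper's settings.

```python
import torch
import torch.nn.functional as F
from torch.distributions import Poisson

def gumbel_softmax_sample(logits, tau=0.5):
    # Perturb logits with Gumbel(0,1) noise, then apply a temperature-
    # controlled softmax: a differentiable relaxation of a categorical draw.
    u = torch.rand_like(logits).clamp_min(1e-10)
    g = -torch.log(-torch.log(u))
    return F.softmax((logits + g) / tau, dim=-1)

# Truncate a Poisson(3) to {0,...,14} and treat its log-pmf as logits.
ks = torch.arange(0.0, 15.0)
logits = Poisson(torch.tensor(3.0)).log_prob(ks)
y = gumbel_softmax_sample(logits)   # relaxed one-hot over the counts
soft_count = (y * ks).sum()         # differentiable surrogate sample
```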
Gaussian Hierarchical Latent Dirichlet Allocation: Bringing Polysemy Back
Takahiro Yoshida, Ryohei Hisano, Takaaki Ohnishi
Topic models are widely used to discover the latent representation of a set of documents. The two canonical models are latent Dirichlet allocation, and Gaussian latent Dirichlet allocation, where the former uses multinomial distributions over words, and the latter uses multivariate Gaussian distributions over pre-trained word embedding vectors as the latent topic representations, respectively. Compared with latent Dirichlet allocation, Gaussian latent Dirichlet allocation is limited in the sense that it does not capture the polysemy of a word such as ``bank.'' In this paper, we show that Gaussian latent Dirichlet allocation could recover the ability to capture polysemy by introducing a hierarchical structure in the set of topics that the model can use to represent a given document. Our Gaussian hierarchical latent Dirichlet allocation significantly improves polysemy detection compared with Gaussian-based models and provides more parsimonious topic representations compared with hierarchical latent Dirichlet allocation. Our extensive quantitative experiments show that our model also achieves better topic coherence and held-out document predictive accuracy over a wide range of corpus and word embedding vectors.
http://arxiv.org/abs/2002.10855v2
"2020-02-25T13:52:20Z"
stat.ML, cs.LG
2,020
HybridCite: A Hybrid Model for Context-Aware Citation Recommendation
Michael Färber, Ashwath Sampath
Citation recommendation systems aim to recommend citations for either a complete paper or a small portion of text called a citation context. The process of recommending citations for citation contexts is called local citation recommendation and is the focus of this paper. Firstly, we develop citation recommendation approaches based on embeddings, topic modeling, and information retrieval techniques. We combine, for the first time to the best of our knowledge, the best-performing algorithms into a semi-genetic hybrid recommender system for citation recommendation. We evaluate the single approaches and the hybrid approach offline based on several data sets, such as the Microsoft Academic Graph (MAG) and the MAG in combination with arXiv and ACL. We further conduct a user study for evaluating our approaches online. Our evaluation results show that a hybrid model containing embedding and information retrieval-based components outperforms its individual components and further algorithms by a large margin.
http://arxiv.org/abs/2002.06406v2
"2020-02-15T16:19:55Z"
cs.IR, cs.DL, cs.LG
2,020
Improving Reliability of Latent Dirichlet Allocation by Assessing Its Stability Using Clustering Techniques on Replicated Runs
Jonas Rieger, Lars Koppers, Carsten Jentsch, Jörg Rahnenführer
Topic modeling provides useful tools for organizing large text corpora. A widely used method is Latent Dirichlet Allocation (LDA), a generative probabilistic model that models single texts in a collection of texts as mixtures of latent topics. The assignments of words to topics depend on initial values, such that in general the outcome of LDA is not fully reproducible. In addition, the reassignment via Gibbs sampling is based on conditional distributions, leading to different results in replicated runs on the same text data. This fact is often neglected in everyday practice. We aim to improve the reliability of LDA results. Therefore, we study the stability of LDA by comparing assignments from replicated runs. We propose to quantify the similarity of two generated topics by a modified Jaccard coefficient. Using such similarities, topics can be clustered. A new pruning algorithm for hierarchical clustering results, based on the idea that two LDA runs create pairs of similar topics, is proposed. This approach leads to the new measure S-CLOP ({\bf S}imilarity of multiple sets by {\bf C}lustering with {\bf LO}cal {\bf P}runing) for quantifying the stability of LDA models. We discuss some characteristics of this measure and illustrate it with an application to real data consisting of newspaper articles from \textit{USA Today}. Our results show that the measure S-CLOP is useful for assessing the stability of LDA models or any other topic modeling procedure that characterizes its topics by word distributions. Based on the newly proposed measure of LDA stability, we propose a method to increase the reliability, and hence improve the reproducibility, of empirical findings based on topic modeling. This increase in reliability is obtained by running the LDA several times and taking as prototype the most representative run, that is, the LDA run with the highest average similarity to all other runs.
http://arxiv.org/abs/2003.04980v1
"2020-02-14T07:10:18Z"
cs.CL, cs.AI, cs.LG, stat.ML
2,020
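The core ingredient above, topic similarity via a (modified) Jaccard coefficient across replicated runs, can be sketched as follows; the greedy matching and the plain Jaccard coefficient are simplifications of the paper's clustering-based S-CLOP measure.

```python
import numpy as np

def top_words(phi, n=10):
    # phi: K x V topic-word matrix from one LDA run.
    return [set(np.argsort(row)[-n:]) for row in phi]

def jaccard(a, b):
    return len(a & b) / len(a | b)

def run_stability(phi_a, phi_b, n=10):
    # Greedily match topics across two replicated runs by Jaccard
    # similarity of their top-n word sets; the mean matched similarity
    # is a crude proxy for run-to-run stability.
    A, B = top_words(phi_a, n), top_words(phi_b, n)
    sims = np.array([[jaccard(a, b) for b in B] for a in A])
    matched = []
    while sims.size and sims.max() >= 0:
        i, j = np.unravel_index(sims.argmax(), sims.shape)
        matched.append(sims[i, j])
        sims[i, :] = -1   # each topic is matched at most once
        sims[:, j] = -1
    return float(np.mean(matched))
```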
Two Huge Title and Keyword Generation Corpora of Research Articles
Erion Çano, Ondřej Bojar
Recent developments in sequence-to-sequence learning with neural networks have considerably improved the quality of automatically generated text summaries and document keywords, underscoring the need for even bigger training corpora. Metadata of research articles are usually easy to find online and can be used to perform research on various tasks. In this paper, we introduce two huge datasets for text summarization (OAGSX) and keyword generation (OAGKX) research, containing 34 million and 23 million records, respectively. The data were retrieved from the Open Academic Graph, which is a network of research profiles and publications. We carefully processed each record and also tried several extractive and abstractive methods on both tasks to create performance baselines for other researchers. We further illustrate the performance of those methods by previewing their outputs. In the near future, we would like to apply topic modeling to the two sets to derive subsets of research articles from more specific disciplines.
http://arxiv.org/abs/2002.04689v1
"2020-02-11T21:17:29Z"
cs.CL, cs.IR
2,020