Dataset schema: Title (string), Authors (string), Abstract (string), entry_id (string), Date (string), Categories (string), year (int32)
SeNMFk-SPLIT: Large Corpora Topic Modeling by Semantic Non-negative Matrix Factorization with Automatic Model Selection
Maksim E. Eren, Nick Solovyev, Manish Bhattarai, Kim Rasmussen, Charles Nicholas, Boian S. Alexandrov
As the amount of text data continues to grow, topic modeling is serving an important role in understanding the content hidden by the overwhelming quantity of documents. One popular topic modeling approach is non-negative matrix factorization (NMF), an unsupervised machine learning (ML) method. Recently, Semantic NMF with automatic model selection (SeNMFk) has been proposed as a modification to NMF. In addition to heuristically estimating the number of topics, SeNMFk also incorporates the semantic structure of the text. This is performed by jointly factorizing the term frequency-inverse document frequency (TF-IDF) matrix with the co-occurrence/word-context matrix, the values of which represent the number of times two words co-occur in a predetermined window of the text. In this paper, we introduce a novel distributed method, SeNMFk-SPLIT, for semantic topic extraction suitable for large corpora. Contrary to SeNMFk, our method enables the joint factorization of large documents by decomposing the word-context and term-document matrices separately. We demonstrate the capability of SeNMFk-SPLIT by applying it to the entire artificial intelligence (AI) and ML scientific literature uploaded on arXiv.
http://arxiv.org/abs/2208.09942v1
"2022-08-21T18:33:05Z"
cs.IR
2022
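To make the joint factorization in the SeNMFk-SPLIT abstract above concrete, here is a minimal single-machine sketch of NMF with a word factor shared between the TF-IDF matrix and the word co-occurrence matrix. The names and the update scheme are illustrative assumptions, not the authors' distributed SeNMFk-SPLIT implementation, which decomposes the two matrices separately.

```python
# Joint NMF sketch: share word factor W between the TF-IDF matrix A
# (terms x docs) and the co-occurrence matrix M (terms x terms), using
# multiplicative updates for squared Frobenius loss on both terms.
import numpy as np

def joint_nmf(A, M, k, n_iter=200, eps=1e-9, seed=0):
    n_terms, n_docs = A.shape
    rng = np.random.default_rng(seed)
    W = rng.random((n_terms, k))   # shared word-topic factor
    H = rng.random((k, n_docs))    # topic-document factor: A ~ W @ H
    V = rng.random((k, n_terms))   # topic-context factor:  M ~ W @ V
    for _ in range(n_iter):
        H *= (W.T @ A) / (W.T @ W @ H + eps)
        V *= (W.T @ M) / (W.T @ W @ V + eps)
        W *= (A @ H.T + M @ V.T) / (W @ (H @ H.T + V @ V.T) + eps)
    return W, H, V
```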
Searching for Structure in Unfalsifiable Claims
Peter Ebert Christensen, Frederik Warburg, Menglin Jia, Serge Belongie
Social media platforms give rise to an abundance of posts and comments on every topic imaginable. Many of these posts express opinions on various aspects of society, but their unfalsifiable nature makes them ill-suited to fact-checking pipelines. In this work, we aim to distill such posts into a small set of narratives that capture the essential claims related to a given topic. Understanding and visualizing these narratives can facilitate more informed debates on social media. As a first step towards systematically identifying the underlying narratives on social media, we introduce PAPYER, a fine-grained dataset of online comments related to hygiene in public restrooms, which contains a multitude of unfalsifiable claims. We present a human-in-the-loop pipeline that uses a combination of machine and human kernels to discover the prevailing narratives and show that this pipeline outperforms recent large transformer models and state-of-the-art unsupervised topic models.
http://arxiv.org/abs/2209.00495v1
"2022-08-19T13:32:15Z"
cs.CL, cs.LG, cs.SI
2022
SimLDA: A tool for topic model evaluation
Rebecca M. C. Taylor, Johan A. du Preez
Variational Bayes (VB) applied to latent Dirichlet allocation (LDA) has become the most popular algorithm for aspect modeling. While sufficiently successful in text topic extraction from large corpora, VB is less successful in identifying aspects in the presence of limited data. We present a novel variational message passing algorithm as applied to LDA and compare it with the gold-standard VB and collapsed Gibbs sampling. In situations where marginalisation leads to non-conjugate messages, we use ideas from sampling to derive approximate update equations. In cases where conjugacy holds, Loopy Belief Update (LBU, also known as Lauritzen-Spiegelhalter) is used. Our algorithm, ALBU (approximate LBU), has strong similarities with Variational Message Passing (VMP), the message passing variant of VB. To compare the performance of the algorithms in the presence of limited data, we use data sets consisting of tweets and news groups. Using coherence measures, we show that ALBU learns latent distributions more accurately than VB, especially for smaller data sets.
http://arxiv.org/abs/2208.09299v1
"2022-08-19T12:25:53Z"
cs.LG, cs.CL, cs.IR, stat.ML
2022
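For reference, the "gold standard VB" baseline the abstract above compares against uses per-document mean-field updates like the following sketch (the standard LDA variational E-step, not the paper's ALBU algorithm; `lam` is the k x V variational Dirichlet over topic-word distributions and `alpha` a scalar Dirichlet prior).

```python
# Standard mean-field VB E-step for LDA (one document).
import numpy as np
from scipy.special import digamma

def vb_e_step(doc_word_ids, lam, alpha, n_iter=50):
    k = lam.shape[0]
    gamma = np.full(k, alpha + len(doc_word_ids) / k)  # init variational Dirichlet
    elog_beta = digamma(lam) - digamma(lam.sum(axis=1, keepdims=True))
    for _ in range(n_iter):
        elog_theta = digamma(gamma) - digamma(gamma.sum())
        # phi[n, t] proportional to exp(E[log theta_t] + E[log beta_{t, w_n}])
        log_phi = elog_theta[None, :] + elog_beta[:, doc_word_ids].T
        phi = np.exp(log_phi - log_phi.max(axis=1, keepdims=True))
        phi /= phi.sum(axis=1, keepdims=True)
        gamma = alpha + phi.sum(axis=0)  # update topic-proportion posterior
    return phi, gamma
```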
Scholastic: Graphical Human-AI Collaboration for Inductive and Interpretive Text Analysis
Matt-Heun Hong, Lauren A. Marsh, Jessica L. Feuston, Janet Ruppert, Jed R. Brubaker, Danielle Albers Szafir
Interpretive scholars generate knowledge from text corpora by manually sampling documents, applying codes, and refining and collating codes into categories until meaningful themes emerge. Given a large corpus, machine learning could help scale this data sampling and analysis, but prior research shows that experts are generally concerned about algorithms potentially disrupting or driving interpretive scholarship. We take a human-centered design approach to addressing concerns around machine-assisted interpretive research to build Scholastic, which incorporates a machine-in-the-loop clustering algorithm to scaffold interpretive text analysis. As a scholar applies codes to documents and refines them, the resulting coding schema serves as structured metadata which constrains hierarchical document and word clusters inferred from the corpus. Interactive visualizations of these clusters can help scholars strategically sample documents further toward insights. Scholastic demonstrates how human-centered algorithm design and visualizations employing familiar metaphors can support inductive and interpretive research methodologies through interactive topic modeling and document clustering.
http://arxiv.org/abs/2208.06133v1
"2022-08-12T06:41:45Z"
cs.HC, cs.LG
2022
Court Judgement Labeling Using Topic Modeling and Syntactic Parsing
Yuchen Liu
In regions that practice common law, relevant historical cases are essential references for sentencing. To help legal practitioners find previous judgements more easily, this paper aims to label each court judgement with a set of tags. These tags are legally important for summarizing the judgement and can guide the user to similar judgements. We introduce a heuristic system to solve the problem, which starts from Aspect-driven Topic Modeling and uses Dependency Parsing and Constituency Parsing for phrase generation. We also construct a legal term tree for Hong Kong and implement a sentence simplification module to support the system. Finally, we propose a similar-document recommendation algorithm based on the generated tags. It enables users to find similar documents based on a few selected aspects rather than the whole passage. Experiment results show that this system is the best approach for this specific task: it summarizes documents better than a simple term extraction method, and its recommendation algorithm is more effective than full-text comparison approaches. We believe that the system has huge potential in law as well as in other areas.
http://arxiv.org/abs/2208.04225v2
"2022-08-03T06:32:16Z"
cs.IR, cs.LG
2022
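The parsing-based phrase generation step described above can be approximated with off-the-shelf tools. The sketch below uses spaCy noun chunks as crude tag candidates; it is a stand-in, not the paper's dependency/constituency pipeline or its Hong Kong legal term tree.

```python
# Rank multi-word noun chunks as candidate judgement tags.
from collections import Counter
import spacy

nlp = spacy.load("en_core_web_sm")

def candidate_tags(judgement_text: str, top_n: int = 10) -> list[str]:
    doc = nlp(judgement_text)
    # multi-word noun chunks as crude aspect phrases
    chunks = [c.text.lower() for c in doc.noun_chunks if len(c) > 1]
    # frequency as a rough salience proxy
    return [phrase for phrase, _ in Counter(chunks).most_common(top_n)]
```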
No Pattern, No Recognition: a Survey about Reproducibility and Distortion Issues of Text Clustering and Topic Modeling
Marília Costa Rosendo Silva, Felipe Alves Siqueira, João Pedro Mantovani Tarrega, João Vitor Pataca Beinotti, Augusto Sousa Nunes, Miguel de Mattos Gardini, Vinícius Adolfo Pereira da Silva, Nádia Félix Felipe da Silva, André Carlos Ponce de Leon Ferreira de Carvalho
Extracting knowledge from unlabeled texts using machine learning algorithms can be complex. Document categorization and information retrieval are two applications that may benefit from unsupervised learning (e.g., text clustering and topic modeling), including exploratory data analysis. However, the unsupervised learning paradigm poses reproducibility issues. The initialization can lead to variability depending on the machine learning algorithm. Furthermore, the distortions can be misleading when regarding cluster geometry. Amongst the causes, the presence of outliers and anomalies can be a determining factor. Despite the relevance of initialization and outlier issues for text clustering and topic modeling, the authors did not find an in-depth analysis of them. This survey provides a systematic literature review (2011-2022) of these subareas and proposes a common terminology since similar procedures have different terms. The authors describe research opportunities, trends, and open issues. The appendices summarize the theoretical background of the text vectorization, the factorization, and the clustering algorithms that are directly or indirectly related to the reviewed works.
http://arxiv.org/abs/2208.01712v1
"2022-08-02T19:51:43Z"
cs.LG, cs.CL, cs.IR, stat.ML, I.2; I.2.7; I.5.3
2022
Unsupervised Learning under Latent Label Shift
Manley Roberts, Pranav Mani, Saurabh Garg, Zachary C. Lipton
What sorts of structure might enable a learner to discover classes from unlabeled data? Traditional approaches rely on feature-space similarity and heroic assumptions on the data. In this paper, we introduce unsupervised learning under Latent Label Shift (LLS), where we have access to unlabeled data from multiple domains such that the label marginals $p_d(y)$ can shift across domains but the class conditionals $p(\mathbf{x}|y)$ do not. This work instantiates a new principle for identifying classes: elements that shift together group together. For finite input spaces, we establish an isomorphism between LLS and topic modeling: inputs correspond to words, domains to documents, and labels to topics. Addressing continuous data, we prove that when each label's support contains a separable region, analogous to an anchor word, oracle access to $p(d|\mathbf{x})$ suffices to identify $p_d(y)$ and $p_d(y|\mathbf{x})$ up to permutation. Thus motivated, we introduce a practical algorithm that leverages domain-discriminative models as follows: (i) push examples through domain discriminator $p(d|\mathbf{x})$; (ii) discretize the data by clustering examples in $p(d|\mathbf{x})$ space; (iii) perform non-negative matrix factorization on the discrete data; (iv) combine the recovered $p(y|d)$ with the discriminator outputs $p(d|\mathbf{x})$ to compute $p_d(y|\mathbf{x}) \; \forall d$. With semi-synthetic experiments, we show that our algorithm can leverage domain information to improve upon competitive unsupervised classification methods. We reveal a failure mode of standard unsupervised classification methods when feature-space similarity does not indicate true groupings, and show empirically that our method better handles this case. Our results establish a deep connection between distribution shift and topic modeling, opening promising lines for future work.
http://arxiv.org/abs/2207.13179v2
"2022-07-26T20:52:53Z"
cs.LG, stat.ML
2022
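Steps (ii) and (iii) of the algorithm above translate almost directly into scikit-learn calls. A rough sketch under assumed interfaces follows; step (i), the discriminator training, and the step (iv) combination with $p(d|\mathbf{x})$ are omitted.

```python
# Discretize discriminator outputs by clustering, then factorize.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import NMF

def recover_p_y_given_d(p_d_given_x, n_labels, n_clusters=50):
    """p_d_given_x: (n_examples, n_domains) domain-discriminator outputs."""
    # (ii) discretize: cluster examples in p(d|x) space
    clusters = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(p_d_given_x)
    C = np.zeros((n_clusters, p_d_given_x.shape[1]))
    np.add.at(C, clusters, p_d_given_x)   # soft domain mass per cluster
    # (iii) NMF on the discretized data with rank = number of latent labels
    nmf = NMF(n_components=n_labels, init="nndsvda", max_iter=500)
    W = nmf.fit_transform(C)              # cluster-label factor
    H = nmf.components_                   # label-domain factor
    return H / H.sum(axis=0, keepdims=True)  # column-normalize toward p(y|d)
```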
Efficient Algorithms for Sparse Moment Problems without Separation
Zhiyuan Fan, Jian Li
We consider the sparse moment problem of learning a $k$-spike mixture in high-dimensional space from its noisy moment information in any dimension. We measure the accuracy of the learned mixtures using transportation distance. Previous algorithms either make certain separation assumptions, use more recovery moments, or run in (super) exponential time. Our algorithm for the one-dimensional problem (also called the sparse Hausdorff moment problem) is a robust version of the classic Prony's method, and our contribution mainly lies in the analysis. We adopt a global and much tighter analysis than previous work (which analyzes the perturbation of the intermediate results of Prony's method). A useful technical ingredient is a connection between the linear system defined by the Vandermonde matrix and the Schur polynomial, which allows us to provide a tight perturbation bound independent of the separation and may be useful in other contexts. To tackle the high-dimensional problem, we first solve the two-dimensional problem by extending the one-dimensional algorithm and analysis to complex numbers. Our algorithm for the high-dimensional case determines the coordinates of each spike by aligning a 1d projection of the mixture to a random vector and a set of 2d projections of the mixture. Our results have applications to learning topic models and Gaussian mixtures, implying improved sample complexity results or running time over prior work.
http://arxiv.org/abs/2207.13008v2
"2022-07-26T16:17:32Z"
cs.LG
2022
Skill requirements in job advertisements: A comparison of skill-categorization methods based on explanatory power in wage regressions
Ziqiao Ao, Gergely Horvath, Chunyuan Sheng, Yifan Song, Yutong Sun
In this paper, we compare different methods to extract skill requirements from job advertisements. We consider three top-down methods that are based on expert-created dictionaries of keywords, and a bottom-up method of unsupervised topic modeling, the Latent Dirichlet Allocation (LDA) model. We measure the skill requirements based on these methods using a U.K. dataset of job advertisements that contains over 1 million entries. We estimate the returns of the identified skills using wage regressions. Finally, we compare the different methods by the wage variation they can explain, assuming that better-identified skills will explain a higher fraction of the wage variation in the labor market. We find that the top-down methods perform worse than the LDA model, as they can explain only about 20% of the wage variation, while the LDA model explains about 45% of it.
http://arxiv.org/abs/2207.12834v1
"2022-07-26T11:58:44Z"
econ.GN, q-fin.EC
2022
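The comparison logic in the abstract above reduces to regressing log wages on each method's skill measures and comparing the explained variance. A hedged sketch follows; the column names ("annual_wage", the skill columns) are hypothetical.

```python
# R^2 of a log-wage regression on one method's skill measures.
import numpy as np
import pandas as pd
import statsmodels.api as sm

def explained_wage_variation(ads: pd.DataFrame, skill_cols: list[str]) -> float:
    X = sm.add_constant(ads[skill_cols].astype(float))
    return sm.OLS(np.log(ads["annual_wage"]), X).fit().rsquared

# e.g., compare a dictionary-based method against LDA topic shares:
# explained_wage_variation(ads, dictionary_cols) vs.
# explained_wage_variation(ads, lda_topic_cols)
```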
Multimodal Neural Machine Translation with Search Engine Based Image Retrieval
ZhenHao Tang, XiaoBing Zhang, Zi Long, XiangHua Fu
Recently, a number of works have shown that the performance of neural machine translation (NMT) can be improved to a certain extent by using visual information. However, most of these conclusions are drawn from the analysis of experimental results based on a limited set of bilingual sentence-image pairs, such as Multi30K. In these kinds of datasets, the content of one bilingual parallel sentence pair must be well represented by a manually annotated image, which differs from the actual translation situation. Some previous works addressed this problem by retrieving images from existing sentence-image pairs with a topic model. However, because of the limited collection of sentence-image pairs they used, their image retrieval method struggles with out-of-vocabulary words and can hardly prove that visual information enhances NMT rather than the mere co-occurrence of images and sentences. In this paper, we propose an open-vocabulary image retrieval method to collect descriptive images for a bilingual parallel corpus using an image search engine. Next, we propose a text-aware attentive visual encoder to filter incorrectly collected noise images. Experimental results on Multi30K and two other translation datasets show that our proposed method achieves significant improvements over strong baselines.
http://arxiv.org/abs/2208.00767v2
"2022-07-26T08:42:06Z"
cs.CV, cs.AI, cs.CL, cs.IR
2022
A Data-driven Latent Semantic Analysis for Automatic Text Summarization using LDA Topic Modelling
Daniel F. O. Onah, Elaine L. L. Pang, Mahmoud El-Haj
With the advent and popularity of big data mining and large-scale text analysis, automated text summarization has become prominent for extracting and retrieving important information from documents. This research investigates aspects of automatic text summarization from the perspectives of single and multiple documents. Summarization is the task of condensing large text articles into short, summarized versions, reducing the text in size while preserving key information and retaining the meaning of the original document. This study presents the Latent Dirichlet Allocation (LDA) approach used to perform topic modelling on summarised medical science journal articles with topics related to genes and diseases. The PyLDAvis web-based interactive visualization tool was used to visualise the selected topics. The visualisation provides an overarching view of the main topics while attributing deeper meaning to the prevalence of individual topics. This study presents a novel approach to the summarization of single and multiple documents. The results suggest that terms are ranked purely by their probability of topic prevalence within the processed document, using an extractive summarization technique. The PyLDAvis visualization illustrates the flexibility of exploring how the terms of the topics relate to the fitted LDA model. The topic modelling results show prevalence within topics 1 and 2, revealing a similarity between the terms in these two topics. The efficacy of the LDA and extractive summarization methods was measured using Latent Semantic Analysis (LSA) and Recall-Oriented Understudy for Gisting Evaluation (ROUGE) metrics to evaluate the reliability and validity of the model.
http://arxiv.org/abs/2207.14687v7
"2022-07-23T11:04:03Z"
cs.IR, cs.LG
2022
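The LDA-plus-PyLDAvis workflow the abstract describes follows generic gensim and pyLDAvis usage, sketched below on a toy tokenized corpus; this is not the authors' exact preprocessing pipeline.

```python
# Fit a small LDA model and export an interactive topic visualization.
from gensim import corpora, models
import pyLDAvis
import pyLDAvis.gensim_models

docs = [["gene", "disease", "mutation"], ["disease", "therapy", "gene"]]  # toy tokens
dictionary = corpora.Dictionary(docs)
corpus = [dictionary.doc2bow(d) for d in docs]
lda = models.LdaModel(corpus, num_topics=2, id2word=dictionary, passes=10)
vis = pyLDAvis.gensim_models.prepare(lda, corpus, dictionary)  # interactive view
pyLDAvis.save_html(vis, "lda_topics.html")
```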
Generalized Identifiability Bounds for Mixture Models with Grouped Samples
Robert A. Vandermeulen, René Saitenmacher
Recent work has shown that finite mixture models with $m$ components are identifiable, while making no assumptions on the mixture components, so long as one has access to groups of samples of size $2m-1$ which are known to come from the same mixture component. In this work we generalize that result and show that, if every subset of $k$ mixture components of a mixture model are linearly independent, then that mixture model is identifiable with only $(2m-1)/(k-1)$ samples per group. We further show that this value cannot be improved. We prove an analogous result for a stronger form of identifiability known as "determinedness" along with a corresponding lower bound. This independence assumption almost surely holds if mixture components are chosen randomly from a $k$-dimensional space. We describe some implications of our results for multinomial mixture models and topic modeling.
http://arxiv.org/abs/2207.11164v1
"2022-07-22T16:01:51Z"
math.ST, cs.LG, stat.ML, stat.TH, 62G07, 62G05
2022
Evolution of the public opinion on COVID-19 vaccination in Japan
Yuri Nakayama, Yuka Takedomi, Towa Suda, Takeaki Uno, Takako Hashimoto, Masashi Toyoda, Naoki Yoshinaga, Masaru Kitsuregawa, Luis E. C. Rocha, Ryota Kobayashi
Vaccines are promising tools to control the spread of COVID-19. An effective vaccination campaign requires government policies and community engagement, sharing experiences for social support, and voicing concerns about vaccine safety and efficiency. The increasing use of online social platforms allows us to trace large-scale communication and infer public opinion in real time. We collected more than 100 million vaccine-related tweets posted by 8 million users and used the Latent Dirichlet Allocation model to perform automated topic modeling of tweet texts during the vaccination campaign in Japan. We identified 15 topics grouped into 4 themes: Personal issues, Breaking news, Politics, and Conspiracy and humour. The evolution of the popularity of themes revealed a shift in public opinion, initially sharing attention over personal issues (individual aspect), collecting information from the news (knowledge acquisition), and government criticism, towards personal experiences once confidence in the vaccination campaign was established. An interrupted time series regression analysis showed that the Tokyo Olympic Games affected public opinion more than other critical events, but not the course of the vaccination. Public opinion on politics was significantly affected by various events, shifting attention positively in the early stages of the vaccination campaign and negatively later. Tweets about personal issues were mostly retweeted when the vaccination reached the younger population. The associations between the vaccination campaign stages and tweet themes suggest that public engagement on the social platform contributed to speeding up vaccine uptake by reducing anxiety via social learning and support.
http://arxiv.org/abs/2207.10924v1
"2022-07-22T08:00:35Z"
physics.soc-ph, cs.SI
2022
Open Tracing Tools: Overview and Critical Comparison
Andrea Janes, Xiaozhou Li, Valentina Lenarduzzi
Background. To cope with the rapidly growing complexity of contemporary software architecture, tracing has become an increasingly critical practice and has been widely adopted by software engineers. By adopting tracing tools, practitioners are able to monitor, debug, and optimize distributed software architectures easily. However, with an excessive number of valid candidates, researchers and practitioners have a hard time finding and selecting suitable tracing tools by systematically considering their features and advantages. Objective. To this end, this paper aims to provide an overview of popular open tracing tools via comparison. Method. We first identified 30 tools in an objective, systematic, and reproducible manner adopting the Systematic Multivocal Literature Review protocol. Then, we characterized each tool looking at 1) its measured features, 2) its popularity both in peer-reviewed literature and online media, and 3) its benefits and issues. We used topic modeling and sentiment analysis to extract and summarize the benefits and issues. Specifically, we adopted ChatGPT to support the topic interpretation. Results. This paper presents a systematic comparison of the selected tracing tools in terms of their features, popularity, benefits, and issues. Conclusion. The results mainly show that each tracing tool provides a unique combination of features with different pros and cons. The contribution of this paper is to provide practitioners with a better understanding of the tracing tools, facilitating their adoption.
http://arxiv.org/abs/2207.06875v2
"2022-07-14T12:52:32Z"
cs.SE
2022
Twitmo: A Twitter Data Topic Modeling and Visualization Package for R
Andreas Buchmüller, Gillian Kant, Christoph Weisser, Benjamin Säfken, Krisztina Kis-Katos, Thomas Kneib
We present Twitmo, a package that provides a broad range of methods to collect, pre-process, analyze and visualize geo-tagged Twitter data. Twitmo enables the user to collect geo-tagged Tweets from Twitter and provides a comprehensive and user-friendly toolbox to generate topic distributions from Latent Dirichlet Allocation (LDA), correlated topic models (CTM) and structural topic models (STM). Functions are included for pre-processing of text, model building and prediction. In addition, one of the innovations of the package is the automatic pooling of Tweets into longer pseudo-documents using hashtags and cosine similarities for better topic coherence. The package additionally comes with functionality to visualize collected data sets and fitted models in static as well as interactive ways, and offers built-in support for model visualizations via LDAvis, providing great convenience for researchers in this area. The Twitmo package is an innovative toolbox that can be used to analyze the public discourse of various topics, political parties or persons of interest in space and time.
http://arxiv.org/abs/2207.11236v1
"2022-07-08T12:23:20Z"
cs.IR, cs.CL, cs.LG, stat.ML, 68N30 (Primary) 62P25, 97K80 (Secondary)
2022
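The hashtag-pooling idea highlighted in the abstract above can be sketched in a few lines. Twitmo itself is an R package; the Python sketch below, with illustrative thresholds and variable names, only mirrors the pooling logic.

```python
# Pool same-hashtag tweets into pseudo-documents, then attach
# hashtag-less tweets to the most similar pool by cosine similarity.
from collections import defaultdict
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def pool_tweets(tweets, hashtags, threshold=0.5):
    """tweets: list of texts; hashtags: hashtag-or-None per tweet."""
    pools, loose = defaultdict(list), []
    for text, tag in zip(tweets, hashtags):
        (pools[tag] if tag else loose).append(text)
    docs = [" ".join(texts) for texts in pools.values()]
    if loose and docs:
        tfidf = TfidfVectorizer().fit(docs + loose)
        sims = cosine_similarity(tfidf.transform(loose), tfidf.transform(docs))
        for text, row in zip(loose, sims):
            if row.max() >= threshold:
                docs[row.argmax()] += " " + text  # merge into closest pool
    return docs
```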
TopicFM: Robust and Interpretable Topic-Assisted Feature Matching
Khang Truong Giang, Soohwan Song, Sungho Jo
This study addresses an image-matching problem in challenging cases, such as large scene variations or textureless scenes. To gain robustness to such situations, most previous studies have attempted to encode the global contexts of a scene via graph neural networks or transformers. However, these contexts do not explicitly represent high-level contextual information, such as structural shapes or semantic instances; therefore, the encoded features are still not sufficiently discriminative in challenging scenes. We propose a novel image-matching method that applies a topic-modeling strategy to encode high-level contexts in images. The proposed method trains latent semantic instances called topics. It explicitly models an image as a multinomial distribution of topics, and then performs probabilistic feature matching. This approach improves the robustness of matching by focusing on the same semantic areas between the images. In addition, the inferred topics provide interpretability for the matching results, making our method explainable. Extensive experiments on outdoor and indoor datasets show that our method outperforms other state-of-the-art methods, particularly in challenging cases. The code is available at https://github.com/TruongKhang/TopicFM.
http://arxiv.org/abs/2207.00328v3
"2022-07-01T10:39:14Z"
cs.CV
2022
Gaussian Latent Dirichlet Allocation for Discrete Human State Discovery
Congyu Wu, Aaron Fisher, David Schnyer
In this article we propose and validate an unsupervised probabilistic model, Gaussian Latent Dirichlet Allocation (GLDA), for the problem of discrete state discovery from repeated, multivariate psychophysiological samples collected from multiple, inherently distinct, individuals. Psychology and medical research heavily involve measuring potentially related but individually inconclusive variables from a cohort of participants to derive diagnoses, necessitating clustering analysis. Traditional probabilistic clustering models such as the Gaussian Mixture Model (GMM) assume a global mixture of component distributions, which may not be realistic for observations from different patients. The GLDA model borrows the individual-specific mixture structure from a popular topic model, Latent Dirichlet Allocation (LDA), in Natural Language Processing and merges it with the Gaussian component distributions of GMM to suit continuous-type data. We implemented GLDA using STAN (a probabilistic modeling language) and applied it to two datasets, one containing Ecological Momentary Assessments (EMA) and the other heart measures from electrocardiogram and impedance cardiograph. We found that in both datasets the GLDA-learned class weights achieved significantly higher correlations with clinically assessed depression, anxiety, and stress scores than those produced by the baseline GMM. Our findings demonstrate the advantage of GLDA over conventional finite mixture models for human state discovery from repeated multivariate data, likely due to better characterization of potential underlying between-participant differences. Future work is required to validate the utility of this model on a broader range of applications.
http://arxiv.org/abs/2206.14233v1
"2022-06-28T18:33:46Z"
cs.LG, 62H22, 62H30, 60-04, 91C20
2022
Creation and Analysis of an International Corpus of Privacy Laws
Sonu Gupta, Ellen Poplavska, Nora O'Toole, Siddhant Arora, Thomas Norton, Norman Sadeh, Shomir Wilson
The landscape of privacy laws and regulations around the world is complex and ever-changing. National and super-national laws, agreements, decrees, and other government-issued rules form a patchwork that companies must follow to operate internationally. To examine the status and evolution of this patchwork, we introduce the Government Privacy Instructions Corpus, or GPI Corpus, of 1,043 privacy laws, regulations, and guidelines, covering 182 jurisdictions. This corpus enables a large-scale quantitative and qualitative examination of legal foci on privacy. We examine the temporal distribution of when GPIs were created and illustrate the dramatic increase in privacy legislation over the past 50 years, although a finer-grained examination reveals that the rate of increase varies depending on the personal data types that GPIs address. Our exploration also demonstrates that most privacy laws address relatively few personal data types, showing that comprehensive privacy legislation remains rare. Additionally, topic modeling results show the prevalence of common themes in GPIs, such as finance, healthcare, and telecommunications. Finally, we release the corpus to the research community to promote further study.
http://arxiv.org/abs/2206.14169v1
"2022-06-28T17:36:12Z"
cs.CL
2022
Combining Topic Modeling with Grounded Theory: Case Studies of Project Collaboration
Eyyub Can Odacioglu, Lihong Zhang, Richard Allmendinger
This paper proposes an Artificial Intelligence (AI) Grounded Theory for management studies. We argue that this novel and rigorous approach, which embeds topic modelling, will allow latent knowledge to be found. We illustrate this abductive method using 51 case studies of collaborative innovation published by the Project Management Institute (PMI). Initial results are presented and discussed, including 40 topics and 6 categories, 4 of which are core categories, and two new theories of project collaboration.
http://arxiv.org/abs/2207.02212v1
"2022-06-28T10:21:58Z"
cs.HC, cs.IR
2022
Estimation and inference for the Wasserstein distance between mixing measures in topic models
Xin Bing, Florentina Bunea, Jonathan Niles-Weed
The Wasserstein distance between mixing measures has come to occupy a central place in the statistical analysis of mixture models. This work proposes a new canonical interpretation of this distance and provides tools to perform inference on the Wasserstein distance between mixing measures in topic models. We consider the general setting of an identifiable mixture model consisting of mixtures of distributions from a set $\mathcal{A}$ equipped with an arbitrary metric $d$, and show that the Wasserstein distance between mixing measures is uniquely characterized as the most discriminative convex extension of the metric $d$ to the set of mixtures of elements of $\mathcal{A}$. The Wasserstein distance between mixing measures has been widely used in the study of such models, but without axiomatic justification. Our results establish this metric to be a canonical choice. Specializing our results to topic models, we consider estimation and inference of this distance. Though upper bounds for its estimation have been recently established elsewhere, we prove the first minimax lower bounds for the estimation of the Wasserstein distance in topic models. We also establish fully data-driven inferential tools for the Wasserstein distance in the topic model context. Our results apply to potentially sparse mixtures of high-dimensional discrete probability distributions. These results allow us to obtain the first asymptotically valid confidence intervals for the Wasserstein distance in topic models.
http://arxiv.org/abs/2206.12768v2
"2022-06-26T02:33:40Z"
math.ST, stat.ML, stat.TH
2022
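For readers unfamiliar with the central object in the abstract above, the Wasserstein distance between two discrete mixing measures $P=\sum_i a_i\,\delta_{A_i}$ and $Q=\sum_j b_j\,\delta_{B_j}$ over a set $\mathcal{A}$ with metric $d$ is the standard optimal-transport quantity:

```latex
% \Pi(a, b) is the set of couplings of the weight vectors a and b.
W_d(P, Q) \;=\; \min_{\pi \in \Pi(a, b)} \sum_{i, j} \pi_{ij}\, d(A_i, B_j),
\qquad
\Pi(a, b) \;=\; \{\, \pi \ge 0 \;:\; \pi \mathbf{1} = a, \; \pi^{\top} \mathbf{1} = b \,\}.
```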
A Temporal Extension of Latent Dirichlet Allocation for Unsupervised Acoustic Unit Discovery
Werner van der Merwe, Herman Kamper, Johan du Preez
Latent Dirichlet allocation (LDA) is widely used for unsupervised topic modelling on sets of documents. No temporal information is used in the model. However, there is often a relationship between the corresponding topics of consecutive tokens. In this paper, we present an extension to LDA that uses a Markov chain to model temporal information. We use this new model for acoustic unit discovery from speech. As input tokens, the model takes a discretised encoding of speech from a vector quantised (VQ) neural network with 512 codes. The goal is then to map these 512 VQ codes to 50 phone-like units (topics) in order to more closely resemble true phones. In contrast to the base LDA, which only considers how VQ codes co-occur within utterances (documents), the Markov chain LDA additionally captures how consecutive codes follow one another. This extension leads to an increase in cluster quality and phone segmentation results compared to the base LDA. Compared to a recent vector quantised neural network approach that also learns 50 units, the extended LDA model performs better in phone segmentation but worse in mutual information.
http://arxiv.org/abs/2206.11706v2
"2022-06-23T13:53:59Z"
eess.AS, cs.CL, cs.LG, stat.ML
2022
Research Topic Flows in Co-Authorship Networks
Bastian Schäfermeier, Johannes Hirth, Tom Hanika
In scientometrics, scientific collaboration is often analyzed by means of co-authorships. An aspect which is often overlooked and more difficult to quantify is the flow of expertise between authors from different research topics, which is an important part of scientific progress. With the Topic Flow Network (TFN) we propose a graph structure for the analysis of research topic flows between scientific authors and their respective research fields. Based on a multi-graph and a topic model, our proposed network structure accounts for intratopic as well as intertopic flows. Our method requires solely a corpus of publications (i.e., author and abstract information) for the construction of a TFN. From this, research topics are discovered automatically through non-negative matrix factorization. The derived TFN allows for the application of social network analysis techniques, such as common metrics and community detection. Most importantly, it allows for the analysis of intertopic flows on a large, macroscopic scale, i.e., between research topics, as well as on a microscopic scale, i.e., between certain sets of authors. We demonstrate the utility of TFNs by applying our method to two comprehensive corpora of altogether 20 million publications spanning more than 60 years of research in the fields of computer science and mathematics. Our results give evidence that TFNs are suitable, e.g., for the analysis of topical communities, the discovery of important authors in different fields, and, most notably, the analysis of intertopic flows, i.e., the transfer of topical expertise. Besides that, our method opens new directions for future research, such as the investigation of influence relationships between research fields.
http://arxiv.org/abs/2206.07980v1
"2022-06-16T07:45:53Z"
cs.SI, cs.DL, cs.IR, cs.LG, 68T50 68U35 68V35 01A90 00A15
2022
Towards Better Understanding with Uniformity and Explicit Regularization of Embeddings in Embedding-based Neural Topic Models
Wei Shao, Lei Huang, Shuqi Liu, Shihua Ma, Linqi Song
Embedding-based neural topic models can explicitly represent words and topics by embedding them in a homogeneous feature space, which offers higher interpretability. However, there are no explicit constraints on the training of embeddings, leading to a larger optimization space. Also, a clear description of the changes in embeddings and their impact on model performance is still lacking. In this paper, we propose an embedding-regularized neural topic model, which applies specially designed training constraints on word embeddings and topic embeddings to reduce the optimization space of parameters. To reveal the changes and roles of embeddings, we introduce \textbf{uniformity} into the embedding-based neural topic model as the evaluation metric of the embedding space. On this basis, we describe how embeddings tend to change during training via the changes in their uniformity. Furthermore, we demonstrate the impact of changes in embeddings through ablation studies. The results of experiments on two mainstream datasets indicate that our model significantly outperforms baseline models in terms of the harmony between topic quality and document modeling. To the best of our knowledge, this work is the first attempt to exploit uniformity to explore changes in the embeddings of embedding-based neural topic models and their impact on model performance.
http://arxiv.org/abs/2206.07960v1
"2022-06-16T07:02:55Z"
cs.CL
2022
US News and Social Media Framing around Vaping
Keyu Chen, Marzieh Babaeianjelodar, Yiwen Shi, Rohan Aanegola, Lam Yin Cheung, Preslav Ivanov Nakov, Shweta Yadav, Angus Bancroft, Ashiqur R. KhudaBukhsh, Munmun De Choudhury, Frederick L. Altice, Navin Kumar
In this paper, we investigate how vaping is framed differently (2008-2021) between US news and social media. We analyze 15,711 news articles and 1,231,379 Facebook posts about vaping to study the differences in framing between media varieties. We use word embeddings to provide two-dimensional visualizations of the semantic changes around vaping for news and for social media. We detail that news media framing of vaping shifted over time in line with emergent regulatory trends, such as flavored vaping bans, with little discussion around vaping as a smoking cessation tool. We found that social media discussions were far more varied, with transitions toward vaping both as a public health harm and as a smoking cessation tool. Our cloze test, dynamic topic model, and question answering showed similar patterns, where social media, but not news media, characterizes vaping as a combustible cigarette substitute. We use n-grams to detail that social media data first centered on vaping as a smoking cessation tool, and in 2019 moved toward narratives around vaping regulation, similar to news media frames. Overall, social media tracks the evolution of vaping as a social practice, while news media reflects more risk-based concerns. A strength of our work is how the different techniques we have applied validate each other. Stakeholders may utilize our findings to intervene around the framing of vaping, and may design communications campaigns that improve the way society sees vaping, thus possibly aiding smoking cessation and reducing youth vaping.
http://arxiv.org/abs/2206.07765v3
"2022-06-15T18:59:18Z"
cs.SI
2022
Supervised Dictionary Learning with Auxiliary Covariates
Joowon Lee, Hanbaek Lyu, Weixin Yao
Supervised dictionary learning (SDL) is a classical machine learning method that simultaneously performs feature extraction and classification, two tasks that are not necessarily a priori aligned objectives. The goal of SDL is to learn a class-discriminative dictionary, which is a set of latent feature vectors that can well explain both the features and the labels of observed data. In this paper, we provide a systematic study of SDL, including its theory, algorithms, and applications. First, we provide a novel framework that `lifts' SDL as a convex problem in a combined factor space and propose a low-rank projected gradient descent algorithm that converges exponentially to the global minimizer of the objective. We also formulate generative models of SDL and provide global estimation guarantees of the true parameters depending on the hyperparameter regime. Second, viewing SDL as a nonconvex constrained optimization problem, we provide an efficient block coordinate descent algorithm that is guaranteed to find an $\varepsilon$-stationary point of the objective in $O(\varepsilon^{-1}(\log \varepsilon^{-1})^{2})$ iterations. For the corresponding generative model, we establish a novel non-asymptotic local consistency result for constrained and regularized maximum likelihood estimation problems, which may be of independent interest. Third, we apply SDL to imbalanced document classification by supervised topic modeling and also to pneumonia detection from chest X-ray images. We also provide simulation studies to demonstrate that SDL becomes more effective when there is a discrepancy between the best reconstructive and the best discriminative dictionaries.
http://arxiv.org/abs/2206.06774v1
"2022-06-14T12:10:03Z"
stat.ML, cs.LG, math.ST, stat.TH
2022
Analyzing Folktales of Different Regions Using Topic Modeling and Clustering
Jacob Werzinsky, Zhiyan Zhong, Xuedan Zou
This paper employs two major natural language processing techniques, topic modeling and clustering, to find patterns in folktales and reveal cultural relationships between regions. In particular, we used Latent Dirichlet Allocation and BERTopic to extract the recurring elements, as well as K-means clustering to group folktales. Our paper tries to answer the question of what the similarities and differences between folktales are, and what they say about culture. Here we show that the common trends between folktales are family, food, traditional gender roles, mythological figures, and animals. Also, folktale topics differ based on geographical location, with folktales found in different regions featuring different animals and environments. We were not surprised to find that religious figures and animals are some of the common topics in all cultures. However, we were surprised that European and Asian folktales were often paired together. Our results demonstrate the prevalence of certain elements in cultures across the world. We anticipate our work to be a resource for future research on folktales and an example of using natural language processing to analyze documents in specific domains. Furthermore, since we only analyzed the documents based on their topics, more work could be done in analyzing the structure, sentiment, and characters of these folktales.
http://arxiv.org/abs/2206.04221v1
"2022-06-09T02:04:18Z"
cs.CL, cs.CY
2022
Separating Diffractive and Non-Diffractive events in High energy Collisions at LHC energies
Sadhana Dash, Nirbhay Behera, Basanta Nandi
The charged particle multiplicity distribution in high energy hadronic and nuclear collisions receives contributions from both diffractive and non-diffractive processes. It is experimentally challenging to segregate diffractive events from non-diffractive events. The present work aims to separate and extract the charged particle multiplicity distribution of diffractive and non-diffractive events in hadronic collisions at LHC energies. A data-driven model using the topic modelling statistical tool DEMIX has been used to demonstrate the proof of concept for pp collisions at 7 TeV generated by the Pythia 8 event generator. The study suggests that the DEMIX technique can be used to extract the underlying base distributions and fractions for experimental observables pertaining to diffractive and non-diffractive events at LHC energies, and can therefore serve as a step toward an experimental determination of precise inelastic cross-sections in pp collisions.
http://arxiv.org/abs/2206.03199v1
"2022-06-07T11:42:42Z"
hep-ph
2022
An Empirical Study of IoT Security Aspects at Sentence-Level in Developer Textual Discussions
Nibir Chandra Mandal, Gias Uddin
IoT is a rapidly emerging paradigm that now encompasses almost every aspect of our modern life. As such, ensuring the security of IoT devices is crucial. IoT devices can differ from traditional computing, so the design and implementation of proper security measures can be challenging. We observed that IoT developers discuss their security-related challenges in developer forums like Stack Overflow (SO). However, we find that IoT security discussions can also be buried inside non-security discussions in SO. In this paper, we aim to understand the challenges IoT developers face while applying security practices and techniques to IoT devices. We have two goals: (1) develop a model that can automatically find security-related IoT discussions in SO, and (2) study the model output to learn about IoT developers' security-related challenges. First, we download 53K posts from SO that contain discussions about IoT. Second, we manually label 5,919 sentences from the 53K posts as 1 or 0. Third, we use this benchmark to investigate a suite of deep learning transformer models; the best performing model is called SecBot. Fourth, we apply SecBot to the entire set of posts and find around 30K security-related sentences. Fifth, we apply topic modeling to the security-related sentences, then label and categorize the topics. Sixth, we analyze the evolution of the topics in SO. We found that (1) SecBot is based on the retraining of the deep learning model RoBERTa and offers the best F1-score of 0.935; (2) there are six error categories in samples misclassified by SecBot, which was mostly wrong when the keywords/contexts were ambiguous (e.g., a gateway can be a security gateway or a simple gateway); (3) there are 9 security topics grouped into three categories: Software, Hardware, and Network; and (4) the highest number of topics belongs to software security, followed by network security.
http://arxiv.org/abs/2206.03079v1
"2022-06-07T07:54:35Z"
cs.CR, cs.IR, cs.LG, cs.SE
2022
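The two-stage shape of the pipeline above can be sketched as follows: a fine-tuned sentence classifier filters security-related sentences, then topics are modeled on the positives. The checkpoint path, the positive label name, and the loader are placeholders; this is not the paper's released SecBot.

```python
# Filter sentences with a classifier, then topic-model the positives.
from transformers import pipeline
from gensim import corpora, models

clf = pipeline("text-classification", model="path/to/finetuned-roberta")  # placeholder
sentences = load_iot_sentences()  # hypothetical loader returning list[str]

# keep sentences the classifier marks as security-related
security = [s for s in sentences if clf(s[:512])[0]["label"] == "SECURITY"]

tokens = [s.lower().split() for s in security]      # crude tokenization
dictionary = corpora.Dictionary(tokens)
corpus = [dictionary.doc2bow(t) for t in tokens]
lda = models.LdaModel(corpus, num_topics=9, id2word=dictionary, passes=5)
```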
Perspectives of Non-Expert Users on Cyber Security and Privacy: An Analysis of Online Discussions on Twitter
Nandita Pattnaik, Shujun Li, Jason R. C. Nurse
Current research on users' perspectives of cyber security and privacy related to traditional and smart devices at home is very active, but the focus is often more on specific modern devices such as mobile and smart IoT devices in a home context. In addition, most were based on smaller-scale empirical studies such as online surveys and interviews. We endeavour to fill these research gaps by conducting a larger-scale study based on a real-world dataset of 413,985 tweets posted by non-expert users on Twitter in six months of three consecutive years (January and February in 2019, 2020 and 2021). Two machine learning-based classifiers were developed to identify the 413,985 tweets. We analysed this dataset to understand non-expert users' cyber security and privacy perspectives, including the yearly trend and the impact of the COVID-19 pandemic. We applied topic modelling, sentiment analysis and qualitative analysis of selected tweets in the dataset, leading to various interesting findings. For instance, we observed a 54% increase in non-expert users' tweets on cyber security and/or privacy related topics in 2021, compared to before the start of global COVID-19 lockdowns (January 2019 to February 2020). We also observed an increased level of help-seeking tweets during the COVID-19 pandemic. Our analysis revealed a diverse range of topics discussed by non-expert users across the three years, including VPNs, Wi-Fi, smartphones, laptops, smart home devices, financial security, and security and privacy issues involving different stakeholders. Overall negative sentiment was observed across almost all topics non-expert users discussed on Twitter in all the three years. Our results confirm the multi-faceted nature of non-expert users' perspectives on cyber security and privacy and call for more holistic, comprehensive and nuanced research on different facets of such perspectives.
http://arxiv.org/abs/2206.02156v2
"2022-06-05T11:54:48Z"
cs.CR, cs.LG
2022
Modeling electronic health record data using a knowledge-graph-embedded topic model
Yuesong Zou, Ahmad Pesaranghader, Aman Verma, David Buckeridge, Yue Li
The rapid growth of electronic health record (EHR) datasets opens up promising opportunities to understand human diseases in a systematic way. However, effective extraction of clinical knowledge from EHR data has been hindered by its sparsity and noisy information. We present KG-ETM, an end-to-end knowledge graph-based multimodal embedded topic model. KG-ETM distills latent disease topics from EHR data by learning the embedding from medical knowledge graphs. We applied KG-ETM to a large-scale EHR dataset consisting of over 1 million patients. We evaluated its performance based on EHR reconstruction and drug imputation. KG-ETM demonstrated superior performance over the alternative methods on both tasks. Moreover, our model learned clinically meaningful graph-informed embedding of the EHR codes. In addition, our model is able to discover interpretable and accurate patient representations for patient stratification and drug recommendations.
http://arxiv.org/abs/2206.01436v1
"2022-06-03T07:58:17Z"
cs.LG, cs.IR, q-bio.QM
2022
Assessing the trade-off between prediction accuracy and interpretability for topic modeling on energetic materials corpora
Monica Puerto, Mason Kellett, Rodanthi Nikopoulou, Mark D. Fuge, Ruth Doherty, Peter W. Chung, Zois Boukouvalas
As the amount and variety of energetics research increases, machine-aware topic identification is necessary to streamline future research pipelines. An automatic topic identification process consists of creating document representations and performing classification. However, applying these processes to energetics research imposes new challenges. Energetics datasets contain many scientific terms that are necessary to understand the context of a document but may require more complex document representations. Secondly, the predictions from classification must be understandable and trusted by the chemists within the pipeline. In this work, we study the trade-off between prediction accuracy and interpretability by implementing three document embedding methods that vary in computational complexity. Alongside our accuracy results, we also provide local interpretable model-agnostic explanations (LIME) of each prediction to give a localized understanding of each prediction and to validate classifier decisions with our team of energetics experts. This study was carried out on a novel labeled energetics dataset created and validated by our team of energetics experts.
http://arxiv.org/abs/2206.00773v1
"2022-06-01T21:28:21Z"
cs.CL, cs.LG
2022
Assessing Group-level Gender Bias in Professional Evaluations: The Case of Medical Student End-of-Shift Feedback
Emmy Liu, Michael Henry Tessler, Nicole Dubosh, Katherine Mosher Hiller, Roger Levy
Although approximately 50% of medical school graduates today are women, female physicians tend to be underrepresented in senior positions, make less money than their male counterparts and receive fewer promotions. There is a growing body of literature demonstrating gender bias in various forms of evaluation in medicine, but this work was mainly conducted by looking for specific words using fixed dictionaries such as LIWC and focused on recommendation letters. We use a dataset of written and quantitative assessments of medical student performance on individual shifts of work, collected across multiple institutions, to investigate the extent to which gender bias exists in a day-to-day context for medical students. We investigate differences in the narrative comments given to male and female students by both male or female faculty assessors, using a fine-tuned BERT model. This allows us to examine whether groups are written about in systematically different ways, without relying on hand-crafted wordlists or topic models. We compare these results to results from the traditional LIWC method and find that, although we find no evidence of group-level gender bias in this dataset, terms related to family and children are used more in feedback given to women.
http://arxiv.org/abs/2206.00234v1
"2022-06-01T05:01:36Z"
cs.CL
2022
Anchor Prediction: A Topic Modeling Approach
Jean Dupuy, Adrien Guille, Julien Jacques
Networks of documents connected by hyperlinks, such as Wikipedia, are ubiquitous. Hyperlinks are inserted by the authors to enrich the text and facilitate navigation through the network. However, authors tend to insert only a fraction of the relevant hyperlinks, mainly because this is a time-consuming task. In this paper we address an annotation task which we refer to as anchor prediction. Even though it is conceptually close to link prediction or entity linking, it is a different task that requires a specific method to solve it. Given a source document and a target document, this task consists of automatically identifying anchors in the source document, i.e., words or terms that should carry a hyperlink pointing towards the target document. We propose a contextualized relational topic model, CRTM, that models directed links between documents as a function of the local context of the anchor in the source document and the whole content of the target document. The model can be used to predict anchors in a source document, given the target document, without relying on a dictionary of previously seen mentions or titles, nor on any external knowledge graph. Authors can benefit from CRTM by letting it automatically suggest hyperlinks, given a new document and the set of target documents to connect to. It can also benefit readers, by dynamically inserting hyperlinks between the documents they are reading. Experiments conducted on several Wikipedia corpora (in English, Italian and German) highlight the practical usefulness of anchor prediction and demonstrate the relevance of our approach.
http://arxiv.org/abs/2205.14631v2
"2022-05-29T11:26:52Z"
cs.CL
2022
Federated Non-negative Matrix Factorization for Short Texts Topic Modeling with Mutual Information
Shijing Si, Jianzong Wang, Ruiyi Zhang, Qinliang Su, Jing Xiao
Non-negative matrix factorization (NMF) based topic modeling is widely used in natural language processing (NLP) to uncover hidden topics of short text documents. Usually, training a high-quality topic model requires a large amount of textual data. In many real-world scenarios, customer textual data should be private and sensitive, precluding uploading to data centers. This paper proposes a Federated NMF (FedNMF) framework, which allows multiple clients to collaboratively train a high-quality NMF based topic model with locally stored data. However, standard federated learning will significantly undermine the performance of topic models in downstream tasks (e.g., text classification) when the data distribution over clients is heterogeneous. To alleviate this issue, we further propose FedNMF+MI, which simultaneously maximizes the mutual information (MI) between the count features of local texts and their topic weight vectors to mitigate the performance degradation. Experimental results show that our FedNMF+MI methods outperform Federated Latent Dirichlet Allocation (FedLDA) and the FedNMF without MI methods for short texts by a significant margin on both coherence score and classification F1 score.
http://arxiv.org/abs/2205.13300v1
"2022-05-26T12:22:34Z"
cs.CL, cs.AI, cs.LG
2022
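A toy sketch of the federated-NMF idea in the abstract above: each client runs multiplicative updates on its local term-document matrix and a server averages the topic factors. This is a simplification for illustration, neither the paper's exact protocol nor its mutual-information regularizer.

```python
# One local update pass per client, then a server-side average.
import numpy as np

def local_update(X, W_global, n_iter=10, eps=1e-9, seed=0):
    """One client's pass: refine a copy of the global topic matrix W."""
    k = W_global.shape[1]
    H = np.random.default_rng(seed).random((k, X.shape[1]))
    W = W_global.copy()
    for _ in range(n_iter):
        H *= (W.T @ X) / (W.T @ W @ H + eps)
        W *= (X @ H.T) / (W @ H @ H.T + eps)
    return W

def federated_round(client_matrices, W_global):
    """Server step: average the clients' locally updated topic matrices."""
    return np.mean([local_update(X, W_global) for X in client_matrices], axis=0)
```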
Artificial intelligence for topic modelling in Hindu philosophy: mapping themes between the Upanishads and the Bhagavad Gita
Rohitash Chandra, Mukul Ranjan
A distinct feature of Hindu religious and philosophical texts is that they come from a library of texts rather than a single source. The Upanishads are among the oldest philosophical texts in the world and form the foundation of Hindu philosophy. The Bhagavad Gita is a core text of Hindu philosophy, known as a text that summarises the key philosophies of the Upanishads with a major focus on the philosophy of karma. These texts have been translated into many languages, and there exist studies about their prominent themes and topics; however, there is not much study of topic modelling using language models powered by deep learning. In this paper, we use advanced language models such as BERT to provide topic modelling of the key texts of the Upanishads and the Bhagavad Gita. We analyse the distinct and overlapping topics amongst the texts and visualise the links of selected texts of the Upanishads with the Bhagavad Gita. Our results show a very high similarity between the topics of these two texts, with a mean cosine similarity of 73%. We find that out of the fourteen topics extracted from the Bhagavad Gita, nine have a cosine similarity of more than 70% with the topics of the Upanishads. We also found that topics generated by the BERT-based models show very high coherence compared to conventional models. Our best performing model gives a coherence score of 73% on the Bhagavad Gita and 69% on the Upanishads. The visualization of the low-dimensional embeddings of these texts shows very clear overlap among their topics, adding another level of validation to our results.
http://arxiv.org/abs/2205.11020v1
"2022-05-23T03:39:00Z"
cs.CL
2022
A Weakly-Supervised Iterative Graph-Based Approach to Retrieve COVID-19 Misinformation Topics
Harry Wang, Sharath Chandra Guntuku
The COVID-19 pandemic has been accompanied by an `infodemic' -- of accurate and inaccurate health information across social media. Detecting misinformation amidst dynamically changing information landscape is challenging; identifying relevant keywords and posts is arduous due to the large amount of human effort required to inspect the content and sources of posts. We aim to reduce the resource cost of this process by introducing a weakly-supervised iterative graph-based approach to detect keywords, topics, and themes related to misinformation, with a focus on COVID-19. Our approach can successfully detect specific topics from general misinformation-related seed words in a few seed texts. Our approach utilizes the BERT-based Word Graph Search (BWGS) algorithm that builds on context-based neural network embeddings for retrieving misinformation-related posts. We utilize Latent Dirichlet Allocation (LDA) topic modeling for obtaining misinformation-related themes from the texts returned by BWGS. Furthermore, we propose the BERT-based Multi-directional Word Graph Search (BMDWGS) algorithm that utilizes greater starting context information for misinformation extraction. In addition to a qualitative analysis of our approach, our quantitative analyses show that BWGS and BMDWGS are effective in extracting misinformation-related content compared to common baselines in low data resource settings. Extracting such content is useful for uncovering prevalent misconceptions and concerns and for facilitating precision public health messaging campaigns to improve health behaviors.
http://arxiv.org/abs/2205.09416v1
"2022-05-19T09:30:39Z"
cs.HC, cs.CL, cs.SI
2,022
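A rough sketch of iterative seed-word expansion over a word-similarity graph, loosely in the spirit of BWGS (the paper uses contextual BERT embeddings; plain word2vec on a toy corpus is substituted here for brevity).

```python
# Expand a seed set by repeatedly following nearest neighbors in an
# embedding space; corpus and seeds are fabricated placeholders.
from gensim.models import Word2Vec

corpus = [
    ["vaccine", "microchip", "conspiracy", "hoax"],
    ["mask", "hoax", "plandemic", "conspiracy"],
    ["vaccine", "side", "effects", "cover", "up"],
]  # toy stand-in for misinformation-related posts
model = Word2Vec(corpus, vector_size=32, min_count=1, seed=0)

seeds, frontier = {"hoax"}, {"hoax"}
for _ in range(2):  # two expansion iterations
    nxt = set()
    for w in frontier:
        nxt |= {s for s, _ in model.wv.most_similar(w, topn=2)}
    frontier = nxt - seeds
    seeds |= nxt
print(seeds)
```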
Topic Modelling on Consumer Financial Protection Bureau Data: An Approach Using BERT Based Embeddings
Vasudeva Raju Sangaraju, Bharath Kumar Bolla, Deepak Kumar Nayak, Jyothsna Kh
Customers' reviews and comments are important for businesses to understand users' sentiment about products and services. However, this data needs to be analyzed to assess the sentiment associated with topics/aspects in order to provide efficient customer assistance. LDA and LSA fail to capture semantic relationships and are not specific to any domain. In this study, we evaluate BERTopic, a novel method that generates topics using sentence embeddings, on Consumer Financial Protection Bureau (CFPB) data. Our work shows that BERTopic is flexible and provides meaningful and diverse topics compared to LDA and LSA. Furthermore, domain-specific pre-trained embeddings (FinBERT) yield even better topics. We evaluate the topics using the c_v and UMass coherence scores.
http://arxiv.org/abs/2205.07259v1
"2022-05-15T11:14:47Z"
cs.LG, cs.AI, cs.CL, cs.IR, cs.IT, math.IT
2,022
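An illustrative run of BERTopic with a swapped-in domain-specific embedding model, as the abstract suggests for CFPB complaints; the documents and the FinBERT checkpoint name are assumptions for demonstration.

```python
# Fit BERTopic on toy complaint texts with a finance-domain encoder.
from bertopic import BERTopic
from sentence_transformers import SentenceTransformer

base = [
    "The bank charged me an overdraft fee without notice.",
    "My credit report contains an account I never opened.",
    "Debt collectors keep calling about a loan I already paid.",
    "The mortgage servicer misapplied my monthly payment.",
]
docs = [f"{s} (complaint {i})" for i, s in enumerate(base * 15)]

# Assumed FinBERT-style checkpoint; any sentence encoder works here.
embedder = SentenceTransformer("ProsusAI/finbert")
topic_model = BERTopic(embedding_model=embedder, min_topic_size=5)
topics, probs = topic_model.fit_transform(docs)
print(topic_model.get_topic_info().head())
```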
Disentangling Quarks and Gluons with CMS Open Data
Patrick T. Komiske, Serhii Kryhin, Jesse Thaler
We study quark and gluon jets separately using public collider data from the CMS experiment. Our analysis is based on 2.3/fb of proton-proton collisions at 7 TeV, collected at the Large Hadron Collider in 2011. We define two non-overlapping samples via a pseudorapidity cut -- central jets with |eta| < 0.65 and forward jets with |eta| > 0.65 -- and employ jet topic modeling to extract individual distributions for the maximally separable categories. Under certain assumptions, such as sample independence and mutual irreducibility, these categories correspond to "quark" and "gluon" jets, as given by a recently proposed operational definition. We consider a number of different methods for extracting reducibility factors from the central and forward datasets, from which the fractions of quark jets in each sample can be determined. The greatest stability and robustness to statistical uncertainties is achieved by a novel method based on parametrizing the endpoints of a receiver operating characteristic (ROC) curve. To mitigate detector effects, which would otherwise induce unphysical differences between central and forward jets, we use the OmniFold method to perform central value unfolding. As a demonstration of the power of this method, we extract the intrinsic dimensionality of the quark and gluon jet samples, which exhibit Casimir scaling, as expected from the strongly-ordered limit. To our knowledge, this work is the first application of full phase space unfolding to real collider data, and one of the first applications of topic modeling to extract separate quark and gluon distributions at the LHC.
http://arxiv.org/abs/2205.04459v2
"2022-05-09T17:59:59Z"
hep-ph, hep-ex, physics.data-an
2,022
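A schematic numpy version of the two-sample topic extraction underlying this kind of analysis: reducibility factors are estimated as minimum likelihood ratios of two mixture histograms, which then unmix into the "quark" and "gluon" topics. The binning and toy mixtures are assumptions, not the CMS measurement.

```python
import numpy as np

def jet_topics(pA, pB, eps=1e-12):
    """pA, pB: normalized histograms of the same observable."""
    kAB = np.min((pA + eps) / (pB + eps))   # kappa(A|B)
    kBA = np.min((pB + eps) / (pA + eps))   # kappa(B|A)
    topic1 = (pA - kAB * pB) / (1.0 - kAB)  # e.g. "quark" topic
    topic2 = (pB - kBA * pA) / (1.0 - kBA)  # e.g. "gluon" topic
    return topic1, topic2, kAB, kBA

# Toy central/forward mixtures of two underlying distributions.
edges = np.linspace(0, 1, 21)
x = 0.5 * (edges[:-1] + edges[1:])
q = np.exp(-((x - 0.3) ** 2) / 0.01); q /= q.sum()
g = np.exp(-((x - 0.6) ** 2) / 0.02); g /= g.sum()
pA, pB = 0.7 * q + 0.3 * g, 0.3 * q + 0.7 * g
t1, t2, kAB, kBA = jet_topics(pA, pB)
print(kAB, kBA)  # reducibility factors fix the sample fractions
```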
Understanding Quantum Software Engineering Challenges: An Empirical Study on Stack Exchange Forums and GitHub Issues
Mohamed Raed El aoun, Heng Li, Foutse Khomh, Moses Openja
With the advances in quantum computing, quantum software becomes critical for exploring the full potential of quantum computing systems. Recently, quantum software engineering (QSE) has become an emerging area attracting more and more attention. However, it is not clear what challenges and opportunities quantum computing presents to the software engineering community. This work aims to understand the QSE-related challenges perceived by developers. We perform an empirical study on Stack Exchange forums, where developers post QSE-related questions and answers, and GitHub issue reports, where developers raise QSE-related issues in practical quantum computing projects. Based on an existing taxonomy of question types on Stack Overflow, we first perform a qualitative analysis of the types of QSE-related questions asked on Stack Exchange forums. We then use automated topic modeling to uncover the topics in QSE-related Stack Exchange posts and GitHub issue reports. Our study highlights some particularly challenging areas of QSE that differ from those of traditional software engineering, such as explaining the theory behind quantum computing code, interpreting quantum program outputs, and bridging the knowledge gap between quantum computing and classical computing, as well as their associated opportunities.
http://arxiv.org/abs/2205.03181v1
"2022-05-06T12:51:54Z"
cs.SE
2,022
Seed-Guided Topic Discovery with Out-of-Vocabulary Seeds
Yu Zhang, Yu Meng, Xuan Wang, Sheng Wang, Jiawei Han
Discovering latent topics from text corpora has been studied for decades. Many existing topic models adopt a fully unsupervised setting, and their discovered topics may not cater to users' particular interests due to their inability to leverage user guidance. Although there exist seed-guided topic discovery approaches that leverage user-provided seeds to discover topic-representative terms, they are less concerned with two factors: (1) the existence of out-of-vocabulary seeds and (2) the power of pre-trained language models (PLMs). In this paper, we generalize the task of seed-guided topic discovery to allow out-of-vocabulary seeds. We propose a novel framework, named SeeTopic, in which the general knowledge of PLMs and the local semantics learned from the input corpus mutually benefit each other. Experiments on three real datasets from different domains demonstrate the effectiveness of SeeTopic in terms of topic coherence, accuracy, and diversity.
http://arxiv.org/abs/2205.01845v1
"2022-05-04T01:49:36Z"
cs.CL, cs.IR
2,022
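A simplified sketch of handling an out-of-vocabulary seed with a PLM, as motivated above: embed the seed and rank in-corpus vocabulary terms by cosine similarity (SeeTopic additionally couples this with corpus-local embeddings, which is omitted here).

```python
# Rank in-vocabulary terms against an OOV seed using PLM embeddings;
# the vocabulary, seed, and model are illustrative assumptions.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")
vocab = ["transformer", "gradient", "protein", "genome", "attention"]
seed = "mRNA"  # assume this term never occurs in the corpus

vecs = model.encode(vocab + [seed])
sims = vecs[:-1] @ vecs[-1] / (
    np.linalg.norm(vecs[:-1], axis=1) * np.linalg.norm(vecs[-1]))
for term, s in sorted(zip(vocab, sims), key=lambda t: -t[1]):
    print(f"{term}: {s:.2f}")
```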
A Comparison of Approaches for Imbalanced Classification Problems in the Context of Retrieving Relevant Documents for an Analysis
Sandra Wankmüller
One of the first steps in many text-based social science studies is to retrieve documents that are relevant for the analysis from large corpora of otherwise irrelevant documents. The conventional approach in social science to address this retrieval task is to apply a set of keywords and to consider as relevant those documents that contain at least one of the keywords. But the application of incomplete keyword lists risks drawing biased inferences. More complex and costly methods such as query expansion techniques, topic model-based classification rules, and active as well as passive supervised learning have the potential to separate relevant from irrelevant documents more accurately and thereby reduce the potential size of the bias. Yet, whether applying these more expensive approaches increases retrieval performance compared to keyword lists at all, and if so, by how much, is unclear, as a comparison of these approaches is lacking. This study closes this gap by comparing these methods across three retrieval tasks associated with a data set of German tweets (Linder, 2017), the Social Bias Inference Corpus (SBIC) (Sap et al., 2020), and the Reuters-21578 corpus (Lewis, 1997). Results show that query expansion techniques and topic model-based classification rules in most studied settings tend to decrease rather than increase retrieval performance. Active supervised learning, however, if applied to a sufficiently large set of labeled training instances (e.g. 1,000 documents), reaches a substantially higher retrieval performance than keyword lists.
http://arxiv.org/abs/2205.01600v1
"2022-05-03T16:22:42Z"
cs.IR, cs.CL, stat.AP, stat.ML, I.2.7; H.3.3
2,022
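A compact sketch of the comparison this study performs: an (intentionally incomplete) keyword-list retrieval rule versus a supervised classifier, both scored with F1. The toy documents stand in for the tweet, SBIC, and Reuters corpora.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score

docs = ["refugees crossed the border", "the match ended in a draw",
        "asylum seekers await decision", "stock prices fell sharply",
        "migration policy was debated", "the band released a new album"] * 10
labels = [1, 0, 1, 0, 1, 0] * 10  # 1 = relevant to migration

keywords = {"refugees", "border"}  # deliberately incomplete list
kw_pred = [int(any(k in d for k in keywords)) for d in docs]

X = TfidfVectorizer().fit_transform(docs)
clf_pred = LogisticRegression().fit(X[:30], labels[:30]).predict(X)

print("keyword F1:   ", f1_score(labels, kw_pred))
print("classifier F1:", f1_score(labels, clf_pred))
```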
Open vs Closed-ended questions in attitudinal surveys -- comparing, combining, and interpreting using natural language processing
Vishnu Baburajan, João de Abreu e Silva, Francisco Camara Pereira
To improve the traveling experience, researchers have been analyzing the role of attitudes in travel behavior modeling. Although most researchers use closed-ended surveys, the appropriate method to measure attitudes is debatable. Topic Modeling could significantly reduce the time needed to extract information from open-ended responses and eliminate subjective bias, thereby alleviating analyst concerns. Our research uses Topic Modeling to extract information from open-ended questions and compares its performance with closed-ended responses. Furthermore, some respondents might prefer answering questions using their preferred questionnaire type. We therefore propose a modeling framework that allows respondents to use their preferred questionnaire type to answer the survey and enables analysts to use the modeling frameworks of their choice to predict behavior. We demonstrate this using a dataset collected in the USA that measures the intention to use Autonomous Vehicles for commute trips. Respondents were presented with alternative questionnaire versions (open- and closed-ended). Since our objective was also to compare the performance of alternative questionnaire versions, the survey was designed to eliminate influences resulting from statements, the behavioral framework, and the choice experiment. Results indicate the suitability of using Topic Modeling to extract information from open-ended responses; however, the models estimated using the closed-ended questions perform better. Moreover, the proposed model performs better than currently used models. Our proposed framework will also allow respondents to choose which questionnaire type to answer, which could be particularly beneficial when using voice-based surveys.
http://arxiv.org/abs/2205.01317v1
"2022-05-03T06:01:03Z"
econ.GN, cs.CL, cs.LG, q-fin.EC
2,022
Making sense of violence risk predictions using clinical notes
Pablo Mosteiro, Emil Rijcken, Kalliopi Zervanou, Uzay Kaymak, Floortje Scheepers, Marco Spruit
Violence risk assessment in psychiatric institutions enables interventions to avoid violence incidents. Clinical notes written by practitioners and available in electronic health records (EHR) are valuable resources that are seldom used to their full potential. Previous studies have attempted to assess violence risk in psychiatric patients using such notes, with acceptable performance. However, they do not explain why classification works and how it can be improved. We explore two methods to better understand the quality of a classifier in the context of clinical note analysis: random forests using topic models, and the choice of evaluation metric. These methods allow us to understand both our data and our methodology more profoundly, laying the groundwork for improved models that build upon this understanding. This is particularly important for the generalizability of evaluated classifiers to new data, a trustworthiness problem of great interest due to the increased availability of new data in electronic format.
http://arxiv.org/abs/2204.13976v1
"2022-04-29T10:00:07Z"
cs.LG, cs.CL, cs.CY
2,022
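A rough sketch of the first method mentioned, topic features feeding a random forest, scored with a metric suited to class imbalance (AUC). The notes and settings are invented placeholders, not the clinical data.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score

notes = ["patient was agitated and shouting", "calm during the session",
         "threatened staff this morning", "slept well, no incidents"] * 25
y = [1, 0, 1, 0] * 25  # 1 = violence incident followed

# LDA document-topic proportions become the classifier's features.
X = CountVectorizer().fit_transform(notes)
theta = LatentDirichletAllocation(n_components=5, random_state=0).fit_transform(X)
rf = RandomForestClassifier(random_state=0).fit(theta[:80], y[:80])
print("AUC:", roc_auc_score(y[80:], rf.predict_proba(theta[80:])[:, 1]))
```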
The SCORE normalization, especially for highly heterogeneous network and text data
Zheng Tracy Ke, Jiashun Jin
SCORE was introduced as a spectral approach to network community detection. Since many networks have severe degree heterogeneity, the ordinary spectral clustering (OSC) approach to community detection may perform unsatisfactorily. SCORE alleviates the effect of degree heterogeneity by introducing a new normalization idea in the spectral domain, making OSC more effective. SCORE is easy to use and computationally fast. It adapts easily to new settings and has seen increasing interest in practice. In this paper, we review the basics of SCORE, the adaptation of SCORE to network mixed membership estimation and topic modeling, and the application of SCORE to real data, including two datasets on the publications of statisticians. We also review the theoretical 'ideology' underlying SCORE. We show that in the spectral domain, SCORE converts a simplicial cone to a simplex, providing a simple and direct link between the simplex and network memberships. SCORE attains an exponential rate and a sharp phase transition in community detection, and achieves optimal rates in mixed membership estimation and topic modeling.
http://arxiv.org/abs/2204.11097v1
"2022-04-23T16:05:30Z"
cs.SI, stat.ME
2,022
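A minimal numpy/sklearn sketch of the SCORE normalization: divide the trailing eigenvectors of the adjacency matrix entrywise by the leading one, then cluster the ratio rows. The two-block toy graph with degree heterogeneity is an assumption.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
n, K = 60, 2
z = np.repeat([0, 1], n // 2)                      # true communities
theta = rng.uniform(0.5, 1.5, n)                   # degree heterogeneity
P = np.where(z[:, None] == z[None, :], 0.5, 0.1)
A = rng.binomial(1, np.clip(np.outer(theta, theta) * P, 0, 1))
A = np.triu(A, 1); A = A + A.T                     # symmetric, no self-loops

vals, vecs = np.linalg.eigh(A)
lead = vecs[:, -1]                                 # leading eigenvector
R = vecs[:, -K:-1] / lead[:, None]                 # the SCORE ratios
pred = KMeans(n_clusters=K, n_init=10, random_state=0).fit_predict(R)
print("agreement:", max(np.mean(pred == z), np.mean(pred != z)))
```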
Neural Contrastive Clustering: Fully Unsupervised Bias Reduction for Sentiment Classification
Jared Mowery
Background: Neural networks produce biased classification results due to correlation bias (they learn correlations between their inputs and outputs to classify samples, even when those correlations do not represent cause-and-effect relationships). Objective: This study introduces a fully unsupervised method of mitigating correlation bias, demonstrated with sentiment classification on COVID-19 social media data. Methods: Correlation bias in sentiment classification often arises in conversations about controversial topics. Therefore, this study uses adversarial learning to contrast clusters based on sentiment classification labels, with clusters produced by unsupervised topic modeling. This discourages the neural network from learning topic-related features that produce biased classification results. Results: Compared to a baseline classifier, neural contrastive clustering approximately doubles accuracy on bias-prone sentences for human-labeled COVID-19 social media data, without adversely affecting the classifier's overall F1 score. Despite being a fully unsupervised approach, neural contrastive clustering achieves a larger improvement in accuracy on bias-prone sentences than a supervised masking approach. Conclusions: Neural contrastive clustering reduces correlation bias in sentiment text classification. Further research is needed to explore generalizing this technique to other neural network architectures and application domains.
http://arxiv.org/abs/2204.10467v1
"2022-04-22T02:34:41Z"
cs.CL, cs.LG
2,022
Is Neural Topic Modelling Better than Clustering? An Empirical Study on Clustering with Contextual Embeddings for Topics
Zihan Zhang, Meng Fang, Ling Chen, Mohammad-Reza Namazi-Rad
Recent work incorporates pre-trained word embeddings such as BERT embeddings into Neural Topic Models (NTMs), generating highly coherent topics. However, with high-quality contextualized document representations, do we really need sophisticated neural models to obtain coherent and interpretable topics? In this paper, we conduct thorough experiments showing that directly clustering high-quality sentence embeddings with an appropriate word selection method can generate more coherent and diverse topics than NTMs, while also being simpler and more efficient.
http://arxiv.org/abs/2204.09874v1
"2022-04-21T04:26:51Z"
cs.CL
2,022
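A condensed version of the recipe the paper studies: cluster sentence embeddings with k-means and pick topic words per cluster with a simple class-based weighting. The documents, model name, and scoring choice are toy assumptions.

```python
import numpy as np
from sentence_transformers import SentenceTransformer
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import CountVectorizer

docs = ["the striker scored a late goal", "parliament passed the budget",
        "the keeper saved a penalty", "senators debated the tax bill"] * 5
emb = SentenceTransformer("all-MiniLM-L6-v2").encode(docs)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(emb)

cv = CountVectorizer(stop_words="english")
X = cv.fit_transform(docs).toarray()
vocab = np.array(cv.get_feature_names_out())
for c in range(2):
    tf = X[labels == c].sum(0)        # term counts within cluster c
    score = tf / (1 + X.sum(0))       # penalize globally common words
    print(f"topic {c}:", vocab[np.argsort(-score)[:4]])
```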
On Xing Tian and the Perseverance of Anti-China Sentiment Online
Xinyue Shen, Xinlei He, Michael Backes, Jeremy Blackburn, Savvas Zannettou, Yang Zhang
Sinophobia, anti-Chinese sentiment, has existed on the Web for a long time. The outbreak of COVID-19 and the extended quarantine have further amplified it. However, we lack a quantitative understanding of the causes of Sinophobia and how it evolves over time. In this paper, we conduct a large-scale longitudinal measurement of Sinophobia, between 2016 and 2021, on two mainstream and fringe Web communities. By analyzing 8B posts from Reddit and 206M posts from 4chan's /pol/, we investigate the origins, evolution, and content of Sinophobia. We find that anti-Chinese content may be evoked by political events not directly related to China, e.g., the U.S. withdrawal from the Paris Agreement. During the COVID-19 pandemic, daily usage of Sinophobic slurs increased significantly despite hate-speech ban policies. We also show that the semantic meanings of the words "China" and "Chinese" shifted towards Sinophobic slurs with the rise of COVID-19 and remained so throughout the pandemic period. We further use topic modeling to show that the topics of Sinophobic discussion are diverse and broad. Both Web communities share common Sinophobic topics such as ethnicity, economics and commerce, weapons and military, and foreign relations. However, compared to 4chan's /pol/, more topics related to daily life, including food, games, and stocks, are found on Reddit. Our findings also reveal that topics related to COVID-19 and blaming the Chinese government are more prevalent in the pandemic period. To the best of our knowledge, this paper is the longest quantitative measurement of Sinophobia.
http://arxiv.org/abs/2204.08935v1
"2022-04-19T15:17:28Z"
cs.SI, cs.CY
2,022
Leveraging Natural Language Processing to Uncover Themes in Clinical Notes of Patients Admitted for Heart Failure
Ankita Agarwal, Krishnaprasad Thirunarayan, William L. Romine, Amanuel Alambo, Mia Cajita, Tanvi Banerjee
Heart failure occurs when the heart is not able to pump blood and oxygen to support other organs in the body as it should. Treatments include medications and sometimes hospitalization. Patients with heart failure can have both cardiovascular as well as non-cardiovascular comorbidities. Clinical notes of patients with heart failure can be analyzed to gain insight into the topics discussed in these notes and the major comorbidities in these patients. In this regard, we apply machine learning techniques, such as topic modeling, to identify the major themes found in the clinical notes specific to the procedures performed on 1,200 patients admitted for heart failure at the University of Illinois Hospital and Health Sciences System (UI Health). Topic modeling revealed five hidden themes in these clinical notes, including one related to heart disease comorbidities.
http://arxiv.org/abs/2204.07074v1
"2022-04-14T16:08:13Z"
cs.LG
2,022
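An illustrative gensim LDA run of the kind described above; the toy token lists stand in for the UI Health clinical notes, which are not public.

```python
from gensim import corpora
from gensim.models import LdaModel

notes = [["shortness", "breath", "edema", "diuretic"],
         ["diabetes", "insulin", "glucose", "monitoring"],
         ["catheter", "procedure", "stent", "angiogram"]] * 10
dictionary = corpora.Dictionary(notes)
bow = [dictionary.doc2bow(doc) for doc in notes]
lda = LdaModel(bow, num_topics=5, id2word=dictionary,
               random_state=0, passes=5)
for tid in range(5):
    print(tid, lda.print_topic(tid, topn=4))
```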
Neural Topic Modeling of Psychotherapy Sessions
Baihan Lin, Djallel Bouneffouf, Guillermo Cecchi, Ravi Tejwani
In this work, we compare different neural topic modeling methods in learning the topical propensities of different psychiatric conditions from psychotherapy session transcripts parsed from speech recordings. We also incorporate temporal modeling to put this additional interpretability into action by parsing out topic similarities as a time series at turn-level resolution. We believe this topic modeling framework can offer interpretable insights that help the therapist optimally decide his or her strategy and improve psychotherapy effectiveness.
http://arxiv.org/abs/2204.10189v2
"2022-04-13T04:05:39Z"
cs.CL, cs.AI, cs.HC, cs.LG, q-bio.NC
2,022
Scratch as Social Network: Topic Modeling and Sentiment Analysis in Scratch Projects
Isabella Graßl, Gordon Fraser
Societal matters like the Black Lives Matter (BLM) movement influence software engineering, as the recent debate on replacing certain discriminatory terms such as whitelist/blacklist has shown. Identifying relevant and trending societal matters is important, and often done using social network analysis for traditional social media channels such as Twitter. In this paper we explore whether this type of analysis can also be used for introspection of the software world, by looking at the thriving scene of young Scratch programmers. The educational programming language Scratch is not only used for teaching programming concepts, but also offers a platform for young programmers to express and share their creativity on any topic of relevance. By analyzing titles and project comments in a dataset of 106,032 Scratch projects, we explore which topics are common in the Scratch community, whether socially relevant events are reflected, and what the sentiment of the comments is. It turns out that the diversity of topics within the Scratch projects makes the analysis process challenging. Our results nevertheless show that topics from pop and net culture in particular are present, and even recent societal events such as the Covid-19 pandemic or BLM are to some extent reflected in Scratch. The tone in the comments is mostly positive, with catchy youth language. Hence, despite the challenges, Scratch projects can be studied in the same way as social networks, which opens up new possibilities to improve our understanding of the behavior and motivation of novice programmers.
http://arxiv.org/abs/2204.05902v1
"2022-04-12T15:55:52Z"
cs.SE
2,022
A Joint Learning Approach for Semi-supervised Neural Topic Modeling
Jeffrey Chiu, Rajat Mittal, Neehal Tumma, Abhishek Sharma, Finale Doshi-Velez
Topic models are some of the most popular ways to represent textual data in an interpretable manner. Recently, advances in deep generative models, specifically auto-encoding variational Bayes (AEVB), have led to the introduction of unsupervised neural topic models, which leverage deep generative models as opposed to traditional statistics-based topic models. We extend upon these neural topic models by introducing the Label-Indexed Neural Topic Model (LI-NTM), which is, to the extent of our knowledge, the first effective upstream semi-supervised neural topic model. We find that LI-NTM outperforms existing neural topic models in document reconstruction benchmarks, with the most notable results in low labeled-data regimes and for datasets with informative labels; furthermore, our jointly learned classifier outperforms baseline classifiers in ablation studies.
http://arxiv.org/abs/2204.03208v1
"2022-04-07T04:42:17Z"
cs.IR, cs.CL, cs.LG, stat.ML
2,022
How do media talk about the Covid-19 pandemic? Metaphorical thematic clustering in Italian online newspapers
Lucia Busso, Ottavia Tordini
This contribution presents a study of the figurative language used in Italian online newspapers during the first months of the COVID-19 crisis. In particular, we contrast the topics and metaphorical language used by journalists in the first and second phases of the government response to the pandemic in Spring 2020. The analysis is conducted on a journalistic corpus collected between February 24th and June 3rd, 2020, using both quantitative and qualitative approaches, combining Structural Topic Modelling (Roberts et al. 2016), Conceptual Metaphor Theory (Lakoff & Johnson, 1980), and qualitative corpus-based metaphor analysis (Charteris-Black, 2004). We find a significant shift in topics discussed across Phase 1 and Phase 2, and interesting overlaps in topic-specific metaphors. Using qualitative corpus analysis, we present a more in-depth case study discussing metaphorical collocations of the topics of Economy and Society.
http://arxiv.org/abs/2204.02106v2
"2022-04-05T10:55:33Z"
cs.CL
2,022
Data-driven extraction of the substructure of quark and gluon jets in proton-proton and heavy-ion collisions
Yueyang Ying, Jasmine Brewer, Yi Chen, Yen-Jie Lee
How quark- and gluon-initiated jets are differently modified in the quark-gluon plasma (QGP) produced in heavy-ion collisions is a long-standing question that has not yet received a definitive answer from experiments. In particular, the relative sizes of the modification of quark and gluon jets differ between theoretical models. Therefore, a fully data-driven technique is crucial for an unbiased extraction of the quark and gluon jet spectra and substructure. We perform a proof-of-concept study based on proton-proton and heavy-ion collision events from the \textsc{Pyquen} generator with statistics accessible in Run 4 of the Large Hadron Collider. We use a statistical technique called topic modeling to separate quark and gluon contributions to jet observables. We demonstrate that jet substructure observables, such as the jet shape and jet fragmentation function, can be extracted using this data-driven method. These values can then be used to obtain the modification of quark and gluon jet substructure in the QGP. We also perform the topic separation on smeared input data to demonstrate that the approach is robust to fluctuations arising from a QGP background. These results suggest the potential for an experimental determination of quark and gluon jet spectra and their substructure.
http://arxiv.org/abs/2204.00641v3
"2022-04-01T18:02:10Z"
hep-ph
2,022
Mapping Topics in 100,000 Real-life Moral Dilemmas
Tuan Dung Nguyen, Georgiana Lyall, Alasdair Tran, Minjeong Shin, Nicholas George Carroll, Colin Klein, Lexing Xie
Moral dilemmas play an important role in theorizing both about ethical norms and moral psychology. Yet thought experiments borrowed from the philosophical literature often lack the nuances and complexity of real life. We leverage 100,000 threads -- the largest collection to date -- from Reddit's r/AmItheAsshole to examine the features of everyday moral dilemmas. Combining topic modeling with evaluation from both expert and crowd-sourced workers, we discover 47 finer-grained, meaningful topics and group them into five meta-categories. We show that most dilemmas combine at least two topics, such as family and money. We also observe that the pattern of topic co-occurrence carries interesting information about the structure of everyday moral concerns: for example, the generation of moral dilemmas from nominally neutral topics, and interaction effects in which final verdicts do not line up with the moral concerns in the original stories in any simple way. Our analysis demonstrates the utility of a fine-grained data-driven approach to online moral dilemmas, and provides a valuable resource for researchers aiming to explore the intersection of practical and theoretical ethics.
http://arxiv.org/abs/2203.16762v1
"2022-03-31T02:36:02Z"
cs.SI
2,022
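A small sketch of the topic co-occurrence analysis mentioned above: given per-thread topic assignments, count how often each pair of topics appears in the same thread. The assignments are made-up examples.

```python
import numpy as np
from itertools import combinations

n_topics = 5
threads = [[0, 1], [0, 2], [1, 2], [0, 1, 3], [4], [1, 3]]  # topics per thread

C = np.zeros((n_topics, n_topics), dtype=int)
for topics in threads:
    for i, j in combinations(sorted(set(topics)), 2):
        C[i, j] += 1
        C[j, i] += 1
print(C)  # e.g. C[0, 1] counts threads mixing "family" and "money"
```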
Hate speech, Censorship, and Freedom of Speech: The Changing Policies of Reddit
Elissa Nakajima Wickham, Emily Öhman
This paper examines the shift in focus on content policies and user attitudes on the social media platform Reddit. We do this by focusing on comments from general Reddit users from five posts made by admins (moderators) on updates to Reddit Content Policy. All five concern the nature of what kind of content is allowed to be posted on Reddit, and which measures will be taken against content that violates these policies. We use topic modeling to probe how the general discourse for Redditors has changed around limitations on content, and later, limitations on hate speech, or speech that incites violence against a particular group. We show that there is a clear shift in both the contents and the user attitudes that can be linked to contemporary societal upheaval as well as newly passed laws and regulations, and contribute to the wider discussion on hate speech moderation.
http://arxiv.org/abs/2203.09673v1
"2022-03-18T00:46:58Z"
cs.CL
2,022
Short Text Topic Modeling: Application to tweets about Bitcoin
Hugo Schnoering
Understanding the semantics of a collection of texts is a challenging task. Topic models are probabilistic models that aim at extracting "topics" from a corpus of documents. This task is particularly difficult when the corpus is composed of short texts, such as posts on social networks. Following several previous research papers, we explore in this paper a set of collected tweets about Bitcoin. In this work, we train three topic models and evaluate their output with several scores. We also propose a concrete application of the extracted topics.
http://arxiv.org/abs/2203.11152v1
"2022-03-17T15:53:47Z"
cs.IR, cs.LG
2,022
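A sketch of scoring a short-text topic model with a coherence measure, one plausible instance of the evaluation the abstract describes; the tweet tokens are fabricated placeholders.

```python
from gensim import corpora
from gensim.models import LdaModel
from gensim.models.coherencemodel import CoherenceModel

tweets = [["bitcoin", "price", "surge"], ["btc", "wallet", "hack"],
          ["mining", "energy", "cost"], ["bitcoin", "etf", "approval"]] * 10
d = corpora.Dictionary(tweets)
bow = [d.doc2bow(t) for t in tweets]
lda = LdaModel(bow, num_topics=3, id2word=d, random_state=0)

cm = CoherenceModel(model=lda, texts=tweets, dictionary=d, coherence="c_v")
print("c_v coherence:", cm.get_coherence())
```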
TSM: Measuring the Enticement of Honeyfiles with Natural Language Processing
Roelien C. Timmer, David Liebowitz, Surya Nepal, Salil Kanhere
Honeyfile deployment is a useful breach detection method in cyber deception that can also inform defenders about the intent and interests of intruders and malicious insiders. A key property of a honeyfile, enticement, is the extent to which the file can attract an intruder to interact with it. We introduce a novel metric, Topic Semantic Matching (TSM), which uses topic modelling to represent files in the repository and semantic matching in an embedding vector space to compare honeyfile text and topic words robustly. We also present a honeyfile corpus created with different Natural Language Processing (NLP) methods. Experiments show that TSM is effective in inter-corpus comparisons and is a promising tool to measure the enticement of honeyfiles. TSM is the first measure to use NLP techniques to quantify the enticement of honeyfile content that compares the essential topical content of local contexts to honeyfiles and is robust to paraphrasing.
http://arxiv.org/abs/2203.07580v1
"2022-03-15T01:07:51Z"
cs.CL, cs.CR, cs.LG
2,022
A Novel Perspective to Look At Attention: Bi-level Attention-based Explainable Topic Modeling for News Classification
Dairui Liu, Derek Greene, Ruihai Dong
Many recent deep learning-based solutions have widely adopted attention-based mechanisms in various tasks of the NLP discipline. However, the inherent characteristics of deep learning models and the flexibility of the attention mechanism increase the models' complexity, thus leading to challenges in model explainability. In this paper, to address this challenge, we propose a novel practical framework that utilizes a two-tier attention architecture to decouple the complexity of explanation from the decision-making process. We apply it in the context of a news article classification task. The experiments on two large-scale news corpora demonstrate that the proposed model can achieve competitive performance with many state-of-the-art alternatives, and we illustrate its appropriateness from an explainability perspective.
http://arxiv.org/abs/2203.07216v2
"2022-03-14T15:55:21Z"
cs.CL
2,022
Are Deepfakes Concerning? Analyzing Conversations of Deepfakes on Reddit and Exploring Societal Implications
Dilrukshi Gamage, Piyush Ghasiya, Vamshi Krishna Bonagiri, Mark E Whiting, Kazutoshi Sasahara
Deepfakes are synthetic content generated using advanced deep learning and AI technologies. Technological advances have made it much easier for anyone to create and share deepfakes, which may lead to societal concerns based on how communities engage with them. However, limited research is available on how communities perceive deepfakes. We examined deepfake conversations on Reddit from 2018 to 2021 -- including major topics and their temporal changes as well as the implications of these conversations. Using a mixed-method approach -- topic modeling and qualitative coding -- we found 6,638 posts and 86,425 comments discussing concerns about the believable nature of deepfakes and how platforms moderate them. We also found Reddit conversations to be pro-deepfake, building a community that supports creating and sharing deepfake artifacts and building a marketplace regardless of the consequences. Implications derived from the qualitative codes indicate that deepfake conversations raise societal concerns. We propose that there are implications for Human-Computer Interaction (HCI) to mitigate the harm created by deepfakes.
http://arxiv.org/abs/2203.15044v1
"2022-03-14T09:36:17Z"
cs.HC, cs.SI
2,022
Neural Topic Modeling with Deep Mutual Information Estimation
Kang Xu, Xiaoqiu Lu, Yuan-fang Li, Tongtong Wu, Guilin Qi, Ning Ye, Dong Wang, Zheng Zhou
Emerging neural topic models make topic modeling more easily adaptable and extendable for unsupervised text mining. However, existing neural topic models struggle to retain representative information of the documents within the learnt topic representation. In this paper, we propose a neural topic model that incorporates deep mutual information estimation, i.e., Neural Topic Modeling with Deep Mutual Information Estimation (NTM-DMIE). NTM-DMIE is a neural network method for topic learning that maximizes the mutual information between the input documents and their latent topic representation. To learn robust topic representations, we incorporate a discriminator to distinguish negative examples from positive examples via adversarial learning. Moreover, we use both global and local mutual information to preserve the rich information of the input documents in the topic representation. We evaluate NTM-DMIE on several metrics, including text clustering accuracy, topic uniqueness, and topic coherence. The experimental results show that NTM-DMIE outperforms existing methods on all metrics across the four datasets.
http://arxiv.org/abs/2203.06298v1
"2022-03-12T01:08:10Z"
cs.CL, cs.AI
2,022
BERTopic: Neural topic modeling with a class-based TF-IDF procedure
Maarten Grootendorst
Topic models can be useful tools to discover latent topics in collections of documents. Recent studies have shown the feasibility of approaching topic modeling as a clustering task. We present BERTopic, a topic model that extends this process by extracting coherent topic representations through the development of a class-based variation of TF-IDF. More specifically, BERTopic generates document embeddings with pre-trained transformer-based language models, clusters these embeddings, and finally generates topic representations with the class-based TF-IDF procedure. BERTopic generates coherent topics and remains competitive across a variety of benchmarks involving classical models and those that follow the more recent clustering approach to topic modeling.
http://arxiv.org/abs/2203.05794v1
"2022-03-11T08:35:15Z"
cs.CL
2,022
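A numpy sketch of the class-based TF-IDF weighting at the core of BERTopic: each cluster's concatenated documents are treated as one class, and terms are reweighted by how specific they are to that class (the toy counts are assumptions).

```python
import numpy as np

# tf[c, t] = count of term t in cluster c (toy counts).
tf = np.array([[8.0, 1.0, 0.0],
               [0.0, 2.0, 9.0]])
A = tf.sum() / tf.shape[0]             # average word count per class
idf = np.log(1 + A / tf.sum(axis=0))   # rarer-across-classes terms score higher
ctfidf = tf * idf
print(ctfidf)  # per-cluster term weights; top-ranked terms describe the topic
```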
Automatic Language Identification for Celtic Texts
Olha Dovbnia, Anna Wróblewska
Language identification is an important Natural Language Processing task that has been thoroughly researched in the literature. However, some issues remain open. This work addresses the identification of related low-resource languages, using the Celtic language family as an example. This work's main goals were: (1) to collect a dataset of three Celtic languages; (2) to prepare a method to identify the languages of the Celtic family, i.e. to train a successful classification model; (3) to evaluate the influence of different feature extraction methods and explore the applicability of unsupervised models as a feature extraction technique; (4) to experiment with unsupervised feature extraction on a reduced annotated set. We collected a new dataset including Irish, Scottish, Welsh and English records. We tested supervised models such as SVMs and neural networks with traditional statistical features alongside the output of clustering, autoencoder, and topic modelling methods. The analysis showed that the unsupervised features could serve as a valuable extension to the n-gram feature vectors, leading to an improvement in performance for more entangled classes. The best model achieved a 98% F1 score and 97% MCC. The dense neural network consistently outperformed the SVM model. Low-resource languages are also challenging due to the scarcity of available annotated training data. This work evaluated the performance of the classifiers using unsupervised feature extraction on a reduced labelled dataset to handle this issue. The results revealed that the unsupervised feature vectors are more robust to reduction of the labelled set, and therefore help achieve comparable classification performance with much less labelled data.
http://arxiv.org/abs/2203.04831v1
"2022-03-09T16:04:13Z"
cs.CL, cs.LG
2,022
Enhance Topics Analysis based on Keywords Properties
Antonio Penta
Topic modelling is one of the most prevalent text analysis techniques used to explore and retrieve collections of documents. The evaluation of topic model algorithms is still a very challenging task due to the absence of a gold-standard list of topics to compare against for every corpus. In this work, we present a specificity score based on keyword properties that is able to select the most informative topics. This approach helps the user focus on the most informative topics. In the experiments, we show that we are able to compress state-of-the-art topic modelling results by different factors with an information loss that is much lower than the solution based on the recent coherence score presented in the literature.
http://arxiv.org/abs/2203.04786v1
"2022-03-09T15:10:12Z"
cs.IR, cs.CL, H.5; I.7
2,022
Online Adaptable Bug Localization for Rapidly Evolving Software
Agnieszka Ciborowska, Michael J. Decker, Kostadin Damevski
Bug localization aims to reduce debugging time by recommending program elements that are relevant for a specific bug report. To date, researchers have primarily addressed this problem by applying different information retrieval techniques that leverage similarities between a given bug report and source code. However, with modern software development trending towards increased speed of software change and continuous delivery to the user, the current generation of bug localization techniques, which cannot quickly adapt to the latest version of the software, is becoming inadequate. In this paper, we propose a technique for online bug localization, which enables rapidly updatable bug localization models. More specifically, we propose a streaming bug localization technique, based on an ensemble of online topic models, that is able to adapt to both specific (with explicit code mentions) and more abstract bug reports. By using changesets (diffs) as the input instead of a snapshot of the source code, the model naturally integrates defect prediction and co-change information into its prediction. Initial results indicate that the proposed approach improves bug localization performance for 42 out of 56 evaluation projects, with an average MAP improvement of 5.9%.
http://arxiv.org/abs/2203.03544v1
"2022-03-07T17:47:08Z"
cs.SE
2,022
Representing Mixtures of Word Embeddings with Mixtures of Topic Embeddings
Dongsheng Wang, Dandan Guo, He Zhao, Huangjie Zheng, Korawat Tanwisuth, Bo Chen, Mingyuan Zhou
A topic model is often formulated as a generative model that explains how each word of a document is generated given a set of topics and document-specific topic proportions. It is focused on capturing the word co-occurrences in a document and hence often suffers from poor performance in analyzing short documents. In addition, its parameter estimation often relies on approximate posterior inference that is either not scalable or suffers from large approximation error. This paper introduces a new topic-modeling framework where each document is viewed as a set of word embedding vectors and each topic is modeled as an embedding vector in the same embedding space. Embedding the words and topics in the same vector space, we define a method to measure the semantic difference between the embedding vectors of the words of a document and those of the topics, and optimize the topic embeddings to minimize the expected difference over all documents. Experiments on text analysis demonstrate that the proposed method, which is amenable to mini-batch stochastic gradient descent based optimization and hence scalable to big corpora, provides competitive performance in discovering more coherent and diverse topics and extracting better document representations.
http://arxiv.org/abs/2203.01570v2
"2022-03-03T08:46:23Z"
cs.LG, stat.ME, stat.ML
2,022
Topic Analysis for Text with Side Data
Biyi Fang, Kripa Rajshekhar, Diego Klabjan
Although latent factor models (e.g., matrix factorization) obtain good performance in predictions, they suffer from several problems including cold-start, non-transparency, and suboptimal recommendations. In this paper, we employ text with side data to tackle these limitations. We introduce a hybrid generative probabilistic model that combines a neural network with a latent topic model, which is a four-level hierarchical Bayesian model. In the model, each document is modeled as a finite mixture over an underlying set of topics and each topic is modeled as an infinite mixture over an underlying set of topic probabilities. Furthermore, each topic probability is modeled as a finite mixture over side data. In the context of text, the neural network provides an overview distribution about side data for the corresponding text, which is the prior distribution in LDA to help perform topic grouping. The approach is evaluated on several different datasets, where the model is shown to outperform standard LDA and Dirichlet-multinomial regression (DMR) in terms of topic grouping, model perplexity, classification and comment generation.
http://arxiv.org/abs/2203.00762v1
"2022-03-01T22:06:30Z"
cs.LG, cs.CL, cs.IR
2,022
Mental Health Pandemic during the COVID-19 Outbreak: Social Media as a Window to Public Mental Health
Michelle Bak, Chungyi Chiu, Jessie Chin
Intensified preventive measures during the COVID-19 pandemic, such as lockdown and social distancing, heavily increased the perception of social isolation (i.e., a discrepancy between one's social needs and the provisions of the social environment) among young adults. Social isolation is closely associated with situational loneliness (i.e., loneliness emerging from environmental change), a risk factor for depressive symptoms. Prior research suggested vulnerable young adults are likely to seek support from an online social platform such as Reddit, a perceived comfortable environment for lonely individuals to seek mental health help through anonymous communication with a broad social network. Therefore, this study aims to identify and analyze depression-related dialogues on loneliness subreddits during the COVID-19 outbreak, with the implications on depression-related infoveillance during the pandemic. Our study utilized logistic regression and topic modeling to classify and examine depression-related discussions on loneliness subreddits before and during the pandemic. Our results showed significant increases in the volume of depression-related discussions (i.e., topics related to mental health, social interaction, family, and emotion) where challenges were reported during the pandemic. We also found a switch in dominant topics emerging from depression-related discussions on loneliness subreddits, from dating (prepandemic) to online interaction and community (pandemic), suggesting the increased expressions or need of online social support during the pandemic. The current findings suggest the potential of social media to serve as a window for monitoring public mental health. Our future study will clinically validate the current approach, which has implications for designing a surveillance system during the crisis.
http://arxiv.org/abs/2203.00237v4
"2022-03-01T05:24:00Z"
cs.SI, cs.CY
2,022
Semi-supervised Nonnegative Matrix Factorization for Document Classification
Jamie Haddock, Lara Kassab, Sixian Li, Alona Kryshchenko, Rachel Grotheer, Elena Sizikova, Chuntian Wang, Thomas Merkh, RWMA Madushani, Miju Ahn, Deanna Needell, Kathryn Leonard
We propose new semi-supervised nonnegative matrix factorization (SSNMF) models for document classification and provide motivation for these models as maximum likelihood estimators. The proposed SSNMF models simultaneously provide both a topic model and a model for classification, thereby offering highly interpretable classification results. We derive training methods using multiplicative updates for each new model, and demonstrate the application of these models to single-label and multi-label document classification, although the models are flexible to other supervised learning tasks such as regression. We illustrate the promise of these models and training methods on document classification datasets (e.g., 20 Newsgroups, Reuters).
http://arxiv.org/abs/2203.03551v1
"2022-02-28T19:00:49Z"
cs.IR, cs.LG, cs.NA, math.NA
2,022
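A bare-bones sketch of multiplicative updates for a semi-supervised NMF objective of the form ||X - AS||^2 + lambda * ||Y - BS||^2, in the spirit of the models above; the paper's masked variants and maximum-likelihood derivations are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((50, 40))                  # term-document matrix (toy)
Y = np.eye(2)[rng.integers(0, 2, 40)].T  # one-hot labels, shape (2, 40)
k, lam, eps = 5, 1.0, 1e-9
A, B, S = rng.random((50, k)), rng.random((2, k)), rng.random((k, 40))

for _ in range(200):  # standard multiplicative update steps
    A *= (X @ S.T) / (A @ S @ S.T + eps)
    B *= (Y @ S.T) / (B @ S @ S.T + eps)
    S *= (A.T @ X + lam * (B.T @ Y)) / (A.T @ A @ S + lam * (B.T @ B @ S) + eps)

print("recon err:", np.linalg.norm(X - A @ S))
print("label err:", np.linalg.norm(Y - B @ S))
```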
How Do Mothers and Fathers Talk About Parenting to Different Audiences?: Stereotypes and Audience Effects: An Analysis of r/Daddit, r/Mommit, and r/Parenting Using Topic Modelling
Melody Sepahpour-Fard, Michael Quayle
While major strides have been made towards gender equality in public life, serious inequality remains in the domestic sphere, especially around parenting. The present study analyses discussions about parenting on Reddit to explore audience effects and gender stereotypes. It suggests a novel method to study topical variation in individuals' language when interacting with different audiences. Comments posted in 2020 were collected from three parenting subreddits, described as being for fathers (r/Daddit), mothers (r/Mommit), and all parents (r/Parenting). Users posting on r/Parenting and r/Daddit or on r/Parenting and r/Mommit were assumed to identify as fathers or mothers, respectively, allowing gender comparison. Users' comments on r/Parenting (to a mixed-gender audience) were compared with their comments to single-gender audiences on r/Daddit or r/Mommit using LDA topic modelling. Results showed that the most discussed topic among parents is about education and family advice, a topic mainly discussed in the mixed-gender subreddit and more by fathers than mothers. Regarding the basic needs of children (sleep, food, and medical care), mothers seemed to be more concerned regardless of the audience. In contrast, topics such as birth and pregnancy announcements and physical appearance were more discussed by fathers in the father-centric subreddit. Overall, findings seem to show that mothers are generally more concerned about the practical sides of parenting while fathers' expressed concerns are more contextual: with other fathers, there seems to be a desire to show their fatherhood and be recognized for it while they discuss education with mothers. These results demonstrate that concerns expressed by parents on Reddit are context-sensitive but also consistent with gender stereotypes, potentially reflecting a persistent gendered and unequal division of labour in parenting.
http://arxiv.org/abs/2202.12962v1
"2022-02-25T20:35:35Z"
cs.CY, cs.SI, 68U15, J.4
2,022
A new LDA formulation with covariates
Gilson Shimizu, Rafael Izbicki, Denis Valle
The Latent Dirichlet Allocation (LDA) model is a popular method for creating mixed-membership clusters. Despite having been originally developed for text analysis, LDA has been used for a wide range of other applications. We propose a new formulation for the LDA model which incorporates covariates. In this model, a negative binomial regression is embedded within LDA, enabling straightforward interpretation of the regression coefficients and the analysis of the quantity of cluster-specific elements in each sampling unit (instead of the analysis being focused on modeling the proportion of each cluster, as in Structural Topic Models). We use slice sampling within a Gibbs sampling algorithm to estimate model parameters. We rely on simulations to show how our algorithm is able to successfully retrieve the true parameter values and to make predictions for the abundance matrix using the information given by the covariates. The model is illustrated using real data sets from three different areas: text-mining of Coronavirus articles, analysis of grocery shopping baskets, and ecology of tree species on Barro Colorado Island (Panama). This model allows the identification of mixed-membership clusters in discrete data and provides inference on the relationship between covariates and the abundance of these clusters.
http://arxiv.org/abs/2202.11527v1
"2022-02-18T19:58:24Z"
cs.IR, cs.LG, stat.ME, stat.ML
2,022
Conversational Speech Recognition By Learning Conversation-level Characteristics
Kun Wei, Yike Zhang, Sining Sun, Lei Xie, Long Ma
Conversational automatic speech recognition (ASR) is the task of recognizing conversational speech involving multiple speakers. Unlike sentence-level ASR, conversational ASR can naturally take advantage of specific characteristics of conversation, such as role preference and topical coherence. This paper proposes a conversational ASR model which explicitly learns conversation-level characteristics under the prevalent end-to-end neural framework. The highlights of the proposed model are twofold. First, a latent variational module (LVM) is attached to a conformer-based encoder-decoder ASR backbone to learn role preference and topical coherence. Second, a topic model is specifically adopted to bias the outputs of the decoder towards words in the predicted topics. Experiments on two Mandarin conversational ASR tasks show that the proposed model achieves up to a 12% relative character error rate (CER) reduction.
http://arxiv.org/abs/2202.07855v2
"2022-02-16T04:33:05Z"
cs.SD, cs.CL, eess.AS
2,022
One Configuration to Rule Them All? Towards Hyperparameter Transfer in Topic Models using Multi-Objective Bayesian Optimization
Silvia Terragni, Ismail Harrando, Pasquale Lisena, Raphael Troncy, Elisabetta Fersini
Topic models are statistical methods that extract underlying topics from document collections. When performing topic modeling, a user usually desires topics that are coherent, diverse between each other, and that constitute good document representations for downstream tasks (e.g. document classification). In this paper, we conduct a multi-objective hyperparameter optimization of three well-known topic models. The obtained results reveal the conflicting nature of different objectives and that the training corpus characteristics are crucial for the hyperparameter selection, suggesting that it is possible to transfer the optimal hyperparameter configurations between datasets.
http://arxiv.org/abs/2202.07631v1
"2022-02-15T18:26:02Z"
cs.CL
2,022
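A schematic multi-objective search over topic-model hyperparameters with Optuna, mirroring the coherence-versus-diversity trade-off discussed above. The scoring function is a placeholder to be replaced with real coherence and diversity metrics.

```python
import optuna

def evaluate_topic_model(num_topics, alpha):
    # Placeholder: train the model here and return (coherence, diversity).
    coherence = 1.0 / (1 + abs(num_topics - 20)) - 0.1 * alpha
    diversity = 0.5 + 0.4 * alpha - 0.005 * num_topics
    return coherence, diversity

def objective(trial):
    num_topics = trial.suggest_int("num_topics", 5, 100)
    alpha = trial.suggest_float("alpha", 0.01, 1.0, log=True)
    return evaluate_topic_model(num_topics, alpha)

study = optuna.create_study(directions=["maximize", "maximize"])
study.optimize(objective, n_trials=50)
print(len(study.best_trials), "Pareto-optimal configurations found")
```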
Topic Discovery via Latent Space Clustering of Pretrained Language Model Representations
Yu Meng, Yunyi Zhang, Jiaxin Huang, Yu Zhang, Jiawei Han
Topic models have been the prominent tools for automatic topic discovery from text corpora. Despite their effectiveness, topic models suffer from several limitations including the inability of modeling word ordering information in documents, the difficulty of incorporating external linguistic knowledge, and the lack of both accurate and efficient inference methods for approximating the intractable posterior. Recently, pretrained language models (PLMs) have brought astonishing performance improvements to a wide variety of tasks due to their superior representations of text. Interestingly, there have not been standard approaches to deploy PLMs for topic discovery as better alternatives to topic models. In this paper, we begin by analyzing the challenges of using PLM representations for topic discovery, and then propose a joint latent space learning and clustering framework built upon PLM embeddings. In the latent space, topic-word and document-topic distributions are jointly modeled so that the discovered topics can be interpreted by coherent and distinctive terms and meanwhile serve as meaningful summaries of the documents. Our model effectively leverages the strong representation power and superb linguistic features brought by PLMs for topic discovery, and is conceptually simpler than topic models. On two benchmark datasets in different domains, our model generates significantly more coherent and diverse topics than strong topic models, and offers better topic-wise document representations, based on both automatic and human evaluations.
http://arxiv.org/abs/2202.04582v1
"2022-02-09T17:26:08Z"
cs.CL, cs.IR, cs.LG
2,022
Crime Hot-Spot Modeling via Topic Modeling and Relative Density Estimation
Jonathan Zhou, Sarah Huestis-Mitchell, Xiuyuan Cheng, Yao Xie
We present a method to capture groupings of similar calls and determine their relative spatial distribution from a collection of crime record narratives. We first obtain a topic distribution for each narrative, and then propose a nearest neighbors relative density estimation (kNN-RDE) approach to obtain spatial relative densities per topic. Experiments over a large corpus ($n=475,019$) of narrative documents from the Atlanta Police Department demonstrate the viability of our method in capturing geographic hot-spot trends which call dispatchers do not initially pick up on and which go unnoticed due to conflation with elevated event density in general.
http://arxiv.org/abs/2202.04176v3
"2022-02-08T22:18:25Z"
cs.LG, cs.CL, cs.IR
2,022
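A sketch of a kNN relative density estimate in the spirit of kNN-RDE: compare the local density of topic-specific events to that of all events via kth-nearest-neighbor radii. The coordinates are synthetic, not the Atlanta call data.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
all_pts = rng.normal(0, 1, (1000, 2))          # all call locations
topic_pts = rng.normal([1, 1], 0.3, (200, 2))  # calls assigned to topic t
k, d = 20, 2

def knn_radius(train, query, k):
    nn = NearestNeighbors(n_neighbors=k).fit(train)
    dist, _ = nn.kneighbors(query)
    return dist[:, -1]                          # radius to the kth neighbor

grid = np.array([[0.0, 0.0], [1.0, 1.0]])       # query locations
r_all = knn_radius(all_pts, grid, k)
r_topic = knn_radius(topic_pts, grid, k)
# density ~ k / (n * r^d); the ratio cancels k and the volume constant
rel_density = (len(all_pts) / len(topic_pts)) * (r_all / r_topic) ** d
print(rel_density)  # > 1 where topic-t calls are over-represented
```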
Assessing the alignment between the information needs of developers and the documentation of programming languages: A case study on Rust
Filipe R. Cogo, Xin Xia, Ahmed E. Hassan
Programming language documentation refers to the set of technical documents that provide application developers with a description of the high-level concepts of a language. Such documentation is essential to support application developers in the effective use of a programming language. One of the challenges faced by documenters (i.e., personnel who produce documentation) is to ensure that documentation has relevant information that aligns with the concrete needs of developers. In this paper, we present an automated approach to support documenters in evaluating the differences and similarities between the concrete information needs of developers and the current state of documentation (a problem that we refer to as the topical alignment of programming language documentation). Our approach leverages semi-supervised topic modelling to assess the similarities and differences between the topics of Q&A posts and the official documentation. To demonstrate the application of our approach, we perform a case study on the documentation of Rust. Our results show a relatively high level of topical alignment in the Rust documentation. Still, information about specific topics is scarce in both the Q&A websites and the documentation, particularly for topics related to programming niches such as network, game, and database development. For other topics (e.g., topics related to language features such as structs, patterns and matching, and the foreign function interface), information is only available on Q&A websites while lacking in the official documentation. Finally, we discuss implications for programming language documenters, particularly how to leverage our approach to prioritize topics that should be added to the documentation.
http://arxiv.org/abs/2202.04431v1
"2022-02-08T14:45:16Z"
cs.SE, cs.PL
2,022
Language Models Explain Word Reading Times Better Than Empirical Predictability
Markus J. Hofmann, Steffen Remus, Chris Biemann, Ralph Radach, Lars Kuchinke
Though there is a strong consensus that word length and frequency are the most important single-word features determining visual-orthographic access to the mental lexicon, there is less agreement on how best to capture syntactic and semantic factors. The traditional approach in cognitive reading research assumes that word predictability from sentence context is best captured by cloze completion probability (CCP) derived from human performance data. We review recent research suggesting that probabilistic language models provide deeper explanations for syntactic and semantic effects than CCP. We then compare CCP with three types of language models: (1) symbolic n-gram models, which consolidate syntactic and semantic short-range relations by computing the probability of a word occurring given the two preceding words; (2) topic models, which rely on subsymbolic representations to capture long-range semantic similarity through word co-occurrence counts in documents; and (3) recurrent neural networks (RNNs), whose subsymbolic units are trained to predict the next word given all preceding words in the sentence. To examine lexical retrieval, these models were used to predict single-fixation durations and gaze durations, capturing rapidly successful and standard lexical access, and total viewing time, capturing late semantic integration. Linear item-level analyses showed greater correlations of all language models with all eye-movement measures than CCP. We then examined non-linear relations between the different types of predictability and reading times using generalized additive models. N-gram and RNN probabilities of the present word more consistently predicted reading performance compared with topic models or CCP.
http://arxiv.org/abs/2202.01128v1
"2022-02-02T16:38:43Z"
cs.CL, cs.AI
2,022
Understanding The Robustness of Self-supervised Learning Through Topic Modeling
Zeping Luo, Shiyou Wu, Cindy Weng, Mo Zhou, Rong Ge
Self-supervised learning has significantly improved the performance of many NLP tasks. However, how self-supervised learning discovers useful representations, and why it is better than traditional approaches such as probabilistic models, are still largely unknown. In this paper, we focus on the context of topic modeling and highlight a key advantage of self-supervised learning: when applied to data generated by topic models, self-supervised learning can be oblivious to the specific model, and hence is less susceptible to model misspecification. In particular, we prove that commonly used self-supervised objectives based on reconstruction or contrastive samples can both recover useful posterior information for general topic models. Empirically, we show that the same objectives can perform on par with posterior inference using the correct model, while outperforming posterior inference using misspecified models.
http://arxiv.org/abs/2203.03539v2
"2022-02-02T06:20:59Z"
cs.CL, cs.LG, stat.ML
2,022
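The generative process the analysis above assumes (documents sampled from a ground-truth topic model) can be simulated in a few lines; all sizes below are illustrative:

import numpy as np

rng = np.random.default_rng(0)
V, K, D, N = 50, 3, 100, 20              # vocab, topics, docs, words per doc

beta = rng.dirichlet(np.ones(V), size=K)  # topic-word distributions
docs = []
for _ in range(D):
    theta = rng.dirichlet(np.ones(K))     # per-document topic mixture
    z = rng.choice(K, size=N, p=theta)    # topic assignment per word
    w = [rng.choice(V, p=beta[k]) for k in z]
    docs.append(w)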
Guided Semi-Supervised Non-negative Matrix Factorization on Legal Documents
Pengyu Li, Christine Tseng, Yaxuan Zheng, Joyce A. Chew, Longxiu Huang, Benjamin Jarman, Deanna Needell
Classification and topic modeling are popular techniques in machine learning that extract information from large-scale datasets. By incorporating a priori information such as labels or important features, methods have been developed to perform classification and topic modeling tasks; however, most methods that can perform both do not allow for guidance of the topics or features. In this paper, we propose a method, namely Guided Semi-Supervised Non-negative Matrix Factorization (GSSNMF), that performs both classification and topic modeling by incorporating supervision from both pre-assigned document class labels and user-designed seed words. We test the performance of this method through its application to legal documents provided by the California Innocence Project, a nonprofit that works to free innocent convicted persons and reform the justice system. The results show that our proposed method improves both classification accuracy and topic coherence in comparison to past methods like Semi-Supervised Non-negative Matrix Factorization (SSNMF) and Guided Non-negative Matrix Factorization (Guided NMF).
http://arxiv.org/abs/2201.13324v1
"2022-01-31T16:21:51Z"
cs.LG, cs.IR, stat.ML
2,022
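A minimal sketch of seed-word guidance in NMF, assuming scikit-learn. This is not the GSSNMF objective itself; it merely biases the initial topic-word matrix toward user-chosen seed words, and all documents and seeds are hypothetical:

import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import NMF

texts = ["dna evidence exonerated the defendant",
         "the witness testimony was recanted",
         "dna testing cleared the case"]
seeds = {0: ["dna"], 1: ["witness", "testimony"]}   # topic -> seed words

vec = TfidfVectorizer()
X = vec.fit_transform(texts).toarray()
k = 2

rng = np.random.default_rng(0)
W0 = rng.random((X.shape[0], k))
H0 = rng.random((k, X.shape[1])) * 0.01
for topic, words in seeds.items():
    for w in words:
        H0[topic, vec.vocabulary_[w]] = 1.0         # boost seed words

model = NMF(n_components=k, init="custom", max_iter=500)
W = model.fit_transform(X, W=W0, H=H0)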
Why the Rich Get Richer? On the Balancedness of Random Partition Models
Changwoo J. Lee, Huiyan Sang
Random partition models are widely used in Bayesian methods for various clustering tasks, such as mixture models, topic models, and community detection problems. While the number of clusters induced by random partition models has been studied extensively, another important model property, the balancedness of the partition, has been largely neglected. We formulate a framework to define and theoretically study the balancedness of exchangeable random partition models, by analyzing how a model assigns probabilities to partitions with different levels of balancedness. We demonstrate that the "rich-get-richer" characteristic of many existing popular random partition models is an inevitable consequence of two common assumptions: product-form exchangeability and projectivity. We propose a principled way to compare the balancedness of random partition models, which gives a better understanding of which models work well for different applications and which do not. We also introduce the "rich-get-poorer" random partition models and illustrate their application to entity resolution tasks.
http://arxiv.org/abs/2201.12697v2
"2022-01-30T01:19:41Z"
stat.ML, cs.LG, stat.ME
2,022
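The "rich-get-richer" property is easy to see in one of the simplest exchangeable random partition models, the Chinese restaurant process, where a new item joins an existing cluster with probability proportional to the cluster's current size:

import random

def crp(n, alpha=1.0, seed=0):
    rng = random.Random(seed)
    sizes = []                          # current cluster sizes
    for i in range(n):
        r = rng.uniform(0, i + alpha)
        if r >= i:                      # with probability alpha / (i + alpha)
            sizes.append(1)             # open a new cluster
        else:                           # join cluster j w.p. size_j / (i + alpha)
            j = 0
            while r >= sizes[j]:
                r -= sizes[j]
                j += 1
            sizes[j] += 1
    return sizes

print(sorted(crp(1000), reverse=True)[:5])   # a handful of clusters dominate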
A probabilistic latent variable model for detecting structure in binary data
Christopher Warner, Kiersten Ruda, Friedrich T. Sommer
We introduce a novel, probabilistic binary latent variable model to detect noisy or approximate repeats of patterns in sparse binary data. The model is based on the "Noisy-OR model" (Heckerman, 1990), used previously for disease and topic modelling. The model's capability is demonstrated by extracting structure in recordings from retinal neurons, but it can be widely applied to discover and model latent structure in noisy binary data. In the context of spiking neural data, the task is to "explain" spikes of individual neurons in terms of groups of neurons, "Cell Assemblies" (CAs), that often fire together due to mutual interactions or other causes. The model infers sparse activity in a set of binary latent variables, each describing the activity of a cell assembly. When the latent variable of a cell assembly is active, it reduces the probability that neurons belonging to this assembly are inactive. The conditional probability kernels of the latent components are learned from the data in an expectation-maximization scheme, involving inference of latent states and parameter adjustments to the model. We thoroughly validate the model on synthesized spike trains constructed to statistically resemble recorded retinal responses to white-noise and natural-movie stimuli. We also apply our model to spiking responses recorded in retinal ganglion cells (RGCs) during stimulation with a movie and discuss the structure we find.
http://arxiv.org/abs/2201.11108v1
"2022-01-26T18:37:35Z"
stat.ML, cs.LG, q-bio.NC
2,022
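A sketch of the Noisy-OR likelihood described above: a neuron stays silent only if the baseline leak and every active assembly it belongs to all fail to drive it. The weights and sizes are illustrative:

import numpy as np

rng = np.random.default_rng(0)
n_neurons, n_assemblies = 30, 4
W = rng.uniform(0, 0.9, size=(n_assemblies, n_neurons))  # drive probabilities
leak = 0.01                                              # baseline firing

z = rng.integers(0, 2, size=n_assemblies)                # latent assembly states
# P(neuron silent) = (1 - leak) * prod over active assemblies of (1 - W)
p_silent = (1 - leak) * np.prod((1 - W) ** z[:, None], axis=0)
p_spike = 1 - p_silent
spikes = rng.random(n_neurons) < p_spike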
Proactive Query Expansion for Streaming Data Using External Source
Farah Alshanik, Amy Apon, Yuheng Du, Alexander Herzog, Ilya Safro
Query expansion is the process of reformulating the original query by adding relevant words. Choosing which terms to add in order to improve the performance of query expansion methods, or to enhance the quality of the retrieved results, is an important aspect of any information retrieval system. Adding words that positively impact the quality of the search query, or that are sufficiently informative, plays an important role in gathering relevant documents that cover a certain topic and can improve the efficiency of the information retrieval system. Typically, query expansion techniques are used to add or substitute words in a given search query to collect relevant data. In this paper, we design and implement an automated query-expansion pipeline. We outline several tools using different methods to expand the query. Our methods depend on targeting emergent events in streaming data over time and finding the hidden topics in the targeted documents using probabilistic topic models. We employ Dynamic Eigenvector Centrality to trigger the emergent events, and Latent Dirichlet Allocation to discover the topics. Also, we use an external data source as a secondary stream to supplement the primary stream with relevant words and expand the query using the words from both primary and secondary streams. An experimental study is performed on Twitter data (primary stream) related to the events that happened during protests in Baltimore in 2015. The quality of the retrieved results was measured using quality indicators of the streaming data: tweet count, hashtag count, and hashtag clustering.
http://arxiv.org/abs/2201.06592v1
"2022-01-17T19:11:26Z"
cs.IR
2,022
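An illustrative (and much simplified) query-expansion step in the spirit of the pipeline above: fit LDA on stream documents, find the topic closest to the query, and append that topic's top words. Texts and parameters are hypothetical:

import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

tweets = ["protest march downtown tonight",
          "police line near city hall protest",
          "traffic closed downtown march"]

vec = CountVectorizer()
X = vec.fit_transform(tweets)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)

query = "protest"
q_topic = lda.transform(vec.transform([query])).argmax()
terms = np.array(vec.get_feature_names_out())
top = terms[lda.components_[q_topic].argsort()[-3:]]   # top topic words
expanded = query + " " + " ".join(top)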
Exploring COVID-19 Related Stressors Using Topic Modeling
Yue Tong Leung, Farzad Khalvati
The COVID-19 pandemic has affected the lives of people from different countries for almost two years. The lifestyle changes caused by the pandemic may act as psychosocial stressors for individuals and have the potential to lead to mental health problems. To provide high-quality mental health support, healthcare organizations need to identify COVID-19-specific stressors and track trends in the prevalence of those stressors. This study aims to apply natural language processing (NLP) to social media data to identify the psychosocial stressors during the COVID-19 pandemic, and to analyze trends in the prevalence of stressors at different stages of the pandemic. We obtained a dataset of 9,266 Reddit posts from the subreddit r/COVID19_support, covering 14 February 2020 to 19 July 2021. We used the Latent Dirichlet Allocation (LDA) topic model and lexicon-based methods to identify the topics that were mentioned on the subreddit. Our results are presented as a dashboard visualizing trends in the prevalence of COVID-19-related stressor topics discussed on the social media platform. The results could provide insights into the prevalence of pandemic-related stressors during different stages of COVID-19. The NLP techniques leveraged in this study could also be applied to analyze event-specific stressors in the future.
http://arxiv.org/abs/2202.00476v1
"2022-01-12T20:22:43Z"
cs.CL
2,022
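A sketch of the trend analysis: given per-post topic proportions and timestamps, track monthly topic prevalence. Here the document-topic matrix is random; in practice it would come from the fitted LDA model:

import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n_posts, n_topics = 500, 5
theta = rng.dirichlet(np.ones(n_topics), size=n_posts)  # stand-in for LDA output
dates = pd.date_range("2020-02-14", "2021-07-19", periods=n_posts)

df = pd.DataFrame(theta, columns=[f"topic_{k}" for k in range(n_topics)])
df["month"] = dates.to_period("M")
trend = df.groupby("month").mean()   # monthly prevalence of each stressor topic
print(trend.head())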
Topic Modeling on Podcast Short-Text Metadata
Francisco B. Valero, Marion Baranes, Elena V. Epure
Podcasts have emerged as massively consumed online content, notably due to wider accessibility of production means and scaled distribution through large streaming platforms. Categorization systems and information access technologies typically use topics as the primary way to organize or navigate podcast collections. However, annotating podcasts with topics is still quite problematic because the assigned editorial genres are broad, heterogeneous or misleading, or because of data challenges (e.g. short metadata text, noisy transcripts). Here, we assess the feasibility of discovering relevant topics from podcast metadata, titles and descriptions, using topic modeling techniques for short text. We also propose a new strategy to leverage named entities (NEs), often present in podcast metadata, in a Non-negative Matrix Factorization (NMF) topic modeling framework. Our experiments on two existing datasets from Spotify and iTunes, and on Deezer, a new dataset from an online service providing a catalog of podcasts, show that our proposed document representation, NEiCE, leads to improved topic coherence over the baselines. We release the code for experimental reproducibility of the results.
http://arxiv.org/abs/2201.04419v1
"2022-01-12T11:07:05Z"
cs.IR, cs.CL
2,022
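A rough sketch of privileging named entities in the document representation before NMF. NEiCE itself builds on NE-informed embeddings; this example merely up-weights NE columns in a TF-IDF matrix, with hypothetical titles and entities:

import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import NMF

titles = ["true crime stories podcast",
          "interview with Serena Williams on tennis",
          "tennis tactics explained"]
named_entities = {"serena", "williams"}     # from an external NER step

vec = TfidfVectorizer()
X = vec.fit_transform(titles).toarray()
for term, col in vec.vocabulary_.items():
    if term in named_entities:
        X[:, col] *= 2.0                    # boost NE terms

W = NMF(n_components=2, random_state=0).fit_transform(X)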
Large Scale Analysis of Open MOOC Reviews to Support Learners' Course Selection
Manuel J. Gomez, Mario Calderón, Victor Sánchez, Félix J. García Clemente, José A. Ruipérez-Valiente
The recent pandemic has changed the way we see education. It is not surprising that children and college students are not the only ones using online education. Millions of adults have signed up for online classes and courses in recent years, and MOOC providers, such as Coursera or edX, are reporting millions of new users signing up on their platforms. However, students do face some challenges when choosing courses. Though online review systems are standard among many verticals, no standardized or fully decentralized review systems exist in the MOOC ecosystem. In this vein, we believe that there is an opportunity to leverage available open MOOC reviews in order to build simpler and more transparent reviewing systems, allowing users to identify the best courses on offer. Specifically, in our research we analyze 2.4 million reviews (the largest MOOC review dataset analyzed to date) from five different platforms in order to determine the following: (1) if the numeric ratings provide discriminant information to learners, (2) if NLP-driven sentiment analysis on textual reviews could provide valuable information to learners, (3) if we can leverage NLP-driven topic finding techniques to infer themes that could be important for learners, and (4) if we can use these models to effectively characterize MOOCs based on the open reviews. Results show that numeric ratings are clearly biased (63% of them are 5-star ratings), and the topic modeling reveals some interesting topics related to course advertisements, real-world applicability, and the difficulty of the different courses. We expect our study to shed some light on the area and promote a more transparent approach to online education reviews, which are becoming more and more popular as we enter the post-pandemic era.
http://arxiv.org/abs/2201.06967v1
"2022-01-11T10:24:49Z"
cs.CY, cs.CL
2,022
ZeroBERTo: Leveraging Zero-Shot Text Classification by Topic Modeling
Alexandre Alcoforado, Thomas Palmeira Ferraz, Rodrigo Gerber, Enzo Bustos, André Seidel Oliveira, Bruno Miguel Veloso, Fabio Levy Siqueira, Anna Helena Reali Costa
Traditional text classification approaches often require a good amount of labeled data, which is difficult to obtain, especially in restricted domains or less widespread languages. This lack of labeled data has led to the rise of low-resource methods, which assume low data availability in natural language processing. Among them, zero-shot learning stands out, which consists of learning a classifier without any previously labeled data. The best results reported with this approach use language models such as Transformers, but suffer from two problems: high execution time and inability to handle long texts as input. This paper proposes a new model, ZeroBERTo, which leverages an unsupervised clustering step to obtain a compressed data representation before the classification task. We show that ZeroBERTo has better performance for long inputs and shorter execution time, outperforming XLM-R by about 12% in F1 score on the FolhaUOL dataset. Keywords: Low-Resource NLP, Unlabeled data, Zero-Shot Learning, Topic Modeling, Transformers.
http://arxiv.org/abs/2201.01337v3
"2022-01-04T20:08:17Z"
cs.CL, cs.AI, cs.LG
2,022
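A conceptual cluster-then-label sketch in the spirit of ZeroBERTo. The real model uses Transformer embeddings; TF-IDF vectors and KMeans stand in here to keep the example self-contained, and all texts and label descriptions are hypothetical:

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans
from sklearn.metrics.pairwise import cosine_similarity

docs = ["the team won the match", "parliament passed the bill",
        "a new tax law was voted", "the striker scored twice"]
labels = ["sports team match", "politics parliament law"]  # class descriptions

vec = TfidfVectorizer()
X = vec.fit_transform(docs + labels)
doc_vecs, label_vecs = X[: len(docs)], X[len(docs):]

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(doc_vecs)
# assign each cluster the label whose description is closest to its centroid
sim = cosine_similarity(km.cluster_centers_, label_vecs)
cluster_label = sim.argmax(axis=1)
pred = [labels[cluster_label[c]] for c in km.labels_]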
Thirty Years of Academic Finance
David Ardia, Keven Bluteau, Mohammad Abbas Meghani
We study how the financial literature has evolved in scale, research team composition, and article topicality across 32 finance-focused academic journals from 1992 to 2021. We document that the field has vastly expanded in terms of both outlets and published articles. Teams have become larger, and the proportion of women participating in research has increased significantly. Using the Structural Topic Model, we identify 45 topics discussed in the literature. We investigate the topic coverage of individual journals and can identify highly specialized and generalist outlets, but our analyses reveal that most journals have covered more topics over time, thus becoming more generalist. Finally, we find that articles with at least one woman author focus more on topics related to social and governance aspects of corporate finance. We also find that teams with at least one scholar from a top-tier institution tend to focus more on theoretical aspects of finance.
http://arxiv.org/abs/2112.14902v2
"2021-12-30T03:04:53Z"
q-fin.GN, econ.GN, q-fin.EC, I.2; I.7
2,021
Pretty Princess vs. Successful Leader: Gender Roles in Greeting Card Messages
Jiao Sun, Tongshuang Wu, Yue Jiang, Ronil Awalegaonkar, Xi Victoria Lin, Diyi Yang
People write personalized greeting cards on various occasions. While prior work has studied gender roles in greeting card messages, systematic analysis at scale and tools for raising the awareness of gender stereotyping remain under-investigated. To this end, we collect a large greeting card message corpus covering three different occasions (birthday, Valentine's Day and wedding) from three sources (exemplars from greeting message websites, real-life greetings from social media and language model generated ones). We uncover a wide range of gender stereotypes in this corpus via topic modeling, odds ratio and Word Embedding Association Test (WEAT). We further conduct a survey to understand people's perception of gender roles in messages from this corpus and if gender stereotyping is a concern. The results show that people want to be aware of gender roles in the messages, but remain unconcerned unless the perceived gender roles conflict with the recipient's true personality. In response, we developed GreetA, an interactive visualization and writing assistant tool to visualize fine-grained topics in greeting card messages drafted by the users and the associated gender perception scores, but without suggesting text changes as an intervention.
http://arxiv.org/abs/2112.13980v1
"2021-12-28T03:32:30Z"
cs.HC
2,021
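For reference, the odds-ratio statistic used for stereotype detection measures how much more likely a word is in messages addressed to one group than another; a smoothed toy computation:

from collections import Counter

to_women = "pretty sweet princess lovely pretty".split()   # toy token lists
to_men = "strong leader successful strong buddy".split()

def odds_ratio(word, a, b, smooth=0.5):
    ca, cb = Counter(a), Counter(b)
    odds_a = (ca[word] + smooth) / (len(a) - ca[word] + smooth)
    odds_b = (cb[word] + smooth) / (len(b) - cb[word] + smooth)
    return odds_a / odds_b

print(odds_ratio("pretty", to_women, to_men))  # > 1: skews toward women's cards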
Analyzing Scientific Publications using Domain-Specific Word Embedding and Topic Modelling
Trisha Singhal, Junhua Liu, Lucienne T. M. Blessing, Kwan Hui Lim
The scientific world is changing at a rapid pace, with new technology being developed and new trends being set at an increasing frequency. This paper presents a framework for conducting scientific analyses of academic publications, which is crucial for monitoring research trends and identifying potential innovations. This framework adopts and combines various techniques of Natural Language Processing, such as word embedding and topic modelling. Word embedding is used to capture the semantic meanings of domain-specific words. We propose two novel scientific publication embeddings, PUB-G and PUB-W, which are capable of learning semantic meanings of general as well as domain-specific words in various research fields. Thereafter, topic modelling is used to identify clusters of research topics within these larger research fields. We curated a publication dataset consisting of two conferences and two journals from 1995 to 2020 from two research domains. Experimental results show that our PUB-G and PUB-W embeddings are superior to other baseline embeddings by a margin of ~0.18-1.03 in terms of topic coherence.
http://arxiv.org/abs/2112.12940v1
"2021-12-24T04:25:34Z"
cs.CL, cs.AI
2,021
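A minimal sketch of training domain-specific word embeddings and averaging them into publication representations. PUB-G and PUB-W are learned differently; gensim's word2vec is used here purely as an illustration, on hypothetical tokenized abstracts:

import numpy as np
from gensim.models import Word2Vec

abstracts = [["graph", "neural", "network", "embedding"],
             ["topic", "model", "coherence", "evaluation"],
             ["graph", "embedding", "evaluation"]]

w2v = Word2Vec(sentences=abstracts, vector_size=32, window=3,
               min_count=1, seed=0, workers=1)

# represent each abstract as the mean of its word vectors
doc_vecs = np.array([np.mean([w2v.wv[w] for w in doc], axis=0)
                     for doc in abstracts])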
Improved Topic modeling in Twitter through Community Pooling
Federico Albanese, Esteban Feuerstein
Social networks play a fundamental role in the propagation of information and news. Characterizing the content of the messages becomes vital for different tasks, like breaking news detection, personalized message recommendation, fake user detection, information flow characterization and others. However, Twitter posts are short and often less coherent than other text documents, which makes it challenging to apply text mining algorithms to these datasets efficiently. Tweet-pooling (aggregating tweets into longer documents) has been shown to improve automatic topic decomposition, but the performance achieved in this task varies depending on the pooling method. In this paper, we propose a new pooling scheme for topic modeling in Twitter, which groups tweets whose authors belong to the same community (a group of users who mainly interact with each other but not with other groups) on a user interaction graph. We present a complete evaluation of this methodology against state-of-the-art schemes and previous pooling models in terms of cluster quality, document retrieval performance, and supervised machine learning classification score. Results show that our community pooling method outperformed other methods on the majority of metrics in two heterogeneous datasets, while also reducing the running time. This is useful when dealing with large amounts of noisy and short user-generated social media texts. Overall, our findings contribute to an improved methodology for identifying the latent topics in a Twitter dataset, without the need to modify the basic machinery of a topic decomposition model.
http://arxiv.org/abs/2201.00690v1
"2021-12-20T17:05:32Z"
cs.IR, cs.LG
2,021
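A sketch of community pooling: detect user communities on the interaction graph, then concatenate each community's tweets into one pseudo-document before topic modelling. The graph edges and tweets are hypothetical:

import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

G = nx.Graph([("ana", "bob"), ("bob", "cam"), ("dee", "eli")])  # user mentions
tweets = {"ana": ["rust is fast"], "bob": ["rust compiler rocks"],
          "cam": ["loving rust"], "dee": ["match day!"], "eli": ["great goal"]}

pooled_docs = []
for community in greedy_modularity_communities(G):
    doc = " ".join(t for user in community for t in tweets[user])
    pooled_docs.append(doc)          # one long document per community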
Topic-Aware Encoding for Extractive Summarization
Mingyang Song, Liping Jing
Document summarization provides an instrument for faster understanding of collections of text documents and has several real-life applications. With the growth of online text data, numerous summarization models have been proposed recently. The Sequence-to-Sequence (Seq2Seq) based neural summarization model is the most widely used in the summarization field due to its high performance, because semantic and structural information in the text is adequately considered during encoding. However, existing extractive summarization models pay little attention to using central topic information to assist the generation of summaries, and thus cannot ensure that the generated summary stays on the primary topic. A lengthy document can span several topics, and a single summary cannot do justice to all of them. Therefore, the key to generating a high-quality summary is determining the central topic and building the summary around it, especially for a long document. We propose a topic-aware encoding for document summarization to deal with this issue. This model effectively combines syntactic-level and topic-level information to build a comprehensive sentence representation. Specifically, a neural topic model is added to the neural-based sentence-level representation learning to adequately consider the central topic information for capturing the critical content in the original document. The experimental results on three public datasets show that our model outperforms the state-of-the-art models.
http://arxiv.org/abs/2112.09572v3
"2021-12-17T15:26:37Z"
cs.CL
2,021
TopNet: Learning from Neural Topic Model to Generate Long Stories
Yazheng Yang, Boyuan Pan, Deng Cai, Huan Sun
Long story generation (LSG) is one of the coveted goals in natural language processing. Different from most text generation tasks, LSG requires outputting a long story of rich content based on a much shorter text input, and often suffers from information sparsity. In this paper, we propose \emph{TopNet} to alleviate this problem, leveraging recent advances in neural topic modeling to obtain high-quality skeleton words to complement the short input. In particular, instead of directly generating a story, we first learn to map the short text input to a low-dimensional topic distribution (which is pre-assigned by a topic model). Based on this latent topic distribution, we can use the reconstruction decoder of the topic model to sample a sequence of inter-related words as a skeleton for the story. Experiments on two benchmark datasets show that our proposed framework is highly effective in skeleton word selection and significantly outperforms the state-of-the-art models in both automatic evaluation and human evaluation.
http://arxiv.org/abs/2112.07259v1
"2021-12-14T09:47:53Z"
cs.LG, cs.CL
2,021
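A sketch of the skeleton step: given a topic mixture inferred for the short input, draw inter-related skeleton words from the mixture-implied word distribution. The matrices below are random stand-ins for a trained topic model:

import numpy as np

rng = np.random.default_rng(0)
K, V = 4, 200
beta = rng.dirichlet(np.ones(V), size=K)   # topic-word matrix from a topic model
theta = rng.dirichlet(np.ones(K))          # topic mixture inferred for the input

# word distribution implied by the mixture, then draw skeleton words
p_words = theta @ beta
skeleton = rng.choice(V, size=8, replace=False, p=p_words)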
LDA2Net: Digging under the surface of COVID-19 topics in scientific literature
Giorgia Minello, Carlo R. M. A. Santagiustina, Massimo Warglien
During the COVID-19 pandemic, the scientific literature related to SARS-COV-2 has been growing dramatically, both in terms of the number of publications and of its impact on people's lives. This literature encompasses a varied set of sensitive topics, ranging from vaccination, to protective equipment efficacy, to lockdown policy evaluation. Up to now, hundreds of thousands of papers have been uploaded to online repositories and published in scientific journals. As a result, the development of digital methods that allow an in-depth exploration of this growing literature has become a relevant issue, both to identify the topical trends of COVID-related research and to zoom in on its sub-themes. This work proposes a novel methodology, called LDA2Net, which combines topic modelling and network analysis to investigate topics under their surface. Specifically, LDA2Net exploits the frequencies of pairs of consecutive words to reconstruct the network structure of topics discussed in the CORD-19 corpus. The results suggest that the effectiveness of topic models can be magnified by enriching them with word network representations, and by using the latter to display, analyse, and explore COVID-related topics at different levels of granularity.
http://arxiv.org/abs/2112.01181v2
"2021-12-02T12:55:28Z"
cs.DL, cs.IR, 62P10, 62H22
2,021
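A sketch of the LDA2Net idea: turn consecutive-word (bigram) frequencies from topic-relevant text into a weighted word network whose central nodes expose the topic's internal structure. The text slice is hypothetical:

import networkx as nx
from collections import Counter

topic_text = ("vaccine efficacy trial vaccine dose trial "
              "efficacy data vaccine dose").split()

edges = Counter(zip(topic_text, topic_text[1:]))
G = nx.Graph()
for (w1, w2), weight in edges.items():
    G.add_edge(w1, w2, weight=weight)

# central words in the topic's internal structure
print(nx.degree_centrality(G))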
Topic Analysis of Superconductivity Literature by Semantic Non-negative Matrix Factorization
Valentin Stanev, Erik Skau, Ichiro Takeuchi, Boian S. Alexandrov
We utilize a recently developed topic modeling method called SeNMFk, which extends standard Non-negative Matrix Factorization (NMF) methods by incorporating the semantic structure of the text and adding a robust system for determining the number of topics. With SeNMFk, we were able to extract coherent topics validated by human experts. Of these topics, a few are relatively general and cover broad concepts, while the majority can be precisely mapped to specific scientific effects or measurement techniques. The topics also differ in ubiquity: only three topics are prevalent in almost 40 percent of the abstracts, while each specific topic tends to dominate a small subset of the abstracts. These results demonstrate the ability of SeNMFk to produce a layered and nuanced analysis of large scientific corpora.
http://arxiv.org/abs/2201.00687v1
"2021-12-01T05:51:19Z"
cs.DL, cs.LG
2,021
Bilingual Topic Models for Comparable Corpora
Georgios Balikas, Massih-Reza Amini, Marianne Clausel
Probabilistic topic models like Latent Dirichlet Allocation (LDA) have been previously extended to the bilingual setting. A fundamental modeling assumption in several of these extensions is that the input corpora are in the form of document pairs whose constituent documents share a single topic distribution. However, this assumption is too strong for comparable corpora, whose documents are thematically similar only to some extent, yet such corpora are the most commonly available and the easiest to obtain. In this paper we relax this assumption by allowing the paired documents to have separate, yet bound, topic distributions. We suggest that the strength of the bound should depend on each pair's semantic similarity. To estimate the similarity of documents written in different languages, we use cross-lingual word embeddings learned with shallow neural networks. We evaluate the proposed binding mechanism by extending two topic models: a bilingual adaptation of LDA that assumes bag-of-words inputs, and a model that incorporates part of the text structure in the form of boundaries of semantically coherent segments. To assess the performance of the novel topic models we conduct intrinsic and extrinsic experiments on five bilingual, comparable corpora of English documents paired with French, German, Italian, Spanish and Portuguese documents. The results demonstrate the efficiency of our approach in terms of topic coherence measured by normalized pointwise mutual information, generalization performance measured by perplexity, and Mean Reciprocal Rank in a cross-lingual document retrieval task for each of the language pairs.
http://arxiv.org/abs/2111.15278v1
"2021-11-30T10:53:41Z"
cs.CL
2,021
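A toy sketch of the pair-similarity weight: cosine similarity between the two documents of a pair, each represented as the average of its words' vectors in a shared cross-lingual embedding space (the vectors below stand in for learned embeddings):

import numpy as np

emb = {"dog": np.array([1.0, 0.0]), "chien": np.array([0.9, 0.1]),
       "barks": np.array([0.2, 1.0]), "aboie": np.array([0.1, 0.9])}

def doc_vec(words):
    return np.mean([emb[w] for w in words], axis=0)

en, fr = doc_vec(["dog", "barks"]), doc_vec(["chien", "aboie"])
sim = en @ fr / (np.linalg.norm(en) * np.linalg.norm(fr))
# 'sim' would control how tightly the pair's topic distributions are bound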
Changepoint Analysis of Topic Proportions in Temporal Text Data
Avinandan Bose, Soumendu Sundar Mukherjee
Changepoint analysis deals with unsupervised detection and/or estimation of time-points in time-series data at which the distribution generating the data changes. In this article, we consider \emph{offline} changepoint detection in the context of large-scale textual data. We build a specialised temporal topic model with provisions for changepoints in the distribution of topic proportions. As full likelihood-based inference in this model is computationally intractable, we develop a computationally tractable approximate inference procedure. More specifically, we use sample splitting to estimate topic polytopes first, and then apply a likelihood ratio statistic together with a modified version of the wild binary segmentation algorithm of Fryzlewicz (2014). Our methodology facilitates automated detection of structural changes in large corpora without the need for manual processing by domain experts. As changepoints under our model correspond to changes in topic structure, the estimated changepoints are often highly interpretable as marking the surge or decline in popularity of a fashionable topic. We apply our procedure on two large datasets: (i) a corpus of English literature from the period 1800-1922 (Underwood et al., 2015); (ii) abstracts from the High Energy Physics arXiv repository (Clement et al., 2019). We obtain some historically well-known changepoints and discover some new ones.
http://arxiv.org/abs/2112.00827v1
"2021-11-29T17:20:51Z"
cs.CL, cs.IR, cs.LG, stat.ME, stat.ML
2,021
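A toy sketch of offline changepoint detection on a single topic-proportion series: scan candidate split points and keep the one that most reduces squared error when two means are allowed. The paper's likelihood ratio statistic and wild binary segmentation are considerably more involved:

import numpy as np

rng = np.random.default_rng(0)
series = np.concatenate([rng.normal(0.2, 0.03, 100),
                         rng.normal(0.5, 0.03, 100)])   # topic share over time

def best_split(x):
    n = len(x)
    scores = []
    for t in range(5, n - 5):
        left, right = x[:t], x[t:]
        # reduction in squared error from allowing two means instead of one
        score = ((x - x.mean()) ** 2).sum() \
              - ((left - left.mean()) ** 2).sum() \
              - ((right - right.mean()) ** 2).sum()
        scores.append((score, t))
    return max(scores)[1]

print(best_split(series))   # close to the true changepoint at 100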
Topic Driven Adaptive Network for Cross-Domain Sentiment Classification
Yicheng Zhu, Yiqiao Qiu, Qingyuan Wu, Fu Lee Wang, Yanghui Rao
Cross-domain sentiment classification has been a research hot spot in recent years; it aims to learn a reliable classifier using labeled data from a source domain and evaluate the classifier on a target domain. In this vein, most approaches utilize domain adaptation, which maps data from different domains into a common feature space. To further improve model performance, several methods targeting the mining of domain-specific information have been proposed. However, most of them utilize only a limited part of the domain-specific information. In this study, we first develop a method for extracting domain-specific words based on topic information derived from topic models. Then, we propose a Topic Driven Adaptive Network (TDAN) for cross-domain sentiment classification. The network consists of two sub-networks: a semantics attention network and a domain-specific word attention network, the structures of which are based on transformers. These sub-networks take different forms of input, and their outputs are fused as the feature vector. Experiments validate the effectiveness of our TDAN on sentiment classification across domains. Case studies also indicate that topic models have the potential to add value to cross-domain sentiment classification by discovering interpretable and low-dimensional subspaces.
http://arxiv.org/abs/2111.14094v2
"2021-11-28T10:17:11Z"
cs.CL, cs.AI
2,021
HTMOT : Hierarchical Topic Modelling Over Time
Judicael Poumay, Ashwin Ittoo
Over the years, topic models have provided an efficient way of extracting insights from text. However, while many models have been proposed, none are able to model topic temporality and hierarchy jointly. Modelling time provides more precise topics by separating lexically close but temporally distinct topics, while modelling hierarchy provides a more detailed view of the content of a document corpus. In this study, we therefore propose a novel method, HTMOT, to perform Hierarchical Topic Modelling Over Time. We train HTMOT using a new, more efficient implementation of Gibbs sampling. Specifically, we show that applying time modelling only to deep sub-topics provides a way to extract specific stories or events, while high-level topics extract larger themes in the corpus. Our results show that our training procedure is fast and can extract accurate high-level topics and temporally precise sub-topics. We measured our model's performance using the Word Intrusion task and outlined some limitations of this evaluation method, especially for hierarchical models. As a case study, we focused on the various developments in the space industry in 2020.
http://arxiv.org/abs/2112.03104v2
"2021-11-22T11:02:35Z"
cs.IR, cs.CL, cs.LG
2,021
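For reference, a minimal collapsed Gibbs sampler for plain LDA, the kind of sampler HTMOT extends with hierarchy and time; all sizes and data are toy:

import numpy as np

rng = np.random.default_rng(0)
docs = [[0, 1, 2, 1], [2, 3, 3, 4], [0, 4, 1, 0]]   # word ids per document
V, K, alpha, eta, iters = 5, 2, 0.1, 0.01, 200

ndk = np.zeros((len(docs), K))      # doc-topic counts
nkw = np.zeros((K, V))              # topic-word counts
nk = np.zeros(K)                    # words per topic
z = [[rng.integers(K) for _ in d] for d in docs]
for d, doc in enumerate(docs):
    for i, w in enumerate(doc):
        k = z[d][i]; ndk[d, k] += 1; nkw[k, w] += 1; nk[k] += 1

for _ in range(iters):
    for d, doc in enumerate(docs):
        for i, w in enumerate(doc):
            k = z[d][i]                          # remove current assignment
            ndk[d, k] -= 1; nkw[k, w] -= 1; nk[k] -= 1
            p = (ndk[d] + alpha) * (nkw[:, w] + eta) / (nk + V * eta)
            k = rng.choice(K, p=p / p.sum())     # resample full conditional
            z[d][i] = k
            ndk[d, k] += 1; nkw[k, w] += 1; nk[k] += 1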
Keyword Assisted Embedded Topic Model
Bahareh Harandizadeh, J. Hunter Priniski, Fred Morstatter
By illuminating latent structures in a corpus of text, topic models are an essential tool for categorizing, summarizing, and exploring large collections of documents. Probabilistic topic models, such as latent Dirichlet allocation (LDA), describe how words in documents are generated via a set of latent distributions called topics. Recently, the Embedded Topic Model (ETM) has extended LDA to utilize the semantic information in word embeddings to derive semantically richer topics. As LDA and its extensions are unsupervised models, they are not designed to make efficient use of a user's prior knowledge of the domain. To this end, we propose the Keyword Assisted Embedded Topic Model (KeyETM), which equips ETM with the ability to incorporate user knowledge in the form of informative topic-level priors over the vocabulary. Using both quantitative metrics and human responses on a topic intrusion task, we demonstrate that KeyETM produces better topics than other guided, generative models in the literature.
http://arxiv.org/abs/2112.03101v1
"2021-11-22T07:27:17Z"
cs.IR, cs.CL, cs.LG
2,021
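KeyETM injects keywords into ETM; as a loose analogy only, plain LDA also admits informative topic-level priors over the vocabulary, e.g., via gensim's per-topic eta matrix. A toy sketch with hypothetical texts and seed words:

import numpy as np
from gensim.corpora import Dictionary
from gensim.models import LdaModel

texts = [["court", "judge", "ruling"], ["match", "goal", "team"],
         ["judge", "verdict"], ["team", "season"]]
seed_words = {0: ["judge", "court"], 1: ["team", "goal"]}

dictionary = Dictionary(texts)
corpus = [dictionary.doc2bow(t) for t in texts]

K = 2
eta = np.full((K, len(dictionary)), 0.01)
for k, words in seed_words.items():
    for w in words:
        eta[k, dictionary.token2id[w]] = 1.0     # informative prior mass

lda = LdaModel(corpus, num_topics=K, id2word=dictionary, eta=eta,
               random_state=0)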
Jointly Dynamic Topic Model for Recognition of Lead-lag Relationship in Two Text Corpora
Yandi Zhu, Xiaoling Lu, Jingya Hong, Feifei Wang
Topic evolution modeling has received significant attention in recent decades. Although various topic evolution models have been proposed, most studies focus on a single document corpus. In practice, however, we can easily access data from multiple sources and also observe relationships between them. It is therefore of great interest to recognize the relationship between multiple text corpora and further utilize this relationship to improve topic modeling. In this work, we focus on a special type of relationship between two text corpora, which we define as the "lead-lag relationship". This relationship characterizes the phenomenon that one text corpus influences the topics to be discussed in the other text corpus in the future. To discover the lead-lag relationship, we propose a jointly dynamic topic model and also develop an embedding extension to address the modeling problem of large-scale text corpora. With the recognized lead-lag relationship, the similarities of the two text corpora can be quantified and the quality of topic learning in both corpora can be improved. We numerically investigate the performance of the jointly dynamic topic modeling approach using synthetic data. Finally, we apply the proposed model to two text corpora consisting of statistical papers and graduation theses. Results show that the proposed model can recognize the lead-lag relationship between the two corpora well, and the specific and shared topic patterns in the two corpora are also discovered.
http://arxiv.org/abs/2111.10846v1
"2021-11-21T15:53:15Z"
cs.CL, stat.ME, stat.ML
2,021
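A sketch of lead-lag detection between two corpora's topic-share series: the lag maximizing cross-correlation suggests which corpus leads and by how much. The series below are synthetic:

import numpy as np

rng = np.random.default_rng(0)
lead = np.cumsum(rng.normal(size=120))
lagged = np.roll(lead, 3) + rng.normal(scale=0.1, size=120)  # follows at lag 3

def best_lag(a, b, max_lag=10):
    corrs = {L: np.corrcoef(a[:-L or None], b[L:])[0, 1]
             for L in range(max_lag)}
    return max(corrs, key=corrs.get)

print(best_lag(lead, lagged))   # recovers a lag of about 3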
Weakly Supervised Prototype Topic Model with Discriminative Seed Words: Modifying the Category Prior by Self-exploring Supervised Signals
Bing Wang, Yue Wang, Ximing Li, Jihong Ouyang
Dataless text classification, a paradigm of weakly supervised learning, refers to the task of learning with unlabeled documents and a few predefined representative words of categories, known as seed words. Recent generative dataless methods construct document-specific category priors by using seed word occurrences only; however, such category priors often contain very limited and even noisy supervised signals. To remedy this problem, in this paper we propose a novel formulation of the category prior. First, for each document, we consider its label membership degree by not only counting seed word occurrences, but also using a novel prototype scheme that captures pseudo-nearest-neighboring categories. Second, for each label, we consider its frequency prior knowledge from the corpus, which is also discriminative knowledge for classification. By incorporating the proposed category prior into the previous generative dataless method, we obtain a novel generative dataless method, namely the Weakly Supervised Prototype Topic Model (WSPTM). Experimental results on real-world datasets demonstrate that WSPTM outperforms existing baseline methods.
http://arxiv.org/abs/2112.03009v1
"2021-11-20T00:00:56Z"
cs.CL, cs.AI
2,021