Columns: Title (string), Authors (string), Abstract (string), entry_id (string), Date, Categories (string), year (int32)
Graph2topic: an open-source topic modeling framework based on sentence embedding and community detection
Leihang Zhang, Jiapeng Liu, Qiang Yan
It has been reported that clustering-based topic models, which cluster high-quality sentence embeddings with an appropriate word selection method, can generate better topics than generative probabilistic topic models. However, these approaches suffer from an inability to select appropriate parameters and from incomplete models that overlook the quantitative relations between words and topics and between topics and documents. To solve these issues, we propose graph to topic (G2T), a simple but effective framework for topic modelling. The framework is composed of four modules. First, document representations are acquired using pretrained language models. Second, a semantic graph is constructed according to the similarity between document representations. Third, communities in the document semantic graph are identified, and the relationship between topics and documents is quantified accordingly. Fourth, the word-topic distribution is computed based on a variant of TF-IDF. Automatic evaluation suggests that G2T achieves state-of-the-art performance on both English and Chinese documents of different lengths.
http://arxiv.org/abs/2304.06653v3
"2023-04-13T16:28:07Z"
cs.CL, cs.LG
2023
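To make the four-module pipeline above concrete, here is a minimal sketch of a G2T-style workflow: sentence embeddings, a document similarity graph, community detection as topics, and a class-based TF-IDF variant for topic words. It is an illustration only, not the authors' released implementation; the encoder name, similarity threshold, and the use of Louvain community detection are assumed choices.

```python
# Sketch of a G2T-like pipeline (embed -> similarity graph -> communities ->
# class-based TF-IDF). Assumed libraries: sentence-transformers, networkx,
# scikit-learn, numpy. Model name and threshold are illustrative assumptions.
import numpy as np
import networkx as nx
from networkx.algorithms.community import louvain_communities
from sentence_transformers import SentenceTransformer
from sklearn.metrics.pairwise import cosine_similarity
from sklearn.feature_extraction.text import CountVectorizer

def g2t_like_topics(docs, sim_threshold=0.6, top_n=10):
    # 1) Document representations from a pretrained sentence encoder.
    model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed model choice
    emb = model.encode(docs, normalize_embeddings=True)

    # 2) Semantic graph: connect documents whose cosine similarity is high.
    sims = cosine_similarity(emb)
    g = nx.Graph()
    g.add_nodes_from(range(len(docs)))
    for i in range(len(docs)):
        for j in range(i + 1, len(docs)):
            if sims[i, j] >= sim_threshold:
                g.add_edge(i, j, weight=float(sims[i, j]))

    # 3) Communities in the document graph act as topics.
    communities = louvain_communities(g, weight="weight", seed=0)

    # 4) Word-topic scores via a class-based TF-IDF variant: term frequency
    #    within the community, damped by how many communities use the term.
    vec = CountVectorizer(stop_words="english")
    counts = vec.fit_transform(docs).toarray()
    vocab = np.array(vec.get_feature_names_out())
    topic_term = np.vstack([counts[list(c)].sum(axis=0) for c in communities])
    df = (topic_term > 0).sum(axis=0)                 # topics containing each word
    idf = np.log(1 + len(communities) / (1 + df))
    scores = topic_term * idf

    return [vocab[np.argsort(-scores[k])[:top_n]].tolist()
            for k in range(len(communities))]
```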
How research programs come apart: the example of supersymmetry and the disunity of physics
Lucas Gautheron, Elisa Omodei
According to Peter Galison, the coordination of different ``subcultures'' within a scientific field happens through local exchanges within ``trading zones''. In his view, the workability of such trading zones is not guaranteed, and science is not necessarily driven towards further integration. In this paper, we develop and apply quantitative methods (using semantic, authorship, and citation data from scientific literature), inspired by Galison's framework, to the case of the disunity of high-energy physics. We give prominence to supersymmetry, a concept that has given rise to several major but distinct research programs in the field, such as the formulation of a consistent theory of quantum gravity or the search for new particles. We show that ``theory'' and ``phenomenology'' in high-energy physics should be regarded as distinct theoretical subcultures, between which supersymmetry has helped sustain scientific ``trades''. However, as we demonstrate using a topic model, the phenomenological component of supersymmetry research has lost traction and the ability of supersymmetry to tie these subcultures together is now compromised. Our work supports the idea that even fields with an initially strong sentiment of unity may eventually generate diverging research programs and demonstrates the fruitfulness of the notion of trading zones for informing quantitative approaches to scientific pluralism.
http://arxiv.org/abs/2304.03673v1
"2023-04-07T14:46:02Z"
physics.hist-ph
2023
InfoCTM: A Mutual Information Maximization Perspective of Cross-Lingual Topic Modeling
Xiaobao Wu, Xinshuai Dong, Thong Nguyen, Chaoqun Liu, Liangming Pan, Anh Tuan Luu
Cross-lingual topic models have been prevalent for cross-lingual text analysis by revealing aligned latent topics. However, most existing methods suffer from producing repetitive topics, which hinder further analysis, and from performance decline caused by low-coverage dictionaries. In this paper, we propose Cross-lingual Topic Modeling with Mutual Information (InfoCTM). Instead of the direct alignment in previous work, we propose a mutual information-based topic alignment method. This works as a regularization to properly align topics and prevent degenerate topic representations of words, which mitigates the repetitive topic issue. To address the low-coverage dictionary issue, we further propose a cross-lingual vocabulary linking method that finds more linked cross-lingual words for topic alignment beyond the translations of a given dictionary. Extensive experiments on English, Chinese, and Japanese datasets demonstrate that our method outperforms state-of-the-art baselines, producing more coherent, diverse, and well-aligned topics and showing better transferability for cross-lingual classification tasks.
http://arxiv.org/abs/2304.03544v2
"2023-04-07T08:49:43Z"
cs.CL
2023
A User-Centered, Interactive, Human-in-the-Loop Topic Modelling System
Zheng Fang, Lama Alqazlan, Du Liu, Yulan He, Rob Procter
Human-in-the-loop topic modelling incorporates users' knowledge into the modelling process, enabling them to refine the model iteratively. Recent research has demonstrated the value of user feedback, but there are still issues to consider, such as the difficulty in tracking changes, comparing different models, and the lack of evaluation based on real-world examples of use. We developed a novel, interactive human-in-the-loop topic modelling system with a user-friendly interface that enables users to compare and record every step they take, and a novel topic word suggestion feature to help users provide feedback that is faithful to the ground truth. Our system also supports not only what traditional topic models can do, i.e., learning the topics from the whole corpus, but also targeted topic modelling, i.e., learning topics for specific aspects of the corpus. In this article, we provide an overview of the system and present the results of a series of user studies designed to assess the value of the system in progressively more realistic applications of topic modelling.
http://arxiv.org/abs/2304.01774v1
"2023-04-04T13:05:10Z"
cs.CL, cs.HC, cs.LG
2023
MMT: A Multilingual and Multi-Topic Indian Social Media Dataset
Dwip Dalal, Vivek Srivastava, Mayank Singh
Social media plays a significant role in cross-cultural communication. A vast amount of this occurs in code-mixed and multilingual form, posing a significant challenge to Natural Language Processing (NLP) tools for processing such information, like language identification, topic modeling, and named-entity recognition. To address this, we introduce a large-scale multilingual and multi-topic dataset (MMT) collected from Twitter (1.7 million Tweets), encompassing 13 coarse-grained and 63 fine-grained topics in the Indian context. We further annotate a subset of 5,346 tweets from the MMT dataset with various Indian languages and their code-mixed counterparts. Also, we demonstrate that the currently existing tools fail to capture the linguistic diversity in MMT on two downstream tasks, i.e., topic modeling and language identification. To facilitate future research, we will make the anonymized and annotated dataset available in the public domain.
http://arxiv.org/abs/2304.00634v1
"2023-04-02T21:39:00Z"
cs.CL, cs.LG, cs.SI
2023
What Does the Indian Parliament Discuss? An Exploratory Analysis of the Question Hour in the Lok Sabha
Suman Adhya, Debarshi Kumar Sanyal
The TCPD-IPD dataset is a collection of questions and answers discussed in the Lower House of the Parliament of India during the Question Hour between 1999 and 2019. Although it is difficult to analyze such a huge collection manually, modern text analysis tools can provide a powerful means to navigate it. In this paper, we perform an exploratory analysis of the dataset. In particular, we present insightful corpus-level statistics and a detailed analysis of three subsets of the dataset. In the latter analysis, the focus is on understanding the temporal evolution of topics using a dynamic topic model. We observe that the parliamentary conversation indeed mirrors the political and socio-economic tensions of each period.
http://arxiv.org/abs/2304.00235v1
"2023-04-01T05:43:22Z"
cs.CL
2023
Topics in the Haystack: Extracting and Evaluating Topics beyond Coherence
Anton Thielmann, Quentin Seifert, Arik Reuter, Elisabeth Bergherr, Benjamin Säfken
Extracting and identifying latent topics in large text corpora has gained increasing importance in Natural Language Processing (NLP). Most models, whether probabilistic models similar to Latent Dirichlet Allocation (LDA) or neural topic models, follow the same underlying approach of topic interpretability and topic extraction. We propose a method that incorporates a deeper understanding of both sentence and document themes, and goes beyond simply analyzing word frequencies in the data. This allows our model to detect latent topics that may include uncommon words or neologisms, as well as words not present in the documents themselves. Additionally, we propose several new evaluation metrics based on intruder words and similarity measures in the semantic space. We present correlation coefficients with human identification of intruder words and achieve near-human level results at the word-intrusion task. We demonstrate the competitive performance of our method with a large benchmark study, and achieve superior results compared to state-of-the-art topic modeling and document clustering models.
http://arxiv.org/abs/2303.17324v1
"2023-03-30T12:24:25Z"
cs.CL
2023
Do Neural Topic Models Really Need Dropout? Analysis of the Effect of Dropout in Topic Modeling
Suman Adhya, Avishek Lahiri, Debarshi Kumar Sanyal
Dropout is a widely used regularization trick to resolve the overfitting issue in large feedforward neural networks that are trained on a small dataset and would otherwise perform poorly on the held-out test set. Although the effectiveness of this regularization trick has been extensively studied for convolutional neural networks, there is a lack of such analysis for unsupervised models and, in particular, VAE-based neural topic models. In this paper, we have analyzed the consequences of dropout in the encoder as well as in the decoder of the VAE architecture in three widely used neural topic models, namely, the contextualized topic model (CTM), ProdLDA, and the embedded topic model (ETM), using four publicly available datasets. We characterize the dropout effect on these models in terms of the quality and predictive performance of the generated topics.
http://arxiv.org/abs/2303.15973v1
"2023-03-28T13:45:39Z"
cs.CL, cs.LG
2023
Improving Neural Topic Models with Wasserstein Knowledge Distillation
Suman Adhya, Debarshi Kumar Sanyal
Topic modeling is a dominant method for exploring document collections on the web and in digital libraries. Recent approaches to topic modeling use pretrained contextualized language models and variational autoencoders. However, large neural topic models have a considerable memory footprint. In this paper, we propose a knowledge distillation framework to compress a contextualized topic model without loss in topic quality. In particular, the proposed distillation objective is to minimize the cross-entropy of the soft labels produced by the teacher and the student models, as well as to minimize the squared 2-Wasserstein distance between the latent distributions learned by the two models. Experiments on two publicly available datasets show that the student trained with knowledge distillation achieves topic coherence much higher than that of the original student model, and even surpasses the teacher while containing far fewer parameters than the teacher's. The distilled model also outperforms several other competitive topic models on topic coherence.
http://arxiv.org/abs/2303.15350v1
"2023-03-27T16:07:44Z"
cs.CL, cs.IR, cs.LG
2023
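The distillation objective described above combines a soft-label cross-entropy between teacher and student with the squared 2-Wasserstein distance between their latent distributions. Below is a hedged PyTorch sketch under the assumption of diagonal-Gaussian latent posteriors, for which the 2-Wasserstein distance has a closed form; the variable names, temperature, and weighting term are illustrative and not taken from the paper's code.

```python
# Sketch of a Wasserstein knowledge-distillation loss: soft-label cross-entropy
# plus the squared 2-Wasserstein distance between diagonal-Gaussian latents.
import torch
import torch.nn.functional as F

def w2_squared_diag_gaussians(mu_t, logvar_t, mu_s, logvar_s):
    # Closed form for diagonal Gaussians:
    # W2^2 = ||mu_t - mu_s||^2 + ||sigma_t - sigma_s||^2
    sigma_t = torch.exp(0.5 * logvar_t)
    sigma_s = torch.exp(0.5 * logvar_s)
    return ((mu_t - mu_s) ** 2).sum(dim=-1) + ((sigma_t - sigma_s) ** 2).sum(dim=-1)

def distillation_loss(teacher_logits, student_logits,
                      mu_t, logvar_t, mu_s, logvar_s,
                      temperature=2.0, beta=1.0):
    # Soft-label cross-entropy: the student matches the teacher's softened output.
    soft_targets = F.softmax(teacher_logits / temperature, dim=-1)
    log_probs = F.log_softmax(student_logits / temperature, dim=-1)
    ce = -(soft_targets * log_probs).sum(dim=-1)

    # Latent-space term: squared 2-Wasserstein distance between the posteriors.
    w2 = w2_squared_diag_gaussians(mu_t, logvar_t, mu_s, logvar_s)
    return (ce + beta * w2).mean()
```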
Improving Contextualized Topic Models with Negative Sampling
Suman Adhya, Avishek Lahiri, Debarshi Kumar Sanyal, Partha Pratim Das
Topic modeling has emerged as a dominant method for exploring large document collections. Recent approaches to topic modeling use large contextualized language models and variational autoencoders. In this paper, we propose a negative sampling mechanism for a contextualized topic model to improve the quality of the generated topics. In particular, during model training, we perturb the generated document-topic vector and use a triplet loss to encourage the document reconstructed from the correct document-topic vector to be similar to the input document and dissimilar to the document reconstructed from the perturbed vector. Experiments for different topic counts on three publicly available benchmark datasets show that in most cases, our approach leads to an increase in topic coherence over that of the baselines. Our model also achieves very high topic diversity.
http://arxiv.org/abs/2303.14951v1
"2023-03-27T07:28:46Z"
cs.CL, cs.LG
2023
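As a rough illustration of the negative-sampling mechanism described above, the sketch below perturbs the inferred document-topic vector and applies a triplet loss between the input document and the two reconstructions. The specific perturbation (permuting topic proportions) and the margin value are assumptions for the example, not necessarily the paper's exact choices.

```python
# Sketch of triplet-loss negative sampling for a contextualized topic model.
import torch
import torch.nn.functional as F

def negative_sampling_triplet_loss(decoder, bow, theta, margin=1.0):
    """decoder: maps a document-topic vector to a bag-of-words reconstruction.
    bow: input documents (batch, vocab). theta: topic proportions (batch, topics)."""
    # Negative sample: permute the topic proportions of each document.
    perm = torch.randperm(theta.size(1), device=theta.device)
    theta_neg = theta[:, perm]

    recon_pos = decoder(theta)       # reconstruction from the correct vector
    recon_neg = decoder(theta_neg)   # reconstruction from the perturbed vector

    # Distances between the input document and the two reconstructions.
    d_pos = F.pairwise_distance(recon_pos, bow)
    d_neg = F.pairwise_distance(recon_neg, bow)

    # Triplet objective: the correct reconstruction should be closer to the
    # input than the perturbed one, by at least `margin`.
    return F.relu(d_pos - d_neg + margin).mean()
```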
No more Reviewer #2: Subverting Automatic Paper-Reviewer Assignment using Adversarial Learning
Thorsten Eisenhofer, Erwin Quiring, Jonas Möller, Doreen Riepel, Thorsten Holz, Konrad Rieck
The number of papers submitted to academic conferences is steadily rising in many scientific disciplines. To handle this growth, systems for automatic paper-reviewer assignments are increasingly used during the reviewing process. These systems use statistical topic models to characterize the content of submissions and automate the assignment to reviewers. In this paper, we show that this automation can be manipulated using adversarial learning. We propose an attack that adapts a given paper so that it misleads the assignment and selects its own reviewers. Our attack is based on a novel optimization strategy that alternates between the feature space and problem space to realize unobtrusive changes to the paper. To evaluate the feasibility of our attack, we simulate the paper-reviewer assignment of an actual security conference (IEEE S&P) with 165 reviewers on the program committee. Our results show that we can successfully select and remove reviewers without access to the assignment system. Moreover, we demonstrate that the manipulated papers remain plausible and are often indistinguishable from benign submissions.
http://arxiv.org/abs/2303.14443v1
"2023-03-25T11:34:27Z"
cs.CR, cs.LG
2023
How People Respond to the COVID-19 Pandemic on Twitter: A Comparative Analysis of Emotional Expressions from US and India
Brandon Siyuan Loh, Raj Kumar Gupta, Ajay Vishwanath, Andrew Ortony, Yinping Yang
The COVID-19 pandemic has claimed millions of lives worldwide and elicited heightened emotions. This study examines the expression of various emotions pertaining to COVID-19 in the United States and India as manifested in over 54 million tweets, covering the fifteen-month period from February 2020 through April 2021, a period which includes the beginnings of the huge and disastrous increase in COVID-19 cases that started to ravage India in March 2021. Employing pre-trained emotion analysis and topic modeling algorithms, four distinct types of emotions (fear, anger, happiness, and sadness) and their time- and location-associated variations were examined. Results revealed significant country differences and temporal changes in the relative proportions of fear, anger, and happiness, with fear declining and anger and happiness fluctuating in 2020 until new situations over the first four months of 2021 reversed the trends. Detected differences are discussed briefly in terms of the latent topics revealed and through the lens of appraisal theories of emotions, and the implications of the findings are discussed.
http://arxiv.org/abs/2303.10560v1
"2023-03-19T04:05:10Z"
cs.CL
2023
Authority without Care: Moral Values behind the Mask Mandate Response
Yelena Mejova, Kyriaki Kalimeri, Gianmarco De Francisci Morales
Face masks are one of the cheapest and most effective non-pharmaceutical interventions available against airborne diseases such as COVID-19. Unfortunately, they have been met with resistance by a substantial fraction of the populace, especially in the U.S. In this study, we uncover the latent moral values that underpin the response to the mask mandate, and paint them against the country's political backdrop. We monitor the discussion about masks on Twitter, which involves almost 600k users in a time span of 7 months. By using a combination of graph mining, natural language processing, topic modeling, content analysis, and time series analysis, we characterize the responses to the mask mandate of both those in favor and against them. We base our analysis on the theoretical frameworks of Moral Foundation Theory and Hofstede's cultural dimensions. Our results show that, while the anti-mask stance is associated with a conservative political leaning, the moral values expressed by its adherents diverge from the ones typically used by conservatives. In particular, the expected emphasis on the values of authority and purity is accompanied by an atypical dearth of in-group loyalty. We find that after the mandate, both pro- and anti-mask sides decrease their emphasis on care about others, and increase their attention on authority and fairness, further politicizing the issue. In addition, the mask mandate reverses the expression of Individualism-Collectivism between the two sides, with an increase of individualism in the anti-mask narrative, and a decrease in the pro-mask one. We argue that monitoring the dynamics of moral positioning is crucial for designing effective public health campaigns that are sensitive to the underlying values of the target audience.
http://arxiv.org/abs/2303.12014v2
"2023-03-16T17:08:16Z"
cs.SI, cs.CY
2023
Generating Query Focused Summaries without Fine-tuning the Transformer-based Pre-trained Models
Deen Abdullah, Shamanth Nayak, Gandharv Suri, Yllias Chali
Fine-tuning Natural Language Processing (NLP) models for each new data set requires additional computational time and is associated with an increased carbon footprint and cost. However, fine-tuning helps the pre-trained models adapt to the latest data sets; what if we avoid the fine-tuning steps and attempt to generate summaries using just the pre-trained models to reduce computational time and cost? In this paper, we tried to omit the fine-tuning steps and investigate whether a Maximal Marginal Relevance (MMR)-based approach can help the pre-trained models to obtain query-focused summaries directly from a new data set that was not used to pre-train the models. First, we used topic modelling on the Wikipedia Current Events Portal (WCEP) and Debatepedia datasets to generate queries for summarization tasks. Then, using MMR, we ranked the sentences of the documents according to the queries. Next, we passed the ranked sentences to seven transformer-based pre-trained models to perform the summarization tasks. Finally, we used the MMR approach again to select the query-relevant sentences from the generated summaries of the individual pre-trained models and constructed the final summary. As indicated by the experimental results, our MMR-based approach successfully ranked and selected the most relevant sentences as summaries and showed better performance than the individual pre-trained models.
http://arxiv.org/abs/2303.06230v1
"2023-03-10T22:40:15Z"
cs.CL
2023
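The ranking step in the pipeline above relies on Maximal Marginal Relevance, which trades off query relevance against redundancy with already-selected sentences. The following is a small, generic MMR sketch using TF-IDF vectors; the vectorizer and the lambda value are illustrative assumptions rather than the paper's settings.

```python
# Generic MMR sentence selection: relevance to the query minus redundancy
# with already-selected sentences.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def mmr_rank(query, sentences, k=5, lam=0.7):
    vec = TfidfVectorizer().fit([query] + sentences)
    q = vec.transform([query])
    s = vec.transform(sentences)
    rel = cosine_similarity(s, q).ravel()   # relevance of each sentence to the query
    red = cosine_similarity(s)              # sentence-sentence similarity matrix

    selected, candidates = [], list(range(len(sentences)))
    while candidates and len(selected) < k:
        if selected:
            max_red = red[:, selected].max(axis=1)  # redundancy w.r.t. chosen sentences
        else:
            max_red = np.zeros(len(sentences))
        mmr = lam * rel - (1 - lam) * max_red
        best = max(candidates, key=lambda i: mmr[i])
        selected.append(best)
        candidates.remove(best)
    return [sentences[i] for i in selected]
```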
Topic Modeling Based on Two-Step Flow Theory: Application to Tweets about Bitcoin
Aos Mulahuwaish, Matthew Loucks, Basheer Qolomany, Ala Al-Fuqaha
Digital cryptocurrencies such as Bitcoin have exploded in recent years in both popularity and value. Owing to their novelty, cryptocurrencies tend to be both volatile and highly speculative. The capricious nature of these coins is facilitated by social media networks such as Twitter. However, not everyone's opinion matters equally, with most posts garnering little to no attention. Additionally, the majority of tweets are retweeted from popular posts. We must determine whose opinion matters and the difference between influential and non-influential users. This study separates these two groups and analyzes the differences between them. It uses the Hypertext-Induced Topic Selection (HITS) algorithm, which segregates the dataset based on influence. Topic modeling is then employed to uncover differences in each group's speech types and which group may best represent the entire community. We found differences in language and interest between these two groups regarding Bitcoin, and that the opinion leaders of Twitter are not aligned with the majority of users. Out of a total of 355,139 users, there were 2,559 opinion leaders (0.72% of users) who accounted for 80% of the authority, while the remaining majority (99.28% of users) accounted for the other 20%.
http://arxiv.org/abs/2303.02032v1
"2023-03-03T15:51:05Z"
cs.SI, cs.AI, cs.CY
2023
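A minimal sketch of how HITS authority scores can be used to separate opinion leaders from the rest of a retweet network, in the spirit of the study above. The input format (pairs of retweeter and original author) and the 80% cumulative-authority cut-off used here are assumptions for illustration, not the paper's exact procedure.

```python
# Split users into opinion leaders and followers by HITS authority scores.
import networkx as nx

def split_opinion_leaders(retweet_edges, authority_share=0.80):
    # Directed edge from the retweeting user to the author being retweeted.
    g = nx.DiGraph()
    g.add_edges_from(retweet_edges)

    hubs, authorities = nx.hits(g, max_iter=1000, normalized=True)

    # Rank users by authority and keep the smallest set covering the target share.
    ranked = sorted(authorities.items(), key=lambda kv: kv[1], reverse=True)
    leaders, cumulative = [], 0.0
    for user, score in ranked:
        leaders.append(user)
        cumulative += score
        if cumulative >= authority_share:
            break
    leader_set = set(leaders)
    followers = [u for u in g.nodes if u not in leader_set]
    return leaders, followers
```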
Deep learning for COVID-19 topic modelling via Twitter: Alpha, Delta and Omicron
Janhavi Lande, Arti Pillay, Rohitash Chandra
Topic modelling with innovative deep learning methods has gained interest for a wide range of applications, including COVID-19. Topic modelling can provide psychological, social, and cultural insights for understanding human behaviour in extreme events such as the COVID-19 pandemic. In this paper, we use prominent deep learning-based language models for COVID-19 topic modelling, taking into account data from the emergence of the pandemic (Alpha) to the Omicron variant. We apply topic modelling to review public behaviour across the first, second, and third waves based on a Twitter dataset from India. Our results show that the topics extracted for the subsequent waves had certain overlapping themes, such as governance, vaccination, and pandemic management, while novel issues arose in the political, social, and economic situation during the COVID-19 pandemic. We also found a strong qualitative correlation between the major topics and the news media prevalent at the respective time periods. Hence, our framework has the potential to capture major issues arising during different phases of the COVID-19 pandemic and can be extended to other countries and regions.
http://arxiv.org/abs/2303.00135v1
"2023-02-28T23:40:41Z"
cs.LG, cs.CL, cs.IR
2023
Neural Nonnegative Matrix Factorization for Hierarchical Multilayer Topic Modeling
Tyler Will, Runyu Zhang, Eli Sadovnik, Mengdi Gao, Joshua Vendrow, Jamie Haddock, Denali Molitor, Deanna Needell
We introduce a new method based on nonnegative matrix factorization, Neural NMF, for detecting latent hierarchical structure in data. Datasets with hierarchical structure arise in a wide variety of fields, such as document classification, image processing, and bioinformatics. Neural NMF recursively applies NMF in layers to discover overarching topics encompassing the lower-level features. We derive a backpropagation optimization scheme that allows us to frame hierarchical NMF as a neural network. We test Neural NMF on a synthetic hierarchical dataset, the 20 Newsgroups dataset, and the MyLymeData symptoms dataset. Numerical results demonstrate that Neural NMF outperforms other hierarchical NMF methods on these data sets and offers better learned hierarchical structure and interpretability of topics.
http://arxiv.org/abs/2303.00058v1
"2023-02-28T20:00:16Z"
cs.LG, stat.ML
2023
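To illustrate the layered idea behind hierarchical NMF, here is a plain multilayer NMF sketch that recursively factorizes the document-topic matrix to obtain coarser super-topics. It does not implement the paper's Neural NMF backpropagation scheme; layer ranks and solver settings are illustrative choices.

```python
# Plain multilayer NMF: recursively factorize the assignment matrix so that
# deeper layers group fine-grained topics into coarser super-topics.
import numpy as np
from sklearn.decomposition import NMF

def multilayer_nmf(X, layer_ranks=(20, 5), seed=0):
    """X: nonnegative document-term matrix. Returns per-layer (W, H) factors."""
    factors = []
    current = X
    for rank in layer_ranks:
        model = NMF(n_components=rank, init="nndsvda",
                    random_state=seed, max_iter=500)
        W = model.fit_transform(current)   # documents (or lower topics) x rank
        H = model.components_              # rank x features of the current layer
        factors.append((W, H))
        current = W                        # factorize the assignment matrix again
    return factors
```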
HADES: Homologous Automated Document Exploration and Summarization
Piotr Wilczyński, Artur Żółkowski, Mateusz Krzyziński, Emilia Wiśnios, Bartosz Pieliński, Stanisław Giziński, Julian Sienkiewicz, Przemysław Biecek
This paper introduces HADES, a novel tool for the automatic comparative analysis of documents with similar structures. HADES is designed to streamline the work of professionals dealing with large volumes of documents, such as policy documents, legal acts, and scientific papers. The tool employs a multi-step pipeline that begins with processing PDF documents using topic modeling, summarization, and analysis of the most important words for each topic. The process concludes with an interactive web app with visualizations that facilitate the comparison of the documents. HADES has the potential to significantly improve the productivity of professionals dealing with high volumes of documents, reducing the time and effort required to complete tasks related to comparative document analysis. Our package is publicly available on GitHub.
http://arxiv.org/abs/2302.13099v1
"2023-02-25T15:16:10Z"
cs.CL
2023
Suspension Analysis and Selective Continuation-Passing Style for Universal Probabilistic Programming Languages
Daniel Lundén, Lars Hummelgren, Jan Kudlicka, Oscar Eriksson, David Broman
Universal probabilistic programming languages (PPLs) make it relatively easy to encode and automatically solve statistical inference problems. To solve inference problems, PPL implementations often apply Monte Carlo inference algorithms that rely on execution suspension. State-of-the-art solutions enable execution suspension either through (i) continuation-passing style (CPS) transformations or (ii) efficient, but comparatively complex, low-level solutions that are often not available in high-level languages. CPS transformations introduce overhead due to unnecessary closure allocations -- a problem the PPL community has generally overlooked. To reduce overhead, we develop a new efficient selective CPS approach for PPLs. Specifically, we design a novel static suspension analysis technique that determines parts of programs that require suspension, given a particular inference algorithm. The analysis allows selectively CPS transforming the program only where necessary. We formally prove the correctness of the analysis and implement the analysis and transformation in the Miking CorePPL compiler. We evaluate the implementation for a large number of Monte Carlo inference algorithms on real-world models from phylogenetics, epidemiology, and topic modeling. The evaluation results demonstrate significant improvements across all models and inference algorithms.
http://arxiv.org/abs/2302.13051v2
"2023-02-25T10:32:01Z"
cs.PL
2023
A Twitter narrative of the COVID-19 pandemic in Australia
Rabindra Lamsal, Maria Rodriguez Read, Shanika Karunasekera
Social media platforms contain abundant data that can provide comprehensive knowledge of historical and real-time events. During crisis events, the use of social media peaks, as people discuss what they have seen, heard, or felt. Previous studies confirm the usefulness of such socially generated discussions for the public, first responders, and decision-makers to gain a better understanding of events as they unfold at the ground level. This study performs an extensive analysis of COVID-19-related Twitter discussions generated in Australia between January 2020 and October 2022. We explore the Australian Twitterverse by employing state-of-the-art approaches from both supervised and unsupervised domains to perform network analysis, topic modeling, sentiment analysis, and causality analysis. As the presented results provide a comprehensive understanding of the Australian Twitterverse during the COVID-19 pandemic, this study aims to explore the discussion dynamics to aid the development of future automated information systems for epidemic/pandemic management.
http://arxiv.org/abs/2302.11136v2
"2023-02-22T04:06:59Z"
cs.SI
2023
Sedition Hunters: A Quantitative Study of the Crowdsourced Investigation into the 2021 U.S. Capitol Attack
Tianjiao Yu, Sukrit Venkatagiri, Ismini Lourentzou, Kurt Luther
Social media platforms have enabled extremists to organize violent events, such as the 2021 U.S. Capitol Attack. Simultaneously, these platforms enable professional investigators and amateur sleuths to collaboratively collect and identify imagery of suspects with the goal of holding them accountable for their actions. Through a case study of Sedition Hunters, a Twitter community whose goal is to identify individuals who participated in the 2021 U.S. Capitol Attack, we explore what are the main topics or targets of the community, who participates in the community, and how. Using topic modeling, we find that information sharing is the main focus of the community. We also note an increase in awareness of privacy concerns. Furthermore, using social network analysis, we show how some participants played important roles in the community. Finally, we discuss implications for the content and structure of online crowdsourced investigations.
http://arxiv.org/abs/2302.10964v1
"2023-02-21T19:47:43Z"
cs.HC, cs.LG, cs.SI
2023
TherapyView: Visualizing Therapy Sessions with Temporal Topic Modeling and AI-Generated Arts
Baihan Lin, Stefan Zecevic, Djallel Bouneffouf, Guillermo Cecchi
We present TherapyView, a demonstration system to help therapists visualize the dynamic contents of past treatment sessions, enabled by state-of-the-art neural topic modeling techniques to analyze the topical tendencies of various psychiatric conditions and a deep learning-based image generation engine to provide a visual summary. The system incorporates temporal modeling to provide a time-series representation of topic similarities at turn-level resolution, and AI-generated artworks for the dialogue segments to provide a concise representation of the contents covered in the session, offering interpretable insights for therapists to optimize their strategies and enhance the effectiveness of psychotherapy. This system provides a proof of concept for AI-augmented therapy tools, offering an in-depth understanding of the patient's mental state and enabling more effective treatment.
http://arxiv.org/abs/2302.10845v1
"2023-02-21T17:53:45Z"
cs.CL, cs.AI, cs.HC, cs.LG
2023
Connecting Humanities and Social Sciences: Applying Language and Speech Technology to Online Panel Surveys
Henk van den Heuvel, Martijn Bentum, Simone Wills, Judith C. Koops
In this paper, we explore the application of language and speech technology to open-ended questions in a Dutch panel survey. In an experimental wave, respondents could choose to answer open questions via speech or keyboard. Automatic speech recognition (ASR) was used to process spoken responses. We evaluated answers from these input modalities to investigate differences between spoken and typed answers. We report the errors the ASR system produces and investigate the impact of these errors on downstream analyses. Open-ended questions give more freedom to answer for respondents, but entail a non-trivial amount of work to analyse. We evaluated the feasibility of using transformer-based models (e.g. BERT) to apply sentiment analysis and topic modelling on the answers of open questions. A big advantage of transformer-based models is that they are trained on a large amount of language materials and do not necessarily need training on the target materials. This is especially advantageous for survey data, which does not contain a lot of text materials. We tested the quality of automatic sentiment analysis by comparing automatic labeling with three human raters and tested the robustness of topic modelling by comparing the generated models based on automatic and manually transcribed spoken answers.
http://arxiv.org/abs/2302.10593v1
"2023-02-21T10:52:15Z"
cs.CL
2023
Topic Modeling in Density Functional Theory on Citations of Condensed Matter Electronic Structure Packages
Marie Dumaz, Camila Romero-Bohorquez, Donald Adjeroh, Aldo H. Romero
With an increasing number of new scientific papers being released, it becomes harder for researchers to be aware of recent articles in their field of study. Accurately classifying papers is a first step in the direction of personalized catering and easy access to research of interest. The field of Density Functional Theory (DFT) in particular is a good example of a methodology used in very different studies and interconnected disciplines, with a very strong community publishing many research articles. We devise a new unsupervised method for classifying publications, based on topic modeling, and use a DFT-related selection of documents as a use case. We first create topics from word analysis and clustering of the abstracts from the publications, then attribute each publication/paper to a topic based on word similarity. We then make interesting observations by analyzing connections between the topics and publishers, journals, country, or year of publication. The proposed approach is general, and can be applied to analyze publication and citation trends in other areas of study, beyond the field of Density Functional Theory.
http://arxiv.org/abs/2303.10239v1
"2023-02-16T17:04:20Z"
cond-mat.other, cs.DL, cs.IR, physics.soc-ph
2023
Visualizing Topic Uncertainty in Topic Modelling
Peter Winker
Word clouds have become a standard tool for presenting results of natural language processing methods such as topic modelling. They exhibit the most important words, where word size is often chosen proportional to the relevance of words within a topic. In the latent Dirichlet allocation (LDA) model, word clouds are graphical presentations of a vector of weights for words within a topic. These vectors are the result of a statistical procedure based on a specific corpus. Therefore, they are subject to uncertainty coming from different sources, such as sample selection, random components in the optimization algorithm, or parameter settings. A novel approach for presenting word clouds including information on such types of uncertainty is introduced and illustrated with an application of the LDA model to conference abstracts.
http://arxiv.org/abs/2302.06482v1
"2023-02-13T15:59:15Z"
stat.CO
2023
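One simple way to expose the run-to-run uncertainty discussed above is to refit LDA under different random seeds, align topics across runs, and inspect the spread of the word weights that a word cloud would display. The sketch below does this with scikit-learn's LDA; the greedy cosine-similarity alignment and the number of seeds are illustrative assumptions, not the paper's procedure.

```python
# Estimate word-weight uncertainty across repeated LDA fits.
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.metrics.pairwise import cosine_similarity

def topic_weight_uncertainty(doc_term, n_topics=10, seeds=(0, 1, 2, 3, 4)):
    runs = []
    for s in seeds:
        lda = LatentDirichletAllocation(n_components=n_topics, random_state=s)
        lda.fit(doc_term)
        topics = lda.components_ / lda.components_.sum(axis=1, keepdims=True)
        runs.append(topics)                       # (n_topics, n_words) per run

    reference = runs[0]
    aligned = [reference]
    for topics in runs[1:]:
        sim = cosine_similarity(reference, topics)
        order = sim.argmax(axis=1)                # greedy match to reference topics
        aligned.append(topics[order])

    stacked = np.stack(aligned)                   # (n_runs, n_topics, n_words)
    return stacked.mean(axis=0), stacked.std(axis=0)
```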
A Large-Scale Analysis of Persian Tweets Regarding Covid-19 Vaccination
Taha ShabaniMirzaei, Houmaan Chamani, Amirhossein Abaskohi, Zhivar Sourati Hassan Zadeh, Behnam Bahrak
The Covid-19 pandemic had an enormous effect on our lives, especially on people's interactions. By introducing Covid-19 vaccines, both positive and negative opinions were raised over the subject of taking vaccines or not. In this paper, using data gathered from Twitter, including tweets and user profiles, we offer a comprehensive analysis of public opinion in Iran about the Coronavirus vaccines. For this purpose, we applied a search query technique combined with a topic modeling approach to extract vaccine-related tweets. We utilized transformer-based models to classify the content of the tweets and extract themes revolving around vaccination. We also conducted an emotion analysis to evaluate the public happiness and anger around this topic. Our results demonstrate that Covid-19 vaccination has attracted considerable attention from different angles, such as governmental issues, safety or hesitancy, and side effects. Moreover, Coronavirus-relevant phenomena like public vaccination and the rate of infection deeply impacted public emotional status and users' interactions.
http://arxiv.org/abs/2302.04511v3
"2023-02-09T09:08:19Z"
cs.CL, cs.SI, I.2.7
2023
Estimation of Gaussian Bi-Clusters with General Block-Diagonal Covariance Matrix and Applications
Anastasiia Livochka, Ryan Browne, Sanjeena Subedi
Bi-clustering is a technique that allows for the simultaneous clustering of observations and features in a dataset. This technique is often used in bioinformatics, text mining, and time series analysis. An important advantage of biclustering algorithms is the ability to uncover multiple ``views'' (i.e., through rows and column groupings) in the data. Several Gaussian mixture model based biclustering approaches currently exist in the literature. However, they impose severe restrictions on the structure of the covariance matrix. Here, we propose a Gaussian mixture model-based bi-clustering approach that provides a more flexible block-diagonal covariance structure. We show that the clustering accuracy of the proposed model is comparable to other known techniques but our approach provides a more flexible covariance structure and has substantially lower computational time. We demonstrate the application of the proposed model in bioinformatics and topic modelling.
http://arxiv.org/abs/2302.03849v1
"2023-02-08T02:47:26Z"
stat.CO, 62H30
2023
Natural Language Processing for Policymaking
Zhijing Jin, Rada Mihalcea
Language is the medium for many political activities, from campaigns to news reports. Natural language processing (NLP) uses computational tools to parse text into key information that is needed for policymaking. In this chapter, we introduce common methods of NLP, including text classification, topic modeling, event extraction, and text scaling. We then overview how these methods can be used for policymaking through four major applications including data collection for evidence-based policymaking, interpretation of political decisions, policy communication, and investigation of policy effects. Finally, we highlight some potential limitations and ethical concerns when using NLP for policymaking. This text is from Chapter 7 (pages 141-162) of the Handbook of Computational Social Science for Policy (2023). Open Access on Springer: https://doi.org/10.1007/978-3-031-16624-2
http://arxiv.org/abs/2302.03490v1
"2023-02-07T14:34:39Z"
cs.CL, cs.AI, cs.CY, cs.LG
2023
Federated Variational Inference Methods for Structured Latent Variable Models
Conor Hassan, Robert Salomone, Kerrie Mengersen
Federated learning methods enable model training across distributed data sources without data leaving their original locations and have gained increasing interest in various fields. However, existing approaches are limited, excluding many structured probabilistic models. We present a general and elegant solution based on structured variational inference, widely used in Bayesian machine learning, adapted for the federated setting. Additionally, we provide a communication-efficient variant analogous to the canonical FedAvg algorithm. The proposed algorithms' effectiveness is demonstrated, and their performance is compared with hierarchical Bayesian neural networks and topic models.
http://arxiv.org/abs/2302.03314v2
"2023-02-07T08:35:04Z"
stat.ML, cs.LG, stat.CO, stat.ME
2023
Efficient and Flexible Topic Modeling using Pretrained Embeddings and Bag of Sentences
Johannes Schneider
Pre-trained language models have led to a new state-of-the-art in many NLP tasks. However, for topic modeling, statistical generative models such as LDA are still prevalent, which do not easily allow incorporating contextual word vectors. They might yield topics that do not align well with human judgment. In this work, we propose a novel topic modeling and inference algorithm. We suggest a bag of sentences (BoS) approach using sentences as the unit of analysis. We leverage pre-trained sentence embeddings by combining generative process models and clustering. We derive a fast inference algorithm based on expectation maximization, hard assignments, and an annealing process. The evaluation shows that our method yields state-of-the-art results with relatively low computational demands. Our method is also more flexible compared to prior works leveraging word embeddings, since it provides the possibility to customize topic-document distributions using priors. Code and data are available at \url{https://github.com/JohnTailor/BertSenClu}.
http://arxiv.org/abs/2302.03106v3
"2023-02-06T20:13:11Z"
cs.CL, cs.LG
2023
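The sketch below illustrates the general bag-of-sentences idea: hard-assignment EM (essentially spherical k-means) over pretrained sentence embeddings, with document-topic distributions read off from each document's sentence assignments. It omits the paper's generative model, priors, and annealing schedule, so treat it as a conceptual illustration only.

```python
# Hard-assignment EM over sentence embeddings, with per-document topic mixtures.
import numpy as np

def bos_topic_inference(sent_emb, doc_ids, n_topics=10, n_iter=20, seed=0):
    """sent_emb: (n_sentences, dim) L2-normalized embeddings.
    doc_ids: document index of each sentence."""
    rng = np.random.default_rng(seed)
    centroids = sent_emb[rng.choice(len(sent_emb), n_topics, replace=False)]

    for _ in range(n_iter):
        # E-step (hard): assign every sentence to its most similar topic centroid.
        sims = sent_emb @ centroids.T
        assign = sims.argmax(axis=1)
        # M-step: re-estimate centroids from the assigned sentences.
        for k in range(n_topics):
            members = sent_emb[assign == k]
            if len(members):
                c = members.mean(axis=0)
                centroids[k] = c / (np.linalg.norm(c) + 1e-12)

    # Document-topic distribution: fraction of a document's sentences per topic.
    n_docs = int(np.max(doc_ids)) + 1
    doc_topic = np.zeros((n_docs, n_topics))
    for d, k in zip(doc_ids, assign):
        doc_topic[d, k] += 1
    doc_topic /= doc_topic.sum(axis=1, keepdims=True).clip(min=1)
    return assign, doc_topic
```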
Analyzing the impact of climate change on critical infrastructure from the scientific literature: A weakly supervised NLP approach
Tanwi Mallick, Joshua David Bergerson, Duane R. Verner, John K Hutchison, Leslie-Anne Levy, Prasanna Balaprakash
Natural language processing (NLP) is a promising approach for analyzing large volumes of climate-change and infrastructure-related scientific literature. However, best-in-practice NLP techniques require large collections of relevant documents (corpus). Furthermore, NLP techniques using machine learning and deep learning techniques require labels grouping the articles based on user-defined criteria for a significant subset of a corpus in order to train the supervised model. Even labeling a few hundred documents with human subject-matter experts is a time-consuming process. To expedite this process, we developed a weak supervision-based NLP approach that leverages semantic similarity between categories and documents to (i) establish a topic-specific corpus by subsetting a large-scale open-access corpus and (ii) generate category labels for the topic-specific corpus. In comparison with a months-long process of subject-matter expert labeling, we assign category labels to the whole corpus using weak supervision and supervised learning in about 13 hours. The labeled climate and NCF corpus enable targeted, efficient identification of documents discussing a topic (or combination of topics) of interest and identification of various effects of climate change on critical infrastructure, improving the usability of scientific literature and ultimately supporting enhanced policy and decision making. To demonstrate this capability, we conduct topic modeling on pairs of climate hazards and NCFs to discover trending topics at the intersection of these categories. This method is useful for analysts and decision-makers to quickly grasp the relevant topics and most important documents linked to the topic.
http://arxiv.org/abs/2302.01887v2
"2023-02-03T17:47:42Z"
cs.LG
2023
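In the spirit of the weak-supervision step described above, the following sketch embeds category descriptions and documents with the same sentence encoder, keeps documents that are sufficiently similar to any category, and assigns the nearest category as a weak label. The encoder name and the similarity threshold are assumptions for illustration, not the authors' configuration.

```python
# Weak labeling by semantic similarity between documents and category descriptions.
import numpy as np
from sentence_transformers import SentenceTransformer

def weak_labels(documents, category_descriptions, threshold=0.35):
    model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed encoder
    doc_emb = model.encode(documents, normalize_embeddings=True)
    cat_emb = model.encode(category_descriptions, normalize_embeddings=True)

    sims = doc_emb @ cat_emb.T            # cosine similarity (embeddings normalized)
    best_cat = sims.argmax(axis=1)
    best_sim = sims.max(axis=1)

    keep = best_sim >= threshold          # topic-specific subset of the corpus
    labels = np.where(keep, best_cat, -1) # -1 marks out-of-scope documents
    return labels, keep
```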
ANTM: An Aligned Neural Topic Model for Exploring Evolving Topics
Hamed Rahimi, Hubert Naacke, Camelia Constantin, Bernd Amann
This paper presents an algorithmic family of dynamic topic models called Aligned Neural Topic Models (ANTM), which combine novel data mining algorithms to provide a modular framework for discovering evolving topics. ANTM maintains the temporal continuity of evolving topics by extracting time-aware features from documents using advanced pre-trained Large Language Models (LLMs) and employing an overlapping sliding window algorithm for sequential document clustering. This overlapping sliding window algorithm identifies a different number of topics within each time frame and aligns semantically similar document clusters across time periods. This process captures emerging and fading trends across different periods and allows for a more interpretable representation of evolving topics. Experiments on four distinct datasets show that ANTM outperforms probabilistic dynamic topic models in terms of topic coherence and diversity metrics. Moreover, it improves the scalability and flexibility of dynamic topic models by being accessible and adaptable to different types of algorithms. Additionally, a Python package is developed for researchers and scientists who wish to study the trends and evolving patterns of topics in large-scale textual data.
http://arxiv.org/abs/2302.01501v2
"2023-02-03T02:31:12Z"
cs.IR, cs.AI, cs.LG, cs.NE, cs.SI
2023
You Are What You Talk About: Inducing Evaluative Topics for Personality Analysis
Josip Jukić, Iva Vukojević, Jan Šnajder
Expressing attitude or stance toward entities and concepts is an integral part of human behavior and personality. Recently, evaluative language data has become more accessible with social media's rapid growth, enabling large-scale opinion analysis. However, surprisingly little research examines the relationship between personality and evaluative language. To bridge this gap, we introduce the notion of evaluative topics, obtained by applying topic models to pre-filtered evaluative text from social media. We then link evaluative topics to individual text authors to build their evaluative profiles. We apply evaluative profiling to Reddit comments labeled with personality scores and conduct an exploratory study on the relationship between evaluative topics and Big Five personality facets, aiming for a more interpretable, facet-level analysis. Finally, we validate our approach by observing correlations consistent with prior research in personality psychology.
http://arxiv.org/abs/2302.00493v1
"2023-02-01T15:04:04Z"
cs.CL
2023
ContCommRTD: A Distributed Content-based Misinformation-aware Community Detection System for Real-Time Disaster Reporting
Elena-Simona Apostol, Ciprian-Octavian Truică, Adrian Paschke
Real-time social media data can provide useful information on evolving hazards. Alongside traditional methods of disaster detection, the integration of social media data can considerably enhance disaster management. In this paper, we investigate the problem of detecting geolocation-content communities on Twitter and propose a novel distributed system that provides in near real-time information on hazard-related events and their evolution. We show that content-based community analysis leads to better and faster dissemination of reports on hazards. Our distributed disaster reporting system analyzes the social relationship among worldwide geolocated tweets, and applies topic modeling to group tweets by topics. Considering for each tweet the following information: user, timestamp, geolocation, retweets, and replies, we create a publisher-subscriber distribution model for topics. We use content similarity and the proximity of nodes to create a new model for geolocation-content based communities. Users can subscribe to different topics in specific geographical areas or worldwide and receive real-time reports regarding these topics. As misinformation can lead to increased damage if propagated in hazard-related tweets, we propose a new deep learning model to detect fake news. Tweets containing misinformation are then removed from display. We also show empirically the scalability capabilities of the proposed system.
http://arxiv.org/abs/2301.12984v1
"2023-01-30T15:28:47Z"
cs.SI, cs.AI, cs.DC, 91D30
2023
Neural Dynamic Focused Topic Model
Kostadin Cvejoski, Ramsés J. Sánchez, César Ojeda
Topic models and all their variants analyse text by learning meaningful representations through word co-occurrences. As pointed out by Williamson et al. (2010), such models implicitly assume that the probability of a topic to be active and its proportion within each document are positively correlated. This correlation can be strongly detrimental in the case of documents created over time, simply because recent documents are likely better described by new and hence rare topics. In this work we leverage recent advances in neural variational inference and present an alternative neural approach to the dynamic Focused Topic Model. Indeed, we develop a neural model for topic evolution which exploits sequences of Bernoulli random variables in order to track the appearances of topics, thereby decoupling their activities from their proportions. We evaluate our model on three different datasets (the UN general debates, the collection of NeurIPS papers, and the ACL Anthology dataset) and show that it (i) outperforms state-of-the-art topic models in generalization tasks and (ii) performs comparably to them on prediction tasks, while employing roughly the same number of parameters, and converging about two times faster. Source code to reproduce our experiments is available online.
http://arxiv.org/abs/2301.10988v1
"2023-01-26T08:37:34Z"
cs.CL, cs.LG
2023
Improving the Inference of Topic Models via Infinite Latent State Replications
Daniel Rugeles, Zhen Hai, Juan Felipe Carmona, Manoranjan Dash, Gao Cong
In text mining, topic models are a type of probabilistic generative models for inferring latent semantic topics from text corpus. One of the most popular inference approaches to topic models is perhaps collapsed Gibbs sampling (CGS), which typically samples one single topic label for each observed document-word pair. In this paper, we aim at improving the inference of CGS for topic models. We propose to leverage state augmentation technique by maximizing the number of topic samples to infinity, and then develop a new inference approach, called infinite latent state replication (ILR), to generate robust soft topic assignment for each given document-word pair. Experimental results on the publicly available datasets show that ILR outperforms CGS for inference of existing established topic models.
http://arxiv.org/abs/2301.12974v1
"2023-01-25T17:07:25Z"
cs.CL, cs.AI, cs.LG, math.ST, stat.TH, G.3; I.2.6
2023
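For reference, this is the textbook collapsed Gibbs sampler for LDA that the abstract refers to, in which a single topic label is sampled for each observed document-word pair. It is the baseline that ILR improves upon, not an implementation of ILR itself; hyperparameters are illustrative.

```python
# Standard collapsed Gibbs sampling for LDA with symmetric priors.
import numpy as np

def lda_cgs(docs, n_topics, vocab_size, alpha=0.1, beta=0.01, n_iter=200, seed=0):
    """docs: list of lists of word ids. Returns document-topic and
    topic-word count matrices after sampling."""
    rng = np.random.default_rng(seed)
    n_dk = np.zeros((len(docs), n_topics))        # document-topic counts
    n_kw = np.zeros((n_topics, vocab_size))       # topic-word counts
    n_k = np.zeros(n_topics)                      # topic totals
    z = [rng.integers(n_topics, size=len(d)) for d in docs]

    for d, doc in enumerate(docs):                # initialize counts
        for i, w in enumerate(doc):
            k = z[d][i]
            n_dk[d, k] += 1; n_kw[k, w] += 1; n_k[k] += 1

    for _ in range(n_iter):
        for d, doc in enumerate(docs):
            for i, w in enumerate(doc):
                k = z[d][i]                       # remove the current assignment
                n_dk[d, k] -= 1; n_kw[k, w] -= 1; n_k[k] -= 1
                # Collapsed conditional:
                # p(z=k) is proportional to (n_dk+alpha)(n_kw+beta)/(n_k+V*beta)
                p = (n_dk[d] + alpha) * (n_kw[:, w] + beta) / (n_k + vocab_size * beta)
                k = rng.choice(n_topics, p=p / p.sum())
                z[d][i] = k                       # add the new assignment back
                n_dk[d, k] += 1; n_kw[k, w] += 1; n_k[k] += 1
    return n_dk, n_kw
```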
Sensemaking About Contraceptive Methods Across Online Platforms
LeAnn McDowall, Maria Antoniak, David Mimno
Selecting a birth control method is a complex healthcare decision. While birth control methods provide important benefits, they can also cause unpredictable side effects and be stigmatized, leading many people to seek additional information online, where they can find reviews, advice, hypotheses, and experiences of other birth control users. However, the relationships between their healthcare concerns, sensemaking activities, and online settings are not well understood. We gather texts about birth control shared on Twitter, Reddit, and WebMD -- platforms with different affordances, moderation, and audiences -- to study where and how birth control is discussed online. Using a combination of topic modeling and hand annotation, we identify and characterize the dominant sensemaking practices across these platforms, and we create lexicons to draw comparisons across birth control methods and side effects. We use these to measure variations from survey reports of side effect experiences and method usage. Our findings characterize how online platforms are used to make sense of difficult healthcare choices and highlight unmet needs of birth control users.
http://arxiv.org/abs/2301.09295v1
"2023-01-23T06:44:38Z"
cs.CL
2023
A Semantic Modular Framework for Events Topic Modeling in Social Media
Arya Hadizadeh Moghaddam, Saeedeh Momtazi
The advancement of social media contributes to the growing amount of content shared on these platforms, which provide a convenient place for people to report various real-life events. Detecting these events with the help of natural language processing has received researchers' attention, and various algorithms have been developed for this goal. In this paper, we propose a Semantic Modular Model (SMM) consisting of 5 different modules, namely Distributional Denoising Autoencoder, Incremental Clustering, Semantic Denoising, Defragmentation, and Ranking and Processing. The proposed model aims to (1) cluster various documents and ignore the documents that might not contribute to the identification of events, and (2) identify more important and descriptive keywords. Compared to the state-of-the-art methods, the results show that the proposed model has a higher performance in identifying events with lower ranks and extracting keywords for more important events in three English Twitter datasets: FACup, SuperTuesday, and USElection. The proposed method outperformed the best reported results in the mean keyword-precision metric by 7.9%.
http://arxiv.org/abs/2301.09009v1
"2023-01-21T21:08:45Z"
cs.LG, cs.AI
2023
A Review of the Trends and Challenges in Adopting Natural Language Processing Methods for Education Feedback Analysis
Thanveer Shaik, Xiaohui Tao, Yan Li, Christopher Dann, Jacquie Mcdonald, Petrea Redmond, Linda Galligan
Artificial Intelligence (AI) is a fast-growing area of study that is stretching its presence to many business and research domains. Machine learning, deep learning, and natural language processing (NLP) are subsets of AI that tackle different areas of data processing and modelling. This review article presents an overview of the impact of AI on education, outlining current opportunities. In the education domain, student feedback data is crucial to uncover the merits and demerits of existing services provided to students. AI can assist in identifying areas of improvement in educational infrastructure, learning management systems, teaching practices, and the study environment. NLP techniques play a vital role in analyzing student feedback in textual format. This research focuses on existing NLP methodologies and applications that could be adapted to educational domain applications such as sentiment annotation, entity annotation, text summarization, and topic modelling. Trends and challenges in adopting NLP in education are reviewed and explored. Context-based challenges in NLP, such as sarcasm, domain-specific language, ambiguity, and aspect-based sentiment analysis, are explained along with existing methodologies to overcome them. Research community approaches to extracting the semantic meaning of emoticons and special characters in feedback, which convey user opinion, are also explored, together with the challenges in adopting NLP in education.
http://arxiv.org/abs/2301.08826v1
"2023-01-20T23:38:58Z"
cs.CL
2023
Contextualizing Emerging Trends in Financial News Articles
Nhu Khoa Nguyen, Thierry Delahaut, Emanuela Boros, Antoine Doucet, Gaël Lejeune
Identifying and exploring emerging trends in the news is becoming more essential than ever with many changes occurring worldwide due to the global health crises. However, most of the recent research has focused mainly on detecting trends in social media, thus benefiting from social features (e.g. likes and retweets on Twitter), which help the task as they can be used to measure the engagement and diffusion rate of content. Yet formal text data, unlike short social media posts, comes with a longer, less restricted writing format, and is thus more challenging. In this paper, we focus our study on emerging trend detection in financial news articles about Microsoft, collected before and during the start of the COVID-19 pandemic (July 2019 to July 2020). We make the dataset accessible and propose a strong baseline (Contextual Leap2Trend) for exploring the dynamics of similarities between pairs of keywords based on topic modelling and term frequency. Finally, we evaluate against a gold standard (Google Trends) and present noteworthy real-world scenarios regarding the influence of the pandemic on Microsoft.
http://arxiv.org/abs/2301.11318v1
"2023-01-20T12:56:52Z"
cs.CL, cs.SI, q-fin.GN
2023
Sentiment Analysis for Measuring Hope and Fear from Reddit Posts During the 2022 Russo-Ukrainian Conflict
Alessio Guerra, Oktay Karakuş
This paper proposes a novel lexicon-based unsupervised sentiment analysis method to measure the ``hope'' and ``fear'' associated with the 2022 Ukrainian-Russian conflict. Reddit.com is utilised as the main source of human reactions to daily events during nearly the first three months of the conflict. The top 50 ``hot'' posts of six different subreddits about Ukraine and news (Ukraine, worldnews, Ukraina, UkrainianConflict, UkraineWarVideoReport, UkraineWarReports) and their relative comments are scraped and a data set is created. On this corpus, multiple analyses, such as (1) public interest, (2) hope/fear score, and (3) stock price interaction, are employed. We promote using a dictionary approach, which scores the hopefulness of every submitted user post. The Latent Dirichlet Allocation (LDA) algorithm of topic modelling is also utilised to understand the main issues raised by users and the key talking points. Experimental analysis shows that hope strongly decreases after the symbolic and strategic losses of Azovstal (Mariupol) and Severodonetsk. Spikes in hope/fear, both positive and negative, are present after important battles, but also after some non-military events, such as Eurovision and football games.
http://arxiv.org/abs/2301.08347v1
"2023-01-19T22:43:59Z"
cs.CL, cs.CY
2023
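A toy sketch of the dictionary-scoring idea described above: count hope- and fear-associated terms in each post and compute a net hope score. The tiny word lists are placeholders for illustration, not the paper's lexicon.

```python
# Minimal lexicon-based hope/fear scoring of a post.
import re

HOPE_TERMS = {"hope", "hopeful", "optimistic", "relief", "victory", "peace"}
FEAR_TERMS = {"fear", "afraid", "worried", "panic", "threat", "escalation"}

def hope_score(post):
    tokens = re.findall(r"[a-z']+", post.lower())
    if not tokens:
        return 0.0
    hope = sum(t in HOPE_TERMS for t in tokens)
    fear = sum(t in FEAR_TERMS for t in tokens)
    # Net score in [-1, 1]: positive means more hope-laden than fear-laden.
    return (hope - fear) / max(hope + fear, 1)

# Example: a post mentioning both hope- and fear-related terms.
daily_score = hope_score("Reports of a ceasefire bring some hope after days of fear.")
```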
Interpretable and Scalable Graphical Models for Complex Spatio-temporal Processes
Yu Wang
This thesis focuses on data that has complex spatio-temporal structure and on probabilistic graphical models that learn the structure in an interpretable and scalable manner. We target two research areas of interest: Gaussian graphical models for tensor-variate data and summarization of complex time-varying texts using topic models. This work advances the state-of-the-art in several directions. First, it introduces a new class of tensor-variate Gaussian graphical models via the Sylvester tensor equation. Second, it develops an optimization technique based on a fast-converging proximal alternating linearized minimization method, which scales tensor-variate Gaussian graphical model estimations to modern big-data settings. Third, it connects Kronecker-structured (inverse) covariance models with spatio-temporal partial differential equations (PDEs) and introduces a new framework for ensemble Kalman filtering that is capable of tracking chaotic physical systems. Fourth, it proposes a modular and interpretable framework for unsupervised and weakly-supervised probabilistic topic modeling of time-varying data that combines generative statistical models with computational geometric methods. Throughout, practical applications of the methodology are considered using real datasets. This includes brain-connectivity analysis using EEG data, space weather forecasting using solar imaging data, longitudinal analysis of public opinions using Twitter data, and mining of mental health related issues using TalkLife data. We show in each case that the graphical modeling framework introduced here leads to improved interpretability, accuracy, and scalability.
http://arxiv.org/abs/2301.06021v1
"2023-01-15T05:39:30Z"
cs.LG, stat.ME
2023
Natural Language Processing of Aviation Occurrence Reports for Safety Management
Patrick Jonk, Vincent de Vries, Rombout Wever, Georgios Sidiropoulos, Evangelos Kanoulas
Occurrence reporting is a commonly used method in safety management systems to obtain insight into the prevalence of hazards and accident scenarios. In support of safety data analysis, reports are often categorized according to a taxonomy. However, the processing of the reports can require significant effort from safety analysts and a common problem is interrater variability in labeling processes. Also, in some cases, reports are not processed according to a taxonomy, or the taxonomy does not fully cover the contents of the documents. This paper explores various Natural Language Processing (NLP) methods to support the analysis of aviation safety occurrence reports. In particular, the problems studied are the automatic labeling of reports using a classification model, extracting the latent topics in a collection of texts using a topic model and the automatic generation of probable cause texts. Experimental results showed that (i) under the right conditions the labeling of occurrence reports can be effectively automated with a transformer-based classifier, (ii) topic modeling can be useful for finding the topics present in a collection of reports, and (iii) using a summarization model can be a promising direction for generating probable cause texts.
http://arxiv.org/abs/2301.05663v1
"2023-01-13T17:00:09Z"
cs.CL, cs.LG
2,023
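The report-labeling step described above can be illustrated with a hedged sketch. The paper fine-tunes its own transformer classifier; here a zero-shot classifier (facebook/bart-large-mnli) and invented taxonomy labels stand in as a rough approximation, not the authors' model.

```python
# Illustrative sketch: labeling an occurrence report against a safety taxonomy
# with a zero-shot transformer classifier. The paper fine-tunes its own
# classifier; the model and the taxonomy labels here are stand-ins.
from transformers import pipeline

classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

report = ("During approach the crew received a ground proximity warning "
          "and executed a go-around.")
taxonomy = ["controlled flight into terrain", "runway excursion",
            "loss of control in flight", "bird strike"]  # hypothetical labels

result = classifier(report, candidate_labels=taxonomy)
print(list(zip(result["labels"], [round(s, 3) for s in result["scores"]])))
```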
Topics in Contextualised Attention Embeddings
Mozhgan Talebpour, Alba Garcia Seco de Herrera, Shoaib Jameel
Contextualised word vectors obtained via pre-trained language models encode a variety of knowledge that has already been exploited in applications. Complementary to these language models are probabilistic topic models that learn thematic patterns from the text. Recent work has demonstrated that conducting clustering on the word-level contextual representations from a language model emulates word clusters that are discovered in latent topics of words from Latent Dirichlet Allocation. The important question is how such topical word clusters are automatically formed, through clustering, in the language model when it has not been explicitly designed to model latent topics. To address this question, we design different probe experiments. Using BERT and DistilBERT, we find that the attention framework plays a key role in modelling such word topic clusters. We strongly believe that our work paves the way for further research into the relationships between probabilistic topic models and pre-trained language models.
http://arxiv.org/abs/2301.04339v1
"2023-01-11T07:26:19Z"
cs.CL, cs.IR
2,023
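A minimal sketch of how attention weights can be extracted from DistilBERT for this kind of probing, assuming the standard Hugging Face `output_attentions` interface; this is not the authors' exact probe design.

```python
# Sketch: inspecting attention weights from DistilBERT, the kind of signal the
# probing experiments above analyse. Not the paper's exact probe setup.
import torch
from transformers import AutoTokenizer, AutoModel

name = "distilbert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModel.from_pretrained(name, output_attentions=True)

inputs = tokenizer("Topic models cluster words into latent themes.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# outputs.attentions: one (batch, heads, seq_len, seq_len) tensor per layer
attn = torch.stack(outputs.attentions)           # (layers, batch, heads, L, L)
per_token = attn.mean(dim=(0, 2))[0].sum(dim=0)  # total attention received per token
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
print(sorted(zip(tokens, per_token.tolist()), key=lambda x: -x[1])[:5])
```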
Topic Modelling of Swedish Newspaper Articles about Coronavirus: a Case Study using Latent Dirichlet Allocation Method
Bernadeta Griciūtė, Lifeng Han, Goran Nenadic
Topic Modelling (TM) is a research branch of natural language understanding (NLU) and natural language processing (NLP) that facilitates insightful analysis of large documents and datasets, such as summarisation of the main topics and of topic changes. This kind of discovery is becoming increasingly popular in real-life applications due to its impact on big data analytics. In this study, situated in the social-media and healthcare domain, we apply the popular Latent Dirichlet Allocation (LDA) method to model topic changes in Swedish newspaper articles about Coronavirus. We describe the corpus we created, which includes 6515 articles, the methods applied, and statistics on topic changes over a period of approximately fourteen months, from 17 January 2020 to 13 March 2021. We hope this work can serve as an asset for grounding applications of topic modelling and can inspire similar case studies in an era of pandemics, supporting socio-economic impact research as well as clinical and healthcare analytics. Our data and source code are openly available at https://github.com/poethan/Swed_Covid_TM Keywords: Latent Dirichlet Allocation (LDA); Topic Modelling; Coronavirus; Pandemics; Natural Language Understanding; BERT-topic
http://arxiv.org/abs/2301.03029v6
"2023-01-08T12:33:58Z"
cs.CL, cs.SI
2,023
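A minimal gensim LDA sketch of the kind of topic modelling applied in the Swedish-newspaper study above; the toy documents and the choice of two topics are placeholders, not the paper's corpus or settings.

```python
# Minimal gensim LDA sketch on a toy corpus; documents and num_topics are
# placeholders, not the study's 6515-article Swedish corpus.
from gensim.corpora import Dictionary
from gensim.models import LdaModel

docs = [
    ["vaccine", "dose", "region", "health"],
    ["school", "distance", "teaching", "students"],
    ["vaccine", "health", "agency", "dose"],
    ["students", "school", "restrictions", "teaching"],
]
dictionary = Dictionary(docs)
corpus = [dictionary.doc2bow(d) for d in docs]

lda = LdaModel(corpus=corpus, id2word=dictionary, num_topics=2,
               random_state=0, passes=10)
for topic_id, words in lda.print_topics(num_words=4):
    print(topic_id, words)
```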
Nested Dirichlet models for unsupervised attack pattern detection in honeypot data
Francesco Sanna Passino, Anastasia Mantziou, Daniyar Ghani, Philip Thiede, Ross Bevington, Nicholas A. Heard
Cyber-systems are under near-constant threat from intrusion attempts. Attack types vary, but each attempt typically has a specific underlying intent, and the perpetrators are typically groups of individuals with similar objectives. Clustering attacks appearing to share a common intent is very valuable to threat-hunting experts. This article explores Dirichlet distribution topic models for clustering terminal session commands collected from honeypots, which are special network hosts designed to entice malicious attackers. The main practical implications of clustering the sessions are two-fold: finding similar groups of attacks, and identifying outliers. A range of statistical models are considered, adapted to the structures of command-line syntax. In particular, concepts of primary and secondary topics, and then session-level and command-level topics, are introduced into the models to improve interpretability. The proposed methods are further extended in a Bayesian nonparametric fashion to allow unboundedness in the vocabulary size and the number of latent intents. The methods are shown to discover an unusual MIRAI variant which attempts to take over existing cryptocurrency coin-mining infrastructure, not detected by traditional topic-modelling approaches.
http://arxiv.org/abs/2301.02505v2
"2023-01-06T13:26:44Z"
cs.CR, stat.AP
2,023
Topics as Entity Clusters: Entity-based Topics from Language Models and Graph Neural Networks
Manuel V. Loureiro, Steven Derby, Tri Kurniawan Wijaya
Topic models aim to reveal the latent structure behind a corpus, typically conducted over a bag-of-words representation of documents. In the context of topic modeling, most vocabulary is either irrelevant for uncovering underlying topics or contains strong relationships with relevant concepts, impacting the interpretability of these topics. Furthermore, their limited expressiveness and dependency on language demand considerable computation resources. Hence, we propose a novel approach for cluster-based topic modeling that employs conceptual entities. Entities are language-agnostic representations of real-world concepts rich in relational information. To this end, we extract vector representations of entities from (i) an encyclopedic corpus using a language model; and (ii) a knowledge base using a graph neural network. We demonstrate that our approach consistently outperforms other state-of-the-art topic models across coherency metrics and find that the explicit knowledge encoded in the graph-based embeddings provides more coherent topics than the implicit knowledge encoded with the contextualized embeddings of language models.
http://arxiv.org/abs/2301.02458v1
"2023-01-06T10:54:54Z"
cs.CL, cs.LG
2,023
Tracking Brand-Associated Polarity-Bearing Topics in User Reviews
Runcong Zhao, Lin Gui, Hanqi Yan, Yulan He
Monitoring online customer reviews is important for business organisations to measure customer satisfaction and better manage their reputations. In this paper, we propose a novel dynamic Brand-Topic Model (dBTM) which is able to automatically detect and track brand-associated sentiment scores and polarity-bearing topics from product reviews organised in temporally-ordered time intervals. dBTM models the evolution of the latent brand polarity scores and the topic-word distributions over time by Gaussian state space models. It also incorporates a meta learning strategy to control the update of the topic-word distribution in each time interval in order to ensure smooth topic transitions and better brand score predictions. It has been evaluated on a dataset constructed from MakeupAlley reviews and a hotel review dataset. Experimental results show that dBTM outperforms a number of competitive baselines in brand ranking, achieving a good balance of topic coherence and uniqueness, and extracting well-separated polarity-bearing topics across time intervals.
http://arxiv.org/abs/2301.07183v1
"2023-01-03T18:30:34Z"
cs.IR, cs.LG
2,023
Topical Hidden Genome: Discovering Latent Cancer Mutational Topics using a Bayesian Multilevel Context-learning Approach
Saptarshi Chakraborty, Zoe Guan, Colin B. Begg, Ronglai Shen
Statistical inference on the cancer-site specificities of collective ultra-rare whole genome somatic mutations is an open problem. Traditional statistical methods cannot handle whole-genome mutation data due to their ultra-high-dimensionality and extreme data sparsity -- e.g., >30 million unique variants are observed in the ~1700 whole-genome tumor dataset considered herein, of which >99% variants are encountered only once. To harness information in these rare variants we have recently proposed the "hidden genome model", a formal multilevel multi-logistic model that mines information in ultra-rare somatic variants to characterize tumor types. The model condenses signals in rare variants through a hierarchical layer leveraging contexts of individual mutations. The model is currently implemented using consistent, scalable point estimation techniques that can handle 10s of millions of variants detected across thousands of tumors. Our recent publications have evidenced its impressive accuracy and attributability at scale. However, principled statistical inference from the model is infeasible due to the volume, correlation, and non-interpretability of the mutation contexts. In this paper we propose a novel framework that leverages topic models from the field of computational linguistics to induce an *interpretable dimension reduction* of the mutation contexts used in the model. The proposed model is implemented using an efficient MCMC algorithm that permits rigorous full Bayesian inference at a scale that is orders of magnitude beyond the capability of out-of-the-box high-dimensional multi-class regression methods and software. We employ our model on the Pan Cancer Analysis of Whole Genomes (PCAWG) dataset, and our results reveal interesting novel insights.
http://arxiv.org/abs/2212.14567v1
"2022-12-30T06:47:34Z"
stat.ME, stat.AP, stat.CO
2,022
Cyber Security and Online Safety Education for Schools in the UK: Looking through the Lens of Twitter Data
Jamie Knott, Haiyue Yuan, Matthew Boakes, Shujun Li
In recent years, digital technologies have grown rapidly, and as a result many school-aged children have become heavily exposed to the digital world. Since children are using more digital technologies, schools need to teach them more about cyber security and online safety. Accordingly, there are now more school programmes and projects that teach students about cyber security and online safety and help them learn and improve their skills. Still, despite the many programmes and projects, there is little evidence of how many schools have taken part in and promoted them. This work shows how we can learn about the size and scope of cyber security and online safety education in schools in the UK, a country with a very active and advanced cyber security education profile, using nearly 200k public tweets from over 15k schools. Using simple techniques such as descriptive statistics and visualisation, as well as advanced natural language processing (NLP) techniques such as sentiment analysis and topic modelling, we present new findings and insights about how UK schools as a sector have been engaging on Twitter with their cyber security and online safety education activities. Our work provides a range of large-scale, real-world evidence that can help inform people and organisations interested in cyber security and online safety education in schools.
http://arxiv.org/abs/2212.13742v2
"2022-12-28T08:30:24Z"
cs.CY, cs.SI
2,022
Nanomaterials for Supercapacitors: Uncovering Research Themes with Unsupervised Machine Learning
Mridhula Venkatanarayanan, Amit K Chakraborty, Sayantari Ghosh
Identification of important topics in a text can facilitate knowledge curation, discover thematic trends, and predict future directions. In this paper, we aim to quantitatively detect the most common research themes in the emerging supercapacitor research area, and summarize their trends and characteristics through the proposed unsupervised, machine learning approach. We have retrieved the complete reference entries of article abstracts from the Scopus database for all original research articles from 2004 to 2021. Abstracts were processed through a natural language processing pipeline and analyzed by a latent Dirichlet allocation topic modeling algorithm for unsupervised topic discovery. Nine major topics were further examined through topic-word associations, inter-topic distance maps and topic-specific word clouds. We observed that the greatest importance is given to performance metrics (28.2%), flexible electronics (8%), and graphene-based nanocomposites (10.9%). The analysis also points out crucial future research directions towards bio-derived carbon nanomaterials (such as RGO) and flexible supercapacitors.
http://arxiv.org/abs/2212.13550v1
"2022-12-27T16:59:41Z"
cond-mat.stat-mech, cond-mat.mtrl-sci, physics.comp-ph
2,022
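A short scikit-learn sketch of the unsupervised topic-discovery pipeline described above (bag-of-words counts followed by LDA and top-word inspection); the toy abstracts and the number of components are illustrative, not the study's nine-topic configuration.

```python
# Sketch of unsupervised topic discovery over abstracts with scikit-learn LDA;
# the texts and n_components are placeholders, not the study's configuration.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

abstracts = [
    "graphene nanocomposite electrode shows high specific capacitance",
    "flexible supercapacitor device for wearable electronics applications",
    "biomass derived porous carbon for energy storage electrodes",
    "reduced graphene oxide composite improves cycling stability",
]
vectorizer = CountVectorizer(stop_words="english")
X = vectorizer.fit_transform(abstracts)

lda = LatentDirichletAllocation(n_components=2, random_state=0)
lda.fit(X)

terms = vectorizer.get_feature_names_out()
for k, weights in enumerate(lda.components_):
    top = weights.argsort()[::-1][:5]
    print(f"topic {k}:", [terms[i] for i in top])
```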
A Topic Modeling Approach to Classifying Open Street Map Health Clinics and Schools in Sub-Saharan Africa
Joshua W. Anderson, Luis Iñaki Alberro Encina, Tina George Karippacheril, Jonathan Hersh, Cadence Stringer
Data deprivation, or the lack of easily available and actionable information on the well-being of individuals, is a significant challenge for the developing world and an impediment to the design and operationalization of policies intended to alleviate poverty. In this paper we explore the suitability of data derived from OpenStreetMap to proxy for the location of two crucial public services: schools and health clinics. Thanks to the efforts of thousands of digital humanitarians, online mapping repositories such as OpenStreetMap contain millions of records on buildings and other structures, delineating both their location and often their use. Unfortunately much of this data is locked in complex, unstructured text rendering it seemingly unsuitable for classifying schools or clinics. We apply a scalable, unsupervised learning method to unlabeled OpenStreetMap building data to extract the location of schools and health clinics in ten countries in Africa. We find the topic modeling approach greatly improves performance versus reliance on structured keys alone. We validate our results by comparing schools and clinics identified by our OSM method versus those identified by the WHO, and describe OSM coverage gaps more broadly.
http://arxiv.org/abs/2212.12084v1
"2022-12-22T23:56:43Z"
cs.LG
2,022
Human in the loop: How to effectively create coherent topics by manually labeling only a few documents per class
Anton Thielmann, Christoph Weisser, Benjamin Säfken
Few-shot methods for accurate modeling under sparse-label settings have improved significantly. However, the applications of few-shot modeling in natural language processing remain solely in the field of document classification. With recent performance improvements, supervised few-shot methods, combined with a simple topic extraction method, pose a significant challenge to unsupervised topic modeling methods. Our research shows that supervised few-shot learning, combined with a simple topic extraction method, can outperform unsupervised topic modeling techniques in terms of generating coherent topics, even when only a few labeled documents per class are used.
http://arxiv.org/abs/2212.09422v1
"2022-12-19T12:57:11Z"
cs.CL
2,022
Mining User Privacy Concern Topics from App Reviews
Jianzhang Zhang, Jinping Hua, Yiyang Chen, Nan Niu, Chuang Liu
Context: As mobile applications (Apps) widely spread over our society and life, various personal information is constantly demanded by Apps in exchange for more intelligent and customized functionality. An increasing number of users are voicing their privacy concerns through app reviews on App stores. Objective: The main challenge of effectively mining privacy concerns from user reviews lies in the fact that reviews expressing privacy concerns are overridden by a large number of reviews expressing more generic themes and noisy content. In this work, we propose a novel automated approach to overcome that challenge. Method: Our approach first employs information retrieval and document embeddings to unsupervisedly extract candidate privacy reviews that are further labeled to prepare the annotation dataset. Then, supervised classifiers are trained to automatically identify privacy reviews. Finally, we design an interpretable topic mining algorithm to detect privacy concern topics contained in the privacy reviews. Results: Experimental results show that the best-performing document embedding achieves an average precision of 96.80% in the top 100 retrieved candidate privacy reviews. All of the trained privacy review classifiers achieve an F1 value of more than 91%, outperforming the recent keyword-matching baseline by a maximum F1 margin of 7.5%. For detecting privacy concern topics from privacy reviews, our proposed algorithm achieves both better topic coherence and diversity than three strong topic modeling baselines including LDA. Conclusion: Empirical evaluation results demonstrate the effectiveness of our approach in identifying privacy reviews and detecting user privacy concerns expressed in App reviews.
http://arxiv.org/abs/2212.09289v4
"2022-12-19T08:07:27Z"
cs.SE
2,022
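The unsupervised retrieval step, ranking reviews by embedding similarity to a privacy-oriented query, can be sketched with sentence-transformers; the model choice and the query wording are assumptions, not necessarily those used in the paper.

```python
# Sketch of the unsupervised retrieval step: rank app reviews by similarity to
# a privacy-oriented query using document embeddings. Model and query are
# illustrative choices, not necessarily those from the paper.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")
reviews = [
    "This app keeps asking for my contacts and location for no reason.",
    "Great interface, love the dark mode.",
    "Why does a flashlight app need access to my photos?",
    "Crashes every time I open it.",
]
query = "the app collects personal data and violates my privacy"

review_emb = model.encode(reviews, convert_to_tensor=True)
query_emb = model.encode(query, convert_to_tensor=True)
scores = util.cos_sim(query_emb, review_emb)[0]

ranked = sorted(zip(reviews, scores.tolist()), key=lambda x: -x[1])
for text, score in ranked[:2]:   # top candidate privacy reviews
    print(round(score, 3), text)
```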
Very Large Language Model as a Unified Methodology of Text Mining
Meng Jiang
Text data mining is the process of deriving essential information from language text. Typical text mining tasks include text categorization, text clustering, topic modeling, information extraction, and text summarization. Various data sets are collected and various algorithms are designed for the different types of tasks. In this paper, I present a blue sky idea that the very large language model (VLLM) will become an effective unified methodology of text mining. I discuss at least three advantages of this new methodology over conventional methods. Finally, I discuss the challenges in the design and development of VLLM techniques for text mining.
http://arxiv.org/abs/2212.09271v2
"2022-12-19T06:52:13Z"
cs.DB, cs.AI, cs.LG
2,022
Experiments on Generalizability of BERTopic on Multi-Domain Short Text
Muriël de Groot, Mohammad Aliannejadi, Marcel R. Haas
Topic modeling is widely used for analytically evaluating large collections of textual data. One of the most popular topic techniques is Latent Dirichlet Allocation (LDA), which is flexible and adaptive, but not optimal for e.g. short texts from various domains. We explore how the state-of-the-art BERTopic algorithm performs on short multi-domain text and find that it generalizes better than LDA in terms of topic coherence and diversity. We further analyze the performance of the HDBSCAN clustering algorithm utilized by BERTopic and find that it classifies a majority of the documents as outliers. This crucial, yet overlooked problem excludes too many documents from further analysis. When we replace HDBSCAN with k-Means, we achieve similar performance, but without outliers.
http://arxiv.org/abs/2212.08459v1
"2022-12-16T13:07:39Z"
cs.CL
2,022
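The HDBSCAN-to-k-Means swap discussed above can be sketched directly with BERTopic, which accepts any scikit-learn-style clustering model through its `hdbscan_model` argument; the corpus and cluster count here are placeholders.

```python
# Sketch of the HDBSCAN -> k-Means swap: BERTopic accepts any sklearn-style
# clustering model in place of HDBSCAN, which removes the outlier topic (-1).
# Corpus and n_clusters are placeholders.
from bertopic import BERTopic
from sklearn.cluster import KMeans
from sklearn.datasets import fetch_20newsgroups

docs = fetch_20newsgroups(subset="train",
                          remove=("headers", "footers", "quotes")).data[:2000]

cluster_model = KMeans(n_clusters=20, random_state=0)
topic_model = BERTopic(hdbscan_model=cluster_model)

topics, _ = topic_model.fit_transform(docs)
print(topic_model.get_topic_info().head())  # no -1 outlier topic with k-Means
```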
Political and Economic Patterns in COVID-19 News: From Lockdown to Vaccination
Abdul Sittar, Daniela Major, Caio Mello, Dunja Mladenic, Marko Grobelnik
The purpose of this study is to analyse COVID-19 related news published across different geographical places, in order to gain insights into reporting differences. The COVID-19 pandemic had a major outbreak in January 2020 and was followed by different preventive measures, lockdown, and finally by the process of vaccination. To date, more comprehensive analyses of news related to the COVID-19 pandemic are missing, especially ones that explain which aspects of the pandemic are reported by newspapers situated in different economies and belonging to different political alignments. LDA tends to produce less coherent topics when news articles about an event are published across the world and answers are sought for specific queries, because the content is semantically heterogeneous. To address this challenge, we pooled news articles based on information retrieval using TF-IDF scores in a data processing step and performed topic modeling using LDA with combinations of 1- to 6-grams. We used the VADER sentiment analyzer to analyze the differences in sentiments in news articles reported across different geographical places. The novelty of this study is to look at how the COVID-19 pandemic was reported by the media, providing a comparison among countries in different political and economic contexts. Our findings suggest that a newspaper's political alignment is reflected in the content it reports, and that the economic issues covered depend on the economy of the place where the newspaper is based.
http://arxiv.org/abs/2212.13875v1
"2022-12-15T11:46:21Z"
cs.IR
2,022
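A small sketch of the VADER scoring step used in the cross-country sentiment comparison above; the example headlines are invented.

```python
# Sketch of VADER polarity scoring for news headlines; the headlines are toy
# examples, not the study's corpus.
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

analyzer = SentimentIntensityAnalyzer()
headlines = [
    "Government praises successful vaccination rollout",
    "Hospitals overwhelmed as cases surge again",
]
for text in headlines:
    scores = analyzer.polarity_scores(text)
    print(round(scores["compound"], 3), text)
```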
"I think this is the most disruptive technology": Exploring Sentiments of ChatGPT Early Adopters using Twitter Data
Mubin Ul Haque, Isuru Dharmadasa, Zarrin Tasnim Sworna, Roshan Namal Rajapakse, Hussain Ahmad
Large language models have recently attracted significant attention due to their impressive performance on a variety of tasks. ChatGPT developed by OpenAI is one such implementation of a large, pre-trained language model that has gained immense popularity among early adopters, where certain users go to the extent of characterizing it as a disruptive technology in many domains. Understanding such early adopters' sentiments is important because it can provide insights into the potential success or failure of the technology, as well as its strengths and weaknesses. In this paper, we conduct a mixed-method study using 10,732 tweets from early ChatGPT users. We first use topic modelling to identify the main topics and then perform an in-depth qualitative sentiment analysis of each topic. Our results show that the majority of the early adopters have expressed overwhelmingly positive sentiments related to topics such as Disruptions to software development, Entertainment and exercising creativity. Only a limited percentage of users expressed concerns about issues such as the potential for misuse of ChatGPT, especially regarding topics such as Impact on educational aspects. We discuss these findings by providing specific examples for each topic and then detail implications related to addressing these concerns for both researchers and users.
http://arxiv.org/abs/2212.05856v1
"2022-12-12T12:41:24Z"
cs.CL
2,022
Routine Outcome Monitoring in Psychotherapy Treatment using Sentiment-Topic Modelling Approach
Noor Fazilla Abd Yusof, Chenghua Lin
Despite the importance of choosing the right psychotherapy treatment for an individual patient, assessing the outcome of the therapy session is equally crucial. Evidence shows that continuously monitoring a patient's progress can significantly improve therapy outcomes towards an expected change. By monitoring the outcome, the patient's progress can be tracked closely to help clinicians identify patients who are not progressing in the treatment. Such monitoring can help the clinician consider any necessary actions for the patient's treatment as early as possible, e.g., recommending different types of treatment or adjusting the style of approach. Currently, the evaluation system is based on clinician-rated and self-report questionnaires that measure patients' progress pre- and post-treatment. While outcome monitoring tends to improve therapy outcomes, there are many challenges in the current method, e.g., the time and financial burden of administering questionnaires and of scoring and analysing the results. Therefore, a computational method for measuring and monitoring patient progress over the course of treatment is needed in order to enhance the likelihood of a positive treatment outcome. Moreover, such a computational method could lead to an inexpensive monitoring tool for evaluating patients' progress in clinical care that could be administered by a wider range of health-care professionals.
http://arxiv.org/abs/2212.08111v1
"2022-12-08T20:14:10Z"
cs.CL, cs.AI
2,022
The Ordered Matrix Dirichlet for State-Space Models
Niklas Stoehr, Benjamin J. Radford, Ryan Cotterell, Aaron Schein
Many dynamical systems in the real world are naturally described by latent states with intrinsic orderings, such as "ally", "neutral", and "enemy" relationships in international relations. These latent states manifest through countries' cooperative versus conflictual interactions over time. State-space models (SSMs) explicitly relate the dynamics of observed measurements to transitions in latent states. For discrete data, SSMs commonly do so through a state-to-action emission matrix and a state-to-state transition matrix. This paper introduces the Ordered Matrix Dirichlet (OMD) as a prior distribution over ordered stochastic matrices wherein the discrete distribution in the kth row stochastically dominates the (k+1)th, such that probability mass is shifted to the right when moving down rows. We illustrate the OMD prior within two SSMs: a hidden Markov model, and a novel dynamic Poisson Tucker decomposition model tailored to international relations data. We find that models built on the OMD recover interpretable ordered latent structure without forfeiting predictive performance. We suggest future applications to other domains where models with stochastic matrices are popular (e.g., topic modeling), and publish user-friendly code.
http://arxiv.org/abs/2212.04130v2
"2022-12-08T08:04:26Z"
stat.ML, cs.LG, cs.SI, stat.AP
2,022
Federated Neural Topic Models
Lorena Calvo-Bartolomé, Jerónimo Arenas-García
Over the last years, topic modeling has emerged as a powerful technique for organizing and summarizing big collections of documents or searching for particular patterns in them. However, privacy concerns may arise when cross-analyzing data from different sources. Federated topic modeling solves this issue by allowing multiple parties to jointly train a topic model without sharing their data. While several federated approximations of classical topic models do exist, no research has been conducted on their application for neural topic models. To fill this gap, we propose and analyze a federated implementation based on state-of-the-art neural topic modeling implementations, showing its benefits when there is a diversity of topics across the nodes' documents and the need to build a joint model. In practice, our approach is equivalent to a centralized model training, but preserves the privacy of the nodes. Advantages of this federated scenario are illustrated by means of experiments using both synthetic and real data scenarios.
http://arxiv.org/abs/2212.02269v2
"2022-12-05T13:49:26Z"
cs.LG, cs.CL
2,022
Topic Modeling on Clinical Social Work Notes for Exploring Social Determinants of Health Factors
Shenghuan Sun, Travis Zack, Madhumita Sushil, Atul J. Butte
Most research studying social determinants of health (SDoH) has focused on physician notes or structured elements of the electronic medical record (EMR). We hypothesize that clinical notes from social workers, whose role is to ameliorate social and economic factors, might provide a richer source of data on SDoH. We sought to perform topic modeling to identify robust topics of discussion within a large cohort of social work notes. We retrieved a diverse, deidentified corpus of 0.95 million clinical social work notes from 181,644 patients at the University of California, San Francisco. We used word frequency analysis and Latent Dirichlet Allocation (LDA) topic modeling analysis to characterize this corpus and identify potential topics of discussion. Word frequency analysis identified both medical and non-medical terms associated with specific ICD10 chapters. The LDA topic modeling analysis extracted 11 topics related to social determinants of health risk factors including financial status, abuse history, social support, risk of death, and mental health. In addition, the topic modeling approach captured the variation between different types of social work notes and across patients with different types of diseases or conditions. We demonstrated that social work notes contain rich, unique, and otherwise unobtainable information on an individual's SDoH.
http://arxiv.org/abs/2212.01462v1
"2022-12-02T21:54:55Z"
cs.CL
2,022
Twitter Data Analysis: Izmir Earthquake Case
Özgür Agrali, Hakan Sökün, Enis Karaarslan
T\"urkiye is located on a fault line; earthquakes often occur on a large and small scale. There is a need for effective solutions for gathering current information during disasters. We can use social media to get insight into public opinion. This insight can be used in public relations and disaster management. In this study, Twitter posts on Izmir Earthquake that took place on October 2020 are analyzed. We question if this analysis can be used to make social inferences on time. Data mining and natural language processing (NLP) methods are used for this analysis. NLP is used for sentiment analysis and topic modelling. The latent Dirichlet Allocation (LDA) algorithm is used for topic modelling. We used the Bidirectional Encoder Representations from Transformers (BERT) model working with Transformers architecture for sentiment analysis. It is shown that the users shared their goodwill wishes and aimed to contribute to the initiated aid activities after the earthquake. The users desired to make their voices heard by competent institutions and organizations. The proposed methods work effectively. Future studies are also discussed.
http://arxiv.org/abs/2212.01453v1
"2022-12-02T21:30:34Z"
cs.CL, cs.AI, cs.LG, I.2.7; I.2.m
2,022
TopicFlow: Disentangling quark and gluon jets with normalizing flows
Matthew J. Dolan, Ayodele Ore
The isolation of pure samples of quark and gluon jets is of key interest at hadron colliders. Recent work has employed topic modeling to disentangle the underlying distributions in mixed samples obtained from experiments. However, current implementations do not scale to high-dimensional observables as they rely on binning the data. In this work we introduce TopicFlow, a method based on normalizing flows to learn quark and gluon jet topic distributions from mixed datasets. These networks are as performant as the histogram-based approach, but since they are unbinned, they are efficient even in high dimension. The models can also be oversampled to alleviate the statistical limitations of histograms. As an example use case, we demonstrate how our models can improve the calibration accuracy of a classifier. Finally, we discuss how the flow likelihoods can be used to perform outlier-robust quark/gluon classification.
http://arxiv.org/abs/2211.16053v3
"2022-11-29T10:00:00Z"
hep-ph
2,022
Data-driven extraction of the substructure of quark and gluon jets in proton-proton and heavy-ion collisions
Yueyang Ying
The modification of quark- and gluon-initiated jets in the quark-gluon plasma produced in heavy-ion collisions is a long-standing question that has not yet received a definitive answer from experiments. In particular, the size of the modifications in the quark-gluon plasma differs between theoretical models. Therefore a fully data-driven technique is crucial for an unbiased extraction of the quark and gluon jet spectra and substructure. Corroborating past results, I demonstrate the capability of a fully data-driven technique called topic modeling in separating quark and gluon contributions to jet observables. The data-driven topic separation results can further be used to extract jet substructures, such as jet shapes and jet fragmentation function, and their respective QGP modifications. In addition, I propose the use of machine learning constructed observables and demonstrate the potential to increase separability for the input observable. This proof-of-concept study is based on proton-proton and heavy-ion collision events from the PYQUEN generator with statistics accessible in Run 4 of the Large Hadron Collider. These results suggest the potential for an experimental determination of quark- and gluon-jet spectra, their substructures, and their modification in the QGP.
http://arxiv.org/abs/2211.15785v1
"2022-11-28T21:38:25Z"
hep-ph
2,022
Low-resource Personal Attribute Prediction from Conversation
Yinan Liu, Hu Chen, Wei Shen, Jiaoyan Chen
Personal knowledge bases (PKBs) are crucial for a broad range of applications such as personalized recommendation and Web-based chatbots. A critical challenge to build PKBs is extracting personal attribute knowledge from users' conversation data. Given some users of a conversational system, a personal attribute and these users' utterances, our goal is to predict the ranking of the given personal attribute values for each user. Previous studies often rely on a relatively large amount of resources such as labeled utterances and external data, yet the attribute knowledge embedded in unlabeled utterances is underutilized and their performance in predicting some difficult personal attributes is still unsatisfactory. In addition, it is found that some text classification methods could be employed to resolve this task directly. However, they also do not perform well on those difficult personal attributes. In this paper, we propose a novel framework PEARL to predict personal attributes from conversations by leveraging the abundant personal attribute knowledge from utterances under a low-resource setting in which no labeled utterances or external data are utilized. PEARL combines the biterm semantic information with the word co-occurrence information seamlessly via employing the updated prior attribute knowledge to refine the biterm topic model's Gibbs sampling process in an iterative manner. The extensive experimental results show that PEARL outperforms all the baseline methods not only on the task of personal attribute prediction from conversations over two data sets, but also on the more general weakly supervised text classification task over one data set.
http://arxiv.org/abs/2211.15324v1
"2022-11-28T14:04:51Z"
cs.AI
2,022
Multi-scale Hybridized Topic Modeling: A Pipeline for Analyzing Unstructured Text Datasets via Topic Modeling
Keyi Cheng, Stefan Inzer, Adrian Leung, Xiaoxian Shen, Michael Perlmutter, Michael Lindstrom, Joyce Chew, Todd Presner, Deanna Needell
We propose a multi-scale hybridized topic modeling method to find hidden topics from transcribed interviews more accurately and efficiently than traditional topic modeling methods. Our multi-scale hybridized topic modeling method (MSHTM) approaches data at different scales and performs topic modeling in a hierarchical way utilizing first a classical method, Nonnegative Matrix Factorization, and then a transformer-based method, BERTopic. It harnesses the strengths of both NMF and BERTopic. Our method can help researchers and the public better extract and interpret the interview information. Additionally, it provides insights for new indexing systems based on the topic level. We then deploy our method on real-world interview transcripts and find promising results.
http://arxiv.org/abs/2211.13496v1
"2022-11-24T09:37:45Z"
stat.CO
2,022
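The coarse, first-stage pass of the hybrid pipeline above can be sketched with TF-IDF plus NMF in scikit-learn; the per-cluster BERTopic refinement is omitted, and the toy texts are placeholders rather than real interview transcripts.

```python
# Sketch of the first (coarse) stage of the hybrid pipeline: TF-IDF + NMF
# topics, which would then be refined per-cluster with BERTopic. Texts are toy.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import NMF

transcripts = [
    "the interviewer asked about my first job and the factory work",
    "my family moved to the city when the schools closed",
    "work at the factory was hard but the wages supported my family",
    "school friends from the old city still write to me",
]
tfidf = TfidfVectorizer(stop_words="english")
X = tfidf.fit_transform(transcripts)

nmf = NMF(n_components=2, init="nndsvd", random_state=0)
W = nmf.fit_transform(X)   # document-topic weights
H = nmf.components_        # topic-term weights

terms = tfidf.get_feature_names_out()
for k, row in enumerate(H):
    print(f"topic {k}:", [terms[i] for i in row.argsort()[::-1][:4]])
```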
Mitigating Data Sparsity for Short Text Topic Modeling by Topic-Semantic Contrastive Learning
Xiaobao Wu, Anh Tuan Luu, Xinshuai Dong
To overcome the data sparsity issue in short text topic modeling, existing methods commonly rely on data augmentation or the data characteristic of short texts to introduce more word co-occurrence information. However, most of them do not make full use of the augmented data or the data characteristic: they insufficiently learn the relations among samples in data, leading to dissimilar topic distributions of semantically similar text pairs. To better address data sparsity, in this paper we propose a novel short text topic modeling framework, Topic-Semantic Contrastive Topic Model (TSCTM). To sufficiently model the relations among samples, we employ a new contrastive learning method with efficient positive and negative sampling strategies based on topic semantics. This contrastive learning method refines the representations, enriches the learning signals, and thus mitigates the sparsity issue. Extensive experimental results show that our TSCTM outperforms state-of-the-art baselines regardless of the data augmentation availability, producing high-quality topics and topic distributions.
http://arxiv.org/abs/2211.12878v1
"2022-11-23T11:33:43Z"
cs.CL
2,022
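The contrastive objective described above can be sketched as a generic InfoNCE loss over topic representations of two views of the same documents; TSCTM's actual topic-semantic positive and negative sampling is richer than this placeholder.

```python
# Generic InfoNCE-style contrastive loss over topic representations of two
# "views" of the same documents. TSCTM's sampling strategy is topic-semantic
# and more involved; this only sketches the general objective.
import torch
import torch.nn.functional as F

def info_nce(z1: torch.Tensor, z2: torch.Tensor, temperature: float = 0.1) -> torch.Tensor:
    """z1, z2: (batch, dim) topic representations of two views of the same docs."""
    z1 = F.normalize(z1, dim=1)
    z2 = F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature      # (batch, batch) similarity matrix
    labels = torch.arange(z1.size(0))       # positives sit on the diagonal
    return F.cross_entropy(logits, labels)

z_a = torch.randn(8, 50)   # e.g., topic distributions of original texts
z_b = torch.randn(8, 50)   # e.g., topic distributions of augmented texts
print(info_nce(z_a, z_b).item())
```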
H2-Golden-Retriever: Methodology and Tool for an Evidence-Based Hydrogen Research Grantsmanship
Paul Seurin, Olusola Olabanjo, Joseph Wiggins, Lorien Pratt, Loveneesh Rana, Rozhin Yasaei, Gregory Renard
Hydrogen is poised to play a major role in decarbonizing the economy. The need to discover, develop, and understand low-cost, high-performance, durable materials that can help reduce the cost of electrolysis, as well as the need for an intelligent tool to make evidence-based Hydrogen research funding decisions relatively easier, warranted this study. In this work, we developed the H2 Golden Retriever (H2GR) system for Hydrogen knowledge discovery and representation using Natural Language Processing (NLP), Knowledge Graphs and Decision Intelligence. This system represents a novel methodology encapsulating state-of-the-art techniques for evidence-based research grantsmanship. Relevant Hydrogen papers were scraped and indexed from the web and preprocessing was done using noise and stop-word removal, language and spell check, stemming and lemmatization. The NLP tasks included Named Entity Recognition using Stanford and Spacy NER, and topic modeling using Latent Dirichlet Allocation and TF-IDF. The Knowledge Graph module was used for the generation of meaningful entities and their relationships, trends and patterns in relevant H2 papers, thanks to an ontology of the hydrogen production domain. The Decision Intelligence component provides stakeholders with a simulation environment for cost and quantity dependencies. The PageRank algorithm was used to rank papers of interest. Random searches were made on the proposed H2GR and the results included a list of papers ranked by relevancy score, entities, graphs of relationships between the entities, an ontology of H2 production and Causal Decision Diagrams showing component interactivity. Qualitative assessment was carried out by domain experts, and H2GR was deemed to function at a satisfactory level.
http://arxiv.org/abs/2211.08614v1
"2022-11-16T02:02:28Z"
cs.IR, cs.HC
2,022
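Two of the pipeline components named above, spaCy NER and PageRank ranking, can be sketched as follows; the example sentence and the toy graph edges are invented, and the small English model must be downloaded separately.

```python
# Sketch of two pipeline components: entity extraction with spaCy NER and
# paper ranking with PageRank over a toy graph. Requires
# `python -m spacy download en_core_web_sm`; the graph edges are made up.
import spacy
import networkx as nx

nlp = spacy.load("en_core_web_sm")
doc = nlp("Proton exchange membrane electrolysis was studied at NREL in 2021.")
print([(ent.text, ent.label_) for ent in doc.ents])

G = nx.DiGraph()
G.add_edges_from([
    ("paper_A", "paper_B"), ("paper_C", "paper_B"),
    ("paper_C", "paper_A"), ("paper_D", "paper_A"),
])
ranks = nx.pagerank(G)
print(sorted(ranks.items(), key=lambda kv: -kv[1]))
```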
Multilingual and Multimodal Topic Modelling with Pretrained Embeddings
Elaine Zosa, Lidia Pivovarova
This paper presents M3L-Contrast -- a novel multimodal multilingual (M3L) neural topic model for comparable data that maps texts from multiple languages and images into a shared topic space. Our model is trained jointly on texts and images and takes advantage of pretrained document and image embeddings to abstract the complexities between different languages and modalities. As a multilingual topic model, it produces aligned language-specific topics and as a multimodal model, it infers textual representations of semantic concepts in images. We demonstrate that our model is competitive with a zero-shot topic model in predicting topic distributions for comparable multilingual data and significantly outperforms a zero-shot model in predicting topic distributions for comparable texts and images. We also show that our model performs almost as well on unaligned embeddings as it does on aligned embeddings.
http://arxiv.org/abs/2211.08057v1
"2022-11-15T11:15:50Z"
cs.CL, cs.AI, I.2.7
2,022
Using Open-Ended Stressor Responses to Predict Depressive Symptoms across Demographics
Carlos Aguirre, Mark Dredze, Philip Resnik
Stressors are related to depression, but this relationship is complex. We investigate the relationship between open-ended text responses about stressors and depressive symptoms across gender and racial/ethnic groups. First, we use topic models and other NLP tools to find thematic and vocabulary differences when reporting stressors across demographic groups. We train language models using self-reported stressors to predict depressive symptoms, finding a relationship between stressors and depression. Finally, we find that differences in stressors translate to downstream performance differences across demographic groups.
http://arxiv.org/abs/2211.07932v1
"2022-11-15T06:34:58Z"
cs.CL
2,022
Climate Policy Tracker: Pipeline for automated analysis of public climate policies
Artur Żółkowski, Mateusz Krzyziński, Piotr Wilczyński, Stanisław Giziński, Emilia Wiśnios, Bartosz Pieliński, Julian Sienkiewicz, Przemysław Biecek
The number of standardized policy documents regarding climate policy and their publication frequency is significantly increasing. The documents are long and tedious for manual analysis, especially for policy experts, lawmakers, and citizens who lack access or domain expertise to utilize data analytics tools. Potential consequences of such a situation include reduced citizen governance and involvement in climate policies and an overall surge in analytics costs, rendering less accessibility for the public. In this work, we use a Latent Dirichlet Allocation-based pipeline for the automatic summarization and analysis of the 10-year national energy and climate plans (NECPs) for the period from 2021 to 2030, established by 27 Member States of the European Union. We focus on analyzing policy framing, the language used to describe specific issues, to detect essential nuances in the way governments frame their climate policies and achieve climate goals. The method leverages topic modeling and clustering for the comparative analysis of policy documents across different countries, and it allows for easier integration into potential user-friendly applications for the development of theories and processes of climate policy. This would further lead to better citizen governance and engagement over climate policies and public policy research.
http://arxiv.org/abs/2211.05852v1
"2022-11-10T20:19:28Z"
cs.CL
2,022
Submission-Aware Reviewer Profiling for Reviewer Recommender System
Omer Anjum, Alok Kamatar, Toby Liang, Jinjun Xiong, Wen-mei Hwu
Assigning qualified, unbiased and interested reviewers to paper submissions is vital for maintaining the integrity and quality of the academic publishing system and providing valuable reviews to authors. However, matching thousands of submissions with thousands of potential reviewers within a limited time is a daunting challenge for a conference program committee. Prior efforts based on topic modeling have suffered from losing the specific context that helps define the topics in a publication or submission abstract. Moreover, in some cases, topics identified are difficult to interpret. We propose an approach that learns from each abstract published by a potential reviewer the topics studied and the explicit context in which the reviewer studied the topics. Furthermore, we contribute a new dataset for evaluating reviewer matching systems. Our experiments show a significant, consistent improvement in precision when compared with the existing methods. We also use examples to demonstrate why our recommendations are more explainable. The new approach has been deployed successfully at top-tier conferences in the last two years.
http://arxiv.org/abs/2211.04194v1
"2022-11-08T12:18:02Z"
cs.IR, cs.AI
2,022
Learning Semantic Textual Similarity via Topic-informed Discrete Latent Variables
Erxin Yu, Lan Du, Yuan Jin, Zhepei Wei, Yi Chang
Recently, discrete latent variable models have received a surge of interest in both Natural Language Processing (NLP) and Computer Vision (CV), attributed to their comparable performance to the continuous counterparts in representation learning, while being more interpretable in their predictions. In this paper, we develop a topic-informed discrete latent variable model for semantic textual similarity, which learns a shared latent space for sentence-pair representation via vector quantization. Compared with previous models limited to local semantic contexts, our model can explore richer semantic information via topic modeling. We further boost the performance of semantic similarity by injecting the quantized representation into a transformer-based language model with a well-designed semantic-driven attention mechanism. We demonstrate, through extensive experiments across various English language datasets, that our model is able to surpass several strong neural baselines in semantic textual similarity tasks.
http://arxiv.org/abs/2211.03616v1
"2022-11-07T15:09:58Z"
cs.CL, cs.AI
2,022
The future is different: Large pre-trained language models fail in prediction tasks
Kostadin Cvejoski, Ramsés J. Sánchez, César Ojeda
Large pre-trained language models (LPLM) have shown spectacular success when fine-tuned on downstream supervised tasks. Yet, it is known that their performance can drastically drop when there is a distribution shift between the data used during training and that used at inference time. In this paper we focus on data distributions that naturally change over time and introduce four new REDDIT datasets, namely the WALLSTREETBETS, ASKSCIENCE, THE DONALD, and POLITICS sub-reddits. First, we empirically demonstrate that LPLM can display average performance drops of about 88% (in the best case!) when predicting the popularity of future posts from sub-reddits whose topic distribution changes with time. We then introduce a simple methodology that leverages neural variational dynamic topic models and attention mechanisms to infer temporal language model representations for regression tasks. Our models display performance drops of only about 40% in the worst cases (2% in the best ones) when predicting the popularity of future posts, while using only about 7% of the total number of parameters of LPLM and providing interpretable representations that offer insight into real-world events, like the GameStop short squeeze of 2021.
http://arxiv.org/abs/2211.00384v2
"2022-11-01T11:01:36Z"
cs.CL, cs.AI, cs.LG, cs.SI
2,022
Moving beyond word lists: towards abstractive topic labels for human-like topics of scientific documents
Domenic Rosati
Topic models represent groups of documents as a list of words (the topic labels). This work asks whether an alternative approach to topic labeling can be developed that is closer to a natural language description of a topic than a word list. To this end, we present an approach to generating human-like topic labels using abstractive multi-document summarization (MDS). We investigate our approach with an exploratory case study. We model topics in citation sentences in order to understand what further research needs to be done to fully operationalize MDS for topic labeling. Our case study shows that in addition to more human-like topics there are additional advantages to evaluation by using clustering and summarization measures instead of topic model measures. However, we find that there are several developments needed before we can design a well-powered study to evaluate MDS for topic modeling fully. Namely, improving cluster cohesion, improving the factuality and faithfulness of MDS, and increasing the number of documents that might be supported by MDS. We present a number of ideas on how these can be tackled and conclude with some thoughts on how topic modeling can also be used to improve MDS in general.
http://arxiv.org/abs/2211.05599v1
"2022-10-28T17:47:12Z"
cs.CL
2,022
Are Neural Topic Models Broken?
Alexander Hoyle, Pranav Goel, Rupak Sarkar, Philip Resnik
Recently, the relationship between automated and human evaluation of topic models has been called into question. Method developers have staked the efficacy of new topic model variants on automated measures, and their failure to approximate human preferences places these models on uncertain ground. Moreover, existing evaluation paradigms are often divorced from real-world use. Motivated by content analysis as a dominant real-world use case for topic modeling, we analyze two related aspects of topic models that affect their effectiveness and trustworthiness in practice for that purpose: the stability of their estimates and the extent to which the model's discovered categories align with human-determined categories in the data. We find that neural topic models fare worse in both respects compared to an established classical method. We take a step toward addressing both issues in tandem by demonstrating that a straightforward ensembling method can reliably outperform the members of the ensemble.
http://arxiv.org/abs/2210.16162v1
"2022-10-28T14:38:50Z"
cs.CL, cs.HC
2,022
BERT-Flow-VAE: A Weakly-supervised Model for Multi-Label Text Classification
Ziwen Liu, Josep Grau-Bove, Scott Allan Orr
Multi-label Text Classification (MLTC) is the task of categorizing documents into one or more topics. Considering the large volumes of data and varying domains of such tasks, fully supervised learning requires manually fully annotated datasets which is costly and time-consuming. In this paper, we propose BERT-Flow-VAE (BFV), a Weakly-Supervised Multi-Label Text Classification (WSMLTC) model that reduces the need for full supervision. This new model (1) produces BERT sentence embeddings and calibrates them using a flow model, (2) generates an initial topic-document matrix by averaging results of a seeded sparse topic model and a textual entailment model which only require surface name of topics and 4-6 seed words per topic, and (3) adopts a VAE framework to reconstruct the embeddings under the guidance of the topic-document matrix. Finally, (4) it uses the means produced by the encoder model in the VAE architecture as predictions for MLTC. Experimental results on 6 multi-label datasets show that BFV can substantially outperform other baseline WSMLTC models in key metrics and achieve approximately 84% performance of a fully-supervised model.
http://arxiv.org/abs/2210.15225v1
"2022-10-27T07:18:56Z"
cs.CL
2,022
ProSiT! Latent Variable Discovery with PROgressive SImilarity Thresholds
Tommaso Fornaciari, Dirk Hovy, Federico Bianchi
The most common ways to explore latent document dimensions are topic models and clustering methods. However, topic models have several drawbacks: e.g., they require us to choose the number of latent dimensions a priori, and the results are stochastic. Most clustering methods have the same issues and lack flexibility in various ways, such as not accounting for the influence of different topics on single documents, forcing word-descriptors to belong to a single topic (hard-clustering) or necessarily relying on word representations. We propose PROgressive SImilarity Thresholds - ProSiT, a deterministic and interpretable method, agnostic to the input format, that finds the optimal number of latent dimensions and only has two hyper-parameters, which can be set efficiently via grid search. We compare this method with a wide range of topic models and clustering methods on four benchmark data sets. In most settings, ProSiT matches or outperforms the other methods in terms of six metrics of topic coherence and distinctiveness, producing replicable, deterministic results.
http://arxiv.org/abs/2210.14763v1
"2022-10-26T14:52:44Z"
cs.CL
2,022
Identifying Crisis Response Communities in Online Social Networks for Compound Disasters: The Case of Hurricane Laura and Covid-19
Khondhaker Al Momin, H M Imran Kays, Arif Mohaimin Sadri
Online social networks allow different agencies and the public to interact and share the underlying risks and protective actions during major disasters. This study revealed such crisis communication patterns during hurricane Laura compounded by the COVID-19 pandemic. Laura was one of the strongest (Category 4) hurricanes on record to make landfall in Cameron, Louisiana. Using the Application Programming Interface (API), this study utilizes large-scale social media data obtained from Twitter through the recently released academic track that provides complete and unbiased observations. The data captured publicly available tweets shared by active Twitter users from the vulnerable areas threatened by Laura. Online social networks were constructed based on the user-influence feature (mentions or tags) that allows users to notify other users when posting a tweet. Using network science theories and advanced community detection algorithms, the study split these networks into twenty-one components of various sizes, the largest of which contained eight well-defined communities. Several natural language processing techniques (i.e., word clouds, bigrams, topic modeling) were applied to the tweets shared by the users in these communities to observe their risk-taking or risk-averse behavior during a major compounding crisis. Social media accounts of local news media, radio, universities, and popular sports pages were among those that engaged heavily and interacted closely with local residents. In contrast, emergency management and planning units in the area engaged less with the public. The findings of this study provide novel insights into the design of efficient social media communication guidelines to respond better in future disasters.
http://arxiv.org/abs/2210.14970v1
"2022-10-23T16:55:22Z"
cs.SI
2,022
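The mention-network construction and community-detection step described above can be sketched with networkx; the user handles and edges are invented.

```python
# Sketch: build a user mention graph from tweets and detect communities with
# networkx greedy modularity maximization. Handles and edges are invented.
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

mentions = [  # (author, mentioned_user) pairs parsed from tweets
    ("resident1", "local_news"), ("resident2", "local_news"),
    ("resident2", "resident1"), ("fan1", "sports_page"),
    ("fan2", "sports_page"), ("fan1", "fan2"),
]
G = nx.Graph()
G.add_edges_from(mentions)

communities = greedy_modularity_communities(G)
for i, community in enumerate(communities):
    print(f"community {i}:", sorted(community))
```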
Multi-view Semantic Matching of Question retrieval using Fine-grained Semantic Representations
Li Chong, Denghao Ma, Yueguo Chen
As a key task of question answering, question retrieval has attracted much attention from the communities of academia and industry. Previous solutions mainly focus on the translation model, topic model, and deep learning techniques. Distinct from the previous solutions, we propose to construct fine-grained semantic representations of a question by a learned importance score assigned to each keyword, so that we can achieve a fine-grained question matching solution with these semantic representations of different lengths. Accordingly, we propose a multi-view semantic matching model by reusing the important keywords in multiple semantic representations. As a key step in constructing fine-grained semantic representations, we are the first to use a cross-task weakly supervised extraction model that applies question-question labelled signals to supervise the keyword extraction process (i.e. to learn the keyword importance). The extraction model integrates the deep semantic representation and lexical matching information with statistical features to estimate the importance of keywords. We conduct extensive experiments on three public datasets and the experimental results show that our proposed model significantly outperforms the state-of-the-art solutions.
http://arxiv.org/abs/2210.11806v2
"2022-10-21T08:32:38Z"
cs.IR
2,022
Design a Sustainable Micro-mobility Future: Trends and Challenges in the United States and European Union Using Natural Language Processing Techniques
Lilit Avetisyan, Chengxin Zhang, Sue Bai, Ehsan Moradi Pari, Fred Feng, Shan Bao, Feng Zhou
Micro-mobility is promising to contribute to sustainable cities in the future with its efficiency and low cost. To better design such a sustainable future, it is necessary to understand the trends and challenges. Thus, we examined people's opinions on micro-mobility in the US and the EU using Tweets. We used topic modeling based on advanced natural language processing techniques and categorized the data into seven topics: promotion and service, mobility, technical features, acceptance, recreation, infrastructure and regulations. Furthermore, using sentiment analysis, we investigated people's positive and negative attitudes towards specific aspects of these topics and compared the patterns of the trends and challenges in the US and the EU. We found that 1) promotion and service included the majority of Twitter discussions in the both regions, 2) the EU had more positive opinions than the US, 3) micro-mobility devices were more widely used for utilitarian mobility and recreational purposes in the EU than in the US, and 4) compared to the EU, people in the US had many more concerns related to infrastructure and regulation issues. These findings help us understand the trends and challenges and prioritize different aspects in micro-mobility to improve their safety and experience across the two areas for designing a more sustainable micro-mobility future.
http://arxiv.org/abs/2210.11714v2
"2022-10-21T03:47:59Z"
cs.CL, cs.HC, I.5
2,022
Not All Asians are the Same: A Disaggregated Approach to Identifying Anti-Asian Racism in Social Media
Fan Wu, Sanyam Lakhanpal, Qian Li, Kookjin Lee, Doowon Kim, Heewon Chae, Hazel K. Kwon
Recent policy initiatives have acknowledged the importance of disaggregating data pertaining to diverse Asian ethnic communities to gain a more comprehensive understanding of their current status and to improve their overall well-being. However, research on anti-Asian racism has thus far fallen short of properly incorporating data disaggregation practices. Our study addresses this gap by collecting 12-month-long data from X (formerly known as Twitter) that contain diverse sub-ethnic group representations within Asian communities. In this dataset, we break down anti-Asian toxic messages based on both temporal and ethnic factors and conduct a series of comparative analyses of toxic messages, targeting different ethnic groups. Using temporal persistence analysis, $n$-gram-based correspondence analysis, and topic modeling, this study provides compelling evidence that anti-Asian messages comprise various distinctive narratives. Certain messages targeting sub-ethnic Asian groups entail different topics that distinguish them from those targeting Asians in a generic manner or those aimed at major ethnic groups, such as Chinese and Indian. By introducing several techniques that facilitate comparisons of online anti-Asian hate towards diverse ethnic communities, this study highlights the importance of taking a nuanced and disaggregated approach for understanding racial hatred to formulate effective mitigation strategies.
http://arxiv.org/abs/2210.11640v2
"2022-10-20T23:55:50Z"
cs.SI
2,022
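The abstract above compares narratives in messages targeting different groups via topic modeling and n-gram analysis. The following sketch shows one plausible way to run a per-group topic model with scikit-learn; the toy texts, group labels, and parameters are illustrative assumptions and are not drawn from the study's data or its exact modeling choices.

```python
# Minimal sketch: fit a separate LDA topic model per message group and
# print the top terms, so the resulting topics can be compared across groups.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs_by_group = {  # hypothetical placeholder corpora
    "group_a": ["toy message about border policy", "another toy message about policy"],
    "group_b": ["toy message about economic blame", "more toy text about blame"],
}

for group, docs in docs_by_group.items():
    vec = CountVectorizer(ngram_range=(1, 2), stop_words="english")
    X = vec.fit_transform(docs)
    lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)
    terms = vec.get_feature_names_out()
    for k, weights in enumerate(lda.components_):
        top = [terms[i] for i in weights.argsort()[-5:][::-1]]
        print(group, f"topic {k}:", top)
```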
Coordinated Topic Modeling
Pritom Saha Akash, Jie Huang, Kevin Chen-Chuan Chang
We propose a new problem called coordinated topic modeling that imitates human behavior when describing a text corpus. It treats a set of well-defined topics, like the axes of a semantic space, as a reference representation, and then uses these axes to model a corpus in an easily understandable way. This new task helps represent a corpus more interpretably by reusing existing knowledge and also benefits corpus comparison tasks. We design ECTM, an embedding-based coordinated topic model that effectively uses the reference representation to capture the target corpus-specific aspects while maintaining each topic's global semantics. In ECTM, we introduce topic- and document-level supervision with a self-training mechanism to solve the problem. Finally, extensive experiments on multiple domains show the superiority of our model over other baselines.
http://arxiv.org/abs/2210.08559v2
"2022-10-16T15:10:54Z"
cs.CL, cs.IR
2,022
HyperMiner: Topic Taxonomy Mining with Hyperbolic Embedding
Yishi Xu, Dongsheng Wang, Bo Chen, Ruiying Lu, Zhibin Duan, Mingyuan Zhou
Embedded topic models are able to learn interpretable topics even with large and heavy-tailed vocabularies. However, they generally hold the Euclidean embedding space assumption, leading to a basic limitation in capturing hierarchical relations. To this end, we present a novel framework that introduces hyperbolic embeddings to represent words and topics. With the tree-likeness property of hyperbolic space, the underlying semantic hierarchy among words and topics can be better exploited to mine more interpretable topics. Furthermore, due to the superiority of hyperbolic geometry in representing hierarchical data, tree-structure knowledge can also be naturally injected to guide the learning of a topic hierarchy. Therefore, we further develop a regularization term based on the idea of contrastive learning to inject prior structural knowledge efficiently. Experiments on both topic taxonomy discovery and document representation demonstrate that the proposed framework achieves improved performance against existing embedded topic models.
http://arxiv.org/abs/2210.10625v1
"2022-10-16T02:54:17Z"
cs.IR, cs.LG
2,022
Ensemble Creation via Anchored Regularization for Unsupervised Aspect Extraction
Pulah Dhandekar, Manu Joseph
Aspect-based sentiment analysis is the most granular form of sentiment analysis that can be performed on documents or sentences. Besides delivering the most insights at a finer grain, it also poses equally daunting challenges, one of them being the shortage of labelled data. To bring in value right out of the box for the text data being generated at a rapid pace today, unsupervised aspect-based sentiment analysis allows us to generate insights without investing time or money in producing labels. From topic modelling approaches to recent deep learning-based aspect extraction models, this domain has seen a lot of development. One of the models we improve upon is ABAE, which reconstructs sentences as a linear combination of the aspect terms present in them. In this research, we explore how we can use information from another unsupervised model to regularize ABAE, leading to better performance. We contrast it with a baseline rule-based ensemble and show that the ensemble methods work better than the individual models and that the regularization-based ensemble performs better than the rule-based one.
http://arxiv.org/abs/2210.06829v1
"2022-10-13T08:23:56Z"
cs.CL, cs.LG
2,022
Good AI for Good: How AI Strategies of the Nordic Countries Address the Sustainable Development Goals
Andreas Theodorou, Juan Carlos Nieves, Virginia Dignum
Developed and used responsibly, Artificial Intelligence (AI) is a force for global sustainable development. Given this opportunity, we expect that many of the existing guidelines and recommendations for trustworthy or responsible AI will provide explicit guidance on how AI can contribute to the achievement of the United Nations' Sustainable Development Goals (SDGs). This would in particular be the case for the AI strategies of the Nordic countries, given their high ranking and overall political focus when it comes to the achievement of the SDGs. In this paper, we present an analysis of existing AI recommendations from 10 different countries or organisations, based on topic modelling techniques, to identify how much these strategy documents refer to the SDGs. The analysis shows no significant difference in how much these documents refer to the SDGs. Moreover, the Nordic countries do not differ from the others, despite their long-term commitment to the SDGs. More importantly, references to \textit{gender equality} (SDG 5) and \textit{inequality} (SDG 10), as well as references to the environmental impact of AI development and use, and in particular the consequences for life on earth, are notably missing from the guidelines.
http://arxiv.org/abs/2210.09010v1
"2022-10-08T08:17:30Z"
cs.CY, cs.AI
2,022
An Empirical Study on How the Developers Discussed about Pandas Topics
Sajib Kumar Saha Joy, Farzad Ahmed, Al Hasib Mahamud, Nibir Chandra Mandal
Pandas is a software library used for data analysis in the Python programming language. Because pandas is a fast, easy, and open-source data analysis tool, it is widely used in different software engineering projects spanning software development, machine learning, computer vision, natural language processing, robotics, and other areas. Consequently, software developers show great interest in pandas, and a large number of discussions about it have become prominent in online developer forums such as Stack Overflow (SO). Such discussions can help to understand the popularity of the pandas library as well as the importance, prevalence, and difficulty of pandas topics. The main aim of this research paper is to determine the popularity and difficulty of pandas topics. To this end, we collect SO posts related to pandas discussions and apply topic modeling to the textual contents of the posts. We found 26 topics, which we further categorized into 5 broad categories. We observed that developers discuss a variety of pandas topics on SO related to error and exception handling, visualization, external support, dataframes, and optimization. In addition, a trend chart is generated according to the discussion of topics over a predefined time series. The findings of this paper can help developers, educators, and learners. For example, beginner developers can learn the most important pandas topics, which are essential for developing any model. Educators can identify the topics that learners find hard and build tutorials that make those pandas topics understandable. This empirical study thus makes it possible to understand developers' preferences regarding pandas topics by processing their SO posts.
http://arxiv.org/abs/2210.03519v2
"2022-10-07T13:04:58Z"
cs.SE, cs.AI, cs.IR
2,022
From Rules to Regs: A Structural Topic Model of Collusion Research
W. Benedikt Schmal
Collusive practices of firms continue to be a major threat to competition and consumer welfare. Academic research on this topic aims at understanding the economic drivers and behavioral patterns of cartels, among others, to guide competition authorities on how to tackle them. Utilizing topical machine learning techniques from the domain of natural language processing enables me to analyze the publications on this issue over more than 20 years in a novel way. Coming from a stylized oligopoly-game-theory focus, researchers have recently turned toward empirical case studies of bygone cartels. Uni- and multivariate time series analyses reveal that the latter did not supersede the former but filled the gap that the decline in rule-based reasoning has left. Together with a tendency towards monocultures in the topics covered and an endogenous constriction of topic variety, the course of cartel research has changed notably: the variety of subjects included has grown, but the pluralism in economic questions addressed is in decline. It remains to be seen whether this will benefit or harm the cartel detection capabilities of authorities in the future.
http://arxiv.org/abs/2210.02957v1
"2022-10-06T14:49:55Z"
econ.GN, q-fin.EC
2,022
Theme and Topic: How Qualitative Research and Topic Modeling Can Be Brought Together
Marco Gillies, Dhiraj Murthy, Harry Brenton, Rapheal Olaniyan
Qualitative research is an approach to understanding social phenomena based around human interpretation of data, particularly text. Probabilistic topic modelling is a machine learning approach that is also based around the analysis of text and is often used to understand social phenomena. Both approaches aim to extract important themes or topics in a textual corpus, and therefore we may see them as analogous to each other. However, there are also considerable differences in how the two approaches function: one is a highly human interpretive process, the other automated and statistical. In this paper we use this analogy as the basis for our Theme and Topic system, a tool for qualitative researchers to conduct textual research that integrates topic modelling into an accessible interface. This is an example of a more general approach to the design of interactive machine learning systems in which existing human professional processes can be used as the model for processes involving machine learning. This has the particular benefit of providing a familiar approach to existing professionals, which may make machine learning seem less alien and easier to learn. Our design approach has two elements. We first investigate the steps professionals go through when performing tasks and design a workflow for Theme and Topic that integrates machine learning. We then design interfaces for topic modelling in which familiar concepts from qualitative research are mapped onto machine learning concepts. This makes the machine learning concepts more familiar and easier to learn for qualitative researchers.
http://arxiv.org/abs/2210.00707v1
"2022-10-03T04:21:08Z"
cs.HC, cs.LG
2,022
Transition to Adulthood for Young People with Intellectual or Developmental Disabilities: Emotion Detection and Topic Modeling
Yan Liu, Maria Laricheva, Chiyu Zhang, Patrick Boutet, Guanyu Chen, Terence Tracey, Giuseppe Carenini, Richard Young
Transition to adulthood is an essential life stage for many families. Prior research has shown that young people with intellectual or developmental disabilities (IDD) face more challenges than their peers. This study explores how to use natural language processing (NLP) methods, especially unsupervised machine learning, to assist psychologists in analyzing emotions and sentiments, and how to use topic modeling to identify common issues and challenges that young people with IDD and their families face. Additionally, the results were compared to those obtained from young people without IDD who were in transition to adulthood. The findings showed that NLP methods can be very useful for psychologists to analyze emotions, conduct cross-case analysis, and summarize key topics from conversational data. Our Python code is available at https://github.com/mlaricheva/emotion_topic_modeling.
http://arxiv.org/abs/2209.10477v1
"2022-09-21T16:23:45Z"
cs.CL, stat.AP, stat.ML
2,022
Twitter Topic Classification
Dimosthenis Antypas, Asahi Ushio, Jose Camacho-Collados, Leonardo Neves, Vítor Silva, Francesco Barbieri
Social media platforms host discussions about a wide variety of topics that arise every day. Making sense of all the content and organising it into categories is an arduous task. A common way to deal with this issue is to rely on topic modeling, but topics discovered using this technique are difficult to interpret and can differ from corpus to corpus. In this paper, we present a new task based on tweet topic classification and release two associated datasets. Given a wide range of topics covering the most important discussion points on social media, we provide training and testing data from recent time periods that can be used to evaluate tweet classification models. Moreover, we perform a quantitative evaluation and analysis of current general- and domain-specific language models on the task, which provides more insights into the challenges and nature of the task.
http://arxiv.org/abs/2209.09824v1
"2022-09-20T16:13:52Z"
cs.CL
2,022
Knowledge-Aware Bayesian Deep Topic Model
Dongsheng Wang, Yishi Xu, Miaoge Li, Zhibin Duan, Chaojie Wang, Bo Chen, Mingyuan Zhou
We propose a Bayesian generative model for incorporating prior domain knowledge into hierarchical topic modeling. Although embedded topic models (ETMs) and their variants have achieved promising performance in text analysis, they mainly focus on mining word co-occurrence patterns, ignoring potentially easy-to-obtain prior topic hierarchies that could help enhance topic coherence. While several knowledge-based topic models have recently been proposed, they are either only applicable to shallow hierarchies or sensitive to the quality of the provided prior knowledge. To this end, we develop a novel deep ETM that jointly models the documents and the given prior knowledge by embedding the words and topics into the same space. Guided by the provided knowledge, the proposed model tends to discover topic hierarchies that are organized into interpretable taxonomies. Besides, with a technique for adapting a given graph, our extended version allows the provided prior topic structure to be fine-tuned to match the target corpus. Extensive experiments show that our proposed model efficiently integrates the prior knowledge and improves both hierarchical topic discovery and document representation.
http://arxiv.org/abs/2209.14228v1
"2022-09-20T09:16:05Z"
cs.CL, cs.LG
2,022
Embedded Topics in the Stochastic Block Model
Rémi Boutin, Charles Bouveyron, Pierre Latouche
Communication networks such as emails or social networks are now ubiquitous and their analysis has become a strategic field. In many applications, the goal is to automatically extract relevant information by looking at the nodes and their connections. Unfortunately, most of the existing methods focus on analysing the presence or absence of edges, and textual data is often discarded. However, all communication networks actually come with textual data on the edges. To take this specificity into account, we consider in this paper networks for which two nodes are linked if and only if they share textual data. We introduce a deep latent variable model, called ETSBM, that handles embedded topics and simultaneously performs clustering on the nodes while modelling the topics used between the different clusters. ETSBM extends both the stochastic block model (SBM) and the embedded topic model (ETM), which are core models for studying networks and corpora, respectively. The inference is done using a variational-Bayes expectation-maximisation algorithm combined with stochastic gradient descent. The methodology is evaluated on synthetic data and on a real-world dataset.
http://arxiv.org/abs/2209.10097v2
"2022-09-19T21:44:39Z"
cs.SI, stat.ME
2,022
Doctors vs. Nurses: Understanding the Great Divide in Vaccine Hesitancy among Healthcare Workers
Sajid Hussain Rafi Ahamed, Shahid Shakil, Hanjia Lyu, Xinping Zhang, Jiebo Luo
Healthcare workers such as doctors and nurses are expected to be trustworthy and credible sources of vaccine-related information. Their opinions toward the COVID-19 vaccines may influence vaccine uptake among the general population. However, vaccine hesitancy is still an important issue even among healthcare workers. Therefore, it is critical to understand their opinions to help reduce the level of vaccine hesitancy. There have been studies examining healthcare workers' viewpoints on COVID-19 vaccines using questionnaires. Reportedly, a considerably higher proportion of vaccine hesitancy is observed among nurses compared to doctors. We intend to verify and study this phenomenon at a much larger scale and at a finer grain using social media data, which have been effectively and efficiently leveraged by researchers to address real-world issues during the COVID-19 pandemic. More specifically, we use a keyword search to identify healthcare workers and further classify them into doctors and nurses based on the profile descriptions of the corresponding Twitter users. Moreover, we apply a transformer-based language model to remove irrelevant tweets. Sentiment analysis and topic modeling are employed to analyze and compare the sentiment and thematic differences in the tweets posted by doctors and nurses. We find that doctors are overall more positive toward the COVID-19 vaccines. The focuses of doctors and nurses when they discuss vaccines in a negative way are in general different: doctors are more concerned with the effectiveness of the vaccines against newer variants, while nurses pay more attention to the potential side effects on children. Therefore, we suggest that more customized strategies should be deployed when communicating with different groups of healthcare workers.
http://arxiv.org/abs/2209.04874v2
"2022-09-11T14:22:16Z"
cs.SI, cs.CY
2,022
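The abstract above mentions classifying Twitter users into doctors and nurses from their profile descriptions via keyword matching. Below is a minimal sketch of that step; the keyword lists and example profiles are hypothetical assumptions, not the authors' actual lists or data.

```python
# Minimal sketch: keyword-based classification of profile descriptions
# into doctors vs. nurses. Keyword sets are illustrative placeholders.
DOCTOR_KEYWORDS = {"doctor", "physician", "md", "surgeon"}
NURSE_KEYWORDS = {"nurse", "rn", "nursing"}

def classify_profile(description: str) -> str:
    tokens = set(description.lower().replace(",", " ").replace("|", " ").split())
    is_doctor = bool(tokens & DOCTOR_KEYWORDS)
    is_nurse = bool(tokens & NURSE_KEYWORDS)
    if is_doctor and not is_nurse:
        return "doctor"
    if is_nurse and not is_doctor:
        return "nurse"
    return "unknown"  # ambiguous or non-healthcare profiles would be excluded

print(classify_profile("Pediatric nurse, RN, vaccine advocate"))  # -> nurse
print(classify_profile("MD | infectious disease physician"))      # -> doctor
```

In practice such rule-based labels would be noisy, which is presumably why the study also filters irrelevant tweets with a transformer-based language model before the sentiment and topic analyses.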
Developer Discussion Topics on the Adoption and Barriers of Low Code Software Development Platforms
Md Abdullah Al Alamin, Gias Uddin, Sanjay Malakar, Sadia Afroz, Tameem Bin Haider, Anindya Iqbal
Low-code software development (LCSD) is an emerging approach to democratize application development for software practitioners from diverse backgrounds. LCSD platforms promote rapid application development with a drag-and-drop interface and minimal programming by hand. As it is a relatively new paradigm, it is vital to study developers' difficulties when adopting LCSD platforms. Software engineers frequently use the online developer forum Stack Overflow (SO) to seek assistance with technical issues. We observe a growing body of LCSD-related posts in SO. This paper presents an empirical study of around 33K SO posts containing discussions of 38 popular LCSD platforms. We use Topic Modeling to determine the topics discussed in those posts. Additionally, we examine how these topics are spread across the various phases of the agile software development life cycle (SDLC) and which part of LCSD is the most popular and challenging. Our study offers several interesting findings. First, we find 40 LCSD topics that we group into five categories: Application Customization, Database, and File Management, Platform Adoption, Platform Maintenance, and Third-party API Integration. Second, while the Application Customization (30\%) and Data Storage (25\%) \rev{topic} categories are the most common, inquiries relating to several other categories (e.g., the Platform Adoption \rev{topic} category) have gained considerable attention in recent years. Third, all topic categories are evolving rapidly, especially during the Covid-19 pandemic. The findings of this study have implications for all three LCSD stakeholders: LCSD platform vendors, LCSD developers/practitioners, Researchers, and Educators. Researchers and LCSD platform vendors can collaborate to improve different aspects of LCSD, such as better tutorial-based documentation, testing, and DevOps support.
http://arxiv.org/abs/2209.00844v1
"2022-09-02T07:01:16Z"
cs.SE
2,022
Demystifying the COVID-19 vaccine discourse on Twitter
Zainab Zaidi, Mengbin Ye, Fergus John Samon, Abdisalam Jama, Binduja Gopalakrishnan, Chenhao Gu, Shanika Karunasekera, Jamie Evans, Yoshihisa Kashima
Developing an understanding of the public discourse on COVID-19 vaccination on social media is important not only for addressing the current COVID-19 pandemic, but also for future pathogen outbreaks. We examine a Twitter dataset containing 75 million English tweets discussing COVID-19 vaccination from March 2020 to March 2021. We train a stance detection algorithm using natural language processing (NLP) techniques to classify tweets as `anti-vax' or `pro-vax', and examine the main topics of discourse using topic modelling techniques. While pro-vax tweets (37 million) far outnumbered anti-vax tweets (10 million), a majority of tweets from both stances (63% anti-vax and 53% pro-vax tweets) came from dual-stance users who posted both pro- and anti-vax tweets during the observation period. Pro-vax tweets focused mostly on vaccine development, while anti-vax tweets covered a wide range of topics, some of which included genuine concerns, though there was a large dose of falsehoods. A number of topics were common to both stances, though pro- and anti-vax tweets discussed them from opposite viewpoints. Memes and jokes were amongst the most retweeted messages. Whereas concerns about polarisation and online prevalence of anti-vax discourse are unfounded, targeted countering of falsehoods is important.
http://arxiv.org/abs/2208.13523v1
"2022-08-29T11:56:21Z"
cs.SI, cs.CL
2,022
Multi-dimensional Racism Classification during COVID-19: Stigmatization, Offensiveness, Blame, and Exclusion
Xin Pei, Deval Mehta
Transcending the binary categorization of racist texts, our study takes cues from social science theories to develop a multi-dimensional model for racism detection, namely stigmatization, offensiveness, blame, and exclusion. With the aid of BERT and topic modeling, this categorical detection enables insights into the underlying subtlety of racist discussion on digital platforms during COVID-19. Our study contributes to enriching the scholarly discussion on deviant racist behaviours on social media. First, a stage-wise analysis is applied to capture the dynamics of topic changes across the early stages of COVID-19, which transformed from a domestic epidemic into an international public health emergency and later into a global pandemic. Furthermore, mapping this trend enables more accurate prediction of the evolution of public opinion concerning racism in the offline world and, meanwhile, the enactment of specified intervention strategies to combat the upsurge of racism during a global public health crisis like COVID-19. In addition, this interdisciplinary research also points out a direction for future studies on social network analysis and mining. Integrating social science perspectives into the development of computational methods provides insights into more accurate data detection and analytics.
http://arxiv.org/abs/2208.13318v1
"2022-08-29T00:38:56Z"
cs.CY, cs.AI, cs.SI
2,022
On confidence intervals for precision matrices and the eigendecomposition of covariance matrices
Teodora Popordanoska, Aleksei Tiulpin, Wacha Bounliphone, Matthew B. Blaschko
The eigendecomposition of a matrix is the central procedure in probabilistic models based on matrix factorization, for instance principal component analysis and topic models. Quantifying the uncertainty of such a decomposition based on a finite sample estimate is essential to reasoning under uncertainty when employing such models. This paper tackles the challenge of computing confidence bounds on the individual entries of eigenvectors of a covariance matrix of fixed dimension. Moreover, we derive a method to bound the entries of the inverse covariance matrix, the so-called precision matrix. The assumptions behind our method are minimal and require that the covariance matrix exists, and its empirical estimator converges to the true covariance. We make use of the theory of U-statistics to bound the $L_2$ perturbation of the empirical covariance matrix. From this result, we obtain bounds on the eigenvectors using Weyl's theorem and the eigenvalue-eigenvector identity and we derive confidence intervals on the entries of the precision matrix using matrix inversion perturbation bounds. As an application of these results, we demonstrate a new statistical test, which allows us to test for non-zero values of the precision matrix. We compare this test to the well-known Fisher-z test for partial correlations, and demonstrate the soundness and scalability of the proposed statistical test, as well as its application to real-world data from medical and physics domains.
http://arxiv.org/abs/2208.11977v1
"2022-08-25T10:12:53Z"
math.ST, cs.LG, stat.TH
2,022
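The abstract above compares the proposed test against the Fisher-z test for partial correlations. As a point of reference, the sketch below computes partial correlations from the empirical precision matrix and applies the classical Fisher-z test on synthetic data; it illustrates only the baseline, not the authors' U-statistic-based perturbation bounds, and the sample sizes and dimensions are arbitrary assumptions.

```python
# Minimal sketch: Fisher-z test for a partial correlation derived from the
# empirical precision matrix, on synthetic Gaussian data.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
n, p = 500, 4
X = rng.multivariate_normal(np.zeros(p), np.eye(p), size=n)

precision = np.linalg.inv(np.cov(X, rowvar=False))

def fisher_z_test(precision, i, j, n, p):
    # Partial correlation of variables i and j given all remaining variables.
    r = -precision[i, j] / np.sqrt(precision[i, i] * precision[j, j])
    z = np.arctanh(r)                    # Fisher z-transform
    se = 1.0 / np.sqrt(n - (p - 2) - 3)  # p - 2 conditioning variables
    stat = z / se
    pval = 2 * (1 - norm.cdf(abs(stat)))  # two-sided test of rho = 0
    return r, pval

print(fisher_z_test(precision, 0, 1, n, p))
```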
GRETEL: Graph Contrastive Topic Enhanced Language Model for Long Document Extractive Summarization
Qianqian Xie, Jimin Huang, Tulika Saha, Sophia Ananiadou
Recently, neural topic models (NTMs) have been incorporated into pre-trained language models (PLMs), to capture the global semantic information for text summarization. However, in these methods, there remain limitations in the way they capture and integrate the global semantic information. In this paper, we propose a novel model, the graph contrastive topic enhanced language model (GRETEL), that incorporates the graph contrastive topic model with the pre-trained language model, to fully leverage both the global and local contextual semantics for long document extractive summarization. To better capture and incorporate the global semantic information into PLMs, the graph contrastive topic model integrates the hierarchical transformer encoder and the graph contrastive learning to fuse the semantic information from the global document context and the gold summary. To this end, GRETEL encourages the model to efficiently extract salient sentences that are topically related to the gold summary, rather than redundant sentences that cover sub-optimal topics. Experimental results on both general domain and biomedical datasets demonstrate that our proposed method outperforms SOTA methods.
http://arxiv.org/abs/2208.09982v1
"2022-08-21T23:09:29Z"
cs.CL
2,022