Each record below lists: Title, Authors, Abstract, entry_id (arXiv URL), Date, Categories, and year.
Enhancing Knowledge Retrieval with Topic Modeling for Knowledge-Grounded Dialogue
Nhat Tran, Diane Litman
Knowledge retrieval is one of the major challenges in building a knowledge-grounded dialogue system. A common method is to use a neural retriever with a distributed approximate nearest-neighbor database to quickly find the relevant knowledge sentences. In this work, we propose an approach that utilizes topic modeling on the knowledge base to further improve retrieval accuracy and, as a result, improve response generation. Additionally, we experiment with a large language model, ChatGPT, to take advantage of the improved retrieval performance to further improve the generation results. Experimental results on two datasets show that our approach can increase retrieval and generation performance. The results also indicate that ChatGPT is a better response generator for knowledge-grounded dialogue when relevant knowledge is provided.
http://arxiv.org/abs/2405.04713v1
"2024-05-07T23:32:32"
cs.IR
2,024
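As an illustration of the nearest-neighbor knowledge lookup the entry above builds on, here is a minimal FAISS sketch; the index type, embedding dimension, and random vectors are placeholder assumptions, not details from the paper.

```python
import faiss
import numpy as np

d = 384                                                    # embedding dimension (assumed)
rng = np.random.default_rng(0)
knowledge = rng.normal(size=(10000, d)).astype("float32")  # knowledge-sentence embeddings
query = rng.normal(size=(1, d)).astype("float32")          # dialogue-context embedding

index = faiss.IndexFlatL2(d)              # exact index; ANN variants (e.g. IVF) scale further
index.add(knowledge)
distances, ids = index.search(query, 5)   # ids of the 5 nearest knowledge sentences
print(ids)
```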
Inferring Discussion Topics about Exploitation of Vulnerabilities from Underground Hacking Forums
Felipe Moreno-Vera
The increasing sophistication of cyber threats necessitates proactive measures to identify vulnerabilities and potential exploits. Underground hacking forums serve as breeding grounds for the exchange of hacking techniques and discussions related to exploitation. In this research, we propose an innovative approach using topic modeling to analyze and uncover key themes in vulnerabilities discussed within these forums. The objective of our study is to develop a machine learning-based model that can automatically detect and classify vulnerability-related discussions in underground hacking forums. By monitoring and analyzing the content of these forums, we aim to identify emerging vulnerabilities, exploit techniques, and potential threat actors. To achieve this, we collect a large-scale dataset consisting of posts and threads from multiple underground forums. We preprocess and clean the data to ensure accuracy and reliability. Leveraging topic modeling techniques, specifically Latent Dirichlet Allocation (LDA), we uncover latent topics and their associated keywords within the dataset. This enables us to identify recurring themes and prevalent discussions related to vulnerabilities, exploits, and potential targets.
http://arxiv.org/abs/2405.04561v1
"2024-05-07T14:54:32"
cs.CR, cs.AI, cs.CY, cs.LG
2,024
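A minimal gensim sketch of the kind of LDA pipeline the entry above describes; the example posts, topic count, and parameters are illustrative stand-ins, not the authors' setup.

```python
from gensim import corpora
from gensim.models import LdaModel
from gensim.utils import simple_preprocess

posts = [
    "new sql injection exploit for login forms",
    "buffer overflow vulnerability in legacy ftp servers",
    "phishing kit targeting banking credentials",
]

tokens = [simple_preprocess(p) for p in posts]    # tokenize and lowercase
dictionary = corpora.Dictionary(tokens)           # word-to-id mapping
corpus = [dictionary.doc2bow(t) for t in tokens]  # bag-of-words vectors

lda = LdaModel(corpus=corpus, id2word=dictionary, num_topics=2,
               passes=10, random_state=0)

for topic_id, words in lda.print_topics(num_words=5):
    print(topic_id, words)                        # top keywords per latent topic
```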
Identifying Narrative Patterns and Outliers in Holocaust Testimonies Using Topic Modeling
Maxim Ifergan, Renana Keydar, Omri Abend, Amit Pinchevski
The vast collection of Holocaust survivor testimonies presents invaluable historical insights but poses challenges for manual analysis. This paper leverages advanced Natural Language Processing (NLP) techniques to explore the USC Shoah Foundation Holocaust testimony corpus. By treating testimonies as structured question-and-answer sections, we apply topic modeling to identify key themes. We experiment with BERTopic, which leverages recent advances in language modeling technology. We align testimony sections into fixed parts, revealing the evolution of topics across the corpus of testimonies. This highlights both a common narrative schema and divergences between subgroups based on age and gender. We introduce a novel method to identify testimonies within groups that exhibit atypical topic distributions resembling those of other groups. This study offers unique insights into the complex narratives of Holocaust survivors, demonstrating the power of NLP to illuminate historical discourse and identify potential deviations in survivor experiences.
http://arxiv.org/abs/2405.02650v1
"2024-05-04T12:29:00"
cs.CL, cs.AI
2,024
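A hedged sketch of applying BERTopic to question-and-answer sections, as in the entry above; `load_testimony_sections` is a hypothetical loader and the settings are assumptions.

```python
from bertopic import BERTopic

# Hypothetical loader; BERTopic needs a reasonably large corpus to work well.
sections = load_testimony_sections()

topic_model = BERTopic(language="english", min_topic_size=10)
topics, probs = topic_model.fit_transform(sections)

print(topic_model.get_topic_info())  # one row per discovered topic
print(topic_model.get_topic(0))      # top words and weights for topic 0
```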
A Named Entity Recognition and Topic Modeling-based Solution for Locating and Better Assessment of Natural Disasters in Social Media
Ayaz Mehmood, Muhammad Tayyab Zamir, Muhammad Asif Ayub, Nasir Ahmad, Kashif Ahmad
Over the last decade, similar to other application domains, social media content has been proven very effective in disaster informatics. However, due to the unstructured nature of the data, several challenges are associated with disaster analysis in social media content. To fully explore the potential of social media content in disaster informatics, access to relevant content and the correct geo-location information is very critical. In this paper, we propose a three-step solution to tackle these challenges. Firstly, the proposed solution aims to classify social media posts into relevant and irrelevant posts, followed by the automatic extraction of location information from the posts' text through Named Entity Recognition (NER) analysis. Finally, to quickly analyze the topics covered in large volumes of social media posts, we perform topic modeling, resulting in a list of top keywords that highlight the issues discussed in the tweets. For the Relevant Classification of Twitter Posts (RCTP), we proposed a merit-based fusion framework combining the capabilities of four different models, namely BERT, RoBERTa, DistilBERT, and ALBERT, obtaining the highest F1-score of 0.933 on a benchmark dataset. For the Location Extraction from Twitter Text (LETT), we evaluated four models, namely BERT, RoBERTa, DistilBERT, and Electra, in an NER framework, obtaining the highest F1-score of 0.960. For topic modeling, we used the BERTopic library to discover the hidden topic patterns in the relevant tweets. The experimental results of all the components of the proposed end-to-end solution are very encouraging and hint at the potential of social media content and NLP in disaster management.
http://arxiv.org/abs/2405.00903v1
"2024-05-01T23:19:49"
cs.CL
2,024
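The NER step for extracting locations from post text, as described above, can be sketched with a generic Hugging Face token-classification pipeline; the model choice and example tweet are assumptions rather than the authors' fine-tuned setup.

```python
from transformers import pipeline

# Off-the-shelf NER model; the paper fine-tunes its own models instead.
ner = pipeline("ner", model="dslim/bert-base-NER", aggregation_strategy="simple")

tweet = "Flood waters are rising fast near the river bridge in Nowshera, Pakistan"
entities = ner(tweet)

# Keep only location entities (entity_group == "LOC").
locations = [e["word"] for e in entities if e["entity_group"] == "LOC"]
print(locations)
```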
Addressing Topic Granularity and Hallucination in Large Language Models for Topic Modelling
Yida Mu, Peizhen Bai, Kalina Bontcheva, Xingyi Song
Large language models (LLMs) with their strong zero-shot topic extraction capabilities offer an alternative to probabilistic topic modelling and closed-set topic classification approaches. As zero-shot topic extractors, LLMs are expected to understand human instructions to generate relevant and non-hallucinated topics based on the given documents. However, LLM-based topic modelling approaches often have difficulty generating topics that adhere to the granularity specified in human instructions, frequently resulting in many near-duplicate topics. Furthermore, methods for addressing hallucinated topics generated by LLMs have not yet been investigated. In this paper, we focus on addressing the issues of topic granularity and hallucinations for better LLM-based topic modelling. To this end, we introduce a novel approach that leverages Direct Preference Optimisation (DPO) to fine-tune open-source LLMs, such as Mistral-7B. Our approach does not rely on traditional human annotation to rank preferred answers but employs a reconstruction pipeline to modify raw topics generated by LLMs, thus enabling a fast and efficient training and inference framework. Comparative experiments show that our fine-tuning approach not only significantly improves the LLM's capability to produce more coherent, relevant, and precise topics, but also reduces the number of hallucinated topics.
http://arxiv.org/abs/2405.00611v1
"2024-05-01T16:32:07"
cs.CL
2,024
Unraveling the Italian and English Telegram Conspiracy Spheres through Message Forwarding
Lorenzo Alvisi, Serena Tardelli, Maurizio Tesconi
Telegram has grown into a significant platform for news and information sharing, favored for its anonymity and minimal moderation. This openness, however, makes it vulnerable to misinformation and conspiracy theories. In this study, we explore the dynamics of conspiratorial narrative dissemination within Telegram, focusing on Italian and English landscapes. In particular, we leverage the mechanism of message forwarding within Telegram and collect two extensive datasets through a snowball strategy. We adopt a network-based approach and build the Italian and English Telegram networks to reveal their respective communities. By employing topic modeling, we uncover distinct narratives and dynamics of misinformation spread. Results highlight differences between the Italian and English conspiracy landscapes, with Italian discourse involving assorted conspiracy theories and alternative news sources intertwined with legitimate news sources, whereas English discourse is characterized by a more focused approach on specific narratives such as QAnon and political conspiracies. Finally, we show that our methodology exhibits robustness across initial seed selections, suggesting broader applicability. This study contributes to understanding information and misinformation spread in the Italian and English Telegram ecosystems through the mechanism of message forwarding.
http://arxiv.org/abs/2404.18602v1
"2024-04-29T11:17:42"
cs.SI, F.2.2, I.2.7
2,024
Quantitative Tools for Time Series Analysis in Natural Language Processing: A Practitioners Guide
W. Benedikt Schmal
Natural language processing tools have become frequently used in social sciences such as economics, political science, and sociology. Many publications apply topic modeling to elicit latent topics in text corpora and their development over time. Here, most publications rely on visual inspection and draw inferences about changes, structural breaks, and developments over time. We suggest using univariate time series econometrics to introduce more quantitative rigor that can strengthen the analyses. In particular, we discuss the econometric topics of non-stationarity as well as structural breaks. This paper serves as a comprehensive practitioner's guide, providing researchers in the social and life sciences as well as the humanities with concise advice on how to implement econometric time series methods to thoroughly investigate topic prevalences over time. We provide coding advice for the statistical software R throughout the paper. The application of the discussed tools to a sample dataset completes the analysis.
http://arxiv.org/abs/2404.18499v1
"2024-04-29T08:41:17"
econ.GN, q-fin.EC
2,024
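The paper above gives its coding advice in R; as a rough Python analogue of one of the checks it discusses, here is an Augmented Dickey-Fuller test for non-stationarity of a topic-prevalence series. The series is simulated for illustration.

```python
import numpy as np
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(0)
# Random-walk-like topic share over 120 time steps (simulated stand-in).
prevalence = np.cumsum(rng.normal(0, 0.01, size=120)) + 0.2

stat, pvalue, *_ = adfuller(prevalence)
print(f"ADF statistic={stat:.3f}, p-value={pvalue:.3f}")
# A large p-value means a unit root cannot be rejected: apparent trends in
# the topic's prevalence should not be over-interpreted.
```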
A Large-Scale Empirical Study of COVID-19 Contact Tracing Mobile App Reviews
Sifat Ishmam Parisa, Md Awsaf Alam Anindya, Anindya Iqbal, Gias Uddin
Since the beginning of 2020, the novel coronavirus has begun to sweep across the globe. Given the prevalence of smartphones, many countries across continents also developed COVID-19 contact tracing apps that users can install to get a warning of potential contacts with infected people. Unlike regular apps that undergo detailed requirement analysis, carefully designed development, and rigorous testing, contact tracing apps were deployed after rapid development. Therefore, such apps may not reach expectations for all end users. Users share their opinions and experiences of using the apps in the app stores. This paper aims to understand the types of topics users discuss in the reviews of the COVID-19 contact tracing apps across the continents by analyzing the app reviews. We collected all the reviews of 35 COVID-19 contact tracing apps developed by 34 countries across the globe. We group the app reviews into the following geographical regions: Asia, Europe, North America, Latin America, Africa, the Middle East, and Australasia (Australia and NZ). We run topic modeling on the app reviews of each region. We analyze the produced topics and their evolution over time by categorizing them into hierarchies and computing the ratings of reviews related to the topics. While privacy could be a concern with such apps, we only find privacy-related topics in Australasia, North America, and the Middle East. Topics related to the usability and performance of the apps are prevalent across all regions. Users frequently complained about the lack of features, the user interface, and the negative impact of such apps on their mobile batteries. Still, we also find that many users praised the apps because they helped them stay aware of the potential danger of getting infected. The findings of this study are expected to help app developers utilize their resources to address the reported issues in a prioritized way.
http://arxiv.org/abs/2404.18125v1
"2024-04-28T09:31:36"
cs.SE
2,024
Social Media and Artificial Intelligence for Sustainable Cities and Societies: A Water Quality Analysis Use-case
Muhammad Asif Auyb, Muhammad Tayyab Zamir, Imran Khan, Hannia Naseem, Nasir Ahmad, Kashif Ahmad
This paper focuses on a very important societal challenge of water quality analysis. Being one of the key factors in the economic and social development of society, the provision of water and ensuring its quality has always remained one of the top priorities of public authorities. To ensure the quality of water, different methods for monitoring and assessing the water networks, such as offline and online surveys, are used. However, these surveys have several limitations, such as the limited number of participants and low frequency due to the labor involved in conducting such surveys. In this paper, we propose a Natural Language Processing (NLP) framework to automatically collect and analyze water-related posts from social media for data-driven decisions. The proposed framework is composed of two components, namely (i) text classification, and (ii) topic modeling. For text classification, we propose a merit-fusion-based framework incorporating several Large Language Models (LLMs) where different weight selection and optimization methods are employed to assign weights to the LLMs. In topic modeling, we employed the BERTopic library to discover the hidden topic patterns in the water-related tweets. We also analyzed relevant tweets originating from different regions and countries to explore global, regional, and country-specific issues and water-related concerns. We also collected and manually annotated a large-scale dataset, which is expected to facilitate future research on the topic.
http://arxiv.org/abs/2404.14977v1
"2024-04-23T12:33:14"
cs.SI, cs.CL
2,024
A Survey of Decomposition-Based Evolutionary Multi-Objective Optimization: Part II -- A Data Science Perspective
Mingyu Huang, Ke Li
This paper presents the second part of the two-part survey series on decomposition-based evolutionary multi-objective optimization where we mainly focus on discussing the literature related to multi-objective evolutionary algorithms based on decomposition (MOEA/D). Complementary to the first part, here we employ a series of advanced data mining approaches to provide a comprehensive anatomy of the enormous landscape of MOEA/D research, which is far beyond the capacity of the classic manual literature review protocol. In doing so, we construct a heterogeneous knowledge graph that encapsulates more than 5,400 papers, 10,000 authors, 400 venues, and 1,600 institutions for MOEA/D research. We start our analysis with basic descriptive statistics. Then we delve into prominent research/application topics pertaining to MOEA/D with state-of-the-art topic modeling techniques and interrogate their spatial-temporal and bilateral relationships. We also explore the collaboration and citation networks of MOEA/D, uncovering hidden patterns in the growth of literature as well as collaboration between researchers. Our data mining results here, combined with the expert review in Part I, together offer a holistic view of the MOEA/D research, and demonstrate the potential of an exciting new paradigm for conducting scientific surveys from a data science perspective.
http://arxiv.org/abs/2404.14228v1
"2024-04-22T14:38:58"
cs.NE
2,024
Concept Induction: Analyzing Unstructured Text with High-Level Concepts Using LLooM
Michelle S. Lam, Janice Teoh, James Landay, Jeffrey Heer, Michael S. Bernstein
Data analysts have long sought to turn unstructured text data into meaningful concepts. Though common, topic modeling and clustering focus on lower-level keywords and require significant interpretative work. We introduce concept induction, a computational process that instead produces high-level concepts, defined by explicit inclusion criteria, from unstructured text. For a dataset of toxic online comments, where a state-of-the-art BERTopic model outputs "women, power, female," concept induction produces high-level concepts such as "Criticism of traditional gender roles" and "Dismissal of women's concerns." We present LLooM, a concept induction algorithm that leverages large language models to iteratively synthesize sampled text and propose human-interpretable concepts of increasing generality. We then instantiate LLooM in a mixed-initiative text analysis tool, enabling analysts to shift their attention from interpreting topics to engaging in theory-driven analysis. Through technical evaluations and four analysis scenarios ranging from literature review to content moderation, we find that LLooM's concepts improve upon the prior art of topic models in terms of quality and data coverage. In expert case studies, LLooM helped researchers to uncover new insights even from familiar datasets, for example by suggesting a previously unnoticed concept of attacks on out-party stances in a political social media dataset.
http://arxiv.org/abs/2404.12259v1
"2024-04-18T15:26:02"
cs.HC, cs.AI
2,024
Empowering Interdisciplinary Research with BERT-Based Models: An Approach Through SciBERT-CNN with Topic Modeling
Darya Likhareva, Hamsini Sankaran, Sivakumar Thiyagarajan
Researchers must stay current in their fields by regularly reviewing academic literature, a task complicated by the daily publication of thousands of papers. Traditional multi-label text classification methods often ignore semantic relationships and fail to address the inherent class imbalances. This paper introduces a novel approach using the SciBERT model and CNNs to systematically categorize academic abstracts from the Elsevier OA CC-BY corpus. We use a multi-segment input strategy that processes abstracts, body text, titles, and keywords obtained via BERT topic modeling through SciBERT. Here, the [CLS] token embeddings capture the contextual representation of each segment; these are concatenated and processed through a CNN. The CNN uses convolution and pooling to enhance feature extraction and reduce dimensionality, optimizing the data for classification. Additionally, we incorporate class weights based on label frequency to address the class imbalance, significantly improving the classification F1 score and enhancing text classification systems and literature review efficiency.
http://arxiv.org/abs/2404.13078v2
"2024-04-16T05:21:47"
cs.CL, cs.LG
2,024
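The entry above mentions class weights based on label frequency without spelling out a formula; the sketch below uses one common inverse-frequency scheme (the same heuristic as scikit-learn's "balanced" mode) on a toy multi-label matrix, as an assumed illustration.

```python
import numpy as np

# Multi-label indicator matrix: rows = documents, columns = classes (toy data).
Y = np.array([
    [1, 0, 0],
    [1, 1, 0],
    [1, 0, 1],
    [1, 0, 0],
])

counts = Y.sum(axis=0)                         # positive examples per class
weights = Y.shape[0] / (len(counts) * counts)  # n_samples / (n_classes * count)
print(weights)                                 # rare classes get larger loss weights
```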
Uncovering Latent Arguments in Social Media Messaging by Employing LLMs-in-the-Loop Strategy
Tunazzina Islam, Dan Goldwasser
The widespread use of social media has led to a surge in popularity for automated methods of analyzing public opinion. Supervised methods are adept at text categorization, yet the dynamic nature of social media discussions poses a continual challenge for these techniques due to the constant shifting of the focus. On the other hand, traditional unsupervised methods for extracting themes from public discourse, such as topic modeling, often reveal overarching patterns that might not capture specific nuances. Consequently, a significant portion of research into social media discourse still depends on labor-intensive manual coding techniques and a human-in-the-loop approach, which are both time-consuming and costly. In this work, we study the problem of discovering arguments associated with a specific theme. We propose a generic LLMs-in-the-Loop strategy that leverages the advanced capabilities of Large Language Models (LLMs) to extract latent arguments from social media messaging. To demonstrate our approach, we apply our framework to contentious topics. We use two publicly available datasets: (1) the climate campaigns dataset of 14k Facebook ads with 25 themes and (2) the COVID-19 vaccine campaigns dataset of 9k Facebook ads with 14 themes. Furthermore, we analyze demographic targeting and the adaptation of messaging based on real-world events.
http://arxiv.org/abs/2404.10259v1
"2024-04-16T03:26:43"
cs.CL, cs.AI, cs.CY, cs.LG, cs.SI
2,024
A solution for the mean parametrization of the von Mises-Fisher distribution
Marcel Nonnenmacher, Maneesh Sahani
The von Mises-Fisher distribution as an exponential family can be expressed in terms of either its natural or its mean parameters. Unfortunately, however, the normalization function for the distribution in terms of its mean parameters is not available in closed form, limiting the practicality of the mean parametrization and complicating maximum-likelihood estimation more generally. We derive a second-order ordinary differential equation, the solution to which yields the mean-parameter normalizer along with its first two derivatives, as well as the variance function of the family. We also provide closed-form approximations to the solution of the differential equation. This allows rapid evaluation of both densities and natural parameters in terms of mean parameters. We show applications to topic modeling with mixtures of von Mises-Fisher distributions using Bregman Clustering.
http://arxiv.org/abs/2404.07358v1
"2024-04-10T21:28:54"
stat.CO, stat.ML
2,024
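For context on the entry above, the standard form of the von Mises-Fisher density and its mean parameter; this is textbook material included for reference, not an excerpt from the paper.

```latex
% von Mises-Fisher density on the unit sphere S^{d-1}, with mean direction
% \mu (\|\mu\| = 1) and concentration \kappa \ge 0:
\[
  p(x \mid \mu, \kappa) = C_d(\kappa)\, \exp\!\left(\kappa\, \mu^{\top} x\right),
  \qquad
  C_d(\kappa) = \frac{\kappa^{d/2 - 1}}{(2\pi)^{d/2}\, I_{d/2 - 1}(\kappa)},
\]
% where I_\nu is the modified Bessel function of the first kind. The mean
% parameter is E[x] = \big(I_{d/2}(\kappa) / I_{d/2-1}(\kappa)\big)\, \mu;
% it is the inverse of this map that lacks a closed form, which is the gap
% the paper's ODE-based solution addresses.
```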
GINopic: Topic Modeling with Graph Isomorphism Network
Suman Adhya, Debarshi Kumar Sanyal
Topic modeling is a widely used approach for analyzing and exploring large document collections. Recent research efforts have incorporated pre-trained contextualized language models, such as BERT embeddings, into topic modeling. However, they often neglect the intrinsic informational value conveyed by mutual dependencies between words. In this study, we introduce GINopic, a topic modeling framework based on graph isomorphism networks to capture the correlation between words. By conducting intrinsic (quantitative as well as qualitative) and extrinsic evaluations on diverse benchmark datasets, we demonstrate the effectiveness of GINopic compared to existing topic models and highlight its potential for advancing topic modeling.
http://arxiv.org/abs/2404.02115v1
"2024-04-02T17:18:48"
cs.CL, cs.LG
2,024
Automatic detection of relevant information, predictions and forecasts in financial news through topic modelling with Latent Dirichlet Allocation
Silvia García-Méndez, Francisco de Arriba-Pérez, Ana Barros-Vila, Francisco J. González-Castaño, Enrique Costa-Montenegro
Financial news items are unstructured sources of information that can be mined to extract knowledge for market screening applications. Manual extraction of relevant information from the continuous stream of finance-related news is cumbersome and beyond the skills of many investors, who, at most, can follow a few sources and authors. Accordingly, we focus on the analysis of financial news to identify relevant text and, within that text, forecasts and predictions. We propose a novel Natural Language Processing (NLP) system to assist investors in the detection of relevant financial events in unstructured textual sources by considering both relevance and temporality at the discursive level. Firstly, we segment the text to group together closely related text. Secondly, we apply co-reference resolution to discover internal dependencies within segments. Finally, we perform relevant topic modelling with Latent Dirichlet Allocation (LDA) to separate relevant from less relevant text and then analyse the relevant text using a Machine Learning-oriented temporal approach to identify predictions and speculative statements. We created an experimental data set composed of 2,158 financial news items that were manually labelled by NLP researchers to evaluate our solution. The ROUGE-L values for the identification of relevant text and predictions/forecasts were 0.662 and 0.982, respectively. To our knowledge, this is the first work to jointly consider relevance and temporality at the discursive level. It contributes to the transfer of human associative discourse capabilities to expert systems through the combination of multi-paragraph topic segmentation and co-reference resolution to separate author expression patterns, topic modelling with LDA to detect relevant text, and discursive temporality analysis to identify forecasts and predictions within this text.
http://arxiv.org/abs/2404.01338v1
"2024-03-30T17:49:34"
cs.CL, cs.CE, cs.IR, cs.LG, q-fin.ST
2,024
Dual Simplex Volume Maximization for Simplex-Structured Matrix Factorization
Maryam Abdolali, Giovanni Barbarino, Nicolas Gillis
Simplex-structured matrix factorization (SSMF) is a generalization of nonnegative matrix factorization, a fundamental interpretable data analysis model, and has applications in hyperspectral unmixing and topic modeling. To obtain identifiable solutions, a standard approach is to find minimum-volume solutions. By taking advantage of the duality/polarity concept for polytopes, we convert minimum-volume SSMF in the primal space to a maximum-volume problem in the dual space. We first prove the identifiability of this maximum-volume dual problem. Then, we use this dual formulation to provide a novel optimization approach which bridges the gap between two existing families of algorithms for SSMF, namely volume minimization and facet identification. Numerical experiments show that the proposed approach performs favorably compared to the state-of-the-art SSMF algorithms.
http://arxiv.org/abs/2403.20197v1
"2024-03-29T14:19:26"
math.NA, cs.IR, cs.LG, cs.NA, eess.SP, stat.ML
2,024
Enhanced Short Text Modeling: Leveraging Large Language Models for Topic Refinement
Shuyu Chang, Rui Wang, Peng Ren, Haiping Huang
Crafting effective topic models for brief texts, like tweets and news headlines, is essential for capturing the swift shifts in social dynamics. Traditional topic models, however, often fall short in accurately representing the semantic intricacies of short texts due to their brevity and lack of contextual data. In our study, we harness the advanced capabilities of Large Language Models (LLMs) to introduce a novel approach termed "Topic Refinement". This approach does not directly involve itself in the initial modeling of topics but focuses on improving topics after they have been mined. By employing prompt engineering, we direct LLMs to eliminate off-topic words within a given topic, ensuring that only contextually relevant words are preserved or substituted with ones that fit better semantically. This method emulates human-like scrutiny and improvement of topics, thereby elevating the semantic quality of the topics generated by various models. Our comprehensive evaluation across three unique datasets has shown that our topic refinement approach significantly enhances the semantic coherence of topics.
http://arxiv.org/abs/2403.17706v1
"2024-03-26T13:50:34"
cs.CL, cs.AI
2,024
Decoding excellence: Mapping the demand for psychological traits of operations and supply chain professionals through text mining
S. Di Luozzo, A. Fronzetti Colladon, M. M. Schiraldi
The current study proposes an innovative methodology for the profiling of psychological traits of Operations Management (OM) and Supply Chain Management (SCM) professionals. We use innovative methods and tools of text mining and social network analysis to map the demand for relevant skills from a set of job descriptions, with a focus on psychological characteristics. The proposed approach aims to evaluate the market demand for specific traits by combining relevant psychological constructs, text mining techniques, and an innovative measure, namely, the Semantic Brand Score. We apply the proposed methodology to a dataset of job descriptions for OM and SCM professionals, with the objective of providing a mapping of their relevant required skills, including psychological characteristics. In addition, the analysis is then detailed by considering the region of the organization that issues the job description, its organizational size, and the seniority level of the open position in order to understand their nuances. Finally, topic modeling is used to examine key components and their relative significance in job descriptions. By employing a novel methodology and considering contextual factors, we provide an innovative understanding of the attitudinal traits that differentiate professionals. This research contributes to talent management, recruitment practices, and professional development initiatives, since it provides new figures and perspectives to improve the effectiveness and success of Operations Management and Supply Chain Management professionals.
http://arxiv.org/abs/2403.17546v1
"2024-03-26T09:51:43"
cs.CL, cs.SI, econ.GN, physics.soc-ph, q-fin.EC, I.2.7; J.4; H.4.0
2,024
An Empirical Study of ChatGPT-related projects on GitHub
Zheng Lin, Neng Zhang
As ChatGPT possesses powerful capabilities in natural language processing and code analysis, it has received widespread attention since its launch. Developers have applied its powerful capabilities to various domains through software projects which are hosted on the largest open-source platform (GitHub) worldwide. Simultaneously, these projects have triggered extensive discussions. In order to comprehend the research content of these projects and understand the potential requirements discussed, we collected ChatGPT-related projects from the GitHub platform and utilized the LDA topic model to identify the discussion topics. Specifically, we selected 200 projects, categorizing them into three primary categories through analyzing their descriptions: ChatGPT implementation & training, ChatGPT application, and ChatGPT improvement & extension. Subsequently, we employed the LDA topic model to identify 10 topics from issue texts, and compared the distribution and evolution trend of the discovered topics within the three primary project categories. Our observations include: (1) The number of projects growing in a single month for each of the three primary project categories is closely associated with the development of ChatGPT. (2) There exist significant variations in the popularity of each topic for the three primary project categories. (3) The monthly changes in the absolute impact of each topic for the three primary project categories are diverse, which is often closely associated with the variation in the number of projects owned by that category. (4) With the passage of time, the relative impact of each topic exhibits different development trends in the three primary project categories. Based on these findings, we discuss implications for developers and users.
http://arxiv.org/abs/2403.17437v1
"2024-03-26T07:06:54"
cs.SE
2,024
Neural Multimodal Topic Modeling: A Comprehensive Evaluation
Felipe González-Pizarro, Giuseppe Carenini
Neural topic models can successfully find coherent and diverse topics in textual data. However, they are limited in dealing with multimodal datasets (e.g., images and text). This paper presents the first systematic and comprehensive evaluation of multimodal topic modeling of documents containing both text and images. In the process, we propose two novel topic modeling solutions and two novel evaluation metrics. Overall, our evaluation on an unprecedentedly rich and diverse collection of datasets indicates that both of our models generate coherent and diverse topics. Nevertheless, the extent to which one method outperforms the other depends on the metrics and dataset combinations, which suggests further exploration of hybrid solutions in the future. Notably, our succinct human evaluation aligns with the outcomes determined by our proposed metrics. This alignment not only reinforces the credibility of our metrics but also highlights the potential for their application in guiding future multimodal topic modeling endeavors.
http://arxiv.org/abs/2403.17308v1
"2024-03-26T01:29:46"
cs.CL, cs.AI, cs.LG, I.2.7
2,024
A Mixed Method Study of DevOps Challenges
Minaoar Hossain Tanzil, Masud Sarker, Gias Uddin, Anindya Iqbal
Context: DevOps practices combine software development and IT operations. There is a growing number of DevOps related posts in the popular online developer forum Stack Overflow (SO). While previous research analyzed SO posts related to build/release engineering, we are aware of no research that specifically focused on DevOps related discussions. Objective: To learn the challenges developers face while using the currently available DevOps tools and techniques, along with the organizational challenges in DevOps practices. Method: We conduct an empirical study by applying topic modeling on 174K SO posts that contain DevOps discussions. We then validate and extend the empirical study findings with a survey of 21 professional DevOps practitioners. Results: We find that: (1) There are 23 DevOps topics grouped into four categories: Cloud & CI/CD Tools, Infrastructure as Code, Container & Orchestration, and Quality Assurance. (2) The topic category Cloud & CI/CD Tools contains the highest number of topics (10), which cover 48.6% of all questions in our dataset, followed by the category Infrastructure as Code (28.9%). (3) File management is the most popular topic followed by Jenkins Pipeline, while infrastructural Exception Handling and Jenkins Distributed Architecture are the most difficult topics (with the fewest accepted answers). (4) In the survey, developers mention that it requires hands-on experience before current DevOps tools can be considered easy. They raised the need for better documentation and learning resources to learn the rapidly changing DevOps tools and techniques. Practitioners also emphasized formal training by organizations for DevOps skill development. Conclusion: Architects and managers can use the findings of this research to adopt appropriate DevOps technologies, and organizations can design tool- or process-specific DevOps training programs.
http://arxiv.org/abs/2403.16436v1
"2024-03-25T05:35:40"
cs.SE, cs.HC
2,024
Large Language Models Offer an Alternative to the Traditional Approach of Topic Modelling
Yida Mu, Chun Dong, Kalina Bontcheva, Xingyi Song
Topic modelling, as a well-established unsupervised technique, has found extensive use in automatically detecting significant topics within a corpus of documents. However, classic topic modelling approaches (e.g., LDA) have certain drawbacks, such as the lack of semantic understanding and the presence of overlapping topics. In this work, we investigate the untapped potential of large language models (LLMs) as an alternative for uncovering the underlying topics within extensive text corpora. To this end, we introduce a framework that prompts LLMs to generate topics from a given set of documents and establish evaluation protocols to assess the clustering efficacy of LLMs. Our findings indicate that LLMs with appropriate prompts can stand out as a viable alternative, capable of generating relevant topic titles and adhering to human guidelines to refine and merge topics. Through in-depth experiments and evaluation, we summarise the advantages and constraints of employing LLMs in topic extraction.
http://arxiv.org/abs/2403.16248v2
"2024-03-24T17:39:51"
cs.CL
2,024
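A hedged sketch of prompting an LLM for topic extraction in the spirit of the entry above; `call_llm` is a placeholder for whatever chat API is available, and the prompt wording is illustrative, not the authors'.

```python
def build_prompt(documents: list[str], max_topics: int = 5) -> str:
    """Assemble a topic-extraction prompt over a batch of documents."""
    joined = "\n".join(f"- {d}" for d in documents)
    return (
        f"Read the documents below and propose at most {max_topics} short "
        "topic titles that cover them. Merge near-duplicate topics.\n"
        f"Documents:\n{joined}\nTopics:"
    )

def extract_topics(documents, call_llm, max_topics=5):
    """call_llm: a function str -> str wrapping an LLM chat endpoint (placeholder)."""
    response = call_llm(build_prompt(documents, max_topics))
    # Parse one topic title per line of the model's answer.
    return [line.strip("- ").strip() for line in response.splitlines() if line.strip()]
```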
AllHands: Ask Me Anything on Large-scale Verbatim Feedback via Large Language Models
Chaoyun Zhang, Zicheng Ma, Yuhao Wu, Shilin He, Si Qin, Minghua Ma, Xiaoting Qin, Yu Kang, Yuyi Liang, Xiaoyu Gou, Yajie Xue, Qingwei Lin, Saravan Rajmohan, Dongmei Zhang, Qi Zhang
Verbatim feedback constitutes a valuable repository of user experiences, opinions, and requirements essential for software development. Effectively and efficiently extracting valuable insights from such data poses a challenging task. This paper introduces Allhands, an innovative analytic framework designed for large-scale feedback analysis through a natural language interface, leveraging large language models (LLMs). Allhands adheres to a conventional feedback analytic workflow, initially conducting classification and topic modeling on the feedback to convert it into a structurally augmented format, incorporating LLMs to enhance accuracy, robustness, generalization, and user-friendliness. Subsequently, an LLM agent is employed to interpret users' diverse questions in natural language on feedback, translating them into Python code for execution, and delivering comprehensive multi-modal responses, including text, code, tables, and images. We evaluate Allhands across three diverse feedback datasets. The experiments demonstrate that Allhands achieves superior efficacy at all stages of analysis, including classification and topic modeling, eventually providing users with an "ask me anything" experience with comprehensive, correct, and human-readable responses. To the best of our knowledge, Allhands stands as the first comprehensive feedback analysis framework that supports diverse and customized requirements for insight extraction through a natural language interface.
http://arxiv.org/abs/2403.15157v2
"2024-03-22T12:13:16"
cs.SE
2,024
Uncovering Latent Themes of Messaging on Social Media by Integrating LLMs: A Case Study on Climate Campaigns
Tunazzina Islam, Dan Goldwasser
This paper introduces a novel approach to uncovering and analyzing themes in social media messaging. Recognizing the limitations of traditional topic-level analysis, which tends to capture only the overarching patterns, this study emphasizes the need for a finer-grained, theme-focused exploration. Conventional methods of theme discovery, involving manual processes and a human-in-the-loop approach, are valuable but face challenges in scalability, consistency, and resource intensity in terms of time and cost. To address these challenges, we propose a machine-in-the-loop approach that leverages the advanced capabilities of Large Language Models (LLMs). This approach allows for a deeper investigation into the thematic aspects of social media discourse, enabling us to uncover a diverse array of themes, each with unique characteristics and relevance, thereby offering a comprehensive understanding of the nuances present within broader topics. Furthermore, this method efficiently maps the text and the newly discovered themes, enhancing our understanding of the thematic nuances in social media messaging. We employ climate campaigns as a case study and demonstrate that our methodology yields more accurate and interpretable results compared to traditional topic models. Our results not only demonstrate the effectiveness of our approach in uncovering latent themes but also illuminate how these themes are tailored for demographic targeting in social media contexts. Additionally, our work sheds light on the dynamic nature of social media, revealing the shifts in the thematic focus of messaging in response to real-world events.
http://arxiv.org/abs/2403.10707v1
"2024-03-15T21:54:00"
cs.CL, cs.AI, cs.CY, cs.LG, cs.SI
2,024
Automating the Information Extraction from Semi-Structured Interview Transcripts
Angelina Parfenova
This paper explores the development and application of an automated system designed to extract information from semi-structured interview transcripts. Given the labor-intensive nature of traditional qualitative analysis methods, such as coding, there exists a significant demand for tools that can facilitate the analysis process. Our research investigates various topic modeling techniques and concludes that the best model for analyzing interview texts is a combination of BERT embeddings and HDBSCAN clustering. We present a user-friendly software prototype that enables researchers, including those without programming skills, to efficiently process and visualize the thematic structure of interview data. This tool not only facilitates the initial stages of qualitative analysis but also offers insights into the interconnectedness of topics revealed, thereby enhancing the depth of qualitative analysis.
http://arxiv.org/abs/2403.04819v1
"2024-03-07T13:53:03"
cs.CL, cs.CY, cs.IR, cs.SI
2,024
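The embeddings-plus-HDBSCAN combination the entry above singles out can be sketched as follows; the model name, parameters, and `load_interview_answers` loader are assumptions for illustration.

```python
import hdbscan
from sentence_transformers import SentenceTransformer

answers = load_interview_answers()  # hypothetical loader for transcript segments

model = SentenceTransformer("all-MiniLM-L6-v2")
embeddings = model.encode(answers)  # one dense vector per segment

clusterer = hdbscan.HDBSCAN(min_cluster_size=5, metric="euclidean")
labels = clusterer.fit_predict(embeddings)  # -1 marks noise/outlier segments
```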
Membership Inference Attacks and Privacy in Topic Modeling
Nico Manzonelli, Wanrong Zhang, Salil Vadhan
Recent research shows that large language models are susceptible to privacy attacks that infer aspects of the training data. However, it is unclear if simpler generative models, like topic models, share similar vulnerabilities. In this work, we propose an attack against topic models that can confidently identify members of the training data in Latent Dirichlet Allocation. Our results suggest that the privacy risks associated with generative modeling are not restricted to large neural models. Additionally, to mitigate these vulnerabilities, we explore differentially private (DP) topic modeling. We propose a framework for private topic modeling that incorporates DP vocabulary selection as a pre-processing step, and show that it improves privacy while having limited effects on practical utility.
http://arxiv.org/abs/2403.04451v1
"2024-03-07T12:43:42"
cs.CR, cs.CL, cs.LG
2,024
Does Documentation Matter? An Empirical Study of Practitioners' Perspective on Open-Source Software Adoption
Aaron Imani, Shiva Radmanesh, Iftekhar Ahmed, Mohammad Moshirpour
In recent years, open-source software (OSS) has become increasingly prevalent in developing software products. While OSS documentation is the primary source of information provided by the developers' community about a product, its role in the industry's adoption process has yet to be examined. We conducted semi-structured interviews and an online survey to provide insight into this area. Based on interviews and survey insights, we developed a topic model to collect relevant information from OSS documentation automatically. Additionally, according to our survey responses regarding challenges associated with OSS documentation, we propose a novel information augmentation approach, DocMentor, by combining OSS documentation corpus TF-IDF scores and ChatGPT. Through explaining technical terms and providing examples and references, our approach enhances the documentation context and improves practitioners' understanding. Our tool's effectiveness is assessed by surveying practitioners.
http://arxiv.org/abs/2403.03819v1
"2024-03-06T16:06:08"
cs.SE
2,024
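An illustrative TF-IDF scoring pass over a documentation corpus, the first half of the DocMentor idea described above; the documents are toy stand-ins and the ChatGPT step is omitted.

```python
from sklearn.feature_extraction.text import TfidfVectorizer

docs = [
    "install the package with pip and configure the backend",
    "the backend exposes a REST endpoint for authentication",
    "authentication tokens expire after one hour",
]

vectorizer = TfidfVectorizer(stop_words="english")
tfidf = vectorizer.fit_transform(docs)       # documents x terms sparse matrix

# Highest-scoring terms in the first document:
terms = vectorizer.get_feature_names_out()
row = tfidf[0].toarray().ravel()
print(sorted(zip(row, terms), reverse=True)[:3])
```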
Probabilistic Topic Modelling with Transformer Representations
Arik Reuter, Anton Thielmann, Christoph Weisser, Benjamin Säfken, Thomas Kneib
Topic modelling was mostly dominated by Bayesian graphical models during the last decade. With the rise of transformers in Natural Language Processing, however, several successful models that rely on straightforward clustering approaches in transformer-based embedding spaces have emerged and consolidated the notion of topics as clusters of embedding vectors. We propose the Transformer-Representation Neural Topic Model (TNTM), which combines the benefits of topic representations in transformer-based embedding spaces and probabilistic modelling. Therefore, this approach unifies the powerful and versatile notion of topics based on transformer embeddings with fully probabilistic modelling, as in models such as Latent Dirichlet Allocation (LDA). We utilize the variational autoencoder (VAE) framework for improved inference speed and modelling flexibility. Experimental results show that our proposed model achieves results on par with various state-of-the-art approaches in terms of embedding coherence while maintaining almost perfect topic diversity. The corresponding source code is available at https://github.com/ArikReuter/TNTM.
http://arxiv.org/abs/2403.03737v1
"2024-03-06T14:27:29"
cs.LG, cs.CL
2,024
GPTopic: Dynamic and Interactive Topic Representations
Arik Reuter, Anton Thielmann, Christoph Weisser, Sebastian Fischer, Benjamin Säfken
Topic modeling seems to be almost synonymous with generating lists of top words to represent topics within large text corpora. However, deducing a topic from such a list of individual terms can require substantial expertise and experience, making topic modelling less accessible to people unfamiliar with the particularities and pitfalls of top-word interpretation. A topic representation limited to top words might further fall short of offering a comprehensive and easily accessible characterization of the various aspects, facets and nuances a topic might have. To address these challenges, we introduce GPTopic, a software package that leverages Large Language Models (LLMs) to create dynamic, interactive topic representations. GPTopic provides an intuitive chat interface for users to explore, analyze, and refine topics interactively, making topic modeling more accessible and comprehensive. The corresponding code is available here: https://github.com/05ec6602be/GPTopic.
http://arxiv.org/abs/2403.03628v1
"2024-03-06T11:34:20"
cs.CL
2,024
The Geometric Structure of Topic Models
Johannes Hirth, Tom Hanika
Topic models are a popular tool for clustering and analyzing textual data. They allow texts to be classified on the basis of their affiliation to the previously calculated topics. Despite their widespread use in research and application, an in-depth analysis of topic models is still an open research topic. State-of-the-art methods for interpreting topic models are based on simple visualizations, such as similarity matrices, top-term lists or embeddings, which are limited to a maximum of three dimensions. In this paper, we propose an incidence-geometric method for deriving an ordinal structure from flat topic models, such as non-negative matrix factorization. This enables the analysis of the topic model in a higher (order) dimension and makes it possible to extract conceptual relationships between several topics at once. Due to the use of conceptual scaling, our approach does not introduce any artificial topical relationships, such as artifacts of feature compression. Based on our findings, we present a new visualization paradigm for concept hierarchies based on ordinal motifs. These allow for a top-down view on topic spaces. We introduce and demonstrate the applicability of our approach based on a topic model derived from a corpus of scientific papers taken from 32 top machine learning venues.
http://arxiv.org/abs/2403.03607v1
"2024-03-06T10:53:51"
cs.AI
2,024
Arabic Text Sentiment Analysis: Reinforcing Human-Performed Surveys with Wider Topic Analysis
Latifah Almurqren, Ryan Hodgson, Alexandra Cristea
Sentiment analysis (SA) has been, and is still, a thriving research area. However, the task of Arabic sentiment analysis (ASA) is still underrepresented in the body of research. This study offers the first in-depth and in-breadth analysis of existing ASA studies of textual content and identifies their common themes, domains of application, methods, approaches, technologies and algorithms used. The in-depth study manually analyses 133 ASA papers published in the English language between 2002 and 2020 from four academic databases (SAGE, IEEE, Springer, WILEY) and from Google Scholar. The in-breadth study uses modern, automatic machine learning techniques, such as topic modelling and temporal analysis, on Open Access resources, to reinforce themes and trends identified by the prior study, on 2297 ASA publications between 2010 and 2020. The main findings show the different approaches used for ASA: machine learning, lexicon-based and hybrid approaches. Other findings include ASA 'winning' algorithms (SVM, NB, hybrid methods). Deep learning methods, such as LSTM, can provide higher accuracy, but for ASA sometimes the corpora are not large enough to support them. Additionally, whilst there are some ASA corpora and lexicons, more are required. Specifically, Arabic tweets corpora and datasets are currently only moderately sized. Moreover, Arabic lexicons that have high coverage contain only Modern Standard Arabic (MSA) words, and those with Arabic dialects are quite small. Thus, new corpora need to be created. On the other hand, ASA tools are severely lacking. There is a need to develop ASA tools that can be used in industry, as well as in academia, for Arabic text SA. Hence, our study offers insights into the challenges associated with ASA research and provides suggestions for ways to move the field forward, such as addressing the lack of dialectal Arabic resources, Arabic tweet corpora, and datasets for SA.
http://arxiv.org/abs/2403.01921v1
"2024-03-04T10:37:48"
cs.CL
2,024
TopicDiff: A Topic-enriched Diffusion Approach for Multimodal Conversational Emotion Detection
Jiamin Luo, Jingjing Wang, Guodong Zhou
Multimodal Conversational Emotion (MCE) detection, generally spanning across the acoustic, vision and language modalities, has attracted increasing interest in the multimedia community. Previous studies predominantly focus on learning contextual information in conversations, with only a few considering the topic information in the single language modality, while always neglecting the acoustic and vision topic information. On this basis, we propose a model-agnostic Topic-enriched Diffusion (TopicDiff) approach for capturing multimodal topic information in MCE tasks. Particularly, we integrate the diffusion model into the neural topic model to alleviate the diversity deficiency problem of neural topic models in capturing topic information. Detailed evaluations demonstrate the significant improvements of TopicDiff over the state-of-the-art MCE baselines, justifying the importance of multimodal topic information to MCE and the effectiveness of TopicDiff in capturing such information. Furthermore, we observe an interesting finding that the topic information in the acoustic and vision modalities is more discriminative and robust compared to that in language.
http://arxiv.org/abs/2403.04789v2
"2024-03-04T08:38:53"
cs.CL, cs.AI, cs.LG
2,024
Topic Modeling Analysis of Aviation Accident Reports: A Comparative Study between LDA and NMF Models
Aziida Nanyonga, Hassan Wasswa, Graham Wild
Aviation safety is paramount in the modern world, with a continuous commitment to reducing accidents and improving safety standards. Central to this endeavor is the analysis of aviation accident reports, rich textual resources that hold insights into the causes and contributing factors behind aviation mishaps. This paper compares two prominent topic modeling techniques, Latent Dirichlet Allocation (LDA) and Non-negative Matrix Factorization (NMF), in the context of aviation accident report analysis. The study leverages the National Transportation Safety Board (NTSB) Dataset with the primary objective of automating and streamlining the process of identifying latent themes and patterns within accident reports. The Coherence Value (C_v) metric was used to evaluate the quality of generated topics. LDA demonstrates higher topic coherence, indicating stronger semantic relevance among words within topics. At the same time, NMF excels in producing distinct and granular topics, enabling a more focused analysis of specific aspects of aviation accidents.
http://arxiv.org/abs/2403.04788v1
"2024-03-04T01:41:07"
cs.CL, Topic Modeling, Aviation Safety, Aviation Accident Reports, Machine Learning, LDA, NMF
2,024
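A sketch of computing the C_v coherence metric mentioned above with gensim; `load_ntsb_narratives` is a hypothetical loader and the topic count is a placeholder.

```python
from gensim import corpora
from gensim.models import CoherenceModel, LdaModel
from gensim.utils import simple_preprocess

reports = load_ntsb_narratives()  # hypothetical loader for report texts
tokens = [simple_preprocess(r) for r in reports]
dictionary = corpora.Dictionary(tokens)
corpus = [dictionary.doc2bow(t) for t in tokens]

lda = LdaModel(corpus=corpus, id2word=dictionary, num_topics=10, random_state=0)
cv = CoherenceModel(model=lda, texts=tokens, dictionary=dictionary, coherence="c_v")
print("C_v coherence:", cv.get_coherence())  # higher means more coherent topics
```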
Using Text Embeddings for Deductive Qualitative Research at Scale in Physics Education
Tor Ole B. Odden, Halvor Tyseng, Jonas Timmann Mjaaland, Markus Fleten Kreutzer, Anders Malthe-Sørenssen
We propose a technique for performing deductive qualitative data analysis at scale on text-based data. Using a natural language processing technique known as text embeddings, we create vector-based representations of texts in a high-dimensional meaning space within which it is possible to quantify differences as vector distances. To apply the technique, we build off prior work that used topic modeling via Latent Dirichlet Allocation to thematically analyze 18 years of the Physics Education Research Conference proceedings literature. We first extend this analysis through 2023. Next, we create embeddings of all texts and, using representative articles from the 10 topics found by the LDA analysis, define centroids in the meaning space. We calculate the distances between every article and centroid and use the inverted, scaled distances between these centroids and articles to create an alternate topic model. We benchmark this model against the LDA model results and show that this embeddings model recovers most of the trends from that analysis. Finally, to illustrate the versatility of the method we define 8 new topic centroids derived from a review of the physics education research literature by Docktor and Mestre (2014) and re-analyze the literature using these researcher-defined topics. Based on these analyses, we critically discuss the features, uses, and limitations of this method and argue that it holds promise for flexible deductive qualitative analysis of a wide variety of text-based data that avoids many of the drawbacks inherent to prior NLP methods.
http://arxiv.org/abs/2402.18087v1
"2024-02-28T06:18:54"
physics.ed-ph, physics.data-an
2,024
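The distance-to-centroid step described above can be sketched in a few lines; the embeddings and centroids below are random placeholders for real text-embedding vectors, and the inversion/scaling scheme is an assumed implementation of the idea, not the authors' exact code.

```python
import numpy as np

rng = np.random.default_rng(0)
articles = rng.normal(size=(100, 384))   # stand-in article embeddings
centroids = rng.normal(size=(10, 384))   # stand-in topic centroids

def cosine_distances(X, C):
    Xn = X / np.linalg.norm(X, axis=1, keepdims=True)
    Cn = C / np.linalg.norm(C, axis=1, keepdims=True)
    return 1.0 - Xn @ Cn.T               # shape: (articles, centroids)

D = cosine_distances(articles, centroids)
# Invert and scale distances so nearer centroids get larger weights, then
# normalize each row into a topic distribution per article.
W = D.max() - D
W /= W.sum(axis=1, keepdims=True)
```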
COMPASS: Computational Mapping of Patient-Therapist Alliance Strategies with Language Modeling
Baihan Lin, Djallel Bouneffouf, Yulia Landa, Rachel Jespersen, Cheryl Corcoran, Guillermo Cecchi
The therapeutic working alliance is a critical factor in predicting the success of psychotherapy treatment. Traditionally, working alliance assessment relies on questionnaires completed by both therapists and patients. In this paper, we present COMPASS, a novel framework to directly infer the therapeutic working alliance from the natural language used in psychotherapy sessions. Our approach utilizes advanced large language models to analyze transcripts of psychotherapy sessions and compare them with distributed representations of statements in the working alliance inventory. Analyzing a dataset of over 950 sessions covering diverse psychiatric conditions, we demonstrate the effectiveness of our method in microscopically mapping patient-therapist alignment trajectories and providing interpretability for clinical psychiatry and in identifying emerging patterns related to the condition being treated. By employing various neural topic modeling techniques in combination with generative language prompting, we analyze the topical characteristics of different psychiatric conditions and incorporate temporal modeling to capture the evolution of topics at a turn-level resolution. This combined framework enhances the understanding of therapeutic interactions, enabling timely feedback for therapists regarding conversation quality and providing interpretable insights to improve the effectiveness of psychotherapy.
http://arxiv.org/abs/2402.14701v1
"2024-02-22T16:56:44"
cs.CL, cs.AI, cs.HC, cs.LG, q-bio.NC
2,024
Topic Modeling as Multi-Objective Contrastive Optimization
Thong Nguyen, Xiaobao Wu, Xinshuai Dong, Cong-Duy T Nguyen, See-Kiong Ng, Anh Tuan Luu
Recent representation learning approaches enhance neural topic models by optimizing the weighted linear combination of the evidence lower bound (ELBO) of the log-likelihood and the contrastive learning objective that contrasts pairs of input documents. However, document-level contrastive learning might capture low-level mutual information, such as word ratio, which disturbs topic modeling. Moreover, there is a potential conflict between the ELBO loss that memorizes input details for better reconstruction quality, and the contrastive loss which attempts to learn topic representations that generalize among input documents. To address these issues, we first introduce a novel contrastive learning method oriented towards sets of topic vectors to capture useful semantics that are shared among a set of input documents. Secondly, we explicitly cast contrastive topic modeling as a gradient-based multi-objective optimization problem, with the goal of achieving a Pareto stationary solution that balances the trade-off between the ELBO and the contrastive objective. Extensive experiments demonstrate that our framework consistently produces higher-performing neural topic models in terms of topic coherence, topic diversity, and downstream performance.
http://arxiv.org/abs/2402.07577v2
"2024-02-12T11:18:32"
cs.CL
2,024
Understanding the Progression of Educational Topics via Semantic Matching
Tamador Alkhidir, Edmond Awad, Aamena Alshamsi
Education systems are dynamically changing to accommodate technological advances, industrial and societal needs, and to enhance students' learning journeys. Curriculum specialists and educators constantly revise taught subjects across educational grades to identify gaps, introduce new learning topics, and enhance the learning outcomes. This process is usually done within the same subjects (e.g. math) or across related subjects (e.g. math and physics), considering the same and different educational levels, leading to massive multi-layer comparisons. Having nuanced data about subjects, topics, and learning outcomes structured within a dataset empowers us to leverage data science to better understand the progression of various learning topics. In this paper, Bidirectional Encoder Representations from Transformers (BERT) topic modeling was used to extract topics from the curriculum, which were then used to identify relationships between subjects, track their progression, and identify conceptual gaps. We found that grouping learning outcomes by common topics helped specialists reduce redundancy and introduce new concepts in the curriculum. We built a dashboard to make the methodology available to curriculum specialists. Finally, we tested the validity of the approach with subject matter experts.
http://arxiv.org/abs/2403.05553v1
"2024-02-10T08:24:29"
cs.CY, cs.CL, cs.LG
2,024
RankSum: An unsupervised extractive text summarization based on rank fusion
A. Joshi, E. Fidalgo, E. Alegre, R. Alaiz-Rodriguez
In this paper, we propose RankSum, an approach for extractive text summarization of single documents based on the rank fusion of four multi-dimensional sentence features extracted for each sentence: topic information, semantic content, significant keywords, and position. RankSum obtains the sentence saliency rankings corresponding to each feature in an unsupervised way, followed by a weighted fusion of the four scores to rank the sentences according to their significance. The scores are generated in a completely unsupervised way, while a labeled document set is required to learn the fusion weights. Since we found that the fusion weights generalize to other datasets, we consider RankSum an unsupervised approach. To determine the topic rank, we employ probabilistic topic models, whereas semantic information is captured using sentence embeddings. To derive rankings from sentence embeddings, we utilize Siamese networks to produce abstractive sentence representations and then formulate a novel strategy to arrange them in order of importance. A graph-based strategy is applied to find the significant keywords and the related sentence rankings in the document. We also formulate a sentence novelty measure based on bigrams, trigrams, and sentence embeddings to eliminate redundant sentences from the summary. The ranks computed for each feature are finally fused to obtain the final score for each sentence in the document. We evaluate our approach on the publicly available summarization datasets CNN/DailyMail and DUC 2002. Experimental results show that our approach outperforms existing state-of-the-art summarization methods.
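A toy sketch of the rank-fusion step, assuming the per-feature saliency scores have already been computed; the scores, weights, and function name below are illustrative only:

```python
import numpy as np

def fuse_rankings(score_matrix, weights):
    """Fuse per-feature sentence saliency scores into one ranking.

    score_matrix: (n_sentences, n_features) scores, one column per feature
    (topic, semantic, keyword, position); weights: per-feature fusion weights.
    Returns sentence indices ordered from most to least salient.
    """
    # Convert raw scores to ranks so features on different scales are comparable.
    ranks = score_matrix.argsort(axis=0).argsort(axis=0).astype(float)
    fused = (ranks * np.asarray(weights)).sum(axis=1)
    return np.argsort(-fused)

# Toy example: 4 sentences, 4 features, equal fusion weights.
scores = np.array([[0.9, 0.2, 0.5, 1.0],
                   [0.1, 0.8, 0.7, 0.8],
                   [0.4, 0.4, 0.9, 0.5],
                   [0.3, 0.1, 0.2, 0.2]])
print(fuse_rankings(scores, weights=[0.25, 0.25, 0.25, 0.25]))
```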
http://arxiv.org/abs/2402.05976v1
"2024-02-07T22:24:09"
cs.LG, cs.AI
2,024
AlbNews: A Corpus of Headlines for Topic Modeling in Albanian
Erion Çano, Dario Lamaj
The scarcity of available text corpora for low-resource languages like Albanian is a serious hurdle for research in natural language processing tasks. This paper introduces AlbNews, a collection of 600 topically labeled news headlines and 2600 unlabeled ones in Albanian. The data can be freely used for conducting topic modeling research. We report the initial classification scores of some traditional machine learning classifiers trained with the AlbNews samples. These results show that basic models outperform the ensemble learning ones and can serve as a baseline for future experiments.
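A minimal scikit-learn sketch of the kind of baseline comparison the abstract reports; the headline strings and labels are placeholders standing in for the AlbNews data:

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

# Placeholder corpus; in practice the 600 labeled Albanian headlines and
# their topic labels go here.
headlines = ["headline one", "headline two", "headline three",
             "headline four", "headline five", "headline six"]
labels = ["politics", "sport", "politics", "economy", "sport", "economy"]

for clf in (LogisticRegression(max_iter=1000),          # basic model
            RandomForestClassifier(n_estimators=200)):  # ensemble model
    pipe = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), clf)
    print(type(clf).__name__, cross_val_score(pipe, headlines, labels, cv=2).mean())
```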
http://arxiv.org/abs/2402.04028v1
"2024-02-06T14:24:28"
cs.CL, cs.AI, cs.LG
2,024
Identifying Reasons for Contraceptive Switching from Real-World Data Using Large Language Models
Brenda Y. Miao, Christopher YK Williams, Ebenezer Chinedu-Eneh, Travis Zack, Emily Alsentzer, Atul J. Butte, Irene Y. Chen
Prescription contraceptives play a critical role in supporting women's reproductive health. With nearly 50 million women in the United States using contraceptives, understanding the factors that drive contraceptive selection and switching is of significant interest. However, many factors related to medication switching are often only captured in unstructured clinical notes and can be difficult to extract. Here, we evaluate the zero-shot abilities of a recently developed large language model, GPT-4 (via HIPAA-compliant Microsoft Azure API), to identify reasons for switching between classes of contraceptives from the UCSF Information Commons clinical notes dataset. We demonstrate that GPT-4 can accurately extract reasons for contraceptive switching, outperforming baseline BERT-based models with micro-F1 scores of 0.849 and 0.881 for contraceptive start and stop extraction, respectively. Human evaluation of GPT-4-extracted reasons for switching showed 91.4% accuracy, with minimal hallucinations. Using extracted reasons, we identified patient preference, adverse events, and insurance as key reasons for switching using unsupervised topic modeling approaches. Notably, we also showed using our approach that "weight gain/mood change" and "insurance coverage" are disproportionately found as reasons for contraceptive switching in specific demographic populations. Our code and supplemental data are available at https://github.com/BMiao10/contraceptive-switching.
http://arxiv.org/abs/2402.03597v1
"2024-02-06T00:14:53"
cs.CL, cs.IR, cs.LG
2,024
Comparison of Topic Modelling Approaches in the Banking Context
Bayode Ogunleye, Tonderai Maswera, Laurence Hirsch, Jotham Gaudoin, Teresa Brunsdon
Topic modelling is a prominent task for automatic topic extraction in many applications such as sentiment analysis and recommendation systems. The approach is vital for service industries to monitor their customer discussions. The use of traditional approaches such as Latent Dirichlet Allocation (LDA) for topic discovery has shown good performance; however, the results are inconsistent because these approaches suffer from data sparsity and an inability to model word order in a document. Thus, this study presents the use of Kernel Principal Component Analysis (KernelPCA) and K-means clustering in the BERTopic architecture. We prepared a new dataset using tweets from customers of Nigerian banks and used it to compare the topic modelling approaches. Our findings show that KernelPCA and K-means in the BERTopic architecture produced coherent topics, with a coherence score of 0.8463.
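A hedged sketch of how KernelPCA and K-means can be slotted into the BERTopic architecture in place of its default UMAP and HDBSCAN components; the corpus is a public stand-in and the parameter values are illustrative, not those of the study:

```python
from bertopic import BERTopic
from sklearn.cluster import KMeans
from sklearn.datasets import fetch_20newsgroups
from sklearn.decomposition import KernelPCA

# Stand-in corpus; the study itself uses tweets from Nigerian bank customers.
docs = fetch_20newsgroups(subset="all",
                          remove=("headers", "footers", "quotes")).data[:2000]

# BERTopic accepts any dimensionality-reduction model with fit/transform and
# any clustering model with fit/predict, so KernelPCA and KMeans drop in.
topic_model = BERTopic(
    umap_model=KernelPCA(n_components=5, kernel="rbf"),
    hdbscan_model=KMeans(n_clusters=10, random_state=42),
)
topics, _ = topic_model.fit_transform(docs)
print(topic_model.get_topic_info().head())
```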
http://arxiv.org/abs/2402.03176v1
"2024-02-05T16:43:53"
cs.IR, cs.AI, cs.LG, stat.CO, H.3.3
2,024
Multilingual transformer and BERTopic for short text topic modeling: The case of Serbian
Darija Medvecki, Bojana Bašaragin, Adela Ljajić, Nikola Milošević
This paper presents the results of the first application of BERTopic, a state-of-the-art topic modeling technique, to short text written in a morphologically rich language. We applied BERTopic with three multilingual embedding models on two levels of text preprocessing (partial and full) to evaluate its performance on partially preprocessed short text in Serbian. We also compared it to LDA and NMF on fully preprocessed text. The experiments were conducted on a dataset of tweets expressing hesitancy toward COVID-19 vaccination. Our results show that with adequate parameter setting, BERTopic can yield informative topics even when applied to partially preprocessed short text. When the same parameters are applied in both preprocessing scenarios, the performance drop on partially preprocessed text is minimal. Compared to LDA and NMF, judging by the keywords, BERTopic offers more informative topics and gives novel insights when the number of topics is not limited. The findings of this paper can be significant for researchers working with other morphologically rich low-resource languages and short text.
http://arxiv.org/abs/2402.03067v1
"2024-02-05T14:59:29"
cs.CL, cs.AI
2,024
Modified K-means with Cluster Assignment -- Application to COVID-19 Data
Shreyash Rawat, V. Vijayarajan, V. B. Surya Prasath
Text extraction is a highly subjective problem which depends on the dataset one is working with and the kind of summarization details that need to be extracted. All the steps, from preprocessing the data to choosing an optimal model for predictions, depend on the problem and the corpus at hand. In this paper, we describe a text extraction model whose aim is to extract word-specific semantic information, so that we obtain all related and meaningful information about that word in a succinct format. This model yields meaningful results and can augment a ubiquitous search model or standard clustering and topic modelling algorithms. By utilizing a new technique, a two-cluster assignment technique with the K-means model, we improved the ontology of the retrieved text. We further apply a vector-average damping technique for flexible movement of clusters. Our experimental results on a recent COVID-19 corpus show that we obtain good results based on the main keywords.
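An illustrative reconstruction, under stated assumptions, of a K-means variant in which each point contributes to its two nearest clusters and centroid movement is damped by a vector average; this is a sketch of the general idea, not the paper's exact algorithm:

```python
import numpy as np

def kmeans_two_cluster_assignment(X, k, iters=50, damping=0.5, seed=0):
    """Toy two-cluster-assignment K-means with damped centroid updates."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)].copy()
    for _ in range(iters):
        dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        nearest2 = np.argsort(dists, axis=1)[:, :2]  # two closest clusters per point
        for j in range(k):
            members = X[(nearest2 == j).any(axis=1)]
            if len(members):  # damped ("vector average") move toward member mean
                centers[j] = damping * centers[j] + (1 - damping) * members.mean(axis=0)
    return centers, nearest2

X = np.random.default_rng(1).normal(size=(200, 8))  # toy document vectors
centers, assignments = kmeans_two_cluster_assignment(X, k=5)
```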
http://arxiv.org/abs/2402.03380v1
"2024-02-04T05:46:21"
cs.IR
2,024
From PARIS to LE-PARIS: Toward Patent Response Automation with Recommender Systems and Collaborative Large Language Models
Jung-Mei Chu, Hao-Cheng Lo, Jieh Hsiang, Chun-Chieh Cho
In patent prosecution, timely and effective responses to Office Actions (OAs) are crucial for securing patents. However, past automation and artificial intelligence research have largely overlooked this aspect. To bridge this gap, our study introduces the Patent Office Action Response Intelligence System (PARIS) and its advanced version, the Large Language Model (LLM) Enhanced PARIS (LE-PARIS). These systems are designed to enhance the efficiency of patent attorneys in handling OA responses through collaboration with AI. The systems' key features include the construction of an OA Topics Database, development of Response Templates, and implementation of Recommender Systems and LLM-based Response Generation. To validate the effectiveness of the systems, we have employed a multi-paradigm analysis using the USPTO Office Action database and longitudinal data based on attorney interactions with our systems over six years. Through five studies, we have examined the constructiveness of OA topics (studies 1 and 2) using topic modeling and our proposed Delphi process, the efficacy of our proposed hybrid LLM-based recommender system tailored for OA responses (study 3), the quality of generated responses (study 4), and the systems' practical value in real-world scenarios through user studies (study 5). The results indicate that both PARIS and LE-PARIS significantly achieve key metrics and have a positive impact on attorney performance.
http://arxiv.org/abs/2402.00421v2
"2024-02-01T08:37:13"
cs.CL, cs.HC, cs.IR, cs.LG
2,024
Network-based Topic Structure Visualization
Yeseul Jeon, Jina Park, Ick Hoon Jin, Dongjun Chung
In the real world, many topics are inter-correlated, making it challenging to investigate their structure and relationships. Understanding the interplay between topics and their relevance can provide valuable insights for researchers, guiding their studies and informing the direction of research. In this paper, we utilize the topic-words distribution, obtained from topic models, as item-response data to model the structure of topics using a latent space item response model. By estimating the latent positions of topics based on their distances toward words, we can capture the underlying topic structure and reveal their relationships. Visualizing the latent positions of topics in Euclidean space allows for an intuitive understanding of their proximity and associations. We interpret relationships among topics by characterizing each topic based on representative words selected using a newly proposed scoring scheme. Additionally, we assess the maturity of topics by tracking their latent positions using different word sets, providing insights into the robustness of topics. To demonstrate the effectiveness of our approach, we analyze the topic composition of COVID-19 studies during the early stage of its emergence using biomedical literature in the PubMed database. The software and data used in this paper are publicly available at https://github.com/jeon9677/gViz.
http://arxiv.org/abs/2401.17855v1
"2024-01-31T14:17:00"
stat.AP, cs.HC, cs.IR
2,024
Improving the TENOR of Labeling: Re-evaluating Topic Models for Content Analysis
Zongxia Li, Andrew Mao, Daniel Stephens, Pranav Goel, Emily Walpole, Alden Dima, Juan Fung, Jordan Boyd-Graber
Topic models are a popular tool for understanding text collections, but their evaluation has been a point of contention. Automated evaluation metrics such as coherence are often used; however, their validity has been questioned for neural topic models (NTMs), and they can overlook a model's benefits in real-world applications. To this end, we conduct the first evaluation of neural, supervised, and classical topic models in an interactive, task-based setting. We combine topic models with a classifier and test their ability to help humans conduct content analysis and document annotation. From simulated, real-user, and expert pilot studies, the Contextual Neural Topic Model does the best on cluster evaluation metrics and human evaluations; however, LDA is competitive with two other NTMs in our simulated experiment and user study results, contrary to what coherence scores suggest. We show that current automated metrics do not provide a complete picture of topic modeling capabilities, but the right choice of NTMs can be better than classical models on practical tasks.
http://arxiv.org/abs/2401.16348v2
"2024-01-29T17:54:04"
cs.CL, cs.CY, cs.HC
2,024
CFTM: Continuous time fractional topic model
Kei Nakagawa, Kohei Hayashi, Yugo Fujimoto
In this paper, we propose the Continuous Time Fractional Topic Model (cFTM), a new method for dynamic topic modeling. This approach incorporates fractional Brownian motion (fBm) to effectively identify positive or negative correlations in topic and word distributions over time, revealing long-term dependency or roughness. Our theoretical analysis shows that the cFTM can capture this long-term dependency or roughness in both topic and word distributions, mirroring the main characteristics of fBm. Moreover, we prove that the parameter estimation process for the cFTM is on par with that of LDA, a traditional topic model. To demonstrate the cFTM's properties, we conduct an empirical study using economic news articles. The results from these tests support the model's ability to identify and track long-term dependency or roughness in topics over time.
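To make the fBm ingredient concrete, a small NumPy sketch that samples fractional Brownian motion paths via the Cholesky factor of the fBm covariance; the grid size and Hurst values are arbitrary choices for illustration:

```python
import numpy as np

def fbm_path(n, hurst, seed=0):
    """Sample an fBm path on t = 1/n, ..., 1 from its exact covariance
    Cov(B_H(s), B_H(t)) = 0.5 * (s^{2H} + t^{2H} - |t - s|^{2H})."""
    t = np.arange(1, n + 1) / n
    cov = 0.5 * (t[:, None] ** (2 * hurst) + t[None, :] ** (2 * hurst)
                 - np.abs(t[:, None] - t[None, :]) ** (2 * hurst))
    L = np.linalg.cholesky(cov + 1e-12 * np.eye(n))  # jitter for stability
    return L @ np.random.default_rng(seed).standard_normal(n)

rough = fbm_path(500, hurst=0.2)   # anti-persistent, rough trajectory
smooth = fbm_path(500, hurst=0.8)  # persistent, long-memory trajectory
```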
http://arxiv.org/abs/2402.01734v2
"2024-01-29T08:07:41"
cs.CL, cs.LG, q-fin.CP, stat.AP
2,024
A Survey on Neural Topic Models: Methods, Applications, and Challenges
Xiaobao Wu, Thong Nguyen, Anh Tuan Luu
Topic models have been prevalent for decades to discover latent topics and infer topic proportions of documents in an unsupervised fashion. They have been widely used in various applications like text analysis and context recommendation. Recently, the rise of neural networks has facilitated the emergence of a new research field -- Neural Topic Models (NTMs). Different from conventional topic models, NTMs directly optimize parameters without requiring model-specific derivations. This endows NTMs with better scalability and flexibility, resulting in significant research attention and plentiful new methods and applications. In this paper, we present a comprehensive survey on neural topic models concerning methods, applications, and challenges. Specifically, we systematically organize current NTM methods according to their network structures and introduce the NTMs for various scenarios like short texts and cross-lingual documents. We also discuss a wide range of popular applications built on NTMs. Finally, we highlight the challenges confronted by NTMs to inspire future research.
http://arxiv.org/abs/2401.15351v1
"2024-01-27T08:52:19"
cs.CL, cs.AI, cs.IR
2,024
On the Affinity, Rationality, and Diversity of Hierarchical Topic Modeling
Xiaobao Wu, Fengjun Pan, Thong Nguyen, Yichao Feng, Chaoqun Liu, Cong-Duy Nguyen, Anh Tuan Luu
Hierarchical topic modeling aims to discover latent topics from a corpus and organize them into a hierarchy to understand documents with desirable semantic granularity. However, existing work often produces topic hierarchies with low affinity, rationality, and diversity, which hampers document understanding. To overcome these challenges, in this paper we propose the Transport Plan and Context-aware Hierarchical Topic Model (TraCo). Instead of the simple topic dependencies of earlier methods, we propose a transport plan dependency method. It constrains dependencies to ensure their sparsity and balance, and also regularizes topic hierarchy building with them. This improves the affinity and diversity of hierarchies. We further propose a context-aware disentangled decoder. Rather than the entangled decoding of previous work, it distributes different semantic granularity to topics at different levels through disentangled decoding. This facilitates the rationality of hierarchies. Experiments on benchmark datasets demonstrate that our method surpasses state-of-the-art baselines, effectively improving the affinity, rationality, and diversity of hierarchical topic modeling with better performance on downstream tasks.
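As background on the transport-plan ingredient, a minimal Sinkhorn sketch that computes a balanced coupling between child topics and parent topics; the cost matrix and regularization strength are toy values, and this is not TraCo's exact objective:

```python
import numpy as np

def sinkhorn_plan(cost, reg=0.05, iters=200):
    """Entropy-regularized optimal transport: returns a plan whose row and
    column sums match uniform marginals, i.e. sparse but balanced
    dependencies between child topics (rows) and parent topics (columns)."""
    n, m = cost.shape
    a, b = np.full(n, 1.0 / n), np.full(m, 1.0 / m)  # uniform marginals
    K = np.exp(-cost / reg)
    u = np.ones(n)
    for _ in range(iters):
        v = b / (K.T @ u)
        u = a / (K @ v)
    return u[:, None] * K * v[None, :]

# Cost: e.g. distances between child and parent topic embeddings (toy numbers).
cost = np.random.default_rng(0).random((6, 3))
plan = sinkhorn_plan(cost)
print(plan.sum(axis=0), plan.sum(axis=1))  # balanced marginals
```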
http://arxiv.org/abs/2401.14113v2
"2024-01-25T11:47:58"
cs.CL
2,024
Dynamic embedded topic models and change-point detection for exploring literary-historical hypotheses
Hale Sirin, Tom Lippincott
We present a novel combination of dynamic embedded topic models and change-point detection to explore diachronic change of lexical semantic modality in classical and early Christian Latin. We demonstrate several methods for finding and characterizing patterns in the output, and relating them to traditional scholarship in Comparative Literature and Classics. This simple approach to unsupervised models of semantic change can be applied to any suitable corpus, and we conclude with future directions and refinements aiming to allow noisier, less-curated materials to meet that threshold.
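A small sketch of the change-point half of the pipeline, applying the ruptures library to a synthetic topic-proportion series standing in for a dynamic embedded topic model's output; the penalty value and data are placeholders:

```python
import numpy as np
import ruptures as rpt

# Toy stand-in for a diachronic topic-proportion series: a modality-related
# topic whose weight shifts between the classical and early Christian periods.
rng = np.random.default_rng(1)
series = np.concatenate([rng.normal(0.10, 0.02, 120),
                         rng.normal(0.25, 0.03, 80)])

algo = rpt.Pelt(model="rbf").fit(series.reshape(-1, 1))
breakpoints = algo.predict(pen=5)  # indices where the topic's level changes
print(breakpoints)
```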
http://arxiv.org/abs/2401.13905v1
"2024-01-25T02:50:03"
cs.CL
2,024
Navigating Dataset Documentations in AI: A Large-Scale Analysis of Dataset Cards on Hugging Face
Xinyu Yang, Weixin Liang, James Zou
Advances in machine learning are closely tied to the creation of datasets. While data documentation is widely recognized as essential to the reliability, reproducibility, and transparency of ML, we lack a systematic empirical understanding of current dataset documentation practices. To shed light on this question, here we take Hugging Face -- one of the largest platforms for sharing and collaborating on ML models and datasets -- as a prominent case study. By analyzing all 7,433 dataset documentation pages on Hugging Face, our investigation provides an overview of the Hugging Face dataset ecosystem and insights into dataset documentation practices, yielding 5 main findings: (1) The dataset card completion rate shows marked heterogeneity correlated with dataset popularity. (2) A granular examination of each section within the dataset card reveals that practitioners seem to prioritize Dataset Description and Dataset Structure sections, while the Considerations for Using the Data section receives the lowest proportion of content. (3) By analyzing the subsections within each section and utilizing topic modeling to identify key topics, we uncover what is discussed in each section, and underscore significant themes encompassing both technical and social impacts, as well as limitations within the Considerations for Using the Data section. (4) Our findings also highlight the need for improved accessibility and reproducibility of datasets in the Usage sections. (5) In addition, our human annotation evaluation emphasizes the pivotal role of comprehensive dataset content in shaping individuals' perceptions of a dataset card's overall quality. Overall, our study offers a unique perspective on analyzing dataset documentation through large-scale data science analysis and underlines the need for more thorough dataset documentation in machine learning research.
http://arxiv.org/abs/2401.13822v1
"2024-01-24T21:47:13"
cs.LG, cs.AI
2,024
Longitudinal Sentiment Topic Modelling of Reddit Posts
Fabian Nwaoha, Ziyad Gaffar, Ho Joon Chun, Marina Sokolova
In this study, we analyze the texts of Reddit posts written by students of four major Canadian universities. We gauge the emotional tone and uncover prevailing themes and discussions through longitudinal topic modeling of the posts' textual data. Our study focuses on four years, 2020-2023, covering the COVID-19 pandemic and post-pandemic years. Our results highlight a gradual uptick in discussions related to mental health.
http://arxiv.org/abs/2401.13805v1
"2024-01-24T20:56:23"
cs.SI, cs.IR, I.2.7
2,024
ConceptThread: Visualizing Threaded Concepts in MOOC Videos
Zhiguang Zhou, Li Ye, Lihong Cai, Lei Wang, Yigang Wang, Yongheng Wang, Wei Chen, Yong Wang
Massive Open Online Course (MOOC) platforms have become increasingly popular in recent years. Online learners need to watch the whole course video on MOOC platforms to learn the underlying new knowledge, which is often tedious and time-consuming due to the lack of a quick overview of the covered knowledge and its structure. In this paper, we propose ConceptThread, a visual analytics approach to effectively show the concepts and the relations among them to facilitate effective online learning. Specifically, given that the majority of MOOC videos contain slides, we first leverage video processing and speech analysis techniques, including shot recognition, speech recognition and topic modeling, to extract core knowledge concepts and construct the hierarchical and temporal relations among them. Then, by using a metaphor of thread, we present a novel visualization to intuitively display the concepts based on video sequential flow, and enable learners to perform interactive visual exploration of concepts. We conducted a quantitative study, two case studies, and a user study to extensively evaluate ConceptThread. The results demonstrate the effectiveness and usability of ConceptThread in providing online learners with a quick understanding of the knowledge content of MOOC videos.
http://arxiv.org/abs/2401.11132v1
"2024-01-20T06:03:44"
cs.HC
2,024
An Information Retrieval and Extraction Tool for Covid-19 Related Papers
Marcos V. L. Pivetta
Background: The COVID-19 pandemic has caused severe impacts on health systems worldwide. Its critical nature and the increased interest of individuals and organizations in developing countermeasures to the problem have led to a surge of new studies in scientific journals. Objective: We sought to develop a tool that incorporates, in a novel way, aspects of Information Retrieval (IR) and Extraction (IE) applied to the COVID-19 Open Research Dataset (CORD-19). The main focus of this paper is to provide researchers with a better search tool for COVID-19 related papers, helping them find reference papers and highlight relevant entities in text. Method: We applied Latent Dirichlet Allocation (LDA) to model, based on research aspects, the topics of all English abstracts in CORD-19. Relevant named entities of each abstract were extracted and linked to the corresponding UMLS concept. Regular expressions and the K-Nearest Neighbors algorithm were used to rank relevant papers. Results: Our tool has shown the potential to assist researchers by automating a topic-based search of CORD-19 papers. Nonetheless, we identified that more fine-tuned topic modeling parameters and increased accuracy of the research aspect classifier model could lead to a more accurate and reliable tool. Conclusion: We emphasize the need for new automated tools to help researchers find relevant COVID-19 documents, in addition to automatically extracting useful information contained in them. Our work suggests that combining different algorithms and models could lead to new ways of browsing COVID-19 paper data.
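A minimal gensim sketch of the LDA step described in the Method section; the tokenized abstracts are toy stand-ins for the preprocessed CORD-19 corpus:

```python
from gensim import corpora, models

# Stand-in for tokenized English CORD-19 abstracts; real preprocessing
# (stopword removal, lemmatization) is omitted for brevity.
abstracts = [["covid", "vaccine", "trial", "efficacy"],
             ["mask", "transmission", "aerosol", "droplet"],
             ["vaccine", "antibody", "immune", "response"]]

dictionary = corpora.Dictionary(abstracts)
bow_corpus = [dictionary.doc2bow(doc) for doc in abstracts]
lda = models.LdaModel(bow_corpus, num_topics=2, id2word=dictionary,
                      passes=10, random_state=0)
for topic_id, words in lda.print_topics():
    print(topic_id, words)

# A query can then be mapped into the topic space to help rank papers.
query_bow = dictionary.doc2bow(["vaccine", "efficacy"])
print(lda.get_document_topics(query_bow))
```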
http://arxiv.org/abs/2401.16430v1
"2024-01-20T01:34:50"
cs.IR, cs.CL
2,024
Combining topic modelling and citation network analysis to study case law from the European Court on Human Rights on the right to respect for private and family life
M. Mohammadi, L. M. Bruijn, M. Wieling, M. Vols
As legal case law databases such as HUDOC continue to grow rapidly, it has become essential for legal researchers to find efficient methods to handle such large-scale data sets. Such case law databases usually consist of the textual content of cases together with the citations between them. This paper focuses on case law from the European Court of Human Rights on Article 8 of the European Convention on Human Rights, the right to respect for private and family life, home and correspondence. In this study, we demonstrate and compare the potential of topic modelling and citation network analysis to find and organize case law on Article 8 based on general themes and citation patterns, respectively. Additionally, we explore whether combining these two techniques leads to better results compared to the application of only one of the methods. We evaluate the effectiveness of the combined method on a unique manually collected and annotated dataset of Article 8 case law on evictions. The results of our experiments show that our combined (text- and citation-based) approach provides the best results in finding and grouping case law, providing scholars with an effective way to extract and analyse relevant cases on a specific issue.
http://arxiv.org/abs/2401.16429v1
"2024-01-19T14:30:35"
cs.IR, cs.CL, cs.DL, cs.LG
2,024
Landscape of Generative AI in Global News: Topics, Sentiments, and Spatiotemporal Analysis
Lu Xian, Lingyao Li, Yiwei Xu, Ben Zefeng Zhang, Libby Hemphill
Generative AI has exhibited considerable potential to transform various industries and public life. The role of news media coverage of generative AI is pivotal in shaping public perceptions and judgments about this significant technological innovation. This paper provides an in-depth analysis and rich insights into the temporal and spatial distribution of topics, sentiment, and substantive themes within global news coverage of the latest emerging technology: generative AI. We collected a comprehensive dataset of news articles (January 2018 to November 2023, N = 24,827). For topic modeling, we employed the BERTopic technique and combined it with qualitative coding to identify semantic themes. Subsequently, sentiment analysis was conducted using the RoBERTa-base model. Analysis of temporal patterns in the data reveals notable variability in coverage across key topics (business, corporate technological development, regulation and security, and education), with spikes in articles coinciding with major AI developments and policy discussions. Sentiment analysis shows a predominantly neutral to positive media stance, with the business-related articles exhibiting more positive sentiment, while regulation and security articles receive a reserved, neutral to negative sentiment. Our study offers a valuable framework to investigate global news discourse and evaluate news attitudes and themes related to emerging technologies.
http://arxiv.org/abs/2401.08899v1
"2024-01-17T00:53:31"
cs.CY
2,024
Topic Modelling: Going Beyond Token Outputs
Lowri Williams, Eirini Anthi, Laura Arman, Pete Burnap
Topic modelling is a text mining technique for identifying salient themes from a number of documents. The output is commonly a set of topics consisting of isolated tokens that often co-occur in such documents. Manual effort is often associated with interpreting a topic's description from such tokens. However, from a human's perspective, such outputs may not adequately provide enough information to infer the meaning of the topics; thus, their interpretability is often inaccurately understood. Although several studies have attempted to automatically extend topic descriptions as a means of enhancing the interpretation of topic models, they rely on external language sources that may become unavailable, must be kept up-to-date to generate relevant results, and present privacy issues when training on or processing data. This paper presents a novel approach towards extending the output of traditional topic modelling methods beyond a list of isolated tokens. This approach removes the dependence on external sources by using the textual data itself by extracting high-scoring keywords and mapping them to the topic model's token outputs. To measure the interpretability of the proposed outputs against those of the traditional topic modelling approach, independent annotators manually scored each output based on their quality and usefulness, as well as the efficiency of the annotation task. The proposed approach demonstrated higher quality and usefulness, as well as higher efficiency in the annotation task, in comparison to the outputs of a traditional topic modelling method, demonstrating an increase in their interpretability.
http://arxiv.org/abs/2401.12990v1
"2024-01-16T16:05:54"
cs.CL, cs.LG
2,024
Understanding Emotional Disclosure via Diary-keeping in Quarantine on Social Media
Yue Deng, Changyang He, Bo Li
Quarantine is a widely-adopted measure during health crises caused by highly-contagious diseases like COVID-19, yet it poses critical challenges to public mental health. Given this context, emotional disclosure on social media in the form of keeping a diary emerges as a popular way for individuals to express emotions and record their mental health status. However, the exploration of emotional disclosure via diary-keeping on social media during quarantine is underexplored, understanding which could be beneficial to facilitate emotional connections and enlighten health intervention measures. Focusing on this particular form of self-disclosure, this work proposes a quantitative approach to figure out the prevalence and changing patterns of emotional disclosure during quarantine, and the possible factors contributing to the negative emotions. We collected 58,796 posts with the "Quarantine Diary" keyword on Weibo, a popular social media website in China. Through text classification, we capture diverse emotion categories that characterize public emotion disclosure during quarantine, such as annoyed, anxious, boring, happy, hopeful and appreciative. Based on temporal analysis, we uncover the changing patterns of emotional disclosure from long-term perspectives and period-based perspectives (e.g., the gradual decline of all negative emotions and the upsurge of the annoyed emotion near the end of quarantine). Leveraging topic modeling, we also encapsulate the possible influencing factors of negative emotions, such as freedom restriction and solitude, and uncertainty of infection and supply. We reflect on how our findings could deepen the understanding of mental health on social media and further provide practical and design implications to mitigate mental health issues during quarantine.
http://arxiv.org/abs/2401.07230v1
"2024-01-14T08:31:08"
cs.HC, cs.SI
2,024
The Pulse of Mood Online: Unveiling Emotional Reactions in a Dynamic Social Media Landscape
Siyi Guo, Zihao He, Ashwin Rao, Fred Morstatter, Jeffrey Brantingham, Kristina Lerman
The rich and dynamic information environment of social media provides researchers, policy makers, and entrepreneurs with opportunities to learn about social phenomena in a timely manner. However, using these data to understand social behavior is difficult due to heterogeneity of topics and events discussed in the highly dynamic online information environment. To address these challenges, we present a method for systematically detecting and measuring emotional reactions to offline events using change point detection on the time series of collective affect, and further explaining these reactions using a transformer-based topic model. We demonstrate the utility of the method by successfully detecting major and smaller events on three different datasets, including (1) a Los Angeles Tweet dataset between Jan. and Aug. 2020, in which we revealed the complex psychological impact of the BlackLivesMatter movement and the COVID-19 pandemic, (2) a dataset related to abortion rights discussions in USA, in which we uncovered the strong emotional reactions to the overturn of Roe v. Wade and state abortion bans, and (3) a dataset about the 2022 French presidential election, in which we discovered the emotional and moral shift from positive before voting to fear and criticism after voting. The capability of our method allows for better sensing and monitoring of the population's reactions during crises using online data.
http://arxiv.org/abs/2401.06275v1
"2024-01-11T22:12:55"
cs.SI
2,024
Short-Form Videos and Mental Health: A Knowledge-Guided Neural Topic Model
Jiaheng Xie, Ruicheng Liang, Yidong Chai, Yang Liu, Daniel Zeng
While short-form videos are on track to reshape the entire social media landscape, experts are exceedingly worried about their depressive impacts on viewers, as evidenced by medical studies. To prevent widespread consequences, platforms are eager to predict these videos' impact on viewers' mental health. Subsequently, they can take intervention measures, such as revising recommendation algorithms and displaying viewer discretion. Nevertheless, applicable predictive methods lack relevance to well-established medical knowledge, which outlines clinically proven external and environmental factors of depression. To account for such medical knowledge, we resort to an emergent methodological discipline, seeded Neural Topic Models (NTMs). However, existing seeded NTMs suffer from the limitations of single-origin topics, unknown topic sources, unclear seed supervision, and suboptimal convergence. To address those challenges, we develop a novel Knowledge-guided Multimodal NTM to predict a short-form video's depressive impact on viewers. Extensive empirical analyses using TikTok and Douyin datasets prove that our method outperforms state-of-the-art benchmarks. Our method also discovers medically relevant topics from videos that are linked to depressive impact. We contribute to Information Systems (IS) research with a novel video analytics method that is generalizable to other video classification problems. Practically, our method can help platforms understand videos' mental impacts, thus adjusting recommendations and video topic disclosure.
http://arxiv.org/abs/2402.10045v3
"2024-01-11T03:36:47"
cs.CV, cs.LG
2,024
Probabilistic emotion and sentiment modelling of patient-reported experiences
Curtis Murray, Lewis Mitchell, Jonathan Tuke, Mark Mackay
This study introduces a novel methodology for modelling patient emotions from online patient experience narratives. We employed metadata network topic modelling to analyse patient-reported experiences from Care Opinion, revealing key emotional themes linked to patient-caregiver interactions and clinical outcomes. We develop a probabilistic, context-specific emotion recommender system capable of predicting both multilabel emotions and binary sentiments using a naive Bayes classifier with contextually meaningful topics as predictors. The superior performance of our predicted emotions under this model compared to baseline models was assessed using the information retrieval metrics nDCG and Q-measure, and our predicted sentiments achieved an F1 score of 0.921, significantly outperforming standard sentiment lexicons. This method offers a transparent, cost-effective way to understand patient feedback, enhancing traditional collection methods and informing individualised patient care. Our findings are accessible via an R package and interactive dashboard, providing valuable tools for healthcare researchers and practitioners.
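A toy sketch of predicting binary sentiment from topic proportions with a naive Bayes classifier; a Gaussian likelihood is chosen here for the continuous topic weights, and the data and topic names are invented for illustration, not drawn from the study:

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB

# Toy document-topic proportions (rows: patient narratives, columns:
# hypothetical topics such as "staff kindness", "waiting times", "discharge")
# and binary sentiment labels.
theta = np.array([[0.7, 0.1, 0.2],
                  [0.1, 0.8, 0.1],
                  [0.6, 0.2, 0.2],
                  [0.2, 0.7, 0.1]])
sentiment = np.array([1, 0, 1, 0])  # 1 = positive, 0 = negative

clf = GaussianNB().fit(theta, sentiment)
print(clf.predict_proba(np.array([[0.5, 0.3, 0.2]])))  # sentiment probabilities
```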
http://arxiv.org/abs/2401.04367v1
"2024-01-09T05:39:20"
cs.CL
2,024
Using Zero-shot Prompting in the Automatic Creation and Expansion of Topic Taxonomies for Tagging Retail Banking Transactions
Daniel de S. Moraes, Pedro T. C. Santos, Polyana B. da Costa, Matheus A. S. Pinto, Ivan de J. P. Pinto, Álvaro M. G. da Veiga, Sergio Colcher, Antonio J. G. Busson, Rafael H. Rocha, Rennan Gaio, Rafael Miceli, Gabriela Tourinho, Marcos Rabaioli, Leandro Santos, Fellipe Marques, David Favaro
This work presents an unsupervised method for automatically constructing and expanding topic taxonomies using instruction-based fine-tuned LLMs (Large Language Models). We apply topic modeling and keyword extraction techniques to create initial topic taxonomies and LLMs to post-process the resulting terms and create a hierarchy. To expand an existing taxonomy with new terms, we use zero-shot prompting to find out where to add new nodes, which, to our knowledge, is the first work to present such an approach to taxonomy tasks. We use the resulting taxonomies to assign tags that characterize merchants from a retail bank dataset. To evaluate our work, we asked 12 volunteers to answer a two-part form in which we first assessed the quality of the taxonomies created and then the tags assigned to merchants based on that taxonomy. The evaluation revealed a coherence rate exceeding 90% for the chosen taxonomies. The taxonomies' expansion with LLMs also showed exciting results for parent node prediction, with an F1 score above 70% in our taxonomies.
http://arxiv.org/abs/2401.06790v2
"2024-01-08T00:27:16"
cs.CL, cs.AI
2,024
German Text Embedding Clustering Benchmark
Silvan Wehrli, Bert Arnrich, Christopher Irrgang
This work introduces a benchmark assessing the performance of clustering German text embeddings in different domains. This benchmark is driven by the increasing use of clustering neural text embeddings in tasks that require the grouping of texts (such as topic modeling) and the need for German resources in existing benchmarks. We provide an initial analysis for a range of pre-trained mono- and multilingual models evaluated on the outcome of different clustering algorithms. The results include strongly performing mono- and multilingual models. Reducing the dimensions of embeddings can further improve clustering. Additionally, we conduct experiments with continued pre-training for German BERT models to estimate the benefits of this additional training. Our experiments suggest that significant performance improvements are possible for short text. All code and datasets are publicly available.
http://arxiv.org/abs/2401.02709v1
"2024-01-05T08:42:45"
cs.CL, cs.AI
2,024
Text mining arXiv: a look through quantitative finance papers
Michele Leonardo Bianchi
This paper explores articles hosted on the arXiv preprint server with the aim to uncover valuable insights hidden in this vast collection of research. Employing text mining and natural language processing methods, we examine the contents of quantitative finance papers posted on arXiv from 1997 to 2022. We extract and analyze crucial information from the entire documents, including the references, to understand the topic trends over time and to identify the most cited researchers and journals in this domain. Additionally, we compare numerous algorithms to perform topic modeling, including state-of-the-art approaches.
http://arxiv.org/abs/2401.01751v2
"2024-01-03T14:06:06"
cs.DL, cs.IR, q-fin.GN
2,024
A Latent Dirichlet Allocation (LDA) Semantic Text Analytics Approach to Explore Topical Features in Charity Crowdfunding Campaigns
Prathamesh Muzumdar, George Kurian, Ganga Prasad Basyal
Crowdfunding in the realm of the Social Web has received substantial attention, with prior research examining various aspects of campaigns, including project objectives, durations, and influential project categories for successful fundraising. These factors are crucial for entrepreneurs seeking donor support. However, the terrain of charity crowdfunding within the Social Web remains relatively unexplored, lacking comprehension of the motivations driving donations that often lack concrete reciprocation. Distinct from conventional crowdfunding that offers tangible returns, charity crowdfunding relies on intangible rewards like tax advantages, recognition posts, or advisory roles. Such details are often embedded within campaign narratives, yet the analysis of textual content in charity crowdfunding is limited. This study introduces an inventive text analytics framework, utilizing Latent Dirichlet Allocation (LDA) to extract latent themes from textual descriptions of charity campaigns. The study explores four different themes, two each in the campaign and incentive descriptions. Campaign description themes focus on child and elderly health, mainly patients diagnosed with terminal diseases. Incentive description themes are based on tax benefits, certificates, and appreciation posts. These themes, combined with numerical parameters, predict campaign success. The study successfully used a Random Forest classifier to predict campaign success from both thematic and numerical parameters. The study distinguishes thematic categories, particularly medical need-based charity and general causes, based on project and incentive descriptions. In conclusion, this research bridges the gap by showcasing the utility of topic modelling in the uncharted charity crowdfunding domain.
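A hedged sketch of the final prediction step, combining LDA theme weights with numeric campaign parameters in a Random Forest classifier; all data below are synthetic placeholders, not the study's dataset:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical features: four LDA theme weights (two from campaign and two
# from incentive descriptions) concatenated with numeric campaign parameters
# (e.g. goal amount, duration), plus a binary success label.
rng = np.random.default_rng(0)
theme_weights = rng.dirichlet(np.ones(4), size=200)
numeric = rng.random((200, 2))
X = np.hstack([theme_weights, numeric])
y = rng.integers(0, 2, size=200)  # 1 = successfully funded

clf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X, y)
print(clf.feature_importances_)  # thematic vs. numeric contribution to success
```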
http://arxiv.org/abs/2401.02988v1
"2024-01-03T09:17:46"
cs.CL, stat.AP
2,024
Discovering Significant Topics from Legal Decisions with Selective Inference
Jerrold Soh
We propose and evaluate an automated pipeline for discovering significant topics from legal decision texts by passing features synthesized with topic models through penalised regressions and post-selection significance tests. The method identifies case topics significantly correlated with outcomes, topic-word distributions which can be manually-interpreted to gain insights about significant topics, and case-topic weights which can be used to identify representative cases for each topic. We demonstrate the method on a new dataset of domain name disputes and a canonical dataset of European Court of Human Rights violation cases. Topic models based on latent semantic analysis as well as language model embeddings are evaluated. We show that topics derived by the pipeline are consistent with legal doctrines in both areas and can be useful in other related legal analysis tasks.
http://arxiv.org/abs/2401.01068v1
"2024-01-02T07:00:24"
cs.CL, cs.AI
2,024
Recent Advances in Text Analysis
Zheng Tracy Ke, Pengsheng Ji, Jiashun Jin, Wanshan Li
Text analysis is an interesting research area in data science and has various applications, such as in artificial intelligence, biomedical research, and engineering. We review popular methods for text analysis, ranging from topic modeling to the recent neural language models. In particular, we review Topic-SCORE, a statistical approach to topic modeling, and discuss how to use it to analyze MADStat, a dataset on statistical publications that we collected and cleaned. The application of Topic-SCORE and other methods on MADStat leads to interesting findings. For example, 11 representative topics in statistics are identified. For each journal, the evolution of topic weights over time can be visualized, and these results are used to analyze the trends in statistical research. In particular, we propose a new statistical model for ranking the citation impacts of the 11 topics, and we also build a cross-topic citation graph to illustrate how research results on different topics spread to one another. The results on MADStat provide a data-driven picture of the statistical research in 1975-2015, from a text analysis perspective.
http://arxiv.org/abs/2401.00775v2
"2024-01-01T14:41:10"
stat.AP, cs.IR
2,024
AHAM: Adapt, Help, Ask, Model -- Harvesting LLMs for literature mining
Boshko Koloski, Nada Lavrač, Bojan Cestnik, Senja Pollak, Blaž Škrlj, Andrej Kastrin
In an era marked by a rapid increase in scientific publications, researchers grapple with the challenge of keeping pace with field-specific advances. We present the AHAM methodology and a metric that guides the domain-specific adaptation of the BERTopic topic modeling framework to improve scientific text analysis. By utilizing the LLaMa2 generative language model, we generate topic definitions via one-shot learning by crafting prompts with the help of domain experts to guide the LLM for literature mining by asking it to model the topic names. For inter-topic similarity evaluation, we leverage metrics from language generation and translation processes to assess the lexical and semantic similarity of the generated topics. Our system aims to reduce both the ratio of outlier topics to the total number of topics and the similarity between topic definitions. The methodology has been assessed on a newly gathered corpus of scientific papers on literature-based discovery. Through rigorous evaluation by domain experts, AHAM has been validated as effective in uncovering intriguing and novel insights within broad research areas. We explore the impact of domain adaptation of sentence transformers for the task of topic modeling using two datasets, each specialized to specific scientific domains within arXiv and medarxiv. We evaluate the impact of data size, the niche of adaptation, and the importance of domain adaptation. Our results suggest a strong interaction between domain adaptation and topic modeling precision in terms of outliers and topic definitions.
http://arxiv.org/abs/2312.15784v1
"2023-12-25T18:23:03"
cs.CL, cs.AI
2,023
Inference of Dependency Knowledge Graph for Electronic Health Records
Zhiwei Xu, Ziming Gan, Doudou Zhou, Shuting Shen, Junwei Lu, Tianxi Cai
The effective analysis of high-dimensional Electronic Health Record (EHR) data, with substantial potential for healthcare research, presents notable methodological challenges. Employing predictive modeling guided by a knowledge graph (KG), which enables efficient feature selection, can enhance both statistical efficiency and interpretability. While various methods have emerged for constructing KGs, existing techniques often lack statistical certainty concerning the presence of links between entities, especially in scenarios where the utilization of patient-level EHR data is limited due to privacy concerns. In this paper, we propose the first inferential framework for deriving a sparse KG with statistical guarantee based on the dynamic log-linear topic model proposed by Arora et al. (2016). Within this model, the KG embeddings are estimated by performing singular value decomposition on the empirical pointwise mutual information matrix, offering a scalable solution. We then establish entrywise asymptotic normality for the KG low-rank estimator, enabling the recovery of sparse graph edges with controlled type I error. Our work uniquely addresses the under-explored domain of statistical inference about non-linear statistics under the low-rank temporal dependent models, a critical gap in existing research. We validate our approach through extensive simulation studies and then apply the method to real-world EHR data in constructing clinical KGs and generating clinical feature embeddings.
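A simplified NumPy sketch of the embedding step the abstract describes: build a positive pointwise mutual information matrix from concept co-occurrences and take a truncated SVD. The inferential machinery and type-I-error control are omitted, and the toy matrix is invented:

```python
import numpy as np

def ppmi_svd_embeddings(cooc, dim):
    """Embed concepts from a co-occurrence matrix via PPMI + truncated SVD.
    Assumes every concept co-occurs at least once (nonzero marginals)."""
    total = cooc.sum()
    row = cooc.sum(axis=1, keepdims=True)
    col = cooc.sum(axis=0, keepdims=True)
    pmi = np.log((cooc * total) / (row @ col) + 1e-12)
    ppmi = np.maximum(pmi, 0.0)           # clip negative associations
    U, S, _ = np.linalg.svd(ppmi)
    return U[:, :dim] * np.sqrt(S[:dim])  # low-rank concept embeddings

# Toy co-occurrence counts among three clinical concepts.
cooc = np.array([[0., 8., 1.],
                 [8., 0., 2.],
                 [1., 2., 0.]])
print(ppmi_svd_embeddings(cooc, dim=2))
```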
http://arxiv.org/abs/2312.15611v1
"2023-12-25T04:45:36"
stat.ME, stat.ML
2,023
Deep de Finetti: Recovering Topic Distributions from Large Language Models
Liyi Zhang, R. Thomas McCoy, Theodore R. Sumers, Jian-Qiao Zhu, Thomas L. Griffiths
Large language models (LLMs) can produce long, coherent passages of text, suggesting that LLMs, although trained on next-word prediction, must represent the latent structure that characterizes a document. Prior work has found that internal representations of LLMs encode one aspect of latent structure, namely syntax; here we investigate a complementary aspect, namely the document's topic structure. We motivate the hypothesis that LLMs capture topic structure by connecting LLM optimization to implicit Bayesian inference. De Finetti's theorem shows that exchangeable probability distributions can be represented as a mixture with respect to a latent generating distribution. Although text is not exchangeable at the level of syntax, exchangeability is a reasonable starting assumption for topic structure. We thus hypothesize that predicting the next token in text will lead LLMs to recover latent topic distributions. We examine this hypothesis using Latent Dirichlet Allocation (LDA), an exchangeable probabilistic topic model, as a target, and we show that the representations formed by LLMs encode both the topics used to generate synthetic data and those used to explain natural corpus data.
http://arxiv.org/abs/2312.14226v1
"2023-12-21T16:44:39"
cs.CL, cs.AI, cs.LG, stat.ML, I.2.6; I.2.7
2,023
MixEHR-SurG: a joint proportional hazard and guided topic model for inferring mortality-associated topics from electronic health records
Yixuan Li, Archer Y. Yang, Ariane Marelli, Yue Li
Survival models can help medical practitioners to evaluate the prognostic importance of clinical variables to patient outcomes such as mortality or hospital readmission and subsequently design personalized treatment regimes. Electronic Health Records (EHRs) hold the promise for large-scale survival analysis based on systematically recorded clinical features for each patient. However, existing survival models either do not scale to high dimensional and multi-modal EHR data or are difficult to interpret. In this study, we present a supervised topic model called MixEHR-SurG to simultaneously integrate heterogeneous EHR data and model survival hazard. Our contributions are threefold: (1) integrating EHR topic inference with Cox proportional hazards likelihood; (2) integrating patient-specific topic hyperparameters using the PheCode concepts such that each topic can be identified with exactly one PheCode-associated phenotype; (3) multi-modal survival topic inference. This leads to a highly interpretable survival topic model that can infer PheCode-specific phenotype topics associated with patient mortality. We evaluated MixEHR-SurG using a simulated dataset and two real-world EHR datasets: the Quebec Congenital Heart Disease (CHD) data, consisting of 8,211 subjects with 75,187 outpatient claim records of 1,767 unique ICD codes; and MIMIC-III, consisting of 1,458 subjects with multi-modal EHR records. Compared to the baselines, MixEHR-SurG achieved a superior dynamic AUROC for mortality prediction, with a mean AUROC score of 0.89 in the simulation dataset and a mean AUROC of 0.645 on the CHD dataset. Qualitatively, MixEHR-SurG associates severe cardiac conditions with high mortality risk among the CHD patients after the first heart failure hospitalization and critical brain injuries with increased mortality among the MIMIC-III patients after their ICU discharge.
http://arxiv.org/abs/2312.13454v3
"2023-12-20T22:13:45"
cs.LG, stat.ME
2,023
Analyzing Public Reactions, Perceptions, and Attitudes during the MPox Outbreak: Findings from Topic Modeling of Tweets
Nirmalya Thakur, Yuvraj Nihal Duggal, Zihui Liu
The recent outbreak of the MPox virus has resulted in a tremendous increase in the usage of Twitter. Prior works in this area of research have primarily focused on the sentiment analysis and content analysis of these Tweets, and the few works that have focused on topic modeling have multiple limitations. This paper aims to address this research gap and makes two scientific contributions to this field. First, it presents the results of performing Topic Modeling on 601,432 Tweets about the 2022 Mpox outbreak that were posted on Twitter between 7 May 2022 and 3 March 2023. The results indicate that the conversations on Twitter related to Mpox during this time range may be broadly categorized into four distinct themes - Views and Perspectives about Mpox, Updates on Cases and Investigations about Mpox, Mpox and the LGBTQIA+ Community, and Mpox and COVID-19. Second, the paper presents the findings from the analysis of these Tweets. The results show that the theme that was most popular on Twitter (in terms of the number of Tweets posted) during this time range was Views and Perspectives about Mpox. This was followed by the theme of Mpox and the LGBTQIA+ Community, which was followed by the themes of Mpox and COVID-19 and Updates on Cases and Investigations about Mpox, respectively. Finally, a comparison with related studies in this area of research is also presented to highlight the novelty and significance of this research work.
http://arxiv.org/abs/2312.11895v1
"2023-12-19T06:39:38"
cs.SI, cs.AI, cs.CL, cs.CY
2,023
Dynamic Topic Language Model on Heterogeneous Children's Mental Health Clinical Notes
Hanwen Ye, Tatiana Moreno, Adrianne Alpern, Louis Ehwerhemuepha, Annie Qu
Mental health conditions affect children's lives and well-being and have received increased attention since the COVID-19 pandemic. Analyzing psychiatric clinical notes with topic models is critical to evaluating children's mental status over time. However, few topic models are built for longitudinal settings, and they fail to keep consistent topics and capture temporal trajectories for each document. To address these challenges, we develop a longitudinal topic model with time-invariant topics and individualized temporal dependencies on the evolving document metadata. Our model preserves the semantic meaning of discovered topics over time and incorporates heterogeneity among documents. In particular, when documents can be categorized, we propose an unsupervised topic learning approach to maximize topic heterogeneity across different document groups. We also present an efficient variational optimization procedure adapted for the multistage longitudinal setting. In this case study, we apply our method to the psychiatric clinical notes from a large tertiary pediatric hospital in Southern California and achieve a 38% increase in the overall coherence of extracted topics. Our real data analysis reveals that children tend to express more negative emotions during state shutdowns and more positive ones when schools reopen. Furthermore, it suggests that sexual and gender minority (SGM) children display more pronounced reactions to major COVID-19 events and a greater sensitivity to vaccine-related news than non-SGM children. This study examines the progression of children's mental health during the pandemic and offers clinicians valuable insights to recognize the disparities in children's mental health related to their sexual and gender identities.
http://arxiv.org/abs/2312.14180v1
"2023-12-19T00:36:53"
cs.CL, cs.LG, stat.AP, stat.ML
2,023
Topic-VQ-VAE: Leveraging Latent Codebooks for Flexible Topic-Guided Document Generation
YoungJoon Yoo, Jongwon Choi
This paper introduces a novel approach for topic modeling utilizing latent codebooks from Vector-Quantized Variational Auto-Encoder (VQ-VAE), discretely encapsulating the rich information of the pre-trained embeddings such as the pre-trained language model. From the novel interpretation of the latent codebooks and embeddings as conceptual bag-of-words, we propose a new generative topic model called Topic-VQ-VAE (TVQ-VAE) which inversely generates the original documents related to the respective latent codebook. The TVQ-VAE can visualize the topics with various generative distributions including the traditional BoW distribution and the autoregressive image generation. Our experimental results on document analysis and image generation demonstrate that TVQ-VAE effectively captures the topic context which reveals the underlying structures of the dataset and supports flexible forms of document generation. Official implementation of the proposed TVQ-VAE is available at https://github.com/clovaai/TVQ-VAE.
http://arxiv.org/abs/2312.11532v2
"2023-12-15T15:01:10"
cs.CL, cs.AI, cs.LG
2,023
Prompting Large Language Models for Topic Modeling
Han Wang, Nirmalendu Prakash, Nguyen Khoi Hoang, Ming Shan Hee, Usman Naseem, Roy Ka-Wei Lee
Topic modeling is a widely used technique for revealing underlying thematic structures within textual data. However, existing models have certain limitations, particularly when dealing with short text datasets that lack co-occurring words. Moreover, these models often neglect sentence-level semantics, focusing primarily on token-level semantics. In this paper, we propose PromptTopic, a novel topic modeling approach that harnesses the advanced language understanding of large language models (LLMs) to address these challenges. It involves extracting topics at the sentence level from individual documents, then aggregating and condensing these topics into a predefined quantity, ultimately providing coherent topics for texts of varying lengths. This approach eliminates the need for manual parameter tuning and improves the quality of extracted topics. We benchmark PromptTopic against the state-of-the-art baselines on three vastly diverse datasets, establishing its proficiency in discovering meaningful topics. Furthermore, qualitative analysis showcases PromptTopic's ability to uncover relevant topics in multiple datasets.
http://arxiv.org/abs/2312.09693v1
"2023-12-15T11:15:05"
cs.AI, I.2.7
2,023
Topic Bias in Emotion Classification
Maximilian Wegge, Roman Klinger
Emotion corpora are typically sampled based on keyword/hashtag search or by asking study participants to generate textual instances. In any case, these corpora are not uniform samples representing the entirety of a domain. We hypothesize that this practice of data acquisition leads to unrealistic correlations between overrepresented topics in these corpora that harm the generalizability of models. Such topic bias could lead to wrong predictions for instances like "I organized the service for my aunt's funeral." when funeral events are over-represented for instances labeled with sadness, despite the emotion of pride being more appropriate here. In this paper, we study this topic bias both from the data and the modeling perspective. We first label a set of emotion corpora automatically via topic modeling and show that emotions in fact correlate with specific topics. Further, we see that emotion classifiers are confounded by such topics. Finally, we show that the established debiasing method of adversarial correction via gradient reversal mitigates the issue. Our work points out issues with existing emotion corpora and that more representative resources are required for fair evaluation of models predicting affective concepts from text.
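A minimal PyTorch sketch of the adversarial correction via gradient reversal mentioned at the end of the abstract; the layer sizes, heads, and batch are placeholders, not the paper's architecture:

```python
import torch
from torch import nn
from torch.autograd import Function

class GradReverse(Function):
    """Identity on the forward pass; negated, scaled gradient on the backward
    pass, so the encoder learns features from which the topic cannot be
    predicted while the emotion head still trains normally."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None

encoder = nn.Linear(768, 128)      # stand-in text encoder
emotion_head = nn.Linear(128, 8)   # emotion classes
topic_head = nn.Linear(128, 20)    # confounding topic classes

x = torch.randn(4, 768)            # toy batch of text embeddings
h = torch.relu(encoder(x))
emotion_logits = emotion_head(h)
topic_logits = topic_head(GradReverse.apply(h, 1.0))  # adversarial branch
```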
http://arxiv.org/abs/2312.09043v3
"2023-12-14T15:40:27"
cs.CL
2,023
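Adversarial correction via gradient reversal, the debiasing method the abstract above reports as effective, is commonly implemented with a reversal layer like the following PyTorch sketch (the standard Ganin & Lempitsky construction; its attachment to an emotion classifier is left out here).

```python
# Standard gradient-reversal layer: identity on the forward pass, negated
# (scaled) gradient on the backward pass, so a topic classifier attached
# through it pushes the encoder to discard topic information.
import torch

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None

def grad_reverse(x, lambd: float = 1.0):
    return GradReverse.apply(x, lambd)

# Usage sketch: features -> emotion head (normal loss), and
# features -> grad_reverse -> topic head (adversarial loss).
```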
Contrastive News and Social Media Linking using BERT for Articles and Tweets across Dual Platforms
Jan Piotrowski, Marek Wachnicki, Mateusz Perlik, Jakub Podolak, Grzegorz Rucki, Michał Brzozowski, Paweł Olejnik, Julian Kozłowski, Tomasz Nocoń, Jakub Kozieł, Stanisław Giziński, Piotr Sankowski
X (formerly Twitter) has evolved into a contemporary agora, offering a platform for individuals to express opinions and viewpoints on current events. The majority of the topics discussed on Twitter are directly related to ongoing events, making it an important source for monitoring public discourse. However, linking tweets to specific news presents a significant challenge due to their concise and informal nature. Previous approaches, including topic models, graph-based models, and supervised classifiers, have fallen short in effectively capturing the unique characteristics of tweets and articles. Inspired by the success of the CLIP model in computer vision, which employs contrastive learning to model similarities between images and captions, this paper introduces a contrastive learning approach for training a representation space in which linked articles and tweets exhibit proximity. We present our contrastive learning approach, CATBERT (Contrastive Articles Tweets BERT), which leverages pre-trained BERT models. The model is trained and tested on a dataset containing manually labeled English and Polish tweets and articles related to the Russian-Ukrainian war. We evaluate CATBERT's performance against traditional approaches such as LDA, and against a novel method based on OpenAI embeddings that has not previously been applied to this task. Our findings indicate that CATBERT demonstrates superior performance in associating tweets with relevant news articles. Furthermore, we demonstrate the performance of the models when applied to finding the main topic (represented by an article) of a whole cascade of tweets, reporting each model's performance as a function of cascade size.
http://arxiv.org/abs/2312.07599v1
"2023-12-11T13:38:16"
cs.CL, cs.LG, I.2.7
2,023
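A CLIP-style symmetric contrastive objective over paired tweet/article embeddings, of the kind CATBERT is described as using, can be written compactly as below; the temperature value and the assumption of in-batch negatives are illustrative, not details confirmed by the paper.

```python
# Sketch of a CLIP-style symmetric contrastive loss over paired tweet/article
# embeddings. Encoders, pooling, and the temperature are illustrative.
import torch
import torch.nn.functional as F

def contrastive_loss(tweet_emb, article_emb, temperature: float = 0.07):
    # Normalize so the dot product is cosine similarity.
    t = F.normalize(tweet_emb, dim=-1)     # (batch, dim)
    a = F.normalize(article_emb, dim=-1)   # (batch, dim)
    logits = t @ a.T / temperature          # (batch, batch) similarity matrix
    targets = torch.arange(len(t), device=t.device)
    # Each tweet should match its own article and vice versa; other pairs in
    # the batch serve as negatives.
    return (F.cross_entropy(logits, targets) +
            F.cross_entropy(logits.T, targets)) / 2
```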
PromptMTopic: Unsupervised Multimodal Topic Modeling of Memes using Large Language Models
Nirmalendu Prakash, Han Wang, Nguyen Khoi Hoang, Ming Shan Hee, Roy Ka-Wei Lee
The proliferation of social media has given rise to a new form of communication: memes. Memes are multimodal and often contain a combination of text and visual elements that convey meaning, humor, and cultural significance. While meme analysis has been an active area of research, little work has been done on unsupervised multimodal topic modeling of memes, which is important for content moderation, social media analysis, and cultural studies. We propose PromptMTopic, a novel multimodal prompt-based model designed to learn topics from both text and visual modalities by leveraging the language modeling capabilities of large language models. Our model effectively extracts and clusters topics learned from memes, considering the semantic interaction between the text and visual modalities. We evaluate our proposed model through extensive experiments on three real-world meme datasets, which demonstrate its superiority over state-of-the-art topic modeling baselines in learning descriptive topics in memes. Additionally, our qualitative analysis shows that PromptMTopic can identify meaningful and culturally relevant topics from memes. Our work contributes to the understanding of the topics and themes of memes, a crucial form of communication in today's society. Disclaimer: This paper contains sensitive content that may be disturbing to some readers.
http://arxiv.org/abs/2312.06093v1
"2023-12-11T03:36:50"
cs.CL, cs.CV, cs.MM, I.1.4; I.1.7
2,023
Revisiting Topic-Guided Language Models
Carolina Zheng, Keyon Vafa, David M. Blei
A recent line of work in natural language processing has aimed to combine language models and topic models. These topic-guided language models augment neural language models with topic models, unsupervised learning methods that can discover document-level patterns of word use. This paper compares the effectiveness of these methods in a standardized setting. We study four topic-guided language models and two baselines, evaluating the held-out predictive performance of each model on four corpora. Surprisingly, we find that none of these methods outperform a standard LSTM language model baseline, and most fail to learn good topics. Further, we train a probe of the neural language model that shows that the baseline's hidden states already encode topic information. We make public all code used for this study.
http://arxiv.org/abs/2312.02331v1
"2023-12-04T20:33:24"
cs.CL, cs.LG
2,023
Near-real-time Earthquake-induced Fatality Estimation using Crowdsourced Data and Large-Language Models
Chenguang Wang, Davis Engler, Xuechun Li, James Hou, David J. Wald, Kishor Jaiswal, Susu Xu
When a damaging earthquake occurs, immediate information about casualties is critical for time-sensitive decision-making by emergency response and aid agencies in the first hours and days. Systems such as Prompt Assessment of Global Earthquakes for Response (PAGER) by the U.S. Geological Survey (USGS) were developed to provide a forecast within about 30 minutes of any significant earthquake globally. Traditional systems for estimating human loss in disasters often depend on manually collected early casualty reports from global media, a process that's labor-intensive and slow with notable time delays. Recently, some systems have employed keyword matching and topic modeling to extract relevant information from social media. However, these methods struggle with the complex semantics in multilingual texts and the challenge of interpreting ever-changing, often conflicting reports of death and injury numbers from various unverified sources on social media platforms. In this work, we introduce an end-to-end framework to significantly improve the timeliness and accuracy of global earthquake-induced human loss forecasting using multi-lingual, crowdsourced social media. Our framework integrates (1) a hierarchical casualty extraction model built upon large language models, prompt design, and few-shot learning to retrieve quantitative human loss claims from social media, (2) a physical constraint-aware, dynamic-truth discovery model that discovers the truthful human loss from massive noisy and potentially conflicting human loss claims, and (3) a Bayesian updating loss projection model that dynamically updates the final loss estimation using discovered truths. We test the framework in real-time on a series of global earthquake events in 2021 and 2022 and show that our framework streamlines casualty data retrieval, achieving speed and accuracy comparable to manual methods by USGS.
http://arxiv.org/abs/2312.03755v1
"2023-12-04T17:09:58"
cs.CL, cs.AI, cs.CY, cs.LG
2,023
Cybersecurity threats in FinTech: A systematic review
Danial Javaheri, Mahdi Fahmideh, Hassan Chizari, Pooia Lalbakhsh, Junbeom Hur
The rapid evolution of the Smart-everything movement and Artificial Intelligence (AI) advancements have given rise to sophisticated cyber threats that traditional methods cannot counteract. Cyber threats are extremely critical in financial technology (FinTech) as a data-centric sector expected to provide 24/7 services. This paper introduces a novel and refined taxonomy of security threats in FinTech and conducts a comprehensive systematic review of defensive strategies. Through PRISMA methodology applied to 74 selected studies and topic modeling, we identified 11 central cyber threats, with 43 papers detailing them, and pinpointed 9 corresponding defense strategies, as covered in 31 papers. This in-depth analysis offers invaluable insights for stakeholders ranging from banks and enterprises to global governmental bodies, highlighting both the current challenges in FinTech and effective countermeasures, as well as directions for future research.
http://arxiv.org/abs/2312.01752v1
"2023-12-04T09:25:54"
cs.CR, cs.AI
2,023
Understanding Opinions Towards Climate Change on Social Media
Yashaswi Pupneja, Joseph Zou, Sacha Lévy, Shenyang Huang
Social media platforms such as Twitter (now known as X) have revolutionized how the public engages with important societal and political topics. Recently, climate change discussions on social media became a catalyst for political polarization and the spreading of misinformation. In this work, we aim to understand how real-world events influence the opinions of individuals towards climate change related topics on social media. To this end, we extracted and analyzed a dataset of 13.6 million tweets sent by 3.6 million users from 2006 to 2019. We then construct a temporal graph from the user-user mentions network and utilize the Louvain community detection algorithm to analyze the changes in community structure around Conference of the Parties on Climate Change (COP) events. Next, we apply tools from the Natural Language Processing literature to perform sentiment analysis and topic modeling on the tweets. Our work is a first step towards understanding the evolution of pro-climate-change communities around COP events. Answering these questions helps us understand how to raise awareness of climate change and, we hope, encourages more individuals to join the collaborative effort to slow it down.
http://arxiv.org/abs/2312.01217v1
"2023-12-02T20:02:34"
cs.SI, cs.CL, cs.LG
2,023
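The community-detection step described above can be reproduced with networkx's built-in Louvain implementation (available since networkx 2.8); the toy mention list below stands in for the 3.6-million-user mention network.

```python
# Toy reproduction of the community-detection step: build a user-user mention
# graph and run Louvain. Requires networkx >= 2.8; the edge list is a placeholder.
import networkx as nx
from networkx.algorithms.community import louvain_communities

mentions = [("alice", "bob"), ("bob", "carol"), ("carol", "alice"),
            ("dave", "erin"), ("erin", "dave")]

G = nx.Graph()
for src, dst in mentions:
    # Weight edges by how often one user mentions another.
    if G.has_edge(src, dst):
        G[src][dst]["weight"] += 1
    else:
        G.add_edge(src, dst, weight=1)

communities = louvain_communities(G, weight="weight", seed=42)
for i, members in enumerate(communities):
    print(f"community {i}: {sorted(members)}")
```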
From Voices to Validity: Leveraging Large Language Models (LLMs) for Textual Analysis of Policy Stakeholder Interviews
Alex Liu, Min Sun
Obtaining stakeholders' diverse experiences and opinions about current policy in a timely manner is crucial for policymakers to identify strengths and gaps in resource allocation, thereby supporting effective policy design and implementation. However, manually coding even moderately sized interview texts or open-ended survey responses from stakeholders can often be labor-intensive and time-consuming. This study explores the integration of Large Language Models (LLMs), like GPT-4, with human expertise to enhance text analysis of stakeholder interviews regarding K-12 education policy within one U.S. state. Employing a mixed-methods approach, human experts developed a codebook and coding processes informed by domain knowledge and unsupervised topic modeling results. They then designed prompts to guide GPT-4 analysis and iteratively evaluated the performance of different prompts. This combined human-computer method enabled nuanced thematic and sentiment analysis. Results reveal that while GPT-4 thematic coding aligned with human coding by 77.89% at specific themes, expanding to broader themes increased congruence to 96.02%, surpassing traditional Natural Language Processing (NLP) methods by over 25%. Additionally, GPT-4's sentiment analysis matched expert judgments more closely than lexicon-based methods did. Findings from quantitative measures and qualitative reviews underscore the complementary roles of human domain expertise and automated analysis, as LLMs offer new perspectives and coding consistency. The human-computer interactive approach enhances the efficiency, validity, and interpretability of educational policy research.
http://arxiv.org/abs/2312.01202v1
"2023-12-02T18:55:14"
cs.HC, cs.AI, cs.CL
2,023
Use of explicit replies as coordination mechanisms in online student debate
Bruno D. Ferreira-Saraiva, Joao P. Matos-Carvalho, Manuel Pita
People in conversation entrain their linguistic behaviours through spontaneous alignment mechanisms [7], both in face-to-face and computer-mediated communication (CMC) [8]. In CMC, one of the mechanisms through which linguistic entrainment happens is explicit replies. Indeed, the use of explicit replies influences the structure of conversations, favouring the formation of reply-trees typically delineated by topic shifts [5]. The interpersonal coordination mechanisms realized by how actors address each other have been studied using a probabilistic framework proposed by David Gibson [2,3]. Other recent approaches use computational methods and information theory to quantify changes in text. We explore coordination mechanisms concerned with some of the roles utterances play in dialogues, specifically in explicit replies. We identify these roles by finding community structure in the conversation's vocabulary using a non-parametric, hierarchical topic model. Some conversations may never leave the ground, remaining at the level of general introductory chatter. Others may develop a specific sub-topic in significant depth and detail. Still others may jump between general chatter, off-topic remarks, and people agreeing or disagreeing without further elaboration.
http://arxiv.org/abs/2311.18466v1
"2023-11-30T11:18:45"
cs.CL, cs.CY, cs.SI
2,023
Public sentiment analysis and topic modeling regarding ChatGPT in mental health on Reddit: Negative sentiments increase over time
Yunna Cai, Fan Wang, Haowei Wang, Qianwen Qian
To uncover users' attitudes towards ChatGPT in mental health, this study examines public opinions in ChatGPT-related mental health discussions on Reddit. We used the bert-base-multilingual-uncased-sentiment model for sentiment analysis and the BERTopic model for topic modeling. We found that, overall, negative sentiments prevail, followed by positive ones, with neutral sentiments being the least common, and that the prevalence of negative emotions has increased over time. Negative emotions encompass discussions of ChatGPT providing bad mental health advice, debates on machine vs. human value, fear of AI, and concerns about Universal Basic Income (UBI). In contrast, positive emotions highlight ChatGPT's effectiveness in counseling, with mentions of keywords like "time" and "wallet." Neutral discussions center around private data concerns. These findings shed light on public attitudes toward ChatGPT in mental health and can potentially contribute to the development of trustworthy AI in mental health from the public's perspective.
http://arxiv.org/abs/2311.15800v1
"2023-11-27T13:23:11"
cs.CY
2,023
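Both tools named in the abstract have small public APIs; below is a minimal sketch assuming the `bertopic` and `transformers` packages, with 20 Newsgroups posts standing in for the Reddit corpus and the nlptown Hub checkpoint matching the sentiment model name cited.

```python
# Minimal sketch of the two analyses named in the abstract. The stand-in
# corpus and checkpoint choice are assumptions, not the study's exact setup.
from bertopic import BERTopic
from sklearn.datasets import fetch_20newsgroups
from transformers import pipeline

docs = fetch_20newsgroups(subset="all",
                          remove=("headers", "footers", "quotes")).data[:1000]

# Topic modeling: BERTopic embeds, clusters, and labels the documents.
topic_model = BERTopic()
topics, probs = topic_model.fit_transform(docs)
print(topic_model.get_topic_info().head())

# Sentiment: this checkpoint returns 1-5 star ratings per text.
sentiment = pipeline("sentiment-analysis",
                     model="nlptown/bert-base-multilingual-uncased-sentiment")
print(sentiment(docs[:3], truncation=True))
```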
Searching for Snippets of Open-Domain Dialogue in Task-Oriented Dialogue Datasets
Armand Stricker, Patrick Paroubek
Most existing dialogue corpora and models have been designed to fit into two predominant categories: task-oriented dialogues portray functional goals, such as making a restaurant reservation or booking a plane ticket, while chit-chat/open-domain dialogues focus on holding a socially engaging talk with a user. However, humans tend to seamlessly switch between modes and even use chit-chat to enhance task-oriented conversations. To bridge this gap, new datasets have recently been created, blending both communication modes into conversation examples. The approaches used tend to rely on adding chit-chat snippets to pre-existing, human-generated task-oriented datasets. Given the tendencies observed in humans, we ask whether such datasets do not already contain chit-chat sequences. Using topic modeling and searching for the topics most similar to a set of keywords related to social talk, we explore the training sets of Schema-Guided Dialogues and MultiWOZ. Our study shows that sequences related to social talk are indeed naturally present, motivating further research on the ways chit-chat is combined into task-oriented dialogues.
http://arxiv.org/abs/2311.14076v1
"2023-11-23T16:08:39"
cs.CL
2,023
Artificial Intelligence in the Service of Entrepreneurial Finance: Knowledge Structure and the Foundational Algorithmic Paradigm
Robert Kudelić, Tamara Šmaguc, Sherry Robinson
While the application of Artificial Intelligence in Finance has a long tradition, its potential in Entrepreneurship has been intensively explored only recently. In this context, Entrepreneurial Finance is a particularly fertile ground for future Artificial Intelligence proliferation. To support the latter, the study provides a bibliometric review of Artificial Intelligence applications in (1) entrepreneurial finance literature, and (2) corporate finance literature with implications for Entrepreneurship. Rigorous search and screening procedures of the scientific database Web of Science Core Collection resulted in the identification of 1890 relevant journal articles subjected to analysis. The bibliometric analysis gives a rich insight into the knowledge field's conceptual, intellectual, and social structure, indicating nascent and underdeveloped research directions. As far as we were able to identify, this is the first study to map and bibliometrically analyze the academic field concerning the relationship between Artificial Intelligence, Entrepreneurship, and Finance, and the first review that deals with Artificial Intelligence methods in Entrepreneurship. According to the results, Artificial Neural Networks, Deep Neural Networks, and Support Vector Machines are highly represented in almost all identified topic niches, while the application of Topic Modeling, Fuzzy Neural Networks, and Growing Hierarchical Self-Organizing Maps is quite rare. Before the final remarks, the article also discusses certain gaps in the relationship between Computer Science and Economics; these gaps present problems for the application of Artificial Intelligence in Economic Science. To at least partially remedy this situation, a foundational paradigm and a bespoke demonstration of a Monte Carlo randomized algorithm are presented.
http://arxiv.org/abs/2311.13213v1
"2023-11-22T07:58:46"
cs.AI
2,023
Hate speech and hate crimes: a data-driven study of evolving discourse around marginalized groups
Malvina Bozhidarova, Jonathn Chang, Aaishah Ale-rasool, Yuxiang Liu, Chongyao Ma, Andrea L. Bertozzi, P. Jeffrey Brantingham, Junyuan Lin, Sanjukta Krishnagopal
This study explores the dynamic relationship between online discourse, as observed in tweets, and physical hate crimes, focusing on marginalized groups. Leveraging natural language processing techniques, including keyword extraction and topic modeling, we analyze the evolution of online discourse after events affecting these groups. Examining sentiment and polarizing tweets, we establish correlations with hate crimes in Black and LGBTQ+ communities. Using a knowledge graph, we connect tweets, users, topics, and hate crimes, enabling network analyses. Our findings reveal divergent patterns in the evolution of user communities for Black and LGBTQ+ groups, with notable differences in sentiment among influential users. This analysis sheds light on distinctive online discourse patterns and emphasizes the need to monitor hate speech to prevent hate crimes, especially following significant events impacting marginalized communities.
http://arxiv.org/abs/2311.11163v1
"2023-11-18T20:49:15"
cs.SI, stat.AP, stat.CO
2,023
Labeled Interactive Topic Models
Kyle Seelman, Mozhi Zhang, Jordan Boyd-Graber
Topic models are valuable for understanding extensive document collections, but they don't always identify the most relevant topics. Classical probabilistic and anchor-based topic models offer interactive versions that allow users to guide the models towards more pertinent topics. However, such interactive features have been lacking in neural topic models. To correct this lacuna, we introduce a user-friendly interaction for neural topic models. This interaction permits users to assign a word label to a topic, leading to an update in the topic model where the words in the topic become closely aligned with the given label. Our approach encompasses two distinct kinds of neural topic models. The first includes models where topic embeddings are trainable and evolve during the training process. The second involves models where topic embeddings are integrated post-training, offering a different approach to topic refinement. To facilitate user interaction with these neural topic models, we have developed an interactive interface that enables users to engage with and re-label topics as desired. We evaluate our method through a human study in which users relabel topics to find relevant documents. With our method, user labeling improves document rank scores, helping users find documents more relevant to a given query than without labeling.
http://arxiv.org/abs/2311.09438v2
"2023-11-15T23:18:01"
cs.LG, cs.CL, cs.HC, cs.IR
2,023
Multi-Label Topic Model for Financial Textual Data
Moritz Scherrmann
This paper presents a multi-label topic model for financial texts like ad-hoc announcements, 8-K filings, finance related news or annual reports. I train the model on a new financial multi-label database consisting of 3,044 German ad-hoc announcements that are labeled manually using 20 predefined, economically motivated topics. The best model achieves a macro F1 score of more than 85%. Translating the data results in an English version of the model with similar performance. As application of the model, I investigate differences in stock market reactions across topics. I find evidence for strong positive or negative market reactions for some topics, like announcements of new Large Scale Projects or Bankruptcy Filings, while I do not observe significant price effects for some other topics. Furthermore, in contrast to previous studies, the multi-label structure of the model allows to analyze the effects of co-occurring topics on stock market reactions. For many cases, the reaction to a specific topic depends heavily on the co-occurrence with other topics. For example, if allocated capital from a Seasoned Equity Offering (SEO) is used for restructuring a company in the course of a Bankruptcy Proceeding, the market reacts positively on average. However, if that capital is used for covering unexpected, additional costs from the development of new drugs, the SEO implies negative reactions on average.
http://arxiv.org/abs/2311.07598v1
"2023-11-10T12:56:07"
q-fin.ST, cs.CL, cs.LG
2,023
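A hedged sketch of the multi-label setup and the macro-F1 metric the abstract reports follows; the tiny corpus, topic names, and TF-IDF classifier are placeholders for the paper's 20-topic ad-hoc announcement data and fine-tuned model.

```python
# Illustrative multi-label setup and the macro-F1 metric reported above.
# Data, labels, and the classifier are placeholders, not the paper's model.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.multiclass import OneVsRestClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MultiLabelBinarizer

texts = ["Company files for bankruptcy and announces restructuring plan",
         "Seasoned equity offering to fund a new large scale project",
         "Quarterly earnings beat expectations",
         "SEO proceeds cover unexpected drug development costs"]
labels = [["Bankruptcy", "Restructuring"], ["SEO", "LargeScaleProject"],
          ["Earnings"], ["SEO", "R&D"]]

mlb = MultiLabelBinarizer()
Y = mlb.fit_transform(labels)  # one binary column per topic

clf = make_pipeline(TfidfVectorizer(),
                    OneVsRestClassifier(LogisticRegression(max_iter=1000)))
clf.fit(texts, Y)
pred = clf.predict(texts)
# Macro F1 averages per-topic F1 scores, weighting rare topics equally.
print("macro F1:", f1_score(Y, pred, average="macro", zero_division=0))
```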
Profiling Irony & Stereotype: Exploring Sentiment, Topic, and Lexical Features
Tibor L. R. Krols, Marie Mortensen, Ninell Oldenburg
Social media has become a very popular source of information. With this popularity comes an interest in systems that can classify the information produced. This study aims to create such a system for detecting irony in Twitter users. Recent work emphasizes the importance of lexical features and sentiment features, and the contrast between them, along with TF-IDF and topic models. Based on a thorough feature selection process, the resulting model contains specific sub-features from these areas. Our model reaches an F1-score of 0.84, which is above the baseline. We find that lexical features, especially TF-IDF, contribute the most to our model, while sentiment and topic modeling features contribute less to overall performance. Lastly, we highlight multiple interesting and important paths for further exploration.
http://arxiv.org/abs/2311.04885v1
"2023-11-08T18:44:47"
cs.CL
2,023
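Combining TF-IDF lexical features with a sentiment-contrast feature, as in the feature families the abstract discusses, might look like the following scikit-learn sketch; the word lists, toy tweets, and classifier choice are illustrative assumptions, not the paper's feature set.

```python
# Sketch: TF-IDF lexical features joined with a crude sentiment-contrast cue.
import numpy as np
from sklearn.base import BaseEstimator, TransformerMixin
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import FeatureUnion, make_pipeline

POS = {"love", "great", "wonderful"}   # placeholder sentiment lexicons
NEG = {"monday", "traffic", "rain"}

class SentimentContrast(BaseEstimator, TransformerMixin):
    """Counts co-occurring positive and negative words, a simple contrast cue."""
    def fit(self, X, y=None):
        return self
    def transform(self, X):
        feats = [[len(set(t.lower().split()) & POS),
                  len(set(t.lower().split()) & NEG)] for t in X]
        return np.array(feats)

tweets = ["I love being stuck in traffic on a monday",
          "What a wonderful sunny day",
          "Great, more rain", "I love this song"]
irony = [1, 0, 1, 0]

model = make_pipeline(
    FeatureUnion([("tfidf", TfidfVectorizer()),
                  ("contrast", SentimentContrast())]),
    LogisticRegression(max_iter=1000),
)
model.fit(tweets, irony)
print(model.predict(["Great, another monday"]))
```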
Topic model based on co-occurrence word networks for unbalanced short text datasets
Chengjie Ma, Junping Du, Meiyu Liang, Zeli Guan
We propose a straightforward solution for detecting scarce topics in unbalanced short-text datasets. Our approach, named CWUTM (topic model based on co-occurrence word networks for unbalanced short-text datasets), addresses the challenge of sparse and unbalanced short-text topics by mitigating the effects of incidental word co-occurrence, allowing the model to prioritize the identification of scarce (low-frequency) topics. Unlike previous methods, CWUTM leverages co-occurrence word networks to capture the topic distribution of each word; we enhance its sensitivity in identifying scarce topics by redefining the calculation of node activity and by normalizing, to some extent, the representation of both scarce and abundant topics. Moreover, CWUTM adopts Gibbs sampling, similar to LDA, making it easily adaptable to various application scenarios. Our extensive experimental validation on unbalanced short-text datasets demonstrates the superiority of CWUTM over baseline approaches in discovering scarce topics. According to the experimental results, the proposed model is effective for the early and accurate detection of emerging topics or unexpected events on social platforms.
http://arxiv.org/abs/2311.02566v1
"2023-11-05T04:44:23"
cs.CL
2,023
TopicGPT: A Prompt-based Topic Modeling Framework
Chau Minh Pham, Alexander Hoyle, Simeng Sun, Philip Resnik, Mohit Iyyer
Topic modeling is a well-established technique for exploring text corpora. Conventional topic models (e.g., LDA) represent topics as bags of words that often require "reading the tea leaves" to interpret; additionally, they offer users minimal control over the formatting and specificity of resulting topics. To tackle these issues, we introduce TopicGPT, a prompt-based framework that uses large language models (LLMs) to uncover latent topics in a text collection. TopicGPT produces topics that align better with human categorizations compared to competing methods: it achieves a harmonic mean purity of 0.74 against human-annotated Wikipedia topics compared to 0.64 for the strongest baseline. Its topics are also interpretable, dispensing with ambiguous bags of words in favor of topics with natural language labels and associated free-form descriptions. Moreover, the framework is highly adaptable, allowing users to specify constraints and modify topics without the need for model retraining. By streamlining access to high-quality and interpretable topics, TopicGPT represents a compelling, human-centered approach to topic modeling.
http://arxiv.org/abs/2311.01449v2
"2023-11-02T17:57:10"
cs.CL
2,023
Software Repositories and Machine Learning Research in Cyber Security
Mounika Vanamala, Keith Bryant, Alex Caravella
In today's rapidly evolving technological landscape and advanced software development, the rise in cyber security attacks has become a pressing concern. The integration of robust cyber security defenses has become essential across all phases of software development. It holds particular significance in identifying critical cyber security vulnerabilities at the initial stages of the software development life cycle, notably during the requirement phase. Through the utilization of cyber security repositories like the Common Attack Pattern Enumeration and Classification (CAPEC) from MITRE and the Common Vulnerabilities and Exposures (CVE) databases, attempts have been made to leverage topic modeling and machine learning for the detection of these early-stage vulnerabilities in the software requirements process. Past research has returned successful outcomes in automating vulnerability identification for software developers, employing a mixture of unsupervised machine learning methodologies such as LDA and topic modeling. Looking ahead, in our pursuit of improved automation and of establishing connections between software requirements and vulnerabilities, our strategy entails adopting a variety of supervised machine learning techniques, encompassing Support Vector Machines (SVM), Naïve Bayes, random forests, and neural networks, and eventually transitioning into deep learning. In the face of the escalating complexity of cyber security, whether machine learning can enhance the identification of vulnerabilities in diverse software development scenarios is a paramount consideration, offering crucial assistance to software developers in developing secure software.
http://arxiv.org/abs/2311.00691v1
"2023-11-01T17:46:07"
cs.SE, cs.CR, cs.LG
2,023
Federated Topic Model and Model Pruning Based on Variational Autoencoder
Chengjie Ma, Yawen Li, Meiyu Liang, Ang Li
Topic modeling has emerged as a valuable tool for discovering patterns and topics within large collections of documents. However, when cross-analysis involves multiple parties, data privacy becomes a critical concern. Federated topic modeling has been developed to address this issue, allowing multiple parties to jointly train models while protecting privacy. However, the federated scenario poses communication and performance challenges. To solve these problems, this paper proposes a method to establish a federated topic model while ensuring the privacy of each node, and uses neural network model pruning to accelerate the model: clients periodically send cumulative neuron gradients and model weights to the server, and the server prunes the model. To address different requirements, two methods are proposed to determine the model pruning rate. The first involves slow pruning throughout the entire training process; it has a limited accelerating effect on training but ensures that the pruned model achieves higher accuracy, which can significantly reduce inference time. The second strategy is to quickly reach the target pruning rate early in training in order to accelerate training, and then to continue training the smaller model after the target rate is reached. This approach may lose more useful information but completes training faster. Experimental results show that the federated topic model pruning based on the variational autoencoder proposed in this paper can greatly accelerate model training while preserving the model's performance.
http://arxiv.org/abs/2311.00314v1
"2023-11-01T06:00:14"
cs.LG, cs.IR
2,023
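The two pruning-rate schedules the abstract contrasts can be expressed as simple functions of the training step and applied with PyTorch's built-in magnitude pruning; the target rate, warm-up length, and layer below are illustrative, not values from the paper.

```python
# Sketch of the two pruning-rate schedules contrasted above, applied with
# PyTorch's built-in L1 magnitude pruning. All numbers are illustrative.
import torch.nn as nn
import torch.nn.utils.prune as prune

def slow_schedule(step: int, total: int, target: float = 0.5) -> float:
    # Strategy 1: approach the target rate gradually over all of training,
    # trading less training speed-up for a more accurate pruned model.
    return target * step / total

def fast_schedule(step: int, total: int, target: float = 0.5,
                  warmup: int = 100) -> float:
    # Strategy 2: hit the target rate early, then keep training the smaller
    # model; faster overall, at the risk of discarding useful information.
    return target if step >= warmup else target * step / warmup

layer = nn.Linear(256, 256)
rate = fast_schedule(step=200, total=1000)
prune.l1_unstructured(layer, name="weight", amount=rate)  # mask smallest weights
print("fraction pruned:", float((layer.weight == 0).float().mean()))
```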
Uncovering Gender Bias within Journalist-Politician Interaction in Indian Twitter
Brisha Jain, Mainack Mondal
Gender bias in political discourse is a significant problem on today's social media. Previous studies found that the gender of politicians indeed influences the content directed towards them by the general public. However, these works are particularly focused on the global north, which represents individualistic culture. Furthermore, they did not address whether there is gender bias even within the interaction between popular journalists and politicians in the global south. These understudied journalist-politician interactions are important (more so in collectivistic cultures like the global south) as they can significantly affect public sentiment and help set gender-biased social norms. In this work, using large-scale data from Indian Twitter we address this research gap. We curated a gender-balanced set of 100 most-followed Indian journalists on Twitter and 100 most-followed politicians. Then we collected 21,188 unique tweets posted by these journalists that mentioned these politicians. Our analysis revealed that there is a significant gender bias -- the frequency with which journalists mention male politicians vs. how frequently they mention female politicians is statistically significantly different ($p<<0.05$). In fact, median tweets from female journalists mentioning female politicians received ten times fewer likes than median tweets from female journalists mentioning male politicians. However, when we analyzed tweet content, our emotion score analysis and topic modeling analysis did not reveal any significant gender-based difference within the journalists' tweets towards politicians. Finally, we found a potential reason for the significant gender bias: the number of popular male Indian politicians is almost twice as large as the number of popular female Indian politicians, which might have resulted in the observed bias. We conclude by discussing the implications of this work.
http://arxiv.org/abs/2310.18911v1
"2023-10-29T05:41:53"
cs.SI, cs.CY, cs.HC
2,023
Understanding Social Structures from Contemporary Literary Fiction using Character Interaction Graph -- Half Century Chronology of Influential Bengali Writers
Nafis Irtiza Tripto, Mohammed Eunus Ali
Social structures and real-world incidents often influence contemporary literary fiction. Existing research in literary fiction analysis explains these real-world phenomena through the manual critical analysis of stories. Conventional Natural Language Processing (NLP) methodologies, including sentiment analysis, narrative summarization, and topic modeling, have demonstrated substantial efficacy in analyzing and identifying similarities within fictional works. However, the intricate dynamics of character interactions within fiction necessitate a more nuanced approach that incorporates visualization techniques. Character interaction graphs (or networks) emerge as a highly suitable means for visualization and information retrieval from the realm of fiction. Therefore, we leverage character interaction graphs with NLP-derived features to explore a diverse spectrum of societal inquiries about contemporary culture's impact on the landscape of literary fiction. Our study involves constructing character interaction graphs from fiction, extracting relevant graph features, and exploiting these features to resolve various real-life queries. Experimental evaluation of influential Bengali fiction over half a century demonstrates that character interaction graphs can be highly effective in specific assessments and information retrieval from literary fiction. Our data and codebase are available at https://cutt.ly/fbMgGEM
http://arxiv.org/abs/2310.16968v1
"2023-10-25T20:09:14"
cs.CL, cs.CY
2,023
A Roadmap of Emerging Trends Discovery in Hydrology: A Topic Modeling Approach
Sila Ovgu Korkut, Oznur Oztunc Kaymak, Aytug Onan, Erman Ulker, Femin Yalcin
In the new global era, identifying trends can play an important role in guiding researchers, scientists, and agencies. The main challenge is to track emerging topics within the mass of publications; therefore, studies that propose the trend topics of a field, foreseeing its upcoming subjects, are crucial. In the current study, we evaluate the trend topics in the field of "Hydrology". The model is composed of three key components: gathering the data, preprocessing the articles' significant features, and determining the trend topics. Various topic models, including Latent Dirichlet Allocation (LDA), Non-negative Matrix Factorization (NMF), and Latent Semantic Analysis (LSA), have been implemented. Comparing the obtained results with respect to the $C_V$ coherence score, the topics of "Climate change", "River basin", "Water management", "Natural hazards/erosion", and "Hydrologic cycle" were obtained for 2022. Further analysis shows that these topics retain their impact on the field in 2023 as well.
http://arxiv.org/abs/2310.15943v1
"2023-10-24T15:40:05"
cs.CE, E.0; I.7; J.2
2,023
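Ranking candidate topic models by $C_V$ coherence, as done above, is straightforward with gensim; in this sketch the toy token lists stand in for the hydrology abstracts, and gensim's Nmf or LsiModel could be swapped in for LdaModel to reproduce the three-way comparison.

```python
# Sketch of scoring a topic model by C_V coherence with gensim.
from gensim.corpora import Dictionary
from gensim.models import CoherenceModel, LdaModel

texts = [["climate", "change", "rainfall", "trend"],
         ["river", "basin", "flow", "model"],
         ["water", "management", "resource", "policy"],
         ["erosion", "hazard", "flood", "risk"],
         ["hydrologic", "cycle", "evaporation", "rainfall"]]

dictionary = Dictionary(texts)
corpus = [dictionary.doc2bow(t) for t in texts]

lda = LdaModel(corpus=corpus, id2word=dictionary, num_topics=3,
               random_state=0, passes=10)
cm = CoherenceModel(model=lda, texts=texts, dictionary=dictionary,
                    coherence="c_v")
print("C_V coherence:", cm.get_coherence())
```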
Let the Pretrained Language Models "Imagine" for Short Texts Topic Modeling
Pritom Saha Akash, Jie Huang, Kevin Chen-Chuan Chang
Topic models are among the most compelling methods for discovering latent semantics in a document collection. However, they assume that a document has sufficient co-occurrence information to be effective; in short texts, co-occurrence information is minimal, which results in feature sparsity in document representations. As a result, existing topic models (probabilistic or neural) mostly fail to mine patterns from short texts and cannot generate coherent topics. In this paper, we take a new approach to short-text topic modeling that addresses the data-sparsity issue by extending short texts into longer sequences using existing pre-trained language models (PLMs). In addition, we provide a simple solution that extends a neural topic model to reduce the effect of noisy out-of-topic text generated by PLMs. We observe that our model can substantially improve the performance of short-text topic modeling. Extensive experiments on multiple real-world datasets under extreme data-sparsity scenarios show that our models generate high-quality topics, outperforming state-of-the-art models.
http://arxiv.org/abs/2310.15420v1
"2023-10-24T00:23:30"
cs.CL
2,023