Datasets:

Column schema (name: type, observed range):
  bibtex_url                  string      (length 41 to 50)
  proceedings                 string      (length 38 to 47)
  bibtext                     string      (length 709 to 3.56k)
  abstract                    string      (length 17 to 2.11k)
  authors                     sequence    (length 1 to 72)
  title                       string      (length 12 to 207)
  id                          string      (length 7 to 16)
  type                        string      (2 classes)
  arxiv_id                    string      (length 0 to 10)
  GitHub                      sequence    (length 1 to 1)
  paper_page                  string      (276 classes)
  n_linked_authors            int64       (-1 to 13)
  upvotes                     int64       (-1 to 14)
  num_comments                int64       (-1 to 11)
  n_authors                   int64       (-1 to 44)
  paper_page_exists_pre_conf  int64       (0 to 1)
  Models                      sequence    (length 0 to 100)
  Datasets                    sequence    (length 0 to 14)
  Spaces                      sequence    (length 0 to 100)
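For illustration, one row of this dataset can be sketched as a plain Python dict following the column listing above. Values are taken from the first record in this file (long strings abbreviated); the empty-string encoding for a missing paper_page and the meaning of the -1 sentinel are assumptions, not confirmed by the file itself.

```python
# A single dataset row as a dict. Field names follow the schema above;
# values come from the wassa-1.5 record in this file.
record = {
    "bibtex_url": "https://aclanthology.org/2023.wassa-1.5.bib",
    "proceedings": "https://aclanthology.org/2023.wassa-1.5/",
    "bibtext": "@inproceedings{velutharambath-klinger-2023-unidecor, ...}",  # abbreviated
    "abstract": "Verbal deception has been studied in psychology, ...",      # abbreviated
    "authors": ["Velutharambath, Aswathy", "Klinger, Roman"],
    "title": "UNIDECOR: A Unified Deception Corpus for Cross-Corpus Deception Detection",
    "id": "wassa-1.5",
    "type": "Poster",          # one of 2 string classes; "Poster" is observed here
    "arxiv_id": "2306.02827",  # empty string when no arXiv preprint is linked
    "GitHub": ["https://bitbucket.org/aswathyve/unifieddeceptioncorpus"],
    "paper_page": "",          # assumed empty when no Hugging Face paper page exists
    "n_linked_authors": -1,    # -1 appears to be a "not available" sentinel
    "upvotes": -1,
    "num_comments": -1,
    "n_authors": -1,
    "paper_page_exists_pre_conf": 0,
    "Models": [],              # linked Hugging Face models/datasets/spaces, if any
    "Datasets": [],
    "Spaces": [],
}

# Records with a paper page (e.g. wassa-1.11) instead carry a URL and
# non-negative counts:
#   "paper_page": "https://huggingface.co/papers/2302.00102",
#   "n_linked_authors": 3, "upvotes": 0, "num_comments": 0, "n_authors": 6,
#   "paper_page_exists_pre_conf": 1
```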
https://aclanthology.org/2023.wassa-1.5.bib
https://aclanthology.org/2023.wassa-1.5/
@inproceedings{velutharambath-klinger-2023-unidecor, title = "{UNIDECOR}: A Unified Deception Corpus for Cross-Corpus Deception Detection", author = "Velutharambath, Aswathy and Klinger, Roman", editor = "Barnes, Jeremy and De Clercq, Orph{\'e}e and Klinger, Roman", booktitle = "Proceedings of the 13th Workshop on Computational Approaches to Subjectivity, Sentiment, {\&} Social Media Analysis", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.wassa-1.5", doi = "10.18653/v1/2023.wassa-1.5", pages = "39--51", abstract = "Verbal deception has been studied in psychology, forensics, and computational linguistics for a variety of reasons, like understanding behaviour patterns, identifying false testimonies, and detecting deception in online communication. Varying motivations across research fields lead to differences in the domain choices to study and in the conceptualization of deception, making it hard to compare models and build robust deception detection systems for a given language. With this paper, we improve this situation by surveying available English deception datasets which include domains like social media reviews, court testimonials, opinion statements on specific topics, and deceptive dialogues from online strategy games. We consolidate these datasets into a single unified corpus. Based on this resource, we conduct a correlation analysis of linguistic cues of deception across datasets to understand the differences and perform cross-corpus modeling experiments which show that a cross-domain generalization is challenging to achieve. The unified deception corpus (UNIDECOR) can be obtained from \url{https://www.ims.uni-stuttgart.de/data/unidecor}.", }
Verbal deception has been studied in psychology, forensics, and computational linguistics for a variety of reasons, like understanding behaviour patterns, identifying false testimonies, and detecting deception in online communication. Varying motivations across research fields lead to differences in the domain choices to study and in the conceptualization of deception, making it hard to compare models and build robust deception detection systems for a given language. With this paper, we improve this situation by surveying available English deception datasets which include domains like social media reviews, court testimonials, opinion statements on specific topics, and deceptive dialogues from online strategy games. We consolidate these datasets into a single unified corpus. Based on this resource, we conduct a correlation analysis of linguistic cues of deception across datasets to understand the differences and perform cross-corpus modeling experiments which show that a cross-domain generalization is challenging to achieve. The unified deception corpus (UNIDECOR) can be obtained from \url{https://www.ims.uni-stuttgart.de/data/unidecor}.
[ "Velutharambath, Aswathy", "Klinger, Roman" ]
UNIDECOR: A Unified Deception Corpus for Cross-Corpus Deception Detection
wassa-1.5
Poster
2306.02827
[ "https://bitbucket.org/aswathyve/unifieddeceptioncorpus" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.wassa-1.6.bib
https://aclanthology.org/2023.wassa-1.6/
@inproceedings{sazzed-2023-discourse, title = "Discourse Mode Categorization of {B}engali Social Media Health Text", author = "Sazzed, Salim", editor = "Barnes, Jeremy and De Clercq, Orph{\'e}e and Klinger, Roman", booktitle = "Proceedings of the 13th Workshop on Computational Approaches to Subjectivity, Sentiment, {\&} Social Media Analysis", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.wassa-1.6", doi = "10.18653/v1/2023.wassa-1.6", pages = "52--57", abstract = "The scarcity of annotated data is a major impediment to natural language processing (NLP) research in Bengali, a language that is considered low-resource. In particular, the health and medical domains suffer from a severe paucity of annotated data. Thus, this study aims to introduce BanglaSocialHealth, an annotated social media health corpus that provides sentence-level annotations of four distinct types of expression modes, namely narrative (NAR), informative (INF), suggestive (SUG), and inquiring (INQ) modes in Bengali. We provide details regarding the annotation procedures and report various statistics, such as the median and mean length of words in different sentence modes. Additionally, we apply classical machine learning (CML) classifiers and transformer-based language models to classify sentence modes. We find that most of the statistical properties are similar in different types of sentence modes. To determine the sentence mode, the transformer-based M-BERT model provides slightly better efficacy than the CML classifiers. Our developed corpus and analysis represent a much-needed contribution to Bengali NLP research in medical and health domains and have the potential to facilitate a range of downstream tasks, including question-answering, misinformation detection, and information retrieval.", }
The scarcity of annotated data is a major impediment to natural language processing (NLP) research in Bengali, a language that is considered low-resource. In particular, the health and medical domains suffer from a severe paucity of annotated data. Thus, this study aims to introduce BanglaSocialHealth, an annotated social media health corpus that provides sentence-level annotations of four distinct types of expression modes, namely narrative (NAR), informative (INF), suggestive (SUG), and inquiring (INQ) modes in Bengali. We provide details regarding the annotation procedures and report various statistics, such as the median and mean length of words in different sentence modes. Additionally, we apply classical machine learning (CML) classifiers and transformer-based language models to classify sentence modes. We find that most of the statistical properties are similar in different types of sentence modes. To determine the sentence mode, the transformer-based M-BERT model provides slightly better efficacy than the CML classifiers. Our developed corpus and analysis represent a much-needed contribution to Bengali NLP research in medical and health domains and have the potential to facilitate a range of downstream tasks, including question-answering, misinformation detection, and information retrieval.
[ "Sazzed, Salim" ]
Discourse Mode Categorization of Bengali Social Media Health Text
wassa-1.6
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.wassa-1.7.bib
https://aclanthology.org/2023.wassa-1.7/
@inproceedings{xie-agrawal-2023-emotion, title = "Emotion and Sentiment Guided Paraphrasing", author = "Xie, Justin and Agrawal, Ameeta", editor = "Barnes, Jeremy and De Clercq, Orph{\'e}e and Klinger, Roman", booktitle = "Proceedings of the 13th Workshop on Computational Approaches to Subjectivity, Sentiment, {\&} Social Media Analysis", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.wassa-1.7", doi = "10.18653/v1/2023.wassa-1.7", pages = "58--70", abstract = "Paraphrase generation, a.k.a. paraphrasing, is a common and important task in natural language processing. Emotional paraphrasing, which changes the emotion embodied in a piece of text while preserving its meaning, has many potential applications, including moderating online dialogues and preventing cyberbullying. We introduce a new task of fine-grained emotional paraphrasing along emotion gradients, that is, altering the emotional intensities of the paraphrases in fine-grained settings following smooth variations in affective dimensions while preserving the meaning of the original text. We reconstruct several widely used paraphrasing datasets by augmenting the input and target texts with their fine-grained emotion labels. Then, we propose a framework for emotion and sentiment guided paraphrasing by leveraging pre-trained language models for conditioned text generation. Extensive evaluation of the fine-tuned models suggests that including fine-grained emotion labels in the paraphrase task significantly improves the likelihood of obtaining high-quality paraphrases that reflect the desired emotions while achieving consistently better scores in paraphrase metrics such as BLEU, ROUGE, and METEOR.", }
Paraphrase generation, a.k.a. paraphrasing, is a common and important task in natural language processing. Emotional paraphrasing, which changes the emotion embodied in a piece of text while preserving its meaning, has many potential applications, including moderating online dialogues and preventing cyberbullying. We introduce a new task of fine-grained emotional paraphrasing along emotion gradients, that is, altering the emotional intensities of the paraphrases in fine-grained settings following smooth variations in affective dimensions while preserving the meaning of the original text. We reconstruct several widely used paraphrasing datasets by augmenting the input and target texts with their fine-grained emotion labels. Then, we propose a framework for emotion and sentiment guided paraphrasing by leveraging pre-trained language models for conditioned text generation. Extensive evaluation of the fine-tuned models suggests that including fine-grained emotion labels in the paraphrase task significantly improves the likelihood of obtaining high-quality paraphrases that reflect the desired emotions while achieving consistently better scores in paraphrase metrics such as BLEU, ROUGE, and METEOR.
[ "Xie, Justin", "Agrawal, Ameeta" ]
Emotion and Sentiment Guided Paraphrasing
wassa-1.7
Poster
2306.05556
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.wassa-1.8.bib
https://aclanthology.org/2023.wassa-1.8/
@inproceedings{probol-mieskes-2023-emotions, title = "Emotions in Spoken Language - Do we need acoustics?", author = "Probol, Nadine and Mieskes, Margot", editor = "Barnes, Jeremy and De Clercq, Orph{\'e}e and Klinger, Roman", booktitle = "Proceedings of the 13th Workshop on Computational Approaches to Subjectivity, Sentiment, {\&} Social Media Analysis", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.wassa-1.8", doi = "10.18653/v1/2023.wassa-1.8", pages = "71--84", abstract = "Work on emotion detection is often focused on textual data from i.e. Social Media. If multimodal data (i.e. speech) is analysed, the focus again is often placed on the transcription. This paper takes a closer look at how crucial acoustic information actually is for the recognition of emotions from multimodal data. To this end we use the IEMOCAP data, which is one of the larger data sets that provides transcriptions, audio recordings and manual emotion categorization. We build models for emotion classification using text-only, acoustics-only and combining both modalities in order to examine the influence of the various modalities on the final categorization. Our results indicate that using text-only models outperform acoustics-only models. But combining text-only and acoustic-only models improves the results. Additionally, we perform a qualitative analysis and find that a range of misclassifications are due to factors not related to the model, but to the data such as, recording quality, a challenging classification task and misclassifications that are unsurprising for humans.", }
Work on emotion detection is often focused on textual data from i.e. Social Media. If multimodal data (i.e. speech) is analysed, the focus again is often placed on the transcription. This paper takes a closer look at how crucial acoustic information actually is for the recognition of emotions from multimodal data. To this end we use the IEMOCAP data, which is one of the larger data sets that provides transcriptions, audio recordings and manual emotion categorization. We build models for emotion classification using text-only, acoustics-only and combining both modalities in order to examine the influence of the various modalities on the final categorization. Our results indicate that using text-only models outperform acoustics-only models. But combining text-only and acoustic-only models improves the results. Additionally, we perform a qualitative analysis and find that a range of misclassifications are due to factors not related to the model, but to the data such as, recording quality, a challenging classification task and misclassifications that are unsurprising for humans.
[ "Probol, Nadine", "Mieskes, Margot" ]
Emotions in Spoken Language - Do we need acoustics?
wassa-1.8
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.wassa-1.9.bib
https://aclanthology.org/2023.wassa-1.9/
@inproceedings{roccabruna-etal-2023-understanding, title = "Understanding Emotion Valence is a Joint Deep Learning Task", author = "Roccabruna, Gabriel and Mousavi, Seyed Mahed and Riccardi, Giuseppe", editor = "Barnes, Jeremy and De Clercq, Orph{\'e}e and Klinger, Roman", booktitle = "Proceedings of the 13th Workshop on Computational Approaches to Subjectivity, Sentiment, {\&} Social Media Analysis", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.wassa-1.9", doi = "10.18653/v1/2023.wassa-1.9", pages = "85--95", abstract = "The valence analysis of speakers{'} utterances or written posts helps to understand the activation and variations of the emotional state throughout the conversation. More recently, the concept of Emotion Carriers (EC) has been introduced to explain the emotion felt by the speaker and its manifestations. In this work, we investigate the natural inter-dependency of valence and ECs via a multi-task learning approach. We experiment with Pre-trained Language Models (PLM) for single-task, two-step, and joint settings for the valence and EC prediction tasks. We compare and evaluate the performance of generative (GPT-2) and discriminative (BERT) architectures in each setting. We observed that providing the ground truth label of one task improves the prediction performance of the models in the other task. We further observed that the discriminative model achieves the best trade-off of valence and EC prediction tasks in the joint prediction setting. As a result, we attain a single model that performs both tasks, thus, saving computation resources at training and inference times.", }
The valence analysis of speakers{'} utterances or written posts helps to understand the activation and variations of the emotional state throughout the conversation. More recently, the concept of Emotion Carriers (EC) has been introduced to explain the emotion felt by the speaker and its manifestations. In this work, we investigate the natural inter-dependency of valence and ECs via a multi-task learning approach. We experiment with Pre-trained Language Models (PLM) for single-task, two-step, and joint settings for the valence and EC prediction tasks. We compare and evaluate the performance of generative (GPT-2) and discriminative (BERT) architectures in each setting. We observed that providing the ground truth label of one task improves the prediction performance of the models in the other task. We further observed that the discriminative model achieves the best trade-off of valence and EC prediction tasks in the joint prediction setting. As a result, we attain a single model that performs both tasks, thus, saving computation resources at training and inference times.
[ "Roccabruna, Gabriel", "Mousavi, Seyed Mahed", "Riccardi, Giuseppe" ]
Understanding Emotion Valence is a Joint Deep Learning Task
wassa-1.9
Poster
2305.17422
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.wassa-1.10.bib
https://aclanthology.org/2023.wassa-1.10/
@inproceedings{bohacek-etal-2023-czech, title = "{C}zech-ing the News: Article Trustworthiness Dataset for {C}zech", author = "Bohacek, Matyas and Bravansky, Michal and Trhl{\'\i}k, Filip and Moravec, Vaclav", editor = "Barnes, Jeremy and De Clercq, Orph{\'e}e and Klinger, Roman", booktitle = "Proceedings of the 13th Workshop on Computational Approaches to Subjectivity, Sentiment, {\&} Social Media Analysis", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.wassa-1.10", doi = "10.18653/v1/2023.wassa-1.10", pages = "96--109", abstract = "We present the Verifee dataset: a multimodal dataset of news articles with fine-grained trustworthiness annotations. We bring a diverse set of researchers from social, media, and computer sciences aboard to study this interdisciplinary problem holistically and develop a detailed methodology that assesses the texts through the lens of editorial transparency, journalist conventions, and objective reporting while penalizing manipulative techniques. We collect over 10,000 annotated articles from 60 Czech online news sources. Each item is categorized into one of the 4 proposed classes on the credibility spectrum {--} ranging from entirely trustworthy articles to deceptive ones {--} and annotated of its manipulative attributes. We fine-tune prominent sequence-to-sequence language models for the trustworthiness classification task on our dataset and report the best F-1 score of 0.53. We open-source the dataset, annotation methodology, and annotators{'} instructions in full length at \url{https://www.verifee.ai/research/} to enable easy build-up work.", }
We present the Verifee dataset: a multimodal dataset of news articles with fine-grained trustworthiness annotations. We bring a diverse set of researchers from social, media, and computer sciences aboard to study this interdisciplinary problem holistically and develop a detailed methodology that assesses the texts through the lens of editorial transparency, journalist conventions, and objective reporting while penalizing manipulative techniques. We collect over 10,000 annotated articles from 60 Czech online news sources. Each item is categorized into one of the 4 proposed classes on the credibility spectrum {--} ranging from entirely trustworthy articles to deceptive ones {--} and annotated of its manipulative attributes. We fine-tune prominent sequence-to-sequence language models for the trustworthiness classification task on our dataset and report the best F-1 score of 0.53. We open-source the dataset, annotation methodology, and annotators{'} instructions in full length at \url{https://www.verifee.ai/research/} to enable easy build-up work.
[ "Bohacek, Matyas", "Bravansky, Michal", "Trhl{\\'\\i}k, Filip", "Moravec, Vaclav" ]
Czech-ing the News: Article Trustworthiness Dataset for Czech
wassa-1.10
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.wassa-1.11.bib
https://aclanthology.org/2023.wassa-1.11/
@inproceedings{subbiah-etal-2023-towards, title = "Towards Detecting Harmful Agendas in News Articles", author = "Subbiah, Melanie and Bhattacharjee, Amrita and Hua, Yilun and Kumarage, Tharindu and Liu, Huan and McKeown, Kathleen", editor = "Barnes, Jeremy and De Clercq, Orph{\'e}e and Klinger, Roman", booktitle = "Proceedings of the 13th Workshop on Computational Approaches to Subjectivity, Sentiment, {\&} Social Media Analysis", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.wassa-1.11", doi = "10.18653/v1/2023.wassa-1.11", pages = "110--128", abstract = "Manipulated news online is a growing problem which necessitates the use of automated systems to curtail its spread. We argue that while misinformation and disinformation detection have been studied, there has been a lack of investment in the important open challenge of detecting harmful agendas in news articles; identifying harmful agendas is critical to flag news campaigns with the greatest potential for real world harm. Moreover, due to real concerns around censorship, harmful agenda detectors must be interpretable to be effective. In this work, we propose this new task and release a dataset, NewsAgendas, of annotated news articles for agenda identification. We show how interpretable systems can be effective on this task and demonstrate that they can perform comparably to black-box models.", }
Manipulated news online is a growing problem which necessitates the use of automated systems to curtail its spread. We argue that while misinformation and disinformation detection have been studied, there has been a lack of investment in the important open challenge of detecting harmful agendas in news articles; identifying harmful agendas is critical to flag news campaigns with the greatest potential for real world harm. Moreover, due to real concerns around censorship, harmful agenda detectors must be interpretable to be effective. In this work, we propose this new task and release a dataset, NewsAgendas, of annotated news articles for agenda identification. We show how interpretable systems can be effective on this task and demonstrate that they can perform comparably to black-box models.
[ "Subbiah, Melanie", "Bhattacharjee, Amrita", "Hua, Yilun", "Kumarage, Tharindu", "Liu, Huan", "McKeown, Kathleen" ]
Towards Detecting Harmful Agendas in News Articles
wassa-1.11
Poster
2302.00102
[ "https://github.com/melaniesubbiah/harmfulagendasnews" ]
https://huggingface.co/papers/2302.00102
3
0
0
6
1
[]
[]
[]
https://aclanthology.org/2023.wassa-1.12.bib
https://aclanthology.org/2023.wassa-1.12/
@inproceedings{gokani-mamidi-2023-gsac, title = "{GSAC}: A {G}ujarati Sentiment Analysis Corpus from {T}witter", author = "Gokani, Monil and Mamidi, Radhika", editor = "Barnes, Jeremy and De Clercq, Orph{\'e}e and Klinger, Roman", booktitle = "Proceedings of the 13th Workshop on Computational Approaches to Subjectivity, Sentiment, {\&} Social Media Analysis", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.wassa-1.12", doi = "10.18653/v1/2023.wassa-1.12", pages = "129--137", abstract = "Sentiment Analysis is an important task for analysing online content across languages for tasks such as content moderation and opinion mining. Though a significant amount of resources are available for Sentiment Analysis in several Indian languages, there do not exist any large-scale, open-access corpora for Gujarati. Our paper presents and describes the Gujarati Sentiment Analysis Corpus (GSAC), which has been sourced from Twitter and manually annotated by native speakers of the language. We describe in detail our collection and annotation processes and conduct extensive experiments on our corpus to provide reliable baselines for future work using our dataset.", }
Sentiment Analysis is an important task for analysing online content across languages for tasks such as content moderation and opinion mining. Though a significant amount of resources are available for Sentiment Analysis in several Indian languages, there do not exist any large-scale, open-access corpora for Gujarati. Our paper presents and describes the Gujarati Sentiment Analysis Corpus (GSAC), which has been sourced from Twitter and manually annotated by native speakers of the language. We describe in detail our collection and annotation processes and conduct extensive experiments on our corpus to provide reliable baselines for future work using our dataset.
[ "Gokani, Monil", "Mamidi, Radhika" ]
GSAC: A Gujarati Sentiment Analysis Corpus from Twitter
wassa-1.12
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.wassa-1.13.bib
https://aclanthology.org/2023.wassa-1.13/
@inproceedings{zielinski-etal-2023-dataset, title = "A Dataset for Explainable Sentiment Analysis in the {G}erman Automotive Industry", author = "Zielinski, Andrea and Spolwind, Calvin and Kroll, Henning and Grimm, Anna", editor = "Barnes, Jeremy and De Clercq, Orph{\'e}e and Klinger, Roman", booktitle = "Proceedings of the 13th Workshop on Computational Approaches to Subjectivity, Sentiment, {\&} Social Media Analysis", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.wassa-1.13", doi = "10.18653/v1/2023.wassa-1.13", pages = "138--148", abstract = "While deep learning models have greatly improved the performance of many tasks related to sentiment analysis and classification, they are often criticized for being untrustworthy due to their black-box nature. As a result, numerous explainability techniques have been proposed to better understand the model predictions and to improve the deep learning models. In this work, we introduce InfoBarometer, the first benchmark for examining interpretable methods related to sentiment analysis in the German automotive sector based on online news. Each news article in our dataset is annotated w.r.t. overall sentiment (i.e., positive, negative and neutral), the target of the sentiment (focusing on innovation-related topics such as e.g. electromobility) and the rationales, i.e., textual explanations for the sentiment label that can be leveraged during both training and evaluation. For this research, we compare different state-of-the-art approaches to perform sentiment analysis and observe that even models that perform very well in classification do not score high on explainability metrics like model plausibility and faithfulness. We calculated the polarity scores for the best method BERT and got an F-score of 73.6. Moreover, we evaluated different interpretability algorithms (LIME, SHAP, Integrated Gradients, Saliency) based on explicitly marked rationales by human annotators quantitatively and qualitatively. Our experiments demonstrate that the textual explanations often do not agree with human interpretations, and rarely help to justify the models decision. However, local and global features provide useful insights to help uncover spurious features in the model and biases within the dataset. We intend to make our dataset public for other researchers", }
While deep learning models have greatly improved the performance of many tasks related to sentiment analysis and classification, they are often criticized for being untrustworthy due to their black-box nature. As a result, numerous explainability techniques have been proposed to better understand the model predictions and to improve the deep learning models. In this work, we introduce InfoBarometer, the first benchmark for examining interpretable methods related to sentiment analysis in the German automotive sector based on online news. Each news article in our dataset is annotated w.r.t. overall sentiment (i.e., positive, negative and neutral), the target of the sentiment (focusing on innovation-related topics such as e.g. electromobility) and the rationales, i.e., textual explanations for the sentiment label that can be leveraged during both training and evaluation. For this research, we compare different state-of-the-art approaches to perform sentiment analysis and observe that even models that perform very well in classification do not score high on explainability metrics like model plausibility and faithfulness. We calculated the polarity scores for the best method BERT and got an F-score of 73.6. Moreover, we evaluated different interpretability algorithms (LIME, SHAP, Integrated Gradients, Saliency) based on explicitly marked rationales by human annotators quantitatively and qualitatively. Our experiments demonstrate that the textual explanations often do not agree with human interpretations, and rarely help to justify the models decision. However, local and global features provide useful insights to help uncover spurious features in the model and biases within the dataset. We intend to make our dataset public for other researchers
[ "Zielinski, Andrea", "Spolwind, Calvin", "Kroll, Henning", "Grimm, Anna" ]
A Dataset for Explainable Sentiment Analysis in the German Automotive Industry
wassa-1.13
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.wassa-1.14.bib
https://aclanthology.org/2023.wassa-1.14/
@inproceedings{huang-etal-2023-examining, title = "Examining Bias in Opinion Summarisation through the Perspective of Opinion Diversity", author = "Huang, Nannan and Tian, Lin and Fayek, Haytham and Zhang, Xiuzhen", editor = "Barnes, Jeremy and De Clercq, Orph{\'e}e and Klinger, Roman", booktitle = "Proceedings of the 13th Workshop on Computational Approaches to Subjectivity, Sentiment, {\&} Social Media Analysis", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.wassa-1.14", doi = "10.18653/v1/2023.wassa-1.14", pages = "149--161", abstract = "Opinion summarisation is a task that aims to condense the information presented in the source documents while retaining the core message and opinions. A summary that only represents the majority opinions will leave the minority opinions unrepresented in the summary. In this paper, we use the stance towards a certain target as an opinion. We study bias in opinion summarisation from the perspective of opinion diversity, which measures whether the model generated summary can cover a diverse set of opinions. In addition, we examine opinion similarity, a measure of how closely related two opinions are in terms of their stance on a given topic, and its relationship with opinion diversity. Through the lense of stances towards a topic, we examine opinion diversity and similarity using three debatable topics under COVID-19. Experimental results on these topics revealed that a higher degree of similarity of opinions did not indicate good diversity or fairly cover the various opinions originally presented in the source documents. We found that BART and ChatGPT can better capture diverse opinions presented in the source documents.", }
Opinion summarisation is a task that aims to condense the information presented in the source documents while retaining the core message and opinions. A summary that only represents the majority opinions will leave the minority opinions unrepresented in the summary. In this paper, we use the stance towards a certain target as an opinion. We study bias in opinion summarisation from the perspective of opinion diversity, which measures whether the model generated summary can cover a diverse set of opinions. In addition, we examine opinion similarity, a measure of how closely related two opinions are in terms of their stance on a given topic, and its relationship with opinion diversity. Through the lense of stances towards a topic, we examine opinion diversity and similarity using three debatable topics under COVID-19. Experimental results on these topics revealed that a higher degree of similarity of opinions did not indicate good diversity or fairly cover the various opinions originally presented in the source documents. We found that BART and ChatGPT can better capture diverse opinions presented in the source documents.
[ "Huang, Nannan", "Tian, Lin", "Fayek, Haytham", "Zhang, Xiuzhen" ]
Examining Bias in Opinion Summarisation through the Perspective of Opinion Diversity
wassa-1.14
Poster
2306.04424
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.wassa-1.15.bib
https://aclanthology.org/2023.wassa-1.15/
@inproceedings{han-sohn-2023-fluency, title = "Fluency Matters! Controllable Style Transfer with Syntax Guidance", author = "Han, Ji-Eun and Sohn, Kyung-Ah", editor = "Barnes, Jeremy and De Clercq, Orph{\'e}e and Klinger, Roman", booktitle = "Proceedings of the 13th Workshop on Computational Approaches to Subjectivity, Sentiment, {\&} Social Media Analysis", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.wassa-1.15", doi = "10.18653/v1/2023.wassa-1.15", pages = "162--171", abstract = "Unsupervised text style transfer is a challenging task that aims to alter the stylistic attributes of a given text without affecting its original content. One of the methods to achieve this is controllable style transfer, which allows for the control of the degree of style transfer. However, an issue encountered with controllable style transfer is the instability of transferred text fluency when the degree of the style transfer changes. To address this problem, we propose a novel approach that incorporates additional syntax parsing information during style transfer. By leveraging the syntactic information, our model is guided to generate natural sentences that effectively reflect the desired style while maintaining fluency. Experimental results show that our method achieves robust performance and improved fluency compared to previous controllable style transfer methods.", }
Unsupervised text style transfer is a challenging task that aims to alter the stylistic attributes of a given text without affecting its original content. One of the methods to achieve this is controllable style transfer, which allows for the control of the degree of style transfer. However, an issue encountered with controllable style transfer is the instability of transferred text fluency when the degree of the style transfer changes. To address this problem, we propose a novel approach that incorporates additional syntax parsing information during style transfer. By leveraging the syntactic information, our model is guided to generate natural sentences that effectively reflect the desired style while maintaining fluency. Experimental results show that our method achieves robust performance and improved fluency compared to previous controllable style transfer methods.
[ "Han, Ji-Eun", "Sohn, Kyung-Ah" ]
Fluency Matters! Controllable Style Transfer with Syntax Guidance
wassa-1.15
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.wassa-1.16.bib
https://aclanthology.org/2023.wassa-1.16/
@inproceedings{ghanadian-etal-2023-chatgpt, title = "{C}hat{GPT} for Suicide Risk Assessment on Social Media: Quantitative Evaluation of Model Performance, Potentials and Limitations", author = "Ghanadian, Hamideh and Nejadgholi, Isar and Al Osman, Hussein", editor = "Barnes, Jeremy and De Clercq, Orph{\'e}e and Klinger, Roman", booktitle = "Proceedings of the 13th Workshop on Computational Approaches to Subjectivity, Sentiment, {\&} Social Media Analysis", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.wassa-1.16", doi = "10.18653/v1/2023.wassa-1.16", pages = "172--183", abstract = "This paper presents a novel framework for quantitatively evaluating the interactive ChatGPT model in the context of suicidality assessment from social media posts, utilizing the University of Maryland Reddit suicidality dataset. We conduct a technical evaluation of ChatGPT{'}s performance on this task using Zero-Shot and Few-Shot experiments and compare its results with those of two fine-tuned transformer-based models. Additionally, we investigate the impact of different temperature parameters on ChatGPT{'}s response generation and discuss the optimal temperature based on the inconclusiveness rate of ChatGPT. Our results indicate that while ChatGPT attains considerable accuracy in this task, transformer-based models fine-tuned on human-annotated datasets exhibit superior performance. Moreover, our analysis sheds light on how adjusting ChatGPT{'}s hyperparameters can improve its ability to assist mental health professionals in this critical task.", }
This paper presents a novel framework for quantitatively evaluating the interactive ChatGPT model in the context of suicidality assessment from social media posts, utilizing the University of Maryland Reddit suicidality dataset. We conduct a technical evaluation of ChatGPT{'}s performance on this task using Zero-Shot and Few-Shot experiments and compare its results with those of two fine-tuned transformer-based models. Additionally, we investigate the impact of different temperature parameters on ChatGPT{'}s response generation and discuss the optimal temperature based on the inconclusiveness rate of ChatGPT. Our results indicate that while ChatGPT attains considerable accuracy in this task, transformer-based models fine-tuned on human-annotated datasets exhibit superior performance. Moreover, our analysis sheds light on how adjusting ChatGPT{'}s hyperparameters can improve its ability to assist mental health professionals in this critical task.
[ "Ghanadian, Hamideh", "Nejadgholi, Isar", "Al Osman, Hussein" ]
ChatGPT for Suicide Risk Assessment on Social Media: Quantitative Evaluation of Model Performance, Potentials and Limitations
wassa-1.16
Poster
2306.09390
[ "https://github.com/hamideh-ghanadian/chatgpt_for_suicide_risk_assessment_on_social_media" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.wassa-1.17.bib
https://aclanthology.org/2023.wassa-1.17/
@inproceedings{gupta-etal-2023-unsupervised, title = "Unsupervised Domain Adaptation using Lexical Transformations and Label Injection for {T}witter Data", author = "Gupta, Akshat and Liu, Xiaomo and Shah, Sameena", editor = "Barnes, Jeremy and De Clercq, Orph{\'e}e and Klinger, Roman", booktitle = "Proceedings of the 13th Workshop on Computational Approaches to Subjectivity, Sentiment, {\&} Social Media Analysis", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.wassa-1.17", doi = "10.18653/v1/2023.wassa-1.17", pages = "184--193", abstract = "Domain adaptation is an important and widely studied problem in natural language processing. A large body of literature tries to solve this problem by adapting models trained on the source domain to the target domain. In this paper, we instead solve this problem from a dataset perspective. We modify the source domain dataset with simple lexical transformations to reduce the domain shift between the source dataset distribution and the target dataset distribution. We find that models trained on the transformed source domain dataset perform significantly better than zero-shot models. Using our proposed transformations to convert standard English to tweets, we reach an unsupervised part-of-speech (POS) tagging accuracy of 92.14{\%} (from 81.54{\%} zero-shot accuracy), which is only slightly below the supervised performance of 94.45{\%}. We also use our proposed transformations to synthetically generate tweets and augment the Twitter dataset to achieve state-of-the-art performance for POS tagging.", }
Domain adaptation is an important and widely studied problem in natural language processing. A large body of literature tries to solve this problem by adapting models trained on the source domain to the target domain. In this paper, we instead solve this problem from a dataset perspective. We modify the source domain dataset with simple lexical transformations to reduce the domain shift between the source dataset distribution and the target dataset distribution. We find that models trained on the transformed source domain dataset perform significantly better than zero-shot models. Using our proposed transformations to convert standard English to tweets, we reach an unsupervised part-of-speech (POS) tagging accuracy of 92.14{\%} (from 81.54{\%} zero-shot accuracy), which is only slightly below the supervised performance of 94.45{\%}. We also use our proposed transformations to synthetically generate tweets and augment the Twitter dataset to achieve state-of-the-art performance for POS tagging.
[ "Gupta, Akshat", "Liu, Xiaomo", "Shah, Sameena" ]
Unsupervised Domain Adaptation using Lexical Transformations and Label Injection for Twitter Data
wassa-1.17
Poster
2307.10210
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.wassa-1.18.bib
https://aclanthology.org/2023.wassa-1.18/
@inproceedings{gonzalez-lopez-bethard-2023-transformer, title = "Transformer-based cynical expression detection in a corpus of {S}panish {Y}ou{T}ube reviews", author = "Gonzalez-Lopez, Samuel and Bethard, Steven", editor = "Barnes, Jeremy and De Clercq, Orph{\'e}e and Klinger, Roman", booktitle = "Proceedings of the 13th Workshop on Computational Approaches to Subjectivity, Sentiment, {\&} Social Media Analysis", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.wassa-1.18", doi = "10.18653/v1/2023.wassa-1.18", pages = "194--201", abstract = "Consumers of services and products exhibit a wide range of behaviors on social networks when they are dissatisfied. In this paper, we consider three types of cynical expressions (negative feelings, specific reasons, and attitude of being right) and annotate a corpus of 3189 comments in Spanish on car analysis channels from YouTube. We evaluate both token classification and text classification settings for this problem, and compare performance of different pre-trained models including BETO, SpanBERTa, Multilingual BERT, and RoBERTuito. The results show that models achieve performance above 0.8 F1 for all types of cynical expressions in the text classification setting, but achieve lower performance (around 0.6-0.7 F1) for the harder token classification setting.", }
Consumers of services and products exhibit a wide range of behaviors on social networks when they are dissatisfied. In this paper, we consider three types of cynical expressions (negative feelings, specific reasons, and attitude of being right) and annotate a corpus of 3189 comments in Spanish on car analysis channels from YouTube. We evaluate both token classification and text classification settings for this problem, and compare performance of different pre-trained models including BETO, SpanBERTa, Multilingual BERT, and RoBERTuito. The results show that models achieve performance above 0.8 F1 for all types of cynical expressions in the text classification setting, but achieve lower performance (around 0.6-0.7 F1) for the harder token classification setting.
[ "Gonzalez-Lopez, Samuel", "Bethard, Steven" ]
Transformer-based cynical expression detection in a corpus of Spanish YouTube reviews
wassa-1.18
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.wassa-1.19.bib
https://aclanthology.org/2023.wassa-1.19/
@inproceedings{havaldar-etal-2023-multilingual, title = "Multilingual Language Models are not Multicultural: A Case Study in Emotion", author = "Havaldar, Shreya and Singhal, Bhumika and Rai, Sunny and Liu, Langchen and Guntuku, Sharath Chandra and Ungar, Lyle", editor = "Barnes, Jeremy and De Clercq, Orph{\'e}e and Klinger, Roman", booktitle = "Proceedings of the 13th Workshop on Computational Approaches to Subjectivity, Sentiment, {\&} Social Media Analysis", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.wassa-1.19", doi = "10.18653/v1/2023.wassa-1.19", pages = "202--214", abstract = "Emotions are experienced and expressed differently across the world. In order to use Large Language Models (LMs) for multilingual tasks that require emotional sensitivity, LMs must reflect this cultural variation in emotion. In this study, we investigate whether the widely-used multilingual LMs in 2023 reflect differences in emotional expressions across cultures and languages. We find that embeddings obtained from LMs (e.g., XLM-RoBERTa) are Anglocentric, and generative LMs (e.g., ChatGPT) reflect Western norms, even when responding to prompts in other languages. Our results show that multilingual LMs do not successfully learn the culturally appropriate nuances of emotion and we highlight possible research directions towards correcting this.", }
Emotions are experienced and expressed differently across the world. In order to use Large Language Models (LMs) for multilingual tasks that require emotional sensitivity, LMs must reflect this cultural variation in emotion. In this study, we investigate whether the widely-used multilingual LMs in 2023 reflect differences in emotional expressions across cultures and languages. We find that embeddings obtained from LMs (e.g., XLM-RoBERTa) are Anglocentric, and generative LMs (e.g., ChatGPT) reflect Western norms, even when responding to prompts in other languages. Our results show that multilingual LMs do not successfully learn the culturally appropriate nuances of emotion and we highlight possible research directions towards correcting this.
[ "Havaldar, Shreya", "Singhal, Bhumika", "Rai, Sunny", "Liu, Langchen", "Guntuku, Sharath Ch", "ra", "Ungar, Lyle" ]
Multilingual Language Models are not Multicultural: A Case Study in Emotion
wassa-1.19
Poster
2307.01370
[ "https://github.com/shreyahavaldar/multicultural_emotion" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.wassa-1.20.bib
https://aclanthology.org/2023.wassa-1.20/
@inproceedings{lee-etal-2023-painsight, title = "Painsight: An Extendable Opinion Mining Framework for Detecting Pain Points Based on Online Customer Reviews", author = "Lee, Yukyung and Kim, Jaehee and Kim, Doyoon and Kho, Yookyung and Kim, Younsun and Kang, Pilsung", editor = "Barnes, Jeremy and De Clercq, Orph{\'e}e and Klinger, Roman", booktitle = "Proceedings of the 13th Workshop on Computational Approaches to Subjectivity, Sentiment, {\&} Social Media Analysis", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.wassa-1.20", doi = "10.18653/v1/2023.wassa-1.20", pages = "215--227", abstract = "As the e-commerce market continues to expand and online transactions proliferate, customer reviews have emerged as a critical element in shaping the purchasing decisions of prospective buyers. Previous studies have endeavored to identify key aspects of customer reviews through the development of sentiment analysis models and topic models. However, extracting specific dissatisfaction factors remains a challenging task. In this study, we delineate the pain point detection problem and propose Painsight, an unsupervised framework for automatically extracting distinct dissatisfaction factors from customer reviews without relying on ground truth labels. Painsight employs pre-trained language models to construct sentiment analysis and topic models, leveraging attribution scores derived from model gradients to extract dissatisfaction factors. Upon application of the proposed methodology to customer review data spanning five product categories, we successfully identified and categorized dissatisfaction factors within each group, as well as isolated factors for each type. Notably, Painsight outperformed benchmark methods, achieving substantial performance enhancements and exceptional results in human evaluations.", }
As the e-commerce market continues to expand and online transactions proliferate, customer reviews have emerged as a critical element in shaping the purchasing decisions of prospective buyers. Previous studies have endeavored to identify key aspects of customer reviews through the development of sentiment analysis models and topic models. However, extracting specific dissatisfaction factors remains a challenging task. In this study, we delineate the pain point detection problem and propose Painsight, an unsupervised framework for automatically extracting distinct dissatisfaction factors from customer reviews without relying on ground truth labels. Painsight employs pre-trained language models to construct sentiment analysis and topic models, leveraging attribution scores derived from model gradients to extract dissatisfaction factors. Upon application of the proposed methodology to customer review data spanning five product categories, we successfully identified and categorized dissatisfaction factors within each group, as well as isolated factors for each type. Notably, Painsight outperformed benchmark methods, achieving substantial performance enhancements and exceptional results in human evaluations.
[ "Lee, Yukyung", "Kim, Jaehee", "Kim, Doyoon", "Kho, Yookyung", "Kim, Younsun", "Kang, Pilsung" ]
Painsight: An Extendable Opinion Mining Framework for Detecting Pain Points Based on Online Customer Reviews
wassa-1.20
Poster
2306.02043
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.wassa-1.21.bib
https://aclanthology.org/2023.wassa-1.21/
@inproceedings{pereira-etal-2023-context, title = "Context-Dependent Embedding Utterance Representations for Emotion Recognition in Conversations", author = "Pereira, Patr{\'\i}cia and Moniz, Helena and Dias, Isabel and Carvalho, Joao Paulo", editor = "Barnes, Jeremy and De Clercq, Orph{\'e}e and Klinger, Roman", booktitle = "Proceedings of the 13th Workshop on Computational Approaches to Subjectivity, Sentiment, {\&} Social Media Analysis", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.wassa-1.21", doi = "10.18653/v1/2023.wassa-1.21", pages = "228--236", abstract = "Emotion Recognition in Conversations (ERC) has been gaining increasing importance as conversational agents become more and more common. Recognizing emotions is key for effective communication, being a crucial component in the development of effective and empathetic conversational agents. Knowledge and understanding of the conversational context are extremely valuable for identifying the emotions of the interlocutor. We thus approach Emotion Recognition in Conversations leveraging the conversational context, i.e., taking into attention previous conversational turns. The usual approach to model the conversational context has been to produce context-independent representations of each utterance and subsequently perform contextual modeling of these. Here we propose context-dependent embedding representations of each utterance by leveraging the contextual representational power of pre-trained transformer language models. In our approach, we feed the conversational context appended to the utterance to be classified as input to the RoBERTa encoder, to which we append a simple classification module, thus discarding the need to deal with context after obtaining the embeddings since these constitute already an efficient representation of such context. We also investigate how the number of introduced conversational turns influences our model performance. The effectiveness of our approach is validated on the open-domain DailyDialog dataset and on the task-oriented EmoWOZ dataset.", }
Emotion Recognition in Conversations (ERC) has been gaining increasing importance as conversational agents become more and more common. Recognizing emotions is key for effective communication, being a crucial component in the development of effective and empathetic conversational agents. Knowledge and understanding of the conversational context are extremely valuable for identifying the emotions of the interlocutor. We thus approach Emotion Recognition in Conversations leveraging the conversational context, i.e., taking into attention previous conversational turns. The usual approach to model the conversational context has been to produce context-independent representations of each utterance and subsequently perform contextual modeling of these. Here we propose context-dependent embedding representations of each utterance by leveraging the contextual representational power of pre-trained transformer language models. In our approach, we feed the conversational context appended to the utterance to be classified as input to the RoBERTa encoder, to which we append a simple classification module, thus discarding the need to deal with context after obtaining the embeddings since these constitute already an efficient representation of such context. We also investigate how the number of introduced conversational turns influences our model performance. The effectiveness of our approach is validated on the open-domain DailyDialog dataset and on the task-oriented EmoWOZ dataset.
[ "Pereira, Patr{\\'\\i}cia", "Moniz, Helena", "Dias, Isabel", "Carvalho, Joao Paulo" ]
Context-Dependent Embedding Utterance Representations for Emotion Recognition in Conversations
wassa-1.21
Poster
2304.08216
[ "https://github.com/patricia-pereira/cd-erc" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.wassa-1.22.bib
https://aclanthology.org/2023.wassa-1.22/
@inproceedings{lemmens-daelemans-2023-combining, title = "Combining Active Learning and Task Adaptation with {BERT} for Cost-Effective Annotation of Social Media Datasets", author = "Lemmens, Jens and Daelemans, Walter", editor = "Barnes, Jeremy and De Clercq, Orph{\'e}e and Klinger, Roman", booktitle = "Proceedings of the 13th Workshop on Computational Approaches to Subjectivity, Sentiment, {\&} Social Media Analysis", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.wassa-1.22", doi = "10.18653/v1/2023.wassa-1.22", pages = "237--250", abstract = "Social media provide a rich source of data that can be mined and used for a wide variety of research purposes. However, annotating this data can be expensive, yet necessary for state-of-the-art pre-trained language models to achieve high prediction performance. Therefore, we combine pool-based active learning based on prediction uncertainty (an established method for reducing annotation costs) with unsupervised task adaptation through Masked Language Modeling (MLM). The results on three different datasets (two social media corpora, one benchmark dataset) show that task adaptation significantly improves results and that with only a fraction of the available training data, this approach reaches similar F1-scores as those achieved by an upper-bound baseline model fine-tuned on all training data. We hereby contribute to the scarce corpus of research on active learning with pre-trained language models and propose a cost-efficient annotation sampling and fine-tuning approach that can be applied to a wide variety of tasks and datasets.", }
Social media provide a rich source of data that can be mined and used for a wide variety of research purposes. However, annotating this data can be expensive, yet necessary for state-of-the-art pre-trained language models to achieve high prediction performance. Therefore, we combine pool-based active learning based on prediction uncertainty (an established method for reducing annotation costs) with unsupervised task adaptation through Masked Language Modeling (MLM). The results on three different datasets (two social media corpora, one benchmark dataset) show that task adaptation significantly improves results and that with only a fraction of the available training data, this approach reaches similar F1-scores as those achieved by an upper-bound baseline model fine-tuned on all training data. We hereby contribute to the scarce corpus of research on active learning with pre-trained language models and propose a cost-efficient annotation sampling and fine-tuning approach that can be applied to a wide variety of tasks and datasets.
[ "Lemmens, Jens", "Daelemans, Walter" ]
Combining Active Learning and Task Adaptation with BERT for Cost-Effective Annotation of Social Media Datasets
wassa-1.22
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.wassa-1.23.bib
https://aclanthology.org/2023.wassa-1.23/
@inproceedings{van-nooten-daelemans-2023-improving, title = "Improving {D}utch Vaccine Hesitancy Monitoring via Multi-Label Data Augmentation with {GPT}-3.5", author = "Van Nooten, Jens and Daelemans, Walter", editor = "Barnes, Jeremy and De Clercq, Orph{\'e}e and Klinger, Roman", booktitle = "Proceedings of the 13th Workshop on Computational Approaches to Subjectivity, Sentiment, {\&} Social Media Analysis", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.wassa-1.23", doi = "10.18653/v1/2023.wassa-1.23", pages = "251--270", abstract = "In this paper, we leverage the GPT-3.5 language model both using the Chat-GPT API interface and the GPT-3.5 API interface to generate realistic examples of anti-vaccination tweets in Dutch with the aim of augmenting an imbalanced multi-label vaccine hesitancy argumentation classification dataset. In line with previous research, we devise a prompt that, on the one hand, instructs the model to generate realistic examples based on the gold standard dataset and, on the other hand, to assign multiple pseudo-labels (or a single pseudo-label) to the generated instances. We then augment our gold standard data with the generated examples and evaluate the impact thereof in a cross-validation setting with several state-of-the-art Dutch large language models. This augmentation technique predominantly shows improvements in F1 for classifying underrepresented classes while increasing the overall recall, paired with a slight decrease in precision for more common classes. Furthermore, we examine how well the synthetic data generalises to human data in the classification task. To our knowledge, we are the first to utilise Chat-GPT and GPT-3.5 for augmenting a Dutch multi-label dataset classification task.", }
In this paper, we leverage the GPT-3.5 language model both using the Chat-GPT API interface and the GPT-3.5 API interface to generate realistic examples of anti-vaccination tweets in Dutch with the aim of augmenting an imbalanced multi-label vaccine hesitancy argumentation classification dataset. In line with previous research, we devise a prompt that, on the one hand, instructs the model to generate realistic examples based on the gold standard dataset and, on the other hand, to assign multiple pseudo-labels (or a single pseudo-label) to the generated instances. We then augment our gold standard data with the generated examples and evaluate the impact thereof in a cross-validation setting with several state-of-the-art Dutch large language models. This augmentation technique predominantly shows improvements in F1 for classifying underrepresented classes while increasing the overall recall, paired with a slight decrease in precision for more common classes. Furthermore, we examine how well the synthetic data generalises to human data in the classification task. To our knowledge, we are the first to utilise Chat-GPT and GPT-3.5 for augmenting a Dutch multi-label dataset classification task.
[ "Van Nooten, Jens", "Daelemans, Walter" ]
Improving Dutch Vaccine Hesitancy Monitoring via Multi-Label Data Augmentation with GPT-3.5
wassa-1.23
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.wassa-1.24.bib
https://aclanthology.org/2023.wassa-1.24/
@inproceedings{hussiny-ovrelid-2023-emotion, title = "Emotion Analysis of Tweets Banning Education in {A}fghanistan", author = "Hussiny, Mohammad Ali and {\O}vrelid, Lilja", editor = "Barnes, Jeremy and De Clercq, Orph{\'e}e and Klinger, Roman", booktitle = "Proceedings of the 13th Workshop on Computational Approaches to Subjectivity, Sentiment, {\&} Social Media Analysis", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.wassa-1.24", doi = "10.18653/v1/2023.wassa-1.24", pages = "271--277", abstract = "This paper introduces the first emotion-annotated dataset for the Dari variant of Persian spoken in Afghanistan. The LetHerLearn dataset contains 7,600 tweets posted in reaction to the Taliban{'}s ban of women{'}s rights to education in 2022 and has been manually annotated according to Ekman{'}s emotion categories. We here detail the data collection and annotation process, present relevant dataset statistics as well as initial experiments on the resulting dataset, benchmarking a number of different neural architectures for the task of Dari emotion classification.", }
This paper introduces the first emotion-annotated dataset for the Dari variant of Persian spoken in Afghanistan. The LetHerLearn dataset contains 7,600 tweets posted in reaction to the Taliban{'}s ban of women{'}s rights to education in 2022 and has been manually annotated according to Ekman{'}s emotion categories. We here detail the data collection and annotation process, present relevant dataset statistics as well as initial experiments on the resulting dataset, benchmarking a number of different neural architectures for the task of Dari emotion classification.
[ "Hussiny, Mohammad Ali", "{\\O}vrelid, Lilja" ]
Emotion Analysis of Tweets Banning Education in Afghanistan
wassa-1.24
Poster
2306.16268
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.wassa-1.25.bib
https://aclanthology.org/2023.wassa-1.25/
@inproceedings{hoeken-etal-2023-identifying, title = "Identifying Slurs and Lexical Hate Speech via Light-Weight Dimension Projection in Embedding Space", author = "Hoeken, Sanne and Zarrie{\ss}, Sina and Alacam, Ozge", editor = "Barnes, Jeremy and De Clercq, Orph{\'e}e and Klinger, Roman", booktitle = "Proceedings of the 13th Workshop on Computational Approaches to Subjectivity, Sentiment, {\&} Social Media Analysis", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.wassa-1.25", doi = "10.18653/v1/2023.wassa-1.25", pages = "278--289", abstract = "The prevalence of hate speech on online platforms has become a pressing concern for society, leading to increased attention towards detecting hate speech. Prior work in this area has primarily focused on identifying hate speech at the utterance level that reflects the complex nature of hate speech. In this paper, we propose a targeted and efficient approach to identifying hate speech by detecting slurs at the lexical level using contextualized word embeddings. We hypothesize that slurs have a systematically different representation than their neutral counterparts, making them identifiable through existing methods for discovering semantic dimensions in word embeddings. The results demonstrate the effectiveness of our approach in predicting slurs, confirming linguistic theory that the meaning of slurs is stable across contexts. Our robust hate dimension approach for slur identification offers a promising solution to tackle a smaller yet crucial piece of the complex puzzle of hate speech detection.", }
The prevalence of hate speech on online platforms has become a pressing concern for society, leading to increased attention towards detecting hate speech. Prior work in this area has primarily focused on identifying hate speech at the utterance level that reflects the complex nature of hate speech. In this paper, we propose a targeted and efficient approach to identifying hate speech by detecting slurs at the lexical level using contextualized word embeddings. We hypothesize that slurs have a systematically different representation than their neutral counterparts, making them identifiable through existing methods for discovering semantic dimensions in word embeddings. The results demonstrate the effectiveness of our approach in predicting slurs, confirming linguistic theory that the meaning of slurs is stable across contexts. Our robust hate dimension approach for slur identification offers a promising solution to tackle a smaller yet crucial piece of the complex puzzle of hate speech detection.
[ "Hoeken, Sanne", "Zarrie{\\ss}, Sina", "Alacam, Ozge" ]
Identifying Slurs and Lexical Hate Speech via Light-Weight Dimension Projection in Embedding Space
wassa-1.25
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.wassa-1.26.bib
https://aclanthology.org/2023.wassa-1.26/
@inproceedings{barnes-2023-sentiment, title = "Sentiment and Emotion Classification in Low-resource Settings", author = "Barnes, Jeremy", editor = "Barnes, Jeremy and De Clercq, Orph{\'e}e and Klinger, Roman", booktitle = "Proceedings of the 13th Workshop on Computational Approaches to Subjectivity, Sentiment, {\&} Social Media Analysis", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.wassa-1.26", doi = "10.18653/v1/2023.wassa-1.26", pages = "290--304", abstract = "The popularity of sentiment and emotion analysis has led to an explosion of datasets, approaches, and papers. However, these are often tested in optimal settings, where plentiful training and development data are available, and compared mainly with recent state-of-the-art models that have been similarly evaluated. In this paper, we instead present a systematic comparison of sentiment and emotion classification methods, ranging from rule- and dictionary-based methods to recently proposed few-shot and prompting methods with large language models. We test these methods in-domain, out-of-domain, and in cross-lingual settings and find that in low-resource settings, rule- and dictionary-based methods perform as well or better than few-shot and prompting methods, especially for emotion classification. Zero-shot cross-lingual approaches, however, still outperform in-language dictionary induction.", }
The popularity of sentiment and emotion analysis has led to an explosion of datasets, approaches, and papers. However, these are often tested in optimal settings, where plentiful training and development data are available, and compared mainly with recent state-of-the-art models that have been similarly evaluated. In this paper, we instead present a systematic comparison of sentiment and emotion classification methods, ranging from rule- and dictionary-based methods to recently proposed few-shot and prompting methods with large language models. We test these methods in-domain, out-of-domain, and in cross-lingual settings and find that in low-resource settings, rule- and dictionary-based methods perform as well or better than few-shot and prompting methods, especially for emotion classification. Zero-shot cross-lingual approaches, however, still outperform in-language dictionary induction.
[ "Barnes, Jeremy" ]
Sentiment and Emotion Classification in Low-resource Settings
wassa-1.26
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.wassa-1.27.bib
https://aclanthology.org/2023.wassa-1.27/
@inproceedings{savinova-moscoso-del-prado-2023-analyzing, title = {Analyzing Subjectivity Using a Transformer-Based Regressor Trained on Na{\"\i}ve Speakers{'} Judgements}, author = "Savinova, Elena and Moscoso Del Prado, Fermin", editor = "Barnes, Jeremy and De Clercq, Orph{\'e}e and Klinger, Roman", booktitle = "Proceedings of the 13th Workshop on Computational Approaches to Subjectivity, Sentiment, {\&} Social Media Analysis", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.wassa-1.27", doi = "10.18653/v1/2023.wassa-1.27", pages = "305--314", abstract = "The problem of subjectivity detection is often approached as a preparatory binary task for sentiment analysis, despite the fact that theoretically subjectivity is often defined as a matter of degree. In this work, we approach subjectivity analysis as a regression task and test the efficiency of a transformer RoBERTa model in annotating subjectivity of online news, including news from social media, based on a small subset of human-labeled training data. The results of experiments comparing our model to an existing rule-based subjectivity regressor and a state-of-the-art binary classifier reveal that: 1) our model highly correlates with the human subjectivity ratings and outperforms the widely used rule-based {``}pattern{''} subjectivity regressor (De Smedt and Daelemans, 2012); 2) our model performs well as a binary classifier and generalizes to the benchmark subjectivity dataset (Pang and Lee, 2004); 3) in contrast, state-of-the-art classifiers trained on the benchmark dataset show catastrophic performance on our human-labeled data. The results bring to light the issues of the gold standard subjectivity dataset, and the models trained on it, which seem to distinguish between the origin/style of the texts rather than subjectivity as perceived by human English speakers.", }
The problem of subjectivity detection is often approached as a preparatory binary task for sentiment analysis, despite the fact that theoretically subjectivity is often defined as a matter of degree. In this work, we approach subjectivity analysis as a regression task and test the efficiency of a transformer RoBERTa model in annotating subjectivity of online news, including news from social media, based on a small subset of human-labeled training data. The results of experiments comparing our model to an existing rule-based subjectivity regressor and a state-of-the-art binary classifier reveal that: 1) our model highly correlates with the human subjectivity ratings and outperforms the widely used rule-based {``}pattern{''} subjectivity regressor (De Smedt and Daelemans, 2012); 2) our model performs well as a binary classifier and generalizes to the benchmark subjectivity dataset (Pang and Lee, 2004); 3) in contrast, state-of-the-art classifiers trained on the benchmark dataset show catastrophic performance on our human-labeled data. The results bring to light the issues of the gold standard subjectivity dataset, and the models trained on it, which seem to distinguish between the origin/style of the texts rather than subjectivity as perceived by human English speakers.
[ "Savinova, Elena", "Moscoso Del Prado, Fermin" ]
Analyzing Subjectivity Using a Transformer-Based Regressor Trained on Naïve Speakers' Judgements
wassa-1.27
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.wassa-1.28.bib
https://aclanthology.org/2023.wassa-1.28/
@inproceedings{maladry-etal-2023-fine, title = "A Fine Line Between Irony and Sincerity: Identifying Bias in Transformer Models for Irony Detection", author = "Maladry, Aaron and Lefever, Els and Van Hee, Cynthia and Hoste, Veronique", editor = "Barnes, Jeremy and De Clercq, Orph{\'e}e and Klinger, Roman", booktitle = "Proceedings of the 13th Workshop on Computational Approaches to Subjectivity, Sentiment, {\&} Social Media Analysis", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.wassa-1.28", doi = "10.18653/v1/2023.wassa-1.28", pages = "315--324", abstract = "In this paper we investigate potential bias in fine-tuned transformer models for irony detection. Bias is defined in this research as spurious associations between word n-grams and class labels, that can cause the system to rely too much on superficial cues and miss the essence of the irony. For this purpose, we looked for correlations between class labels and words that are prone to trigger irony, such as positive adjectives, intensifiers and topical nouns. Additionally, we investigate our irony model{'}s predictions before and after manipulating the data set through irony trigger replacements. We further support these insights with state-of-the-art explainability techniques (Layer Integrated Gradients, Discretized Integrated Gradients and Layer-wise Relevance Propagation). Both approaches confirm the hypothesis that transformer models generally encode correlations between positive sentiments and ironic texts, with even higher correlations between vividly expressed sentiment and irony. Based on these insights, we implemented a number of modification strategies to enhance the robustness of our irony classifier.", }
In this paper we investigate potential bias in fine-tuned transformer models for irony detection. Bias is defined in this research as spurious associations between word n-grams and class labels, that can cause the system to rely too much on superficial cues and miss the essence of the irony. For this purpose, we looked for correlations between class labels and words that are prone to trigger irony, such as positive adjectives, intensifiers and topical nouns. Additionally, we investigate our irony model{'}s predictions before and after manipulating the data set through irony trigger replacements. We further support these insights with state-of-the-art explainability techniques (Layer Integrated Gradients, Discretized Integrated Gradients and Layer-wise Relevance Propagation). Both approaches confirm the hypothesis that transformer models generally encode correlations between positive sentiments and ironic texts, with even higher correlations between vividly expressed sentiment and irony. Based on these insights, we implemented a number of modification strategies to enhance the robustness of our irony classifier.
[ "Maladry, Aaron", "Lefever, Els", "Van Hee, Cynthia", "Hoste, Veronique" ]
A Fine Line Between Irony and Sincerity: Identifying Bias in Transformer Models for Irony Detection
wassa-1.28
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.wassa-1.29.bib
https://aclanthology.org/2023.wassa-1.29/
@inproceedings{jentzsch-kersting-2023-chatgpt, title = "{C}hat{GPT} is fun, but it is not funny! Humor is still challenging Large Language Models", author = "Jentzsch, Sophie and Kersting, Kristian", editor = "Barnes, Jeremy and De Clercq, Orph{\'e}e and Klinger, Roman", booktitle = "Proceedings of the 13th Workshop on Computational Approaches to Subjectivity, Sentiment, {\&} Social Media Analysis", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.wassa-1.29", doi = "10.18653/v1/2023.wassa-1.29", pages = "325--340", abstract = "Humor is a central aspect of human communication that has not been solved for artificial agents so far. Large language models (LLMs) are increasingly able to capture implicit and contextual information. Especially, OpenAI{'}s ChatGPT recently gained immense public attention. The GPT3-based model almost seems to communicate on a human level and can even tell jokes. Humor is an essential component of human communication. But is ChatGPT really funny? We put ChatGPT{'}s sense of humor to the test. In a series of exploratory experiments around jokes, i.e., generation, explanation, and detection, we seek to understand ChatGPT{'}s capability to grasp and reproduce human humor. Since the model itself is not accessible, we applied prompt-based experiments. Our empirical evidence indicates that jokes are not hard-coded but mostly also not newly generated by the model. Over 90{\%} of 1008 generated jokes were the same 25 jokes. The system accurately explains valid jokes but also comes up with fictional explanations for invalid jokes. Joke-typical characteristics can mislead ChatGPT in the classification of jokes. ChatGPT has not solved computational humor yet but it can be a big leap toward {``}funny{''} machines.", }
Humor is a central aspect of human communication that has not been solved for artificial agents so far. Large language models (LLMs) are increasingly able to capture implicit and contextual information. Especially, OpenAI{'}s ChatGPT recently gained immense public attention. The GPT3-based model almost seems to communicate on a human level and can even tell jokes. Humor is an essential component of human communication. But is ChatGPT really funny? We put ChatGPT{'}s sense of humor to the test. In a series of exploratory experiments around jokes, i.e., generation, explanation, and detection, we seek to understand ChatGPT{'}s capability to grasp and reproduce human humor. Since the model itself is not accessible, we applied prompt-based experiments. Our empirical evidence indicates that jokes are not hard-coded but mostly also not newly generated by the model. Over 90{\%} of 1008 generated jokes were the same 25 jokes. The system accurately explains valid jokes but also comes up with fictional explanations for invalid jokes. Joke-typical characteristics can mislead ChatGPT in the classification of jokes. ChatGPT has not solved computational humor yet but it can be a big leap toward {``}funny{''} machines.
[ "Jentzsch, Sophie", "Kersting, Kristian" ]
ChatGPT is fun, but it is not funny! Humor is still challenging Large Language Models
wassa-1.29
Poster
2306.04563
[ "https://github.com/dlr-sc/jokegpt-wassa23" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.wassa-1.30.bib
https://aclanthology.org/2023.wassa-1.30/
@inproceedings{lorandi-belz-2023-control, title = "How to Control Sentiment in Text Generation: A Survey of the State-of-the-Art in Sentiment-Control Techniques", author = "Lorandi, Michela and Belz, Anya", editor = "Barnes, Jeremy and De Clercq, Orph{\'e}e and Klinger, Roman", booktitle = "Proceedings of the 13th Workshop on Computational Approaches to Subjectivity, Sentiment, {\&} Social Media Analysis", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.wassa-1.30", doi = "10.18653/v1/2023.wassa-1.30", pages = "341--353", abstract = "Recent advances in the development of large Pretrained Language Models, such as GPT, BERT and Bloom, have achieved remarkable performance on a wide range of different NLP tasks. However, when used for text generation tasks, these models still have limitations when it comes to controlling the content and style of the generated text, often producing content that is incorrect, irrelevant, or inappropriate in the context of a given task. In this survey paper, we explore methods for controllable text generation with a focus on sentiment control. We systematically collect papers from the ACL Anthology, create a categorisation scheme based on different control techniques and controlled attributes, and use the scheme to categorise and compare methods. The result is a detailed and comprehensive overview of state-of-the-art techniques for sentiment-controlled text generation categorised on the basis of how the control is implemented and what attributes are controlled and providing a clear idea of their relative strengths and weaknesses.", }
Recent advances in the development of large Pretrained Language Models, such as GPT, BERT and Bloom, have achieved remarkable performance on a wide range of different NLP tasks. However, when used for text generation tasks, these models still have limitations when it comes to controlling the content and style of the generated text, often producing content that is incorrect, irrelevant, or inappropriate in the context of a given task. In this survey paper, we explore methods for controllable text generation with a focus on sentiment control. We systematically collect papers from the ACL Anthology, create a categorisation scheme based on different control techniques and controlled attributes, and use the scheme to categorise and compare methods. The result is a detailed and comprehensive overview of state-of-the-art techniques for sentiment-controlled text generation categorised on the basis of how the control is implemented and what attributes are controlled and providing a clear idea of their relative strengths and weaknesses.
[ "Lorandi, Michela", "Belz, Anya" ]
How to Control Sentiment in Text Generation: A Survey of the State-of-the-Art in Sentiment-Control Techniques
wassa-1.30
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.wassa-1.31.bib
https://aclanthology.org/2023.wassa-1.31/
@inproceedings{benedetto-etal-2023-transformer, title = "Transformer-based Prediction of Emotional Reactions to Online Social Network Posts", author = "Benedetto, Irene and La Quatra, Moreno and Cagliero, Luca and Vassio, Luca and Trevisan, Martino", editor = "Barnes, Jeremy and De Clercq, Orph{\'e}e and Klinger, Roman", booktitle = "Proceedings of the 13th Workshop on Computational Approaches to Subjectivity, Sentiment, {\&} Social Media Analysis", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.wassa-1.31", doi = "10.18653/v1/2023.wassa-1.31", pages = "354--364", abstract = "Emotional reactions to Online Social Network posts have recently gained importance in the study of the online ecosystem. Prior to post publication, the number of received reactions can be predicted based on either the textual content of the post or the related metadata. However, existing approaches suffer from both the lack of semantic-aware language understanding models and the limited explainability of the prediction models. To overcome these issues, we present a new transformer-based method to predict the number of emotional reactions of different types to social posts. It leverages the attention mechanism to capture arbitrary semantic textual relations neglected by prior works. Furthermore, it also provides end-users with textual explanations of the predictions. The results achieved on a large collection of Facebook posts confirm the applicability of the presented methodology.", }
Emotional reactions to Online Social Network posts have recently gained importance in the study of the online ecosystem. Prior to post publication, the number of received reactions can be predicted based on either the textual content of the post or the related metadata. However, existing approaches suffer from both the lack of semantic-aware language understanding models and the limited explainability of the prediction models. To overcome these issues, we present a new transformer-based method to predict the number of emotional reactions of different types to social posts. It leverages the attention mechanism to capture arbitrary semantic textual relations neglected by prior works. Furthermore, it also provides end-users with textual explanations of the predictions. The results achieved on a large collection of Facebook posts confirm the applicability of the presented methodology.
[ "Benedetto, Irene", "La Quatra, Moreno", "Cagliero, Luca", "Vassio, Luca", "Trevisan, Martino" ]
Transformer-based Prediction of Emotional Reactions to Online Social Network Posts
wassa-1.31
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.wassa-1.32.bib
https://aclanthology.org/2023.wassa-1.32/
@inproceedings{tatariya-etal-2023-transfer, title = "Transfer Learning for Code-Mixed Data: Do Pretraining Languages Matter?", author = "Tatariya, Kushal and Lent, Heather and de Lhoneux, Miryam", editor = "Barnes, Jeremy and De Clercq, Orph{\'e}e and Klinger, Roman", booktitle = "Proceedings of the 13th Workshop on Computational Approaches to Subjectivity, Sentiment, {\&} Social Media Analysis", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.wassa-1.32", doi = "10.18653/v1/2023.wassa-1.32", pages = "365--378", abstract = "Monolinguals make up a minority of the world{'}s speakers, and yet most language technologies lag behind in handling linguistic behaviours produced by bilingual and multilingual speakers. A commonly observed phenomenon in such communities is code-mixing, which is prevalent on social media, and thus requires attention in NLP research. In this work, we look into the ability of pretrained language models to handle code-mixed data, with a focus on the impact of languages present in pretraining on the downstream performance of the model as measured on the task of sentiment analysis. Ultimately, we find that the pretraining language has little effect on performance when the model sees code-mixed data during downstream finetuning. We also evaluate the models on code-mixed data in a zero-shot setting, after task-specific finetuning on a monolingual dataset. We find that this brings out differences in model performance that can be attributed to the pretraining languages. We present a thorough analysis of these findings that also looks at model performance based on the composition of participating languages in the code-mixed datasets.", }
Monolinguals make up a minority of the world{'}s speakers, and yet most language technologies lag behind in handling linguistic behaviours produced by bilingual and multilingual speakers. A commonly observed phenomenon in such communities is code-mixing, which is prevalent on social media, and thus requires attention in NLP research. In this work, we look into the ability of pretrained language models to handle code-mixed data, with a focus on the impact of languages present in pretraining on the downstream performance of the model as measured on the task of sentiment analysis. Ultimately, we find that the pretraining language has little effect on performance when the model sees code-mixed data during downstream finetuning. We also evaluate the models on code-mixed data in a zero-shot setting, after task-specific finetuning on a monolingual dataset. We find that this brings out differences in model performance that can be attributed to the pretraining languages. We present a thorough analysis of these findings that also looks at model performance based on the composition of participating languages in the code-mixed datasets.
[ "Tatariya, Kushal", "Lent, Heather", "de Lhoneux, Miryam" ]
Transfer Learning for Code-Mixed Data: Do Pretraining Languages Matter?
wassa-1.32
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.wassa-1.33.bib
https://aclanthology.org/2023.wassa-1.33/
@inproceedings{kim-etal-2023-chatgpt, title = "Can {C}hat{GPT} Understand Causal Language in Science Claims?", author = "Kim, Yuheun and Guo, Lu and Yu, Bei and Li, Yingya", editor = "Barnes, Jeremy and De Clercq, Orph{\'e}e and Klinger, Roman", booktitle = "Proceedings of the 13th Workshop on Computational Approaches to Subjectivity, Sentiment, {\&} Social Media Analysis", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.wassa-1.33", doi = "10.18653/v1/2023.wassa-1.33", pages = "379--389", abstract = "This study evaluated ChatGPT{'}s ability to understand causal language in science papers and news by testing its accuracy in a task of labeling the strength of a claim as causal, conditional causal, correlational, or no relationship. The results show that ChatGPT is still behind the existing fine-tuned BERT models by a large margin. ChatGPT also had difficulty understanding conditional causal claims mitigated by hedges. However, its weakness may be utilized to improve the clarity of human annotation guidelines. Chain-of-Thoughts were faithful and helpful for improving prompt performance, but finding the optimal prompt is difficult with inconsistent results and the lack of an effective method to establish cause-effect between prompts and outcomes, suggesting caution when generalizing prompt engineering results across tasks or models.", }
This study evaluated ChatGPT{'}s ability to understand causal language in science papers and news by testing its accuracy in a task of labeling the strength of a claim as causal, conditional causal, correlational, or no relationship. The results show that ChatGPT is still behind the existing fine-tuned BERT models by a large margin. ChatGPT also had difficulty understanding conditional causal claims mitigated by hedges. However, its weakness may be utilized to improve the clarity of human annotation guidelines. Chain-of-Thoughts were faithful and helpful for improving prompt performance, but finding the optimal prompt is difficult with inconsistent results and the lack of an effective method to establish cause-effect between prompts and outcomes, suggesting caution when generalizing prompt engineering results across tasks or models.
[ "Kim, Yuheun", "Guo, Lu", "Yu, Bei", "Li, Yingya" ]
Can ChatGPT Understand Causal Language in Science Claims?
wassa-1.33
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.wassa-1.34.bib
https://aclanthology.org/2023.wassa-1.34/
@inproceedings{v-ganesan-etal-2023-systematic, title = "Systematic Evaluation of {GPT}-3 for Zero-Shot Personality Estimation", author = "V Ganesan, Adithya and Lal, Yash Kumar and Nilsson, August and Schwartz, H.", editor = "Barnes, Jeremy and De Clercq, Orph{\'e}e and Klinger, Roman", booktitle = "Proceedings of the 13th Workshop on Computational Approaches to Subjectivity, Sentiment, {\&} Social Media Analysis", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.wassa-1.34", doi = "10.18653/v1/2023.wassa-1.34", pages = "390--400", abstract = "Very large language models (LLMs) perform extremely well on a spectrum of NLP tasks in a zero-shot setting. However, little is known about their performance on human-level NLP problems which rely on understanding psychological concepts, such as assessing personality traits. In this work, we investigate the zero-shot ability of GPT-3 to estimate the Big 5 personality traits from users{'} social media posts. Through a set of systematic experiments, we find that zero-shot GPT-3 performance is somewhat close to an existing pre-trained SotA for broad classification upon injecting knowledge about the trait in the prompts. However, when prompted to provide fine-grained classification, its performance drops to close to a simple most frequent class (MFC) baseline. We further analyze where GPT-3 performs better, as well as worse, than a pretrained lexical model, illustrating systematic errors that suggest ways to improve LLMs on human-level NLP tasks. The code for this project is available on Github.", }
Very large language models (LLMs) perform extremely well on a spectrum of NLP tasks in a zero-shot setting. However, little is known about their performance on human-level NLP problems which rely on understanding psychological concepts, such as assessing personality traits. In this work, we investigate the zero-shot ability of GPT-3 to estimate the Big 5 personality traits from users{'} social media posts. Through a set of systematic experiments, we find that zero-shot GPT-3 performance is somewhat close to an existing pre-trained SotA for broad classification upon injecting knowledge about the trait in the prompts. However, when prompted to provide fine-grained classification, its performance drops to close to a simple most frequent class (MFC) baseline. We further analyze where GPT-3 performs better, as well as worse, than a pretrained lexical model, illustrating systematic errors that suggest ways to improve LLMs on human-level NLP tasks. The code for this project is available on Github.
[ "V Ganesan, Adithya", "Lal, Yash Kumar", "Nilsson, August", "Schwartz, H." ]
Systematic Evaluation of GPT-3 for Zero-Shot Personality Estimation
wassa-1.34
Poster
2306.01183
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.wassa-1.35.bib
https://aclanthology.org/2023.wassa-1.35/
@inproceedings{teodorescu-etal-2023-utterance, title = "Utterance Emotion Dynamics in Children{'}s Poems: Emotional Changes Across Age", author = "Teodorescu, Daniela and Fyshe, Alona and Mohammad, Saif", editor = "Barnes, Jeremy and De Clercq, Orph{\'e}e and Klinger, Roman", booktitle = "Proceedings of the 13th Workshop on Computational Approaches to Subjectivity, Sentiment, {\&} Social Media Analysis", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.wassa-1.35", doi = "10.18653/v1/2023.wassa-1.35", pages = "401--415", abstract = "Emerging psychopathology studies are showing that patterns of changes in emotional state {---} emotion dynamics {---} are associated with overall well-being and mental health. More recently, there has been some work in tracking emotion dynamics through one{'}s utterances, allowing for data to be collected on a larger scale across time and people. However, several questions about how emotion dynamics change with age, especially in children, and when determined through children{'}s writing, remain unanswered. In this work, we use both a lexicon and a machine learning based approach to quantify characteristics of emotion dynamics determined from poems written by children of various ages. We show that both approaches point to similar trends: consistent increasing intensities for some emotions (e.g., anger, fear, joy, sadness, arousal, and dominance) with age and a consistent decreasing valence with age. We also find increasing emotional variability, rise rates (i.e., emotional reactivity), and recovery rates (i.e., emotional regulation) with age. These results act as useful baselines for further research in how patterns of emotions expressed by children change with age, and their association with mental health.", }
Emerging psychopathology studies are showing that patterns of changes in emotional state {---} emotion dynamics {---} are associated with overall well-being and mental health. More recently, there has been some work in tracking emotion dynamics through one{'}s utterances, allowing for data to be collected on a larger scale across time and people. However, several questions about how emotion dynamics change with age, especially in children, and when determined through children{'}s writing, remain unanswered. In this work, we use both a lexicon and a machine learning based approach to quantify characteristics of emotion dynamics determined from poems written by children of various ages. We show that both approaches point to similar trends: consistent increasing intensities for some emotions (e.g., anger, fear, joy, sadness, arousal, and dominance) with age and a consistent decreasing valence with age. We also find increasing emotional variability, rise rates (i.e., emotional reactivity), and recovery rates (i.e., emotional regulation) with age. These results act as useful baselines for further research in how patterns of emotions expressed by children change with age, and their association with mental health.
[ "Teodorescu, Daniela", "Fyshe, Alona", "Mohammad, Saif" ]
Utterance Emotion Dynamics in Children's Poems: Emotional Changes Across Age
wassa-1.35
Poster
2306.05387
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.wassa-1.36.bib
https://aclanthology.org/2023.wassa-1.36/
@inproceedings{alexeeva-etal-2023-annotating, title = "Annotating and Training for Population Subjective Views", author = "Alexeeva, Maria and Hyland, Caroline and Alcock, Keith and Cohen, Allegra A. Beal and Kanyamahanga, Hubert and Anni, Isaac Kobby and Surdeanu, Mihai", editor = "Barnes, Jeremy and De Clercq, Orph{\'e}e and Klinger, Roman", booktitle = "Proceedings of the 13th Workshop on Computational Approaches to Subjectivity, Sentiment, {\&} Social Media Analysis", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.wassa-1.36", doi = "10.18653/v1/2023.wassa-1.36", pages = "416--430", abstract = "In this paper, we present a dataset of subjective views (beliefs and attitudes) held by individuals or groups. We analyze the usefulness of the dataset by training a neural classifier that identifies belief-containing sentences that are relevant for our broader project of interest{---}scientific modeling of complex systems. We also explore and discuss difficulties related to annotation of subjective views and propose ways of addressing them.", }
In this paper, we present a dataset of subjective views (beliefs and attitudes) held by individuals or groups. We analyze the usefulness of the dataset by training a neural classifier that identifies belief-containing sentences that are relevant for our broader project of interest{---}scientific modeling of complex systems. We also explore and discuss difficulties related to annotation of subjective views and propose ways of addressing them.
[ "Alexeeva, Maria", "Hyland, Caroline", "Alcock, Keith", "Cohen, Allegra A. Beal", "Kanyamahanga, Hubert", "Anni, Isaac Kobby", "Surdeanu, Mihai" ]
Annotating and Training for Population Subjective Views
wassa-1.36
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.wassa-1.37.bib
https://aclanthology.org/2023.wassa-1.37/
@inproceedings{rajendran-trabelsi-2023-exploration, title = "Exploration of Contrastive Learning Strategies toward more Robust Stance Detection", author = "Rajendran, Udhaya Kumar and Trabelsi, Amine", editor = "Barnes, Jeremy and De Clercq, Orph{\'e}e and Klinger, Roman", booktitle = "Proceedings of the 13th Workshop on Computational Approaches to Subjectivity, Sentiment, {\&} Social Media Analysis", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.wassa-1.37", doi = "10.18653/v1/2023.wassa-1.37", pages = "431--440", abstract = "Stance Detection is the task of identifying the position of an author of a text towards an issue or a target. Previous studies on Stance Detection indicate that the existing systems are non-robust to the variations and errors in input sentences. Our proposed methodology uses Contrastive Learning to learn sentence representations by bringing semantically similar sentences and sentences implying the same stance closer to each other in the embedding space. We compare our approach to a pretrained transformer model directly finetuned with the stance datasets. We use char-level and word-level adversarial perturbation attacks to measure the resilience of the models and we show that our approach achieves better performances and is more robust to the different adversarial perturbations introduced to the test data. The results indicate that our approach performs better on small-sized and class-imbalanced stance datasets.", }
Stance Detection is the task of identifying the position of an author of a text towards an issue or a target. Previous studies on Stance Detection indicate that the existing systems are non-robust to the variations and errors in input sentences. Our proposed methodology uses Contrastive Learning to learn sentence representations by bringing semantically similar sentences and sentences implying the same stance closer to each other in the embedding space. We compare our approach to a pretrained transformer model directly finetuned with the stance datasets. We use char-level and word-level adversarial perturbation attacks to measure the resilience of the models and we show that our approach achieves better performances and is more robust to the different adversarial perturbations introduced to the test data. The results indicate that our approach performs better on small-sized and class-imbalanced stance datasets.
[ "Rajendran, Udhaya Kumar", "Trabelsi, Amine" ]
Exploration of Contrastive Learning Strategies toward more Robust Stance Detection
wassa-1.37
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.wassa-1.38.bib
https://aclanthology.org/2023.wassa-1.38/
@inproceedings{bhaumik-etal-2023-adapting, title = "Adapting Emotion Detection to Analyze Influence Campaigns on Social Media", author = "Bhaumik, Ankita and Bernhardt, Andy and Katsios, Gregorios and Sa, Ning and Strzalkowski, Tomek", editor = "Barnes, Jeremy and De Clercq, Orph{\'e}e and Klinger, Roman", booktitle = "Proceedings of the 13th Workshop on Computational Approaches to Subjectivity, Sentiment, {\&} Social Media Analysis", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.wassa-1.38", doi = "10.18653/v1/2023.wassa-1.38", pages = "441--451", abstract = "Social media is an extremely potent tool for influencing public opinion, particularly during important events such as elections, pandemics, and national conflicts. Emotions are a crucial aspect of this influence, but detecting them accurately in the political domain is a significant challenge due to the lack of suitable emotion labels and training datasets. In this paper, we present a generalized approach to emotion detection that can be adapted to the political domain with minimal performance sacrifice. Our approach is designed to be easily integrated into existing models without the need for additional training or fine-tuning. We demonstrate the zero-shot and few-shot performance of our model on the 2017 French presidential elections and propose efficient emotion groupings that would aid in effectively analyzing influence campaigns and agendas on social media.", }
Social media is an extremely potent tool for influencing public opinion, particularly during important events such as elections, pandemics, and national conflicts. Emotions are a crucial aspect of this influence, but detecting them accurately in the political domain is a significant challenge due to the lack of suitable emotion labels and training datasets. In this paper, we present a generalized approach to emotion detection that can be adapted to the political domain with minimal performance sacrifice. Our approach is designed to be easily integrated into existing models without the need for additional training or fine-tuning. We demonstrate the zero-shot and few-shot performance of our model on the 2017 French presidential elections and propose efficient emotion groupings that would aid in effectively analyzing influence campaigns and agendas on social media.
[ "Bhaumik, Ankita", "Bernhardt, Andy", "Katsios, Gregorios", "Sa, Ning", "Strzalkowski, Tomek" ]
Adapting Emotion Detection to Analyze Influence Campaigns on Social Media
wassa-1.38
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.wassa-1.39.bib
https://aclanthology.org/2023.wassa-1.39/
@inproceedings{oboyle-doyle-2023-just, title = "Not Just Iconic: Emoji Interpretation is Shaped by Use", author = "O{'}Boyle, Brianna and Doyle, Gabriel", editor = "Barnes, Jeremy and De Clercq, Orph{\'e}e and Klinger, Roman", booktitle = "Proceedings of the 13th Workshop on Computational Approaches to Subjectivity, Sentiment, {\&} Social Media Analysis", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.wassa-1.39", doi = "10.18653/v1/2023.wassa-1.39", pages = "452--457", abstract = "Where do the meaning of emoji come from? Though it is generally assumed that emoji are fully iconic, with meanings derived from their visual forms, we argue that this is only one component of their meaning. We surveyed users and non-users of the Chinese social media platform WeChat for their interpretations of emoji specific to WeChat. We find that some emoji show significant differences in their interpretations between users and non-users, and based on how familiar a person is with the specific emoji. We argue that this reflects a more complex process for building the meaning of emoji on a platform than pure iconicity.", }
Where does the meaning of emoji come from? Though it is generally assumed that emoji are fully iconic, with meanings derived from their visual forms, we argue that this is only one component of their meaning. We surveyed users and non-users of the Chinese social media platform WeChat for their interpretations of emoji specific to WeChat. We find that some emoji show significant differences in their interpretations between users and non-users, and based on how familiar a person is with the specific emoji. We argue that this reflects a more complex process for building the meaning of emoji on a platform than pure iconicity.
[ "O{'}Boyle, Brianna", "Doyle, Gabriel" ]
Not Just Iconic: Emoji Interpretation is Shaped by Use
wassa-1.39
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.wassa-1.40.bib
https://aclanthology.org/2023.wassa-1.40/
@inproceedings{de-bruyne-2023-paradox, title = "The Paradox of Multilingual Emotion Detection", author = "De Bruyne, Luna", editor = "Barnes, Jeremy and De Clercq, Orph{\'e}e and Klinger, Roman", booktitle = "Proceedings of the 13th Workshop on Computational Approaches to Subjectivity, Sentiment, {\&} Social Media Analysis", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.wassa-1.40", doi = "10.18653/v1/2023.wassa-1.40", pages = "458--466", abstract = "The dominance of English is a well-known issue in NLP research. In this position paper, I turn to state-of-the-art psychological insights to explain why this problem is especially persistent in research on automatic emotion detection, and why the seemingly promising approach of using multilingual models to include lower-resourced languages might not be the desired solution. Instead, I campaign for the use of models that acknowledge linguistic and cultural differences in emotion conceptualization and verbalization. Moreover, I see much potential in NLP to better understand emotions and emotional language use across different languages.", }
The dominance of English is a well-known issue in NLP research. In this position paper, I turn to state-of-the-art psychological insights to explain why this problem is especially persistent in research on automatic emotion detection, and why the seemingly promising approach of using multilingual models to include lower-resourced languages might not be the desired solution. Instead, I campaign for the use of models that acknowledge linguistic and cultural differences in emotion conceptualization and verbalization. Moreover, I see much potential in NLP to better understand emotions and emotional language use across different languages.
[ "De Bruyne, Luna" ]
The Paradox of Multilingual Emotion Detection
wassa-1.40
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.wassa-1.41.bib
https://aclanthology.org/2023.wassa-1.41/
@inproceedings{ireland-etal-2023-sadness, title = "Sadness and Anxiety Language in {R}eddit Messages Before and After Quitting a Job", author = "Ireland, Molly and Iserman, Micah and Adams, Kiki", editor = "Barnes, Jeremy and De Clercq, Orph{\'e}e and Klinger, Roman", booktitle = "Proceedings of the 13th Workshop on Computational Approaches to Subjectivity, Sentiment, {\&} Social Media Analysis", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.wassa-1.41", doi = "10.18653/v1/2023.wassa-1.41", pages = "467--478", abstract = "People globally quit their jobs at high rates during the COVID-19 pandemic, yet there is scant research about emotional trajectories surrounding voluntary resignations before or during that era. To explore long-term emotional language patterns before and after quitting a job, we amassed a Reddit sample of people who indicated resigning on a specific day (n = 7,436), each of whom was paired with a comparison user matched on posting history. After excluding people on the basis of low posting frequency and word count, we analyzed 150.3 million words (53.1{\%} from 5,134 target users who indicated quitting) using SALLEE, a dictionary-based syntax-aware tool, and Linguistic Inquiry and Word Count (LIWC) dictionaries. Based on posts in the year before and after quitting, people who had quit their jobs used more sadness and anxiety language than matched comparison users. Lower rates of {``}I{''} pronouns and cognitive processing language were associated with less sadness and anxiety surrounding quitting. Emotional trajectories during and before the pandemic were parallel, though pandemic messages were more negative. The results have relevance for strategic self-distancing as a means of regulating negative emotions around major life changes.", }
People globally quit their jobs at high rates during the COVID-19 pandemic, yet there is scant research about emotional trajectories surrounding voluntary resignations before or during that era. To explore long-term emotional language patterns before and after quitting a job, we amassed a Reddit sample of people who indicated resigning on a specific day (n = 7,436), each of whom was paired with a comparison user matched on posting history. After excluding people on the basis of low posting frequency and word count, we analyzed 150.3 million words (53.1{\%} from 5,134 target users who indicated quitting) using SALLEE, a dictionary-based syntax-aware tool, and Linguistic Inquiry and Word Count (LIWC) dictionaries. Based on posts in the year before and after quitting, people who had quit their jobs used more sadness and anxiety language than matched comparison users. Lower rates of {``}I{''} pronouns and cognitive processing language were associated with less sadness and anxiety surrounding quitting. Emotional trajectories during and before the pandemic were parallel, though pandemic messages were more negative. The results have relevance for strategic self-distancing as a means of regulating negative emotions around major life changes.
[ "Ireland, Molly", "Iserman, Micah", "Adams, Kiki" ]
Sadness and Anxiety Language in Reddit Messages Before and After Quitting a Job
wassa-1.41
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.wassa-1.42.bib
https://aclanthology.org/2023.wassa-1.42/
@inproceedings{schaefer-etal-2023-communicating, title = "Communicating Climate Change: A Comparison Between Tweets and Speeches by {G}erman Members of Parliament", author = "Schaefer, Robin and Abels, Christoph and Lewandowsky, Stephan and Stede, Manfred", editor = "Barnes, Jeremy and De Clercq, Orph{\'e}e and Klinger, Roman", booktitle = "Proceedings of the 13th Workshop on Computational Approaches to Subjectivity, Sentiment, {\&} Social Media Analysis", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.wassa-1.42", doi = "10.18653/v1/2023.wassa-1.42", pages = "479--496", abstract = "Twitter and parliamentary speeches are very different communication channels, but many members of parliament (MPs) make use of both. Focusing on the topic of climate change, we undertake a comparative analysis of speeches and tweets uttered by MPs in Germany in a recent six-year period. By keyword/hashtag analyses and topic modeling, we find substantial differences along party lines, with left-leaning parties discussing climate change through a crisis frame, while liberal and conservative parties try to address climate change through the lens of climate-friendly technology and practices. Only the AfD denies the need to adopt climate change mitigating measures, demeaning those concerned about a deteriorating climate as climate cult or fanatics. Our analysis reveals that climate change communication does not differ substantially between Twitter and parliamentary speeches, but across the political spectrum.", }
Twitter and parliamentary speeches are very different communication channels, but many members of parliament (MPs) make use of both. Focusing on the topic of climate change, we undertake a comparative analysis of speeches and tweets uttered by MPs in Germany in a recent six-year period. By keyword/hashtag analyses and topic modeling, we find substantial differences along party lines, with left-leaning parties discussing climate change through a crisis frame, while liberal and conservative parties try to address climate change through the lens of climate-friendly technology and practices. Only the AfD denies the need to adopt climate change mitigating measures, demeaning those concerned about a deteriorating climate as climate cult or fanatics. Our analysis reveals that climate change communication does not differ substantially between Twitter and parliamentary speeches, but across the political spectrum.
[ "Schaefer, Robin", "Abels, Christoph", "Lewandowsky, Stephan", "Stede, Manfred" ]
Communicating Climate Change: A Comparison Between Tweets and Speeches by German Members of Parliament
wassa-1.42
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.wassa-1.43.bib
https://aclanthology.org/2023.wassa-1.43/
@inproceedings{rawat-etal-2023-modelling, title = "Modelling Political Aggression on Social Media Platforms", author = "Rawat, Akash and Nafis, Nazia and Bhadane, Dnyaneshwar and Kanojia, Diptesh and Murthy, Rudra", editor = "Barnes, Jeremy and De Clercq, Orph{\'e}e and Klinger, Roman", booktitle = "Proceedings of the 13th Workshop on Computational Approaches to Subjectivity, Sentiment, {\&} Social Media Analysis", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.wassa-1.43", doi = "10.18653/v1/2023.wassa-1.43", pages = "497--510", abstract = "Recent years have seen a proliferation of aggressive social media posts, often wreaking even real-world consequences for victims. Aggressive behaviour on social media is especially evident during important sociopolitical events such as elections, communal incidents, and public protests. In this paper, we introduce a dataset in English to model political aggression. The dataset comprises public tweets collated across the time-frames of two of the most recent Indian general elections. We manually annotate this data for the task of aggression detection and analyze this data for aggressive behaviour. To benchmark the efficacy of our dataset, we perform experiments by fine-tuning pre-trained language models and comparing the results with models trained on an existing but general domain dataset. Our models consistently outperform the models trained on existing data. Our best model achieves a macro F1-score of 66.66 on our dataset. We also train models on a combined version of both datasets, achieving best macro F1-score of 92.77, on our dataset. Additionally, we create subsets of code-mixed and non-code-mixed data from the combined dataset to observe variations in results due to the Hindi-English code-mixing phenomenon. We publicly release the anonymized data, code, and models for further research.", }
Recent years have seen a proliferation of aggressive social media posts, often wreaking even real-world consequences for victims. Aggressive behaviour on social media is especially evident during important sociopolitical events such as elections, communal incidents, and public protests. In this paper, we introduce a dataset in English to model political aggression. The dataset comprises public tweets collated across the time-frames of two of the most recent Indian general elections. We manually annotate this data for the task of aggression detection and analyze this data for aggressive behaviour. To benchmark the efficacy of our dataset, we perform experiments by fine-tuning pre-trained language models and comparing the results with models trained on an existing but general domain dataset. Our models consistently outperform the models trained on existing data. Our best model achieves a macro F1-score of 66.66 on our dataset. We also train models on a combined version of both datasets, achieving best macro F1-score of 92.77, on our dataset. Additionally, we create subsets of code-mixed and non-code-mixed data from the combined dataset to observe variations in results due to the Hindi-English code-mixing phenomenon. We publicly release the anonymized data, code, and models for further research.
[ "Rawat, Akash", "Nafis, Nazia", "Bhadane, Dnyaneshwar", "Kanojia, Diptesh", "Murthy, Rudra" ]
Modelling Political Aggression on Social Media Platforms
wassa-1.43
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.wassa-1.44.bib
https://aclanthology.org/2023.wassa-1.44/
@inproceedings{barriere-etal-2023-findings, title = "Findings of {WASSA} 2023 Shared Task on Empathy, Emotion and Personality Detection in Conversation and Reactions to News Articles", author = "Barriere, Valentin and Sedoc, Jo{\~a}o and Tafreshi, Shabnam and Giorgi, Salvatore", editor = "Barnes, Jeremy and De Clercq, Orph{\'e}e and Klinger, Roman", booktitle = "Proceedings of the 13th Workshop on Computational Approaches to Subjectivity, Sentiment, {\&} Social Media Analysis", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.wassa-1.44", doi = "10.18653/v1/2023.wassa-1.44", pages = "511--525", abstract = "This paper presents the results of the WASSA 2023 shared task on predicting empathy, emotion, and personality in conversations and reactions to news articles. Participating teams were given access to a new dataset from Omitaomu et al. (2022) comprising empathic and emotional reactions to news articles. The dataset included formal and informal text, self-report data, and third-party annotations. Specifically, the dataset contained news articles (where harm is done to a person, group, or other) and crowd-sourced essays written in reaction to the article. After reacting via essays, crowd workers engaged in conversations about the news articles. Finally, the crowd workers self-reported their empathic concern and distress, personality (using the Big Five), and multi-dimensional empathy (via the Interpersonal Reactivity Index). A third-party annotated both the conversational turns (for empathy, emotion polarity, and emotion intensity) and essays (for multi-label emotions). Thus, the dataset contained outcomes (self-reported or third-party annotated) at the turn level (within conversations) and the essay level. Participation was encouraged in five tracks: (i) predicting turn-level empathy, emotion polarity, and emotion intensity in conversations, (ii) predicting state empathy and distress scores, (iii) predicting emotion categories, (iv) predicting personality, and (v) predicting multi-dimensional trait empathy. In total, 21 teams participated in the shared task. We summarize the methods and resources used by the participating teams.", }
This paper presents the results of the WASSA 2023 shared task on predicting empathy, emotion, and personality in conversations and reactions to news articles. Participating teams were given access to a new dataset from Omitaomu et al. (2022) comprising empathic and emotional reactions to news articles. The dataset included formal and informal text, self-report data, and third-party annotations. Specifically, the dataset contained news articles (where harm is done to a person, group, or other) and crowd-sourced essays written in reaction to the article. After reacting via essays, crowd workers engaged in conversations about the news articles. Finally, the crowd workers self-reported their empathic concern and distress, personality (using the Big Five), and multi-dimensional empathy (via the Interpersonal Reactivity Index). A third-party annotated both the conversational turns (for empathy, emotion polarity, and emotion intensity) and essays (for multi-label emotions). Thus, the dataset contained outcomes (self-reported or third-party annotated) at the turn level (within conversations) and the essay level. Participation was encouraged in five tracks: (i) predicting turn-level empathy, emotion polarity, and emotion intensity in conversations, (ii) predicting state empathy and distress scores, (iii) predicting emotion categories, (iv) predicting personality, and (v) predicting multi-dimensional trait empathy. In total, 21 teams participated in the shared task. We summarize the methods and resources used by the participating teams.
[ "Barriere, Valentin", "Sedoc, Jo{\\~a}o", "Tafreshi, Shabnam", "Giorgi, Salvatore" ]
Findings of WASSA 2023 Shared Task on Empathy, Emotion and Personality Detection in Conversation and Reactions to News Articles
wassa-1.44
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.wassa-1.45.bib
https://aclanthology.org/2023.wassa-1.45/
@inproceedings{wang-etal-2023-ynu, title = "{YNU}-{HPCC} at {WASSA}-2023 Shared Task 1: Large-scale Language Model with {L}o{RA} Fine-Tuning for Empathy Detection and Emotion Classification", author = "Wang, Yukun and Wang, Jin and Zhang, Xuejie", editor = "Barnes, Jeremy and De Clercq, Orph{\'e}e and Klinger, Roman", booktitle = "Proceedings of the 13th Workshop on Computational Approaches to Subjectivity, Sentiment, {\&} Social Media Analysis", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.wassa-1.45", doi = "10.18653/v1/2023.wassa-1.45", pages = "526--530", abstract = "This paper describes the system for the YNU-HPCC team in WASSA-2023 Shared Task 1: Empathy Detection and Emotion Classification. This task needs to predict the empathy, emotion, and personality of the empathic reactions. This system is mainly based on the Decoding-enhanced BERT with disentangled attention (DeBERTa) model with parameter-efficient fine-tuning (PEFT) and the Robustly Optimized BERT Pretraining Approach (RoBERTa). Low-Rank Adaptation (LoRA) fine-tuning in PEFT is used to reduce the training parameters of large language models. Moreover, back translation is introduced to augment the training dataset. This system achieved relatively good results on the competition{'}s official leaderboard. The code of this system is available here.", }
This paper describes the system for the YNU-HPCC team in WASSA-2023 Shared Task 1: Empathy Detection and Emotion Classification. This task needs to predict the empathy, emotion, and personality of the empathic reactions. This system is mainly based on the Decoding-enhanced BERT with disentangled attention (DeBERTa) model with parameter-efficient fine-tuning (PEFT) and the Robustly Optimized BERT Pretraining Approach (RoBERTa). Low-Rank Adaptation (LoRA) fine-tuning in PEFT is used to reduce the training parameters of large language models. Moreover, back translation is introduced to augment the training dataset. This system achieved relatively good results on the competition{'}s official leaderboard. The code of this system is available here.
[ "Wang, Yukun", "Wang, Jin", "Zhang, Xuejie" ]
YNU-HPCC at WASSA-2023 Shared Task 1: Large-scale Language Model with LoRA Fine-Tuning for Empathy Detection and Emotion Classification
wassa-1.45
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.wassa-1.46.bib
https://aclanthology.org/2023.wassa-1.46/
@inproceedings{patkar-etal-2023-adityapatkar, title = "{A}ditya{P}atkar at {WASSA} 2023 Empathy, Emotion, and Personality Shared Task: {R}o{BERT}a-Based Emotion Classification of Essays, Improving Performance on Imbalanced Data", author = "Patkar, Aditya and Chandrashekhar, Suraj and Kadiyala, Ram Mohan Rao", editor = "Barnes, Jeremy and De Clercq, Orph{\'e}e and Klinger, Roman", booktitle = "Proceedings of the 13th Workshop on Computational Approaches to Subjectivity, Sentiment, {\&} Social Media Analysis", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.wassa-1.46", doi = "10.18653/v1/2023.wassa-1.46", pages = "531--535", abstract = "This paper presents a study on using the RoBERTa language model for emotion classification of essays as part of the {`}Shared Task on Empathy Detection, Emotion Classification and Personality Detection in Interactions{'} organized as part of {`}WASSA 2023{'} at {`}ACL 2023{'}. Emotion classification is a challenging task in natural language processing, and imbalanced datasets further exacerbate this challenge. In this study, we explore the use of various data balancing techniques in combination with RoBERTa to improve the classification performance. We evaluate the performance of our approach (denoted by adityapatkar on Codalab) on a benchmark multi-label dataset of essays annotated with eight emotion categories, provided by the Shared Task organizers. Our results show that the proposed approach achieves the best macro F1 score in the competition{'}s training and evaluation phase. Our study provides insights into the potential of RoBERTa for handling imbalanced data in emotion classification. The results can have implications for the natural language processing tasks related to emotion classification.", }
This paper presents a study on using the RoBERTa language model for emotion classification of essays as part of the {`}Shared Task on Empathy Detection, Emotion Classification and Personality Detection in Interactions{'} organized as part of {`}WASSA 2023{'} at {`}ACL 2023{'}. Emotion classification is a challenging task in natural language processing, and imbalanced datasets further exacerbate this challenge. In this study, we explore the use of various data balancing techniques in combination with RoBERTa to improve the classification performance. We evaluate the performance of our approach (denoted by adityapatkar on Codalab) on a benchmark multi-label dataset of essays annotated with eight emotion categories, provided by the Shared Task organizers. Our results show that the proposed approach achieves the best macro F1 score in the competition{'}s training and evaluation phase. Our study provides insights into the potential of RoBERTa for handling imbalanced data in emotion classification. The results can have implications for the natural language processing tasks related to emotion classification.
[ "Patkar, Aditya", "Chandrashekhar, Suraj", "Kadiyala, Ram Mohan Rao" ]
AdityaPatkar at WASSA 2023 Empathy, Emotion, and Personality Shared Task: RoBERTa-Based Emotion Classification of Essays, Improving Performance on Imbalanced Data
wassa-1.46
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.wassa-1.47.bib
https://aclanthology.org/2023.wassa-1.47/
@inproceedings{hasan-etal-2023-curtin, title = "Curtin {OCAI} at {WASSA} 2023 Empathy, Emotion and Personality Shared Task: Demographic-Aware Prediction Using Multiple Transformers", author = "Hasan, Md Rakibul and Hossain, Md Zakir and Gedeon, Tom and Soon, Susannah and Rahman, Shafin", editor = "Barnes, Jeremy and De Clercq, Orph{\'e}e and Klinger, Roman", booktitle = "Proceedings of the 13th Workshop on Computational Approaches to Subjectivity, Sentiment, {\&} Social Media Analysis", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.wassa-1.47", doi = "10.18653/v1/2023.wassa-1.47", pages = "536--541", abstract = "The WASSA 2023 shared task on predicting empathy, emotion and other personality traits consists of essays, conversations and articles in textual form and participants{'} demographic information in numerical form. To address the tasks, our contributions include (1) converting numerical information into meaningful text information using appropriate templates, (2) summarising lengthy articles, and (3) augmenting training data by paraphrasing. To achieve these contributions, we leveraged two separate T5-based pre-trained transformers. We then fine-tuned pre-trained BERT, DistilBERT and ALBERT for predicting empathy and personality traits. We used the Optuna hyperparameter optimisation framework to fine-tune learning rates, batch sizes and weight initialisation. Our proposed system achieved its highest performance {--} a Pearson correlation coefficient of 0.750 {--} on the conversation-level empathy prediction task. The system implementation is publicly available at https://github.com/hasan-rakibul/WASSA23-empathy-emotion.", }
The WASSA 2023 shared task on predicting empathy, emotion and other personality traits consists of essays, conversations and articles in textual form and participants{'} demographic information in numerical form. To address the tasks, our contributions include (1) converting numerical information into meaningful text information using appropriate templates, (2) summarising lengthy articles, and (3) augmenting training data by paraphrasing. To achieve these contributions, we leveraged two separate T5-based pre-trained transformers. We then fine-tuned pre-trained BERT, DistilBERT and ALBERT for predicting empathy and personality traits. We used the Optuna hyperparameter optimisation framework to fine-tune learning rates, batch sizes and weight initialisation. Our proposed system achieved its highest performance {--} a Pearson correlation coefficient of 0.750 {--} on the onversation-level empathy prediction task1 . The system implementation is publicly available at https: //github.com/hasan-rakibul/WASSA23-empathy-emotion.
[ "Hasan, Md Rakibul", "Hossain, Md Zakir", "Gedeon, Tom", "Soon, Susannah", "Rahman, Shafin" ]
Curtin OCAI at WASSA 2023 Empathy, Emotion and Personality Shared Task: Demographic-Aware Prediction Using Multiple Transformers
wassa-1.47
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.wassa-1.48.bib
https://aclanthology.org/2023.wassa-1.48/
@inproceedings{srinivas-etal-2023-team, title = "{T}eam{\_}{H}awk at {WASSA} 2023 Empathy, Emotion, and Personality Shared Task: Multi-tasking Multi-encoder based transformers for Empathy and Emotion Prediction in Conversations", author = "Srinivas, Addepalli Sai and Barua, Nabarun and Pal, Santanu", editor = "Barnes, Jeremy and De Clercq, Orph{\'e}e and Klinger, Roman", booktitle = "Proceedings of the 13th Workshop on Computational Approaches to Subjectivity, Sentiment, {\&} Social Media Analysis", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.wassa-1.48", doi = "10.18653/v1/2023.wassa-1.48", pages = "542--547", abstract = "In this paper, we present Team Hawk{'}s participation in Track 1 of the WASSA 2023 shared task. The objective of the task is to understand the empathy that emerges between individuals during their conversations. In our study, we developed a multi-tasking framework that is capable of automatically assessing empathy, intensity of emotion, and polarity of emotion within participants{'} conversations. Our proposed core model extends the transformer architecture, utilizing two separate RoBERTa-based encoders to encode both the articles and conversations. Subsequently, a sequence of self-attention, position-wise feed-forward, and dense layers are employed to predict the regression scores for the three sub-tasks: empathy, intensity of emotion, and polarity of emotion. Our best model achieved average Pearson{'}s correlation of 0.7710 (Empathy: 0.7843, Emotion Polarity: 0.7917, Emotion Intensity: 0.7381) on the released development set and 0.7250 (Empathy: 0.8090, Emotion Polarity: 0.7010, Emotion Intensity: 0.6650) on the released test set. These results earned us the 3rd position in the test set evaluation phase of Track 1.", }
In this paper, we present Team Hawk{'}s participation in Track 1 of the WASSA 2023 shared task. The objective of the task is to understand the empathy that emerges between individuals during their conversations. In our study, we developed a multi-tasking framework that is capable of automatically assessing empathy, intensity of emotion, and polarity of emotion within participants{'} conversations. Our proposed core model extends the transformer architecture, utilizing two separate RoBERTa-based encoders to encode both the articles and conversations. Subsequently, a sequence of self-attention, position-wise feed-forward, and dense layers are employed to predict the regression scores for the three sub-tasks: empathy, intensity of emotion, and polarity of emotion. Our best model achieved average Pearson{'}s correlation of 0.7710 (Empathy: 0.7843, Emotion Polarity: 0.7917, Emotion Intensity: 0.7381) on the released development set and 0.7250 (Empathy: 0.8090, Emotion Polarity: 0.7010, Emotion Intensity: 0.6650) on the released test set. These results earned us the 3rd position in the test set evaluation phase of Track 1.
[ "Srinivas, Addepalli Sai", "Barua, Nabarun", "Pal, Santanu" ]
Team_Hawk at WASSA 2023 Empathy, Emotion, and Personality Shared Task: Multi-tasking Multi-encoder based transformers for Empathy and Emotion Prediction in Conversations
wassa-1.48
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.wassa-1.49.bib
https://aclanthology.org/2023.wassa-1.49/
@inproceedings{lin-etal-2023-ncuee, title = "{NCUEE}-{NLP} at {WASSA} 2023 Shared Task 1: Empathy and Emotion Prediction Using Sentiment-Enhanced {R}o{BERT}a Transformers", author = "Lin, Tzu-Mi and Chang, Jung-Ying and Lee, Lung-Hao", editor = "Barnes, Jeremy and De Clercq, Orph{\'e}e and Klinger, Roman", booktitle = "Proceedings of the 13th Workshop on Computational Approaches to Subjectivity, Sentiment, {\&} Social Media Analysis", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.wassa-1.49", doi = "10.18653/v1/2023.wassa-1.49", pages = "548--552", abstract = "This paper describes our proposed system design for the WASSA 2023 shared task 1. We propose a unified architecture of ensemble neural networks to integrate the original RoBERTa transformer with two sentiment-enhanced RoBERTa-Twitter and EmoBERTa models. For Track 1 at the speech-turn level, our best submission achieved an average Pearson correlation score of 0.7236, ranking fourth for empathy, emotion polarity and emotion intensity prediction. For Track 2 at the essay-level, our best submission obtained an average Pearson correlation score of 0.4178 for predicting empathy and distress scores, ranked first among all nine submissions.", }
This paper describes our proposed system design for the WASSA 2023 shared task 1. We propose a unified architecture of ensemble neural networks to integrate the original RoBERTa transformer with two sentiment-enhanced RoBERTa-Twitter and EmoBERTa models. For Track 1 at the speech-turn level, our best submission achieved an average Pearson correlation score of 0.7236, ranking fourth for empathy, emotion polarity and emotion intensity prediction. For Track 2 at the essay-level, our best submission obtained an average Pearson correlation score of 0.4178 for predicting empathy and distress scores, ranked first among all nine submissions.
[ "Lin, Tzu-Mi", "Chang, Jung-Ying", "Lee, Lung-Hao" ]
NCUEE-NLP at WASSA 2023 Shared Task 1: Empathy and Emotion Prediction Using Sentiment-Enhanced RoBERTa Transformers
wassa-1.49
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.wassa-1.50.bib
https://aclanthology.org/2023.wassa-1.50/
@inproceedings{gruschka-etal-2023-domain, title = "Domain Transfer for Empathy, Distress, and Personality Prediction", author = "Gruschka, Fabio and Lahnala, Allison and Welch, Charles and Flek, Lucie", editor = "Barnes, Jeremy and De Clercq, Orph{\'e}e and Klinger, Roman", booktitle = "Proceedings of the 13th Workshop on Computational Approaches to Subjectivity, Sentiment, {\&} Social Media Analysis", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.wassa-1.50", doi = "10.18653/v1/2023.wassa-1.50", pages = "553--557", abstract = "This research contributes to the task of predicting empathy and personality traits within dialogue, an important aspect of natural language processing, as part of our experimental work for the WASSA 2023 Empathy and Emotion Shared Task. For predicting empathy, emotion polarity, and emotion intensity on turns within a dialogue, we employ adapters trained on social media interactions labeled with empathy ratings in a stacked composition with the target task adapters. Furthermore, we embed demographic information to predict Interpersonal Reactivity Index (IRI) subscales and Big Five Personality Traits utilizing BERT-based models. The results from our study provide valuable insights, contributing to advancements in understanding human behavior and interaction through text. Our team ranked 2nd on the personality and empathy prediction tasks, 4th on the interpersonal reactivity index, and 6th on the conversational task.", }
This research contributes to the task of predicting empathy and personality traits within dialogue, an important aspect of natural language processing, as part of our experimental work for the WASSA 2023 Empathy and Emotion Shared Task. For predicting empathy, emotion polarity, and emotion intensity on turns within a dialogue, we employ adapters trained on social media interactions labeled with empathy ratings in a stacked composition with the target task adapters. Furthermore, we embed demographic information to predict Interpersonal Reactivity Index (IRI) subscales and Big Five Personality Traits utilizing BERT-based models. The results from our study provide valuable insights, contributing to advancements in understanding human behavior and interaction through text. Our team ranked 2nd on the personality and empathy prediction tasks, 4th on the interpersonal reactivity index, and 6th on the conversational task.
[ "Gruschka, Fabio", "Lahnala, Allison", "Welch, Charles", "Flek, Lucie" ]
Domain Transfer for Empathy, Distress, and Personality Prediction
wassa-1.50
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.wassa-1.51.bib
https://aclanthology.org/2023.wassa-1.51/
@inproceedings{paranjape-etal-2023-converge, title = "Converge at {WASSA} 2023 Empathy, Emotion and Personality Shared Task: A Transformer-based Approach for Multi-Label Emotion Classification", author = "Paranjape, Aditya and Kolhatkar, Gaurav and Patwardhan, Yash and Gokhale, Omkar and Dharmadhikari, Shweta", editor = "Barnes, Jeremy and De Clercq, Orph{\'e}e and Klinger, Roman", booktitle = "Proceedings of the 13th Workshop on Computational Approaches to Subjectivity, Sentiment, {\&} Social Media Analysis", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.wassa-1.51", doi = "10.18653/v1/2023.wassa-1.51", pages = "558--563", abstract = "In this paper, we highlight our approach for the {``}WASSA 2023 Shared-Task 1: Empathy Detection and Emotion Classification{''}. By accurately identifying emotions from textual sources of data, deep learning models can be trained to understand and interpret human emotions more effectively. The classification of emotions facilitates the creation of more emotionally intelligent systems that can better understand and respond to human emotions. We compared multiple transformer-based models for multi-label classification. Ensembling and oversampling were used to improve the performance of the system. A threshold-based voting mechanism performed on three models (Longformer, BERT, BigBird) yields the highest overall macro F1-score of 0.6605.", }
In this paper, we highlight our approach for the {``}WASSA 2023 Shared-Task 1: Empathy Detection and Emotion Classification{''}. By accurately identifying emotions from textual sources of data, deep learning models can be trained to understand and interpret human emotions more effectively. The classification of emotions facilitates the creation of more emotionally intelligent systems that can better understand and respond to human emotions. We compared multiple transformer-based models for multi-label classification. Ensembling and oversampling were used to improve the performance of the system. A threshold-based voting mechanism performed on three models (Longformer, BERT, BigBird) yields the highest overall macro F1-score of 0.6605.
[ "Paranjape, Aditya", "Kolhatkar, Gaurav", "Patwardhan, Yash", "Gokhale, Omkar", "Dharmadhikari, Shweta" ]
Converge at WASSA 2023 Empathy, Emotion and Personality Shared Task: A Transformer-based Approach for Multi-Label Emotion Classification
wassa-1.51
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.wassa-1.52.bib
https://aclanthology.org/2023.wassa-1.52/
@inproceedings{chavan-etal-2023-pict, title = "{PICT}-{CLRL} at {WASSA} 2023 Empathy, Emotion and Personality Shared Task: Empathy and Distress Detection using Ensembles of Transformer Models", author = "Chavan, Tanmay and Deshpande, Kshitij and Sonawane, Sheetal", editor = "Barnes, Jeremy and De Clercq, Orph{\'e}e and Klinger, Roman", booktitle = "Proceedings of the 13th Workshop on Computational Approaches to Subjectivity, Sentiment, {\&} Social Media Analysis", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.wassa-1.52", doi = "10.18653/v1/2023.wassa-1.52", pages = "564--568", abstract = "This paper presents our approach for the WASSA 2023 Empathy, Emotion and Personality Shared Task. Empathy and distress are human feelings that are implicitly expressed in natural discourses. Empathy and distress detection are crucial challenges in Natural Language Processing that can aid our understanding of conversations. The provided dataset consists of several long-text examples in the English language, with each example associated with a numeric score for empathy and distress. We experiment with several BERT-based models as a part of our approach. We also try various ensemble methods. Our final submission has a Pearson{'}s r score of 0.346, placing us third in the empathy and distress detection subtask.", }
This paper presents our approach for the WASSA 2023 Empathy, Emotion and Personality Shared Task. Empathy and distress are human feelings that are implicitly expressed in natural discourses. Empathy and distress detection are crucial challenges in Natural Language Processing that can aid our understanding of conversations. The provided dataset consists of several long-text examples in the English language, with each example associated with a numeric score for empathy and distress. We experiment with several BERT-based models as a part of our approach. We also try various ensemble methods. Our final submission has a Pearson{'}s r score of 0.346, placing us third in the empathy and distress detection subtask.
[ "Chavan, Tanmay", "Deshpande, Kshitij", "Sonawane, Sheetal" ]
PICT-CLRL at WASSA 2023 Empathy, Emotion and Personality Shared Task: Empathy and Distress Detection using Ensembles of Transformer Models
wassa-1.52
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.wassa-1.53.bib
https://aclanthology.org/2023.wassa-1.53/
@inproceedings{nedilko-chu-2023-team, title = "Team Bias Busters at {WASSA} 2023 Empathy, Emotion and Personality Shared Task: Emotion Detection with Generative Pretrained Transformers", author = "Nedilko, Andrew and Chu, Yi", editor = "Barnes, Jeremy and De Clercq, Orph{\'e}e and Klinger, Roman", booktitle = "Proceedings of the 13th Workshop on Computational Approaches to Subjectivity, Sentiment, {\&} Social Media Analysis", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.wassa-1.53", doi = "10.18653/v1/2023.wassa-1.53", pages = "569--573", abstract = "This paper describes the approach that we used to take part in the multi-label multi-class emotion classification as Track 3 of the WASSA 2023 Empathy, Emotion and Personality Shared Task at ACL 2023. The overall goal of this track is to build models that can predict 8 classes (7 emotions + neutral) based on short English essays written in response to news articles that talked about events perceived as harmful to people. We used OpenAI generative pretrained transformers with full-scale APIs for the emotion prediction task by fine-tuning a GPT-3 model and doing prompt engineering for zero-shot / few-shot learning with ChatGPT and GPT-4 models based on multiple experiments on the dev set. The most efficient method was fine-tuning a GPT-3 model which allowed us to beat our baseline character-based XGBoost Classifier and rank 2nd among all other participants by achieving a macro F1 score of 0.65 and a micro F1 score of 0.7 on the final blind test set.", }
This paper describes the approach that we used to take part in the multi-label multi-class emotion classification as Track 3 of the WASSA 2023 Empathy, Emotion and Personality Shared Task at ACL 2023. The overall goal of this track is to build models that can predict 8 classes (7 emotions + neutral) based on short English essays written in response to news articles that talked about events perceived as harmful to people. We used OpenAI generative pretrained transformers with full-scale APIs for the emotion prediction task by fine-tuning a GPT-3 model and doing prompt engineering for zero-shot / few-shot learning with ChatGPT and GPT-4 models based on multiple experiments on the dev set. The most efficient method was fine-tuning a GPT-3 model which allowed us to beat our baseline character-based XGBoost Classifier and rank 2nd among all other participants by achieving a macro F1 score of 0.65 and a micro F1 score of 0.7 on the final blind test set.
[ "Nedilko, Andrew", "Chu, Yi" ]
Team Bias Busters at WASSA 2023 Empathy, Emotion and Personality Shared Task: Emotion Detection with Generative Pretrained Transformers
wassa-1.53
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.wassa-1.54.bib
https://aclanthology.org/2023.wassa-1.54/
@inproceedings{lu-etal-2023-hit, title = "{HIT}-{SCIR} at {WASSA} 2023: Empathy and Emotion Analysis at the Utterance-Level and the Essay-Level", author = "Lu, Xin and Li, Zhuojun and Tong, Yanpeng and Zhao, Yanyan and Qin, Bing", editor = "Barnes, Jeremy and De Clercq, Orph{\'e}e and Klinger, Roman", booktitle = "Proceedings of the 13th Workshop on Computational Approaches to Subjectivity, Sentiment, {\&} Social Media Analysis", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.wassa-1.54", doi = "10.18653/v1/2023.wassa-1.54", pages = "574--580", abstract = "This paper introduces the participation of team HIT-SCIR to the WASSA 2023 Shared Task on Empathy Detection and Emotion Classification and Personality Detection in Interactions. We focus on three tracks: Track 1 (Empathy and Emotion Prediction in Conversations, CONV), Track 2 (Empathy Prediction, EMP) and Track 3 (Emotion Classification, EMO), and designed three different models to address them separately. For Track 1, we designed a direct fine-tuning DeBERTa model for three regression tasks at the utterance-level. For Track 2, we designed a multi-task learning RoBERTa model for two regression tasks at the essay-level. For Track 3, we designed a RoBERTa model with data augmentation for the classification task at the essay-level. Finally, our team ranked 1st in Track 1 (CONV), 5th in Track 2 (EMP) and 3rd in Track 3 (EMO) in the evaluation phase.", }
This paper introduces the participation of team HIT-SCIR to the WASSA 2023 Shared Task on Empathy Detection and Emotion Classification and Personality Detection in Interactions. We focus on three tracks: Track 1 (Empathy and Emotion Prediction in Conversations, CONV), Track 2 (Empathy Prediction, EMP) and Track 3 (Emotion Classification, EMO), and designed three different models to address them separately. For Track 1, we designed a direct fine-tuning DeBERTa model for three regression tasks at the utterance-level. For Track 2, we designed a multi-task learning RoBERTa model for two regression tasks at the essay-level. For Track 3, we designed a RoBERTa model with data augmentation for the classification task at the essay-level. Finally, our team ranked 1st in Track 1 (CONV), 5th in Track 2 (EMP) and 3rd in Track 3 (EMO) in the evaluation phase.
[ "Lu, Xin", "Li, Zhuojun", "Tong, Yanpeng", "Zhao, Yanyan", "Qin, Bing" ]
HIT-SCIR at WASSA 2023: Empathy and Emotion Analysis at the Utterance-Level and the Essay-Level
wassa-1.54
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.wassa-1.55.bib
https://aclanthology.org/2023.wassa-1.55/
@inproceedings{kumar-etal-2023-visu, title = "{VISU} at {WASSA} 2023 Shared Task: Detecting Emotions in Reaction to News Stories Using Transformers and Stacked Embeddings", author = "Kumar, Vivek and Tiwari, Prayag and Singh, Sushmita", editor = "Barnes, Jeremy and De Clercq, Orph{\'e}e and Klinger, Roman", booktitle = "Proceedings of the 13th Workshop on Computational Approaches to Subjectivity, Sentiment, {\&} Social Media Analysis", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.wassa-1.55", doi = "10.18653/v1/2023.wassa-1.55", pages = "581--586", abstract = "Our system, VISU, participated in the WASSA 2023 Shared Task (3) of Emotion Classification from essays written in reaction to news articles. Emotion detection from complex dialogues is challenging and often requires context/domain understanding. Therefore, in this research, we have focused on developing deep learning (DL) models using the combination of word embedding representations with tailored preprocessing strategies to capture the nuances of emotions expressed. Our experiments used static and contextual embeddings (individual and stacked) with Bidirectional Long short-term memory (BiLSTM) and Transformer based models. We occupied rank tenth in the emotion detection task by scoring a Macro F1-Score of 0.2717, validating the efficacy of our implemented approaches for small and imbalanced datasets with mixed categories of target emotions.", }
Our system, VISU, participated in the WASSA 2023 Shared Task (3) of Emotion Classification from essays written in reaction to news articles. Emotion detection from complex dialogues is challenging and often requires context/domain understanding. Therefore, in this research, we have focused on developing deep learning (DL) models using the combination of word embedding representations with tailored preprocessing strategies to capture the nuances of emotions expressed. Our experiments used static and contextual embeddings (individual and stacked) with Bidirectional Long short-term memory (BiLSTM) and Transformer based models. We occupied rank tenth in the emotion detection task by scoring a Macro F1-Score of 0.2717, validating the efficacy of our implemented approaches for small and imbalanced datasets with mixed categories of target emotions.
[ "Kumar, Vivek", "Tiwari, Prayag", "Singh, Sushmita" ]
VISU at WASSA 2023 Shared Task: Detecting Emotions in Reaction to News Stories Using Transformers and Stacked Embeddings
wassa-1.55
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.wassa-1.57.bib
https://aclanthology.org/2023.wassa-1.57/
@inproceedings{zhang-etal-2023-emotion, title = "Emotion classification on code-mixed text messages via soft prompt tuning", author = "Zhang, Jinghui and Yang, Dongming and Bao, Siyu and Cao, Lina and Fan, Shunguo", editor = "Barnes, Jeremy and De Clercq, Orph{\'e}e and Klinger, Roman", booktitle = "Proceedings of the 13th Workshop on Computational Approaches to Subjectivity, Sentiment, {\&} Social Media Analysis", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.wassa-1.57", doi = "10.18653/v1/2023.wassa-1.57", pages = "596--600", abstract = "Emotion classification on code-mixed text messages is challenging due to the multilingual languages and non-literal cues (i.e., emoticons). To solve these problems, we propose an innovative soft prompt tuning method, which is lightweight and effective to release potential abilities of the pre-trained language models and improve the classification results. Firstly, we transform emoticons into textual information to utilize their rich emotional information. Then, a variety of innovative templates and verbalizers are applied to promote emotion classification. Extensive experiments show that transforming emoticons and employing prompt tuning both benefit the performance. Finally, as a part of WASSA 2023, we obtain the accuracy of 0.972 in track MLEC and 0.892 in track MCEC, yielding the second place in both tracks.", }
Emotion classification on code-mixed text messages is challenging due to the multilingual languages and non-literal cues (i.e., emoticons). To solve these problems, we propose an innovative soft prompt tuning method, which is lightweight and effective to release potential abilities of the pre-trained language models and improve the classification results. Firstly, we transform emoticons into textual information to utilize their rich emotional information. Then, a variety of innovative templates and verbalizers are applied to promote emotion classification. Extensive experiments show that transforming emoticons and employing prompt tuning both benefit the performance. Finally, as a part of WASSA 2023, we obtain the accuracy of 0.972 in track MLEC and 0.892 in track MCEC, yielding the second place in both tracks.
[ "Zhang, Jinghui", "Yang, Dongming", "Bao, Siyu", "Cao, Lina", "Fan, Shunguo" ]
Emotion classification on code-mixed text messages via soft prompt tuning
wassa-1.57
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.wassa-1.58.bib
https://aclanthology.org/2023.wassa-1.58/
@inproceedings{vedula-etal-2023-precogiiith, title = "{P}recog{IIITH}@{WASSA}2023: Emotion Detection for {U}rdu-{E}nglish Code-mixed Text", author = "Vedula, Bhaskara Hanuma and Kodali, Prashant and Shrivastava, Manish and Kumaraguru, Ponnurangam", editor = "Barnes, Jeremy and De Clercq, Orph{\'e}e and Klinger, Roman", booktitle = "Proceedings of the 13th Workshop on Computational Approaches to Subjectivity, Sentiment, {\&} Social Media Analysis", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.wassa-1.58", doi = "10.18653/v1/2023.wassa-1.58", pages = "601--605", abstract = "Code-mixing refers to the phenomenon of using two or more languages interchangeably within a speech or discourse context. This practice is particularly prevalent on social media platforms, and determining the embedded affects in a code-mixed sentence remains a challenging problem. In this submission we describe our system for the WASSA 2023 Shared Task on Emotion Detection in English-Urdu code-mixed text. In our system we implement a multiclass emotion detection model with a label space of 11 emotions. Samples are code-mixed English-Urdu text, where Urdu is written in romanised form. Our submission is limited to one of the subtasks - Multi Class classification - and we leverage transformer-based Multilingual Large Language Models (MLLMs), XLM-RoBERTa and Indic-BERT. We fine-tune MLLMs on the released data splits, with and without pre-processing steps (translation to English), for classifying texts into the appropriate emotion category. Our methods did not surpass the baseline, and our submission is ranked sixth overall.", }
Code-mixing refers to the phenomenon of using two or more languages interchangeably within a speech or discourse context. This practice is particularly prevalent on social media platforms, and determining the embedded affects in a code-mixed sentence remains a challenging problem. In this submission we describe our system for the WASSA 2023 Shared Task on Emotion Detection in English-Urdu code-mixed text. In our system we implement a multiclass emotion detection model with a label space of 11 emotions. Samples are code-mixed English-Urdu text, where Urdu is written in romanised form. Our submission is limited to one of the subtasks - Multi Class classification - and we leverage transformer-based Multilingual Large Language Models (MLLMs), XLM-RoBERTa and Indic-BERT. We fine-tune MLLMs on the released data splits, with and without pre-processing steps (translation to English), for classifying texts into the appropriate emotion category. Our methods did not surpass the baseline, and our submission is ranked sixth overall.
[ "Vedula, Bhaskara Hanuma", "Kodali, Prashant", "Shrivastava, Manish", "Kumaraguru, Ponnurangam" ]
PrecogIIITH@WASSA2023: Emotion Detection for Urdu-English Code-mixed Text
wassa-1.58
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.wassa-1.59.bib
https://aclanthology.org/2023.wassa-1.59/
@inproceedings{pahwa-2023-bphigh, title = "{B}p{H}igh at {WASSA} 2023: Using Contrastive Learning to build Sentence Transformer models for Multi-Class Emotion Classification in Code-mixed {U}rdu", author = "Pahwa, Bhavish", editor = "Barnes, Jeremy and De Clercq, Orph{\'e}e and Klinger, Roman", booktitle = "Proceedings of the 13th Workshop on Computational Approaches to Subjectivity, Sentiment, {\&} Social Media Analysis", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.wassa-1.59", doi = "10.18653/v1/2023.wassa-1.59", pages = "606--610", abstract = "In this era of digital communication and social media, texting and chatting among individuals occur mainly through code-mixed or Romanized versions of the native language prevalent in the region. The presence of Romanized and code-mixed language develops the need to build NLP systems in these domains to leverage the digital content for various use cases. This paper describes our contribution to the subtask MCEC of the shared task WASSA 2023: Shared Task on Multi-Label and Multi-Class Emotion Classification on Code-Mixed Text Messages. We explore how one can build sentence transformer models for low-resource languages using unsupervised data by leveraging contrastive learning techniques described in the SIMCSE paper and using the sentence transformer developed to build classification models using the SetFit approach. Additionally, we{'}ll publish our code and models on GitHub and HuggingFace, two open-source hosting services.", }
In this era of digital communication and social media, texting and chatting among individuals occur mainly through code-mixed or Romanized versions of the native language prevalent in the region. The presence of Romanized and code-mixed language develops the need to build NLP systems in these domains to leverage the digital content for various use cases. This paper describes our contribution to the subtask MCEC of the shared task WASSA 2023: Shared Task on Multi-Label and Multi-Class Emotion Classification on Code-Mixed Text Messages. We explore how one can build sentence transformer models for low-resource languages using unsupervised data by leveraging contrastive learning techniques described in the SIMCSE paper and using the sentence transformer developed to build classification models using the SetFit approach. Additionally, we{'}ll publish our code and models on GitHub and HuggingFace, two open-source hosting services.
[ "Pahwa, Bhavish" ]
BpHigh at WASSA 2023: Using Contrastive Learning to build Sentence Transformer models for Multi-Class Emotion Classification in Code-mixed Urdu
wassa-1.59
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.wassa-1.60.bib
https://aclanthology.org/2023.wassa-1.60/
@inproceedings{ran-etal-2023-ynu, title = "{YNU}-{HPCC} at {WASSA} 2023: Using Text-Mixed Data Augmentation for Emotion Classification on Code-Mixed Text Message", author = "Ran, Xuqiao and Zhang, You and Wang, Jin and Xu, Dan and Zhang, Xuejie", editor = "Barnes, Jeremy and De Clercq, Orph{\'e}e and Klinger, Roman", booktitle = "Proceedings of the 13th Workshop on Computational Approaches to Subjectivity, Sentiment, {\&} Social Media Analysis", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.wassa-1.60", doi = "10.18653/v1/2023.wassa-1.60", pages = "611--615", abstract = "Emotion classification on code-mixed texts has been widely used in real-world applications. In this paper, we build a system that participates in the WASSA 2023 Shared Task 2 for emotion classification on code-mixed text messages from Roman Urdu and English. The main goal of the proposed method is to adopt a text-mixed data augmentation for robust code-mixed text representation. We mix texts with both multi-label (track 1) and multi-class (track 2) annotations in a unified multilingual pre-trained model, i.e., XLM-RoBERTa, for both subtasks. Our results show that the proposed text-mixed method performs competitively, ranking first in both tracks, achieving an average Macro F1 score of 0.9782 on the multi-label track and of 0.9329 on the multi-class track.", }
Emotion classification on code-mixed texts has been widely used in real-world applications. In this paper, we build a system that participates in the WASSA 2023 Shared Task 2 for emotion classification on code-mixed text messages from Roman Urdu and English. The main goal of the proposed method is to adopt a text-mixed data augmentation for robust code-mixed text representation. We mix texts with both multi-label (track 1) and multi-class (track 2) annotations in a unified multilingual pre-trained model, i.e., XLM-RoBERTa, for both subtasks. Our results show that the proposed text-mixed method performs competitively, ranking first in both tracks, achieving an average Macro F1 score of 0.9782 on the multi-label track and of 0.9329 on the multi-class track.
[ "Ran, Xuqiao", "Zhang, You", "Wang, Jin", "Xu, Dan", "Zhang, Xuejie" ]
YNU-HPCC at WASSA 2023: Using Text-Mixed Data Augmentation for Emotion Classification on Code-Mixed Text Message
wassa-1.60
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.wassa-1.61.bib
https://aclanthology.org/2023.wassa-1.61/
@inproceedings{nedilko-2023-generative, title = "Generative Pretrained Transformers for Emotion Detection in a Code-Switching Setting", author = "Nedilko, Andrew", editor = "Barnes, Jeremy and De Clercq, Orph{\'e}e and Klinger, Roman", booktitle = "Proceedings of the 13th Workshop on Computational Approaches to Subjectivity, Sentiment, {\&} Social Media Analysis", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.wassa-1.61", doi = "10.18653/v1/2023.wassa-1.61", pages = "616--620", abstract = "This paper describes the approach that we utilized to participate in the shared task for multi-label and multi-class emotion classification organized as part of WASSA 2023 at ACL 2023. The objective was to build models that can predict 11 classes of emotions, or the lack thereof (neutral class), based on code-mixed Roman Urdu and English SMS text messages. We participated in Track 2 of this task - multi-class emotion classification (MCEC). We used generative pretrained transformers, namely ChatGPT, because it has a commercially available full-scale API, for the emotion detection task by leveraging the prompt engineering and zero-shot / few-shot learning methodologies based on multiple experiments on the dev set. Although this was the first time we used a GPT model for the purpose, this approach allowed us to beat our own baseline character-based XGBClassifier, as well as the baseline model trained by the organizers (bert-base-multilingual-cased). We ranked 4th and achieved the macro F1 score of 0.7038 and the accuracy of 0.7313 on the blind test set.", }
This paper describes the approach that we utilized to participate in the shared task for multi-label and multi-class emotion classification organized as part of WASSA 2023 at ACL 2023. The objective was to build models that can predict 11 classes of emotions, or the lack thereof (neutral class), based on code-mixed Roman Urdu and English SMS text messages. We participated in Track 2 of this task - multi-class emotion classification (MCEC). We used generative pretrained transformers, namely ChatGPT, because it has a commercially available full-scale API, for the emotion detection task by leveraging the prompt engineering and zero-shot / few-shot learning methodologies based on multiple experiments on the dev set. Although this was the first time we used a GPT model for the purpose, this approach allowed us to beat our own baseline character-based XGBClassifier, as well as the baseline model trained by the organizers (bert-base-multilingual-cased). We ranked 4th and achieved the macro F1 score of 0.7038 and the accuracy of 0.7313 on the blind test set.
[ "Nedilko, Andrew" ]
Generative Pretrained Transformers for Emotion Detection in a Code-Switching Setting
wassa-1.61
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.wnu-1.1.bib
https://aclanthology.org/2023.wnu-1.1/
@inproceedings{mousavi-etal-2023-whats, title = "What{'}s New? Identifying the Unfolding of New Events in a Narrative", author = "Mousavi, Seyed Mahed and Tanaka, Shohei and Roccabruna, Gabriel and Yoshino, Koichiro and Nakamura, Satoshi and Riccardi, Giuseppe", editor = "Akoury, Nader and Clark, Elizabeth and Iyyer, Mohit and Chaturvedi, Snigdha and Brahman, Faeze and Chandu, Khyathi", booktitle = "Proceedings of the 5th Workshop on Narrative Understanding", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.wnu-1.1", doi = "10.18653/v1/2023.wnu-1.1", pages = "1--10", abstract = "Narratives include a rich source of events unfolding over time and context. Automatic understanding of these events provides a summarised comprehension of the narrative for further computation (such as reasoning). In this paper, we study the Information Status (IS) of the events and propose a novel challenging task: the automatic identification of new events in a narrative. We define an event as a triplet of subject, predicate, and object. The event is categorized as new with respect to the discourse context and whether it can be inferred through commonsense reasoning. We annotated a publicly available corpus of narratives with the new events at sentence level using human annotators. We present the annotation protocol and study the quality of the annotation and the difficulty of the task. We publish the annotated dataset, annotation materials, and machine learning baseline models for the task of new event extraction for narrative understanding.", }
Narratives include a rich source of events unfolding over time and context. Automatic understanding of these events provides a summarised comprehension of the narrative for further computation (such as reasoning). In this paper, we study the Information Status (IS) of the events and propose a novel challenging task: the automatic identification of new events in a narrative. We define an event as a triplet of subject, predicate, and object. The event is categorized as new with respect to the discourse context and whether it can be inferred through commonsense reasoning. We annotated a publicly available corpus of narratives with the new events at sentence level using human annotators. We present the annotation protocol and study the quality of the annotation and the difficulty of the task. We publish the annotated dataset, annotation materials, and machine learning baseline models for the task of new event extraction for narrative understanding.
[ "Mousavi, Seyed Mahed", "Tanaka, Shohei", "Roccabruna, Gabriel", "Yoshino, Koichiro", "Nakamura, Satoshi", "Riccardi, Giuseppe" ]
What's New? Identifying the Unfolding of New Events in a Narrative
wnu-1.1
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.wnu-1.2.bib
https://aclanthology.org/2023.wnu-1.2/
@inproceedings{sazzed-2023-emotion, title = "Emotion and Modifier in Henry Rider Haggard{'}s Novels", author = "Sazzed, Salim", editor = "Akoury, Nader and Clark, Elizabeth and Iyyer, Mohit and Chaturvedi, Snigdha and Brahman, Faeze and Chandu, Khyathi", booktitle = "Proceedings of the 5th Workshop on Narrative Understanding", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.wnu-1.2", doi = "10.18653/v1/2023.wnu-1.2", pages = "11--15", abstract = "In recent years, there has been a growing scholarly interest in employing quantitative methods to analyze literary texts, as they offer unique insights, theories, and interpretations. In light of this, the current study employs quantitative analysis to examine the fiction written by the renowned British adventure novelist, Sir Henry Rider Haggard. Specifically, the study aims to investigate the affective content and prevalence of distinctive linguistic features in six of Haggard{'}s most distinguished works. We evaluate dominant emotional states at the sentence level as well as investigate the deployment of specific linguistic features such as modifiers and deontic modals, and collocated terms. Through sentence-level emotion analysis the findings reveal a notable prevalence of {``}joy{''}-related emotions across the novels. Furthermore, the study observes that intensifiers are employed more commonly than the mitigators as modifiers and the collocated terms of modifiers exhibit high similarity across the novels. By integrating quantitative analyses with qualitative assessments, this study presents a novel perspective on the patterns of emotion and specialized grammatical features in some of Haggard{'}s most celebrated literary works.", }
In recent years, there has been a growing scholarly interest in employing quantitative methods to analyze literary texts, as they offer unique insights, theories, and interpretations. In light of this, the current study employs quantitative analysis to examine the fiction written by the renowned British adventure novelist, Sir Henry Rider Haggard. Specifically, the study aims to investigate the affective content and prevalence of distinctive linguistic features in six of Haggard{'}s most distinguished works. We evaluate dominant emotional states at the sentence level as well as investigate the deployment of specific linguistic features such as modifiers and deontic modals, and collocated terms. Through sentence-level emotion analysis the findings reveal a notable prevalence of {``}joy{''}-related emotions across the novels. Furthermore, the study observes that intensifiers are employed more commonly than the mitigators as modifiers and the collocated terms of modifiers exhibit high similarity across the novels. By integrating quantitative analyses with qualitative assessments, this study presents a novel perspective on the patterns of emotion and specialized grammatical features in some of Haggard{'}s most celebrated literary works.
[ "Sazzed, Salim" ]
Emotion and Modifier in Henry Rider Haggard's Novels
wnu-1.2
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.wnu-1.4.bib
https://aclanthology.org/2023.wnu-1.4/
@inproceedings{pawar-etal-2023-evaluation, title = "Evaluation Metrics for Depth and Flow of Knowledge in Non-fiction Narrative Texts", author = "Pawar, Sachin and Palshikar, Girish and Jain, Ankita and Singh, Mahesh and Rangarajan, Mahesh and Agarwal, Aman and Kumar, Vishal and Singh, Karan", editor = "Akoury, Nader and Clark, Elizabeth and Iyyer, Mohit and Chaturvedi, Snigdha and Brahman, Faeze and Chandu, Khyathi", booktitle = "Proceedings of the 5th Workshop on Narrative Understanding", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.wnu-1.4", doi = "10.18653/v1/2023.wnu-1.4", pages = "16--24", abstract = "In this paper, we describe the problem of automatically evaluating quality of knowledge expressed in a non-fiction narrative text. We focus on a specific type of documents where each document describes a certain technical problem and its solution. The goal is not only to evaluate the quality of knowledge in such a document, but also to automatically suggest possible improvements to the writer so that a better knowledge-rich document is produced. We propose new evaluation metrics to evaluate quality of knowledge contents as well as flow of different types of sentences. The suggestions for improvement are generated based on these metrics. The proposed metrics are completely unsupervised in nature and they are derived from a set of simple corpus statistics. We demonstrate the effectiveness of the proposed metrics as compared to other existing baseline metrics in our experiments.", }
In this paper, we describe the problem of automatically evaluating quality of knowledge expressed in a non-fiction narrative text. We focus on a specific type of documents where each document describes a certain technical problem and its solution. The goal is not only to evaluate the quality of knowledge in such a document, but also to automatically suggest possible improvements to the writer so that a better knowledge-rich document is produced. We propose new evaluation metrics to evaluate quality of knowledge contents as well as flow of different types of sentences. The suggestions for improvement are generated based on these metrics. The proposed metrics are completely unsupervised in nature and they are derived from a set of simple corpus statistics. We demonstrate the effectiveness of the proposed metrics as compared to other existing baseline metrics in our experiments.
[ "Pawar, Sachin", "Palshikar, Girish", "Jain, Ankita", "Singh, Mahesh", "Rangarajan, Mahesh", "Agarwal, Aman", "Kumar, Vishal", "Singh, Karan" ]
Evaluation Metrics for Depth and Flow of Knowledge in Non-fiction Narrative Texts
wnu-1.4
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.wnu-1.5.bib
https://aclanthology.org/2023.wnu-1.5/
@inproceedings{moreira-etal-2023-modeling, title = "Modeling Readers{'} Appreciation of Literary Narratives Through Sentiment Arcs and Semantic Profiles", author = "Moreira, Pascale and Bizzoni, Yuri and Nielbo, Kristoffer and Lassen, Ida Marie and Thomsen, Mads", editor = "Akoury, Nader and Clark, Elizabeth and Iyyer, Mohit and Chaturvedi, Snigdha and Brahman, Faeze and Chandu, Khyathi", booktitle = "Proceedings of the 5th Workshop on Narrative Understanding", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.wnu-1.5", doi = "10.18653/v1/2023.wnu-1.5", pages = "25--35", abstract = "Predicting literary quality and reader appreciation of narrative texts are highly complex challenges in quantitative and computational literary studies due to the fluid definitions of quality and the vast feature space that can be considered when modeling a literary work. This paper investigates the potential of sentiment arcs combined with topical-semantic profiling of literary narratives as indicators for their literary quality. Our experiments focus on a large corpus of 19th and 20th century English language literary fiction, using GoodReads{'} ratings as an imperfect approximation of the diverse range of reader evaluations and preferences. By leveraging a stacked ensemble of regression models, we achieve a promising performance in predicting average readers{'} scores, indicating the potential of our approach in modeling literary quality.", }
Predicting literary quality and reader appreciation of narrative texts are highly complex challenges in quantitative and computational literary studies due to the fluid definitions of quality and the vast feature space that can be considered when modeling a literary work. This paper investigates the potential of sentiment arcs combined with topical-semantic profiling of literary narratives as indicators for their literary quality. Our experiments focus on a large corpus of 19th and 20th century English language literary fiction, using GoodReads{'} ratings as an imperfect approximation of the diverse range of reader evaluations and preferences. By leveraging a stacked ensemble of regression models, we achieve a promising performance in predicting average readers{'} scores, indicating the potential of our approach in modeling literary quality.
[ "Moreira, Pascale", "Bizzoni, Yuri", "Nielbo, Kristoffer", "Lassen, Ida Marie", "Thomsen, Mads" ]
Modeling Readers' Appreciation of Literary Narratives Through Sentiment Arcs and Semantic Profiles
wnu-1.5
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.wnu-1.6.bib
https://aclanthology.org/2023.wnu-1.6/
@inproceedings{wu-etal-2023-word, title = "Word Category Arcs in Literature Across Languages and Genres", author = "Wu, Winston and Wang, Lu and Mihalcea, Rada", editor = "Akoury, Nader and Clark, Elizabeth and Iyyer, Mohit and Chaturvedi, Snigdha and Brahman, Faeze and Chandu, Khyathi", booktitle = "Proceedings of the 5th Workshop on Narrative Understanding", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.wnu-1.6", doi = "10.18653/v1/2023.wnu-1.6", pages = "36--47", abstract = "Word category arcs measure the progression of word usage across a story. Previous work on arcs has explored structural and psycholinguistic arcs through the course of narratives, but so far it has been limited to \textit{English} narratives and a narrow set of word categories covering binary emotions and cognitive processes. In this paper, we expand over previous work by (1) introducing a novel, general approach to quantitatively analyze word usage arcs for any word category through a combination of clustering and filtering; and (2) exploring narrative arcs in literature in eight different languages across multiple genres. Through multiple experiments and analyses, we quantify the nature of narratives across languages, corroborating existing work on monolingual narrative arcs as well as drawing new insights about the interpretation of arcs through correlation analyses.", }
Word category arcs measure the progression of word usage across a story. Previous work on arcs has explored structural and psycholinguistic arcs through the course of narratives, but so far it has been limited to English narratives and a narrow set of word categories covering binary emotions and cognitive processes. In this paper, we expand over previous work by (1) introducing a novel, general approach to quantitatively analyze word usage arcs for any word category through a combination of clustering and filtering; and (2) exploring narrative arcs in literature in eight different languages across multiple genres. Through multiple experiments and analyses, we quantify the nature of narratives across languages, corroborating existing work on monolingual narrative arcs as well as drawing new insights about the interpretation of arcs through correlation analyses.
[ "Wu, Winston", "Wang, Lu", "Mihalcea, Rada" ]
Word Category Arcs in Literature Across Languages and Genres
wnu-1.6
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.wnu-1.7.bib
https://aclanthology.org/2023.wnu-1.7/
@inproceedings{van-eecke-etal-2023-candide, title = "The Candide model: How narratives emerge where observations meet beliefs", author = "Van Eecke, Paul and Verheyen, Lara and Willaert, Tom and Beuls, Katrien", editor = "Akoury, Nader and Clark, Elizabeth and Iyyer, Mohit and Chaturvedi, Snigdha and Brahman, Faeze and Chandu, Khyathi", booktitle = "Proceedings of the 5th Workshop on Narrative Understanding", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.wnu-1.7", doi = "10.18653/v1/2023.wnu-1.7", pages = "48--57", abstract = "This paper presents the Candide model as a computational architecture for modelling human-like, narrative-based language understanding. The model starts from the idea that narratives emerge through the process of interpreting novel linguistic observations, such as utterances, paragraphs and texts, with respect to previously acquired knowledge and beliefs. Narratives are personal, as they are rooted in past experiences, and constitute perspectives on the world that might motivate different interpretations of the same observations. Concretely, the Candide model operationalises this idea by dynamically modelling the belief systems and background knowledge of individual agents, updating these as new linguistic observations come in, and exposing them to a logic reasoning engine that reveals the possible sources of divergent interpretations. Apart from introducing the foundational ideas, we also present a proof-of-concept implementation that demonstrates the approach through a number of illustrative examples.", }
This paper presents the Candide model as a computational architecture for modelling human-like, narrative-based language understanding. The model starts from the idea that narratives emerge through the process of interpreting novel linguistic observations, such as utterances, paragraphs and texts, with respect to previously acquired knowledge and beliefs. Narratives are personal, as they are rooted in past experiences, and constitute perspectives on the world that might motivate different interpretations of the same observations. Concretely, the Candide model operationalises this idea by dynamically modelling the belief systems and background knowledge of individual agents, updating these as new linguistic observations come in, and exposing them to a logic reasoning engine that reveals the possible sources of divergent interpretations. Apart from introducing the foundational ideas, we also present a proof-of-concept implementation that demonstrates the approach through a number of illustrative examples.
[ "Van Eecke, Paul", "Verheyen, Lara", "Willaert, Tom", "Beuls, Katrien" ]
The Candide model: How narratives emerge where observations meet beliefs
wnu-1.7
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.wnu-1.8.bib
https://aclanthology.org/2023.wnu-1.8/
@inproceedings{yamshchikov-tikhonov-2023-wrong, title = "What is Wrong with Language Models that Can Not Tell a Story?", author = "Yamshchikov, Ivan P. and Tikhonov, Alexey", editor = "Akoury, Nader and Clark, Elizabeth and Iyyer, Mohit and Chaturvedi, Snigdha and Brahman, Faeze and Chandu, Khyathi", booktitle = "Proceedings of the 5th Workshop on Narrative Understanding", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.wnu-1.8", doi = "10.18653/v1/2023.wnu-1.8", pages = "58--64", abstract = "In this position paper, we contend that advancing our understanding of narrative and the effective generation of longer, subjectively engaging texts is crucial for progress in modern Natural Language Processing (NLP) and potentially the broader field of Artificial Intelligence. We highlight the current lack of appropriate datasets, evaluation methods, and operational concepts necessary for initiating work on narrative processing.", }
In this position paper, we contend that advancing our understanding of narrative and the effective generation of longer, subjectively engaging texts is crucial for progress in modern Natural Language Processing (NLP) and potentially the broader field of Artificial Intelligence. We highlight the current lack of appropriate datasets, evaluation methods, and operational concepts necessary for initiating work on narrative processing.
[ "Yamshchikov, Ivan P.", "Tikhonov, Alexey" ]
What is Wrong with Language Models that Can Not Tell a Story?
wnu-1.8
Poster
2211.05044
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.wnu-1.12.bib
https://aclanthology.org/2023.wnu-1.12/
@inproceedings{rittichier-2023-story, title = "Story Settings: A Dataset", author = "Rittichier, Kaley", editor = "Akoury, Nader and Clark, Elizabeth and Iyyer, Mohit and Chaturvedi, Snigdha and Brahman, Faeze and Chandu, Khyathi", booktitle = "Proceedings of the 5th Workshop on Narrative Understanding", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.wnu-1.12", doi = "10.18653/v1/2023.wnu-1.12", pages = "65--72", abstract = "Understanding the settings of a given story has long been viewed as an essential component of understanding the story at large. This significance is not only underscored in academic literary analysis but also in kindergarten education. However, despite this significance, it has received relatively little attention regarding computational analyses of stories. This paper presents a dataset of 2,302 time period setting labeled works and 6,991 location setting labeled works. This dataset aims to help with Cultural Analytics of literary works but may also aid in time-period-related questions within literary Q{\&}A systems.", }
Understanding the settings of a given story has long been viewed as an essential component of understanding the story at large. This significance is not only underscored in academic literary analysis but also in kindergarten education. However, despite this significance, it has received relatively little attention regarding computational analyses of stories. This paper presents a dataset of 2,302 time period setting labeled works and 6,991 location setting labeled works. This dataset aims to help with Cultural Analytics of literary works but may also aid in time-period-related questions within literary Q&A systems.
[ "Rittichier, Kaley" ]
Story Settings: A Dataset
wnu-1.12
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.wnu-1.13.bib
https://aclanthology.org/2023.wnu-1.13/
@inproceedings{neis-etal-2023-analysis, title = "An Analysis of Reader Engagement in Literary Fiction through Eye Tracking and Linguistic Features", author = "Neis, Rose and De Langis, Karin and Kim, Zae Myung and Kang, Dongyeop", editor = "Akoury, Nader and Clark, Elizabeth and Iyyer, Mohit and Chaturvedi, Snigdha and Brahman, Faeze and Chandu, Khyathi", booktitle = "Proceedings of the 5th Workshop on Narrative Understanding", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.wnu-1.13", doi = "10.18653/v1/2023.wnu-1.13", pages = "73--81", abstract = "Capturing readers{'} engagement in fiction is a challenging but important aspect of narrative understanding. In this study, we collected 23 readers{'} reactions to 2 short stories through eye tracking, sentence-level annotations, and an overall engagement scale survey. We analyzed the significance of various qualities of the text in predicting how engaging a reader is likely to find it. As enjoyment of fiction is highly contextual, we also investigated individual differences in our data. Furthering our understanding of what captivates readers in fiction will help better inform models used in creative narrative generation and collaborative writing tools.", }
Capturing readers{'} engagement in fiction is a challenging but important aspect of narrative understanding. In this study, we collected 23 readers{'} reactions to 2 short stories through eye tracking, sentence-level annotations, and an overall engagement scale survey. We analyzed the significance of various qualities of the text in predicting how engaging a reader is likely to find it. As enjoyment of fiction is highly contextual, we also investigated individual differences in our data. Furthering our understanding of what captivates readers in fiction will help better inform models used in creative narrative generation and collaborative writing tools.
[ "Neis, Rose", "De Langis, Karin", "Kim, Zae Myung", "Kang, Dongyeop" ]
An Analysis of Reader Engagement in Literary Fiction through Eye Tracking and Linguistic Features
wnu-1.13
Poster
2306.04043
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.wnu-1.14.bib
https://aclanthology.org/2023.wnu-1.14/
@inproceedings{edlin-reiss-2023-identifying, title = "Identifying Visual Depictions of Animate Entities in Narrative Comics: An Annotation Study", author = "Edlin, Lauren and Reiss, Joshua", editor = "Akoury, Nader and Clark, Elizabeth and Iyyer, Mohit and Chaturvedi, Snigdha and Brahman, Faeze and Chandu, Khyathi", booktitle = "Proceedings of the 5th Workshop on Narrative Understanding", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.wnu-1.14", doi = "10.18653/v1/2023.wnu-1.14", pages = "82--91", abstract = "Animate entities in narrative comics stories are expressed through a number of visual representations across panels. Identifying these entities is necessary for recognizing characters, analysing narrative affordances unique to comics, and integrating these with linguistic reference annotation; however, an annotation process for animate entity identification has not received adequate attention. This research explores methods for identifying animate entities visually in comics using annotation experiments. Two rounds of inter-annotator agreement experiments are run: the first asks annotators to outline areas on comic pages using a Polygon segmentation tool, and the second prompts annotators to assign each outlined entity{'}s animacy type to derive a quantitative measure of agreement. The first experiment results show that Polygon-based outlines successfully produce a qualitative measure of agreement; the second experiment supports that animacy status is best conceptualised as a graded, rather than binary, concept.", }
Animate entities in narrative comics stories are expressed through a number of visual representations across panels. Identifying these entities is necessary for recognizing characters, analysing narrative affordances unique to comics, and integrating these with linguistic reference annotation; however, an annotation process for animate entity identification has not received adequate attention. This research explores methods for identifying animate entities visually in comics using annotation experiments. Two rounds of inter-annotator agreement experiments are run: the first asks annotators to outline areas on comic pages using a Polygon segmentation tool, and the second prompts annotators to assign each outlined entity{'}s animacy type to derive a quantitative measure of agreement. The first experiment results show that Polygon-based outlines successfully produce a qualitative measure of agreement; the second experiment supports that animacy status is best conceptualised as a graded, rather than binary, concept.
[ "Edlin, Lauren", "Reiss, Joshua" ]
Identifying Visual Depictions of Animate Entities in Narrative Comics: An Annotation Study
wnu-1.14
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.wnu-1.15.bib
https://aclanthology.org/2023.wnu-1.15/
@inproceedings{sui-etal-2023-mrs, title = "Mrs. Dalloway Said She Would Segment the Chapters Herself", author = "Sui, Peiqi and Wang, Lin and Hamilton, Sil and Ries, Thorsten and Wong, Kelvin and Wong, Stephen", editor = "Akoury, Nader and Clark, Elizabeth and Iyyer, Mohit and Chaturvedi, Snigdha and Brahman, Faeze and Chandu, Khyathi", booktitle = "Proceedings of the 5th Workshop on Narrative Understanding", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.wnu-1.15", doi = "10.18653/v1/2023.wnu-1.15", pages = "92--105", abstract = "This paper proposes a sentiment-centric pipeline to perform unsupervised plot extraction on non-linear novels like Virginia Woolf{'}s Mrs. Dalloway, a novel widely considered to be {``}plotless{''}. Combining transformer-based sentiment analysis models with statistical testing, we model sentiment{'}s rate-of-change and correspondingly segment the novel into emotionally self-contained units qualitatively evaluated to be meaningful surrogate pseudo-chapters. We validate our findings by evaluating our pipeline as a fully unsupervised text segmentation model, achieving an F-1 score of 0.643 (regional) and 0.214 (exact) in chapter break prediction on a validation set of linear novels with existing chapter structures. In addition, we observe notable differences between the distributions of predicted chapter lengths in linear and non-linear fictional narratives, with the latter exhibiting significantly greater variability. Our results hold significance for narrative researchers appraising methods for extracting plots from non-linear novels.", }
This paper proposes a sentiment-centric pipeline to perform unsupervised plot extraction on non-linear novels like Virginia Woolf{'}s Mrs. Dalloway, a novel widely considered to be {``}plotless{''}. Combining transformer-based sentiment analysis models with statistical testing, we model sentiment{'}s rate-of-change and correspondingly segment the novel into emotionally self-contained units qualitatively evaluated to be meaningful surrogate pseudo-chapters. We validate our findings by evaluating our pipeline as a fully unsupervised text segmentation model, achieving an F-1 score of 0.643 (regional) and 0.214 (exact) in chapter break prediction on a validation set of linear novels with existing chapter structures. In addition, we observe notable differences between the distributions of predicted chapter lengths in linear and non-linear fictional narratives, with the latter exhibiting significantly greater variability. Our results hold significance for narrative researchers appraising methods for extracting plots from non-linear novels.
[ "Sui, Peiqi", "Wang, Lin", "Hamilton, Sil", "Ries, Thorsten", "Wong, Kelvin", "Wong, Stephen" ]
Mrs. Dalloway Said She Would Segment the Chapters Herself
wnu-1.15
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.wnu-1.16.bib
https://aclanthology.org/2023.wnu-1.16/
@inproceedings{wu-smith-2023-composition, title = "Composition and Deformance: Measuring Imageability with a Text-to-Image Model", author = "Wu, Si and Smith, David", editor = "Akoury, Nader and Clark, Elizabeth and Iyyer, Mohit and Chaturvedi, Snigdha and Brahman, Faeze and Chandu, Khyathi", booktitle = "Proceedings of the 5th Workshop on Narrative Understanding", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.wnu-1.16", doi = "10.18653/v1/2023.wnu-1.16", pages = "106--117", abstract = "Although psycholinguists and psychologists have long studied the tendency of linguistic strings to evoke mental images in hearers or readers, most computational studies have applied this concept of imageability only to isolated words. Using recent developments in text-to-image generation models, such as DALLE mini, we propose computational methods that use generated images to measure the imageability of both single English words and connected text. We sample text prompts for image generation from three corpora: human-generated image captions, news article sentences, and poem lines. We subject these prompts to different deformances to examine the model{'}s ability to detect changes in imageability caused by compositional change. We find high correlation between the proposed computational measures of imageability and human judgments of individual words. We also find the proposed measures more consistently respond to changes in compositionality than baseline approaches. We discuss possible effects of model training and implications for the study of compositionality in text-to-image models.", }
Although psycholinguists and psychologists have long studied the tendency of linguistic strings to evoke mental images in hearers or readers, most computational studies have applied this concept of imageability only to isolated words. Using recent developments in text-to-image generation models, such as DALLE mini, we propose computational methods that use generated images to measure the imageability of both single English words and connected text. We sample text prompts for image generation from three corpora: human-generated image captions, news article sentences, and poem lines. We subject these prompts to different deformances to examine the model{'}s ability to detect changes in imageability caused by compositional change. We find high correlation between the proposed computational measures of imageability and human judgments of individual words. We also find the proposed measures more consistently respond to changes in compositionality than baseline approaches. We discuss possible effects of model training and implications for the study of compositionality in text-to-image models.
[ "Wu, Si", "Smith, David" ]
Composition and Deformance: Measuring Imageability with a Text-to-Image Model
wnu-1.16
Poster
2306.03168
[ "https://github.com/swsiwu/composition_and_deformance" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.wnu-1.19.bib
https://aclanthology.org/2023.wnu-1.19/
@inproceedings{hatzel-biemann-2023-narrative, title = "Narrative Cloze as a Training Objective: Towards Modeling Stories Using Narrative Chain Embeddings", author = "Hatzel, Hans Ole and Biemann, Chris", editor = "Akoury, Nader and Clark, Elizabeth and Iyyer, Mohit and Chaturvedi, Snigdha and Brahman, Faeze and Chandu, Khyathi", booktitle = "Proceedings of the 5th Workshop on Narrative Understanding", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.wnu-1.19", doi = "10.18653/v1/2023.wnu-1.19", pages = "118--127", abstract = "We present a novel approach to modeling narratives using narrative chain embeddings. A new dataset of narrative chains extracted from German news texts is presented. With neural methods, we produce models for both German and English that achieve state-of-the-art performance on the Multiple Choice Narrative Cloze task. Subsequently, we perform an extrinsic evaluation of the embeddings our models produce and show that they perform rather poorly in identifying narratively similar texts. We explore some of the reasons for this underperformance and discuss the upsides of our approach. We provide an outlook on alternative ways to model narratives, as well as techniques for evaluating such models.", }
We present a novel approach to modeling narratives using narrative chain embeddings. A new dataset of narrative chains extracted from German news texts is presented. With neural methods, we produce models for both German and English that achieve state-of-the-art performance on the Multiple Choice Narrative Cloze task. Subsequently, we perform an extrinsic evaluation of the embeddings our models produce and show that they perform rather poorly in identifying narratively similar texts. We explore some of the reasons for this underperformance and discuss the upsides of our approach. We provide an outlook on alternative ways to model narratives, as well as techniques for evaluating such models.
[ "Hatzel, Hans Ole", "Biemann, Chris" ]
Narrative Cloze as a Training Objective: Towards Modeling Stories Using Narrative Chain Embeddings
wnu-1.19
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.woah-1.1.bib
https://aclanthology.org/2023.woah-1.1/
@inproceedings{yoder-etal-2023-identity, title = "Identity Construction in a Misogynist Incels Forum", author = "Yoder, Michael and Perry, Chloe and Brown, David and Carley, Kathleen and Pruden, Meredith", editor = {Chung, Yi-ling and R{{\textbackslash}"ottger}, Paul and Nozza, Debora and Talat, Zeerak and Mostafazadeh Davani, Aida}, booktitle = "The 7th Workshop on Online Abuse and Harms (WOAH)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.woah-1.1", doi = "10.18653/v1/2023.woah-1.1", pages = "1--13", abstract = "Online communities of involuntary celibates (incels) are a prominent source of misogynist hate speech. In this paper, we use quantitative text and network analysis approaches to examine how identity groups are discussed on incels.is, the largest black-pilled incels forum. We find that this community produces a wide range of novel identity terms and, while terms for women are most common, mentions of other minoritized identities are increasing. An analysis of the associations made with identity groups suggests an essentialist ideology where physical appearance, as well as gender and racial hierarchies, determine human value. We discuss implications for research into automated misogynist hate speech detection.", }
Online communities of involuntary celibates (incels) are a prominent source of misogynist hate speech. In this paper, we use quantitative text and network analysis approaches to examine how identity groups are discussed on incels.is, the largest black-pilled incels forum. We find that this community produces a wide range of novel identity terms and, while terms for women are most common, mentions of other minoritized identities are increasing. An analysis of the associations made with identity groups suggests an essentialist ideology where physical appearance, as well as gender and racial hierarchies, determine human value. We discuss implications for research into automated misogynist hate speech detection.
[ "Yoder, Michael", "Perry, Chloe", "Brown, David", "Carley, Kathleen", "Pruden, Meredith" ]
Identity Construction in a Misogynist Incels Forum
woah-1.1
Poster
2306.15745
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.woah-1.2.bib
https://aclanthology.org/2023.woah-1.2/
@inproceedings{chernodub-etal-2023-detexd, title = "{D}e{T}ex{D}: A Benchmark Dataset for Delicate Text Detection", author = "Yavnyi, Serhii and Sliusarenko, Oleksii and Razzaghi, Jade and Nahorna, Olena and Mo, Yichen and Hovakimyan, Knar and Chernodub, Artem", editor = {Chung, Yi-ling and R{{\textbackslash}"ottger}, Paul and Nozza, Debora and Talat, Zeerak and Mostafazadeh Davani, Aida}, booktitle = "The 7th Workshop on Online Abuse and Harms (WOAH)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.woah-1.2", doi = "10.18653/v1/2023.woah-1.2", pages = "14--28", abstract = "Over the past few years, much research has been conducted to identify and regulate toxic language. However, few studies have addressed a broader range of sensitive texts that are not necessarily overtly toxic. In this paper, we introduce and define a new category of sensitive text called {``}delicate text.{''} We provide the taxonomy of delicate text and present a detailed annotation scheme. We annotate DeTexD, the first benchmark dataset for delicate text detection. The significance of the difference in the definitions is highlighted by the relative performance deltas between models trained on each definition and corpus and evaluated on the other. We make publicly available the DeTexD Benchmark dataset, annotation guidelines, and baseline model for delicate text detection.", }
Over the past few years, much research has been conducted to identify and regulate toxic language. However, few studies have addressed a broader range of sensitive texts that are not necessarily overtly toxic. In this paper, we introduce and define a new category of sensitive text called {``}delicate text.{''} We provide the taxonomy of delicate text and present a detailed annotation scheme. We annotate DeTexD, the first benchmark dataset for delicate text detection. The significance of the difference in the definitions is highlighted by the relative performance deltas between models trained on each definition and corpus and evaluated on the other. We make publicly available the DeTexD Benchmark dataset, annotation guidelines, and baseline model for delicate text detection.
[ "Yavnyi, Serhii", "Sliusarenko, Oleksii", "Razzaghi, Jade", "Nahorna, Olena", "Mo, Yichen", "Hovakimyan, Knar", "Chernodub, Artem" ]
DeTexD: A Benchmark Dataset for Delicate Text Detection
woah-1.2
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.woah-1.3.bib
https://aclanthology.org/2023.woah-1.3/
@inproceedings{nafis-etal-2023-towards, title = "Towards Safer Communities: Detecting Aggression and Offensive Language in Code-Mixed Tweets to Combat Cyberbullying", author = "Nafis, Nazia and Kanojia, Diptesh and Saini, Naveen and Murthy, Rudra", editor = {Chung, Yi-ling and R{{\textbackslash}"ottger}, Paul and Nozza, Debora and Talat, Zeerak and Mostafazadeh Davani, Aida}, booktitle = "The 7th Workshop on Online Abuse and Harms (WOAH)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.woah-1.3", doi = "10.18653/v1/2023.woah-1.3", pages = "29--41", abstract = "Cyberbullying is a serious societal issue widespread on various channels and platforms, particularly social networking sites. Such platforms have proven to be exceptionally fertile grounds for such behavior. The dearth of high-quality training data for multilingual and low-resource scenarios, data that can accurately capture the nuances of social media conversations, often poses a roadblock to this task. This paper attempts to tackle cyberbullying, specifically its two most common manifestations - aggression and offensiveness. We present a novel, manually annotated dataset of a total of 10,000 English and Hindi-English code-mixed tweets, manually annotated for aggression detection and offensive language detection tasks. Our annotations are supported by inter-annotator agreement scores of 0.67 and 0.74 for the two tasks, indicating substantial agreement. We perform comprehensive fine-tuning of pre-trained language models (PTLMs) using this dataset to check its efficacy. Our challenging test sets show that the best models achieve macro F1-scores of 67.87 and 65.45 on the two tasks, respectively. Further, we perform cross-dataset transfer learning to benchmark our dataset against existing aggression and offensive language datasets. We also present a detailed quantitative and qualitative analysis of errors in prediction, and with this paper, we publicly release the novel dataset, code, and models.", }
Cyberbullying is a serious societal issue widespread on various channels and platforms, particularly social networking sites. Such platforms have proven to be exceptionally fertile grounds for such behavior. The dearth of high-quality training data for multilingual and low-resource scenarios, data that can accurately capture the nuances of social media conversations, often poses a roadblock to this task. This paper attempts to tackle cyberbullying, specifically its two most common manifestations - aggression and offensiveness. We present a novel, manually annotated dataset of a total of 10,000 English and Hindi-English code-mixed tweets, manually annotated for aggression detection and offensive language detection tasks. Our annotations are supported by inter-annotator agreement scores of 0.67 and 0.74 for the two tasks, indicating substantial agreement. We perform comprehensive fine-tuning of pre-trained language models (PTLMs) using this dataset to check its efficacy. Our challenging test sets show that the best models achieve macro F1-scores of 67.87 and 65.45 on the two tasks, respectively. Further, we perform cross-dataset transfer learning to benchmark our dataset against existing aggression and offensive language datasets. We also present a detailed quantitative and qualitative analysis of errors in prediction, and with this paper, we publicly release the novel dataset, code, and models.
[ "Nafis, Nazia", "Kanojia, Diptesh", "Saini, Naveen", "Murthy, Rudra" ]
Towards Safer Communities: Detecting Aggression and Offensive Language in Code-Mixed Tweets to Combat Cyberbullying
woah-1.3
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.woah-1.4.bib
https://aclanthology.org/2023.woah-1.4/
@inproceedings{jin-etal-2023-towards, title = "Towards Weakly-Supervised Hate Speech Classification Across Datasets", author = "Jin, Yiping and Wanner, Leo and Kadam, Vishakha and Shvets, Alexander", editor = {Chung, Yi-ling and R{{\textbackslash}"ottger}, Paul and Nozza, Debora and Talat, Zeerak and Mostafazadeh Davani, Aida}, booktitle = "The 7th Workshop on Online Abuse and Harms (WOAH)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.woah-1.4", doi = "10.18653/v1/2023.woah-1.4", pages = "42--59", abstract = "As pointed out by several scholars, current research on hate speech (HS) recognition is characterized by unsystematic data creation strategies and diverging annotation schemata. Subsequently, supervised-learning models tend to generalize poorly to datasets they were not trained on, and the performance of the models trained on datasets labeled using different HS taxonomies cannot be compared. To ease this problem, we propose applying extremely weak supervision that only relies on the class name rather than on class samples from the annotated data. We demonstrate the effectiveness of a state-of-the-art weakly-supervised text classification model in various in-dataset and cross-dataset settings. Furthermore, we conduct an in-depth quantitative and qualitative analysis of the source of poor generalizability of HS classification models.", }
As pointed out by several scholars, current research on hate speech (HS) recognition is characterized by unsystematic data creation strategies and diverging annotation schemata. Subsequently, supervised-learning models tend to generalize poorly to datasets they were not trained on, and the performance of the models trained on datasets labeled using different HS taxonomies cannot be compared. To ease this problem, we propose applying extremely weak supervision that only relies on the class name rather than on class samples from the annotated data. We demonstrate the effectiveness of a state-of-the-art weakly-supervised text classification model in various in-dataset and cross-dataset settings. Furthermore, we conduct an in-depth quantitative and qualitative analysis of the source of poor generalizability of HS classification models.
[ "Jin, Yiping", "Wanner, Leo", "Kadam, Vishakha", "Shvets, Alex", "er" ]
Towards Weakly-Supervised Hate Speech Classification Across Datasets
woah-1.4
Poster
2305.02637
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.woah-1.6.bib
https://aclanthology.org/2023.woah-1.6/
@inproceedings{plaza-del-arco-etal-2023-respectful, title = "Respectful or Toxic? Using Zero-Shot Learning with Language Models to Detect Hate Speech", author = "Plaza-del-arco, Flor Miriam and Nozza, Debora and Hovy, Dirk", editor = {Chung, Yi-ling and R{{\textbackslash}"ottger}, Paul and Nozza, Debora and Talat, Zeerak and Mostafazadeh Davani, Aida}, booktitle = "The 7th Workshop on Online Abuse and Harms (WOAH)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.woah-1.6", doi = "10.18653/v1/2023.woah-1.6", pages = "60--68", abstract = "Hate speech detection faces two significant challenges: 1) the limited availability of labeled data and 2) the high variability of hate speech across different contexts and languages. Prompting brings a ray of hope to these challenges. It allows injecting a model with task-specific knowledge without relying on labeled data. This paper explores zero-shot learning with prompting for hate speech detection. We investigate how well zero-shot learning can detect hate speech in 3 languages with limited labeled data. We experiment with various large language models and verbalizers on 8 benchmark datasets. Our findings highlight the impact of prompt selection on the results. They also suggest that prompting, specifically with recent large language models, can achieve performance comparable to and surpass fine-tuned models, making it a promising alternative for under-resourced languages. Our findings highlight the potential of prompting for hate speech detection and show how both the prompt and the model have a significant impact on achieving more accurate predictions in this task.", }
Hate speech detection faces two significant challenges: 1) the limited availability of labeled data and 2) the high variability of hate speech across different contexts and languages. Prompting brings a ray of hope to these challenges. It allows injecting a model with task-specific knowledge without relying on labeled data. This paper explores zero-shot learning with prompting for hate speech detection. We investigate how well zero-shot learning can detect hate speech in 3 languages with limited labeled data. We experiment with various large language models and verbalizers on 8 benchmark datasets. Our findings highlight the impact of prompt selection on the results. They also suggest that prompting, specifically with recent large language models, can achieve performance comparable to and surpass fine-tuned models, making it a promising alternative for under-resourced languages. Our findings highlight the potential of prompting for hate speech detection and show how both the prompt and the model have a significant impact on achieving more accurate predictions in this task.
[ "Plaza-del-arco, Flor Miriam", "Nozza, Debora", "Hovy, Dirk" ]
Respectful or Toxic? Using Zero-Shot Learning with Language Models to Detect Hate Speech
woah-1.6
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.woah-1.7.bib
https://aclanthology.org/2023.woah-1.7/
@inproceedings{caselli-van-der-veen-2023-benchmarking, title = "Benchmarking Offensive and Abusive Language in {D}utch Tweets", author = "Caselli, Tommaso and Van Der Veen, Hylke", editor = {Chung, Yi-ling and R{{\textbackslash}"ottger}, Paul and Nozza, Debora and Talat, Zeerak and Mostafazadeh Davani, Aida}, booktitle = "The 7th Workshop on Online Abuse and Harms (WOAH)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.woah-1.7", doi = "10.18653/v1/2023.woah-1.7", pages = "69--84", abstract = "We present an extensive evaluation of different fine-tuned models to detect instances of offensive and abusive language in Dutch across three benchmarks: a standard held-out test, a task-agnostic functional benchmark, and a dynamic test set. We also investigate the use of data cartography to identify high quality training data. Our results show a relatively good quality of the manually annotated data used to train the models while highlighting some critical weaknesses. We have also found a good portability of trained models along the same language phenomena. As for the data cartography, we have found a positive impact only on the functional benchmark and when selecting data per annotated dimension rather than using the entire training material.", }
We present an extensive evaluation of different fine-tuned models to detect instances of offensive and abusive language in Dutch across three benchmarks: a standard held-out test, a task-agnostic functional benchmark, and a dynamic test set. We also investigate the use of data cartography to identify high quality training data. Our results show a relatively good quality of the manually annotated data used to train the models while highlighting some critical weaknesses. We have also found a good portability of trained models along the same language phenomena. As for the data cartography, we have found a positive impact only on the functional benchmark and when selecting data per annotated dimension rather than using the entire training material.
[ "Caselli, Tommaso", "Van Der Veen, Hylke" ]
Benchmarking Offensive and Abusive Language in Dutch Tweets
woah-1.7
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.woah-1.8.bib
https://aclanthology.org/2023.woah-1.8/
@inproceedings{amironesei-diaz-2023-relationality, title = "Relationality and Offensive Speech: A Research Agenda", author = "Amironesei, Razvan and Diaz, Mark", editor = {Chung, Yi-ling and R{{\textbackslash}"ottger}, Paul and Nozza, Debora and Talat, Zeerak and Mostafazadeh Davani, Aida}, booktitle = "The 7th Workshop on Online Abuse and Harms (WOAH)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.woah-1.8", doi = "10.18653/v1/2023.woah-1.8", pages = "85--95", abstract = "We draw from the framework of relationality as a pathway for modeling social relations to address gaps in text classification, generally, and offensive language classification, specifically. We use minoritized language, such as queer speech, to motivate a need for understanding and modeling social relations{--}both among individuals and among their social communities. We then point to socio-ethical style as a research area for inferring and measuring social relations as well as propose additional questions to structure future research on operationalizing social context.", }
We draw from the framework of relationality as a pathway for modeling social relations to address gaps in text classification, generally, and offensive language classification, specifically. We use minoritized language, such as queer speech, to motivate a need for understanding and modeling social relations{--}both among individuals and among their social communities. We then point to socio-ethical style as a research area for inferring and measuring social relations as well as propose additional questions to structure future research on operationalizing social context.
[ "Amironesei, Razvan", "Diaz, Mark" ]
Relationality and Offensive Speech: A Research Agenda
woah-1.8
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.woah-1.9.bib
https://aclanthology.org/2023.woah-1.9/
@inproceedings{khondaker-etal-2023-cross, title = "Cross-Platform and Cross-Domain Abusive Language Detection with Supervised Contrastive Learning", author = "Khondaker, Md Tawkat Islam and Abdul-mageed, Muhammad and Lakshmanan, V.s., Laks", editor = {Chung, Yi-ling and R{{\textbackslash}"ottger}, Paul and Nozza, Debora and Talat, Zeerak and Mostafazadeh Davani, Aida}, booktitle = "The 7th Workshop on Online Abuse and Harms (WOAH)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.woah-1.9", doi = "10.18653/v1/2023.woah-1.9", pages = "96--112", abstract = "The prevalence of abusive language on different online platforms has been a major concern that raises the need for automated cross-platform abusive language detection. However, prior works focus on concatenating data from multiple platforms, inherently adopting Empirical Risk Minimization (ERM) method. In this work, we address this challenge from the perspective of domain generalization objective. We design SCL-Fish, a supervised contrastive learning integrated meta-learning algorithm to detect abusive language on unseen platforms. Our experimental analysis shows that SCL-Fish achieves better performance over ERM and the existing state-of-the-art models. We also show that SCL-Fish is data-efficient and achieves comparable performance with the large-scale pre-trained models upon finetuning for the abusive language detection task.", }
The prevalence of abusive language on different online platforms has been a major concern that raises the need for automated cross-platform abusive language detection. However, prior works focus on concatenating data from multiple platforms, inherently adopting Empirical Risk Minimization (ERM) method. In this work, we address this challenge from the perspective of domain generalization objective. We design SCL-Fish, a supervised contrastive learning integrated meta-learning algorithm to detect abusive language on unseen platforms. Our experimental analysis shows that SCL-Fish achieves better performance over ERM and the existing state-of-the-art models. We also show that SCL-Fish is data-efficient and achieves comparable performance with the large-scale pre-trained models upon finetuning for the abusive language detection task.
[ "Khondaker, Md Tawkat Islam", "Abdul-mageed, Muhammad", "Lakshmanan, V.s., Laks" ]
Cross-Platform and Cross-Domain Abusive Language Detection with Supervised Contrastive Learning
woah-1.9
Poster
2211.06452
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.woah-1.12.bib
https://aclanthology.org/2023.woah-1.12/
@inproceedings{kiritchenko-etal-2023-aporophobia, title = "Aporophobia: An Overlooked Type of Toxic Language Targeting the Poor", author = "Kiritchenko, Svetlana and Curto Rex, Georgina and Nejadgholi, Isar and Fraser, Kathleen C.", editor = {Chung, Yi-ling and R{{\textbackslash}"ottger}, Paul and Nozza, Debora and Talat, Zeerak and Mostafazadeh Davani, Aida}, booktitle = "The 7th Workshop on Online Abuse and Harms (WOAH)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.woah-1.12", doi = "10.18653/v1/2023.woah-1.12", pages = "113--125", abstract = "While many types of hate speech and online toxicity have been the focus of extensive research in NLP, toxic language stigmatizing poor people has been mostly disregarded. Yet, aporophobia, a social bias against the poor, is a common phenomenon online, which can be psychologically damaging as well as hindering poverty reduction policy measures. We demonstrate that aporophobic attitudes are indeed present in social media and argue that the existing NLP datasets and models are inadequate to effectively address this problem. Efforts toward designing specialized resources and novel socio-technical mechanisms for confronting aporophobia are needed.", }
While many types of hate speech and online toxicity have been the focus of extensive research in NLP, toxic language stigmatizing poor people has been mostly disregarded. Yet, aporophobia, a social bias against the poor, is a common phenomenon online, which can be psychologically damaging as well as hindering poverty reduction policy measures. We demonstrate that aporophobic attitudes are indeed present in social media and argue that the existing NLP datasets and models are inadequate to effectively address this problem. Efforts toward designing specialized resources and novel socio-technical mechanisms for confronting aporophobia are needed.
[ "Kiritchenko, Svetlana", "Curto Rex, Georgina", "Nejadgholi, Isar", "Fraser, Kathleen C." ]
Aporophobia: An Overlooked Type of Toxic Language Targeting the Poor
woah-1.12
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.woah-1.13.bib
https://aclanthology.org/2023.woah-1.13/
@inproceedings{sood-dandapat-2023-problematic, title = "Problematic Webpage Identification: A Trilogy of Hatespeech, Search Engines and {GPT}", author = "Sood, Ojasvin and Dandapat, Sandipan", editor = {Chung, Yi-ling and R{{\textbackslash}"ottger}, Paul and Nozza, Debora and Talat, Zeerak and Mostafazadeh Davani, Aida}, booktitle = "The 7th Workshop on Online Abuse and Harms (WOAH)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.woah-1.13", doi = "10.18653/v1/2023.woah-1.13", pages = "126--137", abstract = "In this paper, we introduce a fine-tuned transformer-based model focused on problematic webpage classification to identify webpages promoting hate and violence of various forms. Due to the unavailability of labelled problematic webpage data, first we propose a novel webpage data collection strategy which leverages well-studied short-text hate speech datasets. We have introduced a custom GPT-4 few-shot prompt annotation scheme taking various webpage features to label the prohibitively expensive webpage annotation task. The resulting annotated data is used to build our problematic webpage classification model. We report the accuracy (87.6{\%} F1-score) of our webpage classification model and conduct a detailed comparison of it against other state-of-the-art hate speech classification models on the problematic webpage identification task. Finally, we have showcased the importance of various webpage features in identifying a problematic webpage.", }
In this paper, we introduce a fine-tuned transformer-based model focused on problematic webpage classification to identify webpages promoting hate and violence of various forms. Due to the unavailability of labelled problematic webpage data, we first propose a novel webpage data collection strategy which leverages well-studied short-text hate speech datasets. We introduce a custom GPT-4 few-shot prompt annotation scheme that uses various webpage features to carry out the otherwise prohibitively expensive webpage annotation task. The resulting annotated data is used to build our problematic webpage classification model. We report the performance (87.6{\%} F1-score) of our webpage classification model and conduct a detailed comparison against other state-of-the-art hate speech classification models on the problematic webpage identification task. Finally, we showcase the importance of various webpage features in identifying a problematic webpage.
[ "Sood, Ojasvin", "Dandapat, Sandipan" ]
Problematic Webpage Identification: A Trilogy of Hatespeech, Search Engines and GPT
woah-1.13
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.woah-1.14.bib
https://aclanthology.org/2023.woah-1.14/
@inproceedings{nejadgholi-etal-2023-concept, title = "Concept-Based Explanations to Test for False Causal Relationships Learned by Abusive Language Classifiers", author = "Nejadgholi, Isar and Kiritchenko, Svetlana and Fraser, Kathleen C. and Balkir, Esma", editor = {Chung, Yi-ling and R{\"o}ttger, Paul and Nozza, Debora and Talat, Zeerak and Mostafazadeh Davani, Aida}, booktitle = "The 7th Workshop on Online Abuse and Harms (WOAH)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.woah-1.14", doi = "10.18653/v1/2023.woah-1.14", pages = "138--149", abstract = "Classifiers tend to learn a false causal relationship between an over-represented concept and a label, which can result in over-reliance on the concept and compromised classification accuracy. It is imperative to have methods in place that can compare different models and identify over-reliances on specific concepts. We consider three well-known abusive language classifiers trained on large English datasets and focus on the concept of negative emotions, which is an important signal but should not be learned as a sufficient feature for the label of abuse. Motivated by the definition of global sufficiency, we first examine the unwanted dependencies learned by the classifiers by assessing their accuracy on a challenge set across all decision thresholds. Further, recognizing that a challenge set might not always be available, we introduce concept-based explanation metrics to assess the influence of the concept on the labels. These explanations allow us to compare classifiers regarding the degree of false global sufficiency they have learned between a concept and a label.", }
Classifiers tend to learn a false causal relationship between an over-represented concept and a label, which can result in over-reliance on the concept and compromised classification accuracy. It is imperative to have methods in place that can compare different models and identify over-reliances on specific concepts. We consider three well-known abusive language classifiers trained on large English datasets and focus on the concept of negative emotions, which is an important signal but should not be learned as a sufficient feature for the label of abuse. Motivated by the definition of global sufficiency, we first examine the unwanted dependencies learned by the classifiers by assessing their accuracy on a challenge set across all decision thresholds. Further, recognizing that a challenge set might not always be available, we introduce concept-based explanation metrics to assess the influence of the concept on the labels. These explanations allow us to compare classifiers regarding the degree of false global sufficiency they have learned between a concept and a label.
[ "Nejadgholi, Isar", "Kiritchenko, Svetlana", "Fraser, Kathleen C.", "Balkir, Esma" ]
Concept-Based Explanations to Test for False Causal Relationships Learned by Abusive Language Classifiers
woah-1.14
Poster
2307.01900
[ "https://github.com/isarnejad/global-sufficiency" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.woah-1.15.bib
https://aclanthology.org/2023.woah-1.15/
@inproceedings{singh-etal-2023-female, title = "{``}Female Astronaut: Because sandwiches won{'}t make themselves up there{''}: Towards Multimodal misogyny detection in memes", author = "Singh, Smriti and Haridasan, Amritha and Mooney, Raymond", editor = {Chung, Yi-ling and R{\"o}ttger, Paul and Nozza, Debora and Talat, Zeerak and Mostafazadeh Davani, Aida}, booktitle = "The 7th Workshop on Online Abuse and Harms (WOAH)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.woah-1.15", doi = "10.18653/v1/2023.woah-1.15", pages = "150--159", abstract = "A rise in the circulation of memes has led to the spread of a new form of multimodal hateful content. Unfortunately, the degree of hate women receive on the internet is disproportionately skewed against them. This, combined with the fact that multimodal misogyny is more challenging to detect than traditional text-based misogyny, makes the task of identifying misogynistic memes online one of utmost importance. To this end, the MAMI dataset was released, consisting of 12,000 memes annotated for misogyny and four sub-classes of misogyny: shame, objectification, violence, and stereotype. While this balanced dataset is widely cited, we find that the task itself remains largely unsolved. Thus, in our work, we investigate the performance of multiple models to analyse whether domain-specific pretraining helps. We also investigate why even state-of-the-art models find this task so challenging. Our results show that pretraining BERT on hateful memes and leveraging an attention-based approach with ViT outperforms state-of-the-art models by more than 10{\%}. Further, we provide insight into why these models may be struggling with this task with an extensive qualitative analysis of random samples from the test set.", }
A rise in the circulation of memes has led to the spread of a new form of multimodal hateful content. Unfortunately, the degree of hate women receive on the internet is disproportionately skewed against them. This, combined with the fact that multimodal misogyny is more challenging to detect than traditional text-based misogyny, makes the task of identifying misogynistic memes online one of utmost importance. To this end, the MAMI dataset was released, consisting of 12,000 memes annotated for misogyny and four sub-classes of misogyny: shame, objectification, violence, and stereotype. While this balanced dataset is widely cited, we find that the task itself remains largely unsolved. Thus, in our work, we investigate the performance of multiple models to analyse whether domain-specific pretraining helps. We also investigate why even state-of-the-art models find this task so challenging. Our results show that pretraining BERT on hateful memes and leveraging an attention-based approach with ViT outperforms state-of-the-art models by more than 10{\%}. Further, we provide insight into why these models may be struggling with this task with an extensive qualitative analysis of random samples from the test set.
[ "Singh, Smriti", "Haridasan, Amritha", "Mooney, Raymond" ]
“Female Astronaut: Because sandwiches won't make themselves up there”: Towards Multimodal misogyny detection in memes
woah-1.15
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.woah-1.16.bib
https://aclanthology.org/2023.woah-1.16/
@inproceedings{altarawneh-etal-2023-conversation, title = "Conversation Derailment Forecasting with Graph Convolutional Networks", author = "Altarawneh, Enas and Agrawal, Ameeta and Jenkin, Michael and Papagelis, Manos", editor = {Chung, Yi-ling and R{\"o}ttger, Paul and Nozza, Debora and Talat, Zeerak and Mostafazadeh Davani, Aida}, booktitle = "The 7th Workshop on Online Abuse and Harms (WOAH)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.woah-1.16", doi = "10.18653/v1/2023.woah-1.16", pages = "160--169", abstract = "Online conversations are particularly susceptible to derailment, which can manifest itself in the form of toxic communication patterns like disrespectful comments or verbal abuse. Forecasting conversation derailment predicts signs of derailment in advance, enabling proactive moderation of conversations. Current state-of-the-art approaches to address this problem rely on sequence models that treat dialogues as text streams. We propose a novel model based on a graph convolutional neural network that considers dialogue user dynamics and the influence of public perception on conversation utterances. Through empirical evaluation, we show that our model effectively captures conversation dynamics and outperforms the state-of-the-art models on the CGA and CMV benchmark datasets by 1.5{\%} and 1.7{\%}, respectively.", }
Online conversations are particularly susceptible to derailment, which can manifest itself in the form of toxic communication patterns like disrespectful comments or verbal abuse. Forecasting conversation derailment predicts signs of derailment in advance, enabling proactive moderation of conversations. Current state-of-the-art approaches to address this problem rely on sequence models that treat dialogues as text streams. We propose a novel model based on a graph convolutional neural network that considers dialogue user dynamics and the influence of public perception on conversation utterances. Through empirical evaluation, we show that our model effectively captures conversation dynamics and outperforms the state-of-the-art models on the CGA and CMV benchmark datasets by 1.5{\%} and 1.7{\%}, respectively.
[ "Altarawneh, Enas", "Agrawal, Ameeta", "Jenkin, Michael", "Papagelis, Manos" ]
Conversation Derailment Forecasting with Graph Convolutional Networks
woah-1.16
Poster
2306.12982
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.woah-1.17.bib
https://aclanthology.org/2023.woah-1.17/
@inproceedings{abercrombie-etal-2023-resources, title = "Resources for Automated Identification of Online Gender-Based Violence: A Systematic Review", author = "Abercrombie, Gavin and Jiang, Aiqi and Gerrard-abbott, Poppy and Konstas, Ioannis and Rieser, Verena", editor = {Chung, Yi-ling and R{\"o}ttger, Paul and Nozza, Debora and Talat, Zeerak and Mostafazadeh Davani, Aida}, booktitle = "The 7th Workshop on Online Abuse and Harms (WOAH)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.woah-1.17", doi = "10.18653/v1/2023.woah-1.17", pages = "170--186", abstract = "Online Gender-Based Violence (GBV), such as misogynistic abuse, is an increasingly prevalent problem that technological approaches have struggled to address. Through the lens of the GBV framework, which is rooted in social science and policy, we systematically review 63 available resources for automated identification of such language. We find the datasets are limited in a number of important ways, such as their lack of theoretical grounding and stakeholder input, static nature, and focus on certain media platforms. Based on this review, we recommend development of future resources rooted in sociological expertise and centering stakeholder voices, namely GBV experts and people with lived experience of GBV.", }
Online Gender-Based Violence (GBV), such as misogynistic abuse, is an increasingly prevalent problem that technological approaches have struggled to address. Through the lens of the GBV framework, which is rooted in social science and policy, we systematically review 63 available resources for automated identification of such language. We find the datasets are limited in a number of important ways, such as their lack of theoretical grounding and stakeholder input, static nature, and focus on certain media platforms. Based on this review, we recommend development of future resources rooted in sociological expertise and centering stakeholder voices, namely GBV experts and people with lived experience of GBV.
[ "Abercrombie, Gavin", "Jiang, Aiqi", "Gerrard-abbott, Poppy", "Konstas, Ioannis", "Rieser, Verena" ]
Resources for Automated Identification of Online Gender-Based Violence: A Systematic Review
woah-1.17
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.woah-1.19.bib
https://aclanthology.org/2023.woah-1.19/
@inproceedings{goldzycher-etal-2023-evaluating, title = "Evaluating the Effectiveness of Natural Language Inference for Hate Speech Detection in Languages with Limited Labeled Data", author = "Goldzycher, Janis and Preisig, Moritz and Amrhein, Chantal and Schneider, Gerold", editor = {Chung, Yi-ling and R{\"o}ttger, Paul and Nozza, Debora and Talat, Zeerak and Mostafazadeh Davani, Aida}, booktitle = "The 7th Workshop on Online Abuse and Harms (WOAH)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.woah-1.19", doi = "10.18653/v1/2023.woah-1.19", pages = "187--201", abstract = "Most research on hate speech detection has focused on English where a sizeable amount of labeled training data is available. However, to expand hate speech detection into more languages, approaches that require minimal training data are needed. In this paper, we test whether natural language inference (NLI) models which perform well in zero- and few-shot settings can benefit hate speech detection performance in scenarios where only a limited amount of labeled data is available in the target language. Our evaluation on five languages demonstrates large performance improvements of NLI fine-tuning over direct fine-tuning in the target language. However, the effectiveness of previous work that proposed intermediate fine-tuning on English data is hard to match. Only in settings where the English training data does not match the test domain can our customised NLI formulation outperform intermediate fine-tuning on English. Based on our extensive experiments, we propose a set of recommendations for hate speech detection in languages where minimal labeled training data is available.", }
Most research on hate speech detection has focused on English where a sizeable amount of labeled training data is available. However, to expand hate speech detection into more languages, approaches that require minimal training data are needed. In this paper, we test whether natural language inference (NLI) models which perform well in zero- and few-shot settings can benefit hate speech detection performance in scenarios where only a limited amount of labeled data is available in the target language. Our evaluation on five languages demonstrates large performance improvements of NLI fine-tuning over direct fine-tuning in the target language. However, the effectiveness of previous work that proposed intermediate fine-tuning on English data is hard to match. Only in settings where the English training data does not match the test domain can our customised NLI formulation outperform intermediate fine-tuning on English. Based on our extensive experiments, we propose a set of recommendations for hate speech detection in languages where minimal labeled training data is available.
[ "Goldzycher, Janis", "Preisig, Moritz", "Amrhein, Chantal", "Schneider, Gerold" ]
Evaluating the Effectiveness of Natural Language Inference for Hate Speech Detection in Languages with Limited Labeled Data
woah-1.19
Poster
2306.03722
[ "https://github.com/jagol/xnli4xhsd" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.woah-1.20.bib
https://aclanthology.org/2023.woah-1.20/
@inproceedings{vasquez-etal-2023-homo, title = "{HOMO}-{MEX}: A {M}exican {S}panish Annotated Corpus for {LGBT}+phobia Detection on {T}witter", author = "V{\'a}squez, Juan and Andersen, Scott and Bel-enguix, Gemma and G{\'o}mez-adorno, Helena and Ojeda-trueba, Sergio-luis", editor = {Chung, Yi-ling and R{\"o}ttger, Paul and Nozza, Debora and Talat, Zeerak and Mostafazadeh Davani, Aida}, booktitle = "The 7th Workshop on Online Abuse and Harms (WOAH)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.woah-1.20", doi = "10.18653/v1/2023.woah-1.20", pages = "202--214", abstract = "In the past few years, the NLP community has actively worked on detecting LGBT+Phobia in online spaces, using publicly available textual data. Most of these resources are for the English language and its variants, since it is the most studied language by the NLP community. Nevertheless, efforts towards creating corpora in other languages are active worldwide. Despite this, the Spanish language is an understudied language regarding digital LGBT+Phobia. The only corpus we found in the literature was for the Peninsular Spanish dialects, which use LGBT+phobic terms different from those in the Mexican dialect. For this reason, we present Homo-MEX, a novel corpus for detecting LGBT+Phobia in Mexican Spanish. In this paper, we describe our data-gathering and annotation process. Also, we present a classification benchmark using various traditional machine learning algorithms and two pre-trained deep learning models to showcase the classification potential of our corpus.", }
In the past few years, the NLP community has actively worked on detecting LGBT+Phobia in online spaces, using publicly available textual data. Most of these resources are for the English language and its variants, since it is the most studied language by the NLP community. Nevertheless, efforts towards creating corpora in other languages are active worldwide. Despite this, the Spanish language is an understudied language regarding digital LGBT+Phobia. The only corpus we found in the literature was for the Peninsular Spanish dialects, which use LGBT+phobic terms different from those in the Mexican dialect. For this reason, we present Homo-MEX, a novel corpus for detecting LGBT+Phobia in Mexican Spanish. In this paper, we describe our data-gathering and annotation process. Also, we present a classification benchmark using various traditional machine learning algorithms and two pre-trained deep learning models to showcase the classification potential of our corpus.
[ "V{\\'a}squez, Juan", "Andersen, Scott", "Bel-enguix, Gemma", "G{\\'o}mez-adorno, Helena", "Ojeda-trueba, Sergio-luis" ]
HOMO-MEX: A Mexican Spanish Annotated Corpus for LGBT+phobia Detection on Twitter
woah-1.20
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.woah-1.21.bib
https://aclanthology.org/2023.woah-1.21/
@inproceedings{ron-etal-2023-factoring, title = "Factoring Hate Speech: A New Annotation Framework to Study Hate Speech in Social Media", author = "Ron, Gal and Levi, Effi and Oshri, Odelia and Shenhav, Shaul", editor = {Chung, Yi-ling and R{\"o}ttger, Paul and Nozza, Debora and Talat, Zeerak and Mostafazadeh Davani, Aida}, booktitle = "The 7th Workshop on Online Abuse and Harms (WOAH)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.woah-1.21", doi = "10.18653/v1/2023.woah-1.21", pages = "215--220", abstract = "In this work we propose a novel annotation scheme which factors hate speech into five separate discursive categories. To evaluate our scheme, we construct a corpus of over 2.9M Twitter posts containing hateful expressions directed at Jews, and annotate a sample dataset of 1,050 tweets. We present a statistical analysis of the annotated dataset as well as discuss annotation examples, and conclude by discussing promising directions for future work.", }
In this work we propose a novel annotation scheme which factors hate speech into five separate discursive categories. To evaluate our scheme, we construct a corpus of over 2.9M Twitter posts containing hateful expressions directed at Jews, and annotate a sample dataset of 1,050 tweets. We present a statistical analysis of the annotated dataset as well as discuss annotation examples, and conclude by discussing promising directions for future work.
[ "Ron, Gal", "Levi, Effi", "Oshri, Odelia", "Shenhav, Shaul" ]
Factoring Hate Speech: A New Annotation Framework to Study Hate Speech in Social Media
woah-1.21
Poster
2311.03969
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.woah-1.24.bib
https://aclanthology.org/2023.woah-1.24/
@inproceedings{korre-etal-2023-harmful, title = "Harmful Language Datasets: An Assessment of Robustness", author = "Korre, Katerina and Pavlopoulos, John and Sorensen, Jeffrey and Laugier, L{\'e}o and Androutsopoulos, Ion and Dixon, Lucas and Barr{\'o}n-cede{\~n}o, Alberto", editor = {Chung, Yi-ling and R{\"o}ttger, Paul and Nozza, Debora and Talat, Zeerak and Mostafazadeh Davani, Aida}, booktitle = "The 7th Workshop on Online Abuse and Harms (WOAH)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.woah-1.24", doi = "10.18653/v1/2023.woah-1.24", pages = "221--230", abstract = "The automated detection of harmful language has been of great importance for the online world, especially with the growing importance of social media and, consequently, polarisation. There are many open challenges to high-quality detection of harmful text, from dataset creation to generalisable application, thus calling for more systematic studies. In this paper, we explore re-annotation as a means of examining the robustness of already existing labelled datasets, showing that, despite using alternative definitions, the inter-annotator agreement remains very inconsistent, highlighting the intrinsically subjective and variable nature of the task. In addition, we build automatic toxicity detectors using the existing datasets, with their original labels, and we evaluate them on our multi-definition and multi-source datasets. Surprisingly, while other studies show that hate speech detection models perform better on data that are derived from the same distribution as the training set, our analysis demonstrates this is not necessarily true.", }
The automated detection of harmful language has been of great importance for the online world, especially with the growing importance of social media and, consequently, polarisation. There are many open challenges to high-quality detection of harmful text, from dataset creation to generalisable application, thus calling for more systematic studies. In this paper, we explore re-annotation as a means of examining the robustness of already existing labelled datasets, showing that, despite using alternative definitions, the inter-annotator agreement remains very inconsistent, highlighting the intrinsically subjective and variable nature of the task. In addition, we build automatic toxicity detectors using the existing datasets, with their original labels, and we evaluate them on our multi-definition and multi-source datasets. Surprisingly, while other studies show that hate speech detection models perform better on data that are derived from the same distribution as the training set, our analysis demonstrates this is not necessarily true.
[ "Korre, Katerina", "Pavlopoulos, John", "Sorensen, Jeffrey", "Laugier, L{\\'e}o", "Androutsopoulos, Ion", "Dixon, Lucas", "Barr{\\'o}n-cede{\\~n}o, Alberto" ]
Harmful Language Datasets: An Assessment of Robustness
woah-1.24
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.woah-1.25.bib
https://aclanthology.org/2023.woah-1.25/
@inproceedings{antypas-camacho-collados-2023-robust, title = "Robust Hate Speech Detection in Social Media: A Cross-Dataset Empirical Evaluation", author = "Antypas, Dimosthenis and Camacho-Collados, Jose", editor = {Chung, Yi-ling and R{\"o}ttger, Paul and Nozza, Debora and Talat, Zeerak and Mostafazadeh Davani, Aida}, booktitle = "The 7th Workshop on Online Abuse and Harms (WOAH)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.woah-1.25", doi = "10.18653/v1/2023.woah-1.25", pages = "231--242", abstract = "The automatic detection of hate speech online is an active research area in NLP. Most of the studies to date are based on social media datasets that contribute to the creation of hate speech detection models trained on them. However, data creation processes contain their own biases, and models inherently learn from these dataset-specific biases. In this paper, we perform a large-scale cross-dataset comparison where we fine-tune language models on different hate speech detection datasets. This analysis shows how some datasets are more generalizable than others when used as training data. Crucially, our experiments show how combining hate speech detection datasets can contribute to the development of robust hate speech detection models. This robustness holds even when controlling for data size and when comparing with the best individual datasets.", }
The automatic detection of hate speech online is an active research area in NLP. Most of the studies to date are based on social media datasets that contribute to the creation of hate speech detection models trained on them. However, data creation processes contain their own biases, and models inherently learn from these dataset-specific biases. In this paper, we perform a large-scale cross-dataset comparison where we fine-tune language models on different hate speech detection datasets. This analysis shows how some datasets are more generalizable than others when used as training data. Crucially, our experiments show how combining hate speech detection datasets can contribute to the development of robust hate speech detection models. This robustness holds even when controlling for data size and when comparing with the best individual datasets.
[ "Antypas, Dimosthenis", "Camacho-Collados, Jose" ]
Robust Hate Speech Detection in Social Media: A Cross-Dataset Empirical Evaluation
woah-1.25
Poster
2307.01680
[ "" ]
https://huggingface.co/papers/2307.01680
0
0
0
2
1
[ "cardiffnlp/twitter-roberta-base-hate-multiclass-latest" ]
[]
[ "yunusserhat/Crime-Map", "interactive-crime/map", "crime-interactive/map" ]