{ "paper_id": "2022", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T06:07:02.515626Z" }, "title": "Empathy and Distress Prediction using Transformer Multi-output Regression and Emotion Analysis with an Ensemble of Supervised and Zero-Shot Learning Models", "authors": [ { "first": "Flor", "middle": [ "Miriam" ], "last": "Plaza-Del-Arco", "suffix": "", "affiliation": { "laboratory": "", "institution": "Universidad de Ja\u00e9n", "location": { "country": "Spain" } }, "email": "" }, { "first": "Jaime", "middle": [], "last": "Collado-Monta\u00f1ez", "suffix": "", "affiliation": { "laboratory": "", "institution": "Universidad de Ja\u00e9n", "location": { "country": "Spain" } }, "email": "" }, { "first": "L", "middle": [], "last": "Alfonso Ure\u00f1a-L\u00f3pez", "suffix": "", "affiliation": { "laboratory": "", "institution": "Universidad de Ja\u00e9n", "location": { "country": "Spain" } }, "email": "" }, { "first": "Mar\u00eda-Teresa", "middle": [], "last": "Mart\u00edn-Valdivia", "suffix": "", "affiliation": { "laboratory": "", "institution": "Universidad de Ja\u00e9n", "location": { "country": "Spain" } }, "email": "" }, { "first": "", "middle": [], "last": "Sinai", "suffix": "", "affiliation": { "laboratory": "", "institution": "Universidad de Ja\u00e9n", "location": { "country": "Spain" } }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "This paper describes the participation of the SINAI research group at WASSA 2022 (Empathy and Personality Detection and Emotion Classification). Specifically, we participate in Track 1 (Empathy and Distress predictions) and Track 2 (Emotion classification). We conducted extensive experiments developing different machine learning solutions in line with the state of the art in Natural Language Processing. For Track 1, a Transformer multi-output regression model is proposed. For Track 2, we aim to explore recent techniques based on Zero-Shot Learning models including a Natural Language Inference model and GPT-3, using them in an ensemble manner with a fine-tune RoBERTa model. Our team ranked 2 nd in the first track and 3 rd in the second track.", "pdf_parse": { "paper_id": "2022", "_pdf_hash": "", "abstract": [ { "text": "This paper describes the participation of the SINAI research group at WASSA 2022 (Empathy and Personality Detection and Emotion Classification). Specifically, we participate in Track 1 (Empathy and Distress predictions) and Track 2 (Emotion classification). We conducted extensive experiments developing different machine learning solutions in line with the state of the art in Natural Language Processing. For Track 1, a Transformer multi-output regression model is proposed. For Track 2, we aim to explore recent techniques based on Zero-Shot Learning models including a Natural Language Inference model and GPT-3, using them in an ensemble manner with a fine-tune RoBERTa model. Our team ranked 2 nd in the first track and 3 rd in the second track.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Emotion analysis is a popular and established task in natural language processing (NLP) with a large number of studies conducted during the last few years (Bostan and Klinger, 2018; Plaza-del-Arco et al., 2020) . 
Emotion detection can be considered the main task in this area; it consists of mapping textual units to emotion categories following different psychological models, such as Ekman's theory (Ekman, 1992), with six basic emotions, or Plutchik's (Plutchik, 2001), which adds anticipation and trust. Two concepts inextricably related to emotions that have received less attention are empathy and distress. The former is defined as the ability to sense other people's emotions, coupled with the ability to imagine what someone else might be thinking or feeling, while the latter is a self-focused, negative affective state that arises when one feels upset due to witnessing an entity's suffering or need (Batson et al., 1987; Buechel et al., 2018).", "cite_spans": [ { "start": 155, "end": 181, "text": "(Bostan and Klinger, 2018;", "ref_id": "BIBREF1" }, { "start": 182, "end": 210, "text": "Plaza-del-Arco et al., 2020)", "ref_id": "BIBREF11" }, { "start": 429, "end": 442, "text": "(Ekman, 1992)", "ref_id": "BIBREF6" }, { "start": 484, "end": 500, "text": "(Plutchik, 2001)", "ref_id": "BIBREF12" }, { "start": 949, "end": 970, "text": "(Batson et al., 1987;", "ref_id": "BIBREF0" }, { "start": 971, "end": 992, "text": "Buechel et al., 2018)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "A linked task that plays an important role in the study of these concepts is personality trait detection, which is related to author profiling and is commonly defined as the task of detecting the five basic personality traits (extraversion, agreeableness, openness, conscientiousness, and neuroticism) in text (Mehta et al., 2020). We refer the reader to a recent survey of the task (Stajner and Yenikent, 2020). Together, these concepts have potential applications and play an important role in helping victims of abuse (Burleson et al., 2009; Pfetsch, 2017; Woods et al., 2009), in mental and physical health support (Sharma et al., 2020, 2021), and in the study of reactions to news stories (Buechel et al., 2018).", "cite_spans": [ { "start": 315, "end": 335, "text": "(Mehta et al., 2020)", "ref_id": "BIBREF9" }, { "start": 389, "end": 417, "text": "(Stajner and Yenikent, 2020)", "ref_id": "BIBREF17" }, { "start": 531, "end": 554, "text": "(Burleson et al., 2009;", "ref_id": "BIBREF3" }, { "start": 555, "end": 569, "text": "Pfetsch, 2017;", "ref_id": "BIBREF10" }, { "start": 570, "end": 594, "text": "Woods et al., 2009)", "ref_id": "BIBREF13" }, { "start": 632, "end": 652, "text": "(Sharma et al., 2020", "ref_id": "BIBREF16" }, { "start": 653, "end": 675, "text": "2021)", "ref_id": "BIBREF15" }, { "start": 723, "end": 745, "text": "(Buechel et al., 2018)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this paper, we present our participation as the SINAI team in the Shared Task on Empathy and Personality Detection and Emotion Classification (WASSA 2022). Within this shared task, four main tracks are proposed that aim to develop models that can predict empathy, distress, emotion, and personality traits in reaction to English news articles. Track 1: Empathy Prediction (EMP) consists of predicting both the empathy concern and the personal distress at the essay level. Track 2: Emotion Classification (EMO) refers to detecting the emotion at the essay level.
Track 3: Personality Prediction (PER) aims to predict the Big Five personality traits, and Track 4: Interpersonal Reactivity Index Prediction (IRI) consists of predicting each dimension of this empathy assessment: perspective taking, fantasy, empathic concern, and personal distress. Our team SINAI participated in the first and second tracks.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The dataset provided by the organizers of the WASSA 2022 shared task is an extension of the one presented in (Buechel et al., 2018), which is composed of posts written in reaction to news articles in which there is harm to a person, group, or other entity. Person-level demographic information (age, gender, ethnicity, income, education level) is included for each post. A set of 2,130 training documents annotated with empathy, distress, and emotions is provided (see Table 1 for the dataset sizes). For Track 1, each post is associated with empathy and distress regression scores that range from 1 to 7. For Track 2, each post is annotated with one of seven emotions: Ekman's six categories (anger, fear, sadness, joy, disgust, and surprise) plus the neutral class. ", "cite_spans": [ { "start": 105, "end": 127, "text": "(Buechel et al., 2018)", "ref_id": "BIBREF2" } ], "ref_spans": [ { "start": 447, "end": 454, "text": "Table 1", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Data", "sec_num": "2" }, { "text": "In this section, we describe the systems our team SINAI developed for Track 1 (EMP) and Track 2 (EMO) at WASSA 2022.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "System Description", "sec_num": "3" }, { "text": "This track is a multi-output regression task in which a system has to learn to predict both empathy and distress scores from users' reaction posts to news articles. To address this task, we have focused on two main approaches: a single multi-output regression model that learns to predict both empathy and distress at once, and two separated regression models, one predicting the empathy score and the other the distress score. For each approach, three different models based on RoBERTa (Liu et al., 2020) and BERT (Devlin et al., 2019) have been tested: roberta-large; bert-base-uncased fine-tuned on the GoEmotions dataset (Demszky et al., 2020), which contains Reddit comments labeled for 27 emotion categories plus neutral; and a distilled version of BERT (distilbert-base-uncased) fine-tuned on the CARER dataset (Saravia et al., 2018), which contains Twitter messages labeled with six basic emotions: anger, fear, joy, love, sadness, and surprise. By proposing the latter two models, we aim to observe whether sequential transfer learning models that have first been fine-tuned on an emotion task help in the detection of empathy and distress, as these are inherently related tasks.", "cite_spans": [ { "start": 496, "end": 514, "text": "(Liu et al., 2020)", "ref_id": "BIBREF8" }, { "start": 524, "end": 545, "text": "(Devlin et al., 2019)", "ref_id": "BIBREF5" }, { "start": 633, "end": 655, "text": "(Demszky et al., 2020)", "ref_id": "BIBREF4" }, { "start": 824, "end": 846, "text": "(Saravia et al., 2018)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Track 1: Empathy Prediction", "sec_num": "3.1" }, { "text": "The WASSA 2022 dataset provides several numerically encoded demographic features, namely gender, education, race, age, and income. Two of these (age and income) are actual numerical features, but the others are categorical features whose category labels were not provided, so we tried to decode them by analyzing the training set. We noticed that all essays containing the phrase \"as a woman\" were labeled as 2, so we inferred gender 2 as female, gender 1 as male, and gender 5, which identifies only two authors in the entire training set, as \"other\". The remaining features (race and education level) were not used in our system, as we could not decode them.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Track 1: Empathy Prediction", "sec_num": "3.1" }, { "text": "We first fine-tuned all three models on the raw essays alone. Then, we used both the essays and a concatenation of the three previously mentioned features (e.g. \"male, 32, 20000\") as two different input sentences for the tokenizer, which internally merges them with a special separator token: </s> for RoBERTa and [SEP] for BERT.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Track 1: Empathy Prediction", "sec_num": "3.1" }, { "text": "Multi-output regression model. In this approach, the prediction of both empathy and distress is learned at once by minimizing the average of the mean squared errors (MSE) of the two outputs. This is accomplished by fine-tuning a single transformer model to predict two regression outputs given essays as inputs.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Track 1: Empathy Prediction", "sec_num": "3.1" }, 
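{ "text": "The following minimal sketch illustrates this setup with the Hugging Face Transformers library; it is not our exact training code, and the essay, feature string, and target scores are toy values. With num_labels=2 and problem_type='regression', the model applies an MSE loss jointly over both outputs, which is equivalent to minimizing the average of the per-target MSEs.\n\nimport torch\nfrom transformers import AutoTokenizer, AutoModelForSequenceClassification\n\n# Two-output regression head on top of RoBERTa (sketch, toy inputs).\ntokenizer = AutoTokenizer.from_pretrained('roberta-large')\nmodel = AutoModelForSequenceClassification.from_pretrained(\n    'roberta-large', num_labels=2, problem_type='regression')\n\nessay = 'Reading about these families losing everything really moved me.'\nfeatures = 'male, 32, 20000'  # gender, age, income\n# Passing two sentences makes the tokenizer join them with the model's\n# separator token (</s> for RoBERTa, [SEP] for BERT).\ninputs = tokenizer(essay, features, truncation=True, max_length=221,\n                   return_tensors='pt')\nlabels = torch.tensor([[4.5, 3.0]])  # [empathy, distress], both on the 1-7 scale\noutputs = model(**inputs, labels=labels)\nprint(outputs.loss, outputs.logits)  # MSE loss and the two predicted scores", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Track 1: Empathy Prediction", "sec_num": "3.1" }, 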
{ "text": "Separated regression models. In this case, we focused on predicting each score separately, that is, we fine-tuned two different models: the former is designed to minimize the MSE loss while learning to predict the empathy regression value, while the latter does the same for distress.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Track 1: Empathy Prediction", "sec_num": "3.1" }, { "text": "This task aims to predict the emotion experienced by the user at the essay level. It is a multi-class classification task where the system has to predict one of the following emotion categories: anger, fear, sadness, joy, disgust, surprise, and neutral. To address this task, we focused on two different paradigms within NLP, namely supervised learning and zero-shot learning (ZSL). We aimed to compare the two approaches and evaluate how ZSL works in emotion classification and whether it can assist in addressing this task. In particular, for supervised learning we followed the state-of-the-art Transformer architecture (Vaswani et al., 2017), while for ZSL we tested a natural language inference (NLI) model and an autoregressive language model (GPT-3).", "cite_spans": [ { "start": 618, "end": 640, "text": "(Vaswani et al., 2017)", "ref_id": "BIBREF18" } ], "ref_spans": [], "eq_spans": [], "section": "Track 2: Emotion Classification", "sec_num": "3.2" }, { "text": "Transformer fine-tuning. As a supervised model, we chose the Transformer RoBERTa, specifically the roberta-base model. We fine-tuned this model on the raw essays of the corpus provided by the organizers.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Track 2: Emotion Classification", "sec_num": "3.2" }, { "text": "NLI. One way of performing ZSL is via NLI models, framing classification as a textual entailment problem. The NLI model has to decide whether the premise (the instance to be classified) entails the hypothesis (a prompt representing the class label) or contradicts it (Yin et al., 2019). For emotion classification, we used the prompt \"This person feels <emotion name>\", where <emotion name> is replaced by each emotion category (anger, fear, sadness, joy, disgust, surprise, and neutral). The label with the highest entailment probability is picked as the final prediction. In our experiments, we used the DeBERTa Transformer (He et al., 2021), specifically the microsoft/deberta-xlarge-mnli model from Hugging Face.", "cite_spans": [ { "start": 300, "end": 318, "text": "(Yin et al., 2019)", "ref_id": "BIBREF20" }, { "start": 637, "end": 654, "text": "(He et al., 2021)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Track 2: Emotion Classification", "sec_num": "3.2" }, 
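{ "text": "A minimal sketch of this zero-shot classifier follows, using the Transformers zero-shot-classification pipeline (which implements the NLI-based approach); the essay is a toy input, not taken from the dataset.\n\nfrom transformers import pipeline\n\n# Zero-shot emotion classification via NLI (sketch, toy input).\nclassifier = pipeline('zero-shot-classification',\n                      model='microsoft/deberta-xlarge-mnli')\nemotions = ['anger', 'fear', 'sadness', 'joy', 'disgust', 'surprise', 'neutral']\nessay = 'I cannot believe these families lost everything in the flood.'\nresult = classifier(essay, candidate_labels=emotions,\n                    hypothesis_template='This person feels {}.')\nprint(result['labels'][0])  # label with the highest entailment probability", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Track 2: Emotion Classification", "sec_num": "3.2" }, 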
{ "text": "GPT-3. This model aims to produce human-like text; in this case, we used it to ask about the emotion expressed in the text. We used the prompt \"Classify the following texts in only one of the following emotions anger, fear, sadness, joy, disgust, surprise or neutral.\" and showed the model one example, \"I feel so happy today: joy\". We employed OpenAI's Davinci model, as it is the most capable one and often needs less context.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Track 2: Emotion Classification", "sec_num": "3.2" }, { "text": "Final Ensemble. We aim to observe how these different types of models perform together on the task of emotion classification. Therefore, we built a voting ensemble where the majority emotion is picked as the final prediction. When there is no majority, i.e., the three models disagree, we selected the emotion given by the supervised model.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Track 2: Emotion Classification", "sec_num": "3.2" }, 
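{ "text": "The voting logic reduces to a few lines of code; the sketch below uses toy predictions rather than real model outputs.\n\nfrom collections import Counter\n\ndef ensemble_vote(roberta_pred, nli_pred, gpt3_pred):\n    # Majority vote over the three model predictions; with three voters,\n    # the only tie is full disagreement, in which case we fall back to\n    # the supervised (RoBERTa) prediction.\n    label, count = Counter([roberta_pred, nli_pred, gpt3_pred]).most_common(1)[0]\n    return label if count > 1 else roberta_pred\n\nprint(ensemble_vote('sadness', 'sadness', 'neutral'))  # sadness (majority)\nprint(ensemble_vote('sadness', 'anger', 'neutral'))    # sadness (fallback)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Track 2: Emotion Classification", "sec_num": "3.2" }, 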
{ "text": "All the transformer-based models were fine-tuned on a single NVIDIA Ampere A100 GPU using the Hugging Face Transformers library (Wolf et al., 2019). Regarding the hyperparameters, we performed a grid search to find the combination that maximized each task's metric on the development set. The batch size values tested were 8, 16, and 32; the learning rate values were 1e-5, 2e-5, 3e-5, 4e-5, and 5e-5. We also set the maximum length of the tokenizer (the length at which a tokenized sequence is truncated) equal to the longest essay in the training set as tokenized by RoBERTa's byte-pair encoding tokenizer, that is, 221 tokens. Regarding the number of epochs, we trained every model until an early stopping mechanism determined that it was starting to overfit the training data, which usually happened between the second and third epoch, depending on the model.", "cite_spans": [ { "start": 144, "end": 163, "text": "(Wolf et al., 2019)", "ref_id": "BIBREF19" } ], "ref_spans": [], "eq_spans": [], "section": "Experimental Setup", "sec_num": "4" }, 
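{ "text": "A sketch of this search loop with the Transformers Trainer API is shown below. Here train_ds, dev_ds, build_model, and compute_metrics are hypothetical placeholders for our datasets, model constructor, and task metric (averaged Pearson r or macro-F1, returned under the key 'score'); the exact argument values are illustrative, not our submitted configuration.\n\nfrom itertools import product\nfrom transformers import TrainingArguments, Trainer, EarlyStoppingCallback\n\nfor batch_size, lr in product([8, 16, 32], [1e-5, 2e-5, 3e-5, 4e-5, 5e-5]):\n    args = TrainingArguments(\n        output_dir=f'runs/bs{batch_size}_lr{lr}',\n        per_device_train_batch_size=batch_size,\n        learning_rate=lr,\n        num_train_epochs=10,  # early stopping usually triggers at epoch 2-3\n        evaluation_strategy='epoch',\n        save_strategy='epoch',\n        load_best_model_at_end=True,\n        metric_for_best_model='score')\n    trainer = Trainer(model=build_model(),  # placeholder constructor\n                      args=args,\n                      train_dataset=train_ds, eval_dataset=dev_ds,\n                      compute_metrics=compute_metrics,\n                      callbacks=[EarlyStoppingCallback(early_stopping_patience=1)])\n    trainer.train()", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Setup", "sec_num": "4" }, 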
{ "text": "In this section, we present the results obtained by the systems we developed as part of our participation in WASSA 2022 Track 1 and Track 2. To evaluate our systems, we used the official competition metrics given by the organizers: the average of the two Pearson correlations for EMP and the macro F1-score for EMO. For the latter, we additionally report macro precision and recall. The experiments were conducted in two phases, the model selection phase and the evaluation phase, which are explained in the following two sections.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "5" }, { "text": "In order to select the best model for each task, we trained all the systems described in Section 3 on the training set provided by the organizers and then evaluated them on the development set. The results achieved by our models in this pre-evaluation phase are shown in Tables 2 and 3. Table 2 shows the results obtained in the first track. RoBERTa large as separated regression models (SEP), with and without features, scored an average Pearson correlation of 0.518 and 0.503, respectively, on the development set. Regarding RoBERTa's multi-output regression models (MOR), adding the features improved the results over the essay-only version (from 0.504 to 0.528); this is the best model we achieved and, therefore, the one selected for the evaluation phase. It can also be observed that the emotion fine-tuned models we chose are not helpful for determining empathy or distress in essays.", "cite_spans": [], "ref_spans": [ { "start": 283, "end": 298, "text": "Tables 2 and 3.", "ref_id": "TABREF2" }, { "start": 302, "end": 309, "text": "Table 2", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Model selection", "sec_num": "5.1" }, { "text": "Table 3 presents the results obtained in the second track. As can be seen, the ZSL-based models (NLI and GPT-3) obtain promising results (0.419 and 0.476 macro-F1) without having been trained on the emotion task; between the two, the GPT-3 system obtained the better results. The supervised model, RoBERTa, obtained a macro-F1 of 0.587. Finally, the ensemble of these models obtained the best result for the task in this phase, a macro-F1 of 0.602, and we therefore decided to use this model for the evaluation phase.", "cite_spans": [], "ref_spans": [ { "start": 3, "end": 10, "text": "Table 3", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Model selection", "sec_num": "5.1" }, { "text": "During the evaluation phase, we trained our systems on the joint training and development sets and evaluated them on the test set. The results of the EMP track on the test set can be seen in Table 4. The multi-output regression model based on RoBERTa achieved Pearson correlations of 0.541 and 0.519 on the empathy and distress predictions, respectively. This amounts to an average score of 0.530, which ranked 2nd on this track.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation phase", "sec_num": "5.2" }, { "text": "In Table 5 we report the results on the EMO track test set. The ensemble model achieved an accuracy of 0.636 and macro-averaged precision, recall, and F1-score of 0.589, 0.535, and 0.553, respectively, which ranked 3rd in this track.", "cite_spans": [], "ref_spans": [ { "start": 3, "end": 10, "text": "Table 5", "ref_id": "TABREF5" } ], "eq_spans": [], "section": "Evaluation phase", "sec_num": "5.2" }, { "text": "This paper presents the participation of the SINAI research group in the shared task on Empathy and Personality Detection and Emotion Classification (WASSA 2022). For the first task, we explored how raw language models and models fine-tuned on emotions work for empathy and distress prediction. We observe that the raw language model RoBERTa in a multi-output regression setting, together with the gender, age, and income features, performs better than the models that contain emotion knowledge. This shows that models previously fine-tuned on emotions do not necessarily help in the prediction of empathy and distress. Regarding Track 2, emotion detection, we experimented with recent ZSL models, including NLI and GPT-3. Results on the development set suggest that they are promising options for emotion detection when no labeled data is available. Therefore, our proposal for this task is an ensemble model that takes advantage of both supervised and ZSL models. Our final results in Track 1 (EMP) and Track 2 (EMO) demonstrate the success of our approaches, since we ranked 2nd and 3rd among all the participants, respectively. As future work, we plan to further explore ZSL models, as they have shown promising results in the emotion classification task.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6" } ], "back_matter": [ { "text": "This work has been partially supported by the grants 1380939 (FEDER Andaluc\u00eda 2014-2020) and P20_00956 (PAIDI 2020) funded by the Andalusian Regional Government, the LIVING-LANG project (RTI2018-094653-B-C21) funded by MCIN/AEI/10.13039/501100011033 and by ERDF A way of making Europe, and the scholarship FPI-PRE2019-089310 from the Ministry of Science, Innovation and Universities of the Spanish Government.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgements", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Distress and empathy: Two qualitatively distinct vicarious emotions with different motivational consequences", "authors": [ { "first": "C", "middle": [ "Daniel" ], "last": "Batson", "suffix": "" }, { "first": "Jim", "middle": [], "last": "Fultz", "suffix": "" }, { "first": "Patricia", "middle": [ "A" ], "last": "Schoenrade", "suffix": "" } ], "year": 1987, "venue": "Journal of personality", "volume": "55", "issue": "1", "pages": "19--39", "other_ids": {}, "num": null, "urls": [], "raw_text": "C Daniel Batson, Jim Fultz, and Patricia A Schoenrade. 1987. Distress and empathy: Two qualitatively distinct vicarious emotions with different motivational consequences. Journal of personality, 55(1):19-39.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "An analysis of annotated corpora for emotion classification in text", "authors": [ { "first": "Laura-Ana-Maria", "middle": [], "last": "Bostan", "suffix": "" }, { "first": "Roman", "middle": [], "last": "Klinger", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 27th International Conference on Computational Linguistics", "volume": "", "issue": "", "pages": "2104--2119", "other_ids": {}, "num": null, "urls": [], "raw_text": "Laura-Ana-Maria Bostan and Roman Klinger. 2018. An analysis of annotated corpora for emotion classification in text. In Proceedings of the 27th International Conference on Computational Linguistics, pages 2104-2119, Santa Fe, New Mexico, USA.
Association for Computational Linguistics.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Modeling empathy and distress in reaction to news stories", "authors": [ { "first": "Sven", "middle": [], "last": "Buechel", "suffix": "" }, { "first": "Anneke", "middle": [], "last": "Buffone", "suffix": "" }, { "first": "Barry", "middle": [], "last": "Slaff", "suffix": "" }, { "first": "Lyle", "middle": [], "last": "Ungar", "suffix": "" }, { "first": "Jo\u00e3o", "middle": [], "last": "Sedoc", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "4758--4765", "other_ids": { "DOI": [ "10.18653/v1/D18-1507" ] }, "num": null, "urls": [], "raw_text": "Sven Buechel, Anneke Buffone, Barry Slaff, Lyle Ungar, and Jo\u00e3o Sedoc. 2018. Modeling empathy and distress in reaction to news stories. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 4758-4765, Brussels, Belgium. Association for Computational Linguistics.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Explaining gender differences in responses to supportive messages: Two tests of a dual-process approach", "authors": [ { "first": "Brant", "middle": [ "R" ], "last": "Burleson", "suffix": "" }, { "first": "Lisa", "middle": [ "K" ], "last": "Hanasono", "suffix": "" }, { "first": "Graham", "middle": [ "D" ], "last": "Bodie", "suffix": "" }, { "first": "Amanda", "middle": [ "J" ], "last": "Holmstrom", "suffix": "" }, { "first": "Jessica", "middle": [ "J" ], "last": "Rack", "suffix": "" }, { "first": "Jennifer", "middle": [ "Gill" ], "last": "Rosier", "suffix": "" }, { "first": "Jennifer", "middle": [ "D" ], "last": "Mccullough", "suffix": "" } ], "year": 2009, "venue": "Sex Roles", "volume": "61", "issue": "3", "pages": "265--280", "other_ids": {}, "num": null, "urls": [], "raw_text": "Brant R Burleson, Lisa K Hanasono, Graham D Bodie, Amanda J Holmstrom, Jessica J Rack, Jennifer Gill Rosier, and Jennifer D McCullough. 2009. Explaining gender differences in responses to supportive messages: Two tests of a dual-process approach. Sex Roles, 61(3):265-280.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "GoEmotions: A dataset of fine-grained emotions", "authors": [ { "first": "Dorottya", "middle": [], "last": "Demszky", "suffix": "" }, { "first": "Dana", "middle": [], "last": "Movshovitz-Attias", "suffix": "" }, { "first": "Jeongwoo", "middle": [], "last": "Ko", "suffix": "" }, { "first": "Alan", "middle": [], "last": "Cowen", "suffix": "" }, { "first": "Gaurav", "middle": [], "last": "Nemade", "suffix": "" }, { "first": "Sujith", "middle": [], "last": "Ravi", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dorottya Demszky, Dana Movshovitz-Attias, Jeongwoo Ko, Alan Cowen, Gaurav Nemade, and Sujith Ravi. 2020.
GoEmotions: A dataset of fine-grained emotions.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "BERT: Pre-training of deep bidirectional transformers for language understanding", "authors": [ { "first": "Jacob", "middle": [], "last": "Devlin", "suffix": "" }, { "first": "Ming-Wei", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Kristina", "middle": [], "last": "Toutanova", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "4171--4186", "other_ids": { "DOI": [ "10.18653/v1/N19-1423" ] }, "num": null, "urls": [], "raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "An argument for basic emotions", "authors": [ { "first": "Paul", "middle": [], "last": "Ekman", "suffix": "" } ], "year": 1992, "venue": "Cognition and Emotion", "volume": "6", "issue": "3-4", "pages": "169--200", "other_ids": { "DOI": [ "10.1080/02699939208411068" ] }, "num": null, "urls": [], "raw_text": "Paul Ekman. 1992. An argument for basic emotions. Cognition and Emotion, 6(3-4):169-200.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "DeBERTa: Decoding-enhanced BERT with Disentangled Attention", "authors": [ { "first": "Pengcheng", "middle": [], "last": "He", "suffix": "" }, { "first": "Xiaodong", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Jianfeng", "middle": [], "last": "Gao", "suffix": "" }, { "first": "Weizhu", "middle": [], "last": "Chen", "suffix": "" } ], "year": 2021, "venue": "International Conference on Learning Representations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Pengcheng He, Xiaodong Liu, Jianfeng Gao, and Weizhu Chen. 2021. DeBERTa: Decoding-enhanced BERT with Disentangled Attention. In International Conference on Learning Representations.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "RoBERTa: A Robustly Optimized BERT Pretraining Approach", "authors": [ { "first": "Yinhan", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Myle", "middle": [], "last": "Ott", "suffix": "" }, { "first": "Naman", "middle": [], "last": "Goyal", "suffix": "" }, { "first": "Jingfei", "middle": [], "last": "Du", "suffix": "" }, { "first": "Mandar", "middle": [], "last": "Joshi", "suffix": "" }, { "first": "Danqi", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Omer", "middle": [], "last": "Levy", "suffix": "" }, { "first": "Mike", "middle": [], "last": "Lewis", "suffix": "" }, { "first": "Luke", "middle": [], "last": "Zettlemoyer", "suffix": "" }, { "first": "Veselin", "middle": [], "last": "Stoyanov", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2020.
RoBERTa: A Robustly Optimized BERT Pretraining Approach.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Recent trends in deep learning based personality detection", "authors": [ { "first": "Yash", "middle": [], "last": "Mehta", "suffix": "" }, { "first": "Navonil", "middle": [], "last": "Majumder", "suffix": "" }, { "first": "Alexander", "middle": [], "last": "Gelbukh", "suffix": "" }, { "first": "Erik", "middle": [], "last": "Cambria", "suffix": "" } ], "year": 2020, "venue": "Artificial Intelligence Review", "volume": "53", "issue": "4", "pages": "2313--2339", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yash Mehta, Navonil Majumder, Alexander Gelbukh, and Erik Cambria. 2020. Recent trends in deep learning based personality detection. Artificial Intelligence Review, 53(4):2313-2339.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Empathic skills and cyberbullying: relationship of different measures of empathy to cyberbullying in comparison to offline bullying among young adults", "authors": [ { "first": "Jan", "middle": [ "S" ], "last": "Pfetsch", "suffix": "" } ], "year": 2017, "venue": "The Journal of genetic psychology", "volume": "178", "issue": "1", "pages": "58--72", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jan S Pfetsch. 2017. Empathic skills and cyberbullying: relationship of different measures of empathy to cyberbullying in comparison to offline bullying among young adults. The Journal of genetic psychology, 178(1):58-72.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "EmoEvent: A Multilingual Emotion Corpus based on different Events", "authors": [ { "first": "Flor", "middle": [ "Miriam" ], "last": "Plaza-Del-Arco", "suffix": "" }, { "first": "Carlo", "middle": [], "last": "Strapparava", "suffix": "" }, { "first": "L", "middle": [ "Alfonso" ], "last": "Ure\u00f1a-Lopez", "suffix": "" }, { "first": "M", "middle": [ "Teresa" ], "last": "Martin-Valdivia", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 12th Language Resources and Evaluation Conference", "volume": "", "issue": "", "pages": "1492--1498", "other_ids": {}, "num": null, "urls": [], "raw_text": "Flor Miriam Plaza-del-Arco, Carlo Strapparava, L. Alfonso Ure\u00f1a-Lopez, and M. Teresa Martin-Valdivia. 2020. EmoEvent: A Multilingual Emotion Corpus based on different Events. In Proceedings of the 12th Language Resources and Evaluation Conference, pages 1492-1498, Marseille, France. European Language Resources Association.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "The Nature of Emotions: Human emotions have deep evolutionary roots, a fact that may explain their complexity and provide tools for clinical practice", "authors": [ { "first": "Robert", "middle": [], "last": "Plutchik", "suffix": "" } ], "year": 2001, "venue": "American scientist", "volume": "89", "issue": "4", "pages": "344--350", "other_ids": {}, "num": null, "urls": [], "raw_text": "Robert Plutchik. 2001. The Nature of Emotions: Human emotions have deep evolutionary roots, a fact that may explain their complexity and provide tools for clinical practice.
American scientist, 89(4):344-350.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Emotion recognition abilities and empathy of victims of bullying", "authors": [ { "first": "Sarah", "middle": [], "last": "Woods", "suffix": "" }, { "first": "Dieter", "middle": [], "last": "Wolke", "suffix": "" }, { "first": "Stephen", "middle": [], "last": "Nowicki", "suffix": "" }, { "first": "Lynne", "middle": [], "last": "Hall", "suffix": "" } ], "year": 2009, "venue": "Development", "volume": "75", "issue": "4", "pages": "987--1002", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sarah Woods, Dieter Wolke, Stephen Nowicki, and Lynne Hall. 2009. Emotion recognition abilities and empathy of victims of bullying. Development, 75(4):987-1002.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "CARER: Contextualized affect representations for emotion recognition", "authors": [ { "first": "Elvis", "middle": [], "last": "Saravia", "suffix": "" }, { "first": "Hsien-Chi Toby", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Yen-Hao", "middle": [], "last": "Huang", "suffix": "" }, { "first": "Junlin", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Yi-Shin", "middle": [], "last": "Chen", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "3687--3697", "other_ids": { "DOI": [ "10.18653/v1/D18-1404" ] }, "num": null, "urls": [], "raw_text": "Elvis Saravia, Hsien-Chi Toby Liu, Yen-Hao Huang, Junlin Wu, and Yi-Shin Chen. 2018. CARER: Contextualized affect representations for emotion recognition. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3687-3697, Brussels, Belgium. Association for Computational Linguistics.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Towards facilitating empathic conversations in online mental health support: A reinforcement learning approach", "authors": [ { "first": "Ashish", "middle": [], "last": "Sharma", "suffix": "" }, { "first": "Inna", "middle": [ "W" ], "last": "Lin", "suffix": "" }, { "first": "Adam", "middle": [ "S" ], "last": "Miner", "suffix": "" }, { "first": "David", "middle": [ "C" ], "last": "Atkins", "suffix": "" }, { "first": "Tim", "middle": [], "last": "Althoff", "suffix": "" } ], "year": 2021, "venue": "Proceedings of the Web Conference 2021", "volume": "", "issue": "", "pages": "194--205", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ashish Sharma, Inna W Lin, Adam S Miner, David C Atkins, and Tim Althoff. 2021. Towards facilitating empathic conversations in online mental health support: A reinforcement learning approach.
In Proceedings of the Web Conference 2021, pages 194-205.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "A computational approach to understanding empathy expressed in text-based mental health support", "authors": [ { "first": "Ashish", "middle": [], "last": "Sharma", "suffix": "" }, { "first": "Adam", "middle": [], "last": "Miner", "suffix": "" }, { "first": "David", "middle": [], "last": "Atkins", "suffix": "" }, { "first": "Tim", "middle": [], "last": "Althoff", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)", "volume": "", "issue": "", "pages": "5263--5276", "other_ids": { "DOI": [ "10.18653/v1/2020.emnlp-main.425" ] }, "num": null, "urls": [], "raw_text": "Ashish Sharma, Adam Miner, David Atkins, and Tim Althoff. 2020. A computational approach to understanding empathy expressed in text-based mental health support. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 5263-5276, Online. Association for Computational Linguistics.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "A survey of automatic personality detection from texts", "authors": [ { "first": "Sanja", "middle": [], "last": "Stajner", "suffix": "" }, { "first": "Seren", "middle": [], "last": "Yenikent", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 28th International Conference on Computational Linguistics", "volume": "", "issue": "", "pages": "6284--6295", "other_ids": { "DOI": [ "10.18653/v1/2020.coling-main.553" ] }, "num": null, "urls": [], "raw_text": "Sanja Stajner and Seren Yenikent. 2020. A survey of automatic personality detection from texts. In Proceedings of the 28th International Conference on Computational Linguistics, pages 6284-6295, Barcelona, Spain (Online). International Committee on Computational Linguistics.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Attention is all you need", "authors": [ { "first": "Ashish", "middle": [], "last": "Vaswani", "suffix": "" }, { "first": "Noam", "middle": [], "last": "Shazeer", "suffix": "" }, { "first": "Niki", "middle": [], "last": "Parmar", "suffix": "" }, { "first": "Jakob", "middle": [], "last": "Uszkoreit", "suffix": "" }, { "first": "Llion", "middle": [], "last": "Jones", "suffix": "" }, { "first": "Aidan", "middle": [ "N" ], "last": "Gomez", "suffix": "" }, { "first": "\u0141ukasz", "middle": [], "last": "Kaiser", "suffix": "" }, { "first": "Illia", "middle": [], "last": "Polosukhin", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 31st International Conference on Neural Information Processing Systems, NIPS'17", "volume": "", "issue": "", "pages": "6000--6010", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, \u0141ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Proceedings of the 31st International Conference on Neural Information Processing Systems, NIPS'17, pages 6000-6010, Red Hook, NY, USA. Curran Associates Inc.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "
HuggingFace's Transformers: State-of-the-art Natural Language Processing", "authors": [ { "first": "Thomas", "middle": [], "last": "Wolf", "suffix": "" }, { "first": "Lysandre", "middle": [], "last": "Debut", "suffix": "" }, { "first": "Victor", "middle": [], "last": "Sanh", "suffix": "" }, { "first": "Julien", "middle": [], "last": "Chaumond", "suffix": "" }, { "first": "Clement", "middle": [], "last": "Delangue", "suffix": "" }, { "first": "Anthony", "middle": [], "last": "Moi", "suffix": "" }, { "first": "Pierric", "middle": [], "last": "Cistac", "suffix": "" }, { "first": "Tim", "middle": [], "last": "Rault", "suffix": "" }, { "first": "R\u00e9mi", "middle": [], "last": "Louf", "suffix": "" }, { "first": "Morgan", "middle": [], "last": "Funtowicz", "suffix": "" }, { "first": "Joe", "middle": [], "last": "Davison", "suffix": "" }, { "first": "Sam", "middle": [], "last": "Shleifer", "suffix": "" }, { "first": "Patrick", "middle": [], "last": "Von Platen", "suffix": "" }, { "first": "Clara", "middle": [], "last": "Ma", "suffix": "" }, { "first": "Yacine", "middle": [], "last": "Jernite", "suffix": "" }, { "first": "Julien", "middle": [], "last": "Plu", "suffix": "" }, { "first": "Canwen", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Teven", "middle": [ "Le" ], "last": "Scao", "suffix": "" }, { "first": "Sylvain", "middle": [], "last": "Gugger", "suffix": "" }, { "first": "Mariama", "middle": [], "last": "Drame", "suffix": "" }, { "first": "Quentin", "middle": [], "last": "Lhoest", "suffix": "" }, { "first": "Alexander", "middle": [ "M" ], "last": "Rush", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "DOI": [ "10.48550/arXiv.1910.03771" ] }, "num": null, "urls": [], "raw_text": "Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, R\u00e9mi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander M. Rush. 2019. HuggingFace's Transformers: State-of-the-art Natural Language Processing. arXiv.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Benchmarking Zero-shot Text Classification: Datasets, Evaluation and Entailment Approach", "authors": [ { "first": "Wenpeng", "middle": [], "last": "Yin", "suffix": "" }, { "first": "Jamaal", "middle": [], "last": "Hay", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Roth", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", "volume": "", "issue": "", "pages": "3914--3923", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wenpeng Yin, Jamaal Hay, and Dan Roth. 2019. Benchmarking Zero-shot Text Classification: Datasets, Evaluation and Entailment Approach. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3914-3923, Hong Kong, China. Association for Computational Linguistics.", "links": null } }, "ref_entries": { "FIGREF0": { "type_str": "figure", "uris": null, "text": "-uncased-emotion (MOR) 0.435 0.387 0.411", "num": null }, "TABREF1": { "type_str": "table", "content": "
: WASSA 2022 dataset splits. Training, development and test set sizes.
", "num": null, "html": null, "text": "" }, "TABREF2": { "type_str": "table", "content": "
: Multi-Output Regression (MOR) and Separated Regression Models (SEP) results in Track 1 (EMP) for empathy (Emp) and distress (Dis) predictions on WASSA 2022 development set. Best results are shown in bold and selected model marked with *.
ModelPRF1
RoBERTa 0.625 0.578 0.587 NLI 0.456 0.463 0.419 GPT-3 0.524 0.469 0.476 Ensemble* 0.642 0.580 0.601
", "num": null, "html": null, "text": "" }, "TABREF3": { "type_str": "table", "content": "
: RoBERTa, NLI, GPT-3 and Ensemble models in Track 2 (EMO) on WASSA 2022 development set. Macro-averaged precision (P), recall (R), and F1-score (F1). Best results are shown in bold and selected model marked with *.
Model P R F1
RoBERTa 0.625 0.578 0.587
NLI 0.456 0.463 0.419
GPT-3 0.524 0.469 0.476
Ensemble* 0.642 0.580 0.601
", "num": null, "html": null, "text": "" }, "TABREF4": { "type_str": "table", "content": "
Pearson correlations.
ModelPRF1Acc
Ensemble 0.589 0.535 0.553 0.636
", "num": null, "html": null, "text": "Multi-Output Regression (MOR) results in Track 1 for empathy (Emp) and distress (Dis) detection on WASSA 2022 test set (SINAI Team submission)." }, "TABREF5": { "type_str": "table", "content": "
Macro-averaged precision (P), recall (R), F1-score (F1) and accuracy (Acc).
Model P R F1 Acc
Ensemble 0.589 0.535 0.553 0.636
", "num": null, "html": null, "text": "Ensemble results in Track 2 for emotion detection on WASSA 2022 test set (SINAI Team submission)." } } } }