Datasets:

Column schema (type, observed range of lengths/values):

- bibtex_url: string (length 41 to 50)
- proceedings: string (length 38 to 47)
- bibtext: string (length 709 to 3.56k)
- abstract: string (length 17 to 2.11k)
- authors: sequence (1 to 72 entries)
- title: string (length 12 to 207)
- id: string (length 7 to 16)
- type: string (2 distinct values)
- arxiv_id: string (length 0 to 10)
- GitHub: sequence (1 entry)
- paper_page: string (276 distinct values)
- n_linked_authors: int64 (-1 to 13)
- upvotes: int64 (-1 to 14)
- num_comments: int64 (-1 to 11)
- n_authors: int64 (-1 to 44)
- paper_page_exists_pre_conf: int64 (0 or 1)
- Models: sequence (0 to 100 entries)
- Datasets: sequence (0 to 14 entries)
- Spaces: sequence (0 to 100 entries)
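The column summary above doubles as a per-row schema. A minimal sketch of one row as a typed record, assuming Python; the `PaperRow` class name is illustrative, and the example values are truncated copies of the first record below:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class PaperRow:
    """One row of the dataset, following the column list above."""
    bibtex_url: str          # link to the .bib file on the ACL Anthology
    proceedings: str         # ACL Anthology landing page for the paper
    bibtext: str             # full BibTeX entry as a string
    abstract: str
    authors: List[str]       # "Last, First" strings, 1 to 72 entries
    title: str
    id: str                  # e.g. "semeval-1.155"
    type: str                # one of 2 classes, e.g. "Poster"
    arxiv_id: str            # empty string when no arXiv preprint is linked
    GitHub: List[str]
    paper_page: str
    n_linked_authors: int    # -1 encodes "unknown" for these stats
    upvotes: int
    num_comments: int
    n_authors: int
    paper_page_exists_pre_conf: int  # 0 or 1
    Models: List[str] = field(default_factory=list)
    Datasets: List[str] = field(default_factory=list)
    Spaces: List[str] = field(default_factory=list)

# Example: the first record in the listing below (long fields truncated).
row = PaperRow(
    bibtex_url="https://aclanthology.org/2023.semeval-1.155.bib",
    proceedings="https://aclanthology.org/2023.semeval-1.155/",
    bibtext="@inproceedings{nguyen-etal-2023-vtcc, ...}",
    abstract="Rhetorical Roles (RR) prediction ...",
    authors=["Nguyen, Hiep", "Ngo, Hoang", "Bui, Nam"],
    title="VTCC-NLP at SemEval-2023 Task 6: ...",
    id="semeval-1.155",
    type="Poster",
    arxiv_id="",
    GitHub=[""],
    paper_page="",
    n_linked_authors=-1,
    upvotes=-1,
    num_comments=-1,
    n_authors=-1,
    paper_page_exists_pre_conf=0,
)
```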
https://aclanthology.org/2023.semeval-1.155.bib
https://aclanthology.org/2023.semeval-1.155/
@inproceedings{nguyen-etal-2023-vtcc, title = "{VTCC}-{NLP} at {S}em{E}val-2023 Task 6:Long-Text Representation Based on Graph Neural Network for Rhetorical Roles Prediction", author = "Nguyen, Hiep and Ngo, Hoang and Bui, Nam", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Da San Martino, Giovanni and Tayyar Madabushi, Harish and Kumar, Ritesh and Sartori, Elisa}, booktitle = "Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.semeval-1.155", doi = "10.18653/v1/2023.semeval-1.155", pages = "1121--1126", abstract = "Rhetorical Roles (RR) prediction is to predict the label of each sentence in legal documents, which is regarded as an emergent task for legal document understanding. In this study, we present a novel method for the RR task by exploiting the long context representation. Specifically, legal documents are known as long texts, in which previous works have no ability to consider the inherent dependencies among sentences. In this paper, we propose GNNRR (Graph Neural Network for Rhetorical Roles Prediction), which is able to model the cross-information for long texts. Furthermore, we develop multitask learning by incorporating label shift prediction (LSP) for segmenting a legal document. The proposed model is evaluated on the SemEval 2023 Task 6 - Legal Eval Understanding Legal Texts for RR sub-task. Accordingly, our method achieves the top 4 in the public leaderboard of the sub-task. Our source code is available for further investigation{\textbackslash}footnote{https://github.com/hiepnh137/SemEval2023-Task6-Rhetorical-Roles}.", }
Rhetorical Roles (RR) prediction is to predict the label of each sentence in legal documents, which is regarded as an emergent task for legal document understanding. In this study, we present a novel method for the RR task by exploiting the long context representation. Specifically, legal documents are known as long texts, in which previous works have no ability to consider the inherent dependencies among sentences. In this paper, we propose GNNRR (Graph Neural Network for Rhetorical Roles Prediction), which is able to model the cross-information for long texts. Furthermore, we develop multitask learning by incorporating label shift prediction (LSP) for segmenting a legal document. The proposed model is evaluated on the SemEval 2023 Task 6 - Legal Eval Understanding Legal Texts for RR sub-task. Accordingly, our method achieves the top 4 in the public leaderboard of the sub-task. Our source code is available for further investigation (https://github.com/hiepnh137/SemEval2023-Task6-Rhetorical-Roles).
[ "Nguyen, Hiep", "Ngo, Hoang", "Bui, Nam" ]
VTCC-NLP at SemEval-2023 Task 6:Long-Text Representation Based on Graph Neural Network for Rhetorical Roles Prediction
semeval-1.155
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
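Each record's bibtext field is a standard `@inproceedings` entry with `key = "value"` pairs. A minimal, regex-based sketch of pulling individual fields out of such a string (the `bib_field` helper is hypothetical, and the `bibtext` value here is a truncated copy of the entry above):

```python
import re
from typing import Optional

# Truncated copy of the first record's bibtext field (illustrative only).
bibtext = ('@inproceedings{nguyen-etal-2023-vtcc, '
           'title = "{VTCC}-{NLP} at {S}em{E}val-2023 Task 6...", '
           'year = "2023", '
           'doi = "10.18653/v1/2023.semeval-1.155", '
           'pages = "1121--1126", }')

def bib_field(entry: str, name: str) -> Optional[str]:
    # Grab `name = "value"`; brace-delimited values (e.g. the editor
    # lists in these entries) are not handled by this sketch.
    m = re.search(r'\b' + re.escape(name) + r'\s*=\s*"([^"]*)"', entry)
    return m.group(1) if m else None

print(bib_field(bibtext, "doi"))    # 10.18653/v1/2023.semeval-1.155
print(bib_field(bibtext, "pages"))  # 1121--1126
```

A full parser would need to handle brace-delimited values and nested braces; for the quoted fields in these records, the pattern above suffices.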
https://aclanthology.org/2023.semeval-1.156.bib
https://aclanthology.org/2023.semeval-1.156/
@inproceedings{hofer-mottahedin-2023-minanto, title = "Minanto at {S}em{E}val-2023 Task 2: Fine-tuning {XLM}-{R}o{BERT}a for Named Entity Recognition on {E}nglish Data", author = {H{\"o}fer, Antonia and Mottahedin, Mina}, editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Da San Martino, Giovanni and Tayyar Madabushi, Harish and Kumar, Ritesh and Sartori, Elisa}, booktitle = "Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.semeval-1.156", doi = "10.18653/v1/2023.semeval-1.156", pages = "1127--1130", abstract = "Within the scope of the shared task MultiCoNER II our aim was to improve the recognition of named entities in English. We as team Minanto fine-tuned a cross-lingual model for Named Entity Recognition on English data and achieved an average F1 score of 51.47{\textbackslash}{\%} in the final submission. We found that a monolingual model works better on English data than a cross-lingual and that the input of external data from earlier Named Entity Recognition tasks provides only minor improvements. In this paper we present our system, discuss our results and analyze the impact of external data.", }
Within the scope of the shared task MultiCoNER II our aim was to improve the recognition of named entities in English. We as team Minanto fine-tuned a cross-lingual model for Named Entity Recognition on English data and achieved an average F1 score of 51.47% in the final submission. We found that a monolingual model works better on English data than a cross-lingual one and that the input of external data from earlier Named Entity Recognition tasks provides only minor improvements. In this paper we present our system, discuss our results and analyze the impact of external data.
[ "H{\\\"o}fer, Antonia", "Mottahedin, Mina" ]
Minanto at SemEval-2023 Task 2: Fine-tuning XLM-RoBERTa for Named Entity Recognition on English Data
semeval-1.156
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.semeval-1.157.bib
https://aclanthology.org/2023.semeval-1.157/
@inproceedings{biales-2023-sab, title = "{SAB} at {S}em{E}val-2023 Task 2: Does Linguistic Information Aid in Named Entity Recognition?", author = "Biales, Siena", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Da San Martino, Giovanni and Tayyar Madabushi, Harish and Kumar, Ritesh and Sartori, Elisa}, booktitle = "Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.semeval-1.157", doi = "10.18653/v1/2023.semeval-1.157", pages = "1131--1137", abstract = "This paper describes the submission to SemEval-2023 Task 2: Multilingual Complex Named Entity Recognition (MultiCoNER II) by team SAB. This task aims to encourage growth in the field of Named Entity Recognition (NER) by focusing on complex and difficult categories of entities, in 12 different language tracks. The task of NER has historically shown the best results when a model incorporates an external knowledge base or gazetteer, however, less research has been applied to examining the effects of incorporating linguistic information into the model. In this task, we explored combining NER, part-of-speech (POS), and dependency relation labels into a multi-task model and report on the findings. We determine that the addition of POS and dependency relation information in this manner does not improve results.", }
This paper describes the submission to SemEval-2023 Task 2: Multilingual Complex Named Entity Recognition (MultiCoNER II) by team SAB. This task aims to encourage growth in the field of Named Entity Recognition (NER) by focusing on complex and difficult categories of entities, in 12 different language tracks. The task of NER has historically shown the best results when a model incorporates an external knowledge base or gazetteer; however, less research has been applied to examining the effects of incorporating linguistic information into the model. In this task, we explored combining NER, part-of-speech (POS), and dependency relation labels into a multi-task model and report on the findings. We determine that the addition of POS and dependency relation information in this manner does not improve results.
[ "Biales, Siena" ]
SAB at SemEval-2023 Task 2: Does Linguistic Information Aid in Named Entity Recognition?
semeval-1.157
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.semeval-1.158.bib
https://aclanthology.org/2023.semeval-1.158/
@inproceedings{muti-etal-2023-uniboes, title = "{U}ni{B}oe{'}s at {S}em{E}val-2023 Task 10: Model-Agnostic Strategies for the Improvement of Hate-Tuned and Generative Models in the Classification of Sexist Posts", author = "Muti, Arianna and Fernicola, Francesco and Barr{\'o}n-Cede{\~n}o, Alberto", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Da San Martino, Giovanni and Tayyar Madabushi, Harish and Kumar, Ritesh and Sartori, Elisa}, booktitle = "Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.semeval-1.158", doi = "10.18653/v1/2023.semeval-1.158", pages = "1138--1147", abstract = "We present our submission to SemEval-2023 Task 10: Explainable Detection of Online Sexism (EDOS). We address all three tasks: Task A consists of identifying whether a post is sexist. If so, Task B attempts to assign it one of four categories: threats, derogation, animosity, and prejudiced discussions. Task C aims for an even more fine-grained classification, divided among 11 classes. Our team UniBoe{'}s experiments with fine-tuning of hate-tuned Transformer-based models and priming for generative models. In addition, we explore model-agnostic strategies, such as data augmentation techniques combined with active learning, as well as obfuscation of identity terms. Our official submissions obtain an F1{\_}score of 0.83 for Task A, 0.58 for Task B and 0.32 for Task C.", }
We present our submission to SemEval-2023 Task 10: Explainable Detection of Online Sexism (EDOS). We address all three tasks: Task A consists of identifying whether a post is sexist. If so, Task B attempts to assign it one of four categories: threats, derogation, animosity, and prejudiced discussions. Task C aims for an even more fine-grained classification, divided among 11 classes. Our team UniBoe's experiments with fine-tuning of hate-tuned Transformer-based models and priming for generative models. In addition, we explore model-agnostic strategies, such as data augmentation techniques combined with active learning, as well as obfuscation of identity terms. Our official submissions obtain an F1 score of 0.83 for Task A, 0.58 for Task B and 0.32 for Task C.
[ "Muti, Arianna", "Fernicola, Francesco", "Barr{\\'o}n-Cede{\\~n}o, Alberto" ]
UniBoe's at SemEval-2023 Task 10: Model-Agnostic Strategies for the Improvement of Hate-Tuned and Generative Models in the Classification of Sexist Posts
semeval-1.158
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.semeval-1.159.bib
https://aclanthology.org/2023.semeval-1.159/
@inproceedings{elkaref-etal-2023-nlpeople, title = "{NLP}eople at {S}em{E}val-2023 Task 2: A Staged Approach for Multilingual Named Entity Recognition", author = "Elkaref, Mohab and Herr, Nathan and Tanaka, Shinnosuke and De Mel, Geeth", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Da San Martino, Giovanni and Tayyar Madabushi, Harish and Kumar, Ritesh and Sartori, Elisa}, booktitle = "Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.semeval-1.159", doi = "10.18653/v1/2023.semeval-1.159", pages = "1148--1153", abstract = "The MultiCoNER II shared task aims at detecting complex, ambiguous named entities with fine-grained types in a low context setting. Previous winning systems incorporated external knowledge bases to retrieve helpful contexts. In our submission we additionally propose splitting the NER task into two stages, a Span Extraction Step, and an Entity Classification step. Our results show that the former does not suffer from the low context setting comparably, and in so leading to a higher overall performance for an external KB-assisted system. We achieve 3rd place on the multilingual track and an average of 6th place overall.", }
The MultiCoNER II shared task aims at detecting complex, ambiguous named entities with fine-grained types in a low-context setting. Previous winning systems incorporated external knowledge bases to retrieve helpful contexts. In our submission we additionally propose splitting the NER task into two stages, a Span Extraction step and an Entity Classification step. Our results show that the former does not suffer comparably from the low-context setting, leading to higher overall performance for an external KB-assisted system. We achieve 3rd place on the multilingual track and an average of 6th place overall.
[ "Elkaref, Mohab", "Herr, Nathan", "Tanaka, Shinnosuke", "De Mel, Geeth" ]
NLPeople at SemEval-2023 Task 2: A Staged Approach for Multilingual Named Entity Recognition
semeval-1.159
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.semeval-1.160.bib
https://aclanthology.org/2023.semeval-1.160/
@inproceedings{sindhu-etal-2023-nitk, title = "{NITK}{\_}{LEGAL} at {S}em{E}val-2023 Task 6: A Hierarchical based system for identification of Rhetorical Roles in legal judgements", author = "Sindhu, Patchipulusu and Gupta, Diya and Meghana, Sanjeevi and Kumar M, Anand", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Da San Martino, Giovanni and Tayyar Madabushi, Harish and Kumar, Ritesh and Sartori, Elisa}, booktitle = "Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.semeval-1.160", doi = "10.18653/v1/2023.semeval-1.160", pages = "1154--1160", abstract = "The ability to automatically recognise the rhetorical roles of sentences in a legal case judgement is a crucial challenge to tackle since it can be useful for a number of activities that come later, such as summarising legal judgements and doing legal searches. The task is exigent since legal case documents typically lack structure, and their rhetorical roles could be subjective. This paper describes SemEval-2023 Task 6: LegalEval: Understanding Legal Texts, Sub-task A: Rhetorical Roles Prediction (RR). We propose a system to automatically generate rhetorical roles of all the sentences in a legal case document using Hierarchical Bi-LSTM CRF model and RoBERTa transformer. We also showcase different techniques used to manipulate dataset to generate a set of varying embeddings and train the Hierarchical Bi-LSTM CRF model to achieve better performance. Among all, model trained with the sent2vec embeddings concatenated with the handcrafted features perform better with the micro f1-score of 0.74 on test data.", }
The ability to automatically recognise the rhetorical roles of sentences in a legal case judgement is a crucial challenge to tackle since it can be useful for a number of activities that come later, such as summarising legal judgements and doing legal searches. The task is exigent since legal case documents typically lack structure, and their rhetorical roles could be subjective. This paper describes SemEval-2023 Task 6: LegalEval: Understanding Legal Texts, Sub-task A: Rhetorical Roles Prediction (RR). We propose a system to automatically generate rhetorical roles of all the sentences in a legal case document using a Hierarchical Bi-LSTM CRF model and the RoBERTa transformer. We also showcase different techniques used to manipulate the dataset to generate a set of varying embeddings and train the Hierarchical Bi-LSTM CRF model to achieve better performance. Among all, the model trained with the sent2vec embeddings concatenated with the handcrafted features performs best, with a micro F1-score of 0.74 on test data.
[ "Sindhu, Patchipulusu", "Gupta, Diya", "Meghana, Sanjeevi", "Kumar M, Anand" ]
NITK_LEGAL at SemEval-2023 Task 6: A Hierarchical based system for identification of Rhetorical Roles in legal judgements
semeval-1.160
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.semeval-1.161.bib
https://aclanthology.org/2023.semeval-1.161/
@inproceedings{rathi-etal-2023-trinity, title = "Trinity at {S}em{E}val-2023 Task 12: Sentiment Analysis for Low-resource {A}frican Languages using {T}witter Dataset", author = "Rathi, Shashank and Pande, Siddhesh and Atkare, Harshwardhan and Tangsali, Rahul and Vyawahare, Aditya and Kadam, Dipali", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Da San Martino, Giovanni and Tayyar Madabushi, Harish and Kumar, Ritesh and Sartori, Elisa}, booktitle = "Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.semeval-1.161", doi = "10.18653/v1/2023.semeval-1.161", pages = "1161--1165", abstract = "In this paper, we have performed sentiment analysis on three African languages (Hausa, Swahili, and Yoruba). We used various deep learning and traditional models paired with a vectorizer for classification and data -preprocessing. We have also used a few data oversampling methods to handle the imbalanced text data. Thus, we could analyze the performance of those models in all the languages by using weighted and macro F1 scores as evaluation metrics.", }
In this paper, we have performed sentiment analysis on three African languages (Hausa, Swahili, and Yoruba). We used various deep learning and traditional models paired with a vectorizer for classification and data preprocessing. We have also used a few data oversampling methods to handle the imbalanced text data. Thus, we could analyze the performance of those models in all the languages by using weighted and macro F1 scores as evaluation metrics.
[ "Rathi, Shashank", "Pande, Siddhesh", "Atkare, Harshwardhan", "Tangsali, Rahul", "Vyawahare, Aditya", "Kadam, Dipali" ]
Trinity at SemEval-2023 Task 12: Sentiment Analysis for Low-resource African Languages using Twitter Dataset
semeval-1.161
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.semeval-1.162.bib
https://aclanthology.org/2023.semeval-1.162/
@inproceedings{billert-conrad-2023-hhu, title = "{HHU} at {S}em{E}val-2023 Task 3: An Adapter-based Approach for News Genre Classification", author = "Billert, Fabian and Conrad, Stefan", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Da San Martino, Giovanni and Tayyar Madabushi, Harish and Kumar, Ritesh and Sartori, Elisa}, booktitle = "Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.semeval-1.162", doi = "10.18653/v1/2023.semeval-1.162", pages = "1166--1171", abstract = "This paper describes our approach for Subtask 1 of Task 3 at SemEval-2023. In this subtask, task participants were asked to classify multilingual news articles for one of three classes: Reporting, Opinion Piece or Satire. By training an AdapterFusion layer composing the task-adapters from different languages, we successfully combine the language-exclusive knowledge and show that this improves the results in nearly all cases, including in zero-shot scenarios.", }
This paper describes our approach for Subtask 1 of Task 3 at SemEval-2023. In this subtask, task participants were asked to classify multilingual news articles for one of three classes: Reporting, Opinion Piece or Satire. By training an AdapterFusion layer composing the task-adapters from different languages, we successfully combine the language-exclusive knowledge and show that this improves the results in nearly all cases, including in zero-shot scenarios.
[ "Billert, Fabian", "Conrad, Stefan" ]
HHU at SemEval-2023 Task 3: An Adapter-based Approach for News Genre Classification
semeval-1.162
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.semeval-1.163.bib
https://aclanthology.org/2023.semeval-1.163/
@inproceedings{alam-etal-2023-gmnlp, title = "{GMNLP} at {S}em{E}val-2023 Task 12: Sentiment Analysis with Phylogeny-Based Adapters", author = "Alam, Md Mahfuz Ibn and Xie, Ruoyu and Faisal, Fahim and Anastasopoulos, Antonios", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Da San Martino, Giovanni and Tayyar Madabushi, Harish and Kumar, Ritesh and Sartori, Elisa}, booktitle = "Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.semeval-1.163", doi = "10.18653/v1/2023.semeval-1.163", pages = "1172--1182", abstract = "This report describes GMU{'}s sentiment analysis system for the SemEval-2023 shared task AfriSenti-SemEval. We participated in all three sub-tasks: Monolingual, Multilingual, and Zero-Shot. Our approach uses models initialized with AfroXLMR-large, a pre-trained multilingual language model trained on African languages and fine-tuned correspondingly. We also introduce augmented training data along with original training data. Alongside finetuning, we perform phylogeny-based adapter-tuning to create several models and ensemble the best models for the final submission. Our system achieves the best F1-score on track 5: Amharic, with 6.2 points higher F1-score than the second-best performing system on this track. Overall, our system ranks 5th among the 10 systems participating in all 15 tracks.", }
This report describes GMU's sentiment analysis system for the SemEval-2023 shared task AfriSenti-SemEval. We participated in all three sub-tasks: Monolingual, Multilingual, and Zero-Shot. Our approach uses models initialized with AfroXLMR-large, a pre-trained multilingual language model trained on African languages and fine-tuned correspondingly. We also introduce augmented training data along with original training data. Alongside finetuning, we perform phylogeny-based adapter-tuning to create several models and ensemble the best models for the final submission. Our system achieves the best F1-score on track 5: Amharic, with 6.2 points higher F1-score than the second-best performing system on this track. Overall, our system ranks 5th among the 10 systems participating in all 15 tracks.
[ "Alam, Md Mahfuz Ibn", "Xie, Ruoyu", "Faisal, Fahim", "Anastasopoulos, Antonios" ]
GMNLP at SemEval-2023 Task 12: Sentiment Analysis with Phylogeny-Based Adapters
semeval-1.163
Poster
2304.12979
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.semeval-1.164.bib
https://aclanthology.org/2023.semeval-1.164/
@inproceedings{singh-tiwary-2023-silp, title = "Silp{\_}nlp at {S}em{E}val-2023 Task 2: Cross-lingual Knowledge Transfer for Mono-lingual Learning", author = "Singh, Sumit and Tiwary, Uma", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Da San Martino, Giovanni and Tayyar Madabushi, Harish and Kumar, Ritesh and Sartori, Elisa}, booktitle = "Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.semeval-1.164", doi = "10.18653/v1/2023.semeval-1.164", pages = "1183--1189", abstract = "Our team silp{\_}nlp participated in SemEval2023 Task 2: MultiCoNER II. Our work made systems for 11 mono-lingual tracks. For leveraging the advantage of all track knowledge we chose transformer-based pretrained models, which have strong cross-lingual transferability. Hence our model trained in two stages, the first stage for multi-lingual learning from all tracks and the second for fine-tuning individual tracks. Our work highlights that the knowledge of all tracks can be transferred to an individual track if the baseline language model has crosslingual features. Our system positioned itself in the top 10 for 4 tracks by scoring 0.7432 macro F1 score for the Hindi track ( 7th rank ) and 0.7322 macro F1 score for the Bangla track ( 9th rank ).", }
Our team silp_nlp participated in SemEval2023 Task 2: MultiCoNER II. Our work made systems for 11 mono-lingual tracks. For leveraging the advantage of all track knowledge we chose transformer-based pretrained models, which have strong cross-lingual transferability. Hence our model trained in two stages, the first stage for multi-lingual learning from all tracks and the second for fine-tuning individual tracks. Our work highlights that the knowledge of all tracks can be transferred to an individual track if the baseline language model has crosslingual features. Our system positioned itself in the top 10 for 4 tracks by scoring 0.7432 macro F1 score for the Hindi track (7th rank) and 0.7322 macro F1 score for the Bangla track (9th rank).
[ "Singh, Sumit", "Tiwary, Uma" ]
Silp_nlp at SemEval-2023 Task 2: Cross-lingual Knowledge Transfer for Mono-lingual Learning
semeval-1.164
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.semeval-1.165.bib
https://aclanthology.org/2023.semeval-1.165/
@inproceedings{ramanathan-etal-2023-techssn, title = "{T}ech{SSN} at {S}em{E}val-2023 Task 12: Monolingual Sentiment Classification in {H}ausa Tweets", author = "Ramanathan, Nishaanth and Sivanaiah, Rajalakshmi and S, Angel Deborah and Thanka Nadar Thanagathai, Mirnalinee", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Da San Martino, Giovanni and Tayyar Madabushi, Harish and Kumar, Ritesh and Sartori, Elisa}, booktitle = "Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.semeval-1.165", doi = "10.18653/v1/2023.semeval-1.165", pages = "1190--1194", abstract = "This paper elaborates on our work in designing a system for SemEval 2023 Task 12: AfriSentiSemEval, which involves sentiment analysis for low-resource African languages using the Twitter dataset. We utilised a pre-trained model to perform sentiment classification in Hausa language tweets. We used a multilingual version of the roBERTa model, which is pretrained on 100 languages, to classify sentiments in Hausa. To tokenize the text, we used the AfriBERTa model, which is specifically pretrained on African languages.", }
This paper elaborates on our work in designing a system for SemEval 2023 Task 12: AfriSentiSemEval, which involves sentiment analysis for low-resource African languages using the Twitter dataset. We utilised a pre-trained model to perform sentiment classification in Hausa language tweets. We used a multilingual version of the roBERTa model, which is pretrained on 100 languages, to classify sentiments in Hausa. To tokenize the text, we used the AfriBERTa model, which is specifically pretrained on African languages.
[ "Ramanathan, Nishaanth", "Sivanaiah, Rajalakshmi", "S, Angel Deborah", "Thanka Nadar Thanagathai, Mirnalinee" ]
TechSSN at SemEval-2023 Task 12: Monolingual Sentiment Classification in Hausa Tweets
semeval-1.165
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.semeval-1.166.bib
https://aclanthology.org/2023.semeval-1.166/
@inproceedings{sorensen-etal-2023-juage, title = "{JUAGE} at {S}em{E}val-2023 Task 10: Parameter Efficient Classification", author = "Sorensen, Jeffrey and Korre, Katerina and Pavlopoulos, John and Tomanek, Katrin and Thain, Nithum and Dixon, Lucas and Laugier, L{\'e}o", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Da San Martino, Giovanni and Tayyar Madabushi, Harish and Kumar, Ritesh and Sartori, Elisa}, booktitle = "Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.semeval-1.166", doi = "10.18653/v1/2023.semeval-1.166", pages = "1195--1203", abstract = "Using pre-trained language models to implement classifiers from small to modest amounts of training data is an area of active research. The ability of large language models to generalize from few-shot examples and to produce strong classifiers is extended using the engineering approach of parameter-efficient tuning. Using the Explainable Detection of Online Sexism (EDOS) training data and a small number of trainable weights to create a tuned prompt vector, a competitive model for this task was built, which was top-ranked in Subtask B.", }
Using pre-trained language models to implement classifiers from small to modest amounts of training data is an area of active research. The ability of large language models to generalize from few-shot examples and to produce strong classifiers is extended using the engineering approach of parameter-efficient tuning. Using the Explainable Detection of Online Sexism (EDOS) training data and a small number of trainable weights to create a tuned prompt vector, a competitive model for this task was built, which was top-ranked in Subtask B.
[ "Sorensen, Jeffrey", "Korre, Katerina", "Pavlopoulos, John", "Tomanek, Katrin", "Thain, Nithum", "Dixon, Lucas", "Laugier, L{\\'e}o" ]
JUAGE at SemEval-2023 Task 10: Parameter Efficient Classification
semeval-1.166
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.semeval-1.167.bib
https://aclanthology.org/2023.semeval-1.167/
@inproceedings{mihalcea-nisioi-2023-clark, title = "{C}lark {K}ent at {S}em{E}val-2023 Task 5: {SVM}s, Transformers, and Pixels for Clickbait Spoiling", author = "Mihalcea, Dragos-stefan and Nisioi, Sergiu", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Da San Martino, Giovanni and Tayyar Madabushi, Harish and Kumar, Ritesh and Sartori, Elisa}, booktitle = "Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.semeval-1.167", doi = "10.18653/v1/2023.semeval-1.167", pages = "1204--1212", abstract = "In this paper we present an analysis of our approaches for the 2023 SemEval-2023 Clickbait Challenge. We only participated in the sub-task aiming at identifying different clikcbait spoiling types comparing several machine learning and deep learning approaches. Our analysis confirms previous results on this task and show that automatic methods are able to reach approximately 70{\textbackslash}{\%} accuracy at predicting what type of additional content is needed to mitigate sensationalistic posts on social media. Furthermore, we provide a qualitative analysis of the results, showing that the models may do better in practice than the metric indicates since the evaluate does not depend only on the predictor, but also on the typology we choose to define clickbait spoiling.", }
In this paper we present an analysis of our approaches for the SemEval-2023 Clickbait Challenge. We only participated in the sub-task aiming at identifying different clickbait spoiling types, comparing several machine learning and deep learning approaches. Our analysis confirms previous results on this task and shows that automatic methods are able to reach approximately 70% accuracy at predicting what type of additional content is needed to mitigate sensationalistic posts on social media. Furthermore, we provide a qualitative analysis of the results, showing that the models may do better in practice than the metric indicates, since the evaluation does not depend only on the predictor, but also on the typology we choose to define clickbait spoiling.
[ "Mihalcea, Dragos-stefan", "Nisioi, Sergiu" ]
Clark Kent at SemEval-2023 Task 5: SVMs, Transformers, and Pixels for Clickbait Spoiling
semeval-1.167
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.semeval-1.168.bib
https://aclanthology.org/2023.semeval-1.168/
@inproceedings{al-qarqaz-abdullah-2023-team, title = "Team {JUSTR}00 at {S}em{E}val-2023 Task 3: Transformers for News Articles Classification", author = "Al-Qarqaz, Ahmed and Abdullah, Malak", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Da San Martino, Giovanni and Tayyar Madabushi, Harish and Kumar, Ritesh and Sartori, Elisa}, booktitle = "Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.semeval-1.168", doi = "10.18653/v1/2023.semeval-1.168", pages = "1213--1216", abstract = "The SemEval-2023 Task 3 competition offers participants a multi-lingual dataset with three schemes, one for each subtask. The competition challenges participants to construct machine learning systems that can categorize news articles based on their nature and style of writing. We experiment with many state-of-the-art transformer-based language models proposed in the natural language processing literature and report the results of the best ones. Our top performing model is based on a transformer called {``}Longformer{''} and has achieved an F1-Micro score of 0.256 on the English version of subtask-1 and F1-Macro of 0.442 on subtask-2 on the test data. We also experiment with a number of state-of-the-art multi-lingual transformer-based models and report the results of the best performing ones.", }
The SemEval-2023 Task 3 competition offers participants a multi-lingual dataset with three schemes, one for each subtask. The competition challenges participants to construct machine learning systems that can categorize news articles based on their nature and style of writing. We experiment with many state-of-the-art transformer-based language models proposed in the natural language processing literature and report the results of the best ones. Our top performing model is based on a transformer called {``}Longformer{''} and has achieved an F1-Micro score of 0.256 on the English version of subtask-1 and F1-Macro of 0.442 on subtask-2 on the test data. We also experiment with a number of state-of-the-art multi-lingual transformer-based models and report the results of the best performing ones.
[ "Al-Qarqaz, Ahmed", "Abdullah, Malak" ]
Team JUSTR00 at SemEval-2023 Task 3: Transformers for News Articles Classification
semeval-1.168
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.semeval-1.169.bib
https://aclanthology.org/2023.semeval-1.169/
@inproceedings{stormer-etal-2023-sam, title = "{S}am Miller at {S}em{E}val-2023 Task 5: Classification and Type-specific Spoiler Extraction Using {XLNET} and Other Transformer Models", author = {St{\"o}rmer, Pia and Esser, Tobias and Thomasius, Patrick}, editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Da San Martino, Giovanni and Tayyar Madabushi, Harish and Kumar, Ritesh and Sartori, Elisa}, booktitle = "Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.semeval-1.169", doi = "10.18653/v1/2023.semeval-1.169", pages = "1217--1224", abstract = "This paper proposes an approach to classify and an approach to generate spoilers for clickbait articles and posts. For the spoiler classification, an XLNet model was fine-tuned. With an accuracy of 0.66, 2 out of 3 spoilers are predicted accurately. The spoiler generation approach involves preprocessing the clickbait text and post-processing the output to fit the spoiler type. The approach is evaluated on a test dataset of 1000 posts, with the best result for spoiler generation achieved by fine-tuning a RoBERTa Large model with a small learning rate and sample size, reaching a BLEU score of 0.311. The paper provides an overview of the models and techniques used and discusses the experimental setup.", }
This paper proposes an approach to classify and an approach to generate spoilers for clickbait articles and posts. For the spoiler classification, an XLNet model was fine-tuned. With an accuracy of 0.66, 2 out of 3 spoilers are predicted accurately. The spoiler generation approach involves preprocessing the clickbait text and post-processing the output to fit the spoiler type. The approach is evaluated on a test dataset of 1000 posts, with the best result for spoiler generation achieved by fine-tuning a RoBERTa Large model with a small learning rate and sample size, reaching a BLEU score of 0.311. The paper provides an overview of the models and techniques used and discusses the experimental setup.
[ "St{\\\"o}rmer, Pia", "Esser, Tobias", "Thomasius, Patrick" ]
Sam Miller at SemEval-2023 Task 5: Classification and Type-specific Spoiler Extraction Using XLNET and Other Transformer Models
semeval-1.169
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.semeval-1.170.bib
https://aclanthology.org/2023.semeval-1.170/
@inproceedings{arampatzis-etal-2023-duth, title = "{DUTH} at {S}em{E}val-2023 Task 9: An Ensemble Approach for {T}witter Intimacy Analysis", author = "Arampatzis, Giorgos and Perifanis, Vasileios and Symeonidis, Symeon and Arampatzis, Avi", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Da San Martino, Giovanni and Tayyar Madabushi, Harish and Kumar, Ritesh and Sartori, Elisa}, booktitle = "Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.semeval-1.170", doi = "10.18653/v1/2023.semeval-1.170", pages = "1225--1230", abstract = "This work presents the approach developed by the DUTH team for participating in the SemEval-2023 Task 9: Multilingual Tweet Intimacy Analysis. Our results show that pre-processing techniques do not affect the learning performance for the task of multilingual intimacy analysis. In addition, we show that fine-tuning a transformer-based model does not provide advantages over using the pre-trained model to generate text embeddings and using the resulting representations to train simpler and more efficient models such as MLP. Finally, we utilize an ensemble of classifiers, including three MLPs with different architectures and a CatBoost model, to improve the regression accuracy.", }
This work presents the approach developed by the DUTH team for participating in the SemEval-2023 Task 9: Multilingual Tweet Intimacy Analysis. Our results show that pre-processing techniques do not affect the learning performance for the task of multilingual intimacy analysis. In addition, we show that fine-tuning a transformer-based model does not provide advantages over using the pre-trained model to generate text embeddings and using the resulting representations to train simpler and more efficient models such as MLP. Finally, we utilize an ensemble of classifiers, including three MLPs with different architectures and a CatBoost model, to improve the regression accuracy.
[ "Arampatzis, Giorgos", "Perifanis, Vasileios", "Symeonidis, Symeon", "Arampatzis, Avi" ]
DUTH at SemEval-2023 Task 9: An Ensemble Approach for Twitter Intimacy Analysis
semeval-1.170
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
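The ensembling strategy the DUTH abstract describes — training several simple regressors on frozen transformer embeddings and averaging their intimacy-score predictions — can be sketched as follows. This is a minimal illustration with toy numbers; the unweighted averaging rule and the example scores are assumptions, since the abstract does not specify how the ensemble combines its members.

```python
import numpy as np

def ensemble_predict(predictions):
    """Average the intimacy-score predictions of several regressors.

    `predictions` is a list of 1-D arrays, one per model (e.g. three
    MLPs and a gradient-boosting model), each scoring the same tweets.
    The combination rule here is a plain unweighted mean (an assumption).
    """
    stacked = np.stack(predictions)   # shape: (n_models, n_tweets)
    return stacked.mean(axis=0)

# Toy example: three hypothetical models scoring two tweets on a 1-5 scale.
scores = [np.array([2.0, 4.0]), np.array([2.5, 3.5]), np.array([3.0, 4.5])]
print(ensemble_predict(scores))
```

Averaging regressor outputs tends to reduce variance, which matches the abstract's motivation for preferring several cheap models over one fine-tuned transformer.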
https://aclanthology.org/2023.semeval-1.171.bib
https://aclanthology.org/2023.semeval-1.171/
@inproceedings{rallabandi-etal-2023-sss, title = "{SSS} at {S}em{E}val-2023 Task 10: Explainable Detection of Online Sexism using Majority Voted Fine-Tuned Transformers", author = "Rallabandi, Sriya and Singhal, Sanchit and Seth, Pratinav", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Da San Martino, Giovanni and Tayyar Madabushi, Harish and Kumar, Ritesh and Sartori, Elisa}, booktitle = "Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.semeval-1.171", doi = "10.18653/v1/2023.semeval-1.171", pages = "1231--1236", abstract = "This paper describes our submission to Task 10 at SemEval 2023: Explainable Detection of Online Sexism (EDOS), divided into three subtasks. The recent rise of social media platforms has been accompanied by a disproportionate increase in the sexism experienced by women on these platforms. This has made detecting and explaining online sexist content more important than ever to make social media safer and more accessible for women. Our approach consists of experimenting with and fine-tuning BERT-based models and using a majority-voting ensemble model that outperforms the individual baseline model scores. Our system achieves a macro F1 score of 0.8392 for Task A, 0.6092 for Task B, and 0.4319 for Task C.", }
This paper describes our submission to Task 10 at SemEval 2023: Explainable Detection of Online Sexism (EDOS), divided into three subtasks. The recent rise of social media platforms has been accompanied by a disproportionate increase in the sexism experienced by women on these platforms. This has made detecting and explaining online sexist content more important than ever to make social media safer and more accessible for women. Our approach consists of experimenting with and fine-tuning BERT-based models and using a majority-voting ensemble model that outperforms the individual baseline model scores. Our system achieves a macro F1 score of 0.8392 for Task A, 0.6092 for Task B, and 0.4319 for Task C.
[ "Rallabandi, Sriya", "Singhal, Sanchit", "Seth, Pratinav" ]
SSS at SemEval-2023 Task 10: Explainable Detection of Online Sexism using Majority Voted Fine-Tuned Transformers
semeval-1.171
Poster
2304.03518
[ "https://github.com/sriya26/semeval" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.semeval-1.172.bib
https://aclanthology.org/2023.semeval-1.172/
@inproceedings{hasanain-etal-2023-qcri, title = "{QCRI} at {S}em{E}val-2023 Task 3: News Genre, Framing and Persuasion Techniques Detection Using Multilingual Models", author = "Hasanain, Maram and El-Shangiti, Ahmed and Nandi, Rabindra Nath and Nakov, Preslav and Alam, Firoj", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Da San Martino, Giovanni and Tayyar Madabushi, Harish and Kumar, Ritesh and Sartori, Elisa}, booktitle = "Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.semeval-1.172", doi = "10.18653/v1/2023.semeval-1.172", pages = "1237--1244", abstract = "Misinformation spreading in mainstream and social media has been misleading users in different ways. Manual detection and verification efforts by journalists and fact-checkers can no longer cope with the great scale and quick spread of misleading information. This motivated research and industry efforts to develop systems for analyzing and verifying news spreading online. The SemEval-2023 Task 3 is an attempt to address several subtasks under this overarching problem, targeting writing techniques used in news articles to affect readers{'} opinions. The task addressed three subtasks with six languages, in addition to three {``}surprise{''} test languages, resulting in 27 different test setups. This paper describes our participating system for this task. Our team is one of the 6 teams that successfully submitted runs for all setups. The official results show that our system is ranked among the top 3 systems for 10 out of the 27 setups.", }
Misinformation spreading in mainstream and social media has been misleading users in different ways. Manual detection and verification efforts by journalists and fact-checkers can no longer cope with the great scale and quick spread of misleading information. This motivated research and industry efforts to develop systems for analyzing and verifying news spreading online. The SemEval-2023 Task 3 is an attempt to address several subtasks under this overarching problem, targeting writing techniques used in news articles to affect readers{'} opinions. The task addressed three subtasks with six languages, in addition to three {``}surprise{''} test languages, resulting in 27 different test setups. This paper describes our participating system for this task. Our team is one of the 6 teams that successfully submitted runs for all setups. The official results show that our system is ranked among the top 3 systems for 10 out of the 27 setups.
[ "Hasanain, Maram", "El-Shangiti, Ahmed", "Nandi, Rabindra Nath", "Nakov, Preslav", "Alam, Firoj" ]
QCRI at SemEval-2023 Task 3: News Genre, Framing and Persuasion Techniques Detection Using Multilingual Models
semeval-1.172
Poster
2305.03336
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.semeval-1.173.bib
https://aclanthology.org/2023.semeval-1.173/
@inproceedings{ningthoujam-etal-2023-researchteam, title = "{R}esearch{T}eam{\_}{HCN} at {S}em{E}val-2023 Task 6: A knowledge enhanced transformers based legal {NLP} system", author = "Ningthoujam, Dhanachandra and Patel, Pinal and Kareddula, Rajkamal and Vangipuram, Ramanand", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Da San Martino, Giovanni and Tayyar Madabushi, Harish and Kumar, Ritesh and Sartori, Elisa}, booktitle = "Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.semeval-1.173", doi = "10.18653/v1/2023.semeval-1.173", pages = "1245--1253", abstract = "This paper presents our work on LegalEval (understanding legal text), one of the tasks in SemEval-2023. It comprises three sub-tasks, namely Rhetorical Roles (RR), Legal Named Entity Recognition (L-NER), and Court Judgement Prediction with Explanation (CJPE). We developed different deep-learning models for each sub-task. For RR, we developed a multi-task learning model with contextual sequential sentence classification as the main task and non-contextual single sentence prediction as the secondary task. Our model achieved an F1-score of 76.50{\%} on the unseen test set, and we attained the 14th position on the leaderboard. For the L-NER problem, we have designed a hybrid model, consisting of a multi-stage knowledge transfer learning framework and a rule-based system. This model achieved an F1-score of 91.20{\%} on the blind test set and attained the top position on the final leaderboard. Finally, for the CJPE task, we used a hierarchical approach and could get around 66.67{\%} F1-score on judgment prediction and 45.83{\%} F1-score on the explainability of the CJPE task, and we attained 8th position on the leaderboard for this sub-task.", }
This paper presents our work on LegalEval (understanding legal text), one of the tasks in SemEval-2023. It comprises three sub-tasks, namely Rhetorical Roles (RR), Legal Named Entity Recognition (L-NER), and Court Judgement Prediction with Explanation (CJPE). We developed different deep-learning models for each sub-task. For RR, we developed a multi-task learning model with contextual sequential sentence classification as the main task and non-contextual single sentence prediction as the secondary task. Our model achieved an F1-score of 76.50{\%} on the unseen test set, and we attained the 14th position on the leaderboard. For the L-NER problem, we have designed a hybrid model, consisting of a multi-stage knowledge transfer learning framework and a rule-based system. This model achieved an F1-score of 91.20{\%} on the blind test set and attained the top position on the final leaderboard. Finally, for the CJPE task, we used a hierarchical approach and could get around 66.67{\%} F1-score on judgment prediction and 45.83{\%} F1-score on the explainability of the CJPE task, and we attained 8th position on the leaderboard for this sub-task.
[ "Ningthoujam, Dhanachandra", "Patel, Pinal", "Kareddula, Rajkamal", "Vangipuram, Ramanand" ]
ResearchTeam_HCN at SemEval-2023 Task 6: A knowledge enhanced transformers based legal NLP system
semeval-1.173
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.semeval-1.174.bib
https://aclanthology.org/2023.semeval-1.174/
@inproceedings{chatterjee-etal-2023-lsjsp, title = "{LSJSP} at {S}em{E}val-2023 Task 2: {FTBC}: A {F}ast{T}ext based framework with pre-trained {BERT} for {NER}", author = "Chatterjee, Shilpa and Evenss, Leo and Bhattacharyya, Pramit and Mondal, Joydeep", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Da San Martino, Giovanni and Tayyar Madabushi, Harish and Kumar, Ritesh and Sartori, Elisa}, booktitle = "Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.semeval-1.174", doi = "10.18653/v1/2023.semeval-1.174", pages = "1254--1259", abstract = "This study introduces the system submitted to SemEval-2023 Task 2: MultiCoNER II (Multilingual Complex Named Entity Recognition) by the LSJSP team. We propose FTBC, a FastText-based framework with pre-trained BERT for NER tasks with complex entities over a noisy dataset. Our system achieves an average of 58.27{\%} F1 score (fine-grained) and 75.79{\%} F1 score (coarse-grained) across all languages. FTBC outperforms the baseline BERT-CRF model on all 12 monolingual tracks.", }
This study introduces the system submitted to SemEval-2023 Task 2: MultiCoNER II (Multilingual Complex Named Entity Recognition) by the LSJSP team. We propose FTBC, a FastText-based framework with pre-trained BERT for NER tasks with complex entities over a noisy dataset. Our system achieves an average of 58.27{\%} F1 score (fine-grained) and 75.79{\%} F1 score (coarse-grained) across all languages. FTBC outperforms the baseline BERT-CRF model on all 12 monolingual tracks.
[ "Chatterjee, Shilpa", "Evenss, Leo", "Bhattacharyya, Pramit", "Mondal, Joydeep" ]
LSJSP at SemEval-2023 Task 2: FTBC: A FastText based framework with pre-trained BERT for NER
semeval-1.174
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.semeval-1.175.bib
https://aclanthology.org/2023.semeval-1.175/
@inproceedings{feely-etal-2023-qcon, title = "{QC}on at {S}em{E}val-2023 Task 10: Data Augmentation and Model Ensembling for Detection of Online Sexism", author = "Feely, Weston and Gupta, Prabhakar and Mohanty, Manas Ranjan and Chon, Timothy and Kundu, Tuhin and Singh, Vijit and Atluri, Sandeep and Roosta, Tanya and Ghaderi, Viviane and Schulam, Peter", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Da San Martino, Giovanni and Tayyar Madabushi, Harish and Kumar, Ritesh and Sartori, Elisa}, booktitle = "Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.semeval-1.175", doi = "10.18653/v1/2023.semeval-1.175", pages = "1260--1270", abstract = "The web contains an abundance of user-generated content. While this content is useful for many applications, it poses many challenges due to the presence of offensive, biased, and overall toxic language. In this work, we present a system that identifies and classifies sexist content at different levels of granularity. Using transformer-based models, we explore the value of data augmentation, the use of ensemble methods, and in-context learning with foundation models to tackle the task. We evaluate the different components of our system both quantitatively and qualitatively. Our best systems achieve an F1 score of 0.84 for the binary classification task aiming to identify whether a given content is sexist or not, and 0.64 and 0.47 for the two multi-class tasks that aim to identify the coarse and fine-grained types of sexism present in the given content respectively.", }
The web contains an abundance of user-generated content. While this content is useful for many applications, it poses many challenges due to the presence of offensive, biased, and overall toxic language. In this work, we present a system that identifies and classifies sexist content at different levels of granularity. Using transformer-based models, we explore the value of data augmentation, the use of ensemble methods, and in-context learning with foundation models to tackle the task. We evaluate the different components of our system both quantitatively and qualitatively. Our best systems achieve an F1 score of 0.84 for the binary classification task aiming to identify whether a given content is sexist or not, and 0.64 and 0.47 for the two multi-class tasks that aim to identify the coarse and fine-grained types of sexism present in the given content respectively.
[ "Feely, Weston", "Gupta, Prabhakar", "Mohanty, Manas Ranjan", "Chon, Timothy", "Kundu, Tuhin", "Singh, Vijit", "Atluri, Sandeep", "Roosta, Tanya", "Ghaderi, Viviane", "Schulam, Peter" ]
QCon at SemEval-2023 Task 10: Data Augmentation and Model Ensembling for Detection of Online Sexism
semeval-1.175
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.semeval-1.176.bib
https://aclanthology.org/2023.semeval-1.176/
@inproceedings{patil-etal-2023-rahul, title = "Rahul Patil at {S}em{E}val-2023 Task 1: {V}-{WSD}: Visual Word Sense Disambiguation", author = "Patil, Rahul and Patel, Pinal and Patel, Charin and Verma, Mangal", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Da San Martino, Giovanni and Tayyar Madabushi, Harish and Kumar, Ritesh and Sartori, Elisa}, booktitle = "Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.semeval-1.176", doi = "10.18653/v1/2023.semeval-1.176", pages = "1271--1275", abstract = "In this paper, for SemEval-2023 Task 1: V-WSD, we propose an ensemble of two neural network systems that ranks 10 images given a word and limited textual context. We have used OpenAI CLIP-based models for the English language and multilingual text-to-text translation models for Farsi-to-English and Italian-to-English. Additionally, we propose a system that learns from multilingual BERT-base embeddings for text and ResNet-101 embeddings for the image. Taking all three languages into account, this system achieved the fourth rank.", }
In this paper, for SemEval-2023 Task 1: V-WSD, we propose an ensemble of two neural network systems that ranks 10 images given a word and limited textual context. We have used OpenAI CLIP-based models for the English language and multilingual text-to-text translation models for Farsi-to-English and Italian-to-English. Additionally, we propose a system that learns from multilingual BERT-base embeddings for text and ResNet-101 embeddings for the image. Taking all three languages into account, this system achieved the fourth rank.
[ "Patil, Rahul", "Patel, Pinal", "Patel, Charin", "Verma, Mangal" ]
Rahul Patil at SemEval-2023 Task 1: V-WSD: Visual Word Sense Disambiguation
semeval-1.176
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.semeval-1.177.bib
https://aclanthology.org/2023.semeval-1.177/
@inproceedings{sriram-etal-2023-posh, title = "{P}o{S}h at {S}em{E}val-2023 Task 10: Explainable Detection of Online Sexism", author = "Sriram, Shruti and Chandran, Padma Pooja and M R, Shrijith", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Da San Martino, Giovanni and Tayyar Madabushi, Harish and Kumar, Ritesh and Sartori, Elisa}, booktitle = "Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.semeval-1.177", doi = "10.18653/v1/2023.semeval-1.177", pages = "1276--1281", abstract = "To precisely identify the different forms of online sexism, we utilize several sentence transformer models such as ALBERT, BERT, RoBERTa, DistilBERT, and XLNet. By combining the predictions from these models, we can generate a more comprehensive and improved result. Each transformer model is trained after pre-processing the data from the training dataset, ensuring that the models are effective at detecting and classifying instances of online sexism. For Task A, the model had to classify the texts as sexist or not sexist. We implemented ALBERT, an NLP-based sentence transformer. For Task B, we implemented BERT, RoBERTa, DistilBERT and XLNet and took the mode of predictions for each text as the final prediction for the given text. For Task C, we implemented ALBERT, BERT, RoBERTa, DistilBERT and XLNet and took the mode of predictions as the final prediction for the given text.", }
To precisely identify the different forms of online sexism, we utilize several sentence transformer models such as ALBERT, BERT, RoBERTa, DistilBERT, and XLNet. By combining the predictions from these models, we can generate a more comprehensive and improved result. Each transformer model is trained after pre-processing the data from the training dataset, ensuring that the models are effective at detecting and classifying instances of online sexism. For Task A, the model had to classify the texts as sexist or not sexist. We implemented ALBERT, an NLP-based sentence transformer. For Task B, we implemented BERT, RoBERTa, DistilBERT and XLNet and took the mode of predictions for each text as the final prediction for the given text. For Task C, we implemented ALBERT, BERT, RoBERTa, DistilBERT and XLNet and took the mode of predictions as the final prediction for the given text.
[ "Sriram, Shruti", "Chandran, Padma Pooja", "M R, Shrijith" ]
PoSh at SemEval-2023 Task 10: Explainable Detection of Online Sexism
semeval-1.177
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
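The mode-of-predictions scheme described in the PoSh abstract above amounts to a per-example majority vote across classifiers. A minimal sketch, assuming hard (label-level) voting; the model names, labels, and the first-seen tie-breaking rule are illustrative assumptions not stated in the paper:

```python
from collections import Counter

def majority_vote(model_predictions):
    """Return the modal label per example across several classifiers.

    `model_predictions` maps model name -> list of predicted labels
    (one label per text). Ties go to the label seen first, which is
    one reasonable convention the abstract does not specify.
    """
    per_example = zip(*model_predictions.values())
    return [Counter(labels).most_common(1)[0][0] for labels in per_example]

# Toy example: three hypothetical fine-tuned models labelling two texts.
preds = {
    "bert":    ["sexist", "not sexist"],
    "roberta": ["sexist", "not sexist"],
    "xlnet":   ["not sexist", "not sexist"],
}
print(majority_vote(preds))   # ['sexist', 'not sexist']
```

With an odd number of voters over a binary label set, as in Task A here, the vote is never tied, which is one practical reason to ensemble an odd number of models.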
https://aclanthology.org/2023.semeval-1.178.bib
https://aclanthology.org/2023.semeval-1.178/
@inproceedings{zhao-etal-2023-legal, title = "Legal{\_}try at {S}em{E}val-2023 Task 6: Voting Heterogeneous Models for Entities identification in Legal Documents", author = "Zhao, Junzhe and Wang, Yingxi and Rusnachenko, Nicolay and Liang, Huizhi", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Da San Martino, Giovanni and Tayyar Madabushi, Harish and Kumar, Ritesh and Sartori, Elisa}, booktitle = "Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.semeval-1.178", doi = "10.18653/v1/2023.semeval-1.178", pages = "1282--1286", abstract = "Named Entity Recognition (NER) is a subtask of Natural Language Processing (NLP) that involves identifying and categorizing named entities. The resulting annotation makes unstructured natural texts applicable for other NLP tasks, including information retrieval, question answering, and machine translation. NER is also essential in the legal domain as an initial stage in extracting relevant entities. However, legal texts contain domain-specific named entities, such as applicants, defendants, courts, statutes, and articles. The latter makes standard named entity recognizers incompatible with legal documents. This paper proposes an approach combining multiple models{'} results via a voting mechanism for unique entity identification in legal texts. This endeavor focuses on extracting legal named entities, and the specific assignment (task B) is to create a legal NER system for unique entity annotation in legal documents. The results of our experiments and system implementation are published in \url{https://github.com/SuperEDG/Legal_Project}.", }
Named Entity Recognition (NER) is a subtask of Natural Language Processing (NLP) that involves identifying and categorizing named entities. The resulting annotation makes unstructured natural texts applicable for other NLP tasks, including information retrieval, question answering, and machine translation. NER is also essential in the legal domain as an initial stage in extracting relevant entities. However, legal texts contain domain-specific named entities, such as applicants, defendants, courts, statutes, and articles. The latter makes standard named entity recognizers incompatible with legal documents. This paper proposes an approach combining multiple models{'} results via a voting mechanism for unique entity identification in legal texts. This endeavor focuses on extracting legal named entities, and the specific assignment (task B) is to create a legal NER system for unique entity annotation in legal documents. The results of our experiments and system implementation are published in \url{https://github.com/SuperEDG/Legal_Project}.
[ "Zhao, Junzhe", "Wang, Yingxi", "Rusnachenko, Nicolay", "Liang, Huizhi" ]
Legal_try at SemEval-2023 Task 6: Voting Heterogeneous Models for Entities identification in Legal Documents
semeval-1.178
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.semeval-1.179.bib
https://aclanthology.org/2023.semeval-1.179/
@inproceedings{bevan-etal-2023-mdc, title = "{MDC} at {S}em{E}val-2023 Task 7: Fine-tuning Transformers for Textual Entailment Prediction and Evidence Retrieval in Clinical Trials", author = "Bevan, Robert and Turbitt, Ois{\'\i}n and Aboshokor, Mouhamad", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Da San Martino, Giovanni and Tayyar Madabushi, Harish and Kumar, Ritesh and Sartori, Elisa}, booktitle = "Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.semeval-1.179", doi = "10.18653/v1/2023.semeval-1.179", pages = "1287--1292", abstract = "We present our entry to the Multi-evidence Natural Language Inference for Clinical Trial Data task at SemEval 2023. We submitted entries for both the evidence retrieval and textual entailment sub-tasks. For the evidence retrieval task, we fine-tuned the PubMedBERT transformer model to extract relevant evidence from clinical trial data given a hypothesis concerning either a single clinical trial or pair of clinical trials. Our best performing model achieved an F1 score of 0.804. For the textual entailment task, in which systems had to predict whether a hypothesis about either a single clinical trial or pair of clinical trials is true or false, we fine-tuned the BioLinkBERT transformer model. We passed our evidence retrieval model{'}s output into our textual entailment model and submitted its output for the evaluation. Our best performing model achieved an F1 score of 0.695.", }
We present our entry to the Multi-evidence Natural Language Inference for Clinical Trial Data task at SemEval 2023. We submitted entries for both the evidence retrieval and textual entailment sub-tasks. For the evidence retrieval task, we fine-tuned the PubMedBERT transformer model to extract relevant evidence from clinical trial data given a hypothesis concerning either a single clinical trial or pair of clinical trials. Our best performing model achieved an F1 score of 0.804. For the textual entailment task, in which systems had to predict whether a hypothesis about either a single clinical trial or pair of clinical trials is true or false, we fine-tuned the BioLinkBERT transformer model. We passed our evidence retrieval model{'}s output into our textual entailment model and submitted its output for the evaluation. Our best performing model achieved an F1 score of 0.695.
[ "Bevan, Robert", "Turbitt, Ois{\\'\\i}n", "Aboshokor, Mouhamad" ]
MDC at SemEval-2023 Task 7: Fine-tuning Transformers for Textual Entailment Prediction and Evidence Retrieval in Clinical Trials
semeval-1.179
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.semeval-1.180.bib
https://aclanthology.org/2023.semeval-1.180/
@inproceedings{nigam-etal-2023-nonet, title = "Nonet at {S}em{E}val-2023 Task 6: Methodologies for Legal Evaluation", author = "Nigam, Shubham Kumar and Deroy, Aniket and Shallum, Noel and Mishra, Ayush Kumar and Roy, Anup and Mishra, Shubham Kumar and Bhattacharya, Arnab and Ghosh, Saptarshi and Ghosh, Kripabandhu", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Da San Martino, Giovanni and Tayyar Madabushi, Harish and Kumar, Ritesh and Sartori, Elisa}, booktitle = "Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.semeval-1.180", doi = "10.18653/v1/2023.semeval-1.180", pages = "1293--1303", abstract = "This paper describes our submission to the SemEval-2023 for Task 6 on LegalEval: Understanding Legal Texts. Our submission concentrated on three subtasks: Legal Named Entity Recognition (L-NER) for Task-B, Legal Judgment Prediction (LJP) for Task-C1, and Court Judgment Prediction with Explanation (CJPE) for Task-C2. We conducted various experiments on these subtasks and presented the results in detail, including data statistics and methodology. It is worth noting that legal tasks, such as those tackled in this research, have been gaining importance due to the increasing need to automate legal analysis and support. Our team obtained competitive rankings of 15th, 11th, and 1st in Task-B, Task-C1, and Task-C2, respectively, as reported on the leaderboard.", }
This paper describes our submission to the SemEval-2023 for Task 6 on LegalEval: Understanding Legal Texts. Our submission concentrated on three subtasks: Legal Named Entity Recognition (L-NER) for Task-B, Legal Judgment Prediction (LJP) for Task-C1, and Court Judgment Prediction with Explanation (CJPE) for Task-C2. We conducted various experiments on these subtasks and presented the results in detail, including data statistics and methodology. It is worth noting that legal tasks, such as those tackled in this research, have been gaining importance due to the increasing need to automate legal analysis and support. Our team obtained competitive rankings of 15th, 11th, and 1st in Task-B, Task-C1, and Task-C2, respectively, as reported on the leaderboard.
[ "Nigam, Shubham Kumar", "Deroy, Aniket", "Shallum, Noel", "Mishra, Ayush Kumar", "Roy, Anup", "Mishra, Shubham Kumar", "Bhattacharya, Arnab", "Ghosh, Saptarshi", "Ghosh, Kripabandhu" ]
Nonet at SemEval-2023 Task 6: Methodologies for Legal Evaluation
semeval-1.180
Poster
2310.11049
[ "https://github.com/shubhamkumarnigam/legaleval23_nonet" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.semeval-1.181.bib
https://aclanthology.org/2023.semeval-1.181/
@inproceedings{chavan-patwardhan-2023-chapat, title = "{C}ha{P}at at {S}em{E}val-2023 Task 9: Text Intimacy Analysis using Ensembles of Multilingual Transformers", author = "Chavan, Tanmay and Patwardhan, Ved", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Da San Martino, Giovanni and Tayyar Madabushi, Harish and Kumar, Ritesh and Sartori, Elisa}, booktitle = "Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.semeval-1.181", doi = "10.18653/v1/2023.semeval-1.181", pages = "1304--1310", abstract = "Intimacy estimation of a given text has recently gained importance due to the increase in direct interaction of NLP systems with humans. Intimacy is an important aspect of natural language and has a substantial impact on our everyday communication. Thus the level of intimacy can provide us with deeper insights and richer semantics of conversations. In this paper, we present our work on the SemEval shared task 9 on predicting the level of intimacy for the given text. The dataset consists of tweets in ten languages, out of which only six are available in the training dataset. We conduct several experiments and show that an ensemble of multilingual models along with a language-specific monolingual model has the best performance. We also evaluate other data augmentation methods such as translation and present the results. Lastly, we study the results thoroughly and present some noteworthy insights into this problem.", }
Intimacy estimation of a given text has recently gained importance due to the increase in direct interaction of NLP systems with humans. Intimacy is an important aspect of natural language and has a substantial impact on our everyday communication. Thus the level of intimacy can provide us with deeper insights and richer semantics of conversations. In this paper, we present our work on the SemEval shared task 9 on predicting the level of intimacy for the given text. The dataset consists of tweets in ten languages, out of which only six are available in the training dataset. We conduct several experiments and show that an ensemble of multilingual models along with a language-specific monolingual model has the best performance. We also evaluate other data augmentation methods such as translation and present the results. Lastly, we study the results thoroughly and present some noteworthy insights into this problem.
[ "Chavan, Tanmay", "Patwardhan, Ved" ]
ChaPat at SemEval-2023 Task 9: Text Intimacy Analysis using Ensembles of Multilingual Transformers
semeval-1.181
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.semeval-1.182.bib
https://aclanthology.org/2023.semeval-1.182/
@inproceedings{azime-etal-2023-masakhane, title = "Masakhane-Afrisenti at {S}em{E}val-2023 Task 12: Sentiment Analysis using {A}fro-centric Language Models and Adapters for Low-resource {A}frican Languages", author = "Azime, Israel Abebe and Al-azzawi, Sana and Tonja, Atnafu Lambebo and Shode, Iyanuoluwa and Alabi, Jesujoba and Awokoya, Ayodele and Oduwole, Mardiyyah and Adewumi, Tosin and Fanijo, Samuel and Oyinkansola, Awosan", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Da San Martino, Giovanni and Tayyar Madabushi, Harish and Kumar, Ritesh and Sartori, Elisa}, booktitle = "Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.semeval-1.182", doi = "10.18653/v1/2023.semeval-1.182", pages = "1311--1316", abstract = "Detecting harmful content on social media platforms is crucial in preventing the negative effects these posts can have on social media users. This paper presents our methodology for tackling task 10 from SemEval23, which focuses on detecting and classifying online sexism in social media posts. We constructed our solution using an ensemble of transformer-based models (that have been fine-tuned; BERTweet, RoBERTa, and DeBERTa). To alleviate the various issues caused by the class imbalance in the dataset provided and improve the generalization of our model, our framework employs data augmentation and semi-supervised learning. Specifically, we use back-translation for data augmentation in two scenarios: augmenting the underrepresented class and augmenting all classes. In this study, we analyze the impact of these different strategies on the system{'}s overall performance and determine which technique is the most effective. Extensive experiments demonstrate the efficacy of our approach. For sub-task A, the system achieved an F1-score of 0.8613. The source code to reproduce the proposed solutions is available on Github.", }
Detecting harmful content on social media platforms is crucial in preventing the negative effects these posts can have on social media users. This paper presents our methodology for tackling task 10 from SemEval23, which focuses on detecting and classifying online sexism in social media posts. We constructed our solution using an ensemble of transformer-based models (that have been fine-tuned; BERTweet, RoBERTa, and DeBERTa). To alleviate the various issues caused by the class imbalance in the dataset provided and improve the generalization of our model, our framework employs data augmentation and semi-supervised learning. Specifically, we use back-translation for data augmentation in two scenarios: augmenting the underrepresented class and augmenting all classes. In this study, we analyze the impact of these different strategies on the system{'}s overall performance and determine which technique is the most effective. Extensive experiments demonstrate the efficacy of our approach. For sub-task A, the system achieved an F1-score of 0.8613. The source code to reproduce the proposed solutions is available on Github.
[ "Azime, Israel Abebe", "Al-azzawi, Sana", "Tonja, Atnafu Lambebo", "Shode, Iyanuoluwa", "Alabi, Jesujoba", "Awokoya, Ayodele", "Oduwole, Mardiyyah", "Adewumi, Tosin", "Fanijo, Samuel", "Oyinkansola, Awosan" ]
Masakhane-Afrisenti at SemEval-2023 Task 12: Sentiment Analysis using Afro-centric Language Models and Adapters for Low-resource African Languages
semeval-1.182
Poster
2304.06459
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.semeval-1.183.bib
https://aclanthology.org/2023.semeval-1.183/
@inproceedings{glazkova-2023-tmn, title = "tmn at {S}em{E}val-2023 Task 9: Multilingual Tweet Intimacy Detection Using {XLM}-{T}, {G}oogle {T}ranslate, and Ensemble Learning", author = "Glazkova, Anna", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Da San Martino, Giovanni and Tayyar Madabushi, Harish and Kumar, Ritesh and Sartori, Elisa}, booktitle = "Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.semeval-1.183", doi = "10.18653/v1/2023.semeval-1.183", pages = "1317--1323", abstract = "The paper describes a transformer-based system designed for SemEval-2023 Task 9: Multilingual Tweet Intimacy Analysis. The purpose of the task was to predict the intimacy of tweets in a range from 1 (not intimate at all) to 5 (very intimate). The official training set for the competition consisted of tweets in six languages (English, Spanish, Italian, Portuguese, French, and Chinese). The test set included the given six languages as well as external data with four languages not presented in the training set (Hindi, Arabic, Dutch, and Korean). We presented a solution based on an ensemble of XLM-T, a multilingual RoBERTa model adapted to the Twitter domain. To improve the performance on unseen languages, each tweet was supplemented by its English translation. We explored the effectiveness of translated data for the languages seen in fine-tuning compared to unseen languages and estimated strategies for using translated data in transformer-based models. Our solution ranked 4th on the leaderboard while achieving an overall Pearson{'}s r of 0.5989 over the test set. The proposed system improves up to 0.088 Pearson{'}s r over a score averaged across all 45 submissions.", }
The paper describes a transformer-based system designed for SemEval-2023 Task 9: Multilingual Tweet Intimacy Analysis. The purpose of the task was to predict the intimacy of tweets in a range from 1 (not intimate at all) to 5 (very intimate). The official training set for the competition consisted of tweets in six languages (English, Spanish, Italian, Portuguese, French, and Chinese). The test set included the given six languages as well as external data with four languages not presented in the training set (Hindi, Arabic, Dutch, and Korean). We presented a solution based on an ensemble of XLM-T, a multilingual RoBERTa model adapted to the Twitter domain. To improve the performance on unseen languages, each tweet was supplemented by its English translation. We explored the effectiveness of translated data for the languages seen in fine-tuning compared to unseen languages and estimated strategies for using translated data in transformer-based models. Our solution ranked 4th on the leaderboard while achieving an overall Pearson{'}s r of 0.5989 over the test set. The proposed system improves up to 0.088 Pearson{'}s r over a score averaged across all 45 submissions.
[ "Glazkova, Anna" ]
tmn at SemEval-2023 Task 9: Multilingual Tweet Intimacy Detection Using XLM-T, Google Translate, and Ensemble Learning
semeval-1.183
Poster
2304.04054
[ "https://github.com/oldaandozerskaya/intimacy_tmn_semeval23_task9" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.semeval-1.184.bib
https://aclanthology.org/2023.semeval-1.184/
@inproceedings{andrew-2023-judithjeyafreeda, title = "{J}udith{J}eyafreeda at {S}em{E}val-2023 Task 10: Machine Learning for Explainable Detection of Online Sexism", author = "Andrew, Judith Jeyafreeda", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Da San Martino, Giovanni and Tayyar Madabushi, Harish and Kumar, Ritesh and Sartori, Elisa}, booktitle = "Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.semeval-1.184", doi = "10.18653/v1/2023.semeval-1.184", pages = "1324--1328", abstract = "The rise of the internet and social media platforms has brought about significant changes in how people interact with one another. For a lot of people, the internet has also become the only source of news and information about the world. Thus, due to the increase in accessibility of information, online sexism has also increased. Efforts should be made to make the internet a safe space for everyone, irrespective of gender, both from a larger social norms perspective and through legal or technical regulations to help alleviate online gender-based violence. As a part of this, this paper explores simple methods that can be easily deployed to automatically detect online sexism in textual statements.", }
The rise of the internet and social media platforms has brought about significant changes in how people interact with one another. For a lot of people, the internet has also become the only source of news and information about the world. Thus, due to the increase in accessibility of information, online sexism has also increased. Efforts should be made to make the internet a safe space for everyone, irrespective of gender, both from a larger social norms perspective and through legal or technical regulations to help alleviate online gender-based violence. As a part of this, this paper explores simple methods that can be easily deployed to automatically detect online sexism in textual statements.
[ "Andrew, Judith Jeyafreeda" ]
JudithJeyafreeda at SemEval-2023 Task 10: Machine Learning for Explainable Detection of Online Sexism
semeval-1.184
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.semeval-1.185.bib
https://aclanthology.org/2023.semeval-1.185/
@inproceedings{hosseini-etal-2023-lon, title = "Lon-e{\aa} at {S}em{E}val-2023 Task 11: A Comparison of Activation Functions for Soft and Hard Label Prediction", author = "Hosseini, Peyman and Hosseini, Mehran and Al-azzawi, Sana and Liwicki, Marcus and Castro, Ignacio and Purver, Matthew", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Da San Martino, Giovanni and Tayyar Madabushi, Harish and Kumar, Ritesh and Sartori, Elisa}, booktitle = "Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.semeval-1.185", doi = "10.18653/v1/2023.semeval-1.185", pages = "1329--1334", abstract = "We study the influence of different activation functions in the output layer of pre-trained transformer models for soft and hard label prediction in the learning with disagreement task. In this task, the goal is to quantify the amount of disagreement via predicting soft labels. To predict the soft labels, we use BERT-based preprocessors and encoders and vary the activation function used in the output layer, while keeping other parameters constant. The soft labels are then used for the hard label prediction. The activation functions considered are sigmoid as well as a step-function that is added to the model post-training and a sinusoidal activation function, which is introduced for the first time in this paper.", }
We study the influence of different activation functions in the output layer of pre-trained transformer models for soft and hard label prediction in the learning with disagreement task. In this task, the goal is to quantify the amount of disagreement via predicting soft labels. To predict the soft labels, we use BERT-based preprocessors and encoders and vary the activation function used in the output layer, while keeping other parameters constant. The soft labels are then used for the hard label prediction. The activation functions considered are sigmoid as well as a step-function that is added to the model post-training and a sinusoidal activation function, which is introduced for the first time in this paper.
[ "Hosseini, Peyman", "Hosseini, Mehran", "Al-azzawi, Sana", "Liwicki, Marcus", "Castro, Ignacio", "Purver, Matthew" ]
Lon-eå at SemEval-2023 Task 11: A Comparison of Activation Functions for Soft and Hard Label Prediction
semeval-1.185
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.semeval-1.186.bib
https://aclanthology.org/2023.semeval-1.186/
@inproceedings{garcia-ferrero-etal-2023-ixa, title = "{IXA}/Cogcomp at {S}em{E}val-2023 Task 2: Context-enriched Multilingual Named Entity Recognition Using Knowledge Bases", author = "Garc{\'\i}a-Ferrero, Iker and Campos, Jon Ander and Sainz, Oscar and Salaberria, Ander and Roth, Dan", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Da San Martino, Giovanni and Tayyar Madabushi, Harish and Kumar, Ritesh and Sartori, Elisa}, booktitle = "Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.semeval-1.186", doi = "10.18653/v1/2023.semeval-1.186", pages = "1335--1346", abstract = "Named Entity Recognition (NER) is a core natural language processing task in which pre-trained language models have shown remarkable performance. However, standard benchmarks like CoNLL 2003 do not address many of the challenges that deployed NER systems face, such as having to classify emerging or complex entities in a fine-grained way. In this paper we present a novel NER cascade approach comprising three steps: first, identifying candidate entities in the input sentence; second, linking each candidate to an existing knowledge base; third, predicting the fine-grained category for each entity candidate. We empirically demonstrate the significance of external knowledge bases in accurately classifying fine-grained and emerging entities. Our system exhibits robust performance in the MultiCoNER2 shared task, even in the low-resource language setting where we leverage knowledge bases of high-resource languages.", }
Named Entity Recognition (NER) is a core natural language processing task in which pre-trained language models have shown remarkable performance. However, standard benchmarks like CoNLL 2003 do not address many of the challenges that deployed NER systems face, such as having to classify emerging or complex entities in a fine-grained way. In this paper we present a novel NER cascade approach comprising three steps: first, identifying candidate entities in the input sentence; second, linking each candidate to an existing knowledge base; third, predicting the fine-grained category for each entity candidate. We empirically demonstrate the significance of external knowledge bases in accurately classifying fine-grained and emerging entities. Our system exhibits robust performance in the MultiCoNER2 shared task, even in the low-resource language setting where we leverage knowledge bases of high-resource languages.
[ "Garc{\\'\\i}a-Ferrero, Iker", "Campos, Jon Ander", "Sainz, Oscar", "Salaberria, Ander", "Roth, Dan" ]
IXA/Cogcomp at SemEval-2023 Task 2: Context-enriched Multilingual Named Entity Recognition Using Knowledge Bases
semeval-1.186
Poster
2304.10637
[ "https://github.com/ikergarcia1996/context-enriched-ner" ]
https://huggingface.co/papers/2304.10637
2
0
0
5
1
[]
[]
[]
https://aclanthology.org/2023.semeval-1.187.bib
https://aclanthology.org/2023.semeval-1.187/
@inproceedings{heinisch-etal-2023-accept, title = "{ACCEPT} at {S}em{E}val-2023 Task 3: An Ensemble-based Approach to Multilingual Framing Detection", author = "Heinisch, Philipp and Plenz, Moritz and Frank, Anette and Cimiano, Philipp", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Da San Martino, Giovanni and Tayyar Madabushi, Harish and Kumar, Ritesh and Sartori, Elisa}, booktitle = "Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.semeval-1.187", doi = "10.18653/v1/2023.semeval-1.187", pages = "1347--1358", abstract = "This paper describes the system and experimental results of an ensemble-based approach to multilingual framing detection for the submission of the ACCEPT team to the SemEval-2023 Task 3 on Framing Detection (Subtask 2). The approach is based on an ensemble that combines three different methods: a classifier based on large language models, a classifier based on static word embeddings, and an approach that uses external commonsense knowledge graphs, in particular, ConceptNet. The results of the three classification heads are aggregated into an overall prediction for each frame class. Our best submission yielded a micro F1-score of 50.69{\%} (rank 10) and a macro F1-score of 50.20{\%} (rank 3) for English articles. Our experimental results show that static word embeddings and knowledge graphs are useful components for frame detection, while the ensemble of all three methods combines the strengths of our three proposed methods. Through system ablations, we show that the commonsense-guided knowledge graphs are the best-performing method for many languages.", }
This paper describes the system and experimental results of an ensemble-based approach to multilingual framing detection for the submission of the ACCEPT team to the SemEval-2023 Task 3 on Framing Detection (Subtask 2). The approach is based on an ensemble that combines three different methods: a classifier based on large language models, a classifier based on static word embeddings, and an approach that uses external commonsense knowledge graphs, in particular, ConceptNet. The results of the three classification heads are aggregated into an overall prediction for each frame class. Our best submission yielded a micro F1-score of 50.69{\%} (rank 10) and a macro F1-score of 50.20{\%} (rank 3) for English articles. Our experimental results show that static word embeddings and knowledge graphs are useful components for frame detection, while the ensemble of all three methods combines the strengths of our three proposed methods. Through system ablations, we show that the commonsense-guided knowledge graphs are the best-performing method for many languages.
[ "Heinisch, Philipp", "Plenz, Moritz", "Frank, Anette", "Cimiano, Philipp" ]
ACCEPT at SemEval-2023 Task 3: An Ensemble-based Approach to Multilingual Framing Detection
semeval-1.187
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.semeval-1.188.bib
https://aclanthology.org/2023.semeval-1.188/
@inproceedings{honda-wilharm-2023-noam, title = "Noam {C}homsky at {S}em{E}val-2023 Task 4: Hierarchical Similarity-aware Model for Human Value Detection", author = "Honda, Sumire and Wilharm, Sebastian", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Da San Martino, Giovanni and Tayyar Madabushi, Harish and Kumar, Ritesh and Sartori, Elisa}, booktitle = "Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.semeval-1.188", doi = "10.18653/v1/2023.semeval-1.188", pages = "1359--1364", abstract = "This paper presents a hierarchical similarity-aware approach for the SemEval-2023 Task 4 on human value detection behind arguments using SBERT. The approach takes the similarity score as an additional source of information between the input arguments and the lower level of labels in a human value hierarchical dataset. Our similarity-aware model improved on the similarity-agnostic baseline model, especially showing a significant increase for the value categories with the lowest scores by the baseline model.", }
This paper presents a hierarchical similarity-aware approach for the SemEval-2023 Task 4 on human value detection behind arguments using SBERT. The approach takes the similarity score as an additional source of information between the input arguments and the lower level of labels in a human value hierarchical dataset. Our similarity-aware model improved on the similarity-agnostic baseline model, especially showing a significant increase for the value categories with the lowest scores by the baseline model.
[ "Honda, Sumire", "Wilharm, Sebastian" ]
Noam Chomsky at SemEval-2023 Task 4: Hierarchical Similarity-aware Model for Human Value Detection
semeval-1.188
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.semeval-1.189.bib
https://aclanthology.org/2023.semeval-1.189/
@inproceedings{kataria-gupta-2023-nlp, title = "{NLP}-Titan at {S}em{E}val-2023 Task 6: Identification of Rhetorical Roles Using Sequential Sentence Classification", author = "Kataria, Harsh and Gupta, Ambuje", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Da San Martino, Giovanni and Tayyar Madabushi, Harish and Kumar, Ritesh and Sartori, Elisa}, booktitle = "Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.semeval-1.189", doi = "10.18653/v1/2023.semeval-1.189", pages = "1365--1370", abstract = "The analysis of legal cases poses a considerable challenge for researchers, practitioners, and academicians due to the lengthy and intricate nature of these documents. Developing countries such as India are experiencing a significant increase in the number of pending legal cases, which are often unstructured and difficult to process using conventional methods. To address this issue, the authors have implemented a sequential sentence classification process, which categorizes legal documents into 13 segments, known as Rhetorical Roles. This approach enables the extraction of valuable insights from the various classes of the structured document. The performance of this approach was evaluated using the F1 score, which measures the model{'}s precision and recall. The authors{'} approach achieved an F1 score of 0.83, which surpasses the baseline score of 0.79 established by the task organizers. The authors have combined sequential sentence classification and the SetFit method in a hierarchical manner by combining similar classes to achieve this score.", }
The analysis of legal cases poses a considerable challenge for researchers, practitioners, and academicians due to the lengthy and intricate nature of these documents. Developing countries such as India are experiencing a significant increase in the number of pending legal cases, which are often unstructured and difficult to process using conventional methods. To address this issue, the authors have implemented a sequential sentence classification process, which categorizes legal documents into 13 segments, known as Rhetorical Roles. This approach enables the extraction of valuable insights from the various classes of the structured document. The performance of this approach was evaluated using the F1 score, which measures the model{'}s precision and recall. The authors{'} approach achieved an F1 score of 0.83, which surpasses the baseline score of 0.79 established by the task organizers. The authors have combined sequential sentence classification and the SetFit method in a hierarchical manner by combining similar classes to achieve this score.
[ "Kataria, Harsh", "Gupta, Ambuje" ]
NLP-Titan at SemEval-2023 Task 6: Identification of Rhetorical Roles Using Sequential Sentence Classification
semeval-1.189
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.semeval-1.190.bib
https://aclanthology.org/2023.semeval-1.190/
@inproceedings{rydelek-etal-2023-adamr, title = "{A}dam{R} at {S}em{E}val-2023 Task 10: Solving the Class Imbalance Problem in Sexism Detection with Ensemble Learning", author = "Rydelek, Adam and Dementieva, Daryna and Groh, Georg", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Da San Martino, Giovanni and Tayyar Madabushi, Harish and Kumar, Ritesh and Sartori, Elisa}, booktitle = "Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.semeval-1.190", doi = "10.18653/v1/2023.semeval-1.190", pages = "1371--1381", abstract = "The Explainable Detection of Online Sexism task presents the problem of explainable sexism detection through fine-grained categorisation of sexist cases with three subtasks. Our team experimented with different ways to combat class imbalance throughout the tasks using data augmentation and loss alteration techniques. We tackled the challenge by utilising ensembles of Transformer models trained on different datasets, which are tested to find the balance between performance and interpretability. This solution ranked us in the top 40{\%} of teams for each of the tracks.", }
The Explainable Detection of Online Sexism task presents the problem of explainable sexism detection through fine-grained categorisation of sexist cases with three subtasks. Our team experimented with different ways to combat class imbalance throughout the tasks using data augmentation and loss alteration techniques. We tackled the challenge by utilising ensembles of Transformer models trained on different datasets, which are tested to find the balance between performance and interpretability. This solution ranked us in the top 40{\%} of teams for each of the tracks.
[ "Rydelek, Adam", "Dementieva, Daryna", "Groh, Georg" ]
AdamR at SemEval-2023 Task 10: Solving the Class Imbalance Problem in Sexism Detection with Ensemble Learning
semeval-1.190
Poster
2305.08636
[ "" ]
https://huggingface.co/papers/2305.08636
0
0
0
3
1
[]
[]
[]
https://aclanthology.org/2023.semeval-1.191.bib
https://aclanthology.org/2023.semeval-1.191/
@inproceedings{el-balima-cordero-etal-2023-i2c, title = "{I}2{C} {H}uelva at {S}em{E}val-2023 Task 4: A Resampling and Transformers Approach to Identify Human Values behind Arguments", author = "El Balima Cordero, Nordin and Mata V{\'a}zquez, Jacinto and Pach{\'o}n {\'A}lvarez, Victoria and Pichardo Estevez, Abel", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Da San Martino, Giovanni and Tayyar Madabushi, Harish and Kumar, Ritesh and Sartori, Elisa}, booktitle = "Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.semeval-1.191", doi = "10.18653/v1/2023.semeval-1.191", pages = "1382--1387", abstract = "This paper presents the approaches proposed for I2C Group to address the SemEval-2023 Task 4: Identification of Human Values behind Arguments (ValueEval), whose goal is to classify 20 different categories of human values given a textual argument. The dataset of this task consists of one argument per line, including its unique argument ID, conclusion, stance of the premise towards the conclusion, and the premise text. To indicate whether or not the argument draws on that category, a binary indication (1 or 0) is included. Participants can submit approaches that detect one, multiple, or all of these values in arguments. The task provides an opportunity for researchers to explore the use of automated techniques to identify human values in text and has potential applications in various domains such as social science, politics, and marketing. To deal with the imbalanced class distribution given, our approach undersamples the data. Additionally, the three components of the argument (conclusion, stance and premise) are used for training. The system outperformed the BERT baseline according to official evaluation metrics, achieving an F1 score of 0.46.", }
This paper presents the approaches proposed for I2C Group to address the SemEval-2023 Task 4: Identification of Human Values behind Arguments (ValueEval), whose goal is to classify 20 different categories of human values given a textual argument. The dataset of this task consists of one argument per line, including its unique argument ID, conclusion, stance of the premise towards the conclusion and the premise text. To indicate whether the argument draws or not on that category, a binary indication (1 or 0) is included. Participants can submit approaches that detect one, multiple, or all of these values in arguments. The task provides an opportunity for researchers to explore the use of automated techniques to identify human values in text and has potential applications in various domains such as social science, politics, and marketing. To deal with the imbalanced class distribution given, our approach undersamples the data. Additionally, the three components of the argument (conclusion, stance and premise) are used for training. The system outperformed the BERT baseline according to official evaluation metrics, achieving an F1 score of 0.46.
[ "El Balima Cordero, Nordin", "Mata V{\\'a}zquez, Jacinto", "Pach{\\'o}n {\\'A}lvarez, Victoria", "Pichardo Estevez, Abel" ]
I2C Huelva at SemEval-2023 Task 4: A Resampling and Transformers Approach to Identify Human Values behind Arguments
semeval-1.191
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.semeval-1.192.bib
https://aclanthology.org/2023.semeval-1.192/
@inproceedings{mukherjee-etal-2023-mllab4cs, title = "{ML}lab4{CS} at {S}em{E}val-2023 Task 2: Named Entity Recognition in Low-resource Language {B}angla Using Multilingual Language Models", author = "Mukherjee, Shrimon and Ghosh, Madhusudan and {Girish} and Basuchowdhuri, Partha", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Da San Martino, Giovanni and Tayyar Madabushi, Harish and Kumar, Ritesh and Sartori, Elisa}, booktitle = "Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.semeval-1.192", doi = "10.18653/v1/2023.semeval-1.192", pages = "1388--1394", abstract = "Extracting of NERs from low-resource languages and recognizing their types is one of the important tasks in the entity extraction domain. Recently many studies have been conducted in this area of research. In our study, we introduce a system for identifying complex entities and recognizing their types from low-resource language Bangla, which was published in SemEval Task 2 MultiCoNER II 2023. For this sequence labeling task, we use a pre-trained language model built on a natural language processing framework. Our team name in this competition is MLlab4CS. Our model Muril produces a macro average F-score of $76.27\%$, which is a comparable result for this competition.", }
Extracting of NERs from low-resource languages and recognizing their types is one of the important tasks in the entity extraction domain. Recently many studies have been conducted in this area of research. In our study, we introduce a system for identifying complex entities and recognizing their types from low-resource language Bangla, which was published in SemEval Task 2 MultiCoNER II 2023. For this sequence labeling task, we use a pre-trained language model built on a natural language processing framework. Our team name in this competition is MLlab4CS. Our model Muril produces a macro average F-score of $76.27\%$, which is a comparable result for this competition.
[ "Mukherjee, Shrimon", "Ghosh, Madhusudan", "{Girish}", "Basuchowdhuri, Partha" ]
MLlab4CS at SemEval-2023 Task 2: Named Entity Recognition in Low-resource Language Bangla Using Multilingual Language Models
semeval-1.192
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.semeval-1.193.bib
https://aclanthology.org/2023.semeval-1.193/
@inproceedings{baraniak-sydow-2023-kb, title = "Kb at {S}em{E}val-2023 Task 3: On Multitask Hierarchical {BERT} Base Neural Network for Multi-label Persuasion Techniques Detection", author = "Baraniak, Katarzyna and Sydow, M", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Da San Martino, Giovanni and Tayyar Madabushi, Harish and Kumar, Ritesh and Sartori, Elisa}, booktitle = "Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.semeval-1.193", doi = "10.18653/v1/2023.semeval-1.193", pages = "1395--1400", abstract = "This paper presents a solution for SemEval 2023 subtask 3 of task 3: persuasion techniques in paragraphs detection. The aim of this task is to identify all persuasion techniques in each paragraph of a given news article. We use hierarchical multitask neural networks combined with transformers. Span detection is an auxiliary task that helps in the main task: identifying propaganda techniques. Our experiments show that if we change the index of BERT embedding from the first token of the whole input to the first token of the identified span, it can improve performance. Span and label detection can be performed using one network, so we save data and, when data is limited, we can use more of it for training.", }
This paper presents a solution for SemEval 2023 subtask 3 of task 3: persuasion techniques in paragraphs detection. The aim of this task is to identify all persuasion techniques in each paragraph of a given news article. We use hierarchical multitask neural networks combined with transformers. Span detection is an auxiliary task that helps in the main task: identifying propaganda techniques. Our experiments show that if we change the index of BERT embedding from the first token of the whole input to the first token of the identified span, it can improve performance. Span and label detection can be performed using one network, so we save data and, when data is limited, we can use more of it for training.
[ "Baraniak, Katarzyna", "Sydow, M" ]
Kb at SemEval-2023 Task 3: On Multitask Hierarchical BERT Base Neural Network for Multi-label Persuasion Techniques Detection
semeval-1.193
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.semeval-1.194.bib
https://aclanthology.org/2023.semeval-1.194/
@inproceedings{benedetto-etal-2023-politohfi, title = "{P}oli{T}o{HFI} at {S}em{E}val-2023 Task 6: Leveraging Entity-Aware and Hierarchical Transformers For Legal Entity Recognition and Court Judgment Prediction", author = "Benedetto, Irene and Koudounas, Alkis and Vaiani, Lorenzo and Pastor, Eliana and Baralis, Elena and Cagliero, Luca and Tarasconi, Francesco", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Da San Martino, Giovanni and Tayyar Madabushi, Harish and Kumar, Ritesh and Sartori, Elisa}, booktitle = "Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.semeval-1.194", doi = "10.18653/v1/2023.semeval-1.194", pages = "1401--1411", abstract = "The use of Natural Language Processing techniques in the legal domain has become established for supporting attorneys and domain experts in content retrieval and decision-making. However, understanding the legal text poses relevant challenges in the recognition of domain-specific entities and the adaptation and explanation of predictive models. This paper addresses the Legal Entity Name Recognition (L-NER) and Court Judgment Prediction (CJP) and Explanation (CJPE) tasks. The L-NER solution explores the use of various transformer-based models, including an entity-aware method attending domain-specific entities. The CJPE proposed method relies on hierarchical BERT-based classifiers combined with local input attribution explainers. We propose a broad comparison of eXplainable AI methodologies along with a novel approach based on NER. For the L-NER task, the experimental results remark on the importance of domain-specific pre-training. For CJP our lightweight solution shows performance in line with existing approaches, and our NER-boosted explanations show promising CJPE results in terms of the conciseness of the prediction explanations.", }
The use of Natural Language Processing techniques in the legal domain has become established for supporting attorneys and domain experts in content retrieval and decision-making. However, understanding the legal text poses relevant challenges in the recognition of domain-specific entities and the adaptation and explanation of predictive models. This paper addresses the Legal Entity Name Recognition (L-NER) and Court Judgment Prediction (CJP) and Explanation (CJPE) tasks. The L-NER solution explores the use of various transformer-based models, including an entity-aware method attending domain-specific entities. The CJPE proposed method relies on hierarchical BERT-based classifiers combined with local input attribution explainers. We propose a broad comparison of eXplainable AI methodologies along with a novel approach based on NER. For the L-NER task, the experimental results remark on the importance of domain-specific pre-training. For CJP our lightweight solution shows performance in line with existing approaches, and our NER-boosted explanations show promising CJPE results in terms of the conciseness of the prediction explanations.
[ "Benedetto, Irene", "Koudounas, Alkis", "Vaiani, Lorenzo", "Pastor, Eliana", "Baralis, Elena", "Cagliero, Luca", "Tarasconi, Francesco" ]
PoliToHFI at SemEval-2023 Task 6: Leveraging Entity-Aware and Hierarchical Transformers For Legal Entity Recognition and Court Judgment Prediction
semeval-1.194
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.semeval-1.195.bib
https://aclanthology.org/2023.semeval-1.195/
@inproceedings{bosch-etal-2023-uo, title = "{UO}-{L}ou{TAL} at {S}em{E}val-2023 Task 6: Lightweight Systems for Legal Processing", author = "Bosch, S{\'e}bastien and Est{\`e}ve, Louis and Loo, Joanne and Minard, Anne-Lyse", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Da San Martino, Giovanni and Tayyar Madabushi, Harish and Kumar, Ritesh and Sartori, Elisa}, booktitle = "Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.semeval-1.195", doi = "10.18653/v1/2023.semeval-1.195", pages = "1412--1420", abstract = "This paper presents the work produced by students of the University of Orl{\'e}ans Masters in Natural Language Processing program by way of participating in SemEval Task 6, LegalEval, which aims to enhance the capabilities of legal professionals through automated systems. Two out of the three sub-tasks available {--} Rhetorical Role prediction (RR) and Legal Named Entity Recognition (L-NER) {--} were tackled, with the express intent of developing lightweight and interpretable systems. For the L-NER sub-task, a CRF model was trained, augmented with post-processing rules for some named entity types. A macro F1 score of 0.74 was obtained on the DEV set, and 0.64 on the evaluation set. As for the RR sub-task, two sentence classification systems were built: one based on the Bag-of-Words technique with L-NER system output integrated, the other using a sentence-transformer approach. Rule-based post-processing then converted the results of the sentence classification systems into RR predictions. The better-performing Bag-of-Words system obtained a macro F1 score of 0.49 on the DEV set and 0.57 on the evaluation set.", }
This paper presents the work produced by students of the University of Orléans Masters in Natural Language Processing program by way of participating in SemEval Task 6, LegalEval, which aims to enhance the capabilities of legal professionals through automated systems. Two out of the three sub-tasks available {--} Rhetorical Role prediction (RR) and Legal Named Entity Recognition (L-NER) {--} were tackled, with the express intent of developing lightweight and interpretable systems. For the L-NER sub-task, a CRF model was trained, augmented with post-processing rules for some named entity types. A macro F1 score of 0.74 was obtained on the DEV set, and 0.64 on the evaluation set. As for the RR sub-task, two sentence classification systems were built: one based on the Bag-of-Words technique with L-NER system output integrated, the other using a sentence-transformer approach. Rule-based post-processing then converted the results of the sentence classification systems into RR predictions. The better-performing Bag-of-Words system obtained a macro F1 score of 0.49 on the DEV set and 0.57 on the evaluation set.
[ "Bosch, S{\\'e}bastien", "Est{\\`e}ve, Louis", "Loo, Joanne", "Minard, Anne-Lyse" ]
UO-LouTAL at SemEval-2023 Task 6: Lightweight Systems for Legal Processing
semeval-1.195
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.semeval-1.196.bib
https://aclanthology.org/2023.semeval-1.196/
@inproceedings{al-azzawi-etal-2023-nlp, title = "{NLP}-{LTU} at {S}em{E}val-2023 Task 10: The Impact of Data Augmentation and Semi-Supervised Learning Techniques on Text Classification Performance on an Imbalanced Dataset", author = {Al-Azzawi, Sana and Kov{\'a}cs, Gy{\"o}rgy and Nilsson, Filip and Adewumi, Tosin and Liwicki, Marcus}, editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Da San Martino, Giovanni and Tayyar Madabushi, Harish and Kumar, Ritesh and Sartori, Elisa}, booktitle = "Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.semeval-1.196", doi = "10.18653/v1/2023.semeval-1.196", pages = "1421--1427", abstract = "In this paper, we propose a methodology for task 10 of SemEval23, focusing on detecting and classifying online sexism in social media posts. The task is tackling a serious issue, as detecting harmful content on social media platforms is crucial for mitigating the harm of these posts on users. Our solution for this task is based on an ensemble of fine-tuned transformer-based models (BERTweet, RoBERTa, and DeBERTa). To alleviate problems related to class imbalance, and to improve the generalization capability of our model, we also experiment with data augmentation and semi-supervised learning. In particular, for data augmentation, we use back-translation, either on all classes, or on the underrepresented classes only. We analyze the impact of these strategies on the overall performance of the pipeline through extensive experiments, while for semi-supervised learning, we found that with a substantial amount of unlabelled, in-domain data available, semi-supervised learning can enhance the performance of certain models. Our proposed method (for which the source code is available on GitHub) attains an F1-score of 0.8613 for sub-task A, which ranked us 10th in the competition.", }
In this paper, we propose a methodology for task 10 of SemEval23, focusing on detecting and classifying online sexism in social media posts. The task is tackling a serious issue, as detecting harmful content on social media platforms is crucial for mitigating the harm of these posts on users. Our solution for this task is based on an ensemble of fine-tuned transformer-based models (BERTweet, RoBERTa, and DeBERTa). To alleviate problems related to class imbalance, and to improve the generalization capability of our model, we also experiment with data augmentation and semi-supervised learning. In particular, for data augmentation, we use back-translation, either on all classes, or on the underrepresented classes only. We analyze the impact of these strategies on the overall performance of the pipeline through extensive experiments, while for semi-supervised learning, we found that with a substantial amount of unlabelled, in-domain data available, semi-supervised learning can enhance the performance of certain models. Our proposed method (for which the source code is available on GitHub) attains an F1-score of 0.8613 for sub-task A, which ranked us 10th in the competition.
[ "Al-Azzawi, Sana", "Kov{\\'a}cs, Gy{\\\"o}rgy", "Nilsson, Filip", "Adewumi, Tosin", "Liwicki, Marcus" ]
NLP-LTU at SemEval-2023 Task 10: The Impact of Data Augmentation and Semi-Supervised Learning Techniques on Text Classification Performance on an Imbalanced Dataset
semeval-1.196
Poster
2304.12847
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.semeval-1.197.bib
https://aclanthology.org/2023.semeval-1.197/
@inproceedings{balikas-2023-john, title = "John-Arthur at {S}em{E}val-2023 Task 4: Fine-Tuning Large Language Models for Arguments Classification", author = "Balikas, Georgios", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Da San Martino, Giovanni and Tayyar Madabushi, Harish and Kumar, Ritesh and Sartori, Elisa}, booktitle = "Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.semeval-1.197", doi = "10.18653/v1/2023.semeval-1.197", pages = "1428--1432", abstract = "This paper presents the system submissions of the John-Arthur team to the SemEval Task 4 {``}ValueEval: Identification of Human Values behind Arguments{''}. The best system of the team was ranked 3rd and the overall rank of the team was 2nd (the first team had the two best systems). John-Arthur team models the ValueEval problem as a multi-class, multi-label text classification problem. The solutions leverage recently proposed large language models that are fine-tuned on the provided datasets. To boost the achieved performance we employ different best practises whose impact on the model performance we evaluate here. The code is publicly available at GitHub and the model on Huggingface hub.", }
This paper presents the system submissions of the John-Arthur team to the SemEval Task 4 {``}ValueEval: Identification of Human Values behind Arguments{''}. The best system of the team was ranked 3rd and the overall rank of the team was 2nd (the first team had the two best systems). John-Arthur team models the ValueEval problem as a multi-class, multi-label text classification problem. The solutions leverage recently proposed large language models that are fine-tuned on the provided datasets. To boost the achieved performance we employ different best practises whose impact on the model performance we evaluate here. The code is publicly available at GitHub and the model on Huggingface hub.
[ "Balikas, Georgios" ]
John-Arthur at SemEval-2023 Task 4: Fine-Tuning Large Language Models for Arguments Classification
semeval-1.197
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.semeval-1.198.bib
https://aclanthology.org/2023.semeval-1.198/
@inproceedings{falk-etal-2023-nap, title = "{NAP} at {S}em{E}val-2023 Task 3: Is Less Really More? (Back-)Translation as Data Augmentation Strategies for Detecting Persuasion Techniques", author = "Falk, Neele and Eichel, Annerose and Piccirilli, Prisca", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Da San Martino, Giovanni and Tayyar Madabushi, Harish and Kumar, Ritesh and Sartori, Elisa}, booktitle = "Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.semeval-1.198", doi = "10.18653/v1/2023.semeval-1.198", pages = "1433--1446", abstract = "Persuasion techniques detection in news in a multi-lingual setup is non-trivial and comes with challenges, including little training data. Our system successfully leverages (back-)translation as data augmentation strategies with multi-lingual transformer models for the task of detecting persuasion techniques. The automatic and human evaluation of our augmented data allows us to explore whether (back-)translation aids or hinders performance. Our in-depth analyses indicate that both data augmentation strategies boost performance; however, balancing human-produced and machine-generated data seems to be crucial.", }
Persuasion techniques detection in news in a multi-lingual setup is non-trivial and comes with challenges, including little training data. Our system successfully leverages (back-)translation as data augmentation strategies with multi-lingual transformer models for the task of detecting persuasion techniques. The automatic and human evaluation of our augmented data allows us to explore whether (back-)translation aids or hinders performance. Our in-depth analyses indicate that both data augmentation strategies boost performance; however, balancing human-produced and machine-generated data seems to be crucial.
[ "Falk, Neele", "Eichel, Annerose", "Piccirilli, Prisca" ]
NAP at SemEval-2023 Task 3: Is Less Really More? (Back-)Translation as Data Augmentation Strategies for Detecting Persuasion Techniques
semeval-1.198
Poster
2304.14179
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.semeval-1.199.bib
https://aclanthology.org/2023.semeval-1.199/
@inproceedings{vaiani-etal-2023-polito, title = "{P}oli{T}o at {S}em{E}val-2023 Task 1: {CLIP}-based Visual-Word Sense Disambiguation Based on Back-Translation", author = "Vaiani, Lorenzo and Cagliero, Luca and Garza, Paolo", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Da San Martino, Giovanni and Tayyar Madabushi, Harish and Kumar, Ritesh and Sartori, Elisa}, booktitle = "Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.semeval-1.199", doi = "10.18653/v1/2023.semeval-1.199", pages = "1447--1453", abstract = "Visual-Word Sense Disambiguation (V-WSD) entails resolving the linguistic ambiguity in a text by selecting a clarifying image from a set of (potentially misleading) candidates. In this paper, we address V-WSD using a state-of-the-art Image-Text Retrieval system, namely CLIP. We propose to alleviate the linguistic ambiguity across multiple domains and languages via text and image augmentation. To augment the textual content we rely on back-translation with the aid of a variety of auxiliary languages. The approach based on finetuning CLIP on the full phrases is effective in accurately disambiguating words, and incorporating back-translation enhances the system{'}s robustness and performance on the test samples written in Indo-European languages.", }
Visual-Word Sense Disambiguation (V-WSD) entails resolving the linguistic ambiguity in a text by selecting a clarifying image from a set of (potentially misleading) candidates. In this paper, we address V-WSD using a state-of-the-art Image-Text Retrieval system, namely CLIP. We propose to alleviate the linguistic ambiguity across multiple domains and languages via text and image augmentation. To augment the textual content we rely on back-translation with the aid of a variety of auxiliary languages. The approach based on finetuning CLIP on the full phrases is effective in accurately disambiguating words, and incorporating back-translation enhances the system{'}s robustness and performance on the test samples written in Indo-European languages.
[ "Vaiani, Lorenzo", "Cagliero, Luca", "Garza, Paolo" ]
PoliTo at SemEval-2023 Task 1: CLIP-based Visual-Word Sense Disambiguation Based on Back-Translation
semeval-1.199
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.semeval-1.200.bib
https://aclanthology.org/2023.semeval-1.200/
@inproceedings{vassileva-etal-2023-fmi, title = "{FMI}-{SU} at {S}em{E}val-2023 Task 7: Two-level Entailment Classification of Clinical Trials Enhanced by Contextual Data Augmentation", author = "Vassileva, Sylvia and Grazhdanski, Georgi and Boytcheva, Svetla and Koychev, Ivan", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Da San Martino, Giovanni and Tayyar Madabushi, Harish and Kumar, Ritesh and Sartori, Elisa}, booktitle = "Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.semeval-1.200", doi = "10.18653/v1/2023.semeval-1.200", pages = "1454--1462", abstract = "The paper presents an approach for solving SemEval 2023 Task 7 - identifying the inference relation in a clinical trials dataset. The system has two levels for retrieving relevant clinical trial evidence for a statement and then classifying the inference relation based on the relevant sentences. In the first level, the system classifies the evidence-statement pairs as relevant or not using a BERT-based classifier and contextual data augmentation (subtask 2). Using the relevant parts of the clinical trial from the first level, the system uses an additional BERT-based classifier to determine whether the relation is entailment or contradiction (subtask 1). In both levels, the contextual data augmentation is showing a significant improvement in the F1 score on the test set of 3.7{\%} for subtask 2 and 7.6{\%} for subtask 1, achieving final F1 scores of 82.7{\%} for subtask 2 and 64.4{\%} for subtask 1.", }
The paper presents an approach for solving SemEval 2023 Task 7 - identifying the inference relation in a clinical trials dataset. The system has two levels for retrieving relevant clinical trial evidence for a statement and then classifying the inference relation based on the relevant sentences. In the first level, the system classifies the evidence-statement pairs as relevant or not using a BERT-based classifier and contextual data augmentation (subtask 2). Using the relevant parts of the clinical trial from the first level, the system uses an additional BERT-based classifier to determine whether the relation is entailment or contradiction (subtask 1). In both levels, the contextual data augmentation is showing a significant improvement in the F1 score on the test set of 3.7{\%} for subtask 2 and 7.6{\%} for subtask 1, achieving final F1 scores of 82.7{\%} for subtask 2 and 64.4{\%} for subtask 1.
[ "Vassileva, Sylvia", "Grazhdanski, Georgi", "Boytcheva, Svetla", "Koychev, Ivan" ]
FMI-SU at SemEval-2023 Task 7: Two-level Entailment Classification of Clinical Trials Enhanced by Contextual Data Augmentation
semeval-1.200
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.semeval-1.201.bib
https://aclanthology.org/2023.semeval-1.201/
@inproceedings{poth-etal-2023-ml, title = "{ML} Mob at {S}em{E}val-2023 Task 1: Probing {CLIP} on Visual Word-Sense Disambiguation", author = "Poth, Clifton and Hentschel, Martin and Werner, Tobias and Sterz, Hannah and Bongard, Leonard", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Da San Martino, Giovanni and Tayyar Madabushi, Harish and Kumar, Ritesh and Sartori, Elisa}, booktitle = "Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.semeval-1.201", doi = "10.18653/v1/2023.semeval-1.201", pages = "1463--1469", abstract = "Successful word sense disambiguation (WSD) is a fundamental element of natural language understanding. As part of SemEval-2023 Task 1, we investigate WSD in a multimodal setting, where ambiguous words are to be matched with candidate images representing word senses. We compare multiple systems based on pre-trained CLIP models. In our experiments, we find CLIP to have solid zero-shot performance on monolingual and multilingual data. By employing different fine-tuning techniques, we are able to further enhance performance. However, transferring knowledge between data distributions proves to be more challenging.", }
Successful word sense disambiguation (WSD) is a fundamental element of natural language understanding. As part of SemEval-2023 Task 1, we investigate WSD in a multimodal setting, where ambiguous words are to be matched with candidate images representing word senses. We compare multiple systems based on pre-trained CLIP models. In our experiments, we find CLIP to have solid zero-shot performance on monolingual and multilingual data. By employing different fine-tuning techniques, we are able to further enhance performance. However, transferring knowledge between data distributions proves to be more challenging.
[ "Poth, Clifton", "Hentschel, Martin", "Werner, Tobias", "Sterz, Hannah", "Bongard, Leonard" ]
ML Mob at SemEval-2023 Task 1: Probing CLIP on Visual Word-Sense Disambiguation
semeval-1.201
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.semeval-1.202.bib
https://aclanthology.org/2023.semeval-1.202/
@inproceedings{wozny-lango-2023-alexander, title = "Alexander Knox at {S}em{E}val-2023 Task 5: The comparison of prompting and standard fine-tuning techniques for selecting the type of spoiler needed to neutralize a clickbait", author = "Wo{\'z}ny, Mateusz and Lango, Mateusz", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Da San Martino, Giovanni and Tayyar Madabushi, Harish and Kumar, Ritesh and Sartori, Elisa}, booktitle = "Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.semeval-1.202", doi = "10.18653/v1/2023.semeval-1.202", pages = "1470--1475", abstract = "Clickbait posts are a common problem on social media platforms, as they often deceive users by providing misleading or sensational headlines that do not match the content of the linked web page. The aim of this study is to create a technique for identifying the specific type of suitable spoiler - be it a phrase, a passage, or a multipart spoiler - needed to neutralize clickbait posts. This is achieved by developing a machine learning classifier analyzing both the clickbait post and the linked web page. Modern approaches for constructing a text classifier usually rely on fine-tuning a transformer-based model pre-trained on large unsupervised corpora. However, recent advances in the development of large-scale language models have led to the emergence of a new transfer learning paradigm based on prompt engineering. In this work, we study these two transfer learning techniques and compare their effectiveness for the clickbait spoiler-type detection task. Our experimental results show that for this task, using the standard fine-tuning method gives better results than using prompting. The best model can achieve a similar performance to that presented by Hagen et al. (2022).", }
Clickbait posts are a common problem on social media platforms, as they often deceive users by providing misleading or sensational headlines that do not match the content of the linked web page. The aim of this study is to create a technique for identifying the specific type of suitable spoiler - be it a phrase, a passage, or a multipart spoiler - needed to neutralize clickbait posts. This is achieved by developing a machine learning classifier analyzing both the clickbait post and the linked web page. Modern approaches for constructing a text classifier usually rely on fine-tuning a transformer-based model pre-trained on large unsupervised corpora. However, recent advances in the development of large-scale language models have led to the emergence of a new transfer learning paradigm based on prompt engineering. In this work, we study these two transfer learning techniques and compare their effectiveness for the clickbait spoiler-type detection task. Our experimental results show that for this task, using the standard fine-tuning method gives better results than using prompting. The best model can achieve a similar performance to that presented by Hagen et al. (2022).
[ "Wo{\\'z}ny, Mateusz", "Lango, Mateusz" ]
Alexander Knox at SemEval-2023 Task 5: The comparison of prompting and standard fine-tuning techniques for selecting the type of spoiler needed to neutralize a clickbait
semeval-1.202
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.semeval-1.203.bib
https://aclanthology.org/2023.semeval-1.203/
@inproceedings{petersen-etal-2023-hhuedos, title = "hhu{EDOS} at {S}em{E}val-2023 Task 10: Explainable Detection of Online Sexism ({EDOS}) Binary Sexism Detection (Subtask A)", author = "Petersen, Wiebke and Tran, Diem-Ly and Wroblewitz, Marion", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Da San Martino, Giovanni and Tayyar Madabushi, Harish and Kumar, Ritesh and Sartori, Elisa}, booktitle = "Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.semeval-1.203", doi = "10.18653/v1/2023.semeval-1.203", pages = "1476--1482", abstract = "In this paper, we describe SemEval-2023 Task 10, a shared task on detecting and predicting sexist language. The dataset consists of labeled sexist and non-sexist data targeted towards women acquired from both Reddit and Gab. We present and compare several approaches we experimented with and our final submitted model. Additional error analysis is given to recognize challenges we dealt with in our process. A total of 84 teams participated. Our model ranks 55th overall in Subtask A of the shared task.", }
In this paper, we describe SemEval-2023 Task 10, a shared task on detecting and predicting sexist language. The dataset consists of labeled sexist and non-sexist data targeted towards women acquired from both Reddit and Gab. We present and compare several approaches we experimented with and our final submitted model. Additional error analysis is given to recognize challenges we dealt with in our process. A total of 84 teams participated. Our model ranks 55th overall in Subtask A of the shared task.
[ "Petersen, Wiebke", "Tran, Diem-Ly", "Wroblewitz, Marion" ]
hhuEDOS at SemEval-2023 Task 10: Explainable Detection of Online Sexism (EDOS) Binary Sexism Detection (Subtask A)
semeval-1.203
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.semeval-1.204.bib
https://aclanthology.org/2023.semeval-1.204/
@inproceedings{li-etal-2023-rutgers, title = "Rutgers Multimedia Image Processing Lab at {S}em{E}val-2023 Task-1: Text-Augmentation-based Approach for Visual Word Sense Disambiguation", author = "Li, Keyi and Yang, Sen and Gao, Chenyang and Marsic, Ivan", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Da San Martino, Giovanni and Tayyar Madabushi, Harish and Kumar, Ritesh and Sartori, Elisa}, booktitle = "Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.semeval-1.204", doi = "10.18653/v1/2023.semeval-1.204", pages = "1483--1490", abstract = "This paper describes our system used in SemEval-2023 Task-1: Visual Word Sense Disambiguation (VWSD). The VWSD task is to identify the correct image that corresponds to an ambiguous target word given limited textual context. To reduce word ambiguity and enhance image selection, we proposed several text augmentation techniques, such as prompting, WordNet synonyms, and text generation. We experimented with different vision-language pre-trained models to capture the joint features of the augmented text and image. Our approach achieved the best performance using a combination of GPT-3 text generation and the CLIP model. On the multilingual test sets, our system achieved an average hit rate (at top-1) of 51.11 and a mean reciprocal rank of 65.69.", }
This paper describes our system used in SemEval-2023 Task-1: Visual Word Sense Disambiguation (VWSD). The VWSD task is to identify the correct image that corresponds to an ambiguous target word given limited textual context. To reduce word ambiguity and enhance image selection, we proposed several text augmentation techniques, such as prompting, WordNet synonyms, and text generation. We experimented with different vision-language pre-trained models to capture the joint features of the augmented text and image. Our approach achieved the best performance using a combination of GPT-3 text generation and the CLIP model. On the multilingual test sets, our system achieved an average hit rate (at top-1) of 51.11 and a mean reciprocal rank of 65.69.
[ "Li, Keyi", "Yang, Sen", "Gao, Chenyang", "Marsic, Ivan" ]
Rutgers Multimedia Image Processing Lab at SemEval-2023 Task-1: Text-Augmentation-based Approach for Visual Word Sense Disambiguation
semeval-1.204
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.semeval-1.205.bib
https://aclanthology.org/2023.semeval-1.205/
@inproceedings{kniele-beloucif-2023-uppsala, title = "{U}ppsala {U}niversity at {S}em{E}val-2023 Task12: Zero-shot Sentiment Classification for {N}igerian {P}idgin Tweets", author = "Kniele, Annika and Beloucif, Meriem", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Da San Martino, Giovanni and Tayyar Madabushi, Harish and Kumar, Ritesh and Sartori, Elisa}, booktitle = "Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.semeval-1.205", doi = "10.18653/v1/2023.semeval-1.205", pages = "1491--1497", abstract = "While sentiment classification has been considered a practically solved task for high-resource languages such as English, the scarcity of data for many languages still makes it a challenging task. The AfriSenti-SemEval shared task aims to classify sentiment on Twitter data for 14 low-resource African languages. In our participation, we focus on Nigerian Pidgin as the target language. We have investigated the effect of English monolingual and multilingual pre-trained models on the sentiment classification task for Nigerian Pidgin. Our setup includes zero-shot models (using English, Igbo and Hausa data) and a Nigerian Pidgin fine-tuned model. Our results show that English fine-tuned models perform slightly better than models fine-tuned on other Nigerian languages, which could be explained by the lexical and structural closeness between Nigerian Pidgin and English. The best results were reported on the monolingual Nigerian Pidgin data. The model pre-trained on English and fine-tuned on Nigerian Pidgin was submitted to Task A Track 4 of the AfriSenti-SemEval Shared Task 12, and scored 25 out of 32 in the ranking.", }
While sentiment classification has been considered a practically solved task for high-resource languages such as English, the scarcity of data for many languages still makes it a challenging task. The AfriSenti-SemEval shared task aims to classify sentiment on Twitter data for 14 low-resource African languages. In our participation, we focus on Nigerian Pidgin as the target language. We have investigated the effect of English monolingual and multilingual pre-trained models on the sentiment classification task for Nigerian Pidgin. Our setup includes zero-shot models (using English, Igbo and Hausa data) and a Nigerian Pidgin fine-tuned model. Our results show that English fine-tuned models perform slightly better than models fine-tuned on other Nigerian languages, which could be explained by the lexical and structural closeness between Nigerian Pidgin and English. The best results were reported on the monolingual Nigerian Pidgin data. The model pre-trained on English and fine-tuned on Nigerian Pidgin was submitted to Task A Track 4 of the AfriSenti-SemEval Shared Task 12, and scored 25 out of 32 in the ranking.
[ "Kniele, Annika", "Beloucif, Meriem" ]
Uppsala University at SemEval-2023 Task12: Zero-shot Sentiment Classification for Nigerian Pidgin Tweets
semeval-1.205
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.semeval-1.206.bib
https://aclanthology.org/2023.semeval-1.206/
@inproceedings{martin-etal-2023-kddie, title = "{KDDIE} at {S}em{E}val-2023 Task 2: External Knowledge Injection for Named Entity Recognition", author = "Martin, Caleb and Yang, Huichen and Hsu, William", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Da San Martino, Giovanni and Tayyar Madabushi, Harish and Kumar, Ritesh and Sartori, Elisa}, booktitle = "Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.semeval-1.206", doi = "10.18653/v1/2023.semeval-1.206", pages = "1498--1501", abstract = "This paper introduces our system for the SemEval 2023 Task 2: Multilingual Complex Named Entity Recognition (MultiCoNER II) competition. Our team focused on the sub-task of Named Entity Recognition (NER) for the language of English in the challenge and reported our results. To achieve our goal, we utilized transfer learning by fine-tuning pre-trained language models (PLMs) on the competition dataset. Our approach involved combining a BERT-based PLM with external knowledge to provide additional context to the model. In this report, we present our findings and results.", }
This paper introduces our system for the SemEval 2023 Task 2: Multilingual Complex Named Entity Recognition (MultiCoNER II) competition. Our team focused on the sub-task of Named Entity Recognition (NER) for the language of English in the challenge and reported our results. To achieve our goal, we utilized transfer learning by fine-tuning pre-trained language models (PLMs) on the competition dataset. Our approach involved combining a BERT-based PLM with external knowledge to provide additional context to the model. In this report, we present our findings and results.
[ "Martin, Caleb", "Yang, Huichen", "Hsu, William" ]
KDDIE at SemEval-2023 Task 2: External Knowledge Injection for Named Entity Recognition
semeval-1.206
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.semeval-1.207.bib
https://aclanthology.org/2023.semeval-1.207/
@inproceedings{hughes-etal-2023-bhattacharya, title = "{B}hattacharya{\_}{L}ab at {S}em{E}val-2023 Task 12: A Transformer-based Language Model for Sentiment Classification for Low Resource {A}frican Languages: {N}igerian {P}idgin and {Y}oruba", author = "Hughes, Nathaniel and Baker, Kevan and Singh, Aditya and Singh, Aryavardhan and Dauda, Tharalillah and Bhattacharya, Sutanu", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Da San Martino, Giovanni and Tayyar Madabushi, Harish and Kumar, Ritesh and Sartori, Elisa}, booktitle = "Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.semeval-1.207", doi = "10.18653/v1/2023.semeval-1.207", pages = "1502--1507", abstract = "Sentiment Analysis is an aspect of natural language processing (NLP) that has been a topic of research. While most studies focus on high-resource languages with an extensive amount of available data, the study on low-resource languages with insufficient data needs attention. To address this issue, we propose a transformer-based method for sentiment analysis for low-resource African languages, Nigerian Pidgin and Yoruba. To evaluate the effectiveness of our multilingual language models for monolingual sentiment classification, we participated in the AfriSenti SemEval shared task 2023 competition. On the official evaluation set, our group (named as Bhattacharya{\_}Lab) ranked 1 out of 33 participating groups in the Monolingual Sentiment Classification task (i.e., Task A) for Nigerian Pidgin (i.e., Track 4), and in the Top 5 among 33 participating groups in the Monolingual Sentiment Classification task for Yoruba (i.e., Track 2) respectively, demonstrating the potential for our transformer-based language models to improve sentiment analysis in low-resource languages. Overall, our study highlights the importance of exploring the potential of NLP in low-resource languages and the impact of transformer-based multilingual language models in sentiment analysis for the low-resource African languages, Nigerian Pidgin and Yoruba.", }
Sentiment Analysis is an aspect of natural language processing (NLP) that has been a topic of research. While most studies focus on high-resource languages with an extensive amount of available data, the study on low-resource languages with insufficient data needs attention. To address this issue, we propose a transformer-based method for sentiment analysis for low-resource African languages, Nigerian Pidgin and Yoruba. To evaluate the effectiveness of our multilingual language models for monolingual sentiment classification, we participated in the AfriSenti SemEval shared task 2023 competition. On the official evaluation set, our group (named as Bhattacharya{\_}Lab) ranked 1 out of 33 participating groups in the Monolingual Sentiment Classification task (i.e., Task A) for Nigerian Pidgin (i.e., Track 4), and in the Top 5 among 33 participating groups in the Monolingual Sentiment Classification task for Yoruba (i.e., Track 2) respectively, demonstrating the potential for our transformer-based language models to improve sentiment analysis in low-resource languages. Overall, our study highlights the importance of exploring the potential of NLP in low-resource languages and the impact of transformer-based multilingual language models in sentiment analysis for the low-resource African languages, Nigerian Pidgin and Yoruba.
[ "Hughes, Nathaniel", "Baker, Kevan", "Singh, Aditya", "Singh, Aryavardhan", "Dauda, Tharalillah", "Bhattacharya, Sutanu" ]
Bhattacharya_Lab at SemEval-2023 Task 12: A Transformer-based Language Model for Sentiment Classification for Low Resource African Languages: Nigerian Pidgin and Yoruba
semeval-1.207
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.semeval-1.208.bib
https://aclanthology.org/2023.semeval-1.208/
@inproceedings{raychawdhary-etal-2023-seals, title = "{S}eals{\_}{L}ab at {S}em{E}val-2023 Task 12: Sentiment Analysis for Low-resource {A}frican Languages, {H}ausa and {I}gbo", author = "Raychawdhary, Nilanjana and Das, Amit and Dozier, Gerry and D. Seals, Cheryl", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Da San Martino, Giovanni and Tayyar Madabushi, Harish and Kumar, Ritesh and Sartori, Elisa}, booktitle = "Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.semeval-1.208", doi = "10.18653/v1/2023.semeval-1.208", pages = "1508--1517", abstract = "One of the most extensively researched applications in natural language processing (NLP) is sentiment analysis. While the majority of studies focus on high-resource languages (e.g., English), this research focuses on low-resource African languages, namely Igbo and Hausa. The annotated tweets of both languages include a significant number of code-mixed tweets. The curated datasets necessary to build complex AI applications are not available for the majority of African languages. To optimize the use of such datasets, research is needed to determine the viability of present NLP procedures as well as the development of novel techniques. This paper outlines our efforts to develop a sentiment analysis system (for positive, negative, and neutral sentiment) for tweets in the Hausa and Igbo languages. Sentiment analysis can computationally analyze and discover sentiments in a text or document. We worked on the first thorough compilation of AfriSenti-SemEval 2023 Shared Task 12 Twitter datasets that are human-annotated for the most widely spoken languages in Nigeria, such as Hausa and Igbo. Here we trained the modern pre-trained language model AfriBERTa large on the AfriSenti-SemEval Shared Task 12 Twitter dataset for sentiment classification. In particular, the results demonstrate that our model, trained on the AfriSenti-SemEval Shared Task 12 datasets, achieved an F1 score of 80.85{\%} for Hausa and 80.82{\%} for Igbo on the sentiment analysis test. In AfriSenti-SemEval 2023 shared task 12 (Task A), we consistently ranked in the top 10 by achieving a mean F1 score of more than 80{\%} for both the Hausa and Igbo languages.", }
One of the most extensively researched applications in natural language processing (NLP) is sentiment analysis. While the majority of studies focus on high-resource languages (e.g., English), this research focuses on low-resource African languages, namely Igbo and Hausa. The annotated tweets of both languages include a significant number of code-mixed tweets. The curated datasets necessary to build complex AI applications are not available for the majority of African languages. To optimize the use of such datasets, research is needed to determine the viability of present NLP procedures as well as the development of novel techniques. This paper outlines our efforts to develop a sentiment analysis system (for positive, negative, and neutral sentiment) for tweets in the Hausa and Igbo languages. Sentiment analysis can computationally analyze and discover sentiments in a text or document. We worked on the first thorough compilation of AfriSenti-SemEval 2023 Shared Task 12 Twitter datasets that are human-annotated for the most widely spoken languages in Nigeria, such as Hausa and Igbo. Here we trained the modern pre-trained language model AfriBERTa large on the AfriSenti-SemEval Shared Task 12 Twitter dataset for sentiment classification. In particular, the results demonstrate that our model, trained on the AfriSenti-SemEval Shared Task 12 datasets, achieved an F1 score of 80.85{\%} for Hausa and 80.82{\%} for Igbo on the sentiment analysis test. In AfriSenti-SemEval 2023 shared task 12 (Task A), we consistently ranked in the top 10 by achieving a mean F1 score of more than 80{\%} for both the Hausa and Igbo languages.
[ "Raychawdhary, Nilanjana", "Das, Amit", "Dozier, Gerry", "D. Seals, Cheryl" ]
Seals_Lab at SemEval-2023 Task 12: Sentiment Analysis for Low-resource African Languages, Hausa and Igbo
semeval-1.208
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.semeval-1.209.bib
https://aclanthology.org/2023.semeval-1.209/
@inproceedings{aparovich-etal-2023-fit, title = "{FIT} {BUT} at {S}em{E}val-2023 Task 12: Sentiment Without Borders - Multilingual Domain Adaptation for Low-Resource Sentiment Classification", author = "Aparovich, Maksim and Kesiraju, Santosh and Dufkova, Aneta and Smrz, Pavel", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Da San Martino, Giovanni and Tayyar Madabushi, Harish and Kumar, Ritesh and Sartori, Elisa}, booktitle = "Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.semeval-1.209", doi = "10.18653/v1/2023.semeval-1.209", pages = "1518--1524", abstract = "This paper presents our proposed method for SemEval-2023 Task 12, which focuses on sentiment analysis for low-resource African languages. Our method utilizes a language-centric domain adaptation approach based on adversarial training, where a small version of Afro-XLM-Roberta serves as a generator model and a feed-forward network as a discriminator. We participated in all three subtasks: monolingual (12 tracks), multilingual (1 track), and zero-shot (2 tracks). Our results show an improvement in weighted F1 for 13 out of 15 tracks, with a maximum increase of 4.3 points for Moroccan Arabic compared to the baseline. We observed that using language family-based labels along with sequence-level input representations for the discriminator model improves the quality of cross-lingual sentiment analysis for languages unseen during training. Additionally, our experimental results suggest that training the system on languages that are close in the language family tree enhances the quality of sentiment analysis for low-resource languages. Lastly, the computational complexity of the prediction step was kept at the same level, which makes the approach interesting from a practical perspective. The code of the approach can be found in our repository.", }
This paper presents our proposed method for SemEval-2023 Task 12, which focuses on sentiment analysis for low-resource African languages. Our method utilizes a language-centric domain adaptation approach based on adversarial training, where a small version of Afro-XLM-Roberta serves as a generator model and a feed-forward network as a discriminator. We participated in all three subtasks: monolingual (12 tracks), multilingual (1 track), and zero-shot (2 tracks). Our results show an improvement in weighted F1 for 13 out of 15 tracks, with a maximum increase of 4.3 points for Moroccan Arabic compared to the baseline. We observed that using language family-based labels along with sequence-level input representations for the discriminator model improves the quality of cross-lingual sentiment analysis for languages unseen during training. Additionally, our experimental results suggest that training the system on languages that are close in the language family tree enhances the quality of sentiment analysis for low-resource languages. Lastly, the computational complexity of the prediction step was kept at the same level, which makes the approach interesting from a practical perspective. The code of the approach can be found in our repository.
[ "Aparovich, Maksim", "Kesiraju, Santosh", "Dufkova, Aneta", "Smrz, Pavel" ]
FIT BUT at SemEval-2023 Task 12: Sentiment Without Borders - Multilingual Domain Adaptation for Low-Resource Sentiment Classification
semeval-1.209
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.semeval-1.210.bib
https://aclanthology.org/2023.semeval-1.210/
@inproceedings{zheng-2023-wku, title = "{WKU}{\_}{NLP} at {S}em{E}val-2023 Task 9: Translation Augmented Multilingual Tweet Intimacy Analysis", author = "Zheng, Qinyuan", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Da San Martino, Giovanni and Tayyar Madabushi, Harish and Kumar, Ritesh and Sartori, Elisa}, booktitle = "Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.semeval-1.210", doi = "10.18653/v1/2023.semeval-1.210", pages = "1525--1530", abstract = "This paper describes a system for the SemEval 2023 Task 9: Multilingual Tweet Intimacy Analysis. This system consists of a pretrained multilingual masked language model as a text encoder and a neural network as a regression model. Data augmentation based on neural machine translation models is adopted to improve model performance under the low-resource scenario. This system is further improved through the ensemble of multiple models with the best performance in each language. This system ranks 4th in languages unseen in the training data and 16th in languages seen in the training data. The code and data can be found in this link: \url{https://github.com/Cloudy0219/Multilingual}.", }
This paper describes a system for the SemEval 2023 Task 9: Multilingual Tweet Intimacy Analysis. This system consists of a pretrained multilingual masked language model as a text encoder and a neural network as a regression model. Data augmentation based on neural machine translation models is adopted to improve model performance under the low-resource scenario. This system is further improved through the ensemble of multiple models with the best performance in each language. This system ranks 4th in languages unseen in the training data and 16th in languages seen in the training data. The code and data can be found in this link: \url{https://github.com/Cloudy0219/Multilingual}.
[ "Zheng, Qinyuan" ]
WKU_NLP at SemEval-2023 Task 9: Translation Augmented Multilingual Tweet Intimacy Analysis
semeval-1.210
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.semeval-1.211.bib
https://aclanthology.org/2023.semeval-1.211/
@inproceedings{panwar-mamidi-2023-panwarjayant, title = "{P}anwar{J}ayant at {S}em{E}val-2023 Task 10: Exploring the Effectiveness of Conventional Machine Learning Techniques for Online Sexism Detection", author = "Panwar, Jayant and Mamidi, Radhika", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Da San Martino, Giovanni and Tayyar Madabushi, Harish and Kumar, Ritesh and Sartori, Elisa}, booktitle = "Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.semeval-1.211", doi = "10.18653/v1/2023.semeval-1.211", pages = "1531--1536", abstract = "The rapid growth of online communication on social media platforms has led to an increase in the presence of hate speech, especially sexist language online. The proliferation of such hate speech has a significant impact on the mental health and well-being of users, hence the need for automated systems to detect and filter such texts. In this study, we explore the effectiveness of conventional machine learning techniques for detecting sexist text. We explore five conventional classifiers, namely Logistic Regression, Decision Tree, XGBoost, Support Vector Machines, and Random Forest. The results show that different classifiers perform differently on each task due to their different inherent architectures, which may make them better suited to certain problems. These models are trained on the shared task dataset, which includes both sexist and non-sexist texts. All in all, this study explores the potential of conventional machine learning techniques in detecting online sexist content. The results of this study highlight the strengths and weaknesses of all classifiers with respect to all subtasks and will be useful for researchers and practitioners interested in developing systems for detecting or filtering online hate speech.", }
The rapid growth of online communication on social media platforms has led to an increase in the presence of hate speech, especially sexist language online. The proliferation of such hate speech has a significant impact on the mental health and well-being of users, hence the need for automated systems to detect and filter such texts. In this study, we explore the effectiveness of conventional machine learning techniques for detecting sexist text. We explore five conventional classifiers, namely Logistic Regression, Decision Tree, XGBoost, Support Vector Machines, and Random Forest. The results show that different classifiers perform differently on each task due to their different inherent architectures, which may make them better suited to certain problems. These models are trained on the shared task dataset, which includes both sexist and non-sexist texts. All in all, this study explores the potential of conventional machine learning techniques in detecting online sexist content. The results of this study highlight the strengths and weaknesses of all classifiers with respect to all subtasks and will be useful for researchers and practitioners interested in developing systems for detecting or filtering online hate speech.
[ "Panwar, Jayant", "Mamidi, Radhika" ]
PanwarJayant at SemEval-2023 Task 10: Exploring the Effectiveness of Conventional Machine Learning Techniques for Online Sexism Detection
semeval-1.211
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.semeval-1.212.bib
https://aclanthology.org/2023.semeval-1.212/
@inproceedings{homskiy-maloyan-2023-dn, title = "{DN} at {S}em{E}val-2023 Task 12: Low-Resource Language Text Classification via Multilingual Pretrained Language Model Fine-tuning", author = "Homskiy, Daniil and Maloyan, Narek", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Da San Martino, Giovanni and Tayyar Madabushi, Harish and Kumar, Ritesh and Sartori, Elisa}, booktitle = "Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.semeval-1.212", doi = "10.18653/v1/2023.semeval-1.212", pages = "1537--1541", abstract = "In our work, a model is implemented that solves the task, based on multilingual pre-trained models. We also consider various methods of data preprocessing.", }
In our work, a model is implemented that solves the task, based on multilingual pre-trained models. We also consider various methods of data preprocessing.
[ "Homskiy, Daniil", "Maloyan, Narek" ]
DN at SemEval-2023 Task 12: Low-Resource Language Text Classification via Multilingual Pretrained Language Model Fine-tuning
semeval-1.212
Poster
2305.02607
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.semeval-1.213.bib
https://aclanthology.org/2023.semeval-1.213/
@inproceedings{kruff-tran-2023-billie, title = "Billie-Newman at {S}em{E}val-2023 Task 5: Clickbait Classification and Question Answering with Pre-Trained Language Models, Named Entity Recognition and Rule-Based Approaches", author = "Kruff, Andreas and Tran, Anh Huy", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Da San Martino, Giovanni and Tayyar Madabushi, Harish and Kumar, Ritesh and Sartori, Elisa}, booktitle = "Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.semeval-1.213", doi = "10.18653/v1/2023.semeval-1.213", pages = "1542--1550", abstract = "In this paper, we describe the implementations of our systems for the SemEval-2023 Task 5 {`}Clickbait Spoiling{'}, which involves the classification of clickbait posts in sub-task 1 and the spoiler generation and question answering of clickbait posts in sub-task 2, ultimately achieving a balanced accuracy of 0.593 and a BLEU score of 0.322 on the test datasets in sub-task 1 and sub-task 2 respectively. For this, we propose the usage of RoBERTa transformer models and modify them for each specific downstream task. In sub-task 1, we use the pre-trained RoBERTa model and use it in conjunction with NER, a spoiler-title ratio, a regex check for enumerations and lists and the use of input reformulation. In sub-task 2, we propose the usage of the RoBERTa-SQuAD2.0 model for extractive question answering in combination with a contextual rule-based approach for multi-type spoilers in order to generate spoiler answers.", }
In this paper, we describe the implementations of our systems for the SemEval-2023 Task 5 {`}Clickbait Spoiling{'}, which involves the classification of clickbait posts in sub-task 1 and the spoiler generation and question answering of clickbait posts in sub-task 2, ultimately achieving a balanced accuracy of 0.593 and a BLEU score of 0.322 on the test datasets in sub-task 1 and sub-task 2 respectively. For this, we propose the usage of RoBERTa transformer models and modify them for each specific downstream task. In sub-task 1, we use the pre-trained RoBERTa model and use it in conjunction with NER, a spoiler-title ratio, a regex check for enumerations and lists and the use of input reformulation. In sub-task 2, we propose the usage of the RoBERTa-SQuAD2.0 model for extractive question answering in combination with a contextual rule-based approach for multi-type spoilers in order to generate spoiler answers.
[ "Kruff, Andreas", "Tran, Anh Huy" ]
Billie-Newman at SemEval-2023 Task 5: Clickbait Classification and Question Answering with Pre-Trained Language Models, Named Entity Recognition and Rule-Based Approaches
semeval-1.213
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.semeval-1.214.bib
https://aclanthology.org/2023.semeval-1.214/
@inproceedings{cuadrado-etal-2023-utb, title = "{UTB}-{NLP} at {S}em{E}val-2023 Task 3: Weirdness, Lexical Features for Detecting Categorical Framings, and Persuasion in Online News", author = "Cuadrado, Juan and Martinez, Elizabeth and Morillo, Anderson and Pe{\~n}a, Daniel and Sossa, Kevin and Martinez-Santos, Juan and Puertas, Edwin", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Da San Martino, Giovanni and Tayyar Madabushi, Harish and Kumar, Ritesh and Sartori, Elisa}, booktitle = "Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.semeval-1.214", doi = "10.18653/v1/2023.semeval-1.214", pages = "1551--1557", abstract = "Nowadays, persuasive messages are increasingly frequent in social networks, which generates great concern in several communities, given that persuasion seeks to guide others towards the adoption of ideas, attitudes or actions that they consider to be beneficial to themselves. The efficient detection of news genre categories, framing, and persuasion techniques requires several scientific disciplines, such as computational linguistics and sociology. Here we illustrate how we use lexical features, given a news article, to determine whether it is an opinion piece, aims to report factual news, or is satire. This paper presents a novel strategy for news classification based on Lexical Weirdness. The results are part of our participation in subtasks 1 and 2 of SemEval 2023 Task 3.", }
Nowadays, persuasive messages are increasingly frequent in social networks, which generates great concern in several communities, given that persuasion seeks to guide others towards the adoption of ideas, attitudes or actions that they consider to be beneficial to themselves. The efficient detection of news genre categories, framing, and persuasion techniques requires several scientific disciplines, such as computational linguistics and sociology. Here we illustrate how we use lexical features, given a news article, to determine whether it is an opinion piece, aims to report factual news, or is satire. This paper presents a novel strategy for news classification based on Lexical Weirdness. The results are part of our participation in subtasks 1 and 2 of SemEval 2023 Task 3.
[ "Cuadrado, Juan", "Martinez, Elizabeth", "Morillo, Anderson", "Pe{\\~n}a, Daniel", "Sossa, Kevin", "Martinez-Santos, Juan", "Puertas, Edwin" ]
UTB-NLP at SemEval-2023 Task 3: Weirdness, Lexical Features for Detecting Categorical Framings, and Persuasion in Online News
semeval-1.214
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.semeval-1.215.bib
https://aclanthology.org/2023.semeval-1.215/
@inproceedings{verma-bergler-2023-clac, title = "{CL}a{C} at {S}em{E}val-2023 Task 2: Comparing Span-Prediction and Sequence-Labeling Approaches for {NER}", author = "Verma, Harsh and Bergler, Sabine", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Da San Martino, Giovanni and Tayyar Madabushi, Harish and Kumar, Ritesh and Sartori, Elisa}, booktitle = "Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.semeval-1.215", doi = "10.18653/v1/2023.semeval-1.215", pages = "1558--1561", abstract = "This paper summarizes the CLaC submission for the MultiCoNER 2 task which concerns the recognition of complex, fine-grained named entities. We compare two popular approaches for NER, namely Sequence Labeling and Span Prediction. We find that our best Span Prediction system performs slightly better than our best Sequence Labeling system on test data. Moreover, we find that using the larger version of XLM RoBERTa significantly improves performance. Post-competition experiments show that Span Prediction and Sequence Labeling approaches improve when they use special input tokens ([s] and [/s]) of XLM-RoBERTa. The code for training all models, preprocessing, and post-processing is available at \url{https://github.com/harshshredding/semeval2023-multiconer-paper}.", }
This paper summarizes the CLaC submission for the MultiCoNER 2 task which concerns the recognition of complex, fine-grained named entities. We compare two popular approaches for NER, namely Sequence Labeling and Span Prediction. We find that our best Span Prediction system performs slightly better than our best Sequence Labeling system on test data. Moreover, we find that using the larger version of XLM RoBERTa significantly improves performance. Post-competition experiments show that Span Prediction and Sequence Labeling approaches improve when they use special input tokens ([s] and [/s]) of XLM-RoBERTa. The code for training all models, preprocessing, and post-processing is available at \url{https://github.com/harshshredding/semeval2023-multiconer-paper}.
[ "Verma, Harsh", "Bergler, Sabine" ]
CLaC at SemEval-2023 Task 2: Comparing Span-Prediction and Sequence-Labeling Approaches for NER
semeval-1.215
Poster
2305.03845
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.semeval-1.216.bib
https://aclanthology.org/2023.semeval-1.216/
@inproceedings{goldzycher-2023-cl, title = "{CL}-{UZH} at {S}em{E}val-2023 Task 10: Sexism Detection through Incremental Fine-Tuning and Multi-Task Learning with Label Descriptions", author = "Goldzycher, Janis", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Da San Martino, Giovanni and Tayyar Madabushi, Harish and Kumar, Ritesh and Sartori, Elisa}, booktitle = "Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.semeval-1.216", doi = "10.18653/v1/2023.semeval-1.216", pages = "1562--1572", abstract = "The widespread popularity of social media has led to an increase in hateful, abusive, and sexist language, motivating methods for the automatic detection of such phenomena. The goal of the SemEval shared task Towards Explainable Detection of Online Sexism (EDOS 2023) is to detect sexism in English social media posts (subtask A), and to categorize such posts into four coarse-grained sexism categories (subtask B), and eleven fine-grained subcategories (subtask C). In this paper, we present our submitted systems for all three subtasks, based on a multi-task model that has been fine-tuned on a range of related tasks and datasets before being fine-tuned on the specific EDOS subtasks. We implement multi-task learning by formulating each task as binary pairwise text classification, where the dataset and label descriptions are given along with the input text. The results show clear improvements over a fine-tuned DeBERTa-V3 serving as a baseline leading to F1-scores of 85.9{\%} in subtask A (rank 13/84), 64.8{\%} in subtask B (rank 19/69), and 44.9{\%} in subtask C (26/63).", }
The widespread popularity of social media has led to an increase in hateful, abusive, and sexist language, motivating methods for the automatic detection of such phenomena. The goal of the SemEval shared task Towards Explainable Detection of Online Sexism (EDOS 2023) is to detect sexism in English social media posts (subtask A), and to categorize such posts into four coarse-grained sexism categories (subtask B), and eleven fine-grained subcategories (subtask C). In this paper, we present our submitted systems for all three subtasks, based on a multi-task model that has been fine-tuned on a range of related tasks and datasets before being fine-tuned on the specific EDOS subtasks. We implement multi-task learning by formulating each task as binary pairwise text classification, where the dataset and label descriptions are given along with the input text. The results show clear improvements over a fine-tuned DeBERTa-V3 serving as a baseline leading to F1-scores of 85.9{\%} in subtask A (rank 13/84), 64.8{\%} in subtask B (rank 19/69), and 44.9{\%} in subtask C (26/63).
[ "Goldzycher, Janis" ]
CL-UZH at SemEval-2023 Task 10: Sexism Detection through Incremental Fine-Tuning and Multi-Task Learning with Label Descriptions
semeval-1.216
Poster
2306.03907
[ "https://github.com/jagol/cl-uzh-edos-2023" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.semeval-1.217.bib
https://aclanthology.org/2023.semeval-1.217/
@inproceedings{chernyshev-etal-2023-lct, title = "{LCT}-1 at {S}em{E}val-2023 Task 10: Pre-training and Multi-task Learning for Sexism Detection and Classification", author = "Chernyshev, Konstantin and Garanina, Ekaterina and Bayram, Duygu and Zheng, Qiankun and Edman, Lukas", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Da San Martino, Giovanni and Tayyar Madabushi, Harish and Kumar, Ritesh and Sartori, Elisa}, booktitle = "Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.semeval-1.217", doi = "10.18653/v1/2023.semeval-1.217", pages = "1573--1581", abstract = "Misogyny and sexism are growing problems in social media. Advances have been made in online sexism detection but the systems are often uninterpretable. SemEval-2023 Task 10 on Explainable Detection of Online Sexism aims at increasing explainability of the sexism detection, and our team participated in all the proposed subtasks. Our system is based on further domain-adaptive pre-training. Building on the Transformer-based models with the domain adaptation, we compare fine-tuning with multi-task learning and show that each subtask requires a different system configuration. In our experiments, multi-task learning performs on par with standard fine-tuning for sexism detection and noticeably better for coarse-grained sexism classification, while fine-tuning is preferable for fine-grained classification.", }
Misogyny and sexism are growing problems in social media. Advances have been made in online sexism detection but the systems are often uninterpretable. SemEval-2023 Task 10 on Explainable Detection of Online Sexism aims at increasing explainability of the sexism detection, and our team participated in all the proposed subtasks. Our system is based on further domain-adaptive pre-training. Building on the Transformer-based models with the domain adaptation, we compare fine-tuning with multi-task learning and show that each subtask requires a different system configuration. In our experiments, multi-task learning performs on par with standard fine-tuning for sexism detection and noticeably better for coarse-grained sexism classification, while fine-tuning is preferable for fine-grained classification.
[ "Chernyshev, Konstantin", "Garanina, Ekaterina", "Bayram, Duygu", "Zheng, Qiankun", "Edman, Lukas" ]
LCT-1 at SemEval-2023 Task 10: Pre-training and Multi-task Learning for Sexism Detection and Classification
semeval-1.217
Poster
2306.05075
[ "https://github.com/lct-rug-2022/edos-2023" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.semeval-1.218.bib
https://aclanthology.org/2023.semeval-1.218/
@inproceedings{modzelewski-etal-2023-dshacker, title = "{DSH}acker at {S}em{E}val-2023 Task 3: Genres and Persuasion Techniques Detection with Multilingual Data Augmentation through Machine Translation and Text Generation", author = "Modzelewski, Arkadiusz and Sosnowski, Witold and Wilczynska, Magdalena and Wierzbicki, Adam", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Da San Martino, Giovanni and Tayyar Madabushi, Harish and Kumar, Ritesh and Sartori, Elisa}, booktitle = "Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.semeval-1.218", doi = "10.18653/v1/2023.semeval-1.218", pages = "1582--1591", abstract = "In our article, we present the systems developed for SemEval-2023 Task 3, which aimed to evaluate the ability of Natural Language Processing (NLP) systems to detect genres and persuasion techniques in multiple languages. We experimented with several data augmentation techniques, including machine translation (MT) and text generation. For genre detection, synthetic texts for each class were created using the OpenAI GPT-3 Davinci language model. In contrast, to detect persuasion techniques, we relied on augmenting the dataset through text translation using the DeepL translator. Fine-tuning the models using augmented data resulted in a top-ten ranking across all languages, indicating the effectiveness of the approach. The models for genre detection demonstrated excellent performance, securing the first, second, and third positions in Spanish, German, and Italian, respectively. Moreover, one of the models for persuasion techniques{'} detection secured the third position in Polish. Our contribution constitutes the system architecture that utilizes DeepL and GPT-3 for data augmentation for the purpose of detecting both genre and persuasion techniques.", }
In our article, we present the systems developed for SemEval-2023 Task 3, which aimed to evaluate the ability of Natural Language Processing (NLP) systems to detect genres and persuasion techniques in multiple languages. We experimented with several data augmentation techniques, including machine translation (MT) and text generation. For genre detection, synthetic texts for each class were created using the OpenAI GPT-3 Davinci language model. In contrast, to detect persuasion techniques, we relied on augmenting the dataset through text translation using the DeepL translator. Fine-tuning the models using augmented data resulted in a top-ten ranking across all languages, indicating the effectiveness of the approach. The models for genre detection demonstrated excellent performance, securing the first, second, and third positions in Spanish, German, and Italian, respectively. Moreover, one of the models for persuasion techniques{'} detection secured the third position in Polish. Our contribution constitutes the system architecture that utilizes DeepL and GPT-3 for data augmentation for the purpose of detecting both genre and persuasion techniques.
[ "Modzelewski, Arkadiusz", "Sosnowski, Witold", "Wilczynska, Magdalena", "Wierzbicki, Adam" ]
DSHacker at SemEval-2023 Task 3: Genres and Persuasion Techniques Detection with Multilingual Data Augmentation through Machine Translation and Text Generation
semeval-1.218
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.semeval-1.219.bib
https://aclanthology.org/2023.semeval-1.219/
@inproceedings{zhang-etal-2023-gpl, title = "{GPL} at {S}em{E}val-2023 Task 1: {W}ord{N}et and {CLIP} to Disambiguate Images", author = "Zhang, Shibingfeng and Nath, Shantanu and Mazzaccara, Davide", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Da San Martino, Giovanni and Tayyar Madabushi, Harish and Kumar, Ritesh and Sartori, Elisa}, booktitle = "Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.semeval-1.219", doi = "10.18653/v1/2023.semeval-1.219", pages = "1592--1597", abstract = "Given a word in context, the task of Visual Word Sense Disambiguation consists of selecting the correct image among a set of candidates. To select the correct image, we propose a solution blending text augmentation and multimodal models. Text augmentation leverages the fine-grained semantic annotation from WordNet to get a better representation of the textual component. We then compare this sense-augmented text to the set of images using the pre-trained multimodal models CLIP and ViLT. Our system has been ranked 16th for the English language, achieving 68.5 points for hit rate and 79.2 for mean reciprocal rank.", }
Given a word in context, the task of Visual Word Sense Disambiguation consists of selecting the correct image among a set of candidates. To select the correct image, we propose a solution blending text augmentation and multimodal models. Text augmentation leverages the fine-grained semantic annotation from WordNet to get a better representation of the textual component. We then compare this sense-augmented text to the set of images using the pre-trained multimodal models CLIP and ViLT. Our system has been ranked 16th for the English language, achieving 68.5 points for hit rate and 79.2 for mean reciprocal rank.
[ "Zhang, Shibingfeng", "Nath, Shantanu", "Mazzaccara, Davide" ]
GPL at SemEval-2023 Task 1: WordNet and CLIP to Disambiguate Images
semeval-1.219
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.semeval-1.220.bib
https://aclanthology.org/2023.semeval-1.220/
@inproceedings{alameldin-williamson-2023-clemson, title = "Clemson {NLP} at {S}em{E}val-2023 Task 7: Applying {G}ator{T}ron to Multi-Evidence Clinical {NLI}", author = "Alameldin, Ahamed and Williamson, Ashton", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Da San Martino, Giovanni and Tayyar Madabushi, Harish and Kumar, Ritesh and Sartori, Elisa}, booktitle = "Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.semeval-1.220", doi = "10.18653/v1/2023.semeval-1.220", pages = "1598--1602", abstract = "This paper presents our system descriptions for SemEval 2023-Task 7: Multi-evidence Natural Language Inference for Clinical Trial Data sub-tasks one and two. Provided with a collection of Clinical Trial Reports (CTRs) and corresponding expert-annotated claim statements, sub-task one involves determining an inferential relationship between the statement and CTR premise: contradiction or entailment. Sub-task two involves retrieving evidence from the CTR which is necessary to determine the entailment in sub-task one. For sub-task two we employ a recent transformer-based language model pretrained on biomedical literature, which we domain-adapt on a set of clinical trial reports. For sub-task one, we take an ensemble approach in which we leverage the evidence retrieval model from sub-task two to extract relevant sections, which are then passed to a second model of equivalent architecture to determine entailment. Our system achieves a ranking of seventh on sub-task one with an F1-score of 0.705 and sixth on sub-task two with an F1-score of 0.806. In addition, we find that the high rate of success of language models on this dataset may be partially attributable to the existence of annotation artifacts.", }
This paper presents our system descriptions for SemEval 2023-Task 7: Multi-evidence Natural Language Inference for Clinical Trial Data sub-tasks one and two. Provided with a collection of Clinical Trial Reports (CTRs) and corresponding expert-annotated claim statements, sub-task one involves determining an inferential relationship between the statement and CTR premise: contradiction or entailment. Sub-task two involves retrieving evidence from the CTR which is necessary to determine the entailment in sub-task one. For sub-task two we employ a recent transformer-based language model pretrained on biomedical literature, which we domain-adapt on a set of clinical trial reports. For sub-task one, we take an ensemble approach in which we leverage the evidence retrieval model from sub-task two to extract relevant sections, which are then passed to a second model of equivalent architecture to determine entailment. Our system achieves a ranking of seventh on sub-task one with an F1-score of 0.705 and sixth on sub-task two with an F1-score of 0.806. In addition, we find that the high rate of success of language models on this dataset may be partially attributable to the existence of annotation artifacts.
[ "Alameldin, Ahamed", "Williamson, Ashton" ]
Clemson NLP at SemEval-2023 Task 7: Applying GatorTron to Multi-Evidence Clinical NLI
semeval-1.220
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.semeval-1.221.bib
https://aclanthology.org/2023.semeval-1.221/
@inproceedings{zhao-etal-2023-hw, title = "{HW}-{TSC} at {S}em{E}val-2023 Task 7: Exploring the Natural Language Inference Capabilities of {C}hat{GPT} and Pre-trained Language Model for Clinical Trial", author = "Zhao, Xiaofeng and Zhang, Min and Ma, Miaomiao and Su, Chang and Liu, Yilun and Wang, Minghan and Qiao, Xiaosong and Guo, Jiaxin and Li, Yinglu and Ma, Wenbing", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Da San Martino, Giovanni and Tayyar Madabushi, Harish and Kumar, Ritesh and Sartori, Elisa}, booktitle = "Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.semeval-1.221", doi = "10.18653/v1/2023.semeval-1.221", pages = "1603--1608", abstract = "In this paper, we describe our multi-strategy system for SemEval-2023 Task 7. This task aims to determine whether a given statement is supported by one or two Clinical Trial reports, and to identify evidence that supports the statement. This is a task that requires high natural language inference capabilities. In Subtask 1, we compare our strategy based on prompt learning and ChatGPT with a baseline constructed using BERT in a zero-shot setting, and validate the effectiveness of our strategy. In Subtask 2, we fine-tune DeBERTaV3 for classification without relying on the results from Subtask 1, and we observe that early stopping effectively prevents model overfitting and performs well in Subtask 2. In addition, we did not use any ensemble strategies. Ultimately, we achieved 10th place in Subtask 1 and 2nd place in Subtask 2.", }
In this paper, we describe our multi-strategy system for SemEval-2023 Task 7. This task aims to determine whether a given statement is supported by one or two Clinical Trial reports, and to identify evidence that supports the statement. This is a task that requires high natural language inference capabilities. In Subtask 1, we compare our strategy based on prompt learning and ChatGPT with a baseline constructed using BERT in a zero-shot setting, and validate the effectiveness of our strategy. In Subtask 2, we fine-tune DeBERTaV3 for classification without relying on the results from Subtask 1, and we observe that early stopping effectively prevents model overfitting and performs well in Subtask 2. In addition, we did not use any ensemble strategies. Ultimately, we achieved 10th place in Subtask 1 and 2nd place in Subtask 2.
[ "Zhao, Xiaofeng", "Zhang, Min", "Ma, Miaomiao", "Su, Chang", "Liu, Yilun", "Wang, Minghan", "Qiao, Xiaosong", "Guo, Jiaxin", "Li, Yinglu", "Ma, Wenbing" ]
HW-TSC at SemEval-2023 Task 7: Exploring the Natural Language Inference Capabilities of ChatGPT and Pre-trained Language Model for Clinical Trial
semeval-1.221
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.semeval-1.222.bib
https://aclanthology.org/2023.semeval-1.222/
@inproceedings{mopidevi-chenna-2023-quintilian, title = "Quintilian at {S}em{E}val-2023 Task 4: Grouped {BERT} for Multi-Label Classification", author = "Mopidevi, Ajay Narasimha and Chenna, Hemanth", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Da San Martino, Giovanni and Tayyar Madabushi, Harish and Kumar, Ritesh and Sartori, Elisa}, booktitle = "Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.semeval-1.222", doi = "10.18653/v1/2023.semeval-1.222", pages = "1609--1612", abstract = "In this paper, we first discuss the ValueEval task and the challenges involved in multi-label classification tasks. We approach this task using Natural Language Inference and propose a Grouped-BERT architecture that leverages commonality between classes for multi-label classification tasks.", }
In this paper, we first discuss the ValueEval task and the challenges involved in multi-label classification tasks. We approach this task using Natural Language Inference and propose a Grouped-BERT architecture that leverages commonality between classes for multi-label classification tasks.
[ "Mopidevi, Ajay Narasimha", "Chenna, Hemanth" ]
Quintilian at SemEval-2023 Task 4: Grouped BERT for Multi-Label Classification
semeval-1.222
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.semeval-1.223.bib
https://aclanthology.org/2023.semeval-1.223/
@inproceedings{costa-etal-2023-clac, title = "{CL}a{C} at {S}em{E}val-2023 Task 3: Language Potluck {R}o{BERT}a Detects Online Persuasion Techniques in a Multilingual Setup", author = "Costa, Nelson Filipe and Hamilton, Bryce and Kosseim, Leila", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Da San Martino, Giovanni and Tayyar Madabushi, Harish and Kumar, Ritesh and Sartori, Elisa}, booktitle = "Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.semeval-1.223", doi = "10.18653/v1/2023.semeval-1.223", pages = "1613--1618", abstract = "This paper presents our approach to the SemEval-2023 Task 3 to detect online persuasion techniques in a multilingual setup. Our classification system is based on the RoBERTa-base model trained predominantly on English to label the persuasion techniques across 9 different languages. Our system was able to significantly surpass the baseline performance in 3 of the 9 languages: English, Georgian and Greek. However, our assumption that a single classification system trained predominantly on English could generalize well to other languages proved wrong and negatively impacted our scores on the other 6 languages. In this paper, we provide a description of the reasoning behind the development of our final model and what conclusions may be drawn from its performance for future work.", }
This paper presents our approach to the SemEval-2023 Task 3 to detect online persuasion techniques in a multilingual setup. Our classification system is based on the RoBERTa-base model trained predominantly on English to label the persuasion techniques across 9 different languages. Our system was able to significantly surpass the baseline performance in 3 of the 9 languages: English, Georgian and Greek. However, our assumption that a single classification system trained predominantly on English could generalize well to other languages proved wrong and negatively impacted our scores on the other 6 languages. In this paper, we provide a description of the reasoning behind the development of our final model and what conclusions may be drawn from its performance for future work.
[ "Costa, Nelson Filipe", "Hamilton, Bryce", "Kosseim, Leila" ]
CLaC at SemEval-2023 Task 3: Language Potluck RoBERTa Detects Online Persuasion Techniques in a Multilingual Setup
semeval-1.223
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.semeval-1.224.bib
https://aclanthology.org/2023.semeval-1.224/
@inproceedings{li-zhou-2023-ynunlp, title = "{YNUNLP} at {S}em{E}val-2023 Task 2: The Pseudo Twin Tower Pre-training Model for {C}hinese Named Entity Recognition", author = "Li, Jing and Zhou, Xiaobing", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Da San Martino, Giovanni and Tayyar Madabushi, Harish and Kumar, Ritesh and Sartori, Elisa}, booktitle = "Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.semeval-1.224", doi = "10.18653/v1/2023.semeval-1.224", pages = "1619--1624", abstract = "This paper introduces our method in the system for SemEval 2023 Task 2: MultiCoNER II Multilingual Complex Named Entity Recognition, Track 9-Chinese. This task focuses on detecting fine-grained named entities whose data set has a fine-grained taxonomy of 36 NE classes, representing a realistic challenge for NER. In this task, we need to identify entity boundaries and category labels for the six identified categories. We use BERT embeddings to represent each character in the original sentence and train CRF-Rdrop to predict named entity categories using the data set provided by the organizer. Our best submission, with a macro average F1 score of 0.5657, ranked 15th out of 22 teams.", }
This paper introduces our method in the system for SemEval 2023 Task 2: MultiCoNER II Multilingual Complex Named Entity Recognition, Track 9-Chinese. This task focuses on detecting fine-grained named entities whose data set has a fine-grained taxonomy of 36 NE classes, representing a realistic challenge for NER. In this task, we need to identify entity boundaries and category labels for the six identified categories. We use BERT embeddings to represent each character in the original sentence and train CRF-Rdrop to predict named entity categories using the data set provided by the organizer. Our best submission, with a macro average F1 score of 0.5657, ranked 15th out of 22 teams.
[ "Li, Jing", "Zhou, Xiaobing" ]
YNUNLP at SemEval-2023 Task 2: The Pseudo Twin Tower Pre-training Model for Chinese Named Entity Recognition
semeval-1.224
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.semeval-1.225.bib
https://aclanthology.org/2023.semeval-1.225/
@inproceedings{saravanan-wilson-2023-mr, title = "Mr-wallace at {S}em{E}val-2023 Task 5: Novel Clickbait Spoiling Algorithm Using Natural Language Processing", author = "Saravanan, Vineet and Wilson, Steven", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Da San Martino, Giovanni and Tayyar Madabushi, Harish and Kumar, Ritesh and Sartori, Elisa}, booktitle = "Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.semeval-1.225", doi = "10.18653/v1/2023.semeval-1.225", pages = "1625--1629", abstract = "This paper presents a model for clickbait spoiling, which aims at generating short texts that satisfy the curiosity induced by a clickbait post. The model is split into two tasks: identifying the clickbait type and spoiling the clickbait. The first task is to classify the spoiler type that the clickbait post warrants, and the second task is to generate the spoiler for the clickbait post. The model utilizes the Distilbert-base-uncased model for the first task and the Bert-base-uncased model for the second task. The trained model is optimized through trial and error on different model selections and hyperparameters, and results are presented in a confusion matrix. The main reason we utilized Distilbert-base-uncased is that it analyzes words in the context of what{'}s around it. The objective of this model is to save readers time and spoil the clickbait of different articles they may see on different platforms like Twitter and Reddit.", }
This paper presents a model for clickbait spoiling, which aims at generating short texts that satisfy the curiosity induced by a clickbait post. The model is split into two tasks: identifying the clickbait type and spoiling the clickbait. The first task is to classify the spoiler type that the clickbait post warrants, and the second task is to generate the spoiler for the clickbait post. The model utilizes the Distilbert-base-uncased model for the first task and the Bert-base-uncased model for the second task. The trained model is optimized through trial and error on different model selections and hyperparameters, and results are presented in a confusion matrix. The main reason we utilized Distilbert-base-uncased is that it analyzes words in the context of what{'}s around them. The objective of this model is to save readers time and spoil the clickbait of different articles they may see on different platforms like Twitter and Reddit.
[ "Saravanan, Vineet", "Wilson, Steven" ]
Mr-wallace at SemEval-2023 Task 5: Novel Clickbait Spoiling Algorithm Using Natural Language Processing
semeval-1.225
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.semeval-1.226.bib
https://aclanthology.org/2023.semeval-1.226/
@inproceedings{rajamanickam-rajaraman-2023-i2r, title = "{I}2{R} at {S}em{E}val-2023 Task 7: Explanations-driven Ensemble Approach for Natural Language Inference over Clinical Trial Data", author = "Rajamanickam, Saravanan and Rajaraman, Kanagasabai", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Da San Martino, Giovanni and Tayyar Madabushi, Harish and Kumar, Ritesh and Sartori, Elisa}, booktitle = "Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.semeval-1.226", doi = "10.18653/v1/2023.semeval-1.226", pages = "1630--1635", abstract = "In this paper, we describe our system for SemEval-2023 Task 7: Multi-evidence Natural Language Inference for Clinical Trial Data. Given a CTR premise, and a statement, this task involves 2 sub-tasks (i) identifying the inference relation between CTR - statement pairs (Task 1: Textual Entailment), and (ii) extracting a set of supporting facts, from the premise, to justify the label predicted in Task 1 (Task 2: Evidence Retrieval). We adopt an explanations driven NLI approach to tackle the tasks. Given a statement to verify, the idea is to first identify relevant evidence from the target CTR(s), perform evidence level inferences and then ensemble them to arrive at the final inference. We have experimented with various BERT based models and T5 models. Our final model uses T5 base that achieved better performance compared to BERT models. In summary, our system achieves F1 score of 70.1{\%} for Task 1 and 80.2{\%} for Task 2. We ranked 8th respectively under both the tasks. Moreover, ours was one of the 5 systems that ranked within the Top 10 under both tasks.", }
In this paper, we describe our system for SemEval-2023 Task 7: Multi-evidence Natural Language Inference for Clinical Trial Data. Given a CTR premise, and a statement, this task involves 2 sub-tasks (i) identifying the inference relation between CTR - statement pairs (Task 1: Textual Entailment), and (ii) extracting a set of supporting facts, from the premise, to justify the label predicted in Task 1 (Task 2: Evidence Retrieval). We adopt an explanations driven NLI approach to tackle the tasks. Given a statement to verify, the idea is to first identify relevant evidence from the target CTR(s), perform evidence level inferences and then ensemble them to arrive at the final inference. We have experimented with various BERT based models and T5 models. Our final model uses T5 base that achieved better performance compared to BERT models. In summary, our system achieves F1 score of 70.1{\%} for Task 1 and 80.2{\%} for Task 2. We ranked 8th respectively under both the tasks. Moreover, ours was one of the 5 systems that ranked within the Top 10 under both tasks.
[ "Rajamanickam, Saravanan", "Rajaraman, Kanagasabai" ]
I2R at SemEval-2023 Task 7: Explanations-driven Ensemble Approach for Natural Language Inference over Clinical Trial Data
semeval-1.226
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.semeval-1.227.bib
https://aclanthology.org/2023.semeval-1.227/
@inproceedings{liu-etal-2023-nlubot101, title = "{NLUB}ot101 at {S}em{E}val-2023 Task 3: An Augmented Multilingual {NLI} Approach Towards Online News Persuasion Techniques Detection", author = "Liu, Genglin and Fung, Yi and Ji, Heng", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Da San Martino, Giovanni and Tayyar Madabushi, Harish and Kumar, Ritesh and Sartori, Elisa}, booktitle = "Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.semeval-1.227", doi = "10.18653/v1/2023.semeval-1.227", pages = "1636--1643", abstract = "We describe our submission to SemEval 2023 Task 3, specifically the subtask on persuasion technique detection. In this work, our team NLUBot101 tackled a novel task of classifying persuasion techniques in online news articles at a paragraph level. The low-resource multilingual datasets, along with the imbalanced label distribution, make this task challenging. Our team presented a cross-lingual data augmentation approach and leveraged a recently proposed multilingual natural language inference model to address these challenges. Our solution achieves the highest macro-F1 score for the English task, and top 5 micro-F1 scores on both the English and Russian leaderboards.", }
We describe our submission to SemEval 2023 Task 3, specifically the subtask on persuasion technique detection. In this work, our team NLUBot101 tackled a novel task of classifying persuasion techniques in online news articles at a paragraph level. The low-resource multilingual datasets, along with the imbalanced label distribution, make this task challenging. Our team presented a cross-lingual data augmentation approach and leveraged a recently proposed multilingual natural language inference model to address these challenges. Our solution achieves the highest macro-F1 score for the English task, and top 5 micro-F1 scores on both the English and Russian leaderboards.
[ "Liu, Genglin", "Fung, Yi", "Ji, Heng" ]
NLUBot101 at SemEval-2023 Task 3: An Augmented Multilingual NLI Approach Towards Online News Persuasion Techniques Detection
semeval-1.227
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.semeval-1.228.bib
https://aclanthology.org/2023.semeval-1.228/
@inproceedings{younes-etal-2023-alexa, title = "{A}lexa at {S}em{E}val-2023 Task 10: Ensemble Modeling of {D}e{BERT}a and {BERT} Variations for Identifying Sexist Text", author = "Younes, Mutaz and Kharabsheh, Ali and Bani Younes, Mohammad", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Da San Martino, Giovanni and Tayyar Madabushi, Harish and Kumar, Ritesh and Sartori, Elisa}, booktitle = "Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.semeval-1.228", doi = "10.18653/v1/2023.semeval-1.228", pages = "1644--1649", abstract = "This study presents an ensemble approach for detecting sexist text in the context of the Semeval-2023 task 10. Our approach leverages 18 models, including DeBERTa-v3-base models with different input sequence lengths, a BERT-based model trained on identifying hate speech, and three more models pre-trained on the task{'}s unlabeled data with varying input lengths. The results of our framework on the development set show an f1-score of 84.92{\%} and on the testing set 84.55{\%}, effectively demonstrating the strength of the ensemble approach in getting accurate results.", }
This study presents an ensemble approach for detecting sexist text in the context of the Semeval-2023 task 10. Our approach leverages 18 models, including DeBERTa-v3-base models with different input sequence lengths, a BERT-based model trained on identifying hate speech, and three more models pre-trained on the task{'}s unlabeled data with varying input lengths. The results of our framework on the development set show an f1-score of 84.92{\%} and on the testing set 84.55{\%}, effectively demonstrating the strength of the ensemble approach in getting accurate results.
[ "Younes, Mutaz", "Kharabsheh, Ali", "Bani Younes, Mohammad" ]
Alexa at SemEval-2023 Task 10: Ensemble Modeling of DeBERTa and BERT Variations for Identifying Sexist Text
semeval-1.228
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.semeval-1.229.bib
https://aclanthology.org/2023.semeval-1.229/
@inproceedings{bilgis-etal-2023-gallagher, title = "Gallagher at {S}em{E}val-2023 Task 5: Tackling Clickbait with {S}eq2{S}eq Models", author = "Bilgis, Tugay and Bozdag, Nimet Beyza and Bethard, Steven", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Da San Martino, Giovanni and Tayyar Madabushi, Harish and Kumar, Ritesh and Sartori, Elisa}, booktitle = "Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.semeval-1.229", doi = "10.18653/v1/2023.semeval-1.229", pages = "1650--1655", abstract = "This paper presents the systems and approaches of the Gallagher team for the SemEval-2023 Task 5: Clickbait Spoiling. We propose a method to classify the type of spoiler (phrase, passage, multi) and a question-answering method to generate spoilers that satisfy the curiosity caused by clickbait posts. We experiment with the state-of-the-art Seq2Seq model T5. To identify the spoiler types we used a fine-tuned T5 classifier (Subtask 1). A mixture of T5 and Flan-T5 was used to generate the spoilers for clickbait posts (Subtask 2). Our system officially ranks first in generating phrase type spoilers in Subtask 2, and achieves the highest precision score for passage type spoilers in Subtask 1.", }
This paper presents the systems and approaches of the Gallagher team for the SemEval-2023 Task 5: Clickbait Spoiling. We propose a method to classify the type of spoiler (phrase, passage, multi) and a question-answering method to generate spoilers that satisfy the curiosity caused by clickbait posts. We experiment with the state-of-the-art Seq2Seq model T5. To identify the spoiler types we used a fine-tuned T5 classifier (Subtask 1). A mixture of T5 and Flan-T5 was used to generate the spoilers for clickbait posts (Subtask 2). Our system officially ranks first in generating phrase type spoilers in Subtask 2, and achieves the highest precision score for passage type spoilers in Subtask 1.
[ "Bilgis, Tugay", "Bozdag, Nimet Beyza", "Bethard, Steven" ]
Gallagher at SemEval-2023 Task 5: Tackling Clickbait with Seq2Seq Models
semeval-1.229
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.semeval-1.230.bib
https://aclanthology.org/2023.semeval-1.230/
@inproceedings{bozdag-etal-2023-arizonans, title = "Arizonans at {S}em{E}val-2023 Task 9: Multilingual Tweet Intimacy Analysis with {XLM}-{T}", author = "Bozdag, Nimet Beyza and Bilgis, Tugay and Bethard, Steven", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Da San Martino, Giovanni and Tayyar Madabushi, Harish and Kumar, Ritesh and Sartori, Elisa}, booktitle = "Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.semeval-1.230", doi = "10.18653/v1/2023.semeval-1.230", pages = "1656--1659", abstract = "This paper presents the systems and approaches of the Arizonans team for the SemEval 2023 Task 9: Multilingual Tweet Intimacy Analysis. We finetune the Multilingual RoBERTa model trained with about 200M tweets, XLM-T. Our final model ranked 9th out of 45 overall, 13th in seen languages, and 8th in unseen languages.", }
This paper presents the systems and approaches of the Arizonans team for the SemEval 2023 Task 9: Multilingual Tweet Intimacy Analysis. We finetune the Multilingual RoBERTa model trained with about 200M tweets, XLM-T. Our final model ranked 9th out of 45 overall, 13th in seen languages, and 8th in unseen languages.
[ "Bozdag, Nimet Beyza", "Bilgis, Tugay", "Bethard, Steven" ]
Arizonans at SemEval-2023 Task 9: Multilingual Tweet Intimacy Analysis with XLM-T
semeval-1.230
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.semeval-1.231.bib
https://aclanthology.org/2023.semeval-1.231/
@inproceedings{vitsakis-etal-2023-ilab, title = "i{L}ab at {S}em{E}val-2023 Task 11 Le-Wi-Di: Modelling Disagreement or Modelling Perspectives?", author = "Vitsakis, Nikolas and Parekh, Amit and Dinkar, Tanvi and Abercrombie, Gavin and Konstas, Ioannis and Rieser, Verena", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Da San Martino, Giovanni and Tayyar Madabushi, Harish and Kumar, Ritesh and Sartori, Elisa}, booktitle = "Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.semeval-1.231", doi = "10.18653/v1/2023.semeval-1.231", pages = "1660--1669", abstract = "There are two competing approaches for modelling annotator disagreement: distributional soft-labelling approaches (which aim to capture the level of disagreement) or modelling perspectives of individual annotators or groups thereof. We adapt a multi-task architecture which has previously shown success in modelling perspectives to evaluate its performance on the SEMEVAL Task 11. We do so by combining both approaches, i.e. predicting individual annotator perspectives as an interim step towards predicting annotator disagreement. Despite its previous success, we found that a multi-task approach performed poorly on datasets which contained distinct annotator opinions, suggesting that this approach may not always be suitable when modelling perspectives. Furthermore, our results explain that while strongly perspectivist approaches might not achieve state-of-the-art performance according to evaluation metrics used by distributional approaches, our approach allows for a more nuanced understanding of individual perspectives present in the data. We argue that perspectivist approaches are preferable because they enable decision makers to amplify minority views, and that it is important to re-evaluate metrics to reflect this goal.", }
There are two competing approaches for modelling annotator disagreement: distributional soft-labelling approaches (which aim to capture the level of disagreement) or modelling perspectives of individual annotators or groups thereof. We adapt a multi-task architecture which has previously shown success in modelling perspectives to evaluate its performance on the SEMEVAL Task 11. We do so by combining both approaches, i.e. predicting individual annotator perspectives as an interim step towards predicting annotator disagreement. Despite its previous success, we found that a multi-task approach performed poorly on datasets which contained distinct annotator opinions, suggesting that this approach may not always be suitable when modelling perspectives. Furthermore, our results explain that while strongly perspectivist approaches might not achieve state-of-the-art performance according to evaluation metrics used by distributional approaches, our approach allows for a more nuanced understanding of individual perspectives present in the data. We argue that perspectivist approaches are preferable because they enable decision makers to amplify minority views, and that it is important to re-evaluate metrics to reflect this goal.
[ "Vitsakis, Nikolas", "Parekh, Amit", "Dinkar, Tanvi", "Abercrombie, Gavin", "Konstas, Ioannis", "Rieser, Verena" ]
iLab at SemEval-2023 Task 11 Le-Wi-Di: Modelling Disagreement or Modelling Perspectives?
semeval-1.231
Poster
2305.06074
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.semeval-1.232.bib
https://aclanthology.org/2023.semeval-1.232/
@inproceedings{peng-kim-2023-chride, title = "Chride at {S}em{E}val-2023 Task 10: Fine-tuned Deberta-V3 on Detection of Online Sexism with Hierarchical Loss", author = "Peng, Letian and Kim, Bosung", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Da San Martino, Giovanni and Tayyar Madabushi, Harish and Kumar, Ritesh and Sartori, Elisa}, booktitle = "Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.semeval-1.232", doi = "10.18653/v1/2023.semeval-1.232", pages = "1670--1675", abstract = "Sexism is one of the most concerning problems in the internet society. By detecting sexist expressions, we can reduce the offense toward females and provide useful information to understand how sexism occurs. Our work focuses on a newly-published dataset, EDOS, which annotates English sexist expressions from Reddit and categorizes their specific types. Our method is to train a DeBERTaV3 classifier with all three kinds of labels provided by the dataset, including sexist, category, and granular vectors. Our classifier predicts the probability distribution on vector labels and further applies it to represent category and sexist distributions. Our classifier uses its label and finer-grained labels for each classification to calculate the hierarchical loss for optimization. Our experiments and analyses show that using a combination of loss with finer-grained labels generally achieves better performance on sexism detection and categorization. Codes for our implementation can be found at \url{https://github.com/KomeijiForce/SemEval2023_Task10}.", }
Sexism is one of the most concerning problems in the internet society. By detecting sexist expressions, we can reduce the offense toward females and provide useful information to understand how sexism occurs. Our work focuses on a newly-published dataset, EDOS, which annotates English sexist expressions from Reddit and categorizes their specific types. Our method is to train a DeBERTaV3 classifier with all three kinds of labels provided by the dataset, including sexist, category, and granular vectors. Our classifier predicts the probability distribution on vector labels and further applies it to represent category and sexist distributions. Our classifier uses its label and finer-grained labels for each classification to calculate the hierarchical loss for optimization. Our experiments and analyses show that using a combination of loss with finer-grained labels generally achieves better performance on sexism detection and categorization. Codes for our implementation can be found at \url{https://github.com/KomeijiForce/SemEval2023_Task10}.
[ "Peng, Letian", "Kim, Bosung" ]
Chride at SemEval-2023 Task 10: Fine-tuned Deberta-V3 on Detection of Online Sexism with Hierarchical Loss
semeval-1.232
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.semeval-1.233.bib
https://aclanthology.org/2023.semeval-1.233/
@inproceedings{kumar-etal-2023-oda, title = "{ODA}{\_}{SRIB} at {S}em{E}val-2023 Task 9: A Multimodal Approach for Improved Intimacy Analysis", author = "Kumar, Priyanshu and Kumar, Amit and Prakash, Jiban and Lamba, Prabhat and Abdul, Irfan", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Da San Martino, Giovanni and Tayyar Madabushi, Harish and Kumar, Ritesh and Sartori, Elisa}, booktitle = "Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.semeval-1.233", doi = "10.18653/v1/2023.semeval-1.233", pages = "1676--1680", abstract = "We experiment with XLM-Twitter and XLM-RoBERTa models to predict the intimacy scores in Tweets i.e. the extent to which a Tweet contains intimate content. We propose a Transformer-TabNet based multimodal architecture using text data and statistical features from the text, which performs better than the vanilla Transformer based model. We further experiment with Adversarial Weight Perturbation to make our models generalized and robust. The ensemble of four of our best models achieve an over-all Pearson Coefficient of 0.5893 on the test dataset.", }
We experiment with XLM-Twitter and XLM-RoBERTa models to predict the intimacy scores in Tweets i.e. the extent to which a Tweet contains intimate content. We propose a Transformer-TabNet based multimodal architecture using text data and statistical features from the text, which performs better than the vanilla Transformer based model. We further experiment with Adversarial Weight Perturbation to make our models generalized and robust. The ensemble of four of our best models achieve an over-all Pearson Coefficient of 0.5893 on the test dataset.
[ "Kumar, Priyanshu", "Kumar, Amit", "Prakash, Jiban", "Lamba, Prabhat", "Abdul, Irfan" ]
ODA_SRIB at SemEval-2023 Task 9: A Multimodal Approach for Improved Intimacy Analysis
semeval-1.233
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.semeval-1.234.bib
https://aclanthology.org/2023.semeval-1.234/
@inproceedings{zhou-etal-2023-thifly, title = "{TH}i{FLY} Research at {S}em{E}val-2023 Task 7: A Multi-granularity System for {CTR}-based Textual Entailment and Evidence Retrieval", author = "Zhou, Yuxuan and Jin, Ziyu and Li, Meiwei and Li, Miao and Liu, Xien and You, Xinxin and Wu, Ji", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Da San Martino, Giovanni and Tayyar Madabushi, Harish and Kumar, Ritesh and Sartori, Elisa}, booktitle = "Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.semeval-1.234", doi = "10.18653/v1/2023.semeval-1.234", pages = "1681--1690", abstract = "The NLI4CT task aims to entail hypotheses based on Clinical Trial Reports (CTRs) and retrieve the corresponding evidence supporting the justification. This task poses a significant challenge, as verifying hypotheses in the NLI4CT task requires the integration of multiple pieces of evidence from one or two CTR(s) and the application of diverse levels of reasoning, including textual and numerical. To address these problems, we present a multi-granularity system for CTR-based textual entailment and evidence retrieval in this paper. Specifically, we construct a Multi-granularity Inference Network (MGNet) that exploits sentence-level and token-level encoding to handle both textual entailment and evidence retrieval tasks. Moreover, we enhance the numerical inference capability of the system by leveraging a T5-based model, SciFive, which is pre-trained on the medical corpus. Model ensembling and a joint inference method are further utilized in the system to increase the stability and consistency of inference. The system achieves f1-scores of 0.856 and 0.853 on textual entailment and evidence retrieval tasks, resulting in the best performance on both subtasks. The experimental results corroborate the effectiveness of our proposed method.", }
The NLI4CT task aims to entail hypotheses based on Clinical Trial Reports (CTRs) and retrieve the corresponding evidence supporting the justification. This task poses a significant challenge, as verifying hypotheses in the NLI4CT task requires the integration of multiple pieces of evidence from one or two CTR(s) and the application of diverse levels of reasoning, including textual and numerical. To address these problems, we present a multi-granularity system for CTR-based textual entailment and evidence retrieval in this paper. Specifically, we construct a Multi-granularity Inference Network (MGNet) that exploits sentence-level and token-level encoding to handle both textual entailment and evidence retrieval tasks. Moreover, we enhance the numerical inference capability of the system by leveraging a T5-based model, SciFive, which is pre-trained on the medical corpus. Model ensembling and a joint inference method are further utilized in the system to increase the stability and consistency of inference. The system achieves f1-scores of 0.856 and 0.853 on textual entailment and evidence retrieval tasks, resulting in the best performance on both subtasks. The experimental results corroborate the effectiveness of our proposed method.
[ "Zhou, Yuxuan", "Jin, Ziyu", "Li, Meiwei", "Li, Miao", "Liu, Xien", "You, Xinxin", "Wu, Ji" ]
THiFLY Research at SemEval-2023 Task 7: A Multi-granularity System for CTR-based Textual Entailment and Evidence Retrieval
semeval-1.234
Poster
2306.01245
[ "https://github.com/thumlp/nli4ct" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.semeval-1.235.bib
https://aclanthology.org/2023.semeval-1.235/
@inproceedings{manoj-etal-2023-irel, title = "i{REL} at {S}em{E}val-2023 Task 10: Multi-level Training for Explainable Detection of Online Sexism", author = "Manoj, Nirmal and Joshi, Sagar and Maity, Ankita and Varma, Vasudeva", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Da San Martino, Giovanni and Tayyar Madabushi, Harish and Kumar, Ritesh and Sartori, Elisa}, booktitle = "Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.semeval-1.235", doi = "10.18653/v1/2023.semeval-1.235", pages = "1691--1696", abstract = "This paper describes our approach for SemEval-2023 Task 10: Explainable Detection of Online Sexism (EDOS). The task deals with identification and categorization of sexist content into fine-grained categories for explainability in sexism classification. The explainable categorization is proposed through a set of three hierarchical tasks that constitute a taxonomy of sexist content, each task being more granular than the former for categorization of the content. Our team (iREL) participated in all three hierarchical subtasks. Considering the inter-connected task structure, we study multilevel training to study the transfer learning from coarser to finer tasks. Our experiments based on pretrained transformer architectures also make use of additional strategies such as domain-adaptive pretraining to adapt our models to the nature of the content dealt with, and use of the focal loss objective for handling class imbalances. Our best-performing systems on the three tasks achieve macro-F1 scores of 85.93, 69.96 and 54.62 on their respective validation sets.", }
This paper describes our approach for SemEval-2023 Task 10: Explainable Detection of Online Sexism (EDOS). The task deals with identification and categorization of sexist content into fine-grained categories for explainability in sexism classification. The explainable categorization is proposed through a set of three hierarchical tasks that constitute a taxonomy of sexist content, each task being more granular than the former for categorization of the content. Our team (iREL) participated in all three hierarchical subtasks. Considering the inter-connected task structure, we study multilevel training to study the transfer learning from coarser to finer tasks. Our experiments based on pretrained transformer architectures also make use of additional strategies such as domain-adaptive pretraining to adapt our models to the nature of the content dealt with, and use of the focal loss objective for handling class imbalances. Our best-performing systems on the three tasks achieve macro-F1 scores of 85.93, 69.96 and 54.62 on their respective validation sets.
[ "Manoj, Nirmal", "Joshi, Sagar", "Maity, Ankita", "Varma, Vasudeva" ]
iREL at SemEval-2023 Task 10: Multi-level Training for Explainable Detection of Online Sexism
semeval-1.235
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.semeval-1.236.bib
https://aclanthology.org/2023.semeval-1.236/
@inproceedings{akrah-pedersen-2023-duluthnlp, title = "{D}uluth{NLP} at {S}em{E}val-2023 Task 12: {A}fri{S}enti-{S}em{E}val: Sentiment Analysis for Low-resource {A}frican Languages using {T}witter Dataset", author = "Akrah, Samuel and Pedersen, Ted", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Da San Martino, Giovanni and Tayyar Madabushi, Harish and Kumar, Ritesh and Sartori, Elisa}, booktitle = "Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.semeval-1.236", doi = "10.18653/v1/2023.semeval-1.236", pages = "1697--1701", abstract = "This paper describes the DuluthNLP system that participated in Task 12 of SemEval-2023 on AfriSenti-SemEval: Sentiment Analysis for Low-resource African Languages using Twitter Dataset. Given a set of tweets, the task requires participating systems to classify each tweet as negative, positive or neutral. We evaluate a range of monolingual and multilingual pretrained models on the Twi language dataset, one among the 14 African languages included in the SemEval task. We introduce TwiBERT, a new pretrained model trained from scratch. We show that TwiBERT, along with mBERT, generally perform best when trained on the Twi dataset, achieving an F1 score of 64.29{\%} on the official evaluation test data, which ranks 14 out of 30 of the total submissions for Track 10. The TwiBERT model is released at \url{https://huggingface.co/sakrah/TwiBERT}", }
This paper describes the DuluthNLP system that participated in Task 12 of SemEval-2023 on AfriSenti-SemEval: Sentiment Analysis for Low-resource African Languages using Twitter Dataset. Given a set of tweets, the task requires participating systems to classify each tweet as negative, positive or neutral. We evaluate a range of monolingual and multilingual pretrained models on the Twi language dataset, one among the 14 African languages included in the SemEval task. We introduce TwiBERT, a new pretrained model trained from scratch. We show that TwiBERT, along with mBERT, generally perform best when trained on the Twi dataset, achieving an F1 score of 64.29{\%} on the official evaluation test data, which ranks 14 out of 30 of the total submissions for Track 10. The TwiBERT model is released at \url{https://huggingface.co/sakrah/TwiBERT}
[ "Akrah, Samuel", "Pedersen, Ted" ]
DuluthNLP at SemEval-2023 Task 12: AfriSenti-SemEval: Sentiment Analysis for Low-resource African Languages using Twitter Dataset
semeval-1.236
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.semeval-1.237.bib
https://aclanthology.org/2023.semeval-1.237/
@inproceedings{koreeda-etal-2023-hitachi, title = "Hitachi at {S}em{E}val-2023 Task 3: Exploring Cross-lingual Multi-task Strategies for Genre and Framing Detection in Online News", author = "Koreeda, Yuta and Yokote, Ken-ichi and Ozaki, Hiroaki and Yamaguchi, Atsuki and Tsunokake, Masaya and Sogawa, Yasuhiro", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Da San Martino, Giovanni and Tayyar Madabushi, Harish and Kumar, Ritesh and Sartori, Elisa}, booktitle = "Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.semeval-1.237", doi = "10.18653/v1/2023.semeval-1.237", pages = "1702--1711", abstract = "This paper describes the participation of team Hitachi in SemEval-2023 Task 3 {``}Detecting the genre, the framing, and the persuasion techniques in online news in a multi-lingual setup.{''} Based on the multilingual, multi-task nature of the task and the low-resource setting, we investigated different cross-lingual and multi-task strategies for training the pretrained language models. Through extensive experiments, we found that (a) cross-lingual/multi-task training, and (b) collecting an external balanced dataset, can benefit the genre and framing detection. We constructed ensemble models from the results and achieved the highest macro-averaged F1 scores in Italian and Russian genre categorization subtasks.", }
This paper describes the participation of team Hitachi in SemEval-2023 Task 3 {``}Detecting the genre, the framing, and the persuasion techniques in online news in a multi-lingual setup.{''} Based on the multilingual, multi-task nature of the task and the low-resource setting, we investigated different cross-lingual and multi-task strategies for training the pretrained language models. Through extensive experiments, we found that (a) cross-lingual/multi-task training, and (b) collecting an external balanced dataset, can benefit the genre and framing detection. We constructed ensemble models from the results and achieved the highest macro-averaged F1 scores in Italian and Russian genre categorization subtasks.
[ "Koreeda, Yuta", "Yokote, Ken-ichi", "Ozaki, Hiroaki", "Yamaguchi, Atsuki", "Tsunokake, Masaya", "Sogawa, Yasuhiro" ]
Hitachi at SemEval-2023 Task 3: Exploring Cross-lingual Multi-task Strategies for Genre and Framing Detection in Online News
semeval-1.237
Poster
2303.01794
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.semeval-1.238.bib
https://aclanthology.org/2023.semeval-1.238/
@inproceedings{keller-etal-2023-nancy, title = "nancy-hicks-gribble at {S}em{E}val-2023 Task 5: Classifying and generating clickbait spoilers with {R}o{BERT}a", author = {Keller, J{\"u}ri and Rehbach, Nicolas and Zafar, Ibrahim}, editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Da San Martino, Giovanni and Tayyar Madabushi, Harish and Kumar, Ritesh and Sartori, Elisa}, booktitle = "Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.semeval-1.238", doi = "10.18653/v1/2023.semeval-1.238", pages = "1712--1717", abstract = "Clickbait spoiling and spoiler type classification in the setting of SemEval-2023 shared task 5 were used to explore transformer-based text classification in comparison to conventional, shallow learned classifying models. Additionally, an initial model for spoiler creation was explored. The task was to classify or create spoilers for clickbait social media posts. The classification task was addressed by comparing different classifiers trained on hand-crafted features to pre-trained and fine-tuned RoBERTa transformer models. The spoiler generation task was formulated as a question answering task, using the clickbait posts as questions and the articles as the foundation from which to retrieve the answer. The results show that even off-the-shelf transformer models outperform shallow learned models in the classification task. The spoiler generation task is more complex and needs a more advanced system.", }
Clickbait spoiling and spoiler type classification in the setting of SemEval-2023 shared task 5 were used to explore transformer-based text classification in comparison to conventional, shallow learned classifying models. Additionally, an initial model for spoiler creation was explored. The task was to classify or create spoilers for clickbait social media posts. The classification task was addressed by comparing different classifiers trained on hand-crafted features to pre-trained and fine-tuned RoBERTa transformer models. The spoiler generation task was formulated as a question answering task, using the clickbait posts as questions and the articles as the foundation from which to retrieve the answer. The results show that even off-the-shelf transformer models outperform shallow learned models in the classification task. The spoiler generation task is more complex and needs a more advanced system.
[ "Keller, J{\\\"u}ri", "Rehbach, Nicolas", "Zafar, Ibrahim" ]
nancy-hicks-gribble at SemEval-2023 Task 5: Classifying and generating clickbait spoilers with RoBERTa
semeval-1.238
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.semeval-1.239.bib
https://aclanthology.org/2023.semeval-1.239/
@inproceedings{poncelas-etal-2023-sakura, title = "Sakura at {S}em{E}val-2023 Task 2: Data Augmentation via Translation", author = "Poncelas, Alberto and Tkachenko, Maksim and Htun, Ohnmar", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Da San Martino, Giovanni and Tayyar Madabushi, Harish and Kumar, Ritesh and Sartori, Elisa}, booktitle = "Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.semeval-1.239", doi = "10.18653/v1/2023.semeval-1.239", pages = "1718--1722", abstract = "We demonstrate a simple yet effective approach to augmenting training data for multilingual named entity recognition using translations. The named entity spans from the original sentences are transferred to translations via word alignment and then filtered with the baseline recognizer. The proposed approach outperforms the baseline XLM-Roberta on the multilingual dataset.", }
We demonstrate a simple yet effective approach to augmenting training data for multilingual named entity recognition using translations. The named entity spans from the original sentences are transferred to translations via word alignment and then filtered with the baseline recognizer. The proposed approach outperforms the baseline XLM-Roberta on the multilingual dataset.
[ "Poncelas, Alberto", "Tkachenko, Maksim", "Htun, Ohnmar" ]
Sakura at SemEval-2023 Task 2: Data Augmentation via Translation
semeval-1.239
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.semeval-1.240.bib
https://aclanthology.org/2023.semeval-1.240/
@inproceedings{tsunokake-etal-2023-hitachi, title = "Hitachi at {S}em{E}val-2023 Task 4: Exploring Various Task Formulations Reveals the Importance of Description Texts on Human Values", author = "Tsunokake, Masaya and Yamaguchi, Atsuki and Koreeda, Yuta and Ozaki, Hiroaki and Sogawa, Yasuhiro", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Da San Martino, Giovanni and Tayyar Madabushi, Harish and Kumar, Ritesh and Sartori, Elisa}, booktitle = "Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.semeval-1.240", doi = "10.18653/v1/2023.semeval-1.240", pages = "1723--1735", abstract = "This paper describes our participation in SemEval-2023 Task 4, ValueEval: Identification of Human Values behind Arguments. The aim of this task is to identify whether or not an input text supports each of the 20 pre-defined human values. Previous work on human value detection has shown the effectiveness of a sequence classification approach using BERT. However, little is known about what type of task formulation is suitable for the task. To this end, this paper explores various task formulations, including sequence classification, question answering, and question answering with chain-of-thought prompting and evaluates their performances on the shared task dataset. Experiments show that a zero-shot approach is not as effective as other methods, and there is no one approach that is optimal in every scenario. Our analysis also reveals that utilizing the descriptions of human values can help to improve performance.", }
This paper describes our participation in SemEval-2023 Task 4, ValueEval: Identification of Human Values behind Arguments. The aim of this task is to identify whether or not an input text supports each of the 20 pre-defined human values. Previous work on human value detection has shown the effectiveness of a sequence classification approach using BERT. However, little is known about what type of task formulation is suitable for the task. To this end, this paper explores various task formulations, including sequence classification, question answering, and question answering with chain-of-thought prompting and evaluates their performances on the shared task dataset. Experiments show that a zero-shot approach is not as effective as other methods, and there is no one approach that is optimal in every scenario. Our analysis also reveals that utilizing the descriptions of human values can help to improve performance.
[ "Tsunokake, Masaya", "Yamaguchi, Atsuki", "Koreeda, Yuta", "Ozaki, Hiroaki", "Sogawa, Yasuhiro" ]
Hitachi at SemEval-2023 Task 4: Exploring Various Task Formulations Reveals the Importance of Description Texts on Human Values
semeval-1.240
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.semeval-1.241.bib
https://aclanthology.org/2023.semeval-1.241/
@inproceedings{verma-etal-2023-dcu, title = "{DCU} at {S}em{E}val-2023 Task 10: A Comparative Analysis of Encoder-only and Decoder-only Language Models with Insights into Interpretability", author = "Verma, Kanishk and Adebayo, Kolawole and Wagner, Joachim and Davis, Brian", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Da San Martino, Giovanni and Tayyar Madabushi, Harish and Kumar, Ritesh and Sartori, Elisa}, booktitle = "Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.semeval-1.241", doi = "10.18653/v1/2023.semeval-1.241", pages = "1736--1750", abstract = "We conduct a comparison of pre-trained encoder-only and decoder-only language models with and without continued pre-training, to detect online sexism. Our fine-tuning-based classifier system achieved the 16th rank in the SemEval 2023 Shared Task 10 Subtask A that asks to distinguish sexist and non-sexist texts. Additionally, we conduct experiments aimed at enhancing the interpretability of systems designed to detect online sexism. Our findings provide insights into the features and decision-making processes underlying our classifier system, thereby contributing to a broader effort to develop explainable AI models to detect online sexism.", }
We conduct a comparison of pre-trained encoder-only and decoder-only language models with and without continued pre-training, to detect online sexism. Our fine-tuning-based classifier system achieved the 16th rank in the SemEval 2023 Shared Task 10 Subtask A that asks to distinguish sexist and non-sexist texts. Additionally, we conduct experiments aimed at enhancing the interpretability of systems designed to detect online sexism. Our findings provide insights into the features and decision-making processes underlying our classifier system, thereby contributing to a broader effort to develop explainable AI models to detect online sexism.
[ "Verma, Kanishk", "Adebayo, Kolawole", "Wagner, Joachim", "Davis, Brian" ]
DCU at SemEval-2023 Task 10: A Comparative Analysis of Encoder-only and Decoder-only Language Models with Insights into Interpretability
semeval-1.241
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.semeval-1.242.bib
https://aclanthology.org/2023.semeval-1.242/
@inproceedings{pirhadi-etal-2023-pmcoders, title = "{PMC}oders at {S}em{E}val-2023 Task 1: {RA}lt{CLIP}: Use Relative {A}lt{CLIP} Features to Rank", author = "Pirhadi, Mohammad Javad and Mirzaei, Motahhare and Mohammadi, Mohammad Reza and Eetemadi, Sauleh", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Da San Martino, Giovanni and Tayyar Madabushi, Harish and Kumar, Ritesh and Sartori, Elisa}, booktitle = "Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.semeval-1.242", doi = "10.18653/v1/2023.semeval-1.242", pages = "1751--1755", abstract = "The Visual Word Sense Disambiguation (VWSD) task aims to find, among 10 candidate images, the one most related to an ambiguous word in a limited textual context. In this work, we use AltCLIP features and a 3-layer standard transformer encoder to compare the cosine similarity between the given phrase and different images. Also, we improve our model{'}s generalization by using a subset of LAION-5B. The best official baseline achieves 37.20{\%} and 54.39{\%} macro-averaged hit rate and MRR (Mean Reciprocal Rank) respectively. Our best configuration reaches 39.61{\%} and 56.78{\%} macro-averaged hit rate and MRR respectively. The code will be made publicly available on GitHub.", }
The Visual Word Sense Disambiguation (VWSD) task aims to find, among 10 candidate images, the one most related to an ambiguous word in a limited textual context. In this work, we use AltCLIP features and a 3-layer standard transformer encoder to compare the cosine similarity between the given phrase and different images. Also, we improve our model{'}s generalization by using a subset of LAION-5B. The best official baseline achieves 37.20{\%} and 54.39{\%} macro-averaged hit rate and MRR (Mean Reciprocal Rank) respectively. Our best configuration reaches 39.61{\%} and 56.78{\%} macro-averaged hit rate and MRR respectively. The code will be made publicly available on GitHub.
[ "Pirhadi, Mohammad Javad", "Mirzaei, Motahhare", "Mohammadi, Mohammad Reza", "Eetemadi, Sauleh" ]
PMCoders at SemEval-2023 Task 1: RAltCLIP: Use Relative AltCLIP Features to Rank
semeval-1.242
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.semeval-1.243.bib
https://aclanthology.org/2023.semeval-1.243/
@inproceedings{kurita-etal-2023-tohokunlp, title = "{T}ohoku{NLP} at {S}em{E}val-2023 Task 5: Clickbait Spoiling via Simple {S}eq2{S}eq Generation and Ensembling", author = "Kurita, Hiroto and Ito, Ikumi and Funayama, Hiroaki and Sasaki, Shota and Moriya, Shoji and Mengyu, Ye and Kokuta, Kazuma and Hatakeyama, Ryujin and Sone, Shusaku and Inui, Kentaro", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Da San Martino, Giovanni and Tayyar Madabushi, Harish and Kumar, Ritesh and Sartori, Elisa}, booktitle = "Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.semeval-1.243", doi = "10.18653/v1/2023.semeval-1.243", pages = "1756--1762", abstract = "This paper describes our system submitted to SemEval-2023 Task 5: Clickbait Spoiling. We work on spoiler generation for subtask 2 and develop a system which comprises two parts: 1) simple seq2seq spoiler generation and 2) post-hoc model ensembling. Using this simple method, we address the challenge of generating multipart spoilers. On the test set, our submitted system outperformed the baseline by a large margin (approximately 10 points on the BLEU score) for mixed types of spoilers. We also found that our system successfully handled the challenge of multipart spoilers, confirming the effectiveness of our approach.", }
This paper describes our system submitted to SemEval-2023 Task 5: Clickbait Spoiling. We work on spoiler generation for subtask 2 and develop a system which comprises two parts: 1) simple seq2seq spoiler generation and 2) post-hoc model ensembling. Using this simple method, we address the challenge of generating multipart spoilers. On the test set, our submitted system outperformed the baseline by a large margin (approximately 10 points on the BLEU score) for mixed types of spoilers. We also found that our system successfully handled the challenge of multipart spoilers, confirming the effectiveness of our approach.
[ "Kurita, Hiroto", "Ito, Ikumi", "Funayama, Hiroaki", "Sasaki, Shota", "Moriya, Shoji", "Mengyu, Ye", "Kokuta, Kazuma", "Hatakeyama, Ryujin", "Sone, Shusaku", "Inui, Kentaro" ]
TohokuNLP at SemEval-2023 Task 5: Clickbait Spoiling via Simple Seq2Seq Generation and Ensembling
semeval-1.243
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.semeval-1.244.bib
https://aclanthology.org/2023.semeval-1.244/
@inproceedings{can-2023-tubingen, title = {{T}{\"u}bingen at {S}em{E}val-2023 Task 4: What Can Stance Tell? A Computational Study on Detecting Human Values behind Arguments}, author = "Can, Fidan", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Da San Martino, Giovanni and Tayyar Madabushi, Harish and Kumar, Ritesh and Sartori, Elisa}, booktitle = "Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.semeval-1.244", doi = "10.18653/v1/2023.semeval-1.244", pages = "1763--1768", abstract = "This paper describes the performance of a system which uses stance as an output instead of taking it as an input to identify 20 human values behind given arguments, based on two datasets for SemEval-2023 Task 4. The rationale was to draw a conclusion on whether predicting stance would help predict the given human values better. For this setup{---}predicting 21 labels{---}a pre-trained language model, RoBERTa-Large was used. The system had an F{\$}{\_}1{\$}-score of 0.50 for predicting these human values for the main test set while this score was 0.35 on the secondary test set, and through further analysis, this paper aims to give insight into the problem of human value identification.", }
This paper describes the performance of a system which uses stance as an output instead of taking it as an input to identify 20 human values behind given arguments, based on two datasets for SemEval-2023 Task 4. The rationale was to draw a conclusion on whether predicting stance would help predict the given human values better. For this setup{---}predicting 21 labels{---}a pre-trained language model, RoBERTa-Large was used. The system had an F{\$}{\_}1{\$}-score of 0.50 for predicting these human values for the main test set while this score was 0.35 on the secondary test set, and through further analysis, this paper aims to give insight into the problem of human value identification.
[ "Can, Fidan" ]
Tübingen at SemEval-2023 Task 4: What Can Stance Tell? A Computational Study on Detecting Human Values behind Arguments
semeval-1.244
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.semeval-1.245.bib
https://aclanthology.org/2023.semeval-1.245/
@inproceedings{takehana-etal-2023-stanford, title = "{S}tanford {ML}ab at {S}em{E}val 2023 Task 7: Neural Methods for Clinical Trial Report {NLI}", author = "Takehana, Conner and Lim, Dylan and Kurtulus, Emirhan and Iyer, Ramya and Tanimura, Ellie and Aggarwal, Pankhuri and Cantillon, Molly and Yu, Alfred and Khan, Sarosh and Chi, Nathan", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Da San Martino, Giovanni and Tayyar Madabushi, Harish and Kumar, Ritesh and Sartori, Elisa}, booktitle = "Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.semeval-1.245", doi = "10.18653/v1/2023.semeval-1.245", pages = "1769--1775", abstract = "We present a system for natural language inference in breast cancer clinical trial reports, as framed by SemEval 2023 Task 7: Multi-evidence Natural Language Inference for Clinical Trial Data. In particular, we propose a suite of techniques for two related inference subtasks: entailment and evidence retrieval. The purpose of the textual entailment identification subtask is to determine the inference relation (either entailment or contradiction) between given statement pairs, while the goal of the evidence retrieval task is to identify a set of sentences that support this inference relation. To this end, we propose fine-tuning Bio+Clinical BERT, a BERT-based model pre-trained on clinical data. Along with presenting our system, we analyze our architectural decisions in the context of our model{'}s accuracy and conduct an error analysis. Overall, our system ranked 20 / 30 on the entailment subtask.", }
We present a system for natural language inference in breast cancer clinical trial reports, as framed by SemEval 2023 Task 7: Multi-evidence Natural Language Inference for Clinical Trial Data. In particular, we propose a suite of techniques for two related inference subtasks: entailment and evidence retrieval. The purpose of the textual entailment identification subtask is to determine the inference relation (either entailment or contradiction) between given statement pairs, while the goal of the evidence retrieval task is to identify a set of sentences that support this inference relation. To this end, we propose fine-tuning Bio+Clinical BERT, a BERT-based model pre-trained on clinical data. Along with presenting our system, we analyze our architectural decisions in the context of our model{'}s accuracy and conduct an error analysis. Overall, our system ranked 20 / 30 on the entailment subtask.
[ "Takehana, Conner", "Lim, Dylan", "Kurtulus, Emirhan", "Iyer, Ramya", "Tanimura, Ellie", "Aggarwal, Pankhuri", "Cantillon, Molly", "Yu, Alfred", "Khan, Sarosh", "Chi, Nathan" ]
Stanford MLab at SemEval 2023 Task 7: Neural Methods for Clinical Trial Report NLI
semeval-1.245
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.semeval-1.246.bib
https://aclanthology.org/2023.semeval-1.246/
@inproceedings{dhrangadhariya-etal-2023-hevs, title = "{HEVS}-{TUW} at {S}em{E}val-2023 Task 8: Ensemble of Language Models and Rule-based Classifiers for Claims Identification and {PICO} Extraction", author = {Dhrangadhariya, Anjani and Kusa, Wojciech and M{\"u}ller, Henning and Hanbury, Allan}, editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Da San Martino, Giovanni and Tayyar Madabushi, Harish and Kumar, Ritesh and Sartori, Elisa}, booktitle = "Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.semeval-1.246", doi = "10.18653/v1/2023.semeval-1.246", pages = "1776--1782", abstract = "This paper describes the HEVS-TUW team submission to the SemEval-2023 Task 8: Causal Claims. We participated in two subtasks: (1) causal claims detection and (2) PIO identification. For subtask 1, we experimented with an ensemble of weakly supervised question detection and fine-tuned Transformer-based models. For subtask 2 of PIO frame extraction, we used a combination of deep representation learning and a rule-based approach. Our best model for subtask 1 ranks fourth with an F1-score of 65.77{\%}. It shows moderate benefit from ensembling models pre-trained on independent categories. The results for subtask 2 warrant further investigation for improvement.", }
This paper describes the HEVS-TUW team submission to the SemEval-2023 Task 8: Causal Claims. We participated in two subtasks: (1) causal claims detection and (2) PIO identification. For subtask 1, we experimented with an ensemble of weakly supervised question detection and fine-tuned Transformer-based models. For subtask 2 of PIO frame extraction, we used a combination of deep representation learning and a rule-based approach. Our best model for subtask 1 ranks fourth with an F1-score of 65.77{\%}. It shows moderate benefit from ensembling models pre-trained on independent categories. The results for subtask 2 warrant further investigation for improvement.
[ "Dhrangadhariya, Anjani", "Kusa, Wojciech", "M{\\\"u}ller, Henning", "Hanbury, Allan" ]
HEVS-TUW at SemEval-2023 Task 8: Ensemble of Language Models and Rule-based Classifiers for Claims Identification and PICO Extraction
semeval-1.246
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.semeval-1.247.bib
https://aclanthology.org/2023.semeval-1.247/
@inproceedings{cabrera-diego-gheewala-2023-jus, title = "Jus Mundi at {S}em{E}val-2023 Task 6: Using a Frustratingly Easy Domain Adaption for a Legal Named Entity Recognition System", author = "Cabrera-Diego, Luis Adri{\'a}n and Gheewala, Akshita", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Da San Martino, Giovanni and Tayyar Madabushi, Harish and Kumar, Ritesh and Sartori, Elisa}, booktitle = "Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.semeval-1.247", doi = "10.18653/v1/2023.semeval-1.247", pages = "1783--1790", abstract = "In this work, we present a Named Entity Recognition (NER) system that was trained using a Frustratingly Easy Domain Adaptation (FEDA) over multiple legal corpora. The goal was to create a NER capable of detecting 14 types of legal named entities in Indian judgments. Besides the FEDA architecture, we explored a method based on overlapping context and averaging tensors to process long input texts, which can be beneficial when processing legal documents. The proposed NER reached an F1-score of 0.9007 in the sub-task B of Semeval-2023 Task 6, Understanding Legal Texts.", }
In this work, we present a Named Entity Recognition (NER) system that was trained using a Frustratingly Easy Domain Adaptation (FEDA) over multiple legal corpora. The goal was to create a NER capable of detecting 14 types of legal named entities in Indian judgments. Besides the FEDA architecture, we explored a method based on overlapping context and averaging tensors to process long input texts, which can be beneficial when processing legal documents. The proposed NER reached an F1-score of 0.9007 in the sub-task B of Semeval-2023 Task 6, Understanding Legal Texts.
[ "Cabrera-Diego, Luis Adri{\\'a}n", "Gheewala, Akshita" ]
Jus Mundi at SemEval-2023 Task 6: Using a Frustratingly Easy Domain Adaption for a Legal Named Entity Recognition System
semeval-1.247
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.semeval-1.248.bib
https://aclanthology.org/2023.semeval-1.248/
@inproceedings{wan-etal-2023-stanford, title = "{S}tanford {ML}ab at {S}em{E}val-2023 Task 10: Exploring {G}lo{V}e- and Transformer-Based Methods for the Explainable Detection of Online Sexism", author = "Wan, Aaron and Yam, Hong Meng and Yogeswaran, Swetha and Zhou, Beining and Choi, Hee Jung and Chow, Trevor", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Da San Martino, Giovanni and Tayyar Madabushi, Harish and Kumar, Ritesh and Sartori, Elisa}, booktitle = "Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.semeval-1.248", doi = "10.18653/v1/2023.semeval-1.248", pages = "1791--1797", abstract = "In this paper, we discuss the methods we applied at SemEval-2023 Task 10: Towards the Explainable Detection of Online Sexism. Given an input text, we perform three classification tasks to predict whether the text is sexist and classify the sexist text into subcategories in order to provide an additional explanation as to why the text is sexist. We explored many different types of models, including GloVe embeddings as the baseline approach, transformer-based deep learning models like BERT, RoBERTa, and DeBERTa, ensemble models, and model blending. We explored various data cleaning and augmentation methods to improve model performance. Pre-training transformer models yielded significant improvements in performance, and ensembles and blending slightly improved robustness in the F1 score.", }
In this paper, we discuss the methods we applied at SemEval-2023 Task 10: Towards the Explainable Detection of Online Sexism. Given an input text, we perform three classification tasks to predict whether the text is sexist and classify the sexist text into subcategories in order to provide an additional explanation as to why the text is sexist. We explored many different types of models, including GloVe embeddings as the baseline approach, transformer-based deep learning models like BERT, RoBERTa, and DeBERTa, ensemble models, and model blending. We explored various data cleaning and augmentation methods to improve model performance. Pre-training transformer models yielded significant improvements in performance, and ensembles and blending slightly improved robustness in the F1 score.
[ "Wan, Aaron", "Yam, Hong Meng", "Yogeswaran, Swetha", "Zhou, Beining", "Choi, Hee Jung", "Chow, Trevor" ]
Stanford MLab at SemEval-2023 Task 10: Exploring GloVe- and Transformer-Based Methods for the Explainable Detection of Online Sexism
semeval-1.248
Poster
2305.04356
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.semeval-1.249.bib
https://aclanthology.org/2023.semeval-1.249/
@inproceedings{marcinczuk-walentynowicz-2023-codenlp, title = "{C}ode{NLP} at {S}em{E}val-2023 Task 2: Data Augmentation for Named Entity Recognition by Combination of Sequence Generation Strategies", author = "Marci{\'n}czuk, Micha and Walentynowicz, Wiktor", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Da San Martino, Giovanni and Tayyar Madabushi, Harish and Kumar, Ritesh and Sartori, Elisa}, booktitle = "Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.semeval-1.249", doi = "10.18653/v1/2023.semeval-1.249", pages = "1798--1804", abstract = "In the article, we present the CodeNLP submission to the SemEval-2023 Task 2: MultiCoNER II Multilingual Complex Named Entity Recognition. Our approach is based on data augmentation by combining various strategies of sequence generation for training. We show that the extended procedure of fine-tuning a pre-trained language model can bring improvements compared to any single strategy. On the development subsets, the improvements were 1.7 pp and 3.1 pp of F-measure, for English and multilingual datasets, respectively. On the test subsets our models achieved 63.51{\%} and 73.22{\%} of Macro F1, respectively.", }
In the article, we present the CodeNLP submission to the SemEval-2023 Task 2: MultiCoNER II Multilingual Complex Named Entity Recognition. Our approach is based on data augmentation by combining various strategies of sequence generation for training. We show that the extended procedure of fine-tuning a pre-trained language model can bring improvements compared to any single strategy. On the development subsets, the improvements were 1.7 pp and 3.1 pp of F-measure, for English and multilingual datasets, respectively. On the test subsets our models achieved 63.51{\%} and 73.22{\%} of Macro F1, respectively.
[ "Marci{\\'n}czuk, Micha", "Walentynowicz, Wiktor" ]
CodeNLP at SemEval-2023 Task 2: Data Augmentation for Named Entity Recognition by Combination of Sequence Generation Strategies
semeval-1.249
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.semeval-1.250.bib
https://aclanthology.org/2023.semeval-1.250/
@inproceedings{kondragunta-etal-2023-skam, title = "{SKAM} at {S}em{E}val-2023 Task 10: Linguistic Feature Integration and Continuous Pretraining for Online Sexism Detection and Classification", author = "Kondragunta, Murali Manohar and Chen, Amber and Slot, Karlo and Weering, Sanne and Caselli, Tommaso", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Da San Martino, Giovanni and Tayyar Madabushi, Harish and Kumar, Ritesh and Sartori, Elisa}, booktitle = "Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.semeval-1.250", doi = "10.18653/v1/2023.semeval-1.250", pages = "1805--1817", abstract = "Sexism is prevalent online. In this paper, we explored the effect of explicit linguistic features and continuous pretraining on the performance of pretrained language models in sexism detection. While adding linguistic features did not improve the performance of the model, continuous pretraining did slightly boost the performance of the model in Task B from a mean macro-F1 score of 0.6156 to 0.6246. The best mean macro-F1 score in Task A was achieved by a finetuned HateBERT model using regular pretraining (0.8331). We observed that the linguistic features did not improve the model{'}s performance. At the same time, continuous pretraining proved beneficial only for nuanced downstream tasks like Task B.", }
Sexism is prevalent online. In this paper, we explored the effect of explicit linguistic features and continuous pretraining on the performance of pretrained language models in sexism detection. While adding linguistic features did not improve the performance of the model, continuous pretraining did slightly boost the performance of the model in Task B from a mean macro-F1 score of 0.6156 to 0.6246. The best mean macro-F1 score in Task A was achieved by a finetuned HateBERT model using regular pretraining (0.8331). We observed that the linguistic features did not improve the model{'}s performance. At the same time, continuous pretraining proved beneficial only for nuanced downstream tasks like Task B.
[ "Kondragunta, Murali Manohar", "Chen, Amber", "Slot, Karlo", "Weering, Sanne", "Caselli, Tommaso" ]
SKAM at SemEval-2023 Task 10: Linguistic Feature Integration and Continuous Pretraining for Online Sexism Detection and Classification
semeval-1.250
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.semeval-1.251.bib
https://aclanthology.org/2023.semeval-1.251/
@inproceedings{sterz-etal-2023-ml, title = "{ML} Mob at {S}em{E}val-2023 Task 5: {``}Breaking News: Our Semi-Supervised and Multi-Task Learning Approach Spoils Clickbait{''}", author = "Sterz, Hannah and Bongard, Leonard and Werner, Tobias and Poth, Clifton and Hentschel, Martin", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Da San Martino, Giovanni and Tayyar Madabushi, Harish and Kumar, Ritesh and Sartori, Elisa}, booktitle = "Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.semeval-1.251", doi = "10.18653/v1/2023.semeval-1.251", pages = "1818--1823", abstract = "Online articles with striking headlines that promise intriguing information are often used to attract readers. Most of the time, however, the information provided in the text is disappointing to the reader after the headline promised exciting news. As part of the SemEval-2023 challenge, we propose a system to generate a spoiler for these headlines. The spoiler provides the information promised by the headline and eliminates the need to read the full article. We consider Multi-Task Learning and generating more data using a distillation approach in our system. With this, we achieve an F1 score of up to 51.48{\%} on extracting the spoiler from the articles.", }
Online articles with striking headlines that promise intriguing information are often used to attract readers. Most of the time, however, the information provided in the text is disappointing to the reader after the headline promised exciting news. As part of the SemEval-2023 challenge, we propose a system to generate a spoiler for these headlines. The spoiler provides the information promised by the headline and eliminates the need to read the full article. We consider Multi-Task Learning and generating more data using a distillation approach in our system. With this, we achieve an F1 score of up to 51.48{\%} on extracting the spoiler from the articles.
[ "Sterz, Hannah", "Bongard, Leonard", "Werner, Tobias", "Poth, Clifton", "Hentschel, Martin" ]
ML Mob at SemEval-2023 Task 5: “Breaking News: Our Semi-Supervised and Multi-Task Learning Approach Spoils Clickbait”
semeval-1.251
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.semeval-1.252.bib
https://aclanthology.org/2023.semeval-1.252/
@inproceedings{hassan-etal-2023-firc, title = "{F}i{RC} at {S}em{E}val-2023 Task 10: Fine-grained Classification of Online Sexism Content Using {D}e{BERT}a", author = "Hassan, Fadi and Bouchekif, Abdessalam and Aransa, Walid", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Da San Martino, Giovanni and Tayyar Madabushi, Harish and Kumar, Ritesh and Sartori, Elisa}, booktitle = "Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.semeval-1.252", doi = "10.18653/v1/2023.semeval-1.252", pages = "1824--1832", abstract = "The SemEval 2023 shared task 10 {``}Explainable Detection of Online Sexism{''} focuses on detecting and identifying comments and tweets containing sexist expressions and also explaining why they are sexist. This paper describes the system that we used to participate in this shared task. Our model is an ensemble of different variants of fine-tuned DeBERTa models that employs k-fold cross-validation. We participated in the three tasks A, B and C. Our model ranked 2nd in task A, 7th in task B and 4th in task C.", }
The SemEval 2023 shared task 10 {``}Explainable Detection of Online Sexism{''} focuses on detecting and identifying comments and tweets containing sexist expressions and also explaining why they are sexist. This paper describes the system that we used to participate in this shared task. Our model is an ensemble of different variants of fine-tuned DeBERTa models that employs k-fold cross-validation. We participated in the three tasks A, B and C. Our model ranked 2nd in task A, 7th in task B and 4th in task C.
[ "Hassan, Fadi", "Bouchekif, Abdessalam", "Aransa, Walid" ]
FiRC at SemEval-2023 Task 10: Fine-grained Classification of Online Sexism Content Using DeBERTa
semeval-1.252
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.semeval-1.253.bib
https://aclanthology.org/2023.semeval-1.253/
@inproceedings{hoang-etal-2023-vbd, title = "{VBD}{\_}{NLP} at {S}em{E}val-2023 Task 2: Named Entity Recognition Systems Enhanced by {B}abel{N}et and {W}ikipedia", author = "Hoang, Phu Gia and Thanh, Le and Trieu, Hai-Long", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Da San Martino, Giovanni and Tayyar Madabushi, Harish and Kumar, Ritesh and Sartori, Elisa}, booktitle = "Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.semeval-1.253", doi = "10.18653/v1/2023.semeval-1.253", pages = "1833--1843", abstract = "We describe our systems that participated in the SemEval-2023 shared task for Named Entity Recognition (NER) in English and Bangla. In order to address the challenges of the task, where a large number of fine-grained named entity types need to be detected with only a small amount of training data, we use a method to augment the training data based on BabelNet concepts and Wikipedia redirections to automatically annotate named entities from Wikipedia articles. We build our NER systems based on the powerful mDeBERTa pretrained language model and train them on the augmented data. Our approach significantly enhances the performance of the fine-grained NER task in both the English and Bangla subtracks, outperforming the baseline models. Specifically, our augmented systems achieve macro-F1 scores of 52.64{\%} and 64.31{\%}, representing improvements of 2.38{\%} and 11.33{\%} over the English and Bangla baselines, respectively.", }
We describe our systems that participated in the SemEval-2023 shared task for Named Entity Recognition (NER) in English and Bangla. In order to address the challenges of the task, where a large number of fine-grained named entity types need to be detected with only a small amount of training data, we use a method to augment the training data based on BabelNet concepts and Wikipedia redirections to automatically annotate named entities from Wikipedia articles. We build our NER systems based on the powerful mDeBERTa pretrained language model and train them on the augmented data. Our approach significantly enhances the performance of the fine-grained NER task in both the English and Bangla subtracks, outperforming the baseline models. Specifically, our augmented systems achieve macro-F1 scores of 52.64{\%} and 64.31{\%}, representing improvements of 2.38{\%} and 11.33{\%} over the English and Bangla baselines, respectively.
[ "Hoang, Phu Gia", "Thanh, Le", "Trieu, Hai-Long" ]
VBD_NLP at SemEval-2023 Task 2: Named Entity Recognition Systems Enhanced by BabelNet and Wikipedia
semeval-1.253
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.semeval-1.254.bib
https://aclanthology.org/2023.semeval-1.254/
@inproceedings{spreitzer-tran-2023-stephen, title = "Stephen Colbert at {S}em{E}val-2023 Task 5: Using Markup for Classifying Clickbait", author = "Spreitzer, Sabrina and Tran, Hoai Nam", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Da San Martino, Giovanni and Tayyar Madabushi, Harish and Kumar, Ritesh and Sartori, Elisa}, booktitle = "Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.semeval-1.254", doi = "10.18653/v1/2023.semeval-1.254", pages = "1844--1848", abstract = "For SemEval-2023 Task 5, we have submitted three DeBERTaV3[LARGE] models to tackle the first subtask, classifying spoiler types (passage, phrase, multi) of clickbait web articles. Basic parameters like sequence length were chosen with BERT[BASE] uncased, and further approaches were then tested with DeBERTaV3[BASE], moving only the most promising ones to DeBERTaV3[LARGE]. Our research showed that information placement on webpages is often optimized, e.g. regarding ad placement. This information is usually described within the webpage{'}s markup, which is why we conducted an approach that takes this into account. Overall we could not manage to beat the baseline, which we put down to three reasons: First, we only crawled markup for Huffington Post articles, extracting only p- and a-tags, which does not cover enough aspects of a webpage{'}s design. Second, Huffington Post articles are overrepresented in the given dataset, which, third, shows an imbalance towards the spoiler tags. We highly suggest re-annotating the given dataset to use markup-optimized models like MarkupLM or TIE, and clearing it of embedded articles like {``}Yahoo{''} or archives like {``}archive.is{''} or {``}web.archive{''} to avoid noise. Also, the imbalance should be tackled by adding articles from sources other than Huffington Post, considering that multi-tagged entries should also be balanced towards passage- and phrase-tagged ones.", }
For SemEval-2023 Task 5, we have submitted three DeBERTaV3[LARGE] models to tackle the first subtask, classifying spoiler types (passage, phrase, multi) of clickbait web articles. Basic parameters like sequence length were chosen with BERT[BASE] uncased, and further approaches were then tested with DeBERTaV3[BASE], moving only the most promising ones to DeBERTaV3[LARGE]. Our research showed that information placement on webpages is often optimized, e.g. regarding ad placement. This information is usually described within the webpage{'}s markup, which is why we conducted an approach that takes this into account. Overall we could not manage to beat the baseline, which we put down to three reasons: First, we only crawled markup for Huffington Post articles, extracting only p- and a-tags, which does not cover enough aspects of a webpage{'}s design. Second, Huffington Post articles are overrepresented in the given dataset, which, third, shows an imbalance towards the spoiler tags. We highly suggest re-annotating the given dataset to use markup-optimized models like MarkupLM or TIE, and clearing it of embedded articles like {``}Yahoo{''} or archives like {``}archive.is{''} or {``}web.archive{''} to avoid noise. Also, the imbalance should be tackled by adding articles from sources other than Huffington Post, considering that multi-tagged entries should also be balanced towards passage- and phrase-tagged ones.
[ "Spreitzer, Sabrina", "Tran, Hoai Nam" ]
Stephen Colbert at SemEval-2023 Task 5: Using Markup for Classifying Clickbait
semeval-1.254
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]