Datasets:

| Column | Type | Min | Max |
|---|---|---|---|
| bibtex_url | string (length) | 41 | 53 |
| proceedings | string (length) | 38 | 50 |
| bibtext | string (length) | 535 | 2.8k |
| abstract | string (length) | 0 | 2.04k |
| authors | sequence (length) | 1 | 31 |
| title | string (length) | 19 | 178 |
| id | string (length) | 7 | 19 |
| type | string (1 class) | | |
| arxiv_id | string (length) | 0 | 10 |
| GitHub | sequence (length) | 1 | 1 |
| paper_page | string (124 classes) | | |
| n_linked_authors | int64 | -1 | 7 |
| upvotes | int64 | -1 | 79 |
| num_comments | int64 | -1 | 4 |
| n_authors | int64 | -1 | 22 |
| paper_page_exists_pre_conf | int64 | 0 | 1 |
| Models | sequence (length) | 0 | 55 |
| Datasets | sequence (length) | 0 | 46 |
| Spaces | sequence (length) | 0 | 82 |
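
Each record below follows this schema. For orientation, here is a minimal sketch of inspecting such records, assuming they are exported as JSON Lines; the file name is hypothetical, and the reading of -1 as a "no paper page" sentinel is inferred from the value ranges above.

```python
# Minimal sketch, assuming a JSON Lines export of the records below;
# the file name is hypothetical.
import json

with open("sigul2024_papers.jsonl", encoding="utf-8") as f:
    for line in f:
        rec = json.loads(line)
        # -1 appears to be a sentinel for "no Hugging Face paper page".
        has_page = rec["upvotes"] != -1
        print(rec["id"], "|", rec["title"][:60], "| paper page:", has_page)
```
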
https://aclanthology.org/2024.sigul-1.31.bib
https://aclanthology.org/2024.sigul-1.31/
@inproceedings{hussiny-etal-2024-persianemo, title = "{P}ersian{E}mo: Enhancing {F}arsi-{D}ari Emotion Analysis with a Hybrid Transformer and Recurrent Neural Network Model", author = "Hussiny, Mohammad Ali and Payenda, Mohammad Arif and {\O}vrelid, Lilja", editor = "Melero, Maite and Sakti, Sakriani and Soria, Claudia", booktitle = "Proceedings of the 3rd Annual Meeting of the Special Interest Group on Under-resourced Languages @ LREC-COLING 2024", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.sigul-1.31", pages = "257--263", abstract = "Emotion analysis is a critical research domain within the field of natural language processing (NLP). While substantial progress has been made in this area for the Persian language, there is still a need for more precise models and larger datasets specifically focusing on the Farsi and Dari dialects. In this research, we introduce {``}LearnArmanEmo{''} as a new dataset and a superior ensemble approach for Persian text emotion classification. Our proposed model, which combines XLM-RoBERTa-large and BiGRU, undergoes evaluation on LetHerLearn for the Dari dialect, ARMANEMO for the Farsi dialect, and LearnArmanEmo for both Dari and Farsi dialects. The empirical results substantiate the efficacy of our approach with the combined model demonstrating superior performance. Specifically, our model achieves an F1 score of 72.9{\%} on LetHerLearn, an F1 score of 77.1{\%} on ARMANEMO, and an F1 score of 78.8{\%} on the LearnArmanEmo dataset, establishing it as a better ensemble model for these datasets. These findings underscore the potential of this hybrid model as a useful tool for enhancing the performance of emotion analysis in Persian language processing.", }
Emotion analysis is a critical research domain within the field of natural language processing (NLP). While substantial progress has been made in this area for the Persian language, there is still a need for more precise models and larger datasets specifically focusing on the Farsi and Dari dialects. In this research, we introduce {``}LearnArmanEmo{''} as a new dataset and a superior ensemble approach for Persian text emotion classification. Our proposed model, which combines XLM-RoBERTa-large and BiGRU, undergoes evaluation on LetHerLearn for the Dari dialect, ARMANEMO for the Farsi dialect, and LearnArmanEmo for both Dari and Farsi dialects. The empirical results substantiate the efficacy of our approach with the combined model demonstrating superior performance. Specifically, our model achieves an F1 score of 72.9{\%} on LetHerLearn, an F1 score of 77.1{\%} on ARMANEMO, and an F1 score of 78.8{\%} on the LearnArmanEmo dataset, establishing it as a better ensemble model for these datasets. These findings underscore the potential of this hybrid model as a useful tool for enhancing the performance of emotion analysis in Persian language processing.
[ "Hussiny, Mohammad Ali", "Payenda, Mohammad Arif", "{\\O}vrelid, Lilja" ]
PersianEmo: Enhancing Farsi-Dari Emotion Analysis with a Hybrid Transformer and Recurrent Neural Network Model
sigul-1.31
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
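
The PersianEmo abstract above pairs XLM-RoBERTa-large with a BiGRU. The following PyTorch sketch shows one plausible shape for such a hybrid; the GRU size, mean pooling, and classification head are assumptions, not the authors' exact configuration.

```python
# Hedged sketch of a transformer + BiGRU hybrid classifier; hyperparameters
# and pooling are assumptions, not the paper's exact setup.
import torch.nn as nn
from transformers import AutoModel

class HybridEmotionClassifier(nn.Module):
    def __init__(self, n_labels: int, encoder_name: str = "xlm-roberta-large"):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(encoder_name)
        hidden = self.encoder.config.hidden_size
        self.bigru = nn.GRU(hidden, 256, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * 256, n_labels)

    def forward(self, input_ids, attention_mask):
        # Contextual token states from the transformer encoder.
        states = self.encoder(input_ids, attention_mask=attention_mask).last_hidden_state
        gru_out, _ = self.bigru(states)   # (batch, seq, 512)
        pooled = gru_out.mean(dim=1)      # simple mean pooling over tokens
        return self.head(pooled)          # emotion logits
```
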
https://aclanthology.org/2024.sigul-1.32.bib
https://aclanthology.org/2024.sigul-1.32/
@inproceedings{guevara-etal-2024-philippine, title = "{P}hilippine Languages Database: A Multilingual Speech Corpora for Developing Systems for Low-Resource Languages", author = "Guevara, Rowena Cristina L. and Cajote, Rhandley D. and Bayona, Michael Gringo Angelo R. and Lucas, Crisron Rudolf G.", editor = "Melero, Maite and Sakti, Sakriani and Soria, Claudia", booktitle = "Proceedings of the 3rd Annual Meeting of the Special Interest Group on Under-resourced Languages @ LREC-COLING 2024", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.sigul-1.32", pages = "264--271", abstract = "Previous efforts to collect Filipino speech produced the Filipino-Speech Corpus, TAGCO, and the Filipino-Bisaya speech corpus. These corpora, however, are either domain-specific, non-parallel, non-multilingual or relatively insufficient for the development of state-of-the-art Automatic Speech Recognizers (ASR) and Text-To-Speech Systems (TTS), which usually require hundreds of hours of speech data. This paper presents the Philippine Languages Database (PLD), multilingual corpora for the Philippine languages, namely: Filipino, English, Cebuano, Kapampangan, Hiligaynon, Ilokano, Bikolano, Waray, and Tausug. PLD includes over 454 hours of recordings from speakers of the ten languages, covering multiple domains in news, medical, education, tourism and spontaneous speech. The applicability of the corpus has also been demonstrated in adult and child ASR, phoneme transcription, voice conversion, and TTS applications.", }
Previous efforts to collect Filipino speech produced the Filipino-Speech Corpus, TAGCO, and the Filipino-Bisaya speech corpus. These corpora, however, are either domain-specific, non-parallel, non-multilingual or relatively insufficient for the development of state-of-the-art Automatic Speech Recognizers (ASR) and Text-To-Speech Systems (TTS), which usually require hundreds of hours of speech data. This paper presents the Philippine Languages Database (PLD), multilingual corpora for the Philippine languages, namely: Filipino, English, Cebuano, Kapampangan, Hiligaynon, Ilokano, Bikolano, Waray, and Tausug. PLD includes over 454 hours of recordings from speakers of the ten languages, covering multiple domains in news, medical, education, tourism and spontaneous speech. The applicability of the corpus has also been demonstrated in adult and child ASR, phoneme transcription, voice conversion, and TTS applications.
[ "Guevara, Rowena Cristina L.", "Cajote, Rh", "ley D.", "Bayona, Michael Gringo Angelo R.", "Lucas, Crisron Rudolf G." ]
Philippine Languages Database: A Multilingual Speech Corpora for Developing Systems for Low-Resource Languages
sigul-1.32
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.sigul-1.33.bib
https://aclanthology.org/2024.sigul-1.33/
@inproceedings{terblanche-etal-2024-prompting, title = "Prompting towards Alleviating Code-Switched Data Scarcity in Under-Resourced Languages with {GPT} as a Pivot", author = "Terblanche, Michelle and Olaleye, Kayode and Marivate, Vukosi", editor = "Melero, Maite and Sakti, Sakriani and Soria, Claudia", booktitle = "Proceedings of the 3rd Annual Meeting of the Special Interest Group on Under-resourced Languages @ LREC-COLING 2024", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.sigul-1.33", pages = "272--282", abstract = "Many multilingual communities, including numerous in Africa, frequently engage in code-switching during conversations. This behaviour stresses the need for natural language processing technologies adept at processing code-switched text. However, data scarcity, particularly in African languages, poses a significant challenge, as many are low-resourced and under-represented. In this study, we prompted GPT 3.5 to generate Afrikaans{--}English and Yoruba{--}English code-switched sentences, enhancing diversity using topic-keyword pairs, linguistic guidelines, and few-shot examples. Our findings indicate that the quality of generated sentences for languages using non-Latin scripts, like Yoruba, is considerably lower when compared with the high Afrikaans{--}English success rate. There is therefore a notable opportunity to refine prompting guidelines to yield sentences suitable for the fine-tuning of language models. We propose a framework for augmenting the diversity of synthetically generated code-switched data using GPT and propose leveraging this technology to mitigate data scarcity in low-resourced languages, underscoring the essential role of native speakers in this process.", }
Many multilingual communities, including numerous in Africa, frequently engage in code-switching during conversations. This behaviour stresses the need for natural language processing technologies adept at processing code-switched text. However, data scarcity, particularly in African languages, poses a significant challenge, as many are low-resourced and under-represented. In this study, we prompted GPT 3.5 to generate Afrikaans{--}English and Yoruba{--}English code-switched sentences, enhancing diversity using topic-keyword pairs, linguistic guidelines, and few-shot examples. Our findings indicate that the quality of generated sentences for languages using non-Latin scripts, like Yoruba, is considerably lower when compared with the high Afrikaans{--}English success rate. There is therefore a notable opportunity to refine prompting guidelines to yield sentences suitable for the fine-tuning of language models. We propose a framework for augmenting the diversity of synthetically generated code-switched data using GPT and propose leveraging this technology to mitigate data scarcity in low-resourced languages, underscoring the essential role of native speakers in this process.
[ "Terblanche, Michelle", "Olaleye, Kayode", "Marivate, Vukosi" ]
Prompting towards Alleviating Code-Switched Data Scarcity in Under-Resourced Languages with GPT as a Pivot
sigul-1.33
Poster
2404.17216
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
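
The code-switching abstract above prompts GPT 3.5 with topic-keyword pairs, linguistic guidelines, and few-shot examples. Below is an illustrative prompt builder for that strategy; the wording, guideline text, and example sentence are invented for demonstration, not taken from the paper.

```python
# Illustrative prompt construction for topic-keyword + guideline + few-shot
# prompting; all strings here are invented placeholders.
def build_prompt(lang_pair, topic, keywords, guidelines, few_shots):
    shots = "\n".join(f"- {s}" for s in few_shots)
    return (
        f"Generate a {lang_pair} code-switched sentence about '{topic}'.\n"
        f"Use the keywords: {', '.join(keywords)}.\n"
        f"Follow these linguistic guidelines: {guidelines}\n"
        f"Examples:\n{shots}"
    )

prompt = build_prompt(
    "Afrikaans-English", "cooking", ["resep", "dinner"],
    "switch at clause boundaries; keep the matrix language Afrikaans",
    ["Ek het 'n nuwe recipe probeer for dinner."],
)
print(prompt)
```
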
https://aclanthology.org/2024.sigul-1.34.bib
https://aclanthology.org/2024.sigul-1.34/
@inproceedings{domingues-etal-2024-quantifying, title = "Quantifying the Ethical Dilemma of Using Culturally Toxic Training Data in {AI} Tools for Indigenous Languages", author = "Domingues, Pedro Henrique and Pinhanez, Claudio Santos and Cavalin, Paulo and Nogima, Julio", editor = "Melero, Maite and Sakti, Sakriani and Soria, Claudia", booktitle = "Proceedings of the 3rd Annual Meeting of the Special Interest Group on Under-resourced Languages @ LREC-COLING 2024", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.sigul-1.34", pages = "283--293", abstract = "This paper tries to quantify the ethical dilemma of using culturally toxic training data to improve the performance of AI tools for ultra low-resource languages such as Indigenous languages. Our case study explores the use of Bible data, which is both a commonly available source of training pairs for translators of Indigenous languages and a text with a trail of physical and cultural violence for many Indigenous communities. In the context of fine-tuning a WMT19 German-to-English model into a Guarani Mbya-to-English translator, we first show, with two commonly-used Machine Translation metrics, that using only Bible data is not enough to create successful translators for everyday sentences gathered from a dictionary. Indeed, even fine-tuning with only 3,000 pairs of data from the dictionary produces significant increases in accuracy compared to Bible-only models. We then show that simultaneously fine-tuning with dictionary and Bible data achieves a substantial increase over the accuracy of a dictionary-only trained translator, as also happens when using two-step methods of fine-tuning. However, we also observed some measurable contamination from the Bible in the outputs of the best translator, raising concerns about its release to an Indigenous community. We end by discussing mechanisms to mitigate the negative impacts of this contamination.", }
This paper tries to quantify the ethical dilemma of using culturally toxic training data to improve the performance of AI tools for ultra low-resource languages such as Indigenous languages. Our case study explores the use of Bible data, which is both a commonly available source of training pairs for translators of Indigenous languages and a text with a trail of physical and cultural violence for many Indigenous communities. In the context of fine-tuning a WMT19 German-to-English model into a Guarani Mbya-to-English translator, we first show, with two commonly-used Machine Translation metrics, that using only Bible data is not enough to create successful translators for everyday sentences gathered from a dictionary. Indeed, even fine-tuning with only 3,000 pairs of data from the dictionary produces significant increases in accuracy compared to Bible-only models. We then show that simultaneously fine-tuning with dictionary and Bible data achieves a substantial increase over the accuracy of a dictionary-only trained translator, as also happens when using two-step methods of fine-tuning. However, we also observed some measurable contamination from the Bible in the outputs of the best translator, raising concerns about its release to an Indigenous community. We end by discussing mechanisms to mitigate the negative impacts of this contamination.
[ "Domingues, Pedro Henrique", "Pinhanez, Claudio Santos", "Cavalin, Paulo", "Nogima, Julio" ]
Quantifying the Ethical Dilemma of Using Culturally Toxic Training Data in AI Tools for Indigenous Languages
sigul-1.34
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.sigul-1.35.bib
https://aclanthology.org/2024.sigul-1.35/
@inproceedings{escolano-etal-2024-residual, title = "Residual Dropout: A Simple Approach to Improve Transformer{'}s Data Efficiency", author = "Escolano, Carlos and De Luca Fornaciari, Francesca and Melero, Maite", editor = "Melero, Maite and Sakti, Sakriani and Soria, Claudia", booktitle = "Proceedings of the 3rd Annual Meeting of the Special Interest Group on Under-resourced Languages @ LREC-COLING 2024", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.sigul-1.35", pages = "294--299", abstract = "Transformer models often demand a vast amount of training data to achieve the desired level of performance. However, this data requirement poses a major challenge for low-resource languages seeking access to high-quality systems, particularly in tasks like Machine Translation. To address this issue, we propose adding Dropout to Transformer{'}s Residual Connections. Our experimental results demonstrate that this modification effectively mitigates overfitting during training, resulting in substantial performance gains of over 4 BLEU points on a dataset consisting of merely 10 thousand examples.", }
Transformer models often demand a vast amount of training data to achieve the desired level of performance. However, this data requirement poses a major challenge for low-resource languages seeking access to high-quality systems, particularly in tasks like Machine Translation. To address this issue, we propose adding Dropout to Transformer{'}s Residual Connections. Our experimental results demonstrate that this modification effectively mitigates overfitting during training, resulting in substantial performance gains of over 4 BLEU points on a dataset consisting of merely 10 thousand examples.
[ "Escolano, Carlos", "De Luca Fornaciari, Francesca", "Melero, Maite" ]
Residual Dropout: A Simple Approach to Improve Transformer's Data Efficiency
sigul-1.35
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
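
The Residual Dropout abstract above proposes adding dropout to the Transformer's residual connections. Here is a minimal PyTorch sketch of that idea, assuming dropout is applied to the sublayer branch before the residual addition; the paper's exact placement and rate may differ.

```python
# Minimal sketch of dropout on a Transformer residual branch; the dropout
# rate and placement are assumptions, not the paper's exact configuration.
import torch.nn as nn

class ResidualDropoutBlock(nn.Module):
    def __init__(self, d_model: int, n_heads: int, p: float = 0.2):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.norm = nn.LayerNorm(d_model)
        self.res_dropout = nn.Dropout(p)

    def forward(self, x):
        attn_out, _ = self.attn(x, x, x)
        # Residual connection with dropout applied to the sublayer output.
        return self.norm(x + self.res_dropout(attn_out))
```
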
https://aclanthology.org/2024.sigul-1.36.bib
https://aclanthology.org/2024.sigul-1.36/
@inproceedings{blum-etal-2024-resource, title = "Resource Acquisition for Understudied Languages: Extracting Wordlists from Dictionaries for Computer-assisted Language Comparison", author = "Blum, Frederic and Englisch, Johannes and Hermida Rodriguez, Alba and van Gijn, Rik and List, Johann-Mattis", editor = "Melero, Maite and Sakti, Sakriani and Soria, Claudia", booktitle = "Proceedings of the 3rd Annual Meeting of the Special Interest Group on Under-resourced Languages @ LREC-COLING 2024", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.sigul-1.36", pages = "300--306", abstract = "Comparative wordlists play a crucial role in historical language comparison. They are regularly used for the identification of related words and languages, or for the reconstruction of language phylogenies and proto-languages. While automated solutions exist for the majority of methods used for this purpose, no standardized computational or computer-assisted approaches for the compilation of comparative wordlists have been proposed so far. To this day, scholars compile wordlists by sifting manually through dictionaries or similar language resources and typing them into spreadsheets. In this study we present a semi-automatic approach to extract wordlists from machine-readable dictionaries. The transparent workflow allows researchers to build user-defined wordlists for individual languages in a standardized format. By automating the search for translation equivalents in dictionaries, our approach greatly facilitates the aggregation of individual resources into multilingual comparative wordlists that can be used for a variety of purposes.", }
Comparative wordlists play a crucial role in historical language comparison. They are regularly used for the identification of related words and languages, or for the reconstruction of language phylogenies and proto-languages. While automated solutions exist for the majority of methods used for this purpose, no standardized computational or computer-assisted approaches for the compilation of comparative wordlists have been proposed so far. To this day, scholars compile wordlists by sifting manually through dictionaries or similar language resources and typing them into spreadsheets. In this study we present a semi-automatic approach to extract wordlists from machine-readable dictionaries. The transparent workflow allows researchers to build user-defined wordlists for individual languages in a standardized format. By automating the search for translation equivalents in dictionaries, our approach greatly facilitates the aggregation of individual resources into multilingual comparative wordlists that can be used for a variety of purposes.
[ "Blum, Frederic", "Englisch, Johannes", "Hermida Rodriguez, Alba", "van Gijn, Rik", "List, Johann-Mattis" ]
Resource Acquisition for Understudied Languages: Extracting Wordlists from Dictionaries for Computer-assisted Language Comparison
sigul-1.36
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
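
The wordlist-extraction abstract above automates the search for translation equivalents in machine-readable dictionaries. A minimal sketch of that idea follows; the concept list, dictionary contents, and language label are invented placeholders, not the paper's data or format.

```python
# Minimal sketch of dictionary-to-wordlist extraction under assumed,
# invented data structures; real workflows use standardized formats.
concepts = ["water", "fire", "stone"]
dictionary = {"water": ["yaku"], "fire": ["nina"], "sun": ["inti"]}

# One standardized wordlist row per translation equivalent found.
wordlist = [
    {"concept": c, "form": form, "language": "demo-language"}
    for c in concepts
    for form in dictionary.get(c, [])
]
print(wordlist)
```
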
https://aclanthology.org/2024.sigul-1.37.bib
https://aclanthology.org/2024.sigul-1.37/
@inproceedings{ji-etal-2024-robust, title = "Robust Guidance for Unsupervised Data Selection: Capturing Perplexing Named Entities for Domain-Specific Machine Translation", author = "Ji, Seunghyun and Sinulingga, Hagai Raja and Kwon, Darongsae", editor = "Melero, Maite and Sakti, Sakriani and Soria, Claudia", booktitle = "Proceedings of the 3rd Annual Meeting of the Special Interest Group on Under-resourced Languages @ LREC-COLING 2024", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.sigul-1.37", pages = "307--317", abstract = "Low-resourced data presents a significant challenge for neural machine translation. In most cases, the low-resourced environment is caused by high costs due to the need for domain experts or the lack of language experts. Therefore, identifying the most training-efficient data within an unsupervised setting emerges as a practical strategy. Recent research suggests that such effective data can be identified by selecting {`}appropriately complex data{'} based on its volume, providing strong intuition for unsupervised data selection. However, we have discovered that establishing criteria for unsupervised data selection remains a challenge, as the {`}appropriate level of difficulty{'} may vary depending on the data domain. We introduce a novel unsupervised data selection method named {`}Capturing Perplexing Named Entities,{'} which leverages the maximum inference entropy in translated named entities as a metric for selection. When tested with the {`}Korean-English Parallel Corpus of Specialized Domains,{'} our method served as robust guidance for identifying training-efficient data across different domains, in contrast to existing methods.", }
Low-resourced data presents a significant challenge for neural machine translation. In most cases, the low-resourced environment is caused by high costs due to the need for domain experts or the lack of language experts. Therefore, identifying the most training-efficient data within an unsupervised setting emerges as a practical strategy. Recent research suggests that such effective data can be identified by selecting {`}appropriately complex data{'} based on its volume, providing strong intuition for unsupervised data selection. However, we have discovered that establishing criteria for unsupervised data selection remains a challenge, as the {`}appropriate level of difficulty{'} may vary depending on the data domain. We introduce a novel unsupervised data selection method named {`}Capturing Perplexing Named Entities,{'} which leverages the maximum inference entropy in translated named entities as a metric for selection. When tested with the {`}Korean-English Parallel Corpus of Specialized Domains,{'} our method served as robust guidance for identifying training-efficient data across different domains, in contrast to existing methods.
[ "Ji, Seunghyun", "Sinulingga, Hagai Raja", "Kwon, Darongsae" ]
Robust Guidance for Unsupervised Data Selection: Capturing Perplexing Named Entities for Domain-Specific Machine Translation
sigul-1.37
Poster
2402.19267
[ "https://github.com/comchobo/capturing-perplexing-named-entities" ]
-1
-1
-1
-1
0
[]
[]
[]
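
The data-selection abstract above scores sentences by the maximum inference entropy over translated named entities. The sketch below shows that signal in isolation; the probability distributions and entity span indices are placeholders, not model outputs.

```python
# Sketch of the "max entropy over a named-entity span" selection signal;
# the distributions and indices below are invented placeholders.
import math

def entropy(dist):
    """Shannon entropy of one token's output distribution."""
    return -sum(p * math.log(p) for p in dist if p > 0)

def max_entity_entropy(token_dists, entity_indices):
    """Highest prediction entropy among tokens inside a named entity."""
    return max(entropy(token_dists[i]) for i in entity_indices)

token_dists = [[0.9, 0.05, 0.05], [0.4, 0.3, 0.3], [0.98, 0.01, 0.01]]
print(max_entity_entropy(token_dists, entity_indices=[1, 2]))
```
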
https://aclanthology.org/2024.sigul-1.38.bib
https://aclanthology.org/2024.sigul-1.38/
@inproceedings{carpenter-etal-2024-seeding, title = "Seeding Alignment between Language Technology and Indigenous Methodologies: A Decolonizing Framework for Endangered Language Revitalization", author = "Carpenter, Craig John and Lyon, John and Thorogood, Miles and Armstrong, Jeannette C.", editor = "Melero, Maite and Sakti, Sakriani and Soria, Claudia", booktitle = "Proceedings of the 3rd Annual Meeting of the Special Interest Group on Under-resourced Languages @ LREC-COLING 2024", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.sigul-1.38", pages = "318--324", abstract = "The integration of a speech technology into a digital edition to support the acquisition of a critically endangered Indigenous language is a complex task. Beyond the technical challenges of working with an under-resourced language, researchers risk re-enacting causes of language endangerment if they do not rigorously adhere to qualitative methodologies. Based on reflections throughout the development process of a speech technology, this paper proposes a cross-disciplinary decolonizing framework for researchers working in the field of computational linguistics for Indigenous Language Revitalization (ILR). The authors propose a series of qualitative methodologies to ensure alignment with the language community that the technology is intended to benefit. The proposed relational framework is designed to sustain the integrity of the Four Rs: a series of principles first presented by Verna J. Kirkness and Ray Barnhardt in their 1991 article, {``}First Nations and Higher Education: The Four R{'}s - Respect, Relevance, Reciprocity, Responsibility{''}.", }
The integration of a speech technology into a digital edition to support the acquisition of a critically endangered Indigenous language is a complex task. Beyond the technical challenges of working with an under-resourced language, researchers risk re-enacting causes of language endangerment if they do not rigorously adhere to qualitative methodologies. Based on reflections throughout the development process of a speech technology, this paper proposes a cross-disciplinary decolonizing framework for researchers working in the field of computational linguistics for Indigenous Language Revitalization (ILR). The authors propose a series of qualitative methodologies to ensure alignment with the language community that the technology is intended to benefit. The proposed relational framework is designed to sustain the integrity of the Four Rs: a series of principles first presented by Verna J. Kirkness and Ray Barnhardt in their 1991 article, {``}First Nations and Higher Education: The Four R{'}s - Respect, Relevance, Reciprocity, Responsibility{''}.
[ "Carpenter, Craig John", "Lyon, John", "Thorogood, Miles", "Armstrong, Jeannette C." ]
Seeding Alignment between Language Technology and Indigenous Methodologies: A Decolonizing Framework for Endangered Language Revitalization
sigul-1.38
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.sigul-1.39.bib
https://aclanthology.org/2024.sigul-1.39/
@inproceedings{leoni-etal-2024-solving, title = "Solving Failure Modes in the Creation of Trustworthy Language Technologies", author = "Leoni, Gianna and Steven, Lee and Keith, T{\=u}reiti and Mahelona, Keoni and Jones, Peter-Lucas and Duncan, Suzanne", editor = "Melero, Maite and Sakti, Sakriani and Soria, Claudia", booktitle = "Proceedings of the 3rd Annual Meeting of the Special Interest Group on Under-resourced Languages @ LREC-COLING 2024", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.sigul-1.39", pages = "325--330", abstract = "To produce high-quality Natural Language Processing (NLP) technologies for low-resource languages, authentic leadership and participation from the low-resource language community is crucial. This reduces chances of bias, surveillance and the inclusion of inaccurate data that can negatively impact output in language technologies. It also ensures that decision-making throughout the pipeline of work centres on the language community rather than only prioritising metrics. The NLP building process involves a range of steps and decisions to ensure the production of successful models and outputs. Rarely does a model perform as expected or desired the first time it is deployed for testing, resulting in the need for re-assessment and re-deployment. This paper discusses the process involved in solving failure modes for a M{\=a}ori language automatic speech recognition (ASR) model. It explains how the data is curated and how language and data specialists offer unparalleled insight into the debugging process because of their knowledge of the data. This expertise has a significant influence on decision-making to ensure the entire pipeline is embedded in ethical practice and the work is culturally appropriate for the M{\=a}ori language community thus creating trustworthy language technology.", }
To produce high-quality Natural Language Processing (NLP) technologies for low-resource languages, authentic leadership and participation from the low-resource language community is crucial. This reduces chances of bias, surveillance and the inclusion of inaccurate data that can negatively impact output in language technologies. It also ensures that decision-making throughout the pipeline of work centres on the language community rather than only prioritising metrics. The NLP building process involves a range of steps and decisions to ensure the production of successful models and outputs. Rarely does a model perform as expected or desired the first time it is deployed for testing, resulting in the need for re-assessment and re-deployment. This paper discusses the process involved in solving failure modes for a M{\=a}ori language automatic speech recognition (ASR) model. It explains how the data is curated and how language and data specialists offer unparalleled insight into the debugging process because of their knowledge of the data. This expertise has a significant influence on decision-making to ensure the entire pipeline is embedded in ethical practice and the work is culturally appropriate for the M{\=a}ori language community thus creating trustworthy language technology.
[ "Leoni, Gianna", "Steven, Lee", "Keith, T{\\=u}reiti", "Mahelona, Keoni", "Jones, Peter-Lucas", "Duncan, Suzanne" ]
Solving Failure Modes in the Creation of Trustworthy Language Technologies
sigul-1.39
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.sigul-1.40.bib
https://aclanthology.org/2024.sigul-1.40/
@inproceedings{mengke-etal-2024-tandem, title = "Tandem Long-Short Duration-based Modeling for Automatic Speech Recognition", author = "Mengke, Dalai and Meng, Yan and Mihajlik, Peter", editor = "Melero, Maite and Sakti, Sakriani and Soria, Claudia", booktitle = "Proceedings of the 3rd Annual Meeting of the Special Interest Group on Under-resourced Languages @ LREC-COLING 2024", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.sigul-1.40", pages = "331--336", abstract = "This study outlines our duration-dependent modeling experiments on limited-resource Hungarian speech recognition tasks. As is well known, very short utterances pose significant challenges in automatic speech recognition due to the lack of context and other phenomena. In particular, we found that the exclusion of shorter speech samples from fine-tuning for longer-duration test data significantly improves the recognition rate measured on public Hungarian datasets, BEA-Base and CommonVoice (CV). We therefore apply a tandem modeling approach: separate models are used for short- and long-duration test data. Our strategy improved the ability to recognize short utterances while maintaining efficient recognition of long utterances, which led to a significant increase in overall recognition accuracy.", }
This study outlines our duration-dependent modeling experiments on limited-resource Hungarian speech recognition tasks. As is well known, very short utterances pose significant challenges in automatic speech recognition due to the lack of context and other phenomena. In particular, we found that the exclusion of shorter speech samples from fine-tuning for longer-duration test data significantly improves the recognition rate measured on public Hungarian datasets, BEA-Base and CommonVoice (CV). We therefore apply a tandem modeling approach: separate models are used for short- and long-duration test data. Our strategy improved the ability to recognize short utterances while maintaining efficient recognition of long utterances, which led to a significant increase in overall recognition accuracy.
[ "Mengke, Dalai", "Meng, Yan", "Mihajlik, Peter" ]
Tandem Long-Short Duration-based Modeling for Automatic Speech Recognition
sigul-1.40
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
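
The tandem abstract above routes utterances to separate short- and long-duration models. A toy illustration of that dispatch follows; the duration threshold and model stubs are assumptions, not the paper's actual setup.

```python
# Toy illustration of tandem routing by utterance duration; the threshold
# and both model stubs are invented placeholders.
SHORT_MAX_SEC = 2.0

def short_asr(audio_path: str) -> str:
    return f"[short-utterance model] {audio_path}"   # stand-in

def long_asr(audio_path: str) -> str:
    return f"[long-utterance model] {audio_path}"    # stand-in

def transcribe(audio_path: str, duration_sec: float) -> str:
    model = short_asr if duration_sec <= SHORT_MAX_SEC else long_asr
    return model(audio_path)

print(transcribe("sample.wav", 1.4))
```
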
https://aclanthology.org/2024.sigul-1.41.bib
https://aclanthology.org/2024.sigul-1.41/
@inproceedings{cordeiro-etal-2024-telp, title = "{TELP} {--} Text Extraction with Linguistic Patterns", author = "Cordeiro, Jo{\~a}o and Silvano, Purifica{\c{c}}{\~a}o Moura and Leal, Ant{\'o}nio and Pais, Sebasti{\~a}o", editor = "Melero, Maite and Sakti, Sakriani and Soria, Claudia", booktitle = "Proceedings of the 3rd Annual Meeting of the Special Interest Group on Under-resourced Languages @ LREC-COLING 2024", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.sigul-1.41", pages = "337--344", abstract = "Linguistic studies in under-resourced languages pose additional challenges at various levels, including the automatic collection of examples, cases, and corpora construction. Several sophisticated applications, such as GATE (Cunningham, 2002), can be configured/adjusted/programmed by experts to automatically collect examples from the Web in any language. However, these applications are complex and intricate to operate, in some cases requiring computer science skills. In this work, we present TELP, a tool that allows for the simplified expression of linguistic patterns to automatically extract case studies from websites. It is a straightforward application with an intuitive GUI and a quick learning curve, facilitating its broad use by researchers from different domains. In this paper, we describe the operational and technical aspects of TELP and some relatively recent and relevant use cases in the field of linguistic studies.", }
Linguistic studies in under-resourced languages pose additional challenges at various levels, including the automatic collection of examples, cases, and corpora construction. Several sophisticated applications, such as GATE (Cunningham, 2002), can be configured/adjusted/programmed by experts to automatically collect examples from the Web in any language. However, these applications are complex and intricate to operate, in some cases requiring computer science skills. In this work, we present TELP, a tool that allows for the simplified expression of linguistic patterns to automatically extract case studies from websites. It is a straightforward application with an intuitive GUI and a quick learning curve, facilitating its broad use by researchers from different domains. In this paper, we describe the operational and technical aspects of TELP and some relatively recent and relevant use cases in the field of linguistic studies.
[ "Cordeiro, Jo{\\~a}o", "Silvano, Purifica{\\c{c}}{\\~a}o Moura", "Leal, Ant{\\'o}nio", "Pais, Sebasti{\\~a}o" ]
TELP – Text Extraction with Linguistic Patterns
sigul-1.41
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
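
The TELP abstract above is about expressing linguistic patterns for text extraction. As a rough stand-in for what such a pattern does, here is an ordinary regular-expression example; this is plain regex, not TELP's actual pattern notation.

```python
# Stand-in for a linguistic extraction pattern using plain regex;
# TELP's own pattern language is not reproduced here.
import re

# Capture "X such as Y, Z" constructions from crawled text.
pattern = re.compile(r"(\w+)\s+such as\s+(\w+(?:,\s*\w+)*)", re.IGNORECASE)
text = "Under-resourced languages such as Yoruba, Guarani require new tools."
for head, examples in pattern.findall(text):
    print(head, "->", examples)
```
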
https://aclanthology.org/2024.sigul-1.42.bib
https://aclanthology.org/2024.sigul-1.42/
@inproceedings{boyacioglu-niehues-2024-first, title = "The First Parallel Corpus and Neural Machine Translation Model of {W}estern {A}rmenian and {E}nglish", author = "Boyac{\i}o{\u{g}}lu, Ari Nubar and Niehues, Jan", editor = "Melero, Maite and Sakti, Sakriani and Soria, Claudia", booktitle = "Proceedings of the 3rd Annual Meeting of the Special Interest Group on Under-resourced Languages @ LREC-COLING 2024", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.sigul-1.42", pages = "345--356", abstract = "Western Armenian is a low-resource language spoken by the Armenian Diaspora residing in various places of the world. Although the language has content on the internet as well as a relatively rich literary heritage for a minority language, there is no data for the machine translation task and only a very limited amount of labeled data for other NLP tasks. In this work, we build the first machine translation system between Western Armenian and English. We explore different techniques for data collection and evaluate their impact in this very low-resource scenario. Then, we build the machine translation system while focusing on the possibilities of performing knowledge transfer from Eastern Armenian. The system is fine-tuned with the data collected for the first Western Armenian-English parallel corpus, which contains a total of approximately 147k sentence pairs, of which a shareable subset of 52k examples was made open-source. The best system achieves a BLEU score of 29.8 when translating into English and 17 when translating into Western Armenian.", }
Western Armenian is a low-resource language spoken by the Armenian Diaspora residing in various places of the world. Although the language has content on the internet as well as a relatively rich literary heritage for a minority language, there is no data for the machine translation task and only a very limited amount of labeled data for other NLP tasks. In this work, we build the first machine translation system between Western Armenian and English. We explore different techniques for data collection and evaluate their impact in this very low-resource scenario. Then, we build the machine translation system while focusing on the possibilities of performing knowledge transfer from Eastern Armenian. The system is fine-tuned with the data collected for the first Western Armenian-English parallel corpus, which contains a total of approximately 147k sentence pairs, of which a shareable subset of 52k examples was made open-source. The best system achieves a BLEU score of 29.8 when translating into English and 17 when translating into Western Armenian.
[ "Boyac{\\i}o{\\u{g}}lu, Ari Nubar", "Niehues, Jan" ]
The First Parallel Corpus and Neural Machine Translation Model of Western Armenian and English
sigul-1.42
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
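
The Western Armenian abstract above reports BLEU scores. For reference, here is a hedged sketch of computing BLEU with sacreBLEU; the hypothesis and reference sentences are placeholders, not corpus data.

```python
# Hedged BLEU sketch with sacreBLEU; sentences below are placeholders.
import sacrebleu

hypotheses = ["the cat sat on the mat"]
references = [["the cat is sitting on the mat"]]  # one reference stream
print(f"BLEU = {sacrebleu.corpus_bleu(hypotheses, references).score:.1f}")
```
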
https://aclanthology.org/2024.sigul-1.43.bib
https://aclanthology.org/2024.sigul-1.43/
@inproceedings{piccini-etal-2024-tracing, title = "Tracing Linguistic Heritage: Constructing a {S}omali-{I}talian Terminological Resource through Explorers{'} Notebooks and Contemporary Corpus Analysis", author = "Piccini, Silvia and Vilela Ruiz, Giuliana Elizabeth and Bellandi, Andrea and Carniani, Enrico", editor = "Melero, Maite and Sakti, Sakriani and Soria, Claudia", booktitle = "Proceedings of the 3rd Annual Meeting of the Special Interest Group on Under-resourced Languages @ LREC-COLING 2024", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.sigul-1.43", pages = "357--362", abstract = "The aim of this contribution is to introduce the initial phases of constructing a Somali-Italian terminological resource that dates back to Italy{'}s colonial expansion into Africa. Specifically, the terminological data was extracted from the notebooks authored by the Italian explorer Ugo Ferrandi (1852 - 1928) and published by the Societ{\`a} Geografica in 1903 under the title {``}Lugh. Emporio Commerciale sul Giuba{''}. In order to develop Ferrandi{'}s terminological resource, we have employed Semantic Web technologies (RDF, OWL, and SPARQL) and embraced the Linked Open Data paradigm. This ensures the FAIRness of the data and enables the publication and sharing of our terminological resource within an open interconnected Web of Data, thus contributing to addressing the absence of Somali in the Linguistic Linked Data cloud. Whenever feasible, Ferrandi{'}s lexicon entries have been linked and enriched with information derived from a Somali lexicon included in a contemporary Somali Corpus. This approach allows the synchronic corpus-related Somali lexicon to acquire historical depth, thereby illuminating the linguistic dynamics that have transpired over time and would otherwise have remained obscure.", }
The aim of this contribution is to introduce the initial phases of constructing a Somali-Italian terminological resource that dates back to Italy{'}s colonial expansion into Africa. Specifically, the terminological data was extracted from the notebooks authored by the Italian explorer Ugo Ferrandi (1852 - 1928) and published by the Societ{\`a} Geografica in 1903 under the title {``}Lugh. Emporio Commerciale sul Giuba{''}. In order to develop Ferrandi{'}s terminological resource, we have employed Semantic Web technologies (RDF, OWL, and SPARQL) and embraced the Linked Open Data paradigm. This ensures the FAIRness of the data and enables the publication and sharing of our terminological resource within an open interconnected Web of Data, thus contributing to addressing the absence of Somali in the Linguistic Linked Data cloud. Whenever feasible, Ferrandi{'}s lexicon entries have been linked and enriched with information derived from a Somali lexicon included in a contemporary Somali Corpus. This approach allows the synchronic corpus-related Somali lexicon to acquire historical depth, thereby illuminating the linguistic dynamics that have transpired over time and would otherwise have remained obscure.
[ "Piccini, Silvia", "Vilela Ruiz, Giuliana Elizabeth", "Bell", "i, Andrea", "Carniani, Enrico" ]
Tracing Linguistic Heritage: Constructing a Somali-Italian Terminological Resource through Explorers' Notebooks and Contemporary Corpus Analysis
sigul-1.43
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
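
The Somali-Italian record above publishes lexicon entries as Linked Open Data with RDF. A minimal rdflib sketch of one such entry follows; the namespace, property names, and word forms are invented for illustration and are not the project's actual vocabulary.

```python
# Minimal Linked Open Data sketch with rdflib; the namespace, properties,
# and forms are invented placeholders, not the Ferrandi resource's schema.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, RDFS

LEX = Namespace("http://example.org/ferrandi-lexicon/")
g = Graph()
entry = LEX["entry/001"]
g.add((entry, RDF.type, LEX.LexicalEntry))
g.add((entry, RDFS.label, Literal("biyo", lang="so")))        # Somali form
g.add((entry, LEX.translation, Literal("acqua", lang="it")))  # Italian gloss
print(g.serialize(format="turtle"))  # rdflib >= 6 returns a string
```
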
https://aclanthology.org/2024.sigul-1.44.bib
https://aclanthology.org/2024.sigul-1.44/
@inproceedings{fernandez-de-landa-etal-2024-uncovering, title = "Uncovering Social Changes of the {B}asque Speaking {T}witter Community During {COVID}-19 Pandemic", author = "Fernandez de Landa, Joseba and Garc{\'\i}a-Ferrero, Iker and Salaberria, Ander and Campos, Jon Ander", editor = "Melero, Maite and Sakti, Sakriani and Soria, Claudia", booktitle = "Proceedings of the 3rd Annual Meeting of the Special Interest Group on Under-resourced Languages @ LREC-COLING 2024", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.sigul-1.44", pages = "363--371", abstract = "The aim of this work is to study the impact of the COVID-19 pandemic on the Basque-speaking Twitter community by applying unsupervised Natural Language Processing techniques. In order to carry out this study, we collected and publicly released the largest dataset of Basque tweets, containing up to 8M tweets from September 2019 to February 2021. To analyze the impact of the pandemic, the variability of the content over time was studied through quantitative and qualitative analysis of words and emojis. For the quantitative analysis, the shift in the frequency of the terms was calculated using linear regression over frequencies. For the qualitative analysis, word embeddings were used to study the changes in the meaning of the most significant words and emojis at different periods of the pandemic. Through this multifaceted approach, we discovered noteworthy alterations in the political inclinations exhibited by Basque users throughout the course of the pandemic.", }
The aim of this work is to study the impact of the COVID-19 pandemic on the Basque-speaking Twitter community by applying unsupervised Natural Language Processing techniques. In order to carry out this study, we collected and publicly released the largest dataset of Basque tweets, containing up to 8M tweets from September 2019 to February 2021. To analyze the impact of the pandemic, the variability of the content over time was studied through quantitative and qualitative analysis of words and emojis. For the quantitative analysis, the shift in the frequency of the terms was calculated using linear regression over frequencies. For the qualitative analysis, word embeddings were used to study the changes in the meaning of the most significant words and emojis at different periods of the pandemic. Through this multifaceted approach, we discovered noteworthy alterations in the political inclinations exhibited by Basque users throughout the course of the pandemic.
[ "Fern", "ez de L", "a, Joseba", "Garc{\\'\\i}a-Ferrero, Iker", "Salaberria, Ander", "Campos, Jon Ander" ]
Uncovering Social Changes of the Basque Speaking Twitter Community During COVID-19 Pandemic
sigul-1.44
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
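
The Basque Twitter abstract above measures frequency shift with linear regression over term frequencies. A hedged sketch of that quantitative step follows; the monthly frequencies below are made-up placeholders, not the paper's data.

```python
# Hedged sketch: fit a linear trend to a term's monthly relative frequency
# and read the slope as the shift; the numbers are invented placeholders.
import numpy as np

months = np.arange(18)  # Sep 2019 .. Feb 2021
freq = np.array([3.1, 3.0, 2.9, 3.2, 4.8, 7.5, 9.1, 8.7, 8.0,
                 7.2, 6.8, 6.5, 6.1, 5.9, 5.5, 5.2, 5.0, 4.8])  # per 10k tokens
slope, intercept = np.polyfit(months, freq, 1)
print(f"frequency shift: {slope:+.3f} per month")
```
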
https://aclanthology.org/2024.sigul-1.45.bib
https://aclanthology.org/2024.sigul-1.45/
@inproceedings{savary-etal-2024-unidive, title = "{U}ni{D}ive: A {COST} Action on Universality, Diversity and Idiosyncrasy in Language Technology", author = {Savary, Agata and Zeman, Daniel and Barbu Mititelu, Verginica and Barreiro, Anabela and Caftanatov, Olesea and de Marneffe, Marie-Catherine and Dobrovoljc, Kaja and Eryi{\u{g}}it, G{\"u}l{\c{s}}en and Giouli, Voula and Guillaume, Bruno and Markantonatou, Stella and Melnik, Nurit and Nivre, Joakim and Ojha, Atul Kr. and Ramisch, Carlos and Walsh, Abigail and W{\'o}jtowicz, Beata and Wr{\'o}blewska, Alina}, editor = "Melero, Maite and Sakti, Sakriani and Soria, Claudia", booktitle = "Proceedings of the 3rd Annual Meeting of the Special Interest Group on Under-resourced Languages @ LREC-COLING 2024", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.sigul-1.45", pages = "372--382", abstract = "This paper presents the objectives, organization and activities of the UniDive COST Action, a scientific network dedicated to universality, diversity and idiosyncrasy in language technology. We describe the objectives and organization of this initiative, the people involved, the working groups and the ongoing tasks and activities. This paper is also an open call for participation, addressed to new members and countries.", }
This paper presents the objectives, organization and activities of the UniDive COST Action, a scientific network dedicated to universality, diversity and idiosyncrasy in language technology. We describe the objectives and organization of this initiative, the people involved, the working groups and the ongoing tasks and activities. This paper is also an open call for participation, addressed to new members and countries.
[ "Savary, Agata", "Zeman, Daniel", "Barbu Mititelu, Verginica", "Barreiro, Anabela", "Caftanatov, Olesea", "de Marneffe, Marie-Catherine", "Dobrovoljc, Kaja", "Eryi{\\u{g}}it, G{\\\"u}l{\\c{s}}en", "Giouli, Voula", "Guillaume, Bruno", "Markantonatou, Stella", "Melnik, Nurit", "Nivre, Joakim", "Ojha, Atul Kr.", "Ramisch, Carlos", "Walsh, Abigail", "W{\\'o}jtowicz, Beata", "Wr{\\'o}blewska, Alina" ]
UniDive: A COST Action on Universality, Diversity and Idiosyncrasy in Language Technology
sigul-1.45
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.sigul-1.46.bib
https://aclanthology.org/2024.sigul-1.46/
@inproceedings{dadason-loftsson-2024-unsupervised, title = "Unsupervised Outlier Detection for Language-Independent Text Quality Filtering", author = "Da{\dh}ason, J{\'o}n and Loftsson, Hrafn", editor = "Melero, Maite and Sakti, Sakriani and Soria, Claudia", booktitle = "Proceedings of the 3rd Annual Meeting of the Special Interest Group on Under-resourced Languages @ LREC-COLING 2024", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.sigul-1.46", pages = "383--393", abstract = "Web-crawled corpora offer an abundant source of training data for language models. However, they are generally noisy and are typically filtered using heuristic rules or classifiers. These methods require careful tuning or labeling by fluent speakers. In this paper, we assess the effectiveness of commonly applied rules on TQ-IS, a manually labeled text quality dataset for Icelandic. Additionally, we advocate for the utilization of unsupervised clustering and outlier detection algorithms for filtering. These algorithms are language-independent, computationally efficient and do not require language expertise. Using grid search, we find the optimal configuration for every combination of rules, optimizing for F1 score on TQ-IS. For a rule-based approach, we discover that optimal results can be achieved with only a small subset of the full ruleset. Using five rules, we obtain an F1 score of 98.2{\%}. We then evaluate three unsupervised algorithms, i.e., Gaussian Mixture Models (GMMs), Isolation Forests and One-Class SVMs. Our findings reveal that unsupervised algorithms perform well on the TQ-IS dataset, with GMMs obtaining the best results, comparable to those obtained with the rule-based approach. Finally, we show that unsupervised methods appear to be equally suitable for languages other than Icelandic, including Estonian and Basque.", }
Web-crawled corpora offer an abundant source of training data for language models. However, they are generally noisy and are typically filtered using heuristic rules or classifiers. These methods require careful tuning or labeling by fluent speakers. In this paper, we assess the effectiveness of commonly applied rules on TQ-IS, a manually labeled text quality dataset for Icelandic. Additionally, we advocate for the utilization of unsupervised clustering and outlier detection algorithms for filtering. These algorithms are language-independent, computationally efficient and do not require language expertise. Using grid search, we find the optimal configuration for every combination of rules, optimizing for F1 score on TQ-IS. For a rule-based approach, we discover that optimal results can be achieved with only a small subset of the full ruleset. Using five rules, we obtain an F1 score of 98.2{\%}. We then evaluate three unsupervised algorithms, i.e., Gaussian Mixture Models (GMMs), Isolation Forests and One-Class SVMs. Our findings reveal that unsupervised algorithms perform well on the TQ-IS dataset, with GMMs obtaining the best results, comparable to those obtained with the rule-based approach. Finally, we show that unsupervised methods appear to be equally suitable for languages other than Icelandic, including Estonian and Basque.
[ "Da{\\dh}ason, J{\\'o}n", "Loftsson, Hrafn" ]
Unsupervised Outlier Detection for Language-Independent Text Quality Filtering
sigul-1.46
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
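
The filtering abstract above finds that a Gaussian Mixture Model works best among the unsupervised methods. Here is a hedged scikit-learn sketch of that filtering idea; the per-document features and the 5% cutoff are placeholders, not the paper's actual features or threshold.

```python
# Sketch of GMM-based unsupervised document filtering with scikit-learn;
# the features and cutoff below are invented placeholders.
import numpy as np
from sklearn.mixture import GaussianMixture

X = np.random.rand(1000, 8)                 # stand-in per-document features
gmm = GaussianMixture(n_components=2, random_state=0).fit(X)
scores = gmm.score_samples(X)               # log-likelihood per document
keep = scores > np.quantile(scores, 0.05)   # drop the least likely 5%
print(f"kept {keep.sum()} of {len(X)} documents")
```
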
https://aclanthology.org/2024.sigul-1.47.bib
https://aclanthology.org/2024.sigul-1.47/
@inproceedings{matlatipov-etal-2024-uzabsa, title = "{U}z{ABSA}: Aspect-Based Sentiment Analysis for the {U}zbek Language", author = "Matlatipov, Sanatbek Gayratovich and Rajabov, Jaloliddin and Kuriyozov, Elmurod and Aripov, Mersaid", editor = "Melero, Maite and Sakti, Sakriani and Soria, Claudia", booktitle = "Proceedings of the 3rd Annual Meeting of the Special Interest Group on Under-resourced Languages @ LREC-COLING 2024", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.sigul-1.47", pages = "394--403", abstract = "Enhancing the availability of natural language processing technologies for low-resource languages is of significant importance in facilitating technological accessibility for the populations that speak these languages. To our current knowledge, there are no established open-source linguistic resources for developing aspect-based sentiment analysis (ABSA) tools tailored to the Uzbek language. This work aims to address this gap by presenting the first high-quality annotated ABSA dataset, UzABSA. The data used in this study was obtained from a compilation of online reviews of Uzbek restaurants. The constructed dataset contains 3500 reviews at the document level and 6100+ sentences at the sentence level. The popular approach to language resources of this kind annotates four distinctive characteristics, namely Aspect Terms, Aspect Term Polarities, Aspect Category Terms, as well as Aspect Category Polarities. To the best of our knowledge, it is the first and the largest ABSA dataset for the Uzbek language. To evaluate the annotation process of our dataset, we used established statistical techniques such as Cohen{'}s kappa coefficient and Krippendorff{'}s $\alpha$ to assess agreement between annotators. Subsequently, a classification model, namely K-Nearest Neighbour (KNN), was used to evaluate the performance of the created dataset. Both sets of evaluation techniques demonstrate comparable levels of accuracy. The first findings across the various tasks showed promising outcomes, with accuracy rates ranging from 72{\%} to 88{\%}. This study not only highlights the significance of our dataset but also serves as a valuable tool for scholars interested in furthering sentiment analysis in the Uzbek language.", }
Enhancing the availability of natural language processing technologies for low-resource languages is of significant importance in facilitating technological accessibility for the populations that speak these languages. To our current knowledge, there are no established open-source linguistic resources for developing aspect-based sentiment analysis (ABSA) tools tailored to the Uzbek language. This work aims to address this gap by presenting the first high-quality annotated ABSA dataset, UzABSA. The data used in this study was obtained from a compilation of online reviews of Uzbek restaurants. The constructed dataset contains 3500 reviews at the document level and 6100+ sentences at the sentence level. The popular approach to language resources of this kind annotates four distinctive characteristics, namely Aspect Terms, Aspect Term Polarities, Aspect Category Terms, as well as Aspect Category Polarities. To the best of our knowledge, it is the first and the largest ABSA dataset for the Uzbek language. To evaluate the annotation process of our dataset, we used established statistical techniques such as Cohen{'}s kappa coefficient and Krippendorff{'}s $\alpha$ to assess agreement between annotators. Subsequently, a classification model, namely K-Nearest Neighbour (KNN), was used to evaluate the performance of the created dataset. Both sets of evaluation techniques demonstrate comparable levels of accuracy. The first findings across the various tasks showed promising outcomes, with accuracy rates ranging from 72{\%} to 88{\%}. This study not only highlights the significance of our dataset but also serves as a valuable tool for scholars interested in furthering sentiment analysis in the Uzbek language.
[ "Matlatipov, Sanatbek Gayratovich", "Rajabov, Jaloliddin", "Kuriyozov, Elmurod", "Aripov, Mersaid" ]
UzABSA: Aspect-Based Sentiment Analysis for the Uzbek Language
sigul-1.47
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
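
The UzABSA abstract above checks inter-annotator agreement with Cohen's kappa. A minimal sketch of that check via scikit-learn follows; the label sequences are invented placeholders, not UzABSA annotations.

```python
# Minimal Cohen's kappa agreement check; labels are invented placeholders.
from sklearn.metrics import cohen_kappa_score

annotator_a = ["pos", "neg", "neu", "pos", "pos", "neg"]
annotator_b = ["pos", "neg", "pos", "pos", "neu", "neg"]
print(f"kappa = {cohen_kappa_score(annotator_a, annotator_b):.2f}")
```
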
https://aclanthology.org/2024.sigul-1.48.bib
https://aclanthology.org/2024.sigul-1.48/
@inproceedings{nguyen-etal-2024-vihealthnli, title = "{V}i{H}ealth{NLI}: A Dataset for {V}ietnamese Natural Language Inference in Healthcare", author = "Nguyen, Huyen and Ngo, Quyen The and Do, Thanh-Ha and Hoang, Tuan-Anh", editor = "Melero, Maite and Sakti, Sakriani and Soria, Claudia", booktitle = "Proceedings of the 3rd Annual Meeting of the Special Interest Group on Under-resourced Languages @ LREC-COLING 2024", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.sigul-1.48", pages = "404--409", abstract = "This paper introduces ViHealthNLI, a large dataset for the natural language inference problem for Vietnamese. Unlike similar Vietnamese datasets, ours is specific to the healthcare domain. We conducted an exploratory analysis to characterize the dataset and evaluated state-of-the-art methods on it. Our findings indicate that the dataset poses significant challenges while also holding promise for further advanced research and the creation of practical applications.", }
This paper introduces ViHealthNLI, a large dataset for the natural language inference problem for Vietnamese. Unlike similar Vietnamese datasets, ours is specific to the healthcare domain. We conducted an exploratory analysis to characterize the dataset and evaluated state-of-the-art methods on it. Our findings indicate that the dataset poses significant challenges while also holding promise for further advanced research and the creation of practical applications.
[ "Nguyen, Huyen", "Ngo, Quyen The", "Do, Thanh-Ha", "Hoang, Tuan-Anh" ]
ViHealthNLI: A Dataset for Vietnamese Natural Language Inference in Healthcare
sigul-1.48
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.sigul-1.49.bib
https://aclanthology.org/2024.sigul-1.49/
@inproceedings{barkhordar-etal-2024-unexpected, title = "Why the Unexpected? Dissecting the Political and Economic Bias in {P}ersian Small and Large Language Models", author = "Barkhordar, Ehsan and Thapa, Surendrabikram and Maratha, Ashwarya and Naseem, Usman", editor = "Melero, Maite and Sakti, Sakriani and Soria, Claudia", booktitle = "Proceedings of the 3rd Annual Meeting of the Special Interest Group on Under-resourced Languages @ LREC-COLING 2024", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.sigul-1.49", pages = "410--420", abstract = "Recently, language models (LMs) like BERT and large language models (LLMs) like GPT-4 have demonstrated potential in various linguistic tasks such as text generation, translation, and sentiment analysis. However, these abilities come with a cost of a risk of perpetuating biases from their training data. Political and economic inclinations play a significant role in shaping these biases. Thus, this research aims to understand political and economic biases in Persian LMs and LLMs, addressing a significant gap in AI ethics and fairness research. Focusing on the Persian language, our research employs a two-step methodology. First, we utilize the political compass test adapted to Persian. Second, we analyze biases present in these models. Our findings indicate the presence of nuanced biases, underscoring the importance of ethical considerations in AI deployments within Persian-speaking contexts.", }
Recently, language models (LMs) like BERT and large language models (LLMs) like GPT-4 have demonstrated potential in various linguistic tasks such as text generation, translation, and sentiment analysis. However, these abilities carry the risk of perpetuating biases from their training data. Political and economic inclinations play a significant role in shaping these biases. Thus, this research aims to understand political and economic biases in Persian LMs and LLMs, addressing a significant gap in AI ethics and fairness research. Focusing on the Persian language, our research employs a two-step methodology. First, we utilize the political compass test adapted to Persian. Second, we analyze the biases present in these models. Our findings indicate the presence of nuanced biases, underscoring the importance of ethical considerations in AI deployments within Persian-speaking contexts.
[ "Barkhordar, Ehsan", "Thapa, Surendrabikram", "Maratha, Ashwarya", "Naseem, Usman" ]
Why the Unexpected? Dissecting the Political and Economic Bias in Persian Small and Large Language Models
sigul-1.49
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.sigul-1.50.bib
https://aclanthology.org/2024.sigul-1.50/
@inproceedings{keith-2024-work, title = "Work in Progress: Text-to-speech on Edge Devices for Te Reo {M}{\=a}ori and {`}{\=O}lelo Hawaiʻi", author = "Keith, T{\=u}reiti", editor = "Melero, Maite and Sakti, Sakriani and Soria, Claudia", booktitle = "Proceedings of the 3rd Annual Meeting of the Special Interest Group on Under-resourced Languages @ LREC-COLING 2024", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.sigul-1.50", pages = "421--426", abstract = "Existing popular text-to-speech technologies focus on large models requiring a large corpus of recorded speech to train. The resulting models are typically run on high-resource servers where users synthesise speech from a client device requiring constant connectivity. For speakers of low-resource languages living in remote areas, this approach does not work. Corpora are typically small and synthesis needs to run on an unconnected, battery or solar-powered edge device. In this paper, we demonstrate how knowledge transfer and adversarial training can be used to create efficient models capable of running on edge devices using a corpus of only several hours. We apply these concepts to create a voice synthesiser for te reo M{\=a}ori (the indigenous language of Aotearoa New Zealand) for a non-speaking user and {`}{\=o}lelo Hawaiʻi (the indigenous language of Hawaiʻi) for a legally blind user, thus creating the first high-quality text-to-speech tools for these endangered, central-eastern Polynesian languages capable of running on a low powered edge device.", }
Existing popular text-to-speech technologies focus on large models requiring a large corpus of recorded speech to train. The resulting models are typically run on high-resource servers where users synthesise speech from a client device requiring constant connectivity. For speakers of low-resource languages living in remote areas, this approach does not work. Corpora are typically small and synthesis needs to run on an unconnected, battery- or solar-powered edge device. In this paper, we demonstrate how knowledge transfer and adversarial training can be used to create efficient models capable of running on edge devices using a corpus of only a few hours. We apply these concepts to create a voice synthesiser for te reo Māori (the indigenous language of Aotearoa New Zealand) for a non-speaking user and ʻōlelo Hawaiʻi (the indigenous language of Hawaiʻi) for a legally blind user, thus creating the first high-quality text-to-speech tools for these endangered, central-eastern Polynesian languages capable of running on a low-powered edge device.
[ "Keith, T{\\=u}reiti" ]
Work in Progress: Text-to-speech on Edge Devices for Te Reo Māori and ʻŌlelo Hawaiʻi
sigul-1.50
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.tdle-1.1.bib
https://aclanthology.org/2024.tdle-1.1/
@inproceedings{grutzner-zahn-etal-2024-surveying, title = "Surveying the Technology Support of Languages", author = {Gr{\"u}tzner-Zahn, Annika and Gaspari, Federico and Giagkou, Maria and Hegele, Stefanie and Way, Andy and Rehm, Georg}, editor = "Gaspari, Federico and Moorkens, Joss and Aldabe, Itziar and Farwell, Aritz and Altuna, Begona and Piperidis, Stelios and Rehm, Georg and Rigau, German", booktitle = "Proceedings of the Second International Workshop Towards Digital Language Equality (TDLE): Focusing on Sustainability @ LREC-COLING 2024", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.tdle-1.1", pages = "1--17", abstract = "Many of the world{'}s languages are left behind when it comes to Language Technology applications, since most of these are available only in a limited number of languages, creating a digital divide that affects millions of users worldwide. It is crucial, therefore, to monitor and quantify the progress of technology support for individual languages, which also enables comparisons across language communities. In this way, efforts can be directed towards reducing language barriers, promoting economic and social inclusion, and ensuring that all citizens can use their preferred language in the digital age. This paper critically reviews and compares recent quantitative approaches to measuring technology support for languages. Despite using different approaches and methodologies, the findings of all analysed papers demonstrate the unequal distribution of technology support and emphasise the existence of a digital divide among languages.", }
Many of the world's languages are left behind when it comes to Language Technology applications, since most of these are available only in a limited number of languages, creating a digital divide that affects millions of users worldwide. It is crucial, therefore, to monitor and quantify the progress of technology support for individual languages, which also enables comparisons across language communities. In this way, efforts can be directed towards reducing language barriers, promoting economic and social inclusion, and ensuring that all citizens can use their preferred language in the digital age. This paper critically reviews and compares recent quantitative approaches to measuring technology support for languages. Despite using different approaches and methodologies, the findings of all analysed papers demonstrate the unequal distribution of technology support and emphasise the existence of a digital divide among languages.
[ "Gr{\\\"u}tzner-Zahn, Annika", "Gaspari, Federico", "Giagkou, Maria", "Hegele, Stefanie", "Way, Andy", "Rehm, Georg" ]
Surveying the Technology Support of Languages
tdle-1.1
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.tdle-1.2.bib
https://aclanthology.org/2024.tdle-1.2/
@inproceedings{alves-etal-2024-domains, title = "Which Domains, Tasks and Languages are in the Focus of {NLP} Research on the Languages of {E}urope?", author = "Alves, Diego and Tadi{\'c}, Marko and Rehm, Georg", editor = "Gaspari, Federico and Moorkens, Joss and Aldabe, Itziar and Farwell, Aritz and Altuna, Begona and Piperidis, Stelios and Rehm, Georg and Rigau, German", booktitle = "Proceedings of the Second International Workshop Towards Digital Language Equality (TDLE): Focusing on Sustainability @ LREC-COLING 2024", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.tdle-1.2", pages = "18--32", abstract = "This article provides a thorough mapping of NLP and Language Technology research on 39 European languages onto 46 domains. Our analysis is based on almost 50,000 papers published between 2010 and October 2022 in the ACL Anthology. We use a dictionary-based approach to identify 1) languages, 2) domains, and 3) NLP tasks in these papers; the dictionary-based method using exact terms has a precision value of 0.81. Moreover, we identify common mistakes which can be useful to fine-tune the methodology for future work. While we are only able to highlight selected results in this submitted version, the final paper will contain detailed analyses and charts on a per-language basis. We hope that this study can contribute to digital language equality in Europe by providing information to the academic and industrial research community about the opportunities for novel LT/NLP research.", }
This article provides a thorough mapping of NLP and Language Technology research on 39 European languages onto 46 domains. Our analysis is based on almost 50,000 papers published between 2010 and October 2022 in the ACL Anthology. We use a dictionary-based approach to identify 1) languages, 2) domains, and 3) NLP tasks in these papers; the dictionary-based method using exact terms has a precision value of 0.81. Moreover, we identify common mistakes which can be useful to fine-tune the methodology for future work. While we are only able to highlight selected results in this submitted version, the final paper will contain detailed analyses and charts on a per-language basis. We hope that this study can contribute to digital language equality in Europe by providing information to the academic and industrial research community about the opportunities for novel LT/NLP research.
[ "Alves, Diego", "Tadi{\\'c}, Marko", "Rehm, Georg" ]
Which Domains, Tasks and Languages are in the Focus of NLP Research on the Languages of Europe?
tdle-1.2
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.tdle-1.3.bib
https://aclanthology.org/2024.tdle-1.3/
@inproceedings{padro-sauri-2024-fine, title = "Fine-Tuning Open Access {LLM}s for High-Precision {NLU} in Goal-Driven Dialog Systems", author = "Padr{\'o}, Llu{\'\i}s and Saur{\'\i}, Roser", editor = "Gaspari, Federico and Moorkens, Joss and Aldabe, Itziar and Farwell, Aritz and Altuna, Begona and Piperidis, Stelios and Rehm, Georg and Rigau, German", booktitle = "Proceedings of the Second International Workshop Towards Digital Language Equality (TDLE): Focusing on Sustainability @ LREC-COLING 2024", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.tdle-1.3", pages = "33--42", abstract = "This paper presents a set of experiments on fine-tuning LLMs to produce high-precision semantic representations for the NLU component of a dialog system front-end. The aim of this research is threefold: First, we want to explore the capabilities of LLMs on real, industry-based use cases that involve complex data and strict requirements on results. Since the LLM output should usable by the application back-end, the produced semantic representation must satisfy strict format and consistency requirements. Second, we want to evaluate the cost-benefit of open-source LLMs, that is, the feasibility of running this kind of models in machines affordable to small-medium enterprises (SMEs), in order to assess how far this organizations can go without depending on the large players controlling the market, and with a moderate use of computation resources. Finally, we also want to assess the language scalability of the LLMs in this kind of applications; specifically, whether a multilingual model is able to cast patterns learnt from one language to other ones {--}with special attention to underresourced languages{--}, thus reducing required training data and computation costs. This work was carried out within an R{\&}D context of assisting a real company in defining its NLU model strategy, and thus the results have a practical, industry-level focus.", }
This paper presents a set of experiments on fine-tuning LLMs to produce high-precision semantic representations for the NLU component of a dialog system front-end. The aim of this research is threefold: First, we want to explore the capabilities of LLMs on real, industry-based use cases that involve complex data and strict requirements on results. Since the LLM output should be usable by the application back-end, the produced semantic representation must satisfy strict format and consistency requirements. Second, we want to evaluate the cost-benefit of open-source LLMs, that is, the feasibility of running this kind of model on machines affordable to small and medium-sized enterprises (SMEs), in order to assess how far these organizations can go without depending on the large players controlling the market, and with a moderate use of computational resources. Finally, we also want to assess the language scalability of LLMs in this kind of application; specifically, whether a multilingual model is able to transfer patterns learnt from one language to other languages, with special attention to under-resourced languages, thus reducing the required training data and computation costs. This work was carried out within an R&D context of assisting a real company in defining its NLU model strategy, and thus the results have a practical, industry-level focus.
[ "Padr{\\'o}, Llu{\\'\\i}s", "Saur{\\'\\i}, Roser" ]
Fine-Tuning Open Access LLMs for High-Precision NLU in Goal-Driven Dialog Systems
tdle-1.3
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.tdle-1.4.bib
https://aclanthology.org/2024.tdle-1.4/
@inproceedings{diandaru-etal-2024-better, title = "Could We Have Had Better Multilingual {LLM}s if {E}nglish Was Not the Central Language?", author = "Diandaru, Ryandito and Susanto, Lucky and Tang, Zilu and Purwarianti, Ayu and Wijaya, Derry Tanti", editor = "Gaspari, Federico and Moorkens, Joss and Aldabe, Itziar and Farwell, Aritz and Altuna, Begona and Piperidis, Stelios and Rehm, Georg and Rigau, German", booktitle = "Proceedings of the Second International Workshop Towards Digital Language Equality (TDLE): Focusing on Sustainability @ LREC-COLING 2024", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.tdle-1.4", pages = "43--52", abstract = "Large Language Models (LLMs) demonstrate strong machine translation capabilities on languages they are trained on. However, the impact of factors beyond training data size on translation performance remains a topic of debate, especially concerning languages not directly encountered during training. Our study delves into Llama2{'}s translation capabilities. By modeling a linear relationship between linguistic feature distances and machine translation scores, we ask ourselves if there are potentially better central languages for LLMs other than English. Our experiments show that the 7B Llama2 model yields above 10 BLEU when translating into all languages it has seen, which rarely happens for languages it has not seen. Most translation improvements into unseen languages come from scaling up the model size rather than instruction tuning or increasing shot count. Furthermore, our correlation analysis reveals that syntactic similarity is not the only linguistic factor that strongly correlates with machine translation scores. Interestingly, we discovered that under specific circumstances, some languages (e.g. Swedish, Catalan), despite having significantly less training data, exhibit comparable correlation levels to English. These insights challenge the prevailing landscape of LLMs, suggesting that models centered around languages other than English could provide a more efficient foundation for multilingual applications.", }
Large Language Models (LLMs) demonstrate strong machine translation capabilities on languages they are trained on. However, the impact of factors beyond training data size on translation performance remains a topic of debate, especially concerning languages not directly encountered during training. Our study delves into Llama2's translation capabilities. By modeling a linear relationship between linguistic feature distances and machine translation scores, we ask ourselves if there are potentially better central languages for LLMs other than English. Our experiments show that the 7B Llama2 model yields above 10 BLEU when translating into all languages it has seen, which rarely happens for languages it has not seen. Most translation improvements into unseen languages come from scaling up the model size rather than instruction tuning or increasing shot count. Furthermore, our correlation analysis reveals that syntactic similarity is not the only linguistic factor that strongly correlates with machine translation scores. Interestingly, we discovered that under specific circumstances, some languages (e.g. Swedish, Catalan), despite having significantly less training data, exhibit comparable correlation levels to English. These insights challenge the prevailing landscape of LLMs, suggesting that models centered around languages other than English could provide a more efficient foundation for multilingual applications.
[ "Di", "aru, Ry", "ito", "Susanto, Lucky", "Tang, Zilu", "Purwarianti, Ayu", "Wijaya, Derry Tanti" ]
Could We Have Had Better Multilingual LLMs if English Was Not the Central Language?
tdle-1.4
Poster
2402.13917
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.tdle-1.5.bib
https://aclanthology.org/2024.tdle-1.5/
@inproceedings{filevich-etal-2024-language, title = "A Language Model Trained on Uruguayan {S}panish News Text", author = "Filevich, Juan Pablo and Marco, Gonzalo and Castro, Santiago and Chiruzzo, Luis and Ros{\'a}, Aiala", editor = "Gaspari, Federico and Moorkens, Joss and Aldabe, Itziar and Farwell, Aritz and Altuna, Begona and Piperidis, Stelios and Rehm, Georg and Rigau, German", booktitle = "Proceedings of the Second International Workshop Towards Digital Language Equality (TDLE): Focusing on Sustainability @ LREC-COLING 2024", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.tdle-1.5", pages = "53--60", abstract = "This paper presents a language model trained from scratch exclusively on a brand new corpus consisting of about 6 GiB of Uruguayan newspaper text. We trained the model for 30 days on a single Nvidia P100 using the RoBERTa-base architecture but with considerably fewer parameters than other standard RoBERTa models. We evaluated the model on two NLP tasks and found that it outperforms BETO, the widely used Spanish BERT pre-trained model. We also compared our model on the masked-word prediction task with two popular multilingual BERT-based models, Multilingual BERT and XLM-RoBERTa, obtaining outstanding results on sentences from the Uruguayan press domain. Our experiments show that training a language model on a domain-specific corpus can significantly improve performance even when the model is smaller and was trained with significantly less data than more standard pre-trained models.", }
This paper presents a language model trained from scratch exclusively on a brand new corpus consisting of about 6 GiB of Uruguayan newspaper text. We trained the model for 30 days on a single Nvidia P100 using the RoBERTa-base architecture but with considerably fewer parameters than other standard RoBERTa models. We evaluated the model on two NLP tasks and found that it outperforms BETO, the widely used Spanish BERT pre-trained model. We also compared our model on the masked-word prediction task with two popular multilingual BERT-based models, Multilingual BERT and XLM-RoBERTa, obtaining outstanding results on sentences from the Uruguayan press domain. Our experiments show that training a language model on a domain-specific corpus can significantly improve performance even when the model is smaller and was trained with significantly less data than more standard pre-trained models.
[ "Filevich, Juan Pablo", "Marco, Gonzalo", "Castro, Santiago", "Chiruzzo, Luis", "Ros{\\'a}, Aiala" ]
A Language Model Trained on Uruguayan Spanish News Text
tdle-1.5
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.tdle-1.6.bib
https://aclanthology.org/2024.tdle-1.6/
@inproceedings{marmol-romero-etal-2024-environmental, title = "Environmental Impact Measurement in the {M}ental{R}isk{ES} Evaluation Campaign", author = "M{\'a}rmol Romero, Alba M. and Moreno-Mu{\~n}oz, Adri{\'a}n and Plaza-del-Arco, Flor Miriam and Molina Gonz{\'a}lez, M. Dolores and Montejo-R{\'a}ez, Arturo", editor = "Gaspari, Federico and Moorkens, Joss and Aldabe, Itziar and Farwell, Aritz and Altuna, Begona and Piperidis, Stelios and Rehm, Georg and Rigau, German", booktitle = "Proceedings of the Second International Workshop Towards Digital Language Equality (TDLE): Focusing on Sustainability @ LREC-COLING 2024", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.tdle-1.6", pages = "61--72", abstract = "With the rise of Large Language Models (LLMs), the NLP community is increasingly aware of the environmental consequences of model development due to the energy consumed for training and running these models. This study investigates the energy consumption and environmental impact of systems participating in the MentalRiskES shared task, at the Iberian Language Evaluation Forum (IberLEF) in the year 2023, which focuses on early risk identification of mental disorders in Spanish comments. Participants were asked to submit, for each prediction, a set of efficiency metrics, being carbon dioxide emissions among them. We conduct an empirical analysis of the data submitted considering model architecture, task complexity, and dataset characteristics, covering a spectrum from traditional Machine Learning (ML) models to advanced LLMs. Our findings contribute to understanding the ecological footprint of NLP systems and advocate for prioritizing environmental impact assessment in shared tasks to foster sustainability across diverse model types and approaches, being evaluation campaigns an adequate framework for this kind of analysis.", }
With the rise of Large Language Models (LLMs), the NLP community is increasingly aware of the environmental consequences of model development due to the energy consumed for training and running these models. This study investigates the energy consumption and environmental impact of systems participating in the MentalRiskES shared task at the Iberian Language Evaluation Forum (IberLEF) 2023, which focuses on early risk identification of mental disorders in Spanish comments. Participants were asked to submit, for each prediction, a set of efficiency metrics, among them carbon dioxide emissions. We conduct an empirical analysis of the submitted data considering model architecture, task complexity, and dataset characteristics, covering a spectrum from traditional Machine Learning (ML) models to advanced LLMs. Our findings contribute to understanding the ecological footprint of NLP systems and advocate for prioritizing environmental impact assessment in shared tasks to foster sustainability across diverse model types and approaches, with evaluation campaigns providing an adequate framework for this kind of analysis.
[ "M{\\'a}rmol Romero, Alba M.", "Moreno-Mu{\\~n}oz, Adri{\\'a}n", "Plaza-del-Arco, Flor Miriam", "Molina Gonz{\\'a}lez, M. Dolores", "Montejo-R{\\'a}ez, Arturo" ]
Environmental Impact Measurement in the MentalRiskES Evaluation Campaign
tdle-1.6
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.trac-1.1.bib
https://aclanthology.org/2024.trac-1.1/
@inproceedings{tufa-etal-2024-constant, title = "The Constant in {HATE}: Toxicity in {R}eddit across Topics and Languages", author = "Tufa, Wondimagegnhue Tsegaye and Markov, Ilia and Vossen, Piek T.J.M.", editor = "Kumar, Ritesh and Ojha, Atul Kr. and Malmasi, Shervin and Chakravarthi, Bharathi Raja and Lahiri, Bornini and Singh, Siddharth and Ratan, Shyam", booktitle = "Proceedings of the Fourth Workshop on Threat, Aggression {\&} Cyberbullying @ LREC-COLING-2024", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.trac-1.1", pages = "1--11", abstract = "Toxic language remains an ongoing challenge on social media platforms, presenting significant issues for users and communities. This paper provides a cross-topic and cross-lingual analysis of toxicity in Reddit conversations. We collect 1.5 million comment threads from 481 communities in six languages. By aligning languages with topics, we thoroughly analyze how toxicity spikes within different communities. Our analysis targets six languages spanning different communities and topics such as Culture, Politics, and News. We observe consistent patterns across languages where toxicity increases within the same topics while also identifying significant differences where specific language communities exhibit notable variations in relation to certain topics.", }
Toxic language remains an ongoing challenge on social media platforms, presenting significant issues for users and communities. This paper provides a cross-topic and cross-lingual analysis of toxicity in Reddit conversations. We collect 1.5 million comment threads from 481 communities in six languages. By aligning languages with topics, we thoroughly analyze how toxicity spikes within different communities. Our analysis targets six languages spanning different communities and topics such as Culture, Politics, and News. We observe consistent patterns across languages where toxicity increases within the same topics while also identifying significant differences where specific language communities exhibit notable variations in relation to certain topics.
[ "Tufa, Wondimagegnhue Tsegaye", "Markov, Ilia", "Vossen, Piek T.J.M." ]
The Constant in HATE: Toxicity in Reddit across Topics and Languages
trac-1.1
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.trac-1.2.bib
https://aclanthology.org/2024.trac-1.2/
@inproceedings{zampieri-etal-2024-federated, title = "A Federated Learning Approach to Privacy Preserving Offensive Language Identification", author = "Zampieri, Marcos and Premasiri, Damith and Ranasinghe, Tharindu", editor = "Kumar, Ritesh and Ojha, Atul Kr. and Malmasi, Shervin and Chakravarthi, Bharathi Raja and Lahiri, Bornini and Singh, Siddharth and Ratan, Shyam", booktitle = "Proceedings of the Fourth Workshop on Threat, Aggression {\&} Cyberbullying @ LREC-COLING-2024", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.trac-1.2", pages = "12--20", abstract = "The spread of various forms of offensive speech online is an important concern in social media. While platforms have been investing heavily in ways of coping with this problem, the question of privacy remains largely unaddressed. Models trained to detect offensive language on social media are trained and/or fine-tuned using large amounts of data often stored in centralized servers. Since most social media data originates from end users, we propose a privacy preserving decentralized architecture for identifying offensive language online by introducing Federated Learning (FL) in the context of offensive language identification. FL is a decentralized architecture that allows multiple models to be trained locally without the need for data sharing hence preserving users{'} privacy. We propose a model fusion approach to perform FL. We trained multiple deep learning models on four publicly available English benchmark datasets (AHSD, HASOC, HateXplain, OLID) and evaluated their performance in detail. We also present initial cross-lingual experiments in English and Spanish. We show that the proposed model fusion approach outperforms baselines in all the datasets while preserving privacy.", }
The spread of various forms of offensive speech online is an important concern in social media. While platforms have been investing heavily in ways of coping with this problem, the question of privacy remains largely unaddressed. Models trained to detect offensive language on social media are trained and/or fine-tuned using large amounts of data often stored in centralized servers. Since most social media data originates from end users, we propose a privacy-preserving decentralized architecture for identifying offensive language online by introducing Federated Learning (FL) in the context of offensive language identification. FL is a decentralized architecture that allows multiple models to be trained locally without the need for data sharing, hence preserving users' privacy. We propose a model fusion approach to perform FL. We trained multiple deep learning models on four publicly available English benchmark datasets (AHSD, HASOC, HateXplain, OLID) and evaluated their performance in detail. We also present initial cross-lingual experiments in English and Spanish. We show that the proposed model fusion approach outperforms baselines on all the datasets while preserving privacy.
[ "Zampieri, Marcos", "Premasiri, Damith", "Ranasinghe, Tharindu" ]
A Federated Learning Approach to Privacy Preserving Offensive Language Identification
trac-1.2
Poster
2404.11470
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.trac-1.3.bib
https://aclanthology.org/2024.trac-1.3/
@inproceedings{wang-markov-2024-cltl-harmpot, title = "{CLTL}@{H}arm{P}ot-{ID}: Leveraging Transformer Models for Detecting Offline Harm Potential and Its Targets in Low-Resource Languages", author = "Wang, Yeshan and Markov, Ilia", editor = "Kumar, Ritesh and Ojha, Atul Kr. and Malmasi, Shervin and Chakravarthi, Bharathi Raja and Lahiri, Bornini and Singh, Siddharth and Ratan, Shyam", booktitle = "Proceedings of the Fourth Workshop on Threat, Aggression {\&} Cyberbullying @ LREC-COLING-2024", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.trac-1.3", pages = "21--26", abstract = "We present the winning approach to the TRAC 2024 Shared Task on Offline Harm Potential Identification (HarmPot-ID). The task focused on low-resource Indian languages and consisted of two sub-tasks: 1a) predicting the offline harm potential and 1b) detecting the most likely target(s) of the offline harm. We explored low-source domain specific, cross-lingual, and monolingual transformer models and submitted the aggregate predictions from the MuRIL and BERT models. Our approach achieved 0.74 micro-averaged F1-score for sub-task 1a and 0.96 for sub-task 1b, securing the 1st rank for both sub-tasks in the competition.", }
We present the winning approach to the TRAC 2024 Shared Task on Offline Harm Potential Identification (HarmPot-ID). The task focused on low-resource Indian languages and consisted of two sub-tasks: 1a) predicting the offline harm potential and 1b) detecting the most likely target(s) of the offline harm. We explored low-resource domain-specific, cross-lingual, and monolingual transformer models and submitted the aggregate predictions from the MuRIL and BERT models. Our approach achieved a 0.74 micro-averaged F1 score for sub-task 1a and 0.96 for sub-task 1b, securing the 1st rank for both sub-tasks in the competition.
[ "Wang, Yeshan", "Markov, Ilia" ]
CLTL@HarmPot-ID: Leveraging Transformer Models for Detecting Offline Harm Potential and Its Targets in Low-Resource Languages
trac-1.3
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.trac-1.4.bib
https://aclanthology.org/2024.trac-1.4/
@inproceedings{wang-etal-2024-njust, title = "{NJUST}-{KMG} at {TRAC}-2024 Tasks 1 and 2: Offline Harm Potential Identification", author = "Wang, Jingyuan and Depp, Jack and Yang, Yang", editor = "Kumar, Ritesh and Ojha, Atul Kr. and Malmasi, Shervin and Chakravarthi, Bharathi Raja and Lahiri, Bornini and Singh, Siddharth and Ratan, Shyam", booktitle = "Proceedings of the Fourth Workshop on Threat, Aggression {\&} Cyberbullying @ LREC-COLING-2024", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.trac-1.4", pages = "27--31", abstract = "This report provide a detailed description of the method that we proposed in the TRAC-2024 Offline Harm Potential dentification which encloses two sub-tasks. The investigation utilized a rich dataset comprised of social media comments in several Indian languages, annotated with precision by expert judges to capture the nuanced implications for offline context harm. The objective assigned to the participants was to design algorithms capable of accurately assessing the likelihood of harm in given situations and identifying the most likely target(s) of offline harm. Our approach ranked second in two separate tracks, with F1 values of 0.73 and 0.96 respectively. Our method principally involved selecting pretrained models for finetuning, incorporating contrastive learning techniques, and culminating in an ensemble approach for the test set.", }
This report provides a detailed description of the method that we proposed for the TRAC-2024 Offline Harm Potential Identification task, which encloses two sub-tasks. The investigation utilized a rich dataset comprising social media comments in several Indian languages, annotated with precision by expert judges to capture the nuanced implications for offline context harm. The objective assigned to the participants was to design algorithms capable of accurately assessing the likelihood of harm in given situations and identifying the most likely target(s) of offline harm. Our approach ranked second in two separate tracks, with F1 values of 0.73 and 0.96, respectively. Our method principally involved selecting pretrained models for fine-tuning, incorporating contrastive learning techniques, and culminating in an ensemble approach for the test set.
[ "Wang, Jingyuan", "Depp, Jack", "Yang, Yang" ]
NJUST-KMG at TRAC-2024 Tasks 1 and 2: Offline Harm Potential Identification
trac-1.4
Poster
2403.19713
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.trac-1.5.bib
https://aclanthology.org/2024.trac-1.5/
@inproceedings{h-c-etal-2024-scalarlab, title = "{S}calar{L}ab@{TRAC}2024: Exploring Machine Learning Techniques for Identifying Potential Offline Harm in Multilingual Commentaries", author = "H C, Anagha and Krishna, Saatvik M. and Jha, Soumya Sangam and Rao, Vartika T. and M, Anand Kumar", editor = "Kumar, Ritesh and Ojha, Atul Kr. and Malmasi, Shervin and Chakravarthi, Bharathi Raja and Lahiri, Bornini and Singh, Siddharth and Ratan, Shyam", booktitle = "Proceedings of the Fourth Workshop on Threat, Aggression {\&} Cyberbullying @ LREC-COLING-2024", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.trac-1.5", pages = "32--36", abstract = "The objective of the shared task, Offline Harm Potential Identification (HarmPot-ID), is to build models to predict the offline harm potential of social media texts. {``}Harm potential{''} is defined as the ability of an online post or comment to incite offline physical harm such as murder, arson, riot, rape, etc. The first subtask was to predict the level of harm potential, and the second was to identify the group to which this harm was directed towards. This paper details our submissions for the shared task that includes a cascaded SVM model, an XGBoost model, and a TF-IDF weighted Word2Vec embedding-supported SVM model. Several other models that were explored have also been detailed.", }
The objective of the shared task, Offline Harm Potential Identification (HarmPot-ID), is to build models to predict the offline harm potential of social media texts. “Harm potential” is defined as the ability of an online post or comment to incite offline physical harm such as murder, arson, riot, rape, etc. The first subtask was to predict the level of harm potential, and the second was to identify the group towards which this harm was directed. This paper details our submissions for the shared task, which include a cascaded SVM model, an XGBoost model, and a TF-IDF weighted Word2Vec embedding-supported SVM model. Several other models that were explored are also detailed.
[ "H C, Anagha", "Krishna, Saatvik M.", "Jha, Soumya Sangam", "Rao, Vartika T.", "M, An", "Kumar" ]
ScalarLab@TRAC2024: Exploring Machine Learning Techniques for Identifying Potential Offline Harm in Multilingual Commentaries
trac-1.5
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.trac-1.6.bib
https://aclanthology.org/2024.trac-1.6/
@inproceedings{kruschwitz-schmidhuber-2024-llm, title = "{LLM}-Based Synthetic Datasets: Applications and Limitations in Toxicity Detection", author = "Kruschwitz, Udo and Schmidhuber, Maximilian", editor = "Kumar, Ritesh and Ojha, Atul Kr. and Malmasi, Shervin and Chakravarthi, Bharathi Raja and Lahiri, Bornini and Singh, Siddharth and Ratan, Shyam", booktitle = "Proceedings of the Fourth Workshop on Threat, Aggression {\&} Cyberbullying @ LREC-COLING-2024", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.trac-1.6", pages = "37--51", abstract = "Large Language Model (LLM)-based Synthetic Data is becoming an increasingly important field of research. One of its promising application is in training classifiers to detect online toxicity, which is of increasing concern in today{'}s digital landscape. In this work, we assess the feasibility of generative models to generate synthetic data for toxic speech detection. Our experiments are conducted on six different toxicity datasets, four of whom are hateful and two are toxic in the broader sense. We then employ a classifier trained on the original data for filtering. To explore the potential of this data, we conduct experiments using combinations of original and synthetic data, synthetic oversampling of the minority class, and a comparison of original vs. synthetic-only training. Results indicate that while our generative models offer benefits in certain scenarios, it does not improve hateful dataset classification. However, it does boost patronizing and condescending language detection. We find that synthetic data generated by LLMs is a promising avenue of research, but further research is needed to improve the quality of the generated data and develop better filtering methods. Code is available on GitHub; the generated dataset will be available on Zenodo in the final submission.", }
Large Language Model (LLM)-based synthetic data is becoming an increasingly important field of research. One of its promising applications is in training classifiers to detect online toxicity, which is of increasing concern in today's digital landscape. In this work, we assess the feasibility of using generative models to generate synthetic data for toxic speech detection. Our experiments are conducted on six different toxicity datasets, four of which are hateful and two toxic in the broader sense. We then employ a classifier trained on the original data for filtering. To explore the potential of this data, we conduct experiments using combinations of original and synthetic data, synthetic oversampling of the minority class, and a comparison of original vs. synthetic-only training. Results indicate that while our generative models offer benefits in certain scenarios, they do not improve hateful dataset classification. However, they do boost patronizing and condescending language detection. We find that synthetic data generated by LLMs is a promising avenue of research, but further research is needed to improve the quality of the generated data and to develop better filtering methods. Code is available on GitHub; the generated dataset will be available on Zenodo in the final submission.
[ "Kruschwitz, Udo", "Schmidhuber, Maximilian" ]
LLM-Based Synthetic Datasets: Applications and Limitations in Toxicity Detection
trac-1.6
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.trac-1.7.bib
https://aclanthology.org/2024.trac-1.7/
@inproceedings{guo-gauch-2024-using, title = "Using Sarcasm to Improve Cyberbullying Detection", author = "Guo, Xiaoyu and Gauch, Susan", editor = "Kumar, Ritesh and Ojha, Atul Kr. and Malmasi, Shervin and Chakravarthi, Bharathi Raja and Lahiri, Bornini and Singh, Siddharth and Ratan, Shyam", booktitle = "Proceedings of the Fourth Workshop on Threat, Aggression {\&} Cyberbullying @ LREC-COLING-2024", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.trac-1.7", pages = "52--59", abstract = "Cyberbullying has become more prevalent over time, especially towards minority groups, and online human moderators cannot detect cyberbullying content efficiently. Prior work has addressed this problem by detecting cyberbullying with deep learning approaches. In this project, we compare several BERT-based benchmark methods for cyberbullying detection and do a failure analysis to see where the model fails to correctly identify cyberbullying. We find that many falsely classified texts are sarcastic, so we propose a method to mitigate the false classifications by incorporating neural network-based sarcasm detection. We define a simple multilayer perceptron (MLP) that incorpo- rates sarcasm detection in the final cyberbully classifications and demonstrate improvement over benchmark methods.", }
Cyberbullying has become more prevalent over time, especially towards minority groups, and online human moderators cannot detect cyberbullying content efficiently. Prior work has addressed this problem by detecting cyberbullying with deep learning approaches. In this project, we compare several BERT-based benchmark methods for cyberbullying detection and conduct a failure analysis to see where the model fails to correctly identify cyberbullying. We find that many falsely classified texts are sarcastic, so we propose a method to mitigate the false classifications by incorporating neural network-based sarcasm detection. We define a simple multilayer perceptron (MLP) that incorporates sarcasm detection in the final cyberbullying classification and demonstrate improvement over benchmark methods.
[ "Guo, Xiaoyu", "Gauch, Susan" ]
Using Sarcasm to Improve Cyberbullying Detection
trac-1.7
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.trac-1.8.bib
https://aclanthology.org/2024.trac-1.8/
@inproceedings{weissenbacher-kruschwitz-2024-analyzing, title = "Analyzing Offensive Language and Hate Speech in Political Discourse: A Case Study of {G}erman Politicians", author = "Weissenbacher, Maximilian and Kruschwitz, Udo", editor = "Kumar, Ritesh and Ojha, Atul Kr. and Malmasi, Shervin and Chakravarthi, Bharathi Raja and Lahiri, Bornini and Singh, Siddharth and Ratan, Shyam", booktitle = "Proceedings of the Fourth Workshop on Threat, Aggression {\&} Cyberbullying @ LREC-COLING-2024", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.trac-1.8", pages = "60--72", abstract = "Social media platforms have become key players in political discourse. Twitter (now {`}X{'}), for example, is used by many German politicians to communicate their views and interact with others. Due to its nature, however, social networks suffer from a number of issues such as offensive content, toxic language and hate speech. This has attracted a lot of research interest but in the context of political discourse there is a noticeable gap with no such study specifically looking at German politicians in a systematic way. We aim to help addressing this gap. We first create an annotated dataset of 1,197 Twitter posts mentioning German politicians. This is the basis to explore a number of approaches to detect hate speech and offensive language (HOF) and identify an ensemble of transformer models that achieves an F1-Macros score of 0.94. This model is then used to automatically classify two much larger, longitudinal datasets: one with 520,000 tweets posted by MPs, and the other with 2,200,000 tweets which comprise posts from the public mentioning politicians. We obtain interesting insights in regards to the distribution of hate and offensive content when looking at different independent variables.", }
Social media platforms have become key players in political discourse. Twitter (now 'X'), for example, is used by many German politicians to communicate their views and interact with others. Due to its nature, however, social networks suffer from a number of issues such as offensive content, toxic language and hate speech. This has attracted a lot of research interest, but in the context of political discourse there is a noticeable gap, with no such study specifically looking at German politicians in a systematic way. We aim to help address this gap. We first create an annotated dataset of 1,197 Twitter posts mentioning German politicians. This is the basis for exploring a number of approaches to detect hate speech and offensive language (HOF) and identifying an ensemble of transformer models that achieves a macro-F1 score of 0.94. This model is then used to automatically classify two much larger, longitudinal datasets: one with 520,000 tweets posted by MPs, and the other with 2,200,000 tweets comprising posts from the public mentioning politicians. We obtain interesting insights with regard to the distribution of hate and offensive content when looking at different independent variables.
[ "Weissenbacher, Maximilian", "Kruschwitz, Udo" ]
Analyzing Offensive Language and Hate Speech in Political Discourse: A Case Study of German Politicians
trac-1.8
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.trac-1.9.bib
https://aclanthology.org/2024.trac-1.9/
@inproceedings{fridriksdottir-etal-2024-ice, title = "Ice and Fire: Dataset on Sentiment, Emotions, Toxicity, Sarcasm, Hate speech, Sympathy and More in {I}celandic Blog Comments", author = "Fri{\dh}riksd{\'o}ttir, Steinunn Rut and Simonsen, Annika and {\'A}smundsson, Atli Sn{\ae}r and Fri{\dh}j{\'o}nsd{\'o}ttir, Gu{\dh}r{\'u}n Lilja and Ingason, Anton Karl and Sn{\ae}bjarnarson, V{\'e}steinn and Einarsson, Hafsteinn", editor = "Kumar, Ritesh and Ojha, Atul Kr. and Malmasi, Shervin and Chakravarthi, Bharathi Raja and Lahiri, Bornini and Singh, Siddharth and Ratan, Shyam", booktitle = "Proceedings of the Fourth Workshop on Threat, Aggression {\&} Cyberbullying @ LREC-COLING-2024", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.trac-1.9", pages = "73--84", abstract = "This study introduces {``}Ice and Fire,{''} a Multi-Task Learning (MTL) dataset tailored for sentiment analysis in the Icelandic language, encompassing a wide range of linguistic tasks, including sentiment and emotion detection, as well as identification of toxicity, hate speech, encouragement, sympathy, sarcasm/irony, and trolling. With 261 fully annotated blog comments and 1045 comments annotated in at least one task, this contribution marks a significant step forward in the field of Icelandic natural language processing. It provides a comprehensive dataset for understanding the nuances of online communication in Icelandic and an interface to expand the annotation effort. Despite the challenges inherent in subjective interpretation of text, our findings highlight the positive potential of this dataset to improve text analysis techniques and encourage more inclusive online discourse in Icelandic communities. With promising baseline performances, {``}Ice and Fire{''} sets the stage for future research to enhance automated text analysis and develop sophisticated language technologies, contributing to healthier online environments and advancing Icelandic language resources.", }
This study introduces “Ice and Fire,” a Multi-Task Learning (MTL) dataset tailored for sentiment analysis in the Icelandic language, encompassing a wide range of linguistic tasks, including sentiment and emotion detection, as well as identification of toxicity, hate speech, encouragement, sympathy, sarcasm/irony, and trolling. With 261 fully annotated blog comments and 1,045 comments annotated in at least one task, this contribution marks a significant step forward in the field of Icelandic natural language processing. It provides a comprehensive dataset for understanding the nuances of online communication in Icelandic and an interface to expand the annotation effort. Despite the challenges inherent in subjective interpretation of text, our findings highlight the positive potential of this dataset to improve text analysis techniques and encourage more inclusive online discourse in Icelandic communities. With promising baseline performances, “Ice and Fire” sets the stage for future research to enhance automated text analysis and develop sophisticated language technologies, contributing to healthier online environments and advancing Icelandic language resources.
[ "Fri{\\dh}riksd{\\'o}ttir, Steinunn Rut", "Simonsen, Annika", "{\\'A}smundsson, Atli Sn{\\ae}r", "Fri{\\dh}j{\\'o}nsd{\\'o}ttir, Gu{\\dh}r{\\'u}n Lilja", "Ingason, Anton Karl", "Sn{\\ae}bjarnarson, V{\\'e}steinn", "Einarsson, Hafsteinn" ]
Ice and Fire: Dataset on Sentiment, Emotions, Toxicity, Sarcasm, Hate speech, Sympathy and More in Icelandic Blog Comments
trac-1.9
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.trac-1.10.bib
https://aclanthology.org/2024.trac-1.10/
@inproceedings{jigar-etal-2024-detecting, title = "Detecting Hate Speech in {A}mharic Using Multimodal Analysis of Social Media Memes", author = "Jigar, Melese Ayichlie and Ayele, Abinew Ali and Yimam, Seid Muhie and Biemann, Chris", editor = "Kumar, Ritesh and Ojha, Atul Kr. and Malmasi, Shervin and Chakravarthi, Bharathi Raja and Lahiri, Bornini and Singh, Siddharth and Ratan, Shyam", booktitle = "Proceedings of the Fourth Workshop on Threat, Aggression {\&} Cyberbullying @ LREC-COLING-2024", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.trac-1.10", pages = "85--95", abstract = "In contemporary society, the proliferation of hate speech is increasingly prevalent across various social media platforms, with a notable trend of incorporating memes to amplify its visual impact and reach. The conventional text-based detection approaches frequently fail to address the complexities introduced by memes, thereby aggravating the challenges, particularly in low-resource languages such as Amharic. We develop Amharic meme hate speech detection models using 2,000 memes collected from Facebook, Twitter, and Telegram over four months. We employ native Amharic speakers to annotate each meme using a web-based tool, yielding a Fleiss{'} kappa score of 0.50. We utilize different feature extraction techniques, namely VGG16 for images and word2Vec for textual content, and build unimodal and multimodal models such as LSTM, BiLSTM, and CNN. The BiLSTM model shows the best performance, achieving 63{\%} accuracy for text and 75{\%} for multimodal features. In image-only experiments, the CNN model achieves 69{\%} in accuracy. Multimodal models demonstrate superior performance in detecting Amharic hate speech in memes, showcasing their potential to address the unique challenges posed by meme-based hate speech on social media.", }
In contemporary society, the proliferation of hate speech is increasingly prevalent across various social media platforms, with a notable trend of incorporating memes to amplify its visual impact and reach. Conventional text-based detection approaches frequently fail to address the complexities introduced by memes, thereby aggravating the challenges, particularly in low-resource languages such as Amharic. We develop Amharic meme hate speech detection models using 2,000 memes collected from Facebook, Twitter, and Telegram over four months. We employ native Amharic speakers to annotate each meme using a web-based tool, yielding a Fleiss' kappa score of 0.50. We utilize different feature extraction techniques, namely VGG16 for images and Word2Vec for textual content, and build unimodal and multimodal models such as LSTM, BiLSTM, and CNN. The BiLSTM model shows the best performance, achieving 63% accuracy for text and 75% for multimodal features. In image-only experiments, the CNN model achieves 69% accuracy. Multimodal models demonstrate superior performance in detecting Amharic hate speech in memes, showcasing their potential to address the unique challenges posed by meme-based hate speech on social media.
[ "Jigar, Melese Ayichlie", "Ayele, Abinew Ali", "Yimam, Seid Muhie", "Biemann, Chris" ]
Detecting Hate Speech in Amharic Using Multimodal Analysis of Social Media Memes
trac-1.10
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.trac-1.11.bib
https://aclanthology.org/2024.trac-1.11/
@inproceedings{barbarestani-etal-2024-content, title = "Content Moderation in Online Platforms: A Study of Annotation Methods for Inappropriate Language", author = "Barbarestani, Baran and Maks, Isa and Vossen, Piek T.J.M.", editor = "Kumar, Ritesh and Ojha, Atul Kr. and Malmasi, Shervin and Chakravarthi, Bharathi Raja and Lahiri, Bornini and Singh, Siddharth and Ratan, Shyam", booktitle = "Proceedings of the Fourth Workshop on Threat, Aggression {\&} Cyberbullying @ LREC-COLING-2024", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.trac-1.11", pages = "96--104", abstract = "Detecting inappropriate language in online platforms is vital for maintaining a safe and respectful digital environment, especially in the context of hate speech prevention. However, defining what constitutes inappropriate language can be highly subjective and context-dependent, varying from person to person. This study presents the outcomes of a comprehensive examination of the subjectivity involved in assessing inappropriateness within conversational contexts. Different annotation methods, including expert annotation, crowd annotation, ChatGPT-generated annotation, and lexicon-based annotation, were applied to English Reddit conversations. The analysis revealed a high level of agreement across these annotation methods, with most disagreements arising from subjective interpretations of inappropriate language. This emphasizes the importance of implementing content moderation systems that not only recognize inappropriate content but also understand and adapt to diverse user perspectives and contexts. The study contributes to the evolving field of hate speech annotation by providing a detailed analysis of annotation differences in relation to the subjective task of judging inappropriate words in conversations.", }
Detecting inappropriate language in online platforms is vital for maintaining a safe and respectful digital environment, especially in the context of hate speech prevention. However, defining what constitutes inappropriate language can be highly subjective and context-dependent, varying from person to person. This study presents the outcomes of a comprehensive examination of the subjectivity involved in assessing inappropriateness within conversational contexts. Different annotation methods, including expert annotation, crowd annotation, ChatGPT-generated annotation, and lexicon-based annotation, were applied to English Reddit conversations. The analysis revealed a high level of agreement across these annotation methods, with most disagreements arising from subjective interpretations of inappropriate language. This emphasizes the importance of implementing content moderation systems that not only recognize inappropriate content but also understand and adapt to diverse user perspectives and contexts. The study contributes to the evolving field of hate speech annotation by providing a detailed analysis of annotation differences in relation to the subjective task of judging inappropriate words in conversations.
[ "Barbarestani, Baran", "Maks, Isa", "Vossen, Piek T.J.M." ]
Content Moderation in Online Platforms: A Study of Annotation Methods for Inappropriate Language
trac-1.11
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.trac-1.12.bib
https://aclanthology.org/2024.trac-1.12/
@inproceedings{brun-nikoulina-2024-frenchtoxicityprompts, title = "{F}rench{T}oxicity{P}rompts: a Large Benchmark for Evaluating and Mitigating Toxicity in {F}rench Texts", author = "Brun, Caroline and Nikoulina, Vassilina", editor = "Kumar, Ritesh and Ojha, Atul Kr. and Malmasi, Shervin and Chakravarthi, Bharathi Raja and Lahiri, Bornini and Singh, Siddharth and Ratan, Shyam", booktitle = "Proceedings of the Fourth Workshop on Threat, Aggression {\&} Cyberbullying @ LREC-COLING-2024", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.trac-1.12", pages = "105--114", abstract = "Large language models (LLMs) are increasingly popular but are also prone to generating bias, toxic or harmful language, which can have detrimental effects on individuals and communities. Although most efforts is put to assess and mitigate toxicity in generated content, it is primarily concentrated on English, while it{'}s essential to consider other languages as well. For addressing this issue, we create and release FrenchToxicityPrompts, a dataset of 50K naturally occurring French prompts and their continuations, annotated with toxicity scores from a widely used toxicity classifier. We evaluate 14 different models from four prevalent open-sourced families of LLMs against our dataset to assess their potential toxicity across various dimensions. We hope that our contribution will foster future research on toxicity detection and mitigation beyond English.", }
Large language models (LLMs) are increasingly popular but are also prone to generating biased, toxic or harmful language, which can have detrimental effects on individuals and communities. Although most effort is put into assessing and mitigating toxicity in generated content, it is primarily concentrated on English, while it is essential to consider other languages as well. To address this issue, we create and release FrenchToxicityPrompts, a dataset of 50K naturally occurring French prompts and their continuations, annotated with toxicity scores from a widely used toxicity classifier. We evaluate 14 different models from four prevalent open-source families of LLMs against our dataset to assess their potential toxicity across various dimensions. We hope that our contribution will foster future research on toxicity detection and mitigation beyond English. (A minimal scoring sketch follows this entry.)
[ "Brun, Caroline", "Nikoulina, Vassilina" ]
FrenchToxicityPrompts: a Large Benchmark for Evaluating and Mitigating Toxicity in French Texts
trac-1.12
Poster
2406.17566
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
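The continuation scoring described in the FrenchToxicityPrompts abstract above can be approximated with an off-the-shelf classifier. A minimal sketch, assuming the publicly available `unitary/multilingual-toxic-xlm-roberta` checkpoint; this is an illustrative choice, since the paper only says "a widely used toxicity classifier":

```python
from transformers import pipeline

# Illustrative checkpoint; the paper's exact classifier is not assumed here.
toxicity = pipeline(
    "text-classification",
    model="unitary/multilingual-toxic-xlm-roberta",
    top_k=None,  # return a score for every toxicity dimension
)

prompts = [
    "Quel beau temps aujourd'hui à Paris.",
    "Je déteste tous ces gens, ils devraient disparaître.",
]

for text in prompts:
    scores = toxicity(text)  # list of {label, score} dicts
    worst = max(scores, key=lambda s: s["score"])
    print(f"{worst['label']}={worst['score']:.3f}  {text}")
```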
https://aclanthology.org/2024.trac-1.13.bib
https://aclanthology.org/2024.trac-1.13/
@inproceedings{chierchiello-etal-2024-studying, title = "Studying Reactions to Stereotypes in Teenagers: an Annotated {I}talian Dataset", author = "Chierchiello, Elisa and Bourgeade, Tom and Ricci, Giacomo and Bosco, Cristina and D{'}Errico, Francesca", editor = "Kumar, Ritesh and Ojha, Atul Kr. and Malmasi, Shervin and Chakravarthi, Bharathi Raja and Lahiri, Bornini and Singh, Siddharth and Ratan, Shyam", booktitle = "Proceedings of the Fourth Workshop on Threat, Aggression {\&} Cyberbullying @ LREC-COLING-2024", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.trac-1.13", pages = "115--125", abstract = "The paper introduces a novel corpus collected in a set of experiments in Italian schools, annotated for the presence of stereotypes, and related categories. It consists of comments written by teenage students in reaction to fabricated fake news, designed to elicit prejudiced responses, by featuring racial stereotypes. We make use of an annotation scheme which takes into account the implicit or explicit nature of different instances of stereotypes, alongside their forms of discredit. We also annotate the stance of the commenter towards the news article, using a schema inspired by rumor and fake news stance detection tasks. Through this rarely studied setting, we provide a preliminary exploration of the production of stereotypes in a more controlled context. Alongside this novel dataset, we provide both quantitative and qualitative analyses of these reactions, to validate the categories used in their annotation. Through this work, we hope to increase the diversity of available data in the study of the propagation and the dynamics of negative stereotypes.", }
The paper introduces a novel corpus collected in a set of experiments in Italian schools, annotated for the presence of stereotypes and related categories. It consists of comments written by teenage students in reaction to fabricated fake news designed to elicit prejudiced responses by featuring racial stereotypes. We make use of an annotation scheme that takes into account the implicit or explicit nature of different instances of stereotypes, alongside their forms of discredit. We also annotate the stance of the commenter towards the news article, using a schema inspired by rumor and fake news stance detection tasks. Through this rarely studied setting, we provide a preliminary exploration of the production of stereotypes in a more controlled context. Alongside this novel dataset, we provide both quantitative and qualitative analyses of these reactions to validate the categories used in their annotation. Through this work, we hope to increase the diversity of available data in the study of the propagation and dynamics of negative stereotypes.
[ "Chierchiello, Elisa", "Bourgeade, Tom", "Ricci, Giacomo", "Bosco, Cristina", "D{'}Errico, Francesca" ]
Studying Reactions to Stereotypes in Teenagers: an Annotated Italian Dataset
trac-1.13
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.trac-1.14.bib
https://aclanthology.org/2024.trac-1.14/
@inproceedings{bauer-etal-2024-offensiveness, title = "Offensiveness, Hate, Emotion and {GPT}: Benchmarking {GPT}3.5 and {GPT}4 as Classifiers on {T}witter-specific Datasets", author = "Bauer, Nikolaj and Preisig, Moritz and Volk, Martin", editor = "Kumar, Ritesh and Ojha, Atul Kr. and Malmasi, Shervin and Chakravarthi, Bharathi Raja and Lahiri, Bornini and Singh, Siddharth and Ratan, Shyam", booktitle = "Proceedings of the Fourth Workshop on Threat, Aggression {\&} Cyberbullying @ LREC-COLING-2024", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.trac-1.14", pages = "126--133", abstract = "In this paper, we extend the work of benchmarking GPT by turning GPT models into classifiers and applying them on three different Twitter datasets on Hate-Speech Detection, Offensive Language Detection, and Emotion Classification. We use a Zero-Shot and Few-Shot approach to evaluate the classification capabilities of the GPT models. Our results show that GPT models do not always beat fine-tuned models on the tested benchmarks. However, in Hate-Speech and Emotion Detection, using a Few-Shot approach, state-of-the-art performance can be achieved. The results also reveal that GPT-4 is more sensitive to the examples given in a Few-Shot prompt, highlighting the importance of choosing fitting examples for inference and prompt formulation.", }
In this paper, we extend the work of benchmarking GPT by turning GPT models into classifiers and applying them to three different Twitter datasets for Hate-Speech Detection, Offensive Language Detection, and Emotion Classification. We use Zero-Shot and Few-Shot approaches to evaluate the classification capabilities of the GPT models. Our results show that GPT models do not always beat fine-tuned models on the tested benchmarks. However, in Hate-Speech and Emotion Detection, state-of-the-art performance can be achieved using a Few-Shot approach. The results also reveal that GPT-4 is more sensitive to the examples given in a Few-Shot prompt, highlighting the importance of choosing fitting examples for inference and prompt formulation. (A minimal prompting sketch follows this entry.)
[ "Bauer, Nikolaj", "Preisig, Moritz", "Volk, Martin" ]
Offensiveness, Hate, Emotion and GPT: Benchmarking GPT3.5 and GPT4 as Classifiers on Twitter-specific Datasets
trac-1.14
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
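A minimal sketch of the kind of zero-/few-shot classification benchmarked above, assuming the OpenAI Python client; the label set, demonstrations, and prompt wording are illustrative assumptions rather than the paper's exact setup:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

LABELS = ["hateful", "offensive", "neither"]  # illustrative label set
FEW_SHOT = [  # hand-picked demonstrations; the paper shows their choice matters
    ("I can't stand people like you, get lost.", "offensive"),
    ("Lovely weather in Zurich today!", "neither"),
]

def classify(tweet: str, model: str = "gpt-4") -> str:
    messages = [{
        "role": "system",
        "content": f"Classify the tweet as one of: {', '.join(LABELS)}. "
                   "Answer with the label only.",
    }]
    for text, label in FEW_SHOT:  # drop this loop for the zero-shot variant
        messages.append({"role": "user", "content": text})
        messages.append({"role": "assistant", "content": label})
    messages.append({"role": "user", "content": tweet})
    resp = client.chat.completions.create(model=model, messages=messages, temperature=0)
    return resp.choices[0].message.content.strip().lower()

print(classify("You people make me sick."))
```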
https://aclanthology.org/2024.trac-1.15.bib
https://aclanthology.org/2024.trac-1.15/
@inproceedings{williams-etal-2024-dodo, title = "{D}o{D}o Learning: Domain-Demographic Transfer in Language Models for Detecting Abuse Targeted at Public Figures", author = "Williams, Angus Redlarski and Kirk, Hannah Rose and Burke-Moore, Liam and Chung, Yi-Ling and Debono, Ivan and Johansson, Pica and Stevens, Francesca and Bright, Jonathan and Hale, Scott", editor = "Kumar, Ritesh and Ojha, Atul Kr. and Malmasi, Shervin and Chakravarthi, Bharathi Raja and Lahiri, Bornini and Singh, Siddharth and Ratan, Shyam", booktitle = "Proceedings of the Fourth Workshop on Threat, Aggression {\&} Cyberbullying @ LREC-COLING-2024", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.trac-1.15", pages = "134--154", abstract = "Public figures receive disproportionate levels of abuse on social media, impacting their active participation in public life. Automated systems can identify abuse at scale but labelling training data is expensive and potentially harmful. So, it is desirable that systems are efficient and generalisable, handling shared and specific aspects of abuse. We explore the dynamics of cross-group text classification in order to understand how well models trained on one domain or demographic can transfer to others, with a view to building more generalisable abuse classifiers. We fine-tune language models to classify tweets targeted at public figures using our novel DoDo dataset, containing 28,000 entries with fine-grained labels, split equally across four Domain-Demographic pairs (male and female footballers and politicians). We find that (i) small amounts of diverse data are hugely beneficial to generalisation and adaptation; (ii) models transfer more easily across demographics but cross-domain models are more generalisable; (iii) some groups contribute more to generalisability than others; and (iv) dataset similarity is a signal of transferability.", }
Public figures receive disproportionate levels of abuse on social media, impacting their active participation in public life. Automated systems can identify abuse at scale but labelling training data is expensive and potentially harmful. So, it is desirable that systems are efficient and generalisable, handling shared and specific aspects of abuse. We explore the dynamics of cross-group text classification in order to understand how well models trained on one domain or demographic can transfer to others, with a view to building more generalisable abuse classifiers. We fine-tune language models to classify tweets targeted at public figures using our novel DoDo dataset, containing 28,000 entries with fine-grained labels, split equally across four Domain-Demographic pairs (male and female footballers and politicians). We find that (i) small amounts of diverse data are hugely beneficial to generalisation and adaptation; (ii) models transfer more easily across demographics but cross-domain models are more generalisable; (iii) some groups contribute more to generalisability than others; and (iv) dataset similarity is a signal of transferability.
[ "Williams, Angus Redlarski", "Kirk, Hannah Rose", "Burke-Moore, Liam", "Chung, Yi-Ling", "Debono, Ivan", "Johansson, Pica", "Stevens, Francesca", "Bright, Jonathan", "Hale, Scott" ]
DoDo Learning: Domain-Demographic Transfer in Language Models for Detecting Abuse Targeted at Public Figures
trac-1.15
Poster
2307.16811
[ "https://github.com/turing-online-safety-codebase/dodo-learning" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.trac-1.16.bib
https://aclanthology.org/2024.trac-1.16/
@inproceedings{donabauer-etal-2024-empowering, title = "Empowering Users and Mitigating Harm: Leveraging Nudging Principles to Enhance Social Media Safety", author = "Donabauer, Gregor and Theophilou, Emily and Lomonaco, Francesco and Bursic, Sathya and Taibi, Davide and Hern{\'a}ndez-Leo, Davinia and Kruschwitz, Udo and Ognibene, Dimitri", editor = "Kumar, Ritesh and Ojha, Atul Kr. and Malmasi, Shervin and Chakravarthi, Bharathi Raja and Lahiri, Bornini and Singh, Siddharth and Ratan, Shyam", booktitle = "Proceedings of the Fourth Workshop on Threat, Aggression {\&} Cyberbullying @ LREC-COLING-2024", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.trac-1.16", pages = "155--166", abstract = "Social media have become an integral part of our daily lives, yet they have also resulted in various negative effects on users, ranging from offensive or hateful content to the spread of misinformation. In recent years, numerous automated approaches have been proposed to identify and combat such harmful content. However, it is crucial to recognize the human aspect of users who engage with this content in designing efforts to mitigate these threats. We propose to incorporate principles of behavioral science, specifically the concept of nudging into social media platforms. Our approach involves augmenting social media feeds with informative diagrams, which provide insights into the content that users are presented. The goal of our work is to empower social media users to make well-informed decisions for themselves and for others within these platforms. Nudges serve as a means to gently draw users{'} attention to content in an unintrusive manner, a crucial consideration in the context of social media. To evaluate the effectiveness of our approach, we conducted a user study involving 120 Italian-speaking participants who interacted with a social media interface augmented with these nudging diagrams. Participants who had used the augmented interface were able to outperform those using the plain interface in a successive harmful content detection test where nudging diagrams were not visible anymore. Our findings demonstrate that our approach significantly improves users{'} awareness of potentially harmful content with effects lasting beyond the duration of the interaction. In this work, we provide a comprehensive overview of our experimental materials and setup, present our findings, and refer to the limitations identified during our study.", }
Social media have become an integral part of our daily lives, yet they have also had various negative effects on users, ranging from offensive or hateful content to the spread of misinformation. In recent years, numerous automated approaches have been proposed to identify and combat such harmful content. However, it is crucial to recognize the human aspect of users who engage with this content when designing efforts to mitigate these threats. We propose to incorporate principles of behavioral science, specifically the concept of nudging, into social media platforms. Our approach involves augmenting social media feeds with informative diagrams, which provide insights into the content that users are presented with. The goal of our work is to empower social media users to make well-informed decisions for themselves and for others within these platforms. Nudges serve as a means to gently draw users{'} attention to content in an unintrusive manner, a crucial consideration in the context of social media. To evaluate the effectiveness of our approach, we conducted a user study involving 120 Italian-speaking participants who interacted with a social media interface augmented with these nudging diagrams. Participants who had used the augmented interface outperformed those using the plain interface in a subsequent harmful content detection test in which nudging diagrams were no longer visible. Our findings demonstrate that our approach significantly improves users{'} awareness of potentially harmful content, with effects lasting beyond the duration of the interaction. In this work, we provide a comprehensive overview of our experimental materials and setup, present our findings, and discuss the limitations identified during our study.
[ "Donabauer, Gregor", "Theophilou, Emily", "Lomonaco, Francesco", "Bursic, Sathya", "Taibi, Davide", "Hern{\\'a}ndez-Leo, Davinia", "Kruschwitz, Udo", "Ognibene, Dimitri" ]
Empowering Users and Mitigating Harm: Leveraging Nudging Principles to Enhance Social Media Safety
trac-1.16
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.trac-1.17.bib
https://aclanthology.org/2024.trac-1.17/
@inproceedings{ayele-etal-2024-exploring, title = "Exploring Boundaries and Intensities in Offensive and Hate Speech: Unveiling the Complex Spectrum of Social Media Discourse", author = "Ayele, Abinew Ali and Jalew, Esubalew Alemneh and Ali, Adem Chanie and Yimam, Seid Muhie and Biemann, Chris", editor = "Kumar, Ritesh and Ojha, Atul Kr. and Malmasi, Shervin and Chakravarthi, Bharathi Raja and Lahiri, Bornini and Singh, Siddharth and Ratan, Shyam", booktitle = "Proceedings of the Fourth Workshop on Threat, Aggression {\&} Cyberbullying @ LREC-COLING-2024", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.trac-1.17", pages = "167--178", abstract = "The prevalence of digital media and evolving sociopolitical dynamics have significantly amplified the dissemination of hateful content. Existing studies mainly focus on classifying texts into binary categories, often overlooking the continuous spectrum of offensiveness and hatefulness inherent in the text. In this research, we present an extensive benchmark dataset for Amharic, comprising 8,258 tweets annotated for three distinct tasks: category classification, identification of hate targets, and rating offensiveness and hatefulness intensities. Our study highlights that a considerable majority of tweets belong to the less offensive and less hate intensity levels, underscoring the need for early interventions by stakeholders. The prevalence of ethnic and political hatred targets, with significant overlaps in our dataset, emphasizes the complex relationships within Ethiopia{'}s sociopolitical landscape. We build classification and regression models and investigate the efficacy of models in handling these tasks. Our results reveal that hate and offensive speech can not be addressed by a simplistic binary classification, instead manifesting as variables across a continuous range of values. The afro-XLMR-large model exhibits the best performances achieving F1-scores of 75.30{\%}, 70.59{\%}, and 29.42{\%} for the category, target, and regression tasks, respectively. The 80.22{\%} correlation coefficient of the Afro-XLMR-large model indicates strong alignments.", }
The prevalence of digital media and evolving sociopolitical dynamics have significantly amplified the dissemination of hateful content. Existing studies mainly focus on classifying texts into binary categories, often overlooking the continuous spectrum of offensiveness and hatefulness inherent in the text. In this research, we present an extensive benchmark dataset for Amharic, comprising 8,258 tweets annotated for three distinct tasks: category classification, identification of hate targets, and rating offensiveness and hatefulness intensities. Our study highlights that a considerable majority of tweets belong to the less offensive and less hateful intensity levels, underscoring the need for early interventions by stakeholders. The prevalence of ethnic and political hatred targets, with significant overlaps in our dataset, emphasizes the complex relationships within Ethiopia{'}s sociopolitical landscape. We build classification and regression models and investigate the efficacy of the models in handling these tasks. Our results reveal that hate and offensive speech cannot be addressed by a simplistic binary classification, instead manifesting as variables across a continuous range of values. The Afro-XLMR-large model exhibits the best performance, achieving F1-scores of 75.30{\%}, 70.59{\%}, and 29.42{\%} for the category, target, and regression tasks, respectively. The 80.22{\%} correlation coefficient of the Afro-XLMR-large model indicates strong alignment. (A minimal regression sketch follows this entry.)
[ "Ayele, Abinew Ali", "Jalew, Esubalew Alemneh", "Ali, Adem Chanie", "Yimam, Seid Muhie", "Biemann, Chris" ]
Exploring Boundaries and Intensities in Offensive and Hate Speech: Unveiling the Complex Spectrum of Social Media Discourse
trac-1.17
Poster
2404.12042
[ "https://github.com/uhh-lt/amharichatespeech" ]
-1
-1
-1
-1
0
[]
[]
[]
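Rating offensiveness intensity, as in the paper above, is a scalar regression problem. A minimal sketch, assuming the public `Davlan/afro-xlmr-large` checkpoint and an illustrative target value; neither detail is confirmed as the paper's exact configuration:

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

name = "Davlan/afro-xlmr-large"  # public Afro-XLMR release; exact variant is an assumption
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(
    name,
    num_labels=1,               # single scalar output
    problem_type="regression",  # transformers then applies an MSE loss
)

batch = tok(["<Amharic tweet here>"], return_tensors="pt", truncation=True)
labels = torch.tensor([[0.7]])  # illustrative intensity target
out = model(**batch, labels=labels)
print(out.loss.item(), out.logits.squeeze().item())
```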
https://aclanthology.org/2024.unlp-1.1.bib
https://aclanthology.org/2024.unlp-1.1/
@inproceedings{fischer-etal-2024-contemporary, title = "A Contemporary News Corpus of {U}krainian ({CNC}-{UA}): Compilation, Annotation, Publication", author = {Fischer, Stefan and Haidarzhyi, Kateryna and Knappen, J{\"o}rg and Polishchuk, Olha and Stodolinska, Yuliya and Teich, Elke}, editor = "Romanyshyn, Mariana and Romanyshyn, Nataliia and Hlybovets, Andrii and Ignatenko, Oleksii", booktitle = "Proceedings of the Third Ukrainian Natural Language Processing Workshop (UNLP) @ LREC-COLING 2024", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.unlp-1.1", pages = "1--7", abstract = {We present a corpus of contemporary Ukrainian news articles published between 2019 and 2022 on the news website of the national public broadcaster of Ukraine, commonly known as SUSPILNE. The current release comprises 87 210 364 words in 292 955 texts. Texts are annotated with titles and their time of publication. In addition, the corpus has been linguistically annotated at the token level with a dependency parser. To provide further aspects for investigation, a topic model was trained on the corpus. The corpus is hosted (Fischer et al., 2023) at the Saarbr{\"u}cken CLARIN center under a CC BY-NC-ND 4.0 license and available in two tab-separated formats: CoNLL-U (de Marneffe et al., 2021) and vertical text format (VRT) as used by the IMS Open Corpus Workbench (CWB; Evert and Hardie, 2011) and CQPweb (Hardie, 2012). We show examples of using the CQPweb interface, which allows to extract the quantitative data necessary for distributional and collocation analyses of the CNC-UA. As the CNC-UA contains news texts documenting recent events, it is highly relevant not only for linguistic analyses of the modern Ukrainian language but also for socio-cultural and political studies.}, }
We present a corpus of contemporary Ukrainian news articles published between 2019 and 2022 on the news website of the national public broadcaster of Ukraine, commonly known as SUSPILNE. The current release comprises 87 210 364 words in 292 955 texts. Texts are annotated with titles and their time of publication. In addition, the corpus has been linguistically annotated at the token level with a dependency parser. To provide further aspects for investigation, a topic model was trained on the corpus. The corpus is hosted (Fischer et al., 2023) at the Saarbr{\"u}cken CLARIN center under a CC BY-NC-ND 4.0 license and available in two tab-separated formats: CoNLL-U (de Marneffe et al., 2021) and vertical text format (VRT) as used by the IMS Open Corpus Workbench (CWB; Evert and Hardie, 2011) and CQPweb (Hardie, 2012). We show examples of using the CQPweb interface, which makes it possible to extract the quantitative data necessary for distributional and collocation analyses of the CNC-UA. As the CNC-UA contains news texts documenting recent events, it is highly relevant not only for linguistic analyses of the modern Ukrainian language but also for socio-cultural and political studies. (A minimal CoNLL-U reading sketch follows this entry.)
[ "Fischer, Stefan", "Haidarzhyi, Kateryna", "Knappen, J{\\\"o}rg", "Polishchuk, Olha", "Stodolinska, Yuliya", "Teich, Elke" ]
A Contemporary News Corpus of Ukrainian (CNC-UA): Compilation, Annotation, Publication
unlp-1.1
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
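Since the corpus above is distributed in CoNLL-U, it can be read with the `conllu` package. A minimal sketch; the file name is a placeholder, as the actual distribution layout may differ:

```python
from conllu import parse_incr  # pip install conllu

with open("cnc-ua.conllu", encoding="utf-8") as f:  # placeholder file name
    for sentence in parse_incr(f):  # streams one sentence at a time
        for token in sentence:
            print(token["form"], token["upos"], token["head"], token["deprel"])
        break  # only the first sentence, for illustration
```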
https://aclanthology.org/2024.unlp-1.2.bib
https://aclanthology.org/2024.unlp-1.2/
@inproceedings{drushchak-romanyshyn-2024-introducing, title = "Introducing the Djinni Recruitment Dataset: A Corpus of Anonymized {CV}s and Job Postings", author = "Drushchak, Nazarii and Romanyshyn, Mariana", editor = "Romanyshyn, Mariana and Romanyshyn, Nataliia and Hlybovets, Andrii and Ignatenko, Oleksii", booktitle = "Proceedings of the Third Ukrainian Natural Language Processing Workshop (UNLP) @ LREC-COLING 2024", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.unlp-1.2", pages = "8--13", abstract = "This paper introduces the Djinni Recruitment Dataset, a large-scale open-source corpus of candidate profiles and job descriptions. With over 150,000 jobs and 230,000 candidates, the dataset includes samples in English and Ukrainian, thereby facilitating advancements in the recruitment domain of natural language processing (NLP) for both languages. It is one of the first open-source corpora in the recruitment domain, opening up new opportunities for AI-driven recruitment technologies and related fields. Notably, the dataset is accessible under the MIT license, encouraging widespread adoption for both scientific research and commercial projects.", }
This paper introduces the Djinni Recruitment Dataset, a large-scale open-source corpus of candidate profiles and job descriptions. With over 150,000 jobs and 230,000 candidates, the dataset includes samples in English and Ukrainian, thereby facilitating advancements in the recruitment domain of natural language processing (NLP) for both languages. It is one of the first open-source corpora in the recruitment domain, opening up new opportunities for AI-driven recruitment technologies and related fields. Notably, the dataset is accessible under the MIT license, encouraging widespread adoption for both scientific research and commercial projects.
[ "Drushchak, Nazarii", "Romanyshyn, Mariana" ]
Introducing the Djinni Recruitment Dataset: A Corpus of Anonymized CVs and Job Postings
unlp-1.2
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.unlp-1.3.bib
https://aclanthology.org/2024.unlp-1.3/
@inproceedings{shvedova-lukashevskyi-2024-creating, title = "Creating Parallel Corpora for {U}krainian: A {G}erman-{U}krainian Parallel Corpus ({P}ara{R}ook||{DE}-{UK})", author = "Shvedova, Maria and Lukashevskyi, Arsenii", editor = "Romanyshyn, Mariana and Romanyshyn, Nataliia and Hlybovets, Andrii and Ignatenko, Oleksii", booktitle = "Proceedings of the Third Ukrainian Natural Language Processing Workshop (UNLP) @ LREC-COLING 2024", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.unlp-1.3", pages = "14--22", abstract = "Parallel corpora are currently a popular and vibrantly developing category of linguistic resources, used both in literature and translation studies, as well as in the field of NLP. For Ukrainian, though, there are still not enough significant parallel corpora compiled within a single roof project and made available to the research community. In this paper we present a newly developed resource, the German-Ukrainian Parallel Corpus {---} ParaRook||DE-UK, searchable online. We describe various issues related to its compilation, text selection, and annotation. The paper also features several examples of how the corpus can be used in linguistic research and translation studies. Using the experience of the German-Ukrainian parallel corpus, parallel corpora for other languages with Ukrainian can be developed.", }
Parallel corpora are currently a popular and vibrantly developing category of linguistic resources, used both in literature and translation studies, as well as in the field of NLP. For Ukrainian, though, there are still not enough significant parallel corpora compiled within a single umbrella project and made available to the research community. In this paper we present a newly developed resource, the German-Ukrainian Parallel Corpus {---} ParaRook||DE-UK, searchable online. We describe various issues related to its compilation, text selection, and annotation. The paper also features several examples of how the corpus can be used in linguistic research and translation studies. Drawing on the experience of the German-Ukrainian parallel corpus, parallel corpora pairing Ukrainian with other languages can be developed.
[ "Shvedova, Maria", "Lukashevskyi, Arsenii" ]
Creating Parallel Corpora for Ukrainian: A German-Ukrainian Parallel Corpus (ParaRook||DE-UK)
unlp-1.3
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.unlp-1.4.bib
https://aclanthology.org/2024.unlp-1.4/
@inproceedings{chaplynskyi-romanyshyn-2024-introducing, title = "Introducing {NER}-{UK} 2.0: A Rich Corpus of Named Entities for {U}krainian", author = "Chaplynskyi, Dmytro and Romanyshyn, Mariana", editor = "Romanyshyn, Mariana and Romanyshyn, Nataliia and Hlybovets, Andrii and Ignatenko, Oleksii", booktitle = "Proceedings of the Third Ukrainian Natural Language Processing Workshop (UNLP) @ LREC-COLING 2024", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.unlp-1.4", pages = "23--29", abstract = "This paper presents NER-UK 2.0, a corpus of texts in the Ukrainian language manually annotated for the named entity recognition task. The corpus contains 560 texts of multiple genres, boasting 21,993 entities in total. The annotation scheme covers 13 entity types, namely location, person name, organization, artifact, document, job title, date, time, period, money, percentage, quantity, and miscellaneous. Such a rich set of entities makes the corpus valuable for training named-entity recognition models in various domains, including news, social media posts, legal documents, and procurement contracts. The paper presents an updated baseline solution for named entity recognition in Ukrainian with 0.89 F1. The corpus is the largest of its kind for the Ukrainian language and is available for download.", }
This paper presents NER-UK 2.0, a corpus of texts in the Ukrainian language manually annotated for the named entity recognition task. The corpus contains 560 texts of multiple genres, boasting 21,993 entities in total. The annotation scheme covers 13 entity types, namely location, person name, organization, artifact, document, job title, date, time, period, money, percentage, quantity, and miscellaneous. Such a rich set of entities makes the corpus valuable for training named entity recognition models in various domains, including news, social media posts, legal documents, and procurement contracts. The paper presents an updated baseline solution for named entity recognition in Ukrainian with 0.89 F1. The corpus is the largest of its kind for the Ukrainian language and is available for download. (A minimal inference sketch follows this entry.)
[ "Chaplynskyi, Dmytro", "Romanyshyn, Mariana" ]
Introducing NER-UK 2.0: A Rich Corpus of Named Entities for Ukrainian
unlp-1.4
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
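A minimal sketch of running a Ukrainian token-classification model of the kind the baseline above describes; the checkpoint path is a placeholder, not the paper's released model name:

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="path/to/ner-uk-2.0-baseline",  # placeholder; substitute the released baseline
    aggregation_strategy="simple",        # merge word pieces into whole entity spans
)

for entity in ner("Тарас Шевченко народився в селі Моринці."):
    print(entity["entity_group"], entity["word"], round(float(entity["score"]), 3))
```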
https://aclanthology.org/2024.unlp-1.5.bib
https://aclanthology.org/2024.unlp-1.5/
@inproceedings{ustyianovych-barbosa-2024-instant, title = "Instant Messaging Platforms News Multi-Task Classification for Stance, Sentiment, and Discrimination Detection", author = "Ustyianovych, Taras and Barbosa, Denilson", editor = "Romanyshyn, Mariana and Romanyshyn, Nataliia and Hlybovets, Andrii and Ignatenko, Oleksii", booktitle = "Proceedings of the Third Ukrainian Natural Language Processing Workshop (UNLP) @ LREC-COLING 2024", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.unlp-1.5", pages = "30--40", abstract = "In the digital age, geopolitical events frequently catalyze discussions among global web users. Platforms such as social networks and messaging applications serve as vital means for information spreading and acquisition. The Russian aggression against Ukraine has notably intensified online discourse on the matter, drawing a significant audience eager for real-time updates. This surge in online activity inevitably results in the proliferation of content, some of which may be unreliable or manipulative. Given this context, the identification of such content with information distortion is imperative to mitigate bias and promote fairness. However, this task presents considerable challenges, primarily due to the lack of sophisticated language models capable of understanding the nuances and context of texts in low-resource languages, and the scarcity of well-annotated datasets for training such models. To address these gaps, we introduce the TRWU dataset - a meticulously annotated collection of Telegram news about the Russian war in Ukraine gathered starting from January 1, 2022. This paper outlines our methodology for semantic analysis and classification of these messages, aiming to ascertain their bias. Such an approach enhances our ability to detect manipulative and destructive content. Through descriptive statistical analysis, we explore deviations in message sentiment, stance, and metadata across different types of channels and levels of content creation activity. Our findings indicate a predominance of negative sentiment within the dataset. Additionally, our research elucidates distinct differences in the linguistic choices and phraseology among channels, based on their stance towards the war. This study contributes to the broader effort of understanding the spread and mitigating the impact of biased and manipulative content in digital communications.", }
In the digital age, geopolitical events frequently catalyze discussions among global web users. Platforms such as social networks and messaging applications serve as vital means for information spreading and acquisition. The Russian aggression against Ukraine has notably intensified online discourse on the matter, drawing a significant audience eager for real-time updates. This surge in online activity inevitably results in the proliferation of content, some of which may be unreliable or manipulative. Given this context, the identification of such content with information distortion is imperative to mitigate bias and promote fairness. However, this task presents considerable challenges, primarily due to the lack of sophisticated language models capable of understanding the nuances and context of texts in low-resource languages, and the scarcity of well-annotated datasets for training such models. To address these gaps, we introduce the TRWU dataset: a meticulously annotated collection of Telegram news about the Russian war in Ukraine gathered starting from January 1, 2022. This paper outlines our methodology for semantic analysis and classification of these messages, aiming to ascertain their bias. Such an approach enhances our ability to detect manipulative and destructive content. Through descriptive statistical analysis, we explore deviations in message sentiment, stance, and metadata across different types of channels and levels of content creation activity. Our findings indicate a predominance of negative sentiment within the dataset. Additionally, our research elucidates distinct differences in the linguistic choices and phraseology among channels, based on their stance towards the war. This study contributes to the broader effort of understanding the spread and mitigating the impact of biased and manipulative content in digital communications.
[ "Ustyianovych, Taras", "Barbosa, Denilson" ]
Instant Messaging Platforms News Multi-Task Classification for Stance, Sentiment, and Discrimination Detection
unlp-1.5
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.unlp-1.6.bib
https://aclanthology.org/2024.unlp-1.6/
@inproceedings{paniv-etal-2024-setting, title = "Setting up the Data Printer with Improved {E}nglish to {U}krainian Machine Translation", author = "Paniv, Yurii and Chaplynskyi, Dmytro and Trynus, Nikita and Kyrylov, Volodymyr", editor = "Romanyshyn, Mariana and Romanyshyn, Nataliia and Hlybovets, Andrii and Ignatenko, Oleksii", booktitle = "Proceedings of the Third Ukrainian Natural Language Processing Workshop (UNLP) @ LREC-COLING 2024", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.unlp-1.6", pages = "41--50", abstract = "To build large language models for Ukrainian we need to expand our corpora with large amounts of new algorithmic tasks expressed in natural language. Examples of task performance expressed in English are abundant, so with a high-quality translation system our community will be enabled to curate datasets faster. To aid this goal, we introduce a recipe to build a translation system using supervised finetuning of a large pretrained language model with a noisy parallel dataset of 3M pairs of Ukrainian and English sentences followed by a second phase of training using 17K examples selected by k-fold perplexity filtering on another dataset of higher quality. Our decoder-only model named Dragoman beats performance of previous state of the art encoder-decoder models on the FLORES devtest set.", }
To build large language models for Ukrainian, we need to expand our corpora with large amounts of new algorithmic tasks expressed in natural language. Examples of task performance expressed in English are abundant, so with a high-quality translation system our community will be able to curate datasets faster. To aid this goal, we introduce a recipe to build a translation system using supervised finetuning of a large pretrained language model with a noisy parallel dataset of 3M pairs of Ukrainian and English sentences, followed by a second phase of training using 17K examples selected by k-fold perplexity filtering on another dataset of higher quality. Our decoder-only model named Dragoman beats the performance of previous state-of-the-art encoder-decoder models on the FLORES devtest set. (A minimal filtering sketch follows this entry.)
[ "Paniv, Yurii", "Chaplynskyi, Dmytro", "Trynus, Nikita", "Kyrylov, Volodymyr" ]
Setting up the Data Printer with Improved English to Ukrainian Machine Translation
unlp-1.6
Poster
2404.15196
[ "https://github.com/lang-uk/dragoman" ]
https://huggingface.co/papers/2404.15196
2
0
0
4
1
[]
[]
[]
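The second training phase above selects examples by perplexity. A simplified single-model sketch of perplexity filtering; the paper uses a k-fold variant, and `gpt2` stands in here for the fine-tuned scoring model:

```python
import math
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

name = "gpt2"  # placeholder scorer; the paper scores with its own fine-tuned model
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name).eval()

def perplexity(text: str) -> float:
    ids = tok(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss  # mean token-level cross-entropy
    return math.exp(loss.item())

pairs = ["Hello world. ||| Привіт, світе.", "Noisy pair ||| qq qq qq"]
scored = sorted(pairs, key=perplexity)
keep = scored[: len(scored) // 2]  # keep the lower-perplexity half (illustrative cutoff)
print(keep)
```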
https://aclanthology.org/2024.unlp-1.7.bib
https://aclanthology.org/2024.unlp-1.7/
@inproceedings{romanyshyn-etal-2024-automated, title = "Automated Extraction of Hypo-Hypernym Relations for the {U}krainian {W}ord{N}et", author = "Romanyshyn, Nataliia and Chaplynskyi, Dmytro and Romanyshyn, Mariana", editor = "Romanyshyn, Mariana and Romanyshyn, Nataliia and Hlybovets, Andrii and Ignatenko, Oleksii", booktitle = "Proceedings of the Third Ukrainian Natural Language Processing Workshop (UNLP) @ LREC-COLING 2024", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.unlp-1.7", pages = "51--60", abstract = "WordNet is a crucial resource in linguistics and natural language processing, providing a detailed and expansive set of lexico-semantic relationships among words in a language. The trend toward automated construction and expansion of WordNets has become increasingly popular due to the high costs of manual development. This study aims to automate the development of the Ukrainian WordNet, explicitly concentrating on hypo-hypernym relations that are crucial building blocks of the hierarchical structure of WordNet. Utilizing the linking between Princeton WordNet, Wikidata, and multilingual resources from Wikipedia, the proposed approach successfully mapped 17{\%} of Princeton WordNet (PWN) content to Ukrainian Wikipedia. Furthermore, the study introduces three innovative strategies for generating new entries to fill in the gaps of the Ukrainian WordNet: machine translation, the Hypernym Discovery model, and the Hypernym Instruction-Following LLaMA model. The latter model shows a high level of effectiveness, evidenced by a 41.61{\%} performance on the Mean Overlap Coefficient (MOC) metric. With the proposed approach that combines automated techniques with expert human input, we provide a reliable basis for creating the Ukrainian WordNet.", }
WordNet is a crucial resource in linguistics and natural language processing, providing a detailed and expansive set of lexico-semantic relationships among words in a language. Automated construction and expansion of WordNets have become increasingly popular due to the high costs of manual development. This study aims to automate the development of the Ukrainian WordNet, concentrating specifically on hypo-hypernym relations, which are crucial building blocks of the hierarchical structure of WordNet. Utilizing the linking between Princeton WordNet, Wikidata, and multilingual resources from Wikipedia, the proposed approach successfully mapped 17{\%} of Princeton WordNet (PWN) content to Ukrainian Wikipedia. Furthermore, the study introduces three innovative strategies for generating new entries to fill in the gaps of the Ukrainian WordNet: machine translation, the Hypernym Discovery model, and the Hypernym Instruction-Following LLaMA model. The latter model shows a high level of effectiveness, evidenced by a 41.61{\%} performance on the Mean Overlap Coefficient (MOC) metric. With the proposed approach that combines automated techniques with expert human input, we provide a reliable basis for creating the Ukrainian WordNet. (A minimal hypernym-chain sketch follows this entry.)
[ "Romanyshyn, Nataliia", "Chaplynskyi, Dmytro", "Romanyshyn, Mariana" ]
Automated Extraction of Hypo-Hypernym Relations for the Ukrainian WordNet
unlp-1.7
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
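The hypo-hypernym chains the paper starts from can be read out of Princeton WordNet with NLTK. A minimal sketch; the Wikidata and Wikipedia linking steps are omitted:

```python
import nltk

nltk.download("wordnet", quiet=True)
from nltk.corpus import wordnet as wn

# Walk the first hypernym chain of one PWN synset; the paper then links synsets
# through Wikidata to Ukrainian Wikipedia titles (omitted here).
chain = [wn.synsets("dog", pos=wn.NOUN)[0]]
while chain[-1].hypernyms():
    chain.append(chain[-1].hypernyms()[0])
print(" -> ".join(s.name() for s in chain))
# e.g. dog.n.01 -> canine.n.02 -> ... -> entity.n.01
```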
https://aclanthology.org/2024.unlp-1.8.bib
https://aclanthology.org/2024.unlp-1.8/
@inproceedings{laba-etal-2024-ukrainian, title = "{U}krainian Visual Word Sense Disambiguation Benchmark", author = "Laba, Yurii and Mohytych, Yaryna and Rohulia, Ivanna and Kyryleyza, Halyna and Dydyk-Meush, Hanna and Dobosevych, Oles and Hryniv, Rostyslav", editor = "Romanyshyn, Mariana and Romanyshyn, Nataliia and Hlybovets, Andrii and Ignatenko, Oleksii", booktitle = "Proceedings of the Third Ukrainian Natural Language Processing Workshop (UNLP) @ LREC-COLING 2024", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.unlp-1.8", pages = "61--66", abstract = "This study presents a benchmark for evaluating the Visual Word Sense Disambiguation (Visual-WSD) task in Ukrainian. The main goal of the Visual-WSD task is to identify, with minimal contextual information, the most appropriate representation of a given ambiguous word from a set of ten images. To construct this benchmark, we followed a methodology similar to that proposed by (CITATION), who previously introduced benchmarks for the Visual-WSD task in English, Italian, and Farsi. This approach allows us to incorporate the Ukrainian benchmark into a broader framework for cross-language model performance comparisons. We collected the benchmark data semi-automatically and refined it with input from domain experts. We then assessed eight multilingual and multimodal large language models using this benchmark. All tested models performed worse than the zero-shot CLIP-based baseline model (CITATION) used by (CITATION) for the English Visual-WSD task. Our analysis revealed a significant performance gap in the Visual-WSD task between Ukrainian and English.", }
This study presents a benchmark for evaluating the Visual Word Sense Disambiguation (Visual-WSD) task in Ukrainian. The main goal of the Visual-WSD task is to identify, with minimal contextual information, the most appropriate representation of a given ambiguous word from a set of ten images. To construct this benchmark, we followed a methodology similar to that proposed by (CITATION), who previously introduced benchmarks for the Visual-WSD task in English, Italian, and Farsi. This approach allows us to incorporate the Ukrainian benchmark into a broader framework for cross-language model performance comparisons. We collected the benchmark data semi-automatically and refined it with input from domain experts. We then assessed eight multilingual and multimodal large language models using this benchmark. All tested models performed worse than the zero-shot CLIP-based baseline model (CITATION) used by (CITATION) for the English Visual-WSD task. Our analysis revealed a significant performance gap in the Visual-WSD task between Ukrainian and English. (A minimal CLIP-ranking sketch follows this entry.)
[ "Laba, Yurii", "Mohytych, Yaryna", "Rohulia, Ivanna", "Kyryleyza, Halyna", "Dydyk-Meush, Hanna", "Dobosevych, Oles", "Hryniv, Rostyslav" ]
Ukrainian Visual Word Sense Disambiguation Benchmark
unlp-1.8
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
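A minimal sketch of a zero-shot CLIP-style ranking baseline of the kind mentioned above: score each candidate image against the short context and pick the best. The English CLIP checkpoint and file names are illustrative assumptions; the baseline cited in the paper may differ:

```python
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")        # illustrative
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

context = "замок на горі"  # ambiguous target word with minimal context
images = [Image.open(p) for p in ["cand0.jpg", "cand1.jpg"]]  # normally ten candidates

inputs = processor(text=[context], images=images, return_tensors="pt", padding=True)
logits = model(**inputs).logits_per_text  # shape (1, num_images)
print("predicted image index:", logits.argmax(dim=-1).item())
```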
https://aclanthology.org/2024.unlp-1.9.bib
https://aclanthology.org/2024.unlp-1.9/
@inproceedings{romanyshyn-etal-2024-unlp, title = "The {UNLP} 2024 Shared Task on Fine-Tuning Large Language Models for {U}krainian", author = "Romanyshyn, Mariana and Syvokon, Oleksiy and Kyslyi, Roman", editor = "Romanyshyn, Mariana and Romanyshyn, Nataliia and Hlybovets, Andrii and Ignatenko, Oleksii", booktitle = "Proceedings of the Third Ukrainian Natural Language Processing Workshop (UNLP) @ LREC-COLING 2024", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.unlp-1.9", pages = "67--74", abstract = "This paper presents the results of the UNLP 2024 shared task, the first Shared Task on Fine-Tuning Large Language Models for the Ukrainian language. The goal of the task was to facilitate the creation of models that have knowledge of the Ukrainian language, history, and culture, as well as common knowledge, and are capable of generating fluent and accurate responses in Ukrainian. The participants were required to use models with open weights and reasonable size to ensure the reproducibility of the solutions. The participating systems were evaluated using multiple-choice exam questions and manually crafted open questions. Three teams submitted their solutions before the deadline, and two teams submitted papers that were accepted to appear in the UNLP workshop proceedings and are referred to in this report. The Codabench leaderboard is left open for further submissions.", }
This paper presents the results of the UNLP 2024 shared task, the first Shared Task on Fine-Tuning Large Language Models for the Ukrainian language. The goal of the task was to facilitate the creation of models that have knowledge of the Ukrainian language, history, and culture, as well as common knowledge, and are capable of generating fluent and accurate responses in Ukrainian. The participants were required to use models with open weights and reasonable size to ensure the reproducibility of the solutions. The participating systems were evaluated using multiple-choice exam questions and manually crafted open questions. Three teams submitted their solutions before the deadline, and two teams submitted papers that were accepted to appear in the UNLP workshop proceedings and are referred to in this report. The Codabench leaderboard is left open for further submissions.
[ "Romanyshyn, Mariana", "Syvokon, Oleksiy", "Kyslyi, Roman" ]
The UNLP 2024 Shared Task on Fine-Tuning Large Language Models for Ukrainian
unlp-1.9
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.unlp-1.10.bib
https://aclanthology.org/2024.unlp-1.10/
@inproceedings{boros-etal-2024-fine, title = "Fine-Tuning and Retrieval Augmented Generation for Question Answering Using Affordable Large Language Models", author = "Boros, Tiberiu and Chivereanu, Radu and Dumitrescu, Stefan and Purcaru, Octavian", editor = "Romanyshyn, Mariana and Romanyshyn, Nataliia and Hlybovets, Andrii and Ignatenko, Oleksii", booktitle = "Proceedings of the Third Ukrainian Natural Language Processing Workshop (UNLP) @ LREC-COLING 2024", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.unlp-1.10", pages = "75--82", abstract = "We present our proposed system named Sherlock to UNLP 2024 Shared Task on Question Answering winning first place. We employ a mix of methods, from using automatically translated datasets to perform supervised fine-tuning and direct preference optimization on instruction-tuned models, to model weight merging and retrieval augmented generation. We present and motivate our chosen sequence of steps, as well as an ablation study to understand the effect of each additional step. The resulting model and code are made publicly available (download links provided in the paper).", }
We present Sherlock, our system submitted to the UNLP 2024 Shared Task on Question Answering, which won first place. We employ a mix of methods, from using automatically translated datasets to perform supervised fine-tuning and direct preference optimization on instruction-tuned models, to model weight merging and retrieval augmented generation. We present and motivate our chosen sequence of steps, as well as an ablation study to understand the effect of each additional step. The resulting model and code are made publicly available (download links provided in the paper). (A minimal weight-merging sketch follows this entry.)
[ "Boros, Tiberiu", "Chivereanu, Radu", "Dumitrescu, Stefan", "Purcaru, Octavian" ]
Fine-Tuning and Retrieval Augmented Generation for Question Answering Using Affordable Large Language Models
unlp-1.10
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
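Of the steps listed above, model weight merging is the easiest to sketch: a uniform linear interpolation of two fine-tunes of the same base model. The paths are placeholders, and the paper's exact merging recipe may differ:

```python
from transformers import AutoModelForCausalLM

model_a = AutoModelForCausalLM.from_pretrained("path/to/finetune-a")  # placeholder
model_b = AutoModelForCausalLM.from_pretrained("path/to/finetune-b")  # placeholder

alpha = 0.5  # interpolation weight; tuning it is part of the design space
state_b = model_b.state_dict()
merged = {k: alpha * v + (1.0 - alpha) * state_b[k]
          for k, v in model_a.state_dict().items()}

model_a.load_state_dict(merged)
model_a.save_pretrained("path/to/merged-model")
```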
https://aclanthology.org/2024.unlp-1.11.bib
https://aclanthology.org/2024.unlp-1.11/
@inproceedings{kiulian-etal-2024-bytes, title = "From Bytes to Borsch: Fine-Tuning Gemma and Mistral for the {U}krainian Language Representation", author = "Kiulian, Artur and Polishko, Anton and Khandoga, Mykola and Chubych, Oryna and Connor, Jack and Ravishankar, Raghav and Shirawalmath, Adarsh", editor = "Romanyshyn, Mariana and Romanyshyn, Nataliia and Hlybovets, Andrii and Ignatenko, Oleksii", booktitle = "Proceedings of the Third Ukrainian Natural Language Processing Workshop (UNLP) @ LREC-COLING 2024", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.unlp-1.11", pages = "83--94", abstract = "In the rapidly advancing field of AI and NLP, generative large language models (LLMs) stand at the forefront of innovation, showcasing unparalleled abilities in text understanding and generation. However, the limited representation of low-resource languages like Ukrainian poses a notable challenge, restricting the reach and relevance of this technology. Our paper addresses this by fine-tuning the open-source Gemma and Mistral LLMs with Ukrainian datasets, aiming to improve their linguistic proficiency and benchmarking them against other existing models capable of processing Ukrainian language. This endeavor not only aims to mitigate language bias in technology but also promotes inclusivity in the digital realm. Our transparent and reproducible approach encourages further NLP research and development. Additionally, we present the Ukrainian Knowledge and Instruction Dataset (UKID) to aid future efforts in language model fine-tuning. Our research not only advances the field of NLP but also highlights the importance of linguistic diversity in AI, which is crucial for cultural preservation, education, and expanding AI{'}s global utility. Ultimately, we advocate for a future where technology is inclusive, enabling AI to communicate effectively across all languages, especially those currently underrepresented.", }
In the rapidly advancing field of AI and NLP, generative large language models (LLMs) stand at the forefront of innovation, showcasing unparalleled abilities in text understanding and generation. However, the limited representation of low-resource languages like Ukrainian poses a notable challenge, restricting the reach and relevance of this technology. Our paper addresses this by fine-tuning the open-source Gemma and Mistral LLMs with Ukrainian datasets, aiming to improve their linguistic proficiency and benchmarking them against other existing models capable of processing the Ukrainian language. This endeavor not only aims to mitigate language bias in technology but also promotes inclusivity in the digital realm. Our transparent and reproducible approach encourages further NLP research and development. Additionally, we present the Ukrainian Knowledge and Instruction Dataset (UKID) to aid future efforts in language model fine-tuning. Our research not only advances the field of NLP but also highlights the importance of linguistic diversity in AI, which is crucial for cultural preservation, education, and expanding AI{'}s global utility. Ultimately, we advocate for a future where technology is inclusive, enabling AI to communicate effectively across all languages, especially those currently underrepresented. (A minimal fine-tuning sketch follows this entry.)
[ "Kiulian, Artur", "Polishko, Anton", "Kh", "oga, Mykola", "Chubych, Oryna", "Connor, Jack", "Ravishankar, Raghav", "Shirawalmath, Adarsh" ]
From Bytes to Borsch: Fine-Tuning Gemma and Mistral for the Ukrainian Language Representation
unlp-1.11
Poster
2404.09138
[ "https://github.com/polyagent/from-bytes-to-borsch" ]
https://huggingface.co/papers/2404.09138
6
4
1
7
1
[]
[]
[]
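A minimal sketch of parameter-efficient fine-tuning of an open LLM with LoRA via the `peft` library; the checkpoint, target modules, and hyperparameters are illustrative assumptions, not the paper's recipe:

```python
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-v0.1")  # gated on the Hub

lora = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "v_proj"],  # attention projections, a common LoRA target
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora)
model.print_trainable_parameters()  # only a small fraction of the 7B weights trains
# ...then run a standard SFT loop over Ukrainian instruction data (e.g. UKID).
```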
https://aclanthology.org/2024.unlp-1.12.bib
https://aclanthology.org/2024.unlp-1.12/
@inproceedings{saini-etal-2024-spivavtor, title = "Spivavtor: An Instruction Tuned {U}krainian Text Editing Model", author = "Saini, Aman and Chernodub, Artem and Raheja, Vipul and Kulkarni, Vivek", editor = "Romanyshyn, Mariana and Romanyshyn, Nataliia and Hlybovets, Andrii and Ignatenko, Oleksii", booktitle = "Proceedings of the Third Ukrainian Natural Language Processing Workshop (UNLP) @ LREC-COLING 2024", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.unlp-1.12", pages = "95--108", abstract = "We introduce Spivavtor, a dataset, and instruction-tuned models for text editing focused on the Ukrainian language. Spivavtor is the Ukrainian-focused adaptation of the English-only CoEdIT (Raheja et al., 2023) model. Similar to CoEdIT, Spivavtor performs text editing tasks by following instructions in Ukrainian like {``}Виправте граматику в цьому реченнi{''} and {``}Спростiть це речення{''} which translate to {``}Correct the grammar in this sentence{''} and {``}Simplify this sentence{''} in English, respectively. This paper describes the details of the Spivavtor-Instruct dataset and Spivavtor models. We evaluate Spivavtor on a variety of text editing tasks in Ukrainian, such as Grammatical Error Correction (GEC), Text Simplification, Coherence, and Paraphrasing, and demonstrate its superior performance on all of them. We publicly release our best performing models and data as resources to the community to advance further research in this space.", }
We introduce Spivavtor, a dataset and instruction-tuned models for text editing focused on the Ukrainian language. Spivavtor is the Ukrainian-focused adaptation of the English-only CoEdIT (Raheja et al., 2023) model. Similar to CoEdIT, Spivavtor performs text editing tasks by following instructions in Ukrainian like {``}Виправте граматику в цьому реченнi{''} and {``}Спростiть це речення{''}, which translate to {``}Correct the grammar in this sentence{''} and {``}Simplify this sentence{''} in English, respectively. This paper describes the details of the Spivavtor-Instruct dataset and Spivavtor models. We evaluate Spivavtor on a variety of text editing tasks in Ukrainian, such as Grammatical Error Correction (GEC), Text Simplification, Coherence, and Paraphrasing, and demonstrate its superior performance on all of them. We publicly release our best-performing models and data as resources to the community to advance further research in this space. (A minimal inference sketch follows this entry.)
[ "Saini, Aman", "Chernodub, Artem", "Raheja, Vipul", "Kulkarni, Vivek" ]
Spivavtor: An Instruction Tuned Ukrainian Text Editing Model
unlp-1.12
Poster
2404.18880
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
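A minimal inference sketch, assuming that, like CoEdIT, the model exposes a text-to-text interface with the Ukrainian instruction prepended to the input; the checkpoint path is a placeholder for the released Spivavtor weights:

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

name = "path/to/spivavtor"  # placeholder for the released checkpoint
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForSeq2SeqLM.from_pretrained(name)

prompt = "Виправте граматику в цьому реченні: Я ходити до школа вчора."
ids = tok(prompt, return_tensors="pt").input_ids
out = model.generate(ids, max_new_tokens=64)
print(tok.decode(out[0], skip_special_tokens=True))
```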
https://aclanthology.org/2024.unlp-1.13.bib
https://aclanthology.org/2024.unlp-1.13/
@inproceedings{hamotskyi-etal-2024-eval, title = "Eval-{UA}-tion 1.0: Benchmark for Evaluating {U}krainian (Large) Language Models", author = {Hamotskyi, Serhii and Levbarg, Anna-Izabella and H{\"a}nig, Christian}, editor = "Romanyshyn, Mariana and Romanyshyn, Nataliia and Hlybovets, Andrii and Ignatenko, Oleksii", booktitle = "Proceedings of the Third Ukrainian Natural Language Processing Workshop (UNLP) @ LREC-COLING 2024", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.unlp-1.13", pages = "109--119", abstract = "In this paper, we introduce Eval-UA-tion, a set of novel Ukrainian-language datasets aimed at evaluating the performance of language models on the Ukrainian language. The tasks include UA-CBT (inspired by the Children{'}s Book Test, a fill-in-the-gaps type task aimed at gauging the extent to which a story narrative is understood), UP-Titles (where the online newspaper \textit{Ukrainska Pravda}{`}s articles have to be matched to the correct title among 10 similar ones), and LMentry-static-UA/LMES (inspired by the LMentry benchmark, a set of tasks simple to solve for humans but hard for LMs, such as {`}which of these words is longer{'} and {`}what is the fifth word of this sentence{'}). With the exception of UP-Titles, the tasks are built in a way to minimize contamination and use material unlikely to be present in the training sets of language models, and include a split for few-shot model prompting use that minimizes contamination. For each task human and random baselines are provided.", }
In this paper, we introduce Eval-UA-tion, a set of novel Ukrainian-language datasets aimed at evaluating the performance of language models on the Ukrainian language. The tasks include UA-CBT (inspired by the Children{'}s Book Test, a fill-in-the-gaps type task aimed at gauging the extent to which a story narrative is understood), UP-Titles (where the online newspaper \textit{Ukrainska Pravda}{'}s articles have to be matched to the correct title among 10 similar ones), and LMentry-static-UA/LMES (inspired by the LMentry benchmark, a set of tasks simple to solve for humans but hard for LMs, such as {`}which of these words is longer{'} and {`}what is the fifth word of this sentence{'}). With the exception of UP-Titles, the tasks are built in a way to minimize contamination and use material unlikely to be present in the training sets of language models, and include a split for few-shot model prompting use that minimizes contamination. For each task, human and random baselines are provided.
[ "Hamotskyi, Serhii", "Levbarg, Anna-Izabella", "H{\\\"a}nig, Christian" ]
Eval-UA-tion 1.0: Benchmark for Evaluating Ukrainian (Large) Language Models
unlp-1.13
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.unlp-1.14.bib
https://aclanthology.org/2024.unlp-1.14/
@inproceedings{haltiuk-smywinski-pohl-2024-liberta, title = "{L}i{BERT}a: Advancing {U}krainian Language Modeling through Pre-training from Scratch", author = "Haltiuk, Mykola and Smywi{\'n}ski-Pohl, Aleksander", editor = "Romanyshyn, Mariana and Romanyshyn, Nataliia and Hlybovets, Andrii and Ignatenko, Oleksii", booktitle = "Proceedings of the Third Ukrainian Natural Language Processing Workshop (UNLP) @ LREC-COLING 2024", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.unlp-1.14", pages = "120--128", abstract = "Recent advancements in Natural Language Processing (NLP) have spurred remarkable progress in language modeling, predominantly benefiting English. While Ukrainian NLP has long grappled with significant challenges due to limited data and computational resources, recent years have seen a shift with the emergence of new corpora, marking a pivotal moment in addressing these obstacles. This paper introduces LiBERTa Large, the inaugural BERT Large model pre-trained entirely from scratch only on Ukrainian texts. Leveraging extensive multilingual text corpora, including a substantial Ukrainian subset, LiBERTa Large establishes a foundational resource for Ukrainian NLU tasks. Our model outperforms existing multilingual and monolingual models pre-trained from scratch for Ukrainian, demonstrating competitive performance against those relying on cross-lingual transfer from English. This achievement underscores our ability to achieve superior performance through pre-training from scratch with additional enhancements, obviating the need to rely on decisions made for English models to efficiently transfer weights. We establish LiBERTa Large as a robust baseline, paving the way for future advancements in Ukrainian language modeling.", }
Recent advancements in Natural Language Processing (NLP) have spurred remarkable progress in language modeling, predominantly benefiting English. While Ukrainian NLP has long grappled with significant challenges due to limited data and computational resources, recent years have seen a shift with the emergence of new corpora, marking a pivotal moment in addressing these obstacles. This paper introduces LiBERTa Large, the first BERT Large model pre-trained entirely from scratch on Ukrainian texts alone. Leveraging extensive multilingual text corpora, including a substantial Ukrainian subset, LiBERTa Large establishes a foundational resource for Ukrainian NLU tasks. Our model outperforms existing multilingual and monolingual models pre-trained from scratch for Ukrainian, and demonstrates competitive performance against those relying on cross-lingual transfer from English. This achievement shows that superior performance can be achieved through pre-training from scratch with additional enhancements, obviating the need to rely on design decisions made for English models in order to transfer weights efficiently. We establish LiBERTa Large as a robust baseline, paving the way for future advancements in Ukrainian language modeling.
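To make "pre-training from scratch" concrete, here is a minimal sketch of initializing a randomly weighted BERT-Large-shaped masked LM; the vocabulary size and all hyperparameters are standard BERT-Large assumptions, not LiBERTa's actual recipe.

```python
# Minimal from-scratch initialization of a BERT-Large-sized masked LM.
# All hyperparameters are standard BERT-Large values, assumed for illustration.
from transformers import BertConfig, BertForMaskedLM

config = BertConfig(
    vocab_size=32000,          # assumption: a 32k Ukrainian subword vocabulary
    hidden_size=1024,
    num_hidden_layers=24,
    num_attention_heads=16,
    intermediate_size=4096,
)
model = BertForMaskedLM(config)  # random init, i.e. no English warm start
print(f"{model.num_parameters() / 1e6:.0f}M parameters")
```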
[ "Haltiuk, Mykola", "Smywi{\\'n}ski-Pohl, Aleks", "er" ]
LiBERTa: Advancing Ukrainian Language Modeling through Pre-training from Scratch
unlp-1.14
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.unlp-1.15.bib
https://aclanthology.org/2024.unlp-1.15/
@inproceedings{galeshchuk-2024-entity, title = "Entity Embellishment Mitigation in {LLM}s Output with Noisy Synthetic Dataset for Alignment", author = "Galeshchuk, Svitlana", editor = "Romanyshyn, Mariana and Romanyshyn, Nataliia and Hlybovets, Andrii and Ignatenko, Oleksii", booktitle = "Proceedings of the Third Ukrainian Natural Language Processing Workshop (UNLP) @ LREC-COLING 2024", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.unlp-1.15", pages = "129--134", abstract = "The present work focuses on the entity embellishments when named entities are accompanied by additional information that is not supported by the context or the source material. Our paper contributes into mitigating this problem in large language model{'}s generated texts, summaries in particular, by proposing the approach with synthetic noise injection in the generated samples that are further used for alignment of finetuned LLM. We also challenge the issue of solutions scarcity for low-resourced languages and test our approach with corpora in Ukrainian.", }
The present work focuses on entity embellishment, where named entities are accompanied by additional information that is not supported by the context or the source material. Our paper contributes to mitigating this problem in texts generated by large language models, summaries in particular, by proposing an approach that injects synthetic noise into generated samples, which are then used to align the fine-tuned LLM. We also address the scarcity of solutions for low-resourced languages and test our approach on corpora in Ukrainian.
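The noise-injection idea lends itself to a compact sketch: embellish a named entity in an otherwise clean summary with an unsupported modifier, producing a (preferred, rejected) pair usable for preference-style alignment. The entity list, modifiers, and pair format below are toy assumptions, not the paper's procedure.

```python
# Toy synthetic-noise injector: embellishes a known entity with an
# unsupported modifier to create a "rejected" sample for alignment.
import random

EMBELLISHMENTS = ["the controversial", "the award-winning", "the disgraced"]

def inject_embellishment(summary, entities, rng):
    """Prefix one entity found in the summary with an unsupported modifier."""
    entity = rng.choice([e for e in entities if e in summary])
    noisy = summary.replace(entity, f"{rng.choice(EMBELLISHMENTS)} {entity}", 1)
    return {"chosen": summary, "rejected": noisy}

rng = random.Random(7)
print(inject_embellishment("Kyiv hosted the summit on Tuesday.", ["Kyiv"], rng))
```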
[ "Galeshchuk, Svitlana" ]
Entity Embellishment Mitigation in LLMs Output with Noisy Synthetic Dataset for Alignment
unlp-1.15
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.unlp-1.16.bib
https://aclanthology.org/2024.unlp-1.16/
@inproceedings{shamrai-2024-language, title = "Language-Specific Pruning for Efficient Reduction of Large Language Models", author = "Shamrai, Maksym", editor = "Romanyshyn, Mariana and Romanyshyn, Nataliia and Hlybovets, Andrii and Ignatenko, Oleksii", booktitle = "Proceedings of the Third Ukrainian Natural Language Processing Workshop (UNLP) @ LREC-COLING 2024", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.unlp-1.16", pages = "135--140", abstract = "Delving into pruning techniques is essential to boost the efficiency of Large Language Models (LLMs) by reducing their size and computational demands, resulting in faster and more cost-effective inference. In this work, our key contribution lies in recognizing that LLMs trained on diverse languages manifest distinct language-specific weight distributions. Exploiting this insight, we illustrate that pruning LLMs using language-specific data results in a more potent model compression. Empirical evidence underscores the critical nature of pruning on language-specific data, highlighting a noteworthy impact on the perplexity of Ukrainian texts compared to pruning on English data. The proposed methodology significantly reduces the size of LLaMA, LLaMA 2 and Mistral models while preserving competitive performance. This research underscores the significance of linguistic considerations in LLM pruning and advocates for language-specific optimization, establishing a framework for more efficient and tailored language models across diverse linguistic contexts. Additionally, all experiments were conducted using a single consumer-grade NVIDIA RTX 3090 GPU, and the code is available at https://github.com/mshamrai/language-specific-pruning.", }
Delving into pruning techniques is essential for boosting the efficiency of Large Language Models (LLMs) by reducing their size and computational demands, resulting in faster and more cost-effective inference. Our key contribution in this work lies in recognizing that LLMs trained on diverse languages manifest distinct language-specific weight distributions. Exploiting this insight, we show that pruning LLMs using language-specific data results in more effective model compression. Empirical evidence underscores the importance of pruning on language-specific data, showing a markedly smaller degradation in the perplexity of Ukrainian texts when pruning on Ukrainian rather than English data. The proposed methodology significantly reduces the size of LLaMA, LLaMA 2, and Mistral models while preserving competitive performance. This research underscores the significance of linguistic considerations in LLM pruning and advocates for language-specific optimization, establishing a framework for more efficient and tailored language models across diverse linguistic contexts. All experiments were conducted on a single consumer-grade NVIDIA RTX 3090 GPU, and the code is available at https://github.com/mshamrai/language-specific-pruning.
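The core mechanism is easy to sketch. Below, a Wanda-style importance score (|W| scaled by calibration-activation norms) stands in for the paper's criterion, which may differ; the tensors are toy stand-ins for real LLaMA/Mistral layers, and the point is only that the calibration activations come from language-specific (here, Ukrainian) text.

```python
# Activation-aware pruning with language-specific calibration data (sketch).
import torch

def prune_linear(weight, calib_acts, sparsity=0.5):
    """Zero the lowest-scoring weights; score = |W[i,j]| * ||x_j||_2."""
    col_norms = calib_acts.norm(dim=0)        # per-input-feature activation norm
    scores = weight.abs() * col_norms         # broadcast over output rows
    k = int(scores.numel() * sparsity)
    threshold = scores.flatten().kthvalue(k).values
    return weight * (scores > threshold)

torch.manual_seed(0)
W = torch.randn(8, 16)                        # toy linear layer
ukr_acts = torch.randn(32, 16)                # activations from Ukrainian calibration text
print((prune_linear(W, ukr_acts) == 0).float().mean())  # ≈ target sparsity
```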
[ "Shamrai, Maksym" ]
Language-Specific Pruning for Efficient Reduction of Large Language Models
unlp-1.16
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.wildre-1.1.bib
https://aclanthology.org/2024.wildre-1.1/
@inproceedings{kochar-etal-2024-towards, title = "Towards Disfluency Annotated Corpora for {I}ndian Languages", author = "Kochar, Chayan and Mujadia, Vandan Vasantlal and Mishra, Pruthwik and Sharma, Dipti Misra", editor = "Jha, Girish Nath and L., Sobha and Bali, Kalika and Ojha, Atul Kr.", booktitle = "Proceedings of the 7th Workshop on Indian Language Data: Resources and Evaluation", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.wildre-1.1", pages = "1--10", abstract = "In the natural course of spoken language, individuals often engage in thinking and self-correction during speech production. These instances of interruption or correction are commonly referred to as disfluencies. When preparing data for subsequent downstream NLP tasks, these linguistic elements can be systematically removed, or handled as required, to enhance data quality. In this study, we present a comprehensive research on disfluencies in Indian languages. Our approach involves not only annotating real-world conversation transcripts but also conducting a detailed analysis of linguistic nuances inherent to Indian languages that are necessary to consider during annotation. Additionally, we introduce a robust algorithm for the synthetic generation of disfluent data. This algorithm aims to facilitate more effective model training for the identification of disfluencies in real-world conversations, thereby contributing to the advancement of disfluency research in Indian languages.", }
In the natural course of spoken language, individuals often engage in thinking and self-correction during speech production. These instances of interruption or correction are commonly referred to as disfluencies. When preparing data for subsequent downstream NLP tasks, these linguistic elements can be systematically removed, or handled as required, to enhance data quality. In this study, we present comprehensive research on disfluencies in Indian languages. Our approach involves not only annotating real-world conversation transcripts but also conducting a detailed analysis of the linguistic nuances inherent to Indian languages that must be considered during annotation. Additionally, we introduce a robust algorithm for the synthetic generation of disfluent data. This algorithm aims to facilitate more effective model training for the identification of disfluencies in real-world conversations, thereby contributing to the advancement of disfluency research in Indian languages.
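A stripped-down version of such a generator can be sketched in a few lines: inject fillers and word repetitions into fluent text at random positions. The filler inventory and insertion probabilities below are illustrative assumptions, not the paper's algorithm.

```python
# Toy synthetic-disfluency generator: fillers plus repetition-style corrections.
import random

FILLERS = ["umm", "matlab", "woh"]   # assumed Hindi/English-style fillers

def add_disfluencies(sentence, rng, p_filler=0.2, p_repeat=0.15):
    out = []
    for word in sentence.split():
        if rng.random() < p_filler:
            out.append(rng.choice(FILLERS))     # hesitation filler
        out.append(word)
        if rng.random() < p_repeat:
            out.append(word)                    # repetition disfluency
    return " ".join(out)

rng = random.Random(3)
print(add_disfluencies("main kal bazaar jaunga", rng))
```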
[ "Kochar, Chayan", "Mujadia, V", "an Vasantlal", "Mishra, Pruthwik", "Sharma, Dipti Misra" ]
Towards Disfluency Annotated Corpora for Indian Languages
wildre-1.1
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.wildre-1.2.bib
https://aclanthology.org/2024.wildre-1.2/
@inproceedings{raihan-etal-2024-emomix, title = "{E}mo{M}ix-3{L}: A Code-Mixed Dataset for {B}angla-{E}nglish-{H}indi for Emotion Detection", author = "Raihan, Nishat and Goswami, Dhiman and Mahmud, Antara and Anastasopoulos, Antonios and Zampieri, Marcos", editor = "Jha, Girish Nath and L., Sobha and Bali, Kalika and Ojha, Atul Kr.", booktitle = "Proceedings of the 7th Workshop on Indian Language Data: Resources and Evaluation", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.wildre-1.2", pages = "11--16", abstract = "Code-mixing is a well-studied linguistic phenomenon that occurs when two or more languages are mixed in text or speech. Several studies have been conducted on building datasets and performing downstream NLP tasks on code-mixed data. Although it is not uncommon to observe code-mixing of three or more languages, most available datasets in this domain contain code-mixed data from only two languages. In this paper, we introduce EmoMix-3L, a novel multi-label emotion detection dataset containing code-mixed data from three different languages. We experiment with several models on EmoMix-3L and we report that MuRIL outperforms other models on this dataset.", }
Code-mixing is a well-studied linguistic phenomenon that occurs when two or more languages are mixed in text or speech. Several studies have been conducted on building datasets and performing downstream NLP tasks on code-mixed data. Although it is not uncommon to observe code-mixing of three or more languages, most available datasets in this domain contain code-mixed data from only two languages. In this paper, we introduce EmoMix-3L, a novel multi-label emotion detection dataset containing code-mixed data from three different languages. We experiment with several models on EmoMix-3L and we report that MuRIL outperforms other models on this dataset.
[ "Raihan, Nishat", "Goswami, Dhiman", "Mahmud, Antara", "Anastasopoulos, Antonios", "Zampieri, Marcos" ]
EmoMix-3L: A Code-Mixed Dataset for Bangla-English-Hindi for Emotion Detection
wildre-1.2
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.wildre-1.3.bib
https://aclanthology.org/2024.wildre-1.3/
@inproceedings{rani-etal-2024-findings, title = "Findings of the {WILDRE} Shared Task on Code-mixed Less-resourced Sentiment Analysis for {I}ndo-{A}ryan Languages", author = "Rani, Priya and Negi, Gaurav and Jha, Saroj and Suryawanshi, Shardul and Ojha, Atul Kr. and Buitelaar, Paul and McCrae, John P.", editor = "Jha, Girish Nath and L., Sobha and Bali, Kalika and Ojha, Atul Kr.", booktitle = "Proceedings of the 7th Workshop on Indian Language Data: Resources and Evaluation", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.wildre-1.3", pages = "17--23", abstract = "This paper describes the structure and findings of the WILDRE 2024 shared task on Code-mixed Less-resourced Sentiment Analysis for Indo-Aryan Languages. The participants were asked to submit the test data{'}s final prediction on CodaLab. A total of fourteen teams registered for the shared task. Only four participants submitted the system for evaluation on CodaLab, with only two teams submitting the system description paper. While all systems show a rather promising performance, they outperform the baseline scores.", }
This paper describes the structure and findings of the WILDRE 2024 shared task on Code-mixed Less-resourced Sentiment Analysis for Indo-Aryan Languages. Participants were asked to submit their final predictions on the test data via CodaLab. A total of fourteen teams registered for the shared task, but only four submitted systems for evaluation on CodaLab, and only two of those teams submitted a system description paper. All submitted systems show rather promising performance and outperform the baseline scores.
[ "Rani, Priya", "Negi, Gaurav", "Jha, Saroj", "Suryawanshi, Shardul", "Ojha, Atul Kr.", "Buitelaar, Paul", "McCrae, John P." ]
Findings of the WILDRE Shared Task on Code-mixed Less-resourced Sentiment Analysis for Indo-Aryan Languages
wildre-1.3
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.wildre-1.4.bib
https://aclanthology.org/2024.wildre-1.4/
@inproceedings{maity-etal-2024-multilingual, title = "Multilingual Bias Detection and Mitigation for {I}ndian Languages", author = "Maity, Ankita and Sharma, Anubhav and Dhar, Rudra and Abhishek, Tushar and Gupta, Manish and Varma, Vasudeva", editor = "Jha, Girish Nath and L., Sobha and Bali, Kalika and Ojha, Atul Kr.", booktitle = "Proceedings of the 7th Workshop on Indian Language Data: Resources and Evaluation", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.wildre-1.4", pages = "24--29", abstract = "Lack of diverse perspectives causes neutrality bias in Wikipedia content leading to millions of worldwide readers getting exposed by potentially inaccurate information. Hence, neutrality bias detection and mitigation is a critical problem. Although previous studies have proposed effective solutions for English, no work exists for Indian languages. First, we contribute two large datasets, mWIKIBIAS and mWNC, covering 8 languages, for the bias detection and mitigation tasks respectively. Next, we investigate the effectiveness of popular multilingual Transformer-based models for the two tasks by modeling detection as a binary classification problem and mitigation as a style transfer problem. We make the code and data publicly available.", }
A lack of diverse perspectives causes neutrality bias in Wikipedia content, exposing millions of readers worldwide to potentially inaccurate information. Hence, neutrality bias detection and mitigation is a critical problem. Although previous studies have proposed effective solutions for English, no such work exists for Indian languages. First, we contribute two large datasets, mWIKIBIAS and mWNC, covering 8 languages, for the bias detection and mitigation tasks, respectively. Next, we investigate the effectiveness of popular multilingual Transformer-based models for the two tasks by modeling detection as a binary classification problem and mitigation as a style transfer problem. We make the code and data publicly available.
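The detection setup reduces to a standard sequence-classification recipe; here is a minimal sketch, assuming MuRIL as one plausible multilingual backbone for Indian languages (the paper's exact model choices are not assumed). The classification head is freshly initialized, so outputs are meaningless until fine-tuned on mWIKIBIAS-style labels.

```python
# Binary "biased vs. neutral" classification with a multilingual encoder.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

backbone = "google/muril-base-cased"            # assumed backbone choice
tokenizer = AutoTokenizer.from_pretrained(backbone)
model = AutoModelForSequenceClassification.from_pretrained(
    backbone, num_labels=2                      # 0 = neutral, 1 = biased
)

batch = tokenizer(["यह नेता हमेशा झूठ बोलता है।"], return_tensors="pt")
with torch.no_grad():
    logits = model(**batch).logits
print(logits.softmax(-1))   # untrained head: scores are placeholders
```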
[ "Maity, Ankita", "Sharma, Anubhav", "Dhar, Rudra", "Abhishek, Tushar", "Gupta, Manish", "Varma, Vasudeva" ]
Multilingual Bias Detection and Mitigation for Indian Languages
wildre-1.4
Poster
2312.15181
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.wildre-1.5.bib
https://aclanthology.org/2024.wildre-1.5/
@inproceedings{nigam-chandra-2024-dharmasastra, title = "Dharma{\'s}{\=a}stra Informatics: Concept Mining System for Socio-Cultural Facet in {A}ncient {I}ndia", author = "Nigam, Arooshi and Chandra, Subhash", editor = "Jha, Girish Nath and L., Sobha and Bali, Kalika and Ojha, Atul Kr.", booktitle = "Proceedings of the 7th Workshop on Indian Language Data: Resources and Evaluation", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.wildre-1.5", pages = "30--39", abstract = "The heritage of Dharma{\'s}{\=a}stra (DS) represents an extensive cultural legacy, spanning diverse fields such as family law, social ethics, culture and economics. In this paper, a new term {``}Dharma{\'s}{\=a}stric Informatics,{''} is proposed which leverages computational methods for concept mining to unravel the socio-cultural complexities of ancient India as reflected in the DS. Despite its profound significance, the digitization and online information retrieval of DS texts encounter notable challenges. Therefore, the primary aim of this paper is to synergize digital accessibility and information mining techniques to enhance access to DS knowledge traditions. Through the utilization of heritage computing methodologies, it is an endeavour to develop a robust system for digitizing DS texts comprehensively, facilitating instant referencing and efficient retrieval, catering to the needs of researchers and scholars across disciplines worldwide. By leveraging advanced digital technologies and the burgeoning IT landscape, it seeks to create a seamless and user-friendly platform for accessing and exploring DS texts. This experiment not only promotes scholarly engagement but also serves as an invaluable resource for individuals interested in delving into the intricate realms of archaic Indian knowledge traditions. Ultimately, our efforts aim to amplify the visibility and accessibility of DS knowledge, fostering a deeper understanding and appreciation of this profound cultural heritage.", }
The heritage of Dharma{\'s}{\=a}stra (DS) represents an extensive cultural legacy, spanning diverse fields such as family law, social ethics, culture, and economics. In this paper, we propose a new term, {``}Dharma{\'s}{\=a}stric Informatics{''}, which leverages computational methods for concept mining to unravel the socio-cultural complexities of ancient India as reflected in the DS. Despite its profound significance, the digitization and online information retrieval of DS texts face notable challenges. The primary aim of this paper is therefore to combine digital accessibility and information mining techniques to enhance access to DS knowledge traditions. Using heritage computing methodologies, we endeavour to develop a robust system for digitizing DS texts comprehensively, facilitating instant referencing and efficient retrieval for researchers and scholars across disciplines worldwide. By leveraging advanced digital technologies and the burgeoning IT landscape, we seek to create a seamless and user-friendly platform for accessing and exploring DS texts. This experiment not only promotes scholarly engagement but also serves as an invaluable resource for individuals interested in delving into the intricate realms of archaic Indian knowledge traditions. Ultimately, our efforts aim to amplify the visibility and accessibility of DS knowledge, fostering a deeper understanding and appreciation of this profound cultural heritage.
[ "Nigam, Arooshi", "Ch", "ra, Subhash" ]
Dharmaśāstra Informatics: Concept Mining System for Socio-Cultural Facet in Ancient India
wildre-1.5
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.wildre-1.6.bib
https://aclanthology.org/2024.wildre-1.6/
@inproceedings{bala-etal-2024-exploring, title = "Exploring News Summarization and Enrichment in a Highly Resource-Scarce {I}ndian Language: A Case Study of Mizo", author = "Bala, Abhinaba and Urlana, Ashok and Mishra, Rahul and Krishnamurthy, Parameswari", editor = "Jha, Girish Nath and L., Sobha and Bali, Kalika and Ojha, Atul Kr.", booktitle = "Proceedings of the 7th Workshop on Indian Language Data: Resources and Evaluation", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.wildre-1.6", pages = "40--46", abstract = "Obtaining sufficient information in one{'}s mother tongue is crucial for satisfying the information needs of the users. While high-resource languages have abundant online resources, the situation is less than ideal for very low-resource languages. Moreover, the insufficient reporting of vital national and international events continues to be a worry, especially in languages with scarce resources, like Mizo. In this paper, we conduct a study to investigate the effectiveness of a simple methodology designed to generate a holistic summary for Mizo news articles, which leverages English-language news to supplement and enhance the information related to the corresponding news events. Furthermore, we make available 500 Mizo news articles and corresponding enriched holistic summaries. Human evaluation confirms that our approach significantly enhances the information coverage of Mizo news articles.", }
Obtaining sufficient information in one{'}s mother tongue is crucial for satisfying the information needs of users. While high-resource languages have abundant online resources, the situation is less than ideal for very low-resource languages. Moreover, the insufficient reporting of vital national and international events remains a concern, especially in languages with scarce resources, like Mizo. In this paper, we conduct a study to investigate the effectiveness of a simple methodology designed to generate a holistic summary for Mizo news articles, which leverages English-language news to supplement and enhance the information related to the corresponding news events. Furthermore, we make available 500 Mizo news articles and corresponding enriched holistic summaries. Human evaluation confirms that our approach significantly enhances the information coverage of Mizo news articles.
[ "Bala, Abhinaba", "Urlana, Ashok", "Mishra, Rahul", "Krishnamurthy, Parameswari" ]
Exploring News Summarization and Enrichment in a Highly Resource-Scarce Indian Language: A Case Study of Mizo
wildre-1.6
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.wildre-1.7.bib
https://aclanthology.org/2024.wildre-1.7/
@inproceedings{lalitha-devi-rk-rao-2024-finding, title = "Finding the Causality of an Event in News Articles", author = "Lalitha Devi, Sobha and RK Rao, Pattabhi", editor = "Jha, Girish Nath and L., Sobha and Bali, Kalika and Ojha, Atul Kr.", booktitle = "Proceedings of the 7th Workshop on Indian Language Data: Resources and Evaluation", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.wildre-1.7", pages = "47--53", abstract = "This paper discusses about the finding of causality of an event in newspaper articles. The analysis of causality , otherwise known as cause and effect is crucial for building efficient Natural Language Understanding (NLU) supported AI systems such as Event tracking and it is considered as a complex semantic relation under discourse theory. A cause-effect relation consists of a linguistic marker and its two arguments. The arguments are semantic arguments where the cause is the first argument (Arg1) and the effect is the second argument(Arg2). In this work we have considered the causal relations in Tamil Newspaper articles. The analysis of causal constructions, the causal markers and their syntactic relation lead to the identification of different features for developing the language model using RBMs (Restricted Boltzmann Machine). The experiments we performed have given encouraging results. The Cause-Effect system developed is used in a mobile App for Event profiling called {``}Nigalazhvi{''} where the cause and effect of an event is identified and given to the user.", }
This paper addresses finding the causality of an event in newspaper articles. The analysis of causality, otherwise known as cause and effect, is crucial for building efficient AI systems supported by Natural Language Understanding (NLU), such as event tracking, and it is considered a complex semantic relation in discourse theory. A cause-effect relation consists of a linguistic marker and its two arguments. The arguments are semantic arguments, where the cause is the first argument (Arg1) and the effect is the second argument (Arg2). In this work, we consider causal relations in Tamil newspaper articles. The analysis of causal constructions, causal markers, and their syntactic relations led to the identification of features for developing a language model using RBMs (Restricted Boltzmann Machines). The experiments we performed have given encouraging results. The cause-effect system developed is used in a mobile app for event profiling called {``}Nigalazhvi{''}, where the cause and effect of an event are identified and shown to the user.
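The (marker, Arg1, Arg2) decomposition can be illustrated with a deliberately simple, marker-based splitter; note the paper works on Tamil with RBM-based models, whereas this English regex sketch only shows the structure of the relation, not the method.

```python
# Marker-based cause-effect decomposition (structural illustration only).
import re

CAUSAL_MARKERS = ["because of", "because", "due to", "as a result of"]

def split_cause_effect(sentence):
    for marker in CAUSAL_MARKERS:               # longer markers listed first
        match = re.search(rf"\b{re.escape(marker)}\b", sentence, re.IGNORECASE)
        if match:
            return {
                "marker": marker,
                "arg1_cause": sentence[match.end():].strip(" ."),
                "arg2_effect": sentence[: match.start()].strip(" ,"),
            }
    return None

print(split_cause_effect("Schools were closed because of heavy rain."))
```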
[ "Lalitha Devi, Sobha", "RK Rao, Pattabhi" ]
Finding the Causality of an Event in News Articles
wildre-1.7
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.wildre-1.8.bib
https://aclanthology.org/2024.wildre-1.8/
@inproceedings{dongare-2024-creating, title = "Creating Corpus of Low Resource {I}ndian Languages for Natural Language Processing: Challenges and Opportunities", author = "Dongare, Pratibha", editor = "Jha, Girish Nath and L., Sobha and Bali, Kalika and Ojha, Atul Kr.", booktitle = "Proceedings of the 7th Workshop on Indian Language Data: Resources and Evaluation", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.wildre-1.8", pages = "54--58", abstract = "Addressing tasks in Natural Language Processing requires access to sufficient and high-quality data. However, working with languages that have limited resources poses a significant challenge due to the absence of established methodologies, frameworks, and collaborative efforts. This paper intends to briefly outline the challenges associated with standardization in data creation, focusing on Indian languages, which are often categorized as low resource languages. Additionally, potential solutions and the importance of standardized procedures for low-resource language data are proposed. Furthermore, the critical role of standardized protocols in corpus creation and their impact on research is highlighted. Lastly, this paper concludes by defining what constitutes a corpus.", }
Addressing tasks in Natural Language Processing requires access to sufficient and high-quality data. However, working with languages that have limited resources poses a significant challenge due to the absence of established methodologies, frameworks, and collaborative efforts. This paper intends to briefly outline the challenges associated with standardization in data creation, focusing on Indian languages, which are often categorized as low resource languages. Additionally, potential solutions and the importance of standardized procedures for low-resource language data are proposed. Furthermore, the critical role of standardized protocols in corpus creation and their impact on research is highlighted. Lastly, this paper concludes by defining what constitutes a corpus.
[ "Dongare, Pratibha" ]
Creating Corpus of Low Resource Indian Languages for Natural Language Processing: Challenges and Opportunities
wildre-1.8
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.wildre-1.9.bib
https://aclanthology.org/2024.wildre-1.9/
@inproceedings{thakkar-etal-2024-fzzg, title = "{FZZG} at {WILDRE}-7: Fine-tuning Pre-trained Models for Code-mixed, Less-resourced Sentiment Analysis", author = "Thakkar, Gaurish and Tadi{\'c}, Marko and Mikelic Preradovic, Nives", editor = "Jha, Girish Nath and L., Sobha and Bali, Kalika and Ojha, Atul Kr.", booktitle = "Proceedings of the 7th Workshop on Indian Language Data: Resources and Evaluation", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.wildre-1.9", pages = "59--65", abstract = "This paper describes our system used for a shared task on code-mixed, less-resourced sentiment analysis for Indo-Aryan languages. We are using the large language models (LLMs) since they have demonstrated excellent performance on classification tasks. In our participation in all tracks, we use \textit{unsloth/mistral-7b-bnb-4bit} LLM for the task of code-mixed sentiment analysis. For track 1, we used a simple fine-tuning strategy on PLMs by combining data from multiple phases. Our trained systems secured first place in four phases out of five. In addition, we present the results achieved using several PLMs for each language.", }
This paper describes our system for the shared task on code-mixed, less-resourced sentiment analysis for Indo-Aryan languages. We use large language models (LLMs), since they have demonstrated excellent performance on classification tasks. In all tracks, we use the \textit{unsloth/mistral-7b-bnb-4bit} LLM for code-mixed sentiment analysis. For track 1, we used a simple fine-tuning strategy on PLMs, combining data from multiple phases. Our trained systems secured first place in four out of five phases. In addition, we present the results achieved with several PLMs for each language.
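A hedged approximation of this setup with the standard transformers + peft stack (rather than the unsloth wrapper the team used) might look as follows; the LoRA hyperparameters and label format are assumptions, not the system's actual configuration.

```python
# LoRA fine-tuning scaffold for the 4-bit Mistral checkpoint named above.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

ckpt = "unsloth/mistral-7b-bnb-4bit"   # ships pre-quantized in 4-bit bnb format
tokenizer = AutoTokenizer.from_pretrained(ckpt)
model = AutoModelForCausalLM.from_pretrained(ckpt, device_map="auto")

model = prepare_model_for_kbit_training(model)
lora = LoraConfig(r=16, lora_alpha=32, target_modules=["q_proj", "v_proj"])
model = get_peft_model(model, lora)
model.print_trainable_parameters()
# Actual training (e.g. trl's SFTTrainer on "<text> => <label>" pairs) is
# omitted; all hyperparameters above are illustrative, not the paper's.
```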
[ "Thakkar, Gaurish", "Tadi{\\'c}, Marko", "Mikelic Preradovic, Nives" ]
FZZG at WILDRE-7: Fine-tuning Pre-trained Models for Code-mixed, Less-resourced Sentiment Analysis
wildre-1.9
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.wildre-1.10.bib
https://aclanthology.org/2024.wildre-1.10/
@inproceedings{veeramani-etal-2024-mlinitiative, title = "{MLI}nitiative@{WILDRE}7: Hybrid Approaches with Large Language Models for Enhanced Sentiment Analysis in Code-Switched and Code-Mixed Texts", author = "Veeramani, Hariram and Thapa, Surendrabikram and Naseem, Usman", editor = "Jha, Girish Nath and L., Sobha and Bali, Kalika and Ojha, Atul Kr.", booktitle = "Proceedings of the 7th Workshop on Indian Language Data: Resources and Evaluation", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.wildre-1.10", pages = "66--72", abstract = "Code-switched and code-mixed languages are prevalent in multilingual societies, reflecting the complex interplay of cultures and languages in daily communication. Understanding the sentiment embedded in such texts is crucial for a range of applications, from improving social media analytics to enhancing customer feedback systems. Despite their significance, research in code-mixed and code-switched languages remains limited, particularly in less-resourced languages. This scarcity of research creates a gap in natural language processing (NLP) technologies, hindering their ability to accurately interpret the rich linguistic diversity of global communications. To bridge this gap, this paper presents a novel methodology for sentiment analysis in code-mixed and code-switched texts. Our approach combines the power of large language models (LLMs) and the versatility of the multilingual BERT (mBERT) framework to effectively process and analyze sentiments in multilingual data. By decomposing code-mixed texts into their constituent languages, employing mBERT for named entity recognition (NER) and sentiment label prediction, and integrating these insights into a decision-making LLM, we provide a comprehensive framework for understanding sentiment in complex linguistic contexts. Our system achieves competitive rank on all subtasks in the Code-mixed Less-Resourced Sentiment analysis (Code-mixed) shared task at WILDRE-7 (LREC-COLING).", }
Code-switched and code-mixed languages are prevalent in multilingual societies, reflecting the complex interplay of cultures and languages in daily communication. Understanding the sentiment embedded in such texts is crucial for a range of applications, from improving social media analytics to enhancing customer feedback systems. Despite their significance, research in code-mixed and code-switched languages remains limited, particularly in less-resourced languages. This scarcity of research creates a gap in natural language processing (NLP) technologies, hindering their ability to accurately interpret the rich linguistic diversity of global communications. To bridge this gap, this paper presents a novel methodology for sentiment analysis in code-mixed and code-switched texts. Our approach combines the power of large language models (LLMs) and the versatility of the multilingual BERT (mBERT) framework to effectively process and analyze sentiments in multilingual data. By decomposing code-mixed texts into their constituent languages, employing mBERT for named entity recognition (NER) and sentiment label prediction, and integrating these insights into a decision-making LLM, we provide a comprehensive framework for understanding sentiment in complex linguistic contexts. Our system achieves competitive rankings on all subtasks in the Code-mixed Less-Resourced Sentiment analysis (Code-mixed) shared task at WILDRE-7 (LREC-COLING).
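The three-stage pipeline can be summarized as a skeleton in which every component is a stub; the decomposition heuristic, the mBERT heads, and the adjudication prompt below are all assumptions standing in for the actual system.

```python
# Skeleton of the hybrid decompose -> mBERT -> LLM pipeline (all stubs).
def decompose_by_language(text):
    """Stub: crude script-based split of a code-mixed utterance."""
    words = text.split()
    return {"non_ascii": [w for w in words if not w.isascii()],
            "ascii": [w for w in words if w.isascii()]}

def mbert_predictions(text):
    """Stub standing in for mBERT NER and sentiment heads."""
    return {"entities": [], "sentiment": "neutral"}

def llm_adjudicate(text, decomposition, predictions):
    """Stub: build the prompt a decision-making LLM would receive."""
    return (f"Text: {text}\nLanguages: {decomposition}\n"
            f"mBERT says: {predictions}\nFinal sentiment label:")

text = "yeh movie bahut अच्छी thi, loved it"
print(llm_adjudicate(text, decompose_by_language(text), mbert_predictions(text)))
```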
[ "Veeramani, Hariram", "Thapa, Surendrabikram", "Naseem, Usman" ]
MLInitiative@WILDRE7: Hybrid Approaches with Large Language Models for Enhanced Sentiment Analysis in Code-Switched and Code-Mixed Texts
wildre-1.10
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.wildre-1.11.bib
https://aclanthology.org/2024.wildre-1.11/
@inproceedings{abirami-etal-2024-aalamaram, title = "Aalamaram: A Large-Scale Linguistically Annotated Treebank for the {T}amil Language", author = "Abirami, A M and Leong, Wei Qi and Rengarajan, Hamsawardhini and Anitha, D and Suganya, R and Singh, Himanshu and Sarveswaran, Kengatharaiyer and Tjhi, William Chandra and Shah, Rajiv Ratn", editor = "Jha, Girish Nath and L., Sobha and Bali, Kalika and Ojha, Atul Kr.", booktitle = "Proceedings of the 7th Workshop on Indian Language Data: Resources and Evaluation", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.wildre-1.11", pages = "73--83", abstract = "Tamil is a relatively low-resource language in the field of Natural Language Processing (NLP). Recent years have seen a growth in Tamil NLP datasets in Natural Language Understanding (NLU) or Natural Language Generation (NLG) tasks, but high-quality linguistic resources remain scarce. In order to alleviate this gap in resources, this paper introduces Aalamaram, a treebank with rich linguistic annotations for the Tamil language. It is hitherto the largest publicly available Tamil treebank with almost 10,000 sentences from diverse sources and is annotated for the tasks of Part-of-speech (POS) tagging, Named Entity Recognition (NER), Morphological Parsing and Dependency Parsing. Close attention has also been paid to multi-word segmentation, especially in the context of Tamil clitics. Although the treebank is based largely on the Universal Dependencies (UD) specifications, significant effort has been made to adjust the annotation rules according to the idiosyncrasies and complexities of the Tamil language, thereby providing a valuable resource for linguistic research and NLP developments.", }
Tamil is a relatively low-resource language in the field of Natural Language Processing (NLP). Recent years have seen a growth in Tamil NLP datasets in Natural Language Understanding (NLU) or Natural Language Generation (NLG) tasks, but high-quality linguistic resources remain scarce. In order to alleviate this gap in resources, this paper introduces Aalamaram, a treebank with rich linguistic annotations for the Tamil language. It is hitherto the largest publicly available Tamil treebank with almost 10,000 sentences from diverse sources and is annotated for the tasks of Part-of-speech (POS) tagging, Named Entity Recognition (NER), Morphological Parsing and Dependency Parsing. Close attention has also been paid to multi-word segmentation, especially in the context of Tamil clitics. Although the treebank is based largely on the Universal Dependencies (UD) specifications, significant effort has been made to adjust the annotation rules according to the idiosyncrasies and complexities of the Tamil language, thereby providing a valuable resource for linguistic research and NLP developments.
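For readers who want to consume a UD-style resource like Aalamaram programmatically, here is a sketch using the third-party `conllu` parser on an inline two-token Tamil example; the sentence, annotations, and library choice are ours, not the paper's.

```python
# Reading UD-style (CoNLL-U) annotations with the `conllu` library.
from conllu import parse

sample = """\
# text = அவள் வந்தாள்
1\tஅவள்\tஅவள்\tPRON\t_\t_\t2\tnsubj\t_\t_
2\tவந்தாள்\tவா\tVERB\t_\t_\t0\troot\t_\t_

"""
for sentence in parse(sample):
    for token in sentence:
        print(token["form"], token["upos"], token["head"], token["deprel"])
```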
[ "Abirami, A M", "Leong, Wei Qi", "Rengarajan, Hamsawardhini", "Anitha, D", "Suganya, R", "Singh, Himanshu", "Sarveswaran, Kengatharaiyer", "Tjhi, William Ch", "ra", "Shah, Rajiv Ratn" ]
Aalamaram: A Large-Scale Linguistically Annotated Treebank for the Tamil Language
wildre-1.11
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]