Datasets:

Schema (one record per paper; observed minimum and maximum values per field):

  bibtex_url                  string    length 41 to 53
  proceedings                 string    length 38 to 50
  bibtext                     string    length 535 to 2.8k
  abstract                    string    length 0 to 2.04k
  authors                     sequence  length 1 to 31
  title                       string    length 19 to 178
  id                          string    length 7 to 19
  type                        string    1 class
  arxiv_id                    string    length 0 to 10
  GitHub                      sequence  length 1 to 1
  paper_page                  string    124 classes
  n_linked_authors            int64     -1 to 7
  upvotes                     int64     -1 to 79
  num_comments                int64     -1 to 4
  n_authors                   int64     -1 to 22
  paper_page_exists_pre_conf  int64     0 to 1
  Models                      sequence  length 0 to 55
  Datasets                    sequence  length 0 to 46
  Spaces                      sequence  length 0 to 82
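A minimal sketch of loading and filtering records with this schema using pandas, assuming the rows below have been exported to a local JSON Lines file named records.jsonl (the filename is an assumption, not part of the dataset):

```python
# Minimal sketch: load the records and keep papers with a pre-conference
# Hugging Face paper page and at least one linked artifact.
import pandas as pd

df = pd.read_json("records.jsonl", lines=True)

# Models, Datasets and Spaces are list-valued fields; count linked artifacts.
n_artifacts = df["Models"].apply(len) + df["Datasets"].apply(len) + df["Spaces"].apply(len)

subset = df[(df["paper_page_exists_pre_conf"] == 1) & (n_artifacts > 0)]
print(subset[["id", "title", "upvotes", "num_comments"]])
```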
https://aclanthology.org/2024.mwe-1.23.bib
https://aclanthology.org/2024.mwe-1.23/
@inproceedings{alam-etal-2024-universal, title = "{U}niversal {D}ependencies for {S}araiki", author = {Alam, Meesum and Tyers, Francis and Hanink, Emily and K{\"u}bler, Sandra}, editor = {Bhatia, Archna and Bouma, Gosse and Do{\u{g}}ru{\"o}z, A. Seza and Evang, Kilian and Garcia, Marcos and Giouli, Voula and Han, Lifeng and Nivre, Joakim and Rademaker, Alexandre}, booktitle = "Proceedings of the Joint Workshop on Multiword Expressions and Universal Dependencies (MWE-UD) @ LREC-COLING 2024", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.mwe-1.23", pages = "188--197", abstract = "We present the first treebank of the Saraiki/Siraiki [ISO 639-3 skr] language, using the Universal Dependency annotation scheme (de Marneffe et al., 2021). The treebank currently comprises 587 annotated sentences and 7597 tokens. We explain the most relevant syntactic and morphological features of Saraiki, along with the decision we have made for a range of language specific constructions, namely compounds, verbal structures including light verb and serial verb constructions, and relative clauses.", }
We present the first treebank of the Saraiki/Siraiki [ISO 639-3 skr] language, using the Universal Dependency annotation scheme (de Marneffe et al., 2021). The treebank currently comprises 587 annotated sentences and 7597 tokens. We explain the most relevant syntactic and morphological features of Saraiki, along with the decision we have made for a range of language specific constructions, namely compounds, verbal structures including light verb and serial verb constructions, and relative clauses.
[ "Alam, Meesum", "Tyers, Francis", "Hanink, Emily", "K{\\\"u}bler, S", "ra" ]
Universal Dependencies for Saraiki
mwe-1.23
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.mwe-1.24.bib
https://aclanthology.org/2024.mwe-1.24/
@inproceedings{striebel-etal-2024-domain, title = "Domain-Weighted Batch Sampling for Neural Dependency Parsing", author = {Striebel, Jacob and Dakota, Daniel and K{\"u}bler, Sandra}, editor = {Bhatia, Archna and Bouma, Gosse and Do{\u{g}}ru{\"o}z, A. Seza and Evang, Kilian and Garcia, Marcos and Giouli, Voula and Han, Lifeng and Nivre, Joakim and Rademaker, Alexandre}, booktitle = "Proceedings of the Joint Workshop on Multiword Expressions and Universal Dependencies (MWE-UD) @ LREC-COLING 2024", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.mwe-1.24", pages = "198--206", abstract = "In neural dependency parsing, as well as in the broader field of NLP, domain adaptation remains a challenging problem. When adapting a parser to a target domain, there is a fundamental tension between the need to make use of out-of-domain data and the need to ensure that syntactic characteristic of the target domain are learned. In this work we explore a way to balance these two competing concerns, namely using domain-weighted batch sampling, which allows us to use all available training data, while controlling the probability of sampling in- and out-of-domain data when constructing training batches. We conduct experiments using ten natural language domains and find that domain-weighted batch sampling yields substantial performance improvements in all ten domains compared to a baseline of conventional randomized batch sampling.", }
In neural dependency parsing, as well as in the broader field of NLP, domain adaptation remains a challenging problem. When adapting a parser to a target domain, there is a fundamental tension between the need to make use of out-of-domain data and the need to ensure that syntactic characteristic of the target domain are learned. In this work we explore a way to balance these two competing concerns, namely using domain-weighted batch sampling, which allows us to use all available training data, while controlling the probability of sampling in- and out-of-domain data when constructing training batches. We conduct experiments using ten natural language domains and find that domain-weighted batch sampling yields substantial performance improvements in all ten domains compared to a baseline of conventional randomized batch sampling.
[ "Striebel, Jacob", "Dakota, Daniel", "K{\\\"u}bler, S", "ra" ]
Domain-Weighted Batch Sampling for Neural Dependency Parsing
mwe-1.24
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
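The mwe-1.24 abstract above describes domain-weighted batch sampling: each training batch mixes in- and out-of-domain examples according to a controlled probability. A minimal sketch of that idea follows; the in-domain weight and batch size are illustrative choices, not values from the paper.

```python
# Minimal sketch of domain-weighted batch sampling: every batch slot is
# filled from the in-domain pool with probability p_in, otherwise from
# the out-of-domain pool.
import random

def sample_batch(in_domain, out_of_domain, batch_size=32, p_in=0.7):
    """Draw one training batch with a fixed in-domain sampling probability."""
    batch = []
    for _ in range(batch_size):
        pool = in_domain if random.random() < p_in else out_of_domain
        batch.append(random.choice(pool))
    return batch

# Toy usage with placeholder sentences.
in_domain = [f"in-domain sentence {i}" for i in range(100)]
out_of_domain = [f"out-of-domain sentence {i}" for i in range(1000)]
print(sample_batch(in_domain, out_of_domain, batch_size=4))
```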
https://aclanthology.org/2024.mwe-1.25.bib
https://aclanthology.org/2024.mwe-1.25/
@inproceedings{washington-etal-2024-strategies, title = "Strategies for the Annotation of Pronominalised Locatives in {T}urkic {U}niversal {D}ependency Treebanks", author = {Washington, Jonathan and {\c{C}}{\"o}ltekin, {\c{C}}a{\u{g}}r{\i} and Akkurt, Furkan and Chontaeva, Bermet and Eslami, Soudabeh and Jumalieva, Gulnura and Kasieva, Aida and Kuzgun, Asl{\i} and Mar{\c{s}}an, B{\"u}{\c{s}}ra and Taguchi, Chihiro}, editor = {Bhatia, Archna and Bouma, Gosse and Do{\u{g}}ru{\"o}z, A. Seza and Evang, Kilian and Garcia, Marcos and Giouli, Voula and Han, Lifeng and Nivre, Joakim and Rademaker, Alexandre}, booktitle = "Proceedings of the Joint Workshop on Multiword Expressions and Universal Dependencies (MWE-UD) @ LREC-COLING 2024", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.mwe-1.25", pages = "207--219", abstract = "As part of our efforts to develop unified Universal Dependencies (UD) guidelines for Turkic languages, we evaluate multiple approaches to a difficult morphosyntactic phenomenon, pronominal locative expressions formed by a suffix -ki. These forms result in multiple syntactic words, with potentially conflicting morphological features, and participating in different dependency relations. We describe multiple approaches to the problem in current (and upcoming) Turkic UD treebanks, and show that none of them offers a solution that satisfies a number of constraints we consider (including constraints imposed by UD guidelines). This calls for a compromise with the {`}least damage{'} that should be adopted by most, if not all, Turkic treebanks. Our discussion of the phenomenon and various annotation approaches may also help treebanking efforts for other languages or language families with similar constructions.", }
As part of our efforts to develop unified Universal Dependencies (UD) guidelines for Turkic languages, we evaluate multiple approaches to a difficult morphosyntactic phenomenon, pronominal locative expressions formed by a suffix -ki. These forms result in multiple syntactic words, with potentially conflicting morphological features, and participating in different dependency relations. We describe multiple approaches to the problem in current (and upcoming) Turkic UD treebanks, and show that none of them offers a solution that satisfies a number of constraints we consider (including constraints imposed by UD guidelines). This calls for a compromise with the {`}least damage{'} that should be adopted by most, if not all, Turkic treebanks. Our discussion of the phenomenon and various annotation approaches may also help treebanking efforts for other languages or language families with similar constructions.
[ "Washington, Jonathan", "{\\c{C}}{\\\"o}ltekin, {\\c{C}}a{\\u{g}}r{\\i}", "Akkurt, Furkan", "Chontaeva, Bermet", "Eslami, Soudabeh", "Jumalieva, Gulnura", "Kasieva, Aida", "Kuzgun, Asl{\\i}", "Mar{\\c{s}}an, B{\\\"u}{\\c{s}}ra", "Taguchi, Chihiro" ]
Strategies for the Annotation of Pronominalised Locatives in Turkic Universal Dependency Treebanks
mwe-1.25
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.mwe-1.26.bib
https://aclanthology.org/2024.mwe-1.26/
@inproceedings{yayavaram-etal-2024-bert, title = "{BERT}-based Idiom Identification using Language Translation and Word Cohesion", author = "Yayavaram, Arnav and Yayavaram, Siddharth and Upadhyay, Prajna Devi and Das, Apurba", editor = {Bhatia, Archna and Bouma, Gosse and Do{\u{g}}ru{\"o}z, A. Seza and Evang, Kilian and Garcia, Marcos and Giouli, Voula and Han, Lifeng and Nivre, Joakim and Rademaker, Alexandre}, booktitle = "Proceedings of the Joint Workshop on Multiword Expressions and Universal Dependencies (MWE-UD) @ LREC-COLING 2024", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.mwe-1.26", pages = "220--230", abstract = "An idiom refers to a special type of multi-word expression whose meaning is figurative and cannot be deduced from the literal interpretation of its components. Idioms are prevalent in almost all languages and text genres, necessitating explicit handling by comprehensive NLP systems. Such phrases are referred to as Potentially Idiomatic Expressions (PIEs) and automatically identifying them in text is a challenging task. In this paper, we propose using a BERT-based model fine-tuned with custom objectives, to improve the accuracy of detecting PIEs in text. Our custom loss functions capture two important properties (word cohesion and language translation) to distinguish PIEs from non-PIEs. We conducted several experiments on 7 datasets and showed that incorporating custom objectives while training the model leads to substantial gains. Our models trained using this approach also have better sequence accuracy over DISC, a state-of-the-art PIE detection technique, along with good transfer capabilities.", }
An idiom refers to a special type of multi-word expression whose meaning is figurative and cannot be deduced from the literal interpretation of its components. Idioms are prevalent in almost all languages and text genres, necessitating explicit handling by comprehensive NLP systems. Such phrases are referred to as Potentially Idiomatic Expressions (PIEs) and automatically identifying them in text is a challenging task. In this paper, we propose using a BERT-based model fine-tuned with custom objectives, to improve the accuracy of detecting PIEs in text. Our custom loss functions capture two important properties (word cohesion and language translation) to distinguish PIEs from non-PIEs. We conducted several experiments on 7 datasets and showed that incorporating custom objectives while training the model leads to substantial gains. Our models trained using this approach also have better sequence accuracy over DISC, a state-of-the-art PIE detection technique, along with good transfer capabilities.
[ "Yayavaram, Arnav", "Yayavaram, Siddharth", "Upadhyay, Prajna Devi", "Das, Apurba" ]
BERT-based Idiom Identification using Language Translation and Word Cohesion
mwe-1.26
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.mwe-1.27.bib
https://aclanthology.org/2024.mwe-1.27/
@inproceedings{yu-etal-2024-ad, title = "Ad Hoc Compounds for Stance Detection", author = "Yu, Qi and Schlotterbeck, Fabian and Wang, Hening and Reichmann, Naomi and Stolterfoht, Britta and Eckardt, Regine and Butt, Miriam", editor = {Bhatia, Archna and Bouma, Gosse and Do{\u{g}}ru{\"o}z, A. Seza and Evang, Kilian and Garcia, Marcos and Giouli, Voula and Han, Lifeng and Nivre, Joakim and Rademaker, Alexandre}, booktitle = "Proceedings of the Joint Workshop on Multiword Expressions and Universal Dependencies (MWE-UD) @ LREC-COLING 2024", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.mwe-1.27", pages = "231--242", abstract = "In this paper we focus on a subclass of multi-word expressions, namely compound formation in German. The automatic detection of compounds is a known problem and we argue that its resolution should be given more urgency in light of a new role we uncovered with respect to ad hoc compound formation: the systematic expression of attitudinal meaning and its potential importance for the down-stream NLP task of stance detection. We demonstrate that ad hoc compounds in German indeed systematically express attitudinal meaning by adducing corpus linguistic and psycholinguistic experimental data. However, an investigation of state-of-the-art dependency parsers and Universal Dependency treebanks shows that German compounds are parsed and annotated very unevenly, so that currently one cannot reliably identify or access ad hoc compounds with attitudinal meaning in texts. Moreover, we report initial experiments with large language models underlining the challenges in capturing attitudinal meanings conveyed by ad hoc compounds. We consequently suggest a systematized way of annotating (and thereby also parsing) ad hoc compounds that is based on positive experiences from within the multilingual ParGram grammar development effort.", }
In this paper we focus on a subclass of multi-word expressions, namely compound formation in German. The automatic detection of compounds is a known problem and we argue that its resolution should be given more urgency in light of a new role we uncovered with respect to ad hoc compound formation: the systematic expression of attitudinal meaning and its potential importance for the down-stream NLP task of stance detection. We demonstrate that ad hoc compounds in German indeed systematically express attitudinal meaning by adducing corpus linguistic and psycholinguistic experimental data. However, an investigation of state-of-the-art dependency parsers and Universal Dependency treebanks shows that German compounds are parsed and annotated very unevenly, so that currently one cannot reliably identify or access ad hoc compounds with attitudinal meaning in texts. Moreover, we report initial experiments with large language models underlining the challenges in capturing attitudinal meanings conveyed by ad hoc compounds. We consequently suggest a systematized way of annotating (and thereby also parsing) ad hoc compounds that is based on positive experiences from within the multilingual ParGram grammar development effort.
[ "Yu, Qi", "Schlotterbeck, Fabian", "Wang, Hening", "Reichmann, Naomi", "Stolterfoht, Britta", "Eckardt, Regine", "Butt, Miriam" ]
Ad Hoc Compounds for Stance Detection
mwe-1.27
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.neusymbridge-1.1.bib
https://aclanthology.org/2024.neusymbridge-1.1/
@inproceedings{wang-etal-2024-probing, title = "Probing Large Language Models from a Human Behavioral Perspective", author = "Wang, Xintong and Li, Xiaoyu and Li, Xingshan and Biemann, Chris", editor = "Dong, Tiansi and Hinrichs, Erhard and Han, Zhen and Liu, Kang and Song, Yangqiu and Cao, Yixin and Hempelmann, Christian F. and Sifa, Rafet", booktitle = "Proceedings of the Workshop: Bridging Neurons and Symbols for Natural Language Processing and Knowledge Graphs Reasoning (NeusymBridge) @ LREC-COLING-2024", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.neusymbridge-1.1", pages = "1--7", abstract = "Large Language Models (LLMs) have emerged as dominant foundational models in modern NLP. However, the understanding of their prediction processes and internal mechanisms, such as feed-forward networks (FFN) and multi-head self-attention (MHSA), remains largely unexplored. In this work, we probe LLMs from a human behavioral perspective, correlating values from LLMs with eye-tracking measures, which are widely recognized as meaningful indicators of human reading patterns. Our findings reveal that LLMs exhibit a similar prediction pattern with humans but distinct from that of Shallow Language Models (SLMs). Moreover, with the escalation of LLM layers from the middle layers, the correlation coefficients also increase in FFN and MHSA, indicating that the logits within FFN increasingly encapsulate word semantics suitable for predicting tokens from the vocabulary.", }
Large Language Models (LLMs) have emerged as dominant foundational models in modern NLP. However, the understanding of their prediction processes and internal mechanisms, such as feed-forward networks (FFN) and multi-head self-attention (MHSA), remains largely unexplored. In this work, we probe LLMs from a human behavioral perspective, correlating values from LLMs with eye-tracking measures, which are widely recognized as meaningful indicators of human reading patterns. Our findings reveal that LLMs exhibit a similar prediction pattern with humans but distinct from that of Shallow Language Models (SLMs). Moreover, with the escalation of LLM layers from the middle layers, the correlation coefficients also increase in FFN and MHSA, indicating that the logits within FFN increasingly encapsulate word semantics suitable for predicting tokens from the vocabulary.
[ "Wang, Xintong", "Li, Xiaoyu", "Li, Xingshan", "Biemann, Chris" ]
Probing Large Language Models from a Human Behavioral Perspective
neusymbridge-1.1
Poster
2310.05216
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
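The neusymbridge-1.1 abstract above correlates values extracted from LLM components with eye-tracking measures. A minimal sketch of such a correlation, using made-up placeholder numbers rather than the paper's data:

```python
# Minimal sketch: rank-correlate per-word model values with per-word
# eye-tracking measures. All numbers are placeholders.
from scipy.stats import spearmanr

model_values = [2.1, 0.4, 3.8, 1.2, 0.9]    # e.g. per-word FFN logit magnitudes
reading_times = [310, 180, 420, 250, 200]   # e.g. per-word gaze durations (ms)

rho, p_value = spearmanr(model_values, reading_times)
print(f"Spearman rho={rho:.3f}, p={p_value:.3f}")
```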
https://aclanthology.org/2024.neusymbridge-1.2.bib
https://aclanthology.org/2024.neusymbridge-1.2/
@inproceedings{tseng-etal-2024-semantic, title = "The Semantic Relations in {LLM}s: An Information-theoretic Compression Approach", author = "Tseng, Yu-Hsiang and Chen, Pin-Er and Lian, Da-Chen and Hsieh, Shu-Kai", editor = "Dong, Tiansi and Hinrichs, Erhard and Han, Zhen and Liu, Kang and Song, Yangqiu and Cao, Yixin and Hempelmann, Christian F. and Sifa, Rafet", booktitle = "Proceedings of the Workshop: Bridging Neurons and Symbols for Natural Language Processing and Knowledge Graphs Reasoning (NeusymBridge) @ LREC-COLING-2024", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.neusymbridge-1.2", pages = "8--21", abstract = "Compressibility is closely related to the predictability of the texts from the information theory viewpoint. As large language models (LLMs) are trained to maximize the conditional probabilities of upcoming words, they may capture the subtlety and nuances of the semantic constraints underlying the texts, and texts aligning with the encoded semantic constraints are more compressible than those that do not. This paper systematically tests whether and how LLMs can act as compressors of semantic pairs. Using semantic relations from English and Chinese Wordnet, we empirically demonstrate that texts with correct semantic pairings are more compressible than incorrect ones, measured by the proposed compression advantages index. We also show that, with the Pythia model suite and a fine-tuned model on Chinese Wordnet, compression capacities are modulated by the model{'}s seen data. These findings are consistent with the view that LLMs encode the semantic knowledge as underlying constraints learned from texts and can act as compressors of semantic information or potentially other structured knowledge.", }
Compressibility is closely related to the predictability of the texts from the information theory viewpoint. As large language models (LLMs) are trained to maximize the conditional probabilities of upcoming words, they may capture the subtlety and nuances of the semantic constraints underlying the texts, and texts aligning with the encoded semantic constraints are more compressible than those that do not. This paper systematically tests whether and how LLMs can act as compressors of semantic pairs. Using semantic relations from English and Chinese Wordnet, we empirically demonstrate that texts with correct semantic pairings are more compressible than incorrect ones, measured by the proposed compression advantages index. We also show that, with the Pythia model suite and a fine-tuned model on Chinese Wordnet, compression capacities are modulated by the model{'}s seen data. These findings are consistent with the view that LLMs encode the semantic knowledge as underlying constraints learned from texts and can act as compressors of semantic information or potentially other structured knowledge.
[ "Tseng, Yu-Hsiang", "Chen, Pin-Er", "Lian, Da-Chen", "Hsieh, Shu-Kai" ]
The Semantic Relations in LLMs: An Information-theoretic Compression Approach
neusymbridge-1.2
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
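The neusymbridge-1.2 abstract above treats LLMs as compressors: text that respects the encoded semantic constraints should need fewer bits (lower negative log-likelihood) than text that violates them. A minimal sketch of that comparison follows; the model choice (gpt2) and the sentence templates are assumptions, and the paper's compression advantages index is not reproduced here.

```python
# Minimal sketch: a correct semantic pairing should be more compressible
# (fewer total bits) under a language model than an incorrect one.
import math
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def bits(text: str) -> float:
    """Total negative log-likelihood of the text under the model, in bits."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss  # mean NLL per predicted token (nats)
    return loss.item() * (ids.shape[1] - 1) / math.log(2)

correct = "A dog is a kind of animal."
incorrect = "A dog is a kind of furniture."
print(bits(correct), bits(incorrect))  # the correct pairing should need fewer bits
```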
https://aclanthology.org/2024.neusymbridge-1.3.bib
https://aclanthology.org/2024.neusymbridge-1.3/
@inproceedings{dong-sifa-2024-word, title = "Word Sense Disambiguation as a Game of Neurosymbolic Darts", author = "Dong, Tiansi and Sifa, Rafet", editor = "Dong, Tiansi and Hinrichs, Erhard and Han, Zhen and Liu, Kang and Song, Yangqiu and Cao, Yixin and Hempelmann, Christian F. and Sifa, Rafet", booktitle = "Proceedings of the Workshop: Bridging Neurons and Symbols for Natural Language Processing and Knowledge Graphs Reasoning (NeusymBridge) @ LREC-COLING-2024", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.neusymbridge-1.3", pages = "22--32", abstract = "Word Sense Disambiguation (WSD) is one of the hardest tasks in natural language understanding and knowledge engineering. The glass ceiling of the 80{\%} F1 score is recently achieved through supervised learning, enriched by knowledge graphs. Here, we propose a novel neurosymbolic methodology that may push the F1 score above 90{\%}. The core of our methodology is a neurosymbolic sense embedding, in terms of a configuration of nested n-dimensional balls. The central point of a ball well preserves pre-trained word embeddings learned from data, which partially fixes the locations of balls. Inclusion relations among balls precisely encode symbolic hypernym relations among senses, and enable simple logic deduction among sense embeddings. We trained a Transformer to learn the mapping from a contextualized word embedding to its sense ball embedding, just like playing the game of darts (a game of shooting darts into a dartboard). A series of experiments are carried out using pre-training n ball embeddings, which cover around 70{\%} training data and 75{\%} testing data in the benchmark WSD corpus. Euclidean distance and cosine similarity functions are used as objective functions, separately, and each reaches {\textgreater}95.0{\%} F1 score in the ALL-n-ball dataset. This substantially breaks the glass ceiling of deep learning methods. Future work is discussed to develop a full-fledged neurosymbolic WSD system that substantially outperforms deep learning approaches.", }
Word Sense Disambiguation (WSD) is one of the hardest tasks in natural language understanding and knowledge engineering. The glass ceiling of the 80{\%} F1 score is recently achieved through supervised learning, enriched by knowledge graphs. Here, we propose a novel neurosymbolic methodology that may push the F1 score above 90{\%}. The core of our methodology is a neurosymbolic sense embedding, in terms of a configuration of nested n-dimensional balls. The central point of a ball well preserves pre-trained word embeddings learned from data, which partially fixes the locations of balls. Inclusion relations among balls precisely encode symbolic hypernym relations among senses, and enable simple logic deduction among sense embeddings. We trained a Transformer to learn the mapping from a contextualized word embedding to its sense ball embedding, just like playing the game of darts (a game of shooting darts into a dartboard). A series of experiments are carried out using pre-training n ball embeddings, which cover around 70{\%} training data and 75{\%} testing data in the benchmark WSD corpus. Euclidean distance and cosine similarity functions are used as objective functions, separately, and each reaches {\textgreater}95.0{\%} F1 score in the ALL-n-ball dataset. This substantially breaks the glass ceiling of deep learning methods. Future work is discussed to develop a full-fledged neurosymbolic WSD system that substantially outperforms deep learning approaches.
[ "Dong, Tiansi", "Sifa, Rafet" ]
Word Sense Disambiguation as a Game of Neurosymbolic Darts
neusymbridge-1.3
Poster
2307.16663
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
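The neusymbridge-1.3 abstract above encodes senses as nested n-dimensional balls, with hypernymy expressed as ball inclusion. A minimal sketch of the inclusion test, using made-up 2-dimensional centers and radii rather than trained embeddings:

```python
# Minimal sketch: a sense is a ball (center, radius); hypernymy is encoded
# as ball inclusion. Centers and radii below are illustrative, not trained.
import numpy as np

def contains(outer_center, outer_radius, inner_center, inner_radius):
    """Ball A contains ball B iff dist(center_A, center_B) + radius_B <= radius_A."""
    dist = np.linalg.norm(np.asarray(outer_center) - np.asarray(inner_center))
    return dist + inner_radius <= outer_radius

animal = ([0.0, 0.0], 5.0)  # hypernym sense
dog = ([1.0, 1.0], 2.0)     # hyponym sense
print(contains(*animal, *dog))  # True: the "dog" ball is nested inside "animal"
```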
https://aclanthology.org/2024.neusymbridge-1.4.bib
https://aclanthology.org/2024.neusymbridge-1.4/
@inproceedings{luo-etal-2024-open, title = "Open Event Causality Extraction by the Assistance of {LLM} in Task Annotation, Dataset, and Method", author = "Luo, Kun and Zhou, Tong and Chen, Yubo and Zhao, Jun and Liu, Kang", editor = "Dong, Tiansi and Hinrichs, Erhard and Han, Zhen and Liu, Kang and Song, Yangqiu and Cao, Yixin and Hempelmann, Christian F. and Sifa, Rafet", booktitle = "Proceedings of the Workshop: Bridging Neurons and Symbols for Natural Language Processing and Knowledge Graphs Reasoning (NeusymBridge) @ LREC-COLING-2024", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.neusymbridge-1.4", pages = "33--44", abstract = "Event Causality Extraction (ECE) aims to extract explicit causal relations between event pairs from the text. However, the event boundary deviation and the causal event pair mismatching are two crucial challenges that remain unaddressed. To address the above issues, we propose a paradigm to utilize LLM to optimize the task definition, evolve the datasets, and strengthen our proposed customized Contextual Highlighting Event Causality Extraction framework (CHECE). Specifically in CHECE, we propose an Event Highlighter and an Event Concretization Module, guiding the model to represent the event by a higher-level cluster and consider its causal counterpart in event boundary prediction to deal with event boundary deviation. And we propose a Contextual Event Causality Matching mechanism, meanwhile, applying LLM to diversify the content templates to force the model to learn causality from context to targeting on causal event pair mismatching. Experimental results on two ECE datasets demonstrate the effectiveness of our method.", }
Event Causality Extraction (ECE) aims to extract explicit causal relations between event pairs from the text. However, the event boundary deviation and the causal event pair mismatching are two crucial challenges that remain unaddressed. To address the above issues, we propose a paradigm to utilize LLM to optimize the task definition, evolve the datasets, and strengthen our proposed customized Contextual Highlighting Event Causality Extraction framework (CHECE). Specifically in CHECE, we propose an Event Highlighter and an Event Concretization Module, guiding the model to represent the event by a higher-level cluster and consider its causal counterpart in event boundary prediction to deal with event boundary deviation. And we propose a Contextual Event Causality Matching mechanism, meanwhile, applying LLM to diversify the content templates to force the model to learn causality from context to targeting on causal event pair mismatching. Experimental results on two ECE datasets demonstrate the effectiveness of our method.
[ "Luo, Kun", "Zhou, Tong", "Chen, Yubo", "Zhao, Jun", "Liu, Kang" ]
Open Event Causality Extraction by the Assistance of LLM in Task Annotation, Dataset, and Method
neusymbridge-1.4
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.neusymbridge-1.5.bib
https://aclanthology.org/2024.neusymbridge-1.5/
@inproceedings{jokinen-2024-need, title = "The Need for Grounding in {LLM}-based Dialogue Systems", author = "Jokinen, Kristiina", editor = "Dong, Tiansi and Hinrichs, Erhard and Han, Zhen and Liu, Kang and Song, Yangqiu and Cao, Yixin and Hempelmann, Christian F. and Sifa, Rafet", booktitle = "Proceedings of the Workshop: Bridging Neurons and Symbols for Natural Language Processing and Knowledge Graphs Reasoning (NeusymBridge) @ LREC-COLING-2024", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.neusymbridge-1.5", pages = "45--52", abstract = "Grounding is a pertinent part of the design of LLM-based dialogue systems. Although research on grounding has a long tradition, the paradigm shift caused by LLMs has brought the concept onto the foreground, in particular in the context of cognitive robotics. To avoid generation of irrelevant or false information, the system needs to ground its utterances into real-world events, and to avoid the statistical parrot effect, the system needs to construct shared understanding of the dialogue context and of the partner{'}s intents. Grounding and construction of the shared context enables cooperation between the participants, and thus supports trustworthy interaction. This paper discusses grounding using neural LLM technology. It aims to bridge neural and symbolic computing on the cognitive architecture level, so as to contribute to a better understanding of how conversational reasoning and collaboration can be linked to LLM implementations to support trustworthy and flexible interaction.", }
Grounding is a pertinent part of the design of LLM-based dialogue systems. Although research on grounding has a long tradition, the paradigm shift caused by LLMs has brought the concept onto the foreground, in particular in the context of cognitive robotics. To avoid generation of irrelevant or false information, the system needs to ground its utterances into real-world events, and to avoid the statistical parrot effect, the system needs to construct shared understanding of the dialogue context and of the partner{'}s intents. Grounding and construction of the shared context enables cooperation between the participants, and thus supports trustworthy interaction. This paper discusses grounding using neural LLM technology. It aims to bridge neural and symbolic computing on the cognitive architecture level, so as to contribute to a better understanding of how conversational reasoning and collaboration can be linked to LLM implementations to support trustworthy and flexible interaction.
[ "Jokinen, Kristiina" ]
The Need for Grounding in LLM-based Dialogue Systems
neusymbridge-1.5
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.nlperspectives-1.1.bib
https://aclanthology.org/2024.nlperspectives-1.1/
@inproceedings{parrish-etal-2024-picture, title = "Is a picture of a bird a bird? A mixed-methods approach to understanding diverse human perspectives and ambiguity in machine vision models", author = "Parrish, Alicia and Hao, Susan and Laszlo, Sarah and Aroyo, Lora", editor = "Abercrombie, Gavin and Basile, Valerio and Bernadi, Davide and Dudy, Shiran and Frenda, Simona and Havens, Lucy and Tonelli, Sara", booktitle = "Proceedings of the 3rd Workshop on Perspectivist Approaches to NLP (NLPerspectives) @ LREC-COLING 2024", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.nlperspectives-1.1", pages = "1--18", abstract = "Human experiences are complex and subjective. This subjectivity is reflected in the way people label images for machine vision models. While annotation tasks are often assumed to deliver objective results, this assumption does not allow for the subjectivity of human experience. This paper examines the implications of subjective human judgments in the behavioral task of labeling images used to train machine vision models. We identify three primary sources of ambiguity: (1) depictions of labels in the images can be simply ambiguous, (2) raters{'} backgrounds and experiences can influence their judgments and (3) the way the labeling task is defined can also influence raters{'} judgments. By taking steps to address these sources of ambiguity, we can create more robust and reliable machine vision models.", }
Human experiences are complex and subjective. This subjectivity is reflected in the way people label images for machine vision models. While annotation tasks are often assumed to deliver objective results, this assumption does not allow for the subjectivity of human experience. This paper examines the implications of subjective human judgments in the behavioral task of labeling images used to train machine vision models. We identify three primary sources of ambiguity: (1) depictions of labels in the images can be simply ambiguous, (2) raters{'} backgrounds and experiences can influence their judgments and (3) the way the labeling task is defined can also influence raters{'} judgments. By taking steps to address these sources of ambiguity, we can create more robust and reliable machine vision models.
[ "Parrish, Alicia", "Hao, Susan", "Laszlo, Sarah", "Aroyo, Lora" ]
Is a picture of a bird a bird? A mixed-methods approach to understanding diverse human perspectives and ambiguity in machine vision models
nlperspectives-1.1
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.nlperspectives-1.2.bib
https://aclanthology.org/2024.nlperspectives-1.2/
@inproceedings{plaza-del-arco-etal-2024-wisdom, title = "Wisdom of Instruction-Tuned Language Model Crowds. Exploring Model Label Variation", author = "Plaza-del-Arco, Flor Miriam and Nozza, Debora and Hovy, Dirk", editor = "Abercrombie, Gavin and Basile, Valerio and Bernadi, Davide and Dudy, Shiran and Frenda, Simona and Havens, Lucy and Tonelli, Sara", booktitle = "Proceedings of the 3rd Workshop on Perspectivist Approaches to NLP (NLPerspectives) @ LREC-COLING 2024", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.nlperspectives-1.2", pages = "19--30", abstract = "Large Language Models (LLMs) exhibit remarkable text classification capabilities, excelling in zero- and few-shot learning (ZSL and FSL) scenarios. However, since they are trained on different datasets, performance varies widely across tasks between those models. Recent studies emphasize the importance of considering human label variation in data annotation. However, how this human label variation also applies to LLMs remains unexplored. Given this likely model specialization, we ask: Do aggregate LLM labels improve over individual models (as for human annotators)? We evaluate four recent instruction-tuned LLMs as {``}annotators{''} on five subjective tasks across four languages. We use ZSL and FSL setups and label aggregation from human annotation. Aggregations are indeed substantially better than any individual model, benefiting from specialization in diverse tasks or languages. Surprisingly, FSL does not surpass ZSL, as it depends on the quality of the selected examples. However, there seems to be no good information-theoretical strategy to select those. We find that no LLM method rivals even simple supervised models. We also discuss the tradeoffs in accuracy, cost, and moral/ethical considerations between LLM and human annotation.", }
Large Language Models (LLMs) exhibit remarkable text classification capabilities, excelling in zero- and few-shot learning (ZSL and FSL) scenarios. However, since they are trained on different datasets, performance varies widely across tasks between those models. Recent studies emphasize the importance of considering human label variation in data annotation. However, how this human label variation also applies to LLMs remains unexplored. Given this likely model specialization, we ask: Do aggregate LLM labels improve over individual models (as for human annotators)? We evaluate four recent instruction-tuned LLMs as {``}annotators{''} on five subjective tasks across four languages. We use ZSL and FSL setups and label aggregation from human annotation. Aggregations are indeed substantially better than any individual model, benefiting from specialization in diverse tasks or languages. Surprisingly, FSL does not surpass ZSL, as it depends on the quality of the selected examples. However, there seems to be no good information-theoretical strategy to select those. We find that no LLM method rivals even simple supervised models. We also discuss the tradeoffs in accuracy, cost, and moral/ethical considerations between LLM and human annotation.
[ "Plaza-del-Arco, Flor Miriam", "Nozza, Debora", "Hovy, Dirk" ]
Wisdom of Instruction-Tuned Language Model Crowds. Exploring Model Label Variation
nlperspectives-1.2
Poster
2307.12973
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
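The nlperspectives-1.2 abstract above aggregates labels from several instruction-tuned LLMs treated as annotators. A minimal sketch of one simple aggregation scheme, plain majority voting over per-model labels; the model names and labels are placeholders, not the paper's setup.

```python
# Minimal sketch: aggregate labels from several LLM "annotators" by
# per-item majority vote.
from collections import Counter

def aggregate(labels_per_model: dict[str, list[str]]) -> list[str]:
    """Majority vote per item across models; ties resolved by count order."""
    models = list(labels_per_model)
    n_items = len(labels_per_model[models[0]])
    aggregated = []
    for i in range(n_items):
        votes = Counter(labels_per_model[m][i] for m in models)
        aggregated.append(votes.most_common(1)[0][0])
    return aggregated

labels = {
    "model_a": ["hate", "not_hate", "hate"],
    "model_b": ["hate", "hate", "not_hate"],
    "model_c": ["not_hate", "hate", "hate"],
}
print(aggregate(labels))  # -> ['hate', 'hate', 'hate']
```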
https://aclanthology.org/2024.nlperspectives-1.3.bib
https://aclanthology.org/2024.nlperspectives-1.3/
@inproceedings{abercrombie-etal-2024-revisiting, title = "Revisiting Annotation of Online Gender-Based Violence", author = "Abercrombie, Gavin and Vitsakis, Nikolas and Jiang, Aiqi and Konstas, Ioannis", editor = "Abercrombie, Gavin and Basile, Valerio and Bernadi, Davide and Dudy, Shiran and Frenda, Simona and Havens, Lucy and Tonelli, Sara", booktitle = "Proceedings of the 3rd Workshop on Perspectivist Approaches to NLP (NLPerspectives) @ LREC-COLING 2024", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.nlperspectives-1.3", pages = "31--41", abstract = "Online Gender-Based Violence is an increasing problem, but existing datasets fail to capture the plurality of possible annotator perspectives or ensure representation of affected groups. In a pilot study, we revisit the annotation of a widely used dataset to investigate the relationship between annotator identities and underlying attitudes and the responses they give to a sexism labelling task. We collect demographic and attitudinal information about crowd-sourced annotators using two validated surveys from Social Psychology. While we do not find any correlation between underlying attitudes and annotation behaviour, ethnicity does appear to be related to annotator responses for this pool of crowd-workers. We also conduct initial classification experiments using Large Language Models, finding that a state-of-the-art model trained with human feedback benefits from our broad data collection to perform better on the new labels. This study represents the initial stages of a wider data collection project, in which we aim to develop a taxonomy of GBV in partnership with affected stakeholders.", }
Online Gender-Based Violence is an increasing problem, but existing datasets fail to capture the plurality of possible annotator perspectives or ensure representation of affected groups. In a pilot study, we revisit the annotation of a widely used dataset to investigate the relationship between annotator identities and underlying attitudes and the responses they give to a sexism labelling task. We collect demographic and attitudinal information about crowd-sourced annotators using two validated surveys from Social Psychology. While we do not find any correlation between underlying attitudes and annotation behaviour, ethnicity does appear to be related to annotator responses for this pool of crowd-workers. We also conduct initial classification experiments using Large Language Models, finding that a state-of-the-art model trained with human feedback benefits from our broad data collection to perform better on the new labels. This study represents the initial stages of a wider data collection project, in which we aim to develop a taxonomy of GBV in partnership with affected stakeholders.
[ "Abercrombie, Gavin", "Vitsakis, Nikolas", "Jiang, Aiqi", "Konstas, Ioannis" ]
Revisiting Annotation of Online Gender-Based Violence
nlperspectives-1.3
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.nlperspectives-1.4.bib
https://aclanthology.org/2024.nlperspectives-1.4/
@inproceedings{may-etal-2024-perspectivist, title = "A Perspectivist Corpus of Numbers in Social Judgements", author = "May, Marlon and Flek, Lucie and Welch, Charles", editor = "Abercrombie, Gavin and Basile, Valerio and Bernadi, Davide and Dudy, Shiran and Frenda, Simona and Havens, Lucy and Tonelli, Sara", booktitle = "Proceedings of the 3rd Workshop on Perspectivist Approaches to NLP (NLPerspectives) @ LREC-COLING 2024", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.nlperspectives-1.4", pages = "42--48", abstract = "With growing interest in the use of large language models, it is becoming increasingly important to understand whose views they express. These models tend to generate output that conforms to majority opinion and are not representative of diverse views. As a step toward building models that can take differing views into consideration, we build a novel corpus of social judgements. We crowdsourced annotations of a subset of the Commonsense Norm Bank that contained numbers in the situation descriptions and asked annotators to replace the number with a range defined by a start and end value that, in their view, correspond to the given verdict. Our corpus contains unaggregated annotations and annotator demographics. We describe our annotation process for social judgements and will release our dataset to support future work on numerical reasoning and perspectivist approaches to natural language processing.", }
With growing interest in the use of large language models, it is becoming increasingly important to understand whose views they express. These models tend to generate output that conforms to majority opinion and are not representative of diverse views. As a step toward building models that can take differing views into consideration, we build a novel corpus of social judgements. We crowdsourced annotations of a subset of the Commonsense Norm Bank that contained numbers in the situation descriptions and asked annotators to replace the number with a range defined by a start and end value that, in their view, correspond to the given verdict. Our corpus contains unaggregated annotations and annotator demographics. We describe our annotation process for social judgements and will release our dataset to support future work on numerical reasoning and perspectivist approaches to natural language processing.
[ "May, Marlon", "Flek, Lucie", "Welch, Charles" ]
A Perspectivist Corpus of Numbers in Social Judgements
nlperspectives-1.4
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.nlperspectives-1.5.bib
https://aclanthology.org/2024.nlperspectives-1.5/
@inproceedings{muscato-etal-2024-overview, title = "An Overview of Recent Approaches to Enable Diversity in Large Language Models through Aligning with Human Perspectives", author = "Muscato, Benedetta and Mala, Chandana Sree and Marchiori Manerba, Marta and Gezici, Gizem and Giannotti, Fosca", editor = "Abercrombie, Gavin and Basile, Valerio and Bernadi, Davide and Dudy, Shiran and Frenda, Simona and Havens, Lucy and Tonelli, Sara", booktitle = "Proceedings of the 3rd Workshop on Perspectivist Approaches to NLP (NLPerspectives) @ LREC-COLING 2024", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.nlperspectives-1.5", pages = "49--55", abstract = "The varied backgrounds and experiences of human annotators inject different opinions and potential biases into the data, inevitably leading to disagreements. Yet, traditional aggregation methods fail to capture individual judgments since they rely on the notion of a single ground truth. Our aim is to review prior contributions to pinpoint the shortcomings that might cause stereotypical content generation. As a preliminary study, our purpose is to investigate state-of-the-art approaches, primarily focusing on the following two research directions. First, we investigate how adding subjectivity aspects to LLMs might guarantee diversity. We then look into the alignment between humans and LLMs and discuss how to measure it. Considering existing gaps, our review explores possible methods to mitigate the perpetuation of biases targeting specific communities. However, we recognize the potential risk of disseminating sensitive information due to the utilization of socio-demographic data in the training process. These considerations underscore the inclusion of diverse perspectives while taking into account the critical importance of implementing robust safeguards to protect individuals{'} privacy and prevent the inadvertent propagation of sensitive information.", }
The varied backgrounds and experiences of human annotators inject different opinions and potential biases into the data, inevitably leading to disagreements. Yet, traditional aggregation methods fail to capture individual judgments since they rely on the notion of a single ground truth. Our aim is to review prior contributions to pinpoint the shortcomings that might cause stereotypical content generation. As a preliminary study, our purpose is to investigate state-of-the-art approaches, primarily focusing on the following two research directions. First, we investigate how adding subjectivity aspects to LLMs might guarantee diversity. We then look into the alignment between humans and LLMs and discuss how to measure it. Considering existing gaps, our review explores possible methods to mitigate the perpetuation of biases targeting specific communities. However, we recognize the potential risk of disseminating sensitive information due to the utilization of socio-demographic data in the training process. These considerations underscore the inclusion of diverse perspectives while taking into account the critical importance of implementing robust safeguards to protect individuals{'} privacy and prevent the inadvertent propagation of sensitive information.
[ "Muscato, Benedetta", "Mala, Ch", "ana Sree", "Marchiori Manerba, Marta", "Gezici, Gizem", "Giannotti, Fosca" ]
An Overview of Recent Approaches to Enable Diversity in Large Language Models through Aligning with Human Perspectives
nlperspectives-1.5
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.nlperspectives-1.6.bib
https://aclanthology.org/2024.nlperspectives-1.6/
@inproceedings{lindahl-2024-disagreement, title = "Disagreement in Argumentation Annotation", author = "Lindahl, Anna", editor = "Abercrombie, Gavin and Basile, Valerio and Bernadi, Davide and Dudy, Shiran and Frenda, Simona and Havens, Lucy and Tonelli, Sara", booktitle = "Proceedings of the 3rd Workshop on Perspectivist Approaches to NLP (NLPerspectives) @ LREC-COLING 2024", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.nlperspectives-1.6", pages = "56--66", abstract = "Disagreement, perspective or error? There is a growing discussion against the idea of a unified ground truth in annotated data, as well as the usefulness of such a ground truth and resulting gold standard. In data perspectivism, this issue is exemplified with tasks such as hate speech or sentiment classification in which annotators{'} different perspectives are important to include. In this paper we turn to argumentation, a related field which has had less focus from this point of view. Argumentation is difficult to annotate for several reasons, from the more practical parts of deciding where the argumentation begins and ends to questions of how argumentation is defined and what it consists of. Learning more about disagreement is therefore important in order to improve argument annotation and to better utilize argument annotated data. Because of this, we examine disagreement in two corpora annotated with argumentation both manually and computationally. We find that disagreement is often not because of annotation errors or mistakes but due to the possibility of multiple possible interpretations. More specifically, these interpretations can be over boundaries, label or existence of argumentation. These results emphasize the need for more thorough analysis of disagreement in data, outside of the more common inter-annotator agreement measures.", }
Disagreement, perspective or error? There is a growing discussion against the idea of a unified ground truth in annotated data, as well as the usefulness of such a ground truth and resulting gold standard. In data perspectivism, this issue is exemplified with tasks such as hate speech or sentiment classification in which annotators{'} different perspectives are important to include. In this paper we turn to argumentation, a related field which has had less focus from this point of view. Argumentation is difficult to annotate for several reasons, from the more practical parts of deciding where the argumentation begins and ends to questions of how argumentation is defined and what it consists of. Learning more about disagreement is therefore important in order to improve argument annotation and to better utilize argument annotated data. Because of this, we examine disagreement in two corpora annotated with argumentation both manually and computationally. We find that disagreement is often not because of annotation errors or mistakes but due to the possibility of multiple possible interpretations. More specifically, these interpretations can be over boundaries, label or existence of argumentation. These results emphasize the need for more thorough analysis of disagreement in data, outside of the more common inter-annotator agreement measures.
[ "Lindahl, Anna" ]
Disagreement in Argumentation Annotation
nlperspectives-1.6
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.nlperspectives-1.7.bib
https://aclanthology.org/2024.nlperspectives-1.7/
@inproceedings{alvarez-nogales-araque-2024-moral, title = "Moral Disagreement over Serious Matters: Discovering the Knowledge Hidden in the Perspectives", author = "Alvarez Nogales, Anny D. and Araque, Oscar", editor = "Abercrombie, Gavin and Basile, Valerio and Bernadi, Davide and Dudy, Shiran and Frenda, Simona and Havens, Lucy and Tonelli, Sara", booktitle = "Proceedings of the 3rd Workshop on Perspectivist Approaches to NLP (NLPerspectives) @ LREC-COLING 2024", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.nlperspectives-1.7", pages = "67--77", abstract = "Moral values significantly define decision-making processes, notably on contentious issues like global warming. The Moral Foundations Theory (MFT) delineates morality and aims to reconcile moral expressions across cultures, yet different interpretations arise, posing challenges for computational modeling. This paper addresses the need to incorporate diverse moral perspectives into the learning systems used to estimate morality in text. To do so, it explores how training language models with varied annotator perspectives affects the performance of the learners. Building on top if this, this work also proposes an ensemble method that exploits the diverse perspectives of annotators to construct a more robust moral estimation model. Additionally, we investigate the automated identification of texts that pose annotation challenges, enhancing the understanding of linguistic cues towards annotator disagreement. To evaluate the proposed models we use the Moral Foundations Twitter Corpus (MFTC), a resource that is currently the reference for modeling moral values in computational social sciences. We observe that incorporating the diverse perspectives of annotators into an ensemble model benefits the learning process, showing large improvements in the classification performance. Finally, the results also indicate that instances that convey strong moral meaning are more challenging to annotate.", }
Moral values significantly define decision-making processes, notably on contentious issues like global warming. The Moral Foundations Theory (MFT) delineates morality and aims to reconcile moral expressions across cultures, yet different interpretations arise, posing challenges for computational modeling. This paper addresses the need to incorporate diverse moral perspectives into the learning systems used to estimate morality in text. To do so, it explores how training language models with varied annotator perspectives affects the performance of the learners. Building on top if this, this work also proposes an ensemble method that exploits the diverse perspectives of annotators to construct a more robust moral estimation model. Additionally, we investigate the automated identification of texts that pose annotation challenges, enhancing the understanding of linguistic cues towards annotator disagreement. To evaluate the proposed models we use the Moral Foundations Twitter Corpus (MFTC), a resource that is currently the reference for modeling moral values in computational social sciences. We observe that incorporating the diverse perspectives of annotators into an ensemble model benefits the learning process, showing large improvements in the classification performance. Finally, the results also indicate that instances that convey strong moral meaning are more challenging to annotate.
[ "Alvarez Nogales, Anny D.", "Araque, Oscar" ]
Moral Disagreement over Serious Matters: Discovering the Knowledge Hidden in the Perspectives
nlperspectives-1.7
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.nlperspectives-1.8.bib
https://aclanthology.org/2024.nlperspectives-1.8/
@inproceedings{rizzi-etal-2024-perspectives, title = "Perspectives on Hate: General vs. Domain-Specific Models", author = "Rizzi, Giulia and Fontana, Michele and Fersini, Elisabetta", editor = "Abercrombie, Gavin and Basile, Valerio and Bernadi, Davide and Dudy, Shiran and Frenda, Simona and Havens, Lucy and Tonelli, Sara", booktitle = "Proceedings of the 3rd Workshop on Perspectivist Approaches to NLP (NLPerspectives) @ LREC-COLING 2024", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.nlperspectives-1.8", pages = "78--83", abstract = "The rise of online hostility, combined with broad social media use, leads to the necessity of the comprehension of its human impact. However, the process of hate identification is challenging because, on the one hand, the line between healthy disagreement and poisonous speech is not well defined, and, on the other hand, multiple socio-cultural factors or prior beliefs shape people{'}s perceptions of potentially harmful text. To address disagreements in hate speech identification, Natural Language Processing (NLP) models must capture several perspectives. This paper introduces a strategy based on the Contrastive Learning paradigm for detecting disagreements in hate speech using pre-trained language models. Two approaches are proposed: the General Model, a comprehensive framework, and the Domain-Specific Model, which focuses on more specific hate-related tasks. The source code is available at ://anonymous.4open.science/r/Disagreement-530C.", }
The rise of online hostility, combined with broad social media use, leads to the necessity of the comprehension of its human impact. However, the process of hate identification is challenging because, on the one hand, the line between healthy disagreement and poisonous speech is not well defined, and, on the other hand, multiple socio-cultural factors or prior beliefs shape people{'}s perceptions of potentially harmful text. To address disagreements in hate speech identification, Natural Language Processing (NLP) models must capture several perspectives. This paper introduces a strategy based on the Contrastive Learning paradigm for detecting disagreements in hate speech using pre-trained language models. Two approaches are proposed: the General Model, a comprehensive framework, and the Domain-Specific Model, which focuses on more specific hate-related tasks. The source code is available at https://anonymous.4open.science/r/Disagreement-530C.
[ "Rizzi, Giulia", "Fontana, Michele", "Fersini, Elisabetta" ]
Perspectives on Hate: General vs. Domain-Specific Models
nlperspectives-1.8
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.nlperspectives-1.9.bib
https://aclanthology.org/2024.nlperspectives-1.9/
@inproceedings{rizzi-etal-2024-soft, title = "Soft metrics for evaluation with disagreements: an assessment", author = "Rizzi, Giulia and Leonardelli, Elisa and Poesio, Massimo and Uma, Alexandra and Pavlovic, Maja and Paun, Silviu and Rosso, Paolo and Fersini, Elisabetta", editor = "Abercrombie, Gavin and Basile, Valerio and Bernadi, Davide and Dudy, Shiran and Frenda, Simona and Havens, Lucy and Tonelli, Sara", booktitle = "Proceedings of the 3rd Workshop on Perspectivist Approaches to NLP (NLPerspectives) @ LREC-COLING 2024", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.nlperspectives-1.9", pages = "84--94", abstract = "The move towards preserving judgement disagreements in NLP requires the identification of adequate evaluation metrics. We identify a set of key properties that such metrics should have, and assess the extent to which natural candidates for soft evaluation such as Cross Entropy satisfy such properties. We employ a theoretical framework, supported by a visual approach, by practical examples, and by the analysis of a real case scenario. Our results indicate that Cross Entropy can result in fairly paradoxical results in some cases, whereas other measures Manhattan distance and Euclidean distance exhibit a more intuitive behavior, at least for the case of binary classification.", }
The move towards preserving judgement disagreements in NLP requires the identification of adequate evaluation metrics. We identify a set of key properties that such metrics should have, and assess the extent to which natural candidates for soft evaluation such as Cross Entropy satisfy such properties. We employ a theoretical framework, supported by a visual approach, by practical examples, and by the analysis of a real case scenario. Our results indicate that Cross Entropy can result in fairly paradoxical results in some cases, whereas other measures Manhattan distance and Euclidean distance exhibit a more intuitive behavior, at least for the case of binary classification.
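To make the metric comparison above concrete, here is a minimal, self-contained sketch (not the authors' evaluation code) that computes Cross Entropy, Manhattan distance, and Euclidean distance between an invented soft gold label and two hypothetical system predictions; all distributions are illustrative placeholders.

```python
import numpy as np

def cross_entropy(p, q, eps=1e-12):
    # H(p, q) = -sum_i p_i * log(q_i); eps guards against log(0)
    q = np.clip(q, eps, 1.0)
    return float(-np.sum(p * np.log(q)))

def manhattan(p, q):
    return float(np.sum(np.abs(p - q)))

def euclidean(p, q):
    return float(np.sqrt(np.sum((p - q) ** 2)))

# Soft gold label from ten annotators: 6 voted "offensive", 4 voted "not".
gold = np.array([0.6, 0.4])
predictions = {
    "calibrated": np.array([0.55, 0.45]),    # close to the annotator distribution
    "overconfident": np.array([1.0, 0.0]),   # hard prediction on the majority class
}

for name, pred in predictions.items():
    print(f"{name:14s} CE={cross_entropy(gold, pred):8.3f} "
          f"L1={manhattan(gold, pred):.3f} L2={euclidean(gold, pred):.3f}")
```

With the overconfident prediction, Cross Entropy grows without bound as the probability assigned to the minority label approaches zero, while the two distances stay bounded; this is the kind of behavioural difference the paper examines for binary classification.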
[ "Rizzi, Giulia", "Leonardelli, Elisa", "Poesio, Massimo", "Uma, Alex", "ra", "Pavlovic, Maja", "Paun, Silviu", "Rosso, Paolo", "Fersini, Elisabetta" ]
Soft metrics for evaluation with disagreements: an assessment
nlperspectives-1.9
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.nlperspectives-1.10.bib
https://aclanthology.org/2024.nlperspectives-1.10/
@inproceedings{creanga-dinu-2024-designing, title = "Designing {NLP} Systems That Adapt to Diverse Worldviews", author = "Creanga, Claudiu and Dinu, Liviu P.", editor = "Abercrombie, Gavin and Basile, Valerio and Bernadi, Davide and Dudy, Shiran and Frenda, Simona and Havens, Lucy and Tonelli, Sara", booktitle = "Proceedings of the 3rd Workshop on Perspectivist Approaches to NLP (NLPerspectives) @ LREC-COLING 2024", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.nlperspectives-1.10", pages = "95--99", abstract = "Natural Language Inference (NLI) is foundational for evaluating language understanding in AI. However, progress has plateaued, with models failing on ambiguous examples and exhibiting poor generalization. We argue that this stems from disregarding the subjective nature of meaning, which is intrinsically tied to an individual{'}s \textit{weltanschauung} (which roughly translates to worldview). Existing NLP datasets often obscure this by aggregating labels or filtering out disagreement. We propose a perspectivist approach: building datasets that capture annotator demographics, values, and justifications for their labels. Such datasets would explicitly model diverse worldviews. Our initial experiments with a subset of the SBIC dataset demonstrate that even limited annotator metadata can improve model performance.", }
Natural Language Inference (NLI) is foundational for evaluating language understanding in AI. However, progress has plateaued, with models failing on ambiguous examples and exhibiting poor generalization. We argue that this stems from disregarding the subjective nature of meaning, which is intrinsically tied to an individual{'}s \textit{weltanschauung} (which roughly translates to worldview). Existing NLP datasets often obscure this by aggregating labels or filtering out disagreement. We propose a perspectivist approach: building datasets that capture annotator demographics, values, and justifications for their labels. Such datasets would explicitly model diverse worldviews. Our initial experiments with a subset of the SBIC dataset demonstrate that even limited annotator metadata can improve model performance.
[ "Creanga, Claudiu", "Dinu, Liviu P." ]
Designing NLP Systems That Adapt to Diverse Worldviews
nlperspectives-1.10
Poster
2405.11197
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.nlperspectives-1.11.bib
https://aclanthology.org/2024.nlperspectives-1.11/
@inproceedings{pavlovic-poesio-2024-effectiveness, title = "The Effectiveness of {LLM}s as Annotators: A Comparative Overview and Empirical Analysis of Direct Representation", author = "Pavlovic, Maja and Poesio, Massimo", editor = "Abercrombie, Gavin and Basile, Valerio and Bernadi, Davide and Dudy, Shiran and Frenda, Simona and Havens, Lucy and Tonelli, Sara", booktitle = "Proceedings of the 3rd Workshop on Perspectivist Approaches to NLP (NLPerspectives) @ LREC-COLING 2024", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.nlperspectives-1.11", pages = "100--110", abstract = "Recent studies focus on exploring the capability of Large Language Models (LLMs) for data annotation. Our work, firstly, offers a comparative overview of twelve such studies that investigate labelling with LLMs, particularly focusing on classification tasks. Secondly, we present an empirical analysis that examines the degree of alignment between the opinion distributions returned by GPT and those provided by human annotators across four subjective datasets. Our analysis supports a minority of studies that are considering diverse perspectives when evaluating data annotation tasks and highlights the need for further research in this direction.", }
Recent studies focus on exploring the capability of Large Language Models (LLMs) for data annotation. Our work, firstly, offers a comparative overview of twelve such studies that investigate labelling with LLMs, particularly focusing on classification tasks. Secondly, we present an empirical analysis that examines the degree of alignment between the opinion distributions returned by GPT and those provided by human annotators across four subjective datasets. Our analysis supports a minority of studies that are considering diverse perspectives when evaluating data annotation tasks and highlights the need for further research in this direction.
[ "Pavlovic, Maja", "Poesio, Massimo" ]
The Effectiveness of LLMs as Annotators: A Comparative Overview and Empirical Analysis of Direct Representation
nlperspectives-1.11
Poster
2405.01299
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.nlperspectives-1.12.bib
https://aclanthology.org/2024.nlperspectives-1.12/
@inproceedings{valette-2024-perspectivism, title = "What Does Perspectivism Mean? An Ethical and Methodological Countercriticism", author = "Valette, Mathieu", editor = "Abercrombie, Gavin and Basile, Valerio and Bernadi, Davide and Dudy, Shiran and Frenda, Simona and Havens, Lucy and Tonelli, Sara", booktitle = "Proceedings of the 3rd Workshop on Perspectivist Approaches to NLP (NLPerspectives) @ LREC-COLING 2024", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.nlperspectives-1.12", pages = "111--115", abstract = "In this paper, we address the epistemological and ethical break of perspectivism in NLP. First, we propose to consider data annotation from the point of view of the scientific management of annotation work - which is part of the automation process inherent in NLP, in order to ideologically situate the perspectivist paradigm. We then analyze some of the concepts of perspectivism (in particular, truth). Finally, based on this analysis, we formulate a set of proposals aimed at overcoming the observed limitations of corpus annotation in general and perspectivism in particular.", }
In this paper, we address the epistemological and ethical break of perspectivism in NLP. First, we propose to consider data annotation from the point of view of the scientific management of annotation work - which is part of the automation process inherent in NLP, in order to ideologically situate the perspectivist paradigm. We then analyze some of the concepts of perspectivism (in particular, truth). Finally, based on this analysis, we formulate a set of proposals aimed at overcoming the observed limitations of corpus annotation in general and perspectivism in particular.
[ "Valette, Mathieu" ]
What Does Perspectivism Mean? An Ethical and Methodological Countercriticism
nlperspectives-1.12
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.nlperspectives-1.13.bib
https://aclanthology.org/2024.nlperspectives-1.13/
@inproceedings{allein-moens-2024-origamim, title = "{O}rigam{IM}: A Dataset of Ambiguous Sentence Interpretations for Social Grounding and Implicit Language Understanding", author = "Allein, Liesbeth and Moens, Marie-Francine", editor = "Abercrombie, Gavin and Basile, Valerio and Bernadi, Davide and Dudy, Shiran and Frenda, Simona and Havens, Lucy and Tonelli, Sara", booktitle = "Proceedings of the 3rd Workshop on Perspectivist Approaches to NLP (NLPerspectives) @ LREC-COLING 2024", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.nlperspectives-1.13", pages = "116--122", abstract = "Sentences elicit different interpretations and reactions among readers, especially when there is ambiguity in their implicit layers. We present a first-of-its kind dataset of sentences from Reddit, where each sentence is annotated with multiple interpretations of its meanings, understandings of implicit moral judgments about mentioned people, and reader impressions of its author. Scrutiny of the dataset proves the evoked variability and polarity in reactions. It further shows that readers strongly disagree on both the presence of implied judgments and the social acceptability of the behaviors they evaluate. In all, the dataset offers a valuable resource for socially grounding language and modeling the intricacies of implicit language understanding from multiple reader perspectives.", }
Sentences elicit different interpretations and reactions among readers, especially when there is ambiguity in their implicit layers. We present a first-of-its kind dataset of sentences from Reddit, where each sentence is annotated with multiple interpretations of its meanings, understandings of implicit moral judgments about mentioned people, and reader impressions of its author. Scrutiny of the dataset proves the evoked variability and polarity in reactions. It further shows that readers strongly disagree on both the presence of implied judgments and the social acceptability of the behaviors they evaluate. In all, the dataset offers a valuable resource for socially grounding language and modeling the intricacies of implicit language understanding from multiple reader perspectives.
[ "Allein, Liesbeth", "Moens, Marie-Francine" ]
OrigamIM: A Dataset of Ambiguous Sentence Interpretations for Social Grounding and Implicit Language Understanding
nlperspectives-1.13
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.nlperspectives-1.14.bib
https://aclanthology.org/2024.nlperspectives-1.14/
@inproceedings{mastromattei-zanzotto-2024-linguistic, title = "Linguistic Fingerprint in Transformer Models: How Language Variation Influences Parameter Selection in Irony Detection", author = "Mastromattei, Michele and Zanzotto, Fabio Massimo", editor = "Abercrombie, Gavin and Basile, Valerio and Bernadi, Davide and Dudy, Shiran and Frenda, Simona and Havens, Lucy and Tonelli, Sara", booktitle = "Proceedings of the 3rd Workshop on Perspectivist Approaches to NLP (NLPerspectives) @ LREC-COLING 2024", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.nlperspectives-1.14", pages = "123--130", abstract = "This paper explores the correlation between linguistic diversity, sentiment analysis and transformer model architectures. We aim to investigate how different English variations impact transformer-based models for irony detection. To conduct our study, we used the EPIC corpus to extract five diverse English variation-specific datasets and applied the KEN pruning algorithm on five different architectures. Our results reveal several similarities between optimal subnetworks, which provide insights into the linguistic variations that share strong resemblances and those that exhibit greater dissimilarities. We discovered that optimal subnetworks across models share at least 60{\%} of their parameters, emphasizing the significance of parameter values in capturing and interpreting linguistic variations. This study highlights the inherent structural similarities between models trained on different variants of the same language and also the critical role of parameter values in capturing these nuances.", }
This paper explores the correlation between linguistic diversity, sentiment analysis and transformer model architectures. We aim to investigate how different English variations impact transformer-based models for irony detection. To conduct our study, we used the EPIC corpus to extract five diverse English variation-specific datasets and applied the KEN pruning algorithm on five different architectures. Our results reveal several similarities between optimal subnetworks, which provide insights into the linguistic variations that share strong resemblances and those that exhibit greater dissimilarities. We discovered that optimal subnetworks across models share at least 60{\%} of their parameters, emphasizing the significance of parameter values in capturing and interpreting linguistic variations. This study highlights the inherent structural similarities between models trained on different variants of the same language and also the critical role of parameter values in capturing these nuances.
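The reported 60% figure refers to the overlap between pruned subnetworks. As a small illustrative sketch (not the KEN algorithm itself), the snippet below shows one way such an overlap could be quantified from boolean keep/drop masks; the masks here are random placeholders, not masks produced by the paper's pruning runs.

```python
import numpy as np

def retained_overlap(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """Fraction of retained parameters that two pruning masks share."""
    kept_a, kept_b = mask_a.astype(bool), mask_b.astype(bool)
    shared = np.logical_and(kept_a, kept_b).sum()
    return shared / min(kept_a.sum(), kept_b.sum())

rng = np.random.default_rng(0)
n_params = 100_000
# Placeholder masks standing in for the keep/drop decisions a pruning
# algorithm would produce for two English-variety irony datasets.
mask_variety_a = rng.random(n_params) > 0.5
mask_variety_b = rng.random(n_params) > 0.5

print(f"shared retained parameters: {retained_overlap(mask_variety_a, mask_variety_b):.1%}")
```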
[ "Mastromattei, Michele", "Zanzotto, Fabio Massimo" ]
Linguistic Fingerprint in Transformer Models: How Language Variation Influences Parameter Selection in Irony Detection
nlperspectives-1.14
Poster
2406.02338
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.nlperspectives-1.15.bib
https://aclanthology.org/2024.nlperspectives-1.15/
@inproceedings{homan-etal-2024-intersectionality, title = "Intersectionality in {AI} Safety: Using Multilevel Models to Understand Diverse Perceptions of Safety in Conversational {AI}", author = "Homan, Christopher and Serapio-Garcia, Gregory and Aroyo, Lora and Diaz, Mark and Parrish, Alicia and Prabhakaran, Vinodkumar and Taylor, Alex and Wang, Ding", editor = "Abercrombie, Gavin and Basile, Valerio and Bernadi, Davide and Dudy, Shiran and Frenda, Simona and Havens, Lucy and Tonelli, Sara", booktitle = "Proceedings of the 3rd Workshop on Perspectivist Approaches to NLP (NLPerspectives) @ LREC-COLING 2024", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.nlperspectives-1.15", pages = "131--141", abstract = "State-of-the-art conversational AI exhibits a level of sophistication that promises to have profound impacts on many aspects of daily life, including how people seek information, create content, and find emotional support. It has also shown a propensity for bias, offensive language, and false information. Consequently, understanding and moderating safety risks posed by interacting with AI chatbots is a critical technical and social challenge. Safety annotation is an intrinsically subjective task, where many factors{---}often intersecting{---}determine why people may express different opinions on whether a conversation is safe. We apply Bayesian multilevel models to surface factors that best predict rater behavior to a dataset of 101,286 annotations of conversations between humans and an AI chatbot, stratified by rater gender, age, race/ethnicity, and education level. We show that intersectional effects involving these factors play significant roles in validating safety in conversational AI data. For example, race/ethnicity and gender show strong intersectional effects, particularly among South Asian and East Asian women. We also find that conversational degree of harm impacts raters of all race/ethnicity groups, but that Indigenous and South Asian raters are particularly sensitive. Finally, we discover that the effect of education is uniquely intersectional for Indigenous raters. Our results underscore the utility of multilevel frameworks for uncovering underrepresented social perspectives.", }
State-of-the-art conversational AI exhibits a level of sophistication that promises to have profound impacts on many aspects of daily life, including how people seek information, create content, and find emotional support. It has also shown a propensity for bias, offensive language, and false information. Consequently, understanding and moderating safety risks posed by interacting with AI chatbots is a critical technical and social challenge. Safety annotation is an intrinsically subjective task, where many factors{---}often intersecting{---}determine why people may express different opinions on whether a conversation is safe. We apply Bayesian multilevel models to surface factors that best predict rater behavior to a dataset of 101,286 annotations of conversations between humans and an AI chatbot, stratified by rater gender, age, race/ethnicity, and education level. We show that intersectional effects involving these factors play significant roles in validating safety in conversational AI data. For example, race/ethnicity and gender show strong intersectional effects, particularly among South Asian and East Asian women. We also find that conversational degree of harm impacts raters of all race/ethnicity groups, but that Indigenous and South Asian raters are particularly sensitive. Finally, we discover that the effect of education is uniquely intersectional for Indigenous raters. Our results underscore the utility of multilevel frameworks for uncovering underrepresented social perspectives.
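As a rough illustration of the modelling setup only (not the authors' actual specification, priors, or data), the sketch below fits a Bayesian multilevel logistic regression with a random intercept per demographic group using the bambi library; every column name and value is invented, and the paper's models include additional demographic factors and interactions.

```python
import pandas as pd
import bambi as bmb  # Bayesian multilevel models built on PyMC

# Invented stand-in for safety annotations: one row per (rater, conversation).
df = pd.DataFrame({
    "unsafe":         [1, 0, 1, 1, 0, 0, 1, 0, 1, 0, 1, 0],
    "degree_of_harm": [3, 0, 2, 3, 1, 0, 2, 1, 3, 0, 2, 1],
    "rater_group":    ["south_asian", "east_asian", "south_asian", "indigenous",
                       "east_asian", "indigenous", "south_asian", "east_asian",
                       "indigenous", "south_asian", "east_asian", "indigenous"],
})

# Fixed effect for conversational degree of harm, random intercept per rater group.
model = bmb.Model("unsafe ~ degree_of_harm + (1|rater_group)",
                  df, family="bernoulli")
idata = model.fit(draws=500, chains=2)
print(idata.posterior["degree_of_harm"].mean())
```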
[ "Homan, Christopher", "Serapio-Garcia, Gregory", "Aroyo, Lora", "Diaz, Mark", "Parrish, Alicia", "Prabhakaran, Vinodkumar", "Taylor, Alex", "Wang, Ding" ]
Intersectionality in AI Safety: Using Multilevel Models to Understand Diverse Perceptions of Safety in Conversational AI
nlperspectives-1.15
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.nlperspectives-1.16.bib
https://aclanthology.org/2024.nlperspectives-1.16/
@inproceedings{robertson-leone-2024-dataset, title = "A Dataset for Multi-Scale Film Rating Inference from Reviews", author = "Robertson, Frankie and Leone, Stefano", editor = "Abercrombie, Gavin and Basile, Valerio and Bernadi, Davide and Dudy, Shiran and Frenda, Simona and Havens, Lucy and Tonelli, Sara", booktitle = "Proceedings of the 3rd Workshop on Perspectivist Approaches to NLP (NLPerspectives) @ LREC-COLING 2024", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.nlperspectives-1.16", pages = "142--150", abstract = "This resource paper introduces a dataset for multi-scale rating inference of film review scores based upon review summaries. The dataset and task are unique in pairing a text regression problem with ratings given on multiple scales, e.g. the A-F letter scale and the 4-point star scale. It retains entity identifiers such as film and reviewer names. The paper describes the construction of the dataset before exploring potential baseline architectures for the task, and evaluating their performance. Baselines based on classifier-per-scale, affine-per-scale, and ordinal regression models are presented and evaluated with the BERT-base backbone. Additional experiments are used to ground a discussion of the different architectures{'} merits and drawbacks with regards to explainability and model interpretation.", }
This resource paper introduces a dataset for multi-scale rating inference of film review scores based upon review summaries. The dataset and task are unique in pairing a text regression problem with ratings given on multiple scales, e.g. the A-F letter scale and the 4-point star scale. It retains entity identifiers such as film and reviewer names. The paper describes the construction of the dataset before exploring potential baseline architectures for the task, and evaluating their performance. Baselines based on classifier-per-scale, affine-per-scale, and ordinal regression models are presented and evaluated with the BERT-base backbone. Additional experiments are used to ground a discussion of the different architectures{'} merits and drawbacks with regards to explainability and model interpretation.
[ "Robertson, Frankie", "Leone, Stefano" ]
A Dataset for Multi-Scale Film Rating Inference from Reviews
nlperspectives-1.16
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.osact-1.1.bib
https://aclanthology.org/2024.osact-1.1/
@inproceedings{alghamdi-etal-2024-aratar, title = "{A}ra{T}ar: A Corpus to Support the Fine-grained Detection of Hate Speech Targets in the {A}rabic Language", author = "Alghamdi, Seham and Benkhedda, Youcef and Alharbi, Basma and Batista-Navarro, Riza", editor = "Al-Khalifa, Hend and Darwish, Kareem and Mubarak, Hamdy and Ali, Mona and Elsayed, Tamer", booktitle = "Proceedings of the 6th Workshop on Open-Source Arabic Corpora and Processing Tools (OSACT) with Shared Tasks on Arabic LLMs Hallucination and Dialect to MSA Machine Translation @ LREC-COLING 2024", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.osact-1.1", pages = "1--12", abstract = "We are currently witnessing a concerning surge in the spread of hate speech across various social media platforms, targeting individuals or groups based on their protected characteristics such as race, religion, nationality and gender. This paper focuses on the detection of hate type (Task 1) and hate target (Task 2) in the Arabic language. To comprehensively address this problem, we have combined and re-annotated hate speech tweets from existing publicly available corpora, resulting in the creation of AraTar, the first and largest Arabic corpus annotated with support for multi-label classification for both hate speech types and target detection with a high inter-annotator agreement. Additionally, we sought to determine the most effective machine learning-based approach for addressing this issue. To achieve this, we compare and evaluate different approaches, including: (1) traditional machine learning-based models, (2) deep learning-based models fed with contextual embeddings, and (3) fine-tuning language models (LMs). Our results demonstrate that fine-tuning LMs, specifically using AraBERTv0.2-twitter (base), achieved the highest performance, with a micro-averaged F1-score of 84.5{\%} and 85.03{\%}, and a macro-averaged F1-score of 77.46{\%} and 73.15{\%}, for Tasks 1 and 2, respectively.", }
We are currently witnessing a concerning surge in the spread of hate speech across various social media platforms, targeting individuals or groups based on their protected characteristics such as race, religion, nationality and gender. This paper focuses on the detection of hate type (Task 1) and hate target (Task 2) in the Arabic language. To comprehensively address this problem, we have combined and re-annotated hate speech tweets from existing publicly available corpora, resulting in the creation of AraTar, the first and largest Arabic corpus annotated with support for multi-label classification for both hate speech types and target detection with a high inter-annotator agreement. Additionally, we sought to determine the most effective machine learning-based approach for addressing this issue. To achieve this, we compare and evaluate different approaches, including: (1) traditional machine learning-based models, (2) deep learning-based models fed with contextual embeddings, and (3) fine-tuning language models (LMs). Our results demonstrate that fine-tuning LMs, specifically using AraBERTv0.2-twitter (base), achieved the highest performance, with a micro-averaged F1-score of 84.5{\%} and 85.03{\%}, and a macro-averaged F1-score of 77.46{\%} and 73.15{\%}, for Tasks 1 and 2, respectively.
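For readers unfamiliar with the reported metrics, the snippet below shows how micro- and macro-averaged F1 are computed for a multi-label setup with scikit-learn; the label matrix is a toy example, not AraTar data.

```python
import numpy as np
from sklearn.metrics import f1_score

# Toy multi-label hate-target matrix: columns = (race, religion, gender).
y_true = np.array([[1, 0, 0],
                   [0, 1, 1],
                   [0, 0, 1],
                   [1, 1, 0]])
y_pred = np.array([[1, 0, 0],
                   [0, 1, 0],
                   [0, 0, 1],
                   [1, 0, 0]])

# Micro-averaging pools all label decisions; macro-averaging averages
# per-label F1 scores, so rare labels weigh as much as frequent ones.
print("micro F1:", f1_score(y_true, y_pred, average="micro"))
print("macro F1:", f1_score(y_true, y_pred, average="macro"))
```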
[ "Alghamdi, Seham", "Benkhedda, Youcef", "Alharbi, Basma", "Batista-Navarro, Riza" ]
AraTar: A Corpus to Support the Fine-grained Detection of Hate Speech Targets in the Arabic Language
osact-1.1
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.osact-1.2.bib
https://aclanthology.org/2024.osact-1.2/
@inproceedings{alduwais-etal-2024-cleananercorp, title = "{CLEANANERC}orp: Identifying and Correcting Incorrect Labels in the {ANER}corp Dataset", author = "AlDuwais, Mashael and Al-Khalifa, Hend and AlSalman, Abdulmalik", editor = "Al-Khalifa, Hend and Darwish, Kareem and Mubarak, Hamdy and Ali, Mona and Elsayed, Tamer", booktitle = "Proceedings of the 6th Workshop on Open-Source Arabic Corpora and Processing Tools (OSACT) with Shared Tasks on Arabic LLMs Hallucination and Dialect to MSA Machine Translation @ LREC-COLING 2024", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.osact-1.2", pages = "13--19", abstract = "Label errors are a common issue in machine learning datasets, particularly for tasks such as Named Entity Recognition. Such label errors might hurt model training, affect evaluation results, and lead to an inaccurate assessment of model performance. In this study, we dived deep into one of the widely adopted Arabic NER benchmark datasets (ANERcorp) and found a significant number of annotation errors, missing labels, and inconsistencies. Therefore, in this study, we conducted empirical research to understand these errors, correct them, and propose a cleaner version of the dataset named CLEANANERCorp. CLEANANERCorp will serve the research community as a more accurate and consistent benchmark.", }
Label errors are a common issue in machine learning datasets, particularly for tasks such as Named Entity Recognition. Such label errors might hurt model training, affect evaluation results, and lead to an inaccurate assessment of model performance. In this study, we dived deep into one of the widely adopted Arabic NER benchmark datasets (ANERcorp) and found a significant number of annotation errors, missing labels, and inconsistencies. Therefore, in this study, we conducted empirical research to understand these errors, correct them, and propose a cleaner version of the dataset named CLEANANERCorp. CLEANANERCorp will serve the research community as a more accurate and consistent benchmark.
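One simple way to surface candidate annotation errors of the kind described, sketched below on invented examples, is to flag tokens whose label disagrees with the majority label the same token receives elsewhere in the corpus; this is an illustrative heuristic, not the authors' correction procedure.

```python
from collections import Counter, defaultdict

# Invented CoNLL-style (token, tag) pairs; "القاهرة" (Cairo) is usually
# tagged B-LOC but once B-PER, making that occurrence a candidate error.
corpus = [
    ("القاهرة", "B-LOC"), ("القاهرة", "B-LOC"), ("القاهرة", "B-PER"),
    ("محمد", "B-PER"), ("محمد", "B-PER"), ("الرياض", "B-LOC"),
]

tags_per_token = defaultdict(Counter)
for token, tag in corpus:
    tags_per_token[token][tag] += 1

for token, counts in tags_per_token.items():
    majority_tag, _ = counts.most_common(1)[0]
    for tag, n in counts.items():
        if tag != majority_tag:
            print(f"candidate label error: {token!r} tagged {tag} ({n}x) "
                  f"vs. majority {majority_tag}")
```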
[ "AlDuwais, Mashael", "Al-Khalifa, Hend", "AlSalman, Abdulmalik" ]
CLEANANERCorp: Identifying and Correcting Incorrect Labels in the ANERcorp Dataset
osact-1.2
Poster
2408.12362
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.osact-1.3.bib
https://aclanthology.org/2024.osact-1.3/
@inproceedings{khader-etal-2024-munazarat, title = "Munazarat 1.0: A Corpus of {A}rabic Competitive Debates", author = "Khader, Mohammad M. and Al-Sharafi, AbdulGabbar and Al-Sioufy, Mohamad Hamza and Zaghouani, Wajdi and Al-Zawqari, Ali", editor = "Al-Khalifa, Hend and Darwish, Kareem and Mubarak, Hamdy and Ali, Mona and Elsayed, Tamer", booktitle = "Proceedings of the 6th Workshop on Open-Source Arabic Corpora and Processing Tools (OSACT) with Shared Tasks on Arabic LLMs Hallucination and Dialect to MSA Machine Translation @ LREC-COLING 2024", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.osact-1.3", pages = "20--30", abstract = "This paper introduces the Corpus of Arabic Competitive Debates (Munazarat). Despite the significance of competitive debating as an activity of fostering critical thinking and promoting dialogue, researchers within the fields of Arabic Natural Language Processing (NLP), linguistics, argumentation studies, and education have access to very limited datasets about competitive debating. At this study stage, we introduce Munazarat 1.0, which combines recordings of approximately 50 hours collected from 73 debates at QatarDebate-recognized tournaments, where all of those debates were available on YouTube. Munazarat is a novel specialized speech Arabic corpus, mostly in Modern Standard Arabic (MSA), consisting of diverse debating topics and showing rich metadata for each debate. The transcription of debates was done using Fenek, a speech-to-text Kanari AI tool, and three native Arabic speakers reviewed each transcription file to enhance the quality provided by the machine. The Munazarat 1.0 dataset can be used to train Arabic NLP tools, develop an argumentation mining machine, and analyze Arabic argumentation and rhetoric styles. Keywords: Arabic Speech Corpus, Modern Standard Arabic, Debates", }
This paper introduces the Corpus of Arabic Competitive Debates (Munazarat). Despite the significance of competitive debating as an activity of fostering critical thinking and promoting dialogue, researchers within the fields of Arabic Natural Language Processing (NLP), linguistics, argumentation studies, and education have access to very limited datasets about competitive debating. At this study stage, we introduce Munazarat 1.0, which combines recordings of approximately 50 hours collected from 73 debates at QatarDebate-recognized tournaments, where all of those debates were available on YouTube. Munazarat is a novel specialized speech Arabic corpus, mostly in Modern Standard Arabic (MSA), consisting of diverse debating topics and showing rich metadata for each debate. The transcription of debates was done using Fenek, a speech-to-text Kanari AI tool, and three native Arabic speakers reviewed each transcription file to enhance the quality provided by the machine. The Munazarat 1.0 dataset can be used to train Arabic NLP tools, develop an argumentation mining machine, and analyze Arabic argumentation and rhetoric styles. Keywords: Arabic Speech Corpus, Modern Standard Arabic, Debates
[ "Khader, Mohammad M.", "Al-Sharafi, AbdulGabbar", "Al-Sioufy, Mohamad Hamza", "Zaghouani, Wajdi", "Al-Zawqari, Ali" ]
Munazarat 1.0: A Corpus of Arabic Competitive Debates
osact-1.3
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.osact-1.4.bib
https://aclanthology.org/2024.osact-1.4/
@inproceedings{alshahrani-etal-2024-leveraging, title = "Leveraging Corpus Metadata to Detect Template-based Translation: An Exploratory Case Study of the {E}gyptian {A}rabic {W}ikipedia Edition", author = "Alshahrani, Saied and Mohammed, Hesham Haroon and Elfilali, Ali and Njie, Mariama and Matthews, Jeanna", editor = "Al-Khalifa, Hend and Darwish, Kareem and Mubarak, Hamdy and Ali, Mona and Elsayed, Tamer", booktitle = "Proceedings of the 6th Workshop on Open-Source Arabic Corpora and Processing Tools (OSACT) with Shared Tasks on Arabic LLMs Hallucination and Dialect to MSA Machine Translation @ LREC-COLING 2024", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.osact-1.4", pages = "31--45", abstract = "Wikipedia articles (content pages) are commonly used corpora in Natural Language Processing (NLP) research, especially in low-resource languages other than English. Yet, a few research studies have studied the three Arabic Wikipedia editions, Arabic Wikipedia (AR), Egyptian Arabic Wikipedia (ARZ), and Moroccan Arabic Wikipedia (ARY), and documented issues in the Egyptian Arabic Wikipedia edition regarding the massive automatic creation of its articles using template-based translation from English to Arabic without human involvement, overwhelming the Egyptian Arabic Wikipedia with articles that do not only have low-quality content but also with articles that do not represent the Egyptian people, their culture, and their dialect. In this paper, we aim to mitigate the problem of template translation that occurred in the Egyptian Arabic Wikipedia by identifying these template-translated articles and their characteristics through exploratory analysis and building automatic detection systems. We first explore the content of the three Arabic Wikipedia editions in terms of density, quality, and human contributions and utilize the resulting insights to build multivariate machine learning classifiers leveraging articles{'} metadata to detect the template-translated articles automatically. We then publicly deploy and host the best-performing classifier as an online application called {`}Egyptian Wikipedia Scanner{'} and release the extracted, filtered, labeled, and preprocessed datasets to the research community to benefit from our datasets and the online, web-based detection system.", }
Wikipedia articles (content pages) are commonly used corpora in Natural Language Processing (NLP) research, especially in low-resource languages other than English. Yet, a few research studies have studied the three Arabic Wikipedia editions, Arabic Wikipedia (AR), Egyptian Arabic Wikipedia (ARZ), and Moroccan Arabic Wikipedia (ARY), and documented issues in the Egyptian Arabic Wikipedia edition regarding the massive automatic creation of its articles using template-based translation from English to Arabic without human involvement, overwhelming the Egyptian Arabic Wikipedia with articles that do not only have low-quality content but also with articles that do not represent the Egyptian people, their culture, and their dialect. In this paper, we aim to mitigate the problem of template translation that occurred in the Egyptian Arabic Wikipedia by identifying these template-translated articles and their characteristics through exploratory analysis and building automatic detection systems. We first explore the content of the three Arabic Wikipedia editions in terms of density, quality, and human contributions and utilize the resulting insights to build multivariate machine learning classifiers leveraging articles{'} metadata to detect the template-translated articles automatically. We then publicly deploy and host the best-performing classifier as an online application called {`}Egyptian Wikipedia Scanner{'} and release the extracted, filtered, labeled, and preprocessed datasets to the research community to benefit from our datasets and the online, web-based detection system.
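The classifiers described operate on article metadata rather than article text. The following is a hedged sketch of that idea with synthetic features and a toy labelling rule; the authors' actual feature set, data, and model selection are not reproduced here.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic per-article metadata: token count, human edits, bot edits.
rng = np.random.default_rng(42)
n = 500
tokens = rng.integers(20, 2000, n)
human_edits = rng.integers(0, 50, n)
bot_edits = rng.integers(0, 50, n)
# Toy labelling rule: short, bot-dominated articles look template-translated.
y = ((bot_edits > human_edits) & (tokens < 400)).astype(int)
X = np.column_stack([tokens, human_edits, bot_edits])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```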
[ "Alshahrani, Saied", "Mohammed, Hesham Haroon", "Elfilali, Ali", "Njie, Mariama", "Matthews, Jeanna" ]
Leveraging Corpus Metadata to Detect Template-based Translation: An Exploratory Case Study of the Egyptian Arabic Wikipedia Edition
osact-1.4
Poster
2404.00565
[ "https://github.com/SaiedAlshahrani/leveraging-corpus-metadata" ]
https://huggingface.co/papers/2404.00565
3
6
0
5
1
[]
[ "SaiedAlshahrani/Detect-Egyptian-Wikipedia-Articles" ]
[ "SaiedAlshahrani/Egyptian-Wikipedia-Scanner" ]
https://aclanthology.org/2024.osact-1.5.bib
https://aclanthology.org/2024.osact-1.5/
@inproceedings{al-ghamdi-etal-2024-novel, title = "A Novel Approach for Root Selection in the Dependency Parsing", author = "Al-Ghamdi, Sharefah Ahmed and Al-Khalifa, Hend and AlSalman, Abdulmalik", editor = "Al-Khalifa, Hend and Darwish, Kareem and Mubarak, Hamdy and Ali, Mona and Elsayed, Tamer", booktitle = "Proceedings of the 6th Workshop on Open-Source Arabic Corpora and Processing Tools (OSACT) with Shared Tasks on Arabic LLMs Hallucination and Dialect to MSA Machine Translation @ LREC-COLING 2024", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.osact-1.5", pages = "46--49", abstract = "Although syntactic analysis using the sequence labeling method is promising, it can be problematic when the labels sequence does not contain a root label. This can result in errors in the final parse tree when the postprocessing method assumes the first word as the root. In this paper, we present a novel postprocessing method for BERT-based dependency parsing as sequence labeling. Our method leverages the root{'}s part of speech tag to select a more suitable root for the dependency tree, instead of using the default first token. We conducted experiments on nine dependency treebanks from different languages and domains, and demonstrated that our technique consistently improves the labeled attachment score (LAS) on most of them.", }
Although syntactic analysis using the sequence labeling method is promising, it can be problematic when the labels sequence does not contain a root label. This can result in errors in the final parse tree when the postprocessing method assumes the first word as the root. In this paper, we present a novel postprocessing method for BERT-based dependency parsing as sequence labeling. Our method leverages the root{'}s part of speech tag to select a more suitable root for the dependency tree, instead of using the default first token. We conducted experiments on nine dependency treebanks from different languages and domains, and demonstrated that our technique consistently improves the labeled attachment score (LAS) on most of them.
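A simplified reading of the proposed postprocessing step, shown as a sketch below: when the predicted label sequence contains no root, prefer the first token whose POS tag is typical of sentence roots instead of defaulting to the first token. The exact selection criteria in the paper may differ.

```python
def select_root(pos_tags, root_friendly=("VERB", "AUX")):
    """Pick a fallback root index when no token was labelled as root."""
    for i, pos in enumerate(pos_tags):
        if pos in root_friendly:
            return i
    return 0  # last resort: the conventional first-token default

# "The committee approved the proposal" -> root should be "approved" (index 2)
print(select_root(["DET", "NOUN", "VERB", "DET", "NOUN"]))
```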
[ "Al-Ghamdi, Sharefah Ahmed", "Al-Khalifa, Hend", "AlSalman, Abdulmalik" ]
A Novel Approach for Root Selection in the Dependency Parsing
osact-1.5
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.osact-1.6.bib
https://aclanthology.org/2024.osact-1.6/
@inproceedings{alasmari-etal-2024-aramed, title = "{A}ra{M}ed: {A}rabic Medical Question Answering using Pretrained Transformer Language Models", author = "Alasmari, Ashwag and Alhumoud, Sarah and Alshammari, Waad", editor = "Al-Khalifa, Hend and Darwish, Kareem and Mubarak, Hamdy and Ali, Mona and Elsayed, Tamer", booktitle = "Proceedings of the 6th Workshop on Open-Source Arabic Corpora and Processing Tools (OSACT) with Shared Tasks on Arabic LLMs Hallucination and Dialect to MSA Machine Translation @ LREC-COLING 2024", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.osact-1.6", pages = "50--56", abstract = "Medical Question Answering systems have gained significant attention in recent years due to their potential to enhance medical decision-making and improve patient care. However, most of the research in this field has focused on English-language datasets, limiting the generalizability of MQA systems to non-English speaking regions. This study introduces AraMed, a large-scale Arabic Medical Question Answering dataset addressing the limited resources available for Arabic medical question answering. AraMed comprises 270k question-answer pairs based on health consumer questions submitted to an online medical forum. Experiments using various deep learning models showcase the dataset{'}s effectiveness, particularly with AraBERT models achieving the highest results; specifically, AraBERTv2 obtained an F1 score of 96.73{\%} in the answer selection task. The comparative analysis of different deep learning models provides insights into their strengths and limitations. These findings highlight the potential of AraMed for advancing Arabic medical question answering research and development.", }
Medical Question Answering systems have gained significant attention in recent years due to their potential to enhance medical decision-making and improve patient care. However, most of the research in this field has focused on English-language datasets, limiting the generalizability of MQA systems to non-English speaking regions. This study introduces AraMed, a large-scale Arabic Medical Question Answering dataset addressing the limited resources available for Arabic medical question answering. AraMed comprises 270k question-answer pairs based on health consumer questions submitted to an online medical forum. Experiments using various deep learning models showcase the dataset{'}s effectiveness, particularly with AraBERT models achieving the highest results; specifically, AraBERTv2 obtained an F1 score of 96.73{\%} in the answer selection task. The comparative analysis of different deep learning models provides insights into their strengths and limitations. These findings highlight the potential of AraMed for advancing Arabic medical question answering research and development.
[ "Alasmari, Ashwag", "Alhumoud, Sarah", "Alshammari, Waad" ]
AraMed: Arabic Medical Question Answering using Pretrained Transformer Language Models
osact-1.6
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.osact-1.7.bib
https://aclanthology.org/2024.osact-1.7/
@inproceedings{el-haj-ezzini-2024-multilingual, title = "The Multilingual Corpus of World{'}s Constitutions ({MCWC})", author = "El-Haj, Mo and Ezzini, Saad", editor = "Al-Khalifa, Hend and Darwish, Kareem and Mubarak, Hamdy and Ali, Mona and Elsayed, Tamer", booktitle = "Proceedings of the 6th Workshop on Open-Source Arabic Corpora and Processing Tools (OSACT) with Shared Tasks on Arabic LLMs Hallucination and Dialect to MSA Machine Translation @ LREC-COLING 2024", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.osact-1.7", pages = "57--66", abstract = "The {``}Multilingual Corpus of World{'}s Constitutions{''} (MCWC) serves as a valuable resource for the NLP community, offering a comprehensive collection of constitutions from around the world. Its focus on data quality and breadth of coverage enables advanced research in constitutional analysis, machine translation, and cross-lingual legal studies. The MCWC prepares its data to ensure high quality and minimal noise, while also providing valuable mappings of constitutions to their respective countries and continents, facilitating comparative analysis. Notably, the corpus offers pairwise sentence alignments across languages, supporting machine translation experiments. We utilise a leading Machine Translation model, fine-tuned on the MCWC to achieve accurate and context-aware translations. Additionally, we introduce an independent Machine Translation model as a comparative baseline. Fine-tuning the model on the MCWC improves accuracy, highlighting the significance of such a legal corpus for NLP and Machine Translation. The MCWC{'}s rich multilingual content and rigorous data quality standards raise the bar for legal text analysis and inspire innovation in the NLP community, opening new avenues for studying constitutional texts and multilingual data analysis.", }
The {``}Multilingual Corpus of World{'}s Constitutions{''} (MCWC) serves as a valuable resource for the NLP community, offering a comprehensive collection of constitutions from around the world. Its focus on data quality and breadth of coverage enables advanced research in constitutional analysis, machine translation, and cross-lingual legal studies. The MCWC prepares its data to ensure high quality and minimal noise, while also providing valuable mappings of constitutions to their respective countries and continents, facilitating comparative analysis. Notably, the corpus offers pairwise sentence alignments across languages, supporting machine translation experiments. We utilise a leading Machine Translation model, fine-tuned on the MCWC to achieve accurate and context-aware translations. Additionally, we introduce an independent Machine Translation model as a comparative baseline. Fine-tuning the model on the MCWC improves accuracy, highlighting the significance of such a legal corpus for NLP and Machine Translation. The MCWC{'}s rich multilingual content and rigorous data quality standards raise the bar for legal text analysis and inspire innovation in the NLP community, opening new avenues for studying constitutional texts and multilingual data analysis.
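As an illustration of pairwise sentence alignment across languages, the sketch below greedily matches sentences by multilingual embedding similarity. LaBSE is used here only as one plausible encoder, and the sentences are invented, so this is not the MCWC alignment pipeline.

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("sentence-transformers/LaBSE")  # multilingual encoder

english = ["All persons are equal before the law.",
           "The official language of the State is Arabic."]
arabic = ["اللغة الرسمية للدولة هي اللغة العربية.",
          "جميع الأشخاص متساوون أمام القانون."]

emb_en = model.encode(english, convert_to_tensor=True)
emb_ar = model.encode(arabic, convert_to_tensor=True)
similarity = util.cos_sim(emb_en, emb_ar)

# Greedy alignment: each English sentence pairs with its most similar Arabic one.
for i, sentence in enumerate(english):
    j = int(similarity[i].argmax())
    print(f"{sentence}  <->  {arabic[j]}")
```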
[ "El-Haj, Mo", "Ezzini, Saad" ]
The Multilingual Corpus of World's Constitutions (MCWC)
osact-1.7
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.osact-1.8.bib
https://aclanthology.org/2024.osact-1.8/
@inproceedings{kruse-ahmed-2024-tafsirextractor, title = "{T}afsir{E}xtractor: Text Preprocessing Pipeline preparing Classical {A}rabic Literature for Machine Learning Applications", author = "Kruse, Carl and Ahmed, Sajawel", editor = "Al-Khalifa, Hend and Darwish, Kareem and Mubarak, Hamdy and Ali, Mona and Elsayed, Tamer", booktitle = "Proceedings of the 6th Workshop on Open-Source Arabic Corpora and Processing Tools (OSACT) with Shared Tasks on Arabic LLMs Hallucination and Dialect to MSA Machine Translation @ LREC-COLING 2024", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.osact-1.8", pages = "67--73", abstract = "In this paper, we present a comprehensive tool for preprocessing Classical Arabic (CA) literature in the field of historical exegetical studies for machine learning (ML) evaluations. Most recent ML models require the training data to be in a specific format (e.g. XML, TEI, CoNLL) to use it afterwards for ML applications such as Named Entity Recognition (NER) or Topic Modeling (TM). We report on how our method works and can be applied by other researchers with similar endeavors. Thereby, the importance of this comprehensive preprocessing tool is demonstrated, as this novel approach has no predecessors for CA yet. We achieve results that enable the training of current ML models leading to state-of-the-art performance for NER and TM on CA literature. We make our tool, along with its source code and data, freely available for the Natural Language Processing (NLP) research community.", }
In this paper, we present a comprehensive tool for preprocessing Classical Arabic (CA) literature in the field of historical exegetical studies for machine learning (ML) evaluations. Most recent ML models require the training data to be in a specific format (e.g. XML, TEI, CoNLL) to use it afterwards for ML applications such as Named Entity Recognition (NER) or Topic Modeling (TM). We report on how our method works and can be applied by other researchers with similar endeavors. Thereby, the importance of this comprehensive preprocessing tool is demonstrated, as this novel approach has no predecessors for CA yet. We achieve results that enable the training of current ML models leading to state-of-the-art performance for NER and TM on CA literature. We make our tool, along with its source code and data, freely available for the Natural Language Processing (NLP) research community.
[ "Kruse, Carl", "Ahmed, Sajawel" ]
TafsirExtractor: Text Preprocessing Pipeline preparing Classical Arabic Literature for Machine Learning Applications
osact-1.8
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.osact-1.9.bib
https://aclanthology.org/2024.osact-1.9/
@inproceedings{freihat-etal-2024-advancing, title = "Advancing the {A}rabic {W}ord{N}et: Elevating Content Quality", author = "Freihat, Abed Alhakim and Khalilia, Hadi Mahmoud and Bella, G{\'a}bor and Giunchiglia, Fausto", editor = "Al-Khalifa, Hend and Darwish, Kareem and Mubarak, Hamdy and Ali, Mona and Elsayed, Tamer", booktitle = "Proceedings of the 6th Workshop on Open-Source Arabic Corpora and Processing Tools (OSACT) with Shared Tasks on Arabic LLMs Hallucination and Dialect to MSA Machine Translation @ LREC-COLING 2024", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.osact-1.9", pages = "74--83", abstract = "High-quality WordNets are crucial for achieving high-quality results in NLP applications that rely on such resources. However, the wordnets of most languages suffer from serious issues of correctness and completeness with respect to the words and word meanings they define, such as incorrect lemmas, missing glosses and example sentences, or an inadequate, Western-centric representation of the morphology and the semantics of the language. Previous efforts have largely focused on increasing lexical coverage while ignoring other qualitative aspects. In this paper, we focus on the Arabic language and introduce a major revision of the Arabic WordNet that addresses multiple dimensions of lexico-semantic resource quality. As a result, we updated more than 58{\%} of the synsets of the existing Arabic WordNet by adding missing information and correcting errors. In order to address issues of language diversity and untranslatability, we also extended the wordnet structure by new elements: phrasets and lexical gaps.", }
High-quality WordNets are crucial for achieving high-quality results in NLP applications that rely on such resources. However, the wordnets of most languages suffer from serious issues of correctness and completeness with respect to the words and word meanings they define, such as incorrect lemmas, missing glosses and example sentences, or an inadequate, Western-centric representation of the morphology and the semantics of the language. Previous efforts have largely focused on increasing lexical coverage while ignoring other qualitative aspects. In this paper, we focus on the Arabic language and introduce a major revision of the Arabic WordNet that addresses multiple dimensions of lexico-semantic resource quality. As a result, we updated more than 58{\%} of the synsets of the existing Arabic WordNet by adding missing information and correcting errors. In order to address issues of language diversity and untranslatability, we also extended the wordnet structure by new elements: phrasets and lexical gaps.
[ "Freihat, Abed Alhakim", "Khalilia, Hadi Mahmoud", "Bella, G{\\'a}bor", "Giunchiglia, Fausto" ]
Advancing the Arabic WordNet: Elevating Content Quality
osact-1.9
Poster
2403.20215
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.osact-1.10.bib
https://aclanthology.org/2024.osact-1.10/
@inproceedings{alrashoudi-etal-2024-arabic, title = "{A}rabic Speech Recognition of zero-resourced Languages: A case of {S}hehri (Jibbali) Language", author = "Alrashoudi, Norah A. and Alshahri, Omar Said and Al-Khalifa, Hend", editor = "Al-Khalifa, Hend and Darwish, Kareem and Mubarak, Hamdy and Ali, Mona and Elsayed, Tamer", booktitle = "Proceedings of the 6th Workshop on Open-Source Arabic Corpora and Processing Tools (OSACT) with Shared Tasks on Arabic LLMs Hallucination and Dialect to MSA Machine Translation @ LREC-COLING 2024", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.osact-1.10", pages = "84--92", abstract = "Many under-resourced languages lack computational resources for automatic speech recognition (ASR) due to data scarcity issues. This makes developing accurate ASR models challenging. Shehri or Jibbali, spoken in Oman, lacks extensive annotated speech data. This paper aims to improve an ASR model for this under-resourced language. We collected a Shehri (Jibbali) speech corpus and utilized transfer learning by fine-tuning pre-trained ASR models on this dataset. Specifically, models like Wav2Vec2.0, HuBERT and Whisper were fine-tuned using techniques like parameter-efficient fine-tuning. Evaluation using word error rate (WER) and character error rate (CER) showed that the Whisper model, fine-tuned on the Shehri (Jibbali) dataset, significantly outperformed other models, with the best results from Whisper-medium achieving 3.5{\%} WER. This demonstrates the effectiveness of transfer learning for resource-constrained tasks, showing high zero-shot performance of pre-trained models.", }
Many under-resourced languages lack computational resources for automatic speech recognition (ASR) due to data scarcity issues. This makes developing accurate ASR models challenging. Shehri or Jibbali, spoken in Oman, lacks extensive annotated speech data. This paper aims to improve an ASR model for this under-resourced language. We collected a Shehri (Jibbali) speech corpus and utilized transfer learning by fine-tuning pre-trained ASR models on this dataset. Specifically, models like Wav2Vec2.0, HuBERT and Whisper were fine-tuned using techniques like parameter-efficient fine-tuning. Evaluation using word error rate (WER) and character error rate (CER) showed that the Whisper model, fine-tuned on the Shehri (Jibbali) dataset, significantly outperformed other models, with the best results from Whisper-medium achieving 3.5{\%} WER. This demonstrates the effectiveness of transfer learning for resource-constrained tasks, showing high zero-shot performance of pre-trained models.
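The reported WER and CER can be computed for any (reference, hypothesis) pair with an off-the-shelf library; the snippet below uses jiwer on made-up strings, purely to show how the two error rates are obtained.

```python
import jiwer

# Placeholder reference transcript and ASR hypothesis (not corpus data).
reference = "مرحبا كيف حالك اليوم"
hypothesis = "مرحبا كيف حال اليوم"

print("WER:", jiwer.wer(reference, hypothesis))   # word error rate
print("CER:", jiwer.cer(reference, hypothesis))   # character error rate
```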
[ "Alrashoudi, Norah A.", "Alshahri, Omar Said", "Al-Khalifa, Hend" ]
Arabic Speech Recognition of zero-resourced Languages: A case of Shehri (Jibbali) Language
osact-1.10
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.osact-1.11.bib
https://aclanthology.org/2024.osact-1.11/
@inproceedings{elneima-etal-2024-osact6, title = "{OSACT}6 Dialect to {MSA} Translation Shared Task Overview", author = "Elneima, Ashraf Hatim and Abdelaziz, AhmedElmogtaba Abdelmoniem Ali and Darwish, Kareem", editor = "Al-Khalifa, Hend and Darwish, Kareem and Mubarak, Hamdy and Ali, Mona and Elsayed, Tamer", booktitle = "Proceedings of the 6th Workshop on Open-Source Arabic Corpora and Processing Tools (OSACT) with Shared Tasks on Arabic LLMs Hallucination and Dialect to MSA Machine Translation @ LREC-COLING 2024", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.osact-1.11", pages = "93--97", abstract = "This paper presents the Dialectal Arabic (DA) to Modern Standard Arabic (MSA) Machine Translation (MT) shared task in the sixth Workshop on Open-Source Arabic Corpora and Processing Tools (OSACT6). The paper describes the creation of the validation and test data and the metrics used; and provides a brief overview of the submissions to the shared task. In all, 29 teams signed up and 6 teams made actual submissions. The teams used a variety of datasets and approaches to build their MT systems. The most successful submission involved using zero-shot and n-shot prompting of chatGPT.", }
This paper presents the Dialectal Arabic (DA) to Modern Standard Arabic (MSA) Machine Translation (MT) shared task in the sixth Workshop on Open-Source Arabic Corpora and Processing Tools (OSACT6). The paper describes the creation of the validation and test data and the metrics used; and provides a brief overview of the submissions to the shared task. In all, 29 teams signed up and 6 teams made actual submissions. The teams used a variety of datasets and approaches to build their MT systems. The most successful submission involved using zero-shot and n-shot prompting of chatGPT.
[ "Elneima, Ashraf Hatim", "Abdelaziz, AhmedElmogtaba Abdelmoniem Ali", "Darwish, Kareem" ]
OSACT6 Dialect to MSA Translation Shared Task Overview
osact-1.11
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.osact-1.12.bib
https://aclanthology.org/2024.osact-1.12/
@inproceedings{atwany-etal-2024-osact, title = "{OSACT} 2024 Task 2: {A}rabic Dialect to {MSA} Translation", author = "Atwany, Hanin and Rabih, Nour and Mohammed, Ibrahim and Waheed, Abdul and Raj, Bhiksha", editor = "Al-Khalifa, Hend and Darwish, Kareem and Mubarak, Hamdy and Ali, Mona and Elsayed, Tamer", booktitle = "Proceedings of the 6th Workshop on Open-Source Arabic Corpora and Processing Tools (OSACT) with Shared Tasks on Arabic LLMs Hallucination and Dialect to MSA Machine Translation @ LREC-COLING 2024", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.osact-1.12", pages = "98--103", abstract = "We present the results of Shared Task {``}Dialect to MSA Translation{''}, which tackles challenges posed by the diverse Arabic dialects in machine translation. Covering Gulf, Egyptian, Levantine, Iraqi and Maghrebi dialects, the task offers 1001 sentences in both MSA and dialects for fine-tuning, alongside 1888 blind test sentences. Leveraging GPT-3.5, a state-of-the-art language model, our method achieved a BLEU score of 29.61. This endeavor holds significant implications for Neural Machine Translation (NMT) systems targeting low-resource languages with linguistic variation. Additionally, negative experiments involving fine-tuning AraT5 and No Language Left Behind (NLLB) using the MADAR Dataset resulted in BLEU scores of 10.41 and 11.96, respectively. Future directions include expanding the dataset to incorporate more Arabic dialects and exploring alternative NMT architectures to further enhance translation capabilities.", }
We present the results of Shared Task {``}Dialect to MSA Translation{''}, which tackles challenges posed by the diverse Arabic dialects in machine translation. Covering Gulf, Egyptian, Levantine, Iraqi and Maghrebi dialects, the task offers 1001 sentences in both MSA and dialects for fine-tuning, alongside 1888 blind test sentences. Leveraging GPT-3.5, a state-of-the-art language model, our method achieved a BLEU score of 29.61. This endeavor holds significant implications for Neural Machine Translation (NMT) systems targeting low-resource languages with linguistic variation. Additionally, negative experiments involving fine-tuning AraT5 and No Language Left Behind (NLLB) using the MADAR Dataset resulted in BLEU scores of 10.41 and 11.96, respectively. Future directions include expanding the dataset to incorporate more Arabic dialects and exploring alternative NMT architectures to further enhance translation capabilities.
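For reference, BLEU scores like those reported here can be computed with sacrebleu; the sketch below uses placeholder dialect-to-MSA outputs, not the shared-task data.

```python
import sacrebleu

# One hypothesis stream and one reference stream (placeholder sentences).
hypotheses = ["ماذا تريد أن تأكل اليوم", "ذهبت إلى السوق صباحا"]
references = [["ماذا تريد أن تأكل اليوم", "ذهبت إلى السوق في الصباح"]]

bleu = sacrebleu.corpus_bleu(hypotheses, references)
print(f"BLEU = {bleu.score:.2f}")
```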
[ "Atwany, Hanin", "Rabih, Nour", "Mohammed, Ibrahim", "Waheed, Abdul", "Raj, Bhiksha" ]
OSACT 2024 Task 2: Arabic Dialect to MSA Translation
osact-1.12
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.osact-1.13.bib
https://aclanthology.org/2024.osact-1.13/
@inproceedings{nacar-etal-2024-asos, title = "{ASOS} at {OSACT}6 Shared Task: Investigation of Data Augmentation in {A}rabic Dialect-{MSA} Translation", author = "Nacar, Omer and Alharbi, Abdullah and Sibaee, Serry and Ahmed, Samar and Ghouti, Lahouari and Koubaa, Anis", editor = "Al-Khalifa, Hend and Darwish, Kareem and Mubarak, Hamdy and Ali, Mona and Elsayed, Tamer", booktitle = "Proceedings of the 6th Workshop on Open-Source Arabic Corpora and Processing Tools (OSACT) with Shared Tasks on Arabic LLMs Hallucination and Dialect to MSA Machine Translation @ LREC-COLING 2024", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.osact-1.13", pages = "104--111", abstract = "The translation between Modern Standard Arabic (MSA) and the various Arabic dialects presents unique challenges due to the significant linguistic, cultural, and contextual variations across the regions where Arabic is spoken. This paper presents a system description of our participation in the OSACT 2024 Dialect to MSA Translation Shared Task. We explain our comprehensive approach which combines data augmentation techniques using generative pre-trained transformer models (GPT-3.5 and GPT-4) with fine-tuning of AraT5 V2, a model specifically designed for Arabic translation tasks. Our methodology has significantly expanded the training dataset, thus improving the model{'}s performance across five major Arabic dialects, namely Gulf, Egyptian, Levantine, Iraqi, and Maghrebi. We have rigorously evaluated our approach, using BLEU score, to ensure translation accuracy, fluency, and the preservation of meaning. Our results showcase the effectiveness of our refined models in addressing the challenges posed by diverse Arabic dialects and Modern Standard Arabic (MSA), achieving a BLEU score of 80{\%} on the validation test set and 22.25{\%} on the blind test set. However, it{'}s important to note that while utilizing a larger dataset, such as Madar + Dev, resulted in significantly higher evaluation BLEU scores, the performance on the blind test set was relatively lower. This observation underscores the importance of dataset size in model training, revealing potential limitations in generalization to unseen data due to variations in data distribution and domain mismatches.", }
The translation between Modern Standard Arabic (MSA) and the various Arabic dialects presents unique challenges due to the significant linguistic, cultural, and contextual variations across the regions where Arabic is spoken. This paper presents a system description of our participation in the OSACT 2024 Dialect to MSA Translation Shared Task. We explain our comprehensive approach which combines data augmentation techniques using generative pre-trained transformer models (GPT-3.5 and GPT-4) with fine-tuning of AraT5 V2, a model specifically designed for Arabic translation tasks. Our methodology has significantly expanded the training dataset, thus improving the model{'}s performance across five major Arabic dialects, namely Gulf, Egyptian, Levantine, Iraqi, and Maghrebi. We have rigorously evaluated our approach, using BLEU score, to ensure translation accuracy, fluency, and the preservation of meaning. Our results showcase the effectiveness of our refined models in addressing the challenges posed by diverse Arabic dialects and Modern Standard Arabic (MSA), achieving a BLEU score of 80{\%} on the validation test set and 22.25{\%} on the blind test set. However, it{'}s important to note that while utilizing a larger dataset, such as Madar + Dev, resulted in significantly higher evaluation BLEU scores, the performance on the blind test set was relatively lower. This observation underscores the importance of dataset size in model training, revealing potential limitations in generalization to unseen data due to variations in data distribution and domain mismatches.
[ "Nacar, Omer", "Alharbi, Abdullah", "Sibaee, Serry", "Ahmed, Samar", "Ghouti, Lahouari", "Koubaa, Anis" ]
ASOS at OSACT6 Shared Task: Investigation of Data Augmentation in Arabic Dialect-MSA Translation
osact-1.13
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.osact-1.14.bib
https://aclanthology.org/2024.osact-1.14/
@inproceedings{abdelaziz-etal-2024-llm, title = "{LLM}-based {MT} Data Creation: Dialectal to {MSA} Translation Shared Task", author = "Abdelaziz, AhmedElmogtaba Abdelmoniem Ali and Elneima, Ashraf Hatim and Darwish, Kareem", editor = "Al-Khalifa, Hend and Darwish, Kareem and Mubarak, Hamdy and Ali, Mona and Elsayed, Tamer", booktitle = "Proceedings of the 6th Workshop on Open-Source Arabic Corpora and Processing Tools (OSACT) with Shared Tasks on Arabic LLMs Hallucination and Dialect to MSA Machine Translation @ LREC-COLING 2024", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.osact-1.14", pages = "112--116", abstract = "This paper presents our approach to the Dialect to Modern Standard Arabic (MSA) Machine Translation shared task, conducted as part of the sixth Workshop on Open-Source Arabic Corpora and Processing Tools (OSACT6). Our primary contribution is the development of a novel dataset derived from The Saudi Audio Dataset for Arabic (SADA), an Arabic audio corpus. By employing an automated method utilizing ChatGPT 3.5, we translated the dialectal Arabic texts to their MSA equivalents. This process not only yielded a unique and valuable dataset but also showcased an efficient method for leveraging language models in dataset generation. Utilizing this dataset, alongside additional resources, we trained a machine translation model based on the Transformer architecture. Through systematic experimentation with model configurations, we achieved notable improvements in translation quality. Our findings highlight the significance of LLM-assisted dataset creation methodologies and their impact on advancing machine translation systems, particularly for languages with considerable dialectal diversity like Arabic.", }
This paper presents our approach to the Dialect to Modern Standard Arabic (MSA) Machine Translation shared task, conducted as part of the sixth Workshop on Open-Source Arabic Corpora and Processing Tools (OSACT6). Our primary contribution is the development of a novel dataset derived from The Saudi Audio Dataset for Arabic (SADA), an Arabic audio corpus. By employing an automated method utilizing ChatGPT 3.5, we translated the dialectal Arabic texts to their MSA equivalents. This process not only yielded a unique and valuable dataset but also showcased an efficient method for leveraging language models in dataset generation. Utilizing this dataset, alongside additional resources, we trained a machine translation model based on the Transformer architecture. Through systematic experimentation with model configurations, we achieved notable improvements in translation quality. Our findings highlight the significance of LLM-assisted dataset creation methodologies and their impact on advancing machine translation systems, particularly for languages with considerable dialectal diversity like Arabic.
[ "Abdelaziz, AhmedElmogtaba Abdelmoniem Ali", "Elneima, Ashraf Hatim", "Darwish, Kareem" ]
LLM-based MT Data Creation: Dialectal to MSA Translation Shared Task
osact-1.14
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.osact-1.15.bib
https://aclanthology.org/2024.osact-1.15/
@inproceedings{alahmari-2024-sirius, title = "{S}irius{\_}{T}ranslators at {OSACT}6 2024 Shared Task: Fin-tuning Ara-T5 Models for Translating {A}rabic Dialectal Text to {M}odern {S}tandard {A}rabic", author = "Alahmari, Salwa Saad", editor = "Al-Khalifa, Hend and Darwish, Kareem and Mubarak, Hamdy and Ali, Mona and Elsayed, Tamer", booktitle = "Proceedings of the 6th Workshop on Open-Source Arabic Corpora and Processing Tools (OSACT) with Shared Tasks on Arabic LLMs Hallucination and Dialect to MSA Machine Translation @ LREC-COLING 2024", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.osact-1.15", pages = "117--123", abstract = "This paper presents the findings from our participation in the 6th Workshop on Open-Source Arabic Corpora and Processing Tools (OSACT6) in 2024. Our specific focus was on the second task (Task 2), which involved translating text at the sentence level from five distinct Dialectal Arabic (DA) varieties (Gulf, Egyptian, Levantine, Iraqi, and Maghrebi) into Modern Standard Arabic (MSA). Our team, Sirius{\_}Translators, fine-tuned four AraT5 models, namely AraT5 base, AraT5v2-base-1024, AraT5-MSA-Small, and AraT5-MSA-Base, for the Arabic machine translation (MT) task. These models were fine-tuned using a variety of parallel corpora containing Dialectal Arabic and Modern Standard Arabic. Based on the evaluation results of OSACT6 2024 Shared Task 2, our fine-tuned AraT5v2-base-1024 model achieved an overall BLEU score of 21.0 on the development (Dev) set and 9.57 on the test set, respectively.", }
This paper presents the findings from our participation in the 6th Workshop on Open-Source Arabic Corpora and Processing Tools (OSACT6) in 2024. Our specific focus was on the second task (Task 2), which involved translating text at the sentence level from five distinct Dialectal Arabic (DA) varieties (Gulf, Egyptian, Levantine, Iraqi, and Maghrebi) into Modern Standard Arabic (MSA). Our team, Sirius{\_}Translators, fine-tuned four AraT5 models, namely AraT5 base, AraT5v2-base-1024, AraT5-MSA-Small, and AraT5-MSA-Base, for the Arabic machine translation (MT) task. These models were fine-tuned using a variety of parallel corpora containing Dialectal Arabic and Modern Standard Arabic. Based on the evaluation results of OSACT6 2024 Shared Task 2, our fine-tuned AraT5v2-base-1024 model achieved an overall BLEU score of 21.0 on the development (Dev) set and 9.57 on the test set, respectively.
[ "Alahmari, Salwa Saad" ]
Sirius_Translators at OSACT6 2024 Shared Task: Fin-tuning Ara-T5 Models for Translating Arabic Dialectal Text to Modern Standard Arabic
osact-1.15
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.osact-1.16.bib
https://aclanthology.org/2024.osact-1.16/
@inproceedings{fares-2024-arat5, title = "{A}ra{T}5-{MSA}izer: Translating Dialectal {A}rabic to {MSA}", author = "Fares, Murhaf", editor = "Al-Khalifa, Hend and Darwish, Kareem and Mubarak, Hamdy and Ali, Mona and Elsayed, Tamer", booktitle = "Proceedings of the 6th Workshop on Open-Source Arabic Corpora and Processing Tools (OSACT) with Shared Tasks on Arabic LLMs Hallucination and Dialect to MSA Machine Translation @ LREC-COLING 2024", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.osact-1.16", pages = "124--129", abstract = "This paper outlines the process of training the AraT5-MSAizer model, a transformer-based neural machine translation model aimed at translating five regional Arabic dialects into Modern Standard Arabic (MSA). Developed for Task 2 of the 6th Workshop on Open-Source Arabic Corpora and Processing Tools, the model attained a BLEU score of 21.79{\%} on the test set associated with this task.", }
This paper outlines the process of training the AraT5-MSAizer model, a transformer-based neural machine translation model aimed at translating five regional Arabic dialects into Modern Standard Arabic (MSA). Developed for Task 2 of the 6th Workshop on Open-Source Arabic Corpora and Processing Tools, the model attained a BLEU score of 21.79{\%} on the test set associated with this task.
[ "Fares, Murhaf" ]
AraT5-MSAizer: Translating Dialectal Arabic to MSA
osact-1.16
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.osact-1.17.bib
https://aclanthology.org/2024.osact-1.17/
@inproceedings{sibaee-etal-2024-asos, title = "{ASOS} at {A}rabic {LLM}s Hallucinations 2024: Can {LLM}s detect their Hallucinations :)", author = "Sibaee, Serry Taiseer and I. Alharbi, Abdullah and Ahmed, Samar and Nacar, Omar and Ghouti, Lahouri and Koubaa, Anis", editor = "Al-Khalifa, Hend and Darwish, Kareem and Mubarak, Hamdy and Ali, Mona and Elsayed, Tamer", booktitle = "Proceedings of the 6th Workshop on Open-Source Arabic Corpora and Processing Tools (OSACT) with Shared Tasks on Arabic LLMs Hallucination and Dialect to MSA Machine Translation @ LREC-COLING 2024", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.osact-1.17", pages = "130--134", abstract = "This research delves into the issue of hallucination detection in Large Language Models (LLMs) using Arabic language datasets. As LLMs are increasingly being used in various applications, the phenomenon of hallucination, which refers to generating factually inaccurate content despite grammatical coherence, poses significant challenges. We participate in the OSACT 2024 Shared-task (Detection of Hallucination in Arabic Factual Claims Generated by ChatGPT and GPT4). We explore various approaches for detecting and mitigating hallucination, using models such as GPT-4, Mistral, and Gemini within a novel experimental framework. Our research findings reveal that the effectiveness of these models in classifying claims into Fact-Claim, Fact-Improvement, and Non-Fact categories varies greatly, underscoring the complexities of addressing hallucination in morphologically rich languages. The study emphasizes the need for advanced modelling and training strategies to enhance the reliability and factual accuracy of LLM-generated content, laying the groundwork for future explorations in mitigating hallucination risks. In our experiments, we achieved an F1 score of 0.54 with the GPT-4 LLM.", }
This research delves into the issue of hallucination detection in Large Language Models (LLMs) using Arabic language datasets. As LLMs are increasingly being used in various applications, the phenomenon of hallucination, which refers to generating factually inaccurate content despite grammatical coherence, poses significant challenges. We participate in the OSACT 2024 Shared-task (Detection of Hallucination in Arabic Factual Claims Generated by ChatGPT and GPT4). We explore various approaches for detecting and mitigating hallucination, using models such as GPT-4, Mistral, and Gemini within a novel experimental framework. Our research findings reveal that the effectiveness of these models in classifying claims into Fact-Claim, Fact-Improvement, and Non-Fact categories varies greatly, underscoring the complexities of addressing hallucination in morphologically rich languages. The study emphasizes the need for advanced modelling and training strategies to enhance the reliability and factual accuracy of LLM-generated content, laying the groundwork for future explorations in mitigating hallucination risks. In our experiments, we achieved an F1 score of 0.54 with the GPT-4 LLM.
[ "Sibaee, Serry Taiseer", "I. Alharbi, Abdullah", "Ahmed, Samar", "Nacar, Omar", "Ghouti, Lahouri", "Koubaa, Anis" ]
ASOS at Arabic LLMs Hallucinations 2024: Can LLMs detect their Hallucinations :)
osact-1.17
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.parlaclarin-1.1.bib
https://aclanthology.org/2024.parlaclarin-1.1/
@inproceedings{skubic-fiser-2024-parliamentary, title = "Parliamentary Discourse Research in Political Science: Literature Review", author = "Skubic, Jure and Fi{\v{s}}er, Darja", editor = "Fiser, Darja and Eskevich, Maria and Bordon, David", booktitle = "Proceedings of the IV Workshop on Creating, Analysing, and Increasing Accessibility of Parliamentary Corpora (ParlaCLARIN) @ LREC-COLING 2024", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.parlaclarin-1.1", pages = "1--11", abstract = "One of the major research interests for political science has always been the study of political discourse and parliamentary debates. This literature review offers an overview of the most prominent research methods used in political science when studying political discourse. We identify the commonalities and the differences of the political science and corpus-driven approaches and show how parliamentary corpora and corpus-based approaches could be successfully integrated in political science research.", }
One of the major research interests for political science has always been the study of political discourse and parliamentary debates. This literature review offers an overview of the most prominent research methods used in political science when studying political discourse. We identify the commonalities and the differences of the political science and corpus-driven approaches and show how parliamentary corpora and corpus-based approaches could be successfully integrated in political science research.
[ "Skubic, Jure", "Fi{\\v{s}}er, Darja" ]
Parliamentary Discourse Research in Political Science: Literature Review
parlaclarin-1.1
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.parlaclarin-1.2.bib
https://aclanthology.org/2024.parlaclarin-1.2/
@inproceedings{aires-etal-2024-compiling, title = "Compiling and Exploring a {P}ortuguese Parliamentary Corpus: {P}arla{M}int-{PT}", author = "Aires, Jos{\'e} and Cardoso, Aida and Pereira, Rui and Mendes, Amalia", editor = "Fiser, Darja and Eskevich, Maria and Bordon, David", booktitle = "Proceedings of the IV Workshop on Creating, Analysing, and Increasing Accessibility of Parliamentary Corpora (ParlaCLARIN) @ LREC-COLING 2024", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.parlaclarin-1.2", pages = "12--20", abstract = "As part of the project ParlaMint II, a new corpus of the sessions of the Portuguese Parliament from 2015 to 2022 has been compiled, encoded and annotated following the ParlaMint guidelines. We report on the contents of the corpus and on the specific nature of the political settings in Portugal during the time period covered. Two subcorpora were designed that would enable comparisons of the political speeches between the pre- and post-COVID-19 pandemic periods. We discuss the pipeline applied to download the original texts, ensure their preprocessing and encoding in XML, and the final step of annotation. This new resource covers a period of changes in the political system in Portugal and will be an important source of data for political and social studies. Finally, we have explored the political stance on immigration in the ParlaMint-PT corpus.", }
As part of the project ParlaMint II, a new corpus of the sessions of the Portuguese Parliament from 2015 to 2022 has been compiled, encoded and annotated following the ParlaMint guidelines. We report on the contents of the corpus and on the specific nature of the political settings in Portugal during the time period covered. Two subcorpora were designed that would enable comparisons of the political speeches between the pre- and post-COVID-19 pandemic periods. We discuss the pipeline applied to download the original texts, ensure their preprocessing and encoding in XML, and the final step of annotation. This new resource covers a period of changes in the political system in Portugal and will be an important source of data for political and social studies. Finally, we have explored the political stance on immigration in the ParlaMint-PT corpus.
[ "Aires, Jos{\\'e}", "Cardoso, Aida", "Pereira, Rui", "Mendes, Amalia" ]
Compiling and Exploring a Portuguese Parliamentary Corpus: ParlaMint-PT
parlaclarin-1.2
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.parlaclarin-1.3.bib
https://aclanthology.org/2024.parlaclarin-1.3/
@inproceedings{vladu-etal-2024-gender, title = "Gender, Speech, and Representation in the {G}alician Parliament: An Analysis Based on the {P}arla{M}int-{ES}-{GA} Dataset", author = "Vladu, Adina I. and Fern{\'a}ndez Rei, Elisa and Magari{\~n}os, Carmen and Garc{\'\i}a D{\'\i}az, Noelia", editor = "Fiser, Darja and Eskevich, Maria and Bordon, David", booktitle = "Proceedings of the IV Workshop on Creating, Analysing, and Increasing Accessibility of Parliamentary Corpora (ParlaCLARIN) @ LREC-COLING 2024", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.parlaclarin-1.3", pages = "21--29", abstract = "This paper employs the ParlaMint-ES-GA dataset to scrutinize the intersection of gender, speech, and representation within the Parliament of Galicia, an autonomous region located in North-western Spain. The research questions center around the dynamics of women{'}s participation in parliamentary proceedings. Contrary to numerical parity, we explore whether increased female presence in the parliament correlates with equitable access to the floor. Analyzing parliamentary proceedings from 2015 to 2022, our quantitative study investigates the relationship between the legislative body{'}s composition, the number of speeches by Members of Parliament (MPs), and references made by MPs in their speeches. The findings reveal nuances in gender representation and participation, challenging assumptions about proportional access to parliamentary discourse.", }
This paper employs the ParlaMint-ES-GA dataset to scrutinize the intersection of gender, speech, and representation within the Parliament of Galicia, an autonomous region located in North-western Spain. The research questions center around the dynamics of women{'}s participation in parliamentary proceedings. Contrary to numerical parity, we explore whether increased female presence in the parliament correlates with equitable access to the floor. Analyzing parliamentary proceedings from 2015 to 2022, our quantitative study investigates the relationship between the legislative body{'}s composition, the number of speeches by Members of Parliament (MPs), and references made by MPs in their speeches. The findings reveal nuances in gender representation and participation, challenging assumptions about proportional access to parliamentary discourse.
[ "Vladu, Adina I.", "Fern{\\'a}ndez Rei, Elisa", "Magari{\\~n}os, Carmen", "Garc{\\'\\i}a D{\\'\\i}az, Noelia" ]
Gender, Speech, and Representation in the Galician Parliament: An Analysis Based on the ParlaMint-ES-GA Dataset
parlaclarin-1.3
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.parlaclarin-1.4.bib
https://aclanthology.org/2024.parlaclarin-1.4/
@inproceedings{osenova-simov-2024-bulgarian, title = "{B}ulgarian {P}arla{M}int 4.0 corpus as a testset for Part-of-speech tagging and Named Entity Recognition", author = "Osenova, Petya and Simov, Kiril", editor = "Fiser, Darja and Eskevich, Maria and Bordon, David", booktitle = "Proceedings of the IV Workshop on Creating, Analysing, and Increasing Accessibility of Parliamentary Corpora (ParlaCLARIN) @ LREC-COLING 2024", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.parlaclarin-1.4", pages = "30--35", abstract = "The paper discusses some fine-tuned models for the tasks of part-of-speech tagging and named entity recognition. The fine-tuning was performed on the basis of an existing BERT pre-trained model and two newly pre-trained BERT models for Bulgarian that are cross-tested on the domain of the Bulgarian part of the ParlaMint corpora as a new domain. In addition, a comparison has been made between the performance of the new fine-tuned BERT models and the available results from the Stanza-based model which the Bulgarian part of the ParlaMint corpora has been annotated with. The observations show the weaknesses in each model as well as the common challenges.", }
The paper discusses some fine-tuned models for the tasks of part-of-speech tagging and named entity recognition. The fine-tuning was performed on the basis of an existing BERT pre-trained model and two newly pre-trained BERT models for Bulgarian that are cross-tested on the domain of the Bulgarian part of the ParlaMint corpora as a new domain. In addition, a comparison has been made between the performance of the new fine-tuned BERT models and the available results from the Stanza-based model which the Bulgarian part of the ParlaMint corpora has been annotated with. The observations show the weaknesses in each model as well as the common challenges.
[ "Osenova, Petya", "Simov, Kiril" ]
Bulgarian ParlaMint 4.0 corpus as a testset for Part-of-speech tagging and Named Entity Recognition
parlaclarin-1.4
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.parlaclarin-1.5.bib
https://aclanthology.org/2024.parlaclarin-1.5/
@inproceedings{rehbein-2024-resources, title = "Resources and Methods for Analysing Political Rhetoric and Framing in Parliamentary Debates", author = "Rehbein, Ines", editor = "Fiser, Darja and Eskevich, Maria and Bordon, David", booktitle = "Proceedings of the IV Workshop on Creating, Analysing, and Increasing Accessibility of Parliamentary Corpora (ParlaCLARIN) @ LREC-COLING 2024", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.parlaclarin-1.5", pages = "36--37", abstract = "Recent work in political science has made extensive use of NLP methods to produce evidential support for a variety of analyses, for example, inferring an actor{'}s ideological positions from textual data or identifying the polarisation of the political discourse over the last decades. Most work has employed variations of lexical features extracted from text or has learned latent representations in a mostly unsupervised manner. While such approaches have the potential to enable political analyses at scale, they are often limited by their lack of interpretability. In the talk, I will instead look at semantic and pragmatic representations of political rhetoric and ideological framing and present several case studies that showcase how linguistic annotation and the use of NLP methods can help to investigate different framing strategies in parliamentary debates. The first part of the talk investigates populist framing strategies, specifically, the use of pronouns to create in- and out-groups and the identification of people-centric messages. The second part of the presentation focusses on framing strategies on the pragmatic level.", }
Recent work in political science has made extensive use of NLP methods to produce evidential support for a variety of analyses, for example, inferring an actor{'}s ideological positions from textual data or identifying the polarisation of the political discourse over the last decades. Most work has employed variations of lexical features extracted from text or has learned latent representations in a mostly unsupervised manner. While such approaches have the potential to enable political analyses at scale, they are often limited by their lack of interpretability. In the talk, I will instead look at semantic and pragmatic representations of political rhetoric and ideological framing and present several case studies that showcase how linguistic annotation and the use of NLP methods can help to investigate different framing strategies in parliamentary debates. The first part of the talk investigates populist framing strategies, specifically, the use of pronouns to create in- and out-groups and the identification of people-centric messages. The second part of the presentation focusses on framing strategies on the pragmatic level.
[ "Rehbein, Ines" ]
Resources and Methods for Analysing Political Rhetoric and Framing in Parliamentary Debates
parlaclarin-1.5
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.parlaclarin-1.6.bib
https://aclanthology.org/2024.parlaclarin-1.6/
@inproceedings{sousa-lopes-cardoso-2024-ptparl, title = "{PTPARL}-{V}: {P}ortuguese Parliamentary Debates for Voting Behaviour Study", author = "Sousa, Afonso and Lopes Cardoso, Henrique", editor = "Fiser, Darja and Eskevich, Maria and Bordon, David", booktitle = "Proceedings of the IV Workshop on Creating, Analysing, and Increasing Accessibility of Parliamentary Corpora (ParlaCLARIN) @ LREC-COLING 2024", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.parlaclarin-1.6", pages = "38--42", abstract = "We present a new dataset, PTPARL-V, that provides valuable insight for advancing discourse analysis of parliamentary debates in Portuguese. This is achieved by processing the open-access information available at the official Portuguese Parliament website and scraping the information from the debate minutes{'} PDFs contained therein. Our dataset includes interventions from 547 different deputies of all major Portuguese parties, from 736 legislative initiatives spanning five legislatures from 2005 to 2021. We present a statistical analysis of the dataset compared to other publicly available Portuguese parliamentary debate corpora. Finally, we provide baseline performance analysis for voting behaviour classification.", }
We present a new dataset, PTPARL-V, that provides valuable insight for advancing discourse analysis of parliamentary debates in Portuguese. This is achieved by processing the open-access information available at the official Portuguese Parliament website and scraping the information from the debate minutes{'} PDFs contained therein. Our dataset includes interventions from 547 different deputies of all major Portuguese parties, from 736 legislative initiatives spanning five legislatures from 2005 to 2021. We present a statistical analysis of the dataset compared to other publicly available Portuguese parliamentary debate corpora. Finally, we provide baseline performance analysis for voting behaviour classification.
[ "Sousa, Afonso", "Lopes Cardoso, Henrique" ]
PTPARL-V: Portuguese Parliamentary Debates for Voting Behaviour Study
parlaclarin-1.6
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.parlaclarin-1.7.bib
https://aclanthology.org/2024.parlaclarin-1.7/
@inproceedings{ogrodniczuk-etal-2024-polish, title = "{P}olish Round Table Corpus", author = "Ogrodniczuk, Maciej and Tuora, Ryszard and W{\'o}jtowicz, Beata", editor = "Fiser, Darja and Eskevich, Maria and Bordon, David", booktitle = "Proceedings of the IV Workshop on Creating, Analysing, and Increasing Accessibility of Parliamentary Corpora (ParlaCLARIN) @ LREC-COLING 2024", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.parlaclarin-1.7", pages = "43--47", abstract = "The paper describes the process of preparation of the Polish Round Table Corpus (Pol. Korpus Okr{\k{a}}g{\l}ego Sto{\l}u), a new resource documenting negotiations taking place in 1989 between the representatives of the communist government of the People{'}s Republic of Poland and the Solidarity opposition. The process consisted of OCR of graphical transcripts of the talks stored in the form of parliament-like stenographic transcripts, carrying out their manual correction and making them available for search in a concordancer currently used for standard parliamentary transcripts.", }
The paper describes the process of preparation of the Polish Round Table Corpus (Pol. Korpus Okr{\k{a}}g{\l}ego Sto{\l}u), a new resource documenting negotiations taking place in 1989 between the representatives of the communist government of the People{'}s Republic of Poland and the Solidarity opposition. The process consisted of OCR of graphical transcripts of the talks stored in the form of parliament-like stenographic transcripts, carrying out their manual correction and making them available for search in a concordancer currently used for standard parliamentary transcripts.
[ "Ogrodniczuk, Maciej", "Tuora, Ryszard", "W{\\'o}jtowicz, Beata" ]
Polish Round Table Corpus
parlaclarin-1.7
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.parlaclarin-1.8.bib
https://aclanthology.org/2024.parlaclarin-1.8/
@inproceedings{jauhiainen-etal-2024-investigating, title = "Investigating Multilinguality in the Plenary Sessions of the Parliament of {F}inland with Automatic Language Identification", author = "Jauhiainen, Tommi and Piitulainen, Jussi and Axelson, Erik and Dieckmann, Ute and Lennes, Mietta and Niemi, Jyrki and Rueter, Jack and Lind{\'e}n, Krister", editor = "Fiser, Darja and Eskevich, Maria and Bordon, David", booktitle = "Proceedings of the IV Workshop on Creating, Analysing, and Increasing Accessibility of Parliamentary Corpora (ParlaCLARIN) @ LREC-COLING 2024", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.parlaclarin-1.8", pages = "48--56", abstract = "In this paper, we use automatic language identification to investigate the usage of different languages in the plenary sessions of the Parliament of Finland. Finland has two national languages, Finnish and Swedish. The plenary sessions are published as transcriptions of speeches in Parliament, reflecting the language the speaker used. In addition to charting out language use, we demonstrate how language identification can be used to audit the quality of the dataset. On the one hand, we made slight improvements to our language identifier; on the other hand, we made a list of improvement suggestions for the next version of the dataset.", }
In this paper, we use automatic language identification to investigate the usage of different languages in the plenary sessions of the Parliament of Finland. Finland has two national languages, Finnish and Swedish. The plenary sessions are published as transcriptions of speeches in Parliament, reflecting the language the speaker used. In addition to charting out language use, we demonstrate how language identification can be used to audit the quality of the dataset. On the one hand, we made slight improvements to our language identifier; on the other hand, we made a list of improvement suggestions for the next version of the dataset.
[ "Jauhiainen, Tommi", "Piitulainen, Jussi", "Axelson, Erik", "Dieckmann, Ute", "Lennes, Mietta", "Niemi, Jyrki", "Rueter, Jack", "Lind{\\'e}n, Krister" ]
Investigating Multilinguality in the Plenary Sessions of the Parliament of Finland with Automatic Language Identification
parlaclarin-1.8
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.parlaclarin-1.9.bib
https://aclanthology.org/2024.parlaclarin-1.9/
@inproceedings{menzel-2024-exploring, title = "Exploring Word Formation Trends in Written, Spoken, Translated and Interpreted {E}uropean Parliament Data {--} A Case Study on Initialisms in {E}nglish and {G}erman", author = "Menzel, Katrin", editor = "Fiser, Darja and Eskevich, Maria and Bordon, David", booktitle = "Proceedings of the IV Workshop on Creating, Analysing, and Increasing Accessibility of Parliamentary Corpora (ParlaCLARIN) @ LREC-COLING 2024", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.parlaclarin-1.9", pages = "57--65", abstract = "This paper demonstrates the research potential of a unique European Parliament dataset for register studies, contrastive linguistics, translation and interpreting studies. The dataset consists of parallel data for several European languages, including written source texts and their translations as well as spoken source texts and the transcripts of their simultaneously interpreted versions. The paper presents a cross-linguistic, corpus-based case study on a word formation phenomenon in these European Parliament data that are enriched with various linguistic annotations and metadata as well as with information-theoretic surprisal scores. It addresses the questions of how initialisms are used across languages and production modes in the English and German corpus sections of these European Parliament data, whether there is a correlation between the use of initialisms and the use of their corresponding multiword full forms in the analysed corpus sections and what insights on the informativity and possible processing difficulties of initialisms we can gain from an analysis of information-theoretic surprisal values. The results show that English written originals and German translations are the corpus sections with the highest frequencies of initialisms. The majority of cross-language transfer situations lead to fewer initialisms in the target texts than in the source texts. In the English data, there is a positive correlation between the frequency of initialisms and the frequency of the respective full forms. There is a similar correlation in the German data, apart from the interpreted data. Additionally, the results show that initialisms represent peaks of information with regard to their surprisal values within their segments. Particularly the German data show higher surprisal values of initialisms in mediated language than in non-mediated discourse types, which indicates that in German mediated discourse, initialisms tend to be used in less conventionalised textual contexts than in English.", }
This paper demonstrates the research potential of a unique European Parliament dataset for register studies, contrastive linguistics, translation and interpreting studies. The dataset consists of parallel data for several European languages, including written source texts and their translations as well as spoken source texts and the transcripts of their simultaneously interpreted versions. The paper presents a cross-linguistic, corpus-based case study on a word formation phenomenon in these European Parliament data that are enriched with various linguistic annotations and metadata as well as with information-theoretic surprisal scores. It addresses the questions of how initialisms are used across languages and production modes in the English and German corpus sections of these European Parliament data, whether there is a correlation between the use of initialisms and the use of their corresponding multiword full forms in the analysed corpus sections and what insights on the informativity and possible processing difficulties of initialisms we can gain from an analysis of information-theoretic surprisal values. The results show that English written originals and German translations are the corpus sections with the highest frequencies of initialisms. The majority of cross-language transfer situations lead to fewer initialisms in the target texts than in the source texts. In the English data, there is a positive correlation between the frequency of initialisms and the frequency of the respective full forms. There is a similar correlation in the German data, apart from the interpreted data. Additionally, the results show that initialisms represent peaks of information with regard to their surprisal values within their segments. Particularly the German data show higher surprisal values of initialisms in mediated language than in non-mediated discourse types, which indicates that in German mediated discourse, initialisms tend to be used in less conventionalised textual contexts than in English.
[ "Menzel, Katrin" ]
Exploring Word Formation Trends in Written, Spoken, Translated and Interpreted European Parliament Data – A Case Study on Initialisms in English and German
parlaclarin-1.9
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.parlaclarin-1.10.bib
https://aclanthology.org/2024.parlaclarin-1.10/
@inproceedings{kawahara-2024-quantitative, title = "Quantitative Analysis of Editing in Transcription Process in {J}apanese and {E}uropean Parliaments and its Diachronic Changes", author = "Kawahara, Tatsuya", editor = "Fiser, Darja and Eskevich, Maria and Bordon, David", booktitle = "Proceedings of the IV Workshop on Creating, Analysing, and Increasing Accessibility of Parliamentary Corpora (ParlaCLARIN) @ LREC-COLING 2024", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.parlaclarin-1.10", pages = "66--69", abstract = "In making official transcripts for meeting records in Parliament, some edits are made from faithful transcripts of utterances for linguistic correction and formality. Classification of these edits is provided in this paper, and quantitative analysis is conducted for Japanese and European Parliamentary meetings by comparing the faithful transcripts of audio recordings against the official meeting records. Different trends are observed between the two Parliaments due to the nature of the language used and the meeting style. Moreover, its diachronic changes in the Japanese transcripts are presented, showing a significant decrease in the edits over the past decades. It was found that a majority of edits in the Japanese Parliament (Diet) simply remove fillers and redundant words, keeping the transcripts as verbatim as possible. This property is useful for the evaluation of the automatic speech transcription system, which was developed by us and has been used in the Japanese Parliament.", }
In making official transcripts for meeting records in Parliament, some edits are made from faithful transcripts of utterances for linguistic correction and formality. Classification of these edits is provided in this paper, and quantitative analysis is conducted for Japanese and European Parliamentary meetings by comparing the faithful transcripts of audio recordings against the official meeting records. Different trends are observed between the two Parliaments due to the nature of the language used and the meeting style. Moreover, its diachronic changes in the Japanese transcripts are presented, showing a significant decrease in the edits over the past decades. It was found that a majority of edits in the Japanese Parliament (Diet) simply remove fillers and redundant words, keeping the transcripts as verbatim as possible. This property is useful for the evaluation of the automatic speech transcription system, which was developed by us and has been used in the Japanese Parliament.
[ "Kawahara, Tatsuya" ]
Quantitative Analysis of Editing in Transcription Process in Japanese and European Parliaments and its Diachronic Changes
parlaclarin-1.10
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.parlaclarin-1.11.bib
https://aclanthology.org/2024.parlaclarin-1.11/
@inproceedings{tarkka-etal-2024-automated, title = "Automated Emotion Annotation of {F}innish Parliamentary Speeches Using {GPT}-4", author = "Tarkka, Otto and Koljonen, Jaakko and Korhonen, Markus and Laine, Juuso and Martiskainen, Kristian and Elo, Kimmo and Laippala, Veronika", editor = "Fiser, Darja and Eskevich, Maria and Bordon, David", booktitle = "Proceedings of the IV Workshop on Creating, Analysing, and Increasing Accessibility of Parliamentary Corpora (ParlaCLARIN) @ LREC-COLING 2024", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.parlaclarin-1.11", pages = "70--76", abstract = "In this paper, we test the efficacy of using GPT-4 to annotate a dataset that is then used to train a BERT classifier for emotion analysis. Manual data annotation is often a laborious and expensive task and emotion annotation, specifically, has proved difficult even for expert annotators. We show that using GPT-4 can produce equally good results as doing data annotation manually while saving a lot of time and money. We train a BERT classifier on our automatically annotated dataset and get results that outperform a BERT classifier that is trained on machine translated data. Our paper shows how Large Language Models can be used to work with and analyse parliamentary corpora.", }
In this paper, we test the efficacy of using GPT-4 to annotate a dataset that is then used to train a BERT classifier for emotion analysis. Manual data annotation is often a laborious and expensive task and emotion annotation, specifically, has proved difficult even for expert annotators. We show that using GPT-4 can produce equally good results as doing data annotation manually while saving a lot of time and money. We train a BERT classifier on our automatically annotated dataset and get results that outperform a BERT classifier that is trained on machine translated data. Our paper shows how Large Language Models can be used to work with and analyse parliamentary corpora.
[ "Tarkka, Otto", "Koljonen, Jaakko", "Korhonen, Markus", "Laine, Juuso", "Martiskainen, Kristian", "Elo, Kimmo", "Laippala, Veronika" ]
Automated Emotion Annotation of Finnish Parliamentary Speeches Using GPT-4
parlaclarin-1.11
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.parlaclarin-1.12.bib
https://aclanthology.org/2024.parlaclarin-1.12/
@inproceedings{aubert-jager-2024-making, title = "Making Parliamentary Debates More Accessible: Aligning Video Recordings with Text Proceedings in Open Parliament {TV}", author = {Aubert, Olivier and J{\"a}ger, Joscha}, editor = "Fiser, Darja and Eskevich, Maria and Bordon, David", booktitle = "Proceedings of the IV Workshop on Creating, Analysing, and Increasing Accessibility of Parliamentary Corpora (ParlaCLARIN) @ LREC-COLING 2024", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.parlaclarin-1.12", pages = "77--83", abstract = "We are going to describe the Open Parliament TV project and more specifically the work we have done on alignment of video recordings with text proceedings of the German Bundestag. This has allowed us to create a comprehensive and accessible platform for citizens and journalists to engage with parliamentary proceedings. Through our diligent work, we have ensured that the video recordings accurately align with the corresponding text, providing a seamless and synchronised experience for users. In this article, we describe the issues we were faced with and the methods we used to solve them, along with the visualisations we developed to investigate and assess the content.", }
We are going to describe the Open Parliament TV project and more specifically the work we have done on alignment of video recordings with text proceedings of the German Bundestag. This has allowed us to create a comprehensive and accessible platform for citizens and journalists to engage with parliamentary proceedings. Through our diligent work, we have ensured that the video recordings accurately align with the corresponding text, providing a seamless and synchronised experience for users. In this article, we describe the issues we were faced with and the methods we used to solve them, along with the visualisations we developed to investigate and assess the content.
[ "Aubert, Olivier", "J{\\\"a}ger, Joscha" ]
Making Parliamentary Debates More Accessible: Aligning Video Recordings with Text Proceedings in Open Parliament TV
parlaclarin-1.12
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.parlaclarin-1.13.bib
https://aclanthology.org/2024.parlaclarin-1.13/
@inproceedings{calzada-perez-2024-russia, title = "{R}ussia and {U}kraine through the Eyes of {P}arla{M}int 4.0: A Collocational {CADS} Profile of {S}panish and {B}ritish Parliamentary Discourses", author = "Calzada Perez, Maria", editor = "Fiser, Darja and Eskevich, Maria and Bordon, David", booktitle = "Proceedings of the IV Workshop on Creating, Analysing, and Increasing Accessibility of Parliamentary Corpora (ParlaCLARIN) @ LREC-COLING 2024", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.parlaclarin-1.13", pages = "84--93", abstract = "This article resorts to mixed methods to examine British and Spanish parliamentary discourse. The quantitative corpus-assisted (lexical priming) theory and data are complemented by the qualitative discourse historical approach. Two CLARIN ParlaMint corpora {--} ParlaMint-GB and ParlaMint-ES {--} are queried in the analysis, which focuses on English ({``}Russia{''} and {``}Ukraine{''}) and Spanish ({``}Rusia{''} and {``}Ucrania{''}) nodes and collocations. In sum, the analysis sketches a brief profile of each corpus. The British House of Commons is more homogenous, strongly associating {``}Russia{''} and {``}Ukraine{''} with their participation in the war. Furthermore, this chamber shows a greater interest in {``}Russia{''}. The Spanish Congreso de los Diputados indicates greater quantitative differences (heterogeneity). Here, {``}Russia{''} clearly transcends its role as a military contender and is also portrayed as an economic competitor for the West. Unlike in Britain, the Spanish lower house shows more mentions of {``}Ucrania{''}, which is assigned just one role {--} as an invasion victim. In conclusion, the productivity of corpus-assisted mixed methods is confirmed along with the precious value of the ParlaMint constellation.", }
This article resorts to mixed methods to examine British and Spanish parliamentary discourse. The quantitative corpus-assisted (lexical priming) theory and data are complemented by the qualitative discourse historical approach. Two CLARIN ParlaMint corpora {--} ParlaMint-GB and ParlaMint-ES {--} are queried in the analysis, which focuses on English ({``}Russia{''} and {``}Ukraine{''}) and Spanish ({``}Rusia{''} and {``}Ucrania{''}) nodes and collocations. In sum, the analysis sketches a brief profile of each corpus. The British House of Commons is more homogenous, strongly associating {``}Russia{''} and {``}Ukraine{''} with their participation in the war. Furthermore, this chamber shows a greater interest in {``}Russia{''}. The Spanish Congreso de los Diputados indicates greater quantitative differences (heterogeneity). Here, {``}Russia{''} clearly transcends its role as a military contender and is also portrayed as an economic competitor for the West. Unlike in Britain, the Spanish lower house shows more mentions of {``}Ucrania{''}, which is assigned just one role {--} as an invasion victim. In conclusion, the productivity of corpus-assisted mixed methods is confirmed along with the precious value of the ParlaMint constellation.
[ "Calzada Perez, Maria" ]
Russia and Ukraine through the Eyes of ParlaMint 4.0: A Collocational CADS Profile of Spanish and British Parliamentary Discourses
parlaclarin-1.13
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.parlaclarin-1.14.bib
https://aclanthology.org/2024.parlaclarin-1.14/
@inproceedings{coltekin-etal-2024-multilingual, title = "Multilingual Power and Ideology identification in the Parliament: a reference dataset and simple baselines", author = {{\c{C}}{\"o}ltekin, {\c{C}}a{\u{g}}r{\i} and Kopp, Maty{\'a}{\v{s}} and Katja, Meden and Morkevicius, Vaidas and Ljube{\v{s}}i{\'c}, Nikola and Erjavec, Toma{\v{z}}}, editor = "Fiser, Darja and Eskevich, Maria and Bordon, David", booktitle = "Proceedings of the IV Workshop on Creating, Analysing, and Increasing Accessibility of Parliamentary Corpora (ParlaCLARIN) @ LREC-COLING 2024", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.parlaclarin-1.14", pages = "94--100", abstract = "We introduce a dataset on political orientation and power position identification. The dataset is derived from ParlaMint, a set of comparable corpora of transcribed parliamentary speeches from 29 national and regional parliaments. We introduce the dataset, provide the reasoning behind some of the choices during its creation, present statistics on the dataset, and, using a simple classifier, some baseline results on predicting political orientation on the left-to-right axis, and on power position identification, i.e., distinguishing between the speeches delivered by governing coalition party members from those of opposition party members.", }
We introduce a dataset on political orientation and power position identification. The dataset is derived from ParlaMint, a set of comparable corpora of transcribed parliamentary speeches from 29 national and regional parliaments. We introduce the dataset, provide the reasoning behind some of the choices during its creation, present statistics on the dataset, and, using a simple classifier, some baseline results on predicting political orientation on the left-to-right axis, and on power position identification, i.e., distinguishing between the speeches delivered by governing coalition party members from those of opposition party members.
[ "{\\c{C}}{\\\"o}ltekin, {\\c{C}}a{\\u{g}}r{\\i}", "Kopp, Maty{\\'a}{\\v{s}}", "Katja, Meden", "Morkevicius, Vaidas", "Ljube{\\v{s}}i{\\'c}, Nikola", "Erjavec, Toma{\\v{z}}" ]
Multilingual Power and Ideology identification in the Parliament: a reference dataset and simple baselines
parlaclarin-1.14
Poster
2405.07363
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.parlaclarin-1.15.bib
https://aclanthology.org/2024.parlaclarin-1.15/
@inproceedings{cominetti-etal-2024-impaqts, title = "{IMPAQTS}: a multimodal corpus of parliamentary and other political speeches in {I}taly (1946-2023), annotated with implicit strategies", author = "Cominetti, Federica and Gregori, Lorenzo and Lombardi Vallauri, Edoardo and Panunzi, Alessandro", editor = "Fiser, Darja and Eskevich, Maria and Bordon, David", booktitle = "Proceedings of the IV Workshop on Creating, Analysing, and Increasing Accessibility of Parliamentary Corpora (ParlaCLARIN) @ LREC-COLING 2024", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.parlaclarin-1.15", pages = "101--109", abstract = "The paper introduces the IMPAQTS corpus of Italian political discourse, a multimodal corpus of around 2.65 million tokens including 1,500 speeches uttered by 150 prominent politicians spanning from 1946 to 2023. Covering the entire history of the Italian Republic, the collection exhibits a non-homogeneous consistency that progressively increases in quantity towards the present. The corpus is balanced according to textual and socio-linguistic criteria and includes different types of speeches. The sociolinguistic features of the speakers are carefully considered to ensure representation of Republican Italian politicians. For each speaker, the corpus contains 4 parliamentary speeches, 2 rallies, 1 party assembly, and 3 statements (in person or broadcasted). Parliamentary speeches therefore constitute the largest section of the corpus (40{\%} of the total), enabling direct comparison with other types of political speeches. The collection procedure, including details relevant to the transcription protocols, and the processing pipeline are described. The corpus has been pragmatically annotated to include information about the implicitly conveyed questionable contents, paired with their explicit paraphrasis, providing the largest Italian collection of ecologic examples of linguistic implicit strategies. The adopted ontology of linguistic implicitness and the fine-grained annotation scheme are presented in detail.", }
The paper introduces the IMPAQTS corpus of Italian political discourse, a multimodal corpus of around 2.65 million tokens including 1,500 speeches uttered by 150 prominent politicians spanning from 1946 to 2023. Covering the entire history of the Italian Republic, the collection exhibits a non-homogeneous consistency that progressively increases in quantity towards the present. The corpus is balanced according to textual and socio-linguistic criteria and includes different types of speeches. The sociolinguistic features of the speakers are carefully considered to ensure representation of Republican Italian politicians. For each speaker, the corpus contains 4 parliamentary speeches, 2 rallies, 1 party assembly, and 3 statements (in person or broadcasted). Parliamentary speeches therefore constitute the largest section of the corpus (40{\%} of the total), enabling direct comparison with other types of political speeches. The collection procedure, including details relevant to the transcription protocols, and the processing pipeline are described. The corpus has been pragmatically annotated to include information about the implicitly conveyed questionable contents, paired with their explicit paraphrasis, providing the largest Italian collection of ecologic examples of linguistic implicit strategies. The adopted ontology of linguistic implicitness and the fine-grained annotation scheme are presented in detail.
[ "Cominetti, Federica", "Gregori, Lorenzo", "Lombardi Vallauri, Edoardo", "Panunzi, Aless", "ro" ]
IMPAQTS: a multimodal corpus of parliamentary and other political speeches in Italy (1946-2023), annotated with implicit strategies
parlaclarin-1.15
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.parlaclarin-1.16.bib
https://aclanthology.org/2024.parlaclarin-1.16/
@inproceedings{de-jong-etal-2024-parlamint, title = "{P}arla{M}int Ngram viewer: Multilingual Comparative Diachronic Search Across 26 Parliaments", author = "de Jong, Asher and Kuzman, Taja and Larooij, Maik and Marx, Maarten", editor = "Fiser, Darja and Eskevich, Maria and Bordon, David", booktitle = "Proceedings of the IV Workshop on Creating, Analysing, and Increasing Accessibility of Parliamentary Corpora (ParlaCLARIN) @ LREC-COLING 2024", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.parlaclarin-1.16", pages = "110--115", abstract = "We demonstrate the multilingual search engine and Ngram viewer that was built on top of the Parlamint dataset using the recently available translations. The user interface and SERP are carefully designed for querying parliamentary proceedings and for the intended use by citizens, journalists and political scholars. Demo at https://debateabase.wooverheid.nl. Keywords: Multilingual Search, Parliamentary Proceedings, Ngram Viewer, Machine Translation", }
We demonstrate the multilingual search engine and Ngram viewer that was built on top of the Parlamint dataset using the recently available translations. The user interface and SERP are carefully designed for querying parliamentary proceedings and for the intended use by citizens, journalists and political scholars. Demo at https://debateabase.wooverheid.nl. Keywords: Multilingual Search, Parliamentary Proceedings, Ngram Viewer, Machine Translation
[ "de Jong, Asher", "Kuzman, Taja", "Larooij, Maik", "Marx, Maarten" ]
ParlaMint Ngram viewer: Multilingual Comparative Diachronic Search Across 26 Parliaments
parlaclarin-1.16
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.parlaclarin-1.17.bib
https://aclanthology.org/2024.parlaclarin-1.17/
@inproceedings{gavriilidou-etal-2024-investigating, title = "Investigating Political Ideologies through the {G}reek {P}arla{M}int corpus", author = "Gavriilidou, Maria and Gkoumas, Dimitris and Piperidis, Stelios and Prokopidis, Prokopis", editor = "Fiser, Darja and Eskevich, Maria and Bordon, David", booktitle = "Proceedings of the IV Workshop on Creating, Analysing, and Increasing Accessibility of Parliamentary Corpora (ParlaCLARIN) @ LREC-COLING 2024", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.parlaclarin-1.17", pages = "116--120", abstract = "This paper has two objectives: to present (a) the creation of ParlaMint-GR, the Greek part of the ParlaMint corpora of debates in the parliaments of Europe, and (b) preliminary results on its comparison with a corpus of Greek party manifestos, aiming at the investigation of the ideologies of the Greek political parties and members of the Parliament. Additionally, a gender related comparison is explored. The creation of the ParlaMint-GR corpus is discussed, together with the solutions adopted for various challenges faced. The corpus of party manifestos, available through CLARIN:EL, serves for a comparative study with the corpus of speeches delivered by the members of the Greek Parliament, with the aim to identify the ideological positions of parties and politicians.", }
This paper has two objectives: to present (a) the creation of ParlaMint-GR, the Greek part of the ParlaMint corpora of debates in the parliaments of Europe, and (b) preliminary results on its comparison with a corpus of Greek party manifestos, aiming at the investigation of the ideologies of the Greek political parties and members of the Parliament. Additionally, a gender related comparison is explored. The creation of the ParlaMint-GR corpus is discussed, together with the solutions adopted for various challenges faced. The corpus of party manifestos, available through CLARIN:EL, serves for a comparative study with the corpus of speeches delivered by the members of the Greek Parliament, with the aim to identify the ideological positions of parties and politicians.
[ "Gavriilidou, Maria", "Gkoumas, Dimitris", "Piperidis, Stelios", "Prokopidis, Prokopis" ]
Investigating Political Ideologies through the Greek ParlaMint corpus
parlaclarin-1.17
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.parlaclarin-1.18.bib
https://aclanthology.org/2024.parlaclarin-1.18/
@inproceedings{janssen-kopp-2024-parlamint, title = "{P}arla{M}int in {TEITOK}", author = "Janssen, Maarten and Kopp, Maty{\'a}{\v{s}}", editor = "Fiser, Darja and Eskevich, Maria and Bordon, David", booktitle = "Proceedings of the IV Workshop on Creating, Analysing, and Increasing Accessibility of Parliamentary Corpora (ParlaCLARIN) @ LREC-COLING 2024", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.parlaclarin-1.18", pages = "121--126", abstract = "This paper describes the ParlaMint 4.0 parliamentary corpora as made available in TEITOK at LINDAT. The TEITOK interface makes it possible to search through the corpus, to view each session in a readable manner, and to explore the names in the corpus. The interface does not present any new data, but provides an access point to the ParlaMint corpus that is less oriented to linguistic use only, and more accessible for the general public or researchers from other fields.", }
This paper describes the ParlaMint 4.0 parliamentary corpora as made available in TEITOK at LINDAT. The TEITOK interface makes it possible to search through the corpus, to view each session in a readable manner, and to explore the names in the corpus. The interface does not present any new data, but provides an access point to the ParlaMint corpus that is less oriented to linguistic use only, and more accessible for the general public or researchers from other fields.
[ "Janssen, Maarten", "Kopp, Maty{\\'a}{\\v{s}}" ]
ParlaMint in TEITOK
parlaclarin-1.18
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.parlaclarin-1.19.bib
https://aclanthology.org/2024.parlaclarin-1.19/
@inproceedings{kavcic-etal-2024-historical, title = "Historical Parliamentary Corpora Viewer", author = "Kav{\v{c}}i{\v{c}}, Alenka and Stojanoski, Martin and Marolt, Matija", editor = "Fiser, Darja and Eskevich, Maria and Bordon, David", booktitle = "Proceedings of the IV Workshop on Creating, Analysing, and Increasing Accessibility of Parliamentary Corpora (ParlaCLARIN) @ LREC-COLING 2024", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.parlaclarin-1.19", pages = "127--132", abstract = "Historical parliamentary debates offer a window into the past and provide valuable insights for academic research and historical analysis. This paper presents a novel web application tailored to the exploration of historical parliamentary corpora in the context of Slovenian national identity. The developed web viewer enables advanced search functions within collections of historical parliamentary records and has an intuitive and user-friendly interface. Users can enter search terms and apply filters to refine their search results. The search function allows keyword and phrase searching, including the ability to search by delegate and place names. It is also possible to search for translations of the text by selecting the desired languages. The search results are displayed with a preview of the proceedings and highlighted phrases that match the search query. To review a specific record, the full PDF document can be displayed in a separate view, allowing the user to scroll through the PDF document and search the content. In addition, the two corpora of Slovenian historical records integrated into the viewer{---}the Carniolan Provincial Assembly Corpus and the Parliamentary Corpus of the First Yugoslavia{---}are described and an insight into the corresponding preparation processes is provided.", }
Historical parliamentary debates offer a window into the past and provide valuable insights for academic research and historical analysis. This paper presents a novel web application tailored to the exploration of historical parliamentary corpora in the context of Slovenian national identity. The developed web viewer enables advanced search functions within collections of historical parliamentary records and has an intuitive and user-friendly interface. Users can enter search terms and apply filters to refine their search results. The search function allows keyword and phrase searching, including the ability to search by delegate and place names. It is also possible to search for translations of the text by selecting the desired languages. The search results are displayed with a preview of the proceedings and highlighted phrases that match the search query. To review a specific record, the full PDF document can be displayed in a separate view, allowing the user to scroll through the PDF document and search the content. In addition, the two corpora of Slovenian historical records integrated into the viewer{---}the Carniolan Provincial Assembly Corpus and the Parliamentary Corpus of the First Yugoslavia{---}are described and an insight into the corresponding preparation processes is provided.
[ "Kav{\\v{c}}i{\\v{c}}, Alenka", "Stojanoski, Martin", "Marolt, Matija" ]
Historical Parliamentary Corpora Viewer
parlaclarin-1.19
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.parlaclarin-1.20.bib
https://aclanthology.org/2024.parlaclarin-1.20/
@inproceedings{leonhardt-blaette-2024-dbpedia, title = "The dbpedia {R} Package: An Integrated Workflow for Entity Linking (for {P}arla{M}int Corpora)", author = "Leonhardt, Christoph and Blaette, Andreas", editor = "Fiser, Darja and Eskevich, Maria and Bordon, David", booktitle = "Proceedings of the IV Workshop on Creating, Analysing, and Increasing Accessibility of Parliamentary Corpora (ParlaCLARIN) @ LREC-COLING 2024", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.parlaclarin-1.20", pages = "133--144", abstract = "Entity Linking is a powerful approach for linking textual data to established structured data such as survey data or administrative data. However, in the realm of social science, the approach is not widely adopted. We argue that this is, at least in part, due to specific setup requirements which constitute high barriers for usage and workflows which are not well integrated into analytical scenarios commonly deployed in social science research. We introduce the dbpedia R package to make the approach more accessible. It has a focus on functionality that is easily adaptable to the needs of social scientists working with textual data, including the support of different input formats, limited setup costs and various output formats. Using a ParlaMint corpus, we show the applicability and flexibility of the approach for parliamentary debates.", }
Entity Linking is a powerful approach for linking textual data to established structured data such as survey data or administrative data. However, in the realm of social science, the approach is not widely adopted. We argue that this is, at least in part, due to specific setup requirements which constitute high barriers for usage and workflows which are not well integrated into analytical scenarios commonly deployed in social science research. We introduce the dbpedia R package to make the approach more accessible. It has a focus on functionality that is easily adaptable to the needs of social scientists working with textual data, including the support of different input formats, limited setup costs and various output formats. Using a ParlaMint corpus, we show the applicability and flexibility of the approach for parliamentary debates.
[ "Leonhardt, Christoph", "Blaette, Andreas" ]
The dbpedia R Package: An Integrated Workflow for Entity Linking (for ParlaMint Corpora)
parlaclarin-1.20
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.parlaclarin-1.21.bib
https://aclanthology.org/2024.parlaclarin-1.21/
@inproceedings{masuyama-etal-2024-video, title = "Video Retrieval System Using Automatic Speech Recognition for the {J}apanese Diet", author = "Masuyama, Mikitaka and Kawahara, Tatsuya and Matsuda, Kenjiro", editor = "Fiser, Darja and Eskevich, Maria and Bordon, David", booktitle = "Proceedings of the IV Workshop on Creating, Analysing, and Increasing Accessibility of Parliamentary Corpora (ParlaCLARIN) @ LREC-COLING 2024", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.parlaclarin-1.21", pages = "145--148", abstract = "The Japanese House of Representatives, one of the two houses of the Diet, has adopted an Automatic Speech Recognition (ASR) system, which directly transcribes parliamentary speech with an accuracy of 95 percent. The ASR system also provides a timestamp for every word, which enables retrieval of the video segments of the Parliamentary meetings. The video retrieval system we have developed allows one to pinpoint and play the parliamentary video clips corresponding to the meeting minutes by keyword search. In this paper, we provide its overview and suggest various ways we can utilize the system. The system is currently extended to cover meetings of local governments, which will allow us to investigate dialectal linguistic variations.", }
The Japanese House of Representatives, one of the two houses of the Diet, has adopted an Automatic Speech Recognition (ASR) system, which directly transcribes parliamentary speech with an accuracy of 95 percent. The ASR system also provides a timestamp for every word, which enables retrieval of the video segments of the Parliamentary meetings. The video retrieval system we have developed allows one to pinpoint and play the parliamentary video clips corresponding to the meeting minutes by keyword search. In this paper, we provide its overview and suggest various ways we can utilize the system. The system is currently extended to cover meetings of local governments, which will allow us to investigate dialectal linguistic variations.
[ "Masuyama, Mikitaka", "Kawahara, Tatsuya", "Matsuda, Kenjiro" ]
Video Retrieval System Using Automatic Speech Recognition for the Japanese Diet
parlaclarin-1.21
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.parlaclarin-1.22.bib
https://aclanthology.org/2024.parlaclarin-1.22/
@inproceedings{mikusek-2024-one, title = "One Year of Continuous and Automatic Data Gathering from Parliaments of {E}uropean {U}nion Member States", author = "Miku{\v{s}}ek, Ota", editor = "Fiser, Darja and Eskevich, Maria and Bordon, David", booktitle = "Proceedings of the IV Workshop on Creating, Analysing, and Increasing Accessibility of Parliamentary Corpora (ParlaCLARIN) @ LREC-COLING 2024", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.parlaclarin-1.22", pages = "149--153", abstract = "This paper provides insight into automatic parliamentary corpora development. One year ago, I created a simple set of tools designed to continuously and automatically download, process, and create corpora from speeches in the parliaments of European Union member states. Despite the existence of numerous corpora providing speeches from European Union parliaments, the tools are more focused on collecting and building such corpora with minimal human interaction. These tools have been operating continuously for over a year, gathering parliamentary data and extending corpora, which together have more than one billion words. However, the process of maintaining these tools has brought unforeseen challenges, including issues such as being blocked by some parliaments due to overloading the parliament with requests, the inability to access the most recent data of a parliament, and effectively managing interrupted connections. Additionally, potential problems that may arise in the future are provided, along with possible solutions. These include problems with data loss prevention and adaptation to changes in the sources from which speeches are downloaded.", }
This paper provides insight into automatic parliamentary corpora development. One year ago, I created a simple set of tools designed to continuously and automatically download, process, and create corpora from speeches in the parliaments of European Union member states. Despite the existence of numerous corpora providing speeches from European Union parliaments, the tools are more focused on collecting and building such corpora with minimal human interaction. These tools have been operating continuously for over a year, gathering parliamentary data and extending corpora, which together have more than one billion words. However, the process of maintaining these tools has brought unforeseen challenges, including issues such as being blocked by some parliaments due to overloading the parliament with requests, the inability to access the most recent data of a parliament, and effectively managing interrupted connections. Additionally, potential problems that may arise in the future are provided, along with possible solutions. These include problems with data loss prevention and adaptation to changes in the sources from which speeches are downloaded.
[ "Miku{\\v{s}}ek, Ota" ]
One Year of Continuous and Automatic Data Gathering from Parliaments of European Union Member States
parlaclarin-1.22
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.parlaclarin-1.23.bib
https://aclanthology.org/2024.parlaclarin-1.23/
@inproceedings{navarretta-haltrup-hansen-2024-government, title = "Government and Opposition in {D}anish Parliamentary Debates", author = "Navarretta, Costanza and Haltrup Hansen, Dorte", editor = "Fiser, Darja and Eskevich, Maria and Bordon, David", booktitle = "Proceedings of the IV Workshop on Creating, Analysing, and Increasing Accessibility of Parliamentary Corpora (ParlaCLARIN) @ LREC-COLING 2024", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.parlaclarin-1.23", pages = "154--162", abstract = "In this paper, we address government and opposition speeches made by the Danish Parliament{'}s members from 2014 to 2022. We use the linguistic annotations and metadata in ParlaMint-DK, one of the ParlaMint corpora, to investigate some characteristics of the transcribed speeches made by government and opposition and test how well classifiers can identify the speeches delivered by these groups. Our analyses confirm that there are differences in the speeches made by government and opposition e.g., in the frequency of some modality expressions. In our study, we also include parties, which do not directly support or are against the government, the {``}other{''} group. The best performing classifier for identifying speeches made by parties in government, in opposition or in {``}other{''} is a transformer with a pre-trained Danish BERT model which gave an F1-score of 0.64. The same classifier obtained an F1-score of 0.77 on the binary identification of speeches made by government or opposition parties.", }
In this paper, we address government and opposition speeches made by the Danish Parliament{'}s members from 2014 to 2022. We use the linguistic annotations and metadata in ParlaMint-DK, one of the ParlaMint corpora, to investigate some characteristics of the transcribed speeches made by government and opposition and test how well classifiers can identify the speeches delivered by these groups. Our analyses confirm that there are differences in the speeches made by government and opposition e.g., in the frequency of some modality expressions. In our study, we also include parties, which do not directly support or are against the government, the {``}other{''} group. The best performing classifier for identifying speeches made by parties in government, in opposition or in {``}other{''} is a transformer with a pre-trained Danish BERT model which gave an F1-score of 0.64. The same classifier obtained an F1-score of 0.77 on the binary identification of speeches made by government or opposition parties.
[ "Navarretta, Costanza", "Haltrup Hansen, Dorte" ]
Government and Opposition in Danish Parliamentary Debates
parlaclarin-1.23
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.parlaclarin-1.24.bib
https://aclanthology.org/2024.parlaclarin-1.24/
@inproceedings{rehbein-ponzetto-2024-new, title = "A new Resource and Baselines for Opinion Role Labelling in {G}erman Parliamentary Debates", author = "Rehbein, Ines and Ponzetto, Simone Paolo", editor = "Fiser, Darja and Eskevich, Maria and Bordon, David", booktitle = "Proceedings of the IV Workshop on Creating, Analysing, and Increasing Accessibility of Parliamentary Corpora (ParlaCLARIN) @ LREC-COLING 2024", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.parlaclarin-1.24", pages = "163--170", abstract = "Detecting opinions, their holders and targets in parliamentary debates provides an interesting layer of analysis, for example, to identify frequent targets of opinions for specific topics, actors or parties. In the paper, we present GePaDe-ORL, a new dataset for German parliamentary debates where subjective expressions, their opinion holders and targets have been annotated. We describe the annotation process and report baselines for predicting those annotations in our new dataset.", }
Detecting opinions, their holders and targets in parliamentary debates provides an interesting layer of analysis, for example, to identify frequent targets of opinions for specific topics, actors or parties. In the paper, we present GePaDe-ORL, a new dataset for German parliamentary debates where subjective expressions, their opinion holders and targets have been annotated. We describe the annotation process and report baselines for predicting those annotations in our new dataset.
[ "Rehbein, Ines", "Ponzetto, Simone Paolo" ]
A new Resource and Baselines for Opinion Role Labelling in German Parliamentary Debates
parlaclarin-1.24
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.parlaclarin-1.25.bib
https://aclanthology.org/2024.parlaclarin-1.25/
@inproceedings{viira-etal-2024-parlamint, title = "{P}arla{M}int Widened: a {E}uropean Dataset of Freedom of Information Act Documents (Position Paper)", author = "Viira, Gerda and Marx, Maarten and Larooij, Maik", editor = "Fiser, Darja and Eskevich, Maria and Bordon, David", booktitle = "Proceedings of the IV Workshop on Creating, Analysing, and Increasing Accessibility of Parliamentary Corpora (ParlaCLARIN) @ LREC-COLING 2024", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.parlaclarin-1.25", pages = "171--172", abstract = "This position paper makes an argument for creating a corpus similar to that of ParlaMint, not consisting of parliamentary proceedings, but of documents released under Freedom of Information Acts. Over 100 countries have such an act, and almost all European countries. Bringing these now dispersed document collections together in a uniform format into one portal will result in a valuable language resource. Besides that, our Dutch experience shows that such new larger exposure of these documents leads to efforts to improve their quality at the sources. Keywords: Freedom of Information Act, ParlaMint, Government Data", }
This position paper makes an argument for creating a corpus similar to that of ParlaMint, not consisting of parliamentary proceedings, but of documents released under Freedom of Information Acts. Over 100 countries have such an act, and almost all European countries. Bringing these now dispersed document collections together in a uniform format into one portal will result in a valuable language resource. Besides that, our Dutch experience shows that such new larger exposure of these documents leads to efforts to improve their quality at the sources. Keywords: Freedom of Information Act, ParlaMint, Government Data
[ "Viira, Gerda", "Marx, Maarten", "Larooij, Maik" ]
ParlaMint Widened: a European Dataset of Freedom of Information Act Documents (Position Paper)
parlaclarin-1.25
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.politicalnlp-1.1.bib
https://aclanthology.org/2024.politicalnlp-1.1/
@inproceedings{kuila-sarkar-2024-deciphering, title = "Deciphering Political Entity Sentiment in News with Large Language Models: Zero-Shot and Few-Shot Strategies", author = "Kuila, Alapan and Sarkar, Sudeshna", editor = "Afli, Haithem and Bouamor, Houda and Casagran, Cristina Blasi and Ghannay, Sahar", booktitle = "Proceedings of the Second Workshop on Natural Language Processing for Political Sciences @ LREC-COLING 2024", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.politicalnlp-1.1", pages = "1--11", abstract = "Sentiment analysis plays a pivotal role in understanding public opinion, particularly in the political domain where the portrayal of entities in news articles influences public perception. In this paper, we investigate the effectiveness of Large Language Models (LLMs) in predicting entity-specific sentiment from political news articles. Leveraging zero-shot and few-shot strategies, we explore the capability of LLMs to discern sentiment towards political entities in news content. Employing a chain-of-thought (COT) approach augmented with rationale in few-shot in-context learning, we assess whether this method enhances sentiment prediction accuracy. Our evaluation on sentiment-labeled datasets demonstrates that LLMs outperform fine-tuned BERT models in capturing entity-specific sentiment. We find that learning in-context significantly improves model performance, while the self-consistency mechanism enhances consistency in sentiment prediction. Despite the promising results, we observe inconsistencies in the effectiveness of the COT prompting method. Overall, our findings underscore the potential of LLMs in entity-centric sentiment analysis within the political news domain and highlight the importance of suitable prompting strategies and model architectures.", }
Sentiment analysis plays a pivotal role in understanding public opinion, particularly in the political domain where the portrayal of entities in news articles influences public perception. In this paper, we investigate the effectiveness of Large Language Models (LLMs) in predicting entity-specific sentiment from political news articles. Leveraging zero-shot and few-shot strategies, we explore the capability of LLMs to discern sentiment towards political entities in news content. Employing a chain-of-thought (COT) approach augmented with rationale in few-shot in-context learning, we assess whether this method enhances sentiment prediction accuracy. Our evaluation on sentiment-labeled datasets demonstrates that LLMs outperform fine-tuned BERT models in capturing entity-specific sentiment. We find that learning in-context significantly improves model performance, while the self-consistency mechanism enhances consistency in sentiment prediction. Despite the promising results, we observe inconsistencies in the effectiveness of the COT prompting method. Overall, our findings underscore the potential of LLMs in entity-centric sentiment analysis within the political news domain and highlight the importance of suitable prompting strategies and model architectures.
[ "Kuila, Alapan", "Sarkar, Sudeshna" ]
Deciphering Political Entity Sentiment in News with Large Language Models: Zero-Shot and Few-Shot Strategies
politicalnlp-1.1
Poster
2404.04361
[ "https://github.com/alapanju/entsent" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.politicalnlp-1.2.bib
https://aclanthology.org/2024.politicalnlp-1.2/
@inproceedings{cartier-tanev-2024-event, title = "Event Detection in the Socio Political Domain", author = "Cartier, Emmanuel and Tanev, Hristo", editor = "Afli, Haithem and Bouamor, Houda and Casagran, Cristina Blasi and Ghannay, Sahar", booktitle = "Proceedings of the Second Workshop on Natural Language Processing for Political Sciences @ LREC-COLING 2024", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.politicalnlp-1.2", pages = "12--21", abstract = "In this paper we present two approaches for detection of socio political events: the first is based on manually crafted keyword combinations and the second one is based on a BERT classifier. We compare the performance of the two systems on a dataset of socio-political events. Interestingly, the systems demonstrate complementary performance: both showing their best accuracy on non overlapping sets of event types. In the evaluation section we provide insights on the effect of taxonomy mapping on the event detection evaluation. We also review in the related work section the most important resources and approaches for event extraction in the recent years.", }
In this paper we present two approaches for detection of socio political events: the first is based on manually crafted keyword combinations and the second one is based on a BERT classifier. We compare the performance of the two systems on a dataset of socio-political events. Interestingly, the systems demonstrate complementary performance: both showing their best accuracy on non overlapping sets of event types. In the evaluation section we provide insights on the effect of taxonomy mapping on the event detection evaluation. We also review in the related work section the most important resources and approaches for event extraction in the recent years.
[ "Cartier, Emmanuel", "Tanev, Hristo" ]
Event Detection in the Socio Political Domain
politicalnlp-1.2
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.politicalnlp-1.3.bib
https://aclanthology.org/2024.politicalnlp-1.3/
@inproceedings{laabar-zaghouani-2024-multi, title = "Multi-Dimensional Insights: Annotated Dataset of Stance, Sentiment, and Emotion in {F}acebook Comments on {T}unisia{'}s {J}uly 25 Measures", author = "Laabar, Sanaa and Zaghouani, Wajdi", editor = "Afli, Haithem and Bouamor, Houda and Casagran, Cristina Blasi and Ghannay, Sahar", booktitle = "Proceedings of the Second Workshop on Natural Language Processing for Political Sciences @ LREC-COLING 2024", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.politicalnlp-1.3", pages = "22--32", abstract = "On July 25, 2021, Tunisian President Kais Saied announced the suspension of parliament and dismissal of Prime Minister Hichem Mechichi, a move that sparked intense public debate. This study investigates Tunisian public opinion regarding these events by analyzing a corpus of 7,535 Facebook comments collected from the official Tunisian presidency page, specifically the post announcing the July 25 measures. A team of three annotators labeled a subset of 5,000 comments, categorizing each comment{'}s political stance (supportive, opposing, or neutral), sentiment (positive, negative, or neutral), emotions, presence of hate speech, aggressive tone, and racism. The inter-annotator agreement, measured by Cohen{'}s kappa, was 0.61, indicating substantial consensus. The analysis reveals that a majority of commenters supported President Saied{'}s actions, outnumbering those who opposed or took a neutral stance. Moreover, the overall sentiment expressed in the comments was predominantly positive. This study provides valuable insights into the complex landscape of public opinion in Tunisia during a crucial moment in the country{'}s ongoing political transformation, highlighting the role of social media as a platform for political discourse and engagement.", }
On July 25, 2021, Tunisian President Kais Saied announced the suspension of parliament and dismissal of Prime Minister Hichem Mechichi, a move that sparked intense public debate. This study investigates Tunisian public opinion regarding these events by analyzing a corpus of 7,535 Facebook comments collected from the official Tunisian presidency page, specifically the post announcing the July 25 measures. A team of three annotators labeled a subset of 5,000 comments, categorizing each comment{'}s political stance (supportive, opposing, or neutral), sentiment (positive, negative, or neutral), emotions, presence of hate speech, aggressive tone, and racism. The inter-annotator agreement, measured by Cohen{'}s kappa, was 0.61, indicating substantial consensus. The analysis reveals that a majority of commenters supported President Saied{'}s actions, outnumbering those who opposed or took a neutral stance. Moreover, the overall sentiment expressed in the comments was predominantly positive. This study provides valuable insights into the complex landscape of public opinion in Tunisia during a crucial moment in the country{'}s ongoing political transformation, highlighting the role of social media as a platform for political discourse and engagement.
[ "Laabar, Sanaa", "Zaghouani, Wajdi" ]
Multi-Dimensional Insights: Annotated Dataset of Stance, Sentiment, and Emotion in Facebook Comments on Tunisia's July 25 Measures
politicalnlp-1.3
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.politicalnlp-1.4.bib
https://aclanthology.org/2024.politicalnlp-1.4/
@inproceedings{akiba-etal-2024-masking, title = "Masking Explicit Pro-Con Expressions for Development of a Stance Classification Dataset on Assembly Minutes", author = "Akiba, Tomoyosi and Gato, Yuki and Kimura, Yasutomo and Uchida, Yuzu and Takamaru, Keiichi", editor = "Afli, Haithem and Bouamor, Houda and Casagran, Cristina Blasi and Ghannay, Sahar", booktitle = "Proceedings of the Second Workshop on Natural Language Processing for Political Sciences @ LREC-COLING 2024", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.politicalnlp-1.4", pages = "33--38", abstract = "In this paper, a new dataset for Stance Classification based on assembly minutes is introduced. We develop it by using publicly available minutes taken from diverse Japanese local governments including prefectural, city, and town assemblies. In order to make the task one of predicting a stance from the content of a politician{'}s utterance without explicit stance expressions, predefined words that directly convey the speaker{'}s stance in the utterance are replaced by a special token. Those masked words are also used to assign a golden label, either agreement or disagreement, to the utterance. Finally, we constructed a total of 15,018 instances automatically from 47 Japanese local governments. The dataset is used in the shared Stance Classification task evaluated in the NTCIR-17 QA-Lab-PoliInfo-4, and is now publicly available. Since the construction method of the dataset is automatic, we can still apply it to obtain more instances from other Japanese local governments.", }
In this paper, a new dataset for Stance Classification based on assembly minutes is introduced. We develop it by using publicly available minutes taken from diverse Japanese local governments including prefectural, city, and town assemblies. In order to make the task one of predicting a stance from the content of a politician{'}s utterance without explicit stance expressions, predefined words that directly convey the speaker{'}s stance in the utterance are replaced by a special token. Those masked words are also used to assign a golden label, either agreement or disagreement, to the utterance. Finally, we constructed a total of 15,018 instances automatically from 47 Japanese local governments. The dataset is used in the shared Stance Classification task evaluated in the NTCIR-17 QA-Lab-PoliInfo-4, and is now publicly available. Since the construction method of the dataset is automatic, we can still apply it to obtain more instances from other Japanese local governments.
[ "Akiba, Tomoyosi", "Gato, Yuki", "Kimura, Yasutomo", "Uchida, Yuzu", "Takamaru, Keiichi" ]
Masking Explicit Pro-Con Expressions for Development of a Stance Classification Dataset on Assembly Minutes
politicalnlp-1.4
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.politicalnlp-1.5.bib
https://aclanthology.org/2024.politicalnlp-1.5/
@inproceedings{evgrafova-etal-2024-analysing, title = "Analysing Pathos in User-Generated Argumentative Text", author = "Evgrafova, Natalia and Hoste, Veronique and Lefever, Els", editor = "Afli, Haithem and Bouamor, Houda and Casagran, Cristina Blasi and Ghannay, Sahar", booktitle = "Proceedings of the Second Workshop on Natural Language Processing for Political Sciences @ LREC-COLING 2024", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.politicalnlp-1.5", pages = "39--44", abstract = "While persuasion has been extensively examined in the context of politicians{'} speeches, there exists a notable gap in the understanding of the pathos role in user-generated argumentation. This paper presents an exploratory study into the pathos dimension of user-generated arguments and formulates ideas on how pathos could be incorporated in argument mining. Using existing sentiment and emotion detection tools, this research aims to obtain insights into the role of emotion in argumentative public discussion on controversial topics, explores the connection between sentiment and stance, and detects frequent emotion-related words for a given topic.", }
While persuasion has been extensively examined in the context of politicians{'} speeches, there exists a notable gap in the understanding of the pathos role in user-generated argumentation. This paper presents an exploratory study into the pathos dimension of user-generated arguments and formulates ideas on how pathos could be incorporated in argument mining. Using existing sentiment and emotion detection tools, this research aims to obtain insights into the role of emotion in argumentative public discussion on controversial topics, explores the connection between sentiment and stance, and detects frequent emotion-related words for a given topic.
[ "Evgrafova, Natalia", "Hoste, Veronique", "Lefever, Els" ]
Analysing Pathos in User-Generated Argumentative Text
politicalnlp-1.5
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.politicalnlp-1.6.bib
https://aclanthology.org/2024.politicalnlp-1.6/
@inproceedings{osmonova-etal-2024-knowledge, title = "Knowledge Graph Representation for Political Information Sources", author = "Osmonova, Tinatin and Tikhonov, Alexey and Yamshchikov, Ivan P.", editor = "Afli, Haithem and Bouamor, Houda and Casagran, Cristina Blasi and Ghannay, Sahar", booktitle = "Proceedings of the Second Workshop on Natural Language Processing for Political Sciences @ LREC-COLING 2024", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.politicalnlp-1.6", pages = "45--54", abstract = "With the rise of computational social science, many scholars utilize data analysis and natural language processing tools to analyze social media, news articles, and other accessible data sources for examining political and social discourse. Particularly, the study of the emergence of echo-chambers due to the dissemination of specific information has become a topic of interest in mixed methods research areas. In this paper, we analyze data collected from two news portals, Breitbart News (BN) and New York Times (NYT) to prove the hypothesis that the formation of echo-chambers can be partially explained at the level of individual information consumption rather than by the collective topology of individuals{'} social networks. Our research findings are presented through knowledge graphs, utilizing a dataset spanning 11.5 years gathered from BN and NYT media portals. We demonstrate that the application of knowledge representation techniques to the aforementioned news streams reveals, contrary to common assumptions, a relative {``}internal{''} neutrality of both sources and a polarizing attitude towards a small fraction of entities. Additionally, we argue that such characteristics in information sources lead to fundamental disparities in audience worldviews, potentially acting as a catalyst for the formation of echo-chambers.", }
With the rise of computational social science, many scholars utilize data analysis and natural language processing tools to analyze social media, news articles, and other accessible data sources for examining political and social discourse. Particularly, the study of the emergence of echo-chambers due to the dissemination of specific information has become a topic of interest in mixed methods research areas. In this paper, we analyze data collected from two news portals, Breitbart News (BN) and New York Times (NYT) to prove the hypothesis that the formation of echo-chambers can be partially explained at the level of individual information consumption rather than by the collective topology of individuals{'} social networks. Our research findings are presented through knowledge graphs, utilizing a dataset spanning 11.5 years gathered from BN and NYT media portals. We demonstrate that the application of knowledge representation techniques to the aforementioned news streams reveals, contrary to common assumptions, a relative {``}internal{''} neutrality of both sources and a polarizing attitude towards a small fraction of entities. Additionally, we argue that such characteristics in information sources lead to fundamental disparities in audience worldviews, potentially acting as a catalyst for the formation of echo-chambers.
[ "Osmonova, Tinatin", "Tikhonov, Alexey", "Yamshchikov, Ivan P." ]
Knowledge Graph Representation for Political Information Sources
politicalnlp-1.6
Poster
2404.03437
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.politicalnlp-1.7.bib
https://aclanthology.org/2024.politicalnlp-1.7/
@inproceedings{shestakov-zaghouani-2024-analyzing, title = "Analyzing Conflict Through Data: A Dataset on the Digital Framing of Sheikh Jarrah Evictions", author = "Shestakov, Anatolii and Zaghouani, Wajdi", editor = "Afli, Haithem and Bouamor, Houda and Casagran, Cristina Blasi and Ghannay, Sahar", booktitle = "Proceedings of the Second Workshop on Natural Language Processing for Political Sciences @ LREC-COLING 2024", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.politicalnlp-1.7", pages = "55--67", abstract = "This study empirically investigates the role of social media in tracing the evolution of the May 2021 Israeli-Palestinian crisis, centered on the Sheikh Jarrah evictions. Analyzing a dataset of 370,747 English tweets from 120,173 users from May 9-21, 2021, the research employs a mixed-methods approach combining computational techniques and qualitative content analysis. Findings support the hypothesis that social media interactions reliably map crisis dynamics, as evidenced by hashtags like {\#}SaveSheikhJarrah corresponding to critical shifts, though virality did not correlate with hashtag use. In contrast to prior sentiment-focused studies, the context-driven analysis reveals influencers and state actors shaping polarized narratives along geopolitical lines, with high-profile voices backing Palestinian solidarity while Israeli state accounts endorsed military operations. Evidence of a transcontinental cybercampaign emerged, albeit with limitations due to the English language scope and potential biases from data collection and keyword choices. The study contributes empirical insights into the mediatization of armed conflicts through social media{'}s competing narratives and information flows within the Israeli-Palestinian context. Recommendations for future multilingual, multi-platform analyses are provided to address limitations.", }
This study empirically investigates the role of social media in tracing the evolution of the May 2021 Israeli-Palestinian crisis, centered on the Sheikh Jarrah evictions. Analyzing a dataset of 370,747 English tweets from 120,173 users from May 9-21, 2021, the research employs a mixed-methods approach combining computational techniques and qualitative content analysis. Findings support the hypothesis that social media interactions reliably map crisis dynamics, as evidenced by hashtags like {\#}SaveSheikhJarrah corresponding to critical shifts, though virality did not correlate with hashtag use. In contrast to prior sentiment-focused studies, the context-driven analysis reveals influencers and state actors shaping polarized narratives along geopolitical lines, with high-profile voices backing Palestinian solidarity while Israeli state accounts endorsed military operations. Evidence of a transcontinental cybercampaign emerged, albeit with limitations due to the English language scope and potential biases from data collection and keyword choices. The study contributes empirical insights into the mediatization of armed conflicts through social media{'}s competing narratives and information flows within the Israeli-Palestinian context. Recommendations for future multilingual, multi-platform analyses are provided to address limitations.
[ "Shestakov, Anatolii", "Zaghouani, Wajdi" ]
Analyzing Conflict Through Data: A Dataset on the Digital Framing of Sheikh Jarrah Evictions
politicalnlp-1.7
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.politicalnlp-1.8.bib
https://aclanthology.org/2024.politicalnlp-1.8/
@inproceedings{borazio-etal-2024-semi, title = "Semi-Automatic Topic Discovery and Classification for Epidemic Intelligence via Large Language Models", author = "Borazio, Federico and Croce, Danilo and Gambosi, Giorgio and Basili, Roberto and Margiotta, Daniele and Scaiella, Antonio and Del Manso, Martina and Petrone, Daniele and Cannone, Andrea and Urdiales, Alberto M. and Sacco, Chiara and Pezzotti, Patrizio and Riccardo, Flavia and Mipatrini, Daniele and Ferraro, Federica and Pilati, Sobha", editor = "Afli, Haithem and Bouamor, Houda and Casagran, Cristina Blasi and Ghannay, Sahar", booktitle = "Proceedings of the Second Workshop on Natural Language Processing for Political Sciences @ LREC-COLING 2024", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.politicalnlp-1.8", pages = "68--84", abstract = "This paper introduces a novel framework to harness Large Language Models (LLMs) for Epidemic Intelligence, focusing on identifying and categorizing emergent socio-political phenomena within health crises, with a spotlight on the COVID-19 pandemic. Our approach diverges from traditional methods, such as Topic Models, by providing explicit support to analysts through the identification of distinct thematic areas and the generation of clear, actionable statements for each topic. This supports a Zero-shot Classification mechanism, enabling effective matching of news articles to fine-grain topics without the need for model fine-tuning. The framework is designed to be as transparent as possible, producing linguistically informed insights to make the analysis more accessible to analysts who may not be familiar with every subject matter of inherently emerging phenomena. This process not only enhances the precision and relevance of the extracted Epidemic Intelligence but also fosters a collaborative environment where system linguistic abilities and the analyst{'}s domain expertise are integrated.", }
This paper introduces a novel framework to harness Large Language Models (LLMs) for Epidemic Intelligence, focusing on identifying and categorizing emergent socio-political phenomena within health crises, with a spotlight on the COVID-19 pandemic. Our approach diverges from traditional methods, such as Topic Models, by providing explicit support to analysts through the identification of distinct thematic areas and the generation of clear, actionable statements for each topic. This supports a Zero-shot Classification mechanism, enabling effective matching of news articles to fine-grain topics without the need for model fine-tuning. The framework is designed to be as transparent as possible, producing linguistically informed insights to make the analysis more accessible to analysts who may not be familiar with every subject matter of inherently emerging phenomena. This process not only enhances the precision and relevance of the extracted Epidemic Intelligence but also fosters a collaborative environment where system linguistic abilities and the analyst{'}s domain expertise are integrated.
[ "Borazio, Federico", "Croce, Danilo", "Gambosi, Giorgio", "Basili, Roberto", "Margiotta, Daniele", "Scaiella, Antonio", "Del Manso, Martina", "Petrone, Daniele", "Cannone, Andrea", "Urdiales, Alberto M.", "Sacco, Chiara", "Pezzotti, Patrizio", "Riccardo, Flavia", "Mipatrini, Daniele", "Ferraro, Federica", "Pilati, Sobha" ]
Semi-Automatic Topic Discovery and Classification for Epidemic Intelligence via Large Language Models
politicalnlp-1.8
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.politicalnlp-1.9.bib
https://aclanthology.org/2024.politicalnlp-1.9/
@inproceedings{wang-etal-2024-towards-quantifying, title = "Towards quantifying politicization in foreign aid project reports", author = "Wang, Sidi and Eggers, Gustav and de Roode Torres Georgiadis, Alexia and {\DJ}o, Tuan Anh and Gontard, L{\'e}a and Carlitz, Ruth and Bloem, Jelke", editor = "Afli, Haithem and Bouamor, Houda and Casagran, Cristina Blasi and Ghannay, Sahar", booktitle = "Proceedings of the Second Workshop on Natural Language Processing for Political Sciences @ LREC-COLING 2024", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.politicalnlp-1.9", pages = "85--90", abstract = "We aim to develop a metric of politicization by investigating whether this concept can be operationalized computationally using document embeddings. We are interested in measuring the extent to which foreign aid is politicized. Textual reports of foreign aid projects are often made available by donor governments, but these are large and unstructured. By embedding them in vector space, we can compute similarities between sets of known politicized keywords and the foreign aid reports. We present a pilot study where we apply this metric to USAID reports.", }
We aim to develop a metric of politicization by investigating whether this concept can be operationalized computationally using document embeddings. We are interested in measuring the extent to which foreign aid is politicized. Textual reports of foreign aid projects are often made available by donor governments, but these are large and unstructured. By embedding them in vector space, we can compute similarities between sets of known politicized keywords and the foreign aid reports. We present a pilot study where we apply this metric to USAID reports.
[ "Wang, Sidi", "Eggers, Gustav", "de Roode Torres Georgiadis, Alexia", "{\\DJ}o, Tuan Anh", "Gontard, L{\\'e}a", "Carlitz, Ruth", "Bloem, Jelke" ]
Towards quantifying politicization in foreign aid project reports
politicalnlp-1.9
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.politicalnlp-1.10.bib
https://aclanthology.org/2024.politicalnlp-1.10/
@inproceedings{sorokovikova-etal-2024-echo, title = "Echo-chambers and Idea Labs: Communication Styles on {T}witter", author = "Sorokovikova, Aleksandra and Becker, Michael and Yamshchikov, Ivan P.", editor = "Afli, Haithem and Bouamor, Houda and Casagran, Cristina Blasi and Ghannay, Sahar", booktitle = "Proceedings of the Second Workshop on Natural Language Processing for Political Sciences @ LREC-COLING 2024", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.politicalnlp-1.10", pages = "91--95", abstract = "This paper investigates the communication styles and structures of Twitter (X) communities within the vaccination context. While mainstream research primarily focuses on the echo-chamber phenomenon, wherein certain ideas are reinforced and participants are isolated from opposing opinions, this study reveals the presence of diverse communication styles across various communities. In addition to the communities exhibiting echo-chamber behavior, this research uncovers communities with distinct communication patterns. By shedding light on the nuanced nature of communication within social networks, this study emphasizes the significance of understanding the diversity of perspectives within online communities.", }
This paper investigates the communication styles and structures of Twitter (X) communities within the vaccination context. While mainstream research primarily focuses on the echo-chamber phenomenon, wherein certain ideas are reinforced and participants are isolated from opposing opinions, this study reveals the presence of diverse communication styles across various communities. In addition to the communities exhibiting echo-chamber behavior, this research uncovers communities with distinct communication patterns. By shedding light on the nuanced nature of communication within social networks, this study emphasizes the significance of understanding the diversity of perspectives within online communities.
[ "Sorokovikova, Aleks", "ra", "Becker, Michael", "Yamshchikov, Ivan P." ]
Echo-chambers and Idea Labs: Communication Styles on Twitter
politicalnlp-1.10
Poster
2403.19423
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.rail-1.1.bib
https://aclanthology.org/2024.rail-1.1/
@inproceedings{ghio-etal-2024-phonetics, title = "Doing Phonetics in the {R}ift {V}alley: Sound Systems of {M}aasai, {I}raqw and {H}adza", author = "Ghio, Alain and Demolin, Didier and Karani, Michael and Meynadier, Yohann", editor = "Mabuya, Rooweither and Matfunjwa, Muzi and Setaka, Mmasibidi and van Zaanen, Menno", booktitle = "Proceedings of the Fifth Workshop on Resources for African Indigenous Languages @ LREC-COLING 2024", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.rail-1.1", pages = "1--9", abstract = "This article discusses the contribution of experimental techniques to recording phonetic data in the field. Only a small part of the phonological systems of African languages is described with precision. This is why it is important to collect empirical data in the form of sound, video and physiological recordings. This allows research questions such as patterns of variation to be addressed. Analytical methods show how to interpret data from physical principles and integrate them into appropriate models. The question of linguistic contact between different language families is also addressed. To achieve these general objectives, we present the way we design corpora, and the different ways of recording data with crucial technical considerations during fieldwork. Finally, we focus on 3 languages spoken in the Great African Rift Zone, which includes several linguistic areas belonging to the four major linguistic families of the continent. (1) Hadza is a click language with a very complex consonant system. (2) Iraqw is a Cushitic language with ejective consonants. (3) Maasai is a Nilotic language with implosive consonants and a very elaborate set of interjections, ideophones and animal calls that include sounds not described in the International Phonetic Alphabet.", }
This article discusses the contribution of experimental techniques to recording phonetic data in the field. Only a small part of the phonological systems of African languages is described with precision. This is why it is important to collect empirical data in the form of sound, video and physiological recordings. This allows research questions such as patterns of variation to be addressed. Analytical methods show how to interpret data from physical principles and integrate them into appropriate models. The question of linguistic contact between different language families is also addressed. To achieve these general objectives, we present the way we design corpora, and the different ways of recording data with crucial technical considerations during fieldwork. Finally, we focus on 3 languages spoken in the Great African Rift Zone, which includes several linguistic areas belonging to the four major linguistic families of the continent. (1) Hadza is a click language with a very complex consonant system. (2) Iraqw is a Cushitic language with ejective consonants. (3) Maasai is a Nilotic language with implosive consonants and a very elaborate set of interjections, ideophones and animal calls that include sounds not described in the International Phonetic Alphabet.
[ "Ghio, Alain", "Demolin, Didier", "Karani, Michael", "Meynadier, Yohann" ]
Doing Phonetics in the Rift Valley: Sound Systems of Maasai, Iraqw and Hadza
rail-1.1
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.rail-1.2.bib
https://aclanthology.org/2024.rail-1.2/
@inproceedings{gauthier-etal-2024-kallaama, title = "Kallaama: A Transcribed Speech Dataset about Agriculture in the Three Most Widely Spoken Languages in {S}enegal", author = "Gauthier, Elodie and Ndiaye, Aminata and Guiss{\'e}, Abdoulaye", editor = "Mabuya, Rooweither and Matfunjwa, Muzi and Setaka, Mmasibidi and van Zaanen, Menno", booktitle = "Proceedings of the Fifth Workshop on Resources for African Indigenous Languages @ LREC-COLING 2024", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.rail-1.2", pages = "10--19", abstract = "This work is part of the Kallaama project, whose objective is to produce and disseminate national languages corpora for speech technologies developments, in the field of agriculture. Except for Wolof, which benefits from some language data for natural language processing, national languages of Senegal are largely ignored by language technology providers. However, such technologies are keys to the protection, promotion and teaching of these languages. Kallaama focuses on the 3 main spoken languages by Senegalese people: Wolof, Pulaar and Sereer. These languages are widely spoken by the population, with around 10 million of native Senegalese speakers, not to mention those outside the country. However, they remain under-resourced in terms of machine-readable data that can be used for automatic processing and language technologies, all the more so in the agricultural sector. We release a transcribed speech dataset containing 125 hours of recordings, about agriculture, in each of the above-mentioned languages. These resources are specifically designed for Automatic Speech Recognition purpose, including traditional approaches. To build such technologies, we provide textual corpora in Wolof and Pulaar, and a pronunciation lexicon containing 49,132 entries from the Wolof dataset.", }
This work is part of the Kallaama project, whose objective is to produce and disseminate national language corpora for the development of speech technologies in the field of agriculture. Except for Wolof, which benefits from some language data for natural language processing, the national languages of Senegal are largely ignored by language technology providers. However, such technologies are key to the protection, promotion and teaching of these languages. Kallaama focuses on the 3 languages most widely spoken by Senegalese people: Wolof, Pulaar and Sereer. These languages are widely spoken by the population, with around 10 million native Senegalese speakers, not to mention those outside the country. However, they remain under-resourced in terms of machine-readable data that can be used for automatic processing and language technologies, all the more so in the agricultural sector. We release a transcribed speech dataset containing 125 hours of recordings, about agriculture, in each of the above-mentioned languages. These resources are specifically designed for Automatic Speech Recognition purposes, including traditional approaches. To build such technologies, we provide textual corpora in Wolof and Pulaar, and a pronunciation lexicon containing 49,132 entries from the Wolof dataset.
[ "Gauthier, Elodie", "Ndiaye, Aminata", "Guiss{\\'e}, Abdoulaye" ]
Kallaama: A Transcribed Speech Dataset about Agriculture in the Three Most Widely Spoken Languages in Senegal
rail-1.2
Poster
2404.01991
[ "https://github.com/gauthelo/kallaama-speech-dataset" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.rail-1.3.bib
https://aclanthology.org/2024.rail-1.3/
@inproceedings{coffey-cristia-2024-long, title = "Long-Form Recordings to Study Children{'}s Language Input and Output in Under-Resourced Contexts", author = "Coffey, Joseph R. and Cristia, Alejandrina", editor = "Mabuya, Rooweither and Matfunjwa, Muzi and Setaka, Mmasibidi and van Zaanen, Menno", booktitle = "Proceedings of the Fifth Workshop on Resources for African Indigenous Languages @ LREC-COLING 2024", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.rail-1.3", pages = "20--31", abstract = "A growing body of research suggests that young children{'}s early speech and language exposure is associated with later language development (including delays and diagnoses), school readiness, and academic performance. The last decade has seen increasing use of child-worn devices to collect long-form audio recordings by educators, economists, and developmental psychologists. The most commonly used system for analyzing this data is LENA, which was trained on North American English child-centered data and generates estimates of children{'}s speech-like vocalization counts, adult word counts, and child-adult turn counts. Recently, cheaper and open-source non-LENA alternatives with multilingual training have been proposed. Both kinds of systems have been employed in under-resourced, sometimes multilingual contexts, including Africa where access to printed or digital linguistic resources may be limited. In this paper, we describe each kind of system (LENA, non-LENA), provide information on audio data collected with them that is available for reuse, review evidence of the accuracy of extant automated analyses, and note potential strengths and shortcomings of their use in African communities.", }
A growing body of research suggests that young children{'}s early speech and language exposure is associated with later language development (including delays and diagnoses), school readiness, and academic performance. The last decade has seen increasing use of child-worn devices to collect long-form audio recordings by educators, economists, and developmental psychologists. The most commonly used system for analyzing this data is LENA, which was trained on North American English child-centered data and generates estimates of children{'}s speech-like vocalization counts, adult word counts, and child-adult turn counts. Recently, cheaper and open-source non-LENA alternatives with multilingual training have been proposed. Both kinds of systems have been employed in under-resourced, sometimes multilingual contexts, including Africa where access to printed or digital linguistic resources may be limited. In this paper, we describe each kind of system (LENA, non-LENA), provide information on audio data collected with them that is available for reuse, review evidence of the accuracy of extant automated analyses, and note potential strengths and shortcomings of their use in African communities.
[ "Coffey, Joseph R.", "Cristia, Alej", "rina" ]
Long-Form Recordings to Study Children's Language Input and Output in Under-Resourced Contexts
rail-1.3
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.rail-1.4.bib
https://aclanthology.org/2024.rail-1.4/
@inproceedings{moape-etal-2024-developing, title = "Developing Bilingual {E}nglish-Setswana Datasets for Space Domain", author = "Moape, Tebatso G. and Ojo, Sunday Olusegun and Olugbara, Oludayo O.", editor = "Mabuya, Rooweither and Matfunjwa, Muzi and Setaka, Mmasibidi and van Zaanen, Menno", booktitle = "Proceedings of the Fifth Workshop on Resources for African Indigenous Languages @ LREC-COLING 2024", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.rail-1.4", pages = "32--36", abstract = "In the current digital age, languages lacking digital presence face an imminent risk of extinction. In addition, the absence of digital resources poses a significant obstacle to the development of Natural Language Processing (NLP) applications for such languages. Therefore, the development of digital language resources contributes to the preservation of these languages and enables application development. This paper contributes to the ongoing efforts of developing language resources for South African languages with a specific focus on Setswana and presents a new English-Setswana bilingual dataset that focuses on the space domain. The dataset was constructed using the expansion method. A subset of space domain English synsets from Princeton WordNet was professionally translated to Setswana. The initial submission of translations demonstrated an accuracy rate of 99{\%} before validation. After validation, continuous revisions and discussions between translators and validators resulted in a unanimous agreement, ultimately achieving a 100{\%} accuracy rate. The final version of the resource was converted into an XML format due to its machine-readable framework, providing a structured hierarchy for the organization of linguistic data.", }
In the current digital age, languages lacking digital presence face an imminent risk of extinction. In addition, the absence of digital resources poses a significant obstacle to the development of Natural Language Processing (NLP) applications for such languages. Therefore, the development of digital language resources contributes to the preservation of these languages and enables application development. This paper contributes to the ongoing efforts of developing language resources for South African languages with a specific focus on Setswana and presents a new English-Setswana bilingual dataset that focuses on the space domain. The dataset was constructed using the expansion method. A subset of space domain English synsets from Princeton WordNet was professionally translated to Setswana. The initial submission of translations demonstrated an accuracy rate of 99{\%} before validation. After validation, continuous revisions and discussions between translators and validators resulted in a unanimous agreement, ultimately achieving a 100{\%} accuracy rate. The final version of the resource was converted into an XML format due to its machine-readable framework, providing a structured hierarchy for the organization of linguistic data.
[ "Moape, Tebatso G.", "Ojo, Sunday Olusegun", "Olugbara, Oludayo O." ]
Developing Bilingual English-Setswana Datasets for Space Domain
rail-1.4
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.rail-1.5.bib
https://aclanthology.org/2024.rail-1.5/
@inproceedings{sibeko-2024-compiling, title = "Compiling a List of Frequently Used Setswana Words for Developing Readability Measures", author = "Sibeko, Johannes", editor = "Mabuya, Rooweither and Matfunjwa, Muzi and Setaka, Mmasibidi and van Zaanen, Menno", booktitle = "Proceedings of the Fifth Workshop on Resources for African Indigenous Languages @ LREC-COLING 2024", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.rail-1.5", pages = "37--44", abstract = "This paper addresses the pressing need for improved readability assessment in Setswana through the creation of a list of frequently used words in Setswana. The end goal is to integrate this list into the adaptation of traditional readability measures in Setswana, such as the Dale-Chall index, which relies on frequently used words. Our initial list is developed using corpus-based methods utilising frequency lists obtained from five sets of corpora. It is then refined using manual methods. The analysis section delves into the challenges encountered during the development of the final list, encompassing issues like the inclusion of non-Setswana words, proper names, unexpected terms, and spelling variations. The decision-making process is clarified, highlighting crucial choices such as the retention of contemporary terms and the acceptance of diverse spelling variations. These decisions reflect a nuanced balance between linguistic authenticity and readability. This paper contributes to the discourse on text readability in indigenous Southern African languages. Moreover, it establishes a foundation for tailored literacy initiatives and serves as a starting point for adapting traditional frequency-list-based readability measures to Setswana.", }
This paper addresses the pressing need for improved readability assessment in Setswana through the creation of a list of frequently used words in Setswana. The end goal is to integrate this list into the adaptation of traditional readability measures in Setswana, such as the Dale-Chall index, which relies on frequently used words. Our initial list is developed using corpus-based methods utilising frequency lists obtained from five sets of corpora. It is then refined using manual methods. The analysis section delves into the challenges encountered during the development of the final list, encompassing issues like the inclusion of non-Setswana words, proper names, unexpected terms, and spelling variations. The decision-making process is clarified, highlighting crucial choices such as the retention of contemporary terms and the acceptance of diverse spelling variations. These decisions reflect a nuanced balance between linguistic authenticity and readability. This paper contributes to the discourse on text readability in indigenous Southern African languages. Moreover, it establishes a foundation for tailored literacy initiatives and serves as a starting point for adapting traditional frequency-list-based readability measures to Setswana.
[ "Sibeko, Johannes" ]
Compiling a List of Frequently Used Setswana Words for Developing Readability Measures
rail-1.5
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.rail-1.6.bib
https://aclanthology.org/2024.rail-1.6/
@inproceedings{ngcungca-etal-2024-qualitative, title = "A Qualitative Inquiry into the {S}outh {A}frican Language Identifier{'}s Performance on {Y}ou{T}ube Comments.", author = "Ngcungca, Nkazimlo N. and Sibeko, Johannes and Rudman, Sharon", editor = "Mabuya, Rooweither and Matfunjwa, Muzi and Setaka, Mmasibidi and van Zaanen, Menno", booktitle = "Proceedings of the Fifth Workshop on Resources for African Indigenous Languages @ LREC-COLING 2024", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.rail-1.6", pages = "45--54", abstract = "The South African Language Identifier (SA-LID) has proven to be a valuable tool for data analysis in the multilingual context of South Africa, particularly in governmental texts. However, its suitability for broader projects has yet to be determined. This paper aims to assess the performance of the SA-LID in identifying isiXhosa in YouTube comments as part of the methodology for research on the expression of cultural identity through linguistic strategies. We curated a selection of 10 videos which focused on the isiXhosa culture in terms of theatre, poetry, language learning, culture, or music. The videos were predominantly in English as were most of the comments, but the latter were interspersed with elements of isiXhosa, identifying the commentators as speakers of isiXhosa. The SA-LID was used to identify all instances of the use of isiXhosa to facilitate the analysis of the relevant items. Following the application of the SA-LID to this data, a manual evaluation was conducted to gauge the effectiveness of this tool in selecting all isiXhosa items. Our findings reveal significant limitations in the use of the SA-LID, encompassing the oversight of unconventional spellings in indigenous languages and misclassification of closely related languages within the Nguni group. Although proficient in identifying the use of Nguni languages, differentiating within this language group proved challenging for the SA-LID. These results underscore the necessity for manual checks to complement the use of the SA-LID when other Nguni languages may be present in the comment texts.", }
The South African Language Identifier (SA-LID) has proven to be a valuable tool for data analysis in the multilingual context of South Africa, particularly in governmental texts. However, its suitability for broader projects has yet to be determined. This paper aims to assess the performance of the SA-LID in identifying isiXhosa in YouTube comments as part of the methodology for research on the expression of cultural identity through linguistic strategies. We curated a selection of 10 videos which focused on the isiXhosa culture in terms of theatre, poetry, language learning, culture, or music. The videos were predominantly in English as were most of the comments, but the latter were interspersed with elements of isiXhosa, identifying the commentators as speakers of isiXhosa. The SA-LID was used to identify all instances of the use of isiXhosa to facilitate the analysis of the relevant items. Following the application of the SA-LID to this data, a manual evaluation was conducted to gauge the effectiveness of this tool in selecting all isiXhosa items. Our findings reveal significant limitations in the use of the SA-LID, encompassing the oversight of unconventional spellings in indigenous languages and misclassification of closely related languages within the Nguni group. Although proficient in identifying the use of Nguni languages, differentiating within this language group proved challenging for the SA-LID. These results underscore the necessity for manual checks to complement the use of the SA-LID when other Nguni languages may be present in the comment texts.
[ "Ngcungca, Nkazimlo N.", "Sibeko, Johannes", "Rudman, Sharon" ]
A Qualitative Inquiry into the South African Language Identifier's Performance on YouTube Comments.
rail-1.6
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.rail-1.7.bib
https://aclanthology.org/2024.rail-1.7/
@inproceedings{gaustad-etal-2024-first, title = "The First {U}niversal {D}ependency Treebank for {T}swana: {T}swana-Popapolelo", author = "Gaustad, Tanja and Berg, Ansu and Pretorius, Rigardt and Eiselen, Roald", editor = "Mabuya, Rooweither and Matfunjwa, Muzi and Setaka, Mmasibidi and van Zaanen, Menno", booktitle = "Proceedings of the Fifth Workshop on Resources for African Indigenous Languages @ LREC-COLING 2024", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.rail-1.7", pages = "55--65", abstract = "This paper presents the first publicly available UD treebank for Tswana, Tswana-Popapolelo. The data used consists of the 20 Cairo CICLing sentences translated to Tswana. After pre-processing these sentences with detailed POS (XPOS) and converting them to universal POS (UPOS), we proceeded to annotate the data with dependency relations, documenting decisions for the language specific constructions. Linguistic issues encountered are described in detail as this is the first application of the UD framework to produce a dependency treebank for the Bantu language family in general and for Tswana specifically.", }
This paper presents the first publicly available UD treebank for Tswana, Tswana-Popapolelo. The data used consists of the 20 Cairo CICLing sentences translated to Tswana. After pre-processing these sentences with detailed POS (XPOS) and converting them to universal POS (UPOS), we proceeded to annotate the data with dependency relations, documenting decisions for the language specific constructions. Linguistic issues encountered are described in detail as this is the first application of the UD framework to produce a dependency treebank for the Bantu language family in general and for Tswana specifically.
[ "Gaustad, Tanja", "Berg, Ansu", "Pretorius, Rigardt", "Eiselen, Roald" ]
The First Universal Dependency Treebank for Tswana: Tswana-Popapolelo
rail-1.7
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.rail-1.8.bib
https://aclanthology.org/2024.rail-1.8/
@inproceedings{sibeko-van-zaanen-2024-adapting, title = "Adapting Nine Traditional Text Readability Measures into Sesotho", author = "Sibeko, Johannes and van Zaanen, Menno", editor = "Mabuya, Rooweither and Matfunjwa, Muzi and Setaka, Mmasibidi and van Zaanen, Menno", booktitle = "Proceedings of the Fifth Workshop on Resources for African Indigenous Languages @ LREC-COLING 2024", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.rail-1.8", pages = "66--76", abstract = {This article discusses the adaptation of traditional English readability measures into Sesotho, a Southern African indigenous low-resource language. We employ the use of a translated readability corpus to extract textual features from the Sesotho texts and readability levels from the English translations. We look at the correlation between the different features to ensure that non-competing features are used in the readability metrics. Next, through linear regression analyses, we examine the impact of the text features from the Sesotho texts on the overall readability levels (which are gauged from the English translations). Starting from the structure of the traditional English readability measures, linear regression models identify coefficients and intercepts for the different variables considered in the readability formulas for Sesotho. In the end, we propose ten readability formulas for Sesotho (one more than the initial nine; we provide two formulas based on the structure of the Gunning Fog index). We also introduce intercepts for the Gunning Fog index, the L{\"a}sbarhets index and the Readability index (which do not have intercepts in the English variants) in the Sesotho formulas.}, }
This article discusses the adaptation of traditional English readability measures into Sesotho, a Southern African indigenous low-resource language. We use a translated readability corpus to extract textual features from the Sesotho texts and readability levels from the English translations. We look at the correlation between the different features to ensure that non-competing features are used in the readability metrics. Next, through linear regression analyses, we examine the impact of the text features from the Sesotho texts on the overall readability levels (which are gauged from the English translations). Starting from the structure of the traditional English readability measures, linear regression models identify coefficients and intercepts for the different variables considered in the readability formulas for Sesotho. In the end, we propose ten readability formulas for Sesotho (one more than the initial nine; we provide two formulas based on the structure of the Gunning Fog index). We also introduce intercepts for the Gunning Fog index, the L{\"a}sbarhets index and the Readability index (which do not have intercepts in the English variants) in the Sesotho formulas.
[ "Sibeko, Johannes", "van Zaanen, Menno" ]
Adapting Nine Traditional Text Readability Measures into Sesotho
rail-1.8
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.rail-1.9.bib
https://aclanthology.org/2024.rail-1.9/
@inproceedings{marais-etal-2024-bootstrapping, title = "Bootstrapping Syntactic Resources from isi{Z}ulu to Siswati", author = "Marais, Laurette and Pretorius, Laurette and Posthumus, Lionel Clive", editor = "Mabuya, Rooweither and Matfunjwa, Muzi and Setaka, Mmasibidi and van Zaanen, Menno", booktitle = "Proceedings of the Fifth Workshop on Resources for African Indigenous Languages @ LREC-COLING 2024", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.rail-1.9", pages = "77--85", abstract = "IsiZulu and Siswati are mutually intelligible languages that are considered under-resourced despite their status as official languages. Even so, the available digital and computational language resources for isiZulu significantly outstrip those for Siswati, such that it is worth investigating to what degree bootstrapping approaches can be leveraged to develop resources for Siswati. In this paper, we present the development of a computational grammar and parallel treebank, based on parallel linguistic descriptions of the two languages.", }
IsiZulu and Siswati are mutually intelligible languages that are considered under-resourced despite their status as official languages. Even so, the available digital and computational language resources for isiZulu significantly outstrip those for Siswati, such that it is worth investigating to what degree bootstrapping approaches can be leveraged to develop resources for Siswati. In this paper, we present the development of a computational grammar and parallel treebank, based on parallel linguistic descriptions of the two languages.
[ "Marais, Laurette", "Pretorius, Laurette", "Posthumus, Lionel Clive" ]
Bootstrapping Syntactic Resources from isiZulu to Siswati
rail-1.9
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.rail-1.10.bib
https://aclanthology.org/2024.rail-1.10/
@inproceedings{white-etal-2024-early, title = "Early Child Language Resources and Corpora Developed in Nine {A}frican Languages by the {SAD}i{L}a{R} Child Language Development Node", author = "White, Michelle J. and Southwood, Frenette and Yalala, Sefela Londiwe", editor = "Mabuya, Rooweither and Matfunjwa, Muzi and Setaka, Mmasibidi and van Zaanen, Menno", booktitle = "Proceedings of the Fifth Workshop on Resources for African Indigenous Languages @ LREC-COLING 2024", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.rail-1.10", pages = "86--93", abstract = "Prior to the initiation of the project reported on in this paper, there were no instruments available with which to measure the language skills of young speakers of nine official African languages of South Africa. This limited the kind of research that could be conducted, and the rate at which knowledge creation on child language development could progress. Not only does this result in a dearth of knowledge needed to inform child language interventions but it also hinders the development of child language theories that would have good predictive power across languages. This paper reports on (i) the development of a questionnaire that caregivers complete about their infant{'}s communicative gestures and vocabulary or about their toddler{'}s vocabulary and grammar skills, in isiNdebele, isiXhosa, isiZulu, Sesotho, Sesotho sa Leboa, Setswana, Siswati, Tshivenda, and Xitsonga; and (ii) the 24 child language corpora thus far developed with these instruments. The potential research avenues opened by the 18 instruments and 24 corpora are discussed.", }
Prior to the initiation of the project reported on in this paper, there were no instruments available with which to measure the language skills of young speakers of nine official African languages of South Africa. This limited the kind of research that could be conducted, and the rate at which knowledge creation on child language development could progress. Not only does this result in a dearth of knowledge needed to inform child language interventions but it also hinders the development of child language theories that would have good predictive power across languages. This paper reports on (i) the development of a questionnaire that caregivers complete about their infant{'}s communicative gestures and vocabulary or about their toddler{'}s vocabulary and grammar skills, in isiNdebele, isiXhosa, isiZulu, Sesotho, Sesotho sa Leboa, Setswana, Siswati, Tshivenda, and Xitsonga; and (ii) the 24 child language corpora thus far developed with these instruments. The potential research avenues opened by the 18 instruments and 24 corpora are discussed.
[ "White, Michelle J.", "Southwood, Frenette", "Yalala, Sefela Londiwe" ]
Early Child Language Resources and Corpora Developed in Nine African Languages by the SADiLaR Child Language Development Node
rail-1.10
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.rail-1.11.bib
https://aclanthology.org/2024.rail-1.11/
@inproceedings{gidey-etal-2024-morphological, title = "Morphological Synthesizer for {G}e{'}ez Language: Addressing Morphological Complexity and Resource Limitations", author = "Gidey, Gebrearegawi Gebremariam and Teklehaymanot, Hailay Kidu and Atsbha, Gebregewergs Mezgebe", editor = "Mabuya, Rooweither and Matfunjwa, Muzi and Setaka, Mmasibidi and van Zaanen, Menno", booktitle = "Proceedings of the Fifth Workshop on Resources for African Indigenous Languages @ LREC-COLING 2024", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.rail-1.11", pages = "94--106", abstract = "Ge{'}ez is an ancient Semitic language renowned for its unique alphabet. It serves as the script for numerous lan- guages, including Tigrinya and Amharic, and played a pivotal role in Ethiopia{'}s cultural and religious development during the Aksumite kingdom era. Ge{'}ez remains significant as a liturgical language in Ethiopia and Eritrea, with much of the national identity documentation recorded in Ge{'}ez. These written materials are invaluable primary sources for studying Ethiopian and Eritrean philosophy, creativity, knowledge, and civilization. Ge{'}ez is a complex morphological structure with rich inflectional and derivational morphology, and no usable NLP has been developed and published until now due to the scarcity of annotated linguistic data, corpora, labeled datasets, and lexicons. Therefore, we proposed a rule-based Ge{'}ez morphological synthesis to generate surface words from root words according to the morphological structures of the language. Consequently, we proposed an automatic morphological synthesizer for Ge{'}ez using TLM. We used 1,102 sample verbs, representing all verb morphological structures, to test and evaluate the system. Finally, we get a performance of 97.4{\%}. This result outperforms the baseline model, suggesting that other scholars build a comprehensive system considering morphological variations of the language. Keywords: Ge{'}ez, NLP, morphology, morphological synthesizer, rule-based", }
Ge{'}ez is an ancient Semitic language renowned for its unique alphabet. It serves as the script for numerous languages, including Tigrinya and Amharic, and played a pivotal role in Ethiopia{'}s cultural and religious development during the Aksumite kingdom era. Ge{'}ez remains significant as a liturgical language in Ethiopia and Eritrea, with much of the national identity documentation recorded in Ge{'}ez. These written materials are invaluable primary sources for studying Ethiopian and Eritrean philosophy, creativity, knowledge, and civilization. Ge{'}ez has a complex morphological structure with rich inflectional and derivational morphology, and no usable NLP tool has been developed and published until now due to the scarcity of annotated linguistic data, corpora, labeled datasets, and lexicons. We therefore propose a rule-based morphological synthesis approach for Ge{'}ez that generates surface words from root words according to the morphological structures of the language, and implement an automatic morphological synthesizer for Ge{'}ez using TLM. We used 1,102 sample verbs, representing all verb morphological structures, to test and evaluate the system. The system achieves a performance of 97.4{\%}. This result outperforms the baseline model, and we suggest that other scholars build a comprehensive system that considers the morphological variations of the language. Keywords: Ge{'}ez, NLP, morphology, morphological synthesizer, rule-based
[ "Gidey, Gebrearegawi Gebremariam", "Teklehaymanot, Hailay Kidu", "Atsbha, Gebregewergs Mezgebe" ]
Morphological Synthesizer for Ge'ez Language: Addressing Morphological Complexity and Resource Limitations
rail-1.11
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.rail-1.12.bib
https://aclanthology.org/2024.rail-1.12/
@inproceedings{tonja-etal-2024-ethiomt, title = "{E}thio{MT}: Parallel Corpus for Low-resource {E}thiopian Languages", author = "Tonja, Atnafu Lambebo and Kolesnikova, Olga and Gelbukh, Alexander and Kalita, Jugal", editor = "Mabuya, Rooweither and Matfunjwa, Muzi and Setaka, Mmasibidi and van Zaanen, Menno", booktitle = "Proceedings of the Fifth Workshop on Resources for African Indigenous Languages @ LREC-COLING 2024", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.rail-1.12", pages = "107--114", abstract = "Recent research in natural language processing (NLP) has achieved impressive performance in tasks such as machine translation (MT), news classification, and question-answering in high-resource languages. However, the performance of MT leaves much to be desired for low-resource languages. This is due to the smaller size of available parallel corpora in these languages, if such corpora are available at all. NLP in Ethiopian languages suffers from the same issues due to the unavailability of publicly accessible datasets for NLP tasks, including MT. To help the research community and foster research for Ethiopian languages, we introduce EthioMT {--} a new parallel corpus for 15 languages. We also create a new benchmark by collecting a dataset for better-researched languages in Ethiopia. We evaluate the newly collected corpus and the benchmark dataset for 23 Ethiopian languages using transformer and fine-tuning approaches.", }
Recent research in natural language processing (NLP) has achieved impressive performance in tasks such as machine translation (MT), news classification, and question-answering in high-resource languages. However, the performance of MT leaves much to be desired for low-resource languages. This is due to the smaller size of available parallel corpora in these languages, if such corpora are available at all. NLP in Ethiopian languages suffers from the same issues due to the unavailability of publicly accessible datasets for NLP tasks, including MT. To help the research community and foster research for Ethiopian languages, we introduce EthioMT {--} a new parallel corpus for 15 languages. We also create a new benchmark by collecting a dataset for better-researched languages in Ethiopia. We evaluate the newly collected corpus and the benchmark dataset for 23 Ethiopian languages using transformer and fine-tuning approaches.
[ "Tonja, Atnafu Lambebo", "Kolesnikova, Olga", "Gelbukh, Alex", "er", "Kalita, Jugal" ]
EthioMT: Parallel Corpus for Low-resource Ethiopian Languages
rail-1.12
Poster
2403.19365
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.rail-1.13.bib
https://aclanthology.org/2024.rail-1.13/
@inproceedings{ibrahim-etal-2024-resources, title = "Resources for Annotating Hate Speech in Social Media Platforms Used in {E}thiopia: A Novel Lexicon and Labelling Scheme", author = "Ibrahim, Nuhu and Mulford, Felicity and Lawrence, Matt and Batista-Navarro, Riza", editor = "Mabuya, Rooweither and Matfunjwa, Muzi and Setaka, Mmasibidi and van Zaanen, Menno", booktitle = "Proceedings of the Fifth Workshop on Resources for African Indigenous Languages @ LREC-COLING 2024", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.rail-1.13", pages = "115--123", abstract = "Hate speech on social media has proliferated in Ethiopia. To support studies aimed at investigating the targets and types of hate speech circulating in the Ethiopian context, we developed a new fine-grained annotation scheme that captures three elements of hate speech: the target (i.e., any groups with protected characteristics), type (i.e., the method of abuse) and nature (i.e., the style of the language used). We also developed a new lexicon of hate speech-related keywords in the four most prominent languages found on Ethiopian social media: Amharic, Afaan Oromo, English and Tigrigna. These keywords enabled us to retrieve social media posts (also in the same four languages) from three platforms (i.e., X, Telegram and Facebook), that are likely to contain hate speech. Experts in the Ethiopian context then manually annotated a sample of those retrieved posts, obtaining fair to moderate inter-annotator agreement. The resulting annotations formed the basis of a case study of which groups tend to be targeted by particular types of hate speech or by particular styles of hate speech language.", }
Hate speech on social media has proliferated in Ethiopia. To support studies aimed at investigating the targets and types of hate speech circulating in the Ethiopian context, we developed a new fine-grained annotation scheme that captures three elements of hate speech: the target (i.e., any groups with protected characteristics), type (i.e., the method of abuse) and nature (i.e., the style of the language used). We also developed a new lexicon of hate speech-related keywords in the four most prominent languages found on Ethiopian social media: Amharic, Afaan Oromo, English and Tigrigna. These keywords enabled us to retrieve social media posts (also in the same four languages) from three platforms (i.e., X, Telegram and Facebook), that are likely to contain hate speech. Experts in the Ethiopian context then manually annotated a sample of those retrieved posts, obtaining fair to moderate inter-annotator agreement. The resulting annotations formed the basis of a case study of which groups tend to be targeted by particular types of hate speech or by particular styles of hate speech language.
[ "Ibrahim, Nuhu", "Mulford, Felicity", "Lawrence, Matt", "Batista-Navarro, Riza" ]
Resources for Annotating Hate Speech in Social Media Platforms Used in Ethiopia: A Novel Lexicon and Labelling Scheme
rail-1.13
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.rail-1.14.bib
https://aclanthology.org/2024.rail-1.14/
@inproceedings{taffa-etal-2024-low, title = "Low Resource Question Answering: An {A}mharic Benchmarking Dataset", author = "Taffa, Tilahun Abedissa and Usbeck, Ricardo and Assabie, Yaregal", editor = "Mabuya, Rooweither and Matfunjwa, Muzi and Setaka, Mmasibidi and van Zaanen, Menno", booktitle = "Proceedings of the Fifth Workshop on Resources for African Indigenous Languages @ LREC-COLING 2024", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.rail-1.14", pages = "124--132", abstract = "Question Answering (QA) systems return concise answers or answer lists based on natural language text, which uses a given context document. Many resources go into curating QA datasets to advance the development of robust QA models. There is a surge in QA datasets for languages such as English; this is different for low-resource languages like Amharic. Indeed, there is no published or publicly available Amharic QA dataset. Hence, to foster further research in low-resource QA, we present the first publicly available benchmarking Amharic Question Answering Dataset (Amh-QuAD). We crowdsource 2,628 question-answer pairs from over 378 Amharic Wikipedia articles. Using the training set, we fine-tune an XLM-R-based language model and introduce a new reader model. Leveraging our newly fine-tuned reader run a baseline model to spark open-domain Amharic QA research interest. The best- performing baseline QA achieves an F-score of 80.3 and 81.34 in retriever-reader and reading comprehension settings.", }
Question Answering (QA) systems return concise answers or answer lists from natural language text, using a given context document. Many resources go into curating QA datasets to advance the development of robust QA models. There is a surge in QA datasets for languages such as English; this is different for low-resource languages like Amharic. Indeed, there is no published or publicly available Amharic QA dataset. Hence, to foster further research in low-resource QA, we present the first publicly available benchmarking Amharic Question Answering Dataset (Amh-QuAD). We crowdsource 2,628 question-answer pairs from over 378 Amharic Wikipedia articles. Using the training set, we fine-tune an XLM-R-based language model and introduce a new reader model. Leveraging our newly fine-tuned reader, we run a baseline model to spark open-domain Amharic QA research interest. The best-performing baseline QA achieves an F-score of 80.3 and 81.34 in the retriever-reader and reading comprehension settings, respectively.
[ "Taffa, Tilahun Abedissa", "Usbeck, Ricardo", "Assabie, Yaregal" ]
Low Resource Question Answering: An Amharic Benchmarking Dataset
rail-1.14
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.rail-1.15.bib
https://aclanthology.org/2024.rail-1.15/
@inproceedings{guellil-etal-2024-annotators, title = "The Annotators Agree to Not Agree on the Fine-grained Annotation of Hate-speech against Women in {A}lgerian Dialect Comments", author = "Guellil, Imane and Houichi, Yousra and Chennoufi, Sara and Boubred, Mohamed and Boucetta, Anfal Yousra and Azouaou, Faical", editor = "Mabuya, Rooweither and Matfunjwa, Muzi and Setaka, Mmasibidi and van Zaanen, Menno", booktitle = "Proceedings of the Fifth Workshop on Resources for African Indigenous Languages @ LREC-COLING 2024", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.rail-1.15", pages = "133--139", abstract = "A significant number of research studies have been presented for detecting hate speech in social media during the last few years. However, the majority of these studies are in English. Only a few studies focus on Arabic and its dialects (especially the Algerian dialect) with a smaller number of them targeting sexism detection (or hate speech against women). Even the works that have been proposed on Arabic sexism detection consider two classes only (hateful and non-hateful), and three classes(adding the neutral class) in the best scenario. This paper aims to propose the first fine-grained corpus focusing on 13 classes. However, given the challenges related to hate speech and fine-grained annotation, the Kappa metric is relatively low among the annotators (i.e. 35{\%} ). This work in progress proposes three main contributions: 1) Annotation of different categories related to hate speech such as insults, vulgar words or hate in general. 2) Annotation of 10,000 comments, in Arabic and Algerian dialects, automatically extracted from Youtube. 3) High-lighting the challenges related to manual annotation such as subjectivity, risk of bias, lack of annotation guidelines, etc", }
A significant number of research studies have been presented for detecting hate speech in social media during the last few years. However, the majority of these studies are in English. Only a few studies focus on Arabic and its dialects (especially the Algerian dialect), with a smaller number of them targeting sexism detection (or hate speech against women). Even the works that have been proposed on Arabic sexism detection consider only two classes (hateful and non-hateful), or three classes (adding the neutral class) in the best scenario. This paper aims to propose the first fine-grained corpus focusing on 13 classes. However, given the challenges related to hate speech and fine-grained annotation, the Kappa metric is relatively low among the annotators (i.e. 35{\%}). This work in progress proposes three main contributions: 1) Annotation of different categories related to hate speech such as insults, vulgar words or hate in general. 2) Annotation of 10,000 comments, in Arabic and Algerian dialects, automatically extracted from Youtube. 3) Highlighting the challenges related to manual annotation, such as subjectivity, risk of bias, and lack of annotation guidelines.
[ "Guellil, Imane", "Houichi, Yousra", "Chennoufi, Sara", "Boubred, Mohamed", "Boucetta, Anfal Yousra", "Azouaou, Faical" ]
The Annotators Agree to Not Agree on the Fine-grained Annotation of Hate-speech against Women in Algerian Dialect Comments
rail-1.15
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.rail-1.16.bib
https://aclanthology.org/2024.rail-1.16/
@inproceedings{cisse-sadat-2024-advancing, title = "Advancing Language Diversity and Inclusion: Towards a Neural Network-based Spell Checker and Correction for {W}olof", author = "Ciss{\'e}, Thierno Ibrahima and Sadat, Fatiha", editor = "Mabuya, Rooweither and Matfunjwa, Muzi and Setaka, Mmasibidi and van Zaanen, Menno", booktitle = "Proceedings of the Fifth Workshop on Resources for African Indigenous Languages @ LREC-COLING 2024", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.rail-1.16", pages = "140--151", abstract = "This paper introduces a novel approach to spell checking and correction for low-resource and under-represented languages, with a specific focus on an African language, Wolof. By leveraging the capabilities of transformer models and neural networks, we propose an efficient and practical system capable of correcting typos and improving text quality. Our proposed technique involves training a transformer model on a parallel corpus consisting of misspelled sentences and their correctly spelled counterparts, generated using a semi-automatic method. As we fine tune the model to transform misspelled text into accurate sentences, we demonstrate the immense potential of this approach to overcome the challenges faced by resource-scarce and under-represented languages in the realm of spell checking and correction. Our experimental results and evaluations exhibit promising outcomes, offering valuable insights that contribute to the ongoing endeavors aimed at enriching linguistic diversity and inclusion and thus improving digital communication accessibility for languages grappling with scarcity of resources and under-representation in the digital landscape.", }
This paper introduces a novel approach to spell checking and correction for low-resource and under-represented languages, with a specific focus on an African language, Wolof. By leveraging the capabilities of transformer models and neural networks, we propose an efficient and practical system capable of correcting typos and improving text quality. Our proposed technique involves training a transformer model on a parallel corpus consisting of misspelled sentences and their correctly spelled counterparts, generated using a semi-automatic method. As we fine tune the model to transform misspelled text into accurate sentences, we demonstrate the immense potential of this approach to overcome the challenges faced by resource-scarce and under-represented languages in the realm of spell checking and correction. Our experimental results and evaluations exhibit promising outcomes, offering valuable insights that contribute to the ongoing endeavors aimed at enriching linguistic diversity and inclusion and thus improving digital communication accessibility for languages grappling with scarcity of resources and under-representation in the digital landscape.
[ "Ciss{\\'e}, Thierno Ibrahima", "Sadat, Fatiha" ]
Advancing Language Diversity and Inclusion: Towards a Neural Network-based Spell Checker and Correction for Wolof
rail-1.16
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.rail-1.17.bib
https://aclanthology.org/2024.rail-1.17/
@inproceedings{momoh-2024-lateral, title = "Lateral Inversions, Word Form/Order, Unnamed Grammatical Entities and Ambiguities in the Constituency Parsing and Annotation of the {I}gala Syntax through the {E}nglish Language", author = "Momoh, Mahmud Mohammed", editor = "Mabuya, Rooweither and Matfunjwa, Muzi and Setaka, Mmasibidi and van Zaanen, Menno", booktitle = "Proceedings of the Fifth Workshop on Resources for African Indigenous Languages @ LREC-COLING 2024", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.rail-1.17", pages = "152--162", abstract = "The aim of this paper is expose the structural form of the Igala language and the inherent complexity related to the translation of the language to a second language {--} i.e. the English language, through an inquisition into its the word order, lateral inversions, and unnamed grammatical entities inherent in the language. While this study finds out that there is a preponderance of a linguistic typology with subject-verb-object word order and the total absence of preposition in the speech composition of the Igala language. The implication of these trio of topic sentences (syntactic inversion, word ordering, unnamed entities) have remain within the dark corner of intellectual consideration and worst still the incorporation of this considerations in syntax parsing and annotation in computing. Rising from ongoing abstruseness and incongruity in machine translation of Igala, a comprehension model for automotive identification, application and/or conversion of these structural forms to the English language shall be the focus of this paper.", }
The aim of this paper is to expose the structural form of the Igala language and the inherent complexity related to the translation of the language into a second language, i.e. the English language, through an inquiry into its word order, lateral inversions, and unnamed grammatical entities inherent in the language. This study finds a preponderance of a linguistic typology with subject-verb-object word order and a total absence of prepositions in the speech composition of the Igala language. The implications of this trio of issues (syntactic inversion, word ordering, unnamed entities) have remained in a dark corner of intellectual consideration, and worse still, so has their incorporation into syntax parsing and annotation in computing. Arising from the ongoing abstruseness and incongruity in the machine translation of Igala, a comprehension model for the automatic identification, application and/or conversion of these structural forms into the English language is the focus of this paper.
[ "Momoh, Mahmud Mohammed" ]
Lateral Inversions, Word Form/Order, Unnamed Grammatical Entities and Ambiguities in the Constituency Parsing and Annotation of the Igala Syntax through the English Language
rail-1.17
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.rapid-1.1.bib
https://aclanthology.org/2024.rapid-1.1/
@inproceedings{tsiwah-etal-2024-semantic, title = "Semantic-based {NLP} techniques discriminate schizophrenia and {W}ernicke{'}s aphasia based on spontaneous speech", author = "Tsiwah, Frank and Mayya, Anas and van Cranenburgh, Andreas", editor = "Kokkinakis, Dimitrios and Fraser, Kathleen C. and Themistocleous, Charalambos K. and Fors, Kristina Lundholm and Tsanas, Athanasios and Ohman, Fredrik", booktitle = "Proceedings of the Fifth Workshop on Resources and ProcessIng of linguistic, para-linguistic and extra-linguistic Data from people with various forms of cognitive/psychiatric/developmental impairments @LREC-COLING 2024", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.rapid-1.1", pages = "1--8", abstract = "People with schizophrenia spectrum disorder (SSD){---}a psychiatric disorder, and people with Wernicke{'}s aphasia {---} an acquired neurological disorder, are both known to display semantic deficits in their spontaneous speech outputs. Very few studies directly compared the two groups on their spontaneous speech (Gerson et al., 1977; Faber et al., 1983), and no consistent results were found. Our study uses word (based on the word2vec model with moving windows across words) and sentence (transformer based-model) embeddings as features for a machine learning classification model to differentiate between the spontaneous speech of both groups. Additionally, this study uses these measures to differentiate between people with Wernicke{'}s aphasia and healthy controls. The model is able to classify patients with Wernicke{'}s aphasia and patients with SSD with a cross-validated accuracy of 81{\%}. Additionally, it is also able to classify patients with Wernicke{'}s aphasia versus healthy controls and SSD versus healthy controls with cross-validated accuracy of 93.72{\%} and 84.36{\%}, respectively. For the SSD individuals, sentence and/or discourse level features are deemed more informative by the model, whereas for the Wernicke group, only intra-sentential features are more informative. Overall, we show that NLP-based semantic measures are sensitive to identifying Wernicke{'}s aphasic and schizophrenic speech.", }
People with schizophrenia spectrum disorder (SSD){---}a psychiatric disorder, and people with Wernicke{'}s aphasia {---} an acquired neurological disorder, are both known to display semantic deficits in their spontaneous speech outputs. Very few studies directly compared the two groups on their spontaneous speech (Gerson et al., 1977; Faber et al., 1983), and no consistent results were found. Our study uses word (based on the word2vec model with moving windows across words) and sentence (transformer-based model) embeddings as features for a machine learning classification model to differentiate between the spontaneous speech of both groups. Additionally, this study uses these measures to differentiate between people with Wernicke{'}s aphasia and healthy controls. The model is able to classify patients with Wernicke{'}s aphasia and patients with SSD with a cross-validated accuracy of 81{\%}. It is also able to classify patients with Wernicke{'}s aphasia versus healthy controls and SSD versus healthy controls with cross-validated accuracies of 93.72{\%} and 84.36{\%}, respectively. For the SSD individuals, sentence and/or discourse level features are deemed more informative by the model, whereas for the Wernicke group, only intra-sentential features are more informative. Overall, we show that NLP-based semantic measures are sensitive to identifying Wernicke{'}s aphasic and schizophrenic speech.
[ "Tsiwah, Frank", "Mayya, Anas", "van Cranenburgh, Andreas" ]
Semantic-based NLP techniques discriminate schizophrenia and Wernicke's aphasia based on spontaneous speech
rapid-1.1
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.rapid-1.2.bib
https://aclanthology.org/2024.rapid-1.2/
@inproceedings{saccone-2024-speech, title = "Speech Rate and Salient Syllables Position in Spontaneous Speech of Children with Autism Spectrum Disorder", author = "Saccone, Valentina", editor = "Kokkinakis, Dimitrios and Fraser, Kathleen C. and Themistocleous, Charalambos K. and Fors, Kristina Lundholm and Tsanas, Athanasios and Ohman, Fredrik", booktitle = "Proceedings of the Fifth Workshop on Resources and ProcessIng of linguistic, para-linguistic and extra-linguistic Data from people with various forms of cognitive/psychiatric/developmental impairments @LREC-COLING 2024", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.rapid-1.2", pages = "9--15", abstract = "The study employs a semi-automatic approach to analyze speech rate in spoken Italian, aiming to identify acoustic parameters associated with perceptual atypicality in the speech of children diagnosed with Autism Spectrum Disorder (ASD). The research focuses on a dataset comprising recordings of semi-spontaneous interactions, in comparison with interviews of Typically Developing (TD) children. A detailed examination of speech rate variability is conducted, progressing from assessing overall speech rate in conversation to the analysis of individual utterances. Furthermore, salient syllables within utterances are identified using an automatic procedure through the Salient Detector Praat script and analyzed for stress position. The study highlights specific speech style, including rapid-telegraphic and reading-performed speech. Additionally, it reveals a higher speech rate with the increasing length of utterance when {\textless}10 syllables; conversely, a speech rate diminishing in 20-25 syllables utterances, suggesting potential difficulty in producing longer utterances associated with increased cognitive load.", }
The study employs a semi-automatic approach to analyze speech rate in spoken Italian, aiming to identify acoustic parameters associated with perceptual atypicality in the speech of children diagnosed with Autism Spectrum Disorder (ASD). The research focuses on a dataset comprising recordings of semi-spontaneous interactions, in comparison with interviews of Typically Developing (TD) children. A detailed examination of speech rate variability is conducted, progressing from assessing overall speech rate in conversation to the analysis of individual utterances. Furthermore, salient syllables within utterances are identified using an automatic procedure through the Salient Detector Praat script and analyzed for stress position. The study highlights specific speech styles, including rapid-telegraphic and reading-performed speech. Additionally, it reveals a higher speech rate with increasing utterance length when utterances are {\textless}10 syllables; conversely, the speech rate diminishes in utterances of 20-25 syllables, suggesting potential difficulty in producing longer utterances associated with increased cognitive load.
[ "Saccone, Valentina" ]
Speech Rate and Salient Syllables Position in Spontaneous Speech of Children with Autism Spectrum Disorder
rapid-1.2
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
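The speech-rate analysis described in the abstract above can be pictured as computing syllables per second for each utterance and comparing rates across utterance-length bins. The sketch below does exactly that with invented syllable counts and durations, standing in for the Praat-derived annotations used in the study; the bin boundaries mirror the ones the abstract mentions.

```python
# Sketch: speech rate (syllables/second) per utterance, grouped by utterance length.
# Durations and syllable counts are invented placeholders for Praat annotations.
from statistics import mean

# (syllable_count, duration_in_seconds) per utterance -- hypothetical values.
utterances = [(6, 1.4), (9, 2.0), (12, 2.5), (18, 3.9), (22, 6.1), (24, 7.0)]

def length_bin(n_syll):
    """Assign an utterance to a length bin, following the abstract's cut-offs."""
    if n_syll < 10:
        return "<10 syllables"
    if 20 <= n_syll <= 25:
        return "20-25 syllables"
    return "10-19 syllables"

rates = {}
for n_syll, dur in utterances:
    rates.setdefault(length_bin(n_syll), []).append(n_syll / dur)

for bin_name, values in sorted(rates.items()):
    print(f"{bin_name}: mean speech rate = {mean(values):.2f} syll/s")
```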
https://aclanthology.org/2024.rapid-1.3.bib
https://aclanthology.org/2024.rapid-1.3/
@inproceedings{lindsay-etal-2024-cross, title = "Cross-Lingual Examination of Language Features and Cognitive Scores From Free Speech", author = {Lindsay, Hali and Albertin, Giorgia and Schwed, Louisa and Linz, Nicklas and Tr{\"o}ger, Johannes}, editor = "Kokkinakis, Dimitrios and Fraser, Kathleen C. and Themistocleous, Charalambos K. and Fors, Kristina Lundholm and Tsanas, Athanasios and Ohman, Fredrik", booktitle = "Proceedings of the Fifth Workshop on Resources and ProcessIng of linguistic, para-linguistic and extra-linguistic Data from people with various forms of cognitive/psychiatric/developmental impairments @LREC-COLING 2024", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.rapid-1.3", pages = "16--25", abstract = "Speech analysis is gaining significance for monitoring neurodegenerative disorders, but with a view of application in clinical practice, solid evidence of the association of language features with cognitive scores is still needed. A cross-linguistic investigation has been pursued to examine whether language features show significance correlation with two cognitive scores, i.e. Mini-Mental State Examination and ki:e SB-C scores, on Alzheimer{'}s Disease patients. We explore 23 language features, representative of syntactic complexity and semantic richness, extracted on a dataset of free speech recordings of 138 participants distributed in four languages (Spanish, Catalan, German, Dutch). Data was analyzed using the speech library SIGMA; Pearson{'}s correlation was computed with Bonferroni correction, and a mixed effects linear regression analysis is done on the significant correlated results. MMSE and the SB-C are found to be correlated with no significant differences across languages. Three features were found to be significantly correlated with the SB-C scores. Among these, two features of lexical richness show consistent patterns across languages, while determiner rate showed language-specific patterns.", }
Speech analysis is gaining significance for monitoring neurodegenerative disorders, but with a view to application in clinical practice, solid evidence of the association of language features with cognitive scores is still needed. A cross-linguistic investigation was pursued to examine whether language features show significant correlations with two cognitive scores, i.e., the Mini-Mental State Examination and ki:e SB-C scores, in Alzheimer{'}s Disease patients. We explore 23 language features, representative of syntactic complexity and semantic richness, extracted from a dataset of free speech recordings of 138 participants distributed across four languages (Spanish, Catalan, German, Dutch). Data were analyzed using the speech library SIGMA; Pearson{'}s correlation was computed with Bonferroni correction, and a mixed effects linear regression analysis was performed on the significantly correlated results. MMSE and the SB-C were found to be correlated, with no significant differences across languages. Three features were found to be significantly correlated with the SB-C scores. Among these, two features of lexical richness showed consistent patterns across languages, while determiner rate showed language-specific patterns.
[ "Lindsay, Hali", "Albertin, Giorgia", "Schwed, Louisa", "Linz, Nicklas", "Tr{\\\"o}ger, Johannes" ]
Cross-Lingual Examination of Language Features and Cognitive Scores From Free Speech
rapid-1.3
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
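A minimal sketch of the correlation step described in the abstract above: each language feature is correlated with a cognitive score using Pearson's r, and significance is judged against a Bonferroni-corrected threshold. The random placeholder data and the threshold form of the correction are assumptions for illustration; the mixed-effects regression step is not shown.

```python
# Sketch: Pearson correlation of each language feature with a cognitive score,
# with Bonferroni correction across the number of tested features.
# Feature values and scores are random placeholders, not study data.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
n_participants, n_features = 138, 23
features = rng.normal(size=(n_participants, n_features))   # e.g. lexical richness, clause depth, ...
cognitive_score = rng.normal(size=n_participants)          # e.g. SB-C or MMSE

alpha = 0.05
bonferroni_alpha = alpha / n_features  # correct for 23 simultaneous tests

for i in range(n_features):
    r, p = pearsonr(features[:, i], cognitive_score)
    if p < bonferroni_alpha:
        print(f"feature {i}: r={r:.2f}, p={p:.4g} (significant after correction)")
```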
https://aclanthology.org/2024.rapid-1.4.bib
https://aclanthology.org/2024.rapid-1.4/
@inproceedings{nowenstein-etal-2024-speech, title = "Speech and Language Biomarkers of Neurodegenerative Conditions: Developing Cross-Linguistically Valid Tools for Automatic Analysis", author = {Nowenstein, Iris E. and Stanojevic, Marija and {\"O}rn{\'o}lfsson, Gunnar and J{\'o}nsd{\'o}ttir, Mar{\'\i}a Krist{\'\i}n and Simpson, Bill and Sorinas Nerin, Jennifer and Berg{\th}{\'o}rsd{\'o}ttir, Brynd{\'\i}s and Hannesd{\'o}ttir, Krist{\'\i}n and Novikova, Jekaterina and Curcic, Jelena}, editor = "Kokkinakis, Dimitrios and Fraser, Kathleen C. and Themistocleous, Charalambos K. and Fors, Kristina Lundholm and Tsanas, Athanasios and Ohman, Fredrik", booktitle = "Proceedings of the Fifth Workshop on Resources and ProcessIng of linguistic, para-linguistic and extra-linguistic Data from people with various forms of cognitive/psychiatric/developmental impairments @LREC-COLING 2024", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.rapid-1.4", pages = "26--33", abstract = "In the last decade, a rapidly growing body of studies has shown promising results for the automatic detection and extraction of speech and language features as biomarkers of neurodegenerative conditions such as Alzheimer{'}s disease. This has sparked great optimism and the development of various digital health tools, but also warnings regarding the predominance of English in the field and calls for linguistically diverse research as well as global, equitable access to novel clinical instruments. To automatically extract clinically relevant features from transcripts in low-resource languages, two approaches are possible: 1) utilizing a limited range of language-specific tools or 2) translating text to English and then extracting the features. We evaluate these approaches for part-of-speech (POS) rates in transcripts of recorded picture descriptions from a cross-sectional study of Icelandic speakers at different stages of Alzheimer{'}s disease and healthy controls. While the translation method merits further exploration, only a subset of the POS categories show a promising correspondence to the direct extraction from the Icelandic transcripts in our results, indicating that the translation method has to be linguistically validated at the individual POS category level.", }
In the last decade, a rapidly growing body of studies has shown promising results for the automatic detection and extraction of speech and language features as biomarkers of neurodegenerative conditions such as Alzheimer{'}s disease. This has sparked great optimism and the development of various digital health tools, but also warnings regarding the predominance of English in the field and calls for linguistically diverse research as well as global, equitable access to novel clinical instruments. To automatically extract clinically relevant features from transcripts in low-resource languages, two approaches are possible: 1) utilizing a limited range of language-specific tools or 2) translating the text to English and then extracting the features. We evaluate these approaches for part-of-speech (POS) rates in transcripts of recorded picture descriptions from a cross-sectional study of Icelandic speakers at different stages of Alzheimer{'}s disease and healthy controls. While the translation method merits further exploration, only a subset of the POS categories shows a promising correspondence to the direct extraction from the Icelandic transcripts in our results, indicating that the translation method has to be linguistically validated at the individual POS category level.
[ "Nowenstein, Iris E.", "Stanojevic, Marija", "{\\\"O}rn{\\'o}lfsson, Gunnar", "J{\\'o}nsd{\\'o}ttir, Mar{\\'\\i}a Krist{\\'\\i}n", "Simpson, Bill", "Sorinas Nerin, Jennifer", "Berg{\\th}{\\'o}rsd{\\'o}ttir, Brynd{\\'\\i}s", "Hannesd{\\'o}ttir, Krist{\\'\\i}n", "Novikova, Jekaterina", "Curcic, Jelena" ]
Speech and Language Biomarkers of Neurodegenerative Conditions: Developing Cross-Linguistically Valid Tools for Automatic Analysis
rapid-1.4
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
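The POS-rate measure discussed in the abstract above amounts to counting tag frequencies and normalising by token count; the sketch below shows that computation on a hand-tagged example. Whether the (token, tag) pairs come from a language-specific tagger or from tagging an English machine translation is precisely the methodological choice the paper evaluates; the tag set and sentence here are illustrative assumptions, not the study's data.

```python
# Sketch: computing part-of-speech (POS) rates from a tagged transcript.
# The (token, tag) pairs would come from either a language-specific tagger
# (direct approach) or an English tagger run on a machine translation
# (translate-then-extract approach); here they are written out by hand.
from collections import Counter

tagged_transcript = [
    ("the", "DET"), ("boy", "NOUN"), ("reaches", "VERB"), ("for", "ADP"),
    ("the", "DET"), ("cookie", "NOUN"), ("jar", "NOUN"), ("while", "SCONJ"),
    ("the", "DET"), ("stool", "NOUN"), ("tips", "VERB"), ("over", "ADV"),
]

def pos_rates(tagged_tokens):
    """Return each POS category's share of all tokens."""
    counts = Counter(tag for _, tag in tagged_tokens)
    total = sum(counts.values())
    return {tag: round(n / total, 3) for tag, n in counts.items()}

print(pos_rates(tagged_transcript))
# Comparing these rates between the direct pipeline and the translation
# pipeline is the validation step the abstract describes.
```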
https://aclanthology.org/2024.rapid-1.5.bib
https://aclanthology.org/2024.rapid-1.5/
@inproceedings{belmonte-etal-2024-automatic, title = "Automatic Detection of Rhythmic Features in Pathological Speech of {MCI} and Dementia Patients", author = "Belmonte, Marica and Gagliardi, Gloria and Kokkinakis, Dimitrios and Tamburini, Fabio", editor = "Kokkinakis, Dimitrios and Fraser, Kathleen C. and Themistocleous, Charalambos K. and Fors, Kristina Lundholm and Tsanas, Athanasios and Ohman, Fredrik", booktitle = "Proceedings of the Fifth Workshop on Resources and ProcessIng of linguistic, para-linguistic and extra-linguistic Data from people with various forms of cognitive/psychiatric/developmental impairments @LREC-COLING 2024", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.rapid-1.5", pages = "34--44", abstract = "Linguistic alterations represent one of the prodromal signs of cognitive decline associated with Dementia. In recent years, a growing body of work has been devoted to the development of algorithms for the automatic linguistic analysis of both oral and written texts, for diagnostic purposes. The extraction of Digital Linguistic Biomarkers from patients{'} verbal productions can indeed provide a rapid, ecological, and cost-effective system for large-scale screening of the pathology. This article contributes to the ongoing research in the field by exploring a traditionally less studied aspect of language in Dementia, namely the rhythmic characteristics of speech. In particular, the paper focuses on the automatic detection of rhythmic features in Italian-connected speech. A landmark-based system was developed and evaluated to segment the speech flow into vocalic and consonantal intervals and to calculate several rhythmic metrics. Additionally, the reliability of these metrics in identifying Mild Cognitive Impairment and Dementia patients was tested.", }
Linguistic alterations represent one of the prodromal signs of cognitive decline associated with Dementia. In recent years, a growing body of work has been devoted to the development of algorithms for the automatic linguistic analysis of both oral and written texts for diagnostic purposes. The extraction of Digital Linguistic Biomarkers from patients{'} verbal productions can indeed provide a rapid, ecological, and cost-effective system for large-scale screening of the pathology. This article contributes to the ongoing research in the field by exploring a traditionally less studied aspect of language in Dementia, namely the rhythmic characteristics of speech. In particular, the paper focuses on the automatic detection of rhythmic features in Italian connected speech. A landmark-based system was developed and evaluated to segment the speech flow into vocalic and consonantal intervals and to calculate several rhythmic metrics. Additionally, the reliability of these metrics in identifying Mild Cognitive Impairment and Dementia patients was tested.
[ "Belmonte, Marica", "Gagliardi, Gloria", "Kokkinakis, Dimitrios", "Tamburini, Fabio" ]
Automatic Detection of Rhythmic Features in Pathological Speech of MCI and Dementia Patients
rapid-1.5
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
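Once speech is segmented into vocalic and consonantal intervals, classic rhythm metrics can be derived from the interval durations; the sketch below computes %V, ΔV, ΔC, and the nPVI on invented durations. The abstract only says "several rhythmic metrics", so this particular metric set is an assumption, not a description of the study's exact feature list.

```python
# Sketch: classic rhythm metrics computed from vocalic (V) and consonantal (C)
# interval durations, the kind of quantities a landmark-based segmentation yields.
# Durations (in seconds) are invented; the study's exact metric set may differ.
import statistics

vocalic = [0.12, 0.09, 0.15, 0.11, 0.18]       # vocalic interval durations
consonantal = [0.07, 0.10, 0.06, 0.09, 0.08]   # consonantal interval durations

def percent_v(v, c):
    """Proportion of total speech time made up of vocalic intervals."""
    return 100 * sum(v) / (sum(v) + sum(c))

def npvi(durations):
    """Normalised Pairwise Variability Index over successive intervals."""
    pairs = zip(durations, durations[1:])
    return 100 * statistics.mean(abs(a - b) / ((a + b) / 2) for a, b in pairs)

print(f"%V   = {percent_v(vocalic, consonantal):.1f}")
print(f"dV   = {statistics.stdev(vocalic) * 1000:.1f} ms")     # variability of vocalic intervals
print(f"dC   = {statistics.stdev(consonantal) * 1000:.1f} ms") # variability of consonantal intervals
print(f"nPVI = {npvi(vocalic):.1f}")
```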