Datasets:

| Column | Dtype | Min | Max |
|---|---|---|---|
| bibtex_url | stringlengths | 41 | 53 |
| proceedings | stringlengths | 38 | 50 |
| bibtext | stringlengths | 535 | 2.8k |
| abstract | stringlengths | 0 | 2.04k |
| authors | sequencelengths | 1 | 31 |
| title | stringlengths | 19 | 178 |
| id | stringlengths | 7 | 19 |
| type | stringclasses | 1 value | — |
| arxiv_id | stringlengths | 0 | 10 |
| GitHub | sequencelengths | 1 | 1 |
| paper_page | stringclasses | 124 values | — |
| n_linked_authors | int64 | -1 | 7 |
| upvotes | int64 | -1 | 79 |
| num_comments | int64 | -1 | 4 |
| n_authors | int64 | -1 | 22 |
| paper_page_exists_pre_conf | int64 | 0 | 1 |
| Models | sequencelengths | 0 | 55 |
| Datasets | sequencelengths | 0 | 46 |
| Spaces | sequencelengths | 0 | 82 |
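The schema above follows the Hugging Face dataset-viewer conventions: length ranges for string and sequence columns, distinct-value counts for low-cardinality columns. A minimal loading sketch with the `datasets` library, assuming the records are hosted on the Hub; the repository id below is a placeholder, not the actual location of this dataset:

```python
# Minimal sketch of loading and inspecting the records.
# "org/acl-2024-papers" is a hypothetical repo id -- substitute the real one.
from datasets import load_dataset

ds = load_dataset("org/acl-2024-papers", split="train")

row = ds[0]
print(row["title"])        # e.g. "A New Annotation Scheme for the Semantics of Taste"
print(row["proceedings"])  # ACL Anthology landing page
print(row["authors"])      # list of "Surname, Given name" strings
```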
https://aclanthology.org/2024.isa-1.5.bib
https://aclanthology.org/2024.isa-1.5/
@inproceedings{paccosi-tonelli-2024-new, title = "A New Annotation Scheme for the Semantics of Taste", author = "Paccosi, Teresa and Tonelli, Sara", editor = "Bunt, Harry and Ide, Nancy and Lee, Kiyong and Petukhova, Volha and Pustejovsky, James and Romary, Laurent", booktitle = "Proceedings of the 20th Joint ACL - ISO Workshop on Interoperable Semantic Annotation @ LREC-COLING 2024", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.isa-1.5", pages = "39--46", abstract = "This paper introduces a new annotation scheme for the semantics of gustatory language in English, which builds upon a previous framework for olfactory language based on frame semantics. The purpose of this annotation framework is to be used for annotating comparable resources for the study of sensory language and to create training datasets for supervised systems aimed at extracting sensory information. Furthermore, our approach incorporates words from specific historical periods, thereby enhancing the framework{'}s utility for studying language from a diachronic perspective.", }
This paper introduces a new annotation scheme for the semantics of gustatory language in English, which builds upon a previous framework for olfactory language based on frame semantics. The purpose of this annotation framework is to be used for annotating comparable resources for the study of sensory language and to create training datasets for supervised systems aimed at extracting sensory information. Furthermore, our approach incorporates words from specific historical periods, thereby enhancing the framework's utility for studying language from a diachronic perspective.
[ "Paccosi, Teresa", "Tonelli, Sara" ]
A New Annotation Scheme for the Semantics of Taste
isa-1.5
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
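Note the sentinel values in the record above: the engagement counters (`n_linked_authors`, `upvotes`, `num_comments`, `n_authors`) are -1 when a paper has no Hugging Face paper page, `paper_page_exists_pre_conf` is a 0/1 flag, `arxiv_id` is an empty string when absent, and `GitHub` holds `[ "" ]` when no repository is linked. A small filtering sketch under those assumptions, reusing `ds` from the loading example:

```python
# Split out records with real paper-page statistics (sentinel -1 means
# "no paper page") and records that carry an arXiv id.
with_stats = ds.filter(lambda r: r["upvotes"] != -1)
with_arxiv = ds.filter(lambda r: r["arxiv_id"] != "")
print(len(with_stats), len(with_arxiv))
```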
https://aclanthology.org/2024.isa-1.6.bib
https://aclanthology.org/2024.isa-1.6/
@inproceedings{marini-jezek-2024-annotate, title = "What to Annotate: Retrieving Lexical Markers of Conspiracy Discourse from an {I}talian-{E}nglish Corpus of Telegram Data", author = "Marini, Costanza and Jezek, Elisabetta", editor = "Bunt, Harry and Ide, Nancy and Lee, Kiyong and Petukhova, Volha and Pustejovsky, James and Romary, Laurent", booktitle = "Proceedings of the 20th Joint ACL - ISO Workshop on Interoperable Semantic Annotation @ LREC-COLING 2024", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.isa-1.6", pages = "47--52", abstract = "In this age of social media, Conspiracy Theories (CTs) have become an issue that can no longer be ignored. After providing an overview of CT literature and corpus studies, we describe the creation of a 40,000-token English-Italian bilingual corpus of conspiracy-oriented Telegram comments {--} the Complotto corpus {--} and the linguistic analysis we performed using the Sketch Engine online platform (Kilgarriff et al., 2010) on our annotated data to identify statistically relevant linguistic markers of CT discourse. Thanks to the platform{'}s keywords and key terms extraction functions, we were able to assess the statistical significance of the following lexical and semantic phenomena, both cross-linguistically and cross-CT, namely: (1) evidentiality and epistemic modality markers; (2) debunking vocabulary referring to another version of the truth lying behind the official one; (3) the conceptual metaphor INSTITUTIONS ARE ABUSERS. All these features qualify as markers of CT discourse and have the potential to be effectively used for future semantic annotation tasks to develop automatic systems for CT identification.", }
In this age of social media, Conspiracy Theories (CTs) have become an issue that can no longer be ignored. After providing an overview of CT literature and corpus studies, we describe the creation of a 40,000-token English-Italian bilingual corpus of conspiracy-oriented Telegram comments – the Complotto corpus – and the linguistic analysis we performed using the Sketch Engine online platform (Kilgarriff et al., 2010) on our annotated data to identify statistically relevant linguistic markers of CT discourse. Thanks to the platform's keywords and key terms extraction functions, we were able to assess the statistical significance of the following lexical and semantic phenomena, both cross-linguistically and cross-CT, namely: (1) evidentiality and epistemic modality markers; (2) debunking vocabulary referring to another version of the truth lying behind the official one; (3) the conceptual metaphor INSTITUTIONS ARE ABUSERS. All these features qualify as markers of CT discourse and have the potential to be effectively used for future semantic annotation tasks to develop automatic systems for CT identification.
[ "Marini, Costanza", "Jezek, Elisabetta" ]
What to Annotate: Retrieving Lexical Markers of Conspiracy Discourse from an Italian-English Corpus of Telegram Data
isa-1.6
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.isa-1.7.bib
https://aclanthology.org/2024.isa-1.7/
@inproceedings{er-etal-2024-lightweight, title = "Lightweight Connective Detection Using Gradient Boosting", author = "Er, Mustafa Erolcan and Kurfal{\i}, Murathan and Zeyrek, Deniz", editor = "Bunt, Harry and Ide, Nancy and Lee, Kiyong and Petukhova, Volha and Pustejovsky, James and Romary, Laurent", booktitle = "Proceedings of the 20th Joint ACL - ISO Workshop on Interoperable Semantic Annotation @ LREC-COLING 2024", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.isa-1.7", pages = "53--59", abstract = "In this work, we introduce a lightweight discourse connective detection system. Employing gradient boosting trained on straightforward, low-complexity features, this proposed approach sidesteps the computational demands of the current approaches that rely on deep neural networks. Considering its simplicity, our approach achieves competitive results while offering significant gains in terms of time even on CPU. Furthermore, the stable performance across two unrelated languages suggests the robustness of our system in the multilingual scenario. The model is designed to support the annotation of discourse relations, particularly in scenarios with limited resources, while minimizing performance loss.", }
In this work, we introduce a lightweight discourse connective detection system. Employing gradient boosting trained on straightforward, low-complexity features, this proposed approach sidesteps the computational demands of the current approaches that rely on deep neural networks. Considering its simplicity, our approach achieves competitive results while offering significant gains in terms of time even on CPU. Furthermore, the stable performance across two unrelated languages suggests the robustness of our system in the multilingual scenario. The model is designed to support the annotation of discourse relations, particularly in scenarios with limited resources, while minimizing performance loss.
[ "Er, Mustafa Erolcan", "Kurfal{\\i}, Murathan", "Zeyrek, Deniz" ]
Lightweight Connective Detection Using Gradient Boosting
isa-1.7
Poster
2404.13793
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.isa-1.8.bib
https://aclanthology.org/2024.isa-1.8/
@inproceedings{aktas-ozmen-2024-shallow, title = "Shallow Discourse Parsing on {T}witter Conversations", author = {Aktas, Berfin and {\"O}zmen, Burak}, editor = "Bunt, Harry and Ide, Nancy and Lee, Kiyong and Petukhova, Volha and Pustejovsky, James and Romary, Laurent", booktitle = "Proceedings of the 20th Joint ACL - ISO Workshop on Interoperable Semantic Annotation @ LREC-COLING 2024", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.isa-1.8", pages = "60--65", abstract = "We present our PDTB-style annotations on conversational Twitter data, which was initially annotated by Scheffler et al. (2019). We introduced 1,043 new annotations to the dataset, nearly doubling the number of previously annotated discourse relations. Subsequently, we applied a neural Shallow Discourse Parsing (SDP) model to the resulting corpus, improving its performance through retraining with in-domain data. The most substantial improvement was observed in the sense identification task (+19{\%}). Our experiments with diverse training data combinations underline the potential benefits of exploring various data combinations in domain adaptation efforts for SDP. To the best of our knowledge, this is the first application of Shallow Discourse Parsing on Twitter data", }
We present our PDTB-style annotations on conversational Twitter data, which was initially annotated by Scheffler et al. (2019). We introduced 1,043 new annotations to the dataset, nearly doubling the number of previously annotated discourse relations. Subsequently, we applied a neural Shallow Discourse Parsing (SDP) model to the resulting corpus, improving its performance through retraining with in-domain data. The most substantial improvement was observed in the sense identification task (+19%). Our experiments with diverse training data combinations underline the potential benefits of exploring various data combinations in domain adaptation efforts for SDP. To the best of our knowledge, this is the first application of Shallow Discourse Parsing on Twitter data.
[ "Aktas, Berfin", "{\\\"O}zmen, Burak" ]
Shallow Discourse Parsing on Twitter Conversations
isa-1.8
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.isa-1.9.bib
https://aclanthology.org/2024.isa-1.9/
@inproceedings{petliak-etal-2024-search, title = "Search tool for An Event-Type Ontology", author = "Petliak, Nataliia and Alcaina, Cristina Fernand{\'e}z and Fu{\v{c}}{\'\i}kov{\'a}, Eva and Haji{\v{c}}, Jan and Ure{\v{s}}ov{\'a}, Zde{\v{n}}ka", editor = "Bunt, Harry and Ide, Nancy and Lee, Kiyong and Petukhova, Volha and Pustejovsky, James and Romary, Laurent", booktitle = "Proceedings of the 20th Joint ACL - ISO Workshop on Interoperable Semantic Annotation @ LREC-COLING 2024", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.isa-1.9", pages = "66--70", abstract = "This short demo description paper presents a new tool designed for searching an event-type ontology with rich information, demonstrated on the SynSemClass ontology resource. The tool complements a web browser, created by the authors of the SynSemClass ontology previously. Due to the complexity of the resource, the search tool offers possibilities both for a linguistically-oriented researcher as well as for teams working with the resource from a technical point of view, such as building role labeling tools, automatic annotation tools, etc.", }
This short demo description paper presents a new tool designed for searching an event-type ontology with rich information, demonstrated on the SynSemClass ontology resource. The tool complements a web browser, created by the authors of the SynSemClass ontology previously. Due to the complexity of the resource, the search tool offers possibilities both for a linguistically-oriented researcher as well as for teams working with the resource from a technical point of view, such as building role labeling tools, automatic annotation tools, etc.
[ "Petliak, Nataliia", "Alcaina, Cristina Fern", "{\\'e}z", "Fu{\\v{c}}{\\'\\i}kov{\\'a}, Eva", "Haji{\\v{c}}, Jan", "Ure{\\v{s}}ov{\\'a}, Zde{\\v{n}}ka" ]
Search tool for An Event-Type Ontology
isa-1.9
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.isa-1.10.bib
https://aclanthology.org/2024.isa-1.10/
@inproceedings{salman-etal-2024-tiny, title = "Tiny But Mighty: A Crowdsourced Benchmark Dataset for Triple Extraction from Unstructured Text", author = "Salman, Muhammad and Haller, Armin and Rodriguez Mendez, Sergio J. and Naseem, Usman", editor = "Bunt, Harry and Ide, Nancy and Lee, Kiyong and Petukhova, Volha and Pustejovsky, James and Romary, Laurent", booktitle = "Proceedings of the 20th Joint ACL - ISO Workshop on Interoperable Semantic Annotation @ LREC-COLING 2024", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.isa-1.10", pages = "71--81", abstract = "In the context of Natural Language Processing (NLP) and Semantic Web applications, constructing Knowledge Graphs (KGs) from unstructured text plays a vital role. Several techniques have been developed for KG construction from text, but the lack of standardized datasets hinders the evaluation of triple extraction methods. The evaluation of existing KG construction approaches is based on structured data or manual investigations. To overcome this limitation, this work introduces a novel dataset specifically designed to evaluate KG construction techniques from unstructured text. Our dataset consists of a diverse collection of compound and complex sentences meticulously annotated by human annotators with potential triples (subject, verb, object). The annotations underwent further scrutiny by expert ontologists to ensure accuracy and consistency. For evaluation purposes, the proposed F-measure criterion offers a robust approach to quantify the relatedness and assess the alignment between extracted triples and the ground-truth triples, providing a valuable tool for evaluating the performance of triple extraction systems. By providing a diverse collection of high-quality triples, our proposed benchmark dataset offers a comprehensive training and evaluation set for refining the performance of state-of-the-art language models on a triple extraction task. Furthermore, this dataset encompasses various KG-related tasks, such as named entity recognition, relation extraction, and entity linking.", }
In the context of Natural Language Processing (NLP) and Semantic Web applications, constructing Knowledge Graphs (KGs) from unstructured text plays a vital role. Several techniques have been developed for KG construction from text, but the lack of standardized datasets hinders the evaluation of triple extraction methods. The evaluation of existing KG construction approaches is based on structured data or manual investigations. To overcome this limitation, this work introduces a novel dataset specifically designed to evaluate KG construction techniques from unstructured text. Our dataset consists of a diverse collection of compound and complex sentences meticulously annotated by human annotators with potential triples (subject, verb, object). The annotations underwent further scrutiny by expert ontologists to ensure accuracy and consistency. For evaluation purposes, the proposed F-measure criterion offers a robust approach to quantify the relatedness and assess the alignment between extracted triples and the ground-truth triples, providing a valuable tool for evaluating the performance of triple extraction systems. By providing a diverse collection of high-quality triples, our proposed benchmark dataset offers a comprehensive training and evaluation set for refining the performance of state-of-the-art language models on a triple extraction task. Furthermore, this dataset encompasses various KG-related tasks, such as named entity recognition, relation extraction, and entity linking.
[ "Salman, Muhammad", "Haller, Armin", "Rodriguez Mendez, Sergio J.", "Naseem, Usman" ]
Tiny But Mighty: A Crowdsourced Benchmark Dataset for Triple Extraction from Unstructured Text
isa-1.10
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.isa-1.11.bib
https://aclanthology.org/2024.isa-1.11/
@inproceedings{vanroy-van-de-cruys-2024-less, title = "Less is Enough: Less-Resourced Multilingual {AMR} Parsing", author = "Vanroy, Bram and Van de Cruys, Tim", editor = "Bunt, Harry and Ide, Nancy and Lee, Kiyong and Petukhova, Volha and Pustejovsky, James and Romary, Laurent", booktitle = "Proceedings of the 20th Joint ACL - ISO Workshop on Interoperable Semantic Annotation @ LREC-COLING 2024", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.isa-1.11", pages = "82--92", abstract = "This paper investigates the efficacy of multilingual models for the task of text-to-AMR parsing, focusing on English, Spanish, and Dutch. We train and evaluate models under various configurations, including monolingual and multilingual settings, both in full and reduced data scenarios. Our empirical results reveal that while monolingual models exhibit superior performance, multilingual models are competitive across all languages, offering a more resource-efficient alternative for training and deployment. Crucially, our findings demonstrate that AMR parsing benefits from transfer learning across languages even when having access to significantly smaller datasets. As a tangible contribution, we provide text-to-AMR parsing models for the aforementioned languages as well as multilingual variants, and make available the large corpora of translated data for Dutch, Spanish (and Irish) that we used for training them in order to foster AMR research in non-English languages. Additionally, we open-source the training code and offer an interactive interface for parsing AMR graphs from text.", }
This paper investigates the efficacy of multilingual models for the task of text-to-AMR parsing, focusing on English, Spanish, and Dutch. We train and evaluate models under various configurations, including monolingual and multilingual settings, both in full and reduced data scenarios. Our empirical results reveal that while monolingual models exhibit superior performance, multilingual models are competitive across all languages, offering a more resource-efficient alternative for training and deployment. Crucially, our findings demonstrate that AMR parsing benefits from transfer learning across languages even when having access to significantly smaller datasets. As a tangible contribution, we provide text-to-AMR parsing models for the aforementioned languages as well as multilingual variants, and make available the large corpora of translated data for Dutch, Spanish (and Irish) that we used for training them in order to foster AMR research in non-English languages. Additionally, we open-source the training code and offer an interactive interface for parsing AMR graphs from text.
[ "Vanroy, Bram", "Van de Cruys, Tim" ]
Less is Enough: Less-Resourced Multilingual AMR Parsing
isa-1.11
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.isa-1.12.bib
https://aclanthology.org/2024.isa-1.12/
@inproceedings{lorenzi-etal-2024-mocca, title = "{M}o{CCA}: A Model of Comparative Concepts for Aligning Constructicons", author = {Lorenzi, Arthur and Ljungl{\"o}f, Peter and Lyngfelt, Ben and Timponi Torrent, Tiago and Croft, William and Ziem, Alexander and B{\"o}bel, Nina and B{\"a}ckstr{\"o}m, Linn{\'e}a and Uhrig, Peter and Matos, Ely E.}, editor = "Bunt, Harry and Ide, Nancy and Lee, Kiyong and Petukhova, Volha and Pustejovsky, James and Romary, Laurent", booktitle = "Proceedings of the 20th Joint ACL - ISO Workshop on Interoperable Semantic Annotation @ LREC-COLING 2024", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.isa-1.12", pages = "93--98", abstract = "This paper presents MoCCA, a Model of Comparative Concepts for Aligning Constructicons under development by a consortium of research groups building Constructicons of different languages including Brazilian Portuguese, English, German and Swedish. The Constructicons will be aligned by using comparative concepts (CCs) providing language-neutral definitions of linguistic properties. The CCs are drawn from typological research on grammatical categories and constructions, and from FrameNet frames, organized in a conceptual network. Language-specific constructions are linked to the CCs in accordance with general principles. MoCCA is organized into files of two types: a largely static CC Database file and multiple Linking files containing relations between constructions in a Constructicon and the CCs. Tools are planned to facilitate visualization of the CC network and linking of constructions to the CCs. All files and guidelines will be versioned, and a mechanism is set up to report cases where a language-specific construction cannot be easily linked to existing CCs.", }
This paper presents MoCCA, a Model of Comparative Concepts for Aligning Constructicons under development by a consortium of research groups building Constructicons of different languages including Brazilian Portuguese, English, German and Swedish. The Constructicons will be aligned by using comparative concepts (CCs) providing language-neutral definitions of linguistic properties. The CCs are drawn from typological research on grammatical categories and constructions, and from FrameNet frames, organized in a conceptual network. Language-specific constructions are linked to the CCs in accordance with general principles. MoCCA is organized into files of two types: a largely static CC Database file and multiple Linking files containing relations between constructions in a Constructicon and the CCs. Tools are planned to facilitate visualization of the CC network and linking of constructions to the CCs. All files and guidelines will be versioned, and a mechanism is set up to report cases where a language-specific construction cannot be easily linked to existing CCs.
[ "Lorenzi, Arthur", "Ljungl{\\\"o}f, Peter", "Lyngfelt, Ben", "Timponi Torrent, Tiago", "Croft, William", "Ziem, Alex", "er", "B{\\\"o}bel, Nina", "B{\\\"a}ckstr{\\\"o}m, Linn{\\'e}a", "Uhrig, Peter", "Matos, Ely E." ]
MoCCA: A Model of Comparative Concepts for Aligning Constructicons
isa-1.12
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.isa-1.13.bib
https://aclanthology.org/2024.isa-1.13/
@inproceedings{tomaszewska-etal-2024-iso, title = "{ISO} 24617-8 Applied: Insights from Multilingual Discourse Relations Annotation in {E}nglish, {P}olish, and {P}ortuguese", author = "Tomaszewska, Aleksandra and Silvano, Purifica{\c{c}}{\~a}o and Leal, Ant{\'o}nio and Amorim, Evelin", editor = "Bunt, Harry and Ide, Nancy and Lee, Kiyong and Petukhova, Volha and Pustejovsky, James and Romary, Laurent", booktitle = "Proceedings of the 20th Joint ACL - ISO Workshop on Interoperable Semantic Annotation @ LREC-COLING 2024", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.isa-1.13", pages = "99--110", abstract = "The main objective of this study is to contribute to multilingual discourse research by employing ISO-24617 Part 8 (Semantic Relations in Discourse, Core Annotation Schema {--} DR-core) for annotating discourse relations. Centering around a parallel discourse relations corpus that includes English, Polish, and European Portuguese, we initiate one of the few ISO-based comparative analyses through a multilingual corpus that aligns discourse relations across these languages. In this paper, we discuss the project{'}s contributions, including the annotated corpus, research findings, and statistics related to the use of discourse relations. The paper further discusses the challenges encountered in complying with the ISO standard, such as defining the scope of arguments and annotating specific relation types like Expansion. Our findings highlight the necessity for clearer definitions of certain discourse relations and more precise guidelines for argument spans, especially concerning the inclusion of connectives. Additionally, the study underscores the importance of ongoing collaborative efforts to broaden the inclusion of languages and more comprehensive datasets, with the objective of widening the reach of ISO-guided multilingual discourse research.", }
The main objective of this study is to contribute to multilingual discourse research by employing ISO-24617 Part 8 (Semantic Relations in Discourse, Core Annotation Schema – DR-core) for annotating discourse relations. Centering around a parallel discourse relations corpus that includes English, Polish, and European Portuguese, we initiate one of the few ISO-based comparative analyses through a multilingual corpus that aligns discourse relations across these languages. In this paper, we discuss the project's contributions, including the annotated corpus, research findings, and statistics related to the use of discourse relations. The paper further discusses the challenges encountered in complying with the ISO standard, such as defining the scope of arguments and annotating specific relation types like Expansion. Our findings highlight the necessity for clearer definitions of certain discourse relations and more precise guidelines for argument spans, especially concerning the inclusion of connectives. Additionally, the study underscores the importance of ongoing collaborative efforts to broaden the inclusion of languages and more comprehensive datasets, with the objective of widening the reach of ISO-guided multilingual discourse research.
[ "Tomaszewska, Aleks", "ra", "Silvano, Purifica{\\c{c}}{\\~a}o", "Leal, Ant{\\'o}nio", "Amorim, Evelin" ]
ISO 24617-8 Applied: Insights from Multilingual Discourse Relations Annotation in English, Polish, and Portuguese
isa-1.13
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.isa-1.14.bib
https://aclanthology.org/2024.isa-1.14/
@inproceedings{bunt-2024-combining, title = "Combining semantic annotation schemes through interlinking", author = "Bunt, Harry", editor = "Bunt, Harry and Ide, Nancy and Lee, Kiyong and Petukhova, Volha and Pustejovsky, James and Romary, Laurent", booktitle = "Proceedings of the 20th Joint ACL - ISO Workshop on Interoperable Semantic Annotation @ LREC-COLING 2024", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.isa-1.14", pages = "111--121", abstract = "This paper explores the possibilities of using combinations of different semantic annotation schemes. This is particularly interesting for annotation schemes developed under the umbrella of the ISO Semantic Annotation Framework (ISO 24617), since these schemes were intended to be complementary, providing ways of indicating different semantic information about the same entities. However, there are certain overlaps between the schemes of SemAF parts, due to overlaps of their semantic domains, which are a potential source of inconsistencies. The paper shows how issues relating to inconsistencies can be addressed at the levels of concrete representation, abstract syntax, and semantic interpretation.", }
This paper explores the possibilities of using combinations of different semantic annotation schemes. This is particularly interesting for annotation schemes developed under the umbrella of the ISO Semantic Annotation Framework (ISO 24617), since these schemes were intended to be complementary, providing ways of indicating different semantic information about the same entities. However, there are certain overlaps between the schemes of SemAF parts, due to overlaps of their semantic domains, which are a potential source of inconsistencies. The paper shows how issues relating to inconsistencies can be addressed at the levels of concrete representation, abstract syntax, and semantic interpretation.
[ "Bunt, Harry" ]
Combining semantic annotation schemes through interlinking
isa-1.14
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.isa-1.15.bib
https://aclanthology.org/2024.isa-1.15/
@inproceedings{malchanau-etal-2024-fusing, title = "Fusing {ISO} 24617-2 Dialogue Acts and Application-Specific Semantic Content Annotations", author = "Malchanau, Andrei and Petukhova, Volha and Bunt, Harry", editor = "Bunt, Harry and Ide, Nancy and Lee, Kiyong and Petukhova, Volha and Pustejovsky, James and Romary, Laurent", booktitle = "Proceedings of the 20th Joint ACL - ISO Workshop on Interoperable Semantic Annotation @ LREC-COLING 2024", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.isa-1.15", pages = "122--132", abstract = "Accurately annotated data determines whether a modern high-performing AI/ML model will present a suitable solution to a complex task/application challenge, or time and resources are wasted. The more adequate the structure of the incoming data is specified, the more efficient the data is translated to be used by the application. This paper presents an approach to an application-specific dialogue semantics design which integrates the dialogue act annotation standard ISO 24617-2 and various domain-specific semantic annotations. The proposed multi-scheme design offers a plausible and a rather powerful strategy to integrate, validate, extend and reuse existing annotations, and automatically generate code for dialogue system modules. Advantages and possible trade-offs are discussed.", }
Accurately annotated data determines whether a modern high-performing AI/ML model will present a suitable solution to a complex task/application challenge, or time and resources are wasted. The more adequately the structure of the incoming data is specified, the more efficiently the data can be translated for use by the application. This paper presents an approach to an application-specific dialogue semantics design which integrates the dialogue act annotation standard ISO 24617-2 and various domain-specific semantic annotations. The proposed multi-scheme design offers a plausible and a rather powerful strategy to integrate, validate, extend and reuse existing annotations, and automatically generate code for dialogue system modules. Advantages and possible trade-offs are discussed.
[ "Malchanau, Andrei", "Petukhova, Volha", "Bunt, Harry" ]
Fusing ISO 24617-2 Dialogue Acts and Application-Specific Semantic Content Annotations
isa-1.15
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.isa-1.16.bib
https://aclanthology.org/2024.isa-1.16/
@inproceedings{lee-2024-annotation, title = "Annotation-Based Semantics for Dialogues in the Vox World", author = "Lee, Kiyong", editor = "Bunt, Harry and Ide, Nancy and Lee, Kiyong and Petukhova, Volha and Pustejovsky, James and Romary, Laurent", booktitle = "Proceedings of the 20th Joint ACL - ISO Workshop on Interoperable Semantic Annotation @ LREC-COLING 2024", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.isa-1.16", pages = "133--143", abstract = "This paper aims at enriching Annotation-Based Semantics (ABS) with the notion of small visual worlds, called the \textit{Vox worlds}, to interpret dialogues in natural language. It attempts to implement classical set-theoretic models with these Vox worlds that serve as interpretation models. These worlds describe dialogue situations while providing background for the visualization of those situations in which these described dialogues take place interactively among dialogue participants, often triggering actions and emotions. The enriched ABS is based on VoxML, a modeling language for visual object conceptual structures (vocs or vox) that constitute the structural basis of visual worlds.", }
This paper aims at enriching Annotation-Based Semantics (ABS) with the notion of small visual worlds, called the *Vox worlds*, to interpret dialogues in natural language. It attempts to implement classical set-theoretic models with these Vox worlds that serve as interpretation models. These worlds describe dialogue situations while providing background for the visualization of those situations in which these described dialogues take place interactively among dialogue participants, often triggering actions and emotions. The enriched ABS is based on VoxML, a modeling language for visual object conceptual structures (vocs or vox) that constitute the structural basis of visual worlds.
[ "Lee, Kiyong" ]
Annotation-Based Semantics for Dialogues in the Vox World
isa-1.16
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.isa-1.17.bib
https://aclanthology.org/2024.isa-1.17/
@inproceedings{zeng-etal-2024-annotating, title = "Annotating Evaluative Language: Challenges and Solutions in Applying Appraisal Theory", author = "Zeng, Jiamei and Dong, Min and Fang, Alex Chengyu", editor = "Bunt, Harry and Ide, Nancy and Lee, Kiyong and Petukhova, Volha and Pustejovsky, James and Romary, Laurent", booktitle = "Proceedings of the 20th Joint ACL - ISO Workshop on Interoperable Semantic Annotation @ LREC-COLING 2024", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.isa-1.17", pages = "144--151", abstract = "This article describes a corpus-based experiment to identify the challenges and solutions in the annotation of evaluative language according to the scheme defined in Appraisal Theory (Martin and White, 2005). Originating from systemic functional linguistics, Appraisal Theory provides a robust framework for the analysis of linguistic expressions of evaluation, stance, and interpersonal relationships. Despite its theoretical richness, the practical application of Appraisal Theory in text annotation presents significant challenges, chiefly due to the intricacies of identifying and classifying evaluative expressions within its sub-system of Attitude, which comprises Affect, Judgement, and Appreciation. This study examines these challenges through the annotation of a corpus of editorials related to the Russian-Ukraine conflict and aims to offer practical solutions to enhance the transparency and consistency of the annotation. By refining the annotation process and addressing the subjective nature in the identification and classification of evaluative language, this work represents some timely effort in the annotation of pragmatic knowledge in language resources.", }
This article describes a corpus-based experiment to identify the challenges and solutions in the annotation of evaluative language according to the scheme defined in Appraisal Theory (Martin and White, 2005). Originating from systemic functional linguistics, Appraisal Theory provides a robust framework for the analysis of linguistic expressions of evaluation, stance, and interpersonal relationships. Despite its theoretical richness, the practical application of Appraisal Theory in text annotation presents significant challenges, chiefly due to the intricacies of identifying and classifying evaluative expressions within its sub-system of Attitude, which comprises Affect, Judgement, and Appreciation. This study examines these challenges through the annotation of a corpus of editorials related to the Russian-Ukraine conflict and aims to offer practical solutions to enhance the transparency and consistency of the annotation. By refining the annotation process and addressing the subjective nature in the identification and classification of evaluative language, this work represents some timely effort in the annotation of pragmatic knowledge in language resources.
[ "Zeng, Jiamei", "Dong, Min", "Fang, Alex Chengyu" ]
Annotating Evaluative Language: Challenges and Solutions in Applying Appraisal Theory
isa-1.17
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.isa-1.18.bib
https://aclanthology.org/2024.isa-1.18/
@inproceedings{van-der-sluis-kiewiet-de-jonge-2024-attractive, title = "Attractive Multimodal Instructions, Describing Easy and Engaging Recipe Blogs", author = "van der Sluis, Ielka and Kiewiet de Jonge, Jarred", editor = "Bunt, Harry and Ide, Nancy and Lee, Kiyong and Petukhova, Volha and Pustejovsky, James and Romary, Laurent", booktitle = "Proceedings of the 20th Joint ACL - ISO Workshop on Interoperable Semantic Annotation @ LREC-COLING 2024", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.isa-1.18", pages = "152--164", abstract = "This paper presents a corpus study that extends and generalises an existing annotation model which integrates functional content descriptions delivered via text, pictures and interactive components. The model is used to describe a new corpus with 20 online vegan recipe blogs in terms of their Attractiveness for at least two types of readers: vegan readers and readers interested in a vegan lifestyle. Arguably, these readers value a blog that shows that the target dish is Easy to Make which can be inferred from the number of ingredients, procedural steps and visualised actions, according to an Easy to Read cooking instruction that displays a coherent use of verbal and visual modalities presenting processes and results of the cooking actions involved. Moreover, added value may be attributed to invitations to Engage with the blog content and functionality through which information about the recipe, the author, diet and nutrition can be accessed. Thus, the corpus study merges generalisable annotations of verbal, visual and interaction phenomena to capture the Attractiveness of online vegan recipe blogs to inform reader and user studies and ultimately offer guidelines for authoring effective online multimodal instructions.", }
This paper presents a corpus study that extends and generalises an existing annotation model which integrates functional content descriptions delivered via text, pictures and interactive components. The model is used to describe a new corpus with 20 online vegan recipe blogs in terms of their Attractiveness for at least two types of readers: vegan readers and readers interested in a vegan lifestyle. Arguably, these readers value a blog that shows that the target dish is Easy to Make which can be inferred from the number of ingredients, procedural steps and visualised actions, according to an Easy to Read cooking instruction that displays a coherent use of verbal and visual modalities presenting processes and results of the cooking actions involved. Moreover, added value may be attributed to invitations to Engage with the blog content and functionality through which information about the recipe, the author, diet and nutrition can be accessed. Thus, the corpus study merges generalisable annotations of verbal, visual and interaction phenomena to capture the Attractiveness of online vegan recipe blogs to inform reader and user studies and ultimately offer guidelines for authoring effective online multimodal instructions.
[ "van der Sluis, Ielka", "Kiewiet de Jonge, Jarred" ]
Attractive Multimodal Instructions, Describing Easy and Engaging Recipe Blogs
isa-1.18
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.ldl-1.1.bib
https://aclanthology.org/2024.ldl-1.1/
@inproceedings{armaselu-etal-2024-llodia, title = "{LLODIA}: A Linguistic Linked Open Data Model for Diachronic Analysis", author = "Armaselu, Florentina and Liebeskind, Chaya and Marongiu, Paola and McGillivray, Barbara and Valunaite Oleskeviciene, Giedre and Apostol, Elena-Simona and Truica, Ciprian-Octavian and Gifu, Daniela", editor = "Chiarcos, Christian and Gkirtzou, Katerina and Ionov, Maxim and Khan, Fahad and McCrae, John P. and Ponsoda, Elena Montiel and Chozas, Patricia Mart{\'\i}n", booktitle = "Proceedings of the 9th Workshop on Linked Data in Linguistics @ LREC-COLING 2024", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.ldl-1.1", pages = "1--10", abstract = "This article proposes a linguistic linked open data model for diachronic analysis (LLODIA) that combines data derived from diachronic analysis of multilingual corpora with dictionary-based evidence. A humanities use case was devised as a proof of concept that includes examples in five languages (French, Hebrew, Latin, Lithuanian and Romanian) related to various meanings of the term {``}revolution{''} considered at different time intervals. The examples were compiled through diachronic word embedding and dictionary alignment.", }
This article proposes a linguistic linked open data model for diachronic analysis (LLODIA) that combines data derived from diachronic analysis of multilingual corpora with dictionary-based evidence. A humanities use case was devised as a proof of concept that includes examples in five languages (French, Hebrew, Latin, Lithuanian and Romanian) related to various meanings of the term "revolution" considered at different time intervals. The examples were compiled through diachronic word embedding and dictionary alignment.
[ "Armaselu, Florentina", "Liebeskind, Chaya", "Marongiu, Paola", "McGillivray, Barbara", "Valunaite Oleskeviciene, Giedre", "Apostol, Elena-Simona", "Truica, Ciprian-Octavian", "Gifu, Daniela" ]
LLODIA: A Linguistic Linked Open Data Model for Diachronic Analysis
ldl-1.1
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.ldl-1.2.bib
https://aclanthology.org/2024.ldl-1.2/
@inproceedings{banerjee-etal-2024-cross, title = "Cross-Lingual Ontology Matching using Structural and Semantic Similarity", author = "Banerjee, Shubhanker and Chakravarthi, Bharathi Raja and McCrae, John Philip", editor = "Chiarcos, Christian and Gkirtzou, Katerina and Ionov, Maxim and Khan, Fahad and McCrae, John P. and Ponsoda, Elena Montiel and Chozas, Patricia Mart{\'\i}n", booktitle = "Proceedings of the 9th Workshop on Linked Data in Linguistics @ LREC-COLING 2024", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.ldl-1.2", pages = "11--21", abstract = "The development of ontologies in various languages is attracting attention as the amount of multilingual data available on the web increases. Cross-lingual ontology matching facilitates interoperability amongst ontologies in different languages. Although supervised machine learning-based methods have shown good performance on ontology matching, their application to the cross-lingual setting is limited by the availability of training data. Current state-of-the-art unsupervised methods for cross-lingual ontology matching focus on lexical similarity between entities. These approaches follow a two-stage pipeline where the entities are translated into a common language using a translation service in the first step followed by computation of lexical similarity between the translations to match the entities in the second step. In this paper we introduce a novel ontology matching method based on the fusion of structural similarity and cross-lingual semantic similarity. We carry out experiments using 3 language pairs and report substantial improvements on the performance of the lexical methods thus showing the effectiveness of our proposed approach. To the best of our knowledge this is the first work which tackles the problem of unsupervised ontology matching in the cross-lingual setting by leveraging both structural and semantic embeddings.", }
The development of ontologies in various languages is attracting attention as the amount of multilingual data available on the web increases. Cross-lingual ontology matching facilitates interoperability amongst ontologies in different languages. Although supervised machine learning-based methods have shown good performance on ontology matching, their application to the cross-lingual setting is limited by the availability of training data. Current state-of-the-art unsupervised methods for cross-lingual ontology matching focus on lexical similarity between entities. These approaches follow a two-stage pipeline where the entities are translated into a common language using a translation service in the first step followed by computation of lexical similarity between the translations to match the entities in the second step. In this paper we introduce a novel ontology matching method based on the fusion of structural similarity and cross-lingual semantic similarity. We carry out experiments using 3 language pairs and report substantial improvements on the performance of the lexical methods thus showing the effectiveness of our proposed approach. To the best of our knowledge this is the first work which tackles the problem of unsupervised ontology matching in the cross-lingual setting by leveraging both structural and semantic embeddings.
[ "Banerjee, Shubhanker", "Chakravarthi, Bharathi Raja", "McCrae, John Philip" ]
Cross-Lingual Ontology Matching using Structural and Semantic Similarity
ldl-1.2
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.ldl-1.3.bib
https://aclanthology.org/2024.ldl-1.3/
@inproceedings{boano-etal-2024-querying, title = "Querying the Lexicon der indogermanischen Verben in the {L}i{L}a Knowledge Base: Two Use Cases", author = "Boano, Valeria Irene and Passarotti, Marco and Ginevra, Riccardo", editor = "Chiarcos, Christian and Gkirtzou, Katerina and Ionov, Maxim and Khan, Fahad and McCrae, John P. and Ponsoda, Elena Montiel and Chozas, Patricia Mart{\'\i}n", booktitle = "Proceedings of the 9th Workshop on Linked Data in Linguistics @ LREC-COLING 2024", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.ldl-1.3", pages = "22--31", abstract = "This paper presents two use cases of the etymological data provided by the *Lexicon der indogermanischen Verben* (LIV) after their publication as Linked Open Data and their linking to the LiLa Knowledge Base (KB) of interoperable linguistic resources for Latin. The first part of the paper briefly describes the LiLa KB and its structure. Then, the LIV and the information it contains are introduced, followed by a short description of the ontologies and the extensions used for modelling the LIV{'}s data and interlinking them to the LiLa ecosystem. The last section details the two use cases. The first case concerns the inflection types of the Latin verbs that reflect Proto-Indo-European stems, while the second one focusses on the Latin derivatives of the inherited stems. The results of the investigations are put in relation to current research topics in Historical Linguistics, demonstrating their relevance to the discipline.", }
This paper presents two use cases of the etymological data provided by the *Lexicon der indogermanischen Verben* (LIV) after their publication as Linked Open Data and their linking to the LiLa Knowledge Base (KB) of interoperable linguistic resources for Latin. The first part of the paper briefly describes the LiLa KB and its structure. Then, the LIV and the information it contains are introduced, followed by a short description of the ontologies and the extensions used for modelling the LIV's data and interlinking them to the LiLa ecosystem. The last section details the two use cases. The first case concerns the inflection types of the Latin verbs that reflect Proto-Indo-European stems, while the second one focusses on the Latin derivatives of the inherited stems. The results of the investigations are put in relation to current research topics in Historical Linguistics, demonstrating their relevance to the discipline.
[ "Boano, Valeria Irene", "Passarotti, Marco", "Ginevra, Riccardo" ]
Querying the Lexicon der indogermanischen Verben in the LiLa Knowledge Base: Two Use Cases
ldl-1.3
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.ldl-1.4.bib
https://aclanthology.org/2024.ldl-1.4/
@inproceedings{canning-2024-defining, title = "Defining an Ontology for Museum Critical Cataloguing Terminology Guidelines", author = "Canning, Erin", editor = "Chiarcos, Christian and Gkirtzou, Katerina and Ionov, Maxim and Khan, Fahad and McCrae, John P. and Ponsoda, Elena Montiel and Chozas, Patricia Mart{\'\i}n", booktitle = "Proceedings of the 9th Workshop on Linked Data in Linguistics @ LREC-COLING 2024", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.ldl-1.4", pages = "32--36", abstract = "Submission type: Short paper This paper presents the proposed ontology for the project Computational Approaches for Addressing Problematic Terminology (CAAPT). This schema seeks to represent contents and structure of language guideline documents produced by cultural heritage institutions seeking to engage with critical cataloguing or reparative description work, known as terminology guidance documents. It takes the Victoria {\&} Albert Museum{'}s Terminology Guidance Document as a source for the initial modelling work. Ultimately, CAAPT seeks to expand the knowledge graph beyond the V{\&}A Museum context to incorporate additional terminology guidance documents and linked open data vocabularies. The ontology seeks to bring together scholarly communities in areas relevant to this project, most notably those in cultural heritage and linguistics linked open data, by leveraging existing linked data resources in these areas: as such, OntoLex, CIDOC CRM, and SKOS are used as a foundation for this work, along with a proposed schema from a related project, CULCO. As the CAAPT project is in early stages, this paper presents the preliminary results of work undertaken thus far in order to seek feedback from the linguistics linked open data community.", }
Submission type: Short paper. This paper presents the proposed ontology for the project Computational Approaches for Addressing Problematic Terminology (CAAPT). This schema seeks to represent the contents and structure of language guideline documents produced by cultural heritage institutions seeking to engage with critical cataloguing or reparative description work, known as terminology guidance documents. It takes the Victoria & Albert Museum's Terminology Guidance Document as a source for the initial modelling work. Ultimately, CAAPT seeks to expand the knowledge graph beyond the V&A Museum context to incorporate additional terminology guidance documents and linked open data vocabularies. The ontology seeks to bring together scholarly communities in areas relevant to this project, most notably those in cultural heritage and linguistics linked open data, by leveraging existing linked data resources in these areas: as such, OntoLex, CIDOC CRM, and SKOS are used as a foundation for this work, along with a proposed schema from a related project, CULCO. As the CAAPT project is in early stages, this paper presents the preliminary results of work undertaken thus far in order to seek feedback from the linguistics linked open data community.
[ "Canning, Erin" ]
Defining an Ontology for Museum Critical Cataloguing Terminology Guidelines
ldl-1.4
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.ldl-1.5.bib
https://aclanthology.org/2024.ldl-1.5/
@inproceedings{fransen-etal-2024-molor, title = "The {MOLOR} Lemma Bank: a New {LLOD} Resource for {O}ld {I}rish", author = "Fransen, Theodorus and Anderson, Cormac and Beniamine, Sacha and Passarotti, Marco", editor = "Chiarcos, Christian and Gkirtzou, Katerina and Ionov, Maxim and Khan, Fahad and McCrae, John P. and Ponsoda, Elena Montiel and Chozas, Patricia Mart{\'\i}n", booktitle = "Proceedings of the 9th Workshop on Linked Data in Linguistics @ LREC-COLING 2024", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.ldl-1.5", pages = "37--43", abstract = "This paper describes the first steps in creating a Lemma Bank for Old Irish (600-900CE) within the Linked Data paradigm, taking inspiration from a similar resource for Latin built as part of the LiLa project (2018{--}2023). The focus is on the extraction and RDF conversion of nouns from Goidelex, a novel and highly structured morphological resource for Old Irish. The aim is to strike a good balance between retaining a representative level of morphological granularity and at the same time keeping the amount of lemma variants within workable limits, to facilitate straightforward resource interlinking for Old Irish, planned as future work.", }
This paper describes the first steps in creating a Lemma Bank for Old Irish (600–900 CE) within the Linked Data paradigm, taking inspiration from a similar resource for Latin built as part of the LiLa project (2018–2023). The focus is on the extraction and RDF conversion of nouns from Goidelex, a novel and highly structured morphological resource for Old Irish. The aim is to strike a good balance between retaining a representative level of morphological granularity and at the same time keeping the number of lemma variants within workable limits, to facilitate straightforward resource interlinking for Old Irish, planned as future work.
[ "Fransen, Theodorus", "Anderson, Cormac", "Beniamine, Sacha", "Passarotti, Marco" ]
The MOLOR Lemma Bank: a New LLOD Resource for Old Irish
ldl-1.5
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.ldl-1.6.bib
https://aclanthology.org/2024.ldl-1.6/
@inproceedings{khan-etal-2024-chamuca, title = "{CHAMU{\c{C}}A}: Towards a Linked Data Language Resource of {P}ortuguese Borrowings in {A}sian Languages", author = "Khan, Fahad and Salgado, Ana and Anuradha, Isuri and Costa, Rute and Liyanage, Chamila and McCrae, John P. and Ojha, Atul Kr. and Rani, Priya and Frontini, Francesca", editor = "Chiarcos, Christian and Gkirtzou, Katerina and Ionov, Maxim and Khan, Fahad and McCrae, John P. and Ponsoda, Elena Montiel and Chozas, Patricia Mart{\'\i}n", booktitle = "Proceedings of the 9th Workshop on Linked Data in Linguistics @ LREC-COLING 2024", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.ldl-1.6", pages = "44--48", abstract = "This paper presents the development of CHAMU{\c{C}}A, a novel lexical resource designed to document the influence of the Portuguese language on various Asian languages, with an initial focus on the languages of South Asia. Through the utilization of linked open data and the OntoLex vocabulary, CHAMU{\c{C}}A offers structured insights into the linguistic characteristics, and cultural ramifications of Portuguese borrowings across multiple languages. The article outlines CHAMU{\c{C}}A{'}s potential contributions to the linguistic linked data community, emphasising its role in addressing the scarcity of resources for lesser-resourced languages and serving as a test case for organising etymological data in a queryable format. CHAMU{\c{C}}A emerges as an initiative towards the comprehensive catalogization and analysis of Portuguese borrowings, offering valuable insights into language contact dynamics, historical evolution, and cultural exchange in Asia, one that is based on linked data technology.", }
This paper presents the development of CHAMUÇA, a novel lexical resource designed to document the influence of the Portuguese language on various Asian languages, with an initial focus on the languages of South Asia. Through the utilization of linked open data and the OntoLex vocabulary, CHAMUÇA offers structured insights into the linguistic characteristics and cultural ramifications of Portuguese borrowings across multiple languages. The article outlines CHAMUÇA's potential contributions to the linguistic linked data community, emphasising its role in addressing the scarcity of resources for lesser-resourced languages and serving as a test case for organising etymological data in a queryable format. CHAMUÇA emerges as an initiative towards the comprehensive catalogization and analysis of Portuguese borrowings, offering valuable insights into language contact dynamics, historical evolution, and cultural exchange in Asia, one that is based on linked data technology.
[ "Khan, Fahad", "Salgado, Ana", "Anuradha, Isuri", "Costa, Rute", "Liyanage, Chamila", "McCrae, John P.", "Ojha, Atul Kr.", "Rani, Priya", "Frontini, Francesca" ]
CHAMUÇA: Towards a Linked Data Language Resource of Portuguese Borrowings in Asian Languages
ldl-1.6
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.ldl-1.7.bib
https://aclanthology.org/2024.ldl-1.7/
@inproceedings{kudera-etal-2024-loding, title = "{LOD}in{G}: Linked Open Data in the Humanities", author = {Kudera, Jacek and Bamberg, Claudia and Burch, Thomas and Gernert, Folke and Hinzmann, Maria and Kabatnik, Susanne and Moulin, Claudine and Raue, Benjamin and Rettinger, Achim and R{\"o}pke, J{\"o}rg and Schenkel, Ralf and Shi-Kupfer, Kristin and Schirra, Doris and Sch{\"o}ch, Christof and Weis, Jo{\"e}lle}, editor = "Chiarcos, Christian and Gkirtzou, Katerina and Ionov, Maxim and Khan, Fahad and McCrae, John P. and Ponsoda, Elena Montiel and Chozas, Patricia Mart{\'\i}n", booktitle = "Proceedings of the 9th Workshop on Linked Data in Linguistics @ LREC-COLING 2024", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.ldl-1.7", pages = "49--54", abstract = "We are presenting LODinG {--} Linked Open Data in the Humanities (abbreviated from Linked Open Data in den Geisteswissenschaften), a recently launched research initiative exploring the intersection of Linked Open Data (LOD) and a range of areas of work within the Humanities. We focus on effective methods of collecting, modeling, linking, releasing and analyzing machine-readable information relevant to (digital) humanities research in the form of LOD. LODinG combines the sources and methods of digital humanities, general and computational linguistics, digital lexicography, German and Romance philology, translatology, cultural and literary studies, media studies, information science and law to explore and expand the potential of the LOD paradigm for such a diverse and multidisciplinary field. The project{'}s primary objectives are to improve the methods of extracting, modeling and analyzing multilingual data in the LOD paradigm; to demonstrate the application of the linguistic LOD to various methods and domains within and beyond the humanities; and to develop a modular, cross-domain data model for the humanities.", }
We present LODinG {--} Linked Open Data in the Humanities (abbreviated from Linked Open Data in den Geisteswissenschaften), a recently launched research initiative exploring the intersection of Linked Open Data (LOD) and a range of areas of work within the Humanities. We focus on effective methods of collecting, modeling, linking, releasing and analyzing machine-readable information relevant to (digital) humanities research in the form of LOD. LODinG combines the sources and methods of digital humanities, general and computational linguistics, digital lexicography, German and Romance philology, translatology, cultural and literary studies, media studies, information science and law to explore and expand the potential of the LOD paradigm for such a diverse and multidisciplinary field. The project{'}s primary objectives are to improve the methods of extracting, modeling and analyzing multilingual data in the LOD paradigm; to demonstrate the application of linguistic LOD to various methods and domains within and beyond the humanities; and to develop a modular, cross-domain data model for the humanities.
[ "Kudera, Jacek", "Bamberg, Claudia", "Burch, Thomas", "Gernert, Folke", "Hinzmann, Maria", "Kabatnik, Susanne", "Moulin, Claudine", "Raue, Benjamin", "Rettinger, Achim", "R{\\\"o}pke, J{\\\"o}rg", "Schenkel, Ralf", "Shi-Kupfer, Kristin", "Schirra, Doris", "Sch{\\\"o}ch, Christof", "Weis, Jo{\\\"e}lle" ]
LODinG: Linked Open Data in the Humanities
ldl-1.7
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.ldl-1.8.bib
https://aclanthology.org/2024.ldl-1.8/
@inproceedings{mallia-etal-2024-digitant, title = "{D}ig{I}t{A}nt: a platform for creating, linking and exploiting {LOD} lexica with heterogeneous resources", author = "Mallia, Michele and Bandini, Michela and Bellandi, Andrea and Murano, Francesca and Piccini, Silvia and Rigobianco, Luca and Tommasi, Alessandro and Zavattari, Cesare and Zinzi, Mariarosaria and Quochi, Valeria", editor = "Chiarcos, Christian and Gkirtzou, Katerina and Ionov, Maxim and Khan, Fahad and McCrae, John P. and Ponsoda, Elena Montiel and Chozas, Patricia Mart{\'\i}n", booktitle = "Proceedings of the 9th Workshop on Linked Data in Linguistics @ LREC-COLING 2024", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.ldl-1.8", pages = "55--65", abstract = "Over the past few years, the deployment of Linked Open Data (LOD) technologies has witnessed significant advancements across a myriad of sectors, linguistics included. This progression is characterized by an exponential increase in the conversion of resources to adhere to contemporary encoding standards. Such transformations are driven by the objectives outlined in {``}ecological{''} methodologies, notably the FAIR data principles, which advocate for the reuse and interoperability of resources. This paper presents the DigItAnt architecture, developed in the context of a national project funded by the Italian Ministry of Research and in the service of a recently started Italian endeavor to realize a federation of infrastructures for the humanities. It details its services, utilities and data types, and shows how it manages to produce, exploit and interlink LLOD and non-LLOD datasets in ways that are meaningful to its intended target disciplinary context, i.e. historical linguistics over epigraphy data. The paper also introduces how DigItAnt services and functionalities will contribute to the empowerment of the H2IOSC Italian infrastructures cluster project, which is devoted to the construction of a nationwide research infrastructure federation for the humanities, and it will possibly contribute to its pilot project towards an authoritative LLOD platform.", }
Over the past few years, the deployment of Linked Open Data (LOD) technologies has witnessed significant advancements across a myriad of sectors, linguistics included. This progression is characterized by an exponential increase in the conversion of resources to adhere to contemporary encoding standards. Such transformations are driven by the objectives outlined in {``}ecological{''} methodologies, notably the FAIR data principles, which advocate for the reuse and interoperability of resources. This paper presents the DigItAnt architecture, developed in the context of a national project funded by the Italian Ministry of Research and in the service of a recently started Italian endeavor to realize a federation of infrastructures for the humanities. It details its services, utilities and data types, and shows how it manages to produce, exploit and interlink LLOD and non-LLOD datasets in ways that are meaningful to its intended target disciplinary context, i.e., historical linguistics on epigraphic data. The paper also describes how DigItAnt services and functionalities will contribute to the empowerment of the H2IOSC Italian infrastructures cluster project, which is devoted to the construction of a nationwide research infrastructure federation for the humanities, and it will possibly contribute to its pilot project towards an authoritative LLOD platform.
[ "Mallia, Michele", "B", "ini, Michela", "Bell", "i, Andrea", "Murano, Francesca", "Piccini, Silvia", "Rigobianco, Luca", "Tommasi, Aless", "ro", "Zavattari, Cesare", "Zinzi, Mariarosaria", "Quochi, Valeria" ]
DigItAnt: a platform for creating, linking and exploiting LOD lexica with heterogeneous resources
ldl-1.8
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.ldl-1.9.bib
https://aclanthology.org/2024.ldl-1.9/
@inproceedings{mccrae-etal-2024-teanga, title = "Teanga Data Model for Linked Corpora", author = "McCrae, John P. and Rani, Priya and Doyle, Adrian and Stearns, Bernardo", editor = "Chiarcos, Christian and Gkirtzou, Katerina and Ionov, Maxim and Khan, Fahad and McCrae, John P. and Ponsoda, Elena Montiel and Chozas, Patricia Mart{\'\i}n", booktitle = "Proceedings of the 9th Workshop on Linked Data in Linguistics @ LREC-COLING 2024", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.ldl-1.9", pages = "66--74", abstract = "Corpus data is the main source of data for natural language processing applications, however no standard or model for corpus data has become predominant in the field. Linguistic linked data aims to provide methods by which data can be made findable, accessible, interoperable and reusable (FAIR). However, current attempts to create a linked data format for corpora have been unsuccessful due to the verbose and specialised formats that they use. In this work, we present the Teanga data model, which uses a layered annotation model to capture all NLP-relevant annotations. We present the YAML serializations of the model, which is concise and uses a widely-deployed format, and we describe how this can be interpreted as RDF. Finally, we demonstrate three examples of the use of the Teanga data model for syntactic annotation, literary analysis and multilingual corpora.", }
Corpus data is the main source of data for natural language processing applications; however, no standard or model for corpus data has become predominant in the field. Linguistic linked data aims to provide methods by which data can be made findable, accessible, interoperable and reusable (FAIR). However, current attempts to create a linked data format for corpora have been unsuccessful due to the verbose and specialised formats that they use. In this work, we present the Teanga data model, which uses a layered annotation model to capture all NLP-relevant annotations. We present the YAML serialization of the model, which is concise and uses a widely-deployed format, and we describe how it can be interpreted as RDF. Finally, we demonstrate three examples of the use of the Teanga data model for syntactic annotation, literary analysis and multilingual corpora.
[ "McCrae, John P.", "Rani, Priya", "Doyle, Adrian", "Stearns, Bernardo" ]
Teanga Data Model for Linked Corpora
ldl-1.9
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
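A hypothetical illustration of the layered idea in the record above: a base text plus standoff layers, serialized as YAML. The key names below are our assumptions rather than Teanga's published schema; the point is only how compact a layered annotation document can be in this style.

```python
# Sketch: a layered-annotation document in the spirit of Teanga.
# Layers are standoff: 'tokens' holds character offsets into 'text',
# and 'pos' carries one tag per token.  Key names are assumptions.
import yaml  # pip install pyyaml

doc = {
    "_meta": {"lang": "en"},
    "text": "Brón held the spear.",
    "tokens": [[0, 4], [5, 9], [10, 13], [14, 19], [19, 20]],
    "pos": ["PROPN", "VERB", "DET", "NOUN", "PUNCT"],
}

print(yaml.safe_dump(doc, allow_unicode=True, sort_keys=False))
```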
https://aclanthology.org/2024.ldl-1.10.bib
https://aclanthology.org/2024.ldl-1.10/
@inproceedings{passarotti-etal-2024-services, title = "The Services of the {L}i{L}a Knowledge Base of Interoperable Linguistic Resources for {L}atin", author = "Passarotti, Marco and Mambrini, Francesco and Moretti, Giovanni", editor = "Chiarcos, Christian and Gkirtzou, Katerina and Ionov, Maxim and Khan, Fahad and McCrae, John P. and Ponsoda, Elena Montiel and Chozas, Patricia Mart{\'\i}n", booktitle = "Proceedings of the 9th Workshop on Linked Data in Linguistics @ LREC-COLING 2024", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.ldl-1.10", pages = "75--83", abstract = "This paper describes three online services designed to ease the tasks of querying and populating the linguistic resources for Latin made interoperable through their publication as Linked Open Data in the LiLa Knowledge Base. As for querying the KB, we present an interface to search the collection of lemmas that represents the core of the Knowledge Base, and an interactive, graphical platform to run queries on the resources currently interlinked. As for populating the KB with new textual resources, we describe a tool that performs automatic tokenization, lemmatization and Part-of-Speech tagging of a raw text in Latin and links its tokens to LiLa.", }
This paper describes three online services designed to ease the tasks of querying and populating the linguistic resources for Latin made interoperable through their publication as Linked Open Data in the LiLa Knowledge Base. As for querying the KB, we present an interface to search the collection of lemmas that represents the core of the Knowledge Base, and an interactive, graphical platform to run queries on the resources currently interlinked. As for populating the KB with new textual resources, we describe a tool that performs automatic tokenization, lemmatization and Part-of-Speech tagging of a raw text in Latin and links its tokens to LiLa.
[ "Passarotti, Marco", "Mambrini, Francesco", "Moretti, Giovanni" ]
The Services of the LiLa Knowledge Base of Interoperable Linguistic Resources for Latin
ldl-1.10
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
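The lemma-collection search described in the record above can in principle also be driven programmatically through the Knowledge Base's SPARQL endpoint. The sketch below assumes the endpoint URL and the lila: property names from LiLa's public documentation; both should be verified against the live service.

```python
# Sketch: querying the LiLa Lemma Bank via SPARQL.
# Assumptions to verify: the endpoint URL and the lila: vocabulary terms.
from SPARQLWrapper import SPARQLWrapper, JSON  # pip install sparqlwrapper

endpoint = SPARQLWrapper("https://lila-erc.eu/sparql")  # assumed endpoint
endpoint.setQuery("""
    PREFIX lila: <http://lila-erc.eu/ontologies/lila/>
    PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
    SELECT ?lemma ?label WHERE {
      ?lemma a lila:Lemma ;
             rdfs:label ?label .
      FILTER(STRSTARTS(STR(?label), "laud"))
    } LIMIT 10
""")
endpoint.setReturnFormat(JSON)

for row in endpoint.query().convert()["results"]["bindings"]:
    print(row["lemma"]["value"], row["label"]["value"])
```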
https://aclanthology.org/2024.ldl-1.11.bib
https://aclanthology.org/2024.ldl-1.11/
@inproceedings{pertsas-etal-2024-annotated, title = "An Annotated Dataset for Transformer-based Scholarly Information Extraction and Linguistic Linked Data Generation", author = "Pertsas, Vayianos and Kasapaki, Marialena and Constantopoulos, Panos", editor = "Chiarcos, Christian and Gkirtzou, Katerina and Ionov, Maxim and Khan, Fahad and McCrae, John P. and Ponsoda, Elena Montiel and Chozas, Patricia Mart{\'\i}n", booktitle = "Proceedings of the 9th Workshop on Linked Data in Linguistics @ LREC-COLING 2024", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.ldl-1.11", pages = "84--93", abstract = "We present a manually curated and annotated, multidisciplinary dataset of 15,262 sentences from research articles (abstract and main text) that can be used for transformer-based extraction from scholarly publications of three types of entities: 1) research methods, named entities of variable length, 2) research goals, entities that appear as textual spans of variable length with mostly fixed lexico-syntactic-structure, and 3) research activities, entities that appear as textual spans of variable length with complex lexico-syntactic structure. We explore the capabilities of our dataset by using it for training/fine-tuning various ML and transformer-based models. We compare our finetuned models as well as LLM responses (chatGPT 3.5) based on 10-shot learning, by measuring F1 scores in token-based, entity-based strict and entity-based partial evaluations across interdisciplinary and discipline-specific datasets in order to capture any possible differences in discipline-oriented writing styles. Results show that fine tuning of transformer-based models significantly outperforms the performance of few- shot learning of LLMs such as chatGPT, highlighting the significance of annotation datasets in such tasks. Our dataset can also be used as a source for linguistic linked data by itself. We demonstrate this by presenting indicative queries in SPARQL, executed over such an RDF knowledge graph.", }
We present a manually curated and annotated, multidisciplinary dataset of 15,262 sentences from research articles (abstract and main text) that can be used for transformer-based extraction from scholarly publications of three types of entities: 1) research methods, named entities of variable length, 2) research goals, entities that appear as textual spans of variable length with a mostly fixed lexico-syntactic structure, and 3) research activities, entities that appear as textual spans of variable length with a complex lexico-syntactic structure. We explore the capabilities of our dataset by using it for training/fine-tuning various ML and transformer-based models. We compare our fine-tuned models as well as LLM responses (ChatGPT 3.5) based on 10-shot learning, by measuring F1 scores in token-based, entity-based strict and entity-based partial evaluations across interdisciplinary and discipline-specific datasets, in order to capture any possible differences in discipline-oriented writing styles. Results show that fine-tuning of transformer-based models significantly outperforms few-shot learning of LLMs such as ChatGPT, highlighting the significance of annotated datasets in such tasks. Our dataset can also be used as a source of linguistic linked data in itself. We demonstrate this by presenting indicative queries in SPARQL, executed over such an RDF knowledge graph.
[ "Pertsas, Vayianos", "Kasapaki, Marialena", "Constantopoulos, Panos" ]
An Annotated Dataset for Transformer-based Scholarly Information Extraction and Linguistic Linked Data Generation
ldl-1.11
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
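On the extraction side, a fine-tuned token-classification model of the kind the record above evaluates would typically be applied as below with the Hugging Face transformers API. The checkpoint name is a placeholder (the record publishes none here), and the METHOD label is likewise assumed.

```python
# Sketch: applying a fine-tuned scholarly-IE tagger.  The checkpoint
# path and label set are placeholders; substitute a real model.
from transformers import pipeline

tagger = pipeline(
    "token-classification",
    model="path/to/finetuned-scholarly-ie",  # hypothetical checkpoint
    aggregation_strategy="simple",           # merge word pieces into spans
)

sentence = ("We apply latent Dirichlet allocation to identify topics "
            "in a corpus of historical newspapers.")
for span in tagger(sentence):
    # e.g. METHOD 'latent Dirichlet allocation' 0.97
    print(span["entity_group"], repr(span["word"]), round(span["score"], 2))
```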
https://aclanthology.org/2024.ldl-1.12.bib
https://aclanthology.org/2024.ldl-1.12/
@inproceedings{rosner-ionov-2024-linguistic, title = "Linguistic {LOD} for Interoperable Morphological Description", author = "Rosner, Michael and Ionov, Maxim", editor = "Chiarcos, Christian and Gkirtzou, Katerina and Ionov, Maxim and Khan, Fahad and McCrae, John P. and Ponsoda, Elena Montiel and Chozas, Patricia Mart{\'\i}n", booktitle = "Proceedings of the 9th Workshop on Linked Data in Linguistics @ LREC-COLING 2024", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.ldl-1.12", pages = "94--102", abstract = "Interoperability is a characteristic of a product or system that seamlessly works with another product or system and implies a certain level of independence from the context of use. Turning to language resources, interoperability is frequently cited as one important rationale underlying the use of LLOD representations and is generally regarded as highly desirable. In this paper we further elaborate this theme, distinguishing three different kinds of interoperability providing practical implementations with examples from morphology.", }
Interoperability is a characteristic of a product or system that seamlessly works with another product or system and implies a certain level of independence from the context of use. Turning to language resources, interoperability is frequently cited as one important rationale underlying the use of LLOD representations and is generally regarded as highly desirable. In this paper we further elaborate this theme, distinguishing three different kinds of interoperability and providing practical implementations with examples from morphology.
[ "Rosner, Michael", "Ionov, Maxim" ]
Linguistic LOD for Interoperable Morphological Description
ldl-1.12
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
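One concrete reading of "interoperable morphological description" is attaching shared data categories to OntoLex forms, e.g. with LexInfo. In the sketch below the OntoLex and LexInfo terms are standard vocabulary, while the sample entry (a Maltese noun and its plural) and its URIs are invented for illustration.

```python
# Sketch: an inflected form with interoperable morphological features.
# Standard vocabulary: OntoLex and LexInfo; invented: the ex: URIs.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF

ONTOLEX = Namespace("http://www.w3.org/ns/lemon/ontolex#")
LEXINFO = Namespace("http://www.lexinfo.net/ontology/3.0/lexinfo#")
EX = Namespace("http://example.org/lex/")

g = Graph()
g.bind("ontolex", ONTOLEX)
g.bind("lexinfo", LEXINFO)

entry, form = EX["ktieb"], EX["ktieb_pl"]  # Maltese 'ktieb', plural 'kotba'
g.add((entry, RDF.type, ONTOLEX.LexicalEntry))
g.add((entry, LEXINFO.partOfSpeech, LEXINFO.noun))
g.add((entry, ONTOLEX.otherForm, form))
g.add((form, RDF.type, ONTOLEX.Form))
g.add((form, ONTOLEX.writtenRep, Literal("kotba", lang="mt")))
g.add((form, LEXINFO.number, LEXINFO.plural))

print(g.serialize(format="turtle"))
```

Because both vocabularies are shared fixed points, any consumer that understands LexInfo can interpret the features without knowing anything about the source lexicon.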
https://aclanthology.org/2024.ldl-1.13.bib
https://aclanthology.org/2024.ldl-1.13/
@inproceedings{sciolette-2024-modeling, title = "Modeling linking between text and lexicon with {O}nto{L}ex-Lemon: a case study of computational terminology for the {B}abylonian Talmud", author = "Sciolette, Flavia", editor = "Chiarcos, Christian and Gkirtzou, Katerina and Ionov, Maxim and Khan, Fahad and McCrae, John P. and Ponsoda, Elena Montiel and Chozas, Patricia Mart{\'\i}n", booktitle = "Proceedings of the 9th Workshop on Linked Data in Linguistics @ LREC-COLING 2024", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.ldl-1.13", pages = "103--107", abstract = "This paper illustrates the first steps in the creation of a computational terminology for the Babylonian Talmud. After introducing reasons and the state of the art, the paper exposes the choice of using OntoLex-Lemon and the new FrAC module for encoding the attestations and quantitative data of the terminology extraction. After that, the Talmudic terminology base is introduced and an example entry with the above-mentioned data is shown. The scheme is motivated not only by the rich representation the model allows, but also by the future management of the link between text and lexical entries.", }
This paper illustrates the first steps in the creation of a computational terminology for the Babylonian Talmud. After introducing the motivation and the state of the art, the paper explains the choice of OntoLex-Lemon and the new FrAC module for encoding the attestations and quantitative data from the terminology extraction. After that, the Talmudic terminology base is introduced and an example entry with the above-mentioned data is shown. The scheme is motivated not only by the rich representation the model allows, but also by the future management of the link between text and lexical entries.
[ "Sciolette, Flavia" ]
Modeling linking between text and lexicon with OntoLex-Lemon: a case study of computational terminology for the Babylonian Talmud
ldl-1.13
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
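To give a flavour of the encoding sketched in the record above, here is a term carrying one attestation, roughly along the lines the FrAC module allows. FrAC is an evolving draft, so the frac: property names should be checked against the current specification; the term, locus and URIs are illustrative.

```python
# Sketch: an OntoLex entry with a FrAC-style attestation.
# Verify frac: terms against the current FrAC draft; URIs are invented.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF

ONTOLEX = Namespace("http://www.w3.org/ns/lemon/ontolex#")
FRAC = Namespace("http://www.w3.org/ns/lemon/frac#")
EX = Namespace("http://example.org/talmud/")

g = Graph()
g.bind("ontolex", ONTOLEX)
g.bind("frac", FRAC)

term, att = EX["term/get"], EX["attestation/1"]  # 'get' = bill of divorce
g.add((term, RDF.type, ONTOLEX.LexicalEntry))
g.add((term, FRAC.attestation, att))
g.add((att, RDF.type, FRAC.Attestation))
# illustrative transliterated quotation (opening of Gittin 2a)
g.add((att, FRAC.quotation, Literal("ha-mevi get mi-medinat ha-yam")))
g.add((att, FRAC.locus, EX["text/gittin/2a"]))  # where the term occurs

print(g.serialize(format="turtle"))
```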
https://aclanthology.org/2024.ldl-1.14.bib
https://aclanthology.org/2024.ldl-1.14/
@inproceedings{stankovic-etal-2024-ontolex, title = "{O}nto{L}ex Publication Made Easy: A Dataset of Verbal Aspectual Pairs for {B}osnian, {C}roatian and {S}erbian", author = "Stankovi{\'c}, Ranka and Ionov, Maxim and Bajtarevi{\'c}, Medina and Nin{\v{c}}evi{\'c}, Lorena", editor = "Chiarcos, Christian and Gkirtzou, Katerina and Ionov, Maxim and Khan, Fahad and McCrae, John P. and Ponsoda, Elena Montiel and Chozas, Patricia Mart{\'\i}n", booktitle = "Proceedings of the 9th Workshop on Linked Data in Linguistics @ LREC-COLING 2024", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.ldl-1.14", pages = "108--114", abstract = "This paper introduces a novel language resource for retrieving and researching verbal aspectual pairs in BCS (Bosnian, Croatian, and Serbian) created using Linguistic Linked Open Data (LLOD) principles. As there is no resource to help learners of Bosnian, Croatian, and Serbian as foreign languages to recognize the aspect of a verb or its pairs, we have created a new resource that will provide users with information about the aspect, as well as the link to a verb{'}s aspectual counterparts. This resource also contains external links to monolingual dictionaries, Wordnet, and BabelNet. As this is a work in progress, our resource only includes verbs and their perfective pairs formed with prefixes {``}pro{''}, {``}od{''}, {``}ot{''}, {``}iz{''}, {``}is{''} and {``}na{''}. The goal of this project is to have a complete dataset of all the aspectual pairs in these three languages. We believe it will be useful for research in the field of aspectology, as well as machine translation and other NLP tasks. Using this resource as an example, we also propose a sustainable approach to publishing small to moderate LLOD resources on the Web, both in a user-friendly way and according to the Linked Data principles.", }
This paper introduces a novel language resource for retrieving and researching verbal aspectual pairs in BCS (Bosnian, Croatian, and Serbian) created using Linguistic Linked Open Data (LLOD) principles. As there is no resource to help learners of Bosnian, Croatian, and Serbian as foreign languages recognize the aspect of a verb or identify its aspectual pair, we have created a new resource that will provide users with information about the aspect, as well as the link to a verb{'}s aspectual counterparts. This resource also contains external links to monolingual dictionaries, Wordnet, and BabelNet. As this is a work in progress, our resource only includes verbs and their perfective pairs formed with the prefixes {``}pro{''}, {``}od{''}, {``}ot{''}, {``}iz{''}, {``}is{''} and {``}na{''}. The goal of this project is to have a complete dataset of all the aspectual pairs in these three languages. We believe it will be useful for research in the field of aspectology, as well as machine translation and other NLP tasks. Using this resource as an example, we also propose a sustainable approach to publishing small to moderately sized LLOD resources on the Web, both in a user-friendly way and according to the Linked Data principles.
[ "Stankovi{\\'c}, Ranka", "Ionov, Maxim", "Bajtarevi{\\'c}, Medina", "Nin{\\v{c}}evi{\\'c}, Lorena" ]
OntoLex Publication Made Easy: A Dataset of Verbal Aspectual Pairs for Bosnian, Croatian and Serbian
ldl-1.14
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
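A minimal sketch of how one pair from the dataset above could be represented: two OntoLex entries carrying LexInfo aspect values and linked to each other. lexinfo:aspect and its perfective/imperfective values are standard; the pairing property is invented here, standing in for whatever the dataset actually uses.

```python
# Sketch: an imperfective/perfective pair as linked lexical entries.
# Standard: lexinfo:aspect and its values; invented: ex:perfectivePair.
from rdflib import Graph, Namespace
from rdflib.namespace import RDF

ONTOLEX = Namespace("http://www.w3.org/ns/lemon/ontolex#")
LEXINFO = Namespace("http://www.lexinfo.net/ontology/3.0/lexinfo#")
EX = Namespace("http://example.org/bcs/")

g = Graph()
g.bind("ontolex", ONTOLEX)
g.bind("lexinfo", LEXINFO)

impf, perf = EX["pisati"], EX["napisati"]  # 'to write', prefix 'na-'
for verb, aspect in [(impf, LEXINFO.imperfective), (perf, LEXINFO.perfective)]:
    g.add((verb, RDF.type, ONTOLEX.LexicalEntry))
    g.add((verb, LEXINFO.aspect, aspect))
g.add((impf, EX.perfectivePair, perf))  # assumed pairing property

print(g.serialize(format="turtle"))
```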
https://aclanthology.org/2024.ldl-1.15.bib
https://aclanthology.org/2024.ldl-1.15/
@inproceedings{stankovic-etal-2024-towards, title = "Towards Semantic Interoperability: Parallel Corpora as Linked Data Incorporating Named Entity Linking", author = "Stankovi{\'c}, Ranka and Ikoni{\'c} Ne{\v{s}}i{\'c}, Milica and Perisic, Olja and {\v{S}}kori{\'c}, Mihailo and Kitanovi{\'c}, Olivera", editor = "Chiarcos, Christian and Gkirtzou, Katerina and Ionov, Maxim and Khan, Fahad and McCrae, John P. and Ponsoda, Elena Montiel and Chozas, Patricia Mart{\'\i}n", booktitle = "Proceedings of the 9th Workshop on Linked Data in Linguistics @ LREC-COLING 2024", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.ldl-1.15", pages = "115--125", abstract = "The paper presents the results of the research related to the preparation of parallel corpora, focusing on transformation into RDF graphs using NLP Interchange Format (NIF) for linguistic annotation. We give an overview of the parallel corpus that was used in this case study, as well as the process of POS tagging, lemmatization, named entity recognition (NER), and named entity linking (NEL), which is implemented using Wikidata. In the first phase of NEL main characters and places mentioned in novels are stored in Wikidata and in the second phase they are linked with the occurrences of previously annotated entities in text. Next, we describe the named entity linking (NEL), data conversion to RDF, and incorporation of NIF annotations. Produced NIF files were evaluated through the exploration of triplestore using SPARQL queries. Finally, the bridging of Linked Data and Digital Humanities research is discussed, as well as some drawbacks related to the verbosity of transformation. Semantic interoperability concept in the context of linked data and parallel corpora ensures that data exchanged between systems carries shared and well-defined meanings, enabling effective communication and understanding.", }
The paper presents the results of research on the preparation of parallel corpora, focusing on their transformation into RDF graphs using the NLP Interchange Format (NIF) for linguistic annotation. We give an overview of the parallel corpus that was used in this case study, as well as the process of POS tagging, lemmatization, named entity recognition (NER), and named entity linking (NEL), which is implemented using Wikidata. In the first phase of NEL, main characters and places mentioned in the novels are stored in Wikidata; in the second phase, they are linked with the occurrences of previously annotated entities in the text. Next, we describe the NEL step, the data conversion to RDF, and the incorporation of NIF annotations. The produced NIF files were evaluated through exploration of the triplestore using SPARQL queries. Finally, the bridging of Linked Data and Digital Humanities research is discussed, as well as some drawbacks related to the verbosity of the transformation. The concept of semantic interoperability, in the context of linked data and parallel corpora, ensures that data exchanged between systems carries shared and well-defined meanings, enabling effective communication and understanding.
[ "Stankovi{\\'c}, Ranka", "Ikoni{\\'c} Ne{\\v{s}}i{\\'c}, Milica", "Perisic, Olja", "{\\v{S}}kori{\\'c}, Mihailo", "Kitanovi{\\'c}, Olivera" ]
Towards Semantic Interoperability: Parallel Corpora as Linked Data Incorporating Named Entity Linking
ldl-1.15
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
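The NIF conversion step described above amounts to minting offset-based URIs over the text and attaching entity links to them. A sketch with an invented document URI follows; the NIF and ITS-RDF vocabularies are real, as is the Wikidata QID used for Belgrade.

```python
# Sketch: a NIF context with one mention linked to Wikidata.
# Real vocabularies: NIF core and ITS-RDF; invented: the doc base URI.
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF, XSD

NIF = Namespace("http://persistence.uni-leipzig.org/nlp2rdf/ontologies/nif-core#")
ITSRDF = Namespace("http://www.w3.org/2005/11/its/rdf#")
DOC = Namespace("http://example.org/corpus/novel1#")

g = Graph()
g.bind("nif", NIF)
g.bind("itsrdf", ITSRDF)

text = "Ana je stigla u Beograd."
ctx = DOC["char=0,24"]
g.add((ctx, RDF.type, NIF.Context))
g.add((ctx, NIF.isString, Literal(text)))

mention = DOC["char=16,23"]  # the substring "Beograd"
g.add((mention, RDF.type, NIF.Phrase))
g.add((mention, NIF.referenceContext, ctx))
g.add((mention, NIF.beginIndex, Literal(16, datatype=XSD.nonNegativeInteger)))
g.add((mention, NIF.endIndex, Literal(23, datatype=XSD.nonNegativeInteger)))
g.add((mention, NIF.anchorOf, Literal("Beograd")))
g.add((mention, ITSRDF.taIdentRef, URIRef("http://www.wikidata.org/entity/Q3711")))

print(g.serialize(format="turtle"))
```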
https://aclanthology.org/2024.legal-1.1.bib
https://aclanthology.org/2024.legal-1.1/
@inproceedings{talmoudi-etal-2024-compliance, title = "Compliance by Design Methodologies in the Legal Governance Schemes of {E}uropean Data Spaces", author = "Talmoudi, Kossay and Choukri, Khalid and Gavanon, Isabelle", editor = "Siegert, Ingo and Choukri, Khalid", booktitle = "Proceedings of the Workshop on Legal and Ethical Issues in Human Language Technologies @ LREC-COLING 2024", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.legal-1.1", pages = "1--5", abstract = "Creating novel ways of sharing data to boost the digital economy has been one of the growing priorities of the European Union. In order to realise a set of data-sharing modalities, the European Union funds several projects that aim to put in place Common Data Spaces. These infrastructures are set to be a catalyser for the data economy. However, many hurdles face their implementation. Legal compliance is still one of the major ambiguities of European Common Data Spaces and many initiatives intend to proactively integrate legal compliance schemes in the architecture of sectoral Data Spaces. The various initiatives must navigate a complex web of cross-cutting legal frameworks, including contract law, data protection, intellectual property, protection of trade secrets, competition law, European sovereignty, and cybersecurity obligations. As the conceptualisation of Data Spaces evolves and shows signs of differentiation from one sector to another, it is important to showcase the legal repercussions of the options of centralisation and decentralisation that can be observed in different Data Spaces. This paper will thus delve into their legal requirements and attempt to sketch out a stepping stone for understanding legal governance in data spaces.", }
Creating novel ways of sharing data to boost the digital economy has been one of the growing priorities of the European Union. In order to realise a set of data-sharing modalities, the European Union funds several projects that aim to put in place Common Data Spaces. These infrastructures are set to be a catalyser for the data economy. However, their implementation faces many hurdles. Legal compliance is still one of the major ambiguities of European Common Data Spaces and many initiatives intend to proactively integrate legal compliance schemes in the architecture of sectoral Data Spaces. The various initiatives must navigate a complex web of cross-cutting legal frameworks, including contract law, data protection, intellectual property, protection of trade secrets, competition law, European sovereignty, and cybersecurity obligations. As the conceptualisation of Data Spaces evolves and shows signs of differentiation from one sector to another, it is important to showcase the legal repercussions of the options of centralisation and decentralisation that can be observed in different Data Spaces. This paper will thus delve into their legal requirements and attempt to sketch out a stepping stone for understanding legal governance in data spaces.
[ "Talmoudi, Kossay", "Choukri, Khalid", "Gavanon, Isabelle" ]
Compliance by Design Methodologies in the Legal Governance Schemes of European Data Spaces
legal-1.1
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.legal-1.2.bib
https://aclanthology.org/2024.legal-1.2/
@inproceedings{almeida-amorim-2024-legal, title = "A Legal Framework for Natural Language Model Training in {P}ortugal", author = "Almeida, Ruben and Amorim, Evelin", editor = "Siegert, Ingo and Choukri, Khalid", booktitle = "Proceedings of the Workshop on Legal and Ethical Issues in Human Language Technologies @ LREC-COLING 2024", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.legal-1.2", pages = "6--12", abstract = "Recent advances in deep learning have promoted the advent of many computational systems capable of performing intelligent actions that, until then, were restricted to the human intellect. In the particular case of human languages, these advances allowed the introduction of applications like ChatGPT that are capable of generating coherent text without being explicitly programmed to do so. Instead, these models use large volumes of textual data to learn meaningful representations of human languages. Associated with these advances, concerns about copyright and data privacy infringements caused by these applications have emerged. Despite these concerns, the pace at which new natural language processing applications continued to be developed largely outperformed the introduction of new regulations. Today, communication barriers between legal experts and computer scientists motivate many unintentional legal infringements during the development of such applications. In this paper, a multidisciplinary team intends to bridge this communication gap and promote more compliant Portuguese NLP research by presenting a series of everyday NLP use cases, while highlighting the Portuguese legislation that may arise during its development.", }
Recent advances in deep learning have promoted the advent of many computational systems capable of performing intelligent actions that, until then, were restricted to the human intellect. In the particular case of human languages, these advances allowed the introduction of applications like ChatGPT that are capable of generating coherent text without being explicitly programmed to do so. Instead, these models use large volumes of textual data to learn meaningful representations of human languages. Associated with these advances, concerns about copyright and data privacy infringements caused by these applications have emerged. Despite these concerns, the pace at which new natural language processing applications continued to be developed largely outpaced the introduction of new regulations. Today, communication barriers between legal experts and computer scientists lead to many unintentional legal infringements during the development of such applications. In this paper, a multidisciplinary team intends to bridge this communication gap and promote more compliant Portuguese NLP research by presenting a series of everyday NLP use cases, while highlighting the Portuguese legislation that may apply during their development.
[ "Almeida, Ruben", "Amorim, Evelin" ]
A Legal Framework for Natural Language Model Training in Portugal
legal-1.2
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.legal-1.3.bib
https://aclanthology.org/2024.legal-1.3/
@inproceedings{kirchhubel-brown-2024-intellectual, title = "Intellectual property rights at the training, development and generation stages of Large Language Models", author = {Kirchh{\"u}bel, Christin and Brown, Georgina}, editor = "Siegert, Ingo and Choukri, Khalid", booktitle = "Proceedings of the Workshop on Legal and Ethical Issues in Human Language Technologies @ LREC-COLING 2024", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.legal-1.3", pages = "13--18", abstract = "Large Language Models (LLMs) prompt new questions around Intellectual Property (IP): what is the IP status of the datasets used to train LLMs, the resulting LLMs themselves, and their outputs? The training needs of LLMs may be at odds with current copyright law, and there are active conversations around the ownership of their outputs. A report published by the House of Lords Committee following its inquiry into LLMs and generative AI criticises, among other things, the lack of government guidance, and stresses the need for clarity (through legislation, where appropriate) in this sphere. This paper considers the little guidance and caselaw there is involving AI more broadly to allow us to anticipate legal cases and arguments involving LLMs. Given the pre-emptive nature of this paper, it is not possible to provide comprehensive answers to these questions, but we hope to equip language technology communities with a more informed understanding of the current position with respect to UK copyright and patent law.", }
Large Language Models (LLMs) prompt new questions around Intellectual Property (IP): what is the IP status of the datasets used to train LLMs, the resulting LLMs themselves, and their outputs? The training needs of LLMs may be at odds with current copyright law, and there are active conversations around the ownership of their outputs. A report published by the House of Lords Committee following its inquiry into LLMs and generative AI criticises, among other things, the lack of government guidance, and stresses the need for clarity (through legislation, where appropriate) in this sphere. This paper considers what little guidance and case law exists involving AI more broadly, to allow us to anticipate legal cases and arguments involving LLMs. Given the pre-emptive nature of this paper, it is not possible to provide comprehensive answers to these questions, but we hope to equip language technology communities with a more informed understanding of the current position with respect to UK copyright and patent law.
[ "Kirchh{\\\"u}bel, Christin", "Brown, Georgina" ]
Intellectual property rights at the training, development and generation stages of Large Language Models
legal-1.3
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.legal-1.4.bib
https://aclanthology.org/2024.legal-1.4/
@inproceedings{kamocki-witt-2024-ethical, title = "Ethical Issues in Language Resources and Language Technology {--} New Challenges, New Perspectives", author = "Kamocki, Pawel and Witt, Andreas", editor = "Siegert, Ingo and Choukri, Khalid", booktitle = "Proceedings of the Workshop on Legal and Ethical Issues in Human Language Technologies @ LREC-COLING 2024", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.legal-1.4", pages = "19--23", abstract = "This article elaborates on the author{'}s contribution to the previous edition of the LREC conference, in which they proposed a tentative taxonomy of ethical issues that affect Language Resources (LRs) and Language Technology (LT) at the various stages of their lifecycle (conception, creation, use and evaluation). The proposed taxonomy was built around the following ethical principles: Privacy, Property, Equality, Transparency and Freedom. In this article, the authors would like to: 1) examine whether and how this taxonomy stood the test of time, in light of the recent developments in the legal framework and popularisation of Large Language Models (LLMs); 2) provide some details and a tentative checklist on how the taxonomy can be applied in practice; and 3) develop the taxonomy by adding new principles (Accountability; Risk Anticipation and Limitation; Reliability and Limited Confidence), to address the technological developments in LLMs and the upcoming Artificial Intelligence Act.", }
This article elaborates on the authors{'} contribution to the previous edition of the LREC conference, in which they proposed a tentative taxonomy of ethical issues that affect Language Resources (LRs) and Language Technology (LT) at the various stages of their lifecycle (conception, creation, use and evaluation). The proposed taxonomy was built around the following ethical principles: Privacy, Property, Equality, Transparency and Freedom. In this article, the authors would like to: 1) examine whether and how this taxonomy stood the test of time, in light of the recent developments in the legal framework and popularisation of Large Language Models (LLMs); 2) provide some details and a tentative checklist on how the taxonomy can be applied in practice; and 3) develop the taxonomy by adding new principles (Accountability; Risk Anticipation and Limitation; Reliability and Limited Confidence), to address the technological developments in LLMs and the upcoming Artificial Intelligence Act.
[ "Kamocki, Pawel", "Witt, Andreas" ]
Ethical Issues in Language Resources and Language Technology – New Challenges, New Perspectives
legal-1.4
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.legal-1.5.bib
https://aclanthology.org/2024.legal-1.5/
@inproceedings{hamalainen-2024-legal, title = "Legal and Ethical Considerations that Hinder the Use of {LLM}s in a {F}innish Institution of Higher Education", author = {H{\"a}m{\"a}l{\"a}inen, Mika}, editor = "Siegert, Ingo and Choukri, Khalid", booktitle = "Proceedings of the Workshop on Legal and Ethical Issues in Human Language Technologies @ LREC-COLING 2024", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.legal-1.5", pages = "24--27", abstract = "Large language models (LLMs) make it possible to solve many business problems easier than ever before. However, embracing LLMs in an organization may be slowed down due to ethical and legal considerations. In this paper, we will describe some of these issues we have faced at our university while developing university-level NLP tools to empower teaching and study planning. The identified issues touch upon topics such as GDPR, copyright, user account management and fear towards the new technology.", }
Large language models (LLMs) make it possible to solve many business problems more easily than ever before. However, embracing LLMs in an organization may be slowed down due to ethical and legal considerations. In this paper, we describe some of the issues we have faced at our university while developing university-level NLP tools to empower teaching and study planning. The identified issues touch upon topics such as the GDPR, copyright, user account management and fear of the new technology.
[ "H{\\\"a}m{\\\"a}l{\\\"a}inen, Mika" ]
Legal and Ethical Considerations that Hinder the Use of LLMs in a Finnish Institution of Higher Education
legal-1.5
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.legal-1.6.bib
https://aclanthology.org/2024.legal-1.6/
@inproceedings{schmitt-etal-2024-implications, title = "Implications of Regulations on Large Generative {AI} Models in the Super-Election Year and the Impact on Disinformation", author = {Schmitt, Vera and Tesch, Jakob and Lopez, Eva and Polzehl, Tim and Burchardt, Aljoscha and Neumann, Konstanze and Mohtaj, Salar and M{\"o}ller, Sebastian}, editor = "Siegert, Ingo and Choukri, Khalid", booktitle = "Proceedings of the Workshop on Legal and Ethical Issues in Human Language Technologies @ LREC-COLING 2024", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.legal-1.6", pages = "28--38", abstract = "With the rise of Large Generative AI Models (LGAIMs), disinformation online has become more concerning than ever before. Within the super-election year 2024, the influence of mis- and disinformation can severely influence public opinion. To combat the increasing amount of disinformation online, humans need to be supported by AI-based tools to increase the effectiveness of detecting false content. This paper examines the critical intersection of the AI Act with the deployment of LGAIMs for disinformation detection and the implications from research, deployer, and the user{'}s perspective. The utilization of LGAIMs for disinformation detection falls under the high-risk category defined in the AI Act, leading to several obligations that need to be followed after the enforcement of the AI Act. Among others, the obligations include risk management, transparency, and human oversight which pose the challenge of finding adequate technical interpretations. Furthermore, the paper articulates the necessity for clear guidelines and standards that enable the effective, ethical, and legally compliant use of AI. The paper contributes to the discourse on balancing technological advancement with ethical and legal imperatives, advocating for a collaborative approach to utilizing LGAIMs in safeguarding information integrity and fostering trust in digital ecosystems.", }
With the rise of Large Generative AI Models (LGAIMs), disinformation online has become more concerning than ever before. In the super-election year 2024, mis- and disinformation can severely sway public opinion. To combat the increasing amount of disinformation online, humans need to be supported by AI-based tools that increase the effectiveness of detecting false content. This paper examines the critical intersection of the AI Act with the deployment of LGAIMs for disinformation detection and the implications from the research, deployer, and user perspectives. The utilization of LGAIMs for disinformation detection falls under the high-risk category defined in the AI Act, leading to several obligations that need to be followed after the enforcement of the AI Act. Among others, the obligations include risk management, transparency, and human oversight, which pose the challenge of finding adequate technical interpretations. Furthermore, the paper articulates the necessity for clear guidelines and standards that enable the effective, ethical, and legally compliant use of AI. The paper contributes to the discourse on balancing technological advancement with ethical and legal imperatives, advocating for a collaborative approach to utilizing LGAIMs in safeguarding information integrity and fostering trust in digital ecosystems.
[ "Schmitt, Vera", "Tesch, Jakob", "Lopez, Eva", "Polzehl, Tim", "Burchardt, Aljoscha", "Neumann, Konstanze", "Mohtaj, Salar", "M{\\\"o}ller, Sebastian" ]
Implications of Regulations on Large Generative AI Models in the Super-Election Year and the Impact on Disinformation
legal-1.6
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.legal-1.7.bib
https://aclanthology.org/2024.legal-1.7/
@inproceedings{dipersio-2024-selling, title = "Selling Personal Information: Data Brokers and the Limits of {US} Regulation", author = "DiPersio, Denise", editor = "Siegert, Ingo and Choukri, Khalid", booktitle = "Proceedings of the Workshop on Legal and Ethical Issues in Human Language Technologies @ LREC-COLING 2024", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.legal-1.7", pages = "39--46", abstract = "A principal pillar of the US Blueprint for an AI Bill of Rights is data privacy, specifically, that individuals should be protected from abusive practices by data collectors and data aggregators, and that users should have control over how their personal information is collected and used. An area that spotlights the need for such protections is found in the common practices of data brokers who scrape, purchase, process and reassemble personal information in bulk and sell it for a variety of downstream uses. Such activities almost always occur in the absence of users{'} knowledge or meaningful consent, yet they are legal under US law. This paper examines how data brokers operate, provides some examples of recent US regulatory actions taken against them, summarizes federal efforts to redress data broker practices and concludes that as long as there continues to be no comprehensive federal data protection and privacy scheme, efforts to control such behavior will have only a limited effect. This paper also addresses the limits of informed consent on the use of personal information in language resources and suggests a solution in an holistic approach to data protection and privacy across the data/development life cycle.", }
A principal pillar of the US Blueprint for an AI Bill of Rights is data privacy, specifically, that individuals should be protected from abusive practices by data collectors and data aggregators, and that users should have control over how their personal information is collected and used. An area that spotlights the need for such protections is found in the common practices of data brokers, who scrape, purchase, process and reassemble personal information in bulk and sell it for a variety of downstream uses. Such activities almost always occur in the absence of users{'} knowledge or meaningful consent, yet they are legal under US law. This paper examines how data brokers operate, provides some examples of recent US regulatory actions taken against them, summarizes federal efforts to redress data broker practices and concludes that as long as there continues to be no comprehensive federal data protection and privacy scheme, efforts to control such behavior will have only a limited effect. This paper also addresses the limits of informed consent on the use of personal information in language resources and suggests a solution in a holistic approach to data protection and privacy across the data/development life cycle.
[ "DiPersio, Denise" ]
Selling Personal Information: Data Brokers and the Limits of US Regulation
legal-1.7
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.legal-1.8.bib
https://aclanthology.org/2024.legal-1.8/
@inproceedings{jorschick-etal-2024-data, title = "What Can {I} Do with this Data Point? Towards Modeling Legal and Ethical Aspects of Linguistic Data Collection and (Re-)use", author = "Jorschick, Annett and Schrader, Paul T. and Buschmeier, Hendrik", editor = "Siegert, Ingo and Choukri, Khalid", booktitle = "Proceedings of the Workshop on Legal and Ethical Issues in Human Language Technologies @ LREC-COLING 2024", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.legal-1.8", pages = "47--51", abstract = "Linguistic data often inherits characteristics that limit open science practices such as data publication, sharing, and reuse. Part of the problem is researchers{'} uncertainty about the legal requirements, which need to be considered at the beginning of study planning, when consent forms for participants, ethics applications, and data management plans need to be written. This paper presents a newly funded project that will develop a research data management infrastructure that will provide automated support to researchers in the planning, collection, storage, use, reuse, and sharing of data, taking into account ethical and legal aspects to encourage open science practices.", }
Linguistic data often inherits characteristics that limit open science practices such as data publication, sharing, and reuse. Part of the problem is researchers{'} uncertainty about the legal requirements, which need to be considered at the beginning of study planning, when consent forms for participants, ethics applications, and data management plans need to be written. This paper presents a newly funded project that will develop a research data management infrastructure that will provide automated support to researchers in the planning, collection, storage, use, reuse, and sharing of data, taking into account ethical and legal aspects to encourage open science practices.
[ "Jorschick, Annett", "Schrader, Paul T.", "Buschmeier, Hendrik" ]
What Can I Do with this Data Point? Towards Modeling Legal and Ethical Aspects of Linguistic Data Collection and (Re-)use
legal-1.8
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.legal-1.9.bib
https://aclanthology.org/2024.legal-1.9/
@inproceedings{eskevich-luthra-2024-data, title = "Data-Envelopes for Cultural Heritage: Going beyond Datasheets", author = "Luthra, Mrinalini and Eskevich, Maria", editor = "Siegert, Ingo and Choukri, Khalid", booktitle = "Proceedings of the Workshop on Legal and Ethical Issues in Human Language Technologies @ LREC-COLING 2024", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.legal-1.9", pages = "52--65", abstract = "Cultural heritage data is a rich source of information about the history and culture development in the past. When used with due understanding of its intrinsic complexity it can both support research in social sciences and humanities, and become input for machine learning and artificial intelligence algorithms. In all cases ethical and contextual considerations can be encouraged when the relevant information is provided in a clear and well structured form to potential users before they begin to interact with the data. Proposed data-envelopes, basing on the existing documentation frameworks, address the particular needs and challenges of the cultural heritage field while combining machine-readability and user-friendliness. We develop and test data-envelopes usability on the data from the Huygens Institute for History and Culture of the Netherlands. This paper presents the following contributions: i) we highlight the complexity of CH data, featuring the unique ethical and contextual considerations they entail; ii) we evaluate and compare existing dataset documentation frameworks, examining their suitability for CH datasets; iii) we introduce the {``}data-envelope{''}{--}a machine readable adaptation of existing dataset documentation frameworks, to tackle the specificities of CH datasets. Its modular form is designed to serve not only the needs of machine learning (ML), but also and especially broader user groups varying from humanities scholars, governmental monitoring authorities to citizen scientists and the general public. Importantly, the data-envelope framework emphasises the legal and ethical dimensions of dataset documentation, facilitating compliance with evolving data protection regulations and enhancing the accountability of data stewardship in the cultural heritage sector. We discuss and invite the readers for further conversation on the topic of ethical considerations, and how the different audiences should be informed about the importance of datasets documentation management and their context.", }
Cultural heritage data is a rich source of information about historical and cultural development. When used with due understanding of its intrinsic complexity, it can both support research in the social sciences and humanities and serve as input for machine learning and artificial intelligence algorithms. In all cases, ethical and contextual considerations can be encouraged when the relevant information is provided in a clear and well-structured form to potential users before they begin to interact with the data. The proposed data-envelopes, building on existing documentation frameworks, address the particular needs and challenges of the cultural heritage field while combining machine-readability and user-friendliness. We develop data-envelopes and test their usability on data from the Huygens Institute for History and Culture of the Netherlands. This paper presents the following contributions: i) we highlight the complexity of CH data, featuring the unique ethical and contextual considerations they entail; ii) we evaluate and compare existing dataset documentation frameworks, examining their suitability for CH datasets; iii) we introduce the {``}data-envelope{''} {--} a machine-readable adaptation of existing dataset documentation frameworks {--} to tackle the specificities of CH datasets. Its modular form is designed to serve not only the needs of machine learning (ML), but also and especially broader user groups, ranging from humanities scholars and governmental monitoring authorities to citizen scientists and the general public. Importantly, the data-envelope framework emphasises the legal and ethical dimensions of dataset documentation, facilitating compliance with evolving data protection regulations and enhancing the accountability of data stewardship in the cultural heritage sector. We discuss and invite readers to further conversation on the topic of ethical considerations, and on how different audiences should be informed about the importance of dataset documentation management and its context.
[ "Luthra, Mrinalini", "Eskevich, Maria" ]
Data-Envelopes for Cultural Heritage: Going beyond Datasheets
legal-1.9
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
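No schema is published in the record above, so the following is only a guess at what a minimal machine-readable data-envelope could contain, with field names derived from the concerns the abstract lists (provenance, legal status, ethical context, intended audiences).

```python
# Sketch: a minimal machine-readable "data-envelope".  All field names
# and values are illustrative assumptions, not a published schema.
import json

envelope = {
    "dataset": "Example cultural-heritage text collection",
    "provenance": {"holder": "Huygens Institute", "period": "17th-18th c."},
    "legal": {"licence": "CC BY 4.0", "contains_personal_data": True},
    "ethical_notes": [
        "Documents reflect historical viewpoints; contextualise before reuse.",
    ],
    "intended_audiences": [
        "ML practitioners", "humanities scholars",
        "governmental monitoring authorities", "citizen scientists",
    ],
}

print(json.dumps(envelope, indent=2, ensure_ascii=False))
```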
https://aclanthology.org/2024.legal-1.10.bib
https://aclanthology.org/2024.legal-1.10/
@inproceedings{alemadi-zaghouani-2024-emotional, title = "Emotional Toll and Coping Strategies: Navigating the Effects of Annotating Hate Speech Data", author = "AlEmadi, Maryam M. and Zaghouani, Wajdi", editor = "Siegert, Ingo and Choukri, Khalid", booktitle = "Proceedings of the Workshop on Legal and Ethical Issues in Human Language Technologies @ LREC-COLING 2024", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.legal-1.10", pages = "66--72", abstract = "Freedom of speech on online social media platforms, often comes with the cost of hate speech production. Hate speech can be very harmful to the peace and development of societies as they bring about conflict and encourage crime. To regulate the hate speech content, moderators and annotators are employed. In our research, we look at the effects of prolonged exposure to hate speech on the mental and physical health of these annotators, as well as researchers with work revolving around the topic of hate speech. Through the methodology of analyzing literature, we found that prolonged exposure to hate speech does mentally and physically impact annotators and researchers in this field. We also propose solutions to reduce these negative impacts such as providing mental health services, fair labor practices, psychological assessments and interventions, as well as developing AI to assist in the process of hate speech detection.", }
Freedom of speech on online social media platforms often comes at the cost of hate speech production. Hate speech can be very harmful to the peace and development of societies, as it brings about conflict and encourages crime. To regulate hate speech content, moderators and annotators are employed. In our research, we look at the effects of prolonged exposure to hate speech on the mental and physical health of these annotators, as well as of researchers whose work revolves around the topic of hate speech. Through a literature analysis, we found that prolonged exposure to hate speech does mentally and physically impact annotators and researchers in this field. We also propose solutions to reduce these negative impacts, such as providing mental health services, fair labor practices, and psychological assessments and interventions, as well as developing AI to assist in the process of hate speech detection.
[ "AlEmadi, Maryam M.", "Zaghouani, Wajdi" ]
Emotional Toll and Coping Strategies: Navigating the Effects of Annotating Hate Speech Data
legal-1.10
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.legal-1.11.bib
https://aclanthology.org/2024.legal-1.11/
@inproceedings{siegert-etal-2024-user, title = "User Perspective on Anonymity in Voice Assistants {--} A comparison between {G}ermany and {F}inland", author = {Siegert, Ingo and Rech, Silas and B{\"a}ckstr{\"o}m, Tom and Haase, Matthias}, editor = "Siegert, Ingo and Choukri, Khalid", booktitle = "Proceedings of the Workshop on Legal and Ethical Issues in Human Language Technologies @ LREC-COLING 2024", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.legal-1.11", pages = "73--78", abstract = "This study investigates the growing importance of voice assistants, particularly focusing on their usage patterns and associated user characteristics, trust perceptions, and concerns about data security. While previous research has identified correlations between the use of voice assistants and trust in these technologies, as well as data security concerns, little evidence exists regarding the relationship between individual user traits and perceived trust and security concerns. The study design involves surveying various user attributes, including technical proficiency, personality traits, and experience with digital technologies, alongside attitudes toward and usage of voice assistants. A comparison between Germany and Finland is conducted to explore potential cultural differences. The findings aim to inform strategies for enhancing voice assistant acceptance, including the implementation of anonymization methods.", }
This study investigates the growing importance of voice assistants, particularly focusing on their usage patterns and associated user characteristics, trust perceptions, and concerns about data security. While previous research has identified correlations between the use of voice assistants and trust in these technologies, as well as data security concerns, little evidence exists regarding the relationship between individual user traits and perceived trust and security concerns. The study design involves surveying various user attributes, including technical proficiency, personality traits, and experience with digital technologies, alongside attitudes toward and usage of voice assistants. A comparison between Germany and Finland is conducted to explore potential cultural differences. The findings aim to inform strategies for enhancing voice assistant acceptance, including the implementation of anonymization methods.
[ "Siegert, Ingo", "Rech, Silas", "B{\\\"a}ckstr{\\\"o}m, Tom", "Haase, Matthias" ]
User Perspective on Anonymity in Voice Assistants – A comparison between Germany and Finland
legal-1.11
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.lt4hala-1.1.bib
https://aclanthology.org/2024.lt4hala-1.1/
@inproceedings{anderson-etal-2024-goidelex, title = "Goidelex: A Lexical Resource for {O}ld {I}rish", author = "Anderson, Cormac and Beniamine, Sacha and Fransen, Theodorus", editor = "Sprugnoli, Rachele and Passarotti, Marco", booktitle = "Proceedings of the Third Workshop on Language Technologies for Historical and Ancient Languages (LT4HALA) @ LREC-COLING-2024", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.lt4hala-1.1", pages = "1--10", abstract = "We introduce Goidelex, a new lexical database resource for Old Irish. Goidelex is an openly accessible relational database in CSV format, linked by formal relationships. The launch version documents 695 headwords with extensive linguistic annotations, including orthographic forms using a normalised orthography, automatically generated phonemic transcriptions, and information about morphosyntactic features, such as gender, inflectional class, etc. Metadata in JSON format, following the Frictionless standard, provides detailed descriptions of the tables and dataset. The database is designed to be fully compatible with the Paralex and CLDF standards and is interoperable with existing lexical resources for Old Irish such as CorPH and eDIL. It is suited to both qualitative and quantitative investigation into Old Irish morphology and lexicon, as well as to comparative research. This paper outlines the creation process, rationale, and resulting structure of the database.", }
We introduce Goidelex, a new lexical database resource for Old Irish. Goidelex is an openly accessible relational database in CSV format, linked by formal relationships. The launch version documents 695 headwords with extensive linguistic annotations, including orthographic forms using a normalised orthography, automatically generated phonemic transcriptions, and information about morphosyntactic features, such as gender, inflectional class, etc. Metadata in JSON format, following the Frictionless standard, provides detailed descriptions of the tables and dataset. The database is designed to be fully compatible with the Paralex and CLDF standards and is interoperable with existing lexical resources for Old Irish such as CorPH and eDIL. It is suited to both qualitative and quantitative investigation into Old Irish morphology and lexicon, as well as to comparative research. This paper outlines the creation process, rationale, and resulting structure of the database.
[ "Anderson, Cormac", "Beniamine, Sacha", "Fransen, Theodorus" ]
Goidelex: A Lexical Resource for Old Irish
lt4hala-1.1
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
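The abstract above describes a Paralex/CLDF-style resource: a set of CSV tables linked by formal relationships and documented with Frictionless metadata. The sketch below shows how such linked tables might be joined with pandas. The column names and example rows are assumptions for illustration; the real table descriptions live in the dataset's Frictionless JSON metadata.

```python
# Sketch of joining Paralex-style relational CSV tables with pandas.
# In practice these would come from the released files, e.g.
# lexemes = pd.read_csv("lexemes.csv"); here we build toy tables inline.
import pandas as pd

lexemes = pd.DataFrame({
    "lexeme_id": ["lex1", "lex2"],
    "headword": ["fer", "ben"],        # hypothetical headwords
    "gender": ["masc", "fem"],         # hypothetical feature column
})
forms = pd.DataFrame({
    "form_id": ["f1", "f2", "f3"],
    "lexeme_id": ["lex1", "lex1", "lex2"],
    "orth": ["fer", "fir", "ben"],     # hypothetical orthographic forms
})

# Attach headword-level features to each inflected form.
print(forms.merge(lexemes, on="lexeme_id", how="left"))
```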
https://aclanthology.org/2024.lt4hala-1.2.bib
https://aclanthology.org/2024.lt4hala-1.2/
@inproceedings{doyle-mccrae-2024-developing, title = "Developing a Part-of-speech Tagger for Diplomatically Edited {O}ld {I}rish Text", author = "Doyle, Adrian and McCrae, John P.", editor = "Sprugnoli, Rachele and Passarotti, Marco", booktitle = "Proceedings of the Third Workshop on Language Technologies for Historical and Ancient Languages (LT4HALA) @ LREC-COLING-2024", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.lt4hala-1.2", pages = "11--21", abstract = "POS-tagging is typically considered a fundamental text preprocessing task, with a variety of downstream NLP tasks and techniques being dependent on the availability of POS-tagged corpora. As such, POS-taggers are important precursors to further NLP tasks, and their accuracy can impact the potential accuracy of these dependent tasks. While a variety of POS-tagging methods have been developed which work well with modern languages, historical languages present orthographic and editorial challenges which require special attention. The effectiveness of POS-taggers developed for modern languages is reduced when applied to Old Irish, with its comparatively complex orthography and morphology. This paper examines some of the obstacles to POS-tagging Old Irish text, and shows that inconsistencies between extant annotated corpora reduce the quantity of data available for use in training POS-taggers. The development of a multi-layer neural network model for POS-tagging Old Irish text is described, and an experiment is detailed which demonstrates that this model outperforms a variety of off-the-shelf POS-taggers. Moreover, this model sets a new benchmark for POS-tagging diplomatically edited Old Irish text.", }
POS-tagging is typically considered a fundamental text preprocessing task, with a variety of downstream NLP tasks and techniques being dependent on the availability of POS-tagged corpora. As such, POS-taggers are important precursors to further NLP tasks, and their accuracy can impact the potential accuracy of these dependent tasks. While a variety of POS-tagging methods have been developed which work well with modern languages, historical languages present orthographic and editorial challenges which require special attention. The effectiveness of POS-taggers developed for modern languages is reduced when applied to Old Irish, with its comparatively complex orthography and morphology. This paper examines some of the obstacles to POS-tagging Old Irish text, and shows that inconsistencies between extant annotated corpora reduce the quantity of data available for use in training POS-taggers. The development of a multi-layer neural network model for POS-tagging Old Irish text is described, and an experiment is detailed which demonstrates that this model outperforms a variety of off-the-shelf POS-taggers. Moreover, this model sets a new benchmark for POS-tagging diplomatically edited Old Irish text.
[ "Doyle, Adrian", "McCrae, John P." ]
Developing a Part-of-speech Tagger for Diplomatically Edited Old Irish Text
lt4hala-1.2
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
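For readers unfamiliar with the kind of model the abstract above refers to, the following is a generic multi-layer sequence-tagging network in PyTorch. It is a minimal sketch of the technique, not the authors' architecture; the vocabulary size, tagset size, and layer dimensions are dummy values.

```python
# Generic multi-layer POS-tagging network in PyTorch -- a minimal sketch,
# not the architecture from the paper. Sizes are placeholder assumptions.
import torch
import torch.nn as nn

class POSTagger(nn.Module):
    def __init__(self, vocab_size=5000, tagset_size=20, emb_dim=64, hidden=128):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hidden, batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden, tagset_size)

    def forward(self, token_ids):               # (batch, seq_len)
        h, _ = self.lstm(self.emb(token_ids))   # (batch, seq_len, 2*hidden)
        return self.out(h)                      # per-token tag scores

model = POSTagger()
dummy = torch.randint(0, 5000, (1, 7))          # one 7-token "sentence"
print(model(dummy).shape)                       # torch.Size([1, 7, 20])
```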
https://aclanthology.org/2024.lt4hala-1.3.bib
https://aclanthology.org/2024.lt4hala-1.3/
@inproceedings{brigada-villa-giarda-2024-ycoe, title = "From {YCOE} to {UD}: Rule-based Root Identification in {O}ld {E}nglish", author = "Brigada Villa, Luca and Giarda, Martina", editor = "Sprugnoli, Rachele and Passarotti, Marco", booktitle = "Proceedings of the Third Workshop on Language Technologies for Historical and Ancient Languages (LT4HALA) @ LREC-COLING-2024", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.lt4hala-1.3", pages = "22--29", abstract = "In this paper we apply a set of rules to identify the root of a dependency tree, following the Universal Dependencies formalism and starting from the constituency annotation of the York-Toronto-Helsinki Parsed Corpus of Old English Prose (YCOE). This rule-based root-identification task represents the first step towards a rule-based automatic conversion of this valuable resource into the UD format. After presenting Old English and the annotated resources available for this language, we describe the different rules we applied and then we discuss the results and the errors.", }
In this paper we apply a set of rules to identify the root of a dependency tree, following the Universal Dependencies formalism and starting from the constituency annotation of the York-Toronto-Helsinki Parsed Corpus of Old English Prose (YCOE). This rule-based root-identification task represents the first step towards a rule-based automatic conversion of this valuable resource into the UD format. After presenting Old English and the annotated resources available for this language, we describe the different rules we applied and then we discuss the results and the errors.
[ "Brigada Villa, Luca", "Giarda, Martina" ]
From YCOE to UD: Rule-based Root Identification in Old English
lt4hala-1.3
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
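To make the idea of rule-based root identification concrete, here is a toy sketch in the spirit of, but much simpler than, the YCOE-to-UD rules described above: walk a constituency tree depth-first, skip embedded clauses, and return the first finite verb as the candidate UD root. Tree encoding and tag names are assumptions for this example.

```python
# Toy sketch of rule-based root identification over a constituency tree.
# Trees are (label, children) tuples; leaves are (pos_tag, word) pairs.

def find_root(tree):
    """Depth-first search for the first finite verb of the matrix clause."""
    label, children = tree
    for child in children:
        if isinstance(child[1], str):            # leaf: (pos_tag, word)
            if child[0].startswith("VB"):        # assumed finite-verb tag
                return child[1]
        else:                                    # internal node: recurse,
            if child[0] != "CP":                 # but skip embedded clauses
                found = find_root(child)
                if found:
                    return found
    return None

sent = ("IP", [("NP", [("N", "cyning")]),       # toy Old English clause
               ("VBD", "herede"),
               ("NP", [("N", "god")])])
print(find_root(sent))                          # -> herede
```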
https://aclanthology.org/2024.lt4hala-1.4.bib
https://aclanthology.org/2024.lt4hala-1.4/
@inproceedings{provatorova-etal-2024-young, title = "Too Young to {NER}: Improving Entity Recognition on {D}utch Historical Documents", author = "Provatorova, Vera and van Erp, Marieke and Kanoulas, Evangelos", editor = "Sprugnoli, Rachele and Passarotti, Marco", booktitle = "Proceedings of the Third Workshop on Language Technologies for Historical and Ancient Languages (LT4HALA) @ LREC-COLING-2024", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.lt4hala-1.4", pages = "30--35", abstract = "Named entity recognition (NER) on historical texts is beneficial for the field of digital humanities, as it allows to easily search for the names of people, places and other entities in digitised archives. While the task of historical NER in different languages has been gaining popularity in recent years, Dutch historical NER remains an underexplored topic. Using a recently released historical dataset from the Dutch Language Institute, we train three BERT-based models and analyse the errors to identify main challenges. All three models outperform a contemporary multilingual baseline by a large margin on historical test data.", }
Named entity recognition (NER) on historical texts is beneficial for the field of digital humanities, as it makes it easy to search for the names of people, places and other entities in digitised archives. While the task of historical NER in different languages has been gaining popularity in recent years, Dutch historical NER remains an underexplored topic. Using a recently released historical dataset from the Dutch Language Institute, we train three BERT-based models and analyse the errors to identify the main challenges. All three models outperform a contemporary multilingual baseline by a large margin on historical test data.
[ "Provatorova, Vera", "van Erp, Marieke", "Kanoulas, Evangelos" ]
Too Young to NER: Improving Entity Recognition on Dutch Historical Documents
lt4hala-1.4
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
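Applying a fine-tuned BERT-style model of the kind described above typically takes only a few lines with Hugging Face transformers. In this sketch the checkpoint name is a placeholder assumption, not a model released by the authors.

```python
# Minimal sketch of historical-Dutch NER inference with transformers.
# The model name below is a hypothetical placeholder.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="your-org/bertje-historical-ner",   # hypothetical checkpoint
    aggregation_strategy="simple",            # merge word pieces into spans
)
print(ner("Rembrandt van Rijn werd geboren te Leiden."))
```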
https://aclanthology.org/2024.lt4hala-1.5.bib
https://aclanthology.org/2024.lt4hala-1.5/
@inproceedings{swanson-etal-2024-towards, title = "Towards Named-Entity and Coreference Annotation of the {H}ebrew {B}ible", author = "Swanson, Daniel G. and Bussert, Bryce D. and Tyers, Francis", editor = "Sprugnoli, Rachele and Passarotti, Marco", booktitle = "Proceedings of the Third Workshop on Language Technologies for Historical and Ancient Languages (LT4HALA) @ LREC-COLING-2024", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.lt4hala-1.5", pages = "36--40", abstract = "Named-entity annotation refers to the process of specifying what real-world (or, at least, external-to-the-text) entities various names and descriptions within a text refer to. Coreference annotation, meanwhile, specifies what context-dependent words or phrases, such as pronouns refer to. This paper describes an ongoing project to apply both of these to the Hebrew Bible, so far covering most of the book of Genesis, fully marking every person, place, object, and point in time which occurs in the text. The annotation process and possible future uses for the data are covered, along with the challenges involved in applying existing annotation guidelines to the Hebrew text.", }
Named-entity annotation refers to the process of specifying which real-world (or, at least, external-to-the-text) entities various names and descriptions within a text refer to. Coreference annotation, meanwhile, specifies what context-dependent words or phrases, such as pronouns, refer to. This paper describes an ongoing project to apply both of these to the Hebrew Bible, so far covering most of the book of Genesis, fully marking every person, place, object, and point in time which occurs in the text. The annotation process and possible future uses for the data are covered, along with the challenges involved in applying existing annotation guidelines to the Hebrew text.
[ "Swanson, Daniel G.", "Bussert, Bryce D.", "Tyers, Francis" ]
Towards Named-Entity and Coreference Annotation of the Hebrew Bible
lt4hala-1.5
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.lt4hala-1.6.bib
https://aclanthology.org/2024.lt4hala-1.6/
@inproceedings{bassani-etal-2024-lime, title = "{L}i{M}e: A {L}atin Corpus of Late Medieval Criminal Sentences", author = "Bassani, Alessanda Clara Carmela and Del Bo, Beatrice Giovanna Maria and Ferrara, Alfio and Mangini, Marta Luigina and Picascia, Sergio and Stefanello, Ambra", editor = "Sprugnoli, Rachele and Passarotti, Marco", booktitle = "Proceedings of the Third Workshop on Language Technologies for Historical and Ancient Languages (LT4HALA) @ LREC-COLING-2024", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.lt4hala-1.6", pages = "41--49", abstract = "The Latin language has received attention from the computational linguistics research community, which has built, over the years, several valuable resources, ranging from detailed annotated corpora to sophisticated tools for linguistic analysis. With the recent advent of large language models, researchers have also started developing models capable of generating vector representations of Latin texts. The performances of such models remain behind the ones for modern languages, given the disparity in available data. In this paper, we present the LiMe dataset, a corpus of 325 documents extracted from a series of medieval manuscripts called Libri sententiarum potestatis Mediolani, and thoroughly annotated by experts, in order to be employed for masked language model, as well as supervised natural language processing tasks.", }
The Latin language has received attention from the computational linguistics research community, which has built, over the years, several valuable resources, ranging from detailed annotated corpora to sophisticated tools for linguistic analysis. With the recent advent of large language models, researchers have also started developing models capable of generating vector representations of Latin texts. The performance of such models remains behind that of models for modern languages, given the disparity in available data. In this paper, we present the LiMe dataset, a corpus of 325 documents extracted from a series of medieval manuscripts called Libri sententiarum potestatis Mediolani, and thoroughly annotated by experts, in order to be employed for masked language modelling as well as supervised natural language processing tasks.
[ "Bassani, Aless", "a Clara Carmela", "Del Bo, Beatrice Giovanna Maria", "Ferrara, Alfio", "Mangini, Marta Luigina", "Picascia, Sergio", "Stefanello, Ambra" ]
LiMe: A Latin Corpus of Late Medieval Criminal Sentences
lt4hala-1.6
Poster
2404.12829
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.lt4hala-1.7.bib
https://aclanthology.org/2024.lt4hala-1.7/
@inproceedings{corbetta-etal-2024-rise, title = "The Rise and Fall of Dependency Parsing in Dante Alighieri{'}s Divine Comedy", author = "Corbetta, Claudia and Passarotti, Marco and Moretti, Giovanni", editor = "Sprugnoli, Rachele and Passarotti, Marco", booktitle = "Proceedings of the Third Workshop on Language Technologies for Historical and Ancient Languages (LT4HALA) @ LREC-COLING-2024", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.lt4hala-1.7", pages = "50--56", abstract = "In this paper, we conduct parsing experiments on Dante Alighieri{'}s Divine Comedy, an Old Italian poem composed between 1306-1321 and organized into three Cantiche {---}Inferno, Purgatorio, and Paradiso. We perform parsing on subsets of the poem using both a Modern Italian training set and sections of the Divine Comedy itself to evaluate under which scenarios parsers achieve higher scores. We find that employing in-domain training data supports better results, leading to an increase of approximately +17{\%} in Unlabeled Attachment Score (UAS) and +25-30{\%} in Labeled Attachment Score (LAS). Subsequently, we provide brief commentary on the differences in scores achieved among subsections of Cantiche, and we conduct experimental parsing on a text from the same period and style as the Divine Comedy.", }
In this paper, we conduct parsing experiments on Dante Alighieri's Divine Comedy, an Old Italian poem composed between 1306 and 1321 and organized into three Cantiche: Inferno, Purgatorio, and Paradiso. We perform parsing on subsets of the poem using both a Modern Italian training set and sections of the Divine Comedy itself to evaluate under which scenarios parsers achieve higher scores. We find that employing in-domain training data supports better results, leading to an increase of approximately +17% in Unlabeled Attachment Score (UAS) and +25-30% in Labeled Attachment Score (LAS). Subsequently, we provide brief commentary on the differences in scores achieved among subsections of the Cantiche, and we conduct experimental parsing on a text from the same period and style as the Divine Comedy.
[ "Corbetta, Claudia", "Passarotti, Marco", "Moretti, Giovanni" ]
The Rise and Fall of Dependency Parsing in Dante Alighieri's Divine Comedy
lt4hala-1.7
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
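For reference, the two metrics quoted above are straightforward to compute: UAS is the fraction of tokens whose predicted head matches the gold head, and LAS additionally requires the dependency label to match. A minimal sketch:

```python
# Sketch of computing UAS and LAS from gold vs. predicted heads and labels.

def uas_las(gold_heads, pred_heads, gold_rels, pred_rels):
    total = len(gold_heads)
    uas = sum(g == p for g, p in zip(gold_heads, pred_heads)) / total
    las = sum(gh == ph and gr == pr
              for gh, ph, gr, pr in zip(gold_heads, pred_heads,
                                        gold_rels, pred_rels)) / total
    return uas, las

# Toy 4-token sentence: heads are token indices (0 = artificial root).
print(uas_las([2, 0, 2, 3], [2, 0, 3, 3],
              ["nsubj", "root", "obj", "det"],
              ["nsubj", "root", "obj", "det"]))   # (0.75, 0.75)
```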
https://aclanthology.org/2024.lt4hala-1.8.bib
https://aclanthology.org/2024.lt4hala-1.8/
@inproceedings{de-langhe-etal-2024-unsupervised, title = "Unsupervised Authorship Attribution for Medieval {L}atin Using Transformer-Based Embeddings", author = "De Langhe, Loic and De Clercq, Orphee and Hoste, Veronique", editor = "Sprugnoli, Rachele and Passarotti, Marco", booktitle = "Proceedings of the Third Workshop on Language Technologies for Historical and Ancient Languages (LT4HALA) @ LREC-COLING-2024", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.lt4hala-1.8", pages = "57--64", abstract = "We explore the potential of employing transformer-based embeddings in an unsupervised authorship attribution task for medieval Latin. The development of Large Language Models (LLMs) and recent advances in transfer learning alleviate many of the traditional issues associated with authorship attribution in lower-resourced (ancient) languages. Despite this, these methods remain heavily understudied within this domain. Concretely, we generate strong contextual embeddings using a variety of mono -and multilingual transformer models and use these as input for two unsupervised clustering methods: a standard agglomerative clustering algorithm and a self-organizing map. We show that these transformer-based embeddings can be used to generate high-quality and interpretable clusterings, resulting in an attractive alternative to the traditional feature-based methods.", }
We explore the potential of employing transformer-based embeddings in an unsupervised authorship attribution task for medieval Latin. The development of Large Language Models (LLMs) and recent advances in transfer learning alleviate many of the traditional issues associated with authorship attribution in lower-resourced (ancient) languages. Despite this, these methods remain heavily understudied within this domain. Concretely, we generate strong contextual embeddings using a variety of mono- and multilingual transformer models and use these as input for two unsupervised clustering methods: a standard agglomerative clustering algorithm and a self-organizing map. We show that these transformer-based embeddings can be used to generate high-quality and interpretable clusterings, resulting in an attractive alternative to traditional feature-based methods.
[ "De Langhe, Loic", "De Clercq, Orphee", "Hoste, Veronique" ]
Unsupervised Authorship Attribution for Medieval Latin Using Transformer-Based Embeddings
lt4hala-1.8
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
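One of the two clustering methods named above, agglomerative clustering over document embeddings, can be sketched in a few lines with scikit-learn. The embeddings here are random stand-ins; in practice they would come from a (multilingual) transformer encoder, as in the paper.

```python
# Sketch of unsupervised authorship clustering over document embeddings.
import numpy as np
from sklearn.cluster import AgglomerativeClustering

rng = np.random.default_rng(0)
doc_embeddings = rng.normal(size=(12, 768))     # 12 texts, 768-dim stand-ins

clusterer = AgglomerativeClustering(n_clusters=3, metric="cosine",
                                    linkage="average")
labels = clusterer.fit_predict(doc_embeddings)
print(labels)                                   # candidate author groups
```

Cosine distance with average linkage is a common choice for embedding spaces; the paper's exact settings may differ.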
https://aclanthology.org/2024.lt4hala-1.9.bib
https://aclanthology.org/2024.lt4hala-1.9/
@inproceedings{dereza-etal-2024-million, title = "{``}To Have the {`}Million{'} Readers Yet{''}: Building a Digitally Enhanced Edition of the Bilingual {I}rish-{E}nglish Newspaper an Gaodhal (1881-1898)", author = "Dereza, Oksana and N{\'\i} Chonghaile, Deirdre and Wolf, Nicholas", editor = "Sprugnoli, Rachele and Passarotti, Marco", booktitle = "Proceedings of the Third Workshop on Language Technologies for Historical and Ancient Languages (LT4HALA) @ LREC-COLING-2024", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.lt4hala-1.9", pages = "65--78", abstract = "This paper introduces the {`}An Gaodhal{'} project, which aims to serve the historically under-resourced and endangered language of Irish (known as Gaeilge) by providing new digital tools and resources. The initial goal of the project was the extraction of full text of {`}An Gaodhal{'}, a monthly bilingual Irish-English newspaper produced from 1881 to 1898, to the highest possible degree of accuracy via Optical Character Recognition (OCR), with a view to making its printed content searchable. The methodology applied toward achieving this goal yielded additional digital outputs including: 1. a new OCR model for the Irish language as printed in Cl{\'o} Gaelach type; 2. a new OCR model for bilingual Irish-English content printed in Cl{\'o} Gaelach and Roman types respectively; 3. a BART-based OCR post-correction model for historical bilingual Irish-English data; 4. a historical Irish training set for Named Entity Recognition (NER). All but the first of these four additional outputs appear to be the first of their kind. Each of the project outputs, including the full-text OCR outputs in ALTO XML format, is set for public release to enable open-access research. The paper also identifies the challenges historical Irish data poses to Natural Language Processing (NLP) in general and OCR in particular, and reports on project results and outputs to date. Finally, it contextualises the project within the wider field of NLP and considers its potential impact on under-resourced languages worldwide.", }
This paper introduces the 'An Gaodhal' project, which aims to serve the historically under-resourced and endangered language of Irish (known as Gaeilge) by providing new digital tools and resources. The initial goal of the project was the extraction of the full text of 'An Gaodhal', a monthly bilingual Irish-English newspaper produced from 1881 to 1898, to the highest possible degree of accuracy via Optical Character Recognition (OCR), with a view to making its printed content searchable. The methodology applied toward achieving this goal yielded additional digital outputs including: 1. a new OCR model for the Irish language as printed in Cló Gaelach type; 2. a new OCR model for bilingual Irish-English content printed in Cló Gaelach and Roman types respectively; 3. a BART-based OCR post-correction model for historical bilingual Irish-English data; 4. a historical Irish training set for Named Entity Recognition (NER). All but the first of these four additional outputs appear to be the first of their kind. Each of the project outputs, including the full-text OCR outputs in ALTO XML format, is set for public release to enable open-access research. The paper also identifies the challenges historical Irish data poses to Natural Language Processing (NLP) in general and OCR in particular, and reports on project results and outputs to date. Finally, it contextualises the project within the wider field of NLP and considers its potential impact on under-resourced languages worldwide.
[ "Dereza, Oksana", "N{\\'\\i} Chonghaile, Deirdre", "Wolf, Nicholas" ]
“To Have the 'Million' Readers Yet”: Building a Digitally Enhanced Edition of the Bilingual Irish-English Newspaper an Gaodhal (1881-1898)
lt4hala-1.9
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.lt4hala-1.10.bib
https://aclanthology.org/2024.lt4hala-1.10/
@inproceedings{luraghi-etal-2024-introducing, title = "Introducing {P}a{V}e{D}a {--} {P}avia Verbs Database: Valency Patterns and Pattern Comparison in {A}ncient {I}ndo-{E}uropean Languages", author = "Luraghi, Silvia and Palmero Aprosio, Alessio and Zanchi, Chiara and Giuliani, Martina", editor = "Sprugnoli, Rachele and Passarotti, Marco", booktitle = "Proceedings of the Third Workshop on Language Technologies for Historical and Ancient Languages (LT4HALA) @ LREC-COLING-2024", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.lt4hala-1.10", pages = "79--88", abstract = "The paper introduces [DATASET], a resource that builds on the ValPaL database of verbs{'} valency patterns and alternations by adding a number of ancient languages (completely absent from ValPaL) and a number of new features that enable direct comparison, both diachronic and synchronic. For each verb, ValPaL contains the basic frame and ideally all possible valency alternations allowed by the verb (e.g. passive, causative, reflexive etc.). In order to enable comparison among alternations, an additional level has been added, the alternation class, that overcomes the issue of comparing language specific alternations which were added by individual contributors of ValPaL. The ValPaL had as its main aim typological comparison, and data collection was variously carried out using questionnaires, secondary sources and largely drawing on native speakers{'} intuition by contributors. Working with ancient languages entails a methodological change, as the data is extracted from corpora. This has led to re-thinking the notion of valency as a usage-based feature of verbs and to planning future addition of corpus data to modern languages in the database. It further shows the impact of ancient languages on theoretical reflection.", }
The paper introduces [DATASET], a resource that builds on the ValPaL database of verbs' valency patterns and alternations by adding a number of ancient languages (completely absent from ValPaL) and a number of new features that enable direct comparison, both diachronic and synchronic. For each verb, ValPaL contains the basic frame and ideally all possible valency alternations allowed by the verb (e.g. passive, causative, reflexive etc.). In order to enable comparison among alternations, an additional level has been added, the alternation class, which overcomes the issue of comparing language-specific alternations added by individual contributors of ValPaL. ValPaL had typological comparison as its main aim, and data collection was variously carried out using questionnaires and secondary sources, with contributors largely drawing on native speakers' intuition. Working with ancient languages entails a methodological change, as the data is extracted from corpora. This has led to re-thinking the notion of valency as a usage-based feature of verbs and to planning the future addition of corpus data for modern languages to the database. It further shows the impact of ancient languages on theoretical reflection.
[ "Luraghi, Silvia", "Palmero Aprosio, Alessio", "Zanchi, Chiara", "Giuliani, Martina" ]
Introducing PaVeDa – Pavia Verbs Database: Valency Patterns and Pattern Comparison in Ancient Indo-European Languages
lt4hala-1.10
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.lt4hala-1.11.bib
https://aclanthology.org/2024.lt4hala-1.11/
@inproceedings{palladino-yousef-2024-development, title = "Development of Robust {NER} Models and Named Entity Tagsets for {A}ncient {G}reek", author = "Palladino, Chiara and Yousef, Tariq", editor = "Sprugnoli, Rachele and Passarotti, Marco", booktitle = "Proceedings of the Third Workshop on Language Technologies for Historical and Ancient Languages (LT4HALA) @ LREC-COLING-2024", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.lt4hala-1.11", pages = "89--97", abstract = "This contribution presents a novel approach to the development and evaluation of transformer-based models for Named Entity Recognition and Classification in Ancient Greek texts. We trained two models with annotated datasets by consolidating potentially ambiguous entity types under a harmonized set of classes. Then, we tested their performance with out-of-domain texts, reproducing a real-world use case. Both models performed very well under these conditions, with the multilingual model being slightly superior on the monolingual one. In the conclusion, we emphasize current limitations due to the scarcity of high-quality annotated corpora and to the lack of cohesive annotation strategies for ancient languages.", }
This contribution presents a novel approach to the development and evaluation of transformer-based models for Named Entity Recognition and Classification in Ancient Greek texts. We trained two models with annotated datasets by consolidating potentially ambiguous entity types under a harmonized set of classes. Then, we tested their performance with out-of-domain texts, reproducing a real-world use case. Both models performed very well under these conditions, with the multilingual model being slightly superior to the monolingual one. In the conclusion, we emphasize current limitations due to the scarcity of high-quality annotated corpora and to the lack of cohesive annotation strategies for ancient languages.
[ "Palladino, Chiara", "Yousef, Tariq" ]
Development of Robust NER Models and Named Entity Tagsets for Ancient Greek
lt4hala-1.11
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.lt4hala-1.12.bib
https://aclanthology.org/2024.lt4hala-1.12/
@inproceedings{roman-meyer-2024-analysis, title = "Analysis of Glyph and Writing System Similarities Using {S}iamese Neural Networks", author = "Roman, Claire and Meyer, Philippe", editor = "Sprugnoli, Rachele and Passarotti, Marco", booktitle = "Proceedings of the Third Workshop on Language Technologies for Historical and Ancient Languages (LT4HALA) @ LREC-COLING-2024", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.lt4hala-1.12", pages = "98--104", abstract = "In this paper we use siamese neural networks to compare glyphs and writing systems. These deep learning models define distance-like functions and are used to explore and visualize the space of scripts by performing multidimensional scaling and clustering analyses. From 51 historical European, Mediterranean and Middle Eastern alphabets, we use a Ward-linkage hierarchical clustering and obtain 10 clusters of scripts including three isolated writing systems. To collect the glyph database we use the Noto family fonts that encode in a standard form the Unicode character repertoire. This approach has the potential to reveal connections among scripts and civilizations and to help the deciphering of ancient scripts.", }
In this paper we use Siamese neural networks to compare glyphs and writing systems. These deep learning models define distance-like functions and are used to explore and visualize the space of scripts by performing multidimensional scaling and clustering analyses. Applying Ward-linkage hierarchical clustering to 51 historical European, Mediterranean and Middle Eastern alphabets, we obtain 10 clusters of scripts, including three isolated writing systems. To collect the glyph database we use the Noto family fonts, which encode the Unicode character repertoire in a standard form. This approach has the potential to reveal connections among scripts and civilizations and to help decipher ancient scripts.
[ "Roman, Claire", "Meyer, Philippe" ]
Analysis of Glyph and Writing System Similarities Using Siamese Neural Networks
lt4hala-1.12
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
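The core idea above, one shared encoder applied to two glyph images with a loss that pulls same-system pairs together, can be sketched compactly in PyTorch. The architecture, image size, and loss below are illustrative assumptions, not the paper's configuration.

```python
# Minimal Siamese-network sketch for glyph similarity with contrastive loss.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GlyphEncoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Flatten(), nn.Linear(16 * 16 * 16, 64),
        )
    def forward(self, x):                       # x: (batch, 1, 32, 32)
        return self.net(x)

def contrastive_loss(z1, z2, same, margin=1.0):
    d = F.pairwise_distance(z1, z2)             # learned distance function
    return (same * d.pow(2) +
            (1 - same) * F.relu(margin - d).pow(2)).mean()

enc = GlyphEncoder()                            # one encoder, shared weights
a, b = torch.randn(4, 1, 32, 32), torch.randn(4, 1, 32, 32)
same = torch.tensor([1., 0., 1., 0.])           # 1 = same writing system
print(contrastive_loss(enc(a), enc(b), same).item())
```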
https://aclanthology.org/2024.lt4hala-1.13.bib
https://aclanthology.org/2024.lt4hala-1.13/
@inproceedings{sprugnoli-redaelli-2024-annotate, title = "How to Annotate Emotions in Historical {I}talian Novels: A Case Study on {I} Promessi Sposi", author = "Sprugnoli, Rachele and Redaelli, Arianna", editor = "Sprugnoli, Rachele and Passarotti, Marco", booktitle = "Proceedings of the Third Workshop on Language Technologies for Historical and Ancient Languages (LT4HALA) @ LREC-COLING-2024", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.lt4hala-1.13", pages = "105--115", abstract = "This paper describes the annotation of a chapter taken from I Promessi Sposi, the most famous Italian novel of the 19th century written by Alessandro Manzoni, following 3 emotion classifications. The aim of this methodological paper is to understand: i) how the annotation procedure changes depending on the granularity of the classification, ii) how the different granularities impact the inter-annotator agreement, iii) which granularity allows good coverage of emotions, iv) if the chosen classifications are missing emotions that are important for historical literary texts. The opinion of non-experts is integrated in the present study through an online questionnaire. In addition, preliminary experiments are carried out using the new dataset as a test set to evaluate the performances of different approaches for emotion polarity detection and emotion classification respectively. Annotated data are released both as aggregated gold standard and with non-aggregated labels (that is labels before reconciliation between annotators) so to align with the perspectivist approach, that is an established practice in the Humanities and, more recently, also in NLP.", }
This paper describes the annotation of a chapter taken from I Promessi Sposi, the most famous Italian novel of the 19th century, written by Alessandro Manzoni, following 3 emotion classifications. The aim of this methodological paper is to understand: i) how the annotation procedure changes depending on the granularity of the classification, ii) how the different granularities impact the inter-annotator agreement, iii) which granularity allows good coverage of emotions, iv) whether the chosen classifications are missing emotions that are important for historical literary texts. The opinion of non-experts is integrated into the present study through an online questionnaire. In addition, preliminary experiments are carried out using the new dataset as a test set to evaluate the performance of different approaches for emotion polarity detection and emotion classification respectively. Annotated data are released both as an aggregated gold standard and with non-aggregated labels (that is, labels before reconciliation between annotators), so as to align with the perspectivist approach, an established practice in the Humanities and, more recently, also in NLP.
[ "Sprugnoli, Rachele", "Redaelli, Arianna" ]
How to Annotate Emotions in Historical Italian Novels: A Case Study on I Promessi Sposi
lt4hala-1.13
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
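One common way to quantify how label granularity affects inter-annotator agreement, as studied above, is Cohen's kappa between two annotators. A minimal sketch with toy labels:

```python
# Sketch of measuring inter-annotator agreement with Cohen's kappa.
# Labels are invented examples, not data from the paper.
from sklearn.metrics import cohen_kappa_score

annotator_a = ["joy", "sadness", "joy", "fear", "joy", "sadness"]
annotator_b = ["joy", "sadness", "fear", "fear", "joy", "joy"]
print(cohen_kappa_score(annotator_a, annotator_b))
```

Recomputing kappa after collapsing fine-grained labels into coarser classes shows directly how granularity trades off against agreement.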
https://aclanthology.org/2024.lt4hala-1.14.bib
https://aclanthology.org/2024.lt4hala-1.14/
@inproceedings{thomas-etal-2024-leveraging, title = "Leveraging {LLM}s for Post-{OCR} Correction of Historical Newspapers", author = "Thomas, Alan and Gaizauskas, Robert and Lu, Haiping", editor = "Sprugnoli, Rachele and Passarotti, Marco", booktitle = "Proceedings of the Third Workshop on Language Technologies for Historical and Ancient Languages (LT4HALA) @ LREC-COLING-2024", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.lt4hala-1.14", pages = "116--121", abstract = "Poor OCR quality continues to be a major obstacle for humanities scholars seeking to make use of digitised primary sources such as historical newspapers. Typical approaches to post-OCR correction employ sequence-to-sequence models for a neural machine translation task, mapping erroneous OCR texts to accurate reference texts. We shift our focus towards the adaptation of generative LLMs for a prompt-based approach. By instruction-tuning Llama 2 and comparing it to a fine-tuned BART on BLN600, a parallel corpus of 19th century British newspaper articles, we demonstrate the potential of a prompt-based approach in detecting and correcting OCR errors, even with limited training data. We achieve a significant enhancement in OCR quality with Llama 2 outperforming BART, achieving a 54.51{\%} reduction in the character error rate against BART{'}s 23.30{\%}. This paves the way for future work leveraging generative LLMs to improve the accessibility and unlock the full potential of historical texts for humanities research.", }
Poor OCR quality continues to be a major obstacle for humanities scholars seeking to make use of digitised primary sources such as historical newspapers. Typical approaches to post-OCR correction employ sequence-to-sequence models for a neural machine translation task, mapping erroneous OCR texts to accurate reference texts. We shift our focus towards the adaptation of generative LLMs for a prompt-based approach. By instruction-tuning Llama 2 and comparing it to a fine-tuned BART on BLN600, a parallel corpus of 19th century British newspaper articles, we demonstrate the potential of a prompt-based approach in detecting and correcting OCR errors, even with limited training data. We achieve a significant enhancement in OCR quality with Llama 2 outperforming BART, achieving a 54.51% reduction in the character error rate against BART's 23.30%. This paves the way for future work leveraging generative LLMs to improve the accessibility and unlock the full potential of historical texts for humanities research.
[ "Thomas, Alan", "Gaizauskas, Robert", "Lu, Haiping" ]
Leveraging LLMs for Post-OCR Correction of Historical Newspapers
lt4hala-1.14
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
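The metric behind the figures quoted above, character error rate (CER), is the character-level Levenshtein distance between hypothesis and reference, divided by the reference length. A self-contained sketch:

```python
# Sketch of character error rate (CER) via edit-distance dynamic programming.

def cer(reference: str, hypothesis: str) -> float:
    m, n = len(reference), len(hypothesis)
    prev = list(range(n + 1))
    for i in range(1, m + 1):
        cur = [i] + [0] * n
        for j in range(1, n + 1):
            cost = 0 if reference[i - 1] == hypothesis[j - 1] else 1
            cur[j] = min(prev[j] + 1,        # deletion
                         cur[j - 1] + 1,     # insertion
                         prev[j - 1] + cost) # substitution
        prev = cur
    return prev[n] / m

print(cer("historical newspapers", "historicaI newspapors"))  # ~0.095
```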
https://aclanthology.org/2024.lt4hala-1.15.bib
https://aclanthology.org/2024.lt4hala-1.15/
@inproceedings{volk-etal-2024-llm, title = "{LLM}-based Machine Translation and Summarization for {L}atin", author = {Volk, Martin and Fischer, Dominic Philipp and Fischer, Lukas and Scheurer, Patricia and Str{\"o}bel, Phillip Benjamin}, editor = "Sprugnoli, Rachele and Passarotti, Marco", booktitle = "Proceedings of the Third Workshop on Language Technologies for Historical and Ancient Languages (LT4HALA) @ LREC-COLING-2024", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.lt4hala-1.15", pages = "122--128", abstract = "This paper presents an evaluation of machine translation for Latin. We tested multilingual Large Language Models, in particular GPT-4, on letters from the 16th century that are in Latin and Early New High German. Our experiments include translation and cross-language summarization for the two historical languages into modern English and German. We show that LLM-based translation for Latin is clearly superior to previous approaches. We also show that LLM-based paraphrasing of Latin paragraphs from the historical letters produces English and German summaries that are close to human summaries published in the edition.", }
This paper presents an evaluation of machine translation for Latin. We tested multilingual Large Language Models, in particular GPT-4, on letters from the 16th century that are in Latin and Early New High German. Our experiments include translation and cross-language summarization for the two historical languages into modern English and German. We show that LLM-based translation for Latin is clearly superior to previous approaches. We also show that LLM-based paraphrasing of Latin paragraphs from the historical letters produces English and German summaries that are close to human summaries published in the edition.
[ "Volk, Martin", "Fischer, Dominic Philipp", "Fischer, Lukas", "Scheurer, Patricia", "Str{\\\"o}bel, Phillip Benjamin" ]
LLM-based Machine Translation and Summarization for Latin
lt4hala-1.15
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.lt4hala-1.16.bib
https://aclanthology.org/2024.lt4hala-1.16/
@inproceedings{dejaeghere-etal-2024-exploring, title = "Exploring Aspect-Based Sentiment Analysis Methodologies for Literary-Historical Research Purposes", author = "Dejaeghere, Tess and Singh, Pranaydeep and Lefever, Els and Birkholz, Julie", editor = "Sprugnoli, Rachele and Passarotti, Marco", booktitle = "Proceedings of the Third Workshop on Language Technologies for Historical and Ancient Languages (LT4HALA) @ LREC-COLING-2024", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.lt4hala-1.16", pages = "129--143", abstract = "This study explores aspect-based sentiment analysis (ABSA) methodologies for literary-historical research, aiming to address the limitations of traditional sentiment analysis in understanding nuanced aspects of literature. It evaluates three ABSA toolchains: rule-based, machine learning-based (utilizing BERT and MacBERTh embeddings), and a prompt-based workflow with Mixtral 8x7B. Findings highlight challenges and potentials of ABSA for literary-historical analysis, emphasizing the need for context-aware annotation strategies and technical skills. The research contributes by curating a multilingual corpus of travelogues, publishing an annotated dataset for ABSA, creating openly available Jupyter Notebooks with Python code for each modeling approach, conducting pilot experiments on literary-historical texts, and proposing future endeavors to advance ABSA methodologies in this domain.", }
This study explores aspect-based sentiment analysis (ABSA) methodologies for literary-historical research, aiming to address the limitations of traditional sentiment analysis in understanding nuanced aspects of literature. It evaluates three ABSA toolchains: rule-based, machine learning-based (utilizing BERT and MacBERTh embeddings), and a prompt-based workflow with Mixtral 8x7B. Findings highlight challenges and potentials of ABSA for literary-historical analysis, emphasizing the need for context-aware annotation strategies and technical skills. The research contributes by curating a multilingual corpus of travelogues, publishing an annotated dataset for ABSA, creating openly available Jupyter Notebooks with Python code for each modeling approach, conducting pilot experiments on literary-historical texts, and proposing future endeavors to advance ABSA methodologies in this domain.
[ "Dejaeghere, Tess", "Singh, Pranaydeep", "Lefever, Els", "Birkholz, Julie" ]
Exploring Aspect-Based Sentiment Analysis Methodologies for Literary-Historical Research Purposes
lt4hala-1.16
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.lt4hala-1.17.bib
https://aclanthology.org/2024.lt4hala-1.17/
@inproceedings{debaene-etal-2024-early, title = "Early {M}odern {D}utch Comedies and Farces in the Spotlight: Introducing {E}m{DC}om{F} and Its Emotion Framework", author = "Debaene, Florian and van der Haven, Kornee and Hoste, Veronique", editor = "Sprugnoli, Rachele and Passarotti, Marco", booktitle = "Proceedings of the Third Workshop on Language Technologies for Historical and Ancient Languages (LT4HALA) @ LREC-COLING-2024", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.lt4hala-1.17", pages = "144--155", abstract = {As computational drama studies are developing rapidly, the Dutch dramatic tradition is in need of centralisation still before it can benefit from state-of-the-art methodologies. This paper presents and evaluates EmDComF, a historical corpus of both manually curated and automatically digitised early modern Dutch comedies and farces authored between 1650 and 1725, and describes the refinement of a historically motivated annotation framework exploring sentiment and emotions in these two dramatic subgenres. Originating from Lodewijk Meyer{'}s philosophical writings on passions in the dramatic genre ({\mbox{$\pm$}}1670), published in Naauwkeurig onderwys in de tooneel-po{\"e}zy (Thorough instruction in the Poetics of Drama) by the literary society Nil Volentibus Arduum in 1765, a historical and genre-specific emotion framework is tested and operationalised for annotating emotions in the domain of early modern Dutch comedies and farces. Based on a frequency and cluster analysis of 782 annotated sentences by 2 expert annotators, the initial 38 emotion labels were restructured to a hierarchical label set of the 5 emotions Hatred, Anxiety, Sadness, Joy and Desire.}, }
As computational drama studies are developing rapidly, the Dutch dramatic tradition is still in need of centralisation before it can benefit from state-of-the-art methodologies. This paper presents and evaluates EmDComF, a historical corpus of both manually curated and automatically digitised early modern Dutch comedies and farces authored between 1650 and 1725, and describes the refinement of a historically motivated annotation framework exploring sentiment and emotions in these two dramatic subgenres. Originating from Lodewijk Meyer's philosophical writings on passions in the dramatic genre (±1670), published in Naauwkeurig onderwys in de tooneel-poëzy (Thorough instruction in the Poetics of Drama) by the literary society Nil Volentibus Arduum in 1765, a historical and genre-specific emotion framework is tested and operationalised for annotating emotions in the domain of early modern Dutch comedies and farces. Based on a frequency and cluster analysis of 782 sentences annotated by 2 expert annotators, the initial 38 emotion labels were restructured into a hierarchical label set of the 5 emotions Hatred, Anxiety, Sadness, Joy and Desire.
[ "Debaene, Florian", "van der Haven, Kornee", "Hoste, Veronique" ]
Early Modern Dutch Comedies and Farces in the Spotlight: Introducing EmDComF and Its Emotion Framework
lt4hala-1.17
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.lt4hala-1.18.bib
https://aclanthology.org/2024.lt4hala-1.18/
@inproceedings{munoz-sanchez-2024-hieroglyphs, title = "When Hieroglyphs Meet Technology: A Linguistic Journey through {A}ncient {E}gypt Using Natural Language Processing", author = "Mu{\~n}oz S{\'a}nchez, Ricardo", editor = "Sprugnoli, Rachele and Passarotti, Marco", booktitle = "Proceedings of the Third Workshop on Language Technologies for Historical and Ancient Languages (LT4HALA) @ LREC-COLING-2024", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.lt4hala-1.18", pages = "156--169", abstract = "Knowing our past can help us better understand our future. The explosive development of NLP in these past few decades has allowed us to study ancient languages and cultures in ways that we couldn{'}t have done in the past. However, not all languages have received the same level of attention. Despite its popularity in pop culture, the languages spoken in Ancient Egypt have been somewhat overlooked in terms of NLP research. In this paper we give an overview of how NLP has been used to study different variations of the Ancient Egyptian languages. This not only includes Old, Middle, and Late Egyptian but also Demotic and Coptic. We begin our survey paper by giving a short introduction to these languages and their writing systems, before talking about the corpora and lexical resources that are available digitally. We then show the different NLP tasks that have been tackled for different variations of Ancient Egyptian, as well as the approaches that have been used. We hope that our work can stoke interest in the study of these languages within the NLP community.", }
Knowing our past can help us better understand our future. The explosive development of NLP in these past few decades has allowed us to study ancient languages and cultures in ways that we couldn't have done in the past. However, not all languages have received the same level of attention. Despite their popularity in pop culture, the languages spoken in Ancient Egypt have been somewhat overlooked in terms of NLP research. In this paper we give an overview of how NLP has been used to study different variations of the Ancient Egyptian languages. These include not only Old, Middle, and Late Egyptian but also Demotic and Coptic. We begin our survey paper by giving a short introduction to these languages and their writing systems, before discussing the corpora and lexical resources that are available digitally. We then show the different NLP tasks that have been tackled for different variations of Ancient Egyptian, as well as the approaches that have been used. We hope that our work can stoke interest in the study of these languages within the NLP community.
[ "Mu{\\~n}oz S{\\'a}nchez, Ricardo" ]
When Hieroglyphs Meet Technology: A Linguistic Journey through Ancient Egypt Using Natural Language Processing
lt4hala-1.18
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.lt4hala-1.19.bib
https://aclanthology.org/2024.lt4hala-1.19/
@inproceedings{laurs-2024-towards, title = "Towards a Readability Formula for {L}atin", author = "Laurs, Thomas", editor = "Sprugnoli, Rachele and Passarotti, Marco", booktitle = "Proceedings of the Third Workshop on Language Technologies for Historical and Ancient Languages (LT4HALA) @ LREC-COLING-2024", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.lt4hala-1.19", pages = "170--175", abstract = "This research focuses on the development of a readability formula for Latin texts, a much-needed tool to assess the difficulty of Latin texts in educational settings. This study takes a comprehensive approach, exploring more than 100 linguistic variables, including lexical, morphological, syntactical, and discourse-related factors, to capture the multifaceted nature of text difficulty. The study incorporates a corpus of Latin texts that were assessed for difficulty, and their evaluations were used to establish the basis for the model. The research utilizes natural language processing tools to derive linguistic predictors, resulting in a multiple linear regression model that explains about 70{\%} of the variance in text difficulty. While the model{'}s precision can be enhanced by adding further variables and a larger corpus, it already provides valuable insights into the readability of Latin texts and offers the opportunity to examine how different text genres and contents influence text accessibility. Additionally, the formula{'}s focus on objective text difficulty paves the way for future research on personal predictors, particularly in educational contexts.", }
This research focuses on the development of a readability formula for Latin texts, a much-needed tool to assess the difficulty of Latin texts in educational settings. This study takes a comprehensive approach, exploring more than 100 linguistic variables, including lexical, morphological, syntactical, and discourse-related factors, to capture the multifaceted nature of text difficulty. The study incorporates a corpus of Latin texts that were assessed for difficulty, and their evaluations were used to establish the basis for the model. The research utilizes natural language processing tools to derive linguistic predictors, resulting in a multiple linear regression model that explains about 70{\%} of the variance in text difficulty. While the model{'}s precision can be enhanced by adding further variables and a larger corpus, it already provides valuable insights into the readability of Latin texts and offers the opportunity to examine how different text genres and contents influence text accessibility. Additionally, the formula{'}s focus on objective text difficulty paves the way for future research on personal predictors, particularly in educational contexts.
[ "Laurs, Thomas" ]
Towards a Readability Formula for Latin
lt4hala-1.19
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
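The modelling approach described above, a multiple linear regression from linguistic features to judged difficulty, can be sketched in a few lines with scikit-learn. Feature values and difficulty scores below are synthetic stand-ins, not the paper's data.

```python
# Sketch of fitting a multiple linear regression for text difficulty.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(42)
X = rng.normal(size=(200, 5))          # e.g. word length, clause depth, ...
true_w = np.array([0.8, 0.3, 0.0, -0.5, 0.1])
y = X @ true_w + rng.normal(scale=0.5, size=200)   # noisy difficulty scores

model = LinearRegression().fit(X, y)
print(round(model.score(X, y), 3))     # R^2, cf. the ~70% reported above
```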
https://aclanthology.org/2024.lt4hala-1.20.bib
https://aclanthology.org/2024.lt4hala-1.20/
@inproceedings{rubino-etal-2024-automatic, title = "Automatic Normalisation of {M}iddle {F}rench and Its Impact on Productivity", author = "Rubino, Raphael and Coram-Mekkey, Sandra and Gerlach, Johanna and Mutal, Jonathan David and Bouillon, Pierrette", editor = "Sprugnoli, Rachele and Passarotti, Marco", booktitle = "Proceedings of the Third Workshop on Language Technologies for Historical and Ancient Languages (LT4HALA) @ LREC-COLING-2024", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.lt4hala-1.20", pages = "176--189", abstract = "This paper presents a study on automatic normalisation of 16th century documents written in Middle French. These documents present a large variety of wordforms which require spelling normalisation to facilitate downstream linguistic and historical studies. We frame the normalisation process as a machine translation task starting with a strong baseline leveraging a pre-trained encoder{--}decoder model. We propose to improve this baseline by combining synthetic data generation methods and producing artificial training data, thus tackling the lack of parallel corpora relevant to our task. The evaluation of our approach is twofold: in addition to automatic metrics relying on gold references, we evaluate our models through post-editing of their outputs. This evaluation method directly measures the productivity gain brought by our models to experts conducting the normalisation task manually. Results show a 20+ token per minute increase in productivity when using automatic normalisation compared to normalising text from scratch. The manually post-edited dataset resulting from our study is the first parallel corpus of normalised 16th century Middle French to be publicly released, along with the synthetic data and the automatic normalisation models used and trained in the presented work.", }
This paper presents a study on automatic normalisation of 16th century documents written in Middle French. These documents present a large variety of wordforms which require spelling normalisation to facilitate downstream linguistic and historical studies. We frame the normalisation process as a machine translation task starting with a strong baseline leveraging a pre-trained encoder{--}decoder model. We propose to improve this baseline by combining synthetic data generation methods and producing artificial training data, thus tackling the lack of parallel corpora relevant to our task. The evaluation of our approach is twofold: in addition to automatic metrics relying on gold references, we evaluate our models through post-editing of their outputs. This evaluation method directly measures the productivity gain brought by our models to experts conducting the normalisation task manually. Results show a 20+ token per minute increase in productivity when using automatic normalisation compared to normalising text from scratch. The manually post-edited dataset resulting from our study is the first parallel corpus of normalised 16th century Middle French to be publicly released, along with the synthetic data and the automatic normalisation models used and trained in the presented work.
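The abstract frames normalisation as machine translation with a pre-trained encoder-decoder. A minimal sketch of that framing with the Hugging Face transformers library follows; the model name (t5-small) and the Middle French example are stand-ins, and a real system would first be fine-tuned on (historical, normalised) pairs such as the ones the paper releases.

```python
# Sketch: spelling normalisation as seq2seq generation. Without fine-tuning
# on normalisation pairs this will not normalise anything; it only shows
# the input/output plumbing of the machine-translation framing.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")

src = "normalise: aultre foys estoit ung roy"  # invented Middle French input
inputs = tokenizer(src, return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```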
[ "Rubino, Raphael", "Coram-Mekkey, S", "ra", "Gerlach, Johanna", "Mutal, Jonathan David", "Bouillon, Pierrette" ]
Automatic Normalisation of Middle French and Its Impact on Productivity
lt4hala-1.20
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.lt4hala-1.21.bib
https://aclanthology.org/2024.lt4hala-1.21/
@inproceedings{sprugnoli-etal-2024-overview, title = "Overview of the {E}va{L}atin 2024 Evaluation Campaign", author = "Sprugnoli, Rachele and Iurescia, Federica and Passarotti, Marco", editor = "Sprugnoli, Rachele and Passarotti, Marco", booktitle = "Proceedings of the Third Workshop on Language Technologies for Historical and Ancient Languages (LT4HALA) @ LREC-COLING-2024", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.lt4hala-1.21", pages = "190--197", abstract = "This paper describes the organization and the results of the third edition of EvaLatin, the campaign for the evaluation of Natural Language Processing tools for Latin. The two shared tasks proposed in EvaLatin 2024, i.e., Dependency Parsing and Emotion Polarity Detection, aim to foster research in the field of language technologies for Classical languages. The shared datasets are described and the results obtained by the participants for each task are presented and discussed.", }
This paper describes the organization and the results of the third edition of EvaLatin, the campaign for the evaluation of Natural Language Processing tools for Latin. The two shared tasks proposed in EvaLatin 2024, i.e., Dependency Parsing and Emotion Polarity Detection, aim to foster research in the field of language technologies for Classical languages. The shared datasets are described and the results obtained by the participants for each task are presented and discussed.
[ "Sprugnoli, Rachele", "Iurescia, Federica", "Passarotti, Marco" ]
Overview of the EvaLatin 2024 Evaluation Campaign
lt4hala-1.21
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.lt4hala-1.22.bib
https://aclanthology.org/2024.lt4hala-1.22/
@inproceedings{behr-2024-behr, title = "Behr at {E}va{L}atin 2024: {L}atin Dependency Parsing Using Historical Sentence Embeddings", author = "Behr, Rufus", editor = "Sprugnoli, Rachele and Passarotti, Marco", booktitle = "Proceedings of the Third Workshop on Language Technologies for Historical and Ancient Languages (LT4HALA) @ LREC-COLING-2024", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.lt4hala-1.22", pages = "198--202", abstract = "This paper identifies the system used for my submission to EvaLatin{'}s shared dependency parsing task as part of the LT4HALA 2024 workshop. EvaLatin presented new Latin prose and poetry dependency test data from potentially different time periods, and imposed no restriction on training data or model selection for the task. This paper, therefore, sought to build a general Latin dependency parser that would perform accurately regardless of the Latin age to which the test data belongs. To train a general parser, all of the available Universal Dependencies treebanks were used, but in order to address the changes in the Latin language over time, this paper introduces historical sentence embeddings. A model was trained to encode sentences of the same Latin age into vectors of high cosine similarity, which are referred to as historical sentence embeddings. The system introduces these historical sentence embeddings into a biaffine dependency parser with the hopes of enabling training across the Latin treebanks in a more efficacious manner, but their inclusion shows no improvement over the base model.", }
This paper identifies the system used for my submission to EvaLatin{'}s shared dependency parsing task as part of the LT4HALA 2024 workshop. EvaLatin presented new Latin prose and poetry dependency test data from potentially different time periods, and imposed no restriction on training data or model selection for the task. This paper, therefore, sought to build a general Latin dependency parser that would perform accurately regardless of the Latin age to which the test data belongs. To train a general parser, all of the available Universal Dependencies treebanks were used, but in order to address the changes in the Latin language over time, this paper introduces historical sentence embeddings. A model was trained to encode sentences of the same Latin age into vectors of high cosine similarity, which are referred to as historical sentence embeddings. The system introduces these historical sentence embeddings into a biaffine dependency parser with the hopes of enabling training across the Latin treebanks in a more efficacious manner, but their inclusion shows no improvement over the base model.
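The core training signal described here, namely that same-age sentence pairs should receive high cosine similarity, can be expressed with a cosine embedding loss. Below is a minimal PyTorch sketch; the tiny feed-forward encoder and the random features are placeholders for the paper's actual sentence encoder and data.

```python
# Sketch: train an encoder so vectors for sentences from the same Latin age
# are pulled together (target +1) and different ages pushed apart (target -1).
import torch
import torch.nn as nn

encoder = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 16))
loss_fn = nn.CosineEmbeddingLoss(margin=0.2)
opt = torch.optim.Adam(encoder.parameters(), lr=1e-3)

x1 = torch.randn(8, 32)  # stand-ins for sentence features
x2 = torch.randn(8, 32)
same_age = torch.tensor([1., 1., -1., 1., -1., -1., 1., -1.])

loss = loss_fn(encoder(x1), encoder(x2), same_age)
loss.backward()
opt.step()
print(float(loss))
```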
[ "Behr, Rufus" ]
Behr at EvaLatin 2024: Latin Dependency Parsing Using Historical Sentence Embeddings
lt4hala-1.22
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.lt4hala-1.23.bib
https://aclanthology.org/2024.lt4hala-1.23/
@inproceedings{mercelis-2024-ku, title = "{KU} Leuven / Brepols-{CTLO} at {E}va{L}atin 2024: Span Extraction Approaches for {L}atin Dependency Parsing", author = "Mercelis, Wouter", editor = "Sprugnoli, Rachele and Passarotti, Marco", booktitle = "Proceedings of the Third Workshop on Language Technologies for Historical and Ancient Languages (LT4HALA) @ LREC-COLING-2024", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.lt4hala-1.23", pages = "203--206", abstract = "This report describes the KU Leuven / Brepols-CTLO submission to EvaLatin 2024. We present the results of two runs, both of which try to implement a span extraction approach. The first run implements span-span prediction, rooted in Machine Reading Comprehension, while making use of LaBERTa, a RoBERTa model pretrained on Latin texts. The first run produces meaningful results. The second, more experimental run operates at the token level with a span-extraction approach based on the Question Answering task. This run fine-tuned a DeBERTa model pretrained on Latin texts. The fine-tuning was set up in the form of a multitask model, with classification heads for each token{'}s part-of-speech tag and dependency relation label, while a question answering head handled the dependency head predictions. Through the shared loss function, this setup tries to capture the link between part-of-speech tags, dependency relations and dependency heads that follows human intuition. The second run did not perform well.", }
This report describes the KU Leuven / Brepols-CTLO submission to EvaLatin 2024. We present the results of two runs, both of which try to implement a span extraction approach. The first run implements span-span prediction, rooted in Machine Reading Comprehension, while making use of LaBERTa, a RoBERTa model pretrained on Latin texts. The first run produces meaningful results. The second, more experimental run operates at the token level with a span-extraction approach based on the Question Answering task. This run fine-tuned a DeBERTa model pretrained on Latin texts. The fine-tuning was set up in the form of a multitask model, with classification heads for each token{'}s part-of-speech tag and dependency relation label, while a question answering head handled the dependency head predictions. Through the shared loss function, this setup tries to capture the link between part-of-speech tags, dependency relations and dependency heads that follows human intuition. The second run did not perform well.
[ "Mercelis, Wouter" ]
KU Leuven / Brepols-CTLO at EvaLatin 2024: Span Extraction Approaches for Latin Dependency Parsing
lt4hala-1.23
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.lt4hala-1.24.bib
https://aclanthology.org/2024.lt4hala-1.24/
@inproceedings{straka-etal-2024-ufal, title = "{{\'U}FAL} {L}atin{P}ipe at {E}va{L}atin 2024: Morphosyntactic Analysis of {L}atin", author = "Straka, Milan and Strakov{\'a}, Jana and Gamba, Federica", editor = "Sprugnoli, Rachele and Passarotti, Marco", booktitle = "Proceedings of the Third Workshop on Language Technologies for Historical and Ancient Languages (LT4HALA) @ LREC-COLING-2024", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.lt4hala-1.24", pages = "207--214", abstract = "We present LatinPipe, the winning submission to the EvaLatin 2024 Dependency Parsing shared task. Our system consists of a fine-tuned concatenation of base and large pre-trained LMs, with a dot-product attention head for parsing and softmax classification heads for morphology to jointly learn both dependency parsing and morphological analysis. It is trained by sampling from seven publicly available Latin corpora, utilizing additional harmonization of annotations to achieve a more unified annotation style. Before fine-tuning, we train the system for a few initial epochs with frozen weights. We also add additional local relative contextualization by stacking the BiLSTM layers on top of the Transformer(s). Finally, we ensemble output probability distributions from seven randomly instantiated networks for the final submission. The code is available at https://github.com/ufal/evalatin2024-latinpipe.", }
We present LatinPipe, the winning submission to the EvaLatin 2024 Dependency Parsing shared task. Our system consists of a fine-tuned concatenation of base and large pre-trained LMs, with a dot-product attention head for parsing and softmax classification heads for morphology to jointly learn both dependency parsing and morphological analysis. It is trained by sampling from seven publicly available Latin corpora, utilizing additional harmonization of annotations to achieve a more unified annotation style. Before fine-tuning, we train the system for a few initial epochs with frozen weights. We also add additional local relative contextualization by stacking the BiLSTM layers on top of the Transformer(s). Finally, we ensemble output probability distributions from seven randomly instantiated networks for the final submission. The code is available at https://github.com/ufal/evalatin2024-latinpipe.
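One easily isolated ingredient of this system is the final ensembling step: averaging output probability distributions (rather than logits) across the seven networks. A toy NumPy sketch with invented shapes:

```python
# Sketch: ensemble by averaging per-token probability distributions over
# candidate heads across several independently trained models.
import numpy as np

rng = np.random.default_rng(1)
logits = rng.normal(size=(7, 5, 10))  # 7 models, 5 tokens, 10 head candidates

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

ensemble = softmax(logits).mean(axis=0)  # average distributions, not logits
print(ensemble.argmax(axis=-1))          # ensembled prediction per token
```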
[ "Straka, Milan", "Strakov{\\'a}, Jana", "Gamba, Federica" ]
ÚFAL LatinPipe at EvaLatin 2024: Morphosyntactic Analysis of Latin
lt4hala-1.24
Poster
2404.05839
[ "https://github.com/ufal/evalatin2024-latinpipe" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.lt4hala-1.25.bib
https://aclanthology.org/2024.lt4hala-1.25/
@inproceedings{bothwell-etal-2024-nostra, title = "Nostra Domina at {E}va{L}atin 2024: Improving {L}atin Polarity Detection through Data Augmentation", author = "Bothwell, Stephen and Swenor, Abigail and Chiang, David", editor = "Sprugnoli, Rachele and Passarotti, Marco", booktitle = "Proceedings of the Third Workshop on Language Technologies for Historical and Ancient Languages (LT4HALA) @ LREC-COLING-2024", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.lt4hala-1.25", pages = "215--222", abstract = "This paper describes submissions from the team Nostra Domina to the EvaLatin 2024 shared task of emotion polarity detection. Given the low-resource environment of Latin and the complexity of sentiment in rhetorical genres like poetry, we augmented the available data through automatic polarity annotation. We present two methods for doing so on the basis of the k-means algorithm, and we employ a variety of Latin large language models (LLMs) in a neural architecture to better capture the underlying contextual sentiment representations. Our best approach achieved the second highest macro-averaged Macro-F1 score on the shared task{'}s test set.", }
This paper describes submissions from the team Nostra Domina to the EvaLatin 2024 shared task of emotion polarity detection. Given the low-resource environment of Latin and the complexity of sentiment in rhetorical genres like poetry, we augmented the available data through automatic polarity annotation. We present two methods for doing so on the basis of the k-means algorithm, and we employ a variety of Latin large language models (LLMs) in a neural architecture to better capture the underlying contextual sentiment representations. Our best approach achieved the second highest macro-averaged Macro-F1 score on the shared task{'}s test set.
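The k-means-based automatic annotation sketched in this abstract can be read as: cluster sentence embeddings, then propagate polarity labels from a few gold seed sentences to whole clusters. Everything below (random embeddings, seed indices, two clusters) is a toy stand-in for the paper's actual setup.

```python
# Sketch: automatic polarity annotation via k-means over sentence embeddings.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
embeddings = rng.normal(size=(100, 16))            # stand-in LM sentence vectors
seeds = {"positive": [0, 1], "negative": [2, 3]}   # hypothetical gold seeds

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(embeddings)

# Each cluster takes the majority polarity among the seeds it contains.
votes = {}
for polarity, idxs in seeds.items():
    for i in idxs:
        votes.setdefault(km.labels_[i], []).append(polarity)
cluster_label = {c: max(set(v), key=v.count) for c, v in votes.items()}

auto_labels = [cluster_label.get(c, "unknown") for c in km.labels_]
print(auto_labels[:10])
```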
[ "Bothwell, Stephen", "Swenor, Abigail", "Chiang, David" ]
Nostra Domina at EvaLatin 2024: Improving Latin Polarity Detection through Data Augmentation
lt4hala-1.25
Poster
2404.07792
[ "https://github.com/mythologos/evalatin2024-nostradomina" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.lt4hala-1.26.bib
https://aclanthology.org/2024.lt4hala-1.26/
@inproceedings{dorkin-sirts-2024-tartunlp-evalatin, title = "{T}artu{NLP} at {E}va{L}atin 2024: Emotion Polarity Detection", author = "Dorkin, Aleksei and Sirts, Kairit", editor = "Sprugnoli, Rachele and Passarotti, Marco", booktitle = "Proceedings of the Third Workshop on Language Technologies for Historical and Ancient Languages (LT4HALA) @ LREC-COLING-2024", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.lt4hala-1.26", pages = "223--228", abstract = "This is the technical report for our submission to the EvaLatin 2024 shared task. We apply knowledge transfer techniques and two distinct approaches to data annotation: one based on heuristics and one based on LLMs.", }
This is the technical report for our submission to the EvaLatin 2024 shared task. We apply knowledge transfer techniques and two distinct approaches to data annotation: one based on heuristics and one based on LLMs.
[ "Dorkin, Aleksei", "Sirts, Kairit" ]
TartuNLP at EvaLatin 2024: Emotion Polarity Detection
lt4hala-1.26
Poster
2405.01159
[ "" ]
https://huggingface.co/papers/2405.01159
1
0
0
2
1
[]
[ "adorkin/evalatin2024" ]
[]
https://aclanthology.org/2024.lt4hala-1.27.bib
https://aclanthology.org/2024.lt4hala-1.27/
@inproceedings{li-etal-2024-overview, title = "Overview of {E}va{H}an2024: The First International Evaluation on {A}ncient {C}hinese Sentence Segmentation and Punctuation", author = "Li, Bin and Chang, Bolin and Xu, Zhixing and Feng, Minxuan and Xu, Chao and Qu, Weiguang and Shen, Si and Wang, Dongbo", editor = "Sprugnoli, Rachele and Passarotti, Marco", booktitle = "Proceedings of the Third Workshop on Language Technologies for Historical and Ancient Languages (LT4HALA) @ LREC-COLING-2024", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.lt4hala-1.27", pages = "229--236", abstract = "Ancient Chinese texts have no sentence boundaries and punctuation. Adding modern Chinese punctuation to these texts requires expertise, time, and effort. Automatic sentence segmentation and punctuation is considered a basic task for Ancient Chinese processing, but there has been no shared task to evaluate the performance of different systems. This paper presents the results of the first ancient Chinese sentence segmentation and punctuation bakeoff, held at the Third Workshop on Language Technologies for Historical and Ancient Languages (LT4HALA) 2024. The contest uses metrics for detailed evaluations of 4 genres of unpublished texts with 11 punctuation types. Six teams submitted 32 running results. In the closed modality, where participants are only allowed to use the training data, the highest F1 scores obtained are 88.47{\%} and 75.29{\%} for sentence segmentation and sentence punctuation, respectively. Performance on the unseen data is 10 percent lower than on the published common data, which means there is still room for further improvement. The large language models outperform the traditional models, but LLMs change around 1-2{\%} of the original characters due to over-generation. Thus, post-processing is needed to keep the text consistent.", }
Ancient Chinese texts have no sentence boundaries and punctuation. Adding modern Chinese punctuation to these texts requires expertise, time, and effort. Automatic sentence segmentation and punctuation is considered a basic task for Ancient Chinese processing, but there has been no shared task to evaluate the performance of different systems. This paper presents the results of the first ancient Chinese sentence segmentation and punctuation bakeoff, held at the Third Workshop on Language Technologies for Historical and Ancient Languages (LT4HALA) 2024. The contest uses metrics for detailed evaluations of 4 genres of unpublished texts with 11 punctuation types. Six teams submitted 32 running results. In the closed modality, where participants are only allowed to use the training data, the highest F1 scores obtained are 88.47{\%} and 75.29{\%} for sentence segmentation and sentence punctuation, respectively. Performance on the unseen data is 10 percent lower than on the published common data, which means there is still room for further improvement. The large language models outperform the traditional models, but LLMs change around 1-2{\%} of the original characters due to over-generation. Thus, post-processing is needed to keep the text consistent.
[ "Li, Bin", "Chang, Bolin", "Xu, Zhixing", "Feng, Minxuan", "Xu, Chao", "Qu, Weiguang", "Shen, Si", "Wang, Dongbo" ]
Overview of EvaHan2024: The First International Evaluation on Ancient Chinese Sentence Segmentation and Punctuation
lt4hala-1.27
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.lt4hala-1.28.bib
https://aclanthology.org/2024.lt4hala-1.28/
@inproceedings{wang-li-2024-two, title = "Two Sequence Labeling Approaches to Sentence Segmentation and Punctuation Prediction for Classic {C}hinese Texts", author = "Wang, Xuebin and Li, Zhenghua", editor = "Sprugnoli, Rachele and Passarotti, Marco", booktitle = "Proceedings of the Third Workshop on Language Technologies for Historical and Ancient Languages (LT4HALA) @ LREC-COLING-2024", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.lt4hala-1.28", pages = "237--241", abstract = "This paper describes our system for the EvaHan2024 shared task. We design and experiment with two sequence labeling approaches, i.e., one-stage and two-stage approaches. The one-stage approach directly predicts a label for each character, and the label may contain multiple punctuation marks. The two-stage approach divides punctuation marks into two classes, i.e., pause and non-pause, and separately handles them via two sequence labeling processes. Each label contains at most one punctuation mark. We use pre-trained SikuRoBERTa as a key component of the encoder and employ a conditional random field (CRF) layer on top. According to the evaluation metrics adopted by the organizers, the two-stage approach is superior to the one-stage approach, and our system achieves the second place among all participant systems.", }
This paper describes our system for the EvaHan2024 shared task. We design and experiment with two sequence labeling approaches, i.e., one-stage and two-stage approaches. The one-stage approach directly predicts a label for each character, and the label may contain multiple punctuation marks. The two-stage approach divides punctuation marks into two classes, i.e., pause and non-pause, and separately handles them via two sequence labeling processes. Each label contains at most one punctuation mark. We use pre-trained SikuRoBERTa as a key component of the encoder and employ a conditional random field (CRF) layer on top. According to the evaluation metrics adopted by the organizers, the two-stage approach is superior to the one-stage approach, and our system achieves the second place among all participant systems.
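The one-stage labelling scheme, one label per character that may bundle several punctuation marks, can be made concrete with a small conversion function. The label inventory below is schematic, not the paper's exact tag set.

```python
# Sketch: derive per-character labels from punctuated text; the label names
# the punctuation that follows the character ("O" = none), and a label can
# hold several marks, as in the one-stage approach.
def to_char_labels(punctuated, puncts=set("，。、；：？！")):
    chars, labels = [], []
    for ch in punctuated:
        if ch in puncts and chars:
            labels[-1] = ch if labels[-1] == "O" else labels[-1] + ch
        else:
            chars.append(ch)
            labels.append("O")
    return chars, labels

chars, labels = to_char_labels("學而時習之，不亦說乎。")
print(list(zip(chars, labels)))
```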
[ "Wang, Xuebin", "Li, Zhenghua" ]
Two Sequence Labeling Approaches to Sentence Segmentation and Punctuation Prediction for Classic Chinese Texts
lt4hala-1.28
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.lt4hala-1.29.bib
https://aclanthology.org/2024.lt4hala-1.29/
@inproceedings{huo-chen-2024-ancient, title = "{A}ncient {C}hinese Sentence Segmentation and Punctuation on Xunzi {LLM}", author = "Huo, Shitu and Chen, Wenhui", editor = "Sprugnoli, Rachele and Passarotti, Marco", booktitle = "Proceedings of the Third Workshop on Language Technologies for Historical and Ancient Languages (LT4HALA) @ LREC-COLING-2024", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.lt4hala-1.29", pages = "242--245", abstract = "This paper describes the system submitted for the EvaHan 2024 Task on ancient Chinese sentence segmentation and punctuation. Our study utilizes the Xunzi large language model as the base model to evaluate the overall performance and the performance by record type. The applied methodologies and the prompts utilized in our study have proven helpful and effective in aiding the model{'}s performance evaluation.", }
This paper describes the system submitted for the EvaHan 2024 Task on ancient Chinese sentence segmentation and punctuation. Our study utilizes the Xunzi large language model as the base model to evaluate the overall performance and the performance by record type. The applied methodologies and the prompts utilized in our study have proven helpful and effective in aiding the model{'}s performance evaluation.
[ "Huo, Shitu", "Chen, Wenhui" ]
Ancient Chinese Sentence Segmentation and Punctuation on Xunzi LLM
lt4hala-1.29
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.lt4hala-1.30.bib
https://aclanthology.org/2024.lt4hala-1.30/
@inproceedings{chen-2024-sentence, title = "Sentence Segmentation and Sentence Punctuation Based on {X}unzi{ALLM}", author = "Chen, Zihong", editor = "Sprugnoli, Rachele and Passarotti, Marco", booktitle = "Proceedings of the Third Workshop on Language Technologies for Historical and Ancient Languages (LT4HALA) @ LREC-COLING-2024", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.lt4hala-1.30", pages = "246--250", abstract = "In ancient Chinese books, punctuation marks are typically absent in engraved texts. Sentence segmentation and punctuation heavily rely on the meticulous efforts of experts and scholars. Therefore, the work of automatic punctuation and sentence segmentation plays a very important role in promoting ancient books, as well as the inheritance of Chinese culture. In this paper, we present a method for fine-tuning a large language model on downstream tasks using the LoRA approach, leveraging the EvaHan2024 dataset. This method ensures robust output and high accuracy while inheriting the knowledge from the large pre-trained language model Xunzi.", }
In ancient Chinese books, punctuation marks are typically absent in engraved texts. Sentence segmentation and punctuation heavily rely on the meticulous efforts of experts and scholars. Therefore, the work of automatic punctuation and sentence segmentation plays a very important role in promoting ancient books, as well as the inheritance of Chinese culture. In this paper, we present a method for fine-tuning a large language model on downstream tasks using the LoRA approach, leveraging the EvaHan2024 dataset. This method ensures robust output and high accuracy while inheriting the knowledge from the large pre-trained language model Xunzi.
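The LoRA recipe named in this abstract is straightforward to sketch with the PEFT library; the base model below (gpt2) is a small stand-in, not the Xunzi model used in the paper.

```python
# Sketch: attach low-rank adapters (LoRA) to a causal LM; only the small
# adapter matrices are trained while the pre-trained weights stay frozen.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("gpt2")
config = LoraConfig(
    r=8, lora_alpha=16, lora_dropout=0.05,
    target_modules=["c_attn"],  # GPT-2's fused attention projection
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, config)
model.print_trainable_parameters()
```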
[ "Chen, Zihong" ]
Sentence Segmentation and Sentence Punctuation Based on XunziALLM
lt4hala-1.30
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.lt4hala-1.31.bib
https://aclanthology.org/2024.lt4hala-1.31/
@inproceedings{wang-etal-2024-sentence, title = "Sentence Segmentation and Punctuation for Ancient Books Based on Supervised In-context Training", author = "Wang, Shiquan and Fu, Weiwei and Li, Mengxiang and He, Zhongjiang and Li, Yongxiang and Fang, Ruiyu and Guan, Li and Song, Shuangyong", editor = "Sprugnoli, Rachele and Passarotti, Marco", booktitle = "Proceedings of the Third Workshop on Language Technologies for Historical and Ancient Languages (LT4HALA) @ LREC-COLING-2024", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.lt4hala-1.31", pages = "251--255", abstract = "This paper describes the participation of team {``}TeleAI{''} in the third International Ancient Chinese Language Information Processing Evaluation (EvaHan24). The competition comprises a joint task of sentence segmentation and punctuation, categorized into open and closed tracks based on the models and data used. In the final evaluation, our system achieved significantly better results than the baseline. Specifically, in the closed-track sentence segmentation task, we obtained an F1 score of 0.8885, while in the sentence punctuation task, we achieved an F1 score of 0.7129.", }
This paper describes the participation of team {``}TeleAI{''} in the third International Ancient Chinese Language Information Processing Evaluation (EvaHan24). The competition comprises a joint task of sentence segmentation and punctuation, categorized into open and closed tracks based on the models and data used. In the final evaluation, our system achieved significantly better results than the baseline. Specifically, in the closed-track sentence segmentation task, we obtained an F1 score of 0.8885, while in the sentence punctuation task, we achieved an F1 score of 0.7129.
[ "Wang, Shiquan", "Fu, Weiwei", "Li, Mengxiang", "He, Zhongjiang", "Li, Yongxiang", "Fang, Ruiyu", "Guan, Li", "Song, Shuangyong" ]
Sentence Segmentation and Punctuation for Ancient Books Based on Supervised In-context Training
lt4hala-1.31
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.lt4hala-1.32.bib
https://aclanthology.org/2024.lt4hala-1.32/
@inproceedings{xia-etal-2024-speado, title = "{SPEADO}: Segmentation and Punctuation for {A}ncient {C}hinese Texts via Example Augmentation and Decoding Optimization", author = "Xia, Tian and Yu, Kai and Yu, Qianrong and Peng, Xinran", editor = "Sprugnoli, Rachele and Passarotti, Marco", booktitle = "Proceedings of the Third Workshop on Language Technologies for Historical and Ancient Languages (LT4HALA) @ LREC-COLING-2024", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.lt4hala-1.32", pages = "256--260", abstract = "The SPEADO model for sentence segmentation and punctuation tasks in ancient Chinese texts is proposed, which incorporates text chunking and MinHash indexing techniques to realise example augmentation. Additionally, decoding optimization strategies are introduced to direct the attention of the LLM towards punctuation errors and address the issue of uncontrollable output. Experimental results show that the F1 score of the proposed method exceeds the baseline model by 14.18{\%}, indicating a significant improvement in performance.", }
The SPEADO model for sentence segmentation and punctuation tasks in ancient Chinese texts is proposed, which incorporates text chunking and MinHash indexing techniques to realise example augmentation. Additionally, decoding optimization strategies are introduced to direct the attention of the LLM towards punctuation errors and address the issue of uncontrollable output. Experimental results show that the F1 score of the proposed method exceeds the baseline model by 14.18{\%}, indicating a significant improvement in performance.
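MinHash indexing, which this abstract uses for example augmentation, retrieves near-duplicate text chunks for a query passage. A minimal sketch with the datasketch library follows; the corpus strings and the character-level shingling are toy placeholders, not the paper's setup.

```python
# Sketch: index text chunks with MinHash + LSH, then query for similar
# chunks to serve as in-context examples.
from datasketch import MinHash, MinHashLSH

def minhash(text, num_perm=64):
    m = MinHash(num_perm=num_perm)
    for token in text:          # character shingles for Chinese-like text
        m.update(token.encode("utf8"))
    return m

corpus = {
    "ex1": "王伐東夷克之",
    "ex2": "周公制禮作樂",
    "ex3": "河水泛濫歲不登",
}
lsh = MinHashLSH(threshold=0.3, num_perm=64)
for key, text in corpus.items():
    lsh.insert(key, minhash(text))

print(lsh.query(minhash("周公作樂")))  # candidate examples for the prompt
```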
[ "Xia, Tian", "Yu, Kai", "Yu, Qianrong", "Peng, Xinran" ]
SPEADO: Segmentation and Punctuation for Ancient Chinese Texts via Example Augmentation and Decoding Optimization
lt4hala-1.32
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.lt4hala-1.33.bib
https://aclanthology.org/2024.lt4hala-1.33/
@inproceedings{huang-2024-ancient, title = "{A}ncient {C}hinese Punctuation via In-Context Learning", author = "Huang, Jie", editor = "Sprugnoli, Rachele and Passarotti, Marco", booktitle = "Proceedings of the Third Workshop on Language Technologies for Historical and Ancient Languages (LT4HALA) @ LREC-COLING-2024", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.lt4hala-1.33", pages = "261--265", abstract = "EvaHan2024 focuses on sentence punctuation in ancient Chinese. The Xunzi large language base model, which is specifically trained for ancient Chinese processing, is recommended in the campaign. In general, we adopted the in-context learning (ICL) paradigm for this task and designed a post-processing scheme to ensure that the final results conform to the required format. When constructing ICL prompts, we did feature extraction by LLM QA and selected demonstrations based on non-parametric metrics. We used Xunzi in two stages and did no further training in either, so the model remained generic and its other fundamental abilities were unaffected. Moreover, newly acquired training data can be directly utilized after identical feature extraction, showcasing the scalability of our system. As for the results, we achieved an F1-score of 67.7{\%} on a complex test dataset consisting of multiple types of documents and 77.98{\%} on Zuozhuan data.", }
EvaHan2024 focuses on sentence punctuation in ancient Chinese. The Xunzi large language base model, which is specifically trained for ancient Chinese processing, is recommended in the campaign. In general, we adopted the in-context learning (ICL) paradigm for this task and designed a post-processing scheme to ensure that the final results conform to the required format. When constructing ICL prompts, we did feature extraction by LLM QA and selected demonstrations based on non-parametric metrics. We used Xunzi in two stages and did no further training in either, so the model remained generic and its other fundamental abilities were unaffected. Moreover, newly acquired training data can be directly utilized after identical feature extraction, showcasing the scalability of our system. As for the results, we achieved an F1-score of 67.7{\%} on a complex test dataset consisting of multiple types of documents and 77.98{\%} on Zuozhuan data.
[ "Huang, Jie" ]
Ancient Chinese Punctuation via In-Context Learning
lt4hala-1.33
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.mathnlp-1.1.bib
https://aclanthology.org/2024.mathnlp-1.1/
@inproceedings{dev-etal-2024-approach, title = "An Approach to Co-reference Resolution and Formula Grounding for Mathematical Identifiers Using Large Language Models", author = "Dev, Aamin and Asakura, Takuto and S{\ae}tre, Rune", editor = "Valentino, Marco and Ferreira, Deborah and Thayaparan, Mokanarangan and Freitas, Andre", booktitle = "Proceedings of the 2nd Workshop on Mathematical Natural Language Processing @ LREC-COLING 2024", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.mathnlp-1.1", pages = "1--10", abstract = "This paper outlines an automated approach to annotate mathematical identifiers in scientific papers {---} a process historically laborious and costly. We employ state-of-the-art LLMs, including GPT-3.5 and GPT-4, and open-source alternatives to generate a dictionary for annotating mathematical identifiers, linking each identifier to its conceivable descriptions and then assigning these definitions to the respective identifier instances based on context. Evaluation metrics include the CoNLL score for co-reference cluster quality and semantic correctness of the annotations.", }
This paper outlines an automated approach to annotate mathematical identifiers in scientific papers {---} a process historically laborious and costly. We employ state-of-the-art LLMs, including GPT-3.5 and GPT-4, and open-source alternatives to generate a dictionary for annotating mathematical identifiers, linking each identifier to its conceivable descriptions and then assigning these definitions to the respective identifier instances based on context. Evaluation metrics include the CoNLL score for co-reference cluster quality and semantic correctness of the annotations.
[ "Dev, Aamin", "Asakura, Takuto", "S{\\ae}tre, Rune" ]
An Approach to Co-reference Resolution and Formula Grounding for Mathematical Identifiers Using Large Language Models
mathnlp-1.1
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.mathnlp-1.2.bib
https://aclanthology.org/2024.mathnlp-1.2/
@inproceedings{picca-2024-fluid, title = "Fluid Dynamics-Inspired Emotional Analysis in {S}hakespearean Tragedies: A Novel Computational Linguistics Methodology", author = "Picca, Davide", editor = "Valentino, Marco and Ferreira, Deborah and Thayaparan, Mokanarangan and Freitas, Andre", booktitle = "Proceedings of the 2nd Workshop on Mathematical Natural Language Processing @ LREC-COLING 2024", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.mathnlp-1.2", pages = "11--18", abstract = "This study introduces an innovative method for analyzing emotions in texts, drawing inspiration from the principles of fluid dynamics, particularly the Navier-Stokes equations. It applies this framework to analyze Shakespeare{'}s tragedies {``}Hamlet{''} and {``}Romeo and Juliet{''}, treating emotional expressions as entities akin to fluids. By mapping linguistic characteristics onto fluid dynamics components, this approach provides a dynamic perspective on how emotions are expressed and evolve in narrative texts. The results, when compared with conventional sentiment analysis methods, reveal a more detailed and subtle grasp of the emotional arcs within these works. This interdisciplinary strategy not only enriches emotion analysis in computational linguistics but also paves the way for potential integrations with machine learning in NLP.", }
This study introduces an innovative method for analyzing emotions in texts, drawing inspiration from the principles of fluid dynamics, particularly the Navier-Stokes equations. It applies this framework to analyze Shakespeare{'}s tragedies {``}Hamlet{''} and {``}Romeo and Juliet{''}, treating emotional expressions as entities akin to fluids. By mapping linguistic characteristics onto fluid dynamics components, this approach provides a dynamic perspective on how emotions are expressed and evolve in narrative texts. The results, when compared with conventional sentiment analysis methods, reveal a more detailed and subtle grasp of the emotional arcs within these works. This interdisciplinary strategy not only enriches emotion analysis in computational linguistics but also paves the way for potential integrations with machine learning in NLP.
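For reference, these are the incompressible Navier-Stokes equations that the study takes as its template; how the paper maps linguistic quantities onto these terms is its own contribution and is not reproduced here.

```latex
% Incompressible Navier--Stokes momentum equation (standard form):
% \rho density, \mathbf{u} velocity field, p pressure,
% \mu dynamic viscosity, \mathbf{f} external forcing.
\rho\left(\frac{\partial\mathbf{u}}{\partial t}
        + (\mathbf{u}\cdot\nabla)\mathbf{u}\right)
  = -\nabla p + \mu\,\nabla^{2}\mathbf{u} + \mathbf{f},
\qquad \nabla\cdot\mathbf{u} = 0
```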
[ "Picca, Davide" ]
Fluid Dynamics-Inspired Emotional Analysis in Shakespearean Tragedies: A Novel Computational Linguistics Methodology
mathnlp-1.2
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.mathnlp-1.3.bib
https://aclanthology.org/2024.mathnlp-1.3/
@inproceedings{narin-2024-math, title = "Math Problem Solving: Enhancing Large Language Models with Semantically Rich Symbolic Variables", author = "Narin, Ali Emre", editor = "Valentino, Marco and Ferreira, Deborah and Thayaparan, Mokanarangan and Freitas, Andre", booktitle = "Proceedings of the 2nd Workshop on Mathematical Natural Language Processing @ LREC-COLING 2024", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.mathnlp-1.3", pages = "19--24", abstract = "The advent of Large Language Models (LLMs) based on the Transformer architecture has led to remarkable advancements in various domains, including reasoning tasks. However, accurately assessing the performance of Large Language Models, particularly in the reasoning domain, remains a challenge. In this paper, we propose the Semantically Rich Variable Substitution Method (SemRiVas) as an enhancement to existing symbolic methodologies for evaluating LLMs on Mathematical Word Problems (MWPs). Unlike previous approaches that utilize generic symbols for variable substitution, SemRiVas employs descriptive variable names, aiming to improve the problem-solving abilities of LLMs. Our method aims to be universally applicable by eliminating the need for LLMs to possess programming proficiency or perform arithmetic operations. Our experimental results demonstrate the superior accuracy of SemRiVas compared to prior symbolic methods, particularly in resolving longer and more complex MWP questions. However, LLMs{'} performance with SemRiVas and symbolic methods that utilize one-character variables still falls short compared to notable techniques like CoT and PaL.", }
The advent of Large Language Models (LLMs) based on the Transformer architecture has led to remarkable advancements in various domains, including reasoning tasks. However, accurately assessing the performance of Large Language Models, particularly in the reasoning domain, remains a challenge. In this paper, we propose the Semantically Rich Variable Substitution Method (SemRiVas) as an enhancement to existing symbolic methodologies for evaluating LLMs on Mathematical Word Problems (MWPs). Unlike previous approaches that utilize generic symbols for variable substitution, SemRiVas employs descriptive variable names, aiming to improve the problem-solving abilities of LLMs. Our method aims to be universally applicable by eliminating the need for LLMs to possess programming proficiency or perform arithmetic operations. Our experimental results demonstrate the superior accuracy of SemRiVas compared to prior symbolic methods, particularly in resolving longer and more complex MWP questions. However, LLMs{'} performance with SemRiVas and symbolic methods that utilize one-character variables still falls short compared to notable techniques like CoT and PaL.
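The substitution step itself, replacing literal numbers with descriptive variable names before prompting, is easy to sketch; the problem text and the name list below are invented for illustration.

```python
# Sketch: rewrite a math word problem with descriptive symbolic variables
# in place of concrete numbers, in the spirit of SemRiVas-style substitution.
import re

problem = "Alice has 12 apples and gives 5 of them to Bob. How many remain?"
names = iter(["num_apples_start", "num_apples_given"])

symbolic = re.sub(r"\d+", lambda m: next(names), problem)
print(symbolic)
```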
[ "Narin, Ali Emre" ]
Math Problem Solving: Enhancing Large Language Models with Semantically Rich Symbolic Variables
mathnlp-1.3
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.mathnlp-1.4.bib
https://aclanthology.org/2024.mathnlp-1.4/
@inproceedings{kim-etal-2024-data, title = "Data Driven Approach for Mathematical Problem Solving", author = "Kim, Byungju and Lee, Wonseok and Kim, Jaehong and Im, Jungbin", editor = "Valentino, Marco and Ferreira, Deborah and Thayaparan, Mokanarangan and Freitas, Andre", booktitle = "Proceedings of the 2nd Workshop on Mathematical Natural Language Processing @ LREC-COLING 2024", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.mathnlp-1.4", pages = "25--34", abstract = "In this paper, we investigate and introduce a novel Llama-2 based model, fine-tuned with an original dataset designed to mirror real-world mathematical challenges. The dataset was collected through a question-answering platform, incorporating solutions generated by both rule-based solver and question answering, to cover a broad spectrum of mathematical concepts and problem-solving techniques. Experimental results demonstrate significant performance improvements when the models are fine-tuned with our dataset. The results suggest that the integration of contextually rich and diverse problem sets into the training substantially enhances the problem-solving capability of language models across various mathematical domains. This study showcases the critical role of curated educational content in advancing AI research.", }
In this paper, we investigate and introduce a novel Llama-2 based model, fine-tuned with an original dataset designed to mirror real-world mathematical challenges. The dataset was collected through a question-answering platform, incorporating solutions generated by both rule-based solver and question answering, to cover a broad spectrum of mathematical concepts and problem-solving techniques. Experimental results demonstrate significant performance improvements when the models are fine-tuned with our dataset. The results suggest that the integration of contextually rich and diverse problem sets into the training substantially enhances the problem-solving capability of language models across various mathematical domains. This study showcases the critical role of curated educational content in advancing AI research.
[ "Kim, Byungju", "Lee, Wonseok", "Kim, Jaehong", "Im, Jungbin" ]
Data Driven Approach for Mathematical Problem Solving
mathnlp-1.4
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.mathnlp-1.5.bib
https://aclanthology.org/2024.mathnlp-1.5/
@inproceedings{wennberg-henter-2024-exploring, title = "Exploring Internal Numeracy in Language Models: A Case Study on {ALBERT}", author = "Wennberg, Ulme and Henter, Gustav Eje", editor = "Valentino, Marco and Ferreira, Deborah and Thayaparan, Mokanarangan and Freitas, Andre", booktitle = "Proceedings of the 2nd Workshop on Mathematical Natural Language Processing @ LREC-COLING 2024", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.mathnlp-1.5", pages = "35--40", abstract = "It has been found that Transformer-based language models have the ability to perform basic quantitative reasoning. In this paper, we propose a method for studying how these models internally represent numerical data, and use our proposal to analyze the ALBERT family of language models. Specifically, we extract the learned embeddings these models use to represent tokens that correspond to numbers and ordinals, and subject these embeddings to Principal Component Analysis (PCA). PCA results reveal that ALBERT models of different sizes, trained and initialized separately, consistently learn to use the axes of greatest variation to represent the approximate ordering of various numerical concepts. Numerals and their textual counterparts are represented in separate clusters, but increase along the same direction in 2D space. Our findings illustrate that language models, trained purely to model text, can intuit basic mathematical concepts, opening avenues for NLP applications that intersect with quantitative reasoning.", }
It has been found that Transformer-based language models have the ability to perform basic quantitative reasoning. In this paper, we propose a method for studying how these models internally represent numerical data, and use our proposal to analyze the ALBERT family of language models. Specifically, we extract the learned embeddings these models use to represent tokens that correspond to numbers and ordinals, and subject these embeddings to Principal Component Analysis (PCA). PCA results reveal that ALBERT models of different sizes, trained and initialized separately, consistently learn to use the axes of greatest variation to represent the approximate ordering of various numerical concepts. Numerals and their textual counterparts are represented in separate clusters, but increase along the same direction in 2D space. Our findings illustrate that language models, trained purely to model text, can intuit basic mathematical concepts, opening avenues for NLP applications that intersect with quantitative reasoning.
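The probing recipe described here, collecting the embeddings of number tokens and inspecting their principal components, can be sketched directly. The token list is illustrative, not the paper's full token sets, and the ALBERT tokenizer requires the sentencepiece package.

```python
# Sketch: PCA over ALBERT's input embeddings for number words; per the
# abstract, the leading components tend to order the tokens by magnitude.
from transformers import AlbertTokenizer, AlbertModel
from sklearn.decomposition import PCA

tokenizer = AlbertTokenizer.from_pretrained("albert-base-v2")
model = AlbertModel.from_pretrained("albert-base-v2")

words = ["one", "two", "three", "four", "five", "six", "seven", "eight"]
ids = [tokenizer.convert_tokens_to_ids(tokenizer.tokenize(w)[0]) for w in words]
emb = model.get_input_embeddings().weight[ids].detach().numpy()

coords = PCA(n_components=2).fit_transform(emb)
for w, (x, y) in zip(words, coords):
    print(f"{w:>6}: ({x:+.3f}, {y:+.3f})")
```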
[ "Wennberg, Ulme", "Henter, Gustav Eje" ]
Exploring Internal Numeracy in Language Models: A Case Study on ALBERT
mathnlp-1.5
Poster
2404.16574
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.mwe-1.1.bib
https://aclanthology.org/2024.mwe-1.1/
@inproceedings{tayyar-madabushi-2024-every, title = "Every Time We Hire an {LLM}, the Reasoning Performance of the Linguists Goes Up", author = "Tayyar Madabushi, Harish", editor = {Bhatia, Archna and Bouma, Gosse and Do{\u{g}}ru{\"o}z, A. Seza and Evang, Kilian and Garcia, Marcos and Giouli, Voula and Han, Lifeng and Nivre, Joakim and Rademaker, Alexandre}, booktitle = "Proceedings of the Joint Workshop on Multiword Expressions and Universal Dependencies (MWE-UD) @ LREC-COLING 2024", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.mwe-1.1", pages = "1", abstract = "", }
[ "Tayyar Madabushi, Harish" ]
Every Time We Hire an LLM, the Reasoning Performance of the Linguists Goes Up
mwe-1.1
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.mwe-1.2.bib
https://aclanthology.org/2024.mwe-1.2/
@inproceedings{levshina-2024-using, title = "Using {U}niversal {D}ependencies for testing hypotheses about communicative efficiency", author = "Levshina, Natalia", editor = {Bhatia, Archna and Bouma, Gosse and Do{\u{g}}ru{\"o}z, A. Seza and Evang, Kilian and Garcia, Marcos and Giouli, Voula and Han, Lifeng and Nivre, Joakim and Rademaker, Alexandre}, booktitle = "Proceedings of the Joint Workshop on Multiword Expressions and Universal Dependencies (MWE-UD) @ LREC-COLING 2024", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.mwe-1.2", pages = "2--3", abstract = "", }
[ "Levshina, Natalia" ]
Using Universal Dependencies for testing hypotheses about communicative efficiency
mwe-1.2
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.mwe-1.3.bib
https://aclanthology.org/2024.mwe-1.3/
@inproceedings{kanayama-etal-2024-automatic, title = "Automatic Manipulation of Training Corpora to Make Parsers Accept Real-world Text", author = "Kanayama, Hiroshi and Iwamoto, Ran and Muraoka, Masayasu and Ohko, Takuya and Miyamoto, Kohtaroh", editor = {Bhatia, Archna and Bouma, Gosse and Do{\u{g}}ru{\"o}z, A. Seza and Evang, Kilian and Garcia, Marcos and Giouli, Voula and Han, Lifeng and Nivre, Joakim and Rademaker, Alexandre}, booktitle = "Proceedings of the Joint Workshop on Multiword Expressions and Universal Dependencies (MWE-UD) @ LREC-COLING 2024", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.mwe-1.3", pages = "4--13", abstract = "This paper discusses how to build a practical syntactic analyzer, and addresses the distributional differences between existing corpora and actual documents in applications. As a case study we focus on noun phrases that are not headed by a main verb and sentences without punctuation at the end, which are rare in a number of Universal Dependencies corpora but frequently appear in the real-world use cases of syntactic parsers. We converted the training corpora so that their distribution is closer to that in realistic inputs, and obtained better scores both in general syntax benchmarking and in a sentiment detection task, a typical application of dependency analysis.", }
This paper discusses how to build a practical syntactic analyzer, and addresses the distributional differences between existing corpora and actual documents in applications. As a case study we focus on noun phrases that are not headed by a main verb and sentences without punctuation at the end, which are rare in a number of Universal Dependencies corpora but frequently appear in the real-world use cases of syntactic parsers. We converted the training corpora so that their distribution is closer to that in realistic inputs, and obtained better scores both in general syntax benchmarking and in a sentiment detection task, a typical application of dependency analysis.
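One of the two manipulations discussed, removing sentence-final punctuation from training trees, can be sketched on a simplified row format; real CoNLL-U handling (ten columns, multiword tokens, comments) is more involved than this.

```python
# Sketch: drop a sentence-final punct token from a (simplified) dependency
# tree so parsers also see unpunctuated sentences at training time.
sentence = [
    # (id, form, deprel, head)
    ("1", "This", "nsubj", "3"),
    ("2", "is",   "cop",   "3"),
    ("3", "fine", "root",  "0"),
    ("4", ".",    "punct", "3"),
]

def drop_final_punct(rows):
    return rows[:-1] if rows and rows[-1][2] == "punct" else rows

print(drop_final_punct(sentence))
```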
[ "Kanayama, Hiroshi", "Iwamoto, Ran", "Muraoka, Masayasu", "Ohko, Takuya", "Miyamoto, Kohtaroh" ]
Automatic Manipulation of Training Corpora to Make Parsers Accept Real-world Text
mwe-1.3
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.mwe-1.4.bib
https://aclanthology.org/2024.mwe-1.4/
@inproceedings{liu-lareau-2024-assessing, title = "Assessing {BERT}{'}s sensitivity to idiomaticity", author = "Liu, Li and Lareau, Francois", editor = {Bhatia, Archna and Bouma, Gosse and Do{\u{g}}ru{\"o}z, A. Seza and Evang, Kilian and Garcia, Marcos and Giouli, Voula and Han, Lifeng and Nivre, Joakim and Rademaker, Alexandre}, booktitle = "Proceedings of the Joint Workshop on Multiword Expressions and Universal Dependencies (MWE-UD) @ LREC-COLING 2024", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.mwe-1.4", pages = "14--23", abstract = "BERT-like language models have been demonstrated to capture the idiomatic meaning of multiword expressions. Linguists have also shown that idioms have varying degrees of idiomaticity. In this paper, we assess CamemBERT{'}s sensitivity to the degree of idiomaticity within idioms, as well as the dependency of this sensitivity on part of speech and idiom length. We used a demasking task on tokens from 3127 idioms and 22551 tokens corresponding to simple lexemes taken from the French Lexical Network (LN-fr), and observed that CamemBERT performs distinctly on tokens embedded within idioms compared to simple ones. When demasking tokens within idioms, the model is not proficient in discerning their level of idiomaticity. Moreover, regardless of idiomaticity, CamemBERT excels at handling function words. The length of idioms also impacts CamemBERT{'}s performance to a certain extent. The last two observations partly explain the difference between the model{'}s performance on idioms versus simple lexemes. We conclude that the model treats idioms differently from simple lexemes, but that it does not capture the difference in compositionality between subclasses of idioms.", }
BERT-like language models have been demonstrated to capture the idiomatic meaning of multiword expressions. Linguists have also shown that idioms have varying degrees of idiomaticity. In this paper, we assess CamemBERT{'}s sensitivity to the degree of idiomaticity within idioms, as well as the dependency of this sensitivity on part of speech and idiom length. We used a demasking task on tokens from 3127 idioms and 22551 tokens corresponding to simple lexemes taken from the French Lexical Network (LN-fr), and observed that CamemBERT performs distinctly on tokens embedded within idioms compared to simple ones. When demasking tokens within idioms, the model is not proficient in discerning their level of idiomaticity. Moreover, regardless of idiomaticity, CamemBERT excels at handling function words. The length of idioms also impacts CamemBERT{'}s performance to a certain extent. The last two observations partly explain the difference between the model{'}s performance on idioms versus simple lexemes. We conclude that the model treats idioms differently from simple lexemes, but that it does not capture the difference in compositionality between subclasses of idioms.
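The demasking setup described here is essentially Hugging Face's fill-mask pipeline: mask one token of an idiom and inspect what the model restores. The idiom below is an illustrative example, not drawn from the LN-fr material used in the paper.

```python
# Sketch: probe CamemBERT by masking a token inside a French idiom
# ("casser sa pipe", roughly "to kick the bucket").
from transformers import pipeline

fill = pipeline("fill-mask", model="camembert-base")
for pred in fill("Il a cassé sa <mask> hier soir.")[:5]:
    print(f"{pred['token_str']:>12}  {pred['score']:.3f}")
```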
[ "Liu, Li", "Lareau, Francois" ]
Assessing BERT's sensitivity to idiomaticity
mwe-1.4
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.mwe-1.5.bib
https://aclanthology.org/2024.mwe-1.5/
@inproceedings{diaz-hernandez-2024-identification, title = "Identification and Annotation of Body Part Multiword Expressions in Old {E}gyptian", author = "D{\'\i}az Hern{\'a}ndez, Roberto", editor = {Bhatia, Archna and Bouma, Gosse and Do{\u{g}}ru{\"o}z, A. Seza and Evang, Kilian and Garcia, Marcos and Giouli, Voula and Han, Lifeng and Nivre, Joakim and Rademaker, Alexandre}, booktitle = "Proceedings of the Joint Workshop on Multiword Expressions and Universal Dependencies (MWE-UD) @ LREC-COLING 2024", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.mwe-1.5", pages = "24--32", abstract = "This paper presents the preliminary results of an ongoing study on the diachronic and synchronic use of multiword expressions (MWEs) in Egyptian, begun when I joined the COST Action Universality, Diversity and Idiosyncrasy in Language Technology (UniDive, CA21167). It analyzes, as a case study, Old Egyptian body part MWEs based on lexicographic and textual resources, and its aim is both to open up a research line in Egyptology, where the study of MWEs has been neglected, and to contribute to Natural Language Processing studies by determining the rules governing the morpho-syntactic formation of Old Egyptian body part MWEs in order to facilitate the identification of other types of MWEs.", }
This paper presents the preliminary results of an ongoing study on the diachronic and synchronic use of multiword expressions (MWEs) in Egyptian, begun when I joined the COST Action Universality, Diversity and Idiosyncrasy in Language Technology (UniDive, CA21167). It analyzes, as a case study, Old Egyptian body part MWEs based on lexicographic and textual resources, and its aim is both to open up a research line in Egyptology, where the study of MWEs has been neglected, and to contribute to Natural Language Processing studies by determining the rules governing the morpho-syntactic formation of Old Egyptian body part MWEs in order to facilitate the identification of other types of MWEs.
[ "D{\\'\\i}az Hern{\\'a}ndez, Roberto" ]
Identification and Annotation of Body Part Multiword Expressions in Old Egyptian
mwe-1.5
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.mwe-1.6.bib
https://aclanthology.org/2024.mwe-1.6/
@inproceedings{ahrenberg-2024-fitting, title = "Fitting Fixed Expressions into the {UD} Mould: {S}wedish as a Use Case", author = "Ahrenberg, Lars", editor = {Bhatia, Archna and Bouma, Gosse and Do{\u{g}}ru{\"o}z, A. Seza and Evang, Kilian and Garcia, Marcos and Giouli, Voula and Han, Lifeng and Nivre, Joakim and Rademaker, Alexandre}, booktitle = "Proceedings of the Joint Workshop on Multiword Expressions and Universal Dependencies (MWE-UD) @ LREC-COLING 2024", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.mwe-1.6", pages = "33--42", abstract = "Fixed multiword expressions are common in many, if not all, natural languages. In the Universal Dependencies framework, UD, a subset of these expressions are modelled with the dependency relation {`}fixed{'} targeting the most grammaticalized cases of functional multiword items. In this paper we perform a detailed analysis of 439 expressions modelled with {`}fixed{'} in two Swedish UD treebanks in order to reduce their numbers and fit the definition better. We identify a large number of dimensions of variation for fixed multiword expressions that can be used for the purpose. We also point out several problematic aspects of the current UD approach to multiword expressions and discuss different alternative solutions for modelling fixed expressions. We suggest that insights from Constructional Grammar (CxG) can help with a more systematic treatment of fixed expressions in UD.", }
Fixed multiword expressions are common in many, if not all, natural languages. In the Universal Dependencies framework, UD, a subset of these expressions are modelled with the dependency relation {`}fixed{'} targeting the most grammaticalized cases of functional multiword items. In this paper we perform a detailed analysis of 439 expressions modelled with {`}fixed{'} in two Swedish UD treebanks in order to reduce their numbers and fit the definition better. We identify a large number of dimensions of variation for fixed multiword expressions that can be used for the purpose. We also point out several problematic aspects of the current UD approach to multiword expressions and discuss different alternative solutions for modelling fixed expressions. We suggest that insights from Constructional Grammar (CxG) can help with a more systematic treatment of fixed expressions in UD.
[ "Ahrenberg, Lars" ]
Fitting Fixed Expressions into the UD Mould: Swedish as a Use Case
mwe-1.6
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
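As a rough illustration of the treebank query behind an analysis like the one above, the sketch below uses the `conllu` package to count head-dependent pairs joined by the UD `fixed` relation in a CoNLL-U file. The file name is a placeholder, and multi-token `fixed` chains are simplified to the head plus one dependent.

```python
# Minimal sketch: count expressions annotated with the UD `fixed` relation.
# Requires `pip install conllu`; the treebank path is a placeholder.
from collections import Counter
from conllu import parse_incr

fixed_pairs = Counter()
with open("sv_talbanken-ud-train.conllu", encoding="utf-8") as f:
    for sentence in parse_incr(f):
        # Map token IDs to tokens, skipping multiword-token ranges.
        by_id = {tok["id"]: tok for tok in sentence if isinstance(tok["id"], int)}
        for tok in by_id.values():
            if tok["deprel"] == "fixed" and tok["head"] in by_id:
                head = by_id[tok["head"]]
                fixed_pairs[(head["form"].lower(), tok["form"].lower())] += 1

for pair, n in fixed_pairs.most_common(10):
    print(" ".join(pair), n)
```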
https://aclanthology.org/2024.mwe-1.7.bib
https://aclanthology.org/2024.mwe-1.7/
@inproceedings{masciolini-etal-2024-synthetic, title = "Synthetic-Error Augmented Parsing of {S}wedish as a Second Language: Experiments with Word Order", author = "Masciolini, Arianna and Francis, Emilie and Szawerna, Maria Irena", editor = {Bhatia, Archna and Bouma, Gosse and Do{\u{g}}ru{\"o}z, A. Seza and Evang, Kilian and Garcia, Marcos and Giouli, Voula and Han, Lifeng and Nivre, Joakim and Rademaker, Alexandre}, booktitle = "Proceedings of the Joint Workshop on Multiword Expressions and Universal Dependencies (MWE-UD) @ LREC-COLING 2024", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.mwe-1.7", pages = "43--49", abstract = "Ungrammatical text poses significant challenges for off-the-shelf dependency parsers. In this paper, we explore the effectiveness of using synthetic data to improve performance on essays written by learners of Swedish as a second language. Due to their relevance and ease of annotation, we restrict our initial experiments to word order errors. To do that, we build a corrupted version of the standard Swedish Universal Dependencies (UD) treebank Talbanken, mimicking the error patterns and frequency distributions observed in the Swedish Learner Language (SweLL) corpus. We then use the MaChAmp (Massive Choice, Ample tasks) toolkit to train an array of BERT-based dependency parsers, fine-tuning on different combinations of original and corrupted data. We evaluate the resulting models not only on their respective test sets but also, most importantly, on a smaller collection of sentence-correction pairs derived from SweLL. Results show small but significant performance improvements on the target domain, with minimal decline on normative data.", }
Ungrammatical text poses significant challenges for off-the-shelf dependency parsers. In this paper, we explore the effectiveness of using synthetic data to improve performance on essays written by learners of Swedish as a second language. Due to their relevance and ease of annotation, we restrict our initial experiments to word order errors. To do that, we build a corrupted version of the standard Swedish Universal Dependencies (UD) treebank Talbanken, mimicking the error patterns and frequency distributions observed in the Swedish Learner Language (SweLL) corpus. We then use the MaChAmp (Massive Choice, Ample tasks) toolkit to train an array of BERT-based dependency parsers, fine-tuning on different combinations of original and corrupted data. We evaluate the resulting models not only on their respective test sets but also, most importantly, on a smaller collection of sentence-correction pairs derived from SweLL. Results show small but significant performance improvements on the target domain, with minimal decline on normative data.
[ "Masciolini, Arianna", "Francis, Emilie", "Szawerna, Maria Irena" ]
Synthetic-Error Augmented Parsing of Swedish as a Second Language: Experiments with Word Order
mwe-1.7
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
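The corruption step described above can be pictured with a minimal, invented sketch: swap one random pair of adjacent tokens with a small probability. The real procedure mimics the SweLL error distributions and must also remap head indices in the corrupted treebank, which this toy version omits.

```python
import random

def corrupt_word_order(tokens, swap_prob=0.1, rng=random):
    """Possibly swap one random adjacent token pair, as a crude
    stand-in for a learner word-order error. Illustrative only:
    the paper's corruption follows SweLL error patterns."""
    tokens = list(tokens)
    if len(tokens) > 1 and rng.random() < swap_prob:
        i = rng.randrange(len(tokens) - 1)
        tokens[i], tokens[i + 1] = tokens[i + 1], tokens[i]
    return tokens

print(corrupt_word_order("igår läste jag en bok".split(), swap_prob=1.0))
```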
https://aclanthology.org/2024.mwe-1.8.bib
https://aclanthology.org/2024.mwe-1.8/
@inproceedings{sellmer-hellwig-2024-vedic, title = "The {V}edic Compound Dataset", author = "Sellmer, Sven and Hellwig, Oliver", editor = {Bhatia, Archna and Bouma, Gosse and Do{\u{g}}ru{\"o}z, A. Seza and Evang, Kilian and Garcia, Marcos and Giouli, Voula and Han, Lifeng and Nivre, Joakim and Rademaker, Alexandre}, booktitle = "Proceedings of the Joint Workshop on Multiword Expressions and Universal Dependencies (MWE-UD) @ LREC-COLING 2024", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.mwe-1.8", pages = "50--55", abstract = "This paper introduces the Vedic Compound Dataset (VCD), the first resource providing annotated compounds from Vedic Sanskrit, a South Asian Indo-European language used from ca. 1500 to 500 BCE. The VCD aims at facilitating the study of language change in early Indo-Iranian and offers comparative material for quantitative cross-linguistic research on compounds. The process of annotating Vedic compounds is complex as they contain five of the six basic types of compounds defined by Scalise {\&} Bisetto (2005), which are, however, not consistently marked in morphosyntax, making their automatic classification a significant challenge. The paper details the process of collecting and preprocessing the relevant data, with a particular focus on the question of how to distinguish exocentric from endocentric usage. It further discusses experiments with a simple ML classifier that uses compound internal syntactic relations, outlines the composition of the dataset, and sketches directions for future research.", }
This paper introduces the Vedic Compound Dataset (VCD), the first resource providing annotated compounds from Vedic Sanskrit, a South Asian Indo-European language used from ca. 1500 to 500 BCE. The VCD aims at facilitating the study of language change in early Indo-Iranian and offers comparative material for quantitative cross-linguistic research on compounds. The process of annotating Vedic compounds is complex as they contain five of the six basic types of compounds defined by Scalise {\&} Bisetto (2005), which are, however, not consistently marked in morphosyntax, making their automatic classification a significant challenge. The paper details the process of collecting and preprocessing the relevant data, with a particular focus on the question of how to distinguish exocentric from endocentric usage. It further discusses experiments with a simple ML classifier that uses compound internal syntactic relations, outlines the composition of the dataset, and sketches directions for future research.
[ "Sellmer, Sven", "Hellwig, Oliver" ]
The Vedic Compound Dataset
mwe-1.8
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
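The "simple ML classifier that uses compound internal syntactic relations" is not spelled out in the abstract; the sketch below shows one plausible shape for it, a bag-of-relations logistic regression over a compound's internal dependency labels, here predicting endocentric versus exocentric use. The tiny training set is invented.

```python
# Hypothetical sketch of a relation-based compound classifier.
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train = [
    ({"rel=amod": 1}, "endocentric"),
    ({"rel=nmod": 1}, "endocentric"),
    ({"rel=nmod": 1, "rel=amod": 1}, "exocentric"),
    ({"rel=nummod": 1}, "exocentric"),
]
X, y = zip(*train)
clf = make_pipeline(DictVectorizer(), LogisticRegression())
clf.fit(list(X), list(y))
print(clf.predict([{"rel=amod": 1}]))  # e.g. ['endocentric']
```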
https://aclanthology.org/2024.mwe-1.9.bib
https://aclanthology.org/2024.mwe-1.9/
@inproceedings{jobanputra-etal-2024-universal, title = "A {U}niversal {D}ependencies Treebank for {G}ujarati", author = {Jobanputra, Mayank and Mehta, Maitrey and {\c{C}}{\"o}ltekin, {\c{C}}a{\u{g}}r{\i}}, editor = {Bhatia, Archna and Bouma, Gosse and Do{\u{g}}ru{\"o}z, A. Seza and Evang, Kilian and Garcia, Marcos and Giouli, Voula and Han, Lifeng and Nivre, Joakim and Rademaker, Alexandre}, booktitle = "Proceedings of the Joint Workshop on Multiword Expressions and Universal Dependencies (MWE-UD) @ LREC-COLING 2024", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.mwe-1.9", pages = "56--62", abstract = "The Universal Dependencies (UD) project has presented itself as a valuable platform to develop various resources for the languages of the world. We present and release a sample treebank for the Indo-Aryan language of Gujarati {--} a widely spoken language with little linguistic resources. This treebank is the first labeled dataset for dependency parsing in the language and the script (the Gujarati script). The treebank contains 187 part-of-speech and dependency annotated sentences from diverse genres. We discuss various idiosyncratic examples, annotation choices and present an elaborate corpus along with agreement statistics. We see this work as a valuable resource and a stepping stone for research in Gujarati Computational Linguistics.", }
The Universal Dependencies (UD) project has presented itself as a valuable platform to develop various resources for the languages of the world. We present and release a sample treebank for the Indo-Aryan language of Gujarati {--} a widely spoken language with few linguistic resources. This treebank is the first labeled dataset for dependency parsing in the language and the script (the Gujarati script). The treebank contains 187 part-of-speech and dependency annotated sentences from diverse genres. We discuss various idiosyncratic examples and annotation choices, and present an elaborate corpus along with agreement statistics. We see this work as a valuable resource and a stepping stone for research in Gujarati Computational Linguistics.
[ "Jobanputra, Mayank", "Mehta, Maitrey", "{\\c{C}}{\\\"o}ltekin, {\\c{C}}a{\\u{g}}r{\\i}" ]
A Universal Dependencies Treebank for Gujarati
mwe-1.9
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
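One standard way to compute the agreement statistics mentioned above is Cohen's kappa over two annotators' tag sequences; the sketch below uses scikit-learn on invented POS tags.

```python
from sklearn.metrics import cohen_kappa_score

# Invented tag sequences for the same five tokens from two annotators.
annotator_a = ["NOUN", "VERB", "ADP", "NOUN", "PUNCT"]
annotator_b = ["NOUN", "VERB", "ADP", "PROPN", "PUNCT"]
print(f"Cohen's kappa: {cohen_kappa_score(annotator_a, annotator_b):.2f}")
```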
https://aclanthology.org/2024.mwe-1.10.bib
https://aclanthology.org/2024.mwe-1.10/
@inproceedings{mao-etal-2024-overcoming, title = "Overcoming Early Saturation on Low-Resource Languages in Multilingual Dependency Parsing", author = "Mao, Jiannan and Ding, Chenchen and Kaing, Hour and Tanaka, Hideki and Utiyama, Masao and Matsumoto., Tadahiro", editor = {Bhatia, Archna and Bouma, Gosse and Do{\u{g}}ru{\"o}z, A. Seza and Evang, Kilian and Garcia, Marcos and Giouli, Voula and Han, Lifeng and Nivre, Joakim and Rademaker, Alexandre}, booktitle = "Proceedings of the Joint Workshop on Multiword Expressions and Universal Dependencies (MWE-UD) @ LREC-COLING 2024", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.mwe-1.10", pages = "63--69", abstract = "UDify is a multilingual and multi-task parser fine-tuned on mBERT that achieves remarkable performance in high-resource languages. However, the performance saturates early and decreases gradually in low-resource languages as training proceeds. This work applies a data augmentation method and conducts experiments on seven few-shot and four zero-shot languages. The unlabeled attachment scores were improved on the zero-shot languages dependency parsing tasks, with the average score rising from 67.1{\%} to 68.7{\%}. Meanwhile, dependency parsing tasks for high-resource languages and other tasks were hardly affected. Experimental results indicate the data augmentation method is effective for low-resource languages in a multilingual dependency parsing.", }
UDify is a multilingual and multi-task parser fine-tuned on mBERT that achieves remarkable performance in high-resource languages. However, the performance saturates early and decreases gradually in low-resource languages as training proceeds. This work applies a data augmentation method and conducts experiments on seven few-shot and four zero-shot languages. Unlabeled attachment scores improved on the dependency parsing tasks for the zero-shot languages, with the average score rising from 67.1{\%} to 68.7{\%}. Meanwhile, dependency parsing tasks for high-resource languages and other tasks were hardly affected. Experimental results indicate the data augmentation method is effective for low-resource languages in multilingual dependency parsing.
[ "Mao, Jiannan", "Ding, Chenchen", "Kaing, Hour", "Tanaka, Hideki", "Utiyama, Masao", "Matsumoto., Tadahiro" ]
Overcoming Early Saturation on Low-Resource Languages in Multilingual Dependency Parsing
mwe-1.10
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
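The metric reported above, unlabeled attachment score (UAS), is simply the fraction of tokens whose predicted head matches the gold head; a self-contained sketch:

```python
def uas(gold_heads, pred_heads):
    """Unlabeled attachment score: share of tokens with the correct head."""
    assert len(gold_heads) == len(pred_heads)
    return sum(g == p for g, p in zip(gold_heads, pred_heads)) / len(gold_heads)

# Toy example: heads are governor indices, 0 marking the root.
print(uas([2, 0, 2, 2], [2, 0, 2, 3]))  # 0.75
```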
https://aclanthology.org/2024.mwe-1.11.bib
https://aclanthology.org/2024.mwe-1.11/
@inproceedings{morad-etal-2024-part, title = "Part-of-Speech Tagging for {N}orthern {K}urdish", author = "Morad, Peshmerge and Ahmadi, Sina and Gatti, Lorenzo", editor = {Bhatia, Archna and Bouma, Gosse and Do{\u{g}}ru{\"o}z, A. Seza and Evang, Kilian and Garcia, Marcos and Giouli, Voula and Han, Lifeng and Nivre, Joakim and Rademaker, Alexandre}, booktitle = "Proceedings of the Joint Workshop on Multiword Expressions and Universal Dependencies (MWE-UD) @ LREC-COLING 2024", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.mwe-1.11", pages = "70--80", abstract = "In the growing domain of natural language processing, low-resourced languages like Northern Kurdish remain largely unexplored due to the lack of resources needed to be part of this growth. In particular, the tasks of part-of-speech tagging and tokenization for Northern Kurdish are still insufficiently addressed. In this study, we aim to bridge this gap by evaluating a range of statistical, neural, and fine-tuned-based models specifically tailored for Northern Kurdish. Leveraging limited but valuable datasets, including the Universal Dependency Kurmanji treebank and a novel manually annotated and tokenized gold-standard dataset consisting of 136 sentences (2,937 tokens). We evaluate several POS tagging models and report that the fine-tuned transformer-based model outperforms others, achieving an accuracy of 0.87 and a macro-averaged F1 score of 0.77. Data and models are publicly available under an open license at https://github.com/peshmerge/northern-kurdish-pos-tagging", }
In the growing domain of natural language processing, low-resourced languages like Northern Kurdish remain largely unexplored due to the lack of resources needed to be part of this growth. In particular, the tasks of part-of-speech tagging and tokenization for Northern Kurdish are still insufficiently addressed. In this study, we aim to bridge this gap by evaluating a range of statistical, neural, and fine-tuning-based models specifically tailored for Northern Kurdish, leveraging limited but valuable datasets, including the Universal Dependencies Kurmanji treebank and a novel manually annotated and tokenized gold-standard dataset consisting of 136 sentences (2,937 tokens). We evaluate several POS tagging models and report that the fine-tuned transformer-based model outperforms the others, achieving an accuracy of 0.87 and a macro-averaged F1 score of 0.77. Data and models are publicly available under an open license at https://github.com/peshmerge/northern-kurdish-pos-tagging
[ "Morad, Peshmerge", "Ahmadi, Sina", "Gatti, Lorenzo" ]
Part-of-Speech Tagging for Northern Kurdish
mwe-1.11
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
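The two scores reported above, accuracy and macro-averaged F1, can be reproduced for any tagger output with scikit-learn; the gold and predicted tags below are invented.

```python
from sklearn.metrics import accuracy_score, f1_score

gold = ["NOUN", "VERB", "NOUN", "ADJ", "VERB"]
pred = ["NOUN", "VERB", "ADJ", "ADJ", "VERB"]
print("accuracy:", accuracy_score(gold, pred))
print("macro F1:", f1_score(gold, pred, average="macro"))
```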
https://aclanthology.org/2024.mwe-1.12.bib
https://aclanthology.org/2024.mwe-1.12/
@inproceedings{alves-etal-2024-diachronic, title = "Diachronic Analysis of Multi-word Expression Functional Categories in Scientific {E}nglish", author = "Alves, Diego and Degaetano-Ortlieb, Stefania and Schmidt, Elena and Teich, Elke", editor = {Bhatia, Archna and Bouma, Gosse and Do{\u{g}}ru{\"o}z, A. Seza and Evang, Kilian and Garcia, Marcos and Giouli, Voula and Han, Lifeng and Nivre, Joakim and Rademaker, Alexandre}, booktitle = "Proceedings of the Joint Workshop on Multiword Expressions and Universal Dependencies (MWE-UD) @ LREC-COLING 2024", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.mwe-1.12", pages = "81--87", abstract = "We present a diachronic analysis of multi-word expressions (MWEs) in English based on the Royal Society Corpus, a dataset containing 300+ years of the scientific publications of the Royal Society of London. Specifically, we investigate the functions of MWEs, such as stance markers ({``}is is interesting{''}) or discourse organizers ({``}in this section{''}), and their development over time. Our approach is multi-disciplinary: to detect MWEs we use Universal Dependencies, to classify them functionally we use an approach from register linguistics, and to assess their role in diachronic development we use an information-theoretic measure, relative entropy.", }
We present a diachronic analysis of multi-word expressions (MWEs) in English based on the Royal Society Corpus, a dataset containing 300+ years of the scientific publications of the Royal Society of London. Specifically, we investigate the functions of MWEs, such as stance markers ({``}it is interesting{''}) or discourse organizers ({``}in this section{''}), and their development over time. Our approach is multi-disciplinary: to detect MWEs we use Universal Dependencies, to classify them functionally we use an approach from register linguistics, and to assess their role in diachronic development we use an information-theoretic measure, relative entropy.
[ "Alves, Diego", "Degaetano-Ortlieb, Stefania", "Schmidt, Elena", "Teich, Elke" ]
Diachronic Analysis of Multi-word Expression Functional Categories in Scientific English
mwe-1.12
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
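The information-theoretic measure named above is relative entropy (Kullback-Leibler divergence), computed as the sum over words of p(w) * log2(p(w) / q(w)). Below is a sketch over two word-frequency distributions, with add-one smoothing so the quantity stays finite; the counts are invented.

```python
import math
from collections import Counter

def relative_entropy(p_counts, q_counts):
    """D(P || Q) in bits over the joint vocabulary, with add-one
    smoothing. A simplified stand-in for the paper's measure."""
    vocab = set(p_counts) | set(q_counts)
    p_total = sum(p_counts.values()) + len(vocab)
    q_total = sum(q_counts.values()) + len(vocab)
    return sum(
        ((p_counts[w] + 1) / p_total)
        * math.log2(((p_counts[w] + 1) / p_total) / ((q_counts[w] + 1) / q_total))
        for w in vocab
    )

early = Counter({"it is interesting": 5, "in this section": 2})
late = Counter({"it is interesting": 1, "in this section": 6})
print(f"{relative_entropy(early, late):.3f} bits")
```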
https://aclanthology.org/2024.mwe-1.13.bib
https://aclanthology.org/2024.mwe-1.13/
@inproceedings{hadj-mohamed-etal-2024-lexicons, title = "Lexicons Gain the Upper Hand in {A}rabic {MWE} Identification", author = "Hadj Mohamed, Najet and Savary, Agata and Ben Khelil, Cherifa and Antoine, Jean-Yves and Keskes, Iskandar and Hadrich-Belguith, Lamia", editor = {Bhatia, Archna and Bouma, Gosse and Do{\u{g}}ru{\"o}z, A. Seza and Evang, Kilian and Garcia, Marcos and Giouli, Voula and Han, Lifeng and Nivre, Joakim and Rademaker, Alexandre}, booktitle = "Proceedings of the Joint Workshop on Multiword Expressions and Universal Dependencies (MWE-UD) @ LREC-COLING 2024", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.mwe-1.13", pages = "88--97", abstract = "This paper highlights the importance of integrating MWE identification with the development of syntactic MWE lexicons. It suggests that lexicons with minimal morphosyntactic information can amplify current MWE-annotated datasets and refine identification strategies. To our knowledge, this work represents the first attempt to focus on both seen and unseen of VMWEs for Arabic. It also deals with the challenge of differentiating between literal and figurative interpretations of idiomatic expressions. The approach involves a dual-phase procedure: first projecting a VMWE lexicon onto a corpus to identify candidate occurrences, then disambiguating these occurrences to distinguish idiomatic from literal instances. Experiments outlined in the paper aim to assess the efficacy of this technique, utilizing a lexicon known as LEXAR and the {``}parseme-ar{''} corpus. The findings suggest that lexicon-driven strategies have the potential to refine MWE identification, particularly for unseen occurrences.", }
This paper highlights the importance of integrating MWE identification with the development of syntactic MWE lexicons. It suggests that lexicons with minimal morphosyntactic information can amplify current MWE-annotated datasets and refine identification strategies. To our knowledge, this work represents the first attempt to focus on both seen and unseen VMWEs for Arabic. It also deals with the challenge of differentiating between literal and figurative interpretations of idiomatic expressions. The approach involves a dual-phase procedure: first projecting a VMWE lexicon onto a corpus to identify candidate occurrences, then disambiguating these occurrences to distinguish idiomatic from literal instances. Experiments outlined in the paper aim to assess the efficacy of this technique, utilizing a lexicon known as LEXAR and the {``}parseme-ar{''} corpus. The findings suggest that lexicon-driven strategies have the potential to refine MWE identification, particularly for unseen occurrences.
[ "Hadj Mohamed, Najet", "Savary, Agata", "Ben Khelil, Cherifa", "Antoine, Jean-Yves", "Keskes, Isk", "ar", "Hadrich-Belguith, Lamia" ]
Lexicons Gain the Upper Hand in Arabic MWE Identification
mwe-1.13
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
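The first, projection phase of the dual-phase procedure can be sketched as matching lexicon lemma sets against lemmatized sentences; the entries below are English stand-ins for LEXAR's Arabic ones, and the disambiguation phase is omitted.

```python
# Loose candidate projection: an entry matches if all of its lemmas
# occur in the sentence. Real projection would also check syntax.
lexicon = [("throw", "towel"), ("pull", "leg")]

def candidate_occurrences(sentence_lemmas, lexicon):
    return [e for e in lexicon if all(l in sentence_lemmas for l in e)]

sent = ["he", "throw", "in", "the", "towel", "early"]
print(candidate_occurrences(sent, lexicon))  # [('throw', 'towel')]
```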
https://aclanthology.org/2024.mwe-1.14.bib
https://aclanthology.org/2024.mwe-1.14/
@inproceedings{jain-vaidya-2024-revisiting, title = "Revisiting {VMWE}s in {H}indi: Annotating Layers of Predication", author = "Jain, Kanishka and Vaidya, Ashwini", editor = {Bhatia, Archna and Bouma, Gosse and Do{\u{g}}ru{\"o}z, A. Seza and Evang, Kilian and Garcia, Marcos and Giouli, Voula and Han, Lifeng and Nivre, Joakim and Rademaker, Alexandre}, booktitle = "Proceedings of the Joint Workshop on Multiword Expressions and Universal Dependencies (MWE-UD) @ LREC-COLING 2024", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.mwe-1.14", pages = "98--105", abstract = "Multiword expressions in languages like Hindi are both productive and challenging. Hindi not only uses a variety of verbal multiword expressions (VMWEs) but also employs different combinatorial strategies to create new types of multiword expressions. In this paper we are investigating two such strategies that are quite common in the language. Firstly, we describe that VMWEs in Hindi are not just lexical but also morphological. Causatives are formed morphologically in Hindi. Second, we examine Stacked VMWEs i.e. when at least two VMWEs occur together. We suggest that the existing PARSEME annotation framework can be extended to these two phenomena without changing the existing guidelines. We also propose rule-based heuristics using existing Universal Dependency annotations to automatically identify and annotate some of the VMWEs in the language. The goal of this paper is to refine the existing PARSEME corpus of Hindi for VMWEs while expanding its scope giving a more comprehensive picture of VMWEs in Hindi.", }
Multiword expressions in languages like Hindi are both productive and challenging. Hindi not only uses a variety of verbal multiword expressions (VMWEs) but also employs different combinatorial strategies to create new types of multiword expressions. In this paper we investigate two such strategies that are quite common in the language. First, we show that VMWEs in Hindi are not just lexical but also morphological: causatives in Hindi are formed morphologically. Second, we examine stacked VMWEs, i.e. cases where at least two VMWEs occur together. We suggest that the existing PARSEME annotation framework can be extended to these two phenomena without changing the existing guidelines. We also propose rule-based heuristics using existing Universal Dependencies annotations to automatically identify and annotate some of the VMWEs in the language. The goal of this paper is to refine the existing PARSEME corpus of Hindi for VMWEs while expanding its scope, giving a more comprehensive picture of VMWEs in Hindi.
[ "Jain, Kanishka", "Vaidya, Ashwini" ]
Revisiting VMWEs in Hindi: Annotating Layers of Predication
mwe-1.14
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
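The abstract does not give the heuristics themselves; the sketch below shows the general shape such a UD-based rule could take, flagging a dependent attached by `compound` to a head whose lemma is a common Hindi light verb. The lemma list and the rule are illustrative, not the authors' actual heuristics.

```python
LIGHT_VERBS = {"कर", "हो", "दे", "ले"}  # kar 'do', ho 'be', de 'give', le 'take'

def lvc_candidates(sentence):
    """`sentence`: list of dicts with UD-style id/form/lemma/head/deprel."""
    by_id = {tok["id"]: tok for tok in sentence}
    return [
        (tok["form"], by_id[tok["head"]]["form"])
        for tok in sentence
        if tok["deprel"] == "compound"
        and tok["head"] in by_id
        and by_id[tok["head"]]["lemma"] in LIGHT_VERBS
    ]

sent = [
    {"id": 1, "form": "काम", "lemma": "काम", "head": 2, "deprel": "compound"},
    {"id": 2, "form": "किया", "lemma": "कर", "head": 0, "deprel": "root"},
]
print(lvc_candidates(sent))  # [('काम', 'किया')] -- kaam kiyaa 'did work'
```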
https://aclanthology.org/2024.mwe-1.15.bib
https://aclanthology.org/2024.mwe-1.15/
@inproceedings{krstev-etal-2024-towards, title = "Towards the semantic annotation of {SR}-{ELEXIS} corpus: Insights into Multiword Expressions and Named Entities", author = "Krstev, Cvetana and Stankovi{\'c}, Ranka and Markovi{\'c}, Aleksandra M. and Mihajlov, Teodora Sofija", editor = {Bhatia, Archna and Bouma, Gosse and Do{\u{g}}ru{\"o}z, A. Seza and Evang, Kilian and Garcia, Marcos and Giouli, Voula and Han, Lifeng and Nivre, Joakim and Rademaker, Alexandre}, booktitle = "Proceedings of the Joint Workshop on Multiword Expressions and Universal Dependencies (MWE-UD) @ LREC-COLING 2024", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.mwe-1.15", pages = "106--114", abstract = "This paper presents the work in progress on ELEXIS-sr corpus, the Serbian addition to the ELEXIS multilingual annotated corpus ElexisWSD, comprising semantic annotations and word sense repositories. The ELEXIS corpus has parallel annotations in ten European languages, serving as a cross-lingual benchmark for evaluating low and medium-resourced European languages. The focus in this paper is on multiword expressions (MWEs) and named entities (NEs), their recognition in the ELEXIS-sr sentence set, and comparison with annotations in other languages. The first steps in building the Serbian sense inventory are discussed, and some results concerning MWEs and NEs are analysed. Once completed, the ELEXIS-sr corpus will be the first sense annotated corpus using the Serbian WordNet (SrpWN). Finally, ideas to represent MWE lexicon entries as Linguistic Linked-Open Data (LLOD) and connect them with occurrences in the corpus are presented.", }
This paper presents work in progress on the ELEXIS-sr corpus, the Serbian addition to the ELEXIS multilingual annotated corpus ElexisWSD, comprising semantic annotations and word sense repositories. The ELEXIS corpus has parallel annotations in ten European languages, serving as a cross-lingual benchmark for evaluating low- and medium-resourced European languages. The focus in this paper is on multiword expressions (MWEs) and named entities (NEs), their recognition in the ELEXIS-sr sentence set, and comparison with annotations in other languages. The first steps in building the Serbian sense inventory are discussed, and some results concerning MWEs and NEs are analysed. Once completed, the ELEXIS-sr corpus will be the first sense-annotated corpus using the Serbian WordNet (SrpWN). Finally, ideas to represent MWE lexicon entries as Linguistic Linked Open Data (LLOD) and connect them with occurrences in the corpus are presented.
[ "Krstev, Cvetana", "Stankovi{\\'c}, Ranka", "Markovi{\\'c}, Aleks", "ra M.", "Mihajlov, Teodora Sofija" ]
Towards the semantic annotation of SR-ELEXIS corpus: Insights into Multiword Expressions and Named Entities
mwe-1.15
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.mwe-1.16.bib
https://aclanthology.org/2024.mwe-1.16/
@inproceedings{ehren-etal-2024-leave, title = "To Leave No Stone Unturned: Annotating Verbal Idioms in the {P}arallel {M}eaning {B}ank", author = "Ehren, Rafael and Evang, Kilian and Kallmeyer, Laura", editor = {Bhatia, Archna and Bouma, Gosse and Do{\u{g}}ru{\"o}z, A. Seza and Evang, Kilian and Garcia, Marcos and Giouli, Voula and Han, Lifeng and Nivre, Joakim and Rademaker, Alexandre}, booktitle = "Proceedings of the Joint Workshop on Multiword Expressions and Universal Dependencies (MWE-UD) @ LREC-COLING 2024", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.mwe-1.16", pages = "115--124", abstract = "Idioms present many challenges to semantic annotation in a lexicalized framework, which leads to them being underrepresented or inadequately annotated in sembanks. In this work, we address this problem with respect to verbal idioms in the Parallel Meaning Bank (PMB), specifically in its German part, where only some idiomatic expressions have been annotated correctly. We first select candidate idiomatic expressions, then determine their idiomaticity status and whether they are decomposable or not, and then we annotate their semantics using WordNet senses and VerbNet semantic roles. Overall, inter-annotator agreement is very encouraging. A difficulty, however, is to choose the correct word sense. This is not surprising, given that English synsets are many and there is often no unique mapping from German idioms and words to them. Besides this, there are many subtle differences and interesting challenging cases. We discuss some of them in this paper.", }
Idioms present many challenges to semantic annotation in a lexicalized framework, which leads to them being underrepresented or inadequately annotated in sembanks. In this work, we address this problem with respect to verbal idioms in the Parallel Meaning Bank (PMB), specifically in its German part, where only some idiomatic expressions have been annotated correctly. We first select candidate idiomatic expressions, then determine their idiomaticity status and whether they are decomposable or not, and then we annotate their semantics using WordNet senses and VerbNet semantic roles. Overall, inter-annotator agreement is very encouraging. A difficulty, however, is to choose the correct word sense. This is not surprising, given that English synsets are many and there is often no unique mapping from German idioms and words to them. Besides this, there are many subtle differences and interesting challenging cases. We discuss some of them in this paper.
[ "Ehren, Rafael", "Evang, Kilian", "Kallmeyer, Laura" ]
To Leave No Stone Unturned: Annotating Verbal Idioms in the Parallel Meaning Bank
mwe-1.16
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
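The sense-choice difficulty described above is easy to reproduce: WordNet lists the candidate synsets an annotator must choose between, including for multiword lemmas. A sketch with NLTK (requires the WordNet data, via nltk.download("wordnet")):

```python
from nltk.corpus import wordnet as wn

# Verbal idioms appear in WordNet as underscore-joined lemmas.
for synset in wn.synsets("kick_the_bucket", pos=wn.VERB):
    print(synset.name(), "-", synset.definition())
```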
https://aclanthology.org/2024.mwe-1.17.bib
https://aclanthology.org/2024.mwe-1.17/
@inproceedings{gamba-etal-2024-universal, title = "Universal Feature-based Morphological Trees", author = "Gamba, Federica and Stephen, Abishek and {\v{Z}}abokrtsk{\'y}, Zden{\v{e}}k", editor = {Bhatia, Archna and Bouma, Gosse and Do{\u{g}}ru{\"o}z, A. Seza and Evang, Kilian and Garcia, Marcos and Giouli, Voula and Han, Lifeng and Nivre, Joakim and Rademaker, Alexandre}, booktitle = "Proceedings of the Joint Workshop on Multiword Expressions and Universal Dependencies (MWE-UD) @ LREC-COLING 2024", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.mwe-1.17", pages = "125--137", abstract = "The paper proposes a novel data representation inspired by Universal Dependencies (UD) syntactic trees, which are extended to capture the internal morphological structure of word forms. As a result, morphological segmentation is incorporated within the UD representation of syntactic dependencies. To derive the proposed data structure we leverage existing annotation of UD treebanks as well as available resources for segmentation, and we select 10 languages to work with in the presented case study. Additionally, statistical analysis reveals a robust correlation between morphs and sets of morphological features of words. We thus align the morphs to the observed feature inventories capturing the morphological meaning of morphs. Through the beneficial exploitation of cross-lingual correspondence of morphs, the proposed syntactic representation based on morphological segmentation proves to enhance the comparability of sentence structures across languages.", }
The paper proposes a novel data representation inspired by Universal Dependencies (UD) syntactic trees, which are extended to capture the internal morphological structure of word forms. As a result, morphological segmentation is incorporated within the UD representation of syntactic dependencies. To derive the proposed data structure we leverage existing annotation of UD treebanks as well as available resources for segmentation, and we select 10 languages to work with in the presented case study. Additionally, statistical analysis reveals a robust correlation between morphs and sets of morphological features of words. We thus align the morphs to the observed feature inventories capturing the morphological meaning of morphs. Through the beneficial exploitation of cross-lingual correspondence of morphs, the proposed syntactic representation based on morphological segmentation proves to enhance the comparability of sentence structures across languages.
[ "Gamba, Federica", "Stephen, Abishek", "{\\v{Z}}abokrtsk{\\'y}, Zden{\\v{e}}k" ]
Universal Feature-based Morphological Trees
mwe-1.17
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
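The proposed representation, a UD tree whose word nodes additionally dominate their own morphs, each aligned with the features it expresses, can be pictured with an invented toy structure (not the paper's actual format):

```python
# Toy word node in a feature-based morphological tree: the word keeps
# its syntactic role, and its morphs carry the features they express.
word = {
    "form": "walked",
    "deprel": "root",
    "morphs": [
        {"morph": "walk", "features": {}},               # lexical root
        {"morph": "ed", "features": {"Tense": "Past"}},  # inflectional suffix
    ],
}

for m in word["morphs"]:
    feats = ",".join(f"{k}={v}" for k, v in m["features"].items()) or "-"
    print(word["form"], "->", m["morph"], f"[{feats}]")
```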
https://aclanthology.org/2024.mwe-1.18.bib
https://aclanthology.org/2024.mwe-1.18/
@inproceedings{perri-etal-2024-combining, title = "Combining Grammatical and Relational Approaches. A Hybrid Method for the Identification of Candidate Collocations from Corpora", author = "Perri, Damiano and Fioravanti, Irene and Gervasi, Osvaldo and Spina, Stefania", editor = {Bhatia, Archna and Bouma, Gosse and Do{\u{g}}ru{\"o}z, A. Seza and Evang, Kilian and Garcia, Marcos and Giouli, Voula and Han, Lifeng and Nivre, Joakim and Rademaker, Alexandre}, booktitle = "Proceedings of the Joint Workshop on Multiword Expressions and Universal Dependencies (MWE-UD) @ LREC-COLING 2024", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.mwe-1.18", pages = "138--146", abstract = "We present an evaluation of three different methods for the automatic identification of candidate collocations in corpora, part of a research project focused on the development of a learner dictionary of Italian collocations. We compare the commonly used POS-based method and the syntactic dependency-based method with a hybrid method integrating both approaches. We conduct a statistical analysis on a sample corpus of written and spoken texts of different registers. Results show that the hybrid method can correctly detect more candidate collocations against a human annotated benchmark. The scores are particularly high in adjectival modifier rela- tions. A hybrid approach to candidate collocation identification seems to lead to an improvement in the quality of results.", }
We present an evaluation of three different methods for the automatic identification of candidate collocations in corpora, part of a research project focused on the development of a learner dictionary of Italian collocations. We compare the commonly used POS-based method and the syntactic dependency-based method with a hybrid method integrating both approaches. We conduct a statistical analysis on a sample corpus of written and spoken texts of different registers. Results show that the hybrid method can correctly detect more candidate collocations against a human-annotated benchmark. The scores are particularly high in adjectival modifier relations. A hybrid approach to candidate collocation identification seems to lead to an improvement in the quality of results.
[ "Perri, Damiano", "Fioravanti, Irene", "Gervasi, Osvaldo", "Spina, Stefania" ]
Combining Grammatical and Relational Approaches. A Hybrid Method for the Identification of Candidate Collocations from Corpora
mwe-1.18
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
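The hybrid criterion can be sketched as requiring both signals at once: a word pair must match a POS pattern (the grammatical approach) and be linked by the corresponding dependency relation (the relational approach). The patterns and the toy Italian example are illustrative, not the paper's inventory.

```python
POS_PATTERNS = {("ADJ", "NOUN"): "amod", ("NOUN", "NOUN"): "nmod"}

def hybrid_candidates(sentence):
    by_id = {tok["id"]: tok for tok in sentence}
    out = []
    for tok in sentence:
        head = by_id.get(tok["head"])
        if head is None:
            continue
        wanted = POS_PATTERNS.get((tok["upos"], head["upos"]))
        if wanted and tok["deprel"] == wanted:
            out.append((tok["form"], head["form"]))
    return out

sent = [
    {"id": 1, "form": "forte", "upos": "ADJ", "head": 2, "deprel": "amod"},
    {"id": 2, "form": "pioggia", "upos": "NOUN", "head": 0, "deprel": "root"},
]
print(hybrid_candidates(sent))  # [('forte', 'pioggia')] 'heavy rain'
```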
https://aclanthology.org/2024.mwe-1.19.bib
https://aclanthology.org/2024.mwe-1.19/
@inproceedings{barbu-mititelu-etal-2024-multiword, title = "Multiword Expressions between the Corpus and the Lexicon: Universality, Idiosyncrasy, and the Lexicon-Corpus Interface", author = "Barbu Mititelu, Verginica and Giouli, Voula and Evang, Kilian and Zeman, Daniel and Osenova, Petya and Tiberius, Carole and Krek, Simon and Markantonatou, Stella and Stoyanova, Ivelina and Stankovi{\'c}, Ranka and Chiarcos, Christian", editor = {Bhatia, Archna and Bouma, Gosse and Do{\u{g}}ru{\"o}z, A. Seza and Evang, Kilian and Garcia, Marcos and Giouli, Voula and Han, Lifeng and Nivre, Joakim and Rademaker, Alexandre}, booktitle = "Proceedings of the Joint Workshop on Multiword Expressions and Universal Dependencies (MWE-UD) @ LREC-COLING 2024", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.mwe-1.19", pages = "147--153", abstract = "We present ongoing work towards defining a lexicon-corpus interface to serve as a benchmark in the representation of multiword expressions (of various parts of speech) in dedicated lexica and the linking of these entries to their corpus occurrences. The final aim is the harnessing of such resources for the automatic identification of multiword expressions in a text. The involvement of several natural languages aims at the universality of a solution not centered on a particular language, and also accommodating idiosyncrasies. Challenges in the lexicographic description of multiword expressions are discussed, the current status of lexica dedicated to this linguistic phenomenon is outlined, as well as the solution we envisage for creating an ecosystem of interlinked lexica and corpora containing and, respectively, annotated with multiword expressions.", }
We present ongoing work towards defining a lexicon-corpus interface to serve as a benchmark in the representation of multiword expressions (of various parts of speech) in dedicated lexica and the linking of these entries to their corpus occurrences. The final aim is to harness such resources for the automatic identification of multiword expressions in a text. The involvement of several natural languages aims at a universal solution that is not centered on any particular language while also accommodating idiosyncrasies. We discuss challenges in the lexicographic description of multiword expressions, outline the current status of lexica dedicated to this linguistic phenomenon, and present the solution we envisage for creating an ecosystem of interlinked lexica and corpora that contain and, respectively, are annotated with multiword expressions.
[ "Barbu Mititelu, Verginica", "Giouli, Voula", "Evang, Kilian", "Zeman, Daniel", "Osenova, Petya", "Tiberius, Carole", "Krek, Simon", "Markantonatou, Stella", "Stoyanova, Ivelina", "Stankovi{\\'c}, Ranka", "Chiarcos, Christian" ]
Multiword Expressions between the Corpus and the Lexicon: Universality, Idiosyncrasy, and the Lexicon-Corpus Interface
mwe-1.19
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.mwe-1.20.bib
https://aclanthology.org/2024.mwe-1.20/
@inproceedings{cibej-etal-2024-annotation, title = "Annotation of Multiword Expressions in the {SUK} 1.0 Training Corpus of {S}lovene: Lessons Learned and Future Steps", author = "{\v{C}}ibej, Jaka and Gantar, Polona and Bon, Mija", editor = {Bhatia, Archna and Bouma, Gosse and Do{\u{g}}ru{\"o}z, A. Seza and Evang, Kilian and Garcia, Marcos and Giouli, Voula and Han, Lifeng and Nivre, Joakim and Rademaker, Alexandre}, booktitle = "Proceedings of the Joint Workshop on Multiword Expressions and Universal Dependencies (MWE-UD) @ LREC-COLING 2024", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.mwe-1.20", pages = "154--162", abstract = "Recent progress within the UniDive COST Action on the compilation of universal guidelines for the annotation of non-verbal multiword expressions (MWEs) has provided an opportunity to improve and expand the work previously done within the PARSEME COST Action on the annotation of verbal multiword expressions in the SUK 1.0 Training Corpus of Slovene. A segment of the training corpus had already been annotated with verbal MWEs during PARSEME. As a follow-up and part of the New Grammar of Modern Standard Slovene (NSSSS) project, the same segment was annotated with non verbal MWEs, resulting in approximately 6, 500 sentences annotated by at least three annotators (described in Gantar et al., 2019). Since then, the entire SUK 1.0 was also manually annotated with UD part-of-speech tags. In the paper, we present an analysis of the MWE annotations exported from the corpus along with their part-of-speech structures through the lens of Universal Dependencies. We discuss the usefulness of the data in terms of potential insight for the further compilation and fine-tuning of guidelines particularly for non-verbal MWEs, and conclude with our plans for future work.", }
Recent progress within the UniDive COST Action on the compilation of universal guidelines for the annotation of non-verbal multiword expressions (MWEs) has provided an opportunity to improve and expand the work previously done within the PARSEME COST Action on the annotation of verbal multiword expressions in the SUK 1.0 Training Corpus of Slovene. A segment of the training corpus had already been annotated with verbal MWEs during PARSEME. As a follow-up and part of the New Grammar of Modern Standard Slovene (NSSSS) project, the same segment was annotated with non-verbal MWEs, resulting in approximately 6,500 sentences annotated by at least three annotators (described in Gantar et al., 2019). Since then, the entire SUK 1.0 was also manually annotated with UD part-of-speech tags. In the paper, we present an analysis of the MWE annotations exported from the corpus along with their part-of-speech structures through the lens of Universal Dependencies. We discuss the usefulness of the data in terms of potential insight for the further compilation and fine-tuning of guidelines, particularly for non-verbal MWEs, and conclude with our plans for future work.
[ "{\\v{C}}ibej, Jaka", "Gantar, Polona", "Bon, Mija" ]
Annotation of Multiword Expressions in the SUK 1.0 Training Corpus of Slovene: Lessons Learned and Future Steps
mwe-1.20
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.mwe-1.21.bib
https://aclanthology.org/2024.mwe-1.21/
@inproceedings{stephen-zeman-2024-light, title = "Light Verb Constructions in {U}niversal {D}ependencies for {S}outh {A}sian Languages", author = "Stephen, Abishek and Zeman, Daniel", editor = {Bhatia, Archna and Bouma, Gosse and Do{\u{g}}ru{\"o}z, A. Seza and Evang, Kilian and Garcia, Marcos and Giouli, Voula and Han, Lifeng and Nivre, Joakim and Rademaker, Alexandre}, booktitle = "Proceedings of the Joint Workshop on Multiword Expressions and Universal Dependencies (MWE-UD) @ LREC-COLING 2024", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.mwe-1.21", pages = "163--177", abstract = "We conduct a morphosyntactic investigation into the light verb constructions (LVCs) or the verbo-nominal predicates in South Asian languages. This work spans the Indo-Aryan and Dravidian language families in treebanks based on Universal Dependencies (UD). For the selected languages we show how well the existing annotation guidelines fare for the LVCs. We also reiterate the importance of the core and oblique distinction in UD and how informative it is for making accurate morphosyntactic annotation judgments for such predicates.", }
We conduct a morphosyntactic investigation into light verb constructions (LVCs), i.e. verbo-nominal predicates, in South Asian languages. This work spans the Indo-Aryan and Dravidian language families in treebanks based on Universal Dependencies (UD). For the selected languages we show how well the existing annotation guidelines fare for LVCs. We also reiterate the importance of the distinction between core and oblique arguments in UD and how informative it is for making accurate morphosyntactic annotation judgments for such predicates.
[ "Stephen, Abishek", "Zeman, Daniel" ]
Light Verb Constructions in Universal Dependencies for South Asian Languages
mwe-1.21
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
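A natural starting point for this kind of survey is counting light-verb annotations across treebanks; some UD treebanks mark light-verb dependents with the `compound:lvc` subtype, and whether and how consistently a given treebank does so is exactly the sort of question examined above. The file names below are placeholders.

```python
# Sketch: tally `compound:lvc` dependents per CoNLL-U file.
# Requires `pip install conllu`; paths are placeholders.
from collections import Counter
from conllu import parse_incr

counts = Counter()
for path in ["treebank_a.conllu", "treebank_b.conllu"]:
    with open(path, encoding="utf-8") as f:
        for sentence in parse_incr(f):
            counts[path] += sum(
                1 for tok in sentence if tok["deprel"] == "compound:lvc"
            )
print(counts)
```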
https://aclanthology.org/2024.mwe-1.22.bib
https://aclanthology.org/2024.mwe-1.22/
@inproceedings{phelps-etal-2024-sign, title = "Sign of the Times: Evaluating the use of Large Language Models for Idiomaticity Detection", author = "Phelps, Dylan and Pickard, Thomas M. R. and Mi, Maggie and Gow-Smith, Edward and Villavicencio, Aline", editor = {Bhatia, Archna and Bouma, Gosse and Do{\u{g}}ru{\"o}z, A. Seza and Evang, Kilian and Garcia, Marcos and Giouli, Voula and Han, Lifeng and Nivre, Joakim and Rademaker, Alexandre}, booktitle = "Proceedings of the Joint Workshop on Multiword Expressions and Universal Dependencies (MWE-UD) @ LREC-COLING 2024", month = may, year = "2024", address = "Torino, Italia", publisher = "ELRA and ICCL", url = "https://aclanthology.org/2024.mwe-1.22", pages = "178--187", abstract = "Despite the recent ubiquity of large language models and their high zero-shot prompted performance across a wide range of tasks, it is still not known how well they perform on tasks which require processing of potentially idiomatic language. In particular, how well do such models perform in comparison to encoder-only models fine-tuned specifically for idiomaticity tasks? In this work, we attempt to answer this question by looking at the performance of a range of LLMs (both local and software-as-a-service models) on three idiomaticity datasets: SemEval 2022 Task 2a, FLUTE, and MAGPIE. Overall, we find that whilst these models do give competitive performance, they do not match the results of fine-tuned task-specific models, even at the largest scales (e.g. for GPT-4). Nevertheless, we do see consistent performance improvements across model scale. Additionally, we investigate prompting approaches to improve performance, and discuss the practicalities of using LLMs for these tasks.", }
Despite the recent ubiquity of large language models and their high zero-shot prompted performance across a wide range of tasks, it is still not known how well they perform on tasks which require processing of potentially idiomatic language. In particular, how well do such models perform in comparison to encoder-only models fine-tuned specifically for idiomaticity tasks? In this work, we attempt to answer this question by looking at the performance of a range of LLMs (both local and software-as-a-service models) on three idiomaticity datasets: SemEval 2022 Task 2a, FLUTE, and MAGPIE. Overall, we find that whilst these models do give competitive performance, they do not match the results of fine-tuned task-specific models, even at the largest scales (e.g. for GPT-4). Nevertheless, we do see consistent performance improvements across model scale. Additionally, we investigate prompting approaches to improve performance, and discuss the practicalities of using LLMs for these tasks.
[ "Phelps, Dylan", "Pickard, Thomas M. R.", "Mi, Maggie", "Gow-Smith, Edward", "Villavicencio, Aline" ]
Sign of the Times: Evaluating the use of Large Language Models for Idiomaticity Detection
mwe-1.22
Poster
2405.09279
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
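A zero-shot setup of the kind evaluated above can be sketched as a prompt template plus a one-word answer parser; the wording is invented, not the paper's prompts, and `query_llm` stands in for whatever local or hosted model is under evaluation.

```python
def build_prompt(expression: str, sentence: str) -> str:
    return (
        f'In the sentence "{sentence}", is the expression '
        f'"{expression}" used idiomatically or literally? '
        "Answer with one word: idiomatic or literal."
    )

def parse_answer(response: str) -> str:
    return "idiomatic" if "idiomatic" in response.lower() else "literal"

prompt = build_prompt("sign of the times",
                      "Empty offices are a sign of the times.")
print(prompt)
# label = parse_answer(query_llm(prompt))  # query_llm left abstract here
```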