Datasets:

Schema (field name, dtype, observed min/max):

bibtex_url (string): lengths 41–53
proceedings (string): lengths 38–50
bibtext (string): lengths 566–3.75k
abstract (string): lengths 4–3.1k
authors (sequence): lengths 1–66
title (string): lengths 12–172
id (string): lengths 7–19
type (string): 2 classes
arxiv_id (string): lengths 0–10
GitHub (sequence): lengths 1–1
paper_page (string): lengths 0–40
n_linked_authors (int64): range -1 to 21
upvotes (int64): range -1 to 116
num_comments (int64): range -1 to 11
n_authors (int64): range -1 to 61
Models (sequence): lengths 0–100
Datasets (sequence): lengths 0–100
Spaces (sequence): lengths 0–100
old_Models (sequence): lengths 0–100
old_Datasets (sequence): lengths 0–100
old_Spaces (sequence): lengths 0–100
paper_page_exists_pre_conf (int64): range 0 to 1
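The rows that follow are individual paper records. As a quick orientation, here is a minimal sketch of loading and querying a dataset with this schema via the `datasets` library; the repository ID below is a placeholder, since the dataset's actual name on the Hub is not stated here.

```python
# Minimal sketch: load a dataset with the schema above and query it.
# "some-org/acl-2024-papers" is a placeholder repository ID, not this
# dataset's actual name on the Hub.
from datasets import load_dataset

ds = load_dataset("some-org/acl-2024-papers", split="train")

# Records whose paper page existed before the conference and that link at
# least one model on the Hub.
linked = ds.filter(
    lambda r: r["paper_page_exists_pre_conf"] == 1 and len(r["Models"]) > 0
)
for record in linked:
    print(record["id"], "|", record["title"], "|", len(record["Models"]), "models")
```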
https://aclanthology.org/2024.genbench-1.6.bib
https://aclanthology.org/2024.genbench-1.6/
@inproceedings{bueno-etal-2024-mlissard, title = "{ML}issard: Multilingual Long and Simple Sequential Reasoning Benchmarks", author = "Bueno, Mirelle Candida and Lotufo, Roberto and Frassetto Nogueira, Rodrigo", editor = "Hupkes, Dieuwke and Dankers, Verna and Batsuren, Khuyagbaatar and Kazemnejad, Amirhossein and Christodoulopoulos, Christos and Giulianelli, Mario and Cotterell, Ryan", booktitle = "Proceedings of the 2nd GenBench Workshop on Generalisation (Benchmarking) in NLP", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.genbench-1.6", pages = "86--95", abstract = "Language models are now capable of solving tasks that require dealing with long sequences consisting of hundreds of thousands of tokens. However, they often fail on tasks that require repetitive use of simple rules, even on sequences that are much shorter than those seen during training. For example, state-of-the-art LLMs can find common items in two lists with up to 20 items but fail when lists have 80 items. In this paper, we introduce MLissard, a multilingual benchmark designed to evaluate models{'} abilities to process and generate texts of varied lengths and offers a mechanism for controlling sequence complexity. Our evaluation of open-source and proprietary models show a consistent decline in performance across all models and languages as the complexity of the sequence increases. Surprisingly, the use of in-context examples in languages other than English helps increase extrapolation performance significantly.", }
Language models are now capable of solving tasks that require dealing with long sequences consisting of hundreds of thousands of tokens. However, they often fail on tasks that require repetitive use of simple rules, even on sequences that are much shorter than those seen during training. For example, state-of-the-art LLMs can find common items in two lists with up to 20 items but fail when lists have 80 items. In this paper, we introduce MLissard, a multilingual benchmark designed to evaluate models' abilities to process and generate texts of varied lengths, which also offers a mechanism for controlling sequence complexity. Our evaluation of open-source and proprietary models shows a consistent decline in performance across all models and languages as the complexity of the sequence increases. Surprisingly, the use of in-context examples in languages other than English helps increase extrapolation performance significantly.
[ "Bueno, Mirelle C", "ida", "Lotufo, Roberto", "Frassetto Nogueira, Rodrigo" ]
MLissard: Multilingual Long and Simple Sequential Reasoning Benchmarks
genbench-1.6
Poster
2410.06396
[ "https://github.com/unicamp-dl/lissard" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
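The MLissard abstract above cites a concrete failure mode: models find the common items in two 20-item lists but fail at 80 items. As an illustration only (not the MLissard code), a probe of this kind with controllable length can be generated like so:

```python
# Illustrative sketch: build a "common items in two lists" probe whose
# length is the experimental knob (e.g., 20 items vs. 80 items).
import random

def make_common_items_probe(n_items: int, n_common: int = 3, seed: int = 0):
    rng = random.Random(seed)
    vocab = [f"item{i}" for i in range(10 * n_items)]
    common = rng.sample(vocab, n_common)
    rest = [w for w in vocab if w not in common]
    list_a = common + rng.sample(rest, n_items - n_common)
    list_b = common + rng.sample([w for w in rest if w not in list_a],
                                 n_items - n_common)
    rng.shuffle(list_a)
    rng.shuffle(list_b)
    prompt = (f"List A: {', '.join(list_a)}\n"
              f"List B: {', '.join(list_b)}\n"
              "Which items appear in both lists?")
    return prompt, sorted(common)

short_prompt, answer = make_common_items_probe(20)  # models tend to succeed
long_prompt, _ = make_common_items_probe(80)        # models tend to fail
```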
https://aclanthology.org/2024.genbench-1.7.bib
https://aclanthology.org/2024.genbench-1.7/
@inproceedings{park-etal-2024-multiprageval, title = "{M}ulti{P}rag{E}val: Multilingual Pragmatic Evaluation of Large Language Models", author = "Park, Dojun and Lee, Jiwoo and Park, Seohyun and Jeong, Hyeyun and Koo, Youngeun and Hwang, Soonha and Park, Seonwoo and Lee, Sungeun", editor = "Hupkes, Dieuwke and Dankers, Verna and Batsuren, Khuyagbaatar and Kazemnejad, Amirhossein and Christodoulopoulos, Christos and Giulianelli, Mario and Cotterell, Ryan", booktitle = "Proceedings of the 2nd GenBench Workshop on Generalisation (Benchmarking) in NLP", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.genbench-1.7", pages = "96--119", abstract = "As the capabilities of Large Language Models (LLMs) expand, it becomes increasingly important to evaluate them beyond basic knowledge assessment, focusing on higher-level language understanding. This study introduces MultiPragEval, the first multilingual pragmatic evaluation of LLMs, designed for English, German, Korean, and Chinese. Comprising 1200 question units categorized according to Grice{'}s Cooperative Principle and its four conversational maxims, MultiPragEval enables an in-depth assessment of LLMs{'} contextual awareness and their ability to infer implied meanings. Our findings demonstrate that Claude3-Opus significantly outperforms other models in all tested languages, establishing a state-of-the-art in the field. Among open-source models, Solar-10.7B and Qwen1.5-14B emerge as strong competitors. By analyzing pragmatic inference, we provide valuable insights into the capabilities essential for advanced language comprehension in AI systems.", }
As the capabilities of Large Language Models (LLMs) expand, it becomes increasingly important to evaluate them beyond basic knowledge assessment, focusing on higher-level language understanding. This study introduces MultiPragEval, the first multilingual pragmatic evaluation of LLMs, designed for English, German, Korean, and Chinese. Comprising 1200 question units categorized according to Grice's Cooperative Principle and its four conversational maxims, MultiPragEval enables an in-depth assessment of LLMs' contextual awareness and their ability to infer implied meanings. Our findings demonstrate that Claude3-Opus significantly outperforms other models in all tested languages, establishing a state-of-the-art in the field. Among open-source models, Solar-10.7B and Qwen1.5-14B emerge as strong competitors. By analyzing pragmatic inference, we provide valuable insights into the capabilities essential for advanced language comprehension in AI systems.
[ "Park, Dojun", "Lee, Jiwoo", "Park, Seohyun", "Jeong, Hyeyun", "Koo, Youngeun", "Hwang, Soonha", "Park, Seonwoo", "Lee, Sungeun" ]
MultiPragEval: Multilingual Pragmatic Evaluation of Large Language Models
genbench-1.7
Poster
2406.07736
[ "https://github.com/DojunPark/MultiPragEval" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
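MultiPragEval's question units are categorized by Gricean maxim and language, so results naturally aggregate along those two axes. A small illustrative sketch of that breakdown; the result rows are hypothetical:

```python
# Sketch: accuracy broken down by Gricean maxim and language, the reporting
# axes the MultiPragEval abstract describes. Result rows are invented.
from collections import defaultdict

results = [
    {"lang": "en", "maxim": "quantity", "correct": True},
    {"lang": "en", "maxim": "quantity", "correct": False},
    {"lang": "ko", "maxim": "relation", "correct": True},
]

totals = defaultdict(lambda: [0, 0])  # (lang, maxim) -> [n_correct, n_total]
for row in results:
    key = (row["lang"], row["maxim"])
    totals[key][0] += int(row["correct"])
    totals[key][1] += 1

for (lang, maxim), (n_correct, n) in sorted(totals.items()):
    print(f"{lang} / {maxim}: {n_correct / n:.0%}")
```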
https://aclanthology.org/2024.genbench-1.8.bib
https://aclanthology.org/2024.genbench-1.8/
@inproceedings{arzt-hanbury-2024-beyond, title = "Beyond the Numbers: Transparency in Relation Extraction Benchmark Creation and Leaderboards", author = "Arzt, Varvara and Hanbury, Allan", editor = "Hupkes, Dieuwke and Dankers, Verna and Batsuren, Khuyagbaatar and Kazemnejad, Amirhossein and Christodoulopoulos, Christos and Giulianelli, Mario and Cotterell, Ryan", booktitle = "Proceedings of the 2nd GenBench Workshop on Generalisation (Benchmarking) in NLP", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.genbench-1.8", pages = "120--130", abstract = "This paper investigates the transparency in the creation of benchmarks and the use of leaderboards for measuring progress in NLP, with a focus on the relation extraction (RE) task. Existing RE benchmarks often suffer from insufficient documentation, lacking crucial details such as data sources, inter-annotator agreement, the algorithms used for the selection of instances for datasets, and information on potential biases like dataset imbalance. Progress in RE is frequently measured by leaderboards that rank systems based on evaluation methods, typically limited to aggregate metrics like F1-score. However, the absence of detailed performance analysis beyond these metrics can obscure the true generalisation capabilities of models. Our analysis reveals that widely used RE benchmarks, such as TACRED and NYT, tend to be highly imbalanced and contain noisy labels. Moreover, the lack of class-based performance metrics fails to accurately reflect model performance across datasets with a large number of relation types. These limitations should be carefully considered when reporting progress in RE. While our discussion centers on the transparency of RE benchmarks and leaderboards, the observations we discuss are broadly applicable to other NLP tasks as well. Rather than undermining the significance and value of existing RE benchmarks and the development of new models, this paper advocates for improved documentation and more rigorous evaluation to advance the field.", }
This paper investigates the transparency in the creation of benchmarks and the use of leaderboards for measuring progress in NLP, with a focus on the relation extraction (RE) task. Existing RE benchmarks often suffer from insufficient documentation, lacking crucial details such as data sources, inter-annotator agreement, the algorithms used for the selection of instances for datasets, and information on potential biases like dataset imbalance. Progress in RE is frequently measured by leaderboards that rank systems based on evaluation methods, typically limited to aggregate metrics like F1-score. However, the absence of detailed performance analysis beyond these metrics can obscure the true generalisation capabilities of models. Our analysis reveals that widely used RE benchmarks, such as TACRED and NYT, tend to be highly imbalanced and contain noisy labels. Moreover, the lack of class-based performance metrics fails to accurately reflect model performance across datasets with a large number of relation types. These limitations should be carefully considered when reporting progress in RE. While our discussion centers on the transparency of RE benchmarks and leaderboards, the observations we discuss are broadly applicable to other NLP tasks as well. Rather than undermining the significance and value of existing RE benchmarks and the development of new models, this paper advocates for improved documentation and more rigorous evaluation to advance the field.
[ "Arzt, Varvara", "Hanbury, Allan" ]
Beyond the Numbers: Transparency in Relation Extraction Benchmark Creation and Leaderboards
genbench-1.8
Poster
2411.05224
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
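The paper's central point, that aggregate scores hide per-class behaviour on imbalanced relation-extraction data, is easy to reproduce on toy labels. A sketch with scikit-learn; the labels are stand-ins, not TACRED or NYT data:

```python
# Sketch: an aggregate micro F1 looks strong on an imbalanced label set even
# when every rare relation class collapses to zero.
from sklearn.metrics import classification_report, f1_score

y_true = ["no_relation"] * 90 + ["founded_by"] * 8 + ["spouse"] * 2
y_pred = ["no_relation"] * 98 + ["founded_by"] * 2  # always guesses majority

print("micro F1:", f1_score(y_true, y_pred, average="micro"))  # 0.90
print(classification_report(y_true, y_pred, zero_division=0))  # rare classes at 0
```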
https://aclanthology.org/2024.genbench-1.9.bib
https://aclanthology.org/2024.genbench-1.9/
@inproceedings{ross-etal-2024-artificial, title = "Is artificial intelligence still intelligence? {LLM}s generalize to novel adjective-noun pairs, but don{'}t mimic the full human distribution", author = "Ross, Hayley and Davidson, Kathryn and Kim, Najoung", editor = "Hupkes, Dieuwke and Dankers, Verna and Batsuren, Khuyagbaatar and Kazemnejad, Amirhossein and Christodoulopoulos, Christos and Giulianelli, Mario and Cotterell, Ryan", booktitle = "Proceedings of the 2nd GenBench Workshop on Generalisation (Benchmarking) in NLP", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.genbench-1.9", pages = "131--153", abstract = "Inferences from adjective-noun combinations like {``}Is artificial intelligence still intelligence?{''} provide a good test bed for LLMs{'} understanding of meaning and compositional generalization capability, since there are many combinations which are novel to both humans and LLMs but nevertheless elicit convergent human judgments. We study a range of LLMs and find that the largest models we tested are able to draw human-like inferences when the inference is determined by context and can generalize to unseen adjective-noun combinations. We also propose three methods to evaluate LLMs on these inferences out of context, where there is a distribution of human-like answers rather than a single correct answer. We find that LLMs show a human-like distribution on at most 75{\%} of our dataset, which is promising but still leaves room for improvement.", }
Inferences from adjective-noun combinations like "Is artificial intelligence still intelligence?" provide a good test bed for LLMs' understanding of meaning and compositional generalization capability, since there are many combinations which are novel to both humans and LLMs but nevertheless elicit convergent human judgments. We study a range of LLMs and find that the largest models we tested are able to draw human-like inferences when the inference is determined by context and can generalize to unseen adjective-noun combinations. We also propose three methods to evaluate LLMs on these inferences out of context, where there is a distribution of human-like answers rather than a single correct answer. We find that LLMs show a human-like distribution on at most 75% of our dataset, which is promising but still leaves room for improvement.
[ "Ross, Hayley", "Davidson, Kathryn", "Kim, Najoung" ]
Is artificial intelligence still intelligence? LLMs generalize to novel adjective-noun pairs, but don't mimic the full human distribution
genbench-1.9
Poster
2410.17482
[ "https://github.com/rossh2/artificial-intelligence" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
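For the out-of-context setting above, the target is a distribution over human answers rather than a single key. One simple way to score distributional match is total variation distance; this metric choice is an assumption for illustration, not the paper's exact method, and the probabilities are invented:

```python
# Sketch: compare a model's answer distribution to the human distribution
# for one adjective-noun inference. Probabilities are invented.
human = {"yes": 0.55, "no": 0.25, "depends": 0.20}
model = {"yes": 0.90, "no": 0.05, "depends": 0.05}

tvd = 0.5 * sum(abs(human[k] - model[k]) for k in human)
print(f"total variation distance: {tvd:.2f}")  # 0 = identical, 1 = disjoint
```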
https://aclanthology.org/2024.genbench-1.10.bib
https://aclanthology.org/2024.genbench-1.10/
@inproceedings{phatthiyaphaibun-etal-2024-chie, title = "{CHIE}: Generative {MRC} Evaluation for in-context {QA} with Correctness, Helpfulness, Irrelevancy, and Extraneousness Aspects", author = "Phatthiyaphaibun, Wannaphong and Nonesung, Surapon and Limkonchotiwat, Peerat and Udomcharoenchaikit, Can and Sawatphol, Jitkapat and Chuangsuwanich, Ekapol and Nutanong, Sarana", editor = "Hupkes, Dieuwke and Dankers, Verna and Batsuren, Khuyagbaatar and Kazemnejad, Amirhossein and Christodoulopoulos, Christos and Giulianelli, Mario and Cotterell, Ryan", booktitle = "Proceedings of the 2nd GenBench Workshop on Generalisation (Benchmarking) in NLP", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.genbench-1.10", pages = "154--164", abstract = "The evaluation of generative models in Machine Reading Comprehension (MRC) presents distinct difficulties, as traditional metrics like BLEU, ROUGE, METEOR, Exact Match, and F1 score often struggle to capture the nuanced and diverse responses. While embedding-based metrics such as BERTScore and BARTScore focus on semantic similarity, they still fail to fully address aspects such as recognizing additional helpful information and rewarding contextual faithfulness. Recent advances in large language model (LLM) based metrics offer more fine-grained evaluations, but challenges such as score clustering remain. This paper introduces a multi-aspect evaluation framework, CHIE,incorporating aspects of \textbf{C}orrectness, \textbf{H}elpfulness, \textbf{I}rrelevance, and \textbf{E}xtraneousness. Our approach, which uses binary categorical values rather than continuous rating scales, aligns well with human judgments, indicating its potential as a comprehensive and effective evaluation method.", }
The evaluation of generative models in Machine Reading Comprehension (MRC) presents distinct difficulties, as traditional metrics like BLEU, ROUGE, METEOR, Exact Match, and F1 score often struggle to capture nuanced and diverse responses. While embedding-based metrics such as BERTScore and BARTScore focus on semantic similarity, they still fail to fully address aspects such as recognizing additional helpful information and rewarding contextual faithfulness. Recent advances in large language model (LLM) based metrics offer more fine-grained evaluations, but challenges such as score clustering remain. This paper introduces a multi-aspect evaluation framework, CHIE, incorporating aspects of Correctness, Helpfulness, Irrelevance, and Extraneousness. Our approach, which uses binary categorical values rather than continuous rating scales, aligns well with human judgments, indicating its potential as a comprehensive and effective evaluation method.
[ "Phatthiyaphaibun, Wannaphong", "Nonesung, Surapon", "Limkonchotiwat, Peerat", "Udomcharoenchaikit, Can", "Sawatphol, Jitkapat", "Chuangsuwanich, Ekapol", "Nutanong, Sarana" ]
CHIE: Generative MRC Evaluation for in-context QA with Correctness, Helpfulness, Irrelevancy, and Extraneousness Aspects
genbench-1.10
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
1
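CHIE replaces rating scales with four binary aspect labels. A sketch of how such a judge could be prompted and parsed; the prompt wording and expected reply format are assumptions, not the paper's actual template:

```python
# Sketch of CHIE-style judging: elicit one binary label per aspect instead
# of a continuous rating. Prompt and reply format are assumptions.
ASPECTS = ["Correctness", "Helpfulness", "Irrelevancy", "Extraneousness"]

def chie_prompt(context: str, question: str, answer: str) -> str:
    aspect_lines = "\n".join(f"- {a}: yes or no" for a in ASPECTS)
    return (f"Context: {context}\nQuestion: {question}\nAnswer: {answer}\n\n"
            f"Judge the answer with one binary label per aspect:\n{aspect_lines}")

def parse_judgement(reply: str) -> dict:
    # Expects one "<Aspect>: yes|no" line per aspect in the judge's reply.
    labels = {}
    for line in reply.lower().splitlines():
        for aspect in ASPECTS:
            if aspect.lower() in line:
                labels[aspect] = "yes" in line
    return labels

print(parse_judgement("Correctness: yes\nHelpfulness: yes\n"
                      "Irrelevancy: no\nExtraneousness: no"))
```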
https://aclanthology.org/2024.genbench-1.11.bib
https://aclanthology.org/2024.genbench-1.11/
@inproceedings{dutt-etal-2024-investigating, title = "Investigating the Generalizability of Pretrained Language Models across Multiple Dimensions: A Case Study of {NLI} and {MRC}", author = "Dutt, Ritam and Choudhury, Sagnik Ray and Rao, Varun Venkat and Rose, Carolyn and Vydiswaran, V.G.Vinod", editor = "Hupkes, Dieuwke and Dankers, Verna and Batsuren, Khuyagbaatar and Kazemnejad, Amirhossein and Christodoulopoulos, Christos and Giulianelli, Mario and Cotterell, Ryan", booktitle = "Proceedings of the 2nd GenBench Workshop on Generalisation (Benchmarking) in NLP", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.genbench-1.11", pages = "165--182", abstract = "Generalization refers to the ability of machine learning models to perform well on dataset distributions different from the one it was trained on. While several pre-existing works have characterized the generalizability of NLP models across different dimensions, such as domain shift, adversarial perturbations, or compositional variations, most studies were carried out in a stand-alone setting, emphasizing a single dimension of interest. We bridge this gap by systematically investigating the generalizability of pre-trained language models across different architectures, sizes, and training strategies, over multiple dimensions for the task of natural language inference and question answering. Our results indicate that model instances typically exhibit consistent generalization trends, i.e., they generalize equally well (or poorly) across most scenarios, and this ability is correlated with model architecture, base dataset performance, size, and training mechanism. We hope this research motivates further work in a) developing a multi-dimensional generalization benchmark for systematic evaluation and b) examining the reasons behind models{'} generalization abilities. The code and data are available at https://github.com/sagnik/md-gen-nlp, and the trained models are released at https://huggingface.co/varun-v-rao.", }
Generalization refers to the ability of machine learning models to perform well on dataset distributions different from the one it was trained on. While several pre-existing works have characterized the generalizability of NLP models across different dimensions, such as domain shift, adversarial perturbations, or compositional variations, most studies were carried out in a stand-alone setting, emphasizing a single dimension of interest. We bridge this gap by systematically investigating the generalizability of pre-trained language models across different architectures, sizes, and training strategies, over multiple dimensions for the task of natural language inference and question answering. Our results indicate that model instances typically exhibit consistent generalization trends, i.e., they generalize equally well (or poorly) across most scenarios, and this ability is correlated with model architecture, base dataset performance, size, and training mechanism. We hope this research motivates further work in a) developing a multi-dimensional generalization benchmark for systematic evaluation and b) examining the reasons behind models' generalization abilities. The code and data are available at https://github.com/sagnik/md-gen-nlp, and the trained models are released at https://huggingface.co/varun-v-rao.
[ "Dutt, Ritam", "Choudhury, Sagnik Ray", "Rao, Varun Venkat", "Rose, Carolyn", "Vydiswaran, V.G.Vinod" ]
Investigating the Generalizability of Pretrained Language Models across Multiple Dimensions: A Case Study of NLI and MRC
genbench-1.11
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
1
https://aclanthology.org/2024.genbench-1.12.bib
https://aclanthology.org/2024.genbench-1.12/
@inproceedings{razzhigaev-etal-2024-omnidialog, title = "{O}mni{D}ialog: A Multimodal Benchmark for Generalization Across Text, Visual, and Audio Modalities", author = "Razzhigaev, Anton and Kurkin, Maxim and Goncharova, Elizaveta and Abdullaeva, Irina and Lysenko, Anastasia and Panchenko, Alexander and Kuznetsov, Andrey and Dimitrov, Denis", editor = "Hupkes, Dieuwke and Dankers, Verna and Batsuren, Khuyagbaatar and Kazemnejad, Amirhossein and Christodoulopoulos, Christos and Giulianelli, Mario and Cotterell, Ryan", booktitle = "Proceedings of the 2nd GenBench Workshop on Generalisation (Benchmarking) in NLP", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.genbench-1.12", pages = "183--195", abstract = "We introduce $\textit{OmniDialog}$ {---} the first trimodal comprehensive benchmark grounded in a knowledge graph (Wikidata) to evaluate the generalization of Large Multimodal Models (LMMs) across three modalities. Our benchmark consists of more than 4,000 dialogues, each averaging 10 turns, all annotated and cross-validated by human experts. The dialogues in our dataset are designed to prevent shortcut learning by incorporating various formats and misleading or irrelevant multimodal cues. We also evaluate both multimodal and unimodal models to gain insights into how they process modality inputs introduced in the conversation.", }
We introduce OmniDialog — the first trimodal comprehensive benchmark grounded in a knowledge graph (Wikidata) to evaluate the generalization of Large Multimodal Models (LMMs) across three modalities. Our benchmark consists of more than 4,000 dialogues, each averaging 10 turns, all annotated and cross-validated by human experts. The dialogues in our dataset are designed to prevent shortcut learning by incorporating various formats and misleading or irrelevant multimodal cues. We also evaluate both multimodal and unimodal models to gain insights into how they process modality inputs introduced in the conversation.
[ "Razzhigaev, Anton", "Kurkin, Maxim", "Goncharova, Elizaveta", "Abdullaeva, Irina", "Lysenko, Anastasia", "Panchenko, Alex", "er", "Kuznetsov, Andrey", "Dimitrov, Denis" ]
OmniDialog: A Multimodal Benchmark for Generalization Across Text, Visual, and Audio Modalities
genbench-1.12
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
1
https://aclanthology.org/2024.genbench-1.13.bib
https://aclanthology.org/2024.genbench-1.13/
@inproceedings{koufakou-etal-2024-towards, title = "Towards a new Benchmark for Emotion Detection in {NLP}: A Unifying Framework of Recent Corpora", author = "Koufakou, Anna and Nieves, Elijah and Peller, John", editor = "Hupkes, Dieuwke and Dankers, Verna and Batsuren, Khuyagbaatar and Kazemnejad, Amirhossein and Christodoulopoulos, Christos and Giulianelli, Mario and Cotterell, Ryan", booktitle = "Proceedings of the 2nd GenBench Workshop on Generalisation (Benchmarking) in NLP", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.genbench-1.13", pages = "196--206", abstract = "Emotion recognition in text is a complex and evolving field that has garnered considerable interest. This paper addresses the pressing need to explore and experiment with new corpora annotated with emotions. We identified several corpora presented since 2018. We restricted this study to English single-labeled data. Nevertheless, the datasets vary in source, domain, topic, emotion types, and distributions. As a basis for benchmarking, we conducted emotion detection experiments by fine-tuning a pretrained model and compared our outcomes with results from the original publications. More importantly, in our efforts to combine existing resources, we created a unified corpus from these diverse datasets and evaluated the impact of training on that corpus versus on the training set for each corpus. Our approach aims to streamline research by offering a unified platform for emotion detection to aid comparisons and benchmarking, addressing a significant gap in the current landscape. Additionally, we present a discussion of related practices and challenges. Our code and dataset information are available at https://github.com/a-koufakou/EmoDetect-Unify. We hope this will enable the NLP community to leverage this unified framework towards a new benchmark in emotion detection.", }
Emotion recognition in text is a complex and evolving field that has garnered considerable interest. This paper addresses the pressing need to explore and experiment with new corpora annotated with emotions. We identified several corpora presented since 2018. We restricted this study to English single-labeled data. Nevertheless, the datasets vary in source, domain, topic, emotion types, and distributions. As a basis for benchmarking, we conducted emotion detection experiments by fine-tuning a pretrained model and compared our outcomes with results from the original publications. More importantly, in our efforts to combine existing resources, we created a unified corpus from these diverse datasets and evaluated the impact of training on that corpus versus on the training set for each corpus. Our approach aims to streamline research by offering a unified platform for emotion detection to aid comparisons and benchmarking, addressing a significant gap in the current landscape. Additionally, we present a discussion of related practices and challenges. Our code and dataset information are available at https://github.com/a-koufakou/EmoDetect-Unify. We hope this will enable the NLP community to leverage this unified framework towards a new benchmark in emotion detection.
[ "Koufakou, Anna", "Nieves, Elijah", "Peller, John" ]
Towards a new Benchmark for Emotion Detection in NLP: A Unifying Framework of Recent Corpora
genbench-1.13
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
1
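Unifying emotion corpora requires mapping each corpus's label inventory onto one shared inventory. A sketch under assumed, purely illustrative mappings, not the paper's exact scheme:

```python
# Sketch: map heterogeneous emotion tag sets onto a shared inventory so that
# corpora can be combined. Mappings and labels are illustrative assumptions.
SHARED = {"anger", "joy", "sadness", "fear", "other"}

LABEL_MAPS = {
    "corpus_a": {"angry": "anger", "happy": "joy", "sad": "sadness"},
    "corpus_b": {"rage": "anger", "joy": "joy", "anxiety": "fear"},
}

def unify(example: dict, corpus: str) -> dict:
    mapped = LABEL_MAPS[corpus].get(example["label"], "other")
    assert mapped in SHARED
    return {"text": example["text"], "label": mapped, "source": corpus}

print(unify({"text": "I can't believe this!", "label": "rage"}, "corpus_b"))
```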
https://aclanthology.org/2024.mrl-1.1.bib
https://aclanthology.org/2024.mrl-1.1/
@inproceedings{csaki-etal-2024-sambalingo, title = "{S}amba{L}ingo: Teaching Large Language Models New Languages", author = "Csaki, Zoltan and Li, Bo and Li, Jonathan Lingjie and Xu, Qiantong and Pawakapan, Pian and Zhang, Leon and Du, Yun and Zhao, Hengyu and Hu, Changran and Thakker, Urmish", editor = {S{\"a}lev{\"a}, Jonne and Owodunni, Abraham}, booktitle = "Proceedings of the Fourth Workshop on Multilingual Representation Learning (MRL 2024)", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.mrl-1.1", pages = "1--21", abstract = "Despite the widespread availability of LLMs, there remains a substantial gap in their capabilities and availability across diverse languages. One approach to address these issues has been to take an existing pre-trained LLM and continue to train it on new languages. While prior works have experimented with language adaptation, many questions around best practices and methodology have not been covered. In this paper, we present a comprehensive investigation into the adaptation of LLMs to new languages. Our study covers the key components in this process, including vocabulary extension, direct preference optimization and the data scarcity problem for human alignment in low resource languages. We scale these experiments across 9 languages and 2 parameter scales (7B and 70B). We compare our models against Llama 2, Aya-101, XGLM, BLOOM and existing language experts, outperforming all prior published baselines. Additionally, all evaluation code and checkpoints are made public to facilitate future research.", }
Despite the widespread availability of LLMs, there remains a substantial gap in their capabilities and availability across diverse languages. One approach to address these issues has been to take an existing pre-trained LLM and continue to train it on new languages. While prior works have experimented with language adaptation, many questions around best practices and methodology have not been covered. In this paper, we present a comprehensive investigation into the adaptation of LLMs to new languages. Our study covers the key components in this process, including vocabulary extension, direct preference optimization and the data scarcity problem for human alignment in low resource languages. We scale these experiments across 9 languages and 2 parameter scales (7B and 70B). We compare our models against Llama 2, Aya-101, XGLM, BLOOM and existing language experts, outperforming all prior published baselines. Additionally, all evaluation code and checkpoints are made public to facilitate future research.
[ "Csaki, Zoltan", "Li, Bo", "Li, Jonathan Lingjie", "Xu, Qiantong", "Pawakapan, Pian", "Zhang, Leon", "Du, Yun", "Zhao, Hengyu", "Hu, Changran", "Thakker, Urmish" ]
SambaLingo: Teaching Large Language Models New Languages
mrl-1.1
Poster
2404.05829
[ "" ]
https://huggingface.co/papers/2404.05829
1
12
0
10
[ "sambanovasystems/SambaLingo-Arabic-Chat", "sambanovasystems/SambaLingo-Russian-Chat", "sambanovasystems/SambaLingo-Turkish-Chat", "sambanovasystems/SambaLingo-Hungarian-Chat", "sambanovasystems/SambaLingo-Turkish-Base", "sambanovasystems/SambaLingo-Arabic-Base", "sambanovasystems/SambaLingo-Thai-Chat", "sambanovasystems/SambaLingo-Russian-Base", "sambanovasystems/SambaLingo-Japanese-Chat", "sambanovasystems/SambaLingo-Bulgarian-Chat", "sambanovasystems/SambaLingo-Thai-Base", "sambanovasystems/SambaLingo-Slovenian-Chat", "sambanovasystems/SambaLingo-Hungarian-Base", "sambanovasystems/SambaLingo-Serbian-Chat", "sambanovasystems/SambaLingo-Serbian-Base", "sambanovasystems/SambaLingo-Slovenian-Base", "sambanovasystems/SambaLingo-Bulgarian-Base", "sambanovasystems/SambaLingo-Japanese-Base", "sambanovasystems/SambaLingo-Hungarian-Chat-70B", "sambanovasystems/SambaLingo-Thai-Chat-70B", "sambanovasystems/SambaLingo-Thai-Base-70B", "ariel-ml/SambaLingo-Hungarian-Chat-GGUF", "sambanovasystems/SambaLingo-Hungarian-Base-70B", "sambanovasystems/SambaLingo-Arabic-Chat-70B", "sambanovasystems/SambaLingo-Arabic-Base-70B", "ordis-co-ltd/sambanovasystems-SambaLingo-Thai-Chat-70B-Q4_K_M-gguf", "RichardErkhov/sambanovasystems_-_SambaLingo-Arabic-Chat-70B-gguf", "RichardErkhov/sambanovasystems_-_SambaLingo-Arabic-Base-70B-gguf", "RichardErkhov/sambanovasystems_-_SambaLingo-Arabic-Chat-gguf", "RichardErkhov/sambanovasystems_-_SambaLingo-Arabic-Base-gguf", "RichardErkhov/sambanovasystems_-_SambaLingo-Turkish-Chat-gguf", "RichardErkhov/sambanovasystems_-_SambaLingo-Hungarian-Chat-gguf", "RichardErkhov/sambanovasystems_-_SambaLingo-Slovenian-Chat-gguf", "RichardErkhov/sambanovasystems_-_SambaLingo-Russian-Chat-gguf" ]
[]
[ "ahemid/sambanovasystems-SambaLingo-Arabic-Chat", "Chegue100/mychat", "0x7o/SambaLingo-Russian-Chat", "Makaria/my_test_bot" ]
[ "sambanovasystems/SambaLingo-Arabic-Chat", "sambanovasystems/SambaLingo-Russian-Chat", "sambanovasystems/SambaLingo-Turkish-Chat", "sambanovasystems/SambaLingo-Hungarian-Chat", "sambanovasystems/SambaLingo-Turkish-Base", "sambanovasystems/SambaLingo-Arabic-Base", "sambanovasystems/SambaLingo-Thai-Chat", "sambanovasystems/SambaLingo-Russian-Base", "sambanovasystems/SambaLingo-Japanese-Chat", "sambanovasystems/SambaLingo-Bulgarian-Chat", "sambanovasystems/SambaLingo-Thai-Base", "sambanovasystems/SambaLingo-Slovenian-Chat", "sambanovasystems/SambaLingo-Hungarian-Base", "sambanovasystems/SambaLingo-Serbian-Chat", "sambanovasystems/SambaLingo-Serbian-Base", "sambanovasystems/SambaLingo-Slovenian-Base", "sambanovasystems/SambaLingo-Bulgarian-Base", "sambanovasystems/SambaLingo-Japanese-Base", "sambanovasystems/SambaLingo-Hungarian-Chat-70B", "sambanovasystems/SambaLingo-Thai-Chat-70B", "sambanovasystems/SambaLingo-Thai-Base-70B", "ariel-ml/SambaLingo-Hungarian-Chat-GGUF", "sambanovasystems/SambaLingo-Hungarian-Base-70B", "sambanovasystems/SambaLingo-Arabic-Chat-70B", "sambanovasystems/SambaLingo-Arabic-Base-70B", "ordis-co-ltd/sambanovasystems-SambaLingo-Thai-Chat-70B-Q4_K_M-gguf", "RichardErkhov/sambanovasystems_-_SambaLingo-Arabic-Chat-70B-gguf", "RichardErkhov/sambanovasystems_-_SambaLingo-Arabic-Base-70B-gguf", "RichardErkhov/sambanovasystems_-_SambaLingo-Arabic-Chat-gguf", "RichardErkhov/sambanovasystems_-_SambaLingo-Arabic-Base-gguf", "RichardErkhov/sambanovasystems_-_SambaLingo-Turkish-Chat-gguf", "RichardErkhov/sambanovasystems_-_SambaLingo-Hungarian-Chat-gguf", "RichardErkhov/sambanovasystems_-_SambaLingo-Slovenian-Chat-gguf", "RichardErkhov/sambanovasystems_-_SambaLingo-Russian-Chat-gguf" ]
[]
[ "ahemid/sambanovasystems-SambaLingo-Arabic-Chat", "Chegue100/mychat", "0x7o/SambaLingo-Russian-Chat", "Makaria/my_test_bot" ]
1
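The model IDs listed in this record are public on the Hub, so any of them can be tried directly. A minimal generation sketch with `transformers`, assuming the chat checkpoint ships a chat template and that a GPU plus `accelerate` are available for `device_map="auto"`:

```python
# Sketch: query one of the SambaLingo chat checkpoints linked above.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "sambanovasystems/SambaLingo-Hungarian-Chat"
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = tok.apply_chat_template(
    [{"role": "user", "content": "Mesélj Budapestről!"}],  # "Tell me about Budapest!"
    tokenize=False,
    add_generation_prompt=True,
)
inputs = tok(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tok.decode(output[0], skip_special_tokens=True))
```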
https://aclanthology.org/2024.mrl-1.2.bib
https://aclanthology.org/2024.mrl-1.2/
@inproceedings{mihaylov-shtedritski-2024-elegant-bridge, title = "What an Elegant Bridge: Multilingual {LLM}s are Biased Similarly in Different Languages", author = "Mihaylov, Viktor and Shtedritski, Aleksandar", editor = {S{\"a}lev{\"a}, Jonne and Owodunni, Abraham}, booktitle = "Proceedings of the Fourth Workshop on Multilingual Representation Learning (MRL 2024)", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.mrl-1.2", pages = "22--29", abstract = "This paper investigates biases of Large Language Models (LLMs) through the lens of grammatical gender. Drawing inspiration from seminal works in psycholinguistics, particularly the study of gender{'}s influence on language perception, we leverage multilingual LLMs to revisit and expand upon the foundational experiments of Boroditsky (2003). Employing LLMs as a novel method for examining psycholinguistic biases related to grammatical gender, we prompt a model to describe nouns with adjectives in various languages, focusing specifically on languages with grammatical gender. In particular, we look at adjective co-occurrences across gender and languages, and train a binary classifier to predict grammatical gender given adjectives an LLM uses to describe a noun. Surprisingly, we find that a simple classifier can not only predict noun gender above chance but also exhibit cross-language transferability. We show that while LLMs may describe words differently in different languages, they are biased similarly.", }
This paper investigates biases of Large Language Models (LLMs) through the lens of grammatical gender. Drawing inspiration from seminal works in psycholinguistics, particularly the study of gender's influence on language perception, we leverage multilingual LLMs to revisit and expand upon the foundational experiments of Boroditsky (2003). Employing LLMs as a novel method for examining psycholinguistic biases related to grammatical gender, we prompt a model to describe nouns with adjectives in various languages, focusing specifically on languages with grammatical gender. In particular, we look at adjective co-occurrences across gender and languages, and train a binary classifier to predict grammatical gender given adjectives an LLM uses to describe a noun. Surprisingly, we find that a simple classifier can not only predict noun gender above chance but also exhibit cross-language transferability. We show that while LLMs may describe words differently in different languages, they are biased similarly.
[ "Mihaylov, Viktor", "Shtedritski, Aleks", "ar" ]
What an Elegant Bridge: Multilingual LLMs are Biased Similarly in Different Languages
mrl-1.2
Poster
2407.09704
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
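The probe described in this abstract is a binary classifier from LLM-produced adjectives to grammatical gender. A minimal sketch with scikit-learn; the adjective rows are invented for illustration:

```python
# Sketch: predict grammatical gender from the adjectives a model used to
# describe each noun. Above-chance accuracy is the paper's finding; the
# training rows here are toy examples.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Each row: adjectives an LLM produced for one noun, plus the noun's gender.
adjectives = ["elegant slender graceful", "sturdy massive strong",
              "delicate pretty fragile", "rugged heavy solid"]
genders = ["f", "m", "f", "m"]

clf = make_pipeline(CountVectorizer(), LogisticRegression())
clf.fit(adjectives, genders)
print(clf.predict(["graceful fragile"]))
```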
https://aclanthology.org/2024.mrl-1.3.bib
https://aclanthology.org/2024.mrl-1.3/
@inproceedings{toraman-2024-adapting, title = "Adapting Open-Source Generative Large Language Models for Low-Resource Languages: A Case Study for {T}urkish", author = "Toraman, Cagri", editor = {S{\"a}lev{\"a}, Jonne and Owodunni, Abraham}, booktitle = "Proceedings of the Fourth Workshop on Multilingual Representation Learning (MRL 2024)", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.mrl-1.3", pages = "30--44", abstract = "Despite advancements in English-dominant generative large language models, further development is needed for low-resource languages to enhance global accessibility. The primary methods for representing these languages are monolingual and multilingual pretraining. Monolingual pretraining is expensive due to hardware requirements, and multilingual models often have uneven performance across languages. This study explores an alternative solution by adapting large language models, primarily trained on English, to low-resource languages. We assess various strategies, including continual training, instruction fine-tuning, task-specific fine-tuning, and vocabulary extension. The results show that continual training improves language comprehension, as reflected in perplexity scores, and task-specific tuning generally enhances performance of downstream tasks. However, extending the vocabulary shows no substantial benefits. Additionally, while larger models improve task performance with few-shot tuning, multilingual models perform worse than their monolingual counterparts when adapted.", }
Despite advancements in English-dominant generative large language models, further development is needed for low-resource languages to enhance global accessibility. The primary methods for representing these languages are monolingual and multilingual pretraining. Monolingual pretraining is expensive due to hardware requirements, and multilingual models often have uneven performance across languages. This study explores an alternative solution by adapting large language models, primarily trained on English, to low-resource languages. We assess various strategies, including continual training, instruction fine-tuning, task-specific fine-tuning, and vocabulary extension. The results show that continual training improves language comprehension, as reflected in perplexity scores, and task-specific tuning generally enhances performance of downstream tasks. However, extending the vocabulary shows no substantial benefits. Additionally, while larger models improve task performance with few-shot tuning, multilingual models perform worse than their monolingual counterparts when adapted.
[ "Toraman, Cagri" ]
Adapting Open-Source Generative Large Language Models for Low-Resource Languages: A Case Study for Turkish
mrl-1.3
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
1
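One of the strategies this abstract evaluates, vocabulary extension, amounts to adding target-language tokens and resizing the embedding matrix to match. A sketch using `gpt2` as a stand-in for the English-centric base model; the added tokens are illustrative, not the paper's actual vocabulary:

```python
# Sketch of vocabulary extension: add target-language tokens, then resize
# the embedding matrix. "gpt2" and the token list are stand-ins.
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

new_tokens = ["değil", "çok", "bir", "için"]  # frequent Turkish pieces
n_added = tok.add_tokens(new_tokens)
model.resize_token_embeddings(len(tok))  # new rows start randomly initialized
print(f"added {n_added} tokens; vocabulary size is now {len(tok)}")
```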
https://aclanthology.org/2024.mrl-1.4.bib
https://aclanthology.org/2024.mrl-1.4/
@inproceedings{faisal-anastasopoulos-2024-efficient, title = "An Efficient Approach for Studying Cross-Lingual Transfer in Multilingual Language Models", author = "Faisal, Fahim and Anastasopoulos, Antonios", editor = {S{\"a}lev{\"a}, Jonne and Owodunni, Abraham}, booktitle = "Proceedings of the Fourth Workshop on Multilingual Representation Learning (MRL 2024)", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.mrl-1.4", pages = "45--92", abstract = "The capacity and effectiveness of pre-trained multilingual models (MLMs) for zero-shot cross-lingual transfer is well established. However, phenomena of positive or negative transfer, and the effect of language choice still need to be fully understood, especially in the complex setting of massively multilingual LMs. We propose an \textit{efficient} method to study transfer language influence in zero-shot performance on another target language. Unlike previous work, our approach \textit{disentangles downstream tasks from language}, using dedicated adapter units. Our findings suggest that some languages do not largely affect others, while some languages, especially ones unseen during pre-training, can be extremely beneficial or detrimental for different target languages. We find that no transfer language is beneficial for all target languages. We do, curiously, observe languages previously unseen by MLMs consistently benefit from transfer from \textit{almost any} language. We additionally use our modular approach to quantify negative interference efficiently and categorize languages accordingly. Furthermore, we provide a list of promising transfer-target language configurations that consistently lead to target language performance improvements.", }
The capacity and effectiveness of pre-trained multilingual models (MLMs) for zero-shot cross-lingual transfer are well established. However, phenomena of positive or negative transfer, and the effect of language choice still need to be fully understood, especially in the complex setting of massively multilingual LMs. We propose an efficient method to study transfer language influence in zero-shot performance on another target language. Unlike previous work, our approach disentangles downstream tasks from language, using dedicated adapter units. Our findings suggest that some languages do not largely affect others, while some languages, especially ones unseen during pre-training, can be extremely beneficial or detrimental for different target languages. We find that no transfer language is beneficial for all target languages. We do, curiously, observe that languages previously unseen by MLMs consistently benefit from transfer from almost any language. We additionally use our modular approach to quantify negative interference efficiently and categorize languages accordingly. Furthermore, we provide a list of promising transfer-target language configurations that consistently lead to target language performance improvements.
[ "Faisal, Fahim", "Anastasopoulos, Antonios" ]
An Efficient Approach for Studying Cross-Lingual Transfer in Multilingual Language Models
mrl-1.4
Poster
2403.20088
[ "https://github.com/ffaisal93/neg_inf" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://aclanthology.org/2024.mrl-1.5.bib
https://aclanthology.org/2024.mrl-1.5/
@inproceedings{devine-2024-sure, title = "Are You Sure? Rank Them Again: Repeated Ranking For Better Preference Datasets", author = "Devine, Peter", editor = {S{\"a}lev{\"a}, Jonne and Owodunni, Abraham}, booktitle = "Proceedings of the Fourth Workshop on Multilingual Representation Learning (MRL 2024)", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.mrl-1.5", pages = "93--105", abstract = "Training Large Language Models (LLMs) with Reinforcement Learning from AI Feedback (RLAIF) aligns model outputs more closely with human preferences. This involves an evaluator model ranking multiple candidate responses to user prompts. However, the rankings from popular evaluator models such as GPT-4 can be inconsistent.We propose the Repeat Ranking method, in which we evaluate the same responses multiple times and train only on those responses which are consistently ranked. Using 2,714 training prompts in 62 languages, we generated responses from 7 top multilingual LLMs and had GPT-4 rank them five times each. Evaluating on MT-Bench chat benchmarks in six languages, our method outperformed the standard practice of training on all available prompts.Our work highlights the quality versus quantity trade-off in RLAIF dataset generation and offers a stackable strategy for enhancing dataset and thus model quality.", }
Training Large Language Models (LLMs) with Reinforcement Learning from AI Feedback (RLAIF) aligns model outputs more closely with human preferences. This involves an evaluator model ranking multiple candidate responses to user prompts. However, the rankings from popular evaluator models such as GPT-4 can be inconsistent. We propose the Repeat Ranking method, in which we evaluate the same responses multiple times and train only on those responses which are consistently ranked. Using 2,714 training prompts in 62 languages, we generated responses from 7 top multilingual LLMs and had GPT-4 rank them five times each. Evaluating on MT-Bench chat benchmarks in six languages, our method outperformed the standard practice of training on all available prompts. Our work highlights the quality versus quantity trade-off in RLAIF dataset generation and offers a stackable strategy for enhancing dataset and thus model quality.
[ "Devine, Peter" ]
Are You Sure? Rank Them Again: Repeated Ranking For Better Preference Datasets
mrl-1.5
Poster
2405.18952
[ "" ]
https://huggingface.co/papers/2405.18952
1
10
0
1
[ "lightblue/suzume-llama-3-8B-multilingual-orpo-borda-half", "lightblue/suzume-llama-3-8B-multilingual-orpo-borda-half-gguf", "lightblue/suzume-llama-3-8B-multilingual-orpo-borda-top25", "RichardErkhov/lightblue_-_suzume-llama-3-8B-multilingual-orpo-borda-top25-gguf", "Apel-sin/suzume-llama-3-8B-multilingual-orpo-borda-half-exl2", "lightblue/suzume-llama-3-8B-multilingual-orpo-borda-top75", "lightblue/suzume-llama-3-8B-multilingual-orpo-borda-full", "darkshapes/suzume-llama-3-8B-multilingual-orpo-borda-top25-gguf", "RichardErkhov/lightblue_-_suzume-llama-3-8B-multilingual-orpo-borda-top75-gguf", "RichardErkhov/lightblue_-_suzume-llama-3-8B-multilingual-orpo-borda-full-gguf", "RichardErkhov/lightblue_-_suzume-llama-3-8B-multilingual-orpo-borda-half-gguf" ]
[ "lightblue/mitsu", "lightblue/mitsu_tophalf_borda", "lightblue/mitsu_full_borda", "lightblue/mitsu_top75_borda", "lightblue/mitsu_top25_borda" ]
[ "featherless-ai/try-this-model", "eduagarcia/open_pt_llm_leaderboard", "Granther/try-this-model", "Darok/Featherless-Feud", "emekaboris/try-this-model", "John6666/votepurchase-crash", "SC999/NV_Nemotron" ]
[ "lightblue/suzume-llama-3-8B-multilingual-orpo-borda-half", "lightblue/suzume-llama-3-8B-multilingual-orpo-borda-half-gguf", "lightblue/suzume-llama-3-8B-multilingual-orpo-borda-top25", "RichardErkhov/lightblue_-_suzume-llama-3-8B-multilingual-orpo-borda-top25-gguf", "Apel-sin/suzume-llama-3-8B-multilingual-orpo-borda-half-exl2", "lightblue/suzume-llama-3-8B-multilingual-orpo-borda-top75", "lightblue/suzume-llama-3-8B-multilingual-orpo-borda-full", "darkshapes/suzume-llama-3-8B-multilingual-orpo-borda-top25-gguf", "RichardErkhov/lightblue_-_suzume-llama-3-8B-multilingual-orpo-borda-top75-gguf", "RichardErkhov/lightblue_-_suzume-llama-3-8B-multilingual-orpo-borda-full-gguf", "RichardErkhov/lightblue_-_suzume-llama-3-8B-multilingual-orpo-borda-half-gguf" ]
[ "lightblue/mitsu", "lightblue/mitsu_tophalf_borda", "lightblue/mitsu_full_borda", "lightblue/mitsu_top75_borda", "lightblue/mitsu_top25_borda" ]
[ "featherless-ai/try-this-model", "eduagarcia/open_pt_llm_leaderboard", "Granther/try-this-model", "Darok/Featherless-Feud", "emekaboris/try-this-model", "John6666/votepurchase-crash", "SC999/NV_Nemotron" ]
1
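The Repeat Ranking filter keeps only training items whose repeated GPT-4 rankings agree. A sketch of one simple consistency criterion, a stable top-ranked response across all K runs; the ranking data is invented:

```python
# Sketch of a Repeat Ranking-style filter: keep a prompt only if the same
# response wins every one of its K repeated rankings.
def consistent_prompts(rankings_per_prompt):
    kept = []
    for prompt, rankings in rankings_per_prompt.items():
        winners = {ranking[0] for ranking in rankings}
        if len(winners) == 1:  # identical winner across all K runs
            kept.append(prompt)
    return kept

rankings = {
    "p1": [["a", "b", "c"]] * 5,                      # stable -> kept
    "p2": [["b", "a", "c"]] + [["a", "b", "c"]] * 4,  # unstable -> dropped
}
print(consistent_prompts(rankings))  # -> ['p1']
```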
https://aclanthology.org/2024.mrl-1.6.bib
https://aclanthology.org/2024.mrl-1.6/
@inproceedings{devine-2024-tagengo, title = "Tagengo: A Multilingual Chat Dataset", author = "Devine, Peter", editor = {S{\"a}lev{\"a}, Jonne and Owodunni, Abraham}, booktitle = "Proceedings of the Fourth Workshop on Multilingual Representation Learning (MRL 2024)", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.mrl-1.6", pages = "106--113", abstract = "Open source large language models (LLMs) have shown great improvements in recent times. However, many of these models are focused solely on popular spoken languages. We present a high quality dataset of more than 70k prompt-response pairs in 74 languages which consist of human generated prompts and synthetic responses. We use this dataset to train a state-of-the-art open source English LLM to chat multilingually.We evaluate our model on MT-Bench chat benchmarks in 6 languages, finding that our multilingual model outperforms previous state-of-the-art open source LLMs across each language. We further find that training on more multilingual data is beneficial to the performance in a chosen target language (Japanese) compared to simply training on only data in that language.These results indicate the necessity of training on large amounts of high quality multilingual data to make a more accessible LLM.", }
Open source large language models (LLMs) have shown great improvements in recent times. However, many of these models are focused solely on popular spoken languages. We present a high quality dataset of more than 70k prompt-response pairs in 74 languages which consist of human generated prompts and synthetic responses. We use this dataset to train a state-of-the-art open source English LLM to chat multilingually. We evaluate our model on MT-Bench chat benchmarks in 6 languages, finding that our multilingual model outperforms previous state-of-the-art open source LLMs across each language. We further find that training on more multilingual data is beneficial to the performance in a chosen target language (Japanese) compared to simply training on only data in that language. These results indicate the necessity of training on large amounts of high quality multilingual data to make a more accessible LLM.
[ "Devine, Peter" ]
Tagengo: A Multilingual Chat Dataset
mrl-1.6
Poster
2405.12612
[ "https://github.com/Peter-Devine/multilingual_mt_bench" ]
https://huggingface.co/papers/2405.12612
1
3
0
1
[ "lightblue/suzume-llama-3-8B-multilingual", "lightblue/suzume-llama-3-8B-multilingual-gguf", "lightblue/suzume-llama-3-8B-japanese", "lightblue/suzume-llama-3-8B-japanese-gguf", "QuantFactory/suzume-llama-3-8B-japanese-GGUF", "darkshapes/suzume-llama-3-8B-multilingual-orpo-borda-top25-gguf", "RichardErkhov/lightblue_-_suzume-llama-3-8B-japanese-gguf", "RichardErkhov/lightblue_-_suzume-llama-3-8B-multilingual-gguf" ]
[ "lightblue/tagengo-gpt4" ]
[ "featherless-ai/try-this-model", "eduagarcia/open_pt_llm_leaderboard", "Granther/try-this-model", "Darok/Featherless-Feud", "emekaboris/try-this-model", "John6666/votepurchase-crash", "SC999/NV_Nemotron" ]
[ "lightblue/suzume-llama-3-8B-multilingual", "lightblue/suzume-llama-3-8B-multilingual-gguf", "lightblue/suzume-llama-3-8B-japanese", "lightblue/suzume-llama-3-8B-japanese-gguf", "QuantFactory/suzume-llama-3-8B-japanese-GGUF", "darkshapes/suzume-llama-3-8B-multilingual-orpo-borda-top25-gguf", "RichardErkhov/lightblue_-_suzume-llama-3-8B-japanese-gguf", "RichardErkhov/lightblue_-_suzume-llama-3-8B-multilingual-gguf" ]
[ "lightblue/tagengo-gpt4" ]
[ "featherless-ai/try-this-model", "eduagarcia/open_pt_llm_leaderboard", "Granther/try-this-model", "Darok/Featherless-Feud", "emekaboris/try-this-model", "John6666/votepurchase-crash", "SC999/NV_Nemotron" ]
1
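The `lightblue/tagengo-gpt4` dataset linked in this record is public on the Hub and can be inspected directly; no column names are assumed beyond what printing reveals:

```python
# Sketch: inspect the Tagengo dataset linked in this record.
from datasets import load_dataset

tagengo = load_dataset("lightblue/tagengo-gpt4", split="train")
print(tagengo)      # features and row count
print(tagengo[0])   # one prompt-response pair
```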
https://aclanthology.org/2024.mrl-1.7.bib
https://aclanthology.org/2024.mrl-1.7/
@inproceedings{chronopoulou-etal-2024-language, title = "Language and Task Arithmetic with Parameter-Efficient Layers for Zero-Shot Summarization", author = "Chronopoulou, Alexandra and Pfeiffer, Jonas and Maynez, Joshua and Wang, Xinyi and Ruder, Sebastian and Agrawal, Priyanka", editor = {S{\"a}lev{\"a}, Jonne and Owodunni, Abraham}, booktitle = "Proceedings of the Fourth Workshop on Multilingual Representation Learning (MRL 2024)", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.mrl-1.7", pages = "114--126", abstract = "Parameter-efficient fine-tuning (PEFT) using labeled task data can significantly improve the performance of large language models (LLMs) on the downstream task. However, there are 7000 languages in the world and many of these languages lack labeled data for real-world language generation tasks. In this paper, we propose to improve zero-shot cross-lingual transfer by composing expert modules trained separately on language or task data. Our method composes $\textit{language}$ and $\textit{task}$ PEFT adapters via element-wise arithmetic operations to leverage unlabeled data and English labeled data. We extend our approach to cases where labeled data from more languages is available and propose to arithmetically compose PEFT adapters trained on languages related to the target. Empirical results on summarization demonstrate that our method is a strategy that obtains consistent gains using minimal training of PEFT parameters.", }
Parameter-efficient fine-tuning (PEFT) using labeled task data can significantly improve the performance of large language models (LLMs) on the downstream task. However, there are 7000 languages in the world and many of these languages lack labeled data for real-world language generation tasks. In this paper, we propose to improve zero-shot cross-lingual transfer by composing expert modules trained separately on language or task data. Our method composes language and task PEFT adapters via element-wise arithmetic operations to leverage unlabeled data and English labeled data. We extend our approach to cases where labeled data from more languages is available and propose to arithmetically compose PEFT adapters trained on languages related to the target. Empirical results on summarization demonstrate that our method is a strategy that obtains consistent gains using minimal training of PEFT parameters.
[ "Chronopoulou, Alex", "ra", "Pfeiffer, Jonas", "Maynez, Joshua", "Wang, Xinyi", "Ruder, Sebastian", "Agrawal, Priyanka" ]
Language and Task Arithmetic with Parameter-Efficient Layers for Zero-Shot Summarization
mrl-1.7
Poster
2311.09344
[ "" ]
https://huggingface.co/papers/2311.09344
0
1
0
6
[]
[]
[]
[]
[]
[]
1
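The composition this abstract describes is element-wise arithmetic over language and task adapter parameters. A toy sketch over LoRA-shaped tensors; the single parameter pair, the shapes, and the plain-addition rule are illustrative assumptions, since real PEFT adapters are state dicts spanning many modules:

```python
# Toy sketch of language/task adapter composition via element-wise arithmetic.
import torch

lang_adapter = {"lora_A": torch.randn(8, 512), "lora_B": torch.randn(512, 8)}
task_adapter = {"lora_A": torch.randn(8, 512), "lora_B": torch.randn(512, 8)}

# Compose an adapter for (target language, task) from a language expert
# trained on unlabeled data and a task expert trained on English labels.
composed = {name: lang_adapter[name] + task_adapter[name] for name in lang_adapter}
print({name: tuple(t.shape) for name, t in composed.items()})
```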
https://aclanthology.org/2024.mrl-1.8.bib
https://aclanthology.org/2024.mrl-1.8/
@inproceedings{zhang-etal-2024-modeling-bilingual, title = "Modeling Bilingual Sentence Processing: Evaluating {RNN} and Transformer Architectures for Cross-Language Structural Priming", author = "Zhang, Demi and Xiao, Bushi and Gao, Chao and Youm, Sangpil and Dorr, Bonnie J", editor = {S{\"a}lev{\"a}, Jonne and Owodunni, Abraham}, booktitle = "Proceedings of the Fourth Workshop on Multilingual Representation Learning (MRL 2024)", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.mrl-1.8", pages = "127--136", abstract = "This study evaluates the performance of Recurrent Neural Network (RNN) and Transformer models in replicating cross-language structural priming, a key indicator of abstract grammatical representations in human language processing. Focusing on Chinese-English priming, which involves two typologically distinct languages, we examine how these models handle the robust phenomenon of structural priming, where exposure to a particular sentence structure increases the likelihood of selecting a similar structure subsequently. Our findings indicate that transformers outperform RNNs in generating primed sentence structures, with accuracy rates that exceed 25.84{\%} to 33. 33{\%}. This challenges the conventional belief that human sentence processing primarily involves recurrent and immediate processing and suggests a role for cue-based retrieval mechanisms. This work contributes to our understanding of how computational models may reflect human cognitive processes across diverse language families.", }
This study evaluates the performance of Recurrent Neural Network (RNN) and Transformer models in replicating cross-language structural priming, a key indicator of abstract grammatical representations in human language processing. Focusing on Chinese-English priming, which involves two typologically distinct languages, we examine how these models handle the robust phenomenon of structural priming, where exposure to a particular sentence structure increases the likelihood of selecting a similar structure subsequently. Our findings indicate that transformers outperform RNNs in generating primed sentence structures, with accuracy rates that exceed 25.84% to 33.33%. This challenges the conventional belief that human sentence processing primarily involves recurrent and immediate processing and suggests a role for cue-based retrieval mechanisms. This work contributes to our understanding of how computational models may reflect human cognitive processes across diverse language families.
[ "Zhang, Demi", "Xiao, Bushi", "Gao, Chao", "Youm, Sangpil", "Dorr, Bonnie J" ]
Modeling Bilingual Sentence Processing: Evaluating RNN and Transformer Architectures for Cross-Language Structural Priming
mrl-1.8
Poster
2405.09508
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
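The mrl-1.8 record above measures whether models prefer a primed sentence structure. A hedged sketch of the usual way such a preference is probed, scoring double-object vs. prepositional-object continuations after a prime; GPT-2 and the example sentences are stand-ins, not the paper's models or stimuli:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

@torch.no_grad()
def summed_logprob(text: str) -> float:
    ids = tok(text, return_tensors="pt").input_ids
    loss = model(ids, labels=ids).loss          # mean NLL over predicted tokens
    return -loss.item() * (ids.size(1) - 1)     # back to a summed log-probability

prime = "The teacher gave the student a book."          # double-object prime
do_target = "The woman sent the man a letter."          # double-object target
po_target = "The woman sent a letter to the man."       # prepositional-object target
for target in (do_target, po_target):
    print(target, "->", summed_logprob(prime + " " + target))
# Structural priming predicts the DO target gains probability after a DO prime.
```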
https://aclanthology.org/2024.mrl-1.9.bib
https://aclanthology.org/2024.mrl-1.9/
@inproceedings{vandenbulcke-etal-2024-recipe, title = "Recipe for Zero-shot {POS} Tagging: Is It Useful in Realistic Scenarios?", author = "Vandenbulcke, Zeno and Vermeire, Lukas and de Lhoneux, Miryam", editor = {S{\"a}lev{\"a}, Jonne and Owodunni, Abraham}, booktitle = "Proceedings of the Fourth Workshop on Multilingual Representation Learning (MRL 2024)", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.mrl-1.9", pages = "137--147", abstract = "POS tagging plays a fundamental role in numerous applications. While POS taggers are highly accurate in well-resourced settings, they lag behind in cases of limited or missing training data. This paper focuses on POS tagging for languages with limited data. We seek to identify favourable characteristics of datasets for training POS tagging models using related languages without specific training on the target language. This is a zero-shot approach. We investigate both mono- and multilingual models trained on related languages and compare their accuracies. Additionally, we compare these results with models trained directly on the target language itself. We do this for three target low-resource languages, for each of which we select several support languages. Our research highlights the importance of accurate dataset selection for developing effective zero-shot POS tagging models. Particularly, a strong linguistic relationship and high-quality datasets ensure optimal results. For extremely low-resource languages, zero-shot training proves to be a viable option.", }
POS tagging plays a fundamental role in numerous applications. While POS taggers are highly accurate in well-resourced settings, they lag behind in cases of limited or missing training data. This paper focuses on POS tagging for languages with limited data. We seek to identify favourable characteristics of datasets for training POS tagging models using related languages without specific training on the target language. This is a zero-shot approach. We investigate both mono- and multilingual models trained on related languages and compare their accuracies. Additionally, we compare these results with models trained directly on the target language itself. We do this for three target low-resource languages, for each of which we select several support languages. Our research highlights the importance of accurate dataset selection for developing effective zero-shot POS tagging models. Particularly, a strong linguistic relationship and high-quality datasets ensure optimal results. For extremely low-resource languages, zero-shot training proves to be a viable option.
[ "V", "enbulcke, Zeno", "Vermeire, Lukas", "de Lhoneux, Miryam" ]
Recipe for Zero-shot POS Tagging: Is It Useful in Realistic Scenarios?
mrl-1.9
Poster
2410.10576
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
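The zero-shot recipe in the mrl-1.9 record above boils down to fine-tuning a tagger on related support languages and applying it unchanged to target-language text. A minimal sketch; the checkpoint name is a hypothetical placeholder, not a model from the paper:

```python
from transformers import pipeline

# Hypothetical checkpoint: an mBERT tagger fine-tuned only on UD data from
# related support languages; no target-language data is used at any point.
tagger = pipeline(
    "token-classification",
    model="my-org/mbert-pos-support-languages",  # placeholder, not a real model ID
    aggregation_strategy="simple",
)
# Apply unchanged to a target-language sentence the model never saw in training.
print(tagger("Ein Satz in der Zielsprache."))
```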
https://aclanthology.org/2024.mrl-1.10.bib
https://aclanthology.org/2024.mrl-1.10/
@inproceedings{sanchez-etal-2024-gender, title = "Gender-specific Machine Translation with Large Language Models", author = "S{\'a}nchez, Eduardo and Andrews, Pierre and Stenetorp, Pontus and Artetxe, Mikel and Costa-juss{\`a}, Marta R.", editor = {S{\"a}lev{\"a}, Jonne and Owodunni, Abraham}, booktitle = "Proceedings of the Fourth Workshop on Multilingual Representation Learning (MRL 2024)", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.mrl-1.10", pages = "148--158", abstract = "While machine translation (MT) systems have seen significant improvements, it is still common for translations to reflect societal biases, such as gender bias. Decoder-only language models (LLMs) have demonstrated potential in MT, albeit with performance slightly lagging behind traditional encoder-decoder neural machine translation (NMT) systems. However, LLMs offer a unique advantage: the ability to control the properties of the output through prompting. In this study, we leverage this flexibility to explore Llama{'}s capability to produce gender-specific translations. Our results indicate that Llama can generate gender-specific translations with translation quality and gender bias comparable to NLLB, a state-of-the-art multilingual NMT system.", }
While machine translation (MT) systems have seen significant improvements, it is still common for translations to reflect societal biases, such as gender bias. Decoder-only language models (LLMs) have demonstrated potential in MT, albeit with performance slightly lagging behind traditional encoder-decoder neural machine translation (NMT) systems. However, LLMs offer a unique advantage: the ability to control the properties of the output through prompting. In this study, we leverage this flexibility to explore Llama{'}s capability to produce gender-specific translations. Our results indicate that Llama can generate gender-specific translations with translation quality and gender bias comparable to NLLB, a state-of-the-art multilingual NMT system.
[ "S{\\'a}nchez, Eduardo", "Andrews, Pierre", "Stenetorp, Pontus", "Artetxe, Mikel", "Costa-juss{\\`a}, Marta R." ]
Gender-specific Machine Translation with Large Language Models
mrl-1.10
Poster
2309.03175
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
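The mrl-1.10 record above relies on prompting to control the gender realized in a translation. A minimal sketch of such a prompt; the exact wording is an assumption, not the paper's template:

```python
def gender_specific_prompt(src: str, tgt_lang: str, gender: str) -> str:
    return (
        f"Translate the following sentence into {tgt_lang}. "
        f"The first-person speaker is {gender}; use grammatical gender "
        f"that agrees with the speaker.\n\nSentence: {src}\nTranslation:"
    )

prompt = gender_specific_prompt("I am tired.", "Spanish", "female")
print(prompt)
# A capable instruction-tuned LLM would be expected to answer "Estoy cansada."
```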
https://aclanthology.org/2024.mrl-1.11.bib
https://aclanthology.org/2024.mrl-1.11/
@inproceedings{xiao-etal-2024-jina, title = "{J}ina-{C}ol{BERT}-v2: A General-Purpose Multilingual Late Interaction Retriever", author = "Xiao, Han and Wang, Bo and Jha, Rohan", editor = {S{\"a}lev{\"a}, Jonne and Owodunni, Abraham}, booktitle = "Proceedings of the Fourth Workshop on Multilingual Representation Learning (MRL 2024)", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.mrl-1.11", pages = "159--166", abstract = "Multi-vector dense models, such as ColBERT, have proven highly effective in information retrieval. ColBERT{'}s late interaction scoring approximates the joint query-document attention seen in cross-encoders while maintaining inference efficiency closer to traditional dense retrieval models, thanks to its bi-encoder architecture and recent optimizations in indexing and search. In this paper, we introduce a novel architecture and a training framework to support long context window and multilingual retrieval. Leveraging Matryoshka Representation Loss, we further demonstrate that reducing the embedding dimensionality from 128 to 64 has an insignificant impact on the model{'}s retrieval performance and cuts storage requirements by up to 50{\%}. Our new model, Jina-ColBERT-v2, demonstrates strong performance across a range of English and multilingual retrieval tasks.", }
Multi-vector dense models, such as ColBERT, have proven highly effective in information retrieval. ColBERT{'}s late interaction scoring approximates the joint query-document attention seen in cross-encoders while maintaining inference efficiency closer to traditional dense retrieval models, thanks to its bi-encoder architecture and recent optimizations in indexing and search. In this paper, we introduce a novel architecture and a training framework to support long context window and multilingual retrieval. Leveraging Matryoshka Representation Loss, we further demonstrate that reducing the embedding dimensionality from 128 to 64 has an insignificant impact on the model{'}s retrieval performance and cuts storage requirements by up to 50{\%}. Our new model, Jina-ColBERT-v2, demonstrates strong performance across a range of English and multilingual retrieval tasks.
[ "Xiao, Han", "Wang, Bo", "Jha, Rohan" ]
Jina-ColBERT-v2: A General-Purpose Multilingual Late Interaction Retriever
mrl-1.11
Poster
2408.16672
[ "" ]
https://huggingface.co/papers/2408.16672
5
6
1
6
[ "jinaai/jina-colbert-v2" ]
[]
[ "open-webui/open-webui", "jscheah/open-webui", "JdrCydsek/open-webui-3", "cuio/open-webui", "sun-i/open-webui", "coolmanx/open-webui", "alosongngu/besen", "TonyWang2233/open-webui", "Dr-Newtons/ai", "rclon/web", "cky2024/open-webui", "arcticaurora/ai", "bchgod/open-webui", "tang-x/open-webui", "tokenfactory/ai-station", "mapleleaff/AI", "iouoracle/open-webui", "xhxhdvduenxvxheje/open-CHAT", "KITraining/open-webui-0-3-23", "houin/open-webui", "jackyes/open-w", "forrany/open-webui", "maxwell3530/open-webui", "baothi/open-webui", "jnlduck/jnl-open", "mahdibenammar/Digixify-alpha", "cuio/u", "xshiyu/open-webui", "Syzuki1113/open-webui", "Rfym21/OpenWebUI", "beea/open-webui", "xmjer1/open-webui", "mollys12138/open-webui", "J1ang/open-webui", "sgpsonnet/open-webui", "SingHA/open-webui-usa", "tuankietckcit/TK-AI", "xiaowang213/open-webui", "lennygon/open-webui", "Names315/open", "etgpao/open-webui", "yyw-syq/open-webui", "iatbsky/open-webui", "RockyLeo/open-webui", "dong56872/open-webui", "Manjuc21/open-webui", "ftaeaw/czh_openwenui", "cuio/hi", "lenaya/open-webui", "zhzabcd/aiold", "yxmnjxzx/open-webui", "gccnb/open-webui", "ty4032/open-webui", "zhouddddd/open-webui", "dreamofinfinity1/open-webui", "hunterJr/AI-WebUi", "c1a200/open-webui", "yangtb2024/open-webui", "JasonChen/open-webui", "snailyp/open-webui", "Potivv7/open-webui", "KITraining/open-webui-0-3-35", "zhzabcd/ai-studio", "tuankietckcit/SEO-GenZ", "qiaohao/open-web", "PaperCraneCr/openwebui", "NlyNe/open-webui", "Turgo-hf/open-webui", "xnwh/ow", "Baphes/opengpt", "IcedCola-OvO/open-webui", "LingLingrj/open-webui1", "gaoqilan/open-webui", "Surbao/open-webui", "kioab123/open-webui", "tadapho/open-webui", "SmallKid/open-webui", "drag0n1/open-webui", "shulinbao/open-webui", "xibalami/open-webui", "raannakasturi/open-webui", "shulinbao/horseui-lite" ]
[ "jinaai/jina-colbert-v2" ]
[]
[ "open-webui/open-webui", "jscheah/open-webui", "JdrCydsek/open-webui-3", "cuio/open-webui", "sun-i/open-webui", "coolmanx/open-webui", "alosongngu/besen", "TonyWang2233/open-webui", "Dr-Newtons/ai", "rclon/web", "cky2024/open-webui", "arcticaurora/ai", "bchgod/open-webui", "tang-x/open-webui", "tokenfactory/ai-station", "mapleleaff/AI", "iouoracle/open-webui", "xhxhdvduenxvxheje/open-CHAT", "KITraining/open-webui-0-3-23", "houin/open-webui", "jackyes/open-w", "forrany/open-webui", "maxwell3530/open-webui", "baothi/open-webui", "jnlduck/jnl-open", "mahdibenammar/Digixify-alpha", "cuio/u", "xshiyu/open-webui", "Syzuki1113/open-webui", "Rfym21/OpenWebUI", "beea/open-webui", "xmjer1/open-webui", "mollys12138/open-webui", "J1ang/open-webui", "sgpsonnet/open-webui", "SingHA/open-webui-usa", "tuankietckcit/TK-AI", "xiaowang213/open-webui", "lennygon/open-webui", "Names315/open", "etgpao/open-webui", "yyw-syq/open-webui", "iatbsky/open-webui", "RockyLeo/open-webui", "dong56872/open-webui", "Manjuc21/open-webui", "ftaeaw/czh_openwenui", "cuio/hi", "lenaya/open-webui", "zhzabcd/aiold", "yxmnjxzx/open-webui", "gccnb/open-webui", "ty4032/open-webui", "zhouddddd/open-webui", "dreamofinfinity1/open-webui", "hunterJr/AI-WebUi", "c1a200/open-webui", "yangtb2024/open-webui", "JasonChen/open-webui", "snailyp/open-webui", "Potivv7/open-webui", "KITraining/open-webui-0-3-35", "zhzabcd/ai-studio", "tuankietckcit/SEO-GenZ", "qiaohao/open-web", "PaperCraneCr/openwebui", "NlyNe/open-webui", "Turgo-hf/open-webui", "xnwh/ow", "Baphes/opengpt", "IcedCola-OvO/open-webui", "LingLingrj/open-webui1", "gaoqilan/open-webui", "Surbao/open-webui", "kioab123/open-webui", "tadapho/open-webui", "SmallKid/open-webui", "drag0n1/open-webui", "shulinbao/open-webui", "xibalami/open-webui", "raannakasturi/open-webui", "shulinbao/horseui-lite" ]
1
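The mrl-1.11 record above builds on ColBERT-style late interaction. A self-contained sketch of the MaxSim scoring rule it refers to, with 64-dimensional random vectors standing in for Matryoshka-truncated embeddings; nothing here uses Jina-ColBERT-v2's trained weights:

```python
import torch
import torch.nn.functional as F

def maxsim_score(q: torch.Tensor, d: torch.Tensor) -> torch.Tensor:
    """q: (num_query_tokens, dim), d: (num_doc_tokens, dim) -> scalar score."""
    sim = F.normalize(q, dim=-1) @ F.normalize(d, dim=-1).T
    return sim.max(dim=1).values.sum()   # best doc token per query token, summed

q = torch.randn(5, 64)     # 64 dims, as in the reduced-dimensionality setting
d = torch.randn(120, 64)   # Matryoshka truncation = keeping the first 64 dims
print(maxsim_score(q, d))
```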
https://aclanthology.org/2024.mrl-1.12.bib
https://aclanthology.org/2024.mrl-1.12/
@inproceedings{yadav-etal-2024-cross, title = "Cross-Lingual Named Entity Recognition for Low-Resource Languages: A {H}indi-{N}epali Case Study Using Multilingual {BERT} Models", author = "Yadav, Dipendra and Suravee, Sumaiya and Strau{\ss}, Tobias and Yordanova, Kristina", editor = {S{\"a}lev{\"a}, Jonne and Owodunni, Abraham}, booktitle = "Proceedings of the Fourth Workshop on Multilingual Representation Learning (MRL 2024)", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.mrl-1.12", pages = "167--174", abstract = "This study investigates the potential of cross-lingual transfer learning for Named Entity Recognition (NER) between Hindi and Nepali, two languages that, despite their linguistic similarities, face significant disparities in available resources. By leveraging multilingual BERT models, including RemBERT, BERT Multilingual, MuRIL, and DistilBERT Multilingual, the research examines whether pre-training them on a resource-rich language like Hindi can enhance NER performance in a resource-constrained language like Nepali and vice versa. The study conducts experiments in both monolingual and cross-lingual settings to evaluate the models{'} effectiveness in transferring linguistic knowledge between the two languages. The findings reveal that while RemBERT and MuRIL perform well in monolingual contexts{---}RemBERT excelling in Hindi and MuRIL in Nepali{---}BERT Multilingual performs comparatively best in cross-lingual scenarios, in generalizing features across the languages. Although DistilBERT Multilingual demonstrates slightly lower performance in cross-lingual tasks, it balances efficiency with competitive results. The study underscores the importance of model selection based on linguistic and resource-specific contexts, highlighting that general-purpose models like BERT Multilingual are particularly well-suited for cross-lingual applications.", }
This study investigates the potential of cross-lingual transfer learning for Named Entity Recognition (NER) between Hindi and Nepali, two languages that, despite their linguistic similarities, face significant disparities in available resources. By leveraging multilingual BERT models, including RemBERT, BERT Multilingual, MuRIL, and DistilBERT Multilingual, the research examines whether pre-training them on a resource-rich language like Hindi can enhance NER performance in a resource-constrained language like Nepali and vice versa. The study conducts experiments in both monolingual and cross-lingual settings to evaluate the models{'} effectiveness in transferring linguistic knowledge between the two languages. The findings reveal that while RemBERT and MuRIL perform well in monolingual contexts{---}RemBERT excelling in Hindi and MuRIL in Nepali{---}BERT Multilingual performs best in cross-lingual scenarios, generalizing features across the two languages. Although DistilBERT Multilingual demonstrates slightly lower performance in cross-lingual tasks, it balances efficiency with competitive results. The study underscores the importance of model selection based on linguistic and resource-specific contexts, highlighting that general-purpose models like BERT Multilingual are particularly well-suited for cross-lingual applications.
[ "Yadav, Dipendra", "Suravee, Sumaiya", "Strau{\\ss}, Tobias", "Yordanova, Kristina" ]
Cross-Lingual Named Entity Recognition for Low-Resource Languages: A Hindi-Nepali Case Study Using Multilingual BERT Models
mrl-1.12
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
1
https://aclanthology.org/2024.mrl-1.13.bib
https://aclanthology.org/2024.mrl-1.13/
@inproceedings{gupta-etal-2024-parameter, title = "Parameter-efficient Adaptation of Multilingual Multimodal Models for Low-resource {ASR}", author = "Gupta, Abhishek and Parulekar, Amruta and Chattopadhyay, Sameep and Jyothi, Preethi", editor = {S{\"a}lev{\"a}, Jonne and Owodunni, Abraham}, booktitle = "Proceedings of the Fourth Workshop on Multilingual Representation Learning (MRL 2024)", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.mrl-1.13", pages = "175--185", abstract = "Automatic speech recognition (ASR) for low-resource languages remains a challenge due to the scarcity of labeled training data. Parameter-efficient fine-tuning and text-only adaptation are two popular methods that have been used to address such low-resource settings. In this work, we investigate how these techniques can be effectively combined using a multilingual multimodal model like SeamlessM4T. Multimodal models are able to leverage unlabeled text via text-only adaptation with further parameter-efficient ASR fine-tuning, thus boosting ASR performance. We also show cross-lingual transfer from a high-resource language, achieving up to a relative 17{\%} WER reduction over baseline in an extremely low-resource setting without any labeled speech.", }
Automatic speech recognition (ASR) for low-resource languages remains a challenge due to the scarcity of labeled training data. Parameter-efficient fine-tuning and text-only adaptation are two popular methods that have been used to address such low-resource settings. In this work, we investigate how these techniques can be effectively combined using a multilingual multimodal model like SeamlessM4T. Multimodal models are able to leverage unlabeled text via text-only adaptation with further parameter-efficient ASR fine-tuning, thus boosting ASR performance. We also show cross-lingual transfer from a high-resource language, achieving up to a relative 17{\%} WER reduction over baseline in an extremely low-resource setting without any labeled speech.
[ "Gupta, Abhishek", "Parulekar, Amruta", "Chattopadhyay, Sameep", "Jyothi, Preethi" ]
Parameter-efficient Adaptation of Multilingual Multimodal Models for Low-resource ASR
mrl-1.13
Poster
2410.13445
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
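The mrl-1.13 record above combines a multilingual multimodal model with parameter-efficient fine-tuning. A hedged sketch of attaching LoRA adapters to the Hugging Face port of SeamlessM4T via `peft`; the target module names, rank, and checkpoint are assumptions, not the paper's configuration:

```python
from peft import LoraConfig, get_peft_model
from transformers import SeamlessM4TForSpeechToText

# Assumed base checkpoint and attention-projection module names; the paper's
# exact adapter placement inside SeamlessM4T may differ.
model = SeamlessM4TForSpeechToText.from_pretrained("facebook/hf-seamless-m4t-medium")
lora = LoraConfig(r=8, lora_alpha=16, target_modules=["q_proj", "v_proj"])
model = get_peft_model(model, lora)
model.print_trainable_parameters()  # only the low-rank adapter matrices train
```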
https://aclanthology.org/2024.mrl-1.14.bib
https://aclanthology.org/2024.mrl-1.14/
@inproceedings{eschrich-liu-2024-towards, title = "Towards Cross-Linguistic Semantic Grounding using Dictionary Graph Analysis", author = "Eschrich, Ethan and Liu, Zoey", editor = {S{\"a}lev{\"a}, Jonne and Owodunni, Abraham}, booktitle = "Proceedings of the Fourth Workshop on Multilingual Representation Learning (MRL 2024)", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.mrl-1.14", pages = "186--188", abstract = "Previous work has explored the structure of dictionaries as directed graphs, with arcs between words when one word is used in the definition of another. We analyze the efficacy of these methodologies and explore the cross-linguistic patterns of the strongly connected components of multiple monolingual dictionaries. We find that the number of sources in the condensation graph of a directed dictionary graph is roughly stable across multiple different languages, and present future research directions.", }
Previous work has explored the structure of dictionaries as directed graphs, with arcs between words when one word is used in the definition of another. We analyze the efficacy of these methodologies and explore the cross-linguistic patterns of the strongly connected components of multiple monolingual dictionaries. We find that the number of sources in the condensation graph of a directed dictionary graph is roughly stable across multiple different languages, and present future research directions.
[ "Eschrich, Ethan", "Liu, Zoey" ]
Towards Cross-Linguistic Semantic Grounding using Dictionary Graph Analysis
mrl-1.14
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
1
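The measurement in the mrl-1.14 record above is directly computable with networkx: build the definition graph, condense its strongly connected components, and count source components (in-degree zero). A toy sketch with an invented five-word dictionary:

```python
import networkx as nx

# Toy dictionary: headword -> words appearing in its definition.
dictionary = {
    "cat":    ["small", "animal"],
    "animal": ["living", "thing"],
    "living": ["thing"],
    "thing":  ["thing"],          # circular definitions are common in practice
    "small":  ["thing"],
}

G = nx.DiGraph((head, word) for head, defn in dictionary.items() for word in defn)
C = nx.condensation(G)            # DAG over strongly connected components
sources = [n for n in C.nodes if C.in_degree(n) == 0]
print(len(sources), [C.nodes[n]["members"] for n in sources])
```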
https://aclanthology.org/2024.mrl-1.15.bib
https://aclanthology.org/2024.mrl-1.15/
@inproceedings{nikolich-etal-2024-vikhr, title = "Vikhr: Constructing a State-of-the-art Bilingual Open-Source Instruction-Following Large Language Model for {R}ussian", author = "Nikolich, Aleksandr and Korolev, Konstantin and Bratchikov, Sergei and Kiselev, Igor and Shelmanov, Artem", editor = {S{\"a}lev{\"a}, Jonne and Owodunni, Abraham}, booktitle = "Proceedings of the Fourth Workshop on Multilingual Representation Learning (MRL 2024)", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.mrl-1.15", pages = "189--199", abstract = "There has been a surge in the development of various Large Language Models (LLMs). However, text generation for languages other than English often faces significant challenges, including poor generation quality and reduced computational performance due to the disproportionate representation of tokens in the model{'}s vocabulary. In this work, we address these issues by developing a pipeline for adaptation of English-oriented pre-trained models to other languages and constructing efficient bilingual LLMs. Using this pipeline, we construct Vikhr, a state-of-the-art bilingual open-source instruction-following LLM designed specifically for the Russian language. {``}Vikhr{''} refers to the name of the Mistral LLM series and means a {``}strong gust of wind.{''} Unlike previous Russian-language models that typically rely on LoRA adapters on top of English-oriented models, sacrificing performance for lower training costs, Vikhr features an adapted tokenizer vocabulary and undergoes the continued pre-training and instruction tuning of all weights. This not only enhances the model{'}s performance but also significantly improves its computational and contextual efficiency. The remarkable performance of Vikhr across various Russian-language benchmarks can also be attributed to our efforts in expanding instruction datasets and corpora for continued pre-training. Vikhr not only sets the new state of the art among open-source LLMs for Russian but even outperforms some proprietary closed-source models on certain benchmarks. The model weights, instruction sets, and code are publicly available.", }
There has been a surge in the development of various Large Language Models (LLMs). However, text generation for languages other than English often faces significant challenges, including poor generation quality and reduced computational performance due to the disproportionate representation of tokens in the model{'}s vocabulary. In this work, we address these issues by developing a pipeline for adaptation of English-oriented pre-trained models to other languages and constructing efficient bilingual LLMs. Using this pipeline, we construct Vikhr, a state-of-the-art bilingual open-source instruction-following LLM designed specifically for the Russian language. {``}Vikhr{''} refers to the name of the Mistral LLM series and means a {``}strong gust of wind.{''} Unlike previous Russian-language models that typically rely on LoRA adapters on top of English-oriented models, sacrificing performance for lower training costs, Vikhr features an adapted tokenizer vocabulary and undergoes the continued pre-training and instruction tuning of all weights. This not only enhances the model{'}s performance but also significantly improves its computational and contextual efficiency. The remarkable performance of Vikhr across various Russian-language benchmarks can also be attributed to our efforts in expanding instruction datasets and corpora for continued pre-training. Vikhr not only sets the new state of the art among open-source LLMs for Russian but even outperforms some proprietary closed-source models on certain benchmarks. The model weights, instruction sets, and code are publicly available.
[ "Nikolich, Aleks", "r", "Korolev, Konstantin", "Bratchikov, Sergei", "Kiselev, Igor", "Shelmanov, Artem" ]
Vikhr: Constructing a State-of-the-art Bilingual Open-Source Instruction-Following Large Language Model for Russian
mrl-1.15
Poster
2405.13929
[ "" ]
https://huggingface.co/papers/2405.13929
3
53
4
3
[ "Vikhrmodels/Vikhr-Nemo-12B-Instruct-R-21-09-24", "Vikhrmodels/Vikhr-7B-instruct_0.4", "Vikhrmodels/Vikhr-Llama-3.2-1B-Instruct", "Vikhrmodels/Vikhr-7B-instruct_0.2", "Vikhrmodels/Vikhr-Gemma-2B-instruct", "Vikhrmodels/Vikhr-Qwen-2.5-0.5b-Instruct", "Vikhrmodels/it-5.3-fp16-32k", "Vikhrmodels/Vikhr-2-VL-2b-Instruct-experimental", "Vikhrmodels/Vikhr-Llama-3.2-1B-instruct-GGUF", "Vikhrmodels/it-5.2-fp16-cp", "Vikhrmodels/Vikhr-Qwen-2.5-0.5B-instruct-GGUF", "Vikhrmodels/Vikhr-Llama-3.2-1B-Instruct-abliterated", "QuantFactory/Vikhr-Llama-3.2-1B-Instruct-GGUF", "QuantFactory/Vikhr-Qwen-2.5-0.5b-Instruct-GGUF", "Vikhrmodels/it-5.2-fp16-cp-GGUF", "QuantFactory/Vikhr-Gemma-2B-instruct-GGUF", "RichardErkhov/Vikhrmodels_-_Vikhr-Gemma-2B-instruct-gguf", "RichardErkhov/Vikhrmodels_-_Vikhr-Llama-3.2-1B-Instruct-gguf", "mav23/Vikhr-Qwen-2.5-0.5b-Instruct-GGUF", "mav23/Vikhr-Llama-3.2-1B-Instruct-GGUF", "RichardErkhov/Vikhrmodels_-_Vikhr-Qwen-2.5-0.5b-Instruct-gguf", "RichardErkhov/Vikhrmodels_-_Vikhr-Llama-3.2-1B-Instruct-abliterated-gguf", "RichardErkhov/Vikhrmodels_-_Vikhr-Llama-3.2-1B-Instruct-4bits", "RichardErkhov/Vikhrmodels_-_Vikhr-Llama-3.2-1B-Instruct-8bits", "RichardErkhov/Vikhrmodels_-_Vikhr-Gemma-2B-instruct-4bits", "RichardErkhov/Vikhrmodels_-_Vikhr-Gemma-2B-instruct-8bits" ]
[ "Vikhrmodels/GrandMaster-PRO-MAX", "Vikhrmodels/russain_math", "Vikhrmodels/russian_physics" ]
[ "featherless-ai/try-this-model", "imperialwool/llama-cpp-api", "Verdefff/Vikhrmodels-Vikhr-7B-instruct_0.4", "Emroi/azure_", "Emroi/test", "SC999/NV_Nemotron" ]
[ "Vikhrmodels/Vikhr-Nemo-12B-Instruct-R-21-09-24", "Vikhrmodels/Vikhr-7B-instruct_0.4", "Vikhrmodels/Vikhr-Llama-3.2-1B-Instruct", "Vikhrmodels/Vikhr-7B-instruct_0.2", "Vikhrmodels/Vikhr-Gemma-2B-instruct", "Vikhrmodels/Vikhr-Qwen-2.5-0.5b-Instruct", "Vikhrmodels/it-5.3-fp16-32k", "Vikhrmodels/Vikhr-2-VL-2b-Instruct-experimental", "Vikhrmodels/Vikhr-Llama-3.2-1B-instruct-GGUF", "Vikhrmodels/it-5.2-fp16-cp", "Vikhrmodels/Vikhr-Qwen-2.5-0.5B-instruct-GGUF", "Vikhrmodels/Vikhr-Llama-3.2-1B-Instruct-abliterated", "QuantFactory/Vikhr-Llama-3.2-1B-Instruct-GGUF", "QuantFactory/Vikhr-Qwen-2.5-0.5b-Instruct-GGUF", "Vikhrmodels/it-5.2-fp16-cp-GGUF", "QuantFactory/Vikhr-Gemma-2B-instruct-GGUF", "RichardErkhov/Vikhrmodels_-_Vikhr-Gemma-2B-instruct-gguf", "RichardErkhov/Vikhrmodels_-_Vikhr-Llama-3.2-1B-Instruct-gguf", "mav23/Vikhr-Qwen-2.5-0.5b-Instruct-GGUF", "mav23/Vikhr-Llama-3.2-1B-Instruct-GGUF", "RichardErkhov/Vikhrmodels_-_Vikhr-Qwen-2.5-0.5b-Instruct-gguf", "RichardErkhov/Vikhrmodels_-_Vikhr-Llama-3.2-1B-Instruct-abliterated-gguf", "RichardErkhov/Vikhrmodels_-_Vikhr-Llama-3.2-1B-Instruct-4bits", "RichardErkhov/Vikhrmodels_-_Vikhr-Llama-3.2-1B-Instruct-8bits", "RichardErkhov/Vikhrmodels_-_Vikhr-Gemma-2B-instruct-4bits", "RichardErkhov/Vikhrmodels_-_Vikhr-Gemma-2B-instruct-8bits" ]
[ "Vikhrmodels/GrandMaster-PRO-MAX", "Vikhrmodels/russain_math", "Vikhrmodels/russian_physics" ]
[ "featherless-ai/try-this-model", "imperialwool/llama-cpp-api", "Verdefff/Vikhrmodels-Vikhr-7B-instruct_0.4", "Emroi/azure_", "Emroi/test", "SC999/NV_Nemotron" ]
1
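The mrl-1.15 record above highlights tokenizer-vocabulary adaptation followed by continued pre-training of all weights. A minimal sketch of the vocabulary step, assuming a Mistral-family donor model and a placeholder Russian corpus; real pipelines also transfer embeddings for tokens shared between the old and new vocabularies, which is omitted here:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

base = "mistralai/Mistral-7B-v0.1"    # assumed English-oriented donor model
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

corpus = ["Пример русского текста."]  # placeholder iterator over target-language text
new_tokenizer = tokenizer.train_new_from_iterator(corpus, vocab_size=32_000)

# Simplification: this leaves new vocabulary rows randomly initialised;
# they are then trained during continued pre-training on target-language data.
model.resize_token_embeddings(len(new_tokenizer))
```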
https://aclanthology.org/2024.mrl-1.16.bib
https://aclanthology.org/2024.mrl-1.16/
@inproceedings{jung-etal-2024-mitigating, title = "Mitigating the Linguistic Gap with Phonemic Representations for Robust Cross-lingual Transfer", author = "Jung, Haeji and Oh, Changdae and Kang, Jooeon and Sohn, Jimin and Song, Kyungwoo and Kim, Jinkyu and Mortensen, David R", editor = {S{\"a}lev{\"a}, Jonne and Owodunni, Abraham}, booktitle = "Proceedings of the Fourth Workshop on Multilingual Representation Learning (MRL 2024)", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.mrl-1.16", pages = "200--211", abstract = "Approaches to improving multilingual language understanding often struggle with significant performance gaps between high-resource and low-resource languages. While there are efforts to align the languages in a single latent space to mitigate such gaps, how different input-level representations influence such gaps has not been investigated, particularly with phonemic inputs. We hypothesize that the performance gaps are affected by representation discrepancies between those languages, and revisit the use of phonemic representations as a means to mitigate these discrepancies. To demonstrate the effectiveness of phonemic representations, we present experiments on three representative cross-lingual tasks on 12 languages in total. The results show that phonemic representations exhibit higher similarities between languages compared to orthographic representations, and it consistently outperforms grapheme-based baseline model on languages that are relatively low-resourced. We present quantitative evidence from three cross-lingual tasks that demonstrate the effectiveness of phonemic representations, and it is further justified by a theoretical analysis of the cross-lingual performance gap.", }
Approaches to improving multilingual language understanding often struggle with significant performance gaps between high-resource and low-resource languages. While there are efforts to align the languages in a single latent space to mitigate such gaps, how different input-level representations influence such gaps has not been investigated, particularly with phonemic inputs. We hypothesize that the performance gaps are affected by representation discrepancies between those languages, and revisit the use of phonemic representations as a means to mitigate these discrepancies. To demonstrate the effectiveness of phonemic representations, we present experiments on three representative cross-lingual tasks on 12 languages in total. The results show that phonemic representations exhibit higher similarities between languages compared to orthographic representations, and that they consistently outperform the grapheme-based baseline model on relatively low-resource languages. We present quantitative evidence from three cross-lingual tasks that demonstrates the effectiveness of phonemic representations, and this is further justified by a theoretical analysis of the cross-lingual performance gap.
[ "Jung, Haeji", "Oh, Changdae", "Kang, Jooeon", "Sohn, Jimin", "Song, Kyungwoo", "Kim, Jinkyu", "Mortensen, David R" ]
Mitigating the Linguistic Gap with Phonemic Representations for Robust Cross-lingual Transfer
mrl-1.16
Poster
2402.14279
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
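The mrl-1.16 record above swaps orthographic input for phonemic input. A minimal sketch using the `epitran` grapheme-to-phoneme library; whether the paper uses epitran specifically is an assumption here, the point is only the input-level transformation:

```python
import epitran

epi = epitran.Epitran("hin-Deva")        # Hindi written in Devanagari
ipa = epi.transliterate("नमस्ते दुनिया")    # returns an IPA string
print(ipa)
# The IPA sequence, not the raw orthography, is what gets tokenised and fed
# to the multilingual encoder, shrinking surface differences across scripts.
```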
https://aclanthology.org/2024.mrl-1.17.bib
https://aclanthology.org/2024.mrl-1.17/
@inproceedings{fekete-etal-2024-leveraging, title = "Leveraging Adapters for Improved Cross-lingual Transfer for Low-Resource Creole {MT}", author = "Fekete, Marcell Richard and Lavrinovics, Ernests and Robinson, Nathaniel Romney and Lent, Heather and Dabre, Raj and Bjerva, Johannes", editor = {S{\"a}lev{\"a}, Jonne and Owodunni, Abraham}, booktitle = "Proceedings of the Fourth Workshop on Multilingual Representation Learning (MRL 2024)", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.mrl-1.17", pages = "212--215", abstract = "{---}{---}{---}{--} EXTENDED ABSTRACT INTRODUCTION {---}{---}{---}{--}Creole languages are low-resource languages, often genetically related to languages like English, French, and Portuguese, due to their linguistic histories with colonialism (DeGraff, 2003). As such, Creoles stand to benefit greatly from both data-efficient methods and transfer-learning from high-resource languages. At the same time, it has been observed by Lent et al. (2022b) that machine translation (MT) is a highly desired language technology by speakers of many Creoles. To this end, recent works have contributed new datasets, allowing for the development and evaluation of MT systems for Creoles (Robinson et al., 2024; Lent et al. 2024). In this work, we explore the use of the limited monolingual and parallel data for Creoles using parameter-efficient adaptation methods. Specifically, we compare the performance of different adapter architectures over the set of available benchmarks. We find adapters a promising approach for Creoles because they are parameter-efficient and have been shown to leverage transfer learning between related languages (Faisal and Anastasopoulos, 2022). While we perform experiments across multiple Creoles, we present only on Haitian Creole in this extended abstract. For future work, we aim to explore the potentials for leveraging other high-resourced languages for parameter-efficient transfer learning.", }
EXTENDED ABSTRACT INTRODUCTION: Creole languages are low-resource languages, often genetically related to languages like English, French, and Portuguese, due to their linguistic histories with colonialism (DeGraff, 2003). As such, Creoles stand to benefit greatly from both data-efficient methods and transfer learning from high-resource languages. At the same time, it has been observed by Lent et al. (2022b) that machine translation (MT) is a highly desired language technology by speakers of many Creoles. To this end, recent works have contributed new datasets, allowing for the development and evaluation of MT systems for Creoles (Robinson et al., 2024; Lent et al., 2024). In this work, we explore the use of the limited monolingual and parallel data for Creoles using parameter-efficient adaptation methods. Specifically, we compare the performance of different adapter architectures over the set of available benchmarks. We find adapters a promising approach for Creoles because they are parameter-efficient and have been shown to leverage transfer learning between related languages (Faisal and Anastasopoulos, 2022). While we perform experiments across multiple Creoles, we present results only for Haitian Creole in this extended abstract. For future work, we aim to explore the potential of leveraging other high-resource languages for parameter-efficient transfer learning.
[ "Fekete, Marcell Richard", "Lavrinovics, Ernests", "Robinson, Nathaniel Romney", "Lent, Heather", "Dabre, Raj", "Bjerva, Johannes" ]
Leveraging Adapters for Improved Cross-lingual Transfer for Low-Resource Creole MT
mrl-1.17
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
1
https://aclanthology.org/2024.mrl-1.18.bib
https://aclanthology.org/2024.mrl-1.18/
@inproceedings{agrawal-etal-2024-evaluating, title = "Evaluating Multilingual Long-Context Models for Retrieval and Reasoning", author = "Agrawal, Ameeta and Dang, Andy and Bagheri Nezhad, Sina and Pokharel, Rhitabrat and Scheinberg, Russell", editor = {S{\"a}lev{\"a}, Jonne and Owodunni, Abraham}, booktitle = "Proceedings of the Fourth Workshop on Multilingual Representation Learning (MRL 2024)", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.mrl-1.18", pages = "216--231", abstract = "Recent large language models (LLMs) demonstrate impressive capabilities in handling long contexts, some exhibiting near-perfect recall on synthetic retrieval tasks. However, these evaluations have mainly focused on English text and involved a single target sentence within lengthy contexts. Our work investigates how LLM performance generalizes to multilingual settings with multiple hidden target sentences. We create a new dataset {--} mLongRR {--} to comprehensively evaluate several multilingual long-context LLMs on retrieval and reasoning tasks across five languages: English, Vietnamese, Indonesian, Swahili, and Somali. These languages share the Latin script but belong to distinct language families and resource levels. Our analysis reveals a significant performance gap between languages. The best-performing models such as Gemini-1.5 and GPT-4o, achieve around 96{\%} accuracy in English to around 36{\%} in Somali with a single target sentence. However, this accuracy drops to 40{\%} in English and 0{\%} in Somali when dealing with three target sentences. Our findings highlight the challenges long-context LLMs face when processing longer contexts, an increase in the number of target sentences, or languages of lower resource levels.", }
Recent large language models (LLMs) demonstrate impressive capabilities in handling long contexts, some exhibiting near-perfect recall on synthetic retrieval tasks. However, these evaluations have mainly focused on English text and involved a single target sentence within lengthy contexts. Our work investigates how LLM performance generalizes to multilingual settings with multiple hidden target sentences. We create a new dataset {--} mLongRR {--} to comprehensively evaluate several multilingual long-context LLMs on retrieval and reasoning tasks across five languages: English, Vietnamese, Indonesian, Swahili, and Somali. These languages share the Latin script but belong to distinct language families and resource levels. Our analysis reveals a significant performance gap between languages. The best-performing models, such as Gemini-1.5 and GPT-4o, achieve around 96{\%} accuracy in English but only around 36{\%} in Somali with a single target sentence. However, this accuracy drops to 40{\%} in English and 0{\%} in Somali when dealing with three target sentences. Our findings highlight the challenges long-context LLMs face when processing longer contexts, an increase in the number of target sentences, or languages of lower resource levels.
[ "Agrawal, Ameeta", "Dang, Andy", "Bagheri Nezhad, Sina", "Pokharel, Rhitabrat", "Scheinberg, Russell" ]
Evaluating Multilingual Long-Context Models for Retrieval and Reasoning
mrl-1.18
Poster
2409.18006
[ "https://github.com/portnlp/mlongrr" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
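The evaluation in the mrl-1.18 record above extends needle-in-a-haystack retrieval to several hidden target sentences. A minimal sketch of such a harness; the filler text, needles, and string-match scoring are illustrative, not mLongRR's actual materials:

```python
import random

def build_haystack(filler: str, needles: list[str], n_chunks: int = 200) -> str:
    chunks = [filler] * n_chunks
    for needle in needles:
        chunks.insert(random.randrange(len(chunks) + 1), needle)  # hide each needle
    return " ".join(chunks)

needles = {                                    # needle sentence -> fact to recall
    "The magic number for Somali is 7231.": "7231",
    "The magic number for Swahili is 4418.": "4418",
}
prompt = (build_haystack("The grass is green. The sky is blue.", list(needles))
          + "\n\nWhat are the magic numbers mentioned above?")
# answer = long_context_llm(prompt)            # model call not shown
# recall = sum(v in answer for v in needles.values()) / len(needles)
```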
https://aclanthology.org/2024.mrl-1.19.bib
https://aclanthology.org/2024.mrl-1.19/
@inproceedings{brack-etal-2024-community, title = "Community {OSCAR}: A Community Effort for Multilingual Web Data", author = "Brack, Manuel and Ostendorff, Malte and Ortiz Suarez, Pedro and Saiz, Jos{\'e} Javier and Castilla, I{\~n}aki Lacunza and Palomar-Giner, Jorge and Shvets, Alexander and Schramowski, Patrick and Rehm, Georg and Villegas, Marta and Kersting, Kristian", editor = {S{\"a}lev{\"a}, Jonne and Owodunni, Abraham}, booktitle = "Proceedings of the Fourth Workshop on Multilingual Representation Learning (MRL 2024)", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.mrl-1.19", pages = "232--235", abstract = "The development of large language models (LLMs) relies heavily on extensive, high-quality datasets. Publicly available datasets focus predominantly on English, leaving other language communities behind. To address this issue, we introduce Community OSCAR, a multilingual dataset initiative designed to address the gap between English and non-English data availability. Through a collective effort, Community OSCAR covers over 150 languages with 45 billion documents, totaling over 345 TiB of data. Initial results indicate that Community OSCAR provides valuable raw data for training LLMs and enhancing the performance of multilingual models. This work aims to contribute to the ongoing advancements in multilingual NLP and to support a more inclusive AI ecosystem by making high-quality, multilingual data more accessible to those working with low-resource languages.", }
The development of large language models (LLMs) relies heavily on extensive, high-quality datasets. Publicly available datasets focus predominantly on English, leaving other language communities behind. To address this issue, we introduce Community OSCAR, a multilingual dataset initiative designed to address the gap between English and non-English data availability. Through a collective effort, Community OSCAR covers over 150 languages with 45 billion documents, totaling over 345 TiB of data. Initial results indicate that Community OSCAR provides valuable raw data for training LLMs and enhancing the performance of multilingual models. This work aims to contribute to the ongoing advancements in multilingual NLP and to support a more inclusive AI ecosystem by making high-quality, multilingual data more accessible to those working with low-resource languages.
[ "Brack, Manuel", "Ostendorff, Malte", "Ortiz Suarez, Pedro", "Saiz, Jos{\\'e} Javier", "Castilla, I{\\~n}aki Lacunza", "Palomar-Giner, Jorge", "Shvets, Alex", "er", "Schramowski, Patrick", "Rehm, Georg", "Villegas, Marta", "Kersting, Kristian" ]
Community OSCAR: A Community Effort for Multilingual Web Data
mrl-1.19
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
1
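Given its 345 TiB scale, the corpus in the mrl-1.19 record above is only practical to consume in streaming mode. A hedged sketch; the hub ID, config name, and field name are assumptions to verify against the actual dataset page:

```python
from datasets import load_dataset

ds = load_dataset(
    "oscar-corpus/community-oscar",   # assumed hub ID
    "sw",                             # assumed per-language config (Swahili)
    split="train",
    streaming=True,                   # at 345 TiB, never materialise locally
)
for doc in ds.take(3):
    print(doc["text"][:80])           # "text" field name is an assumption too
```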
https://aclanthology.org/2024.mrl-1.20.bib
https://aclanthology.org/2024.mrl-1.20/
@inproceedings{skianis-etal-2024-leveraging, title = "Leveraging {LLM}s for Translating and Classifying Mental Health Data", author = {Skianis, Konstantinos and Do{\u{g}}ru{\"o}z, A. Seza and Pavlopoulos, John}, editor = {S{\"a}lev{\"a}, Jonne and Owodunni, Abraham}, booktitle = "Proceedings of the Fourth Workshop on Multilingual Representation Learning (MRL 2024)", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.mrl-1.20", pages = "236--241", abstract = "Large language models (LLMs) are increasingly used in medical fields. In mental health support, the early identification of linguistic markers associated with mental health conditions can provide valuable support to mental health professionals, and reduce long waiting times for patients. Despite the benefits of LLMs for mental health support, there is limited research on their application in mental health systems for languages other than English. Our study addresses this gap by focusing on the detection of depression severity in Greek through user-generated posts which are automatically translated from English. Our results show that GPT3.5-turbo is not very successful in identifying the severity of depression in English, and it has a varying performance in Greek as well. Our study underscores the necessity for further research, especially in languages with fewer resources. Also, careful implementation is necessary to ensure that LLMs are used effectively in mental health platforms, and human supervision remains crucial to avoid misdiagnosis.", }
Large language models (LLMs) are increasingly used in medical fields. In mental health support, the early identification of linguistic markers associated with mental health conditions can provide valuable support to mental health professionals, and reduce long waiting times for patients. Despite the benefits of LLMs for mental health support, there is limited research on their application in mental health systems for languages other than English. Our study addresses this gap by focusing on the detection of depression severity in Greek through user-generated posts which are automatically translated from English. Our results show that GPT3.5-turbo is not very successful in identifying the severity of depression in English, and it has a varying performance in Greek as well. Our study underscores the necessity for further research, especially in languages with fewer resources. Also, careful implementation is necessary to ensure that LLMs are used effectively in mental health platforms, and human supervision remains crucial to avoid misdiagnosis.
[ "Skianis, Konstantinos", "Do{\\u{g}}ru{\\\"o}z, A. Seza", "Pavlopoulos, John" ]
Leveraging LLMs for Translating and Classifying Mental Health Data
mrl-1.20
Poster
2410.12985
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://aclanthology.org/2024.mrl-1.21.bib
https://aclanthology.org/2024.mrl-1.21/
@inproceedings{acikgoz-etal-2024-bridging, title = "Bridging the Bosphorus: Advancing {T}urkish Large Language Models through Strategies for Low-Resource Language Adaptation and Benchmarking", author = "Acikgoz, Emre Can and Erdogan, Mete and Yuret, Deniz", editor = {S{\"a}lev{\"a}, Jonne and Owodunni, Abraham}, booktitle = "Proceedings of the Fourth Workshop on Multilingual Representation Learning (MRL 2024)", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.mrl-1.21", pages = "242--268", abstract = "Large Language Models (LLMs) are becoming crucial across various fields, emphasizing the urgency for high-quality models in underrepresented languages. This study explores the unique challenges faced by low-resource languages, such as data scarcity, model selection, evaluation, and computational limitations, with a special focus on Turkish. We conduct an in-depth analysis to evaluate the impact of training strategies, model choices, and data availability on the performance of LLMs designed for underrepresented languages. Our approach includes two methodologies: (i) adapting existing LLMs originally pretrained in English to understand Turkish, and (ii) developing a model from the ground up using Turkish pretraining data, both supplemented with supervised fine-tuning on a novel Turkish instruction-tuning dataset aimed at enhancing reasoning capabilities. The relative performance of these methods is evaluated through the creation of a new leaderboard for Turkish LLMs, featuring benchmarks that assess different reasoning and knowledge skills. Furthermore, we conducted experiments on data and model scaling, both during pretraining and fine-tuning, simultaneously emphasizing the capacity for knowledge transfer across languages and addressing the challenges of catastrophic forgetting encountered during fine-tuning on a different language. Our goal is to offer a detailed guide for advancing the LLM framework in low-resource linguistic contexts, thereby making natural language processing (NLP) benefits more globally accessible.", }
Large Language Models (LLMs) are becoming crucial across various fields, emphasizing the urgency for high-quality models in underrepresented languages. This study explores the unique challenges faced by low-resource languages, such as data scarcity, model selection, evaluation, and computational limitations, with a special focus on Turkish. We conduct an in-depth analysis to evaluate the impact of training strategies, model choices, and data availability on the performance of LLMs designed for underrepresented languages. Our approach includes two methodologies: (i) adapting existing LLMs originally pretrained in English to understand Turkish, and (ii) developing a model from the ground up using Turkish pretraining data, both supplemented with supervised fine-tuning on a novel Turkish instruction-tuning dataset aimed at enhancing reasoning capabilities. The relative performance of these methods is evaluated through the creation of a new leaderboard for Turkish LLMs, featuring benchmarks that assess different reasoning and knowledge skills. Furthermore, we conducted experiments on data and model scaling, both during pretraining and fine-tuning, simultaneously emphasizing the capacity for knowledge transfer across languages and addressing the challenges of catastrophic forgetting encountered during fine-tuning on a different language. Our goal is to offer a detailed guide for advancing the LLM framework in low-resource linguistic contexts, thereby making natural language processing (NLP) benefits more globally accessible.
[ "Acikgoz, Emre Can", "Erdogan, Mete", "Yuret, Deniz" ]
Bridging the Bosphorus: Advancing Turkish Large Language Models through Strategies for Low-Resource Language Adaptation and Benchmarking
mrl-1.21
Poster
2405.04685
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://aclanthology.org/2024.mrl-1.22.bib
https://aclanthology.org/2024.mrl-1.22/
@inproceedings{zeng-etal-2024-unsupervised, title = "Unsupervised Text Representation Learning via Instruction-Tuning for Zero-Shot Dense Retrieval", author = "Zeng, Qiuhai and Qiu, Zimeng and Hwang, Dae Yon and He, Xin and Campbell, William M.", editor = {S{\"a}lev{\"a}, Jonne and Owodunni, Abraham}, booktitle = "Proceedings of the Fourth Workshop on Multilingual Representation Learning (MRL 2024)", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.mrl-1.22", pages = "269--279", abstract = "Dense retrieval systems are commonly used for information retrieval (IR). They rely on learning text representations through an encoder and usually require supervised modeling via labelled data which can be costly to obtain or simply unavailable. In this study, we introduce a novel unsupervised text representation learning technique via instruction-tuning the pre-trained encoder-decoder large language model (LLM) under the dual-encoder retrieval framework. We demonstrate on multiple languages that the corpus representation can be augmented by the representations of relevant synthetic queries generated by the instruct-tuned LLM founded on the Rao-Blackwell theorem. Furthermore, we effectively align the query and corpus text representation with self-instruct tuning. We evaluate our proposed method under low-resource settings on three English, two German and one Portuguese retrieval datasets measuring NDCG@10, MRR@100, Recall@100. We significantly improve the average zero-shot retrieval performance on all metrics, increasing out-of-box FLAN-T5 model variations by [4.73{\%}, 6.15{\%}] in absolute NDCG@10 and exceeding four supervised dense retrievers.", }
Dense retrieval systems are commonly used for information retrieval (IR). They rely on learning text representations through an encoder and usually require supervised modeling via labelled data which can be costly to obtain or simply unavailable. In this study, we introduce a novel unsupervised text representation learning technique via instruction-tuning the pre-trained encoder-decoder large language model (LLM) under the dual-encoder retrieval framework. We demonstrate on multiple languages that the corpus representation can be augmented by the representations of relevant synthetic queries generated by the instruction-tuned LLM, founded on the Rao-Blackwell theorem. Furthermore, we effectively align the query and corpus text representation with self-instruct tuning. We evaluate our proposed method under low-resource settings on three English, two German and one Portuguese retrieval datasets, measuring NDCG@10, MRR@100, and Recall@100. We significantly improve the average zero-shot retrieval performance on all metrics, increasing out-of-the-box FLAN-T5 model variations by [4.73{\%}, 6.15{\%}] in absolute NDCG@10 and exceeding four supervised dense retrievers.
[ "Zeng, Qiuhai", "Qiu, Zimeng", "Hwang, Dae Yon", "He, Xin", "Campbell, William M." ]
Unsupervised Text Representation Learning via Instruction-Tuning for Zero-Shot Dense Retrieval
mrl-1.22
Poster
2409.16497
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
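The mrl-1.22 record above augments each corpus representation with representations of synthetic queries generated for that document, a Rao-Blackwell-style conditional average. A minimal sketch, assuming a plain mean and placeholder 768-dimensional embeddings in place of real encoder outputs:

```python
import torch

def augmented_doc_embedding(doc_emb: torch.Tensor,
                            synth_query_embs: list[torch.Tensor]) -> torch.Tensor:
    """Average the document embedding with its synthetic-query embeddings."""
    return torch.stack([doc_emb, *synth_query_embs]).mean(dim=0)

doc_emb = torch.randn(768)                     # placeholder encoder output
synth = [torch.randn(768) for _ in range(4)]   # e.g. 4 LLM-generated queries
print(augmented_doc_embedding(doc_emb, synth).shape)
```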
https://aclanthology.org/2024.mrl-1.23.bib
https://aclanthology.org/2024.mrl-1.23/
@inproceedings{yang-etal-2024-language-bias, title = "Language Bias in Multilingual Information Retrieval: The Nature of the Beast and Mitigation Methods", author = "Yang, Jinrui and Jiang, Fan and Baldwin, Timothy", editor = {S{\"a}lev{\"a}, Jonne and Owodunni, Abraham}, booktitle = "Proceedings of the Fourth Workshop on Multilingual Representation Learning (MRL 2024)", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.mrl-1.23", pages = "280--292", abstract = "Language fairness in multilingual information retrieval (MLIR) systems is crucial for ensuring equitable access to information across diverse languages. This paper sheds light on the issue, based on the assumption that queries in different languages, but with identical semantics, should yield equivalent ranking lists when retrieving on the same multilingual documents. We evaluate the degree of fairness using both traditional retrieval methods, and a DPR neural ranker based on mBERT and XLM-R. Additionally, we introduce {`}LaKDA{'}, a novel loss designed to mitigate language biases in neural MLIR approaches. Our analysis exposes intrinsic language biases in current MLIR technologies, with notable disparities across the retrieval methods, and the effectiveness of LaKDA in enhancing language fairness.", }
Language fairness in multilingual information retrieval (MLIR) systems is crucial for ensuring equitable access to information across diverse languages. This paper sheds light on the issue, based on the assumption that queries in different languages, but with identical semantics, should yield equivalent ranking lists when retrieving on the same multilingual documents. We evaluate the degree of fairness using both traditional retrieval methods, and a DPR neural ranker based on mBERT and XLM-R. Additionally, we introduce {`}LaKDA{'}, a novel loss designed to mitigate language biases in neural MLIR approaches. Our analysis exposes intrinsic language biases in current MLIR technologies, with notable disparities across the retrieval methods, and the effectiveness of LaKDA in enhancing language fairness.
[ "Yang, Jinrui", "Jiang, Fan", "Baldwin, Timothy" ]
Language Bias in Multilingual Information Retrieval: The Nature of the Beast and Mitigation Methods
mrl-1.23
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
1
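The abstract in the mrl-1.23 record above names the LaKDA loss without giving its form, so the following is explicitly not the paper's formula: just a generic sketch of the underlying idea, penalizing divergence between the score distributions that semantically identical queries in two languages induce over the same documents:

```python
import torch
import torch.nn.functional as F

def language_debias_loss(scores_l1: torch.Tensor,
                         scores_l2: torch.Tensor) -> torch.Tensor:
    """scores_l*: (batch, num_docs) retrieval scores for parallel queries."""
    p = F.log_softmax(scores_l1, dim=-1)
    q = F.log_softmax(scores_l2, dim=-1)
    # Symmetric KL between the two induced distributions over documents:
    # zero iff both query languages rank the documents identically.
    return 0.5 * (F.kl_div(p, q, log_target=True, reduction="batchmean")
                  + F.kl_div(q, p, log_target=True, reduction="batchmean"))

print(language_debias_loss(torch.randn(2, 10), torch.randn(2, 10)))
```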
https://aclanthology.org/2024.mrl-1.24.bib
https://aclanthology.org/2024.mrl-1.24/
@inproceedings{wu-etal-2024-representational-isomorphism, title = "Representational Isomorphism and Alignment of Multilingual Large Language Models", author = "Wu, Di and Lei, Yibin and Yates, Andrew and Monz, Christof", editor = {S{\"a}lev{\"a}, Jonne and Owodunni, Abraham}, booktitle = "Proceedings of the Fourth Workshop on Multilingual Representation Learning (MRL 2024)", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.mrl-1.24", pages = "293--297", abstract = "In this extended abstract, we investigate the capability of Large Language Models (LLMs) to represent texts in multilingual contexts. Our findings reveal that sentence representations derived from LLMs exhibit a high degree of isomorphism across languages. This existing isomorphism facilitates representational alignments in few-shot settings. Specifically, by applying a contrastive objective at the representation level with only a small number (e.g., 100) of translation pairs, we significantly improve models{'} performance on Semantic Textual Similarity (STS) tasks across languages.", }
In this extended abstract, we investigate the capability of Large Language Models (LLMs) to represent texts in multilingual contexts. Our findings reveal that sentence representations derived from LLMs exhibit a high degree of isomorphism across languages. This existing isomorphism facilitates representational alignments in few-shot settings. Specifically, by applying a contrastive objective at the representation level with only a small number (e.g., 100) of translation pairs, we significantly improve models{'} performance on Semantic Textual Similarity (STS) tasks across languages.
[ "Wu, Di", "Lei, Yibin", "Yates, Andrew", "Monz, Christof" ]
Representational Isomorphism and Alignment of Multilingual Large Language Models
mrl-1.24
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
1
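The alignment step in the mrl-1.24 record above applies a contrastive objective over a small set of translation pairs. A minimal sketch, assuming a standard InfoNCE loss with in-batch negatives and an assumed temperature:

```python
import torch
import torch.nn.functional as F

def contrastive_alignment_loss(src: torch.Tensor, tgt: torch.Tensor,
                               temperature: float = 0.05) -> torch.Tensor:
    """src, tgt: (batch, dim) sentence representations of translation pairs."""
    src, tgt = F.normalize(src, dim=-1), F.normalize(tgt, dim=-1)
    logits = src @ tgt.T / temperature   # (batch, batch) cosine similarities
    labels = torch.arange(src.size(0))   # sentence i matches translation i
    return F.cross_entropy(logits, labels)

# Per the abstract, representations of only ~100 translation pairs suffice.
print(contrastive_alignment_loss(torch.randn(100, 768), torch.randn(100, 768)))
```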
https://aclanthology.org/2024.mrl-1.25.bib
https://aclanthology.org/2024.mrl-1.25/
@inproceedings{bassi-etal-2024-generalization, title = "Generalization Measures for Zero-Shot Cross-Lingual Transfer", author = "Bassi, Saksham and Ataman, Duygu and Cho, Kyunghyun", editor = {S{\"a}lev{\"a}, Jonne and Owodunni, Abraham}, booktitle = "Proceedings of the Fourth Workshop on Multilingual Representation Learning (MRL 2024)", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.mrl-1.25", pages = "298--309", abstract = "Building robust and reliable machine learning systems requires models with the capacity to generalize their knowledge to interpret unseen inputs with different characteristics. Traditional language model evaluation tasks lack informative metrics about model generalization, and their applicability in new settings is often measured using task and language-specific downstream performance, which is lacking in many languages and tasks. To address this gap, we explore a set of efficient and reliable measures that could aid in computing more information related to the generalization capability of language models, particularly in cross-lingual zero-shot settings. Our central hypothesis is that the sharpness of a model{'}s loss landscape, i.e., the representation of loss values over its weight space, can indicate its generalization potential, with a flatter landscape suggesting better generalization. We propose a novel and stable algorithm to reliably compute the sharpness of a model optimum, and demonstrate its correlation with successful cross-lingual transfer.", }
Building robust and reliable machine learning systems requires models with the capacity to generalize their knowledge to interpret unseen inputs with different characteristics. Traditional language model evaluation tasks lack informative metrics about model generalization, and their applicability in new settings is often measured using task and language-specific downstream performance, which is lacking in many languages and tasks. To address this gap, we explore a set of efficient and reliable measures that could aid in computing more information related to the generalization capability of language models, particularly in cross-lingual zero-shot settings. Our central hypothesis is that the sharpness of a model{'}s loss landscape, i.e., the representation of loss values over its weight space, can indicate its generalization potential, with a flatter landscape suggesting better generalization. We propose a novel and stable algorithm to reliably compute the sharpness of a model optimum, and demonstrate its correlation with successful cross-lingual transfer.
[ "Bassi, Saksham", "Ataman, Duygu", "Cho, Kyunghyun" ]
Generalization Measures for Zero-Shot Cross-Lingual Transfer
mrl-1.25
Poster
2404.15928
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
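The flatness-versus-sharpness intuition above can be probed with a generic Monte-Carlo estimate: perturb the weights slightly and measure how much the loss rises. This is a hedged sketch of that idea under random Gaussian noise, not the paper's specific algorithm.

```python
# Generic sharpness probe: average loss increase under small random weight
# perturbations. Larger values suggest a sharper optimum.
import copy
import torch

def sharpness_estimate(model, loss_fn, batch, sigma=0.01, n_samples=10):
    """Mean loss increase under Gaussian weight noise of scale sigma."""
    base_loss = loss_fn(model, batch).item()
    increases = []
    for _ in range(n_samples):
        noisy = copy.deepcopy(model)
        with torch.no_grad():
            for p in noisy.parameters():
                p.add_(sigma * torch.randn_like(p))
        increases.append(loss_fn(noisy, batch).item() - base_loss)
    return sum(increases) / len(increases)

# Toy usage with a linear model and squared-error loss:
model = torch.nn.Linear(4, 1)
batch = (torch.randn(8, 4), torch.randn(8, 1))
mse = lambda m, b: torch.nn.functional.mse_loss(m(b[0]), b[1])
print(sharpness_estimate(model, mse, batch))
```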
https://aclanthology.org/2024.mrl-1.26.bib
https://aclanthology.org/2024.mrl-1.26/
@inproceedings{mehrparvar-pezzelle-2024-detecting, title = "Detecting and Translating Language Ambiguity with Multilingual {LLM}s", author = "Mehrparvar, Behrang and Pezzelle, Sandro", editor = {S{\"a}lev{\"a}, Jonne and Owodunni, Abraham}, booktitle = "Proceedings of the Fourth Workshop on Multilingual Representation Learning (MRL 2024)", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.mrl-1.26", pages = "310--323", abstract = "Most languages could be ambiguous, which means the same conveyed text or speech, results in different actions by different readers or listeners. In this project, we propose a method to detect the ambiguity of a sentence using translation by multilingual LLMs. In particular, we hypothesize that a good machine translator should preserve the ambiguity of sentences in all target languages. Therefore, we investigate whether ambiguity is encoded in the hidden representation of a translation model or, instead, if only a single meaning is encoded. In our experiments, we have been able to predict ambiguity of sentences with high accuracy using machine translation without direct use of semantics and only based on the reconstruction error of a function that maps the forward and backward translation hidden representations to each other. The potential applications of the proposed approach span i) detecting ambiguous sentences, ii) fine-tuning existing multilingual LLMs to preserve ambiguous information, and iii) developing AI systems that can generate ambiguity-free languages when needed.", }
Most languages can be ambiguous, meaning that the same text or speech results in different actions by different readers or listeners. In this project, we propose a method to detect the ambiguity of a sentence using translation by multilingual LLMs. In particular, we hypothesize that a good machine translator should preserve the ambiguity of sentences in all target languages. Therefore, we investigate whether ambiguity is encoded in the hidden representation of a translation model or, instead, if only a single meaning is encoded. In our experiments, we were able to predict the ambiguity of sentences with high accuracy using machine translation, without direct use of semantics, based only on the reconstruction error of a function that maps the forward and backward translation hidden representations to each other. The potential applications of the proposed approach span i) detecting ambiguous sentences, ii) fine-tuning existing multilingual LLMs to preserve ambiguous information, and iii) developing AI systems that can generate ambiguity-free languages when needed.
[ "Mehrparvar, Behrang", "Pezzelle, S", "ro" ]
Detecting and Translating Language Ambiguity with Multilingual LLMs
mrl-1.26
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
1
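The reconstruction-error criterion above can be sketched with a simple linear map between the two sets of hidden representations. The random representations, least-squares mapping, and percentile threshold below are illustrative assumptions standing in for the paper's learned function and decision rule.

```python
# Sketch: learn a map between forward- and backward-translation hidden
# states; flag sentences whose reconstruction error is unusually high.
import numpy as np

rng = np.random.default_rng(0)
H_fwd = rng.normal(size=(500, 256))   # hidden states, source -> target pass
H_bwd = rng.normal(size=(500, 256))   # hidden states, target -> source pass

# Least-squares linear map W with H_bwd ~= H_fwd @ W.
W, *_ = np.linalg.lstsq(H_fwd, H_bwd, rcond=None)

def reconstruction_error(h_fwd, h_bwd):
    return float(np.linalg.norm(h_fwd @ W - h_bwd))

errors = [reconstruction_error(H_fwd[i], H_bwd[i]) for i in range(500)]
threshold = np.percentile(errors, 90)          # hypothetical decision rule
is_ambiguous = [e > threshold for e in errors]
```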
https://aclanthology.org/2024.mrl-1.27.bib
https://aclanthology.org/2024.mrl-1.27/
@inproceedings{hashimoto-etal-2024-mlt, title = "{MLT}-{DR}: Multi-Lingual/Task Demonstration RetrievalAn Attempt towards Generalized Retriever for In-Context Learning", author = "Hashimoto, Kazuma and Akula, Arjun Reddy and Raman, Karthik and Bendersky, Michael", editor = {S{\"a}lev{\"a}, Jonne and Owodunni, Abraham}, booktitle = "Proceedings of the Fourth Workshop on Multilingual Representation Learning (MRL 2024)", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.mrl-1.27", pages = "324--345", abstract = "This paper presents Multi-Lingual/Task Demonstration Retrieval (MLT-DR) for in-context learning with Large Language Models (LLMs).Our goal is to investigate how dense demonstration retrieval models are generalized across languages and tasks.We first convert 81 tasks into a common format, covering various languages, task types, and domains.For 8 English-based tasks among them, we use machine translation to create synthetic multi/cross-lingual tasks, by translating the examples into non-English languages to explicitly cover more than 130 languages.We then use an instruction-tuned LLM to estimate utility of demonstrations for all the tasks to train the demonstration retrieval models.In our experiments, we show an interesting counterintuitive observation; to compute embeddings of demonstrations, using both the input and ground-truth output hurts the generalization ability of the retriever on unseen tasks whose output space is quite different from those in the seen task set.We also examine that our retriever robustly works even with LLMs that we did not touch during the development of the models.The retrieval models{'} checkpoints are publicly available at \url{URL-available-upon-publication}.", }
This paper presents Multi-Lingual/Task Demonstration Retrieval (MLT-DR) for in-context learning with Large Language Models (LLMs). Our goal is to investigate how well dense demonstration retrieval models generalize across languages and tasks. We first convert 81 tasks into a common format, covering various languages, task types, and domains. For 8 English-based tasks among them, we use machine translation to create synthetic multi/cross-lingual tasks, translating the examples into non-English languages to explicitly cover more than 130 languages. We then use an instruction-tuned LLM to estimate the utility of demonstrations for all the tasks and train the demonstration retrieval models on these estimates. In our experiments, we show an interesting counterintuitive observation: using both the input and the ground-truth output to compute demonstration embeddings hurts the generalization ability of the retriever on unseen tasks whose output space is quite different from that of the seen task set. We also show that our retriever works robustly even with LLMs that we did not touch during the development of the models. The retrieval models{'} checkpoints are publicly available at \url{URL-available-upon-publication}.
[ "Hashimoto, Kazuma", "Akula, Arjun Reddy", "Raman, Karthik", "Bendersky, Michael" ]
MLT-DR: Multi-Lingual/Task Demonstration Retrieval: An Attempt towards Generalized Retriever for In-Context Learning
mrl-1.27
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
1
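A minimal dense demonstration retriever in the spirit of the setup above follows. Per the paper's counterintuitive finding, demonstrations are embedded from their inputs only, not input plus output. The encoder checkpoint and toy demonstration pool are illustrative stand-ins, not the retriever trained in the paper.

```python
# Sketch: input-only dense retrieval of in-context demonstrations.
import numpy as np
from sentence_transformers import SentenceTransformer

encoder = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")

demos = [
    {"input": "Translate to French: good morning", "output": "bonjour"},
    {"input": "Sentiment of 'great movie':", "output": "positive"},
]
demo_vecs = encoder.encode([d["input"] for d in demos],
                           normalize_embeddings=True)

def retrieve(query: str, k: int = 1):
    q = encoder.encode([query], normalize_embeddings=True)[0]
    scores = demo_vecs @ q                 # cosine similarity (normalized)
    return [demos[i] for i in np.argsort(-scores)[:k]]

print(retrieve("Translate to French: good night"))
```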
https://aclanthology.org/2024.mrl-1.28.bib
https://aclanthology.org/2024.mrl-1.28/
@inproceedings{li-etal-2024-mcgill, title = "{M}c{G}ill {NLP} Group Submission to the {MRL} 2024 Shared Task: Ensembling Enhances Effectiveness of Multilingual Small {LM}s", author = "Li, Senyu and Yu, Hao and Ojo, Jessica and Adelani, David Ifeoluwa", editor = {S{\"a}lev{\"a}, Jonne and Owodunni, Abraham}, booktitle = "Proceedings of the Fourth Workshop on Multilingual Representation Learning (MRL 2024)", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.mrl-1.28", pages = "346--356", abstract = "We present our systems for the three tasks and five languages included in the MRL 2024 Shared Task on Multilingual Multi-task Information Retrieval: (1) Named Entity Recognition, (2) Free-form Question Answering, and (3) Multiple-choice Question Answering. For each task, we explored the impact of selecting different multilingual language models for fine-tuning across various target languages, and implemented an ensemble system that generates final outputs based on predictions from multiple fine-tuned models. All models are large language models fine-tuned on task-specific data. Our experimental results show that a more balanced dataset would yield better results. However, when training data for certain languages are scarce, fine-tuning on a large amount of English data supplemented by a small amount of {``}triggering data{''} in the target language can produce decent results.", }
We present our systems for the three tasks and five languages included in the MRL 2024 Shared Task on Multilingual Multi-task Information Retrieval: (1) Named Entity Recognition, (2) Free-form Question Answering, and (3) Multiple-choice Question Answering. For each task, we explored the impact of selecting different multilingual language models for fine-tuning across various target languages, and implemented an ensemble system that generates final outputs based on predictions from multiple fine-tuned models. All models are large language models fine-tuned on task-specific data. Our experimental results show that a more balanced dataset would yield better results. However, when training data for certain languages are scarce, fine-tuning on a large amount of English data supplemented by a small amount of {``}triggering data{''} in the target language can produce decent results.
[ "Li, Senyu", "Yu, Hao", "Ojo, Jessica", "Adelani, David Ifeoluwa" ]
McGill NLP Group Submission to the MRL 2024 Shared Task: Ensembling Enhances Effectiveness of Multilingual Small LMs
mrl-1.28
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
1
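The ensembling step described above reduces, in its simplest form, to a majority vote over the per-example predictions of several fine-tuned models. The sketch below uses placeholder label strings; in the submission these would come from different fine-tuned multilingual LMs.

```python
# Sketch: majority-vote ensemble over predictions from multiple models.
from collections import Counter

def ensemble_vote(predictions_per_model):
    """predictions_per_model: list of per-model prediction lists,
    aligned by example index."""
    final = []
    for preds in zip(*predictions_per_model):
        final.append(Counter(preds).most_common(1)[0][0])
    return final

model_a = ["PER", "LOC", "O"]
model_b = ["PER", "ORG", "O"]
model_c = ["PER", "LOC", "LOC"]
print(ensemble_vote([model_a, model_b, model_c]))  # ['PER', 'LOC', ...]
```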
https://aclanthology.org/2024.mrl-1.29.bib
https://aclanthology.org/2024.mrl-1.29/
@inproceedings{hammerl-etal-2024-cuni, title = "{CUNI} and {LMU} Submission to the {MRL} 2024 Shared Task on Multi-lingual Multi-task Information Retrieval", author = {H{\"a}mmerl, Katharina and Manea, Andrei-Alexandru and Vico, Gianluca and Helcl, Jind{\v{r}}ich and Libovick{\'y}, Jind{\v{r}}ich}, editor = {S{\"a}lev{\"a}, Jonne and Owodunni, Abraham}, booktitle = "Proceedings of the Fourth Workshop on Multilingual Representation Learning (MRL 2024)", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.mrl-1.29", pages = "357--364", abstract = "We present the joint CUNI and LMU submission to the MRL 2024 Shared Task on Multi-lingual Multi-task Information Retrieval.The shared task objective was to explore how we can deploy modern methods in NLP in multi-lingual low-resource settings, tested on two sub-tasks: Named-entity recognition and question answering.Our solutions to the subtasks are based on data acquisition and model adaptation.We compare the performance of our submitted systems with the translate-test approachwhich proved to be the most useful in the previous edition of the shared task.Our results show that using more data as well as fine-tuning recent multilingual pre-trained models leads to considerable improvements over the translate-test baseline.Our code is available at https://github.com/ufal/mrl2024-multilingual-ir-shared-task.", }
We present the joint CUNI and LMU submission to the MRL 2024 Shared Task on Multi-lingual Multi-task Information Retrieval. The shared task objective was to explore how we can deploy modern methods in NLP in multi-lingual low-resource settings, tested on two sub-tasks: named-entity recognition and question answering. Our solutions to the subtasks are based on data acquisition and model adaptation. We compare the performance of our submitted systems with the translate-test approach, which proved to be the most useful in the previous edition of the shared task. Our results show that using more data as well as fine-tuning recent multilingual pre-trained models leads to considerable improvements over the translate-test baseline. Our code is available at https://github.com/ufal/mrl2024-multilingual-ir-shared-task.
[ "H{\\\"a}mmerl, Katharina", "Manea, Andrei-Alex", "ru", "Vico, Gianluca", "Helcl, Jind{\\v{r}}ich", "Libovick{\\'y}, Jind{\\v{r}}ich" ]
CUNI and LMU Submission to the MRL 2024 Shared Task on Multi-lingual Multi-task Information Retrieval
mrl-1.29
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
1
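The translate-test baseline the submission is compared against is easy to sketch: translate the non-English input into English and apply an off-the-shelf English task model. The specific checkpoints below are illustrative assumptions, not the ones used in the shared task.

```python
# Sketch: translate-test baseline for Turkish question answering.
from transformers import pipeline

translate = pipeline("translation", model="Helsinki-NLP/opus-mt-tr-en")
qa = pipeline("question-answering",
              model="distilbert-base-cased-distilled-squad")

context_tr = "Ankara, Türkiye'nin başkentidir."
question_tr = "Türkiye'nin başkenti neresidir?"

context_en = translate(context_tr)[0]["translation_text"]
question_en = translate(question_tr)[0]["translation_text"]
print(qa(question=question_en, context=context_en))
```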
https://aclanthology.org/2024.mrl-1.30.bib
https://aclanthology.org/2024.mrl-1.30/
@inproceedings{tinner-etal-2024-findings, title = "Findings of the 2nd Shared Task on Multi-lingual Multi-task Information Retrieval at {MRL} 2024", author = "Tinner, Francesco and Mantri, Raghav and Hajili, Mammad and Chukwuneke, Chiamaka and Massey, Dylan and Ajibade, Benjamin A. and Kocak, Bilge Deniz and Dawud, Abolade and Atala, Jonathan and Sirin, Hale and Olaleye, Kayode and Rzayev, Anar and Adelani, David and Ataman, Duygu", editor = {S{\"a}lev{\"a}, Jonne and Owodunni, Abraham}, booktitle = "Proceedings of the Fourth Workshop on Multilingual Representation Learning (MRL 2024)", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.mrl-1.30", pages = "365--376", abstract = "Large language models (LLMs) demonstrate exceptional proficiency in both the comprehension and generation of textual data, particularly in English, a language for which extensive public benchmarks have been established across a wide range of natural language processing (NLP) tasks. Nonetheless, their performance in multilingual contexts and specialized domains remains less rigorously validated, raising questions about their reliability and generalizability across linguistically diverse and domain-specific settings. The second edition of the Shared Task on Multilingual Multitask Information Retrieval aims to provide a comprehensive and inclusive multilingual evaluation benchmark which aids assessing the ability of multilingual LLMs to capture logical, factual, or causal relationships within lengthy text contexts and generate language under sparse settings, particularly in scenarios with under-resourced languages. The shared task consists of two subtasks crucial to information retrieval: Named entity recognition (NER) and reading comprehension (RC), in 7 data-scarce languages: Azerbaijani, Swiss German, Turkish and , which previously lacked annotated resources in information retrieval tasks. This year specifally focus on the multiple-choice question answering evaluation setting which provides a more objective setting for comparing different methods across languages.", }
Large language models (LLMs) demonstrate exceptional proficiency in both the comprehension and generation of textual data, particularly in English, a language for which extensive public benchmarks have been established across a wide range of natural language processing (NLP) tasks. Nonetheless, their performance in multilingual contexts and specialized domains remains less rigorously validated, raising questions about their reliability and generalizability across linguistically diverse and domain-specific settings. The second edition of the Shared Task on Multilingual Multitask Information Retrieval aims to provide a comprehensive and inclusive multilingual evaluation benchmark which aids in assessing the ability of multilingual LLMs to capture logical, factual, or causal relationships within lengthy text contexts and to generate language under sparse settings, particularly in scenarios with under-resourced languages. The shared task consists of two subtasks crucial to information retrieval: named entity recognition (NER) and reading comprehension (RC), in 7 data-scarce languages: Azerbaijani, Swiss German, Turkish and , which previously lacked annotated resources in information retrieval tasks. This year specifically focuses on the multiple-choice question answering evaluation setting, which provides a more objective setting for comparing different methods across languages.
[ "Tinner, Francesco", "Mantri, Raghav", "Hajili, Mammad", "Chukwuneke, Chiamaka", "Massey, Dylan", "Ajibade, Benjamin A.", "Kocak, Bilge Deniz", "Dawud, Abolade", "Atala, Jonathan", "Sirin, Hale", "Olaleye, Kayode", "Rzayev, Anar", "Adelani, David", "Ataman, Duygu" ]
Findings of the 2nd Shared Task on Multi-lingual Multi-task Information Retrieval at MRL 2024
mrl-1.30
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
1
https://aclanthology.org/2024.nllp-1.1.bib
https://aclanthology.org/2024.nllp-1.1/
@inproceedings{c-r-etal-2024-legen, title = "{L}e{G}en: Complex Information Extraction from Legal sentences using Generative Models", author = "C R, Chaitra and Kulkarni, Sankalp and Sagi, Sai Rama Akash Varma and Pandey, Shashank and Yalavarthy, Rohit and Chakraborty, Dipanjan and Upadhyay, Prajna Devi", editor = "Aletras, Nikolaos and Chalkidis, Ilias and Barrett, Leslie and Goan{\textcommabelow{t}}{\u{a}}, C{\u{a}}t{\u{a}}lina and Preo{\textcommabelow{t}}iuc-Pietro, Daniel and Spanakis, Gerasimos", booktitle = "Proceedings of the Natural Legal Language Processing Workshop 2024", month = nov, year = "2024", address = "Miami, FL, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.nllp-1.1", pages = "1--17", abstract = "Constructing legal knowledge graphs from unstructured legal texts is a complex challenge due to the intricate nature of legal language. While open information extraction (OIE) techniques can convert text into triples of the form subject, relation, object, they often fall short of capturing the nuanced relationships within lengthy legal sentences, necessitating more sophisticated approaches known as complex information extraction. This paper proposes $LeGen$ {--} an end-to-end approach leveraging pre-trained large language models (GPT-4o, T5, BART) to perform complex information extraction from legal sentences. $LeGen$ learns and represents the discourse structure of legal sentences, capturing both their complexity and semantics. It minimizes error propagation typical in multi-step pipelines and achieves up to a 32.2{\%} gain on the Indian Legal benchmark. Additionally, it demonstrates competitive performance on open information extraction benchmarks. A promising application of the resulting legal knowledge graphs is in developing question-answering systems for government schemes, tailored to the Next Billion Users who struggle with the complexity of legal language. Our code and data are available at https://github.com/prajnaupadhyay/LegalIE", }
Constructing legal knowledge graphs from unstructured legal texts is a complex challenge due to the intricate nature of legal language. While open information extraction (OIE) techniques can convert text into triples of the form subject, relation, object, they often fall short of capturing the nuanced relationships within lengthy legal sentences, necessitating more sophisticated approaches known as complex information extraction. This paper proposes $LeGen$ {--} an end-to-end approach leveraging pre-trained large language models (GPT-4o, T5, BART) to perform complex information extraction from legal sentences. $LeGen$ learns and represents the discourse structure of legal sentences, capturing both their complexity and semantics. It minimizes error propagation typical in multi-step pipelines and achieves up to a 32.2{\%} gain on the Indian Legal benchmark. Additionally, it demonstrates competitive performance on open information extraction benchmarks. A promising application of the resulting legal knowledge graphs is in developing question-answering systems for government schemes, tailored to the Next Billion Users who struggle with the complexity of legal language. Our code and data are available at https://github.com/prajnaupadhyay/LegalIE
[ "C R, Chaitra", "Kulkarni, Sankalp", "Sagi, Sai Rama Akash Varma", "P", "ey, Shashank", "Yalavarthy, Rohit", "Chakraborty, Dipanjan", "Upadhyay, Prajna Devi" ]
LeGen: Complex Information Extraction from Legal sentences using Generative Models
nllp-1.1
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
1
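The generative extraction idea can be illustrated with a small seq2seq model prompted to rewrite a legal sentence as triples, then a light parse of the output. The prompt, checkpoint, and output format below are illustrative assumptions; the paper's models include GPT-4o, T5, and BART, and its discourse-structure handling is more involved than this sketch.

```python
# Sketch: prompt a seq2seq model to emit (subject; relation; object) triples.
from transformers import pipeline

generator = pipeline("text2text-generation", model="google/flan-t5-base")

sentence = ("The tenant shall pay the rent to the landlord "
            "on the first day of each month.")
prompt = "Extract (subject; relation; object) triples from: " + sentence

raw = generator(prompt, max_new_tokens=64)[0]["generated_text"]
triples = [tuple(part.strip(" ()") for part in t.split(";"))
           for t in raw.split("\n") if ";" in t]
print(triples)  # small models may need a stricter prompt or post-filtering
```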
https://aclanthology.org/2024.nllp-1.2.bib
https://aclanthology.org/2024.nllp-1.2/
@inproceedings{sie-etal-2024-summarizing, title = "Summarizing Long Regulatory Documents with a Multi-Step Pipeline", author = "Sie, Mika and Beek, Ruby and Bots, Michiel and Brinkkemper, Sjaak and Gatt, Albert", editor = "Aletras, Nikolaos and Chalkidis, Ilias and Barrett, Leslie and Goan{\textcommabelow{t}}{\u{a}}, C{\u{a}}t{\u{a}}lina and Preo{\textcommabelow{t}}iuc-Pietro, Daniel and Spanakis, Gerasimos", booktitle = "Proceedings of the Natural Legal Language Processing Workshop 2024", month = nov, year = "2024", address = "Miami, FL, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.nllp-1.2", pages = "18--32", abstract = "Due to their length and complexity, long regulatory texts are challenging to summarize. To address this, a multi-step extractive-abstractive architecture is proposed to handle lengthy regulatory documents more effectively. In this paper, we show that the effectiveness of a two-step architecture for summarizing long regulatory texts varies significantly depending on the model used. Specifically, the two-step architecture improves the performance of decoder-only models. For abstractive encoder-decoder models with short context lengths, the effectiveness of an extractive step varies, whereas for long-context encoder-decoder models, the extractive step worsens their performance. This research also highlights the challenges of evaluating generated texts, as evidenced by the differing results from human and automated evaluations. Most notably, human evaluations favoured language models pretrained on legal text, while automated metrics rank general-purpose language models higher. The results underscore the importance of selecting the appropriate summarization strategy based on model architecture and context length.", }
Due to their length and complexity, long regulatory texts are challenging to summarize. To address this, a multi-step extractive-abstractive architecture is proposed to handle lengthy regulatory documents more effectively. In this paper, we show that the effectiveness of a two-step architecture for summarizing long regulatory texts varies significantly depending on the model used. Specifically, the two-step architecture improves the performance of decoder-only models. For abstractive encoder-decoder models with short context lengths, the effectiveness of an extractive step varies, whereas for long-context encoder-decoder models, the extractive step worsens their performance. This research also highlights the challenges of evaluating generated texts, as evidenced by the differing results from human and automated evaluations. Most notably, human evaluations favoured language models pretrained on legal text, while automated metrics rank general-purpose language models higher. The results underscore the importance of selecting the appropriate summarization strategy based on model architecture and context length.
[ "Sie, Mika", "Beek, Ruby", "Bots, Michiel", "Brinkkemper, Sjaak", "Gatt, Albert" ]
Summarizing Long Regulatory Documents with a Multi-Step Pipeline
nllp-1.2
Poster
2408.09777
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
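The two-step architecture above can be sketched as an extractive pass that keeps only the most central sentences, followed by an abstractive model run on that extract. The TF-IDF centrality scoring, the summarizer checkpoint, and the synthetic document are illustrative assumptions.

```python
# Sketch: extractive (TF-IDF centrality) then abstractive (BART) summarization.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from transformers import pipeline

def extractive_step(sentences, k=10):
    tfidf = TfidfVectorizer().fit_transform(sentences)
    centrality = np.asarray((tfidf @ tfidf.T).mean(axis=1)).ravel()
    keep = sorted(np.argsort(-centrality)[:k])   # keep document order
    return " ".join(sentences[i] for i in keep)

summarizer = pipeline("summarization", model="facebook/bart-large-cnn")

long_document = (
    "Article 1 sets out the scope of the regulation. "
    "Article 2 defines key terms. "
    "Article 3 imposes reporting duties on operators. "
) * 10
sentences = [s for s in long_document.split(". ") if s]
extract = extractive_step(sentences, k=10)
summary = summarizer(extract, max_length=120, min_length=30)[0]["summary_text"]
```

Per the paper's finding, whether this extractive step helps depends on the abstractive model: it benefits decoder-only models but can hurt long-context encoder-decoder models.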
https://aclanthology.org/2024.nllp-1.3.bib
https://aclanthology.org/2024.nllp-1.3/
@inproceedings{liu-etal-2024-enhancing-legal, title = "Enhancing Legal Expertise in Large Language Models through Composite Model Integration: The Development and Evaluation of Law-Neo", author = "Liu, Zhihao and Zhu, Yanzhen and Lu, Mengyuan", editor = "Aletras, Nikolaos and Chalkidis, Ilias and Barrett, Leslie and Goan{\textcommabelow{t}}{\u{a}}, C{\u{a}}t{\u{a}}lina and Preo{\textcommabelow{t}}iuc-Pietro, Daniel and Spanakis, Gerasimos", booktitle = "Proceedings of the Natural Legal Language Processing Workshop 2024", month = nov, year = "2024", address = "Miami, FL, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.nllp-1.3", pages = "33--41", abstract = "Although large language models (LLMs) like ChatGPT have demonstrated considerable capabilities in general domains, they often lack proficiency in specialized fields. Enhancing a model{'}s performance in a specific domain, such as law, while maintaining low costs, has been a significant challenge. Existing methods, such as fine-tuning or building mixture of experts (MoE) models, often struggle to balance model parameters, training costs, and domain-specific performance. Inspired by composition to augment language models, we have developed Law-Neo, a novel model designed to enhance legal LLMs. This model significantly improves the model{'}s legal domain expertise at minimal training costs, while retaining the logical capabilities of a large-scale anchor model. Our Law-Neo model outperformed other models in comprehensive experiments on multiple legal task benchmarks, demonstrating the effectiveness of this approach.", }
Although large language models (LLMs) like ChatGPT have demonstrated considerable capabilities in general domains, they often lack proficiency in specialized fields. Enhancing a model{'}s performance in a specific domain, such as law, while maintaining low costs, has been a significant challenge. Existing methods, such as fine-tuning or building mixture of experts (MoE) models, often struggle to balance model parameters, training costs, and domain-specific performance. Inspired by composition to augment language models, we have developed Law-Neo, a novel model designed to enhance legal LLMs. This model significantly improves the model{'}s legal domain expertise at minimal training costs, while retaining the logical capabilities of a large-scale anchor model. Our Law-Neo model outperformed other models in comprehensive experiments on multiple legal task benchmarks, demonstrating the effectiveness of this approach.
[ "Liu, Zhihao", "Zhu, Yanzhen", "Lu, Mengyuan" ]
Enhancing Legal Expertise in Large Language Models through Composite Model Integration: The Development and Evaluation of Law-Neo
nllp-1.3
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
1
https://aclanthology.org/2024.nllp-1.4.bib
https://aclanthology.org/2024.nllp-1.4/
@inproceedings{meghdadi-inkpen-2024-uottawa, title = "u{O}ttawa at {L}egal{L}ens-2024: Transformer-based Classification Experiments", author = "Meghdadi, Nima and Inkpen, Diana", editor = "Aletras, Nikolaos and Chalkidis, Ilias and Barrett, Leslie and Goan{\textcommabelow{t}}{\u{a}}, C{\u{a}}t{\u{a}}lina and Preo{\textcommabelow{t}}iuc-Pietro, Daniel and Spanakis, Gerasimos", booktitle = "Proceedings of the Natural Legal Language Processing Workshop 2024", month = nov, year = "2024", address = "Miami, FL, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.nllp-1.4", pages = "42--47", abstract = "This paper presents the methods used for LegalLens-2024, which focused on detecting legal violations within unstructured textual data and associating these violations with potentially affected individuals. The shared task included two subtasks: A) Legal Named Entity Recognition (L-NER) and B) Legal Natural Language Inference (L-NLI). For subtask A, we utilized the spaCy library, while for subtask B, we employed a combined model incorporating RoBERTa and CNN. Our results were 86.3{\%} in the L-NER subtask and 88.25{\%} in the L-NLI subtask. Overall, our paper demonstrates the effectiveness of transformer models in addressing complex tasks in the legal domain.", }
This paper presents the methods used for LegalLens-2024, which focused on detecting legal violations within unstructured textual data and associating these violations with potentially affected individuals. The shared task included two subtasks: A) Legal Named Entity Recognition (L-NER) and B) Legal Natural Language Inference (L-NLI). For subtask A, we utilized the spaCy library, while for subtask B, we employed a combined model incorporating RoBERTa and CNN. Our results were 86.3{\%} in the L-NER subtask and 88.25{\%} in the L-NLI subtask. Overall, our paper demonstrates the effectiveness of transformer models in addressing complex tasks in the legal domain.
[ "Meghdadi, Nima", "Inkpen, Diana" ]
uOttawa at LegalLens-2024: Transformer-based Classification Experiments
nllp-1.4
Poster
2410.21139
[ "https://github.com/nimameghdadi/uottawa-at-legallens-2024-transformer-based-classification" ]
https://huggingface.co/papers/2410.21139
0
0
0
2
[ "nimamegh/roberta_cnn_legal" ]
[]
[]
[ "nimamegh/roberta_cnn_legal" ]
[]
[]
1
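Subtask A's spaCy-based setup reduces to a standard NER pipeline call. The stock English model below is used for illustration; the actual submission trains on the legal L-NER data.

```python
# Sketch: spaCy NER over a legal-flavored sentence.
# First: python -m spacy download en_core_web_sm
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("Acme Corp. was sued in California for misleading advertising.")
for ent in doc.ents:
    print(ent.text, ent.label_)
```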
https://aclanthology.org/2024.nllp-1.5.bib
https://aclanthology.org/2024.nllp-1.5/
@inproceedings{beauchemin-etal-2024-quebec, title = "{Q}uebec Automobile Insurance Question-Answering With Retrieval-Augmented Generation", author = "Beauchemin, David and Khoury, Richard and Gagnon, Zachary", editor = "Aletras, Nikolaos and Chalkidis, Ilias and Barrett, Leslie and Goan{\textcommabelow{t}}{\u{a}}, C{\u{a}}t{\u{a}}lina and Preo{\textcommabelow{t}}iuc-Pietro, Daniel and Spanakis, Gerasimos", booktitle = "Proceedings of the Natural Legal Language Processing Workshop 2024", month = nov, year = "2024", address = "Miami, FL, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.nllp-1.5", pages = "48--60", abstract = "Large Language Models (LLMs) perform outstandingly in various downstream tasks, and the use of the Retrieval-Augmented Generation (RAG) architecture has been shown to improve performance for legal question answering (Nuruzzaman and Hussain, 2020; Louis et al., 2024). However, there are limited applications in insurance questions-answering, a specific type of legal document. This paper introduces two corpora: the Quebec Automobile Insurance Expertise Reference Corpus and a set of 82 Expert Answers to Layperson Automobile Insurance Questions. Our study leverages both corpora to automatically and manually assess a GPT4-o, a state-of-the-art (SOTA) LLM, to answer Quebec automobile insurance questions. Our results demonstrate that, on average, using our expertise reference corpus generates better responses on both automatic and manual evaluation metrics. However, they also highlight that LLM QA is unreliable enough for mass utilization in critical areas. Indeed, our results show that between 5{\%} to 13{\%} of answered questions include a false statement that could lead to customer misunderstanding.", }
Large Language Models (LLMs) perform outstandingly in various downstream tasks, and the use of the Retrieval-Augmented Generation (RAG) architecture has been shown to improve performance for legal question answering (Nuruzzaman and Hussain, 2020; Louis et al., 2024). However, there are limited applications in insurance question-answering, a specific type of legal document. This paper introduces two corpora: the Quebec Automobile Insurance Expertise Reference Corpus and a set of 82 Expert Answers to Layperson Automobile Insurance Questions. Our study leverages both corpora to automatically and manually assess GPT-4o, a state-of-the-art (SOTA) LLM, on answering Quebec automobile insurance questions. Our results demonstrate that, on average, using our expertise reference corpus generates better responses on both automatic and manual evaluation metrics. However, they also highlight that LLM QA is not yet reliable enough for mass utilization in critical areas. Indeed, our results show that between 5{\%} and 13{\%} of answered questions include a false statement that could lead to customer misunderstanding.
[ "Beauchemin, David", "Khoury, Richard", "Gagnon, Zachary" ]
Quebec Automobile Insurance Question-Answering With Retrieval-Augmented Generation
nllp-1.5
Poster
2410.09623
[ "https://github.com/GRAAL-Research/quebec-insurance-rag-corpora" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
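The RAG setup above can be sketched as a retrieve-then-prompt loop over the expertise reference corpus. TF-IDF retrieval stands in for whatever retriever the pipeline uses, and the prompt template and corpus snippets are assumptions.

```python
# Sketch: retrieve top passages and build a grounded prompt for the LLM.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

corpus = [
    "In Quebec, bodily injuries from car accidents are compensated ...",
    "Property damage claims are settled under the Direct Compensation ...",
]
vectorizer = TfidfVectorizer().fit(corpus)
doc_vecs = vectorizer.transform(corpus)

def build_prompt(question: str, k: int = 2) -> str:
    q_vec = vectorizer.transform([question])
    sims = cosine_similarity(q_vec, doc_vecs)[0]
    top = sims.argsort()[::-1][:k]
    context = "\n".join(corpus[i] for i in top)
    return ("Answer using only the context below.\n\n"
            f"Context:\n{context}\n\nQuestion: {question}\nAnswer:")

prompt = build_prompt("Who pays for my car repairs after an accident?")
# `prompt` would then be sent to the LLM (GPT-4o in the paper).
```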
https://aclanthology.org/2024.nllp-1.6.bib
https://aclanthology.org/2024.nllp-1.6/
@inproceedings{nigam-etal-2024-rethinking, title = "Rethinking Legal Judgement Prediction in a Realistic Scenario in the Era of Large Language Models", author = "Nigam, Shubham Kumar and Deroy, Aniket and Maity, Subhankar and Bhattacharya, Arnab", editor = "Aletras, Nikolaos and Chalkidis, Ilias and Barrett, Leslie and Goan{\textcommabelow{t}}{\u{a}}, C{\u{a}}t{\u{a}}lina and Preo{\textcommabelow{t}}iuc-Pietro, Daniel and Spanakis, Gerasimos", booktitle = "Proceedings of the Natural Legal Language Processing Workshop 2024", month = nov, year = "2024", address = "Miami, FL, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.nllp-1.6", pages = "61--80", abstract = "This study investigates judgment prediction in a realistic scenario within the context of Indian judgments, utilizing a range of transformer-based models, including InLegalBERT, BERT, and XLNet, alongside LLMs such as Llama-2 and GPT-3.5 Turbo. In this realistic scenario, we simulate how judgments are predicted at the point when a case is presented for a decision in court, using only the information available at that time, such as the facts of the case, statutes, precedents, and arguments. This approach mimics real-world conditions, where decisions must be made without the benefit of hindsight, unlike retrospective analyses often found in previous studies. For transformer models, we experiment with hierarchical transformers and the summarization of judgment facts to optimize input for these models. Our experiments with LLMs reveal that GPT-3.5 Turbo excels in realistic scenarios, demonstrating robust performance in judgment prediction. Furthermore, incorporating additional legal information, such as statutes and precedents, significantly improves the outcome of the prediction task. The LLMs also provide explanations for their predictions. To evaluate the quality of these predictions and explanations, we introduce two human evaluation metrics: Clarity and Linking. Our findings from both automatic and human evaluations indicate that, despite advancements in LLMs, they are yet to achieve expert-level performance in judgment prediction and explanation tasks.", }
This study investigates judgment prediction in a realistic scenario within the context of Indian judgments, utilizing a range of transformer-based models, including InLegalBERT, BERT, and XLNet, alongside LLMs such as Llama-2 and GPT-3.5 Turbo. In this realistic scenario, we simulate how judgments are predicted at the point when a case is presented for a decision in court, using only the information available at that time, such as the facts of the case, statutes, precedents, and arguments. This approach mimics real-world conditions, where decisions must be made without the benefit of hindsight, unlike retrospective analyses often found in previous studies. For transformer models, we experiment with hierarchical transformers and the summarization of judgment facts to optimize input for these models. Our experiments with LLMs reveal that GPT-3.5 Turbo excels in realistic scenarios, demonstrating robust performance in judgment prediction. Furthermore, incorporating additional legal information, such as statutes and precedents, significantly improves the outcome of the prediction task. The LLMs also provide explanations for their predictions. To evaluate the quality of these predictions and explanations, we introduce two human evaluation metrics: Clarity and Linking. Our findings from both automatic and human evaluations indicate that, despite advancements in LLMs, they are yet to achieve expert-level performance in judgment prediction and explanation tasks.
[ "Nigam, Shubham Kumar", "Deroy, Aniket", "Maity, Subhankar", "Bhattacharya, Arnab" ]
Rethinking Legal Judgement Prediction in a Realistic Scenario in the Era of Large Language Models
nllp-1.6
Poster
2410.10542
[ "https://github.com/shubhamkumarnigam/realistic_ljp" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
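The hierarchical-transformer idea mentioned above, used because judgments exceed encoder context limits, can be sketched as chunk-encode-pool: split the case text into chunks, embed each with the encoder, pool the chunk embeddings, and classify. The checkpoint, pooling, and label set are illustrative assumptions (e.g., InLegalBERT would replace the base model).

```python
# Sketch: chunked encoding and pooled classification for long judgments.
import torch
from transformers import AutoModel, AutoTokenizer

name = "bert-base-uncased"  # stand-in for a legal encoder like InLegalBERT
tok = AutoTokenizer.from_pretrained(name)
enc = AutoModel.from_pretrained(name)
head = torch.nn.Linear(enc.config.hidden_size, 2)  # accept / reject

def predict(case_text: str, chunk_words: int = 300):
    words = case_text.split()
    chunks = [" ".join(words[i:i + chunk_words])
              for i in range(0, len(words), chunk_words)]
    embs = []
    with torch.no_grad():
        for c in chunks:
            batch = tok(c, truncation=True, max_length=512,
                        return_tensors="pt")
            embs.append(enc(**batch).last_hidden_state[:, 0])  # [CLS]
    doc_emb = torch.cat(embs).mean(dim=0)          # pool over chunks
    return head(doc_emb).softmax(-1)
```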
https://aclanthology.org/2024.nllp-1.7.bib
https://aclanthology.org/2024.nllp-1.7/
@inproceedings{xie-etal-2024-clc, title = "The {CLC}-{UKET} Dataset: Benchmarking Case Outcome Prediction for the {UK} Employment Tribunal", author = "Xie, Huiyuan and Steffek, Felix and De Faria, Joana and Carter, Christine and Rutherford, Jonathan", editor = "Aletras, Nikolaos and Chalkidis, Ilias and Barrett, Leslie and Goan{\textcommabelow{t}}{\u{a}}, C{\u{a}}t{\u{a}}lina and Preo{\textcommabelow{t}}iuc-Pietro, Daniel and Spanakis, Gerasimos", booktitle = "Proceedings of the Natural Legal Language Processing Workshop 2024", month = nov, year = "2024", address = "Miami, FL, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.nllp-1.7", pages = "81--96", abstract = "This paper explores the intersection of technological innovation and access to justice by developing a benchmark for predicting case outcomes in the UK Employment Tribunal (UKET). To address the challenge of extensive manual annotation, the study employs a large language model (LLM) for automatic annotation, resulting in the creation of the CLC-UKET dataset. The dataset consists of approximately 19,000 UKET cases and their metadata. Comprehensive legal annotations cover facts, claims, precedent references, statutory references, case outcomes, reasons and jurisdiction codes. Facilitated by the CLC-UKET data, we examine a multi-class case outcome prediction task in the UKET. Human predictions are collected to establish a performance reference for model comparison. Empirical results from baseline models indicate that finetuned transformer models outperform zero-shot and few-shot LLMs on the UKET prediction task. The performance of zero-shot LLMs can be enhanced by integrating task-related information into few-shot examples. We hope that the CLC-UKET dataset, along with human annotations and empirical findings, can serve as a valuable benchmark for employment-related dispute resolution.", }
This paper explores the intersection of technological innovation and access to justice by developing a benchmark for predicting case outcomes in the UK Employment Tribunal (UKET). To address the challenge of extensive manual annotation, the study employs a large language model (LLM) for automatic annotation, resulting in the creation of the CLC-UKET dataset. The dataset consists of approximately 19,000 UKET cases and their metadata. Comprehensive legal annotations cover facts, claims, precedent references, statutory references, case outcomes, reasons and jurisdiction codes. Facilitated by the CLC-UKET data, we examine a multi-class case outcome prediction task in the UKET. Human predictions are collected to establish a performance reference for model comparison. Empirical results from baseline models indicate that finetuned transformer models outperform zero-shot and few-shot LLMs on the UKET prediction task. The performance of zero-shot LLMs can be enhanced by integrating task-related information into few-shot examples. We hope that the CLC-UKET dataset, along with human annotations and empirical findings, can serve as a valuable benchmark for employment-related dispute resolution.
[ "Xie, Huiyuan", "Steffek, Felix", "De Faria, Joana", "Carter, Christine", "Rutherford, Jonathan" ]
The CLC-UKET Dataset: Benchmarking Case Outcome Prediction for the UK Employment Tribunal
nllp-1.7
Poster
2409.08098
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
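One finding above is that zero-shot LLMs improve when task-related information is folded into the few-shot examples. A hedged sketch of such a prompt builder follows; the template, the extra "claim type" field, and the outcome label set are assumptions about the UKET setup, not the paper's exact prompt.

```python
# Sketch: few-shot outcome-prediction prompt with task-related information.
OUTCOMES = ["claimant wins", "respondent wins", "partly successful"]

def few_shot_prompt(examples, facts):
    lines = ["Task: predict the UK Employment Tribunal case outcome.",
             "Possible outcomes: " + ", ".join(OUTCOMES), ""]
    for ex in examples:
        lines += [f"Facts: {ex['facts']}",
                  f"Relevant claim type: {ex['claim_type']}",  # task info
                  f"Outcome: {ex['outcome']}", ""]
    lines += [f"Facts: {facts}", "Outcome:"]
    return "\n".join(lines)

demo = [{"facts": "Employee dismissed without notice after ...",
         "claim_type": "unfair dismissal",
         "outcome": "claimant wins"}]
print(few_shot_prompt(demo, "Worker alleges unpaid overtime ..."))
```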
https://aclanthology.org/2024.nllp-1.8.bib
https://aclanthology.org/2024.nllp-1.8/
@inproceedings{mali-etal-2024-information, title = "Information Extraction for Planning Court Cases", author = "Mali, Drish and Mali, Rubash and Barale, Claire", editor = "Aletras, Nikolaos and Chalkidis, Ilias and Barrett, Leslie and Goan{\textcommabelow{t}}{\u{a}}, C{\u{a}}t{\u{a}}lina and Preo{\textcommabelow{t}}iuc-Pietro, Daniel and Spanakis, Gerasimos", booktitle = "Proceedings of the Natural Legal Language Processing Workshop 2024", month = nov, year = "2024", address = "Miami, FL, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.nllp-1.8", pages = "97--114", abstract = "Legal documents are often long and unstructured, making them challenging and time-consuming to apprehend. An automatic system that can identify relevant entities and labels within legal documents, would significantly reduce the legal research time. We developed a system to streamline legal case analysis from planning courts by extracting key information from XML files using Named Entity Recognition (NER) and multi-label classification models to convert them into structured form. This research contributes three novel datasets for the Planning Court cases: a NER dataset, a multi-label dataset fully annotated by humans, and newly re-annotated multi-label datasets partially annotated using LLMs. We experimented with various general-purpose and legal domain-specific models with different maximum sequence lengths. It was noted that incorporating paragraph position information improved the performance of models for the multi-label classification task. Our research highlighted the importance of domain-specific models, with LegalRoBERTa and LexLM demonstrating the best performance.", }
Legal documents are often long and unstructured, making them challenging and time-consuming to apprehend. An automatic system that can identify relevant entities and labels within legal documents would significantly reduce legal research time. We developed a system to streamline legal case analysis from planning courts by extracting key information from XML files using Named Entity Recognition (NER) and multi-label classification models to convert them into structured form. This research contributes three novel datasets for Planning Court cases: a NER dataset, a multi-label dataset fully annotated by humans, and newly re-annotated multi-label datasets partially annotated using LLMs. We experimented with various general-purpose and legal domain-specific models with different maximum sequence lengths. We found that incorporating paragraph position information improved the performance of models on the multi-label classification task. Our research highlights the importance of domain-specific models, with LegalRoBERTa and LexLM demonstrating the best performance.
[ "Mali, Drish", "Mali, Rubash", "Barale, Claire" ]
Information Extraction for Planning Court Cases
nllp-1.8
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
1
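The paragraph-position idea noted above can be sketched by appending each paragraph's relative position in the document to its text features before multi-label classification. The toy paragraphs, labels, and classifier choice are illustrative assumptions.

```python
# Sketch: TF-IDF text features + relative paragraph position, multi-label.
import numpy as np
from scipy.sparse import csr_matrix, hstack
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier
from sklearn.preprocessing import MultiLabelBinarizer

paras = ["The application concerns a change of use ...",
         "The inspector found the harm to the green belt ...",
         "The appeal is dismissed."]
positions = np.array([[0.0], [0.5], [1.0]])      # relative position feature
labels = [["background"], ["reasoning"], ["decision"]]

X_text = TfidfVectorizer().fit_transform(paras)
X = hstack([X_text, csr_matrix(positions)])      # text + position
Y = MultiLabelBinarizer().fit_transform(labels)

clf = OneVsRestClassifier(LogisticRegression(max_iter=1000)).fit(X, Y)
```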
https://aclanthology.org/2024.nllp-1.9.bib
https://aclanthology.org/2024.nllp-1.9/
@inproceedings{itani-etal-2024-automated, title = "Automated Anonymization of Parole Hearing Transcripts", author = "Itani, Abed and Siskou, Wassiliki and Hautli-Janisz, Annette", editor = "Aletras, Nikolaos and Chalkidis, Ilias and Barrett, Leslie and Goan{\textcommabelow{t}}{\u{a}}, C{\u{a}}t{\u{a}}lina and Preo{\textcommabelow{t}}iuc-Pietro, Daniel and Spanakis, Gerasimos", booktitle = "Proceedings of the Natural Legal Language Processing Workshop 2024", month = nov, year = "2024", address = "Miami, FL, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.nllp-1.9", pages = "115--128", abstract = "Responsible natural language processing is more and more concerned with preventing the violation of personal rights that language technology can entail (CITATION). In this paper we illustrate the case of parole hearings in California, the verbatim transcripts of which are made available to the general public upon a request sent to the California Board of Parole Hearings. The parole hearing setting is highly sensitive: inmates face a board of legal representatives who discuss highly personal matters not only about the inmates themselves but also about victims and their relatives, such as spouses and children. Participants have no choice in contributing to the data collection process, since the disclosure of the transcripts is mandated by law. As researchers who are interested in understanding and modeling the communication in these hierarchy-driven settings, we face an ethical dilemma: publishing raw data as is for the community would compromise the privacy of all individuals affected, but manually cleaning the data requires a substantive effort. In this paper we present an automated anonymization process which reliably removes and pseudonymizes sensitive data in verbatim transcripts, while at the same time preserving the structure and content of the data. Our results show that the process exhibits little to no leakage of sensitive information when applied to more than 300 hearing transcripts.", }
Responsible natural language processing is more and more concerned with preventing the violation of personal rights that language technology can entail (CITATION). In this paper we illustrate the case of parole hearings in California, the verbatim transcripts of which are made available to the general public upon a request sent to the California Board of Parole Hearings. The parole hearing setting is highly sensitive: inmates face a board of legal representatives who discuss highly personal matters not only about the inmates themselves but also about victims and their relatives, such as spouses and children. Participants have no choice in contributing to the data collection process, since the disclosure of the transcripts is mandated by law. As researchers who are interested in understanding and modeling the communication in these hierarchy-driven settings, we face an ethical dilemma: publishing raw data as is for the community would compromise the privacy of all individuals affected, but manually cleaning the data requires substantial effort. In this paper we present an automated anonymization process which reliably removes and pseudonymizes sensitive data in verbatim transcripts, while at the same time preserving the structure and content of the data. Our results show that the process exhibits little to no leakage of sensitive information when applied to more than 300 hearing transcripts.
[ "Itani, Abed", "Siskou, Wassiliki", "Hautli-Janisz, Annette" ]
Automated Anonymization of Parole Hearing Transcripts
nllp-1.9
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
1
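NER-driven pseudonymization, as discussed above, can be sketched as detecting person names and replacing each unique surface form with a stable placeholder while preserving the rest of the transcript. The stock spaCy model is an illustrative stand-in for the paper's pipeline.

```python
# Sketch: replace PERSON entities with consistent pseudonyms.
import spacy

nlp = spacy.load("en_core_web_sm")

def pseudonymize(text: str) -> str:
    doc = nlp(text)
    mapping, out, last = {}, [], 0
    for ent in doc.ents:
        if ent.label_ == "PERSON":
            if ent.text not in mapping:
                mapping[ent.text] = f"PERSON_{len(mapping) + 1}"
            out.append(text[last:ent.start_char])
            out.append(mapping[ent.text])
            last = ent.end_char
    out.append(text[last:])
    return "".join(out)

print(pseudonymize("Mr. Smith told Officer Jones that Smith was present."))
```

A real pipeline would also cover dates, locations, and case identifiers, and would need coreference-aware name matching; the surface-form mapping here is the simplest possible variant.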
https://aclanthology.org/2024.nllp-1.10.bib
https://aclanthology.org/2024.nllp-1.10/
@inproceedings{tan-etal-2024-towards-automated, title = "Towards an Automated Pointwise Evaluation Metric for Generated Long-Form Legal Summaries", author = "Tan, Shao Min and Grail, Quentin and Quartey, Lee", editor = "Aletras, Nikolaos and Chalkidis, Ilias and Barrett, Leslie and Goan{\textcommabelow{t}}{\u{a}}, C{\u{a}}t{\u{a}}lina and Preo{\textcommabelow{t}}iuc-Pietro, Daniel and Spanakis, Gerasimos", booktitle = "Proceedings of the Natural Legal Language Processing Workshop 2024", month = nov, year = "2024", address = "Miami, FL, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.nllp-1.10", pages = "129--142", abstract = "Long-form abstractive summarization is a task that has particular importance in the legal domain. Automated evaluation metrics are important for the development of text generation models, but existing research on the evaluation of generated summaries has focused mainly on short summaries. We introduce an automated evaluation methodology for generated long-form legal summaries, which involves breaking each summary into individual points, comparing the points in a human-written and machine-generated summary, and calculating a recall and precision score for the latter. The method is designed to be particularly suited for the complexities of legal text, and is also fully interpretable. We also create and release a small meta-dataset for the benchmarking of evaluation methods, focusing on long-form legal summarization. Our evaluation metric corresponds better with human evaluation compared to existing metrics which were not developed for legal data.", }
Long-form abstractive summarization is a task that has particular importance in the legal domain. Automated evaluation metrics are important for the development of text generation models, but existing research on the evaluation of generated summaries has focused mainly on short summaries. We introduce an automated evaluation methodology for generated long-form legal summaries, which involves breaking each summary into individual points, comparing the points in a human-written and machine-generated summary, and calculating a recall and precision score for the latter. The method is designed to be particularly suited for the complexities of legal text, and is also fully interpretable. We also create and release a small meta-dataset for the benchmarking of evaluation methods, focusing on long-form legal summarization. Our evaluation metric corresponds better with human evaluation compared to existing metrics which were not developed for legal data.
[ "Tan, Shao Min", "Grail, Quentin", "Quartey, Lee" ]
Towards an Automated Pointwise Evaluation Metric for Generated Long-Form Legal Summaries
nllp-1.10
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
1
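The pointwise metric above can be sketched as: split each summary into points (here, naively, sentences), test with an NLI model whether each reference point is entailed by some generated point (recall) and vice versa (precision). The NLI checkpoint and its label order are assumptions, and sentence splitting stands in for the paper's point extraction.

```python
# Sketch: entailment-based pointwise precision/recall for long summaries.
from sentence_transformers import CrossEncoder

nli = CrossEncoder("cross-encoder/nli-deberta-v3-base")
LABELS = ["contradiction", "entailment", "neutral"]  # assumed label order

def entailed(premise: str, hypothesis: str) -> bool:
    scores = nli.predict([(premise, hypothesis)])[0]
    return LABELS[scores.argmax()] == "entailment"

def pointwise_scores(reference: str, generated: str):
    ref_pts = [s for s in reference.split(". ") if s]
    gen_pts = [s for s in generated.split(". ") if s]
    recall = sum(any(entailed(g, r) for g in gen_pts)
                 for r in ref_pts) / len(ref_pts)
    precision = sum(any(entailed(r, g) for r in ref_pts)
                    for g in gen_pts) / len(gen_pts)
    return precision, recall
```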
https://aclanthology.org/2024.nllp-1.11.bib
https://aclanthology.org/2024.nllp-1.11/
@inproceedings{narendra-etal-2024-enhancing, title = "Enhancing Contract Negotiations with {LLM}-Based Legal Document Comparison", author = "Narendra, Savinay and Shetty, Kaushal and Ratnaparkhi, Adwait", editor = "Aletras, Nikolaos and Chalkidis, Ilias and Barrett, Leslie and Goan{\textcommabelow{t}}{\u{a}}, C{\u{a}}t{\u{a}}lina and Preo{\textcommabelow{t}}iuc-Pietro, Daniel and Spanakis, Gerasimos", booktitle = "Proceedings of the Natural Legal Language Processing Workshop 2024", month = nov, year = "2024", address = "Miami, FL, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.nllp-1.11", pages = "143--153", abstract = "We present a large language model (LLM) based approach for comparing legal contracts with their corresponding template documents. Legal professionals use commonly observed deviations between templates and contracts to help with contract negotiations, and also to refine the template documents. Our comparison approach, based on the well-studied natural language inference (NLI) task, first splits a template into key concepts and then uses LLMs to decide if the concepts are entailed by the contract document. We also repeat this procedure in the opposite direction - contract clauses are tested for entailment against the template clause to see if they contain additional information. The non-entailed concepts are labelled, organized and filtered by frequency, and placed into a clause library, which is used to suggest changes to the template documents. We first show that our LLM-based approach outperforms all previous work on a publicly available dataset designed for NLI in the legal domain. We then apply it to a private real-world legal dataset, achieve an accuracy of 96.46{\%}. Our approach is the first in the literature to produce a natural language comparison between legal contracts and their template documents.", }
We present a large language model (LLM) based approach for comparing legal contracts with their corresponding template documents. Legal professionals use commonly observed deviations between templates and contracts to help with contract negotiations, and also to refine the template documents. Our comparison approach, based on the well-studied natural language inference (NLI) task, first splits a template into key concepts and then uses LLMs to decide if the concepts are entailed by the contract document. We also repeat this procedure in the opposite direction - contract clauses are tested for entailment against the template clause to see if they contain additional information. The non-entailed concepts are labelled, organized and filtered by frequency, and placed into a clause library, which is used to suggest changes to the template documents. We first show that our LLM-based approach outperforms all previous work on a publicly available dataset designed for NLI in the legal domain. We then apply it to a private real-world legal dataset, achieving an accuracy of 96.46{\%}. Our approach is the first in the literature to produce a natural language comparison between legal contracts and their template documents.
[ "Narendra, Savinay", "Shetty, Kaushal", "Ratnaparkhi, Adwait" ]
Enhancing Contract Negotiations with LLM-Based Legal Document Comparison
nllp-1.11
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
1
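The bidirectional comparison above can be sketched as two entailment passes: template concepts not entailed by the contract are flagged as missing, and contract clauses not entailed by the template are collected as candidates for the clause library. The `entailed` callable is assumed to wrap an NLI model (as in the earlier evaluation sketch); a substring check stands in here just to exercise the function.

```python
# Sketch: bidirectional entailment comparison of template vs. contract.
def compare(template_concepts, contract_clauses, entailed):
    contract_text = " ".join(contract_clauses)
    template_text = " ".join(template_concepts)
    missing = [c for c in template_concepts
               if not entailed(contract_text, c)]       # template -> contract
    additions = [c for c in contract_clauses
                 if not entailed(template_text, c)]     # contract -> template
    return {"missing_from_contract": missing,
            "extra_in_contract": additions}

# Toy entailment check (substring match) for demonstration only:
toy_entailed = lambda premise, hyp: hyp.lower() in premise.lower()
print(compare(["rent is due monthly"],
              ["Rent is due monthly", "Late fees apply"],
              toy_entailed))
```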
https://aclanthology.org/2024.nllp-1.12.bib
https://aclanthology.org/2024.nllp-1.12/
@inproceedings{redelaar-etal-2024-attributed, title = "Attributed Question Answering for Preconditions in the {D}utch Law", author = "Redelaar, Felicia and Van Drie, Romy and Verberne, Suzan and De Boer, Maaike", editor = "Aletras, Nikolaos and Chalkidis, Ilias and Barrett, Leslie and Goan{\textcommabelow{t}}{\u{a}}, C{\u{a}}t{\u{a}}lina and Preo{\textcommabelow{t}}iuc-Pietro, Daniel and Spanakis, Gerasimos", booktitle = "Proceedings of the Natural Legal Language Processing Workshop 2024", month = nov, year = "2024", address = "Miami, FL, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.nllp-1.12", pages = "154--165", abstract = "In this paper, we address the problem of answering questions about preconditions in the law, e.g. {``}When can the court terminate the guardianship of a natural person?{''}. When answering legal questions, it is important to attribute the relevant part of the law; we therefore not only generate answers but also references to law articles. We implement a retrieval augmented generation (RAG) pipeline for long-form answers based on the Dutch law, using several state-of-the-art retrievers and generators. For evaluating our pipeline, we create a dataset containing legal QA pairs with attributions. Our experiments show promising results on our extended version for the automatic evaluation metrics from the Automatic LLMs{'} Citation Evaluation (ALCE) Framework and the G-EVAL Framework. Our findings indicate that RAG has significant potential in complex, citation-heavy domains like law, as it helps laymen understand legal preconditions and rights by generating high-quality answers with accurate attributions.", }
In this paper, we address the problem of answering questions about preconditions in the law, e.g. {``}When can the court terminate the guardianship of a natural person?{''}. When answering legal questions, it is important to attribute the relevant part of the law; we therefore not only generate answers but also references to law articles. We implement a retrieval augmented generation (RAG) pipeline for long-form answers based on the Dutch law, using several state-of-the-art retrievers and generators. For evaluating our pipeline, we create a dataset containing legal QA pairs with attributions. Our experiments show promising results on our extended version for the automatic evaluation metrics from the Automatic LLMs{'} Citation Evaluation (ALCE) Framework and the G-EVAL Framework. Our findings indicate that RAG has significant potential in complex, citation-heavy domains like law, as it helps laymen understand legal preconditions and rights by generating high-quality answers with accurate attributions.
[ "Redelaar, Felicia", "Van Drie, Romy", "Verberne, Suzan", "De Boer, Maaike" ]
Attributed Question Answering for Preconditions in the Dutch Law
nllp-1.12
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
1
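Since the pipeline above must attribute answers to law articles, a simple ALCE-style attribution check can be sketched: parse the article citations the system emits and verify each one is among the retrieved articles. The bracketed citation format and Dutch article identifiers are assumptions for illustration.

```python
# Sketch: citation precision for attributed answers.
import re

def citation_precision(answer: str, retrieved_ids: set) -> float:
    cited = re.findall(r"\[([^\]]+)\]", answer)
    if not cited:
        return 0.0
    return sum(c in retrieved_ids for c in cited) / len(cited)

answer = ("The court can terminate guardianship when the minor's "
          "development is seriously threatened [Art. 1:266 BW].")
print(citation_precision(answer, {"Art. 1:266 BW", "Art. 1:267 BW"}))  # 1.0
```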
https://aclanthology.org/2024.nllp-1.13.bib
https://aclanthology.org/2024.nllp-1.13/
@inproceedings{etcheverry-etal-2024-algorithm, title = "Algorithm for Automatic Legislative Text Consolidation", author = "Etcheverry, Matias and Real-del-Sarte, Thibaud and Chavallard, Pauline", editor = "Aletras, Nikolaos and Chalkidis, Ilias and Barrett, Leslie and Goan{\textcommabelow{t}}{\u{a}}, C{\u{a}}t{\u{a}}lina and Preo{\textcommabelow{t}}iuc-Pietro, Daniel and Spanakis, Gerasimos", booktitle = "Proceedings of the Natural Legal Language Processing Workshop 2024", month = nov, year = "2024", address = "Miami, FL, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.nllp-1.13", pages = "166--175", abstract = "This study introduces a method for automating the consolidation process in a legal context, a time-consuming task traditionally performed by legal professionals. We present a generative approach that processes legislative texts to automatically apply amendments. Our method employs a lightweight quantized generative model, fine-tuned with LoRA, to generate accurate and reliable amended texts. To the authors' knowledge, this is the first time generative models have been used for legislative text consolidation. Our dataset is publicly available on HuggingFace. Experimental results demonstrate a significant improvement in efficiency, offering faster updates to legal documents. A fully automated pipeline for legislative text consolidation can be completed in a few hours, with a success rate of more than 63{\%} on a difficult bill.", }
This study introduces a method for automating the consolidation process in a legal context, a time-consuming task traditionally performed by legal professionals. We present a generative approach that processes legislative texts to automatically apply amendments. Our method employs a lightweight quantized generative model, fine-tuned with LoRA, to generate accurate and reliable amended texts. To the authors' knowledge, this is the first time generative models have been used for legislative text consolidation. Our dataset is publicly available on HuggingFace. Experimental results demonstrate a significant improvement in efficiency, offering faster updates to legal documents. A fully automated pipeline for legislative text consolidation can be completed in a few hours, with a success rate of more than 63{\%} on a difficult bill.
[ "Etcheverry, Matias", "Real-del-Sarte, Thibaud", "Chavallard, Pauline" ]
Algorithm for Automatic Legislative Text Consolidation
nllp-1.13
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
1
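The general recipe named in the abstract above, a quantized generative model with LoRA adapters, can be sketched with `transformers` and `peft`. The base checkpoint, hyperparameters, and the toy amendment prompt below are placeholders, not the authors' configuration.

```python
# Sketch: quantized causal LM + LoRA adapters, to be fine-tuned on
# (amendment instruction, consolidated text) pairs. All choices here
# are assumptions for illustration.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

base = "mistralai/Mistral-7B-v0.1"  # hypothetical choice of base model
bnb = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.float16)
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base, quantization_config=bnb)

lora = LoraConfig(r=8, lora_alpha=16, lora_dropout=0.05,
                  target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM")
model = get_peft_model(model, lora)
model.print_trainable_parameters()  # only the adapter weights are trained

prompt = ("Article text: ...\nAmendment: replace 'sixty days' with "
          "'ninety days'.\nConsolidated article:")
batch = tokenizer(prompt, return_tensors="pt")
# Training would proceed with a standard Trainer over many such pairs.
```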
https://aclanthology.org/2024.nllp-1.14.bib
https://aclanthology.org/2024.nllp-1.14/
@inproceedings{trautmann-etal-2024-measuring, title = "Measuring the Groundedness of Legal Question-Answering Systems", author = "Trautmann, Dietrich and Ostapuk, Natalia and Grail, Quentin and Pol, Adrian and Bonifazi, Guglielmo and Gao, Shang and Gajek, Martin", editor = "Aletras, Nikolaos and Chalkidis, Ilias and Barrett, Leslie and Goan{\textcommabelow{t}}{\u{a}}, C{\u{a}}t{\u{a}}lina and Preo{\textcommabelow{t}}iuc-Pietro, Daniel and Spanakis, Gerasimos", booktitle = "Proceedings of the Natural Legal Language Processing Workshop 2024", month = nov, year = "2024", address = "Miami, FL, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.nllp-1.14", pages = "176--186", abstract = "In high-stakes domains like legal question-answering, the accuracy and trustworthiness of generative AI systems are of paramount importance. This work presents a comprehensive benchmark of various methods to assess the groundedness of AI-generated responses, aiming to significantly enhance their reliability. Our experiments include similarity-based metrics and natural language inference models to evaluate whether responses are well-founded in the given contexts. We also explore different prompting strategies for large language models to improve the detection of ungrounded responses. We validated the effectiveness of these methods using a newly created grounding classification corpus, designed specifically for legal queries and corresponding responses from retrieval-augmented prompting, focusing on their alignment with source material. Our results indicate potential in groundedness classification of generated responses, with the best method achieving a macro-F1 score of 0.8. Additionally, we evaluated the methods in terms of their latency to determine their suitability for real-world applications, as this step typically follows the generation process. This capability is essential for processes that may trigger additional manual verification or automated response regeneration. In summary, this study demonstrates the potential of various detection methods to improve the trustworthiness of generative AI in legal settings.", }
In high-stakes domains like legal question-answering, the accuracy and trustworthiness of generative AI systems are of paramount importance. This work presents a comprehensive benchmark of various methods to assess the groundedness of AI-generated responses, aiming to significantly enhance their reliability. Our experiments include similarity-based metrics and natural language inference models to evaluate whether responses are well-founded in the given contexts. We also explore different prompting strategies for large language models to improve the detection of ungrounded responses. We validated the effectiveness of these methods using a newly created grounding classification corpus, designed specifically for legal queries and corresponding responses from retrieval-augmented prompting, focusing on their alignment with source material. Our results indicate potential in groundedness classification of generated responses, with the best method achieving a macro-F1 score of 0.8. Additionally, we evaluated the methods in terms of their latency to determine their suitability for real-world applications, as this step typically follows the generation process. This capability is essential for processes that may trigger additional manual verification or automated response regeneration. In summary, this study demonstrates the potential of various detection methods to improve the trustworthiness of generative AI in legal settings.
[ "Trautmann, Dietrich", "Ostapuk, Natalia", "Grail, Quentin", "Pol, Adrian", "Bonifazi, Guglielmo", "Gao, Shang", "Gajek, Martin" ]
Measuring the Groundedness of Legal Question-Answering Systems
nllp-1.14
Poster
2410.08764
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
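One common way to implement the groundedness check studied above is to run an off-the-shelf NLI model with the retrieved context as premise and the generated answer as hypothesis. A minimal sketch, assuming a public MNLI checkpoint rather than whichever models the paper actually benchmarks:

```python
# Sketch of an NLI-based groundedness check: does the context entail the
# generated answer? The checkpoint is one common public NLI model.
from transformers import pipeline

nli = pipeline("text-classification", model="microsoft/deberta-large-mnli")

def groundedness(context: str, answer: str) -> dict:
    # NLI convention: premise = retrieved context, hypothesis = answer.
    return nli({"text": context, "text_pair": answer})[0]

ctx = "Under clause 4.2, the tenant may terminate with 30 days' notice."
ans = "The tenant can end the lease by giving one month's notice."
print(groundedness(ctx, ans))  # e.g. {'label': 'ENTAILMENT', 'score': ...}
```

Thresholding the entailment score then gives the binary grounded/ungrounded decision; latency, which the paper also measures, is dominated by this single forward pass.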
https://aclanthology.org/2024.nllp-1.15.bib
https://aclanthology.org/2024.nllp-1.15/
@inproceedings{attali-tomeh-2024-transductive, title = "Transductive Legal Judgment Prediction Combining {BERT} Embeddings with Delaunay-Based {GNN}s", author = "Attali, Hugo and Tomeh, Nadi", editor = "Aletras, Nikolaos and Chalkidis, Ilias and Barrett, Leslie and Goan{\textcommabelow{t}}{\u{a}}, C{\u{a}}t{\u{a}}lina and Preo{\textcommabelow{t}}iuc-Pietro, Daniel and Spanakis, Gerasimos", booktitle = "Proceedings of the Natural Legal Language Processing Workshop 2024", month = nov, year = "2024", address = "Miami, FL, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.nllp-1.15", pages = "187--193", abstract = "This paper presents a novel approach to legal judgment prediction by combining BERT embeddings with a Delaunay-based Graph Neural Network (GNN). Unlike inductive methods that classify legal documents independently, our transductive approach models the entire document set as a graph, capturing both contextual and relational information. This method significantly improves classification accuracy by enabling effective label propagation across connected documents. Evaluated on the Swiss-Judgment-Prediction (SJP) dataset, our model outperforms established baselines, including larger models with cross-lingual training and data augmentation techniques, while maintaining efficiency with minimal computational overhead.", }
This paper presents a novel approach to legal judgment prediction by combining BERT embeddings with a Delaunay-based Graph Neural Network (GNN). Unlike inductive methods that classify legal documents independently, our transductive approach models the entire document set as a graph, capturing both contextual and relational information. This method significantly improves classification accuracy by enabling effective label propagation across connected documents. Evaluated on the Swiss-Judgment-Prediction (SJP) dataset, our model outperforms established baselines, including larger models with cross-lingual training and data augmentation techniques, while maintaining efficiency with minimal computational overhead.
[ "Attali, Hugo", "Tomeh, Nadi" ]
Transductive Legal Judgment Prediction Combining BERT Embeddings with Delaunay-Based GNNs
nllp-1.15
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
1
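The transductive setup above can be sketched in three steps: embed every document, triangulate the embeddings, and run a small GCN over the resulting graph. One assumption to flag: Delaunay triangulation is only tractable in low dimensions, so this sketch reduces the embeddings to 2D with PCA first; the paper's actual graph construction may differ.

```python
# Sketch: Delaunay triangulation over (reduced) document embeddings,
# then node classification with a two-layer GCN.
import numpy as np
import torch
from scipy.spatial import Delaunay
from sklearn.decomposition import PCA
from torch_geometric.nn import GCNConv

emb = np.random.randn(100, 768).astype("float32")  # stand-in BERT embeddings
pts = PCA(n_components=2).fit_transform(emb)       # our assumption: 2D first
tri = Delaunay(pts)

# Every pair of vertices sharing a triangle becomes an undirected edge.
edges = set()
for a, b, c in tri.simplices:
    edges |= {(a, b), (b, a), (b, c), (c, b), (a, c), (c, a)}
edge_index = torch.tensor(sorted(edges), dtype=torch.long).t()

class GCN(torch.nn.Module):
    def __init__(self, dim_in, dim_hidden, n_classes):
        super().__init__()
        self.conv1 = GCNConv(dim_in, dim_hidden)
        self.conv2 = GCNConv(dim_hidden, n_classes)

    def forward(self, x, edge_index):
        h = torch.relu(self.conv1(x, edge_index))
        return self.conv2(h, edge_index)  # logits per document node

model = GCN(768, 64, 2)
logits = model(torch.from_numpy(emb), edge_index)
print(logits.shape)  # torch.Size([100, 2])
```

Because all documents sit in one graph, label information propagates from training nodes to test nodes through the convolutions, which is the transductive effect the abstract credits for the accuracy gain.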
https://aclanthology.org/2024.nllp-1.16.bib
https://aclanthology.org/2024.nllp-1.16/
@inproceedings{chowdhury-etal-2024-cross, title = "Cross Examine: An Ensemble-based approach to leverage Large Language Models for Legal Text Analytics", author = "Chowdhury, Saurav and Dey, Lipika and Joshi, Suyog", editor = "Aletras, Nikolaos and Chalkidis, Ilias and Barrett, Leslie and Goan{\textcommabelow{t}}{\u{a}}, C{\u{a}}t{\u{a}}lina and Preo{\textcommabelow{t}}iuc-Pietro, Daniel and Spanakis, Gerasimos", booktitle = "Proceedings of the Natural Legal Language Processing Workshop 2024", month = nov, year = "2024", address = "Miami, FL, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.nllp-1.16", pages = "194--204", abstract = "Legal documents are complex in nature, describing a course of argumentative reasoning that is followed to settle a case. Churning through large volumes of legal documents is a daily requirement for a large number of professionals who need access to the information embedded in them. Natural language processing methods that help in document summarization with key information components, insight extraction and question answering play a crucial role in legal text processing. Most existing document analysis systems use supervised machine learning; they require large volumes of annotated training data for every different application and are expensive to build. In this paper we propose a legal text analytics pipeline using Large Language Models (LLMs), which can work with little or no training data. For document summarization, we propose an iterative pipeline using retrieval augmented generation to ensure that the generated text remains contextually relevant. For question answering, we propose a novel ontology-driven ensemble approach similar to cross-examination that exploits questioning and verification principles. A knowledge graph, created with the extracted information, stores the key entities and relationships reflecting the repository content structure. A new dataset is created with Indian court documents related to bail applications for cases filed under the Protection of Children from Sexual Offences (POCSO) Act, 2012, an Indian law to protect children from sexual abuse and offences. Analysis of the insights extracted from the answers reveals patterns of crime and the social conditions leading to those crimes, which are important inputs for social scientists as well as the legal system.", }
Legal documents are complex in nature, describing a course of argumentative reasoning that is followed to settle a case. Churning through large volumes of legal documents is a daily requirement for a large number of professionals who need access to the information embedded in them. Natural language processing methods that help in document summarization with key information components, insight extraction and question answering play a crucial role in legal text processing. Most existing document analysis systems use supervised machine learning; they require large volumes of annotated training data for every different application and are expensive to build. In this paper we propose a legal text analytics pipeline using Large Language Models (LLMs), which can work with little or no training data. For document summarization, we propose an iterative pipeline using retrieval augmented generation to ensure that the generated text remains contextually relevant. For question answering, we propose a novel ontology-driven ensemble approach similar to cross-examination that exploits questioning and verification principles. A knowledge graph, created with the extracted information, stores the key entities and relationships reflecting the repository content structure. A new dataset is created with Indian court documents related to bail applications for cases filed under the Protection of Children from Sexual Offences (POCSO) Act, 2012, an Indian law to protect children from sexual abuse and offences. Analysis of the insights extracted from the answers reveals patterns of crime and the social conditions leading to those crimes, which are important inputs for social scientists as well as the legal system.
[ "Chowdhury, Saurav", "Dey, Lipika", "Joshi, Suyog" ]
Cross Examine: An Ensemble-based approach to leverage Large Language Models for Legal Text Analytics
nllp-1.16
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
1
https://aclanthology.org/2024.nllp-1.17.bib
https://aclanthology.org/2024.nllp-1.17/
@inproceedings{aspromonte-etal-2024-llms, title = "{LLM}s to the Rescue: Explaining {DSA} Statements of Reason with Platform{'}s Terms of Services", author = "Aspromonte, Marco and Ferraris, Andrea and Galli, Federico and Contissa, Giuseppe", editor = "Aletras, Nikolaos and Chalkidis, Ilias and Barrett, Leslie and Goan{\textcommabelow{t}}{\u{a}}, C{\u{a}}t{\u{a}}lina and Preo{\textcommabelow{t}}iuc-Pietro, Daniel and Spanakis, Gerasimos", booktitle = "Proceedings of the Natural Legal Language Processing Workshop 2024", month = nov, year = "2024", address = "Miami, FL, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.nllp-1.17", pages = "205--215", abstract = "The Digital Services Act (DSA) requires online platforms in the EU to provide {``}statements of reason{''} (SoRs) when restricting user content, but their effectiveness in ensuring transparency is still debated due to vague and complex terms of service (ToS). This paper explores the use of NLP techniques, specifically multi-agent systems based on large language models (LLMs), to clarify SoRs by linking them to relevant ToS sections. Analysing SoRs from platforms like Booking.com, Reddit, and LinkedIn, our findings show that LLMs can enhance the interpretability of content moderation decisions, improving user understanding and engagement with DSA requirements.", }
The Digital Services Act (DSA) requires online platforms in the EU to provide {``}statements of reason{''} (SoRs) when restricting user content, but their effectiveness in ensuring transparency is still debated due to vague and complex terms of service (ToS). This paper explores the use of NLP techniques, specifically multi-agent systems based on large language models (LLMs), to clarify SoRs by linking them to relevant ToS sections. Analysing SoRs from platforms like Booking.com, Reddit, and LinkedIn, our findings show that LLMs can enhance the interpretability of content moderation decisions, improving user understanding and engagement with DSA requirements.
[ "Aspromonte, Marco", "Ferraris, Andrea", "Galli, Federico", "Contissa, Giuseppe" ]
LLMs to the Rescue: Explaining DSA Statements of Reason with Platform's Terms of Services
nllp-1.17
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
1
https://aclanthology.org/2024.nllp-1.18.bib
https://aclanthology.org/2024.nllp-1.18/
@inproceedings{blair-stanek-etal-2024-blt, title = "{BLT}: Can Large Language Models Handle Basic Legal Text?", author = "Blair-Stanek, Andrew and Holzenberger, Nils and Van Durme, Benjamin", editor = "Aletras, Nikolaos and Chalkidis, Ilias and Barrett, Leslie and Goan{\textcommabelow{t}}{\u{a}}, C{\u{a}}t{\u{a}}lina and Preo{\textcommabelow{t}}iuc-Pietro, Daniel and Spanakis, Gerasimos", booktitle = "Proceedings of the Natural Legal Language Processing Workshop 2024", month = nov, year = "2024", address = "Miami, FL, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.nllp-1.18", pages = "216--232", abstract = "We find that the best publicly available LLMs like GPT-4 and Claude currently perform poorly on basic legal text handling. This motivates the creation of a benchmark consisting of examples that lawyers and paralegals would expect LLMs to handle zero-shot, such as looking up the text at a line of a witness deposition or at a subsection of a contract. LLMs{'} poor performance on this benchmark casts into doubt their reliability as-is for legal practice. However, fine-tuning on our training set brings even a small model to near-perfect performance. This benchmark will be useful for fine-tuning LLMs for downstream legal tasks, as well as for tracking LLMs{'} reliability as-is for basic legal tasks.", }
We find that the best publicly available LLMs like GPT-4 and Claude currently perform poorly on basic legal text handling. This motivates the creation of a benchmark consisting of examples that lawyers and paralegals would expect LLMs to handle zero-shot, such as looking up the text at a line of a witness deposition or at a subsection of a contract. LLMs{'} poor performance on this benchmark casts into doubt their reliability as-is for legal practice. However, fine-tuning on our training set brings even a small model to near-perfect performance. This benchmark will be useful for fine-tuning LLMs for downstream legal tasks, as well as for tracking LLMs{'} reliability as-is for basic legal tasks.
[ "Blair-Stanek, Andrew", "Holzenberger, Nils", "Van Durme, Benjamin" ]
BLT: Can Large Language Models Handle Basic Legal Text?
nllp-1.18
Poster
2311.09693
[ "https://github.com/blairstanek/blt" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
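The kind of zero-shot probe behind the BLT benchmark above is straightforward to generate synthetically: number the lines of a transcript and ask for the exact text at a given line. A sketch with invented transcript content:

```python
# Sketch of a BLT-style "basic text handling" probe: retrieve the exact
# text at a numbered line of a transcript. Content is invented.
import random

lines = [f"{i}  WITNESS: answer text {i}" for i in range(1, 26)]
transcript = "\n".join(lines)

def make_lookup_example():
    """Build one (prompt, gold answer) pair for the line-lookup task."""
    target = random.randint(1, len(lines))
    prompt = (f"{transcript}\n\nWhat is the exact text of line {target} "
              "above? Reply with that line only.")
    return prompt, lines[target - 1]

prompt, gold = make_lookup_example()
# A model's reply is string-compared to `gold`; accuracy over many such
# examples (and longer transcripts) gives the benchmark score.
```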
https://aclanthology.org/2024.nllp-1.19.bib
https://aclanthology.org/2024.nllp-1.19/
@inproceedings{cheniki-etal-2024-multi, title = "Multi-Property Multi-Label Documents Metadata Recommendation based on Encoder Embeddings", author = {Cheniki, Nasredine and Daudaravicius, Vidas and Feliachi, Abdelfettah and Hardy, Didier and K{\"u}ster, Marc Wilhelm}, editor = "Aletras, Nikolaos and Chalkidis, Ilias and Barrett, Leslie and Goan{\textcommabelow{t}}{\u{a}}, C{\u{a}}t{\u{a}}lina and Preo{\textcommabelow{t}}iuc-Pietro, Daniel and Spanakis, Gerasimos", booktitle = "Proceedings of the Natural Legal Language Processing Workshop 2024", month = nov, year = "2024", address = "Miami, FL, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.nllp-1.19", pages = "233--242", abstract = "The task of document classification, particularly multi-label classification, presents a significant challenge due to the complexity of assigning multiple relevant labels to each document. This complexity is further amplified in multi-property multi-label classification tasks, where documents must be categorized across various sets of labels. In this research, we introduce an innovative encoder embedding-driven approach to multi-property multi-label document classification that leverages semantic-text similarity and the reuse of pre-existing annotated data to enhance the efficiency and accuracy of the document annotation process. Our method requires only a single model for text similarity, eliminating the need for multiple property-specific classifiers and thereby reducing computational demands and simplifying deployment. We evaluate our approach through a prototype deployed for daily operations, which demonstrates superior performance over existing classification systems. Our contributions include improved accuracy without additional training, increased efficiency, and demonstrated effectiveness in practical applications. The results of our study indicate the potential of our approach to be applied across various domains requiring multi-property multi-label document classification, offering a scalable and adaptable solution for metadata annotation tasks.", }
The task of document classification, particularly multi-label classification, presents a significant challenge due to the complexity of assigning multiple relevant labels to each document. This complexity is further amplified in multi-property multi-label classification tasks, where documents must be categorized across various sets of labels. In this research, we introduce an innovative encoder embedding-driven approach to multi-property multi-label document classification that leverages semantic-text similarity and the reuse of pre-existing annotated data to enhance the efficiency and accuracy of the document annotation process. Our method requires only a single model for text similarity, eliminating the need for multiple property-specific classifiers and thereby reducing computational demands and simplifying deployment. We evaluate our approach through a prototype deployed for daily operations, which demonstrates superior performance over existing classification systems. Our contributions include improved accuracy without additional training, increased efficiency, and demonstrated effectiveness in practical applications. The results of our study indicate the potential of our approach to be applied across various domains requiring multi-property multi-label document classification, offering a scalable and adaptable solution for metadata annotation tasks.
[ "Cheniki, Nasredine", "Daudaravicius, Vidas", "Feliachi, Abdelfettah", "Hardy, Didier", "K{\\\"u}ster, Marc Wilhelm" ]
Multi-Property Multi-Label Documents Metadata Recommendation based on Encoder Embeddings
nllp-1.19
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
1
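The encoder-embedding recommendation scheme above reduces to nearest-neighbour label transfer, with one similarity model shared across all metadata properties. A minimal sketch with a toy two-document archive and a placeholder encoder checkpoint:

```python
# Sketch: a new document inherits the labels of its nearest annotated
# neighbours, per metadata property. Archive and model are placeholders.
from sentence_transformers import SentenceTransformer, util

archive = [
    ("Directive on package travel and linked travel arrangements",
     {"subject": ["consumer protection"], "type": ["directive"]}),
    ("Regulation on market surveillance of products",
     {"subject": ["internal market"], "type": ["regulation"]}),
]

encoder = SentenceTransformer("all-MiniLM-L6-v2")
doc_emb = encoder.encode([t for t, _ in archive], convert_to_tensor=True)

def recommend(text: str, prop: str, k: int = 1):
    """Suggest labels for one property via embedding similarity."""
    q = encoder.encode(text, convert_to_tensor=True)
    hits = util.semantic_search(q, doc_emb, top_k=k)[0]
    labels = []
    for h in hits:
        labels.extend(archive[h["corpus_id"]][1].get(prop, []))
    return sorted(set(labels))

print(recommend("Proposal for a regulation on toy safety", "type"))
```

Because only the archive and the `prop` argument change per property, no property-specific classifier or additional training is needed, which matches the single-model benefit the abstract emphasizes.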
https://aclanthology.org/2024.nllp-1.20.bib
https://aclanthology.org/2024.nllp-1.20/
@inproceedings{staliunaite-etal-2024-comparative, title = "Comparative Study of Explainability Methods for Legal Outcome Prediction", author = "Staliunaite, Ieva and Valvoda, Josef and Satoh, Ken", editor = "Aletras, Nikolaos and Chalkidis, Ilias and Barrett, Leslie and Goan{\textcommabelow{t}}{\u{a}}, C{\u{a}}t{\u{a}}lina and Preo{\textcommabelow{t}}iuc-Pietro, Daniel and Spanakis, Gerasimos", booktitle = "Proceedings of the Natural Legal Language Processing Workshop 2024", month = nov, year = "2024", address = "Miami, FL, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.nllp-1.20", pages = "243--258", abstract = "This paper investigates explainability in Natural Legal Language Processing (NLLP). We study the task of legal outcome prediction of the European Court of Human Rights cases in a ternary classification setup, where a language model is fine-tuned to predict whether an article has been claimed and violated (positive outcome), claimed but not violated (negative outcome) or not claimed at all (null outcome). Specifically, we experiment with three popular NLP explainability methods. Correlating the attribution scores of input-level methods (Integrated Gradients and Contrastive Explanations) with rationales from court rulings, we show that the correlations are very weak, with absolute values of Spearman and Kendall correlation coefficients ranging between 0.003 and 0.094. Furthermore, we use a concept-level interpretability method (Concept Erasure) with human expert annotations of legal reasoning, to show that obscuring legal concepts from the model representation has an insignificant effect on model performance (at most a decline of 0.26 F1). Therefore, our results indicate that automated legal outcome prediction models are not reliably grounded in legal reasoning.", }
This paper investigates explainability in Natural Legal Language Processing (NLLP). We study the task of legal outcome prediction of the European Court of Human Rights cases in a ternary classification setup, where a language model is fine-tuned to predict whether an article has been claimed and violated (positive outcome), claimed but not violated (negative outcome) or not claimed at all (null outcome). Specifically, we experiment with three popular NLP explainability methods. Correlating the attribution scores of input-level methods (Integrated Gradients and Contrastive Explanations) with rationales from court rulings, we show that the correlations are very weak, with absolute values of Spearman and Kendall correlation coefficients ranging between 0.003 and 0.094. Furthermore, we use a concept-level interpretability method (Concept Erasure) with human expert annotations of legal reasoning, to show that obscuring legal concepts from the model representation has an insignificant effect on model performance (at most a decline of 0.26 F1). Therefore, our results indicate that automated legal outcome prediction models are not reliably grounded in legal reasoning.
[ "Staliunaite, Ieva", "Valvoda, Josef", "Satoh, Ken" ]
Comparative Study of Explainability Methods for Legal Outcome Prediction
nllp-1.20
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
1
https://aclanthology.org/2024.nllp-1.21.bib
https://aclanthology.org/2024.nllp-1.21/
@inproceedings{bordia-2024-bonafide, title = "Bonafide at {L}egal{L}ens 2024 Shared Task: Using Lightweight {D}e{BERT}a Based Encoder For Legal Violation Detection and Resolution", author = "Bordia, Shikha", editor = "Aletras, Nikolaos and Chalkidis, Ilias and Barrett, Leslie and Goan{\textcommabelow{t}}{\u{a}}, C{\u{a}}t{\u{a}}lina and Preo{\textcommabelow{t}}iuc-Pietro, Daniel and Spanakis, Gerasimos", booktitle = "Proceedings of the Natural Legal Language Processing Workshop 2024", month = nov, year = "2024", address = "Miami, FL, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.nllp-1.21", pages = "259--266", abstract = "In this work, we present two systems{---}Named Entity Resolution (NER) and Natural Language Inference (NLI){---}for detecting legal violations within unstructured textual data and for associating these violations with potentially affected individuals, respectively. Both these systems are lightweight DeBERTa-based encoders that outperform the LLM baselines. The proposed NER system achieved an F1 score of 60.01{\%} on Subtask A of the LegalLens challenge, which focuses on identifying violations. The proposed NLI system achieved an F1 score of 84.73{\%} on Subtask B of the LegalLens challenge, which focuses on resolving these violations by matching them with pre-existing legal complaints of class action cases. Our NER system ranked sixth and NLI system ranked fifth on the LegalLens leaderboard. We release the trained models and inference scripts.", }
In this work, we present two systems{---}Named Entity Resolution (NER) and Natural Language Inference (NLI){---}for detecting legal violations within unstructured textual data and for associating these violations with potentially affected individuals, respectively. Both these systems are lightweight DeBERTa-based encoders that outperform the LLM baselines. The proposed NER system achieved an F1 score of 60.01{\%} on Subtask A of the LegalLens challenge, which focuses on identifying violations. The proposed NLI system achieved an F1 score of 84.73{\%} on Subtask B of the LegalLens challenge, which focuses on resolving these violations by matching them with pre-existing legal complaints of class action cases. Our NER system ranked sixth and NLI system ranked fifth on the LegalLens leaderboard. We release the trained models and inference scripts.
[ "Bordia, Shikha" ]
Bonafide at LegalLens 2024 Shared Task: Using Lightweight DeBERTa Based Encoder For Legal Violation Detection and Resolution
nllp-1.21
Poster
2410.22977
[ "https://github.com/BordiaS/LegalLens_inference" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
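A lightweight DeBERTa-based NER encoder of the kind used in this and other LegalLens submissions is a token-classification head over a pretrained encoder. In the sketch below, the BIO label names are our guess at the shared task's entity types, and the head is untrained until fine-tuned on the task data:

```python
# Sketch of a DeBERTa token-classification model for legal-violation NER.
# Label set is assumed from the LegalLens entity types; training omitted.
from transformers import (AutoModelForTokenClassification, AutoTokenizer,
                          pipeline)

labels = ["O", "B-VIOLATION", "I-VIOLATION", "B-VIOLATED_BY",
          "I-VIOLATED_BY", "B-VIOLATED_ON", "I-VIOLATED_ON", "B-LAW", "I-LAW"]
id2label = dict(enumerate(labels))
label2id = {l: i for i, l in id2label.items()}

tokenizer = AutoTokenizer.from_pretrained("microsoft/deberta-v3-base")
model = AutoModelForTokenClassification.from_pretrained(
    "microsoft/deberta-v3-base",
    num_labels=len(labels), id2label=id2label, label2id=label2id)

# After fine-tuning on the shared-task data, inference is a one-liner
# (with a fresh head, as here, the predictions are random):
ner = pipeline("token-classification", model=model, tokenizer=tokenizer,
               aggregation_strategy="simple")
print(ner("The company collected user data without consent."))
```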
https://aclanthology.org/2024.nllp-1.22.bib
https://aclanthology.org/2024.nllp-1.22/
@inproceedings{chlapanis-etal-2024-lar, title = "{LAR}-{ECHR}: A New Legal Argument Reasoning Task and Dataset for Cases of the {E}uropean Court of Human Rights", author = "Chlapanis, Odysseas and Galanis, Dimitris and Androutsopoulos, Ion", editor = "Aletras, Nikolaos and Chalkidis, Ilias and Barrett, Leslie and Goan{\textcommabelow{t}}{\u{a}}, C{\u{a}}t{\u{a}}lina and Preo{\textcommabelow{t}}iuc-Pietro, Daniel and Spanakis, Gerasimos", booktitle = "Proceedings of the Natural Legal Language Processing Workshop 2024", month = nov, year = "2024", address = "Miami, FL, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.nllp-1.22", pages = "267--279", abstract = "We present Legal Argument Reasoning (LAR), a novel task designed to evaluate the legal reasoning capabilities of Large Language Models (LLMs). The task requires selecting the correct next statement (from multiple choice options) in a chain of legal arguments from court proceedings, given the facts of the case. We constructed a dataset (LAR-ECHR) for this task using cases from the European Court of Human Rights (ECHR). We evaluated seven general-purpose LLMs on LAR-ECHR and found that (a) the ranking of the models is aligned with that of LegalBench, an established US-based legal reasoning benchmark, even though LAR-ECHR is based on EU law, (b) LAR-ECHR distinguishes top models more clearly, compared to LegalBench, (c) even the best model (GPT-4o) obtains 75.8{\%} accuracy on LAR-ECHR, indicating significant potential for further model improvement. The process followed to construct LAR-ECHR can be replicated with cases from other legal systems.", }
We present Legal Argument Reasoning (LAR), a novel task designed to evaluate the legal reasoning capabilities of Large Language Models (LLMs). The task requires selecting the correct next statement (from multiple choice options) in a chain of legal arguments from court proceedings, given the facts of the case. We constructed a dataset (LAR-ECHR) for this task using cases from the European Court of Human Rights (ECHR). We evaluated seven general-purpose LLMs on LAR-ECHR and found that (a) the ranking of the models is aligned with that of LegalBench, an established US-based legal reasoning benchmark, even though LAR-ECHR is based on EU law, (b) LAR-ECHR distinguishes top models more clearly, compared to LegalBench, (c) even the best model (GPT-4o) obtains 75.8{\%} accuracy on LAR-ECHR, indicating significant potential for further model improvement. The process followed to construct LAR-ECHR can be replicated with cases from other legal systems.
[ "Chlapanis, Odysseas", "Galanis, Dimitris", "Androutsopoulos, Ion" ]
LAR-ECHR: A New Legal Argument Reasoning Task and Dataset for Cases of the European Court of Human Rights
nllp-1.22
Poster
2410.13352
[ "" ]
https://huggingface.co/papers/2410.13352
3
1
0
3
[]
[ "AUEB-NLP/lar-echr" ]
[]
[]
[ "AUEB-NLP/lar-echr" ]
[]
1
https://aclanthology.org/2024.nllp-1.24.bib
https://aclanthology.org/2024.nllp-1.24/
@inproceedings{hou-etal-2024-gaps, title = "Gaps or Hallucinations? Scrutinizing Machine-Generated Legal Analysis for Fine-grained Text Evaluations", author = "Hou, Abe and Jurayj, William and Holzenberger, Nils and Blair-Stanek, Andrew and Van Durme, Benjamin", editor = "Aletras, Nikolaos and Chalkidis, Ilias and Barrett, Leslie and Goan{\textcommabelow{t}}{\u{a}}, C{\u{a}}t{\u{a}}lina and Preo{\textcommabelow{t}}iuc-Pietro, Daniel and Spanakis, Gerasimos", booktitle = "Proceedings of the Natural Legal Language Processing Workshop 2024", month = nov, year = "2024", address = "Miami, FL, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.nllp-1.24", pages = "280--302", abstract = "Large Language Models (LLMs) show promise as a writing aid for professionals performing legal analyses. However, LLMs can often hallucinate in this setting, in ways difficult to recognize by non-professionals and existing text evaluation metrics. In this work, we pose the question: when can machine-generated legal analysis be evaluated as acceptable? We introduce the neutral notion of gaps {--} as opposed to hallucinations in a strict erroneous sense {--} to refer to the difference between human-written and machine-generated legal analysis. Gaps do not always equate to invalid generation. Working with legal experts, we consider the CLERC generation task proposed in Hou et al. (2024b), leading to a taxonomy, a fine-grained detector for predicting gap categories, and an annotated dataset for automatic evaluation. Our best detector achieves 67{\%} F1 score and 80{\%} precision on the test set. Employing this detector as an automated metric on legal analysis generated by SOTA LLMs, we find around 80{\%} contain hallucinations of different kinds.", }
Large Language Models (LLMs) show promise as a writing aid for professionals performing legal analyses. However, LLMs can often hallucinate in this setting, in ways difficult to recognize by non-professionals and existing text evaluation metrics. In this work, we pose the question: when can machine-generated legal analysis be evaluated as acceptable? We introduce the neutral notion of gaps {--} as opposed to hallucinations in a strict erroneous sense {--} to refer to the difference between human-written and machine-generated legal analysis. Gaps do not always equate to invalid generation. Working with legal experts, we consider the CLERC generation task proposed in Hou et al. (2024b), leading to a taxonomy, a fine-grained detector for predicting gap categories, and an annotated dataset for automatic evaluation. Our best detector achieves 67{\%} F1 score and 80{\%} precision on the test set. Employing this detector as an automated metric on legal analysis generated by SOTA LLMs, we find around 80{\%} contain hallucinations of different kinds.
[ "Hou, Abe", "Jurayj, William", "Holzenberger, Nils", "Blair-Stanek, Andrew", "Van Durme, Benjamin" ]
Gaps or Hallucinations? Scrutinizing Machine-Generated Legal Analysis for Fine-grained Text Evaluations
nllp-1.24
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
1
https://aclanthology.org/2024.nllp-1.25.bib
https://aclanthology.org/2024.nllp-1.25/
@inproceedings{kwak-etal-2024-classify, title = "Classify First, and Then Extract: Prompt Chaining Technique for Information Extraction", author = "Kwak, Alice and Morrison, Clayton and Bambauer, Derek and Surdeanu, Mihai", editor = "Aletras, Nikolaos and Chalkidis, Ilias and Barrett, Leslie and Goan{\textcommabelow{t}}{\u{a}}, C{\u{a}}t{\u{a}}lina and Preo{\textcommabelow{t}}iuc-Pietro, Daniel and Spanakis, Gerasimos", booktitle = "Proceedings of the Natural Legal Language Processing Workshop 2024", month = nov, year = "2024", address = "Miami, FL, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.nllp-1.25", pages = "303--317", abstract = "This work presents a new task-aware prompt design and example retrieval approach for information extraction (IE) using a prompt chaining technique. Our approach divides IE tasks into two steps: (1) text classification to understand what information (e.g., entity or event types) is contained in the underlying text and (2) information extraction for the identified types. Initially, we use a large language model (LLM) in a few-shot setting to classify the contained information. The classification output is used to select the relevant prompt and retrieve the examples relevant to the input text. Finally, we ask an LLM to perform the information extraction with the generated prompt. By evaluating our approach on legal IE tasks with two different LLMs, we demonstrate that the prompt chaining technique improves the LLM{'}s overall performance in a few-shot setting when compared to the baseline in which examples from all possible classes are included in the prompt. Our approach can be used in a low-resource setting as it does not require a large amount of training data. Also, it can be easily adapted to many different IE tasks by simply adjusting the prompts. Lastly, it provides a cost benefit by reducing the number of tokens in the prompt.", }
This work presents a new task-aware prompt design and example retrieval approach for information extraction (IE) using a prompt chaining technique. Our approach divides IE tasks into two steps: (1) text classification to understand what information (e.g., entity or event types) is contained in the underlying text and (2) information extraction for the identified types. Initially, we use a large language model (LLM) in a few-shot setting to classify the contained information. The classification output is used to select the relevant prompt and retrieve the examples relevant to the input text. Finally, we ask an LLM to perform the information extraction with the generated prompt. By evaluating our approach on legal IE tasks with two different LLMs, we demonstrate that the prompt chaining technique improves the LLM{'}s overall performance in a few-shot setting when compared to the baseline in which examples from all possible classes are included in the prompt. Our approach can be used in a low-resource setting as it does not require a large amount of training data. Also, it can be easily adapted to many different IE tasks by simply adjusting the prompts. Lastly, it provides a cost benefit by reducing the number of tokens in the prompt.
[ "Kwak, Alice", "Morrison, Clayton", "Bambauer, Derek", "Surdeanu, Mihai" ]
Classify First, and Then Extract: Prompt Chaining Technique for Information Extraction
nllp-1.25
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
1
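The classify-first-then-extract chain above can be sketched as two LLM calls, where the first call's output selects which few-shot examples the second call carries. In the sketch, `call_llm`, the entity types, and the example strings are all hypothetical stand-ins:

```python
# Sketch of the two-step prompt chain: (1) classify which entity types the
# text contains, (2) extract only those types with type-specific examples.
def call_llm(prompt: str) -> str:
    raise NotImplementedError("plug in your LLM client here")

EXAMPLES = {
    "DATE": "Text: 'signed on May 1, 2020' -> {'DATE': ['May 1, 2020']}",
    "PARTY": "Text: 'between Acme Corp and Jane Doe' -> "
             "{'PARTY': ['Acme Corp', 'Jane Doe']}",
}

def classify_then_extract(text: str) -> str:
    # Step 1: a cheap classification call narrows the label space.
    types_prompt = ("Which of these entity types appear in the text: "
                    f"{list(EXAMPLES)}?\nText: {text}\nAnswer as a list.")
    present = [t for t in EXAMPLES if t in call_llm(types_prompt)]
    # Step 2: the extraction prompt carries only the relevant examples,
    # which is where the token savings reported in the paper come from.
    shots = "\n".join(EXAMPLES[t] for t in present)
    extract_prompt = (f"{shots}\nExtract entities of types {present} "
                      f"from:\n{text}\nReturn JSON.")
    return call_llm(extract_prompt)
```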
https://aclanthology.org/2024.nllp-1.26.bib
https://aclanthology.org/2024.nllp-1.26/
@inproceedings{kadiyala-etal-2024-augmenting, title = "Augmenting Legal Decision Support Systems with {LLM}-based {NLI} for Analyzing Social Media Evidence", author = "Kadiyala, Ram Mohan Rao and Pullakhandam, Siddartha and Mehreen, Kanwal and Tippareddy, Subhasya and Srivastava, Ashay", editor = "Aletras, Nikolaos and Chalkidis, Ilias and Barrett, Leslie and Goan{\textcommabelow{t}}{\u{a}}, C{\u{a}}t{\u{a}}lina and Preo{\textcommabelow{t}}iuc-Pietro, Daniel and Spanakis, Gerasimos", booktitle = "Proceedings of the Natural Legal Language Processing Workshop 2024", month = nov, year = "2024", address = "Miami, FL, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.nllp-1.26", pages = "318--325", abstract = "This paper presents our system description and error analysis of our entry for the NLLP 2024 shared task on Legal Natural Language Inference (L-NLI). The task required classifying the relationship between an online media review and a class action complaint as entailed, contradicted, or neutral, indicating any association between the review and the complaint. Our system emerged as the winning submission, outperforming other entries by a substantial margin and demonstrating the effectiveness of our approach in legal text analysis. We provide a detailed analysis of the strengths and limitations of each model and approach tested, along with a thorough error analysis and suggestions for future improvements. This paper aims to contribute to the growing field of legal NLP by offering insights into advanced techniques for natural language inference in legal contexts, making it accessible to both experts and newcomers in the field.", }
This paper presents our system description and error analysis of our entry for the NLLP 2024 shared task on Legal Natural Language Inference (L-NLI). The task required classifying the relationship between an online media review and a class action complaint as entailed, contradicted, or neutral, indicating any association between the review and the complaint. Our system emerged as the winning submission, outperforming other entries by a substantial margin and demonstrating the effectiveness of our approach in legal text analysis. We provide a detailed analysis of the strengths and limitations of each model and approach tested, along with a thorough error analysis and suggestions for future improvements. This paper aims to contribute to the growing field of legal NLP by offering insights into advanced techniques for natural language inference in legal contexts, making it accessible to both experts and newcomers in the field.
[ "Kadiyala, Ram Mohan Rao", "Pullakh", "am, Siddartha", "Mehreen, Kanwal", "Tippareddy, Subhasya", "Srivastava, Ashay" ]
Augmenting Legal Decision Support Systems with LLM-based NLI for Analyzing Social Media Evidence
nllp-1.26
Poster
2410.15990
[ "https://github.com/1-800-shared-tasks/emnlp-2024-nllp" ]
https://huggingface.co/papers/2410.15990
4
1
0
5
[]
[]
[]
[]
[]
[]
1
https://aclanthology.org/2024.nllp-1.27.bib
https://aclanthology.org/2024.nllp-1.27/
@inproceedings{taranukhin-etal-2024-empowering, title = "Empowering Air Travelers: A Chatbot for {C}anadian Air Passenger Rights", author = "Taranukhin, Maksym and Ravi, Sahithya and Lukacs, Gabor and Milios, Evangelos and Shwartz, Vered", editor = "Aletras, Nikolaos and Chalkidis, Ilias and Barrett, Leslie and Goan{\textcommabelow{t}}{\u{a}}, C{\u{a}}t{\u{a}}lina and Preo{\textcommabelow{t}}iuc-Pietro, Daniel and Spanakis, Gerasimos", booktitle = "Proceedings of the Natural Legal Language Processing Workshop 2024", month = nov, year = "2024", address = "Miami, FL, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.nllp-1.27", pages = "326--335", abstract = "The Canadian air travel sector has seen a significant increase in flight delays, cancellations, and other issues concerning passenger rights. Recognizing this demand, we present a chatbot to assist passengers and educate them about their rights. Our system breaks a complex user input into simple queries which are used to retrieve information from a collection of documents detailing air travel regulations. The most relevant passages from these documents are presented along with links to the original documents and the generated queries, enabling users to dissect and leverage the information for their unique circumstances. The system successfully overcomes two predominant challenges: understanding complex user inputs, and delivering accurate answers, free of hallucinations, that passengers can rely on for making informed decisions. A user study comparing the chatbot to a Google search demonstrated the chatbot{'}s usefulness and ease of use. Beyond the primary goal of providing accurate and timely information to air passengers regarding their rights, we hope that this system will also enable further research exploring the tradeoff between the user-friendly conversational interface of chatbots and the accuracy of retrieval systems.", }
The Canadian air travel sector has seen a significant increase in flight delays, cancellations, and other issues concerning passenger rights. Recognizing this demand, we present a chatbot to assist passengers and educate them about their rights. Our system breaks a complex user input into simple queries which are used to retrieve information from a collection of documents detailing air travel regulations. The most relevant passages from these documents are presented along with links to the original documents and the generated queries, enabling users to dissect and leverage the information for their unique circumstances. The system successfully overcomes two predominant challenges: understanding complex user inputs, and delivering accurate answers, free of hallucinations, that passengers can rely on for making informed decisions. A user study comparing the chatbot to a Google search demonstrated the chatbot{'}s usefulness and ease of use. Beyond the primary goal of providing accurate and timely information to air passengers regarding their rights, we hope that this system will also enable further research exploring the tradeoff between the user-friendly conversational interface of chatbots and the accuracy of retrieval systems.
[ "Taranukhin, Maksym", "Ravi, Sahithya", "Lukacs, Gabor", "Milios, Evangelos", "Shwartz, Vered" ]
Empowering Air Travelers: A Chatbot for Canadian Air Passenger Rights
nllp-1.27
Poster
2403.12678
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://aclanthology.org/2024.nllp-1.28.bib
https://aclanthology.org/2024.nllp-1.28/
@inproceedings{tan-minh-etal-2024-enhancing, title = "Enhancing Legal Violation Identification with {LLM}s and Deep Learning Techniques: Achievements in the {L}egal{L}ens 2024 Competition", author = "Tan Minh, Nguyen and Ngoc Mai, Duy and Xuan Bach, Le and Huu Dung, Nguyen and Cong Minh, Pham and Nguyen, Ha Thanh and Vuong, Thi Hai Yen", editor = "Aletras, Nikolaos and Chalkidis, Ilias and Barrett, Leslie and Goan{\textcommabelow{t}}{\u{a}}, C{\u{a}}t{\u{a}}lina and Preo{\textcommabelow{t}}iuc-Pietro, Daniel and Spanakis, Gerasimos", booktitle = "Proceedings of the Natural Legal Language Processing Workshop 2024", month = nov, year = "2024", address = "Miami, FL, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.nllp-1.28", pages = "336--345", abstract = "LegalLens is a competition organized to encourage advancements in automatically detecting legal violations. This paper presents our solutions for two tasks: Legal Named Entity Recognition (L-NER) and Legal Natural Language Inference (L-NLI). Our approach involves fine-tuning BERT-based models, designing methods based on data characteristics, and a novel prompting template for data augmentation using LLMs. As a result, we secured first place in L-NER and third place in L-NLI among thirty-six participants. We also perform error analysis to provide valuable insights and pave the way for future enhancements in legal NLP. Our implementation is available at https://github.com/lxbach10012004/legal-lens/tree/main", }
LegalLens is a competition organized to encourage advancements in automatically detecting legal violations. This paper presents our solutions for two tasks: Legal Named Entity Recognition (L-NER) and Legal Natural Language Inference (L-NLI). Our approach involves fine-tuning BERT-based models, designing methods based on data characteristics, and a novel prompting template for data augmentation using LLMs. As a result, we secured first place in L-NER and third place in L-NLI among thirty-six participants. We also perform error analysis to provide valuable insights and pave the way for future enhancements in legal NLP. Our implementation is available at https://github.com/lxbach10012004/legal-lens/tree/main
[ "Tan Minh, Nguyen", "Ngoc Mai, Duy", "Xuan Bach, Le", "Huu Dung, Nguyen", "Cong Minh, Pham", "Nguyen, Ha Thanh", "Vuong, Thi Hai Yen" ]
Enhancing Legal Violation Identification with LLMs and Deep Learning Techniques: Achievements in the LegalLens 2024 Competition
nllp-1.28
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
1
https://aclanthology.org/2024.nllp-1.30.bib
https://aclanthology.org/2024.nllp-1.30/
@inproceedings{rajan-sequiera-2024-legallens, title = "{L}egal{L}ens 2024 Shared Task: Masala-chai Submission", author = "Rajan, Khalid and Sequiera, Royal", editor = "Aletras, Nikolaos and Chalkidis, Ilias and Barrett, Leslie and Goan{\textcommabelow{t}}{\u{a}}, C{\u{a}}t{\u{a}}lina and Preo{\textcommabelow{t}}iuc-Pietro, Daniel and Spanakis, Gerasimos", booktitle = "Proceedings of the Natural Legal Language Processing Workshop 2024", month = nov, year = "2024", address = "Miami, FL, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.nllp-1.30", pages = "346--354", abstract = "In this paper, we present the masala-chai team{'}s participation in the LegalLens 2024 shared task and detail our approach to predicting legal entities and performing natural language inference (NLI) in the legal domain. We experimented with various transformer-based models, including BERT, RoBERTa, Llama 3.1, and GPT-4o. Our results show that state-of-the-art models like GPT-4o underperformed in NER and NLI tasks, even when using advanced techniques such as bootstrapping and prompt optimization. The best performance in NER (accuracy: 0.806, F1 macro: 0.701) was achieved with a fine-tuned RoBERTa model, while the highest NLI results (accuracy: 0.825, F1 macro: 0.833) came from a fine-tuned Llama 3.1 8B model. Notably, RoBERTa, despite having significantly fewer parameters than Llama 3.1 8B, delivered comparable results. We discuss key findings and insights from our experiments and provide our results and code for reproducibility and further analysis at https://github.com/rosequ/masala-chai", }
In this paper, we present the masala-chai team{'}s participation in the LegalLens 2024 shared task and detail our approach to predicting legal entities and performing natural language inference (NLI) in the legal domain. We experimented with various transformer-based models, including BERT, RoBERTa, Llama 3.1, and GPT-4o. Our results show that state-of-the-art models like GPT-4o underperformed in NER and NLI tasks, even when using advanced techniques such as bootstrapping and prompt optimization. The best performance in NER (accuracy: 0.806, F1 macro: 0.701) was achieved with a fine-tuned RoBERTa model, while the highest NLI results (accuracy: 0.825, F1 macro: 0.833) came from a fine-tuned Llama 3.1 8B model. Notably, RoBERTa, despite having significantly fewer parameters than Llama 3.1 8B, delivered comparable results. We discuss key findings and insights from our experiments and provide our results and code for reproducibility and further analysis at https://github.com/rosequ/masala-chai
[ "Rajan, Khalid", "Sequiera, Royal" ]
LegalLens 2024 Shared Task: Masala-chai Submission
nllp-1.30
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
1
https://aclanthology.org/2024.nllp-1.31.bib
https://aclanthology.org/2024.nllp-1.31/
@inproceedings{rajaraman-veeramani-2024-semantists, title = "Semantists at {L}egal{L}ens-2024: Data-efficient Training of {LLM}{'}s for Legal Violation Identification", author = "Rajaraman, Kanagasabai and Veeramani, Hariram", editor = "Aletras, Nikolaos and Chalkidis, Ilias and Barrett, Leslie and Goan{\textcommabelow{t}}{\u{a}}, C{\u{a}}t{\u{a}}lina and Preo{\textcommabelow{t}}iuc-Pietro, Daniel and Spanakis, Gerasimos", booktitle = "Proceedings of the Natural Legal Language Processing Workshop 2024", month = nov, year = "2024", address = "Miami, FL, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.nllp-1.31", pages = "355--360", abstract = "In this paper, we describe our system for the LegalLens-2024 Shared Task on automatically identifying legal violations from unstructured text sources. We participate in Subtask B, called Legal Natural Language Inference (L-NLI), which aims to predict the relationship between a given premise summarizing a class action complaint and a hypothesis from an online media text, indicating any association between the review and the complaint. This task is challenging as it provides only limited labelled data. In our work, we adopt LLM-based methods and explore various data-efficient learning approaches for maximizing performance. In the end, our best model employed an ensemble of LLMs fine-tuned on the task-specific data, achieved a Macro F1 score of 78.5{\%} on the test data, and ranked 2nd among all teams{'} submissions.", }
In this paper, we describe our system for the LegalLens-2024 Shared Task on automatically identifying legal violations from unstructured text sources. We participate in Subtask B, called Legal Natural Language Inference (L-NLI), which aims to predict the relationship between a given premise summarizing a class action complaint and a hypothesis from an online media text, indicating any association between the review and the complaint. This task is challenging as it provides only limited labelled data. In our work, we adopt LLM-based methods and explore various data-efficient learning approaches for maximizing performance. In the end, our best model employed an ensemble of LLMs fine-tuned on the task-specific data, achieved a Macro F1 score of 78.5{\%} on the test data, and ranked 2nd among all teams{'} submissions.
[ "Rajaraman, Kanagasabai", "Veeramani, Hariram" ]
Semantists at LegalLens-2024: Data-efficient Training of LLM's for Legal Violation Identification
nllp-1.31
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
1
https://aclanthology.org/2024.nllp-1.33.bib
https://aclanthology.org/2024.nllp-1.33/
@inproceedings{hagag-etal-2024-legallens, title = "{L}egal{L}ens Shared Task 2024: Legal Violation Identification in Unstructured Text", author = "Hagag, Ben and Gil Semo, Gil and Bernsohn, Dor and Harpaz, Liav and Vaezipoor, Pashootan and Saha, Rohit and Truskovskyi, Kyryl and Spanakis, Gerasimos", editor = "Aletras, Nikolaos and Chalkidis, Ilias and Barrett, Leslie and Goan{\textcommabelow{t}}{\u{a}}, C{\u{a}}t{\u{a}}lina and Preo{\textcommabelow{t}}iuc-Pietro, Daniel and Spanakis, Gerasimos", booktitle = "Proceedings of the Natural Legal Language Processing Workshop 2024", month = nov, year = "2024", address = "Miami, FL, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.nllp-1.33", pages = "361--370", abstract = "This paper presents the results of the LegalLens Shared Task, focusing on detecting legal violations within text in the wild across two sub-tasks: LegalLens-NER for identifying legal violation entities and LegalLens-NLI for associating these violations with relevant legal contexts and affected individuals. Using an enhanced LegalLens dataset covering labor, privacy, and consumer protection domains, 38 teams participated in the task. Our analysis reveals that while a mix of approaches was used, the top-performing teams in both tasks consistently relied on fine-tuning pre-trained language models, outperforming legal-specific models and few-shot methods. The top-performing team achieved a 7.11{\%} improvement in NER over the baseline, while NLI saw a more marginal improvement of 5.7{\%}. Despite these gains, the complexity of legal texts leaves room for further advancements.", }
This paper presents the results of the LegalLens Shared Task, focusing on detecting legal violations within text in the wild across two sub-tasks: LegalLens-NER for identifying legal violation entities and LegalLens-NLI for associating these violations with relevant legal contexts and affected individuals. Using an enhanced LegalLens dataset covering labor, privacy, and consumer protection domains, 38 teams participated in the task. Our analysis reveals that while a mix of approaches was used, the top-performing teams in both tasks consistently relied on fine-tuning pre-trained language models, outperforming legal-specific models and few-shot methods. The top-performing team achieved a 7.11{\%} improvement in NER over the baseline, while NLI saw a more marginal improvement of 5.7{\%}. Despite these gains, the complexity of legal texts leaves room for further advancements.
[ "Hagag, Ben", "Gil Semo, Gil", "Bernsohn, Dor", "Harpaz, Liav", "Vaezipoor, Pashootan", "Saha, Rohit", "Truskovskyi, Kyryl", "Spanakis, Gerasimos" ]
LegalLens Shared Task 2024: Legal Violation Identification in Unstructured Text
nllp-1.33
Poster
2410.12064
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://aclanthology.org/2024.nllp-1.34.bib
https://aclanthology.org/2024.nllp-1.34/
@inproceedings{tran-etal-2024-deberta, title = "{D}e{BERT}a Beats Behemoths: A Comparative Analysis of Fine-Tuning, Prompting, and {PEFT} Approaches on {L}egal{L}ens{NER}", author = "Tran, Hanh Thi Hong and Chatterjee, Nishan and Pollak, Senja and Doucet, Antoine", editor = "Aletras, Nikolaos and Chalkidis, Ilias and Barrett, Leslie and Goan{\textcommabelow{t}}{\u{a}}, C{\u{a}}t{\u{a}}lina and Preo{\textcommabelow{t}}iuc-Pietro, Daniel and Spanakis, Gerasimos", booktitle = "Proceedings of the Natural Legal Language Processing Workshop 2024", month = nov, year = "2024", address = "Miami, FL, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.nllp-1.34", pages = "371--380", abstract = "This paper summarizes the participation of our team (Flawless Lawgic) in the legal named entity recognition (L-NER) task at LegalLens 2024: Detecting Legal Violations. Given possible unstructured texts (e.g., online media texts), we aim to identify legal violations by extracting legal entities such as {``}violation{''}, {``}violation by{''}, {``}violation on{''}, and {``}law{''}. This system-description paper discusses our approaches to address the task, empirically highlighting the performance of fine-tuned models from the Transformers family (e.g., RoBERTa and DeBERTa) against open-sourced LLMs (e.g., Llama, Mistral) with different tuning settings (e.g., LoRA, Supervised Fine-Tuning (SFT), and prompting strategies). Our best results, with a weighted F1 of 0.705 on the test set, show a 30-percentage-point increase in F1 over the baseline and rank 2nd on the leaderboard, a marginal gap of only 0.4 percentage points below the top solution. Our solutions are available at github.com/honghanhh/lner.", }
This paper summarizes the participation of our team (Flawless Lawgic) in the legal named entity recognition (L-NER) task at LegalLens 2024: Detecting Legal Violations. Given possible unstructured texts (e.g., online media texts), we aim to identify legal violations by extracting legal entities such as {``}violation{''}, {``}violation by{''}, {``}violation on{''}, and {``}law{''}. This system-description paper discusses our approaches to address the task, empirically highlighting the performance of fine-tuned models from the Transformers family (e.g., RoBERTa and DeBERTa) against open-sourced LLMs (e.g., Llama, Mistral) with different tuning settings (e.g., LoRA, Supervised Fine-Tuning (SFT), and prompting strategies). Our best results, with a weighted F1 of 0.705 on the test set, show a 30-percentage-point increase in F1 over the baseline and rank 2nd on the leaderboard, a marginal gap of only 0.4 percentage points below the top solution. Our solutions are available at github.com/honghanhh/lner.
[ "Tran, Hanh Thi Hong", "Chatterjee, Nishan", "Pollak, Senja", "Doucet, Antoine" ]
DeBERTa Beats Behemoths: A Comparative Analysis of Fine-Tuning, Prompting, and PEFT Approaches on LegalLensNER
nllp-1.34
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
1
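To make the record above concrete: the Flawless Lawgic system fine-tunes an encoder such as DeBERTa with a token-classification head over the four LegalLens entity types. The sketch below shows that setup with Hugging Face transformers; the checkpoint name, the BIO label set, and the example sentence are illustrative assumptions, not the team's exact configuration.

```python
# Minimal sketch of an L-NER setup: a Transformer encoder with a
# token-classification head over the four LegalLens entity types.
# Labels and checkpoint are assumptions, not the team's exact config.
from transformers import AutoTokenizer, AutoModelForTokenClassification

LABELS = ["O", "B-VIOLATION", "I-VIOLATION", "B-VIOLATED_BY", "I-VIOLATED_BY",
          "B-VIOLATED_ON", "I-VIOLATED_ON", "B-LAW", "I-LAW"]

# deberta-v3 tokenizers need the sentencepiece package installed
tokenizer = AutoTokenizer.from_pretrained("microsoft/deberta-v3-base")
model = AutoModelForTokenClassification.from_pretrained(
    "microsoft/deberta-v3-base",
    num_labels=len(LABELS),
    id2label=dict(enumerate(LABELS)),
    label2id={l: i for i, l in enumerate(LABELS)},
)

text = "The company allegedly violated the Fair Labor Standards Act."
inputs = tokenizer(text, return_tensors="pt")
logits = model(**inputs).logits            # (1, seq_len, num_labels)
pred_ids = logits.argmax(-1)[0].tolist()   # per-token label ids (head untrained here)
print([LABELS[i] for i in pred_ids])
```

Fine-tuning would wrap this model in a standard `Trainer` loop over the LegalLens-NER training split; the forward pass above just shows the input/output shapes involved.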
https://aclanthology.org/2024.nllp-1.35.bib
https://aclanthology.org/2024.nllp-1.35/
@inproceedings{t-y-s-s-etal-2024-lexsumm, title = "{L}ex{S}umm and {L}ex{T}5: Benchmarking and Modeling Legal Summarization Tasks in {E}nglish", author = "T.y.s.s, Santosh and Weiss, Cornelius and Grabmair, Matthias", editor = "Aletras, Nikolaos and Chalkidis, Ilias and Barrett, Leslie and Goan{\textcommabelow{t}}{\u{a}}, C{\u{a}}t{\u{a}}lina and Preo{\textcommabelow{t}}iuc-Pietro, Daniel and Spanakis, Gerasimos", booktitle = "Proceedings of the Natural Legal Language Processing Workshop 2024", month = nov, year = "2024", address = "Miami, FL, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.nllp-1.35", pages = "381--403", abstract = "In the evolving NLP landscape, benchmarks serve as yardsticks for gauging progress. However, existing Legal NLP benchmarks only focus on predictive tasks, overlooking generative tasks. This work curates LexSumm, a benchmark designed for evaluating legal summarization tasks in English. It comprises eight English legal summarization datasets, from diverse jurisdictions, such as the US, UK, EU and India. Additionally, we release LexT5, legal oriented sequence-to-sequence model, addressing the limitation of the existing BERT-style encoder-only models in the legal domain. We assess its capabilities through zero-shot probing on LegalLAMA and fine-tuning on LexSumm. Our analysis reveals abstraction and faithfulness errors even in summaries generated by zero-shot LLMs, indicating opportunities for further improvements. LexSumm benchmark and LexT5 model are available at https://github.com/TUMLegalTech/LexSumm-LexT5.", }
In the evolving NLP landscape, benchmarks serve as yardsticks for gauging progress. However, existing Legal NLP benchmarks focus only on predictive tasks, overlooking generative tasks. This work curates LexSumm, a benchmark designed for evaluating legal summarization tasks in English. It comprises eight English legal summarization datasets from diverse jurisdictions, such as the US, UK, EU and India. Additionally, we release LexT5, a legal-oriented sequence-to-sequence model, addressing the limitations of existing BERT-style encoder-only models in the legal domain. We assess its capabilities through zero-shot probing on LegalLAMA and fine-tuning on LexSumm. Our analysis reveals abstraction and faithfulness errors even in summaries generated by zero-shot LLMs, indicating opportunities for further improvements. The LexSumm benchmark and LexT5 model are available at https://github.com/TUMLegalTech/LexSumm-LexT5.
[ "T.y.s.s, Santosh", "Weiss, Cornelius", "Grabmair, Matthias" ]
LexSumm and LexT5: Benchmarking and Modeling Legal Summarization Tasks in English
nllp-1.35
Poster
2410.09527
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
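For readers who want to see the shape of the LexT5 setup described above: it is a text-to-text model applied to legal summarization. The sketch below uses a generic T5 checkpoint as a stand-in, since the actual LexT5 weights are released via the paper's repository; the input text is invented.

```python
# Sketch of the encoder-decoder summarization setup LexT5 targets.
# "t5-small" is a placeholder checkpoint, not the released LexT5 weights.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")

judgment = "summarize: The appellant challenged the lower court's ruling on jurisdiction ..."
inputs = tokenizer(judgment, return_tensors="pt", truncation=True, max_length=512)
summary_ids = model.generate(**inputs, max_new_tokens=128, num_beams=4)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```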
https://aclanthology.org/2024.nllp-1.36.bib
https://aclanthology.org/2024.nllp-1.36/
@inproceedings{t-y-s-s-etal-2024-towards-supporting, title = "Towards Supporting Legal Argumentation with {NLP}: Is More Data Really All You Need?", author = "T.y.s.s, Santosh and Ashley, Kevin and Atkinson, Katie and Grabmair, Matthias", editor = "Aletras, Nikolaos and Chalkidis, Ilias and Barrett, Leslie and Goan{\textcommabelow{t}}{\u{a}}, C{\u{a}}t{\u{a}}lina and Preo{\textcommabelow{t}}iuc-Pietro, Daniel and Spanakis, Gerasimos", booktitle = "Proceedings of the Natural Legal Language Processing Workshop 2024", month = nov, year = "2024", address = "Miami, FL, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.nllp-1.36", pages = "404--421", abstract = "Modeling legal reasoning and argumentation justifying decisions in cases has always been central to AI {\&} Law, yet contemporary developments in legal NLP have increasingly focused on statistically classifying legal conclusions from text. While conceptually {``}simpler{'}, these approaches often fall short in providing usable justifications connecting to appropriate legal concepts. This paper reviews both traditional symbolic works in AI {\&} Law and recent advances in legal NLP, and distills possibilities of integrating expert-informed knowledge to strike a balance between scalability and explanation in symbolic vs. data-driven approaches. We identify open challenges and discuss the potential of modern NLP models and methods that integrate conceptual legal knowledge.", }
Modeling legal reasoning and argumentation justifying decisions in cases has always been central to AI {\&} Law, yet contemporary developments in legal NLP have increasingly focused on statistically classifying legal conclusions from text. While conceptually {``}simpler{''}, these approaches often fall short in providing usable justifications connecting to appropriate legal concepts. This paper reviews both traditional symbolic works in AI {\&} Law and recent advances in legal NLP, and distills possibilities of integrating expert-informed knowledge to strike a balance between scalability and explanation in symbolic vs. data-driven approaches. We identify open challenges and discuss the potential of modern NLP models and methods that integrate conceptual legal knowledge.
[ "T.y.s.s, Santosh", "Ashley, Kevin", "Atkinson, Katie", "Grabmair, Matthias" ]
Towards Supporting Legal Argumentation with NLP: Is More Data Really All You Need?
nllp-1.36
Poster
2406.10974
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://aclanthology.org/2024.nlp4dh-1.1.bib
https://aclanthology.org/2024.nlp4dh-1.1/
@inproceedings{ohman-liimatta-2024-text, title = "Text Length and the Function of Intentionality: A Case Study of Contrastive Subreddits", author = "Ohman, Emily Sofi and Liimatta, Aatu", editor = {H{\"a}m{\"a}l{\"a}inen, Mika and {\"O}hman, Emily and Miyagawa, So and Alnajjar, Khalid and Bizzoni, Yuri}, booktitle = "Proceedings of the 4th International Conference on Natural Language Processing for Digital Humanities", month = nov, year = "2024", address = "Miami, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.nlp4dh-1.1", pages = "1--8", abstract = "Text length is of central concern in natural language processing (NLP) tasks, yet it is very much under-researched. In this paper, we use social media data, specifically Reddit, to explore the function of text length and intentionality by contrasting subreddits of the same topic where one is considered more serious/professional/academic and the other more relaxed/beginner/layperson. We hypothesize that word choices are more deliberate and intentional in the more in-depth and professional subreddits with texts subsequently becoming longer as a function of this intentionality. We argue that this has deep implications for many applied NLP tasks such as emotion and sentiment analysis, fake news and disinformation detection, and other modeling tasks focused on social media and similar platforms where users interact with each other via the medium of text.", }
Text length is of central concern in natural language processing (NLP) tasks, yet it is very much under-researched. In this paper, we use social media data, specifically Reddit, to explore the function of text length and intentionality by contrasting subreddits of the same topic where one is considered more serious/professional/academic and the other more relaxed/beginner/layperson. We hypothesize that word choices are more deliberate and intentional in the more in-depth and professional subreddits with texts subsequently becoming longer as a function of this intentionality. We argue that this has deep implications for many applied NLP tasks such as emotion and sentiment analysis, fake news and disinformation detection, and other modeling tasks focused on social media and similar platforms where users interact with each other via the medium of text.
[ "Ohman, Emily Sofi", "Liimatta, Aatu" ]
Text Length and the Function of Intentionality: A Case Study of Contrastive Subreddits
nlp4dh-1.1
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
1
https://aclanthology.org/2024.nlp4dh-1.2.bib
https://aclanthology.org/2024.nlp4dh-1.2/
@inproceedings{li-2024-tracing, title = "Tracing the Genealogies of Ideas with Sentence Embeddings", author = "Li, Lucian", editor = {H{\"a}m{\"a}l{\"a}inen, Mika and {\"O}hman, Emily and Miyagawa, So and Alnajjar, Khalid and Bizzoni, Yuri}, booktitle = "Proceedings of the 4th International Conference on Natural Language Processing for Digital Humanities", month = nov, year = "2024", address = "Miami, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.nlp4dh-1.2", pages = "9--16", abstract = "Detecting intellectual influence in unstructured text is an important problem for a wide range of fields, including intellectual history, social science, and bibliometrics. A wide range of previous studies in computational social science and digital humanities have attempted to resolve this through a range of dictionary, embedding, and language model based methods. I introduce an approach which leverages a sentence embedding index to efficiently search for similar ideas in a large historical corpus. This method remains robust in conditions of high OCR error found in real mass digitized historical corpora that disrupt previous published methods, while also capturing paraphrase and indirect influence. I evaluate this method on a large corpus of 250,000 nonfiction texts from the 19th century, and find that discovered influence is in line with history of science literature. By expanding the scope of our search for influence and the origins of ideas beyond traditional structured corpora and canonical works and figures, we can get a more nuanced perspective on influence and idea dissemination that can encompass epistemically marginalized groups.", }
Detecting intellectual influence in unstructured text is an important problem for a wide range of fields, including intellectual history, social science, and bibliometrics. Many previous studies in computational social science and digital humanities have attempted to address it through dictionary-, embedding-, and language-model-based methods. I introduce an approach which leverages a sentence embedding index to efficiently search for similar ideas in a large historical corpus. This method remains robust under the high OCR error rates found in real mass-digitized historical corpora, which disrupt previously published methods, while also capturing paraphrase and indirect influence. I evaluate this method on a large corpus of 250,000 nonfiction texts from the 19th century, and find that the discovered influence is in line with the history-of-science literature. By expanding the scope of our search for influence and the origins of ideas beyond traditional structured corpora and canonical works and figures, we can get a more nuanced perspective on influence and idea dissemination that can encompass epistemically marginalized groups.
[ "Li, Lucian" ]
Tracing the Genealogies of Ideas with Sentence Embeddings
nlp4dh-1.2
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
1
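The core mechanism in the paper above is a sentence-embedding index searched for semantically similar passages. A minimal sketch with sentence-transformers follows; the encoder name and the three toy passages are assumptions, and a real corpus of 250,000 texts would call for an approximate-nearest-neighbour index rather than the dense matrix product used here.

```python
# Sketch of embedding-based idea retrieval: encode passages, then rank
# candidates by cosine similarity to a query passage.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")  # placeholder encoder
corpus = [
    "Species change gradually through a struggle for existence.",
    "The wealth of nations grows through the division of labour.",
    "Organisms best adapted to their environment tend to survive.",
]
corpus_emb = model.encode(corpus, normalize_embeddings=True)

query = "Natural selection favours the fittest individuals."
q = model.encode([query], normalize_embeddings=True)
scores = corpus_emb @ q.T          # cosine similarity (vectors are unit-normalized)
best = int(np.argmax(scores))
print(corpus[best], float(scores[best]))
```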
https://aclanthology.org/2024.nlp4dh-1.3.bib
https://aclanthology.org/2024.nlp4dh-1.3/
@inproceedings{yang-anderson-2024-evaluating, title = "Evaluating Computational Representations of Character: An Austen Character Similarity Benchmark", author = "Yang, Funing and Anderson, Carolyn Jane", editor = {H{\"a}m{\"a}l{\"a}inen, Mika and {\"O}hman, Emily and Miyagawa, So and Alnajjar, Khalid and Bizzoni, Yuri}, booktitle = "Proceedings of the 4th International Conference on Natural Language Processing for Digital Humanities", month = nov, year = "2024", address = "Miami, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.nlp4dh-1.3", pages = "17--30", abstract = "Several systems have been developed to extract information about characters to aid computational analysis of English literature. We propose character similarity grouping as a holistic evaluation task for these pipelines. We present AustenAlike, a benchmark suite of character similarities in Jane Austen{'}s novels. Our benchmark draws on three notions of character similarity: a structurally defined notion of similarity; a socially defined notion of similarity; and an expert defined set extracted from literary criticism. We use AustenAlike to evaluate character features extracted using two pipelines, BookNLP and FanfictionNLP. We build character representations from four kinds of features and compare them to the three AustenAlike benchmarks and to GPT-4 similarity rankings. We find that though computational representations capture some broad similarities based on shared social and narrative roles, the expert pairings in our third benchmark are challenging for all systems, highlighting the subtler aspects of similarity noted by human readers.", }
Several systems have been developed to extract information about characters to aid computational analysis of English literature. We propose character similarity grouping as a holistic evaluation task for these pipelines. We present AustenAlike, a benchmark suite of character similarities in Jane Austen{'}s novels. Our benchmark draws on three notions of character similarity: a structurally defined notion of similarity; a socially defined notion of similarity; and an expert-defined set extracted from literary criticism. We use AustenAlike to evaluate character features extracted using two pipelines, BookNLP and FanfictionNLP. We build character representations from four kinds of features and compare them to the three AustenAlike benchmarks and to GPT-4 similarity rankings. We find that though computational representations capture some broad similarities based on shared social and narrative roles, the expert pairings in our third benchmark are challenging for all systems, highlighting the subtler aspects of similarity noted by human readers.
[ "Yang, Funing", "Anderson, Carolyn Jane" ]
Evaluating Computational Representations of Character: An Austen Character Similarity Benchmark
nlp4dh-1.3
Poster
2408.16131
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
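The benchmark above compares character representations built from pipeline-extracted features. The toy sketch below illustrates the comparison step only: vectors of invented feature counts ranked by cosine similarity to a query character. The feature scheme and the numbers are hypothetical, not the benchmark's actual data.

```python
# Toy illustration of ranking characters by similarity of feature vectors.
# Feature columns (agency verbs, speech acts, modifiers, events) and all
# counts are invented for demonstration.
import numpy as np

features = {
    "Elizabeth Bennet": np.array([42.0, 88.0, 31.0, 17.0]),
    "Emma Woodhouse":   np.array([39.0, 91.0, 28.0, 15.0]),
    "Mr. Collins":      np.array([10.0, 35.0, 60.0,  4.0]),
}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

query = "Elizabeth Bennet"
ranked = sorted((c for c in features if c != query),
                key=lambda c: cosine(features[query], features[c]), reverse=True)
for c in ranked:
    print(c, round(cosine(features[query], features[c]), 3))
```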
https://aclanthology.org/2024.nlp4dh-1.4.bib
https://aclanthology.org/2024.nlp4dh-1.4/
@inproceedings{umphrey-etal-2024-investigating, title = "Investigating Expert-in-the-Loop {LLM} Discourse Patterns for Ancient Intertextual Analysis", author = "Umphrey, Ray and Roberts, Jesse and Roberts, Lindsey", editor = {H{\"a}m{\"a}l{\"a}inen, Mika and {\"O}hman, Emily and Miyagawa, So and Alnajjar, Khalid and Bizzoni, Yuri}, booktitle = "Proceedings of the 4th International Conference on Natural Language Processing for Digital Humanities", month = nov, year = "2024", address = "Miami, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.nlp4dh-1.4", pages = "31--40", abstract = "This study explores the potential of large language models (LLMs) for identifying and examining intertextual relationships within biblical, koine Greek texts. By evaluating the performance of LLMs on various intertextuality scenarios the study demonstrates that these models can detect direct quotations, allusions, and echoes between texts. The LLM{'}s ability to generate novel intertextual observations and connections highlights its potential to uncover new insights. However, the model also struggles with long query passages and the inclusion of false intertextual dependences, emphasizing the importance of expert evaluation. The expert-in-the-loop methodology presented offers a scalable approach for intertextual research into the complex web of intertextuality within and beyond the biblical corpus.", }
This study explores the potential of large language models (LLMs) for identifying and examining intertextual relationships within biblical, koine Greek texts. By evaluating the performance of LLMs on various intertextuality scenarios, the study demonstrates that these models can detect direct quotations, allusions, and echoes between texts. The LLM{'}s ability to generate novel intertextual observations and connections highlights its potential to uncover new insights. However, the model also struggles with long query passages and the inclusion of false intertextual dependencies, emphasizing the importance of expert evaluation. The expert-in-the-loop methodology presented offers a scalable approach for intertextual research into the complex web of intertextuality within and beyond the biblical corpus.
[ "Umphrey, Ray", "Roberts, Jesse", "Roberts, Lindsey" ]
Investigating Expert-in-the-Loop LLM Discourse Patterns for Ancient Intertextual Analysis
nlp4dh-1.4
Poster
2409.01882
[ "" ]
https://huggingface.co/papers/2409.01882
1
0
0
3
[]
[]
[]
[]
[]
[]
1
https://aclanthology.org/2024.nlp4dh-1.5.bib
https://aclanthology.org/2024.nlp4dh-1.5/
@inproceedings{cruciani-2024-extracting, title = "Extracting Relations from Ecclesiastical Cultural Heritage Texts", author = "Cruciani, Giulia", editor = {H{\"a}m{\"a}l{\"a}inen, Mika and {\"O}hman, Emily and Miyagawa, So and Alnajjar, Khalid and Bizzoni, Yuri}, booktitle = "Proceedings of the 4th International Conference on Natural Language Processing for Digital Humanities", month = nov, year = "2024", address = "Miami, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.nlp4dh-1.5", pages = "41--50", abstract = "Motivated by the increasing volume of data and the necessity of getting valuable insights, this research describes the process of extracting entities and relations from Italian texts in the context of ecclesiastical cultural heritage data. Named Entity Recognition (NER) and Relation Extraction (RE) are paramount tasks in Natural Language Processing. This paper presents a traditional methodology based on a two-step procedure: firstly, a custom model for Named Entity Recognition extracts entities from data, and then, a multi-input neural network model is trained to perform Relation Classification as a multi-label classification problem. Data are provided by IDS{\&}Unitelm (technological partner of the IT Services and National Office for Ecclesiastical Cultural Heritage and Religious Buildings of CEI, the Italian Episcopal Conference) and concerns biographical texts of 9,982 entities of type person, which can be accessed by the online portal BeWeb. This approach aims to enhance the organization and accessibility of ecclesiastical cultural heritage data, offering deeper insights into historical biographical records.", }
Motivated by the increasing volume of data and the necessity of extracting valuable insights from it, this research describes the process of extracting entities and relations from Italian texts in the context of ecclesiastical cultural heritage data. Named Entity Recognition (NER) and Relation Extraction (RE) are paramount tasks in Natural Language Processing. This paper presents a traditional methodology based on a two-step procedure: firstly, a custom model for Named Entity Recognition extracts entities from the data, and then a multi-input neural network model is trained to perform Relation Classification as a multi-label classification problem. The data are provided by IDS{\&}Unitelm (technological partner of the IT Services and National Office for Ecclesiastical Cultural Heritage and Religious Buildings of CEI, the Italian Episcopal Conference) and concern biographical texts of 9,982 entities of type person, which can be accessed via the online portal BeWeb. This approach aims to enhance the organization and accessibility of ecclesiastical cultural heritage data, offering deeper insights into historical biographical records.
[ "Cruciani, Giulia" ]
Extracting Relations from Ecclesiastical Cultural Heritage Texts
nlp4dh-1.5
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
1
https://aclanthology.org/2024.nlp4dh-1.6.bib
https://aclanthology.org/2024.nlp4dh-1.6/
@inproceedings{krusic-2024-constructing, title = "Constructing a Sentiment-Annotated Corpus of {A}ustrian Historical Newspapers: Challenges, Tools, and Annotator Experience", author = "Krusic, Lucija", editor = {H{\"a}m{\"a}l{\"a}inen, Mika and {\"O}hman, Emily and Miyagawa, So and Alnajjar, Khalid and Bizzoni, Yuri}, booktitle = "Proceedings of the 4th International Conference on Natural Language Processing for Digital Humanities", month = nov, year = "2024", address = "Miami, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.nlp4dh-1.6", pages = "51--62", abstract = "This study presents the development of a sentiment-annotated corpus of historical newspaper texts in Austrian German, addressing a gap in annotated corpora for Natural Language Processing in the field of Digital Humanities. Three annotators categorised 1005 sentences from two 19th-century periodicals into four sentiment categories: positive, negative, neutral, and mixed. The annotators, Masters and PhD students in Linguistics and Digital Humanities, are considered semi-experts and have received substantial training during this annotation study. Three tools were used and compared in the annotation process: Google Sheets, Google Forms and Doccano, and resulted in a gold standard corpus. The analysis revealed a fair to moderate inter-rater agreement (Fleiss{'} kappa = 0.405) and an average percentage agreement of 45.7{\%} for full consensus and 92.5{\%} for majority vote. As majority vote is needed for the creation of a gold standard corpus, these results are considered sufficient, and the annotations reliable. The study also introduced comprehensive guidelines for sentiment annotation, which were essential to overcome the challenges posed by historical language and context. The annotators{'} experience was assessed through a combination of standardised usability tests (NASA-TLX and UEQ-S) and a detailed custom-made user experience questionnaire, which provided qualitative insights into the difficulties and usability of the tools used. The questionnaire is an additional resource that can be used to assess usability and user experience assessments in future annotation studies. The findings demonstrate the effectiveness of semi-expert annotators and dedicated tools in producing reliable annotations and provide valuable resources, including the annotated corpus, guidelines, and a user experience questionnaire, for future sentiment analysis and annotation of Austrian historical texts. The sentiment-annotated corpus will be used as the gold standard for fine-tuning and evaluating machine learning models for sentiment analysis of Austrian historical newspapers with the topic of migration and minorities in a subsequent study.", }
This study presents the development of a sentiment-annotated corpus of historical newspaper texts in Austrian German, addressing a gap in annotated corpora for Natural Language Processing in the field of Digital Humanities. Three annotators categorised 1005 sentences from two 19th-century periodicals into four sentiment categories: positive, negative, neutral, and mixed. The annotators, Masters and PhD students in Linguistics and Digital Humanities, are considered semi-experts and received substantial training during this annotation study. Three tools were used and compared in the annotation process (Google Sheets, Google Forms, and Doccano), resulting in a gold standard corpus. The analysis revealed a fair to moderate inter-rater agreement (Fleiss{'} kappa = 0.405) and an average percentage agreement of 45.7{\%} for full consensus and 92.5{\%} for majority vote. As majority vote is what is needed for the creation of a gold standard corpus, these results are considered sufficient and the annotations reliable. The study also introduced comprehensive guidelines for sentiment annotation, which were essential to overcome the challenges posed by historical language and context. The annotators{'} experience was assessed through a combination of standardised usability tests (NASA-TLX and UEQ-S) and a detailed custom-made user experience questionnaire, which provided qualitative insights into the difficulties and usability of the tools used. The questionnaire is an additional resource that can be used for usability and user experience assessments in future annotation studies. The findings demonstrate the effectiveness of semi-expert annotators and dedicated tools in producing reliable annotations and provide valuable resources, including the annotated corpus, guidelines, and a user experience questionnaire, for future sentiment analysis and annotation of Austrian historical texts. The sentiment-annotated corpus will serve as the gold standard for fine-tuning and evaluating machine learning models for sentiment analysis of Austrian historical newspapers on the topic of migration and minorities in a subsequent study.
[ "Krusic, Lucija" ]
Constructing a Sentiment-Annotated Corpus of Austrian Historical Newspapers: Challenges, Tools, and Annotator Experience
nlp4dh-1.6
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
1
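The agreement figure reported above (Fleiss' kappa = 0.405) can be reproduced on one's own annotations with statsmodels, as in the sketch below; the toy rating matrix is invented, with rows as sentences and columns as the three annotators.

```python
# Compute Fleiss' kappa for a 3-annotator, 4-category sentiment task.
# The ratings matrix is a toy stand-in for the study's 1005 sentences.
import numpy as np
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

# 0=positive, 1=negative, 2=neutral, 3=mixed; rows=sentences, cols=annotators
ratings = np.array([
    [0, 0, 0],
    [1, 1, 2],
    [2, 2, 2],
    [3, 1, 3],
    [1, 1, 1],
])
table, _ = aggregate_raters(ratings)   # -> (items x categories) count table
print(round(fleiss_kappa(table), 3))
```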
https://aclanthology.org/2024.nlp4dh-1.7.bib
https://aclanthology.org/2024.nlp4dh-1.7/
@inproceedings{vasicek-etal-2024-truth, title = "It is a Truth Individually Acknowledged: Cross-references On Demand", author = "Vasicek, Piper and Byun, Courtni and Seppi, Kevin", editor = {H{\"a}m{\"a}l{\"a}inen, Mika and {\"O}hman, Emily and Miyagawa, So and Alnajjar, Khalid and Bizzoni, Yuri}, booktitle = "Proceedings of the 4th International Conference on Natural Language Processing for Digital Humanities", month = nov, year = "2024", address = "Miami, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.nlp4dh-1.7", pages = "63--74", abstract = "Cross-references link source passages of text to other passages that elucidate the source passage in some way and can deepen human understanding. Despite their usefulness, however, good cross-references are hard to find, and extensive sets of cross-references only exist for the few most highly studied books such as the Bible, for which scholars have been collecting cross-references for hundreds of years. Therefore, we propose a new task: generate cross-references for user-selected text on demand. We define a metric, coverage, to evaluate task performance. We adapt several models to generate cross references, including an Anchor Words topic model, SBERT SentenceTransformers, and ChatGPT, and evaluate their coverage in both English and German on existing cross-reference datasets. While ChatGPT outperforms other models on these datasets, this is likely due to data contamination. We hand-evaluate performance on the well-known works of Jane Austen and a less-known science fiction series Sons of the Starfarers by Joe Vasicek, finding that ChatGPT does not perform as well on these works; sentence embeddings perform best. We experiment with newer LLMs and large context windows, and suggest that future work should focus on deploying cross-references on-demand with readers to determine their effectiveness in the wild.", }
Cross-references link source passages of text to other passages that elucidate the source passage in some way and can deepen human understanding. Despite their usefulness, however, good cross-references are hard to find, and extensive sets of cross-references exist only for the few most highly studied books, such as the Bible, for which scholars have been collecting cross-references for hundreds of years. Therefore, we propose a new task: generate cross-references for user-selected text on demand. We define a metric, coverage, to evaluate task performance. We adapt several models to generate cross-references, including an Anchor Words topic model, SBERT SentenceTransformers, and ChatGPT, and evaluate their coverage in both English and German on existing cross-reference datasets. While ChatGPT outperforms other models on these datasets, this is likely due to data contamination. We hand-evaluate performance on the well-known works of Jane Austen and the less-known science fiction series Sons of the Starfarers by Joe Vasicek, finding that ChatGPT does not perform as well on these works; sentence embeddings perform best. We experiment with newer LLMs and large context windows, and suggest that future work should focus on deploying cross-references on demand with readers to determine their effectiveness in the wild.
[ "Vasicek, Piper", "Byun, Courtni", "Seppi, Kevin" ]
It is a Truth Individually Acknowledged: Cross-references On Demand
nlp4dh-1.7
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
1
https://aclanthology.org/2024.nlp4dh-1.8.bib
https://aclanthology.org/2024.nlp4dh-1.8/
@inproceedings{venglarova-etal-2024-extracting, title = "Extracting position titles from unstructured historical job advertisements", author = "Venglarova, Klara and Adam, Raven and Vogeler, Georg", editor = {H{\"a}m{\"a}l{\"a}inen, Mika and {\"O}hman, Emily and Miyagawa, So and Alnajjar, Khalid and Bizzoni, Yuri}, booktitle = "Proceedings of the 4th International Conference on Natural Language Processing for Digital Humanities", month = nov, year = "2024", address = "Miami, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.nlp4dh-1.8", pages = "75--84", abstract = "This paper explores the automated extraction of job titles from unstructured historical job advertisements, using a corpus of digitized German-language newspapers from 1850-1950. The study addresses the challenges of working with unstructured, OCR-processed historical data, contrasting with contemporary approaches that often use structured, digitally-born datasets when dealing with this text type. We compare four extraction methods: a dictionary-based approach, a rule-based approach, a named entity recognition (NER) mode, and a text-generation method. The NER approach, trained on manually annotated data, achieved the highest F1 score (0.944 using transformers model trained on GPU, 0.884 model trained on CPU), demonstrating its flexibility and ability to correctly identify job titles. The text-generation approach performs similarly (0.920). However, the rule-based (0.69) and dictionary-based (0.632) methods reach relatively high F1 Scores as well, while offering the advantage of not requiring extensive labeling of training data. The results highlight the complexities of extracting meaningful job titles from historical texts, with implications for further research into labor market trends and occupational history.", }
This paper explores the automated extraction of job titles from unstructured historical job advertisements, using a corpus of digitized German-language newspapers from 1850 to 1950. The study addresses the challenges of working with unstructured, OCR-processed historical data, in contrast with contemporary approaches that often use structured, digitally born datasets when dealing with this text type. We compare four extraction methods: a dictionary-based approach, a rule-based approach, a named entity recognition (NER) model, and a text-generation method. The NER approach, trained on manually annotated data, achieved the highest F1 score (0.944 for a transformer model trained on GPU, 0.884 for a model trained on CPU), demonstrating its flexibility and ability to correctly identify job titles. The text-generation approach performs similarly (0.920). However, the rule-based (0.69) and dictionary-based (0.632) methods reach relatively high F1 scores as well, while offering the advantage of not requiring extensive labeling of training data. The results highlight the complexities of extracting meaningful job titles from historical texts, with implications for further research into labor market trends and occupational history.
[ "Venglarova, Klara", "Adam, Raven", "Vogeler, Georg" ]
Extracting position titles from unstructured historical job advertisements
nlp4dh-1.8
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
1
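Two of the four methods compared above need no training data at all. The sketch below illustrates what such dictionary- and rule-based baselines might look like; the lexicon, the trigger pattern, and the sample advertisement are invented stand-ins, not material from the 1850-1950 corpus.

```python
# Toy dictionary- and rule-based job-title extraction baselines.
# Lexicon entries, the trigger regex, and the ad text are all invented.
import re

LEXICON = {"Schneider", "Buchhalter", "Dienstmädchen", "Lehrling"}

def dictionary_titles(text):
    # keep every token that appears in the job-title lexicon
    return [w for w in re.findall(r"\w+", text) if w in LEXICON]

def rule_titles(text):
    # e.g. "ein tüchtiger Buchhalter" -> capture the word after the trigger
    return re.findall(r"t(?:ü|ue)chtige?r?\s+(\w+)", text)

ad = "Gesucht: ein tüchtiger Buchhalter für sofort. Auch ein Lehrling findet Stelle."
print(dictionary_titles(ad))   # ['Buchhalter', 'Lehrling']
print(rule_titles(ad))         # ['Buchhalter']
```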
https://aclanthology.org/2024.nlp4dh-1.9.bib
https://aclanthology.org/2024.nlp4dh-1.9/
@inproceedings{hervieux-etal-2024-language, title = "Language Resources From Prominent Born-Digital Humanities Texts are Still Needed in the Age of {LLM}s", author = "Hervieux, Natalie and Yao, Peiran and Brown, Susan and Barbosa, Denilson", editor = {H{\"a}m{\"a}l{\"a}inen, Mika and {\"O}hman, Emily and Miyagawa, So and Alnajjar, Khalid and Bizzoni, Yuri}, booktitle = "Proceedings of the 4th International Conference on Natural Language Processing for Digital Humanities", month = nov, year = "2024", address = "Miami, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.nlp4dh-1.9", pages = "85--104", abstract = "The digital humanities (DH) community fundamentally embraces the use of computerized tools for the study and creation of knowledge related to language, history, culture, and human values, in which natural language plays a prominent role. Many successful DH tools rely heavily on Natural Language Processing methods, and several efforts exist within the DH community to promote the use of newer and better tools. Nevertheless, most NLP research is driven by web corpora that are noticeably different from texts commonly found in DH artifacts, which tend to use richer language and refer to rarer entities. Thus, the near-human performance achieved by state-of-the-art NLP tools on web texts might not be achievable on DH texts. We introduce a dataset carefully created by computer scientists and digital humanists intended to serve as a reference point for the development and evaluation of NLP tools. The dataset is a subset of a born-digital textbase resulting from a prominent and ongoing experiment in digital literary history, containing thousands of multi-sentence excerpts that are suited for information extraction tasks. We fully describe the dataset and show that its language is demonstrably different than the corpora normally used in training language resources in the NLP community.", }
The digital humanities (DH) community fundamentally embraces the use of computerized tools for the study and creation of knowledge related to language, history, culture, and human values, in which natural language plays a prominent role. Many successful DH tools rely heavily on Natural Language Processing methods, and several efforts exist within the DH community to promote the use of newer and better tools. Nevertheless, most NLP research is driven by web corpora that are noticeably different from texts commonly found in DH artifacts, which tend to use richer language and refer to rarer entities. Thus, the near-human performance achieved by state-of-the-art NLP tools on web texts might not be achievable on DH texts. We introduce a dataset carefully created by computer scientists and digital humanists intended to serve as a reference point for the development and evaluation of NLP tools. The dataset is a subset of a born-digital textbase resulting from a prominent and ongoing experiment in digital literary history, containing thousands of multi-sentence excerpts that are suited for information extraction tasks. We fully describe the dataset and show that its language is demonstrably different from the corpora normally used in training language resources in the NLP community.
[ "Hervieux, Natalie", "Yao, Peiran", "Brown, Susan", "Barbosa, Denilson" ]
Language Resources From Prominent Born-Digital Humanities Texts are Still Needed in the Age of LLMs
nlp4dh-1.9
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
1
https://aclanthology.org/2024.nlp4dh-1.10.bib
https://aclanthology.org/2024.nlp4dh-1.10/
@inproceedings{pawlowski-walkowiak-2024-nlp, title = "{NLP} for Digital Humanities: Processing Chronological Text Corpora", author = "Paw{\l}owski, Adam and Walkowiak, Tomasz", editor = {H{\"a}m{\"a}l{\"a}inen, Mika and {\"O}hman, Emily and Miyagawa, So and Alnajjar, Khalid and Bizzoni, Yuri}, booktitle = "Proceedings of the 4th International Conference on Natural Language Processing for Digital Humanities", month = nov, year = "2024", address = "Miami, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.nlp4dh-1.10", pages = "105--112", abstract = "The paper focuses on the integration of Natural Language Processing (NLP) techniques to analyze extensive chronological text corpora. This research underscores the synergy between humanistic inquiry and computational methods, especially in the processing and analysis of sequential textual data known as lexical series. A reference workflow for chronological corpus analysis is introduced, outlining the methodologies applicable to the ChronoPress corpus, a data set that encompasses 22 years of Polish press from 1945 to 1966. The study showcases the potential of this approach in uncovering cultural and historical patterns through the analysis of lexical series. The findings highlight both the challenges and opportunities present in leveraging lexical series analysis within Digital Humanities, emphasizing the necessity for advanced data filtering and anomaly detection algorithms to effectively manage the vast and intricate datasets characteristic of this field.", }
The paper focuses on the integration of Natural Language Processing (NLP) techniques to analyze extensive chronological text corpora. This research underscores the synergy between humanistic inquiry and computational methods, especially in the processing and analysis of sequential textual data known as lexical series. A reference workflow for chronological corpus analysis is introduced, outlining the methodologies applicable to the ChronoPress corpus, a data set that encompasses 22 years of Polish press from 1945 to 1966. The study showcases the potential of this approach in uncovering cultural and historical patterns through the analysis of lexical series. The findings highlight both the challenges and opportunities present in leveraging lexical series analysis within Digital Humanities, emphasizing the necessity for advanced data filtering and anomaly detection algorithms to effectively manage the vast and intricate datasets characteristic of this field.
[ "Paw{\\l}owski, Adam", "Walkowiak, Tomasz" ]
NLP for Digital Humanities: Processing Chronological Text Corpora
nlp4dh-1.10
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
1
https://aclanthology.org/2024.nlp4dh-1.11.bib
https://aclanthology.org/2024.nlp4dh-1.11/
@inproceedings{du-hoste-2024-multi, title = "A Multi-task Framework with Enhanced Hierarchical Attention for Sentiment Analysis on Classical {C}hinese Poetry: Utilizing Information from Short Lines", author = "Du, Quanqi and Hoste, Veronique", editor = {H{\"a}m{\"a}l{\"a}inen, Mika and {\"O}hman, Emily and Miyagawa, So and Alnajjar, Khalid and Bizzoni, Yuri}, booktitle = "Proceedings of the 4th International Conference on Natural Language Processing for Digital Humanities", month = nov, year = "2024", address = "Miami, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.nlp4dh-1.11", pages = "113--122", abstract = "Classical Chinese poetry has a long history, dating back to the 11th century BC. By investigating the sentiment expressed in the poetry, we can gain more insights in the emotional life and history development in ancient Chinese culture. To help improve the sentiment analysis performance in the field of classical Chinese poetry, we propose to utilize the unique information from the individual short lines that compose the poem, and introduce a multi-task framework with hierarchical attention enhanced with short line sentiment labels. Specifically, the multi-task framework comprises sentiment analysis for both the overall poem and the short lines, while the hierarchical attention consists of word- and sentence-level attention, with the latter enhanced with additional information from short line sentiments. Our experimental results showcase that our approach leveraging more fine-grained information from short lines outperforms the state-of-the-art, achieving an accuracy score of 72.88{\%} and an F1-macro score of 71.05{\%}.", }
Classical Chinese poetry has a long history, dating back to the 11th century BC. By investigating the sentiment expressed in this poetry, we can gain more insight into emotional life and historical development in ancient Chinese culture. To help improve sentiment analysis performance in the field of classical Chinese poetry, we propose to utilize the unique information from the individual short lines that compose the poem, and introduce a multi-task framework with hierarchical attention enhanced with short-line sentiment labels. Specifically, the multi-task framework comprises sentiment analysis for both the overall poem and the short lines, while the hierarchical attention consists of word- and sentence-level attention, with the latter enhanced with additional information from short-line sentiments. Our experimental results showcase that our approach, leveraging more fine-grained information from short lines, outperforms the state-of-the-art, achieving an accuracy score of 72.88{\%} and an F1-macro score of 71.05{\%}.
[ "Du, Quanqi", "Hoste, Veronique" ]
A Multi-task Framework with Enhanced Hierarchical Attention for Sentiment Analysis on Classical Chinese Poetry: Utilizing Information from Short Lines
nlp4dh-1.11
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
1
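The sentence-level attention "enhanced with additional information from short-line sentiments" described above can be pictured as attention pooling whose scores receive a bias from per-line sentiment predictions. The PyTorch sketch below is one plausible reading of that design; the dimensions and the additive-bias scheme are assumptions, not the authors' exact architecture.

```python
# Sketch of sentiment-biased sentence-level attention pooling.
# Hidden size, sentiment count, and the additive bias are assumptions.
import torch
import torch.nn as nn

class LineAttentionPool(nn.Module):
    def __init__(self, hidden=256, n_sentiments=4):
        super().__init__()
        self.score = nn.Linear(hidden, 1)            # plain attention score
        self.sent_bias = nn.Linear(n_sentiments, 1)  # bias from line sentiments

    def forward(self, line_reprs, line_sent_logits):
        # line_reprs: (lines, hidden); line_sent_logits: (lines, n_sentiments)
        a = self.score(line_reprs) + self.sent_bias(line_sent_logits)
        w = torch.softmax(a.squeeze(-1), dim=0)           # (lines,)
        return (w.unsqueeze(-1) * line_reprs).sum(dim=0)  # pooled poem vector

pool = LineAttentionPool()
poem = pool(torch.randn(4, 256), torch.randn(4, 4))
print(poem.shape)  # torch.Size([256])
```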
https://aclanthology.org/2024.nlp4dh-1.12.bib
https://aclanthology.org/2024.nlp4dh-1.12/
@inproceedings{miyagawa-etal-2024-exploring, title = "Exploring Similarity Measures and Intertextuality in {V}edic {S}anskrit Literature", author = "Miyagawa, So and Kyogoku, Yuki and Tsukagoshi, Yuzuki and Amano, Kyoko", editor = {H{\"a}m{\"a}l{\"a}inen, Mika and {\"O}hman, Emily and Miyagawa, So and Alnajjar, Khalid and Bizzoni, Yuri}, booktitle = "Proceedings of the 4th International Conference on Natural Language Processing for Digital Humanities", month = nov, year = "2024", address = "Miami, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.nlp4dh-1.12", pages = "123--131", abstract = "This paper examines semantic similarity and intertextuality in selected texts from the Vedic Sanskrit corpus, specifically the Maitr{\=a}yaṇ{\=\i} Saṃhit{\=a} (MS) and K{\=a}ṭhaka-Saṃhit{\=a} (KS). Three computational methods are employed: Word2Vec for word embeddings, stylo package for stylometric analysis, and TRACER for text reuse detection. By comparing various sections of the texts at different granularities, patterns of similarity and structural alignment are uncovered, providing insights into textual relationships and chronology. Word embeddings capture semantic similarities, while stylometric analysis reveals clusters and components that differentiate the texts. TRACER identifies parallel passages, indicating probable instances of text reuse. The computational analysis corroborates previous philological studies, suggesting a shared period of composition between MS.1.9 and MS.1.7. This research highlights the potential of computational methods in studying ancient Sanskrit literature, complementing traditional approaches. The agreement among the methods strengthens the validity of the findings, and the visualizations offer a nuanced understanding of textual connections. The study demonstrates that smaller chunk sizes are more effective for detecting intertextual parallels, showcasing the power of these techniques in unraveling the complexities of ancient texts.", }
This paper examines semantic similarity and intertextuality in selected texts from the Vedic Sanskrit corpus, specifically the Maitr{\=a}yaṇ{\=\i} Saṃhit{\=a} (MS) and K{\=a}ṭhaka-Saṃhit{\=a} (KS). Three computational methods are employed: Word2Vec for word embeddings, the stylo package for stylometric analysis, and TRACER for text reuse detection. By comparing various sections of the texts at different granularities, patterns of similarity and structural alignment are uncovered, providing insights into textual relationships and chronology. Word embeddings capture semantic similarities, while stylometric analysis reveals clusters and components that differentiate the texts. TRACER identifies parallel passages, indicating probable instances of text reuse. The computational analysis corroborates previous philological studies, suggesting a shared period of composition between MS.1.9 and MS.1.7. This research highlights the potential of computational methods in studying ancient Sanskrit literature, complementing traditional approaches. The agreement among the methods strengthens the validity of the findings, and the visualizations offer a nuanced understanding of textual connections. The study demonstrates that smaller chunk sizes are more effective for detecting intertextual parallels, showcasing the power of these techniques in unraveling the complexities of ancient texts.
[ "Miyagawa, So", "Kyogoku, Yuki", "Tsukagoshi, Yuzuki", "Amano, Kyoko" ]
Exploring Similarity Measures and Intertextuality in Vedic Sanskrit Literature
nlp4dh-1.12
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
1
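Of the three methods named above, the Word2Vec component is the easiest to sketch: train skip-gram embeddings on the tokenized corpus and query nearest neighbours. The gensim snippet below does this on three placeholder lines of transliterated tokens; real Samhita text and tuned hyperparameters would replace them.

```python
# Train toy skip-gram embeddings and query nearest neighbours.
# The three "sentences" are placeholder transliterated tokens, not real text.
from gensim.models import Word2Vec

sentences = [
    ["agni", "hotram", "yajati"],
    ["soma", "pavate", "agni"],
    ["indra", "soma", "pibati"],
]
model = Word2Vec(sentences, vector_size=50, window=2, min_count=1, sg=1, epochs=50)
print(model.wv.most_similar("agni", topn=2))
```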
https://aclanthology.org/2024.nlp4dh-1.13.bib
https://aclanthology.org/2024.nlp4dh-1.13/
@inproceedings{manrique-gomez-etal-2024-historical, title = "Historical Ink: 19th Century {L}atin {A}merican {S}panish Newspaper Corpus with {LLM} {OCR} Correction", author = "Manrique-Gomez, Laura and Montes, Tony and Rodriguez Herrera, Arturo and Manrique, Ruben", editor = {H{\"a}m{\"a}l{\"a}inen, Mika and {\"O}hman, Emily and Miyagawa, So and Alnajjar, Khalid and Bizzoni, Yuri}, booktitle = "Proceedings of the 4th International Conference on Natural Language Processing for Digital Humanities", month = nov, year = "2024", address = "Miami, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.nlp4dh-1.13", pages = "132--139", abstract = "This paper presents two significant contributions: First, it introduces a novel dataset of 19th-century Latin American newspaper texts, addressing a critical gap in specialized corpora for historical and linguistic analysis in this region. Second, it develops a flexible framework that utilizes a Large Language Model for OCR error correction and linguistic surface form detection in digitized corpora. This semi-automated framework is adaptable to various contexts and datasets and is applied to the newly created dataset.", }
This paper presents two significant contributions: First, it introduces a novel dataset of 19th-century Latin American newspaper texts, addressing a critical gap in specialized corpora for historical and linguistic analysis in this region. Second, it develops a flexible framework that utilizes a Large Language Model for OCR error correction and linguistic surface form detection in digitized corpora. This semi-automated framework is adaptable to various contexts and datasets and is applied to the newly created dataset.
[ "Manrique-Gomez, Laura", "Montes, Tony", "Rodriguez Herrera, Arturo", "Manrique, Ruben" ]
Historical Ink: 19th Century Latin American Spanish Newspaper Corpus with LLM OCR Correction
nlp4dh-1.13
Poster
2407.12838
[ "https://github.com/historicalink/LatamXIX" ]
https://huggingface.co/papers/2407.12838
1
0
0
3
[]
[]
[]
[]
[]
[]
1
https://aclanthology.org/2024.nlp4dh-1.14.bib
https://aclanthology.org/2024.nlp4dh-1.14/
@inproceedings{feldkamp-etal-2024-canonical, title = "Canonical Status and Literary Influence: A Comparative Study of {D}anish Novels from the Modern Breakthrough (1870{--}1900)", author = "Feldkamp, Pascale and Lassche, Alie and Kostkan, Jan and Kardos, M{\'a}rton and Enevoldsen, Kenneth and Baunvig, Katrine and Nielbo, Kristoffer", editor = {H{\"a}m{\"a}l{\"a}inen, Mika and {\"O}hman, Emily and Miyagawa, So and Alnajjar, Khalid and Bizzoni, Yuri}, booktitle = "Proceedings of the 4th International Conference on Natural Language Processing for Digital Humanities", month = nov, year = "2024", address = "Miami, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.nlp4dh-1.14", pages = "140--155", abstract = "We examine the relationship between the canonization of Danish novels and their textual innovation and influence, taking the Danish Modern Breakthrough era (1870{--}1900) as a case study. We evaluate whether canonical novels introduced a significant textual novelty in their time, and explore their influence on the overall literary trend of the period. By analyzing the positions of canonical versus non-canonical novels in semantic space, we seek to better understand the link between a novel{'}s canonical status and its literary impact. Additionally, we examine the overall diversification of Modern Breakthrough novels during this significant period of rising literary readership. We find that canonical novels stand out from both the historical novel genre and non-canonical novels of the period. Our findings on diversification within and across groups indicate that the novels now regarded as canonical served as literary trendsetters of their time.", }
We examine the relationship between the canonization of Danish novels and their textual innovation and influence, taking the Danish Modern Breakthrough era (1870{--}1900) as a case study. We evaluate whether canonical novels introduced a significant textual novelty in their time, and explore their influence on the overall literary trend of the period. By analyzing the positions of canonical versus non-canonical novels in semantic space, we seek to better understand the link between a novel{'}s canonical status and its literary impact. Additionally, we examine the overall diversification of Modern Breakthrough novels during this significant period of rising literary readership. We find that canonical novels stand out from both the historical novel genre and non-canonical novels of the period. Our findings on diversification within and across groups indicate that the novels now regarded as canonical served as literary trendsetters of their time.
[ "Feldkamp, Pascale", "Lassche, Alie", "Kostkan, Jan", "Kardos, M{\\'a}rton", "Enevoldsen, Kenneth", "Baunvig, Katrine", "Nielbo, Kristoffer" ]
Canonical Status and Literary Influence: A Comparative Study of Danish Novels from the Modern Breakthrough (1870–1900)
nlp4dh-1.14
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
1
https://aclanthology.org/2024.nlp4dh-1.15.bib
https://aclanthology.org/2024.nlp4dh-1.15/
@inproceedings{chopra-etal-2024-deciphering, title = "Deciphering psycho-social effects of Eating Disorder : Analysis of {R}eddit Posts using Large Language Model({LLM})s and Topic Modeling", author = "Chopra, Medini and Chatterjee, Anindita and Dey, Lipika and Das, Partha Pratim", editor = {H{\"a}m{\"a}l{\"a}inen, Mika and {\"O}hman, Emily and Miyagawa, So and Alnajjar, Khalid and Bizzoni, Yuri}, booktitle = "Proceedings of the 4th International Conference on Natural Language Processing for Digital Humanities", month = nov, year = "2024", address = "Miami, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.nlp4dh-1.15", pages = "156--164", abstract = "Eating disorders are a global health concern as they manifest in increasing numbers across all sections of society. Social network platforms have emerged as a dependable source of information about the disease, its effect, and its prevalence among different sections. This work lays the foundation for large-scale analysis of social media data using large language models (LLMs). We show that using LLMs can drastically reduce the time and resource requirements for garnering insights from large data repositories. With respect to ED, this work focuses on understanding its psychological impacts on both patients and those who live in their proximity. Social scientists can utilize the proposed approach to design more focused studies with better representative groups.", }
Eating disorders are a global health concern as they manifest in increasing numbers across all sections of society. Social network platforms have emerged as a dependable source of information about the disease, its effects, and its prevalence among different sections. This work lays the foundation for large-scale analysis of social media data using large language models (LLMs). We show that using LLMs can drastically reduce the time and resource requirements for garnering insights from large data repositories. With respect to eating disorders, this work focuses on understanding their psychological impacts on both patients and those who live in their proximity. Social scientists can utilize the proposed approach to design more focused studies with better representative groups.
[ "Chopra, Medini", "Chatterjee, Anindita", "Dey, Lipika", "Das, Partha Pratim" ]
Deciphering psycho-social effects of Eating Disorder : Analysis of Reddit Posts using Large Language Model(LLM)s and Topic Modeling
nlp4dh-1.15
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
1
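The topic-modeling half of the pipeline above can be approximated with a classical LDA model, as sketched below with gensim; the four toy "posts" and the hyperparameters are invented placeholders, and the paper's LLM-assisted analysis is a separate step not shown here.

```python
# Fit a small LDA topic model over tokenized posts with gensim.
# Posts, topic count, and passes are arbitrary placeholders.
from gensim import corpora
from gensim.models import LdaModel

posts = [
    "anxiety around meals family dinner stress".split(),
    "recovery support therapist progress meals".split(),
    "family worried sibling eating less stress".split(),
    "therapist coping skills recovery journey".split(),
]
dictionary = corpora.Dictionary(posts)
bow = [dictionary.doc2bow(p) for p in posts]
lda = LdaModel(bow, num_topics=2, id2word=dictionary, passes=20, random_state=0)
for tid in range(2):
    print(tid, lda.print_topic(tid, topn=4))
```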
https://aclanthology.org/2024.nlp4dh-1.16.bib
https://aclanthology.org/2024.nlp4dh-1.16/
@inproceedings{nguyen-nguyen-2024-topic, title = "Topic-Aware Causal Intervention for Counterfactual Detection", author = "Nguyen, Thong Thanh and Nguyen, Truc-My", editor = {H{\"a}m{\"a}l{\"a}inen, Mika and {\"O}hman, Emily and Miyagawa, So and Alnajjar, Khalid and Bizzoni, Yuri}, booktitle = "Proceedings of the 4th International Conference on Natural Language Processing for Digital Humanities", month = nov, year = "2024", address = "Miami, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.nlp4dh-1.16", pages = "165--176", abstract = "Counterfactual statements, which describe events that did not or cannot take place, are beneficial to numerous NLP applications. Hence, we consider the problem of counterfactual detection (CFD) and seek to enhance the CFD models. Previous models are reliant on clue phrases to predict counterfactuality, so they suffer from significant performance drop when clue phrase hints do not exist during testing. Moreover, these models tend to predict non-counterfactuals over counterfactuals. To address these issues, we propose to integrate neural topic model into the CFD model to capture the global semantics of the input statement. We continue to causally intervene the hidden representations of the CFD model to balance the effect of the class labels. Extensive experiments show that our approach outperforms previous state-of-the-art CFD and bias-resolving methods in both the CFD and other bias-sensitive tasks.", }
Counterfactual statements, which describe events that did not or cannot take place, are beneficial to numerous NLP applications. Hence, we consider the problem of counterfactual detection (CFD) and seek to enhance CFD models. Previous models rely on clue phrases to predict counterfactuality, so they suffer a significant performance drop when clue-phrase hints are absent at test time. Moreover, these models tend to predict non-counterfactuals over counterfactuals. To address these issues, we propose to integrate a neural topic model into the CFD model to capture the global semantics of the input statement. We further causally intervene on the hidden representations of the CFD model to balance the effect of the class labels. Extensive experiments show that our approach outperforms previous state-of-the-art CFD and bias-resolving methods on both the CFD task and other bias-sensitive tasks.
[ "Nguyen, Thong Thanh", "Nguyen, Truc-My" ]
Topic-Aware Causal Intervention for Counterfactual Detection
nlp4dh-1.16
Poster
2409.16668
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://aclanthology.org/2024.nlp4dh-1.17.bib
https://aclanthology.org/2024.nlp4dh-1.17/
@inproceedings{dipper-laarmann-quante-2024-ud, title = "{UD} for {G}erman Poetry", author = "Dipper, Stefanie and Laarmann-Quante, Ronja", editor = {H{\"a}m{\"a}l{\"a}inen, Mika and {\"O}hman, Emily and Miyagawa, So and Alnajjar, Khalid and Bizzoni, Yuri}, booktitle = "Proceedings of the 4th International Conference on Natural Language Processing for Digital Humanities", month = nov, year = "2024", address = "Miami, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.nlp4dh-1.17", pages = "177--188", abstract = "This article deals with the syntactic analysis of German-language poetry from different centuries. We use Universal Dependencies (UD) as our syntactic framework. We discuss particular challenges of the poems in terms of tokenization, sentence boundary recognition and special syntactic constructions. Our annotated corpus currently consists of 20 poems with a total of 2,162 tokens, which originate from the PoeTree.de corpus. We present some statistics on our annotations and also evaluate the automatic UD annotation from PoeTree.de using our annotations.", }
This article deals with the syntactic analysis of German-language poetry from different centuries. We use Universal Dependencies (UD) as our syntactic framework. We discuss particular challenges of the poems in terms of tokenization, sentence boundary recognition and special syntactic constructions. Our annotated corpus currently consists of 20 poems with a total of 2,162 tokens, which originate from the PoeTree.de corpus. We present some statistics on our annotations and also evaluate the automatic UD annotation from PoeTree.de using our annotations.
[ "Dipper, Stefanie", "Laarmann-Quante, Ronja" ]
UD for German Poetry
nlp4dh-1.17
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
1
https://aclanthology.org/2024.nlp4dh-1.18.bib
https://aclanthology.org/2024.nlp4dh-1.18/
@inproceedings{dent-etal-2024-molye, title = "Moly{\'e}: A Corpus-based Approach to Language Contact in Colonial {F}rance", author = "Dent, Rasul and Janes, Juliette and Clerice, Thibault and Ortiz Suarez, Pedro and Sagot, Beno{\^\i}t", editor = {H{\"a}m{\"a}l{\"a}inen, Mika and {\"O}hman, Emily and Miyagawa, So and Alnajjar, Khalid and Bizzoni, Yuri}, booktitle = "Proceedings of the 4th International Conference on Natural Language Processing for Digital Humanities", month = nov, year = "2024", address = "Miami, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.nlp4dh-1.18", pages = "189--199", abstract = "Whether or not several Creole languages which developed during the early modern period can be considered genetic descendants of European languages has been the subject of intense debate. This is in large part due to the absence of evidence of intermediate forms. This work introduces a new open corpus, the Moly{\'e} corpus, which combines stereotypical representations of three kinds of language variation in Europe with early attestations of French-based Creole languages across a period of 400 years. It is intended to facilitate future research on the continuity between contact situations in Europe and Creolophone (former) colonies.", }
Whether or not several Creole languages which developed during the early modern period can be considered genetic descendants of European languages has been the subject of intense debate. This is in large part due to the absence of evidence of intermediate forms. This work introduces a new open corpus, the Moly{\'e} corpus, which combines stereotypical representations of three kinds of language variation in Europe with early attestations of French-based Creole languages across a period of 400 years. It is intended to facilitate future research on the continuity between contact situations in Europe and Creolophone (former) colonies.
[ "Dent, Rasul", "Janes, Juliette", "Clerice, Thibault", "Ortiz Suarez, Pedro", "Sagot, Beno{\\^\\i}t" ]
Molyé: A Corpus-based Approach to Language Contact in Colonial France
nlp4dh-1.18
Poster
2408.04554
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://aclanthology.org/2024.nlp4dh-1.19.bib
https://aclanthology.org/2024.nlp4dh-1.19/
@inproceedings{kurzynski-etal-2024-vector, title = "Vector Poetics: Parallel Couplet Detection in Classical {C}hinese Poetry", author = "Kurzynski, Maciej and Xu, Xiaotong and Feng, Yu", editor = {H{\"a}m{\"a}l{\"a}inen, Mika and {\"O}hman, Emily and Miyagawa, So and Alnajjar, Khalid and Bizzoni, Yuri}, booktitle = "Proceedings of the 4th International Conference on Natural Language Processing for Digital Humanities", month = nov, year = "2024", address = "Miami, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.nlp4dh-1.19", pages = "200--208", abstract = "This paper explores computational approaches for detecting parallelism in classical Chinese poetry, a rhetorical device where two verses mirror each other in syntax, meaning, tone, and rhythm. We experiment with five classification methods: (1) verb position matching, (2) integrated semantic, syntactic, and word-segmentation analysis, (3) difference-based character embeddings, (4) structured examples (inner/outer couplets), and (5) GPT-guided classification. We use a manually annotated dataset, containing 6,125 pentasyllabic couplets, to evaluate performance. The results indicate that parallelism detection poses a significant challenge even for powerful LLMs such as GPT-4o, with the highest F1 score below 0.72. Nevertheless, each method contributes valuable insights into the art of parallelism in Chinese poetry, suggesting a new understanding of parallelism as a verbal expression of principal components in a culturally defined vector space.", }
This paper explores computational approaches for detecting parallelism in classical Chinese poetry, a rhetorical device where two verses mirror each other in syntax, meaning, tone, and rhythm. We experiment with five classification methods: (1) verb position matching, (2) integrated semantic, syntactic, and word-segmentation analysis, (3) difference-based character embeddings, (4) structured examples (inner/outer couplets), and (5) GPT-guided classification. We use a manually annotated dataset, containing 6,125 pentasyllabic couplets, to evaluate performance. The results indicate that parallelism detection poses a significant challenge even for powerful LLMs such as GPT-4o, with the highest F1 score below 0.72. Nevertheless, each method contributes valuable insights into the art of parallelism in Chinese poetry, suggesting a new understanding of parallelism as a verbal expression of principal components in a culturally defined vector space.
[ "Kurzynski, Maciej", "Xu, Xiaotong", "Feng, Yu" ]
Vector Poetics: Parallel Couplet Detection in Classical Chinese Poetry
nlp4dh-1.19
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
1
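As a rough illustration of the first strategy listed in the Kurzynski et al. abstract above (verb-position matching), the sketch below labels a couplet as parallel when its two verses carry verbs at the same positions. The POS sequences and function names are hypothetical toy inputs, not the paper's code.

```python
# Hypothetical sketch of verb-position matching for couplet parallelism:
# a couplet counts as "parallel" under this heuristic when its two verses
# have verbs at identical character positions. Toy POS tags, not real data.
def verb_positions(pos_tags):
    return {i for i, tag in enumerate(pos_tags) if tag == "VERB"}

def is_parallel(pos_line1, pos_line2):
    return verb_positions(pos_line1) == verb_positions(pos_line2)

# Two pentasyllabic verses with made-up POS sequences:
line1 = ["NOUN", "NOUN", "VERB", "ADJ", "NOUN"]
line2 = ["NOUN", "NOUN", "VERB", "ADJ", "NOUN"]
print(is_parallel(line1, line2))  # True
```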
https://aclanthology.org/2024.nlp4dh-1.20.bib
https://aclanthology.org/2024.nlp4dh-1.20/
@inproceedings{roussel-2024-adapting, title = "Adapting Measures of Literality for Use with Historical Language Data", author = "Roussel, Adam", editor = {H{\"a}m{\"a}l{\"a}inen, Mika and {\"O}hman, Emily and Miyagawa, So and Alnajjar, Khalid and Bizzoni, Yuri}, booktitle = "Proceedings of the 4th International Conference on Natural Language Processing for Digital Humanities", month = nov, year = "2024", address = "Miami, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.nlp4dh-1.20", pages = "209--215", abstract = "This paper concerns the adaptation of two existing computational measures of the literality of expressions to enable their use in scenarios where data is scarce, as is usually the case with historical language data. Being able to determine an expression{'}s literality via statistical means could support a range of linguistic annotation tasks, such as those relating to metaphor, metonymy, and idiomatic expressions; however, making this judgment is especially difficult for modern annotators of historical and ancient texts. Therefore, we re-implement these measures using smaller corpora and count-based vectors more suited to these amounts of training data. The adapted measures are evaluated against an existing data set of particle verbs annotated with degrees of literality. The results were inconclusive, yielding low correlations between 0.05 and 0.10 (Spearman{'}s ρ). Further work is needed to determine which measures and types of data correspond to which aspects of literality.", }
This paper concerns the adaptation of two existing computational measures of the literality of expressions to enable their use in scenarios where data is scarce, as is usually the case with historical language data. Being able to determine an expression{'}s literality via statistical means could support a range of linguistic annotation tasks, such as those relating to metaphor, metonymy, and idiomatic expressions; however, making this judgment is especially difficult for modern annotators of historical and ancient texts. Therefore, we re-implement these measures using smaller corpora and count-based vectors more suited to these amounts of training data. The adapted measures are evaluated against an existing data set of particle verbs annotated with degrees of literality. The results were inconclusive, yielding low correlations between 0.05 and 0.10 (Spearman{'}s ρ). Further work is needed to determine which measures and types of data correspond to which aspects of literality.
[ "Roussel, Adam" ]
Adapting Measures of Literality for Use with Historical Language Data
nlp4dh-1.20
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
1
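One standard way to operationalise the literality of a particle verb with count-based vectors, broadly in the spirit of the Roussel abstract above though not necessarily the paper's exact measures, is the distributional similarity between the expression and its base verb. A minimal sketch, with toy data and hypothetical names:

```python
# Minimal count-vector sketch (toy data, hypothetical names): literality is
# proxied by the cosine similarity between the context vectors of a particle
# verb and of its base verb. Real studies would use a full historical corpus.
import math
from collections import Counter

def context_vector(target, corpus, window=2):
    """Count words within +/-window tokens of each occurrence of target."""
    vec = Counter()
    for sent in corpus:
        toks = sent.split()
        for i, tok in enumerate(toks):
            if tok == target:
                lo, hi = max(0, i - window), min(len(toks), i + window + 1)
                for j in range(lo, hi):
                    if j != i:
                        vec[toks[j]] += 1
    return vec

def cosine(u, v):
    dot = sum(u[k] * v[k] for k in u if k in v)
    norm = math.sqrt(sum(x * x for x in u.values())) * math.sqrt(sum(x * x for x in v.values()))
    return dot / norm if norm else 0.0

corpus = [  # toy sentences; inflected forms are ignored for simplicity
    "die sonne wird morgen aufgehen",
    "die sonne wird gleich aufgehen",
    "wir gehen morgen zum markt",
    "sie wird morgen gehen",
]
sim = cosine(context_vector("aufgehen", corpus), context_vector("gehen", corpus))
print(f"literality proxy for 'aufgehen' vs. 'gehen': {sim:.2f}")
```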
https://aclanthology.org/2024.nlp4dh-1.21.bib
https://aclanthology.org/2024.nlp4dh-1.21/
@inproceedings{kupari-etal-2024-improving, title = "Improving {L}atin Dependency Parsing by Combining Treebanks and Predictions", author = "Kupari, Hanna-Mari Kristiina and Henriksson, Erik and Laippala, Veronika and Kanerva, Jenna", editor = {H{\"a}m{\"a}l{\"a}inen, Mika and {\"O}hman, Emily and Miyagawa, So and Alnajjar, Khalid and Bizzoni, Yuri}, booktitle = "Proceedings of the 4th International Conference on Natural Language Processing for Digital Humanities", month = nov, year = "2024", address = "Miami, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.nlp4dh-1.21", pages = "216--228", abstract = "This paper introduces new models designed to improve the morpho-syntactic parsing of the five largest Latin treebanks in the Universal Dependencies (UD) framework. First, using two state-of-the-art parsers, Trankit and Stanza, along with our custom UD tagger, we train new models on the five treebanks both individually and by combining them into novel merged datasets. We also test the models on the CIRCSE test set. In an additional experiment, we evaluate whether this set can be accurately tagged using the novel LASLA corpus (https://github.com/CIRCSE/LASLA). Second, we aim to improve the results by combining the predictions of different models through an atomic morphological feature voting system. The results of our two main experiments demonstrate significant improvements, particularly for the smaller treebanks, with LAS scores increasing by 16.10 and 11.85{\%}-points for UDante and Perseus, respectively (Gamba and Zeman, 2023a). Additionally, the voting system for morphological features (FEATS) brings improvements, especially for the smaller Latin treebanks: Perseus by 3.15{\%}-points and CIRCSE by 2.47{\%}-points. Tagging the CIRCSE set with our custom model using the LASLA model improves POS by 6.71{\%}-points and FEATS by 11.04{\%}-points, respectively, compared to our best-performing UD PROIEL model. Our results show that larger datasets and ensemble predictions can significantly improve performance.", }
This paper introduces new models designed to improve the morpho-syntactic parsing of the five largest Latin treebanks in the Universal Dependencies (UD) framework. First, using two state-of-the-art parsers, Trankit and Stanza, along with our custom UD tagger, we train new models on the five treebanks both individually and by combining them into novel merged datasets. We also test the models on the CIRCSE test set. In an additional experiment, we evaluate whether this set can be accurately tagged using the novel LASLA corpus (https://github.com/CIRCSE/LASLA). Second, we aim to improve the results by combining the predictions of different models through an atomic morphological feature voting system. The results of our two main experiments demonstrate significant improvements, particularly for the smaller treebanks, with LAS scores increasing by 16.10 and 11.85{\%}-points for UDante and Perseus, respectively (Gamba and Zeman, 2023a). Additionally, the voting system for morphological features (FEATS) brings improvements, especially for the smaller Latin treebanks: Perseus by 3.15{\%}-points and CIRCSE by 2.47{\%}-points. Tagging the CIRCSE set with our custom model using the LASLA model improves POS by 6.71{\%}-points and FEATS by 11.04{\%}-points, respectively, compared to our best-performing UD PROIEL model. Our results show that larger datasets and ensemble predictions can significantly improve performance.
[ "Kupari, Hanna-Mari Kristiina", "Henriksson, Erik", "Laippala, Veronika", "Kanerva, Jenna" ]
Improving Latin Dependency Parsing by Combining Treebanks and Predictions
nlp4dh-1.21
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
1
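The "atomic morphological feature voting" in the Kupari et al. abstract above suggests combining FEATS predictions feature-by-feature rather than as whole strings. A minimal sketch of that idea, assuming UD-style `Key=Value|...` strings; the code and names are illustrative, not the authors' implementation:

```python
# Hypothetical sketch of atomic FEATS voting: each model's UD FEATS string
# is split into key=value pairs, and each feature key is decided by a
# majority vote across models. Tie-breaking and minority features are
# handled naively here (most_common picks one winner).
from collections import Counter, defaultdict

def parse_feats(feats):
    """'Case=Nom|Number=Sing' -> {'Case': 'Nom', 'Number': 'Sing'}"""
    if feats in ("", "_"):
        return {}
    return dict(kv.split("=", 1) for kv in feats.split("|"))

def vote_feats(predictions):
    votes = defaultdict(Counter)
    for feats in predictions:
        for key, value in parse_feats(feats).items():
            votes[key][value] += 1
    winners = {k: c.most_common(1)[0][0] for k, c in votes.items()}
    return "|".join(f"{k}={v}" for k, v in sorted(winners.items())) or "_"

# Three hypothetical model outputs for one token:
preds = ["Case=Nom|Number=Sing", "Case=Acc|Number=Sing", "Case=Nom|Gender=Fem"]
print(vote_feats(preds))  # Case=Nom|Gender=Fem|Number=Sing
```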
https://aclanthology.org/2024.nlp4dh-1.22.bib
https://aclanthology.org/2024.nlp4dh-1.22/
@inproceedings{sindane-marivate-2024-n, title = "From N-grams to Pre-trained Multilingual Models For Language Identification", author = "Sindane, Thapelo Andrew and Marivate, Vukosi", editor = {H{\"a}m{\"a}l{\"a}inen, Mika and {\"O}hman, Emily and Miyagawa, So and Alnajjar, Khalid and Bizzoni, Yuri}, booktitle = "Proceedings of the 4th International Conference on Natural Language Processing for Digital Humanities", month = nov, year = "2024", address = "Miami, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.nlp4dh-1.22", pages = "229--239", abstract = "In this paper, we investigate the use of N-gram models and large pre-trained multilingual models for Language Identification (LID) across 11 South African languages. For N-gram models, this study shows that effective data-size selection remains crucial for establishing frequency distributions that efficiently model each target language, thus improving language ranking. For pre-trained multilingual models, we conduct extensive experiments covering a diverse set of massively pre-trained multilingual language models (PLMs) {--} mBERT, RemBERT, XLM-r, and Afri-centric multilingual models {--} AfriBERTa, Afro-XLMr, AfroLM, and Serengeti. We further compare these models with available large-scale Language Identification tools: Compact Language Detector v3 (CLD V3), AfroLID, GlotLID, and OpenLID to highlight the importance of focused LID. From these, we show that Serengeti is the superior model on average across model types, from N-grams to Transformers. Moreover, we propose a lightweight BERT-based LID model (za{\_}BERT{\_}lid) trained with the NHCLT + Vukzenzele corpus, which performs on par with our best-performing Afri-centric models.", }
In this paper, we investigate the use of N-gram models and large pre-trained multilingual models for Language Identification (LID) across 11 South African languages. For N-gram models, this study shows that effective data-size selection remains crucial for establishing frequency distributions that efficiently model each target language, thus improving language ranking. For pre-trained multilingual models, we conduct extensive experiments covering a diverse set of massively pre-trained multilingual language models (PLMs) {--} mBERT, RemBERT, XLM-r, and Afri-centric multilingual models {--} AfriBERTa, Afro-XLMr, AfroLM, and Serengeti. We further compare these models with available large-scale Language Identification tools: Compact Language Detector v3 (CLD V3), AfroLID, GlotLID, and OpenLID to highlight the importance of focused LID. From these, we show that Serengeti is the superior model on average across model types, from N-grams to Transformers. Moreover, we propose a lightweight BERT-based LID model (za{\_}BERT{\_}lid) trained with the NHCLT + Vukzenzele corpus, which performs on par with our best-performing Afri-centric models.
[ "Sindane, Thapelo Andrew", "Marivate, Vukosi" ]
From N-grams to Pre-trained Multilingual Models For Language Identification
nlp4dh-1.22
Poster
2410.08728
[ "https://github.com/dsfsi/za-lid" ]
https://huggingface.co/papers/2410.08728
0
0
0
2
[]
[]
[ "dsfsi/dsfsi-language-identification-spaces" ]
[]
[]
[ "dsfsi/dsfsi-language-identification-spaces" ]
1
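For the classical baseline side of the Sindane and Marivate abstract above, a character n-gram LID model can be sketched in a few lines. Everything here (profile size, toy sentences, function names) is a hypothetical illustration, not the paper's setup:

```python
# Minimal character n-gram language-identification sketch (hypothetical,
# not the authors' code): build per-language trigram frequency profiles
# and rank languages by profile overlap with the input text.
from collections import Counter

def char_ngrams(text, n=3):
    text = f" {text.lower()} "  # pad so word boundaries become n-grams too
    return [text[i:i + n] for i in range(len(text) - n + 1)]

def build_profile(sentences, n=3, top_k=300):
    counts = Counter()
    for s in sentences:
        counts.update(char_ngrams(s, n))
    # Keep only the most frequent n-grams; effective profile size matters,
    # echoing the abstract's point about data-size selection.
    return {g for g, _ in counts.most_common(top_k)}

def identify(text, profiles, n=3):
    grams = set(char_ngrams(text, n))
    # Score = overlap between the text's n-grams and each language profile.
    scores = {lang: len(grams & prof) for lang, prof in profiles.items()}
    return max(scores, key=scores.get)

# Toy usage with made-up training sentences:
profiles = {
    "eng": build_profile(["the cat sat on the mat", "language models are useful"]),
    "zul": build_profile(["ngiyabonga kakhulu", "umuntu ngumuntu ngabantu"]),
}
print(identify("ngumuntu omuhle", profiles))  # expected: "zul"
```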
https://aclanthology.org/2024.nlp4dh-1.23.bib
https://aclanthology.org/2024.nlp4dh-1.23/
@inproceedings{rassem-etal-2024-visualising, title = "Visualising Changes in Semantic Neighbourhoods of {E}nglish Noun Compounds over Time", author = "Rassem, Malak and Tsigkouli, Myrto and Jenkins, Chris W. and Mileti{\'c}, Filip and Schulte im Walde, Sabine", editor = {H{\"a}m{\"a}l{\"a}inen, Mika and {\"O}hman, Emily and Miyagawa, So and Alnajjar, Khalid and Bizzoni, Yuri}, booktitle = "Proceedings of the 4th International Conference on Natural Language Processing for Digital Humanities", month = nov, year = "2024", address = "Miami, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.nlp4dh-1.23", pages = "240--246", abstract = "This paper provides a framework and tool set for computing and visualising dynamic, time-specific semantic neighbourhoods of English noun-noun compounds and their constituents over time. Our framework identifies salient vector-space dimensions and neighbours in notoriously sparse data, and it specifically brings together changes in meaning aspects and degrees of (non-)compositionality.", }
This paper provides a framework and tool set for computing and visualising dynamic, time-specific semantic neighbourhoods of English noun-noun compounds and their constituents over time. Our framework identifies salient vector-space dimensions and neighbours in notoriously sparse data, and it specifically brings together changes in meaning aspects and degrees of (non-)compositionality.
[ "Rassem, Malak", "Tsigkouli, Myrto", "Jenkins, Chris W.", "Mileti{\\'c}, Filip", "Schulte im Walde, Sabine" ]
Visualising Changes in Semantic Neighbourhoods of English Noun Compounds over Time
nlp4dh-1.23
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
1
https://aclanthology.org/2024.nlp4dh-1.24.bib
https://aclanthology.org/2024.nlp4dh-1.24/
@inproceedings{schulz-deichsler-2024-seflag, title = "{SEFLAG}: Systematic Evaluation Framework for {NLP} Models and Datasets in {L}atin and {A}ncient {G}reek", author = "Schulz, Konstantin and Deichsler, Florian", editor = {H{\"a}m{\"a}l{\"a}inen, Mika and {\"O}hman, Emily and Miyagawa, So and Alnajjar, Khalid and Bizzoni, Yuri}, booktitle = "Proceedings of the 4th International Conference on Natural Language Processing for Digital Humanities", month = nov, year = "2024", address = "Miami, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.nlp4dh-1.24", pages = "247--258", abstract = "Literary scholars of Latin and Ancient Greek increasingly use natural language processing for their work, but many models and datasets are hard to use due to a lack of sustainable research data management. This paper introduces the Systematic Evaluation Framework for natural language processing models and datasets in Latin and Ancient Greek (SEFLAG), which consistently assesses language resources using common criteria, such as specific evaluation metrics, metadata and risk analysis. The framework, a work in progress in its initial phase, currently covers lemmatization and named entity recognition for both languages, with plans for adding dependency parsing and other tasks. For increased transparency and sustainability, thorough documentation is included, as well as integration into the HuggingFace ecosystem. The combination of these efforts is designed to support researchers in their search for suitable models.", }
Literary scholars of Latin and Ancient Greek increasingly use natural language processing for their work, but many models and datasets are hard to use due to a lack of sustainable research data management. This paper introduces the Systematic Evaluation Framework for natural language processing models and datasets in Latin and Ancient Greek (SEFLAG), which consistently assesses language resources using common criteria, such as specific evaluation metrics, metadata and risk analysis. The framework, a work in progress in its initial phase, currently covers lemmatization and named entity recognition for both languages, with plans for adding dependency parsing and other tasks. For increased transparency and sustainability, thorough documentation is included, as well as integration into the HuggingFace ecosystem. The combination of these efforts is designed to support researchers in their search for suitable models.
[ "Schulz, Konstantin", "Deichsler, Florian" ]
SEFLAG: Systematic Evaluation Framework for NLP Models and Datasets in Latin and Ancient Greek
nlp4dh-1.24
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
1
https://aclanthology.org/2024.nlp4dh-1.25.bib
https://aclanthology.org/2024.nlp4dh-1.25/
@inproceedings{kenneth-etal-2024-two, title = "A Two-Model Approach for Humour Style Recognition", author = "Kenneth, Mary Ogbuka and Khosmood, Foaad and Edalat, Abbas", editor = {H{\"a}m{\"a}l{\"a}inen, Mika and {\"O}hman, Emily and Miyagawa, So and Alnajjar, Khalid and Bizzoni, Yuri}, booktitle = "Proceedings of the 4th International Conference on Natural Language Processing for Digital Humanities", month = nov, year = "2024", address = "Miami, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.nlp4dh-1.25", pages = "259--274", abstract = "Humour, a fundamental aspect of human communication, manifests itself in various styles that significantly impact social interactions and mental health. Recognising different humour styles poses challenges due to the lack of established datasets and machine learning (ML) models. To address this gap, we present a new text dataset for humour style recognition, comprising 1,463 instances across four styles (self-enhancing, self-deprecating, affiliative, and aggressive) and non-humorous text, with lengths ranging from 4 to 229 words. Our research employs various computational methods, including classic machine learning classifiers, text embedding models, and DistilBERT, to establish baseline performance. Additionally, we propose a two-model approach to enhance humour style recognition, particularly in distinguishing between affiliative and aggressive styles. Our method demonstrates an 11.61{\%} improvement in F1 score for affiliative humour classification, with consistent improvements across the 14 models tested. Our findings contribute to the computational analysis of humour in text, offering new tools for studying humour in literature, social media, and other textual sources.", }
Humour, a fundamental aspect of human communication, manifests itself in various styles that significantly impact social interactions and mental health. Recognising different humour styles poses challenges due to the lack of established datasets and machine learning (ML) models. To address this gap, we present a new text dataset for humour style recognition, comprising 1,463 instances across four styles (self-enhancing, self-deprecating, affiliative, and aggressive) and non-humorous text, with lengths ranging from 4 to 229 words. Our research employs various computational methods, including classic machine learning classifiers, text embedding models, and DistilBERT, to establish baseline performance. Additionally, we propose a two-model approach to enhance humour style recognition, particularly in distinguishing between affiliative and aggressive styles. Our method demonstrates an 11.61{\%} improvement in F1 score for affiliative humour classification, with consistent improvements across the 14 models tested. Our findings contribute to the computational analysis of humour in text, offering new tools for studying humour in literature, social media, and other textual sources.
[ "Kenneth, Mary Ogbuka", "Khosmood, Foaad", "Edalat, Abbas" ]
A Two-Model Approach for Humour Style Recognition
nlp4dh-1.25
Poster
2410.12842
[ "https://github.com/MaryKenneth/Two_Model_Humour_Style" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
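The two-model routing idea in the Kenneth et al. abstract above can be pictured as a coarse classifier plus a binary specialist for the confusable affiliative/aggressive pair. The keyword stubs below stand in for trained classifiers and are purely hypothetical:

```python
# Hypothetical sketch of a two-model pipeline for humour-style recognition:
# a coarse 5-way classifier, plus a specialised binary classifier consulted
# only for the hard affiliative-vs-aggressive distinction. The stub
# classifiers are placeholders, not the paper's models.

def coarse_classifier(text):          # stand-in for e.g. a DistilBERT model
    return "affiliative" if "friend" in text else "self-enhancing"

def aff_vs_agg_classifier(text):      # stand-in binary specialist
    return "aggressive" if "idiot" in text else "affiliative"

def two_model_predict(text):
    label = coarse_classifier(text)
    # Route only the confusable pair through the second model.
    if label in {"affiliative", "aggressive"}:
        label = aff_vs_agg_classifier(text)
    return label

print(two_model_predict("laughing with my friend"))     # affiliative
print(two_model_predict("what an idiot, haha friend"))  # aggressive
```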
https://aclanthology.org/2024.nlp4dh-1.26.bib
https://aclanthology.org/2024.nlp4dh-1.26/
@inproceedings{tsukagoshi-ohmukai-2024-n, title = "N-gram-Based Preprocessing for Sandhi Reversion in {V}edic {S}anskrit", author = "Tsukagoshi, Yuzuki and Ohmukai, Ikki", editor = {H{\"a}m{\"a}l{\"a}inen, Mika and {\"O}hman, Emily and Miyagawa, So and Alnajjar, Khalid and Bizzoni, Yuri}, booktitle = "Proceedings of the 4th International Conference on Natural Language Processing for Digital Humanities", month = nov, year = "2024", address = "Miami, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.nlp4dh-1.26", pages = "275--279", abstract = "This study aims to address the challenges posed by sandhi in Vedic Sanskrit, a phenomenon that complicates the computational analysis of Sanskrit texts. By focusing on sandhi reversion, the research seeks to improve the accuracy of processing Vedic Sanskrit, an older layer of the language. Sandhi, a phonological phenomenon, poses challenges for text processing in Sanskrit due to the fusion of word boundaries or the sound change around word boundaries. In this research, we developed a transformer-based model with a novel n-gram preprocessing strategy to improve the accuracy of sandhi reversion for Vedic. We created character-based n-gram texts of varying lengths (n = 2, 3, 4, 5, 6) from the Rigveda, the oldest Vedic text, and trained models on these texts to perform machine translation from post-sandhi to pre-sandhi forms. In the results, we found that the model trained with 5-gram text achieved the highest accuracy. This success is likely due to the 5-gram{'}s ability to capture the maximum phonemic context in which Vedic sandhi occurs, making it more effective for the task. These findings suggest that by leveraging the inherent characteristics of phonological changes in language, even simple preprocessing methods like n-gram segmentation can significantly improve the accuracy of complex linguistic tasks.", }
This study aims to address the challenges posed by sandhi in Vedic Sanskrit, a phenomenon that complicates the computational analysis of Sanskrit texts. By focusing on sandhi reversion, the research seeks to improve the accuracy of processing Vedic Sanskrit, an older layer of the language. Sandhi, a phonological phenomenon, poses challenges for text processing in Sanskrit due to the fusion of word boundaries or the sound change around word boundaries. In this research, we developed a transformer-based model with a novel n-gram preprocessing strategy to improve the accuracy of sandhi reversion for Vedic. We created character-based n-gram texts of varying lengths (n = 2, 3, 4, 5, 6) from the Rigveda, the oldest Vedic text, and trained models on these texts to perform machine translation from post-sandhi to pre-sandhi forms. In the results, we found that the model trained with 5-gram text achieved the highest accuracy. This success is likely due to the 5-gram{'}s ability to capture the maximum phonemic context in which Vedic sandhi occurs, making it more effective for the task. These findings suggest that by leveraging the inherent characteristics of phonological changes in language, even simple preprocessing methods like n-gram segmentation can significantly improve the accuracy of complex linguistic tasks.
[ "Tsukagoshi, Yuzuki", "Ohmukai, Ikki" ]
N-gram-Based Preprocessing for Sandhi Reversion in Vedic Sanskrit
nlp4dh-1.26
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
1
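The character n-gram segmentation underlying the Tsukagoshi and Ohmukai abstract above is simple to sketch. A minimal, assumption-laden illustration (toy romanized input, not the paper's pipeline):

```python
# Hypothetical sketch of character n-gram segmentation as a preprocessing
# step for a seq2seq (post-sandhi -> pre-sandhi) model; not the paper's code.
def to_char_ngrams(text, n=5):
    """Split a string into overlapping character n-grams (stride 1)."""
    if len(text) <= n:
        return [text]
    return [text[i:i + n] for i in range(len(text) - n + 1)]

# A 5-gram window is wide enough to cover the local phonemic context in
# which sandhi applies, which may be why n=5 worked best in the abstract.
post_sandhi = "agnimile"  # toy romanized string, not real Rigveda input
print(to_char_ngrams(post_sandhi, n=5))
# ['agnim', 'gnimi', 'nimil', 'imile']
```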
https://aclanthology.org/2024.nlp4dh-1.27.bib
https://aclanthology.org/2024.nlp4dh-1.27/
@inproceedings{virk-etal-2024-enhancing, title = "Enhancing {S}wedish Parliamentary Data: Annotation, Accessibility, and Application in Digital Humanities", author = {Virk, Shafqat Mumtaz and Ohlsson, Claes and Tahmasebi, Nina and Bj{\"o}rck, Henrik and Runefelt, Leif}, editor = {H{\"a}m{\"a}l{\"a}inen, Mika and {\"O}hman, Emily and Miyagawa, So and Alnajjar, Khalid and Bizzoni, Yuri}, booktitle = "Proceedings of the 4th International Conference on Natural Language Processing for Digital Humanities", month = nov, year = "2024", address = "Miami, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.nlp4dh-1.27", pages = "280--288", abstract = "The Swedish bicameral parliament data presents a valuable textual resource that is of interest to many researchers and scholars. The parliamentary texts offer many avenues for research, including the study of how various affairs were run by governments over time. The Parliament proceedings are available in textual format, but in their original form, they are noisy and unstructured and thus hard to explore and investigate. In this paper, we report the transformation of the raw bicameral parliament data (1867-1970) into a structured lexical resource annotated with various word- and document-level attributes. The annotated data is then made searchable through two modern corpus infrastructure components which provide a wide array of corpus exploration, visualization, and comparison options. To demonstrate the practical utility of this resource, we present a case study examining the transformation of the concept of {`}market{'} over time from a tangible physical entity to an abstract idea.", }
The Swedish bicameral parliament data presents a valuable textual resource that is of interest to many researchers and scholars. The parliamentary texts offer many avenues for research, including the study of how various affairs were run by governments over time. The Parliament proceedings are available in textual format, but in their original form, they are noisy and unstructured and thus hard to explore and investigate. In this paper, we report the transformation of the raw bicameral parliament data (1867-1970) into a structured lexical resource annotated with various word- and document-level attributes. The annotated data is then made searchable through two modern corpus infrastructure components which provide a wide array of corpus exploration, visualization, and comparison options. To demonstrate the practical utility of this resource, we present a case study examining the transformation of the concept of {`}market{'} over time from a tangible physical entity to an abstract idea.
[ "Virk, Shafqat Mumtaz", "Ohlsson, Claes", "Tahmasebi, Nina", "Bj{\\\"o}rck, Henrik", "Runefelt, Leif" ]
Enhancing Swedish Parliamentary Data: Annotation, Accessibility, and Application in Digital Humanities
nlp4dh-1.27
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
1
https://aclanthology.org/2024.nlp4dh-1.28.bib
https://aclanthology.org/2024.nlp4dh-1.28/
@inproceedings{dargis-etal-2024-evaluating, title = "Evaluating Open-Source {LLM}s in Low-Resource Languages: Insights from {L}atvian High School Exams", author = "Dar{\c{g}}is, Roberts and B{\=a}rzdi{\c{n}}{\v{s}}, Guntis and Skadi{\c{n}}a, Inguna and Saulite, Baiba", editor = {H{\"a}m{\"a}l{\"a}inen, Mika and {\"O}hman, Emily and Miyagawa, So and Alnajjar, Khalid and Bizzoni, Yuri}, booktitle = "Proceedings of the 4th International Conference on Natural Language Processing for Digital Humanities", month = nov, year = "2024", address = "Miami, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.nlp4dh-1.28", pages = "289--293", abstract = "The latest large language models (LLMs) have significantly advanced natural language processing (NLP) capabilities across various tasks. However, their performance in low-resource languages, such as Latvian, with 1.5 million native speakers, remains substantially underexplored due to both limited training data and the absence of comprehensive evaluation benchmarks. This study addresses this gap by conducting a systematic assessment of prominent open-source LLMs on natural language understanding (NLU) and natural language generation (NLG) tasks in Latvian. We utilize standardized high school centralized graduation exams as a benchmark dataset, offering relatable and diverse evaluation scenarios that encompass multiple-choice questions and complex text analysis tasks. Our experimental setup involves testing models from the leading LLM families, including Llama, Qwen, Gemma, and Mistral, with OpenAI{'}s GPT-4 serving as a performance reference. The results reveal that certain open-source models demonstrate competitive performance in NLU tasks, narrowing the gap with GPT-4. However, all models exhibit notable deficiencies in NLG tasks, specifically in generating coherent and contextually appropriate text analyses, highlighting persistent challenges in NLG for low-resource languages. These findings contribute to efforts to develop robust multilingual benchmarks and improve LLM performance in diverse linguistic contexts.", }
The latest large language models (LLMs) have significantly advanced natural language processing (NLP) capabilities across various tasks. However, their performance in low-resource languages, such as Latvian, with 1.5 million native speakers, remains substantially underexplored due to both limited training data and the absence of comprehensive evaluation benchmarks. This study addresses this gap by conducting a systematic assessment of prominent open-source LLMs on natural language understanding (NLU) and natural language generation (NLG) tasks in Latvian. We utilize standardized high school centralized graduation exams as a benchmark dataset, offering relatable and diverse evaluation scenarios that encompass multiple-choice questions and complex text analysis tasks. Our experimental setup involves testing models from the leading LLM families, including Llama, Qwen, Gemma, and Mistral, with OpenAI{'}s GPT-4 serving as a performance reference. The results reveal that certain open-source models demonstrate competitive performance in NLU tasks, narrowing the gap with GPT-4. However, all models exhibit notable deficiencies in NLG tasks, specifically in generating coherent and contextually appropriate text analyses, highlighting persistent challenges in NLG for low-resource languages. These findings contribute to efforts to develop robust multilingual benchmarks and improve LLM performance in diverse linguistic contexts.
[ "Dar{\\c{g}}is, Roberts", "B{\\=a}rzdi{\\c{n}}{\\v{s}}, Guntis", "Skadi{\\c{n}}a, Inguna", "Saulite, Baiba" ]
Evaluating Open-Source LLMs in Low-Resource Languages: Insights from Latvian High School Exams
nlp4dh-1.28
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
1
https://aclanthology.org/2024.nlp4dh-1.29.bib
https://aclanthology.org/2024.nlp4dh-1.29/
@inproceedings{shmidman-rubinstein-2024-computational, title = "Computational Methods for the Analysis of Complementizer Variability in Language and Literature: The Case of {H}ebrew {``}she-{''} and {``}ki{''}", author = "Shmidman, Avi and Rubinstein, Aynat", editor = {H{\"a}m{\"a}l{\"a}inen, Mika and {\"O}hman, Emily and Miyagawa, So and Alnajjar, Khalid and Bizzoni, Yuri}, booktitle = "Proceedings of the 4th International Conference on Natural Language Processing for Digital Humanities", month = nov, year = "2024", address = "Miami, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.nlp4dh-1.29", pages = "294--307", abstract = "We demonstrate a computational method for analyzing complementizer variability within language and literature, focusing on Hebrew as a test case. The primary complementizers in Hebrew are {``}she-{''} and {``}ki{''}. We first run a large-scale corpus analysis to determine the relative preference for one or the other of these complementizers given the preceding verb. On top of this foundation, we leverage clustering methods to measure the degree of interchangeability between the complementizers for each verb. The resulting tables, which provide this information for all common complement-taking verbs in Hebrew, are a first-of-its-kind lexical resource which we provide to the NLP community. Upon this foundation, we demonstrate a computational method to analyze literary works for unusual and unexpected complementizer usages deserving of literary analysis.", }
We demonstrate a computational method for analyzing complementizer variability within language and literature, focusing on Hebrew as a test case. The primary complementizers in Hebrew are {``}she-{''} and {``}ki{''}. We first run a large-scale corpus analysis to determine the relative preference for one or the other of these complementizers given the preceding verb. On top of this foundation, we leverage clustering methods to measure the degree of interchangeability between the complementizers for each verb. The resulting tables, which provide this information for all common complement-taking verbs in Hebrew, are a first-of-its-kind lexical resource which we provide to the NLP community. Upon this foundation, we demonstrate a computational method to analyze literary works for unusual and unexpected complementizer usages deserving of literary analysis.
[ "Shmidman, Avi", "Rubinstein, Aynat" ]
Computational Methods for the Analysis of Complementizer Variability in Language and Literature: The Case of Hebrew “she-” and “ki”
nlp4dh-1.29
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
1
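The per-verb complementizer preference tables described in the Shmidman and Rubinstein abstract above amount to conditional relative frequencies over (verb, complementizer) observations. A minimal sketch with made-up data and hypothetical names:

```python
# Hypothetical sketch of computing per-verb complementizer preferences
# ("she-" vs. "ki") from (verb, complementizer) corpus observations.
from collections import Counter, defaultdict

observations = [  # toy (verb, complementizer) pairs, not real corpus data
    ("amar", "she"), ("amar", "she"), ("amar", "ki"),
    ("yada", "ki"), ("yada", "she"), ("yada", "ki"), ("yada", "ki"),
]

counts = defaultdict(Counter)
for verb, comp in observations:
    counts[verb][comp] += 1

for verb, c in counts.items():
    total = sum(c.values())
    pref_she = c["she"] / total
    # Values near 0.5 suggest the complementizers are interchangeable for
    # this verb; values near 0 or 1 suggest a strong preference, flagging
    # deviations in literary texts as candidates for closer analysis.
    print(f"{verb}: P(she-)={pref_she:.2f}, P(ki)={1 - pref_she:.2f}, n={total}")
```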