bibtex_url | proceedings | bibtext | abstract | authors | title | id | type | arxiv_id | GitHub | paper_page | n_linked_authors | upvotes | num_comments | n_authors | Models | Datasets | Spaces | old_Models | old_Datasets | old_Spaces | paper_page_exists_pre_conf |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://aclanthology.org/2024.nlp4dh-1.30.bib | https://aclanthology.org/2024.nlp4dh-1.30/ | @inproceedings{henriksson-etal-2024-discrete,
title = "From Discrete to Continuous Classes: A Situational Analysis of Multilingual Web Registers with {LLM} Annotations",
author = {Henriksson, Erik and
Myntti, Amanda and
Hellstr{\"o}m, Saara and
Erten-Johansson, Selcen and
Eskelinen, Anni and
Repo, Liina and
Laippala, Veronika},
editor = {H{\"a}m{\"a}l{\"a}inen, Mika and
{\"O}hman, Emily and
Miyagawa, So and
Alnajjar, Khalid and
Bizzoni, Yuri},
booktitle = "Proceedings of the 4th International Conference on Natural Language Processing for Digital Humanities",
month = nov,
year = "2024",
address = "Miami, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.nlp4dh-1.30",
pages = "308--318",
abstract = "In corpus linguistics, registers{--}language varieties suited to different contexts{--}have traditionally been defined by their situations of use, yet recent studies reveal significant situational variation within registers. Previous quantitative studies, however, have been limited to English, leaving this variation in other languages largely unexplored. To address this gap, we apply a quantitative situational analysis to a large multilingual web register corpus, using large language models (LLMs) to annotate texts in English, Finnish, French, Swedish, and Turkish for 23 situational parameters. Using clustering techniques, we identify six situational text types, such as {``}Advice{''}, {``}Opinion{''} and {``}Marketing{''}, each characterized by distinct situational features. We explore the relationship between these text types and traditional register categories, finding partial alignment, though no register maps perfectly onto a single cluster. These results support the quantitative approach to situational analysis and are consistent with earlier findings for English. Cross-linguistic comparisons show that language accounts for only a small part of situational variation within registers, suggesting registers are situationally similar across languages. This study demonstrates the utility of LLMs in multilingual register analysis and deepens our understanding of situational variation within registers.",
}
| In corpus linguistics, registers–language varieties suited to different contexts–have traditionally been defined by their situations of use, yet recent studies reveal significant situational variation within registers. Previous quantitative studies, however, have been limited to English, leaving this variation in other languages largely unexplored. To address this gap, we apply a quantitative situational analysis to a large multilingual web register corpus, using large language models (LLMs) to annotate texts in English, Finnish, French, Swedish, and Turkish for 23 situational parameters. Using clustering techniques, we identify six situational text types, such as "Advice", "Opinion" and "Marketing", each characterized by distinct situational features. We explore the relationship between these text types and traditional register categories, finding partial alignment, though no register maps perfectly onto a single cluster. These results support the quantitative approach to situational analysis and are consistent with earlier findings for English. Cross-linguistic comparisons show that language accounts for only a small part of situational variation within registers, suggesting registers are situationally similar across languages. This study demonstrates the utility of LLMs in multilingual register analysis and deepens our understanding of situational variation within registers. | [
"Henriksson, Erik",
"Myntti, Am",
"a",
"Hellstr{\\\"o}m, Saara",
"Erten-Johansson, Selcen",
"Eskelinen, Anni",
"Repo, Liina",
"Laippala, Veronika"
] | From Discrete to Continuous Classes: A Situational Analysis of Multilingual Web Registers with LLM Annotations | nlp4dh-1.30 | Poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 1 |
||
https://aclanthology.org/2024.nlp4dh-1.31.bib | https://aclanthology.org/2024.nlp4dh-1.31/ | @inproceedings{meaney-etal-2024-testing,
title = "Testing and Adapting the Representational Abilities of Large Language Models on Folktales in Low-Resource Languages",
author = "Meaney, J. A. and
Alex, Beatrice and
Lamb, William",
editor = {H{\"a}m{\"a}l{\"a}inen, Mika and
{\"O}hman, Emily and
Miyagawa, So and
Alnajjar, Khalid and
Bizzoni, Yuri},
booktitle = "Proceedings of the 4th International Conference on Natural Language Processing for Digital Humanities",
month = nov,
year = "2024",
address = "Miami, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.nlp4dh-1.31",
pages = "319--324",
abstract = "Folktales are a rich resource of knowledge about the society and culture of a civilisation. Digital folklore research aims to use automated techniques to better understand these folktales, and it relies on abstract representations of the textual data. Although a number of large language models (LLMs) claim to be able to represent low-resource langauges such as Irish and Gaelic, we present two classification tasks to explore how useful these representations are, and three adaptations to improve the performance of these models. We find that adapting the models to work with longer sequences, and continuing pre-training on the domain of folktales improves classification performance, although these findings are tempered by the impressive performance of a baseline SVM with non-contextual features.",
}
| Folktales are a rich resource of knowledge about the society and culture of a civilisation. Digital folklore research aims to use automated techniques to better understand these folktales, and it relies on abstract representations of the textual data. Although a number of large language models (LLMs) claim to be able to represent low-resource languages such as Irish and Gaelic, we present two classification tasks to explore how useful these representations are, and three adaptations to improve the performance of these models. We find that adapting the models to work with longer sequences and continuing pre-training on the domain of folktales improve classification performance, although these findings are tempered by the impressive performance of a baseline SVM with non-contextual features. | [
"Meaney, J. A.",
"Alex, Beatrice",
"Lamb, William"
] | Testing and Adapting the Representational Abilities of Large Language Models on Folktales in Low-Resource Languages | nlp4dh-1.31 | Poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 1 |
||
https://aclanthology.org/2024.nlp4dh-1.32.bib | https://aclanthology.org/2024.nlp4dh-1.32/ | @inproceedings{messner-lippincott-2024-examining,
title = "Examining Language Modeling Assumptions Using an Annotated Literary Dialect Corpus",
author = "Messner, Craig and
Lippincott, Thomas",
editor = {H{\"a}m{\"a}l{\"a}inen, Mika and
{\"O}hman, Emily and
Miyagawa, So and
Alnajjar, Khalid and
Bizzoni, Yuri},
booktitle = "Proceedings of the 4th International Conference on Natural Language Processing for Digital Humanities",
month = nov,
year = "2024",
address = "Miami, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.nlp4dh-1.32",
pages = "325--330",
abstract = "We present a dataset of 19th century American literary orthovariant tokens with a novel layer of human-annotated dialect group tags designed to serve as the basis for computational experiments exploring literarily meaningful orthographic variation. We perform an initial broad set of experiments over this dataset using both token (BERT) and character (CANINE)-level contextual language models. We find indications that the {``}dialect effect{''} produced by intentional orthographic variation employs multiple linguistic channels, and that these channels are able to be surfaced to varied degrees given particular language modelling assumptions. Specifically, we find evidence showing that choice of tokenization scheme meaningfully impact the type of orthographic information a model is able to surface.",
}
| We present a dataset of 19th century American literary orthovariant tokens with a novel layer of human-annotated dialect group tags designed to serve as the basis for computational experiments exploring literarily meaningful orthographic variation. We perform an initial broad set of experiments over this dataset using both token (BERT) and character (CANINE)-level contextual language models. We find indications that the "dialect effect" produced by intentional orthographic variation employs multiple linguistic channels, and that these channels are able to be surfaced to varied degrees given particular language modelling assumptions. Specifically, we find evidence showing that choice of tokenization scheme meaningfully impacts the type of orthographic information a model is able to surface. | [
"Messner, Craig",
"Lippincott, Thomas"
] | Examining Language Modeling Assumptions Using an Annotated Literary Dialect Corpus | nlp4dh-1.32 | Poster | 2410.02674 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
https://aclanthology.org/2024.nlp4dh-1.33.bib | https://aclanthology.org/2024.nlp4dh-1.33/ | @inproceedings{katayama-etal-2024-evaluating,
title = "Evaluating Language Models in Location Referring Expression Extraction from Early Modern and Contemporary {J}apanese Texts",
author = "Katayama, Ayuki and
Sakai, Yusuke and
Higashiyama, Shohei and
Ouchi, Hiroki and
Takeuchi, Ayano and
Bando, Ryo and
Hashimoto, Yuta and
Ogiso, Toshinobu and
Watanabe, Taro",
editor = {H{\"a}m{\"a}l{\"a}inen, Mika and
{\"O}hman, Emily and
Miyagawa, So and
Alnajjar, Khalid and
Bizzoni, Yuri},
booktitle = "Proceedings of the 4th International Conference on Natural Language Processing for Digital Humanities",
month = nov,
year = "2024",
address = "Miami, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.nlp4dh-1.33",
pages = "331--338",
abstract = "Automatic extraction of geographic information, including Location Referring Expressions (LREs), can aid humanities research in analyzing large collections of historical texts. In this study, to investigate how accurate pretrained Transformer language models (LMs) can extract LREs from historical texts, we evaluate two representative types of LMs, namely, masked language model and causal language model, using early modern and contemporary Japanese datasets. Our experimental results demonstrated the potential of contemporary LMs for historical texts, but also suggest the need for further model enhancement, such as pretraining on historical texts.",
}
| Automatic extraction of geographic information, including Location Referring Expressions (LREs), can aid humanities research in analyzing large collections of historical texts. In this study, to investigate how accurately pretrained Transformer language models (LMs) can extract LREs from historical texts, we evaluate two representative types of LMs, namely, masked language model and causal language model, using early modern and contemporary Japanese datasets. Our experimental results demonstrate the potential of contemporary LMs for historical texts, but also suggest the need for further model enhancement, such as pretraining on historical texts. | [
"Katayama, Ayuki",
"Sakai, Yusuke",
"Higashiyama, Shohei",
"Ouchi, Hiroki",
"Takeuchi, Ayano",
"B",
"o, Ryo",
"Hashimoto, Yuta",
"Ogiso, Toshinobu",
"Watanabe, Taro"
] | Evaluating Language Models in Location Referring Expression Extraction from Early Modern and Contemporary Japanese Texts | nlp4dh-1.33 | Poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 1 |
||
https://aclanthology.org/2024.nlp4dh-1.34.bib | https://aclanthology.org/2024.nlp4dh-1.34/ | @inproceedings{jang-jung-2024-evaluating,
title = "Evaluating {LLM} Performance in Character Analysis: A Study of Artificial Beings in Recent {K}orean Science Fiction",
author = "Jang, Woori and
Jung, Seohyon",
editor = {H{\"a}m{\"a}l{\"a}inen, Mika and
{\"O}hman, Emily and
Miyagawa, So and
Alnajjar, Khalid and
Bizzoni, Yuri},
booktitle = "Proceedings of the 4th International Conference on Natural Language Processing for Digital Humanities",
month = nov,
year = "2024",
address = "Miami, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.nlp4dh-1.34",
pages = "339--351",
abstract = "Literary works present diverse and complex character behaviors, often implicit or intentionally obscured, making character analysis an inherently challenging task. This study explores LLMs{'} capability to identify and interpret behaviors of artificial beings in 11 award-winning contemporary Korean science fiction short stories. Focusing on artificial beings as a distinct class of characters, rather than on conventional human characters, adds to the multi-layered complexity of analysis. We compared two LLMs, Claude 3.5 Sonnet and GPT-4o, with human experts using a custom eight-label system and a unique agreement metric developed to capture the cognitive intricacies of literary interpretation. Human inter-annotator agreement was around 50{\%}, confirming the subjectivity of literary comprehension. LLMs differed from humans in selected text spans but demonstrated high agreement in label assignment for correctly identified spans. LLMs notably excelled at discerning {`}actions{'} as semantic units rather than isolated grammatical components. This study reaffirms literary interpretation{'}s multifaceted nature while expanding the boundaries of NLP, contributing to discussions about AI{'}s capacity to understand and interpret creative works.",
}
| Literary works present diverse and complex character behaviors, often implicit or intentionally obscured, making character analysis an inherently challenging task. This study explores LLMs' capability to identify and interpret behaviors of artificial beings in 11 award-winning contemporary Korean science fiction short stories. Focusing on artificial beings as a distinct class of characters, rather than on conventional human characters, adds to the multi-layered complexity of analysis. We compared two LLMs, Claude 3.5 Sonnet and GPT-4o, with human experts using a custom eight-label system and a unique agreement metric developed to capture the cognitive intricacies of literary interpretation. Human inter-annotator agreement was around 50%, confirming the subjectivity of literary comprehension. LLMs differed from humans in selected text spans but demonstrated high agreement in label assignment for correctly identified spans. LLMs notably excelled at discerning 'actions' as semantic units rather than isolated grammatical components. This study reaffirms literary interpretation's multifaceted nature while expanding the boundaries of NLP, contributing to discussions about AI's capacity to understand and interpret creative works. | [
"Jang, Woori",
"Jung, Seohyon"
] | Evaluating LLM Performance in Character Analysis: A Study of Artificial Beings in Recent Korean Science Fiction | nlp4dh-1.34 | Poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 1 |
||
https://aclanthology.org/2024.nlp4dh-1.35.bib | https://aclanthology.org/2024.nlp4dh-1.35/ | @inproceedings{rajaei-moghadam-etal-2024-text,
title = "Text vs. Transcription: A Study of Differences Between the Writing and Speeches of {U}.{S}. Presidents",
author = {Rajaei Moghadam, Mina and
Rezaei, Mosab and
Aygen, G{\"u}l{\c{s}}at and
Freedman, Reva},
editor = {H{\"a}m{\"a}l{\"a}inen, Mika and
{\"O}hman, Emily and
Miyagawa, So and
Alnajjar, Khalid and
Bizzoni, Yuri},
booktitle = "Proceedings of the 4th International Conference on Natural Language Processing for Digital Humanities",
month = nov,
year = "2024",
address = "Miami, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.nlp4dh-1.35",
pages = "352--361",
abstract = "Even after many years of research, answering the question of the differences between spoken and written text remains open. This paper aims to study syntactic features that can serve as distinguishing factors. To do so, we focus on the transcribed speeches and written books of United States presidents. We conducted two experiments to analyze high-level syntactic features. In the first experiment, we examine these features while controlling for the effect of sentence length. In the second experiment, we compare the high-level syntactic features with low-level ones. The results indicate that adding high-level syntactic features enhances model performance, particularly in longer sentences. Moreover, the importance of the prepositional phrases in a sentence increases with sentence length. We also find that these longer sentences with more prepositional phrases are more likely to appear in speeches than in written books by U.S. presidents.",
}
| Even after many years of research, answering the question of the differences between spoken and written text remains open. This paper aims to study syntactic features that can serve as distinguishing factors. To do so, we focus on the transcribed speeches and written books of United States presidents. We conducted two experiments to analyze high-level syntactic features. In the first experiment, we examine these features while controlling for the effect of sentence length. In the second experiment, we compare the high-level syntactic features with low-level ones. The results indicate that adding high-level syntactic features enhances model performance, particularly in longer sentences. Moreover, the importance of the prepositional phrases in a sentence increases with sentence length. We also find that these longer sentences with more prepositional phrases are more likely to appear in speeches than in written books by U.S. presidents. | [
"Rajaei Moghadam, Mina",
"Rezaei, Mosab",
"Aygen, G{\\\"u}l{\\c{s}}at",
"Freedman, Reva"
] | Text vs. Transcription: A Study of Differences Between the Writing and Speeches of U.S. Presidents | nlp4dh-1.35 | Poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 1 |
||
https://aclanthology.org/2024.nlp4dh-1.36.bib | https://aclanthology.org/2024.nlp4dh-1.36/ | @inproceedings{hou-2024-mitigating,
title = "Mitigating Biases to Embrace Diversity: A Comprehensive Annotation Benchmark for Toxic Language",
author = "Hou, Xinmeng",
editor = {H{\"a}m{\"a}l{\"a}inen, Mika and
{\"O}hman, Emily and
Miyagawa, So and
Alnajjar, Khalid and
Bizzoni, Yuri},
booktitle = "Proceedings of the 4th International Conference on Natural Language Processing for Digital Humanities",
month = nov,
year = "2024",
address = "Miami, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.nlp4dh-1.36",
pages = "362--376",
abstract = "This study introduces a prescriptive annotation benchmark grounded in humanities research to ensure consistent, unbiased labeling of offensive language, particularly for casual and non-mainstream language uses. We contribute two newly annotated datasets that achieve higher inter-annotator agreement between human and language model (LLM) annotations compared to original datasets based on descriptive instructions. Our experiments show that LLMs can serve as effective alternatives when professional annotators are unavailable. Moreover, smaller models fine-tuned on multi-source LLM-annotated data outperform models trained on larger, single-source human-annotated datasets. These findings highlight the value of structured guidelines in reducing subjective variability, maintaining performance with limited data, and embracing language diversity. Content Warning: This article only analyzes offensive language for academic purposes. Discretion is advised.",
}
| This study introduces a prescriptive annotation benchmark grounded in humanities research to ensure consistent, unbiased labeling of offensive language, particularly for casual and non-mainstream language uses. We contribute two newly annotated datasets that achieve higher inter-annotator agreement between human and language model (LLM) annotations compared to original datasets based on descriptive instructions. Our experiments show that LLMs can serve as effective alternatives when professional annotators are unavailable. Moreover, smaller models fine-tuned on multi-source LLM-annotated data outperform models trained on larger, single-source human-annotated datasets. These findings highlight the value of structured guidelines in reducing subjective variability, maintaining performance with limited data, and embracing language diversity. Content Warning: This article only analyzes offensive language for academic purposes. Discretion is advised. | [
"Hou, Xinmeng"
] | Mitigating Biases to Embrace Diversity: A Comprehensive Annotation Benchmark for Toxic Language | nlp4dh-1.36 | Poster | 2410.13313 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
https://aclanthology.org/2024.nlp4dh-1.37.bib | https://aclanthology.org/2024.nlp4dh-1.37/ | @inproceedings{neveditsin-etal-2024-classification,
title = "Classification of Buddhist Verses: The Efficacy and Limitations of Transformer-Based Models",
author = "Neveditsin, Nikita and
Salgaonkar, Ambuja and
Lingras, Pawan and
Mago, Vijay",
editor = {H{\"a}m{\"a}l{\"a}inen, Mika and
{\"O}hman, Emily and
Miyagawa, So and
Alnajjar, Khalid and
Bizzoni, Yuri},
booktitle = "Proceedings of the 4th International Conference on Natural Language Processing for Digital Humanities",
month = nov,
year = "2024",
address = "Miami, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.nlp4dh-1.37",
pages = "377--385",
abstract = "This study assesses the ability of machine learning to classify verses from Buddhist texts into two categories: Therigatha and Theragatha, attributed to female and male authors, respectively. It highlights the difficulties in data preprocessing and the use of Transformer-based models on Devanagari script due to limited vocabulary, demonstrating that simple statistical models can be equally effective. The research suggests areas for future exploration, provides the dataset for further study, and acknowledges existing limitations and challenges.",
}
| This study assesses the ability of machine learning to classify verses from Buddhist texts into two categories: Therigatha and Theragatha, attributed to female and male authors, respectively. It highlights the difficulties in data preprocessing and the use of Transformer-based models on Devanagari script due to limited vocabulary, demonstrating that simple statistical models can be equally effective. The research suggests areas for future exploration, provides the dataset for further study, and acknowledges existing limitations and challenges. | [
"Neveditsin, Nikita",
"Salgaonkar, Ambuja",
"Lingras, Pawan",
"Mago, Vijay"
] | Classification of Buddhist Verses: The Efficacy and Limitations of Transformer-Based Models | nlp4dh-1.37 | Poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 1 |
||
https://aclanthology.org/2024.nlp4dh-1.38.bib | https://aclanthology.org/2024.nlp4dh-1.38/ | @inproceedings{myntti-etal-2024-intersecting,
title = "Intersecting Register and Genre: Understanding the Contents of Web-Crawled Corpora",
author = "Myntti, Amanda and
Repo, Liina and
Freyermuth, Elian and
Kanner, Antti and
Laippala, Veronika and
Henriksson, Erik",
editor = {H{\"a}m{\"a}l{\"a}inen, Mika and
{\"O}hman, Emily and
Miyagawa, So and
Alnajjar, Khalid and
Bizzoni, Yuri},
booktitle = "Proceedings of the 4th International Conference on Natural Language Processing for Digital Humanities",
month = nov,
year = "2024",
address = "Miami, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.nlp4dh-1.38",
pages = "386--397",
abstract = "Web-scale corpora present valuable research opportunities but often lack detailed metadata, making them challenging to use in linguistics and social sciences. This study tackles this problem by exploring automatic methods to classify web corpora into specific categories, focusing on text registers such as Interactive Discussion and literary genres such as Politics and Social Sciences. We train two machine learning models to classify documents from the large web-crawled OSCAR dataset: a register classifier using the multilingual, manually annotated CORE corpus, and a genre classifier using a dataset based on Kindle US{\&}UK. Fine-tuned from XLM-R Large, the register and genre classifiers achieved F1-scores of 0.74 and 0.70, respectively. Our analysis includes evaluating the distribution of the predicted text classes and examining the intersection of genre-register pairs using topic modelling. The results show expected combinations between certain registers and genres, such as the Lyrical register often aligning with the Literature {\&} Fiction genre. However, most registers, such as Interactive Discussion, are divided across multiple genres, like Engineering {\&} Transportation and Politics {\&} Social Sciences, depending on the discussion topic. This enriched metadata provides valuable insights and supports new ways of studying digital cultural heritage.",
}
| Web-scale corpora present valuable research opportunities but often lack detailed metadata, making them challenging to use in linguistics and social sciences. This study tackles this problem by exploring automatic methods to classify web corpora into specific categories, focusing on text registers such as Interactive Discussion and literary genres such as Politics and Social Sciences. We train two machine learning models to classify documents from the large web-crawled OSCAR dataset: a register classifier using the multilingual, manually annotated CORE corpus, and a genre classifier using a dataset based on Kindle US&UK. Fine-tuned from XLM-R Large, the register and genre classifiers achieved F1-scores of 0.74 and 0.70, respectively. Our analysis includes evaluating the distribution of the predicted text classes and examining the intersection of genre-register pairs using topic modelling. The results show expected combinations between certain registers and genres, such as the Lyrical register often aligning with the Literature & Fiction genre. However, most registers, such as Interactive Discussion, are divided across multiple genres, like Engineering & Transportation and Politics & Social Sciences, depending on the discussion topic. This enriched metadata provides valuable insights and supports new ways of studying digital cultural heritage. | [
"Myntti, Am",
"a",
"Repo, Liina",
"Freyermuth, Elian",
"Kanner, Antti",
"Laippala, Veronika",
"Henriksson, Erik"
] | Intersecting Register and Genre: Understanding the Contents of Web-Crawled Corpora | nlp4dh-1.38 | Poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 1 |
||
https://aclanthology.org/2024.nlp4dh-1.39.bib | https://aclanthology.org/2024.nlp4dh-1.39/ | @inproceedings{gorovaia-etal-2024-sui,
title = "{S}ui Generis: Large Language Models for Authorship Attribution and Verification in {L}atin",
author = "Gorovaia, Svetlana and
Schmidt, Gleb and
Yamshchikov, Ivan P.",
editor = {H{\"a}m{\"a}l{\"a}inen, Mika and
{\"O}hman, Emily and
Miyagawa, So and
Alnajjar, Khalid and
Bizzoni, Yuri},
booktitle = "Proceedings of the 4th International Conference on Natural Language Processing for Digital Humanities",
month = nov,
year = "2024",
address = "Miami, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.nlp4dh-1.39",
pages = "398--412",
abstract = "This paper evaluates the performance of Large Language Models (LLMs) in authorship attribu- tion and authorship verification tasks for Latin texts of the Patristic Era. The study showcases that LLMs can be robust in zero-shot author- ship verification even on short texts without sophisticated feature engineering. Yet, the mod- els can also be easily {``}mislead{''} by semantics. The experiments also demonstrate that steering the model{'}s authorship analysis and decision- making is challenging, unlike what is reported in the studies dealing with high-resource mod- ern languages. Although LLMs prove to be able to beat, under certain circumstances, the traditional baselines, obtaining a nuanced and truly explainable decision requires at best a lot of experimentation.",
}
| This paper evaluates the performance of Large Language Models (LLMs) in authorship attribution and authorship verification tasks for Latin texts of the Patristic Era. The study showcases that LLMs can be robust in zero-shot authorship verification even on short texts without sophisticated feature engineering. Yet, the models can also be easily "misled" by semantics. The experiments also demonstrate that steering the model's authorship analysis and decision-making is challenging, unlike what is reported in the studies dealing with high-resource modern languages. Although LLMs prove to be able to beat, under certain circumstances, the traditional baselines, obtaining a nuanced and truly explainable decision requires at best a lot of experimentation. | [
"Gorovaia, Svetlana",
"Schmidt, Gleb",
"Yamshchikov, Ivan P."
] | Sui Generis: Large Language Models for Authorship Attribution and Verification in Latin | nlp4dh-1.39 | Poster | 2410.09245 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
https://aclanthology.org/2024.nlp4dh-1.40.bib | https://aclanthology.org/2024.nlp4dh-1.40/ | @inproceedings{igarashi-miyagawa-2024-enhancing,
title = "Enhancing Neural Machine Translation for {A}inu-{J}apanese: A Comprehensive Study on the Impact of Domain and Dialect Integration",
author = "Igarashi, Ryo and
Miyagawa, So",
editor = {H{\"a}m{\"a}l{\"a}inen, Mika and
{\"O}hman, Emily and
Miyagawa, So and
Alnajjar, Khalid and
Bizzoni, Yuri},
booktitle = "Proceedings of the 4th International Conference on Natural Language Processing for Digital Humanities",
month = nov,
year = "2024",
address = "Miami, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.nlp4dh-1.40",
pages = "413--422",
abstract = "Neural Machine Translation (NMT) has revolutionized language translation, yet significant challenges persist for low-resource languages, particularly those with high dialectal variation and limited standardization. This comprehensive study focuses on the Ainu language, a critically endangered indigenous language of northern Japan, which epitomizes these challenges. We address the limitations of previous research through two primary strategies: (1) extensive corpus expansion encompassing diverse domains and dialects, and (2) development of innovative methods to incorporate dialect and domain information directly into the translation process. Our approach yielded substantial improvements in translation quality, with BLEU scores increasing from 32.90 to 39.06 (+6.16) for Japanese â Ainu and from 10.45 to 31.83 (+21.38) for Ainu â Japanese. Through rigorous experimentation and analysis, we demonstrate the crucial importance of integrating linguistic variation information in NMT systems for languages characterized by high diversity and limited resources. Our findings have broad implications for improving machine translation for other low-resource languages, potentially advancing preservation and revitalization efforts for endangered languages worldwide.",
}
| Neural Machine Translation (NMT) has revolutionized language translation, yet significant challenges persist for low-resource languages, particularly those with high dialectal variation and limited standardization. This comprehensive study focuses on the Ainu language, a critically endangered indigenous language of northern Japan, which epitomizes these challenges. We address the limitations of previous research through two primary strategies: (1) extensive corpus expansion encompassing diverse domains and dialects, and (2) development of innovative methods to incorporate dialect and domain information directly into the translation process. Our approach yielded substantial improvements in translation quality, with BLEU scores increasing from 32.90 to 39.06 (+6.16) for Japanese → Ainu and from 10.45 to 31.83 (+21.38) for Ainu → Japanese. Through rigorous experimentation and analysis, we demonstrate the crucial importance of integrating linguistic variation information in NMT systems for languages characterized by high diversity and limited resources. Our findings have broad implications for improving machine translation for other low-resource languages, potentially advancing preservation and revitalization efforts for endangered languages worldwide. | [
"Igarashi, Ryo",
"Miyagawa, So"
] | Enhancing Neural Machine Translation for Ainu-Japanese: A Comprehensive Study on the Impact of Domain and Dialect Integration | nlp4dh-1.40 | Poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 1 |
||
https://aclanthology.org/2024.nlp4dh-1.41.bib | https://aclanthology.org/2024.nlp4dh-1.41/ | @inproceedings{fischer-biemann-2024-exploring,
title = "Exploring Large Language Models for Qualitative Data Analysis",
author = "Fischer, Tim and
Biemann, Chris",
editor = {H{\"a}m{\"a}l{\"a}inen, Mika and
{\"O}hman, Emily and
Miyagawa, So and
Alnajjar, Khalid and
Bizzoni, Yuri},
booktitle = "Proceedings of the 4th International Conference on Natural Language Processing for Digital Humanities",
month = nov,
year = "2024",
address = "Miami, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.nlp4dh-1.41",
pages = "423--437",
abstract = "This paper explores the potential of Large Language Models (LLMs) to enhance qualitative data analysis (QDA) workflows within the open-source QDA platform developed at our university. We identify several opportunities within a typical QDA workflow where AI assistance can boost researcher productivity and translate these opportunities into corresponding NLP tasks: document classification, information extraction, span classification, and text generation. A benchmark tailored to these QDA activities is constructed, utilizing English and German datasets that align with relevant use cases. Focusing on efficiency and accessibility, we evaluate the performance of three prominent open-source LLMs - Llama 3.1, Gemma 2, and Mistral NeMo - on this benchmark. Our findings reveal the promise of LLM integration for streamlining QDA workflows, particularly for English-language projects. Consequently, we have implemented the LLM Assistant as an opt-in feature within our platform and report the implementation details. With this, we hope to further democratize access to AI capabilities for qualitative data analysis.",
}
| This paper explores the potential of Large Language Models (LLMs) to enhance qualitative data analysis (QDA) workflows within the open-source QDA platform developed at our university. We identify several opportunities within a typical QDA workflow where AI assistance can boost researcher productivity and translate these opportunities into corresponding NLP tasks: document classification, information extraction, span classification, and text generation. A benchmark tailored to these QDA activities is constructed, utilizing English and German datasets that align with relevant use cases. Focusing on efficiency and accessibility, we evaluate the performance of three prominent open-source LLMs - Llama 3.1, Gemma 2, and Mistral NeMo - on this benchmark. Our findings reveal the promise of LLM integration for streamlining QDA workflows, particularly for English-language projects. Consequently, we have implemented the LLM Assistant as an opt-in feature within our platform and report the implementation details. With this, we hope to further democratize access to AI capabilities for qualitative data analysis. | [
"Fischer, Tim",
"Biemann, Chris"
] | Exploring Large Language Models for Qualitative Data Analysis | nlp4dh-1.41 | Poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 1 |
||
https://aclanthology.org/2024.nlp4dh-1.42.bib | https://aclanthology.org/2024.nlp4dh-1.42/ | @inproceedings{vidal-gorene-etal-2024-cross,
title = "Cross-Dialectal Transfer and Zero-Shot Learning for {A}rmenian Varieties: A Comparative Analysis of {RNN}s, Transformers and {LLM}s",
author = "Vidal-Gor{\`e}ne, Chahan and
Tomeh, Nadi and
Khurshudyan, Victoria",
editor = {H{\"a}m{\"a}l{\"a}inen, Mika and
{\"O}hman, Emily and
Miyagawa, So and
Alnajjar, Khalid and
Bizzoni, Yuri},
booktitle = "Proceedings of the 4th International Conference on Natural Language Processing for Digital Humanities",
month = nov,
year = "2024",
address = "Miami, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.nlp4dh-1.42",
pages = "438--449",
abstract = "This paper evaluates lemmatization, POS-tagging, and morphological analysis for four Armenian varieties: Classical Armenian, Modern Eastern Armenian, Modern Western Armenian, and the under-documented Getashen dialect. It compares traditional RNN models, multilingual models like mDeBERTa, and large language models (ChatGPT) using supervised, transfer learning, and zero/few-shot learning approaches. The study finds that RNN models are particularly strong in POS-tagging, while large language models demonstrate high adaptability, especially in handling previously unseen dialect variations. The research highlights the value of cross-variational and in-context learning for enhancing NLP performance in low-resource languages, offering crucial insights into model transferability and supporting the preservation of endangered dialects.",
}
| This paper evaluates lemmatization, POS-tagging, and morphological analysis for four Armenian varieties: Classical Armenian, Modern Eastern Armenian, Modern Western Armenian, and the under-documented Getashen dialect. It compares traditional RNN models, multilingual models like mDeBERTa, and large language models (ChatGPT) using supervised, transfer learning, and zero/few-shot learning approaches. The study finds that RNN models are particularly strong in POS-tagging, while large language models demonstrate high adaptability, especially in handling previously unseen dialect variations. The research highlights the value of cross-variational and in-context learning for enhancing NLP performance in low-resource languages, offering crucial insights into model transferability and supporting the preservation of endangered dialects. | [
"Vidal-Gor{\\`e}ne, Chahan",
"Tomeh, Nadi",
"Khurshudyan, Victoria"
] | Cross-Dialectal Transfer and Zero-Shot Learning for Armenian Varieties: A Comparative Analysis of RNNs, Transformers and LLMs | nlp4dh-1.42 | Poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 1 |
||
https://aclanthology.org/2024.nlp4dh-1.43.bib | https://aclanthology.org/2024.nlp4dh-1.43/ | @inproceedings{thorne-etal-2024-increasing,
title = "Increasing the Difficulty of Automatically Generated Questions via Reinforcement Learning with Synthetic Preference for Cost-Effective Cultural Heritage Dataset Generation",
author = "Thorne, William and
Robinson, Ambrose and
Peng, Bohua and
Lin, Chenghua and
Maynard, Diana",
editor = {H{\"a}m{\"a}l{\"a}inen, Mika and
{\"O}hman, Emily and
Miyagawa, So and
Alnajjar, Khalid and
Bizzoni, Yuri},
booktitle = "Proceedings of the 4th International Conference on Natural Language Processing for Digital Humanities",
month = nov,
year = "2024",
address = "Miami, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.nlp4dh-1.43",
pages = "450--462",
abstract = "As the cultural heritage sector increasingly adopts technologies like Retrieval-Augmented Generation (RAG) to provide more personalised search experiences and enable conversations with collections data, the demand for specialised evaluation datasets has grown. While end-to-end system testing is essential, it{'}s equally important to assess individual components. We target the final, answering task, which is well-suited to Machine Reading Comprehension (MRC). Although existing MRC datasets address general domains, they lack the specificity needed for cultural heritage information. Unfortunately, the manual creation of such datasets is prohibitively expensive for most heritage institutions. This paper presents a cost-effective approach for generating domain-specific MRC datasets with increased difficulty using Reinforcement Learning from Human Feedback (RLHF) from synthetic preference data. Our method leverages the performance of existing question-answering models on a subset of SQuAD to create a difficulty metric, assuming that more challenging questions are answered correctly less frequently. This research contributes: (1) A methodology for increasing question difficulty using PPO and synthetic data; (2) Empirical evidence of the method{'}s effectiveness, including human evaluation; (3) An in-depth error analysis and study of emergent phenomena; and (4) An open-source codebase and set of three llama-2-chat adapters for reproducibility and adaptation.",
}
| As the cultural heritage sector increasingly adopts technologies like Retrieval-Augmented Generation (RAG) to provide more personalised search experiences and enable conversations with collections data, the demand for specialised evaluation datasets has grown. While end-to-end system testing is essential, it's equally important to assess individual components. We target the final, answering task, which is well-suited to Machine Reading Comprehension (MRC). Although existing MRC datasets address general domains, they lack the specificity needed for cultural heritage information. Unfortunately, the manual creation of such datasets is prohibitively expensive for most heritage institutions. This paper presents a cost-effective approach for generating domain-specific MRC datasets with increased difficulty using Reinforcement Learning from Human Feedback (RLHF) from synthetic preference data. Our method leverages the performance of existing question-answering models on a subset of SQuAD to create a difficulty metric, assuming that more challenging questions are answered correctly less frequently. This research contributes: (1) A methodology for increasing question difficulty using PPO and synthetic data; (2) Empirical evidence of the method's effectiveness, including human evaluation; (3) An in-depth error analysis and study of emergent phenomena; and (4) An open-source codebase and set of three llama-2-chat adapters for reproducibility and adaptation. | [
"Thorne, William",
"Robinson, Ambrose",
"Peng, Bohua",
"Lin, Chenghua",
"Maynard, Diana"
] | Increasing the Difficulty of Automatically Generated Questions via Reinforcement Learning with Synthetic Preference for Cost-Effective Cultural Heritage Dataset Generation | nlp4dh-1.43 | Poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 1 |
||
https://aclanthology.org/2024.nlp4dh-1.44.bib | https://aclanthology.org/2024.nlp4dh-1.44/ | @inproceedings{wannaz-miyagawa-2024-assessing,
title = "Assessing Large Language Models in Translating {C}optic and {A}ncient {G}reek Ostraca",
author = "Wannaz, Audric-Charles and
Miyagawa, So",
editor = {H{\"a}m{\"a}l{\"a}inen, Mika and
{\"O}hman, Emily and
Miyagawa, So and
Alnajjar, Khalid and
Bizzoni, Yuri},
booktitle = "Proceedings of the 4th International Conference on Natural Language Processing for Digital Humanities",
month = nov,
year = "2024",
address = "Miami, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.nlp4dh-1.44",
pages = "463--471",
abstract = "The advent of Large Language Models (LLMs) substantially raised the quality and lowered the cost of Machine Translation (MT). Can scholars working with ancient languages draw benefits from this new technology? More specifically, can current MT facilitate multilingual digital papyrology? To answer this question, we evaluate 9 LLMs in the task of MT with 4 Coptic and 4 Ancient Greek ostraca into English using 6 NLP metrics. We argue that some models have already reached a performance apt to assist human experts. As can be expected from the difference in training corpus size, all models seem to perform better with Ancient Greek than with Coptic, where hallucinations are markedly more common. In the Coptic texts, the specialised Coptic Translator (CT) competes closely with Claude 3 Opus for the rank of most promising tool, while Claude 3 Opus and GPT-4o compete for the same position in the Ancient Greek texts. We argue that MT now substantially heightens the incentive to work on multilingual corpora. This could have a positive and long-lasting effect on Classics and Egyptology and help reduce the historical bias in translation availability. In closing, we reflect upon the need to meet AI-generated translations with an adequate critical stance.",
}
| The advent of Large Language Models (LLMs) substantially raised the quality and lowered the cost of Machine Translation (MT). Can scholars working with ancient languages draw benefits from this new technology? More specifically, can current MT facilitate multilingual digital papyrology? To answer this question, we evaluate 9 LLMs in the task of MT with 4 Coptic and 4 Ancient Greek ostraca into English using 6 NLP metrics. We argue that some models have already reached a performance apt to assist human experts. As can be expected from the difference in training corpus size, all models seem to perform better with Ancient Greek than with Coptic, where hallucinations are markedly more common. In the Coptic texts, the specialised Coptic Translator (CT) competes closely with Claude 3 Opus for the rank of most promising tool, while Claude 3 Opus and GPT-4o compete for the same position in the Ancient Greek texts. We argue that MT now substantially heightens the incentive to work on multilingual corpora. This could have a positive and long-lasting effect on Classics and Egyptology and help reduce the historical bias in translation availability. In closing, we reflect upon the need to meet AI-generated translations with an adequate critical stance. | [
"Wannaz, Audric-Charles",
"Miyagawa, So"
] | Assessing Large Language Models in Translating Coptic and Ancient Greek Ostraca | nlp4dh-1.44 | Poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 1 |
||
https://aclanthology.org/2024.nlp4dh-1.45.bib | https://aclanthology.org/2024.nlp4dh-1.45/ | @inproceedings{piper-etal-2024-social,
title = "The Social Lives of Literary Characters: Combining citizen science and language models to understand narrative social networks",
author = "Piper, Andrew and
Xu, Michael and
Ruths, Derek",
editor = {H{\"a}m{\"a}l{\"a}inen, Mika and
{\"O}hman, Emily and
Miyagawa, So and
Alnajjar, Khalid and
Bizzoni, Yuri},
booktitle = "Proceedings of the 4th International Conference on Natural Language Processing for Digital Humanities",
month = nov,
year = "2024",
address = "Miami, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.nlp4dh-1.45",
pages = "472--482",
abstract = "Characters and their interactions are central to the fabric of narratives, playing a crucial role in developing readers{'} social cognition. In this paper, we introduce a novel annotation framework that distinguishes between five types of character interactions, including bilateral and unilateral classifications. Leveraging the crowd-sourcing framework of citizen science, we collect a large dataset of manual annotations (N=13,395). Using this data, we explore how genre and audience factors influence social network structures in a sample of contemporary books. Our findings demonstrate that fictional narratives tend to favor more embodied interactions and exhibit denser and less modular social networks. Our work not only enhances the understanding of narrative social networks but also showcases the potential of integrating citizen science with NLP methodologies for large-scale narrative analysis.",
}
| Characters and their interactions are central to the fabric of narratives, playing a crucial role in developing readers' social cognition. In this paper, we introduce a novel annotation framework that distinguishes between five types of character interactions, including bilateral and unilateral classifications. Leveraging the crowd-sourcing framework of citizen science, we collect a large dataset of manual annotations (N=13,395). Using this data, we explore how genre and audience factors influence social network structures in a sample of contemporary books. Our findings demonstrate that fictional narratives tend to favor more embodied interactions and exhibit denser and less modular social networks. Our work not only enhances the understanding of narrative social networks but also showcases the potential of integrating citizen science with NLP methodologies for large-scale narrative analysis. | [
"Piper, Andrew",
"Xu, Michael",
"Ruths, Derek"
] | The Social Lives of Literary Characters: Combining citizen science and language models to understand narrative social networks | nlp4dh-1.45 | Poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 1 |
||
https://aclanthology.org/2024.nlp4dh-1.46.bib | https://aclanthology.org/2024.nlp4dh-1.46/ | @inproceedings{bagdasarov-teich-2024-multi,
title = "Multi-word expressions in biomedical abstracts and their plain {E}nglish adaptations",
author = "Bagdasarov, Sergei and
Teich, Elke",
editor = {H{\"a}m{\"a}l{\"a}inen, Mika and
{\"O}hman, Emily and
Miyagawa, So and
Alnajjar, Khalid and
Bizzoni, Yuri},
booktitle = "Proceedings of the 4th International Conference on Natural Language Processing for Digital Humanities",
month = nov,
year = "2024",
address = "Miami, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.nlp4dh-1.46",
pages = "483--488",
abstract = "This study analyzes the use of multi-word expressions (MWEs), prefabricated sequences of words (e.g. in this case, this means that, healthcare service, follow up) in biomedical abstracts and their plain language adaptations. While English academic writing became highly specialized and complex from the late 19th century onwards, recent decades have seen a rising demand for a lay-friendly language in scientific content, especially in the health domain, to bridge a communication gap between experts and laypersons. Based on previous research showing that MWEs are easier to process than non-formulaic word sequences of comparable length, we hypothesize that they can potentially be used to create a more reader-friendly language. Our preliminary results suggest some significant differences between complex and plain abstracts when it comes to the usage patterns and informational load of MWEs.",
}
| This study analyzes the use of multi-word expressions (MWEs), prefabricated sequences of words (e.g. in this case, this means that, healthcare service, follow up) in biomedical abstracts and their plain language adaptations. While English academic writing became highly specialized and complex from the late 19th century onwards, recent decades have seen a rising demand for a lay-friendly language in scientific content, especially in the health domain, to bridge a communication gap between experts and laypersons. Based on previous research showing that MWEs are easier to process than non-formulaic word sequences of comparable length, we hypothesize that they can potentially be used to create a more reader-friendly language. Our preliminary results suggest some significant differences between complex and plain abstracts when it comes to the usage patterns and informational load of MWEs. | [
"Bagdasarov, Sergei",
"Teich, Elke"
] | Multi-word expressions in biomedical abstracts and their plain English adaptations | nlp4dh-1.46 | Poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 1 |
||
https://aclanthology.org/2024.nlp4dh-1.47.bib | https://aclanthology.org/2024.nlp4dh-1.47/ | @inproceedings{hannani-etal-2024-assessing,
title = "Assessing the Performance of {C}hat{GPT}-4, Fine-tuned {BERT} and Traditional {ML} Models on {M}oroccan {A}rabic Sentiment Analysis",
author = "Hannani, Mohamed and
Soudi, Abdelhadi and
Van Laerhoven, Kristof",
editor = {H{\"a}m{\"a}l{\"a}inen, Mika and
{\"O}hman, Emily and
Miyagawa, So and
Alnajjar, Khalid and
Bizzoni, Yuri},
booktitle = "Proceedings of the 4th International Conference on Natural Language Processing for Digital Humanities",
month = nov,
year = "2024",
address = "Miami, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.nlp4dh-1.47",
pages = "489--498",
abstract = "Large Language Models (LLMs) have demonstrated impressive capabilities in various natural language processing tasks across different languages. However, their performance in low-resource languages and dialects, such as Moroccan Arabic (MA), requires further investigation. This study evaluates the performance of ChatGPT-4, different fine-tuned BERT models, FastText as text representation, and traditional machine learning models on MA sentiment analysis. Experiments were done on two open source MA datasets: an X(Twitter) Moroccan Arabic corpus (MAC) and a Moroccan Arabic YouTube corpus (MYC) datasets to assess their capabilities on sentiment text classification. We compare the performance of fully fine-tuned and pre-trained Arabic BERT-based models with ChatGPT-4 in zero-shot settings.",
}
| Large Language Models (LLMs) have demonstrated impressive capabilities in various natural language processing tasks across different languages. However, their performance in low-resource languages and dialects, such as Moroccan Arabic (MA), requires further investigation. This study evaluates the performance of ChatGPT-4, different fine-tuned BERT models, FastText as text representation, and traditional machine learning models on MA sentiment analysis. Experiments were done on two open-source MA datasets, an X (Twitter) Moroccan Arabic corpus (MAC) and a Moroccan Arabic YouTube corpus (MYC), to assess their capabilities on sentiment text classification. We compare the performance of fully fine-tuned and pre-trained Arabic BERT-based models with ChatGPT-4 in zero-shot settings. | [
"Hannani, Mohamed",
"Soudi, Abdelhadi",
"Van Laerhoven, Kristof"
] | Assessing the Performance of ChatGPT-4, Fine-tuned BERT and Traditional ML Models on Moroccan Arabic Sentiment Analysis | nlp4dh-1.47 | Poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 1 |
||
https://aclanthology.org/2024.nlp4dh-1.48.bib | https://aclanthology.org/2024.nlp4dh-1.48/ | @inproceedings{hamalainen-etal-2024-analyzing,
title = "Analyzing Pok{\'e}mon and Mario Streamers{'} Twitch Chat with {LLM}-based User Embeddings",
author = {H{\"a}m{\"a}l{\"a}inen, Mika and
Rueter, Jack and
Alnajjar, Khalid},
editor = {H{\"a}m{\"a}l{\"a}inen, Mika and
{\"O}hman, Emily and
Miyagawa, So and
Alnajjar, Khalid and
Bizzoni, Yuri},
booktitle = "Proceedings of the 4th International Conference on Natural Language Processing for Digital Humanities",
month = nov,
year = "2024",
address = "Miami, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.nlp4dh-1.48",
pages = "499--503",
abstract = "We present a novel digital humanities method for representing our Twitch chatters as user embeddings created by a large language model (LLM). We cluster these embeddings automatically using affinity propagation and further narrow this clustering down through manual analysis. We analyze the chat of one stream by each Twitch streamer: SmallAnt, DougDoug and PointCrow. Our findings suggest that each streamer has their own type of chatters, however two categories emerge for all of the streamers: supportive viewers and emoji and reaction senders. Repetitive message spammers is a shared chatter category for two of the streamers.",
}
| We present a novel digital humanities method for representing our Twitch chatters as user embeddings created by a large language model (LLM). We cluster these embeddings automatically using affinity propagation and further narrow this clustering down through manual analysis. We analyze the chat of one stream by each Twitch streamer: SmallAnt, DougDoug and PointCrow. Our findings suggest that each streamer has their own type of chatters; however, two categories emerge for all of the streamers: supportive viewers and emoji and reaction senders. The repetitive message spammer is a shared chatter category for two of the streamers. | [
"H{\\\"a}m{\\\"a}l{\\\"a}inen, Mika",
"Rueter, Jack",
"Alnajjar, Khalid"
] | Analyzing Pokémon and Mario Streamers' Twitch Chat with LLM-based User Embeddings | nlp4dh-1.48 | Poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 1 |
||
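The entry above clusters LLM-based user embeddings with affinity propagation. A minimal sketch of that clustering step, assuming scikit-learn and mocking the embedding step with random vectors (one vector per chatter; the paper's actual embeddings come from an LLM):

```python
# Hedged sketch, not the authors' code: affinity propagation over
# per-user embedding vectors, followed by manual cluster inspection.
import numpy as np
from sklearn.cluster import AffinityPropagation

rng = np.random.default_rng(seed=0)
user_embeddings = rng.normal(size=(200, 768))  # stand-in for LLM user embeddings

clusterer = AffinityPropagation(random_state=0)
labels = clusterer.fit_predict(user_embeddings)
print(f"{len(set(labels))} candidate clusters")  # clusters are then reviewed by hand
```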
https://aclanthology.org/2024.nlp4dh-1.49.bib | https://aclanthology.org/2024.nlp4dh-1.49/ | @inproceedings{inoshita-2024-corpus,
title = "Corpus Development Based on Conflict Structures in the Security Field and {LLM} Bias Verification",
author = "Inoshita, Keito",
editor = {H{\"a}m{\"a}l{\"a}inen, Mika and
{\"O}hman, Emily and
Miyagawa, So and
Alnajjar, Khalid and
Bizzoni, Yuri},
booktitle = "Proceedings of the 4th International Conference on Natural Language Processing for Digital Humanities",
month = nov,
year = "2024",
address = "Miami, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.nlp4dh-1.49",
pages = "504--512",
abstract = "This study investigates the presence of biases in large language models (LLMs), specifically focusing on how these models process and reflect inter-state conflict structures. Previous research has often lacked the standardized datasets necessary for a thorough and consistent evaluation of biases in this context. Without such datasets, it is challenging to accurately assess the impact of these biases on critical applications. To address this gap, we developed a diverse and high-quality corpus using a four-phase process. This process included generating texts based on international conflict-related keywords, enhancing emotional diversity to capture a broad spectrum of sentiments, validating the coherence and connections between texts, and conducting final quality assurance through human reviewers who are experts in natural language processing. Our analysis, conducted using this newly developed corpus, revealed subtle but significant negative biases in LLMs, particularly towards Eastern bloc countries such as Russia and China. These biases have the potential to influence decision-making processes in fields like national security and international relations, where accurate, unbiased information is crucial. The findings underscore the importance of evaluating and mitigating these biases to ensure the reliability and fairness of LLMs when applied in sensitive areas.",
}
| This study investigates the presence of biases in large language models (LLMs), specifically focusing on how these models process and reflect inter-state conflict structures. Previous research has often lacked the standardized datasets necessary for a thorough and consistent evaluation of biases in this context. Without such datasets, it is challenging to accurately assess the impact of these biases on critical applications. To address this gap, we developed a diverse and high-quality corpus using a four-phase process. This process included generating texts based on international conflict-related keywords, enhancing emotional diversity to capture a broad spectrum of sentiments, validating the coherence and connections between texts, and conducting final quality assurance through human reviewers who are experts in natural language processing. Our analysis, conducted using this newly developed corpus, revealed subtle but significant negative biases in LLMs, particularly towards Eastern bloc countries such as Russia and China. These biases have the potential to influence decision-making processes in fields like national security and international relations, where accurate, unbiased information is crucial. The findings underscore the importance of evaluating and mitigating these biases to ensure the reliability and fairness of LLMs when applied in sensitive areas. | [
"Inoshita, Keito"
] | Corpus Development Based on Conflict Structures in the Security Field and LLM Bias Verification | nlp4dh-1.49 | Poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 1 |
||
https://aclanthology.org/2024.nlp4dh-1.50.bib | https://aclanthology.org/2024.nlp4dh-1.50/ | @inproceedings{marfurt-etal-2024-generating,
title = "Generating Interpretations of Policy Announcements",
author = "Marfurt, Andreas and
Thornton, Ashley and
Sylvan, David and
Henderson, James",
editor = {H{\"a}m{\"a}l{\"a}inen, Mika and
{\"O}hman, Emily and
Miyagawa, So and
Alnajjar, Khalid and
Bizzoni, Yuri},
booktitle = "Proceedings of the 4th International Conference on Natural Language Processing for Digital Humanities",
month = nov,
year = "2024",
address = "Miami, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.nlp4dh-1.50",
pages = "513--520",
abstract = "Recent advances in language modeling have focused on (potentially multiple-choice) question answering, open-ended generation, or math and coding problems. We look at a more nuanced task: the interpretation of statements of political actors. To this end, we present a dataset of policy announcements and corresponding annotated interpretations, on the topic of US foreign policy relations with Russia in the years 1993 up to 2016. We analyze the performance of finetuning standard sequence-to-sequence models of varying sizes on predicting the annotated interpretations and compare them to few-shot prompted large language models. We find that 1) model size is not the main factor for success on this task, 2) finetuning smaller models provides both quantitatively and qualitatively superior results to in-context learning with large language models, but 3) large language models pick up the annotation format and approximate the category distribution with just a few in-context examples.",
}
| Recent advances in language modeling have focused on (potentially multiple-choice) question answering, open-ended generation, or math and coding problems. We look at a more nuanced task: the interpretation of statements of political actors. To this end, we present a dataset of policy announcements and corresponding annotated interpretations, on the topic of US foreign policy relations with Russia in the years 1993 up to 2016. We analyze the performance of finetuning standard sequence-to-sequence models of varying sizes on predicting the annotated interpretations and compare them to few-shot prompted large language models. We find that 1) model size is not the main factor for success on this task, 2) finetuning smaller models provides both quantitatively and qualitatively superior results to in-context learning with large language models, but 3) large language models pick up the annotation format and approximate the category distribution with just a few in-context examples. | [
"Marfurt, Andreas",
"Thornton, Ashley",
"Sylvan, David",
"Henderson, James"
] | Generating Interpretations of Policy Announcements | nlp4dh-1.50 | Poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 1 |
||
https://aclanthology.org/2024.nlp4dh-1.51.bib | https://aclanthology.org/2024.nlp4dh-1.51/ | @inproceedings{mervaala-kousa-2024-order,
title = "Order Up! Micromanaging Inconsistencies in {C}hat{GPT}-4o Text Analyses",
author = "Mervaala, Erkki and
Kousa, Ilona",
editor = {H{\"a}m{\"a}l{\"a}inen, Mika and
{\"O}hman, Emily and
Miyagawa, So and
Alnajjar, Khalid and
Bizzoni, Yuri},
booktitle = "Proceedings of the 4th International Conference on Natural Language Processing for Digital Humanities",
month = nov,
year = "2024",
address = "Miami, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.nlp4dh-1.51",
pages = "521--535",
abstract = "Large language model (LLM) applications have taken the world by storm in the past two years, and the academic sphere has not been an exception. One common, cumbersome task for researchers to attempt to automatise has been text annotation and, to an extent, analysis. Popular LLMs such as ChatGPT have been examined as a research assistant and as an analysis tool, and several discrepancies regarding both transparency and the generative content have been uncovered. Our research approaches the usability and trustworthiness of ChatGPT for text analysis from the point of view of an {``}out-of-the-box{''} zero-shot or few-shot setting, focusing on how the context window and mixed text types affect the analyses generated. Results from our testing indicate that both the types of the texts and the ordering of different kinds of texts do affect the ChatGPT analysis, but also that the context-building is less likely to cause analysis deterioration when analysing similar texts. Though some of these issues are at the core of how LLMs function, many of these caveats can be addressed by transparent research planning.",
}
| Large language model (LLM) applications have taken the world by storm in the past two years, and the academic sphere has not been an exception. One common, cumbersome task for researchers to attempt to automatise has been text annotation and, to an extent, analysis. Popular LLMs such as ChatGPT have been examined as a research assistant and as an analysis tool, and several discrepancies regarding both transparency and the generative content have been uncovered. Our research approaches the usability and trustworthiness of ChatGPT for text analysis from the point of view of an {``}out-of-the-box{''} zero-shot or few-shot setting, focusing on how the context window and mixed text types affect the analyses generated. Results from our testing indicate that both the types of the texts and the ordering of different kinds of texts do affect the ChatGPT analysis, but also that the context-building is less likely to cause analysis deterioration when analysing similar texts. Though some of these issues are at the core of how LLMs function, many of these caveats can be addressed by transparent research planning. | [
"Mervaala, Erkki",
"Kousa, Ilona"
] | Order Up! Micromanaging Inconsistencies in ChatGPT-4o Text Analyses | nlp4dh-1.51 | Poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 1 |
||
https://aclanthology.org/2024.nlp4dh-1.52.bib | https://aclanthology.org/2024.nlp4dh-1.52/ | @inproceedings{eklund-etal-2024-ciphe,
title = "{CIPHE}: A Framework for Document Cluster Interpretation and Precision from Human Exploration",
author = "Eklund, Anton and
Forsman, Mona and
Drewes, Frank",
editor = {H{\"a}m{\"a}l{\"a}inen, Mika and
{\"O}hman, Emily and
Miyagawa, So and
Alnajjar, Khalid and
Bizzoni, Yuri},
booktitle = "Proceedings of the 4th International Conference on Natural Language Processing for Digital Humanities",
month = nov,
year = "2024",
address = "Miami, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.nlp4dh-1.52",
pages = "536--548",
abstract = "Document clustering models serve unique application purposes, which turns model quality into a property that depends on the needs of the individual investigator. We propose a framework, Cluster Interpretation and Precision from Human Exploration (CIPHE), for collecting and quantifying human interpretations of cluster samples. CIPHE tasks survey participants to explore actual document texts from cluster samples and records their perceptions. It also includes a novel inclusion task that is used to calculate the cluster precision in an indirect manner. A case study on news clusters shows that CIPHE reveals which clusters have multiple interpretation angles, aiding the investigator in their exploration.",
}
| Document clustering models serve unique application purposes, which turns model quality into a property that depends on the needs of the individual investigator. We propose a framework, Cluster Interpretation and Precision from Human Exploration (CIPHE), for collecting and quantifying human interpretations of cluster samples. CIPHE tasks survey participants to explore actual document texts from cluster samples and records their perceptions. It also includes a novel inclusion task that is used to calculate the cluster precision in an indirect manner. A case study on news clusters shows that CIPHE reveals which clusters have multiple interpretation angles, aiding the investigator in their exploration. | [
"Eklund, Anton",
"Forsman, Mona",
"Drewes, Frank"
] | CIPHE: A Framework for Document Cluster Interpretation and Precision from Human Exploration | nlp4dh-1.52 | Poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 1 |
||
https://aclanthology.org/2024.nlp4dh-1.53.bib | https://aclanthology.org/2024.nlp4dh-1.53/ | @inproceedings{macias-etal-2024-empowering,
title = "Empowering Teachers with Usability-Oriented {LLM}-Based Tools for Digital Pedagogy",
author = {Macias, Melany Vanessa and
Kharlashkin, Lev and
Huovinen, Leo Einari and
H{\"a}m{\"a}l{\"a}inen, Mika},
editor = {H{\"a}m{\"a}l{\"a}inen, Mika and
{\"O}hman, Emily and
Miyagawa, So and
Alnajjar, Khalid and
Bizzoni, Yuri},
booktitle = "Proceedings of the 4th International Conference on Natural Language Processing for Digital Humanities",
month = nov,
year = "2024",
address = "Miami, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.nlp4dh-1.53",
pages = "549--557",
abstract = "We present our work on two LLM-based tools that utilize artificial intelligence and creative technology to improve education. The first tool is a Moodle AI plugin, which helps teachers manage their course content more efficiently using AI-driven analysis, content generation, and an interactive chatbot. The second one is a curriculum planning tool that provides insight into the sustainability, work-life relevance, and workload of each course. Both of these tools have the common goal of integrating sustainable development goals (UN SDGs) into teaching, among other things. We will describe the usability-focused and user-centric approach we have embraced when developing these tools.",
}
| We present our work on two LLM-based tools that utilize artificial intelligence and creative technology to improve education. The first tool is a Moodle AI plugin, which helps teachers manage their course content more efficiently using AI-driven analysis, content generation, and an interactive chatbot. The second one is a curriculum planning tool that provides insight into the sustainability, work-life relevance, and workload of each course. Both of these tools have the common goal of integrating sustainable development goals (UN SDGs) into teaching, among other things. We will describe the usability-focused and user-centric approach we have embraced when developing these tools. | [
"Macias, Melany Vanessa",
"Kharlashkin, Lev",
"Huovinen, Leo Einari",
"H{\\\"a}m{\\\"a}l{\\\"a}inen, Mika"
] | Empowering Teachers with Usability-Oriented LLM-Based Tools for Digital Pedagogy | nlp4dh-1.53 | Poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 1 |
||
https://aclanthology.org/2024.nlp4pi-1.1.bib | https://aclanthology.org/2024.nlp4pi-1.1/ | @inproceedings{wong-2024-social,
title = "What is the social benefit of hate speech detection research? A Systematic Review",
author = "Wong, Sidney Gig-Jan",
editor = "Dementieva, Daryna and
Ignat, Oana and
Jin, Zhijing and
Mihalcea, Rada and
Piatti, Giorgio and
Tetreault, Joel and
Wilson, Steven and
Zhao, Jieyu",
booktitle = "Proceedings of the Third Workshop on NLP for Positive Impact",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.nlp4pi-1.1",
pages = "1--12",
abstract = "While NLP research into hate speech detection has grown exponentially in the last three decades, there has been minimal uptake or engagement from policy makers and non-profit organisations. We argue the absence of ethical frameworks have contributed to this rift between current practice and best practice. By adopting appropriate ethical frameworks, NLP researchers may enable the social impact potential of hate speech research. This position paper is informed by reviewing forty-eight hate speech detection systems associated with thirty-seven publications from different venues.",
}
| While NLP research into hate speech detection has grown exponentially in the last three decades, there has been minimal uptake or engagement from policy makers and non-profit organisations. We argue the absence of ethical frameworks has contributed to this rift between current practice and best practice. By adopting appropriate ethical frameworks, NLP researchers may enable the social impact potential of hate speech research. This position paper is informed by reviewing forty-eight hate speech detection systems associated with thirty-seven publications from different venues. | [
"Wong, Sidney Gig-Jan"
] | What is the social benefit of hate speech detection research? A Systematic Review | nlp4pi-1.1 | Poster | 2409.17467 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
https://aclanthology.org/2024.nlp4pi-1.2.bib | https://aclanthology.org/2024.nlp4pi-1.2/ | @inproceedings{singhal-etal-2024-multilingual,
title = "Multilingual Fact-Checking using {LLM}s",
author = "Singhal, Aryan and
Law, Thomas and
Kassner, Coby and
Gupta, Ayushman and
Duan, Evan and
Damle, Aviral and
Li, Ryan Luo",
editor = "Dementieva, Daryna and
Ignat, Oana and
Jin, Zhijing and
Mihalcea, Rada and
Piatti, Giorgio and
Tetreault, Joel and
Wilson, Steven and
Zhao, Jieyu",
booktitle = "Proceedings of the Third Workshop on NLP for Positive Impact",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.nlp4pi-1.2",
pages = "13--31",
abstract = "Due to the recent rise in digital misinformation, there has been great interest shown in using LLMs for fact-checking and claim verification. In this paper, we answer the question: Do LLMs know multilingual facts and can they use this knowledge for effective fact-checking? To this end, we create a benchmark by filtering multilingual claims from the X-fact dataset and evaluating the multilingual fact-checking capabilities of five LLMs across five diverse languages: Spanish, Italian, Portuguese, Turkish, and Tamil on our benchmark. We employ three different prompting techniques: Zero-Shot, English Chain-of-Thought, and Cross-Lingual Prompting, using both greedy and self-consistency decoding. We extensively analyze our results and find that GPT-4o achieves the highest accuracy, but zero-shot prompting with self-consistency was the most effective overall. We also show that techniques like Chain-of-Thought and Cross-Lingual Prompting, which are designed to improve reasoning abilities, do not necessarily improve the fact-checking abilities of LLMs. Interestingly, we find a strong negative correlation between model accuracy and the amount of internet content for a given language. This suggests that LLMs are better at fact-checking from knowledge in low-resource languages. We hope that this study will encourage more work on multilingual fact-checking using LLMs.",
}
| Due to the recent rise in digital misinformation, there has been great interest shown in using LLMs for fact-checking and claim verification. In this paper, we answer the question: Do LLMs know multilingual facts and can they use this knowledge for effective fact-checking? To this end, we create a benchmark by filtering multilingual claims from the X-fact dataset and evaluating the multilingual fact-checking capabilities of five LLMs across five diverse languages: Spanish, Italian, Portuguese, Turkish, and Tamil on our benchmark. We employ three different prompting techniques: Zero-Shot, English Chain-of-Thought, and Cross-Lingual Prompting, using both greedy and self-consistency decoding. We extensively analyze our results and find that GPT-4o achieves the highest accuracy, but zero-shot prompting with self-consistency was the most effective overall. We also show that techniques like Chain-of-Thought and Cross-Lingual Prompting, which are designed to improve reasoning abilities, do not necessarily improve the fact-checking abilities of LLMs. Interestingly, we find a strong negative correlation between model accuracy and the amount of internet content for a given language. This suggests that LLMs are better at fact-checking from knowledge in low-resource languages. We hope that this study will encourage more work on multilingual fact-checking using LLMs. | [
"Singhal, Aryan",
"Law, Thomas",
"Kassner, Coby",
"Gupta, Ayushman",
"Duan, Evan",
"Damle, Aviral",
"Li, Ryan Luo"
] | Multilingual Fact-Checking using LLMs | nlp4pi-1.2 | Poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 1 |
||
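The fact-checking entry above compares greedy decoding with self-consistency. A minimal sketch of self-consistency for claim verification, with `query_llm` as a hypothetical stand-in for whichever chat-completion API is used (the stub returns random verdicts so the snippet runs):

```python
# Hedged sketch: sample several verdicts at non-zero temperature,
# then majority-vote across the samples (self-consistency).
import random
from collections import Counter

def query_llm(prompt: str, temperature: float = 0.7) -> str:
    # Hypothetical placeholder -- swap in a real chat-completion call.
    return random.choice(["TRUE", "FALSE"])

def self_consistent_verdict(claim: str, n_samples: int = 5) -> str:
    prompt = f"Is the following claim true or false? Answer TRUE or FALSE.\nClaim: {claim}"
    votes = [query_llm(prompt) for _ in range(n_samples)]
    return Counter(votes).most_common(1)[0][0]  # most frequent verdict wins
```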
https://aclanthology.org/2024.nlp4pi-1.3.bib | https://aclanthology.org/2024.nlp4pi-1.3/ | @inproceedings{aguirre-dredze-2024-transferring,
title = "Transferring Fairness using Multi-Task Learning with Limited Demographic Information",
author = "Aguirre, Carlos Alejandro and
Dredze, Mark",
editor = "Dementieva, Daryna and
Ignat, Oana and
Jin, Zhijing and
Mihalcea, Rada and
Piatti, Giorgio and
Tetreault, Joel and
Wilson, Steven and
Zhao, Jieyu",
booktitle = "Proceedings of the Third Workshop on NLP for Positive Impact",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.nlp4pi-1.3",
pages = "32--49",
abstract = "Training supervised machine learning systems with a fairness loss can improve prediction fairness across different demographic groups. However, doing so requires demographic annotations for training data, without which we cannot produce debiased classifiers for most tasks. Drawing inspiration from transfer learning methods, we investigate whether we can utilize demographic data from a related task to improve the fairness of a target task. We adapt a single-task fairness loss to a multi-task setting to exploit demographic labels from a related task in debiasing a target task, and demonstrate that demographic fairness objectives transfer fairness within a multi-task framework. Additionally, we show that this approach enables intersectional fairness by transferring between two datasets with different single-axis demographics. We explore different data domains to show how our loss can improve fairness domains and tasks.",
}
| Training supervised machine learning systems with a fairness loss can improve prediction fairness across different demographic groups. However, doing so requires demographic annotations for training data, without which we cannot produce debiased classifiers for most tasks. Drawing inspiration from transfer learning methods, we investigate whether we can utilize demographic data from a related task to improve the fairness of a target task. We adapt a single-task fairness loss to a multi-task setting to exploit demographic labels from a related task in debiasing a target task, and demonstrate that demographic fairness objectives transfer fairness within a multi-task framework. Additionally, we show that this approach enables intersectional fairness by transferring between two datasets with different single-axis demographics. We explore different data domains to show how our loss can improve fairness across domains and tasks. | [
"Aguirre, Carlos Alej",
"ro",
"Dredze, Mark"
] | Transferring Fairness using Multi-Task Learning with Limited Demographic Information | nlp4pi-1.3 | Poster | 2305.12671 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
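The entry above trains classifiers with a fairness loss, using demographic labels drawn from a related task. As a hedged illustration of the general idea (a generic demographic-parity gap penalty, not the paper's exact multi-task objective):

```python
# Sketch: add a group-gap penalty to the task loss; `groups` would come
# from the auxiliary task's demographic labels in the multi-task setting.
import torch

def parity_gap(scores: torch.Tensor, groups: torch.Tensor) -> torch.Tensor:
    # Absolute difference between the mean scores of the two groups.
    return (scores[groups == 0].mean() - scores[groups == 1].mean()).abs()

scores = torch.sigmoid(torch.randn(16))   # predicted probabilities for a batch
groups = torch.randint(0, 2, (16,))       # demographic labels (0/1, assumed given)
task_loss = torch.tensor(0.7)             # stand-in classification loss
loss = task_loss + 0.1 * parity_gap(scores, groups)  # 0.1 is an arbitrary weight
```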
https://aclanthology.org/2024.nlp4pi-1.4.bib | https://aclanthology.org/2024.nlp4pi-1.4/ | @inproceedings{aguirre-etal-2024-selecting,
title = "Selecting Shots for Demographic Fairness in Few-Shot Learning with Large Language Models",
author = "Aguirre, Carlos Alejandro and
Sasse, Kuleen and
Cachola, Isabel Alyssa and
Dredze, Mark",
editor = "Dementieva, Daryna and
Ignat, Oana and
Jin, Zhijing and
Mihalcea, Rada and
Piatti, Giorgio and
Tetreault, Joel and
Wilson, Steven and
Zhao, Jieyu",
booktitle = "Proceedings of the Third Workshop on NLP for Positive Impact",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.nlp4pi-1.4",
pages = "50--67",
abstract = "Recently, work in NLP has shifted to few-shot (in-context) learning, with large language models (LLMs) performing well across a range of tasks. However, while fairness evaluations have become a standard for supervised methods, little is known about the fairness of LLMs as prediction systems. Further, common standard methods for fairness involve access to model weights or are applied during finetuning, which are not applicable in few-shot learning. Do LLMs exhibit prediction biases when used for standard NLP tasks?In this work, we analyze the effect of shots, which directly affect the performance of models, on the fairness of LLMs as NLP classification systems. We consider how different shot selection strategies, both existing and new demographically sensitive methods, affect model fairness across three standard fairness datasets. We find that overall the performance of LLMs is not indicative of their fairness, and there is not a single method that fits all scenarios. In light of these facts, we discuss how future work can include LLM fairness in evaluations.",
}
| Recently, work in NLP has shifted to few-shot (in-context) learning, with large language models (LLMs) performing well across a range of tasks. However, while fairness evaluations have become a standard for supervised methods, little is known about the fairness of LLMs as prediction systems. Further, common standard methods for fairness involve access to model weights or are applied during finetuning, which are not applicable in few-shot learning. Do LLMs exhibit prediction biases when used for standard NLP tasks? In this work, we analyze the effect of shots, which directly affect the performance of models, on the fairness of LLMs as NLP classification systems. We consider how different shot selection strategies, both existing and new demographically sensitive methods, affect model fairness across three standard fairness datasets. We find that overall the performance of LLMs is not indicative of their fairness, and there is not a single method that fits all scenarios. In light of these facts, we discuss how future work can include LLM fairness in evaluations. | [
"Aguirre, Carlos Alej",
"ro",
"Sasse, Kuleen",
"Cachola, Isabel Alyssa",
"Dredze, Mark"
] | Selecting Shots for Demographic Fairness in Few-Shot Learning with Large Language Models | nlp4pi-1.4 | Poster | 2311.08472 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
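The shot-selection entry above mentions demographically sensitive selection strategies. One plausible reading, stratifying in-context examples across demographic groups, is sketched below under the assumption that each candidate example carries a group label (the paper's concrete strategies may differ):

```python
# Sketch: round-robin over groups so the k shots are spread as evenly
# as possible across demographic groups in the candidate pool.
import random
from collections import defaultdict

def balanced_shots(pool, k):
    """pool: list of (text, label, group) tuples; returns up to k examples."""
    by_group = defaultdict(list)
    for example in pool:
        by_group[example[2]].append(example)
    shots = []
    while len(shots) < k and any(by_group.values()):
        for group in list(by_group):
            if by_group[group] and len(shots) < k:
                pick = random.randrange(len(by_group[group]))
                shots.append(by_group[group].pop(pick))
    return shots
```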
https://aclanthology.org/2024.nlp4pi-1.6.bib | https://aclanthology.org/2024.nlp4pi-1.6/ | @inproceedings{aldayel-etal-2024-covert,
title = "Covert Bias: The Severity of Social Views{'} Unalignment in Language Models Towards Implicit and Explicit Opinion",
author = "Aldayel, Abeer and
Alokaili, Areej and
Alahmadi, Rehab",
editor = "Dementieva, Daryna and
Ignat, Oana and
Jin, Zhijing and
Mihalcea, Rada and
Piatti, Giorgio and
Tetreault, Joel and
Wilson, Steven and
Zhao, Jieyu",
booktitle = "Proceedings of the Third Workshop on NLP for Positive Impact",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.nlp4pi-1.6",
pages = "68--77",
abstract = "While various approaches have recently been studied for bias identification, little is known about how implicit language that does not explicitly convey a viewpoint affects bias amplification in large language models. To examine the severity of bias toward a view, we evaluated the performance of two downstream tasks where the implicit and explicit knowledge of social groups were used. First, we present a stress test evaluation by using a biased model in edge cases of excessive bias scenarios. Then, we evaluate how LLMs calibrate linguistically in response to both implicit and explicit opinions when they are aligned with conflicting viewpoints. Our findings reveal a discrepancy in LLM performance in identifying implicit and explicit opinions, with a general tendency of bias toward explicit opinions of opposing stances. Moreover, the bias-aligned models generate more cautious responses using uncertainty phrases compared to the unaligned (zero-shot) base models. The direct, incautious responses of the unaligned models suggest a need for further refinement of decisiveness by incorporating uncertainty markers to enhance their reliability, especially on socially nuanced topics with high subjectivity.",
}
| While various approaches have recently been studied for bias identification, little is known about how implicit language that does not explicitly convey a viewpoint affects bias amplification in large language models. To examine the severity of bias toward a view, we evaluated the performance of two downstream tasks where the implicit and explicit knowledge of social groups were used. First, we present a stress test evaluation by using a biased model in edge cases of excessive bias scenarios. Then, we evaluate how LLMs calibrate linguistically in response to both implicit and explicit opinions when they are aligned with conflicting viewpoints. Our findings reveal a discrepancy in LLM performance in identifying implicit and explicit opinions, with a general tendency of bias toward explicit opinions of opposing stances. Moreover, the bias-aligned models generate more cautious responses using uncertainty phrases compared to the unaligned (zero-shot) base models. The direct, incautious responses of the unaligned models suggest a need for further refinement of decisiveness by incorporating uncertainty markers to enhance their reliability, especially on socially nuanced topics with high subjectivity. | [
"Aldayel, Abeer",
"Alokaili, Areej",
"Alahmadi, Rehab"
] | Covert Bias: The Severity of Social Views' Unalignment in Language Models Towards Implicit and Explicit Opinion | nlp4pi-1.6 | Poster | 2408.08212 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
https://aclanthology.org/2024.nlp4pi-1.7.bib | https://aclanthology.org/2024.nlp4pi-1.7/ | @inproceedings{tsai-etal-2024-pg,
title = "{PG}-Story: Taxonomy, Dataset, and Evaluation for Ensuring Child-Safe Content for Story Generation",
author = "Tsai, Alicia Y. and
Oraby, Shereen and
Narayan-Chen, Anjali and
Cervone, Alessandra and
Gella, Spandana and
Verma, Apurv and
Chung, Tagyoung and
Huang, Jing and
Peng, Nanyun",
editor = "Dementieva, Daryna and
Ignat, Oana and
Jin, Zhijing and
Mihalcea, Rada and
Piatti, Giorgio and
Tetreault, Joel and
Wilson, Steven and
Zhao, Jieyu",
booktitle = "Proceedings of the Third Workshop on NLP for Positive Impact",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.nlp4pi-1.7",
pages = "78--97",
abstract = "Creating children{'}s stories through text generation is a creative task that requires stories to be both entertaining and suitable for young audiences. However, since current story generation systems often rely on pre-trained language models fine-tuned with limited story data, they may not always prioritize child-friendliness. This can lead to the unintended generation of stories containing problematic elements such as violence, profanity, and biases. Regrettably, despite the significance of these concerns, there is a lack of clear guidelines and benchmark datasets for ensuring content safety for children. In this paper, we introduce a taxonomy specifically tailored to assess content safety in text, with a strong emphasis on children{'}s well-being. We present PG-Story, a dataset that includes detailed annotations for both sentence-level and discourse-level safety. We demonstrate the potential of identifying unsafe content through self-diagnosis and employing controllable generation techniques during the decoding phase to minimize unsafe elements in generated stories.",
}
| Creating children{'}s stories through text generation is a creative task that requires stories to be both entertaining and suitable for young audiences. However, since current story generation systems often rely on pre-trained language models fine-tuned with limited story data, they may not always prioritize child-friendliness. This can lead to the unintended generation of stories containing problematic elements such as violence, profanity, and biases. Regrettably, despite the significance of these concerns, there is a lack of clear guidelines and benchmark datasets for ensuring content safety for children. In this paper, we introduce a taxonomy specifically tailored to assess content safety in text, with a strong emphasis on children{'}s well-being. We present PG-Story, a dataset that includes detailed annotations for both sentence-level and discourse-level safety. We demonstrate the potential of identifying unsafe content through self-diagnosis and employing controllable generation techniques during the decoding phase to minimize unsafe elements in generated stories. | [
"Tsai, Alicia Y.",
"Oraby, Shereen",
"Narayan-Chen, Anjali",
"Cervone, Aless",
"ra",
"Gella, Sp",
"ana",
"Verma, Apurv",
"Chung, Tagyoung",
"Huang, Jing",
"Peng, Nanyun"
] | PG-Story: Taxonomy, Dataset, and Evaluation for Ensuring Child-Safe Content for Story Generation | nlp4pi-1.7 | Poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 1 |
||
https://aclanthology.org/2024.nlp4pi-1.8.bib | https://aclanthology.org/2024.nlp4pi-1.8/ | @inproceedings{guzman-etal-2024-towards,
title = "Towards Explainable Multi-Label Text Classification: A Multi-Task Rationalisation Framework for Identifying Indicators of Forced Labour",
author = "Guzman, Erick Mendez and
Schlegel, Viktor and
Batista-Navarro, Riza",
editor = "Dementieva, Daryna and
Ignat, Oana and
Jin, Zhijing and
Mihalcea, Rada and
Piatti, Giorgio and
Tetreault, Joel and
Wilson, Steven and
Zhao, Jieyu",
booktitle = "Proceedings of the Third Workshop on NLP for Positive Impact",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.nlp4pi-1.8",
pages = "98--112",
abstract = "The importance of rationales, or natural language explanations, lies in their capacity to bridge the gap between machine predictions and human understanding, by providing human-readable insights into why a text classifier makes specific decisions. This paper presents a novel multi-task rationalisation approach tailored to enhancing the explainability of multi-label text classifiers to identify indicators of forced labour. Our framework integrates a rationale extraction task with the classification objective and allows the inclusion of human explanations during training. We conduct extensive experiments using transformer-based models on a dataset consisting of 2,800 news articles, each annotated with labels and human-generated explanations. Our findings reveal a statistically significant difference between the best-performing architecture leveraging human rationales during training and variants using only labels. Specifically, the supervised model demonstrates a 10{\%} improvement in predictive performance measured by the weighted F1 score, a 15{\%} increase in the agreement between human and machine-generated rationales, and a 4{\%} improvement in the generated rationales{'} comprehensiveness. These results hold promising implications for addressing complex human rights issues with greater transparency and accountability using advanced NLP techniques.",
}
| The importance of rationales, or natural language explanations, lies in their capacity to bridge the gap between machine predictions and human understanding, by providing human-readable insights into why a text classifier makes specific decisions. This paper presents a novel multi-task rationalisation approach tailored to enhancing the explainability of multi-label text classifiers to identify indicators of forced labour. Our framework integrates a rationale extraction task with the classification objective and allows the inclusion of human explanations during training. We conduct extensive experiments using transformer-based models on a dataset consisting of 2,800 news articles, each annotated with labels and human-generated explanations. Our findings reveal a statistically significant difference between the best-performing architecture leveraging human rationales during training and variants using only labels. Specifically, the supervised model demonstrates a 10{\%} improvement in predictive performance measured by the weighted F1 score, a 15{\%} increase in the agreement between human and machine-generated rationales, and a 4{\%} improvement in the generated rationales{'} comprehensiveness. These results hold promising implications for addressing complex human rights issues with greater transparency and accountability using advanced NLP techniques. | [
"Guzman, Erick Mendez",
"Schlegel, Viktor",
"Batista-Navarro, Riza"
] | Towards Explainable Multi-Label Text Classification: A Multi-Task Rationalisation Framework for Identifying Indicators of Forced Labour | nlp4pi-1.8 | Poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 1 |
||
https://aclanthology.org/2024.nlp4pi-1.9.bib | https://aclanthology.org/2024.nlp4pi-1.9/ | @inproceedings{schoene-etal-2024-models,
title = "All Models are Wrong, But Some are Deadly: Inconsistencies in Emotion Detection in Suicide-related Tweets",
author = "Schoene, Annika Marie and
Ramachandranpillai, Resmi and
Lazovich, Tomo and
Baeza-Yates, Ricardo A.",
editor = "Dementieva, Daryna and
Ignat, Oana and
Jin, Zhijing and
Mihalcea, Rada and
Piatti, Giorgio and
Tetreault, Joel and
Wilson, Steven and
Zhao, Jieyu",
booktitle = "Proceedings of the Third Workshop on NLP for Positive Impact",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.nlp4pi-1.9",
pages = "113--122",
abstract = "Recent work in psychology has shown that people who experience mental health challenges are more likely to express their thoughts, emotions, and feelings on social media than share it with a clinical professional. Distinguishing suicide-related content, such as suicide mentioned in a humorous context, from genuine expressions of suicidal ideation is essential to better understanding context and risk. In this paper, we give a first insight and analysis into the differences between emotion labels annotated by humans and labels predicted by three fine-tuned language models (LMs) for suicide-related content. We find that (i) there is little agreement between LMs and humans for emotion labels of suicide-related Tweets and (ii) individual LMs predict similar emotion labels for all suicide-related categories. Our findings lead us to question the credibility and usefulness of such methods in high-risk scenarios such as suicide ideation detection.",
}
| Recent work in psychology has shown that people who experience mental health challenges are more likely to express their thoughts, emotions, and feelings on social media than share it with a clinical professional. Distinguishing suicide-related content, such as suicide mentioned in a humorous context, from genuine expressions of suicidal ideation is essential to better understanding context and risk. In this paper, we give a first insight and analysis into the differences between emotion labels annotated by humans and labels predicted by three fine-tuned language models (LMs) for suicide-related content. We find that (i) there is little agreement between LMs and humans for emotion labels of suicide-related Tweets and (ii) individual LMs predict similar emotion labels for all suicide-related categories. Our findings lead us to question the credibility and usefulness of such methods in high-risk scenarios such as suicide ideation detection. | [
"Schoene, Annika Marie",
"Ramach",
"ranpillai, Resmi",
"Lazovich, Tomo",
"Baeza-Yates, Ricardo A."
] | All Models are Wrong, But Some are Deadly: Inconsistencies in Emotion Detection in Suicide-related Tweets | nlp4pi-1.9 | Poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 1 |
||
https://aclanthology.org/2024.nlp4pi-1.10.bib | https://aclanthology.org/2024.nlp4pi-1.10/ | @inproceedings{ghinassi-etal-2024-efficient,
title = "Efficient Aspect-Based Summarization of Climate Change Reports with Small Language Models",
author = "Ghinassi, Iacopo and
Catalano, Leonardo and
Colella, Tommaso",
editor = "Dementieva, Daryna and
Ignat, Oana and
Jin, Zhijing and
Mihalcea, Rada and
Piatti, Giorgio and
Tetreault, Joel and
Wilson, Steven and
Zhao, Jieyu",
booktitle = "Proceedings of the Third Workshop on NLP for Positive Impact",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.nlp4pi-1.10",
pages = "123--139",
abstract = "The use of Natural Language Processing (NLP) for helping decision-makers with Climate Change action has recently been highlighted as a use case aligning with a broader drive towards NLP technologies for social good. In this context, Aspect-Based Summarization (ABS) systems that extract and summarize relevant information are particularly useful as they provide stakeholders with a convenient way of finding relevant information in expert-curated reports. In this work, we release a new dataset for ABS of Climate Change reports and we employ different Large Language Models (LLMs) and so-called Small Language Models (SLMs) to tackle this problem in an unsupervised way. Considering the problem at hand, we also show how SLMs are not significantly worse for the problem while leading to reduced carbon footprint; we do so by applying for the first time an existing framework considering both energy efficiency and task performance to the evaluation of zero-shot generative models for ABS. Overall, our results show that modern language models, both big and small, can effectively tackle ABS for Climate Change reports but more research is needed when we frame the problem as a Retrieval Augmented Generation (RAG) problem and our work and dataset will help foster efforts in this direction.",
}
| The use of Natural Language Processing (NLP) for helping decision-makers with Climate Change action has recently been highlighted as a use case aligning with a broader drive towards NLP technologies for social good. In this context, Aspect-Based Summarization (ABS) systems that extract and summarize relevant information are particularly useful as they provide stakeholders with a convenient way of finding relevant information in expert-curated reports. In this work, we release a new dataset for ABS of Climate Change reports and we employ different Large Language Models (LLMs) and so-called Small Language Models (SLMs) to tackle this problem in an unsupervised way. Considering the problem at hand, we also show how SLMs are not significantly worse for the problem while leading to a reduced carbon footprint; we do so by applying for the first time an existing framework considering both energy efficiency and task performance to the evaluation of zero-shot generative models for ABS. Overall, our results show that modern language models, both big and small, can effectively tackle ABS for Climate Change reports, but more research is needed when we frame the problem as a Retrieval Augmented Generation (RAG) problem, and our work and dataset will help foster efforts in this direction. | [
"Ghinassi, Iacopo",
"Catalano, Leonardo",
"Colella, Tommaso"
] | Efficient Aspect-Based Summarization of Climate Change Reports with Small Language Models | nlp4pi-1.10 | Poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 1 |
||
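The summarization entry above evaluates zero-shot aspect-based summarization with both large and small models. A sketch of the kind of prompt such a setup might use (illustrative wording, not the paper's actual prompt):

```python
# Hypothetical prompt builder for zero-shot aspect-based summarization.
def abs_prompt(report_text: str, aspect: str) -> str:
    return (
        "Summarize the following climate change report, keeping only "
        f"information relevant to the aspect '{aspect}'.\n\n{report_text}"
    )
```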
https://aclanthology.org/2024.nlp4pi-1.14.bib | https://aclanthology.org/2024.nlp4pi-1.14/ | @inproceedings{miner-ortega-2024-nlp,
title = "An {NLP} Case Study on Predicting the Before and After of the {U}kraine{--}{R}ussia and Hamas{--}{I}srael Conflicts",
author = "Miner, Jordan and
Ortega, John E.",
editor = "Dementieva, Daryna and
Ignat, Oana and
Jin, Zhijing and
Mihalcea, Rada and
Piatti, Giorgio and
Tetreault, Joel and
Wilson, Steven and
Zhao, Jieyu",
booktitle = "Proceedings of the Third Workshop on NLP for Positive Impact",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.nlp4pi-1.14",
pages = "140--151",
abstract = "We propose a method to predict toxicity and other textual attributes through the use of natural language processing (NLP) techniques for two recent events: the Ukraine-Russia and Hamas-Israel conflicts. This article provides a basis for exploration in future conflicts with hopes to mitigate risk through the analysis of social media before and after a conflict begins. Our work compiles several datasets from Twitter and Reddit for both conflicts in a before and after separation with an aim of predicting a future state of social media for avoidance. More specifically, we show that: (1) there is a noticeable difference in social media discussion leading up to and following a conflict and (2) social media discourse on platforms like Twitter and Reddit is useful in identifying future conflicts before they arise. Our results show that through the use of advanced NLP techniques (both supervised and unsupervised) toxicity and other attributes about language before and after a conflict is predictable with a low error of nearly 1.2 percent for both conflicts.",
}
| We propose a method to predict toxicity and other textual attributes through the use of natural language processing (NLP) techniques for two recent events: the Ukraine-Russia and Hamas-Israel conflicts. This article provides a basis for exploration in future conflicts with hopes to mitigate risk through the analysis of social media before and after a conflict begins. Our work compiles several datasets from Twitter and Reddit for both conflicts in a before and after separation with an aim of predicting a future state of social media for avoidance. More specifically, we show that: (1) there is a noticeable difference in social media discussion leading up to and following a conflict and (2) social media discourse on platforms like Twitter and Reddit is useful in identifying future conflicts before they arise. Our results show that through the use of advanced NLP techniques (both supervised and unsupervised) toxicity and other attributes of language before and after a conflict are predictable with a low error of nearly 1.2 percent for both conflicts. | [
"Miner, Jordan",
"Ortega, John E."
] | An NLP Case Study on Predicting the Before and After of the Ukraine–Russia and Hamas–Israel Conflicts | nlp4pi-1.14 | Poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 1 |
||
https://aclanthology.org/2024.nlp4pi-1.15.bib | https://aclanthology.org/2024.nlp4pi-1.15/ | @inproceedings{jenny-etal-2024-exploring,
title = "Exploring the Jungle of Bias: Political Bias Attribution in Language Models via Dependency Analysis",
author = {Jenny, David F. and
Billeter, Yann and
Sch{\"o}lkopf, Bernhard and
Jin, Zhijing},
editor = "Dementieva, Daryna and
Ignat, Oana and
Jin, Zhijing and
Mihalcea, Rada and
Piatti, Giorgio and
Tetreault, Joel and
Wilson, Steven and
Zhao, Jieyu",
booktitle = "Proceedings of the Third Workshop on NLP for Positive Impact",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.nlp4pi-1.15",
pages = "152--178",
abstract = "The rapid advancement of Large Language Models (LLMs) has sparked intense debate regarding the prevalence of bias in these models and its mitigation. Yet, as exemplified by both results on debiasing methods in the literature and reports of alignment-related defects from the wider community, bias remains a poorly understood topic despite its practical relevance. To enhance the understanding of the internal causes of bias, we analyse LLM bias through the lens of causal fairness analysis, which enables us to both comprehend the origins of bias and reason about its downstream consequences and mitigation. To operationalize this framework, we propose a prompt-based method for the extraction of confounding and mediating attributes which contribute to the LLM decision process. By applying Activity Dependency Networks (ADNs), we then analyse how these attributes influence an LLM{'}s decision process. We apply our method to LLM ratings of argument quality in political debates. We find that the observed disparate treatment can at least in part be attributed to confounding and mitigating attributes and model misalignment, and discuss the consequences of our findings for human-AI alignment and bias mitigation.",
}
| The rapid advancement of Large Language Models (LLMs) has sparked intense debate regarding the prevalence of bias in these models and its mitigation. Yet, as exemplified by both results on debiasing methods in the literature and reports of alignment-related defects from the wider community, bias remains a poorly understood topic despite its practical relevance. To enhance the understanding of the internal causes of bias, we analyse LLM bias through the lens of causal fairness analysis, which enables us to both comprehend the origins of bias and reason about its downstream consequences and mitigation. To operationalize this framework, we propose a prompt-based method for the extraction of confounding and mediating attributes which contribute to the LLM decision process. By applying Activity Dependency Networks (ADNs), we then analyse how these attributes influence an LLM{'}s decision process. We apply our method to LLM ratings of argument quality in political debates. We find that the observed disparate treatment can at least in part be attributed to confounding and mitigating attributes and model misalignment, and discuss the consequences of our findings for human-AI alignment and bias mitigation. | [
"Jenny, David F.",
"Billeter, Yann",
"Sch{\\\"o}lkopf, Bernhard",
"Jin, Zhijing"
] | Exploring the Jungle of Bias: Political Bias Attribution in Language Models via Dependency Analysis | nlp4pi-1.15 | Poster | 2311.08605 | [
"https://github.com/david-jenny/llm-political-study"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
https://aclanthology.org/2024.nlp4pi-1.16.bib | https://aclanthology.org/2024.nlp4pi-1.16/ | @inproceedings{didwania-etal-2024-agrillm,
title = "{A}gri{LLM}:Harnessing Transformers for Framer Queries",
author = "Didwania, Krish and
Seth, Pratinav and
Kasliwal, Aditya and
Agarwal, Amit",
editor = "Dementieva, Daryna and
Ignat, Oana and
Jin, Zhijing and
Mihalcea, Rada and
Piatti, Giorgio and
Tetreault, Joel and
Wilson, Steven and
Zhao, Jieyu",
booktitle = "Proceedings of the Third Workshop on NLP for Positive Impact",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.nlp4pi-1.16",
pages = "179--187",
abstract = "Agriculture, vital for global sustenance, necessitates innovative solutions due to a lack of organized domain experts, particularly in developing countries where many farmers are impoverished and cannot afford expert consulting. Initiatives like Farmers Helpline play a crucial role in such countries, yet challenges such as high operational costs persist. Automating query resolution can alleviate the burden on traditional call centers, providing farmers with immediate and contextually relevant information.The integration of Agriculture and Artificial Intelligence (AI) offers a transformative opportunity to empower farmers and bridge information gaps.Language models like transformers, the rising stars of AI, possess remarkable language understanding capabilities, making them ideal for addressing information gaps in agriculture.This work explores and demonstrates the transformative potential of Large Language Models (LLMs) in automating query resolution for agricultural farmers, leveraging their expertise in deciphering natural language and understanding context. Using a subset of a vast dataset of real-world farmer queries collected in India, our study focuses on approximately 4 million queries from the state of Tamil Nadu, spanning various sectors, seasonal crops, and query types.",
}
| Agriculture, vital for global sustenance, necessitates innovative solutions due to a lack of organized domain experts, particularly in developing countries where many farmers are impoverished and cannot afford expert consulting. Initiatives like Farmers Helpline play a crucial role in such countries, yet challenges such as high operational costs persist. Automating query resolution can alleviate the burden on traditional call centers, providing farmers with immediate and contextually relevant information. The integration of Agriculture and Artificial Intelligence (AI) offers a transformative opportunity to empower farmers and bridge information gaps. Language models like transformers, the rising stars of AI, possess remarkable language understanding capabilities, making them ideal for addressing information gaps in agriculture. This work explores and demonstrates the transformative potential of Large Language Models (LLMs) in automating query resolution for agricultural farmers, leveraging their expertise in deciphering natural language and understanding context. Using a subset of a vast dataset of real-world farmer queries collected in India, our study focuses on approximately 4 million queries from the state of Tamil Nadu, spanning various sectors, seasonal crops, and query types. | [
"Didwania, Krish",
"Seth, Pratinav",
"Kasliwal, Aditya",
"Agarwal, Amit"
] | AgriLLM: Harnessing Transformers for Farmer Queries | nlp4pi-1.16 | Poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 1 |
||
https://aclanthology.org/2024.nlp4pi-1.17.bib | https://aclanthology.org/2024.nlp4pi-1.17/ | @inproceedings{ginga-uban-2024-scitechbaitro,
title = "{S}ci{T}ech{B}ait{RO}: {C}lick{B}ait Detection for {R}omanian Science and Technology News",
author = "G{\^\i}nga, Raluca-Andreea and
Uban, Ana Sabina",
editor = "Dementieva, Daryna and
Ignat, Oana and
Jin, Zhijing and
Mihalcea, Rada and
Piatti, Giorgio and
Tetreault, Joel and
Wilson, Steven and
Zhao, Jieyu",
booktitle = "Proceedings of the Third Workshop on NLP for Positive Impact",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.nlp4pi-1.17",
pages = "188--201",
abstract = "In this paper, we introduce a new annotated corpus of clickbait news in a low-resource language - Romanian, and a rarely covered domain - science and technology news: SciTechBaitRO. It is one of the first and the largest corpus (almost 11,000 examples) of annotated clickbait texts for the Romanian language and the first one to focus on the sci-tech domain, to our knowledge. We evaluate the possibility of automatically detecting clickbait through a series of data analysis and machine learning experiments with varied features and models, including a range of linguistic features, classical machine learning models, deep learning and pre-trained models. We compare the performance of models using different kinds of features, and show that the best results are given by the BERT models, with results of up to 89{\%} F1 score. We additionally evaluate the models in a cross-domain setting for news belonging to other categories (i.e. politics, sports, entertainment) and demonstrate their capacity to generalize by detecting clickbait news outside of domain with high F1-scores.",
}
| In this paper, we introduce a new annotated corpus of clickbait news in a low-resource language - Romanian, and a rarely covered domain - science and technology news: SciTechBaitRO. It is one of the first and the largest corpus (almost 11,000 examples) of annotated clickbait texts for the Romanian language and the first one to focus on the sci-tech domain, to our knowledge. We evaluate the possibility of automatically detecting clickbait through a series of data analysis and machine learning experiments with varied features and models, including a range of linguistic features, classical machine learning models, deep learning and pre-trained models. We compare the performance of models using different kinds of features, and show that the best results are given by the BERT models, with results of up to 89{\%} F1 score. We additionally evaluate the models in a cross-domain setting for news belonging to other categories (i.e. politics, sports, entertainment) and demonstrate their capacity to generalize by detecting clickbait news outside of domain with high F1-scores. | [
"G{\\^\\i}nga, Raluca-Andreea",
"Uban, Ana Sabina"
] | SciTechBaitRO: ClickBait Detection for Romanian Science and Technology News | nlp4pi-1.17 | Poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 1 |
||
https://aclanthology.org/2024.nlp4pi-1.18.bib | https://aclanthology.org/2024.nlp4pi-1.18/ | @inproceedings{wu-ebling-2024-investigating,
title = "Investigating Ableism in {LLM}s through Multi-turn Conversation",
author = "Wu, Guojun and
Ebling, Sarah",
editor = "Dementieva, Daryna and
Ignat, Oana and
Jin, Zhijing and
Mihalcea, Rada and
Piatti, Giorgio and
Tetreault, Joel and
Wilson, Steven and
Zhao, Jieyu",
booktitle = "Proceedings of the Third Workshop on NLP for Positive Impact",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.nlp4pi-1.18",
pages = "202--210",
abstract = "To reveal ableism (i.e., bias against persons with disabilities) in large language models (LLMs), we introduce a novel approach involving multi-turn conversations, enabling a comparative assessment. Initially, we prompt the LLM to elaborate short biographies, followed by a request to incorporate information about a disability. Finally, we employ several methods to identify the top words that distinguish the disability-integrated biographies from those without. This comparative setting helps us uncover how LLMs handle disability-related information and reveal underlying biases. We observe that LLMs tend to highlight disabilities in a manner that can be perceived as patronizing or as implying that overcoming challenges is unexpected due to the disability.",
}
| To reveal ableism (i.e., bias against persons with disabilities) in large language models (LLMs), we introduce a novel approach involving multi-turn conversations, enabling a comparative assessment. Initially, we prompt the LLM to elaborate short biographies, followed by a request to incorporate information about a disability. Finally, we employ several methods to identify the top words that distinguish the disability-integrated biographies from those without. This comparative setting helps us uncover how LLMs handle disability-related information and reveal underlying biases. We observe that LLMs tend to highlight disabilities in a manner that can be perceived as patronizing or as implying that overcoming challenges is unexpected due to the disability. | [
"Wu, Guojun",
"Ebling, Sarah"
] | Investigating Ableism in LLMs through Multi-turn Conversation | nlp4pi-1.18 | Poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 1 |
||
https://aclanthology.org/2024.nlp4pi-1.19.bib | https://aclanthology.org/2024.nlp4pi-1.19/ | @inproceedings{sicilia-alikhani-2024-eliciting,
title = "Eliciting Uncertainty in Chain-of-Thought to Mitigate Bias against Forecasting Harmful User Behaviors",
author = "Sicilia, Anthony and
Alikhani, Malihe",
editor = "Dementieva, Daryna and
Ignat, Oana and
Jin, Zhijing and
Mihalcea, Rada and
Piatti, Giorgio and
Tetreault, Joel and
Wilson, Steven and
Zhao, Jieyu",
booktitle = "Proceedings of the Third Workshop on NLP for Positive Impact",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.nlp4pi-1.19",
pages = "211--223",
abstract = "Conversation forecasting tasks a model with predicting the outcome of an unfolding conversation. For instance, it can be applied in social media moderation to predict harmful user behaviors before they occur, allowing for preventative interventions. While large language models (LLMs) have recently been proposed as an effective tool for conversation forecasting, it{'}s unclear what biases they may have, especially against forecasting the (potentially harmful) outcomes we request them to predict during moderation. This paper explores to what extent model uncertainty can be used as a tool to mitigate potential biases. Specifically, we ask three primary research questions: 1) how does LLM forecasting accuracy change when we ask models to represent their uncertainty; 2) how does LLM bias change when we ask models to represent their uncertainty; 3) how can we use uncertainty representations to reduce or completely mitigate biases without many training data points. We address these questions for 5 open-source language models tested on 2 datasets designed to evaluate conversation forecasting for social media moderation.",
}
| Conversation forecasting tasks a model with predicting the outcome of an unfolding conversation. For instance, it can be applied in social media moderation to predict harmful user behaviors before they occur, allowing for preventative interventions. While large language models (LLMs) have recently been proposed as an effective tool for conversation forecasting, it{'}s unclear what biases they may have, especially against forecasting the (potentially harmful) outcomes we request them to predict during moderation. This paper explores to what extent model uncertainty can be used as a tool to mitigate potential biases. Specifically, we ask three primary research questions: 1) how does LLM forecasting accuracy change when we ask models to represent their uncertainty; 2) how does LLM bias change when we ask models to represent their uncertainty; 3) how can we use uncertainty representations to reduce or completely mitigate biases without many training data points. We address these questions for 5 open-source language models tested on 2 datasets designed to evaluate conversation forecasting for social media moderation. | [
"Sicilia, Anthony",
"Alikhani, Malihe"
] | Eliciting Uncertainty in Chain-of-Thought to Mitigate Bias against Forecasting Harmful User Behaviors | nlp4pi-1.19 | Poster | 2410.14744 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
https://aclanthology.org/2024.nlp4pi-1.21.bib | https://aclanthology.org/2024.nlp4pi-1.21/ | @inproceedings{sabri-etal-2024-inferring,
title = "Inferring Mental Burnout Discourse Across {R}eddit Communities",
author = "Sabri, Nazanin and
Pham, Anh C. and
Kakkar, Ishita and
ElSherief, Mai",
editor = "Dementieva, Daryna and
Ignat, Oana and
Jin, Zhijing and
Mihalcea, Rada and
Piatti, Giorgio and
Tetreault, Joel and
Wilson, Steven and
Zhao, Jieyu",
booktitle = "Proceedings of the Third Workshop on NLP for Positive Impact",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.nlp4pi-1.21",
pages = "224--231",
abstract = "Mental burnout refers to a psychological syndrome induced by chronic stress that negatively impacts the emotional and physical well-being of individuals. From the occupational context to personal hobbies, burnout is pervasive across domains and therefore affects the morale and productivity of society as a whole. Currently, no linguistic resources are available for the analysis or detection of burnout language. We address this gap by introducing a dataset annotated for burnout language. Given that social media is a platform for sharing life experiences and mental health struggles, our work examines the manifestation of burnout language in Reddit posts. We introduce a contextual word sense disambiguation approach to identify the specific meaning or context in which the word {``}burnout{''} is used, distinguishing between its application in mental health (e.g., job-related stress leading to burnout) and non-mental health contexts (e.g., engine burnout in a mechanical context). We create a dataset of 2,330 manually labeled Reddit posts for this task, as well as annotating the reason the poster associates with their burnout (e.g., professional, personal, non-traditional). We train machine learning models on this dataset achieving a minimum F1 score of 0.84 on the different tasks. We make our dataset of annotated Reddit post IDs publicly available to help advance future research in this field.",
}
| Mental burnout refers to a psychological syndrome induced by chronic stress that negatively impacts the emotional and physical well-being of individuals. From the occupational context to personal hobbies, burnout is pervasive across domains and therefore affects the morale and productivity of society as a whole. Currently, no linguistic resources are available for the analysis or detection of burnout language. We address this gap by introducing a dataset annotated for burnout language. Given that social media is a platform for sharing life experiences and mental health struggles, our work examines the manifestation of burnout language in Reddit posts. We introduce a contextual word sense disambiguation approach to identify the specific meaning or context in which the word {``}burnout{''} is used, distinguishing between its application in mental health (e.g., job-related stress leading to burnout) and non-mental health contexts (e.g., engine burnout in a mechanical context). We create a dataset of 2,330 manually labeled Reddit posts for this task, as well as annotating the reason the poster associates with their burnout (e.g., professional, personal, non-traditional). We train machine learning models on this dataset achieving a minimum F1 score of 0.84 on the different tasks. We make our dataset of annotated Reddit post IDs publicly available to help advance future research in this field. | [
"Sabri, Nazanin",
"Pham, Anh C.",
"Kakkar, Ishita",
"ElSherief, Mai"
] | Inferring Mental Burnout Discourse Across Reddit Communities | nlp4pi-1.21 | Poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 1 |
||
https://aclanthology.org/2024.nlp4pi-1.22.bib | https://aclanthology.org/2024.nlp4pi-1.22/ | @inproceedings{li-etal-2024-decoding,
title = "Decoding Ableism in Large Language Models: An Intersectional Approach",
author = "Li, Rong and
Kamaraj, Ashwini and
Ma, Jing and
Ebling, Sarah",
editor = "Dementieva, Daryna and
Ignat, Oana and
Jin, Zhijing and
Mihalcea, Rada and
Piatti, Giorgio and
Tetreault, Joel and
Wilson, Steven and
Zhao, Jieyu",
booktitle = "Proceedings of the Third Workshop on NLP for Positive Impact",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.nlp4pi-1.22",
pages = "232--249",
abstract = "With the pervasive use of large language models (LLMs) across various domains, addressing the inherent ableist biases within these models requires more attention and resolution. This paper examines ableism in three LLMs (GPT-3.5, GPT-4, and Llama 3) by analyzing the intersection of disability with two additional social categories: gender and social class. Utilizing two task-specific prompts, we generated and analyzed text outputs with two metrics, VADER and regard, to evaluate sentiment and social perception biases within the responses. Our results indicate a marked improvement in bias mitigation from GPT-3.5 to GPT-4, with the latter demonstrating more positive sentiments overall, while Llama 3 showed comparatively weaker performance. Additionally, our findings underscore the complexity of intersectional biases: These biases are shaped by the combined effects of disability, gender, and class, which alter the expression and perception of ableism in LLM outputs. This research highlights the necessity for more nuanced and inclusive bias mitigation strategies in AI development, contributing to the ongoing dialogue on ethical AI practices.",
}
| With the pervasive use of large language models (LLMs) across various domains, addressing the inherent ableist biases within these models requires more attention and resolution. This paper examines ableism in three LLMs (GPT-3.5, GPT-4, and Llama 3) by analyzing the intersection of disability with two additional social categories: gender and social class. Utilizing two task-specific prompts, we generated and analyzed text outputs with two metrics, VADER and regard, to evaluate sentiment and social perception biases within the responses. Our results indicate a marked improvement in bias mitigation from GPT-3.5 to GPT-4, with the latter demonstrating more positive sentiments overall, while Llama 3 showed comparatively weaker performance. Additionally, our findings underscore the complexity of intersectional biases: These biases are shaped by the combined effects of disability, gender, and class, which alter the expression and perception of ableism in LLM outputs. This research highlights the necessity for more nuanced and inclusive bias mitigation strategies in AI development, contributing to the ongoing dialogue on ethical AI practices. | [
"Li, Rong",
"Kamaraj, Ashwini",
"Ma, Jing",
"Ebling, Sarah"
] | Decoding Ableism in Large Language Models: An Intersectional Approach | nlp4pi-1.22 | Poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 1 |
||
https://aclanthology.org/2024.nlp4pi-1.23.bib | https://aclanthology.org/2024.nlp4pi-1.23/ | @inproceedings{wasi-2024-explainable,
title = "Explainable Identification of Hate Speech towards Islam using Graph Neural Networks",
author = "Wasi, Azmine Toushik",
editor = "Dementieva, Daryna and
Ignat, Oana and
Jin, Zhijing and
Mihalcea, Rada and
Piatti, Giorgio and
Tetreault, Joel and
Wilson, Steven and
Zhao, Jieyu",
booktitle = "Proceedings of the Third Workshop on NLP for Positive Impact",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.nlp4pi-1.23",
pages = "250--257",
abstract = "Islamophobic language on online platforms fosters intolerance, making detection and elimination crucial for promoting harmony. Traditional hate speech detection models rely on NLP techniques like tokenization, part-of-speech tagging, and encoder-decoder models. However, Graph Neural Networks (GNNs), with their ability to utilize relationships between data points, offer more effective detection and greater explainability. In this work, we represent speeches as nodes and connect them with edges based on their context and similarity to develop the graph. This study introduces a novel paradigm using GNNs to identify and explain hate speech towards Islam. Our model leverages GNNs to understand the context and patterns of hate speech by connecting texts via pretrained NLP-generated word embeddings, achieving state-of-the-art performance and enhancing detection accuracy while providing valuable explanations. This highlights the potential of GNNs in combating online hate speech and fostering a safer, more inclusive online environment.",
}
| Islamophobic language on online platforms fosters intolerance, making detection and elimination crucial for promoting harmony. Traditional hate speech detection models rely on NLP techniques like tokenization, part-of-speech tagging, and encoder-decoder models. However, Graph Neural Networks (GNNs), with their ability to utilize relationships between data points, offer more effective detection and greater explainability. In this work, we represent speeches as nodes and connect them with edges based on their context and similarity to develop the graph. This study introduces a novel paradigm using GNNs to identify and explain hate speech towards Islam. Our model leverages GNNs to understand the context and patterns of hate speech by connecting texts via pretrained NLP-generated word embeddings, achieving state-of-the-art performance and enhancing detection accuracy while providing valuable explanations. This highlights the potential of GNNs in combating online hate speech and fostering a safer, more inclusive online environment. | [
"Wasi, Azmine Toushik"
] | Explainable Identification of Hate Speech towards Islam using Graph Neural Networks | nlp4pi-1.23 | Poster | 2311.04916 | [
""
] | https://huggingface.co/papers/2311.04916 | 1 | 0 | 0 | 1 | [] | [] | [] | [] | [] | [] | 1 |
https://aclanthology.org/2024.nlp4pi-1.24.bib | https://aclanthology.org/2024.nlp4pi-1.24/ | @inproceedings{harrod-etal-2024-text,
title = "From Text to Maps: {LLM}-Driven Extraction and Geotagging of Epidemiological Data",
author = "Harrod, Karlyn K. and
Bhandari, Prabin and
Anastasopoulos, Antonios",
editor = "Dementieva, Daryna and
Ignat, Oana and
Jin, Zhijing and
Mihalcea, Rada and
Piatti, Giorgio and
Tetreault, Joel and
Wilson, Steven and
Zhao, Jieyu",
booktitle = "Proceedings of the Third Workshop on NLP for Positive Impact",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.nlp4pi-1.24",
pages = "258--270",
abstract = "Epidemiological datasets are essential for public health analysis and decision-making, yet they remain scarce and often difficult to compile due to inconsistent data formats, language barriers, and evolving political boundaries. Traditional methods of creating such datasets involve extensive manual effort and are prone to errors in accurate location extraction. To address these challenges, we propose utilizing large language models (LLMs) to automate the extraction and geotagging of epidemiological data from textual documents. Our approach significantly reduces the manual effort required, limiting human intervention to validating a subset of records against text snippets and verifying the geotagging reasoning, as opposed to reviewing multiple entire documents manually to extract, clean, and geotag. Additionally, the LLMs identify information often overlooked by human annotators, further enhancing the dataset{'}s completeness. Our findings demonstrate that LLMs can be effectively used to semi-automate the extraction and geotagging of epidemiological data, offering several key advantages: (1) comprehensive information extraction with minimal risk of missing critical details; (2) minimal human intervention; (3) higher-resolution data with more precise geotagging; and (4) significantly reduced resource demands compared to traditional methods.",
}
| Epidemiological datasets are essential for public health analysis and decision-making, yet they remain scarce and often difficult to compile due to inconsistent data formats, language barriers, and evolving political boundaries. Traditional methods of creating such datasets involve extensive manual effort and are prone to errors in accurate location extraction. To address these challenges, we propose utilizing large language models (LLMs) to automate the extraction and geotagging of epidemiological data from textual documents. Our approach significantly reduces the manual effort required, limiting human intervention to validating a subset of records against text snippets and verifying the geotagging reasoning, as opposed to reviewing multiple entire documents manually to extract, clean, and geotag. Additionally, the LLMs identify information often overlooked by human annotators, further enhancing the dataset{'}s completeness. Our findings demonstrate that LLMs can be effectively used to semi-automate the extraction and geotagging of epidemiological data, offering several key advantages: (1) comprehensive information extraction with minimal risk of missing critical details; (2) minimal human intervention; (3) higher-resolution data with more precise geotagging; and (4) significantly reduced resource demands compared to traditional methods. | [
"Harrod, Karlyn K.",
"Bh",
"ari, Prabin",
"Anastasopoulos, Antonios"
] | From Text to Maps: LLM-Driven Extraction and Geotagging of Epidemiological Data | nlp4pi-1.24 | Poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 1 |
||
https://aclanthology.org/2024.nlp4pi-1.25.bib | https://aclanthology.org/2024.nlp4pi-1.25/ | @inproceedings{uyuk-etal-2024-crafting,
title = "Crafting Tomorrow{'}s Headlines: Neural News Generation and Detection in {E}nglish, {T}urkish, {H}ungarian, and {P}ersian",
author = {{\"U}y{\"u}k, Cem and
Rov{\'o}, Danica and
Shaghayeghkolli, Shaghayeghkolli and
Varol, Rabia and
Groh, Georg and
Dementieva, Daryna},
editor = "Dementieva, Daryna and
Ignat, Oana and
Jin, Zhijing and
Mihalcea, Rada and
Piatti, Giorgio and
Tetreault, Joel and
Wilson, Steven and
Zhao, Jieyu",
booktitle = "Proceedings of the Third Workshop on NLP for Positive Impact",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.nlp4pi-1.25",
pages = "271--307",
abstract = "In the era dominated by information overload and its facilitation with Large Language Models (LLMs), the prevalence of misinformation poses a significant threat to public discourse and societal well-being. A critical concern at present involves the identification of machine-generated news. In this work, we take a significant step by introducing a benchmark dataset designed for neural news detection in four languages: English, Turkish, Hungarian, and Persian. The dataset incorporates outputs from multiple multilingual generators (in both, zero-shot and fine-tuned setups) such as BloomZ, LLaMa-2, Mistral, Mixtral, and GPT-4. Next, we experiment with a variety of classifiers, ranging from those based on linguistic features to advanced Transformer-based models and LLMs prompting. We present the detection results aiming to delve into the interpretablity and robustness of machine-generated texts detectors across all target languages.",
}
| In the era dominated by information overload and its facilitation with Large Language Models (LLMs), the prevalence of misinformation poses a significant threat to public discourse and societal well-being. A critical concern at present involves the identification of machine-generated news. In this work, we take a significant step by introducing a benchmark dataset designed for neural news detection in four languages: English, Turkish, Hungarian, and Persian. The dataset incorporates outputs from multiple multilingual generators (in both zero-shot and fine-tuned setups) such as BloomZ, LLaMa-2, Mistral, Mixtral, and GPT-4. Next, we experiment with a variety of classifiers, ranging from those based on linguistic features to advanced Transformer-based models and LLM prompting. We present the detection results aiming to delve into the interpretability and robustness of machine-generated text detectors across all target languages. | [
"{\\\"U}y{\\\"u}k, Cem",
"Rov{\\'o}, Danica",
"Shaghayeghkolli, Shaghayeghkolli",
"Varol, Rabia",
"Groh, Georg",
"Dementieva, Daryna"
] | Crafting Tomorrow's Headlines: Neural News Generation and Detection in English, Turkish, Hungarian, and Persian | nlp4pi-1.25 | Poster | 2408.10724 | [
""
] | https://huggingface.co/papers/2408.10724 | 0 | 0 | 0 | 6 | [
"tum-nlp/neural-news-discriminator-BERT-hu",
"tum-nlp/neural-news-discriminator-BERT-en",
"tum-nlp/neural-news-discriminator-BERT-fa",
"tum-nlp/neural-news-discriminator-BERT-tr",
"tum-nlp/neural-news-discriminator-RoBERTa-tr",
"tum-nlp/neural-news-discriminator-RoBERTa-fa",
"tum-nlp/neural-news-discriminator-RoBERTa-hu",
"tum-nlp/neural-news-discriminator-RoBERTa-en"
] | [] | [] | [
"tum-nlp/neural-news-discriminator-BERT-hu",
"tum-nlp/neural-news-discriminator-BERT-en",
"tum-nlp/neural-news-discriminator-BERT-fa",
"tum-nlp/neural-news-discriminator-BERT-tr",
"tum-nlp/neural-news-discriminator-RoBERTa-tr",
"tum-nlp/neural-news-discriminator-RoBERTa-fa",
"tum-nlp/neural-news-discriminator-RoBERTa-hu",
"tum-nlp/neural-news-discriminator-RoBERTa-en"
] | [] | [] | 1 |
https://aclanthology.org/2024.nlp4pi-1.26.bib | https://aclanthology.org/2024.nlp4pi-1.26/ | @inproceedings{kapur-kreiss-2024-reference,
title = "Reference-Based Metrics Are Biased Against Blind and Low-Vision Users{'} Image Description Preferences",
author = "Kapur, Rhea and
Kreiss, Elisa",
editor = "Dementieva, Daryna and
Ignat, Oana and
Jin, Zhijing and
Mihalcea, Rada and
Piatti, Giorgio and
Tetreault, Joel and
Wilson, Steven and
Zhao, Jieyu",
booktitle = "Proceedings of the Third Workshop on NLP for Positive Impact",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.nlp4pi-1.26",
pages = "308--314",
abstract = "Image description generation models are sophisticated Vision-Language Models which promise to make visual content, such as images, non-visually accessible through linguistic descriptions. While these systems can benefit all, their primary motivation tends to lie in allowing blind and low-vision (BLV) users access to increasingly visual (online) discourse. Well-defined evaluation methods are crucial for steering model development into socially useful directions. In this work, we show that the most popular evaluation metrics (reference-based metrics) are biased against BLV users and therefore potentially stifle useful model development. Reference-based metrics assign quality scores based on the similarity to human-generated ground-truth descriptions and are widely accepted as neutrally representing the needs of all users. However, we find that these metrics are more strongly correlated with sighted participant ratings than BLV ratings, and we explore factors which appear to mediate this finding: description length, the image{'}s context of appearance, and the number of reference descriptions available. These findings suggest that there is a need for developing evaluation methods that are established based on specific downstream user groups, and they highlight the importance of reflecting on emerging biases against minorities in the development of general-purpose automatic metrics.",
}
| Image description generation models are sophisticated Vision-Language Models which promise to make visual content, such as images, non-visually accessible through linguistic descriptions. While these systems can benefit all, their primary motivation tends to lie in allowing blind and low-vision (BLV) users access to increasingly visual (online) discourse. Well-defined evaluation methods are crucial for steering model development into socially useful directions. In this work, we show that the most popular evaluation metrics (reference-based metrics) are biased against BLV users and therefore potentially stifle useful model development. Reference-based metrics assign quality scores based on the similarity to human-generated ground-truth descriptions and are widely accepted as neutrally representing the needs of all users. However, we find that these metrics are more strongly correlated with sighted participant ratings than BLV ratings, and we explore factors which appear to mediate this finding: description length, the image{'}s context of appearance, and the number of reference descriptions available. These findings suggest that there is a need for developing evaluation methods that are established based on specific downstream user groups, and they highlight the importance of reflecting on emerging biases against minorities in the development of general-purpose automatic metrics. | [
"Kapur, Rhea",
"Kreiss, Elisa"
] | Reference-Based Metrics Are Biased Against Blind and Low-Vision Users' Image Description Preferences | nlp4pi-1.26 | Poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 1 |
||
https://aclanthology.org/2024.nlp4pi-1.27.bib | https://aclanthology.org/2024.nlp4pi-1.27/ | @inproceedings{wang-etal-2024-multiclimate,
title = "{M}ulti{C}limate: Multimodal Stance Detection on Climate Change Videos",
author = "Wang, Jiawen and
Zuo, Longfei and
Peng, Siyao and
Plank, Barbara",
editor = "Dementieva, Daryna and
Ignat, Oana and
Jin, Zhijing and
Mihalcea, Rada and
Piatti, Giorgio and
Tetreault, Joel and
Wilson, Steven and
Zhao, Jieyu",
booktitle = "Proceedings of the Third Workshop on NLP for Positive Impact",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.nlp4pi-1.27",
pages = "315--326",
abstract = "Climate change (CC) has attracted increasing attention in NLP in recent years. However, detecting the stance on CC in multimodal data is understudied and remains challenging due to a lack of reliable datasets. To improve the understanding of public opinions and communication strategies, this paper presents MultiClimate, the first open-source manually-annotated stance detection dataset with 100 CC-related YouTube videos and 4,209 frame-transcript pairs. We deploy state-of-the-art vision and language models, as well as multimodal models for MultiClimate stance detection. Results show that text-only BERT significantly outperforms image-only ResNet50 and ViT. Combining both modalities achieves state-of-the-art, 0.747/0.749 in accuracy/F1. Our 100M-sized fusion models also beat CLIP and BLIP, as well as the much larger 9B-sized multimodal IDEFICS and text-only Llama3 and Gemma2, indicating that multimodal stance detection remains challenging for large language models. Our code, dataset, as well as supplementary materials, are available at https://github.com/werywjw/MultiClimate.",
}
| Climate change (CC) has attracted increasing attention in NLP in recent years. However, detecting the stance on CC in multimodal data is understudied and remains challenging due to a lack of reliable datasets. To improve the understanding of public opinions and communication strategies, this paper presents MultiClimate, the first open-source manually-annotated stance detection dataset with 100 CC-related YouTube videos and 4,209 frame-transcript pairs. We deploy state-of-the-art vision and language models, as well as multimodal models for MultiClimate stance detection. Results show that text-only BERT significantly outperforms image-only ResNet50 and ViT. Combining both modalities achieves state-of-the-art performance, with 0.747/0.749 in accuracy/F1. Our 100M-sized fusion models also beat CLIP and BLIP, as well as the much larger 9B-sized multimodal IDEFICS and text-only Llama3 and Gemma2, indicating that multimodal stance detection remains challenging for large language models. Our code, dataset, and supplementary materials are available at https://github.com/werywjw/MultiClimate. | [
"Wang, Jiawen",
"Zuo, Longfei",
"Peng, Siyao",
"Plank, Barbara"
] | MultiClimate: Multimodal Stance Detection on Climate Change Videos | nlp4pi-1.27 | Poster | 2409.18346 | [
"https://github.com/werywjw/multiclimate"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
https://aclanthology.org/2024.nlp4pi-1.28.bib | https://aclanthology.org/2024.nlp4pi-1.28/ | @inproceedings{gupta-etal-2024-aavenue,
title = "{AAVENUE}: Detecting {LLM} Biases on {NLU} Tasks in {AAVE} via a Novel Benchmark",
author = "Gupta, Abhay and
Yurtseven, Ece and
Meng, Philip and
Zhu, Kevin",
editor = "Dementieva, Daryna and
Ignat, Oana and
Jin, Zhijing and
Mihalcea, Rada and
Piatti, Giorgio and
Tetreault, Joel and
Wilson, Steven and
Zhao, Jieyu",
booktitle = "Proceedings of the Third Workshop on NLP for Positive Impact",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.nlp4pi-1.28",
pages = "327--333",
abstract = "Detecting biases in natural language understanding (NLU) for African American Vernacular English (AAVE) is crucial to developing inclusive natural language processing (NLP) systems. To address dialect-induced performance discrepancies, we introduce AAVENUE (AAVE Natural Language Understanding Evaluation), a benchmark for evaluating large language model (LLM) performance on NLU tasks in AAVE and Standard American English (SAE). AAVENUE builds upon and extends existing benchmarks like VALUE, replacing deterministic syntactic and morphological transformations with a more flexible methodology leveraging LLM-based translation with few-shot prompting, improving performance across our evaluation metrics when translating key tasks from the GLUE and SuperGLUE benchmarks. We compare AAVENUE and VALUE translations using five popular LLMs and a comprehensive set of metrics including fluency, BARTScore, quality, coherence, and understandability. Additionally, we recruit fluent AAVE speakers to validate our translations for authenticity. Our evaluations reveal that LLMs consistently perform better on SAE tasks than AAVE-translated versions, underscoring inherent biases and highlighting the need for more inclusive NLP models.",
}
| Detecting biases in natural language understanding (NLU) for African American Vernacular English (AAVE) is crucial to developing inclusive natural language processing (NLP) systems. To address dialect-induced performance discrepancies, we introduce AAVENUE (AAVE Natural Language Understanding Evaluation), a benchmark for evaluating large language model (LLM) performance on NLU tasks in AAVE and Standard American English (SAE). AAVENUE builds upon and extends existing benchmarks like VALUE, replacing deterministic syntactic and morphological transformations with a more flexible methodology leveraging LLM-based translation with few-shot prompting, improving performance across our evaluation metrics when translating key tasks from the GLUE and SuperGLUE benchmarks. We compare AAVENUE and VALUE translations using five popular LLMs and a comprehensive set of metrics including fluency, BARTScore, quality, coherence, and understandability. Additionally, we recruit fluent AAVE speakers to validate our translations for authenticity. Our evaluations reveal that LLMs consistently perform better on SAE tasks than AAVE-translated versions, underscoring inherent biases and highlighting the need for more inclusive NLP models. | [
"Gupta, Abhay",
"Yurtseven, Ece",
"Meng, Philip",
"Zhu, Kevin"
] | AAVENUE: Detecting LLM Biases on NLU Tasks in AAVE via a Novel Benchmark | nlp4pi-1.28 | Poster | 2408.14845 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
https://aclanthology.org/2024.nlp4pi-1.29.bib | https://aclanthology.org/2024.nlp4pi-1.29/ | @inproceedings{rawat-etal-2024-diversitymedqa,
title = "{D}iversity{M}ed{QA}: A Benchmark for Assessing Demographic Biases in Medical Diagnosis using Large Language Models",
author = "Rawat, Rajat and
McBride, Hudson and
Nirmal, Dhiyaan Chakkresh and
Ghosh, Rajarshi and
Moon, Jong and
Alamuri, Dhruv Karthik and
Zhu, Kevin",
editor = "Dementieva, Daryna and
Ignat, Oana and
Jin, Zhijing and
Mihalcea, Rada and
Piatti, Giorgio and
Tetreault, Joel and
Wilson, Steven and
Zhao, Jieyu",
booktitle = "Proceedings of the Third Workshop on NLP for Positive Impact",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.nlp4pi-1.29",
pages = "334--348",
abstract = "As large language models (LLMs) gain traction in healthcare, concerns about their susceptibility to demographic biases are growing. We introduce DiversityMedQA, a novel benchmark designed to assess LLM responses to medical queries across diverse patient demographics, such as gender and ethnicity. By perturbing questions from the MedQA dataset, which comprises of medical board exam questions, we created a benchmark that captures the nuanced differences in medical diagnosis across varying patient profiles. To ensure that our perturbations did not alter the clinical outcomes, we implemented a filtering strategy to validate each perturbation, so that any performance discrepancies would be indicative of bias. Our findings reveal notable discrepancies in model performance when tested against these demographic variations. By releasing DiversityMedQA, we provide a resource for evaluating and mitigating demographic bias in LLM medical diagnoses.",
}
| As large language models (LLMs) gain traction in healthcare, concerns about their susceptibility to demographic biases are growing. We introduce DiversityMedQA, a novel benchmark designed to assess LLM responses to medical queries across diverse patient demographics, such as gender and ethnicity. By perturbing questions from the MedQA dataset, which comprises medical board exam questions, we created a benchmark that captures the nuanced differences in medical diagnosis across varying patient profiles. To ensure that our perturbations did not alter the clinical outcomes, we implemented a filtering strategy to validate each perturbation, so that any performance discrepancies would be indicative of bias. Our findings reveal notable discrepancies in model performance when tested against these demographic variations. By releasing DiversityMedQA, we provide a resource for evaluating and mitigating demographic bias in LLM medical diagnoses. | [
"Rawat, Rajat",
"McBride, Hudson",
"Nirmal, Dhiyaan Chakkresh",
"Ghosh, Rajarshi",
"Moon, Jong",
"Alamuri, Dhruv Karthik",
"Zhu, Kevin"
] | DiversityMedQA: A Benchmark for Assessing Demographic Biases in Medical Diagnosis using Large Language Models | nlp4pi-1.29 | Poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 1 |
||
https://aclanthology.org/2024.nlp4pi-1.30.bib | https://aclanthology.org/2024.nlp4pi-1.30/ | @inproceedings{patil-etal-2024-improving,
title = "Improving Industrial Safety by Auto-Generating Case-specific Preventive Recommendations",
author = "Patil, Sangameshwar and
Koundanya, Sumit and
Kumbhar, Shubham and
Kumar, Alok",
editor = "Dementieva, Daryna and
Ignat, Oana and
Jin, Zhijing and
Mihalcea, Rada and
Piatti, Giorgio and
Tetreault, Joel and
Wilson, Steven and
Zhao, Jieyu",
booktitle = "Proceedings of the Third Workshop on NLP for Positive Impact",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.nlp4pi-1.30",
pages = "349--353",
abstract = "In this paper, we propose a novel application to improve industrial safety by generating preventive recommendations using LLMs. Using a dataset of 275 incidents representing 11 different incident types sampled from real-life OSHA incidents, we compare three different LLMs to evaluate the quality of preventive recommendations generated by them. We also show that LLMs are not a panacea for the preventive recommendation generation task. They have limitations and can produce responses that are incorrect or irrelevant. We found that about 65{\%} of the output from Vicuna model was not acceptable at all at the basic readability and other sanity checks level. Mistral and Phi{\_}3 are better than Vicuna, but not all of their recommendations are of similar quality. We find that for a given safety incident case, the generated recommendations can be categorized as specific, generic, or irrelevant. This helps us to better quantify and compare the performance of the models. This paper is among the initial and novel work for the preventive recommendation generation problem. We believe it will pave way for use of NLP to positively impact the industrial safety.",
}
| In this paper, we propose a novel application to improve industrial safety by generating preventive recommendations using LLMs. Using a dataset of 275 incidents representing 11 different incident types sampled from real-life OSHA incidents, we compare three different LLMs to evaluate the quality of preventive recommendations generated by them. We also show that LLMs are not a panacea for the preventive recommendation generation task. They have limitations and can produce responses that are incorrect or irrelevant. We found that about 65{\%} of the output from the Vicuna model was not acceptable even at the level of basic readability and other sanity checks. Mistral and Phi{\_}3 are better than Vicuna, but not all of their recommendations are of similar quality. We find that for a given safety incident case, the generated recommendations can be categorized as specific, generic, or irrelevant. This helps us to better quantify and compare the performance of the models. This paper presents some of the first work on the preventive recommendation generation problem. We believe it will pave the way for the use of NLP to positively impact industrial safety. | [
"Patil, Sangameshwar",
"Koundanya, Sumit",
"Kumbhar, Shubham",
"Kumar, Alok"
] | Improving Industrial Safety by Auto-Generating Case-specific Preventive Recommendations | nlp4pi-1.30 | Poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 1 |
||
https://aclanthology.org/2024.nlp4science-1.1.bib | https://aclanthology.org/2024.nlp4science-1.1/ | @inproceedings{horovicz-goldshmidt-2024-tokenshap,
title = "{T}oken{SHAP}: Interpreting Large Language Models with {M}onte {C}arlo Shapley Value Estimation",
author = "Horovicz, Miriam and
Goldshmidt, Roni",
editor = "Peled-Cohen, Lotem and
Calderon, Nitay and
Lissak, Shir and
Reichart, Roi",
booktitle = "Proceedings of the 1st Workshop on NLP for Science (NLP4Science)",
month = nov,
year = "2024",
address = "Miami, FL, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.nlp4science-1.1",
pages = "1--8",
abstract = "As large language models (LLMs) become increasingly prevalent in critical applications, the need for interpretable AI has grown. We introduce TokenSHAP, a novel method for interpreting LLMs by attributing importance to individual tokens or substrings within input prompts. This approach adapts Shapley values from cooperative game theory to natural language processing, offering a rigorous framework for understanding how different parts of an input contribute to a model{'}s response. TokenSHAP leverages Monte Carlo sampling for computational efficiency, providing interpretable, quantitative measures of token importance. We demonstrate its efficacy across diverse prompts and LLM architectures, showing consistent improvements over existing baselines in alignment with human judgments, faithfulness to model behavior, and consistency. Our method{'}s ability to capture nuanced interactions between tokens provides valuable insights into LLM behavior, enhancing model transparency, improving prompt engineering, and aiding in the development of more reliable AI systems. TokenSHAP represents a significant step towards the necessary interpretability for responsible AI deployment, contributing to the broader goal of creating more transparent, accountable, and trustworthy AI systems. Open Source code https://github.com/ronigold/TokenSHAP",
}
| As large language models (LLMs) become increasingly prevalent in critical applications, the need for interpretable AI has grown. We introduce TokenSHAP, a novel method for interpreting LLMs by attributing importance to individual tokens or substrings within input prompts. This approach adapts Shapley values from cooperative game theory to natural language processing, offering a rigorous framework for understanding how different parts of an input contribute to a model{'}s response. TokenSHAP leverages Monte Carlo sampling for computational efficiency, providing interpretable, quantitative measures of token importance. We demonstrate its efficacy across diverse prompts and LLM architectures, showing consistent improvements over existing baselines in alignment with human judgments, faithfulness to model behavior, and consistency. Our method{'}s ability to capture nuanced interactions between tokens provides valuable insights into LLM behavior, enhancing model transparency, improving prompt engineering, and aiding in the development of more reliable AI systems. TokenSHAP represents a significant step towards the necessary interpretability for responsible AI deployment, contributing to the broader goal of creating more transparent, accountable, and trustworthy AI systems. Open Source code https://github.com/ronigold/TokenSHAP | [
"Horovicz, Miriam",
"Goldshmidt, Roni"
] | TokenSHAP: Interpreting Large Language Models with Monte Carlo Shapley Value Estimation | nlp4science-1.1 | Poster | 2407.10114 | [
"https://github.com/ronigold/TokenSHAP"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
https://aclanthology.org/2024.nlp4science-1.2.bib | https://aclanthology.org/2024.nlp4science-1.2/ | @inproceedings{bao-liu-2024-prediction,
title = "Prediction of {CRISPR} On-Target Effects via Deep Learning",
author = "Bao, Condy and
Liu, Fuxiao",
editor = "Peled-Cohen, Lotem and
Calderon, Nitay and
Lissak, Shir and
Reichart, Roi",
booktitle = "Proceedings of the 1st Workshop on NLP for Science (NLP4Science)",
month = nov,
year = "2024",
address = "Miami, FL, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.nlp4science-1.2",
pages = "9--15",
abstract = "Since the advent of CRISPR-Cas9, a groundbreaking gene-editing technology that enables precise genomic modifications via a short RNA guide sequence, there has been a marked increase in the accessibility and application of this technology across various fields. The success of CRISPR-Cas9 has spurred further investment and led to the discovery of additional CRISPR systems, including CRISPR-Cas13. Distinct from Cas9, which targets DNA, Cas13 targets RNA, offering unique advantages for gene modulation. We focus on Cas13d, a variant known for its collateral activity where it non-specifically cleaves adjacent RNA molecules upon activation, a feature critical to its function. We introduce DeepFM-Crispr, a novel deep learning model developed to predict the on-target efficiency and evaluate the off-target effects of Cas13d. This model harnesses a large language model to generate comprehensive representations rich in evolutionary and structural data, thereby enhancing predictions of RNA secondary structures and overall sgRNA efficacy. A transformer-based architecture processes these inputs to produce a predictive efficacy score. Comparative experiments show that DeepFM-Crispr not only surpasses traditional models but also outperforms recent state-of-the-art deep learning methods in terms of prediction accuracy and reliability.",
}
| Since the advent of CRISPR-Cas9, a groundbreaking gene-editing technology that enables precise genomic modifications via a short RNA guide sequence, there has been a marked increase in the accessibility and application of this technology across various fields. The success of CRISPR-Cas9 has spurred further investment and led to the discovery of additional CRISPR systems, including CRISPR-Cas13. Distinct from Cas9, which targets DNA, Cas13 targets RNA, offering unique advantages for gene modulation. We focus on Cas13d, a variant known for its collateral activity where it non-specifically cleaves adjacent RNA molecules upon activation, a feature critical to its function. We introduce DeepFM-Crispr, a novel deep learning model developed to predict the on-target efficiency and evaluate the off-target effects of Cas13d. This model harnesses a large language model to generate comprehensive representations rich in evolutionary and structural data, thereby enhancing predictions of RNA secondary structures and overall sgRNA efficacy. A transformer-based architecture processes these inputs to produce a predictive efficacy score. Comparative experiments show that DeepFM-Crispr not only surpasses traditional models but also outperforms recent state-of-the-art deep learning methods in terms of prediction accuracy and reliability. | [
"Bao, Condy",
"Liu, Fuxiao"
] | Prediction of CRISPR On-Target Effects via Deep Learning | nlp4science-1.2 | Poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 1 |
||
https://aclanthology.org/2024.nlp4science-1.3.bib | https://aclanthology.org/2024.nlp4science-1.3/ | @inproceedings{mihaylov-shtedritski-2024-elegant,
title = "What an Elegant Bridge: Multilingual {LLM}s are Biased Similarly in Different Languages",
author = "Mihaylov, Viktor and
Shtedritski, Aleksandar",
editor = "Peled-Cohen, Lotem and
Calderon, Nitay and
Lissak, Shir and
Reichart, Roi",
booktitle = "Proceedings of the 1st Workshop on NLP for Science (NLP4Science)",
month = nov,
year = "2024",
address = "Miami, FL, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.nlp4science-1.3",
pages = "16--23",
abstract = "This paper investigates biases of Large Language Models (LLMs) through the lens of grammatical gender. Drawing inspiration from seminal works in psycholinguistics, particularly the study of gender{'}s influence on language perception, we leverage multilingual LLMs to revisit and expand upon the foundational experiments of Boroditsky (2003). Employing LLMs as a novel method for examining psycholinguistic biases related to grammatical gender, we prompt a model to describe nouns with adjectives in various languages, focusing specifically on languages with grammatical gender. In particular, we look at adjective co-occurrences across gender and languages, and train a binary classifier to predict grammatical gender given adjectives an LLM uses to describe a noun. Surprisingly, we find that a simple classifier can not only predict noun gender above chance but also exhibit cross-language transferability. We show that while LLMs may describe words differently in different languages, they are biased similarly.",
}
| This paper investigates biases of Large Language Models (LLMs) through the lens of grammatical gender. Drawing inspiration from seminal works in psycholinguistics, particularly the study of gender{'}s influence on language perception, we leverage multilingual LLMs to revisit and expand upon the foundational experiments of Boroditsky (2003). Employing LLMs as a novel method for examining psycholinguistic biases related to grammatical gender, we prompt a model to describe nouns with adjectives in various languages, focusing specifically on languages with grammatical gender. In particular, we look at adjective co-occurrences across gender and languages, and train a binary classifier to predict grammatical gender given adjectives an LLM uses to describe a noun. Surprisingly, we find that a simple classifier can not only predict noun gender above chance but also exhibit cross-language transferability. We show that while LLMs may describe words differently in different languages, they are biased similarly. | [
"Mihaylov, Viktor",
"Shtedritski, Aleks",
"ar"
] | What an Elegant Bridge: Multilingual LLMs are Biased Similarly in Different Languages | nlp4science-1.3 | Poster | 2407.09704 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
https://aclanthology.org/2024.nlp4science-1.4.bib | https://aclanthology.org/2024.nlp4science-1.4/ | @inproceedings{abbasi-etal-2024-psycholex,
title = "{P}sycho{L}ex: Unveiling the Psychological Mind of Large Language Models",
author = "Abbasi, Mohammad Amin and
Mirnezami, Farnaz Sadat and
Naderi, Hassan",
editor = "Peled-Cohen, Lotem and
Calderon, Nitay and
Lissak, Shir and
Reichart, Roi",
booktitle = "Proceedings of the 1st Workshop on NLP for Science (NLP4Science)",
month = nov,
year = "2024",
address = "Miami, FL, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.nlp4science-1.4",
pages = "24--35",
abstract = "This paper explores the intersection of psychology and artificial intelligence through the development and evaluation of specialized Large Language Models (LLMs). We introduce PsychoLex , a suite of resources designed to enhance LLMs{'} proficiency in psychological tasks in both Persian and English. Key contributions include the PsychoLexQA dataset for instructional content and the PsychoLexEval dataset for rigorous evaluation of LLMs in complex psychological scenarios. Additionally, we present the PsychoLexLLaMA model, optimized specifically for psychological applications, demonstrating superior performance compared to general-purpose models. The findings underscore the potential of tailored LLMs for advancing psychological research and applications, while also highlighting areas for further refinement. This research offers a foundational step towards integrating LLMs into specialized psychological domains, with implications for future advancements in AI-driven psychological practice.",
}
| This paper explores the intersection of psychology and artificial intelligence through the development and evaluation of specialized Large Language Models (LLMs). We introduce PsychoLex, a suite of resources designed to enhance LLMs{'} proficiency in psychological tasks in both Persian and English. Key contributions include the PsychoLexQA dataset for instructional content and the PsychoLexEval dataset for rigorous evaluation of LLMs in complex psychological scenarios. Additionally, we present the PsychoLexLLaMA model, optimized specifically for psychological applications, demonstrating superior performance compared to general-purpose models. The findings underscore the potential of tailored LLMs for advancing psychological research and applications, while also highlighting areas for further refinement. This research offers a foundational step towards integrating LLMs into specialized psychological domains, with implications for future advancements in AI-driven psychological practice. | [
"Abbasi, Mohammad Amin",
"Mirnezami, Farnaz Sadat",
"Naderi, Hassan"
] | PsychoLex: Unveiling the Psychological Mind of Large Language Models | nlp4science-1.4 | Poster | 2408.08848 | [
""
] | https://huggingface.co/papers/2408.08848 | 1 | 1 | 0 | 3 | [] | [] | [] | [] | [] | [] | 1 |
https://aclanthology.org/2024.nlp4science-1.5.bib | https://aclanthology.org/2024.nlp4science-1.5/ | @inproceedings{rezapour-etal-2024-two,
title = "Two-Stage Graph-Augmented Summarization of Scientific Documents",
author = "Rezapour, Rezvaneh and
Ge, Yubin and
Han, Kanyao and
Jeong, Ray and
Diesner, Jana",
editor = "Peled-Cohen, Lotem and
Calderon, Nitay and
Lissak, Shir and
Reichart, Roi",
booktitle = "Proceedings of the 1st Workshop on NLP for Science (NLP4Science)",
month = nov,
year = "2024",
address = "Miami, FL, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.nlp4science-1.5",
pages = "36--46",
abstract = "Automatic text summarization helps to digest the vast and ever-growing amount of scientific publications. While transformer-based solutions like BERT and SciBERT have advanced scientific summarization, lengthy documents pose a challenge due to the token limits of these models. To address this issue, we introduce and evaluate a two-stage model that combines an extract-then-compress framework. Our model incorporates a {``}graph-augmented extraction module{''} to select order-based salient sentences and an {``}abstractive compression module{''} to generate concise summaries. Additionally, we introduce the *BioConSumm* dataset, which focuses on biodiversity conservation, to support underrepresented domains and explore domain-specific summarization strategies. Out of the tested models, our model achieves the highest ROUGE-2 and ROUGE-L scores on our newly created dataset (*BioConSumm*) and on the *SUMPUBMED* dataset, which serves as a benchmark in the field of biomedicine.",
}
| Automatic text summarization helps to digest the vast and ever-growing amount of scientific publications. While transformer-based solutions like BERT and SciBERT have advanced scientific summarization, lengthy documents pose a challenge due to the token limits of these models. To address this issue, we introduce and evaluate a two-stage model that combines an extract-then-compress framework. Our model incorporates a {``}graph-augmented extraction module{''} to select order-based salient sentences and an {``}abstractive compression module{''} to generate concise summaries. Additionally, we introduce the *BioConSumm* dataset, which focuses on biodiversity conservation, to support underrepresented domains and explore domain-specific summarization strategies. Out of the tested models, our model achieves the highest ROUGE-2 and ROUGE-L scores on our newly created dataset (*BioConSumm*) and on the *SUMPUBMED* dataset, which serves as a benchmark in the field of biomedicine. | [
"Rezapour, Rezvaneh",
"Ge, Yubin",
"Han, Kanyao",
"Jeong, Ray",
"Diesner, Jana"
] | Two-Stage Graph-Augmented Summarization of Scientific Documents | nlp4science-1.5 | Poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 1 |
||
https://aclanthology.org/2024.nlp4science-1.6.bib | https://aclanthology.org/2024.nlp4science-1.6/ | @inproceedings{krishnan-ghebrehiwet-2024-gcd,
title = "{GCD}-{TM}: Graph-Driven Community Detection for Topic Modelling in Psychiatry Texts",
author = "Krishnan, Anusuya and
Ghebrehiwet, Isaias Mehari",
editor = "Peled-Cohen, Lotem and
Calderon, Nitay and
Lissak, Shir and
Reichart, Roi",
booktitle = "Proceedings of the 1st Workshop on NLP for Science (NLP4Science)",
month = nov,
year = "2024",
address = "Miami, FL, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.nlp4science-1.6",
pages = "47--57",
abstract = "Psychiatry texts provide critical insights into patient mental states and therapeutic interactions. These texts are essential for understanding psychiatric conditions, treatment dynamics, and patient responses. However, the complex and diverse nature of psychiatric communications poses significant challenges for traditional topic modeling methods. The intricate language, subtle psychological nuances, and varying lengths of text segments make it difficult to extract coherent and meaningful topics. Conventional approaches often struggle to capture the depth and overlap of themes present in these texts. In this study, we present a novel approach to topic modeling that addresses these limitations by reformulating the problem as a community detection task within a graph constructed from the text corpus. Our methodology includes lemmatization for data standardization, TF-IDF vectorization to create a term-document matrix, and cosine similarity computation to produce a similarity matrix. This matrix is then binarized to form a graph, on which community detection is performed using the Louvain method. The detected communities are subsequently analyzed with Latent Dirichlet Allocation (LDA) to extract topics. Our approach outperforms traditional topic modeling methods, offering more accurate and interpretable topic extraction with improved coherence and lower perplexity.",
}
| Psychiatry texts provide critical insights into patient mental states and therapeutic interactions. These texts are essential for understanding psychiatric conditions, treatment dynamics, and patient responses. However, the complex and diverse nature of psychiatric communications poses significant challenges for traditional topic modeling methods. The intricate language, subtle psychological nuances, and varying lengths of text segments make it difficult to extract coherent and meaningful topics. Conventional approaches often struggle to capture the depth and overlap of themes present in these texts. In this study, we present a novel approach to topic modeling that addresses these limitations by reformulating the problem as a community detection task within a graph constructed from the text corpus. Our methodology includes lemmatization for data standardization, TF-IDF vectorization to create a term-document matrix, and cosine similarity computation to produce a similarity matrix. This matrix is then binarized to form a graph, on which community detection is performed using the Louvain method. The detected communities are subsequently analyzed with Latent Dirichlet Allocation (LDA) to extract topics. Our approach outperforms traditional topic modeling methods, offering more accurate and interpretable topic extraction with improved coherence and lower perplexity. | [
"Krishnan, Anusuya",
"Ghebrehiwet, Isaias Mehari"
] | GCD-TM: Graph-Driven Community Detection for Topic Modelling in Psychiatry Texts | nlp4science-1.6 | Poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 1 |
||
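The GCD-TM abstract enumerates its pipeline explicitly, so it can be sketched almost line for line. In the sketch below, the binarization threshold is a free parameter the abstract does not fix, and the lemmatization step is omitted for brevity.

```python
# GCD-TM sketch: TF-IDF -> cosine similarity -> binarized graph ->
# Louvain communities -> per-community LDA topics.
import networkx as nx
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def gcd_tm(docs, threshold=0.3):
    sim = cosine_similarity(TfidfVectorizer().fit_transform(docs))
    graph = nx.from_numpy_array((sim >= threshold).astype(int))
    graph.remove_edges_from(nx.selfloop_edges(graph))

    topics = []
    for community in nx.community.louvain_communities(graph, seed=0):
        cv = CountVectorizer(stop_words="english")
        dtm = cv.fit_transform([docs[i] for i in community])
        lda = LatentDirichletAllocation(n_components=1, random_state=0).fit(dtm)
        vocab = cv.get_feature_names_out()
        topics.append([vocab[i] for i in lda.components_[0].argsort()[-10:][::-1]])
    return topics
```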
https://aclanthology.org/2024.nlp4science-1.7.bib | https://aclanthology.org/2024.nlp4science-1.7/ | @inproceedings{horawalavithana-etal-2024-scitune,
title = "{SCITUNE}: Aligning Large Language Models with Human-Curated Scientific Multimodal Instructions",
author = "Horawalavithana, Sameera and
Munikoti, Sai and
Stewart, Ian and
Kvinge, Henry and
Pazdernik, Karl",
editor = "Peled-Cohen, Lotem and
Calderon, Nitay and
Lissak, Shir and
Reichart, Roi",
booktitle = "Proceedings of the 1st Workshop on NLP for Science (NLP4Science)",
month = nov,
year = "2024",
address = "Miami, FL, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.nlp4science-1.7",
pages = "58--72",
abstract = "Instruction finetuning is a popular paradigm to align large language models (LLM) with human intent. Despite its popularity, this idea is less explored in improving LLMs to align existing foundation models with scientific disciplines, concepts and goals. In this work, we present SciTune as a tuning framework to improve the ability of LLMs to follow multimodal instructions generated from scientific publications. To test our methodology, we train a large multimodal model LLaMA-SciTune that connects a vision encoder and LLM for science-focused visual and language understanding. LLaMA-SciTune significantly outperforms the state-of-the-art models in the generated figure types and captions in SciCap and VisText benchmarks. In comparison to the models that are finetuned with synthetic data only, LLaMA-SciTune surpasses human performance on average and in many sub-categories on the ScienceQA benchmark. Our results demonstrate that human-generated scientific multimodal instructions remain highly valuable in tuning LLMs to perform well on science tasks, despite their lower volume and relative scarcity compared to synthetic data.",
}
| Instruction finetuning is a popular paradigm to align large language models (LLM) with human intent. Despite its popularity, this idea is less explored in improving LLMs to align existing foundation models with scientific disciplines, concepts and goals. In this work, we present SciTune as a tuning framework to improve the ability of LLMs to follow multimodal instructions generated from scientific publications. To test our methodology, we train a large multimodal model LLaMA-SciTune that connects a vision encoder and LLM for science-focused visual and language understanding. LLaMA-SciTune significantly outperforms the state-of-the-art models in the generated figure types and captions in SciCap and VisText benchmarks. In comparison to the models that are finetuned with synthetic data only, LLaMA-SciTune surpasses human performance on average and in many sub-categories on the ScienceQA benchmark. Our results demonstrate that human-generated scientific multimodal instructions remain highly valuable in tuning LLMs to perform well on science tasks, despite their lower volume and relative scarcity compared to synthetic data. | [
"Horawalavithana, Sameera",
"Munikoti, Sai",
"Stewart, Ian",
"Kvinge, Henry",
"Pazdernik, Karl"
] | SCITUNE: Aligning Large Language Models with Human-Curated Scientific Multimodal Instructions | nlp4science-1.7 | Poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 1 |
||
https://aclanthology.org/2024.nlp4science-1.8.bib | https://aclanthology.org/2024.nlp4science-1.8/ | @inproceedings{singh-etal-2024-racer,
title = "{RACER}: An {LLM}-powered Methodology for Scalable Analysis of Semi-structured Mental Health Interviews",
author = "Singh, Satpreet Harcharan and
Jiang, Kevin and
Bhasin, Kanchan and
Sabharwal, Ashutosh and
Moukaddam, Nidal and
Patel, Ankit",
editor = "Peled-Cohen, Lotem and
Calderon, Nitay and
Lissak, Shir and
Reichart, Roi",
booktitle = "Proceedings of the 1st Workshop on NLP for Science (NLP4Science)",
month = nov,
year = "2024",
address = "Miami, FL, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.nlp4science-1.8",
pages = "73--98",
abstract = "Semi-structured interviews (SSIs) are a commonly employed data-collection method in healthcare research, offering in-depth qualitative insights into subject experiences. Despite their value, manual analysis of SSIs is notoriously time-consuming and labor-intensive, in part due to the difficulty of extracting and categorizing emotional responses, and challenges in scaling human evaluation for large populations. In this study, we develop RACER, a Large Language Model (LLM) based expert-guided automated pipeline that efficiently converts raw interview transcripts into insightful domain-relevant themes and sub-themes. We used RACER to analyze SSIs conducted with 93 healthcare professionals and trainees to assess the broad personal and professional mental health impacts of the COVID-19 crisis. RACER achieves moderately high agreement with two human evaluators (72{\%}), which approaches the human inter-rater agreement (77{\%}). Interestingly, LLMs and humans struggle with similar content involving nuanced emotional, ambivalent/dialectical, and psychological statements. Our study highlights the opportunities and challenges in using LLMs to improve research efficiency and opens new avenues for scalable analysis of SSIs in healthcare research.",
}
| Semi-structured interviews (SSIs) are a commonly employed data-collection method in healthcare research, offering in-depth qualitative insights into subject experiences. Despite their value, manual analysis of SSIs is notoriously time-consuming and labor-intensive, in part due to the difficulty of extracting and categorizing emotional responses, and challenges in scaling human evaluation for large populations. In this study, we develop RACER, a Large Language Model (LLM)-based, expert-guided automated pipeline that efficiently converts raw interview transcripts into insightful domain-relevant themes and sub-themes. We used RACER to analyze SSIs conducted with 93 healthcare professionals and trainees to assess the broad personal and professional mental health impacts of the COVID-19 crisis. RACER achieves moderately high agreement with two human evaluators (72{\%}), which approaches the human inter-rater agreement (77{\%}). Interestingly, LLMs and humans struggle with similar content involving nuanced emotional, ambivalent/dialectical, and psychological statements. Our study highlights the opportunities and challenges in using LLMs to improve research efficiency and opens new avenues for scalable analysis of SSIs in healthcare research. | [
"Singh, Satpreet Harcharan",
"Jiang, Kevin",
"Bhasin, Kanchan",
"Sabharwal, Ashutosh",
"Moukaddam, Nidal",
"Patel, Ankit"
] | RACER: An LLM-powered Methodology for Scalable Analysis of Semi-structured Mental Health Interviews | nlp4science-1.8 | Poster | 2402.02656 | [
"https://github.com/satpreetsingh/racer"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
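RACER is described as an expert-guided LLM pipeline from raw transcripts to themes and sub-themes. The two-step sketch below shows that general pattern with the OpenAI Python client; the prompts, the model name, and the two-pass structure are illustrative assumptions, not the paper's exact pipeline.

```python
# Illustrative transcript-to-themes pipeline: summarize each answer, then
# roll the summaries up into themes and sub-themes.
from openai import OpenAI

client = OpenAI()          # assumes OPENAI_API_KEY is set in the environment
MODEL = "gpt-4o-mini"      # placeholder model name

def summarize_answer(question: str, answer: str) -> str:
    resp = client.chat.completions.create(
        model=MODEL,
        messages=[{"role": "user",
                   "content": f"Summarize this answer to '{question}' "
                              f"in 1-2 sentences:\n{answer}"}])
    return resp.choices[0].message.content

def extract_themes(summaries: list) -> str:
    bullets = "\n".join(f"- {s}" for s in summaries)
    resp = client.chat.completions.create(
        model=MODEL,
        messages=[{"role": "user",
                   "content": "Group these interview-answer summaries into "
                              "themes and sub-themes:\n" + bullets}])
    return resp.choices[0].message.content
```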
https://aclanthology.org/2024.nlp4science-1.9.bib | https://aclanthology.org/2024.nlp4science-1.9/ | @inproceedings{berijanian-etal-2024-soft,
title = "Soft Measures for Extracting Causal Collective Intelligence",
author = "Berijanian, Maryam and
Dork, Spencer and
Singh, Kuldeep and
Millikan, Michael Riley and
Riggs, Ashlin and
Swaminathan, Aadarsh and
Gibbs, Sarah L. and
Friedman, Scott E. and
Brugnone, Nathan",
editor = "Peled-Cohen, Lotem and
Calderon, Nitay and
Lissak, Shir and
Reichart, Roi",
booktitle = "Proceedings of the 1st Workshop on NLP for Science (NLP4Science)",
month = nov,
year = "2024",
address = "Miami, FL, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.nlp4science-1.9",
pages = "99--116",
abstract = "Understanding and modeling collective intelligence is essential for addressing complex social systems. Directed graphs called fuzzy cognitive maps (FCMs) offer a powerful tool for encoding causal mental models, but extracting high-integrity FCMs from text is challenging. This study presents an approach using large language models (LLMs) to automate FCM extraction. We introduce novel graph-based similarity measures and evaluate them by correlating their outputs with human judgments through the Elo rating system. Results show positive correlations with human evaluations, but even the best-performing measure exhibits limitations in capturing FCM nuances. Fine-tuning LLMs improves performance, but existing measures still fall short. This study highlights the need for soft similarity measures tailored to FCM extraction, advancing collective intelligence modeling with NLP.",
}
| Understanding and modeling collective intelligence is essential for addressing complex social systems. Directed graphs called fuzzy cognitive maps (FCMs) offer a powerful tool for encoding causal mental models, but extracting high-integrity FCMs from text is challenging. This study presents an approach using large language models (LLMs) to automate FCM extraction. We introduce novel graph-based similarity measures and evaluate them by correlating their outputs with human judgments through the Elo rating system. Results show positive correlations with human evaluations, but even the best-performing measure exhibits limitations in capturing FCM nuances. Fine-tuning LLMs improves performance, but existing measures still fall short. This study highlights the need for soft similarity measures tailored to FCM extraction, advancing collective intelligence modeling with NLP. | [
"Berijanian, Maryam",
"Dork, Spencer",
"Singh, Kuldeep",
"Millikan, Michael Riley",
"Riggs, Ashlin",
"Swaminathan, Aadarsh",
"Gibbs, Sarah L.",
"Friedman, Scott E.",
"Brugnone, Nathan"
] | Soft Measures for Extracting Causal Collective Intelligence | nlp4science-1.9 | Poster | 2409.18911 | [
"https://github.com/kuldeep7688/soft-measures-causal-intelligence"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
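The soft-measure evaluation above ranks similarity measures by correlating them with human judgments through the Elo rating system. For reference, the standard Elo update is a two-line formula; the K-factor of 32 is a conventional default, assumed here.

```python
# Standard Elo update for one pairwise comparison between measures A and B.
# score_a is 1.0 if A wins the human judgment, 0.0 if B wins, 0.5 for a tie.
def elo_update(rating_a, rating_b, score_a, k=32.0):
    expected_a = 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400.0))
    new_a = rating_a + k * (score_a - expected_a)
    new_b = rating_b + k * ((1.0 - score_a) - (1.0 - expected_a))
    return new_a, new_b

print(elo_update(1500.0, 1500.0, 1.0))  # -> (1516.0, 1484.0)
```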
https://aclanthology.org/2024.nlp4science-1.10.bib | https://aclanthology.org/2024.nlp4science-1.10/ | @inproceedings{zhou-etal-2024-hypothesis,
title = "Hypothesis Generation with Large Language Models",
author = "Zhou, Yangqiaoyu and
Liu, Haokun and
Srivastava, Tejes and
Mei, Hongyuan and
Tan, Chenhao",
editor = "Peled-Cohen, Lotem and
Calderon, Nitay and
Lissak, Shir and
Reichart, Roi",
booktitle = "Proceedings of the 1st Workshop on NLP for Science (NLP4Science)",
month = nov,
year = "2024",
address = "Miami, FL, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.nlp4science-1.10",
pages = "117--139",
abstract = "Effective generation of novel hypotheses is instrumental to scientific progress. So far, researchers have been the main powerhouse behind hypothesis generation by painstaking data analysis and thinking (also known as the Eureka moment). In this paper, we examine the potential of large language models (LLMs) to generate hypotheses. We focus on hypothesis generation based on data (i.e., labeled examples). To enable LLMs to handle Long contexts, we generate initial hypotheses from a small number of examples and then update them iteratively to improve the quality of hypotheses. Inspired by multi-armed bandits, we design a reward function to inform the exploitation-exploration tradeoff in the update process. Our algorithm is able to generate hypotheses that enable much better predictive performance than few-shot prompting in classification tasks, improving accuracy by 31.7{\%} on a synthetic dataset and by 13.9{\%}, 3.3{\%} and, 24.9{\%} on three real-world datasets. We also outperform supervised learning by 12.1{\%} and 11.6{\%} on two challenging real-world datasets. Furthermore, we find that the generated hypotheses not only corroborate human-verified theories but also uncover new insights for the tasks.",
}
| Effective generation of novel hypotheses is instrumental to scientific progress. So far, researchers have been the main powerhouse behind hypothesis generation by painstaking data analysis and thinking (also known as the Eureka moment). In this paper, we examine the potential of large language models (LLMs) to generate hypotheses. We focus on hypothesis generation based on data (i.e., labeled examples). To enable LLMs to handle long contexts, we generate initial hypotheses from a small number of examples and then update them iteratively to improve the quality of hypotheses. Inspired by multi-armed bandits, we design a reward function to inform the exploitation-exploration tradeoff in the update process. Our algorithm is able to generate hypotheses that enable much better predictive performance than few-shot prompting in classification tasks, improving accuracy by 31.7{\%} on a synthetic dataset and by 13.9{\%}, 3.3{\%}, and 24.9{\%} on three real-world datasets. We also outperform supervised learning by 12.1{\%} and 11.6{\%} on two challenging real-world datasets. Furthermore, we find that the generated hypotheses not only corroborate human-verified theories but also uncover new insights for the tasks. | [
"Zhou, Yangqiaoyu",
"Liu, Haokun",
"Srivastava, Tejes",
"Mei, Hongyuan",
"Tan, Chenhao"
] | Hypothesis Generation with Large Language Models | nlp4science-1.10 | Poster | 2404.04326 | [
"https://github.com/chicagohai/hypothesis-generation"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
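The hypothesis-generation abstract mentions a bandit-inspired reward for the update loop. A UCB-style score is one standard way to realize that exploitation-exploration tradeoff; the sketch below is an assumption about the form of the reward, not the paper's exact function.

```python
# UCB1-style score for choosing which hypothesis to exploit or explore next.
import math

def ucb_score(correct, trials, total_trials, c=1.0):
    if trials == 0:
        return float("inf")           # untried hypotheses get priority
    exploitation = correct / trials   # empirical predictive accuracy
    exploration = c * math.sqrt(math.log(total_trials) / trials)
    return exploitation + exploration
```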
https://aclanthology.org/2024.nlp4science-1.11.bib | https://aclanthology.org/2024.nlp4science-1.11/ | @inproceedings{berger-etal-2024-dreaming,
title = "Dreaming with {C}hat{GPT}: Unraveling the Challenges of {LLM}s Dream Generation",
author = "Berger, Harel and
King, Hadar and
David, Omer",
editor = "Peled-Cohen, Lotem and
Calderon, Nitay and
Lissak, Shir and
Reichart, Roi",
booktitle = "Proceedings of the 1st Workshop on NLP for Science (NLP4Science)",
month = nov,
year = "2024",
address = "Miami, FL, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.nlp4science-1.11",
pages = "140--147",
abstract = "Large Language Models (LLMs), such as ChatGPT, are used daily for different human-like text generation tasks. This motivates us to ask: \textit{Can an LLM generate human dreams?} For this research, we explore this new avenue through the lens of ChatGPT, and its ability to generate valid dreams. We have three main findings: (i) Chatgpt-4o, the new version of chatGPT, generated all requested dreams. (ii) Generated dreams meet key psychological criteria of dreams. We hope our work will set the stage for developing a new task of dream generation for LLMs. This task can help psychologists evaluate patients{'} dreams based on their demographic factors.",
}
| Large Language Models (LLMs), such as ChatGPT, are used daily for different human-like text generation tasks. This motivates us to ask: \textit{Can an LLM generate human dreams?} For this research, we explore this new avenue through the lens of ChatGPT and its ability to generate valid dreams. We have two main findings: (i) ChatGPT-4o, the new version of ChatGPT, generated all requested dreams. (ii) Generated dreams meet key psychological criteria of dreams. We hope our work will set the stage for developing a new task of dream generation for LLMs. This task can help psychologists evaluate patients{'} dreams based on their demographic factors. | [
"Berger, Harel",
"King, Hadar",
"David, Omer"
] | Dreaming with ChatGPT: Unraveling the Challenges of LLMs Dream Generation | nlp4science-1.11 | Poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 1 |
||
https://aclanthology.org/2024.nlp4science-1.12.bib | https://aclanthology.org/2024.nlp4science-1.12/ | @inproceedings{chaturvedi-2024-llms,
title = "{LLM}s and {NLP} for Generalized Learning in {AI}-Enhanced Educational Videos and Powering Curated Videos with Generative Intelligence",
author = "Chaturvedi, Naina",
editor = "Peled-Cohen, Lotem and
Calderon, Nitay and
Lissak, Shir and
Reichart, Roi",
booktitle = "Proceedings of the 1st Workshop on NLP for Science (NLP4Science)",
month = nov,
year = "2024",
address = "Miami, FL, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.nlp4science-1.12",
pages = "148--154",
abstract = "LLMs and NLP for Generalized Learning in AI-Enhanced Educational Videos and Powering Curated Videos with Generative IntelligenceAuthors - Naina Chaturvedi, Rutgers UniversityAnanda Gunawardena, Rutgers UniversityContact: cnaina1601@gmail.com or nc832@cs.rutgers.eduThe rapid advancement of Large Language Models (LLMs) and Natural Language Processing (NLP) technologies has opened new frontiers in educational content creation and consumption. This paper explores the intersection of these technologies with instructional videos in computer science education, addressing the crucial aspect of generalization in NLP models within an educational context.With 78{\%} of computer science students utilizing YouTube to supplement traditional learning materials, there{'}s a clear demand for high-quality video content. However, the challenge of finding appropriate resources has led 73{\%} of students to prefer curated video libraries. We propose a novel approach that leverages LLMs and NLP techniques to revolutionize this space, focusing on the ability of these models to generalize across diverse educational content and contexts.Our research utilizes the cubits.ai platform, developed at Princeton University, to demonstrate how generative AI, powered by advanced LLMs, can transform standard video playlists into interactive, AI-enhanced learning experiences. We present a framework for creating AI-generated video summaries, on-demand questions, and in-depth topic explorations, all while considering the challenges posed by LLMs trained on vast, often opaque datasets. Our approach not only enhances student engagement but also provides a unique opportunity to study how well these models generalize across different educational topics and student needs.Drawing insights from computer science courses at Princeton and Rutgers Universities, we highlight the transformative potential of AI-enhanced videos in promoting active learning, particularly in large classes. This research contributes to the ongoing dialogue about generalization in NLP while simultaneously demonstrating practical applications in educational technology. By bridging these domains, we aim to establish a shared platform for state-of-the-art generalization testing in NLP within an educational framework.Our findings not only demonstrate how educators can enhance existing video playlists using AI but also provide insights into the challenges and opportunities of using LLMs in educational settings. This work serves as a cornerstone for catalyzing research on generalization in the NLP community, particularly focusing on the application and evaluation of LLMs in adaptive, personalized learning environments.Keywords: Instructional videos; AI-enhanced learning; Large Language Models (LLMs); Natural Language Processing (NLP); generalization in NLP; computer science education; cubits.ai platform; AI-generated content; interactive video experiences; video summarization; on-demand questions; personalized learning; active learning; data-driven insights; generative AI; educational technology; adaptive learning environments",
}
| LLMs and NLP for Generalized Learning in AI-Enhanced Educational Videos and Powering Curated Videos with Generative Intelligence. Authors: Naina Chaturvedi, Rutgers University; Ananda Gunawardena, Rutgers University. Contact: cnaina1601@gmail.com or nc832@cs.rutgers.edu. The rapid advancement of Large Language Models (LLMs) and Natural Language Processing (NLP) technologies has opened new frontiers in educational content creation and consumption. This paper explores the intersection of these technologies with instructional videos in computer science education, addressing the crucial aspect of generalization in NLP models within an educational context. With 78{\%} of computer science students utilizing YouTube to supplement traditional learning materials, there{'}s a clear demand for high-quality video content. However, the challenge of finding appropriate resources has led 73{\%} of students to prefer curated video libraries. We propose a novel approach that leverages LLMs and NLP techniques to revolutionize this space, focusing on the ability of these models to generalize across diverse educational content and contexts. Our research utilizes the cubits.ai platform, developed at Princeton University, to demonstrate how generative AI, powered by advanced LLMs, can transform standard video playlists into interactive, AI-enhanced learning experiences. We present a framework for creating AI-generated video summaries, on-demand questions, and in-depth topic explorations, all while considering the challenges posed by LLMs trained on vast, often opaque datasets. Our approach not only enhances student engagement but also provides a unique opportunity to study how well these models generalize across different educational topics and student needs. Drawing insights from computer science courses at Princeton and Rutgers Universities, we highlight the transformative potential of AI-enhanced videos in promoting active learning, particularly in large classes. This research contributes to the ongoing dialogue about generalization in NLP while simultaneously demonstrating practical applications in educational technology. By bridging these domains, we aim to establish a shared platform for state-of-the-art generalization testing in NLP within an educational framework. Our findings not only demonstrate how educators can enhance existing video playlists using AI but also provide insights into the challenges and opportunities of using LLMs in educational settings. This work serves as a cornerstone for catalyzing research on generalization in the NLP community, particularly focusing on the application and evaluation of LLMs in adaptive, personalized learning environments. Keywords: Instructional videos; AI-enhanced learning; Large Language Models (LLMs); Natural Language Processing (NLP); generalization in NLP; computer science education; cubits.ai platform; AI-generated content; interactive video experiences; video summarization; on-demand questions; personalized learning; active learning; data-driven insights; generative AI; educational technology; adaptive learning environments | [
"Chaturvedi, Naina"
] | LLMs and NLP for Generalized Learning in AI-Enhanced Educational Videos and Powering Curated Videos with Generative Intelligence | nlp4science-1.12 | Poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 1 |
||
https://aclanthology.org/2024.nlp4science-1.13.bib | https://aclanthology.org/2024.nlp4science-1.13/ | @inproceedings{cao-etal-2024-moral,
title = "The Moral Foundations {W}eibo Corpus",
author = "Cao, Renjie and
Hu, Miaoyan and
Wei, Jiahan and
Ihnaini, Baha",
editor = "Peled-Cohen, Lotem and
Calderon, Nitay and
Lissak, Shir and
Reichart, Roi",
booktitle = "Proceedings of the 1st Workshop on NLP for Science (NLP4Science)",
month = nov,
year = "2024",
address = "Miami, FL, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.nlp4science-1.13",
pages = "155--165",
abstract = "Moral sentiments expressed in natural language significantly influence both online and offline environments, shaping behavioral styles and interaction patterns, including social media self-presentation, cyberbullying, adherence to social norms, and ethical decision-making. To effectively measure moral sentiments in natural language processing texts, it is crucial to utilize large, annotated datasets that provide nuanced understanding for accurate analysis and model training. However, existing corpora, while valuable, often face linguistic limitations. To address this gap in the Chinese language domain, we introduce the Moral Foundation Weibo Corpus. This corpus consists of 25,671 Chinese comments on Weibo, encompassing six diverse topic areas. Each comment is manually annotated by at least three systematically trained annotators based on ten moral categories derived from a grounded theory of morality. To assess annotator reliability, we present the kappa test results, a gold standard for measuring consistency. Additionally, we apply several the latest large language models to supplement the manual annotations, conducting analytical experiments to compare their performance and report baseline results for moral sentiment classification.",
}
| Moral sentiments expressed in natural language significantly influence both online and offline environments, shaping behavioral styles and interaction patterns, including social media self-presentation, cyberbullying, adherence to social norms, and ethical decision-making. To effectively measure moral sentiments in natural language processing texts, it is crucial to utilize large, annotated datasets that provide nuanced understanding for accurate analysis and model training. However, existing corpora, while valuable, often face linguistic limitations. To address this gap in the Chinese language domain, we introduce the Moral Foundation Weibo Corpus. This corpus consists of 25,671 Chinese comments on Weibo, encompassing six diverse topic areas. Each comment is manually annotated by at least three systematically trained annotators based on ten moral categories derived from a grounded theory of morality. To assess annotator reliability, we present the kappa test results, a gold standard for measuring consistency. Additionally, we apply several of the latest large language models to supplement the manual annotations, conducting analytical experiments to compare their performance and report baseline results for moral sentiment classification. | [
"Cao, Renjie",
"Hu, Miaoyan",
"Wei, Jiahan",
"Ihnaini, Baha"
] | The Moral Foundations Weibo Corpus | nlp4science-1.13 | Poster | 2411.09612 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
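Annotator reliability above is reported with the kappa test. For a pair of annotators, Cohen's kappa is a one-liner in scikit-learn; the labels below are invented for illustration, and with three or more annotators a multi-rater statistic such as Fleiss' kappa is the usual choice.

```python
# Cohen's kappa: chance-corrected agreement between two annotators.
from sklearn.metrics import cohen_kappa_score

annotator_1 = ["care", "fairness", "care", "loyalty", "care"]
annotator_2 = ["care", "fairness", "loyalty", "loyalty", "care"]
print(cohen_kappa_score(annotator_1, annotator_2))  # ~0.69 for these toy labels
```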
https://aclanthology.org/2024.nlp4science-1.14.bib | https://aclanthology.org/2024.nlp4science-1.14/ | @inproceedings{kenigsbuch-shapira-2024-serious,
title = "Why So Serious: Humor and its Association with Treatment Measurements Process and Outcome",
author = "Kenigsbuch, Matan and
Shapira, Natalie",
editor = "Peled-Cohen, Lotem and
Calderon, Nitay and
Lissak, Shir and
Reichart, Roi",
booktitle = "Proceedings of the 1st Workshop on NLP for Science (NLP4Science)",
month = nov,
year = "2024",
address = "Miami, FL, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.nlp4science-1.14",
pages = "166--174",
abstract = "Humor is an important social construct with various roles in human communication, yet clinicians remain divided on its appropriateness and effectiveness. Despite its importance, empirical research on humor in psychotherapy is limited. This study explores the theoretical concept of {``}humor{''} by examining the operational variable of {``}laughs{''} within psychotherapy. Method: We analyzed transcriptions from 872 psychotherapy sessions involving 68 clients treated by 59 therapists. Clients self-reported their symptoms and state of well-being before each session, while both clients and therapists provided self-reports on their therapeutic alliance after each session. Through text analysis, we extracted the number of laughs and words for each session. We investigated the within-client associations between laughs and symptoms, well-being, therapeutic alliance, and clients{'} number of words. Results: We found concurrent session-level associations between laughs and well-being, symptoms, and the number of words. However, no significant associations were observed between laughs and the therapeutic alliance, either from the perspective of the therapist or the client.",
}
| Humor is an important social construct with various roles in human communication, yet clinicians remain divided on its appropriateness and effectiveness. Despite its importance, empirical research on humor in psychotherapy is limited. This study explores the theoretical concept of {``}humor{''} by examining the operational variable of {``}laughs{''} within psychotherapy. Method: We analyzed transcriptions from 872 psychotherapy sessions involving 68 clients treated by 59 therapists. Clients self-reported their symptoms and state of well-being before each session, while both clients and therapists provided self-reports on their therapeutic alliance after each session. Through text analysis, we extracted the number of laughs and words for each session. We investigated the within-client associations between laughs and symptoms, well-being, therapeutic alliance, and clients{'} number of words. Results: We found concurrent session-level associations between laughs and well-being, symptoms, and the number of words. However, no significant associations were observed between laughs and the therapeutic alliance, either from the perspective of the therapist or the client. | [
"Kenigsbuch, Matan",
"Shapira, Natalie"
] | Why So Serious: Humor and its Association with Treatment Measurements Process and Outcome | nlp4science-1.14 | Poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 1 |
||
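The laughs variable in the study above is extracted from transcripts by text analysis. The toy sketch below assumes transcriber markers like "(laughs)" and uses a plain Pearson correlation; the paper's within-client, session-level analysis is more involved.

```python
# Count laugh markers per session and correlate with a per-session measure.
import re
from scipy.stats import pearsonr

def count_laughs(transcript):
    # assumes conventions like "(laughs)" or "(laughing)" in the transcript
    return len(re.findall(r"\(laugh\w*\)", transcript, flags=re.IGNORECASE))

sessions = ["T: How was your week? (laughs)",
            "C: Hard.",
            "C: Better! (laughs) T: Good to hear. (laughs)"]
wellbeing = [4.0, 2.5, 4.5]  # toy self-report scores
r, p = pearsonr([count_laughs(s) for s in sessions], wellbeing)
print(f"r={r:.2f}, p={p:.2f}")
```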
https://aclanthology.org/2024.nlp4science-1.15.bib | https://aclanthology.org/2024.nlp4science-1.15/ | @inproceedings{yousefi-collins-2024-learning,
title = "Learning the Bitter Lesson: Empirical Evidence from 20 Years of {CVPR} Proceedings",
author = "Yousefi, Mojtaba and
Collins, Jack",
editor = "Peled-Cohen, Lotem and
Calderon, Nitay and
Lissak, Shir and
Reichart, Roi",
booktitle = "Proceedings of the 1st Workshop on NLP for Science (NLP4Science)",
month = nov,
year = "2024",
address = "Miami, FL, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.nlp4science-1.15",
pages = "175--187",
abstract = "This study examines the alignment of Conference on Computer Vision and Pattern Recognition (CVPR) research with the principles of the {``}bitter lesson{''} proposed by Rich Sutton. We analyze two decades of CVPR abstracts and titles using large language models (LLMs) to assess the field{'}s embracement of these principles. Our methodology leverages state-of-the-art natural language processing techniques to systematically evaluate the evolution of research approaches in computer vision. The results reveal significant trends in the adoption of general-purpose learning algorithms and the utilization of increased computational resources. We discuss the implications of these findings for the future direction of computer vision research and its potential impact on broader artificial intelligence development. This work contributes to the ongoing dialogue about the most effective strategies for advancing machine learning and computer vision, offering insights that may guide future research priorities and methodologies in the field.",
}
| This study examines the alignment of Conference on Computer Vision and Pattern Recognition (CVPR) research with the principles of the {``}bitter lesson{''} proposed by Rich Sutton. We analyze two decades of CVPR abstracts and titles using large language models (LLMs) to assess the field{'}s embrace of these principles. Our methodology leverages state-of-the-art natural language processing techniques to systematically evaluate the evolution of research approaches in computer vision. The results reveal significant trends in the adoption of general-purpose learning algorithms and the utilization of increased computational resources. We discuss the implications of these findings for the future direction of computer vision research and its potential impact on broader artificial intelligence development. This work contributes to the ongoing dialogue about the most effective strategies for advancing machine learning and computer vision, offering insights that may guide future research priorities and methodologies in the field. | [
"Yousefi, Mojtaba",
"Collins, Jack"
] | Learning the Bitter Lesson: Empirical Evidence from 20 Years of CVPR Proceedings | nlp4science-1.15 | Poster | 2410.09649 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
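The CVPR study above boils down to LLM-assisted scoring of abstracts over time. A hedged sketch of that loop follows; the rubric wording, the 0-10 scale, and the model name are all assumptions for illustration, not the paper's exact protocol.

```python
# Score abstracts for "bitter lesson" alignment and average the scores by year.
from collections import defaultdict
from openai import OpenAI

client = OpenAI()

def score_abstract(abstract):
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user",
                   "content": "On a 0-10 scale, how strongly does this abstract "
                              "rely on general-purpose learning and computation "
                              "rather than hand-engineered knowledge? Reply with "
                              "a number only.\n\n" + abstract}])
    return int(resp.choices[0].message.content.strip())

def yearly_means(papers):  # papers: iterable of (year, abstract) pairs
    buckets = defaultdict(list)
    for year, abstract in papers:
        buckets[year].append(score_abstract(abstract))
    return {year: sum(s) / len(s) for year, s in sorted(buckets.items())}
```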
https://aclanthology.org/2024.nlp4science-1.16.bib | https://aclanthology.org/2024.nlp4science-1.16/ | @inproceedings{kumar-etal-2024-personalized,
title = "Personalized-{ABA}: Personalized Treatment Plan Generation for Applied Behavior Analysis using Natural Language Processing",
author = "Kumar, Aman and
Au, Mareiko and
Semlawat, Raj and
Sridhar, Malavica and
Gurnani, Hitesh",
editor = "Peled-Cohen, Lotem and
Calderon, Nitay and
Lissak, Shir and
Reichart, Roi",
booktitle = "Proceedings of the 1st Workshop on NLP for Science (NLP4Science)",
month = nov,
year = "2024",
address = "Miami, FL, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.nlp4science-1.16",
pages = "188--196",
abstract = "Autism Spectrum Disorder (ASD) is a neurological and developmental disability that affects how an individual learns, communicates, interacts with others. Applied Behavior Analysis (ABA) is a gold standard therapy for children and adults suffering from ASD to improve their learning, social, and communication skills. Today, 1 in 36 children are diagnosed with ASD with expectations that this rate will only continue to rise. The supply of certified ABA providers is alarmingly insufficient to meet the needs of children with ASD. In fact, waitlists to receive ABA therapy in the United States exceed 10 months in most states. Clinicians or Board Certified Behavior Analysts (BCBAs) are now experiencing intense bottlenecks around diagnostic evaluations and developing treatment plans quickly enough to support timely access to care. Over the past few years, Artificial Intelligence has changed the way industries operate by offering powerful ways to process, analyze, generate, and predict data. In this paper, we have addressed the problem of both time and supply restrictions faced by ABA providers by proposing a novel method for personalized treatment plan generation and program prediction by leveraging the capabilities of Deep Learning and Large Language Models (LLM). Additionally, we have introduced two separate models for behavior program prediction (F1-Score: 0.671) and skill acquisition program predictions (Rouge-1 Score: 0.476) which will help ABA providers in treatment plan implementation. Results are promising: an AI-generated treatment plan demonstrates a high similarity (Average Similarity Score: 0.915) to the original treatment plan written by a BCBA. Finally, as we partnered with a multi-state ABA provider in building this product, we ran a single-blind study that concluded that BCBAs prefer an AI-generated treatment plan 65 percent of the time compared to a BCBA-generated one.",
}
| Autism Spectrum Disorder (ASD) is a neurological and developmental disability that affects how an individual learns, communicates, and interacts with others. Applied Behavior Analysis (ABA) is a gold standard therapy for children and adults suffering from ASD to improve their learning, social, and communication skills. Today, 1 in 36 children are diagnosed with ASD with expectations that this rate will only continue to rise. The supply of certified ABA providers is alarmingly insufficient to meet the needs of children with ASD. In fact, waitlists to receive ABA therapy in the United States exceed 10 months in most states. Clinicians or Board Certified Behavior Analysts (BCBAs) are now experiencing intense bottlenecks around diagnostic evaluations and developing treatment plans quickly enough to support timely access to care. Over the past few years, Artificial Intelligence has changed the way industries operate by offering powerful ways to process, analyze, generate, and predict data. In this paper, we have addressed the problem of both time and supply restrictions faced by ABA providers by proposing a novel method for personalized treatment plan generation and program prediction by leveraging the capabilities of Deep Learning and Large Language Models (LLMs). Additionally, we have introduced two separate models for behavior program prediction (F1-Score: 0.671) and skill acquisition program prediction (ROUGE-1 Score: 0.476), which will help ABA providers in treatment plan implementation. Results are promising: an AI-generated treatment plan demonstrates a high similarity (Average Similarity Score: 0.915) to the original treatment plan written by a BCBA. Finally, as we partnered with a multi-state ABA provider in building this product, we ran a single-blind study that concluded that BCBAs prefer an AI-generated treatment plan 65 percent of the time compared to a BCBA-generated one. | [
"Kumar, Aman",
"Au, Mareiko",
"Semlawat, Raj",
"Sridhar, Malavica",
"Gurnani, Hitesh"
] | Personalized-ABA: Personalized Treatment Plan Generation for Applied Behavior Analysis using Natural Language Processing | nlp4science-1.16 | Poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 1 |
||
https://aclanthology.org/2024.nlp4science-1.17.bib | https://aclanthology.org/2024.nlp4science-1.17/ | @inproceedings{chai-etal-2024-exploring,
title = "Exploring Scientific Hypothesis Generation with Mamba",
author = "Chai, Miaosen and
Herron, Emily and
Cervantes, Erick and
Ghosal, Tirthankar",
editor = "Peled-Cohen, Lotem and
Calderon, Nitay and
Lissak, Shir and
Reichart, Roi",
booktitle = "Proceedings of the 1st Workshop on NLP for Science (NLP4Science)",
month = nov,
year = "2024",
address = "Miami, FL, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.nlp4science-1.17",
pages = "197--207",
abstract = "Generating scientifically grounded hypotheses is a challenging frontier task for generative AI models in science. The difficulty arises from the inherent subjectivity of the task and the extensive knowledge of prior work required to assess the validity of a generated hypothesis. Large Language Models (LLMs), trained on vast datasets from diverse sources, have shown a strong ability to utilize the knowledge embedded in their training data. Recent research has explored using transformer-based models for scientific hypothesis generation, leveraging their advanced capabilities. However, these models often require a significant number of parameters to manage Long sequences, which can be a limitation. State Space Models, such as Mamba, offer an alternative by effectively handling very Long sequences with fewer parameters than transformers. In this work, we investigate the use of Mamba for scientific hypothesis generation. Our preliminary findings indicate that Mamba achieves similar performance w.r.t. transformer-based models of similar sizes for a higher-order complex task like hypothesis generation. We have made our code available here: https://github.com/fglx-c/Exploring-Scientific-Hypothesis-Generation-with-Mamba",
}
| Generating scientifically grounded hypotheses is a challenging frontier task for generative AI models in science. The difficulty arises from the inherent subjectivity of the task and the extensive knowledge of prior work required to assess the validity of a generated hypothesis. Large Language Models (LLMs), trained on vast datasets from diverse sources, have shown a strong ability to utilize the knowledge embedded in their training data. Recent research has explored using transformer-based models for scientific hypothesis generation, leveraging their advanced capabilities. However, these models often require a significant number of parameters to manage long sequences, which can be a limitation. State Space Models, such as Mamba, offer an alternative by effectively handling very long sequences with fewer parameters than transformers. In this work, we investigate the use of Mamba for scientific hypothesis generation. Our preliminary findings indicate that Mamba achieves similar performance w.r.t. transformer-based models of similar sizes for a higher-order complex task like hypothesis generation. We have made our code available here: https://github.com/fglx-c/Exploring-Scientific-Hypothesis-Generation-with-Mamba | [
"Chai, Miaosen",
"Herron, Emily",
"Cervantes, Erick",
"Ghosal, Tirthankar"
] | Exploring Scientific Hypothesis Generation with Mamba | nlp4science-1.17 | Poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 1 |
||
https://aclanthology.org/2024.nlp4science-1.18.bib | https://aclanthology.org/2024.nlp4science-1.18/ | @inproceedings{lama-etal-2024-benchmarking,
title = "Benchmarking Automated Theorem Proving with Large Language Models",
author = "Lama, Vanessa and
Ma, Catherine and
Ghosal, Tirthankar",
editor = "Peled-Cohen, Lotem and
Calderon, Nitay and
Lissak, Shir and
Reichart, Roi",
booktitle = "Proceedings of the 1st Workshop on NLP for Science (NLP4Science)",
month = nov,
year = "2024",
address = "Miami, FL, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.nlp4science-1.18",
pages = "208--218",
abstract = "Theorem proving presents a significant challenge for large language models (LLMs) due to the requirement for formal proofs to be rigorously checked by proof assistants, such as Lean, eliminating any margin for error or hallucination. While existing LLM-based theorem provers attempt to operate autonomously, they often struggle with novel and complex theorems where human insights are essential. Lean Copilot is a novel framework that integrates LLM inference into the Lean proof assistant environment. In this work, we benchmark performance of several LLMs including general and math-specific models for theorem proving using the Lean Copilot framework. Our initial investigation suggests that a general-purpose large model like LLaMa-70B still has edge over math-specific smaller models for the task under consideration. We provide useful insights into the performance of different LLMs we chose for the task.",
}
| Theorem proving presents a significant challenge for large language models (LLMs) due to the requirement for formal proofs to be rigorously checked by proof assistants, such as Lean, eliminating any margin for error or hallucination. While existing LLM-based theorem provers attempt to operate autonomously, they often struggle with novel and complex theorems where human insights are essential. Lean Copilot is a novel framework that integrates LLM inference into the Lean proof assistant environment. In this work, we benchmark the performance of several LLMs, including general and math-specific models, for theorem proving using the Lean Copilot framework. Our initial investigation suggests that a general-purpose large model like LLaMa-70B still has an edge over math-specific smaller models for the task under consideration. We provide useful insights into the performance of the different LLMs we chose for the task. | [
"Lama, Vanessa",
"Ma, Catherine",
"Ghosal, Tirthankar"
] | Benchmarking Automated Theorem Proving with Large Language Models | nlp4science-1.18 | Poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 1 |
||
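Because the benchmark above runs inside the Lean proof assistant, a minimal Lean 4 example illustrates the kind of goal a prover must close; Lean checks every step, so an invalid or hallucinated proof is simply rejected rather than silently accepted.

```lean
-- A small statement proved twice: by naming a library lemma (term mode),
-- and by a tactic step of the kind an LLM-based prover might suggest.
theorem addComm' (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b

theorem addComm'' (a b : Nat) : a + b = b + a := by
  rw [Nat.add_comm]
```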
https://aclanthology.org/2024.nlp4science-1.19.bib | https://aclanthology.org/2024.nlp4science-1.19/ | @inproceedings{a-beal-cohen-etal-2024-grid,
title = "The Grid: A semi-automated tool to support expert-driven modeling",
author = "A. Beal Cohen, Allegra and
Alexeeva, Maria and
Alcock, Keith and
Surdeanu, Mihai",
editor = "Peled-Cohen, Lotem and
Calderon, Nitay and
Lissak, Shir and
Reichart, Roi",
booktitle = "Proceedings of the 1st Workshop on NLP for Science (NLP4Science)",
month = nov,
year = "2024",
address = "Miami, FL, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.nlp4science-1.19",
pages = "219--229",
abstract = "When building models of human behavior, we often struggle to find data that capture important factors at the right level of granularity. In these cases, we must rely on expert knowledge to build models. To help partially automate the organization of expert knowledge for modeling, we combine natural language processing (NLP) and machine learning (ML) methods in a tool called the Grid. The Grid helps users organize textual knowledge into clickable cells aLong two dimensions using iterative, collaborative clustering. We conduct a user study to explore participants{'} reactions to the Grid, as well as to investigate whether its clustering feature helps participants organize a corpus of expert knowledge. We find that participants using the Grid{'}s clustering feature appeared to work more efficiently than those without it, but written feedback about the clustering was critical. We conclude that the general design of the Grid was positively received and that some of the user challenges can likely be mitigated through the use of LLMs.",
}
| When building models of human behavior, we often struggle to find data that capture important factors at the right level of granularity. In these cases, we must rely on expert knowledge to build models. To help partially automate the organization of expert knowledge for modeling, we combine natural language processing (NLP) and machine learning (ML) methods in a tool called the Grid. The Grid helps users organize textual knowledge into clickable cells along two dimensions using iterative, collaborative clustering. We conduct a user study to explore participants{'} reactions to the Grid, as well as to investigate whether its clustering feature helps participants organize a corpus of expert knowledge. We find that participants using the Grid{'}s clustering feature appeared to work more efficiently than those without it, but written feedback about the clustering was critical. We conclude that the general design of the Grid was positively received and that some of the user challenges can likely be mitigated through the use of LLMs. | [
"A. Beal Cohen, Allegra",
"Alexeeva, Maria",
"Alcock, Keith",
"Surdeanu, Mihai"
] | The Grid: A semi-automated tool to support expert-driven modeling | nlp4science-1.19 | Poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 1 |
||
https://aclanthology.org/2024.nlp4science-1.20.bib | https://aclanthology.org/2024.nlp4science-1.20/ | @inproceedings{zong-lin-2024-categorical,
title = "Categorical Syllogisms Revisited: A Review of the Logical Reasoning Abilities of {LLM}s for Analyzing Categorical Syllogisms",
author = "Zong, Shi and
Lin, Jimmy",
editor = "Peled-Cohen, Lotem and
Calderon, Nitay and
Lissak, Shir and
Reichart, Roi",
booktitle = "Proceedings of the 1st Workshop on NLP for Science (NLP4Science)",
month = nov,
year = "2024",
address = "Miami, FL, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.nlp4science-1.20",
pages = "230--239",
abstract = "There has been a huge number of benchmarks proposed to evaluate how large language models (LLMs) behave for logic inference tasks. However, it remains an open question how to properly evaluate this ability. In this paper, we provide a systematic overview of prior works on the logical reasoning ability of LLMs for analyzing categorical syllogisms. We first investigate all the possible variations for categorical syllogisms from a purely logical perspective and then examine the underlying configurations (i.e., mood and figure) tested by existing datasets. Our results indicate that compared to template-based synthetic datasets, crowdsourcing approaches normally sacrifice the coverage of configurations (i.e., mood and figure) of categorical syllogisms for more language variations, thus bringing challenges to fully testing LLMs under different situations. We then summarize the findings and observations for the performance of LLMs to infer the validity of syllogisms from the current literature. The error rate breakdown analyses suggest that the interpretation of quantifiers seems to be the current bottleneck that limits the performance of the LLMs and is thus worth more attention. Finally, we discuss several points that might be worth considering when researchers plan to release categorical syllogism datasets. We hope our work will provide a timely review of the current literature regarding categorical syllogisms, and motivate more interdisciplinary research between communities, specifically computational linguists and logicians.",
}
| A large number of benchmarks have been proposed to evaluate how large language models (LLMs) behave on logic inference tasks. However, it remains an open question how to properly evaluate this ability. In this paper, we provide a systematic overview of prior works on the logical reasoning ability of LLMs for analyzing categorical syllogisms. We first investigate all the possible variations for categorical syllogisms from a purely logical perspective and then examine the underlying configurations (i.e., mood and figure) tested by existing datasets. Our results indicate that compared to template-based synthetic datasets, crowdsourcing approaches normally sacrifice the coverage of configurations (i.e., mood and figure) of categorical syllogisms for more language variations, thus bringing challenges to fully testing LLMs under different situations. We then summarize the findings and observations for the performance of LLMs to infer the validity of syllogisms from the current literature. The error rate breakdown analyses suggest that the interpretation of quantifiers seems to be the current bottleneck that limits the performance of the LLMs and is thus worth more attention. Finally, we discuss several points that might be worth considering when researchers plan to release categorical syllogism datasets. We hope our work will provide a timely review of the current literature regarding categorical syllogisms, and motivate more interdisciplinary research between communities, specifically computational linguists and logicians. | [
"Zong, Shi",
"Lin, Jimmy"
] | Categorical Syllogisms Revisited: A Review of the Logical Reasoning Abilities of LLMs for Analyzing Categorical Syllogisms | nlp4science-1.20 | Poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 1 |
||
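A concrete anchor for the coverage analysis above: with four proposition forms (A/E/I/O) for each of two premises and a conclusion, there are 64 moods, and each mood occurs in 4 figures, giving 256 configurations in total, which is easy to enumerate when checking a dataset's coverage.

```python
# Enumerate all 256 categorical syllogism configurations (mood x figure).
from itertools import product

FORMS = "AEIO"          # universal/particular, affirmative/negative
FIGURES = (1, 2, 3, 4)  # placement of the middle term in the premises

configs = [(p1 + p2 + c, fig)
           for p1, p2, c in product(FORMS, repeat=3)
           for fig in FIGURES]
print(len(configs))     # 256
print(configs[0])       # ('AAA', 1) -- the classic "Barbara" in figure 1
```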
https://aclanthology.org/2024.nlp4science-1.21.bib | https://aclanthology.org/2024.nlp4science-1.21/ | @inproceedings{tikhonov-etal-2024-individuation,
title = "Individuation in Neural Models with and without Visual Grounding",
author = "Tikhonov, Alexey and
Bylinina, Lisa and
Yamshchikov, Ivan P.",
editor = "Peled-Cohen, Lotem and
Calderon, Nitay and
Lissak, Shir and
Reichart, Roi",
booktitle = "Proceedings of the 1st Workshop on NLP for Science (NLP4Science)",
month = nov,
year = "2024",
address = "Miami, FL, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.nlp4science-1.21",
pages = "240--248",
abstract = "We show differences between a language-and-vision model CLIP and two text-only models {---} FastText and SBERT {---} when it comes to the encoding of individuation information. We study latent representations that CLIP provides for substrates, granular aggregates, and various numbers of objects. We demonstrate that CLIP embeddings capture quantitative differences in individuation better than models trained on text-only data. Moreover, the individuation hierarchy we deduce from the CLIP embeddings agrees with the hierarchies proposed in linguistics and cognitive science.",
}
| We show differences between a language-and-vision model CLIP and two text-only models {---} FastText and SBERT {---} when it comes to the encoding of individuation information. We study latent representations that CLIP provides for substrates, granular aggregates, and various numbers of objects. We demonstrate that CLIP embeddings capture quantitative differences in individuation better than models trained on text-only data. Moreover, the individuation hierarchy we deduce from the CLIP embeddings agrees with the hierarchies proposed in linguistics and cognitive science. | [
"Tikhonov, Alexey",
"Bylinina, Lisa",
"Yamshchikov, Ivan P."
] | Individuation in Neural Models with and without Visual Grounding | nlp4science-1.21 | Poster | 2409.18868 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
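A minimal sketch of the comparison the individuation study describes: embed the same noun phrases with CLIP's text encoder and with SBERT, then inspect the similarity structure each space induces. The checkpoint names below are common public defaults, assumed here rather than taken from the paper.

```python
# Compare CLIP text embeddings with SBERT embeddings for the same phrases.
import torch
from sentence_transformers import SentenceTransformer, util
from transformers import CLIPModel, CLIPProcessor

phrases = ["a puddle of water", "a pile of sand", "three apples"]

clip = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
proc = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")
with torch.no_grad():
    inputs = proc(text=phrases, return_tensors="pt", padding=True)
    clip_emb = clip.get_text_features(**inputs)

sbert = SentenceTransformer("all-MiniLM-L6-v2")
sbert_emb = sbert.encode(phrases, convert_to_tensor=True)

print(util.cos_sim(clip_emb, clip_emb))    # similarity structure in CLIP space
print(util.cos_sim(sbert_emb, sbert_emb))  # similarity structure in SBERT space
```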
https://aclanthology.org/2024.nlp4science-1.22.bib | https://aclanthology.org/2024.nlp4science-1.22/ | @inproceedings{wasi-islam-2024-cogergllm,
title = "{C}og{E}rg{LLM}: Exploring Large Language Model Systems Design Perspective Using Cognitive Ergonomics",
author = "Wasi, Azmine Toushik and
Islam, Mst Rafia",
editor = "Peled-Cohen, Lotem and
Calderon, Nitay and
Lissak, Shir and
Reichart, Roi",
booktitle = "Proceedings of the 1st Workshop on NLP for Science (NLP4Science)",
month = nov,
year = "2024",
address = "Miami, FL, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.nlp4science-1.22",
pages = "249--258",
abstract = "Integrating cognitive ergonomics with LLMs is crucial for improving safety, reliability, and user satisfaction in human-AI interactions. Current LLM designs often lack this integration, resulting in systems that may not fully align with human cognitive capabilities and limitations. This oversight exacerbates biases in LLM outputs and leads to suboptimal user experiences due to inconsistent application of user-centered design principles. Researchers are increasingly leveraging NLP, particularly LLMs, to model and understand human behavior across social sciences, psychology, psychiatry, health, and neuroscience. Our position paper explores the need to integrate cognitive ergonomics into LLM design, providing a comprehensive framework and practical guidelines for ethical development. By addressing these challenges, we aim to advance safer, more reliable, and ethically sound human-AI interactions.",
}
| Integrating cognitive ergonomics with LLMs is crucial for improving safety, reliability, and user satisfaction in human-AI interactions. Current LLM designs often lack this integration, resulting in systems that may not fully align with human cognitive capabilities and limitations. This oversight exacerbates biases in LLM outputs and leads to suboptimal user experiences due to inconsistent application of user-centered design principles. Researchers are increasingly leveraging NLP, particularly LLMs, to model and understand human behavior across social sciences, psychology, psychiatry, health, and neuroscience. Our position paper explores the need to integrate cognitive ergonomics into LLM design, providing a comprehensive framework and practical guidelines for ethical development. By addressing these challenges, we aim to advance safer, more reliable, and ethically sound human-AI interactions. | [
"Wasi, Azmine Toushik",
"Islam, Mst Rafia"
] | CogErgLLM: Exploring Large Language Model Systems Design Perspective Using Cognitive Ergonomics | nlp4science-1.22 | Poster | 2407.02885 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
https://aclanthology.org/2024.sicon-1.1.bib | https://aclanthology.org/2024.sicon-1.1/ | @inproceedings{kim-guerzhoy-2024-observing,
title = "Observing the {S}outhern {US} Culture of Honor Using Large-Scale Social Media Analysis",
author = "Kim, Juho and
Guerzhoy, Michael",
editor = "Hale, James and
Chawla, Kushal and
Garg, Muskan",
booktitle = "Proceedings of the Second Workshop on Social Influence in Conversations (SICon 2024)",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.sicon-1.1",
pages = "1--8",
abstract = "A culture of honor refers to a social system where individuals{'} status, reputation, and esteem play a central role in governing interpersonal relations. Past works have associated this concept with the United States (US) South and related with it various traits such as higher sensitivity to insult, a higher value on reputation, and a tendency to react violently to insults. In this paper, we hypothesize and confirm that internet users from the US South, where a culture of honor is more prevalent, are more likely to display a trait predicted by their belonging to a culture of honor. Specifically, we test the hypothesis that US Southerners are more likely to retaliate to personal attacks by personally attacking back. We leverage OpenAI{'}s GPT-3.5 API to both geolocate internet users and to automatically detect whether users are insulting each other. We validate the use of GPT-3.5 by measuring its performance on manually-labeled subsets of the data. Our work demonstrates the potential of formulating a hypothesis based on a conceptual framework, operationalizing it in a way that is amenable to large-scale LLM-aided analysis, manually validating the use of the LLM, and drawing a conclusion.",
}
| A culture of honor refers to a social system where individuals{'} status, reputation, and esteem play a central role in governing interpersonal relations. Past works have associated this concept with the United States (US) South and linked it to various traits such as higher sensitivity to insult, a higher value on reputation, and a tendency to react violently to insults. In this paper, we hypothesize and confirm that internet users from the US South, where a culture of honor is more prevalent, are more likely to display a trait predicted by their belonging to a culture of honor. Specifically, we test the hypothesis that US Southerners are more likely to retaliate against personal attacks by personally attacking back. We leverage OpenAI{'}s GPT-3.5 API to both geolocate internet users and to automatically detect whether users are insulting each other. We validate the use of GPT-3.5 by measuring its performance on manually-labeled subsets of the data. Our work demonstrates the potential of formulating a hypothesis based on a conceptual framework, operationalizing it in a way that is amenable to large-scale LLM-aided analysis, manually validating the use of the LLM, and drawing a conclusion. | [
"Kim, Juho",
"Guerzhoy, Michael"
] | Observing the Southern US Culture of Honor Using Large-Scale Social Media Analysis | sicon-1.1 | Poster | 2410.13887 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
https://aclanthology.org/2024.sicon-1.2.bib | https://aclanthology.org/2024.sicon-1.2/ | @inproceedings{yin-etal-2024-respect,
title = "Should We Respect {LLM}s? A Cross-Lingual Study on the Influence of Prompt Politeness on {LLM} Performance",
author = "Yin, Ziqi and
Wang, Hao and
Horio, Kaito and
Kawahara, Daisuke and
Sekine, Satoshi",
editor = "Hale, James and
Chawla, Kushal and
Garg, Muskan",
booktitle = "Proceedings of the Second Workshop on Social Influence in Conversations (SICon 2024)",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.sicon-1.2",
pages = "9--35",
abstract = "We investigate the impact of politeness levels in prompts on the performance of large language models (LLMs). Polite language in human communications often garners more compliance and effectiveness, while rudeness can cause aversion, impacting response quality. We consider that LLMs mirror human communication traits, suggesting they align with human cultural norms. We assess the impact of politeness in prompts on LLMs across English, Chinese, and Japanese tasks. We observed that impolite prompts often result in poor performance, but overly polite language does not guarantee better outcomes. The best politeness level is different according to the language. This phenomenon suggests that LLMs not only reflect human behavior but are also influenced by language, particularly in different cultural contexts. Our findings highlight the need to factor in politeness for cross-cultural natural language processing and LLM usage.",
}
| We investigate the impact of politeness levels in prompts on the performance of large language models (LLMs). Polite language in human communications often garners more compliance and effectiveness, while rudeness can cause aversion, impacting response quality. We consider that LLMs mirror human communication traits, suggesting they align with human cultural norms. We assess the impact of politeness in prompts on LLMs across English, Chinese, and Japanese tasks. We observed that impolite prompts often result in poor performance, but overly polite language does not guarantee better outcomes. The best politeness level is different according to the language. This phenomenon suggests that LLMs not only reflect human behavior but are also influenced by language, particularly in different cultural contexts. Our findings highlight the need to factor in politeness for cross-cultural natural language processing and LLM usage. | [
"Yin, Ziqi",
"Wang, Hao",
"Horio, Kaito",
"Kawahara, Daisuike",
"Sekine, Satoshi"
] | Should We Respect LLMs? A Cross-Lingual Study on the Influence of Prompt Politeness on LLM Performance | sicon-1.2 | Poster | 2402.14531 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
https://aclanthology.org/2024.sicon-1.3.bib | https://aclanthology.org/2024.sicon-1.3/ | @inproceedings{fisher-ram-2024-personality,
title = "Personality Differences Drive Conversational Dynamics: A High-Dimensional {NLP} Approach",
author = "Fisher, Julia R. and
Ram, Nilam",
editor = "Hale, James and
Chawla, Kushal and
Garg, Muskan",
booktitle = "Proceedings of the Second Workshop on Social Influence in Conversations (SICon 2024)",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.sicon-1.3",
pages = "36--45",
abstract = "This paper investigates how the topical flow of dyadic conversations emerges over time and how differences in interlocutors{'} personality traits contribute to this topical flow. Leveraging text embeddings, we map the trajectories of conversations between strangers into a high-dimensional space. Using nonlinear projections and clustering, we then identify when each interlocutor enters and exits various topics. Differences in conversational flow are quantified via , a summary measure of the {``}spread{''} of topics covered during a conversation, and , a time-varying measure of the cosine similarity between interlocutors{'} embeddings. Our findings suggest that interlocutors with a larger difference in the personality dimension of openness influence each other to spend more time discussing a wider range of topics and that interlocutors with a larger difference in extraversion experience a larger decrease in linguistic alignment throughout their conversation. We also examine how participants{'} affect (emotion) changes from before to after a conversation, finding that a larger difference in extraversion predicts a larger difference in affect change and that a greater topic entropy predicts a larger affect increase. This work demonstrates how communication research can be advanced through the use of high-dimensional NLP methods and identifies personality difference as an important driver of social influence.",
}
| This paper investigates how the topical flow of dyadic conversations emerges over time and how differences in interlocutors{'} personality traits contribute to this topical flow. Leveraging text embeddings, we map the trajectories of conversations between strangers into a high-dimensional space. Using nonlinear projections and clustering, we then identify when each interlocutor enters and exits various topics. Differences in conversational flow are quantified via topic entropy, a summary measure of the {``}spread{''} of topics covered during a conversation, and linguistic alignment, a time-varying measure of the cosine similarity between interlocutors{'} embeddings. Our findings suggest that interlocutors with a larger difference in the personality dimension of openness influence each other to spend more time discussing a wider range of topics and that interlocutors with a larger difference in extraversion experience a larger decrease in linguistic alignment throughout their conversation. We also examine how participants{'} affect (emotion) changes from before to after a conversation, finding that a larger difference in extraversion predicts a larger difference in affect change and that a greater topic entropy predicts a larger affect increase. This work demonstrates how communication research can be advanced through the use of high-dimensional NLP methods and identifies personality difference as an important driver of social influence. | [
"Fisher, Julia R.",
"Ram, Nilam"
] | Personality Differences Drive Conversational Dynamics: A High-Dimensional NLP Approach | sicon-1.3 | Poster | 2410.11043 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
https://aclanthology.org/2024.sicon-1.4.bib | https://aclanthology.org/2024.sicon-1.4/ | @inproceedings{kodama-etal-2024-recommind,
title = "{R}ecom{M}ind: Movie Recommendation Dialogue with Seeker{'}s Internal State",
author = "Kodama, Takashi and
Kiyomaru, Hirokazu and
Huang, Yin Jou and
Kurohashi, Sadao",
editor = "Hale, James and
Chawla, Kushal and
Garg, Muskan",
booktitle = "Proceedings of the Second Workshop on Social Influence in Conversations (SICon 2024)",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.sicon-1.4",
pages = "46--63",
abstract = "Humans pay careful attention to the interlocutor{'}s internal state in dialogues. For example, in recommendation dialogues, we make recommendations while estimating the seeker{'}s internal state, such as his/her level of knowledge and interest. Since there are no existing annotated resources for the analysis and experiment, we constructed RecomMind, a movie recommendation dialogue dataset with annotations of the seeker{'}s internal state at the entity level. Each entity has a first-person label annotated by the seeker and a second-person label annotated by the recommender. Our analysis based on RecomMind reveals that the success of recommendations is enhanced when recommenders mention entities that seekers do not know but are interested in. We also propose a response generation framework that explicitly considers the seeker{'}s internal state, utilizing the chain-of-thought prompting. The human evaluation results show that our proposed method outperforms the baseline method in both consistency and the success of recommendations.",
}
| Humans pay careful attention to the interlocutor{'}s internal state in dialogues. For example, in recommendation dialogues, we make recommendations while estimating the seeker{'}s internal state, such as his/her level of knowledge and interest. Since there are no existing annotated resources for the analysis and experiment, we constructed RecomMind, a movie recommendation dialogue dataset with annotations of the seeker{'}s internal state at the entity level. Each entity has a first-person label annotated by the seeker and a second-person label annotated by the recommender. Our analysis based on RecomMind reveals that the success of recommendations is enhanced when recommenders mention entities that seekers do not know but are interested in. We also propose a response generation framework that explicitly considers the seeker{'}s internal state, utilizing the chain-of-thought prompting. The human evaluation results show that our proposed method outperforms the baseline method in both consistency and the success of recommendations. | [
"Kodama, Takashi",
"Kiyomaru, Hirokazu",
"Huang, Yin Jou",
"Kurohashi, Sadao"
] | RecomMind: Movie Recommendation Dialogue with Seeker's Internal State | sicon-1.4 | Poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 1 |
||
https://aclanthology.org/2024.sicon-1.5.bib | https://aclanthology.org/2024.sicon-1.5/ | @inproceedings{lee-etal-2024-redefining,
title = "Redefining Proactivity for Information Seeking Dialogue",
author = "Lee, Jing Yang and
Kim, Seokhwan and
Mehta, Kartik and
Kao, Jiun-Yu and
Lin, Yu-Hsiang and
Gupta, Arpit",
editor = "Hale, James and
Chawla, Kushal and
Garg, Muskan",
booktitle = "Proceedings of the Second Workshop on Social Influence in Conversations (SICon 2024)",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.sicon-1.5",
pages = "64--84",
abstract = "Humans pay careful attention to the interlocutor{'}s internal state in dialogues. For example, in recommendation dialogues, we make recommendations while estimating the seeker{'}s internal state, such as his/her level of knowledge and interest. Since there are no existing annotated resources for the analysis and experiment, we constructed RecomMind, a movie recommendation dialogue dataset with annotations of the seeker{'}s internal state at the entity level. Each entity has a first-person label annotated by the seeker and a second-person label annotated by the recommender. Our analysis based on RecomMind reveals that the success of recommendations is enhanced when recommenders mention entities that seekers do not know but are interested in. We also propose a response generation framework that explicitly considers the seeker{'}s internal state, utilizing the chain-of-thought prompting. The human evaluation results show that our proposed method outperforms the baseline method in both consistency and the success of recommendations.",
}
| Humans pay careful attention to the interlocutor{'}s internal state in dialogues. For example, in recommendation dialogues, we make recommendations while estimating the seeker{'}s internal state, such as his/her level of knowledge and interest. Since there are no existing annotated resources for the analysis and experiment, we constructed RecomMind, a movie recommendation dialogue dataset with annotations of the seeker{'}s internal state at the entity level. Each entity has a first-person label annotated by the seeker and a second-person label annotated by the recommender. Our analysis based on RecomMind reveals that the success of recommendations is enhanced when recommenders mention entities that seekers do not know but are interested in. We also propose a response generation framework that explicitly considers the seeker{'}s internal state, utilizing the chain-of-thought prompting. The human evaluation results show that our proposed method outperforms the baseline method in both consistency and the success of recommendations. | [
"Lee, Jing Yang",
"Kim, Seokhwan",
"Mehta, Kartik",
"Kao, Jiun-Yu",
"Lin, Yu-Hsiang",
"Gupta, Arpit"
] | Redefining Proactivity for Information Seeking Dialogue | sicon-1.5 | Poster | 2410.15297 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
https://aclanthology.org/2024.sicon-1.6.bib | https://aclanthology.org/2024.sicon-1.6/ | @inproceedings{zeng-2024-leveraging,
title = "Leveraging Large Language Models for Code-Mixed Data Augmentation in Sentiment Analysis",
author = "Zeng, Linda",
editor = "Hale, James and
Chawla, Kushal and
Garg, Muskan",
booktitle = "Proceedings of the Second Workshop on Social Influence in Conversations (SICon 2024)",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.sicon-1.6",
pages = "85--101",
abstract = "Code-mixing (CM), where speakers blend languages within a single expression, is prevalent in multilingual societies but poses challenges for natural language processing due to its complexity and limited data. We propose using a large language model to generate synthetic CM data, which is then used to enhance the performance of task-specific models for CM sentiment analysis. Our results show that in Spanish-English, synthetic data improved the F1 score by 9.32{\%}, outperforming previous augmentation techniques. However, in Malayalam-English, synthetic data only helped when the baseline was low; with strong natural data, additional synthetic data offered little benefit. Human evaluation confirmed that this approach is a simple, cost-effective way to generate natural-sounding CM sentences, particularly beneficial for low baselines. Our findings suggest that few-shot prompting of large language models is a promising method for CM data augmentation and has significant impact on improving sentiment analysis, an important element in the development of social influence systems.",
}
| Code-mixing (CM), where speakers blend languages within a single expression, is prevalent in multilingual societies but poses challenges for natural language processing due to its complexity and limited data. We propose using a large language model to generate synthetic CM data, which is then used to enhance the performance of task-specific models for CM sentiment analysis. Our results show that in Spanish-English, synthetic data improved the F1 score by 9.32{\%}, outperforming previous augmentation techniques. However, in Malayalam-English, synthetic data only helped when the baseline was low; with strong natural data, additional synthetic data offered little benefit. Human evaluation confirmed that this approach is a simple, cost-effective way to generate natural-sounding CM sentences, particularly beneficial for low baselines. Our findings suggest that few-shot prompting of large language models is a promising method for CM data augmentation and has a significant impact on improving sentiment analysis, an important element in the development of social influence systems. | [
"Zeng, Linda"
] | Leveraging Large Language Models for Code-Mixed Data Augmentation in Sentiment Analysis | sicon-1.6 | Poster | 2411.00691 | [
"https://github.com/lindazeng979/llm-cmsa"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
https://aclanthology.org/2024.sicon-1.7.bib | https://aclanthology.org/2024.sicon-1.7/ | @inproceedings{martinez-etal-2024-balancing,
title = "Balancing Transparency and Accuracy: A Comparative Analysis of Rule-Based and Deep Learning Models in Political Bias Classification",
author = "Martinez, Manuel Nunez and
Schmer-Galunder, Sonja and
Liu, Zoey and
Youm, Sangpil and
Jayaweera, Chathuri and
Dorr, Bonnie J.",
editor = "Hale, James and
Chawla, Kushal and
Garg, Muskan",
booktitle = "Proceedings of the Second Workshop on Social Influence in Conversations (SICon 2024)",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.sicon-1.7",
pages = "102--115",
abstract = "The unchecked spread of digital information, combined with increasing political polarization and the tendency of individuals to isolate themselves from opposing political viewpoints opposing views, has driven researchers to develop systems for automatically detecting political bias in media. This trend has been further fueled by discussions on social media. We explore methods for categorizing bias in US news articles, comparing rule-based and deep learning approaches. The study highlights the sensitivity of modern self-learning systems to unconstrained data ingestion, while reconsidering the strengths of traditional rule-based systems. Applying both models to left-leaning (CNN) and right-leaning (FOX) News articles, we assess their effectiveness on data beyond the original training and test sets. This analysis highlights each model{'}s accuracy, offers a framework for exploring deep-learning explainability, and sheds light on political bias in US news media. We contrast the opaque architecture of a deep learning model with the transparency of a linguistically informed rule-based model, showing that the rule-based model performs consistently across different data conditions and offers greater transparency, whereas the deep learning model is dependent on the training set and struggles with unseen data.",
}
| The unchecked spread of digital information, combined with increasing political polarization and the tendency of individuals to isolate themselves from opposing political viewpoints, has driven researchers to develop systems for automatically detecting political bias in media. This trend has been further fueled by discussions on social media. We explore methods for categorizing bias in US news articles, comparing rule-based and deep learning approaches. The study highlights the sensitivity of modern self-learning systems to unconstrained data ingestion, while reconsidering the strengths of traditional rule-based systems. Applying both models to left-leaning (CNN) and right-leaning (FOX) News articles, we assess their effectiveness on data beyond the original training and test sets. This analysis highlights each model{'}s accuracy, offers a framework for exploring deep-learning explainability, and sheds light on political bias in US news media. We contrast the opaque architecture of a deep learning model with the transparency of a linguistically informed rule-based model, showing that the rule-based model performs consistently across different data conditions and offers greater transparency, whereas the deep learning model is dependent on the training set and struggles with unseen data. | [
"Martinez, Manuel Nunez",
"Schmer-Galunder, Sonja",
"Liu, Zoey",
"Youm, Sangpil",
"Jayaweera, Chathuri",
"Dorr, Bonnie J."
] | Balancing Transparency and Accuracy: A Comparative Analysis of Rule-Based and Deep Learning Models in Political Bias Classification | sicon-1.7 | Poster | 2411.04328 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
https://aclanthology.org/2024.sicon-1.8.bib | https://aclanthology.org/2024.sicon-1.8/ | @inproceedings{siskou-espinoza-2024-different,
title = "{''}So, are you a different person today?{''} Analyzing Bias in Questions during Parole Hearings",
author = "Siskou, Wassiliki and
Espinoza, Ingrid",
editor = "Hale, James and
Chawla, Kushal and
Garg, Muskan",
booktitle = "Proceedings of the Second Workshop on Social Influence in Conversations (SICon 2024)",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.sicon-1.8",
pages = "116--128",
abstract = "During Parole Suitability Hearings commissioners need to evaluate whether an inmate{'}s risk of reoffending has decreased sufficiently to justify their release from prison before completing their full sentence. The conversation between the commissioners and the inmate is the key element of such hearings and is largely driven by question-and-answer patterns which can be influenced by the commissioner{'}s questioning behavior. To our knowledge, no previous study has investigated the relationship between the types of questions asked during parole hearings and potentially biased outcomes. We address this gap by analysing commissioner{'}s questioning behavior during Californian parole hearings. We test ChatGPT-4o{'}s capability of annotating questions automatically and achieve a high F1-score of 0.91 without prior training. By analysing all questions posed directly by commissioners to inmates, we tested for potential biases in question types across multiple demographic variables. The results show minimal bias in questioning behavior toward inmates asking for parole.",
}
| During Parole Suitability Hearings, commissioners need to evaluate whether an inmate{'}s risk of reoffending has decreased sufficiently to justify their release from prison before completing their full sentence. The conversation between the commissioners and the inmate is the key element of such hearings and is largely driven by question-and-answer patterns, which can be influenced by the commissioner{'}s questioning behavior. To our knowledge, no previous study has investigated the relationship between the types of questions asked during parole hearings and potentially biased outcomes. We address this gap by analysing commissioners{'} questioning behavior during Californian parole hearings. We test ChatGPT-4o{'}s capability of annotating questions automatically and achieve a high F1-score of 0.91 without prior training. By analysing all questions posed directly by commissioners to inmates, we tested for potential biases in question types across multiple demographic variables. The results show minimal bias in questioning behavior toward inmates asking for parole. | [
"Siskou, Wassiliki",
"Espinoza, Ingrid"
] | “So, are you a different person today?” Analyzing Bias in Questions during Parole Hearings | sicon-1.8 | Poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 1 |
||
https://aclanthology.org/2024.sicon-1.9.bib | https://aclanthology.org/2024.sicon-1.9/ | @inproceedings{perera-etal-2024-principles,
title = "Principles for {AI}-Assisted Social Influence and Their Application to Social Mediation",
author = "Perera, Ian and
Memory, Alex and
Kazakova, Vera A. and
Dorr, Bonnie J. and
Mather, Brodie and
Bose, Ritwik and
Mahyari, Arash and
Lofdahl, Corey and
Blackburn, Mack S. and
Bhatia, Archna and
Patterson, Brandon and
Pirolli, Peter",
editor = "Hale, James and
Chawla, Kushal and
Garg, Muskan",
booktitle = "Proceedings of the Second Workshop on Social Influence in Conversations (SICon 2024)",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.sicon-1.9",
pages = "129--140",
abstract = "Successful social influence, whether at individual or community levels, requires expertise and care in several dimensions of communication: understanding of emotions, beliefs, and values; transparency; and context-aware behavior shaping. Based on our experience in identifying mediation needs in social media and engaging with moderators and users, we developed a set of principles that we believe social influence systems should adhere to to ensure ethical operation, effectiveness, widespread adoption, and trust by users on both sides of the engagement of influence. We demonstrate these principles in D-ESC: Dialogue Assistant for Engaging in Social-Cybermediation, in the context of AI-assisted social media mediation, a newer paradigm of automatic moderation that responds to unique and changing communities while engendering and maintaining trust in users, moderators, and platform-holders. Through this case study, we identify opportunities for our principles to guide future systems towards greater opportunities for positive social change.",
}
| Successful social influence, whether at individual or community levels, requires expertise and care in several dimensions of communication: understanding of emotions, beliefs, and values; transparency; and context-aware behavior shaping. Based on our experience in identifying mediation needs in social media and engaging with moderators and users, we developed a set of principles that we believe social influence systems should adhere to in order to ensure ethical operation, effectiveness, widespread adoption, and trust by users on both sides of the engagement of influence. We demonstrate these principles in D-ESC: Dialogue Assistant for Engaging in Social-Cybermediation, in the context of AI-assisted social media mediation, a newer paradigm of automatic moderation that responds to unique and changing communities while engendering and maintaining trust in users, moderators, and platform-holders. Through this case study, we identify opportunities for our principles to guide future systems towards greater opportunities for positive social change. | [
"Perera, Ian",
"Memory, Alex",
"Kazakova, Vera A.",
"Dorr, Bonnie J.",
"Mather, Brodie",
"Bose, Ritwik",
"Mahyari, Arash",
"Lofdahl, Corey",
"Blackburn, Mack S.",
"Bhatia, Archna",
"Patterson, Br",
"on",
"Pirolli, Peter"
] | Principles for AI-Assisted Social Influence and Their Application to Social Mediation | sicon-1.9 | Poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 1 |
||
https://aclanthology.org/2024.sicon-1.10.bib | https://aclanthology.org/2024.sicon-1.10/ | @inproceedings{wu-etal-2024-ehdchat,
title = "{EHDC}hat: A Knowledge-Grounded, Empathy-Enhanced Language Model for Healthcare Interactions",
author = "Wu, Shenghan and
Hsu, Wynne and
Lee, Mong Li",
editor = "Hale, James and
Chawla, Kushal and
Garg, Muskan",
booktitle = "Proceedings of the Second Workshop on Social Influence in Conversations (SICon 2024)",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.sicon-1.10",
pages = "141--151",
abstract = "Large Language Models (LLMs) excel at a range of tasks but often struggle with issues like hallucination and inadequate empathy support. To address hallucinations, we ground our dialogues in medical knowledge sourced from external repositories such as Disease Ontology and DrugBank. To improve empathy support, we develop the Empathetic Healthcare Dialogues dataset, which utilizes multiple dialogue strategies in each response. This dataset is then used to fine-tune an LLM, and we introduce a lightweight, adaptable method called Strategy Combination Guidance to enhance the emotional support capabilities of the fine-tuned model, named EHDChat. Our evaluations show that EHDChat significantly outperforms existing models in providing emotional support and medical accuracy, demonstrating the effectiveness of our approach in enhancing empathetic and informed AI interactions in healthcare.",
}
| Large Language Models (LLMs) excel at a range of tasks but often struggle with issues like hallucination and inadequate empathy support. To address hallucinations, we ground our dialogues in medical knowledge sourced from external repositories such as Disease Ontology and DrugBank. To improve empathy support, we develop the Empathetic Healthcare Dialogues dataset, which utilizes multiple dialogue strategies in each response. This dataset is then used to fine-tune an LLM, and we introduce a lightweight, adaptable method called Strategy Combination Guidance to enhance the emotional support capabilities of the fine-tuned model, named EHDChat. Our evaluations show that EHDChat significantly outperforms existing models in providing emotional support and medical accuracy, demonstrating the effectiveness of our approach in enhancing empathetic and informed AI interactions in healthcare. | [
"Wu, Shenghan",
"Hsu, Wynne",
"Lee, Mong Li"
] | EHDChat: A Knowledge-Grounded, Empathy-Enhanced Language Model for Healthcare Interactions | sicon-1.10 | Poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 1 |
||
https://aclanthology.org/2024.sicon-1.11.bib | https://aclanthology.org/2024.sicon-1.11/ | @inproceedings{chia-etal-2024-domain,
title = "Domain-Expanded {ASTE}: Rethinking Generalization in Aspect Sentiment Triplet Extraction",
author = "Chia, Yew Ken and
Chen, Hui and
Chen, Guizhen and
Han, Wei and
Aljunied, Sharifah Mahani and
Poria, Soujanya and
Bing, Lidong",
editor = "Hale, James and
Chawla, Kushal and
Garg, Muskan",
booktitle = "Proceedings of the Second Workshop on Social Influence in Conversations (SICon 2024)",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.sicon-1.11",
pages = "152--165",
abstract = "Aspect Sentiment Triplet Extraction (ASTE) is a challenging task in sentiment analysis, aiming to provide fine-grained insights into human sentiments. However, existing benchmarks are limited to two domains and do not evaluate model performance on unseen domains, raising concerns about the generalization of proposed methods. Furthermore, it remains unclear if large language models (LLMs) can effectively handle complex sentiment tasks like ASTE. In this work, we address the issue of generalization in ASTE from both a benchmarking and modeling perspective. We introduce a domain-expanded benchmark by annotating samples from diverse domains, enabling evaluation of models in both in-domain and out-of-domain settings. Additionally, we propose CASE, a simple and effective decoding strategy that enhances trustworthiness and performance of LLMs in ASTE. Through comprehensive experiments involving multiple tasks, settings, and models, we demonstrate that CASE can serve as a general decoding strategy for complex sentiment tasks. By expanding the scope of evaluation and providing a more reliable decoding strategy, we aim to inspire the research community to reevaluate the generalizability of benchmarks and models for ASTE. Our code, data, and models are available at https://github.com/DAMO-NLP-SG/domain-expanded-aste.",
}
| Aspect Sentiment Triplet Extraction (ASTE) is a challenging task in sentiment analysis, aiming to provide fine-grained insights into human sentiments. However, existing benchmarks are limited to two domains and do not evaluate model performance on unseen domains, raising concerns about the generalization of proposed methods. Furthermore, it remains unclear if large language models (LLMs) can effectively handle complex sentiment tasks like ASTE. In this work, we address the issue of generalization in ASTE from both a benchmarking and modeling perspective. We introduce a domain-expanded benchmark by annotating samples from diverse domains, enabling evaluation of models in both in-domain and out-of-domain settings. Additionally, we propose CASE, a simple and effective decoding strategy that enhances trustworthiness and performance of LLMs in ASTE. Through comprehensive experiments involving multiple tasks, settings, and models, we demonstrate that CASE can serve as a general decoding strategy for complex sentiment tasks. By expanding the scope of evaluation and providing a more reliable decoding strategy, we aim to inspire the research community to reevaluate the generalizability of benchmarks and models for ASTE. Our code, data, and models are available at https://github.com/DAMO-NLP-SG/domain-expanded-aste. | [
"Chia, Yew Ken",
"Chen, Hui",
"Chen, Guizhen",
"Han, Wei",
"Aljunied, Sharifah Mahani",
"Poria, Soujanya",
"Bing, Lidong"
] | Domain-Expanded ASTE: Rethinking Generalization in Aspect Sentiment Triplet Extraction | sicon-1.11 | Poster | 2305.14434 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
https://aclanthology.org/2024.tsar-1.1.bib | https://aclanthology.org/2024.tsar-1.1/ | @inproceedings{north-etal-2024-multils,
title = "{M}ulti{LS}: An End-to-End Lexical Simplification Framework",
author = "North, Kai and
Ranasinghe, Tharindu and
Shardlow, Matthew and
Zampieri, Marcos",
editor = "Shardlow, Matthew and
Saggion, Horacio and
Alva-Manchego, Fernando and
Zampieri, Marcos and
North, Kai and
{\v{S}}tajner, Sanja and
Stodden, Regina",
booktitle = "Proceedings of the Third Workshop on Text Simplification, Accessibility and Readability (TSAR 2024)",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.tsar-1.1",
pages = "1--11",
abstract = "Lexical Simplification (LS) automatically replaces difficult to read words for easier alternatives while preserving a sentence{'}s original meaning. Several datasets exist for LS and each of them specialize in one or two sub-tasks within the LS pipeline. However, as of this moment, no single LS dataset has been developed that covers all LS sub-tasks. We present MultiLS, the first LS framework that allows for the creation of a multi-task LS dataset. We also present MultiLS-PT, the first dataset created using the MultiLS framework. We demonstrate the potential of MultiLS-PT by carrying out all LS sub-tasks of (1) lexical complexity prediction (LCP), (2) substitute generation, and (3) substitute ranking for Portuguese.",
}
| Lexical Simplification (LS) automatically replaces difficult-to-read words with easier alternatives while preserving a sentence{'}s original meaning. Several datasets exist for LS and each of them specializes in one or two sub-tasks within the LS pipeline. However, as of this moment, no single LS dataset has been developed that covers all LS sub-tasks. We present MultiLS, the first LS framework that allows for the creation of a multi-task LS dataset. We also present MultiLS-PT, the first dataset created using the MultiLS framework. We demonstrate the potential of MultiLS-PT by carrying out all LS sub-tasks of (1) lexical complexity prediction (LCP), (2) substitute generation, and (3) substitute ranking for Portuguese. | [
"North, Kai",
"Ranasinghe, Tharindu",
"Shardlow, Matthew",
"Zampieri, Marcos"
] | MultiLS: An End-to-End Lexical Simplification Framework | tsar-1.1 | Poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 1 |
||
https://aclanthology.org/2024.tsar-1.2.bib | https://aclanthology.org/2024.tsar-1.2/ | @inproceedings{shmidman-shmidman-2024-otobert,
title = "{O}to{BERT}: Identifying Suffixed Verbal Forms in {M}odern {H}ebrew Literature",
author = "Shmidman, Avi and
Shmidman, Shaltiel",
editor = "Shardlow, Matthew and
Saggion, Horacio and
Alva-Manchego, Fernando and
Zampieri, Marcos and
North, Kai and
{\v{S}}tajner, Sanja and
Stodden, Regina",
booktitle = "Proceedings of the Third Workshop on Text Simplification, Accessibility and Readability (TSAR 2024)",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.tsar-1.2",
pages = "12--19",
abstract = "We provide a solution for a specific morphological obstacle which often makes Hebrew literature difficult to parse for the younger generation. The morphologically-rich nature of the Hebrew language allows pronominal direct objects to be realized as bound morphemes, suffixed to the verb. Although such suffixes are often utilized in Biblical Hebrew, their use has all but disappeared in modern Hebrew. Nevertheless, authors of modern Hebrew literature, in their search for literary flair, do make use of such forms. These unusual forms are notorious for alienating young readers from Hebrew literature, especially because these rare suffixed forms are often orthographically identical to common Hebrew words with different meanings. Upon encountering such words, readers naturally select the usual analysis of the word; yet, upon completing the sentence, they find themselves confounded. Young readers end up feeling {``}tricked{''}, and this in turn contributes to their alienation from the text. In order to address this challenge, we pretrained a new BERT model specifically geared to identify such forms, so that they may be automatically simplified and/or flagged. We release this new BERT model to the public for unrestricted use.",
}
| We provide a solution for a specific morphological obstacle which often makes Hebrew literature difficult to parse for the younger generation. The morphologically-rich nature of the Hebrew language allows pronominal direct objects to be realized as bound morphemes, suffixed to the verb. Although such suffixes are often utilized in Biblical Hebrew, their use has all but disappeared in modern Hebrew. Nevertheless, authors of modern Hebrew literature, in their search for literary flair, do make use of such forms. These unusual forms are notorious for alienating young readers from Hebrew literature, especially because these rare suffixed forms are often orthographically identical to common Hebrew words with different meanings. Upon encountering such words, readers naturally select the usual analysis of the word; yet, upon completing the sentence, they find themselves confounded. Young readers end up feeling {``}tricked{''}, and this in turn contributes to their alienation from the text. In order to address this challenge, we pretrained a new BERT model specifically geared to identify such forms, so that they may be automatically simplified and/or flagged. We release this new BERT model to the public for unrestricted use. | [
"Shmidman, Avi",
"Shmidman, Shaltiel"
] | OtoBERT: Identifying Suffixed Verbal Forms in Modern Hebrew Literature | tsar-1.2 | Poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 1 |
||
https://aclanthology.org/2024.tsar-1.3.bib | https://aclanthology.org/2024.tsar-1.3/ | @inproceedings{qiu-etal-2024-complex,
title = "{C}omp{L}ex-{ZH}: A New Dataset for Lexical Complexity Prediction in {M}andarin and {C}antonese",
author = "Qiu, Le and
Guo, Shanyue and
Wong, Tak-Sum and
Chersoni, Emmanuele and
Lee, John and
Huang, Chu-Ren",
editor = "Shardlow, Matthew and
Saggion, Horacio and
Alva-Manchego, Fernando and
Zampieri, Marcos and
North, Kai and
{\v{S}}tajner, Sanja and
Stodden, Regina",
booktitle = "Proceedings of the Third Workshop on Text Simplification, Accessibility and Readability (TSAR 2024)",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.tsar-1.3",
pages = "20--26",
abstract = "The prediction of lexical complexity in context is assuming an increasing relevance in Natural Language Processing research, since identifying complex words is often the first step of text simplification pipelines. To the best of our knowledge, though, datasets annotated with complex words are available only for English and for a limited number of Western languages.In our paper, we introduce CompLex-ZH, a dataset including words annotated with complexity scores in sentential contexts for Chinese. Our data include sentences in Mandarin and Cantonese, which were selected from a variety of sources and textual genres. We provide a first evaluation with baselines combining hand-crafted and language models-based features.",
}
| The prediction of lexical complexity in context is assuming an increasing relevance in Natural Language Processing research, since identifying complex words is often the first step of text simplification pipelines. To the best of our knowledge, though, datasets annotated with complex words are available only for English and for a limited number of Western languages. In our paper, we introduce CompLex-ZH, a dataset including words annotated with complexity scores in sentential contexts for Chinese. Our data include sentences in Mandarin and Cantonese, which were selected from a variety of sources and textual genres. We provide a first evaluation with baselines combining hand-crafted and language-model-based features. | [
"Qiu, Le",
"Guo, Shanyue",
"Wong, Tak-Sum",
"Chersoni, Emmanuele",
"Lee, John",
"Huang, Chu-Ren"
] | CompLex-ZH: A New Dataset for Lexical Complexity Prediction in Mandarin and Cantonese | tsar-1.3 | Poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 1 |
||
https://aclanthology.org/2024.tsar-1.4.bib | https://aclanthology.org/2024.tsar-1.4/ | @inproceedings{anschutz-etal-2024-images,
title = "Images Speak Volumes: User-Centric Assessment of Image Generation for Accessible Communication",
author = {Ansch{\"u}tz, Miriam and
Sylaj, Tringa and
Groh, Georg},
editor = "Shardlow, Matthew and
Saggion, Horacio and
Alva-Manchego, Fernando and
Zampieri, Marcos and
North, Kai and
{\v{S}}tajner, Sanja and
Stodden, Regina",
booktitle = "Proceedings of the Third Workshop on Text Simplification, Accessibility and Readability (TSAR 2024)",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.tsar-1.4",
pages = "27--40",
abstract = "Explanatory images play a pivotal role in accessible and easy-to-read (E2R) texts. However, the images available in online databases are not tailored toward the respective texts, and the creation of customized images is expensive. In this large-scale study, we investigated whether text-to-image generation models can close this gap by providing customizable images quickly and easily. We benchmarked seven, four open- and three closed-source, image generation models and provide an extensive evaluation of the resulting images. In addition, we performed a user study with people from the E2R target group to examine whether the images met their requirements. We find that some of the models show remarkable performance, but none of the models are ready to be used at a larger scale without human supervision. Our research is an important step toward facilitating the creation of accessible information for E2R creators and tailoring accessible images to the target group{'}s needs.",
}
| Explanatory images play a pivotal role in accessible and easy-to-read (E2R) texts. However, the images available in online databases are not tailored toward the respective texts, and the creation of customized images is expensive. In this large-scale study, we investigated whether text-to-image generation models can close this gap by providing customizable images quickly and easily. We benchmarked seven image generation models, four open-source and three closed-source, and provide an extensive evaluation of the resulting images. In addition, we performed a user study with people from the E2R target group to examine whether the images met their requirements. We find that some of the models show remarkable performance, but none of the models are ready to be used at a larger scale without human supervision. Our research is an important step toward facilitating the creation of accessible information for E2R creators and tailoring accessible images to the target group{'}s needs. | [
"Ansch{\\\"u}tz, Miriam",
"Sylaj, Tringa",
"Groh, Georg"
] | Images Speak Volumes: User-Centric Assessment of Image Generation for Accessible Communication | tsar-1.4 | Poster | 2410.03430 | [
"https://github.com/MiriUll/Image-Generation-for-Accessible-Communication"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
https://aclanthology.org/2024.tsar-1.5.bib | https://aclanthology.org/2024.tsar-1.5/ | @inproceedings{bakker-kamps-2024-cochrane,
title = "Cochrane-auto: An Aligned Dataset for the Simplification of Biomedical Abstracts",
author = "Bakker, Jan and
Kamps, Jaap",
editor = "Shardlow, Matthew and
Saggion, Horacio and
Alva-Manchego, Fernando and
Zampieri, Marcos and
North, Kai and
{\v{S}}tajner, Sanja and
Stodden, Regina",
booktitle = "Proceedings of the Third Workshop on Text Simplification, Accessibility and Readability (TSAR 2024)",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.tsar-1.5",
pages = "41--51",
abstract = "The most reliable and up-to-date information on health questions is in the biomedical literature, but inaccessible due to the complex language full of jargon. Domain specific scientific text simplification holds the promise to make this literature accessible to a lay audience. Therefore, we create Cochrane-auto: a large corpus of pairs of aligned sentences, paragraphs, and abstracts from biomedical abstracts and lay summaries. Experiments demonstrate that a plan-guided simplification system trained on Cochrane-auto is able to outperform a strong baseline trained on unaligned abstracts and lay summaries. More generally, our freely available corpus complementing Newsela-auto and Wiki-auto facilitates text simplification research beyond the sentence-level and direct lexical and grammatical revisions.",
}
| The most reliable and up-to-date information on health questions is in the biomedical literature, but it is inaccessible due to complex language full of jargon. Domain-specific scientific text simplification holds the promise of making this literature accessible to a lay audience. Therefore, we create Cochrane-auto: a large corpus of pairs of aligned sentences, paragraphs, and abstracts from biomedical abstracts and lay summaries. Experiments demonstrate that a plan-guided simplification system trained on Cochrane-auto is able to outperform a strong baseline trained on unaligned abstracts and lay summaries. More generally, our freely available corpus complementing Newsela-auto and Wiki-auto facilitates text simplification research beyond the sentence level and direct lexical and grammatical revisions. | [
"Bakker, Jan",
"Kamps, Jaap"
] | Cochrane-auto: An Aligned Dataset for the Simplification of Biomedical Abstracts | tsar-1.5 | Poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 1 |
||
https://aclanthology.org/2024.tsar-1.6.bib | https://aclanthology.org/2024.tsar-1.6/ | @inproceedings{kim-etal-2024-considering,
title = "Considering Human Interaction and Variability in Automatic Text Simplification",
author = "Kim, Jenia and
Leijnen, Stefan and
Beinborn, Lisa",
editor = "Shardlow, Matthew and
Saggion, Horacio and
Alva-Manchego, Fernando and
Zampieri, Marcos and
North, Kai and
{\v{S}}tajner, Sanja and
Stodden, Regina",
booktitle = "Proceedings of the Third Workshop on Text Simplification, Accessibility and Readability (TSAR 2024)",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.tsar-1.6",
pages = "52--60",
abstract = "Research into automatic text simplification aims to promote access to information for all members of society. To facilitate generalizability, simplification research often abstracts away from specific use cases, and targets a prototypical reader and an underspecified content creator. In this paper, we consider a real-world use case {--} simplification technology for use in Dutch municipalities {--} and identify the needs of the content creators and the target audiences in this use case. The stakeholders envision a system that (a) assists the human writer without taking over the task; (b) can provide diverse alternative outputs, tailored for specific target audiences; and (c) can explain and motivate the suggestions that it outputs. These requirements call for technology that is characterized by modularity, explainability, and variability. We believe that these are important research directions that require further exploration.",
}
| Research into automatic text simplification aims to promote access to information for all members of society. To facilitate generalizability, simplification research often abstracts away from specific use cases, and targets a prototypical reader and an underspecified content creator. In this paper, we consider a real-world use case {--} simplification technology for use in Dutch municipalities {--} and identify the needs of the content creators and the target audiences in this use case. The stakeholders envision a system that (a) assists the human writer without taking over the task; (b) can provide diverse alternative outputs, tailored for specific target audiences; and (c) can explain and motivate the suggestions that it outputs. These requirements call for technology that is characterized by modularity, explainability, and variability. We believe that these are important research directions that require further exploration. | [
"Kim, Jenia",
"Leijnen, Stefan",
"Beinborn, Lisa"
] | Considering Human Interaction and Variability in Automatic Text Simplification | tsar-1.6 | Poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 1 |
||
https://aclanthology.org/2024.tsar-1.7.bib | https://aclanthology.org/2024.tsar-1.7/ | @inproceedings{lyu-pergola-2024-society,
title = "Society of Medical Simplifiers",
author = "Lyu, Chen and
Pergola, Gabriele",
editor = "Shardlow, Matthew and
Saggion, Horacio and
Alva-Manchego, Fernando and
Zampieri, Marcos and
North, Kai and
{\v{S}}tajner, Sanja and
Stodden, Regina",
booktitle = "Proceedings of the Third Workshop on Text Simplification, Accessibility and Readability (TSAR 2024)",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.tsar-1.7",
pages = "61--68",
abstract = "Medical text simplification is crucial for making complex biomedical literature more accessible to non-experts. Traditional methods struggle with the specialized terms and jargon of medical texts, lacking the flexibility to adapt the simplification process dynamically. In contrast, recent advancements in large language models (LLMs) present unique opportunities by offering enhanced control over text simplification through iterative refinement and collaboration between specialized agents. In this work, we introduce the Society of Medical Simplifiers, a novel LLM-based framework inspired by the {``}Society of Mind{''} (SOM) philosophy. Our approach leverages the strengths of LLMs by assigning five distinct roles, i.e., Layperson, Simplifier, Medical Expert, Language Clarifier, and Redundancy Checker, organized into interaction loops. This structure allows the agents to progressively improve text simplification while maintaining the complexity and accuracy of the original content. Evaluations on the Cochrane text simplification dataset demonstrate that our framework is on par with or outperforms state-of-the-art methods, achieving superior readability and content preservation through controlled simplification processes.",
}
| Medical text simplification is crucial for making complex biomedical literature more accessible to non-experts. Traditional methods struggle with the specialized terms and jargon of medical texts, lacking the flexibility to adapt the simplification process dynamically. In contrast, recent advancements in large language models (LLMs) present unique opportunities by offering enhanced control over text simplification through iterative refinement and collaboration between specialized agents. In this work, we introduce the Society of Medical Simplifiers, a novel LLM-based framework inspired by the {``}Society of Mind{''} (SOM) philosophy. Our approach leverages the strengths of LLMs by assigning five distinct roles, i.e., Layperson, Simplifier, Medical Expert, Language Clarifier, and Redundancy Checker, organized into interaction loops. This structure allows the agents to progressively improve text simplification while maintaining the complexity and accuracy of the original content. Evaluations on the Cochrane text simplification dataset demonstrate that our framework is on par with or outperforms state-of-the-art methods, achieving superior readability and content preservation through controlled simplification processes. | [
"Lyu, Chen",
"Pergola, Gabriele"
] | Society of Medical Simplifiers | tsar-1.7 | Poster | 2410.09631 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
https://aclanthology.org/2024.tsar-1.8.bib | https://aclanthology.org/2024.tsar-1.8/ | @inproceedings{nohejl-etal-2024-difficult,
title = "Difficult for Whom? A Study of {J}apanese Lexical Complexity",
author = "Nohejl, Adam and
Hayakawa, Akio and
Ide, Yusuke and
Watanabe, Taro",
editor = "Shardlow, Matthew and
Saggion, Horacio and
Alva-Manchego, Fernando and
Zampieri, Marcos and
North, Kai and
{\v{S}}tajner, Sanja and
Stodden, Regina",
booktitle = "Proceedings of the Third Workshop on Text Simplification, Accessibility and Readability (TSAR 2024)",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.tsar-1.8",
pages = "69--81",
abstract = "The tasks of lexical complexity prediction (LCP) and complex word identification (CWI) commonly presuppose that difficult-to-understand words are shared by the target population. Meanwhile, personalization methods have also been proposed to adapt models to individual needs. We verify that a recent Japanese LCP dataset is representative of its target population by partially replicating the annotation. By another reannotation we show that native Chinese speakers perceive the complexity differently due to Sino-Japanese vocabulary. To explore the possibilities of personalization, we compare competitive baselines trained on the group mean ratings and individual ratings in terms of performance for an individual. We show that the model trained on a group mean performs similarly to an individual model in the CWI task, while achieving good LCP performance for an individual is difficult. We also experiment with adapting a finetuned BERT model, which results only in marginal improvements across all settings.",
}
| The tasks of lexical complexity prediction (LCP) and complex word identification (CWI) commonly presuppose that difficult-to-understand words are shared by the target population. Meanwhile, personalization methods have also been proposed to adapt models to individual needs. We verify that a recent Japanese LCP dataset is representative of its target population by partially replicating the annotation. By another reannotation we show that native Chinese speakers perceive the complexity differently due to Sino-Japanese vocabulary. To explore the possibilities of personalization, we compare competitive baselines trained on the group mean ratings and individual ratings in terms of performance for an individual. We show that the model trained on a group mean performs similarly to an individual model in the CWI task, while achieving good LCP performance for an individual is difficult. We also experiment with adapting a finetuned BERT model, which results only in marginal improvements across all settings. | [
"Nohejl, Adam",
"Hayakawa, Akio",
"Ide, Yusuke",
"Watanabe, Taro"
] | Difficult for Whom? A Study of Japanese Lexical Complexity | tsar-1.8 | Poster | 2410.18567 | [
"https://github.com/naist-nlp/multils-japanese"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
https://aclanthology.org/2024.tsar-1.9.bib | https://aclanthology.org/2024.tsar-1.9/ | @inproceedings{saggion-etal-2024-lexical,
title = "Lexical Complexity Prediction and Lexical Simplification for {C}atalan and {S}panish: Resource Creation, Quality Assessment, and Ethical Considerations",
author = "Saggion, Horacio and
Bott, Stefan and
Szasz, Sandra and
P{\'e}rez, Nelson and
Calder{\'o}n, Sa{\'u}l and
Sol{\'\i}s, Mart{\'\i}n",
editor = "Shardlow, Matthew and
Saggion, Horacio and
Alva-Manchego, Fernando and
Zampieri, Marcos and
North, Kai and
{\v{S}}tajner, Sanja and
Stodden, Regina",
booktitle = "Proceedings of the Third Workshop on Text Simplification, Accessibility and Readability (TSAR 2024)",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.tsar-1.9",
pages = "82--94",
abstract = "Automatic lexical simplification is a task to substitute lexical items that may be unfamiliar and difficult to understand with easier and more common words. This paper presents the description and analysis of two novel datasets for lexical simplification in Spanish and Catalan. This dataset represents the first of its kind in Catalan and a substantial addition to the sparse data on automatic lexical simplification which is available for Spanish. Specifically, it is the first dataset for Spanish which includes scalar ratings of the understanding difficulty of lexical items. In addition, we present a detailed analysis aiming at assessing the appropriateness and ethical dimensions of the data for the lexical simplification task.",
}
| Automatic lexical simplification is the task of substituting lexical items that may be unfamiliar and difficult to understand with easier and more common words. This paper presents the description and analysis of two novel datasets for lexical simplification in Spanish and Catalan. This dataset represents the first of its kind in Catalan and a substantial addition to the sparse data on automatic lexical simplification available for Spanish. Specifically, it is the first dataset for Spanish which includes scalar ratings of the understanding difficulty of lexical items. In addition, we present a detailed analysis aiming at assessing the appropriateness and ethical dimensions of the data for the lexical simplification task. | [
"Saggion, Horacio",
"Bott, Stefan",
"Szasz, S",
"ra",
"P{\\'e}rez, Nelson",
"Calder{\\'o}n, Sa{\\'u}l",
"Sol{\\'\\i}s, Mart{\\'\\i}n"
] | Lexical Complexity Prediction and Lexical Simplification for Catalan and Spanish: Resource Creation, Quality Assessment, and Ethical Considerations | tsar-1.9 | Poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 1 |
||
https://aclanthology.org/2024.tsar-1.10.bib | https://aclanthology.org/2024.tsar-1.10/ | @inproceedings{lyu-pergola-2024-scigispy,
title = "{S}ci{G}is{P}y: a Novel Metric for Biomedical Text Simplification via Gist Inference Score",
author = "Lyu, Chen and
Pergola, Gabriele",
editor = "Shardlow, Matthew and
Saggion, Horacio and
Alva-Manchego, Fernando and
Zampieri, Marcos and
North, Kai and
{\v{S}}tajner, Sanja and
Stodden, Regina",
booktitle = "Proceedings of the Third Workshop on Text Simplification, Accessibility and Readability (TSAR 2024)",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.tsar-1.10",
pages = "95--106",
abstract = "Biomedical literature is often written in highly specialized language, posing significant comprehension challenges for non-experts. Automatic text simplification (ATS) offers a solution by making such texts more accessible while preserving critical information. However, evaluating ATS for biomedical texts is still challenging due to the limitations of existing evaluation metrics. General-domain metrics like SARI, BLEU, and ROUGE focus on surface-level text features, and readability metrics like FKGL and ARI fail to account for domain-specific terminology or assess how well the simplified text conveys core meanings (gist). To address this, we introduce SciGisPy, a novel evaluation metric inspired by Gist Inference Score (GIS) from Fuzzy-Trace Theory (FTT). SciGisPy measures how well a simplified text facilitates the formation of abstract inferences (gist) necessary for comprehension, especially in the biomedical domain. We revise GIS for this purpose by introducing domain-specific enhancements, including semantic chunking, Information Content (IC) theory, and specialized embeddings, while removing unsuitable indexes. Our experimental evaluation on the Cochrane biomedical text simplification dataset demonstrates that SciGisPy outperforms the original GIS formulation, with a significant increase in correctly identified simplified texts (84{\%} versus 44.8{\%}). The results and a thorough ablation study confirm that SciGisPy better captures the essential meaning of biomedical content, outperforming existing approaches.",
}
| Biomedical literature is often written in highly specialized language, posing significant comprehension challenges for non-experts. Automatic text simplification (ATS) offers a solution by making such texts more accessible while preserving critical information. However, evaluating ATS for biomedical texts is still challenging due to the limitations of existing evaluation metrics. General-domain metrics like SARI, BLEU, and ROUGE focus on surface-level text features, and readability metrics like FKGL and ARI fail to account for domain-specific terminology or assess how well the simplified text conveys core meanings (gist). To address this, we introduce SciGisPy, a novel evaluation metric inspired by Gist Inference Score (GIS) from Fuzzy-Trace Theory (FTT). SciGisPy measures how well a simplified text facilitates the formation of abstract inferences (gist) necessary for comprehension, especially in the biomedical domain. We revise GIS for this purpose by introducing domain-specific enhancements, including semantic chunking, Information Content (IC) theory, and specialized embeddings, while removing unsuitable indexes. Our experimental evaluation on the Cochrane biomedical text simplification dataset demonstrates that SciGisPy outperforms the original GIS formulation, with a significant increase in correctly identified simplified texts (84{\%} versus 44.8{\%}). The results and a thorough ablation study confirm that SciGisPy better captures the essential meaning of biomedical content, outperforming existing approaches. | [
"Lyu, Chen",
"Pergola, Gabriele"
] | SciGisPy: a Novel Metric for Biomedical Text Simplification via Gist Inference Score | tsar-1.10 | Poster | 2410.09632 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
https://aclanthology.org/2024.tsar-1.11.bib | https://aclanthology.org/2024.tsar-1.11/ | @inproceedings{stodden-2024-easse,
title = "{EASSE}-{DE} {\&} {EASSE}-multi: Easier Automatic Sentence Simplification Evaluation for {G}erman {\&} Multiple Languages",
author = "Stodden, Regina",
editor = "Shardlow, Matthew and
Saggion, Horacio and
Alva-Manchego, Fernando and
Zampieri, Marcos and
North, Kai and
{\v{S}}tajner, Sanja and
Stodden, Regina",
booktitle = "Proceedings of the Third Workshop on Text Simplification, Accessibility and Readability (TSAR 2024)",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.tsar-1.11",
pages = "107--116",
abstract = "In this work, we propose EASSE-multi, a framework for easier automatic sentence evaluation for languages other than English. Compared to the original EASSE framework, EASSE-multi does not focus only on English.It contains tokenizers and versions of text simplification evaluation metrics which are suitable for multiple languages. In this paper, we exemplify the usage of EASSE-multi for German TS resulting in EASSE-DE. Further, we compare text simplification results when evaluating with different language or tokenization settings of the metrics. Based on this, we formulate recommendations on how to make the evaluation of (German) TS models more transparent and better comparable. Additionally, we present a benchmark on German TS evaluated with EASSE-DE and make its resources (i.e., test sets, system outputs, and evaluation reports) available. The code of EASSE-multi and its German specialisation (EASSE-DE) can be found at https://github.com/rstodden/easse-multi and https://github.com/rstodden/easse-de.",
}
| In this work, we propose EASSE-multi, a framework for easier automatic sentence simplification evaluation for languages other than English. Compared to the original EASSE framework, EASSE-multi does not focus only on English. It contains tokenizers and versions of text simplification evaluation metrics which are suitable for multiple languages. In this paper, we exemplify the usage of EASSE-multi for German TS resulting in EASSE-DE. Further, we compare text simplification results when evaluating with different language or tokenization settings of the metrics. Based on this, we formulate recommendations on how to make the evaluation of (German) TS models more transparent and better comparable. Additionally, we present a benchmark on German TS evaluated with EASSE-DE and make its resources (i.e., test sets, system outputs, and evaluation reports) available. The code of EASSE-multi and its German specialisation (EASSE-DE) can be found at https://github.com/rstodden/easse-multi and https://github.com/rstodden/easse-de. | [
"Stodden, Regina"
] | EASSE-DE & EASSE-multi: Easier Automatic Sentence Simplification Evaluation for German & Multiple Languages | tsar-1.11 | Poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 1 |
||
https://aclanthology.org/2024.tsar-1.12.bib | https://aclanthology.org/2024.tsar-1.12/ | @inproceedings{paula-camilo-junior-2024-evaluating,
title = "Evaluating the Simplification of {B}razilian Legal Rulings in {LLM}s Using Readability Scores as a Target",
author = "Paula, Antonio Flavio and
Camilo-Junior, Celso",
editor = "Shardlow, Matthew and
Saggion, Horacio and
Alva-Manchego, Fernando and
Zampieri, Marcos and
North, Kai and
{\v{S}}tajner, Sanja and
Stodden, Regina",
booktitle = "Proceedings of the Third Workshop on Text Simplification, Accessibility and Readability (TSAR 2024)",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.tsar-1.12",
pages = "117--125",
abstract = "Legal documents are often characterized by complex language, including jargon and technical terms, making them challenging for Natural Language Processing (NLP) applications. We apply the readability-controlled text modification task with an emphasis on legal texts simplification. Additionally, our work explores an evaluation based on the comparison of word complexity in the documents using Zipf scale, demonstrating the models{'} ability to simplify text according to the target readability scores, while also identifying a limit to this capability. Our results with Llama-3 and Sabi{\'a}-2 show that while the complexity score decreases with higher readability targets, there is a trade-off with reduced semantic similarity.",
}
| Legal documents are often characterized by complex language, including jargon and technical terms, making them challenging for Natural Language Processing (NLP) applications. We apply the readability-controlled text modification task with an emphasis on legal text simplification. Additionally, our work explores an evaluation based on the comparison of word complexity in the documents using the Zipf scale, demonstrating the models{'} ability to simplify text according to the target readability scores, while also identifying a limit to this capability. Our results with Llama-3 and Sabi{\'a}-2 show that while the complexity score decreases with higher readability targets, there is a trade-off with reduced semantic similarity. | [
"Paula, Antonio Flavio",
"Camilo-Junior, Celso"
] | Evaluating the Simplification of Brazilian Legal Rulings in LLMs Using Readability Scores as a Target | tsar-1.12 | Poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 1 |
||
https://aclanthology.org/2024.tsar-1.13.bib | https://aclanthology.org/2024.tsar-1.13/ | @inproceedings{trott-riviere-2024-measuring,
title = "Measuring and Modifying the Readability of {E}nglish Texts with {GPT}-4",
author = "Trott, Sean and
Rivi{\`e}re, Pamela",
editor = "Shardlow, Matthew and
Saggion, Horacio and
Alva-Manchego, Fernando and
Zampieri, Marcos and
North, Kai and
{\v{S}}tajner, Sanja and
Stodden, Regina",
booktitle = "Proceedings of the Third Workshop on Text Simplification, Accessibility and Readability (TSAR 2024)",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.tsar-1.13",
pages = "126--134",
abstract = "The success of Large Language Models (LLMs) in other domains has raised the question of whether LLMs can reliably assess and manipulate the readability of text. We approach this question empirically. First, using a published corpus of 4,724 English text excerpts, we find that readability estimates produced {``}zero-shot{''} from GPT-4 Turbo and GPT-4o mini exhibit relatively high correlation with human judgments (r = 0.76 and r = 0.74, respectively), out-performing estimates derived from traditional readability formulas and various psycholinguistic indices. Then, in a pre-registered human experiment (N = 59), we ask whether Turbo can reliably make text easier or harder to read. We find evidence to support this hypothesis, though considerable variance in human judgments remains unexplained. We conclude by discussing the limitations of this approach, including limited scope, as well as the validity of the {``}readability{''} construct and its dependence on context, audience, and goal.",
}
| The success of Large Language Models (LLMs) in other domains has raised the question of whether LLMs can reliably assess and manipulate the readability of text. We approach this question empirically. First, using a published corpus of 4,724 English text excerpts, we find that readability estimates produced {``}zero-shot{''} from GPT-4 Turbo and GPT-4o mini exhibit relatively high correlation with human judgments (r = 0.76 and r = 0.74, respectively), out-performing estimates derived from traditional readability formulas and various psycholinguistic indices. Then, in a pre-registered human experiment (N = 59), we ask whether Turbo can reliably make text easier or harder to read. We find evidence to support this hypothesis, though considerable variance in human judgments remains unexplained. We conclude by discussing the limitations of this approach, including limited scope, as well as the validity of the {``}readability{''} construct and its dependence on context, audience, and goal. | [
"Trott, Sean",
"Rivi{\\`e}re, Pamela"
] | Measuring and Modifying the Readability of English Texts with GPT-4 | tsar-1.13 | Poster | 2410.14028 | [
"https://github.com/seantrott/llm_readability"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
https://aclanthology.org/2024.wat-1.1.bib | https://aclanthology.org/2024.wat-1.1/ | @inproceedings{tang-etal-2024-creative-context,
title = "Creative and Context-Aware Translation of {E}ast {A}sian Idioms with {GPT}-4",
author = "Tang, Kenan and
Song, Peiyang and
Qin, Yao and
Yan, Xifeng",
editor = "Nakazawa, Toshiaki and
Goto, Isao",
booktitle = "Proceedings of the Eleventh Workshop on Asian Translation (WAT 2024)",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.wat-1.1",
pages = "1--21",
abstract = "As a type of figurative language, an East Asian idiom condenses rich cultural background into only a few characters. Translating such idioms is challenging for human translators, who often resort to choosing a context-aware translation from an existing list of candidates. However, compiling a dictionary of candidate translations demands much time and creativity even for expert translators. To alleviate such burden, we evaluate if GPT-4 can help generate high-quality translations. Based on automatic evaluations of faithfulness and creativity, we first identify Pareto-optimal prompting strategies that can outperform translation engines from Google and DeepL. Then, at a low cost, our context-aware translations can achieve far more high-quality translations per idiom than the human baseline. We open-source all code and data to facilitate further research.",
}
| As a type of figurative language, an East Asian idiom condenses rich cultural background into only a few characters. Translating such idioms is challenging for human translators, who often resort to choosing a context-aware translation from an existing list of candidates. However, compiling a dictionary of candidate translations demands much time and creativity even for expert translators. To alleviate such burden, we evaluate if GPT-4 can help generate high-quality translations. Based on automatic evaluations of faithfulness and creativity, we first identify Pareto-optimal prompting strategies that can outperform translation engines from Google and DeepL. Then, at a low cost, our context-aware translations can achieve far more high-quality translations per idiom than the human baseline. We open-source all code and data to facilitate further research. | [
"Tang, Kenan",
"Song, Peiyang",
"Qin, Yao",
"Yan, Xifeng"
] | Creative and Context-Aware Translation of East Asian Idioms with GPT-4 | wat-1.1 | Poster | 2410.00988 | [
"https://github.com/kenantang/cjk-idioms-gpt"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
https://aclanthology.org/2024.wat-1.2.bib | https://aclanthology.org/2024.wat-1.2/ | @inproceedings{imamura-utiyama-2024-empirical,
title = "An Empirical Study of Multilingual Vocabulary for Neural Machine Translation Models",
author = "Imamura, Kenji and
Utiyama, Masao",
editor = "Nakazawa, Toshiaki and
Goto, Isao",
booktitle = "Proceedings of the Eleventh Workshop on Asian Translation (WAT 2024)",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.wat-1.2",
pages = "22--35",
abstract = "In this paper, we discuss multilingual vocabulary for neural machine translation models. Multilingual vocabularies should generate highly accurate machine translations regardless of the languages, and have preferences so that tokenized strings contain rare out-of-vocabulary (OOV) tokens and token sequences are short. In this paper, we discuss the characteristics of various multilingual vocabularies via tokenization and translation experiments. We also present our recommended vocabulary and tokenizer.",
}
| In this paper, we discuss multilingual vocabulary for neural machine translation models. Multilingual vocabularies should generate highly accurate machine translations regardless of the language, and are preferable when tokenized strings rarely contain out-of-vocabulary (OOV) tokens and token sequences are short. In this paper, we discuss the characteristics of various multilingual vocabularies via tokenization and translation experiments. We also present our recommended vocabulary and tokenizer. | [
"Imamura, Kenji",
"Utiyama, Masao"
] | An Empirical Study of Multilingual Vocabulary for Neural Machine Translation Models | wat-1.2 | Poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 1 |
||
https://aclanthology.org/2024.wat-1.3.bib | https://aclanthology.org/2024.wat-1.3/ | @inproceedings{dabre-etal-2024-machine,
title = "Machine Translation Of {M}arathi Dialects: A Case Study Of Kadodi",
author = "Dabre, Raj and
Dabre, Mary and
Pereira, Teresa",
editor = "Nakazawa, Toshiaki and
Goto, Isao",
booktitle = "Proceedings of the Eleventh Workshop on Asian Translation (WAT 2024)",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.wat-1.3",
pages = "36--44",
abstract = "While Marathi is considered as a low- to middle-resource language, its 42 dialects have mostly been ignored, mainly because these dialects are mostly spoken and rarely written, making them extremely low-resource. In this paper we explore the machine translation (MT) of Kadodi, also known as Samvedi, which is a dialect of Marathi. We first discuss the Kadodi dialect, highlighting the differences from the standard dialect, followed by presenting a manually curated dataset called Suman consisting of a trilingual Kadodi-Marathi-English dictionary of 949 entries and 942 simple sentence triples and idioms created by native Kadodi speakers. We then evaluate 3 existing large language models (LLMs) supporting Marathi, namely Gemma-2-9b, Sarvam-2b-0.5 and LLaMa-3.1-8b, in few-shot prompting style to determine their efficacy for translation involving Kadodi. We observe that these models exhibit rather lackluster performance in handling Kadodi even for simple sentences, indicating a dire situation.",
}
| While Marathi is considered a low- to middle-resource language, its 42 dialects have mostly been ignored, mainly because these dialects are mostly spoken and rarely written, making them extremely low-resource. In this paper we explore the machine translation (MT) of Kadodi, also known as Samvedi, which is a dialect of Marathi. We first discuss the Kadodi dialect, highlighting the differences from the standard dialect, followed by presenting a manually curated dataset called Suman consisting of a trilingual Kadodi-Marathi-English dictionary of 949 entries and 942 simple sentence triples and idioms created by native Kadodi speakers. We then evaluate 3 existing large language models (LLMs) supporting Marathi, namely Gemma-2-9b, Sarvam-2b-0.5 and LLaMa-3.1-8b, in a few-shot prompting style to determine their efficacy for translation involving Kadodi. We observe that these models exhibit rather lackluster performance in handling Kadodi even for simple sentences, indicating a dire situation. | [
"Dabre, Raj",
"Dabre, Mary",
"Pereira, Teresa"
] | Machine Translation Of Marathi Dialects: A Case Study Of Kadodi | wat-1.3 | Poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 1 |
||
https://aclanthology.org/2024.wat-1.4.bib | https://aclanthology.org/2024.wat-1.4/ | @inproceedings{qian-etal-2024-large-language,
title = "Are Large Language Models State-of-the-art Quality Estimators for Machine Translation of User-generated Content?",
author = "Qian, Shenbin and
Orasan, Constantin and
Kanojia, Diptesh and
Do Carmo, F{\'e}lix",
editor = "Nakazawa, Toshiaki and
Goto, Isao",
booktitle = "Proceedings of the Eleventh Workshop on Asian Translation (WAT 2024)",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.wat-1.4",
pages = "45--55",
abstract = "This paper investigates whether large language models (LLMs) are state-of-the-art quality estimators for machine translation of user-generated content (UGC) that contains emotional expressions, without the use of reference translations. To achieve this, we employ an existing emotion-related dataset with human-annotated errors and calculate quality evaluation scores based on the Multi-dimensional Quality Metrics. We compare the accuracy of several LLMs with that of our fine-tuned baseline models, under in-context learning and parameter-efficient fine-tuning (PEFT) scenarios. We find that PEFT of LLMs leads to better performance in score prediction with human interpretable explanations than fine-tuned models. However, a manual analysis of LLM outputs reveals that they still have problems such as refusal to reply to a prompt and unstable output while evaluating machine translation of UGC.",
}
| This paper investigates whether large language models (LLMs) are state-of-the-art quality estimators for machine translation of user-generated content (UGC) that contains emotional expressions, without the use of reference translations. To achieve this, we employ an existing emotion-related dataset with human-annotated errors and calculate quality evaluation scores based on the Multi-dimensional Quality Metrics. We compare the accuracy of several LLMs with that of our fine-tuned baseline models, under in-context learning and parameter-efficient fine-tuning (PEFT) scenarios. We find that PEFT of LLMs leads to better performance in score prediction with human interpretable explanations than fine-tuned models. However, a manual analysis of LLM outputs reveals that they still have problems such as refusal to reply to a prompt and unstable output while evaluating machine translation of UGC. | [
"Qian, Shenbin",
"Orasan, Constantin",
"Kanojia, Diptesh",
"Do Carmo, F{\\'e}lix"
] | Are Large Language Models State-of-the-art Quality Estimators for Machine Translation of User-generated Content? | wat-1.4 | Poster | 2410.06338 | [
"https://github.com/surrey-nlp/LLMs4MTQE-UGC"
] | https://huggingface.co/papers/2410.06338 | 0 | 0 | 0 | 4 | [] | [] | [] | [] | [] | [] | 1 |
https://aclanthology.org/2024.wat-1.5.bib | https://aclanthology.org/2024.wat-1.5/ | @inproceedings{dalal-etal-2024-ai,
title = "{AI}-Tutor: Interactive Learning of Ancient Knowledge from Low-Resource Languages",
author = "Dalal, Siddhartha and
Aditya, Rahul and
Chithrra Raghuram, Vethavikashini and
Koratamaddi, Prahlad",
editor = "Nakazawa, Toshiaki and
Goto, Isao",
booktitle = "Proceedings of the Eleventh Workshop on Asian Translation (WAT 2024)",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.wat-1.5",
pages = "56--66",
abstract = "Many low-resource languages, such as Prakrit, present significant linguistic complexities and have limited modern-day resources. These languages often have multiple derivatives; for example, Prakrit, a language in use by masses around 2500 years ago for 500 years, includes Pali and Gandhari, which encompass a vast body of Buddhist literature, as well as Ardhamagadhi, rich in Jain literature. Despite these challenges, these languages are invaluable for their historical, religious, and cultural insights needed by non-language experts and others.To explore and understand the deep knowledge within these ancient texts for non-language experts, we propose a novel approach: translating multiple dialects of the parent language into a contemporary language and then enabling them to interact with the system in their native language, including English, Hindi, French and German, through a question-and-answer interface built on Large Language Models. We demonstrate the effectiveness of this novel AI-Tutor system by focusing on Ardhamagadhi and Pali.",
}
| Many low-resource languages, such as Prakrit, present significant linguistic complexities and have limited modern-day resources. These languages often have multiple derivatives; for example, Prakrit, a language in use by the masses around 2500 years ago for 500 years, includes Pali and Gandhari, which encompass a vast body of Buddhist literature, as well as Ardhamagadhi, rich in Jain literature. Despite these challenges, these languages are invaluable for their historical, religious, and cultural insights needed by non-language experts and others. To explore and understand the deep knowledge within these ancient texts for non-language experts, we propose a novel approach: translating multiple dialects of the parent language into a contemporary language and then enabling them to interact with the system in their native language, including English, Hindi, French and German, through a question-and-answer interface built on Large Language Models. We demonstrate the effectiveness of this novel AI-Tutor system by focusing on Ardhamagadhi and Pali. | [
"Dalal, Siddhartha",
"Aditya, Rahul",
"Chithrra Raghuram, Vethavikashini",
"Koratamaddi, Prahlad"
] | AI-Tutor: Interactive Learning of Ancient Knowledge from Low-Resource Languages | wat-1.5 | Poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 1 |