Schema (column name: dtype, observed length or value range):

- bibtex_url: string, length 41–52
- proceedings: string, length 38–49
- bibtext: string, length 788–3.49k
- abstract: string, length 0–2.12k
- authors: sequence, length 1–58
- title: string, length 16–181
- id: string, length 7–18
- type: string, 2 classes
- arxiv_id: string, length 0–10
- GitHub: sequence, length 1–1
- paper_page: string, 170 classes
- n_linked_authors: int64, -1 to 9
- upvotes: int64, -1 to 56
- num_comments: int64, -1 to 9
- n_authors: int64, -1 to 57
- paper_page_exists_pre_conf: int64, 0 to 1
- Models: sequence, length 0–99
- Datasets: sequence, length 0–5
- Spaces: sequence, length 0–57
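A minimal sketch of how rows in this split could be loaded and filtered, assuming the table has been exported as JSON Lines with the column names above (`papers.jsonl`, `load_rows`, and the filter are illustrative assumptions, not part of the dataset card):

```python
import json

def load_rows(path="papers.jsonl"):
    """Yield one dict per paper record from a hypothetical JSONL export."""
    with open(path, encoding="utf-8") as f:
        for line in f:
            yield json.loads(line)

# Example: list oral papers that link a GitHub repository. In this split,
# missing integer fields are encoded as -1 and missing strings as "".
for row in load_rows():
    has_repo = any(url.strip() for url in row["GitHub"])
    if row["type"] == "Oral" and has_repo:
        print(row["id"], row["title"], row["GitHub"][0], sep=" | ")
```

The data rows follow, one paper per row: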
bibtex_url | proceedings | bibtext | abstract | authors | title | id | type | arxiv_id | GitHub | paper_page | n_linked_authors | upvotes | num_comments | n_authors | paper_page_exists_pre_conf | Models | Datasets | Spaces |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://aclanthology.org/2024.naacl-long.201.bib | https://aclanthology.org/2024.naacl-long.201/ | @inproceedings{linzbach-etal-2024-dissecting,
title = "Dissecting Paraphrases: The Impact of Prompt Syntax and supplementary Information on Knowledge Retrieval from Pretrained Language Models",
author = "Linzbach, Stephan and
Dimitrov, Dimitar and
Kallmeyer, Laura and
Evang, Kilian and
Jabeen, Hajira and
Dietze, Stefan",
editor = "Duh, Kevin and
Gomez, Helena and
Bethard, Steven",
booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.naacl-long.201",
doi = "10.18653/v1/2024.naacl-long.201",
pages = "3645--3655",
abstract = "Pre-trained Language Models (PLMs) are known to contain various kinds of knowledge.One method to infer relational knowledge is through the use of cloze-style prompts, where a model is tasked to predict missing subjects orobjects. Typically, designing these prompts is a tedious task because small differences in syntax or semantics can have a substantial impact on knowledge retrieval performance. Simultaneously, evaluating the impact of either prompt syntax or information is challenging due to their interdependence. We designed CONPARE-LAMA {--} a dedicated probe, consisting of 34 million distinct prompts that facilitate comparison across minimal paraphrases. These paraphrases follow a unified meta-template enabling the controlled variation of syntax and semantics across arbitrary relations.CONPARE-LAMA enables insights into the independent impact of either syntactical form or semantic information of paraphrases on the knowledge retrieval performance of PLMs. Extensive knowledge retrieval experiments using our probe reveal that prompts following clausal syntax have several desirable properties in comparison to appositive syntax: i) they are more useful when querying PLMs with a combination of supplementary information, ii) knowledge is more consistently recalled across different combinations of supplementary information, and iii) they decrease response uncertainty when retrieving known facts. In addition, range information can boost knowledge retrieval performance more than domain information, even though domain information is more reliably helpful across syntactic forms.",
}
| Pre-trained Language Models (PLMs) are known to contain various kinds of knowledge. One method to infer relational knowledge is through the use of cloze-style prompts, where a model is tasked to predict missing subjects or objects. Typically, designing these prompts is a tedious task because small differences in syntax or semantics can have a substantial impact on knowledge retrieval performance. Simultaneously, evaluating the impact of either prompt syntax or information is challenging due to their interdependence. We designed CONPARE-LAMA {--} a dedicated probe, consisting of 34 million distinct prompts that facilitate comparison across minimal paraphrases. These paraphrases follow a unified meta-template enabling the controlled variation of syntax and semantics across arbitrary relations. CONPARE-LAMA enables insights into the independent impact of either syntactical form or semantic information of paraphrases on the knowledge retrieval performance of PLMs. Extensive knowledge retrieval experiments using our probe reveal that prompts following clausal syntax have several desirable properties in comparison to appositive syntax: i) they are more useful when querying PLMs with a combination of supplementary information, ii) knowledge is more consistently recalled across different combinations of supplementary information, and iii) they decrease response uncertainty when retrieving known facts. In addition, range information can boost knowledge retrieval performance more than domain information, even though domain information is more reliably helpful across syntactic forms. | [
"Linzbach, Stephan",
"Dimitrov, Dimitar",
"Kallmeyer, Laura",
"Evang, Kilian",
"Jabeen, Hajira",
"Dietze, Stefan"
] | Dissecting Paraphrases: The Impact of Prompt Syntax and supplementary Information on Knowledge Retrieval from Pretrained Language Models | naacl-long.201 | Poster | 2404.01992 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2024.naacl-long.202.bib | https://aclanthology.org/2024.naacl-long.202/ | @inproceedings{spataru-2024-know,
title = "Know When To Stop: A Study of Semantic Drift in Text Generation",
author = "Spataru, Ava and
Hambro, Eric and
Voita, Elena and
Cancedda, Nicola",
editor = "Duh, Kevin and
Gomez, Helena and
Bethard, Steven",
booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.naacl-long.202",
doi = "10.18653/v1/2024.naacl-long.202",
pages = "3656--3671",
abstract = "In this work, we explicitly show that modern LLMs tend to generate correct facts first, then {``}drift away{''} and generate incorrect facts later: this was occasionally observed but never properly measured. We develop a semantic drift score that measures the degree of separation between correct and incorrect facts in generated texts and confirm our hypothesis when generating Wikipedia-style biographies. This correct-then-incorrect generation pattern suggests that factual accuracy can be improved by knowing when to stop generation. Therefore, we explore the trade-off between information quantity and factual accuracy for several early stopping methods and manage to improve factuality by a large margin. We further show that reranking with semantic similarity can further improve these results, both compared to the baseline and when combined with early stopping. Finally, we try calling external API to bring the model back to the right generation path, but do not get positive results. Overall, our methods generalize and can be applied to any long-form text generation to produce more reliable information, by balancing trade-offs between factual accuracy, information quantity and computational cost.",
}
| In this work, we explicitly show that modern LLMs tend to generate correct facts first, then {``}drift away{''} and generate incorrect facts later: this was occasionally observed but never properly measured. We develop a semantic drift score that measures the degree of separation between correct and incorrect facts in generated texts and confirm our hypothesis when generating Wikipedia-style biographies. This correct-then-incorrect generation pattern suggests that factual accuracy can be improved by knowing when to stop generation. Therefore, we explore the trade-off between information quantity and factual accuracy for several early stopping methods and manage to improve factuality by a large margin. We further show that reranking with semantic similarity can further improve these results, both compared to the baseline and when combined with early stopping. Finally, we try calling an external API to bring the model back to the right generation path, but do not get positive results. Overall, our methods generalize and can be applied to any long-form text generation to produce more reliable information, by balancing trade-offs between factual accuracy, information quantity and computational cost. | [
"Spataru, Ava",
"Hambro, Eric",
"Voita, Elena",
"Cancedda, Nicola"
] | Know When To Stop: A Study of Semantic Drift in Text Generation | naacl-long.202 | Poster | 2404.05411 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2024.naacl-long.203.bib | https://aclanthology.org/2024.naacl-long.203/ | @inproceedings{tou-sun-2024-curriculum,
title = "Curriculum Masking in Vision-Language Pretraining to Maximize Cross Modal Interaction",
author = "Tou, Kraig and
Sun, Zijun",
editor = "Duh, Kevin and
Gomez, Helena and
Bethard, Steven",
booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.naacl-long.203",
doi = "10.18653/v1/2024.naacl-long.203",
pages = "3672--3688",
abstract = "Many leading methods in Vision and language (V+L) pretraining utilize masked language modeling (MLM) as a standard pretraining component, with the expectation that reconstruction of masked text tokens would necessitate reference to corresponding image context via cross/self attention and thus promote representation fusion. However, we observe that the minimization of MLM loss in earlier training stages can depend disproportionately on local text signals, leading to poor training efficiency and inconsistency with the goal of representation fusion. The extent of this lack of cross modal interaction depends strongly which token(s) are masked. To address this issue, we propose a curriculum masking scheme as a replacement for random masking. Tokens are selected to be masked at a frequency proportional to the expected level of cross modal interaction necessary to reconstruct them. This is achieved using a parallel mask selection agent that measures the cross modal flow of information and treats it as a reward to be maximized. By additionally masking contiguous spans that include key objects and their relations, we also achieve better relational understanding, which has been shown to be lacking in many SOTA models. Our experiments on a wide range of V+L tasks show that we trail closely behind state-of-the-art methods despite pretraining on 300x to 1000x less data and we also achieve either top or runner-up performance on tasks from the ARO benchmark which tests compositional relationships. Finally, we demonstrate the potential of our method to scale to larger pretraining data.",
}
| Many leading methods in Vision and language (V+L) pretraining utilize masked language modeling (MLM) as a standard pretraining component, with the expectation that reconstruction of masked text tokens would necessitate reference to corresponding image context via cross/self attention and thus promote representation fusion. However, we observe that the minimization of MLM loss in earlier training stages can depend disproportionately on local text signals, leading to poor training efficiency and inconsistency with the goal of representation fusion. The extent of this lack of cross modal interaction depends strongly on which token(s) are masked. To address this issue, we propose a curriculum masking scheme as a replacement for random masking. Tokens are selected to be masked at a frequency proportional to the expected level of cross modal interaction necessary to reconstruct them. This is achieved using a parallel mask selection agent that measures the cross modal flow of information and treats it as a reward to be maximized. By additionally masking contiguous spans that include key objects and their relations, we also achieve better relational understanding, which has been shown to be lacking in many SOTA models. Our experiments on a wide range of V+L tasks show that we trail closely behind state-of-the-art methods despite pretraining on 300x to 1000x less data and we also achieve either top or runner-up performance on tasks from the ARO benchmark which tests compositional relationships. Finally, we demonstrate the potential of our method to scale to larger pretraining data. | [
"Tou, Kraig",
"Sun, Zijun"
] | Curriculum Masking in Vision-Language Pretraining to Maximize Cross Modal Interaction | naacl-long.203 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2024.naacl-long.204.bib | https://aclanthology.org/2024.naacl-long.204/ | @inproceedings{espana-bonet-barron-cedeno-2024-elote,
title = "Elote, Choclo and Mazorca: on the Varieties of {S}panish",
author = "Espa{\~n}a-Bonet, Cristina and
Barr{\'o}n-Cede{\~n}o, Alberto",
editor = "Duh, Kevin and
Gomez, Helena and
Bethard, Steven",
booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.naacl-long.204",
doi = "10.18653/v1/2024.naacl-long.204",
pages = "3689--3711",
abstract = "Spanish is one of the most widespread languages: the official language in 20 countries and the second most-spoken native language. Its contact with other languages across different regions and the rich regional and cultural diversity has produced varieties which divert from each other, particularly in terms of lexicon. Still, available corpora, and models trained upon them, generally treat Spanish as one monolithic language, which dampers prediction and generation power when dealing with different varieties. To alleviate the situation, we compile and curate datasets in the different varieties of Spanish around the world at an unprecedented scale and create the CEREAL corpus. With such a resource at hand, we perform a stylistic analysis to identify and characterise varietal differences. We implement a classifier specially designed to deal with long documents and identify Spanish varieties (and therefore expand CEREAL further). We produce varietal-specific embeddings, and analyse the cultural differences that they encode. We make data, code and models publicly available.",
}
| Spanish is one of the most widespread languages: the official language in 20 countries and the second most-spoken native language. Its contact with other languages across different regions and the rich regional and cultural diversity has produced varieties which diverge from each other, particularly in terms of lexicon. Still, available corpora, and models trained upon them, generally treat Spanish as one monolithic language, which hampers prediction and generation power when dealing with different varieties. To alleviate the situation, we compile and curate datasets in the different varieties of Spanish around the world at an unprecedented scale and create the CEREAL corpus. With such a resource at hand, we perform a stylistic analysis to identify and characterise varietal differences. We implement a classifier specially designed to deal with long documents and identify Spanish varieties (and therefore expand CEREAL further). We produce varietal-specific embeddings, and analyse the cultural differences that they encode. We make data, code and models publicly available. | [
"Espa{\\~n}a-Bonet, Cristina",
"Barr{\\'o}n-Cede{\\~n}o, Alberto"
] | Elote, Choclo and Mazorca: on the Varieties of Spanish | naacl-long.204 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2024.naacl-long.205.bib | https://aclanthology.org/2024.naacl-long.205/ | @inproceedings{wang-etal-2024-ada,
title = "{A}da-{LE}val: Evaluating long-context {LLM}s with length-adaptable benchmarks",
author = "Wang, Chonghua and
Duan, Haodong and
Zhang, Songyang and
Lin, Dahua and
Chen, Kai",
editor = "Duh, Kevin and
Gomez, Helena and
Bethard, Steven",
booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.naacl-long.205",
doi = "10.18653/v1/2024.naacl-long.205",
pages = "3712--3724",
abstract = "Recently, the large language model (LLM) community has shown increasing interest in enhancing LLMs{'} capability to handle extremely long documents. As various long-text techniques and model architectures emerge, the precise and detailed evaluation of models{'} long-text capabilities has become increasingly important. Existing long-text evaluation benchmarks, such as L-Eval and LongBench, construct long-text test sets based on open-source datasets, focusing mainly on QA and summarization tasks. These datasets include test samples of varying lengths (from 2k to 32k+) entangled together, making it challenging to assess model capabilities across different length ranges. Moreover, they do not cover the ultralong settings (100k+ tokens) that the latest LLMs claim to achieve. In this paper, we introduce Ada-LEval, a length-adaptable benchmark for evaluating the long-context understanding of LLMs. Ada-LEval includes two challenging subsets, TSort and BestAnswer, which enable a more reliable evaluation of LLMs{'} long context capabilities. These benchmarks support intricate manipulation of the length of test cases, and can easily produce text samples up to 128k tokens. We evaluate 4 state-of-the-art closed-source API models and 6 open-source models with Ada-LEval. The evaluation results demonstrate the limitations of current LLMs, especially in ultra-long-context settings. Our code is available at https://github.com/open-compass/Ada-LEval.",
}
| Recently, the large language model (LLM) community has shown increasing interest in enhancing LLMs{'} capability to handle extremely long documents. As various long-text techniques and model architectures emerge, the precise and detailed evaluation of models{'} long-text capabilities has become increasingly important. Existing long-text evaluation benchmarks, such as L-Eval and LongBench, construct long-text test sets based on open-source datasets, focusing mainly on QA and summarization tasks. These datasets include test samples of varying lengths (from 2k to 32k+) entangled together, making it challenging to assess model capabilities across different length ranges. Moreover, they do not cover the ultralong settings (100k+ tokens) that the latest LLMs claim to achieve. In this paper, we introduce Ada-LEval, a length-adaptable benchmark for evaluating the long-context understanding of LLMs. Ada-LEval includes two challenging subsets, TSort and BestAnswer, which enable a more reliable evaluation of LLMs{'} long context capabilities. These benchmarks support intricate manipulation of the length of test cases, and can easily produce text samples up to 128k tokens. We evaluate 4 state-of-the-art closed-source API models and 6 open-source models with Ada-LEval. The evaluation results demonstrate the limitations of current LLMs, especially in ultra-long-context settings. Our code is available at https://github.com/open-compass/Ada-LEval. | [
"Wang, Chonghua",
"Duan, Haodong",
"Zhang, Songyang",
"Lin, Dahua",
"Chen, Kai"
] | Ada-LEval: Evaluating long-context LLMs with length-adaptable benchmarks | naacl-long.205 | Poster | 2404.06480 | [
"https://github.com/open-compass/ada-leval"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2024.naacl-long.206.bib | https://aclanthology.org/2024.naacl-long.206/ | @inproceedings{ofori-boateng-etal-2024-zero,
title = "A Zero-Shot Monolingual Dual Stage Information Retrieval System for {S}panish Biomedical Systematic Literature Reviews",
author = "Ofori-Boateng, Regina and
Aceves-Martins, Magaly and
Wiratunga, Nirmalie and
Moreno-Garcia, Carlos",
editor = "Duh, Kevin and
Gomez, Helena and
Bethard, Steven",
booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.naacl-long.206",
doi = "10.18653/v1/2024.naacl-long.206",
pages = "3725--3736",
abstract = "Systematic Reviews (SRs) are foundational in healthcare for synthesising evidence to inform clinical practices. Traditionally skewed towards English-language databases, SRs often exclude significant research in other languages, leading to potential biases. This study addresses this gap by focusing on Spanish, a language notably underrepresented in SRs. We present a foundational zero-shot dual information retrieval (IR) baseline system, integrating traditional retrieval methods with pre-trained language models and cross-attention re-rankers for enhanced accuracy in Spanish biomedical literature retrieval. Utilising the LILACS database, known for its comprehensive coverage of Latin American and Caribbean biomedical literature, we evaluate the approach with three real-life case studies in Spanish SRs. The findings demonstrate the system{'}s efficacy and underscore the importance of query formulation. This study contributes to the field of IR by promoting language inclusivity and supports the development of more comprehensive and globally representative healthcare guidelines.",
}
| Systematic Reviews (SRs) are foundational in healthcare for synthesising evidence to inform clinical practices. Traditionally skewed towards English-language databases, SRs often exclude significant research in other languages, leading to potential biases. This study addresses this gap by focusing on Spanish, a language notably underrepresented in SRs. We present a foundational zero-shot dual information retrieval (IR) baseline system, integrating traditional retrieval methods with pre-trained language models and cross-attention re-rankers for enhanced accuracy in Spanish biomedical literature retrieval. Utilising the LILACS database, known for its comprehensive coverage of Latin American and Caribbean biomedical literature, we evaluate the approach with three real-life case studies in Spanish SRs. The findings demonstrate the system{'}s efficacy and underscore the importance of query formulation. This study contributes to the field of IR by promoting language inclusivity and supports the development of more comprehensive and globally representative healthcare guidelines. | [
"Ofori-Boateng, Regina",
"Aceves-Martins, Magaly",
"Wiratunga, Nirmalie",
"Moreno-Garcia, Carlos"
] | A Zero-Shot Monolingual Dual Stage Information Retrieval System for Spanish Biomedical Systematic Literature Reviews | naacl-long.206 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2024.naacl-long.207.bib | https://aclanthology.org/2024.naacl-long.207/ | @inproceedings{siyuan-etal-2024-layoutpointer,
title = "{L}ayout{P}ointer: A Spatial-Context Adaptive Pointer Network for Visual Information Extraction",
author = "Siyuan, Huang and
Xiong, Yongping and
Guibin, Wu",
editor = "Duh, Kevin and
Gomez, Helena and
Bethard, Steven",
booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.naacl-long.207",
doi = "10.18653/v1/2024.naacl-long.207",
pages = "3737--3748",
abstract = "Visual Information Extraction (VIE), as a crucial task of Document Intelligence, involves two primary sub-tasks: Semantic Entity Recognition (SER) and Relation Extraction (RE). However, VIE faces two significant challenges. Firstly, most existing models inadequately utilize spatial information of entities, often failing to predict connections or incorrectly linking spatially distant entities. Secondly, the improper input order of tokens challenges in extracting complete entity pairs from documents with multi-line entities when text is extracted via PDF parser or OCR. To address these challenges, we propose LayoutPointer, a Spatial-Context Adaptive Pointer Network. LayoutPointer explicitly enhances spatial-context relationships by incorporating 2D relative position information and adaptive spatial constraints within self-attention. Furthermore, we recast the RE task as a specialized cycle detection problem, employing a unique tail-to-head pointer to restore the semantic order among multi-line entities. To better evaluate the effectiveness of our proposed method, we reconstruct a multi-line dataset named MLFUD, which more accurately reflects real-world scenarios. Fine-tuning experimental results on FUNSD, XFUND, and MLFUD datasets demonstrate that LayoutPointer significantly outperforms existing state-of-the-art methods in F1 scores for RE tasks (e.g., 5.71{\%} improvement on XFUND using LayoutPointer$_{\text{BASE-X}}$ over LayoutLMv3).",
}
| Visual Information Extraction (VIE), as a crucial task of Document Intelligence, involves two primary sub-tasks: Semantic Entity Recognition (SER) and Relation Extraction (RE). However, VIE faces two significant challenges. Firstly, most existing models inadequately utilize spatial information of entities, often failing to predict connections or incorrectly linking spatially distant entities. Secondly, the improper input order of tokens poses challenges in extracting complete entity pairs from documents with multi-line entities when text is extracted via a PDF parser or OCR. To address these challenges, we propose LayoutPointer, a Spatial-Context Adaptive Pointer Network. LayoutPointer explicitly enhances spatial-context relationships by incorporating 2D relative position information and adaptive spatial constraints within self-attention. Furthermore, we recast the RE task as a specialized cycle detection problem, employing a unique tail-to-head pointer to restore the semantic order among multi-line entities. To better evaluate the effectiveness of our proposed method, we reconstruct a multi-line dataset named MLFUD, which more accurately reflects real-world scenarios. Fine-tuning experimental results on FUNSD, XFUND, and MLFUD datasets demonstrate that LayoutPointer significantly outperforms existing state-of-the-art methods in F1 scores for RE tasks (e.g., 5.71{\%} improvement on XFUND using LayoutPointer$_{\text{BASE-X}}$ over LayoutLMv3). | [
"Siyuan, Huang",
"Xiong, Yongping",
"Guibin, Wu"
] | LayoutPointer: A Spatial-Context Adaptive Pointer Network for Visual Information Extraction | naacl-long.207 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2024.naacl-long.208.bib | https://aclanthology.org/2024.naacl-long.208/ | @inproceedings{rosati-etal-2024-long,
title = "Long-form evaluation of model editing",
author = "Rosati, Domenic and
Gonzales, Robie and
Chen, Jinkun and
Yu, Xuemin and
Kayani, Yahya and
Rudzicz, Frank and
Sajjad, Hassan",
editor = "Duh, Kevin and
Gomez, Helena and
Bethard, Steven",
booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.naacl-long.208",
doi = "10.18653/v1/2024.naacl-long.208",
pages = "3749--3780",
abstract = "Evaluations of model editing, a technique for changing the factual knowledge held by Large Language Models (LLMs), currently only use the {`}next few token{'} completions after a prompt. As a result, the impact of these methods on longer natural language generation is largely unknown. We introduce long-form evaluation of model editing ($\textbf{\textit{LEME}}$) a novel evaluation protocol that measures the efficacy and impact of model editing in long-form generative settings. Our protocol consists of a machine-rated survey and a classifier which correlates well with human ratings. Importantly, we find that our protocol has very little relationship with previous short-form metrics (despite being designed to extend efficacy, generalization, locality, and portability into a long-form setting), indicating that our method introduces a novel set of dimensions for understanding model editing methods. Using this protocol, we benchmark a number of model editing techniques and present several findings including that, while some methods (ROME and MEMIT) perform well in making consistent edits within a limited scope, they suffer much more from factual drift than other methods. Finally, we present a qualitative analysis that illustrates common failure modes in long-form generative settings including internal consistency, lexical cohesion, and locality issues.",
}
| Evaluations of model editing, a technique for changing the factual knowledge held by Large Language Models (LLMs), currently only use the {`}next few token{'} completions after a prompt. As a result, the impact of these methods on longer natural language generation is largely unknown. We introduce long-form evaluation of model editing ($\textbf{\textit{LEME}}$), a novel evaluation protocol that measures the efficacy and impact of model editing in long-form generative settings. Our protocol consists of a machine-rated survey and a classifier which correlates well with human ratings. Importantly, we find that our protocol has very little relationship with previous short-form metrics (despite being designed to extend efficacy, generalization, locality, and portability into a long-form setting), indicating that our method introduces a novel set of dimensions for understanding model editing methods. Using this protocol, we benchmark a number of model editing techniques and present several findings including that, while some methods (ROME and MEMIT) perform well in making consistent edits within a limited scope, they suffer much more from factual drift than other methods. Finally, we present a qualitative analysis that illustrates common failure modes in long-form generative settings including internal consistency, lexical cohesion, and locality issues. | [
"Rosati, Domenic",
"Gonzales, Robie",
"Chen, Jinkun",
"Yu, Xuemin",
"Kayani, Yahya",
"Rudzicz, Frank",
"Sajjad, Hassan"
] | Long-form evaluation of model editing | naacl-long.208 | Oral | 2402.09394 | [
"https://github.com/domenicrosati/longform-evaluation-model-editing"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2024.naacl-long.209.bib | https://aclanthology.org/2024.naacl-long.209/ | @inproceedings{jin-etal-2024-analyzing,
title = "Analyzing the Role of Semantic Representations in the Era of Large Language Models",
author = {Jin, Zhijing and
Chen, Yuen and
Gonzalez Adauto, Fernando and
Liu, Jiarui and
Zhang, Jiayi and
Michael, Julian and
Sch{\"o}lkopf, Bernhard and
Diab, Mona},
editor = "Duh, Kevin and
Gomez, Helena and
Bethard, Steven",
booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.naacl-long.209",
doi = "10.18653/v1/2024.naacl-long.209",
pages = "3781--3798",
abstract = "Traditionally, natural language processing (NLP) models often use a rich set of features created by linguistic expertise, such as semantic representations. However, in the era of large language models (LLMs), more and more tasks are turned into generic, end-to-end sequence generation problems. In this paper, we investigate the question: what is the role of semantic representations in the era of LLMs? Specifically, we investigate the effect of Abstract Meaning Representation (AMR) across five diverse NLP tasks. We propose an AMR-driven chain-of-thought prompting method, which we call AMRCOT, and find that it generally hurts performance more than it helps. To investigate what AMR may have to offer on these tasks, we conduct a series of analysis experiments. We find that it is difficult to predict which input examples AMR may help or hurt on, but errors tend to arise with multi-word expressions, named entities, and in the final inference step where the LLM must connect its reasoning over the AMR to its prediction. We recommend focusing on these areas for future work in semantic representations for LLMs. Our code: https://github.com/causalNLP/amr{\_}llm",
}
| Traditionally, natural language processing (NLP) models often use a rich set of features created by linguistic expertise, such as semantic representations. However, in the era of large language models (LLMs), more and more tasks are turned into generic, end-to-end sequence generation problems. In this paper, we investigate the question: what is the role of semantic representations in the era of LLMs? Specifically, we investigate the effect of Abstract Meaning Representation (AMR) across five diverse NLP tasks. We propose an AMR-driven chain-of-thought prompting method, which we call AMRCOT, and find that it generally hurts performance more than it helps. To investigate what AMR may have to offer on these tasks, we conduct a series of analysis experiments. We find that it is difficult to predict which input examples AMR may help or hurt on, but errors tend to arise with multi-word expressions, named entities, and in the final inference step where the LLM must connect its reasoning over the AMR to its prediction. We recommend focusing on these areas for future work in semantic representations for LLMs. Our code: https://github.com/causalNLP/amr{\_}llm | [
"Jin, Zhijing",
"Chen, Yuen",
"Gonzalez Adauto, Fern",
"o",
"Liu, Jiarui",
"Zhang, Jiayi",
"Michael, Julian",
"Sch{\\\"o}lkopf, Bernhard",
"Diab, Mona"
] | Analyzing the Role of Semantic Representations in the Era of Large Language Models | naacl-long.209 | Poster | 2405.01502 | [
"https://github.com/causalnlp/amr_llm"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2024.naacl-long.210.bib | https://aclanthology.org/2024.naacl-long.210/ | @inproceedings{li-etal-2024-traq,
title = "{TRAQ}: Trustworthy Retrieval Augmented Question Answering via Conformal Prediction",
author = "Li, Shuo and
Park, Sangdon and
Lee, Insup and
Bastani, Osbert",
editor = "Duh, Kevin and
Gomez, Helena and
Bethard, Steven",
booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.naacl-long.210",
doi = "10.18653/v1/2024.naacl-long.210",
pages = "3799--3821",
abstract = "When applied to open-domain question answering, large language models (LLMs) frequently generate incorrect responses based on made-up facts, which are called \textit{hallucinations}. Retrieval augmented generation (RAG) is a promising strategy to avoid hallucinations, but it does not provide guarantees on its correctness. To address this challenge, we propose the Trustworthy Retrieval Augmented Question Answering, or *TRAQ*, which provides the first end-to-end statistical correctness guarantee for RAG. TRAQ uses conformal prediction, a statistical technique for constructing prediction sets that are guaranteed to contain the semantically correct response with high probability. Additionally, TRAQ leverages Bayesian optimization to minimize the size of the constructed sets. In an extensive experimental evaluation, we demonstrate that TRAQ provides the desired correctness guarantee while reducing prediction set size by 16.2{\%} on average compared to an ablation. The implementation is available: [https://github.com/shuoli90/TRAQ](https://github.com/shuoli90/TRAQ).",
}
| When applied to open-domain question answering, large language models (LLMs) frequently generate incorrect responses based on made-up facts, which are called \textit{hallucinations}. Retrieval augmented generation (RAG) is a promising strategy to avoid hallucinations, but it does not provide guarantees on its correctness. To address this challenge, we propose the Trustworthy Retrieval Augmented Question Answering, or *TRAQ*, which provides the first end-to-end statistical correctness guarantee for RAG. TRAQ uses conformal prediction, a statistical technique for constructing prediction sets that are guaranteed to contain the semantically correct response with high probability. Additionally, TRAQ leverages Bayesian optimization to minimize the size of the constructed sets. In an extensive experimental evaluation, we demonstrate that TRAQ provides the desired correctness guarantee while reducing prediction set size by 16.2{\%} on average compared to an ablation. The implementation is available: [https://github.com/shuoli90/TRAQ](https://github.com/shuoli90/TRAQ). | [
"Li, Shuo",
"Park, Sangdon",
"Lee, Insup",
"Bastani, Osbert"
] | TRAQ: Trustworthy Retrieval Augmented Question Answering via Conformal Prediction | naacl-long.210 | Poster | 2307.04642 | [
"https://github.com/shuoli90/traq"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2024.naacl-long.211.bib | https://aclanthology.org/2024.naacl-long.211/ | @inproceedings{zhao-etal-2024-mapguide,
title = "{M}ap{G}uide: A Simple yet Effective Method to Reconstruct Continuous Language from Brain Activities",
author = "Zhao, Xinpei and
Sun, Jingyuan and
Wang, Shaonan and
Ye, Jing and
Zhang, Xiaohan and
Zong, Chengqing",
editor = "Duh, Kevin and
Gomez, Helena and
Bethard, Steven",
booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.naacl-long.211",
doi = "10.18653/v1/2024.naacl-long.211",
pages = "3822--3832",
abstract = "Decoding continuous language from brain activity is a formidable yet promising field of research. It is particularly significant for aiding people with speech disabilities to communicate through brain signals. This field addresses the complex task of mapping brain signals to text. The previous best attempt reverse-engineered this process in an indirect way: it began by learning to encode brain activity from text and then guided text generation by aligning with predicted brain responses. In contrast, we propose a simple yet effective method that guides text reconstruction by directly comparing them with the predicted text embeddings mapped from brain activities. Comprehensive experiments reveal that our method significantly outperforms the current state-of-the-art model, showing average improvements of 77{\%} and 54{\%} on BLEU and METEOR scores. We further validate the proposed modules through detailed ablation studies and case analyses and highlight a critical correlation: the more precisely we map brain activities to text embeddings, the better the text reconstruction results. Such insight can simplify the task of reconstructing language from brain activities for future work, emphasizing the importance of improving brain-to-text-embedding mapping techniques.",
}
| Decoding continuous language from brain activity is a formidable yet promising field of research. It is particularly significant for aiding people with speech disabilities to communicate through brain signals. This field addresses the complex task of mapping brain signals to text. The previous best attempt reverse-engineered this process in an indirect way: it began by learning to encode brain activity from text and then guided text generation by aligning with predicted brain responses. In contrast, we propose a simple yet effective method that guides text reconstruction by directly comparing the reconstructed text with the predicted text embeddings mapped from brain activities. Comprehensive experiments reveal that our method significantly outperforms the current state-of-the-art model, showing average improvements of 77{\%} and 54{\%} on BLEU and METEOR scores. We further validate the proposed modules through detailed ablation studies and case analyses and highlight a critical correlation: the more precisely we map brain activities to text embeddings, the better the text reconstruction results. Such insight can simplify the task of reconstructing language from brain activities for future work, emphasizing the importance of improving brain-to-text-embedding mapping techniques. | [
"Zhao, Xinpei",
"Sun, Jingyuan",
"Wang, Shaonan",
"Ye, Jing",
"Zhang, Xiaohan",
"Zong, Chengqing"
] | MapGuide: A Simple yet Effective Method to Reconstruct Continuous Language from Brain Activities | naacl-long.211 | Poster | 2403.17516 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2024.naacl-long.212.bib | https://aclanthology.org/2024.naacl-long.212/ | @inproceedings{munnangi-etal-2024-fly,
title = "On-the-fly Definition Augmentation of {LLM}s for Biomedical {NER}",
author = "Munnangi, Monica and
Feldman, Sergey and
Wallace, Byron and
Amir, Silvio and
Hope, Tom and
Naik, Aakanksha",
editor = "Duh, Kevin and
Gomez, Helena and
Bethard, Steven",
booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.naacl-long.212",
doi = "10.18653/v1/2024.naacl-long.212",
pages = "3833--3854",
abstract = "Despite their general capabilities, LLMs still struggle on biomedicalNER tasks, which are difficult due to the presence of specialized terminology and lack of training data. In this work we set out to improve LLM performance on biomedical NER in limited data settings via a new knowledge augmentation approach which incorporates definitions of relevant concepts on-the-fly. During this process, to provide a test bed for knowledge augmentation, we perform a comprehensive exploration of prompting strategies. Our experiments show that definition augmentation is useful for both open source and closed LLMs.For example, it leads to a relative improvement of 15{\%} (on average) in GPT-4 performance (F1) across all (six) of our test datasets. We conduct extensive ablations and analyses to demonstrate that our performance improvements stem from adding relevant definitional knowledge. We find that careful prompting strategies also improve LLM performance, allowing them to outperform fine-tuned language models in few-shot settings. To facilitate future research in this direction, we release our code at https://github.com/allenai/beacon.",
}
| Despite their general capabilities, LLMs still struggle on biomedical NER tasks, which are difficult due to the presence of specialized terminology and lack of training data. In this work we set out to improve LLM performance on biomedical NER in limited data settings via a new knowledge augmentation approach which incorporates definitions of relevant concepts on-the-fly. During this process, to provide a test bed for knowledge augmentation, we perform a comprehensive exploration of prompting strategies. Our experiments show that definition augmentation is useful for both open source and closed LLMs. For example, it leads to a relative improvement of 15{\%} (on average) in GPT-4 performance (F1) across all (six) of our test datasets. We conduct extensive ablations and analyses to demonstrate that our performance improvements stem from adding relevant definitional knowledge. We find that careful prompting strategies also improve LLM performance, allowing them to outperform fine-tuned language models in few-shot settings. To facilitate future research in this direction, we release our code at https://github.com/allenai/beacon. | [
"Munnangi, Monica",
"Feldman, Sergey",
"Wallace, Byron",
"Amir, Silvio",
"Hope, Tom",
"Naik, Aakanksha"
] | On-the-fly Definition Augmentation of LLMs for Biomedical NER | naacl-long.212 | Poster | 2404.00152 | [
"https://github.com/allenai/beacon"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2024.naacl-long.213.bib | https://aclanthology.org/2024.naacl-long.213/ | @inproceedings{li-etal-2024-land,
title = "This Land is {Your, My} Land: Evaluating Geopolitical Bias in Language Models through Territorial Disputes",
author = "Li, Bryan and
Haider, Samar and
Callison-Burch, Chris",
editor = "Duh, Kevin and
Gomez, Helena and
Bethard, Steven",
booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.naacl-long.213",
doi = "10.18653/v1/2024.naacl-long.213",
pages = "3855--3871",
abstract = "Do the Spratly Islands belong to China, the Philippines, or Vietnam? A pretrained large language model (LLM) may answer differently if asked in the languages of each claimant country: Chinese, Tagalog, or Vietnamese. This contrasts with a multilingual human, who would likely answer consistently. In this paper, we show that LLMs recall certain geographical knowledge inconsistently when queried in different languages{---}a phenomenon we term geopolitical bias. As a targeted case study, we consider territorial disputes, an inherently controversial and multilingual task. We introduce BorderLines, a dataset of territorial disputes which covers 251 territories, each associated with a set of multiple-choice questions in the languages of each claimant country (49 languages in total). We also propose a suite of evaluation metrics to precisely quantify bias and consistency in responses across different languages. We then evaluate various multilingual LLMs on our dataset and metrics to probe their internal knowledge and use the proposed metrics to discover numerous inconsistencies in how these models respond in different languages. Finally, we explore several prompt modification strategies, aiming to either amplify or mitigate geopolitical bias, which highlights how brittle LLMs are and how they tailor their responses depending on cues from the interaction context. Our code and data are available at https://github.com/manestay/borderlines.",
}
| Do the Spratly Islands belong to China, the Philippines, or Vietnam? A pretrained large language model (LLM) may answer differently if asked in the languages of each claimant country: Chinese, Tagalog, or Vietnamese. This contrasts with a multilingual human, who would likely answer consistently. In this paper, we show that LLMs recall certain geographical knowledge inconsistently when queried in different languages{---}a phenomenon we term geopolitical bias. As a targeted case study, we consider territorial disputes, an inherently controversial and multilingual task. We introduce BorderLines, a dataset of territorial disputes which covers 251 territories, each associated with a set of multiple-choice questions in the languages of each claimant country (49 languages in total). We also propose a suite of evaluation metrics to precisely quantify bias and consistency in responses across different languages. We then evaluate various multilingual LLMs on our dataset and metrics to probe their internal knowledge and use the proposed metrics to discover numerous inconsistencies in how these models respond in different languages. Finally, we explore several prompt modification strategies, aiming to either amplify or mitigate geopolitical bias, which highlights how brittle LLMs are and how they tailor their responses depending on cues from the interaction context. Our code and data are available at https://github.com/manestay/borderlines. | [
"Li, Bryan",
"Haider, Samar",
"Callison-Burch, Chris"
] | This Land is Your, My Land: Evaluating Geopolitical Bias in Language Models through Territorial Disputes | naacl-long.213 | Oral | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2024.naacl-long.214.bib | https://aclanthology.org/2024.naacl-long.214/ | @inproceedings{tan-etal-2024-set,
title = "Set-Aligning Framework for Auto-Regressive Event Temporal Graph Generation",
author = "Tan, Xingwei and
Zhou, Yuxiang and
Pergola, Gabriele and
He, Yulan",
editor = "Duh, Kevin and
Gomez, Helena and
Bethard, Steven",
booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.naacl-long.214",
doi = "10.18653/v1/2024.naacl-long.214",
pages = "3872--3892",
abstract = "Event temporal graphs have been shown as convenient and effective representations of complex temporal relations between events in text. Recent studies, which employ pre-trained language models to auto-regressively generate linearised graphs for constructing event temporal graphs, have shown promising results. However, these methods have often led to suboptimal graph generation as the linearised graphs exhibit set characteristics which are instead treated sequentially by language models. This discrepancy stems from the conventional text generation objectives, leading to erroneous penalisation of correct predictions caused by the misalignment of elements in target sequences. To address these challenges, we reframe the task as a conditional set generation problem, proposing a Set-aligning Framework tailored for the effective utilisation of Large Language Models (LLMs). The framework incorporates data augmentations and set-property regularisations designed to alleviate text generation loss penalties associated with the linearised graph edge sequences, thus encouraging the generation of more relation edges. Experimental results show that our framework surpasses existing baselines for event temporal graph generation. Furthermore, under zero-shot settings, the structural knowledge introduced through our framework notably improves model generalisation, particularly when the training examples available are limited.",
}
| Event temporal graphs have been shown to be convenient and effective representations of complex temporal relations between events in text. Recent studies, which employ pre-trained language models to auto-regressively generate linearised graphs for constructing event temporal graphs, have shown promising results. However, these methods have often led to suboptimal graph generation as the linearised graphs exhibit set characteristics which are instead treated sequentially by language models. This discrepancy stems from the conventional text generation objectives, leading to erroneous penalisation of correct predictions caused by the misalignment of elements in target sequences. To address these challenges, we reframe the task as a conditional set generation problem, proposing a Set-aligning Framework tailored for the effective utilisation of Large Language Models (LLMs). The framework incorporates data augmentations and set-property regularisations designed to alleviate text generation loss penalties associated with the linearised graph edge sequences, thus encouraging the generation of more relation edges. Experimental results show that our framework surpasses existing baselines for event temporal graph generation. Furthermore, under zero-shot settings, the structural knowledge introduced through our framework notably improves model generalisation, particularly when the training examples available are limited. | [
"Tan, Xingwei",
"Zhou, Yuxiang",
"Pergola, Gabriele",
"He, Yulan"
] | Set-Aligning Framework for Auto-Regressive Event Temporal Graph Generation | naacl-long.214 | Poster | 2404.01532 | [
"https://github.com/xingwei-warwick/set-aligning-event-temporal-graph-generation"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2024.naacl-long.215.bib | https://aclanthology.org/2024.naacl-long.215/ | @inproceedings{zhang-etal-2024-languageflow,
title = "{L}anguage{F}low: Advancing Diffusion Language Generation with Probabilistic Flows",
author = "Zhang, Shujian and
Wu, Lemeng and
Gong, Chengyue and
Liu, Xingchao",
editor = "Duh, Kevin and
Gomez, Helena and
Bethard, Steven",
booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.naacl-long.215",
doi = "10.18653/v1/2024.naacl-long.215",
pages = "3893--3905",
abstract = "Recent works have demonstrated success in controlling sentence attributes (e.g., sentiment) and structure (e.g., syntactic structure) based on the diffusion language model. A key component that drives theimpressive performance for generating high-quality samples from noise is iteratively denoise for thousands of steps. While beneficial, the complexity of starting from the noise and the learning steps has limited its implementation to many NLP real-world applications. This paper proposes Language Rectified Flow (LF).Our method is based on the reformulation of the standard probabilistic flow models.Language rectified flow learns (neural) ordinary differentialequation models to transport between the source distribution and the target distribution, henceproviding a unified and effective solution to generative modeling and domain transfer.From the source distribution, our language rectified flow yields fast simulation and effectively decreases the inference time. Experiments on three challenging fine-grained control tasks and multiple high-quality text editing show that our method consistently outperforms its baselines. Extensive experiments and ablation studies demonstrate that our method can be general, effective, and beneficial for many NLP tasks.",
}
| Recent works have demonstrated success in controlling sentence attributes (e.g., sentiment) and structure (e.g., syntactic structure) based on the diffusion language model. A key component that drives the impressive performance for generating high-quality samples from noise is iteratively denoising for thousands of steps. While beneficial, the complexity of starting from the noise and the learning steps has limited its implementation to many NLP real-world applications. This paper proposes Language Rectified Flow (LF). Our method is based on the reformulation of the standard probabilistic flow models. Language rectified flow learns (neural) ordinary differential equation models to transport between the source distribution and the target distribution, hence providing a unified and effective solution to generative modeling and domain transfer. From the source distribution, our language rectified flow yields fast simulation and effectively decreases the inference time. Experiments on three challenging fine-grained control tasks and multiple high-quality text editing tasks show that our method consistently outperforms its baselines. Extensive experiments and ablation studies demonstrate that our method can be general, effective, and beneficial for many NLP tasks. | [
"Zhang, Shujian",
"Wu, Lemeng",
"Gong, Chengyue",
"Liu, Xingchao"
] | LanguageFlow: Advancing Diffusion Language Generation with Probabilistic Flows | naacl-long.215 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2024.naacl-long.216.bib | https://aclanthology.org/2024.naacl-long.216/ | @inproceedings{patel-etal-2024-towards,
title = "Towards Improved Multi-Source Attribution for Long-Form Answer Generation",
author = "Patel, Nilay and
Subramanian, Shivashankar and
Garg, Siddhant and
Banerjee, Pratyay and
Misra, Amita",
editor = "Duh, Kevin and
Gomez, Helena and
Bethard, Steven",
booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.naacl-long.216",
doi = "10.18653/v1/2024.naacl-long.216",
pages = "3906--3919",
abstract = "Teaching large language models (LLMs) to generate text with attribution to evidence sources can reduce hallucinations, improve verifiability in question answering systems (QA), and increase reliability of retrieval augmented LLMs. Despite gaining increasing popularity for usage in QA systems and search engines, current LLMs struggle with attribution for long-form responses which require reasoning over multiple evidence sources. To address this, in this paper we aim to improve the attribution capability of LLMs for long-form answer generation to multiple sources, with multiple citations per sentence. However, data for training multi-source attributable QA systems is difficult and expensive to annotate, and therefore scarce. To overcome this challenge, we transform existing QA datasets for this task (MultiAttr), and empirically demonstrate, on a wide range of attribution benchmark datasets, that fine-tuning on MultiAttr provides significant improvements over training only on the target QA domain. Lastly, to fill a gap in existing benchmarks, we present a multi-source attribution dataset containing multi-paragraph answers, PolitiICite, based on PolitiFact articles that discuss events closely related to implementation statuses of election promises.",
}
| Teaching large language models (LLMs) to generate text with attribution to evidence sources can reduce hallucinations, improve verifiability in question answering systems (QA), and increase reliability of retrieval augmented LLMs. Despite gaining increasing popularity for usage in QA systems and search engines, current LLMs struggle with attribution for long-form responses which require reasoning over multiple evidence sources. To address this, in this paper we aim to improve the attribution capability of LLMs for long-form answer generation to multiple sources, with multiple citations per sentence. However, data for training multi-source attributable QA systems is difficult and expensive to annotate, and therefore scarce. To overcome this challenge, we transform existing QA datasets for this task (MultiAttr), and empirically demonstrate, on a wide range of attribution benchmark datasets, that fine-tuning on MultiAttr provides significant improvements over training only on the target QA domain. Lastly, to fill a gap in existing benchmarks, we present a multi-source attribution dataset containing multi-paragraph answers, PolitiICite, based on PolitiFact articles that discuss events closely related to implementation statuses of election promises. | [
"Patel, Nilay",
"Subramanian, Shivashankar",
"Garg, Siddhant",
"Banerjee, Pratyay",
"Misra, Amita"
] | Towards Improved Multi-Source Attribution for Long-Form Answer Generation | naacl-long.216 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2024.naacl-long.217.bib | https://aclanthology.org/2024.naacl-long.217/ | @inproceedings{carranza-etal-2024-synthetic,
title = "Synthetic Query Generation for Privacy-Preserving Deep Retrieval Systems using Differentially Private Language Models",
author = "Carranza, Aldo and
Farahani, Rezsa and
Ponomareva, Natalia and
Kurakin, Alexey and
Jagielski, Matthew and
Nasr, Milad",
editor = "Duh, Kevin and
Gomez, Helena and
Bethard, Steven",
booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.naacl-long.217",
doi = "10.18653/v1/2024.naacl-long.217",
pages = "3920--3930",
abstract = "We address the challenge of ensuring differential privacy (DP) guarantees in training deep retrieval systems. Training these systems often involves the use of contrastive-style losses, which are typically non-per-example decomposable, making them difficult to directly DP-train with since common techniques require per-example gradients. To address this issue, we propose an approach that prioritizes ensuring query privacy prior to training a deep retrieval system. Our method employs DP language models (LMs) to generate private synthetic queries representative of the original data. These synthetic queries can be used in downstream retrieval system training without compromising privacy. Our approach demonstrates a significant enhancement in retrieval quality compared to direct DP-training, all while maintaining query-level privacy guarantees. This work highlights the potential of harnessing LMs to overcome limitations in standard DP-training methods.",
}
| We address the challenge of ensuring differential privacy (DP) guarantees in training deep retrieval systems. Training these systems often involves the use of contrastive-style losses, which are typically non-per-example decomposable, making them difficult to directly DP-train with since common techniques require per-example gradients. To address this issue, we propose an approach that prioritizes ensuring query privacy prior to training a deep retrieval system. Our method employs DP language models (LMs) to generate private synthetic queries representative of the original data. These synthetic queries can be used in downstream retrieval system training without compromising privacy. Our approach demonstrates a significant enhancement in retrieval quality compared to direct DP-training, all while maintaining query-level privacy guarantees. This work highlights the potential of harnessing LMs to overcome limitations in standard DP-training methods. | [
"Carranza, Aldo",
"Farahani, Rezsa",
"Ponomareva, Natalia",
"Kurakin, Alexey",
"Jagielski, Matthew",
"Nasr, Milad"
] | Synthetic Query Generation for Privacy-Preserving Deep Retrieval Systems using Differentially Private Language Models | naacl-long.217 | Poster | 2305.05973 | [
""
] | https://huggingface.co/papers/2305.05973 | 2 | 1 | 0 | 6 | 1 | [] | [] | [] |
https://aclanthology.org/2024.naacl-long.218.bib | https://aclanthology.org/2024.naacl-long.218/ | @inproceedings{nath-etal-2024-okay,
title = "Okay, Let{'}s Do This! Modeling Event Coreference with Generated Rationales and Knowledge Distillation",
author = "Nath, Abhijnan and
Manafi Avari, Shadi and
Chelle, Avyakta and
Krishnaswamy, Nikhil",
editor = "Duh, Kevin and
Gomez, Helena and
Bethard, Steven",
booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.naacl-long.218",
doi = "10.18653/v1/2024.naacl-long.218",
pages = "3931--3946",
abstract = "In NLP, Event Coreference Resolution (ECR) is the task of connecting event clusters that refer to the same underlying real-life event, usually via neural systems. In this work, we investigate using abductive free-text rationales (FTRs) generated by modern autoregressive LLMs as distant supervision of smaller student models for cross-document coreference (CDCR) of events. We implement novel rationale-oriented event clustering and knowledge distillation methods for event coreference scoring that leverage enriched information from the FTRs for improved CDCR without additional annotation or expensive document clustering. Our model using coreference-specific knowledge distillation achieves SOTA $B^3$ $F_1$ on the ECB+ and GVC corpora and we establish a new baseline on the AIDA Phase 1 corpus. Our code can be found at https://github.com/csu-signal/llama{\_}cdcr.",
}
| In NLP, Event Coreference Resolution (ECR) is the task of connecting event clusters that refer to the same underlying real-life event, usually via neural systems. In this work, we investigate using abductive free-text rationales (FTRs) generated by modern autoregressive LLMs as distant supervision of smaller student models for cross-document coreference (CDCR) of events. We implement novel rationale-oriented event clustering and knowledge distillation methods for event coreference scoring that leverage enriched information from the FTRs for improved CDCR without additional annotation or expensive document clustering. Our model using coreference-specific knowledge distillation achieves SOTA $B^3$ $F_1$ on the ECB+ and GVC corpora and we establish a new baseline on the AIDA Phase 1 corpus. Our code can be found at https://github.com/csu-signal/llama{\_}cdcr. | [
"Nath, Abhijnan",
"Manafi Avari, Shadi",
"Chelle, Avyakta",
"Krishnaswamy, Nikhil"
] | Okay, Let's Do This! Modeling Event Coreference with Generated Rationales and Knowledge Distillation | naacl-long.218 | Oral | 2404.03196 | [
"https://github.com/csu-signal/llama_cdcr"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2024.naacl-long.219.bib | https://aclanthology.org/2024.naacl-long.219/ | @inproceedings{agrawal-etal-2024-knowledge,
title = "Can Knowledge Graphs Reduce Hallucinations in {LLM}s? : A Survey",
author = "Agrawal, Garima and
Kumarage, Tharindu and
Alghamdi, Zeyad and
Liu, Huan",
editor = "Duh, Kevin and
Gomez, Helena and
Bethard, Steven",
booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.naacl-long.219",
doi = "10.18653/v1/2024.naacl-long.219",
pages = "3947--3960",
abstract = "The contemporary LLMs are prone to producing hallucinations, stemming mainly from the knowledge gaps within the models. To address this critical limitation, researchers employ diverse strategies to augment the LLMs by incorporating external knowledge, aiming to reduce hallucinations and enhance reasoning accuracy. Among these strategies, leveraging knowledge graphs as a source of external information has demonstrated promising results. In this survey, we comprehensively review these knowledge-graph-based augmentation techniques in LLMs, focusing on their efficacy in mitigating hallucinations. We systematically categorize these methods into three overarching groups, offering methodological comparisons and performance evaluations. Lastly, this survey explores the current trends and challenges associated with these techniques and outlines potential avenues for future research in this emerging field.",
}
| Contemporary LLMs are prone to producing hallucinations, stemming mainly from the knowledge gaps within the models. To address this critical limitation, researchers employ diverse strategies to augment the LLMs by incorporating external knowledge, aiming to reduce hallucinations and enhance reasoning accuracy. Among these strategies, leveraging knowledge graphs as a source of external information has demonstrated promising results. In this survey, we comprehensively review these knowledge-graph-based augmentation techniques in LLMs, focusing on their efficacy in mitigating hallucinations. We systematically categorize these methods into three overarching groups, offering methodological comparisons and performance evaluations. Lastly, this survey explores the current trends and challenges associated with these techniques and outlines potential avenues for future research in this emerging field. | [
"Agrawal, Garima",
"Kumarage, Tharindu",
"Alghamdi, Zeyad",
"Liu, Huan"
] | Can Knowledge Graphs Reduce Hallucinations in LLMs? : A Survey | naacl-long.219 | Oral | 2311.07914 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2024.naacl-long.220.bib | https://aclanthology.org/2024.naacl-long.220/ | @inproceedings{ondov-etal-2024-pedagogically,
title = "Pedagogically Aligned Objectives Create Reliable Automatic Cloze Tests",
author = "Ondov, Brian and
Attal, Kush and
Demner-Fushman, Dina",
editor = "Duh, Kevin and
Gomez, Helena and
Bethard, Steven",
booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.naacl-long.220",
doi = "10.18653/v1/2024.naacl-long.220",
pages = "3961--3972",
abstract = "The cloze training objective of Masked Language Models makes them a natural choice for generating plausible distractors for human cloze questions. However, distractors must also be both distinct and incorrect, neither of which is directly addressed by existing neural methods. Evaluation of recent models has also relied largely on automated metrics, which cannot demonstrate the reliability or validity of human comprehension tests. In this work, we first formulate the pedagogically motivated objectives of plausibility, incorrectness, and distinctiveness in terms of conditional distributions from language models. Second, we present an unsupervised, interpretable method that uses these objectives to jointly optimize sets of distractors. Third, we test the reliability and validity of the resulting cloze tests compared to other methods with human participants. We find our method has stronger correlation with teacher-created comprehension tests than the state-of-the-art neural method and is more internally consistent. Our implementation is freely available and can quickly create a multiple choice cloze test from any given passage.",
}
| The cloze training objective of Masked Language Models makes them a natural choice for generating plausible distractors for human cloze questions. However, distractors must also be both distinct and incorrect, neither of which is directly addressed by existing neural methods. Evaluation of recent models has also relied largely on automated metrics, which cannot demonstrate the reliability or validity of human comprehension tests. In this work, we first formulate the pedagogically motivated objectives of plausibility, incorrectness, and distinctiveness in terms of conditional distributions from language models. Second, we present an unsupervised, interpretable method that uses these objectives to jointly optimize sets of distractors. Third, we test the reliability and validity of the resulting cloze tests compared to other methods with human participants. We find our method has stronger correlation with teacher-created comprehension tests than the state-of-the-art neural method and is more internally consistent. Our implementation is freely available and can quickly create a multiple choice cloze test from any given passage. | [
"Ondov, Brian",
"Attal, Kush",
"Demner-Fushman, Dina"
] | Pedagogically Aligned Objectives Create Reliable Automatic Cloze Tests | naacl-long.220 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2024.naacl-long.221.bib | https://aclanthology.org/2024.naacl-long.221/ | @inproceedings{hashimoto-etal-2024-take,
title = "Take One Step at a Time to Know Incremental Utility of Demonstration: An Analysis on Reranking for Few-Shot In-Context Learning",
author = "Hashimoto, Kazuma and
Raman, Karthik and
Bendersky, Michael",
editor = "Duh, Kevin and
Gomez, Helena and
Bethard, Steven",
booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.naacl-long.221",
doi = "10.18653/v1/2024.naacl-long.221",
pages = "3973--3990",
abstract = "In-Context Learning (ICL) is an emergent capability of Large Language Models (LLMs). Only a few demonstrations enable LLMs to be used as blackbox for new tasks. Previous studies have shown that using LLMs{'} outputs as labels is effective in training models to select demonstrations. Such a label is expected to estimate utility of a demonstration in ICL; however, it has not been well understood how different labeling strategies affect results on target tasks. This paper presents an analysis on different utility functions by focusing on LLMs{'} output probability given ground-truth output, and task-specific reward given LLMs{'} prediction. Unlike the previous work, we introduce a novel labeling method, incremental utility, which estimates how much incremental knowledge is brought into the LLMs by a demonstration. We conduct experiments with instruction-tuned LLMs on binary/multi-class classification, segmentation, and translation across Arabic, English, Finnish, Japanese, and Spanish. Our results show that (1) the probability is effective when the probability values are distributed across the whole value range (on the classification tasks), and (2) the downstream metric is more robust when nuanced reward values are provided with long outputs (on the segmentation and translation tasks). We then show that the proposed incremental utility further helps ICL by contrasting how the LLMs perform with and without the demonstrations.",
}
| In-Context Learning (ICL) is an emergent capability of Large Language Models (LLMs). Only a few demonstrations enable LLMs to be used as a black box for new tasks. Previous studies have shown that using LLMs{'} outputs as labels is effective in training models to select demonstrations. Such a label is expected to estimate the utility of a demonstration in ICL; however, it has not been well understood how different labeling strategies affect results on target tasks. This paper presents an analysis of different utility functions by focusing on LLMs{'} output probability given ground-truth output, and task-specific reward given LLMs{'} prediction. Unlike the previous work, we introduce a novel labeling method, incremental utility, which estimates how much incremental knowledge is brought into the LLMs by a demonstration. We conduct experiments with instruction-tuned LLMs on binary/multi-class classification, segmentation, and translation across Arabic, English, Finnish, Japanese, and Spanish. Our results show that (1) the probability is effective when the probability values are distributed across the whole value range (on the classification tasks), and (2) the downstream metric is more robust when nuanced reward values are provided with long outputs (on the segmentation and translation tasks). We then show that the proposed incremental utility further helps ICL by contrasting how the LLMs perform with and without the demonstrations. | [
"Hashimoto, Kazuma",
"Raman, Karthik",
"Bendersky, Michael"
] | Take One Step at a Time to Know Incremental Utility of Demonstration: An Analysis on Reranking for Few-Shot In-Context Learning | naacl-long.221 | Poster | 2311.09619 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2024.naacl-long.222.bib | https://aclanthology.org/2024.naacl-long.222/ | @inproceedings{han-etal-2024-lm,
title = "{LM}-Infinite: Zero-Shot Extreme Length Generalization for Large Language Models",
author = "Han, Chi and
Wang, Qifan and
Peng, Hao and
Xiong, Wenhan and
Chen, Yu and
Ji, Heng and
Wang, Sinong",
editor = "Duh, Kevin and
Gomez, Helena and
Bethard, Steven",
booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.naacl-long.222",
doi = "10.18653/v1/2024.naacl-long.222",
pages = "3991--4008",
abstract = "Today{'}s large language models (LLMs) typically train on short text segments (e.g., {\textless}4K tokens) due to the quadratic complexity of their Transformer architectures. As a result, their performance suffers drastically on inputs longer than those encountered during training, substantially limiting their applications in real-world tasks involving long contexts such as encod- ing scientific articles, code repositories, or long dialogues. Through both theoretical analysis and empirical investigation, this work identifies three major factors contributing to this length generalization failure. Our theoretical analysis reveals that commonly used techniques like using a sliding-window attention pattern or relative positional encodings are inadequate to address them. Answering these challenges, we propose LM-Infinite, a simple and effective method for enhancing LLMs{'} capabilities of handling long contexts. LM-Infinite is highly flexible and can be used with most modern LLMs off-the-shelf. Without any parameter updates, it allows LLMs pre-trained with 2K or 4K-long segments to generalize to up to 200M length inputs while retaining perplexity. It also improves performance on downstream tasks such as Passkey Retrieval and Qasper in the zero-shot setting. LM-Infinite brings substantial efficiency improvements: it achieves 2.7{\mbox{$\times$}} decoding speed up and 7.5{\mbox{$\times$}} memory saving over the original model. Our code will be publicly available upon publication.",
}
| Today{'}s large language models (LLMs) typically train on short text segments (e.g., {\textless}4K tokens) due to the quadratic complexity of their Transformer architectures. As a result, their performance suffers drastically on inputs longer than those encountered during training, substantially limiting their applications in real-world tasks involving long contexts such as encoding scientific articles, code repositories, or long dialogues. Through both theoretical analysis and empirical investigation, this work identifies three major factors contributing to this length generalization failure. Our theoretical analysis reveals that commonly used techniques like using a sliding-window attention pattern or relative positional encodings are inadequate to address them. Answering these challenges, we propose LM-Infinite, a simple and effective method for enhancing LLMs{'} capabilities of handling long contexts. LM-Infinite is highly flexible and can be used with most modern LLMs off-the-shelf. Without any parameter updates, it allows LLMs pre-trained with 2K or 4K-long segments to generalize to up to 200M length inputs while retaining perplexity. It also improves performance on downstream tasks such as Passkey Retrieval and Qasper in the zero-shot setting. LM-Infinite brings substantial efficiency improvements: it achieves 2.7{\mbox{$\times$}} decoding speed up and 7.5{\mbox{$\times$}} memory saving over the original model. Our code will be publicly available upon publication. | [
"Han, Chi",
"Wang, Qifan",
"Peng, Hao",
"Xiong, Wenhan",
"Chen, Yu",
"Ji, Heng",
"Wang, Sinong"
] | LM-Infinite: Zero-Shot Extreme Length Generalization for Large Language Models | naacl-long.222 | Poster | 2308.16137 | [
"https://github.com/Glaciohound/LM-Infinite"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2024.naacl-long.223.bib | https://aclanthology.org/2024.naacl-long.223/ | @inproceedings{sun-etal-2024-conscendi,
title = "{CONSCENDI}: A Contrastive and Scenario-Guided Distillation Approach to Guardrail Models for Virtual Assistants",
author = "Sun, Albert and
Nair, Varun and
Schumacher, Elliot and
Kannan, Anitha",
editor = "Duh, Kevin and
Gomez, Helena and
Bethard, Steven",
booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.naacl-long.223",
doi = "10.18653/v1/2024.naacl-long.223",
pages = "4009--4030",
abstract = "A wave of new task-based virtual assistants has been fueled by increasingly powerful large language models (LLMs), such as GPT-4 (OpenAI, 2023). A major challenge in deploying LLM-based virtual conversational assistants in real world settings is ensuring they operate within what is admissible for the task. To overcome this challenge, the designers of these virtual assistants rely on an independent guardrail system that verifies the virtual assistant{'}s output aligns with the constraints required for the task. However, relying on commonly used, prompt-based guardrails can be difficult to engineer correctly and comprehensively. To address these challenges, we propose CONSCENDI. We use CONSCENDI to exhaustively generate training data with two key LLM-powered components: scenario-augmented generation and contrastive training examples. When generating conversational data, we generate a set of rule-breaking scenarios, which enumerate a diverse set of high-level ways a rule can be violated. This scenario-guided approach produces a diverse training set and provides chatbot designers greater control. To generate contrastive examples, we prompt the LLM to alter conversations with violations into acceptable conversations to enable fine-grained distinctions. We then use this data, generated by CONSCENDI, to train a smaller model. We find that CONSCENDI results in guardrail models that improve over baselines in multiple dialogue domains.",
}
| A wave of new task-based virtual assistants has been fueled by increasingly powerful large language models (LLMs), such as GPT-4 (OpenAI, 2023). A major challenge in deploying LLM-based virtual conversational assistants in real world settings is ensuring they operate within what is admissible for the task. To overcome this challenge, the designers of these virtual assistants rely on an independent guardrail system that verifies the virtual assistant{'}s output aligns with the constraints required for the task. However, relying on commonly used, prompt-based guardrails can be difficult to engineer correctly and comprehensively. To address these challenges, we propose CONSCENDI. We use CONSCENDI to exhaustively generate training data with two key LLM-powered components: scenario-augmented generation and contrastive training examples. When generating conversational data, we generate a set of rule-breaking scenarios, which enumerate a diverse set of high-level ways a rule can be violated. This scenario-guided approach produces a diverse training set and provides chatbot designers greater control. To generate contrastive examples, we prompt the LLM to alter conversations with violations into acceptable conversations to enable fine-grained distinctions. We then use this data, generated by CONSCENDI, to train a smaller model. We find that CONSCENDI results in guardrail models that improve over baselines in multiple dialogue domains. | [
"Sun, Albert",
"Nair, Varun",
"Schumacher, Elliot",
"Kannan, Anitha"
] | CONSCENDI: A Contrastive and Scenario-Guided Distillation Approach to Guardrail Models for Virtual Assistants | naacl-long.223 | Poster | 2304.14364 | [
""
] | https://huggingface.co/papers/2304.14364 | 3 | 0 | 0 | 4 | 1 | [] | [] | [] |
https://aclanthology.org/2024.naacl-long.224.bib | https://aclanthology.org/2024.naacl-long.224/ | @inproceedings{yoo-etal-2024-advancing,
title = "Advancing Beyond Identification: Multi-bit Watermark for Large Language Models",
author = "Yoo, KiYoon and
Ahn, Wonhyuk and
Kwak, Nojun",
editor = "Duh, Kevin and
Gomez, Helena and
Bethard, Steven",
booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.naacl-long.224",
doi = "10.18653/v1/2024.naacl-long.224",
pages = "4031--4055",
abstract = "We show the viability of tackling misuses of large language models beyond the identification of machine-generated text. While existing zero-bit watermark methods focus on detection only, some malicious misuses demand tracing the adversary user for counteracting them. To address this, we propose Multi-bit Watermark via Position Allocation, embedding traceable multi-bit information during language model generation. Through allocating tokens onto different parts of the messages, we embed longer messages in high corruption settings without added latency. By independently embedding sub-units of messages, the proposed method outperforms the existing works in terms of robustness and latency. Leveraging the benefits of zero-bit watermarking, our method enables robust extraction of the watermark without any model access, embedding and extraction of long messages ($\geq$ 32-bit) without finetuning, and maintaining text quality, while allowing zero-bit detection all at the same time.",
}
| We show the viability of tackling misuses of large language models beyond the identification of machine-generated text. While existing zero-bit watermark methods focus on detection only, some malicious misuses demand tracing the adversary user for counteracting them. To address this, we propose Multi-bit Watermark via Position Allocation, embedding traceable multi-bit information during language model generation. Through allocating tokens onto different parts of the messages, we embed longer messages in high corruption settings without added latency. By independently embedding sub-units of messages, the proposed method outperforms the existing works in terms of robustness and latency. Leveraging the benefits of zero-bit watermarking, our method enables robust extraction of the watermark without any model access, embedding and extraction of long messages ($\geq$ 32-bit) without finetuning, and maintaining text quality, while allowing zero-bit detection all at the same time. | [
"Yoo, KiYoon",
"Ahn, Wonhyuk",
"Kwak, Nojun"
] | Advancing Beyond Identification: Multi-bit Watermark for Large Language Models | naacl-long.224 | Oral | 2308.00221 | [
"https://github.com/bangawayoo/mb-lm-watermarking"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2024.naacl-long.225.bib | https://aclanthology.org/2024.naacl-long.225/ | @inproceedings{chen-etal-2024-htccn,
title = "{HTCCN}: Temporal Causal Convolutional Networks with {H}awkes Process for Extrapolation Reasoning in Temporal Knowledge Graphs",
author = "Chen, Tingxuan and
Long, Jun and
Yang, Liu and
Wang, Zidong and
Wang, Yongheng and
Jin, Xiongnan",
editor = "Duh, Kevin and
Gomez, Helena and
Bethard, Steven",
booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.naacl-long.225",
doi = "10.18653/v1/2024.naacl-long.225",
pages = "4056--4066",
abstract = "Temporal knowledge graphs (TKGs) serve as powerful tools for storing and modeling dynamic facts, holding immense potential in anticipating future facts. Since future facts are inherently unknowable, effectively modeling the intricate temporal structure of historical facts becomes paramount for accurate prediction. However, current models often rely heavily on fact recurrence or periodicity, leading to information loss due to prolonged evolutionary processes. Notably, the occurrence of one fact always influences the likelihood of another. To this end, we propose HTCCN, a novel Hawkes process-based temporal causal convolutional network designed for temporal reasoning under extrapolation settings. HTCCN employs a temporal causal convolutional network to model the historical interdependence of facts and leverages Hawkes to model link formation processes inductively in TKGs. Importantly, HTCCN introduces dual-level dynamics to comprehensively capture the temporal evolution of facts. Rigorous experimentation on four real-world datasets underscores the superior performance of HTCCN.",
}
| Temporal knowledge graphs (TKGs) serve as powerful tools for storing and modeling dynamic facts, holding immense potential in anticipating future facts. Since future facts are inherently unknowable, effectively modeling the intricate temporal structure of historical facts becomes paramount for accurate prediction. However, current models often rely heavily on fact recurrence or periodicity, leading to information loss due to prolonged evolutionary processes. Notably, the occurrence of one fact always influences the likelihood of another. To this end, we propose HTCCN, a novel Hawkes process-based temporal causal convolutional network designed for temporal reasoning under extrapolation settings. HTCCN employs a temporal causal convolutional network to model the historical interdependence of facts and leverages the Hawkes process to model link formation processes inductively in TKGs. Importantly, HTCCN introduces dual-level dynamics to comprehensively capture the temporal evolution of facts. Rigorous experimentation on four real-world datasets underscores the superior performance of HTCCN. | [
"Chen, Tingxuan",
"Long, Jun",
"Yang, Liu",
"Wang, Zidong",
"Wang, Yongheng",
"Jin, Xiongnan"
] | HTCCN: Temporal Causal Convolutional Networks with Hawkes Process for Extrapolation Reasoning in Temporal Knowledge Graphs | naacl-long.225 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2024.naacl-long.226.bib | https://aclanthology.org/2024.naacl-long.226/ | @inproceedings{hou-etal-2024-semstamp,
title = "{S}em{S}tamp: A Semantic Watermark with Paraphrastic Robustness for Text Generation",
author = "Hou, Abe and
Zhang, Jingyu and
He, Tianxing and
Wang, Yichen and
Chuang, Yung-Sung and
Wang, Hongwei and
Shen, Lingfeng and
Van Durme, Benjamin and
Khashabi, Daniel and
Tsvetkov, Yulia",
editor = "Duh, Kevin and
Gomez, Helena and
Bethard, Steven",
booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.naacl-long.226",
doi = "10.18653/v1/2024.naacl-long.226",
pages = "4067--4082",
abstract = "Existing watermarked generation algorithms employ token-level designs and therefore, are vulnerable to paraphrase attacks. To address this issue, we introduce watermarking on the semantic representation of sentences. We propose SemStamp, a robust sentence-level semantic watermarking algorithm that uses locality-sensitive hashing (LSH) to partition the semantic space of sentences. The algorithm encodes and LSH-hashes a candidate sentence generated by a language model, and conducts rejection sampling until the sampled sentence falls in watermarked partitions in the semantic embedding space. To test the paraphrastic robustness of watermarking algorithms, we propose a {``}bigram paraphrase{''} attack that produces paraphrases with small bigram overlap with the original sentence. This attack is shown to be effective against existing token-level watermark algorithms, while posing only minor degradations to SemStamp. Experimental results show that our novel semantic watermark algorithm is not only more robust than the previous state-of-the-art method on various paraphrasers and domains, but also better at preserving the quality of generation.",
}
| Existing watermarked generation algorithms employ token-level designs and therefore, are vulnerable to paraphrase attacks. To address this issue, we introduce watermarking on the semantic representation of sentences. We propose SemStamp, a robust sentence-level semantic watermarking algorithm that uses locality-sensitive hashing (LSH) to partition the semantic space of sentences. The algorithm encodes and LSH-hashes a candidate sentence generated by a language model, and conducts rejection sampling until the sampled sentence falls in watermarked partitions in the semantic embedding space. To test the paraphrastic robustness of watermarking algorithms, we propose a {``}bigram paraphrase{''} attack that produces paraphrases with small bigram overlap with the original sentence. This attack is shown to be effective against existing token-level watermark algorithms, while posing only minor degradations to SemStamp. Experimental results show that our novel semantic watermark algorithm is not only more robust than the previous state-of-the-art method on various paraphrasers and domains, but also better at preserving the quality of generation. | [
"Hou, Abe",
"Zhang, Jingyu",
"He, Tianxing",
"Wang, Yichen",
"Chuang, Yung-Sung",
"Wang, Hongwei",
"Shen, Lingfeng",
"Van Durme, Benjamin",
"Khashabi, Daniel",
"Tsvetkov, Yulia"
] | SemStamp: A Semantic Watermark with Paraphrastic Robustness for Text Generation | naacl-long.226 | Poster | 2310.03991 | [
"https://github.com/bohanhou14/semstamp"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2024.naacl-long.227.bib | https://aclanthology.org/2024.naacl-long.227/ | @inproceedings{maab-etal-2024-media,
title = "Media Bias Detection Across Families of Language Models",
author = "Maab, Iffat and
Marrese-Taylor, Edison and
Pad{\'o}, Sebastian and
Matsuo, Yutaka",
editor = "Duh, Kevin and
Gomez, Helena and
Bethard, Steven",
booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.naacl-long.227",
doi = "10.18653/v1/2024.naacl-long.227",
pages = "4083--4098",
abstract = "Bias in reporting can influence the public{'}s opinion on relevant societal issues. Examples include informational bias (selective presentation of content) and lexical bias (specific framing of content through linguistic choices). The recognition of media bias is arguably an area where NLP can contribute to the {``}social good{''}. Traditional NLP models have shown good performance in classifying media bias, but require careful model design and extensive tuning. In this paper, we ask how well prompting of large language models can recognize media bias. Through an extensive empirical study including a wide selection of pre-trained models, we find that prompt-based techniques can deliver comparable performance to traditional models with greatly reduced effort and that, similar to traditional models, the availability of context substantially improves results. We further show that larger models can leverage different kinds of context simultaneously, obtaining further performance improvements.",
}
| Bias in reporting can influence the public{'}s opinion on relevant societal issues. Examples include informational bias (selective presentation of content) and lexical bias (specific framing of content through linguistic choices). The recognition of media bias is arguably an area where NLP can contribute to the {``}social good{''}. Traditional NLP models have shown good performance in classifying media bias, but require careful model design and extensive tuning. In this paper, we ask how well prompting of large language models can recognize media bias. Through an extensive empirical study including a wide selection of pre-trained models, we find that prompt-based techniques can deliver comparable performance to traditional models with greatly reduced effort and that, similar to traditional models, the availability of context substantially improves results. We further show that larger models can leverage different kinds of context simultaneously, obtaining further performance improvements. | [
"Maab, Iffat",
"Marrese-Taylor, Edison",
"Pad{\\'o}, Sebastian",
"Matsuo, Yutaka"
] | Media Bias Detection Across Families of Language Models | naacl-long.227 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2024.naacl-long.228.bib | https://aclanthology.org/2024.naacl-long.228/ | @inproceedings{kong-etal-2024-better,
title = "Better Zero-Shot Reasoning with Role-Play Prompting",
author = "Kong, Aobo and
Zhao, Shiwan and
Chen, Hao and
Li, Qicheng and
Qin, Yong and
Sun, Ruiqi and
Zhou, Xin and
Wang, Enzhi and
Dong, Xiaohang",
editor = "Duh, Kevin and
Gomez, Helena and
Bethard, Steven",
booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.naacl-long.228",
doi = "10.18653/v1/2024.naacl-long.228",
pages = "4099--4113",
abstract = "Modern large language models (LLMs) exhibit a remarkable capacity for role-playing, enabling them to embody not only human characters but also non-human entities. This versatility allows them to simulate complex human-like interactions and behaviors within various contexts, as well as to emulate specific objects or systems. While these capabilities have enhanced user engagement and introduced novel modes of interaction, the influence of role-playing on LLMs{'} reasoning abilities remains underexplored. In this study, we introduce a strategically designed role-play prompting methodology and assess its performance under the zero-shot setting across twelve diverse reasoning benchmarks. Our empirical results illustrate that role-play prompting consistently surpasses the standard zero-shot approach across most datasets. Notably, in experiments conducted using ChatGPT, accuracy on AQuA rises from 53.5{\%} to 63.8{\%}, and on Last Letter from 23.8{\%} to 84.2{\%}. Upon further comparison with the Zero-Shot-CoT technique, which prompts the model to {``}think step by step{''}, our study demonstrates that role-play prompting acts as a more effective trigger for the CoT process.This highlights its potential to augment the reasoning capabilities of LLMs. We release our code at https://github.com/NKU-HLT/Role-Play-Prompting.",
}
| Modern large language models (LLMs) exhibit a remarkable capacity for role-playing, enabling them to embody not only human characters but also non-human entities. This versatility allows them to simulate complex human-like interactions and behaviors within various contexts, as well as to emulate specific objects or systems. While these capabilities have enhanced user engagement and introduced novel modes of interaction, the influence of role-playing on LLMs{'} reasoning abilities remains underexplored. In this study, we introduce a strategically designed role-play prompting methodology and assess its performance under the zero-shot setting across twelve diverse reasoning benchmarks. Our empirical results illustrate that role-play prompting consistently surpasses the standard zero-shot approach across most datasets. Notably, in experiments conducted using ChatGPT, accuracy on AQuA rises from 53.5{\%} to 63.8{\%}, and on Last Letter from 23.8{\%} to 84.2{\%}. Upon further comparison with the Zero-Shot-CoT technique, which prompts the model to {``}think step by step{''}, our study demonstrates that role-play prompting acts as a more effective trigger for the CoT process. This highlights its potential to augment the reasoning capabilities of LLMs. We release our code at https://github.com/NKU-HLT/Role-Play-Prompting. | [
"Kong, Aobo",
"Zhao, Shiwan",
"Chen, Hao",
"Li, Qicheng",
"Qin, Yong",
"Sun, Ruiqi",
"Zhou, Xin",
"Wang, Enzhi",
"Dong, Xiaohang"
] | Better Zero-Shot Reasoning with Role-Play Prompting | naacl-long.228 | Poster | 2308.07702 | [
"https://github.com/HLT-NLP/Role-Play-Prompting"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2024.naacl-long.229.bib | https://aclanthology.org/2024.naacl-long.229/ | @inproceedings{cheng-etal-2024-event,
title = "Event-Content-Oriented Dialogue Generation in Short Video",
author = "Cheng, Fenghua and
Li, Xue and
Huang, Zi and
Wang, Jinxiang and
Wang, Sen",
editor = "Duh, Kevin and
Gomez, Helena and
Bethard, Steven",
booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.naacl-long.229",
doi = "10.18653/v1/2024.naacl-long.229",
pages = "4114--4124",
abstract = "Understanding complex events from different modalities, associating to external knowledge and generating response in a clear point of view are still unexplored in today{'}s multi-modal dialogue research. The great challenges include 1) lack of event-based multi-modal dialogue dataset; 2) understanding of complex events and 3) heterogeneity gap between different modalities. To overcome these challenges, we firstly introduce a novel event-oriented video-dialogue dataset called SportsVD (Sports-domain Video-dialogue Dataset). To our best knowledge, SportsVD is the first dataset that consists of complex events videos and opinion-based conversations with regards to contents in these events. Meanwhile, we present multi-modal dialogue generation method VCD (Video Commentary Dialogue) to generate human-like response according to event contents in the video and related external knowledge. In contrast to previous video-based dialogue generation, we focus on opinion-based response and the understanding of longer and more complex event contents. We evaluate VCD{'}s performance on SportsVD and other baselines under several automatic metrics. Experiments demonstrate VCD can outperform among other state-of-the-art baselines. Our work is available at https://github.com/Cheng-Fenghua/SportsVD.",
}
| Understanding complex events from different modalities, associating them with external knowledge, and generating responses with a clear point of view are still unexplored in today{'}s multi-modal dialogue research. The great challenges include 1) the lack of event-based multi-modal dialogue datasets; 2) the understanding of complex events; and 3) the heterogeneity gap between different modalities. To overcome these challenges, we first introduce a novel event-oriented video-dialogue dataset called SportsVD (Sports-domain Video-dialogue Dataset). To the best of our knowledge, SportsVD is the first dataset that consists of complex event videos and opinion-based conversations about the contents of these events. Meanwhile, we present a multi-modal dialogue generation method, VCD (Video Commentary Dialogue), to generate human-like responses according to event contents in the video and related external knowledge. In contrast to previous video-based dialogue generation, we focus on opinion-based responses and the understanding of longer and more complex event contents. We evaluate VCD{'}s performance against other baselines on SportsVD under several automatic metrics. Experiments demonstrate that VCD outperforms other state-of-the-art baselines. Our work is available at https://github.com/Cheng-Fenghua/SportsVD. | [
"Cheng, Fenghua",
"Li, Xue",
"Huang, Zi",
"Wang, Jinxiang",
"Wang, Sen"
] | Event-Content-Oriented Dialogue Generation in Short Video | naacl-long.229 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2024.naacl-long.230.bib | https://aclanthology.org/2024.naacl-long.230/ | @inproceedings{chen-etal-2024-dog,
title = "{D}o{G}-Instruct: Towards Premium Instruction-Tuning Data via Text-Grounded Instruction Wrapping",
author = "Chen, Yongrui and
Jiang, Haiyun and
Huang, Xinting and
Shi, Shuming and
Qi, Guilin",
editor = "Duh, Kevin and
Gomez, Helena and
Bethard, Steven",
booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.naacl-long.230",
doi = "10.18653/v1/2024.naacl-long.230",
pages = "4125--4135",
abstract = "The improvement of LLMs{'} instruction-following capabilities relies heavily on the availability of high-quality instruction-response pairs. Unfortunately, the current methods used to collect the pairs suffer from either unaffordable labor costs or severe hallucinations in the self-generation of LLM.To tackle these challenges, this paper proposes a scalable solution.It involves training LLMs to generate instruction-response pairs based on human-written documents, rather than relying solely on self-generation without context.Our proposed method not only exploits the advantages of human-written documents in reducing hallucinations but also utilizes an LLM to wrap the expression of documents, which enables us to bridge the gap between various document styles and the standard AI response.Experiments demonstrate that our method outperforms existing typical methods on multiple benchmarks.In particular, compared to the best-performing baseline, the LLM trained using our generated dataset exhibits a 10{\%} relative improvement in performance on AlpacaEval, despite utilizing only 1/5 of its training data.Furthermore, a comprehensive manual evaluation validates the quality of the data we generated.",
}
| The improvement of LLMs{'} instruction-following capabilities relies heavily on the availability of high-quality instruction-response pairs. Unfortunately, the current methods used to collect the pairs suffer from either unaffordable labor costs or severe hallucinations in the self-generation of LLMs. To tackle these challenges, this paper proposes a scalable solution. It involves training LLMs to generate instruction-response pairs based on human-written documents, rather than relying solely on self-generation without context. Our proposed method not only exploits the advantages of human-written documents in reducing hallucinations but also utilizes an LLM to wrap the expression of documents, which enables us to bridge the gap between various document styles and the standard AI response. Experiments demonstrate that our method outperforms existing typical methods on multiple benchmarks. In particular, compared to the best-performing baseline, the LLM trained using our generated dataset exhibits a 10{\%} relative improvement in performance on AlpacaEval, despite utilizing only 1/5 of its training data. Furthermore, a comprehensive manual evaluation validates the quality of the data we generated. | [
"Chen, Yongrui",
"Jiang, Haiyun",
"Huang, Xinting",
"Shi, Shuming",
"Qi, Guilin"
] | DoG-Instruct: Towards Premium Instruction-Tuning Data via Text-Grounded Instruction Wrapping | naacl-long.230 | Poster | 2309.05447 | [
"https://github.com/bahuia/dog-instruct"
] | https://huggingface.co/papers/2309.05447 | 0 | 1 | 0 | 5 | 1 | [
"bahuia/dog-instruct-wrapper-7b-lora"
] | [] | [] |
https://aclanthology.org/2024.naacl-long.231.bib | https://aclanthology.org/2024.naacl-long.231/ | @inproceedings{t-y-s-s-etal-2024-beyond,
title = "Beyond Borders: Investigating Cross-Jurisdiction Transfer in Legal Case Summarization",
author = "T.y.s.s, Santosh and
Venkatkrishna, Vatsal and
Ghosh, Saptarshi and
Grabmair, Matthias",
editor = "Duh, Kevin and
Gomez, Helena and
Bethard, Steven",
booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.naacl-long.231",
doi = "10.18653/v1/2024.naacl-long.231",
pages = "4136--4150",
abstract = "Legal professionals face the challenge of managing an overwhelming volume of lengthy judgments, making automated legal case summarization crucial. However, prior approaches mainly focused on training and evaluating these models within the same jurisdiction. In this study, we explore the cross-jurisdictional generalizability of legal case summarization models. Specifically, we explore how to effectively summarize legal cases of a target jurisdiction where reference summaries are not available. In particular, we investigate whether supplementing models with unlabeled target jurisdiction corpus and extractive silver summaries obtained from unsupervised algorithms on target data enhances transfer performance. Our comprehensive study on three datasets from different jurisdictions highlights the role of pre-training in improving transfer performance. We shed light on the pivotal influence of jurisdictional similarity in selecting optimal source datasets for effective transfer. Furthermore, our findings underscore that incorporating unlabeled target data yields improvements in general pre-trained models, with additional gains when silver summaries are introduced. This augmentation is especially valuable when dealing with extractive datasets and scenarios featuring limited alignment between source and target jurisdictions. Our study provides key insights for developing adaptable legal case summarization systems, transcending jurisdictional boundaries.",
}
| Legal professionals face the challenge of managing an overwhelming volume of lengthy judgments, making automated legal case summarization crucial. However, prior approaches mainly focused on training and evaluating these models within the same jurisdiction. In this study, we explore the cross-jurisdictional generalizability of legal case summarization models. Specifically, we explore how to effectively summarize legal cases of a target jurisdiction where reference summaries are not available. In particular, we investigate whether supplementing models with unlabeled target jurisdiction corpus and extractive silver summaries obtained from unsupervised algorithms on target data enhances transfer performance. Our comprehensive study on three datasets from different jurisdictions highlights the role of pre-training in improving transfer performance. We shed light on the pivotal influence of jurisdictional similarity in selecting optimal source datasets for effective transfer. Furthermore, our findings underscore that incorporating unlabeled target data yields improvements in general pre-trained models, with additional gains when silver summaries are introduced. This augmentation is especially valuable when dealing with extractive datasets and scenarios featuring limited alignment between source and target jurisdictions. Our study provides key insights for developing adaptable legal case summarization systems, transcending jurisdictional boundaries. | [
"T.y.s.s, Santosh",
"Venkatkrishna, Vatsal",
"Ghosh, Saptarshi",
"Grabmair, Matthias"
] | Beyond Borders: Investigating Cross-Jurisdiction Transfer in Legal Case Summarization | naacl-long.231 | Poster | 2403.19317 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2024.naacl-long.232.bib | https://aclanthology.org/2024.naacl-long.232/ | @inproceedings{lu-etal-2024-edc,
title = "{EDC}: Effective and Efficient Dialog Comprehension For Dialog State Tracking",
author = "Lu, Qifan and
Ramasubramanian, Bhaskar and
Poovendran, Radha",
editor = "Duh, Kevin and
Gomez, Helena and
Bethard, Steven",
booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.naacl-long.232",
doi = "10.18653/v1/2024.naacl-long.232",
pages = "4151--4165",
abstract = "In Task-Oriented Dialog (TOD) systems, Dialog State Tracking (DST) structurally extracts information from user and system utterances, which can be further used for querying databases and forming responses to users. The two major categories of DST methods, sequential and independent methods, face trade-offs between accuracy and efficiency. To resolve this issue, we propose Effective and Efficient Dialog Comprehension (EDC), an alternative DST approach that leverages the tree structure of the dialog state. EDC predicts domains, slot names and slot values of the dialog state step-by-step for better accuracy, and efficiently encodes dialog contexts with causal attention patterns. We evaluate EDC on several popular TOD datasets and EDC is able to achieve state-of-the-art Joint Goal Accuracy (JGA). We also show theoretically and empirically that EDC is more efficient than model designs used by previous works.",
}
| In Task-Oriented Dialog (TOD) systems, Dialog State Tracking (DST) structurally extracts information from user and system utterances, which can be further used for querying databases and forming responses to users. The two major categories of DST methods, sequential and independent methods, face trade-offs between accuracy and efficiency. To resolve this issue, we propose Effective and Efficient Dialog Comprehension (EDC), an alternative DST approach that leverages the tree structure of the dialog state. EDC predicts domains, slot names and slot values of the dialog state step-by-step for better accuracy, and efficiently encodes dialog contexts with causal attention patterns. We evaluate EDC on several popular TOD datasets and EDC is able to achieve state-of-the-art Joint Goal Accuracy (JGA). We also show theoretically and empirically that EDC is more efficient than model designs used by previous works. | [
"Lu, Qifan",
"Ramasubramanian, Bhaskar",
"Poovendran, Radha"
] | EDC: Effective and Efficient Dialog Comprehension For Dialog State Tracking | naacl-long.232 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2024.naacl-long.233.bib | https://aclanthology.org/2024.naacl-long.233/ | @inproceedings{shatnawi-etal-2024-automatic,
title = "Automatic Restoration of Diacritics for Speech Data Sets",
author = "Shatnawi, Sara and
Alqahtani, Sawsan and
Aldarmaki, Hanan",
editor = "Duh, Kevin and
Gomez, Helena and
Bethard, Steven",
booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.naacl-long.233",
doi = "10.18653/v1/2024.naacl-long.233",
pages = "4166--4176",
abstract = "Automatic text-based diacritic restoration models generally have high diacritic error rates when applied to speech transcripts as a result of domain and style shifts in spoken language. In this work, we explore the possibility of improving the performance of automatic diacritic restoration when applied to speech data by utilizing parallel spoken utterances. In particular, we use the pre-trained Whisper ASR model fine-tuned on relatively small amounts of diacritized Arabic speech data to produce rough diacritized transcripts for the speech utterances, which we then use as an additional input for diacritic restoration models. The proposed framework consistently improves diacritic restoration performance compared to text-only baselines. Our results highlight the inadequacy of current text-based diacritic restoration models for speech data sets and provide a new baseline for speech-based diacritic restoration.",
}
| Automatic text-based diacritic restoration models generally have high diacritic error rates when applied to speech transcripts as a result of domain and style shifts in spoken language. In this work, we explore the possibility of improving the performance of automatic diacritic restoration when applied to speech data by utilizing parallel spoken utterances. In particular, we use the pre-trained Whisper ASR model fine-tuned on relatively small amounts of diacritized Arabic speech data to produce rough diacritized transcripts for the speech utterances, which we then use as an additional input for diacritic restoration models. The proposed framework consistently improves diacritic restoration performance compared to text-only baselines. Our results highlight the inadequacy of current text-based diacritic restoration models for speech data sets and provide a new baseline for speech-based diacritic restoration. | [
"Shatnawi, Sara",
"Alqahtani, Sawsan",
"Aldarmaki, Hanan"
] | Automatic Restoration of Diacritics for Speech Data Sets | naacl-long.233 | Poster | 2311.10771 | [
"https://github.com/sarashatnawi/diacritization"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2024.naacl-long.234.bib | https://aclanthology.org/2024.naacl-long.234/ | @inproceedings{heredia-etal-2024-xnlieu,
title = "{XNLI}eu: a dataset for cross-lingual {NLI} in {B}asque",
author = "Heredia, Maite and
Etxaniz, Julen and
Zulaika, Muitze and
Saralegi, Xabier and
Barnes, Jeremy and
Soroa, Aitor",
editor = "Duh, Kevin and
Gomez, Helena and
Bethard, Steven",
booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.naacl-long.234",
doi = "10.18653/v1/2024.naacl-long.234",
pages = "4177--4188",
abstract = "XNLI is a popular Natural Language Inference (NLI) benchmark widely used to evaluate cross-lingual Natural Language Understanding (NLU) capabilities across languages. In this paper, we expand XNLI to include Basque, a low-resource language that can greatly benefit from transfer-learning approaches. The new dataset, dubbed XNLIeu, has been developed by first machine-translating the English XNLI corpus into Basque, followed by a manual post-edition step. We have conducted a series of experiments using mono- and multilingual LLMs to assess a) the effect of professional post-edition on the MT system; b) the best cross-lingual strategy for NLI in Basque; and c) whether the choice of the best cross-lingual strategy is influenced by the fact that the dataset is built by translation. The results show that post-edition is necessary and that the translate-train cross-lingual strategy obtains better results overall, although the gain is lower when tested in a dataset that has been built natively from scratch. Our code and datasets are publicly available under open licenses.",
}
| XNLI is a popular Natural Language Inference (NLI) benchmark widely used to evaluate cross-lingual Natural Language Understanding (NLU) capabilities across languages. In this paper, we expand XNLI to include Basque, a low-resource language that can greatly benefit from transfer-learning approaches. The new dataset, dubbed XNLIeu, has been developed by first machine-translating the English XNLI corpus into Basque, followed by a manual post-editing step. We have conducted a series of experiments using mono- and multilingual LLMs to assess a) the effect of professional post-editing on the MT system; b) the best cross-lingual strategy for NLI in Basque; and c) whether the choice of the best cross-lingual strategy is influenced by the fact that the dataset is built by translation. The results show that post-editing is necessary and that the translate-train cross-lingual strategy obtains better results overall, although the gain is lower when tested on a dataset that has been built natively from scratch. Our code and datasets are publicly available under open licenses. | [
"Heredia, Maite",
"Etxaniz, Julen",
"Zulaika, Muitze",
"Saralegi, Xabier",
"Barnes, Jeremy",
"Soroa, Aitor"
] | XNLIeu: a dataset for cross-lingual NLI in Basque | naacl-long.234 | Poster | 2404.06996 | [
"https://github.com/hitz-zentroa/xnli-eu"
] | https://huggingface.co/papers/2404.06996 | 2 | 0 | 0 | 6 | 1 | [] | [] | [] |
https://aclanthology.org/2024.naacl-long.235.bib | https://aclanthology.org/2024.naacl-long.235/ | @inproceedings{wang-etal-2024-mdr,
title = "{MDR}: Model-Specific Demonstration Retrieval at Inference Time for In-Context Learning",
author = "Wang, Huazheng and
Wu, Jinming and
Sun, Haifeng and
Xia, Zixuan and
Cheng, Daixuan and
Wang, Jingyu and
Qi, Qi and
Liao, Jianxin",
editor = "Duh, Kevin and
Gomez, Helena and
Bethard, Steven",
booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.naacl-long.235",
doi = "10.18653/v1/2024.naacl-long.235",
pages = "4189--4204",
abstract = "Recently, retrieval-based in-context learning (ICL) methods for selecting demonstrations have been widely investigated. Existing methods train a dense retriever to retrieve the most appropriate demonstrations for a given test query, which improves ICL performance. However, we find that distinct LLMs exhibit different biases for {``}what is a good demonstration{''} since they possess differences in training data, model architectures and training methods. As a result, a demonstration suitable for one LLM may not be appropriate for others.Previous approaches ignore the model bias and fail to retrieve the most appropriate demonstrations for different inference LLMs, resulting in a degradation of ICL performance.To address this problem, we propose a simple yet effective metric to evaluate the appropriateness of demonstrations for a specific inference LLM. Furthermore, we introduce a Model-specific Demonstration Retrieval (MDR) method for ICL at inference time, which considers the biases of different LLMs. We test MDR on seen and unseen tasks with multi-scale inference LLMs, such as GPT-Neo-2.7B, LLaMA-7B and Vicuna-13B. Experiments on 23 datasets across 11 data domains highlight the remarkable effectiveness of MDR, showcasing improvements of up to 41.2{\%} in comparison to methods that neglect model biases.",
}
| Recently, retrieval-based in-context learning (ICL) methods for selecting demonstrations have been widely investigated. Existing methods train a dense retriever to retrieve the most appropriate demonstrations for a given test query, which improves ICL performance. However, we find that distinct LLMs exhibit different biases for {``}what is a good demonstration{''} since they possess differences in training data, model architectures and training methods. As a result, a demonstration suitable for one LLM may not be appropriate for others. Previous approaches ignore this model bias and fail to retrieve the most appropriate demonstrations for different inference LLMs, resulting in a degradation of ICL performance. To address this problem, we propose a simple yet effective metric to evaluate the appropriateness of demonstrations for a specific inference LLM. Furthermore, we introduce a Model-specific Demonstration Retrieval (MDR) method for ICL at inference time, which considers the biases of different LLMs. We test MDR on seen and unseen tasks with multi-scale inference LLMs, such as GPT-Neo-2.7B, LLaMA-7B and Vicuna-13B. Experiments on 23 datasets across 11 data domains highlight the remarkable effectiveness of MDR, showcasing improvements of up to 41.2{\%} in comparison to methods that neglect model biases. | [
"Wang, Huazheng",
"Wu, Jinming",
"Sun, Haifeng",
"Xia, Zixuan",
"Cheng, Daixuan",
"Wang, Jingyu",
"Qi, Qi",
"Liao, Jianxin"
] | MDR: Model-Specific Demonstration Retrieval at Inference Time for In-Context Learning | naacl-long.235 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2024.naacl-long.236.bib | https://aclanthology.org/2024.naacl-long.236/ | @inproceedings{lee-etal-2024-exploring-cross,
title = "Exploring Cross-Cultural Differences in {E}nglish Hate Speech Annotations: From Dataset Construction to Analysis",
author = "Lee, Nayeon and
Jung, Chani and
Myung, Junho and
Jin, Jiho and
Camacho-Collados, Jose and
Kim, Juho and
Oh, Alice",
editor = "Duh, Kevin and
Gomez, Helena and
Bethard, Steven",
booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.naacl-long.236",
doi = "10.18653/v1/2024.naacl-long.236",
pages = "4205--4224",
abstract = "Most hate speech datasets neglect the cultural diversity within a single language, resulting in a critical shortcoming in hate speech detection. To address this, we introduce CREHate, a CRoss-cultural English Hate speech dataset. To construct CREHate, we follow a two-step procedure: 1) cultural post collection and 2) cross-cultural annotation. We sample posts from the SBIC dataset, which predominantly represents North America, and collect posts from four geographically diverse English-speaking countries (Australia, United Kingdom, Singapore, and South Africa) using culturally hateful keywords we retrieve from our survey. Annotations are collected from the four countries plus the United States to establish representative labels for each country. Our analysis highlights statistically significant disparities across countries in hate speech annotations. Only 56.2{\%} of the posts in CREHate achieve consensus among all countries, with the highest pairwise label difference rate of 26{\%}. Qualitative analysis shows that label disagreement occurs mostly due to different interpretations of sarcasm and the personal bias of annotators on divisive topics. Lastly, we evaluate large language models (LLMs) under a zero-shot setting and show that current LLMs tend to show higher accuracies on Anglosphere country labels in CREHate.Our dataset and codes are available at: https://github.com/nlee0212/CREHate",
}
| Most hate speech datasets neglect the cultural diversity within a single language, resulting in a critical shortcoming in hate speech detection. To address this, we introduce CREHate, a CRoss-cultural English Hate speech dataset. To construct CREHate, we follow a two-step procedure: 1) cultural post collection and 2) cross-cultural annotation. We sample posts from the SBIC dataset, which predominantly represents North America, and collect posts from four geographically diverse English-speaking countries (Australia, United Kingdom, Singapore, and South Africa) using culturally hateful keywords we retrieve from our survey. Annotations are collected from the four countries plus the United States to establish representative labels for each country. Our analysis highlights statistically significant disparities across countries in hate speech annotations. Only 56.2{\%} of the posts in CREHate achieve consensus among all countries, with the highest pairwise label difference rate of 26{\%}. Qualitative analysis shows that label disagreement occurs mostly due to different interpretations of sarcasm and the personal bias of annotators on divisive topics. Lastly, we evaluate large language models (LLMs) under a zero-shot setting and show that current LLMs tend to show higher accuracies on Anglosphere country labels in CREHate. Our dataset and code are available at: https://github.com/nlee0212/CREHate | [
"Lee, Nayeon",
"Jung, Chani",
"Myung, Junho",
"Jin, Jiho",
"Camacho-Collados, Jose",
"Kim, Juho",
"Oh, Alice"
] | Exploring Cross-Cultural Differences in English Hate Speech Annotations: From Dataset Construction to Analysis | naacl-long.236 | Oral | 2308.16705 | [
"https://github.com/nlee0212/crehate"
] | https://huggingface.co/papers/2308.16705 | 2 | 0 | 0 | 7 | 1 | [] | [
"nayeon212/CREHate"
] | [] |
https://aclanthology.org/2024.naacl-long.237.bib | https://aclanthology.org/2024.naacl-long.237/ | @inproceedings{zhao-etal-2024-enhancing,
title = "Enhancing Contextual Understanding in Large Language Models through Contrastive Decoding",
author = "Zhao, Zheng and
Monti, Emilio and
Lehmann, Jens and
Assem, Haytham",
editor = "Duh, Kevin and
Gomez, Helena and
Bethard, Steven",
booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.naacl-long.237",
doi = "10.18653/v1/2024.naacl-long.237",
pages = "4225--4237",
abstract = "Large language models (LLMs) tend to inadequately integrate input context during text generation, relying excessively on encoded prior knowledge in model parameters, potentially resulting in generated text with factual inconsistencies or contextually unfaithful content. LLMs utilize two primary knowledge sources: 1) prior (parametric) knowledge from pretraining, and 2) contextual (non-parametric) knowledge from input prompts. The study addresses the open question of how LLMs effectively balance these knowledge sources during the generation process, specifically in the context of open-domain question answering. To address this issue, we introduce a novel approach integrating contrastive decoding with adversarial irrelevant passages as negative samples to enhance robust context grounding during generation. Notably, our method operates at inference time without requiring further training. We conduct comprehensive experiments to demonstrate its applicability and effectiveness, providing empirical evidence showcasing its superiority over existing methodologies.",
}
| Large language models (LLMs) tend to inadequately integrate input context during text generation, relying excessively on encoded prior knowledge in model parameters, potentially resulting in generated text with factual inconsistencies or contextually unfaithful content. LLMs utilize two primary knowledge sources: 1) prior (parametric) knowledge from pretraining, and 2) contextual (non-parametric) knowledge from input prompts. The study addresses the open question of how LLMs effectively balance these knowledge sources during the generation process, specifically in the context of open-domain question answering. To address this issue, we introduce a novel approach integrating contrastive decoding with adversarial irrelevant passages as negative samples to enhance robust context grounding during generation. Notably, our method operates at inference time without requiring further training. We conduct comprehensive experiments to demonstrate its applicability and effectiveness, providing empirical evidence showcasing its superiority over existing methodologies. | [
"Zhao, Zheng",
"Monti, Emilio",
"Lehmann, Jens",
"Assem, Haytham"
] | Enhancing Contextual Understanding in Large Language Models through Contrastive Decoding | naacl-long.237 | Oral | 2405.02750 | [
"https://github.com/amazon-science/contextualunderstanding-contrastivedecoding"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2024.naacl-long.238.bib | https://aclanthology.org/2024.naacl-long.238/ | @inproceedings{jang-frassinelli-2024-generalizable,
title = "Generalizable Sarcasm Detection is Just Around the Corner, of Course!",
author = "Jang, Hyewon and
Frassinelli, Diego",
editor = "Duh, Kevin and
Gomez, Helena and
Bethard, Steven",
booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.naacl-long.238",
doi = "10.18653/v1/2024.naacl-long.238",
pages = "4238--4249",
abstract = "We tested the robustness of sarcasm detection models by examining their behavior when fine-tuned on four sarcasm datasets containing varying characteristics of sarcasm: label source (authors vs. third-party), domain (social media/online vs. offline conversations/dialogues), style (aggressive vs. humorous mocking). We tested their prediction performance on the same dataset (intra-dataset) and across different datasets (cross-dataset). For intra-dataset predictions, models consistently performed better when fine-tuned with third-party labels rather than with author labels. For cross-dataset predictions, most models failed to generalize well to the other datasets, implying that one type of dataset cannot represent all sorts of sarcasm with different styles and domains. Compared to the existing datasets, models fine-tuned on the new dataset we release in this work showed the highest generalizability to other datasets. With a manual inspection of the datasets and post-hoc analysis, we attributed the difficulty in generalization to the fact that sarcasm actually comes in different domains and styles. We argue that future sarcasm research should take the broad scope of sarcasm into account.",
}
| We tested the robustness of sarcasm detection models by examining their behavior when fine-tuned on four sarcasm datasets containing varying characteristics of sarcasm: label source (authors vs. third-party), domain (social media/online vs. offline conversations/dialogues), and style (aggressive vs. humorous mocking). We tested their prediction performance on the same dataset (intra-dataset) and across different datasets (cross-dataset). For intra-dataset predictions, models consistently performed better when fine-tuned with third-party labels rather than with author labels. For cross-dataset predictions, most models failed to generalize well to the other datasets, implying that one type of dataset cannot represent all sorts of sarcasm with different styles and domains. Compared to the existing datasets, models fine-tuned on the new dataset we release in this work showed the highest generalizability to other datasets. With a manual inspection of the datasets and post-hoc analysis, we attributed the difficulty in generalization to the fact that sarcasm actually comes in different domains and styles. We argue that future sarcasm research should take the broad scope of sarcasm into account. | [
"Jang, Hyewon",
"Frassinelli, Diego"
] | Generalizable Sarcasm Detection is Just Around the Corner, of Course! | naacl-long.238 | Oral | 2404.06357 | [
"https://github.com/copsyn/csc"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2024.naacl-long.239.bib | https://aclanthology.org/2024.naacl-long.239/ | @inproceedings{shen-etal-2024-encoding,
title = "Encoding of lexical tone in self-supervised models of spoken language",
author = "Shen, Gaofei and
Watkins, Michaela and
Alishahi, Afra and
Bisazza, Arianna and
Chrupa{\l}a, Grzegorz",
editor = "Duh, Kevin and
Gomez, Helena and
Bethard, Steven",
booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.naacl-long.239",
doi = "10.18653/v1/2024.naacl-long.239",
pages = "4250--4261",
abstract = "Interpretability research has shown that self-supervised Spoken LanguageModels (SLMs) encode a wide variety of features in human speech from theacoustic, phonetic, phonological, syntactic and semantic levels, to speakercharacteristics. The bulk of prior research on representations of phonologyhas focused on segmental features such as phonemes; the encoding ofsuprasegmental phonology (such as tone and stress patterns) in SLMs is not yetwell understood. Tone is a suprasegmental feature that is present in more thanhalf of the world{'}s languages. This paper aims to analyze the tone encodingcapabilities of SLMs, using Mandarin and Vietnamese as case studies. We showthat SLMs encode lexical tone to a significant degree even when they aretrained on data from non-tonal languages. We further find that SLMs behavesimilarly to native and non-native human participants in tone and consonantperception studies, but they do not follow the same developmental trajectory.",
}
| Interpretability research has shown that self-supervised Spoken Language Models (SLMs) encode a wide variety of features in human speech from the acoustic, phonetic, phonological, syntactic and semantic levels, to speaker characteristics. The bulk of prior research on representations of phonology has focused on segmental features such as phonemes; the encoding of suprasegmental phonology (such as tone and stress patterns) in SLMs is not yet well understood. Tone is a suprasegmental feature that is present in more than half of the world{'}s languages. This paper aims to analyze the tone encoding capabilities of SLMs, using Mandarin and Vietnamese as case studies. We show that SLMs encode lexical tone to a significant degree even when they are trained on data from non-tonal languages. We further find that SLMs behave similarly to native and non-native human participants in tone and consonant perception studies, but they do not follow the same developmental trajectory. | [
"Shen, Gaofei",
"Watkins, Michaela",
"Alishahi, Afra",
"Bisazza, Arianna",
"Chrupa{\\l}a, Grzegorz"
] | Encoding of lexical tone in self-supervised models of spoken language | naacl-long.239 | Oral | 2403.16865 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2024.naacl-long.240.bib | https://aclanthology.org/2024.naacl-long.240/ | @inproceedings{periti-tahmasebi-2024-systematic,
title = "A Systematic Comparison of Contextualized Word Embeddings for Lexical Semantic Change",
author = "Periti, Francesco and
Tahmasebi, Nina",
editor = "Duh, Kevin and
Gomez, Helena and
Bethard, Steven",
booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.naacl-long.240",
doi = "10.18653/v1/2024.naacl-long.240",
pages = "4262--4282",
abstract = "Contextualized embeddings are the preferred tool for modeling Lexical Semantic Change (LSC). Current evaluations typically focus on a specific task known as Graded Change Detection (GCD). However, performance comparison across work are often misleading due to their reliance on diverse settings. In this paper, we evaluate state-of-the-art models and approaches for GCD under equal conditions. We further break the LSC problem into Word-in-Context (WiC) and Word Sense Induction (WSI) tasks, and compare models across these different levels. Our evaluation is performed across different languages on eight available benchmarks for LSC, and shows that (i) APD outperforms other approaches for GCD; (ii) XL-LEXEME outperforms other contextualized models for WiC, WSI, and GCD, while being comparable to GPT-4; (iii) there is a clear need for improving the modeling of word meanings, as well as focus on *how*, *when*, and *why* these meanings change, rather than solely focusing on the extent of semantic change.",
}
| Contextualized embeddings are the preferred tool for modeling Lexical Semantic Change (LSC). Current evaluations typically focus on a specific task known as Graded Change Detection (GCD). However, performance comparisons across works are often misleading due to their reliance on diverse settings. In this paper, we evaluate state-of-the-art models and approaches for GCD under equal conditions. We further break the LSC problem into Word-in-Context (WiC) and Word Sense Induction (WSI) tasks, and compare models across these different levels. Our evaluation is performed across different languages on eight available benchmarks for LSC, and shows that (i) APD outperforms other approaches for GCD; (ii) XL-LEXEME outperforms other contextualized models for WiC, WSI, and GCD, while being comparable to GPT-4; (iii) there is a clear need for improving the modeling of word meanings, as well as for focusing on *how*, *when*, and *why* these meanings change, rather than solely focusing on the extent of semantic change. | [
"Periti, Francesco",
"Tahmasebi, Nina"
] | A Systematic Comparison of Contextualized Word Embeddings for Lexical Semantic Change | naacl-long.240 | Poster | 2402.12011 | [
"https://github.com/francescoperiti/cssdetection"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2024.naacl-long.241.bib | https://aclanthology.org/2024.naacl-long.241/ | @inproceedings{xu-etal-2024-iacos,
title = "i{ACOS}: Advancing Implicit Sentiment Extraction with Informative and Adaptive Negative Examples",
author = "Xu, Xiancai and
Zhang, Jia-Dong and
Xiong, Lei and
Liu, Zhishang",
editor = "Duh, Kevin and
Gomez, Helena and
Bethard, Steven",
booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.naacl-long.241",
doi = "10.18653/v1/2024.naacl-long.241",
pages = "4283--4293",
abstract = "Aspect-based sentiment analysis (ABSA) have been extensively studied, but little light has been shed on the quadruple extraction consisting of four fundamental elements: aspects, categories, opinions and sentiments, especially with implicit aspects and opinions. In this paper, we propose a new method iACOS for extracting Implicit Aspects with Categories and Opinions with Sentiments. First, iACOS appends two implicit tokens at the end of a text to capture the context-aware representation of all tokens including implicit aspects and opinions. Second, iACOS develops a sequence labeling model over the context-aware token representation to co-extract explicit and implicit aspects and opinions. Third, iACOS devises a multi-label classifier with a specialized multi-head attention for discovering aspect-opinion pairs and predicting their categories and sentiments simultaneously. Fourth, iACOS leverages informative and adaptive negative examples to jointly train the multi-label classifier and the other two classifiers on categories and sentiments by multi-task learning. Finally, the experimental results show that iACOS significantly outperforms other quadruple extraction baselines according to the F1 score on two public benchmark datasets.",
}
| Aspect-based sentiment analysis (ABSA) has been extensively studied, but little light has been shed on the quadruple extraction consisting of four fundamental elements: aspects, categories, opinions and sentiments, especially with implicit aspects and opinions. In this paper, we propose a new method, iACOS, for extracting Implicit Aspects with Categories and Opinions with Sentiments. First, iACOS appends two implicit tokens at the end of a text to capture the context-aware representation of all tokens including implicit aspects and opinions. Second, iACOS develops a sequence labeling model over the context-aware token representation to co-extract explicit and implicit aspects and opinions. Third, iACOS devises a multi-label classifier with a specialized multi-head attention for discovering aspect-opinion pairs and predicting their categories and sentiments simultaneously. Fourth, iACOS leverages informative and adaptive negative examples to jointly train the multi-label classifier and the other two classifiers on categories and sentiments by multi-task learning. Finally, the experimental results show that iACOS significantly outperforms other quadruple extraction baselines according to the F1 score on two public benchmark datasets. | [
"Xu, Xiancai",
"Zhang, Jia-Dong",
"Xiong, Lei",
"Liu, Zhishang"
] | iACOS: Advancing Implicit Sentiment Extraction with Informative and Adaptive Negative Examples | naacl-long.241 | Oral | 2311.03896 | [
"https://github.com/jiadongzh/iacos"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2024.naacl-long.242.bib | https://aclanthology.org/2024.naacl-long.242/ | @inproceedings{jang-etal-2024-rectifying,
title = "Rectifying Demonstration Shortcut in In-Context Learning",
author = "Jang, Joonwon and
Jang, Sanghwan and
Kweon, Wonbin and
Jeon, Minjin and
Yu, Hwanjo",
editor = "Duh, Kevin and
Gomez, Helena and
Bethard, Steven",
booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.naacl-long.242",
doi = "10.18653/v1/2024.naacl-long.242",
pages = "4294--4321",
abstract = "Large language models (LLMs) are able to solve various tasks with only a few demonstrations utilizing their in-context learning (ICL) abilities.However, LLMs often rely on their pre-trained semantic priors of demonstrations rather than on the input-label relationships to proceed with ICL prediction. In this work, we term this phenomenon as the {`}Demonstration Shortcut{'}.While previous works have primarily focused on improving ICL prediction results for predefined tasks, we aim to rectify the Demonstration Shortcut, thereby enabling the LLM to effectively learn new input-label relationships from demonstrations.To achieve this, we introduce In-Context Calibration, a demonstration-aware calibration method.We evaluate the effectiveness of the proposed method in two settings: (1) the Original ICL Task using the standard label space and (2) the Task Learning setting, where the label space is replaced with semantically unrelated tokens.In both settings, In-Context Calibration demonstrates substantial improvements, with results generalized across three LLM families (OPT, GPT, and Llama2) under various configurations.",
}
| Large language models (LLMs) are able to solve various tasks with only a few demonstrations utilizing their in-context learning (ICL) abilities. However, LLMs often rely on their pre-trained semantic priors of demonstrations rather than on the input-label relationships to proceed with ICL prediction. In this work, we term this phenomenon the {`}Demonstration Shortcut{'}. While previous works have primarily focused on improving ICL prediction results for predefined tasks, we aim to rectify the Demonstration Shortcut, thereby enabling the LLM to effectively learn new input-label relationships from demonstrations. To achieve this, we introduce In-Context Calibration, a demonstration-aware calibration method. We evaluate the effectiveness of the proposed method in two settings: (1) the Original ICL Task using the standard label space and (2) the Task Learning setting, where the label space is replaced with semantically unrelated tokens. In both settings, In-Context Calibration demonstrates substantial improvements, with results generalized across three LLM families (OPT, GPT, and Llama2) under various configurations. | [
"Jang, Joonwon",
"Jang, Sanghwan",
"Kweon, Wonbin",
"Jeon, Minjin",
"Yu, Hwanjo"
] | Rectifying Demonstration Shortcut in In-Context Learning | naacl-long.242 | Poster | 2403.09488 | [
"https://github.com/lainshower/in-context-calibration"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2024.naacl-long.243.bib | https://aclanthology.org/2024.naacl-long.243/ | @inproceedings{mayhew-etal-2024-universal,
title = "Universal {NER}: A Gold-Standard Multilingual Named Entity Recognition Benchmark",
author = {Mayhew, Stephen and
Blevins, Terra and
Liu, Shuheng and
Suppa, Marek and
Gonen, Hila and
Imperial, Joseph Marvin and
Karlsson, B{\"o}rje and
Lin, Peiqin and
Ljube{\v{s}}i{\'c}, Nikola and
Miranda, Lester James and
Plank, Barbara and
Riabi, Arij and
Pinter, Yuval},
editor = "Duh, Kevin and
Gomez, Helena and
Bethard, Steven",
booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.naacl-long.243",
doi = "10.18653/v1/2024.naacl-long.243",
pages = "4322--4337",
abstract = "We introduce Universal NER (UNER), an open, community-driven project to develop gold-standard NER benchmarks in many languages. The overarching goal of UNER is to provide high-quality, cross-lingually consistent annotations to facilitate and standardize multilingual NER research. UNER v1 contains 19 datasets annotated with named entities in a cross-lingual consistent schema across 13 diverse languages. In this paper, we detail the dataset creation and composition of UNER; we also provide initial modeling baselines on both in-language and cross-lingual learning settings. We will release the data, code, and fitted models to the public.",
}
| We introduce Universal NER (UNER), an open, community-driven project to develop gold-standard NER benchmarks in many languages. The overarching goal of UNER is to provide high-quality, cross-lingually consistent annotations to facilitate and standardize multilingual NER research. UNER v1 contains 19 datasets annotated with named entities in a cross-lingually consistent schema across 13 diverse languages. In this paper, we detail the dataset creation and composition of UNER; we also provide initial modeling baselines in both in-language and cross-lingual learning settings. We will release the data, code, and fitted models to the public. | [
"Mayhew, Stephen",
"Blevins, Terra",
"Liu, Shuheng",
"Suppa, Marek",
"Gonen, Hila",
"Imperial, Joseph Marvin",
"Karlsson, B{\\\"o}rje",
"Lin, Peiqin",
"Ljube{\\v{s}}i{\\'c}, Nikola",
"Mir",
"a, Lester James",
"Plank, Barbara",
"Riabi, Arij",
"Pinter, Yuval"
] | Universal NER: A Gold-Standard Multilingual Named Entity Recognition Benchmark | naacl-long.243 | Poster | 2311.09122 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2024.naacl-long.244.bib | https://aclanthology.org/2024.naacl-long.244/ | @inproceedings{kwon-etal-2024-odd,
title = "{ODD}: A Benchmark Dataset for the Natural Language Processing Based Opioid Related Aberrant Behavior Detection",
author = "Kwon, Sunjae and
Wang, Xun and
Liu, Weisong and
Druhl, Emily and
Sung, Minhee and
Reisman, Joel and
Li, Wenjun and
Kerns, Robert and
Becker, William and
Yu, Hong",
editor = "Duh, Kevin and
Gomez, Helena and
Bethard, Steven",
booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.naacl-long.244",
doi = "10.18653/v1/2024.naacl-long.244",
pages = "4338--4359",
abstract = "Opioid related aberrant behaviors (ORABs) present novel risk factors for opioid overdose. This paper introduces a novel biomedical natural language processing benchmark dataset named ODD, for ORAB Detection Dataset. ODD is an expert-annotated dataset designed to identify ORABs from patients{'} EHR notes and classify them into nine categories; 1) Confirmed Aberrant Behavior, 2) Suggested Aberrant Behavior, 3) Opioids, 4) Indication, 5) Diagnosed opioid dependency, 6) Benzodiazepines, 7) Medication Changes, 8) Central Nervous System-related, and 9) Social Determinants of Health. We explored two state-of-the-art natural language processing models (fine-tuning and prompt-tuning approaches) to identify ORAB. Experimental results show that the prompt-tuning models outperformed the fine-tuning models in most categories and the gains were especially higher among uncommon categories (Suggested Aberrant Behavior, Confirmed Aberrant Behaviors, Diagnosed Opioid Dependence, and Medication Change). Although the best model achieved the highest 88.17{\%} on macro average area under precision recall curve, uncommon classes still have a large room for performance improvement. ODD is publicly available.",
}
| Opioid related aberrant behaviors (ORABs) present novel risk factors for opioid overdose. This paper introduces a novel biomedical natural language processing benchmark dataset named ODD, for ORAB Detection Dataset. ODD is an expert-annotated dataset designed to identify ORABs from patients{'} EHR notes and classify them into nine categories: 1) Confirmed Aberrant Behavior, 2) Suggested Aberrant Behavior, 3) Opioids, 4) Indication, 5) Diagnosed opioid dependency, 6) Benzodiazepines, 7) Medication Changes, 8) Central Nervous System-related, and 9) Social Determinants of Health. We explored two state-of-the-art natural language processing models (fine-tuning and prompt-tuning approaches) to identify ORAB. Experimental results show that the prompt-tuning models outperformed the fine-tuning models in most categories and the gains were especially higher among uncommon categories (Suggested Aberrant Behavior, Confirmed Aberrant Behaviors, Diagnosed Opioid Dependence, and Medication Change). Although the best model achieved the highest macro-average area under the precision-recall curve (88.17{\%}), uncommon classes still leave large room for performance improvement. ODD is publicly available. | [
"Kwon, Sunjae",
"Wang, Xun",
"Liu, Weisong",
"Druhl, Emily",
"Sung, Minhee",
"Reisman, Joel",
"Li, Wenjun",
"Kerns, Robert",
"Becker, William",
"Yu, Hong"
] | ODD: A Benchmark Dataset for the Natural Language Processing Based Opioid Related Aberrant Behavior Detection | naacl-long.244 | Poster | 2307.02591 | [
"https://github.com/soon91jae/orab_mimic"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2024.naacl-long.245.bib | https://aclanthology.org/2024.naacl-long.245/ | @inproceedings{zhao-etal-2024-comprehensive,
title = "A Comprehensive Study of Gender Bias in Chemical Named Entity Recognition Models",
author = "Zhao, Xingmeng and
Niazi, Ali and
Rios, Anthony",
editor = "Duh, Kevin and
Gomez, Helena and
Bethard, Steven",
booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.naacl-long.245",
doi = "10.18653/v1/2024.naacl-long.245",
pages = "4360--4374",
abstract = "Chemical named entity recognition (NER) models are used in many downstream tasks, from adverse drug reaction identification to pharmacoepidemiology. However, it is unknown whether these models work the same for everyone. Performance disparities can potentially cause harm rather than the intended good. This paper assesses gender-related performance disparities in chemical NER systems. We develop a framework for measuring gender bias in chemical NER models using synthetic data and a newly annotated corpus of over 92,405 words with self-identified gender information from Reddit. Our evaluation of multiple biomedical NER models reveals evident biases. For instance, synthetic data suggests that female names are frequently misclassified as chemicals, especially when it comes to brand name mentions. Additionally, we observe performance disparities between female- and male-associated data in both datasets. Many systems fail to detect contraceptives such as birth control. Our findings emphasize the biases in chemical NER models, urging practitioners to account for these biases in downstream applications.",
}
| Chemical named entity recognition (NER) models are used in many downstream tasks, from adverse drug reaction identification to pharmacoepidemiology. However, it is unknown whether these models work the same for everyone. Performance disparities can potentially cause harm rather than the intended good. This paper assesses gender-related performance disparities in chemical NER systems. We develop a framework for measuring gender bias in chemical NER models using synthetic data and a newly annotated corpus of over 92,405 words with self-identified gender information from Reddit. Our evaluation of multiple biomedical NER models reveals evident biases. For instance, synthetic data suggests that female names are frequently misclassified as chemicals, especially when it comes to brand name mentions. Additionally, we observe performance disparities between female- and male-associated data in both datasets. Many systems fail to detect contraceptives such as birth control. Our findings emphasize the biases in chemical NER models, urging practitioners to account for these biases in downstream applications. | [
"Zhao, Xingmeng",
"Niazi, Ali",
"Rios, Anthony"
] | A Comprehensive Study of Gender Bias in Chemical Named Entity Recognition Models | naacl-long.245 | Poster | 2212.12799 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2024.naacl-long.246.bib | https://aclanthology.org/2024.naacl-long.246/ | @inproceedings{xu-etal-2024-promises,
title = "The Promises and Pitfalls of Using Language Models to Measure Instruction Quality in Education",
author = "Xu, Paiheng and
Liu, Jing and
Jones, Nathan and
Cohen, Julie and
Ai, Wei",
editor = "Duh, Kevin and
Gomez, Helena and
Bethard, Steven",
booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.naacl-long.246",
doi = "10.18653/v1/2024.naacl-long.246",
pages = "4375--4389",
abstract = "Assessing instruction quality is a fundamental component of any improvement efforts in the education system. However, traditional manual assessments are expensive, subjective, and heavily dependent on observers{'} expertise and idiosyncratic factors, preventing teachers from getting timely and frequent feedback. Different from prior research that mostly focuses on low-inference instructional practices on a singular basis, this paper presents the first study that leverages Natural Language Processing (NLP) techniques to assess multiple high-inference instructional practices in two distinct educational settings: in-person K-12 classrooms and simulated performance tasks for pre-service teachers. This is also the first study that applies NLP to measure a teaching practice that is widely acknowledged to be particularly effective for students with special needs. We confront two challenges inherent in NLP-based instructional analysis, including noisy and long input data and highly skewed distributions of human ratings. Our results suggest that pretrained Language Models (PLMs) demonstrate performances comparable to the agreement level of human raters for variables that are more discrete and require lower inference, but their efficacy diminishes with more complex teaching practices. Interestingly, using only teachers{'} utterances as input yields strong results for student-centered variables, alleviating common concerns over the difficulty of collecting and transcribing high-quality student speech data in in-person teaching settings. Our findings highlight both the potential and the limitations of current NLP techniques in the education domain, opening avenues for further exploration.",
}
| Assessing instruction quality is a fundamental component of any improvement efforts in the education system. However, traditional manual assessments are expensive, subjective, and heavily dependent on observers{'} expertise and idiosyncratic factors, preventing teachers from getting timely and frequent feedback. Different from prior research that mostly focuses on low-inference instructional practices on a singular basis, this paper presents the first study that leverages Natural Language Processing (NLP) techniques to assess multiple high-inference instructional practices in two distinct educational settings: in-person K-12 classrooms and simulated performance tasks for pre-service teachers. This is also the first study that applies NLP to measure a teaching practice that is widely acknowledged to be particularly effective for students with special needs. We confront two challenges inherent in NLP-based instructional analysis, including noisy and long input data and highly skewed distributions of human ratings. Our results suggest that pretrained Language Models (PLMs) demonstrate performances comparable to the agreement level of human raters for variables that are more discrete and require lower inference, but their efficacy diminishes with more complex teaching practices. Interestingly, using only teachers{'} utterances as input yields strong results for student-centered variables, alleviating common concerns over the difficulty of collecting and transcribing high-quality student speech data in in-person teaching settings. Our findings highlight both the potential and the limitations of current NLP techniques in the education domain, opening avenues for further exploration. | [
"Xu, Paiheng",
"Liu, Jing",
"Jones, Nathan",
"Cohen, Julie",
"Ai, Wei"
] | The Promises and Pitfalls of Using Language Models to Measure Instruction Quality in Education | naacl-long.246 | Poster | 2404.02444 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2024.naacl-long.247.bib | https://aclanthology.org/2024.naacl-long.247/ | @inproceedings{flemings-etal-2024-differentially,
title = "Differentially Private Next-Token Prediction of Large Language Models",
author = "Flemings, James and
Razaviyayn, Meisam and
Annavaram, Murali",
editor = "Duh, Kevin and
Gomez, Helena and
Bethard, Steven",
booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.naacl-long.247",
doi = "10.18653/v1/2024.naacl-long.247",
pages = "4390--4404",
abstract = "Ensuring the privacy of Large Language Models (LLMs) is becoming increasingly important. The most widely adopted technique to accomplish this is DP-SGD, which trains a model to guarantee Differential Privacy (DP). However, DP-SGD overestimates an adversary{'}s capabilities in having white box access to the model and, as a result, causes longer training times and larger memory usage than SGD. On the other hand, commercial LLM deployments are predominantly cloud-based; hence, adversarial access to LLMs is black-box. Motivated by these observations, we present Private Mixing of Ensemble Distributions (PMixED): a private prediction protocol for next-token prediction that utilizes the inherent stochasticity of next-token sampling and a public model to achieve Differential Privacy. We formalize this by introducing RD-mollifers which project each of the model{'}s output distribution from an ensemble of fine-tuned LLMs onto a set around a public LLM{'}s output distribution, then average the projected distributions and sample from it. Unlike DP-SGD which needs to consider the model architecture during training, PMixED is model agnostic, which makes PMixED a very appealing solution for current deployments. Our results show that PMixED achieves a stronger privacy guarantee than sample-level privacy and outperforms DP-SGD for privacy $\epsilon = 8$ on large-scale datasets. Thus, PMixED offers a practical alternative to DP training methods for achieving strong generative utility without compromising privacy.",
}
| Ensuring the privacy of Large Language Models (LLMs) is becoming increasingly important. The most widely adopted technique to accomplish this is DP-SGD, which trains a model to guarantee Differential Privacy (DP). However, DP-SGD overestimates an adversary{'}s capabilities in having white-box access to the model and, as a result, causes longer training times and larger memory usage than SGD. On the other hand, commercial LLM deployments are predominantly cloud-based; hence, adversarial access to LLMs is black-box. Motivated by these observations, we present Private Mixing of Ensemble Distributions (PMixED): a private prediction protocol for next-token prediction that utilizes the inherent stochasticity of next-token sampling and a public model to achieve Differential Privacy. We formalize this by introducing RD-mollifiers, which project the output distribution of each model in an ensemble of fine-tuned LLMs onto a set around a public LLM{'}s output distribution, then average the projected distributions and sample from the result. Unlike DP-SGD, which needs to consider the model architecture during training, PMixED is model agnostic, which makes PMixED a very appealing solution for current deployments. Our results show that PMixED achieves a stronger privacy guarantee than sample-level privacy and outperforms DP-SGD for privacy $\epsilon = 8$ on large-scale datasets. Thus, PMixED offers a practical alternative to DP training methods for achieving strong generative utility without compromising privacy. | [
"Flemings, James",
"Razaviyayn, Meisam",
"Annavaram, Murali"
] | Differentially Private Next-Token Prediction of Large Language Models | naacl-long.247 | Poster | 2403.15638 | [
"https://github.com/james-flemings/pmixed"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2024.naacl-long.248.bib | https://aclanthology.org/2024.naacl-long.248/ | @inproceedings{goldzycher-etal-2024-improving,
title = "Improving Adversarial Data Collection by Supporting Annotators: Lessons from {GAHD}, a {G}erman Hate Speech Dataset",
author = {Goldzycher, Janis and
R{\"o}ttger, Paul and
Schneider, Gerold},
editor = "Duh, Kevin and
Gomez, Helena and
Bethard, Steven",
booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.naacl-long.248",
doi = "10.18653/v1/2024.naacl-long.248",
pages = "4405--4424",
abstract = "Hate speech detection models are only as good as the data they are trained on. Datasets sourced from social media suffer from systematic gaps and biases, leading to unreliable models with simplistic decision boundaries. Adversarial datasets, collected by exploiting model weaknesses, promise to fix this problem. However, adversarial data collection can be slow and costly, and individual annotators have limited creativity. In this paper, we introduce GAHD, a new German Adversarial Hate speech Dataset comprising ca. 11k examples. During data collection, we explore new strategies for supporting annotators, to create more diverse adversarial examples more efficiently and provide a manual analysis of annotator disagreements for each strategy. Our experiments show that the resulting dataset is challenging even for state-of-the-art hate speech detection models, and that training on GAHD clearly improves model robustness. Further, we find that mixing multiple support strategies is most advantageous. We make GAHD publicly available at https://github.com/jagol/gahd.",
}
| Hate speech detection models are only as good as the data they are trained on. Datasets sourced from social media suffer from systematic gaps and biases, leading to unreliable models with simplistic decision boundaries. Adversarial datasets, collected by exploiting model weaknesses, promise to fix this problem. However, adversarial data collection can be slow and costly, and individual annotators have limited creativity. In this paper, we introduce GAHD, a new German Adversarial Hate speech Dataset comprising ca. 11k examples. During data collection, we explore new strategies for supporting annotators, to create more diverse adversarial examples more efficiently and provide a manual analysis of annotator disagreements for each strategy. Our experiments show that the resulting dataset is challenging even for state-of-the-art hate speech detection models, and that training on GAHD clearly improves model robustness. Further, we find that mixing multiple support strategies is most advantageous. We make GAHD publicly available at https://github.com/jagol/gahd. | [
"Goldzycher, Janis",
"R{\\\"o}ttger, Paul",
"Schneider, Gerold"
] | Improving Adversarial Data Collection by Supporting Annotators: Lessons from GAHD, a German Hate Speech Dataset | naacl-long.248 | Poster | 2403.19559 | [
"https://github.com/jagol/gahd"
] | https://huggingface.co/papers/2403.19559 | 0 | 1 | 0 | 3 | 1 | [
"jagoldz/gahd"
] | [
"manueltonneau/german-hate-speech-superset",
"jagoldz/gahd",
"davanstrien/gahd"
] | [] |
https://aclanthology.org/2024.naacl-long.249.bib | https://aclanthology.org/2024.naacl-long.249/ | @inproceedings{nogueira-dos-santos-etal-2024-memory,
title = "Memory Augmented Language Models through Mixture of Word Experts",
author = "Nogueira dos Santos, Cicero and
Lee-Thorp, James and
Noble, Isaac and
Chang, Chung-Ching and
Uthus, David",
editor = "Duh, Kevin and
Gomez, Helena and
Bethard, Steven",
booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.naacl-long.249",
doi = "10.18653/v1/2024.naacl-long.249",
pages = "4425--4438",
abstract = "Scaling up the number of parameters of language models has proven to be an effective approach to improve performance. For dense models, increasing their size proportionally increases their computational footprint. In this work, we seek to aggressively decouple learning capacity and FLOPs through Mixture-of-Experts (MoE) style models with large knowledge-rich vocabulary based routing functions. Our proposed approach, dubbed Mixture of Word Experts (MoWE), can be seen as a memory augmented model, where a large set of word-specific experts play the role of a sparse memory. We demonstrate that MoWE performs significantly better than the T5 family of models with similar number of FLOPs in a variety of NLP tasks. Moreover, MoWE outperforms traditional MoE models on knowledge intensive tasks and has similar performance to complex memory augmented approaches that often require to invoke custom mechanisms to search the sparse memory.",
}
| Scaling up the number of parameters of language models has proven to be an effective approach to improve performance. For dense models, increasing their size proportionally increases their computational footprint. In this work, we seek to aggressively decouple learning capacity and FLOPs through Mixture-of-Experts (MoE) style models with large knowledge-rich vocabulary based routing functions. Our proposed approach, dubbed Mixture of Word Experts (MoWE), can be seen as a memory augmented model, where a large set of word-specific experts play the role of a sparse memory. We demonstrate that MoWE performs significantly better than the T5 family of models with similar number of FLOPs in a variety of NLP tasks. Moreover, MoWE outperforms traditional MoE models on knowledge intensive tasks and has similar performance to complex memory augmented approaches that often require to invoke custom mechanisms to search the sparse memory. | [
"Nogueira dos Santos, Cicero",
"Lee-Thorp, James",
"Noble, Isaac",
"Chang, Chung-Ching",
"Uthus, David"
] | Memory Augmented Language Models through Mixture of Word Experts | naacl-long.249 | Poster | 2311.10768 | [
""
] | https://huggingface.co/papers/2311.10768 | 0 | 16 | 1 | 5 | 1 | [] | [] | [] |
https://aclanthology.org/2024.naacl-long.250.bib | https://aclanthology.org/2024.naacl-long.250/ | @inproceedings{jung-etal-2024-impossible,
title = "Impossible Distillation for Paraphrasing and Summarization: How to Make High-quality Lemonade out of Small, Low-quality Model",
author = "Jung, Jaehun and
West, Peter and
Jiang, Liwei and
Brahman, Faeze and
Lu, Ximing and
Fisher, Jillian and
Sorensen, Taylor and
Choi, Yejin",
editor = "Duh, Kevin and
Gomez, Helena and
Bethard, Steven",
booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.naacl-long.250",
doi = "10.18653/v1/2024.naacl-long.250",
pages = "4439--4454",
abstract = "We present Impossible Distillation, a novel framework for paraphrasing and sentence summarization, that distills a high-quality dataset and model from a low-quality teacher that itself cannot perform these tasks. Unlike prior works that rely on an extreme-scale teacher model (e.g., GPT3) or task-specific architecture, we hypothesize and verify the paraphrastic proximity intrinsic to pre-trained LMs (e.g., GPT2), where paraphrases occupy a proximal subspace in the LM distribution. By identifying and distilling generations from these subspaces, Impossible Distillation produces a high-quality dataset and model even from GPT2-scale LMs. We evaluate our method on multiple benchmarks spanning unconstrained / syntax-controlled paraphrase generation and sentence summarization. Our model with 770M parameters consistently outperforms strong baselines, including models distilled from ChatGPT, and sometimes, even ChatGPT itself. Also, we find that our distilled dataset from 1.5B LMs exhibits higher diversity and fidelity than up to 13 times larger datasets.",
}
| We present Impossible Distillation, a novel framework for paraphrasing and sentence summarization, that distills a high-quality dataset and model from a low-quality teacher that itself cannot perform these tasks. Unlike prior works that rely on an extreme-scale teacher model (e.g., GPT3) or task-specific architecture, we hypothesize and verify the paraphrastic proximity intrinsic to pre-trained LMs (e.g., GPT2), where paraphrases occupy a proximal subspace in the LM distribution. By identifying and distilling generations from these subspaces, Impossible Distillation produces a high-quality dataset and model even from GPT2-scale LMs. We evaluate our method on multiple benchmarks spanning unconstrained / syntax-controlled paraphrase generation and sentence summarization. Our model with 770M parameters consistently outperforms strong baselines, including models distilled from ChatGPT, and sometimes, even ChatGPT itself. Also, we find that our distilled dataset from 1.5B LMs exhibits higher diversity and fidelity than up to 13 times larger datasets. | [
"Jung, Jaehun",
"West, Peter",
"Jiang, Liwei",
"Brahman, Faeze",
"Lu, Ximing",
"Fisher, Jillian",
"Sorensen, Taylor",
"Choi, Yejin"
] | Impossible Distillation for Paraphrasing and Summarization: How to Make High-quality Lemonade out of Small, Low-quality Model | naacl-long.250 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2024.naacl-long.251.bib | https://aclanthology.org/2024.naacl-long.251/ | @inproceedings{tang-etal-2024-tofueval,
title = "{T}ofu{E}val: Evaluating Hallucinations of {LLM}s on Topic-Focused Dialogue Summarization",
author = "Tang, Liyan and
Shalyminov, Igor and
Wong, Amy and
Burnsky, Jon and
Vincent, Jake and
Yang, Yu{'}an and
Singh, Siffi and
Feng, Song and
Song, Hwanjun and
Su, Hang and
Sun, Lijia and
Zhang, Yi and
Mansour, Saab and
McKeown, Kathleen",
editor = "Duh, Kevin and
Gomez, Helena and
Bethard, Steven",
booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.naacl-long.251",
doi = "10.18653/v1/2024.naacl-long.251",
pages = "4455--4480",
abstract = "Single document news summarization has seen substantial progress on faithfulness in recent years, driven by research on the evaluation of factual consistency, or hallucinations. We ask whether these advances carry over to other text summarization domains. We propose a new evaluation benchmark on topic-focused dialogue summarization, generated by LLMs of varying sizes. We provide binary sentence- level human annotations of the factual consistency of these summaries along with detailed explanations of factually inconsistent sentences. Our analysis shows that existing LLMs hallucinate significant amounts of factual errors in the dialogue domain, regardless of the model{'}s size. On the other hand, when LLMs, including GPT-4, serve as binary factual evaluators, they perform poorly and can be outperformed by prevailing state-of-the-art specialized factuality evaluation metrics. Finally, we conducted an analysis of hallucination types with a curated error taxonomy. We find that there are diverse errors and error distributions in model-generated summaries and that non-LLM based metrics can capture all error types better than LLM-based evaluators.",
}
| Single document news summarization has seen substantial progress on faithfulness in recent years, driven by research on the evaluation of factual consistency, or hallucinations. We ask whether these advances carry over to other text summarization domains. We propose a new evaluation benchmark on topic-focused dialogue summarization, generated by LLMs of varying sizes. We provide binary sentence-level human annotations of the factual consistency of these summaries along with detailed explanations of factually inconsistent sentences. Our analysis shows that existing LLMs hallucinate significant amounts of factual errors in the dialogue domain, regardless of the model{'}s size. On the other hand, when LLMs, including GPT-4, serve as binary factual evaluators, they perform poorly and can be outperformed by prevailing state-of-the-art specialized factuality evaluation metrics. Finally, we conducted an analysis of hallucination types with a curated error taxonomy. We find that there are diverse errors and error distributions in model-generated summaries and that non-LLM based metrics can capture all error types better than LLM-based evaluators. | [
"Tang, Liyan",
"Shalyminov, Igor",
"Wong, Amy",
"Burnsky, Jon",
"Vincent, Jake",
"Yang, Yu{'}an",
"Singh, Siffi",
"Feng, Song",
"Song, Hwanjun",
"Su, Hang",
"Sun, Lijia",
"Zhang, Yi",
"Mansour, Saab",
"McKeown, Kathleen"
] | TofuEval: Evaluating Hallucinations of LLMs on Topic-Focused Dialogue Summarization | naacl-long.251 | Poster | 2402.13249 | [
"https://github.com/amazon-science/tofueval"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2024.naacl-long.252.bib | https://aclanthology.org/2024.naacl-long.252/ | @inproceedings{zhang-etal-2024-moka,
title = "{MOKA}: Moral Knowledge Augmentation for Moral Event Extraction",
author = "Zhang, Xinliang Frederick and
Wu, Winston and
Beauchamp, Nicholas and
Wang, Lu",
editor = "Duh, Kevin and
Gomez, Helena and
Bethard, Steven",
booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.naacl-long.252",
doi = "10.18653/v1/2024.naacl-long.252",
pages = "4481--4502",
abstract = "News media often strive to minimize explicit moral language in news articles, yet most articles are dense with moral values as expressed through the reported events themselves. However, values that are reflected in the intricate dynamics among *participating entities* and *moral events* are far more challenging for most NLP systems to detect, including LLMs. To study this phenomenon, we annotate a new dataset, **MORAL EVENTS**, consisting of 5,494 structured event annotations on 474 news articles by diverse US media across the political spectrum. We further propose **MOKA**, a moral event extraction framework with **MO**ral **K**nowledge **A**ugmentation, which leverages knowledge derived from moral words and moral scenarios to produce structural representations of morality-bearing events. Experiments show that **MOKA** outperforms competitive baselines across three moral event understanding tasks. Further analysis shows even ostensibly nonpartisan media engage in the selective reporting of moral events.",
}
| News media often strive to minimize explicit moral language in news articles, yet most articles are dense with moral values as expressed through the reported events themselves. However, values that are reflected in the intricate dynamics among *participating entities* and *moral events* are far more challenging for most NLP systems to detect, including LLMs. To study this phenomenon, we annotate a new dataset, **MORAL EVENTS**, consisting of 5,494 structured event annotations on 474 news articles by diverse US media across the political spectrum. We further propose **MOKA**, a moral event extraction framework with **MO**ral **K**nowledge **A**ugmentation, which leverages knowledge derived from moral words and moral scenarios to produce structural representations of morality-bearing events. Experiments show that **MOKA** outperforms competitive baselines across three moral event understanding tasks. Further analysis shows even ostensibly nonpartisan media engage in the selective reporting of moral events. | [
"Zhang, Xinliang Frederick",
"Wu, Winston",
"Beauchamp, Nicholas",
"Wang, Lu"
] | MOKA: Moral Knowledge Augmentation for Moral Event Extraction | naacl-long.252 | Poster | 2311.09733 | [
"https://github.com/launchnlp/moka"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2024.naacl-long.253.bib | https://aclanthology.org/2024.naacl-long.253/ | @inproceedings{cavalin-etal-2024-fixing,
title = "Fixing Rogue Memorization in Many-to-One Multilingual Translators of Extremely-Low-Resource Languages by Rephrasing Training Samples",
author = "Cavalin, Paulo and
Domingues, Pedro Henrique and
Pinhanez, Claudio and
Nogima, Julio",
editor = "Duh, Kevin and
Gomez, Helena and
Bethard, Steven",
booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.naacl-long.253",
doi = "10.18653/v1/2024.naacl-long.253",
pages = "4503--4514",
abstract = "In this paper we study the fine-tuning of pre-trained large high-resource language models (LLMs) into many-to-one multilingual machine translators for extremely-low-resource languages such as endangered Indigenous languages. We explore those issues using datasets created from pseudo-parallel translations to English of The Bible written in 39 Brazilian Indigenous languages using mBART50 and WMT19 as pre-trained models and multiple translation metrics. We examine bilingual and multilingual models and show that, according to machine translation metrics, same-linguistic family models tend to perform best. However, we also found that many-to-one multilingual systems have a tendency to learn a {``}rogue{''} strategy of storing output strings from the training data in the LLM structure and retrieving them instead of performing actual translations. We show that rephrasing the output of the training samples seems to solve the problem.",
}
| In this paper we study the fine-tuning of pre-trained large high-resource language models (LLMs) into many-to-one multilingual machine translators for extremely-low-resource languages such as endangered Indigenous languages. We explore those issues using datasets created from pseudo-parallel translations to English of The Bible written in 39 Brazilian Indigenous languages using mBART50 and WMT19 as pre-trained models and multiple translation metrics. We examine bilingual and multilingual models and show that, according to machine translation metrics, same-linguistic family models tend to perform best. However, we also found that many-to-one multilingual systems have a tendency to learn a {``}rogue{''} strategy of storing output strings from the training data in the LLM structure and retrieving them instead of performing actual translations. We show that rephrasing the output of the training samples seems to solve the problem. | [
"Cavalin, Paulo",
"Domingues, Pedro Henrique",
"Pinhanez, Claudio",
"Nogima, Julio"
] | Fixing Rogue Memorization in Many-to-One Multilingual Translators of Extremely-Low-Resource Languages by Rephrasing Training Samples | naacl-long.253 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2024.naacl-long.254.bib | https://aclanthology.org/2024.naacl-long.254/ | @inproceedings{wang-etal-2024-backdoor,
title = "Backdoor Attacks on Multilingual Machine Translation",
author = "Wang, Jun and
Xu, Qiongkai and
He, Xuanli and
Rubinstein, Benjamin and
Cohn, Trevor",
editor = "Duh, Kevin and
Gomez, Helena and
Bethard, Steven",
booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.naacl-long.254",
doi = "10.18653/v1/2024.naacl-long.254",
pages = "4515--4534",
abstract = "While multilingual machine translation (MNMT) systems hold substantial promise, they also have security vulnerabilities. Our research highlights that MNMT systems can be susceptible to a particularly devious style of backdoor attack, whereby an attacker injects poisoned data into a low-resource language pair to cause malicious translations in other languages, including high-resource languages.Our experimental results reveal that injecting less than 0.01{\%} poisoned data into a low-resource language pair can achieve an average 20{\%} attack success rate in attacking high-resource language pairs. This type of attack is of particular concern, given the larger attack surface of languages inherent to low-resource settings. Our aim is to bring attention to these vulnerabilities within MNMT systems with the hope of encouraging the community to address security concerns in machine translation, especially in the context of low-resource languages.",
}
| While multilingual machine translation (MNMT) systems hold substantial promise, they also have security vulnerabilities. Our research highlights that MNMT systems can be susceptible to a particularly devious style of backdoor attack, whereby an attacker injects poisoned data into a low-resource language pair to cause malicious translations in other languages, including high-resource languages. Our experimental results reveal that injecting less than 0.01{\%} poisoned data into a low-resource language pair can achieve an average 20{\%} attack success rate in attacking high-resource language pairs. This type of attack is of particular concern, given the larger attack surface of languages inherent to low-resource settings. Our aim is to bring attention to these vulnerabilities within MNMT systems with the hope of encouraging the community to address security concerns in machine translation, especially in the context of low-resource languages. | [
"Wang, Jun",
"Xu, Qiongkai",
"He, Xuanli",
"Rubinstein, Benjamin",
"Cohn, Trevor"
] | Backdoor Attacks on Multilingual Machine Translation | naacl-long.254 | Oral | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2024.naacl-long.255.bib | https://aclanthology.org/2024.naacl-long.255/ | @inproceedings{guo-etal-2024-personalized,
title = "Personalized Jargon Identification for Enhanced Interdisciplinary Communication",
author = "Guo, Yue and
Chang, Joseph Chee and
Antoniak, Maria and
Bransom, Erin and
Cohen, Trevor and
Wang, Lucy and
August, Tal",
editor = "Duh, Kevin and
Gomez, Helena and
Bethard, Steven",
booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.naacl-long.255",
doi = "10.18653/v1/2024.naacl-long.255",
pages = "4535--4550",
abstract = "Scientific jargon can confuse researchers when they read materials from other domains. Identifying and translating jargon for individual researchers could speed up research, but current methods of jargon identification mainly use corpus-level familiarity indicators rather than modeling researcher-specific needs, which can vary greatly based on each researcher{'}s background. We collect a dataset of over 10K term familiarity annotations from 11 computer science researchers for terms drawn from 100 paper abstracts. Analysis of this data reveals that jargon familiarity and information needs vary widely across annotators, even within the same sub-domain (e.g., NLP). We investigate features representing domain, subdomain, and individual knowledge to predict individual jargon familiarity. We compare supervised and prompt-based approaches, finding that prompt-based methods using information about the individual researcher (e.g., personal publications, self-defined subfield of research) yield the highest accuracy, though the task remains difficult and supervised approaches have lower false positive rates. This research offers insights into features and methods for the novel task of integrating personal data into scientific jargon identification.",
}
| Scientific jargon can confuse researchers when they read materials from other domains. Identifying and translating jargon for individual researchers could speed up research, but current methods of jargon identification mainly use corpus-level familiarity indicators rather than modeling researcher-specific needs, which can vary greatly based on each researcher{'}s background. We collect a dataset of over 10K term familiarity annotations from 11 computer science researchers for terms drawn from 100 paper abstracts. Analysis of this data reveals that jargon familiarity and information needs vary widely across annotators, even within the same sub-domain (e.g., NLP). We investigate features representing domain, subdomain, and individual knowledge to predict individual jargon familiarity. We compare supervised and prompt-based approaches, finding that prompt-based methods using information about the individual researcher (e.g., personal publications, self-defined subfield of research) yield the highest accuracy, though the task remains difficult and supervised approaches have lower false positive rates. This research offers insights into features and methods for the novel task of integrating personal data into scientific jargon identification. | [
"Guo, Yue",
"Chang, Joseph Chee",
"Antoniak, Maria",
"Bransom, Erin",
"Cohen, Trevor",
"Wang, Lucy",
"August, Tal"
] | Personalized Jargon Identification for Enhanced Interdisciplinary Communication | naacl-long.255 | Poster | 2311.09481 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2024.naacl-long.256.bib | https://aclanthology.org/2024.naacl-long.256/ | @inproceedings{huang-etal-2024-flames,
title = "Flames: Benchmarking Value Alignment of {LLM}s in {C}hinese",
author = "Huang, Kexin and
Liu, Xiangyang and
Guo, Qianyu and
Sun, Tianxiang and
Sun, Jiawei and
Wang, Yaru and
Zhou, Zeyang and
Wang, Yixu and
Teng, Yan and
Qiu, Xipeng and
Wang, Yingchun and
Lin, Dahua",
editor = "Duh, Kevin and
Gomez, Helena and
Bethard, Steven",
booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.naacl-long.256",
doi = "10.18653/v1/2024.naacl-long.256",
pages = "4551--4591",
abstract = "The widespread adoption of large language models (LLMs) across various regions underscores the urgent need to evaluate their alignment with human values. Current benchmarks, however, fall short of effectively uncovering safety vulnerabilities in LLMs. Despite numerous models achieving high scores and {`}topping the chart{'} in these evaluations, there is still a significant gap in LLMs{'} deeper alignment with human values and achieving genuine harmlessness. To this end, this paper proposes a value alignment benchmark named Flames, which encompasses both common harmlessness principles and a unique morality dimension that integrates specific Chinese values such as harmony. Accordingly, we carefully design adversarial prompts that incorporate complex scenarios and jailbreaking methods, mostly with implicit malice. By prompting 17 mainstream LLMs, we obtain model responses and rigorously annotate them for detailed evaluation. Our findings indicate that all the evaluated LLMs demonstrate relatively poor performance on Flames, particularly in the safety and fairness dimensions. We also develop a lightweight specified scorer capable of scoring LLMs across multiple dimensions to efficiently evaluate new models on the benchmark. The complexity of Flames has far exceeded existing benchmarks, setting a new challenge for contemporary LLMs and highlighting the need for further alignment of LLMs. Our benchmark is publicly available at https://github.com/AIFlames/Flames.",
}
| The widespread adoption of large language models (LLMs) across various regions underscores the urgent need to evaluate their alignment with human values. Current benchmarks, however, fall short of effectively uncovering safety vulnerabilities in LLMs. Despite numerous models achieving high scores and {`}topping the chart{'} in these evaluations, there is still a significant gap in LLMs{'} deeper alignment with human values and achieving genuine harmlessness. To this end, this paper proposes a value alignment benchmark named Flames, which encompasses both common harmlessness principles and a unique morality dimension that integrates specific Chinese values such as harmony. Accordingly, we carefully design adversarial prompts that incorporate complex scenarios and jailbreaking methods, mostly with implicit malice. By prompting 17 mainstream LLMs, we obtain model responses and rigorously annotate them for detailed evaluation. Our findings indicate that all the evaluated LLMs demonstrate relatively poor performance on Flames, particularly in the safety and fairness dimensions. We also develop a lightweight specified scorer capable of scoring LLMs across multiple dimensions to efficiently evaluate new models on the benchmark. The complexity of Flames has far exceeded existing benchmarks, setting a new challenge for contemporary LLMs and highlighting the need for further alignment of LLMs. Our benchmark is publicly available at https://github.com/AIFlames/Flames. | [
"Huang, Kexin",
"Liu, Xiangyang",
"Guo, Qianyu",
"Sun, Tianxiang",
"Sun, Jiawei",
"Wang, Yaru",
"Zhou, Zeyang",
"Wang, Yixu",
"Teng, Yan",
"Qiu, Xipeng",
"Wang, Yingchun",
"Lin, Dahua"
] | Flames: Benchmarking Value Alignment of LLMs in Chinese | naacl-long.256 | Poster | 2311.06899 | [
"https://github.com/aiflames/flames"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2024.naacl-long.257.bib | https://aclanthology.org/2024.naacl-long.257/ | @inproceedings{ma-etal-2024-mitigating,
title = "Mitigating Bias for Question Answering Models by Tracking Bias Influence",
author = "Ma, Mingyu and
Kao, Jiun-Yu and
Gupta, Arpit and
Lin, Yu-Hsiang and
Zhao, Wenbo and
Chung, Tagyoung and
Wang, Wei and
Chang, Kai-Wei and
Peng, Nanyun",
editor = "Duh, Kevin and
Gomez, Helena and
Bethard, Steven",
booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.naacl-long.257",
doi = "10.18653/v1/2024.naacl-long.257",
pages = "4592--4610",
abstract = "Models of various NLP tasks have been shown to exhibit stereotypes, and the bias in the question answering (QA) models is especially harmful as the output answers might be directly consumed by the end users. There have been datasets to evaluate bias in QA models, while bias mitigation technique for the QA models is still under-explored. In this work, we propose BMBI, an approach to mitigate the bias of multiple-choice QA models. Based on the intuition that a model would lean to be more biased if it learns from a biased example, we measure the bias level of a query instance by observing its influence on another instance. If the influenced instance is more biased, we derive that the query instance is biased. We then use the bias level detected as an optimization objective to form a multi-task learning setting in addition to the original QA task. We further introduce a new bias evaluation metric to quantify bias in a comprehensive and sensitive way. We show that our method could be applied to multiple QA formulations across multiple bias categories. It can significantly reduce the bias level in all 9 bias categories in the BBQ dataset while maintaining comparable QA accuracy.",
}
| Models of various NLP tasks have been shown to exhibit stereotypes, and the bias in the question answering (QA) models is especially harmful as the output answers might be directly consumed by the end users. There have been datasets to evaluate bias in QA models, while bias mitigation techniques for QA models are still under-explored. In this work, we propose BMBI, an approach to mitigate the bias of multiple-choice QA models. Based on the intuition that a model would lean to be more biased if it learns from a biased example, we measure the bias level of a query instance by observing its influence on another instance. If the influenced instance is more biased, we derive that the query instance is biased. We then use the bias level detected as an optimization objective to form a multi-task learning setting in addition to the original QA task. We further introduce a new bias evaluation metric to quantify bias in a comprehensive and sensitive way. We show that our method could be applied to multiple QA formulations across multiple bias categories. It can significantly reduce the bias level in all 9 bias categories in the BBQ dataset while maintaining comparable QA accuracy. | [
"Ma, Mingyu",
"Kao, Jiun-Yu",
"Gupta, Arpit",
"Lin, Yu-Hsiang",
"Zhao, Wenbo",
"Chung, Tagyoung",
"Wang, Wei",
"Chang, Kai-Wei",
"Peng, Nanyun"
] | Mitigating Bias for Question Answering Models by Tracking Bias Influence | naacl-long.257 | Poster | 2310.08795 | [
""
] | https://huggingface.co/papers/2310.08795 | 1 | 0 | 0 | 9 | 1 | [] | [] | [] |
https://aclanthology.org/2024.naacl-long.258.bib | https://aclanthology.org/2024.naacl-long.258/ | @inproceedings{kim-etal-2024-extending,
title = "Extending {CLIP}{'}s Image-Text Alignment to Referring Image Segmentation",
author = "Kim, Seoyeon and
Kang, Minguk and
Kim, Dongwon and
Park, Jaesik and
Kwak, Suha",
editor = "Duh, Kevin and
Gomez, Helena and
Bethard, Steven",
booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.naacl-long.258",
doi = "10.18653/v1/2024.naacl-long.258",
pages = "4611--4628",
abstract = "Referring Image Segmentation (RIS) is a cross-modal task that aims to segment an instance described by a natural language expression. Recent methods leverage large-scale pretrained unimodal models as backbones along with fusion techniques for joint reasoning across modalities. However, the inherent cross-modal nature of RIS raises questions about the effectiveness of unimodal backbones. We propose RISCLIP, a novel framework that effectively leverages the cross-modal nature of CLIP for RIS. Observing CLIP{'}s inherent alignment between image and text features, we capitalize on this starting point and introduce simple but strong modules that enhance unimodal feature extraction and leverage rich alignment knowledge in CLIP{'}s image-text shared-embedding space. RISCLIP exhibits outstanding results on all three major RIS benchmarks and also outperforms previous CLIP-based methods, demonstrating the efficacy of our strategy in extending CLIP{'}s image-text alignment to RIS.",
}
| Referring Image Segmentation (RIS) is a cross-modal task that aims to segment an instance described by a natural language expression. Recent methods leverage large-scale pretrained unimodal models as backbones along with fusion techniques for joint reasoning across modalities. However, the inherent cross-modal nature of RIS raises questions about the effectiveness of unimodal backbones. We propose RISCLIP, a novel framework that effectively leverages the cross-modal nature of CLIP for RIS. Observing CLIP{'}s inherent alignment between image and text features, we capitalize on this starting point and introduce simple but strong modules that enhance unimodal feature extraction and leverage rich alignment knowledge in CLIP{'}s image-text shared-embedding space. RISCLIP exhibits outstanding results on all three major RIS benchmarks and also outperforms previous CLIP-based methods, demonstrating the efficacy of our strategy in extending CLIP{'}s image-text alignment to RIS. | [
"Kim, Seoyeon",
"Kang, Minguk",
"Kim, Dongwon",
"Park, Jaesik",
"Kwak, Suha"
] | Extending CLIP's Image-Text Alignment to Referring Image Segmentation | naacl-long.258 | Poster | 2306.08498 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2024.naacl-long.259.bib | https://aclanthology.org/2024.naacl-long.259/ | @inproceedings{lin-ma-2024-generating,
title = "Generating Attractive and Authentic Copywriting from Customer Reviews",
author = "Lin, Yu-Xiang and
Ma, Wei-Yun",
editor = "Duh, Kevin and
Gomez, Helena and
Bethard, Steven",
booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.naacl-long.259",
doi = "10.18653/v1/2024.naacl-long.259",
pages = "4629--4642",
abstract = "The goal of product copywriting is to capture the interest of potential buyers by emphasizing the features of products through text descriptions. As e-commerce platforms offer a wide range of services, it{'}s becoming essential to dynamically adjust the styles of these auto-generated descriptions. Typical approaches to copywriting generation often rely solely on specified product attributes, which may result in dull and repetitive content. To tackle this issue, we propose to generate copywriting based on customer reviews, as they provide firsthand practical experiences with products, offering a richer source of information than just product attributes. We have developed a sequence-to-sequence framework, enhanced with reinforcement learning, to produce copywriting that is attractive, authentic, and rich in information. Our framework outperforms all existing baseline and zero-shot large language models, including LLaMA-2-chat-7B and GPT-3.5, in terms of both attractiveness and faithfulness. Furthermore, this work features the use of LLMs for aspect-based summaries collection and argument allure assessment. Experiments demonstrate the effectiveness of using LLMs for marketing domain corpus construction. The code and the dataset is publicly available at: \url{https://github.com/YuXiangLin1234/Copywriting-Generation}.",
}
| The goal of product copywriting is to capture the interest of potential buyers by emphasizing the features of products through text descriptions. As e-commerce platforms offer a wide range of services, it{'}s becoming essential to dynamically adjust the styles of these auto-generated descriptions. Typical approaches to copywriting generation often rely solely on specified product attributes, which may result in dull and repetitive content. To tackle this issue, we propose to generate copywriting based on customer reviews, as they provide firsthand practical experiences with products, offering a richer source of information than just product attributes. We have developed a sequence-to-sequence framework, enhanced with reinforcement learning, to produce copywriting that is attractive, authentic, and rich in information. Our framework outperforms all existing baseline and zero-shot large language models, including LLaMA-2-chat-7B and GPT-3.5, in terms of both attractiveness and faithfulness. Furthermore, this work features the use of LLMs for aspect-based summaries collection and argument allure assessment. Experiments demonstrate the effectiveness of using LLMs for marketing domain corpus construction. The code and the dataset are publicly available at: \url{https://github.com/YuXiangLin1234/Copywriting-Generation}. | [
"Lin, Yu-Xiang",
"Ma, Wei-Yun"
] | Generating Attractive and Authentic Copywriting from Customer Reviews | naacl-long.259 | Poster | 2404.13906 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2024.naacl-long.260.bib | https://aclanthology.org/2024.naacl-long.260/ | @inproceedings{xiong-etal-2024-effective,
title = "Effective Long-Context Scaling of Foundation Models",
author = "Xiong, Wenhan and
Liu, Jingyu and
Molybog, Igor and
Zhang, Hejia and
Bhargava, Prajjwal and
Hou, Rui and
Martin, Louis and
Rungta, Rashi and
Sankararaman, Karthik Abinav and
Oguz, Barlas and
Khabsa, Madian and
Fang, Han and
Mehdad, Yashar and
Narang, Sharan and
Malik, Kshitiz and
Fan, Angela and
Bhosale, Shruti and
Edunov, Sergey and
Lewis, Mike and
Wang, Sinong and
Ma, Hao",
editor = "Duh, Kevin and
Gomez, Helena and
Bethard, Steven",
booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.naacl-long.260",
doi = "10.18653/v1/2024.naacl-long.260",
pages = "4643--4663",
abstract = "We present an effective recipe to train strong long-context LLMs that are capable of utilizing massive context windows of up to 32,000 tokens. Our models are built through continual pretraining from Llama 2 checkpoints with longer text sequences and on a dataset where long texts are upsampled. We perform extensive evaluation using language modeling, synthetic context probing tasks, and a wide range of downstream benchmarks. Across all evaluations, our models achieve consistent improvements on most regular-context tasks and significant improvements on long-context tasks over Llama 2. Moreover, with a cost-effective instruction tuning procedure that is free of expensive annotation, the presented models can already surpass $\texttt{gpt-3.5-turbo-16k}${`}s overall performance on long-context benchmarks. Alongside these results, we provide an in-depth analysis on each individual component of our method. We delve into Llama{'}s position encodings and discuss its key limitation in modeling long data. We examine the impact of various design choices in the pretraining process, including the data mix and the training curriculum of sequence lengths {--} ablation results suggest that having abundant long texts in the pretrain dataset is $\textit{not}$ the key to achieving strong performance, and we empirically verify that long context continual pretraining is more efficient and similarly effective compared to pretraining from scratch with long sequences.",
}
| We present an effective recipe to train strong long-context LLMs that are capable of utilizing massive context windows of up to 32,000 tokens. Our models are built through continual pretraining from Llama 2 checkpoints with longer text sequences and on a dataset where long texts are upsampled. We perform extensive evaluation using language modeling, synthetic context probing tasks, and a wide range of downstream benchmarks. Across all evaluations, our models achieve consistent improvements on most regular-context tasks and significant improvements on long-context tasks over Llama 2. Moreover, with a cost-effective instruction tuning procedure that is free of expensive annotation, the presented models can already surpass $\texttt{gpt-3.5-turbo-16k}${'}s overall performance on long-context benchmarks. Alongside these results, we provide an in-depth analysis on each individual component of our method. We delve into Llama{'}s position encodings and discuss its key limitation in modeling long data. We examine the impact of various design choices in the pretraining process, including the data mix and the training curriculum of sequence lengths {--} ablation results suggest that having abundant long texts in the pretrain dataset is $\textit{not}$ the key to achieving strong performance, and we empirically verify that long context continual pretraining is more efficient and similarly effective compared to pretraining from scratch with long sequences. | [
"Xiong, Wenhan",
"Liu, Jingyu",
"Molybog, Igor",
"Zhang, Hejia",
"Bhargava, Prajjwal",
"Hou, Rui",
"Martin, Louis",
"Rungta, Rashi",
"Sankararaman, Karthik Abinav",
"Oguz, Barlas",
"Khabsa, Madian",
"Fang, Han",
"Mehdad, Yashar",
"Narang, Sharan",
"Malik, Kshitiz",
"Fan, Angela",
"Bhosale, Shruti",
"Edunov, Sergey",
"Lewis, Mike",
"Wang, Sinong",
"Ma, Hao"
] | Effective Long-Context Scaling of Foundation Models | naacl-long.260 | Poster | 2309.16039 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2024.naacl-long.261.bib | https://aclanthology.org/2024.naacl-long.261/ | @inproceedings{gao-etal-2024-empowering,
title = "Empowering Diffusion Models on the Embedding Space for Text Generation",
author = "Gao, Zhujin and
Guo, Junliang and
Tan, Xu and
Zhu, Yongxin and
Zhang, Fang and
Bian, Jiang and
Xu, Linli",
editor = "Duh, Kevin and
Gomez, Helena and
Bethard, Steven",
booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.naacl-long.261",
doi = "10.18653/v1/2024.naacl-long.261",
pages = "4664--4683",
abstract = "Diffusion models have achieved state-of-the-art synthesis quality on both visual and audio tasks, and recent works further adapt them to textual data by diffusing on the embedding space. In this paper, we conduct systematic studies of the optimization challenges encountered with both the embedding space and the denoising model, which have not been carefully explored. Firstly, the data distribution is learnable for embeddings, which may lead to the collapse of the embedding space and unstable training. To alleviate this problem, we propose a new objective called the anchor loss which is more efficient than previous methods. Secondly, we find the noise levels of conventional schedules are insufficient for training a desirable denoising model while introducing varying degrees of degeneration in consequence. To address this challenge, we propose a novel framework called noise rescaling. Based on the above analysis, we propose Difformer, an embedding diffusion model based on Transformer. Experiments on varieties of seminal text generation tasks show the effectiveness of the proposed methods and the superiority of Difformer over previous state-of-the-art embedding diffusion baselines.",
}
| Diffusion models have achieved state-of-the-art synthesis quality on both visual and audio tasks, and recent works further adapt them to textual data by diffusing on the embedding space. In this paper, we conduct systematic studies of the optimization challenges encountered with both the embedding space and the denoising model, which have not been carefully explored. Firstly, the data distribution is learnable for embeddings, which may lead to the collapse of the embedding space and unstable training. To alleviate this problem, we propose a new objective called the anchor loss which is more efficient than previous methods. Secondly, we find the noise levels of conventional schedules are insufficient for training a desirable denoising model while introducing varying degrees of degeneration in consequence. To address this challenge, we propose a novel framework called noise rescaling. Based on the above analysis, we propose Difformer, an embedding diffusion model based on Transformer. Experiments on varieties of seminal text generation tasks show the effectiveness of the proposed methods and the superiority of Difformer over previous state-of-the-art embedding diffusion baselines. | [
"Gao, Zhujin",
"Guo, Junliang",
"Tan, Xu",
"Zhu, Yongxin",
"Zhang, Fang",
"Bian, Jiang",
"Xu, Linli"
] | Empowering Diffusion Models on the Embedding Space for Text Generation | naacl-long.261 | Poster | 2212.09412 | [
"https://github.com/zhjgao/difformer"
] | https://huggingface.co/papers/2212.09412 | 2 | 1 | 0 | 7 | 1 | [] | [] | [] |
https://aclanthology.org/2024.naacl-long.262.bib | https://aclanthology.org/2024.naacl-long.262/ | @inproceedings{xia-etal-2024-aligning,
title = "Aligning as Debiasing: Causality-Aware Alignment via Reinforcement Learning with Interventional Feedback",
author = "Xia, Yu and
Yu, Tong and
He, Zhankui and
Zhao, Handong and
McAuley, Julian and
Li, Shuai",
editor = "Duh, Kevin and
Gomez, Helena and
Bethard, Steven",
booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.naacl-long.262",
doi = "10.18653/v1/2024.naacl-long.262",
pages = "4684--4695",
abstract = "Large language models (LLMs) often generate biased outputs containing offensive, toxic, or stereotypical text. Existing LLM alignment methods such as reinforcement learning from human feedback (RLHF) alleviate biases primarily based on reward signals from current model outputs without considering the source of biases. In this work, to explore how biases are formed, we revisit LLMs{'} text generation from a causal perspective. We identify pretraining data and input prompts, which contain semantic correlations of textual phrases, as two confounders between LLMs and model outputs causing biases. Inspired by our causal view, we leverage the reward model in RL alignment as an instrumental variable to perform causal intervention on LLMs. Utilizing the reward difference between an initial LLM and intervened LLM as interventional feedback to guide RL finetuning, we propose Causality-Aware Alignment (CAA) for LLM debiasing. Experiments on two text generation tasks with three different alignment objectives demonstrate the advantages of our method in aligning LLMs to generate less biased and safer outputs.",
}
| Large language models (LLMs) often generate biased outputs containing offensive, toxic, or stereotypical text. Existing LLM alignment methods such as reinforcement learning from human feedback (RLHF) alleviate biases primarily based on reward signals from current model outputs without considering the source of biases. In this work, to explore how biases are formed, we revisit LLMs{'} text generation from a causal perspective. We identify pretraining data and input prompts, which contain semantic correlations of textual phrases, as two confounders between LLMs and model outputs causing biases. Inspired by our causal view, we leverage the reward model in RL alignment as an instrumental variable to perform causal intervention on LLMs. Utilizing the reward difference between an initial LLM and intervened LLM as interventional feedback to guide RL finetuning, we propose Causality-Aware Alignment (CAA) for LLM debiasing. Experiments on two text generation tasks with three different alignment objectives demonstrate the advantages of our method in aligning LLMs to generate less biased and safer outputs. | [
"Xia, Yu",
"Yu, Tong",
"He, Zhankui",
"Zhao, H",
"ong",
"McAuley, Julian",
"Li, Shuai"
] | Aligning as Debiasing: Causality-Aware Alignment via Reinforcement Learning with Interventional Feedback | naacl-long.262 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2024.naacl-long.263.bib | https://aclanthology.org/2024.naacl-long.263/ | @inproceedings{wang-etal-2024-fake,
title = "Fake Alignment: Are {LLM}s Really Aligned Well?",
author = "Wang, Yixu and
Teng, Yan and
Huang, Kexin and
Lyu, Chengqi and
Zhang, Songyang and
Zhang, Wenwei and
Ma, Xingjun and
Jiang, Yu-Gang and
Qiao, Yu and
Wang, Yingchun",
editor = "Duh, Kevin and
Gomez, Helena and
Bethard, Steven",
booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.naacl-long.263",
doi = "10.18653/v1/2024.naacl-long.263",
pages = "4696--4712",
abstract = "The growing awareness of safety concerns in large language models (LLMs) has sparked considerable interest in the evaluation of safety. This study investigates an under-explored issue about the evaluation of LLMs, namely the substantial discrepancy in performance between multiple-choice questions and open-ended questions. Inspired by research on jailbreak attack patterns, we argue this is caused by mismatched generalization. That is, LLM only remembers the answer style for open-ended safety questions, which makes it unable to solve other forms of safety tests. We refer to this phenomenon as fake alignment and construct a comparative benchmark to empirically verify its existence in LLMs. We introduce a Fake alIgNment Evaluation (FINE) framework and two novel metrics{---}{---}Consistency Score (CS) and Consistent Safety Score (CSS), which jointly assess two complementary forms of evaluation to quantify fake alignment and obtain corrected performance estimation. Applying FINE to 14 widely-used LLMs reveals several models with purported safety are poorly aligned in practice. Subsequently, we found that multiple-choice format data can also be used as high-quality contrast distillation-based fine-tuning data, which can strongly improve the alignment consistency of LLMs with minimal fine-tuning overhead. For data and code, see https://github.com/AIFlames/Fake-Alignment.",
}
| The growing awareness of safety concerns in large language models (LLMs) has sparked considerable interest in the evaluation of safety. This study investigates an under-explored issue about the evaluation of LLMs, namely the substantial discrepancy in performance between multiple-choice questions and open-ended questions. Inspired by research on jailbreak attack patterns, we argue this is caused by mismatched generalization. That is, LLM only remembers the answer style for open-ended safety questions, which makes it unable to solve other forms of safety tests. We refer to this phenomenon as fake alignment and construct a comparative benchmark to empirically verify its existence in LLMs. We introduce a Fake alIgNment Evaluation (FINE) framework and two novel metrics{---}{---}Consistency Score (CS) and Consistent Safety Score (CSS), which jointly assess two complementary forms of evaluation to quantify fake alignment and obtain corrected performance estimation. Applying FINE to 14 widely-used LLMs reveals several models with purported safety are poorly aligned in practice. Subsequently, we found that multiple-choice format data can also be used as high-quality contrast distillation-based fine-tuning data, which can strongly improve the alignment consistency of LLMs with minimal fine-tuning overhead. For data and code, see https://github.com/AIFlames/Fake-Alignment. | [
"Wang, Yixu",
"Teng, Yan",
"Huang, Kexin",
"Lyu, Chengqi",
"Zhang, Songyang",
"Zhang, Wenwei",
"Ma, Xingjun",
"Jiang, Yu-Gang",
"Qiao, Yu",
"Wang, Yingchun"
] | Fake Alignment: Are LLMs Really Aligned Well? | naacl-long.263 | Oral | 2311.05915 | [
"https://github.com/aiflames/fake-alignment"
] | https://huggingface.co/papers/2311.05915 | 2 | 2 | 0 | 10 | 1 | [] | [] | [] |
https://aclanthology.org/2024.naacl-long.264.bib | https://aclanthology.org/2024.naacl-long.264/ | @inproceedings{mao-etal-2024-visually,
title = "Visually Guided Generative Text-Layout Pre-training for Document Intelligence",
author = "Mao, Zhiming and
Bai, Haoli and
Hou, Lu and
Shang, Lifeng and
Jiang, Xin and
Liu, Qun and
Wong, Kam-Fai",
editor = "Duh, Kevin and
Gomez, Helena and
Bethard, Steven",
booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.naacl-long.264",
doi = "10.18653/v1/2024.naacl-long.264",
pages = "4713--4730",
abstract = "Prior study shows that pre-training techniques can boost the performance of visual document understanding (VDU), which typically requires models to gain abilities to perceive and reason both document texts and layouts (e.g., locations of texts and table-cells). To this end, we propose visually guided generative text-layout pre-training, named ViTLP. Given a document image, the model optimizes hierarchical language and layout modeling objectives to generate the interleaved text and layout sequence. In addition, to address the limitation of processing long documents by Transformers, we introduce a straightforward yet effective multi-segment generative pre-training scheme, facilitating ViTLP to process word-intensive documents of any length. ViTLP can function as a native OCR model to localize and recognize texts of document images. Besides, ViTLP can be effectively applied to various downstream VDU tasks. Extensive experiments show that ViTLP achieves competitive performance over existing baselines on benchmark VDU tasks, including information extraction, document classification, and document question answering.",
}
| Prior study shows that pre-training techniques can boost the performance of visual document understanding (VDU), which typically requires models to gain abilities to perceive and reason both document texts and layouts (e.g., locations of texts and table-cells). To this end, we propose visually guided generative text-layout pre-training, named ViTLP. Given a document image, the model optimizes hierarchical language and layout modeling objectives to generate the interleaved text and layout sequence. In addition, to address the limitation of processing long documents by Transformers, we introduce a straightforward yet effective multi-segment generative pre-training scheme, facilitating ViTLP to process word-intensive documents of any length. ViTLP can function as a native OCR model to localize and recognize texts of document images. Besides, ViTLP can be effectively applied to various downstream VDU tasks. Extensive experiments show that ViTLP achieves competitive performance over existing baselines on benchmark VDU tasks, including information extraction, document classification, and document question answering. | [
"Mao, Zhiming",
"Bai, Haoli",
"Hou, Lu",
"Shang, Lifeng",
"Jiang, Xin",
"Liu, Qun",
"Wong, Kam-Fai"
] | Visually Guided Generative Text-Layout Pre-training for Document Intelligence | naacl-long.264 | Poster | 2403.16516 | [
"https://github.com/veason-silverbullet/vitlp"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2024.naacl-long.265.bib | https://aclanthology.org/2024.naacl-long.265/ | @inproceedings{zhu-etal-2024-hill,
title = "{HILL}: Hierarchy-aware Information Lossless Contrastive Learning for Hierarchical Text Classification",
author = "Zhu, He and
Wu, Junran and
Liu, Ruomei and
Hou, Yue and
Yuan, Ze and
Li, Shangzhe and
Pan, Yicheng and
Xu, Ke",
editor = "Duh, Kevin and
Gomez, Helena and
Bethard, Steven",
booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.naacl-long.265",
doi = "10.18653/v1/2024.naacl-long.265",
pages = "4731--4745",
abstract = "Existing self-supervised methods in natural language processing (NLP), especially hierarchical text classification (HTC), mainly focus on self-supervised contrastive learning, extremely relying on human-designed augmentation rules to generate contrastive samples, which can potentially corrupt or distort the original information. In this paper, we tend to investigate the feasibility of a contrastive learning scheme in which the semantic and syntactic information inherent in the input sample is adequately reserved in the contrastive samples and fused during the learning process. Specifically, we propose an information lossless contrastive learning strategy for HTC, namely $\textbf{H}$ierarchy-aware $\textbf{I}$nformation $\textbf{L}$ossless contrastive $\textbf{L}$earning (HILL), which consists of a text encoder representing the input document, and a structure encoder directly generating the positive sample. The structure encoder takes the document embedding as input, extracts the essential syntactic information inherent in the label hierarchy with the principle of structural entropy minimization, and injects the syntactic information into the text representation via hierarchical representation learning. Experiments on three common datasets are conducted to verify the superiority of HILL.",
}
| Existing self-supervised methods in natural language processing (NLP), especially hierarchical text classification (HTC), mainly focus on self-supervised contrastive learning, relying heavily on human-designed augmentation rules to generate contrastive samples, which can potentially corrupt or distort the original information. In this paper, we investigate the feasibility of a contrastive learning scheme in which the semantic and syntactic information inherent in the input sample is adequately preserved in the contrastive samples and fused during the learning process. Specifically, we propose an information lossless contrastive learning strategy for HTC, namely $\textbf{H}$ierarchy-aware $\textbf{I}$nformation $\textbf{L}$ossless contrastive $\textbf{L}$earning (HILL), which consists of a text encoder representing the input document, and a structure encoder directly generating the positive sample. The structure encoder takes the document embedding as input, extracts the essential syntactic information inherent in the label hierarchy with the principle of structural entropy minimization, and injects the syntactic information into the text representation via hierarchical representation learning. Experiments on three common datasets are conducted to verify the superiority of HILL. | [
"Zhu, He",
"Wu, Junran",
"Liu, Ruomei",
"Hou, Yue",
"Yuan, Ze",
"Li, Shangzhe",
"Pan, Yicheng",
"Xu, Ke"
] | HILL: Hierarchy-aware Information Lossless Contrastive Learning for Hierarchical Text Classification | naacl-long.265 | Poster | 2403.17307 | [
"https://github.com/rooooyy/hill"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2024.naacl-long.266.bib | https://aclanthology.org/2024.naacl-long.266/ | @inproceedings{ma-etal-2024-investigating,
title = "Investigating the Emergent Audio Classification Ability of {ASR} Foundation Models",
author = "Ma, Rao and
Liusie, Adian and
Gales, Mark and
Knill, Kate",
editor = "Duh, Kevin and
Gomez, Helena and
Bethard, Steven",
booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.naacl-long.266",
doi = "10.18653/v1/2024.naacl-long.266",
pages = "4746--4760",
abstract = "Text and vision foundation models can perform many tasks in a zero-shot setting, a desirable property that enables these systems to be applied in general and low-resource settings. There has been far less work, however, on the zero-shot abilities of ASR foundation models, with these systems typically fine-tuned to specific tasks or constrained to applications that match their training criterion and data annotation. In this work we investigate the ability of Whisper and MMS, ASR foundation models trained primarily for speech recognition, to perform zero-shot audio classification. We use simple template-based text prompts at the decoder and use the resulting decoding probabilities to generate zero-shot predictions. Without training the model on extra data or adding any new parameters, we demonstrate that Whisper shows promising zero-shot classification performance on a range of 8 audio-classification datasets, outperforming the accuracy of existing state-of-the-art zero-shot baselines by an average of 9{\%}. One important step to unlock the emergent ability is debiasing, where a simple unsupervised reweighting method of the class probabilities yields consistent significant performance gains. We further show that performance increases with model size, implying that as ASR foundation models scale up, they may exhibit improved zero-shot performance.",
}
| Text and vision foundation models can perform many tasks in a zero-shot setting, a desirable property that enables these systems to be applied in general and low-resource settings. There has been far less work, however, on the zero-shot abilities of ASR foundation models, with these systems typically fine-tuned to specific tasks or constrained to applications that match their training criterion and data annotation. In this work we investigate the ability of Whisper and MMS, ASR foundation models trained primarily for speech recognition, to perform zero-shot audio classification. We use simple template-based text prompts at the decoder and use the resulting decoding probabilities to generate zero-shot predictions. Without training the model on extra data or adding any new parameters, we demonstrate that Whisper shows promising zero-shot classification performance on a range of 8 audio-classification datasets, outperforming the accuracy of existing state-of-the-art zero-shot baselines by an average of 9{\%}. One important step to unlock the emergent ability is debiasing, where a simple unsupervised reweighting method of the class probabilities yields consistent significant performance gains. We further show that performance increases with model size, implying that as ASR foundation models scale up, they may exhibit improved zero-shot performance. | [
"Ma, Rao",
"Liusie, Adian",
"Gales, Mark",
"Knill, Kate"
] | Investigating the Emergent Audio Classification Ability of ASR Foundation Models | naacl-long.266 | Poster | 2311.09363 | [
"https://github.com/julirao/whisper_audio_classification"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
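Editor's note: as a concrete illustration of the template-prompt scoring described in the abstract above, the following Python sketch scores candidate class prompts by the decoder negative log-likelihood Whisper assigns to them. The checkpoint name, prompt template, class labels, and silent stand-in audio are illustrative assumptions, not the authors' setup, and the unsupervised debiasing step is only indicated in a comment.

```python
# Hedged sketch: zero-shot audio classification with Whisper by scoring
# template prompts at the decoder (assumed setup, not the paper's code).
import numpy as np
import torch
from transformers import WhisperProcessor, WhisperForConditionalGeneration

processor = WhisperProcessor.from_pretrained("openai/whisper-base")
model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-base")
model.eval()

# Stand-in input: one second of silence at 16 kHz; a real clip would go here.
audio = np.zeros(16_000, dtype=np.float32)
feats = processor(audio, sampling_rate=16_000, return_tensors="pt").input_features

classes = ["dog barking", "rain falling", "a siren wailing"]  # hypothetical labels
nlls = []
with torch.no_grad():
    for c in classes:
        # Template-based text prompt whose likelihood the decoder scores.
        labels = processor.tokenizer(
            f" This is a sound of {c}.", return_tensors="pt"
        ).input_ids
        # Passing labels makes the model return the mean token NLL as .loss.
        nlls.append(model(input_features=feats, labels=labels).loss.item())

# Debiasing would reweight these scores by class priors estimated from
# unlabeled audio; here we simply predict the lowest-NLL class.
print(classes[int(np.argmin(nlls))])
```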
https://aclanthology.org/2024.naacl-long.267.bib | https://aclanthology.org/2024.naacl-long.267/ | @inproceedings{mueller-etal-2024-context,
title = "In-context Learning Generalizes, But Not Always Robustly: The Case of Syntax",
author = "Mueller, Aaron and
Webson, Albert and
Petty, Jackson and
Linzen, Tal",
editor = "Duh, Kevin and
Gomez, Helena and
Bethard, Steven",
booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.naacl-long.267",
doi = "10.18653/v1/2024.naacl-long.267",
pages = "4761--4779",
abstract = "In-context learning (ICL) is now a common method for teaching large language models (LLMs) new tasks: given labeled examples in the input context, the LLM learns to perform the task without weight updates. Do models guided via ICL infer the underlying structure of the task defined by the context, or do they rely on superficial heuristics that only generalize to identically distributed examples? We address this question using transformations tasks and an NLI task that assess sensitivity to syntax{---}a requirement for robust language understanding. We further investigate whether out-of-distribution generalization can be improved via chain-of-thought prompting, where the model is provided with a sequence of intermediate computation steps that illustrate how the task ought to be performed. In experiments with models from the GPT, PaLM, and Llama 2 families, we find large variance across LMs. The variance is explained more by the composition of the pre-training corpus and supervision methods than by model size; in particular, models pre-trained on code generalize better, and benefit more from chain-of-thought prompting.",
}
| In-context learning (ICL) is now a common method for teaching large language models (LLMs) new tasks: given labeled examples in the input context, the LLM learns to perform the task without weight updates. Do models guided via ICL infer the underlying structure of the task defined by the context, or do they rely on superficial heuristics that only generalize to identically distributed examples? We address this question using transformation tasks and an NLI task that assess sensitivity to syntax{---}a requirement for robust language understanding. We further investigate whether out-of-distribution generalization can be improved via chain-of-thought prompting, where the model is provided with a sequence of intermediate computation steps that illustrate how the task ought to be performed. In experiments with models from the GPT, PaLM, and Llama 2 families, we find large variance across LMs. The variance is explained more by the composition of the pre-training corpus and supervision methods than by model size; in particular, models pre-trained on code generalize better, and benefit more from chain-of-thought prompting. | [
"Mueller, Aaron",
"Webson, Albert",
"Petty, Jackson",
"Linzen, Tal"
] | In-context Learning Generalizes, But Not Always Robustly: The Case of Syntax | naacl-long.267 | Poster | 2311.07811 | [
"https://github.com/aaronmueller/syntax-icl"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2024.naacl-long.268.bib | https://aclanthology.org/2024.naacl-long.268/ | @inproceedings{wang-etal-2024-prompt,
title = "Prompt-Singer: Controllable Singing-Voice-Synthesis with Natural Language Prompt",
author = "Wang, Yongqi and
Hu, Ruofan and
Huang, Rongjie and
Hong, Zhiqing and
Li, Ruiqi and
Liu, Wenrui and
You, Fuming and
Jin, Tao and
Zhao, Zhou",
editor = "Duh, Kevin and
Gomez, Helena and
Bethard, Steven",
booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.naacl-long.268",
doi = "10.18653/v1/2024.naacl-long.268",
pages = "4780--4794",
abstract = "Recent singing-voice-synthesis (SVS) methods have achieved remarkable audio quality and naturalness, yet they lack the capability to control the style attributes of the synthesized singing explicitly. We propose Prompt-Singer, the first SVS method that enables attribute controlling on singer gender, vocal range and volume with natural language. We adopt a model architecture based on a decoder-only transformer with a multi-scale hierarchy, and design a range-melody decoupled pitch representation that enables text-conditioned vocal range control while keeping melodic accuracy. Furthermore, we explore various experiment settings, including different types of text representations, text encoder fine-tuning, and introducing speech data to alleviate data scarcity, aiming to facilitate further research. Experiments show that our model achieves favorable controlling ability and audio quality. Audio samples are available at http://prompt-singer.github.io .",
}
| Recent singing-voice-synthesis (SVS) methods have achieved remarkable audio quality and naturalness, yet they lack the capability to control the style attributes of the synthesized singing explicitly. We propose Prompt-Singer, the first SVS method that enables control over the attributes of singer gender, vocal range, and volume with natural language. We adopt a model architecture based on a decoder-only transformer with a multi-scale hierarchy, and design a range-melody decoupled pitch representation that enables text-conditioned vocal range control while keeping melodic accuracy. Furthermore, we explore various experiment settings, including different types of text representations, text encoder fine-tuning, and introducing speech data to alleviate data scarcity, aiming to facilitate further research. Experiments show that our model achieves favorable controllability and audio quality. Audio samples are available at http://prompt-singer.github.io. | [
"Wang, Yongqi",
"Hu, Ruofan",
"Huang, Rongjie",
"Hong, Zhiqing",
"Li, Ruiqi",
"Liu, Wenrui",
"You, Fuming",
"Jin, Tao",
"Zhao, Zhou"
] | Prompt-Singer: Controllable Singing-Voice-Synthesis with Natural Language Prompt | naacl-long.268 | Poster | 2403.11780 | [
"https://github.com/cyanbx/Prompt-Singer"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2024.naacl-long.269.bib | https://aclanthology.org/2024.naacl-long.269/ | @inproceedings{mujtaba-etal-2024-lost,
title = "Lost in Transcription: Identifying and Quantifying the Accuracy Biases of Automatic Speech Recognition Systems Against Disfluent Speech",
author = "Mujtaba, Dena and
Mahapatra, Nihar and
Arney, Megan and
Yaruss, J and
Gerlach-Houck, Hope and
Herring, Caryn and
Bin, Jia",
editor = "Duh, Kevin and
Gomez, Helena and
Bethard, Steven",
booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.naacl-long.269",
doi = "10.18653/v1/2024.naacl-long.269",
pages = "4795--4809",
abstract = "Automatic speech recognition (ASR) systems, increasingly prevalent in education, healthcare, employment, and mobile technology, face significant challenges in inclusivity, particularly for the 80 million-strong global community of people who stutter. These systems often fail to accurately interpret speech patterns deviating from typical fluency, leading to critical usability issues and misinterpretations. This study evaluates six leading ASRs, analyzing their performance on both a real-world dataset of speech samples from individuals who stutter and a synthetic dataset derived from the widely-used LibriSpeech benchmark. The synthetic dataset, uniquely designed to incorporate various stuttering events, enables an in-depth analysis of each ASR{'}s handling of disfluent speech. Our comprehensive assessment includes metrics such as word error rate (WER), character error rate (CER), and semantic accuracy of the transcripts. The results reveal a consistent and statistically significant accuracy bias across all ASRs against disfluent speech, manifesting in significant syntactical and semantic inaccuracies in transcriptions. These findings highlight a critical gap in current ASR technologies, underscoring the need for effective bias mitigation strategies. Addressing this bias is imperative not only to improve the technology{'}s usability for people who stutter but also to ensure their equitable and inclusive participation in the rapidly evolving digital landscape.",
}
| Automatic speech recognition (ASR) systems, increasingly prevalent in education, healthcare, employment, and mobile technology, face significant challenges in inclusivity, particularly for the 80 million-strong global community of people who stutter. These systems often fail to accurately interpret speech patterns deviating from typical fluency, leading to critical usability issues and misinterpretations. This study evaluates six leading ASRs, analyzing their performance on both a real-world dataset of speech samples from individuals who stutter and a synthetic dataset derived from the widely-used LibriSpeech benchmark. The synthetic dataset, uniquely designed to incorporate various stuttering events, enables an in-depth analysis of each ASR{'}s handling of disfluent speech. Our comprehensive assessment includes metrics such as word error rate (WER), character error rate (CER), and semantic accuracy of the transcripts. The results reveal a consistent and statistically significant accuracy bias across all ASRs against disfluent speech, manifesting in significant syntactical and semantic inaccuracies in transcriptions. These findings highlight a critical gap in current ASR technologies, underscoring the need for effective bias mitigation strategies. Addressing this bias is imperative not only to improve the technology{'}s usability for people who stutter but also to ensure their equitable and inclusive participation in the rapidly evolving digital landscape. | [
"Mujtaba, Dena",
"Mahapatra, Nihar",
"Arney, Megan",
"Yaruss, J",
"Gerlach-Houck, Hope",
"Herring, Caryn",
"Bin, Jia"
] | Lost in Transcription: Identifying and Quantifying the Accuracy Biases of Automatic Speech Recognition Systems Against Disfluent Speech | naacl-long.269 | Oral | 2405.06150 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2024.naacl-long.270.bib | https://aclanthology.org/2024.naacl-long.270/ | @inproceedings{helwe-etal-2024-mafalda,
title = "{MAFALDA}: A Benchmark and Comprehensive Study of Fallacy Detection and Classification",
author = "Helwe, Chadi and
Calamai, Tom and
Paris, Pierre-Henri and
Clavel, Chlo{\'e} and
Suchanek, Fabian",
editor = "Duh, Kevin and
Gomez, Helena and
Bethard, Steven",
booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.naacl-long.270",
doi = "10.18653/v1/2024.naacl-long.270",
pages = "4810--4845",
abstract = "We introduce MAFALDA, a benchmark for fallacy classification that merges and unites previous fallacy datasets. It comes with a taxonomy that aligns, refines, and unifies existing classifications of fallacies. We further provide a manual annotation of a part of the dataset together with manual explanations for each annotation. We propose a new annotation scheme tailored for subjective NLP tasks, and a new evaluation method designed to handle subjectivity. We then evaluate several language models under a zero-shot learning setting and human performances on MAFALDA to assess their capability to detect and classify fallacies.",
}
| We introduce MAFALDA, a benchmark for fallacy classification that merges and unites previous fallacy datasets. It comes with a taxonomy that aligns, refines, and unifies existing classifications of fallacies. We further provide a manual annotation of a part of the dataset together with manual explanations for each annotation. We propose a new annotation scheme tailored for subjective NLP tasks, and a new evaluation method designed to handle subjectivity. We then evaluate several language models under a zero-shot learning setting, as well as human performance, on MAFALDA to assess their capability to detect and classify fallacies. | [
"Helwe, Chadi",
"Calamai, Tom",
"Paris, Pierre-Henri",
"Clavel, Chlo{\\'e}",
"Suchanek, Fabian"
] | MAFALDA: A Benchmark and Comprehensive Study of Fallacy Detection and Classification | naacl-long.270 | Oral | 2311.09761 | [
"https://github.com/chadihelwe/mafalda"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2024.naacl-long.271.bib | https://aclanthology.org/2024.naacl-long.271/ | @inproceedings{qian-etal-2024-diffusion,
title = "Diffusion Glancing Transformer for Parallel Sequence-to-Sequence Learning",
author = "Qian, Lihua and
Wang, Mingxuan and
Liu, Yang and
Zhou, Hao",
editor = "Duh, Kevin and
Gomez, Helena and
Bethard, Steven",
booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.naacl-long.271",
doi = "10.18653/v1/2024.naacl-long.271",
pages = "4846--4862",
abstract = "Previously, non-autoregressive models were widely recognized as being superior in generation efficiency but inferior in generation quality due to the challenges of modeling multiple target modalities.To enhance the multi-modality modeling ability, we propose the diffusion glancing transformer, which employs a modality diffusion process and residual glancing sampling.The modality diffusion process is a discrete process that interpolates the multi-modal distribution along the decoding steps, and the residual glancing sampling approach guides the model to continuously learn the remaining modalities across the layers. Experimental results on various machine translation and text generation benchmarks demonstrate that DIFFGLAT achieves better generation accuracy while maintaining fast decoding speed compared with both autoregressive and non-autoregressive models.",
}
| Previously, non-autoregressive models were widely recognized as being superior in generation efficiency but inferior in generation quality due to the challenges of modeling multiple target modalities. To enhance the multi-modality modeling ability, we propose the diffusion glancing transformer, which employs a modality diffusion process and residual glancing sampling. The modality diffusion process is a discrete process that interpolates the multi-modal distribution along the decoding steps, and the residual glancing sampling approach guides the model to continuously learn the remaining modalities across the layers. Experimental results on various machine translation and text generation benchmarks demonstrate that DIFFGLAT achieves better generation accuracy while maintaining fast decoding speed compared with both autoregressive and non-autoregressive models. | [
"Qian, Lihua",
"Wang, Mingxuan",
"Liu, Yang",
"Zhou, Hao"
] | Diffusion Glancing Transformer for Parallel Sequence-to-Sequence Learning | naacl-long.271 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2024.naacl-long.272.bib | https://aclanthology.org/2024.naacl-long.272/ | @inproceedings{cheng-bhat-2024-context,
title = "No Context Needed: Contextual Quandary In Idiomatic Reasoning With Pre-Trained Language Models",
author = "Cheng, Kellen and
Bhat, Suma",
editor = "Duh, Kevin and
Gomez, Helena and
Bethard, Steven",
booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.naacl-long.272",
doi = "10.18653/v1/2024.naacl-long.272",
pages = "4863--4880",
abstract = "Reasoning in the presence of idiomatic expressions (IEs) remains a challenging frontier in natural language understanding (NLU). Unlike standard text, the non-compositional nature of an IE makes it difficult for model comprehension, as their figurative or non-literal mean- ing usually cannot be inferred from the constituent words alone. It stands to reason that in these challenging circumstances, pre-trained language models (PTLMs) should make use of the surrounding context to infer additional in- formation about the IE. In this paper, we investigate the utilization of said context for idiomatic reasoning tasks, which is under-explored relative to arithmetic or commonsense reason- ing (Liu et al., 2022; Yu et al., 2023). Preliminary findings point to a surprising observation: general purpose PTLMs are actually negatively affected by the context, as performance almost always increases with its removal. In these scenarios, models may see gains of up to 3.89{\%}. As a result, we argue that only IE-aware models remain suitable for idiomatic reasoning tasks, given the unexpected and unexplainable manner in which general purpose PTLMs reason over IEs. Additionally, we conduct studies to examine how models utilize the context in various situations, as well as an in-depth analysis on dataset formation and quality. Finally, we provide some explanations and insights into the reasoning process itself based on our results.",
}
| Reasoning in the presence of idiomatic expressions (IEs) remains a challenging frontier in natural language understanding (NLU). Unlike standard text, the non-compositional nature of an IE makes it difficult for model comprehension, as their figurative or non-literal meaning usually cannot be inferred from the constituent words alone. It stands to reason that in these challenging circumstances, pre-trained language models (PTLMs) should make use of the surrounding context to infer additional information about the IE. In this paper, we investigate the utilization of said context for idiomatic reasoning tasks, which is under-explored relative to arithmetic or commonsense reasoning (Liu et al., 2022; Yu et al., 2023). Preliminary findings point to a surprising observation: general purpose PTLMs are actually negatively affected by the context, as performance almost always increases with its removal. In these scenarios, models may see gains of up to 3.89{\%}. As a result, we argue that only IE-aware models remain suitable for idiomatic reasoning tasks, given the unexpected and unexplainable manner in which general purpose PTLMs reason over IEs. Additionally, we conduct studies to examine how models utilize the context in various situations, as well as an in-depth analysis on dataset formation and quality. Finally, we provide some explanations and insights into the reasoning process itself based on our results. | [
"Cheng, Kellen",
"Bhat, Suma"
] | No Context Needed: Contextual Quandary In Idiomatic Reasoning With Pre-Trained Language Models | naacl-long.272 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2024.naacl-long.273.bib | https://aclanthology.org/2024.naacl-long.273/ | @inproceedings{wang-etal-2024-multi,
title = "Multi-stage Retrieve and Re-rank Model for Automatic Medical Coding Recommendation",
author = "Wang, Xindi and
Mercer, Robert and
Rudzicz, Frank",
editor = "Duh, Kevin and
Gomez, Helena and
Bethard, Steven",
booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.naacl-long.273",
doi = "10.18653/v1/2024.naacl-long.273",
pages = "4881--4891",
abstract = "The International Classification of Diseases (ICD) serves as a definitive medical classification system encompassing a wide range of diseases and conditions. The primary objective of ICD indexing is to allocate a subset of ICD codes to a medical record, which facilitates standardized documentation and management of various health conditions. Most existing approaches have suffered from selecting the proper label subsets from an extremely large ICD collection with a heavy long-tailed label distribution. In this paper, we leverage a multi-stage {``}retrieve and re-rank{''} framework as a novel solution to ICD indexing, via a hybrid discrete retrieval method, and re-rank retrieved candidates with contrastive learning that allows the model to make more accurate predictions from a simplified label space. The retrieval model is a hybrid of auxiliary knowledge of the electronic health records (EHR) and a discrete retrieval method (BM25), which efficiently collects high-quality candidates. In the last stage, we propose a label co-occurrence guided contrastive re-ranking model, which re-ranks the candidate labels by pulling together the clinical notes with positive ICD codes. Experimental results show the proposed method achieves state-of-the-art performance on a number of measures on the MIMIC-III benchmark.",
}
| The International Classification of Diseases (ICD) serves as a definitive medical classification system encompassing a wide range of diseases and conditions. The primary objective of ICD indexing is to allocate a subset of ICD codes to a medical record, which facilitates standardized documentation and management of various health conditions. Most existing approaches struggle to select the proper label subsets from an extremely large ICD collection with a heavily long-tailed label distribution. In this paper, we leverage a multi-stage {``}retrieve and re-rank{''} framework as a novel solution to ICD indexing, via a hybrid discrete retrieval method, and re-rank retrieved candidates with contrastive learning that allows the model to make more accurate predictions from a simplified label space. The retrieval model is a hybrid of auxiliary knowledge of the electronic health records (EHR) and a discrete retrieval method (BM25), which efficiently collects high-quality candidates. In the last stage, we propose a label co-occurrence guided contrastive re-ranking model, which re-ranks the candidate labels by pulling together the clinical notes with positive ICD codes. Experimental results show the proposed method achieves state-of-the-art performance on a number of measures on the MIMIC-III benchmark. | [
"Wang, Xindi",
"Mercer, Robert",
"Rudzicz, Frank"
] | Multi-stage Retrieve and Re-rank Model for Automatic Medical Coding Recommendation | naacl-long.273 | Oral | 2405.19093 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
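Editor's note: the two-stage pipeline in the abstract above (BM25 retrieval followed by re-ranking) can be sketched in a few lines. The code below is an illustrative toy, not the authors' implementation: the ICD code descriptions are made up, and a simple token-overlap scorer stands in for the label co-occurrence guided contrastive re-ranker.

```python
# Hedged sketch of a retrieve-then-re-rank pipeline for ICD coding.
# Requires the rank_bm25 package (pip install rank-bm25).
from rank_bm25 import BM25Okapi

# Hypothetical miniature ICD code list; the real collection has thousands.
code_descriptions = {
    "I10": "essential primary hypertension",
    "E11.9": "type 2 diabetes mellitus without complications",
    "J45.909": "unspecified asthma uncomplicated",
}
codes = list(code_descriptions)
corpus = [desc.split() for desc in code_descriptions.values()]
bm25 = BM25Okapi(corpus)

note = "patient has a history of type 2 diabetes and elevated blood pressure"
query = note.split()

# Stage 1: BM25 narrows the large label space to top-k candidates.
scores = bm25.get_scores(query)
top_k = sorted(range(len(codes)), key=lambda i: -scores[i])[:2]

# Stage 2: re-rank the candidates; a contrastive encoder would score
# note/description pairs here, token overlap is only a stand-in.
def rerank_score(query_tokens, doc_tokens):
    return len(set(query_tokens) & set(doc_tokens))

ranked = sorted(top_k, key=lambda i: -rerank_score(query, corpus[i]))
print([codes[i] for i in ranked])
```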
https://aclanthology.org/2024.naacl-long.274.bib | https://aclanthology.org/2024.naacl-long.274/ | @inproceedings{machina-mercer-2024-anisotropy,
title = "Anisotropy is Not Inherent to Transformers",
author = "Machina, Anemily and
Mercer, Robert",
editor = "Duh, Kevin and
Gomez, Helena and
Bethard, Steven",
booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.naacl-long.274",
doi = "10.18653/v1/2024.naacl-long.274",
pages = "4892--4907",
abstract = "Isotropy is the property that embeddings are uniformly distributed around the origin. Previous work has shown that Transformer embedding spaces are anisotropic, which is called the representation degradation problem. This degradation has been assumed to be inherent to the standard language modeling tasks and to apply to all Transformer models regardless of their architecture. In this work we identify a set of Transformer models with isotropic embedding spaces, the large Pythia models. We examine the isotropy of Pythia models and explore how isotropy and anisotropy develop as a model is trained. We find that anisotropic models do not develop as previously theorized, using our own analysis to show that the large Pythia models optimize their final Layer Norm for isotropy, and provide reasoning why previous theoretical justifications for anisotropy were insufficient. The identification of a set of isotropic Transformer models calls previous assumptions into question, provides a set of models to contrast existing analysis, and should lead to deeper insight into isotropy.",
}
| Isotropy is the property that embeddings are uniformly distributed around the origin. Previous work has shown that Transformer embedding spaces are anisotropic, which is called the representation degradation problem. This degradation has been assumed to be inherent to the standard language modeling tasks and to apply to all Transformer models regardless of their architecture. In this work we identify a set of Transformer models with isotropic embedding spaces, the large Pythia models. We examine the isotropy of Pythia models and explore how isotropy and anisotropy develop as a model is trained. We find that anisotropic models do not develop as previously theorized, using our own analysis to show that the large Pythia models optimize their final Layer Norm for isotropy, and provide reasoning as to why previous theoretical justifications for anisotropy were insufficient. The identification of a set of isotropic Transformer models calls previous assumptions into question, provides a set of models against which to contrast existing analyses, and should lead to deeper insight into isotropy. | [
"Machina, Anemily",
"Mercer, Robert"
] | Anisotropy is Not Inherent to Transformers | naacl-long.274 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2024.naacl-long.275.bib | https://aclanthology.org/2024.naacl-long.275/ | @inproceedings{riley-etal-2024-finding,
title = "Finding Replicable Human Evaluations via Stable Ranking Probability",
author = "Riley, Parker and
Deutsch, Daniel and
Foster, George and
Ratnakar, Viresh and
Dabirmoghaddam, Ali and
Freitag, Markus",
editor = "Duh, Kevin and
Gomez, Helena and
Bethard, Steven",
booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.naacl-long.275",
doi = "10.18653/v1/2024.naacl-long.275",
pages = "4908--4919",
abstract = "Reliable human evaluation is critical to the development of successful natural language generation models, but achieving it is notoriously difficult. Stability is a crucial requirement when ranking systems by quality: consistent ranking of systems across repeated evaluations is not just desirable, but essential. Without it, there is no reliable foundation for hill-climbing or product launch decisions. In this paper, we use machine translation and its state-of-the-art human evaluation framework, MQM, as a case study to understand how to set up reliable human evaluations that yield stable conclusions. We investigate the optimal configurations for item allocation to raters, number of ratings per item, and score normalization. Our study on two language pairs provides concrete recommendations for designing replicable human evaluation studies. We also collect and release the largest publicly available dataset of multi-segment translations rated by multiple professional translators, consisting of nearly 140,000 segment annotations across two language pairs.",
}
| Reliable human evaluation is critical to the development of successful natural language generation models, but achieving it is notoriously difficult. Stability is a crucial requirement when ranking systems by quality: consistent ranking of systems across repeated evaluations is not just desirable, but essential. Without it, there is no reliable foundation for hill-climbing or product launch decisions. In this paper, we use machine translation and its state-of-the-art human evaluation framework, MQM, as a case study to understand how to set up reliable human evaluations that yield stable conclusions. We investigate the optimal configurations for item allocation to raters, number of ratings per item, and score normalization. Our study on two language pairs provides concrete recommendations for designing replicable human evaluation studies. We also collect and release the largest publicly available dataset of multi-segment translations rated by multiple professional translators, consisting of nearly 140,000 segment annotations across two language pairs. | [
"Riley, Parker",
"Deutsch, Daniel",
"Foster, George",
"Ratnakar, Viresh",
"Dabirmoghaddam, Ali",
"Freitag, Markus"
] | Finding Replicable Human Evaluations via Stable Ranking Probability | naacl-long.275 | Poster | 2404.01474 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2024.naacl-long.276.bib | https://aclanthology.org/2024.naacl-long.276/ | @inproceedings{cao-etal-2024-stealthy,
title = "Stealthy and Persistent Unalignment on Large Language Models via Backdoor Injections",
author = "Cao, Yuanpu and
Cao, Bochuan and
Chen, Jinghui",
editor = "Duh, Kevin and
Gomez, Helena and
Bethard, Steven",
booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.naacl-long.276",
doi = "10.18653/v1/2024.naacl-long.276",
pages = "4920--4935",
abstract = "Recent developments in Large Language Models (LLMs) have manifested significant advancements. To facilitate safeguards against malicious exploitation, a body of research has concentrated on aligning LLMs with human preferences and inhibiting their generation of inappropriate content. Unfortunately, such alignments are often vulnerable: fine-tuning with a minimal amount of harmful data can easily unalign the target LLM. While being effective, such fine-tuning-based unalignment approaches also have their own limitations: (1) non-stealthiness, after fine-tuning, safety audits or red-teaming can easily expose the potential weaknesses of the unaligned models, thereby precluding their release/use. (2) non-persistence, the unaligned LLMs can be easily repaired through re-alignment, i.e., fine-tuning again with aligned data points. In this work, we show that it is possible to conduct stealthy and persistent unalignment on large language models via backdoor injections. We also provide a novel understanding of the relationship between the backdoor persistence and the activation pattern and further provide guidelines for potential trigger design. Through extensive experiments, we demonstrate that our proposed stealthy and persistent unalignment can successfully pass the safety evaluation while maintaining strong persistence against re-alignment defense.",
}
| Recent developments in Large Language Models (LLMs) have brought significant advancements. To facilitate safeguards against malicious exploitation, a body of research has concentrated on aligning LLMs with human preferences and inhibiting their generation of inappropriate content. Unfortunately, such alignments are often vulnerable: fine-tuning with a minimal amount of harmful data can easily unalign the target LLM. While effective, such fine-tuning-based unalignment approaches also have their own limitations: (1) non-stealthiness: after fine-tuning, safety audits or red-teaming can easily expose the potential weaknesses of the unaligned models, thereby precluding their release/use; (2) non-persistence: the unaligned LLMs can be easily repaired through re-alignment, i.e., fine-tuning again with aligned data points. In this work, we show that it is possible to conduct stealthy and persistent unalignment on large language models via backdoor injections. We also provide a novel understanding of the relationship between the backdoor persistence and the activation pattern and further provide guidelines for potential trigger design. Through extensive experiments, we demonstrate that our proposed stealthy and persistent unalignment can successfully pass the safety evaluation while maintaining strong persistence against re-alignment defense. | [
"Cao, Yuanpu",
"Cao, Bochuan",
"Chen, Jinghui"
] | Stealthy and Persistent Unalignment on Large Language Models via Backdoor Injections | naacl-long.276 | Poster | 2312.00027 | [
"https://github.com/caoyuanpu/backdoorunalign"
] | https://huggingface.co/papers/2312.00027 | 0 | 0 | 0 | 3 | 1 | [
"redslabvt/BEEAR-backdoored-Model-5"
] | [] | [] |
https://aclanthology.org/2024.naacl-long.277.bib | https://aclanthology.org/2024.naacl-long.277/ | @inproceedings{somayajula-etal-2024-generalizable,
title = "Generalizable and Stable Finetuning of Pretrained Language Models on Low-Resource Texts",
author = "Somayajula, Sai Ashish and
Liang, Youwei and
Zhang, Li and
Singh, Abhishek and
Xie, Pengtao",
editor = "Duh, Kevin and
Gomez, Helena and
Bethard, Steven",
booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.naacl-long.277",
doi = "10.18653/v1/2024.naacl-long.277",
pages = "4936--4953",
abstract = "Pretrained Language Models (PLMs) have advanced Natural Language Processing (NLP) tasks significantly, but finetuning PLMs on low-resource datasets poses significant challenges such as instability and overfitting. Previous methods tackle these issues by finetuning a strategically chosen subnetwork on a downstream task, while keeping the remaining weights fixed to the pretrained weights. However, they rely on a suboptimal criteria for sub-network selection, leading to suboptimal solutions. To address these limitations, we propose a regularization method based on attention-guided weight mixup for finetuning PLMs. Our approach represents each network weight as a mixup of task-specific weight and pretrained weight, controlled by a learnable attention parameter, providing finer control over sub-network selection. Furthermore, we employ a bi-level optimization (BLO) based framework on two separate splits of the training dataset, improving generalization and combating overfitting. We validate the efficacy of our proposed method through extensive experiments, demonstrating its superiority over previous methods, particularly in the context of finetuning PLMs on low-resource datasets. Our code is available at https://github.com/Sai-Ashish/Attention{\_}guided{\_}weight{\_}mixup{\_}BLO.",
}
| Pretrained Language Models (PLMs) have advanced Natural Language Processing (NLP) tasks significantly, but finetuning PLMs on low-resource datasets poses significant challenges such as instability and overfitting. Previous methods tackle these issues by finetuning a strategically chosen subnetwork on a downstream task, while keeping the remaining weights fixed to the pretrained weights. However, they rely on a suboptimal criterion for sub-network selection, leading to suboptimal solutions. To address these limitations, we propose a regularization method based on attention-guided weight mixup for finetuning PLMs. Our approach represents each network weight as a mixup of task-specific weight and pretrained weight, controlled by a learnable attention parameter, providing finer control over sub-network selection. Furthermore, we employ a bi-level optimization (BLO) based framework on two separate splits of the training dataset, improving generalization and combating overfitting. We validate the efficacy of our proposed method through extensive experiments, demonstrating its superiority over previous methods, particularly in the context of finetuning PLMs on low-resource datasets. Our code is available at https://github.com/Sai-Ashish/Attention{\_}guided{\_}weight{\_}mixup{\_}BLO. | [
"Somayajula, Sai Ashish",
"Liang, Youwei",
"Zhang, Li",
"Singh, Abhishek",
"Xie, Pengtao"
] | Generalizable and Stable Finetuning of Pretrained Language Models on Low-Resource Texts | naacl-long.277 | Poster | 2403.12918 | [
"https://github.com/sai-ashish/attention_guided_weight_mixup_blo"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
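Editor's note: the abstract above describes representing each weight as a learnable mixup of its pretrained and task-specific values. A minimal PyTorch sketch of that parameterization follows; the sigmoid gating and the single toy layer are assumptions for illustration, and the bi-level optimization over two data splits is omitted.

```python
# Hedged sketch: each effective weight is a convex combination of a frozen
# pretrained weight and a task-specific copy, gated by a learnable parameter.
import torch

torch.manual_seed(0)
w_pre = torch.randn(4, 4)                       # frozen pretrained weight
w_task = torch.nn.Parameter(w_pre.clone())      # trainable task-specific copy
alpha = torch.nn.Parameter(torch.zeros(4, 4))   # learnable mixup logits

def effective_weight():
    gate = torch.sigmoid(alpha)                 # per-weight mixing coefficient
    return gate * w_task + (1.0 - gate) * w_pre

x = torch.randn(2, 4)
y = x @ effective_weight().T                    # use like an ordinary layer
# In the full method, alpha would be learned on a held-out split via
# bi-level optimization while w_task is learned on the training split.
print(y.shape)
```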
https://aclanthology.org/2024.naacl-long.278.bib | https://aclanthology.org/2024.naacl-long.278/ | @inproceedings{lee-etal-2024-detecting-bipolar,
title = "Detecting Bipolar Disorder from Misdiagnosed Major Depressive Disorder with Mood-Aware Multi-Task Learning",
author = "Lee, Daeun and
Jeon, Hyolim and
Son, Sejung and
Park, Chaewon and
An, Ji hyun and
Kim, Seungbae and
Han, Jinyoung",
editor = "Duh, Kevin and
Gomez, Helena and
Bethard, Steven",
booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.naacl-long.278",
doi = "10.18653/v1/2024.naacl-long.278",
pages = "4954--4970",
abstract = "Bipolar Disorder (BD) is a mental disorder characterized by intense mood swings, from depression to manic states. Individuals with BD are at a higher risk of suicide, but BD is often misdiagnosed as Major Depressive Disorder (MDD) due to shared symptoms, resulting in delays in appropriate treatment and increased suicide risk. While early intervention based on social media data has been explored to uncover latent BD risk, little attention has been paid to detecting BD from those misdiagnosed as MDD. Therefore, this study presents a novel approach for identifying BD risk in individuals initially misdiagnosed with MDD. A unique dataset, BD-Risk, is introduced, incorporating mental disorder types and BD mood levels verified by two clinical experts. The proposed multi-task learning for predicting BD risk and BD mood level outperforms the state-of-the-art baselines. Also, the proposed dynamic mood-aware attention can provide insights into the impact of BD mood on future risk, potentially aiding interventions for at-risk individuals.",
}
| Bipolar Disorder (BD) is a mental disorder characterized by intense mood swings, from depression to manic states. Individuals with BD are at a higher risk of suicide, but BD is often misdiagnosed as Major Depressive Disorder (MDD) due to shared symptoms, resulting in delays in appropriate treatment and increased suicide risk. While early intervention based on social media data has been explored to uncover latent BD risk, little attention has been paid to detecting BD from those misdiagnosed as MDD. Therefore, this study presents a novel approach for identifying BD risk in individuals initially misdiagnosed with MDD. A unique dataset, BD-Risk, is introduced, incorporating mental disorder types and BD mood levels verified by two clinical experts. The proposed multi-task learning for predicting BD risk and BD mood level outperforms the state-of-the-art baselines. Also, the proposed dynamic mood-aware attention can provide insights into the impact of BD mood on future risk, potentially aiding interventions for at-risk individuals. | [
"Lee, Daeun",
"Jeon, Hyolim",
"Son, Sejung",
"Park, Chaewon",
"An, Ji hyun",
"Kim, Seungbae",
"Han, Jinyoung"
] | Detecting Bipolar Disorder from Misdiagnosed Major Depressive Disorder with Mood-Aware Multi-Task Learning | naacl-long.278 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2024.naacl-long.279.bib | https://aclanthology.org/2024.naacl-long.279/ | @inproceedings{bogin-etal-2024-leveraging,
title = "Leveraging Code to Improve In-Context Learning for Semantic Parsing",
author = "Bogin, Ben and
Gupta, Shivanshu and
Clark, Peter and
Sabharwal, Ashish",
editor = "Duh, Kevin and
Gomez, Helena and
Bethard, Steven",
booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.naacl-long.279",
doi = "10.18653/v1/2024.naacl-long.279",
pages = "4971--5012",
abstract = "In-context learning (ICL) is an appealing approach for semantic parsing due to its few-shot nature and improved generalization. However, learning to parse to rare domain-specific languages (DSLs) from just a few demonstrations is challenging, limiting the performance of even the most capable LLMs.In this work, we show how pre-existing coding abilities of LLMs can be leveraged for semantic parsing by (1) using general-purpose programming languages such as Python instead of DSLs and (2) augmenting prompts with a structured domain description that includes, e.g., the available classes and functions. We show that both these changes significantly improve accuracy across three popular datasets; combined, they lead to dramatic improvements (e.g., 7.9{\%} to 66.5{\%} on SMCalFlow compositional split) and can substantially improve compositional generalization, nearly closing the performance gap between easier i.i.d. and harder compositional splits. Finally, comparisons across multiple PLs and DSL variations suggest that the similarity of a target language to general-purpose code is more important than prevalence in pretraining corpora. Our findings provide an improved methodology for building semantic parsers in the modern context of ICL with LLMs.",
}
| In-context learning (ICL) is an appealing approach for semantic parsing due to its few-shot nature and improved generalization. However, learning to parse to rare domain-specific languages (DSLs) from just a few demonstrations is challenging, limiting the performance of even the most capable LLMs.In this work, we show how pre-existing coding abilities of LLMs can be leveraged for semantic parsing by (1) using general-purpose programming languages such as Python instead of DSLs and (2) augmenting prompts with a structured domain description that includes, e.g., the available classes and functions. We show that both these changes significantly improve accuracy across three popular datasets; combined, they lead to dramatic improvements (e.g., 7.9{\%} to 66.5{\%} on SMCalFlow compositional split) and can substantially improve compositional generalization, nearly closing the performance gap between easier i.i.d. and harder compositional splits. Finally, comparisons across multiple PLs and DSL variations suggest that the similarity of a target language to general-purpose code is more important than prevalence in pretraining corpora. Our findings provide an improved methodology for building semantic parsers in the modern context of ICL with LLMs. | [
"Bogin, Ben",
"Gupta, Shivanshu",
"Clark, Peter",
"Sabharwal, Ashish"
] | Leveraging Code to Improve In-Context Learning for Semantic Parsing | naacl-long.279 | Oral | 2311.09519 | [
"https://github.com/allenai/code-semparse"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2024.naacl-long.280.bib | https://aclanthology.org/2024.naacl-long.280/ | @inproceedings{abaho-etal-2024-improving,
title = "Improving Pre-trained Language Model Sensitivity via Mask Specific losses: A case study on Biomedical {NER}",
author = "Abaho, Micheal and
Bollegala, Danushka and
Leeming, Gary and
Joyce, Dan and
Buchan, Iain",
editor = "Duh, Kevin and
Gomez, Helena and
Bethard, Steven",
booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.naacl-long.280",
doi = "10.18653/v1/2024.naacl-long.280",
pages = "5013--5029",
abstract = "Adapting language models (LMs) to novel domains is often achieved through fine-tuning a pre-trained LM (PLM) on domain-specific data. Fine-tuning introduces new knowledge into an LM, enabling it to comprehend and efficiently perform a target domain task. Fine-tuning can however be inadvertently insensitive if it ignores the wide array of disparities (e.g in word meaning) between source and target domains. For instance, words such as chronic and pressure may be treated lightly in social conversations, however, clinically, these words are usually an expression of concern. To address insensitive fine-tuning, we propose Mask Specific Language Modeling (MSLM), an approach that efficiently acquires target domain knowledge by appropriately weighting the importance of domain-specific terms (DS-terms) during fine-tuning. MSLM jointly masks DS-terms and generic words, then learns mask-specific losses by ensuring LMs incur larger penalties for inaccurately predicting DS-terms compared to generic words. Results of our analysis show that MSLM improves LMs sensitivity and detection of DS-terms. We empirically show that an optimal masking rate not only depends on the LM, but also on the dataset and the length of sequences. Our proposed masking strategy outperforms advanced masking strategies such as span- and PMI-based masking.",
}
| Adapting language models (LMs) to novel domains is often achieved through fine-tuning a pre-trained LM (PLM) on domain-specific data. Fine-tuning introduces new knowledge into an LM, enabling it to comprehend and efficiently perform a target domain task. Fine-tuning can however be inadvertently insensitive if it ignores the wide array of disparities (e.g., in word meaning) between source and target domains. For instance, words such as chronic and pressure may be treated lightly in social conversations; clinically, however, these words are usually an expression of concern. To address insensitive fine-tuning, we propose Mask Specific Language Modeling (MSLM), an approach that efficiently acquires target domain knowledge by appropriately weighting the importance of domain-specific terms (DS-terms) during fine-tuning. MSLM jointly masks DS-terms and generic words, then learns mask-specific losses by ensuring LMs incur larger penalties for inaccurately predicting DS-terms compared to generic words. Results of our analysis show that MSLM improves LMs' sensitivity to and detection of DS-terms. We empirically show that an optimal masking rate not only depends on the LM, but also on the dataset and the length of sequences. Our proposed masking strategy outperforms advanced masking strategies such as span- and PMI-based masking. | [
"Abaho, Micheal",
"Bollegala, Danushka",
"Leeming, Gary",
"Joyce, Dan",
"Buchan, Iain"
] | Improving Pre-trained Language Model Sensitivity via Mask Specific losses: A case study on Biomedical NER | naacl-long.280 | Poster | 2403.18025 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
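Editor's note: the mask-specific loss in the abstract above amounts to a weighted masked-token cross-entropy in which domain-specific positions incur a larger penalty. The PyTorch sketch below illustrates this with random logits; the 2x weight and the DS-term mask are illustrative assumptions, not the paper's values.

```python
# Hedged sketch: weighted masked-LM loss with heavier penalties on
# domain-specific (DS) term positions, per the MSLM idea described above.
import torch
import torch.nn.functional as F

torch.manual_seed(0)
vocab, seq_len = 100, 6
logits = torch.randn(1, seq_len, vocab)                  # LM predictions
targets = torch.randint(0, vocab, (1, seq_len))          # gold token ids
masked = torch.tensor([[1, 0, 1, 0, 1, 0]]).bool()       # masked positions
is_ds = torch.tensor([[1, 0, 0, 0, 1, 0]]).bool()        # DS-term positions

per_token = F.cross_entropy(
    logits.view(-1, vocab), targets.view(-1), reduction="none"
).view(1, seq_len)
weights = torch.where(is_ds, torch.tensor(2.0), torch.tensor(1.0))
loss = (per_token * weights * masked).sum() / masked.sum()
print(loss.item())
```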
https://aclanthology.org/2024.naacl-long.281.bib | https://aclanthology.org/2024.naacl-long.281/ | @inproceedings{merullo-etal-2024-language,
title = "Language Models Implement Simple {W}ord2{V}ec-style Vector Arithmetic",
author = "Merullo, Jack and
Eickhoff, Carsten and
Pavlick, Ellie",
editor = "Duh, Kevin and
Gomez, Helena and
Bethard, Steven",
booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.naacl-long.281",
doi = "10.18653/v1/2024.naacl-long.281",
pages = "5030--5047",
abstract = "A primary criticism towards language models (LMs) is their inscrutability. This paper presents evidence that, despite their size and complexity, LMs sometimes exploit a simple vector arithmetic style mechanism to solve some relational tasks using regularities encoded in the hidden space of the model (e.g., Poland:Warsaw::China:Beijing). We investigate a range of language model sizes (from 124M parameters to 176B parameters) in an in-context learning setting, and find that for a variety of tasks (involving capital cities, uppercasing, and past-tensing) a key part of the mechanism reduces to a simple additive update typically applied by the feedforward (FFN) networks. We further show that this mechanism is specific to tasks that require retrieval from pretraining memory, rather than retrieval from local context. Our results contribute to a growing body of work on the interpretability of LMs, and offer reason to be optimistic that, despite the massive and non-linear nature of the models, the strategies they ultimately use to solve tasks can sometimes reduce to familiar and even intuitive algorithms.",
}
| A primary criticism towards language models (LMs) is their inscrutability. This paper presents evidence that, despite their size and complexity, LMs sometimes exploit a simple vector arithmetic style mechanism to solve some relational tasks using regularities encoded in the hidden space of the model (e.g., Poland:Warsaw::China:Beijing). We investigate a range of language model sizes (from 124M parameters to 176B parameters) in an in-context learning setting, and find that for a variety of tasks (involving capital cities, uppercasing, and past-tensing) a key part of the mechanism reduces to a simple additive update typically applied by the feedforward (FFN) networks. We further show that this mechanism is specific to tasks that require retrieval from pretraining memory, rather than retrieval from local context. Our results contribute to a growing body of work on the interpretability of LMs, and offer reason to be optimistic that, despite the massive and non-linear nature of the models, the strategies they ultimately use to solve tasks can sometimes reduce to familiar and even intuitive algorithms. | [
"Merullo, Jack",
"Eickhoff, Carsten",
"Pavlick, Ellie"
] | Language Models Implement Simple Word2Vec-style Vector Arithmetic | naacl-long.281 | Poster | 2305.16130 | [
"https://github.com/jmerullo/lm_vector_arithmetic"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
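The word2vec-style mechanism described in the abstract above is plain vector arithmetic: add one relation vector, then decode by nearest neighbour. The toy sketch below uses random stand-in embeddings, so the printed answer is arbitrary; with trained embeddings the nearest neighbour recovers the analogy, and in the paper the additive update is produced by the model's FFN layers over real hidden states.

```python
import torch
import torch.nn.functional as F

# Toy illustration of "Poland : Warsaw :: China : ?" as additive arithmetic.
torch.manual_seed(0)
vocab = ["Poland", "Warsaw", "China", "Beijing", "France", "Paris"]
E = torch.randn(len(vocab), 64)  # random stand-in embedding table
idx = {w: i for i, w in enumerate(vocab)}

relation = E[idx["Warsaw"]] - E[idx["Poland"]]  # estimate the "capital-of" vector
query = E[idx["China"]] + relation              # apply the additive update

sims = F.cosine_similarity(query.unsqueeze(0), E)  # decode by nearest neighbour
sims[idx["China"]] = -1.0                          # exclude the query word itself
print(vocab[int(sims.argmax())])
```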
https://aclanthology.org/2024.naacl-long.282.bib | https://aclanthology.org/2024.naacl-long.282/ | @inproceedings{zhang-etal-2024-autolora,
title = "{A}uto{L}o{RA}: Automatically Tuning Matrix Ranks in Low-Rank Adaptation Based on Meta Learning",
author = "Zhang, Ruiyi and
Qiang, Rushi and
Somayajula, Sai Ashish and
Xie, Pengtao",
editor = "Duh, Kevin and
Gomez, Helena and
Bethard, Steven",
booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.naacl-long.282",
doi = "10.18653/v1/2024.naacl-long.282",
pages = "5048--5060",
abstract = "Large-scale pretraining followed by task-specific finetuning has achieved great success in various NLP tasks. Since finetuning all parameters of large pretrained models poses substantial computational and memory challenges, several efficient finetuning methods have been developed. Among them, low-rank adaptation (LoRA), which finetunes low-rank incremental update matrices on top of frozen pretrained weights, has proven particularly effective. Nonetheless, LoRA{'}s uniform rank assignment across all layers, along with its reliance on an exhaustive search to find the best rank, leads to high computation costs and suboptimal finetuning performance. To address these limitations, we introduce AutoLoRA, a meta learning based framework for automatically identifying the optimal rank of each LoRA layer. AutoLoRA associates each rank-1 matrix in a low-rank update matrix with a selection variable, which determines whether the rank-1 matrix should be discarded. A meta learning based method is developed to learn these selection variables. The optimal rank is determined by thresholding the values of these variables. Our comprehensive experiments on natural language understanding, generation, and sequence labeling demonstrate the effectiveness of AutoLoRA. The code is publicly available at https://github.com/ruz048/AutoLoRA",
}
| Large-scale pretraining followed by task-specific finetuning has achieved great success in various NLP tasks. Since finetuning all parameters of large pretrained models poses substantial computational and memory challenges, several efficient finetuning methods have been developed. Among them, low-rank adaptation (LoRA), which finetunes low-rank incremental update matrices on top of frozen pretrained weights, has proven particularly effective. Nonetheless, LoRA{'}s uniform rank assignment across all layers, along with its reliance on an exhaustive search to find the best rank, leads to high computation costs and suboptimal finetuning performance. To address these limitations, we introduce AutoLoRA, a meta learning based framework for automatically identifying the optimal rank of each LoRA layer. AutoLoRA associates each rank-1 matrix in a low-rank update matrix with a selection variable, which determines whether the rank-1 matrix should be discarded. A meta learning based method is developed to learn these selection variables. The optimal rank is determined by thresholding the values of these variables. Our comprehensive experiments on natural language understanding, generation, and sequence labeling demonstrate the effectiveness of AutoLoRA. The code is publicly available at https://github.com/ruz048/AutoLoRA | [
"Zhang, Ruiyi",
"Qiang, Rushi",
"Somayajula, Sai Ashish",
"Xie, Pengtao"
] | AutoLoRA: Automatically Tuning Matrix Ranks in Low-Rank Adaptation Based on Meta Learning | naacl-long.282 | Poster | 2403.09113 | [
""
] | https://huggingface.co/papers/2403.09113 | 0 | 0 | 0 | 4 | 1 | [] | [] | [] |
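To make AutoLoRA's core idea concrete, here is a minimal sketch of a linear layer whose low-rank update is a sum of rank-1 matrices, each gated by a learnable selection variable; the meta-learning loop that fits the selection variables, and all hyperparameter values, are omitted assumptions.

```python
import torch
import torch.nn as nn

class SelectiveLoRALinear(nn.Module):
    """Sketch in the spirit of AutoLoRA: the LoRA update is a sum of rank-1
    matrices, each scaled by a learnable selection variable alpha.
    Thresholding alpha after training yields the per-layer rank."""

    def __init__(self, in_dim, out_dim, max_rank=8):
        super().__init__()
        self.base = nn.Linear(in_dim, out_dim)  # stands in for the frozen pretrained weight
        for p in self.base.parameters():
            p.requires_grad_(False)
        self.A = nn.Parameter(torch.randn(max_rank, in_dim) * 0.01)
        self.B = nn.Parameter(torch.zeros(out_dim, max_rank))
        self.alpha = nn.Parameter(torch.full((max_rank,), 1.0 / max_rank))

    def forward(self, x):
        # B[:, k] (outer product) A[k, :] is one rank-1 update, scaled by alpha[k].
        delta = self.B @ torch.diag(self.alpha) @ self.A  # (out_dim, in_dim)
        return self.base(x) + x @ delta.T

    def effective_rank(self, threshold=0.1):
        # Rank-1 components whose selection variable survives thresholding.
        return int((self.alpha.abs() > threshold).sum())
```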
https://aclanthology.org/2024.naacl-long.283.bib | https://aclanthology.org/2024.naacl-long.283/ | @inproceedings{xia-etal-2024-sportqa,
title = "{S}port{QA}: A Benchmark for Sports Understanding in Large Language Models",
author = "Xia, Haotian and
Yang, Zhengbang and
Wang, Yuqing and
Tracy, Rhys and
Zhao, Yun and
Huang, Dongdong and
Chen, Zezhi and
Zhu, Yan and
Wang, Yuan-fang and
Shen, Weining",
editor = "Duh, Kevin and
Gomez, Helena and
Bethard, Steven",
booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.naacl-long.283",
doi = "10.18653/v1/2024.naacl-long.283",
pages = "5061--5081",
abstract = "A deep understanding of sports, a field rich in strategic and dynamic content, is crucial for advancing Natural Language Processing (NLP). This holds particular significance in the context of evaluating and advancing Large Language Models (LLMs), given the existing gap in specialized benchmarks. To bridge this gap, we introduce SportQA, a novel benchmark specifically designed for evaluating LLMs in the context of sports understanding. SportQA encompasses over 70,000 multiple-choice questions across three distinct difficulty levels, each targeting different aspects of sports knowledge from basic historical facts to intricate, scenario-based reasoning tasks. We conducted a thorough evaluation of prevalent LLMs, mainly utilizing few-shot learning paradigms supplemented by chain-of-thought (CoT) prompting. Our results reveal that while LLMs exhibit competent performance in basic sports knowledge, they struggle with more complex, scenario-based sports reasoning, lagging behind human expertise. The introduction of SportQA marks a significant step forward in NLP, offering a tool for assessing and enhancing sports understanding in LLMs. The dataset is available at https://github.com/haotianxia/SportQA",
}
| A deep understanding of sports, a field rich in strategic and dynamic content, is crucial for advancing Natural Language Processing (NLP). This holds particular significance in the context of evaluating and advancing Large Language Models (LLMs), given the existing gap in specialized benchmarks. To bridge this gap, we introduce SportQA, a novel benchmark specifically designed for evaluating LLMs in the context of sports understanding. SportQA encompasses over 70,000 multiple-choice questions across three distinct difficulty levels, each targeting different aspects of sports knowledge from basic historical facts to intricate, scenario-based reasoning tasks. We conducted a thorough evaluation of prevalent LLMs, mainly utilizing few-shot learning paradigms supplemented by chain-of-thought (CoT) prompting. Our results reveal that while LLMs exhibit competent performance in basic sports knowledge, they struggle with more complex, scenario-based sports reasoning, lagging behind human expertise. The introduction of SportQA marks a significant step forward in NLP, offering a tool for assessing and enhancing sports understanding in LLMs. The dataset is available at https://github.com/haotianxia/SportQA | [
"Xia, Haotian",
"Yang, Zhengbang",
"Wang, Yuqing",
"Tracy, Rhys",
"Zhao, Yun",
"Huang, Dongdong",
"Chen, Zezhi",
"Zhu, Yan",
"Wang, Yuan-fang",
"Shen, Weining"
] | SportQA: A Benchmark for Sports Understanding in Large Language Models | naacl-long.283 | Oral | 2402.15862 | [
"https://github.com/haotianxia/sportqa"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2024.naacl-long.284.bib | https://aclanthology.org/2024.naacl-long.284/ | @inproceedings{truong-etal-2024-revisiting,
title = "Revisiting subword tokenization: A case study on affixal negation in large language models",
author = "Truong, Thinh and
Otmakhova, Yulia and
Verspoor, Karin and
Cohn, Trevor and
Baldwin, Timothy",
editor = "Duh, Kevin and
Gomez, Helena and
Bethard, Steven",
booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.naacl-long.284",
doi = "10.18653/v1/2024.naacl-long.284",
pages = "5082--5095",
abstract = "In this work, we measure the impact of affixal negation on modern English large language models (LLMs). In affixal negation, the negated meaning is expressed through a negative morpheme, which is potentially challenging for LLMs as their tokenizers are often not morphologically plausible. We conduct extensive experiments using LLMs with different subword tokenization methods, which lead to several insights on the interaction between tokenization performance and negation sensitivity. Despite some interesting mismatches between tokenization accuracy and negation detection performance, we show that models can, on the whole, reliably recognize the meaning of affixal negation.",
}
| In this work, we measure the impact of affixal negation on modern English large language models (LLMs). In affixal negation, the negated meaning is expressed through a negative morpheme, which is potentially challenging for LLMs as their tokenizers are often not morphologically plausible. We conduct extensive experiments using LLMs with different subword tokenization methods, which lead to several insights on the interaction between tokenization performance and negation sensitivity. Despite some interesting mismatches between tokenization accuracy and negation detection performance, we show that models can, on the whole, reliably recognize the meaning of affixal negation. | [
"Truong, Thinh",
"Otmakhova, Yulia",
"Verspoor, Karin",
"Cohn, Trevor",
"Baldwin, Timothy"
] | Revisiting subword tokenization: A case study on affixal negation in large language models | naacl-long.284 | Poster | 2404.02421 | [
""
] | https://huggingface.co/papers/2404.02421 | 0 | 1 | 1 | 5 | 1 | [] | [] | [] |
https://aclanthology.org/2024.naacl-long.285.bib | https://aclanthology.org/2024.naacl-long.285/ | @inproceedings{lozoya-etal-2024-generating,
title = "Generating Mental Health Transcripts with {SAPE} ({S}panish Adaptive Prompt Engineering)",
author = "Lozoya, Daniel and
Berazaluce, Alejandro and
Perches, Juan and
L{\'u}a, Eloy and
Conway, Mike and
D{'}Alfonso, Simon",
editor = "Duh, Kevin and
Gomez, Helena and
Bethard, Steven",
booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.naacl-long.285",
doi = "10.18653/v1/2024.naacl-long.285",
pages = "5096--5113",
abstract = "Large language models have become valuable tools for data augmentation in scenarios with limited data availability, as they can generate synthetic data resembling real-world data. However, their generative performance depends on the quality of the prompt used to instruct the model. Prompt engineering that relies on hand-crafted strategies or requires domain experts to adjust the prompt often yields suboptimal results. In this paper we present SAPE, a Spanish Adaptive Prompt Engineering method utilizing genetic algorithms for prompt generation and selection. Our evaluation of SAPE focuses on a generative task that involves the creation of Spanish therapy transcripts, a type of data that is challenging to collect due to the fact that it typically includes protected health information. Through human evaluations conducted by mental health professionals, our results show that SAPE produces Spanish counselling transcripts that more closely resemble authentic therapy transcripts compared to other prompt engineering techniques that are based on Reflexion and Chain-of-Thought.",
}
| Large language models have become valuable tools for data augmentation in scenarios with limited data availability, as they can generate synthetic data resembling real-world data. However, their generative performance depends on the quality of the prompt used to instruct the model. Prompt engineering that relies on hand-crafted strategies or requires domain experts to adjust the prompt often yields suboptimal results. In this paper we present SAPE, a Spanish Adaptive Prompt Engineering method utilizing genetic algorithms for prompt generation and selection. Our evaluation of SAPE focuses on a generative task that involves the creation of Spanish therapy transcripts, a type of data that is challenging to collect due to the fact that it typically includes protected health information. Through human evaluations conducted by mental health professionals, our results show that SAPE produces Spanish counselling transcripts that more closely resemble authentic therapy transcripts compared to other prompt engineering techniques that are based on Reflexion and Chain-of-Thought. | [
"Lozoya, Daniel",
"Berazaluce, Alej",
"ro",
"Perches, Juan",
"L{\\'u}a, Eloy",
"Conway, Mike",
"D{'}Alfonso, Simon"
] | Generating Mental Health Transcripts with SAPE (Spanish Adaptive Prompt Engineering) | naacl-long.285 | Oral | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
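As a rough illustration of the genetic-algorithm machinery SAPE builds on, here is a minimal prompt-evolution loop. The `fitness` callable (e.g., scoring how closely generated transcripts resemble real ones) and the naive word-level `mutate`/`crossover` operators are stand-in assumptions, not the paper's operators.

```python
import random

def evolve_prompts(seed_prompts, fitness, generations=10, pop_size=8, mut_rate=0.3):
    """Minimal genetic-algorithm loop over prompt strings (assumes multi-word
    prompts). Selection keeps the highest-fitness prompts; crossover and
    mutation generate candidates for the next generation."""

    def mutate(prompt):
        words = prompt.split()
        i = random.randrange(len(words))
        words[i] = random.choice(["por favor", "detalladamente", "empático", words[i]])
        return " ".join(words)

    def crossover(p1, p2):
        w1, w2 = p1.split(), p2.split()
        cut = random.randrange(1, max(2, min(len(w1), len(w2))))
        return " ".join(w1[:cut] + w2[cut:])

    population = list(seed_prompts)
    for _ in range(generations):
        ranked = sorted(population, key=fitness, reverse=True)
        parents = ranked[: max(2, pop_size // 2)]  # selection
        children = [crossover(random.choice(parents), random.choice(parents))
                    for _ in range(pop_size - len(parents))]
        population = parents + [mutate(c) if random.random() < mut_rate else c
                                for c in children]
    return max(population, key=fitness)
```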
https://aclanthology.org/2024.naacl-long.286.bib | https://aclanthology.org/2024.naacl-long.286/ | @inproceedings{foley-etal-2024-geolocating,
title = "Where are you from? Geolocating Speech and Applications to Language Identification",
author = "Foley, Patrick and
Wiesner, Matthew and
Odoom, Bismarck and
Garcia Perera, Leibny Paola and
Murray, Kenton and
Koehn, Philipp",
editor = "Duh, Kevin and
Gomez, Helena and
Bethard, Steven",
booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.naacl-long.286",
doi = "10.18653/v1/2024.naacl-long.286",
pages = "5114--5126",
abstract = "We train models to answer the question, Where are you from? and show how such models can be repurposed for language identification (LID). To our knowledge, this paper is the first to introduce data sources, methods and models to tackle the task of geolocation of speech at a global scale, and the first to explore using geolocation as a proxy-task for LID. Specifically, we explore whether radio broadcasts with known origin can be used to train regression and classification-based models for geolocating speech. We build models on top of self-supervised pretrained models, using attention pooling to qualitatively verify that the model geolocates the speech itself, and not other channel artifacts.The best geolocation models localize speaker origin to around 650km. We confirm the value of speech geolocation as a proxy task by using speech geolocation models for zero-shot LID. Finally, we show that fine-tuning geolocation models for LID outperforms fine-tuning pretrained Wav2Vec2.0 models, and achieves state-of-the-art performance on the FLEURS benchmark.",
}
| We train models to answer the question, Where are you from? and show how such models can be repurposed for language identification (LID). To our knowledge, this paper is the first to introduce data sources, methods and models to tackle the task of geolocation of speech at a global scale, and the first to explore using geolocation as a proxy-task for LID. Specifically, we explore whether radio broadcasts with known origin can be used to train regression and classification-based models for geolocating speech. We build models on top of self-supervised pretrained models, using attention pooling to qualitatively verify that the model geolocates the speech itself, and not other channel artifacts. The best geolocation models localize speaker origin to around 650km. We confirm the value of speech geolocation as a proxy task by using speech geolocation models for zero-shot LID. Finally, we show that fine-tuning geolocation models for LID outperforms fine-tuning pretrained Wav2Vec2.0 models, and achieves state-of-the-art performance on the FLEURS benchmark. | [
"Foley, Patrick",
"Wiesner, Matthew",
"Odoom, Bismarck",
"Garcia Perera, Leibny Paola",
"Murray, Kenton",
"Koehn, Philipp"
] | Where are you from? Geolocating Speech and Applications to Language Identification | naacl-long.286 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2024.naacl-long.287.bib | https://aclanthology.org/2024.naacl-long.287/ | @inproceedings{yu-etal-2024-teaching,
title = "Teaching Language Models to Self-Improve through Interactive Demonstrations",
author = "Yu, Xiao and
Peng, Baolin and
Galley, Michel and
Gao, Jianfeng and
Yu, Zhou",
editor = "Duh, Kevin and
Gomez, Helena and
Bethard, Steven",
booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.naacl-long.287",
doi = "10.18653/v1/2024.naacl-long.287",
pages = "5127--5149",
abstract = "The self-improving ability of large language models (LLMs), enabled by prompting them to analyze and revise their own outputs, has garnered significant interest in recent research. However, this ability has been shown to be absent and difficult to learn for smaller models, thus widening the performance gap between state-of-the-art LLMs and more cost-effective and faster ones. To reduce this gap, we introduce TriPosT, a training algorithm that endows smaller models with such self-improvement ability, and show that our approach can improve LLaMA-7B{'}s performance on math and reasoning tasks by up to 7.13{\%}. In contrast to prior work, we achieve this by using the smaller model to interact with LLMs to collect feedback and improvements on *its own generations*. We then replay this experience to train the small model. Our experiments on four math and reasoning datasets show that the interactive experience of learning from and correcting its *own* mistakes is crucial for small models to improve their performance.",
}
| The self-improving ability of large language models (LLMs), enabled by prompting them to analyze and revise their own outputs, has garnered significant interest in recent research. However, this ability has been shown to be absent and difficult to learn for smaller models, thus widening the performance gap between state-of-the-art LLMs and more cost-effective and faster ones. To reduce this gap, we introduce TriPosT, a training algorithm that endows smaller models with such self-improvement ability, and show that our approach can improve LLaMA-7B{'}s performance on math and reasoning tasks by up to 7.13{\%}. In contrast to prior work, we achieve this by using the smaller model to interact with LLMs to collect feedback and improvements on *its own generations*. We then replay this experience to train the small model. Our experiments on four math and reasoning datasets show that the interactive experience of learning from and correcting its *own* mistakes is crucial for small models to improve their performance. | [
"Yu, Xiao",
"Peng, Baolin",
"Galley, Michel",
"Gao, Jianfeng",
"Yu, Zhou"
] | Teaching Language Models to Self-Improve through Interactive Demonstrations | naacl-long.287 | Oral | 2310.13522 | [
"https://github.com/jasonyux/tripost"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
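A sketch of the interaction loop TriPosT describes, to make the data-collection step concrete. Here `small_model`, `llm_critique`, and `finetune` are hypothetical callables; the real method's filtering and formatting of trajectories is more involved.

```python
def tripost_round(small_model, llm_critique, problems, finetune):
    """One interactive round in the spirit of TriPosT (a sketch, not the
    authors' code): the small model attempts each problem, a stronger LLM
    supplies feedback plus an improved answer grounded in the small model's
    *own* generation, and the trajectory is replayed as training data."""
    replay = []
    for problem in problems:
        attempt = small_model(problem)
        feedback, improved = llm_critique(problem, attempt)
        replay.append({
            "prompt": problem,
            # Target: the full self-improvement trajectory, so the small
            # model learns to critique and revise its own mistakes.
            "target": f"{attempt}\nFeedback: {feedback}\nRevised: {improved}",
        })
    finetune(small_model, replay)
    return replay
```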
https://aclanthology.org/2024.naacl-long.288.bib | https://aclanthology.org/2024.naacl-long.288/ | @inproceedings{aboutalebi-etal-2024-magid,
title = "{MAGID}: An Automated Pipeline for Generating Synthetic Multi-modal Datasets",
author = "Aboutalebi, Hossein and
Song, Hwanjun and
Xie, Yusheng and
Gupta, Arshit and
Sun, Lijia and
Su, Hang and
Shalyminov, Igor and
Pappas, Nikolaos and
Singh, Siffi and
Mansour, Saab",
editor = "Duh, Kevin and
Gomez, Helena and
Bethard, Steven",
booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.naacl-long.288",
doi = "10.18653/v1/2024.naacl-long.288",
pages = "5150--5167",
abstract = "Development of multimodal interactive systems is hindered by the lack of rich, multimodal (text, images) conversational data, which is needed in large quantities for LLMs. Previous approaches augment textual dialogues with retrieved images, posing privacy, diversity, and quality constraints. In this work, we introduce \textbf{M}ultimodal \textbf{A}ugmented \textbf{G}enerative \textbf{I}mages \textbf{D}ialogues (MAGID), a framework to augment text-only dialogues with diverse and high-quality images . Subsequently, a diffusion model is applied to craft corresponding images, ensuring alignment with the identified text. Finally, MAGID incorporates an innovative feedback loop between an image description generation module (textual LLM) and image quality modules (addressing aesthetics, image-text matching, and safety), that work in tandem to generate high-quality and multi-modal dialogues. We compare MAGID to other SOTA baselines on three dialogue datasets, using automated and human evaluation. Our results show that MAGID is comparable to or better than baselines, with significant improvements in human evaluation, especially against retrieval baselines where the image database is small.",
}
| Development of multimodal interactive systems is hindered by the lack of rich, multimodal (text, images) conversational data, which is needed in large quantities for LLMs. Previous approaches augment textual dialogues with retrieved images, posing privacy, diversity, and quality constraints. In this work, we introduce \textbf{M}ultimodal \textbf{A}ugmented \textbf{G}enerative \textbf{I}mages \textbf{D}ialogues (MAGID), a framework to augment text-only dialogues with diverse and high-quality images. Subsequently, a diffusion model is applied to craft corresponding images, ensuring alignment with the identified text. Finally, MAGID incorporates an innovative feedback loop between an image description generation module (textual LLM) and image quality modules (addressing aesthetics, image-text matching, and safety) that work in tandem to generate high-quality and multi-modal dialogues. We compare MAGID to other SOTA baselines on three dialogue datasets, using automated and human evaluation. Our results show that MAGID is comparable to or better than baselines, with significant improvements in human evaluation, especially against retrieval baselines where the image database is small. | [
"Aboutalebi, Hossein",
"Song, Hwanjun",
"Xie, Yusheng",
"Gupta, Arshit",
"Sun, Lijia",
"Su, Hang",
"Shalyminov, Igor",
"Pappas, Nikolaos",
"Singh, Siffi",
"Mansour, Saab"
] | MAGID: An Automated Pipeline for Generating Synthetic Multi-modal Datasets | naacl-long.288 | Oral | 2403.03194 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2024.naacl-long.289.bib | https://aclanthology.org/2024.naacl-long.289/ | @inproceedings{lin-etal-2024-zero,
title = "Zero-shot Generative Linguistic Steganography",
author = "Lin, Ke and
Luo, Yiyang and
Zhang, Zijian and
Ping, Luo",
editor = "Duh, Kevin and
Gomez, Helena and
Bethard, Steven",
booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.naacl-long.289",
doi = "10.18653/v1/2024.naacl-long.289",
pages = "5168--5182",
abstract = "Generative linguistic steganography attempts to hide secret messages into covertext. Previous studies have generally focused on the statistical differences between the covertext and stegotext, however, ill-formed stegotext can readily be identified by humans. In this paper, we propose a novel zero-shot approach based on in-context learning for linguistic steganography to achieve better perceptual and statistical imperceptibility. We also design several new metrics and reproducible language evaluations to measure the imperceptibility of the stegotext. Our experimental results indicate that our method produces $1.926\times$ more innocent and intelligible stegotext than any other method.",
}
| Generative linguistic steganography attempts to hide secret messages into covertext. Previous studies have generally focused on the statistical differences between the covertext and stegotext, however, ill-formed stegotext can readily be identified by humans. In this paper, we propose a novel zero-shot approach based on in-context learning for linguistic steganography to achieve better perceptual and statistical imperceptibility. We also design several new metrics and reproducible language evaluations to measure the imperceptibility of the stegotext. Our experimental results indicate that our method produces $1.926\times$ more innocent and intelligible stegotext than any other method. | [
"Lin, Ke",
"Luo, Yiyang",
"Zhang, Zijian",
"Ping, Luo"
] | Zero-shot Generative Linguistic Steganography | naacl-long.289 | Poster | 2403.10856 | [
"https://github.com/leonardodalinky/zero-shot-gls"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2024.naacl-long.290.bib | https://aclanthology.org/2024.naacl-long.290/ | @inproceedings{jones-bergen-2024-gpt,
title = "Does {GPT}-4 pass the {T}uring test?",
author = "Jones, Cameron and
Bergen, Ben",
editor = "Duh, Kevin and
Gomez, Helena and
Bethard, Steven",
booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.naacl-long.290",
doi = "10.18653/v1/2024.naacl-long.290",
pages = "5183--5210",
abstract = "We evaluated GPT-4 in a public online Turing test. The best-performing GPT-4 prompt passed in 49.7{\%} of games, outperforming ELIZA (22{\%}) and GPT-3.5 (20{\%}), but falling short of the baseline set by human participants (66{\%}). Participants{'} decisions were based mainly on linguistic style (35{\%}) and socioemotional traits (27{\%}), supporting the idea that intelligence, narrowly conceived, is not sufficient to pass the Turing test. Participant knowledge about LLMs and number of games played positively correlated with accuracy in detecting AI, suggesting learning and practice as possible strategies to mitigate deception. Despite known limitations as a test of intelligence, we argue that the Turing test continues to be relevant as an assessment of naturalistic communication and deception. AI models with the ability to masquerade as humans could have widespread societal consequences, and we analyse the effectiveness of different strategies and criteria for judging humanlikeness.",
}
| We evaluated GPT-4 in a public online Turing test. The best-performing GPT-4 prompt passed in 49.7{\%} of games, outperforming ELIZA (22{\%}) and GPT-3.5 (20{\%}), but falling short of the baseline set by human participants (66{\%}). Participants{'} decisions were based mainly on linguistic style (35{\%}) and socioemotional traits (27{\%}), supporting the idea that intelligence, narrowly conceived, is not sufficient to pass the Turing test. Participant knowledge about LLMs and number of games played positively correlated with accuracy in detecting AI, suggesting learning and practice as possible strategies to mitigate deception. Despite known limitations as a test of intelligence, we argue that the Turing test continues to be relevant as an assessment of naturalistic communication and deception. AI models with the ability to masquerade as humans could have widespread societal consequences, and we analyse the effectiveness of different strategies and criteria for judging humanlikeness. | [
"Jones, Cameron",
"Bergen, Ben"
] | Does GPT-4 pass the Turing test? | naacl-long.290 | Oral | 2310.20216 | [
""
] | https://huggingface.co/papers/2310.20216 | 1 | 17 | 3 | 2 | 1 | [] | [] | [] |
https://aclanthology.org/2024.naacl-long.291.bib | https://aclanthology.org/2024.naacl-long.291/ | @inproceedings{lei-etal-2024-polarity,
title = "Polarity Calibration for Opinion Summarization",
author = "Lei, Yuanyuan and
Song, Kaiqiang and
Cho, Sangwoo and
Wang, Xiaoyang and
Huang, Ruihong and
Yu, Dong",
editor = "Duh, Kevin and
Gomez, Helena and
Bethard, Steven",
booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.naacl-long.291",
doi = "10.18653/v1/2024.naacl-long.291",
pages = "5211--5224",
abstract = "Opinion summarization is automatically generating summaries from a variety of subjective information, such as product reviews or political opinions. The challenge of opinions summarization lies in presenting divergent or even conflicting opinions. We conduct an analysis of previous summarization models, which reveals their inclination to amplify the polarity bias, emphasizing the majority opinions while ignoring the minority opinions. To address this issue and make the summarizer express both sides of opinions, we introduce the concept of polarity calibration, which aims to align the polarity of output summary with that of input text. Specifically, we develop a reinforcement training approach for polarity calibration. This approach feeds the polarity distance between output summary and input text as reward into the summarizer, and also balance polarity calibration with content preservation and language naturality. We evaluate our Polarity Calibration model (PoCa) on two types of opinions summarization tasks: summarizing product reviews and political opinions articles. Automatic and human evaluation demonstrate that our approach can mitigate the polarity mismatch between output summary and input text, as well as maintain the content semantic and language quality.",
}
| Opinion summarization is the automatic generation of summaries from a variety of subjective information, such as product reviews or political opinions. The challenge of opinion summarization lies in presenting divergent or even conflicting opinions. We conduct an analysis of previous summarization models, which reveals their inclination to amplify the polarity bias, emphasizing the majority opinions while ignoring the minority opinions. To address this issue and make the summarizer express both sides of opinions, we introduce the concept of polarity calibration, which aims to align the polarity of the output summary with that of the input text. Specifically, we develop a reinforcement training approach for polarity calibration. This approach feeds the polarity distance between the output summary and the input text as a reward into the summarizer, and also balances polarity calibration with content preservation and language naturalness. We evaluate our Polarity Calibration model (PoCa) on two types of opinion summarization tasks: summarizing product reviews and political opinion articles. Automatic and human evaluations demonstrate that our approach can mitigate the polarity mismatch between the output summary and the input text, as well as maintain content semantics and language quality. | [
"Lei, Yuanyuan",
"Song, Kaiqiang",
"Cho, Sangwoo",
"Wang, Xiaoyang",
"Huang, Ruihong",
"Yu, Dong"
] | Polarity Calibration for Opinion Summarization | naacl-long.291 | Poster | 2404.01706 | [
"https://github.com/yuanyuanlei-nlp/polarity_calibration_naacl_2024"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
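The reward the PoCa abstract describes can be written down in a few lines. Below is a hedged sketch: polarity alignment (negative polarity distance) balanced against content preservation and fluency. The weights, and the assumption that polarities are scalars in [-1, 1] from some sentiment model, are illustrative, not from the paper.

```python
def poca_style_reward(pol_input, pol_summary, content_score, fluency_score,
                      w_pol=1.0, w_content=1.0, w_flu=0.5):
    """Illustrative combined reward: the closer the summary's polarity is to
    the input's, the higher the reward, balanced against content preservation
    and language naturalness. All weights are assumptions."""
    polarity_alignment = -abs(pol_input - pol_summary)  # 0 when perfectly aligned
    return (w_pol * polarity_alignment
            + w_content * content_score
            + w_flu * fluency_score)
```

In an RL fine-tuning loop, this scalar would be fed back to the summarizer as the abstract describes.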
https://aclanthology.org/2024.naacl-long.292.bib | https://aclanthology.org/2024.naacl-long.292/ | @inproceedings{lei-huang-2024-sentence,
title = "Sentence-level Media Bias Analysis with Event Relation Graph",
author = "Lei, Yuanyuan and
Huang, Ruihong",
editor = "Duh, Kevin and
Gomez, Helena and
Bethard, Steven",
booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.naacl-long.292",
doi = "10.18653/v1/2024.naacl-long.292",
pages = "5225--5238",
abstract = "Media outlets are becoming more partisan and polarized nowadays. In this paper, we identify media bias at the sentence level, and pinpoint bias sentences that intend to sway readers{'} opinions. As bias sentences are often expressed in a neutral and factual way, considering broader context outside a sentence can help reveal the bias. In particular, we observe that events in a bias sentence need to be understood in associations with other events in the document. Therefore, we propose to construct an event relation graph to explicitly reason about event-event relations for sentence-level bias identification. The designed event relation graph consists of events as nodes and four common types of event relations: coreference, temporal, causal, and subevent relations. Then, we incorporate event relation graph for bias sentences identification in two steps: an event-aware language model is built to inject the events and event relations knowledge into the basic language model via soft labels; further, a relation-aware graph attention network is designed to update sentence embedding with events and event relations information based on hard labels. Experiments on two benchmark datasets demonstrate that our approach with the aid of event relation graph improves both precision and recall of bias sentence identification.",
}
| Media outlets are becoming more partisan and polarized nowadays. In this paper, we identify media bias at the sentence level, and pinpoint bias sentences that are intended to sway readers{'} opinions. As bias sentences are often expressed in a neutral and factual way, considering broader context outside a sentence can help reveal the bias. In particular, we observe that events in a bias sentence need to be understood in association with other events in the document. Therefore, we propose to construct an event relation graph to explicitly reason about event-event relations for sentence-level bias identification. The designed event relation graph consists of events as nodes and four common types of event relations: coreference, temporal, causal, and subevent relations. Then, we incorporate the event relation graph for bias sentence identification in two steps: an event-aware language model is built to inject knowledge of events and event relations into the basic language model via soft labels; further, a relation-aware graph attention network is designed to update sentence embeddings with event and event-relation information based on hard labels. Experiments on two benchmark datasets demonstrate that our approach, with the aid of the event relation graph, improves both precision and recall of bias sentence identification. | [
"Lei, Yuanyuan",
"Huang, Ruihong"
] | Sentence-level Media Bias Analysis with Event Relation Graph | naacl-long.292 | Poster | 2404.01722 | [
"https://github.com/yuanyuanlei-nlp/sentence_level_media_bias_naacl_2024"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2024.naacl-long.293.bib | https://aclanthology.org/2024.naacl-long.293/ | @inproceedings{lei-etal-2024-emona,
title = "{EMONA}: Event-level Moral Opinions in News Articles",
author = "Lei, Yuanyuan and
Miah, Md Messal Monem and
Qamar, Ayesha and
Reddy, Sai Ramana and
Tong, Jonathan and
Xu, Haotian and
Huang, Ruihong",
editor = "Duh, Kevin and
Gomez, Helena and
Bethard, Steven",
booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.naacl-long.293",
doi = "10.18653/v1/2024.naacl-long.293",
pages = "5239--5251",
abstract = "Most previous research on moral frames has focused on social media short texts, little work has explored moral sentiment within news articles. In news articles, authors often express their opinions or political stance through moral judgment towards events, specifically whether the event is right or wrong according to social moral rules. This paper initiates a new task to understand moral opinions towards events in news articles. We have created a new dataset, EMONA, and annotated event-level moral opinions in news articles. This dataset consists of 400 news articles containing over 10k sentences and 45k events, among which 9,613 events received moral foundation labels. Extracting event morality is a challenging task, as moral judgment towards events can be very implicit. Baseline models were built for event moral identification and classification. In addition, we also conduct extrinsic evaluations to integrate event-level moral opinions into three downstream tasks. The statistical analysis and experiments show that moral opinions of events can serve as informative features for identifying ideological bias or subjective events.",
}
| Most previous research on moral frames has focused on social media short texts; little work has explored moral sentiment within news articles. In news articles, authors often express their opinions or political stance through moral judgment towards events, specifically whether the event is right or wrong according to social moral rules. This paper initiates a new task to understand moral opinions towards events in news articles. We have created a new dataset, EMONA, and annotated event-level moral opinions in news articles. This dataset consists of 400 news articles containing over 10k sentences and 45k events, among which 9,613 events received moral foundation labels. Extracting event morality is a challenging task, as moral judgment towards events can be very implicit. Baseline models were built for event moral identification and classification. In addition, we also conduct extrinsic evaluations to integrate event-level moral opinions into three downstream tasks. The statistical analysis and experiments show that moral opinions of events can serve as informative features for identifying ideological bias or subjective events. | [
"Lei, Yuanyuan",
"Miah, Md Messal Monem",
"Qamar, Ayesha",
"Reddy, Sai Ramana",
"Tong, Jonathan",
"Xu, Haotian",
"Huang, Ruihong"
] | EMONA: Event-level Moral Opinions in News Articles | naacl-long.293 | Poster | 2404.01715 | [
"https://github.com/yuanyuanlei-nlp/emona_dataset"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2024.naacl-long.294.bib | https://aclanthology.org/2024.naacl-long.294/ | @inproceedings{gao-etal-2024-dlm,
title = "{DLM}: A Decoupled Learning Model for Long-tailed Polyphone Disambiguation in {M}andarin",
author = "Gao, Beibei and
Zhang, Yangsen and
Xiang, Ga and
Jiang, Yushan",
editor = "Duh, Kevin and
Gomez, Helena and
Bethard, Steven",
booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.naacl-long.294",
doi = "10.18653/v1/2024.naacl-long.294",
pages = "5252--5262",
abstract = "Grapheme-to-phoneme conversion (G2P) is a critical component of the text-to-speech system (TTS), where polyphone disambiguation is the most crucial task. However, polyphone disambiguation datasets often suffer from the long-tail problem, and context learning for polyphonic characters commonly stems from a single dimension. In this paper, we propose a novel model DLM: a Decoupled Learning Model for long-tailed polyphone disambiguation in Mandarin. Firstly, DLM decouples representation and classification learnings. It can apply different data samplers for each stage to obtain an optimal training data distribution. This can mitigate the long-tail problem. Secondly, two improved attention mechanisms and a gradual conversion strategy are integrated into the DLM, which achieve transition learning of context from local to global. Finally, to evaluate the effectiveness of DLM, we construct a balanced polyphone disambiguation corpus via in-context learning. Experiments on the benchmark CPP dataset demonstrate that DLM achieves a boosted accuracy of 99.07{\%}. Moreover, DLM improves the disambiguation performance of long-tailed polyphonic characters. For many long-tailed characters, DLM even achieves an accuracy of 100{\%}.",
}
| Grapheme-to-phoneme conversion (G2P) is a critical component of the text-to-speech system (TTS), where polyphone disambiguation is the most crucial task. However, polyphone disambiguation datasets often suffer from the long-tail problem, and context learning for polyphonic characters commonly stems from a single dimension. In this paper, we propose a novel model DLM: a Decoupled Learning Model for long-tailed polyphone disambiguation in Mandarin. Firstly, DLM decouples representation and classification learning. It can apply different data samplers for each stage to obtain an optimal training data distribution. This can mitigate the long-tail problem. Secondly, two improved attention mechanisms and a gradual conversion strategy are integrated into the DLM, which achieve transition learning of context from local to global. Finally, to evaluate the effectiveness of DLM, we construct a balanced polyphone disambiguation corpus via in-context learning. Experiments on the benchmark CPP dataset demonstrate that DLM achieves a boosted accuracy of 99.07{\%}. Moreover, DLM improves the disambiguation performance of long-tailed polyphonic characters. For many long-tailed characters, DLM even achieves an accuracy of 100{\%}. | [
"Gao, Beibei",
"Zhang, Yangsen",
"Xiang, Ga",
"Jiang, Yushan"
] | DLM: A Decoupled Learning Model for Long-tailed Polyphone Disambiguation in Mandarin | naacl-long.294 | Oral | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
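The decoupling the DLM abstract describes follows the common two-stage recipe for long-tailed learning: train representations on the natural data distribution, then re-train the classifier on re-balanced data. The sketch below assumes a `model` with an `encoder` attribute and a caller-supplied `train_one_epoch` helper; the samplers shown are generic stand-ins for DLM's actual choices.

```python
from torch.utils.data import DataLoader, WeightedRandomSampler

def decoupled_training(model, dataset, labels, class_counts, train_one_epoch,
                       epochs=(3, 1)):
    """Two-stage sketch: stage 1 learns representations on the natural
    (instance-balanced) distribution; stage 2 re-trains the classifier with a
    class-balanced sampler so tail classes are sampled as often as head ones.
    `train_one_epoch` and `model.encoder` are assumed interfaces."""
    # Stage 1: representation learning on the natural data distribution.
    natural = DataLoader(dataset, batch_size=32, shuffle=True)
    for _ in range(epochs[0]):
        train_one_epoch(model, natural)

    # Stage 2: freeze the encoder, re-train the classifier head on
    # class-balanced batches (inverse-frequency sampling weights).
    for p in model.encoder.parameters():
        p.requires_grad_(False)
    weights = [1.0 / class_counts[y] for y in labels]
    sampler = WeightedRandomSampler(weights, num_samples=len(labels))
    balanced = DataLoader(dataset, batch_size=32, sampler=sampler)
    for _ in range(epochs[1]):
        train_one_epoch(model, balanced)
```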
https://aclanthology.org/2024.naacl-long.295.bib | https://aclanthology.org/2024.naacl-long.295/ | @inproceedings{shu-etal-2024-dont,
title = "You don{'}t need a personality test to know these models are unreliable: Assessing the Reliability of Large Language Models on Psychometric Instruments",
author = "Shu, Bangzhao and
Zhang, Lechen and
Choi, Minje and
Dunagan, Lavinia and
Logeswaran, Lajanugen and
Lee, Moontae and
Card, Dallas and
Jurgens, David",
editor = "Duh, Kevin and
Gomez, Helena and
Bethard, Steven",
booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.naacl-long.295",
doi = "10.18653/v1/2024.naacl-long.295",
pages = "5263--5281",
abstract = "The versatility of Large Language Models (LLMs) on natural language understanding tasks has made them popular for research in social sciences. To properly understand the properties and innate personas of LLMs, researchers have performed studies that involve using prompts in the form of questions that ask LLMs about particular opinions. In this study, we take a cautionary step back and examine whether the current format of prompting LLMs elicits responses in a consistent and robust manner. We first construct a dataset that contains 693 questions encompassing 39 different instruments of persona measurement on 115 persona axes. Additionally, we design a set of prompts containing minor variations and examine LLMs{'} capabilities to generate answers, as well as prompt variations to examine their consistency with respect to content-level variations such as switching the order of response options or negating the statement. Our experiments on 17 different LLMs reveal that even simple perturbations significantly downgrade a model{'}s question-answering ability, and that most LLMs have low negation consistency. Our results suggest that the currently widespread practice of prompting is insufficient to accurately and reliably capture model perceptions, and we therefore discuss potential alternatives to improve these issues.",
}
| The versatility of Large Language Models (LLMs) on natural language understanding tasks has made them popular for research in social sciences. To properly understand the properties and innate personas of LLMs, researchers have performed studies that involve using prompts in the form of questions that ask LLMs about particular opinions. In this study, we take a cautionary step back and examine whether the current format of prompting LLMs elicits responses in a consistent and robust manner. We first construct a dataset that contains 693 questions encompassing 39 different instruments of persona measurement on 115 persona axes. Additionally, we design a set of prompts containing minor variations and examine LLMs{'} capabilities to generate answers, as well as prompt variations to examine their consistency with respect to content-level variations such as switching the order of response options or negating the statement. Our experiments on 17 different LLMs reveal that even simple perturbations significantly downgrade a model{'}s question-answering ability, and that most LLMs have low negation consistency. Our results suggest that the currently widespread practice of prompting is insufficient to accurately and reliably capture model perceptions, and we therefore discuss potential alternatives to improve these issues. | [
"Shu, Bangzhao",
"Zhang, Lechen",
"Choi, Minje",
"Dunagan, Lavinia",
"Logeswaran, Lajanugen",
"Lee, Moontae",
"Card, Dallas",
"Jurgens, David"
] | You don't need a personality test to know these models are unreliable: Assessing the Reliability of Large Language Models on Psychometric Instruments | naacl-long.295 | Oral | 2311.09718 | [
"https://github.com/orange0629/llm-personas"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
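One of the perturbations the paper measures, negation consistency, is straightforward to operationalize. A simplified sketch follows; the `ask` callable is hypothetical, and real negation requires more care than prefixing a fixed template.

```python
def negation_consistency(ask, statements):
    """Fraction of statements where the model's agreement flips under
    negation, as a reliable responder's should. `ask` is a hypothetical
    callable returning 'agree' or 'disagree'."""
    consistent = 0
    for s in statements:
        original = ask(f"Do you agree with: '{s}'? Answer agree or disagree.")
        negated = ask(f"Do you agree with: 'It is not true that {s}'? "
                      f"Answer agree or disagree.")
        # Consistent if the answer flips when the statement is negated.
        consistent += (original.strip().lower() != negated.strip().lower())
    return consistent / len(statements)
```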
https://aclanthology.org/2024.naacl-long.296.bib | https://aclanthology.org/2024.naacl-long.296/ | @inproceedings{liu-etal-2024-casa,
title = "{CASA}: Causality-driven Argument Sufficiency Assessment",
author = "Liu, Xiao and
Feng, Yansong and
Chang, Kai-Wei",
editor = "Duh, Kevin and
Gomez, Helena and
Bethard, Steven",
booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.naacl-long.296",
doi = "10.18653/v1/2024.naacl-long.296",
pages = "5282--5302",
abstract = "The argument sufficiency assessment task aims to determine if the premises of a given argument support its conclusion.To tackle this task, existing works often train a classifier on data annotated by humans. However, annotating data is laborious, and annotations are often inconsistent due to subjective criteria. Motivated by the definition of probability of sufficiency (PS) in the causal literature, we proposeCASA, a zero-shot causality-driven argument sufficiency assessment framework. PS measures how likely introducing the premise event would lead to the conclusion when both the premise and conclusion events are absent. To estimate this probability, we propose to use large language models (LLMs) to generate contexts that are inconsistent with the premise and conclusion and revise them by injecting the premise event.Experiments on two logical fallacy detection datasets demonstrate that CASA accurately identifies insufficient arguments. We further deploy CASA in a writing assistance application, and find that suggestions generated by CASA enhance the sufficiency of student-written arguments. Code and data are available at https://github.com/xxxiaol/CASA.",
}
| The argument sufficiency assessment task aims to determine if the premises of a given argument support its conclusion. To tackle this task, existing works often train a classifier on data annotated by humans. However, annotating data is laborious, and annotations are often inconsistent due to subjective criteria. Motivated by the definition of probability of sufficiency (PS) in the causal literature, we propose CASA, a zero-shot causality-driven argument sufficiency assessment framework. PS measures how likely introducing the premise event would lead to the conclusion when both the premise and conclusion events are absent. To estimate this probability, we propose to use large language models (LLMs) to generate contexts that are inconsistent with the premise and conclusion and revise them by injecting the premise event. Experiments on two logical fallacy detection datasets demonstrate that CASA accurately identifies insufficient arguments. We further deploy CASA in a writing assistance application, and find that suggestions generated by CASA enhance the sufficiency of student-written arguments. Code and data are available at https://github.com/xxxiaol/CASA. | [
"Liu, Xiao",
"Feng, Yansong",
"Chang, Kai-Wei"
] | CASA: Causality-driven Argument Sufficiency Assessment | naacl-long.296 | Poster | 2401.05249 | [
"https://github.com/xxxiaol/casa"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
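The estimation procedure the CASA abstract outlines lends itself to a Monte-Carlo sketch: sample contexts where neither premise nor conclusion holds, inject the premise, and check whether the conclusion now follows. Here `llm` is a hypothetical text-in/text-out callable, and CASA's actual prompting and revision steps are more involved than these templates.

```python
def estimate_probability_of_sufficiency(llm, premise, conclusion, n_contexts=20):
    """Monte-Carlo sketch of a probability-of-sufficiency estimate in the
    spirit of CASA. `llm` is a hypothetical callable; all prompt templates
    are illustrative stand-ins."""
    hits = 0
    for _ in range(n_contexts):
        # Context inconsistent with both premise and conclusion.
        ctx = llm(f"Write a short scenario where neither '{premise}' "
                  f"nor '{conclusion}' is the case.")
        # Inject the premise with a minimal revision.
        revised = llm(f"Minimally revise this scenario so that '{premise}' "
                      f"holds:\n{ctx}")
        # Check whether the conclusion now follows.
        ans = llm(f"Scenario:\n{revised}\nDoes '{conclusion}' follow? "
                  f"Answer yes or no.")
        hits += ans.strip().lower().startswith("yes")
    return hits / n_contexts
```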
https://aclanthology.org/2024.naacl-long.297.bib | https://aclanthology.org/2024.naacl-long.297/ | @inproceedings{tian-etal-2024-macgyver,
title = "{M}ac{G}yver: Are Large Language Models Creative Problem Solvers?",
author = "Tian, Yufei and
Ravichander, Abhilasha and
Qin, Lianhui and
Le Bras, Ronan and
Marjieh, Raja and
Peng, Nanyun and
Choi, Yejin and
Griffiths, Thomas and
Brahman, Faeze",
editor = "Duh, Kevin and
Gomez, Helena and
Bethard, Steven",
booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.naacl-long.297",
doi = "10.18653/v1/2024.naacl-long.297",
pages = "5303--5324",
abstract = "We explore the creative problem-solving capabilities of modern LLMs in a novel constrained setting. To this end, we create MACGYVER, an automatically generated dataset consisting of over 1,600 real-world problems deliberately designed to trigger innovative usage of objects and necessitate out-of-the-box thinking. We then present our collection to both LLMs and humans to compare and contrast their problem-solving abilities. MACGYVER is challenging for both groups, but in unique and complementary ways. For instance, humans excel in tasks they are familiar with but struggle with domain-specific knowledge, leading to a higher variance. In contrast, LLMs, exposed to a variety of specialized knowledge, attempt broader problems but fail by proposing physically-infeasible actions. Finally, we provide a detailed error analysis of LLMs, and demonstrate the potential of enhancing their problem-solving ability with novel prompting techniques such as iterative step-wise reflection and divergent-convergent thinking.This work (1) introduces a fresh arena for intelligent agents focusing on intricate aspects of physical reasoning, planning, and unconventional thinking, which supplements the existing spectrum of machine intelligence; and (2) provides insight into the constrained problem-solving capabilities of both humans and AI.",
}
| We explore the creative problem-solving capabilities of modern LLMs in a novel constrained setting. To this end, we create MACGYVER, an automatically generated dataset consisting of over 1,600 real-world problems deliberately designed to trigger innovative usage of objects and necessitate out-of-the-box thinking. We then present our collection to both LLMs and humans to compare and contrast their problem-solving abilities. MACGYVER is challenging for both groups, but in unique and complementary ways. For instance, humans excel in tasks they are familiar with but struggle with domain-specific knowledge, leading to a higher variance. In contrast, LLMs, exposed to a variety of specialized knowledge, attempt broader problems but fail by proposing physically-infeasible actions. Finally, we provide a detailed error analysis of LLMs, and demonstrate the potential of enhancing their problem-solving ability with novel prompting techniques such as iterative step-wise reflection and divergent-convergent thinking. This work (1) introduces a fresh arena for intelligent agents focusing on intricate aspects of physical reasoning, planning, and unconventional thinking, which supplements the existing spectrum of machine intelligence; and (2) provides insight into the constrained problem-solving capabilities of both humans and AI. | [
"Tian, Yufei",
"Ravich",
"er, Abhilasha",
"Qin, Lianhui",
"Le Bras, Ronan",
"Marjieh, Raja",
"Peng, Nanyun",
"Choi, Yejin",
"Griffiths, Thomas",
"Brahman, Faeze"
] | MacGyver: Are Large Language Models Creative Problem Solvers? | naacl-long.297 | Poster | 2311.09682 | [
"https://github.com/allenai/macgyver"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2024.naacl-long.298.bib | https://aclanthology.org/2024.naacl-long.298/ | @inproceedings{ebing-glavas-2024-translate,
title = "To Translate or Not to Translate: A Systematic Investigation of Translation-Based Cross-Lingual Transfer to Low-Resource Languages",
author = "Ebing, Benedikt and
Glava{\v{s}}, Goran",
editor = "Duh, Kevin and
Gomez, Helena and
Bethard, Steven",
booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.naacl-long.298",
doi = "10.18653/v1/2024.naacl-long.298",
pages = "5325--5344",
abstract = "Perfect machine translation (MT) would render cross-lingual transfer (XLT) by means of multilingual language models (mLMs) superfluous. Given, on the one hand, the large body of work on improving XLT with mLMs and, on the other hand, recent advances in massively multilingual MT, in this work, we systematically evaluate existing and propose new translation-based XLT approaches for transfer to low-resource languages. We show that all translation-based approaches dramatically outperform zero-shot XLT with mLMs{---}with the combination of round-trip translation of the source-language training data and the translation of the target-language test instances at inference{---}being generally the most effective. We next show that one can obtain further empirical gains by adding reliable translations to other high-resource languages to the training data. Moreover, we propose an effective translation-based XLT strategy even for languages not supported by the MT system. Finally, we show that model selection for XLT based on target-language validation data obtained with MT outperforms model selection based on the source-language data. We believe our findings warrant a broader inclusion of more robust translation-based baselines in XLT research.",
}
| Perfect machine translation (MT) would render cross-lingual transfer (XLT) by means of multilingual language models (mLMs) superfluous. Given, on the one hand, the large body of work on improving XLT with mLMs and, on the other hand, recent advances in massively multilingual MT, in this work, we systematically evaluate existing and propose new translation-based XLT approaches for transfer to low-resource languages. We show that all translation-based approaches dramatically outperform zero-shot XLT with mLMs{---}with the combination of round-trip translation of the source-language training data and the translation of the target-language test instances at inference{---}being generally the most effective. We next show that one can obtain further empirical gains by adding reliable translations to other high-resource languages to the training data. Moreover, we propose an effective translation-based XLT strategy even for languages not supported by the MT system. Finally, we show that model selection for XLT based on target-language validation data obtained with MT outperforms model selection based on the source-language data. We believe our findings warrant a broader inclusion of more robust translation-based baselines in XLT research. | [
"Ebing, Benedikt",
"Glava{\\v{s}}, Goran"
] | To Translate or Not to Translate: A Systematic Investigation of Translation-Based Cross-Lingual Transfer to Low-Resource Languages | naacl-long.298 | Poster | 2311.09404 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
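The row above revolves around translation-based XLT baselines such as translate-test, where target-language test instances are machine-translated into the source language before being fed to a source-language task model. A minimal sketch of that idea follows, assuming an NLLB checkpoint on the Hugging Face Hub and a default English sentiment classifier as stand-ins; the paper's exact MT system, task models, and language pairs may differ.

```python
# Hedged sketch of the "translate-test" XLT baseline; model choices are
# illustrative, not the paper's experimental setup.
from transformers import pipeline

# Multilingual MT model using FLORES-200 language codes.
translator = pipeline(
    "translation",
    model="facebook/nllb-200-distilled-600M",
    src_lang="swh_Latn",   # Swahili test instance ...
    tgt_lang="eng_Latn",   # ... translated into English at inference time
)

# Any classifier fine-tuned on English task data plays the source-language model.
classifier = pipeline("sentiment-analysis")

def translate_test(target_lang_text: str) -> dict:
    english = translator(target_lang_text)[0]["translation_text"]
    return classifier(english)[0]

print(translate_test("Filamu hii ilikuwa nzuri sana."))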
https://aclanthology.org/2024.naacl-long.299.bib | https://aclanthology.org/2024.naacl-long.299/ | @inproceedings{wang-etal-2024-enhancing,
title = "Enhancing Large Language Models Against Inductive Instructions with Dual-critique Prompting",
author = "Wang, Rui and
Wang, Hongru and
Mi, Fei and
Xue, Boyang and
Chen, Yi and
Wong, Kam-Fai and
Xu, Ruifeng",
editor = "Duh, Kevin and
Gomez, Helena and
Bethard, Steven",
booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.naacl-long.299",
doi = "10.18653/v1/2024.naacl-long.299",
pages = "5345--5363",
abstract = "Numerous works are proposed to align large language models (LLMs) with human intents to better fulfill instructions, ensuring they are trustful and helpful.Nevertheless, some human instructions are often malicious or misleading and following them will lead to untruthful and unsafe responses.Previous work rarely focused on understanding how LLMs manage instructions based on counterfactual premises, referred to here as inductive instructions, which may stem from users{'} false beliefs or malicious intents.In this paper, we aim to reveal the behaviors of LLMs towards inductive instructions and enhance their truthfulness and helpfulness accordingly. Specifically, we first introduce a benchmark of Inductive Instructions (INDust), where the false knowledge is incorporated into instructions in multiple different styles. After extensive human and automatic evaluations, we uncovered a universal vulnerability among LLMs in processing inductive instructions.Additionally, we identified that different inductive styles affect the models{'} ability to identify the same underlying errors,and the complexity of the underlying assumptions also influences the model{'}s performance.Motivated by these results, we propose Dual-critique prompting to improve LLM robustness against inductive instructions.Our experiments demonstrate that Dual-critique prompting significantly bolsters the robustness of a diverse array of LLMs, even when confronted with varying degrees of inductive instruction complexity and differing inductive styles.",
}
| Numerous works have been proposed to align large language models (LLMs) with human intents to better fulfill instructions, ensuring they are truthful and helpful. Nevertheless, some human instructions are malicious or misleading, and following them will lead to untruthful and unsafe responses. Previous work rarely focused on understanding how LLMs manage instructions based on counterfactual premises, referred to here as inductive instructions, which may stem from users{'} false beliefs or malicious intents. In this paper, we aim to reveal the behaviors of LLMs towards inductive instructions and enhance their truthfulness and helpfulness accordingly. Specifically, we first introduce a benchmark of Inductive Instructions (INDust), where false knowledge is incorporated into instructions in multiple different styles. After extensive human and automatic evaluations, we uncovered a universal vulnerability among LLMs in processing inductive instructions. Additionally, we identified that different inductive styles affect the models{'} ability to identify the same underlying errors, and the complexity of the underlying assumptions also influences the model{'}s performance. Motivated by these results, we propose Dual-critique prompting to improve LLM robustness against inductive instructions. Our experiments demonstrate that Dual-critique prompting significantly bolsters the robustness of a diverse array of LLMs, even when confronted with varying degrees of inductive instruction complexity and differing inductive styles. | [
"Wang, Rui",
"Wang, Hongru",
"Mi, Fei",
"Xue, Boyang",
"Chen, Yi",
"Wong, Kam-Fai",
"Xu, Ruifeng"
] | Enhancing Large Language Models Against Inductive Instructions with Dual-critique Prompting | naacl-long.299 | Poster | 2305.13733 | [
"https://github.com/devoallen/indust"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
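The abstract above describes Dual-critique prompting as critiquing both the user's instruction (for counterfactual premises) and the model's own output. A minimal sketch of what such a loop could look like, inferred from that description; the `llm` callable and all prompt wording are hypothetical, not the paper's released prompts.

```python
# Illustrative dual-critique-style loop; prompts are assumptions inferred
# from the abstract, not the paper's exact implementation.
from typing import Callable

def dual_critique(llm: Callable[[str], str], instruction: str) -> str:
    # User-critique: inspect the instruction itself for false premises.
    premise_check = llm(
        f"Instruction: {instruction}\n"
        "Before answering, list the factual assumptions in this instruction "
        "and state whether each is true or false."
    )
    draft = llm(
        f"Instruction: {instruction}\n"
        f"Premise analysis: {premise_check}\n"
        "Answer the instruction, correcting any false premises identified."
    )
    # Self-critique: inspect the draft for untruthful or unsafe content.
    return llm(
        f"Instruction: {instruction}\nDraft answer: {draft}\n"
        "Critique this draft for untruthful or unsafe content and produce a "
        "final, corrected answer."
    )
```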
https://aclanthology.org/2024.naacl-long.300.bib | https://aclanthology.org/2024.naacl-long.300/ | @inproceedings{zaratiana-etal-2024-gliner,
title = "{GL}i{NER}: Generalist Model for Named Entity Recognition using Bidirectional Transformer",
author = "Zaratiana, Urchade and
Tomeh, Nadi and
Holat, Pierre and
Charnois, Thierry",
editor = "Duh, Kevin and
Gomez, Helena and
Bethard, Steven",
booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.naacl-long.300",
doi = "10.18653/v1/2024.naacl-long.300",
pages = "5364--5376",
abstract = "Named Entity Recognition (NER) is essential in various Natural Language Processing (NLP) applications. Traditional NER models are effective but limited to a set of predefined entity types. In contrast, Large Language Models (LLMs) can extract arbitrary entities through natural language instructions, offering greater flexibility. However, their size and cost, particularly for those accessed via APIs like ChatGPT, make them impractical in resource-limited scenarios. In this paper, we introduce a compact NER model trained to identify any type of entity. Leveraging a bidirectional transformer encoder, our model, GLiNER, facilitates parallel entity extraction, an advantage over the slow sequential token generation of LLMs. Through comprehensive testing, GLiNER demonstrate strong performance, outperforming both ChatGPT and fine-tuned LLMs in zero-shot evaluations on various NER benchmarks.",
}
| Named Entity Recognition (NER) is essential in various Natural Language Processing (NLP) applications. Traditional NER models are effective but limited to a set of predefined entity types. In contrast, Large Language Models (LLMs) can extract arbitrary entities through natural language instructions, offering greater flexibility. However, their size and cost, particularly for those accessed via APIs like ChatGPT, make them impractical in resource-limited scenarios. In this paper, we introduce a compact NER model trained to identify any type of entity. Leveraging a bidirectional transformer encoder, our model, GLiNER, facilitates parallel entity extraction, an advantage over the slow sequential token generation of LLMs. Through comprehensive testing, GLiNER demonstrates strong performance, outperforming both ChatGPT and fine-tuned LLMs in zero-shot evaluations on various NER benchmarks. | [
"Zaratiana, Urchade",
"Tomeh, Nadi",
"Holat, Pierre",
"Charnois, Thierry"
] | GLiNER: Generalist Model for Named Entity Recognition using Bidirectional Transformer | naacl-long.300 | Oral | 2311.08526 | [
"https://github.com/urchade/gliner"
] | https://huggingface.co/papers/2311.08526 | 1 | 9 | 0 | 4 | 1 | [
"urchade/gliner_multi",
"urchade/gliner_multi-v2.1",
"urchade/gliner_base",
"numind/NuNER_Zero",
"urchade/gliner_multi_pii-v1",
"urchade/gliner_large-v2",
"urchade/gliner_large-v2.1",
"numind/NuNER_Zero-4k",
"urchade/gliner_medium-v2.1",
"numind/NuNER_Zero-span",
"urchade/gliner_small-v1",
"gliner-community/gliner_large-v2.5",
"urchade/gliner_small-v2",
"taeminlee/gliner_ko",
"urchade/gliner_medium-v1",
"urchade/gliner_medium-v2",
"DeepMount00/GLiNER_ITA_LARGE",
"gliner-community/gliner_small-v2.5",
"urchade/gliner_large-v1",
"gliner-community/gliner_medium-v2.5",
"urchade/gliner_small-v2.1",
"medieval-data/gliner_multi-v2.1-medieval-latin",
"DeepMount00/GLiNER_ITA_BASE",
"DeepMount00/GLiNER_ITA_SMALL",
"ravi3647/test12345",
"anrol13/gliner_medium-v2.1",
"JoltCapital/gliner_large-v2.1",
"jilijeanlouis/NuNER_Zero",
"placingholocaust/gliner_small-v2.1-holocaust",
"menimeni123/helem-pii",
"ljvmiranda921/tl_gliner_small",
"ljvmiranda921/tl_gliner_medium",
"ljvmiranda921/tl_gliner_large"
] | [] | [
"tomaarsen/gliner_medium-v2.1",
"urchade/gliner_multiv2.1",
"DeepMount00/universal_ner_ita",
"numind/NuZero",
"urchade/gliner_multi_pii-v1",
"manu/gliner_multi",
"Tonic/gliner_base",
"MattStammers/pteredactyl_PII",
"KrishGoyani/GLiNER_Resume_Parser",
"woodmastr/Gliner_WienerW",
"vumichien/gliner-japanese-ner",
"Saripudin/zero-shot-demo",
"Datasaur/zero-shot-demo",
"placingholocaust/gliner_small-v2.1-holocaust",
"peter2000/gliner_multi",
"Shamik/extract_any_entity",
"avsolatorio/query-parser",
"nikhilsingh/email-parser",
"oliviercaron/GLiNER_file",
"zeimoto/voiceoperation",
"zeimoto/voicelead",
"IwanSumpena/iwan-space",
"bestofaiml/entity-extraction-1",
"MattStammers/Pteredactyl_PII_Backup",
"j-higgins/KeyIntentNER-T"
] |
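Several of the checkpoints listed above can be loaded with the `gliner` package from the paper's repository (https://github.com/urchade/gliner). A short usage sketch follows, assuming the package's `GLiNER.from_pretrained` / `predict_entities` interface; the label set and threshold are arbitrary illustrative choices. Labels are free-form strings, which is what makes the model a zero-shot "generalist" rather than being tied to a fixed tag set.

```python
# Hedged sketch of zero-shot NER with a GLiNER checkpoint from the row above.
from gliner import GLiNER

model = GLiNER.from_pretrained("urchade/gliner_multi-v2.1")

text = "Marie Curie received the Nobel Prize in Physics in 1903 in Stockholm."
labels = ["person", "award", "date", "location"]  # any label strings work

# predict_entities returns spans with text, label, offsets, and a score.
for entity in model.predict_entities(text, labels, threshold=0.5):
    print(entity["text"], "->", entity["label"])
```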