Datasets:

Column                       Dtype              Min     Max
bibtex_url                   string (length)    41      50
proceedings                  string (length)    38      47
bibtext                      string (length)    709     3.56k
abstract                     string (length)    17      2.11k
authors                      sequence (length)  1       72
title                        string (length)    12      207
id                           string (length)    7       16
type                         string (classes)   2 values
arxiv_id                     string (length)    0       10
GitHub                       sequence (length)  1       1
paper_page                   string (classes)   276 values
n_linked_authors             int64              -1      13
upvotes                      int64              -1      14
num_comments                 int64              -1      11
n_authors                    int64              -1      44
paper_page_exists_pre_conf   int64              0       1
Models                       sequence (length)  0       100
Datasets                     sequence (length)  0       14
Spaces                       sequence (length)  0       100
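Each row of the table above corresponds to one *SEM 2023 paper. As a hedged sketch of how such an export could be loaded and filtered with the `datasets` library, assuming a hypothetical local Parquet dump named `starsem2023.parquet` (no official dataset path is given in this listing):

```python
# Hedged sketch: load a hypothetical local export of this table and filter it.
# "starsem2023.parquet" is an assumed file name, not an official dataset path.
from datasets import load_dataset

ds = load_dataset("parquet", data_files="starsem2023.parquet", split="train")

# Papers that already had a Hugging Face paper page before the conference
with_page = ds.filter(lambda row: row["paper_page_exists_pre_conf"] == 1)

# Papers that link a GitHub repository (the GitHub column is a list of strings)
with_code = ds.filter(lambda row: any(url.strip() for url in row["GitHub"]))

print(len(ds), len(with_page), len(with_code))
print(with_page[0]["title"], with_page[0]["paper_page"])
```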
https://aclanthology.org/2023.starsem-1.2.bib
https://aclanthology.org/2023.starsem-1.2/
@inproceedings{bylinina-etal-2023-leverage, title = "Leverage Points in Modality Shifts: Comparing Language-only and Multimodal Word Representations", author = "Tikhonov, Alexey and Bylinina, Lisa and Paperno, Denis", editor = "Palmer, Alexis and Camacho-collados, Jose", booktitle = "Proceedings of the 12th Joint Conference on Lexical and Computational Semantics (*SEM 2023)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.starsem-1.2", doi = "10.18653/v1/2023.starsem-1.2", pages = "11--17", abstract = "Multimodal embeddings aim to enrich the semantic information in neural representations of language compared to text-only models. While different embeddings exhibit different applicability and performance on downstream tasks, little is known about the systematic representation differences attributed to the visual modality. Our paper compares word embeddings from three vision-and-language models (CLIP, OpenCLIP and Multilingual CLIP, Radford et al. 2021; Ilharco et al. 2021; Carlsson et al. 2022) and three text-only models, with static (FastText, Bojanowski et al. 2017) as well as contextual representations (multilingual BERT Devlin et al. 2018; XLM-RoBERTa, Conneau et al. 2019). This is the first large-scale study of the effect of visual grounding on language representations, including 46 semantic parameters. We identify meaning properties and relations that characterize words whose embeddings are most affected by the inclusion of visual modality in the training data; that is, points where visual grounding turns out most important. We find that the effect of visual modality correlates most with denotational semantic properties related to concreteness, but is also detected for several specific semantic classes, as well as for valence, a sentiment-related connotational property of linguistic expressions.", }
Multimodal embeddings aim to enrich the semantic information in neural representations of language compared to text-only models. While different embeddings exhibit different applicability and performance on downstream tasks, little is known about the systematic representation differences attributed to the visual modality. Our paper compares word embeddings from three vision-and-language models (CLIP, OpenCLIP and Multilingual CLIP, Radford et al. 2021; Ilharco et al. 2021; Carlsson et al. 2022) and three text-only models, with static (FastText, Bojanowski et al. 2017) as well as contextual representations (multilingual BERT Devlin et al. 2018; XLM-RoBERTa, Conneau et al. 2019). This is the first large-scale study of the effect of visual grounding on language representations, including 46 semantic parameters. We identify meaning properties and relations that characterize words whose embeddings are most affected by the inclusion of visual modality in the training data; that is, points where visual grounding turns out most important. We find that the effect of visual modality correlates most with denotational semantic properties related to concreteness, but is also detected for several specific semantic classes, as well as for valence, a sentiment-related connotational property of linguistic expressions.
[ "Tikhonov, Alexey", "Bylinina, Lisa", "Paperno, Denis" ]
Leverage Points in Modality Shifts: Comparing Language-only and Multimodal Word Representations
starsem-1.2
Poster
2306.02348
[ "https://github.com/altsoph/modality_shifts" ]
-1
-1
-1
-1
0
[]
[]
[]
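The Tikhonov et al. entry above compares word embeddings from vision-and-language models against text-only models. A hedged illustration of one way to obtain word-level vectors from a CLIP text encoder with `transformers`; this is an illustrative setup only, not the authors' pipeline, which also covers OpenCLIP, Multilingual CLIP, FastText, mBERT, and XLM-R:

```python
# Hedged sketch: word vectors from CLIP's text tower, compared by cosine similarity.
import torch
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32").eval()
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

words = ["dog", "cat", "justice", "banana"]
inputs = processor(text=words, return_tensors="pt", padding=True)
with torch.no_grad():
    feats = model.get_text_features(**inputs)           # [4, 512]
feats = torch.nn.functional.normalize(feats, dim=-1)

sim = feats @ feats.T                                    # pairwise cosine similarities
for i, w in enumerate(words):
    print(w, {v: round(sim[i, j].item(), 2) for j, v in enumerate(words)})
```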
https://aclanthology.org/2023.starsem-1.3.bib
https://aclanthology.org/2023.starsem-1.3/
@inproceedings{yoshida-etal-2023-revisiting, title = "Revisiting Syntax-Based Approach in Negation Scope Resolution", author = "Yoshida, Asahi and Kato, Yoshihide and Matsubara, Shigeki", editor = "Palmer, Alexis and Camacho-collados, Jose", booktitle = "Proceedings of the 12th Joint Conference on Lexical and Computational Semantics (*SEM 2023)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.starsem-1.3", doi = "10.18653/v1/2023.starsem-1.3", pages = "18--23", abstract = "Negation scope resolution is the process of detecting the negated part of a sentence. Unlike the syntax-based approach employed in previous research, state-of-the-art methods performed better without the explicit use of syntactic structure. This work revisits the syntax-based approach and re-evaluates the effectiveness of syntactic structure in negation scope resolution. We replace the parser utilized in the prior works with state-of-the-art parsers and modify the syntax-based heuristic rules. The experimental results demonstrate that the simple modifications enhance the performance of the prior syntax-based method to the same level as state-of-the-art end-to-end neural-based methods.", }
Negation scope resolution is the process of detecting the negated part of a sentence. Unlike the syntax-based approach employed in previous research, state-of-the-art methods performed better without the explicit use of syntactic structure. This work revisits the syntax-based approach and re-evaluates the effectiveness of syntactic structure in negation scope resolution. We replace the parser utilized in the prior works with state-of-the-art parsers and modify the syntax-based heuristic rules. The experimental results demonstrate that the simple modifications enhance the performance of the prior syntax-based method to the same level as state-of-the-art end-to-end neural-based methods.
[ "Yoshida, Asahi", "Kato, Yoshihide", "Matsubara, Shigeki" ]
Revisiting Syntax-Based Approach in Negation Scope Resolution
starsem-1.3
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.starsem-1.4.bib
https://aclanthology.org/2023.starsem-1.4/
@inproceedings{gubelmann-etal-2023-truth, title = "When Truth Matters - Addressing Pragmatic Categories in Natural Language Inference ({NLI}) by Large Language Models ({LLM}s)", author = "Gubelmann, Reto and Kalouli, Aikaterini-lida and Niklaus, Christina and Handschuh, Siegfried", editor = "Palmer, Alexis and Camacho-collados, Jose", booktitle = "Proceedings of the 12th Joint Conference on Lexical and Computational Semantics (*SEM 2023)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.starsem-1.4", doi = "10.18653/v1/2023.starsem-1.4", pages = "24--39", abstract = "In this paper, we focus on the ability of large language models (LLMs) to accommodate different pragmatic sentence types, such as questions, commands, as well as sentence fragments for natural language inference (NLI). On the commonly used notion of logical inference, nothing can be inferred from a question, an order, or an incomprehensible sentence fragment. We find MNLI, arguably the most important NLI dataset, and hence models fine-tuned on this dataset, insensitive to this fact. Using a symbolic semantic parser, we develop and make publicly available, fine-tuning datasets designed specifically to address this issue, with promising results. We also make a first exploration of ChatGPT{'}s concept of entailment.", }
In this paper, we focus on the ability of large language models (LLMs) to accommodate different pragmatic sentence types, such as questions and commands, as well as sentence fragments, for natural language inference (NLI). On the commonly used notion of logical inference, nothing can be inferred from a question, an order, or an incomprehensible sentence fragment. We find MNLI, arguably the most important NLI dataset, and hence models fine-tuned on this dataset, insensitive to this fact. Using a symbolic semantic parser, we develop, and make publicly available, fine-tuning datasets designed specifically to address this issue, with promising results. We also make a first exploration of ChatGPT's concept of entailment.
[ "Gubelmann, Reto", "Kalouli, Aikaterini-lida", "Niklaus, Christina", "H", "schuh, Siegfried" ]
When Truth Matters - Addressing Pragmatic Categories in Natural Language Inference (NLI) by Large Language Models (LLMs)
starsem-1.4
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.starsem-1.5.bib
https://aclanthology.org/2023.starsem-1.5/
@inproceedings{sekizawa-yanaka-2023-analyzing, title = "Analyzing Syntactic Generalization Capacity of Pre-trained Language Models on {J}apanese Honorific Conversion", author = "Sekizawa, Ryo and Yanaka, Hitomi", editor = "Palmer, Alexis and Camacho-collados, Jose", booktitle = "Proceedings of the 12th Joint Conference on Lexical and Computational Semantics (*SEM 2023)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.starsem-1.5", doi = "10.18653/v1/2023.starsem-1.5", pages = "40--47", abstract = "Using Japanese honorifics is challenging because it requires not only knowledge of the grammatical rules but also contextual information, such as social relationships. It remains unclear whether pre-trained large language models (LLMs) can flexibly handle Japanese honorifics like humans. To analyze this, we introduce an honorific conversion task that considers social relationships among people mentioned in a conversation. We construct a Japanese honorifics dataset from problem templates of various sentence structures to investigate the syntactic generalization capacity of GPT-3, one of the leading LLMs, on this task under two settings: fine-tuning and prompt learning. Our results showed that the fine-tuned GPT-3 performed better in a context-aware honorific conversion task than the prompt-based one. The fine-tuned model demonstrated overall syntactic generalizability towards compound honorific sentences, except when tested with the data involving direct speech.", }
Using Japanese honorifics is challenging because it requires not only knowledge of the grammatical rules but also contextual information, such as social relationships. It remains unclear whether pre-trained large language models (LLMs) can flexibly handle Japanese honorifics like humans. To analyze this, we introduce an honorific conversion task that considers social relationships among people mentioned in a conversation. We construct a Japanese honorifics dataset from problem templates of various sentence structures to investigate the syntactic generalization capacity of GPT-3, one of the leading LLMs, on this task under two settings: fine-tuning and prompt learning. Our results showed that the fine-tuned GPT-3 performed better in a context-aware honorific conversion task than the prompt-based one. The fine-tuned model demonstrated overall syntactic generalizability towards compound honorific sentences, except when tested with the data involving direct speech.
[ "Sekizawa, Ryo", "Yanaka, Hitomi" ]
Analyzing Syntactic Generalization Capacity of Pre-trained Language Models on Japanese Honorific Conversion
starsem-1.5
Poster
2306.03055
[ "" ]
https://huggingface.co/papers/2306.03055
0
0
0
2
1
[]
[]
[]
https://aclanthology.org/2023.starsem-1.6.bib
https://aclanthology.org/2023.starsem-1.6/
@inproceedings{zhang-bethard-2023-improving, title = "Improving Toponym Resolution with Better Candidate Generation, Transformer-based Reranking, and Two-Stage Resolution", author = "Zhang, Zeyu and Bethard, Steven", editor = "Palmer, Alexis and Camacho-collados, Jose", booktitle = "Proceedings of the 12th Joint Conference on Lexical and Computational Semantics (*SEM 2023)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.starsem-1.6", doi = "10.18653/v1/2023.starsem-1.6", pages = "48--60", abstract = "Geocoding is the task of converting location mentions in text into structured data that encodes the geospatial semantics. We propose a new architecture for geocoding, GeoNorm. GeoNorm first uses information retrieval techniques to generate a list of candidate entries from the geospatial ontology. Then it reranks the candidate entries using a transformer-based neural network that incorporates information from the ontology such as the entry{'}s population. This generate-and-rerank process is applied twice: first to resolve the less ambiguous countries, states, and counties, and second to resolve the remaining location mentions, using the identified countries, states, and counties as context. Our proposed toponym resolution framework achieves state-of-the-art performance on multiple datasets. Code and models are available at {\textbackslash}url{https://github.com/clulab/geonorm}.", }
Geocoding is the task of converting location mentions in text into structured data that encodes the geospatial semantics. We propose a new architecture for geocoding, GeoNorm. GeoNorm first uses information retrieval techniques to generate a list of candidate entries from the geospatial ontology. Then it reranks the candidate entries using a transformer-based neural network that incorporates information from the ontology such as the entry's population. This generate-and-rerank process is applied twice: first to resolve the less ambiguous countries, states, and counties, and second to resolve the remaining location mentions, using the identified countries, states, and counties as context. Our proposed toponym resolution framework achieves state-of-the-art performance on multiple datasets. Code and models are available at https://github.com/clulab/geonorm.
[ "Zhang, Zeyu", "Bethard, Steven" ]
Improving Toponym Resolution with Better Candidate Generation, Transformer-based Reranking, and Two-Stage Resolution
starsem-1.6
Poster
2305.11315
[ "https://github.com/clulab/geonorm" ]
-1
-1
-1
-1
0
[]
[]
[]
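The Zhang and Bethard abstract describes a two-stage generate-and-rerank geocoder. Below is a schematic, hedged sketch of that control flow only; `search_ontology` and `rerank` are hypothetical stand-ins, and the real implementation (a transformer reranker using ontology features such as population) lives at https://github.com/clulab/geonorm:

```python
# Hedged sketch of a generate-and-rerank, two-stage toponym resolver.
# The ontology entries and scoring below are toy stand-ins, not GeoNorm itself.
COARSE_LEVELS = {"country", "state", "county"}

def search_ontology(mention, ontology):
    """Candidate generation: cheap lexical lookup over ontology entries."""
    return [e for e in ontology if mention.lower() in e["name"].lower()]

def rerank(candidates, context=None):
    """Reranking stand-in: prefer larger populations (GeoNorm instead uses a
    transformer reranker that also consumes ontology features and context)."""
    return sorted(candidates, key=lambda e: e.get("population", 0), reverse=True)

def resolve(mentions, ontology):
    resolved = {}
    # Stage 1: resolve the less ambiguous coarse toponyms first.
    for m in mentions:
        cands = [c for c in search_ontology(m, ontology) if c["level"] in COARSE_LEVELS]
        if cands:
            resolved[m] = rerank(cands)[0]
    # Stage 2: resolve the rest, conditioning on the coarse results as context.
    for m in mentions:
        if m not in resolved:
            cands = search_ontology(m, ontology)
            if cands:
                resolved[m] = rerank(cands, context=resolved)[0]
    return resolved

ontology = [
    {"name": "Canada", "level": "country", "population": 38_000_000},
    {"name": "Toronto", "level": "city", "population": 2_800_000},
    {"name": "Toronto, Ohio", "level": "city", "population": 5_000},
]
print(resolve(["Canada", "Toronto"], ontology))
```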
https://aclanthology.org/2023.starsem-1.7.bib
https://aclanthology.org/2023.starsem-1.7/
@inproceedings{bhattacharyya-etal-2023-crapes, title = "{CRAPES}:Cross-modal Annotation Projection for Visual Semantic Role Labeling", author = "Bhattacharyya, Abhidip and Palmer, Martha and Heckman, Christoffer", editor = "Palmer, Alexis and Camacho-collados, Jose", booktitle = "Proceedings of the 12th Joint Conference on Lexical and Computational Semantics (*SEM 2023)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.starsem-1.7", doi = "10.18653/v1/2023.starsem-1.7", pages = "61--70", abstract = "Automatic image comprehension is an important yet challenging task that includes identifying actions in an image and corresponding action participants. Most current approaches to this task, now termed Grounded Situation Recognition (GSR), start by predicting a verb that describes the action and then predict the nouns that can participate in the action as arguments to the verb. This problem formulation limits each image to a single action even though several actions could be depicted. In contrast, text-based Semantic Role Labeling (SRL) aims to label all actions in a sentence, typically resulting in at least two or three predicate argument structures per sentence. We hypothesize that expanding GSR to follow the more liberal SRL text-based approach to action and participant identification could improve image comprehension results. To test this hypothesis and to preserve generalization capabilities, we use general-purpose vision and language components as a front-end. This paper presents our results, a substantial 28.6 point jump in performance on the SWiG dataset, which confirm our hypothesis. We also discuss the benefits of loosely coupled broad-coverage off-the-shelf components which generalized well to out of domain images, and can decrease the need for manual image semantic role annotation.", }
Automatic image comprehension is an important yet challenging task that includes identifying actions in an image and corresponding action participants. Most current approaches to this task, now termed Grounded Situation Recognition (GSR), start by predicting a verb that describes the action and then predict the nouns that can participate in the action as arguments to the verb. This problem formulation limits each image to a single action even though several actions could be depicted. In contrast, text-based Semantic Role Labeling (SRL) aims to label all actions in a sentence, typically resulting in at least two or three predicate argument structures per sentence. We hypothesize that expanding GSR to follow the more liberal SRL text-based approach to action and participant identification could improve image comprehension results. To test this hypothesis and to preserve generalization capabilities, we use general-purpose vision and language components as a front-end. This paper presents our results, a substantial 28.6 point jump in performance on the SWiG dataset, which confirm our hypothesis. We also discuss the benefits of loosely coupled broad-coverage off-the-shelf components which generalized well to out of domain images, and can decrease the need for manual image semantic role annotation.
[ "Bhattacharyya, Abhidip", "Palmer, Martha", "Heckman, Christoffer" ]
CRAPES: Cross-modal Annotation Projection for Visual Semantic Role Labeling
starsem-1.7
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.starsem-1.8.bib
https://aclanthology.org/2023.starsem-1.8/
@inproceedings{albanyan-etal-2023-counterhate, title = "Not All Counterhate Tweets Elicit the Same Replies: A Fine-Grained Analysis", author = "Albanyan, Abdullah and Hassan, Ahmed and Blanco, Eduardo", editor = "Palmer, Alexis and Camacho-collados, Jose", booktitle = "Proceedings of the 12th Joint Conference on Lexical and Computational Semantics (*SEM 2023)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.starsem-1.8", doi = "10.18653/v1/2023.starsem-1.8", pages = "71--88", abstract = "Counterhate arguments can effectively fight and limit the spread of hate speech. However, they can also exacerbate the hate, as some people may respond with aggression if they feel threatened or targeted by the counterhate. In this paper, we investigate replies to counterhate arguments beyond whether the reply agrees or disagrees with the counterhate argument. We present a corpus with 2,621 replies to counterhate arguments countering hateful tweets, and annotate them with fine-grained characteristics. We show that (a) half of the replies (51{\%}) to the counterhate arguments disagree with the argument, and (b) this kind of reply often supports the hateful tweet (40{\%}). We also analyze the language of counterhate arguments that elicit certain types of replies. Experimental results show that it is feasible to anticipate the kind of replies a counterhate argument will elicit.", }
Counterhate arguments can effectively fight and limit the spread of hate speech. However, they can also exacerbate the hate, as some people may respond with aggression if they feel threatened or targeted by the counterhate. In this paper, we investigate replies to counterhate arguments beyond whether the reply agrees or disagrees with the counterhate argument. We present a corpus with 2,621 replies to counterhate arguments countering hateful tweets, and annotate them with fine-grained characteristics. We show that (a) half of the replies (51%) to the counterhate arguments disagree with the argument, and (b) this kind of reply often supports the hateful tweet (40%). We also analyze the language of counterhate arguments that elicit certain types of replies. Experimental results show that it is feasible to anticipate the kind of replies a counterhate argument will elicit.
[ "Albanyan, Abdullah", "Hassan, Ahmed", "Blanco, Eduardo" ]
Not All Counterhate Tweets Elicit the Same Replies: A Fine-Grained Analysis
starsem-1.8
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.starsem-1.9.bib
https://aclanthology.org/2023.starsem-1.9/
@inproceedings{fan-etal-2023-evaluating, title = "Evaluating Factual Consistency of Texts with Semantic Role Labeling", author = "Fan, Jing and Aumiller, Dennis and Gertz, Michael", editor = "Palmer, Alexis and Camacho-collados, Jose", booktitle = "Proceedings of the 12th Joint Conference on Lexical and Computational Semantics (*SEM 2023)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.starsem-1.9", doi = "10.18653/v1/2023.starsem-1.9", pages = "89--100", abstract = "Automated evaluation of text generation systems has recently seen increasing attention, particularly checking whether generated text stays truthful to input sources. Existing methods frequently rely on an evaluation using task-specific language models, which in turn allows for little interpretability of generated scores. We introduce SRLScore, a reference-free evaluation metric designed with text summarization in mind. Our approach generates fact tuples constructed from Semantic Role Labels, applied to both input and summary texts.A final factuality score is computed by an adjustable scoring mechanism, which allows for easy adaption of the method across domains. Correlation with human judgments on English summarization datasets shows that SRLScore is competitive with state-of-the-art methods and exhibits stable generalization across datasets without requiring further training or hyperparameter tuning. We experiment with an optional co-reference resolution step, but find that the performance boost is mostly outweighed by the additional compute required. Our metric is available online at: \url{https://github.com/heyjing/SRLScore}", }
Automated evaluation of text generation systems has recently seen increasing attention, particularly checking whether generated text stays truthful to input sources. Existing methods frequently rely on an evaluation using task-specific language models, which in turn allows for little interpretability of generated scores. We introduce SRLScore, a reference-free evaluation metric designed with text summarization in mind. Our approach generates fact tuples constructed from Semantic Role Labels, applied to both input and summary texts. A final factuality score is computed by an adjustable scoring mechanism, which allows for easy adaptation of the method across domains. Correlation with human judgments on English summarization datasets shows that SRLScore is competitive with state-of-the-art methods and exhibits stable generalization across datasets without requiring further training or hyperparameter tuning. We experiment with an optional co-reference resolution step, but find that the performance boost is mostly outweighed by the additional compute required. Our metric is available online at: https://github.com/heyjing/SRLScore
[ "Fan, Jing", "Aumiller, Dennis", "Gertz, Michael" ]
Evaluating Factual Consistency of Texts with Semantic Role Labeling
starsem-1.9
Poster
2305.13309
[ "https://github.com/heyjing/srlscore" ]
-1
-1
-1
-1
0
[]
[]
[]
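As a toy illustration of the tuple-overlap idea in the Fan et al. abstract, here is a hedged sketch of a factuality score over pre-extracted (agent, predicate, patient) tuples; the authors' actual extraction, comparison, and weighting are in https://github.com/heyjing/SRLScore:

```python
# Hedged sketch: fraction of summary fact tuples supported by the source.
# Tuples here are hand-written; SRLScore derives them from semantic role labels.
def factuality_score(source_tuples, summary_tuples):
    if not summary_tuples:
        return 1.0                      # nothing asserted, nothing to contradict
    source = set(source_tuples)
    return sum(t in source for t in summary_tuples) / len(summary_tuples)

source_tuples = [("the company", "acquired", "the startup"),
                 ("the startup", "makes", "batteries")]
summary_tuples = [("the company", "acquired", "the startup"),
                  ("the company", "makes", "batteries")]  # agent swapped: unsupported
print(factuality_score(source_tuples, summary_tuples))     # 0.5
```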
https://aclanthology.org/2023.starsem-1.10.bib
https://aclanthology.org/2023.starsem-1.10/
@inproceedings{truong-etal-2023-language, title = "Language models are not naysayers: an analysis of language models on negation benchmarks", author = "Truong, Thinh Hung and Baldwin, Timothy and Verspoor, Karin and Cohn, Trevor", editor = "Palmer, Alexis and Camacho-collados, Jose", booktitle = "Proceedings of the 12th Joint Conference on Lexical and Computational Semantics (*SEM 2023)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.starsem-1.10", doi = "10.18653/v1/2023.starsem-1.10", pages = "101--114", abstract = "Negation has been shown to be a major bottleneck for masked language models, such as BERT. However, whether this finding still holds for larger-sized auto-regressive language models ({``}LLMs{''}) has not been studied comprehensively. With the ever-increasing volume of research and applications of LLMs, we take a step back to evaluate the ability of current-generation LLMs to handle negation, a fundamental linguistic phenomenon that is central to language understanding. We evaluate different LLMs - including the open-source GPT-neo, GPT-3, and InstructGPT - against a wide range of negation benchmarks. Through systematic experimentation with varying model sizes and prompts, we show that LLMs have several limitations including insensitivity to the presence of negation, an inability to capture the lexical semantics of negation, and a failure to reason under negation.", }
Negation has been shown to be a major bottleneck for masked language models, such as BERT. However, whether this finding still holds for larger-sized auto-regressive language models (“LLMs”) has not been studied comprehensively. With the ever-increasing volume of research and applications of LLMs, we take a step back to evaluate the ability of current-generation LLMs to handle negation, a fundamental linguistic phenomenon that is central to language understanding. We evaluate different LLMs - including the open-source GPT-neo, GPT-3, and InstructGPT - against a wide range of negation benchmarks. Through systematic experimentation with varying model sizes and prompts, we show that LLMs have several limitations including insensitivity to the presence of negation, an inability to capture the lexical semantics of negation, and a failure to reason under negation.
[ "Truong, Thinh Hung", "Baldwin, Timothy", "Verspoor, Karin", "Cohn, Trevor" ]
Language models are not naysayers: an analysis of language models on negation benchmarks
starsem-1.10
Poster
2306.08189
[ "https://github.com/joey234/llm-neg-bench" ]
-1
-1
-1
-1
0
[]
[]
[]
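The Truong et al. abstract notes that negation is a known bottleneck for masked LMs before turning to autoregressive LLMs. A hedged toy probe of that classic masked-LM effect with `transformers`; this is illustrative only and is not one of the paper's benchmarks:

```python
# Hedged toy probe: does adding "not" change a masked LM's top predictions?
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")
for text in ["A robin is a [MASK].", "A robin is not a [MASK]."]:
    top = fill(text, top_k=3)
    print(text, [(r["token_str"], round(r["score"], 3)) for r in top])
```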
https://aclanthology.org/2023.starsem-1.11.bib
https://aclanthology.org/2023.starsem-1.11/
@inproceedings{you-etal-2023-jseegraph, title = "{JSEEG}raph: Joint Structured Event Extraction as Graph Parsing", author = "You, Huiling and Vrelid, Lilja and Touileb, Samia", editor = "Palmer, Alexis and Camacho-collados, Jose", booktitle = "Proceedings of the 12th Joint Conference on Lexical and Computational Semantics (*SEM 2023)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.starsem-1.11", doi = "10.18653/v1/2023.starsem-1.11", pages = "115--127", abstract = "We propose a graph-based event extraction framework JSEEGraph that approaches the task of event extraction as general graph parsing in the tradition of Meaning Representation Parsing. It explicitly encodes entities and events in a single semantic graph, and further has the flexibility to encode a wider range of additional IE relations and jointly infer individual tasks. JSEEGraph performs in an end-to-end manner via general graph parsing: (1) instead of flat sequence labelling, nested structures between entities/triggers are efficiently encoded as separate nodes in the graph, allowing for nested and overlapping entities and triggers; (2) both entities, relations, and events can be encoded in the same graph, where entities and event triggers are represented as nodes and entity relations and event arguments are constructed via edges; (3) joint inference avoids error propagation and enhances the interpolation of different IE tasks. We experiment on two benchmark datasets of varying structural complexities; ACE05 and Rich ERE, covering three languages: English, Chinese, and Spanish. Experimental results show that JSEEGraph can handle nested event structures, that it is beneficial to solve different IE tasks jointly, and that event argument extraction in particular benefits from entity extraction. Our code and models are released as open-source.", }
We propose a graph-based event extraction framework JSEEGraph that approaches the task of event extraction as general graph parsing in the tradition of Meaning Representation Parsing. It explicitly encodes entities and events in a single semantic graph, and further has the flexibility to encode a wider range of additional IE relations and jointly infer individual tasks. JSEEGraph performs in an end-to-end manner via general graph parsing: (1) instead of flat sequence labelling, nested structures between entities/triggers are efficiently encoded as separate nodes in the graph, allowing for nested and overlapping entities and triggers; (2) both entities, relations, and events can be encoded in the same graph, where entities and event triggers are represented as nodes and entity relations and event arguments are constructed via edges; (3) joint inference avoids error propagation and enhances the interpolation of different IE tasks. We experiment on two benchmark datasets of varying structural complexities; ACE05 and Rich ERE, covering three languages: English, Chinese, and Spanish. Experimental results show that JSEEGraph can handle nested event structures, that it is beneficial to solve different IE tasks jointly, and that event argument extraction in particular benefits from entity extraction. Our code and models are released as open-source.
[ "You, Huiling", "Vrelid, Lilja", "Touileb, Samia" ]
JSEEGraph: Joint Structured Event Extraction as Graph Parsing
starsem-1.11
Poster
2306.14633
[ "https://github.com/huiling-y/jseegraph" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.starsem-1.12.bib
https://aclanthology.org/2023.starsem-1.12/
@inproceedings{wang-etal-2023-generative, title = "Generative Data Augmentation for Aspect Sentiment Quad Prediction", author = "Wang, An and Jiang, Junfeng and Ma, Youmi and Liu, Ao and Okazaki, Naoaki", editor = "Palmer, Alexis and Camacho-collados, Jose", booktitle = "Proceedings of the 12th Joint Conference on Lexical and Computational Semantics (*SEM 2023)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.starsem-1.12", doi = "10.18653/v1/2023.starsem-1.12", pages = "128--140", abstract = "Aspect sentiment quad prediction (ASQP) analyzes the aspect terms, opinion terms, sentiment polarity, and aspect categories in a text. One challenge in this task is the scarcity of data owing to the high annotation cost. Data augmentation techniques are commonly used to address this issue. However, existing approaches simply rewrite texts in the training data, restricting the semantic diversity of the generated data and impairing the quality due to the inconsistency between text and quads. To address these limitations, we augment quads and train a quads-to-text model to generate corresponding texts. Furthermore, we designed novel strategies to filter out low-quality data and balance the sample difficulty distribution of the augmented dataset. Empirical studies on two ASQP datasets demonstrate that our method outperforms other data augmentation methods and achieves state-of-the-art performance on the benchmarks. The source code will be released upon acceptance.", }
Aspect sentiment quad prediction (ASQP) analyzes the aspect terms, opinion terms, sentiment polarity, and aspect categories in a text. One challenge in this task is the scarcity of data owing to the high annotation cost. Data augmentation techniques are commonly used to address this issue. However, existing approaches simply rewrite texts in the training data, restricting the semantic diversity of the generated data and impairing the quality due to the inconsistency between text and quads. To address these limitations, we augment quads and train a quads-to-text model to generate corresponding texts. Furthermore, we designed novel strategies to filter out low-quality data and balance the sample difficulty distribution of the augmented dataset. Empirical studies on two ASQP datasets demonstrate that our method outperforms other data augmentation methods and achieves state-of-the-art performance on the benchmarks. The source code will be released upon acceptance.
[ "Wang, An", "Jiang, Junfeng", "Ma, Youmi", "Liu, Ao", "Okazaki, Naoaki" ]
Generative Data Augmentation for Aspect Sentiment Quad Prediction
starsem-1.12
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.starsem-1.13.bib
https://aclanthology.org/2023.starsem-1.13/
@inproceedings{cong-etal-2023-language, title = "Are Language Models Sensitive to Semantic Attraction? A Study on Surprisal", author = "Cong, Yan and Chersoni, Emmanuele and Hsu, Yu-yin and Lenci, Alessandro", editor = "Palmer, Alexis and Camacho-collados, Jose", booktitle = "Proceedings of the 12th Joint Conference on Lexical and Computational Semantics (*SEM 2023)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.starsem-1.13", doi = "10.18653/v1/2023.starsem-1.13", pages = "141--148", abstract = "In psycholinguistics, semantic attraction is a sentence processing phenomenon in which a given argument violates the selectional requirements of a verb, but this violation is not perceived by comprehenders due to its attraction to another noun in the same sentence, which is syntactically unrelated but semantically sound. In our study, we use autoregressive language models to compute the sentence-level and the target phrase-level Surprisal scores of a psycholinguistic dataset on semantic attraction. Our results show that the models are sensitive to semantic attraction, leading to reduced Surprisal scores, although none of them perfectly matches the human behavioral pattern.", }
In psycholinguistics, semantic attraction is a sentence processing phenomenon in which a given argument violates the selectional requirements of a verb, but this violation is not perceived by comprehenders due to its attraction to another noun in the same sentence, which is syntactically unrelated but semantically sound. In our study, we use autoregressive language models to compute the sentence-level and the target phrase-level Surprisal scores of a psycholinguistic dataset on semantic attraction. Our results show that the models are sensitive to semantic attraction, leading to reduced Surprisal scores, although none of them perfectly matches the human behavioral pattern.
[ "Cong, Yan", "Chersoni, Emmanuele", "Hsu, Yu-yin", "Lenci, Aless", "ro" ]
Are Language Models Sensitive to Semantic Attraction? A Study on Surprisal
starsem-1.13
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
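The Cong et al. abstract relies on Surprisal scores computed with autoregressive language models. A hedged sketch of per-token surprisal (in bits) under GPT-2, as one concrete way to compute such scores; the paper's exact models and phrase-level aggregation may differ:

```python
# Hedged sketch: per-token surprisal, -log2 p(token | prefix), under GPT-2.
import math
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tok = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def surprisal(sentence):
    ids = tok(sentence, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits
    logprobs = logits[:, :-1].log_softmax(dim=-1)   # predict token t+1 from its prefix
    targets = ids[:, 1:]
    nats = -logprobs.gather(-1, targets.unsqueeze(-1)).squeeze(-1)
    bits = (nats / math.log(2)).squeeze(0)
    return list(zip(tok.convert_ids_to_tokens(targets[0].tolist()), bits.tolist()))

for token, s in surprisal("The journalist interviewed the guitar."):
    print(f"{token:>12s}  {s:6.2f}")
```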
https://aclanthology.org/2023.starsem-1.14.bib
https://aclanthology.org/2023.starsem-1.14/
@inproceedings{tjuatja-etal-2023-syntax, title = "Syntax and Semantics Meet in the {``}Middle{''}: Probing the Syntax-Semantics Interface of {LM}s Through Agentivity", author = "Tjuatja, Lindia and Liu, Emmy and Levin, Lori and Neubig, Graham", editor = "Palmer, Alexis and Camacho-collados, Jose", booktitle = "Proceedings of the 12th Joint Conference on Lexical and Computational Semantics (*SEM 2023)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.starsem-1.14", doi = "10.18653/v1/2023.starsem-1.14", pages = "149--164", abstract = "Recent advances in large language models have prompted researchers to examine their abilities across a variety of linguistic tasks, but little has been done to investigate how models handle the interactions in meaning across words and larger syntactic forms{---}i.e. phenomena at the intersection of syntax and semantics. We present the semantic notion of agentivity as a case study for probing such interactions. We created a novel evaluation dataset by utilitizing the unique linguistic properties of a subset of optionally transitive English verbs. This dataset was used to prompt varying sizes of three model classes to see if they are sensitive to agentivity at the lexical level, and if they can appropriately employ these word-level priors given a specific syntactic context. Overall, GPT-3 text-davinci-003 performs extremely well across all experiments, outperforming all other models tested by far. In fact, the results are even better correlated with human judgements than both syntactic and semantic corpus statistics. This suggests that LMs may potentially serve as more useful tools for linguistic annotation, theory testing, and discovery than select corpora for certain tasks.", }
Recent advances in large language models have prompted researchers to examine their abilities across a variety of linguistic tasks, but little has been done to investigate how models handle the interactions in meaning across words and larger syntactic forms, i.e., phenomena at the intersection of syntax and semantics. We present the semantic notion of agentivity as a case study for probing such interactions. We created a novel evaluation dataset by utilizing the unique linguistic properties of a subset of optionally transitive English verbs. This dataset was used to prompt varying sizes of three model classes to see if they are sensitive to agentivity at the lexical level, and if they can appropriately employ these word-level priors given a specific syntactic context. Overall, GPT-3 text-davinci-003 performs extremely well across all experiments, outperforming all other models tested by far. In fact, the results are even better correlated with human judgements than both syntactic and semantic corpus statistics. This suggests that LMs may potentially serve as more useful tools for linguistic annotation, theory testing, and discovery than select corpora for certain tasks.
[ "Tjuatja, Lindia", "Liu, Emmy", "Levin, Lori", "Neubig, Graham" ]
Syntax and Semantics Meet in the “Middle”: Probing the Syntax-Semantics Interface of LMs Through Agentivity
starsem-1.14
Poster
[ "https://github.com/lindiatjuatja/lm_sem" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.starsem-1.15.bib
https://aclanthology.org/2023.starsem-1.15/
@inproceedings{li-etal-2023-pretrained, title = "Can Pretrained Language Models Derive Correct Semantics from Corrupt Subwords under Noise?", author = "Li, Xinzhe and Liu, Ming and Gao, Shang", editor = "Palmer, Alexis and Camacho-collados, Jose", booktitle = "Proceedings of the 12th Joint Conference on Lexical and Computational Semantics (*SEM 2023)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.starsem-1.15", doi = "10.18653/v1/2023.starsem-1.15", pages = "165--173", abstract = "For Pretrained Language Models (PLMs), their susceptibility to noise has recently been linked to subword segmentation. However, it is unclear which aspects of segmentation affect their understanding. This study assesses the robustness of PLMs against various disrupted segmentation caused by noise. An evaluation framework for subword segmentation, named Contrastive Lexical Semantic (CoLeS) probe, is proposed. It provides a systematic categorization of segmentation corruption under noise and evaluation protocols by generating contrastive datasets with canonical-noisy word pairs. Experimental results indicate that PLMs are unable to accurately compute word meanings if the noise introduces completely different subwords, small subword fragments, or a large number of additional subwords, particularly when they are inserted within other subwords.", }
For Pretrained Language Models (PLMs), their susceptibility to noise has recently been linked to subword segmentation. However, it is unclear which aspects of segmentation affect their understanding. This study assesses the robustness of PLMs against various disrupted segmentation caused by noise. An evaluation framework for subword segmentation, named Contrastive Lexical Semantic (CoLeS) probe, is proposed. It provides a systematic categorization of segmentation corruption under noise and evaluation protocols by generating contrastive datasets with canonical-noisy word pairs. Experimental results indicate that PLMs are unable to accurately compute word meanings if the noise introduces completely different subwords, small subword fragments, or a large number of additional subwords, particularly when they are inserted within other subwords.
[ "Li, Xinzhe", "Liu, Ming", "Gao, Shang" ]
Can Pretrained Language Models Derive Correct Semantics from Corrupt Subwords under Noise?
starsem-1.15
Poster
2306.15268
[ "https://github.com/xinzhel/word_corruption" ]
-1
-1
-1
-1
0
[]
[]
[]
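The Li et al. abstract concerns how character noise disrupts subword segmentation. A small, hedged illustration of the underlying phenomenon with a BERT tokenizer (this is not the CoLeS probe itself, just the effect it categorizes):

```python
# Hedged illustration: a single typo can shatter a word into unrelated subwords.
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
for word in ["language", "lnaguage", "langu4ge", "languagee"]:
    print(f"{word:>10s} -> {tok.tokenize(word)}")
```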
https://aclanthology.org/2023.starsem-1.16.bib
https://aclanthology.org/2023.starsem-1.16/
@inproceedings{tian-etal-2023-idioms, title = "How Are Idioms Processed Inside Transformer Language Models?", author = "Tian, Ye and James, Isobel and Son, Hye", editor = "Palmer, Alexis and Camacho-collados, Jose", booktitle = "Proceedings of the 12th Joint Conference on Lexical and Computational Semantics (*SEM 2023)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.starsem-1.16", doi = "10.18653/v1/2023.starsem-1.16", pages = "174--179", abstract = "Idioms such as {``}call it a day{''} and {``}piece of cake,{''} are prevalent in natural language. How do Transformer language models process idioms? This study examines this question by analysing three models - BERT, Multilingual BERT, and DistilBERT. We compare the embeddings of idiomatic and literal expressions across all layers of the networks at both the sentence and word levels. Additionally, we investigate the attention directed from other sentence tokens towards a word within an idiom as opposed to in a literal context. Results indicate that while the three models exhibit slightly different internal mechanisms, they all represent idioms distinctively compared to literal language, with attention playing a critical role. These findings suggest that idioms are semantically and syntactically idiosyncratic, not only for humans but also for language models.", }
Idioms such as “call it a day” and “piece of cake” are prevalent in natural language. How do Transformer language models process idioms? This study examines this question by analysing three models - BERT, Multilingual BERT, and DistilBERT. We compare the embeddings of idiomatic and literal expressions across all layers of the networks at both the sentence and word levels. Additionally, we investigate the attention directed from other sentence tokens towards a word within an idiom as opposed to in a literal context. Results indicate that while the three models exhibit slightly different internal mechanisms, they all represent idioms distinctively compared to literal language, with attention playing a critical role. These findings suggest that idioms are semantically and syntactically idiosyncratic, not only for humans but also for language models.
[ "Tian, Ye", "James, Isobel", "Son, Hye" ]
How Are Idioms Processed Inside Transformer Language Models?
starsem-1.16
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
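Following the Tian et al. abstract's comparison of idiomatic and literal uses across layers, here is a hedged sketch of a layer-wise, mean-pooled sentence comparison with BERT; the authors' analysis also works at the word level and inspects attention, so this is only an illustrative reduction:

```python
# Hedged sketch: layer-wise cosine similarity between an idiomatic and a literal
# use of "piece of cake", using mean-pooled hidden states from BERT.
import torch
from transformers import AutoModel, AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased", output_hidden_states=True).eval()

def layerwise_states(text):
    enc = tok(text, return_tensors="pt")
    with torch.no_grad():
        out = model(**enc)
    # hidden_states: embeddings + 12 layers, each [1, seq_len, 768]; mean-pool tokens
    return torch.stack([h.mean(dim=1).squeeze(0) for h in out.hidden_states])

idiomatic = layerwise_states("That last exam was a piece of cake.")
literal = layerwise_states("She ate a piece of cake after dinner.")
cos = torch.nn.functional.cosine_similarity(idiomatic, literal, dim=-1)
for layer, value in enumerate(cos.tolist()):
    print(f"layer {layer:2d}: {value:.3f}")
```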
https://aclanthology.org/2023.starsem-1.17.bib
https://aclanthology.org/2023.starsem-1.17/
@inproceedings{calo-etal-2023-shortest, title = "Is Shortest Always Best? The Role of Brevity in Logic-to-Text Generation", author = "Cal{\`o}, Eduardo and Levy, Jordi and Gatt, Albert and Van Deemter, Kees", editor = "Palmer, Alexis and Camacho-collados, Jose", booktitle = "Proceedings of the 12th Joint Conference on Lexical and Computational Semantics (*SEM 2023)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.starsem-1.17", doi = "10.18653/v1/2023.starsem-1.17", pages = "180--192", abstract = "Some applications of artificial intelligence make it desirable that logical formulae be converted computationally to comprehensible natural language sentences. As there are many logical equivalents to a given formula, finding the most suitable equivalent to be used as input for such a {``}logic-to-text{''} generation system is a difficult challenge. In this paper, we focus on the role of brevity: Are the shortest formulae the most suitable? We focus on propositional logic (PL), framing formula minimization (i.e., the problem of finding the shortest equivalent of a given formula) as a Quantified Boolean Formulae (QBFs) satisfiability problem. We experiment with several generators and selection strategies to prune the resulting candidates. We conduct exhaustive automatic and human evaluations of the comprehensibility and fluency of the generated texts. The results suggest that while, in many cases, minimization has a positive impact on the quality of the sentences generated, formula minimization may ultimately not be the best strategy.", }
Some applications of artificial intelligence make it desirable that logical formulae be converted computationally to comprehensible natural language sentences. As there are many logical equivalents to a given formula, finding the most suitable equivalent to be used as input for such a “logic-to-text” generation system is a difficult challenge. In this paper, we focus on the role of brevity: Are the shortest formulae the most suitable? We focus on propositional logic (PL), framing formula minimization (i.e., the problem of finding the shortest equivalent of a given formula) as a Quantified Boolean Formulae (QBFs) satisfiability problem. We experiment with several generators and selection strategies to prune the resulting candidates. We conduct exhaustive automatic and human evaluations of the comprehensibility and fluency of the generated texts. The results suggest that while, in many cases, minimization has a positive impact on the quality of the sentences generated, formula minimization may ultimately not be the best strategy.
[ "Cal{\\`o}, Eduardo", "Levy, Jordi", "Gatt, Albert", "Van Deemter, Kees" ]
Is Shortest Always Best? The Role of Brevity in Logic-to-Text Generation
starsem-1.17
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.starsem-1.18.bib
https://aclanthology.org/2023.starsem-1.18/
@inproceedings{liu-etal-2023-seeking, title = "Seeking Clozure: Robust Hypernym extraction from {BERT} with Anchored Prompts", author = "Liu, Chunhua and Cohn, Trevor and Frermann, Lea", editor = "Palmer, Alexis and Camacho-collados, Jose", booktitle = "Proceedings of the 12th Joint Conference on Lexical and Computational Semantics (*SEM 2023)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.starsem-1.18", doi = "10.18653/v1/2023.starsem-1.18", pages = "193--206", abstract = "The automatic extraction of hypernym knowledge from large language models like BERT is an open problem, and it is unclear whether methods fail due to a lack of knowledge in the model or shortcomings of the extraction methods. In particular, methods fail on challenging cases which include rare or abstract concepts, and perform inconsistently under paraphrased prompts. In this study, we revisit the long line of work on pattern-based hypernym extraction, and use it as a diagnostic tool to thoroughly examine the hypernomy knowledge encoded in BERT and the limitations of hypernym extraction methods. We propose to construct prompts from established pattern structures: definitional (X is a Y); lexico-syntactic (Y such as X); and their anchored versions (Y such as X or Z). We devise an automatic method for anchor prediction, and compare different patterns in: (i) their effectiveness for hypernym retrieval from BERT across six English data sets; (ii) on challenge sets of rare and abstract concepts; and (iii) on consistency under paraphrasing. We show that anchoring is particularly useful for abstract concepts and in enhancing consistency across paraphrases, demonstrating how established methods in the field can inform prompt engineering.", }
The automatic extraction of hypernym knowledge from large language models like BERT is an open problem, and it is unclear whether methods fail due to a lack of knowledge in the model or shortcomings of the extraction methods. In particular, methods fail on challenging cases which include rare or abstract concepts, and perform inconsistently under paraphrased prompts. In this study, we revisit the long line of work on pattern-based hypernym extraction, and use it as a diagnostic tool to thoroughly examine the hypernymy knowledge encoded in BERT and the limitations of hypernym extraction methods. We propose to construct prompts from established pattern structures: definitional (X is a Y); lexico-syntactic (Y such as X); and their anchored versions (Y such as X or Z). We devise an automatic method for anchor prediction, and compare different patterns in: (i) their effectiveness for hypernym retrieval from BERT across six English data sets; (ii) on challenge sets of rare and abstract concepts; and (iii) on consistency under paraphrasing. We show that anchoring is particularly useful for abstract concepts and in enhancing consistency across paraphrases, demonstrating how established methods in the field can inform prompt engineering.
[ "Liu, Chunhua", "Cohn, Trevor", "Frermann, Lea" ]
Seeking Clozure: Robust Hypernym extraction from BERT with Anchored Prompts
starsem-1.18
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
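The Liu et al. abstract names three prompt shapes: definitional (X is a Y), lexico-syntactic (Y such as X), and anchored (Y such as X or Z). A hedged fill-mask illustration of those shapes with BERT; the exact wording and the anchor below are invented for illustration, whereas the paper predicts anchors automatically:

```python
# Hedged illustration of definitional, lexico-syntactic, and anchored prompts.
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")
word, anchor = "sparrow", "robin"    # the anchor here is hand-picked, not predicted

prompts = {
    "definitional":     f"A {word} is a [MASK].",
    "lexico-syntactic": f"[MASK] such as the {word} are common.",
    "anchored":         f"[MASK] such as the {word} or the {anchor} are common.",
}
for name, prompt in prompts.items():
    preds = [r["token_str"] for r in fill(prompt, top_k=5)]
    print(f"{name:16s} {preds}")
```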
https://aclanthology.org/2023.starsem-1.19.bib
https://aclanthology.org/2023.starsem-1.19/
@inproceedings{ahia-etal-2023-lexplain, title = "{LEXPLAIN}: Improving Model Explanations via Lexicon Supervision", author = "Ahia, Orevaoghene and Gonen, Hila and Balachandran, Vidhisha and Tsvetkov, Yulia and Smith, Noah A.", editor = "Palmer, Alexis and Camacho-collados, Jose", booktitle = "Proceedings of the 12th Joint Conference on Lexical and Computational Semantics (*SEM 2023)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.starsem-1.19", doi = "10.18653/v1/2023.starsem-1.19", pages = "207--216", abstract = "Model explanations that shed light on the model{'}s predictions are becoming a desired additional output of NLP models, alongside their predictions. Challenges in creating these explanations include making them trustworthy and faithful to the model{'}s predictions. In this work, we propose a novel framework for guiding model explanations by supervising them explicitly. To this end, our method, LEXplain, uses task-related lexicons to directly supervise model explanations. This approach consistently improves the model{'}s explanations without sacrificing performance on the task, as we demonstrate on sentiment analysis and toxicity detection. Our analyses show that our method also demotes spurious correlations (i.e., with respect to African American English dialect) when performing the task, improving fairness.", }
Model explanations that shed light on the model's predictions are becoming a desired additional output of NLP models, alongside their predictions. Challenges in creating these explanations include making them trustworthy and faithful to the model's predictions. In this work, we propose a novel framework for guiding model explanations by supervising them explicitly. To this end, our method, LEXplain, uses task-related lexicons to directly supervise model explanations. This approach consistently improves the model's explanations without sacrificing performance on the task, as we demonstrate on sentiment analysis and toxicity detection. Our analyses show that our method also demotes spurious correlations (i.e., with respect to African American English dialect) when performing the task, improving fairness.
[ "Ahia, Orevaoghene", "Gonen, Hila", "Balach", "ran, Vidhisha", "Tsvetkov, Yulia", "Smith, Noah A." ]
LEXPLAIN: Improving Model Explanations via Lexicon Supervision
starsem-1.19
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.starsem-1.20.bib
https://aclanthology.org/2023.starsem-1.20/
@inproceedings{youn-tagkopoulos-2023-kglm, title = "{KGLM}: Integrating Knowledge Graph Structure in Language Models for Link Prediction", author = "Youn, Jason and Tagkopoulos, Ilias", editor = "Palmer, Alexis and Camacho-collados, Jose", booktitle = "Proceedings of the 12th Joint Conference on Lexical and Computational Semantics (*SEM 2023)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.starsem-1.20", doi = "10.18653/v1/2023.starsem-1.20", pages = "217--224", abstract = "The ability of knowledge graphs to represent complex relationships at scale has led to their adoption for various needs including knowledge representation, question-answering, and recommendation systems. Knowledge graphs are often incomplete in the information they represent, necessitating the need for knowledge graph completion tasks. Pre-trained and fine-tuned language models have shown promise in these tasks although these models ignore the intrinsic information encoded in the knowledge graph, namely the entity and relation types. In this work, we propose the Knowledge Graph Language Model (KGLM) architecture, where we introduce a new entity/relation embedding layer that learns to differentiate distinctive entity and relation types, therefore allowing the model to learn the structure of the knowledge graph. In this work, we show that further pre-training the language models with this additional embedding layer using the triples extracted from the knowledge graph, followed by the standard fine-tuning phase sets a new state-of-the-art performance for the link prediction task on the benchmark datasets.", }
The ability of knowledge graphs to represent complex relationships at scale has led to their adoption for various needs including knowledge representation, question-answering, and recommendation systems. Knowledge graphs are often incomplete in the information they represent, necessitating the need for knowledge graph completion tasks. Pre-trained and fine-tuned language models have shown promise in these tasks although these models ignore the intrinsic information encoded in the knowledge graph, namely the entity and relation types. In this work, we propose the Knowledge Graph Language Model (KGLM) architecture, where we introduce a new entity/relation embedding layer that learns to differentiate distinctive entity and relation types, therefore allowing the model to learn the structure of the knowledge graph. In this work, we show that further pre-training the language models with this additional embedding layer using the triples extracted from the knowledge graph, followed by the standard fine-tuning phase sets a new state-of-the-art performance for the link prediction task on the benchmark datasets.
[ "Youn, Jason", "Tagkopoulos, Ilias" ]
KGLM: Integrating Knowledge Graph Structure in Language Models for Link Prediction
starsem-1.20
Poster
2211.02744
[ "https://github.com/ibpa/kglm" ]
-1
-1
-1
-1
0
[]
[]
[]
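The Youn and Tagkopoulos abstract describes adding an entity/relation embedding layer so the model can tell entity and relation types apart within a verbalized triple. A hedged, minimal PyTorch sketch of that idea; the actual KGLM architecture and training setup are in the linked repository:

```python
# Hedged sketch: token embeddings plus an added entity/relation-type embedding,
# in the spirit of the KGLM description; not the authors' actual implementation.
import torch
import torch.nn as nn

class TypeAwareEmbedding(nn.Module):
    def __init__(self, vocab_size=30522, hidden_size=768, num_types=3):
        super().__init__()
        self.token_emb = nn.Embedding(vocab_size, hidden_size)
        # illustrative type ids: 0 = ordinary token, 1 = entity token, 2 = relation token
        self.type_emb = nn.Embedding(num_types, hidden_size)

    def forward(self, input_ids, type_ids):
        return self.token_emb(input_ids) + self.type_emb(type_ids)

emb = TypeAwareEmbedding()
input_ids = torch.randint(0, 30522, (1, 6))        # e.g. a verbalized (head, relation, tail) triple
type_ids = torch.tensor([[1, 1, 2, 1, 1, 0]])      # head tokens, relation token, tail tokens, pad
print(emb(input_ids, type_ids).shape)              # torch.Size([1, 6, 768])
```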
https://aclanthology.org/2023.starsem-1.21.bib
https://aclanthology.org/2023.starsem-1.21/
@inproceedings{cho-etal-2023-probing, title = "Probing Out-of-Distribution Robustness of Language Models with Parameter-Efficient Transfer Learning", author = "Cho, Hyunsoo and Park, Choonghyun and Kim, Junyeob and Kim, Hyuhng Joon and Yoo, Kang Min and Lee, Sang-goo", editor = "Palmer, Alexis and Camacho-collados, Jose", booktitle = "Proceedings of the 12th Joint Conference on Lexical and Computational Semantics (*SEM 2023)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.starsem-1.21", doi = "10.18653/v1/2023.starsem-1.21", pages = "225--235", abstract = "As the size of the pre-trained language model (PLM) continues to increase, numerous parameter-efficient transfer learning methods have been proposed recently to compensate for the high cost of fine-tuning. While large PLMs and various PETL methods have achieved impressive results on various benchmarks, it is uncertain whether they can effectively handle inputs that have been distributionally shifted. In this study, we systematically explore how the ability to detect out-of-distribution (OOD) changes as the size of the PLM grows or the transfer methods are altered. Specifically, we evaluated various PETL techniques, including fine-tuning, Adapter, LoRA, and prefix-tuning, with various language models with different scales.", }
As the size of pre-trained language models (PLMs) continues to increase, numerous parameter-efficient transfer learning (PETL) methods have been proposed recently to compensate for the high cost of fine-tuning. While large PLMs and various PETL methods have achieved impressive results on various benchmarks, it is uncertain whether they can effectively handle inputs that have been distributionally shifted. In this study, we systematically explore how the ability to detect out-of-distribution (OOD) inputs changes as the size of the PLM grows or the transfer methods are altered. Specifically, we evaluate various PETL techniques, including fine-tuning, Adapter, LoRA, and prefix-tuning, with language models of different scales.
[ "Cho, Hyunsoo", "Park, Choonghyun", "Kim, Junyeob", "Kim, Hyuhng Joon", "Yoo, Kang Min", "Lee, Sang-goo" ]
Probing Out-of-Distribution Robustness of Language Models with Parameter-Efficient Transfer Learning
starsem-1.21
Poster
2301.11660
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.starsem-1.22.bib
https://aclanthology.org/2023.starsem-1.22/
@inproceedings{asher-etal-2023-limits, title = "Limits for learning with language models", author = "Asher, Nicholas and Bhar, Swarnadeep and Chaturvedi, Akshay and Hunter, Julie and Paul, Soumya", editor = "Palmer, Alexis and Camacho-collados, Jose", booktitle = "Proceedings of the 12th Joint Conference on Lexical and Computational Semantics (*SEM 2023)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.starsem-1.22", doi = "10.18653/v1/2023.starsem-1.22", pages = "236--248", abstract = "With the advent of large language models (LLMs), the trend in NLP has been to train LLMs on vast amounts of data to solve diverse language understanding and generation tasks. The list of LLM successes is long and varied. Nevertheless, several recent papers provide empirical evidence that LLMs fail to capture important aspects of linguistic meaning. Focusing on universal quantification, we provide a theoretical foundation for these empirical findings by proving that LLMs cannot learn certain fundamental semantic properties including semantic entailment and consistency as they are defined in formal semantics. More generally, we show that LLMs are unable to learn concepts beyond the first level of the Borel Hierarchy, which imposes severe limits on the ability of LMs, both large and small, to capture many aspects of linguistic meaning. This means that LLMs will operate without formal guarantees on tasks that require entailments and deep linguistic understanding.", }
With the advent of large language models (LLMs), the trend in NLP has been to train LLMs on vast amounts of data to solve diverse language understanding and generation tasks. The list of LLM successes is long and varied. Nevertheless, several recent papers provide empirical evidence that LLMs fail to capture important aspects of linguistic meaning. Focusing on universal quantification, we provide a theoretical foundation for these empirical findings by proving that LLMs cannot learn certain fundamental semantic properties including semantic entailment and consistency as they are defined in formal semantics. More generally, we show that LLMs are unable to learn concepts beyond the first level of the Borel Hierarchy, which imposes severe limits on the ability of LMs, both large and small, to capture many aspects of linguistic meaning. This means that LLMs will operate without formal guarantees on tasks that require entailments and deep linguistic understanding.
[ "Asher, Nicholas", "Bhar, Swarnadeep", "Chaturvedi, Akshay", "Hunter, Julie", "Paul, Soumya" ]
Limits for learning with language models
starsem-1.22
Poster
2306.12213
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.starsem-1.23.bib
https://aclanthology.org/2023.starsem-1.23/
@inproceedings{kurosawa-yanaka-2023-character, title = "Does Character-level Information Always Improve {DRS}-based Semantic Parsing?", author = "Kurosawa, Tomoya and Yanaka, Hitomi", editor = "Palmer, Alexis and Camacho-collados, Jose", booktitle = "Proceedings of the 12th Joint Conference on Lexical and Computational Semantics (*SEM 2023)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.starsem-1.23", doi = "10.18653/v1/2023.starsem-1.23", pages = "249--258", abstract = "Even in the era of massive language models, it has been suggested that character-level representations improve the performance of neural models. The state-of-the-art neural semantic parser for Discourse Representation Structures uses character-level representations, improving performance in the four languages (i.e., English, German, Dutch, and Italian) in the Parallel Meaning Bank dataset. However, how and why character-level information improves the parser{'}s performance remains unclear. This study provides an in-depth analysis of performance changes by order of character sequences. In the experiments, we compare F1-scores by shuffling the order and randomizing character sequences after testing the performance of character-level information. Our results indicate that incorporating character-level information does not improve the performance in English and German. In addition, we find that the parser is not sensitive to correct character order in Dutch. Nevertheless, performance improvements are observed when using character-level information.", }
Even in the era of massive language models, it has been suggested that character-level representations improve the performance of neural models. The state-of-the-art neural semantic parser for Discourse Representation Structures uses character-level representations, improving performance in the four languages (i.e., English, German, Dutch, and Italian) in the Parallel Meaning Bank dataset. However, how and why character-level information improves the parser's performance remains unclear. This study provides an in-depth analysis of how performance changes with the order of character sequences. In the experiments, we compare F1-scores by shuffling the order and randomizing character sequences after testing the performance of character-level information. Our results indicate that incorporating character-level information does not improve the performance in English and German. In addition, we find that the parser is not sensitive to correct character order in Dutch. Nevertheless, performance improvements are observed when using character-level information.
[ "Kurosawa, Tomoya", "Yanaka, Hitomi" ]
Does Character-level Information Always Improve DRS-based Semantic Parsing?
starsem-1.23
Poster
2306.02302
[ "https://github.com/ynklab/character_order_analysis" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.starsem-1.24.bib
https://aclanthology.org/2023.starsem-1.24/
@inproceedings{peng-etal-2023-testing, title = "Testing Paraphrase Models on Recognising Sentence Pairs at Different Degrees of Semantic Overlap", author = "Peng, Qiwei and Weir, David and Weeds, Julie", editor = "Palmer, Alexis and Camacho-collados, Jose", booktitle = "Proceedings of the 12th Joint Conference on Lexical and Computational Semantics (*SEM 2023)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.starsem-1.24", doi = "10.18653/v1/2023.starsem-1.24", pages = "259--269", abstract = "Paraphrase detection is useful in many natural language understanding applications. Current works typically formulate this problem as a sentence pair binary classification task. However, this setup is not a good fit for many of the intended applications of paraphrase models. In particular, such applications often involve finding the closest paraphrases of the target sentence from a group of candidate sentences where they exhibit different degrees of semantic overlap with the target sentence. To apply models to this paraphrase retrieval scenario, the model must be sensitive to the degree to which two sentences are paraphrases of one another. However, many existing datasets ignore and fail to test models in this setup. In response, we propose adversarial paradigms to create evaluation datasets, which could examine the sensitivity to different degrees of semantic overlap. Empirical results show that, while paraphrase models and different sentence encoders appear successful on standard evaluations, measuring the degree of semantic overlap still remains a big challenge for them.", }
Paraphrase detection is useful in many natural language understanding applications. Current works typically formulate this problem as a sentence pair binary classification task. However, this setup is not a good fit for many of the intended applications of paraphrase models. In particular, such applications often involve finding the closest paraphrases of the target sentence from a group of candidate sentences where they exhibit different degrees of semantic overlap with the target sentence. To apply models to this paraphrase retrieval scenario, the model must be sensitive to the degree to which two sentences are paraphrases of one another. However, many existing datasets ignore and fail to test models in this setup. In response, we propose adversarial paradigms to create evaluation datasets, which could examine the sensitivity to different degrees of semantic overlap. Empirical results show that, while paraphrase models and different sentence encoders appear successful on standard evaluations, measuring the degree of semantic overlap still remains a big challenge for them.
[ "Peng, Qiwei", "Weir, David", "Weeds, Julie" ]
Testing Paraphrase Models on Recognising Sentence Pairs at Different Degrees of Semantic Overlap
starsem-1.24
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.starsem-1.25.bib
https://aclanthology.org/2023.starsem-1.25/
@inproceedings{mickus-etal-2023-mann, title = "„Mann{``} is to {``}Donna{''} as「国王」is to « Reine » Adapting the Analogy Task for Multilingual and Contextual Embeddings", author = "Mickus, Timothee and Cal{\`o}, Eduardo and Jacqmin, L{\'e}o and Paperno, Denis and Constant, Mathieu", editor = "Palmer, Alexis and Camacho-collados, Jose", booktitle = "Proceedings of the 12th Joint Conference on Lexical and Computational Semantics (*SEM 2023)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.starsem-1.25", doi = "10.18653/v1/2023.starsem-1.25", pages = "270--283", abstract = "How does the word analogy task fit in the modern NLP landscape? Given the rarity of comparable multilingual benchmarks and the lack of a consensual evaluation protocol for contextual models, this remains an open question. In this paper, we introduce MATS: a multilingual analogy dataset, covering forty analogical relations in six languages, and evaluate human as well as static and contextual embedding performances on the task. We find that not all analogical relations are equally straightforward for humans, static models remain competitive with contextual embeddings, and optimal settings vary across languages and analogical relations. Several key challenges remain, including creating benchmarks that align with human reasoning and understanding what drives differences across methodologies.", }
How does the word analogy task fit in the modern NLP landscape? Given the rarity of comparable multilingual benchmarks and the lack of a consensual evaluation protocol for contextual models, this remains an open question. In this paper, we introduce MATS: a multilingual analogy dataset, covering forty analogical relations in six languages, and evaluate human as well as static and contextual embedding performances on the task. We find that not all analogical relations are equally straightforward for humans, static models remain competitive with contextual embeddings, and optimal settings vary across languages and analogical relations. Several key challenges remain, including creating benchmarks that align with human reasoning and understanding what drives differences across methodologies.
[ "Mickus, Timothee", "Cal{\\`o}, Eduardo", "Jacqmin, L{\\'e}o", "Paperno, Denis", "Constant, Mathieu" ]
„Mann“ is to “Donna” as「国王」is to « Reine » Adapting the Analogy Task for Multilingual and Contextual Embeddings
starsem-1.25
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.starsem-1.26.bib
https://aclanthology.org/2023.starsem-1.26/
@inproceedings{castro-etal-2023-scalable, title = "Scalable Performance Analysis for Vision-Language Models", author = "Castro, Santiago and Ignat, Oana and Mihalcea, Rada", editor = "Palmer, Alexis and Camacho-collados, Jose", booktitle = "Proceedings of the 12th Joint Conference on Lexical and Computational Semantics (*SEM 2023)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.starsem-1.26", doi = "10.18653/v1/2023.starsem-1.26", pages = "284--294", abstract = "Joint vision-language models have shown great performance over a diverse set of tasks. However, little is known about their limitations, as the high dimensional space learned by these models makes it difficult to identify semantic errors. Recent work has addressed this problem by designing highly controlled probing task benchmarks. Our paper introduces a more scalable solution that relies on already annotated benchmarks. Our method consists of extracting a large set of diverse features from a vision-language benchmark and measuring their correlation with the output of the target model. We confirm previous findings that CLIP behaves like a bag of words model and performs better with nouns and verbs; we also uncover novel insights such as CLIP getting confused by concrete words. Our framework is available at \url{https://github.com/MichiganNLP/Scalable-VLM-Probing} and can be used with other multimodal models and benchmarks.", }
Joint vision-language models have shown great performance over a diverse set of tasks. However, little is known about their limitations, as the high dimensional space learned by these models makes it difficult to identify semantic errors. Recent work has addressed this problem by designing highly controlled probing task benchmarks. Our paper introduces a more scalable solution that relies on already annotated benchmarks. Our method consists of extracting a large set of diverse features from a vision-language benchmark and measuring their correlation with the output of the target model. We confirm previous findings that CLIP behaves like a bag of words model and performs better with nouns and verbs; we also uncover novel insights such as CLIP getting confused by concrete words. Our framework is available at https://github.com/MichiganNLP/Scalable-VLM-Probing and can be used with other multimodal models and benchmarks.
[ "Castro, Santiago", "Ignat, Oana", "Mihalcea, Rada" ]
Scalable Performance Analysis for Vision-Language Models
starsem-1.26
Poster
2305.18786
[ "https://github.com/michigannlp/scalable-vlm-probing" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.starsem-1.27.bib
https://aclanthology.org/2023.starsem-1.27/
@inproceedings{zhang-etal-2023-pcfg, title = "{PCFG}-Based Natural Language Interface Improves Generalization for Controlled Text Generation", author = "Zhang, Jingyu and Glass, James and He, Tianxing", editor = "Palmer, Alexis and Camacho-collados, Jose", booktitle = "Proceedings of the 12th Joint Conference on Lexical and Computational Semantics (*SEM 2023)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.starsem-1.27", doi = "10.18653/v1/2023.starsem-1.27", pages = "295--313", abstract = "Existing work on controlled text generation (CTG) assumes a control interface of categorical attributes. In this work, we propose a natural language (NL) interface, where we craft a PCFG to embed the control attributes into natural language commands, and propose variants of existing CTG models that take commands as input. In our experiments, we design tailored setups to test the model{'}s generalization abilities. We find our PCFG-based command generation approach is effective for handling unseen commands compared to fix-set templates. Further, our proposed NL models can effectively generalize to unseen attributes (a new ability enabled by the NL interface), as well as unseen attribute combinations. Interestingly, in model comparisons, the simple conditional generation approach, enhanced with our proposed NL interface, is shown to be a strong baseline in those challenging settings.", }
Existing work on controlled text generation (CTG) assumes a control interface of categorical attributes. In this work, we propose a natural language (NL) interface, where we craft a PCFG to embed the control attributes into natural language commands, and propose variants of existing CTG models that take commands as input. In our experiments, we design tailored setups to test the model's generalization abilities. We find our PCFG-based command generation approach is effective for handling unseen commands compared to fix-set templates. Further, our proposed NL models can effectively generalize to unseen attributes (a new ability enabled by the NL interface), as well as unseen attribute combinations. Interestingly, in model comparisons, the simple conditional generation approach, enhanced with our proposed NL interface, is shown to be a strong baseline in those challenging settings.
[ "Zhang, Jingyu", "Glass, James", "He, Tianxing" ]
PCFG-Based Natural Language Interface Improves Generalization for Controlled Text Generation
starsem-1.27
Poster
2210.07431
[ "https://github.com/jackjyzhang/pcfg-nl-interface" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.starsem-1.28.bib
https://aclanthology.org/2023.starsem-1.28/
@inproceedings{del-fishel-2023-true, title = "True Detective: A Deep Abductive Reasoning Benchmark Undoable for {GPT}-3 and Challenging for {GPT}-4", author = "Del, Maksym and Fishel, Mark", editor = "Palmer, Alexis and Camacho-collados, Jose", booktitle = "Proceedings of the 12th Joint Conference on Lexical and Computational Semantics (*SEM 2023)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.starsem-1.28", doi = "10.18653/v1/2023.starsem-1.28", pages = "314--322", abstract = "Large language models (LLMs) have demonstrated solid zero-shot reasoning capabilities, which is reflected in their performance on the current test tasks. This calls for a more challenging benchmark requiring highly advanced reasoning ability to be solved. In this paper, we introduce such a benchmark, consisting of 191 long-form (1200 words on average) mystery narratives constructed as detective puzzles. Puzzles are sourced from the {``}5 Minute Mystery{''} platform and include a multiple-choice question for evaluation. Only 47{\%} of humans solve a puzzle successfully on average, while the best human solvers achieve over 80{\%} success rate. We show that GPT-3 models barely outperform random on this benchmark (with 28{\%} accuracy) while state-of-the-art GPT-4 solves only 38{\%} of puzzles. This indicates that there is still a significant gap in the deep reasoning abilities of LLMs and humans and highlights the need for further research in this area. Our work introduces a challenging benchmark for future studies on reasoning in language models and contributes to a better understanding of the limits of LLMs{'} abilities.", }
Large language models (LLMs) have demonstrated solid zero-shot reasoning capabilities, which is reflected in their performance on the current test tasks. This calls for a more challenging benchmark requiring highly advanced reasoning ability to be solved. In this paper, we introduce such a benchmark, consisting of 191 long-form (1200 words on average) mystery narratives constructed as detective puzzles. Puzzles are sourced from the "5 Minute Mystery" platform and include a multiple-choice question for evaluation. Only 47% of humans solve a puzzle successfully on average, while the best human solvers achieve over 80% success rate. We show that GPT-3 models barely outperform random on this benchmark (with 28% accuracy) while state-of-the-art GPT-4 solves only 38% of puzzles. This indicates that there is still a significant gap in the deep reasoning abilities of LLMs and humans and highlights the need for further research in this area. Our work introduces a challenging benchmark for future studies on reasoning in language models and contributes to a better understanding of the limits of LLMs' abilities.
[ "Del, Maksym", "Fishel, Mark" ]
True Detective: A Deep Abductive Reasoning Benchmark Undoable for GPT-3 and Challenging for GPT-4
starsem-1.28
Poster
2212.10114
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.starsem-1.29.bib
https://aclanthology.org/2023.starsem-1.29/
@inproceedings{vahtola-etal-2023-guiding, title = "Guiding Zero-Shot Paraphrase Generation with Fine-Grained Control Tokens", author = "Vahtola, Teemu and Creutz, Mathias and Tiedemann, Jrg", editor = "Palmer, Alexis and Camacho-collados, Jose", booktitle = "Proceedings of the 12th Joint Conference on Lexical and Computational Semantics (*SEM 2023)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.starsem-1.29", doi = "10.18653/v1/2023.starsem-1.29", pages = "323--337", abstract = "Sequence-to-sequence paraphrase generation models often struggle with the generation of diverse paraphrases. This deficiency constrains the viability of leveraging paraphrase generation in different Natural Language Processing tasks. We propose a translation-based guided paraphrase generation model that learns useful features for promoting surface form variation in generated paraphrases from cross-lingual parallel data. Our proposed method leverages multilingual neural machine translation pretraining to learn zero-shot paraphrasing. Furthermore, we incorporate dedicated prefix tokens into the training of the machine translation models to promote variation. The prefix tokens are designed to affect various linguistic features related to surface form realizations, and can be applied during inference to guide the decoding process towards a desired solution. We assess the proposed guided model on paraphrase generation in three languages, English, Finnish, and Swedish, and provide analysis on the feasibility of the prefix tokens to guided paraphrasing. Our analysis suggests that the attributes represented by the prefix tokens are useful in promoting variation, by pushing the paraphrases generated by the guided model to diverge from the input sentence while preserving semantics conveyed by the sentence well.", }
Sequence-to-sequence paraphrase generation models often struggle with the generation of diverse paraphrases. This deficiency constrains the viability of leveraging paraphrase generation in different Natural Language Processing tasks. We propose a translation-based guided paraphrase generation model that learns useful features for promoting surface form variation in generated paraphrases from cross-lingual parallel data. Our proposed method leverages multilingual neural machine translation pretraining to learn zero-shot paraphrasing. Furthermore, we incorporate dedicated prefix tokens into the training of the machine translation models to promote variation. The prefix tokens are designed to affect various linguistic features related to surface form realizations, and can be applied during inference to guide the decoding process towards a desired solution. We assess the proposed guided model on paraphrase generation in three languages, English, Finnish, and Swedish, and provide an analysis of the feasibility of using the prefix tokens to guide paraphrasing. Our analysis suggests that the attributes represented by the prefix tokens are useful in promoting variation, pushing the paraphrases generated by the guided model to diverge from the input sentence while preserving the semantics conveyed by the input sentence well.
[ "Vahtola, Teemu", "Creutz, Mathias", "Tiedemann, Jrg" ]
Guiding Zero-Shot Paraphrase Generation with Fine-Grained Control Tokens
starsem-1.29
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.starsem-1.30.bib
https://aclanthology.org/2023.starsem-1.30/
@inproceedings{lietard-etal-2023-tale, title = "A Tale of Two Laws of Semantic Change: Predicting Synonym Changes with Distributional Semantic Models", author = "Lietard, Bastien and Keller, Mikaela and Denis, Pascal", editor = "Palmer, Alexis and Camacho-collados, Jose", booktitle = "Proceedings of the 12th Joint Conference on Lexical and Computational Semantics (*SEM 2023)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.starsem-1.30", doi = "10.18653/v1/2023.starsem-1.30", pages = "338--352", abstract = "Lexical Semantic Change is the study of how the meaning of words evolves through time. Another related question is whether and how lexical relations over pairs of words, such as synonymy, change over time. There are currently two competing, apparently opposite hypotheses in the historical linguistic literature regarding how synonymous words evolve: the Law of Differentiation (LD) argues that synonyms tend to take on different meanings over time, whereas the Law of Parallel Change (LPC) claims that synonyms tend to undergo the same semantic change and therefore remain synonyms. So far, there has been little research using distributional models to assess to what extent these laws apply on historical corpora. In this work, we take a first step toward detecting whether LD or LPC operates for given word pairs. After recasting the problem into a more tractable task, we combine two linguistic resources to propose the first complete evaluation framework on this problem and provide empirical evidence in favor of a dominance of LD. We then propose various computational approaches to the problem using Distributional Semantic Models and grounded in recent literature on Lexical Semantic Change detection. Our best approaches achieve a balanced accuracy above 0.6 on our dataset. We discuss challenges still faced by these approaches, such as polysemy or the potential confusion between synonymy and hypernymy.", }
Lexical Semantic Change is the study of how the meaning of words evolves through time. Another related question is whether and how lexical relations over pairs of words, such as synonymy, change over time. There are currently two competing, apparently opposite hypotheses in the historical linguistic literature regarding how synonymous words evolve: the Law of Differentiation (LD) argues that synonyms tend to take on different meanings over time, whereas the Law of Parallel Change (LPC) claims that synonyms tend to undergo the same semantic change and therefore remain synonyms. So far, there has been little research using distributional models to assess to what extent these laws apply on historical corpora. In this work, we take a first step toward detecting whether LD or LPC operates for given word pairs. After recasting the problem into a more tractable task, we combine two linguistic resources to propose the first complete evaluation framework on this problem and provide empirical evidence in favor of a dominance of LD. We then propose various computational approaches to the problem using Distributional Semantic Models and grounded in recent literature on Lexical Semantic Change detection. Our best approaches achieve a balanced accuracy above 0.6 on our dataset. We discuss challenges still faced by these approaches, such as polysemy or the potential confusion between synonymy and hypernymy.
[ "Lietard, Bastien", "Keller, Mikaela", "Denis, Pascal" ]
A Tale of Two Laws of Semantic Change: Predicting Synonym Changes with Distributional Semantic Models
starsem-1.30
Poster
2305.19143
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.starsem-1.31.bib
https://aclanthology.org/2023.starsem-1.31/
@inproceedings{roy-dipta-etal-2023-semantically, title = "Semantically-informed Hierarchical Event Modeling", author = "Roy Dipta, Shubhashis and Rezaee, Mehdi and Ferraro, Francis", editor = "Palmer, Alexis and Camacho-collados, Jose", booktitle = "Proceedings of the 12th Joint Conference on Lexical and Computational Semantics (*SEM 2023)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.starsem-1.31", doi = "10.18653/v1/2023.starsem-1.31", pages = "353--369", abstract = "Prior work has shown that coupling sequential latent variable models with semantic ontological knowledge can improve the representational capabilities of event modeling approaches. In this work, we present a novel, doubly hierarchical, semi-supervised event modeling framework that provides structural hierarchy while also accounting for ontological hierarchy. Our approach consistsof multiple layers of structured latent variables, where each successive layer compresses and abstracts the previous layers. We guide this compression through the injection of structured ontological knowledge that is defined at the type level of events: importantly, our model allows for partial injection of semantic knowledge and it does not depend on observing instances at any particular level of the semantic ontology. Across two different datasets and four different evaluation metrics, we demonstrate that our approach is able to out-perform the previous state-of-the-art approaches by up to 8.5{\%}, demonstrating the benefits of structured and semantic hierarchical knowledge for event modeling.", }
Prior work has shown that coupling sequential latent variable models with semantic ontological knowledge can improve the representational capabilities of event modeling approaches. In this work, we present a novel, doubly hierarchical, semi-supervised event modeling framework that provides structural hierarchy while also accounting for ontological hierarchy. Our approach consists of multiple layers of structured latent variables, where each successive layer compresses and abstracts the previous layers. We guide this compression through the injection of structured ontological knowledge that is defined at the type level of events: importantly, our model allows for partial injection of semantic knowledge and it does not depend on observing instances at any particular level of the semantic ontology. Across two different datasets and four different evaluation metrics, we demonstrate that our approach is able to outperform the previous state-of-the-art approaches by up to 8.5%, demonstrating the benefits of structured and semantic hierarchical knowledge for event modeling.
[ "Roy Dipta, Shubhashis", "Rezaee, Mehdi", "Ferraro, Francis" ]
Semantically-informed Hierarchical Event Modeling
starsem-1.31
Poster
2212.10547
[ "https://github.com/dipta007/shem" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.starsem-1.32.bib
https://aclanthology.org/2023.starsem-1.32/
@inproceedings{lyu-etal-2023-representation, title = "Representation of Lexical Stylistic Features in Language Models{'} Embedding Space", author = "Lyu, Qing and Apidianaki, Marianna and Callison-burch, Chris", editor = "Palmer, Alexis and Camacho-collados, Jose", booktitle = "Proceedings of the 12th Joint Conference on Lexical and Computational Semantics (*SEM 2023)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.starsem-1.32", doi = "10.18653/v1/2023.starsem-1.32", pages = "370--387", abstract = "The representation space of pretrained Language Models (LMs) encodes rich information about words and their relationships (e.g., similarity, hypernymy, polysemy) as well as abstract semantic notions (e.g., intensity). In this paper, we demonstrate that lexical stylistic notions such as complexity, formality, and figurativeness, can also be identified in this space. We show that it is possible to derive a vector representation for each of these stylistic notions from only a small number of seed pairs. Using these vectors, we can characterize new texts in terms of these dimensions by performing simple calculations in the corresponding embedding space. We conduct experiments on five datasets and find that static embeddings encode these features more accurately at the level of words and phrases, whereas contextualized LMs perform better on sentences. The lower performance of contextualized representations at the word level is partially attributable to the anisotropy of their vector space, which can be corrected to some extent using techniques like standardization.", }
The representation space of pretrained Language Models (LMs) encodes rich information about words and their relationships (e.g., similarity, hypernymy, polysemy) as well as abstract semantic notions (e.g., intensity). In this paper, we demonstrate that lexical stylistic notions such as complexity, formality, and figurativeness, can also be identified in this space. We show that it is possible to derive a vector representation for each of these stylistic notions from only a small number of seed pairs. Using these vectors, we can characterize new texts in terms of these dimensions by performing simple calculations in the corresponding embedding space. We conduct experiments on five datasets and find that static embeddings encode these features more accurately at the level of words and phrases, whereas contextualized LMs perform better on sentences. The lower performance of contextualized representations at the word level is partially attributable to the anisotropy of their vector space, which can be corrected to some extent using techniques like standardization.
[ "Lyu, Qing", "Apidianaki, Marianna", "Callison-burch, Chris" ]
Representation of Lexical Stylistic Features in Language Models' Embedding Space
starsem-1.32
Poster
2305.18657
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.starsem-1.33.bib
https://aclanthology.org/2023.starsem-1.33/
@inproceedings{kazeminejad-palmer-2023-event, title = "Event Semantic Knowledge in Procedural Text Understanding", author = "Kazeminejad, Ghazaleh and Palmer, Martha", editor = "Palmer, Alexis and Camacho-collados, Jose", booktitle = "Proceedings of the 12th Joint Conference on Lexical and Computational Semantics (*SEM 2023)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.starsem-1.33", doi = "10.18653/v1/2023.starsem-1.33", pages = "388--398", abstract = "The task of entity state tracking aims to automatically analyze procedural texts {--} texts that describe a step-by-step process (e.g. a baking recipe). Specifically, the goal is to track various states of the entities participating in a given process. Some of the challenges for this NLP task include annotated data scarcity and annotators{'} reliance on commonsense knowledge to annotate implicit state information. Zhang et al. (2021) successfully incorporated commonsense entity-centric knowledge from ConceptNet into their BERT-based neural-symbolic architecture. Since English mostly encodes state change information in verbs, we attempted to test whether injecting semantic knowledge of events (retrieved from the state-of-the-art VerbNet parser) into a neural model can also improve the performance on this task. To achieve this, we adapt the methodology introduced by Zhang et al. (2021) for incorporating symbolic entity information from ConceptNet to the incorporation of VerbNet event semantics. We evaluate the performance of our model on the ProPara dataset (Mishra et al., 2018). In addition, we introduce a purely symbolic model for entity state tracking that uses a simple set of case statements, and is informed mostly by linguistic knowledge retrieved from various computational lexical resources. Our approach is inherently domain-agnostic, and our model is explainable and achieves state-of-the-art results on the Recipes dataset (Bosselut et al., 2017).", }
The task of entity state tracking aims to automatically analyze procedural texts – texts that describe a step-by-step process (e.g. a baking recipe). Specifically, the goal is to track various states of the entities participating in a given process. Some of the challenges for this NLP task include annotated data scarcity and annotators' reliance on commonsense knowledge to annotate implicit state information. Zhang et al. (2021) successfully incorporated commonsense entity-centric knowledge from ConceptNet into their BERT-based neural-symbolic architecture. Since English mostly encodes state change information in verbs, we attempted to test whether injecting semantic knowledge of events (retrieved from the state-of-the-art VerbNet parser) into a neural model can also improve the performance on this task. To achieve this, we adapt the methodology introduced by Zhang et al. (2021) for incorporating symbolic entity information from ConceptNet to the incorporation of VerbNet event semantics. We evaluate the performance of our model on the ProPara dataset (Mishra et al., 2018). In addition, we introduce a purely symbolic model for entity state tracking that uses a simple set of case statements, and is informed mostly by linguistic knowledge retrieved from various computational lexical resources. Our approach is inherently domain-agnostic, and our model is explainable and achieves state-of-the-art results on the Recipes dataset (Bosselut et al., 2017).
[ "Kazeminejad, Ghazaleh", "Palmer, Martha" ]
Event Semantic Knowledge in Procedural Text Understanding
starsem-1.33
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.starsem-1.34.bib
https://aclanthology.org/2023.starsem-1.34/
@inproceedings{myers-palmer-2023-leveraging, title = "Leveraging Active Learning to Minimise {SRL} Annotation Across Corpora", author = "Myers, Skatje and Palmer, Martha", editor = "Palmer, Alexis and Camacho-collados, Jose", booktitle = "Proceedings of the 12th Joint Conference on Lexical and Computational Semantics (*SEM 2023)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.starsem-1.34", doi = "10.18653/v1/2023.starsem-1.34", pages = "399--408", abstract = "In this paper we investigate the application of active learning to semantic role labeling (SRL) using Bayesian Active Learning by Disagreement (BALD). Our new predicate-focused selection method quickly improves efficiency on three different specialised domain corpora. This is encouraging news for researchers wanting to port SRL to domain specific applications. Interestingly, with the large and diverse {\textbackslash}textit{OntoNotes} corpus, the sentence selection approach, that collects a larger number of predicates, taking more time to annotate, fares better than the predicate approach. In this paper, we analyze both the selections made by our two selections methods for the various domains and the differences between these corpora in detail.", }
In this paper we investigate the application of active learning to semantic role labeling (SRL) using Bayesian Active Learning by Disagreement (BALD). Our new predicate-focused selection method quickly improves efficiency on three different specialised domain corpora. This is encouraging news for researchers wanting to port SRL to domain-specific applications. Interestingly, with the large and diverse OntoNotes corpus, the sentence selection approach, which collects a larger number of predicates and takes more time to annotate, fares better than the predicate approach. In this paper, we analyze in detail both the selections made by our two selection methods for the various domains and the differences between these corpora.
[ "Myers, Skatje", "Palmer, Martha" ]
Leveraging Active Learning to Minimise SRL Annotation Across Corpora
starsem-1.34
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.starsem-1.35.bib
https://aclanthology.org/2023.starsem-1.35/
@inproceedings{pokharel-agrawal-2023-estimating, title = "Estimating Semantic Similarity between In-Domain and Out-of-Domain Samples", author = "Pokharel, Rhitabrat and Agrawal, Ameeta", editor = "Palmer, Alexis and Camacho-collados, Jose", booktitle = "Proceedings of the 12th Joint Conference on Lexical and Computational Semantics (*SEM 2023)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.starsem-1.35", doi = "10.18653/v1/2023.starsem-1.35", pages = "409--416", abstract = "Prior work typically describes out-of-domain (OOD) or out-of-distribution (OODist) samples as those that originate from dataset(s) or source(s) different from the training set but for the same task. When compared to in-domain (ID) samples, the models have been known to usually perform poorer on OOD samples, although this observation is not consistent. Another thread of research has focused on OOD detection, albeit mostly using supervised approaches. In this work, we first consolidate and present a systematic analysis of multiple definitions of OOD and OODist as discussed in prior literature. Then, we analyze the performance of a model under ID and OOD/OODist settings in a principled way. Finally, we seek to identify an unsupervised method for reliably identifying OOD/OODist samples without using a trained model. The results of our extensive evaluation using 12 datasets from 4 different tasks suggest the promising potential of unsupervised metrics in this task.", }
Prior work typically describes out-of-domain (OOD) or out-of-distribution (OODist) samples as those that originate from dataset(s) or source(s) different from the training set but for the same task. Compared to in-domain (ID) samples, models usually perform worse on OOD samples, although this observation is not consistent. Another thread of research has focused on OOD detection, albeit mostly using supervised approaches. In this work, we first consolidate and present a systematic analysis of multiple definitions of OOD and OODist as discussed in prior literature. Then, we analyze the performance of a model under ID and OOD/OODist settings in a principled way. Finally, we seek to identify an unsupervised method for reliably identifying OOD/OODist samples without using a trained model. The results of our extensive evaluation using 12 datasets from 4 different tasks suggest the promising potential of unsupervised metrics in this task.
[ "Pokharel, Rhitabrat", "Agrawal, Ameeta" ]
Estimating Semantic Similarity between In-Domain and Out-of-Domain Samples
starsem-1.35
Poster
2306.01206
[ "https://github.com/PortNLP/semantic-similarity" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.starsem-1.36.bib
https://aclanthology.org/2023.starsem-1.36/
@inproceedings{pan-etal-2023-query, title = "Query Generation Using {GPT}-3 for {CLIP}-Based Word Sense Disambiguation for Image Retrieval", author = "Pan, Xiaomeng and Chen, Zhousi and Komachi, Mamoru", editor = "Palmer, Alexis and Camacho-collados, Jose", booktitle = "Proceedings of the 12th Joint Conference on Lexical and Computational Semantics (*SEM 2023)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.starsem-1.36", doi = "10.18653/v1/2023.starsem-1.36", pages = "417--422", abstract = "In this study, we propose using the GPT-3 as a query generator for the backend of CLIP as an implicit word sense disambiguation (WSD) component for the SemEval 2023 shared task Visual Word Sense Disambiguation (VWSD). We confirmed previous findings {---} human-like prompts adapted for WSD with quotes benefit both CLIP and GPT-3, whereas plain phrases or poorly templated prompts give the worst results.", }
In this study, we propose using GPT-3 as a query generator for the backend of CLIP as an implicit word sense disambiguation (WSD) component for the SemEval 2023 shared task Visual Word Sense Disambiguation (VWSD). We confirm previous findings: human-like prompts adapted for WSD with quotes benefit both CLIP and GPT-3, whereas plain phrases or poorly templated prompts give the worst results.
[ "Pan, Xiaomeng", "Chen, Zhousi", "Komachi, Mamoru" ]
Query Generation Using GPT-3 for CLIP-Based Word Sense Disambiguation for Image Retrieval
starsem-1.36
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.starsem-1.37.bib
https://aclanthology.org/2023.starsem-1.37/
@inproceedings{lo-etal-2023-functional, title = "Functional Distributional Semantics at Scale", author = "Lo, Chun Hei and Cheng, Hong and Lam, Wai and Emerson, Guy", editor = "Palmer, Alexis and Camacho-collados, Jose", booktitle = "Proceedings of the 12th Joint Conference on Lexical and Computational Semantics (*SEM 2023)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.starsem-1.37", doi = "10.18653/v1/2023.starsem-1.37", pages = "423--436", abstract = "Functional Distributional Semantics is a linguistically motivated framework for modelling lexical and sentence-level semantics with truth-conditional functions using distributional information. Previous implementations of the framework focus on subjectverbobject (SVO) triples only, which largely limits the contextual information available for training and thus the capability of the learnt model. In this paper, we discuss the challenges of extending the previous architectures to training on arbitrary sentences. We address the challenges by proposing a more expressive lexical model that works over a continuous semantic space. This improves the flexibility and computational efficiency of the model, as well as its compatibility with present-day machine-learning frameworks. Our proposal allows the model to be applied to a wider range of semantic tasks, and improved performances are demonstrated from experimental results.", }
Functional Distributional Semantics is a linguistically motivated framework for modelling lexical and sentence-level semantics with truth-conditional functions using distributional information. Previous implementations of the framework focus on subject-verb-object (SVO) triples only, which largely limits the contextual information available for training and thus the capability of the learnt model. In this paper, we discuss the challenges of extending the previous architectures to training on arbitrary sentences. We address these challenges by proposing a more expressive lexical model that works over a continuous semantic space. This improves the flexibility and computational efficiency of the model, as well as its compatibility with present-day machine-learning frameworks. Our proposal allows the model to be applied to a wider range of semantic tasks, and experimental results demonstrate improved performance.
[ "Lo, Chun Hei", "Cheng, Hong", "Lam, Wai", "Emerson, Guy" ]
Functional Distributional Semantics at Scale
starsem-1.37
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.starsem-1.38.bib
https://aclanthology.org/2023.starsem-1.38/
@inproceedings{lee-etal-2023-feed, title = "{FEED} {PET}s: Further Experimentation and Expansion on the Disambiguation of Potentially Euphemistic Terms", author = "Lee, Patrick and Shode, Iyanuoluwa and Trujillo, Alain and Zhao, Yuan and Ojo, Olumide and Plancarte, Diana and Feldman, Anna and Peng, Jing", editor = "Palmer, Alexis and Camacho-collados, Jose", booktitle = "Proceedings of the 12th Joint Conference on Lexical and Computational Semantics (*SEM 2023)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.starsem-1.38", doi = "10.18653/v1/2023.starsem-1.38", pages = "437--448", abstract = "Transformers have been shown to work well for the task of English euphemism disambiguation, in which a potentially euphemistic term (PET) is classified as euphemistic or non-euphemistic in a particular context. In this study, we expand on the task in two ways. First, we annotate PETs for vagueness, a linguistic property associated with euphemisms, and find that transformers are generally better at classifying vague PETs, suggesting linguistic differences in the data that impact performance. Second, we present novel euphemism corpora in three different languages: Yoruba, Spanish, and Mandarin Chinese. We perform euphemism disambiguation experiments in each language using multilingual transformer models mBERT and XLM-RoBERTa, establishing preliminary results from which to launch future work.", }
Transformers have been shown to work well for the task of English euphemism disambiguation, in which a potentially euphemistic term (PET) is classified as euphemistic or non-euphemistic in a particular context. In this study, we expand on the task in two ways. First, we annotate PETs for vagueness, a linguistic property associated with euphemisms, and find that transformers are generally better at classifying vague PETs, suggesting linguistic differences in the data that impact performance. Second, we present novel euphemism corpora in three different languages: Yoruba, Spanish, and Mandarin Chinese. We perform euphemism disambiguation experiments in each language using multilingual transformer models mBERT and XLM-RoBERTa, establishing preliminary results from which to launch future work.
[ "Lee, Patrick", "Shode, Iyanuoluwa", "Trujillo, Alain", "Zhao, Yuan", "Ojo, Olumide", "Plancarte, Diana", "Feldman, Anna", "Peng, Jing" ]
FEED PETs: Further Experimentation and Expansion on the Disambiguation of Potentially Euphemistic Terms
starsem-1.38
Poster
2306.00217
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.starsem-1.39.bib
https://aclanthology.org/2023.starsem-1.39/
@inproceedings{kadotani-arase-2023-monolingual, title = "Monolingual Phrase Alignment as Parse Forest Mapping", author = "Kadotani, Sora and Arase, Yuki", editor = "Palmer, Alexis and Camacho-collados, Jose", booktitle = "Proceedings of the 12th Joint Conference on Lexical and Computational Semantics (*SEM 2023)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.starsem-1.39", doi = "10.18653/v1/2023.starsem-1.39", pages = "449--455", abstract = "We tackle the problem of monolingual phrase alignment conforming to syntactic structures. The existing method formalises the problem as unordered tree mapping; hence, the alignment quality is easily affected by syntactic ambiguities. We address this problem by expanding the method to align parse forests rather than 1-best trees, where syntactic structures and phrase alignment are simultaneously identified. The proposed method achieves efficient alignment by mapping forests on a packed structure. The experimental results indicated that our method improves the phrase alignment quality of the state-of-the-art method by aligning forests rather than 1-best trees.", }
We tackle the problem of monolingual phrase alignment conforming to syntactic structures. The existing method formalises the problem as unordered tree mapping; hence, the alignment quality is easily affected by syntactic ambiguities. We address this problem by expanding the method to align parse forests rather than 1-best trees, where syntactic structures and phrase alignment are simultaneously identified. The proposed method achieves efficient alignment by mapping forests on a packed structure. The experimental results indicated that our method improves the phrase alignment quality of the state-of-the-art method by aligning forests rather than 1-best trees.
[ "Kadotani, Sora", "Arase, Yuki" ]
Monolingual Phrase Alignment as Parse Forest Mapping
starsem-1.39
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.starsem-1.40.bib
https://aclanthology.org/2023.starsem-1.40/
@inproceedings{prange-chersoni-2023-empirical, title = "Empirical Sufficiency Lower Bounds for Language Modeling with Locally-Bootstrapped Semantic Structures", author = "Prange, Jakob and Chersoni, Emmanuele", editor = "Palmer, Alexis and Camacho-collados, Jose", booktitle = "Proceedings of the 12th Joint Conference on Lexical and Computational Semantics (*SEM 2023)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.starsem-1.40", doi = "10.18653/v1/2023.starsem-1.40", pages = "456--468", abstract = "In this work we build upon negative results from an attempt at language modeling with predicted semantic structure, in order to establish empirical lower bounds on what could have made the attempt successful. More specifically, we design a concise binary vector representation of semantic structure at the lexical level and evaluate in-depth how good an incremental tagger needs to be in order to achieve better-than-baseline performance with an end-to-end semantic-bootstrapping language model. We envision such a system as consisting of a (pretrained) sequential-neural component and a hierarchical-symbolic component working together to generate text with low surprisal and high linguistic interpretability. We find that (a) dimensionality of the semantic vector representation can be dramatically reduced without losing its main advantages and (b) lower bounds on prediction quality cannot be established via a single score alone, but need to take the distributions of signal and noise into account.", }
In this work we build upon negative results from an attempt at language modeling with predicted semantic structure, in order to establish empirical lower bounds on what could have made the attempt successful. More specifically, we design a concise binary vector representation of semantic structure at the lexical level and evaluate in-depth how good an incremental tagger needs to be in order to achieve better-than-baseline performance with an end-to-end semantic-bootstrapping language model. We envision such a system as consisting of a (pretrained) sequential-neural component and a hierarchical-symbolic component working together to generate text with low surprisal and high linguistic interpretability. We find that (a) dimensionality of the semantic vector representation can be dramatically reduced without losing its main advantages and (b) lower bounds on prediction quality cannot be established via a single score alone, but need to take the distributions of signal and noise into account.
[ "Prange, Jakob", "Chersoni, Emmanuele" ]
Empirical Sufficiency Lower Bounds for Language Modeling with Locally-Bootstrapped Semantic Structures
starsem-1.40
Poster
2305.18915
[ "https://github.com/jakpra/sufficiencylowerbounds" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.starsem-1.41.bib
https://aclanthology.org/2023.starsem-1.41/
@inproceedings{sileo-moens-2023-probing, title = "Probing neural language models for understanding of words of estimative probability", author = "Sileo, Damien and Moens, Marie-francine", editor = "Palmer, Alexis and Camacho-collados, Jose", booktitle = "Proceedings of the 12th Joint Conference on Lexical and Computational Semantics (*SEM 2023)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.starsem-1.41", doi = "10.18653/v1/2023.starsem-1.41", pages = "469--476", abstract = "Words of Estimative Probability (WEP) are phrases used to express the plausibility of a statement. Examples include terms like {\textbackslash}textit{probably, maybe, likely, doubt, unlikely}, and {\textbackslash}textit{impossible}. Surveys have shown that human evaluators tend to agree when assigning numerical probability levels to these WEPs. For instance, the term {\textbackslash}textit{highly likely} equates to a median probability of {\$}0.90{{\textbackslash}pm}0.08{\$} according to a survey by {\textbackslash}citet{fagen-ulmschneider}.In this study, our focus is to gauge the competency of neural language processing models in accurately capturing the consensual probability level associated with each WEP. Our first approach is utilizing the UNLI dataset {\textbackslash}cite{chen-etal-2020-uncertain}, which links premises and hypotheses with their perceived joint probability {\$}p{\$}. From this, we craft prompts in the form: ''[{\textbackslash}textsc{Premise}]. [{\textbackslash}textsc{Wep}], [{\textbackslash}textsc{Hypothesis}].{''} This allows us to evaluate whether language models can predict if the consensual probability level of a WEP aligns closely with {\$}p{\$}.In our second approach, we develop a dataset based on WEP-focused probabilistic reasoning to assess if language models can logically process WEP compositions. For example, given the prompt ''[{\textbackslash}textsc{EventA}] {\textbackslash}textit{is likely}. [{\textbackslash}textsc{EventB}] {\textbackslash}textit{is impossible}.{''}, a well-functioning language model should not conclude that [{\textbackslash}textsc{EventA{\$}{\textbackslash}{\&}amp;{\$}B}] is likely. Through our study, we observe that both tasks present challenges to out-of-the-box English language models. However, we also demonstrate that fine-tuning these models can lead to significant and transferable improvements.", }
Words of Estimative Probability (WEP) are phrases used to express the plausibility of a statement. Examples include terms like probably, maybe, likely, doubt, unlikely, and impossible. Surveys have shown that human evaluators tend to agree when assigning numerical probability levels to these WEPs. For instance, the term highly likely equates to a median probability of 0.90±0.08 according to a survey by Fagen-Ulmschneider. In this study, our focus is to gauge the competency of neural language processing models in accurately capturing the consensual probability level associated with each WEP. Our first approach is utilizing the UNLI dataset (Chen et al., 2020), which links premises and hypotheses with their perceived joint probability p. From this, we craft prompts in the form: "[Premise]. [Wep], [Hypothesis]." This allows us to evaluate whether language models can predict if the consensual probability level of a WEP aligns closely with p. In our second approach, we develop a dataset based on WEP-focused probabilistic reasoning to assess if language models can logically process WEP compositions. For example, given the prompt "[EventA] is likely. [EventB] is impossible.", a well-functioning language model should not conclude that [EventA & B] is likely. Through our study, we observe that both tasks present challenges to out-of-the-box English language models. However, we also demonstrate that fine-tuning these models can lead to significant and transferable improvements.
[ "Sileo, Damien", "Moens, Marie-francine" ]
Probing neural language models for understanding of words of estimative probability
starsem-1.41
Poster
2211.03358
[ "" ]
https://huggingface.co/papers/2211.03358
1
1
0
2
1
[]
[ "sileod/probability_words_nli" ]
[]
https://aclanthology.org/2023.starsem-1.42.bib
https://aclanthology.org/2023.starsem-1.42/
@inproceedings{petrak-etal-2023-arithmetic, title = "Arithmetic-Based Pretraining Improving Numeracy of Pretrained Language Models", author = "Petrak, Dominic and Moosavi, Nafise Sadat and Gurevych, Iryna", editor = "Palmer, Alexis and Camacho-collados, Jose", booktitle = "Proceedings of the 12th Joint Conference on Lexical and Computational Semantics (*SEM 2023)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.starsem-1.42", doi = "10.18653/v1/2023.starsem-1.42", pages = "477--493", abstract = "State-of-the-art pretrained language models tend to perform below their capabilities when applied out-of-the-box on tasks that require understanding and working with numbers (usually referred to as numeracy). Recent work suggests two main reasons for this: (1) popular tokenisation algorithms have limited expressiveness for numbers, and (2) common pretraining objectives do not target numeracy. Approaches that address these shortcomings usually require architectural changes or pretraining from scratch. In this paper, we propose a new extended pretraining approach called Arithmetic-Based Pretraining that jointly addresses both in one extended pretraining step without requiring architectural changes or pretraining from scratch. Arithmetic-Based Pretraining combines contrastive learning to improve the number representation, and a novel extended pretraining objective called Inferable Number Prediction Task to improve numeracy. Our experiments show the effectiveness of Arithmetic-Based Pretraining in three different tasks that require improved numeracy, i.e., reading comprehension in the DROP dataset, inference-on-tables in the InfoTabs dataset, and table-to-text generation in the WikiBio and SciGen datasets.", }
State-of-the-art pretrained language models tend to perform below their capabilities when applied out-of-the-box on tasks that require understanding and working with numbers (usually referred to as numeracy). Recent work suggests two main reasons for this: (1) popular tokenisation algorithms have limited expressiveness for numbers, and (2) common pretraining objectives do not target numeracy. Approaches that address these shortcomings usually require architectural changes or pretraining from scratch. In this paper, we propose a new extended pretraining approach called Arithmetic-Based Pretraining that jointly addresses both in one extended pretraining step without requiring architectural changes or pretraining from scratch. Arithmetic-Based Pretraining combines contrastive learning to improve the number representation, and a novel extended pretraining objective called Inferable Number Prediction Task to improve numeracy. Our experiments show the effectiveness of Arithmetic-Based Pretraining in three different tasks that require improved numeracy, i.e., reading comprehension in the DROP dataset, inference-on-tables in the InfoTabs dataset, and table-to-text generation in the WikiBio and SciGen datasets.
[ "Petrak, Dominic", "Moosavi, Nafise Sadat", "Gurevych, Iryna" ]
Arithmetic-Based Pretraining Improving Numeracy of Pretrained Language Models
starsem-1.42
Poster
[ "https://github.com/ukplab/starsem2023-arithmetic-based-pretraining" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.starsem-1.43.bib
https://aclanthology.org/2023.starsem-1.43/
@inproceedings{beck-etal-2023-robust, title = "Robust Integration of Contextual Information for Cross-Target Stance Detection", author = "Beck, Tilman and Waldis, Andreas and Gurevych, Iryna", editor = "Palmer, Alexis and Camacho-collados, Jose", booktitle = "Proceedings of the 12th Joint Conference on Lexical and Computational Semantics (*SEM 2023)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.starsem-1.43", doi = "10.18653/v1/2023.starsem-1.43", pages = "494--511", abstract = "Stance detection deals with identifying an author{'}s stance towards a target. Most existing stance detection models are limited because they do not consider relevant contextual information which allows for inferring the stance correctly. Complementary context can be found in knowledge bases but integrating the context into pretrained language models is non-trivial due to the graph structure of standard knowledge bases. To overcome this, we explore an approach to integrate contextual information as text which allows for integrating contextual information from heterogeneous sources, such as structured knowledge sources and by prompting large language models. Our approach can outperform competitive baselines on a large and diverse stance detection benchmark in a cross-target setup, i.e. for targets unseen during training. We demonstrate that it is more robust to noisy context and can regularize for unwanted correlations between labels and target-specific vocabulary. Finally, it is independent of the pretrained language model in use.", }
Stance detection deals with identifying an author's stance towards a target. Most existing stance detection models are limited because they do not consider relevant contextual information which allows for inferring the stance correctly. Complementary context can be found in knowledge bases but integrating the context into pretrained language models is non-trivial due to the graph structure of standard knowledge bases. To overcome this, we explore an approach to integrate contextual information as text which allows for integrating contextual information from heterogeneous sources, such as structured knowledge sources and by prompting large language models. Our approach can outperform competitive baselines on a large and diverse stance detection benchmark in a cross-target setup, i.e. for targets unseen during training. We demonstrate that it is more robust to noisy context and can regularize for unwanted correlations between labels and target-specific vocabulary. Finally, it is independent of the pretrained language model in use.
[ "Beck, Tilman", "Waldis, Andreas", "Gurevych, Iryna" ]
Robust Integration of Contextual Information for Cross-Target Stance Detection
starsem-1.43
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.starsem-1.44.bib
https://aclanthology.org/2023.starsem-1.44/
@inproceedings{nikolaev-etal-2023-adverbs, title = "Adverbs, Surprisingly", author = "Nikolaev, Dmitry and Baker, Collin and Petruck, Miriam R. L. and Pad{\'o}, Sebastian", editor = "Palmer, Alexis and Camacho-collados, Jose", booktitle = "Proceedings of the 12th Joint Conference on Lexical and Computational Semantics (*SEM 2023)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.starsem-1.44", doi = "10.18653/v1/2023.starsem-1.44", pages = "512--526", abstract = "This paper begins with the premise that adverbs are neglected in computational linguistics. This view derives from two analyses: a literature review and a novel adverb dataset to probe a state-of-the-art language model, thereby uncovering systematic gaps in accounts for adverb meaning. We suggest that using Frame Semantics for characterizing word meaning, as in FrameNet, provides a promising approach to adverb analysis, given its ability to describe ambiguity, semantic roles, and null instantiation.", }
This paper begins with the premise that adverbs are neglected in computational linguistics. This view derives from two analyses: a literature review and a novel adverb dataset to probe a state-of-the-art language model, thereby uncovering systematic gaps in accounts for adverb meaning. We suggest that using Frame Semantics for characterizing word meaning, as in FrameNet, provides a promising approach to adverb analysis, given its ability to describe ambiguity, semantic roles, and null instantiation.
[ "Nikolaev, Dmitry", "Baker, Collin", "Petruck, Miriam R. L.", "Pad{\\'o}, Sebastian" ]
Adverbs, Surprisingly
starsem-1.44
Poster
2305.19650
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.starsem-1.45.bib
https://aclanthology.org/2023.starsem-1.45/
@inproceedings{zhou-etal-2023-sequence, title = "Can Sequence-to-Sequence Transformers Naturally Understand Sequential Instructions?", author = "Zhou, Xiang and Gupta, Aditya and Upadhyay, Shyam and Bansal, Mohit and Faruqui, Manaal", editor = "Palmer, Alexis and Camacho-collados, Jose", booktitle = "Proceedings of the 12th Joint Conference on Lexical and Computational Semantics (*SEM 2023)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.starsem-1.45", doi = "10.18653/v1/2023.starsem-1.45", pages = "527--534", abstract = "While many real-life tasks require reasoning over multi-step sequential instructions, collecting fine-grained annotations for each intermediate step can be prohibitively expensive. In this work, we study how general pretrained sequence-to-sequence transformers perform under varying types of annotation for sequential instruction understanding. We conduct experiments using T5 (Raffel et al., 2020) on a commonly-used multi-step instruction understanding dataset SCONE (Long et al., 2016) that includes three sub-tasks. First, we show that with only gold supervision for the final step of a multi-step instruction sequence, depending on the sequential properties of different tasks, transformers may exhibit extremely bad performance on intermediate steps, in stark contrast with their performance on the final step. Next, we explore two directions to relieve this problem. We show that with the same limited annotation budget, using supervision uniformly distributed across different steps (instead of only final-step supervision), we can greatly improve the performance on intermediate steps with a drop in final-step performance. Further, we explore a contrastive learning approach to provide training signals on intermediate steps with zero intermediate gold supervision. This, however, achieves mixed results. It significantly improves the model{'}s bad intermediate-step performance on one subtask, but also shows decreased performance on another subtask.", }
While many real-life tasks require reasoning over multi-step sequential instructions, collecting fine-grained annotations for each intermediate step can be prohibitively expensive. In this work, we study how general pretrained sequence-to-sequence transformers perform under varying types of annotation for sequential instruction understanding. We conduct experiments using T5 (Raffel et al., 2020) on a commonly-used multi-step instruction understanding dataset SCONE (Long et al., 2016) that includes three sub-tasks. First, we show that with only gold supervision for the final step of a multi-step instruction sequence, depending on the sequential properties of different tasks, transformers may exhibit extremely bad performance on intermediate steps, in stark contrast with their performance on the final step. Next, we explore two directions to relieve this problem. We show that with the same limited annotation budget, using supervision uniformly distributed across different steps (instead of only final-step supervision), we can greatly improve the performance on intermediate steps with a drop in final-step performance. Further, we explore a contrastive learning approach to provide training signals on intermediate steps with zero intermediate gold supervision. This, however, achieves mixed results. It significantly improves the model's bad intermediate-step performance on one subtask, but also shows decreased performance on another subtask.
[ "Zhou, Xiang", "Gupta, Aditya", "Upadhyay, Shyam", "Bansal, Mohit", "Faruqui, Manaal" ]
Can Sequence-to-Sequence Transformers Naturally Understand Sequential Instructions?
starsem-1.45
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.sustainlp-1.1.bib
https://aclanthology.org/2023.sustainlp-1.1/
@inproceedings{silwal-etal-2023-kwikbucks, title = "{K}wik{B}ucks: Correlation Clustering with Cheap-Weak and Expensive-Strong Signals", author = "Silwal, Sandeep and Ahmadian, Sara and Nystrom, Andrew and Mccallum, Andrew and Ramachandran, Deepak and Kazemi, Mehran", editor = "Sadat Moosavi, Nafise and Gurevych, Iryna and Hou, Yufang and Kim, Gyuwan and Kim, Young Jin and Schuster, Tal and Agrawal, Ameeta", booktitle = "Proceedings of The Fourth Workshop on Simple and Efficient Natural Language Processing (SustaiNLP)", month = jul, year = "2023", address = "Toronto, Canada (Hybrid)", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.sustainlp-1.1", doi = "10.18653/v1/2023.sustainlp-1.1", pages = "1--31", }
No abstract found
[ "Silwal, S", "eep", "Ahmadian, Sara", "Nystrom, Andrew", "Mccallum, Andrew", "Ramach", "ran, Deepak", "Kazemi, Mehran" ]
KwikBucks: Correlation Clustering with Cheap-Weak and Expensive-Strong Signals
sustainlp-1.1
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.sustainlp-1.2.bib
https://aclanthology.org/2023.sustainlp-1.2/
@inproceedings{liu-etal-2023-semantic, title = "Semantic-Oriented Unlabeled Priming for Large-Scale Language Models", author = "Liu, Yanchen and Schick, Timo and Schtze, Hinrich", editor = "Sadat Moosavi, Nafise and Gurevych, Iryna and Hou, Yufang and Kim, Gyuwan and Kim, Young Jin and Schuster, Tal and Agrawal, Ameeta", booktitle = "Proceedings of The Fourth Workshop on Simple and Efficient Natural Language Processing (SustaiNLP)", month = jul, year = "2023", address = "Toronto, Canada (Hybrid)", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.sustainlp-1.2", doi = "10.18653/v1/2023.sustainlp-1.2", pages = "32--38", }
No abstract found
[ "Liu, Yanchen", "Schick, Timo", "Schtze, Hinrich" ]
Semantic-Oriented Unlabeled Priming for Large-Scale Language Models
sustainlp-1.2
Poster
2202.06133
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.sustainlp-1.3.bib
https://aclanthology.org/2023.sustainlp-1.3/
@inproceedings{campos-etal-2023-oberta, title = "o{BERT}a: Improving Sparse Transfer Learning via improved initialization, distillation, and pruning regimes", author = "Campos, Daniel and Marques, Alexandre and Kurtz, Mark and Xiang Zhai, Cheng", editor = "Sadat Moosavi, Nafise and Gurevych, Iryna and Hou, Yufang and Kim, Gyuwan and Kim, Young Jin and Schuster, Tal and Agrawal, Ameeta", booktitle = "Proceedings of The Fourth Workshop on Simple and Efficient Natural Language Processing (SustaiNLP)", month = jul, year = "2023", address = "Toronto, Canada (Hybrid)", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.sustainlp-1.3", doi = "10.18653/v1/2023.sustainlp-1.3", pages = "39--58", }
No abstract found
[ "Campos, Daniel", "Marques, Alex", "re", "Kurtz, Mark", "Xiang Zhai, Cheng" ]
oBERTa: Improving Sparse Transfer Learning via improved initialization, distillation, and pruning regimes
sustainlp-1.3
Poster
2303.17612
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.sustainlp-1.4.bib
https://aclanthology.org/2023.sustainlp-1.4/
@inproceedings{campos-etal-2023-quick, title = "Quick Dense Retrievers Consume {KALE}: Post Training {K}ullback{L}eibler Alignment of Embeddings for Asymmetrical dual encoders", author = "Campos, Daniel and Magnani, Alessandro and Zhai, Chengxiang", editor = "Sadat Moosavi, Nafise and Gurevych, Iryna and Hou, Yufang and Kim, Gyuwan and Kim, Young Jin and Schuster, Tal and Agrawal, Ameeta", booktitle = "Proceedings of The Fourth Workshop on Simple and Efficient Natural Language Processing (SustaiNLP)", month = jul, year = "2023", address = "Toronto, Canada (Hybrid)", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.sustainlp-1.4", doi = "10.18653/v1/2023.sustainlp-1.4", pages = "59--77", }
No abstract found
[ "Campos, Daniel", "Magnani, Aless", "ro", "Zhai, Chengxiang" ]
Quick Dense Retrievers Consume KALE: Post Training Kullback-Leibler Alignment of Embeddings for Asymmetrical dual encoders
sustainlp-1.4
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.sustainlp-1.5.bib
https://aclanthology.org/2023.sustainlp-1.5/
@inproceedings{takase-kiyono-2023-lessons, title = "Lessons on Parameter Sharing across Layers in Transformers", author = "Takase, Sho and Kiyono, Shun", editor = "Sadat Moosavi, Nafise and Gurevych, Iryna and Hou, Yufang and Kim, Gyuwan and Kim, Young Jin and Schuster, Tal and Agrawal, Ameeta", booktitle = "Proceedings of The Fourth Workshop on Simple and Efficient Natural Language Processing (SustaiNLP)", month = jul, year = "2023", address = "Toronto, Canada (Hybrid)", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.sustainlp-1.5", doi = "10.18653/v1/2023.sustainlp-1.5", pages = "78--90", }
No abstract found
[ "Takase, Sho", "Kiyono, Shun" ]
Lessons on Parameter Sharing across Layers in Transformers
sustainlp-1.5
Poster
2104.06022
[ "https://github.com/takase/share_layer_params" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.sustainlp-1.6.bib
https://aclanthology.org/2023.sustainlp-1.6/
@inproceedings{campos-zhai-2023-asymmetry, title = "To Asymmetry and Beyond: Structured Pruning of Sequence to Sequence Models for Improved Inference Efficiency", author = "Campos, Daniel and Zhai, Chengxiang", editor = "Sadat Moosavi, Nafise and Gurevych, Iryna and Hou, Yufang and Kim, Gyuwan and Kim, Young Jin and Schuster, Tal and Agrawal, Ameeta", booktitle = "Proceedings of The Fourth Workshop on Simple and Efficient Natural Language Processing (SustaiNLP)", month = jul, year = "2023", address = "Toronto, Canada (Hybrid)", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.sustainlp-1.6", doi = "10.18653/v1/2023.sustainlp-1.6", pages = "91--109", }
No abstract found
[ "Campos, Daniel", "Zhai, Chengxiang" ]
To Asymmetry and Beyond: Structured Pruning of Sequence to Sequence Models for Improved Inference Efficiency
sustainlp-1.6
Poster
2304.02721
[ "" ]
https://huggingface.co/papers/2304.02721
1
3
0
2
1
[]
[]
[]
https://aclanthology.org/2023.sustainlp-1.7.bib
https://aclanthology.org/2023.sustainlp-1.7/
@inproceedings{liu-etal-2023-small, title = "Small is the New Big: Pre-finetuned compact models are better for Asynchronous Active Learning", author = "Liu, Dantong and Pavani, Kaushik and Dasgupta, Sunny", editor = "Sadat Moosavi, Nafise and Gurevych, Iryna and Hou, Yufang and Kim, Gyuwan and Kim, Young Jin and Schuster, Tal and Agrawal, Ameeta", booktitle = "Proceedings of The Fourth Workshop on Simple and Efficient Natural Language Processing (SustaiNLP)", month = jul, year = "2023", address = "Toronto, Canada (Hybrid)", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.sustainlp-1.7", doi = "10.18653/v1/2023.sustainlp-1.7", pages = "110--120", }
No abstract found
[ "Liu, Dantong", "Pavani, Kaushik", "Dasgupta, Sunny" ]
Small is the New Big: Pre-finetuned compact models are better for Asynchronous Active Learning
sustainlp-1.7
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.sustainlp-1.8.bib
https://aclanthology.org/2023.sustainlp-1.8/
@inproceedings{shah-etal-2023-adept, title = "{ADEPT}: Adapter-based Efficient Prompt Tuning Approach for Language Models", author = "Shah, Aditya and Thapa, Surendrabikram and Jain, Aneesh and Huang, Lifu", editor = "Sadat Moosavi, Nafise and Gurevych, Iryna and Hou, Yufang and Kim, Gyuwan and Kim, Young Jin and Schuster, Tal and Agrawal, Ameeta", booktitle = "Proceedings of The Fourth Workshop on Simple and Efficient Natural Language Processing (SustaiNLP)", month = jul, year = "2023", address = "Toronto, Canada (Hybrid)", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.sustainlp-1.8", doi = "10.18653/v1/2023.sustainlp-1.8", pages = "121--128", }
No abstract found
[ "Shah, Aditya", "Thapa, Surendrabikram", "Jain, Aneesh", "Huang, Lifu" ]
ADEPT: Adapter-based Efficient Prompt Tuning Approach for Language Models
sustainlp-1.8
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.sustainlp-1.9.bib
https://aclanthology.org/2023.sustainlp-1.9/
@inproceedings{attendu-corbeil-2023-nlu, title = "{NLU} on Data Diets: Dynamic Data Subset Selection for {NLP} Classification Tasks", author = "Attendu, Jean-michel and Corbeil, Jean-philippe", editor = "Sadat Moosavi, Nafise and Gurevych, Iryna and Hou, Yufang and Kim, Gyuwan and Kim, Young Jin and Schuster, Tal and Agrawal, Ameeta", booktitle = "Proceedings of The Fourth Workshop on Simple and Efficient Natural Language Processing (SustaiNLP)", month = jul, year = "2023", address = "Toronto, Canada (Hybrid)", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.sustainlp-1.9", doi = "10.18653/v1/2023.sustainlp-1.9", pages = "129--146", }
No abstract found
[ "Attendu, Jean-michel", "Corbeil, Jean-philippe" ]
NLU on Data Diets: Dynamic Data Subset Selection for NLP Classification Tasks
sustainlp-1.9
Poster
2306.03208
[ "https://github.com/jpcorbeil-nuance/nlu_data_diets" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.sustainlp-1.10.bib
https://aclanthology.org/2023.sustainlp-1.10/
@inproceedings{zhang-etal-2023-interactions, title = "On the Interactions of Structural Constraints and Data Resources for Structured Prediction", author = "Zhang, Zhisong and Strubell, Emma and Hovy, Eduard", editor = "Sadat Moosavi, Nafise and Gurevych, Iryna and Hou, Yufang and Kim, Gyuwan and Kim, Young Jin and Schuster, Tal and Agrawal, Ameeta", booktitle = "Proceedings of The Fourth Workshop on Simple and Efficient Natural Language Processing (SustaiNLP)", month = jul, year = "2023", address = "Toronto, Canada (Hybrid)", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.sustainlp-1.10", doi = "10.18653/v1/2023.sustainlp-1.10", pages = "147--157", }
No abstract found
[ "Zhang, Zhisong", "Strubell, Emma", "Hovy, Eduard" ]
On the Interactions of Structural Constraints and Data Resources for Structured Prediction
sustainlp-1.10
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.sustainlp-1.11.bib
https://aclanthology.org/2023.sustainlp-1.11/
@inproceedings{niklaus-giofre-2023-pretrain, title = "Can we Pretrain a {S}ot{A} Legal Language Model on a Budget From Scratch?", author = "Niklaus, Joel and Giofre, Daniele", editor = "Sadat Moosavi, Nafise and Gurevych, Iryna and Hou, Yufang and Kim, Gyuwan and Kim, Young Jin and Schuster, Tal and Agrawal, Ameeta", booktitle = "Proceedings of The Fourth Workshop on Simple and Efficient Natural Language Processing (SustaiNLP)", month = jul, year = "2023", address = "Toronto, Canada (Hybrid)", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.sustainlp-1.11", doi = "10.18653/v1/2023.sustainlp-1.11", pages = "158--182", }
No abstract found
[ "Niklaus, Joel", "Giofre, Daniele" ]
Can we Pretrain a SotA Legal Language Model on a Budget From Scratch?
sustainlp-1.11
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.sustainlp-1.12.bib
https://aclanthology.org/2023.sustainlp-1.12/
@inproceedings{lyu-etal-2023-video, title = "Is a Video worth n n Images? A Highly Efficient Approach to Transformer-based Video Question Answering", author = "Lyu, Chenyang and Ji, Tianbo and Graham, Yvette and Foster, Jennifer", editor = "Sadat Moosavi, Nafise and Gurevych, Iryna and Hou, Yufang and Kim, Gyuwan and Kim, Young Jin and Schuster, Tal and Agrawal, Ameeta", booktitle = "Proceedings of The Fourth Workshop on Simple and Efficient Natural Language Processing (SustaiNLP)", month = jul, year = "2023", address = "Toronto, Canada (Hybrid)", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.sustainlp-1.12", doi = "10.18653/v1/2023.sustainlp-1.12", pages = "183--189", }
No abstract found
[ "Lyu, Chenyang", "Ji, Tianbo", "Graham, Yvette", "Foster, Jennifer" ]
Is a Video worth n × n Images? A Highly Efficient Approach to Transformer-based Video Question Answering
sustainlp-1.12
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.sustainlp-1.13.bib
https://aclanthology.org/2023.sustainlp-1.13/
@inproceedings{xu-etal-2023-unleash, title = "How to Unleash the Power of Large Language Models for Few-shot Relation Extraction?", author = "Xu, Xin and Zhu, Yuqi and Wang, Xiaohan and Zhang, Ningyu", editor = "Sadat Moosavi, Nafise and Gurevych, Iryna and Hou, Yufang and Kim, Gyuwan and Kim, Young Jin and Schuster, Tal and Agrawal, Ameeta", booktitle = "Proceedings of The Fourth Workshop on Simple and Efficient Natural Language Processing (SustaiNLP)", month = jul, year = "2023", address = "Toronto, Canada (Hybrid)", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.sustainlp-1.13", doi = "10.18653/v1/2023.sustainlp-1.13", pages = "190--200", }
No abstract found
[ "Xu, Xin", "Zhu, Yuqi", "Wang, Xiaohan", "Zhang, Ningyu" ]
How to Unleash the Power of Large Language Models for Few-shot Relation Extraction?
sustainlp-1.13
Poster
2305.01555
[ "https://github.com/zjunlp/DeepKE/tree/main/example/llm" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.sustainlp-1.14.bib
https://aclanthology.org/2023.sustainlp-1.14/
@inproceedings{mohta-2023-prompting, title = "Prompting language models improves performance in imbalanced setting", author = "Mohta, Jay", editor = "Sadat Moosavi, Nafise and Gurevych, Iryna and Hou, Yufang and Kim, Gyuwan and Kim, Young Jin and Schuster, Tal and Agrawal, Ameeta", booktitle = "Proceedings of The Fourth Workshop on Simple and Efficient Natural Language Processing (SustaiNLP)", month = jul, year = "2023", address = "Toronto, Canada (Hybrid)", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.sustainlp-1.14", doi = "10.18653/v1/2023.sustainlp-1.14", pages = "201--211", }
No abstract found
[ "Mohta, Jay" ]
Prompting language models improves performance in imbalanced setting
sustainlp-1.14
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.sustainlp-1.15.bib
https://aclanthology.org/2023.sustainlp-1.15/
@inproceedings{mckenna-sen-2023-kgqa, title = "{KGQA} Without Retraining", author = "Mckenna, Nick and Sen, Priyanka", editor = "Sadat Moosavi, Nafise and Gurevych, Iryna and Hou, Yufang and Kim, Gyuwan and Kim, Young Jin and Schuster, Tal and Agrawal, Ameeta", booktitle = "Proceedings of The Fourth Workshop on Simple and Efficient Natural Language Processing (SustaiNLP)", month = jul, year = "2023", address = "Toronto, Canada (Hybrid)", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.sustainlp-1.15", doi = "10.18653/v1/2023.sustainlp-1.15", pages = "212--218", }
No abstract found
[ "Mckenna, Nick", "Sen, Priyanka" ]
KGQA Without Retraining
sustainlp-1.15
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.sustainlp-1.16.bib
https://aclanthology.org/2023.sustainlp-1.16/
@inproceedings{sonkar-etal-2023-maner, title = "{MANER}: Mask Augmented Named Entity Recognition for Extreme Low-Resource Languages", author = "Sonkar, Shashank and Wang, Zichao and Baraniuk, Richard", editor = "Sadat Moosavi, Nafise and Gurevych, Iryna and Hou, Yufang and Kim, Gyuwan and Kim, Young Jin and Schuster, Tal and Agrawal, Ameeta", booktitle = "Proceedings of The Fourth Workshop on Simple and Efficient Natural Language Processing (SustaiNLP)", month = jul, year = "2023", address = "Toronto, Canada (Hybrid)", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.sustainlp-1.16", doi = "10.18653/v1/2023.sustainlp-1.16", pages = "219--226", }
No abstract found
[ "Sonkar, Shashank", "Wang, Zichao", "Baraniuk, Richard" ]
MANER: Mask Augmented Named Entity Recognition for Extreme Low-Resource Languages
sustainlp-1.16
Poster
2212.09723
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.sustainlp-1.17.bib
https://aclanthology.org/2023.sustainlp-1.17/
@inproceedings{tang-etal-2023-efficient, title = "Efficient and Interpretable Compressive Text Summarisation with Unsupervised Dual-Agent Reinforcement Learning", author = "Tang, Peggy and Gao, Junbin and Zhang, Lei and Wang, Zhiyong", editor = "Sadat Moosavi, Nafise and Gurevych, Iryna and Hou, Yufang and Kim, Gyuwan and Kim, Young Jin and Schuster, Tal and Agrawal, Ameeta", booktitle = "Proceedings of The Fourth Workshop on Simple and Efficient Natural Language Processing (SustaiNLP)", month = jul, year = "2023", address = "Toronto, Canada (Hybrid)", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.sustainlp-1.17", doi = "10.18653/v1/2023.sustainlp-1.17", pages = "227--238", }
No abstract found
[ "Tang, Peggy", "Gao, Junbin", "Zhang, Lei", "Wang, Zhiyong" ]
Efficient and Interpretable Compressive Text Summarisation with Unsupervised Dual-Agent Reinforcement Learning
sustainlp-1.17
Poster
2306.03415
[ "https://github.com/peggypytang/urlcomsum" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.sustainlp-1.18.bib
https://aclanthology.org/2023.sustainlp-1.18/
@inproceedings{szumel-etal-2023-exploring, title = "Exploring the Effect of Frequency Resolution in {FN}et", author = "Szumel, Gregory and Khalighinejad, Ghazal and Stureborg, Rickard and Wiseman, Sam", editor = "Sadat Moosavi, Nafise and Gurevych, Iryna and Hou, Yufang and Kim, Gyuwan and Kim, Young Jin and Schuster, Tal and Agrawal, Ameeta", booktitle = "Proceedings of The Fourth Workshop on Simple and Efficient Natural Language Processing (SustaiNLP)", month = jul, year = "2023", address = "Toronto, Canada (Hybrid)", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.sustainlp-1.18", doi = "10.18653/v1/2023.sustainlp-1.18", pages = "239--244", }
No abstract found
[ "Szumel, Gregory", "Khalighinejad, Ghazal", "Stureborg, Rickard", "Wiseman, Sam" ]
Exploring the Effect of Frequency Resolution in FNet
sustainlp-1.18
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.sustainlp-1.19.bib
https://aclanthology.org/2023.sustainlp-1.19/
@inproceedings{anagnostopoulou-etal-2023-towards, title = "Towards Adaptable and Interactive Image Captioning with Data Augmentation and Episodic Memory", author = "Anagnostopoulou, Aliki and Hartmann, Mareike and Sonntag, Daniel", editor = "Sadat Moosavi, Nafise and Gurevych, Iryna and Hou, Yufang and Kim, Gyuwan and Kim, Young Jin and Schuster, Tal and Agrawal, Ameeta", booktitle = "Proceedings of The Fourth Workshop on Simple and Efficient Natural Language Processing (SustaiNLP)", month = jul, year = "2023", address = "Toronto, Canada (Hybrid)", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.sustainlp-1.19", doi = "10.18653/v1/2023.sustainlp-1.19", pages = "245--256", }
No abstract found
[ "Anagnostopoulou, Aliki", "Hartmann, Mareike", "Sonntag, Daniel" ]
Towards Adaptable and Interactive Image Captioning with Data Augmentation and Episodic Memory
sustainlp-1.19
Poster
2306.03500
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.sustainlp-1.20.bib
https://aclanthology.org/2023.sustainlp-1.20/
@inproceedings{agrawal-singh-2023-corpus, title = "Corpus Complexity Matters in Pretraining Language Models", author = "Agrawal, Ameeta and Singh, Suresh", editor = "Sadat Moosavi, Nafise and Gurevych, Iryna and Hou, Yufang and Kim, Gyuwan and Kim, Young Jin and Schuster, Tal and Agrawal, Ameeta", booktitle = "Proceedings of The Fourth Workshop on Simple and Efficient Natural Language Processing (SustaiNLP)", month = jul, year = "2023", address = "Toronto, Canada (Hybrid)", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.sustainlp-1.20", doi = "10.18653/v1/2023.sustainlp-1.20", pages = "257--263", }
No abstract found
[ "Agrawal, Ameeta", "Singh, Suresh" ]
Corpus Complexity Matters in Pretraining Language Models
sustainlp-1.20
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.sustainlp-1.21.bib
https://aclanthology.org/2023.sustainlp-1.21/
@inproceedings{han-etal-2023-personapkt, title = "{P}ersona{PKT}: Building Personalized Dialogue Agents via Parameter-efficient Knowledge Transfer", author = "Han, Xu and Guo, Bin and Jung, Yoon and Yao, Benjamin and Zhang, Yu and Liu, Xiaohu and Guo, Chenlei", editor = "Sadat Moosavi, Nafise and Gurevych, Iryna and Hou, Yufang and Kim, Gyuwan and Kim, Young Jin and Schuster, Tal and Agrawal, Ameeta", booktitle = "Proceedings of The Fourth Workshop on Simple and Efficient Natural Language Processing (SustaiNLP)", month = jul, year = "2023", address = "Toronto, Canada (Hybrid)", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.sustainlp-1.21", doi = "10.18653/v1/2023.sustainlp-1.21", pages = "264--273", }
No abstract found
[ "Han, Xu", "Guo, Bin", "Jung, Yoon", "Yao, Benjamin", "Zhang, Yu", "Liu, Xiaohu", "Guo, Chenlei" ]
PersonaPKT: Building Personalized Dialogue Agents via Parameter-efficient Knowledge Transfer
sustainlp-1.21
Poster
2306.08126
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.sustainlp-1.22.bib
https://aclanthology.org/2023.sustainlp-1.22/
@inproceedings{jawahar-etal-2023-small, title = "Small Character Models Match Large Word Models for Autocomplete Under Memory Constraints", author = "Jawahar, Ganesh and Mukherjee, Subhabrata and Dey, Debadeepta and Abdul-mageed, Muhammad and Lakshmanan, V.s., Laks and Mendes, Caio and De Rosa, Gustavo and Shah, Shital", editor = "Sadat Moosavi, Nafise and Gurevych, Iryna and Hou, Yufang and Kim, Gyuwan and Kim, Young Jin and Schuster, Tal and Agrawal, Ameeta", booktitle = "Proceedings of The Fourth Workshop on Simple and Efficient Natural Language Processing (SustaiNLP)", month = jul, year = "2023", address = "Toronto, Canada (Hybrid)", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.sustainlp-1.22", doi = "10.18653/v1/2023.sustainlp-1.22", pages = "274--289", }
No abstract found
[ "Jawahar, Ganesh", "Mukherjee, Subhabrata", "Dey, Debadeepta", "Abdul-mageed, Muhammad", "Lakshmanan, V.s., Laks", "Mendes, Caio", "De Rosa, Gustavo", "Shah, Shital" ]
Small Character Models Match Large Word Models for Autocomplete Under Memory Constraints
sustainlp-1.22
Poster
2210.03251
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.sustainlp-1.23.bib
https://aclanthology.org/2023.sustainlp-1.23/
@inproceedings{wang-hong-2023-query, title = "Query Encoder Distillation via Embedding Alignment is a Strong Baseline Method to Boost Dense Retriever Online Efficiency", author = "Wang, Yuxuan and Hong, Lyu", editor = "Sadat Moosavi, Nafise and Gurevych, Iryna and Hou, Yufang and Kim, Gyuwan and Kim, Young Jin and Schuster, Tal and Agrawal, Ameeta", booktitle = "Proceedings of The Fourth Workshop on Simple and Efficient Natural Language Processing (SustaiNLP)", month = jul, year = "2023", address = "Toronto, Canada (Hybrid)", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.sustainlp-1.23", doi = "10.18653/v1/2023.sustainlp-1.23", pages = "290--298", }
No abstract found
[ "Wang, Yuxuan", "Hong, Lyu" ]
Query Encoder Distillation via Embedding Alignment is a Strong Baseline Method to Boost Dense Retriever Online Efficiency
sustainlp-1.23
Poster
2306.11550
[ "https://github.com/guest400123064/distill-retriever" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.sustainlp-1.24.bib
https://aclanthology.org/2023.sustainlp-1.24/
@inproceedings{kruit-2023-minimalist, title = "Minimalist Entity Disambiguation for Mid-Resource Languages", author = "Kruit, Benno", editor = "Sadat Moosavi, Nafise and Gurevych, Iryna and Hou, Yufang and Kim, Gyuwan and Kim, Young Jin and Schuster, Tal and Agrawal, Ameeta", booktitle = "Proceedings of The Fourth Workshop on Simple and Efficient Natural Language Processing (SustaiNLP)", month = jul, year = "2023", address = "Toronto, Canada (Hybrid)", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.sustainlp-1.24", doi = "10.18653/v1/2023.sustainlp-1.24", pages = "299--306", }
No abstract found
[ "Kruit, Benno" ]
Minimalist Entity Disambiguation for Mid-Resource Languages
sustainlp-1.24
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.trustnlp-1.1.bib
https://aclanthology.org/2023.trustnlp-1.1/
@inproceedings{li-etal-2023-towards, title = "Towards Faithful Explanations for Text Classification with Robustness Improvement and Explanation Guided Training", author = "Li, Dongfang and Hu, Baotian and Chen, Qingcai and He, Shan", editor = "Ovalle, Anaelia and Chang, Kai-Wei and Mehrabi, Ninareh and Pruksachatkun, Yada and Galystan, Aram and Dhamala, Jwala and Verma, Apurv and Cao, Trista and Kumar, Anoop and Gupta, Rahul", booktitle = "Proceedings of the 3rd Workshop on Trustworthy Natural Language Processing (TrustNLP 2023)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.trustnlp-1.1", doi = "10.18653/v1/2023.trustnlp-1.1", pages = "1--14", abstract = "Feature attribution methods highlight the important input tokens as explanations to model predictions, which have been widely applied to deep neural networks towards trustworthy AI. However, recent works show that explanations provided by these methods face challenges of being faithful and robust. In this paper, we propose a method with Robustness improvement and Explanation Guided training towards more faithful EXplanations (REGEX) for text classification. First, we improve model robustness by input gradient regularization technique and virtual adversarial training. Secondly, we use salient ranking to mask noisy tokens and maximize the similarity between model attention and feature attribution, which can be seen as a self-training procedure without importing other external information. We conduct extensive experiments on six datasets with five attribution methods, and also evaluate the faithfulness in the out-of-domain setting. The results show that REGEX improves fidelity metrics of explanations in all settings and further achieves consistent gains based on two randomization tests. Moreover, we show that using highlight explanations produced by REGEX to train select-then-predict models results in comparable task performance to the end-to-end method.", }
Feature attribution methods highlight the important input tokens as explanations to model predictions, which have been widely applied to deep neural networks towards trustworthy AI. However, recent works show that explanations provided by these methods face challenges of being faithful and robust. In this paper, we propose a method with Robustness improvement and Explanation Guided training towards more faithful EXplanations (REGEX) for text classification. First, we improve model robustness by input gradient regularization technique and virtual adversarial training. Secondly, we use salient ranking to mask noisy tokens and maximize the similarity between model attention and feature attribution, which can be seen as a self-training procedure without importing other external information. We conduct extensive experiments on six datasets with five attribution methods, and also evaluate the faithfulness in the out-of-domain setting. The results show that REGEX improves fidelity metrics of explanations in all settings and further achieves consistent gains based on two randomization tests. Moreover, we show that using highlight explanations produced by REGEX to train select-then-predict models results in comparable task performance to the end-to-end method.
[ "Li, Dongfang", "Hu, Baotian", "Chen, Qingcai", "He, Shan" ]
Towards Faithful Explanations for Text Classification with Robustness Improvement and Explanation Guided Training
trustnlp-1.1
Poster
2312.17591
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.trustnlp-1.2.bib
https://aclanthology.org/2023.trustnlp-1.2/
@inproceedings{arnold-etal-2023-driving, title = "Driving Context into Text-to-Text Privatization", author = "Arnold, Stefan and Yesilbas, Dilara and Weinzierl, Sven", editor = "Ovalle, Anaelia and Chang, Kai-Wei and Mehrabi, Ninareh and Pruksachatkun, Yada and Galystan, Aram and Dhamala, Jwala and Verma, Apurv and Cao, Trista and Kumar, Anoop and Gupta, Rahul", booktitle = "Proceedings of the 3rd Workshop on Trustworthy Natural Language Processing (TrustNLP 2023)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.trustnlp-1.2", doi = "10.18653/v1/2023.trustnlp-1.2", pages = "15--25", abstract = "Metric Differential Privacy enables text-to-text privatization by adding calibrated noise to the vector of a word derived from an embedding space and projecting this noisy vector back to a discrete vocabulary using a nearest neighbor search. Since words are substituted without context, this mechanism is expected to fall short at finding substitutes for words with ambiguous meanings, such as {`}bank{'}. To account for these ambiguous words, we leverage a sense embedding and incorporate a sense disambiguation step prior to noise injection. We encompass our modification to the privatization mechanism with an estimation of privacy and utility. For word sense disambiguation on the Words in Context dataset, we demonstrate a substantial increase in classification accuracy by 6.05{\%}.", }
Metric Differential Privacy enables text-to-text privatization by adding calibrated noise to the vector of a word derived from an embedding space and projecting this noisy vector back to a discrete vocabulary using a nearest neighbor search. Since words are substituted without context, this mechanism is expected to fall short at finding substitutes for words with ambiguous meanings, such as 'bank'. To account for these ambiguous words, we leverage a sense embedding and incorporate a sense disambiguation step prior to noise injection. We encompass our modification to the privatization mechanism with an estimation of privacy and utility. For word sense disambiguation on the Words in Context dataset, we demonstrate a substantial increase in classification accuracy by 6.05%.
[ "Arnold, Stefan", "Yesilbas, Dilara", "Weinzierl, Sven" ]
Driving Context into Text-to-Text Privatization
trustnlp-1.2
Poster
2306.01457
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.trustnlp-1.3.bib
https://aclanthology.org/2023.trustnlp-1.3/
@inproceedings{narayanan-venkit-etal-2023-automated, title = "Automated Ableism: An Exploration of Explicit Disability Biases in Sentiment and Toxicity Analysis Models", author = "Narayanan Venkit, Pranav and Srinath, Mukund and Wilson, Shomir", editor = "Ovalle, Anaelia and Chang, Kai-Wei and Mehrabi, Ninareh and Pruksachatkun, Yada and Galystan, Aram and Dhamala, Jwala and Verma, Apurv and Cao, Trista and Kumar, Anoop and Gupta, Rahul", booktitle = "Proceedings of the 3rd Workshop on Trustworthy Natural Language Processing (TrustNLP 2023)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.trustnlp-1.3", doi = "10.18653/v1/2023.trustnlp-1.3", pages = "26--34", abstract = "We analyze sentiment analysis and toxicity detection models to detect the presence of explicit bias against people with disability (PWD). We employ the bias identification framework of Perturbation Sensitivity Analysis to examine conversations related to PWD on social media platforms, specifically Twitter and Reddit, in order to gain insight into how disability bias is disseminated in real-world social settings. We then create the Bias Identification Test in Sentiment (BITS) corpus to quantify explicit disability bias in any sentiment analysis and toxicity detection models. Our study utilizes BITS to uncover significant biases in four open AIaaS (AI as a Service) sentiment analysis tools, namely TextBlob, VADER, Google Cloud Natural Language API, DistilBERT and two toxicity detection models, namely two versions of Toxic-BERT. Our findings indicate that all of these models exhibit statistically significant explicit bias against PWD.", }
We analyze sentiment analysis and toxicity detection models to detect the presence of explicit bias against people with disability (PWD). We employ the bias identification framework of Perturbation Sensitivity Analysis to examine conversations related to PWD on social media platforms, specifically Twitter and Reddit, in order to gain insight into how disability bias is disseminated in real-world social settings. We then create the Bias Identification Test in Sentiment (BITS) corpus to quantify explicit disability bias in any sentiment analysis and toxicity detection models. Our study utilizes BITS to uncover significant biases in four open AIaaS (AI as a Service) sentiment analysis tools, namely TextBlob, VADER, Google Cloud Natural Language API, DistilBERT and two toxicity detection models, namely two versions of Toxic-BERT. Our findings indicate that all of these models exhibit statistically significant explicit bias against PWD.
[ "Narayanan Venkit, Pranav", "Srinath, Mukund", "Wilson, Shomir" ]
Automated Ableism: An Exploration of Explicit Disability Biases in Sentiment and Toxicity Analysis Models
trustnlp-1.3
Poster
2307.09209
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.trustnlp-1.4.bib
https://aclanthology.org/2023.trustnlp-1.4/
@inproceedings{cao-etal-2023-pay-attention, title = "Pay Attention to the Robustness of {C}hinese Minority Language Models! Syllable-level Textual Adversarial Attack on {T}ibetan Script", author = "Cao, Xi and Dawa, Dolma and Qun, Nuo and Nyima, Trashi", editor = "Ovalle, Anaelia and Chang, Kai-Wei and Mehrabi, Ninareh and Pruksachatkun, Yada and Galystan, Aram and Dhamala, Jwala and Verma, Apurv and Cao, Trista and Kumar, Anoop and Gupta, Rahul", booktitle = "Proceedings of the 3rd Workshop on Trustworthy Natural Language Processing (TrustNLP 2023)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.trustnlp-1.4", doi = "10.18653/v1/2023.trustnlp-1.4", pages = "35--46", abstract = "The textual adversarial attack refers to an attack method in which the attacker adds imperceptible perturbations to the original texts by elaborate design so that the NLP (natural language processing) model produces false judgments. This method is also used to evaluate the robustness of NLP models. Currently, most of the research in this field focuses on English, and there is also a certain amount of research on Chinese. However, to the best of our knowledge, there is little research targeting Chinese minority languages. Textual adversarial attacks are a new challenge for the information processing of Chinese minority languages. In response to this situation, we propose a Tibetan syllable-level black-box textual adversarial attack called TSAttacker based on syllable cosine distance and scoring mechanism. And then, we conduct TSAttacker on six models generated by fine-tuning two PLMs (pre-trained language models) for three downstream tasks. The experiment results show that TSAttacker is effective and generates high-quality adversarial samples. In addition, the robustness of the involved models still has much room for improvement.", }
The textual adversarial attack refers to an attack method in which the attacker adds imperceptible perturbations to the original texts by elaborate design so that the NLP (natural language processing) model produces false judgments. This method is also used to evaluate the robustness of NLP models. Currently, most of the research in this field focuses on English, and there is also a certain amount of research on Chinese. However, to the best of our knowledge, there is little research targeting Chinese minority languages. Textual adversarial attacks are a new challenge for the information processing of Chinese minority languages. In response to this situation, we propose a Tibetan syllable-level black-box textual adversarial attack called TSAttacker based on syllable cosine distance and scoring mechanism. And then, we conduct TSAttacker on six models generated by fine-tuning two PLMs (pre-trained language models) for three downstream tasks. The experiment results show that TSAttacker is effective and generates high-quality adversarial samples. In addition, the robustness of the involved models still has much room for improvement.
[ "Cao, Xi", "Dawa, Dolma", "Qun, Nuo", "Nyima, Trashi" ]
Pay Attention to the Robustness of Chinese Minority Language Models! Syllable-level Textual Adversarial Attack on Tibetan Script
trustnlp-1.4
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.trustnlp-1.5.bib
https://aclanthology.org/2023.trustnlp-1.5/
@inproceedings{aiyappa-etal-2023-trust, title = "Can we trust the evaluation on {C}hat{GPT}?", author = "Aiyappa, Rachith and An, Jisun and Kwak, Haewoon and Ahn, Yong-yeol", editor = "Ovalle, Anaelia and Chang, Kai-Wei and Mehrabi, Ninareh and Pruksachatkun, Yada and Galystan, Aram and Dhamala, Jwala and Verma, Apurv and Cao, Trista and Kumar, Anoop and Gupta, Rahul", booktitle = "Proceedings of the 3rd Workshop on Trustworthy Natural Language Processing (TrustNLP 2023)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.trustnlp-1.5", doi = "10.18653/v1/2023.trustnlp-1.5", pages = "47--54", abstract = "ChatGPT, the first large language model with mass adoption, has demonstrated remarkableperformance in numerous natural language tasks. Despite its evident usefulness, evaluatingChatGPT{'}s performance in diverse problem domains remains challenging due to the closednature of the model and its continuous updates via Reinforcement Learning from HumanFeedback (RLHF). We highlight the issue of data contamination in ChatGPT evaluations, with a case study in stance detection. We discuss the challenge of preventing data contamination and ensuring fair model evaluation in the age of closed and continuously trained models.", }
ChatGPT, the first large language model with mass adoption, has demonstrated remarkable performance in numerous natural language tasks. Despite its evident usefulness, evaluating ChatGPT's performance in diverse problem domains remains challenging due to the closed nature of the model and its continuous updates via Reinforcement Learning from Human Feedback (RLHF). We highlight the issue of data contamination in ChatGPT evaluations, with a case study in stance detection. We discuss the challenge of preventing data contamination and ensuring fair model evaluation in the age of closed and continuously trained models.
[ "Aiyappa, Rachith", "An, Jisun", "Kwak, Haewoon", "Ahn, Yong-yeol" ]
Can we trust the evaluation on ChatGPT?
trustnlp-1.5
Poster
2303.12767
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.trustnlp-1.6.bib
https://aclanthology.org/2023.trustnlp-1.6/
@inproceedings{chern-etal-2023-improving, title = "Improving Factuality of Abstractive Summarization via Contrastive Reward Learning", author = "Chern, I-chun and Wang, Zhiruo and Das, Sanjan and Sharma, Bhavuk and Liu, Pengfei and Neubig, Graham", editor = "Ovalle, Anaelia and Chang, Kai-Wei and Mehrabi, Ninareh and Pruksachatkun, Yada and Galystan, Aram and Dhamala, Jwala and Verma, Apurv and Cao, Trista and Kumar, Anoop and Gupta, Rahul", booktitle = "Proceedings of the 3rd Workshop on Trustworthy Natural Language Processing (TrustNLP 2023)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.trustnlp-1.6", doi = "10.18653/v1/2023.trustnlp-1.6", pages = "55--60", abstract = "Modern abstractive summarization models often generate summaries that contain hallucinated or contradictory information. In this paper, we propose a simple but effective contrastive learning framework that incorporates recent developments in reward learning and factuality metrics. Empirical studies demonstrate that the proposed framework enables summarization models to learn from feedback of factuality metrics using contrastive reward learning, leading to more factual summaries by human evaluations. This suggests that further advances in learning and evaluation algorithms can feed directly into providing more factual summaries. Code and human evaluation results will be publicly available at {\textbackslash}url{https://github.com/EthanC111/factuality{\_}summarization}.", }
Modern abstractive summarization models often generate summaries that contain hallucinated or contradictory information. In this paper, we propose a simple but effective contrastive learning framework that incorporates recent developments in reward learning and factuality metrics. Empirical studies demonstrate that the proposed framework enables summarization models to learn from feedback of factuality metrics using contrastive reward learning, leading to more factual summaries by human evaluations. This suggests that further advances in learning and evaluation algorithms can feed directly into providing more factual summaries. Code and human evaluation results will be publicly available at https://github.com/EthanC111/factuality_summarization.
[ "Chern, I-chun", "Wang, Zhiruo", "Das, Sanjan", "Sharma, Bhavuk", "Liu, Pengfei", "Neubig, Graham" ]
Improving Factuality of Abstractive Summarization via Contrastive Reward Learning
trustnlp-1.6
Poster
2307.04507
[ "" ]
https://huggingface.co/papers/2307.04507
1
0
0
6
1
[]
[]
[]
https://aclanthology.org/2023.trustnlp-1.7.bib
https://aclanthology.org/2023.trustnlp-1.7/
@inproceedings{jeoung-etal-2023-examining, title = "Examining the Causal Impact of First Names on Language Models: The Case of Social Commonsense Reasoning", author = "Jeoung, Sullam and Diesner, Jana and Kilicoglu, Halil", editor = "Ovalle, Anaelia and Chang, Kai-Wei and Mehrabi, Ninareh and Pruksachatkun, Yada and Galystan, Aram and Dhamala, Jwala and Verma, Apurv and Cao, Trista and Kumar, Anoop and Gupta, Rahul", booktitle = "Proceedings of the 3rd Workshop on Trustworthy Natural Language Processing (TrustNLP 2023)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.trustnlp-1.7", doi = "10.18653/v1/2023.trustnlp-1.7", pages = "61--72", abstract = "As language models continue to be integrated into applications of personal and societal relevance, ensuring these models{'} trustworthiness is crucial, particularly with respect to producing consistent outputs regardless of sensitive attributes. Given that first names may serve as proxies for (intersectional) socio-demographic representations, it is imperative to examine the impact of first names on commonsense reasoning capabilities. In this paper, we study whether a model{'}s reasoning given a specific input differs based on the first names provided. Our underlying assumption is that the reasoning about Alice should not differ from the reasoning about James. We propose and implement a controlled experimental framework to measure the causal effect of first names on commonsense reasoning, enabling us to distinguish between model predictions due to chance and caused by actual factors of interest. Our results indicate that the frequency of first names has a direct effect on model prediction, with less frequent names yielding divergent predictions compared to more frequent names. To gain insights into the internal mechanisms of models that are contributing to these behaviors, we also conduct an in-depth explainable analysis. Overall, our findings suggest that to ensure model robustness, it is essential to augment datasets with more diverse first names during the configuration stage.", }
As language models continue to be integrated into applications of personal and societal relevance, ensuring these models' trustworthiness is crucial, particularly with respect to producing consistent outputs regardless of sensitive attributes. Given that first names may serve as proxies for (intersectional) socio-demographic representations, it is imperative to examine the impact of first names on commonsense reasoning capabilities. In this paper, we study whether a model's reasoning given a specific input differs based on the first names provided. Our underlying assumption is that the reasoning about Alice should not differ from the reasoning about James. We propose and implement a controlled experimental framework to measure the causal effect of first names on commonsense reasoning, enabling us to distinguish between model predictions due to chance and caused by actual factors of interest. Our results indicate that the frequency of first names has a direct effect on model prediction, with less frequent names yielding divergent predictions compared to more frequent names. To gain insights into the internal mechanisms of models that are contributing to these behaviors, we also conduct an in-depth explainable analysis. Overall, our findings suggest that to ensure model robustness, it is essential to augment datasets with more diverse first names during the configuration stage.
[ "Jeoung, Sullam", "Diesner, Jana", "Kilicoglu, Halil" ]
Examining the Causal Impact of First Names on Language Models: The Case of Social Commonsense Reasoning
trustnlp-1.7
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
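The entry above (trustnlp-1.7) measures whether predictions change when only a first name is swapped. A toy, model-agnostic version of that intervention might look like the following; `predict` is a placeholder for any classifier call, and the template and names are illustrative.

```python
# Illustrative sketch (not the authors' exact framework): check whether a
# classifier's prediction on a commonsense template changes when only the
# first name is swapped.
from itertools import combinations
from typing import Callable, List

def name_effect_rate(template: str,
                     names: List[str],
                     predict: Callable[[str], str]) -> float:
    """Fraction of name pairs for which predictions disagree; 0.0 means the
    model is invariant to the first name in this template."""
    preds = {name: predict(template.format(name=name)) for name in names}
    pairs = list(combinations(names, 2))
    flips = sum(preds[a] != preds[b] for a, b in pairs)
    return flips / len(pairs)

# toy usage with a dummy model that (badly) keys on input length
template = "{name} forgot their friend's birthday. How would others feel?"
dummy_predict = lambda text: "negative" if len(text) % 2 else "neutral"
print(name_effect_rate(template, ["Alice", "James", "Lakisha", "Brad"], dummy_predict))
```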
https://aclanthology.org/2023.trustnlp-1.8.bib
https://aclanthology.org/2023.trustnlp-1.8/
@inproceedings{khatun-brown-2023-reliability, title = "Reliability Check: An Analysis of {GPT}-3{'}s Response to Sensitive Topics and Prompt Wording", author = "Khatun, Aisha and Brown, Daniel", editor = "Ovalle, Anaelia and Chang, Kai-Wei and Mehrabi, Ninareh and Pruksachatkun, Yada and Galystan, Aram and Dhamala, Jwala and Verma, Apurv and Cao, Trista and Kumar, Anoop and Gupta, Rahul", booktitle = "Proceedings of the 3rd Workshop on Trustworthy Natural Language Processing (TrustNLP 2023)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.trustnlp-1.8", doi = "10.18653/v1/2023.trustnlp-1.8", pages = "73--95", abstract = "Large language models (LLMs) have become mainstream technology with their versatile use cases and impressive performance. Despite the countless out-of-the-box applications, LLMs are still not reliable. A lot of work is being done to improve the factual accuracy, consistency, and ethical standards of these models through fine-tuning, prompting, and Reinforcement Learning with Human Feedback (RLHF), but no systematic analysis of the responses of these models to different categories of statements, or on their potential vulnerabilities to simple prompting changes is available. In this work, we analyze what confuses GPT-3: how the model responds to certain sensitive topics and what effects the prompt wording has on the model response. We find that GPT-3 correctly disagrees with obvious Conspiracies and Stereotypes but makes mistakes with common Misconceptions and Controversies. The model responses are inconsistent across prompts and settings, highlighting GPT-3{'}s unreliability.", }
Large language models (LLMs) have become mainstream technology with their versatile use cases and impressive performance. Despite the countless out-of-the-box applications, LLMs are still not reliable. A lot of work is being done to improve the factual accuracy, consistency, and ethical standards of these models through fine-tuning, prompting, and Reinforcement Learning with Human Feedback (RLHF), but no systematic analysis of the responses of these models to different categories of statements, or on their potential vulnerabilities to simple prompting changes is available. In this work, we analyze what confuses GPT-3: how the model responds to certain sensitive topics and what effects the prompt wording has on the model response. We find that GPT-3 correctly disagrees with obvious Conspiracies and Stereotypes but makes mistakes with common Misconceptions and Controversies. The model responses are inconsistent across prompts and settings, highlighting GPT-3's unreliability.
[ "Khatun, Aisha", "Brown, Daniel" ]
Reliability Check: An Analysis of GPT-3's Response to Sensitive Topics and Prompt Wording
trustnlp-1.8
Poster
2306.06199
[ "https://github.com/tanny411/gpt3-reliability-check" ]
https://huggingface.co/papers/2306.06199
1
0
0
2
1
[]
[]
[]
https://aclanthology.org/2023.trustnlp-1.9.bib
https://aclanthology.org/2023.trustnlp-1.9/
@inproceedings{raina-gales-2023-sample, title = "Sample Attackability in Natural Language Adversarial Attacks", author = "Raina, Vyas and Gales, Mark", editor = "Ovalle, Anaelia and Chang, Kai-Wei and Mehrabi, Ninareh and Pruksachatkun, Yada and Galystan, Aram and Dhamala, Jwala and Verma, Apurv and Cao, Trista and Kumar, Anoop and Gupta, Rahul", booktitle = "Proceedings of the 3rd Workshop on Trustworthy Natural Language Processing (TrustNLP 2023)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.trustnlp-1.9", doi = "10.18653/v1/2023.trustnlp-1.9", pages = "96--107", abstract = "Adversarial attack research in natural language processing (NLP) has made significant progress in designing powerful attack methods and defence approaches. However, few efforts have sought to identify which source samples are the most attackable or robust, i.e. can we determine for an unseen target model, which samples are the most vulnerable to an adversarial attack. This work formally extends the definition of sample attackability/robustness for NLP attacks. Experiments on two popular NLP datasets, four state of the art models and four different NLP adversarial attack methods, demonstrate that sample uncertainty is insufficient for describing characteristics of attackable/robust samples and hence a deep learning based detector can perform much better at identifying the most attackable and robust samples for an unseen target model. Nevertheless, further analysis finds that there is little agreement in which samples are considered the most attackable/robust across different NLP attack methods, explaining a lack of portability of attackability detection methods across attack methods.", }
Adversarial attack research in natural language processing (NLP) has made significant progress in designing powerful attack methods and defence approaches. However, few efforts have sought to identify which source samples are the most attackable or robust, i.e. can we determine for an unseen target model, which samples are the most vulnerable to an adversarial attack. This work formally extends the definition of sample attackability/robustness for NLP attacks. Experiments on two popular NLP datasets, four state of the art models and four different NLP adversarial attack methods, demonstrate that sample uncertainty is insufficient for describing characteristics of attackable/robust samples and hence a deep learning based detector can perform much better at identifying the most attackable and robust samples for an unseen target model. Nevertheless, further analysis finds that there is little agreement in which samples are considered the most attackable/robust across different NLP attack methods, explaining a lack of portability of attackability detection methods across attack methods.
[ "Raina, Vyas", "Gales, Mark" ]
Sample Attackability in Natural Language Adversarial Attacks
trustnlp-1.9
Poster
2306.12043
[ "https://github.com/rainavyas/nlp_attackability" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.trustnlp-1.10.bib
https://aclanthology.org/2023.trustnlp-1.10/
@inproceedings{yee-etal-2023-keyword, title = "A Keyword Based Approach to Understanding the Overpenalization of Marginalized Groups by {E}nglish Marginal Abuse Models on {T}witter", author = "Yee, Kyra and Schoenauer Sebag, Alice and Redfield, Olivia and Eck, Matthias and Sheng, Emily and Belli, Luca", editor = "Ovalle, Anaelia and Chang, Kai-Wei and Mehrabi, Ninareh and Pruksachatkun, Yada and Galystan, Aram and Dhamala, Jwala and Verma, Apurv and Cao, Trista and Kumar, Anoop and Gupta, Rahul", booktitle = "Proceedings of the 3rd Workshop on Trustworthy Natural Language Processing (TrustNLP 2023)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.trustnlp-1.10", doi = "10.18653/v1/2023.trustnlp-1.10", pages = "108--120", abstract = "Harmful content detection models tend to have higher false positive rates for content from marginalized groups. In the context of marginal abuse modeling on Twitter, such disproportionate penalization poses the risk of reduced visibility, where marginalized communities lose the opportunity to voice their opinion on the platform. Current approaches to algorithmic harm mitigation, and bias detection for NLP models are often very ad hoc and subject to human bias. We make two main contributions in this paper. First, we design a novel methodology, which provides a principled approach to detecting and measuring the severity of potential harms associated with a text-based model. Second, we apply our methodology to audit Twitter{'}s English marginal abuse model, which is used for removing amplification eligibility of marginally abusive content. Without utilizing demographic labels or dialect classifiers, we are still able to detect and measure the severity of issues related to the over-penalization of the speech of marginalized communities, such as the use of reclaimed speech, counterspeech, and identity related terms. In order to mitigate the associated harms, we experiment with adding additional true negative examples and find that doing so provides improvements to our fairness metrics without large degradations in model performance.", }
Harmful content detection models tend to have higher false positive rates for content from marginalized groups. In the context of marginal abuse modeling on Twitter, such disproportionate penalization poses the risk of reduced visibility, where marginalized communities lose the opportunity to voice their opinion on the platform. Current approaches to algorithmic harm mitigation, and bias detection for NLP models are often very ad hoc and subject to human bias. We make two main contributions in this paper. First, we design a novel methodology, which provides a principled approach to detecting and measuring the severity of potential harms associated with a text-based model. Second, we apply our methodology to audit Twitter's English marginal abuse model, which is used for removing amplification eligibility of marginally abusive content. Without utilizing demographic labels or dialect classifiers, we are still able to detect and measure the severity of issues related to the over-penalization of the speech of marginalized communities, such as the use of reclaimed speech, counterspeech, and identity related terms. In order to mitigate the associated harms, we experiment with adding additional true negative examples and find that doing so provides improvements to our fairness metrics without large degradations in model performance.
[ "Yee, Kyra", "Schoenauer Sebag, Alice", "Redfield, Olivia", "Eck, Matthias", "Sheng, Emily", "Belli, Luca" ]
A Keyword Based Approach to Understanding the Overpenalization of Marginalized Groups by English Marginal Abuse Models on Twitter
trustnlp-1.10
Poster
2210.06351
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.trustnlp-1.11.bib
https://aclanthology.org/2023.trustnlp-1.11/
@inproceedings{hosseini-etal-2023-empirical, title = "An Empirical Study of Metrics to Measure Representational Harms in Pre-Trained Language Models", author = "Hosseini, Saghar and Palangi, Hamid and Awadallah, Ahmed Hassan", editor = "Ovalle, Anaelia and Chang, Kai-Wei and Mehrabi, Ninareh and Pruksachatkun, Yada and Galystan, Aram and Dhamala, Jwala and Verma, Apurv and Cao, Trista and Kumar, Anoop and Gupta, Rahul", booktitle = "Proceedings of the 3rd Workshop on Trustworthy Natural Language Processing (TrustNLP 2023)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.trustnlp-1.11", doi = "10.18653/v1/2023.trustnlp-1.11", pages = "121--134", abstract = "Large-scale Pre-Trained Language Models (PTLMs) capture knowledge from massive human-written data which contains latent societal biases and toxic contents. In this paper, we leverage the primary task of PTLMs, i.e., language modeling, and propose a new metric to quantify manifested implicit representational harms in PTLMs towards 13 marginalized demographics. Using this metric, we conducted an empirical analysis of 24 widely used PTLMs. Our analysis provides insights into the correlation between the proposed metric in this work and other related metrics for representational harm. We observe that our metric correlates with most of the gender-specific metrics in the literature. Through extensive experiments, we explore the connections between PTLMs architectures and representational harms across two dimensions: depth and width of the networks. We found that prioritizing depth over width, mitigates representational harms in some PTLMs. Our code and data can be found at [place holder].", }
Large-scale Pre-Trained Language Models (PTLMs) capture knowledge from massive human-written data which contains latent societal biases and toxic contents. In this paper, we leverage the primary task of PTLMs, i.e., language modeling, and propose a new metric to quantify manifested implicit representational harms in PTLMs towards 13 marginalized demographics. Using this metric, we conducted an empirical analysis of 24 widely used PTLMs. Our analysis provides insights into the correlation between the proposed metric in this work and other related metrics for representational harm. We observe that our metric correlates with most of the gender-specific metrics in the literature. Through extensive experiments, we explore the connections between PTLMs architectures and representational harms across two dimensions: depth and width of the networks. We found that prioritizing depth over width, mitigates representational harms in some PTLMs. Our code and data can be found at [place holder].
[ "Hosseini, Saghar", "Palangi, Hamid", "Awadallah, Ahmed Hassan" ]
An Empirical Study of Metrics to Measure Representational Harms in Pre-Trained Language Models
trustnlp-1.11
Poster
2301.09211
[ "https://github.com/microsoft/SafeNLP" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.trustnlp-1.12.bib
https://aclanthology.org/2023.trustnlp-1.12/
@inproceedings{lee-etal-2023-linguistic, title = "Linguistic Properties of Truthful Response", author = "Lee, Bruce W. and Arockiaraj, Benedict Florance and Jin, Helen", editor = "Ovalle, Anaelia and Chang, Kai-Wei and Mehrabi, Ninareh and Pruksachatkun, Yada and Galystan, Aram and Dhamala, Jwala and Verma, Apurv and Cao, Trista and Kumar, Anoop and Gupta, Rahul", booktitle = "Proceedings of the 3rd Workshop on Trustworthy Natural Language Processing (TrustNLP 2023)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.trustnlp-1.12", doi = "10.18653/v1/2023.trustnlp-1.12", pages = "135--140", abstract = "We investigate the phenomenon of an LLM{'}s untruthful response using a large set of 220 handcrafted linguistic features. We focus on GPT-3 models and find that the linguistic profiles of responses are similar across model sizes. That is, how varying-sized LLMs respond to given prompts stays similar on the linguistic properties level. We expand upon this finding by training support vector machines that rely only upon the stylistic components of model responses to classify the truthfulness of statements. Though the dataset size limits our current findings, we present promising evidence that truthfulness detection is possible without evaluating the content itself. We release our code and raw data.", }
We investigate the phenomenon of an LLM's untruthful response using a large set of 220 handcrafted linguistic features. We focus on GPT-3 models and find that the linguistic profiles of responses are similar across model sizes. That is, how varying-sized LLMs respond to given prompts stays similar on the linguistic properties level. We expand upon this finding by training support vector machines that rely only upon the stylistic components of model responses to classify the truthfulness of statements. Though the dataset size limits our current findings, we present promising evidence that truthfulness detection is possible without evaluating the content itself. We release our code and raw data.
[ "Lee, Bruce W.", "Arockiaraj, Benedict Florance", "Jin, Helen" ]
Linguistic Properties of Truthful Response
trustnlp-1.12
Poster
2305.15875
[ "" ]
https://huggingface.co/papers/2305.15875
0
0
0
3
1
[]
[]
[]
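The entry above (trustnlp-1.12) trains support vector machines on stylistic features alone to classify truthfulness. A minimal sketch of that general recipe, with random placeholder features standing in for the 220 handcrafted linguistic features, could look like this:

```python
# Minimal sketch under stated assumptions (toy features and labels, not the
# paper's data): train an SVM on stylistic feature vectors of model responses
# to predict truthfulness, without reading the content itself.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
# placeholder for ~220 handcrafted linguistic features per response
X = rng.normal(size=(200, 220))
y = rng.integers(0, 2, size=200)        # 1 = truthful, 0 = untruthful (toy labels)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
scores = cross_val_score(clf, X, y, cv=5)
print("mean CV accuracy:", scores.mean())
```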
https://aclanthology.org/2023.trustnlp-1.13.bib
https://aclanthology.org/2023.trustnlp-1.13/
@inproceedings{chen-etal-2023-debunking, title = "Debunking Biases in Attention", author = "Chen, Shijing and Naseem, Usman and Razzak, Imran", editor = "Ovalle, Anaelia and Chang, Kai-Wei and Mehrabi, Ninareh and Pruksachatkun, Yada and Galystan, Aram and Dhamala, Jwala and Verma, Apurv and Cao, Trista and Kumar, Anoop and Gupta, Rahul", booktitle = "Proceedings of the 3rd Workshop on Trustworthy Natural Language Processing (TrustNLP 2023)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.trustnlp-1.13", doi = "10.18653/v1/2023.trustnlp-1.13", pages = "141--150", abstract = "Despite the remarkable performances in various applications, machine learning (ML) models could potentially discriminate. They may result in biasness in decision-making, leading to an impact negatively on individuals and society. Recently, various methods have been developed to mitigate biasness and achieve significant performance. Attention mechanisms are a fundamental component of many state-of-the-art ML models and may potentially impact the fairness of ML models. However, how they explicitly influence fairness has yet to be thoroughly explored. In this paper, we investigate how different attention mechanisms affect the fairness of ML models, focusing on models used in Natural Language Processing (NLP) models. We evaluate the performance of fairness of several models with and without different attention mechanisms on widely used benchmark datasets. Our results indicate that the majority of attention mechanisms that have been assessed can improve the fairness performance of Bidirectional Gated Recurrent Unit (BiGRU) and Bidirectional Long Short-Term Memory (BiLSTM) in all three datasets regarding religious and gender-sensitive groups, however, with varying degrees of trade-offs in accuracy measures. Our findings highlight the possibility of fairness being affected by adopting specific attention mechanisms in machine learning models for certain datasets", }
Despite their remarkable performance in various applications, machine learning (ML) models can discriminate. They may introduce bias into decision-making, negatively impacting individuals and society. Recently, various methods have been developed to mitigate bias while maintaining strong performance. Attention mechanisms are a fundamental component of many state-of-the-art ML models and may potentially impact the fairness of ML models. However, how they explicitly influence fairness has yet to be thoroughly explored. In this paper, we investigate how different attention mechanisms affect the fairness of ML models, focusing on models used in Natural Language Processing (NLP). We evaluate the fairness of several models with and without different attention mechanisms on widely used benchmark datasets. Our results indicate that the majority of the attention mechanisms assessed can improve the fairness of Bidirectional Gated Recurrent Unit (BiGRU) and Bidirectional Long Short-Term Memory (BiLSTM) models on all three datasets with respect to religion- and gender-sensitive groups, albeit with varying degrees of trade-off in accuracy. Our findings highlight that fairness can be affected by adopting specific attention mechanisms in machine learning models for certain datasets.
[ "Chen, Shijing", "Naseem, Usman", "Razzak, Imran" ]
Debunking Biases in Attention
trustnlp-1.13
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.trustnlp-1.14.bib
https://aclanthology.org/2023.trustnlp-1.14/
@inproceedings{arnold-etal-2023-guiding, title = "Guiding Text-to-Text Privatization by Syntax", author = "Arnold, Stefan and Yesilbas, Dilara and Weinzierl, Sven", editor = "Ovalle, Anaelia and Chang, Kai-Wei and Mehrabi, Ninareh and Pruksachatkun, Yada and Galystan, Aram and Dhamala, Jwala and Verma, Apurv and Cao, Trista and Kumar, Anoop and Gupta, Rahul", booktitle = "Proceedings of the 3rd Workshop on Trustworthy Natural Language Processing (TrustNLP 2023)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.trustnlp-1.14", doi = "10.18653/v1/2023.trustnlp-1.14", pages = "151--162", abstract = "Metric Differential Privacy is a generalization of differential privacy tailored to address the unique challenges of text-to-text privatization. By adding noise to the representation of words in the geometric space of embeddings, words are replaced with words located in the proximity of the noisy representation. Since embeddings are trained based on word co-occurrences, this mechanism ensures that substitutions stem from a common semantic context. Without considering the grammatical category of words, however, this mechanism cannot guarantee that substitutions play similar syntactic roles. We analyze the capability of text-to-text privatization to preserve the grammatical category of words after substitution and find that surrogate texts consist almost exclusively of nouns. Lacking the capability to produce surrogate texts that correlate with the structure of the sensitive texts, we encompass our analysis by transforming the privatization step into a candidate selection problem in which substitutions are directed to words with matching grammatical properties. We demonstrate a substantial improvement in the performance of downstream tasks by up to 4.66{\%} while retaining comparative privacy guarantees.", }
Metric Differential Privacy is a generalization of differential privacy tailored to address the unique challenges of text-to-text privatization. By adding noise to the representation of words in the geometric space of embeddings, words are replaced with words located in the proximity of the noisy representation. Since embeddings are trained based on word co-occurrences, this mechanism ensures that substitutions stem from a common semantic context. Without considering the grammatical category of words, however, this mechanism cannot guarantee that substitutions play similar syntactic roles. We analyze the capability of text-to-text privatization to preserve the grammatical category of words after substitution and find that surrogate texts consist almost exclusively of nouns. Lacking the capability to produce surrogate texts that correlate with the structure of the sensitive texts, we encompass our analysis by transforming the privatization step into a candidate selection problem in which substitutions are directed to words with matching grammatical properties. We demonstrate a substantial improvement in the performance of downstream tasks by up to 4.66% while retaining comparative privacy guarantees.
[ "Arnold, Stefan", "Yesilbas, Dilara", "Weinzierl, Sven" ]
Guiding Text-to-Text Privatization by Syntax
trustnlp-1.14
Poster
2306.01471
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
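The entry above (trustnlp-1.14) restricts metric-DP word substitutions to candidates with the same grammatical category. The following rough illustration is not the paper's mechanism: the Gaussian noise is a stand-in for properly calibrated metric-DP noise, and the embeddings and POS tags are toy values.

```python
# Rough illustration of syntax-guided text-to-text privatization: perturb a
# word's embedding with noise, then substitute the nearest vocabulary word
# that shares its part-of-speech. Metric-DP mechanisms calibrate multivariate
# noise to the privacy parameter epsilon; Gaussian noise here is illustrative.
import numpy as np

def privatize(word, embeddings, pos_tags, epsilon=10.0, rng=np.random.default_rng(0)):
    vec = embeddings[word]
    noisy = vec + rng.normal(scale=1.0 / epsilon, size=vec.shape)
    same_pos = [w for w in embeddings if pos_tags[w] == pos_tags[word]]
    # nearest neighbour among candidates sharing the grammatical category
    return min(same_pos, key=lambda w: np.linalg.norm(embeddings[w] - noisy))

# toy vocabulary with made-up 3-d embeddings and POS tags
emb = {"doctor": np.array([1.0, 0.2, 0.0]), "nurse": np.array([0.9, 0.3, 0.1]),
       "quickly": np.array([0.0, 1.0, 0.8]), "slowly": np.array([0.1, 0.9, 0.9])}
pos = {"doctor": "NOUN", "nurse": "NOUN", "quickly": "ADV", "slowly": "ADV"}
print(privatize("doctor", emb, pos))
```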
https://aclanthology.org/2023.trustnlp-1.15.bib
https://aclanthology.org/2023.trustnlp-1.15/
@inproceedings{jourdan-etal-2023-fairness, title = "Are fairness metric scores enough to assess discrimination biases in machine learning?", author = "Jourdan, Fanny and Risser, Laurent and Loubes, Jean-michel and Asher, Nicholas", editor = "Ovalle, Anaelia and Chang, Kai-Wei and Mehrabi, Ninareh and Pruksachatkun, Yada and Galystan, Aram and Dhamala, Jwala and Verma, Apurv and Cao, Trista and Kumar, Anoop and Gupta, Rahul", booktitle = "Proceedings of the 3rd Workshop on Trustworthy Natural Language Processing (TrustNLP 2023)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.trustnlp-1.15", doi = "10.18653/v1/2023.trustnlp-1.15", pages = "163--174", abstract = "This paper presents novel experiments shedding light on the shortcomings of current metrics for assessing biases of gender discrimination made by machine learning algorithms on textual data. We focus on the Bios dataset, and our learning task is to predict the occupation of individuals, based on their biography. Such prediction tasks are common in commercial Natural Language Processing (NLP) applications such as automatic job recommendations. We address an important limitation of theoretical discussions dealing with group-wise fairness metrics: they focus on large datasets, although the norm in many industrial NLP applications is to use small to reasonably large linguistic datasets for which the main practical constraint is to get a good prediction accuracy. We then question how reliable are different popular measures of bias when the size of the training set is simply sufficient to learn reasonably accurate predictions. Our experiments sample the Bios dataset and learn more than 200 models on different sample sizes. This allows us to statistically study our results and to confirm that common gender bias indices provide diverging and sometimes unreliable results when applied to relatively small training and test samples. This highlights the crucial importance of variance calculations for providing sound results in this field.", }
This paper presents novel experiments shedding light on the shortcomings of current metrics for assessing biases of gender discrimination made by machine learning algorithms on textual data. We focus on the Bios dataset, and our learning task is to predict the occupation of individuals, based on their biography. Such prediction tasks are common in commercial Natural Language Processing (NLP) applications such as automatic job recommendations. We address an important limitation of theoretical discussions dealing with group-wise fairness metrics: they focus on large datasets, although the norm in many industrial NLP applications is to use small to reasonably large linguistic datasets for which the main practical constraint is to get a good prediction accuracy. We then question how reliable are different popular measures of bias when the size of the training set is simply sufficient to learn reasonably accurate predictions. Our experiments sample the Bios dataset and learn more than 200 models on different sample sizes. This allows us to statistically study our results and to confirm that common gender bias indices provide diverging and sometimes unreliable results when applied to relatively small training and test samples. This highlights the crucial importance of variance calculations for providing sound results in this field.
[ "Jourdan, Fanny", "Risser, Laurent", "Loubes, Jean-michel", "Asher, Nicholas" ]
Are fairness metric scores enough to assess discrimination biases in machine learning?
trustnlp-1.15
Poster
2306.05307
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
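The entry above (trustnlp-1.15) argues that bias scores computed on small samples can be unstable, so variance estimates matter. The sketch below illustrates that point with synthetic data and a generic true-positive-rate gap; the paper's actual bias indices and dataset differ.

```python
# Sketch of the kind of variance analysis the paper argues for, on toy data:
# estimate a gender bias score on many random subsamples and inspect its spread.
import numpy as np

rng = np.random.default_rng(0)
N = 20_000
gender = rng.integers(0, 2, N)                                 # group 0 / group 1
y_true = rng.integers(0, 2, N)
y_pred = np.where(rng.random(N) < 0.85, y_true, 1 - y_true)    # noisy predictions

def tpr_gap(idx):
    g, t, p = gender[idx], y_true[idx], y_pred[idx]
    tpr = [np.mean(p[(g == v) & (t == 1)] == 1) for v in (0, 1)]
    return tpr[0] - tpr[1]

for size in (500, 2_000, 10_000):
    gaps = [tpr_gap(rng.choice(N, size, replace=False)) for _ in range(200)]
    print(f"n={size:>6}: mean gap {np.mean(gaps):+.3f}, std {np.std(gaps):.3f}")
```

The spread shrinks as the sample grows, which is exactly why point estimates of bias on small test sets can be misleading without reported variance.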
https://aclanthology.org/2023.trustnlp-1.16.bib
https://aclanthology.org/2023.trustnlp-1.16/
@inproceedings{alshahrani-etal-2023-depth, title = "{DEPTH}+: An Enhanced Depth Metric for {W}ikipedia Corpora Quality", author = "Alshahrani, Saied and Alshahrani, Norah and Matthews, Jeanna", editor = "Ovalle, Anaelia and Chang, Kai-Wei and Mehrabi, Ninareh and Pruksachatkun, Yada and Galystan, Aram and Dhamala, Jwala and Verma, Apurv and Cao, Trista and Kumar, Anoop and Gupta, Rahul", booktitle = "Proceedings of the 3rd Workshop on Trustworthy Natural Language Processing (TrustNLP 2023)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.trustnlp-1.16", doi = "10.18653/v1/2023.trustnlp-1.16", pages = "175--189", abstract = "Wikipedia articles are a common source of training data for Natural Language Processing (NLP) research, especially as a source for corpora in languages other than English. However, research has shown that not all Wikipedia editions are produced organically by native speakers, and there are substantial levels of automation and translation activities in the Wikipedia project that could negatively impact the degree to which they truly represent the language and the culture of native speakers. To encourage transparency in the Wikipedia project, Wikimedia Foundation introduced the depth metric as an indication of the degree of collaboration or how frequently users edit a Wikipedia edition{'}s articles. While a promising start, this depth metric suffers from a few serious problems, like a lack of adequate handling of inflation of edits metric and a lack of full utilization of users-related metrics. In this paper, we propose the DEPTH+ metric, provide its mathematical definitions, and describe how it reflects a better representation of the depth of human collaborativeness. We also quantify the bot activities in Wikipedia and offer a bot-free depth metric after the removal of the bot-created articles and the bot-made edits on the Wikipedia articles.", }
Wikipedia articles are a common source of training data for Natural Language Processing (NLP) research, especially as a source for corpora in languages other than English. However, research has shown that not all Wikipedia editions are produced organically by native speakers, and there are substantial levels of automation and translation activities in the Wikipedia project that could negatively impact the degree to which they truly represent the language and the culture of native speakers. To encourage transparency in the Wikipedia project, Wikimedia Foundation introduced the depth metric as an indication of the degree of collaboration or how frequently users edit a Wikipedia edition's articles. While a promising start, this depth metric suffers from a few serious problems, like a lack of adequate handling of inflation of edits metric and a lack of full utilization of users-related metrics. In this paper, we propose the DEPTH+ metric, provide its mathematical definitions, and describe how it reflects a better representation of the depth of human collaborativeness. We also quantify the bot activities in Wikipedia and offer a bot-free depth metric after the removal of the bot-created articles and the bot-made edits on the Wikipedia articles.
[ "Alshahrani, Saied", "Alshahrani, Norah", "Matthews, Jeanna" ]
DEPTH+: An Enhanced Depth Metric for Wikipedia Corpora Quality
trustnlp-1.16
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.trustnlp-1.17.bib
https://aclanthology.org/2023.trustnlp-1.17/
@inproceedings{mosca-etal-2023-distinguishing, title = "Distinguishing Fact from Fiction: A Benchmark Dataset for Identifying Machine-Generated Scientific Papers in the {LLM} Era.", author = "Mosca, Edoardo and Abdalla, Mohamed Hesham Ibrahim and Basso, Paolo and Musumeci, Margherita and Groh, Georg", editor = "Ovalle, Anaelia and Chang, Kai-Wei and Mehrabi, Ninareh and Pruksachatkun, Yada and Galystan, Aram and Dhamala, Jwala and Verma, Apurv and Cao, Trista and Kumar, Anoop and Gupta, Rahul", booktitle = "Proceedings of the 3rd Workshop on Trustworthy Natural Language Processing (TrustNLP 2023)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.trustnlp-1.17", doi = "10.18653/v1/2023.trustnlp-1.17", pages = "190--207", abstract = "As generative NLP can now produce content nearly indistinguishable from human writing, it becomes difficult to identify genuine research contributions in academic writing and scientific publications. Moreover, information in NLP-generated text can potentially be factually wrong or even entirely fabricated. This study introduces a novel benchmark dataset, containing human-written and machine-generated scientific papers from SCIgen, GPT-2, GPT-3, ChatGPT, and Galactica. After describing the generation and extraction pipelines, we also experiment with four distinct classifiers as a baseline for detecting the authorship of scientific text. A strong focus is put on generalization capabilities and explainability to highlight the strengths and weaknesses of detectors. We believe our work serves as an important step towards creating more robust methods for distinguishing between human-written and machine-generated scientific papers, ultimately ensuring the integrity of scientific literature.", }
As generative NLP can now produce content nearly indistinguishable from human writing, it becomes difficult to identify genuine research contributions in academic writing and scientific publications. Moreover, information in NLP-generated text can potentially be factually wrong or even entirely fabricated. This study introduces a novel benchmark dataset, containing human-written and machine-generated scientific papers from SCIgen, GPT-2, GPT-3, ChatGPT, and Galactica. After describing the generation and extraction pipelines, we also experiment with four distinct classifiers as a baseline for detecting the authorship of scientific text. A strong focus is put on generalization capabilities and explainability to highlight the strengths and weaknesses of detectors. We believe our work serves as an important step towards creating more robust methods for distinguishing between human-written and machine-generated scientific papers, ultimately ensuring the integrity of scientific literature.
[ "Mosca, Edoardo", "Abdalla, Mohamed Hesham Ibrahim", "Basso, Paolo", "Musumeci, Margherita", "Groh, Georg" ]
Distinguishing Fact from Fiction: A Benchmark Dataset for Identifying Machine-Generated Scientific Papers in the LLM Era.
trustnlp-1.17
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.trustnlp-1.18.bib
https://aclanthology.org/2023.trustnlp-1.18/
@inproceedings{subramani-etal-2023-detecting, title = "Detecting Personal Information in Training Corpora: an Analysis", author = "Subramani, Nishant and Luccioni, Sasha and Dodge, Jesse and Mitchell, Margaret", editor = "Ovalle, Anaelia and Chang, Kai-Wei and Mehrabi, Ninareh and Pruksachatkun, Yada and Galystan, Aram and Dhamala, Jwala and Verma, Apurv and Cao, Trista and Kumar, Anoop and Gupta, Rahul", booktitle = "Proceedings of the 3rd Workshop on Trustworthy Natural Language Processing (TrustNLP 2023)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.trustnlp-1.18", doi = "10.18653/v1/2023.trustnlp-1.18", pages = "208--220", abstract = "Large language models are trained on increasing quantities of unstructured text, the largest sources of which are scraped from the Web. These Web scrapes are mainly composed of heterogeneous collections of text from multiple domains with minimal documentation. While some work has been done to identify and remove toxic, biased, or sexual language, the topic of personal information (PI) in textual data used for training Natural Language Processing (NLP) models is relatively under-explored. In this work, we draw from definitions of PI across multiple countries to define the first PI taxonomy of its kind, categorized by type and risk level. We then conduct a case study on the Colossal Clean Crawled Corpus (C4) and the Pile, to detect some of the highest-risk personal information, such as email addresses and credit card numbers, and examine the differences between automatic and regular expression-based approaches for their detection. We identify shortcomings in modern approaches for PI detection, and propose a reframing of the problem that is informed by global perspectives and the goals in personal information detection.", }
Large language models are trained on increasing quantities of unstructured text, the largest sources of which are scraped from the Web. These Web scrapes are mainly composed of heterogeneous collections of text from multiple domains with minimal documentation. While some work has been done to identify and remove toxic, biased, or sexual language, the topic of personal information (PI) in textual data used for training Natural Language Processing (NLP) models is relatively under-explored. In this work, we draw from definitions of PI across multiple countries to define the first PI taxonomy of its kind, categorized by type and risk level. We then conduct a case study on the Colossal Clean Crawled Corpus (C4) and the Pile, to detect some of the highest-risk personal information, such as email addresses and credit card numbers, and examine the differences between automatic and regular expression-based approaches for their detection. We identify shortcomings in modern approaches for PI detection, and propose a reframing of the problem that is informed by global perspectives and the goals in personal information detection.
[ "Subramani, Nishant", "Luccioni, Sasha", "Dodge, Jesse", "Mitchell, Margaret" ]
Detecting Personal Information in Training Corpora: an Analysis
trustnlp-1.18
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
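The entry above (trustnlp-1.18) compares automatic and regular-expression-based detection of high-risk personal information such as email addresses and credit card numbers. A simple regex baseline in that spirit (patterns are illustrative, not the authors') is sketched below, with a Luhn check to reduce credit-card false positives.

```python
# Illustrative regular-expression baseline for scanning corpora for two
# high-risk PI types: email addresses and candidate credit-card numbers.
import re

EMAIL_RE = re.compile(r"[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}")
CARD_RE = re.compile(r"\b(?:\d[ -]?){13,16}\b")

def luhn_ok(digits: str) -> bool:
    """Luhn checksum, used to filter out random digit runs."""
    nums = [int(d) for d in digits][::-1]
    total = sum(nums[0::2]) + sum(sum(divmod(2 * d, 10)) for d in nums[1::2])
    return total % 10 == 0

def find_pi(text: str):
    emails = EMAIL_RE.findall(text)
    cards = [c for c in CARD_RE.findall(text)
             if luhn_ok(re.sub(r"[ -]", "", c))]
    return {"emails": emails, "credit_cards": cards}

print(find_pi("Contact jane.doe@example.org, card 4111 1111 1111 1111."))
```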
https://aclanthology.org/2023.trustnlp-1.19.bib
https://aclanthology.org/2023.trustnlp-1.19/
@inproceedings{bhan-etal-2023-enhancing, title = "Enhancing textual counterfactual explanation intelligibility through Counterfactual Feature Importance", author = "Bhan, Milan and Vittaut, Jean-noel and Chesneau, Nicolas and Lesot, Marie-jeanne", editor = "Ovalle, Anaelia and Chang, Kai-Wei and Mehrabi, Ninareh and Pruksachatkun, Yada and Galystan, Aram and Dhamala, Jwala and Verma, Apurv and Cao, Trista and Kumar, Anoop and Gupta, Rahul", booktitle = "Proceedings of the 3rd Workshop on Trustworthy Natural Language Processing (TrustNLP 2023)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.trustnlp-1.19", doi = "10.18653/v1/2023.trustnlp-1.19", pages = "221--231", abstract = "Textual counterfactual examples explain a prediction by modifying the tokens of an initial instance in order to flip the outcome of a classifier. Even under sparsity constraint, counterfactual generation can lead to numerous changes from the initial text, making the explanation hard to understand. We propose Counterfactual Feature Importance, a method to make non-sparse counterfactual explanations more intelligible. Counterfactual Feature Importance assesses token change importance between an instance to explain and its counterfactual example. We develop two ways of computing Counterfactual Feature Importance, respectively based on classifier gradient computation and counterfactual generator loss evolution during counterfactual search. Then we design a global version of Counterfactual Feature Importance, providing rich information about semantic fields globally impacting classifier predictions. Counterfactual Feature Importance enables to focus on impacting parts of counterfactual explanations, making counterfactual explanations involving numerous changes more understandable.", }
Textual counterfactual examples explain a prediction by modifying the tokens of an initial instance in order to flip the outcome of a classifier. Even under sparsity constraint, counterfactual generation can lead to numerous changes from the initial text, making the explanation hard to understand. We propose Counterfactual Feature Importance, a method to make non-sparse counterfactual explanations more intelligible. Counterfactual Feature Importance assesses token change importance between an instance to explain and its counterfactual example. We develop two ways of computing Counterfactual Feature Importance, respectively based on classifier gradient computation and counterfactual generator loss evolution during counterfactual search. Then we design a global version of Counterfactual Feature Importance, providing rich information about semantic fields globally impacting classifier predictions. Counterfactual Feature Importance enables to focus on impacting parts of counterfactual explanations, making counterfactual explanations involving numerous changes more understandable.
[ "Bhan, Milan", "Vittaut, Jean-noel", "Chesneau, Nicolas", "Lesot, Marie-jeanne" ]
Enhancing textual counterfactual explanation intelligibility through Counterfactual Feature Importance
trustnlp-1.19
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.trustnlp-1.20.bib
https://aclanthology.org/2023.trustnlp-1.20/
@inproceedings{yermilov-etal-2023-privacy, title = "Privacy- and Utility-Preserving {NLP} with Anonymized data: A case study of Pseudonymization", author = "Yermilov, Oleksandr and Raheja, Vipul and Chernodub, Artem", editor = "Ovalle, Anaelia and Chang, Kai-Wei and Mehrabi, Ninareh and Pruksachatkun, Yada and Galystan, Aram and Dhamala, Jwala and Verma, Apurv and Cao, Trista and Kumar, Anoop and Gupta, Rahul", booktitle = "Proceedings of the 3rd Workshop on Trustworthy Natural Language Processing (TrustNLP 2023)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.trustnlp-1.20", doi = "10.18653/v1/2023.trustnlp-1.20", pages = "232--241", abstract = "This work investigates the effectiveness of different pseudonymization techniques, ranging from rule-based substitutions to using pre-trained Large Language Models (LLMs), on a variety of datasets and models used for two widely used NLP tasks: text classification and summarization. Our work provides crucial insights into the gaps between original and anonymized data (focusing on the pseudonymization technique) and model quality and fosters future research into higher-quality anonymization techniques better to balance the trade-offs between data protection and utility preservation. We make our code, pseudonymized datasets, and downstream models publicly available.", }
This work investigates the effectiveness of different pseudonymization techniques, ranging from rule-based substitutions to using pre-trained Large Language Models (LLMs), on a variety of datasets and models used for two widely used NLP tasks: text classification and summarization. Our work provides crucial insights into the gaps between original and anonymized data (focusing on the pseudonymization technique) and model quality and fosters future research into higher-quality anonymization techniques better to balance the trade-offs between data protection and utility preservation. We make our code, pseudonymized datasets, and downstream models publicly available.
[ "Yermilov, Oleks", "r", "Raheja, Vipul", "Chernodub, Artem" ]
Privacy- and Utility-Preserving NLP with Anonymized data: A case study of Pseudonymization
trustnlp-1.20
Poster
2306.05561
[ "https://github.com/olexandryermilov/privacy-preserving-nlp" ]
https://huggingface.co/papers/2306.05561
0
0
0
3
1
[]
[]
[]
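The entry above (trustnlp-1.20) compares pseudonymization techniques ranging from rule-based substitution to LLM-based rewriting. The sketch below shows only the simplest rule-based flavour, assuming spaCy NER (requires `python -m spacy download en_core_web_sm`) and toy surrogate lists; it is not the paper's pipeline.

```python
# Hedged sketch of rule-based pseudonymization: detect named entities with
# spaCy and replace each entity consistently with a surrogate, keeping a
# mapping so downstream utility can still be measured.
import spacy

SURROGATES = {"PERSON": ["Alex Smith", "Sam Jones"], "ORG": ["Acme Corp"], "GPE": ["Springfield"]}

def pseudonymize(text: str, nlp) -> tuple[str, dict]:
    doc = nlp(text)
    mapping, counters, out, last = {}, {}, [], 0
    for ent in doc.ents:
        if ent.label_ not in SURROGATES:
            continue
        if ent.text not in mapping:
            i = counters.get(ent.label_, 0)
            pool = SURROGATES[ent.label_]
            mapping[ent.text] = pool[i % len(pool)]
            counters[ent.label_] = i + 1
        out.append(text[last:ent.start_char] + mapping[ent.text])
        last = ent.end_char
    out.append(text[last:])
    return "".join(out), mapping

nlp = spacy.load("en_core_web_sm")
print(pseudonymize("Maria Garcia from Toronto joined Google in 2019.", nlp))
```

Mapping every mention of an entity to the same surrogate is what distinguishes pseudonymization from plain redaction and is what keeps the text usable for downstream classification or summarization.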
https://aclanthology.org/2023.trustnlp-1.21.bib
https://aclanthology.org/2023.trustnlp-1.21/
@inproceedings{lucas-havens-2023-gpts, title = "{GPT}s Don{'}t Keep Secrets: Searching for Backdoor Watermark Triggers in Autoregressive Language Models", author = "Lucas, Evan and Havens, Timothy", editor = "Ovalle, Anaelia and Chang, Kai-Wei and Mehrabi, Ninareh and Pruksachatkun, Yada and Galystan, Aram and Dhamala, Jwala and Verma, Apurv and Cao, Trista and Kumar, Anoop and Gupta, Rahul", booktitle = "Proceedings of the 3rd Workshop on Trustworthy Natural Language Processing (TrustNLP 2023)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.trustnlp-1.21", doi = "10.18653/v1/2023.trustnlp-1.21", pages = "242--248", abstract = "This work analyzes backdoor watermarks in an autoregressive transformer fine-tuned to perform a generative sequence-to-sequence task, specifically summarization. We propose and demonstrate an attack to identify trigger words or phrases by analyzing open ended generations from autoregressive models that have backdoor watermarks inserted. It is shown in our work that triggers based on random common words are easier to identify than those based on single, rare tokens. The attack proposed is easy to implement and only requires access to the model weights. Code used to create the backdoor watermarked models and analyze their outputs is shared at [github link to be inserted for camera ready version].", }
This work analyzes backdoor watermarks in an autoregressive transformer fine-tuned to perform a generative sequence-to-sequence task, specifically summarization. We propose and demonstrate an attack to identify trigger words or phrases by analyzing open ended generations from autoregressive models that have backdoor watermarks inserted. It is shown in our work that triggers based on random common words are easier to identify than those based on single, rare tokens. The attack proposed is easy to implement and only requires access to the model weights. Code used to create the backdoor watermarked models and analyze their outputs is shared at [github link to be inserted for camera ready version].
[ "Lucas, Evan", "Havens, Timothy" ]
GPTs Don't Keep Secrets: Searching for Backdoor Watermark Triggers in Autoregressive Language Models
trustnlp-1.21
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
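The entry above (trustnlp-1.21) searches for backdoor watermark triggers by inspecting open-ended generations from the suspect model. A conceptual sketch of that idea follows; the sampling budget, ratio test, and `generate` placeholder are assumptions, not the authors' procedure.

```python
# Conceptual sketch: sample many open-ended generations from the suspect
# model and flag tokens that appear far more often than in a reference
# corpus, treating them as candidate watermark triggers.
from collections import Counter
from typing import Callable, List

def candidate_triggers(generate: Callable[[], str],
                       reference_texts: List[str],
                       n_samples: int = 500,
                       ratio_threshold: float = 20.0) -> List[str]:
    gen_counts = Counter()
    for _ in range(n_samples):
        gen_counts.update(generate().lower().split())
    ref_counts = Counter(tok for t in reference_texts for tok in t.lower().split())
    gen_total, ref_total = sum(gen_counts.values()), sum(ref_counts.values())
    flagged = []
    for tok, c in gen_counts.items():
        gen_rate = c / gen_total
        ref_rate = (ref_counts[tok] + 1) / (ref_total + 1)   # add-one smoothing
        if gen_rate / ref_rate > ratio_threshold:
            flagged.append(tok)
    return flagged

# toy check with a dummy "model" that over-produces the token "cf"
import random
dummy = lambda: " ".join(random.choices(["the", "cat", "sat", "cf"], weights=[3, 3, 3, 9], k=20))
print(candidate_triggers(dummy, ["the cat sat on the mat"], n_samples=50, ratio_threshold=3.0))
```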
https://aclanthology.org/2023.trustnlp-1.22.bib
https://aclanthology.org/2023.trustnlp-1.22/
@inproceedings{li-liu-2023-make, title = "Make Text Unlearnable: Exploiting Effective Patterns to Protect Personal Data", author = "Li, Xinzhe and Liu, Ming", editor = "Ovalle, Anaelia and Chang, Kai-Wei and Mehrabi, Ninareh and Pruksachatkun, Yada and Galystan, Aram and Dhamala, Jwala and Verma, Apurv and Cao, Trista and Kumar, Anoop and Gupta, Rahul", booktitle = "Proceedings of the 3rd Workshop on Trustworthy Natural Language Processing (TrustNLP 2023)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.trustnlp-1.22", doi = "10.18653/v1/2023.trustnlp-1.22", pages = "249--259", abstract = "This paper addresses the ethical concerns arising from the use of unauthorized public data in deep learning models and proposes a novel solution. Specifically, building on the work of Huang et al. (2021), we extend their bi-level optimization approach to generate unlearnable text using a gradient-based search technique. However, although effective, this approach faces practical limitations, including the requirement of batches of instances and model architecture knowledge that is not readily accessible to ordinary users with limited access to their own data. Furthermore, even with semantic-preserving constraints, unlearnable noise can alter the text{'}s semantics. To address these challenges, we extract simple patterns from unlearnable text produced by bi-level optimization and demonstrate that the data remains unlearnable for unknown models. Additionally, these patterns are not instance- or dataset-specific, allowing users to readily apply them to text classification and question-answering tasks, even if only a small proportion of users implement them on their public content. We also open-source codes to generate unlearnable text and assess unlearnable noise to benefit the public and future studies.", }
This paper addresses the ethical concerns arising from the use of unauthorized public data in deep learning models and proposes a novel solution. Specifically, building on the work of Huang et al. (2021), we extend their bi-level optimization approach to generate unlearnable text using a gradient-based search technique. However, although effective, this approach faces practical limitations, including the requirement of batches of instances and model architecture knowledge that is not readily accessible to ordinary users with limited access to their own data. Furthermore, even with semantic-preserving constraints, unlearnable noise can alter the text's semantics. To address these challenges, we extract simple patterns from unlearnable text produced by bi-level optimization and demonstrate that the data remains unlearnable for unknown models. Additionally, these patterns are not instance- or dataset-specific, allowing users to readily apply them to text classification and question-answering tasks, even if only a small proportion of users implement them on their public content. We also open-source codes to generate unlearnable text and assess unlearnable noise to benefit the public and future studies.
[ "Li, Xinzhe", "Liu, Ming" ]
Make Text Unlearnable: Exploiting Effective Patterns to Protect Personal Data
trustnlp-1.22
Poster
2307.00456
[ "https://github.com/xinzhel/unlearnable_texts" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.trustnlp-1.23.bib
https://aclanthology.org/2023.trustnlp-1.23/
@inproceedings{ishihara-2023-training, title = "Training Data Extraction From Pre-trained Language Models: A Survey", author = "Ishihara, Shotaro", editor = "Ovalle, Anaelia and Chang, Kai-Wei and Mehrabi, Ninareh and Pruksachatkun, Yada and Galystan, Aram and Dhamala, Jwala and Verma, Apurv and Cao, Trista and Kumar, Anoop and Gupta, Rahul", booktitle = "Proceedings of the 3rd Workshop on Trustworthy Natural Language Processing (TrustNLP 2023)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.trustnlp-1.23", doi = "10.18653/v1/2023.trustnlp-1.23", pages = "260--275", abstract = "As the deployment of pre-trained language models (PLMs) expands, pressing security concerns have arisen regarding the potential for malicious extraction of training data, posing a threat to data privacy. This study is the first to provide a comprehensive survey of training data extraction from PLMs.Our review covers more than 100 key papers in fields such as natural language processing and security. First, preliminary knowledge is recapped and a taxonomy of various definitions of memorization is presented. The approaches for attack and defense are then systemized. Furthermore, the empirical findings of several quantitative studies are highlighted. Finally, future research directions based on this review are suggested.", }
As the deployment of pre-trained language models (PLMs) expands, pressing security concerns have arisen regarding the potential for malicious extraction of training data, posing a threat to data privacy. This study is the first to provide a comprehensive survey of training data extraction from PLMs. Our review covers more than 100 key papers in fields such as natural language processing and security. First, preliminary knowledge is recapped and a taxonomy of various definitions of memorization is presented. The approaches for attack and defense are then systemized. Furthermore, the empirical findings of several quantitative studies are highlighted. Finally, future research directions based on this review are suggested.
[ "Ishihara, Shotaro" ]
Training Data Extraction From Pre-trained Language Models: A Survey
trustnlp-1.23
Poster
2305.16157
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.trustnlp-1.24.bib
https://aclanthology.org/2023.trustnlp-1.24/
@inproceedings{liu-etal-2023-expanding, title = "Expanding Scope: Adapting {E}nglish Adversarial Attacks to {C}hinese", author = "Liu, Hanyu and Cai, Chengyuan and Qi, Yanjun", editor = "Ovalle, Anaelia and Chang, Kai-Wei and Mehrabi, Ninareh and Pruksachatkun, Yada and Galystan, Aram and Dhamala, Jwala and Verma, Apurv and Cao, Trista and Kumar, Anoop and Gupta, Rahul", booktitle = "Proceedings of the 3rd Workshop on Trustworthy Natural Language Processing (TrustNLP 2023)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.trustnlp-1.24", doi = "10.18653/v1/2023.trustnlp-1.24", pages = "276--286", abstract = "Recent studies have revealed that NLP predictive models are vulnerable to adversarial attacks. Most existing studies focused on designing attacks to evaluate the robustness of NLP models in the English language alone. Literature has seen an increasing need for NLP solutions for other languages. We, therefore, ask one natural question whether state-of-the-art (SOTA) attack methods generalize to other languages. This paper investigates how to adapt SOTA adversarial attack algorithms in English to the Chinese language. Our experiments show that attack methods previously applied to English NLP can generate high-quality adversarial examples in Chinese when combined with proper text segmentation and linguistic constraints. In addition, we demonstrate that the generated adversarial examples can achieve high fluency and sentiment consistency by focusing on the Chinese language{'}s morphology and phonology, which in turn can be used to improve the adversarial robustness of Chinese NLP models.", }
Recent studies have revealed that NLP predictive models are vulnerable to adversarial attacks. Most existing studies focused on designing attacks to evaluate the robustness of NLP models in the English language alone. Literature has seen an increasing need for NLP solutions for other languages. We, therefore, ask one natural question: whether state-of-the-art (SOTA) attack methods generalize to other languages. This paper investigates how to adapt SOTA adversarial attack algorithms in English to the Chinese language. Our experiments show that attack methods previously applied to English NLP can generate high-quality adversarial examples in Chinese when combined with proper text segmentation and linguistic constraints. In addition, we demonstrate that the generated adversarial examples can achieve high fluency and sentiment consistency by focusing on the Chinese language's morphology and phonology, which in turn can be used to improve the adversarial robustness of Chinese NLP models.
[ "Liu, Hanyu", "Cai, Chengyuan", "Qi, Yanjun" ]
Expanding Scope: Adapting English Adversarial Attacks to Chinese
trustnlp-1.24
Poster
2306.04874
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.trustnlp-1.25.bib
https://aclanthology.org/2023.trustnlp-1.25/
@inproceedings{he-etal-2023-imbert, title = "{IMBERT}: Making {BERT} Immune to Insertion-based Backdoor Attacks", author = "He, Xuanli and Wang, Jun and Rubinstein, Benjamin and Cohn, Trevor", editor = "Ovalle, Anaelia and Chang, Kai-Wei and Mehrabi, Ninareh and Pruksachatkun, Yada and Galystan, Aram and Dhamala, Jwala and Verma, Apurv and Cao, Trista and Kumar, Anoop and Gupta, Rahul", booktitle = "Proceedings of the 3rd Workshop on Trustworthy Natural Language Processing (TrustNLP 2023)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.trustnlp-1.25", doi = "10.18653/v1/2023.trustnlp-1.25", pages = "287--301", abstract = "Backdoor attacks are an insidious security threat against machine learning models. Adversaries can manipulate the predictions of compromised models by inserting triggers into the training phase. Various backdoor attacks have been devised which can achieve nearly perfect attack success without affecting model predictions for clean inputs. Means of mitigating such vulnerabilities are underdeveloped, especially in natural language processing. To fill this gap, we introduce IMBERT, which uses either gradients or self-attention scores derived from victim models to self-defend against backdoor attacks at inference time. Our empirical studies demonstrate that IMBERT can effectively identify up to 98.5{\%} of inserted triggers. Thus, it significantly reduces the attack success rate while attaining competitive accuracy on the clean dataset across widespread insertion-based attacks compared to two baselines. Finally, we show that our approach is model-agnostic, and can be easily ported to several pre-trained transformer models.", }
Backdoor attacks are an insidious security threat against machine learning models. Adversaries can manipulate the predictions of compromised models by inserting triggers into the training phase. Various backdoor attacks have been devised which can achieve nearly perfect attack success without affecting model predictions for clean inputs. Means of mitigating such vulnerabilities are underdeveloped, especially in natural language processing. To fill this gap, we introduce IMBERT, which uses either gradients or self-attention scores derived from victim models to self-defend against backdoor attacks at inference time. Our empirical studies demonstrate that IMBERT can effectively identify up to 98.5{\%} of inserted triggers. Thus, it significantly reduces the attack success rate while attaining competitive accuracy on the clean dataset across widespread insertion-based attacks compared to two baselines. Finally, we show that our approach is model-agnostic, and can be easily ported to several pre-trained transformer models.
[ "He, Xuanli", "Wang, Jun", "Rubinstein, Benjamin", "Cohn, Trevor" ]
IMBERT: Making BERT Immune to Insertion-based Backdoor Attacks
trustnlp-1.25
Poster
2305.16503
[ "https://github.com/xlhex/imbert" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.trustnlp-1.26.bib
https://aclanthology.org/2023.trustnlp-1.26/
@inproceedings{gupta-etal-2023-real, title = "On The Real-world Performance of Machine Translation: Exploring Social Media Post-authors{'} Perspectives", author = "Gupta, Ananya and Takeuchi, Jae and Knijnenburg, Bart", editor = "Ovalle, Anaelia and Chang, Kai-Wei and Mehrabi, Ninareh and Pruksachatkun, Yada and Galystan, Aram and Dhamala, Jwala and Verma, Apurv and Cao, Trista and Kumar, Anoop and Gupta, Rahul", booktitle = "Proceedings of the 3rd Workshop on Trustworthy Natural Language Processing (TrustNLP 2023)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.trustnlp-1.26", doi = "10.18653/v1/2023.trustnlp-1.26", pages = "302--310", abstract = "Many social networking sites (SNS) offer machine translation of posts in an effort to increase understanding, engagement, and connectivity between users across language barriers. However, the translations of these posts are still not 100{\%} accurate and can be a cause of misunderstandings that can harm post-authors{'} professional or personal relationships. An exacerbating factor is on most SNS, authors cannot view the translation of their own posts, nor make corrections to inaccurate translations. This paper reports findings from a survey (N = 189) and an interview (N = 15) to explore users{'} concerns regarding this automatic form of machine translation. Our findings show that users are concerned about potential inaccuracies in the meaning of the translations of their posts, and would thus appreciate being able to view and potentially correct such translations. Additionally, we found that when users write posts in their native language, they write them for specific audiences, so they do not always want them translated. This underscores the urgency of providing users with more control over the translation of their posts.", }
Many social networking sites (SNS) offer machine translation of posts in an effort to increase understanding, engagement, and connectivity between users across language barriers. However, the translations of these posts are still not 100{\%} accurate and can be a cause of misunderstandings that can harm post-authors{'} professional or personal relationships. An exacerbating factor is on most SNS, authors cannot view the translation of their own posts, nor make corrections to inaccurate translations. This paper reports findings from a survey (N = 189) and an interview (N = 15) to explore users{'} concerns regarding this automatic form of machine translation. Our findings show that users are concerned about potential inaccuracies in the meaning of the translations of their posts, and would thus appreciate being able to view and potentially correct such translations. Additionally, we found that when users write posts in their native language, they write them for specific audiences, so they do not always want them translated. This underscores the urgency of providing users with more control over the translation of their posts.
[ "Gupta, Ananya", "Takeuchi, Jae", "Knijnenburg, Bart" ]
On The Real-world Performance of Machine Translation: Exploring Social Media Post-authors' Perspectives
trustnlp-1.26
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.trustnlp-1.27.bib
https://aclanthology.org/2023.trustnlp-1.27/
@inproceedings{bang-etal-2023-enabling, title = "Enabling Classifiers to Make Judgements Explicitly Aligned with Human Values", author = "Bang, Yejin and Yu, Tiezheng and Madotto, Andrea and Lin, Zhaojiang and Diab, Mona and Fung, Pascale", editor = "Ovalle, Anaelia and Chang, Kai-Wei and Mehrabi, Ninareh and Pruksachatkun, Yada and Galystan, Aram and Dhamala, Jwala and Verma, Apurv and Cao, Trista and Kumar, Anoop and Gupta, Rahul", booktitle = "Proceedings of the 3rd Workshop on Trustworthy Natural Language Processing (TrustNLP 2023)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.trustnlp-1.27", doi = "10.18653/v1/2023.trustnlp-1.27", pages = "311--325", abstract = "Many NLP classification tasks, such as sexism/racism detection or toxicity detection, are based on human values. Yet, human values can vary under diverse cultural conditions. Therefore, we introduce a framework for value-aligned classification that performs prediction based on explicitly written human values in the command. Along with the task, we propose a practical approach that distills value-aligned knowledge from large-scale language models (LLMs) to construct value-aligned classifiers in two steps. First, we generate value-aligned training data from LLMs by prompt-based few-shot learning. Next, we fine-tune smaller classification models with the generated data for the task. Empirical results show that our VA-Models surpass multiple baselines by at least 15.56{\%} on the F1-score, including few-shot learning with OPT-175B and existing text augmentation methods. We suggest that using classifiers with explicit human value input improves both inclusivity {\&} explainability in AI.", }
Many NLP classification tasks, such as sexism/racism detection or toxicity detection, are based on human values. Yet, human values can vary under diverse cultural conditions. Therefore, we introduce a framework for value-aligned classification that performs prediction based on explicitly written human values in the command. Along with the task, we propose a practical approach that distills value-aligned knowledge from large-scale language models (LLMs) to construct value-aligned classifiers in two steps. First, we generate value-aligned training data from LLMs by prompt-based few-shot learning. Next, we fine-tune smaller classification models with the generated data for the task. Empirical results show that our VA-Models surpass multiple baselines by at least 15.56{\%} on the F1-score, including few-shot learning with OPT-175B and existing text augmentation methods. We suggest that using classifiers with explicit human value input improves both inclusivity {\&} explainability in AI.
[ "Bang, Yejin", "Yu, Tiezheng", "Madotto, Andrea", "Lin, Zhaojiang", "Diab, Mona", "Fung, Pascale" ]
Enabling Classifiers to Make Judgements Explicitly Aligned with Human Values
trustnlp-1.27
Poster
2210.07652
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.trustnlp-1.28.bib
https://aclanthology.org/2023.trustnlp-1.28/
@inproceedings{portillo-wightman-etal-2023-strength, title = "Strength in Numbers: Estimating Confidence of Large Language Models by Prompt Agreement", author = "Portillo Wightman, Gwenyth and Delucia, Alexandra and Dredze, Mark", editor = "Ovalle, Anaelia and Chang, Kai-Wei and Mehrabi, Ninareh and Pruksachatkun, Yada and Galystan, Aram and Dhamala, Jwala and Verma, Apurv and Cao, Trista and Kumar, Anoop and Gupta, Rahul", booktitle = "Proceedings of the 3rd Workshop on Trustworthy Natural Language Processing (TrustNLP 2023)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.trustnlp-1.28", doi = "10.18653/v1/2023.trustnlp-1.28", pages = "326--362", abstract = "Large language models have achieved impressive few-shot performance on a wide variety of tasks. However, in many settings, users require confidence estimates for model predictions. While traditional classifiers produce scores for each label, language models instead produce scores for the generation which may not be well calibrated. We compare generations across diverse prompts and show that these can be used to create confidence scores. By utilizing more prompts we can get more precise confidence estimates and use response diversity as a proxy for confidence. We evaluate this approach across ten multiple-choice question-answering datasets using three models: T0, FLAN-T5, and GPT-3. In addition to analyzing multiple human written prompts, we automatically generate more prompts using a language model in order to produce finer-grained confidence estimates. Our method produces more calibrated confidence estimates compared to the log probability of the answer to a single prompt. These improvements could benefit users who rely on prediction confidence for integration into a larger system or in decision-making processes.", }
Large language models have achieved impressive few-shot performance on a wide variety of tasks. However, in many settings, users require confidence estimates for model predictions. While traditional classifiers produce scores for each label, language models instead produce scores for the generation which may not be well calibrated. We compare generations across diverse prompts and show that these can be used to create confidence scores. By utilizing more prompts we can get more precise confidence estimates and use response diversity as a proxy for confidence. We evaluate this approach across ten multiple-choice question-answering datasets using three models: T0, FLAN-T5, and GPT-3. In addition to analyzing multiple human written prompts, we automatically generate more prompts using a language model in order to produce finer-grained confidence estimates. Our method produces more calibrated confidence estimates compared to the log probability of the answer to a single prompt. These improvements could benefit users who rely on prediction confidence for integration into a larger system or in decision-making processes.
[ "Portillo Wightman, Gwenyth", "Delucia, Alex", "ra", "Dredze, Mark" ]
Strength in Numbers: Estimating Confidence of Large Language Models by Prompt Agreement
trustnlp-1.28
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.wassa-1.1.bib
https://aclanthology.org/2023.wassa-1.1/
@inproceedings{min-ananiadou-2023-pesto, title = "{PESTO}: A Post-User Fusion Network for Rumour Detection on Social Media", author = "Min, Erxue and Ananiadou, Sophia", editor = "Barnes, Jeremy and De Clercq, Orph{\'e}e and Klinger, Roman", booktitle = "Proceedings of the 13th Workshop on Computational Approaches to Subjectivity, Sentiment, {\&} Social Media Analysis", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.wassa-1.1", doi = "10.18653/v1/2023.wassa-1.1", pages = "1--10", abstract = "Rumour detection on social media is an important topic due to the challenges of misinformation propagation and slow verification of misleading information. Most previous work focus on the response posts on social media, ignoring the useful characteristics of involved users and their relations. In this paper, we propose a novel framework, Post-User Fusion Network (PESTO), which models the patterns of rumours from both post diffusion and user social networks. Specifically, we propose a novel Chronologically-masked Transformer architecture to model both temporal sequence and diffusion structure of rumours, and apply a Relational Graph Convolutional Network to model the social relations of involved users, with a fusion network based on self-attention mechanism to incorporate the two aspects. Additionally, two data augmentation techniques are leveraged to improve the robustness and accuracy of our models. Empirical results on four datasets of English tweets show the superiority of the proposed method.", }
Rumour detection on social media is an important topic due to the challenges of misinformation propagation and slow verification of misleading information. Most previous work focus on the response posts on social media, ignoring the useful characteristics of involved users and their relations. In this paper, we propose a novel framework, Post-User Fusion Network (PESTO), which models the patterns of rumours from both post diffusion and user social networks. Specifically, we propose a novel Chronologically-masked Transformer architecture to model both temporal sequence and diffusion structure of rumours, and apply a Relational Graph Convolutional Network to model the social relations of involved users, with a fusion network based on self-attention mechanism to incorporate the two aspects. Additionally, two data augmentation techniques are leveraged to improve the robustness and accuracy of our models. Empirical results on four datasets of English tweets show the superiority of the proposed method.
[ "Min, Erxue", "Ananiadou, Sophia" ]
PESTO: A Post-User Fusion Network for Rumour Detection on Social Media
wassa-1.1
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.wassa-1.2.bib
https://aclanthology.org/2023.wassa-1.2/
@inproceedings{bizzoni-etal-2023-sentimental, title = "Sentimental Matters - Predicting Literary Quality by Sentiment Analysis and Stylometric Features", author = "Bizzoni, Yuri and Moreira, Pascale and Thomsen, Mads Rosendahl and Nielbo, Kristoffer", editor = "Barnes, Jeremy and De Clercq, Orph{\'e}e and Klinger, Roman", booktitle = "Proceedings of the 13th Workshop on Computational Approaches to Subjectivity, Sentiment, {\&} Social Media Analysis", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.wassa-1.2", doi = "10.18653/v1/2023.wassa-1.2", pages = "11--18", abstract = "Over the years, the task of predicting reader appreciation or literary quality has been the object of several studies, but it remains a challenging problem in quantitative literary studies and computational linguistics alike, as its definition can vary a lot depending on the genre, the adopted features and the annotation system. This paper attempts to evaluate the impact of sentiment arc modelling versus more classical stylometric features for user-ratings of novels. We run our experiments on a corpus of English language narrative literary fiction from the 19th and 20th century, showing that syntactic and surface-level features can be powerful for the study of literary quality, but can be outperformed by sentiment-characteristics of a text.", }
Over the years, the task of predicting reader appreciation or literary quality has been the object of several studies, but it remains a challenging problem in quantitative literary studies and computational linguistics alike, as its definition can vary a lot depending on the genre, the adopted features and the annotation system. This paper attempts to evaluate the impact of sentiment arc modelling versus more classical stylometric features for user-ratings of novels. We run our experiments on a corpus of English language narrative literary fiction from the 19th and 20th century, showing that syntactic and surface-level features can be powerful for the study of literary quality, but can be outperformed by sentiment-characteristics of a text.
[ "Bizzoni, Yuri", "Moreira, Pascale", "Thomsen, Mads Rosendahl", "Nielbo, Kristoffer" ]
Sentimental Matters - Predicting Literary Quality by Sentiment Analysis and Stylometric Features
wassa-1.2
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.wassa-1.3.bib
https://aclanthology.org/2023.wassa-1.3/
@inproceedings{varia-etal-2023-instruction, title = "Instruction Tuning for Few-Shot Aspect-Based Sentiment Analysis", author = "Varia, Siddharth and Wang, Shuai and Halder, Kishaloy and Vacareanu, Robert and Ballesteros, Miguel and Benajiba, Yassine and Anna John, Neha and Anubhai, Rishita and Muresan, Smaranda and Roth, Dan", editor = "Barnes, Jeremy and De Clercq, Orph{\'e}e and Klinger, Roman", booktitle = "Proceedings of the 13th Workshop on Computational Approaches to Subjectivity, Sentiment, {\&} Social Media Analysis", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.wassa-1.3", doi = "10.18653/v1/2023.wassa-1.3", pages = "19--27", abstract = "Aspect-based Sentiment Analysis (ABSA) is a fine-grained sentiment analysis task which involves four elements from user-generated texts:aspect term, aspect category, opinion term, and sentiment polarity. Most computational approaches focus on some of the ABSA sub-taskssuch as tuple (aspect term, sentiment polarity) or triplet (aspect term, opinion term, sentiment polarity) extraction using either pipeline or joint modeling approaches. Recently, generative approaches have been proposed to extract all four elements as (one or more) quadrupletsfrom text as a single task. In this work, we take a step further and propose a unified framework for solving ABSA, and the associated sub-tasksto improve the performance in few-shot scenarios. To this end, we fine-tune a T5 model with instructional prompts in a multi-task learning fashion covering all the sub-tasks, as well as the entire quadruple prediction task. In experiments with multiple benchmark datasets, we show that the proposed multi-task prompting approach brings performance boost (by absolute 8.29 F1) in the few-shot learning setting.", }
Aspect-based Sentiment Analysis (ABSA) is a fine-grained sentiment analysis task which involves four elements from user-generated texts: aspect term, aspect category, opinion term, and sentiment polarity. Most computational approaches focus on some of the ABSA sub-tasks such as tuple (aspect term, sentiment polarity) or triplet (aspect term, opinion term, sentiment polarity) extraction using either pipeline or joint modeling approaches. Recently, generative approaches have been proposed to extract all four elements as (one or more) quadruplets from text as a single task. In this work, we take a step further and propose a unified framework for solving ABSA, and the associated sub-tasks to improve the performance in few-shot scenarios. To this end, we fine-tune a T5 model with instructional prompts in a multi-task learning fashion covering all the sub-tasks, as well as the entire quadruple prediction task. In experiments with multiple benchmark datasets, we show that the proposed multi-task prompting approach brings performance boost (by absolute 8.29 F1) in the few-shot learning setting.
[ "Varia, Siddharth", "Wang, Shuai", "Halder, Kishaloy", "Vacareanu, Robert", "Ballesteros, Miguel", "Benajiba, Yassine", "Anna John, Neha", "Anubhai, Rishita", "Muresan, Smar", "a", "Roth, Dan" ]
Instruction Tuning for Few-Shot Aspect-Based Sentiment Analysis
wassa-1.3
Poster
2210.06629
[ "https://github.com/amazon-science/instruction-tuning-for-absa" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2023.wassa-1.4.bib
https://aclanthology.org/2023.wassa-1.4/
@inproceedings{sutton-etal-2023-read, title = "You Are What You Read: Inferring Personality From Consumed Textual Content", author = "Sutton, Adam and Simchon, Almog and Edwards, Matthew and Lewandowsky, Stephan", editor = "Barnes, Jeremy and De Clercq, Orph{\'e}e and Klinger, Roman", booktitle = "Proceedings of the 13th Workshop on Computational Approaches to Subjectivity, Sentiment, {\&} Social Media Analysis", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.wassa-1.4", doi = "10.18653/v1/2023.wassa-1.4", pages = "28--38", abstract = "In this work we use consumed text to infer Big-5 personality inventories using data we have collected from the social media platform Reddit. We test our model on two datasets, sampled from participants who consumed either fiction content ($N = 913$) or news content ($N = 213$). We show that state-of-the-art models from a similar task using authored text do not translate well to this task, with average correlations of $r=.06$ between the model{'}s predictions and ground-truth personality inventory dimensions. We propose an alternate method of generating average personality labels for each piece of text consumed, under which our model achieves correlations as high as $r=.34$ when predicting personality from the text being read.", }
In this work we use consumed text to infer Big-5 personality inventories using data we have collected from the social media platform Reddit. We test our model on two datasets, sampled from participants who consumed either fiction content ($N = 913$) or news content ($N = 213$). We show that state-of-the-art models from a similar task using authored text do not translate well to this task, with average correlations of $r=.06$ between the model{'}s predictions and ground-truth personality inventory dimensions. We propose an alternate method of generating average personality labels for each piece of text consumed, under which our model achieves correlations as high as $r=.34$ when predicting personality from the text being read.
[ "Sutton, Adam", "Simchon, Almog", "Edwards, Matthew", "Lew", "owsky, Stephan" ]
You Are What You Read: Inferring Personality From Consumed Textual Content
wassa-1.4
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]