doc_id: string (36 characters)
contents: string (22 to 3.25k characters)
metadata: dict
8bbdf815-3990-4c0f-8d90-9373556025fa
# When Is Tree Search Useful For Llm Planning? It Depends On The Discriminator ## There were 63 Easter eggs in the yard. Hannah found twice as many as Helen. How many Easter eggs did Hannah find?
{ "creation_datetime": "2024-03-04", "file_name": "2402.10890v1.md", "file_path": "paper_data/2402.10890v1.md", "file_size": 67960, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
9916a9f3-2880-428f-b6bd-18a438538e58
# When Is Tree Search Useful For Llm Planning? It Depends On The Discriminator ## Python Program:

    n_easter_eggs = 63
    unit_times = 2
    total_units = unit_times + 1
    n_easter_eggs_per_unit = n_easter_eggs / total_units
    n_easter_eggs_helen = n_easter_eggs_per_unit * 1
    n_easter_eggs_hannah = n_easter_eggs_per_unit * 2
    answer = n_easter_eggs_hannah
{ "creation_datetime": "2024-03-04", "file_name": "2402.10890v1.md", "file_path": "paper_data/2402.10890v1.md", "file_size": 67960, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
91f37e4c-354c-4e6c-adeb-0f35255efba5
# When Is Tree Search Useful For Llm Planning? It Depends On The Discriminator ## There were 63 Easter eggs in the yard. Hannah found twice as many as Helen. How many Easter eggs did Hannah find?
{ "creation_datetime": "2024-03-04", "file_name": "2402.10890v1.md", "file_path": "paper_data/2402.10890v1.md", "file_size": 67960, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
c9b11913-d70f-46b7-80d3-0940e00a6087
# When Is Tree Search Useful For Llm Planning? It Depends On The Discriminator ## Python Program:

    eggs_in_yard = 63
    eggs_found_by_hannah = 2 * eggs_in_yard
    eggs_found_by_helen = eggs_found_by_hannah / 2
    answer = eggs_found_by_hannah
{ "creation_datetime": "2024-03-04", "file_name": "2402.10890v1.md", "file_path": "paper_data/2402.10890v1.md", "file_size": 67960, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
b5baa4bb-0ac4-4d0b-937f-3c0af83df903
# When Is Tree Search Useful For Llm Planning? It Depends On The Discriminator ## Gloria is shoe shopping when she comes across a pair of boots that fit her shoe budget. However, she has to choose between the boots and two pairs of high heels that together cost five dollars less than the boots. If one pair of heels costs $33 and the other costs twice as much, how many dollars are the boots?
{ "creation_datetime": "2024-03-04", "file_name": "2402.10890v1.md", "file_path": "paper_data/2402.10890v1.md", "file_size": 67960, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
0e7d07bc-812f-4639-a89f-8fe58524e589
# When Is Tree Search Useful For Llm Planning? It Depends On The Discriminator ## Python Program:

    price_boots = 50
    price_heels = 33
    price_heels_twice = 2 * price_heels
    price_heels_total = price_heels + price_heels_twice
    price_boots_difference = price_boots - price_heels_total
    answer = price_boots_difference
{ "creation_datetime": "2024-03-04", "file_name": "2402.10890v1.md", "file_path": "paper_data/2402.10890v1.md", "file_size": 67960, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
04098d8b-a92f-411c-bf1f-550f28ca6103
# Jamdec: Unsupervised Authorship Obfuscation Using Constrained Decoding Over Small Language Models Jillian Fisher¹, Ximing Lu¹,², Jaehun Jung¹, Liwei Jiang¹,², Zaid Harchaoui¹, Yejin Choi¹,² (¹University of Washington, ²Allen Institute for Artificial Intelligence) jrfish@uw.edu
{ "creation_datetime": "2024-03-04", "file_name": "2402.08761v1.md", "file_path": "paper_data/2402.08761v1.md", "file_size": 125844, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
449dbeb2-abb8-45f9-8f83-e28aa6e05184
# Jamdec: Unsupervised Authorship Obfuscation Using Constrained Decoding Over Small Language Models ## Abstract The permanence of online content combined with enhanced authorship identification techniques calls for stronger computational methods to protect the identity and privacy of online authorship when needed, e.g., blind reviews for scientific papers, anonymous online reviews, or anonymous interactions in mental health forums. In this paper, we propose an unsupervised inference-time approach to authorship obfuscation to address the unique challenges of authorship obfuscation: lack of supervision data for diverse authorship and domains, and the need for a sufficient level of revision beyond simple paraphrasing to obfuscate the authorship, all the while preserving the original content and fluency. We introduce JAMDEC, a user-controlled, inference-time algorithm for authorship obfuscation that can in principle be applied to any text and authorship. Our approach builds on small language models such as GPT2-XL in order to help avoid disclosing the original content to proprietary LLM APIs, while also reducing the performance gap between small and large language models via algorithmic enhancement. The key idea behind our approach is to boost the creative power of smaller language models through constrained decoding, while also allowing for user-specified controls and flexibility. Experimental results demonstrate that our approach based on GPT2-XL outperforms previous state-of-the-art methods based on comparably small models, while performing competitively against GPT3.5 175B, a proprietary model that is two orders of magnitude larger.
{ "creation_datetime": "2024-03-04", "file_name": "2402.08761v1.md", "file_path": "paper_data/2402.08761v1.md", "file_size": 125844, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
cb272551-a64f-4e40-9403-23328b838500
# Jamdec: Unsupervised Authorship Obfuscation Using Constrained Decoding Over Small Language Models ## 1 Introduction Authorship obfuscation, the task of rewriting a text to protect the original writer's identity, has become increasingly important given the permanence of online content combined with new enhanced authorship attribution techniques (Bright et al., 2021; Altakrori et al., 2022). This task holds implications in various domains, including online privacy and blind review in academic research. However, safeguarding an authorship style, while maintaining the same content and grammatical fluency, is a complex task. Unlike other authorship-related tasks such as paraphrasing or style transfer, authorship obfuscation poses unique technical challenges due to its different assumptions. For example, paraphrasing involves rephrasing an original text, but can be accomplished without altering the original style. Conversely, for style transfer, the task requires a predetermined target style. However, in the case of authorship obfuscation, there is no fixed endpoint style to guide the generation because the main goal is the absence or avoidance of a particular style. In fact, it may involve incorporating multiple styles or navigating a wide spectrum of possibilities to achieve success.1 One approach to authorship obfuscation is to use large language models, such as ChatGPT or GPT4. However, these models require large computing resources. Furthermore, if a user employs a method based on proprietary LLMs that retain user data, they are vulnerable to extra privacy threats or the leakage of their original content. To mitigate these risks, non-model or smaller closed model methods are preferred. Other previous approaches for authorship obfuscation include the use of round-trip machine translation (Keswani et al., 2016), strict rule-based algorithms (Karadzhov et al., 2017), or iterative-change algorithms (Mahmood et al., 2019a). However, these methods either do not lead to enough modification (Keswani et al., 2016), diverge into grammatically incorrect text due to the rigid rules (Karadzhov et al., 2017), or require an additional large-scale authorship corpus (Mahmood et al., 2019a). Therefore, we find a notable performance gap between previous methods developed for smaller models and modern LLMs. To overcome these limitations, we present JAMDEC, a light-weight, user-controlled, unsupervised inference-time algorithm for authorship obfuscation that can be used with any arbitrary text. JAMDEC employs smaller base
{ "creation_datetime": "2024-03-04", "file_name": "2402.08761v1.md", "file_path": "paper_data/2402.08761v1.md", "file_size": 125844, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
ff030333-6858-427b-9db5-c9bfb2dfc05d
# Jamdec: Unsupervised Authorship Obfuscation Using Constrained Decoding Over Small Language Models ## 1 Introduction (Keswani et al., 2016), diverge into grammatically incorrect text due to the rigid rules (Karadzhov et al., 2017), or require an additional large-scale authorship corpus (Mahmood et al., 2019a). Therefore, we find a notable performance gap between previous methods developed for smaller models and modern LLMs. To overcome these limitations, we present JAMDEC, a light-weight, user-controlled, unsupervised inference-time algorithm for authorship obfuscation that can be used with any arbitrary text. JAMDEC employs smaller base models such as GPT2, which by themselves are too weak to produce accurate paraphrases, let alone obfuscation (Jung et al., 2023). To overcome this weakness, we frame the task as a constrained decoding problem, where the constraints are given as lexical keywords that must be included to control the content of the generation. To identify these keywords automatically, we leverage likelihood scores from smaller models. Lastly, since the decoded text is not guaranteed to be faithful to the original text, we design a filtering step that can be uniquely adjusted by the user. An overview of JAMDEC's three-stage framework can be found in Figure 1. The name is inspired by Jambalaya, the popular American Creole and Cajun rice dish which is a mixture of meat, vegetables and spices. We provide experimentation on two datasets, scholarly articles and diary-style entries, with a range of three to ten authors. The results show that JAMDEC performs better than state-of-the-art methods of similar size and comparably to significantly larger language models in both automatic and human evaluations. In particular, we demonstrate that JAMDEC is able to obfuscate while simultaneously preserving the original content, which previous methods cannot achieve.
{ "creation_datetime": "2024-03-04", "file_name": "2402.08761v1.md", "file_path": "paper_data/2402.08761v1.md", "file_size": 125844, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
d0c774b0-71a9-42ed-b96e-aa702ac74c40
# Jamdec: Unsupervised Authorship Obfuscation Using Constrained Decoding Over Small Language Models ## 2 Background On Authorship Obfuscation Setup. Let $A$ be a given set of authors. We consider an original text $y_{\text{orig}}$ that was written by author $B \in A$. The task of authorship obfuscation aims to create a new text $y_{\text{obf}}$ which cannot be identified as written by author $B$. For evaluation, we consider a classification model $M(\cdot)$ (also known as an authorship attribution model), which has been trained to classify texts of each author in $A$. The aim is to create a method $f(\cdot)$ such that $M(f(y_{\text{orig}})) \neq B$. Measure of a Successful Algorithm. Our goal is to create an obfuscated version of the original text that preserves the meaning and intent of the original text, while making it difficult to attribute the authorship to the original author. Following past literature (Mahmood et al., 2019a; PAN2018; Altakrori et al., 2022), we consider an obfuscation method successful if the obfuscated text satisfies the following three requirements:

- **Style Concealment**: Analysis of the obfuscated text does not reveal the original author. This is usually measured using an authorship attribution model or a threat model (Mahmood et al., 2019b).
- **Content Preservation**: The content of the original text is maintained. Metrics such as METEOR (Lavie et al., 2004) and Natural Language Inference (NLI) models (Liu et al., 2022) can be used to measure content overlap.
- **Language Quality**: The obfuscated text is grammatically correct and natural sounding. Grammaticality of a text can be measured using a Corpus of Linguistic Acceptability (CoLA) model (Warstadt et al., 2019). Text fluency can be determined using human evaluation.

Inference-time Algorithms for Authorship Obfuscation. To address this task, we propose using an inference-time algorithm that can obfuscate a text on-the-fly, rather than training a model on a specific author's writing style. We choose to use a decoding-time algorithm over fine-tuning as it offers several benefits, including more flexibility in the generation and the ability to obfuscate text

| Dataset | Metric | Mutant-X (ENS) | Mutant-X (RFC) | Paraphrase | Machine Transl. | Stylometric | JAMDEC (W/O Stylo) | JAMDEC (W/ Stylo) |
|---|---|---|---|---|---|---|---|---|
{ "creation_datetime": "2024-03-04", "file_name": "2402.08761v1.md", "file_path": "paper_data/2402.08761v1.md", "file_size": 125844, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
2b152ba5-2d3f-4f30-98d8-07fc6eae57a9
# Jamdec: Unsupervised Authorship Obfuscation Using Constrained Decoding Over Small Language Models ## 2 Background On Authorship Obfuscation Authorship Obfuscation. To address this task, we propose using an inference-time algorithm that can obfuscate a text on-the-fly, rather than training a model on a specific author's writing style. We choose to use a decoding-time algorithm over fine-tuning as it offers several benefits, including more flexibility in the generation and the ability to obfuscate text

| Dataset | Metric | Mutant-X (ENS) | Mutant-X (RFC) | Paraphrase | Machine Transl. | Stylometric | JAMDEC (W/O Stylo) | JAMDEC (W/ Stylo) |
|---|---|---|---|---|---|---|---|---|
| AMT-3 | Drop Rate (ENS) | ⋆ | -0.04 | 0.04 | 0.04 | -0.03 | 0.11 | 0.11 |
| AMT-3 | Drop Rate (BertAA) | 0.10 | 0.04 | 0.04 | 0.08 | 0.12 | 0.04 | 0.04 |
| AMT-3 | METEOR | 0.80 | 0.81 | 0.55 | 0.69 | 0.80 | 0.62 | 0.62 |
| AMT-3 | NLI | 0.60 | 0.61 | 0.62 | 0.75 | 0.50 | 0.75 | 0.81 |
| AMT-3 | CoLA | 0.50 | 0.51 | 0.78 | 0.69 | 0.46 | 0.85 | 0.79 |
| AMT-3 | Task Score (ENS) | ⋆ | 0.36 | 0.48 | 0.49 | 0.31 | 0.57 | 0.57 |
| AMT-3 | Task Score (BertAA) | 0.40 | 0.39 | 0.48 | 0.51 | 0.36 | 0.55 | 0.55 |
| AMT-5 | Drop Rate (ENS) | ⋆ | 0.08 | 0.20 | 0.20 | 0.23 | 0.10 | 0.13 |
| AMT-5 | Drop Rate (BertAA) | 0.07 | 0.00 | -0.06 | 0.07 | 0.04 | 0.14 | 0.14 |
| AMT-5 | METEOR | 0.74 | 0.72 | 0.57 | 0.68 | 0.79 | 0.61 | 0.61 |
| AMT-5 | NLI | 0.56 | 0.57 | 0.62 | 0.74 | 0.48 | 0.76 | 0.82 |

CoLA 0.
{ "creation_datetime": "2024-03-04", "file_name": "2402.08761v1.md", "file_path": "paper_data/2402.08761v1.md", "file_size": 125844, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
e56f6745-955e-4d4c-ab4c-b41294a7fada
# Jamdec: Unsupervised Authorship Obfuscation Using Constrained Decoding Over Small Language Models ## 2 Background On Authorship Obfuscation

20 0.20 0.23 0.10 0.13

| Dataset | Metric | Mutant-X (ENS) | Mutant-X (RFC) | Paraphrase | Machine Transl. | Stylometric | JAMDEC (W/O Stylo) | JAMDEC (W/ Stylo) |
|---|---|---|---|---|---|---|---|---|
| AMT-5 | Drop Rate (BertAA) | 0.07 | 0.00 | -0.06 | 0.07 | 0.04 | 0.14 | 0.14 |
| AMT-5 | METEOR | 0.74 | 0.72 | 0.57 | 0.68 | 0.79 | 0.61 | 0.61 |
| AMT-5 | NLI | 0.56 | 0.57 | 0.62 | 0.74 | 0.48 | 0.76 | 0.82 |
| AMT-5 | CoLA | 0.51 | 0.55 | 0.77 | 0.69 | 0.46 | 0.85 | 0.79 |
| AMT-5 | Task Score (ENS) | ⋆ | 0.40 | 0.53 | 0.54 | 0.39 | 0.57 | 0.58 |
| AMT-5 | Task Score (BertAA) | 0.38 | 0.37 | 0.44 | 0.50 | 0.33 | 0.58 | 0.58 |
| AMT-10 | Drop Rate (ENS) | ⋆ | 0.10 | 0.07 | 0.19 | 0.11 | 0.44 | 0.41 |
| AMT-10 | Drop Rate (BertAA) | 0.03 | 0.04 | -0.04 | 0.06 | 0.00 | -0.03 | -0.02 |
| AMT-10 | METEOR | 0.84 | 0.86 | 0.54 | 0.66 | 0.81 | 0.60 | 0.61 |
| AMT-10 | NLI | 0.61 | 0.64 | 0.61 | 0.73 | 0.45 | 0.79 | 0.79 |
| AMT-10 | CoLA | 0.53 | 0.57 | 0.77 | 0.68 | 0.46 | 0.78 | 0.78 |
| AMT-10 | Task Score (ENS) | ⋆ | 0.44 | 0.48 | 0.53 | 0.34 | 0.67 | 0.66 |
| AMT-10 | Task Score (BertAA) | 0.39 | 0.42 | 0.45 | 0.49 | 0.30 | 0.51 | 0.52 |
| BLOG-5 | Drop Rate (ENS) | ⋆ | 0.28 | 0.31 | 0.18 | 0.03 | 0.03 | 0.03 |

Drop Rate (BertAA) 0.06
{ "creation_datetime": "2024-03-04", "file_name": "2402.08761v1.md", "file_path": "paper_data/2402.08761v1.md", "file_size": 125844, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
56c78daa-ddea-4ca8-a1e0-4ba114993ffa
# Jamdec: Unsupervised Authorship Obfuscation Using Constrained Decoding Over Small Language Models ## 2 Background On Authorship Obfuscation

0.46 0.78 0.78

| Dataset | Metric | Mutant-X (ENS) | Mutant-X (RFC) | Paraphrase | Machine Transl. | Stylometric | JAMDEC (W/O Stylo) | JAMDEC (W/ Stylo) |
|---|---|---|---|---|---|---|---|---|
| AMT-10 | Task Score (ENS) | ⋆ | 0.44 | 0.48 | 0.53 | 0.34 | 0.67 | 0.66 |
| AMT-10 | Task Score (BertAA) | 0.39 | 0.42 | 0.45 | 0.49 | 0.30 | 0.51 | 0.52 |
| BLOG-5 | Drop Rate (ENS) | ⋆ | 0.28 | 0.31 | 0.18 | 0.03 | 0.03 | 0.03 |
| BLOG-5 | Drop Rate (BertAA) | 0.06 | 0.30 | 0.47 | 0.0 | 0.0 | 0.29 | 0.29 |
| BLOG-5 | METEOR | 0.79 | 0.59 | 0.44 | 0.58 | 0.82 | 0.53 | 0.52 |
| BLOG-5 | NLI | 0.58 | 0.47 | 0.49 | 0.65 | 0.75 | 0.68 | 0.68 |
| BLOG-5 | CoLA | 0.44 | 0.46 | 0.63 | 0.55 | 0.44 | 0.74 | 0.73 |
| BLOG-5 | Task Score (ENS) | ⋆ | 0.40 | 0.47 | 0.46 | 0.41 | 0.48 | 0.48 |
| BLOG-5 | Task Score (BertAA) | 0.36 | 0.41 | 0.53 | 0.40 | 0.40 | 0.57 | 0.57 |
| BLOG-10 | Drop Rate (ENS) | ⋆ | 0.13 | 0.35 | 0.30 | 0.21 | 0.23 | 0.32 |
| BLOG-10 | Drop Rate (BertAA) | 0.37 | 0.06 | 0.40 | 0.11 | 0.08 | 0.32 | 0.32 |
| BLOG-10 | METEOR | 0.55 | 0.85 | 0.43 | 0.61 | 0.82 | 0.54 | 0.53 |
| BLOG-10 | NLI | 0.46 | 0.61 | 0.46 | 0.62 | 0.75 | 0.67 | 0.67 |
| BLOG-10 | CoLA | 0.47 | 0.45 | 0.62 | 0.54 | 0.41 | 0.74 | 0.74 |

Task Score (ENS) ⋆ 0.40 0.48 0.49
{ "creation_datetime": "2024-03-04", "file_name": "2402.08761v1.md", "file_path": "paper_data/2402.08761v1.md", "file_size": 125844, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
1b445a7f-f672-4141-9520-f277a014b2ff
# Jamdec: Unsupervised Authorship Obfuscation Using Constrained Decoding Over Small Language Models ## 2 Background On Authorship Obfuscation

.32 0.32

| Dataset | Metric | Mutant-X (ENS) | Mutant-X (RFC) | Paraphrase | Machine Transl. | Stylometric | JAMDEC (W/O Stylo) | JAMDEC (W/ Stylo) |
|---|---|---|---|---|---|---|---|---|
| BLOG-10 | METEOR | 0.55 | 0.85 | 0.43 | 0.61 | 0.82 | 0.54 | 0.53 |
| BLOG-10 | NLI | 0.46 | 0.61 | 0.46 | 0.62 | 0.75 | 0.67 | 0.67 |
| BLOG-10 | CoLA | 0.47 | 0.45 | 0.62 | 0.54 | 0.41 | 0.74 | 0.74 |
| BLOG-10 | Task Score (ENS) | ⋆ | 0.40 | 0.48 | 0.49 | 0.46 | 0.55 | 0.58 |
| BLOG-10 | Task Score (BertAA) | 0.43 | 0.37 | 0.49 | 0.42 | 0.41 | 0.58 | 0.58 |

can be implemented on a sentence, paragraph, or full document level.
{ "creation_datetime": "2024-03-04", "file_name": "2402.08761v1.md", "file_path": "paper_data/2402.08761v1.md", "file_size": 125844, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
d55dfdda-8302-4c10-96ac-8dcb8f2db369
# Jamdec: Unsupervised Authorship Obfuscation Using Constrained Decoding Over Small Language Models ## 3.1 Step 1: Keyword Extraction without access to a corpus of the author's writing. Our proposed algorithm draws inspiration from various sources, including Diverse Beam Search (Vijayakumar et al., 2016), Lexically Constrained Decoding (Post and Vilar, 2018), and Neurologic decoding (Lu et al., 2021).
{ "creation_datetime": "2024-03-04", "file_name": "2402.08761v1.md", "file_path": "paper_data/2402.08761v1.md", "file_size": 125844, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
7a7f8e8a-dd4a-46fb-b87a-68e119d54558
# Jamdec: Unsupervised Authorship Obfuscation Using Constrained Decoding Over Small Language Models ## 3 Jamdec We present JAMDEC, which obfuscates any text without any prior knowledge of the author. JAMDEC is composed of three main steps: keyword extraction, over-generation, and filtering. First, we identify crucial keywords that encapsulate the original text's content, and later ensure their inclusion in the generated obfuscated text to maintain content preservation. We explore multiple keyword extraction methods, including embedding-based extraction and likelihood-based extraction. Embedding-based method. KeyBERT is a popular method for keyword extraction (Grootendorst, 2020), which uses BERT embeddings and cosine similarity to find the sub-phrases in a document that are the most similar to the document itself. Likelihood-based method. At a high level, we select the top-k tokens with the lowest conditional probabilities, as measured by a specific language model, as keywords for a given sentence. Intuitively, these tokens represent content that a language model might most struggle to generate accurately. We experiment with both an auto-regressive language model, GPT2, and a text-to-text language model, T5. For GPT2, we compute the likelihood of each token conditioned on its preceding context. For T5, we leverage its fill-in-the-blank ability by providing an input sentence with a specific token masked. We then calculate the probability of T5 generating that particular token as the infill, which serves as the likelihood of that token. Since all the methods yield valid keywords in practice (see Appendix A.3), we utilize them all to generate numerous candidates for subsequent filtering to achieve high-quality obfuscation.
{ "creation_datetime": "2024-03-04", "file_name": "2402.08761v1.md", "file_path": "paper_data/2402.08761v1.md", "file_size": 125844, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
31ef0eb7-9a1b-4223-9315-8ae9d8e11954
# Jamdec: Unsupervised Authorship Obfuscation Using Constrained Decoding Over Small Language Models ## 3.2 Step 2: Over-Generating Candidate Obfuscations Next, we utilize the previously extracted keywords and the left context of $y_{\text{orig}}$ to over-generate many variations of $y_{\text{orig}}$. We use $m$ sentences occurring before $y_{\text{orig}}$ as the left context to encourage fluid generation. Our goal is to produce multiple generations constrained by the extracted keywords, ensuring content similar to $y_{\text{orig}}$. At the same time, we aim to produce a variety of generations with diverse authorship styles to achieve obfuscation effectively. To achieve these seemingly opposing goals, we merge two decoding techniques, Lexically Constrained Beam Search (Post and Vilar, 2018) and Diverse Beam Search (Vijayakumar et al., 2016), and refer to the combined approach as Constrained Diverse Beam Search (CoDi-BS). **Constrained Diverse Beam Search.** CoDi-BS employs Constrained Beam Search (Co-BS) as the basic algorithm, but uses the scoring function from Diverse Beam Search (Di-BS) instead of plain likelihoods when iteratively selecting the top $k$ candidates from the text bank. Its objective function can be represented as: $$\operatorname*{arg\,max}_{w\in W}P_{w}(y|x)+\lambda_{1}D(y,Y)+\lambda_{2}C(y)$$ where $x$ is the sequence of previous tokens, $D(y,Y)$ is a diversity term measuring the dissimilarity between the output sequence $y$ and the set of previously selected sequences $Y$ within the beam, $C(y)$ is a constraint function quantifying the degree to which the output sequence $y$ satisfies the constraints, $\lambda_1, \lambda_2$ are hyperparameters controlling the weight of the diversity and constraint penalty, and $w \in W$ is the parameter vector. Intuitively, CoDi-BS promotes candidates distinct from the previously chosen ones, while also ensuring that they satisfy a specific number of constraints. Appendix H has an overview of the CoDi-BS algorithm and details of both Constrained and Diverse Beam Search separately.
{ "creation_datetime": "2024-03-04", "file_name": "2402.08761v1.md", "file_path": "paper_data/2402.08761v1.md", "file_size": 125844, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
2c057901-494e-40f7-a607-1b6bbd55c93b
# Jamdec: Unsupervised Authorship Obfuscation Using Constrained Decoding Over Small Language Models ## 3.3 Step 3: Filtering Candidate Obfuscations The filtering stage comprises multiple steps to refine the pool of candidates from the previous stage, ultimately choosing the most suitable obfuscation. This step enables the user to have full control in selecting generations based on any metric. In our pipeline, we first filter based on an NLI (Natural Language Inference) threshold, which evaluates the coherence and content overlap between the generations and the original text. Next, we further filter the remaining candidates based on a CoLA (Corpus of Linguistic Acceptability) threshold, which focuses on the grammatical correctness and linguistic acceptability of the generations. Finally, and optionally, taking into account any previous knowledge of the author, we choose the ultimate obfuscation to be the generation that deviates the most from the original author's style. In our experiment, we do not assume any prior knowledge of the authors to showcase the effectiveness of our method in a more challenging situation.
{ "creation_datetime": "2024-03-04", "file_name": "2402.08761v1.md", "file_path": "paper_data/2402.08761v1.md", "file_size": 125844, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
85822ef6-05c3-4a21-9709-15026a8b02c5
# Jamdec: Unsupervised Authorship Obfuscation Using Constrained Decoding Over Small Language Models ## 4 Experiments We evaluate two versions of JAMDEC on two benchmarks in distinct domains: scholarly passages and diary-style entries. For baselines, we consider three state-of-the-art methods for authorship obfuscation: Mutant-X (Mahmood et al., 2019a), Round-Trip Translation (Keswani et al., 2016), and Stylometric (Karadzhov et al., 2017), and a paraphrasing method (Zhang et al., 2020). As a stronger baseline, we also consider zero-shot prompting of GPT3.5 175B, which is orders of magnitude larger (Brown et al., 2020). For further details, see Appendix G; for access to the code, see here.
{ "creation_datetime": "2024-03-04", "file_name": "2402.08761v1.md", "file_path": "paper_data/2402.08761v1.md", "file_size": 125844, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
db1368fe-016a-4970-a295-4581fe2f4e9f
# Jamdec: Unsupervised Authorship Obfuscation Using Constrained Decoding Over Small Language Models ## 4.1 Setup Datasets. We used two datasets to evaluate JAMDEC. The first is the Extended-Brennan-Greenstadt (Brennan et al., 2012), which is a collection of "scholarly" short (500-word) paragraphs gathered from Amazon Mechanical Turk (AMT). We use this dataset, which we refer to as AMT, to produce three test datasets with 3, 5, and 10 authors, with n = 27, 30, 49 texts respectively (AMT-3, AMT-5, AMT-10). The second dataset is the Blog Authorship corpus (Schler et al., 2006), a collection of blogs (diary-style entries) that were posted to blog.com. Similarly, we use this dataset to construct two datasets with 5 and 10 authors, with n = 72, 150 texts respectively (BLOG-5, BLOG-10). JAMDEC Configuration. To promote diversity of generated candidates, we employ all three types of keyword extraction methods (KeyBERT, Likelihood-GPT2, and Likelihood-T5), and either CoDi-BS or only CBS. We ran with a beam width of 50. All other details can be found in Appendix G. In the filtering stage, we occasionally find cases where none of the generations passes either the NLI or CoLA filter. We consider two ways of handling such cases: (1) JAMDEC, where we simply output the original sentence, and (2) JAMDEC + STYLO, where we run a basic stylometric obfuscator on the original sentence.3

| Method | Generation |
|---|---|
| Original | |
{ "creation_datetime": "2024-03-04", "file_name": "2402.08761v1.md", "file_path": "paper_data/2402.08761v1.md", "file_size": 125844, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
1c46156b-c1b0-4597-849a-04644e0b6b3c
# Jamdec: Unsupervised Authorship Obfuscation Using Constrained Decoding Over Small Language Models ## 4.1 Setup

| Method | Generation |
|---|---|
| Original | The Ex. An ex holding a grudge can do a lot of damage in a short amount of time. He knows enough to open accounts in your name, and he has the motive to hurt you. |
| Mutant-X | The Ex. An ex holding a |
{ "creation_datetime": "2024-03-04", "file_name": "2402.08761v1.md", "file_path": "paper_data/2402.08761v1.md", "file_size": 125844, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
f1aa5342-fabd-43e1-b764-7c884e8a397c
# Jamdec: Unsupervised Authorship Obfuscation Using Constrained Decoding Over Small Language Models ## 4.1 Setup

| Method | Generation |
|---|---|
| Mutant-X | The Ex. An ex holding a … bitterness … damage in a … length quantity … ascend … accounts in … impair … You. |
| Paraphrase | |
{ "creation_datetime": "2024-03-04", "file_name": "2402.08761v1.md", "file_path": "paper_data/2402.08761v1.md", "file_size": 125844, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
7668373e-991c-466b-ad99-03b08761b7f2
# Jamdec: Unsupervised Authorship Obfuscation Using Constrained Decoding Over Small Language Models ## 4.1 Setup

| Method | Generation |
|---|---|
| Mutant-X (cont.) | … You. |
| Paraphrase | A lot of damage can be done In a short period of time. He knows … how to … hurt you. |
| Machine Translation | |
{ "creation_datetime": "2024-03-04", "file_name": "2402.08761v1.md", "file_path": "paper_data/2402.08761v1.md", "file_size": 125844, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
00ce4256-20ce-409e-a81d-7a3ce89ae081
# Jamdec: Unsupervised Authorship Obfuscation Using Constrained Decoding Over Small Language Models ## 4.1 Setup

| Method | Generation |
|---|---|
| Machine Translation | The former. An … of damage in a short time. He knows enough to open accounts in your name, and he has the reason |
{ "creation_datetime": "2024-03-04", "file_name": "2402.08761v1.md", "file_path": "paper_data/2402.08761v1.md", "file_size": 125844, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
063df538-f5db-40bf-866c-6b15becde980
# Jamdec: Unsupervised Authorship Obfuscation Using Constrained Decoding Over Small Language Models ## 4.1 Setup

| Method | Generation |
|---|---|
| Machine Translation (cont.) | … short time. He knows enough to open accounts in your name, and he has the reason … |
| Stylometric | An ex … holding, a … brief … amount in time, … in your name, and he has the motive to hurt you. |
| JAMDEC | |
{ "creation_datetime": "2024-03-04", "file_name": "2402.08761v1.md", "file_path": "paper_data/2402.08761v1.md", "file_size": 125844, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
fdc57e87-00f6-4117-8c8c-8b1031afb122
# Jamdec: Unsupervised Authorship Obfuscation Using Constrained Decoding Over Small Language Models ## 4.1 Setup

| Method | Generation |
|---|---|
| Stylometric (cont.) | … amount in time, … in your name, and he has the motive to hurt you. |
| JAMDEC | The Ex. When the ex is holding his grudge against the person who caused him lot of damage to his life, he is short sighted and will do anything in his power to get back at that person, no matter how much it will hurt the person he is trying to get revenge against. |
{ "creation_datetime": "2024-03-04", "file_name": "2402.08761v1.md", "file_path": "paper_data/2402.08761v1.md", "file_size": 125844, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
a6a78c97-d6ed-4614-8223-70e176b4d1d7
# Jamdec: Unsupervised Authorship Obfuscation Using Constrained Decoding Over Small Language Models ## 4.1 Setup

| Method | Generation |
|---|---|
| JAMDEC (cont.) | … back at that person, no matter how much it will hurt the person he is trying to get revenge against. … enough to open accounts in your name, and he has the motive to hurt you. |
| | The Ex. When the ex is holding his grudge against the person who caused him lot of damage to his life, he is short sighted and will do anything in his power to get |
{ "creation_datetime": "2024-03-04", "file_name": "2402.08761v1.md", "file_path": "paper_data/2402.08761v1.md", "file_size": 125844, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
8a3e6143-8ee8-4084-b234-d169990f4b4e
# Jamdec: Unsupervised Authorship Obfuscation Using Constrained Decoding Over Small Language Models ## 4.1 Setup

| Method | Generation |
|---|---|
| | When the ex is holding his grudge against the person who caused him lot of damage to his life, he is short sighted and will do anything in his power to get back at that person, no matter how much it will hurt the person he is trying to get revenge against. … enough to open accounts in your name, and he has the … reason … to hurt you. |

Baselines.4 We use the following baselines. Stylometric Obfuscation: A styl
{ "creation_datetime": "2024-03-04", "file_name": "2402.08761v1.md", "file_path": "paper_data/2402.08761v1.md", "file_size": 125844, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
94a231ad-aa92-4adf-ac0b-7d325fa1ebe8
# Jamdec: Unsupervised Authorship Obfuscation Using Constrained Decoding Over Small Language Models ## 4.1 Setup

| | … to hurt you. |

Baselines.4 We use the following baselines. Stylometric Obfuscation: A stylometric obfuscation method (Stylometric) proposed by Karadzhov et al. (2017) calculates a suite of statistical features (e.g., average number of words per sentence, word frequency, etc.) that are indicative of style, then modifies the text such that these metrics align with an "average" value, pre-calculated on a training set. Mutant-X: Mutant-X (Mahmood et al., 2019a) is a genetic algorithm which iteratively substitutes words in the original text with synonyms selected by an internal classifier. Additionally, at random iterations, it incorporates a "crossover" effect that involves cutting two parent texts at a random position and combining them to create two new child texts. This method does require an additional authorship corpus to train the internal classifier. For consistency, we adopt the same features and architectures for the internal classifier (Ensemble and Random Forest) as suggested in the subsequent work by Haroon et al. (2021). For more information on training these classifier models, see Section 4.1. To accurately compare with all methods, we leave out any results from Mutant-X where the internal classifier matches the evaluation classifier, since we do not assume access to the evaluation models during obfuscation. Paraphrasing: Although paraphrasing has a slightly different goal than authorship obfuscation, we include the comparison for a thorough investigation of all methods. We employ a state-of-the-art paraphrasing model, PEGASUS Paraphrase (Zhang et al., 2020; par), a PEGASUS model fine-tuned on a self-supervised task for paraphrasing. Round-Trip MT: Additionally, we consider a baseline powered by round
{ "creation_datetime": "2024-03-04", "file_name": "2402.08761v1.md", "file_path": "paper_data/2402.08761v1.md", "file_size": 125844, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
7ef9fb4c-6f98-45fd-b4c1-9295a685322f
# Jamdec: Unsupervised Authorship Obfuscation Using Constrained Decoding Over Small Language Models ## 4.1 Setup matches the evaluation classifier, since we do not assume access to the evaluation models during obfuscation. Paraphrasing: Although paraphrasing has a slightly different goal than authorship obfuscation, we include the comparison for a thorough investigation of all methods. We employ a state-of-the-art paraphrasing model, PEGASUS Paraphrase (Zhang et al., 2020; par), a PEGASUS model fine-tuned on a self-supervised task for paraphrasing. Round-Trip MT: Additionally, we consider a baseline powered by round-trip translation, a popular approach for authorship obfuscation (Keswani et al., 2016). We implement the approach using M2M100, a state-of-the-art translation model, translating English text into German, then to French, and finally back to English. GPT3.5: Lastly, considering the significant progress made in large language models, we include a comparison with zero-shot prompted GPT3.5 (text-davinci-003) (Brown et al., 2020). We consider two approaches: sentence-level obfuscation (obfuscating each sentence individually), and paragraph-level obfuscation (obfuscating the entire text as a whole). We note that prompt selection is very important, and we tried to find the best prompt for the task. The specific prompts utilized for this purpose can be found in Appendix G. Due to financial constraints, we limit this baseline to AMT-3. A time consumption analysis of these methods can be found in Appendix E.

| Metric | GPT3.5 (Sentence) | GPT3.5 (Paragraph) | JAMDEC (W/O Stylo) | JAMDEC (W/ Stylo) |
|---|---|---|---|---|
| Drop Rate (ENS) | 0.23 | 0.23 | 0.11 | 0.11 |
| Drop Rate (BertAA) | 0.13 | 0.09 | 0.04 | 0.04 |
| METEOR | 0.33 | 0.41 | 0.62 | 0.62 |
| NLI | 0.77 | 0.73 | 0.75 | 0.81 |
| CoLA | 0.76 | 0.80 | 0.85 | 0.79 |
| Task Score (ENS) | 0.59 | 0.59 | 0.57 | 0.57 |

Task Score (B
{ "creation_datetime": "2024-03-04", "file_name": "2402.08761v1.md", "file_path": "paper_data/2402.08761v1.md", "file_size": 125844, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
09ef9fe3-0680-438e-b79d-33db5b51e7cd
# Jamdec: Unsupervised Authorship Obfuscation Using Constrained Decoding Over Small Language Models ## 4.1 Setup

| Metric | GPT3.5 (Sentence) | GPT3.5 (Paragraph) | JAMDEC (W/O Stylo) | JAMDEC (W/ Stylo) |
|---|---|---|---|---|
| Drop Rate (ENS) | 0.23 | 0.23 | 0.11 | 0.11 |
| Drop Rate (BertAA) | 0.13 | 0.09 | 0.04 | 0.04 |
| METEOR | 0.33 | 0.41 | 0.62 | 0.62 |
| NLI | 0.77 | 0.73 | 0.75 | 0.81 |
| CoLA | 0.76 | 0.80 | 0.85 | 0.79 |
| Task Score (ENS) | 0.59 | 0.59 | 0.57 | 0.57 |
| Task Score (BertAA) | 0.55 | 0.54 | 0.55 | 0.55 |

Automatic Evaluation. We evaluate all methods along the following three axes. 1. Style Concealment: In line with past work, we use two authorship attribution models trained on stylometric features for authorship verification. The first employs Writeprints-static (Brennan et al., 2012), a collection of lexical and syntactic features, such as word length, average word count, and usage of function words, among others. Recognizing that classification from one model may not transfer effectively to all text (Mahmood et al., 2019a), we adopt the ensemble attribution classifier (ENS) methodology introduced by Haroon et al. (2021), which comprises several attribute-based classifiers, each utilizing different attributes, and leverages a voting system for their aggregation. Since this has been shown to give the most accurate classification results (Haroon et al., 2021), we use ENS for both the Mutant-X method and evaluation. We also train a random forest classifier (RFC) as another internal classifier for the Mutant-X method. Further details on the training can be found in Appendix G. Second, we use a more sophisticated architecture, the BertAA model (Fabien et al., 2020), a BERT model fine-tuned specifically for authorship attribution.5 Using an authorship attribution model (either ENS or BertAA), we calculate the *Drop Rate*, the average drop in the percentage of texts identified as the true author, comparing obfuscated text to the original text. This metric accounts for any inaccuracy of the attribution models; see Appendix G.2 for more information. We note that an adversarial threat model can be used for further evaluation and comparison
{ "creation_datetime": "2024-03-04", "file_name": "2402.08761v1.md", "file_path": "paper_data/2402.08761v1.md", "file_size": 125844, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
cc7a871f-4591-4ec4-9d6e-85a3d3c3d0b2
# Jamdec: Unsupervised Authorship Obfuscation Using Constrained Decoding Over Small Language Models ## 4.1 Setup in Appendix G. Second, we use a more sophisticated architecture, the BertAA model (Fabien et al., 2020), a BERT model fine-tuned specifically for authorship attribution.5 Using an authorship attribution model (either ENS or BertAA), we calculate the *Drop Rate*, the average drop in the percentage of texts identified as the true author, comparing obfuscated text to the original text. This metric accounts for any inaccuracy of the attribution models; see Appendix G.2 for more information. We note that an adversarial threat model can be used for further evaluation and comparison (Zhai et al., 2022; Mahmood et al., 2020), and we therefore provide an ablation study in Appendix C using this type of evaluation. 2. Content Preservation: To maintain consistency with previous studies, we compute the METEOR (Banerjee and Lavie, 2005) score between the original and obfuscated text, which evaluates token overlap (Mahmood et al., 2019a; Shetty et al., 2018). However, we note that content semantics can be preserved without direct token overlap through the use of synonyms; therefore, we also assess the probability of entailment between the original and obfuscated text using a natural language inference (NLI) model called WANLI (Liu et al., 2022). We will rely on NLI as the main component of content overlap due to its flexibility in measuring content preservation and coherence. 3. Language Quality: To measure language quality, we employ TextAttack (Morris et al., 2020), which fine-tunes RoBERTa (Liu et al., 2019) on the Corpus of Linguistic Acceptability (CoLA) (Warstadt et al., 2019). The CoLA dataset consists of 10.6k sentences that have been linguistically annotated to assess their grammatical correctness. Overall Task Score: While each of the dimensions above is crucial for the holistic evaluation of an authorship obfuscation system, we also aim to provide an aggregate of the scores into a single task score. Therefore, we also define *Task Score*, an unweighted average of the Drop Rate (using ENS or BertAA), NLI score, and CoLA score. We use the mean of the dimensions, as the task of authorship obfuscation is deemed to be successful only if all three goals are satisfied.6: $$\mathrm{Task\ Score}=\frac{\mathrm{Drop\ Rate}+\mathrm{NLI}+\mathrm{CoLA}}{3}.$$
{ "creation_datetime": "2024-03-04", "file_name": "2402.08761v1.md", "file_path": "paper_data/2402.08761v1.md", "file_size": 125844, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
dbcf0687-1542-4ae2-ac75-e8f5134ee22c
# Jamdec: Unsupervised Authorship Obfuscation Using Constrained Decoding Over Small Language Models ## 4.1 Setup have been linguistically annotated to assess their grammatical correctness. Overall Task Score: While each of the dimensions above is crucial for the holistic evaluation of an authorship obfuscation system, we also aim to provide an aggregate of the scores into a single task score. Therefore, we also define *Task Score*, an unweighted average of the Drop Rate (using ENS or BertAA), NLI score, and CoLA score. We use the mean of the dimensions, as the task of authorship obfuscation is deemed to be successful only if all three goals are satisfied.6: $$\mathrm{Task\ Score}=\frac{\mathrm{Drop\ Rate}+\mathrm{NLI}+\mathrm{CoLA}}{3}.$$ Human Evaluation. On dataset AMT-3, we additionally use human evaluations to validate our automatic measures. We randomly select 102 short passages (one to four sentences) from AMT-3 for this evaluation. We employed Amazon Mechanical Turk workers to read both the original and obfuscated text, and then asked a series of five questions to be rated on a three-point Likert scale.
{ "creation_datetime": "2024-03-04", "file_name": "2402.08761v1.md", "file_path": "paper_data/2402.08761v1.md", "file_size": 125844, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
81d6f50a-7031-454e-9f9e-f07460163144
# Jamdec: Unsupervised Authorship Obfuscation Using Constrained Decoding Over Small Language Models ## 4.2 Main Results JAMDEC has a higher Task Score than all task-specific methods and a similar or better one than GPT3.5. In Table 1 and Table 2, we present the results from the automatic evaluations. JAMDEC (with or without Stylo) with 1.5B GPT2-XL has the highest Task Scores for almost every dataset, and only a 2% lower BertAA Task Score than 175B GPT3.5. Of note is AMT-10, where it performs more than 10% higher than almost all other methods on ENS and BertAA Task Score. This indicates that JAMDEC is successful in all three goals of authorship obfuscation across different genres of text. Also, we observe that the two variations of JAMDEC perform similarly across the datasets. JAMDEC strikes a better balance between content preservation and author obfuscation. Figure 2 depicts the variability in the AMT-10 and BLOG-10 datasets' Drop Rate, NLI score, and CoLA score. Preferably, a method should score high in all metrics, resulting in a position in the top right quadrant of each graph. However, we observe a clear trade-off for each of the task-specific baselines. For example, in BLOG-10, the Paraphrase method has an ENS Drop Rate 3% higher than JAMDEC, but it also has a 12% lower CoLA rate and a 21% lower NLI, as seen by the orange dots in the top left corner and center of the bottom left and right graphs. In contrast, we observe that JAMDEC lies close to the top right in each graph, demonstrating its effectiveness in balancing the various objectives of authorship obfuscation. Other datasets show similar results and can be viewed in Appendix A.5. This is also supported by qualitative inspection, where we notice poor grammar quality in obfuscated text produced by the task-specific methods, which makes it easy to trick an automatic classifier but does not maintain the quality and content of the original text. This was particularly relevant in the BLOG datasets, which already contain informal language that can be easily corrupted by single-word replacement methods. We provide a qualitative example in Figure 3. Human evaluation confirms that JAMDEC maintains language quality while successfully obfuscating. The outcomes of the human evaluation on AMT-3 are shown in Figure 4. Similar to the automatic evaluation, JAMDEC human evaluation scores are 5%-50%
{ "creation_datetime": "2024-03-04", "file_name": "2402.08761v1.md", "file_path": "paper_data/2402.08761v1.md", "file_size": 125844, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
409043fb-19a0-444f-a562-d0a5c1a595f6
# Jamdec: Unsupervised Authorship Obfuscation Using Constrained Decoding Over Small Language Models ## 4.2 Main Results notice poor grammar quality in obfuscated text produced by the task-specific methods, which makes it easy to trick an automatic classifier but does not maintain the quality and content of the original text. This was particularly relevant in the BLOG datasets, which already contain informal language that can be easily corrupted by single-word replacement methods. We provide a qualitative example in Figure 3. Human evaluation confirms that JAMDEC maintains language quality while successfully obfuscating. The outcomes of the human evaluation on AMT-3 are shown in Figure 4. Similar to the automatic evaluation, JAMDEC human evaluation scores are 5%-50% higher for Grammar and Fluency than most other methods, including GPT3.5. For Content Preservation, JAMDEC performs on par with GPT3.5, while Machine Translation unsurprisingly scores the highest because it tends to only slightly modify the original text, as shown in Figure 3. While we observe JAMDEC to be relatively weak in Content Addition, we attribute this mainly to the limitations of the human evaluation environment. Our approach involves utilizing a left context in the beam search process, allowing the model to consider information from earlier sentences when generating subsequent ones. As a result, some generations incorporate information from earlier sentences. However, the samples used for the human evaluation were random short passages taken from the whole text, making it possible for the workers to perceive the information as an "addition" when it was actually present earlier in the passage. Despite this, we see that JAMDEC performs better than all task-specific methods in Obfuscation by at least 10%.
{ "creation_datetime": "2024-03-04", "file_name": "2402.08761v1.md", "file_path": "paper_data/2402.08761v1.md", "file_size": 125844, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
c039a11d-98f9-4c53-98fa-192e13dfaa2b
# Jamdec: Unsupervised Authorship Obfuscation Using Constrained Decoding Over Small Language Models ## 4.3 Ablation And Other Studies We conduct ablation studies7 on JAMDEC to better understand the contribution of each component. JAMDEC performs better at authorship obfuscation using **CoDi-BS**. We find that using CoDi-BS leads to an overall increase in Drop Rate of ∼6% and an increase of about 32% in the number of sentences that pass the base NLI and CoLA thresholds, with little change in NLI and CoLA score compared to only using CBS. JAMDEC + STYLO performs better in human evals *without* **the CoLA threshold.** We run an additional human evaluation with obfuscations created using JAMDEC + STYLO but *without* a final CoLA threshold. Without a final CoLA threshold, all sentences transformed using Stylo were used. It resulted in an overall increase in Obfuscation of 0.09% compared to JAMDEC + STYLO with a threshold, making it higher than all task-specific methods. However, it did have a decrease of 0.15% and 0.13% in Grammar and Fluency, respectively. JAMDEC is competitive with respect to time consumption. When optimized for time consumption, JAMDEC outperforms all other baselines on Task Score (BertAA) while maintaining a time consumption less than the average of the baselines. A full analysis can be found in Figure 10.
{ "creation_datetime": "2024-03-04", "file_name": "2402.08761v1.md", "file_path": "paper_data/2402.08761v1.md", "file_size": 125844, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
abd8565f-fdb8-464e-81f9-9625829ca602
# Jamdec: Unsupervised Authorship Obfuscation Using Constrained Decoding Over Small Language Models ## 5 Related Work Stylometry. Stylometry, a field for statistically analyzing variations in writing styles, has long been used for authorship verification (Goodman et al., 2007; Fox and Ehmoda, 2012; Jockers and Witten, 2010). Consequently, employing stylometry as a means to assess writing style served as a logical extension in the task of authorship obfuscation. Stylometric Feature Approaches. Some approaches rely solely on stylometric features to create general numerical-based rules for obfuscation. For example, in a method submitted to the PAN 2016 Author Masking Shared Task, Mansoorizadeh et al. (2016) substituted synonyms for the most frequently used terms in a text. Another method submitted to the same Shared Task, from Karadzhov et al. (2017), was more complex and relied on a set of 500+ stylometric features such as average number of words, word frequency, and punctuation. Based on these calculable attributes, the approach adjusted the text to bring the values closer to a pre-determined "average" (derived from a large training corpus). These approaches are often simple to implement, require no additional corpus, and may be used on any text. However, the rigidity of these rules often leads to incorrect grammar or non-fluent speech (Mahmood et al., 2019a; Mihaylova et al., 2016). Model Based Approaches. Other approaches incorporate more flexibility by utilizing deep learning models. One of the most successful deep learning methods is the Support Vector Machine combined with Writeprints-Static (Brennan et al., 2012), which uses a collection of 500+ stylistic features from Writeprint (Abbasi and Chen, 2008) to construct a Support Vector Machine (SVM) model for authorship detection. It then uses this classifier as a guide in conjunction with a pattern disruption method. This framework inspired additional methods, such as Mutant-X (Mahmood et al., 2019a), a genetic algorithm that utilizes an internal classifier to iteratively "mutate" a sentence. At first this method used an SVC or Random Forest architecture for the internal classifiers, but later work reported it to be more successful when an ensemble of classifiers was used (Haroon et al., 2021). There has also been work which used variational autoencoder (VAE) network models to generate differentially private obfuscations
{ "creation_datetime": "2024-03-04", "file_name": "2402.08761v1.md", "file_path": "paper_data/2402.08761v1.md", "file_size": 125844, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
6f1feace-696e-4a21-94f4-fe63b207017c
# Jamdec: Unsupervised Authorship Obfuscation Using Constrained Decoding Over Small Language Models ## 5 Related Work then uses this classifier as a guide in conjunction with a pattern disruption method. This framework inspired additional methods, such as Mutant-X (Mahmood et al., 2019a), a genetic algorithm that utilizes an internal classifier to iteratively "mutate" a sentence. At first this method used an SVC or Random Forest architecture for the internal classifiers, but later work reported it to be more successful when an ensemble of classifiers was used (Haroon et al., 2021). There has also been work which used variational autoencoder (VAE) network models to generate differentially private obfuscations (Weggenmann et al., 2022). This was done using probabilistic encoders to perform differentially private latent sampling. Another approach, which shares popularity with the task of paraphrasing, is round-trip machine translation using supervised language models. Initial implementations of this method relied on statistical machine translation techniques like Moses, as demonstrated in Keswani et al. (2016). This approach involved translating text from English to German via French and then back to English. However, this method often produced nonsensical or inaccurate content (Mihaylova et al., 2016). Fortunately, with the advancement of machine translation models, we have seen a significant increase in language quality (Altakrori et al., 2022). Authorship Imitation Approach. Although authorship imitation (or style transfer) is regarded as a distinct task, separate from authorship obfuscation, it can be used as an obfuscation strategy when the author's identity is known. For example, Shetty et al. (2017) employ prior knowledge of the original authors' qualities, such as age and gender, to train a GAN-based model to generate content in multiple styles. For example, if the author is known to be an adult, this method would rewrite the section in a teenager's tone. This strategy involves not only knowledge of the original author, but also a target style to shift to, making it a less general method for obfuscation. Jones et al. (2022) also use a similar approach by training GPT2 models to successfully mimic blog or Twitter users to deceive authorship attribution models.
{ "creation_datetime": "2024-03-04", "file_name": "2402.08761v1.md", "file_path": "paper_data/2402.08761v1.md", "file_size": 125844, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
6427bb92-3f57-44fd-b84c-6ddc1a00abd9
# Jamdec: Unsupervised Authorship Obfuscation Using Constrained Decoding Over Small Language Models ## 6 Conclusion In this work, we introduced JAMDEC, a novel approach to user-controlled, inference-time authorship obfuscation which utilizes only small, open-source language models. This technique involves three key stages: keyword extraction, constrained diverse beam search, and filtering, offering users fine-grained control over the process and yielding personalized outcomes dependent on the user's needs. We presented experiments on two diverse datasets and demonstrated that JAMDEC outperforms existing state-of-the-art methods in authorship obfuscation, while also showcasing its competitive performance against significantly larger models like GPT3.5. Our findings underscore the promise of JAMDEC as an effective strategy for authorship obfuscation, harnessing the capabilities of smaller, openly available models to achieve results on par with their larger counterparts.
{ "creation_datetime": "2024-03-04", "file_name": "2402.08761v1.md", "file_path": "paper_data/2402.08761v1.md", "file_size": 125844, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
27b92e43-a412-4bdc-9120-41614ff5f7e9
# Jamdec: Unsupervised Authorship Obfuscation Using Constrained Decoding Over Small Language Models ## 7 Limitations JAMDEC has several limitations. First, for the creation of the obfuscation candidates, we employ generations from a pre-trained language model. These models, however, have been known to add factually incorrect or hallucinatory information (Ji et al., 2022). Despite the fact that we have content-preserving filters, we have discovered that, at times, additional information can bypass these filters and make it into the final obfuscation. Second, our approach is based on producing several candidates for each obfuscation. If the approach is employed at the sentence level and the text is lengthy, it may take a long time to run. Despite the fact that we demonstrated that our method works similarly with fewer generations, it is slower than traditional stylometric-based methods. Lastly, the specific filtering techniques (e.g., NLI, CoLA) we used may carry biases into the eventual obfuscated texts. For example, CoLA might only be able to correctly filter standard, plain English, but might not be as stable on certain dialects, which may exacerbate social injustice, e.g., correcting (whitewashing) African American English dialect. Users of this authorship obfuscation technique are strongly advised to examine the method on their specific text genre before deploying it to ensure proper intended use. Although we present our method with only beneficial use in mind, we acknowledge that the task of authorship obfuscation can be potentially dangerous in itself. First, it could be misused for anonymizing people's writing style for malicious intent, e.g., spamming or making hateful comments online without taking accountability for those actions. Also, these techniques could pose the risk of violating intellectual property rights when the creative work of authors is obscured and loses credit. We urge users to think critically before using these types of methods.
{ "creation_datetime": "2024-03-04", "file_name": "2402.08761v1.md", "file_path": "paper_data/2402.08761v1.md", "file_size": 125844, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
f1e857c1-990d-44bd-a6db-96dddc4ef99b
# Jamdec: Unsupervised Authorship Obfuscation Using Constrained Decoding Over Small Language Models ## 8 Acknowledgements This research is based upon work supported in part by NSF DMS-2134012, DMS-2023166, CCF-2019844, and the Office of the Director of National Intelligence (ODNI)'s IARPA program via 2022-22072200003. The views and conclusions contained herein are those of the authors and should not be interpreted as representing the official views of ODNI, IARPA, or the U.S. Government.
{ "creation_datetime": "2024-03-04", "file_name": "2402.08761v1.md", "file_path": "paper_data/2402.08761v1.md", "file_size": 125844, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
ece51217-885d-4a7a-8731-382e6a7c8b69
# Jamdec: Unsupervised Authorship Obfuscation Using Constrained Decoding Over Small Language Models ## References
Zhi Liu. 2011. Reuter 50-50. UCI Machine Learning Repository. DOI: https://doi.org/10.24432/C5DS42.
Ximing Lu, Peter West, Rowan Zellers, Ronan Le Bras, Chandra Bhagavatula, and Yejin Choi. 2021. Neurologic decoding: (un)supervised neural text generation with predicate logic constraints. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 4288–4299, Online. Association for Computational Linguistics.
Asad Mahmood, Faizan Ahmad, Zubair Shafiq, Padmini Srinivasan, and Fareed Zaffar. 2019a. A girl has no name: Automated authorship obfuscation using Mutant-X. Proceedings on Privacy Enhancing Technologies, 2019(4):54–71.
Asad Mahmood, Faizan Ahmad, Zubair Shafiq, Padmini Srinivasan, and Fareed Zaffar. 2019b. A girl has no name: Automated authorship obfuscation using Mutant-X. Proceedings on Privacy Enhancing Technologies, 2019:54–71.
Asad Mahmood, Zubair Shafiq, and Padmini Srinivasan. 2020. A girl has a name: Detecting authorship obfuscation. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 2235–2245, Online. Association for Computational Linguistics.
Muharram Mansoorizadeh, Taher Rahgooy, Mohammad Aminian, and Mehdy Eskandari. 2016. Author obfuscation using WordNet and language models. In Conference and Labs of the Evaluation Forum.
Amazon Mechanical Turk. [link].
Tsvetomila Mihaylova, Georgi Karadzhov, Preslav Nakov, Yasen Kiprov, Georgi Georgiev, and Ivan Koychev. 2016. SU@PAN'2016: Author obfuscation—notebook for PAN at CLEF 2016.
John Morris, Eli Lifland, Jin Yong Yoo, Jake Grigsby, Di Jin, and Yanjun Qi. 2020. TextAttack: A framework for adversarial attacks, data augmentation, and adversarial training in NLP. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 119–126.
PAN2016. Obfuscation evaluation 2016.
PAN2018. Obfuscation evaluation 2018.
Matt Post and David Vilar. 2018. Fast lexically constrained decoding with dynamic beam allocation for neural machine translation. In North American Chapter of the Association for Computational Linguistics.
Chen Qian, Ting He, and Ren Zhang. 2017. Deep learning based authorship identification.
Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners.
Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. Journal of Machine Learning Research, 21(140):1–67.
Jonathan Schler, Moshe Koppel, Shlomo Argamon, and James W. Pennebaker. 2006. Effects of age and gender on blogging. In AAAI Spring Symposium: Computational Approaches to Analyzing Weblogs, volume 6, pages 199–205.
Rakshith Shetty, Bernt Schiele, and Mario Fritz. 2017. A4NT: Author attribute anonymity by adversarial training of neural machine translation. In USENIX Security Symposium.
Rakshith Shetty, Bernt Schiele, and Mario Fritz. 2018. A4NT: Author attribute anonymity by adversarial training of neural machine translation. In 27th USENIX Security Symposium (USENIX Security 18), pages 1633–1650, Baltimore, MD. USENIX Association.
Maria Tikhonova, Elina Telesheva, Sergey Mirzoev, Polina Tarantsova, Stanislav Petrov, and Alena Fenogenova. 2021. Style transfer in NLP: a framework and multilingual analysis with Friends TV series. 2021 International Conference Engineering and Telecommunication (En&T), pages 1–6.
Ewoenam Kwaku Tokpo and Toon Calders. 2022. Text style transfer for bias mitigation using masked language modeling. In North American Chapter of the Association for Computational Linguistics.
Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, Dan Bikel, Lukas Blecher, Cristian Canton Ferrer, Moya Chen, Guillem Cucurull, David Esiobu, Jude Fernandes, Jeremy Fu, Wenyin Fu, Brian Fuller, Cynthia Gao, Vedanuj Goswami, Naman Goyal, Anthony Hartshorn, Saghar Hosseini, Rui Hou, Hakan Inan, Marcin Kardas, Viktor Kerkez, Madian Khabsa, Isabel Kloumann, Artem Korenev, Punit Singh Koura, Marie-Anne Lachaux, Thibaut Lavril, Jenya Lee, Diana Liskovich, Yinghai Lu, Yuning Mao, Xavier Martinet, Todor Mihaylov, Pushkar Mishra, Igor Molybog, Yixin Nie, Andrew Poulton, Jeremy Reizenstein, Rashi Rungta, Kalyan Saladi, Alan Schelten, Ruan Silva, Eric Michael Smith, Ranjan Subramanian, Xiaoqing Ellen Tan, Binh Tang, Ross Taylor, Adina Williams, Jian Xiang Kuan, Puxin Xu, Zheng Yan, Iliyan Zarov, Yuchen Zhang, Angela Fan, Melanie Kambadur, Sharan Narang, Aurelien Rodriguez, Robert Stojnic, Sergey Edunov, and Thomas Scialom. 2023. Llama 2: Open foundation and fine-tuned chat models.
Ashwin K. Vijayakumar, Michael Cogswell, Ramprasaath R. Selvaraju, Qing Sun, Stefan Lee, David J. Crandall, and Dhruv Batra. 2016. Diverse beam search: Decoding diverse solutions from neural sequence models. CoRR, abs/1610.02424.
Yequan Wang, Jiawen Deng, Aixin Sun, and Xuying Meng. 2023. Perplexity from PLM is unreliable for evaluating text quality.
Alex Warstadt, Amanpreet Singh, and Samuel R. Bowman. 2019. Neural network acceptability judgments. Transactions of the Association for Computational Linguistics, 7:625–641.
Benjamin Weggenmann, Valentin Rublack, Michael Andrejczuk, Justus Mattern, and Florian Kerschbaum. 2022. DP-VAE: Human-readable text anonymization for online reviews with differentially private variational autoencoders. In Proceedings of the ACM Web Conference 2022, WWW '22, pages 721–731, New York, NY, USA. Association for Computing Machinery.
Wanyue Zhai, Jonathan Rusert, Zubair Shafiq, and Padmini Srinivasan. 2022. A girl has a name, and it's ... adversarial authorship attribution for deobfuscation. ArXiv, abs/2203.11849.
Jingqing Zhang, Yao Zhao, Mohammad Saleh, and Peter Liu. 2020. PEGASUS: Pre-training with extracted gap-sentences for abstractive summarization. In Proceedings of the 37th International Conference on Machine Learning, volume 119 of Proceedings of Machine Learning Research, pages 11328–11339. PMLR.
{ "creation_datetime": "2024-03-04", "file_name": "2402.08761v1.md", "file_path": "paper_data/2402.08761v1.md", "file_size": 125844, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
af6edec3-4954-47b0-a5f7-51b18770417e
# Jamdec: Unsupervised Authorship Obfuscation Using Constrained Decoding Over Small Language Models ## Table Of Contents: Appendix In the appendix, we provide the following additional materials:
- Appendix A: Additional Experiments
  - Appendix A.1: Impact of Diversity in Beam Search
  - Appendix A.2: Human Evaluation without CoLA Threshold
  - Appendix A.3: Comparing Keyword Extractors
  - Appendix A.4: JAMDEC with Smaller Beam Width
  - Appendix A.5: Comparison of all Automatic Evaluations
  - Appendix A.6: Effect of NLI/CoLA Threshold on Performance
  - Appendix A.7: Average Perplexity of Text
- Appendix B: Style Transfer as Authorship Obfuscation Method
- Appendix C: Adversarial Threat Model for Evaluation
- Appendix D: Additional Qualitative Example for Comparison of Methods
- Appendix E: Time Consumption Analysis
- Appendix F: Compare Similar Authorship Tasks
- Appendix G: Experimentation Details
- Appendix H: Algorithm for Constrained Diverse Beam Search (CoDi-BS)
{ "creation_datetime": "2024-03-04", "file_name": "2402.08761v1.md", "file_path": "paper_data/2402.08761v1.md", "file_size": 125844, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
1caecaff-afe6-4d9a-8ecf-f8b0ee35905a
# Jamdec: Unsupervised Authorship Obfuscation Using Constrained Decoding Over Small Language Models ## A Additional Experiments A.1 Impact Of Combining Diverse Beam Search With Constrained Beam Search In order to explore the impact of combining Diverse Beam Search (Vijayakumar et al., 2016) and Constrained Beam Search (Post and Vilar, 2018) for authorship obfuscation, we calculated the automatic evaluation metrics on generations produced using JAMDEC with and without Diverse Beam Search for the AMT datasets. Results are shown in Table 3. On average, there is about a 6% increase in the Drop Rate, as well as an average 32% increase in generations that pass the NLI and CoLA thresholds, with little change to the NLI and CoLA scores. As expected, adding the diversity penalty successfully encourages higher diversity between beams, resulting in a more diverse pool of generation candidates.
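As a toy illustration of the diversity penalty discussed above, the sketch below applies a Hamming-style penalty that down-weights tokens already chosen by earlier beam groups at the same decoding step. The scores and candidate vocabulary are made up for illustration; this is not the actual CoDi-BS implementation (Appendix H).

```python
from collections import Counter

def rescore_with_diversity(group_scores, tokens_used_by_prior_groups, penalty=0.5):
    """group_scores: {token: log_prob} for the current beam group.
    Tokens already picked by earlier groups at this step are penalized."""
    counts = Counter(tokens_used_by_prior_groups)
    return {tok: lp - penalty * counts[tok] for tok, lp in group_scores.items()}

scores = {"damage": -0.7, "harm": -1.1, "trouble": -1.6}
# Suppose the first beam group already extended its hypothesis with "damage".
adjusted = rescore_with_diversity(scores, ["damage"], penalty=0.5)
print(max(adjusted, key=adjusted.get))  # a later group now prefers "harm"
```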
{ "creation_datetime": "2024-03-04", "file_name": "2402.08761v1.md", "file_path": "paper_data/2402.08761v1.md", "file_size": 125844, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
049e0338-e89d-424e-8264-c825c379771a
# Jamdec: Unsupervised Authorship Obfuscation Using Constrained Decoding Over Small Language Models ## A.2 Human Evaluation For Jamdec +Stylo Without Cola Threshold We ran an additional human evaluation on a third variant of JAMDEC, which is identical to JAMDEC +Stylo except that it does not apply the final CoLA threshold to sentences produced by the stylometric-based obfuscation method. Without this final threshold, every sentence obfuscated with the stylometric-based method is included in the final text, meaning all sentences of the text are changed and no original text is used. For simplicity, we distinguish these methods as JAMDEC +Stylo+W/Threshold and JAMDEC +Stylo+W/O_Threshold. Figure 5 compares these results to the results shown earlier in Section 4. We observe an overall increase in Obfuscation of 9% compared to JAMDEC +Stylo+W/Threshold, making it higher than all task-specific methods (but still slightly below JAMDEC). However, it also shows a decrease of 15% and 13% in Grammar and Fluency, respectively. The obfuscated text in JAMDEC +Stylo+W/O_Threshold only differs from JAMDEC +Stylo+W/Threshold for sentences that were altered by the stylometric-based obfuscation method but did not pass the CoLA threshold. It therefore follows that including these sentences leads to a decrease in Grammar and Fluency, and that these changes add a slight increase in obfuscation compared to text that retains some of the original sentences.

Table 3: Automatic evaluation of JAMDEC with and without the Diverse Beam Search component on the AMT datasets (see Appendix A.1).

| Dataset | Metric | With diversity | Without diversity |
|---------|--------|----------------|-------------------|
| AMT-3 | Drop Rate (ENS) | 0.11 | 0.01 |
| AMT-3 | Drop Rate (BertAA) | 0.04 | 0.08 |
| AMT-3 | NLI | 0.75 | 0.87 |
| AMT-3 | CoLA | 0.85 | 0.86 |
| AMT-3 | Average Gen. | 0.52 | 0.16 |
| AMT-5 | Drop Rate (ENS) | 0.10 | 0.10 |
| AMT-5 | Drop Rate (BertAA) | 0.14 | 0.01 |
| AMT-5 | NLI | 0.76 | 0.87 |
| AMT-5 | CoLA | 0.85 | 0.87 |
| AMT-5 | Average Gen. | 0.48 | 0.16 |
| AMT-10 | Drop Rate (ENS) | 0.44 | 0.25 |
| AMT-10 | Drop Rate (BertAA) | -0.03 | 0.00 |
| AMT-10 | NLI | 0.79 | 0.85 |
| AMT-10 | CoLA | 0.78 | 0.85 |
| AMT-10 | Average Gen. | 0.47 | 0.18 |
{ "creation_datetime": "2024-03-04", "file_name": "2402.08761v1.md", "file_path": "paper_data/2402.08761v1.md", "file_size": 125844, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
e30a3cf0-a95c-48a2-bfc2-36dce337331e
# Jamdec: Unsupervised Authorship Obfuscation Using Constrained Decoding Over Small Language Models ## A.3 Comparing Keyword Extractors: Word Embedding Methods Vs. Likelihood Methods In Section 3 we introduced a new framework for keyword extraction which uses likelihoods of next-token predictions from language models instead of word embeddings. Using this framework, we developed two keyword extraction methods: one using T5 with infilling (Likelihood-T5), and the other using GPT2 with autoregressive (left-to-right) generation (Likelihood-GPT2). We hypothesized that these likelihood-based keyword extraction methods would highlight keywords that increase the ability of a downstream model to generate text that preserves the original meaning. In Figure 6 we show the results of the automatic evaluations of authorship obfuscation using generations created with only KeyBERT, only Likelihood-T5, only Likelihood-GPT2, or all three (as we did in our experiments). For AMT-3 and AMT-5, the likelihood-based keyword extraction methods have higher overall evaluation metrics than the embedding-based method (KeyBERT). However, on AMT-10, KeyBERT performs on average ∼10% higher than both likelihood methods in Drop Rate (ENS), but is on average 6% lower in NLI. Overall, the combined method (using all three keyword extractors) has the highest Drop Rate and the lowest number of original sentences used. Examples of keywords selected by each method can be reviewed in Table 4.

Table 4: Keywords selected by each extraction method for the original sentence "I stated that the body needs a specific amount of time to transfer calcium from locations in the body to the fracture."

| Keyword Extractor | Keywords |
|-------------------|----------|
| KeyBERT | "stated", "body", "needs", "specific", "time", "transfer", "calcium" |
| Likelihood-T5 | "that", "the", "body", "of", "time", "to", "from", "location" |
| Likelihood-GPT2 | "stated", "needs", "of", "transfer", "calcium" |
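The sketch below illustrates the likelihood-based scoring idea with GPT-2: each token is scored by the log-probability the model assigns it given its left context. Treating the lowest-likelihood (most surprising) tokens as keyword candidates is our assumption for illustration and may differ from the exact selection rule used by Likelihood-GPT2.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tok = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

@torch.no_grad()
def token_logprobs(sentence: str):
    """Score each token by the log-probability GPT-2 gives it in context."""
    ids = tok(sentence, return_tensors="pt").input_ids
    logits = model(ids).logits
    logprobs = torch.log_softmax(logits[0, :-1], dim=-1)  # position i predicts token i+1
    targets = ids[0, 1:]
    scores = logprobs[torch.arange(len(targets)), targets]
    return list(zip(tok.convert_ids_to_tokens(targets.tolist()), scores.tolist()))

scored = token_logprobs("the body needs a specific amount of time to transfer calcium")
# Assumed rule: the most surprising (lowest-likelihood) tokens are keyword candidates.
keywords = [t.lstrip("Ġ") for t, lp in sorted(scored, key=lambda x: x[1])[:5]]
print(keywords)
```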
{ "creation_datetime": "2024-03-04", "file_name": "2402.08761v1.md", "file_path": "paper_data/2402.08761v1.md", "file_size": 125844, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
c8b56cf7-3318-4140-93bd-9f2d716b0aa4
# Jamdec: Unsupervised Authorship Obfuscation Using Constrained Decoding Over Small Language Models ## A.4 Jamdec With Smaller Beam Widths (Fewer Generations) We repeated the AMT-3 experiment using a lightweight JAMDEC with a smaller beam width (20) and discovered that it performs slightly better on almost all metrics than JAMDEC with a larger beam width (50); results are in Table 5. This appeared odd at first, until we looked at the number of sentences with generations that passed the NLI and CoLA filters. When we reduce the beam width (and hence the number of generations produced overall), we find a significant decrease in the number of generations that pass the thresholds. For example, in the lightweight version (beam width = 20), only 20% of the generations pass the threshold, implying that 80% of the sentences reverted to the original sentence. Although changing only 20% of the sentences is sufficient to trick the classifiers (seen in the nearly matching Drop Rate), it may not be sufficient under human evaluation.

Table 5: Automatic evaluation on AMT-3 of JAMDEC (beam width 50) and the lightweight JAMDEC (beam width 20).

| Metric | JAMDEC | JAMDEC (Lightweight) |
|--------|--------|----------------------|
| Drop Rate (ENS) | 0.11 | 0.12 |
| Drop Rate (BertAA) | 0.04 | 0.04 |
| METEOR | 0.62 | 0.78 |
| NLI | 0.81 | 0.82 |
| CoLA | 0.79 | 0.83 |
| Average Gen. | 0.63 | 0.42 |
| Task Score (ENS) | 0.57 | 0.59 |
| Task Score (BertAA) | 0.55 | 0.56 |
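The revert-to-original behavior discussed above can be sketched as follows: a candidate is kept only if it clears both the NLI and CoLA thresholds, and the sentence falls back to the original when nothing passes. The scoring functions and the tie-breaking rule are stand-ins, not the exact implementation.

```python
def select_obfuscation(original, candidates, nli_score, cola_score,
                       nli_thresh=0.7, cola_thresh=0.7):
    """Return (chosen_sentence, used_candidate). nli_score(original, cand) and
    cola_score(cand) stand in for the entailment and acceptability classifiers."""
    survivors = [c for c in candidates
                 if nli_score(original, c) >= nli_thresh
                 and cola_score(c) >= cola_thresh]
    if not survivors:
        return original, False          # revert: no candidate passed the filters
    # e.g. prefer the most acceptable survivor; other tie-breakers are possible
    return max(survivors, key=cola_score), True
```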
{ "creation_datetime": "2024-03-04", "file_name": "2402.08761v1.md", "file_path": "paper_data/2402.08761v1.md", "file_size": 125844, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
046af0b0-d886-45e4-bbd6-2a703cb1e561
# Jamdec: Unsupervised Authorship Obfuscation Using Constrained Decoding Over Small Language Models ## A.5 Drop Rate Vs. Nli Vs. Cola For All Methods A successful authorship obfuscation method should score high in Drop Rate, NLI, and CoLA; however, we observe that current methods tend to trade these abilities off against one another. To further analyze this trade-off, in Figure 7 we plot the Drop Rate (ENS) against NLI and against CoLA separately for all datasets. Under our definition of a successful method, we want a method that lies in the top right of both plots. We observe that for both datasets (AMT and BLOG), with 3 and 10 authors, JAMDEC has both a higher Drop Rate and high NLI and CoLA compared to all other small-model methods. However, it performs a bit worse on the 5-author datasets, where Machine Translation is somewhat higher in Drop Rate and close in NLI.
{ "creation_datetime": "2024-03-04", "file_name": "2402.08761v1.md", "file_path": "paper_data/2402.08761v1.md", "file_size": 125844, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
ab9ed5ad-aac0-40b5-a130-32a166118da5
# Jamdec: Unsupervised Authorship Obfuscation Using Constrained Decoding Over Small Language Models ## A.6 Comparing Drop Rate, Nli, And Cola For Jamdec As The Nli/Cola Thresholds Change JAMDEC is designed to be user-adaptive, with flexible hyperparameters that can be adjusted to the needs of the specific task. Two of these hyperparameters are the base NLI and CoLA thresholds used in the filtering stage. We experimented with scaling these hyperparameters from 0.2 to 0.8, using the JAMDEC +Stylo method. For simplicity, we set the NLI and CoLA thresholds equal in each experiment and use a constant final CoLA threshold of 0.7. Figure 8 shows the results for the AMT datasets. In general, as we increase the NLI and CoLA thresholds (making it harder for generation candidates to pass), we see a clear increase in NLI of ∼15%, a steady CoLA score, and mixed results for the Drop Rate depending on the number of authors: a slight increase in both Drop Rates for AMT-3 and a slight decrease for AMT-5 and AMT-10. Since the number of original sentences used grows as the threshold increases (higher thresholds mean fewer generations pass), we would expect the Drop Rate to decrease. The fact that it instead increases in some settings (especially for ENS) indicates that the classifier might be relying on an artifact for its classification. This further motivates the use of human evaluation for this task.
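A minimal sketch of this threshold sweep, assuming a list of (original sentence, candidate list) pairs and stand-in NLI/CoLA scorers: it reports the fraction of sentences that still get rewritten at each threshold, with the remainder reverting to the original sentence.

```python
def sweep_thresholds(pairs, nli_score, cola_score, thresholds=(0.2, 0.4, 0.6, 0.8)):
    """pairs: list of (original_sentence, candidate_list)."""
    results = {}
    for t in thresholds:
        rewritten = 0
        for original, candidates in pairs:
            survivors = [c for c in candidates
                         if nli_score(original, c) >= t and cola_score(c) >= t]
            rewritten += bool(survivors)   # otherwise the sentence reverts to the original
        results[t] = rewritten / max(len(pairs), 1)
    return results  # fraction of sentences actually rewritten at each threshold
```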
{ "creation_datetime": "2024-03-04", "file_name": "2402.08761v1.md", "file_path": "paper_data/2402.08761v1.md", "file_size": 125844, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
7a6896ba-bd02-4a39-bb18-37d1f498b1aa
# Jamdec: Unsupervised Authorship Obfuscation Using Constrained Decoding Over Small Language Models ## A.7 Perplexity Of Generations In our main experimentation we do not use perplexity and instead use the CoLA score. The reason we opted for CoLA over perplexity is that it has a fixed range [0, 1] and can therefore be compared across text lengths, topics, and style types (formal/informal). Due to its unbounded nature, perplexity is an unreliable metric to use by itself (Wang et al., 2023). However, we report these numbers here for completeness. We use a Llama2-7B model (Touvron et al., 2023) to calculate the perplexity of a text, normalized by the length of the text. We choose Llama2-7B since it is from a different model family than those used in our experimentation, to reduce any model-architecture bias. We then calculate the ratio of the perplexity of the obfuscated text to the perplexity of the original (human) text; again, we use this ratio to obtain a standard comparison across methods. Results can be seen in Table 6. Similar to CoLA, we see that JAMDEC outperforms all other methods on perplexity (ratio closest to 1) on almost all datasets.
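The length-normalized perplexity and the ratio described above can be computed as in the sketch below. Using "gpt2" as a quick, ungated stand-in is our assumption; the gated "meta-llama/Llama-2-7b-hf" checkpoint (or any other causal language model) can be substituted via the same interface.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

name = "gpt2"  # stand-in; e.g. "meta-llama/Llama-2-7b-hf" for the model named in the text
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name).eval()

@torch.no_grad()
def perplexity(text: str) -> float:
    ids = tok(text, return_tensors="pt").input_ids
    loss = model(ids, labels=ids).loss   # mean token-level negative log-likelihood
    return torch.exp(loss).item()        # length-normalized perplexity

def perplexity_ratio(original: str, obfuscated: str) -> float:
    """Ratio of obfuscated-text perplexity to original-text perplexity."""
    return perplexity(obfuscated) / perplexity(original)
```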
{ "creation_datetime": "2024-03-04", "file_name": "2402.08761v1.md", "file_path": "paper_data/2402.08761v1.md", "file_size": 125844, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
c5f03e87-2ffe-49f5-8aa6-3328afee7667
# Jamdec: Unsupervised Authorship Obfuscation Using Constrained Decoding Over Small Language Models ## B Style Transfer As Authorship Obfuscation Method As we mentioned, the task of style transfer mainly differs from authorship obfuscation in its goal of a specific, fixed target style. For this reason, there are many subclasses of style transfer tasks, each centered on a specific aspect of style: specific authors, such as characters from the TV show Friends (Tikhonova et al., 2021); author attributes, such as gender (Tokpo and Calders, 2022); formality of style (Chen et al., 2022); etc. This makes style transfer hard to use as a main baseline for authorship obfuscation, as there is no specific, unbiased method or target style to choose. However, we were still curious how it would compare to JAMDEC, so we included an additional experiment comparing two target styles with JAMDEC on the task of authorship obfuscation. We use Style Transfer via Paraphrasing (STRAP), a method which first paraphrases the text with an LLM fine-tuned on a supervised paraphrasing task and then applies a specific style with another LLM fine-tuned on that style (Krishna et al., 2020). We use two target styles: Shakespearean and formal writing. The results are shown in Table 7. We observe that JAMDEC consistently achieves a higher Drop Rate while better preserving content and maintaining fluency. Note that comparing fluency against the Shakespearean-style baseline might not be entirely fair, as Shakespearean English follows different grammar rules. This highlights the limitation of using style transfer for authorship obfuscation, given the lack of a specific, unbiased target style to select.
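To illustrate the two-stage STRAP recipe described above (paraphrase first, then re-generate in a fixed target style), here is a generic sketch. Both checkpoint paths are hypothetical placeholders, and the released STRAP models or any comparable pair of fine-tuned seq2seq models could be substituted.

```python
from transformers import pipeline

# Placeholder checkpoints: substitute real paraphrase / style models here.
paraphraser = pipeline("text2text-generation", model="path/to/paraphrase-model")
stylizer = pipeline("text2text-generation", model="path/to/shakespeare-style-model")

def style_transfer(text: str) -> str:
    """Stage 1: strip surface style via paraphrasing; Stage 2: apply the target style."""
    neutral = paraphraser(text, max_new_tokens=64)[0]["generated_text"]
    return stylizer(neutral, max_new_tokens=64)[0]["generated_text"]
```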
{ "creation_datetime": "2024-03-04", "file_name": "2402.08761v1.md", "file_path": "paper_data/2402.08761v1.md", "file_size": 125844, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
83071438-d37c-4bed-bb62-9e02f6637413
# Jamdec: Unsupervised Authorship Obfuscation Using Constrained Decoding Over Small Language Models ## C Threat Model As Evaluation In our main evaluation, we use simple authorship attribution models, which do not have knowledge of obfuscations. However, recent work in authorship attribution has shown that adversarial threat models (models trained on obfuscated text) can better withstand authorship obfuscation attacks (Zhai et al., 2022). Therefore, we include an evaluation with stronger threat models on the AMT-3 dataset. Table 8 shows the results of evaluating all methods against two threat models. The first, Threat Model (Orig + Obf), is trained on both the original text and the obfuscated text from all methods shown. The second, Threat Model (Obf), is trained only on the same obfuscated text, with no original text. It has been shown in previous work that threat models trained only on obfuscated text have higher accuracy (Zhai et al., 2022), which we also observe in the models we train. Using these models, we see that JAMDEC has the highest Drop Rate under the first threat model and the third highest under the second threat model. However, as mentioned before, the Drop Rate is only one criterion for evaluating authorship obfuscation: Mutant-X and Machine Translation, the only methods that score much higher than JAMDEC under the second threat model, score much lower in language quality and content preservation than JAMDEC, as shown in Table 1.

Table 8: Drop Rate on AMT-3 under two adversarial threat models, together with the threat models' own attribution accuracy.

| Method | Threat Model (Orig + Obf) | Threat Model (Obf) |
|--------|---------------------------|--------------------|
| Mutant-X (ENS) | 0 | 0.03 |
| Mutant-X (RFC) | 0 | 0 |
| Paraphrase | 0 | -0.03 |
| Machine Transl. | 0.00 | 0.4 |
| Stylometric | 0 | -0.07 |
| JAMDEC | 0.04 | -0.03 |
| Accuracy (Train) | 1 | 1 |
| Accuracy (Test) | 0.93 | 0.96 |

Table 6: Average length-normalized perplexity (computed with Llama2-7B) of the original and obfuscated text, and the ratio of obfuscated to original perplexity, for each method and dataset (see Appendix A.7).

| Dataset | Method | Original Perplexity | Predicted Perplexity | Ratio |
|---------|--------|---------------------|----------------------|-------|
| AMT-3 | Mutant-X (ENS) | 8.25 | 29.52 | 3.77 |
| AMT-3 | Mutant-X (SVC) | 8.25 | 27.6 | 3.56 |
| AMT-3 | Paraphrase | 8.25 | 9.8 | 1.23 |
| AMT-3 | Machine Translation | 8.25 | 12.93 | 1.64 |
| AMT-3 | Stylometric | 8.25 | 24.17 | 2.95 |
| AMT-3 | JAMDEC (w/o stylo) | 8.25 | 7.29 | 0.92 |
| AMT-3 | JAMDEC (w stylo) | 8.25 | 8.05 | 1.02 |
| AMT-3 | GPT3 (Sentence) | 8.25 | 13.3 | 1.7 |
| AMT-3 | GPT3 (Paragraph) | 8.25 | 9.96 | 1.23 |
| AMT-5 | Mutant-X (ENS) | 8.42 | 285.92 | 34.08 |
| AMT-5 | Mutant-X (SVC) | 8.42 | 923.09 | 117.55 |
| AMT-5 | Paraphrase | 8.42 | 11.95 | 1.49 |
| AMT-5 | Machine Translation | 8.42 | 13.41 | 1.66 |
| AMT-5 | Stylometric | 8.42 | 25.81 | 3.09 |
| AMT-5 | JAMDEC (w/o stylo) | 8.42 | 7.3 | 0.9 |
| AMT-5 | JAMDEC (w stylo) | 8.42 | 37.56 | 4.44 |
| AMT-10 | Mutant-X (ENS) | 9.07 | 25.96 | 3.08 |
| AMT-10 | Mutant-X (SVC) | 9.07 | 23.51 | 2.77 |
| AMT-10 | Paraphrase | 9.07 | 10.02 | 1.2 |
| AMT-10 | Machine Translation | 9.07 | 15.16 | 1.79 |
| AMT-10 | Stylometric | 9.07 | 26.24 | 2.88 |
| AMT-10 | JAMDEC (w/o stylo) | 9.07 | 7.52 | 0.9 |
| AMT-10 | JAMDEC (w stylo) | 9.07 | 34.65 | 3.86 |
| BLOG-5 | Mutant-X (ENS) | 22.82 | 89.53 | 5.24 |
| BLOG-5 | Mutant-X (SVC) | 22.82 | 55.04 | 3.73 |
| BLOG-5 | Paraphrase | 22.82 | 22.27 | 1.39 |
| BLOG-5 | Machine Translation | 22.82 | 42.08 | 2.73 |
| BLOG-5 | Stylometric | 22.82 | 47.18 | 2.5 |
| BLOG-5 | JAMDEC (w/o stylo) | 22.82 | 23.79 | 1.7 |
| BLOG-5 | JAMDEC (w stylo) | 22.82 | 24.44 | 1.75 |
| BLOG-10 | Mutant-X (ENS) | 19.55 | 452.56 | 32.25 |
| BLOG-10 | Mutant-X (SVC) | 19.55 | 47.82 | 3.58 |
| BLOG-10 | Paraphrase | 19.55 | 20.82 | 1.8 |
| BLOG-10 | Machine Translation | 19.55 | 42.93 | 3.16 |
| BLOG-10 | Stylometric | 19.55 | 45.63 | 2.74 |
| BLOG-10 | JAMDEC (w/o stylo) | 19.55 | 19.17 | 1.4 |
| BLOG-10 | JAMDEC (w stylo) | 19.55 | 19.72 | 1.44 |
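As a simplified illustration of the adversarial threat models described above, the sketch below trains a TF-IDF + logistic-regression attributor (our stand-in for the paper's ENS and BertAA threat models) on a training split that can include obfuscated text, and measures the Drop Rate as the attribution-accuracy gap between original and obfuscated test documents.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.pipeline import make_pipeline

def drop_rate(train_texts, train_authors, orig_texts, obf_texts, authors):
    """Train an attribution attacker and return acc(original) - acc(obfuscated)."""
    attacker = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                             LogisticRegression(max_iter=1000))
    attacker.fit(train_texts, train_authors)   # e.g. original + obfuscated training split
    acc_orig = accuracy_score(authors, attacker.predict(orig_texts))
    acc_obf = accuracy_score(authors, attacker.predict(obf_texts))
    return acc_orig - acc_obf                  # positive = obfuscation fooled the attacker
```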
{ "creation_datetime": "2024-03-04", "file_name": "2402.08761v1.md", "file_path": "paper_data/2402.08761v1.md", "file_size": 125844, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
2bee8eee-fa7a-4c18-9a96-ef39b8d89b17
# Jamdec: Unsupervised Authorship Obfuscation Using Constrained Decoding Over Small Language Models ## C Threat Model As Evaluation

Table 7: Automatic evaluation of the style transfer baselines (STRAP with Shakespearean and formal target styles) and JAMDEC on the AMT datasets (see Appendix B).

| Dataset | Metric | Shakespeare | Formal | JAMDEC |
|---------|--------|-------------|--------|--------|
| AMT-3 | Drop Rate (ENS) | 0 | 0 | 0.11 |
| AMT-3 | Drop Rate (BertAA) | 0.04 | 0.04 | 0.04 |
| AMT-3 | NLI | 0.19 | 0.25 | 0.75 |
| AMT-3 | CoLA | 0.47 | 0.69 | 0.85 |
| AMT-5 | Drop Rate (ENS) | 0.20 | 0.20 | 0.13 |
| AMT-5 | Drop Rate (BertAA) | -0.06 | -0.06 | 0.14 |
| AMT-5 | NLI | 0.23 | 0.26 | 0.76 |
| AMT-5 | CoLA | 0.49 | 0.69 | 0.85 |
| AMT-10 | Drop Rate (ENS) | 0.33 | 0.23 | 0.41 |
| AMT-10 | Drop Rate (BertAA) | -0.02 | -0.04 | -0.02 |
| AMT-10 | NLI | 0.19 | 0.26 | 0.79 |
| AMT-10 | CoLA | 0.47 | 0.67 | 0.78 |
{ "creation_datetime": "2024-03-04", "file_name": "2402.08761v1.md", "file_path": "paper_data/2402.08761v1.md", "file_size": 125844, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
54dff168-460f-435d-99ce-e544fd46fcc9
# Jamdec: Unsupervised Authorship Obfuscation Using Constrained Decoding Over Small Language Models ## D Additional Example Of Obfuscation In Figure 9 we include a second qualitative comparison of JAMDEC and the other baseline methods. We notice that the obfuscated text produced by baseline methods like Mutant-X, Paraphrase, and Machine Translation has much lower language quality compared to JAMDEC. Such low-quality text might make it easier to deceive an automatic classifier, but it fails to meet the other objectives of authorship obfuscation: preserving the quality and content of the original text. We also observe that Paraphrase and Machine Translation make only minor modifications to the original text; while this aids content preservation, it is ineffective for authorship concealment. In addition, we provide a few examples of GPT3.5 generations in Table 9, with the first being the same example as in Figure 9. From qualitative analysis, we found that most generations from GPT3.5 fall within two techniques: paraphrasing and stylometric changes (mainly replacing words with synonyms). Either the generation was a short description (lacking some content preservation) or it was minimally changed (only swapping out a few words). There were also a handful of generations with incorrect paraphrasing that drastically changed the meaning of the sentence (see example).

| Method | Generation |
|--------|------------|
| Original | The Ex. An ex holding a grudge can do a lot of damage in a short amount of time. He knows enough to open accounts in your name, and he has the motive to hurt you. |
| Mutant-X | The Ex. An ex holding a bitterness ... damage in a length quantity ... ascend accounts in ... impair You. |
| Paraphrase | A lot of damage can be done In a short period of time. He knows how to hurt you. |
| Machine Translation | The former. An ... of damage in a short time. He knows enough to open accounts in your name, and he has the reason ... |
| Stylometric | An ex holding, a ... brief amount in time, ... in your name, and he has the motive to hurt you. |
| JAMDEC | The Ex. When the ex is holding his grudge against the person who caused him lot of damage to his life, he is short sighted and will do anything in his power to get back at that person, no matter how much it will hurt the person he is trying to get revenge against. ... enough to open accounts in your name, and he has the motive to hurt you. |
| JAMDEC + Stylo | The Ex. When the ex is holding his grudge against the person who caused him lot of damage to his life, he is short sighted and will do anything in his power to get back at that person, no matter how much it will hurt the person he is trying to get revenge against. ... enough to open accounts in your name, and he has the reason to hurt you. |
{ "creation_datetime": "2024-03-04", "file_name": "2402.08761v1.md", "file_path": "paper_data/2402.08761v1.md", "file_size": 125844, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
78f458aa-0576-436f-9ac8-26da228d627e
# Jamdec: Unsupervised Authorship Obfuscation Using Constrained Decoding Over Small Language Models ## D Additional Example Of Obfuscation

Table 9: Examples of GPT3.5 obfuscations.

| Original Text | GPT3.5 Generation | Category of Generation |
|---------------|-------------------|------------------------|
| Though several attempts were made to found agricultural outposts with white settlers in the interior, these were largely failures. The oppressive tropical climate and hostile African neighbors made life difficult for settlers, many of whom lacked agricultural experience or expertise. | Several attempts at establishing agricultural settlements with white settlers in the interior were attempted, yet were mostly unsuccessful. Settlers in the oppressive tropical climate and with hostile African neighbors found life to be a challenging experience, especially with many lacking any agricultural knowledge. | |
| Another issue is the added levels of coordination that occur when working with multiple aid groups to ensure healthcare service coverage. | An additional problem is the extra layers of organization that transpire when collaborating with numerous relief agencies to guarantee health care assistance. | |
| The FBI databases contain tens of thousands of records from car-rental companies, hotels, and national department stores. | The databases of the Federal Bureau of Investigation include a plethora of files from car-rental establishments, hotels, and national chain stores. | |
| In Angola, African-descended individuals always constituted in excess of 95% of the populace, thus differentiating the demographic make-up of the Portuguese colonies from that of Brazil. | In Angola however, black Africans never made up less than 95% of the population, so the demographic situation there (and in the other Portuguese colonies) was never the same as it was in Brazil. | |
{ "creation_datetime": "2024-03-04", "file_name": "2402.08761v1.md", "file_path": "paper_data/2402.08761v1.md", "file_size": 125844, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
6cdff276-e56e-4c6b-9bbe-0f82aa132f04