# Jamdec: Unsupervised Authorship Obfuscation Using Constrained Decoding Over Small Language Models

## E Time Consumption Analysis

We include a comparison of time consumption across the different obfuscation methods. However, we recognize that there is a significant trade-off between time consumption and performance. Therefore, we provide Figure 10, which clearly illustrates this trade-off. In this analysis we alter two aspects of JAMDEC that strongly affect time consumption: beam width and generation parameters. First, we experiment with beam widths of 50, 20, and 10. We observe that when we reduce the beam width, time consumption decreases significantly, yet performance remains similar. Second, we experimented with using all parameter combinations versus using only the best parameter combination to generate candidates for filtering. Surprisingly, using only the best parameter combination to generate a small candidate set, which cuts the runtime by approximately five times, achieves performance comparable to or even better than using all parameter combinations to produce a large candidate set. Both ablations showcase the efficiency and effectiveness of JAMDEC. Additionally, when compared to other baselines, the best configuration of JAMDEC achieves significantly better performance with a comparable runtime. This further confirms the effectiveness and practicality of JAMDEC for real-world applications.
{ "creation_datetime": "2024-03-04", "file_name": "2402.08761v1.md", "file_path": "paper_data/2402.08761v1.md", "file_size": 125844, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
## F Compare Similar Authorship Tasks

Here, we would like to further discuss the critical differences between seemingly similar language tasks: authorship obfuscation, paraphrasing, and style transfer. Table 10 provides a visual illustration of the differences between the tasks.

**Paraphrasing.** The main objective of paraphrasing is to rephrase text to enhance clarity. Hence, paraphrasing often leads to small edits that stay within the same authorship style, making it ineffective for concealing the author's identity. We further validate the inadequacy of paraphrasing methods for authorship obfuscation empirically through both quantitative and qualitative analysis, as shown in Table 1, Figure 3, and Figure 9.

**Style Transfer.** Style transfer assumes a distinct target style, whereas authorship obfuscation assumes a *lack of* distinct style. Specifically, while style transfer fixes a target style a priori, authorship obfuscation requires a dynamically changing output style that depends on the particular input text to obfuscate. This makes it challenging to use style transfer techniques for authorship obfuscation, as it is hard to assume a specific target corpus representing the proper output style for obfuscation. We further confirm the inadequacy of style transfer methods empirically through quantitative and qualitative analysis, as shown in Table 7 and Figure 9.

In addition, using style transfer techniques for authorship obfuscation raises ethical concerns. The intention of authorship obfuscation is to safeguard the author's identity while avoiding the imitation or deceptive portrayal of an individual. Using style transfer to mimic another author could unintentionally blur the boundary between preserving anonymity and indulging in deceitful behavior.
## G Experimental Details

In this section we provide full details of the experimental setup used in this paper. We start with the datasets in Appendix G.1, then describe the method implementations and hyperparameter choices for each method in Appendix G.2, and the evaluation methodology in Appendix G.3.
## G.1 Data

**AMT – Formal Articles.** The first dataset, the Extended-Brennan-Greenstadt corpus (Brennan et al., 2012), contains collections of short (∼500-word) scholarly texts gathered from Amazon Mechanical Turk (AMT). These articles were collected under very strict guidelines which required the writing to be clear (free of citations, URLs, headings, etc.), true to the author's writing style, relevant to the topic, and of the correct length. These qualities were reviewed by the researchers after submission for quality assurance. More information about the data collection can be found in Brennan et al. (2012). We used the same three test sets as Mahmood et al. (2019a): collections of 3, 5, and 10 authors with 27, 30, and 49 texts respectively (AMT-3, AMT-5, AMT-10). Each author wrote about the same topic throughout their different texts. Examples of the authors' topics include identity theft and Portuguese slavery in Africa. An example passage can be seen in Table 11.

**BLOG – Informal Articles.** The second dataset, the Blog Authorship corpus (Schler et al., 2006), contains a collection of blog entries posted to blog.com in 2004. The original dataset contains over 680k posts from 19k individual authors, with an average of 7,250 words per author. Each author tends to write about similar topics in a similar style, ranging from diary-style entries to fan fiction. Similar to the test sets used by Mahmood et al. (2019a), we created two datasets with collections of 5 and 10 authors with 72 and 150 texts respectively (BLOG-5, BLOG-10). An example passage can be seen in Table 11.
## G.2 Method Implementation

The implementation details and hyperparameters for each method used in our experimentation are given below.
## G.2.1 Baselines

**Stylometric Obfuscation.** We employ the Stylometric Obfuscation method proposed by Karadzhov et al. (2017) in the PAN-2016 Author Masking shared task competition (PAN2016). This method calculates metrics for 12 features that are indicative of style, then modifies the text so these metrics align with an "average" value. The "averages" were calculated using a combination of training sets, including the PAN-2016 Author Obfuscation task (PAN2016) and public-domain books from Project Gutenberg (Gutenberg). Examples of the metrics this method uses include the average number of words per sentence, word frequency, and the use of uppercase letters.

| Task | Preserve All Content | Preserve Tone | Change in Style | |
|------|----------------------|---------------|-----------------|---|
| Authorship Obf. | ✓ | ✓ | ✓ | ✗ |
| Paraphrase | ✗ | ✓ | ✗ | ✗ |
| Style Transfer | ✓ | ✓ | ✓ | ✓ |

Changes employed include actions such as sentence splitting and merging, substitution of
words with synonyms, and alterations in spelling. For a full list of metrics and proposed changes, see Karadzhov et al. (2017). To further enhance the obfuscation process, the method introduces "noise" by modifying words that differ between American and British English and introducing additional function words. We make no changes to the hyperparameters used in the original method.

**Mutant-X.** Mutant-X (Mahmood et al., 2019a) is a genetic algorithm guided by an internal authorship classifier, which is trained using a deep learning architecture. This method requires a separate authorship corpus to train the internal classifier. The approach follows an iterative process wherein the text undergoes "mutation" by randomly replacing words based on their semantic similarity. Additionally, at random iterations, it incorporates a "crossover" step that cuts two parent texts at a random position and combines them to create two new child texts. To train the internal classifiers, we adopt the same features and architectures (Random Forest Classifier, Support Vector Classifier, and Ensemble Classifier) as suggested in the subsequent work by Haroon
et al. (2021). The features come from Writeprints-Static (Brennan et al., 2012), a set of 500+ lexical and syntactic features, such as total words, percentage of digits, and special-character occurrences, taken from the full Writeprints set (Abbasi and Chen, 2008). For the Random Forest Classifier (RFC) we used 50 trees and all features, and for the Support Vector Classifier (SVC) we used a linear kernel with all features. However, as explored in Haroon et al. (2021), a single classifier may not be the best indicator of authorship, since it could be relying on artifacts to make its decision. They therefore introduced the idea of using an "ensemble" of classifiers (ENS), which are trained on different collections of features and vote on the final classification. We used an ensemble of 10 SVC classifiers, each trained on 30 randomly selected features. A "soft" voting system (averaging probabilities instead of classes) was used to choose the final obfuscation. We only report results with RFC or ENS as the internal classifier. The sample size, train accuracy, and test accuracy of these models can be reviewed in Table 12.
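The "soft" ensemble vote described above can be sketched as follows. This is a minimal illustration, not the authors' code: the probability vectors stand in for the outputs of SVCs trained on 30 randomly sampled Writeprints-Static features each.

```python
import random

def soft_vote(member_probs):
    """Average per-author probability vectors across ensemble members
    and return (predicted_author, averaged_probabilities)."""
    n_authors = len(member_probs[0])
    avg = [sum(p[a] for p in member_probs) / len(member_probs)
           for a in range(n_authors)]
    return max(range(n_authors), key=lambda a: avg[a]), avg

# Each ensemble member would be trained on its own random subset of the
# 500+ Writeprints-Static features (30 features per member, 10 members).
random.seed(0)
feature_subsets = [random.sample(range(500), 30) for _ in range(10)]

# Toy probabilities from 3 members over 3 candidate authors.
member_probs = [[0.6, 0.3, 0.1], [0.2, 0.5, 0.3], [0.5, 0.4, 0.1]]
winner, avg = soft_vote(member_probs)
```

Averaging probabilities ("soft" voting) rather than majority-voting on hard labels lets a confident member outweigh several uncertain ones.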
**Paraphrasing.** For the paraphrasing baseline, we employ a state-of-the-art paraphrasing model, PEGASUS Paraphrase (Zhang et al., 2020; par), a PEGASUS model fine-tuned on a self-supervised paraphrasing task.

**Machine Translation.** Inspired by the work of Keswani et al. (2016), we implemented a similar approach using
machine translation from English to German, then to French, and finally back to English. Keswani et al. emphasized the importance of using a machine translation model that does not rely on English as an intermediate step: when translating from German to French, the model should go directly from German to French, without translating via English. Since their paper did not provide code for this method, we created our own implementation using the M2M100 translation model (Fan et al., 2020) with 418M parameters.

**GPT-3.5.** We include a comparison with zero-shot prompting using GPT-3.5 (text-davinci-003; Brown et al., 2020), which has ∼175B parameters. Our comparison involved prompting at both the sentence level, where each sentence was obfuscated individually, and the paragraph level, where the entire text was obfuscated as a whole. We prompted the model to generate two obfuscations for each sentence/paragraph. Subsequently, for the sentence-level obfuscation, we randomly combined one generation from the two produced for each sentence to create a single obfuscated paragraph. The evaluations presented represent the average performance across these two generations. However, due to financial constraints, we limited our GPT-3.5 obfuscation generation to AMT-3. Below are the exact prompts used to generate obfuscated text at the sentence and paragraph level.
Sentence-level: "Provide two re-writes of the following sentence so that the author's style is obfuscated.
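The machine-translation round trip described earlier can be sketched as below. This is a hedged illustration, not the actual implementation: `translate` is a stand-in for a call to a direct many-to-many model such as M2M100, so the de→fr hop never pivots through English.

```python
def round_trip(text, translate):
    """Apply the en -> de -> fr -> en obfuscation chain; each hop is a
    direct translation with no English pivot in the middle."""
    for src, tgt in [("en", "de"), ("de", "fr"), ("fr", "en")]:
        text = translate(text, src, tgt)
    return text

# Stand-in translator that just tags each hop so the chain order is visible.
def tag_translate(text, src, tgt):
    return f"{text}|{src}->{tgt}"

out = round_trip("hello", tag_translate)  # "hello|en->de|de->fr|fr->en"
```

The key design point, per Keswani et al., is that the intermediate hop (de→fr) must be translated directly; routing it through English would undo much of the obfuscation.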
## Dataset Text Example

**AMT.** In the 1990s Zaire served as the main supporter of UNITA, as South African and American support for the organization dwindled. In 1997 a coup supported in part by the Angolan government overthrew Mobutu, and Zaire was renamed the Democratic Republic of the Congo. Without the aggressive Mobutu regime as a neighborhood, the situation in Angola stabilized and the MPLA was finally able to crack down on internal dissent without being troubled with foreign intervention, ending the civil war a few years later in 2002. Like most other Third World conflicts of the twentieth century, the wars in Angola were heavily affected by the Cold War. In addition to the competition between the US and the USSR, several other factors motivated the involvement of international powers: the Sino-Soviet split, Third World solidarity against Western exploitation and imperialism, and in the case of the US, Angola's large oil reserves. The USSR was involved with the MPLA from its foundation in the late-1950s. Starting in 1958, MPLA founding member Mario de Andrade would travel to Moscow on a regular basis for various conferences and meetings. During these visits the MPLA developed a relationship with the Soviets, securing funding and in 1961 the explicit support of Soviet Premier Nikita Khrushchev, who stated that "the patriots of Angola can be sure that the sympathies of the peoples of the great Soviet Union are fully on their side." Many MPLA leaders would go on to be educated in Moscow. The USSR chose to support the MPLA over rival movements in Angola for a number of reasons. As a left-leaning Marxist movement that explicitly condemned the imperial powers, the MPLA followed the same basic ideological principles as the USSR. The UPA/FNLA was more ambiguous on this issue, receiving support from the US and sometimes practicing anti-communist rhetoric.
The MPLA was also not as focused on regional or ethnic issues, as the predominately Bakongo UPA based in northern Angola was. The USSR also practiced the policy of recognizing and supporting only one rebel movement within a conflict, a policy not shared by all of its peers. Early Soviet support of the MPLA included food and clothing as well as weapons and increased progressively during the course of the war from goods valued at $25,000 in 1961 to $220,000 in 1973. Large scale Soviet assistance did not come until 1975 though. In this year another foreign power would join the equation, with Cuba's shipment of two shiploads of T-55 tanks and 500
military advisories. Though the Cubans and Soviets would work together closely in Angola, early actions were not coordinated as is widely assumed. Cuba was not simply a Soviet proxy but rather had its own agenda for being in Angola. As a Third World country with a colonial past and communist government, Cuba wanted to sustain the global conflict against the West and imperialism through spreading Marxist-Leninist revolution.

**BLOG.** 7:05 a.m. Wednesday. Feeling pretty good today. My last couple hours of sleep were choppy, but I went to bed so early I'm sure I got at least eight hours. Took half an actifed to counter the red wine, and I didn't drink enough water to counteract them both. Other than that, feeling good, and I'm pleased with the amount I drank for Drinking Night. My new plan is to buy only red wine, and buy only enough for the one drinking night. If I don't have it around the house, I won't drink it. Because I am far too lazy and too self-conscious to go buy it. Therefore, this way I am not relying on willpower, I'm setting up an environment where I can't drink. I'm having a glass of water right now, with my coffee. I don't usually start until after breakfast, but I feel quite dehydrated.
I'm adjusting my estimates for the coffee with Benefiber, because I'm not putting an entire tablespoon in. Maybe two-thirds that. Note: remember to buy an exercise ball to sit on while at the computer. 5:00 p.m. Had a nice little lunch with Daisy. Ate a veggie wrap and some fries, which I hope I am estimating reasonably. It was a decent meal, but not entirely filling, so I had a little chicken when I got home. Now I am finishing up my work emailing before vacation, trying to do my timesheet, etc. My hip is still bothering me. I'm not happy about that, because it hurts when I walk, and I want to do a lot of walking on vacation. I think the bellydancing may have caused the strain, and then the gliding is exacerbating it. So perhaps it's a good thing that I'll be away from the glider for a couple weeks. I can walk and swim for exercise, and perhaps that will work out the problem, whatever it is.

| Dataset | Train Sample Size | Test Sample Size | ENS Train Acc. | ENS Test Acc. | RFC Train Acc. | RFC Test Acc. | BertAA Train Acc. | BertAA Test Acc. |
|---------|-------------------|------------------|----------------|---------------|----------------|---------------|-------------------|------------------|
| AMT-3   | 36  | 27  | 1.0  | 0.93 | 1.0 | | | |
| AMT-5   | 60  | 30  | 1.0  | 0.93 | 1.0 | | | |
| AMT-10  | 120 | 49  | 1.0  | 0.82 | 1.0 | | | |
| BLOG-5  | 400 | 100 | 1.0  | 0.93 | 1.0 | | | |
| BLOG-10 | 800 | 150 | 0.96 | 0.84 | 1.0 | | | |

Original Sentence: {original text}"

Paragraph-level: "Provide two re-writes of the following paragraph so that the author's style is obfuscated. Original Paragraph: {original text}"
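The sentence-level recombination described in the GPT-3.5 paragraph, sampling one of the two rewrites per sentence and joining the picks, can be sketched as follows; the names here are illustrative, not the authors' code.

```python
import random

def recombine(rewrites_per_sentence, rng=random):
    """Pick one of the candidate rewrites for each sentence at random and
    join the picks into a single obfuscated paragraph."""
    return " ".join(rng.choice(options) for options in rewrites_per_sentence)

# Two GPT rewrites per sentence, for a three-sentence input.
rewrites = [["A1", "A2"], ["B1", "B2"], ["C1", "C2"]]
random.seed(0)
paragraph = recombine(rewrites)
```

Sampling per sentence (rather than always taking the first generation) yields a paragraph that mixes both generations, which is what the averaged evaluation is taken over.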
## G.2.2 Jamdec

As described, JAMDEC has three distinct stages (keyword extraction, over-generation, and filtering). We also include a pre-processing step which prepares the raw data for obfuscation. We outline the hyperparameter values used in each stage below.

**Data Pre-Processing.** We pre-process the raw text before obfuscating. First, we divide each text into paragraphs. We then go through each paragraph sentence by sentence, adding each sentence to a list y_orig. For each sentence, we concatenate all sentences in the same paragraph that appear before it and store the result in a list x_l. This yields a list of original sentences y_orig and a list of left contexts x_l. If a sentence is the first in its paragraph, we use the previous paragraph's last sentence as the left context. For the first sentence of the text, we use the sentence itself as the left context. Lastly, if a sentence has fewer than 3 words, we do not change it.

**Keyword Extraction.** We use three kinds of keyword extraction, KeyBERT, Likelihood-T5, and Likelihood-GPT2, as described in Section 4. For KeyBERT we used unigrams and returned n/2 keywords, where n is the length of the original sentence. For Likelihood-T5 we used T5-base (Raffel et al., 2020), and for Likelihood-GPT2 we used GPT2-XL (1.5B) (Radford et al., 2019). For both, we used a likelihood threshold of 0.5, meaning any original word whose next-token probability was below 0.5 was kept as a keyword. To further support creative and diverse generation, we include disjoint constraints, which are satisfied when any one of a list of alternatives is met. Using disjoint constraints, we add both "like" words (same root word with different tenses) and "similar" words (synonyms) of the keywords. To do this, we start by creating a static dictionary of word embeddings.
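The pre-processing step above can be sketched as follows. This is a minimal sketch that assumes sentences are already split; the final rule (leaving sentences of fewer than 3 words unchanged) is applied downstream.

```python
def build_contexts(paragraphs):
    """paragraphs: list of paragraphs, each a list of sentence strings.
    Returns (y_orig, x_l): original sentences and their left contexts."""
    y_orig, x_l = [], []
    prev_par_last = None
    for par in paragraphs:
        for i, sent in enumerate(par):
            y_orig.append(sent)
            if i > 0:
                x_l.append(" ".join(par[:i]))   # preceding sentences in this paragraph
            elif prev_par_last is not None:
                x_l.append(prev_par_last)       # first in paragraph: previous paragraph's last
            else:
                x_l.append(sent)                # very first sentence: itself
        prev_par_last = par[-1]
    return y_orig, x_l

y, x = build_contexts([["S1.", "S2."], ["S3.", "S4."]])
# y == ["S1.", "S2.", "S3.", "S4."], x == ["S1.", "S1.", "S2.", "S3."]
```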
For our experimentation, we used a list of the 20K most common English words (List) and converted each word into tokens using the pretrained T5-base model (Raffel et al., 2020). For more details on this static dictionary, see Appendix G.2.3. Then, to find the top "similar" words, we used the cosine similarity
between the original keyword and each word in the static dictionary, choosing the top 4 with the highest score. To find the top "like" words, we used the spaCy package (Honnibal and Montani, 2017) to find the first 4 words in the static dictionary with the same word lemma as the original keyword. We used three versions of the keywords as constraints: the original keywords, the original keywords with the "like" words, and the original keywords with the "like" and "similar" words.

**Generation.** For our experimentation, we used NeuroLogic constrained beam search (Lu et al., 2021) and diverse beam search (Vijayakumar et al., 2016). The base model was GPT2-XL (1.5B). For most of the experimentation (except for the ablation study in Appendix A.4), we used a beam width of 50 and a matching number of return sequences. The maximum generation length was set to twice the largest input length in a batch, and batches were grouped by input length so that similar maximum lengths were batched together. We also set the no-repeat n-gram size to 3. For decoding within the beam search, we ran each combination twice, once with sampling and once with greedy decoding. We used a likelihood pruning factor of 0.4 and a constraint pruning factor of 0.6.
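The "similar"-word lookup described above can be sketched with toy vectors; the paper derives the real embeddings from T5-base, so the vocabulary and vectors below are purely illustrative.

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

def top_similar(keyword_vec, dictionary, k=4):
    """Rank every dictionary word by cosine similarity to the keyword
    embedding and return the top-k words."""
    scored = sorted(dictionary.items(),
                    key=lambda kv: cosine(keyword_vec, kv[1]),
                    reverse=True)
    return [word for word, _ in scored[:k]]

# Toy 2-d embedding dictionary standing in for the 20K-word T5 dictionary.
toy = {"large": [1.0, 0.1], "big": [0.9, 0.2], "huge": [0.8, 0.3],
       "tiny": [-1.0, 0.0], "vast": [0.7, 0.1]}
words = top_similar([1.0, 0.0], toy, k=4)  # "tiny" points the other way and is excluded
```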
For the constraints, we used both ordered constraints (the constraints must be met in a specific order) and unordered constraints. Lastly, we employed early stopping, which halts the beam search when new candidates are no better than the current ones. When diversity was employed, we used a diversity penalty of 5,000. Hyperparameters were selected based on experimentation on Reuter 50-50 (Liu, 2011), which is a
sub-sample of newswire articles produced by Reuters in 1996-1997, each having at least one subtopic of the corporate/industrial class. This is a common baseline used for authorship verification (Qian et al., 2017). In summary, we ran generations for each sentence using the following combinations of methods:

- *Decoding Method*: Sampling, Greedy
- *Type of Constraints*: Original, Original + Like, Original + Like + Similar
- *Ordered Constraint*: True, False
- *Diversity in Pre-Processing*: True, False

**Filtering.** For our experimentation, we ran two different filtering techniques. Each method starts with a base NLI and CoLA threshold. Due to the lack of an evaluation set, all hyperparameters were selected using a grid search on the smallest dataset of each kind (AMT-3 and BLOG-5). In some cases, none of the generated candidates passes both the NLI and CoLA filters. To process such cases, we consider two variants of our method: (1) JAMDEC, where we simply output the original sentence, and (2) JAMDEC + Stylo, where we run a basic stylometric-based obfuscator on the original sentence and then apply a second CoLA threshold to this altered sentence. The basic stylometric-based obfuscator is explained in detail in Appendix G.2.3.
If the altered sentence does not pass the filter, then the original sentence is used. A full list of hyperparameters for each method can be viewed in Table 13. We also provide, in Table 14, the average percentage of sentences that passed the basic NLI/CoLA thresholds and the second CoLA threshold used in JAMDEC + Stylo.
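The two filtering variants can be sketched as follows. This is a hedged outline, not the released implementation: `stylo_alter` and `cola_score` are hypothetical stand-ins for the stylometric obfuscator of Appendix G.2.3 and a CoLA acceptability scorer.

```python
def filter_candidates(cands, nli_t, cola_t, original,
                      stylo_alter=None, second_cola_t=None, cola_score=None):
    """cands: list of (text, nli_score, cola_score) candidate rewrites.
    Keep candidates clearing both thresholds; otherwise fall back."""
    passed = [c for c, nli, cola in cands if nli >= nli_t and cola >= cola_t]
    if passed:
        return passed[0]                  # a surviving candidate
    if stylo_alter is not None:           # JAMDEC + Stylo variant
        altered = stylo_alter(original)
        if cola_score(altered) >= second_cola_t:
            return altered
    return original                       # plain JAMDEC fallback

# Toy usage with the AMT base thresholds from Table 13 (NLI 0.3, CoLA 0.30).
cands = [("rewrite A", 0.9, 0.5), ("rewrite B", 0.2, 0.9)]
best = filter_candidates(cands, 0.3, 0.30, "original sentence")
# "rewrite B" fails the NLI threshold, so "rewrite A" survives.
```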
## G.2.3 Our Stylometric-Based Obfuscator

**Set-Up.** We consider the original prompt (sentence) x, which is composed of words x_1, ..., x_n. Before decoding, we "freeze" all tokens that correspond to function words. Function words are grammatical words that serve as connectors or structure indicators in a sentence rather than conveying lexical meaning. Therefore, we only consider changing content words such as nouns, adjectives, and verbs.

| Dataset | Hyperparameter | JAMDEC | JAMDEC + Stylo |
|---------|----------------------|--------|----------------|
| AMT     | Base NLI Threshold   | 0.3    | 0.3            |
| AMT     | Base CoLA Threshold  | 0.30   | 0.4            |
| AMT     | Second CoLA Threshold | -     | 0.7            |
| BLOG    | Base NLI Threshold   | 0.1    | 0.1            |
| BLOG    | Base CoLA Threshold  | 0.10   | 0.1            |
| BLOG    | Second CoLA Threshold | -     | 0.7            |

A difficult aspect of a word-changing method is choosing which words are truly equivalent to the original word. For our method, we consider new words as replacements based on the following:

1. Similarity to the original word, S_t
2. Grammatical correctness of the new sentence, G_t

Using these two metrics, we created a 3-step method for identifying and changing certain words of a sentence. The pipeline can be viewed in Figure 11 and is described in detail below.

*(Figure 11: the obfuscator pipeline — build a dictionary of word embeddings from the 20K most common English words, converted to T5 tokens with embeddings averaged for multi-token words; find the top-k similar words by cosine similarity, limited to the same tense for verbs and matching singular/plural for nouns; sample a new word; and combine the similarity score with a CoLA threshold.)*
Step 1: Word Embeddings Dictionary. We start by creating a new static dictionary of word embeddings, which depends on the base model. For our experimentation, we use a list of the 20K most common English words (List) and convert each word into tokens using the T5-base (220M) pretrained model (Raffel et al., 2020). Then, using these matched tokens, we extracted their corresponding word embedding vectors (weights in the last attention layer). If a word matched multiple T5 tokens, we averaged their corresponding word embedding vectors. This resulted in a static word embedding dictionary D of vectors d1, ..., d20K, where di ∈ R^|V| and |V| is the size of the T5 vocabulary.
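A minimal sketch of Step 1, assuming a toy tokenizer and a random stand-in for the T5 embedding matrix (all names and values here are hypothetical illustrations, not the paper's actual artifacts):

```python
import numpy as np

# Toy stand-ins (assumptions): in the paper, token ids come from the T5-base
# tokenizer and embedding vectors from the model's last attention layer weights.
rng = np.random.default_rng(0)
vocab_size, dim = 100, 8
token_embeddings = rng.normal(size=(vocab_size, dim))
toy_tokenizer = {"lake": [5], "walked": [7, 12]}  # word -> token ids

def build_embedding_dictionary(words, tokenizer, embeddings):
    # If a word maps to several tokens, average their embedding vectors.
    return {w: embeddings[tokenizer[w]].mean(axis=0) for w in words}

D = build_embedding_dictionary(["lake", "walked"], toy_tokenizer, token_embeddings)
```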
| Dataset | Statistic | JAMDEC | JAMDEC + Stylo |
|---|---|---|---|
| AMT-3 | Pass Base Thresholds | | |
| AMT-3 | Pass Second CoLA Threshold | - | |
| AMT-3 | Original Sent. Used | 0.48 | |
| AMT-5 | Pass Base NLI Threshold | | |
| AMT-5 | Pass Base CoLA Threshold | - | |
| AMT-5 | Original Sent. Used | 0.48 | |
| AMT-10 | Pass Base NLI Threshold | | |
| AMT-10 | Pass Base CoLA Threshold | - | |
| AMT-10 | Original Sent. Used | 0.47 | |
| BLOG-5 | Pass Base NLI Threshold | | |
| BLOG-5 | Pass Base CoLA Threshold | - | |
| BLOG-5 | Original Sent. Used | 0.43 | |
| BLOG-10 | Pass Base NLI Threshold | | |
| BLOG-10 | Pass Base CoLA Threshold | - | |
| BLOG-10 | Original Sent. Used | 0.4 | |

Step 2: Similar Words. Next, we find the top k words in D most similar to the original word xt, using cosine similarity of the word embeddings. We only consider verbs of the same tense and nouns that match the singular or plural form of the original token xt. Let W be the set of words w1, ..., wk with the highest similarity scores, and let R be the set of the corresponding top-k similarity scores s1, ..., sk. We define the following similarity score distribution St for the original word xt:

$$S_{t}=\begin{cases}\frac{s_{i}-\min(R)}{\max(R)-\min(R)}&\text{if }w_{i}\in W\\ 0&\text{otherwise.}\end{cases}\tag{1}$$

Step 3: Grammar Scores. Using the top k similar words w1, ..., wk from the previous step, we compute each grammar score gi using a RoBERTa base model (Liu et al., 2019) finetuned on the Corpus of Linguistic Acceptability (CoLA) (Warstadt et al., 2019; Morris et al., 2020), a large corpus containing 10.5K sentences annotated for grammatical acceptability by their original authors.
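The top-k similarity distribution of Step 2 (Eq. 1) can be sketched as follows; the toy dictionary and query vector are illustrative, and the tense and singular/plural filtering is omitted:

```python
import numpy as np

def similarity_distribution(x_emb, dictionary, k):
    # Top-k cosine similarities against the dictionary, min-max normalized (Eq. 1).
    words = list(dictionary)
    vecs = np.stack([dictionary[w] for w in words])
    sims = vecs @ x_emb / (np.linalg.norm(vecs, axis=1) * np.linalg.norm(x_emb))
    top = np.argsort(sims)[-k:]                 # indices of the top-k words (the set W)
    s = sims[top]                               # the set R of top-k scores
    dist = (s - s.min()) / (s.max() - s.min())  # min-max normalization over R
    return {words[i]: float(d) for i, d in zip(top, dist)}  # words outside W get 0
```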
We do this by using the generated text x1, ..., xt−1 before xt and the original text xt+1, ..., xn after the generated text. For example, if the original text was "I went to a big lake", and we have generated "I walked to a" and are currently trying to find the grammar score for "huge", we would use "I walked to a [huge] lake" as input to the CoLA model. We use the probability of the input being grammatically acceptable as gi. We do this for each similar word, resulting in a set Q of grammar scores g1, ..., gk. Lastly, we impose a lower threshold δ, which we set so that the grammar scores are guaranteed to be high; it can be tuned for specific tasks. Similar to the similarity scores, we construct a grammar score distribution Gt for the original word xt as

$$G_{t}=\begin{cases}\frac{g_{i}-\min(Q)}{\max(Q)-\min(Q)}&\text{if }w_{i}\in W,\,g_{i}>\delta\\ 0&\text{otherwise.}\end{cases}\tag{2}$$

Step 4: Word Selection. Lastly, we combine the similarity score distribution St and the grammar score distribution Gt using the equation Ft = αSt + βGt, where α and β are hyperparameters controlling the importance of similarity or grammatical acceptability. We sample from the final distribution Ft to generate the word replacement. However, we note that the original
word is included in the top k similar words and could therefore appear in the final generation. This method is repeated for each content word of the original text. An example of this method on text from the Reuter 50-50 dataset (Liu) can be found in Table 15.
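Step 4 can be sketched as below, with illustrative candidate scores and hypothetical values for α and β:

```python
import random

def combine_and_sample(S, G, alpha=0.5, beta=0.5, seed=0):
    # F_t = alpha * S_t + beta * G_t, then sample the replacement word from F_t.
    words = sorted(S)
    weights = [alpha * S[w] + beta * G[w] for w in words]
    return random.Random(seed).choices(words, weights=weights, k=1)[0]

S = {"huge": 1.0, "big": 0.4, "vast": 0.0}  # similarity distribution (Eq. 1)
G = {"huge": 0.8, "big": 1.0, "vast": 0.0}  # grammar distribution (Eq. 2)
replacement = combine_and_sample(S, G)
```

A candidate with zero weight in both distributions (here "vast") can never be sampled, while the original word itself may be drawn if it appears among the top-k candidates.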
## G.3 Evaluation Methodology And Other Details

Automatic Evaluation. We used five automatic evaluations: Drop Rate (ENS and BertAA) (Mah-
mood et al., 2019a; Fabien et al., 2020), METEOR (Banerjee and Lavie, 2005), NLI (Liu et al., 2022), and CoLA (Warstadt et al., 2019).

| Original Text | Obfuscated Text |
|---|---|
| The site does not include the countries' actual data - that may come later - but it lists contacts for obtaining the information. | The site does not contain the states' real files - that might come later - but it includes contacts for obtaining the information. |
| The International Monetary Fund open a site on the Internet Thursday providing information about the types of economic data available in 18 member countries. | The International Monetary Fund started a **page** on the internet Thursday delivering advice about the types of economic records offered in 18 membership regions. |
| Senator Bob Kerrey is preparing legislation in an attempt to break the deadlock over computer encryption export policy, people familiar with the Senator's plans said. | Senator Bob Kerrey is preparing regulation in an **effort to crack** the deadlock over internet encryption importation policy, people acquainted with the Senator's plans said. |

The drop rate is the average decrease in the number of texts that a classifier attributes to the original author, comparing the obfuscated texts to the original texts. Two classification models were used to calculate the drop rate: an ENS and a BertAA model. The training of the ENS model is described in Appendix G.2.1 under "Mutant-X" (Mahmood et al., 2019a). The training of BertAA is described in Fabien et al. (2020). METEOR (Metric for Evaluation of Translation with Explicit ORdering) (Banerjee and Lavie, 2005) is a common baseline used in machine translation. It is calculated as the harmonic mean of precision and recall using unigram matching and ranges from 0 (no overlap) to 1 (exact overlap). Because it relies on exact token matching, it is not ideal for measuring paraphrases, which can use drastically different tokens while preserving the same meaning. We include this metric since it is heavily reported in the literature; however, we rely on another metric, NLI (Natural Language Inference), as an indicator of content preservation. NLI is the task of predicting whether two texts are "entailed", in other words, whether, if one text is true, the other logically follows. We used the WANLI model (Liu et
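As a minimal sketch (with illustrative author labels and predictions), the drop rate can be computed from classifier attributions on original vs. obfuscated texts:

```python
def drop_rate(true_authors, preds_original, preds_obfuscated):
    # Decrease in the fraction of texts attributed to the true author.
    n = len(true_authors)
    acc_before = sum(p == a for p, a in zip(preds_original, true_authors)) / n
    acc_after = sum(p == a for p, a in zip(preds_obfuscated, true_authors)) / n
    return acc_before - acc_after

# The classifier identifies 3/4 original texts but only 1/4 obfuscated ones.
rate = drop_rate(["A", "B", "A", "C"],
                 ["A", "B", "A", "A"],
                 ["B", "B", "C", "A"])  # 0.5
```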
al., 2022) as our NLI model and report the average highest NLI score for each sentence. That is, we take each sentence in the obfuscated text and calculate the probability of entailment, according to the WANLI model, with each sentence in the original; we then choose the highest entailment value. What is reported is the average of these maximum values over all texts. Lastly, we use a CoLA (Corpus of Linguistic Acceptability) (Warstadt et al., 2019) model as a measure of grammatical correctness. Given a text, the model reports a probability of grammatical acceptability (ranging from 0 to 1), and we use the average of these probabilities as the CoLA score.

Inter-rater Agreement. We decided to use two different classifier models (ENS and BertAA) to calculate the drop rate. Since these models use different architectures and different sets of features, we wanted to report the inter-rater agreement between them. We use Cohen's kappa coefficient, which measures inter-rater reliability on a scale from 0 (complete disagreement) to 1 (complete agreement). It is thought to be a more robust measure because it takes the probability of agreement by chance into consideration. See Table 16 for the results.
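Cohen's kappa between two raters can be sketched as follows (the rater label lists are illustrative):

```python
def cohens_kappa(rater_a, rater_b):
    n = len(rater_a)
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n  # observed agreement
    labels = set(rater_a) | set(rater_b)
    # Expected agreement by chance, from each rater's marginal label frequencies.
    p_e = sum((rater_a.count(l) / n) * (rater_b.count(l) / n) for l in labels)
    return (p_o - p_e) / (1 - p_e)

print(cohens_kappa([1, 1, 1, 0], [1, 1, 0, 0]))  # 0.5
```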
## G.4 Human Evaluation

All human evaluations were conducted on Amazon Mechanical Turk (AMT) (Mechanical Turk). The data for the human evaluations were randomly selected from the passages in AMT-3. Each passage was separated into shorter sections ranging from one to four sentences. Then n = 32, 35, and 35 of these shorter sections were selected from author "H", "PP", and "QQ" texts, respectively (author "H" has fewer passages overall than "PP" or "QQ" and therefore had slightly fewer short texts chosen for the human evaluation), for a total of 102 passages. The corresponding obfuscated text was then matched for the following methods: Mutant-X (ENS), Machine Translation, Stylometric, GPT3.5 (Sentence), JAMDEC, and JAMDEC + Stylo. For each passage, the AMT worker was shown the original and obfuscated passage side by side and asked the following five questions.

| Dataset | Classifier Pair | Mutant-X ENS | Mutant-X RFC | GPT3 Sentence | GPT3 Paragraph | Paraph. | Machine Transl. | Stylometric | JAMDEC W/O Stylo | JAMDEC W/ Stylo |
|---|---|---|---|---|---|---|---|---|---|---|
| AMT-3 | ENS-RFC | 0.19 | 0.27 | 0.72 | 0.59 | 0.83 | 0.82 | 0.77 | 0.66 | 0.67 |
| AMT-3 | ENS-BertAA | 0.83 | 0.39 | - | - | 0.89 | 0.65 | 0.58 | 0.77 | 0.77 |
| AMT-3 | BertAA-RFC | 0.30 | 0.72 | - | - | 0.83 | 0.65 | 0.78 | 0.89 | 0.89 |
| AMT-5 | ENS-RFC | 0.26 | 0.33 | - | - | 0.57 | 0.60 | 0.54 | 0.64 | 0.69 |
| AMT-5 | ENS-BertAA | 0.09 | 0.29 | - | - | 0.54 | 0.56 | 0.53 | 0.47 | 0.43 |
| AMT-5 | BertAA-RFC | 0.44 | 0.11 | - | - | 0.63 | 0.47 | 0.31 | 0.50 | 0.54 |
| AMT-10 | ENS-RFC | 0.03 | 0.21 | - | - | 0.45 | 0.39 | 0.57 | 0.39 | 0.35 |
| AMT-10 | ENS-BertAA | 0.10 | 0.38 | - | - | 0.56 | 0.34 | 0.48 | 0.29 | 0.36 |
| AMT-10 | BertAA-RFC | 0.43 | 0.11 | - | - | 0.52 | 0.34 | 0.38 | 0.37 | 0.35 |
1. … rewritten text?
2. Fluency: How fluent (natural sounding) is the rewritten text?
3. Content: How much content is preserved in the rewritten text compared to the original text?
4. Content: Is there new content added in the rewritten text that is not in the original text?
5. Style: How similar is the style between the rewritten text and the original text?

```
Algorithm 1 Constrained-Diverse-Beam-Search (CoDi-BS)
Require: max length n, number of beams k, input ids I, model M, constraints
  DPP = Diverse-Preprocessing (Algorithm 2)
  CBS = Constrained Beam Search
  Initialize: beams_0 = I
  for t = 0, ..., n − 1 do
      logits_t = M(beams_t)
      processed_logits_t = DPP(k, logits_t)
      beams_{t+1} = CBS(processed_logits_t, constraints)
  return beams_n
```

Each question was answered on a 3-point Likert scale (Perfect/Good, Fair, and Bad). Detailed instructions and examples were provided; see Figure 12. We compensated workers with an hourly wage of 15. We used a few credential checks for our Mechanical
Turk workers. First, their HIT Approval Rate for all Requesters had to be greater than 97%, and they had to be pre-approved based on work they had done in other, unrelated tasks from our lab. Due to financial constraints, each sample was rated by only one worker.

Software. We used Python 3.11.3, PyTorch 2.0.1, and HuggingFace Transformers 4.29.2.

Hardware. All experiments were run on NVIDIA A100 GPUs with 80GB memory.

Time to Run Experiments. Experimentation time for the AMT datasets ranged from 8 to 72 hours, while the BLOG experimentation ranged from 48 to 168 hours.

Diverse Beam Search. Traditional beam search searches for an output sequence that maximizes the conditional probability given the input. However, beam search tends to produce similar or redundant output sequences within a beam, resulting in a lack of diversity. Diverse Beam Search (DBS) (Vijayakumar et al., 2016) is a variation of beam search that encourages the selection of diverse sequences that are dissimilar to each other within a beam. DBS achieves this by adding a diversity penalty term to the beam search objective function, which penalizes the selection of sequences that are too similar to the ones already in the beam. Its objective function can be represented as:
$$\operatorname*{arg\,max}_{w\in W}P_{w}(y|x)+\lambda D(y,Y)$$

where x is the sequence of previous tokens and D(y, Y) is a diversity term measuring the dissimilarity between the output sequence y and the set of previously selected sequences Y within the beam.

## H Constrained Diverse Beam Search Algorithm And Extra Information

Algorithm 1 is the algorithm used in the Constrained Diverse Beam Search (CoDi-BS) proposed in our paper. It combines Diverse and Lexically Constrained Beam Search to provide a diverse candidate pool of generations that are also constrained by provided keywords.
## Algorithm 2 Diverse-Preprocessing (DPP)

```
Require: number of beams k, logit matrix L (# beams × vocab size),
         diversity penalization term λ
 1: bincount() = vector of frequency counts of a vector
 2: max(·, dim) = maximum argument of a vector along a specific dimension
 3: current_tokens = []
 4: for i = 1, ..., k do
 5:     if i = 1 then
 6:         processed_logits = L[i, :]
 7:     else
 8:         previous_token_freq = bincount(current_tokens)
 9:         processed_logits[i, :] = L[i, :] − λ · previous_token_freq
10:     if i < k then
11:         current_tokens = max(processed_logits[0 : i, :], dim = 1)
12: return processed_logits
```

In the DBS objective above, λ is a hyperparameter controlling the weight of the diversity term, and w ∈ W is the parameter vector. The diversity penalty term can take many forms, but one common approach is to use a measure of dissimilarity such as Hamming distance or cosine similarity. By promoting diversity, Diverse Beam Search can generate more varied outputs.

Constrained Beam Search. Constrained Beam Search (CBS) (Post and Vilar, 2018) is another variant of beam search, used to impose constraints on the output sequences. CBS achieves this by modifying the beam search objective function to penalize candidates that violate the constraints. The objective function for constrained beam search can be represented as:

$$\operatorname*{arg\,max}_{w\in W}P_{w}(y|x)+\lambda C(y)$$

where C(y) is a constraint function quantifying the degree to which the output sequence y satisfies linguistic or stylistic constraints, and λ is a hyperparameter controlling the weight of the constraint function. We specifically use Lexically Constrained Beam Search, where the constraints are specific words or phrases that must be included in the generated text.
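Algorithm 2 can be rendered as a minimal NumPy sketch, assuming a num_beams × vocab_size logit matrix (the toy logits in the usage below are illustrative):

```python
import numpy as np

def diverse_preprocessing(logits, lam):
    # logits: (num_beams x vocab_size) matrix; lam: diversity penalization term.
    k, vocab = logits.shape
    processed = logits.astype(float)
    current_tokens = []
    for i in range(k):
        if i > 0:
            # Penalize tokens already chosen by earlier beams at this step.
            freq = np.bincount(current_tokens, minlength=vocab)
            processed[i] = logits[i] - lam * freq
        if i < k - 1:
            # Greedy token picks of the beams processed so far.
            current_tokens = list(processed[: i + 1].argmax(axis=1))
    return processed
```

With two identical beams preferring the same token, the second beam's copy of that token is pushed down, encouraging the beams to diverge.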
Concretely, while choosing candidates to fill the beam, CBS first sorts candidates into "banks" based on the number of satisfied constraints, and then selects the top k candidates by iter
atively visiting each bank and choosing those with the highest likelihood until reaching k candidates. In terms of authorship obfuscation, we find that CBS effectively generates text closely resembling the original content by enforcing keyword inclusion, but fails to produce a variety of generations with diverse writing styles.
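The bank-based selection can be sketched as follows; the candidate tuples are illustrative, and visiting banks from most to fewest satisfied constraints is an assumption of this sketch:

```python
def select_beam(candidates, k):
    # candidates: list of (text, log_likelihood, num_constraints_met)
    banks = {}
    for cand in candidates:
        banks.setdefault(cand[2], []).append(cand)
    for bank in banks.values():
        bank.sort(key=lambda c: c[1], reverse=True)  # best likelihood first
    selected = []
    while len(selected) < k and any(banks.values()):
        # One round-robin pass over the banks, most satisfied constraints first.
        for n in sorted(banks, reverse=True):
            if banks[n] and len(selected) < k:
                selected.append(banks[n].pop(0))
    return selected
```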
# SyntaxShap: Syntax-Aware Explainability Method for Text Generation

Kenza Amara, Rita Sevastjanova, Mennatallah El-Assady
Department of Computer Science, ETH Zurich, Switzerland
{kenza.amara, menna.elassady}@ai.ethz.ch, rita.sevastjanova@inf.ethz.ch
{ "creation_datetime": "2024-03-04", "file_name": "2402.09259v1.md", "file_path": "paper_data/2402.09259v1.md", "file_size": 50868, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
## Abstract

To harness the power of large language models in safety-critical domains, we need to ensure the explainability of their predictions. However, despite the significant attention to model interpretability, there remains an unexplored domain in explaining sequence-to-sequence tasks using methods tailored for textual data. This paper introduces *SyntaxShap*, a local, model-agnostic explainability method for text generation that takes into consideration the syntax in the text data. The presented work extends Shapley values to account for parsing-based syntactic dependencies. Taking a game-theoretic approach, SyntaxShap only considers coalitions constrained by the dependency tree. We adopt a model-based evaluation to compare SyntaxShap and its weighted form to state-of-the-art explainability methods adapted to text generation tasks, using diverse metrics including faithfulness, complexity, coherency, and semantic alignment of the explanations to the model. We show that our syntax-aware method produces explanations that help build more faithful, coherent, and interpretable explanations for predictions by autoregressive models.
## 1 Introduction

Language model (LM) interpretability has become very important with the popularity of generative AI. Despite the great results achieved by the most recent LMs, there is still a large range of tasks where the models fail, e.g., capturing negations (Truong et al., 2023). Therefore, it is crucial to get a better understanding of LM reasoning and to develop faithful explainability methods. As many LMs have little transparency and their use is restricted to API calls, model-agnostic explainability methods have become the most practical techniques for gaining better insights into LMs. The SHapley Additive exPlanations (SHAP) framework is popular for generating local explanations thanks to its solid theoretical background and general applicability (Shapley et al., 1953). However, for sequence-to-sequence tasks such as next-token generation, the usage of SHAP-based methods has not been explored in depth (Mosca et al., 2022). We address this gap and develop a coalition-based explainability method inspired by Shapley values for text generation explanation. Our explainability method (in Figure 1) considers syntactic word dependencies (de Marneffe et al.). The syntax is important, as next-word predictions in autoregressive LMs underlie implicit incremental syntactic inferences, i.e., LMs implicitly capture dependencies in text data (Eisape et al., 2022). In this paper, we investigate whether dependency parsing trees can be used in the explainability process as syntactic relational graphs and help shed light on the influence of words on the model's prediction given their syntactic role in the sentence.

We evaluate the explanations on diverse metrics. First, we adapt fidelity, one of the most popular model-based evaluation metrics in xAI (eXplainable AI), to the text generation task and introduce two new metrics to test whether the generated explanations are faithful to the underlying model. Second, we introduce two qualitative evaluation metrics that capture the explanation quality with regard to human expectations, i.e., the coherency of explanations and their semantic alignment. Our evaluation procedure compares our method SyntaxShap to state-of-the-art explainability methods. Explanations produced by our method for next-token generation by two popular autoregressive models are more faithful, coherent, and semantically aligned than those of state-of-the-art SHAP-based methods that do not explicitly consider the word dependency for text
Second, we introduce two qualitative evaluation metrics that capture the explanation quality with regard to human expectations, i.e., the coherency of explanations and their semantic alignment. Our evaluation procedure compares our method SyntaxShap to state-of-the-art explainability methods. Explanations produced by our method of the next token generation by two popular autoregressive models are more faithful, coherent, and semantically aligned compared to state-of-the-art SHAP- based methods that do not explicitly consider the word dependency for text
generation tasks. To summarize, our contributions are (1) SyntaxShap, a new SHAP-based explainability method that incorporates dependency tree information, (2) quantitative metrics that address LMs' stochasticity and qualitative metrics that account for human semantic expectations, and (3) an evaluation of the explanation quality on two autoregressive LMs. Our work opens multiple new research directions for future work.
## 2 Related Work

Explainability in Linguistics. Syntax and semantics play an important role in explaining LM outcomes from a linguistic perspective. Multiple attempts have been made to explore the role of syntactic and semantic representations in enhancing LM predictions. Ek et al. (2019) look at the role of syntactic and semantic tags for the specific task of human sentence acceptability judgment. They show that syntactic tags significantly influence the predictions of the LM. In recent years, there has been increasing interest in methods that incorporate syntactic knowledge into Machine Translation (Ambati, 2008). In addition, Eisape et al. (2022) have shown that next-word predictions from autoregressive neural LMs show remarkable sensitivity to syntax. However, there has been no attempt to account for syntax in explanations of those LMs for text generation tasks (Mosca et al., 2022). For this reason, we propose to incorporate syntax-based rules to explain AR LM text generation.

SHAP-based explainability in NLP. One way to categorize model-agnostic post-hoc explainability methods is to separate them into perturbation-based and surrogate methods (Zhao et al., 2023). Among the most popular surrogate models are LIME and SHAP. The Shapley-value approach (Shapley et al., 1953) provides local explanations by attributing changes in predictions for individual data inputs to the model's features. Those changes can be combined to obtain a better global understanding of the model structure. For text data, available approaches seem mostly tailored to classification settings (Kokalj et al., 2021; Chen et al., 2020).

Shapley values and complex dependencies. One underlying assumption of SHAP is feature independence. Confronted with more diverse types of data inputs, newer methods offer the possibility to account for more complex dependencies between features. Frye et al. (2020) propose Asymmetric Shapley values (ASV), which drop the symmetry assumption and enable the generation of model-agnostic explanations incorporating any causal dependency known to be present in the data. Following up on this work, Heskes et al. (2020) propose Causal Shapley values to account more specifically for causal structures behind feature interactions. Chen et al. (2019) construct coalitions based on a graph structure, grouping features with their neighbors or connected nodes. When it comes to text data, words present strong interactions, and their contribution relies heavily on the context. Therefore, feature attributions for textual
{ "creation_datetime": "2024-03-04", "file_name": "2402.09259v1.md", "file_path": "paper_data/2402.09259v1.md", "file_size": 50868, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
b7e6a885-e725-46b5-9896-5cf71d4ea23f
# Syntaxshap: Syntax-Aware Explainability Method For Text Generation ## 2 Related Work e et al. (2020) propose Asymmetric Shapley values (ASV), which drop the symmetry assumption and enable the generation of modelagnostic explanations incorporating any causal dependency known to be present in the data. Following up with this work, Heskes et al. (2020) propose Causal Shapley values to account more specifically for causal structures behind feature interactions. Chen et al. (2019) construct coalitions based on a graph structure, grouping features with their neighbors or connected nodes. When it comes to text data, words present strong interactions, and their contribution heavily rely on the context. Therefore, feature attributions for textual data should be specifically tailored to account for those complex dependencies. HEDGE is one example of a SHAP-based method addressing the context dependencies specific to text data (Chen et al., 2020). It hierarchically builds clusters of words based on their interactions. While their objective is to cluster words to minimize the loss of faithfulness, i.e., prediction change, we propose a new strategy to create coalitions of words that respect the syntactic relationships dictated by the dependency tree. This way, we take into consideration the syntactic dependencies that are the basis of linguistics and which were proven essential for next-word predictions from autoregressive LMs (Eisape et al., 2022).
## 3 SyntaxShap Methodology

## 3.1 Objective

Given a sentence of $n$ words $x = (x_1, \ldots, x_n)$ and the $m$ words $\hat{y} = (\hat{y}_1, \ldots, \hat{y}_m)$ generated by an autoregressive LM $f$, the objective is to evaluate the importance of each input token for the prediction $\hat{y}$. We focus on explaining the next token, i.e., $m = 1$. Let $f_y(x)$ be the model's predicted probability that the input data $x$ has the next token $y$. Our method produces local explanations.
## 3.2 Shapley Values Approach

We adopt a game-theoretic approach to measure the importance of each word $x_i$ to the prediction. The Shapley value approach was first introduced in cooperative game theory (Shapley et al., 1953) and computes feature importance by evaluating how each feature $i$ interacts with the other features in a coalition $S$. For each coalition of features, it computes the marginal contribution of feature $i$, i.e., the difference between the importance of all features in $S$ with and without $i$. It aggregates these marginal contributions over all subsets of features to obtain the final importance of feature $i$.
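The marginal-contribution averaging described above can be made concrete with a brute-force implementation. The sketch below is our illustration (not the paper's code); the toy additive value function stands in for the model output $f$ on a coalition:

```python
from itertools import combinations
from math import factorial

def shapley_values(features, value):
    """Exact Shapley values: average the marginal contribution of each
    feature i over all coalitions S drawn from the remaining features."""
    n = len(features)
    phi = {}
    for i in features:
        others = [f for f in features if f != i]
        total = 0.0
        for k in range(n):
            for S in combinations(others, k):
                S = frozenset(S)
                # Shapley weight of a coalition of size k
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                total += weight * (value(S | {i}) - value(S))
        phi[i] = total
    return phi

# Toy additive value function: each word contributes a fixed amount,
# so the Shapley values recover the weights exactly.
weights = {"the": 0.1, "cat": 0.6, "sat": 0.3}
phi = shapley_values(list(weights), lambda S: sum(weights[w] for w in S))
```

For a non-additive value function (such as a real LM) the interactions no longer cancel, which is exactly where the coalition structure starts to matter.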
## 3.3 Syntax-Aware Coalition Game

Our work focuses on incorporating syntax knowledge into model-agnostic explainability. We adopt a coalition game approach that accounts for these syntactic rules. As illustrated in Figure 1, SyntaxShap computes the contribution of words considering only *allowed* coalitions $S$ constrained by the dependency tree structure. We define a coalition $S$ as a set of words or features $\{x_i, i \in [1, n]\}$ from the input sentence $x$. Given a dependency tree with $L$ levels, $l_i \in [1, L]$ corresponds to the level of word $x_i$ in the tree and $n_l > 0$ to the number of words at level $l$ in the tree. To compute the contribution of the words in the tree, SyntaxShap only considers the allowed coalitions $\mathcal{S} = \bigcup_{l=0}^{L} \mathfrak{S}_l$, where $\mathfrak{S}_l$ is the set of allowed coalitions at level $l$. We pose by default $\mathfrak{S}_0 = \{S_0\}$, where $S_0 = \{\}$ is the null coalition.

**Notations** Let $X_l$ be the set that contains all the words at level $l$, $X_{<l}$ the one that contains all the words before level $l$ in the tree, and $\mathcal{P}(X_l)$ the powerset, i.e., the set of all subsets of $X_l$.

**Definition (Set of coalitions at level $l$)** The set of coalitions $\mathfrak{S}_l$ at level $l$ is defined as:

$$\mathfrak{S}_{l}=\bigcup_{\sigma\in\mathcal{P}(X_{l})}X_{<l}\cup\sigma$$

**Property** At each level of the tree, each coalition $S \in \mathfrak{S}_l$ respects two properties:

$$\forall i\in[1,n]\ \text{s.t.}\ l_{i}>l,\ x_{i}\notin S.\tag{1}$$

$$\forall i\in[1,n]\ \text{s.t.}\ l_{i}<l,\ x_{i}\in S.\tag{2}$$

Given the tree-based coalitions, we can compute the contribution of each token in the input sentence to the model's prediction. The contribution of feature $x_i$ at level $l_i$ of the dependency tree to the model output $\hat{y}$ is defined as:

$$\phi_{i}=\frac{1}{N_{i}}\sum_{S\in\left(\bigcup_{p=0}^{l_{i}-1}\mathfrak{S}_{p}\right)\cup\,\mathfrak{S}_{l_{i}}^{\backslash i}}\left[f_{\hat{y}}(S\cup\{x_{i}\})-f_{\hat{y}}(S)\right]\tag{3}$$

where $N_i$ corresponds to the number of allowed coalitions at level $l_i$ that do not contain feature $x_i$, and $\mathfrak{S}_{l}^{\backslash i}$ corresponds to the set of coalitions at level $l$ that exclude word $x_i$, i.e.,

$$\mathfrak{S}_{l}^{\backslash i}=\bigcup_{\sigma\in\mathcal{P}(X_{l})}X_{<l}\cup(\sigma\backslash\{x_{i}\}).$$

**Property** Given the number $n_l$ of words (or nodes) at level $l$ of the tree, each word at the same level shares the same number of updates, i.e., allowed coalitions: $\forall x_i$ s.t. $l_i = l$, $N_i = N_l$, and $N_l$ can be expressed as:

$$N_{l}=\sum_{p=0}^{l-1}2^{n_{p}}+2^{n_{l}-1}-l\tag{4}$$

**Proof** To compute $N_l$ in Equation 4, we proceed recursively starting from the root nodes. The dependency tree has $L$ levels starting from level $l = 1$. We postulate a hypothetical level 0 where the null coalition $S_0 = \{\}$ can be formed. At level 1, there is the root node of the tree, i.e., $n_1 = 1$, and the number of coalitions is $|\mathfrak{S}_1| = |\{\{x_{\text{root}}\}\}| = 1$. Let $n_l$ be the number of nodes at level $l$. The number of combinations of $n_l$ features is $2^{n_l}$. Since we already counted the null coalition at the hypothetical level 0, we do not count it in the allowed coalitions $\mathfrak{S}_l$ at level $l$. We arrive at the final number of coalitions $|\mathfrak{S}_l| = 2^{n_l} - 1$. Now, consider a word $x$ at level $l$. This word can join all allowed coalitions at levels $< l$ (there are $1+\sum_{p=1}^{l-1}(2^{n_p}-1)$ of them) and all the coalitions of the words at level $l$ where $x$ does not appear (there are $2^{n_l-1}-1$ of them). In conclusion, the number of allowed coalitions for word $x$ at level $l$ is:

$$N_{l}=1+\sum_{p=1}^{l-1}(2^{n_{p}}-1)+2^{n_{l}-1}-1=\sum_{p=0}^{l-1}2^{n_{p}}+2^{n_{l}-1}-l$$

We pose $n_0 = 0$, the number of nodes at the hypothetical level 0, to start the sum at $p = 0$ for simplification. Our strategy of building tree-based coalitions drops the efficiency axiom of Shapley values but preserves the symmetry axiom for words at the same level of the dependency tree, as well as the nullity and additivity axioms. Appendix B.1 details the four Shapley axioms and discusses which ones SyntaxShap respects or violates. Note that this does not undermine the quality of the explanations, since the axioms were shown to work against the goals of feature selection in some cases (Fryer et al., 2021).
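To make Equations (1)-(4) concrete, the sketch below (our illustrative code, not the authors' implementation) enumerates the allowed coalitions level by level and averages the marginal contributions of Equation (3). The scoring function is additive, so the expected contributions are known in advance:

```python
from itertools import combinations

def allowed_coalitions(levels):
    """Allowed coalition sets S_l per tree level: X_{<l} plus any subset
    of the level-l words; level 0 holds the null coalition."""
    L = max(levels.values())
    coal = {0: [frozenset()]}
    for l in range(1, L + 1):
        below = frozenset(w for w, lv in levels.items() if lv < l)
        at_l = [w for w, lv in levels.items() if lv == l]
        coal[l] = [below | frozenset(s)
                   for k in range(len(at_l) + 1)
                   for s in combinations(at_l, k)]
    return coal

def syntaxshap(levels, f):
    """phi_i per Eq. (3): average marginal contribution of word w over the
    distinct allowed coalitions below its level and at its level minus w."""
    coal = allowed_coalitions(levels)
    phi = {}
    for w, lw in levels.items():
        S_set = set()
        for l in range(lw):
            S_set.update(coal[l])
        S_set.update(S - {w} for S in coal[lw])  # the set S_l^{\i}
        # deduplication via `set` matches the -l correction in Eq. (4)
        phi[w] = sum(f(S | {w}) - f(S) for S in S_set) / len(S_set)
    return phi

# Toy dependency tree for "the cat sat quickly": sat (level 1),
# cat and quickly (level 2), the (level 3); additive scoring function.
levels = {"sat": 1, "cat": 2, "quickly": 2, "the": 3}
score = {"sat": 0.4, "cat": 0.3, "quickly": 0.2, "the": 0.1}
phi = syntaxshap(levels, lambda S: sum(score[w] for w in S))
```

With this tree, the word "the" at level 3 sees $N_3 = (1+2+4) + 2^{0} - 3 = 5$ allowed coalitions, matching Equation (4).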
## 3.4 Weighted SyntaxShap

In the context of text data and syntactic dependencies, we assume that words at the top of the tree should be given more importance since they are the syntactic foundations of the sentence and usually correspond to the verb, subject, and verb qualifiers. Therefore, we propose SyntaxShap-W, a variant of our method that weighs words according to their position in the tree. The weights are tree-level-dependent and correspond to the inverse of the level of the word whose contribution is computed, i.e., $w_l = 1/l$. The contribution of a word $x_i$ at level $l_i$ can be expressed as:

$$\phi_{i}=\frac{w_{l_{i}}}{N_{i}}\sum_{S\in\left(\bigcup_{p=0}^{l_{i}-1}\mathfrak{S}_{p}\right)\cup\,\mathfrak{S}_{l_{i}}^{\backslash i}}\left[f_{\hat{y}}(S\cup\{x_{i}\})-f_{\hat{y}}(S)\right]\tag{5}$$
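Relative to Equation (3), the only change is the $w_{l_i} = 1/l_i$ factor, so the weighted scores can be obtained by rescaling unweighted contributions. A minimal sketch with made-up numbers:

```python
def weight_contributions(phi, levels):
    """Apply the SyntaxShap-W weighting w_l = 1/l of Eq. (5) to
    unweighted contributions phi_i, given each word's tree level."""
    return {w: phi[w] / levels[w] for w in phi}

# Hypothetical unweighted contributions and dependency-tree levels
phi = {"sat": 0.4, "cat": 0.3, "the": 0.1}
levels = {"sat": 1, "cat": 2, "the": 3}
phi_w = weight_contributions(phi, levels)   # root "sat" keeps its score
```

The root keeps its full contribution while deeper words are progressively discounted, reflecting the assumption that higher levels carry the syntactic foundations of the sentence.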
## 4 Evaluation

This section describes our model-based evaluation procedure, which encompasses both quantitative and qualitative analysis of the explanations. While previous works focus only on the faithfulness of explanations to assess their quality, we also propose to consider qualitative human expectations.
## 4.1 Quantitative Evaluation

To analyze whether the explanations are faithful to the model, we adopt *fidelity*, the most common model-based metric in xAI (Carvalho et al., 2019), which looks at the top-1 prediction, and we propose two new variants that balance the LM's probabilistic nature by considering the top-$K$ predictions.

**Fidelity** Fidelity measures how faithful the explanation is to the model's initial prediction of the next token. Keeping the top $t\%$ words in the input sentence, fidelity calculates the average change in the prediction probability of the predicted word over all test data as follows:

$$\mathrm{Fid}(t)=\frac{1}{N}\sum_{i=1}^{N}\left(f_{\hat{y}}(x_{i})-f_{\hat{y}}(\tilde{x}_{i}^{(t)})\right)\tag{6}$$

where $\tilde{x}_i^{(t)}$ is the masked input sentence constructed by keeping the $t\%$ top-scored words of $x_i$, $\hat{y}$ is the predicted token given input $x_i$, i.e., $\hat{y} = \operatorname{argmax}_{y'} f_{y'}(x_i)$, and $N$ is the number of examples. Usually, the missing words are replaced by the null token, but we also propose an alternative fidelity $\mathrm{Fid}_{rand}$ that replaces the missing words with random words from the tokenizer vocabulary.

**Probability divergence@K** The probability divergence at $K$ corresponds to the average difference in the top-$K$ prediction probabilities on the predicted class over all test data. It can be expressed as follows:

$$\mathrm{div}@K=\frac{1}{N}\sum_{i=1}^{N}\sum_{k=0}^{K}\left(f_{\hat{y}_{k}}(x_{i})-f_{\hat{y}_{k}}(\tilde{x}_{i}^{(t)})\right)\tag{7}$$

where $\hat{y}_k$ is the top $k$-th prediction given input $x_i$. We choose $K = 10$ because most of the sentences can be completed with multiple possible words that are synonyms or semantically consistent with the input sentence.

**Accuracy@K** The accuracy at $K$ corresponds to the average ratio of common top-$K$ predictions between the full and masked sentences:

$$\mathrm{acc}@K=\frac{1}{N}\sum_{i=1}^{N}\frac{\left|\{\hat{y}_{k},\,k\leq K\}\cap\{\tilde{y}_{k}^{(t)},\,k\leq K\}\right|}{K}\tag{8}$$

where $\tilde{y}_k^{(t)}$ is the top $k$-th prediction given input $\tilde{x}_i^{(t)}$.
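As an illustration (our sketch, with made-up probabilities rather than real LM outputs), Fid and acc@K reduce to a few lines once the model probabilities and top-K lists have been collected:

```python
def fidelity(p_full, p_masked):
    """Eq. (6): average drop in the predicted token's probability after
    masking all but the top-scored words."""
    return sum(a - b for a, b in zip(p_full, p_masked)) / len(p_full)

def acc_at_k(topk_full, topk_masked):
    """Eq. (8): average overlap ratio between the top-K predictions of
    the full and masked sentences."""
    ratios = [len(set(a) & set(b)) / len(a)
              for a, b in zip(topk_full, topk_masked)]
    return sum(ratios) / len(ratios)

# Two test sentences, K = 3; token ids are purely illustrative
p_full, p_masked = [0.9, 0.6], [0.5, 0.4]
fid = fidelity(p_full, p_masked)
topk_full = [[1, 2, 3], [7, 8, 9]]
topk_masked = [[1, 5, 3], [9, 8, 7]]
acc3 = acc_at_k(topk_full, topk_masked)   # (2/3 + 3/3) / 2
```

div@K follows the same pattern as `fidelity`, summing probability differences over the top-$K$ tokens instead of the top-1.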
## 4.2 Qualitative Evaluation

**Coherency** Coherency describes how similar the explanation is with respect to a similar next generated token. In other words, given a pair of input sentences with a slight variation in syntax but a strong change in semantics (e.g., differing only by a negation), we expect similar explanations when the model makes similar predictions and dissimilar ones when the model is sensitive to the perturbation.

**Semantic alignment** An important criterion for evaluating a textual explanation is whether it is aligned with human expectations. As humans, we intuitively expect the language model to pay little attention to tokens in the input sentence whose semantic substance is not reflected in the prediction. This semantic alignment can be measured for semantically rich tokens that are decisive for text generation, e.g., the negation. Given a decisive token in the input sentence and a model prediction that does not semantically account for it, we compare methods on the importance rank attributed to this token. An explainability method is semantically aligned if this rank is high, i.e., if the decisive token is not important for the model's prediction.
## 5 Experiments

We evaluate SyntaxShap and SyntaxShap-W on various criteria: faithfulness and computational complexity in Section 5.2, coherency in Section 5.3, and semantic alignment of their explanations in Section 5.4.
## 5.1 Experimental Setting

For the evaluation, we use three datasets: Generics KB (*Generics*) (hug, 2020), ROCStories Winter2017 (*ROCStories*) (Mostafazadeh et al., 2017), and Inconsistent Dataset Negation (*Negation*) (Kalouli et al., 2022). They have the following characteristics: (1) the *Generics* dataset contains high-quality, semantically complete statements; (2) the *ROCStories* dataset contains a collection of five-sentence everyday life stories; (3) the *Negation* dataset contains disjoint sentence pairs, i.e., a sentence and its negated version. For evaluation purposes, we first separate the stories of the ROCStories dataset into single sentences and remove the last token from sentences in the three datasets.

|                 | Generics | ROCStories | Negation |
|-----------------|----------|------------|----------|
| Depd. Dist. µ   | 1.96     | 2.12       | 1.4      |
| Depd. Dist. σ   | 0.46     | 0.47       | 0.3      |
| # Tokens Mean   | 9.8      | 9.83       |          |
| # Unique Tokens | 3548     | 2082       |          |

We use the TextDescriptives component in spaCy to measure the dependency distance of the analyzed sentences following the universal dependency relations established by de Marneffe et al., and we compute the average number of tokens per sentence as well as the number of unique tokens in the three datasets. As shown in Table 1, sentences in the *Generics* and *ROCStories* datasets have more complex syntactic structures, and the sentences are longer than in the *Negation* dataset. We decided not to include the *Negation* dataset in the quantitative analysis because it becomes difficult to compare explainability approaches when a small number of tokens (fewer than six) is removed from short phrases without disrupting the sentence's overall meaning. Nevertheless, it is the most suitable dataset for comparing xAI methods on coherency and semantic alignment, since it contains sentences with little syntactic variation but great semantic variation, enabling fine-grained qualitative analysis.

To assess the performance of our method, we use two autoregressive LMs: the GPT-2 model (Radford et al., 2019), consisting of 117M parameters, and Mistral 7B (Jiang et al., 2023), with 7B parameters. We reproduce our experiments with four different seeds and report the mean and variance of our results. Our methods SyntaxShap and SyntaxShap-W are compared against the *Random* baseline and two other explainability baselines, *LIME* (Ribeiro et al., 2016) and *NaiveShap*, a naive SHAP-based approach that computes all coalitions, adapted to the problem of next-token generation and text data. We also compare them against *Partition*, a faster version of KernelSHAP that hierarchically clusters features. Its implementation is based on HEDGE (Chen et al., 2020), a SHAP-based method that builds hierarchical explanations via divisive generation, respecting some pre-computed word clustering, and is particularly suited for text data.
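SyntaxShap's coalitions require each token's level in the dependency tree produced by the parser. A minimal sketch, with hand-coded head indices standing in for a spaCy parse (spaCy's convention is that the root token's head is the token itself):

```python
def tree_levels(heads):
    """Map token index -> dependency-tree level, with the root at level 1.
    heads[i] is the index of token i's head; the root points to itself."""
    def level(i):
        return 1 if heads[i] == i else 1 + level(heads[i])
    return [level(i) for i in range(len(heads))]

# Hand-coded parse of "the cat sat quickly": "sat" is the root,
# "cat" and "quickly" attach to "sat", and "the" attaches to "cat".
heads = [1, 2, 2, 2]
levels = tree_levels(heads)   # [3, 2, 1, 2]
```

In practice the head indices would come from a parsed `Doc` (e.g., each token's `token.head.i` in spaCy) rather than being written by hand.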
## 5.2 Faithfulness

In this section, we evaluate the faithfulness of our explanations against Random, LIME, NaiveShap, and Partition on the full datasets, with sentence lengths between 5 and 20 tokens. NaiveShap is omitted for Mistral 7B since its computations become intractable for sentences with more than 10 tokens. See Appendix D.1 for the comparison with NaiveShap on the filtered datasets.

[Table: fidelity of Random, LIME, NaiveShap, Partition, SyntaxShap, and SyntaxShap-W on Generics and ROCStories, as mean ± std over four seeds, for (a) Mistral 7B (NaiveShap omitted) and (b) GPT-2. Most cell values were lost in extraction; the only recoverable values are SyntaxShap's: 0.615 (Generics) and 0.590 (ROCStories) with Mistral 7B, and 0.512 and 0.497 with GPT-2.]

For both models, Mistral 7B and GPT-2 in Figure 2 and Figure 3, our methods SyntaxShap and SyntaxShap-W produce more faithful explanations than the trivial random algorithm, the LIME method adapted to NLP tasks, and Partition, the state-of-the-art Shapley-based local explainability method for text data. Therefore, building coalitions based on syntactic rules gives more faithful explanations than minimizing a cohesion score that preserves the strongest word interactions. For the GPT-2 model in Figure 3, NaiveShap generates explanations as faithful as SyntaxShap's. However, SyntaxShap has the advantage of being much faster, with a computational complexity of $O(nL\,2^{n/L})$ against $O(n\,2^{n})$ for NaiveShap, where $n$ is the number of words in the input sentence and $L$ the tree depth. This is a considerable advantage when explaining long sentences with more than 10 tokens or when the LM has a high inference time. We refer to Appendix B.2 for the comparison of NaiveShap and SyntaxShap complexities and to Appendix D.2, where we show how the number of tokens affects NaiveShap computations.

SyntaxShap(-W) generates more faithful explanations than the random baseline, LIME, and Partition. Although it does not beat the NaiveShap method, it scales to longer sentences and its computation is faster.
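The complexity gap can be illustrated numerically. This is a rough operation-count sketch assuming roughly $n/L$ words per level, not a wall-clock benchmark:

```python
from math import ceil

def naive_cost(n):
    """O(n * 2^n): NaiveShap touches every coalition for every word."""
    return n * 2 ** n

def syntaxshap_cost(n, L):
    """O(n * L * 2^(n/L)): each of the L levels exposes ~n/L words,
    so the per-level coalition sets stay exponentially smaller."""
    return n * L * 2 ** ceil(n / L)

# A 15-token sentence parsed into a 5-level dependency tree
n, L = 15, 5
speedup = naive_cost(n) / syntaxshap_cost(n, L)   # hundreds of times fewer
```

For $n = 15$ and $L = 5$, the naive count is $15 \cdot 2^{15} = 491{,}520$ against $15 \cdot 5 \cdot 2^{3} = 600$, which is why NaiveShap becomes intractable well before SyntaxShap does.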
## 5.3 Coherency

In this section, we explore whether SyntaxShap produces explanations that are coherent with the model's understanding. For this evaluation, we use Mistral 7B and run a perturbation analysis using sentence pairs from the *Negation* dataset. We use a sample of 72 sentence pairs (with and without the negation *not*, and with varying usage of *with* and *without*), whereby for 20 pairs the model predicts the same next token. An example of two sentence pairs is shown in Figure 4. For pairs with equal predictions (e.g., *A mom is not a* and *A mom is a*, with the same next-token prediction **super**), we expect more similar attribution ranks than for pairs with different predictions (e.g., *A person has no* **right** and *A person has* **died**). To evaluate coherency, we first represent the attribution scores as rank vectors. We then separate pairs with equal predictions and pairs with different predictions into two distinct groups and measure the cosine similarity between the rank vectors of each pair within each group, whereby negation words are excluded to obtain equal-length vectors. The average difference in cosine similarity between the two groups for each explainability method is displayed in Figure 5. It shows that SyntaxShap and SyntaxShap-W produce more similar attributions for sentence pairs that predict the same next token and more diverse attributions for sentence pairs with different next-token predictions. Given a pair of sentences with and without a negation, which theoretically have two disjoint semantic meanings, the similarity of SyntaxShap(-W)'s token attribution values for each sentence better reflects the degree of similarity of the next-token predictions than LIME, NaiveShap, and Partition.
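The pairwise comparison above boils down to turning attribution scores into rank vectors and measuring their cosine similarity. A minimal sketch with made-up attributions:

```python
from math import sqrt

def ranks(scores):
    """Attribution scores -> importance ranks (1 = most important)."""
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    r = [0] * len(scores)
    for rank, i in enumerate(order, start=1):
        r[i] = rank
    return r

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v)))

# A pair with the same next-token prediction: we expect the two
# explanations to rank the shared words similarly.
sim_equal = cosine(ranks([0.7, 0.2, 0.1]), ranks([0.6, 0.3, 0.1]))
# A pair with different predictions may rank the words very differently.
sim_diff = cosine(ranks([0.7, 0.2, 0.1]), ranks([0.1, 0.2, 0.7]))
```

A coherent method yields high similarity for the first kind of pair and lower similarity for the second, which is what Figure 5 averages over the two groups.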
## 5.4 Semantic Alignment

We explore here whether the generated explanations are aligned with human semantic expectations. To answer this question, we analyze cases where there is a negation in a sentence but the model's prediction does not reflect it, e.g., *A father is not a* **father**. For this experiment, we extract negative instances, i.e., instances that contain the token *not*, *no*, or *without*, from the *Negation* dataset. We label those where the model, GPT-2 or Mistral 7B, predicts *wrong* next tokens, i.e., tokens semantically misaligned with the negation. We report the average importance score of the negation tokens in each of the 15 labeled instances. Figure 6 shows the results for Mistral 7B: SyntaxShap and SyntaxShap-W rank the importance of negations as 3rd or greater in 80% of the cases. They give low importance to the negation tokens when the model is not able to capture them. LIME and the naive computation of Shapley values by NaiveShap assign a rank of 3rd or greater for 60% of the negations. Partition is the worst at reflecting the irrelevance of negations, ranking them as 1st or 2nd in 60% of the cases. SyntaxShap(-W) assigns lower importance ranks to input tokens whose semantics is not captured in the model's prediction compared to LIME, NaiveShap, and Partition.
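The rank check used here can be sketched as follows (with illustrative attribution scores, not actual model outputs):

```python
def negation_rank(tokens, scores, neg_words=("not", "no", "without")):
    """Importance rank (1 = most important) of the first negation token,
    given per-token attribution scores; None if no negation is present."""
    order = sorted(range(len(tokens)), key=lambda i: -scores[i])
    rank_of = {i: r for r, i in enumerate(order, start=1)}
    for i, t in enumerate(tokens):
        if t in neg_words:
            return rank_of[i]
    return None

tokens = ["a", "father", "is", "not", "a"]
scores = [0.05, 0.40, 0.10, 0.08, 0.37]   # hypothetical attributions
rank = negation_rank(tokens, scores)       # "not" is ranked 4th here
```

A high rank for "not", as in this example, is what a semantically aligned method should produce when the model's prediction ignores the negation.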
# Syntaxshap: Syntax-Aware Explainability Method For Text Generation ## 6 Discussion Addressing stochasticity The traditional faithfulness metrics like fidelity, AOPC (Samek et al., 2016; Nguyen, 2018) or log-odds (Shrikumar et al., 2017; Chen et al., 2019) scores take a deterministic approach to evaluate explanations computed on stochastic predictions. This paper evaluated autoregressive LMs that adopt top-k sampling to randomly select a token among the k tokens with the highest probability. To account for this stochasticity, we proposed additional evaluation metrics, div@K and acc@K, that consider not only the final prediction but the top-K predictions, balancing the model's probabilistic nature. Nevertheless, further methods that address the stochastic nature of the models should be designed in future research. Integrating linguistic knowledge To ensure that the explainability methods produce meaningful explanations that mimic autoregressive LM behavior, we need to go beyond the faithfulness type of evaluation and consider further explainability aspects. In this paper, we study explanations on other dimensions related to semantic interpretation and coherency of explanations. There is potential for more linguistically tailored evaluation methods in the future. The motivation is as follows. The next token prediction task can be seen as a multiclass classification with a large number of classes. The classes have diverse linguistic properties, i.e., tokens have different roles in the sentence, some being more content- and others function-related. We might want to consider these different roles when evaluating the quality of explanations. On the one hand, with controlled perturbations on the input sentences, we can evaluate the role of semantics and syntax on the next token prediction task. 
On the other hand, when computing the explanation fidelity, we might consider prediction changes from one category of tokens (e.g., function words) to another (e.g., content words), which would give us a more linguistic-aware explanation quality assessment. Considering humans When designing evaluation methods, we need to consider humans since, ideally, they should understand model behavior from the produced explanations. There is one main concern, though. As prior work has shown (Sevastjanova and El-Assady, 2022), LM explainability can suffer from human false rationalization of model behaviors. We typically expect the explanations to align with our mental models of language. However, LMs learn language differently from humans; thus, explanations can theoretically differ from our expectations. Hence, future work should design evaluation methods that clearly
{ "creation_datetime": "2024-03-04", "file_name": "2402.09259v1.md", "file_path": "paper_data/2402.09259v1.md", "file_size": 50868, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
66b9f8df-3ab9-4eda-8b37-2fb8b23c3852
# Syntaxshap: Syntax-Aware Explainability Method For Text Generation ## 6 Discussion ), which would give us a more linguistic-aware explanation quality assessment. Considering humans When designing evaluation methods, we need to consider humans since, ideally, they should understand model behavior from the produced explanations. There is one main concern, though. As prior work has shown (Sevastjanova and El-Assady, 2022), LM explainability can suffer from human false rationalization of model behaviors. We typically expect the explanations to align with our mental models of language. However, LMs learn language differently from humans; thus, explanations can theoretically differ from our expectations. Hence, future work should design evaluation methods that clearly show the importance of the words for the model and the reasons why this importance (potentially) does not align with human expectations.
{ "creation_datetime": "2024-03-04", "file_name": "2402.09259v1.md", "file_path": "paper_data/2402.09259v1.md", "file_size": 50868, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
fefe44cb-105a-40c9-ae46-d751a6df09cf
# Syntaxshap: Syntax-Aware Explainability Method For Text Generation ## 7 Conclusion We proposed SyntaxShap - a local, model-agnostic syntax-aware explainability method. Our method is specifically tailored for text data and meant to explain text generation tasks by autoregressive LMs, whose interpretability in that context has not yet been addressed. SyntaxShap is the first SHAP-based method to incorporate the syntax of input sentences by constraining the construction of word coalitions on the dependency trees. Our experimental results demonstrate that SyntaxShap and its weighted variant can improve explanations in many aspects: they generate more faithful, coherent, and semantically rich explanations than the standard model-agnostic explainability methods in NLP. This study addresses a pressing and significant issue regarding the explainability of autoregressive models, contributing to an ongoing dialogue in the research community.
{ "creation_datetime": "2024-03-04", "file_name": "2402.09259v1.md", "file_path": "paper_data/2402.09259v1.md", "file_size": 50868, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
ec8fbb49-14c3-4f61-8e75-dee459ac66b1
# Syntaxshap: Syntax-Aware Explainability Method For Text Generation ## 8 Limitations Long sentences and limited compute power One limitation of this paper is the limited computing power to explain long sentences for certain methods and models. For example, to run the naive implementation of Shapley values that has a complexity of $O(n2^{n})$ and a number n of tokens between 3 and 20, some sentences require up to 21 million computation steps! Given our Linux machine with two NVIDIA RTX A6000 GPUs and 4 GB RAM per CPU, computations for sentences with more than 10 tokens for the Mistral 7B model were intractable. We could only include NaiveShap results for sentences with less than 10 tokens (see Appendix D.1). As the length of the sentences increases, the computation complexity of SyntaxShap does, too, and we might reach the same limitation as NaiveShap. In addition, we limit our analysis to one input sentence because we work on one dependency tree at a time. However, our method can be scaled to text with multiple sentences or a paragraph by breaking it down into multiple dependency trees and running SyntaxShap in parallel. However, by doing this, we might lose sentence correlations. Incorrect dependency tree Our method heavily relies on the dependency tree, assuming it correctly captures the syntactic relationships between the words. However, the Python module spaCy sometimes generates arguable dependencies from the perspective of linguists, and its accuracy drops when implemented for languages other than English. Therefore, SyntaxShap is, for now, only meant to be used for English grammatically non-convoluted sentences to limit the uncertainty coming from the construction of the dependency tree itself. Tokenization and word segmentation An important limitation is related to the tokenization of the sentences. The tokenization might break down words into multiple tokens. However, SyntaxShap computation is based on a dependency tree in which nodes must be words.
Here, we have not addressed this problem. We choose to only include the ones where no word is split by the tokenizer (see Appendix C). For future work, we suggest modifying the tree parsing to allow for duplicated nodes. This token dependency tree will have tokens of the same word as separate nodes in this tree with similar roles.
{ "creation_datetime": "2024-03-04", "file_name": "2402.09259v1.md", "file_path": "paper_data/2402.09259v1.md", "file_size": 50868, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
94817850-31fa-462a-8284-90c284b69085
# Syntaxshap: Syntax-Aware Explainability Method For Text Generation ## 9 Ethics Statement The data and resources utilized in this study are openly available and widely used by numerous existing works. The datasets employed consist of factual statements devoid of subjective judgments or opinions. It is acknowledged that pre-trained LMs, such as GPT-2 and Mistral 7B, may inherently inherit biases as highlighted in previous research (Radford et al., 2019), potentially influencing the generated next token. For example, certain tokens like *beautiful* may tend to appear more frequently in contexts associated with female characteristics. While the primary objective of this study is to produce explanations that faithfully represent the model's predictions, it is recognized that these explanations may also carry inherent biases. It is imperative to acknowledge that the results generated by our approach may not always align with human mental models and could potentially be used in applications that have the potential to cause harm.
{ "creation_datetime": "2024-03-04", "file_name": "2402.09259v1.md", "file_path": "paper_data/2402.09259v1.md", "file_size": 50868, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
8418860e-7960-4022-9133-bc60aceabffd
# Syntaxshap: Syntax-Aware Explainability Method For Text Generation ## A Textual Data A.1 Text Generation Text generation tasks involve predicting the next word in a sequence, like in language modeling, which can be considered a simpler form of text generation. Other tasks may involve generating entire paragraphs or documents. Text generation can also be framed as a sequence-to-sequence (seq2seq) task that aims to take an input sequence and generate an output sequence for machine translation and question answering. Autoregressive models like GPT (Generative Pre-trained Transformer) generate text one word at a time in an autoregressive manner, conditioning each word on the previously generated words. In this paper, we focused on the next token generation task given one single sentence as input. We work with factual sentences from Generics and *ROCStories* datasets, which often expect a semantically rich final token to complete the clause. Multiple predictions are possible, but only a few are correct. Here is an example of a sentence in the Generics dataset: Studio executive is an employee of a film. The GPT-2 model predicts *studio* as the next token with the random seed 0. We can expect other predictions like company, *firm*, or corporation. But the number of possibilities is still very limited.
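To make the ranking of candidate next tokens concrete, here is a minimal, self-contained sketch of top-k selection from raw model scores; the vocabulary and logit values below are invented for illustration and are not actual GPT-2 outputs:

```python
import math

def softmax(logits):
    """Convert raw scores to a probability distribution over tokens."""
    m = max(logits.values())  # subtract max for numerical stability
    exps = {tok: math.exp(v - m) for tok, v in logits.items()}
    z = sum(exps.values())
    return {tok: e / z for tok, e in exps.items()}

def top_k(logits, k):
    """Return the k most probable next tokens with their probabilities."""
    probs = softmax(logits)
    return sorted(probs.items(), key=lambda kv: -kv[1])[:k]

# Invented logits a model might assign after
# "Studio executive is an employee of a film ..."
logits = {"studio": 3.1, "company": 2.7, "firm": 1.9, "banana": -2.0}
print(top_k(logits, 3))
```

Sampling with top-k then amounts to drawing randomly from this truncated distribution, which is the source of the stochasticity discussed earlier.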
{ "creation_datetime": "2024-03-04", "file_name": "2402.09259v1.md", "file_path": "paper_data/2402.09259v1.md", "file_size": 50868, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
91e83509-62fc-46fa-bcd2-03c15a516cce
# Syntaxshap: Syntax-Aware Explainability Method For Text Generation ## A.2 Dependency Parsing Dependency parsing is a natural language processing technique that involves analyzing the grammatical structure of a sentence to identify the relationships between words (de Marneffe et al.). It involves constructing a tree-like structure of dependencies, where each word is represented as a node, and the relationships between words are represented as edges. Each relationship has one head and a dependent that modifies the head, and it is labeled according to the nature of the dependency between the head and the dependent. These labels can be found at Universal Dependency Relations (de Marneffe et al.). Dependency parsing is a powerful technique for understanding the meaning and structure of language and is used in various applications, including text classification, sentiment analysis, and machine translation. We use the Python module spaCy (version 3.7.2) (Honnibal and Montani, 2017) to generate dependency trees on the input sentences. The number of tokens varies from 5 to 19 tokens for the *Generics* and *ROCStories* datasets, producing very diverse and complex parsing trees. This diversity enriches our analysis and strengthens our results. The source code for dependency parsing was extracted from https://stackoverflow.com/questions/7443330/how-do-i-do-dependency-parsing-in-nltk.
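As a sketch of how a parser's output can be turned into the per-token tree levels used by the method, the snippet below builds child lists and depths from a head-index array; the sentence and head indices are hypothetical stand-ins for what a spaCy-like parser would produce:

```python
from collections import defaultdict

def build_tree(tokens, heads):
    """Build child lists and per-token depth (tree level) from a
    head-index array, where heads[i] == i marks the root."""
    children = defaultdict(list)
    root = None
    for i, h in enumerate(heads):
        if h == i:
            root = i
        else:
            children[h].append(i)
    # breadth-first traversal to assign a level to each token
    levels = {root: 0}
    queue = [root]
    while queue:
        node = queue.pop(0)
        for c in children[node]:
            levels[c] = levels[node] + 1
            queue.append(c)
    return children, levels

# "She eats red apples": heads as a spaCy-like parse might assign them
tokens = ["She", "eats", "red", "apples"]
heads = [1, 1, 3, 1]   # eats is root; She, apples <- eats; red <- apples
children, levels = build_tree(tokens, heads)
print(levels)  # token index -> depth in the dependency tree
```

The level assignment is what groups tokens into the per-level coalitions discussed in the method.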
{ "creation_datetime": "2024-03-04", "file_name": "2402.09259v1.md", "file_path": "paper_data/2402.09259v1.md", "file_size": 50868, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
d2ddb489-ef64-4a44-8423-9a7714ad233e
# Syntaxshap: Syntax-Aware Explainability Method For Text Generation ## B Syntaxshap: Characteristics And Proofs B.1 Syntaxshap And The Shapley Axioms The four axioms satisfied by Shapley values, i.e., efficiency, additivity, nullity, and symmetry, do not generally provide any guarantee that the computed contribution value is suited to feature selection, and may, in some cases, imply the opposite (Fryer et al., 2021). We define here new axioms for SyntaxShap values, since two of the four Shapley axioms cannot be satisfied by tree-constrained values. Efficiency The evaluation function v(S) in SyntaxShap is the output probability for the predicted next token given the full input sentence, i.e., $v(S) = f_{\hat{y}}(S)$ where $\hat{y} = \arg\max(f(x))$. Because of the non-linearity of LMs, the SyntaxShap evaluation function is non-monotonic: it does not necessarily increase when more features are added. For this reason, SyntaxShap does *not* satisfy the *efficiency* axiom. This implies that the SyntaxShap values of each word do not sum up to the SyntaxShap value of the whole sentence. Symmetry SyntaxShap satisfies the axiom of symmetry at each level of the dependency tree. Any two features $x_i, x_j$ that are at the same level of the dependency tree, i.e., $l_i = l_j$, play equal roles and therefore have equal SyntaxShap values: $$\forall i,j\ \text{s.t.}\ l_{i}=l_{j},$$ $$[\forall(S\setminus\{x_{i},x_{j}\})\ v(S\cup x_{i})=v(S\cup x_{j})]$$ $$\implies\ \phi_{i}=\phi_{j}\tag{9}$$ Nullity If feature $x_i$ contributes nothing to each submodel it enters, then its SyntaxShap value is zero. $$[(\forall S)\ v(S\cup x_{i})=v(S)]\implies\phi_{i}=0\tag{10}$$ Additivity Given two models f and g, the SyntaxShap value of those models is a linear combination of the individual models' SyntaxShap values: $$\phi_{i}(f+g)=\phi_{i}(f)+\phi_{i}(g)\tag{11}$$
{ "creation_datetime": "2024-03-04", "file_name": "2402.09259v1.md", "file_path": "paper_data/2402.09259v1.md", "file_size": 50868, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
a4c3146c-40cd-479a-9743-b03306d18d43
# Syntaxshap: Syntax-Aware Explainability Method For Text Generation ## B Syntaxshap: Characteristics And Proofs B.1 Syntaxshap And The Shapley Axioms $$\implies\ \phi_{i}=\phi_{j}\tag{9}$$ Nullity If feature $x_i$ contributes nothing to each submodel it enters, then its SyntaxShap value is zero. $$[(\forall S)\ v(S\cup x_{i})=v(S)]\implies\phi_{i}=0\tag{10}$$ Additivity Given two models f and g, the SyntaxShap value of those models is a linear combination of the individual models' SyntaxShap values: $$\phi_{i}(f+g)=\phi_{i}(f)+\phi_{i}(g)\tag{11}$$
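As an illustrative numeric check of the nullity and symmetry axioms, exact Shapley values can be computed by enumeration on a toy cooperative game; this is only a sketch, and the value function here is invented, not the paper's next-token evaluation function:

```python
from itertools import combinations
from math import factorial

def shapley(players, v):
    """Exact Shapley values for a value function v over frozensets."""
    n = len(players)
    phi = {}
    for p in players:
        others = [q for q in players if q != p]
        total = 0.0
        for r in range(len(others) + 1):
            for S in combinations(others, r):
                S = frozenset(S)
                # standard Shapley weight |S|!(n-|S|-1)!/n!
                w = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
                total += w * (v(S | {p}) - v(S))
        phi[p] = total
    return phi

# Toy game: players a and b contribute 1 each, c contributes nothing.
def v(S):
    return sum(1 for p in S if p in ("a", "b"))

phi = shapley(["a", "b", "c"], v)
print(phi)  # nullity: phi['c'] == 0; symmetry: phi['a'] == phi['b']
```

On this game the null player c receives value zero and the symmetric players a and b receive equal values, matching axioms (9) and (10).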
{ "creation_datetime": "2024-03-04", "file_name": "2402.09259v1.md", "file_path": "paper_data/2402.09259v1.md", "file_size": 50868, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
22c51f58-d1e0-47f2-bd88-a53dc7617863
# Syntaxshap: Syntax-Aware Explainability Method For Text Generation ## B.2 Computational Complexity One advantage of the SyntaxShap algorithm is its faster computation time compared to the naive Shapley value computation. We estimate the complexity of each algorithm by approximating the total number of computation steps, i.e., formed coalitions and updated values, for the traditional naive SHAP computation and our method. NaiveShap The Shapley value of feature x requires the $2^{n-1}$ coalitions of all features excluding x. As we need to update n features, the total number of updates is $n \cdot 2^{n-1}$. The computation complexity is, therefore, in $O(n2^{n})$. SyntaxShap The SyntaxShap value of feature x at level l requires $N_l$ updates. Considering all the features in the input, the total number of computations is $\sum_{l=1}^{L} n_{l} \cdot N_{l}$. To approximate this number, we assume the dependency tree to be balanced and pose $n_{l} = n/L$. In this case, $N_l$ can be re-written as: $$N_{l}=\sum_{p=0}^{l-1}2^{n/L}+2^{n/L-1}-l=l(2^{n/L}-1)+2^{n/L-1}$$ The total number of computations can now be approximated to: $$\frac{n}{L}\sum_{l=1}^{L}N_{l}=\frac{n}{L}\sum_{l=1}^{L}\left(l(2^{n/L}-1)+2^{n/L-1}\right)=\frac{n}{L}\left(\frac{L(L+1)}{2}(2^{n/L}-1)+L2^{n/L-1}\right)=\frac{n(L+1)}{2}(2^{n/L}-1)+n2^{n/L-1}$$ The approximation of the computation complexity in the case of a balanced tree is $O(nL2^{n/L})$.
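The closed forms above can be compared directly; the helper names below are ours, and the second function is the balanced-tree approximation just derived:

```python
def naive_shap_steps(n):
    """NaiveShap: n features, each needing 2^(n-1) coalitions."""
    return n * 2 ** (n - 1)

def syntaxshap_steps(n, L):
    """Balanced-tree approximation:
    n(L+1)/2 * (2^(n/L) - 1) + n * 2^(n/L - 1)."""
    return n * (L + 1) / 2 * (2 ** (n / L) - 1) + n * 2 ** (n / L - 1)

# step counts grow exponentially for NaiveShap but much slower
# for the tree-constrained computation (here with L = 4 levels)
for n in (8, 12, 16):
    print(n, naive_shap_steps(n), round(syntaxshap_steps(n, L=4)))
```

For n = 16 and L = 4 this gives 524,288 naive steps versus 728 tree-constrained steps, illustrating the gap between $O(n2^{n})$ and $O(nL2^{n/L})$.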
{ "creation_datetime": "2024-03-04", "file_name": "2402.09259v1.md", "file_path": "paper_data/2402.09259v1.md", "file_size": 50868, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
d3c2fda7-2561-4b03-8c2e-67acb0d9c7bf
# Syntaxshap: Syntax-Aware Explainability Method For Text Generation ## C Data Preprocessing SyntaxShap relies on the construction of dependency trees that capture the syntactic dependencies in the sentences. Entities in dependency trees are words as defined in the English dictionary. However, language tokenizers sometimes split words, so that the tokenizer vocabulary does not necessarily contain the English one. To account for the disagreement between tokenization and parsing, we filter out the sentences that contain words that do not belong to the tokenizer's vocabulary and might be split into multiple tokens by the tokenizer. Table 3 displays the statistics for the three datasets *Negation*, *Generics*, and *ROCStories*, with the initial number of sentences and the explained sentences after filtering. Our filtering strategy consists of keeping only sentences that do not contain the punctuation characters !"#$%&'()*+,-./:;<=>?@[\]^_`{}~ given by the Python module string, and where the tokens, excluding prefix and suffix tokens, correspond to the words.

|                | Negation | Generics | ROCStories |
|----------------|----------|----------|------------|
| Initial size   | 534      | 5777     | 2275       |
| GPT-2 filter   | 366      | 1434     | 1318       |
| Mistral filter | 332      | 858      | 1046       |

Figure 7 displays the length distribution of sentences in each dataset after filtering. The *Negation* dataset contains short sentences with less than 6 tokens. It is used in our study for experiments on coherency and semantic alignment of explanations. *Generics* and *ROCStories* are more complex and realistic. The majority of their input sentences have between 5 and 15 tokens, with a few exceptions of longer sentences. They also have a greater diversity of words and syntactic complexity as identified by the dependency distance and token diversity in Table 1.
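The filtering rule can be sketched as follows; `tokenize` is a hypothetical stand-in for the model's tokenizer, and the toy tokenizer at the end only illustrates word splitting:

```python
import string

PUNCT = set(string.punctuation)

def keep_sentence(sentence, tokenize):
    """Filter rule sketched above: drop sentences containing punctuation
    or any word the tokenizer splits into several tokens."""
    if any(ch in PUNCT for ch in sentence):
        return False
    return all(len(tokenize(w)) == 1 for w in sentence.split())

# toy tokenizer: splits any word longer than 8 characters
toy_tok = lambda w: [w] if len(w) <= 8 else [w[:8], w[8:]]
print(keep_sentence("Dogs chase cats", toy_tok))            # kept
print(keep_sentence("Extraordinarily long words", toy_tok)) # dropped
```

A real implementation would call the model's own tokenizer and compare its output, excluding any prefix and suffix tokens, against the whitespace-split words.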
{ "creation_datetime": "2024-03-04", "file_name": "2402.09259v1.md", "file_path": "paper_data/2402.09259v1.md", "file_size": 50868, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
3fbcd014-9e91-46dd-9815-2009b63685d5
# Syntaxshap: Syntax-Aware Explainability Method For Text Generation ## D Additional Results D.1 Mistral 7B On The Full Datasets To compare the NaiveShap method with SyntaxShap on Mistral 7B, we have to filter out long sentences so that the computation is tractable with our compute power. Table 4 presents the new statistics of the *Generics* and *ROCStories* datasets. We keep approximately 60% of the input sentences for both datasets. Figure 8 displays the performance of SyntaxShap and SyntaxShap-W and the baselines Random, LIME, NaiveShap and Partition. We evaluate them on the three faithfulness metrics as in section 5.2. NaiveShap shows similar performance on the div@10 and acc@10 scores as our methods SyntaxShap and SyntaxShap-W. We only notice a score difference for the fidelity metric, where NaiveShap generates less faithful explanations for the *ROCStories* dataset when considering only the top-1 model prediction. These observations consolidate the conclusions drawn in section 5.2 and presented in ??, namely that both our method and NaiveShap produce better explanations on the faithfulness dimension than the other three methods.

|                   | Generics | ROCStories |
|-------------------|----------|------------|
| Without NaiveShap | 858      | 1046       |
| With NaiveShap    | 512      | 608        |
{ "creation_datetime": "2024-03-04", "file_name": "2402.09259v1.md", "file_path": "paper_data/2402.09259v1.md", "file_size": 50868, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
3dd8c9c2-fed0-4385-93c1-5fdbac3aacb4
# Syntaxshap: Syntax-Aware Explainability Method For Text Generation ## D.2 Number Of Tokens And Faithfulness This section analyzes the relationship between the number of tokens in the input sentences and the performance of the explainability algorithms. We vary the number of tokens from 5 to 15 tokens to have at least 50 sentences of the same length for both *Generics* and *ROCStories* and have a decent number of inputs to average upon (see the number of tokens distribution in Figure 7). Figure 9 and 10 show that the performance of all methods is robust to the increase in the number of tokens. SyntaxShap can be applied to a diverse range of sentence lengths. Note that for Mistral 7B in Figure 10 the results of NaiveShap are limited to sentences with less than 10 tokens because of NaiveShap intractable computations with our compute power (see section 8).
{ "creation_datetime": "2024-03-04", "file_name": "2402.09259v1.md", "file_path": "paper_data/2402.09259v1.md", "file_size": 50868, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
ab9bedd0-d8bb-4f91-a7d2-cb2a191b2cf9
Nirjhar Das, Souradip Chakraborty, Aldo Pacchiano, Sayak Ray Chowdhury
{ "creation_datetime": "2024-03-04", "file_name": "2402.10500v1.md", "file_path": "paper_data/2402.10500v1.md", "file_size": 65755, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
1b0b6486-4105-4b45-ba8d-d1a22fab7573
## Abstract Reinforcement Learning from Human Feedback (RLHF) is pivotal in aligning Large Language Models (LLMs) with human preferences. While these aligned generative models have demonstrated impressive capabilities across various tasks, the dependence on high-quality human preference data poses a costly bottleneck in the practical implementation of RLHF. Hence, better and adaptive strategies for data collection are needed. To this end, we frame RLHF as a contextual preference bandit problem with prompts as contexts and show that the naive way of collecting preference data by choosing prompts uniformly at random leads to a policy that suffers an Ω(1) suboptimality gap in rewards. Then we propose Active Preference Optimization (APO), an algorithm that actively selects prompts to collect preference data. Under the Bradley-Terry-Luce (BTL) preference model, APO achieves sample efficiency without compromising on policy performance. We show that given a sample budget of T, the suboptimality gap of a policy learned via APO scales as $O(1/\sqrt{T})$. Next, we propose a compute-efficient batch version of APO with a minor modification and evaluate its performance in practice. Experimental evaluations on a human preference dataset validate APO's efficacy as a sample-efficient and practical solution to data collection for RLHF, facilitating alignment of LLMs with human preferences in a cost-effective and scalable manner.
{ "creation_datetime": "2024-03-04", "file_name": "2402.10500v1.md", "file_path": "paper_data/2402.10500v1.md", "file_size": 65755, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
be816c23-37f2-4b12-9d35-ebc8dce15e52
## 1. Introduction Reinforcement Learning from Human Feedback (RLHF) has proven highly effective in aligning Large Language Models (LLMs) with human preferences (Christiano et al., 2017; Ouyang et al., 2022; Glaese et al., 2022). This approach involves collecting extensive data, each sample comprising a prompt, a pair of generations, and a preference indicating which generation is better. Then a reward model is trained to classify preferred generations, and subsequently, a language model policy using Proximal Policy Optimization (PPO) (Schulman et al., 2017) is trained to generate high-reward actions while minimizing divergence from a reference policy. Given the practical success, recent theoretical advances have been made in learning reward functions from pairwise comparisons, studied as contextual dueling bandits (Saha, 2021; Saha & Krishnamurthy, 2022; Dudík et al., 2015) and preference-based RL (Pacchiano et al., 2021; Chen et al., 2022; Zhu et al., 2023). In these settings, the learner doesn't control the contexts or states and the aim is to minimize cumulative regret, with optimal rates achieved in both settings. However, in the case of aligning LLMs, the learner indeed has control over both the contexts and actions, i.e., the prompts and generations for which preference data is to be collected. Currently, most practical implementations of RLHF pick contexts uniformly at random from a pool of contexts, followed by two actions for that context chosen based on some policy, to collect preference data on. The success of RLHF hinges on the balance between the quality and quantity of human preference data (Stiennon et al., 2020; Ouyang et al., 2022). Excessive low-quality data can degrade performance, while scarce high-quality data may not enhance it. However, uniform prompt sampling as a simple approach has proven effective for LLMs so far.
Restricting ourselves to the contextual preference bandit setting, one is then bound to ask whether uniformly sampling contexts followed by observing the preference between two chosen actions is a good enough strategy. Do we really need sophisticated and potentially more involved methods to deliver better model alignment? To this end, we design a contextual preference bandit instance on which we show that a learner who only samples contexts uniformly suffers a constant suboptimality gap (gap between the reward of the best action and that of the policy) with high probability. To the best of our knowledge, our work is the first to
{ "creation_datetime": "2024-03-04", "file_name": "2402.10500v1.md", "file_path": "paper_data/2402.10500v1.md", "file_size": 65755, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
9ab0f8ce-558f-4c76-b8df-aa9da8c4905d
## 1. Introduction a simple approach has proven effective for LLMs so far. Restricting ourselves to the contextual preference bandit setting, one is then bound to ask whether uniformly sampling contexts followed by observing the preference between two chosen actions is a good enough strategy. Do we really need sophisticated and potentially more involved methods to deliver better model alignment? To this end, we design a contextual preference bandit instance on which we show that a learner who only samples contexts uniformly suffers a constant suboptimality gap (gap between the reward of the best action and that of the policy) with high probability. To the best of our knowledge, our work is the first to provide provable lower bounds for learners that sample contexts uniformly at random to collect preference data. This immediately necessitates algorithms that can make choices of contexts (and two actions) adaptively during the reward learning phase. This scenario falls under Active Learning, where the learner not only performs tasks but also decides which tasks to perform during the course of training. The goal is to optimize performance at test time when the learner has no control on the tasks' arrivals. In the contextual bandit setting, Char et al. (2019) actively query contexts and actions for optimal policy learning. Li et al. (2022) explore optimal policy identification in the RL setting when the agent can choose the state and action to query via a simulator. However, we deal with a contextual *preference* bandit setting which cannot be reduced to a contextual bandit by virtue of the fact that comparisons between two actions are valid only if they come from the same context. Moreover, the goal of the learner is now to actively select contexts as well as a pair of actions for that context to collect preference data on this triplet. 
The learner should be sample-efficient, so that as more data is collected, the suboptimality gap of the policy learnt from the collected data goes down. A recent work (Mehta et al., 2023) proposes to tackle this problem by measuring performance of the learned policy against the *Borda* winner policy, an action that wins in expectation over a randomly chosen action. They resort to sampling one action randomly, while adaptively sampling the other. Although they guarantee an $O(1/\sqrt{T})$ suboptimality gap after T rounds for the trained policy, choosing a random action at every round can result in sample inefficiency in practical scenarios. Furthermore, Mehta et al. (2023) assume both
{ "creation_datetime": "2024-03-04", "file_name": "2402.10500v1.md", "file_path": "paper_data/2402.10500v1.md", "file_size": 65755, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
454e7206-403f-4b78-ad63-eaa0375ae146
## 1. Introduction go down. A recent work (Mehta et al., 2023) proposes to tackle this problem by measuring performance of the learned policy against the *Borda* winner policy, an action that wins in expectation over a randomly chosen action. They resort to sampling one action randomly, while adaptively sampling the other. Although they guarantee an $O(1/\sqrt{T})$ suboptimality gap after T rounds for the trained policy, choosing a random action at every round can result in sample inefficiency in practical scenarios. Furthermore, Mehta et al. (2023) assume both the reward function and the *Borda* score (average probability of an action being a *Borda* winner) are linear functions of a common feature map. This doesn't hold in general for the popular Bradley-Terry-Luce (BTL) preference model (Bradley & Terry, 1952; Luce, 2012). Their guarantee also has a linear dependence on the Lipschitz constant of rewards with respect to the *Borda* score, which could be exponential for the BTL model. We now list the main contributions of our work: 1. Lower Bound: We show that the naive way of collecting preference data by choosing contexts uniformly at random can lead to wastage of samples as the learned policy can suffer Ω(1) suboptimality gap with high probability. To the best of our knowledge, this is the first provable negative result for uniformly sampling prompts. 2. Adaptive Algorithm: We propose Active Preference Optimization (APO), an active contextual preference bandit algorithm that adaptively selects a context and two actions every time to collect preference data. Under the BTL preference model, the suboptimality gap of APO scales as $O(1/\sqrt{T})$ where T is the given sample budget. Moreover, in contrast to Mehta et al. (2023), APO chooses both the actions adaptively given a context, thus leading to more meaningful comparisons in practical settings. 3. Improved Guarantees: We improve the linear dependence of the suboptimality guarantee of Mehta et al.
(2023) on κ, to √κ while removing their restrictive assumptions. Here, κ is a problem-dependent non-linearity factor which can be exponential in problem parameters. Additionally, we present an analogue of APO under
{ "creation_datetime": "2024-03-04", "file_name": "2402.10500v1.md", "file_path": "paper_data/2402.10500v1.md", "file_size": 65755, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
001ed6e9-2b9c-411c-8cb0-003997df93e7
## 1. Introduction T) where T is the given sample budget. Moreover, in contrast to Mehta et al. (2023), APO chooses both the actions adaptively given a context, thus leading to more meaningful comparisons in practical settings. 3. Improved Guarantees: We improve the linear dependence of the suboptimality guarantee of Mehta et al. (2023) on κ, to √κ while removing their restrictive assumptions. Here, κ is a problem-dependent non-linearity factor which can be exponential in problem parameters. Additionally, we present an analogue of APO under general function approximation that attains a similar guarantee while generalizing to preference models other than BTL. 4. Experiments: We propose a batch algorithm APO-RLHF with a slight modification of the original algorithm so that it is also computationally more efficient. We experiment with GPT-2 (Radford et al., 2019) on IMDB sentiment dataset (Maas et al., 2011) and demonstrate significant improvement over uniform sampling in both reward learning step and alignment step in the RLHF pipeline. We believe that our work contributes towards a sample-efficient and practical solution to preference data collection for RLHF.
{ "creation_datetime": "2024-03-04", "file_name": "2402.10500v1.md", "file_path": "paper_data/2402.10500v1.md", "file_size": 65755, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
6b60a546-e0c3-42a9-be0a-0bb92a81d30e
## 2. Background And Problem Setup We have a set of contexts $\mathcal{X}$ and a set of possible actions per context $\mathcal{A}$. To learn using preference feedback, the agent selects a tuple $(x, a, a')$ to present to a human labeller, who then reveals a binary preference y which takes value 1 if a wins over a′ and 0 otherwise. We assume that y is sampled from a Bernoulli distribution conditioned on $(x, a, a')$, i.e., $$\mathbb{P}_{\theta^{*}}[y=1|x,a,a']=\frac{\exp(r_{\theta^{*}}(x,a))}{\exp(r_{\theta^{*}}(x,a))+\exp(r_{\theta^{*}}(x,a'))}\,.$$ Here $r_{\theta^{*}}$ is a latent reward model parameterized by an unknown parameter $\theta^{*}$. This reward model is often called the Bradley-Terry-Luce (BTL) model (Bradley & Terry, 1952; Luce, 2012). The goal of the agent is to first learn the reward model $r_{\theta^{*}}$ over T rounds of sequential interaction with the labeller and then employ this reward model to learn a policy $\pi : \mathcal{X} \to \mathcal{A}$ (i.e., a rule for selecting an action a given a context x), which will eventually fetch high latent rewards $r_{\theta^{*}}(x, a)$. The agent collects a preference dataset $\mathcal{D} = \{(x_s, a_s, a'_s, y_s)\}_{s=1}^{T}$ of T samples in the process. In this work, we consider a linear reward model $r_{\theta^{*}}(x, a) = \phi(x, a)^{\top}\theta^{*}$, where $\phi : \mathcal{X} \times \mathcal{A} \to \mathbb{R}^{d}$ is some known and fixed feature map. (In Section 6, we generalize this setup to general function approximation, removing the need for such explicit reward models.) For instance, such a $\phi$ can be constructed by removing the last layer of a pre-trained language model, and in that case, $\theta^{*}$ corresponds to the weights of the last layer. With this model, one can equivalently write the probability of sampling $y_s = 1$ given $(x_s, a_s, a'_s)$ as $$\mathbb{P}_{\theta^{*}}[y_{s}=1|x_{s},a_{s},a'_{s}]=\mu(\phi(x_{s},a_{s})^{\top}\theta^{*}-\phi(x_{s},a'_{s})^{\top}\theta^{*})\,,$$
{ "creation_datetime": "2024-03-04", "file_name": "2402.10500v1.md", "file_path": "paper_data/2402.10500v1.md", "file_size": 65755, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
0b97542a-f75d-4b7f-b32d-ed61f2cd3dce
## 2. Background And Problem Setup (In section 6 we generalize this set-up to general function approximation, removing the need for such explicit reward models.) For instance, such a ϕ can be constructed by removing the last layer of a pre-trained language model, in which case θ∗ corresponds to the weights of the last layer. With this model, one can equivalently write the probability of sampling ys = 1 given (xs, as, a′s) as $$\mathbb{P}_{\theta^{*}}[y_{s}=1\mid x_{s},a_{s},a^{\prime}_{s}]=\mu(\phi(x_{s},a_{s})^{\top}\theta^{*}-\phi(x_{s},a^{\prime}_{s})^{\top}\theta^{*})\ ,$$ where $\mu(w)=\frac{1}{1+e^{-w}}$ is the sigmoid function. We let zs = ϕ(xs, as) − ϕ(xs, a′s) denote the differential feature of actions as and a′s at state xs. This lets us denote, for any θ ∈ Rd, the predicted probability of a label ys = 1 given (xs, as, a′s) as (we omit dependence on θ for brevity) $$\mathbb{P}_{\theta}\left[y_{s}\!=\!1\mid x_{s},a_{s},a_{s}^{\prime}\right]\!=\!\mu(z_{s}^{\top}\theta)\ .$$ Note that this problem cannot be reduced to a logistic bandit instance (Abeille et al., 2021; Faury et al., 2022), as comparisons are valid only between actions of the same context, i.e., actions a and a′ can only be compared via ϕ(x, a) and ϕ(x, a′) for the same x. We make the following assumption, which is standard in the preference-based learning literature (Shah et al., 2015; Zhu et al., 2023). Assumption 2.1 (Boundedness). (a) θ∗ lies in the set Θ = {θ ∈ Rd | ⟨1, θ⟩ = 0, ∥θ∥ ≤ S}.
{ "creation_datetime": "2024-03-04", "file_name": "2402.10500v1.md", "file_path": "paper_data/2402.10500v1.md", "file_size": 65755, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
0ff98b54-3333-44ef-8450-09db0fcb0aee
## 2. Background And Problem Setup Note that this problem cannot be reduced to a logistic bandit instance (Abeille et al., 2021; Faury et al., 2022), as comparisons are valid only between actions of the same context, i.e., actions a and a′ can only be compared via ϕ(x, a) and ϕ(x, a′) for the same x. We make the following assumption, which is standard in the preference-based learning literature (Shah et al., 2015; Zhu et al., 2023). Assumption 2.1 (Boundedness). (a) θ∗ lies in the set Θ = {θ ∈ Rd | ⟨1, θ⟩ = 0, ∥θ∥ ≤ S}. The condition ⟨1, θ⟩ = 0 ensures identifiability of θ∗. (b) Features are bounded, i.e., ∥ϕ(x, a)∥ ≤ 1, ∀ (x, a) ∈ X × A. Now, we define a quantity that captures the learning complexity under the logistic (BTL) preference model: $$\kappa=\max_{x\in\mathcal{X},\,a,a^{\prime}\in\mathcal{A}}\ \max_{\theta\in\Theta}\frac{1}{\dot{\mu}(\phi(x,a)^{\top}\theta-\phi(x,a^{\prime})^{\top}\theta)}\ .$$ κ is a problem-specific constant and it specifies the difficulty of learning via the worst-case non-linearity in the preference feedback. It can be exponential in the parameter norm S, and much of the logistic bandit literature (Abeille et al., 2021; Faury et al., 2022; Lee et al., 2023) deals with removing the linear dependence of regret on κ. Now, we present the process of parameter estimation in the BTL model via maximum likelihood estimation. Maximum Likelihood Estimation. At time t, given the preference dataset $\{(x_{s},a_{s},a^{\prime}_{s},y_{s})\}_{s=1}^{t-1}$, the maximum likelihood estimate (MLE) of θ∗ constrained to the set Θ is $$\hat{\theta}_{t}=\operatorname*{argmin}_{\theta\in\Theta}\ \mathcal{L}_{t}(\theta)\ .\tag{1}$$
{ "creation_datetime": "2024-03-04", "file_name": "2402.10500v1.md", "file_path": "paper_data/2402.10500v1.md", "file_size": 65755, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
612e1d73-9dd7-4dd6-a8fc-7619f7e5b5f6
## 2. Background And Problem Setup Much of the logistic bandit literature (Abeille et al., 2021; Faury et al., 2022; Lee et al., 2023) deals with removing the linear dependence of regret on κ. Now, we present the process of parameter estimation in the BTL model via maximum likelihood estimation. Maximum Likelihood Estimation. At time t, given the preference dataset $\{(x_{s},a_{s},a^{\prime}_{s},y_{s})\}_{s=1}^{t-1}$, the maximum likelihood estimate (MLE) of θ∗ constrained to the set Θ is $$\hat{\theta}_{t}=\operatorname*{argmin}_{\theta\in\Theta}\ \mathcal{L}_{t}(\theta)\ .\tag{1}$$ Here the log-loss Lt(θ) is computed using observed preferences ys and predicted probabilities µ(z⊤s θ) as $$\mathcal{L}_{t}(\theta)=-\sum_{s=1}^{t-1}y_{s}\log(\mu(z_{s}^{\top}\theta))+(1-y_{s})\log(1-\mu(z_{s}^{\top}\theta)).\tag{2}$$ The above optimization involves both a convex objective and a convex constraint, and hence can be solved using standard algorithms (Hazan et al., 2016; Bubeck et al., 2015). Performance measure. Our goal is to learn a policy over the collected data D which has high rewards or, equivalently, low suboptimality. Formally, the suboptimality gap of a learned policy πT after collecting T samples by an algorithm of choice is defined as follows: $$R(T)=\max_{x\in\mathcal{X}}\ \max_{a\in\mathcal{A}}\ r^{*}(x,a)-r^{*}(x,\pi_{T}(x)).\tag{3}$$ Here, our policy competes with the *Condorcet* winner of a context—an action that is better than all other actions. The suboptimality gap is the worst possible difference in latent rewards over the set of contexts. Mehta et al. (2023) use a similar measure of performance, but they compete against the *Borda* winner, which is weaker than ours.¹
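The constrained MLE (1)–(2) is a small convex program; a minimal numerical sketch using SciPy's constrained optimizer (the squared-norm form of the norm constraint and all function names here are our own illustration, not the paper's implementation):

```python
import numpy as np
from scipy.optimize import minimize

def log_loss(theta, Z, y):
    """Log-loss (2) on feature differences Z (n, d) and labels y (n,)."""
    w = Z @ theta
    # -[y log mu(w) + (1-y) log(1-mu(w))], written stably via logaddexp
    return float(np.sum(y * np.logaddexp(0.0, -w) + (1 - y) * np.logaddexp(0.0, w)))

def btl_mle(Z, y, S=1.0):
    """Constrained MLE (1) over Theta = {<1, theta> = 0, ||theta|| <= S}."""
    d = Z.shape[1]
    cons = [
        {"type": "eq", "fun": lambda th: np.sum(th)},        # <1, theta> = 0
        {"type": "ineq", "fun": lambda th: S**2 - th @ th},  # ||theta||^2 <= S^2
    ]
    return minimize(log_loss, np.zeros(d), args=(Z, y), constraints=cons).x
```

Any off-the-shelf projected or interior-point convex solver would serve equally well, since both the objective and the feasible set are convex.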
{ "creation_datetime": "2024-03-04", "file_name": "2402.10500v1.md", "file_path": "paper_data/2402.10500v1.md", "file_size": 65755, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
efbe0916-b48a-47fb-b577-ca33ff25fc66
## 2. Background And Problem Setup $$R(T)=\max_{x\in\mathcal{X}}\ \max_{a\in\mathcal{A}}\ r^{*}(x,a)-r^{*}(x,\pi_{T}(x)).\tag{3}$$ Here, our policy competes with the *Condorcet* winner of a context—an action that is better than all other actions. The suboptimality gap is the worst possible difference in latent rewards over the set of contexts. Mehta et al. (2023) use a similar measure of performance, but they compete against the *Borda* winner, which is weaker than ours.¹ Notations. Throughout the paper, we will denote universal constants by C and express our guarantees in terms of C. Further, for a vector a and a matrix M, we will use ∥a∥M to denote $\sqrt{a^{\top}Ma}$. For two matrices A and B, B ≼ A or A ≽ B will denote that A − B is a positive semi-definite matrix. Lastly, we will use $\tilde{O}(\cdot)$ to denote order notation while hiding log factors.
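Given known latent rewards, the measure (3) is a worst-case max over contexts; a small sketch (array shapes and names are our own, for illustration only):

```python
import numpy as np

def suboptimality_gap(rewards, policy):
    """Worst-case gap (3): max_x [ max_a r*(x, a) - r*(x, pi_T(x)) ].

    rewards : (n_ctx, K) latent rewards r*(x, a)
    policy  : (n_ctx,) action indices chosen by pi_T
    """
    chosen = rewards[np.arange(rewards.shape[0]), policy]
    return float(np.max(rewards.max(axis=1) - chosen))
```

A policy that picks the Condorcet winner in every context attains a gap of zero; the Uniform Learner of the next section can be evaluated against this measure in simulation.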
{ "creation_datetime": "2024-03-04", "file_name": "2402.10500v1.md", "file_path": "paper_data/2402.10500v1.md", "file_size": 65755, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
4732020c-1abb-47e9-be37-dd618f3e541f
## 3. Is Uniform Sampling Good Enough? In this section we illustrate the pitfall of a learner who samples contexts uniformly for learning. We first characterize such a learner in the contextual preference bandit setting. Definition 3.1 (Uniform Learner). Suppose an algorithm Alg first samples contexts uniformly at random from a given set of contexts X and then picks two actions of its choice from the action set A. The algorithm then queries the true preference model parameterized by θ∗ to observe the stochastic binary preference outcome. After T rounds, the algorithm solves an MLE over the collected data. With the ML estimate, the algorithm then learns a greedy policy. We call such an algorithm Alg a Uniform Learner. Now we state the following theorem, which shows that a Uniform Learner can suffer a constant suboptimality gap that does not go down with the number of samples. Theorem 3.2. There exists a contextual preference bandit instance (X, A, θ∗) and a choice of T for which the policy learnt by a Uniform Learner Alg suffers an Ω(1) suboptimality gap with high probability. Proof. Let |X| = N and assume T ≪ N. Indeed, this is the interesting case because otherwise, if the budget of samples T > N, then one can just collect data for every context. We divide X into two disjoint subsets: a good set Xg and a bad set Xb, where w.l.o.g. we assume |Xb| = 1. Hereon, we will denote the bad context by b. Let A = {a, a′} for all contexts, and let a be the action with higher reward. Let ϕ : X × A → R2 be a feature map and zx = ϕ(x, a) − ϕ(x, a′) be the feature difference vector for the context x.¹ We now specify a problem instance: $$\forall\;x\in{\cal X}_{g}:z_{x}=\left[1\quad0\right]^{\top};\;z_{b}=\left[-\frac{1}{2}\quad\frac{\sqrt{3}}{2}\right]^{\top},\quad\theta^{*}=\alpha\left[\frac{1}{2}\quad\frac{\sqrt{3}}{2}\right]^{\top},\;\alpha>0\;.$$ ¹Indeed, a Condorcet winner is also the Borda winner but not conversely.
{ "creation_datetime": "2024-03-04", "file_name": "2402.10500v1.md", "file_path": "paper_data/2402.10500v1.md", "file_size": 65755, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
58817a19-aa4e-4218-a674-c54b9c684a65
## 3. Is Uniform Sampling Good Enough? Let ϕ : X × A → R2 be a feature map and zx = ϕ(x, a) − ϕ(x, a′) be the feature difference vector for the context x.¹ We now specify a problem instance: $$\forall\;x\in{\cal X}_{g}:z_{x}=\left[1\quad0\right]^{\top};\;z_{b}=\left[-\frac{1}{2}\quad\frac{\sqrt{3}}{2}\right]^{\top},\quad\theta^{*}=\alpha\left[\frac{1}{2}\quad\frac{\sqrt{3}}{2}\right]^{\top},\;\alpha>0\;.$$ We will specify α later, but note that $\|\theta^{*}\|_{2}=\alpha$. From this construction (see fig. 1), it is clear that for both the *good* and *bad* sets, $z_{x}^{\top}\theta^{*}=\alpha/2>0$, which implies that action a indeed has higher reward than a′. Let E1 be the event that the collection of T contexts constructed by sampling uniformly at random from X contains contexts only from Xg. Let E2 be the event that all observed comparison feedbacks y1, . . . , yT are equal to 1. Under uniform sampling, for a random context X, P[X ∈ Xg] = 1 − 1/N. Since the sequence {xi}T i=1 contains i.i.d. samples, and similarly for {yi}T i=1, it can be easily calculated that $$\mathbb{P}[\mathcal{E}_{1}]=\left(1-\frac{1}{N}\right)^{T},\quad\mathbb{P}[\mathcal{E}_{2}\mid\mathcal{E}_{1}]=\prod_{i=1}^{T}\mu\left(\frac{\alpha}{2}\right)=\mu\left(\frac{\alpha}{2}\right)^{T}.$$ ¹Indeed, a Condorcet winner is also the Borda winner but not conversely.
{ "creation_datetime": "2024-03-04", "file_name": "2402.10500v1.md", "file_path": "paper_data/2402.10500v1.md", "file_size": 65755, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
aa086b1f-6d39-41dc-95d7-34ddaa52e640
## 3. Is Uniform Sampling Good Enough? Under uniform sampling, for a random context X, P[X ∈ Xg] = 1 − 1/N. Since the sequence {xi}T i=1 contains i.i.d. samples, and similarly for {yi}T i=1, it can be easily calculated that $$\mathbb{P}[\mathcal{E}_{1}]=\left(1-\frac{1}{N}\right)^{T},\quad\mathbb{P}[\mathcal{E}_{2}\mid\mathcal{E}_{1}]=\prod_{i=1}^{T}\mu\left(\frac{\alpha}{2}\right)=\mu\left(\frac{\alpha}{2}\right)^{T}.$$ Under the event E1 ∩ E2, the maximum likelihood estimate ˆθ constrained to the same norm as θ∗ is given by $$\hat{\theta}=\operatorname*{argmin}_{\theta\in\mathbb{R}^{2}:\|\theta\|_{2}\leq\alpha}\sum_{i=1}^{T}\log\left(1+e^{-\alpha\theta_{1}}\right)\quad\left(\theta\equiv\begin{bmatrix}\theta_{1}\\ \theta_{2}\end{bmatrix}\right).$$ It is easy to verify that $\hat{\theta}=\begin{bmatrix}\alpha&0\end{bmatrix}^{\top}$. For $x\in\mathcal{X}_{g}$, the predicted reward difference between actions a and a′ is $z_{x}^{\top}\hat{\theta}=\alpha>0$. Thus, ˆθ predicts correctly for x ∈ Xg. However, for context b, the reward difference is $z_{b}^{\top}\hat{\theta}=-\frac{\alpha}{2}<0$. Thus ˆθ wrongly predicts a′ as the better action for the bad context b, incurring a constant sub-optimality gap: R(T, b) = (ϕ(b, a) − ϕ(b, a′))⊤θ∗ = α/2 = Ω(1).
{ "creation_datetime": "2024-03-04", "file_name": "2402.10500v1.md", "file_path": "paper_data/2402.10500v1.md", "file_size": 65755, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
8030b912-4bf4-417b-bf12-59d7359e861a
## 3. Is Uniform Sampling Good Enough? For $x\in\mathcal{X}_{g}$, the predicted reward difference between actions a and a′ is $z_{x}^{\top}\hat{\theta}=\alpha>0$, so ˆθ predicts correctly for x ∈ Xg. However, for context b, the reward difference is $z_{b}^{\top}\hat{\theta}=-\frac{\alpha}{2}<0$. Thus ˆθ wrongly predicts a′ as the better action for the bad context b, incurring a constant sub-optimality gap: R(T, b) = (ϕ(b, a) − ϕ(b, a′))⊤θ∗ = α/2 = Ω(1). Finally, it remains to show that the event E1 ∩ E2 happens with high probability. We choose α = 2 log(N − 1), which yields µ(α/2) = 1 − (1/N). Hence, we get $$\mathbb{P}[\mathcal{E}_{1}\cap\mathcal{E}_{2}]=\mathbb{P}[\mathcal{E}_{2}\mid\mathcal{E}_{1}]\,\mathbb{P}[\mathcal{E}_{1}]=\left(1-\frac{1}{N}\right)^{2T}\geq1-\frac{2T}{N}.$$ The last step uses T ≪ N, which completes the proof. Our lower bound highlights the need for algorithms that can make better use of the sample budget T. In the next section, we propose an algorithm which actively selects contexts to ensure that the suboptimality gap goes down as 1/√T. (Figure 1: the hard instance, showing the directions zg and zb, the true parameter θ∗, and the estimate ˆθ.)
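The construction in the proof is easy to verify numerically (a sketch; the instance vectors, α, and the event probability follow the proof, while the specific N and T are our own illustrative choices):

```python
import numpy as np

N, T = 10_000, 100                                  # sample budget T << N
alpha = 2 * np.log(N - 1)
theta_star = alpha * np.array([0.5, np.sqrt(3) / 2])
z_good = np.array([1.0, 0.0])                       # z_x for x in X_g
z_bad = np.array([-0.5, np.sqrt(3) / 2])            # z_b

# Both contexts truly prefer action a: z^T theta* = alpha / 2 > 0.
assert np.isclose(z_good @ theta_star, alpha / 2)
assert np.isclose(z_bad @ theta_star, alpha / 2)

# Under E1 and E2 the MLE is theta_hat = [alpha, 0], which flips the
# predicted gap on the bad context, giving an Omega(1) suboptimality.
theta_hat = np.array([alpha, 0.0])
assert z_bad @ theta_hat < 0

# P[E1 and E2] = (1 - 1/N)^(2T) >= 1 - 2T/N, so the bad event is likely.
p_event = (1 - 1 / N) ** (2 * T)
assert p_event >= 1 - 2 * T / N
```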
{ "creation_datetime": "2024-03-04", "file_name": "2402.10500v1.md", "file_path": "paper_data/2402.10500v1.md", "file_size": 65755, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
902e2917-456d-4367-a476-ff270b3c8680
## 4. Our Approach First we will define two matrices that characterize the confidence ellipsoid around the unknown reward parameter after t − 1 steps of data collection. Recall the feature difference zs = ϕ(xs, as) − ϕ(xs, a′s), where (xs, as, a′s) is the triplet queried at time s. With this, we define: $$V_{t}=\sum\nolimits_{s=1}^{t-1}z_{s}z_{s}^{\top}+\kappa\lambda\mathbf{I}_{d}\ ,\tag{4}$$ $$H_{t}(\theta)=\nabla^{2}\mathcal{L}_{t}(\theta)+\lambda\mathbf{I}_{d}=\sum\nolimits_{s=1}^{t-1}\dot{\mu}(z_{s}^{\top}\theta)z_{s}z_{s}^{\top}+\lambda\mathbf{I}_{d}\ .$$ Here Vt is a regularized sample covariance matrix of feature differences, while Ht(θ) scales each rank-one component inside the sum by its variance given that the parameter is θ. A key relation between these two matrices is that $H_{t}(\theta)\succcurlyeq V_{t}/\kappa$. We set $\lambda=1/\big(4S^{2}(2+2S)^{2}\big)$ for providing guarantees, but it can be treated as a tuning parameter. Now we are ready to present the algorithm for active context selection in the BTL preference model.
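The two design matrices in (4), and the relation Ht(θ) ≽ Vt/κ, can be sketched as follows (a hypothetical NumPy sketch; here κ is taken as the empirical worst case over the observed data rather than the paper's supremum over Θ):

```python
import numpy as np

def design_matrices(Z, theta, lam, kappa):
    """V_t and H_t(theta) from (4), for past feature differences Z of shape (t-1, d)."""
    d = Z.shape[1]
    V = Z.T @ Z + kappa * lam * np.eye(d)
    mu = 1.0 / (1.0 + np.exp(-(Z @ theta)))
    mu_dot = mu * (1.0 - mu)                        # sigmoid derivative
    H = (Z * mu_dot[:, None]).T @ Z + lam * np.eye(d)
    return V, H
```

Since every μ̇(zs⊤θ) ≥ 1/κ, each rank-one term of H dominates the corresponding term of V/κ, which is exactly the relation Ht(θ) ≽ Vt/κ used in the analysis.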
{ "creation_datetime": "2024-03-04", "file_name": "2402.10500v1.md", "file_path": "paper_data/2402.10500v1.md", "file_size": 65755, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
ec933759-cbe9-4ce1-b068-201e7ff618be
## 4.1. Algorithm: Active Preference Optimization At each round t, our algorithm proceeds by computing the MLE estimate ˆθt based on the data obtained in the past t − 1 steps (see (1) and (2)). Based on ˆθt, our goal is to maximize exploration. To do this, for a context x ∈ X, we compute the uncertainty for each pair (a, a′) available for that context and choose the one which maximizes it: $$(a_{t}(x),a^{\prime}_{t}(x))=\operatorname*{argmax}_{(a,a^{\prime})\in\mathcal{A}\times\mathcal{A}}\ b_{t}(x,a,a^{\prime}),\ \text{where}\tag{5}$$ $$b_{t}(x,a,a^{\prime})=\|\phi(x,a)-\phi(x,a^{\prime})\|_{H^{-1}_{t}(\hat{\theta}_{t})}\ .$$ Intuitively, Ht(ˆθ) describes a confidence ellipsoid around θ∗ which keeps shrinking along whichever direction (in Rd) we decide to explore. Thus, for a given context x, playing (x, at(x), a′t(x)) as described above maximally reduces the uncertainty among all other possible action duels. However, our algorithm picks not only the action pair that maximally reduces uncertainty, but also the context that decreases it the most.

Algorithm 1 APO: Active Preference Optimization
Require: Context set X, action set A = [K], feature map ϕ : X × A → Rd, regularization λ = 1/(4S²(2+2S)²), and failure level δ ∈ (0, 1)
1: Initialize ˆθ1 = 0
2: for t = 1, . . . , T do
3:   Choose the triplet (xt, at, a′t) using (5) and (6).
4:   Observe preference feedback yt ∼ Ber(µ(z⊤t θ∗)), where zt = ϕ(xt, at) − ϕ(xt, a′t).
5:   Compute the reward estimate ˆθt+1 using (1), which minimizes the constrained log-loss (2).
6:   Compute the (scaled) design matrix Ht+1(ˆθt+1) using (4).
7: end for
8: Compute the final policy πT (x) using (7)
{ "creation_datetime": "2024-03-04", "file_name": "2402.10500v1.md", "file_path": "paper_data/2402.10500v1.md", "file_size": 65755, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
6a4b3cd5-0e7a-4bbb-9ba0-80242f63f8e8
## 4.1. Algorithm: Active Preference Optimization

Algorithm 1 APO: Active Preference Optimization
Require: Context set X, action set A = [K], feature map ϕ : X × A → Rd, regularization λ = 1/(4S²(2+2S)²), and failure level δ ∈ (0, 1)
1: Initialize ˆθ1 = 0
2: for t = 1, . . . , T do
3:   Choose the triplet (xt, at, a′t) using (5) and (6).
4:   Observe preference feedback yt ∼ Ber(µ(z⊤t θ∗)), where zt = ϕ(xt, at) − ϕ(xt, a′t).
5:   Compute the reward estimate ˆθt+1 using (1), which minimizes the constrained log-loss (2).
6:   Compute the (scaled) design matrix Ht+1(ˆθt+1) using (4).
7: end for
8: Compute the final policy πT (x) using (7)

Our algorithm picks not only the action pair that maximally reduces uncertainty, but also the context that decreases it the most. Thus, we define $$x_{t}=\operatorname*{argmax}_{x\in\mathcal{X}}\ b_{t}(x,a_{t}(x),a^{\prime}_{t}(x))\ .\tag{6}$$ This is the crucial step in our algorithm: it ensures that the reward uncertainty over all contexts decreases at a fast rate, which in turn ensures a low suboptimality gap of our policy. After running this procedure for T time steps, we define $\theta_{T}=\frac{1}{T}\sum_{t=1}^{T}\hat{\theta}_{t}$ as the average of all the past parameter estimates. Our final policy πT for any context x ∈ X is to play the action that maximizes the reward parameterized by θT. In other words, for any x ∈ X, we choose the policy $$\pi_{T}(x)=\operatorname*{argmax}_{a\in\mathcal{A}}\ \phi(x,a)^{\top}\theta_{T}.\tag{7}$$ The pseudocode is given in Algorithm 1.
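The selection in (5)–(6) amounts to a joint argmax of the uncertainty bonus over contexts and action pairs; a brute-force sketch (the array layout and names are our own, and a real implementation would vectorize rather than loop):

```python
import numpy as np

def apo_select(phi, H_inv):
    """One round of APO context/action selection, eqs. (5)-(6).

    phi   : (n_ctx, K, d) array of features phi(x, a)
    H_inv : (d, d) inverse of the scaled design matrix H_t(theta_hat)
    Returns the triplet (x, a, a') maximizing b_t = ||phi(x,a)-phi(x,a')||_{H^{-1}}.
    """
    best, best_val = None, -np.inf
    n_ctx, K, _ = phi.shape
    for x in range(n_ctx):
        for a in range(K):
            for a2 in range(a + 1, K):
                z = phi[x, a] - phi[x, a2]
                val = float(np.sqrt(z @ H_inv @ z))   # ||z||_{H^{-1}}
                if val > best_val:
                    best, best_val = (x, a, a2), val
    return best, best_val
```

As Ht grows along explored directions, H−1t shrinks there, so the bonus steers queries toward contexts and duels whose reward difference is still poorly estimated.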
{ "creation_datetime": "2024-03-04", "file_name": "2402.10500v1.md", "file_path": "paper_data/2402.10500v1.md", "file_size": 65755, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
ea07e06b-3c97-444e-a833-bbf01c4f7d71
## 4.2. Theoretical Guarantee First, we present a key lemma that quantifies the error in estimating the parameter θ∗. This lemma is obtained by extending results from (Lee et al., 2023) and using a novel inequality derived from the self-concordance property of the sigmoid function (|¨µ| ≤ ˙µ). A detailed version of the lemma and its proof is deferred to appendix A. Lemma 4.1 (Confidence Set). Let δ ∈ (0, 1]. Then, under Assumption 2.1, with probability at least 1 − δ, we have $$\left\|\theta^{*}-\hat{\theta}_{t}\right\|_{H_{t}(\hat{\theta}_{t})}\leq CS^{1/2}\gamma_{t}(\delta)\ ,$$ where $\gamma_{t}(\delta)=CS\sqrt{d\log\frac{St}{d}+\log\frac{t}{\delta}}$ for some C > 0. Now, we present the guarantee that our algorithm enjoys. Theorem 4.2 (Sub-optimality gap). Let δ ∈ (0, 1). Under Assumption 2.1, the suboptimality gap R(T) of our policy πT after running APO (Algorithm 1) for T steps is upper bounded with probability at least 1 − δ as $$R(T)\leq C\gamma_{T}(\delta)\sqrt{S\log\left(1+\frac{T}{\lambda\kappa d}\right)\frac{\kappa d}{T}}\ .$$ Comparison with Prior Work. Plugging the value of γT (δ) into the above bound, we get a sub-optimality gap of $\tilde{O}(d\sqrt{\kappa/T})$ with high probability. We compare this with the $\tilde{O}(d\kappa\sqrt{T})$ cumulative regret bound of Saha (2021) in the dueling bandit setting. We get a √κ factor improvement thanks to a tighter confidence set (Lemma 4.1).
{ "creation_datetime": "2024-03-04", "file_name": "2402.10500v1.md", "file_path": "paper_data/2402.10500v1.md", "file_size": 65755, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
fac4a9c5-9597-4de9-a373-2bf2a3d2baf9
## 4.2. Theoretical Guarantee $$R(T)\leq C\gamma_{T}(\delta)\sqrt{S\log\left(1+\frac{T}{\lambda\kappa d}\right)\frac{\kappa d}{T}}\ .$$ Comparison with Prior Work. Plugging the value of γT (δ) into the above bound, we get a sub-optimality gap of $\tilde{O}(d\sqrt{\kappa/T})$ with high probability. We compare this with the $\tilde{O}(d\kappa\sqrt{T})$ cumulative regret bound of Saha (2021) in the dueling bandit setting. We get a √κ factor improvement thanks to a tighter confidence set (Lemma 4.1), which we obtain by crucially using self-concordance of sigmoid functions. One might note from the logistic bandit literature (Abeille et al., 2021; Faury et al., 2022; Lee et al., 2023) that the state-of-the-art regret guarantee is κ-independent (the dependence is only in a lower-order term). We believe that our analysis is tight in κ because our sub-optimality gap guarantee is over real-valued rewards (ϕ(x, a)⊤θ∗) instead of their sigmoid rewards µ(ϕ(x, a)⊤θ∗) (the mean of the Bernoulli preferences y), making the √κ dependence unavoidable for our case. Proof Sketch. The proof proceeds by first upper bounding the suboptimality gap of every context by the estimation error of the sub-optimality gap for that context. Then we use Lemma 4.1 to bound the error in gap estimation by the error in the parameter estimation times an arm-dependent quantity. Specifically, for context x ∈ X, let zT = ϕ(x, a∗(x)) − ϕ(x, πT (x)), where a∗(x) is the optimal action for context x. Then, from (3), the sub-optimality gap at context x is $$R(T,x)=z_{T}^{\top}\theta^{*}\leq z_{T}^{\top}\theta^{*}-z_{T}^{\top}\theta_{T}=\frac{1}{T}\sum_{t=1}^{T}z_{T}^{\top}\big(\theta^{*}-\hat{\theta}_{t}\big)\leq\frac{1}{T}\sum_{t=1}^{T}\|z_{T}\|_{H_{t}(\hat{\theta}_{t})^{-1}}\|\theta^{*}-\hat{\theta}_{t}\|_{H_{t}(\hat{\theta}_{t})}\ .$$
{ "creation_datetime": "2024-03-04", "file_name": "2402.10500v1.md", "file_path": "paper_data/2402.10500v1.md", "file_size": 65755, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
a1c5348c-277a-4f58-97c2-102458dd5b8b
## 4.2. Theoretical Guarantee Specifically, for context x ∈ X, let zT = ϕ(x, a∗(x)) − ϕ(x, πT (x)), where a∗(x) is the optimal action for context x. Then, from (3), the sub-optimality gap at context x is $$R(T,x)=z_{T}^{\top}\theta^{*}\leq z_{T}^{\top}\theta^{*}-z_{T}^{\top}\theta_{T}=\frac{1}{T}\sum_{t=1}^{T}z_{T}^{\top}\big(\theta^{*}-\hat{\theta}_{t}\big)\leq\frac{1}{T}\sum_{t=1}^{T}\|z_{T}\|_{H_{t}(\hat{\theta}_{t})^{-1}}\|\theta^{*}-\hat{\theta}_{t}\|_{H_{t}(\hat{\theta}_{t})}\ .$$ Here, the first inequality is due to the fact that ϕ(x, πT (x))⊤θT ≥ ϕ(x, a∗(x))⊤θT , which follows from the definition of πT , and so z⊤T θT ≤ 0. The last inequality is by Cauchy-Schwarz. Now, we can apply Lemma 4.1 to upper bound $\|\theta^{*}-\hat{\theta}_{t}\|_{H_{t}(\hat{\theta}_{t})}$ by $CS^{1/2}\gamma_{t}(\delta)$. Next, note that by the design of our algorithm, $\|z_{T}\|_{H_{t}(\hat{\theta}_{t})^{-1}}\leq\|z_{t}\|_{H_{t}(\hat{\theta}_{t})^{-1}}$, which we again upper bound by $\sqrt{\kappa}\|z_{t}\|_{V_{t}^{-1}}$ using the fact that Ht(ˆθ) ≽ (1/κ)Vt. Finally, applying the Elliptic Potential Lemma (Lemma C.2) finishes the proof. The proof is presented in detail in appendix A.
{ "creation_datetime": "2024-03-04", "file_name": "2402.10500v1.md", "file_path": "paper_data/2402.10500v1.md", "file_size": 65755, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
063d0e40-9011-4e97-8e6e-31ac8005856a
## 4.2.1. Comparison With Mehta Et Al. (2023) We revisit assumptions and results in Mehta et al. (2023) to highlight the major differences. First, they consider the Borda scores $g^{*}(x,a)=\mathbb{E}_{a^{\prime}\sim\mathrm{Unif}(\mathcal{A}(x))}[\mu(r^{*}(x,a)-r^{*}(x,a^{\prime}))]$, which is the probability of action a at context x winning over any other random action. They assume that both the reward and Borda functions lie in the same Hilbert space H, i.e., ∃ θ∗, β∗ ∈ H and a feature map ϕ : X × A → H such that r∗(x, a) = ⟨θ∗, ϕ(x, a)⟩H and g∗(x, a) = ⟨β∗, ϕ(x, a)⟩H. Since g∗ is a non-linear function of ϕ(x, a), this assumption doesn't hold for the BTL preference model in general. The assumption holds trivially when each ϕ(x, a) is the one-hot vector ex,a. However, it requires H to be a |X| · |A|-dimensional Euclidean space. This would then significantly weaken their suboptimality guarantee, which scales linearly with dimension. We remove this assumption but provide the same guarantee in terms of d. We achieve this by exploiting the problem structure, especially the properties of the sigmoid function. Next, Mehta et al. (2023) assume that the optimal action $a^{*}_{x}=\operatorname*{argmax}_{a\in\mathcal{A}}\phi(x,a)^{\top}\theta^{*}$ satisfies, for some L > 0, $$\langle\phi(x,a^{*}_{x})-\phi(x,a),\theta^{*}\rangle\leq L\left(g^{*}(x,a^{*}_{x})-g^{*}(x,a)\right).\tag{8}$$
{ "creation_datetime": "2024-03-04", "file_name": "2402.10500v1.md", "file_path": "paper_data/2402.10500v1.md", "file_size": 65755, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
cdb0a517-ae2a-4256-9fba-3f7be32d4349
## 4.2.1. Comparison With Mehta Et Al. (2023) $$\langle\phi(x,a^{*}_{x})-\phi(x,a),\theta^{*}\rangle\leq L\left(g^{*}(x,a^{*}_{x})-g^{*}(x,a)\right).\tag{8}$$ From its definition, g∗ lies in the same non-linear function space as µ. Denote by ν the logit function: ν(z) = log(z/(1 − z)). We know that ν(µ(y)) = y. With this, the assumption stated in (8) roughly translates to |ν(z) − ν(z′)| ≤ L|z − z′|. Thus, L is a global upper bound on ˙ν(z) for all z. A minor calculation shows that ˙ν(z) = 1/ ˙µ(y), where z = µ(y). Hence L is of the order of κ. The suboptimality gap of Mehta et al. (2023) (Theorem 1) scales linearly with L (or κ). Our guarantee is therefore asymptotically tighter, as it only scales as √κ. Finally, one major difference with Mehta et al. (2023) is that we choose both actions by maximizing the exploration bonus, while they choose the second action randomly. Their approach might be wasteful in terms of samples, as the choice of the second action is equally crucial in preference models: e.g., if the second answer for a given prompt is generated randomly, the first answer will almost always be preferred, and such data reveals little.
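The "minor calculation" relating the logit and sigmoid derivatives is easy to check numerically (an illustrative sketch; function names are our own):

```python
import numpy as np

def nu_dot(z):
    """Derivative of the logit nu(z) = log(z / (1 - z)): 1 / (z (1 - z))."""
    return 1.0 / (z * (1.0 - z))

def mu_dot(y):
    """Derivative of the sigmoid mu(y) = 1 / (1 + e^{-y}): mu (1 - mu)."""
    m = 1.0 / (1.0 + np.exp(-y))
    return m * (1.0 - m)

# For z = mu(y) we have nu_dot(z) = 1 / mu_dot(y), so a global Lipschitz
# constant L for nu is of the order of kappa = max 1 / mu_dot.
for y in (-2.0, 0.0, 1.7):
    z = 1.0 / (1.0 + np.exp(-y))
    assert np.isclose(nu_dot(z), 1.0 / mu_dot(y))
```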
{ "creation_datetime": "2024-03-04", "file_name": "2402.10500v1.md", "file_path": "paper_data/2402.10500v1.md", "file_size": 65755, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
556e0efc-d378-4e12-997a-cce788076b2f
## 5. Experiments In this section, we first present a batch version of the online algorithm and then give the details of the performed experiments. The batch version closely follows the framework of APO (Algorithm 1), with slight changes adapted for a computationally efficient implementation in large-scale experiments.
{ "creation_datetime": "2024-03-04", "file_name": "2402.10500v1.md", "file_path": "paper_data/2402.10500v1.md", "file_size": 65755, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }