9c405354-0d1f-42e7-8a27-aba4d2188d52
# Efficient Language Adaptive Pre-Training: Extending State-Of-The-Art Large Language Models For Polish ## Abstract This study explores the potential of fine-tuning foundational English Large Language Models (LLMs) for generating Polish text. The first step involves Language Adaptive Pre-training (LAPT) on a high-quality dataset of 3.11 GB, consisting of 276 million Polish tokens. LAPT is followed by additional fine-tuning aimed at solving nine KLEJ challenges [1]. Our trained model, Curie-7B-v1, not only generates Polish text with the lowest perplexity of 3.02 among decoder-based Polish models but also closely rivals the performance of the best Polish encoder-decoder models, with a gap of less than 2% on 8 out of 9 tasks. Curie-7B-v1 used approximately 2-3% of a typical dataset size to learn Polish. The LAPT was completed in less than five days using a consumer GPU, highlighting the method's efficiency. The model's proficiency in Polish was significantly enhanced, demonstrating the viability of this approach for adding new languages to existing LLMs by training just 1.2% of the model's parameters. To contribute to the community's collaborative progress, the model has been released as open-source.
{ "creation_datetime": "2024-03-04", "file_name": "2402.09759v1.md", "file_path": "paper_data/2402.09759v1.md", "file_size": 31612, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
45c9b8ea-41ba-479f-ace6-768a1f58ca46
# Efficient Language Adaptive Pre-Training: Extending State-Of-The-Art Large Language Models For Polish ## Keywords Machine Learning · NLP · Language Adaptive Pre-Training · Large Language Models · Transformer

## 1 Introduction

LLMs have enhanced the efficiency of many natural language processing (NLP) tasks. This improvement comes with the trade-off of resource-intensive pre-training and inference. During the pre-training phase the model gains a general understanding of language, including grammar rules, linguistic patterns, factual information, and reasoning abilities [2]. Currently, all of the best open-source LLMs are pre-trained on mostly English data. According to Web Technology Surveys [3], approximately 51.7% of the content on the internet is in English, while data in over 100 non-English languages accounts for the remaining 48.3%. The Polish language contributes just 1.6% of the Internet's content. Due to this data insufficiency, it is significantly harder to develop a non-English LLM. The performance of LLMs is influenced by several crucial factors, including the number of model parameters, the number of observed tokens, and the overall quality of the text [4] [5]. Ideally, the pre-training dataset should scale with the number of model parameters [4]. The resource-intensive nature of pre-training LLMs poses a challenge for low-resource languages such as Polish. For comparison, Meta's LLama 2 was trained on 2 trillion tokens [6] and GPT-3 on roughly 300 billion tokens [7]. As of today, to the best of the author's knowledge, there are no high-quality open-source datasets of Polish text exceeding 100 billion tokens in size. Developing an LLM is a substantial investment. For the sake of comparison, GPT-4 is claimed to have cost over $100,000,000, MistralAI's Mistral-7B cost $500,000 to train, and Meta's LLaMa2 70B was trained on 2048 A100 GPUs for 23 days, which is estimated to cost around $2,000,000. These are the costs of plain LLM pre-training alone, excluding the costs of e.g. data collection or human evaluation necessary to turn these models into complex AI assistants or classifiers. Pre-training isn't the only technique for adapting LLMs to low-resource languages. This can also be done via transfer learning [8] [9], fine-tuning an LLM for Causal Language Modeling (predicting the next element in a sequence iteratively) [10] in a supervised manner on text in a language it
{ "creation_datetime": "2024-03-04", "file_name": "2402.09759v1.md", "file_path": "paper_data/2402.09759v1.md", "file_size": 31612, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
2904c6bd-d0c0-4f0a-b481-8e92896bcc45
# Efficient Language Adaptive Pre-Training: Extending State-Of-The-Art Large Language Models For Polish ## Keywords Machine Learning · NLP · Language Adaptive Pre-Training · Large Language Models · Transformer

## 1 Introduction

which is estimated to cost around $2,000,000. These are the costs of plain LLM pre-training alone, excluding the costs of e.g. data collection or human evaluation necessary to turn these models into complex AI assistants or classifiers. Pre-training isn't the only technique for adapting LLMs to low-resource languages. This can also be done via transfer learning [8] [9], fine-tuning an LLM for Causal Language Modeling (predicting the next element in a sequence iteratively) [10] in a supervised manner on text in a language it has barely or never seen in the pre-training phase. LAPT for text generation in a specific language, such as Polish, is a potentially effective strategy. For instance, studies have shown that Domain Adaptive Pre-training can significantly improve the performance of foundational LLMs in clinical tasks [11] [12] [13]. LLaMA [6], when equipped with a LoRA adapter fine-tuned on medical texts, particularly outperforms foundational models in clinical domain tasks [11]. The study [11] demonstrates that this approach yields substantial improvements, especially in large-scale multilabel classification tasks such as diagnosis and procedure classification. This marks a significant advancement over existing custom-trained language models, highlighting the efficiency of LoRA Domain Adaptive Pre-training in highly specialized domains. While the specific application to Polish wasn't addressed in the papers we found, the principles of Domain Adaptive Pre-training are widely applicable across languages. This is especially relevant in the context of neural machine translation [14] and cross-lingual tasks, where models are often adapted to new languages and domains to improve their performance. This could be directly applicable to a language like Polish by enabling the model to better capture the syntax, semantics, and unique idiomatic expressions, leading to more accurate and contextually appropriate text generation. Preliminary evaluations (Table 1) revealed that the English open-source LLM Mistral-7B exhibits a basic ability to generate and understand texts in Polish. This capability could be leveraged to significantly improve Polish text generation and comprehension. The applications of LLMs include diverse domains such as online retail, medicine,

| Input Tokens | Generated Tokens |
{ "creation_datetime": "2024-03-04", "file_name": "2402.09759v1.md", "file_path": "paper_data/2402.09759v1.md", "file_size": 31612, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
4c71776d-916c-4109-aa25-27e3298854ac
# Efficient Language Adaptive Pre-Training: Extending State-Of-The-Art Large Language Models For Polish ## Keywords Machine Learning · NLP · Language Adaptive Pre-Training · Large Language Models · Transformer

## 1 Introduction

to more accurate and contextually appropriate text generation. Preliminary evaluations (Table 1) revealed that the English open-source LLM Mistral-7B exhibits a basic ability to generate and understand texts in Polish. This capability could be leveraged to significantly improve Polish text generation and comprehension. The applications of LLMs include diverse domains such as online retail, medicine,

| Input Tokens | Generated Tokens |
|---|---|
| Szybkie czerwone autko | jeździące po coś w tym kraju |
| Kanapka | ze kokosovým mlékem je moje oblíbená receptura |
| Król Karol | wręczył Trzeciej Wikitorii Królowej |
| Sport to zdrowie | i dlatego zauważyć, ze zdrowa dieta |
{ "creation_datetime": "2024-03-04", "file_name": "2402.09759v1.md", "file_path": "paper_data/2402.09759v1.md", "file_size": 31612, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
9c797676-53a9-47bd-9e3b-29e0cd69d20c
# Efficient Language Adaptive Pre-Training: Extending State-Of-The-Art Large Language Models For Polish ## Keywords Machine Learning · NLP · Language Adaptive Pre-Training · Large Language Models · Transformer

## 1 Introduction

| Input Tokens | Generated Tokens |
|---|---|
| Król Karol | wręczył Trzeciej Wikitorii Królowej |
| Sport to zdrowie | i dlatego zauważyć, ze zdrowa dieta |

education, engineering, linguistics, and the gaming industry. The integration of these AI solutions not only enhances business productivity but also yields substantial economic benefits. The introduction of LLM-based AI tools has demonstrated a marked improvement in operational efficiency, evidenced by a 14% average increase in the rate of completed tasks per hour [15]. In Poland, the adoption of such technologies is almost immediate, but the development and research of custom solutions progresses at a slower pace. As of now, there isn't a cutting-edge, high-quality LLM designed exclusively for Polish. This delay forces businesses to depend on externally hosted solutions, such as OpenAI's ChatGPT, particularly in the realm of digital assistants. While these external solutions offer immediate benefits, they also entail financial costs and limit control over data flow. The reliance on external AI technologies, while a temporary solution, underscores the need for the development and deployment of localized LLMs to ensure data sovereignty and capitalize on the economic and technological potential of AI. This study aims to ascertain whether utilizing an established LLM solution can facilitate the creation of a versatile Polish-adapted LLM that is both time-efficient and economically viable. The approach involves building a classifier or regressor on top of the LAPT model and fine-tuning it to solve domain-specific downstream tasks applicable to business use cases (sentiment analysis, predicting or labelling online reviews, generating texts). The following **Research Questions (RQs)** have been defined and will be addressed in this paper: RQ 1
{ "creation_datetime": "2024-03-04", "file_name": "2402.09759v1.md", "file_path": "paper_data/2402.09759v1.md", "file_size": 31612, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
a864eabf-8f1d-4c60-82a9-943b87f7b26e
# Efficient Language Adaptive Pre-Training: Extending State-Of-The-Art Large Language Models For Polish ## Keywords Machine Learning · NLP · Language Adaptive Pre-Training · Large Language Models · Transformer

## 1 Introduction

and technological potential of AI. This study aims to ascertain whether utilizing an established LLM solution can facilitate the creation of a versatile Polish-adapted LLM that is both time-efficient and economically viable. The approach involves building a classifier or regressor on top of the LAPT model and fine-tuning it to solve domain-specific downstream tasks applicable to business use cases (sentiment analysis, predicting or labelling online reviews, generating texts). The following **Research Questions (RQs)** have been defined and will be addressed in this paper:

- **RQ 1:** How well does our model Curie-7B-v1 generate Polish text?
- **RQ 2:** How does the LAPT LLM perform against top models on the KLEJ benchmark?
- **RQ 3:** What are the estimated costs, time requirements, and energy consumption involved in building a model like Curie-7B-v1?
{ "creation_datetime": "2024-03-04", "file_name": "2402.09759v1.md", "file_path": "paper_data/2402.09759v1.md", "file_size": 31612, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
b9065cfe-1ebf-4fbb-abe0-e7e9e6c86b91
# Efficient Language Adaptive Pre-Training: Extending State-Of-The-Art Large Language Models For Polish ## 2 Methodology This section provides an overview of the experiments conducted in this study. It explains the steps followed and the techniques used, describes the mathematical principles underlying the experiments, and discusses the metrics used to evaluate the results.
{ "creation_datetime": "2024-03-04", "file_name": "2402.09759v1.md", "file_path": "paper_data/2402.09759v1.md", "file_size": 31612, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
fdfbf7e2-3e3e-4dec-a19c-853d5feffb12
# Efficient Language Adaptive Pre-Training: Extending State-Of-The-Art Large Language Models For Polish ## 2.1 Language Adaptive Pre-Training

Given a pre-trained LLM $P_{\Phi}(y|x)$ with parameters $\Phi$ and a training dataset $\mathcal{Z} = \{(x_i, y_i)\}_{i=1,\ldots,N}$, adapting the model to a new language requires updating its weights iteratively from the pre-trained state $\Phi_0$ to $\Phi = \Phi_0 + \Delta\Phi$. Maximising the objective function can be defined as follows:

$$\underset{\Phi}{\operatorname{argmax}}\sum_{(x,y)\in\mathcal{Z}}\sum_{t=1}^{|y|}\log(P_{\Phi}(y_{t}|x,y_{<t}))\tag{1}$$

This task is computationally intensive and demands substantial resources. In the classical paradigm (1), full fine-tuning means that the model needs to learn a $\Delta\Phi$ whose dimension equals that of the entire set of pre-trained parameters, $|\Delta\Phi| = |\Phi_0|$, which is computationally expensive. In the proposed paradigm (2), LoRA (Low-Rank Adaptation) [16] is used: only a small set of additional parameters $\theta$ is tuned, such that $\Phi = \Phi_0 + \Delta\Phi(\theta)$, with $|\theta| \ll |\Phi_0|$. Thus, the training can be expressed as:

$$\underset{\theta}{\operatorname{argmax}}\sum_{(x,y)\in\mathcal{Z}}\sum_{t=1}^{|y|}\log(P_{\Phi_0+\Delta\Phi(\theta)}(y_{t}|x,y_{<t}))\tag{2}$$

In the classical paradigm (1), the outcome of LAPT would be a Polish-adapted LLM, while in the paradigm used here (2), the outcome is a Polish LoRA adapter [17], which can be combined with the untouched foundational LLM to generate Polish text. Perplexity is a commonly used metric in natural language processing (NLP) to evaluate the quality of LLMs. In the context of text generation, perplexity indicates how well the language model predicts the sequence of words in a given test text. It's a measure of how "sur
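The low-rank parameterization in (2) can be realized with the Hugging Face `peft` library. The sketch below is illustrative only: it attaches a LoRA adapter to a frozen base model so that only the adapter parameters θ are trainable. The base-model identifier and dtype choice are assumptions, while the rank, alpha, and dropout values match those reported in Section 3.2.

```python
# Minimal sketch (not the paper's exact script): wrap a frozen base model with a
# LoRA adapter so that only the low-rank parameters theta are trainable.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base_name = "mistralai/Mistral-7B-v0.1"  # assumed base checkpoint
tokenizer = AutoTokenizer.from_pretrained(base_name)
base_model = AutoModelForCausalLM.from_pretrained(base_name, torch_dtype=torch.float16)

lora_config = LoraConfig(
    task_type="CAUSAL_LM",
    r=32,            # LoRA rank, as reported in Section 3.2
    lora_alpha=16,   # scaling factor
    lora_dropout=0.05,
)
model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()  # only a small fraction of |Phi_0| is trainable
```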
{ "creation_datetime": "2024-03-04", "file_name": "2402.09759v1.md", "file_path": "paper_data/2402.09759v1.md", "file_size": 31612, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
dee74fd5-06b4-400f-90e4-55fca9f61ba0
# Efficient Language Adaptive Pre-Training: Extending State-Of-The-Art Large Language Models For Polish ## 2.1 Language Adaptive Pre-Training

In the classical paradigm (1), the outcome of LAPT would be a Polish-adapted LLM, while in the paradigm used here (2), the outcome is a Polish LoRA adapter [17], which can be combined with the untouched foundational LLM to generate Polish text. Perplexity is a commonly used metric in natural language processing (NLP) to evaluate the quality of LLMs. In the context of text generation, perplexity indicates how well the language model predicts the sequence of words in a given test text. It's a measure of how "surprised" the model is by the data it's seeing. A lower perplexity score indicates that the model is better at predicting the sample. It is defined as follows: let $P_i$ be the perplexity of the i-th sentence in the batch, calculated as $P_i = 2^{H_i}$, where $H_i$ is the average cross-entropy for the i-th sentence, given by:

$$H_{i}=-\frac{1}{N_{i}}\sum_{j=1}^{N_{i}}\log_{2}(P(w_{ij}|w_{i1},w_{i2},\ldots,w_{i,j-1}))\tag{3}$$

Here, $N_i$ is the number of words in the i-th sentence, and $P(w_{ij}|w_{i1}, w_{i2}, \ldots, w_{i,j-1})$ is the predicted probability of the j-th word given the preceding context within the sentence. The mean perplexity across a batch of M sentences is then defined as:

$$\overline{P}=\frac{1}{M}\sum_{i=1}^{M}P_{i}\tag{4}$$
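A direct transcription of Eqs. (3) and (4) into Python is shown below. It is a minimal sketch that assumes the per-token predicted probabilities have already been obtained from a model; the toy probability values are made up purely for illustration.

```python
import math
from typing import List

def sentence_perplexity(token_probs: List[float]) -> float:
    """Eq. (3): P_i = 2^{H_i}, where H_i is the average base-2 cross-entropy
    of the predicted probabilities of the tokens in one sentence."""
    h_i = -sum(math.log2(p) for p in token_probs) / len(token_probs)
    return 2.0 ** h_i

def mean_perplexity(batch_probs: List[List[float]]) -> float:
    """Eq. (4): mean perplexity over a batch of M sentences."""
    return sum(sentence_perplexity(p) for p in batch_probs) / len(batch_probs)

# Toy usage with made-up next-token probabilities:
print(mean_perplexity([[0.5, 0.25, 0.8], [0.9, 0.6]]))
```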
{ "creation_datetime": "2024-03-04", "file_name": "2402.09759v1.md", "file_path": "paper_data/2402.09759v1.md", "file_size": 31612, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
49db6f97-0f64-4752-a898-dfade5602a50
# Efficient Language Adaptive Pre-Training: Extending State-Of-The-Art Large Language Models For Polish ## 2.2 Fine-Tuning For Downstream Tasks

After a language model is fine-tuned according to (1), it needs to solve a downstream task, such as sentiment analysis of online reviews. A pre-trained LLM $P_{\Phi,\Theta}$ with its domain-adapted parameters $\Phi$ and a newly initialised classification layer $\Theta$, together with a training dataset $\mathcal{Z} = \{(x_i, y_i)\}_{i=1,\ldots,N}$, has the task of minimizing a specific loss function, such as the cross-entropy loss [11]:

$$\underset{\Phi,\Theta}{\operatorname{argmin}}\;-\frac{1}{N}\sum_{i=1}^{N}y_{i}\log(P_{\Phi,\Theta}(x_{i}))\tag{5}$$

In the proposed paradigm (2), the fine-tuning process only updates the small additional parameters $\Delta\Phi(\theta)$ and the classifier head $\Theta$:

$$\underset{\theta,\Theta}{\operatorname{argmin}}\;-\frac{1}{N}\sum_{i=1}^{N}y_{i}\log(P_{\Phi_0+\Delta\Phi(\theta),\Theta}(x_{i}))\tag{6}$$
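A hedged sketch of objective (6) in code: the adapted backbone stays frozen while a LoRA adapter and a freshly initialised classification head are trained with cross-entropy. The checkpoint name, label count, and learning rate are assumptions; in the paper the backbone would be the Polish-adapted model rather than the vanilla base model.

```python
# Sketch of objective (6): only the LoRA parameters (theta) and the new
# classification head (Theta) receive gradients; the base weights Phi_0 stay frozen.
import torch
from transformers import AutoModelForSequenceClassification
from peft import LoraConfig, get_peft_model

backbone = AutoModelForSequenceClassification.from_pretrained(
    "mistralai/Mistral-7B-v0.1",   # placeholder; the paper builds on the LAPT-adapted model
    num_labels=3,                  # e.g. negative / neutral / positive sentiment
    torch_dtype=torch.float16,
)
peft_config = LoraConfig(task_type="SEQ_CLS", r=32, lora_alpha=16, lora_dropout=0.05)
model = get_peft_model(backbone, peft_config)   # the classification head remains trainable

loss_fn = torch.nn.CrossEntropyLoss()           # the cross-entropy loss of Eqs. (5)-(6)
optimizer = torch.optim.AdamW(
    (p for p in model.parameters() if p.requires_grad), lr=2e-5
)
```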
{ "creation_datetime": "2024-03-04", "file_name": "2402.09759v1.md", "file_path": "paper_data/2402.09759v1.md", "file_size": 31612, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
0c2b97cc-62a3-4880-a10a-34e9bdbe418f
# Efficient Language Adaptive Pre-Training: Extending State-Of-The-Art Large Language Models For Polish ## 2.3 Data This subsection details the datasets used in both the LAPT phase and the second phase for addressing downstream tasks.
{ "creation_datetime": "2024-03-04", "file_name": "2402.09759v1.md", "file_path": "paper_data/2402.09759v1.md", "file_size": 31612, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
54bbb8b3-d89c-42b1-9894-97cc6c4c679d
# Efficient Language Adaptive Pre-Training: Extending State-Of-The-Art Large Language Models For Polish ## 2.3.1 The Dataset For Language Adaptive Pre-Training For the initial LAPT phase, the SpeakLeash [18] dataset was used. It offers an extensive and diverse collection of roughly 1 TB of Polish texts, from which only the highest-quality extract of approximately 2 GB was selected.
{ "creation_datetime": "2024-03-04", "file_name": "2402.09759v1.md", "file_path": "paper_data/2402.09759v1.md", "file_size": 31612, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
90e1b837-fe03-429d-b963-01b2bd2e4fd4
# Efficient Language Adaptive Pre-Training: Extending State-Of-The-Art Large Language Models For Polish ## Sample Extracts

"Z Podwala Staromiejskiego zniknęły stragany z warzywami i owocami ..."

"Transmisja cyfrowych danych w sieciach GSM Tworzenie standardu GSM rozpoczęło się w 1982 roku, kiedy to powołano do działalności zespół roboczy, przed którym postawiono zadanie opracowania założeń ..."

The SpeakLeash dataset has been curated to include texts from a variety of sources, ensuring a comprehensive representation of the Polish language. For LAPT, we specifically trained the adapter on online texts sourced from hundreds of Polish web portals, along with an extensive extract from Polish Wikipedia. This approach was instrumental in covering a broad spectrum of topics and writing styles, thus enhancing the adaptability and accuracy of our model. The merged dataset consisted of 2,157,867 texts.
{ "creation_datetime": "2024-03-04", "file_name": "2402.09759v1.md", "file_path": "paper_data/2402.09759v1.md", "file_size": 31612, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
da8b8f33-ea62-4ff1-9e08-788a90a98e93
# Efficient Language Adaptive Pre-Training: Extending State-Of-The-Art Large Language Models For Polish ## 2.3.2 Downstream Tasks

The KLEJ Benchmark [1] consists of 9 tasks (Table 2) for evaluating the performance of language models. Each task is designed to assess a different aspect of language processing ability, such as understanding context, recognizing emotions, and identifying specific entities in text. This benchmark provides a comprehensive framework for testing and comparing the capabilities of various language models in processing and understanding the Polish language.

- **NKJP-NER:** Predict the type of a named entity in sentences from the NKJP corpus.
- **PolEmo2.0-OUT:** Predict sentiment for out-of-domain reviews, such as product and university reviews.
- **DYK:** Decide if an answer to a question is correct in the 'Did You Know' dataset.
- **CDSC-E:** Determine entailment between pairs of sentences from the Compositional Distributional Semantics Corpus.
- **PSC:** Identify summaries of the same or different news articles in the Polish Summaries Corpus.
- **CDSC-R:** Assess semantic relatedness between sentence pairs in the Compositional Distributional Semantics Corpus.
- **AR:** Predict product ratings from 1 to 5 in the Allegro Reviews dataset.
- **CBD:** Detect cyberbullying content in Twitter messages from the 2019 PolEval competition.
- **PolEmo2.0-IN:** Predict the sentiment of online reviews in the medicine and hotel domains.

| Task-Name | Train | Dev | Test | Domain | Metrics | Objective |
|-----------|-------|-----|------|--------|---------|-----------|
| *Single-Sentence Tasks* | | | | | | |
| NKJP-NER | 16k | 2k | 2k | Balanced corpus | Accuracy | NER classification |
{ "creation_datetime": "2024-03-04", "file_name": "2402.09759v1.md", "file_path": "paper_data/2402.09759v1.md", "file_size": 31612, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
18dfaa81-caef-4d61-900e-cb50d864d1b3
# Efficient Language Adaptive Pre-Training: Extending State-Of-The-Art Large Language Models For Polish ## 2.3.2 Downstream Tasks

| Task-Name | Train | Dev | Test | Domain | Metrics | Objective |
|-----------|-------|-----|------|--------|---------|-----------|
| *Single-Sentence Tasks* | | | | | | |
| NKJP-NER | 16k | 2k | 2k | Balanced corpus | Accuracy | NER classification |
| CDSC-R | 8k | 1k | 1k | Image captions | Spearman corr. | Semantic relatedness |
| CDSC-E | 8k | 1k | 1k | Image captions | Accuracy | Textual entailment |
| *Multi-Sentence Tasks* | | | | | | |
| CBD | 10k | - | 1k | Social Media | F1-Score | Cyberbullying detection |
| PolEmo2.0-IN | 6k | 0.7k | 0.7k | Online reviews | Accuracy | Sentiment analysis |
{ "creation_datetime": "2024-03-04", "file_name": "2402.09759v1.md", "file_path": "paper_data/2402.09759v1.md", "file_size": 31612, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
863e3a36-ac0f-4438-83df-5f1ea7c85420
# Efficient Language Adaptive Pre-Training: Extending State-Of-The-Art Large Language Models For Polish ## 2.3.2 Downstream Tasks

| Task-Name | Train | Dev | Test | Domain | Metrics | Objective |
|-----------|-------|-----|------|--------|---------|-----------|
| CBD | 10k | - | 1k | Social Media | F1-Score | Cyberbullying detection |
| PolEmo2.0-IN | 6k | 0.7k | 0.7k | Online reviews | Accuracy | Sentiment analysis |
| PolEmo2.0-OUT | 6k | 0.5k | 0.5k | Online reviews | Accuracy | Sentiment analysis |
| Czy wiesz? | 5k | - | 1k | Wikipedia | F1-Score | Question answering |
| PSC | 4k | - | 1k | News articles | F1-Score | Paraphrase |
| AR | 10k | 1k | 1k | Online reviews | wMAE | Sentiment analysis |
{ "creation_datetime": "2024-03-04", "file_name": "2402.09759v1.md", "file_path": "paper_data/2402.09759v1.md", "file_size": 31612, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
6d243d02-3712-4f85-81ac-a050ea322cbb
# Efficient Language Adaptive Pre-Training: Extending State-Of-The-Art Large Language Models For Polish ## 2.4 Model Selection

In the search for an optimal foundational LLM on which to build Curie-7B-v1 through LAPT, we identified Mistral-7B, developed by the French startup Mistral AI, as the most suitable foundational model. Among the open-source models evaluated, which included LLama2, Falcon, and Bloom, Mistral-7B demonstrates elementary proficiency in processing and interpreting the Polish language. This proficiency is a decisive factor, given the LAPT's primary focus on the Polish language. Furthermore, Mistral-7B distinguishes itself through several key features beyond its language capabilities:

- **Performance**: Mistral 7B shows exceptional performance, consistently outperforming Llama 2 13B and competing effectively with Llama 30B in various tasks.
- **Architectural Advancements**:
  - **Grouped-Query Attention**: Enhances processing efficiency, leading to faster inference times.
  - **Sliding-Window Attention**: Improves handling of longer data sequences while maintaining computational efficiency.
  - **Byte-fallback BPE Tokenizer**: Ensures effective management of a broad spectrum of textual inputs.
- **Context Window**: The model's ability to refer to a significant amount of previous information enhances its performance in continuous tasks. The model has a sliding-window size of 4096 and a context length of 8192, significantly more than any vanilla BERT [19] model can handle.
- **Accessibility**: In *float32* precision, Mistral 7B requires ∼28GB of VRAM, while in *float16* precision it needs ∼14GB. This makes it accessible for consumer-grade GPUs.
- **Versatility**: Mistral 7B excels in English language processing and coding tasks, making it versatile for various enterprise applications.
- **Open-Source License**: Available under the Apache 2.0 license, it encourages community-driven development and transparency.
{ "creation_datetime": "2024-03-04", "file_name": "2402.09759v1.md", "file_path": "paper_data/2402.09759v1.md", "file_size": 31612, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
427da7ed-f432-47e1-8445-d97372b5423d
# Efficient Language Adaptive Pre-Training: Extending State-Of-The-Art Large Language Models For Polish ## 3 Experiments 3.1 Hardware And Software Stack For the experiments, a server with substantial computational resources was leased from a leading cloud provider. It featured an NVIDIA RTX A6000 Ada GPU with 48GB of VRAM, paired with an AMD EPYC 7742 processor with 64 cores, 128 GB of RAM, and a 1TB M.2 SSD drive. The server ran Ubuntu Linux, with a Conda environment providing PyTorch 2.0 and CUDA driver 12.2.
{ "creation_datetime": "2024-03-04", "file_name": "2402.09759v1.md", "file_path": "paper_data/2402.09759v1.md", "file_size": 31612, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
377354af-d939-4f6b-8f41-add97474da82
# Efficient Language Adaptive Pre-Training: Extending State-Of-The-Art Large Language Models For Polish ## 3.2 The Adaptive Pre-Training

| Hyperparameters | Value |
|---------------------|---------|
| lora_rank | 32 |
| lora_dropout | 0.05 |
| lora_alpha | 16 |
| warmup_steps | 0.1 |
| learning_rate | 2 × 10⁻⁵ |
| neftune_noise_alpha | 2 |
| batch_size | 128 |
| max_seq_len | 128 |

The model was trained with the AdamW optimizer [20], using a learning-rate schedule with an initial warm-up ratio of 0.1, followed by a reduction of the final LR to 10% of its peak value. The LoRA adapter was configured with standard settings, including a rank of 32 and an α value of 16, complemented by a LoRA dropout rate of 0.05. To enhance performance, NEFTune noise [21] was added to the embedding vectors during training. The maximum input size was set to 128 tokens, with a batch size of 128. Training of the final model was completed in exactly one epoch, requiring a total of 106 hours. Training was not extended beyond this duration due to the onset of overfitting after the first epoch, which led to the generation of nonsensical text. Figure 3 illustrates
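The reported settings map fairly directly onto a Hugging Face `TrainingArguments` configuration. The sketch below is an assumption about how such a run could be set up, not the paper's actual training script; in particular, the scheduler that decays the LR to 10% of its peak and the availability of `neftune_noise_alpha` (present in recent `transformers` releases) are hedged in the comments.

```python
# Illustrative LAPT training configuration using the hyperparameters reported above.
# The resulting arguments would be passed to transformers.Trainer together with the
# LoRA-wrapped model from Section 2.1 and the tokenized Polish corpus (max_seq_len=128).
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="curie-7b-lapt",
    num_train_epochs=1,               # training stopped after one epoch (overfitting beyond it)
    per_device_train_batch_size=128,  # effective batch size of 128 (accumulation not specified)
    learning_rate=2e-5,
    warmup_ratio=0.1,                 # warm-up over the first 10% of steps
    lr_scheduler_type="cosine",       # assumption; the paper only states the LR decays to 10% of peak
    optim="adamw_torch",              # AdamW optimizer [20]
    neftune_noise_alpha=2.0,          # NEFTune embedding noise [21]; requires a recent transformers version
    logging_steps=100,
)
```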
{ "creation_datetime": "2024-03-04", "file_name": "2402.09759v1.md", "file_path": "paper_data/2402.09759v1.md", "file_size": 31612, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
1ad6a0f6-a2e7-4406-b0fc-337e947ffa69
# Efficient Language Adaptive Pre-Training: Extending State-Of-The-Art Large Language Models For Polish ## 3.2 The Adaptive Pre-Training

tokens, with a batch size of 128.

| Hyperparameters | Value |
|---------------------|-----|
| neftune_noise_alpha | 2 |
| batch_size | 128 |
| max_seq_len | 128 |

Training of the final model was completed in exactly one epoch, requiring a total of 106 hours. Training was not extended beyond this duration due to the onset of overfitting after the first epoch, which led to the generation of nonsensical text. Figure 3 illustrates the model's training and validation loss. Loss and performance evaluations indicated that optimal model quality was attained after the initial epoch, with further training resulting in a significant decline in quality, ultimately producing incoherent text. The model was validated using 1000 distinct examples from the training set.

RQ 1: How well does our model Curie-7B-v1 generate Polish text? To evaluate the model's performance, perplexity scores were compared before and after fine-tuning. This comparison serves as an objective measure of the effectiveness of the LAPT process. The low perplexity calculated on a test set of 1000 previously unseen examples shows that the model has a significantly better understanding of the Polish language after fine-tuning. The results signify that adapting the LLM to the new language has been successful. The LAPT

| Model-Name | Average Perplexity ↓ | Size (Billion Parameters) | Modality |
|---|---|---|---|
| Curie-7B-v1 | 3.02 | 7.24 | Pre-trained, *276 Million |
{ "creation_datetime": "2024-03-04", "file_name": "2402.09759v1.md", "file_path": "paper_data/2402.09759v1.md", "file_size": 31612, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
82dd16ac-b397-43ee-b259-ee58c1a6bfe8
# Efficient Language Adaptive Pre-Training: Extending State-Of-The-Art Large Language Models For Polish ## 3.2 The Adaptive Pre-Training

| Model-Name | Average Perplexity ↓ | Size (Billion Parameters) | Modality |
|---|---|---|---|
| Curie-7B-v1 | 3.02 | 7.24 | Pre-trained, *276 Million |
| Mistral-7B-v1 | 6.82 | 7.24 | |
| LLama2-7B | 7.71 | | |
| APT3-1B-Base | 23.30 | | |
| Polish-GPT2-XL | 97.37 | | |

*Model was fine-tuned using 276 million Polish tokens; the initial count of tokens it was pre-trained on is not included.
{ "creation_datetime": "2024-03-04", "file_name": "2402.09759v1.md", "file_path": "paper_data/2402.09759v1.md", "file_size": 31612, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
8174e8d8-668c-48ee-ac28-c97b27085b9b
# Efficient Language Adaptive Pre-Training: Extending State-Of-The-Art Large Language Models For Polish ## 3.2 The Adaptive Pre-Training

| Model-Name | Average Perplexity ↓ |
|---|---|
| APT3-1B-Base | 23.30 |
| Polish-GPT2-XL | 97.37 |

*Model was fine-tuned using 276 million Polish tokens; the initial count of tokens it was pre-trained on is not included.

The LAPT model Curie-7B was compared against excellent English LLMs and two well-established Polish decoder-only models, Polish-GPT2-XL and APT3-1B-Base (based on the LLaMA architecture). Our solution surpassed all the others by a notable margin. Additional empirical evaluations indicate that the adapted model demonstrates a high degree of linguistic competence, as reflected by its capacity to generate coherent and contextually relevant text. Most significantly, the model achieves the lowest perplexity score when benchmarked against the other language models.
{ "creation_datetime": "2024-03-04", "file_name": "2402.09759v1.md", "file_path": "paper_data/2402.09759v1.md", "file_size": 31612, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
7029217a-7043-4abe-8218-2424f53d77e3
# Efficient Language Adaptive Pre-Training: Extending State-Of-The-Art Large Language Models For Polish ## 3.3 Fine-Tuning For Klej Downstream Tasks

The model resulting from the experiments detailed in Section 3.2 served as the foundation for developing classifiers and regressors to address the KLEJ tasks (Section 2.3.2). A prevalent issue in the datasets was strong class imbalance, which was mitigated using weighted cross-entropy. Training a classifier took between 2 and 4 hours on average. The classifiers were trained for up to 20 epochs with early stopping (patience of 5), and hyperparameters were tuned. Minimal to no data preprocessing was applied. In instances lacking a validation dataset, a stratified 20% segment of the training dataset was used as a control sample.

RQ 2: How does the LAPT LLM perform against top models on the KLEJ benchmark? Our model Curie-7B-v1, a decoder-only model fine-tuned on 276 million tokens, handled 8 challenges exceptionally well. It came extremely close to the best baseline model, which is a native Polish model trained on significantly more Polish data. LAPT used the fewest tokens, and yet the model was powerful enough to obtain results comparable with the current SOTA on 8 out of 9 tasks. Across those 8 tasks Curie-7B-v1 achieved an average score of 89.35 while using an estimated 2-3% of the dataset size of the best model, which scored 90.7% on the same tasks. Although our model is bigger than Polish RoBERTa-v2 (large), it requires significantly fewer tokens to learn a new language, Polish.

| Model-Name | NKJP-NER | CDSC-E | CDSC-R | CBD | PolEmo2.0-IN | PolEmo2.0-OUT | DYK | PSC | AR |
|---|---|---|---|---|---|---|---|---|---|
| Curie-7B-v1 | 93.4 | 92.2 | 94.9 | 49.0 | 92.7 | 80.0 | 76.2 | 98.6 | 86.8 |
| Polish RoBERTa-v2 (large) | 95.8 | 94.3 | 95.1 | 74.3 | 93.1 | 84.0 | 75.4 | 98.8 | 89.2 |
| HerBERT (large) | 96.4 | 94.1 | 94.9 | 72.0 | 92.2 | 81.8 | 75.8 | 98.6 | 89.1 |
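The class-imbalance mitigation mentioned above can be implemented by passing per-class weights to the cross-entropy loss. The sketch below uses inverse-frequency weights as an illustrative choice; the exact weighting scheme used in the paper is not specified.

```python
# Weighted cross-entropy for imbalanced KLEJ-style classification data.
# The inverse-frequency weighting here is an assumption, not the paper's exact recipe.
import torch
from collections import Counter

labels = [0, 0, 0, 0, 1, 2, 2]                  # toy training labels
counts = Counter(labels)
num_classes = len(counts)
class_weights = torch.tensor(
    [len(labels) / (num_classes * counts[c]) for c in range(num_classes)],
    dtype=torch.float,
)
loss_fn = torch.nn.CrossEntropyLoss(weight=class_weights)

logits = torch.randn(len(labels), num_classes)  # placeholder classifier outputs
loss = loss_fn(logits, torch.tensor(labels))
print(class_weights, loss.item())
```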
{ "creation_datetime": "2024-03-04", "file_name": "2402.09759v1.md", "file_path": "paper_data/2402.09759v1.md", "file_size": 31612, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
da58160b-6475-4666-86a4-f3a984963995
# Efficient Language Adaptive Pre-Training: Extending State-Of-The-Art Large Language Models For Polish ## 3.3 Fine-Tuning For Klej Downstream Tasks

| Model-Name | NKJP-NER | CDSC-E | CDSC-R | CBD | PolEmo2.0-IN | PolEmo2.0-OUT | DYK | PSC | AR |
|---|---|---|---|---|---|---|---|---|---|
| Curie-7B-v1 | 93.4 | 92.2 | 94.9 | 49.0 | 92.7 | 80.0 | 76.2 | 98.6 | 86.8 |
| Polish RoBERTa-v2 (large) | 95.8 | 94.3 | 95.1 | 74.3 | 93.1 | 84.0 | 75.4 | 98.8 | 89.2 |
| HerBERT (large) | 96.4 | 94.1 | 94.9 | 72.0 | 92.2 | 81.8 | 75.8 | 98.6 | 89.1 |
| XLM-RoBERTa (large) + NKJP | 94.2 | 94.2 | 94.5 | 72.4 | 93.1 | 77.9 | 77.5 | 98.9 | 88.2 |
| Polish RoBERTa (large) | 94.5 | 93.3 | 94.9 | 71.1 | 92.8 | 82.4 | 73.4 | 98.8 | 88.8 |

The model shows low performance on the cyberbullying detection (CBD) task. This underperformance is attributed to the model's lack of exposure to a wide range of swear words. Additionally, the ambiguity of some insults, which can carry double meanings, confuses the model. The dataset employed was primarily composed of news articles, literature, and texts that use formal or semi-formal language and exclude inappropriate phrases. LAPT used the fewest tokens compared to the baselines, yet this was enough to obtain results almost on par with the current SOTA.

| Model-Name | Batch Size | Update Steps | Corpus Size | Tokens Seen |
|---|---|---|---|---|
| Curie-7B | 128 | 17k | 3.11GB | *276 Million |
| Polish RoBERTa-v2 (large) | 2k | 400k | 200GB | **15-30 Billion |
{ "creation_datetime": "2024-03-04", "file_name": "2402.09759v1.md", "file_path": "paper_data/2402.09759v1.md", "file_size": 31612, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
0007ee79-1419-4aa3-93f3-c10c71fef4a2
# Efficient Language Adaptive Pre-Training: Extending State-Of-The-Art Large Language Models For Polish ## 3.3 Fine-Tuning For Klej Downstream Tasks

| Model-Name | Batch Size | Update Steps | Corpus Size | Tokens Seen |
|---|---|---|---|---|
| Curie-7B | 128 | 17k | 3.11GB | *276 Million |
| Polish RoBERTa-v2 (large) | 2k | 400k | 200GB | **15-30 Billion |
| HerBERT (large) | 2.5k | 60k | Unknown | 8.6 Billion |
| XLM-RoBERTa (large) + NKJP | Unknown | Unknown | Unknown | 2 Billion |
| Polish RoBERTa (large) | 30k | 50k | 135GB | **10-20 Billion |

**This presents an estimated range of token numbers derived from the cited datasets, inferred due to the lack of explicit mention in the associated repositories or papers.
{ "creation_datetime": "2024-03-04", "file_name": "2402.09759v1.md", "file_path": "paper_data/2402.09759v1.md", "file_size": 31612, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
3ebf9378-3979-446e-8d27-1d4905738ef5
# Efficient Language Adaptive Pre-Training: Extending State-Of-The-Art Large Language Models For Polish ## 4 Power Usage, Costs And Carbon Offset

RQ 3: What are the estimated costs, time requirements, and energy consumption involved in building a model like Curie-7B-v1? The training of the model was carried out using a cloud provider. The first stage, the LAPT, took 106 GPU hours and incurred a cost of $85. Additionally, approximately $50 was spent to train and tune the hyperparameters of nine different classifiers, requiring around 60 GPU hours in a cloud setup. The approximate power consumption of the server for the whole training time can be calculated as follows:

$$450\,\text{W} \times 166\,\text{h} = 74.7\,\text{kWh}\tag{7}$$

The estimated server power consumption of 74.7 kWh is used to approximate the carbon offset. The carbon emission was calculated from the approximate carbon intensity of the local power grid as follows:

$$74.7\,\text{kWh} \times {\sim}0.61\,\text{kg eq. CO}_2/\text{kWh} \approx 45.57\,\text{kg eq. CO}_2\tag{8}$$

This calculation underscores the environmental efficiency of the proposed solution. There is no necessity to develop a foundational model, which often demands extensive training on 8, 16, 32, or even hundreds of GPUs over several days for marginally improved performance. Such an effort has already been undertaken by the Mistral AI team during the pre-training stage. In the case of the classifiers, the inference speed was remarkably fast on both an 80-watt CPU and a 300-watt GPU. Employing techniques such as pruning or quantization could further enhance environmental friendliness, reducing memory requirements and improving efficiency.
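The figures in Eqs. (7) and (8) are simple back-of-the-envelope arithmetic and can be reproduced directly; the 450 W draw and the 0.61 kg CO2-eq/kWh grid factor are the approximate values stated in the text.

```python
# Reproducing the energy and carbon estimates of Eqs. (7) and (8).
power_draw_kw = 0.450          # approximate server draw: 450 W
gpu_hours = 106 + 60           # 106 h of LAPT plus ~60 h of classifier fine-tuning
energy_kwh = power_draw_kw * gpu_hours
co2_kg = energy_kwh * 0.61     # ~0.61 kg CO2-eq per kWh for the local grid

print(f"{energy_kwh:.1f} kWh, {co2_kg:.2f} kg CO2-eq")  # -> 74.7 kWh, 45.57 kg CO2-eq
```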
{ "creation_datetime": "2024-03-04", "file_name": "2402.09759v1.md", "file_path": "paper_data/2402.09759v1.md", "file_size": 31612, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
1f6e3be5-0bf7-4051-9186-40f8e0675dca
# Efficient Language Adaptive Pre-Training: Extending State-Of-The-Art Large Language Models For Polish ## 5 Conclusions

In this paper, we introduce Language Adaptive Pre-training (LAPT) applied in the Curie-7B-v1 model, a decoder-only architecture, taking inspiration from clinical ML research. The LAPT approach demonstrates that Curie-7B-v1 matches foundational Polish models: on eight downstream tasks, it achieved an average score of 89.35% compared to the top model's 90.7%. Curie-7B-v1 achieved this score with markedly less data, utilizing just 2-3% of the dataset size. Unlike the traditional encoder-decoder models used for comparison, which are limited to predicting masked tokens, Curie-7B-v1 exhibits versatility in generating high-quality Polish text. This versatility allows it to be adapted to various problems, including classification, regression, and text generation. The integration of 2-bit quantization [22] and pruning methods into the adaptation of LLMs for low-resource languages could be a valuable area for future research; these strategies promise to improve the efficiency and accessibility of language models. This model fills a crucial gap by providing an open-source Polish LLM, laying the groundwork for developing modern, efficient business solutions.
{ "creation_datetime": "2024-03-04", "file_name": "2402.09759v1.md", "file_path": "paper_data/2402.09759v1.md", "file_size": 31612, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
43a88c6c-dec1-4aee-a465-d53e8267c6d3
# Efficient Language Adaptive Pre-Training: Extending State-Of-The-Art Large Language Models For Polish ## 6 Acknowledgements We acknowledge the financial support from Apostroph Group and express appreciation to Dr. Tomer Jack Barnea, Head of ICT at Apostroph Group, for his support of this AI research. Their assistance provided the necessary resources and expertise to overcome the challenges faced during the development of this project.
{ "creation_datetime": "2024-03-04", "file_name": "2402.09759v1.md", "file_path": "paper_data/2402.09759v1.md", "file_size": 31612, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
aa2096eb-fd5c-4705-a19a-2359e3dd187f
Moritz Stephan¹, Alexander Khazatsky¹, Eric Mitchell¹, Annie S. Chen¹, Sheryl Hsu¹, **Archit Sharma**¹, Chelsea Finn¹
{ "creation_datetime": "2024-03-04", "file_name": "2402.10893v1.md", "file_path": "paper_data/2402.10893v1.md", "file_size": 107276, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
8fa60cf7-58ac-46fb-99b9-3d9360111109
## Abstract The diversity of contexts in which large language models (LLMs) are deployed requires the ability to modify or customize default model behaviors to incorporate nuanced requirements and preferences. A convenient interface to specify such model adjustments is high-level verbal feedback, such as "Don't use emojis when drafting emails to my boss." However, while writing high-level feedback is far simpler than collecting annotations for reinforcement learning from human feedback (RLHF), we find that simply prompting a model with such feedback leads to overgeneralization of the feedback to contexts where it is not relevant. We study the problem of incorporating verbal feedback without such overgeneralization, inspiring a new method Contextualized Critiques with Constrained Preference Optimization (C3PO). C3PO uses a piece of high-level feedback to generate a small synthetic preference dataset specifying how the feedback should (and should not) be applied. It then fine-tunes the model in accordance with the synthetic preference data while minimizing the divergence from the original model for prompts where the feedback does not apply. Our experimental results indicate that our approach effectively applies verbal feedback to relevant scenarios while preserving existing behaviors for other contexts. For both human- and GPT-4-generated high-level feedback, C3PO effectively adheres to the given feedback comparably to in-context baselines while reducing overgeneralization by 30%.
{ "creation_datetime": "2024-03-04", "file_name": "2402.10893v1.md", "file_path": "paper_data/2402.10893v1.md", "file_size": 107276, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
be941b3a-c68b-4df9-94e0-8de4d46589c8
## 1 Introduction With the increasingly widespread adoption of large language models (LLMs) across diverse industries and individuals, the ability to align them with high-level human feedback for a specific user or use-case becomes increasingly important. While LLM users often want the model to adhere to broad principles at all times, such as producing fluent text, individual users and use-cases have more nuanced preferences. For example, a user may request the LLM to write more concise work emails but more detailed personal emails, making feedback context dependent. Tailoring models to accommodate such preferences is challenging: it requires extensive resources to gather preferences in all different contexts and fine-tuning the model in one context can unpredictably impact model behavior in other contexts. We study the problem of adapting models using verbal feedback that is fast and easy for people to provide (see Fig. 1). Common approaches to incorporating feedback, such as supervised context distillation (SCD) or reinforcement learning from human feedback (RLHF), use example-level supervision via either supervised completions or preference labels. Such methods require a corpus of user-provided (preference-)data, which can be costly and cumbersome to obtain. Additionally, they do not constrain model behavior outside the context that the feedback may apply, so the LLM might adapt its behavior in unintended ways, e.g. output a more verbose work email when the preference only applies to personal emails. Verbal feedback is far easier and faster for humans to provide. To this end, another common approach is to incorporate such verbal feedback into the prompt, potentially through an iterative process to continually add additional points of feedback. However, this approach requires re-using the prompt in all future queries. As more pieces of feedback accumulate, long prompts containing many context-dependent feedbacks can make inference expensive; further, identifying which pieces of feedback should apply in a given context can become difficult. We aim to adapt LLMs such that, when provided with a single sentence that specifies feedback, the model can discern which situations the feedback applies to and appropriately incorporate the feedback in future outputs. We propose Contextualized Critiques with Constrained Preference Optimization (C3PO), where we first synthetically generate hypothetical prompts in-scope and out-of-scope for the feedback. We then sample original completions to these prompts, without the feedback applied, as well as revised completions in line with the feedback. Importantly, we utilize the strong priors of existing instruction-tuned LLMs in this process and therefore
{ "creation_datetime": "2024-03-04", "file_name": "2402.10893v1.md", "file_path": "paper_data/2402.10893v1.md", "file_size": 107276, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
3b8566e5-8f41-419a-8b35-4f7b0e3b3c45
## 1 Introduction We aim to adapt LLMs such that, when provided with a single sentence that specifies feedback, the model can discern which situations the feedback applies to and appropriately incorporate the feedback in future outputs. We propose Contextualized Critiques with Constrained Preference Optimization (C3PO), where we first synthetically generate hypothetical prompts in-scope and out-of-scope for the feedback. We then sample original completions to these prompts, without the feedback applied, as well as revised completions in line with the feedback. Importantly, we utilize the strong priors of existing instruction-tuned LLMs in this process and therefore do not require any additional human supervision. We then introduce a new objective to fine-tune the LLM's response behavior. One naive approach might use original and revised completions for prompts to maximize the implicit reward of a preference model (e.g. using direct preference optimization (Rafailov et al., 2023)). However, this objective does not capture the need to leave model behavior unchanged for non-relevant prompts. Instead, C3PO jointly maximizes the implicit reward for in-scope prompts and minimizes standard cross-entropy loss between the logits of the base and fine-tuned model for out-of-scope prompts. Including the latter loss in the objective adjusts the LLM's responses to prompts where the feedback is relevant, while preserving its behavior in contexts where the feedback should not be applied. Our main contribution is C3PO, a new method for learning from verbal feedback that selectively adapts the LLM's behavior based on the context of the feedback. This novel synthetic data generation scheme and fine-tuning objective enables an LLM to extrapolate single-sentence feedback to new situations. Across numerous examples of feedback generated by humans and GPT-4, we find that C3PO accurately applies the feedback to relevant prompts and importantly, substantially reduces unintended behavior changes in scenarios where the feedback is not applicable, outperforming prior methods by over 10% when both criteria are considered. By providing adaptability to verbal feedback while reducing overgeneralization of such feedback, our work may help enhance the utility of LLMs in diverse use-cases.
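A hedged sketch of the combined objective described above: a DPO-style implicit-reward term on in-scope prompts plus a cross-entropy term that keeps the fine-tuned model's token distribution close to the frozen base model on out-of-scope prompts. The function names, the weighting factor, and the reduction over tokens are assumptions for illustration, not the paper's exact formulation.

```python
# Illustrative combination of the two C3PO loss terms (assumed weighting `lam`).
import torch
import torch.nn.functional as F

def out_of_scope_anchor_loss(policy_logits: torch.Tensor,
                             base_logits: torch.Tensor) -> torch.Tensor:
    """Cross-entropy between the frozen base model's token distribution (soft targets)
    and the fine-tuned model's distribution, averaged over positions."""
    base_probs = F.softmax(base_logits, dim=-1)
    return -(base_probs * F.log_softmax(policy_logits, dim=-1)).sum(dim=-1).mean()

def c3po_objective(dpo_loss_in_scope: torch.Tensor,
                   policy_logits_oos: torch.Tensor,   # (batch, seq, vocab) on out-of-scope prompts
                   base_logits_oos: torch.Tensor,
                   lam: float = 1.0) -> torch.Tensor:
    return dpo_loss_in_scope + lam * out_of_scope_anchor_loss(policy_logits_oos, base_logits_oos)
```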
{ "creation_datetime": "2024-03-04", "file_name": "2402.10893v1.md", "file_path": "paper_data/2402.10893v1.md", "file_size": 107276, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
62d8ddc1-cb00-4349-b602-4569919fde67
## 2 Related Work Improving language or dialogue systems from feedback has been studied in the context of various types of feedback, including learned (Walker, 2000; Böhm et al., 2019) or heuristic (Li et al., 2016) rewards on individual model outputs, preferences or rankings over pairs or sets of model samples (Ziegler et al., 2020; Ouyang et al., 2022; Rafailov et al., 2023), and natural language feedback on model outputs or behaviors (Li et al., 2017). Natural language feedback or corrections on individual model outputs have been used to improve performance in code generation (Austin et al., 2021; Chen et al., 2023), dialogue (Li et al., 2017; Hancock et al., 2019; Shi et al., 2022), and summarization (Scheurer et al., 2023). Feedback or critiques are typically used to refine model outputs during generation, iterating on or refining the model's initial response before outputting a final answer. Recent work has emphasized *self*-refinement, where an LLM generates its own feedback (Madaan et al., 2023; Huang et al., 2023; Pan et al., 2023). Some studies have shown that the final outputs from such (self-)refinement methods can be distilled back into the model, improving its base performance without requiring iterative refinement during sampling at test time (Sun et al., 2023; Lu et al., 2023; Yu et al., 2023; Yuan et al., 2024; Yang et al., 2024). Most relevant to the present work are studies leveraging natural language feedback to refine general model behaviors, rather than iteratively improving a single model output. Constitutional AI (Bai et al., 2022) uses an LLM to generate synthetic training data that encourages an LLM to follow high-level rules written by a human; Glaese et al. (2022) uses a similar approach to instill various rules into a pre-trained LLM. Context distillation (Askell et al., 2021; Snell et al., 2022) is another approach to controllability that distills the behavior of the LLM when conditioned on a piece of feedback back into the LLM *without* the feedback present, essentially 'baking in' the feedback. However, these approaches to controllability have mostly been used to instill universal behavioral changes (i.e., rules that should always be adhered to). Relatively fewer works have studied conditional or context-
{ "creation_datetime": "2024-03-04", "file_name": "2402.10893v1.md", "file_path": "paper_data/2402.10893v1.md", "file_size": 107276, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
6ba0681f-3638-4b1c-9a16-ff3d5ca54327
## 2 Related Work (2022) uses a similar approach to instill various rules into a pre-trained LLM. Context distillation (Askell et al., 2021; Snell et al., 2022) is another approach to controllability that distills the behavior of the LLM when conditioned on a piece of feedback back into the LLM *without* the feedback present, essentially 'baking in' the feedback. However, these approaches to controllability have mostly been used to instill universal behavioral changes (i.e., rules that should always be adhered to). Relatively fewer works have studied conditional or context-dependent rule following (though Clark et al. (2021) study adherence to synthetic rules for logical reasoning and commonsense). In concurrent work, Castricato et al. (2024) also utilize model completions and revisions to generate synthetic preference pairs; they use this technique to train a language model to better follow instructions that specifically request *avoiding* a given topic. A related problem is *model editing* (Sinitsin et al., 2020; Cao et al., 2021; Mitchell et al., 2021; Meng et al., 2022), which studies interventions to pre-trained models that should only apply in a relatively small neighborhood around the 'model edit' (desired intervention). Most work in model editing studies corrections to factual or reasoning errors. However, Mitchell et al. (2022) study edits that adjust the sentiment of a dialogue model for a single topic, and Murty et al. (2022) show edits (or 'patches') to sentiment classification or relation extraction models. Mao et al. (2023) extend this work by editing model behavior for a single topic according to three categories of personality traits. Akyürek et al. (2023) and Hewitt et al. (2024) study model edits aimed at debiasing LLMs; Hewitt et al. (2024) also study factual edits and corrections to syntactic errors. In contrast, our work performs general behavioral edits to pre-trained LLMs, rather than edits of a specific type or for a specific context.
{ "creation_datetime": "2024-03-04", "file_name": "2402.10893v1.md", "file_path": "paper_data/2402.10893v1.md", "file_size": 107276, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
e76568b9-f0ea-4bb4-9281-b314a1eea966
## 3 Preliminaries We first outline two approaches for updating LLMs with high-level verbal feedback: supervised context distillation and preference-based reinforcement learning (PbRL). Supervised context distillation. A simple but effective method for updating language models from feedback is supervised context distillation (SCD; Askell et al. (2021)), which incorporates a textual context z containing a general principle (e.g., "Always be nice!") or information (e.g., "Assume the US president is Joe Biden.") into a model's behavior. SCD 'distills' the behavior that a human or an LLM $\pi_0$ would produce when conditioned on both a user query x and the context z into the LLM without the context present. That is, from a dataset of unlabeled user queries or prompts $\mathcal{D}_u = \{x_i\}$, a distillation target $y_i$ is either written by a human or generated by the LLM $\pi_0$ for each $x_i$ as $y_i \sim \pi_0(\cdot \mid x_i, z)$. The language model $\pi_\theta$ is produced by supervised fine-tuning with the negative log likelihood loss, using the synthetic supervision targets:

$${\cal L}_{\rm SFT}({\cal D})=-{\mathbb{E}}_{x,y\sim{\cal D}}\log\pi_{\theta}(y\mid x),\tag{1}$$

where $\pi_\theta$ is typically initialized as $\pi_0$. Preference-based reinforcement learning. Preference-based reinforcement learning (PbRL; Busa-Fekete et al., 2014; Saha et al., 2023) is the most widely-used approach to updating language models from feedback, leveraging preferences over pairs of LLM-generated responses $y, y'$ to an input $x$. The responses $y, y'$ are typically sampled from a language model $\pi_0$ fine-tuned with SCD or a similar objective (Ziegler et al., 2020; Bai et al., 2022; Ouyang et al., 2022). The input $x$ may be an instruction, document to summarize, or dialogue history, for example. Given an input $x$ and responses $y, y'$, an annotator (either a human or an LLM) labels which response is better, ultimately producing a dataset $\mathcal{D}_{\text{pref}} = \{x_i, y_i^+, y_i^-\}$, where
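A minimal sketch of the SCD recipe just described: sample a distillation target from the base model conditioned on the context z, then fine-tune on the (query, target) pair without z using the NLL loss of Eq. (1). The model name, prompt formatting, and generation settings are illustrative assumptions.

```python
# Sketch of supervised context distillation (SCD). Not a production recipe:
# prompt formatting, sampling settings, and the checkpoint are assumptions.
# (In practice the model would be loaded in half precision on a GPU.)
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

name = "mistralai/Mistral-7B-Instruct-v0.1"          # assumed instruction-tuned pi_0
tok = AutoTokenizer.from_pretrained(name)
pi_0 = AutoModelForCausalLM.from_pretrained(name)

z = "Always be nice!"                                 # high-level context / principle
x = "My package arrived late. Draft a reply to customer support."

# 1) Sample a distillation target y ~ pi_0(. | x, z).
ctx_ids = tok(f"{z}\n\n{x}", return_tensors="pt").input_ids
full_ids = pi_0.generate(ctx_ids, max_new_tokens=128, do_sample=True)
y_ids = full_ids[:, ctx_ids.shape[1]:]                # keep only the generated continuation

# 2) Fine-tune on (x, y) WITHOUT the context z, using the NLL loss of Eq. (1).
xy_ids = torch.cat([tok(x, return_tensors="pt").input_ids, y_ids], dim=1)
loss = pi_0(input_ids=xy_ids, labels=xy_ids).loss     # in practice, mask the prompt tokens
loss.backward()                                       # pi_theta is initialized from pi_0
```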
{ "creation_datetime": "2024-03-04", "file_name": "2402.10893v1.md", "file_path": "paper_data/2402.10893v1.md", "file_size": 107276, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
5faab4fd-dfb2-43f9-a3fb-46e91fcd1bcc
## 3 Preliminaries an input x. The responses $y, y'$ are typically sampled from a language model $\pi_0$ fine-tuned with SCD or a similar objective (Ziegler et al., 2020; Bai et al., 2022; Ouyang et al., 2022). The input $x$ may be an instruction, document to summarize, or dialogue history, for example. Given an input $x$ and responses $y, y'$, an annotator (either a human or an LLM) labels which response is better, ultimately producing a dataset $\mathcal{D}_{\text{pref}} = \{x_i, y_i^+, y_i^-\}$, where $y_i^+$ is preferred to $y_i^-$ for query $x_i$, as judged by the annotator. The dataset $\mathcal{D}_{\text{pref}}$ is used to learn a parameterized reward model $r_\phi : \mathcal{X} \times \mathcal{Y} \to \mathbb{R}$ that assigns scalar goodness scores to individual input-response pairs. The most common objective for training a reward model is maximum likelihood in the Bradley-Terry choice model (Bradley & Terry, 1952):

$$\mathcal{L}_{\text{BT}}(\phi) = -\mathbb{E}_{x,y^+,y^-}\log p_{r_\phi}(y^+ \succ y^- \mid x, y^+, y^-)\tag{2}$$
$$= -\mathbb{E}_{x,y^+,y^-}\log \sigma\left(r_\phi(x, y^+) - r_\phi(x, y^-)\right)\tag{3}$$

Early methods for fine-tuning LLMs from human preferences followed the reward modeling stage with a policy optimization stage aimed at finding a language model policy $\pi_\theta$ that produces high-reward responses without deviating excessively from the LLM that generated the responses in $\mathcal{D}_{\text{pref}}$ (Schulman et al., 2017; Ziegler et al., 2020). More recently, direct preference optimization (DPO; Rafailov et al. (2023)) shows that the optimal policy can be extracted from the learned reward in closed form, avoiding the need for iterative policy search. Due to its simplicity and computational efficiency, we use the DPO algorithm for learning from preference data in this work. DPO directly optimizes the language model policy from preferences using the loss:

$$\mathcal{L}_{\text{DPO}}(\mathcal{D}) = -\mathbb{E}_{x,y^+,y^-\sim\mathcal{D}}\log\sigma\left(\log\frac{\pi_\theta(y^+\mid x)}{\pi_0(y^+\mid x)} - \log\frac{\pi_\theta(y^-\mid x)}{\pi_0(y^-\mid x)}\right)\tag{4}$$
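A minimal PyTorch transcription of Eq. (4), operating on summed per-sequence log-probabilities under the policy $\pi_\theta$ and the frozen reference $\pi_0$. Note that the standard DPO formulation scales the log-ratio difference by a temperature $\beta$; the equation as written above omits it, so $\beta$ defaults to 1.0 here. The toy values are made up for illustration.

```python
# Minimal sketch of the DPO loss in Eq. (4).
import torch
import torch.nn.functional as F

def dpo_loss(policy_logp_pos: torch.Tensor,   # log pi_theta(y+ | x), shape (batch,)
             policy_logp_neg: torch.Tensor,   # log pi_theta(y- | x)
             ref_logp_pos: torch.Tensor,      # log pi_0(y+ | x)
             ref_logp_neg: torch.Tensor,      # log pi_0(y- | x)
             beta: float = 1.0) -> torch.Tensor:
    pos_logratio = policy_logp_pos - ref_logp_pos
    neg_logratio = policy_logp_neg - ref_logp_neg
    return -F.logsigmoid(beta * (pos_logratio - neg_logratio)).mean()

# Toy usage with made-up sequence log-probabilities:
loss = dpo_loss(torch.tensor([-5.0]), torch.tensor([-7.0]),
                torch.tensor([-6.0]), torch.tensor([-6.5]))
print(loss.item())
```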
Table 1. An example feedback with generated categories and in-scope, out-of-scope, and near-scope prompts.

| | |
|---|---|
| Feedback | For specific Python coding questions, respond with only a code snippet and no explanations before or after the snippet. |
| Categories | Data structures in various languages; Statistical computing in different environments |
| In-scope prompts | Write a basic Queue class in Python. / How can I implement backprop in Python? |
| Out-of-scope prompts | When did the Haitian revolution begin? / Can you explain relativity in a paragraph? |
| Near-scope prompts | What good C++ libraries are there for trees? / Is Python or Julia more popular for NLP? |

Our approach leverages PbRL to update a language model from high-level verbal feedback and does not assume that a preference dataset is directly available at the outset.

## 4 Reinforcement Learning From Verbal Feedback Using Contextualized Critiques With Constrained Preference Optimization

Our goal is to enable adaptation to high-level verbal feedback without extensive human annotation. The verbal feedback z corresponds to a short passage of natural language text that describes the user's feedback on the model's current behavior. Unfortunately, naively applying existing approaches to this problem leads the model to *overgeneralize*, applying the feedback both when it should be applied and when it should not. Our aim is to develop a method capable of applying the feedback only where it is appropriate.

Starting with a base language model π0, our approach, Contextualized Critiques with Constrained Preference Optimization (C3PO), uses a strong general-purpose model (such as GPT-4) to translate a piece of verbal feedback z into a dataset that can be fine-tuned on to achieve feedback adherence without overgeneralization. This dataset is an aggregation of three sub-datasets, each serving a unique purpose. The first component, $\mathcal{D}_{\text{in-scope}}$, exists to demonstrate the desired change of behavior. Next, $\mathcal{D}_{\text{out-of-scope}}$ allows us to maintain behavior outside the scope of the feedback. Lastly, $\mathcal{D}_{\text{near-scope}}$ is adversarially designed to refine the model's understanding of where it is appropriate to apply the feedback. To update the model, we jointly train on $\mathcal{D}_{\text{in-scope}}$ with PbRL and on the union of $\mathcal{D}_{\text{out-of-scope}}$ and $\mathcal{D}_{\text{near-scope}}$ with simple SFT. We now describe our dataset generation procedure in more detail.

**Translating high-level verbal feedback into a fine-tuning dataset.** PbRL algorithms like DPO optimize a model's behavior only for a given distribution of inputs; thus, to incorporate a piece of feedback z, we must first determine the distribution of model inputs where the model's behavior should change. Given a piece of feedback z, C3PO uses GPT-4 to first generate a set of K *categories* of prompts where the feedback could apply. GPT-4 then generates M prompts $x^{\text{in-scope}}_i$ ($\frac{M}{K}$ for each category) where the feedback applies. However, beyond accommodating feedback for in-scope prompts, we must also avoid overgeneralization of the feedback to prompts where it does not apply. We therefore generate a set of M prompts $x^{\text{near-scope}}_i$ in each category that are superficially related to the feedback in some way (lexically or semantically), but are not actually inputs where the model's behavior should
change. Finally, we use a fixed set of M feedback-independent prompts $x^{\text{out-of-scope}}_i$ to avoid degradation of completely unrelated model behaviors. See Table 1 for an example feedback, categories, and prompts, and Figure 3 for a summary of the data generation scheme.

To capture the desired delta in behavior denoted by the feedback, we generate the dataset of preference pairs from the in-scope prompts, $\mathcal{D}_{\text{in-scope}} = \{x^{\text{in-scope}}_i, y^+_i, y^-_i\}$. The dispreferred response $y^-_i$ is generated by the language model that originally received the feedback, i.e., $y^-_i \sim \pi_0(\cdot \mid x^{\text{in-scope}}_i)$. To generate $y^+_i$, the language model is then prompted to revise $y^-_i$ to incorporate the feedback, i.e., $y^+_i \sim \pi_0(\cdot \mid x^{\text{in-scope}}_i, y^-_i, z)$. See Appendix D for the complete prompt format. Thus, to the extent that the model π0 can correctly interpret the given feedback, the generated preference data represents the desired 'delta' in behavior described by the feedback z. To control for model degradation on out-of-scope prompts, we populate $\mathcal{D}_{\text{near-scope}} = \{x^{\text{near-scope}}_i, y^{\text{near-scope}}_i\}$ and $\mathcal{D}_{\text{out-of-scope}} = \{x^{\text{out-of-scope}}_i, y^{\text{out-of-scope}}_i\}$ with the respective prompts and corresponding completions sampled from the initial language model π0. These datasets encode the behaviors that we want to *preserve* after incorporating z.

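The following schematic sketch summarizes this data-generation procedure. The `ask_gpt4` and `ask_pi0` helpers are hypothetical stand-ins for calls to GPT-4 and to the baseline model π0, and the inline prompt strings abbreviate the full templates in Appendix D.

```python
# Schematic sketch of C3PO's synthetic data generation. `ask_gpt4` and `ask_pi0` are
# hypothetical helpers standing in for API calls to GPT-4 and to the baseline model pi_0;
# the inline prompt strings are abbreviations of the Appendix D templates.
from typing import Callable, Dict, List, Tuple

def build_c3po_data(feedback: str,
                    ask_gpt4: Callable[[str], str],
                    ask_pi0: Callable[[str], str],
                    K: int = 32, M: int = 960) -> Tuple[List[Dict], List[Dict]]:
    categories = ask_gpt4(
        f"List {K} categories of prompts where this feedback could apply: {feedback}"
    ).splitlines()[:K]

    in_scope, near_scope = [], []
    for cat in categories:
        in_scope += ask_gpt4(
            f"Write {M // K} prompts in category '{cat}' where the feedback '{feedback}' applies."
        ).splitlines()
        near_scope += ask_gpt4(
            f"Write {M // K} prompts in category '{cat}' related to but NOT covered by '{feedback}'."
        ).splitlines()

    # Preference pairs on in-scope prompts: y- from pi_0, y+ = pi_0's revision of y-.
    d_in_scope = []
    for x in in_scope:
        y_neg = ask_pi0(x)
        y_pos = ask_pi0(f"Prompt: {x}\nPrevious response: {y_neg}\n"
                        f"Revise the response to incorporate this feedback: {feedback}")
        d_in_scope.append({"prompt": x, "chosen": y_pos, "rejected": y_neg})

    # Behaviors to preserve: baseline completions on near-scope prompts
    # (out-of-scope prompts are drawn from a fixed, feedback-independent set).
    d_near_scope = [{"prompt": x, "completion": ask_pi0(x)} for x in near_scope]
    return d_in_scope, d_near_scope
```
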
**Fine-tuning using the synthetic data.** Using our synthetically generated datasets, we now fine-tune the model π0 using a combined loss, as shown in Figure 4, that both incorporates the feedback on relevant prompts and discourages model degradation on feedback-irrelevant prompts:

$$\mathcal{L}_{\text{C3PO}}=\overbrace{\mathcal{L}_{\text{DPO}}(\mathcal{D}_{\text{in-scope}})}^{\text{Apply feedback}}+\underbrace{\lambda_{1}\mathcal{L}_{\text{SFT}}(\mathcal{D}_{\text{out-of-scope}})+\lambda_{2}\mathcal{L}_{\text{SFT}}(\mathcal{D}_{\text{near-scope}})}_{\text{Control model degradation}}\,.\tag{5}$$

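A minimal sketch of how the three terms in Equation (5) combine is shown below. The per-batch losses are assumed to have been computed already (the DPO term on D_in-scope and the SFT terms on D_out-of-scope and D_near-scope), and the default λ values follow Appendix B.

```python
# Minimal sketch of the combined C3PO objective in Eq. (5). The arguments are
# placeholders: the DPO loss (Eq. 4) on D_in-scope preference pairs and the SFT
# losses (Eq. 1) on baseline completions; lambda defaults follow Appendix B.
import torch

def c3po_loss(dpo_loss_in_scope: torch.Tensor,
              sft_loss_out_of_scope: torch.Tensor,
              sft_loss_near_scope: torch.Tensor,
              lambda1: float = 0.2, lambda2: float = 0.1) -> torch.Tensor:
    # Apply the feedback on in-scope prompts, constrain behavior elsewhere.
    return (dpo_loss_in_scope
            + lambda1 * sft_loss_out_of_scope
            + lambda2 * sft_loss_near_scope)

# Stand-in scalar losses for one batch.
loss = c3po_loss(torch.tensor(0.7), torch.tensor(0.4), torch.tensor(0.5))
```
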
**Interpreting the C3PO loss.** While the $\mathcal{L}_{\text{SFT}}$ losses simply regularize the updated model πθ toward the original model π0 for prompts not relevant to the feedback, the result of learning from the C3PO synthetic preference dataset for in-scope inputs is less obvious. C3PO generates what we refer to as *synthetic two-policy preference data* $(x, y^{+}, y^{-})$, where $y^{+}$ is always preferred to $y^{-}$. These preference tuples are constructed by simply sampling $y^{-}$ from a policy π− (the baseline model π0) and $y^{+}$ from a different policy π+ (the baseline model π0 prompted to revise a baseline model response using the feedback). Unlike preference datasets scored by a black-box human or AI annotator, we can express the optimal policy learned from such preference data in terms of the data-generating policies. We show in Appendix C that such synthetic two-policy preference data satisfies the Bradley-Terry (BT) preference model (Bradley & Terry, 1952), which assumes that preference data $(x, y, y^{\prime})$ are constructed according to some unknown scoring function $r^{*}$ as $p(y \succ y^{\prime} \mid x) = \sigma\left(r^{*}(x, y) - r^{*}(x, y^{\prime})\right)$. We show that two-policy preference data adheres to the BT model with

$$r_{\text{2p}}^{*}(x,y)=\log\frac{\pi^{+}(y\mid x)}{\pi^{-}(y\mid x)}.\tag{6}$$

Further, we show that when using π− as the reference model for PbRL, the optimal policy corresponds to

$$\pi_{\text{2p}}^{*}(y\mid x)\propto\left(\frac{\pi^{+}(y\mid x)}{\pi^{-}(y\mid x)^{1-\beta}}\right)^{\frac{1}{\beta}}.\tag{7}$$

Notable special cases of $\pi^{*}_{\text{2p}}$ are the geometric mean of π+ and π− for β = 2 and simply π+ for β = 1. For β < 1, we interpret $\pi^{*}_{\text{2p}}$ as returning a temperature-sharpened version of π+, but with a penalty on responses assigned high probability under π− (i.e., responses that respond to the user but fail to adhere to the feedback). See Appendix C for a visualization of $\pi^{*}_{\text{2p}}$ with various β in a synthetic setting.

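As a quick numerical illustration of Equation (7) and its special cases, the following toy example uses an arbitrary three-action distribution:

```python
# Toy check of Eq. (7) on a 3-action discrete example (arbitrary numbers):
# beta = 1 recovers pi_plus, beta = 2 gives the normalized geometric mean of pi_plus
# and pi_minus, and beta < 1 sharpens toward pi_plus while penalizing mass under pi_minus.
import numpy as np

pi_plus = np.array([0.7, 0.2, 0.1])
pi_minus = np.array([0.1, 0.3, 0.6])

def pi_2p(beta: float) -> np.ndarray:
    unnorm = (pi_plus / pi_minus ** (1.0 - beta)) ** (1.0 / beta)
    return unnorm / unnorm.sum()

print(np.allclose(pi_2p(1.0), pi_plus))            # True: beta = 1 gives pi_plus
geo = np.sqrt(pi_plus * pi_minus)
print(np.allclose(pi_2p(2.0), geo / geo.sum()))    # True: beta = 2 gives geometric mean
print(pi_2p(0.5))                                  # beta < 1: sharpened toward pi_plus
```
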
## 5 Experiments

Our experiments are intended to answer several research questions about learning from verbal feedback. We first investigate the question: to what extent does the overgeneralization problem occur for existing methods for learning from verbal feedback, and does C3PO mitigate this effect? Next, we study whether simple modifications to standard approaches to incorporating feedback with prompting or supervised context distillation effectively mitigate overgeneralization. Further, we study whether C3PO can learn multiple pieces of feedback, as well as the impact of the specific choice of the constraint loss used in C3PO. Before discussing the results of these experiments, we describe the datasets, evaluation metrics, and baseline methods used in our experiments.

**Datasets.** Our feedback dataset is composed of 100 pieces of feedback, where half are written by the authors and half are generated by GPT-4 using the prompt provided in Appendix D. All pieces of feedback are designed to apply only in some contexts; see Table 1 for examples. For C3PO and the SCD + Negatives baseline, the datasets $\mathcal{D}_{\text{near-scope}}$ and $\mathcal{D}_{\text{out-of-scope}}$, each containing prompts outside the feedback's scope and corresponding baseline model completions used for regularization, are sampled according to the C3PO procedure in Section 4. For each piece of feedback, $|\mathcal{D}_{\text{near-scope}}| = |\mathcal{D}_{\text{out-of-scope}}| = 960$. We sample the prompts for $\mathcal{D}_{\text{out-of-scope}}$ from the Open Instruction Generalist (OIG) dataset (LAION, 2023), which contains a mix of diverse prompts ranging from math to QA and chat. Within each of the 32 categories of prompts, we randomly select 80% to be used for training and validation, and the remainder are used for testing.

**Evaluation metrics.** Our evaluations are constructed to compare the feedback adherence of the baseline model with that of the model after learning from the feedback; we evaluate this change in behavior for both in-scope and out-of-scope prompts. For in-scope prompts, our goal is to increase feedback adherence, while for out-of-scope prompts, our goal is to preserve the rate of feedback adherence of the original model (that is, leave the baseline model unchanged). We measure feedback adherence in two ways, heuristically and with GPT-4. For 14 of the human-generated pieces of feedback, such as modifying response length or requiring the inclusion of certain words, manually crafted heuristic rules are sufficient to reliably measure which of two responses better adheres
to a given piece of feedback. For a prompt x, a model output y, and a baseline response ȳ from the baseline model, the heuristic scoring function produces a feedback score h(x, y, ȳ). This scoring function intuitively scores whether the feedback adherence of the adapted model response is better than, equal to, or worse than that of the baseline model response. h(x, y, ȳ) takes a value of 1 if y incorporates the feedback and ȳ does not, a value of 0 if both responses adhere to the feedback or neither response adheres to it, and a value of -1 if the baseline response ȳ adheres to the feedback and the adapted model response y does not. In contrast, most pieces of feedback, such as requesting the usage of more metaphors or less aggressive speech, require qualitative evaluation. In these cases, we measure relative feedback adherence using GPT-4. For a prompt x, a model output y, and a baseline response ȳ from the baseline model, we prompt GPT-4 to output a preference score when comparing the two responses using the prompt in Appendix D, producing a feedback score g(x, y, ȳ) scaled to be in the range [−1, 1]. This score measures the extent to which a given response adheres to the feedback better than the baseline response; a score of 1 denotes that the trained model response adheres to the feedback much better than the baseline model response, and −1 denotes the reverse.

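As an illustration, the sketch below implements the heuristic score h(x, y, ȳ) for one hypothetical, heuristically checkable piece of feedback (requiring a particular word to appear); the actual rules are hand-crafted per feedback.

```python
# Sketch of the heuristic feedback score h(x, y, y_bar) described above, for one
# illustrative, heuristically checkable feedback (always include a given word).
# The adherence check is a stand-in; the paper hand-crafts one rule per such feedback.
def adheres(response: str, required_word: str = "please") -> bool:
    return required_word in response.lower()

def h(prompt: str, trained_response: str, baseline_response: str) -> int:
    a, b = adheres(trained_response), adheres(baseline_response)
    if a and not b:
        return 1    # trained model adheres, baseline does not
    if b and not a:
        return -1   # baseline adheres, trained model does not
    return 0        # both or neither adhere

# Per-feedback in-scope adherence is the mean of h (or g) over the in-scope test prompts.
examples = [("write a request", "Please send the file.", "Send the file.")]
scores = [h(x, y, y_bar) for x, y, y_bar in examples]
print(sum(scores) / len(scores))
```
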
We use these metrics (heuristic scoring or GPT-4 scoring) to measure the feedback adherence of the trained model responses compared to the responses of the base model for in-scope prompts. The in-scope adherence score for an algorithm on a given piece of feedback is the average of the per-prompt feedback scores (h(x, y, ȳ) if the prompt x is heuristically checkable, g(x, y, ȳ) otherwise) across all in-scope test prompts for that feedback. The overall in-scope adherence score $S_{\text{in}}$ is the average of these per-feedback adherence scores over all feedbacks evaluated.

Table 2. In-scope adherence, out-of-scope behavior change, and combined score for each method.

| Method | $S_{\text{in}}$ | $S_{\text{out}}$ | $S_{\text{overall}}$ |
|---|---|---|---|
| In-Context | 0.677 ± .028 | 0.503 ± .025 | 0.587 ± .026 |
| In-Context + CoT | 0.402 ± .033 | 0.246 ± .017 | 0.578 ± .026 |
| SCD | 0.470 ± .029 | 0.269 ± .019 | 0.6005 ± .025 |
| SCD + Negatives | 0.367 ± .027 | 0.133 ± .013 | 0.617 ± .021 |
| DPO | 0.326 ± .048 | 0.517 ± .022 | 0.4045 ± .037 |
| C3PO (Ours) | 0.563 ± .031 | 0.150 ± .014 | 0.7065 ± .024 |

For out-of-scope prompts, our goal is to measure changes in the model's behavior as a result of incorporating the feedback. Because the model's behavior is most likely to change along the dimension of the feedback, we measure model behavior change on out-of-scope prompts as the average absolute *change* in feedback adherence, since over this domain our objective is to leave the model behavior unchanged. To compute the out-of-scope behavior change score for an algorithm on a given piece of feedback, we average the absolute value of the feedback scores (again, h(x, y, ȳ) if x is a heuristically checkable prompt, g(x, y, ȳ) otherwise).
That is, we take the average *change* in feedback adherence over the set of out-of-scope test prompts. The overall behavior change score $S_{\text{out}}$ is the average of these per-feedback behavior change scores over all feedbacks evaluated. Apart from these individual metrics, we define $S_{\text{overall}} = \frac{S_{\text{in}} + (1 - S_{\text{out}})}{2}$ as a combined metric with equal weighting of the in-scope feedback adherence score $S_{\text{in}}$ and one minus the out-of-scope behavior change score $S_{\text{out}}$. An $S_{\text{overall}}$ near one indicates that an algorithm effectively adheres to the given feedback better than the baseline model for in-scope prompts while preserving the level of feedback adherence of the baseline model on out-of-scope prompts.

**Methods.** We compare C3PO against both in-context-learning-based and fine-tuning methods. For **In-Context** learning, we provide the baseline model with the user query as well as the feedback and a prompt instructing the model to selectively apply the feedback whenever it is applicable to the given user query. We explore an enhancement of this naive prompting approach, **In-Context + CoT**, using chain-of-thought prompting; this approach is the same as In-Context, except that we prompt the model to first reason step-by-step about the applicability of the provided feedback to the given prompt before answering. See Appendix D for the full prompts. Next, we compare against performing supervised context distillation (**SCD**) on in-scope prompts, using the revised responses y+ generated by C3PO as the supervision targets. In addition, in order to better control overgeneralization, we evaluate **SCD + Negatives**, which adds a weighted constraint to the SCD loss over out-of-scope prompts, using the baseline model responses on these prompts as the
supervision target. Finally, we evaluate **DPO** on the preference dataset $\mathcal{D}_{\text{in-scope}}$ without additional regularization, essentially an ablation of C3PO's regularization losses. For all experiments, we use Mistral-7B-Instruct-v0.2 (Jiang et al., 2023) and train with Low-Rank Adaptation (LoRA; Hu et al., 2021) with a rank of 64 and an alpha of 128. We use a learning rate of 5e-5 with a cosine decay schedule and a warmup ratio of 0.05, and we train for 1 epoch.

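The stated fine-tuning configuration roughly corresponds to the following peft/transformers setup. This is a sketch rather than the authors' training script; in particular, the LoRA target modules and any unlisted settings are assumptions.

```python
# Sketch of the stated fine-tuning configuration (LoRA r=64, alpha=128, dropout 0.05,
# lr=5e-5, cosine schedule, warmup ratio 0.05, 1 epoch). Target modules and other
# unlisted settings are assumptions, not taken from the paper.
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM, TrainingArguments

model = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-Instruct-v0.2")
lora_config = LoraConfig(
    r=64,
    lora_alpha=128,
    lora_dropout=0.05,                    # value reported in Appendix B
    target_modules=["q_proj", "v_proj"],  # assumption: typical attention projections
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)

training_args = TrainingArguments(
    output_dir="c3po-adapter",
    learning_rate=5e-5,
    lr_scheduler_type="cosine",
    warmup_ratio=0.05,
    num_train_epochs=1,
)
```
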
## 5.1 Quantifying And Mitigating Overgeneralization

Our initial results in Figure 2 show that for existing approaches to learning from verbal feedback, successfully incorporating the feedback (large x-axis value) leads to application of that feedback for prompts where the feedback does not apply (large y-axis value). That is, successful adherence to feedback for in-scope prompts comes at a high cost in terms of incorrect adherence to that feedback for out-of-scope prompts, shown by the large slope of the best-fit lines. This result also shows that C3PO trades off in-scope adherence and out-of-scope behavior preservation much more efficiently, shown by the much smaller slope of its best-fit line.

In this section, we study the impact of incorporating verbal feedback in terms of the evaluation metrics $S_{\text{in}}$ and $S_{\text{out}}$, measuring out-of-scope behavior change for both generic out-of-scope prompts and more difficult near-scope prompts. The results are shown in Figure 6. C3PO dramatically reduces the behavior change for both general out-of-scope prompts and near-scope prompts, while only slightly reducing feedback adherence for in-scope prompts.

It is natural to wonder whether we can improve the performance of the baseline methods using a similar goal of constraining the behavior change for out-of-scope prompts. We therefore evaluate the modifications of the In-Context and SCD methods, In-Context + CoT and SCD + Negatives, intended to reduce behavior change. The In-Context + CoT method first performs chain-of-thought reasoning to decide whether the feedback is applicable to the given input before responding; the SCD + Negatives baseline mixes in the regularization loss on the $\mathcal{D}_{\text{near-scope}}$ and $\mathcal{D}_{\text{out-of-scope}}$ datasets. We report the in-scope feedback adherence and out-of-scope behavior change (averaged over the general out-of-scope and near-scope prompts) in Table 2. While both improvements do substantially reduce the amount of overgeneralization compared to the original version of each method, they come at a substantial cost to in-scope feedback adherence. Therefore, the overall score $S_{\text{overall}}$ does not substantially improve for either of these approaches; C3PO, in contrast, offers a more efficient trade-off between in-scope feedback adherence and out-of-scope behavior change, shown by its significantly higher $S_{\text{overall}}$. We therefore conclude that C3PO is an effective way to reduce overgeneralization while generally maintaining in-scope feedback adherence. This claim is supported by the
results in Figure 5.

## 5.2 Adhering To Multiple Feedbacks

So far, our experiments have evaluated the average result of applying a single piece of feedback to a pre-trained model. While a comprehensive evaluation of continual learning of many feedbacks is out of the scope of this work, we perform an initial investigation in this direction to assess the effectiveness of mixing the LoRA parameters learned from separate feedbacks. That is, we perform C3PO separately on two different feedbacks z1 and z2, producing LoRA parameters ϕ1 and ϕ2. Ideally, to produce a model that adheres to both z1 and z2, rather than re-training, we could simply use the mixture of residual parameters ϕ′ = ϕ1 + ϕ2. In this section, we compare the performance of this approach to combining feedback-adapted models. Figure 7 compares the average feedback adherence and behavior change when applying and evaluating only a single piece of feedback at a time (left) with the average feedback adherence and behavior change on two feedbacks after applying the merged LoRA parameters ϕ′ (right), averaged across 20 feedbacks (10 pairs). We observe virtually no degradation in in-scope feedback adherence and no change in out-of-scope behavior change. This result is promising for the possibility of enabling rapid, mix-and-match personalization and customization of large language models without re-training.

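One literal way to realize ϕ′ = ϕ1 + ϕ2 is to sum the two adapters' LoRA parameters elementwise, as in the sketch below (the file paths are placeholders, and this is not necessarily how the authors implemented the merge).

```python
# Sketch of merging two feedback-specific LoRA adapters by summing their parameters,
# i.e. phi' = phi_1 + phi_2 as described above. File paths are placeholders.
import torch

adapter_1 = torch.load("feedback_1_adapter.bin")  # state dict of LoRA weights (phi_1)
adapter_2 = torch.load("feedback_2_adapter.bin")  # state dict of LoRA weights (phi_2)

merged = {name: adapter_1[name] + adapter_2[name] for name in adapter_1}
torch.save(merged, "merged_adapter.bin")
```
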
## 5.3 Choice Of C3PO Constraint Formulation

C3PO constrains the feedback-adapted model by maximizing the likelihood of the baseline model's responses on out-of-scope prompts during fine-tuning. A natural alternative is to instead perform full knowledge distillation (Hinton et al., 2015) over the conditional distributions of the adapted model and the baseline model at each time step. That is, we perform knowledge distillation on only out-of-scope prompts, where the baseline model is the teacher and the adapted model is the student. Rather than simply maximizing the likelihood of baseline model samples for each out-of-scope prompt, this constraint minimizes the KL-divergence between the baseline model's conditional distribution and the adapted model's conditional distribution, averaged over all timesteps in the baseline model's completion. While this form of constraint has successfully leveraged the 'dark knowledge' represented by the lower-probability logits in the model's output to constrain neural networks in the context of continual learning (Buzzega et al., 2020), we find that this stronger constraint substantially impairs in-scope feedback adherence compared to the maximum likelihood constraint. Alternative approaches to constraining out-of-scope model behavior are an important topic for future work.

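For reference, the stronger constraint discussed here corresponds to an average per-token KL divergence with the baseline model as teacher; a minimal sketch (with stand-in logits) is:

```python
# Sketch of the stronger knowledge-distillation constraint discussed above: average
# per-token KL( pi_0 || pi_theta ) on out-of-scope completions, versus the simpler
# maximum-likelihood (SFT) constraint C3PO actually uses. Logits here are stand-ins.
import torch
import torch.nn.functional as F

def per_token_kl(teacher_logits: torch.Tensor, student_logits: torch.Tensor) -> torch.Tensor:
    # logits: (seq_len, vocab); KL averaged over the timesteps of the baseline completion
    teacher_logp = F.log_softmax(teacher_logits, dim=-1)
    student_logp = F.log_softmax(student_logits, dim=-1)
    return F.kl_div(student_logp, teacher_logp, log_target=True, reduction="batchmean")

kl = per_token_kl(torch.randn(16, 32000), torch.randn(16, 32000))
```
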
## 6 Discussion & Future Work

As large language models become more ubiquitous and widely adopted, the ability to easily customize and personalize their behaviors is increasingly valuable. We have therefore studied the problem of learning from high-level verbal feedback, where a user provides a short piece of written feedback describing a desired change to a language model's behaviors, and an algorithm must update the model to adhere to this feedback when it is appropriate, but preserve the model's behaviors elsewhere. We showed that existing methods for fine-tuning language models from feedback demonstrate severe **overgeneralization**: after incorporating the feedback, they also change model behavior for inputs that are *not* relevant to the provided feedback. To mitigate this problem, we introduced Contextualized Critiques with Constrained Preference Optimization (C3PO), an algorithm that performs reinforcement learning from verbal feedback (RLVF). C3PO leverages existing language models to generate a set of small fine-tuning datasets that encode both the desired change in behavior described by the feedback and the set of behaviors that should be preserved for inputs unrelated to the feedback. We found that C3PO substantially reduces overgeneralization while still adhering to feedback for relevant inputs.

Our experiments raise several important questions for future work. Can we perform continual learning from feedback by simply continually aggregating and mixing adapted model weights? In addition, our ablations of the C3PO constraint loss function suggest that the proper level of 'strictness' when constraining the model update is a non-trivial problem, and better-performing alternatives may exist. Finally, future work may investigate the relationship between the complexity of the feedback and the capabilities of the base model being adapted.

## Acknowledgements

The authors gratefully acknowledge the OpenAI Researcher Access Program and Modal.com for providing compute credits used in this project. CF is a CIFAR Fellow. EM acknowledges Kyle Hsu for helpful conversations.

## A Sampling Details

We sample from GPT-4 using a temperature of 0.7 and a top-p value of 0.7. When sampling from Mistral-7B-Instruct-v0.2, we use a temperature of 0.7, a top-p value of 0.7, a top-k value of 50, and a repetition penalty of 1.

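Expressed as a transformers generation configuration for the Mistral model, these settings look roughly as follows (other generation options are left at their defaults):

```python
# The stated decoding parameters for Mistral-7B-Instruct-v0.2, expressed as a
# transformers GenerationConfig (a sketch; unlisted generation settings are defaults).
from transformers import GenerationConfig

mistral_generation_config = GenerationConfig(
    do_sample=True,
    temperature=0.7,
    top_p=0.7,
    top_k=50,
    repetition_penalty=1.0,
)

# GPT-4 sampling uses temperature=0.7 and top_p=0.7 via the API's corresponding arguments.
```
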
## B Training Details

To conduct our hyperparameter search, we select 10 arbitrary pieces of feedback from the human-generated feedback dataset. For all methods, we train using LoRA and choose a rank of 64, an alpha of 128, and a LoRA dropout of 0.05. We observe that a smaller rank results in decreased in-scope feedback adherence, while increasing the rank above 64 results in degradation of model completions across all methods. We train for 1 epoch with a learning rate of 5e−5 using a cosine learning rate schedule and a 0.05 warmup ratio. We ablate learning rates from 1e−7 to 1e−4 and find that below 1e−5, in-scope feedback adherence never increases sufficiently, even when training for multiple epochs. We thus picked our learning rate to be larger than this threshold but still small enough to avoid a plateauing loss. In addition, we experimented with training for multiple epochs but found that this does not significantly alter the results and sometimes even increases feedback adherence for out-of-scope and near-scope prompts.

To choose λ1 and λ2 of the $\mathcal{L}_{\text{C3PO}}$ objective, we conducted a grid search over the two hyperparameters. We found that it is crucial to set both λ1 > 0 and λ2 > 0, and that increasing λ2, which is the weight for the SFT loss over the near-scope samples, beyond 0.1 decreases in-scope feedback adherence drastically while only marginally mitigating overgeneralization. Additionally, we found that C3PO is not very sensitive to the choice of λ1 as long as it is non-zero, and we chose 0.2 due to the best in-scope to out-of-scope performance trade-off on our evaluation set. For both C3PO and DPO, we select a β parameter of 0.1. We ablate with values of 0.05, 0.15, and 0.25 and find that a setting of 0.05 results in less in-scope feedback adherence and more overgeneralization, while β > 0.1 only reduces in-scope feedback adherence.

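A sketch of the sweeps described above is given below. The exact grid values for λ1 and λ2 are not reported, so the lists here are assumptions, and `evaluate_c3po` is a hypothetical stand-in for training and evaluating C3PO on the 10 development feedbacks.

```python
# Sketch of the hyperparameter sweeps described above. The grids for lambda1/lambda2
# are assumptions (only the chosen values 0.2 / 0.1 are reported); beta and learning-rate
# values are those reported. `evaluate_c3po` is a hypothetical stand-in.
from itertools import product

def evaluate_c3po(lambda1, lambda2, beta, lr):
    """Hypothetical: train C3PO on the 10 dev feedbacks and return (S_in, S_out)."""
    return 0.0, 0.0

lambda_grid = [0.0, 0.1, 0.2, 0.4]          # assumption: exact grid not reported
beta_values = [0.05, 0.1, 0.15, 0.25]       # reported beta ablation
learning_rates = [1e-7, 1e-6, 1e-5, 5e-5, 1e-4]

results = {}
for lam1, lam2 in product(lambda_grid, lambda_grid):
    results[(lam1, lam2)] = evaluate_c3po(lambda1=lam1, lambda2=lam2, beta=0.1, lr=5e-5)
```
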
## C Derivation Of Optimal Policy For PbRL On Two-Policy Preference Pairs

In this section, we derive several properties of the PbRL learning procedure used by C3PO. First, we demonstrate that the synthetic preference data generated by C3PO adheres to the commonly used Bradley-Terry model (Bradley & Terry, 1952) of discrete choice and compute the true reward function implied by the preference data. Using this reward function, we then show that the optimal policy learned by C3PO for in-scope prompts takes the form in Equation (7). Finally, we perform a simple empirical validation of this theoretical result in a synthetic bandit problem.

## C.1 Deriving The Underlying Bradley-Terry Scoring Function For Synthetic Two-Policy Preference Data

The Bradley-Terry model of discrete choices states that for a preference over two responses y, y′, we have

$$p(y\succ y^{\prime}\mid x,y,y^{\prime})=\sigma\left(r^{*}(x,y)-r^{*}(x,y^{\prime})\right)\tag{8}$$

for some true scoring function r∗. In the case of two-policy preference data, we assume that the preferred response is generated from a policy π+ and the dispreferred response is generated by a policy π−. The probability of observing the response pair y, y′ is p(y, y′ | x, A) = π+(y|x)π−(y′|x) and p(y, y′ | x, ¬A) = π+(y′|x)π−(y|x), where A is the event that y was generated by π+ and y′ by π− (and therefore y ≻ y′ by the definition of our data-generating process). By Bayes' rule, the probability of A (that y was generated by π+ and y′ by π−) is

$$p(A\mid x,y,y^{\prime})=p(y\succ y^{\prime}\mid x,y,y^{\prime})=\frac{\pi^{+}(y\mid x)\pi^{-}(y^{\prime}\mid x)}{\pi^{+}(y\mid x)\pi^{-}(y^{\prime}\mid x)+\pi^{+}(y^{\prime}\mid x)\pi^{-}(y\mid x)}.\tag{9}$$

We now set the RHS of Equation (8) equal to the RHS of Equation (9) and show that a Bradley-Terry scoring function exists. We have:

$$\sigma\left(r^{*}(x,y)-r^{*}(x,y^{\prime})\right)=\frac{\pi^{+}(y\mid x)\pi^{-}(y^{\prime}\mid x)}{\pi^{+}(y\mid x)\pi^{-}(y^{\prime}\mid x)+\pi^{+}(y^{\prime}\mid x)\pi^{-}(y\mid x)}.\tag{10}$$

Applying the transformation $\log\frac{z}{1-z}$ to both sides, we have

$$r^{*}(x,y)-r^{*}(x,y^{\prime})=\log\frac{\pi^{+}(y\mid x)}{\pi^{-}(y\mid x)}-\log\frac{\pi^{+}(y^{\prime}\mid x)}{\pi^{-}(y^{\prime}\mid x)},\tag{11}$$

implying the Bradley-Terry scoring function

$$r^{*}(x,\bar{y})=\log\frac{\pi^{+}(\bar{y}\mid x)}{\pi^{-}(\bar{y}\mid x)}+C(x)\tag{12}$$

for some value C(x) that is constant with respect to y. Therefore, synthetic two-policy preference data is Bradley-Terry, with the above scoring function.

## C.2 Deriving The Optimal Policy For C3PO For In-Scope Prompts

The optimal policy for the KL-regularized reinforcement learning objective optimized by DPO is shown in prior work (Peters & Schaal, 2007; Peng et al., 2019; Rafailov et al., 2023) to be

$$\pi^{*}(y\mid x)=\frac{1}{Z(x)}\pi_{\text{ref}}(y\mid x)\exp\left(\frac{1}{\beta}r(x,y)\right).\tag{13}$$

Substituting the optimal reward function in Equation (12), assuming C(x) = 0 as this constant term does not affect the optimal policy, we have

$$\pi^{*}(y\mid x)=\frac{1}{Z(x)}\pi_{\text{ref}}(y\mid x)\exp\left(\frac{1}{\beta}\log\frac{\pi^{+}(y\mid x)}{\pi^{-}(y\mid x)}\right)\tag{14}$$
$$=\frac{1}{Z(x)}\pi_{\text{ref}}(y\mid x)\left(\frac{\pi^{+}(y\mid x)}{\pi^{-}(y\mid x)}\right)^{\frac{1}{\beta}}.\tag{15}$$

Assuming that $\pi_{\text{ref}}=\pi^{-}$ gives

$$\pi^{*}(y\mid x)=\frac{1}{Z(x)}\,\pi^{-}(y\mid x)\,\frac{\pi^{+}(y\mid x)^{\frac{1}{\beta}}}{\pi^{-}(y\mid x)^{\frac{1}{\beta}}}\tag{16}$$
$$=\frac{1}{Z(x)}\,\pi^{+}(y\mid x)^{\frac{1}{\beta}}\,\pi^{-}(y\mid x)^{1-\frac{1}{\beta}}\tag{17}$$
$$\propto\left(\frac{\pi^{+}(y\mid x)}{\pi^{-}(y\mid x)^{1-\beta}}\right)^{\frac{1}{\beta}},\tag{18}$$

which is the optimal policy expression given in Equation (7).

## C.3 Empirically Validating The In-Scope C3PO Policy In A Synthetic Setting

We validate the theoretical results of the previous subsections in a simple bandit setting with 100 possible actions. We define the preferred policy π+ as $\pi^{+}(i)\propto\exp\left(-\frac{(i-70)^{2}}{1000}\right)$ and similarly define π− as the shifted policy $\pi^{-}(i)\propto\exp\left(-\frac{(i-40)^{2}}{1000}\right)$. We generate 1e8 preference pairs (y+, y−), where y+ ∼ π+(·) and similarly for y−. We fit a reward function to this preference data using the Bradley-Terry loss, using 400 steps of gradient descent with the Adam optimizer (Kingma & Ba, 2015) with learning rate 1.0, initializing the reward function to the zero vector. The results are shown in Appendix C.3. We find that the recovered reward function is exactly the Bradley-Terry scoring function predicted by Equation (12), and that the special cases predicted by Equation (7) hold (e.g., with β = 1, we have simply π∗ = π+). With β < 1, we obtain the intuitive behavior of amplifying the delta between π+ and π−.

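A small-scale sketch of this validation is shown below. It uses far fewer preference pairs than the 1e8 reported above, so the recovered reward is noisier, but it should still approximately match log(π+/π−) up to an additive constant, as predicted by Equation (12).

```python
# Small-scale sketch of the synthetic bandit validation described above
# (fewer preference pairs than in the paper, so the fit is noisier).
import torch

n = 100
idx = torch.arange(n, dtype=torch.float32)
pi_plus = torch.softmax(-(idx - 70) ** 2 / 1000, dim=0)
pi_minus = torch.softmax(-(idx - 40) ** 2 / 1000, dim=0)

num_pairs = 100_000
y_pos = torch.multinomial(pi_plus, num_pairs, replacement=True)
y_neg = torch.multinomial(pi_minus, num_pairs, replacement=True)

reward = torch.zeros(n, requires_grad=True)   # initialize reward to the zero vector
optimizer = torch.optim.Adam([reward], lr=1.0)
for _ in range(400):
    optimizer.zero_grad()
    loss = -torch.nn.functional.logsigmoid(reward[y_pos] - reward[y_neg]).mean()
    loss.backward()
    optimizer.step()

# Compare the recovered reward to log(pi_plus / pi_minus), up to an additive constant.
target = torch.log(pi_plus / pi_minus)
print((reward - reward.mean() - (target - target.mean())).abs().max())
```
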
## D Prompts

The following sections contain the verbatim prompts used for the various stages of the C3PO synthetic data generation procedure.

## D.1 Category Generation Prompt

You are a helpful assistant. You are helping a user come up with categories around a topic that will be used to create some questions that the topic applies to and some questions that the topic does not apply to.
Given a topic, come up with {count} creative and diverse categories that are an only slightly larger superset of the topic. Be creative, think out of the box, and keep the categories closely related to the topic. Ensure that for each category, it would be possible to easily come up with questions that are not related to the provided topic. Do not repeat categories and make sure you cover all relevant categories.
You should first respond with "THOUGHTS" that describe what you should and should not mention in your categories and why. Then output "CATEGORIES: " on a new line and output each category on a new line as part of a numbered list. Finally, output "REVISED_CATEGORIES: " on a new line followed a revised version of each of the categories you came up with. Use the revision to modify a category if it would be very hard to come up with a prompt for that category that the topic does not apply to. The revision should also be a numbered list. If you do a great job, you will be tipped $200.

--EXAMPLE 1--
TOPIC: current fashion trends
THOUGHTS: I should list categories that are either related to fashion but that are not explicitly about trends. None of the categories I respond with should be directly about fashion trends.
CATEGORIES:
1. Buying luxury fashion
2. gen-z pop culture trends
3. fast-fashion trends
4. men's attire
5. planning an outfit
...
REVISED_CATEGORIES:
1. Buying luxury fashion
2. gen-z pop culture trends
3. fast-fashion
4. men's attire
5. planning an outfit
...
-- END EXAMPLE 1--

--EXAMPLE 2--
TOPIC: social media direct messages
THOUGHTS: I could list categories related to social media or messaging in general. Any category that includes but is broader than social media messaging is fine.
CATEGORIES:
1. Communicating with strangers via DM
2. Complimenting a friend
3. Sharing a post with a friend
4. Interacting with creators
5. Making plans to meet a friend
...
REVISED_CATEGORIES:
1. Communicating with strangers
2. Complimenting a friend
3. Sharing a post with a friend
4. Interacting with creators
5. Making plans to meet a friend
...
-- END EXAMPLE 2--

TOPIC: {topic}
THOUGHTS:

## D.2 In-Scope Sampling Prompt

You are a helpful assistant that always closely follows instructions. You are provided with a topic, and category. Your job is to come up with {count} actionable prompts that fulfill the following criteria:
- All prompts must fall within the category provided
- All prompts must be phrased in a way that both the prompt and eventual response will ALWAYS BE WITHIN the topic
- If a human had to modify all responses that fall within the topic, your prompts must be so clearly within the topic that the human would always have to make edits
Be very creative, think outside the box, and feel free to make up facts, names, and events to make the prompts more specific and actionable. Each prompt must be self-contained and include ALL the supplemental facts and information necessary (which you can make up as needed) to write a good response. Each prompt should only be 1-3 sentences long. Do not repeat prompts and respond with NOTHING ELSE THAN THE PROMPTS. Output each prompt on a new line as part of a numbered list. If you do a great job, you will be tipped $200.

-- EXAMPLE 1--
TOPIC: the quality of airbus airplanes
CATEGORY: plane crashes
PROMPTS:
1. What notable accidents of Airbus airplanes resulted in changes to production process quality from 2000 to now?
2. Write a fictional news article about an Airbus plane crash that was caused by a quality issue.
3. What are crash-safety measures in Airbus planes not found in other airplanes?
4. Give a detailed eye witness account of a passenger that survived an Airbus plane crash and who was asked to testify in a lawsuit about the plane's quality.
5. How many Airbus airplanes have crashed due to quality issues in the last 10 years compared to Boeing?
6. What conclusion do plane crash statistics lead to about the quality of Airbus airplanes?
...
-- END EXAMPLE 1--

-- EXAMPLE 2--
TOPIC: texting my boss Jared
CATEGORY: asking for clarification on a task
PROMPTS:
1. Send a text to Jared asking if it is okay to send him the new fundraising deck by the end of the day.
2. Ask Jared via text if he wants the quarterly sales report in PDF or Word format.
3. Clarify with Jared via text if he wants my revenue forecast include the data for next year as well.
4. Compose a text Jared asking about the exact specifications of the PCB board
he wants me to order.
...
-- END EXAMPLE 2--

TOPIC: {domain}
CATEGORY: {category}
PROMPTS:

## D.3 Near-Scope Sampling Prompt

You are a helpful assistant that always closely follows instructions. You are provided with a topic to avoid and a category. Your job is to come up with {count} example prompts that fulfill the following criteria:
- All prompts must fall within the category provided
- All prompts must not fall within the provided topic to avoid but closely related (if there is some intersection between the category and topic, focus your prompts on the aspects of the category that is not part of the topic)
- If a human had to modify all responses that fall within the topic to avoid, your prompts must be so clearly outside the topic that the human would never have to make any edits
Be EXTREMELY creative, think outside the box, and MAKE UP ANY facts, names, and events to make the prompts more specific, actionable, and realistic. Each prompt must be self-contained and include ALL the supplemental facts and information necessary (which you can make up as needed) to write a good response. Each prompt should only be 1-3 sentences long.
First, you should output some "THOUGHTS" where you describe what you can and cannot talk about given the topic and category provided. Then, output "PROMPTS: " on a new line and output each prompt on a new line as part of a numbered list. Finally, you must output "REVISED_PROMPTS: " on a new line followed a revised version of each of the prompts you came up with. Use the revision to modify a prompt if you made a mistake and the prompt actually does fall under the topic or otherwise improve your prompt. The revision should also be a numbered list. If you do a great job, you will be tipped $200.

--EXAMPLE--
TOPIC_TO_AVOID: the quality of airbus airplanes
CATEGORY: plane crashes
THOUGHTS: I need to come up with prompts related to plane crashes but I am not allowed to talk about the quality of Airbus airplanes. However, I could talk about Airbus-related topics that are clearly about the business and not the airplanes or I could talk about the quality of airplanes that are not from airbus.
PROMPTS:
1. What are notable accidents of Boeing airplanes from 2000 to now?
2. Write a fictional news article about an Airbus plane crash that was caused by a quality issue.
3. What business segments of Airbus operate in the satellite industry?
4. What air plane manufacturers are there apart from Boeing and
Airbus?
5. Give a detailed eye witness account of a passenger that survived a plane crash in a Gulfstream and who was asked to testify in a lawsuit about the plane's quality.
6. What is the safety record of Embraer airplanes vs. Airbus?
7. What is the chance of survival in a plane crash?
8. You are the CEO of Boeing. Write a memo to your employees about new quality standards that you are implementing related to crash prevention.
9. Write insurance ad copy for a company that insures Boeing airplanes.
...
REVISED_PROMPTS:
1. What are notable accidents of Boeing airplanes from 2000 to now?
2. Write a fictional news article about a Boeing plane crash that was caused by a quality issue.
3. What business segments of Airbus operate in the satellite industry?
4. What air plane manufacturers are there apart from Boeing and Airbus?
5. Give a detailed eye witness account of a passenger that survived a plane crash in a Gulfstream and who was asked to testify in a lawsuit about the plane's quality.
6. What is the safety record of Embraer airplanes?
7. What is the chance of survival in a plane crash?
8. You are the CEO of Boeing. Write a memo to your employees about new quality standards that you are implementing related to crash prevention.
9. Write insurance ad copy for a company that insures Boeing airplanes.
...
-- END EXAMPLE--

TOPIC_TO_AVOID: {domain}
CATEGORY: {category}
PROMPTS:

## D.4 In-Context + CoT Prompt

You are a helpful assistant. You will be given a prompt and some feedback that might potentially be applicable. Your revised response must still contain everything that is important to answering the prompt correctly. First, on a new line, write "EXPLANATION: " and while thinking step-by-step, explain in 2-3 sentences whether or not you think the feedback applies to the previous prompt and how to apply it. Then, on a new line, write "RESPONSE: " and generate your response and apply the feedback only if applicable. Do not output anything besides the response after your response.

PROMPT: {prompt}
FEEDBACK: {feedback}
EXPLANATION:

## D.5 Revise Completion Prompt

You are a helpful assistant. You are given a prompt, a previous response, and some feedback. Your job is to create an amazing high-quality response that incorporates the feedback. Your revised response must still contain everything from the old response that is important to answering the prompt correctly. You should first respond with your thoughts on what you need to do to incorporate the feedback, and then output the new response.
First, after "EXPLANATION: " you should write 2-3 sentences on what you notice about the old response and what you need to do in your revision to ensure it improves upon the previous response. Make sure to think step-by-step, so your revision is as good as possible. Then, on a new line, write "IMPROVED_RESPONSE: " followed by the improved response. DO NOT OUTPUT ANYTHING ELSE AFTER THE IMPROVED RESPONSE.

PROMPT: {prompt}
PREVIOUS_RESPONSE: {response}
FEEDBACK: {feedback}
EXPLANATION:

## D.6 Evaluate Feedback Adherence Prompt

You are a helpful assistant. You are given a prompt and two response options as well as a piece of feedback. Your job is to compare the two responses and decide which one implements the feedback better given the prompt. Your response should be on a scale from 1 to 5 where each score has the following meaning:
1: RESPONSE_1 implements the feedback much better than RESPONSE_2
2: RESPONSE_1 implements the feedback better than RESPONSE_2
3: Both responses implement the feedback equally well
4: RESPONSE_2 implements the feedback better than RESPONSE_1
5: RESPONSE_2 implements the feedback much better than RESPONSE_1
First, after "EXPLANATION: " you should write 2-3 sentences on what you notice about the two responses and why one might implement the feedback better than the other. Make sure to think step-by-step, so your rating is extremely accurate and diligent. Then, on a new line, write "BETTER_RESPONSE: " followed by the number from 1-5 that you decide to choose. DO NOT OUTPUT ANYTHING ELSE AFTER THE NUMBER.

PROMPT: {prompt}
RESPONSE_1: {completion1}
RESPONSE_2: {completion2}
FEEDBACK: {feedback}
EXPLANATION:

{ "creation_datetime": "2024-03-04", "file_name": "2402.10893v1.md", "file_path": "paper_data/2402.10893v1.md", "file_size": 107276, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
8f285995-9408-4028-b167-de4fc5745424
## D.7 Evaluate Completion Helpfulness Prompt

You are a helpful assistant. You are given a prompt and two response options. Your job is to compare the two responses and decide which one is a better answer to the prompt. Your response should be on a scale from 1 to 5 where each score has the following meaning:

1: RESPONSE_1 is much better than RESPONSE_2
2: RESPONSE_1 is better than RESPONSE_2
3: Both responses answer the prompt equally well
4: RESPONSE_2 is better than RESPONSE_1
5: RESPONSE_2 is much better than RESPONSE_1

First, after "EXPLANATION: " you should write 2-3 sentences on what criteria you think a good response should fulfill, what you notice about the two responses, and why one might be better than the other. Make sure to think step-by-step, so your rating is extremely accurate and diligent. Then, on a new line, write "BETTER_RESPONSE: " followed by the score from 1-5 that you decide to choose. DO NOT OUTPUT ANYTHING ELSE AFTER THE NUMBER.

PROMPT: {prompt}
RESPONSE_1: "{completion1}"
RESPONSE_2: "{completion2}"
EXPLANATION:
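Both judge prompts (D.6 and D.7) instruct the model to end with "BETTER_RESPONSE: " followed by a 1-5 score, so the rating can be recovered with a small parser. The sketch below is one way this might be done; the function name and the example reply are illustrative assumptions, not code taken from the paper.

```python
import re

# Hypothetical parser for the judge replies produced by the D.6 and D.7 prompts.
# Both prompts end with "BETTER_RESPONSE: " followed by a 1-5 score.

def parse_better_response(reply: str) -> int | None:
    """Extract the 1-5 rating after the BETTER_RESPONSE marker, or None if absent."""
    match = re.search(r"BETTER_RESPONSE:\s*([1-5])", reply)
    return int(match.group(1)) if match else None


reply = (
    "EXPLANATION: RESPONSE_2 actually uses the requested F.A.Q. format, while "
    "RESPONSE_1 ignores the feedback.\n"
    "BETTER_RESPONSE: 5"
)
assert parse_better_response(reply) == 5
```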
{ "creation_datetime": "2024-03-04", "file_name": "2402.10893v1.md", "file_path": "paper_data/2402.10893v1.md", "file_size": 107276, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
4f560977-948e-4f80-9fc3-51234dfc83be
## D.8 Feedback Generation Prompt - Style

You are a helpful assistant that always closely follows instructions. Your overall task is to generate feedback which a user gives to a LLM to improve its responses. The user is asking the LLM for responses and has found something they would like the LLM to improve upon. This means the feedback should be something a LLM would not already follow well. For example, feedback to "write work emails more politely" is NOT GOOD because LLMs already generate very polite work emails. The feedback should target something the LLM can improve on. Assume this LLM only takes text as input and outputs only text. Your task is to generate 100 sets of effects, domain, and feedbacks based on the following instructions:

1. Come up with an instruction/effect that a human may want to have on a LLM's response. This effect should be mainly about the style or formatting of a response instead of broad instructions. The effect should not focus on content or tone.
2. Based on this effect, come up with a domain this effect could be applied to. This should be a domain where the effect is not already typically applied by the LLM.
3. Combine the effect and domain to create a piece of feedback. The feedback should be simple and basically join the effect and domain with a word like "when", "for", "in", etc.

Below are a few examples:

Example 1: { "effect": "use London gangster vernacular", "domain": "sending texts to my friend Peter", "feedback": "Use London gangster vernacular when texting my friend Peter" }
Example 2: { "effect": "be more detailed", "domain": "writing an email to my PI Anna", "feedback": "Be more detailed in your emails to my PI Anna" }
Example 3: { "effect": "be more concise", "domain": "writing an email to my boss Jared", "feedback": "Be more concise when emailing my boss Jared" }
Example 4: { "effect": "end emails with "Best,\nMoritz"", "domain": "writing work emails", "feedback": "End work emails with "Best,\nMoritz"" }
Example 5: { "effect": "use German", "domain": "writing emails to my colleague Max", "feedback": "Use German when emailing my colleague Max" }

Be creative and think out of the box. Do not repeat feedback, effects, or domains.
{ "creation_datetime": "2024-03-04", "file_name": "2402.10893v1.md", "file_path": "paper_data/2402.10893v1.md", "file_size": 107276, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
d846975e-b1c6-4528-b3b6-6966a5465225
## D.8 Feedback Generation Prompt - Style

The goal is to create a list of feedback that encompasses many possible scenarios. Output ONLY the feedback, effect, and domain in structured json format.
{ "creation_datetime": "2024-03-04", "file_name": "2402.10893v1.md", "file_path": "paper_data/2402.10893v1.md", "file_size": 107276, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
7787eb49-bd64-4ca2-b4f4-bff87174a219
## D.9 Feedback Generation Prompt - Content

You are a helpful assistant that always closely follows instructions. Your overall task is to generate feedback which a user gives to a LLM to improve its responses. The user is asking the LLM for responses and has found something they would like the LLM to improve upon. This means the feedback should be something a LLM would not already follow well. For example, feedback to "write work emails more politely" is NOT GOOD because LLMs already generate very polite work emails. The feedback should target something the LLM can improve on. Assume this LLM only takes text as input and outputs only text. Your task is to generate 100 sets of effects, domain, and feedbacks based on the following instructions:

1. Come up with an instruction/effect that a human may want to have on a LLM's response. This effect should be mainly about the content or tone of a response. The effect should not focus on style or formatting.
2. Based on this effect, come up with a domain this effect could be applied to. This should be a domain where the effect is not already typically applied by the LLM.
3. Combine the effect and domain to create a piece of feedback. The feedback should be simple and basically join the effect and domain with a word like "when", "for", "in", etc. Do not provide extra information beyond the effect and domain.

Below are a few examples:

Example 1: { "effect": "assume that I do not have any knowledge of math beyond a fourth grade level", "domain": "Explaining math questions", "feedback": "When explaining math questions, assume that I do not have any knowledge of math beyond a fourth grade level" }
Example 2: { "effect": "make stories have dramatic, negative endings", "domain": "Doing creative writing", "feedback": "Whenever you do creative writing, make sure that stories have dramatic, negative endings" }
Example 3: { "effect": "use simple language and sentences", "domain": "writing children's books", "feedback": "When writing children's books, use simple language and sentences" }
Example 4: { "effect": "use a refined and controlled tone", "domain": "texting my mom", "feedback": "When texting my mom, use a refined and controlled tone" }
{ "creation_datetime": "2024-03-04", "file_name": "2402.10893v1.md", "file_path": "paper_data/2402.10893v1.md", "file_size": 107276, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
a7c28e03-6190-4d3c-908a-a391b8d64e58
## D.9 Feedback Generation Prompt - Content

Example 5: { "effect": "use a formal and professional tone", "domain": "emailing Jared", "feedback": "When emailing Jared, use a formal and professional tone" }

Be creative and think out of the box. Do not repeat feedback, effects, or domains. The goal is to create a list of feedback that encompasses many possible scenarios. Output ONLY the feedback, effect, and domain in structured json format.
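Because the D.8 and D.9 prompts ask the model to return the effect/domain/feedback triples as structured JSON, the generated list can be loaded and lightly validated before use. Below is a minimal sketch under that assumption; the example reply string and the validation rules are illustrative, not the paper's pipeline.

```python
import json

# Hypothetical post-processing for the D.8/D.9 generation prompts, which ask the
# model to output feedback entries as structured JSON. The raw string below is an
# invented example of what such a reply might look like.

raw_reply = """
[
  {"effect": "use a formal and professional tone",
   "domain": "emailing Jared",
   "feedback": "When emailing Jared, use a formal and professional tone"}
]
"""

entries = json.loads(raw_reply)

# Keep only well-formed entries and deduplicate by the feedback string.
seen: set[str] = set()
feedback_list = []
for entry in entries:
    if not all(key in entry for key in ("effect", "domain", "feedback")):
        continue
    if entry["feedback"] in seen:
        continue
    seen.add(entry["feedback"])
    feedback_list.append(entry)

print(f"Parsed {len(feedback_list)} feedback entries")
```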
{ "creation_datetime": "2024-03-04", "file_name": "2402.10893v1.md", "file_path": "paper_data/2402.10893v1.md", "file_size": 107276, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
5bea192a-3b74-4dd3-8077-e8f37f08c381
## E Feedback Examples

Table 3 shows examples of feedback from humans and GPT-4. Adherence to 'qualitative' feedback must be measured with a language model like GPT-4, while adherence to 'quantitative' feedback can be checked with a simple heuristic or regular expression.

| Source | Type | Feedback |
|--------|------|----------|
| Human | Qualitative | Always use some heart or kiss emoji when texting my girlfriend Maddie |
| Human | Quantitative | Use the term 'lol' when responding to text messages from friends |
| GPT-4 | — | Use alliteration creatively for brand naming suggestions |
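To make the distinction concrete, the sketch below shows what a "simple heuristic or regular expression" adherence check might look like for the quantitative 'lol' feedback in the table. It is an illustrative assumption about the kind of check meant here, not the paper's evaluation code.

```python
import re

# Illustrative adherence check for the quantitative feedback
# "Use the term 'lol' when responding to text messages from friends".

def adheres_to_lol_feedback(response: str) -> bool:
    """True if the response uses the term 'lol' as a standalone word."""
    return re.search(r"\blol\b", response, flags=re.IGNORECASE) is not None


assert adheres_to_lol_feedback("lol that sounds great, see you at 7")
assert not adheres_to_lol_feedback("That sounds great, see you at 7.")
```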
{ "creation_datetime": "2024-03-04", "file_name": "2402.10893v1.md", "file_path": "paper_data/2402.10893v1.md", "file_size": 107276, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
8b5144f1-ee92-4949-bf70-04b1a335e8dd
## F Sample Results

**Feedback:** Utilize a F.A.Q. format in writing informational brochures for a healthcare clinic

**In-Context Prompt:** Write a section for a brochure discussing the healthcare clinic's partnership with local schools for health education programs.

## Baseline

**Empowering the Next Generation: Our Partnership with Local Schools**

At [Healthcare Clinic Name], we believe that a healthy community starts with a healthy foundation. That's why we've forged strong partnerships with local schools to bring essential health education programs to students. Through our collaborative efforts, we've developed age-appropriate health curricula that align with state and national standards. Our team of experienced healthcare professionals and educators work together to deliver engaging and informative sessions on topics such as:

* Proper nutrition and healthy eating habits
* Importance of regular exercise and physical activity
* Mental health awareness and stress management
* Personal hygiene and disease prevention
* Substance abuse prevention
* First aid and emergency preparedness

These interactive programs are designed to foster a lifelong commitment to wellness among students. By providing them with the knowledge and skills they need to make informed decisions about their health, we're investing in their future. Our partnership with local schools also extends beyond the classroom. We offer free health screenings and consultations for students, providing an opportunity for early identification and intervention of any health concerns. Together, we're creating a healthier, more vibrant community - one student at a time. Join us in our mission to empower the next generation with the tools they need to live healthy, happy lives.
{ "creation_datetime": "2024-03-04", "file_name": "2402.10893v1.md", "file_path": "paper_data/2402.10893v1.md", "file_size": 107276, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
355b7159-42e1-4f4a-bd64-655f6c2560b5
## In-Context

**Q: How does our healthcare clinic collaborate with local schools for health education programs?**

**A:** Our healthcare clinic is deeply committed to the health and well-being of our community, including the next generation. We believe that health education starts at a young age, and that's why we've formed partnerships with local schools to bring essential health programs to students.
{ "creation_datetime": "2024-03-04", "file_name": "2402.10893v1.md", "file_path": "paper_data/2402.10893v1.md", "file_size": 107276, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
4e3a5d3f-d19c-49d4-9cb9-68b241aff693
**Q: What types of health education programs does the clinic offer in schools?**

**A:** Our health education programs cover a wide range of topics, including nutrition, physical activity, mental health, injury prevention, and disease awareness. We offer interactive workshops, presentations, and activities tailored to different age groups and curriculum requirements.
{ "creation_datetime": "2024-03-04", "file_name": "2402.10893v1.md", "file_path": "paper_data/2402.10893v1.md", "file_size": 107276, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
e0163e82-325d-494a-b631-d8245b0d6dc6
**Q: Who leads these health education programs in schools?**

**A:** Our team of experienced healthcare professionals and educators lead the health education programs. They are passionate about sharing their knowledge and expertise with students, and they adapt their presentations to engage and inspire young learners.
{ "creation_datetime": "2024-03-04", "file_name": "2402.10893v1.md", "file_path": "paper_data/2402.10893v1.md", "file_size": 107276, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
544fd817-3e50-487b-b2de-d035343a5c58
**Q: How often do these health education programs take place in schools?**

**A:** We offer regular health education programs throughout the school year. Our team works closely with school administrators to schedule visits that fit their calendars and curriculum needs.
{ "creation_datetime": "2024-03-04", "file_name": "2402.10893v1.md", "file_path": "paper_data/2402.10893v1.md", "file_size": 107276, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
f652102b-4416-4582-9d44-a9db3671b920
**Q: Is there a cost for these health education programs in schools?**

**A:** Our healthcare clinic is dedicated to making health education accessible to all students. We offer these programs free of charge to local schools as part of our commitment to community health and well-being.
{ "creation_datetime": "2024-03-04", "file_name": "2402.10893v1.md", "file_path": "paper_data/2402.10893v1.md", "file_size": 107276, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
f5fc5be3-af13-4f93-9665-4abbaf6a35e6
**Q: How can schools get involved with the healthcare clinic's health education programs?**

**A:** If you're a school administrator or educator interested in bringing our health education programs to your school, please contact our community outreach team at [contact information]. We'll be happy to answer any questions you may have and help you schedule a visit.
{ "creation_datetime": "2024-03-04", "file_name": "2402.10893v1.md", "file_path": "paper_data/2402.10893v1.md", "file_size": 107276, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
91d12d5e-65bb-4b2f-9c80-4468069ac79f
**Q: What other ways does the healthcare clinic support local schools and students?**

**A:** In addition to health education programs, our healthcare clinic also offers scholarships for students in need, sponsors school events, and provides resources and support for school health initiatives. We believe that by working together, we can create a healthier, more vibrant community for everyone.
{ "creation_datetime": "2024-03-04", "file_name": "2402.10893v1.md", "file_path": "paper_data/2402.10893v1.md", "file_size": 107276, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
5e29d15d-301d-45b9-a02c-bc995b1a1323
## SCD

**Q: What is the partnership between [Clinic Name] and local schools about?**
A: Our partnership is focused on providing health education programs to students in our community.
{ "creation_datetime": "2024-03-04", "file_name": "2402.10893v1.md", "file_path": "paper_data/2402.10893v1.md", "file_size": 107276, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
66af73b5-3bc7-4337-b51b-07b4bd041322
**Q: Why is health education important for students?**
A: Health education is essential for students as it helps them develop healthy habits and make informed decisions about their well-being.

**Q: What health education programs does [Clinic Name] offer in schools?**
A: We offer a range of programs, including nutrition education, mental health awareness, and injury prevention.

**Q: How does [Clinic Name] engage students in these programs?**
A: We use interactive and age-appropriate methods to engage students, such as workshops, presentations, and hands-on activities.

**Q: What benefits do students gain from these programs?**
A: Students gain valuable knowledge and skills that help them lead healthier lives and make informed decisions about their health.

**Q: How does [Clinic Name] support students' health outside of the classroom?**
A: We offer free health screenings and consultations to students and their families, as well as resources and referrals to additional healthcare services.

**Q: How can parents and guardians get involved in these programs?**
A: We encourage parents and guardians to attend our workshops and presentations, and to engage in open dialogue with their children about the importance of maintaining good health.

**Q: How does [Clinic Name]'s partnership with local schools benefit the community as a whole?**
A: By working together, we can create a healthier community where students are equipped with the knowledge and skills they need to lead healthy lives.

**Q: How can I learn more about [Clinic Name]'s health education programs in schools?**
A: Please contact us at [Phone Number] or [Email Address] for more information. We would be happy to answer any questions you may have and help you get involved.
{ "creation_datetime": "2024-03-04", "file_name": "2402.10893v1.md", "file_path": "paper_data/2402.10893v1.md", "file_size": 107276, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
6eac1997-da9b-4922-b7af-9650384fc99a
## DPO

**Q: What is the goal of the health education programs?**
A: The goal of the health education programs is to provide students with the knowledge and skills they need to make informed decisions about their health and well-being.

**Q: What topics are covered in the health education programs?**
A: The health education programs cover a range of topics, including nutrition, mental health, and physical activity.

**Q: How often do the health education programs take place?**
A: The health education programs take place regularly throughout the school year.

**Q: Who leads the health education programs?**
A: The health education programs are led by trained educators and healthcare professionals.

**Q: How can I get involved in the health education programs?**
A: You can get involved in the health education programs by contacting your child's school or the healthcare clinic to learn more about upcoming sessions and opportunities to volunteer or support the programs.

**Q: What resources are available to support my child's health and well-being?**
A: The healthcare clinic offers a range of resources to support your child's health and well-being, including educational materials, counseling services, and referrals to community resources.

**Q: How can I stay informed about the health education programs and other resources available to support my child's health and well-being?**
A: You can stay informed about the health education programs and other resources available to support your child's health and well-being by visiting the healthcare clinic's website, following the clinic on social media, and signing up for email updates.

**Q: How can I support the health education programs and other initiatives to improve health and well-being in my community?**
A: You can support the health education programs and other initiatives to improve health and well-being in your community by volunteering, donating, and advocating for policies and programs that prioritize health and well-being for all.

**Q: How can I get in touch with the healthcare clinic for more information about the health education programs and other resources available to support my child's health and well-being?**
A: You can get in touch with the healthcare clinic by calling (555) 555-5555 or visiting the clinic's website to learn more about the health education programs and other resources available to support your child's health and well-being.
{ "creation_datetime": "2024-03-04", "file_name": "2402.10893v1.md", "file_path": "paper_data/2402.10893v1.md", "file_size": 107276, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
b9e9d191-6500-42bb-87a9-1e8d86d0e5ce
## C3PO

**Q: What health education programs does [Clinic Name] offer in partnership with local schools?**
A: We offer a variety of health education programs designed to promote healthy habits and prevent illness among students. These programs include:

**Q: How does [Clinic Name]'s health education programs benefit students?**
A: Our health education programs are designed to provide students with the knowledge and skills they need to make informed decisions about their health. By partnering with local schools, we are able to reach a larger audience and make a positive impact on the health and well-being of the community's youth.

**Q: What topics are covered in [Clinic Name]'s health education programs?**
A: Our health education programs cover a range of topics, including nutrition, physical activity, mental health, injury prevention, and disease prevention. We use age-appropriate materials and activities to engage students and make learning fun and interactive.

**Q: How can I get involved with [Clinic Name]'s health education programs?**
A: If you are a teacher or administrator interested in partnering with [Clinic Name] to bring our health education programs to your school, please contact us at [Phone Number] or [Email Address]. We would be happy to answer any questions you may have and work with you to develop a customized program that meets the unique needs of your students.

**Q: What is the goal of [Clinic Name]'s health education programs?**
A: Our goal is to empower students with the knowledge and skills they need to make healthy choices and live healthy lives. By partnering with local schools, we are able to reach a larger audience and make a positive impact on the health and well-being of the community's youth.

**Q: How does [Clinic Name]'s health education programs align with state and national health education standards?**
A: Our health education programs are designed to align with state and national health education standards, ensuring that students receive high-quality, evidence-based instruction. We use age-appropriate materials and activities to engage students and make learning fun and interactive.

**Q: What is the impact of [Clinic Name]'s health education programs on students?**
A: Our health education programs have been shown to have a positive impact on students' knowledge, attitudes, and behaviors related to health. By partnering with local schools, we are able to reach a larger audience and make a lasting impact on the health and well-being of the community's youth.
{ "creation_datetime": "2024-03-04", "file_name": "2402.10893v1.md", "file_path": "paper_data/2402.10893v1.md", "file_size": 107276, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
a5944287-6d83-4bcd-93b9-6ae51fcbd23c
## C3PO

**Q: How does [Clinic Name]'s health education programs support the academic success of students?**
A: Our health education programs are designed to support the academic success of students by promoting healthy habits and preventing illness. By
{ "creation_datetime": "2024-03-04", "file_name": "2402.10893v1.md", "file_path": "paper_data/2402.10893v1.md", "file_size": 107276, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
98d3d2ec-b465-4b89-8be6-312161f20c77
## Near-Context Prompt: Discuss the development of an AI chatbot that helps patients navigate their health insurance options.

## Baseline

Developing an AI chatbot to help patients navigate their health insurance options is an innovative solution to address the complexity and confusion often associated with health insurance plans. Here's a step-by-step discussion on how to develop such a chatbot:

1. Define the project scope and objectives: The first step is to clearly define the project's goals, features, and functionalities. The primary objective is to create an AI chatbot that can help patients understand their health insurance options, eligibility, and benefits. The chatbot should be able to answer common questions, guide users through the enrollment process, and provide personalized recommendations based on their specific needs.
2. Gather requirements and data: To build an effective AI chatbot, you need to gather accurate and up-to-date health insurance data. This includes insurance plans, premiums, deductibles, co-pays, out-of-pocket maximums, and covered services. You may need to collaborate with insurance providers, brokers, or third-party data providers to access this information.
3. Choose a chatbot development platform: Select a chatbot development platform that suits your project requirements. Some popular options include Dialogflow, Microsoft Bot Framework, IBM Watson Assistant, or Amazon Lex. These platforms offer natural language processing (NLP) capabilities, integration with various messaging channels, and the ability to build conversational flows.
4. Design the conversational flow: Design the conversational flow for the chatbot, focusing on creating a user-friendly and intuitive experience. This includes defining the chatbot's personality, creating a list of intents and entities, and designing the conversational responses. Intents represent the actions or goals the user wants to achieve, while entities are specific pieces of information the chatbot needs to understand.
5. Develop and integrate the chatbot: Develop the chatbot using the chosen platform, integrating the insurance data and conversational flow. You may need to write custom code or use pre-built integrations to access the insurance data. Ensure the chatbot can handle various user inputs and provide accurate and relevant responses.
6. Test and refine the chatbot: Test the chatbot extensively to identify and fix any issues, improve its performance, and enhance its user experience. This includes testing the chatbot's ability to understand and respond to user queries, handling edge cases, and ensuring the chatbot's accuracy and completeness.
7. Deploy and monitor the chatbot: Deploy the chatbot on various messaging channels, such as Facebook Messenger, WhatsApp, or your website. Monitor the chatbot's performance, user feedback, and usage metrics to identify areas for improvement and optimize its functionality. Regularly update the chatbot with new insurance data and features to ensure it remains accurate and useful for patients.
{ "creation_datetime": "2024-03-04", "file_name": "2402.10893v1.md", "file_path": "paper_data/2402.10893v1.md", "file_size": 107276, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
dd35b0e1-0058-411c-8c5c-2c5a8ecf3433
## Baseline

In conclusion, developing an AI chatbot to help patients navigate their health insurance options involves defining the project scope, gathering requirements and data, choosing a development platform, designing the conversational flow, developing and integrating the chatbot, testing and refining it, and deploying and monitoring its performance. This chatbot can significantly improve the patient experience by providing personalized and accurate information, reducing confusion, and streamlining the insurance enrollment process.
{ "creation_datetime": "2024-03-04", "file_name": "2402.10893v1.md", "file_path": "paper_data/2402.10893v1.md", "file_size": 107276, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
1428e922-bc06-4c79-940f-e243e6881e5d
## In-Context

Title: AI Chatbot "HealthNav": Your Personal Health Insurance Navigator

1. **Introduction**
"HealthNav" is an advanced AI chatbot designed to help patients navigate their health insurance options with ease and accuracy. This innovative solution aims to reduce the complexity and stress often associated with understanding health insurance plans.

2. **Features**
- Instant responses to common health insurance queries
- Personalized recommendations based on user's insurance plan and medical history
- Integration with various insurance providers for real-time information
- User-friendly interface for seamless interaction

3. **Development Process**

**Step 1:** Defining the Scope
The first step was to identify the key features and functionalities required for the AI chatbot. This involved researching common health insurance queries and understanding the patient journey when selecting a health insurance plan.
{ "creation_datetime": "2024-03-04", "file_name": "2402.10893v1.md", "file_path": "paper_data/2402.10893v1.md", "file_size": 107276, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
c68fe0f5-4e50-48c6-9e47-d8d620f3d8a2
**Step 2:** Designing the Conversational Flow
The conversational flow was designed using a combination of rule-based and machine learning algorithms. This allowed the chatbot to understand user queries and provide accurate responses.
* Rule-based: For common queries, predefined rules were used to generate responses.
* Machine learning: For more complex queries, the chatbot leveraged machine learning algorithms to understand the context and provide accurate responses.

**Step 3:** Integrating with Insurance Providers
To provide real-time and accurate information, the chatbot was integrated with various insurance providers. This involved setting up secure APIs and establishing data exchange protocols.
{ "creation_datetime": "2024-03-04", "file_name": "2402.10893v1.md", "file_path": "paper_data/2402.10893v1.md", "file_size": 107276, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
ef64bb33-37b3-4d0a-a636-b684c475c6f1
**Step 4:** Testing and Iteration
Extensive testing was carried out to ensure the chatbot provided accurate and helpful responses. Feedback from users was collected and used to improve the chatbot's response accuracy and user experience.

4. **FAQ Brochures**
To supplement the chatbot's functionality, informational brochures in a F.A.Q. format were created for patients. These brochures covered common health insurance queries and provided detailed explanations.
* Topics: Coverage details, deductibles, copays, premiums, and out-of-pocket maximums
* Format: Easy-to-understand language and visuals
* Accessibility: Available online and in print at the healthcare clinic

5. **Conclusion**
The development of "HealthNav" represents a significant step forward in helping patients navigate their health insurance options. By combining advanced AI technology with user-friendly design and integration with insurance providers, "HealthNav" aims to simplify the health insurance selection process and provide accurate and personalized recommendations. Additionally, the F.A.Q. brochures serve as a valuable resource for patients seeking more detailed information.
{ "creation_datetime": "2024-03-04", "file_name": "2402.10893v1.md", "file_path": "paper_data/2402.10893v1.md", "file_size": 107276, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
591ee3a0-4db1-48a4-9ad0-e71880d6a053
## SCD

Title: Navigating Health Insurance: An FAQ on Developing an AI Chatbot for Patients

Q: What is an AI chatbot, and how can it help patients navigate their health insurance options?
A: An AI chatbot is a computer program designed to simulate human conversation. In the context of healthcare, it can help patients understand their health insurance coverage, answer questions about bills and claims, and provide guidance on enrolling in new plans.

Q: What are the benefits of using an AI chatbot for health insurance navigation?
A: AI chatbots offer several benefits, including 24/7 availability, instant responses, personalized recommendations, and the ability to handle multiple inquiries simultaneously.

Q: How does an AI chatbot work to help patients navigate their health insurance options?
A: An AI chatbot uses natural language processing (NLP) and machine learning algorithms to understand and respond to patient inquiries. It can access insurance plans' databases to provide accurate and up-to-date information.

Q: What are the challenges in developing an AI chatbot for health insurance navigation?
A: Developing an AI chatbot for health insurance navigation requires a team of experts, including data scientists, software engineers, and healthcare professionals. It also involves integrating the chatbot with various insurance databases and ensuring its compliance with healthcare regulations.

Q: How can an AI chatbot improve patient engagement and satisfaction?
A: An AI chatbot can improve patient engagement and satisfaction by providing personalized recommendations, answering questions in real-time, and reducing the need for lengthy phone calls or in-person visits.

Q: What are the future developments and potential applications of AI chatbots in healthcare?
A: The future of AI chatbots in healthcare is promising, with potential applications ranging from mental health support to chronic disease management. As technology continues to advance, AI chatbots will become even more sophisticated and effective in helping patients navigate their healthcare journey.
{ "creation_datetime": "2024-03-04", "file_name": "2402.10893v1.md", "file_path": "paper_data/2402.10893v1.md", "file_size": 107276, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
af0acd87-3522-4d22-a881-8da60bec9f06
## DPO

**Q: What is the purpose of the AI chatbot?**
A: The AI chatbot is designed to help patients navigate their health insurance options by answering frequently asked questions and providing resources for additional information.

**Q: What is the chatbot's primary function?**
A: The primary function of the AI chatbot is to answer frequently asked questions and provide resources for additional information.

**Q: How does the chatbot work?**
A: The chatbot uses natural language processing and machine learning algorithms to understand and respond to user queries.

**Q: What is the chatbot's availability?**
A: The chatbot is available 24/7 to answer user queries.

**Q: How does the chatbot provide resources for additional information?**
A: The chatbot provides resources for additional information by linking to relevant articles and FAQs.

**Q: How does the chatbot ensure accuracy and up-to-date information?**
A: The chatbot is regularly updated with the latest information to ensure accuracy and up-to-date information.

**Q: How does the chatbot handle complex queries?**
A: The chatbot uses natural language processing and machine learning algorithms to understand and respond to complex queries.

**Q: How does the chatbot handle errors and misunderstandings?**
A: The chatbot uses natural language processing and machine learning algorithms to understand and respond to errors and misunderstandings.

**Q: How does the chatbot handle user feedback and suggestions?**
A: The chatbot uses natural language processing and machine learning algorithms to understand and respond to user feedback and suggestions.

**Q: How does the chatbot handle user privacy and security?**
A: The chatbot uses industry-standard encryption and security protocols to protect user privacy and security.

**Q: How does the chatbot handle user authentication and authorization?**
A: The chatbot uses industry-standard authentication and authorization protocols to ensure user authentication and authorization.

**Q: How does the chatbot handle user accessibility and usability?**
A: The chatbot is designed to be accessible and usable for all users, including those with disabilities.

**Q: How does the chatbot handle user experience and design?**
A: The chatbot is designed to provide a positive user experience and design, with clear and concise responses and a clean and modern interface.
{ "creation_datetime": "2024-03-04", "file_name": "2402.10893v1.md", "file_path": "paper_data/2402.10893v1.md", "file_size": 107276, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
41293f21-fee4-43ee-9155-bb7fec67ab5c
## DPO

**Q: How does the chatbot handle user engagement and retention?**
A: The chatbot is designed to engage and retain users by providing personalized and relevant responses and a clear and concise interface.

**Q: How does the chatbot handle user satisfaction and feedback?**
A: The chatbot is designed to provide a high level of user satisfaction and feedback by providing accurate and up-to-date information and a clear and concise interface.

**Q: How does the chatbot handle user support and troubleshooting?**
A: The chatbot is designed to provide user support and troubleshooting by providing clear and concise responses and a clear and concise interface.

**Q: How does the chatbot handle user integration and customization?**
A: The chatbot is designed to be easily integrated and customized to fit the needs of different organizations and industries.

**Q: How does the chatbot handle user scalability and performance?**
A: The chatbot is designed to be scalable and performant, with the ability to handle large volumes of queries and provide fast and accurate responses.

**Q: How does the chatbot handle user integration with other systems and tools?**
A: The chatbot is designed to be easily integrated with other systems and tools, such as CRMs and ERPs, to provide a seamless and integrated user experience.

**Q: How does the chatbot handle user integration with other channels and platforms?**
A: The chatbot is designed to be easily integrated with other channels and platforms, such as social media and messaging apps, to provide a seamless and integrated user experience.

**Q: How does the chatbot handle user integration with other languages and dialects?**
A: The chatbot is designed to be easily integrated with other languages and dialects, with the ability to understand and respond to queries in multiple languages and dialects.
{ "creation_datetime": "2024-03-04", "file_name": "2402.10893v1.md", "file_path": "paper_data/2402.10893v1.md", "file_size": 107276, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
5d609c2a-42fc-4c46-8746-6a4b2dd75299
## DPO

**Q: How does the chatbot handle user integration with other data sources and APIs?**
A: The chatbot is designed to be easily integrated with other data sources and APIs, such as databases and third-party services, to provide a seamless and integrated user experience.

**Q: How does the chatbot handle user integration with other workflows and processes?**
{ "creation_datetime": "2024-03-04", "file_name": "2402.10893v1.md", "file_path": "paper_data/2402.10893v1.md", "file_size": 107276, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
2891f18e-934f-4bd3-b990-5a3e1f8fbd36
## C3PO

Developing an AI chatbot to help patients navigate their health insurance options is an innovative solution to address the complexity and confusion often associated with health insurance plans. Here's a discussion on the key steps involved in developing such a chatbot:

1. Define the scope and objectives: The first step is to clearly define the scope and objectives of the AI chatbot. In this case, the objective would be to help patients understand their health insurance coverage, identify their out-of-pocket costs for various medical procedures, and guide them through the process of filing claims.
2. Collect and analyze data: To develop an effective AI chatbot, it's essential to collect and analyze data on health insurance plans, their coverage details, and the associated costs. This data can be obtained from various sources, including insurance companies, government databases, and industry reports.
3. Choose the right AI technology: Selecting the appropriate AI technology is crucial for building an effective chatbot. Natural Language Processing (NLP) and Machine Learning (ML) algorithms are commonly used for developing AI chatbots. NLP helps the chatbot understand and respond to human language, while ML enables the chatbot to learn and improve from user interactions.
4. Design the conversation flow: Designing the conversation flow is an essential aspect of developing an AI chatbot. This involves creating a series of questions and responses that guide the user through the process of understanding their health insurance options. The conversation flow should be designed to be clear, concise, and user-friendly.
5. Develop the chatbot: Once the data has been collected, the AI technology chosen, and the conversation flow designed, the chatbot can be developed. This involves writing the code, integrating the AI algorithms, and building the user interface.
6. Test and refine the chatbot: After the chatbot has been developed, it's essential to test it thoroughly and refine it based on user feedback. This involves identifying and addressing any bugs or issues, improving the conversation flow, and enhancing the user experience.
7. Deploy and monitor the chatbot: Once the chatbot has been tested and refined, it can be deployed on various platforms, such as websites, mobile apps, or messaging platforms. It's essential to monitor the chatbot's performance regularly and make any necessary updates or improvements based on user feedback and data analysis.
{ "creation_datetime": "2024-03-04", "file_name": "2402.10893v1.md", "file_path": "paper_data/2402.10893v1.md", "file_size": 107276, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }
114cbf56-f54b-48c8-8fe7-0c4487f2ede1
## C3PO

In conclusion, developing an AI chatbot to help patients navigate their health insurance options involves defining the scope and objectives, collecting and analyzing data, choosing the right AI technology, designing the conversation flow, developing the chatbot, testing and refining it, and deploying and monitoring it. This solution can significantly improve the patient experience by making it easier for them to understand their health insurance coverage and manage their medical expenses.
{ "creation_datetime": "2024-03-04", "file_name": "2402.10893v1.md", "file_path": "paper_data/2402.10893v1.md", "file_size": 107276, "file_type": null, "last_accessed_datetime": "2024-03-04", "last_modified_datetime": "2024-02-22" }