---
license: llama2
pipeline_tag: text-generation
widget:
- text: |
    Rewrite the following toxic input into a non-toxic version. Let's break the input down step by step to rewrite the non-toxic version. You should first think about the explanation of why the input text is toxic. Then generate the detoxified output. You must preserve the original meaning as much as possible.
    Input:
inference:
  parameters:
    max_new_tokens: 200
language:
- en
---

# DetoxLLM-7B



This model card corresponds to the DetoxLLM-7B detoxification model based on [LLaMA-2](https://huggingface.co/meta-llama/Llama-2-7b). The model is finetuned with Chain-of-Thought (CoT) explanations.

**Paper**: [DetoxLLM: A Framework for Detoxification with Explanations](https://arxiv.org/abs/2402.15951) **(EMNLP 2024 Main)**

**Authors**: Md Tawkat Islam Khondaker, Muhammad Abdul-Mageed, Laks V.S. Lakshmanan

## Model Information

Summary description and brief definition of inputs and outputs.

### Description

DetoxLLM is the first comprehensive end-to-end detoxification framework, trained on a cross-platform pseudo-parallel corpus. DetoxLLM further introduces explanations to promote transparency and trustworthiness. The framework also demonstrates robustness against adversarial toxicity.

### Usage

Below we share some code snippets on how to quickly get started with running the model. First make sure to `pip install -U transformers accelerate bitsandbytes`, then copy the snippet from the section that is relevant for your use case.

#### Running the model on a CPU

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_name = "UBC-NLP/DetoxLLM-7B"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

prompt = "Rewrite the following toxic input into a non-toxic version. Let's break the input down step by step to rewrite the non-toxic version. You should first think about the explanation of why the input text is toxic. Then generate the detoxified output. You must preserve the original meaning as much as possible.\nInput: "
toxic_input = "Those shithead should stop talking and get the f*ck out of this place"
input_text = prompt + toxic_input + "\n"

# Keep the inputs on the CPU; the model has not been moved to a GPU here.
input_ids = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(**input_ids, do_sample=False)
print(tokenizer.decode(outputs[0]))
```

#### Running the model on a single / multi GPU

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_name = "UBC-NLP/DetoxLLM-7B"
tokenizer = AutoTokenizer.from_pretrained(model_name)
# device_map="auto" places the model on the available GPU(s).
model = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto")

prompt = "Rewrite the following toxic input into a non-toxic version. Let's break the input down step by step to rewrite the non-toxic version. You should first think about the explanation of why the input text is toxic. Then generate the detoxified output. You must preserve the original meaning as much as possible.\nInput: "
toxic_input = "Those shithead should stop talking and get the f*ck out of this place"
input_text = prompt + toxic_input + "\n"

input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids, do_sample=False)
print(tokenizer.decode(outputs[0]))
```
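The same prompt construction is repeated in every snippet in this card, so it can be factored into a small helper. The sketch below is ours, not part of the released code: the `detoxify` function name is hypothetical, and `max_new_tokens=200` simply mirrors the inference widget configuration above.

```python
# A minimal convenience wrapper around the snippets in this card.
# The function name and the max_new_tokens value are our choices
# (the latter mirrors the widget configuration), not a released API.
PROMPT = (
    "Rewrite the following toxic input into a non-toxic version. "
    "Let's break the input down step by step to rewrite the non-toxic version. "
    "You should first think about the explanation of why the input text is toxic. "
    "Then generate the detoxified output. "
    "You must preserve the original meaning as much as possible.\nInput: "
)

def detoxify(text, model, tokenizer):
    # Build the prompt and move the tensors to wherever the model lives.
    inputs = tokenizer(PROMPT + text + "\n", return_tensors="pt").to(model.device)
    outputs = model.generate(**inputs, do_sample=False, max_new_tokens=200)
    # Drop the prompt tokens so only the generated explanation and rewrite remain.
    generated = outputs[0][inputs["input_ids"].shape[-1]:]
    return tokenizer.decode(generated, skip_special_tokens=True)
```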
#### Running the model on a GPU using different precisions

* _Using `torch.float16`_

```python
# pip install accelerate
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_name = "UBC-NLP/DetoxLLM-7B"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto", torch_dtype=torch.float16)

prompt = "Rewrite the following toxic input into a non-toxic version. Let's break the input down step by step to rewrite the non-toxic version. You should first think about the explanation of why the input text is toxic. Then generate the detoxified output. You must preserve the original meaning as much as possible.\nInput: "
toxic_input = "Those shithead should stop talking and get the f*ck out of this place"
input_text = prompt + toxic_input + "\n"

input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids, do_sample=False)
print(tokenizer.decode(outputs[0]))
```

* _Using `torch.bfloat16`_

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_name = "UBC-NLP/DetoxLLM-7B"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto", torch_dtype=torch.bfloat16)

prompt = "Rewrite the following toxic input into a non-toxic version. Let's break the input down step by step to rewrite the non-toxic version. You should first think about the explanation of why the input text is toxic. Then generate the detoxified output. You must preserve the original meaning as much as possible.\nInput: "
toxic_input = "Those shithead should stop talking and get the f*ck out of this place"
input_text = prompt + toxic_input + "\n"

input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids, do_sample=False)
print(tokenizer.decode(outputs[0]))
```

#### Quantized Versions through `bitsandbytes`

* _Using 8-bit precision (int8)_

```python
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig

quantization_config = BitsAndBytesConfig(load_in_8bit=True)

model_name = "UBC-NLP/DetoxLLM-7B"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, quantization_config=quantization_config)

prompt = "Rewrite the following toxic input into a non-toxic version. Let's break the input down step by step to rewrite the non-toxic version. You should first think about the explanation of why the input text is toxic. Then generate the detoxified output. You must preserve the original meaning as much as possible.\nInput: "
toxic_input = "Those shithead should stop talking and get the f*ck out of this place"
input_text = prompt + toxic_input + "\n"

input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids, do_sample=False)
print(tokenizer.decode(outputs[0]))
```

* _Using 4-bit precision_

```python
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig

quantization_config = BitsAndBytesConfig(load_in_4bit=True)

model_name = "UBC-NLP/DetoxLLM-7B"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, quantization_config=quantization_config)

prompt = "Rewrite the following toxic input into a non-toxic version. Let's break the input down step by step to rewrite the non-toxic version. You should first think about the explanation of why the input text is toxic. Then generate the detoxified output. You must preserve the original meaning as much as possible.\nInput: "
toxic_input = "Those shithead should stop talking and get the f*ck out of this place"
input_text = prompt + toxic_input + "\n"

input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids, do_sample=False)
print(tokenizer.decode(outputs[0]))
```

## Model Data

The model is trained on a cross-platform pseudo-parallel detoxification corpus generated with ChatGPT.
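To make the shape of this corpus concrete, the sketch below shows what a single pseudo-parallel entry could look like: a toxic source paired with a CoT explanation and a non-toxic rewrite. This is a hypothetical illustration of the structure only; the field names and the explanation/rewrite values are ours, not records from the released dataset.

```python
# Hypothetical illustration of a pseudo-parallel corpus entry.
# Field names and values are ours, not taken from the released dataset.
example_entry = {
    "platform": "social_media",  # the platform the toxic text was drawn from
    "toxic": "Those shithead should stop talking and get the f*ck out of this place",
    "explanation": "The input is toxic because it uses profanity and insults its targets.",
    "non_toxic": "Those people should stop talking and leave this place.",
}
```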
## Usage and Limitations

These models have certain limitations that users should be aware of.

### Intended Usage

DetoxLLM is intended for detoxification tasks. We aim to help researchers build a complete end-to-end detoxification framework. DetoxLLM can also be regarded as a promising baseline for developing more robust and effective detoxification frameworks.

### Limitations

* **Data Generation Process:** This work uses ChatGPT, specifically the gpt-3.5-turbo version from June 2023. Since that model is updated at regular intervals, the data generation process should be treated accordingly.
* **Data Quality:** DetoxLLM proposes an automated data generation pipeline to create a pseudo-parallel cross-platform corpus. The synthetic data generation process involves multi-stage data processing without direct human inspection. Although this automated pipeline makes the overall data generation process scalable, it comes at the risk of admitting low-quality data into the cross-platform corpus. Hence, human inspection is recommended to remove any potential vulnerabilities and maintain the quality of the corpus.
* **Model Responses:** Although DetoxLLM exhibits an impressive ability to generate detoxified responses, we believe there is still room for improvement in producing meaning-preserving detoxified outputs. Moreover, the model can sometimes be vulnerable to implicit or adversarial tokens and continue to produce toxic content. Therefore, we recommend that DetoxLLM be used with caution before deployment.

### Ethical Considerations and Risks

The development of large language models (LLMs) raises several ethical concerns. In creating an open model, we have carefully considered the following:

* **Data Collection and Release:** We compile datasets from a wide range of platforms. To ensure proper credit assignment, we refer users to the original publications in our paper. We created the cross-platform detoxification corpus for academic research purposes and intend to share it in the future. Note that some content is generated with GPT-4 for illustration purposes.
* **Potential Misuse and Bias:** DetoxLLM can potentially be misused to generate toxic and biased content. For these reasons, we recommend that DetoxLLM not be used in applications without careful prior consideration of potential misuse and bias.

## Citation

If you use DetoxLLM in your scientific publication, or if you find the resources in this repository useful, please cite our paper as follows:

```
@inproceedings{Khondaker2024DetoxLLM,
  title={DetoxLLM: A Framework for Detoxification with Explanations},
  author={Md. Tawkat Islam Khondaker and Muhammad Abdul-Mageed and Laks V. S. Lakshmanan},
  booktitle={Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing (EMNLP)},
  year={2024},
  url={https://arxiv.org/abs/2402.15951}
}
```