Edit model card

Dataset Card for LifeTox

As large language models become increasingly integrated into daily life, detecting implicit toxicity across diverse contexts is crucial. To this end, we introduce LifeTox, a dataset designed for identifying implicit toxicity within a broad range of advice-seeking scenarios. Unlike existing safety datasets, LifeTox comprises diverse contexts derived from personal experiences through open-ended questions. Our experiments demonstrate that RoBERTa fine-tuned on LifeTox matches or surpasses the zero-shot performance of large language models in toxicity classification tasks. These results underscore the efficacy of LifeTox in addressing the complex challenges inherent in implicit toxicity.

LifeTox Moderator 7B

LifeTox Moderator 7B is based on Llama-2-7b-chat-hf. We fine-tuned this pre-trained model on LifeTox dataset with LoRa. To use our model as a generalized moderator or specific pipelines, please refer to the paper 'LifeTox: Unveiling Implicit Toxicity in Life advice'. LifeTox Moderator 7B is trained as a toxicity classifier as

Input: "[INST] Evaluate the toxicity of answer for given questions. You should only label as 'Unsafe' or 'Safe' [/INST]\n" + "Question: " +  item['query'] + "\nAnswer: " + item[‘response’] + "\nLabel"
Output: GT_Label (Safe or Unsafe)

Please initialize this model as

peft_config = LoraConfig(
            lora_alpha=16,
            lora_dropout=0.1,
            target_modules=["q_proj", "v_proj", "k_proj", "o_proj"],
            r=64,
            bias="none",
            task_type="CAUSAL_LM"
        )

tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
model = AutoPeftModelForCausalLM.from_pretrained(model_path, config=peft_config, device_map="auto")

LifeTox Sources

BibTeX:

@article{kim2023lifetox,
  title={LifeTox: Unveiling Implicit Toxicity in Life Advice},
  author={Kim, Minbeom and Koo, Jahyun and Lee, Hwanhee and Park, Joonsuk and Lee, Hwaran and Jung, Kyomin},
  journal={arXiv preprint arXiv:2311.09585},
  year={2023}
}
Downloads last month
2
Inference Examples
Inference API (serverless) does not yet support peft models for this pipeline type.

Model tree for mbkim/LifeTox_Moderator_7B

Adapter
(1038)
this model

Dataset used to train mbkim/LifeTox_Moderator_7B