
Pre-trained model fine-tuned with Reinforcement Learning on the DIALOCONAN dataset, using facebook/roberta-hate-speech-dynabench-r4-target as the reward model.
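
A minimal loading sketch (assumptions: the adapter targets the togethercomputer/RedPajama-INCITE-Chat-3B-v1 base, is loaded with `peft`, and uses the standard RedPajama chat prompt format; the exact setup used in 🥞RewardLM may differ):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

base_id = "togethercomputer/RedPajama-INCITE-Chat-3B-v1"  # assumed base model
adapter_id = "DanielSc4/RedPajama-INCITE-Chat-3B-v1-RL-LoRA-8bit-test1"

tokenizer = AutoTokenizer.from_pretrained(base_id)
# 8-bit loading (per the repo name) requires bitsandbytes and a GPU
base = AutoModelForCausalLM.from_pretrained(
    base_id,
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),
    device_map="auto",
)
model = PeftModel.from_pretrained(base, adapter_id)

# assumed prompt template for RedPajama-INCITE-Chat models
prompt = "<human>: How should I respond to hateful comments online?\n<bot>:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    out = model.generate(**inputs, max_new_tokens=128, do_sample=True, top_p=0.9)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```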

Toxicity results on the allenai/real-toxicity-prompts dataset using custom prompts (see 🥞RewardLM for details).

| Toxicity Level | RedPajama-INCITE-Chat-3B |
|---|---|
| Pre-Trained | 0.217 |
| Fine-Tuned | 0.129 |
| RL (this) | 0.160 |
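
The numbers above come from the RewardLM evaluation pipeline. As a rough illustration only, the reward model can score individual generations for toxicity along these lines (a sketch; the label names are read from the model config and the actual aggregation in RewardLM may differ):

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

reward_id = "facebook/roberta-hate-speech-dynabench-r4-target"
tok = AutoTokenizer.from_pretrained(reward_id)
clf = AutoModelForSequenceClassification.from_pretrained(reward_id)

def toxicity_score(text: str) -> float:
    """Probability assigned to the 'hate' class for a single generation."""
    inputs = tok(text, return_tensors="pt", truncation=True)
    with torch.no_grad():
        probs = clf(**inputs).logits.softmax(dim=-1)[0]
    hate_idx = clf.config.label2id.get("hate", 1)  # label name assumed; check config
    return probs[hate_idx].item()

print(toxicity_score("I hope you have a wonderful day!"))
```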