---
datasets:
- unalignment/toxic-dpo-v0.1
---

# llama2_xs_460M_uncensored
## Model Details
A DPO fine-tune of llama2_xs_460M_experimental intended to remove alignment.
### Model Description
- **Developed by:** Harambe Research
- **Model type:** llama2
- **Finetuned from model:** llama2_xs_460M_experimental
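A minimal loading sketch using the `transformers` library. The repository id below is an assumption for illustration; the card does not state the actual Hub path.

```python
# Hypothetical usage sketch -- the repo id is assumed, not confirmed by this card.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "harambe-research/llama2_xs_460M_uncensored"  # assumed repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "Tell me about yourself."
inputs = tokenizer(prompt, return_tensors="pt")
# Greedy decoding kept short; adjust max_new_tokens as needed.
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

This requires network access to the Hub on first run; the snippet cannot be verified without the published weights.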
## Out-of-Scope Use
Do not use this model to generate harmful, deceptive, or illegal content.
## Recommendations
Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model.