---
datasets:
- unalignment/toxic-dpo-v0.1
---
# llama2_xs_460M_uncensored
## Model Details
[llama2_xs_460M_experimental](https://huggingface.co/ahxt/llama2_xs_460M_experimental) DPO fine-tuned on the [unalignment/toxic-dpo-v0.1](https://huggingface.co/datasets/unalignment/toxic-dpo-v0.1) dataset to remove alignment.
### Model Description
- **Developed by:** Harambe Research
- **Model type:** llama2
- **Finetuned from model:** [llama2_xs_460M_experimental](https://huggingface.co/ahxt/llama2_xs_460M_experimental)
### Out-of-Scope Use
Don't use this to do bad things. Bad things are bad.
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases, and limitations of this model. As an uncensored model, it will comply with requests that aligned models refuse, so deploy it with appropriate safeguards.
## How to Get Started with the Model
The model can be run with [text-generation-webui](https://github.com/oobabooga/text-generation-webui).
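Alternatively, a minimal sketch of loading it directly with the `transformers` library (the repo id below is a placeholder; substitute the actual Hugging Face path for this model):

```python
# Hypothetical example: load the model and generate a completion.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "harambe-research/llama2_xs_460M_uncensored"  # placeholder repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "Tell me about llamas."
inputs = tokenizer(prompt, return_tensors="pt")
# Sampling parameters are illustrative, not tuned for this model.
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=True, top_p=0.9)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Since this is a small 460M-parameter model, it should run comfortably on CPU or a modest GPU.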