# MiquMaid v2 DPO
Check out our blogpost about this model series Here! - Join our Discord server Here!
![](https://cdn-uploads.huggingface.co/production/uploads/63ab1241ad514ca8d1430003/tPFdudSae6SCDNvhe1lC9.png)
This model uses the Alpaca prompting format.

The model was trained for RP conversation on Miqu-70B with our magic sauce, then further trained with DPO for uncensoring.
## Credits:
- Undi
- IkariDev
## Description
This repo contains GGUF files of MiquMaid-v2-70B-DPO.
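Since these are GGUF files, they can be loaded with any llama.cpp-based backend. Below is a minimal sketch using llama-cpp-python with the Alpaca-style format described further down; the filename, quant choice, and generation parameters are assumptions, so adjust them to the file you actually download.

```python
# Minimal sketch: running a GGUF quant with llama-cpp-python.
# The filename and parameters are assumptions, not part of this repo.
from llama_cpp import Llama

llm = Llama(
    model_path="MiquMaid-v2-70B-DPO.Q4_K_M.gguf",  # hypothetical quant filename
    n_ctx=4096,        # context window
    n_gpu_layers=-1,   # offload all layers to GPU if available
)

prompt = (
    "### Instruction:\nYou are a roleplay partner. Stay in character.\n\n"
    "### Input:\nHello! Who are you?\n\n"
    "### Response:\n"
)

out = llm(prompt, max_tokens=256, stop=["### Input:", "### Instruction:"], temperature=0.8)
print(out["choices"][0]["text"])
```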
Training data used:
DPO training data used:
Custom format:

```
### Instruction:
{system prompt}

### Input:
{input}

### Response:
{reply}
```
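For programmatic use, the template above can be assembled with a small helper. This is an illustrative sketch, not an official utility from this repo; the function name and example strings are assumptions.

```python
# Illustrative helper for building the Alpaca-style prompt shown above.
# Name and defaults are assumptions, not part of this repo.
def build_prompt(system_prompt: str, user_input: str) -> str:
    return (
        f"### Instruction:\n{system_prompt}\n\n"
        f"### Input:\n{user_input}\n\n"
        "### Response:\n"
    )

prompt = build_prompt(
    "You are a roleplay partner. Stay in character.",
    "Hello! Who are you?",
)
```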
## Others
Undi: If you want to support us, you can here.
IkariDev: Visit my retro/neocities style website please kek