Update README.md
README.md CHANGED
@@ -17,7 +17,7 @@ This model uses the Alpaca **prompting format**
 
 Then, we have done a MoE, made of MiquMaid-v2-70B-DPO and Miqu-70B-DPO base, making the model using the finetune AND the base model for each token, working together.
 
-The two model have been trained on DPO for uncensoring, more info on Miqu-70B-DPO [here](Undi95/Miqu-70B-Alpaca-DPO-GGUF)
+The two model have been trained on DPO for uncensoring, more info on Miqu-70B-DPO [here](https://huggingface.co/Undi95/Miqu-70B-Alpaca-DPO-GGUF)
 
 We have seen a significant improvement, so we decided to share that, even if the model is very big.
 
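The hunk header above references the Alpaca prompting format used by this model. For context, here is a minimal sketch of that format in Python; the helper name `build_alpaca_prompt` is a hypothetical illustration, and the exact system line and spacing expected by this particular merge may differ from what the full README specifies.

```python
# Minimal sketch of the standard Alpaca prompting format (assumption: this
# model follows the common single-turn template; check the README for the
# exact wording it expects).

def build_alpaca_prompt(instruction: str, user_input: str = "") -> str:
    """Assemble a single-turn Alpaca-style prompt string."""
    prompt = (
        "Below is an instruction that describes a task. "
        "Write a response that appropriately completes the request.\n\n"
        f"### Instruction:\n{instruction}\n\n"
    )
    if user_input:
        # The optional "### Input:" section carries extra context for the task.
        prompt += f"### Input:\n{user_input}\n\n"
    prompt += "### Response:\n"
    return prompt


if __name__ == "__main__":
    print(build_alpaca_prompt("Write a short greeting."))
```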