---
license: cc-by-nc-4.0
tags:
- not-for-all-audiences
---
# Caution:
This model may produce adult content.
This model is first a DARE TIES merge of:
- [Sao10K/Sensualize-Mixtral-bf16](https://huggingface.co/Sao10K/Sensualize-Mixtral-bf16)
- [NousResearch/Nous-Hermes-2-Mixtral-8x7B-DPO](https://huggingface.co/NousResearch/Nous-Hermes-2-Mixtral-8x7B-DPO)
- [mistralai/Mixtral-8x7B-Instruct-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1)

using [mistralai/Mixtral-8x7B-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-v0.1) as the base model.
The resulting model responds decently to both ChatML and Mixtral Instruct formatting. Unfortunately, the warmer assistant personality of Nous Hermes is lost in the process; however, much of Nous Hermes' excellent attention to detail and prose carries forward when the model is used for role play.
It was then merged with
[Envoid/CATA-LimaRP-Zloss-DT-TaskArithmetic-8x7B](https://huggingface.co/Envoid/CATA-LimaRP-Zloss-DT-TaskArithmetic-8x7B) using the SLERP method.
Both merges were done using relatively normalized weights and densities without any gradients.
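For reference, a DARE TIES merge of this shape can be expressed as a mergekit config roughly like the sketch below. The density and weight values are illustrative placeholders standing in for the "relatively normalized weights and densities without any gradients" described above, not the exact values used for this model:

```yaml
# Hypothetical mergekit config for the first (DARE TIES) merge.
# Weights and densities are illustrative assumptions, not the actual values.
models:
  - model: Sao10K/Sensualize-Mixtral-bf16
    parameters:
      density: 0.5
      weight: 0.33
  - model: NousResearch/Nous-Hermes-2-Mixtral-8x7B-DPO
    parameters:
      density: 0.5
      weight: 0.33
  - model: mistralai/Mixtral-8x7B-Instruct-v0.1
    parameters:
      density: 0.5
      weight: 0.33
merge_method: dare_ties
base_model: mistralai/Mixtral-8x7B-v0.1
dtype: bfloat16
```

Using flat per-model values like these (rather than per-layer gradients) keeps the contribution of each donor model uniform across the network, which matches the "without any gradients" approach described above.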
The end result is a fairly solid model for all of your SillyTavern needs, responding to either Mixtral Instruct or ChatML formatting.
Enjoy!