Still the best Mixtral based instruct model. We should change that

#81
by rombodawg - opened

Although leaderboard scores may say otherwise, in actual use Mixtral-8x7B-Instruct-v0.1 has the best outputs of any Mixtral fine-tune or merge. I found that MixtralOrochi was a really good merge of a few models with some nice niche use cases, like OpenBuddy's multilanguage specialties and Noromaid's roleplay abilities. However, even my attempts at creating a better version (linked below) only seem to hinder the overall performance of the original Mixtral-8x7B-Instruct-v0.1 model released by Mistral AI.

My model:

https://huggingface.co/rombodawg/Open_Gpt4_8x7B

I want to challenge the community to find the "secret sauce" for fine-tuning Mixtral to be better than the instruct version made by Mistral AI, because as it says in the model card, this model is only a "quick demonstration that the base model can be easily fine-tuned to achieve compelling performance". So in theory, the performance this model can achieve should only go up from here.

Let's get to work, people. I have high hopes for this model and its ability to close the gap between open and closed-source AI. We can make something better than OpenAI, Microsoft, Google, Apple, and all the others who want to hoard AI for themselves for profit. ✌✌

rombodawg changed discussion title from Still the best Mixtral based instruct model. to Still the best Mixtral based instruct model. We should change that
