[Possible request.]

#3
by Spacellary - opened

Possibility of a passthrough 9B merge of this model? It's very interesting and based:

https://huggingface.co/l3utterfly/mistral-7b-v0.1-layla-v4

The Chaotic Neutrals org

I don't mind doing a passthrough, I was thinking of using our highest scoring model: https://huggingface.co/ChaoticNeutrals/Eris_Remix_7B

Layla will pull down the OpenLLM score considerably due to its 64ish average, but we should land somewhere in the middle, which, while not ideal, is acceptable. This is an issue we had with Bepis, as the Thespis lineage pulled down our score and overall intelligence considerably.

Would you prefer a 9B or an 11B model?

@jeiku

I want to see how it would turn out at 9B parameters, and that size is at the limit of what my hardware can handle at my target real-time inference speeds. As for the score reductions, I'm more interested in the unaligned nature than anything else, so while I understand the benchmarks matter for quantifying quality in a less subjective way, I'm not that concerned.

In the name of science, even if it's not optimal, haha.

The Eris_Remix looks great.

The Chaotic Neutrals org

Ok, I'll get started immediately. Please let me know if there's a specific GGUF quant that you want. It won't be imatrix, but I can link to it as soon as the model is uploaded so you don't have to wait.
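For reference, a passthrough stack like this is typically expressed as a mergekit config. The layer ranges below are illustrative assumptions, not the actual recipe: overlapping slices of two 32-layer Mistral 7B models that total about 40 layers land near the 9B mark.

```yaml
# Sketch of a mergekit passthrough config (layer ranges are assumptions,
# not the recipe actually used for this merge).
# Two overlapping slices of 32-layer Mistral 7B models, ~40 layers total,
# which comes out at roughly 9B parameters.
slices:
  - sources:
      - model: ChaoticNeutrals/Eris_Remix_7B
        layer_range: [0, 20]
  - sources:
      - model: l3utterfly/mistral-7b-v0.1-layla-v4
        layer_range: [12, 32]
merge_method: passthrough
dtype: float16
```

Widening or narrowing the overlap between the two slices is the usual knob for trading size against coherence in these stacks.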

@jeiku No rush and no pressure! Prioritize whatever you're currently working on. As for quants, don't worry too much beyond testing, since @Lewdiculous should upload them as soon as possible.

> upload them as soon as possible

They are on the way, as soon as they are out of the oven.
