issues with generation
hey there, dunno if this is a known issue, but all I'm getting is gibberish on pretty much any settings or context
ahhh just saw the update, altho I'm getting great results on your 4x7 MoE
going back to retest cognitive fusion again...I thought it was broken too...
nope, it's actually doing great. I'm using Q5_K_M with 32k context and it's doing very well.
oh, that is such a relief to hear... I used a different command for Trix to convert it, because all the other ones were broken on my commit; that's why this one speaks gibberish. Thank you so much for using my models and telling me about them.
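For context, a gibberish-only GGUF usually points at the conversion step rather than the quantization. A minimal sketch of the usual two-step workflow with a recent llama.cpp checkout is below; the model path and output names are hypothetical, and this is not the exact command used in the thread:

```shell
# Hypothetical sketch, assuming a recent llama.cpp checkout in the current
# directory and an HF-format merged model in ./my-merged-model.
# 1. Convert the HF-format weights to a full-precision GGUF file:
python convert_hf_to_gguf.py ./my-merged-model --outfile model-f16.gguf
# 2. Quantize that GGUF down to Q5_K_M (the quant the tester reports using):
./llama-quantize model-f16.gguf model-Q5_K_M.gguf Q5_K_M
```

If step 1 uses a converter that mishandles the model's architecture or tokenizer, every quant produced from that GGUF will emit gibberish, which matches the symptom described above.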
no prob! So far I'd say your 7x4 MoE is running close to some of the finetuned Yi-34B models, but I'll have to use it more to see if it has any very-high-context or repetition issues. So far for stories and RP it's doing well (with the small amount of tests I've run; I'm still dialing in the settings and the system & instruct prompts)
I've been mostly working on merging, but now I have plenty of models to quantize once the staff fixes llama.cpp. For now I will research what models would work best as individual experts. I hope the model continues to serve you well. It's done pretty darn well for me.