Garbled Output...

#3
by Joseph717171 - opened

When I try to run Nexus-IKM-Mistral-7B, converted and quantized to GGUF (for llama.cpp), I get garbled output. For comparison, your earlier GGUFs still work fine. Please fix this. 🙏

user:  Hi! 
assistant: argued checkura female longitude

 arguedrowpostrans attended Italianaніura ab attended EXISTSte
DEind v end attended «iderwww ab attended profilesclient)) argued damals sulle EXISTSte
endscript ab Украи Johound subст ott Septinger specopieider to spec BibliothèqueennнеartCD h argued damals sulle French pse
DE un

DEind non things--- Italiana ott Jas ab attendedéste
end femaleot ab attended longitude
end dat attended Realprops Audio Italiana Question Jas,- walk profilesacionesse
DE un
 argued un
atia

Item ent,-********jin Flyáss public \ out sinceiseste
 argued
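For reference, here's roughly the flow I used to produce the GGUF. This is just a minimal sketch with placeholder paths and quant type, assuming the stock llama.cpp convert.py and quantize tools; my exact settings differed:

```python
# Sketch of the llama.cpp conversion + quantization steps (placeholder
# filenames and quant type, not my exact settings).
import subprocess

MODEL_DIR = "Nexus-IKM-Mistral-7B"            # local HF checkpoint (example path)
F16_GGUF = "nexus-ikm-mistral-7b-f16.gguf"    # intermediate full-precision GGUF
QUANT_GGUF = "nexus-ikm-mistral-7b-Q4_K_M.gguf"
QUANT_TYPE = "Q4_K_M"                         # example quant type

# 1. Convert the HF checkpoint to an f16 GGUF with llama.cpp's convert script.
subprocess.run(
    ["python", "convert.py", MODEL_DIR, "--outtype", "f16", "--outfile", F16_GGUF],
    check=True,
)

# 2. Quantize the f16 GGUF with llama.cpp's quantize tool.
subprocess.run(["./quantize", F16_GGUF, QUANT_GGUF, QUANT_TYPE], check=True)
```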

Ahh damn, I think the wrong one got pushed to the hub...my bad!!! I have about a dozen different versions I am experimenting with and mislabeled this one.

I'll push the correct one to the hub this morning! Thank you for catching this, and apologies for the lobotomized LLM!

No worries! I appreciate your endeavors.

Working version is up now, I appreciate the patience! This was the version that was up before the recent update. The improved model I was trying to upload will be up later today. Thanks for your interest in this one!

The improved version that I had originally meant to push should be up now! It is the working version of the V5 experimental repo, so it has the latest and best training plus Laser (it took a few tries to really get the model not to fry).

The latest push to the repo works! Thanks, Severian.

Joseph717171 changed discussion status to closed
