Token issues, coherence

#1
by Varkoyote - opened

Hello! Small bit of personal feedback about the model. I've been using the iMat q6K. Multiple times, at various temperatures, I've seen the 'user' token leak at the end of the AI's response: it prints just "user" and stops. The model seems to struggle a bit with coherence too, mostly in RP...

If using ST, you can check the 'Trim incomplete messages' option to remove that trailing 'user' automatically.
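If you're calling the model outside ST, you can do the same trim yourself. A minimal sketch (the marker list is an assumption based on the ChatML template, where the leak comes from the model starting the next "user" turn before generation stops):

```python
# Assumed leak markers from the ChatML turn template; the bare "user"
# appears when the special tokens are stripped but the role name survives.
LEAK_MARKERS = ("<|im_start|>user", "<|im_start|>", "user")

def trim_leak(text: str) -> str:
    """Strip a leaked next-turn role marker from the end of a reply."""
    cleaned = text.rstrip()
    for marker in LEAK_MARKERS:
        if cleaned.endswith(marker):
            # Drop the marker and any whitespace left before it.
            cleaned = cleaned[: -len(marker)].rstrip()
            break
    return cleaned

print(trim_leak("Sure, here is the story.\nuser"))
```

A proper fix is to pass "user" (or the template's special tokens) as a stop string to your backend, so generation halts before the leak instead of cleaning it up after.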

I get better results using the Mistral preset instead of ChatML, and that goes for every Lyra model too (I love Lyra4). I don't mean that as a bad thing; I have no problem using the Mistral prompt if it works better.
