Model hallucinates almost every run

#15
by mstachow

The voices are incredible, but I don't think I can use this model properly. It hallucinates almost every time I use it, and its output is often wrong or nonsense. Is anyone else experiencing this? Any ideas to fix it?

Try using a shorter length of text. The 256 max length is almost always too long and the output turns into nonsense. Try around three sentences' worth of input text instead.
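If it helps, here is a minimal sketch of that chunking approach, assuming the suno-ai `bark` Python package; the speaker preset, three-sentence chunk size, and the naive sentence split are illustrative choices, not something from this thread:

```python
import numpy as np
from bark import SAMPLE_RATE, generate_audio, preload_models
from scipy.io.wavfile import write as write_wav

preload_models()  # download/load the Bark checkpoints once

long_text = (
    "First sentence of the script. Second sentence of the script. "
    "Third sentence. Fourth sentence. Fifth sentence. Sixth sentence."
)

# Naive sentence split; swap in a real sentence tokenizer for real text.
sentences = [s.strip() + "." for s in long_text.split(".") if s.strip()]

# Generate roughly three sentences at a time instead of one packed prompt.
chunks = [" ".join(sentences[i:i + 3]) for i in range(0, len(sentences), 3)]

pieces = []
for chunk in chunks:
    # "v2/en_speaker_6" is an illustrative built-in preset
    audio = generate_audio(chunk, history_prompt="v2/en_speaker_6")
    pieces.append(audio)

write_wav("output.wav", SAMPLE_RATE, np.concatenate(pieces))
```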

OK, I'll give that a shot. I also learned that Bark has a semantic interpretation layer. Does anyone know how to just turn that off? I don't need it to think for me, I just need it to say what I wrote down.

It's hallucination related in a few ways (see the sketch after this list):
1.) Some voices, especially cloned voices, can be unstable and hallucinate in general.
2.) The language of the text prompt is very different from the language of the voice (Chinese/English, but even English slang vs. fancy-Victorian-novel English).
3.) The text prompt is the same as the text prompt 'inside' the voice. So if you make a new voice and then use that voice to speak exactly the text it was created from, it hallucinates. (Edge case, but worth mentioning.)
4.) The text prompt is way too long. If you pack the prompt as full as you can, it almost always hallucinates; once in a while it just reads the beginning.
5.) A very short prompt (fewer than 5 words) makes it somewhat more likely.
6.) Using [tags] can increase the chances.
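Pulling those points together, here is a minimal sketch assuming the suno-ai `bark` Python package: a built-in English preset rather than a cloned voice, a medium-length English prompt, and no [tags]. The preset name and output file name are illustrative.

```python
from bark import SAMPLE_RATE, generate_audio, preload_models
from scipy.io.wavfile import write as write_wav

preload_models()

# Medium-length English prompt (more than 5 words, well under the max),
# no [tags], fed to a built-in English preset instead of a cloned voice.
text = "Hello, this is a short stability test of the Bark text to speech model."

audio = generate_audio(text, history_prompt="v2/en_speaker_6")
write_wav("bark_stable.wav", SAMPLE_RATE, audio)
```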

For more info, join the Suno Discord:
https://discord.gg/suno-ai
