No Gemma Fine-Tune Works Right

#9
by deleted - opened

I gave it months and tried multiple apps and versions, but every Gemma 7B, including both the base and -it versions, is error prone. For example, they periodically make spelling and grammatical errors, or wander off writing code in the middle of non-coding tasks.

Another issue is that even the base model is censored. It uses asterisks to mask even PG words like ass, which leads to odd behavior: asterisks show up in poems, which not only breaks the rhyme but sometimes pushes the model into code-generation mode.

And no Gemma fine-tune has been able to get rid of these errors, including this one and OpenChat. My guess is that the high error rate comes from pairing an absurdly large 256k-token vocabulary with only ~8B parameters.
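For anyone who wants to double-check the vocabulary-size point, here is a quick sketch (assuming you have transformers installed and access to the gated repos):

```python
from transformers import AutoTokenizer

# Print tokenizer vocabulary sizes for comparison. Both repos are gated on the Hub,
# so you may need to accept the licenses and log in with a HF token first.
for model_id in ["google/gemma-7b", "meta-llama/Meta-Llama-3-8B"]:
    tok = AutoTokenizer.from_pretrained(model_id)
    print(model_id, len(tok))  # Gemma reports ~256k entries, Llama 3 ~128k
```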

Lastly, on a related note, the Llama 3 base behaves entirely differently from all other bases. It rarely performs any function beyond things like defining words. Even for the simplest tasks it mostly just keeps responding with 'I don't understand what you mean by "house/hand/talking..."'.

Point being, the days of the less-is-more approach to fine-tuning with a dataset of ~10k examples appear to be over. It doesn't work with Gemma because the base is far too error prone and censored, and it doesn't work with the Llama 3 base because it defaults to inaction. The only way to effectively fine-tune Llama 3 is to construct a list of common LLM features (e.g. story writing, grammar checking, synonym listing...) and fine-tune each of them hard, otherwise the result will be littered with blind spots (a rough sketch of what I mean is below). All the Llama 3 fine-tunes released so far have this issue.
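To be concrete, here is a purely hypothetical sketch of the "cover every common feature" idea: instead of one small general-purpose set, build the SFT data as evenly weighted task buckets. The file names, field names, and counts are made up for illustration, not any existing dataset.

```python
import json
import random

# One bucket per feature users commonly expect from an assistant.
TASK_BUCKETS = {
    "story_writing": "data/story_writing.jsonl",
    "grammar_checking": "data/grammar_checking.jsonl",
    "synonym_listing": "data/synonym_listing.jsonl",
    # ...add a bucket for every feature you want covered
}

def load_examples(path):
    # Placeholder loader: one JSON object per line with "prompt"/"response" keys.
    with open(path) as f:
        return [json.loads(line) for line in f]

def build_training_set(per_bucket=2000, seed=0):
    # Sample every bucket equally hard so no single feature becomes a blind spot.
    rng = random.Random(seed)
    dataset = []
    for name, path in TASK_BUCKETS.items():
        examples = load_examples(path)
        rng.shuffle(examples)
        dataset.extend(examples[:per_bucket])
    rng.shuffle(dataset)
    return dataset
```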
