Not trained enough?


The Llama 3 base model is unusual in that it often defaults to statements like "I don't know how to do that," most likely as a safety mechanism.

This means you have to fine-tune for every use case (story writing, Q&A, grammar checking, poem writing, jokes, synonyms, and so on); otherwise the same canned base-model responses pop up everywhere.
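For what it's worth, here's roughly what such a per-use-case fine-tune looks like with PEFT/LoRA. This is a minimal sketch, not a tested recipe: the checkpoint name, the LoRA settings, and the one-line dataset are all illustrative assumptions.

```python
# Minimal sketch of a light, per-use-case LoRA fine-tune on the Llama 3 base.
# Checkpoint, hyperparameters, and the toy dataset are assumptions.
from datasets import Dataset
from peft import LoraConfig, get_peft_model
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

model_name = "meta-llama/Meta-Llama-3-8B"  # assumed base checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto")

# LoRA keeps the update small -- the "light fine-tuning" discussed here.
model = get_peft_model(
    model,
    LoraConfig(r=8, lora_alpha=16, target_modules=["q_proj", "v_proj"],
               task_type="CAUSAL_LM"),
)

# Tiny illustrative dataset for one use case (synonyms); a real run
# would need far more examples.
examples = ["List synonyms for 'happy': glad, joyful, cheerful, content."]
ds = Dataset.from_dict({"text": examples}).map(
    lambda e: tokenizer(e["text"], truncation=True, max_length=128),
    remove_columns=["text"],
)

Trainer(
    model=model,
    args=TrainingArguments(output_dir="llama3-synonyms",
                           num_train_epochs=1,
                           per_device_train_batch_size=1),
    train_dataset=ds,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
).train()
```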

For example, when simply asked about a character or actor from a famous show or movie, about half the time it says something like "I don't know."

Also, even when it does respond, it doesn't strictly adhere to instructions. For example, ask it to write 9 single-word synonyms for XXX, excluding YYY, and it will write 10, some of them multi-word, with YYY among them.
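Here's roughly how I'd reproduce the synonym test with the transformers library; the checkpoint name and generation settings here are illustrative, not a definitive setup.

```python
# Quick reproduction of the synonym instruction test described above.
from transformers import pipeline

generator = pipeline("text-generation", model="meta-llama/Meta-Llama-3-8B")

prompt = "Write 9 single-word synonyms for 'happy', excluding 'glad'.\n"
out = generator(prompt, max_new_tokens=60, do_sample=False)
print(out[0]["generated_text"])
# In the behavior described above, the resulting list often has 10
# entries, contains multi-word phrases, and includes the excluded word.
```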

In short, it appears light fine-tuning with a limited data set won't cut it with Llama 3.
