Possible sign of overtraining

#10
by Henk717 - opened

When experimenting with the model I noticed that it is very confident in its choices. It's not as bad as some other models I have seen, but it is usually near 100% confident.
[Screenshot: overtraining.png]
In this example on the KoboldAI sample story, you can see that it was near 100% confident that "with a quick flick" was correct, even though other plausible follow-up words were possible.

This is a snippet of llama-7b on the same sample story.
[Screenshot: llama-7b.png]

As you can see, these confidence scores are much lower, allowing for more varied output.
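For anyone who wants to reproduce this kind of comparison, here is a minimal sketch using the Hugging Face transformers library (the model name and prompt below are placeholders, not the exact KoboldAI setup) that prints the probabilities a model assigns to its top next-token candidates:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder model name; swap in the model under discussion, or llama-7b to compare.
model_name = "huggyllama/llama-7b"

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name, torch_dtype=torch.float16, device_map="auto"
)

# Placeholder prompt; paste the sample story up to the point of interest.
prompt = "The wizard drew his wand and,"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

with torch.no_grad():
    logits = model(**inputs).logits[0, -1]  # logits for the next token only

probs = torch.softmax(logits.float(), dim=-1)
top_probs, top_ids = probs.topk(5)

# An overconfident model puts nearly all probability mass on a single continuation.
for p, token_id in zip(top_probs.tolist(), top_ids.tolist()):
    print(f"{tokenizer.decode([token_id])!r}: {p:.1%}")
```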
From experience we have found that models that behave this overconfidently were trained with a learning rate or other hyperparameters set too high; you might be able to get higher-quality results from future revisions if you tone them down a bit.
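Purely as a hypothetical illustration of what "toning it down" could look like, assuming a plain transformers Trainer-style fine-tune rather than your actual training stack, with example values rather than recommendations:

```python
from transformers import TrainingArguments

# Hypothetical values for illustration only; the right numbers depend on the
# dataset, model size, and whether you are doing full fine-tuning or LoRA.
args = TrainingArguments(
    output_dir="out",
    learning_rate=1e-5,          # lower peak LR than an aggressive run
    num_train_epochs=2,          # fewer passes over the data to limit memorization
    warmup_ratio=0.05,           # gentle warmup
    lr_scheduler_type="cosine",  # decay toward zero instead of holding the peak
    weight_decay=0.01,
)
```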

Cognitive Computations org

This is great feedback! Thank you!
