Finished training?

#2
by Yhyu13 - opened

Hi

I'm just a curious community member, not a researcher.

Is 0.3 trillion pre-trained tokens considered finished training? That's compared to what v1 was trained on, which is 1.4T tokens.

Since 0.3T already achieves comparable results to the previous version, is it worth training further? Or are you aiming to scale up to larger model sizes?

Thanks!

Thank you for your interest in our model! The TNL2-7B-300B model is currently in the testing phase, so we have paused its training for the time being. Meanwhile, we are actively training the TNL3-15B model. Please feel free to check this link for more information: https://huggingface.co/OpenNLPLab/TransNormerLLM3-15B-Intermediate-Checkpoints.
