v3-2 vs v3-1

#1
by bartowski - opened

Not much in the model card, any notable differences? increased training or otherwise?

In my initial testing, v3.2 gets facts right more often than v3.1, but I have been using the 5-bit quantized models. I'm currently downloading the fp16 version for further testing.

In testing, this new version appears to work very well.

Is the model trained with Quantization Aware Training to preserve more accuracy? Do we have any knowledge of that? And are the model checkpoints in the files full precision or half precision? Apart from those questions, the model gives great results for language understanding, though in some cases it gives better results with 8-bit inference than with the torch fp16 dtype. Probably needs more testing.

@bartowski hi, we updated the model card.

@PoVRAZOR Thanks for your testing. We continued training https://huggingface.co/Intel/neural-chat-7b-v3-1 with the https://huggingface.co/datasets/meta-math/MetaMathQA dataset.

@Metricon Thanks~

@iskenderulgen hi, we use fp16 mixed-precision training.
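For readers unfamiliar with fp16 mixed-precision training, here is a minimal toy sketch of the general pattern in PyTorch (`torch.autocast`). This is only an illustration of the technique, not Intel's actual training recipe, which is not described in this thread; the model, data, and hyperparameters below are made up for the demo, and it falls back to bfloat16 autocast on CPU since fp16 autocast is CUDA-only.

```python
import torch

# Toy mixed-precision training loop: forward pass runs in reduced
# precision under autocast, while parameters stay in fp32.
torch.manual_seed(0)
model = torch.nn.Linear(8, 1)
opt = torch.optim.SGD(model.parameters(), lr=0.05)
x = torch.randn(64, 8)
y = x.sum(dim=1, keepdim=True)  # synthetic regression target

device_type = "cuda" if torch.cuda.is_available() else "cpu"
# fp16 autocast requires CUDA; use bfloat16 on CPU for this demo.
amp_dtype = torch.float16 if device_type == "cuda" else torch.bfloat16

for _ in range(100):
    opt.zero_grad()
    with torch.autocast(device_type=device_type, dtype=amp_dtype):
        loss = torch.nn.functional.mse_loss(model(x), y)
    loss.backward()  # gradients land on the fp32 parameters
    opt.step()
```

On CUDA with fp16, real training loops usually also wrap the backward pass with `torch.cuda.amp.GradScaler` to avoid gradient underflow; it is omitted here because the CPU/bfloat16 path does not need it.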
