Finetune on fp16

#4 opened by wiccanmind

First, thank you for this great contribution to multilingual LLMs.

From what I can tell, the pretrained model works well in bfloat16, but performance drops dramatically when running inference in float16 (it may not generate any tokens that make sense; as in discussion #1, the model just prints unk tokens).
So I'm wondering how to finetune on a downstream task with a GPU that only supports fp16, given that the pretrained model only behaves properly in bf16.
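For reference, the workaround I'm currently considering is to keep the weights in fp32 for fine-tuning when bf16 isn't available, rather than casting them to fp16. A rough sketch (the model id below is just a placeholder, not the actual repo name):

```python
# Rough sketch: pick a dtype based on hardware support. The checkpoint is
# trained in bf16, so fp32 should be a numerically safe fallback on
# fp16-only GPUs, whereas casting to fp16 is what breaks generation.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "your-org/your-bf16-checkpoint"  # placeholder

if torch.cuda.is_available() and torch.cuda.is_bf16_supported():
    dtype = torch.bfloat16  # native precision of the checkpoint
else:
    dtype = torch.float32   # uses more memory, but avoids fp16 overflow issues

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=dtype)
```

Does that sound reasonable, or is there a better way to handle this?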

Also, do you plan to release quantized versions (int8, int4) as well?

Machine Translation Team at Alibaba DAMO Academy org

Thank you for your attention. We understand that performing fp16 inference with the bf16 pre-trained model can be a challenge, and we regret to inform you that there are no plans to release an fp16 version of the model at this time. However, int4 and int8 quantized versions of the model will be released soon.

Thank you, and I'm looking forward to the new release!
Please notify me in this discussion when it comes out.

@pemywei Hi, thank you for creating the model.
Are there any updates on the quantized versions yet?
Would it be possible to quantize the model using the project below?
https://github.com/ggerganov/llama.cpp
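In the meantime, here is a rough sketch of what I've been trying as an interim alternative: loading the checkpoint in int8 with bitsandbytes through transformers (the model id is a placeholder, and this assumes the architecture is supported by bitsandbytes):

```python
# Interim int8 loading via bitsandbytes (assumes a CUDA GPU and bitsandbytes installed).
# load_in_4bit=True would be the analogous option for int4.
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "your-org/your-bf16-checkpoint"  # placeholder

bnb_config = BitsAndBytesConfig(load_in_8bit=True)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, quantization_config=bnb_config)
```

Whether llama.cpp can convert the checkpoint presumably depends on whether its architecture is supported by the converter there.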
