Why change the torch_dtype of phi-2 from float16 to float32?

#1 by Zenwill

Thanks for your great work!
I noticed that you changed the torch_dtype of phi-2 from float16 (the default value in the official config.json) to float32. Why did you do that? Is it common practice in LoRA fine-tuning?

Hey Zenwill, my bad, I didn't notice it at first, and by the time I realized, it was too late. However, here is my notebook with the code I used, now updated to LoRA fp16: https://colab.research.google.com/drive/1UxUTH7-nFDs00YoS8Rm9v44cHK1eI-Tj#scrollTo=DdZRaqEg2x5K&line=6&uniqifier=1
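
For reference, a minimal sketch of what an fp16 LoRA setup for phi-2 could look like with the PEFT library. The target_modules names and the LoRA hyperparameters below are assumptions for illustration, not values taken from the notebook:

import torch
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

# Keep the base weights in fp16; only the LoRA adapter weights are trained
base = AutoModelForCausalLM.from_pretrained(
    "microsoft/phi-2",
    trust_remote_code=True,
    torch_dtype=torch.float16,
)

lora_config = LoraConfig(
    r=16,                     # assumed rank
    lora_alpha=32,            # assumed scaling
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "dense"],  # assumed module names, check the loaded architecture
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora_config)
model.print_trainable_parameters()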

It's better to load the adapter with fp16:

import torch
from transformers import AutoModelForCausalLM

# Use the GPU if available and load the weights in float16
device = "cuda" if torch.cuda.is_available() else "cpu"
model = AutoModelForCausalLM.from_pretrained(
    "venkycs/phi-2-instruct",
    trust_remote_code=True,
    torch_dtype=torch.float16,
).to(device)
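
For completeness, a quick generation call with the fp16 model could look like this (the prompt is only illustrative):

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("venkycs/phi-2-instruct", trust_remote_code=True)
prompt = "Explain LoRA fine-tuning in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))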
