Fine-tuning?

#14
by OSK-Creative-Tech

Hello. I need to fine-tune Llama 70b for a specific task.
I did this previously with Llama 7b, 13b, and Mistral 7b, but those smaller models don't seem to have enough capacity.
That fine-tuning was done with HF AutoTrain using the QLoRA approach (bitsandbytes, PEFT).
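For reference, a minimal sketch of the kind of QLoRA setup I mean, where the full-precision base model is loaded in 4-bit via bitsandbytes and LoRA adapters are attached with PEFT (the model name, rank, and target modules here are illustrative, not my exact settings):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

base_model = "meta-llama/Llama-2-70b-hf"  # full-precision base, not a pre-quantized checkpoint

# Quantize weights to 4-bit NF4 at load time via bitsandbytes
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_use_double_quant=True,
)

tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForCausalLM.from_pretrained(
    base_model,
    quantization_config=bnb_config,
    device_map="auto",
)
model = prepare_model_for_kbit_training(model)

# Attach LoRA adapters to the attention projections (hyperparameters are assumptions)
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the adapter weights are trainable
```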

Could the same fine-tuning approach be applied to this quantized model?
Or should I fine-tune the base Llama 2 70b model and quantize it afterwards?
