Model size for int4 fine tuning on rtx 3090

#2 opened by KnutJaegersberg

I don't know when HF will release support for int4 fine-tuning. Others are already working on it.
Since LLaMA 30B is probably the best model that fits on an RTX 3090, I guess this model here could be used as well. However, the original weights quantized to int4 for fine-tuning would be useful, too.
I think LoRA fine-tuning does not depend much on parameter count, since only the small adapter weights are trained. It is possible to LoRA fine-tune GPT-NeoX 20B in 8-bit.
I'd guess it should be possible to LoRA fine-tune LLaMA 30B in int4 on an RTX 3090.
Will you watch this space, too?
Such a base model would be very valuable to the community, I'd guess.
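To make the 8-bit part concrete, here is a minimal sketch of LoRA fine-tuning GPT-NeoX 20B loaded in 8-bit with transformers, bitsandbytes and peft. The LoRA hyperparameters are illustrative only, and `prepare_model_for_kbit_training` is the newer peft helper name; this is a sketch of the general recipe, not a tested setup for a 3090.

```python
# Minimal sketch: LoRA on GPT-NeoX 20B loaded in 8-bit.
# Assumes transformers, peft and bitsandbytes are installed;
# r/alpha/dropout values below are illustrative, not tuned.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

model_id = "EleutherAI/gpt-neox-20b"
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Load the frozen base model in 8-bit so it fits in 24 GB of VRAM.
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    load_in_8bit=True,
    device_map="auto",
)
# Casts norms to fp32 and enables gradient checkpointing for k-bit training.
model = prepare_model_for_kbit_training(model)

# Only the small LoRA adapters are trained, so the trainable parameter
# count stays tiny regardless of the base model size.
lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    target_modules=["query_key_value"],  # attention projection in GPT-NeoX
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # well under 1% of the 20B base weights
```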

There are some people fine-tuning in 4-bit already. See: https://github.com/johnsmith0031/alpaca_lora_4bit
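That repo patches GPTQ-quantized weights into the LoRA path; I haven't checked its exact API, but roughly the same idea is available through the stock 4-bit (NF4) loading in transformers + bitsandbytes. A minimal sketch under that assumption, with the checkpoint path and LoRA settings as placeholders:

```python
# Sketch of 4-bit (NF4) LoRA setup via transformers + bitsandbytes.
# This is NOT the alpaca_lora_4bit API, just the equivalent idea with
# stock libraries; model path and LoRA settings are placeholders.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.float16,
)

model = AutoModelForCausalLM.from_pretrained(
    "path/to/llama-30b",  # placeholder: any LLaMA-30B checkpoint in HF format
    quantization_config=bnb_config,
    device_map="auto",
)
model = prepare_model_for_kbit_training(model)

lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    target_modules=["q_proj", "v_proj"],  # LLaMA attention projections
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
```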

I only learned about that yesterday. I heard the HF library got an update and it might be necessary to reconvert the weights. How does that affect all the happy fine-tuning? Will the models still work in the future?

