Functional example of finetuning Llama-2-7b-Chat-GPTQ

by echogit - opened

I searched quite a lot and didn't find a functional example of finetuning and inference with the Llama-2-7b-Chat-GPTQ model.
If someone could point me to an example, preferably a Google Colab notebook, I would very much appreciate it.
