---
license: mit
---
Brief Description
Llama 2 7B base model fine-tuned on 1,000 random samples from the Alpaca GPT-4 instruction dataset using QLoRA with 4-bit quantization.
This is a demo of how an LLM can be fine-tuned in a low-resource environment such as Google Colab.
You can find more details about the experiment in the Colab notebook used to fine-tune the model here.
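As a rough illustration of the kind of QLoRA setup described above, the configuration sketch below shows how a model can be loaded in 4-bit and wrapped with LoRA adapters using `transformers` and `peft`. The model name, LoRA hyperparameters, and target modules here are illustrative assumptions, not the exact values used for this fine-tune; see the notebook for the actual settings.

```python
# Sketch of a QLoRA 4-bit setup (hypothetical hyperparameters, not the
# exact configuration used to train this model).
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

# 4-bit NF4 quantization config (assumed settings)
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

# Load the quantized base model (requires access to the gated repo)
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",
    quantization_config=bnb_config,
    device_map="auto",
)

# Attach LoRA adapters; only these small matrices are trained
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
```

Because only the low-rank adapter weights are updated while the 4-bit base model stays frozen, this approach fits within the memory budget of a single free Colab GPU.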