---
license: mit
---
# Brief Description
Llama 2 7B base model fine-tuned on 1,000 random samples from the Alpaca GPT-4 instruction dataset using QLoRA with 4-bit quantization.
This is a demo of how an LLM can be fine-tuned in a low-resource environment such as Google Colab.
You can find more details about the experiment in the Colab notebook used to fine-tune the model [here](https://colab.research.google.com/drive/1RJRHZfgSoPpYn1K-fa2TuvAxc-l8-Ukk?usp=sharing).
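For reference, a QLoRA setup of this kind typically looks like the sketch below, using the Hugging Face `transformers`, `peft`, and `bitsandbytes` libraries. This is a minimal illustration, not the exact notebook code: the hyperparameters (LoRA rank, alpha, dropout, target modules) are common defaults and may differ from those used in the experiment.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

# 4-bit NF4 quantization config (the "Q" in QLoRA)
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model_id = "meta-llama/Llama-2-7b-hf"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",  # fits on a single Colab GPU
)

# LoRA adapter config; values below are illustrative defaults
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the small adapter is trained
```

After this setup, the quantized base model stays frozen and only the low-rank adapter weights are updated during fine-tuning, which is what keeps memory usage within Colab's limits.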