---
base_model: unsloth/gemma-2-9b-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- gemma2
- trl
- sft
datasets:
- yahma/alpaca-cleaned
---

# Uploaded model

- **Developed by:** NotAiLOL
- **License:** apache-2.0
- **Finetuned from model:** unsloth/gemma-2-9b-bnb-4bit

This gemma2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.

# Details

This model was fine-tuned from unsloth/gemma-2-9b-bnb-4bit on the alpaca-cleaned dataset using the **QLoRA** method: the base model's weights stay frozen in 4-bit precision while small low-rank adapter matrices are trained on top. It reached a training loss of 0.9238 on alpaca-cleaned at step 120.

The model follows the Alpaca prompt format:

```
Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.

### Instruction:
{}

### Input:
{}

### Response:
{}
```

## Training

The model was trained on a single Tesla T4 GPU.

- Training time: 1254.11 seconds (≈ 20.9 minutes).
- Peak reserved memory: 9.383 GB.
- Peak reserved memory for training: 2.807 GB.
- Peak reserved memory as % of max memory: 63.622 %.
- Peak reserved memory for training as % of max memory: 19.033 %.
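
For reference, a QLoRA setup with Unsloth typically looks like the sketch below. The hyperparameters (rank, alpha, target modules, sequence length) are illustrative assumptions; the card does not state the exact values used for this run.

```python
from unsloth import FastLanguageModel

# Load the 4-bit quantized base model (the "Q" in QLoRA).
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/gemma-2-9b-bnb-4bit",
    max_seq_length=2048,   # assumed; not stated in the card
    load_in_4bit=True,
)

# Attach trainable low-rank adapters on top of the frozen 4-bit weights.
# r, lora_alpha, and target_modules below are common defaults, not this
# model's confirmed settings.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    lora_dropout=0,
    bias="none",
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)
```

The adapter-only training is what keeps peak memory low enough for a single T4, as reflected in the numbers above.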
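## Usage

A minimal inference sketch using the Alpaca prompt format above. The repo id placeholder and the example instruction are assumptions; substitute this model's actual Hub id.

```python
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="NotAiLOL/<this-repo-id>",  # placeholder: use this model's Hub id
    max_seq_length=2048,
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # enable Unsloth's faster inference path

alpaca_prompt = """Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.

### Instruction:
{}

### Input:
{}

### Response:
{}"""

# Leave the Response slot empty so the model completes it.
inputs = tokenizer(
    [alpaca_prompt.format(
        "Continue the sequence.",   # example instruction (assumed)
        "1, 1, 2, 3, 5, 8",
        "",
    )],
    return_tensors="pt",
).to(model.device)

outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True)[0])
```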