---
language:
- en
license: llama3
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
base_model: unsloth/llama-3-8b-Instruct-bnb-4bit
---
A community member pointed out that the dataset used to train this model had a significant flaw in how it was created: roughly a quarter of its examples contained intentionally incorrect code. I am removing that data and retraining the model, so look forward to a V2 coming in the next week. Despite the flaw, the model still performs very well, and I expect the second iteration to perform even better.
__________________________________________________________
llama-3-8B-Instruct-Coder
![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/642cc1c253e76b4c2286c58e/0O4cIuv3wNbY68-FP7tak.jpeg)
This model is Meta's llama-3-8b-instruct (as uploaded by unsloth), trained on the full 65k CodeFeedback dataset combined with the additional 150k Code Feedback Filtered Instruction dataset; the combined dataset is linked below. The model was trained with the new Qalore method, developed by my good friend on Discord and fellow Replete-AI worker, walmartbag.
The Qalore method combines QLoRA training with the memory-saving techniques from GaLore for additional reductions in VRAM, allowing llama-3-8b to be loaded in 14.5 GB of VRAM. This let the training run complete on a single RTX A4000 16GB in 130 hours for less than $20. A rough sketch of the general recipe is shown below.
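For readers curious how QLoRA and GaLore might fit together, here is a minimal sketch using the GaLore optimizers built into recent transformers releases on top of a 4-bit PEFT setup. This is an illustration of the general idea, not the exact Qalore implementation (see the notebook linked below); the hyperparameters, LoRA settings, and the dataset's `text` field name are all assumptions.

```python
import torch
from datasets import load_dataset
from peft import LoraConfig
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    BitsAndBytesConfig,
    TrainingArguments,
)
from trl import SFTTrainer

# Base model named on this card.
model_id = "unsloth/llama-3-8b-Instruct-bnb-4bit"

# QLoRA half: load the base weights in 4-bit NF4.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    model_id, quantization_config=bnb_config, device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Small LoRA adapters are trained on top of the frozen 4-bit weights.
peft_config = LoraConfig(
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)

# GaLore half: the galore_* optimizers in transformers (requires the
# galore-torch package) project gradients into a low-rank subspace,
# cutting optimizer-state memory for the matched modules.
training_args = TrainingArguments(
    output_dir="llama-3-8b-instruct-coder",
    per_device_train_batch_size=1,      # assumed values, not this card's settings
    gradient_accumulation_steps=4,
    learning_rate=2e-4,
    num_train_epochs=1,
    optim="galore_adamw_8bit",
    optim_target_modules=["attn", "mlp"],  # layers GaLore applies to
)

# Dataset named on this card; the "text" column is an assumption.
dataset = load_dataset("Replete-AI/OpenCodeInterpreterData", split="train")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    args=training_args,
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=4096,
    peft_config=peft_config,
)
trainer.train()
```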
Dataset used for training this model:
- https://huggingface.co/datasets/Replete-AI/OpenCodeInterpreterData
Qalore notebook for training:
- https://colab.research.google.com/drive/1bX4BsjLcdNJnoAf7lGXmWOgaY8yekg8p?usp=sharing
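Since this is an instruction-tuned chat model, it can be run with the standard transformers chat template. A minimal inference sketch follows; the repo id is a placeholder to replace with this model's actual Hub id, and the prompt is just an example.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical placeholder: substitute this model's actual Hub repo id.
model_id = "<this-model's-hub-id>"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Format the request with the model's chat template.
messages = [
    {"role": "user", "content": "Write a Python function that checks whether a string is a palindrome."}
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Generate and decode only the newly produced tokens.
outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```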