rombodawg committed on
Commit
54a6747
1 Parent(s): b3a0be5

Update README.md

Files changed (1)
  1. README.md +3 -8
README.md CHANGED
@@ -11,13 +11,8 @@ tags:
  - sft
  base_model: unsloth/llama-3-8b-Instruct-bnb-4bit
  ---
-
- # Uploaded model
-
- - **Developed by:** rombodawg
- - **License:** apache-2.0
- - **Finetuned from model :** unsloth/llama-3-8b-Instruct-bnb-4bit
-
- This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
-
- [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
+ # Codellama-3-8B-Finetuned-Instruct
 
+ This model is llama-3-8b-instruct from Meta (uploaded by unsloth), trained on the full 65k CodeFeedback dataset plus the additional 150k Code Feedback Filtered Instruction dataset, combined. You can find that dataset linked below. This AI model was trained with the new Qalore method developed by my good friend on Discord and fellow Replete-AI worker walmartbag. The Qalore method uses QLoRA training along with the methods from GaLore for additional reductions in VRAM, allowing llama-3-8b to be loaded on 14.5 GB of VRAM. This allowed the training to be completed on an RTX A4000 16 GB in 130 hours for less than $20.
 
+ - https://huggingface.co/datasets/Replete-AI/OpenCodeInterpreterData
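As a rough sanity check on the 14.5 GB figure above, the weight footprint of a 4-bit-quantized 8B-parameter model can be estimated as parameter count times half a byte. This is a back-of-envelope sketch for illustration only — the function name and all figures are assumptions, not measurements from the actual Qalore run:

```python
# Back-of-envelope VRAM estimate for loading an 8B model in 4-bit (QLoRA-style).
# All numbers here are rough assumptions for illustration, not measured values.

def quantized_weight_gib(n_params: float, bits_per_param: float = 4.0) -> float:
    """Approximate weight memory in GiB for a model quantized to the given bit width."""
    total_bytes = n_params * bits_per_param / 8  # bits -> bytes
    return total_bytes / 2**30                   # bytes -> GiB

weights = quantized_weight_gib(8e9)  # 4-bit weights alone: roughly 3.7 GiB
# LoRA adapter weights, gradient/optimizer state (shrunk further by the
# GaLore-style low-rank projection), activations, and the KV cache account
# for the remaining headroom up to the reported ~14.5 GB on a 16 GB card.
print(f"4-bit weights: {weights:.1f} GiB")
```

The gap between the ~3.7 GiB of raw 4-bit weights and the reported 14.5 GB is consistent with the optimizer state and activations dominating training-time memory, which is exactly what the QLoRA-plus-GaLore combination targets.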