qwp4w3hyb committed
Commit 749a9ac
1 Parent(s): 5ef87ad

Update README.md

Files changed (1)
  1. README.md +14 -7
README.md CHANGED
@@ -12,7 +12,7 @@ tags:
 - gguf
 - imatrix
 - importance matrix
- base_model: rombodawg/Llama-3-8B-Instruct-Coder
+ base_model: rombodawg/Llama-3-8B-Instruct-Coder-v2
 ---
 
 # Quant Infos
@@ -27,19 +27,26 @@ base_model: rombodawg/Llama-3-8B-Instruct-Coder
 ```
 
 # Original Model Card
- llama-3-8B-Instruct-Coder
+ Llama-3-8B-Instruct-Coder-v2
 
 ![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/642cc1c253e76b4c2286c58e/0O4cIuv3wNbY68-FP7tak.jpeg)
 
- This model is llama-3-8b-instruct from Meta (uploaded by unsloth) trained on the full 65k CodeFeedback dataset combined with the additional 150k Code Feedback Filtered Instruction dataset. You can find that dataset linked below. This AI model was trained with the new Qalore method developed by my good friend on Discord and fellow Replete-AI worker walmartbag.
-
- The Qalore method uses QLoRA training along with methods from GaLore for additional reductions in VRAM, allowing llama-3-8b to be loaded in 14.5 GB of VRAM. This allowed the training to be completed on an RTX A4000 16GB in 130 hours for less than $20.
+ _________________________________________________________________________
+ How is this model different from rombodawg/Llama-3-8B-Instruct-Coder? The first model was trained on a dataset that had some major flaws I had originally missed; in version 2 all of those flaws are fixed and the model has been fully retrained, so it performs much better than the previous iteration.
+ _________________________________________________________________________
+ This model is llama-3-8b-instruct from Meta (uploaded by unsloth) trained on the full 150k Code Feedback Filtered Instruction dataset. You can find that dataset linked below. This AI model was trained with the new Qalore method developed by my good friend on Discord and fellow Replete-AI worker walmartbag.
+
+ The Qalore method uses QLoRA training along with methods from GaLore for additional reductions in VRAM, allowing llama-3-8b to be loaded in 14.5 GB of VRAM. This allowed the training to be completed on an RTX A5000 24GB in 50 hours for less than $15.
 
 Dataset used for training this model:
 
- - https://huggingface.co/datasets/Replete-AI/OpenCodeInterpreterData
+ - https://huggingface.co/datasets/Replete-AI/CodeFeedback-Filtered-Instruction-Simplified-Pairs
 
 Qalore notebook for training:
 
 - https://colab.research.google.com/drive/1bX4BsjLcdNJnoAf7lGXmWOgaY8yekg8p?usp=sharing
+
+ Quantizations for easier inference:
+
+ - https://huggingface.co/bartowski/Llama-3-8B-Instruct-Coder-v2-GGUF
+
+ - https://huggingface.co/bartowski/Llama-3-8B-Instruct-Coder-v2-exl2
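The "Qalore" recipe described in the card (QLoRA-style 4-bit loading plus GaLore's memory-saving optimizer) is only available through the linked Colab notebook. The sketch below shows one plausible way to wire those two pieces together with `transformers` and `peft`; the base-model ID, LoRA hyperparameters, GaLore optimizer arguments, and target modules are assumptions, not the notebook's actual settings.

```python
# Hypothetical QLoRA + GaLore ("Qalore"-style) setup sketch, NOT the actual
# Replete-AI notebook: model ID, hyperparameters, and targets are guesses.
import torch
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    BitsAndBytesConfig,
    TrainingArguments,
)
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

base_id = "unsloth/llama-3-8b-Instruct"  # assumed ID for the unsloth upload named in the card

# QLoRA half: load the frozen base weights in 4-bit NF4 so the 8B model fits in roughly 15 GB.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(
    base_id, quantization_config=bnb_config, device_map="auto"
)
model = prepare_model_for_kbit_training(model)

# Train small LoRA adapters instead of the full quantized weights.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)

# GaLore half: transformers exposes GaLore's low-rank gradient projection as an
# optimizer choice (requires the `galore-torch` package). Restricting it to the
# trainable LoRA matrices is an assumption; the notebook may target other modules.
training_args = TrainingArguments(
    output_dir="qalore-sketch",
    per_device_train_batch_size=1,
    gradient_accumulation_steps=8,
    num_train_epochs=1,
    bf16=True,
    optim="galore_adamw_8bit",
    optim_target_modules=["lora_A", "lora_B"],
    optim_args="rank=16, update_proj_gap=200, scale=0.25",  # assumed values
)
# A Trainer/SFTTrainer would then be built from `model`, `training_args`, and the
# tokenized CodeFeedback pairs; that wiring is omitted here.
```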
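For the GGUF quantizations linked above, llama-cpp-python is one way to run the model locally. A minimal sketch follows; the quant filename pattern, context size, offload setting, and sampling parameters are assumptions, so check the bartowski repository for the actual file names.

```python
# Hypothetical inference example using one of the GGUF quants with llama-cpp-python.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="bartowski/Llama-3-8B-Instruct-Coder-v2-GGUF",
    filename="*Q4_K_M.gguf",  # glob for a mid-size quant; assumed to exist in the repo
    n_ctx=8192,               # Llama 3's 8k context window
    n_gpu_layers=-1,          # offload all layers if a GPU is available
)

response = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a helpful coding assistant."},
        {"role": "user", "content": "Write a Python function that reverses a linked list."},
    ],
    max_tokens=512,
    temperature=0.2,
)
print(response["choices"][0]["message"]["content"])
```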