---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
base_model: Replete-AI/Llama-3-11.5B-Instruct-V2
---
# Llama-3-11.5B-Instruct-Coder-v2

![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/642cc1c253e76b4c2286c58e/0O4cIuv3wNbY68-FP7tak.jpeg)

This model is Meta's Llama-3-8B-Instruct, depth-upscaled to 11.5B parameters and then trained on the full 150k-example CodeFeedback Filtered Instruction dataset (linked below). It was trained with the new Qalore method developed by my good friend on Discord and fellow Replete-AI worker walmartbag.
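
For illustration, depth upscaling of this kind is commonly done by stacking overlapping slices of the original model's decoder layers. Below is a minimal sketch of that idea using plain transformers; the slice ranges, and whether the 11.5B base was actually produced this way, are my assumptions for the sketch, not details published in this card.

```python
# Minimal depth-upscaling sketch: stack overlapping decoder-layer slices of
# Llama-3-8B-Instruct to grow its 32 layers into a deeper (~11.5B-param) model.
# The slice ranges below are assumed for illustration, not a published recipe.
import copy
import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Meta-Llama-3-8B-Instruct", torch_dtype=torch.bfloat16
)
layers = model.model.layers  # 32 decoder layers in Llama-3-8B

slices = [(0, 24), (8, 32)]  # assumed overlap; yields 48 layers total
new_layers = torch.nn.ModuleList(
    copy.deepcopy(layers[i]) for start, end in slices for i in range(start, end)
)
for idx, layer in enumerate(new_layers):
    layer.self_attn.layer_idx = idx  # keep KV-cache indexing consistent

model.model.layers = new_layers
model.config.num_hidden_layers = len(new_layers)
model.save_pretrained("llama-3-11.5b-upscaled")
```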

The Qalore method combines QLoRA training with techniques from GaLore for additional VRAM reductions, allowing Llama-3-8B to be loaded in 14.5 GB of VRAM. That made it possible to complete this training on a single RTX A5000 24GB in 80 hours for less than $30.
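
For reference, the QLoRA half of that setup looks roughly like the following with the stock transformers/peft/bitsandbytes APIs. This is a generic sketch with placeholder hyperparameters, not the exact Qalore recipe; the GaLore-style low-rank gradient projection that Qalore adds on top lives in the optimizer, and the notebook linked below is the authoritative version.

```python
# Generic QLoRA setup sketch (placeholder hyperparameters, not the Qalore recipe).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

bnb = BitsAndBytesConfig(
    load_in_4bit=True,                   # 4-bit NF4 base weights (QLoRA)
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_use_double_quant=True,      # also quantize the quantization constants
)

model = AutoModelForCausalLM.from_pretrained(
    "Replete-AI/Llama-3-11.5B-Instruct-V2",
    quantization_config=bnb,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("Replete-AI/Llama-3-11.5B-Instruct-V2")

model = prepare_model_for_kbit_training(model)
model = get_peft_model(model, LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05,  # placeholder values
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    task_type="CAUSAL_LM",
))
model.print_trainable_parameters()  # only the low-rank adapters are trainable
```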

Dataset used for training this model:

- https://huggingface.co/datasets/Replete-AI/CodeFeedback-Filtered-Instruction-Simplified-Pairs
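
If you want to pull the same data for your own runs, the standard datasets API works; inspect the splits and column names yourself, since I have not assumed them here.

```python
# Load the training dataset from the Hugging Face Hub.
from datasets import load_dataset

ds = load_dataset("Replete-AI/CodeFeedback-Filtered-Instruction-Simplified-Pairs")
print(ds)  # check splits and columns before building prompts
```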

Qalore notebook for training:

- https://colab.research.google.com/drive/1bX4BsjLcdNJnoAf7lGXmWOgaY8yekg8p?usp=sharing

Quantizations for easier inference:

- https://huggingface.co/bartowski/COMING-SOON

- https://huggingface.co/bartowski/COMING-SOON