---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
- gguf
- imatrix
- importance matrix
base_model: rombodawg/Llama-3-8B-Instruct-Coder-v2
---

# Quant Infos

- Quantized with the recent BPE pre-tokenizer fixes ([llama.cpp#6920](https://github.com/ggerganov/llama.cpp/pull/6920))
- Quants generated with an importance matrix for reduced quantization loss (a quantization sketch follows at the end of this card)
- Q?_0 (legacy), K, and IQ quants in basically all variants, from Q8 down to IQ1_S
- Quantized with [llama.cpp](https://github.com/ggerganov/llama.cpp) commit [04976db7a819fcf8bfefbfc09a3344210b79dd27](https://github.com/ggerganov/llama.cpp/commit/04976db7a819fcf8bfefbfc09a3344210b79dd27) (master from 2024-05-07)
- Imatrix generated with [this](https://github.com/ggerganov/llama.cpp/discussions/5263#discussioncomment-8395384) dataset:

```
./imatrix -c 512 -m $model_name-f16.gguf -f $llama_cpp_path/groups_merged.txt -o $out_path/imat-f16-gmerged.dat
```

# Original Model Card

Llama-3-8B-Instruct-Coder-v2

![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/642cc1c253e76b4c2286c58e/0O4cIuv3wNbY68-FP7tak.jpeg)

_________________________________________________________________________

How is this model different from rombodawg/Llama-3-8B-Instruct-Coder? The first model was trained on a dataset with some major flaws that I had originally missed. In version 2 all of those flaws are fixed and the model is fully retrained, so it performs much better than the previous iteration.

_________________________________________________________________________

This model is llama-3-8b-instruct from Meta (uploaded by unsloth), trained on the full 150k CodeFeedback-Filtered-Instruction dataset, linked below.

This model was trained with the new Qalore method developed by my good friend on Discord and fellow Replete-AI worker, walmartbag. The Qalore method combines QLoRA training with methods from GaLore for additional VRAM savings, allowing llama-3-8b to be loaded in 14.5 GB of VRAM. This let the training run complete on an RTX A5000 24GB in 50 hours for less than $15.

Dataset used for training this model:

- https://huggingface.co/datasets/Replete-AI/CodeFeedback-Filtered-Instruction-Simplified-Pairs

Qalore notebook for training:

- https://colab.research.google.com/drive/1bX4BsjLcdNJnoAf7lGXmWOgaY8yekg8p?usp=sharing

Quantizations for easier inference:

- https://huggingface.co/bartowski/Llama-3-8B-Instruct-Coder-v2-GGUF
- https://huggingface.co/bartowski/Llama-3-8B-Instruct-Coder-v2-exl2
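For reference, the importance matrix generated in the Quant Infos section above is applied at quantization time via llama.cpp's `quantize` tool. A minimal sketch, reusing the `$model_name` and `$out_path` variables from the imatrix command; the Q4_K_M target type is just an example:

```
# Quantize the f16 GGUF, using the importance matrix to pick
# per-tensor quantization that minimizes quantization loss.
./quantize --imatrix $out_path/imat-f16-gmerged.dat \
    $model_name-f16.gguf $model_name-Q4_K_M.gguf Q4_K_M
```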
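Ready-made GGUF files can instead be fetched from the quant repo linked above with `huggingface-cli`; the `--include` pattern below is an assumption, so check the repo's file list for the exact names:

```
# Install the Hugging Face Hub CLI, then pull only the Q4_K_M file(s).
pip install -U "huggingface_hub[cli]"
huggingface-cli download bartowski/Llama-3-8B-Instruct-Coder-v2-GGUF \
    --include "*Q4_K_M*" --local-dir .
```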
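A quant can then be run with the same pinned llama.cpp build. A minimal sketch in raw completion mode; the prompt, context size, and token count are illustrative placeholders, not the author's recommendations, and for best results the Llama 3 instruct chat template should be applied to the prompt:

```
# -c sets the context window (Llama 3 supports 8192),
# -n caps the number of generated tokens.
./main -m $model_name-Q4_K_M.gguf -c 8192 -n 256 \
    -p "Write a Python function that checks whether a number is prime."
```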