Quantization made by Richard Erkhov.

[Github](https://github.com/RichardErkhov)

[Discord](https://discord.gg/pvy7H8DZMG)

[Request more models](https://github.com/RichardErkhov/quant_request)

Llama-3-13B-Instruct-ft - GGUF

- Model creator: https://huggingface.co/elinas/
- Original model: https://huggingface.co/elinas/Llama-3-13B-Instruct-ft/

| Name | Quant method | Size |
| ---- | ---- | ---- |
| [Llama-3-13B-Instruct-ft.Q2_K.gguf](https://huggingface.co/RichardErkhov/elinas_-_Llama-3-13B-Instruct-ft-gguf/blob/main/Llama-3-13B-Instruct-ft.Q2_K.gguf) | Q2_K | 4.68GB |
| [Llama-3-13B-Instruct-ft.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/elinas_-_Llama-3-13B-Instruct-ft-gguf/blob/main/Llama-3-13B-Instruct-ft.IQ3_XS.gguf) | IQ3_XS | 5.18GB |
| [Llama-3-13B-Instruct-ft.IQ3_S.gguf](https://huggingface.co/RichardErkhov/elinas_-_Llama-3-13B-Instruct-ft-gguf/blob/main/Llama-3-13B-Instruct-ft.IQ3_S.gguf) | IQ3_S | 5.45GB |
| [Llama-3-13B-Instruct-ft.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/elinas_-_Llama-3-13B-Instruct-ft-gguf/blob/main/Llama-3-13B-Instruct-ft.Q3_K_S.gguf) | Q3_K_S | 5.42GB |
| [Llama-3-13B-Instruct-ft.IQ3_M.gguf](https://huggingface.co/RichardErkhov/elinas_-_Llama-3-13B-Instruct-ft-gguf/blob/main/Llama-3-13B-Instruct-ft.IQ3_M.gguf) | IQ3_M | 5.61GB |
| [Llama-3-13B-Instruct-ft.Q3_K.gguf](https://huggingface.co/RichardErkhov/elinas_-_Llama-3-13B-Instruct-ft-gguf/blob/main/Llama-3-13B-Instruct-ft.Q3_K.gguf) | Q3_K | 5.98GB |
| [Llama-3-13B-Instruct-ft.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/elinas_-_Llama-3-13B-Instruct-ft-gguf/blob/main/Llama-3-13B-Instruct-ft.Q3_K_M.gguf) | Q3_K_M | 5.98GB |
| [Llama-3-13B-Instruct-ft.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/elinas_-_Llama-3-13B-Instruct-ft-gguf/blob/main/Llama-3-13B-Instruct-ft.Q3_K_L.gguf) | Q3_K_L | 6.47GB |
| [Llama-3-13B-Instruct-ft.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/elinas_-_Llama-3-13B-Instruct-ft-gguf/blob/main/Llama-3-13B-Instruct-ft.IQ4_XS.gguf) | IQ4_XS | 6.69GB |
| [Llama-3-13B-Instruct-ft.Q4_0.gguf](https://huggingface.co/RichardErkhov/elinas_-_Llama-3-13B-Instruct-ft-gguf/blob/main/Llama-3-13B-Instruct-ft.Q4_0.gguf) | Q4_0 | 6.97GB |
| [Llama-3-13B-Instruct-ft.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/elinas_-_Llama-3-13B-Instruct-ft-gguf/blob/main/Llama-3-13B-Instruct-ft.IQ4_NL.gguf) | IQ4_NL | 7.04GB |
| [Llama-3-13B-Instruct-ft.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/elinas_-_Llama-3-13B-Instruct-ft-gguf/blob/main/Llama-3-13B-Instruct-ft.Q4_K_S.gguf) | Q4_K_S | 7.01GB |
| [Llama-3-13B-Instruct-ft.Q4_K.gguf](https://huggingface.co/RichardErkhov/elinas_-_Llama-3-13B-Instruct-ft-gguf/blob/main/Llama-3-13B-Instruct-ft.Q4_K.gguf) | Q4_K | 7.38GB |
| [Llama-3-13B-Instruct-ft.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/elinas_-_Llama-3-13B-Instruct-ft-gguf/blob/main/Llama-3-13B-Instruct-ft.Q4_K_M.gguf) | Q4_K_M | 7.38GB |
| [Llama-3-13B-Instruct-ft.Q4_1.gguf](https://huggingface.co/RichardErkhov/elinas_-_Llama-3-13B-Instruct-ft-gguf/blob/main/Llama-3-13B-Instruct-ft.Q4_1.gguf) | Q4_1 | 7.7GB |
| [Llama-3-13B-Instruct-ft.Q5_0.gguf](https://huggingface.co/RichardErkhov/elinas_-_Llama-3-13B-Instruct-ft-gguf/blob/main/Llama-3-13B-Instruct-ft.Q5_0.gguf) | Q5_0 | 8.43GB |
| [Llama-3-13B-Instruct-ft.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/elinas_-_Llama-3-13B-Instruct-ft-gguf/blob/main/Llama-3-13B-Instruct-ft.Q5_K_S.gguf) | Q5_K_S | 8.43GB |
| [Llama-3-13B-Instruct-ft.Q5_K.gguf](https://huggingface.co/RichardErkhov/elinas_-_Llama-3-13B-Instruct-ft-gguf/blob/main/Llama-3-13B-Instruct-ft.Q5_K.gguf) | Q5_K | 8.64GB |
| [Llama-3-13B-Instruct-ft.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/elinas_-_Llama-3-13B-Instruct-ft-gguf/blob/main/Llama-3-13B-Instruct-ft.Q5_K_M.gguf) | Q5_K_M | 8.64GB |
| [Llama-3-13B-Instruct-ft.Q5_1.gguf](https://huggingface.co/RichardErkhov/elinas_-_Llama-3-13B-Instruct-ft-gguf/blob/main/Llama-3-13B-Instruct-ft.Q5_1.gguf) | Q5_1 | 9.16GB |
| [Llama-3-13B-Instruct-ft.Q6_K.gguf](https://huggingface.co/RichardErkhov/elinas_-_Llama-3-13B-Instruct-ft-gguf/blob/main/Llama-3-13B-Instruct-ft.Q6_K.gguf) | Q6_K | 9.98GB |
| [Llama-3-13B-Instruct-ft.Q8_0.gguf](https://huggingface.co/RichardErkhov/elinas_-_Llama-3-13B-Instruct-ft-gguf/blob/main/Llama-3-13B-Instruct-ft.Q8_0.gguf) | Q8_0 | 12.92GB |
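As a quick reference, here is a minimal sketch of fetching one of the files above with the `huggingface_hub` Python library; the choice of the Q4_K_M quant is illustrative, any filename from the table works the same way.

```python
# Minimal sketch: download a single GGUF quant from this repo.
# Repo and file names come from the table above; the quant choice is arbitrary.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="RichardErkhov/elinas_-_Llama-3-13B-Instruct-ft-gguf",
    filename="Llama-3-13B-Instruct-ft.Q4_K_M.gguf",
)
print(path)  # local path to the downloaded GGUF file
```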
Original model description:
---
base_model:
- elinas/Llama-3-13B-Instruct
library_name: transformers
tags:
- mergekit
- merge
datasets:
- Chat-Error/Pure-dove-sharegpt
license: llama3
---

# Llama-3-13B-Instruct-ft

This is a QLoRA **finetune** of a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

The model is based on my passthrough merge of [Llama-3-13B-Instruct](https://huggingface.co/elinas/Llama-3-13B-Instruct).

This was primarily an experiment to see how a passthrough merge responds to further finetuning, albeit on a small dataset. The goal was to make a "mid"-sized model like those Meta has released in the past; the merge method was inspired by [mlabonne's Llama-3-120B](https://huggingface.co/mlabonne/Meta-Llama-3-120B-Instruct).

The model was finetuned at an **8192 context length** and is likely reliable with RoPE scaling up to 32k.

It still cannot do math reliably; neither can Llama-3-8B, and in my tests only Llama-3-70B passes basic arithmetic. In side-by-side testing, however, it is a better storywriter/RP model than Llama-3-8B.

Further finetuning this model, or finetuning the [base model](https://huggingface.co/elinas/Llama-3-13B-Instruct) on more samples, is encouraged.

## Datasets

* [Chat-Error/Pure-dove-sharegpt](https://huggingface.co/datasets/Chat-Error/Pure-dove-sharegpt)

A small dataset was used to see how it affects performance. I originally planned to use a larger dataset (196k samples) but wanted to start with a smaller one to see how much the model improves with some additional finetuning. The next step would be finetuning on a larger dataset, if further testing shows performance improvements.

## Finetuning details

This is a QLoRA model and all modules were targeted; a hypothetical PEFT equivalent of these settings is sketched after the configuration below.

```yaml
lora_target_modules:
  - gate_proj
  - down_proj
  - up_proj
  - q_proj
  - v_proj
  - k_proj
  - o_proj
lora_modules_to_save:
  - embed_tokens
  - lm_head
```

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 3
- total_train_batch_size: 3
- total_eval_batch_size: 3
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 25
- num_epochs: 1

The `paged_adamw_8bit` optimizer and DeepSpeed ZeRO 3 were used at a LR of `1e-5` with the cosine scheduler for 1 epoch on 3x RTX 3090s, taking 4h 12m 13s in total. Sample packing and padding were disabled, which significantly reduced VRAM consumption at the cost of speed.
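For readers who use PEFT directly rather than Axolotl, a hypothetical equivalent of the adapter settings above might look like the sketch below. Only the target modules and saved modules come from the card; rank, alpha, and dropout are illustrative assumptions, as the card does not state them.

```python
# Sketch of a PEFT LoraConfig mirroring the Axolotl settings above.
# NOT the author's actual training code; r/lora_alpha/lora_dropout are assumed.
from peft import LoraConfig

lora_config = LoraConfig(
    r=16,               # assumed rank, not stated in the card
    lora_alpha=32,      # assumed alpha, not stated in the card
    lora_dropout=0.05,  # assumed dropout, not stated in the card
    target_modules=[
        "gate_proj", "down_proj", "up_proj",
        "q_proj", "v_proj", "k_proj", "o_proj",
    ],
    modules_to_save=["embed_tokens", "lm_head"],
    task_type="CAUSAL_LM",
)
```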
### W&B Run Summary

```
wandb: Run summary:
wandb: eval/loss 1.00774
wandb: eval/runtime 535.3847
wandb: eval/samples_per_second 0.721
wandb: eval/steps_per_second 0.241
wandb: total_flos 4167452590080.0
wandb: train/epoch 1.0
wandb: train/global_step 1157
wandb: train/grad_norm 4.50846
wandb: train/learning_rate 0.0
wandb: train/loss 1.4115
wandb: train_loss 1.00352
wandb: train_runtime 14921.1227
wandb: train_samples_per_second 0.233
wandb: train_steps_per_second 0.078
```

### Framework versions

- PEFT 0.10.0
- Transformers 4.40.0.dev0
- Pytorch 2.3.0+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0

## Model Evaluation

TBD - submitted

If you have any questions or comments on the model, feel free to open a discussion in the community tab.

[Built with Axolotl](https://github.com/OpenAccess-AI-Collective/axolotl)
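For quick testing of the original (unquantized) model with `transformers`, a minimal, hypothetical inference sketch follows; the prompt and generation settings are illustrative assumptions, not recommendations from the card.

```python
# Sketch: chat-style inference with the unquantized model via transformers.
# Generation parameters are illustrative, not tuned values from the card.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "elinas/Llama-3-13B-Instruct-ft"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{"role": "user", "content": "Write a short story about a lighthouse."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```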