---
language:
- en
license: llama3
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
- not-for-all-audiences
base_model: unsloth/llama-3-8b-Instruct-bnb-4bit
datasets:
- mpasila/LimaRP-PIPPA-Mix-8K-Context
- grimulkan/LimaRP-augmented
- KaraKaraWitch/PIPPA-ShareGPT-formatted
---
This is an ExLlamaV2 quantized model (4.7 bpw) of [mpasila/Llama-3-Instruct-LiPPA-8B](https://huggingface.co/mpasila/Llama-3-Instruct-LiPPA-8B), quantized with the default calibration dataset at 8192 context length.

# Original model card:

This is a merge of [mpasila/Llama-3-LiPPA-LoRA-8B](https://huggingface.co/mpasila/Llama-3-LiPPA-LoRA-8B), a LoRA trained in 4-bit with 8k context for 1 epoch, using [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct/) as the base model.

The dataset used is [mpasila/LimaRP-PIPPA-Mix-8K-Context](https://huggingface.co/datasets/mpasila/LimaRP-PIPPA-Mix-8K-Context), which was made from [grimulkan/LimaRP-augmented](https://huggingface.co/datasets/grimulkan/LimaRP-augmented) and [KaraKaraWitch/PIPPA-ShareGPT-formatted](https://huggingface.co/datasets/KaraKaraWitch/PIPPA-ShareGPT-formatted).

This LoRA was trained on the instruct model, not the base model. The model trained on the base model with the same dataset is here: [mpasila/Llama-3-LiPPA-8B](https://huggingface.co/mpasila/Llama-3-LiPPA-8B). It also seems to work fairly well for chatting.

### Prompt format: Llama 3 Instruct

Note that Unsloth renames the role `assistant` to `gpt` and `user` to `human`.

# Uploaded model

- **Developed by:** mpasila
- **License:** Llama 3 Community License
- **Finetuned from model:** unsloth/llama-3-8b-Instruct-bnb-4bit

This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
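The Llama 3 Instruct prompt format mentioned above can be sketched in plain Python. This is a minimal illustration of the standard Llama 3 chat template, not code from this repository; in practice you would usually let the tokenizer's `apply_chat_template` build the prompt, and note that Unsloth-exported templates may use the role names `human`/`gpt` instead of `user`/`assistant`.

```python
def format_llama3_prompt(messages):
    """Build a Llama 3 Instruct prompt string from a list of
    {"role": ..., "content": ...} dicts (hypothetical helper for
    illustration; prefer the tokenizer's chat template in practice)."""
    prompt = "<|begin_of_text|>"
    for msg in messages:
        prompt += (
            f"<|start_header_id|>{msg['role']}<|end_header_id|>\n\n"
            f"{msg['content']}<|eot_id|>"
        )
    # Open the assistant header so the model generates the reply.
    prompt += "<|start_header_id|>assistant<|end_header_id|>\n\n"
    return prompt

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello!"},
]
print(format_llama3_prompt(messages))
```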