---
language:
- en
license: llama3
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- not-for-all-audiences
base_model: unsloth/llama-3-8b-bnb-4bit
datasets:
- grimulkan/LimaRP-augmented
- mpasila/LimaRP-augmented-8k-context
library_name: peft
---
This was made using the Llama 3 Instruct prompt formatting, so it should be easier to merge with other models that use the same format (a merge sketch is included at the end of this card).
The LoRA was trained in 4-bit with 8k context for 1 epoch, using meta-llama/Meta-Llama-3-8B as the base model.
The dataset used is a modified version of grimulkan/LimaRP-augmented (mpasila/LimaRP-augmented-8k-context).
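For reference, a minimal sketch of a comparable Unsloth training setup. The LoRA rank, learning rate, batch size, and the dataset's text column name are assumptions, not the exact values used for this adapter:

```python
# Sketch only: hyperparameters and the dataset's "text" column are assumptions.
from unsloth import FastLanguageModel
from trl import SFTTrainer
from transformers import TrainingArguments
from datasets import load_dataset

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/llama-3-8b-bnb-4bit",  # 4-bit base from this card
    max_seq_length=8192,                       # 8k context
    load_in_4bit=True,
)
model = FastLanguageModel.get_peft_model(
    model,
    r=16,                                      # assumed LoRA rank
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)
dataset = load_dataset("grimulkan/LimaRP-augmented", split="train")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",                 # assumed column name
    max_seq_length=8192,
    args=TrainingArguments(
        per_device_train_batch_size=2,         # assumed
        learning_rate=2e-4,                    # assumed
        num_train_epochs=1,                    # 1 epoch, as stated above
        optim="adamw_8bit",
        output_dir="outputs",
    ),
)
trainer.train()
```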
Prompt format: Llama 3 Instruct
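For reference, a single turn in the Llama 3 Instruct format looks like this:

```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>

{system prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>

{user message}<|eot_id|><|start_header_id|>assistant<|end_header_id|>

{model response}<|eot_id|>
```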
There may be a minor issue with the prompt formatting: Unsloth leaves the role names "gpt" and "user" in the prompts, and these do not always seem to be handled correctly.
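A minimal inference sketch with transformers and peft, building the prompt manually in the format shown above. The adapter repo id below is a placeholder, not the actual name of this upload:

```python
# Sketch only: replace adapter_id with the actual repo id of this LoRA.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "unsloth/llama-3-8b-bnb-4bit"        # 4-bit base from this card
adapter_id = "your-username/your-limarp-lora"  # placeholder

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(model, adapter_id)

# Build the prompt in the Llama 3 Instruct format shown above.
prompt = (
    "<|begin_of_text|><|start_header_id|>user<|end_header_id|>\n\n"
    "Hello!<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n"
)
inputs = tokenizer(prompt, return_tensors="pt", add_special_tokens=False).to(model.device)
output = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```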
# Uploaded model
- Developed by: mpasila
- License: Llama 3 Community License
- Finetuned from model: unsloth/llama-3-8b-bnb-4bit
This Llama model was trained 2x faster with Unsloth and Hugging Face's TRL library.
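Since the adapter uses the Llama 3 Instruct format, one way to merge it into a full-precision base for further merging with other models is sketched below. The adapter repo id is again a placeholder:

```python
# Sketch only: merges the LoRA into a 16-bit base so the result can be merged
# with other Llama 3 Instruct models. Replace the adapter id with the real one.
import torch
from transformers import AutoModelForCausalLM
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Meta-Llama-3-8B", torch_dtype=torch.bfloat16
)
model = PeftModel.from_pretrained(base, "your-username/your-limarp-lora")
merged = model.merge_and_unload()  # folds the LoRA weights into the base weights
merged.save_pretrained("llama-3-limarp-merged")
```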