A LoRA trained in 4-bit with 8k context on mistralai/Mistral-Nemo-Base-2407 as the base model for 1 epoch.
The dataset used is mpasila/LimaRP-PIPPA-Mix-8K-Context, which was built from grimulkan/LimaRP-augmented and KaraKaraWitch/PIPPA-ShareGPT-formatted.
Merged from this LoRA: mpasila/Mistral-LiPPA-LoRA-12B
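For reference, a minimal sketch of how such a merge is typically done with PEFT's `merge_and_unload`; the exact settings the author used are not documented here, and merging in bf16 (rather than 4-bit) is an assumption:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the base model in bf16 (assumption: the merge is done in half
# precision rather than 4-bit, which is the usual PEFT merge workflow).
base = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-Nemo-Base-2407",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Apply the LoRA adapter, then fold its weights into the base model.
model = PeftModel.from_pretrained(base, "mpasila/Mistral-LiPPA-LoRA-12B")
merged = model.merge_and_unload()

merged.save_pretrained("Mistral-LiPPA-12B")
AutoTokenizer.from_pretrained("mistralai/Mistral-Nemo-Base-2407").save_pretrained("Mistral-LiPPA-12B")
```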
It does work reasonably well; the datasets may not be the best, but it's a start.
Prompt format: Llama 3 Instruct
Note that Unsloth's template renames the roles: assistant becomes gpt and user becomes human, as sketched below.
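A sketch of what a single turn likely looks like, assuming the renamed roles are substituted directly into the Llama 3 Instruct header tags:

```python
# Assumption: "human"/"gpt" replace "user"/"assistant" inside the standard
# Llama 3 Instruct header tags; system turns would use "system" as usual.
def format_turn(user_msg: str) -> str:
    return (
        "<|begin_of_text|>"
        "<|start_header_id|>human<|end_header_id|>\n\n"
        f"{user_msg}<|eot_id|>"
        "<|start_header_id|>gpt<|end_header_id|>\n\n"
    )

prompt = format_turn("Hello! Please introduce your character.")
```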
Uploaded model
- Developed by: mpasila
- License: apache-2.0
- Finetuned from model: unsloth/mistral-nemo-base-2407-bnb-4bit
This Mistral model was trained 2x faster with Unsloth and Hugging Face's TRL library.
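A minimal inference sketch, assuming a standard transformers + bitsandbytes 4-bit setup; the generation settings are illustrative, not the author's recommendations:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "mpasila/Mistral-LiPPA-12B"
bnb = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.bfloat16)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, quantization_config=bnb, device_map="auto"
)

# Build a single-turn prompt in the renamed Llama 3 Instruct format.
prompt = (
    "<|begin_of_text|>"
    "<|start_header_id|>human<|end_header_id|>\n\n"
    "Hello! Please introduce your character.<|eot_id|>"
    "<|start_header_id|>gpt<|end_header_id|>\n\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.8)
print(tokenizer.decode(out[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```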