---
base_model: mistralai/Mistral-Nemo-Base-2407
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
- sft
datasets:
- mpasila/LimaRP-PIPPA-freedom-rp-Mix-8K
- grimulkan/LimaRP-augmented
- KaraKaraWitch/PIPPA-ShareGPT-formatted
- openerotica/freedom-rp
---
This is a LoRA trained in 4-bit with 8k context for 1 epoch, using [mistralai/Mistral-Nemo-Base-2407](https://huggingface.co/mistralai/Mistral-Nemo-Base-2407/) as the base model.
The dataset used is [mpasila/LimaRP-PIPPA-freedom-rp-Mix-8K](https://huggingface.co/datasets/mpasila/LimaRP-PIPPA-freedom-rp-Mix-8K), which was built from [grimulkan/LimaRP-augmented](https://huggingface.co/datasets/grimulkan/LimaRP-augmented), [KaraKaraWitch/PIPPA-ShareGPT-formatted](https://huggingface.co/datasets/KaraKaraWitch/PIPPA-ShareGPT-formatted) and [openerotica/freedom-rp](https://huggingface.co/datasets/openerotica/freedom-rp).
This model was merged from the LoRA [mpasila/Mistral-freeLiPPA-12B](https://huggingface.co/mpasila/Mistral-freeLiPPA-12B).
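
For reference, a minimal sketch of how a PEFT LoRA adapter can be merged into its base model (this is an illustration of the general procedure, not necessarily the exact commands used for this repo; the repo ids come from the links above and the output path is hypothetical):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the base model in full precision (needs enough RAM/VRAM for 12B weights).
base = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-Nemo-Base-2407", torch_dtype=torch.bfloat16
)

# Attach the LoRA adapter and fold its weights into the base model.
model = PeftModel.from_pretrained(base, "mpasila/Mistral-freeLiPPA-12B")
merged = model.merge_and_unload()

# Save the standalone merged model together with the tokenizer.
merged.save_pretrained("merged-model")
AutoTokenizer.from_pretrained("mistralai/Mistral-Nemo-Base-2407").save_pretrained("merged-model")
```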
### Prompt format: ChatML
The prompt format was changed to ChatML, since using the Llama 3 Instruct template on a Mistral model could be confusing.
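
For reference, a typical ChatML prompt looks like the following (a generic sketch of the format; the system prompt and message contents are placeholders):

```
<|im_start|>system
{system prompt}<|im_end|>
<|im_start|>user
{user message}<|im_end|>
<|im_start|>assistant
{assistant reply}<|im_end|>
```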
# Uploaded model
- **Developed by:** mpasila
- **License:** apache-2.0
- **Finetuned from model:** unsloth/mistral-nemo-base-2407-bnb-4bit
This Mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)