---
base_model:
- unsloth/Mistral-Nemo-Base-2407-bnb-4bit
library_name: transformers
tags:
- unsloth
- trl
- sft
license: apache-2.0
---
# Luca-MN-bf16
This thing was just intended as an experiment, but it turned out quite good. I had it both pick its own name and write the imagegen prompt for itself.
Created by running a high-r LoRA pass over Nemo-Base for 2 epochs on some RP data, then a low-r pass for 0.5 epochs on the c2 data, then 3 epochs of DPO using jondurbin/gutenberg-dpo-v0.1.
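
For reference, here's a minimal sketch of what that multi-pass recipe could look like with Unsloth + TRL. Apart from the base model and the DPO dataset, everything here (the RP/c2 dataset names, ranks, output dirs) is an illustrative assumption, not the actual training setup.

```python
# Illustrative sketch only; "placeholder" dataset names and all
# ranks/hyperparameters are assumptions, not the model's real settings.
from datasets import load_dataset
from trl import DPOConfig, DPOTrainer, SFTConfig, SFTTrainer
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    "unsloth/Mistral-Nemo-Base-2407-bnb-4bit",
    load_in_4bit=True,
)

# Pass 1: high-r LoRA SFT over the RP data, 2 epochs.
model = FastLanguageModel.get_peft_model(model, r=256, lora_alpha=256)
SFTTrainer(
    model=model,
    processing_class=tokenizer,
    train_dataset=load_dataset("placeholder/rp-data", split="train"),
    args=SFTConfig(num_train_epochs=2, output_dir="pass1-rp"),
).train()

# Pass 2 (same pattern, not repeated here): merge the adapter, then run a
# low-r SFT pass for 0.5 epochs over the c2 data.

# Pass 3: DPO for 3 epochs on jondurbin/gutenberg-dpo-v0.1
# (prompt/chosen/rejected columns).
DPOTrainer(
    model=model,
    processing_class=tokenizer,
    train_dataset=load_dataset("jondurbin/gutenberg-dpo-v0.1", split="train"),
    args=DPOConfig(num_train_epochs=3, output_dir="pass3-dpo"),
).train()
```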
## Prompting
Use the Mistral V3-Tekken context- and instruct-templates. Temperature at about `1.25` seems to be the sweet spot, with either MinP at `0.05` or TopP at `0.9`. DRY/Smoothing etc. depending on your preference.
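
If you're running it with plain `transformers` rather than a frontend, those settings translate roughly as below. This is a hedged sketch: the repo id is a placeholder, `min_p` needs a reasonably recent `transformers` release, and `apply_chat_template` assumes the repo ships the V3-Tekken chat template.

```python
# Sketch of the suggested sampler settings with plain transformers.
# The repo id is a placeholder; min_p requires a recent transformers version,
# and apply_chat_template relies on the repo shipping a (V3-Tekken) template.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "Luca-MN-bf16"  # placeholder: substitute the full hub repo id
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(
    repo, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{"role": "user", "content": "Introduce yourself."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

out = model.generate(
    input_ids,
    max_new_tokens=256,
    do_sample=True,
    temperature=1.25,
    min_p=0.05,  # alternatively: drop min_p and set top_p=0.9
)
print(tokenizer.decode(out[0][input_ids.shape[-1]:], skip_special_tokens=True))
```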
## Quantized versions
- iMat GGUFs, courtesy of the Quant-Cartel
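
To run one of the GGUF quants with the same sampler settings, something along these lines should work with llama-cpp-python; the `.gguf` file name below is a placeholder for whichever quant you grab.

```python
# Hedged example with llama-cpp-python; the .gguf file name is a placeholder.
# Chat formatting is taken from the template embedded in the GGUF metadata.
from llama_cpp import Llama

llm = Llama(
    model_path="Luca-MN-IQ4_XS.gguf",  # placeholder quant file
    n_ctx=8192,
    n_gpu_layers=-1,  # offload everything if you have the VRAM
)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Introduce yourself."}],
    temperature=1.25,
    min_p=0.05,  # or top_p=0.9
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```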