---
base_model:
- nbeerbower/mistral-nemo-bophades-12B
- nbeerbower/mistral-nemo-gutenberg-12B-v3
license: apache-2.0
library_name: transformers
tags:
- merge
- roleplay
- not-for-all-audiences
---
# Magnum-Instruct-DPO-12B
A 50/50 merge similar to the other Magnum-Instruct, but using model variants that had extra DPO/ORPO training applied beforehand. I can't yet say whether it's better than just merging the original models, but it seemed fine enough during my limited testing and worth uploading for now as an alternative.
Big thanks to the MistralAI and Anthracite/SillyTilly teams for the original models, plus nbeerbower for the extra training!
GGUF quants provided by mradermacher:
https://huggingface.co/mradermacher/Magnum-Instruct-DPO-12B-GGUF
## Settings
- Temperature @ 0.7
- Min-P @ 0.02
- Smoothing Factor @ 0.3
- Smoothing Curve @ 1.5
- DRY Multiplier (plus standard DRY settings) @ 0.8
- Skip Special Tokens @ On
- Everything else @ Off
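As a rough sketch, the settings above map onto a sampler payload like the one below. The field names follow the text-generation-webui / SillyTavern API convention and are an assumption on my part, not part of this card; adjust them to whatever backend you actually use.

```python
# Illustrative sampler payload mirroring the recommended settings above.
# Field names are assumed (text-generation-webui / SillyTavern style);
# check your backend's API for the exact keys it accepts.
sampler_settings = {
    "temperature": 0.7,
    "min_p": 0.02,
    "smoothing_factor": 0.3,
    "smoothing_curve": 1.5,
    "dry_multiplier": 0.8,        # with the backend's standard DRY defaults
    "skip_special_tokens": True,
    # "Everything else @ Off" -> neutral values for the remaining samplers:
    "top_p": 1.0,
    "top_k": 0,
    "repetition_penalty": 1.0,
}
```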
## Prompt Format: Nemo-Mistral
```
[INST] user prompt[/INST] character response</s>[INST] user prompt[/INST]
```
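If you're building prompts by hand rather than through a frontend, a minimal helper that follows the template above (the function name and signature are my own, not part of the card) could look like:

```python
def build_prompt(history, user_message):
    """Assemble a Nemo-Mistral prompt from completed (user, assistant)
    turn pairs plus the new user message, matching the template:
    [INST] user prompt[/INST] character response</s>[INST] user prompt[/INST]
    """
    parts = []
    for user, assistant in history:
        # Completed turns end with the EOS token </s>.
        parts.append(f"[INST] {user}[/INST] {assistant}</s>")
    # The new user turn is left open for the model to complete.
    parts.append(f"[INST] {user_message}[/INST]")
    return "".join(parts)


# Example:
# build_prompt([("Hi", "Hey!")], "How are you?")
# -> "[INST] Hi[/INST] Hey!</s>[INST] How are you?[/INST]"
```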
## Models Merged
The following models were included in the merge:
- https://huggingface.co/nbeerbower/mistral-nemo-bophades-12B
- https://huggingface.co/nbeerbower/mistral-nemo-gutenberg-12B-v3
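The card doesn't state the exact merge recipe. Assuming a standard mergekit SLERP at `t: 0.5` (one common way to express a 50/50 blend; the method and parameters here are my guess, not the author's published config), the config might look like:

```yaml
# Hypothetical mergekit config for a 50/50 SLERP blend of the two models.
models:
  - model: nbeerbower/mistral-nemo-bophades-12B
  - model: nbeerbower/mistral-nemo-gutenberg-12B-v3
merge_method: slerp
base_model: nbeerbower/mistral-nemo-bophades-12B
parameters:
  t: 0.5   # 0.5 = equal weighting between the two models
dtype: bfloat16
```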