---
base_model: allura-org/MS-Meadowlark-22B
library_name: transformers
tags:
  - mergekit
  - merge
license: other
license_name: mrl
license_link: https://mistral.ai/licenses/MRL-0.1.md
---

6bpw exl2 quant of: https://huggingface.co/allura-org/MS-Meadowlark-22B
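If you want to drive this quant from Python rather than a frontend, a minimal loading sketch with the exllamav2 library (v0.1+) looks roughly like the following. The local directory path and the prompt are assumptions for illustration, not part of this repo:

```python
# Sketch: load the 6bpw EXL2 quant with exllamav2 and run one generation.
# The model_dir path is hypothetical -- point it at your local download.
from exllamav2 import ExLlamaV2, ExLlamaV2Cache, ExLlamaV2Config, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2DynamicGenerator

model_dir = "./MS-Meadowlark-22B-exl2-6bpw"

config = ExLlamaV2Config(model_dir)
model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)  # cache is allocated while the model loads
model.load_autosplit(cache)               # split layers across available GPUs
tokenizer = ExLlamaV2Tokenizer(config)

generator = ExLlamaV2DynamicGenerator(model=model, cache=cache, tokenizer=tokenizer)
print(generator.generate(
    prompt="[INST] Write the opening paragraph of a story set in a lighthouse.[/INST]",
    max_new_tokens=200,
))
```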


# MS-Meadowlark-22B

Big thanks to @inflatebot for the image.
A roleplay and storywriting model based on Mistral Small 22B.

GGUF models: https://huggingface.co/mradermacher/MS-Meadowlark-22B-GGUF/

EXL2 models: https://huggingface.co/CalamitousFelicitousness/MS-Meadowlark-22B-exl2

Datasets used in this model:

- Creative_Writing_Multiturn
- Rosier/bodyinf
- SpringDragon
- Gutenberg-Doppel (via nbeerbower/Mistral-Small-Gutenberg-Doppel-22B)

Each dataset was trained separately onto Mistral Small Instruct, and the resulting component models were merged along with nbeerbower/Mistral-Small-Gutenberg-Doppel-22B to create Meadowlark.

I tried different blends of the component models, and this one seems to be the most stable while retaining the creativity and unpredictability added by the training data.

## Instruct Format

Rosier/bodyinf and SpringDragon were trained in completion format. This model should work with Kobold Lite in Adventure Mode and Story Mode.

Creative_Writing_Multiturn and Gutenberg-Doppel were trained using the official instruct format of Mistral Small Instruct:

```
<s>[INST] {User message}[/INST] {Assistant response}</s>
```

This is the Mistral Small V2&V3 preset in SillyTavern and Kobold Lite.
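For reference, assembling a multi-turn history into this format by hand comes down to the following (a sketch; the helper function and example messages are made up, just to show where `<s>`, `</s>`, and the `[INST]` tags go):

```python
def format_mistral_small(turns: list[tuple[str, str]], new_user_message: str) -> str:
    """Build a Mistral Small V2/V3 prompt from (user, assistant) pairs
    plus the new user message awaiting a response."""
    prompt = "<s>"  # single BOS at the start; </s> closes each assistant turn
    for user_msg, assistant_msg in turns:
        prompt += f"[INST] {user_msg}[/INST] {assistant_msg}</s>"
    prompt += f"[INST] {new_user_message}[/INST]"
    return prompt

history = [("Describe the tavern.", "Low beams, peat smoke, a fiddle somewhere in the back.")]
print(format_mistral_small(history, "Who is sitting at the bar?"))
```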

For SillyTavern in particular, I've had better luck getting good output from Mistral Small with a custom instruct template that formats the assembled context as a single user turn. This keeps SillyTavern from confusing the model by assembling user/assistant turns in a nonstandard way. Note: this preset is not compatible with Stepped Thinking; use the Mistral V2&V3 preset for that.
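In spirit, that custom template collapses everything into one `[INST]` block, roughly like this (an illustration of the idea, not the actual SillyTavern template; the helper and names are hypothetical):

```python
def flatten_to_single_turn(system_prompt: str, turns: list[tuple[str, str]], char_name: str) -> str:
    """Collapse the system prompt and chat transcript into a single user
    turn, so the model sees exactly one [INST] ... [/INST] block."""
    transcript = "\n".join([system_prompt] + [f"{name}: {msg}" for name, msg in turns])
    return f"<s>[INST] {transcript}[/INST] {char_name}:"

prompt = flatten_to_single_turn(
    "You are narrating a fantasy roleplay.",
    [("User", "I push open the tavern door."), ("Narrator", "The room falls quiet.")],
    char_name="Narrator",
)
print(prompt)
```

Ending the prompt on `{char_name}:` cues the model to continue as that character inside the single assistant turn, rather than opening a new user/assistant exchange.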