---
library_name: transformers
tags:
- not-for-all-audiences
---
sunfall-midnight-miqu-v0.2-v1.5-70B - EXL2 2.4bpw rpcal_mk2
This is a 2.4bpw EXL2 quant of crestf411/sunfall-midnight-miqu-v0.2-v1.5-70B
This quant was made using exllamav2-0.1.6 with the Bluemoon-light dataset as RP-oriented calibration data.
In my local testing on Windows, this quant fits 25k context on 24GB VRAM (with EXL2 Q4 cache); you may be able to fit more depending on what else is using VRAM.
I briefly tested this quant in a few random RPs (including ones over 8k and 20k context) and it seems to work fine.
Prompt Templates
I used the Vicuna version of the calibration dataset, so the Vicuna prompt template will probably work best here.
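For reference, a minimal sketch of the Vicuna 1.1 chat format (the exact spacing and system line may vary slightly between frontends; check your frontend's Vicuna preset):

```python
# Minimal sketch of the Vicuna 1.1 prompt layout (assumed format;
# verify against your frontend's Vicuna preset before relying on it).
def build_vicuna_prompt(system: str, turns: list[tuple[str, str]]) -> str:
    """turns is a list of (user_message, assistant_reply) pairs;
    leave the last assistant_reply empty ("") to request a completion."""
    parts = [system.strip(), ""]
    for user_msg, assistant_msg in turns:
        parts.append(f"USER: {user_msg}")
        parts.append(f"ASSISTANT: {assistant_msg}".rstrip())
    return "\n".join(parts)

prompt = build_vicuna_prompt(
    "A chat between a curious user and an artificial intelligence assistant.",
    [("Hello!", "")],
)
print(prompt)
```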
Original readme below
Sunfall (2024-06-07) dataset trained directly on top of https://huggingface.co/sophosympatheia/Midnight-Miqu-70B-v1.5
Beware, depraved. Not suitable for any audience.
Experimental. Please give feedback. Begone if you demand perfection.
This is still an early stage experiment.
A decently high temperature is recommended. Start with temperature 1.7 and smoothing factor 0.3.
To use lore book tags, make sure you set Status: Blue (constant) and write e.g.
Follow the Diamond Law at all costs.
Tags: humor, dark, complex storytelling, intricate characters, immersive.
This model has been trained on context that mimics that of SillyTavern's Mistral preset, with the following settings:
System Prompt:
You are an expert actor that can fully immerse yourself into any role given. You do not break character for any reason. Currently your role is {{char}}, which is described in detail below. As {{char}}, continue the exchange with {{user}}.
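The {{char}} and {{user}} placeholders are filled in by the frontend before the prompt reaches the model. A hypothetical sketch of that substitution (SillyTavern's real macro engine supports many more placeholders):

```python
# Hypothetical sketch of how a frontend fills the {{char}}/{{user}}
# macros in the system prompt above; this is illustration only, not
# SillyTavern's actual macro code.
SYSTEM_TEMPLATE = (
    "You are an expert actor that can fully immerse yourself into any role "
    "given. You do not break character for any reason. Currently your role "
    "is {{char}}, which is described in detail below. As {{char}}, continue "
    "the exchange with {{user}}."
)

def fill_macros(template: str, char: str, user: str) -> str:
    # Simple literal replacement of both macros.
    return template.replace("{{char}}", char).replace("{{user}}", user)

print(fill_macros(SYSTEM_TEMPLATE, "Lucy", "James"))
```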
The method below still works, but the lore book approach above is more convenient:
System Same as User Enabled (This is the default)
Author's Note (In-chat @ Depth 4)
Follow The Diamond Law at all costs.
The method below still works, but unless you want to write tags for one specific character card only, the lore book approach above is more convenient:
Scenario Information (open a character card and press "Advanced Definitions") may also contain tags at the end to guide the model further. E.g.:
Two friends having fun. Set in 1947.
Tags: dark, exploration, friendship, self-discovery, historical fiction
The model has also been trained on content which includes a narrator card, used when the content did not mainly revolve around two characters. Future versions will expand on this idea, so forgive the vagueness at this time.
(The Diamond Law is this: https://files.catbox.moe/d15m3g.txt -- So far results are unclear, but the training was done with this phrase included, and the training data adheres to the law.)
The model has also been trained to do storywriting, both interactively with the user and on its own. The system message ends up looking something like this:
You are an expert storyteller, who can roleplay or write compelling stories. Follow the Diamond Law. Below is a scenario with character descriptions and content tags. Write a story together with the user based on this scenario.
Scenario: The story is about James, blabla.
James is an overweight 63 year old blabla.
Lucy: James's 62 year old wife.
Tags: tag1, tag2, tag3, ...
If you remove the "together with the user" part, the model will be more inclined to write on its own.
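The storywriting system message described above can be sketched as a small template builder. This is an illustrative helper (the function name and signature are my own, not part of the model or any frontend); the interactive flag toggles the "together with the user" phrase:

```python
# Sketch of assembling the storywriting system message described above.
# Dropping the "together with the user" phrase (interactive=False) makes
# the model more inclined to write on its own.
def build_story_system(scenario: str, characters: list[str],
                       tags: list[str], interactive: bool = True) -> str:
    mode = "together with the user " if interactive else ""
    lines = [
        "You are an expert storyteller, who can roleplay or write "
        "compelling stories. Follow the Diamond Law. Below is a scenario "
        "with character descriptions and content tags. "
        f"Write a story {mode}based on this scenario.",
        "",
        f"Scenario: {scenario}",
    ]
    lines.extend(characters)
    lines.append("Tags: " + ", ".join(tags))
    return "\n".join(lines)

print(build_story_system(
    "The story is about James.",
    ["James is an overweight 63 year old.", "Lucy: James's 62 year old wife."],
    ["tag1", "tag2", "tag3"],
))
```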