
Quantized using 200 samples of 8192 tokens from the RP-oriented PIPPA dataset.

Branches:

  • main -- measurement.json
  • 2.25b6h -- 2.25bpw, 6bit lm_head
  • 3.7b6h -- 3.7bpw, 6bit lm_head
  • 6b6h -- 6bpw, 6bit lm_head
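As a rough guide, the weight footprint of each branch scales with its bits-per-weight. A minimal sketch of the arithmetic, assuming the commonly cited ~46.7B total parameter count for Mixtral 8x7B (an assumption, not stated in this card) and ignoring KV cache, activations, and lm_head overhead:

```python
# Rough VRAM estimate for the quantized weights alone (excludes KV cache,
# activations, and 6-bit lm_head overhead). The ~46.7B parameter count
# for Mixtral 8x7B is an assumption taken from public figures.
PARAMS = 46.7e9

def weight_size_gb(bpw, params=PARAMS):
    """Bits-per-weight -> approximate weight footprint in GB (1 GB = 1e9 bytes)."""
    return params * bpw / 8 / 1e9

for bpw in (2.25, 3.7, 6.0):
    print(f"{bpw} bpw ~= {weight_size_gb(bpw):.1f} GB")
```

So the 2.25bpw branch lands around 13 GB of weights, 3.7bpw around 22 GB, and 6bpw around 35 GB, before cache and context are added on top.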

Requires ExLlamaV2 version 0.0.12 or later.

Original model link: Sao10K/Solstice-Mixtral-v1

Original model README below.



GGUF: https://huggingface.co/Sao10K/Solstice-Mixtral-v1-GGUF

Solstice-11B-v1 but on Mixtral. More info there.

Experimental. May or may not be good; Mixtral training is difficult to work with.

Trained with the Vicuna / ShareGPT format, but Alpaca Instruct should work fine too.
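A minimal sketch of assembling a Vicuna-style prompt; the exact system line and separators below are a common variant and an assumption, since the card does not spell out the template:

```python
# Builds a Vicuna-style prompt string. The default system line and the
# "USER:" / "ASSISTANT:" separators are assumptions (one common variant),
# not confirmed by the model card.
def build_vicuna_prompt(turns, system="A chat between a curious user and an artificial intelligence assistant."):
    """turns: list of (user_msg, assistant_msg_or_None) pairs; a None reply
    leaves the final ASSISTANT: turn open for the model to complete."""
    parts = [system]
    for user_msg, assistant_msg in turns:
        parts.append(f"USER: {user_msg}")
        if assistant_msg is None:
            parts.append("ASSISTANT:")
        else:
            parts.append(f"ASSISTANT: {assistant_msg}")
    return "\n".join(parts)
```

For Alpaca Instruct, you would swap the separators for the usual `### Instruction:` / `### Response:` headers instead.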


As usual, it handles NSFW scenarios fine; after all, it was trained on lewd outputs. One slightly odd behaviour: it can be reluctant in zero-shot settings, but in actual roleplay / usage it's fine.

Pretty nice. Vicuna gave slightly better outputs than Alpaca, though the difference may be minor.

I like that it stays in character.

I like using Universal-Light preset in SillyTavern.


I really appreciate your feedback / supportive comments. They keep me going.


Support me here :)
