
Quantization made by Richard Erkhov.

Github

Discord

Request more models

Kunoichi-7B - GGUF

| Name | Quant method | Size |
| ---- | ------------ | ---- |
| Kunoichi-7B.Q2_K.gguf | Q2_K | 2.53GB |
| Kunoichi-7B.IQ3_XS.gguf | IQ3_XS | 2.81GB |
| Kunoichi-7B.IQ3_S.gguf | IQ3_S | 2.96GB |
| Kunoichi-7B.Q3_K_S.gguf | Q3_K_S | 2.95GB |
| Kunoichi-7B.IQ3_M.gguf | IQ3_M | 3.06GB |
| Kunoichi-7B.Q3_K.gguf | Q3_K | 3.28GB |
| Kunoichi-7B.Q3_K_M.gguf | Q3_K_M | 3.28GB |
| Kunoichi-7B.Q3_K_L.gguf | Q3_K_L | 3.56GB |
| Kunoichi-7B.IQ4_XS.gguf | IQ4_XS | 3.67GB |
| Kunoichi-7B.Q4_0.gguf | Q4_0 | 3.83GB |
| Kunoichi-7B.IQ4_NL.gguf | IQ4_NL | 3.87GB |
| Kunoichi-7B.Q4_K_S.gguf | Q4_K_S | 3.86GB |
| Kunoichi-7B.Q4_K.gguf | Q4_K | 4.07GB |
| Kunoichi-7B.Q4_K_M.gguf | Q4_K_M | 4.07GB |
| Kunoichi-7B.Q4_1.gguf | Q4_1 | 4.24GB |
| Kunoichi-7B.Q5_0.gguf | Q5_0 | 4.65GB |
| Kunoichi-7B.Q5_K_S.gguf | Q5_K_S | 4.65GB |
| Kunoichi-7B.Q5_K.gguf | Q5_K | 4.78GB |
| Kunoichi-7B.Q5_K_M.gguf | Q5_K_M | 4.78GB |
| Kunoichi-7B.Q5_1.gguf | Q5_1 | 5.07GB |
| Kunoichi-7B.Q6_K.gguf | Q6_K | 5.53GB |
| Kunoichi-7B.Q8_0.gguf | Q8_0 | 7.17GB |
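The sizes above reflect the usual trade-off: fewer bits per weight means a smaller file and lower quality. As a rough sanity check, you can back out the effective bits per weight from the file size. This is an illustrative sketch, assuming a parameter count of about 7.24B (the Mistral-7B base) and treating the sizes above as decimal gigabytes while ignoring metadata overhead:

```python
# Rough bits-per-weight estimate for a GGUF file: file size vs. parameter count.
# Assumes ~7.24B parameters (Mistral-7B base); sizes come from the table above.

PARAMS = 7.24e9  # approximate parameter count of the base model (assumption)

def bits_per_weight(size_gb: float, n_params: float = PARAMS) -> float:
    """Approximate average bits stored per weight (ignores GGUF metadata)."""
    return size_gb * 1e9 * 8 / n_params

# Sizes (decimal GB) taken from the quant table above.
for name, size in [("Q2_K", 2.53), ("Q4_K_M", 4.07), ("Q8_0", 7.17)]:
    print(f"{name}: ~{bits_per_weight(size):.2f} bits/weight")
```

The estimates land close to the nominal quant widths (Q4_K_M comes out around 4.5 bits per weight), which is a quick way to spot a truncated download.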

Original model description:

```yaml
license: cc-by-nc-4.0
tags:
  - merge
```


Description

This repository hosts Kunoichi-7B, a general-purpose model that is also capable of RP. In both my testing and the benchmarks, Kunoichi is an extremely strong model, keeping the advantages of my previous models while gaining more intelligence. Kunoichi scores extremely well on the benchmarks that correlate closely with Chatbot Arena Elo.

| Model | MT Bench | EQ Bench | MMLU | Logic Test |
| ----- | -------- | -------- | ---- | ---------- |
| GPT-4-Turbo | 9.32 | - | - | - |
| GPT-4 | 8.99 | 62.52 | 86.4 | 0.86 |
| Kunoichi-7B | 8.14 | 44.32 | 64.9 | 0.58 |
| Starling-7B | 8.09 | - | 63.9 | 0.51 |
| Claude-2 | 8.06 | 52.14 | 78.5 | - |
| Silicon-Maid-7B | 7.96 | 40.44 | 64.7 | 0.54 |
| Loyal-Macaroni-Maid-7B | 7.95 | 38.66 | 64.9 | 0.57 |
| GPT-3.5-Turbo | 7.94 | 50.28 | 70 | 0.57 |
| Claude-1 | 7.9 | - | 77 | - |
| Openchat-3.5 | 7.81 | 37.08 | 64.3 | 0.39 |
| Dolphin-2.6-DPO | 7.74 | 42.88 | 61.9 | 0.53 |
| Zephyr-7B-beta | 7.34 | 38.71 | 61.4 | 0.30 |
| Llama-2-70b-chat-hf | 6.86 | 51.56 | 63 | - |
| Neural-chat-7b-v3-1 | 6.84 | 43.61 | 62.4 | 0.30 |

The model is intended to be used with up to an 8k context window. With an NTK RoPE alpha of 2.6, the model can be used experimentally with up to a 16k context window.
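Some backends (llama.cpp among them) take a RoPE frequency base rather than an alpha. A commonly used NTK-aware scaling rule converts between the two as base' = base * alpha^(d / (d - 2)). The helper below is an illustrative sketch under that rule, assuming Mistral defaults (head dimension 128, base 10000); the exact values are my assumptions, not taken from this card:

```python
# Hypothetical helper: convert an NTK RoPE alpha into an adjusted rope frequency
# base via the common NTK-aware scaling rule base' = base * alpha**(d / (d - 2)).
# Assumes Mistral defaults (head dim 128, base 10000) -- assumptions, not card facts.

def ntk_rope_freq_base(alpha: float, base: float = 10_000.0, head_dim: int = 128) -> float:
    return base * alpha ** (head_dim / (head_dim - 2))

print(round(ntk_rope_freq_base(2.6)))  # roughly 2.64e4
```

If your backend exposes a frequency-base knob (e.g. llama.cpp's `--rope-freq-base`), a value in this neighborhood approximates alpha = 2.6; verify against your backend's documentation before relying on it.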

Prompt template: Custom format, or Alpaca

Alpaca:

```
Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
{prompt}

### Response:
```
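If you are calling the model programmatically, the template above is easy to build by hand. A minimal sketch (the function name is mine, not part of the card):

```python
# Minimal helper to build the Alpaca-style prompt shown above.

ALPACA_TEMPLATE = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{prompt}\n\n"
    "### Response:\n"
)

def build_prompt(instruction: str) -> str:
    """Fill the {prompt} slot of the Alpaca template with an instruction."""
    return ALPACA_TEMPLATE.format(prompt=instruction)

print(build_prompt("Summarize the plot of Hamlet in one sentence."))
```

The model's reply is then whatever the backend generates after the trailing `### Response:` line.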

SillyTavern format:

I found the best SillyTavern results from using the Noromaid template.

SillyTavern config files: Context, Instruct.

Additionally, here is my highly recommended Text Completion preset. You can tweak it by raising the temperature or lowering min-p to boost creativity, or by raising min-p to increase stability. You shouldn't need to touch anything else!
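For context on why min-p behaves this way: min-p sampling keeps only tokens whose probability is at least `min_p` times the top token's probability, so raising it prunes more of the tail (more stable) and lowering it keeps more of it (more creative). A self-contained sketch of that filtering step, not the preset itself:

```python
# Sketch of min-p filtering: keep tokens whose probability is at least
# min_p times the most likely token's probability, then renormalize.
import math

def min_p_filter(logits: list[float], min_p: float = 0.1) -> list[tuple[int, float]]:
    """Return (token_index, renormalized_probability) pairs that survive min-p."""
    probs = [math.exp(l) for l in logits]
    total = sum(probs)
    probs = [p / total for p in probs]          # softmax over the logits
    cutoff = min_p * max(probs)                 # threshold scales with the top token
    kept = [(i, p) for i, p in enumerate(probs) if p >= cutoff]
    norm = sum(p for _, p in kept)
    return [(i, p / norm) for i, p in kept]     # renormalize the survivors
```

With a confident top token the cutoff is high and few alternatives survive; with a flat distribution the cutoff drops and more candidates stay in play, which is why min-p adapts better than a fixed top-p in many cases.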

WTF is Kunoichi-7B?

Kunoichi-7B is a SLERP merge between my previous RP model, Silicon-Maid-7B, and an unreleased model I had dubbed "Ninja-7B". It is the result of my attempt to build an RP-focused model that keeps the strengths of Silicon-Maid-7B while further increasing the model's brainpower. I sought to raise both MT-Bench and EQ-Bench scores without losing Silicon Maid's strong ability to follow SillyTavern character cards.

Ninja-7B was born from an attempt to turn jan-hq/stealth-v1.2 into a viable model through merges. Although none of the Ninja prototypes developed to a point where I was happy with them, Ninja-7B turned out to be a strong model to merge with. Combined with Silicon-Maid-7B, the result appeared to be a strong merge.
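For readers unfamiliar with SLERP: instead of averaging two models' weights linearly, spherical linear interpolation moves along the arc between them, preserving the geometry of each tensor better than a straight lerp. The function below illustrates the math on flat vectors; it is a generic sketch, not the exact recipe used to produce Kunoichi-7B:

```python
# Illustrative SLERP (spherical linear interpolation) on weight vectors.
# Generic sketch of the math, not the exact merge recipe behind Kunoichi-7B.
import numpy as np

def slerp(t: float, v0: np.ndarray, v1: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    """Interpolate between v0 (t=0) and v1 (t=1) along the great-circle arc."""
    a = v0 / (np.linalg.norm(v0) + eps)
    b = v1 / (np.linalg.norm(v1) + eps)
    dot = np.clip(np.dot(a, b), -1.0, 1.0)
    theta = np.arccos(dot)                      # angle between the two vectors
    if theta < eps:                             # nearly parallel: fall back to lerp
        return (1 - t) * v0 + t * v1
    s = np.sin(theta)
    return (np.sin((1 - t) * theta) / s) * v0 + (np.sin(t * theta) / s) * v1
```

Merge tools such as mergekit apply this tensor-by-tensor, often with a different interpolation factor per layer group.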

Other Benchmarks

| Model | Average | AGIEval | GPT4All | TruthfulQA | Bigbench |
| ----- | ------- | ------- | ------- | ---------- | -------- |
| Kunoichi-7B | 57.54 | 44.99 | 74.86 | 63.72 | 46.58 |
| OpenPipe/mistral-ft-optimized-1218 | 56.85 | 44.74 | 75.6 | 59.89 | 47.17 |
| Silicon-Maid-7B | 56.45 | 44.74 | 74.26 | 61.5 | 45.32 |
| mlabonne/NeuralHermes-2.5-Mistral-7B | 53.51 | 43.67 | 73.24 | 55.37 | 41.76 |
| teknium/OpenHermes-2.5-Mistral-7B | 52.42 | 42.75 | 72.99 | 52.99 | 40.94 |
| openchat/openchat_3.5 | 51.34 | 42.67 | 72.92 | 47.27 | 42.51 |
| berkeley-nest/Starling-LM-7B-alpha | 51.16 | 42.06 | 72.72 | 47.33 | 42.53 |
| HuggingFaceH4/zephyr-7b-beta | 50.99 | 37.33 | 71.83 | 55.1 | 39.7 |