# Spaetzle-v60-7b

This is a progressive merge (mostly DARE-TIES, with some SLERP steps), intended as a suitable compromise for English and German local tasks.
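
Since the exact recipe is not reproduced in this card, the sketch below only illustrates the general shape of a DARE-TIES merge with [mergekit](https://github.com/arcee-ai/mergekit); all model names, densities, and weights are placeholders, not the actual Spaetzle-v60-7b configuration.

```python
# Sketch of a dare_ties merge via mergekit. The base model, constituent
# models, densities, and weights are PLACEHOLDERS, not the real recipe.
import subprocess
import tempfile

config = """\
merge_method: dare_ties
base_model: some-org/base-7b          # placeholder
models:
  - model: some-org/english-7b        # placeholder
    parameters:
      density: 0.6
      weight: 0.3
  - model: some-org/german-7b         # placeholder
    parameters:
      density: 0.6
      weight: 0.3
dtype: bfloat16
"""

with tempfile.NamedTemporaryFile("w", suffix=".yml", delete=False) as f:
    f.write(config)
    config_path = f.name

# mergekit installs the `mergekit-yaml` entry point.
subprocess.run(["mergekit-yaml", config_path, "./merged-model"], check=True)
```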

Spaetzle-v60-7b is a merge of the following models:

## Benchmarks

Performance looks reasonable so far: for example, the GGUF Q4 quant reaches a score of 65.08 on EQ-Bench v2_de (Parseable: 171.0).

From the Low-bit Quantized Open LLM Leaderboard:

| Type | Model | Average ⬆️ | ARC-c | ARC-e | BoolQ | HellaSwag | Lambada | MMLU | OpenBookQA | PIQA | TruthfulQA | Winogrande | #Params (B) | #Size (G) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 🍒 | Intel/SOLAR-10.7B-Instruct-v1.0-int4-inc | 68.49 | 60.49 | 82.66 | 88.29 | 68.29 | 73.36 | 62.43 | 35.6 | 80.74 | 56.06 | 76.95 | 10.57 | 5.98 |
| 🍒 | cstr/Spaetzle-v60-7b-int4-inc | 68.01 | 62.12 | 85.27 | 87.34 | 66.43 | 70.58 | 61.39 | 37 | 82.26 | 50.18 | 77.51 | 7.04 | 4.16 |
| 🔷 | TheBloke/SOLAR-10.7B-Instruct-v1.0-GGUF | 66.6 | 60.41 | 83.38 | 88.29 | 67.73 | 52.42 | 62.04 | 37.2 | 82.32 | 56.3 | 75.93 | 10.73 | 6.07 |
| 🔷 | cstr/Spaetzle-v60-7b-Q4_0-GGUF | 66.44 | 61.35 | 85.19 | 87.98 | 66.54 | 52.78 | 62.05 | 40.6 | 81.72 | 47 | 79.16 | 7.24 | 4.11 |
| 🍒 | Intel/Mistral-7B-Instruct-v0.2-int4-inc | 65.73 | 55.38 | 81.44 | 85.26 | 65.67 | 70.89 | 58.66 | 34.2 | 80.74 | 51.16 | 73.95 | 7.04 | 4.16 |
| 🍒 | Intel/Phi-3-mini-4k-instruct-int4-inc | 65.09 | 57.08 | 83.33 | 86.18 | 59.45 | 68.14 | 66.62 | 38.6 | 79.33 | 38.68 | 73.48 | 3.66 | 2.28 |
| 🔷 | TheBloke/Mistral-7B-Instruct-v0.2-GGUF | 63.52 | 53.5 | 77.9 | 85.44 | 66.9 | 50.11 | 58.45 | 38.8 | 77.58 | 53.12 | 73.4 | 7.24 | 4.11 |
| 🍒 | Intel/Meta-Llama-3-8B-Instruct-int4-inc | 62.93 | 51.88 | 81.1 | 83.21 | 57.09 | 71.32 | 62.41 | 35.2 | 78.62 | 36.35 | 72.14 | 7.2 | 5.4 |