---
tags:
- merge
- mergekit
base_model:
- cstr/llama3-8b-spaetzle-v31
- cstr/llama3-8b-spaetzle-v28
- cstr/llama3-8b-spaetzle-v26
- cstr/llama3-8b-spaetzle-v20
license: llama3
language:
- de
- en
---

# llama3-8b-spaetzle-v33

This is a merge of the following models:

* [cstr/llama3-8b-spaetzle-v31](https://huggingface.co/cstr/llama3-8b-spaetzle-v31)
* [cstr/llama3-8b-spaetzle-v28](https://huggingface.co/cstr/llama3-8b-spaetzle-v28)
* [cstr/llama3-8b-spaetzle-v26](https://huggingface.co/cstr/llama3-8b-spaetzle-v26)
* [cstr/llama3-8b-spaetzle-v20](https://huggingface.co/cstr/llama3-8b-spaetzle-v20)

It aims to balance usefulness across German and English tasks. For GGUF quants, see [cstr/llama3-8b-spaetzle-v33-GGUF](https://huggingface.co/cstr/llama3-8b-spaetzle-v33-GGUF); a loading sketch follows the usage example below.

# Benchmarks

As q4km quants (an older build, without the pre-tokenizer fix), the model scores 66.59 on EQ-Bench v2_de (171 of 171 parseable) and 73.17 on EQ-Bench v2 (English, 171/171).

For the int4-inc quants:

| Benchmark  | Score |
|------------|-------|
| Average    | 66.13 |
| ARC-c      | 59.81 |
| ARC-e      | 85.27 |
| Boolq      | 84.10 |
| HellaSwag  | 62.47 |
| Lambada    | 73.28 |
| MMLU       | 64.11 |
| OpenbookQA | 37.20 |
| Piqa       | 80.30 |
| TruthfulQA | 50.21 |
| Winogrande | 73.72 |

## Nous

| Model | Average | AGIEval | GPT4All | TruthfulQA | Bigbench |
|----------------------------------------------------------------------------|------:|------:|------:|---------:|-------:|
| [mlabonne/Daredevil-8B](https://huggingface.co/mlabonne/Daredevil-8B) [📄](https://gist.github.com/mlabonne/080f9c5f153ea57a7ab7d932cf896f21) | 55.87 | 44.13 | 73.52 | 59.05 | 46.77 |
| [**cstr/llama3-8b-spaetzle-v33**](https://huggingface.co/cstr/llama3-8b-spaetzle-v33) [📄](https://gist.github.com/CrispStrobe/0047d967ddc4bb50064c9722b9f934a5) | 55.26 | 42.61 | 73.90 | 59.28 | 45.25 |
| [mlabonne/Daredevil-8B-abliterated](https://huggingface.co/mlabonne/Daredevil-8B-abliterated) [📄](https://gist.github.com/mlabonne/32cdd8460804662c856bcb2a20acd49e) | 55.06 | 43.29 | 73.33 | 57.47 | 46.17 |
| [NousResearch/Hermes-2-Theta-Llama-3-8B](https://huggingface.co/NousResearch/Hermes-2-Theta-Llama-3-8B) [📄](https://gist.github.com/mlabonne/5df2a3051dd6eb3368a77b684635dc05) | 54.28 | 43.90 | 72.62 | 56.36 | 44.23 |
| [openchat/openchat-3.6-8b-20240522](https://huggingface.co/openchat/openchat-3.6-8b-20240522) [📄](https://gist.github.com/mlabonne/95eef8e8d26b7b17910dcb78e1c95f4a) | 53.49 | 44.03 | 73.67 | 49.78 | 46.48 |
| [mlabonne/Llama-3-8B-Instruct-abliterated-dpomix](https://huggingface.co/mlabonne/Llama-3-8B-Instruct-abliterated-dpomix) [📄](https://gist.github.com/mlabonne/d711548df70e2c04771cc68ab33fe2b9) | 52.26 | 41.60 | 69.95 | 54.22 | 43.26 |
| [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) [📄](https://gist.github.com/mlabonne/8329284d86035e6019edb11eb0933628) | 51.34 | 41.22 | 69.86 | 51.65 | 42.64 |
| [failspy/Meta-Llama-3-8B-Instruct-abliterated-v3](https://huggingface.co/failspy/Meta-Llama-3-8B-Instruct-abliterated-v3) [📄](https://gist.github.com/mlabonne/f46cce0262443365e4cce2b6fa7507fc) | 51.21 | 40.23 | 69.50 | 52.44 | 42.69 |
| [mlabonne/OrpoLlama-3-8B](https://huggingface.co/mlabonne/OrpoLlama-3-8B) [📄](https://gist.github.com/mlabonne/22896a1ae164859931cc8f4858c97f6f) | 48.63 | 34.17 | 70.59 | 52.39 | 37.36 |
| [meta-llama/Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B) [📄](https://gist.github.com/mlabonne/616b6245137a9cfc4ea80e4c6e55d847) | 45.42 | 31.10 | 69.95 | 43.91 | 36.70 |

## 🧩 Configuration

```yaml
models:
  - model: cstr/llama3-8b-spaetzle-v20
    # no parameters necessary for base model
  - model: cstr/llama3-8b-spaetzle-v31
    parameters:
      density: 0.65
      weight: 0.25
  - model: cstr/llama3-8b-spaetzle-v28
    parameters:
      density: 0.65
      weight: 0.25
  - model: cstr/llama3-8b-spaetzle-v26
    parameters:
      density: 0.65
      weight: 0.15
merge_method: dare_ties
base_model: cstr/llama3-8b-spaetzle-v20
parameters:
  int8_mask: true
dtype: bfloat16
random_seed: 0
tokenizer_source: base
```
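The merge can be reproduced by feeding this YAML to mergekit, either via its `mergekit-yaml` CLI or through its Python API. Below is a minimal sketch of the latter, following the example in mergekit's README; the `config.yaml` and `./merged` paths are placeholders:

```python
# Sketch: rebuild the merge with mergekit's Python API (pip install mergekit).
# Assumes the YAML above is saved as config.yaml; ./merged is a placeholder output path.
import yaml
import torch
from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# Parse the merge configuration from the YAML file
with open("config.yaml", "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(
    merge_config,
    out_path="./merged",
    options=MergeOptions(
        cuda=torch.cuda.is_available(),  # merge on GPU if one is available
        copy_tokenizer=True,             # carry the base model's tokenizer into the output
        lazy_unpickle=False,
        low_cpu_memory=False,
    ),
)
```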
## 💻 Usage

```python
!pip install -qU transformers accelerate

from transformers import AutoTokenizer
import transformers
import torch

model = "cstr/llama3-8b-spaetzle-v33"
messages = [{"role": "user", "content": "What is a large language model?"}]

# Build a Llama-3 chat prompt from the message list
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)

outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
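The GGUF quants can also be run locally, for example with llama-cpp-python. A minimal sketch follows; the quant filename glob is an assumption, so check the GGUF repository for the exact file names:

```python
# Sketch: run a q4_k_m GGUF quant with llama-cpp-python
# (pip install llama-cpp-python huggingface-hub).
# The filename glob below is an assumption; adjust it to a file that exists in the repo.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="cstr/llama3-8b-spaetzle-v33-GGUF",
    filename="*q4_k_m.gguf",  # assumed quant name pattern
    n_ctx=4096,               # context window for inference
)

# The model targets both German and English, so a German prompt works too
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Was ist ein großes Sprachmodell?"}],
    max_tokens=256,
    temperature=0.7,
)
print(out["choices"][0]["message"]["content"])
```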