---
tags:
- merge
- mergekit
- lazymergekit
- abideen/AlphaMonarch-dora
base_model:
- abideen/AlphaMonarch-dora
license: cc-by-nc-4.0
---

# Spaetzle-v69-7b
This is a progressive merge (mostly dare-ties, but also slerp), intended as a suitable compromise for local English and German tasks.

There is also an [unquantized](https://huggingface.co/cstr/Spaetzle-v69-7b) version.

Running quantized, it achieves:
- German EQ Bench: Score (v2_de): 62.59 (Parseable: 171.0)
- English EQ Bench: Score (v2): 76.43 (Parseable: 171.0)

It should work sufficiently well with the ChatML prompt template, as all merged models should have seen ChatML prompts at least in the DPO stage.
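
For reference, ChatML wraps each turn in `<|im_start|>` / `<|im_end|>` markers; a minimal illustration of the expected layout (the tokenizer's chat template, used in the Usage section below, builds this string automatically):

```python
# Illustration only: a single-turn ChatML prompt in the layout the merged models expect.
# In practice, tokenizer.apply_chat_template produces this string for you.
prompt = (
    "<|im_start|>user\n"
    "What is a large language model?<|im_end|>\n"
    "<|im_start|>assistant\n"
)
```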

Spaetzle-v69-7b is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [abideen/AlphaMonarch-dora](https://huggingface.co/abideen/AlphaMonarch-dora)
* [cstr/Spaetzle-v68-7b](https://huggingface.co/cstr/Spaetzle-v68-7b)

The merge tree in total involves the following original models:
- [abideen/AlphaMonarch-dora](https://huggingface.co/abideen/AlphaMonarch-dora)
- [mayflowergmbh/Wiedervereinigung-7b-dpo](https://huggingface.co/mayflowergmbh/Wiedervereinigung-7b-dpo)
- [flemmingmiguel/NeuDist-Ro-7B](https://huggingface.co/flemmingmiguel/NeuDist-Ro-7B)
- [ResplendentAI/Flora_DPO_7B](https://huggingface.co/ResplendentAI/Flora_DPO_7B)
- [yleo/EmertonMonarch-7B](https://huggingface.co/yleo/EmertonMonarch-7B)
- [occiglot/occiglot-7b-de-en-instruct](https://huggingface.co/occiglot/occiglot-7b-de-en-instruct)
- [OpenPipe/mistral-ft-optimized-1227](https://huggingface.co/OpenPipe/mistral-ft-optimized-1227)
- [DiscoResearch/DiscoLM_German_7b_v1](https://huggingface.co/DiscoResearch/DiscoLM_German_7b_v1)
- [LeoLM/leo-mistral-hessianai-7b](https://huggingface.co/LeoLM/leo-mistral-hessianai-7b)
- [DRXD1000/Phoenix](https://huggingface.co/DRXD1000/Phoenix)
- [VAGOsolutions/SauerkrautLM-7b-v1-mistral](https://huggingface.co/VAGOsolutions/SauerkrautLM-7b-v1-mistral)
- [malteos/hermeo-7b](https://huggingface.co/malteos/hermeo-7b)
- [FelixChao/WestSeverus-7B-DPO-v2](https://huggingface.co/FelixChao/WestSeverus-7B-DPO-v2)
- [cognitivecomputations/openchat-3.5-0106-laser](https://huggingface.co/cognitivecomputations/openchat-3.5-0106-laser)

## 🧩 Configuration

```yaml
models:
  - model: cstr/Spaetzle-v68-7b
    # no parameters necessary for base model
  - model: abideen/AlphaMonarch-dora
    parameters:
      density: 0.60
      weight: 0.30
merge_method: dare_ties
base_model: cstr/Spaetzle-v68-7b
parameters:
  int8_mask: true
dtype: bfloat16
random_seed: 0
tokenizer_source: base
```
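
To reproduce the merge, this configuration can be passed to mergekit's `mergekit-yaml` command. A minimal sketch, assuming the YAML above is saved as `config.yaml` (the output path is arbitrary):

```python
# Sketch (notebook-style, matching the Usage section below): run the merge with mergekit.
# Assumes the configuration above was saved as config.yaml.
!pip install -qU mergekit
!mergekit-yaml config.yaml ./Spaetzle-v69-7b
```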

## 💻 Usage

```python
!pip install -qU transformers accelerate

from transformers import AutoTokenizer
import transformers
import torch

model = "cstr/Spaetzle-v69-7b"
messages = [{"role": "user", "content": "What is a large language model?"}]

# Build a ChatML prompt from the chat template shipped with the tokenizer.
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

# device_map="auto" places the model on GPU if one is available.
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)

outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
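
Since the merge targets German as well as English, the same pipeline can be reused with a German prompt, for example:

```python
# German-language example, reusing the tokenizer and pipeline built above.
messages = [{"role": "user", "content": "Was ist ein großes Sprachmodell?"}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```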