Lewdiculous committed
Commit 6fd157b • 1 Parent(s): ffc4526

Update README.md

Files changed (1)
  1. README.md +124 -0
README.md CHANGED
@@ -9,3 +9,127 @@ Quants:
]
```
![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/65d4cf2693a0a3744a27536c/FzlcjhEtdAU5l12ztKfNm.jpeg)

# Original model information:

<div style="display: flex; justify-content: center; align-items: center">
<img src="https://huggingface.co/SanjiWatsuki/Sonya-7B/resolve/main/assets/Sonya.jpg">
</div>

<p align="center">
<big><b>Top 1 Performer MT-bench 🤪</b></big>
</p>

## WTF is This?

Sonya-7B is, at the time of writing, the **#1 performing model in MT-Bench first turn, ahead of GPT-4, and overall the #2 model in MT-Bench**, to the best of my knowledge. Sonya-7B should be a good all-purpose model for assistant work, RP, and other tasks.

Sonya-7B has a similar structure to my previous model, [Silicon-Maid-7B](https://huggingface.co/SanjiWatsuki/Silicon-Maid-7B), and uses a very similar merge. It's a merge of [xDAN-AI/xDAN-L1-Chat-RL-v1](https://huggingface.co/xDAN-AI/xDAN-L1-Chat-RL-v1), [Jan-Ai's Stealth v1.2](https://huggingface.co/jan-hq/stealth-v1.2), [chargoddard/piano-medley-7b](https://huggingface.co/chargoddard/piano-medley-7b), [NeverSleep/Noromaid-7B-v0.2](https://huggingface.co/NeverSleep/Noromaid-7b-v0.2), and [athirdpath/NSFW_DPO_vmgb-7b](https://huggingface.co/athirdpath/NSFW_DPO_vmgb-7b). Sauce is below. Somehow, by combining these pieces, it substantially outscores any of its parents on MT-Bench.

I picked these models because:
* MT-Bench normally correlates well with real-world model quality, and xDAN performs well on it.
* Almost all models in the mix were Alpaca prompt formatted, which gives prompt consistency.
* Stealth v1.2 has been a magic sprinkle that seems to increase my MT-Bench scores.
* I added RP models because it boosted the Writing and Roleplay benchmarks 👀

Based on the parent models, I expect this model to be used with an 8192 context window. Please use an NTK scaling alpha of 2.6 to experimentally try out a 16384 context.
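
If your backend asks for a RoPE frequency base rather than an alpha value, the sketch below shows the commonly used NTK-aware conversion for alpha 2.6. The conversion formula, Mistral's default base of 10000, and the head dimension of 128 are assumptions, not values stated above; check how your loader interprets alpha before relying on it.

```
# Hedged sketch: convert an NTK-aware scaling alpha into an adjusted RoPE base.
# Assumes the common convention base' = base * alpha ** (d / (d - 2)) with
# Mistral-7B defaults (base 10000, head dimension 128).

def ntk_scaled_rope_base(alpha: float, base: float = 10000.0, head_dim: int = 128) -> float:
    return base * alpha ** (head_dim / (head_dim - 2))

if __name__ == "__main__":
    print(f"alpha 2.6 -> rope base ~= {ntk_scaled_rope_base(2.6):,.0f}")
```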

**Let me be candid:** Despite the test scores, this model is **NOT a GPT killer**. I think it's a very sharp model **for a 7B**; it probably punches way above its weight **for a 7B**, but it's still a 7B model. Even for a 7B model, I think **it's quirky and has some weird outputs**, probably due to how Frankenstein this merge is. Keep your expectations in check 😉

**MT-Bench Average Turn**
| model | score | size
|--------------------|-----------|--------
| gpt-4 | 8.99 | -
| **Sonya-7B** | **8.52** | **7b**
| xDAN-L1-Chat-RL-v1 | 8.34 | 7b
| Starling-7B | 8.09 | 7b
| Claude-2 | 8.06 | -
| *Silicon-Maid* | *7.96* | *7b*
| *Loyal-Macaroni-Maid* | *7.95* | *7b*
| gpt-3.5-turbo | 7.94 | 20b?
| Claude-1 | 7.90 | -
| OpenChat-3.5 | 7.81 | -
| vicuna-33b-v1.3 | 7.12 | 33b
| wizardlm-30b | 7.01 | 30b
| Llama-2-70b-chat | 6.86 | 70b

<img src="https://huggingface.co/SanjiWatsuki/Sonya-7B/resolve/main/assets/mt-bench-gpt.png">

<img src="https://huggingface.co/SanjiWatsuki/Sonya-7B/resolve/main/assets/mt-bench-comparison.png">

### The Sauce

```
models:
  - model: xDAN-AI/xDAN-L1-Chat-RL-v1
    parameters:
      weight: 1
      density: 1
  - model: chargoddard/piano-medley-7b
    parameters:
      weight: 0.3
  - model: jan-hq/stealth-v1.2
    parameters:
      weight: 0.2
  - model: NeverSleep/Noromaid-7b-v0.2
    parameters:
      weight: 0.2
  - model: athirdpath/NSFW_DPO_vmgb-7b
    parameters:
      weight: 0.2
merge_method: ties
base_model: mistralai/Mistral-7B-v0.1
parameters:
  density: 0.4
  int8_mask: true
  normalize: true
dtype: bfloat16
```

**There was no additional training, finetuning, or DPO.** This is a straight merger.
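
If you want to reproduce a merge from a config like the one above, a rough sketch using mergekit's Python entry points is below. The config path, output directory, and options are placeholders, and the exact API surface can differ between mergekit versions, so treat it as an illustration rather than the exact procedure used for Sonya-7B.

```
# Hedged sketch: run a TIES merge from a YAML file containing "The Sauce" above.
# Paths are hypothetical; the mergekit API may differ between versions.
import yaml

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

with open("sonya-merge.yml", "r", encoding="utf-8") as fp:  # hypothetical config path
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(
    merge_config,
    out_path="./sonya-7b-merged",  # hypothetical output directory
    options=MergeOptions(cuda=False, copy_tokenizer=True),
)
```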

### Prompt Template (Alpaca)

```
Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
{prompt}

### Response:
```

I found that this model **performed worse** with the xDAN prompt format, so despite the heavy weight of xDAN in this merger, I recommend *against* its use.
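
For reference, here is a minimal sketch of wrapping a user message in the Alpaca template above and generating with the original (unquantized) weights via transformers. The sampling settings are arbitrary assumptions, not recommendations from the model card.

```
# Minimal sketch: apply the Alpaca prompt template and generate with transformers.
# Sampling settings are arbitrary; adjust to taste.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "SanjiWatsuki/Sonya-7B"

ALPACA_TEMPLATE = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{prompt}\n\n"
    "### Response:\n"
)

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto", torch_dtype="auto")

prompt = ALPACA_TEMPLATE.format(prompt="Write a short scene introducing a sarcastic android barista.")
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```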

### Other Benchmark Stuff

**########## First turn ##########**
| model | turn | score | size
|--------------------|------|----------|--------
| **Sonya-7B** | 1 | **9.06875** | **7b**
| gpt-4 | 1 | 8.95625 | -
| xDAN-L1-Chat-RL-v1 | 1 | *8.87500* | *7b*
| xDAN-L2-Chat-RL-v2 | 1 | 8.78750 | 30b
| claude-v1 | 1 | 8.15000 | -
| gpt-3.5-turbo | 1 | 8.07500 | 20b
| vicuna-33b-v1.3 | 1 | 7.45625 | 33b
| wizardlm-30b | 1 | 7.13125 | 30b
| oasst-sft-7-llama-30b | 1 | 7.10625 | 30b
| Llama-2-70b-chat | 1 | 6.98750 | 70b

**########## Second turn ##########**
| model | turn | score | size
|--------------------|------|-----------|--------
| gpt-4 | 2 | 9.025000 | -
| xDAN-L2-Chat-RL-v2 | 2 | 8.087500 | 30b
| **Sonya-7B** | 2 | **7.962500** | **7b**
| xDAN-L1-Chat-RL-v1 | 2 | 7.825000 | 7b
| gpt-3.5-turbo | 2 | 7.812500 | 20b
| claude-v1 | 2 | 7.650000 | -
| wizardlm-30b | 2 | 6.887500 | 30b
| vicuna-33b-v1.3 | 2 | 6.787500 | 33b
| Llama-2-70b-chat | 2 | 6.725000 | 70b

If you'd like to replicate the MT-Bench run, please ensure that the Alpaca prompt template is applied to the model. I did this by putting "alpaca" in the model path to trigger the `AlpacaAdapter`.
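
For anyone unfamiliar with that harness: FastChat picks its conversation template by substring-matching the model path, so a local path containing "alpaca" should resolve to the Alpaca-style template. The sketch below illustrates that lookup with a hypothetical path; it is an assumption about the harness's behavior, not an excerpt from the actual benchmark run.

```
# Hedged sketch: FastChat resolves a conversation template from the model path,
# so a path containing "alpaca" should select the Alpaca-style prompt format.
from fastchat.model import get_conversation_template

conv = get_conversation_template("models/sonya-7b-alpaca")  # hypothetical local path
conv.append_message(conv.roles[0], "Summarize the merge recipe in one sentence.")
conv.append_message(conv.roles[1], None)

print(conv.name)          # which template was matched
print(conv.get_prompt())  # should resemble the Alpaca format shown earlier
```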