SanjiWatsuki committed f389e83 (1 parent: ecb2603): Update README.md

Files changed (1): README.md (+59 -42)

README.md CHANGED
@@ -4,41 +4,43 @@ language:
  - en
  tags:
  - merge
- - not-for-all-audiences
- - nsfw
  ---

  <div style="display: flex; justify-content: center; align-items: center">
- <img src="https://huggingface.co/SanjiWatsuki/Silicon-Maid-7B/resolve/main/assets/cybermaid.png">
  </div>

  <p align="center">
- <big><b>Top 1 RP Performer on MT-bench 🤪</b></big>
- </p>
-
- <p align="center">
- <strong>Next Gen Silicon-Based RP Maid</strong>
  </p>

  ## WTF is This?

- Silicon-Maid-7B is another model targeted at being both strong at RP **and** a smart cookie that can follow character cards very well. As of right now, Silicon-Maid-7B outscores both of my previous 7B RP models in my RP benchmark, and I have been impressed by this model's creativity. It is suitable for RP/ERP and general use.

- It's built on [xDAN-AI/xDAN-L1-Chat-RL-v1](https://huggingface.co/xDAN-AI/xDAN-L1-Chat-RL-v1), a 7B model which scores unusually high on MT-Bench, and chargoddard/loyal-piano-m7, an Alpaca-format 7B model with surprisingly creative outputs. I was excited to see this model for two main reasons:
- * MT-Bench normally correlates well with real-world model quality
- * It was an Alpaca prompt model with high benchmark scores, which meant I could try swapping out the Marcoroni frankenmerge used in my previous model.

  **MT-Bench Average Turn**
  | model | score | size
  |--------------------|-----------|--------
  | gpt-4 | 8.99 | -
- | *xDAN-L1-Chat-RL-v1* | 8.24^1 | 7b
  | Starling-7B | 8.09 | 7b
  | Claude-2 | 8.06 | -
- | **Silicon-Maid** | **7.96** | **7b**
- | *Loyal-Macaroni-Maid*| 7.95 | 7b
  | gpt-3.5-turbo | 7.94 | 20b?
  | Claude-1 | 7.90 | -
  | OpenChat-3.5 | 7.81 | -
@@ -46,55 +48,40 @@ It's built on [xDAN-AI/xDAN-L1-Chat-RL-v1](https://huggingface.co/xDAN-AI/xDAN-L1-Chat-RL-v1)
  | wizardlm-30b | 7.01 | 30b
  | Llama-2-70b-chat | 6.86 | 70b

- ^1 xDAN's testing placed it at 8.35; this number is from my independent MT-Bench run.

- <img src="https://huggingface.co/SanjiWatsuki/Silicon-Maid-7B/resolve/main/assets/fig-silicon-loyal.png">
-
- It's unclear to me if xDAN-L1-Chat-RL-v1 is overtly benchmaxxing, but it seemed like a solid 7B from my limited testing (although nothing that screams 2nd-best model behind GPT-4). Amusingly, the model lost a lot of Reasoning and Coding skill in the merger. This was a much greater MT-Bench dropoff than I expected, perhaps suggesting the Math/Reasoning ability in the original model was rather dense and susceptible to being lost in a DARE TIES merger.
-
- Besides that, the merger is almost identical to the Loyal-Macaroni-Maid merger with a new base "smart cookie" model. If you liked any of my previous RP models, give this one a shot and let me know in the Community tab what you think!

  ### The Sauce

  ```
- models: # Top-Loyal-Bruins-Maid-DARE-7B
-   - model: mistralai/Mistral-7B-v0.1
-     # no parameters necessary for base model
  - model: xDAN-AI/xDAN-L1-Chat-RL-v1
    parameters:
-     weight: 0.4
-     density: 0.8
- - model: chargoddard/loyal-piano-m7
    parameters:
      weight: 0.3
-     density: 0.8
- - model: Undi95/Toppy-M-7B
    parameters:
      weight: 0.2
-     density: 0.4
  - model: NeverSleep/Noromaid-7b-v0.2
    parameters:
      weight: 0.2
-     density: 0.4
  - model: athirdpath/NSFW_DPO_vmgb-7b
    parameters:
      weight: 0.2
-     density: 0.4
- merge_method: dare_ties
  base_model: mistralai/Mistral-7B-v0.1
  parameters:
    int8_mask: true
  dtype: bfloat16
  ```
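As a rough sketch of what a `dare_ties` merge does per parameter (illustrative only, not mergekit's actual code; the TIES sign-consensus step is omitted here): each expert's delta from the base model is randomly sparsified to the configured `density`, rescaled by `1/density`, scaled by its merge `weight`, and added back onto the base.

```python
import random

def dare_merge(base, experts, seed=0):
    """Toy DARE merge for one flat weight vector.

    experts: list of (weights, merge_weight, density) tuples.
    Each expert's delta from the base is randomly sparsified,
    rescaled by 1/density, scaled by its merge weight, and summed.
    """
    rng = random.Random(seed)
    merged = list(base)
    for weights, merge_weight, density in experts:
        for i, (w, b) in enumerate(zip(weights, base)):
            # drop each delta entry with probability (1 - density)
            if rng.random() < density:
                merged[i] += merge_weight * (w - b) / density
    return merged

# With density 1.0 nothing is dropped, so this reduces to a weighted delta sum
print(dare_merge([0.0, 0.0], [([1.0, 2.0], 0.5, 1.0)]))  # [0.5, 1.0]
```

The `1/density` rescaling is what lets a heavily sparsified delta still contribute its expected magnitude, which is the core trick of DARE.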

- For more information about why I use this merger, see the [Loyal-Macaroni-Maid repo](https://huggingface.co/SanjiWatsuki/Loyal-Macaroni-Maid-7B#the-sauce-all-you-need-is-dare).
-
  ### Prompt Template (Alpaca)
- I found the best SillyTavern results using the Noromaid template, but please try other templates! Let me know if you find anything good.
-
- SillyTavern config files: [Context](https://files.catbox.moe/ifmhai.json), [Instruct](https://files.catbox.moe/ttw1l9.json).
-
- Additionally, here is my highly recommended [Text Completion preset](https://huggingface.co/SanjiWatsuki/Loyal-Macaroni-Maid-7B/blob/main/Characters/MinP.json). You can tweak it by adjusting temperature up or dropping min p to boost creativity, or by raising min p to increase stability. You shouldn't need to touch anything else!

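As a rough illustration of what the min p knob in that preset controls (a hypothetical helper, not the preset's actual implementation): tokens whose probability falls below `min_p` times the top token's probability are discarded before sampling, so raising min p prunes more of the tail.

```python
def min_p_filter(probs, min_p=0.1):
    """Zero out tokens below min_p * max(probs), then renormalize."""
    threshold = min_p * max(probs)
    kept = [p if p >= threshold else 0.0 for p in probs]
    total = sum(kept)
    return [p / total for p in kept]

# With min_p=0.2 the threshold is 0.2 * 0.5 = 0.1, so the 0.05 tail token is dropped
filtered = min_p_filter([0.5, 0.3, 0.15, 0.05], min_p=0.2)
print(filtered)
```

This is why min p trades creativity for stability: a higher value leaves only tokens close in probability to the model's top choice.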
  ```
  Below is an instruction that describes a task. Write a response that appropriately completes the request.
@@ -105,4 +92,34 @@
  ### Response:
  ```

-
  - en
  tags:
  - merge
  ---

  <div style="display: flex; justify-content: center; align-items: center">
+ <img src="https://huggingface.co/SanjiWatsuki/Sonya-7B/resolve/main/assets/Sonya.jpg">
  </div>

  <p align="center">
+ <big><b>Top 1 Performer on MT-bench 🤪</b></big>
  </p>

  ## WTF is This?

+ Sonya-7B is, at the time of writing, the **#1 performing model on the MT-Bench first turn**, ahead of GPT-4, and the #2 model overall on MT-Bench, to the best of my knowledge. Sonya-7B should be a good all-purpose model for all tasks, including assistant use, RP, etc.
+
+ Sonya-7B has a structure similar to my previous model, Silicon-Maid-7B. It's a merge of [xDAN-AI/xDAN-L1-Chat-RL-v1](https://huggingface.co/xDAN-AI/xDAN-L1-Chat-RL-v1), [Jan-Ai's Stealth v1.2](https://huggingface.co/jan-hq/stealth-v1.2), [chargoddard/piano-medley-7b](https://huggingface.co/chargoddard/piano-medley-7b), [NeverSleep/Noromaid-7B-v0.2](https://huggingface.co/NeverSleep/Noromaid-7b-v0.2), and [athirdpath/NSFW_DPO_vmgb-7b](https://huggingface.co/athirdpath/NSFW_DPO_vmgb-7b). Sauce is below. Somehow, by combining these pieces, it substantially outscores any of its parents on MT-Bench.
+
+ I picked these models because:
+ * MT-Bench normally correlates well with real-world model quality, and xDAN performs well on it.
+ * Almost all models in the mix were Alpaca prompt formatted, which gives prompt consistency.
+ * Stealth v1.2 has been a magic sprinkle that seems to increase my MT-Bench scores.
+ * I added RP models because doing so boosted the Writing and Roleplay benchmarks 👀
+
+ Based on the parent models, I expect this model to be used with an 8192 context window. Please use an NTK scaling alpha of 2.6 to experimentally try out 16383 context.
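One common way an NTK scaling alpha is applied is by scaling the RoPE base frequency. A sketch under that assumption (the helper name, `head_dim=128` for Mistral-7B, and rope base 10000 are mine; the exact formula your backend uses may differ):

```python
def ntk_scaled_rope_base(alpha, head_dim=128, base=10000.0):
    """NTK-aware RoPE scaling: raise the rotary base so the position
    embeddings stretch to cover a longer context window."""
    return base * alpha ** (head_dim / (head_dim - 2))

# alpha = 2.6 raises the rotary base to roughly 2.64x its default
print(ntk_scaled_rope_base(2.6))
```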
+
+ **Let me be candid:** despite the test scores, I do not believe this model is a GPT killer. I think it's a very sharp model that probably punches way above its weight, but it's still a 7B model. Keep your expectations in check 😉

  **MT-Bench Average Turn**
  | model | score | size
  |--------------------|-----------|--------
  | gpt-4 | 8.99 | -
+ | **Sonya-7B** | **8.52** | **7b**
+ | xDAN-L1-Chat-RL-v1 | 8.34 | 7b
  | Starling-7B | 8.09 | 7b
  | Claude-2 | 8.06 | -
+ | *Silicon-Maid* | *7.96* | *7b*
+ | *Loyal-Macaroni-Maid*| *7.95* | *7b*
  | gpt-3.5-turbo | 7.94 | 20b?
  | Claude-1 | 7.90 | -
  | OpenChat-3.5 | 7.81 | -
  | wizardlm-30b | 7.01 | 30b
  | Llama-2-70b-chat | 6.86 | 70b

+ <img src="https://huggingface.co/SanjiWatsuki/Sonya-7B/resolve/main/assets/mt-bench-gpt.png">
+
+ <img src="https://huggingface.co/SanjiWatsuki/Sonya-7B/resolve/main/assets/mt-bench-comparison.png">

  ### The Sauce

  ```
+ models:
  - model: xDAN-AI/xDAN-L1-Chat-RL-v1
    parameters:
+     weight: 1
+     density: 1
+ - model: chargoddard/piano-medley-7b
    parameters:
      weight: 0.3
+ - model: jan-hq/stealth-v1.2
    parameters:
      weight: 0.2
  - model: NeverSleep/Noromaid-7b-v0.2
    parameters:
      weight: 0.2
  - model: athirdpath/NSFW_DPO_vmgb-7b
    parameters:
      weight: 0.2
+ merge_method: ties
  base_model: mistralai/Mistral-7B-v0.1
  parameters:
+   density: 0.4
    int8_mask: true
+   normalize: true
  dtype: bfloat16
  ```
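Unlike the DARE merge used for Silicon-Maid, `merge_method: ties` resolves conflicting deltas by sign consensus. A toy sketch of that idea for a single parameter position (illustrative only, not mergekit's implementation): contributions that disagree with the weighted-majority sign are dropped, and the survivors are averaged with their weights renormalized, which is roughly what `normalize: true` asks for.

```python
def ties_resolve(deltas, weights):
    """Keep only deltas agreeing with the majority (weighted) sign,
    then take their normalized weighted average."""
    total = sum(w * d for w, d in zip(weights, deltas))
    sign = 1.0 if total >= 0 else -1.0
    agreeing = [(w, d) for w, d in zip(weights, deltas) if d * sign > 0]
    if not agreeing:
        return 0.0
    wsum = sum(w for w, _ in agreeing)
    return sum(w * d for w, d in agreeing) / wsum

# Two models push this parameter up, one pulls it down: the minority is dropped
print(ties_resolve([1.0, 2.0, -0.5], [1.0, 0.3, 0.2]))
```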

  ### Prompt Template (Alpaca)

  ```
  Below is an instruction that describes a task. Write a response that appropriately completes the request.

  ### Response:
  ```

+ I found that this model **performed worse** with the xDAN prompt format, so despite the heavy weight of xDAN in this merge, I recommend *against* its use.
+
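For scripted use, the Alpaca template above can be assembled like this. The `### Instruction:` block follows the standard single-turn Alpaca layout, which the diff view truncates, so treat that part as an assumption:

```python
def alpaca_prompt(instruction):
    # Standard single-turn Alpaca format
    return (
        "Below is an instruction that describes a task. "
        "Write a response that appropriately completes the request.\n\n"
        f"### Instruction:\n{instruction}\n\n"
        "### Response:\n"
    )

print(alpaca_prompt("Write a haiku about silicon."))
```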
+ ### Other Benchmark Stuff
+
+ **########## First turn ##########**
+ | model | turn | score | size
+ |--------------------|------|----------|--------
+ | **Sonya-7B** | 1 | **9.06875** | **7b**
+ | gpt-4 | 1 | 8.95625 | -
+ | xDAN-L1-Chat-RL-v1 | 1 | 8.87500 | 7b
+ | xDAN-L2-Chat-RL-v2 | 1 | 8.78750 | 30b
+ | claude-v1 | 1 | 8.15000 | -
+ | gpt-3.5-turbo | 1 | 8.07500 | 20b
+ | vicuna-33b-v1.3 | 1 | 7.45625 | 33b
+ | wizardlm-30b | 1 | 7.13125 | 30b
+ | oasst-sft-7-llama-30b | 1 | 7.10625 | 30b
+ | Llama-2-70b-chat | 1 | 6.98750 | 70b
+
+ **########## Second turn ##########**
+ | model | turn | score | size
+ |--------------------|------|-----------|--------
+ | gpt-4 | 2 | 9.025000 | -
+ | xDAN-L2-Chat-RL-v2 | 2 | 8.087500 | 30b
+ | **Sonya-7B** | 2 | **7.962500** | **7b**
+ | xDAN-L1-Chat-RL-v1 | 2 | 7.825000 | 7b
+ | gpt-3.5-turbo | 2 | 7.812500 | 20b
+ | claude-v1 | 2 | 7.650000 | -
+ | wizardlm-30b | 2 | 6.887500 | 30b
+ | vicuna-33b-v1.3 | 2 | 6.787500 | 33b
+ | Llama-2-70b-chat | 2 | 6.725000 | 70b
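
As a sanity check, Sonya-7B's headline Average Turn score is just the mean of the two per-turn scores above:

```python
first_turn = 9.06875   # Sonya-7B, turn 1
second_turn = 7.96250  # Sonya-7B, turn 2
average = (first_turn + second_turn) / 2
print(round(average, 2))  # 8.52, the Average Turn score in the table above
```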