---
license: other
license_name: yi-license
license_link: https://huggingface.co/01-ai/Yi-34B/blob/main/LICENSE
language:
- en
library_name: transformers
base_model: []
tags:
- mergekit
- merge
- Yi
---
# Yi 34B Merge v7

A merge of several Yi 34B 200K models using the new DARE TIES method via mergekit. The goal is a merged model that performs well at 32K+ context.

## Prompt template: Orca-Vicuna
```
SYSTEM: {system_message}
USER: {prompt}
ASSISTANT:
```
The model may also recognize ChatML, and possibly Alpaca-like formats. Raw prompting as described here is effective as well: https://old.reddit.com/r/LocalLLaMA/comments/18zqy4s/the_secret_to_writing_quality_stories_with_llms/
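
For reference, a minimal sketch of assembling a single-turn prompt in this format (the helper name and messages below are just illustrative):

```python
def build_orca_vicuna_prompt(system_message: str, user_prompt: str) -> str:
    """Assemble a single-turn prompt in the Orca-Vicuna format shown above."""
    return f"SYSTEM: {system_message}\nUSER: {user_prompt}\nASSISTANT:"


# Illustrative usage:
prompt = build_orca_vicuna_prompt(
    "You are a creative writing assistant.",
    "Continue the story below in the same style.",
)
```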



## Running
As a Yi model, try running it with a lower temperature, 0.02-0.06 MinP, a little repetition penalty, maybe mirostat with a low tau, and no other samplers. Yi tends to run "hot" by default, and it really needs a low temperature + MinP to cull the huge vocabulary.
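
As a rough sketch of those settings with the transformers `generate` API (the path and values below are placeholders rather than tuned recommendations, and `min_p` needs a reasonably recent transformers release):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "/path/to/this-merge"  # hypothetical local path to this model

tokenizer = AutoTokenizer.from_pretrained(model_path)
# See the note below about lowering max_position_embeddings before loading in transformers.
model = AutoModelForCausalLM.from_pretrained(
    model_path, torch_dtype=torch.bfloat16, device_map="auto"
)

prompt = "SYSTEM: You are a helpful assistant.\nUSER: Summarize the text below.\nASSISTANT:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

output = model.generate(
    **inputs,
    max_new_tokens=256,
    do_sample=True,
    temperature=0.8,          # lower than Yi's "hot" default behavior
    min_p=0.05,               # within the 0.02-0.06 range suggested above
    repetition_penalty=1.05,  # "a little" repetition penalty
)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```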

24GB GPUs can efficiently run Yi-34B-200K models at **45K-90K context** with exllamav2 and performant UIs like [exui](https://github.com/turboderp/exui). I go into more detail in this [post](https://old.reddit.com/r/LocalLLaMA/comments/1896igc/how_i_run_34b_models_at_75k_context_on_24gb_fast/). 16GB GPUs can still reach high context with more aggressive quantization.

I recommend exl2 quantizations profiled on data similar to the desired task; the model is especially sensitive to the calibration data at low bpw. I've uploaded my own fiction-oriented quantizations here: https://huggingface.co/collections/brucethemoose/most-recent-merge-65742644ca03b6c514afa204

To load or train this model in full-context backends like transformers, you *must* change `max_position_embeddings` in config.json to a value lower than 200,000, otherwise you will OOM. I do not recommend running high context without context-efficient backends like exllamav2 or unsloth.
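
A minimal sketch of that config.json edit, assuming a hypothetical local copy of the model (the 32,768 ceiling is just an example):

```python
import json
from pathlib import Path

model_path = Path("/path/to/this-merge")  # hypothetical local path to this model
config_file = model_path / "config.json"

# Lower the advertised context window so full-context backends do not
# try to accommodate all 200K positions.
config = json.loads(config_file.read_text())
config["max_position_embeddings"] = 32768  # example value, well below 200,000
config_file.write_text(json.dumps(config, indent=2))
```

The same override can also be passed at load time, e.g. `AutoConfig.from_pretrained(model_path, max_position_embeddings=32768)`, with the resulting config handed to `AutoModelForCausalLM.from_pretrained`.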


## Testing Notes

See: https://huggingface.co/brucethemoose/Yi-34B-200K-DARE-merge-v5#testing-notes

A "4k" merge model was created to try and extend the context of SUS Chat and DPO-bagel before adding them to the merge: https://huggingface.co/brucethemoose/SUS-Bagel-200K-DARE-Test

In addition, the weight gradients are biased towards Vicuna-format models in the first few layers to try to "emphasize" the Orca-Vicuna prompt template. How successful this is remains to be seen.


### Merge Method

This model was merged using the [DARE](https://arxiv.org/abs/2311.03099) [TIES](https://arxiv.org/abs/2306.01708) merge method, with /home/alpha/Storage/Models/Raw/chargoddard_Yi-34B-200K-Llama as the base.

### Models Merged

The following models were included in the merge:
* https://huggingface.co/kyujinpy/PlatYi-34B-200k-Q-FastChat
* https://huggingface.co/jondurbin/bagel-34b-v0.2
* https://huggingface.co/NousResearch/Nous-Capybara-34B
* https://huggingface.co/migtissera/Tess-M-Creative-v1.0
* https://huggingface.co/brucethemoose/SUS-Bagel-200K-DARE-Test
* https://huggingface.co/Mihaiii/Pallas-0.5
* https://huggingface.co/bhenrym14/airoboros-3_1-yi-34b-200k
* https://huggingface.co/adamo1139/Yi-34B-200K-AEZAKMI-v2
* https://huggingface.co/migtissera/Tess-34B-v1.4
* https://huggingface.co/SUSTech/SUS-Chat-34B
* https://huggingface.co/jondurbin/bagel-dpo-34b-v0.2
* https://huggingface.co/chargoddard/Yi-34B-200K-Llama
* https://huggingface.co/chargoddard/Yi-34B-Llama


### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: /home/alpha/Storage/Models/Raw/chargoddard_Yi-34B-200K-Llama
    # No parameters necessary for base model
  - model: /home/alpha/Storage/Models/Raw/migtissera_Tess-34B-v1.4
    parameters:
      weight: [0.23, 0.125, 0.125, 0.125, 0.125, 0.125]
      density: 0.59
  - model: /home/alpha/Models/Raw/Mihaiii_Pallas-0.5
    parameters:
      weight: [0.23, 0.125, 0.125, 0.125, 0.125, 0.125]
      density: 0.59
  - model: /home/alpha//Storage/Models/Raw/bhenrym14_airoboros-3_1-yi-34b-200k
    parameters:
      weight: [0.02, 0.106, 0.106, 0.106, 0.106, 0.106]
      density: 0.59
  - model: /home/alpha/Storage/Models/Raw/jondurbin_bagel-34b-v0.2
    # Only the SFT in the main merge since the DPO version seems to have no long context ability at all
    parameters:
      weight: [0.02, 0.100, 0.100, 0.100, 0.100, 0.100]
      density: 0.4
  - model: /home/alpha/Storage/Models/Raw/kyujinpy_PlatYi-34B-200k-Q-FastChat
    parameters:
      weight: [0.02, 0.100, 0.100, 0.100, 0.100, 0.100]
      density: 0.59
  #- model: /home/alpha/Storage/Models/Raw/ehartford_dolphin-2.2-yi-34b-200k
  #  Dolphin 200K seems to be funky according to multiple leaderboards and perplexity tests?
  #  parameters:
  #    weight: 0.15
  #    density: 0.6
  - model: /home/alpha/Models/Raw/adamo1139_Yi-34B-200K-AEZAKMI-v2
    parameters:
      weight: [0.02, 0.110, 0.110, 0.110, 0.110, 0.110]
      density: 0.59
  - model: /home/alpha/Storage/Models/Raw/Nous-Capybara-34B
    parameters:
      weight:  [0.22, 0.126, 0.126, 0.126, 0.126, 0.126]
      density: 0.59
  - model: /home/alpha/Storage/Models/Raw/4kmerge
    parameters:
      weight: [0.02,  0.108, 0.108, 0.108, 0.108, 0.108]
      density: 0.5
  - model: /home/alpha/Models/Raw/migtissera_Tess-M-Creative-v1.0
    parameters:
      weight: [0.22, 0.100, 0.100, 0.100, 0.100, 0.10]
      density: 0.59
merge_method: dare_ties
tokenizer_source: union
base_model: /home/alpha/Storage/Models/Raw/chargoddard_Yi-34B-200K-Llama
parameters:
  int8_mask: true
dtype: bfloat16

```

The following config was used for the "4kmerge" model:

```yaml
models:
  - model: /home/alpha/Models/Raw/chargoddard_Yi-34B-Llama
  # No parameters necessary for base model
  - model: /home/alpha/Storage/Models/Raw/chargoddard_Yi-34B-200K-Llama
    parameters:
      weight: 0.5
      density: 1
  - model: /home/alpha/Models/Raw/SUSTech_SUS-Chat-34B
    parameters:
      weight: 0.2
      density: 0.12
  - model: /home/alpha/Models/Raw/jondurbin_bagel-dpo-34b-v0.2
    parameters:
      weight: 0.2
      density: 0.15
  - model: /home/alpha/Models/Raw/jondurbin_bagel-34b-v0.2
    parameters:
      weight: 0.1
      density: 0.12
merge_method: dare_ties
tokenizer_source: union
base_model: /home/alpha/Models/Raw/chargoddard_Yi-34B-Llama
parameters:
  int8_mask: true
dtype: bfloat16
```
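
Assuming mergekit is installed and the local paths above are adjusted to your own system, either config can be reproduced with mergekit's CLI, roughly `mergekit-yaml config.yaml ./output-model --cuda` (exact flags may vary between mergekit versions).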