---
base_model:
- DewEfresh/neo_7b
- DewEfresh/neo_7b
tags:
- merge
- mergekit
- lazymergekit
- DewEfresh/neo_7b
---
# Neo_7b-merge16
Neo_7b-merge16 is a self-merge of the following model (used as both merge sources) made with [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [DewEfresh/neo_7b](https://huggingface.co/DewEfresh/neo_7b)
## 🧩 Configuration
```yaml
# Define the slices for the model merging process
slices:
  - sources:
      # Merge layer 3 with layer 0
      - model: DewEfresh/neo_7b
        layer_range: [3, 3]
      - model: DewEfresh/neo_7b
        layer_range: [0, 0]
  - sources:
      # Merge layer 3 with layer 1
      - model: DewEfresh/neo_7b
        layer_range: [3, 3]
      - model: DewEfresh/neo_7b
        layer_range: [1, 1]
  - sources:
      # Merge layer 3 with layer 2
      - model: DewEfresh/neo_7b
        layer_range: [3, 3]
      - model: DewEfresh/neo_7b
        layer_range: [2, 2]
  - sources:
      # Merge layer 7 with layer 4
      - model: DewEfresh/neo_7b
        layer_range: [7, 7]
      - model: DewEfresh/neo_7b
        layer_range: [4, 4]
  - sources:
      # Merge layer 7 with layer 5
      - model: DewEfresh/neo_7b
        layer_range: [7, 7]
      - model: DewEfresh/neo_7b
        layer_range: [5, 5]
  - sources:
      # Merge layer 7 with layer 6
      - model: DewEfresh/neo_7b
        layer_range: [7, 7]
      - model: DewEfresh/neo_7b
        layer_range: [6, 6]
  - sources:
      # Merge layer 11 with layer 8
      - model: DewEfresh/neo_7b
        layer_range: [11, 11]
      - model: DewEfresh/neo_7b
        layer_range: [8, 8]
  - sources:
      # Merge layer 11 with layer 9
      - model: DewEfresh/neo_7b
        layer_range: [11, 11]
      - model: DewEfresh/neo_7b
        layer_range: [9, 9]
  - sources:
      # Merge layer 11 with layer 10
      - model: DewEfresh/neo_7b
        layer_range: [11, 11]
      - model: DewEfresh/neo_7b
        layer_range: [10, 10]
  - sources:
      # Merge layer 15 with layer 12
      - model: DewEfresh/neo_7b
        layer_range: [15, 15]
      - model: DewEfresh/neo_7b
        layer_range: [12, 12]
  - sources:
      # Merge layer 15 with layer 13
      - model: DewEfresh/neo_7b
        layer_range: [15, 15]
      - model: DewEfresh/neo_7b
        layer_range: [13, 13]
  - sources:
      # Merge layer 15 with layer 14
      - model: DewEfresh/neo_7b
        layer_range: [15, 15]
      - model: DewEfresh/neo_7b
        layer_range: [14, 14]
  - sources:
      # Merge layer 19 with layer 16
      - model: DewEfresh/neo_7b
        layer_range: [19, 19]
      - model: DewEfresh/neo_7b
        layer_range: [16, 16]
  - sources:
      # Merge layer 19 with layer 17
      - model: DewEfresh/neo_7b
        layer_range: [19, 19]
      - model: DewEfresh/neo_7b
        layer_range: [17, 17]
  - sources:
      # Merge layer 19 with layer 18
      - model: DewEfresh/neo_7b
        layer_range: [19, 19]
      - model: DewEfresh/neo_7b
        layer_range: [18, 18]
  - sources:
      # Merge layer 23 with layer 20
      - model: DewEfresh/neo_7b
        layer_range: [23, 23]
      - model: DewEfresh/neo_7b
        layer_range: [20, 20]
  - sources:
      # Merge layer 23 with layer 21
      - model: DewEfresh/neo_7b
        layer_range: [23, 23]
      - model: DewEfresh/neo_7b
        layer_range: [21, 21]
  - sources:
      # Merge layer 23 with layer 22
      - model: DewEfresh/neo_7b
        layer_range: [23, 23]
      - model: DewEfresh/neo_7b
        layer_range: [22, 22]
  - sources:
      # Merge layer 27 with layer 24
      - model: DewEfresh/neo_7b
        layer_range: [27, 27]
      - model: DewEfresh/neo_7b
        layer_range: [24, 24]
  - sources:
      # Merge layer 27 with layer 25
      - model: DewEfresh/neo_7b
        layer_range: [27, 27]
      - model: DewEfresh/neo_7b
        layer_range: [25, 25]
  - sources:
      # Merge layer 27 with layer 26
      - model: DewEfresh/neo_7b
        layer_range: [27, 27]
      - model: DewEfresh/neo_7b
        layer_range: [26, 26]
# Specify the merging method for the slices
merge_method: slerp
base_model: DewEfresh/neo_7b
parameters:
  t: 0.3333  # Set global interpolation value to 33.33%
dtype: bfloat16
```
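To make the `slerp` step concrete, the snippet below is a minimal sketch of spherical linear interpolation between two weight tensors with the global factor `t = 0.3333`, which keeps the result closer to the first source tensor. The function and tensors here are illustrative stand-ins only, not mergekit's actual implementation (which applies its own handling of per-layer `t` schedules and edge cases internally).

```python
import torch

def slerp(t: float, v0: torch.Tensor, v1: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Spherical linear interpolation between two weight tensors of the same shape."""
    a, b = v0.flatten().float(), v1.flatten().float()
    a_unit = a / (a.norm() + eps)
    b_unit = b / (b.norm() + eps)
    dot = torch.clamp(torch.dot(a_unit, b_unit), -1.0, 1.0)
    theta = torch.acos(dot)             # angle between the two weight vectors
    if theta.abs().item() < 1e-4:       # nearly colinear: fall back to linear interpolation
        return (1.0 - t) * v0 + t * v1
    sin_theta = torch.sin(theta)
    w0 = torch.sin((1.0 - t) * theta) / sin_theta
    w1 = torch.sin(t * theta) / sin_theta
    return (w0 * a + w1 * b).reshape(v0.shape).to(v0.dtype)

# With t = 0.3333 the merged tensor sits a third of the way along the arc
# from the first source toward the second.
layer_a = torch.randn(4096, 4096)  # stand-in for a weight from the higher layer (e.g. layer 3)
layer_b = torch.randn(4096, 4096)  # stand-in for a weight from the lower layer (e.g. layer 0)
merged = slerp(0.3333, layer_a, layer_b)
```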
## 💻 Usage
```python
!pip install -qU transformers accelerate

from transformers import AutoTokenizer
import transformers
import torch

model = "DewEfresh/Neo_7b-merge16"
messages = [{"role": "user", "content": "What is a large language model?"}]

# Build the prompt with the model's chat template
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

# Load the merged model into a text-generation pipeline
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)

# Generate a response
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```