automerger
committed 53746d4 • Parent(s): e238ba3

Upload folder using huggingface_hub

Files changed:
- README.md +49 -30
- mergekit_config.yml +13 -9
- model-00001-of-00002.safetensors +1 -1
- model-00002-of-00002.safetensors +1 -1
README.md CHANGED

@@ -1,45 +1,64 @@
 ---
+license: apache-2.0
+tags:
+- merge
+- mergekit
+- lazymergekit
+- automerger
 base_model:
 - mayacinka/yam-jom-7B
 - CorticalStack/shadow-clown-7B-dare
-library_name: transformers
-tags:
-- mergekit
-- merge
-
 ---
-# merge
 
-This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
-
-## Merge Details
-### Merge Method
-
-This model was merged using the [DARE](https://arxiv.org/abs/2311.03099) [TIES](https://arxiv.org/abs/2306.01708) merge method using [mayacinka/yam-jom-7B](https://huggingface.co/mayacinka/yam-jom-7B) as a base.
-
-### Models Merged
-
-The following models were included in the merge:
+# YamShadow-7B
+
+YamShadow-7B is an automated merge created by [Maxime Labonne](https://huggingface.co/mlabonne) using the following configuration.
+* [mayacinka/yam-jom-7B](https://huggingface.co/mayacinka/yam-jom-7B)
 * [CorticalStack/shadow-clown-7B-dare](https://huggingface.co/CorticalStack/shadow-clown-7B-dare)
 
-### Configuration
-
-The following YAML configuration was used to produce this model:
+## 🧩 Configuration
 
 ```yaml
-
-models:
-  - model: mayacinka/yam-jom-7B
-    # No parameters necessary for base model
-  - model: CorticalStack/shadow-clown-7B-dare
-    parameters:
-      density: 0.53
-      weight: 0.6
-merge_method: dare_ties
+slices:
+  - sources:
+      - model: mayacinka/yam-jom-7B
+        layer_range: [0, 32]
+      - model: CorticalStack/shadow-clown-7B-dare
+        layer_range: [0, 32]
+merge_method: slerp
 base_model: mayacinka/yam-jom-7B
 parameters:
-  int8_mask: true
+  t:
+    - filter: self_attn
+      value: [0, 0.5, 0.3, 0.7, 1]
+    - filter: mlp
+      value: [1, 0.5, 0.7, 0.3, 0]
+    - value: 0.5
 dtype: bfloat16
 random_seed: 0
-
-```
+```
+
+## 💻 Usage
+
+```python
+!pip install -qU transformers accelerate
+
+from transformers import AutoTokenizer
+import transformers
+import torch
+
+model = "automerger/YamShadow-7B"
+messages = [{"role": "user", "content": "What is a large language model?"}]
+
+tokenizer = AutoTokenizer.from_pretrained(model)
+prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
+pipeline = transformers.pipeline(
+    "text-generation",
+    model=model,
+    torch_dtype=torch.float16,
+    device_map="auto",
+)
+
+outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
+print(outputs[0]["generated_text"])
+```
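The switch from dare_ties to slerp means each pair of corresponding weight tensors is now blended by spherical linear interpolation, with the factor `t` ramped across five evenly spaced layer groups: the `self_attn` ramp `[0, 0.5, 0.3, 0.7, 1]` and the `mlp` ramp `[1, 0.5, 0.7, 0.3, 0]` run in opposite directions, and all remaining tensors use a flat `t = 0.5`. As a rough illustration of what `merge_method: slerp` computes per tensor (a minimal sketch under that reading, not mergekit's exact implementation; the tensor shapes are placeholders):

```python
import torch

def slerp(t: float, v0: torch.Tensor, v1: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Spherical linear interpolation between two weight tensors of the same shape."""
    # Work on flattened copies so the angle is measured in one vector space.
    v0f, v1f = v0.flatten().float(), v1.flatten().float()
    v0u = v0f / (v0f.norm() + eps)
    v1u = v1f / (v1f.norm() + eps)
    dot = torch.clamp(torch.dot(v0u, v1u), -1.0, 1.0)
    omega = torch.arccos(dot)  # angle between the two weight directions
    if omega < eps:  # nearly colinear: fall back to plain linear interpolation
        return ((1 - t) * v0f + t * v1f).reshape(v0.shape)
    so = torch.sin(omega)
    out = (torch.sin((1 - t) * omega) / so) * v0f + (torch.sin(t * omega) / so) * v1f
    return out.reshape(v0.shape)

# t = 0 reproduces the first tensor, t = 1 the second, and 0.5 sits midway on
# the arc; mergekit applies a different t per layer group via the filters above.
a, b = torch.randn(32, 32), torch.randn(32, 32)
merged = slerp(0.5, a, b)
```

Since `t = 0` returns one parent's weights unchanged and `t = 1` the other's, the opposed ramps trade attention and MLP sublayers between the two parents as depth increases, rather than averaging everything uniformly.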
mergekit_config.yml CHANGED

@@ -1,15 +1,19 @@
 
-models:
-  - model: mayacinka/yam-jom-7B
-    # No parameters necessary for base model
-  - model: CorticalStack/shadow-clown-7B-dare
-    parameters:
-      density: 0.53
-      weight: 0.6
-merge_method: dare_ties
+slices:
+  - sources:
+      - model: mayacinka/yam-jom-7B
+        layer_range: [0, 32]
+      - model: CorticalStack/shadow-clown-7B-dare
+        layer_range: [0, 32]
+merge_method: slerp
 base_model: mayacinka/yam-jom-7B
 parameters:
-  int8_mask: true
+  t:
+    - filter: self_attn
+      value: [0, 0.5, 0.3, 0.7, 1]
+    - filter: mlp
+      value: [1, 0.5, 0.7, 0.3, 0]
+    - value: 0.5
 dtype: bfloat16
 random_seed: 0
 
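For reference, a mergekit configuration like this is normally executed through mergekit's `mergekit-yaml` command line entry point. A minimal notebook-style sketch in the card's own style (the output directory name here is an arbitrary choice):

```python
# Install mergekit, then run the merge described by mergekit_config.yml.
!pip install -qU mergekit

# --copy-tokenizer carries the base model's tokenizer into the output folder.
!mergekit-yaml mergekit_config.yml ./merge --copy-tokenizer
```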
model-00001-of-00002.safetensors CHANGED

@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
+oid sha256:7a1143a331c91175b8668f1e842dbfa8e4fa58a056184cef5d57bb058212b7cf
 size 9825524456
model-00002-of-00002.safetensors CHANGED

@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
+oid sha256:760fa1fcdab26d6c723e980aec2966440fbacc3d374fe22fd3b8adc292514e01
 size 4657973592
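The two safetensors entries are Git LFS pointers rather than the weights themselves: the repository records only the blob's `oid sha256:` and `size`, so this commit shows up as a change of shard hashes. A downloaded shard can be checked against its pointer with the standard library (shown here for the first shard's new hash):

```python
import hashlib

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Hash the file in chunks so multi-gigabyte shards never sit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Expected oid taken from the LFS pointer in this commit.
expected = "7a1143a331c91175b8668f1e842dbfa8e4fa58a056184cef5d57bb058212b7cf"
assert sha256_of("model-00001-of-00002.safetensors") == expected
```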