---
base_model:
- Sao10K/L3-8B-Stheno-v3.2
- Casual-Autopsy/L3-Umbral-Mind-RP-v1.0-8B
- bluuwhale/L3-SthenoMaidBlackroot-8B-V1
- Cas-Warehouse/Llama-3-Mopeyfied-Psychology-v2
- migtissera/Llama-3-8B-Synthia-v3.5
- tannedbum/L3-Nymeria-Maid-8B
- Casual-Autopsy/L3-Umbral-Mind-RP-v0.3-8B
- tannedbum/L3-Nymeria-8B
- ChaoticNeutrals/Hathor_RP-v.01-L3-8B
- cgato/L3-TheSpice-8b-v0.8.3
- Sao10K/L3-8B-Stheno-v3.1
- Nitral-AI/Hathor_Stable-v0.2-L3-8B
- ChaoticNeutrals/Poppy_Porpoise-1.4-L3-8B
- ResplendentAI/Nymph_8B
tags:
- merge
- mergekit
- lazymergekit
- not-for-all-audiences
- nsfw
- rp
- roleplay
- role-play
language:
- en
---

# L3-Uncen-Merger-Omelette-RP-v0.2-8B

L3-Uncen-Merger-Omelette-RP-v0.2-8B is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [Sao10K/L3-8B-Stheno-v3.2](https://huggingface.co/Sao10K/L3-8B-Stheno-v3.2)
* [Casual-Autopsy/L3-Umbral-Mind-RP-v1.0-8B](https://huggingface.co/Casual-Autopsy/L3-Umbral-Mind-RP-v1.0-8B)
* [bluuwhale/L3-SthenoMaidBlackroot-8B-V1](https://huggingface.co/bluuwhale/L3-SthenoMaidBlackroot-8B-V1)
* [Cas-Warehouse/Llama-3-Mopeyfied-Psychology-v2](https://huggingface.co/Cas-Warehouse/Llama-3-Mopeyfied-Psychology-v2)
* [migtissera/Llama-3-8B-Synthia-v3.5](https://huggingface.co/migtissera/Llama-3-8B-Synthia-v3.5)
* [tannedbum/L3-Nymeria-Maid-8B](https://huggingface.co/tannedbum/L3-Nymeria-Maid-8B)
* [Casual-Autopsy/L3-Umbral-Mind-RP-v0.3-8B](https://huggingface.co/Casual-Autopsy/L3-Umbral-Mind-RP-v0.3-8B)
* [tannedbum/L3-Nymeria-8B](https://huggingface.co/tannedbum/L3-Nymeria-8B)
* [ChaoticNeutrals/Hathor_RP-v.01-L3-8B](https://huggingface.co/ChaoticNeutrals/Hathor_RP-v.01-L3-8B)
* [cgato/L3-TheSpice-8b-v0.8.3](https://huggingface.co/cgato/L3-TheSpice-8b-v0.8.3)
* [Sao10K/L3-8B-Stheno-v3.1](https://huggingface.co/Sao10K/L3-8B-Stheno-v3.1)
* [Nitral-AI/Hathor_Stable-v0.2-L3-8B](https://huggingface.co/Nitral-AI/Hathor_Stable-v0.2-L3-8B)
* [ChaoticNeutrals/Poppy_Porpoise-1.4-L3-8B](https://huggingface.co/ChaoticNeutrals/Poppy_Porpoise-1.4-L3-8B)
* [ResplendentAI/Nymph_8B](https://huggingface.co/ResplendentAI/Nymph_8B)

# Secret Sauce

The final model is assembled in stages: three `dare_ties` component merges (the Scrambled-Eggs), two `slerp` merges that fold them together (the Omelettes), and a finishing `task_arithmetic` merge built on top of Omelette-2.

## Scrambled-Egg-1

```yaml
models:
  - model: Sao10K/L3-8B-Stheno-v3.2
  - model: Casual-Autopsy/L3-Umbral-Mind-RP-v1.0-8B
    parameters:
      density: 0.45
      weight: 0.33
  - model: bluuwhale/L3-SthenoMaidBlackroot-8B-V1
    parameters:
      density: 0.75
      weight: 0.33
merge_method: dare_ties
base_model: Sao10K/L3-8B-Stheno-v3.2
parameters:
  int8_mask: true
dtype: bfloat16
```

## Scrambled-Egg-2

```yaml
models:
  - model: Cas-Warehouse/Llama-3-Mopeyfied-Psychology-v2
  - model: migtissera/Llama-3-8B-Synthia-v3.5
    parameters:
      density: 0.35
      weight: 0.25
  - model: tannedbum/L3-Nymeria-Maid-8B
    parameters:
      density: 0.65
      weight: 0.25
merge_method: dare_ties
base_model: Cas-Warehouse/Llama-3-Mopeyfied-Psychology-v2
parameters:
  int8_mask: true
dtype: bfloat16
```

## Scrambled-Egg-3

```yaml
models:
  - model: Casual-Autopsy/L3-Umbral-Mind-RP-v0.3-8B
  - model: tannedbum/L3-Nymeria-8B
    parameters:
      density: 0.5
      weight: 0.35
  - model: ChaoticNeutrals/Hathor_RP-v.01-L3-8B
    parameters:
      density: 0.4
      weight: 0.2
merge_method: dare_ties
base_model: Casual-Autopsy/L3-Umbral-Mind-RP-v0.3-8B
parameters:
  int8_mask: true
dtype: bfloat16
```

## Omelette-1

```yaml
models:
  - model: Casual-Autopsy/Scrambled-Egg-1
  - model: Casual-Autopsy/Scrambled-Egg-3
merge_method: slerp
base_model: Casual-Autopsy/Scrambled-Egg-1
parameters:
  t:
    - value: [0.1, 0.15, 0.2, 0.4, 0.6, 0.4, 0.2, 0.15, 0.1]
  embed_slerp: true
dtype: bfloat16
```

## Omelette-2

```yaml
models:
  - model: Casual-Autopsy/Omelette-1
  - model: Casual-Autopsy/Scrambled-Egg-2
merge_method: slerp
base_model: Casual-Autopsy/Omelette-1
parameters:
  t:
    - value: [0.7, 0.5, 0.3, 0.25, 0.2, 0.25, 0.3, 0.5, 0.7]
  embed_slerp: true
dtype: bfloat16
```

## L3-Uncen-Merger-Omelette-RP-v0.2-8B

```yaml
models:
  # …
merge_method: task_arithmetic
base_model: Casual-Autopsy/Omelette-2
dtype: bfloat16
```
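The recipes above are ordinary mergekit configs (LazyMergekit is a thin wrapper around mergekit), so any stage can be re-run locally. Below is a minimal sketch, assuming mergekit's Python entry points (`MergeConfiguration`, `MergeOptions`, `run_merge`) and a hypothetical local `config.yaml` holding one of the blocks above; check the mergekit README for the current interface:

```python
# Sketch only: re-run one of the YAML recipes above with mergekit.
# Assumes `pip install mergekit` and that config.yaml contains a single recipe block.
import yaml
import torch

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# Parse the merge recipe (e.g. the Scrambled-Egg-1 block) into a MergeConfiguration.
with open("config.yaml", "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

# Execute the merge and write the merged weights plus tokenizer to ./merged.
run_merge(
    merge_config,
    "./merged",
    options=MergeOptions(
        cuda=torch.cuda.is_available(),  # use a GPU for the merge if one is available
        copy_tokenizer=True,             # copy the base model's tokenizer into the output
    ),
)
```

Note that the later stages reference the intermediate merges by name (`Casual-Autopsy/Scrambled-Egg-1`, `Casual-Autopsy/Omelette-1`, and so on), so those outputs need to exist locally or on the Hub before the Omelette and final `task_arithmetic` configs can be run.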
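## 💻 Usage

A minimal Transformers example for prompting the merged model (the sampling settings are only a starting point):

```python
# pip install -qU transformers accelerate

import torch
import transformers
from transformers import AutoTokenizer

model = "Casual-Autopsy/L3-Uncen-Merger-Omelette-RP-v0.2-8B"
messages = [{"role": "user", "content": "What is a large language model?"}]

# Format the conversation with the model's chat template.
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

# Load the model into a text-generation pipeline (device_map="auto" requires accelerate).
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)

outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```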