v000000 committed on
Commit 219948b
1 Parent(s): b11c7a3

Update README.md

Files changed (1)
  1. README.md +57 -4
README.md CHANGED
@@ -33,10 +33,6 @@ Shows similar emergent language nuance abilities compared to 8B.
 
 Unaligned and somewhat lazy. Use rep_pen 1.1
 
-# Quants
-* [Q8_0 static, imatrix](https://huggingface.co/v000000/L3-11.5B-DuS-MoonRoot-Q8_0-GGUF)
-* [Q6_K static, imatrix](https://huggingface.co/v000000/L3-11.5B-DuS-MoonRoot-Q6_K-GGUF)
-
 # merge
 
 This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
@@ -56,6 +52,7 @@ The following models were included in the merge:
 
 The following YAML configuration was used to produce this model:
 
+---Step 3
 ```yaml
 slices:
 - sources:
@@ -66,6 +63,62 @@ slices:
     layer_range: [8, 32]
 merge_method: passthrough
 dtype: bfloat16
+
+```
+---Step 2
+```yaml
+slices:
+- sources:
+  - model: v000000/L3-8B-Poppy-Sunspice-experiment-c+Blackroot/Llama-3-8B-Abomination-LORA
+    layer_range: [0, 32]
+  - model: v000000/L3-8B-Poppy-Sunspice-experiment-c+ResplendentAI/BlueMoon_Llama3
+    layer_range: [0, 32]
+merge_method: slerp
+base_model: v000000/L3-8B-Poppy-Sunspice-experiment-c+Blackroot/Llama-3-8B-Abomination-LORA
+parameters:
+  t:
+    - filter: self_attn
+      value: [0, 0.5, 0.3, 0.7, 1]
+    - filter: mlp
+      value: [1, 0.5, 0.7, 0.3, 0]
+    - value: 0.5
+dtype: bfloat16
+random_seed: 0
+
+```
+---Step 1
+```yaml
+models:
+  - model: crestf411/L3-8B-sunfall-abliterated-v0.2
+    parameters:
+      weight: 0.1
+      density: 0.18
+  - model: Hastagaras/HALU-8B-LLAMA3-BRSLURP
+    parameters:
+      weight: 0.1
+      density: 0.3
+  - model: Nitral-Archive/Poppy_Porpoise-Biomix
+    parameters:
+      weight: 0.1
+      density: 0.42
+  - model: cgato/L3-TheSpice-8b-v0.8.3
+    parameters:
+      weight: 0.2
+      density: 0.54
+  - model: Sao10K/L3-8B-Stheno-v3.2
+    parameters:
+      weight: 0.2
+      density: 0.66
+  - model: Nitral-AI/Poppy_Porpoise-0.72-L3-8B
+    parameters:
+      weight: 0.3
+      density: 0.78
+merge_method: dare_ties
+base_model: NousResearch/Meta-Llama-3-8B-Instruct
+parameters:
+  int8_mask: true
+dtype: bfloat16
+
 ```
 
 ---
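Some notes on the three-step pipeline added above, read bottom-up (Step 1 → Step 3). Step 3 is a passthrough depth upscale, the "DuS" in the model name: instead of averaging weights, mergekit stacks two overlapping layer slices of the Step 2 model, so the result has more transformer layers than its 8B parent, following the recipe popularized by SOLAR-10.7B. The hunk only shows the second slice's `layer_range: [8, 32]`; the Python sketch below assumes a complementary `[0, 24]` first slice (hypothetical, the diff elides it) and stock Llama-3-8B geometry, to show where the "11.5B" comes from.

```python
# Layer/parameter arithmetic for a passthrough depth upscale.
# ASSUMPTION: the first slice is [0, 24]; only [8, 32] is visible in the diff.
slices = [(0, 24), (8, 32)]                # (start, end), end-exclusive
total_layers = sum(end - start for start, end in slices)
print(total_layers)                        # 48 layers vs. 32 in the donor

# Llama-3-8B: ~8.03e9 params, vocab 128256, hidden size 4096, 32 layers.
embed_params = 128256 * 4096 * 2           # input embeddings + LM head
per_layer = (8.03e9 - embed_params) / 32   # ~218M params per transformer block
print(f"{embed_params + total_layers * per_layer:.3g}")  # ~1.15e10 -> "11.5B"
```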
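Step 2 is a slerp merge of the two LoRA-augmented Poppy-Sunspice variants: matching tensors are interpolated along the arc between them rather than the chord, which keeps weight norms closer to the originals than a plain average does. As I read mergekit's gradient syntax, an anchor list like `[0, 0.5, 0.3, 0.7, 1]` is stretched linearly across the layer stack, so self_attn tensors hug the base model in early layers (t near 0) and the second parent in late layers (t near 1), the mlp schedule mirrors it, and `value: 0.5` covers everything else. A minimal sketch of the core operation, not mergekit's actual implementation:

```python
import torch

def slerp(a: torch.Tensor, b: torch.Tensor, t: float, eps: float = 1e-8) -> torch.Tensor:
    """Spherical linear interpolation between two same-shape weight tensors."""
    a_f, b_f = a.flatten().float(), b.flatten().float()
    a_u = a_f / (a_f.norm() + eps)   # unit directions
    b_u = b_f / (b_f.norm() + eps)
    omega = torch.arccos(torch.clamp(torch.dot(a_u, b_u), -1.0, 1.0))
    if omega < eps:                  # near-parallel weights: lerp is fine
        return (1 - t) * a + t * b
    so = torch.sin(omega)
    out = (torch.sin((1 - t) * omega) * a_f + torch.sin(t * omega) * b_f) / so
    return out.reshape(a.shape).to(a.dtype)
```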
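Step 1 builds the starting model by dare_ties-merging six finetunes onto NousResearch/Meta-Llama-3-8B-Instruct. Per tensor: each finetune's delta from the base is randomly dropped down to its `density` and the survivors rescaled by 1/density (DARE), sign conflicts between models are resolved by majority vote (TIES), and the weighted deltas are summed back onto the base. The `weight`s sum to 1.0 and `density` rises with `weight`, so the most-trusted models also keep most of their deltas; `int8_mask: true` stores intermediate masks as int8 to save memory and doesn't change the math. A simplified per-tensor sketch, again not mergekit's actual code:

```python
import torch

def dare(delta: torch.Tensor, density: float, gen: torch.Generator) -> torch.Tensor:
    """DARE: drop ~(1 - density) of the delta at random, rescale the rest."""
    mask = torch.rand(delta.shape, generator=gen) < density
    return delta * mask / density

def dare_ties(base: torch.Tensor, finetunes: list[torch.Tensor],
              weights: list[float], densities: list[float],
              seed: int = 0) -> torch.Tensor:
    gen = torch.Generator().manual_seed(seed)
    deltas = [w * dare(ft - base, d, gen)
              for ft, w, d in zip(finetunes, weights, densities)]
    stacked = torch.stack(deltas)
    # TIES-style sign election: at each position keep only contributions
    # whose sign agrees with the sign of the summed delta.
    majority = torch.sign(stacked.sum(dim=0))
    agree = torch.sign(stacked) == majority
    return base + (stacked * agree).sum(dim=0)
```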