v000000 committed
Commit d44d064 (Parent: 0b82a46)

Update README.md
tags:
- merge
---
<!DOCTYPE html>
<html lang="en">
<head>
<style>
h1 {
color: #FF0000; /* Red color */
font-size: 1.25em; /* Heading font size */
text-align: left; /* Left alignment */
text-shadow: 2px 2px 4px rgba(0, 0, 0, 0.5); /* Shadow effect */
background: linear-gradient(90deg, #FF0000, #FF7F50); /* Gradient background */
-webkit-background-clip: text; /* Clipping the background to text */
-webkit-text-fill-color: transparent; /* Making the text transparent */
}
</style>
</head>
<body>
<h1>SwallowMaid-8B-Llama-3-SPPO-abliterated</h1>
 
![image/png](https://cdn-uploads.huggingface.co/production/uploads/64f74b6e6389380c77562762/0vhS2LvbcQm6dwaFkC_HK.png)

<h1>Quants</h1>
* [GGUF Q8_0](https://huggingface.co/v000000/SwallowMaid-8B-L3-SPPO-abliterated-Q8_0-GGUF)

<h1>merge</h1>

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
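For readers unfamiliar with mergekit recipes: the general shape of a linear-merge configuration is a small YAML file like the sketch below. This is illustrative only — the model names and the 0.85 weight are placeholders, not the recipe actually used here (only `merge_method: linear`, `dtype: float32`, and a `weight: 0.15` entry are visible in this card's configuration).

```yaml
# Illustrative recipe shape only; model names and the 0.85 weight are placeholders.
models:
  - model: example-org/base-model-8B
    parameters:
      weight: 0.85
  - model: example-org/donor-model-8B
    parameters:
      weight: 0.15
merge_method: linear
dtype: float32
```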

<h1>Merge Details</h1>
<h1>Merge Method</h1>

This model was merged using a multi-step merge method.
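Conceptually, a multi-step linear merge is a sequence of weighted averages over corresponding parameter tensors. The toy sketch below illustrates the idea only — it is not mergekit's implementation, and the tensor values are made up; the 0.15 blend weight echoes the weight visible in the configuration on this card.

```python
# Toy sketch of a multi-step linear merge. Not mergekit's actual code:
# plain Python lists stand in for real checkpoint tensors.

def linear_merge(state_dicts, weights):
    """Weighted average of matching parameter tensors (merge_method: linear)."""
    total = sum(weights)
    merged = {}
    for name in state_dicts[0]:
        merged[name] = [
            sum(w * sd[name][i] for sd, w in zip(state_dicts, weights)) / total
            for i in range(len(state_dicts[0][name]))
        ]
    return merged

# Step 1: combine two hypothetical base models equally.
model_a = {"layer.weight": [1.0, 2.0]}
model_b = {"layer.weight": [3.0, 4.0]}
step1 = linear_merge([model_a, model_b], [0.5, 0.5])

# Step 2: blend the intermediate result with a third model at low weight.
model_c = {"layer.weight": [10.0, 10.0]}
final = linear_merge([step1, model_c], [0.85, 0.15])
print(final["layer.weight"])  # approximately [3.2, 4.05]
```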

<h1>Models Merged</h1>

The following models were included in the merge:
* [grimjim/Llama-3-Instruct-abliteration-LoRA-8B](https://huggingface.co/grimjim/Llama-3-Instruct-abliteration-LoRA-8B)

* [tokyotech-llm/Llama-3-Swallow-8B-v0.1](https://huggingface.co/tokyotech-llm/Llama-3-Swallow-8B-v0.1)
* [Nitral-AI/Hathor_Respawn-L3-8B-v0.8](https://huggingface.co/Nitral-AI/Hathor_Respawn-L3-8B-v0.8)

<h1>Configuration</h1>

The following YAML configuration was used to produce this model:
 
```yaml
# ...
  weight: 0.15
merge_method: linear
dtype: float32
```

</body>
</html>