seyf1elislam committed on
Commit
f0c8819
1 Parent(s): 0fa07e7

Update README.md

Files changed (1)
  1. README.md +15 -28
README.md CHANGED
@@ -1,35 +1,22 @@
  ---
  tags:
- - merge
- - mergekit
  - GGUF
  ---
- # neural-Kunoichi2-7B-slerp-GGUF quantized version
- this is the quantized version of [neural-Kunoichi2-7B-slerp](https://huggingface.co/seyf1elislam/neural-Kunoichi2-7B-slerp)
-
- ## [neural-Kunoichi2-7B-slerp](https://huggingface.co/seyf1elislam/neural-Kunoichi2-7B-slerp)
-
- neural-Kunoichi2-7B-slerp is a merge of the following models using LazyMergekit:
- * [SanjiWatsuki/Kunoichi-DPO-v2-7B](https://huggingface.co/SanjiWatsuki/Kunoichi-DPO-v2-7B)
- * [mlabonne/NeuralPipe-7B-ties](https://huggingface.co/mlabonne/NeuralPipe-7B-ties)
-
- ## 🧩 Configuration
-
- ```yaml
- merge_method: slerp
- base_model: SanjiWatsuki/Kunoichi-DPO-v2-7B
- slices:
-   - sources:
-       - model: SanjiWatsuki/Kunoichi-DPO-v2-7B
-         layer_range: [0, 32]
-       - model: mlabonne/NeuralPipe-7B-ties
-         layer_range: [0, 32]
- parameters:
-   t:
-     - filter: self_attn
-       value: [0, 0.5, 0.3, 0.7, 1]
-     - filter: mlp
-       value: [1, 0.5, 0.7, 0.3, 0]
-     - value: 0.5
- dtype: bfloat16
- ```
 
  ---
  tags:
  - GGUF
+ base_model:
+ - seyf1elislam/neural-Kunoichi2-7B-slerp
  ---
+ # neural-Kunoichi2-7B-slerp
+ - Model creator: [seyf1elislam](https://huggingface.co/seyf1elislam)
+ - Original model: [neural-Kunoichi2-7B-slerp](https://huggingface.co/seyf1elislam/neural-Kunoichi2-7B-slerp)

+ <!-- description start -->
+ ## Description
+ This repo contains GGUF format model files for [seyf1elislam's neural-Kunoichi2-7B-slerp](https://huggingface.co/seyf1elislam/neural-Kunoichi2-7B-slerp).

+ ## Provided files

+ | Name | Quant method | Bits | Size | Max RAM required | Use case |
+ | ---- | ---- | ---- | ---- | ---- | ----- |
+ | [neural-kunoichi2-7b-slerp.Q4_K_M.gguf](https://huggingface.co/seyf1elislam/neural-Kunoichi2-7B-slerp-GGUF/blob/main/neural-kunoichi2-7b-slerp.Q4_K_M.gguf) | Q4_K_M | 4 | 4.37 GB | 6.87 GB | medium, balanced quality - recommended |
+ | [neural-kunoichi2-7b-slerp.Q5_K_M.gguf](https://huggingface.co/seyf1elislam/neural-Kunoichi2-7B-slerp-GGUF/blob/main/neural-kunoichi2-7b-slerp.Q5_K_M.gguf) | Q5_K_M | 5 | 5.13 GB | 7.63 GB | large, very low quality loss - recommended |
+ | [neural-kunoichi2-7b-slerp.Q6_K.gguf](https://huggingface.co/seyf1elislam/neural-Kunoichi2-7B-slerp-GGUF/blob/main/neural-kunoichi2-7b-slerp.Q6_K.gguf) | Q6_K | 6 | 5.94 GB | 8.44 GB | very large, extremely low quality loss |
+ | [neural-kunoichi2-7b-slerp.Q8_0.gguf](https://huggingface.co/seyf1elislam/neural-Kunoichi2-7B-slerp-GGUF/blob/main/neural-kunoichi2-7b-slerp.Q8_0.gguf) | Q8_0 | 8 | 7.70 GB | 10.20 GB | very large, extremely low quality loss - not recommended |
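The "Max RAM required" column in the table above can drive a simple file choice. As a sketch only (the helper name and the hard-coded table are illustrative, not part of this repo), a picker that returns the largest listed quant fitting a given RAM budget:

```python
# (file name, max RAM required in GB), copied from the provided-files table
PROVIDED_FILES = [
    ("neural-kunoichi2-7b-slerp.Q4_K_M.gguf", 6.87),
    ("neural-kunoichi2-7b-slerp.Q5_K_M.gguf", 7.63),
    ("neural-kunoichi2-7b-slerp.Q6_K.gguf", 8.44),
    ("neural-kunoichi2-7b-slerp.Q8_0.gguf", 10.20),
]


def pick_quant(ram_gb):
    """Return the largest quant whose stated max RAM fits the budget, or None."""
    fitting = [(name, ram) for name, ram in PROVIDED_FILES if ram <= ram_gb]
    if not fitting:
        return None
    # larger stated RAM requirement means a larger, higher-quality quant
    return max(fitting, key=lambda pair: pair[1])[0]
```

With an 8 GB budget this selects the Q5_K_M file; note the stated figures assume no GPU offloading, so actual headroom varies with your runtime.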
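The removed 🧩 Configuration block merges the two source models with `merge_method: slerp` and a per-layer `t` gradient. As background, a minimal sketch of spherical linear interpolation between two equal-length weight vectors (a standalone toy helper, not mergekit's actual implementation):

```python
import math


def slerp(t, v0, v1, eps=1e-8):
    """Spherical linear interpolation: t=0 returns v0, t=1 returns v1.

    Intermediate t values follow the arc between the vectors on the
    hypersphere rather than the straight chord used by plain lerp.
    """
    norm0 = math.sqrt(sum(x * x for x in v0))
    norm1 = math.sqrt(sum(x * x for x in v1))
    dot = sum(a * b for a, b in zip(v0, v1)) / (norm0 * norm1)
    dot = max(-1.0, min(1.0, dot))  # clamp against floating-point drift
    omega = math.acos(dot)          # angle between the two vectors
    if omega < eps:                 # nearly parallel: fall back to lerp
        return [(1 - t) * a + t * b for a, b in zip(v0, v1)]
    s0 = math.sin((1 - t) * omega) / math.sin(omega)
    s1 = math.sin(t * omega) / math.sin(omega)
    return [s0 * a + s1 * b for a, b in zip(v0, v1)]
```

In the removed config, `t` is not a single number: `value: [0, 0.5, 0.3, 0.7, 1]` for `self_attn` tensors means mergekit interpolates the gradient across the 32 layers, so early layers stay close to the base model and late layers close to the other, with `value: 0.5` as the default for unmatched tensors.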