Triangle104 committed 67df34a (verified) · 1 parent: b5456a3

Update README.md

Files changed (1): README.md (+57 −1)

README.md CHANGED
@@ -6,12 +6,68 @@ tags:
  - merge
  - llama-cpp
  - gguf-my-repo
+ license: apache-2.0
  ---

  # Triangle104/Yomiel-22B-Q5_K_S-GGUF
  This model was converted to GGUF format from [`Silvelter/Yomiel-22B`](https://huggingface.co/Silvelter/Yomiel-22B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
  Refer to the [original model card](https://huggingface.co/Silvelter/Yomiel-22B) for more details on the model.

+ ---
+ ### Model details
+ This is a merge of pre-trained language models created using mergekit.
+
+ ### Merge Method
+ This model was merged using the della_linear merge method, with ArliAI/Mistral-Small-22B-ArliAI-RPMax-v1.1 as the base.
+
+ ### Models Merged
+ The following models were included in the merge:
+
+ - nbeerbower/Mistral-Small-Drummer-22B
+ - gghfez/SeminalRP-22b
+ - TheDrummer/Cydonia-22B-v1.1
+ - anthracite-org/magnum-v4-22b
+
+ ### Configuration
+ The following YAML configuration was used to produce this model:
+
+ ```yaml
+ base_model: ArliAI/Mistral-Small-22B-ArliAI-RPMax-v1.1
+ parameters:
+   epsilon: 0.04
+   lambda: 1.05
+   int8_mask: true
+   rescale: true
+   normalize: false
+ dtype: bfloat16
+ tokenizer_source: base
+ merge_method: della_linear
+ models:
+   - model: ArliAI/Mistral-Small-22B-ArliAI-RPMax-v1.1
+     parameters:
+       weight: [0.2, 0.3, 0.2, 0.3, 0.2]
+       density: [0.45, 0.55, 0.45, 0.55, 0.45]
+   - model: gghfez/SeminalRP-22b
+     parameters:
+       weight: [0.01768, -0.01675, 0.01285, -0.01696, 0.01421]
+       density: [0.6, 0.4, 0.5, 0.4, 0.6]
+   - model: anthracite-org/magnum-v4-22b
+     parameters:
+       weight: [0.208, 0.139, 0.139, 0.139, 0.208]
+       density: [0.7]
+   - model: TheDrummer/Cydonia-22B-v1.1
+     parameters:
+       weight: [0.208, 0.139, 0.139, 0.139, 0.208]
+       density: [0.7]
+   - model: nbeerbower/Mistral-Small-Drummer-22B
+     parameters:
+       weight: [0.33]
+       density: [0.45, 0.55, 0.45, 0.55, 0.45]
+ ```
+
+ ---
  ## Use with llama.cpp
  Install llama.cpp through brew (works on Mac and Linux)

@@ -50,4 +106,4 @@ Step 3: Run inference through the main binary.
  or
  ```
  ./llama-server --hf-repo Triangle104/Yomiel-22B-Q5_K_S-GGUF --hf-file yomiel-22b-q5_k_s.gguf -c 2048
- ```
+ ```
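
The list-valued parameters in the merge config above (e.g. `weight: [0.2, 0.3, 0.2, 0.3, 0.2]`) are gradients: mergekit expands each list into a per-layer value by piecewise-linear interpolation across the layer stack. A minimal sketch of that interpolation (illustrative only, not mergekit's actual code; `interp_gradient` and its normalized layer position `t` are names chosen here for the example):

```python
def interp_gradient(values, t):
    """Piecewise-linear interpolation of a mergekit-style gradient list.

    t is the normalized layer position in [0, 1]; a single-element list
    (like density: [0.7] above) acts as a constant for every layer.
    """
    if len(values) == 1:
        return values[0]
    pos = t * (len(values) - 1)          # fractional index into the list
    i = min(int(pos), len(values) - 2)   # left anchor of the segment
    frac = pos - i
    return values[i] * (1 - frac) + values[i + 1] * frac

# Base model's weight schedule from the config above:
weights = [0.2, 0.3, 0.2, 0.3, 0.2]
print(interp_gradient(weights, 0.0))  # first layer -> 0.2
print(interp_gradient(weights, 0.5))  # middle layer lands on the list midpoint
print(interp_gradient([0.7], 0.9))    # constant gradient -> 0.7
```

So a five-element list pins values at five evenly spaced depths, and layers in between blend their two neighbors.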
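
At its core, the linear family of merge methods forms a weighted sum of each parameter tensor across the source models; with `normalize: false`, as in the config above, the weights are applied as given rather than rescaled to sum to 1. (della_linear additionally prunes each model's delta from the base by `density` before combining; this toy sketch omits that step, and `linear_merge` is a name invented for the example.)

```python
def linear_merge(tensors, weights, normalize=False):
    """Weighted elementwise sum of same-shaped parameter vectors.

    With normalize=True the weights are first rescaled to sum to 1,
    mirroring the effect of mergekit's `normalize` option.
    """
    if normalize:
        total = sum(weights)
        weights = [w / total for w in weights]
    merged = [0.0] * len(tensors[0])
    for tensor, w in zip(tensors, weights):
        for j, x in enumerate(tensor):
            merged[j] += w * x
    return merged

a = [1.0, 2.0]  # stand-ins for two models' flattened parameters
b = [3.0, 4.0]
print(linear_merge([a, b], [0.5, 0.5]))                   # [2.0, 3.0]
print(linear_merge([a, b], [1.0, 1.0], normalize=True))   # same result
```

With `normalize: false`, weights that sum to more or less than 1 deliberately scale the merged parameters up or down, which is why the config pairs it with `rescale: true` and small per-model weights.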