Lewdiculous committed on
Commit 24f3759
1 Parent(s): 8980b4e

Update README.md

Files changed (1):
  1. README.md +48 -0
README.md CHANGED
@@ -6,6 +6,13 @@ tags:
 - quantized
 - gguf
 - experimental
+base_model:
+- MaziyarPanahi/samantha-mistral-7b-Mistral-7B-Instruct-v0.1
+- cognitivecomputations/dolphin-2.6-mistral-7b
+- SanjiWatsuki/Sonya-7B
+library_name: transformers
+language:
+- en
 ---
 
 This repository hosts GGUF-IQ-Imatrix quants for **jeiku/Elly_7B.**
@@ -18,4 +25,45 @@ This model is highly experimental.
 "Q4_K_M", "Q4_K_S", "IQ4_XS", "Q5_K_M", "Q5_K_S",
 "Q6_K", "Q8_0", "IQ3_M", "IQ3_S", "IQ3_XXS"
 ]
+```
+
+**Original model card:**
+
+# Elly
+
+![image/png](https://cdn-uploads.huggingface.co/production/uploads/626dfb8786671a29c715f8a9/dG3wJI7_RA8T3pDRWNCKA.png)
+
+This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
+
+## Merge Details
+### Merge Method
+
+This model was merged using the [DARE](https://arxiv.org/abs/2311.03099) [TIES](https://arxiv.org/abs/2306.01708) merge method, with [SanjiWatsuki/Sonya-7B](https://huggingface.co/SanjiWatsuki/Sonya-7B) as the base.
+
+### Models Merged
+
+The following models were included in the merge:
+* [MaziyarPanahi/samantha-mistral-7b-Mistral-7B-Instruct-v0.1](https://huggingface.co/MaziyarPanahi/samantha-mistral-7b-Mistral-7B-Instruct-v0.1)
+* [cognitivecomputations/dolphin-2.6-mistral-7b](https://huggingface.co/cognitivecomputations/dolphin-2.6-mistral-7b)
+
+### Configuration
+
+The following YAML configuration was used to produce this model:
+
+```yaml
+merge_method: dare_ties
+base_model: SanjiWatsuki/Sonya-7B
+parameters:
+  normalize: true
+models:
+  - model: SanjiWatsuki/Sonya-7B
+    parameters:
+      weight: 1
+  - model: cognitivecomputations/dolphin-2.6-mistral-7b
+    parameters:
+      weight: 1
+  - model: MaziyarPanahi/samantha-mistral-7b-Mistral-7B-Instruct-v0.1
+    parameters:
+      weight: 1
+dtype: float16
 ```
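Tying back to the quant list at the top of the diff: these files can be loaded by any GGUF-capable runtime. A minimal sketch, assuming llama-cpp-python is installed; the exact `.gguf` filename is illustrative and should be taken from the repository's file list.

```python
from llama_cpp import Llama

# Illustrative filename; pick the actual quant file from the repo
# (e.g. one of the Q4_K_M / IQ3_M / Q8_0 variants listed above).
llm = Llama(model_path="Elly_7B-Q4_K_M-imat.gguf", n_ctx=4096)

out = llm("Write a short greeting.", max_tokens=64)
print(out["choices"][0]["text"])
```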