munish0838 committed on
Commit
515eae6
1 Parent(s): 36c55f2

Create README.md

Files changed (1)
  1. README.md +51 -0
README.md ADDED
@@ -0,0 +1,51 @@
---
base_model: Locutusque/Llama-3-Yggdrasil-8B
library_name: transformers
tags:
- mergekit
- merge
license: llama3
pipeline_tag: text-generation
---
# Llama-3-Yggdrasil-8B-GGUF
This is a quantized version of [Locutusque/Llama-3-Yggdrasil-8B](https://huggingface.co/Locutusque/Llama-3-Yggdrasil-8B) created using llama.cpp.
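The GGUF files in this repository can be run with any llama.cpp-compatible runtime. The snippet below is a minimal sketch using the `llama-cpp-python` bindings; the GGUF filename is a placeholder and should be replaced with whichever quantization file you download from this repository.

```python
# Minimal sketch: load a GGUF quant of this model with llama-cpp-python.
# The model_path below is a placeholder -- point it at the actual GGUF file
# (e.g. a Q4_K_M or Q8_0 variant) downloaded from this repository.
from llama_cpp import Llama

llm = Llama(
    model_path="./Llama-3-Yggdrasil-8B.Q4_K_M.gguf",  # hypothetical filename
    n_ctx=8192,       # context window
    n_gpu_layers=-1,  # offload all layers to GPU if one is available
)

response = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Explain what a model merge is in two sentences."},
    ],
    max_tokens=256,
)
print(response["choices"][0]["message"]["content"])
```
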
## Model Description
### Merge Method

This model was merged with the [DARE](https://arxiv.org/abs/2311.03099) [TIES](https://arxiv.org/abs/2306.01708) merge method, using [NousResearch/Meta-Llama-3-8B](https://huggingface.co/NousResearch/Meta-Llama-3-8B) as the base.

### Models Merged

The following models were included in the merge:
* [NousResearch/Meta-Llama-3-8B-Instruct](https://huggingface.co/NousResearch/Meta-Llama-3-8B-Instruct)
* [Locutusque/Llama-3-Hercules-5.0-8B](https://huggingface.co/Locutusque/Llama-3-Hercules-5.0-8B)
* [Locutusque/llama-3-neural-chat-v2.2-8b](https://huggingface.co/Locutusque/llama-3-neural-chat-v2.2-8b)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: NousResearch/Meta-Llama-3-8B
    # No parameters necessary for base model
  - model: NousResearch/Meta-Llama-3-8B-Instruct
    parameters:
      density: 0.6
      weight: 0.55
  - model: Locutusque/llama-3-neural-chat-v2.2-8b
    parameters:
      density: 0.55
      weight: 0.45
  - model: Locutusque/Llama-3-Hercules-5.0-8B
    parameters:
      density: 0.57
      weight: 0.5
merge_method: dare_ties
base_model: NousResearch/Meta-Llama-3-8B
parameters:
  int8_mask: true
dtype: bfloat16
```
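
For reference, a merge like this can be reproduced by feeding the configuration above to [mergekit](https://github.com/arcee-ai/mergekit). The sketch below assumes the YAML has been saved as `config.yaml` and uses mergekit's Python entry points (`MergeConfiguration`, `run_merge`, `MergeOptions`); the output path is arbitrary and CUDA is optional.

```python
# Hedged sketch: reproduce the merge with mergekit's Python API, assuming the
# YAML configuration above has been saved to ./config.yaml. Paths are placeholders.
import yaml
import torch

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

with open("./config.yaml", "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(
    merge_config,
    out_path="./Llama-3-Yggdrasil-8B",   # output directory (placeholder)
    options=MergeOptions(
        cuda=torch.cuda.is_available(),  # merge on GPU if one is present
        copy_tokenizer=True,             # copy the base model's tokenizer
        lazy_unpickle=False,
        low_cpu_memory=False,
    ),
)
```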