---
library_name: transformers
tags:
- mergekit
- merge
base_model:
- Locutusque/Llama-3-NeuralHercules-5.0-8B
- NousResearch/Meta-Llama-3-8B
- NousResearch/Hermes-2-Theta-Llama-3-8B
- Locutusque/llama-3-neural-chat-v2.2-8b
model-index:
- name: Llama-3-Yggdrasil-2.0-8B
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: IFEval (0-Shot)
      type: HuggingFaceH4/ifeval
      args:
        num_few_shot: 0
    metrics:
    - type: inst_level_strict_acc and prompt_level_strict_acc
      value: 53.71
      name: strict accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Locutusque/Llama-3-Yggdrasil-2.0-8B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: BBH (3-Shot)
      type: BBH
      args:
        num_few_shot: 3
    metrics:
    - type: acc_norm
      value: 26.92
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Locutusque/Llama-3-Yggdrasil-2.0-8B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MATH Lvl 5 (4-Shot)
      type: hendrycks/competition_math
      args:
        num_few_shot: 4
    metrics:
    - type: exact_match
      value: 6.87
      name: exact match
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Locutusque/Llama-3-Yggdrasil-2.0-8B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GPQA (0-shot)
      type: Idavidrein/gpqa
      args:
        num_few_shot: 0
    metrics:
    - type: acc_norm
      value: 1.68
      name: acc_norm
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Locutusque/Llama-3-Yggdrasil-2.0-8B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MuSR (0-shot)
      type: TAUR-Lab/MuSR
      args:
        num_few_shot: 0
    metrics:
    - type: acc_norm
      value: 8.07
      name: acc_norm
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Locutusque/Llama-3-Yggdrasil-2.0-8B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU-PRO (5-shot)
      type: TIGER-Lab/MMLU-Pro
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 24.07
      name: accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Locutusque/Llama-3-Yggdrasil-2.0-8B
      name: Open LLM Leaderboard
---

[![QuantFactory Banner](https://lh7-rt.googleusercontent.com/docsz/AD_4nXeiuCm7c8lEwEJuRey9kiVZsRn2W-b4pWlu3-X534V3YmVuVc2ZL-NXg2RkzSOOS2JXGHutDuyyNAUtdJI65jGTo8jT9Y99tMi4H4MqL44Uc5QKG77B0d6-JfIkZHFaUA71-RtjyYZWVIhqsNZcx8-OMaA?key=xt3VSDoCbmTY7o-cwwOFwQ)](https://hf.co/QuantFactory)

# QuantFactory/Llama-3-Yggdrasil-2.0-8B-GGUF
This is a quantized version of [Locutusque/Llama-3-Yggdrasil-2.0-8B](https://huggingface.co/Locutusque/Llama-3-Yggdrasil-2.0-8B) created using llama.cpp.

# Original Model Card

# merge

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details
### Merge Method

This model was merged using the [DARE](https://arxiv.org/abs/2311.03099) [TIES](https://arxiv.org/abs/2306.01708) merge method using [NousResearch/Meta-Llama-3-8B](https://huggingface.co/NousResearch/Meta-Llama-3-8B) as a base.

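For intuition, the DARE step works on each model's task vector (fine-tuned weights minus base weights): it randomly drops each delta with probability `1 - density` and rescales the survivors by `1/density`, so the merged delta is unchanged in expectation. The snippet below is only an illustrative sketch of that drop-and-rescale step under those assumptions, not mergekit's actual implementation (which also applies TIES-style sign consensus):

```python
import random

def dare_prune(delta, density, rng):
    # Keep each delta weight with probability `density`; rescale
    # survivors by 1/density so the expected delta is unbiased.
    return [d / density if rng.random() < density else 0.0 for d in delta]

rng = random.Random(0)
delta = [rng.gauss(0.0, 1.0) for _ in range(100_000)]  # stand-in task vector
pruned = dare_prune(delta, density=0.6, rng=rng)       # density as in the config below

mean_delta = sum(delta) / len(delta)
mean_pruned = sum(pruned) / len(pruned)
print(abs(mean_pruned - mean_delta) < 0.02)  # rescaling preserves the mean
```

A higher `density` retains more of a model's fine-tuning signal; the per-model `weight` then scales its contribution to the merged sum.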
128
+ ### Models Merged
129
+
130
+ The following models were included in the merge:
131
+ * [Locutusque/Llama-3-NeuralHercules-5.0-8B](https://huggingface.co/Locutusque/Llama-3-NeuralHercules-5.0-8B)
132
+ * [NousResearch/Hermes-2-Theta-Llama-3-8B](https://huggingface.co/NousResearch/Hermes-2-Theta-Llama-3-8B)
133
+ * [Locutusque/llama-3-neural-chat-v2.2-8b](https://huggingface.co/Locutusque/llama-3-neural-chat-v2.2-8b)
134
+
### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: NousResearch/Meta-Llama-3-8B
    # No parameters necessary for base model
  - model: NousResearch/Hermes-2-Theta-Llama-3-8B
    parameters:
      density: 0.6
      weight: 0.55
  - model: Locutusque/llama-3-neural-chat-v2.2-8b
    parameters:
      density: 0.55
      weight: 0.4
  - model: Locutusque/Llama-3-NeuralHercules-5.0-8B
    parameters:
      density: 0.65
      weight: 0.6

merge_method: dare_ties
base_model: NousResearch/Meta-Llama-3-8B
parameters:
  int8_mask: true
dtype: bfloat16
```

# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Locutusque__Llama-3-Yggdrasil-2.0-8B).

| Metric              | Value |
|---------------------|------:|
| Avg.                | 20.22 |
| IFEval (0-Shot)     | 53.71 |
| BBH (3-Shot)        | 26.92 |
| MATH Lvl 5 (4-Shot) |  6.87 |
| GPQA (0-shot)       |  1.68 |
| MuSR (0-shot)       |  8.07 |
| MMLU-PRO (5-shot)   | 24.07 |
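The reported average is the simple (unweighted) mean of the six benchmark scores, which can be checked directly:

```python
# Open LLM Leaderboard scores reported in the table above
scores = {
    "IFEval (0-Shot)": 53.71,
    "BBH (3-Shot)": 26.92,
    "MATH Lvl 5 (4-Shot)": 6.87,
    "GPQA (0-shot)": 1.68,
    "MuSR (0-shot)": 8.07,
    "MMLU-PRO (5-shot)": 24.07,
}

avg = round(sum(scores.values()) / len(scores), 2)
print(avg)  # 20.22, matching the "Avg." row
```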