DavidAU committed on
Commit
6033aab
1 Parent(s): 89f6c1c

Upload README.md with huggingface_hub

Files changed (1)
  1. README.md +229 -0
README.md ADDED
@@ -0,0 +1,229 @@
---
license: apache-2.0
tags:
- alignment-handbook
- generated_from_trainer
- juanako
- mistral
- UNA
- llama-cpp
- gguf-my-repo
datasets:
- HuggingFaceH4/ultrafeedback_binarized
model-index:
- name: juanako-7b-UNA
  results:
  - task:
      type: text-generation
      name: TruthfulQA (MC2)
    dataset:
      name: truthful_qa
      type: text-generation
      config: multiple_choice
      split: validation
    metrics:
    - type: accuracy
      value: 65.13
      verified: true
  - task:
      type: text-generation
      name: ARC-Challenge
    dataset:
      name: ai2_arc
      type: text-generation
      config: ARC-Challenge
      split: test
    metrics:
    - type: accuracy
      value: 68.17
      verified: true
  - task:
      type: text-generation
      name: HellaSwag
    dataset:
      name: Rowan/hellaswag
      type: text-generation
      split: test
    metrics:
    - type: accuracy
      value: 85.34
      verified: true
    - type: accuracy
      value: 83.57
  - task:
      type: text-generation
      name: Winogrande
    dataset:
      name: winogrande
      type: text-generation
      config: winogrande_debiased
      split: test
    metrics:
    - type: accuracy
      value: 78.85
      verified: true
  - task:
      type: text-generation
      name: MMLU
    dataset:
      name: cais/mmlu
      type: text-generation
      config: all
      split: test
    metrics:
    - type: accuracy
      value: 62.47
      verified: true
  - task:
      type: text-generation
      name: DROP
    dataset:
      name: drop
      type: text-generation
      split: validation
    metrics:
    - type: accuracy
      value: 38.74
      verified: true
  - task:
      type: text-generation
      name: PubMedQA
    dataset:
      name: bigbio/pubmed_qa
      type: text-generation
      config: pubmed_qa_artificial_bigbio_qa
      split: validation
    metrics:
    - type: accuracy
      value: 76.0
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: AI2 Reasoning Challenge (25-Shot)
      type: ai2_arc
      config: ARC-Challenge
      split: test
      args:
        num_few_shot: 25
    metrics:
    - type: acc_norm
      value: 68.17
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=fblgit/juanako-7b-UNA
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: HellaSwag (10-Shot)
      type: hellaswag
      split: validation
      args:
        num_few_shot: 10
    metrics:
    - type: acc_norm
      value: 85.34
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=fblgit/juanako-7b-UNA
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU (5-Shot)
      type: cais/mmlu
      config: all
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 62.47
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=fblgit/juanako-7b-UNA
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: TruthfulQA (0-shot)
      type: truthful_qa
      config: multiple_choice
      split: validation
      args:
        num_few_shot: 0
    metrics:
    - type: mc2
      value: 65.13
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=fblgit/juanako-7b-UNA
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: Winogrande (5-shot)
      type: winogrande
      config: winogrande_xl
      split: validation
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 78.85
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=fblgit/juanako-7b-UNA
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GSM8k (5-shot)
      type: gsm8k
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 44.81
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=fblgit/juanako-7b-UNA
      name: Open LLM Leaderboard
---

# DavidAU/juanako-7b-UNA-Q6_K-GGUF
This model was converted to GGUF format from [`fblgit/juanako-7b-UNA`](https://huggingface.co/fblgit/juanako-7b-UNA) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/fblgit/juanako-7b-UNA) for more details on the model.
## Use with llama.cpp

Install llama.cpp through brew.

```bash
brew install ggerganov/ggerganov/llama.cpp
```
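To confirm the binaries are on your PATH, you can print the build info first; this is a minimal check that assumes a reasonably recent llama.cpp release where `llama-cli` supports `--version`.

```bash
# Print llama.cpp build/version info to confirm the install worked
llama-cli --version
```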
Invoke the llama.cpp server or the CLI.

CLI:

```bash
llama-cli --hf-repo DavidAU/juanako-7b-UNA-Q6_K-GGUF --model juanako-7b-una.Q6_K.gguf -p "The meaning to life and the universe is"
```
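You can layer standard llama.cpp sampling options onto the same command; the sketch below uses the stock `-n`, `--temp`, and `--repeat-penalty` flags, and the values are only illustrative starting points, not tuned recommendations for this model.

```bash
# Same command, limited to 256 new tokens with illustrative sampling settings
llama-cli --hf-repo DavidAU/juanako-7b-UNA-Q6_K-GGUF --model juanako-7b-una.Q6_K.gguf \
  -p "The meaning to life and the universe is" \
  -n 256 --temp 0.7 --repeat-penalty 1.1
```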

Server:

```bash
llama-server --hf-repo DavidAU/juanako-7b-UNA-Q6_K-GGUF --model juanako-7b-una.Q6_K.gguf -c 2048
```
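Once the server is running, you can query its built-in HTTP completion endpoint; this is a minimal sketch assuming the default host and port (`http://localhost:8080`).

```bash
# Request a 128-token completion from the running llama-server (default port assumed)
curl -s http://localhost:8080/completion \
  -H "Content-Type: application/json" \
  -d '{"prompt": "The meaning to life and the universe is", "n_predict": 128}'
```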

Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.

```bash
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m juanako-7b-una.Q6_K.gguf -n 128
```
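The same source build also produces a server binary alongside `./main` (named `./server` in older llama.cpp trees, `llama-server` in newer ones); as a sketch under that assumption, you could serve the same file you just tested:

```bash
# Serve the model from the source build (binary name assumes an older llama.cpp tree)
./server -m juanako-7b-una.Q6_K.gguf -c 2048
```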