RichardErkhov committed
Commit: 1399a9a
1 Parent(s): 7911997

uploaded readme

Files changed (1): README.md (added, +302 lines)

Quantization made by Richard Erkhov.

[Github](https://github.com/RichardErkhov)

[Discord](https://discord.gg/pvy7H8DZMG)

[Request more models](https://github.com/RichardErkhov/quant_request)

Llama-160M-Chat-v1 - GGUF
- Model creator: https://huggingface.co/Felladrin/
- Original model: https://huggingface.co/Felladrin/Llama-160M-Chat-v1/

| Name | Quant method | Size |
| ---- | ---- | ---- |
| [Llama-160M-Chat-v1.Q2_K.gguf](https://huggingface.co/RichardErkhov/Felladrin_-_Llama-160M-Chat-v1-gguf/blob/main/Llama-160M-Chat-v1.Q2_K.gguf) | Q2_K | 0.07GB |
| [Llama-160M-Chat-v1.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/Felladrin_-_Llama-160M-Chat-v1-gguf/blob/main/Llama-160M-Chat-v1.IQ3_XS.gguf) | IQ3_XS | 0.07GB |
| [Llama-160M-Chat-v1.IQ3_S.gguf](https://huggingface.co/RichardErkhov/Felladrin_-_Llama-160M-Chat-v1-gguf/blob/main/Llama-160M-Chat-v1.IQ3_S.gguf) | IQ3_S | 0.07GB |
| [Llama-160M-Chat-v1.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/Felladrin_-_Llama-160M-Chat-v1-gguf/blob/main/Llama-160M-Chat-v1.Q3_K_S.gguf) | Q3_K_S | 0.07GB |
| [Llama-160M-Chat-v1.IQ3_M.gguf](https://huggingface.co/RichardErkhov/Felladrin_-_Llama-160M-Chat-v1-gguf/blob/main/Llama-160M-Chat-v1.IQ3_M.gguf) | IQ3_M | 0.08GB |
| [Llama-160M-Chat-v1.Q3_K.gguf](https://huggingface.co/RichardErkhov/Felladrin_-_Llama-160M-Chat-v1-gguf/blob/main/Llama-160M-Chat-v1.Q3_K.gguf) | Q3_K | 0.08GB |
| [Llama-160M-Chat-v1.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/Felladrin_-_Llama-160M-Chat-v1-gguf/blob/main/Llama-160M-Chat-v1.Q3_K_M.gguf) | Q3_K_M | 0.08GB |
| [Llama-160M-Chat-v1.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/Felladrin_-_Llama-160M-Chat-v1-gguf/blob/main/Llama-160M-Chat-v1.Q3_K_L.gguf) | Q3_K_L | 0.08GB |
| [Llama-160M-Chat-v1.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/Felladrin_-_Llama-160M-Chat-v1-gguf/blob/main/Llama-160M-Chat-v1.IQ4_XS.gguf) | IQ4_XS | 0.09GB |
| [Llama-160M-Chat-v1.Q4_0.gguf](https://huggingface.co/RichardErkhov/Felladrin_-_Llama-160M-Chat-v1-gguf/blob/main/Llama-160M-Chat-v1.Q4_0.gguf) | Q4_0 | 0.09GB |
| [Llama-160M-Chat-v1.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/Felladrin_-_Llama-160M-Chat-v1-gguf/blob/main/Llama-160M-Chat-v1.IQ4_NL.gguf) | IQ4_NL | 0.09GB |
| [Llama-160M-Chat-v1.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/Felladrin_-_Llama-160M-Chat-v1-gguf/blob/main/Llama-160M-Chat-v1.Q4_K_S.gguf) | Q4_K_S | 0.09GB |
| [Llama-160M-Chat-v1.Q4_K.gguf](https://huggingface.co/RichardErkhov/Felladrin_-_Llama-160M-Chat-v1-gguf/blob/main/Llama-160M-Chat-v1.Q4_K.gguf) | Q4_K | 0.1GB |
| [Llama-160M-Chat-v1.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/Felladrin_-_Llama-160M-Chat-v1-gguf/blob/main/Llama-160M-Chat-v1.Q4_K_M.gguf) | Q4_K_M | 0.1GB |
| [Llama-160M-Chat-v1.Q4_1.gguf](https://huggingface.co/RichardErkhov/Felladrin_-_Llama-160M-Chat-v1-gguf/blob/main/Llama-160M-Chat-v1.Q4_1.gguf) | Q4_1 | 0.1GB |
| [Llama-160M-Chat-v1.Q5_0.gguf](https://huggingface.co/RichardErkhov/Felladrin_-_Llama-160M-Chat-v1-gguf/blob/main/Llama-160M-Chat-v1.Q5_0.gguf) | Q5_0 | 0.11GB |
| [Llama-160M-Chat-v1.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/Felladrin_-_Llama-160M-Chat-v1-gguf/blob/main/Llama-160M-Chat-v1.Q5_K_S.gguf) | Q5_K_S | 0.11GB |
| [Llama-160M-Chat-v1.Q5_K.gguf](https://huggingface.co/RichardErkhov/Felladrin_-_Llama-160M-Chat-v1-gguf/blob/main/Llama-160M-Chat-v1.Q5_K.gguf) | Q5_K | 0.11GB |
| [Llama-160M-Chat-v1.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/Felladrin_-_Llama-160M-Chat-v1-gguf/blob/main/Llama-160M-Chat-v1.Q5_K_M.gguf) | Q5_K_M | 0.11GB |
| [Llama-160M-Chat-v1.Q5_1.gguf](https://huggingface.co/RichardErkhov/Felladrin_-_Llama-160M-Chat-v1-gguf/blob/main/Llama-160M-Chat-v1.Q5_1.gguf) | Q5_1 | 0.12GB |
| [Llama-160M-Chat-v1.Q6_K.gguf](https://huggingface.co/RichardErkhov/Felladrin_-_Llama-160M-Chat-v1-gguf/blob/main/Llama-160M-Chat-v1.Q6_K.gguf) | Q6_K | 0.12GB |
| [Llama-160M-Chat-v1.Q8_0.gguf](https://huggingface.co/RichardErkhov/Felladrin_-_Llama-160M-Chat-v1-gguf/blob/main/Llama-160M-Chat-v1.Q8_0.gguf) | Q8_0 | 0.16GB |

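The GGUF files above work with any llama.cpp-compatible runtime. As a minimal sketch (not part of the original card), the example below fetches one file with `huggingface_hub` and runs it through `llama-cpp-python`; the choice of the Q4_K_M quant and the runtime are assumptions, and any other file from the table can be substituted.

```python
# Minimal sketch: download one of the GGUF quants listed above and chat with it.
# Assumes `pip install huggingface_hub llama-cpp-python`.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Any filename from the table works; Q4_K_M is used here as an example.
model_path = hf_hub_download(
    repo_id="RichardErkhov/Felladrin_-_Llama-160M-Chat-v1-gguf",
    filename="Llama-160M-Chat-v1.Q4_K_M.gguf",
)

llm = Llama(model_path=model_path, n_ctx=2048)

# If the GGUF embeds the model's chat template, the chat API can be used
# directly; otherwise, format the prompt manually as shown in the
# "Recommended Prompt Format" section further down.
result = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a helpful assistant who provides concise responses."},
        {"role": "user", "content": "What are some potential applications for quantum computing?"},
    ],
    max_tokens=250,
    top_k=4,
    repeat_penalty=1.01,  # penalty_alpha is a transformers-only setting and is omitted here
)

print(result["choices"][0]["message"]["content"])
```
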

Original model description:
---
language:
- en
license: apache-2.0
tags:
- text-generation
base_model: JackFram/llama-160m
datasets:
- ehartford/wizard_vicuna_70k_unfiltered
- totally-not-an-llm/EverythingLM-data-V3
- Open-Orca/SlimOrca-Dedup
- databricks/databricks-dolly-15k
- THUDM/webglm-qa
widget:
- messages:
  - role: system
    content: You are a helpful assistant, who answers with empathy.
  - role: user
    content: Got a question for you!
  - role: assistant
    content: "Sure! What's it?"
  - role: user
    content: Why do you love cats so much!? 🐈
- messages:
  - role: system
    content: "You are a helpful assistant who answers user's questions with empathy."
  - role: user
    content: Who is Mona Lisa?
- messages:
  - role: system
    content: You are a helpful assistant who provides concise responses.
  - role: user
    content: Heya!
  - role: assistant
    content: Hi! How may I help you today?
  - role: user
    content: I need to build a simple website. Where should I start learning about web development?
- messages:
  - role: user
    content: Invited some friends to come home today. Give me some ideas for games to play with them!
- messages:
  - role: system
    content: "You are a helpful assistant who answers user's questions with details and curiosity."
  - role: user
    content: What are some potential applications for quantum computing?
- messages:
  - role: system
    content: You are a helpful assistant who gives creative responses.
  - role: user
    content: Write the specs of a game about mages in a fantasy world.
- messages:
  - role: system
    content: "You are a helpful assistant who answers user's questions with details."
  - role: user
    content: Tell me about the pros and cons of social media.
- messages:
  - role: system
    content: "You are a helpful assistant who answers user's questions with confidence."
  - role: user
    content: What is a dog?
  - role: assistant
    content: 'A dog is a four-legged, domesticated animal that is a member of the class Mammalia,
      which includes all mammals. Dogs are known for their loyalty, playfulness, and
      ability to be trained for various tasks. They are also used for hunting, herding,
      and as service animals.'
  - role: user
    content: What is the color of an apple?
inference:
  parameters:
    max_new_tokens: 250
    penalty_alpha: 0.5
    top_k: 4
    repetition_penalty: 1.01
model-index:
- name: Llama-160M-Chat-v1
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: AI2 Reasoning Challenge (25-Shot)
      type: ai2_arc
      config: ARC-Challenge
      split: test
      args:
        num_few_shot: 25
    metrics:
    - type: acc_norm
      value: 24.74
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Felladrin/Llama-160M-Chat-v1
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: HellaSwag (10-Shot)
      type: hellaswag
      split: validation
      args:
        num_few_shot: 10
    metrics:
    - type: acc_norm
      value: 35.29
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Felladrin/Llama-160M-Chat-v1
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU (5-Shot)
      type: cais/mmlu
      config: all
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 26.13
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Felladrin/Llama-160M-Chat-v1
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: TruthfulQA (0-shot)
      type: truthful_qa
      config: multiple_choice
      split: validation
      args:
        num_few_shot: 0
    metrics:
    - type: mc2
      value: 44.16
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Felladrin/Llama-160M-Chat-v1
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: Winogrande (5-shot)
      type: winogrande
      config: winogrande_xl
      split: validation
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 51.3
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Felladrin/Llama-160M-Chat-v1
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GSM8k (5-shot)
      type: gsm8k
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 0.0
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Felladrin/Llama-160M-Chat-v1
      name: Open LLM Leaderboard
---

# A Llama Chat Model of 160M Parameters

- Base model: [JackFram/llama-160m](https://huggingface.co/JackFram/llama-160m)
- Datasets:
  - [ehartford/wizard_vicuna_70k_unfiltered](https://huggingface.co/datasets/ehartford/wizard_vicuna_70k_unfiltered)
  - [totally-not-an-llm/EverythingLM-data-V3](https://huggingface.co/datasets/totally-not-an-llm/EverythingLM-data-V3)
  - [Open-Orca/SlimOrca-Dedup](https://huggingface.co/datasets/Open-Orca/SlimOrca-Dedup)
  - [databricks/databricks-dolly-15k](https://huggingface.co/datasets/databricks/databricks-dolly-15k)
  - [THUDM/webglm-qa](https://huggingface.co/datasets/THUDM/webglm-qa)
- Availability in other ML formats:
  - GGUF: [Felladrin/gguf-Llama-160M-Chat-v1](https://huggingface.co/Felladrin/gguf-Llama-160M-Chat-v1)
  - ONNX: [Felladrin/onnx-Llama-160M-Chat-v1](https://huggingface.co/Felladrin/onnx-Llama-160M-Chat-v1)
  - MLC: [Felladrin/mlc-q4f16-Llama-160M-Chat-v1](https://huggingface.co/Felladrin/mlc-q4f16-Llama-160M-Chat-v1)
  - MLX: [mlx-community/Llama-160M-Chat-v1-4bit-mlx](https://huggingface.co/mlx-community/Llama-160M-Chat-v1-4bit-mlx)

## Recommended Prompt Format

```
<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{user_message}<|im_end|>
<|im_start|>assistant
```

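As a small illustrative sketch (not from the original model card; the helper name is hypothetical), this is how a list of role/content messages expands into the template above, roughly matching what `apply_chat_template` produces in the Usage Example below.

```python
# Hypothetical helper: render messages into the ChatML-style template shown above.
def build_chatml_prompt(messages):
    prompt = ""
    for message in messages:
        prompt += f"<|im_start|>{message['role']}\n{message['content']}<|im_end|>\n"
    # Leave the assistant turn open so the model continues from here.
    return prompt + "<|im_start|>assistant\n"

print(build_chatml_prompt([
    {"role": "system", "content": "You are a helpful assistant who provides concise responses."},
    {"role": "user", "content": "Heya!"},
]))
```
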

## Recommended Inference Parameters

```yml
penalty_alpha: 0.5
top_k: 4
repetition_penalty: 1.01
```

## Usage Example

```python
from transformers import pipeline

# Load the chat model through the text-generation pipeline.
generate = pipeline("text-generation", "Felladrin/Llama-160M-Chat-v1")

messages = [
    {
        "role": "system",
        "content": "You are a helpful assistant who answers user's questions with details and curiosity.",
    },
    {
        "role": "user",
        "content": "What are some potential applications for quantum computing?",
    },
]

# Render the messages with the model's chat template, leaving the
# assistant turn open for generation.
prompt = generate.tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)

# Generate with the recommended inference parameters.
output = generate(
    prompt,
    max_new_tokens=1024,
    penalty_alpha=0.5,
    top_k=4,
    repetition_penalty=1.01,
)

print(output[0]["generated_text"])
```

## [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)

Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Felladrin__Llama-160M-Chat-v1).

| Metric                            | Value |
|-----------------------------------|------:|
| Avg.                              | 30.27 |
| AI2 Reasoning Challenge (25-Shot) | 24.74 |
| HellaSwag (10-Shot)               | 35.29 |
| MMLU (5-Shot)                     | 26.13 |
| TruthfulQA (0-shot)               | 44.16 |
| Winogrande (5-shot)               | 51.30 |
| GSM8k (5-shot)                    |  0.00 |