aashish1904 committed on
Commit
c20a6df
1 Parent(s): 319df26

Upload README.md with huggingface_hub

Files changed (1)
  1. README.md +230 -0
README.md ADDED
@@ -0,0 +1,230 @@
---
language:
- en
license: llama3.2
tags:
- enigma
- valiant
- valiant-labs
- llama
- llama-3.2
- llama-3.2-instruct
- llama-3.2-instruct-3b
- llama-3
- llama-3-instruct
- llama-3-instruct-3b
- 3b
- code
- code-instruct
- python
- conversational
- chat
- instruct
base_model: meta-llama/Llama-3.2-3B-Instruct
datasets:
- sequelbox/Tachibana
- sequelbox/Supernova
pipeline_tag: text-generation
model_type: llama
model-index:
- name: Llama3.2-3B-Enigma
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: Winogrande (5-Shot)
      type: winogrande
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 67.96
      name: acc
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: ARC Challenge (25-Shot)
      type: arc-challenge
      args:
        num_few_shot: 25
    metrics:
    - type: acc_norm
      value: 47.18
      name: normalized accuracy
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: IFEval (0-Shot)
      type: HuggingFaceH4/ifeval
      args:
        num_few_shot: 0
    metrics:
    - type: inst_level_strict_acc and prompt_level_strict_acc
      value: 47.75
      name: strict accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=ValiantLabs/Llama3.2-3B-Enigma
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: BBH (3-Shot)
      type: BBH
      args:
        num_few_shot: 3
    metrics:
    - type: acc_norm
      value: 18.81
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=ValiantLabs/Llama3.2-3B-Enigma
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MATH Lvl 5 (4-Shot)
      type: hendrycks/competition_math
      args:
        num_few_shot: 4
    metrics:
    - type: exact_match
      value: 6.65
      name: exact match
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=ValiantLabs/Llama3.2-3B-Enigma
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GPQA (0-shot)
      type: Idavidrein/gpqa
      args:
        num_few_shot: 0
    metrics:
    - type: acc_norm
      value: 1.45
      name: acc_norm
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=ValiantLabs/Llama3.2-3B-Enigma
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MuSR (0-shot)
      type: TAUR-Lab/MuSR
      args:
        num_few_shot: 0
    metrics:
    - type: acc_norm
      value: 4.54
      name: acc_norm
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=ValiantLabs/Llama3.2-3B-Enigma
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU-PRO (5-shot)
      type: TIGER-Lab/MMLU-Pro
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 15.41
      name: accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=ValiantLabs/Llama3.2-3B-Enigma
      name: Open LLM Leaderboard
---

[![QuantFactory Banner](https://lh7-rt.googleusercontent.com/docsz/AD_4nXeiuCm7c8lEwEJuRey9kiVZsRn2W-b4pWlu3-X534V3YmVuVc2ZL-NXg2RkzSOOS2JXGHutDuyyNAUtdJI65jGTo8jT9Y99tMi4H4MqL44Uc5QKG77B0d6-JfIkZHFaUA71-RtjyYZWVIhqsNZcx8-OMaA?key=xt3VSDoCbmTY7o-cwwOFwQ)](https://hf.co/QuantFactory)

# QuantFactory/Llama3.2-3B-Enigma-GGUF
This is a quantized version of [ValiantLabs/Llama3.2-3B-Enigma](https://huggingface.co/ValiantLabs/Llama3.2-3B-Enigma) created using llama.cpp.
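
Because these are GGUF files, they are meant to be run with llama.cpp or one of its bindings rather than with `transformers`. The snippet below is a hedged sketch (not from the original card) using the `llama-cpp-python` bindings; the quantization filename pattern is an assumption, so substitute one of the GGUF files actually published in this repo.

```python
from llama_cpp import Llama

# Sketch only: the filename pattern is an assumption -- replace it with a GGUF
# file that actually exists in QuantFactory/Llama3.2-3B-Enigma-GGUF.
llm = Llama.from_pretrained(
    repo_id="QuantFactory/Llama3.2-3B-Enigma-GGUF",
    filename="*Q4_K_M.gguf",  # assumed quant; any published quant works
    n_ctx=4096,               # context window for this session
)

response = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are Enigma, a highly capable code assistant."},
        {"role": "user", "content": "Write a Python function that checks whether a string is a palindrome."},
    ],
    max_tokens=512,
)
print(response["choices"][0]["message"]["content"])
```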

# Original Model Card

![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/64f267a8a4f79a118e0fcc89/it7MY5MyLCLpFQev5dUis.jpeg)

Enigma is a code-instruct model built on Llama 3.2 3b.
- High-quality code-instruct performance with the Llama 3.2 Instruct chat format
- Finetuned on synthetic code-instruct data generated with Llama 3.1 405b. [Find the current version of the dataset here!](https://huggingface.co/datasets/sequelbox/Tachibana)
- Overall chat performance supplemented with [generalist synthetic data.](https://huggingface.co/datasets/sequelbox/Supernova)

## Version

This is the **2024-09-30** release of Enigma for Llama 3.2 3b, enhancing code-instruct and general chat capabilities.

Enigma is also available for [Llama 3.1 8b!](https://huggingface.co/ValiantLabs/Llama3.1-8B-Enigma)

Help us out by recommending Enigma to your friends! We're excited for more Enigma releases in the future.

## Prompting Guide
Enigma uses the [Llama 3.2 Instruct](https://huggingface.co/meta-llama/Llama-3.2-3B-Instruct) prompt format. The example script below can be used as a starting point for general chat:

```python
import transformers
import torch

model_id = "ValiantLabs/Llama3.2-3B-Enigma"

# Load Enigma in bfloat16 and shard it across the available devices.
pipeline = transformers.pipeline(
    "text-generation",
    model=model_id,
    model_kwargs={"torch_dtype": torch.bfloat16},
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are Enigma, a highly capable code assistant."},
    {"role": "user", "content": "Can you explain virtualization to me?"}
]

outputs = pipeline(
    messages,
    max_new_tokens=1024,
)

# The last element of generated_text is the assistant's reply message.
print(outputs[0]["generated_text"][-1])
```
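
If you want to see the raw prompt string behind this chat format (for example, to build prompts by hand or feed them to a custom inference stack), you can render the messages with the tokenizer's chat template. This is a minimal sketch, not part of the original card, assuming the standard `transformers` chat-template API:

```python
from transformers import AutoTokenizer

# Sketch: inspect the raw Llama 3.2 Instruct prompt string that Enigma expects.
tokenizer = AutoTokenizer.from_pretrained("ValiantLabs/Llama3.2-3B-Enigma")

messages = [
    {"role": "system", "content": "You are Enigma, a highly capable code assistant."},
    {"role": "user", "content": "Can you explain virtualization to me?"},
]

# tokenize=False returns the formatted string; add_generation_prompt=True appends
# the assistant header so the model generates the reply next.
prompt = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
)
print(prompt)
```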

## The Model
Enigma is built on top of Llama 3.2 3b Instruct, using high-quality code-instruct data and general chat data in the Llama 3.2 Instruct prompt style to supplement overall performance.

Our current version of Enigma is trained on code-instruct data from [sequelbox/Tachibana](https://huggingface.co/datasets/sequelbox/Tachibana) and general chat data from [sequelbox/Supernova.](https://huggingface.co/datasets/sequelbox/Supernova)

![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/63444f2687964b331809eb55/VCJ8Fmefd8cdVhXSSxJiD.jpeg)

Enigma is created by [Valiant Labs.](http://valiantlabs.ca/)

[Check out our HuggingFace page for Shining Valiant 2 and our other Build Tools models for creators!](https://huggingface.co/ValiantLabs)

[Follow us on X for updates on our models!](https://twitter.com/valiant_labs)

We care about open source.
For everyone to use.

We encourage others to finetune further from our models.