Commit 08ccc9e (parent: 221323d), committed by jondurbin: Upload folder using huggingface_hub

README.md CHANGED

---
license: llama2
datasets:
- jondurbin/airoboros-3.1
---

### Overview

Another experimental model, using mostly synthetic data generated by [airoboros](https://github.com/jondurbin/airoboros).

#### IMPORTANT NOTE - llama-2 chat format!!!

This model uses the llama-2 chat format, rather than the vicuna-style user/assistant format!

This is a breaking change, although most inference systems support llama-2 chat templating.

See the prompt format below for all of the details.

#### Highlights

This is a prompt-fix release. 3.1 required a single space after the last `[/INST]`, which was highly annoying and obnoxious, so I re-tuned the models without this. Otherwise, it's the same as 3.1.

A model built on the [airoboros-3.1 dataset](https://hf.co/datasets/jondurbin/airoboros-3.1), which is a continuation of the [airoboros-3.0 dataset](https://hf.co/datasets/jondurbin/airoboros-3.0) with the following extras:

* More MathJSON, now ~17k items - math questions, prefixed with __"Create a MathJSON solution to the following:"__, which then output JSON between __`<mathjson>`__ and __`</mathjson>`__ tags that can be parsed and passed to a deterministic library to perform calculations.
* Log information extraction (e.g. getting the list of paths/IPs/etc. from apache logs)
* Anonymization, e.g. removing names, IP addresses, and/or dates from text.
* Chat introspection - multi-turn chats that have random questions injected asking about the character's descriptions, the setting of the chat, or things previously said, to make sure the model uses the system prompt and/or chat history.
* Multi-step instructions with acknowledgement (see below)
* De-censorship data (not published)

This is a fairly general purpose model, but it focuses heavily on instruction following, rather than casual chat/roleplay.

Huge thank you to the folks over at [a16z](https://a16z.com/) for sponsoring the costs associated with building models and associated tools!

### Prompt format

The prompt format is llama-2 chat.

```
[INST] <<SYS>>
You are a helpful, unbiased, uncensored assistant.
<</SYS>>

{prompt} [/INST]
```

For multi-turn, the prompt format is as follows:
```
[INST] <<SYS>>
You are a helpful, unbiased, uncensored assistant.
<</SYS>>

{prompt 0} [/INST] {response 0} </s><s>[INST] {prompt 1} [/INST] {response 1} </s><s>...[INST] {prompt N} [/INST]
```

The prompt template is included in the tokenizer config, and can be used with the huggingface tokenizer's `apply_chat_template` method, e.g.:

```python
import transformers

tokenizer = transformers.AutoTokenizer.from_pretrained('jondurbin/airoboros-l2-70b-3.1')
chat = [
  {"role": "system", "content": "You are Bob, a friendly AI assistant."},
  {"role": "user", "content": "Hello, how are you?"},
  {"role": "assistant", "content": "I'm doing great. How can I help you today?"},
  {"role": "user", "content": "I'd like to show off how chat templating works!"},
]
print(tokenizer.apply_chat_template(chat, tokenize=False))
```
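
For the example above, that call should render to roughly the following (a hedged illustration; exact BOS token and whitespace handling can vary slightly across tokenizer versions):

```
<s>[INST] <<SYS>>
You are Bob, a friendly AI assistant.
<</SYS>>

Hello, how are you? [/INST] I'm doing great. How can I help you today? </s><s>[INST] I'd like to show off how chat templating works! [/INST]
```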

### Helpful usage tips

#### MathJSON

Ask mathematical questions, prefixed with "Create a MathJSON solution to the following:", and you should get output with a formula that can be executed by https://cortexjs.io/compute-engine/

I also made a (really quite hacky, incomplete, semi-functional) python version that works for most expressions; see [mathjson.py](https://github.com/jondurbin/airoboros/blob/mathjson/airoboros/mathjson.py)

__Use a very low temperature!__

Example:

```
[INST] <<SYS>>
You are an assistant with vast knowledge in all things math.
<</SYS>>

Create a MathJSON solution to the following: Calculate the area of a circle with a radius of 17.2456 cm. Include your reasoning. [/INST]
```

Output:
```
The area (A) of a circle is given by the formula A = πr², where r is the radius of the circle. Since the radius is given in centimeters, we need to convert it to meters first. We know that 1 meter equals 100 centimeters. Therefore, we can represent the radius as: r = 17.2456 cm * 100/1.

Once we have the radius in meters, we can calculate the area of the circle using the formula A = πr².

Solution as MathJSON:
<mathjson>
[
  "Multiply",
  "Pi",
  [
    "Power",
    17.2456,
    2
  ]
]
</mathjson>
```

You can then validate the JSON between `<mathjson>` and `</mathjson>`, then pass the parsed JSON to compute-engine JS or the `evaluate` function in mathjson.py to calculate the response.
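
A minimal parsing sketch (assuming the tags appear exactly once in the output; `evaluate` refers to the function in the mathjson.py linked above, whose exact signature you should verify):

```python
import json
import re

def extract_mathjson(output_text):
    # Pull the expression between the <mathjson> tags and parse it as JSON.
    match = re.search(r"<mathjson>(.*?)</mathjson>", output_text, re.DOTALL)
    if not match:
        raise ValueError("no <mathjson> block found in model output")
    return json.loads(match.group(1))

# expression = extract_mathjson(model_output)
# result = evaluate(expression)  # from mathjson.py; signature assumed
```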

#### Context obedient question answering

By obedient, I mean the model was trained to ignore what it thinks it knows, and use the context to answer the question. The model was also tuned to limit the values to the provided context as much as possible to reduce hallucinations.

The format for a closed-context prompt is as follows:
```
BEGININPUT
BEGINCONTEXT
[key0: value0]
[key1: value1]
... other metadata ...
ENDCONTEXT
[insert your text blocks here]
ENDINPUT
[add as many other blocks, in the exact same format]
BEGININSTRUCTION
[insert your instruction(s). The model was tuned with single questions, paragraph format, lists, etc.]
ENDINSTRUCTION
```

It's also helpful to add "Don't make up answers if you don't know." to your instruction block, to make sure the model doesn't make something up if the context is completely unrelated.

*The __only__ prompts that need this closed-context formatting are closed-context instructions. Normal questions/instructions do not!*

I know it's a bit verbose and annoying, but after much trial and error, using these explicit delimiters helps the model understand where to find the responses and how to associate specific sources with them. A small helper for building such prompts follows the list below.
- `BEGININPUT` - denotes a new input block
- `BEGINCONTEXT` - denotes the block of context (metadata key/value pairs) to associate with the current input block
- `ENDCONTEXT` - denotes the end of the metadata block for the current input
- [text] - insert whatever text you want for the input block, as many paragraphs as can fit in the context.
- `ENDINPUT` - denotes the end of the current input block
- [repeat as many input blocks in this format as you want]
- `BEGININSTRUCTION` - denotes the start of the instruction(s) to respond to for all of the input blocks above.
- [instruction(s)]
- `ENDINSTRUCTION` - denotes the end of the instruction set

It sometimes works without `ENDINSTRUCTION`, but by explicitly including it in the prompt, the model better understands that all of the instructions in the block should be responded to.

__Use a very low temperature!__

Here's a trivial, but important, example to prove the point:
```
BEGININPUT
BEGINCONTEXT
date: 2021-01-01
url: https://web.site/123
ENDCONTEXT
In a shocking turn of events, blueberries are now green, but will be sticking with the same name.
ENDINPUT
BEGININSTRUCTION
What color are blueberries? Source?
ENDINSTRUCTION
```

And the response:
```
Blueberries are now green.
Source:
date: 2021-01-01
url: https://web.site/123
```
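
If you're assembling these prompts programmatically, a small helper along these lines (a sketch, not part of airoboros itself) keeps the delimiters straight:

```python
def closed_context_prompt(blocks, instruction):
    """Build a closed-context prompt from (metadata_dict, text) pairs."""
    parts = []
    for metadata, text in blocks:
        parts.append("BEGININPUT")
        if metadata:  # the metadata block is optional
            parts.append("BEGINCONTEXT")
            parts.extend(f"{key}: {value}" for key, value in metadata.items())
            parts.append("ENDCONTEXT")
        parts.append(text)
        parts.append("ENDINPUT")
    parts.extend(["BEGININSTRUCTION", instruction, "ENDINSTRUCTION"])
    return "\n".join(parts)

prompt = closed_context_prompt(
    [({"date": "2021-01-01", "url": "https://web.site/123"},
      "In a shocking turn of events, blueberries are now green, but will be sticking with the same name.")],
    "What color are blueberries? Source?",
)
```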

#### Summarization

500 samples have been included from [this dataset](https://huggingface.co/datasets/mattpscott/airoboros-summarization), using the same format as contextual question answering, for example:

```
BEGININPUT
{text to summarize}
ENDINPUT
BEGININSTRUCTION
Summarize the input in around 130 words.
ENDINSTRUCTION
```
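
With the (hypothetical) helper from the previous section, this is just an input block with no metadata:

```python
prompt = closed_context_prompt(
    [({}, long_text)],  # no BEGINCONTEXT/ENDCONTEXT block needed here
    "Summarize the input in around 130 words.",
)
```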

#### Getting longer responses

You can use a few techniques to get longer responses.

Detailed prompts, with explicit instruction for word count:
```
Please compose a narrative set in the heart of an ancient library, steeped in the scent of old parchment and ink. The protagonist should be a young scholar who is dedicated to studying the art of storytelling and its evolution throughout history. In her pursuit of knowledge, she stumbles upon a forgotten tome that seems to possess an unusual aura. This book has the ability to bring stories to life, literally manifesting characters and scenarios from within its pages into reality.

The main character must navigate through various epochs of storytelling - from oral traditions of tribal societies, through medieval minstrels' tales, to modern-day digital narratives - as they come alive around her. Each era presents its unique challenges and lessons about the power and impact of stories on human civilization.

One such character could be a sentient quill pen, who was once used by renowned authors of yesteryears and now holds their wisdom and experiences. It becomes her mentor, guiding her through this journey with witty remarks and insightful commentary.

Ensure that your tale encapsulates the thrill of adventure, the beauty of learning, and the profound connection between humans and their stories. All characters involved should be non-human entities. Feel free to explore creative liberties but maintain the mentioned elements.

Your response should be approximately 2300 words.
```

Or, a simpler example:
```
Please create a long, detailed story about a dragon in an old growth forest who, for some reason, begins speaking the words of the source code of linux.
```

There are a few examples of next chapter completion as well, e.g.:
```
Write the next chapter of a historical fiction novel set in Paris during the 20th century.

Here's a summary of the previous chapter:
In the vibrant city of Paris, amid the tumultuous changes of the 20th century, our protagonist Margot, an aspiring fashion designer, has just secured an apprenticeship at a prestigious couture house. She meets Lucien, a charming journalist who covers the fashion industry. Together they navigate the ever-changing world of fashion and society, uncovering secrets that reveal the intricate links between style, politics, and culture. As the chapter concludes, they decide to delve deeper into the hidden corners of the fashion world to unravel its mysteries.

Requirements for the next chapter:

1. Character Development of Margot and Lucien:
- Margot's Evolution: Unfold more about Margot's past, her dreams of revolutionizing fashion, and her struggle to establish herself in a male-dominated industry. Illustrate her growing expertise, innovative ideas, and increasing dependence on Lucien.
- Lucien's Complexity: Introduce uncertainties surrounding Lucien's background and real motives. Increase suspense by suggesting undisclosed information he possesses, while also highlighting his wit and perceptiveness.

2. Exploration of Paris and the Couture House:
- Paris: Elaborate on their journey through the bustling streets of Paris, including encounters with iconic figures, social unrest, and relics from different eras of French history.
- The Couture House: Expand on the grandeur of the couture house they work in, filled with artistic masterpieces, intense competition, and cryptic notes hinting at a scandalous past.

3. Emergence of the Subplot: The Lost Collection:
- Discovery: Have Margot and Lucien stumble upon a secret vault containing a lost collection designed before World War II, raising new questions about the previous owner and the influence of war on fashion.
- Revelation: Capture their shock as they realize the designs were plagiarized, the potential repercussions, and the opportunities it presents for Margot's career.
- Twist: End with a twist that suggests there are other stolen collections across Paris, setting up their new mission.

Your response should be approximately 650 words.
```

#### Coding

You can ask for fairly complex coding instructions with multiple criteria, e.g.:

```
Create a python application with the following requirements:
- Asyncio FastAPI webserver
- ping endpoint that returns the current date in JSON format
- file upload endpoint, which calculates the file's sha256 checksum, and checks postgres to deduplicate
```

Or inline criteria:

```
Write a multi-threaded TCP server in C that accepts a "GET [key]" input and "SET [key] [value]" input, and uses a binary tree to get and store the input values.
```

You can also optionally add a single space and "PLAINFORMAT" at the end of your prompt to avoid backticks, explanations, etc. and just print the code, e.g.:

```
Write a websocket application in node.js. PLAINFORMAT
```

#### Agent/function calling

The dataset includes many examples of function/args generation based on input criteria. This is somewhat similar to OpenAI's function calling, but the output is either JSON or YAML.

Example prompt:
```
As an AI assistant, please select the most suitable function and parameters from the list of available functions below, based on the user's input. Provide your response in JSON format.

Input: I want to know how many times 'Python' is mentioned in my text file.

Available functions:
file_analytics:
  description: This tool performs various operations on a text file.
  params:
    action: The operation we want to perform on the data, such as "count_occurrences", "find_line", etc.
    filters:
      keyword: The word or phrase we want to search for.
```

Response:
```json
{
  "function": "file_analytics",
  "params": {
    "action": "count_occurrences",
    "filters": {
      "keyword": "Python"
    }
  }
}
```
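
Since the selection comes back as plain JSON, dispatching it is straightforward; a minimal sketch, where `functions` is your own name-to-callable map and each callable accepts the listed params as keyword arguments:

```python
import json

def dispatch(model_output, functions):
    # Parse the model's function selection and invoke the matching callable.
    call = json.loads(model_output)
    return functions[call["function"]](**call["params"])

# count = dispatch(response_text, {"file_analytics": file_analytics})
```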

#### Chain-of-thought

You can ask for several possible responses to a given problem, with a ranking and final answer selection.

Example prompt:

```
A group of people decide to play a round-robin tournament where every player plays every other player exactly once. If a total of 45 games were played, how many players participated in the tournament? Offer several potential answers, rank them according to mathematical logic, and pick the most feasible one.
```

Example response:
```
Solution 1:
In a round-robin tournament, each player plays every other player exactly once. This means that the total number of games played is equal to the combination of all players taken two at a time (since a game involves two players).

The formula for combinations is nC2 = n(n-1)/2, where n is the total number of players. We know that nC2 equals 45 (the total number of games), so we can set up the equation as follows:

n(n-1)/2 = 45

Solving this equation gives us n=10.

Final answer: There were 10 players in the tournament.

Solution 2:
Another way to approach this problem is by using the concept of triangular numbers. Triangular numbers are a sequence of numbers where each number can be represented as a triangle with dots. The nth triangular number is given by the sum of the first n natural numbers.

If we consider each game as a dot and each player as a side of the triangle, then the total number of games (45) would correspond to the 9th triangular number because 1+2+3+4+5+6+7+8+9=45. However, since each side of the triangle represents a player, and there's one more player than sides in our model (because the last player has no one left to play against), we need to add one more to get the total number of players.

Final answer: There were 10 players in the tournament.

Solution 3:
We could also use trial and error to solve this problem. Starting from a small number of players, we calculate how many games would be played in a round-robin tournament until we reach 45 games. For example, with 2 players, only 1 game is played; with 3 players, 3 games are played; with 4 players, 6 games are played, and so on. Continuing this process, we find that with 10 players, 45 games are played.

Final answer: There were 10 players in the tournament.

Ranking of solutions from best to worst:
1. Solution 1: This solution uses direct mathematical reasoning based on the properties of a round-robin tournament and solves an equation to find the answer.
2. Solution 2: This solution uses the concept of triangular numbers which requires additional steps and knowledge beyond the immediate problem context, but still arrives at the correct answer.
3. Solution 3: This solution relies on trial and error, which is less efficient and reliable than the other methods, but it eventually gets to the right answer.

Best and final answer: There were 10 players in the tournament.
```
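
Solution 3's trial-and-error approach, for what it's worth, is a one-liner to verify:

```python
# Find the smallest n with n(n-1)/2 == 45 round-robin games.
print(next(n for n in range(2, 100) if n * (n - 1) // 2 == 45))  # 10
```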

#### reWOO style execution planning

The model now supports execution planning for complex instructions that would require making use of several tools. The output is just the plan; you must implement a mechanism to parse the output and actually call the functions!

Example prompt:
```
Please construct a systematic plan to generate an optimal response to the user instruction, utilizing a set of provided tools. Each plan will correspond to an evidence value, which will be the output of one of the available functions given an input string
that could be the user's question, one or more prior evidence values, or a combination of both.

Here are the tools available to be called:
DuckDuckGo[input]: Tool that enables the user to search for information using the DuckDuckGo search engine. This tool is helpful in locating a suitable list of sites that may contain the answer to the user's query. It does not directly answer the question but finds a list of sites that might have the answer.
TextScraper[input]: Load one or more websites from the input string containing newline delimited links, where input is one or more links, and produces plain text output containing the content of the links.
HyperlinkExtractor[input]: Extract links from plain text and produces a plain text, newline delimited response of links.
KnowledgeModel[input]: Question answering language model, particularly useful in answering questions based on an input passage of text. The input must be a text question that references an :evidence[n]: variable, e.g. What color is the cat, given :evidence1:?

The output should be in format:
Plan: [first action to take based on input question]
:evidence0: = [function to call with input parameter]
Plan: [next action to take, based on result of :evidence0:]
:evidence1: = [next function to call and input parameter, which may include reference to previous evidence, e.g. "Given context :evidence0"]
...
Answer: [:evidence[n]: containing the final answer.]

Question: Who won the Nobel Prize in Literature in 2020 and what is their most famous work?
```

Response:
```
Plan: Begin by conducting a web search to find out who won the Nobel Prize in Literature in 2020.
:evidence0: = DuckDuckGo[Nobel Prize in Literature 2020 winner]
Plan: Utilize the KnowledgeModel to interpret the search results and find the name of the laureate.
:evidence1: = KnowledgeModel[Who won the Nobel Prize in Literature in 2020, given :evidence0:?]
Plan: Conduct another web search to find the most famous work of the identified laureate.
:evidence2: = DuckDuckGo[Most famous work of :evidence1:]
Plan: Extract the relevant links from the DuckDuckGo search results for a more focused search.
:evidence3: = HyperlinkExtractor[:evidence2:]
Plan: Use the TextScraper tool to extract information from the relevant links.
:evidence4: = TextScraper[:evidence3:]
Plan: Finally, utilize the KnowledgeModel to identify and summarize the most famous work of the laureate from the extracted information.
:evidence5: = KnowledgeModel[What is the most famous work of :evidence1:, given :evidence4:?]
Answer: :evidence5:
```

For this to be useful, you'd have to parse the output plan text and implement/call each of the functions. This is just pseudo-code, completely untested off the top of my head, and would obviously require full implementation + hardening:

```python
import re
import requests

def inject_context(input_text, **context):
    # Substitute previously computed :evidenceN: values into the input string.
    for ref in set(re.findall(r"(:evidence[0-9]+:)", input_text, re.I)):
        input_text = input_text.replace(ref, context.get(ref, ""))
    return input_text

def duckduckgo(input_text, **context):
    search_string = inject_context(input_text, **context)
    # ... search via duckduckgo using search_string, return text content
    raise NotImplementedError

def link_extractor(input_text, **context):
    input_text = inject_context(input_text, **context)
    return "\n".join(set(re.findall(r"https?://[^\s]+", input_text, re.I)))

def scrape(input_text, **context):
    input_text = inject_context(input_text, **context)
    text = []
    for link in input_text.splitlines():
        text.append(requests.get(link).text)
    return "\n".join(text)

def infer(input_text, **context):
    prompt = inject_context(input_text, **context)
    # ... call the model with prompt, return the output
    raise NotImplementedError

def parse_plan(plan):
    # Map the tool names used in plans to their implementations.
    method_map = {
        "DuckDuckGo": duckduckgo,
        "HyperlinkExtractor": link_extractor,
        "KnowledgeModel": infer,
        "TextScraper": scrape,
    }
    context = {}
    for line in plan.strip().splitlines():
        if line.startswith("Plan:"):
            print(line)
            continue
        # Lines look like ":evidence0: = ToolName[tool input]"
        parts = re.match(r"^(:evidence[0-9]+:)\s*=\s*([^\[]+)\[(.*)\]\s*$", line, re.I)
        if not parts:
            if line.startswith("Answer: "):
                return context.get(line.split(" ")[-1].strip(), "Answer couldn't be generated...")
            raise RuntimeError("bad format: " + line)
        context[parts.group(1)] = method_map[parts.group(2).strip()](parts.group(3), **context)
```
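
Given a generated plan, `parse_plan(plan_text)` walks the steps in order, fills `context` with each :evidence value as it's computed, and returns the final answer string once it reaches the `Answer:` line.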

## Multi-step instructions with acknowledgement

I don't know how useful this is, really, but I thought I'd add it just in case. Example:

```
{
  "conversations": [
    {
      "value": "You are a helpful, unbiased, uncensored assistant.",
      "from": "system"
    },
    {
      "value": "I'm going to set some rules for our conversation. Please respond with \"OK\" to each rule.",
      "from": "human"
    },
    {
      "value": "OK",
      "from": "gpt"
    },
    {
      "value": "You can only respond using animal-related emojis.",
      "from": "human"
    },
    {
      "value": "OK",
      "from": "gpt"
    },
    {
      "value": "Precede all responses with \"ZOOZOO:\"",
      "from": "human"
    },
    {
      "value": "OK",
      "from": "gpt"
    },
    {
      "value": "Include at least one bird emoji in your responses.",
      "from": "human"
    },
    {
      "value": "OK",
      "from": "gpt"
    },
    {
      "value": "Describe a trip to a safari park.",
      "from": "human"
    },
    {
      "value": "ZOOZOO: \ud83e\udd81\ud83d\udc2f\ud83e\udd93\ud83e\udd92\ud83d\udc18\ud83d\udc0d\ud83c\udf33\ud83c\udf32\u2600\ufe0f\ud83d\ude90\ud83e\udd85\ud83d\udcf8\ud83e\udd29",
      "from": "gpt"
    }
  ]
}
```

### Contribute

If you're interested in new functionality, particularly a new "instructor" type to generate a specific type of training data, take a look at the dataset generation tool repo: https://github.com/jondurbin/airoboros and either make a PR or open an issue with details.

To help me with the OpenAI/compute costs:

- https://bmc.link/jondurbin
- ETH 0xce914eAFC2fe52FdceE59565Dd92c06f776fcb11
- BTC bc1qdwuth4vlg8x37ggntlxu5cjfwgmdy5zaa7pswf

### License and usage restrictions

The airoboros 3.1 models are built on top of multiple base models, each with their own license/restrictions.

The 30b model is built on the original llama, which has a strict non-commercial usage restriction.

The models with `-l2` in the name have a custom Meta license:
- See the [meta-license/LICENSE.txt](meta-license/LICENSE.txt) file attached for the original license provided by Meta.
- See also [meta-license/USE_POLICY.md](meta-license/USE_POLICY.md) and [meta-license/Responsible-Use-Guide.pdf](meta-license/Responsible-Use-Guide.pdf), also provided by Meta.

The models with `-m-` in the name are built on mistral-7b (apache 2.0).

The fine-tuning data was mostly generated by OpenAI API calls to gpt-4, via [airoboros](https://github.com/jondurbin/airoboros).

The ToS for OpenAI API usage has a clause preventing the output from being used to train a model that __competes__ with OpenAI.

- what does *compete* actually mean here?
- these small open source models will not produce output anywhere near the quality of gpt-4, or even gpt-3.5, so I can't imagine this could credibly be considered competing in the first place
- if someone else uses the dataset to do the same, they wouldn't necessarily be violating the ToS because they didn't call the API, so I don't know how that works
- the training data used in essentially all large language models includes a significant amount of copyrighted or otherwise non-permissive licensing in the first place
- other work using the self-instruct method, e.g. the original here: https://github.com/yizhongw/self-instruct, released the data and model as apache-2

I am purposely leaving this license ambiguous (other than the fact that you must comply with the Meta original license for llama-2) because I am not a lawyer and refuse to attempt to interpret all of the terms accordingly.

Your best bet is probably to avoid using this commercially due to the OpenAI API usage.

Either way, by using this model, you agree to completely indemnify me.
config.json ADDED

{
  "_name_or_path": "airoboros-3b-311",
  "architectures": [
    "StableLMEpochForCausalLM"
  ],
  "auto_map": {
    "AutoConfig": "configuration_stablelm_epoch.StableLMEpochConfig",
    "AutoModelForCausalLM": "modeling_stablelm_epoch.StableLMEpochForCausalLM"
  },
  "bos_token_id": 0,
  "eos_token_id": 0,
  "hidden_act": "silu",
  "hidden_size": 2560,
  "initializer_range": 0.02,
  "intermediate_size": 6912,
  "max_position_embeddings": 4096,
  "model_type": "stablelm_epoch",
  "norm_eps": 1e-05,
  "num_attention_heads": 32,
  "num_heads": 32,
  "num_hidden_layers": 32,
  "num_key_value_heads": 32,
  "rope_pct": 0.25,
  "rope_theta": 10000,
  "rotary_scaling_factor": 1.0,
  "tie_word_embeddings": false,
  "torch_dtype": "bfloat16",
  "transformers_version": "4.34.0",
  "use_cache": true,
  "vocab_size": 50304
}
configuration_stablelm_epoch.py ADDED

# coding=utf-8
# Copyright 2023 Stability and The HuggingFace Inc. team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
""" StableLM Epoch model configuration"""
from transformers import PretrainedConfig
from transformers.utils import logging


logger = logging.get_logger(__name__)


class StableLMEpochConfig(PretrainedConfig):
    r"""
    Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
    documentation from [`PretrainedConfig`] for more information.

    Args:
        vocab_size (`int`, *optional*, defaults to 50_304):
            Vocabulary size of the StableLM model. Defines the number of different tokens that
            can be represented by the `inputs_ids` passed when calling [`StableLMEpochModel`].
        intermediate_size (`int`, *optional*, defaults to 6912):
            Dimension of the MLP representations.
        hidden_size (`int`, *optional*, defaults to 2560):
            Dimension of the decoder layers and the pooler layer.
        num_hidden_layers (`int`, *optional*, defaults to 32):
            Number of hidden layers in the Transformer decoder.
        num_attention_heads (`int`, *optional*, defaults to 32):
            Number of attention heads for each attention layer in the Transformer decoder.
        num_key_value_heads (`int`, *optional*):
            This is the number of key_value heads that should be used to implement Grouped Query Attention. If
            `num_key_value_heads=num_attention_heads`, the model will use Multi Head Attention (MHA); if
            `num_key_value_heads=1`, the model will use Multi Query Attention (MQA); otherwise GQA is used. When
            converting a multi-head checkpoint to a GQA checkpoint, each group key and value head should be constructed
            by meanpooling all the original heads within that group. For more details, check out [this
            paper](https://arxiv.org/pdf/2305.13245.pdf). If it is not specified, will default to
            `num_attention_heads`.
        hidden_act (`str` or `function`, *optional*, defaults to `"silu"`):
            The non-linear activation function (function or string).
        rope_pct (`float`, *optional*, defaults to 0.25):
            Percentage of hidden dimensions to allocate to rotary embeddings.
        rope_theta (`float`, *optional*, defaults to 10000.0):
            The base period of the RoPE embeddings.
        max_position_embeddings (`int`, *optional*, defaults to 4096):
            The maximum sequence length that this model might ever be used with.
            Typically set this to something large just in case (e.g., 512 or 1024 or 2048).
        initializer_range (`float`, *optional*, defaults to 0.02):
            The standard deviation of the truncated_normal_initializer for initializing
            all weight matrices.
        norm_eps (`float`, *optional*, defaults to 1e-5):
            The epsilon used by the normalization layers.
        use_cache (`bool`, *optional*, defaults to `True`):
            Whether or not the model should return the last key/values attentions
            (not used by all models). Only relevant if `config.is_decoder=True`.
        tie_word_embeddings (`bool`, *optional*, defaults to `False`):
            Whether to tie weight embeddings.
    """
    model_type = "stablelm_epoch"
    keys_to_ignore_at_inference = ["past_key_values"]

    def __init__(
        self,
        vocab_size=50_304,
        intermediate_size=6912,
        hidden_size=2560,
        num_hidden_layers=32,
        num_attention_heads=32,
        num_key_value_heads=32,
        hidden_act="silu",
        rope_pct=0.25,
        rope_theta=10_000,
        max_position_embeddings=4096,
        initializer_range=0.02,
        norm_eps=1.0e-5,
        use_cache=True,
        bos_token_id=0,
        eos_token_id=2,
        tie_word_embeddings=False,
        **kwargs,
    ):
        self.vocab_size = vocab_size
        self.max_position_embeddings = max_position_embeddings
        self.intermediate_size = intermediate_size
        self.hidden_size = hidden_size
        self.num_hidden_layers = num_hidden_layers
        self.num_attention_heads = num_attention_heads
        self.num_key_value_heads = num_key_value_heads
        self.hidden_act = hidden_act
        self.rope_pct = rope_pct
        self.rope_theta = rope_theta
        self.initializer_range = initializer_range
        self.norm_eps = norm_eps
        self.use_cache = use_cache
        self.tie_word_embeddings = tie_word_embeddings
        super().__init__(
            bos_token_id=bos_token_id,
            eos_token_id=eos_token_id,
            tie_word_embeddings=tie_word_embeddings,
            **kwargs,
        )
generation_config.json ADDED

{
  "_from_model_config": true,
  "bos_token_id": 0,
  "eos_token_id": 0,
  "transformers_version": "4.34.0"
}
model-00001-of-00002.safetensors ADDED

version https://git-lfs.github.com/spec/v1
oid sha256:4a70b8e8e3a9db3c5042229847f17742d4adaf7561d3be05b2e66eb6348ca2b1
size 3993609400
model-00002-of-00002.safetensors ADDED

version https://git-lfs.github.com/spec/v1
oid sha256:f2827dcc8e037e2a95ba66c29f083c996e48c512e2d7ed97132d95e05c2c94e5
size 1597317984
model.safetensors.index.json ADDED

{
  "metadata": {
    "total_size": 5590886400
  },
  "weight_map": {
    "lm_head.weight": "model-00002-of-00002.safetensors",
    "model.embed_tokens.weight": "model-00001-of-00002.safetensors",
    "model.layers.0.input_layernorm.bias": "model-00001-of-00002.safetensors",
    "model.layers.0.input_layernorm.weight": "model-00001-of-00002.safetensors",
    "model.layers.0.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.0.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.0.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.0.post_attention_layernorm.bias": "model-00001-of-00002.safetensors",
    "model.layers.0.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
    "model.layers.0.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.0.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.0.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.0.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.1.input_layernorm.bias": "model-00001-of-00002.safetensors",
    "model.layers.1.input_layernorm.weight": "model-00001-of-00002.safetensors",
    "model.layers.1.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.1.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.1.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.1.post_attention_layernorm.bias": "model-00001-of-00002.safetensors",
    "model.layers.1.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
    "model.layers.1.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.1.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.1.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.1.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.10.input_layernorm.bias": "model-00001-of-00002.safetensors",
    "model.layers.10.input_layernorm.weight": "model-00001-of-00002.safetensors",
    "model.layers.10.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.10.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.10.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.10.post_attention_layernorm.bias": "model-00001-of-00002.safetensors",
    "model.layers.10.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
    "model.layers.10.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.10.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.10.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.10.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.11.input_layernorm.bias": "model-00001-of-00002.safetensors",
    "model.layers.11.input_layernorm.weight": "model-00001-of-00002.safetensors",
    "model.layers.11.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.11.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.11.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.11.post_attention_layernorm.bias": "model-00001-of-00002.safetensors",
    "model.layers.11.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
    "model.layers.11.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.11.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.11.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.11.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.12.input_layernorm.bias": "model-00001-of-00002.safetensors",
    "model.layers.12.input_layernorm.weight": "model-00001-of-00002.safetensors",
    "model.layers.12.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.12.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.12.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.12.post_attention_layernorm.bias": "model-00001-of-00002.safetensors",
    "model.layers.12.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
    "model.layers.12.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.12.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.12.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.12.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.13.input_layernorm.bias": "model-00001-of-00002.safetensors",
    "model.layers.13.input_layernorm.weight": "model-00001-of-00002.safetensors",
    "model.layers.13.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.13.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.13.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.13.post_attention_layernorm.bias": "model-00001-of-00002.safetensors",
    "model.layers.13.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
    "model.layers.13.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.13.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.13.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.13.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.14.input_layernorm.bias": "model-00001-of-00002.safetensors",
    "model.layers.14.input_layernorm.weight": "model-00001-of-00002.safetensors",
    "model.layers.14.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.14.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.14.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.14.post_attention_layernorm.bias": "model-00001-of-00002.safetensors",
    "model.layers.14.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
    "model.layers.14.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.14.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.14.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.14.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.15.input_layernorm.bias": "model-00001-of-00002.safetensors",
    "model.layers.15.input_layernorm.weight": "model-00001-of-00002.safetensors",
    "model.layers.15.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.15.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.15.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.15.post_attention_layernorm.bias": "model-00001-of-00002.safetensors",
    "model.layers.15.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
    "model.layers.15.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.15.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.15.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.15.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.16.input_layernorm.bias": "model-00001-of-00002.safetensors",
    "model.layers.16.input_layernorm.weight": "model-00001-of-00002.safetensors",
    "model.layers.16.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.16.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.16.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.16.post_attention_layernorm.bias": "model-00001-of-00002.safetensors",
    "model.layers.16.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
    "model.layers.16.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.16.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.16.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.16.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.17.input_layernorm.bias": "model-00001-of-00002.safetensors",
    "model.layers.17.input_layernorm.weight": "model-00001-of-00002.safetensors",
    "model.layers.17.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.17.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.17.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.17.post_attention_layernorm.bias": "model-00001-of-00002.safetensors",
    "model.layers.17.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
    "model.layers.17.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.17.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.17.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.17.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.18.input_layernorm.bias": "model-00001-of-00002.safetensors",
    "model.layers.18.input_layernorm.weight": "model-00001-of-00002.safetensors",
    "model.layers.18.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.18.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.18.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.18.post_attention_layernorm.bias": "model-00001-of-00002.safetensors",
    "model.layers.18.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
    "model.layers.18.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.18.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.18.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.18.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.19.input_layernorm.bias": "model-00001-of-00002.safetensors",
    "model.layers.19.input_layernorm.weight": "model-00001-of-00002.safetensors",
    "model.layers.19.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.19.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.19.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.19.post_attention_layernorm.bias": "model-00001-of-00002.safetensors",
    "model.layers.19.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
    "model.layers.19.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.19.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.19.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.19.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.2.input_layernorm.bias": "model-00001-of-00002.safetensors",
    "model.layers.2.input_layernorm.weight": "model-00001-of-00002.safetensors",
    "model.layers.2.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.2.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.2.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.2.post_attention_layernorm.bias": "model-00001-of-00002.safetensors",
    "model.layers.2.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
    "model.layers.2.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.2.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.2.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.2.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.20.input_layernorm.bias": "model-00001-of-00002.safetensors",
    "model.layers.20.input_layernorm.weight": "model-00001-of-00002.safetensors",
    "model.layers.20.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.20.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.20.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.20.post_attention_layernorm.bias": "model-00001-of-00002.safetensors",
    "model.layers.20.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
    "model.layers.20.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.20.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.20.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.20.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.21.input_layernorm.bias": "model-00001-of-00002.safetensors",
    "model.layers.21.input_layernorm.weight": "model-00001-of-00002.safetensors",
    "model.layers.21.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.21.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.21.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.21.post_attention_layernorm.bias": "model-00001-of-00002.safetensors",
    "model.layers.21.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
    "model.layers.21.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.21.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.21.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.21.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.22.input_layernorm.bias": "model-00001-of-00002.safetensors",
    "model.layers.22.input_layernorm.weight": "model-00001-of-00002.safetensors",
    "model.layers.22.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.22.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.22.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.22.post_attention_layernorm.bias": "model-00001-of-00002.safetensors",
    "model.layers.22.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
    "model.layers.22.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.22.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.22.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.22.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.23.input_layernorm.bias": "model-00002-of-00002.safetensors",
    "model.layers.23.input_layernorm.weight": "model-00002-of-00002.safetensors",
    "model.layers.23.mlp.down_proj.weight": "model-00002-of-00002.safetensors",
    "model.layers.23.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.23.mlp.up_proj.weight": "model-00002-of-00002.safetensors",
    "model.layers.23.post_attention_layernorm.bias": "model-00002-of-00002.safetensors",
    "model.layers.23.post_attention_layernorm.weight": "model-00002-of-00002.safetensors",
    "model.layers.23.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.23.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.23.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.23.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.24.input_layernorm.bias": "model-00002-of-00002.safetensors",
    "model.layers.24.input_layernorm.weight": "model-00002-of-00002.safetensors",
    "model.layers.24.mlp.down_proj.weight": "model-00002-of-00002.safetensors",
    "model.layers.24.mlp.gate_proj.weight": "model-00002-of-00002.safetensors",
    "model.layers.24.mlp.up_proj.weight": "model-00002-of-00002.safetensors",
    "model.layers.24.post_attention_layernorm.bias": "model-00002-of-00002.safetensors",
    "model.layers.24.post_attention_layernorm.weight": "model-00002-of-00002.safetensors",
    "model.layers.24.self_attn.k_proj.weight": "model-00002-of-00002.safetensors",
    "model.layers.24.self_attn.o_proj.weight": "model-00002-of-00002.safetensors",
    "model.layers.24.self_attn.q_proj.weight": "model-00002-of-00002.safetensors",
    "model.layers.24.self_attn.v_proj.weight": "model-00002-of-00002.safetensors",
    "model.layers.25.input_layernorm.bias": "model-00002-of-00002.safetensors",
    "model.layers.25.input_layernorm.weight": "model-00002-of-00002.safetensors",
    "model.layers.25.mlp.down_proj.weight": "model-00002-of-00002.safetensors",
    "model.layers.25.mlp.gate_proj.weight": "model-00002-of-00002.safetensors",
    "model.layers.25.mlp.up_proj.weight": "model-00002-of-00002.safetensors",
    "model.layers.25.post_attention_layernorm.bias": "model-00002-of-00002.safetensors",
    "model.layers.25.post_attention_layernorm.weight": "model-00002-of-00002.safetensors",
    "model.layers.25.self_attn.k_proj.weight": "model-00002-of-00002.safetensors",
    "model.layers.25.self_attn.o_proj.weight": "model-00002-of-00002.safetensors",
    "model.layers.25.self_attn.q_proj.weight": "model-00002-of-00002.safetensors",
    "model.layers.25.self_attn.v_proj.weight": "model-00002-of-00002.safetensors",
    "model.layers.26.input_layernorm.bias": "model-00002-of-00002.safetensors",
    "model.layers.26.input_layernorm.weight": "model-00002-of-00002.safetensors",
    "model.layers.26.mlp.down_proj.weight": "model-00002-of-00002.safetensors",
    "model.layers.26.mlp.gate_proj.weight": "model-00002-of-00002.safetensors",
    "model.layers.26.mlp.up_proj.weight": "model-00002-of-00002.safetensors",
    "model.layers.26.post_attention_layernorm.bias": "model-00002-of-00002.safetensors",
    "model.layers.26.post_attention_layernorm.weight": "model-00002-of-00002.safetensors",
    "model.layers.26.self_attn.k_proj.weight": "model-00002-of-00002.safetensors",
    "model.layers.26.self_attn.o_proj.weight": "model-00002-of-00002.safetensors",
    "model.layers.26.self_attn.q_proj.weight": "model-00002-of-00002.safetensors",
    "model.layers.26.self_attn.v_proj.weight": "model-00002-of-00002.safetensors",
    "model.layers.27.input_layernorm.bias": "model-00002-of-00002.safetensors",
    "model.layers.27.input_layernorm.weight": "model-00002-of-00002.safetensors",
    "model.layers.27.mlp.down_proj.weight": "model-00002-of-00002.safetensors",
    "model.layers.27.mlp.gate_proj.weight": "model-00002-of-00002.safetensors",
    "model.layers.27.mlp.up_proj.weight": "model-00002-of-00002.safetensors",
    "model.layers.27.post_attention_layernorm.bias": "model-00002-of-00002.safetensors",
    "model.layers.27.post_attention_layernorm.weight": "model-00002-of-00002.safetensors",
    "model.layers.27.self_attn.k_proj.weight": "model-00002-of-00002.safetensors",
    "model.layers.27.self_attn.o_proj.weight": "model-00002-of-00002.safetensors",
    "model.layers.27.self_attn.q_proj.weight": "model-00002-of-00002.safetensors",
    "model.layers.27.self_attn.v_proj.weight": "model-00002-of-00002.safetensors",
    "model.layers.28.input_layernorm.bias": "model-00002-of-00002.safetensors",
    "model.layers.28.input_layernorm.weight": "model-00002-of-00002.safetensors",
    "model.layers.28.mlp.down_proj.weight": "model-00002-of-00002.safetensors",
    "model.layers.28.mlp.gate_proj.weight": "model-00002-of-00002.safetensors",
    "model.layers.28.mlp.up_proj.weight": "model-00002-of-00002.safetensors",
    "model.layers.28.post_attention_layernorm.bias": "model-00002-of-00002.safetensors",
    "model.layers.28.post_attention_layernorm.weight": "model-00002-of-00002.safetensors",
    "model.layers.28.self_attn.k_proj.weight": "model-00002-of-00002.safetensors",
    "model.layers.28.self_attn.o_proj.weight": "model-00002-of-00002.safetensors",
    "model.layers.28.self_attn.q_proj.weight": "model-00002-of-00002.safetensors",
    "model.layers.28.self_attn.v_proj.weight": "model-00002-of-00002.safetensors",
    "model.layers.29.input_layernorm.bias": "model-00002-of-00002.safetensors",
    "model.layers.29.input_layernorm.weight": "model-00002-of-00002.safetensors",
    "model.layers.29.mlp.down_proj.weight": "model-00002-of-00002.safetensors",
    "model.layers.29.mlp.gate_proj.weight": "model-00002-of-00002.safetensors",
    "model.layers.29.mlp.up_proj.weight": "model-00002-of-00002.safetensors",
    "model.layers.29.post_attention_layernorm.bias": "model-00002-of-00002.safetensors",
    "model.layers.29.post_attention_layernorm.weight": "model-00002-of-00002.safetensors",
    "model.layers.29.self_attn.k_proj.weight": "model-00002-of-00002.safetensors",
    "model.layers.29.self_attn.o_proj.weight": "model-00002-of-00002.safetensors",
    "model.layers.29.self_attn.q_proj.weight": "model-00002-of-00002.safetensors",
    "model.layers.29.self_attn.v_proj.weight": "model-00002-of-00002.safetensors",
    "model.layers.3.input_layernorm.bias": "model-00001-of-00002.safetensors",
    "model.layers.3.input_layernorm.weight": "model-00001-of-00002.safetensors",
    "model.layers.3.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.3.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.3.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.3.post_attention_layernorm.bias": "model-00001-of-00002.safetensors",
    "model.layers.3.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
    "model.layers.3.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.3.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.3.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.3.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.30.input_layernorm.bias": "model-00002-of-00002.safetensors",
    "model.layers.30.input_layernorm.weight": "model-00002-of-00002.safetensors",
    "model.layers.30.mlp.down_proj.weight": "model-00002-of-00002.safetensors",
    "model.layers.30.mlp.gate_proj.weight": "model-00002-of-00002.safetensors",
    "model.layers.30.mlp.up_proj.weight": "model-00002-of-00002.safetensors",
    "model.layers.30.post_attention_layernorm.bias": "model-00002-of-00002.safetensors",
    "model.layers.30.post_attention_layernorm.weight": "model-00002-of-00002.safetensors",
    "model.layers.30.self_attn.k_proj.weight": "model-00002-of-00002.safetensors",
    "model.layers.30.self_attn.o_proj.weight": "model-00002-of-00002.safetensors",
    "model.layers.30.self_attn.q_proj.weight": "model-00002-of-00002.safetensors",
    "model.layers.30.self_attn.v_proj.weight": "model-00002-of-00002.safetensors",
    "model.layers.31.input_layernorm.bias": "model-00002-of-00002.safetensors",
    "model.layers.31.input_layernorm.weight": "model-00002-of-00002.safetensors",
    "model.layers.31.mlp.down_proj.weight": "model-00002-of-00002.safetensors",
    "model.layers.31.mlp.gate_proj.weight": "model-00002-of-00002.safetensors",
    "model.layers.31.mlp.up_proj.weight": "model-00002-of-00002.safetensors",
    "model.layers.31.post_attention_layernorm.bias": "model-00002-of-00002.safetensors",
    "model.layers.31.post_attention_layernorm.weight": "model-00002-of-00002.safetensors",
    "model.layers.31.self_attn.k_proj.weight": "model-00002-of-00002.safetensors",
    "model.layers.31.self_attn.o_proj.weight": "model-00002-of-00002.safetensors",
    "model.layers.31.self_attn.q_proj.weight": "model-00002-of-00002.safetensors",
    "model.layers.31.self_attn.v_proj.weight": "model-00002-of-00002.safetensors",
    "model.layers.4.input_layernorm.bias": "model-00001-of-00002.safetensors",
    "model.layers.4.input_layernorm.weight": "model-00001-of-00002.safetensors",
296
+ "model.layers.4.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
297
+ "model.layers.4.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
298
+ "model.layers.4.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
299
+ "model.layers.4.post_attention_layernorm.bias": "model-00001-of-00002.safetensors",
300
+ "model.layers.4.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
301
+ "model.layers.4.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
302
+ "model.layers.4.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
303
+ "model.layers.4.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
304
+ "model.layers.4.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
305
+ "model.layers.5.input_layernorm.bias": "model-00001-of-00002.safetensors",
306
+ "model.layers.5.input_layernorm.weight": "model-00001-of-00002.safetensors",
307
+ "model.layers.5.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
308
+ "model.layers.5.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
309
+ "model.layers.5.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
310
+ "model.layers.5.post_attention_layernorm.bias": "model-00001-of-00002.safetensors",
311
+ "model.layers.5.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
312
+ "model.layers.5.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
313
+ "model.layers.5.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
314
+ "model.layers.5.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
315
+ "model.layers.5.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
316
+ "model.layers.6.input_layernorm.bias": "model-00001-of-00002.safetensors",
317
+ "model.layers.6.input_layernorm.weight": "model-00001-of-00002.safetensors",
318
+ "model.layers.6.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
319
+ "model.layers.6.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
320
+ "model.layers.6.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
321
+ "model.layers.6.post_attention_layernorm.bias": "model-00001-of-00002.safetensors",
322
+ "model.layers.6.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
323
+ "model.layers.6.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
324
+ "model.layers.6.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
325
+ "model.layers.6.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
326
+ "model.layers.6.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
327
+ "model.layers.7.input_layernorm.bias": "model-00001-of-00002.safetensors",
328
+ "model.layers.7.input_layernorm.weight": "model-00001-of-00002.safetensors",
329
+ "model.layers.7.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
330
+ "model.layers.7.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
331
+ "model.layers.7.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
332
+ "model.layers.7.post_attention_layernorm.bias": "model-00001-of-00002.safetensors",
333
+ "model.layers.7.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
334
+ "model.layers.7.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
335
+ "model.layers.7.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
336
+ "model.layers.7.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
337
+ "model.layers.7.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
338
+ "model.layers.8.input_layernorm.bias": "model-00001-of-00002.safetensors",
339
+ "model.layers.8.input_layernorm.weight": "model-00001-of-00002.safetensors",
340
+ "model.layers.8.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
341
+ "model.layers.8.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
342
+ "model.layers.8.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
343
+ "model.layers.8.post_attention_layernorm.bias": "model-00001-of-00002.safetensors",
344
+ "model.layers.8.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
345
+ "model.layers.8.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
346
+ "model.layers.8.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
347
+ "model.layers.8.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
348
+ "model.layers.8.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
349
+ "model.layers.9.input_layernorm.bias": "model-00001-of-00002.safetensors",
350
+ "model.layers.9.input_layernorm.weight": "model-00001-of-00002.safetensors",
351
+ "model.layers.9.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
352
+ "model.layers.9.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
353
+ "model.layers.9.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
354
+ "model.layers.9.post_attention_layernorm.bias": "model-00001-of-00002.safetensors",
355
+ "model.layers.9.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
356
+ "model.layers.9.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
357
+ "model.layers.9.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
358
+ "model.layers.9.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
359
+ "model.layers.9.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
360
+ "model.norm.bias": "model-00002-of-00002.safetensors",
361
+ "model.norm.weight": "model-00002-of-00002.safetensors"
362
+ }
363
+ }
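
For reference, the `weight_map` entries above are how `transformers` and `safetensors` locate each parameter in the two-shard checkpoint. A minimal sketch of resolving one tensor by hand, assuming the `safetensors` package is installed and the shards sit next to the index file:

```python
# Minimal sketch: look up which shard holds a parameter, then load just that shard.
# Assumes model.safetensors.index.json and both shard files are in the working directory.
import json

from safetensors.torch import load_file

with open("model.safetensors.index.json") as f:
    index = json.load(f)

# weight_map maps every parameter name to the shard file that stores it.
name = "model.layers.27.mlp.gate_proj.weight"
shard_file = index["weight_map"][name]  # e.g. "model-00002-of-00002.safetensors"
tensors = load_file(shard_file)         # loads only that shard into CPU memory
print(name, tuple(tensors[name].shape))
```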
modeling_stablelm_epoch.py ADDED
@@ -0,0 +1,687 @@
+ # coding=utf-8
+ # Copyright 2023 Stability AI, EleutherAI, and The HuggingFace Inc. team. All rights reserved.
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ #     http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+ #
+ # This code is based off the following work:
+ # https://github.com/huggingface/transformers/blob/main/src/transformers/models/llama/modeling_llama.py
+ # https://github.com/huggingface/transformers/blob/main/src/transformers/models/gpt_neox/modeling_gpt_neox.py
+ """ PyTorch StableLM Epoch model. """
+ from typing import Optional, Tuple, Union
+ import math
+
+ import torch
+ import torch.utils.checkpoint
+ from torch import nn
+ from torch.nn import CrossEntropyLoss
+ from transformers.modeling_outputs import (
+     BaseModelOutputWithPast,
+     CausalLMOutputWithPast,
+ )
+ from transformers.modeling_utils import PreTrainedModel
+ from transformers.utils import logging
+ from .configuration_stablelm_epoch import StableLMEpochConfig
+
+
+ logger = logging.get_logger(__name__)
+
+
+ # Copied from transformers.models.bart.modeling_bart._make_causal_mask
+ def _make_causal_mask(
+     input_ids_shape: torch.Size,
+     dtype: torch.dtype,
+     device: torch.device,
+     past_key_values_length: int = 0,
+ ):
+     """Make causal mask used for bi-directional self-attention."""
+     batch_size, tgt_len = input_ids_shape
+     mask = torch.full((tgt_len, tgt_len), torch.finfo(torch.float16).min, device=device)
+     mask_cond = torch.arange(mask.size(-1), device=device)
+     mask.masked_fill_(mask_cond < (mask_cond + 1).view(mask.size(-1), 1), 0)
+     mask = mask.to(dtype)
+     if past_key_values_length > 0:
+         mask = torch.cat([torch.zeros(tgt_len, past_key_values_length, dtype=dtype, device=device), mask], dim=-1)
+     return mask[None, None, :, :].expand(batch_size, 1, tgt_len, tgt_len + past_key_values_length)
+
+
+ # Copied from transformers.models.bart.modeling_bart._expand_mask
+ def _expand_mask(mask: torch.Tensor, dtype: torch.dtype, tgt_len: Optional[int] = None):
+     """Expands attention_mask from `[batch_size, seq_len]` to `[batch_size, 1, tgt_seq_len, src_seq_len]`."""
+     batch_size, src_len = mask.size()
+     tgt_len = tgt_len if tgt_len is not None else src_len
+
+     expanded_mask = mask[:, None, None, :].expand(batch_size, 1, tgt_len, src_len).to(dtype)
+     inverted_mask = 1.0 - expanded_mask
+
+     return inverted_mask.masked_fill(
+         inverted_mask.to(torch.bool), torch.finfo(dtype).min
+     )
+
+
+ class RotaryEmbedding(nn.Module):
+     def __init__(
+         self,
+         dim: int,
+         max_position_embeddings: int,
+         base: int = 10_000,
+         device: Optional[torch.device] = None,
+     ):
+         super().__init__()
+
+         self.dim = dim
+         self.max_position_embeddings = max_position_embeddings
+         self.base = base
+         inv_freq = 1.0 / (self.base ** (torch.arange(0, self.dim, 2, device=device, dtype=torch.float32) / self.dim))
+         self.register_buffer("inv_freq", inv_freq, persistent=False)
+
+         # Build here to make `torch.jit.trace` work.
+         self._set_cos_sin_cache(
+             seq_len=max_position_embeddings, device=self.inv_freq.device, dtype=torch.get_default_dtype(),
+         )
+
+     def _set_cos_sin_cache(self, seq_len: int, device: torch.device, dtype: torch.dtype):
+         self.max_seq_len_cached = seq_len
+         t = torch.arange(self.max_seq_len_cached, device=device, dtype=torch.float32)
+
+         # Don't do einsum, it converts fp32 to fp16 under AMP
+         # freqs = torch.einsum("i,j->ij", t, self.inv_freq)
+         freqs = torch.outer(t, self.inv_freq)
+         # Different from paper, but it uses a different permutation in order to obtain the same calculation
+         emb = torch.cat((freqs, freqs), dim=-1)
+         self.register_buffer("cos_cached", emb.cos()[None, None, :, :].to(dtype), persistent=False)
+         self.register_buffer("sin_cached", emb.sin()[None, None, :, :].to(dtype), persistent=False)
+
+     def forward(self, x: torch.Tensor, seq_len: Optional[int] = None):
+         # x: [batch_size, num_heads, seq_len, head_size]
+         if seq_len > self.max_seq_len_cached:
+             self._set_cos_sin_cache(seq_len=seq_len, device=x.device, dtype=torch.get_default_dtype())
+         return (
+             self.cos_cached[:, :, :seq_len, ...].to(dtype=x.dtype),
+             self.sin_cached[:, :, :seq_len, ...].to(dtype=x.dtype),
+         )
+
+
+ def rotate_half(x: torch.Tensor):
+     """Rotates half the hidden dims of the input."""
+     x1, x2 = torch.chunk(x, 2, dim=-1)
+     return torch.cat((-x2, x1), dim=-1)
+
+
+ def apply_rotary_pos_emb(q, k, cos, sin, position_ids):
+     # The first two dimensions of cos and sin are always 1, so we can `squeeze` them.
+     cos = cos.squeeze(1).squeeze(0)  # [seq_len, dim]
+     sin = sin.squeeze(1).squeeze(0)  # [seq_len, dim]
+     cos = cos[position_ids].unsqueeze(1)  # [batch_size, 1, seq_len, dim]
+     sin = sin[position_ids].unsqueeze(1)  # [batch_size, 1, seq_len, dim]
+     q_embed = (q * cos) + (rotate_half(q) * sin)
+     k_embed = (k * cos) + (rotate_half(k) * sin)
+     return q_embed, k_embed
+
+
+ class MLP(nn.Module):
+     def __init__(self, config: StableLMEpochConfig):
+         super().__init__()
+         self.config = config
+         self.hidden_size = config.hidden_size
+         self.intermediate_size = config.intermediate_size
+         self.gate_proj = nn.Linear(config.hidden_size, config.intermediate_size, bias=False)
+         self.up_proj = nn.Linear(config.hidden_size, config.intermediate_size, bias=False)
+         self.down_proj = nn.Linear(config.intermediate_size, config.hidden_size, bias=False)
+         self.act_fn = nn.SiLU()
+
+     def forward(self, x: torch.Tensor) -> torch.Tensor:
+         return self.down_proj(self.act_fn(self.gate_proj(x)) * self.up_proj(x))
+
+
+ def repeat_kv(hidden_states: torch.Tensor, n_rep: int) -> torch.Tensor:
+     """
+     This is the equivalent of torch.repeat_interleave(x, dim=1, repeats=n_rep). The hidden states go from (batch,
+     num_key_value_heads, seqlen, head_dim) to (batch, num_attention_heads, seqlen, head_dim)
+     """
+     batch, num_key_value_heads, slen, head_dim = hidden_states.shape
+     if n_rep == 1:
+         return hidden_states
+     hidden_states = hidden_states[:, :, None, :, :].expand(batch, num_key_value_heads, n_rep, slen, head_dim)
+     return hidden_states.reshape(batch, num_key_value_heads * n_rep, slen, head_dim)
+
+
+ class Attention(nn.Module):
+     def __init__(self, config: StableLMEpochConfig):
+         super().__init__()
+         self.config = config
+         self.hidden_size = config.hidden_size
+         self.num_heads = config.num_attention_heads
+         self.head_dim = self.hidden_size // self.num_heads
+         self.num_key_value_heads = config.num_key_value_heads
+         self.num_key_value_groups = self.num_heads // self.num_key_value_heads
+         self.max_position_embeddings = config.max_position_embeddings
+
+         if (self.head_dim * self.num_heads) != self.hidden_size:
+             raise ValueError(
+                 f"hidden_size must be divisible by num_heads (got `hidden_size`: {self.hidden_size}"
+                 f" and `num_heads`: {self.num_heads})."
+             )
+         self.q_proj = nn.Linear(self.hidden_size, self.num_heads * self.head_dim, bias=False)
+         self.k_proj = nn.Linear(self.hidden_size, self.num_key_value_heads * self.head_dim, bias=False)
+         self.v_proj = nn.Linear(self.hidden_size, self.num_key_value_heads * self.head_dim, bias=False)
+         self.o_proj = nn.Linear(self.hidden_size, self.hidden_size, bias=False)
+
+         self._init_rope()
+
+     def _init_rope(self):
+         self.rotary_ndims = int(self.head_dim * self.config.rope_pct)
+         self.rotary_emb = RotaryEmbedding(
+             self.rotary_ndims,
+             max_position_embeddings=self.config.max_position_embeddings,
+             base=self.config.rope_theta,
+         )
+
+     def forward(
+         self,
+         hidden_states: torch.FloatTensor,
+         attention_mask: torch.FloatTensor,
+         position_ids: torch.LongTensor,
+         past_key_value: Optional[Tuple[torch.Tensor]] = None,
+         output_attentions: Optional[bool] = False,
+         use_cache: Optional[bool] = False,
+     ) -> Tuple[torch.Tensor, Optional[torch.Tensor], Optional[Tuple[torch.Tensor]]]:
+         bsz, q_len, _ = hidden_states.size()
+
+         query_states = self.q_proj(hidden_states)
+         key_states = self.k_proj(hidden_states)
+         value_states = self.v_proj(hidden_states)
+
+         query_states = query_states.view(bsz, q_len, self.num_heads, self.head_dim).transpose(1, 2)
+         key_states = key_states.view(bsz, q_len, self.num_key_value_heads, self.head_dim).transpose(1, 2)
+         value_states = value_states.view(bsz, q_len, self.num_key_value_heads, self.head_dim).transpose(1, 2)
+
+         query_rot = query_states[..., : self.rotary_ndims]
+         query_pass = query_states[..., self.rotary_ndims :]
+         key_rot = key_states[..., : self.rotary_ndims]
+         key_pass = key_states[..., self.rotary_ndims :]
+
+         kv_seq_len = key_states.shape[-2]
+         if past_key_value is not None:
+             kv_seq_len += past_key_value[0].shape[-2]
+         cos, sin = self.rotary_emb(value_states, seq_len=kv_seq_len)
+         query_states, key_states = apply_rotary_pos_emb(query_rot, key_rot, cos, sin, position_ids)
+
+         # [batch_size, num_heads, seq_len, head_dim]
+         query_states = torch.cat((query_states, query_pass), dim=-1)
+         key_states = torch.cat((key_states, key_pass), dim=-1)
+
+         if past_key_value is not None:
+             # Reuse k, v, self_attention
+             key_states = torch.cat((past_key_value[0], key_states), dim=2)
+             value_states = torch.cat((past_key_value[1], value_states), dim=2)
+
+         past_key_value = (key_states, value_states) if use_cache else None
+
+         # Repeat k/v heads if n_kv_heads < n_heads
+         key_states = repeat_kv(key_states, self.num_key_value_groups)
+         value_states = repeat_kv(value_states, self.num_key_value_groups)
+
+         attn_weights = torch.matmul(query_states, key_states.transpose(2, 3)) / math.sqrt(self.head_dim)
+
+         if attn_weights.size() != (bsz, self.num_heads, q_len, kv_seq_len):
+             raise ValueError(
+                 f"Attention weights should be of size {(bsz, self.num_heads, q_len, kv_seq_len)}, but is"
+                 f" {attn_weights.size()}"
+             )
+
+         if attention_mask is not None:
+             if attention_mask.size() != (bsz, 1, q_len, kv_seq_len):
+                 raise ValueError(
+                     f"Attention mask should be of size {(bsz, 1, q_len, kv_seq_len)}, but is {attention_mask.size()}"
+                 )
+             attn_weights = attn_weights + attention_mask
+
+         # Upcast attention to fp32
+         attn_weights = nn.functional.softmax(attn_weights, dim=-1, dtype=torch.float32).to(query_states.dtype)
+         attn_output = torch.matmul(attn_weights, value_states)
+
+         if attn_output.size() != (bsz, self.num_heads, q_len, self.head_dim):
+             raise ValueError(
+                 f"`attn_output` should be of size {(bsz, self.num_heads, q_len, self.head_dim)}, but is"
+                 f" {attn_output.size()}"
+             )
+
+         # Merge heads
+         attn_output = attn_output.transpose(1, 2).contiguous()
+         attn_output = attn_output.reshape(bsz, q_len, self.hidden_size)
+
+         # Final linear projection
+         attn_output = self.o_proj(attn_output)
+
+         if not output_attentions:
+             attn_weights = None
+
+         return attn_output, attn_weights, past_key_value
+
+
+ class DecoderLayer(nn.Module):
+     def __init__(self, config: StableLMEpochConfig):
+         super().__init__()
+         self.self_attn = Attention(config)
+         self.mlp = MLP(config)
+         self.input_layernorm = nn.LayerNorm(config.hidden_size, eps=config.norm_eps)
+         self.post_attention_layernorm = nn.LayerNorm(config.hidden_size, eps=config.norm_eps)
+
+     def forward(
+         self,
+         hidden_states: Optional[torch.FloatTensor],
+         attention_mask: Optional[torch.FloatTensor] = None,
+         position_ids: Optional[torch.LongTensor] = None,
+         past_key_value: Optional[Tuple[torch.Tensor]] = None,
+         output_attentions: Optional[bool] = False,
+         use_cache: Optional[bool] = False,
+     ) -> Union[Tuple[torch.Tensor], Optional[Tuple[torch.Tensor, Tuple[torch.FloatTensor, ...]]]]:
+         residual = hidden_states
+
+         hidden_states = self.input_layernorm(hidden_states)
+
+         # Self Attention
+         hidden_states, self_attn_weights, present_key_value = self.self_attn(
+             hidden_states=hidden_states,
+             attention_mask=attention_mask,
+             position_ids=position_ids,
+             past_key_value=past_key_value,
+             output_attentions=output_attentions,
+             use_cache=use_cache,
+         )
+         hidden_states = residual + hidden_states
+
+         # Fully Connected
+         residual = hidden_states
+         hidden_states = self.post_attention_layernorm(hidden_states)
+         hidden_states = self.mlp(hidden_states)
+         hidden_states = residual + hidden_states
+
+         outputs = (hidden_states,)
+
+         if output_attentions:
+             outputs += (self_attn_weights,)
+
+         if use_cache:
+             outputs += (present_key_value,)
+
+         return outputs
+
+
+ class StableLMEpochPreTrainedModel(PreTrainedModel):
+     """An abstract class to handle weights initialization and a simple interface
+     for downloading and loading pretrained models.
+     """
+
+     config_class = StableLMEpochConfig
+     base_model_prefix = "transformer"
+     supports_gradient_checkpointing = True
+     _no_split_modules = ["DecoderLayer"]
+     _skip_keys_device_placement = "past_key_values"
+
+     def _init_weights(self, module: nn.Module):
+         """Initialize the weights"""
+         if isinstance(module, nn.Linear):
+             module.weight.data.normal_(mean=0.0, std=self.config.initializer_range)
+             if module.bias is not None:
+                 module.bias.data.zero_()
+         elif isinstance(module, nn.Embedding):
+             module.weight.data.normal_(mean=0.0, std=self.config.initializer_range)
+             if module.padding_idx is not None:
+                 module.weight.data[module.padding_idx].zero_()
+         elif isinstance(module, nn.LayerNorm):
+             module.bias.data.zero_()
+             module.weight.data.fill_(1.0)
+
+     def _set_gradient_checkpointing(self, module: nn.Module, value=False):
+         if isinstance(module, StableLMEpochModel):
+             module.gradient_checkpointing = value
+
+
+ class StableLMEpochModel(StableLMEpochPreTrainedModel):
+     def __init__(self, config: StableLMEpochConfig):
+         super().__init__(config)
+         self.embed_tokens = nn.Embedding(config.vocab_size, config.hidden_size, config.pad_token_id)
+         self.layers = nn.ModuleList([DecoderLayer(config) for _ in range(config.num_hidden_layers)])
+         self.norm = nn.LayerNorm(config.hidden_size, eps=config.norm_eps)
+
+         self.gradient_checkpointing = False
+         # Initialize weights and apply final processing
+         self.post_init()
+
+     def get_input_embeddings(self):
+         return self.embed_tokens
+
+     def set_input_embeddings(self, value: nn.Module):
+         self.embed_tokens = value
+
+     # Copied from transformers.models.bart.modeling_bart.BartDecoder._prepare_decoder_attention_mask
+     def _prepare_decoder_attention_mask(
+         self,
+         attention_mask: torch.Tensor,
+         input_shape: torch.Size,
+         inputs_embeds: torch.Tensor,
+         past_key_values_length: int,
+     ):
+         # Create causal mask
+         # [batch_size, seq_len] -> [batch_size, 1, tgt_seq_len, src_seq_len]
+         combined_attention_mask = None
+         if input_shape[-1] > 1:
+             combined_attention_mask = _make_causal_mask(
+                 input_shape,
+                 inputs_embeds.dtype,
+                 device=inputs_embeds.device,
+                 past_key_values_length=past_key_values_length,
+             )
+
+         if attention_mask is not None:
+             # [batch_size, seq_len] -> [batch_size, 1, tgt_seq_len, src_seq_len]
+             expanded_attn_mask = _expand_mask(
+                 attention_mask, inputs_embeds.dtype, tgt_len=input_shape[-1]
+             ).to(inputs_embeds.device)
+             combined_attention_mask = expanded_attn_mask if combined_attention_mask is None else expanded_attn_mask + combined_attention_mask
+
+         return combined_attention_mask
+
+     def forward(
+         self,
+         input_ids: Optional[torch.LongTensor] = None,
+         attention_mask: Optional[torch.FloatTensor] = None,
+         position_ids: Optional[torch.LongTensor] = None,
+         past_key_values: Optional[Tuple[Tuple[torch.FloatTensor]]] = None,
+         inputs_embeds: Optional[torch.FloatTensor] = None,
+         use_cache: Optional[bool] = None,
+         output_attentions: Optional[bool] = None,
+         output_hidden_states: Optional[bool] = None,
+         return_dict: Optional[bool] = None,
+     ) -> Union[Tuple, BaseModelOutputWithPast]:
+         output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions
+         output_hidden_states = output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states
+         use_cache = use_cache if use_cache is not None else self.config.use_cache
+
+         return_dict = return_dict if return_dict is not None else self.config.use_return_dict
+
+         # Retrieve input_ids and inputs_embeds
+         if input_ids is not None and inputs_embeds is not None:
+             raise ValueError(
+                 "You cannot specify both decoder_input_ids and decoder_inputs_embeds at the same time"
+             )
+         elif input_ids is not None:
+             batch_size, seq_length = input_ids.shape
+         elif inputs_embeds is not None:
+             batch_size, seq_length, _ = inputs_embeds.shape
+         else:
+             raise ValueError(
+                 "You have to specify either decoder_input_ids or decoder_inputs_embeds"
+             )
+
+         seq_length_with_past = seq_length
+         past_key_values_length = 0
+
+         if past_key_values is not None:
+             past_key_values_length = past_key_values[0][0].shape[2]
+             seq_length_with_past = seq_length_with_past + past_key_values_length
+
+         if position_ids is None:
+             device = input_ids.device if input_ids is not None else inputs_embeds.device
+             position_ids = torch.arange(
+                 past_key_values_length,
+                 seq_length + past_key_values_length,
+                 dtype=torch.long,
+                 device=device,
+             )
+             position_ids = position_ids.unsqueeze(0).view(-1, seq_length)
+         else:
+             position_ids = position_ids.view(-1, seq_length).long()
+
+         if inputs_embeds is None:
+             inputs_embeds = self.embed_tokens(input_ids)
+         # Embed positions
+         if attention_mask is None:
+             attention_mask = torch.ones(
+                 (batch_size, seq_length_with_past),
+                 dtype=torch.bool,
+                 device=inputs_embeds.device,
+             )
+         attention_mask = self._prepare_decoder_attention_mask(
+             attention_mask,
+             (batch_size, seq_length),
+             inputs_embeds,
+             past_key_values_length,
+         )
+
+         hidden_states = inputs_embeds
+
+         if self.gradient_checkpointing and self.training:
+             if use_cache:
+                 logger.warning(
+                     "`use_cache=True` is incompatible with gradient checkpointing. Setting `use_cache=False`..."
+                 )
+                 use_cache = False
+
+         # Decoder layers
+         all_hidden_states = () if output_hidden_states else None
+         all_self_attns = () if output_attentions else None
+         next_decoder_cache = () if use_cache else None
+
+         for idx, decoder_layer in enumerate(self.layers):
+             if output_hidden_states:
+                 all_hidden_states += (hidden_states,)
+
+             past_key_value = (
+                 past_key_values[idx] if past_key_values is not None else None
+             )
+
+             if self.gradient_checkpointing and self.training:
+
+                 def create_custom_forward(module):
+                     def custom_forward(*inputs):
+                         # None for past_key_value
+                         return module(*inputs, past_key_value, output_attentions)
+
+                     return custom_forward
+
+                 layer_outputs = torch.utils.checkpoint.checkpoint(
+                     create_custom_forward(decoder_layer),
+                     hidden_states,
+                     attention_mask,
+                     position_ids,
+                 )
+             else:
+                 layer_outputs = decoder_layer(
+                     hidden_states,
+                     attention_mask=attention_mask,
+                     position_ids=position_ids,
+                     past_key_value=past_key_value,
+                     output_attentions=output_attentions,
+                     use_cache=use_cache,
+                 )
+
+             hidden_states = layer_outputs[0]
+
+             if use_cache:
+                 next_decoder_cache += (layer_outputs[2 if output_attentions else 1],)
+
+             if output_attentions:
+                 all_self_attns += (layer_outputs[1],)
+
+         hidden_states = self.norm(hidden_states)
+
+         # Add hidden states from the last decoder layer
+         if output_hidden_states:
+             all_hidden_states += (hidden_states,)
+
+         next_cache = next_decoder_cache if use_cache else None
+         if not return_dict:
+             return tuple(
+                 v
+                 for v in [hidden_states, next_cache, all_hidden_states, all_self_attns]
+                 if v is not None
+             )
+         return BaseModelOutputWithPast(
+             last_hidden_state=hidden_states,
+             past_key_values=next_cache,
+             hidden_states=all_hidden_states,
+             attentions=all_self_attns,
+         )
+
+
+ class StableLMEpochForCausalLM(StableLMEpochPreTrainedModel):
+     _tied_weights_keys = ["lm_head.weight"]
+
+     def __init__(self, config: StableLMEpochConfig):
+         super().__init__(config)
+
+         self.model = StableLMEpochModel(config)
+         self.lm_head = nn.Linear(config.hidden_size, config.vocab_size, bias=False)
+
+         # Initialize weights and apply final processing
+         self.post_init()
+
+     def get_input_embeddings(self):
+         return self.model.embed_tokens
+
+     def set_input_embeddings(self, value):
+         self.model.embed_tokens = value
+
+     def get_output_embeddings(self):
+         return self.lm_head
+
+     def set_output_embeddings(self, new_embeddings: nn.Module):
+         self.lm_head = new_embeddings
+
+     def get_decoder(self):
+         return self.transformer
+
+     def set_decoder(self, decoder):
+         self.transformer = decoder
+
+     def forward(
+         self,
+         input_ids: Optional[torch.LongTensor] = None,
+         attention_mask: Optional[torch.FloatTensor] = None,
+         position_ids: Optional[torch.LongTensor] = None,
+         past_key_values: Optional[Tuple[Tuple[torch.FloatTensor]]] = None,
+         inputs_embeds: Optional[torch.FloatTensor] = None,
+         labels: Optional[torch.LongTensor] = None,
+         use_cache: Optional[bool] = None,
+         output_attentions: Optional[bool] = None,
+         output_hidden_states: Optional[bool] = None,
+         return_dict: Optional[bool] = None,
+     ) -> Union[Tuple, CausalLMOutputWithPast]:
+         output_attentions = (
+             output_attentions
+             if output_attentions is not None
+             else self.config.output_attentions
+         )
+         output_hidden_states = (
+             output_hidden_states
+             if output_hidden_states is not None
+             else self.config.output_hidden_states
+         )
+         return_dict = (
+             return_dict if return_dict is not None else self.config.use_return_dict
+         )
+
+         # decoder outputs consists of (dec_features, layer_state, dec_hidden, dec_attn)
+         outputs = self.model(
+             input_ids,
+             attention_mask=attention_mask,
+             position_ids=position_ids,
+             past_key_values=past_key_values,
+             inputs_embeds=inputs_embeds,
+             use_cache=use_cache,
+             output_attentions=output_attentions,
+             output_hidden_states=output_hidden_states,
+             return_dict=return_dict,
+         )
+
+         hidden_states = outputs[0]
+         logits = self.lm_head(hidden_states).float()
+
+         loss = None
+         if labels is not None:
+             # Shift so that tokens < n predict n
+             shift_logits = logits[..., :-1, :].contiguous()
+             shift_labels = labels[..., 1:].contiguous()
+             # Flatten the tokens
+             loss_fct = CrossEntropyLoss()
+             shift_logits = shift_logits.view(-1, self.config.vocab_size)
+             shift_labels = shift_labels.view(-1)
+             # Enable model parallelism
+             shift_labels = shift_labels.to(shift_logits.device)
+             loss = loss_fct(shift_logits, shift_labels)
+
+         if not return_dict:
+             output = (logits,) + outputs[1:]
+             return (loss,) + output if loss is not None else output
+
+         return CausalLMOutputWithPast(
+             loss=loss,
+             logits=logits,
+             past_key_values=outputs.past_key_values,
+             hidden_states=outputs.hidden_states,
+             attentions=outputs.attentions,
+         )
+
+     def prepare_inputs_for_generation(
+         self,
+         input_ids,
+         past_key_values: Optional[torch.Tensor] = None,
+         attention_mask: Optional[torch.Tensor] = None,
+         inputs_embeds: Optional[torch.Tensor] = None,
+         **kwargs,
+     ):
+         # Trim decoder_input_ids if past is used
+         if past_key_values and past_key_values[0] is not None:
+             input_ids = input_ids[:, -1:]
+
+         position_ids = kwargs.get("position_ids", None)
+         if attention_mask is not None and position_ids is None:
+             # Create position_ids on the fly for batch generation
+             position_ids = attention_mask.long().cumsum(-1) - 1
+             position_ids.masked_fill_(attention_mask == 0, 1)
+             if past_key_values:
+                 position_ids = position_ids[:, -1].unsqueeze(-1)
+
+         # If `inputs_embeds` are passed, we only want to use them in the 1st generation step
+         if inputs_embeds is not None and past_key_values is None:
+             model_inputs = {"inputs_embeds": inputs_embeds}
+         else:
+             model_inputs = {"input_ids": input_ids}
+
+         model_inputs.update(
+             {
+                 "attention_mask": attention_mask,
+                 "past_key_values": past_key_values,
+                 "use_cache": kwargs.get("use_cache"),
+                 "position_ids": position_ids,
+             }
+         )
+         return model_inputs
+
+     @staticmethod
+     def _reorder_cache(past_key_values, beam_idx):
+         reordered_past = ()
+         for layer_past in past_key_values:
+             reordered_past += (
+                 tuple(
+                     past_state.index_select(0, beam_idx.to(past_state.device))
+                     for past_state in layer_past
+                 ),
+             )
+         return reordered_past
+
+
+ StableLMEpochConfig.register_for_auto_class()
+ StableLMEpochForCausalLM.register_for_auto_class("AutoModelForCausalLM")
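
Note the final two `register_for_auto_class` calls: they are what lets this custom architecture load through the standard `Auto*` API once `trust_remote_code=True` is passed. A usage sketch (the repo id below is a placeholder, not the actual repository name):

```python
# Sketch: loading a checkpoint that ships this custom modeling file.
# trust_remote_code=True is required so transformers imports
# modeling_stablelm_epoch.py from the repo instead of a built-in class.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "some-org/some-model"  # placeholder; substitute the real repo id

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,
)
```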
special_tokens_map.json ADDED
@@ -0,0 +1,5 @@
+ {
+   "bos_token": "<|endoftext|>",
+   "eos_token": "<|endoftext|>",
+   "unk_token": "<|endoftext|>"
+ }
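
All three special tokens map to the same `<|endoftext|>` token, as is conventional for GPT-NeoX tokenizers. A quick sanity check (placeholder repo id again):

```python
# Sketch: bos/eos/unk all resolve to the same token and id.
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("some-org/some-model")  # placeholder repo id
assert tok.bos_token == tok.eos_token == tok.unk_token == "<|endoftext|>"
print(tok.eos_token_id)  # one id serves as BOS, EOS, and UNK
```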
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer_config.json ADDED
@@ -0,0 +1,214 @@
+ {
+   "add_prefix_space": false,
+   "added_tokens_decoder": {
+     "0": {
+       "content": "<|endoftext|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "1": {
+       "content": "<|padding|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "50254": {
+       "content": " ",
+       "lstrip": false,
+       "normalized": true,
+       "rstrip": false,
+       "single_word": false,
+       "special": false
+     },
+     "50255": {
+       "content": " ",
+       "lstrip": false,
+       "normalized": true,
+       "rstrip": false,
+       "single_word": false,
+       "special": false
+     },
+     "50256": {
+       "content": " ",
+       "lstrip": false,
+       "normalized": true,
+       "rstrip": false,
+       "single_word": false,
+       "special": false
+     },
+     "50257": {
+       "content": " ",
+       "lstrip": false,
+       "normalized": true,
+       "rstrip": false,
+       "single_word": false,
+       "special": false
+     },
+     "50258": {
+       "content": " ",
+       "lstrip": false,
+       "normalized": true,
+       "rstrip": false,
+       "single_word": false,
+       "special": false
+     },
+     "50259": {
+       "content": " ",
+       "lstrip": false,
+       "normalized": true,
+       "rstrip": false,
+       "single_word": false,
+       "special": false
+     },
+     "50260": {
+       "content": " ",
+       "lstrip": false,
+       "normalized": true,
+       "rstrip": false,
+       "single_word": false,
+       "special": false
+     },
+     "50261": {
+       "content": " ",
+       "lstrip": false,
+       "normalized": true,
+       "rstrip": false,
+       "single_word": false,
+       "special": false
+     },
+     "50262": {
+       "content": " ",
+       "lstrip": false,
+       "normalized": true,
+       "rstrip": false,
+       "single_word": false,
+       "special": false
+     },
+     "50263": {
+       "content": " ",
+       "lstrip": false,
+       "normalized": true,
+       "rstrip": false,
+       "single_word": false,
+       "special": false
+     },
+     "50264": {
+       "content": " ",
+       "lstrip": false,
+       "normalized": true,
+       "rstrip": false,
+       "single_word": false,
+       "special": false
+     },
+     "50265": {
+       "content": " ",
+       "lstrip": false,
+       "normalized": true,
+       "rstrip": false,
+       "single_word": false,
+       "special": false
+     },
+     "50266": {
+       "content": " ",
+       "lstrip": false,
+       "normalized": true,
+       "rstrip": false,
+       "single_word": false,
+       "special": false
+     },
+     "50267": {
+       "content": " ",
+       "lstrip": false,
+       "normalized": true,
+       "rstrip": false,
+       "single_word": false,
+       "special": false
+     },
+     "50268": {
+       "content": " ",
+       "lstrip": false,
+       "normalized": true,
+       "rstrip": false,
+       "single_word": false,
+       "special": false
+     },
+     "50269": {
+       "content": " ",
+       "lstrip": false,
+       "normalized": true,
+       "rstrip": false,
+       "single_word": false,
+       "special": false
+     },
+     "50270": {
+       "content": " ",
+       "lstrip": false,
+       "normalized": true,
+       "rstrip": false,
+       "single_word": false,
+       "special": false
+     },
+     "50271": {
+       "content": " ",
+       "lstrip": false,
+       "normalized": true,
+       "rstrip": false,
+       "single_word": false,
+       "special": false
+     },
+     "50272": {
+       "content": " ",
+       "lstrip": false,
+       "normalized": true,
+       "rstrip": false,
+       "single_word": false,
+       "special": false
+     },
+     "50273": {
+       "content": " ",
+       "lstrip": false,
+       "normalized": true,
+       "rstrip": false,
+       "single_word": false,
+       "special": false
+     },
+     "50274": {
+       "content": " ",
+       "lstrip": false,
+       "normalized": true,
+       "rstrip": false,
+       "single_word": false,
+       "special": false
+     },
+     "50275": {
+       "content": " ",
+       "lstrip": false,
+       "normalized": true,
+       "rstrip": false,
+       "single_word": false,
+       "special": false
+     },
+     "50276": {
+       "content": " ",
+       "lstrip": false,
+       "normalized": true,
+       "rstrip": false,
+       "single_word": false,
+       "special": false
+     }
+   },
+   "additional_special_tokens": [],
+   "bos_token": "<|endoftext|>",
+   "clean_up_tokenization_spaces": true,
+   "eos_token": "<|endoftext|>",
+   "model_max_length": 1000000000000000019884624838656,
+   "tokenizer_class": "GPTNeoXTokenizer",
+   "unk_token": "<|endoftext|>",
+   "use_default_system_prompt": false,
+   "chat_template": "{% for message in messages %}\n{% if message['role'] == 'user' %}\n{% if loop.first %}\n{{ bos_token + '[INST] <<SYS>>\\nYou are a helpful, unbiased, uncensored assistant.\\n<</SYS>> ' + message['content'] + ' [/INST]' -}}\n{% else %}\n{{ bos_token + '[INST] ' + message['content'] + ' [/INST]' -}}\n{% endif %}\n{% elif message['role'] == 'system' %}\n{{ bos_token + '[INST] <<SYS>>\\n' + message['content'] + '\\n<</SYS>>\\n' }}\n{% elif message['role'] == 'assistant' %}\n{{ ' ' + message['content'] + ' ' + eos_token -}}\n{% endif %}\n{% endfor %}"
+ }
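
The `chat_template` entry encodes the llama-2 chat format described in the README, so `tokenizer.apply_chat_template` can render prompts without hand-formatting. A sketch, again with a placeholder repo id:

```python
# Sketch: rendering the llama-2 style prompt defined by the chat_template above.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("some-org/some-model")  # placeholder

messages = [
    {"role": "user", "content": "What is 2 + 2?"},
]
prompt = tokenizer.apply_chat_template(messages, tokenize=False)
print(prompt)
# Roughly, per the template: <|endoftext|>[INST] <<SYS>>
# You are a helpful, unbiased, uncensored assistant.
# <</SYS>> What is 2 + 2? [/INST]
```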