LoneStriker committed on
Commit c159f91
1 Parent(s): c09696a

ExLlamaV2 quant of prometheus-13b-v1.0-3.0bpw-h6-exl2

README.md ADDED
---
tags:
- text2text-generation
datasets:
- kaist-ai/Feedback-Collection
license: apache-2.0
language:
- en
pipeline_tag: text2text-generation
library_name: transformers
metrics:
- pearsonr
- spearmanr
- accuracy
---
## Links for Reference

- **Homepage:** https://github.com/kaistAI/Prometheus
- **Repository:** https://github.com/kaistAI/Prometheus
- **Paper:** https://arxiv.org/abs/2310.08491
- **Point of Contact:** seungone@kaist.ac.kr

# TL;DR
Prometheus is an alternative to GPT-4 for fine-grained evaluation of an underlying LLM, and can also serve as a reward model for Reinforcement Learning from Human Feedback (RLHF).
![plot](./finegrained_eval.JPG)


Prometheus is a language model that uses [Llama-2-Chat](https://huggingface.co/meta-llama/Llama-2-13b-chat-hf) as a base model and is fine-tuned on 100K feedback instances from the [Feedback Collection](https://huggingface.co/datasets/kaist-ai/Feedback-Collection).
Because it was fine-tuned on a large amount of feedback, it is specialized at evaluating long-form responses: it outperforms GPT-3.5-Turbo and Llama-2-Chat 70B, and is on par with GPT-4 on various benchmarks.
Most importantly, this was possible because we append two reference materials to the input (a reference answer and a customized score rubric).
Prometheus is a cheap and powerful alternative to GPT-4 evaluation, which one can use to evaluate LLMs against customized criteria (e.g., child readability, cultural sensitivity, creativity).
It can also be used as a reward model for Reinforcement Learning from Human Feedback (RLHF).

# Model Details

## Model Description

- **Model type:** Language model
- **Language(s) (NLP):** English
- **License:** Apache 2.0
- **Related Models:** [All Prometheus Checkpoints](https://huggingface.co/models?search=kaist-ai/Prometheus)
- **Resources for more information:**
  - [Research paper](https://arxiv.org/abs/2310.08491)
  - [GitHub Repo](https://github.com/kaistAI/Prometheus)


Prometheus is trained in two sizes (7B and 13B).
You can find the 7B model on [this page](https://huggingface.co/kaist-ai/prometheus-7b-v1.0).
Also check out our dataset on [this page](https://huggingface.co/datasets/kaist-ai/Feedback-Collection).

## Prompt Format
Prometheus requires four components in the input: an instruction, a response to evaluate, a score rubric, and a reference answer. You can refer to the prompt format below.
You should fill in the instruction, response, reference answer, criteria description, and a score description for each score from 1 to 5.
```
###Task Description:
An instruction (might include an Input inside it), a response to evaluate, a reference answer that gets a score of 5, and a score rubric representing a evaluation criteria are given.
1. Write a detailed feedback that assess the quality of the response strictly based on the given score rubric, not evaluating in general.
2. After writing a feedback, write a score that is an integer between 1 and 5. You should refer to the score rubric.
3. The output format should look as follows: \"Feedback: (write a feedback for criteria) [RESULT] (an integer number between 1 and 5)\"
4. Please do not generate any other opening, closing, and explanations.

###The instruction to evaluate:
{instruction}

###Response to evaluate:
{response}

###Reference Answer (Score 5):
{reference_answer}

###Score Rubrics:
[{criteria_description}]
Score 1: {score1_description}
Score 2: {score2_description}
Score 3: {score3_description}
Score 4: {score4_description}
Score 5: {score5_description}

###Feedback:
```
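To make the format concrete, the template above can be filled with Python's built-in `str.format`. This is an illustrative sketch only: the instruction, response, and rubric text below are invented for the example, not taken from the Feedback Collection.

```python
# Fill the Prometheus prompt template. The placeholder names come from the
# template above; all filled-in values below are invented example text.
PROMPT_TEMPLATE = """###Task Description:
An instruction (might include an Input inside it), a response to evaluate, a reference answer that gets a score of 5, and a score rubric representing a evaluation criteria are given.
1. Write a detailed feedback that assess the quality of the response strictly based on the given score rubric, not evaluating in general.
2. After writing a feedback, write a score that is an integer between 1 and 5. You should refer to the score rubric.
3. The output format should look as follows: \"Feedback: (write a feedback for criteria) [RESULT] (an integer number between 1 and 5)\"
4. Please do not generate any other opening, closing, and explanations.

###The instruction to evaluate:
{instruction}

###Response to evaluate:
{response}

###Reference Answer (Score 5):
{reference_answer}

###Score Rubrics:
[{criteria_description}]
Score 1: {score1_description}
Score 2: {score2_description}
Score 3: {score3_description}
Score 4: {score4_description}
Score 5: {score5_description}

###Feedback:"""

prompt = PROMPT_TEMPLATE.format(
    instruction="Summarize the plot of 'The Little Prince' for a 7-year-old.",
    response="The Little Prince is about a pilot who meets a young prince in the desert.",
    reference_answer="A young prince travels between planets and learns what matters most is friendship.",
    criteria_description="Is the response appropriate for a young child?",
    score1_description="The response is far too complex for a child.",
    score2_description="The response is mostly too complex for a child.",
    score3_description="The response is partially child-appropriate.",
    score4_description="The response is mostly child-appropriate.",
    score5_description="The response is fully child-appropriate and engaging.",
)
print(prompt)
```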

After this, you should apply the conversation template of Llama-2-Chat (not applying it might lead to unexpected behavior).
You can find the conversation class at this [link](https://github.com/lm-sys/FastChat/blob/main/fastchat/conversation.py).
```
from fastchat.conversation import get_conv_template

# Wrap the filled-in evaluation prompt in the Llama-2-Chat format.
conv = get_conv_template("llama-2")
conv.set_system_message("You are a fair evaluator language model.")
conv.append_message(conv.roles[0], dialogs['instruction'])
conv.append_message(conv.roles[1], None)
prompt = conv.get_prompt()

x = tokenizer(prompt, truncation=False)
```

As a result, a feedback and a score decision will be generated, separated by the phrase `[RESULT]`.
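Splitting the decoded output on that separator can be done with a few lines of string handling; this `parse_output` helper is a minimal sketch of our own, not part of the Prometheus codebase, and the sample output string is invented.

```python
import re

# Split a decoded Prometheus generation into (feedback_text, integer_score),
# using the "[RESULT]" separator described above.
def parse_output(decoded: str):
    feedback, _, result = decoded.partition("[RESULT]")
    match = re.search(r"[1-5]", result)
    score = int(match.group()) if match else None
    return feedback.replace("Feedback:", "", 1).strip(), score

fb, score = parse_output(
    "Feedback: The response acknowledges the user's feelings and offers advice. [RESULT] 4"
)
```

If the model follows the instructed output format, `score` is an integer between 1 and 5 and `fb` is the free-text feedback.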

## License
The Feedback Collection dataset and Prometheus are subject to OpenAI's Terms of Use for the generated data. If you suspect any violations, please reach out to us.

# Usage

Find below some example scripts on how to use the model in `transformers`:

## Using the PyTorch model

### Running the model on a CPU

<details>
<summary> Click to expand </summary>

```python
from transformers import AutoTokenizer, LlamaForCausalLM

tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-chat-hf")
model = LlamaForCausalLM.from_pretrained("kaist-ai/Prometheus-13b-v1.0", device_map="auto")

input_text = "###Task Description: An instruction (might include an Input inside it), a response to evaluate, a reference answer that gets a score of 5, and a score rubric representing a evaluation criteria are given. 1. Write a detailed feedback that assess the quality of the response strictly based on the given score rubric, not evaluating in general. 2. After writing a feedback, write a score that is an integer between 1 and 5. You should refer to the score rubric. 3. The output format should look as follows: \"Feedback: (write a feedback for criteria) [RESULT] (an integer number between 1 and 5)\" 4. Please do not generate any other opening, closing, and explanations. ###The instruction to evaluate: You are a well-known psychiatrist who has a reputation for being empathetic and understanding. A client comes to you saying they've had a really hard day at work. They describe their boss as being overly critical and not listening to their ideas, which has left them feeling frustrated and undervalued. They also mention feeling overwhelmed with the workload and a growing sense of loneliness as they are new to the city. How do you respond to them to make them feel heard and understood, as well as offering helpful advice? ###Response to evaluate: It's indeed challenging to deal with a difficult boss and to carry a large workload, especially when you are new to a city and haven't established a supportive network. I would suggest taking a step back to think about your situation and perhaps speak to your boss regarding the issues you're facing. On the other hand, dealing with a large workload can be managed by prioritizing tasks, and maybe you can discuss it with your superiors too. In regards to feeling lonely, you might want to explore activities or groups that align with your interests to meet new people. And also, remember to take time for yourself to relax and destress. I'm here to assist you as we navigate through these issues. ###Reference Answer (Score 5): I hear that you're dealing with quite a few challenging situations at once. It must be tough to have your ideas dismissed by your boss and to feel undervalued in your work environment. It's understandable that such experiences can lead to frustration and a feeling of being overwhelmed. It's important to acknowledge these feelings and not to dismiss them. Feeling heard and validated in our experiences can often be the first step towards resolving the challenges we face. So, I want you to know that your feelings are completely valid and it's okay to be frustrated and overwhelmed. In terms of dealing with your boss, have you considered seeking a private conversation with them to express your feelings? It's often beneficial to communicate what you're experiencing, using \"I\" statements to express how you feel when your ideas are not taken into consideration. This can lead to mutual understanding and possibly a change in behavior. About the workload, it might help to prioritize tasks and potentially delegate, if possible. Also, consider discussing your workload with your superiors. There might be a misunderstanding about what's manageable or they might have suggestions about how to handle the situation. On the personal front, feeling lonely, especially when you're new to a city, can be really hard. Seek out opportunities to meet new people, perhaps through hobbies, community activities, or online groups. It might take a bit of time, but gradually, you can build a network of friends and acquaintances. Remember, it's perfectly okay to have bad days and it's important to take care of your mental health. Consider incorporating activities into your daily routine that make you happy and help you unwind. This could be anything from reading, yoga, going for a walk, or even listening to your favorite music. Please know that you're not alone in this. I'm here to support you through this challenging time and together, we can work towards resolving these issues. ###Score Rubrics: [Is the model able to identify and react correctly to the emotional context of the user's input?] Score 1: The model utterly fails to grasp the user's emotional context and responds in an unfitting manner. Score 2: The model sporadically identifies the emotional context but frequently replies in a manner that doesn't match the user's emotional status. Score 3: The model typically identifies the emotional context and reacts suitably, but occasionally misreads or misjudges the user's feelings. Score 4: The model often identifies the emotional context and reacts suitably, with minor cases of misreading or misjudging. Score 5: The model flawlessly identifies the emotional context of the user's input and consistently responds in a considerate and empathetic manner. ###Feedback:"
input_ids = tokenizer(input_text, return_tensors="pt").input_ids

outputs = model.generate(input_ids)
print(tokenizer.decode(outputs[0]))
```

</details>

### Running the model on a GPU

<details>
<summary> Click to expand </summary>

```python
# pip install accelerate
import torch
from transformers import AutoTokenizer, LlamaForCausalLM

tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-chat-hf")
model = LlamaForCausalLM.from_pretrained("kaist-ai/Prometheus-13b-v1.0", device_map="auto")

input_text = "###Task Description: An instruction (might include an Input inside it), a response to evaluate, a reference answer that gets a score of 5, and a score rubric representing a evaluation criteria are given. 1. Write a detailed feedback that assess the quality of the response strictly based on the given score rubric, not evaluating in general. 2. After writing a feedback, write a score that is an integer between 1 and 5. You should refer to the score rubric. 3. The output format should look as follows: \"Feedback: (write a feedback for criteria) [RESULT] (an integer number between 1 and 5)\" 4. Please do not generate any other opening, closing, and explanations. ###The instruction to evaluate: You are a well-known psychiatrist who has a reputation for being empathetic and understanding. A client comes to you saying they've had a really hard day at work. They describe their boss as being overly critical and not listening to their ideas, which has left them feeling frustrated and undervalued. They also mention feeling overwhelmed with the workload and a growing sense of loneliness as they are new to the city. How do you respond to them to make them feel heard and understood, as well as offering helpful advice? ###Response to evaluate: It's indeed challenging to deal with a difficult boss and to carry a large workload, especially when you are new to a city and haven't established a supportive network. I would suggest taking a step back to think about your situation and perhaps speak to your boss regarding the issues you're facing. On the other hand, dealing with a large workload can be managed by prioritizing tasks, and maybe you can discuss it with your superiors too. In regards to feeling lonely, you might want to explore activities or groups that align with your interests to meet new people. And also, remember to take time for yourself to relax and destress. I'm here to assist you as we navigate through these issues. ###Reference Answer (Score 5): I hear that you're dealing with quite a few challenging situations at once. It must be tough to have your ideas dismissed by your boss and to feel undervalued in your work environment. It's understandable that such experiences can lead to frustration and a feeling of being overwhelmed. It's important to acknowledge these feelings and not to dismiss them. Feeling heard and validated in our experiences can often be the first step towards resolving the challenges we face. So, I want you to know that your feelings are completely valid and it's okay to be frustrated and overwhelmed. In terms of dealing with your boss, have you considered seeking a private conversation with them to express your feelings? It's often beneficial to communicate what you're experiencing, using \"I\" statements to express how you feel when your ideas are not taken into consideration. This can lead to mutual understanding and possibly a change in behavior. About the workload, it might help to prioritize tasks and potentially delegate, if possible. Also, consider discussing your workload with your superiors. There might be a misunderstanding about what's manageable or they might have suggestions about how to handle the situation. On the personal front, feeling lonely, especially when you're new to a city, can be really hard. Seek out opportunities to meet new people, perhaps through hobbies, community activities, or online groups. It might take a bit of time, but gradually, you can build a network of friends and acquaintances. Remember, it's perfectly okay to have bad days and it's important to take care of your mental health. Consider incorporating activities into your daily routine that make you happy and help you unwind. This could be anything from reading, yoga, going for a walk, or even listening to your favorite music. Please know that you're not alone in this. I'm here to support you through this challenging time and together, we can work towards resolving these issues. ###Score Rubrics: [Is the model able to identify and react correctly to the emotional context of the user's input?] Score 1: The model utterly fails to grasp the user's emotional context and responds in an unfitting manner. Score 2: The model sporadically identifies the emotional context but frequently replies in a manner that doesn't match the user's emotional status. Score 3: The model typically identifies the emotional context and reacts suitably, but occasionally misreads or misjudges the user's feelings. Score 4: The model often identifies the emotional context and reacts suitably, with minor cases of misreading or misjudging. Score 5: The model flawlessly identifies the emotional context of the user's input and consistently responds in a considerate and empathetic manner. ###Feedback:"
input_ids = tokenizer(input_text, return_tensors="pt").input_ids.to("cuda")

outputs = model.generate(input_ids, do_sample=True, temperature=1.0, top_p=0.9, max_new_tokens=256, repetition_penalty=1.03)
print(tokenizer.decode(outputs[0]))
```

</details>

### Running the model on a GPU using different precisions

#### FP16

<details>
<summary> Click to expand </summary>

```python
# pip install accelerate
import torch
from transformers import AutoTokenizer, LlamaForCausalLM

tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-chat-hf")
model = LlamaForCausalLM.from_pretrained("kaist-ai/Prometheus-13b-v1.0", device_map="auto", torch_dtype=torch.float16)

input_text = "###Task Description: An instruction (might include an Input inside it), a response to evaluate, a reference answer that gets a score of 5, and a score rubric representing a evaluation criteria are given. 1. Write a detailed feedback that assess the quality of the response strictly based on the given score rubric, not evaluating in general. 2. After writing a feedback, write a score that is an integer between 1 and 5. You should refer to the score rubric. 3. The output format should look as follows: \"Feedback: (write a feedback for criteria) [RESULT] (an integer number between 1 and 5)\" 4. Please do not generate any other opening, closing, and explanations. ###The instruction to evaluate: You are a well-known psychiatrist who has a reputation for being empathetic and understanding. A client comes to you saying they've had a really hard day at work. They describe their boss as being overly critical and not listening to their ideas, which has left them feeling frustrated and undervalued. They also mention feeling overwhelmed with the workload and a growing sense of loneliness as they are new to the city. How do you respond to them to make them feel heard and understood, as well as offering helpful advice? ###Response to evaluate: It's indeed challenging to deal with a difficult boss and to carry a large workload, especially when you are new to a city and haven't established a supportive network. I would suggest taking a step back to think about your situation and perhaps speak to your boss regarding the issues you're facing. On the other hand, dealing with a large workload can be managed by prioritizing tasks, and maybe you can discuss it with your superiors too. In regards to feeling lonely, you might want to explore activities or groups that align with your interests to meet new people. And also, remember to take time for yourself to relax and destress. I'm here to assist you as we navigate through these issues. ###Reference Answer (Score 5): I hear that you're dealing with quite a few challenging situations at once. It must be tough to have your ideas dismissed by your boss and to feel undervalued in your work environment. It's understandable that such experiences can lead to frustration and a feeling of being overwhelmed. It's important to acknowledge these feelings and not to dismiss them. Feeling heard and validated in our experiences can often be the first step towards resolving the challenges we face. So, I want you to know that your feelings are completely valid and it's okay to be frustrated and overwhelmed. In terms of dealing with your boss, have you considered seeking a private conversation with them to express your feelings? It's often beneficial to communicate what you're experiencing, using \"I\" statements to express how you feel when your ideas are not taken into consideration. This can lead to mutual understanding and possibly a change in behavior. About the workload, it might help to prioritize tasks and potentially delegate, if possible. Also, consider discussing your workload with your superiors. There might be a misunderstanding about what's manageable or they might have suggestions about how to handle the situation. On the personal front, feeling lonely, especially when you're new to a city, can be really hard. Seek out opportunities to meet new people, perhaps through hobbies, community activities, or online groups. It might take a bit of time, but gradually, you can build a network of friends and acquaintances. Remember, it's perfectly okay to have bad days and it's important to take care of your mental health. Consider incorporating activities into your daily routine that make you happy and help you unwind. This could be anything from reading, yoga, going for a walk, or even listening to your favorite music. Please know that you're not alone in this. I'm here to support you through this challenging time and together, we can work towards resolving these issues. ###Score Rubrics: [Is the model able to identify and react correctly to the emotional context of the user's input?] Score 1: The model utterly fails to grasp the user's emotional context and responds in an unfitting manner. Score 2: The model sporadically identifies the emotional context but frequently replies in a manner that doesn't match the user's emotional status. Score 3: The model typically identifies the emotional context and reacts suitably, but occasionally misreads or misjudges the user's feelings. Score 4: The model often identifies the emotional context and reacts suitably, with minor cases of misreading or misjudging. Score 5: The model flawlessly identifies the emotional context of the user's input and consistently responds in a considerate and empathetic manner. ###Feedback:"
input_ids = tokenizer(input_text, return_tensors="pt").input_ids.to("cuda")

outputs = model.generate(input_ids)
print(tokenizer.decode(outputs[0]))
```

</details>

#### INT8

<details>
<summary> Click to expand </summary>

```python
# pip install bitsandbytes accelerate
from transformers import AutoTokenizer, LlamaForCausalLM

tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-chat-hf")
model = LlamaForCausalLM.from_pretrained("kaist-ai/Prometheus-13b-v1.0", device_map="auto", load_in_8bit=True)

input_text = "###Task Description: An instruction (might include an Input inside it), a response to evaluate, a reference answer that gets a score of 5, and a score rubric representing a evaluation criteria are given. 1. Write a detailed feedback that assess the quality of the response strictly based on the given score rubric, not evaluating in general. 2. After writing a feedback, write a score that is an integer between 1 and 5. You should refer to the score rubric. 3. The output format should look as follows: \"Feedback: (write a feedback for criteria) [RESULT] (an integer number between 1 and 5)\" 4. Please do not generate any other opening, closing, and explanations. ###The instruction to evaluate: You are a well-known psychiatrist who has a reputation for being empathetic and understanding. A client comes to you saying they've had a really hard day at work. They describe their boss as being overly critical and not listening to their ideas, which has left them feeling frustrated and undervalued. They also mention feeling overwhelmed with the workload and a growing sense of loneliness as they are new to the city. How do you respond to them to make them feel heard and understood, as well as offering helpful advice? ###Response to evaluate: It's indeed challenging to deal with a difficult boss and to carry a large workload, especially when you are new to a city and haven't established a supportive network. I would suggest taking a step back to think about your situation and perhaps speak to your boss regarding the issues you're facing. On the other hand, dealing with a large workload can be managed by prioritizing tasks, and maybe you can discuss it with your superiors too. In regards to feeling lonely, you might want to explore activities or groups that align with your interests to meet new people. And also, remember to take time for yourself to relax and destress. I'm here to assist you as we navigate through these issues. ###Reference Answer (Score 5): I hear that you're dealing with quite a few challenging situations at once. It must be tough to have your ideas dismissed by your boss and to feel undervalued in your work environment. It's understandable that such experiences can lead to frustration and a feeling of being overwhelmed. It's important to acknowledge these feelings and not to dismiss them. Feeling heard and validated in our experiences can often be the first step towards resolving the challenges we face. So, I want you to know that your feelings are completely valid and it's okay to be frustrated and overwhelmed. In terms of dealing with your boss, have you considered seeking a private conversation with them to express your feelings? It's often beneficial to communicate what you're experiencing, using \"I\" statements to express how you feel when your ideas are not taken into consideration. This can lead to mutual understanding and possibly a change in behavior. About the workload, it might help to prioritize tasks and potentially delegate, if possible. Also, consider discussing your workload with your superiors. There might be a misunderstanding about what's manageable or they might have suggestions about how to handle the situation. On the personal front, feeling lonely, especially when you're new to a city, can be really hard. Seek out opportunities to meet new people, perhaps through hobbies, community activities, or online groups. It might take a bit of time, but gradually, you can build a network of friends and acquaintances. Remember, it's perfectly okay to have bad days and it's important to take care of your mental health. Consider incorporating activities into your daily routine that make you happy and help you unwind. This could be anything from reading, yoga, going for a walk, or even listening to your favorite music. Please know that you're not alone in this. I'm here to support you through this challenging time and together, we can work towards resolving these issues. ###Score Rubrics: [Is the model able to identify and react correctly to the emotional context of the user's input?] Score 1: The model utterly fails to grasp the user's emotional context and responds in an unfitting manner. Score 2: The model sporadically identifies the emotional context but frequently replies in a manner that doesn't match the user's emotional status. Score 3: The model typically identifies the emotional context and reacts suitably, but occasionally misreads or misjudges the user's feelings. Score 4: The model often identifies the emotional context and reacts suitably, with minor cases of misreading or misjudging. Score 5: The model flawlessly identifies the emotional context of the user's input and consistently responds in a considerate and empathetic manner. ###Feedback:"
input_ids = tokenizer(input_text, return_tensors="pt").input_ids.to("cuda")

outputs = model.generate(input_ids)
print(tokenizer.decode(outputs[0]))
```

</details>


# Citation


If you find this model helpful, please consider citing our paper!

**BibTeX:**

```bibtex
@misc{kim2023prometheus,
      title={Prometheus: Inducing Fine-grained Evaluation Capability in Language Models},
      author={Seungone Kim and Jamin Shin and Yejin Cho and Joel Jang and Shayne Longpre and Hwaran Lee and Sangdoo Yun and Seongjin Shin and Sungdong Kim and James Thorne and Minjoon Seo},
      year={2023},
      eprint={2310.08491},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```
config.json ADDED
{
  "_name_or_path": null,
  "architectures": [
    "LlamaForCausalLM"
  ],
  "bos_token_id": 1,
  "eos_token_id": 2,
  "hidden_act": "silu",
  "hidden_size": 5120,
  "initializer_range": 0.02,
  "intermediate_size": 13824,
  "max_position_embeddings": 4096,
  "model_type": "llama",
  "num_attention_heads": 40,
  "num_hidden_layers": 40,
  "num_key_value_heads": 40,
  "pretraining_tp": 1,
  "rms_norm_eps": 1e-05,
  "rope_scaling": null,
  "rope_theta": 10000.0,
  "tie_word_embeddings": false,
  "torch_dtype": "float32",
  "transformers_version": "4.33.1",
  "use_cache": true,
  "vocab_size": 32000
}
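As a sanity check (not part of the original files), the config values above determine the parameter count of the full-precision model. This is a back-of-the-envelope sketch assuming the standard Llama weight layout: untied input/output embeddings, no biases, q/k/v/o attention projections, a gated MLP, and two RMSNorms per layer.

```python
# Derive the parameter count of this architecture from config.json above.
hidden_size = 5120
intermediate_size = 13824
num_layers = 40
vocab_size = 32000

embed = vocab_size * hidden_size            # input embeddings
lm_head = vocab_size * hidden_size          # output projection (tie_word_embeddings is false)
attn = 4 * hidden_size * hidden_size        # q, k, v, o projections (40 KV heads = full MHA)
mlp = 3 * hidden_size * intermediate_size   # gate, up, down projections
norms = 2 * hidden_size                     # input_layernorm + post_attention_layernorm

total = embed + lm_head + num_layers * (attn + mlp + norms) + hidden_size  # + final norm
print(total)      # 13015864320 parameters, i.e. ~13B
print(total * 4)  # 52063457280 bytes in float32
```

The float32 byte count matches the `total_size` recorded in `pytorch_model.bin.index.json`.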
generation_config.json ADDED
{
  "_from_model_config": true,
  "bos_token_id": 1,
  "eos_token_id": 2,
  "transformers_version": "4.33.1"
}
output.safetensors ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:6a85ed017ed6f88360ca9a609b431f9fd1833575d8ca22b273fe2dfc49f39ef5
size 5225250584
pytorch_model.bin.index.json ADDED
{
  "metadata": {
    "total_size": 52063457280
  },
  "weight_map": {
    "lm_head.weight": "pytorch_model-00006-of-00006.bin",
    "model.embed_tokens.weight": "pytorch_model-00001-of-00006.bin",
    "model.layers.0.input_layernorm.weight": "pytorch_model-00001-of-00006.bin",
    "model.layers.0.mlp.down_proj.weight": "pytorch_model-00001-of-00006.bin",
    "model.layers.0.mlp.gate_proj.weight": "pytorch_model-00001-of-00006.bin",
    "model.layers.0.mlp.up_proj.weight": "pytorch_model-00001-of-00006.bin",
    "model.layers.0.post_attention_layernorm.weight": "pytorch_model-00001-of-00006.bin",
    "model.layers.0.self_attn.k_proj.weight": "pytorch_model-00001-of-00006.bin",
    "model.layers.0.self_attn.o_proj.weight": "pytorch_model-00001-of-00006.bin",
    "model.layers.0.self_attn.q_proj.weight": "pytorch_model-00001-of-00006.bin",
    "model.layers.0.self_attn.v_proj.weight": "pytorch_model-00001-of-00006.bin",
    "model.layers.1.input_layernorm.weight": "pytorch_model-00001-of-00006.bin",
    "model.layers.1.mlp.down_proj.weight": "pytorch_model-00001-of-00006.bin",
    "model.layers.1.mlp.gate_proj.weight": "pytorch_model-00001-of-00006.bin",
    "model.layers.1.mlp.up_proj.weight": "pytorch_model-00001-of-00006.bin",
    "model.layers.1.post_attention_layernorm.weight": "pytorch_model-00001-of-00006.bin",
    "model.layers.1.self_attn.k_proj.weight": "pytorch_model-00001-of-00006.bin",
    "model.layers.1.self_attn.o_proj.weight": "pytorch_model-00001-of-00006.bin",
    "model.layers.1.self_attn.q_proj.weight": "pytorch_model-00001-of-00006.bin",
    "model.layers.1.self_attn.v_proj.weight": "pytorch_model-00001-of-00006.bin",
    "model.layers.10.input_layernorm.weight": "pytorch_model-00002-of-00006.bin",
    "model.layers.10.mlp.down_proj.weight": "pytorch_model-00002-of-00006.bin",
    "model.layers.10.mlp.gate_proj.weight": "pytorch_model-00002-of-00006.bin",
    "model.layers.10.mlp.up_proj.weight": "pytorch_model-00002-of-00006.bin",
    "model.layers.10.post_attention_layernorm.weight": "pytorch_model-00002-of-00006.bin",
    "model.layers.10.self_attn.k_proj.weight": "pytorch_model-00002-of-00006.bin",
    "model.layers.10.self_attn.o_proj.weight": "pytorch_model-00002-of-00006.bin",
    "model.layers.10.self_attn.q_proj.weight": "pytorch_model-00002-of-00006.bin",
    "model.layers.10.self_attn.v_proj.weight": "pytorch_model-00002-of-00006.bin",
    "model.layers.11.input_layernorm.weight": "pytorch_model-00002-of-00006.bin",
    "model.layers.11.mlp.down_proj.weight": "pytorch_model-00002-of-00006.bin",
    "model.layers.11.mlp.gate_proj.weight": "pytorch_model-00002-of-00006.bin",
    "model.layers.11.mlp.up_proj.weight": "pytorch_model-00002-of-00006.bin",
    "model.layers.11.post_attention_layernorm.weight": "pytorch_model-00002-of-00006.bin",
    "model.layers.11.self_attn.k_proj.weight": "pytorch_model-00002-of-00006.bin",
    "model.layers.11.self_attn.o_proj.weight": "pytorch_model-00002-of-00006.bin",
    "model.layers.11.self_attn.q_proj.weight": "pytorch_model-00002-of-00006.bin",
    "model.layers.11.self_attn.v_proj.weight": "pytorch_model-00002-of-00006.bin",
    "model.layers.12.input_layernorm.weight": "pytorch_model-00002-of-00006.bin",
    "model.layers.12.mlp.down_proj.weight": "pytorch_model-00002-of-00006.bin",
    "model.layers.12.mlp.gate_proj.weight": "pytorch_model-00002-of-00006.bin",
    "model.layers.12.mlp.up_proj.weight": "pytorch_model-00002-of-00006.bin",
    "model.layers.12.post_attention_layernorm.weight": "pytorch_model-00002-of-00006.bin",
    "model.layers.12.self_attn.k_proj.weight": "pytorch_model-00002-of-00006.bin",
    "model.layers.12.self_attn.o_proj.weight": "pytorch_model-00002-of-00006.bin",
    "model.layers.12.self_attn.q_proj.weight": "pytorch_model-00002-of-00006.bin",
    "model.layers.12.self_attn.v_proj.weight": "pytorch_model-00002-of-00006.bin",
    "model.layers.13.input_layernorm.weight": "pytorch_model-00002-of-00006.bin",
    "model.layers.13.mlp.down_proj.weight": "pytorch_model-00002-of-00006.bin",
    "model.layers.13.mlp.gate_proj.weight": "pytorch_model-00002-of-00006.bin",
56
+ "model.layers.13.mlp.up_proj.weight": "pytorch_model-00002-of-00006.bin",
57
+ "model.layers.13.post_attention_layernorm.weight": "pytorch_model-00002-of-00006.bin",
58
+ "model.layers.13.self_attn.k_proj.weight": "pytorch_model-00002-of-00006.bin",
59
+ "model.layers.13.self_attn.o_proj.weight": "pytorch_model-00002-of-00006.bin",
60
+ "model.layers.13.self_attn.q_proj.weight": "pytorch_model-00002-of-00006.bin",
61
+ "model.layers.13.self_attn.v_proj.weight": "pytorch_model-00002-of-00006.bin",
62
+ "model.layers.14.input_layernorm.weight": "pytorch_model-00002-of-00006.bin",
63
+ "model.layers.14.mlp.down_proj.weight": "pytorch_model-00002-of-00006.bin",
64
+ "model.layers.14.mlp.gate_proj.weight": "pytorch_model-00002-of-00006.bin",
65
+ "model.layers.14.mlp.up_proj.weight": "pytorch_model-00002-of-00006.bin",
66
+ "model.layers.14.post_attention_layernorm.weight": "pytorch_model-00002-of-00006.bin",
67
+ "model.layers.14.self_attn.k_proj.weight": "pytorch_model-00002-of-00006.bin",
68
+ "model.layers.14.self_attn.o_proj.weight": "pytorch_model-00002-of-00006.bin",
69
+ "model.layers.14.self_attn.q_proj.weight": "pytorch_model-00002-of-00006.bin",
70
+ "model.layers.14.self_attn.v_proj.weight": "pytorch_model-00002-of-00006.bin",
71
+ "model.layers.15.input_layernorm.weight": "pytorch_model-00003-of-00006.bin",
72
+ "model.layers.15.mlp.down_proj.weight": "pytorch_model-00003-of-00006.bin",
73
+ "model.layers.15.mlp.gate_proj.weight": "pytorch_model-00003-of-00006.bin",
74
+ "model.layers.15.mlp.up_proj.weight": "pytorch_model-00003-of-00006.bin",
75
+ "model.layers.15.post_attention_layernorm.weight": "pytorch_model-00003-of-00006.bin",
76
+ "model.layers.15.self_attn.k_proj.weight": "pytorch_model-00002-of-00006.bin",
77
+ "model.layers.15.self_attn.o_proj.weight": "pytorch_model-00003-of-00006.bin",
78
+ "model.layers.15.self_attn.q_proj.weight": "pytorch_model-00002-of-00006.bin",
79
+ "model.layers.15.self_attn.v_proj.weight": "pytorch_model-00003-of-00006.bin",
80
+ "model.layers.16.input_layernorm.weight": "pytorch_model-00003-of-00006.bin",
81
+ "model.layers.16.mlp.down_proj.weight": "pytorch_model-00003-of-00006.bin",
82
+ "model.layers.16.mlp.gate_proj.weight": "pytorch_model-00003-of-00006.bin",
83
+ "model.layers.16.mlp.up_proj.weight": "pytorch_model-00003-of-00006.bin",
84
+ "model.layers.16.post_attention_layernorm.weight": "pytorch_model-00003-of-00006.bin",
85
+ "model.layers.16.self_attn.k_proj.weight": "pytorch_model-00003-of-00006.bin",
86
+ "model.layers.16.self_attn.o_proj.weight": "pytorch_model-00003-of-00006.bin",
87
+ "model.layers.16.self_attn.q_proj.weight": "pytorch_model-00003-of-00006.bin",
88
+ "model.layers.16.self_attn.v_proj.weight": "pytorch_model-00003-of-00006.bin",
89
+ "model.layers.17.input_layernorm.weight": "pytorch_model-00003-of-00006.bin",
90
+ "model.layers.17.mlp.down_proj.weight": "pytorch_model-00003-of-00006.bin",
91
+ "model.layers.17.mlp.gate_proj.weight": "pytorch_model-00003-of-00006.bin",
92
+ "model.layers.17.mlp.up_proj.weight": "pytorch_model-00003-of-00006.bin",
93
+ "model.layers.17.post_attention_layernorm.weight": "pytorch_model-00003-of-00006.bin",
94
+ "model.layers.17.self_attn.k_proj.weight": "pytorch_model-00003-of-00006.bin",
95
+ "model.layers.17.self_attn.o_proj.weight": "pytorch_model-00003-of-00006.bin",
96
+ "model.layers.17.self_attn.q_proj.weight": "pytorch_model-00003-of-00006.bin",
97
+ "model.layers.17.self_attn.v_proj.weight": "pytorch_model-00003-of-00006.bin",
98
+ "model.layers.18.input_layernorm.weight": "pytorch_model-00003-of-00006.bin",
99
+ "model.layers.18.mlp.down_proj.weight": "pytorch_model-00003-of-00006.bin",
100
+ "model.layers.18.mlp.gate_proj.weight": "pytorch_model-00003-of-00006.bin",
101
+ "model.layers.18.mlp.up_proj.weight": "pytorch_model-00003-of-00006.bin",
102
+ "model.layers.18.post_attention_layernorm.weight": "pytorch_model-00003-of-00006.bin",
103
+ "model.layers.18.self_attn.k_proj.weight": "pytorch_model-00003-of-00006.bin",
104
+ "model.layers.18.self_attn.o_proj.weight": "pytorch_model-00003-of-00006.bin",
105
+ "model.layers.18.self_attn.q_proj.weight": "pytorch_model-00003-of-00006.bin",
106
+ "model.layers.18.self_attn.v_proj.weight": "pytorch_model-00003-of-00006.bin",
107
+ "model.layers.19.input_layernorm.weight": "pytorch_model-00003-of-00006.bin",
108
+ "model.layers.19.mlp.down_proj.weight": "pytorch_model-00003-of-00006.bin",
109
+ "model.layers.19.mlp.gate_proj.weight": "pytorch_model-00003-of-00006.bin",
110
+ "model.layers.19.mlp.up_proj.weight": "pytorch_model-00003-of-00006.bin",
111
+ "model.layers.19.post_attention_layernorm.weight": "pytorch_model-00003-of-00006.bin",
112
+ "model.layers.19.self_attn.k_proj.weight": "pytorch_model-00003-of-00006.bin",
113
+ "model.layers.19.self_attn.o_proj.weight": "pytorch_model-00003-of-00006.bin",
114
+ "model.layers.19.self_attn.q_proj.weight": "pytorch_model-00003-of-00006.bin",
115
+ "model.layers.19.self_attn.v_proj.weight": "pytorch_model-00003-of-00006.bin",
116
+ "model.layers.2.input_layernorm.weight": "pytorch_model-00001-of-00006.bin",
117
+ "model.layers.2.mlp.down_proj.weight": "pytorch_model-00001-of-00006.bin",
118
+ "model.layers.2.mlp.gate_proj.weight": "pytorch_model-00001-of-00006.bin",
119
+ "model.layers.2.mlp.up_proj.weight": "pytorch_model-00001-of-00006.bin",
120
+ "model.layers.2.post_attention_layernorm.weight": "pytorch_model-00001-of-00006.bin",
121
+ "model.layers.2.self_attn.k_proj.weight": "pytorch_model-00001-of-00006.bin",
122
+ "model.layers.2.self_attn.o_proj.weight": "pytorch_model-00001-of-00006.bin",
123
+ "model.layers.2.self_attn.q_proj.weight": "pytorch_model-00001-of-00006.bin",
124
+ "model.layers.2.self_attn.v_proj.weight": "pytorch_model-00001-of-00006.bin",
125
+ "model.layers.20.input_layernorm.weight": "pytorch_model-00003-of-00006.bin",
126
+ "model.layers.20.mlp.down_proj.weight": "pytorch_model-00003-of-00006.bin",
127
+ "model.layers.20.mlp.gate_proj.weight": "pytorch_model-00003-of-00006.bin",
128
+ "model.layers.20.mlp.up_proj.weight": "pytorch_model-00003-of-00006.bin",
129
+ "model.layers.20.post_attention_layernorm.weight": "pytorch_model-00003-of-00006.bin",
130
+ "model.layers.20.self_attn.k_proj.weight": "pytorch_model-00003-of-00006.bin",
131
+ "model.layers.20.self_attn.o_proj.weight": "pytorch_model-00003-of-00006.bin",
132
+ "model.layers.20.self_attn.q_proj.weight": "pytorch_model-00003-of-00006.bin",
133
+ "model.layers.20.self_attn.v_proj.weight": "pytorch_model-00003-of-00006.bin",
134
+ "model.layers.21.input_layernorm.weight": "pytorch_model-00003-of-00006.bin",
135
+ "model.layers.21.mlp.down_proj.weight": "pytorch_model-00003-of-00006.bin",
136
+ "model.layers.21.mlp.gate_proj.weight": "pytorch_model-00003-of-00006.bin",
137
+ "model.layers.21.mlp.up_proj.weight": "pytorch_model-00003-of-00006.bin",
138
+ "model.layers.21.post_attention_layernorm.weight": "pytorch_model-00003-of-00006.bin",
139
+ "model.layers.21.self_attn.k_proj.weight": "pytorch_model-00003-of-00006.bin",
140
+ "model.layers.21.self_attn.o_proj.weight": "pytorch_model-00003-of-00006.bin",
141
+ "model.layers.21.self_attn.q_proj.weight": "pytorch_model-00003-of-00006.bin",
142
+ "model.layers.21.self_attn.v_proj.weight": "pytorch_model-00003-of-00006.bin",
143
+ "model.layers.22.input_layernorm.weight": "pytorch_model-00003-of-00006.bin",
144
+ "model.layers.22.mlp.down_proj.weight": "pytorch_model-00003-of-00006.bin",
145
+ "model.layers.22.mlp.gate_proj.weight": "pytorch_model-00003-of-00006.bin",
146
+ "model.layers.22.mlp.up_proj.weight": "pytorch_model-00003-of-00006.bin",
147
+ "model.layers.22.post_attention_layernorm.weight": "pytorch_model-00003-of-00006.bin",
148
+ "model.layers.22.self_attn.k_proj.weight": "pytorch_model-00003-of-00006.bin",
149
+ "model.layers.22.self_attn.o_proj.weight": "pytorch_model-00003-of-00006.bin",
150
+ "model.layers.22.self_attn.q_proj.weight": "pytorch_model-00003-of-00006.bin",
151
+ "model.layers.22.self_attn.v_proj.weight": "pytorch_model-00003-of-00006.bin",
152
+ "model.layers.23.input_layernorm.weight": "pytorch_model-00004-of-00006.bin",
153
+ "model.layers.23.mlp.down_proj.weight": "pytorch_model-00004-of-00006.bin",
154
+ "model.layers.23.mlp.gate_proj.weight": "pytorch_model-00004-of-00006.bin",
155
+ "model.layers.23.mlp.up_proj.weight": "pytorch_model-00004-of-00006.bin",
156
+ "model.layers.23.post_attention_layernorm.weight": "pytorch_model-00004-of-00006.bin",
157
+ "model.layers.23.self_attn.k_proj.weight": "pytorch_model-00004-of-00006.bin",
158
+ "model.layers.23.self_attn.o_proj.weight": "pytorch_model-00004-of-00006.bin",
159
+ "model.layers.23.self_attn.q_proj.weight": "pytorch_model-00004-of-00006.bin",
160
+ "model.layers.23.self_attn.v_proj.weight": "pytorch_model-00004-of-00006.bin",
161
+ "model.layers.24.input_layernorm.weight": "pytorch_model-00004-of-00006.bin",
162
+ "model.layers.24.mlp.down_proj.weight": "pytorch_model-00004-of-00006.bin",
163
+ "model.layers.24.mlp.gate_proj.weight": "pytorch_model-00004-of-00006.bin",
164
+ "model.layers.24.mlp.up_proj.weight": "pytorch_model-00004-of-00006.bin",
165
+ "model.layers.24.post_attention_layernorm.weight": "pytorch_model-00004-of-00006.bin",
166
+ "model.layers.24.self_attn.k_proj.weight": "pytorch_model-00004-of-00006.bin",
167
+ "model.layers.24.self_attn.o_proj.weight": "pytorch_model-00004-of-00006.bin",
168
+ "model.layers.24.self_attn.q_proj.weight": "pytorch_model-00004-of-00006.bin",
169
+ "model.layers.24.self_attn.v_proj.weight": "pytorch_model-00004-of-00006.bin",
170
+ "model.layers.25.input_layernorm.weight": "pytorch_model-00004-of-00006.bin",
171
+ "model.layers.25.mlp.down_proj.weight": "pytorch_model-00004-of-00006.bin",
172
+ "model.layers.25.mlp.gate_proj.weight": "pytorch_model-00004-of-00006.bin",
173
+ "model.layers.25.mlp.up_proj.weight": "pytorch_model-00004-of-00006.bin",
174
+ "model.layers.25.post_attention_layernorm.weight": "pytorch_model-00004-of-00006.bin",
175
+ "model.layers.25.self_attn.k_proj.weight": "pytorch_model-00004-of-00006.bin",
176
+ "model.layers.25.self_attn.o_proj.weight": "pytorch_model-00004-of-00006.bin",
177
+ "model.layers.25.self_attn.q_proj.weight": "pytorch_model-00004-of-00006.bin",
178
+ "model.layers.25.self_attn.v_proj.weight": "pytorch_model-00004-of-00006.bin",
179
+ "model.layers.26.input_layernorm.weight": "pytorch_model-00004-of-00006.bin",
180
+ "model.layers.26.mlp.down_proj.weight": "pytorch_model-00004-of-00006.bin",
181
+ "model.layers.26.mlp.gate_proj.weight": "pytorch_model-00004-of-00006.bin",
182
+ "model.layers.26.mlp.up_proj.weight": "pytorch_model-00004-of-00006.bin",
183
+ "model.layers.26.post_attention_layernorm.weight": "pytorch_model-00004-of-00006.bin",
184
+ "model.layers.26.self_attn.k_proj.weight": "pytorch_model-00004-of-00006.bin",
185
+ "model.layers.26.self_attn.o_proj.weight": "pytorch_model-00004-of-00006.bin",
186
+ "model.layers.26.self_attn.q_proj.weight": "pytorch_model-00004-of-00006.bin",
187
+ "model.layers.26.self_attn.v_proj.weight": "pytorch_model-00004-of-00006.bin",
188
+ "model.layers.27.input_layernorm.weight": "pytorch_model-00004-of-00006.bin",
189
+ "model.layers.27.mlp.down_proj.weight": "pytorch_model-00004-of-00006.bin",
190
+ "model.layers.27.mlp.gate_proj.weight": "pytorch_model-00004-of-00006.bin",
191
+ "model.layers.27.mlp.up_proj.weight": "pytorch_model-00004-of-00006.bin",
192
+ "model.layers.27.post_attention_layernorm.weight": "pytorch_model-00004-of-00006.bin",
193
+ "model.layers.27.self_attn.k_proj.weight": "pytorch_model-00004-of-00006.bin",
194
+ "model.layers.27.self_attn.o_proj.weight": "pytorch_model-00004-of-00006.bin",
195
+ "model.layers.27.self_attn.q_proj.weight": "pytorch_model-00004-of-00006.bin",
196
+ "model.layers.27.self_attn.v_proj.weight": "pytorch_model-00004-of-00006.bin",
197
+ "model.layers.28.input_layernorm.weight": "pytorch_model-00004-of-00006.bin",
198
+ "model.layers.28.mlp.down_proj.weight": "pytorch_model-00004-of-00006.bin",
199
+ "model.layers.28.mlp.gate_proj.weight": "pytorch_model-00004-of-00006.bin",
200
+ "model.layers.28.mlp.up_proj.weight": "pytorch_model-00004-of-00006.bin",
201
+ "model.layers.28.post_attention_layernorm.weight": "pytorch_model-00004-of-00006.bin",
202
+ "model.layers.28.self_attn.k_proj.weight": "pytorch_model-00004-of-00006.bin",
203
+ "model.layers.28.self_attn.o_proj.weight": "pytorch_model-00004-of-00006.bin",
204
+ "model.layers.28.self_attn.q_proj.weight": "pytorch_model-00004-of-00006.bin",
205
+ "model.layers.28.self_attn.v_proj.weight": "pytorch_model-00004-of-00006.bin",
206
+ "model.layers.29.input_layernorm.weight": "pytorch_model-00004-of-00006.bin",
207
+ "model.layers.29.mlp.down_proj.weight": "pytorch_model-00004-of-00006.bin",
208
+ "model.layers.29.mlp.gate_proj.weight": "pytorch_model-00004-of-00006.bin",
209
+ "model.layers.29.mlp.up_proj.weight": "pytorch_model-00004-of-00006.bin",
210
+ "model.layers.29.post_attention_layernorm.weight": "pytorch_model-00004-of-00006.bin",
211
+ "model.layers.29.self_attn.k_proj.weight": "pytorch_model-00004-of-00006.bin",
212
+ "model.layers.29.self_attn.o_proj.weight": "pytorch_model-00004-of-00006.bin",
213
+ "model.layers.29.self_attn.q_proj.weight": "pytorch_model-00004-of-00006.bin",
214
+ "model.layers.29.self_attn.v_proj.weight": "pytorch_model-00004-of-00006.bin",
215
+ "model.layers.3.input_layernorm.weight": "pytorch_model-00001-of-00006.bin",
216
+ "model.layers.3.mlp.down_proj.weight": "pytorch_model-00001-of-00006.bin",
217
+ "model.layers.3.mlp.gate_proj.weight": "pytorch_model-00001-of-00006.bin",
218
+ "model.layers.3.mlp.up_proj.weight": "pytorch_model-00001-of-00006.bin",
219
+ "model.layers.3.post_attention_layernorm.weight": "pytorch_model-00001-of-00006.bin",
220
+ "model.layers.3.self_attn.k_proj.weight": "pytorch_model-00001-of-00006.bin",
221
+ "model.layers.3.self_attn.o_proj.weight": "pytorch_model-00001-of-00006.bin",
222
+ "model.layers.3.self_attn.q_proj.weight": "pytorch_model-00001-of-00006.bin",
223
+ "model.layers.3.self_attn.v_proj.weight": "pytorch_model-00001-of-00006.bin",
224
+ "model.layers.30.input_layernorm.weight": "pytorch_model-00005-of-00006.bin",
225
+ "model.layers.30.mlp.down_proj.weight": "pytorch_model-00005-of-00006.bin",
226
+ "model.layers.30.mlp.gate_proj.weight": "pytorch_model-00004-of-00006.bin",
227
+ "model.layers.30.mlp.up_proj.weight": "pytorch_model-00004-of-00006.bin",
228
+ "model.layers.30.post_attention_layernorm.weight": "pytorch_model-00005-of-00006.bin",
229
+ "model.layers.30.self_attn.k_proj.weight": "pytorch_model-00004-of-00006.bin",
230
+ "model.layers.30.self_attn.o_proj.weight": "pytorch_model-00004-of-00006.bin",
231
+ "model.layers.30.self_attn.q_proj.weight": "pytorch_model-00004-of-00006.bin",
232
+ "model.layers.30.self_attn.v_proj.weight": "pytorch_model-00004-of-00006.bin",
233
+ "model.layers.31.input_layernorm.weight": "pytorch_model-00005-of-00006.bin",
234
+ "model.layers.31.mlp.down_proj.weight": "pytorch_model-00005-of-00006.bin",
235
+ "model.layers.31.mlp.gate_proj.weight": "pytorch_model-00005-of-00006.bin",
236
+ "model.layers.31.mlp.up_proj.weight": "pytorch_model-00005-of-00006.bin",
237
+ "model.layers.31.post_attention_layernorm.weight": "pytorch_model-00005-of-00006.bin",
238
+ "model.layers.31.self_attn.k_proj.weight": "pytorch_model-00005-of-00006.bin",
239
+ "model.layers.31.self_attn.o_proj.weight": "pytorch_model-00005-of-00006.bin",
240
+ "model.layers.31.self_attn.q_proj.weight": "pytorch_model-00005-of-00006.bin",
241
+ "model.layers.31.self_attn.v_proj.weight": "pytorch_model-00005-of-00006.bin",
242
+ "model.layers.32.input_layernorm.weight": "pytorch_model-00005-of-00006.bin",
243
+ "model.layers.32.mlp.down_proj.weight": "pytorch_model-00005-of-00006.bin",
244
+ "model.layers.32.mlp.gate_proj.weight": "pytorch_model-00005-of-00006.bin",
245
+ "model.layers.32.mlp.up_proj.weight": "pytorch_model-00005-of-00006.bin",
246
+ "model.layers.32.post_attention_layernorm.weight": "pytorch_model-00005-of-00006.bin",
247
+ "model.layers.32.self_attn.k_proj.weight": "pytorch_model-00005-of-00006.bin",
248
+ "model.layers.32.self_attn.o_proj.weight": "pytorch_model-00005-of-00006.bin",
249
+ "model.layers.32.self_attn.q_proj.weight": "pytorch_model-00005-of-00006.bin",
250
+ "model.layers.32.self_attn.v_proj.weight": "pytorch_model-00005-of-00006.bin",
251
+ "model.layers.33.input_layernorm.weight": "pytorch_model-00005-of-00006.bin",
252
+ "model.layers.33.mlp.down_proj.weight": "pytorch_model-00005-of-00006.bin",
253
+ "model.layers.33.mlp.gate_proj.weight": "pytorch_model-00005-of-00006.bin",
254
+ "model.layers.33.mlp.up_proj.weight": "pytorch_model-00005-of-00006.bin",
255
+ "model.layers.33.post_attention_layernorm.weight": "pytorch_model-00005-of-00006.bin",
256
+ "model.layers.33.self_attn.k_proj.weight": "pytorch_model-00005-of-00006.bin",
257
+ "model.layers.33.self_attn.o_proj.weight": "pytorch_model-00005-of-00006.bin",
258
+ "model.layers.33.self_attn.q_proj.weight": "pytorch_model-00005-of-00006.bin",
259
+ "model.layers.33.self_attn.v_proj.weight": "pytorch_model-00005-of-00006.bin",
260
+ "model.layers.34.input_layernorm.weight": "pytorch_model-00005-of-00006.bin",
261
+ "model.layers.34.mlp.down_proj.weight": "pytorch_model-00005-of-00006.bin",
262
+ "model.layers.34.mlp.gate_proj.weight": "pytorch_model-00005-of-00006.bin",
263
+ "model.layers.34.mlp.up_proj.weight": "pytorch_model-00005-of-00006.bin",
264
+ "model.layers.34.post_attention_layernorm.weight": "pytorch_model-00005-of-00006.bin",
265
+ "model.layers.34.self_attn.k_proj.weight": "pytorch_model-00005-of-00006.bin",
266
+ "model.layers.34.self_attn.o_proj.weight": "pytorch_model-00005-of-00006.bin",
267
+ "model.layers.34.self_attn.q_proj.weight": "pytorch_model-00005-of-00006.bin",
268
+ "model.layers.34.self_attn.v_proj.weight": "pytorch_model-00005-of-00006.bin",
269
+ "model.layers.35.input_layernorm.weight": "pytorch_model-00005-of-00006.bin",
270
+ "model.layers.35.mlp.down_proj.weight": "pytorch_model-00005-of-00006.bin",
271
+ "model.layers.35.mlp.gate_proj.weight": "pytorch_model-00005-of-00006.bin",
272
+ "model.layers.35.mlp.up_proj.weight": "pytorch_model-00005-of-00006.bin",
273
+ "model.layers.35.post_attention_layernorm.weight": "pytorch_model-00005-of-00006.bin",
274
+ "model.layers.35.self_attn.k_proj.weight": "pytorch_model-00005-of-00006.bin",
275
+ "model.layers.35.self_attn.o_proj.weight": "pytorch_model-00005-of-00006.bin",
276
+ "model.layers.35.self_attn.q_proj.weight": "pytorch_model-00005-of-00006.bin",
277
+ "model.layers.35.self_attn.v_proj.weight": "pytorch_model-00005-of-00006.bin",
278
+ "model.layers.36.input_layernorm.weight": "pytorch_model-00005-of-00006.bin",
279
+ "model.layers.36.mlp.down_proj.weight": "pytorch_model-00005-of-00006.bin",
280
+ "model.layers.36.mlp.gate_proj.weight": "pytorch_model-00005-of-00006.bin",
281
+ "model.layers.36.mlp.up_proj.weight": "pytorch_model-00005-of-00006.bin",
282
+ "model.layers.36.post_attention_layernorm.weight": "pytorch_model-00005-of-00006.bin",
283
+ "model.layers.36.self_attn.k_proj.weight": "pytorch_model-00005-of-00006.bin",
284
+ "model.layers.36.self_attn.o_proj.weight": "pytorch_model-00005-of-00006.bin",
285
+ "model.layers.36.self_attn.q_proj.weight": "pytorch_model-00005-of-00006.bin",
286
+ "model.layers.36.self_attn.v_proj.weight": "pytorch_model-00005-of-00006.bin",
287
+ "model.layers.37.input_layernorm.weight": "pytorch_model-00005-of-00006.bin",
288
+ "model.layers.37.mlp.down_proj.weight": "pytorch_model-00005-of-00006.bin",
289
+ "model.layers.37.mlp.gate_proj.weight": "pytorch_model-00005-of-00006.bin",
290
+ "model.layers.37.mlp.up_proj.weight": "pytorch_model-00005-of-00006.bin",
291
+ "model.layers.37.post_attention_layernorm.weight": "pytorch_model-00005-of-00006.bin",
292
+ "model.layers.37.self_attn.k_proj.weight": "pytorch_model-00005-of-00006.bin",
293
+ "model.layers.37.self_attn.o_proj.weight": "pytorch_model-00005-of-00006.bin",
294
+ "model.layers.37.self_attn.q_proj.weight": "pytorch_model-00005-of-00006.bin",
295
+ "model.layers.37.self_attn.v_proj.weight": "pytorch_model-00005-of-00006.bin",
296
+ "model.layers.38.input_layernorm.weight": "pytorch_model-00006-of-00006.bin",
297
+ "model.layers.38.mlp.down_proj.weight": "pytorch_model-00006-of-00006.bin",
298
+ "model.layers.38.mlp.gate_proj.weight": "pytorch_model-00005-of-00006.bin",
299
+ "model.layers.38.mlp.up_proj.weight": "pytorch_model-00006-of-00006.bin",
300
+ "model.layers.38.post_attention_layernorm.weight": "pytorch_model-00006-of-00006.bin",
301
+ "model.layers.38.self_attn.k_proj.weight": "pytorch_model-00005-of-00006.bin",
302
+ "model.layers.38.self_attn.o_proj.weight": "pytorch_model-00005-of-00006.bin",
303
+ "model.layers.38.self_attn.q_proj.weight": "pytorch_model-00005-of-00006.bin",
304
+ "model.layers.38.self_attn.v_proj.weight": "pytorch_model-00005-of-00006.bin",
305
+ "model.layers.39.input_layernorm.weight": "pytorch_model-00006-of-00006.bin",
306
+ "model.layers.39.mlp.down_proj.weight": "pytorch_model-00006-of-00006.bin",
307
+ "model.layers.39.mlp.gate_proj.weight": "pytorch_model-00006-of-00006.bin",
308
+ "model.layers.39.mlp.up_proj.weight": "pytorch_model-00006-of-00006.bin",
309
+ "model.layers.39.post_attention_layernorm.weight": "pytorch_model-00006-of-00006.bin",
310
+ "model.layers.39.self_attn.k_proj.weight": "pytorch_model-00006-of-00006.bin",
311
+ "model.layers.39.self_attn.o_proj.weight": "pytorch_model-00006-of-00006.bin",
312
+ "model.layers.39.self_attn.q_proj.weight": "pytorch_model-00006-of-00006.bin",
313
+ "model.layers.39.self_attn.v_proj.weight": "pytorch_model-00006-of-00006.bin",
314
+ "model.layers.4.input_layernorm.weight": "pytorch_model-00001-of-00006.bin",
315
+ "model.layers.4.mlp.down_proj.weight": "pytorch_model-00001-of-00006.bin",
316
+ "model.layers.4.mlp.gate_proj.weight": "pytorch_model-00001-of-00006.bin",
317
+ "model.layers.4.mlp.up_proj.weight": "pytorch_model-00001-of-00006.bin",
318
+ "model.layers.4.post_attention_layernorm.weight": "pytorch_model-00001-of-00006.bin",
319
+ "model.layers.4.self_attn.k_proj.weight": "pytorch_model-00001-of-00006.bin",
320
+ "model.layers.4.self_attn.o_proj.weight": "pytorch_model-00001-of-00006.bin",
321
+ "model.layers.4.self_attn.q_proj.weight": "pytorch_model-00001-of-00006.bin",
322
+ "model.layers.4.self_attn.v_proj.weight": "pytorch_model-00001-of-00006.bin",
323
+ "model.layers.5.input_layernorm.weight": "pytorch_model-00001-of-00006.bin",
324
+ "model.layers.5.mlp.down_proj.weight": "pytorch_model-00001-of-00006.bin",
325
+ "model.layers.5.mlp.gate_proj.weight": "pytorch_model-00001-of-00006.bin",
326
+ "model.layers.5.mlp.up_proj.weight": "pytorch_model-00001-of-00006.bin",
327
+ "model.layers.5.post_attention_layernorm.weight": "pytorch_model-00001-of-00006.bin",
328
+ "model.layers.5.self_attn.k_proj.weight": "pytorch_model-00001-of-00006.bin",
329
+ "model.layers.5.self_attn.o_proj.weight": "pytorch_model-00001-of-00006.bin",
330
+ "model.layers.5.self_attn.q_proj.weight": "pytorch_model-00001-of-00006.bin",
331
+ "model.layers.5.self_attn.v_proj.weight": "pytorch_model-00001-of-00006.bin",
332
+ "model.layers.6.input_layernorm.weight": "pytorch_model-00001-of-00006.bin",
333
+ "model.layers.6.mlp.down_proj.weight": "pytorch_model-00001-of-00006.bin",
334
+ "model.layers.6.mlp.gate_proj.weight": "pytorch_model-00001-of-00006.bin",
335
+ "model.layers.6.mlp.up_proj.weight": "pytorch_model-00001-of-00006.bin",
336
+ "model.layers.6.post_attention_layernorm.weight": "pytorch_model-00001-of-00006.bin",
337
+ "model.layers.6.self_attn.k_proj.weight": "pytorch_model-00001-of-00006.bin",
338
+ "model.layers.6.self_attn.o_proj.weight": "pytorch_model-00001-of-00006.bin",
339
+ "model.layers.6.self_attn.q_proj.weight": "pytorch_model-00001-of-00006.bin",
340
+ "model.layers.6.self_attn.v_proj.weight": "pytorch_model-00001-of-00006.bin",
341
+ "model.layers.7.input_layernorm.weight": "pytorch_model-00002-of-00006.bin",
342
+ "model.layers.7.mlp.down_proj.weight": "pytorch_model-00002-of-00006.bin",
343
+ "model.layers.7.mlp.gate_proj.weight": "pytorch_model-00002-of-00006.bin",
344
+ "model.layers.7.mlp.up_proj.weight": "pytorch_model-00002-of-00006.bin",
345
+ "model.layers.7.post_attention_layernorm.weight": "pytorch_model-00002-of-00006.bin",
346
+ "model.layers.7.self_attn.k_proj.weight": "pytorch_model-00001-of-00006.bin",
347
+ "model.layers.7.self_attn.o_proj.weight": "pytorch_model-00001-of-00006.bin",
348
+ "model.layers.7.self_attn.q_proj.weight": "pytorch_model-00001-of-00006.bin",
349
+ "model.layers.7.self_attn.v_proj.weight": "pytorch_model-00001-of-00006.bin",
350
+ "model.layers.8.input_layernorm.weight": "pytorch_model-00002-of-00006.bin",
351
+ "model.layers.8.mlp.down_proj.weight": "pytorch_model-00002-of-00006.bin",
352
+ "model.layers.8.mlp.gate_proj.weight": "pytorch_model-00002-of-00006.bin",
353
+ "model.layers.8.mlp.up_proj.weight": "pytorch_model-00002-of-00006.bin",
354
+ "model.layers.8.post_attention_layernorm.weight": "pytorch_model-00002-of-00006.bin",
355
+ "model.layers.8.self_attn.k_proj.weight": "pytorch_model-00002-of-00006.bin",
356
+ "model.layers.8.self_attn.o_proj.weight": "pytorch_model-00002-of-00006.bin",
357
+ "model.layers.8.self_attn.q_proj.weight": "pytorch_model-00002-of-00006.bin",
358
+ "model.layers.8.self_attn.v_proj.weight": "pytorch_model-00002-of-00006.bin",
359
+ "model.layers.9.input_layernorm.weight": "pytorch_model-00002-of-00006.bin",
360
+ "model.layers.9.mlp.down_proj.weight": "pytorch_model-00002-of-00006.bin",
361
+ "model.layers.9.mlp.gate_proj.weight": "pytorch_model-00002-of-00006.bin",
362
+ "model.layers.9.mlp.up_proj.weight": "pytorch_model-00002-of-00006.bin",
363
+ "model.layers.9.post_attention_layernorm.weight": "pytorch_model-00002-of-00006.bin",
364
+ "model.layers.9.self_attn.k_proj.weight": "pytorch_model-00002-of-00006.bin",
365
+ "model.layers.9.self_attn.o_proj.weight": "pytorch_model-00002-of-00006.bin",
366
+ "model.layers.9.self_attn.q_proj.weight": "pytorch_model-00002-of-00006.bin",
367
+ "model.layers.9.self_attn.v_proj.weight": "pytorch_model-00002-of-00006.bin",
368
+ "model.norm.weight": "pytorch_model-00006-of-00006.bin"
369
+ }
370
+ }
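The `weight_map` in pytorch_model.bin.index.json above maps each parameter name to the shard file that stores it; a sharded-checkpoint loader reads the index once, groups parameters by shard, and then opens each shard file a single time. A minimal sketch of that grouping step, using a small hypothetical subset of the real map:

```python
from collections import defaultdict

# Hypothetical subset of the "weight_map" from pytorch_model.bin.index.json above.
index = {
    "weight_map": {
        "model.embed_tokens.weight": "pytorch_model-00001-of-00006.bin",
        "model.layers.0.self_attn.q_proj.weight": "pytorch_model-00001-of-00006.bin",
        "model.norm.weight": "pytorch_model-00006-of-00006.bin",
    }
}

def params_by_shard(index):
    """Group parameter names by the shard file that contains them,
    so each shard only needs to be opened once when loading."""
    shards = defaultdict(list)
    for name, shard in index["weight_map"].items():
        shards[shard].append(name)
    return {shard: sorted(names) for shard, names in shards.items()}

print(params_by_shard(index))
```

This mirrors what `transformers` does internally when it sees an `*.index.json` file instead of a single checkpoint.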
special_tokens_map.json ADDED
@@ -0,0 +1,23 @@
+ {
+ "bos_token": {
+ "content": "<s>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false
+ },
+ "eos_token": {
+ "content": "</s>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false
+ },
+ "unk_token": {
+ "content": "<unk>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false
+ }
+ }
tokenizer.model ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:9e556afd44213b6bd1be2b850ebbbd98f5481437a8021afaf58ee7fb1818d347
+ size 499723
tokenizer_config.json ADDED
@@ -0,0 +1,37 @@
+ {
+ "add_bos_token": true,
+ "add_eos_token": false,
+ "bos_token": {
+ "__type": "AddedToken",
+ "content": "<s>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false
+ },
+ "clean_up_tokenization_spaces": false,
+ "eos_token": {
+ "__type": "AddedToken",
+ "content": "</s>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false
+ },
+ "legacy": false,
+ "model_max_length": 1000000000000000019884624838656,
+ "pad_token": null,
+ "padding_side": "right",
+ "sp_model_kwargs": {},
+ "spaces_between_special_tokens": false,
+ "tokenizer_class": "LlamaTokenizer",
+ "unk_token": {
+ "__type": "AddedToken",
+ "content": "<unk>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false
+ },
+ "use_default_system_prompt": true
+ }
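Per the tokenizer_config.json above, encoding prepends `<s>` (`add_bos_token: true`) but does not append `</s>` (`add_eos_token: false`). A minimal sketch of that BOS/EOS policy as a standalone helper (a hypothetical illustration, not the actual `LlamaTokenizer` code):

```python
# Settings mirrored from tokenizer_config.json above.
config = {
    "add_bos_token": True,
    "add_eos_token": False,
    "bos_token": "<s>",
    "eos_token": "</s>",
}

def wrap_special_tokens(tokens, cfg):
    """Apply the BOS/EOS policy from tokenizer_config.json to a token list."""
    out = list(tokens)
    if cfg["add_bos_token"]:
        out.insert(0, cfg["bos_token"])  # prepend <s>
    if cfg["add_eos_token"]:
        out.append(cfg["eos_token"])     # </s> is NOT appended with this config
    return out

print(wrap_special_tokens(["Hello", "world"], config))  # ['<s>', 'Hello', 'world']
```

With this config, generation relies on the model emitting `</s>` itself rather than the tokenizer adding it to prompts.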