huihui-ai committed on
Commit
dacb7d9
1 Parent(s): 1baef4a

Upload 23 files

README.md CHANGED
@@ -1,3 +1,191 @@
- ---
- license: apache-2.0
- ---
+ ---
+ extra_gated_heading: You need to share contact information with Meta to access this model
+ extra_gated_prompt: >-
+ ### LLAMA 2 COMMUNITY LICENSE AGREEMENT
+
+ "Agreement" means the terms and conditions for use, reproduction, distribution
+ and modification of the Llama Materials set forth herein.
+ "Documentation" means the specifications, manuals and documentation
+ accompanying Llama 2 distributed by Meta at
+ https://ai.meta.com/resources/models-and-libraries/llama-downloads/.
+ "Licensee" or "you" means you, or your employer or any other person or entity
+ (if you are entering into this Agreement on such person or entity's behalf),
+ of the age required under applicable laws, rules or regulations to provide
+ legal consent and that has legal authority to bind your employer or such other
+ person or entity if you are entering in this Agreement on their behalf.
+ "Llama 2" means the foundational large language models and software and
+ algorithms, including machine-learning model code, trained model weights,
+ inference-enabling code, training-enabling code, fine-tuning enabling code and
+ other elements of the foregoing distributed by Meta at
+ ai.meta.com/resources/models-and-libraries/llama-downloads/.
+ "Llama Materials" means, collectively, Meta's proprietary Llama 2 and
+ documentation (and any portion thereof) made available under this Agreement.
+ "Meta" or "we" means Meta Platforms Ireland Limited (if you are located in or,
+ if you are an entity, your principal place of business is in the EEA or
+ Switzerland) and Meta Platforms, Inc. (if you are located outside of the EEA
+ or Switzerland).
+ By clicking "I Accept" below or by using or distributing any portion or
+ element of the Llama Materials, you agree to be bound by this Agreement.
+ 1. License Rights and Redistribution.
+ a. Grant of Rights. You are granted a non-exclusive, worldwide, non-
+ transferable and royalty-free limited license under Meta's intellectual
+ property or other rights owned by Meta embodied in the Llama Materials to
+ use, reproduce, distribute, copy, create derivative works of, and make
+ modifications to the Llama Materials.
+
+ b. Redistribution and Use.
+ i. If you distribute or make the Llama Materials, or any derivative works
+ thereof, available to a third party, you shall provide a copy of this
+ Agreement to such third party.
+ ii. If you receive Llama Materials, or any derivative works thereof, from a
+ Licensee as part of an integrated end user product, then Section 2 of this
+ Agreement will not apply to you.
+ iii. You must retain in all copies of the Llama Materials that you distribute
+ the following attribution notice within a "Notice" text file distributed as a
+ part of such copies: "Llama 2 is licensed under the LLAMA 2 Community
+ License, Copyright (c) Meta Platforms, Inc. All Rights Reserved."
+ iv. Your use of the Llama Materials must comply with applicable laws and
+ regulations (including trade compliance laws and regulations) and adhere to
+ the Acceptable Use Policy for the Llama Materials (available at
+ https://ai.meta.com/llama/use-policy), which is hereby incorporated by
+ reference into this Agreement.
+ v. You will not use the Llama Materials or any output or results of the Llama
+ Materials to improve any other large language model (excluding Llama 2 or
+ derivative works thereof).
+ 2. Additional Commercial Terms. If, on the Llama 2 version release date, the
+ monthly active users of the products or services made available by or for
+ Licensee, or Licensee's affiliates, is greater than 700 million monthly
+ active users in the preceding calendar month, you must request a license from
+ Meta, which Meta may grant to you in its sole discretion, and you are not
+ authorized to exercise any of the rights under this Agreement unless or until
+ Meta otherwise expressly grants you such rights.
+ 3. Disclaimer of Warranty. UNLESS REQUIRED BY APPLICABLE LAW, THE LLAMA
+ MATERIALS AND ANY OUTPUT AND RESULTS THEREFROM ARE PROVIDED ON AN "AS IS"
+ BASIS, WITHOUT WARRANTIES OF ANY KIND, EITHER EXPRESS OR IMPLIED, INCLUDING,
+ WITHOUT LIMITATION, ANY WARRANTIES OF TITLE, NON-INFRINGEMENT,
+ MERCHANTABILITY, OR FITNESS FOR A PARTICULAR PURPOSE. YOU ARE SOLELY
+ RESPONSIBLE FOR DETERMINING THE APPROPRIATENESS OF USING OR REDISTRIBUTING
+ THE LLAMA MATERIALS AND ASSUME ANY RISKS ASSOCIATED WITH YOUR USE OF THE
+ LLAMA MATERIALS AND ANY OUTPUT AND RESULTS.
+ 4. Limitation of Liability. IN NO EVENT WILL META OR ITS AFFILIATES BE LIABLE
+ UNDER ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, TORT, NEGLIGENCE,
+ PRODUCTS LIABILITY, OR OTHERWISE, ARISING OUT OF THIS AGREEMENT, FOR ANY LOST
+ PROFITS OR ANY INDIRECT, SPECIAL, CONSEQUENTIAL, INCIDENTAL, EXEMPLARY OR
+ PUNITIVE DAMAGES, EVEN IF META OR ITS AFFILIATES HAVE BEEN ADVISED OF THE
+ POSSIBILITY OF ANY OF THE FOREGOING.
+ 5. Intellectual Property.
+ a. No trademark licenses are granted under this Agreement, and in connection
+ with the Llama Materials, neither Meta nor Licensee may use any name or mark
+ owned by or associated with the other or any of its affiliates, except as
+ required for reasonable and customary use in describing and redistributing
+ the Llama Materials.
+ b. Subject to Meta's ownership of Llama Materials and derivatives made by or
+ for Meta, with respect to any derivative works and modifications of the Llama
+ Materials that are made by you, as between you and Meta, you are and will be
+ the owner of such derivative works and modifications.
+ c. If you institute litigation or other proceedings against Meta or any
+ entity (including a cross-claim or counterclaim in a lawsuit) alleging that
+ the Llama Materials or Llama 2 outputs or results, or any portion of any of
+ the foregoing, constitutes infringement of intellectual property or other
+ rights owned or licensable by you, then any licenses granted to you under
+ this Agreement shall terminate as of the date such litigation or claim is
+ filed or instituted. You will indemnify and hold harmless Meta from and
+ against any claim by any third party arising out of or related to your use or
+ distribution of the Llama Materials.
+ 6. Term and Termination. The term of this Agreement will commence upon your
+ acceptance of this Agreement or access to the Llama Materials and will
+ continue in full force and effect until terminated in accordance with the
+ terms and conditions herein. Meta may terminate this Agreement if you are in
+ breach of any term or condition of this Agreement. Upon termination of this
+ Agreement, you shall delete and cease use of the Llama Materials. Sections 3,
+ 4 and 7 shall survive the termination of this Agreement.
+ 7. Governing Law and Jurisdiction. This Agreement will be governed and
+ construed under the laws of the State of California without regard to choice
+ of law principles, and the UN Convention on Contracts for the International
+ Sale of Goods does not apply to this Agreement. The courts of California
+ shall have exclusive jurisdiction of any dispute arising out of this
+ Agreement.
+ USE POLICY
+ ### Llama 2 Acceptable Use Policy
+ Meta is committed to promoting safe and fair use of its tools and features,
+ including Llama 2. If you access or use Llama 2, you agree to this Acceptable
+ Use Policy (“Policy”). The most recent copy of this policy can be found at
+ [ai.meta.com/llama/use-policy](http://ai.meta.com/llama/use-policy).
+ #### Prohibited Uses
+ We want everyone to use Llama 2 safely and responsibly. You agree you will not
+ use, or allow others to use, Llama 2 to:
+ 1. Violate the law or others’ rights, including to:
+ 1. Engage in, promote, generate, contribute to, encourage, plan, incite, or further illegal or unlawful activity or content, such as:
+ 1. Violence or terrorism
+ 2. Exploitation or harm to children, including the solicitation, creation, acquisition, or dissemination of child exploitative content or failure to report Child Sexual Abuse Material
+ 3. Human trafficking, exploitation, and sexual violence
+ 4. The illegal distribution of information or materials to minors, including obscene materials, or failure to employ legally required age-gating in connection with such information or materials.
+ 5. Sexual solicitation
+ 6. Any other criminal activity
+ 2. Engage in, promote, incite, or facilitate the harassment, abuse, threatening, or bullying of individuals or groups of individuals
+ 3. Engage in, promote, incite, or facilitate discrimination or other unlawful or harmful conduct in the provision of employment, employment benefits, credit, housing, other economic benefits, or other essential goods and services
+ 4. Engage in the unauthorized or unlicensed practice of any profession including, but not limited to, financial, legal, medical/health, or related professional practices
+ 5. Collect, process, disclose, generate, or infer health, demographic, or other sensitive personal or private information about individuals without rights and consents required by applicable laws
+ 6. Engage in or facilitate any action or generate any content that infringes, misappropriates, or otherwise violates any third-party rights, including the outputs or results of any products or services using the Llama 2 Materials
+ 7. Create, generate, or facilitate the creation of malicious code, malware, computer viruses or do anything else that could disable, overburden, interfere with or impair the proper working, integrity, operation or appearance of a website or computer system
+ 2. Engage in, promote, incite, facilitate, or assist in the planning or
+ development of activities that present a risk of death or bodily harm to
+ individuals, including use of Llama 2 related to the following:
+ 1. Military, warfare, nuclear industries or applications, espionage, use for materials or activities that are subject to the International Traffic Arms Regulations (ITAR) maintained by the United States Department of State
+ 2. Guns and illegal weapons (including weapon development)
+ 3. Illegal drugs and regulated/controlled substances
+ 4. Operation of critical infrastructure, transportation technologies, or heavy machinery
+ 5. Self-harm or harm to others, including suicide, cutting, and eating disorders
+ 6. Any content intended to incite or promote violence, abuse, or any infliction of bodily harm to an individual
+ 3. Intentionally deceive or mislead others, including use of Llama 2 related
+ to the following:
+ 1. Generating, promoting, or furthering fraud or the creation or promotion of disinformation
+ 2. Generating, promoting, or furthering defamatory content, including the creation of defamatory statements, images, or other content
+ 3. Generating, promoting, or further distributing spam
+ 4. Impersonating another individual without consent, authorization, or legal right
+ 5. Representing that the use of Llama 2 or outputs are human-generated
+ 6. Generating or facilitating false online engagement, including fake reviews and other means of fake online engagement
+ 4. Fail to appropriately disclose to end users any known dangers of your AI system
+ Please report any violation of this Policy, software “bug,” or other problems
+ that could lead to a violation of this Policy through one of the following
+ means:
+ * Reporting issues with the model:
+ [github.com/facebookresearch/llama](http://github.com/facebookresearch/llama)
+ * Reporting risky content generated by the model:
+ [developers.facebook.com/llama_output_feedback](http://developers.facebook.com/llama_output_feedback)
+ * Reporting bugs and security concerns:
+ [facebook.com/whitehat/info](http://facebook.com/whitehat/info)
+ * Reporting violations of the Acceptable Use Policy or unlicensed uses of
+ Llama: [LlamaUseReport@meta.com](mailto:LlamaUseReport@meta.com)
+ extra_gated_fields:
+ First Name: text
+ Last Name: text
+ Date of birth: date_picker
+ Country: country
+ Affiliation: text
+ geo: ip_location
+ By clicking Submit below I accept the terms of the license and acknowledge that the information I provide will be collected stored processed and shared in accordance with the Meta Privacy Policy: checkbox
+ extra_gated_description: The information you provide will be collected, stored, processed and shared in accordance with the [Meta Privacy Policy](https://www.facebook.com/privacy/policy/).
+ extra_gated_button_content: Submit
+ language:
+ - code
+ pipeline_tag: text-generation
+ base_model: meta-llama/CodeLlama-34b-Instruct-hf
+ tags:
+ - facebook
+ - meta
+ - pytorch
+ - llama
+ - llama-2
+ - abliterated
+ - uncensored
+ license: llama2
+ ---
+
+ # huihui-ai/CodeLlama-34b-Instruct-hf-abliterated
+
+
+ This is an uncensored version of [meta-llama/CodeLlama-34b-Instruct-hf](https://huggingface.co/meta-llama/CodeLlama-34b-Instruct-hf) created with abliteration (see [remove-refusals-with-transformers](https://github.com/Sumandora/remove-refusals-with-transformers) to learn more about the technique).
+ This is a crude, proof-of-concept implementation of removing refusals from an LLM without using TransformerLens.
+
+
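The README stops at the short description above, so purely as an illustration, here is a minimal transformers inference sketch for a repository laid out like this one. Everything beyond the repo id — dtype, device placement, generation settings — is an assumption, not something the upload specifies:

```python
# Minimal sketch (assumed usage, not part of the upload): load the shards and
# run one chat turn. Assumes transformers >= 4.34 (for apply_chat_template)
# and hardware able to hold a 34B-parameter model in bfloat16 (~67 GB of weights).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "huihui-ai/CodeLlama-34b-Instruct-hf-abliterated"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches torch_dtype in config.json below
    device_map="auto",           # spread layers across available GPUs
)

messages = [{"role": "user", "content": "Write a function that reverses a string."}]
input_ids = tokenizer.apply_chat_template(messages, return_tensors="pt").to(model.device)
output = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```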
added_tokens.json ADDED
@@ -0,0 +1,6 @@
+ {
+ "▁<EOT>": 32003,
+ "▁<MID>": 32001,
+ "▁<PRE>": 32000,
+ "▁<SUF>": 32002
+ }
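added_tokens.json appends Code Llama's four infilling markers directly after the 32,000-token base vocabulary. A small sketch (repo id assumed as above) to confirm the mapping; note the upload itself says nothing about whether the 34B Instruct weights were trained for infilling:

```python
# Sketch: check that the infilling specials resolve to IDs 32000-32003,
# mirroring added_tokens.json. Repo id is an assumption carried over from above.
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("huihui-ai/CodeLlama-34b-Instruct-hf-abliterated")
for token in ["▁<PRE>", "▁<MID>", "▁<SUF>", "▁<EOT>"]:
    print(token, tok.convert_tokens_to_ids(token))
```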
config.json ADDED
@@ -0,0 +1,25 @@
+ {
+ "architectures": [
+ "LlamaForCausalLM"
+ ],
+ "bos_token_id": 1,
+ "eos_token_id": 2,
+ "hidden_act": "silu",
+ "hidden_size": 8192,
+ "initializer_range": 0.02,
+ "intermediate_size": 22016,
+ "max_position_embeddings": 16384,
+ "model_type": "llama",
+ "num_attention_heads": 64,
+ "num_hidden_layers": 48,
+ "num_key_value_heads": 8,
+ "pretraining_tp": 1,
+ "rms_norm_eps": 1e-05,
+ "rope_scaling": null,
+ "rope_theta": 1000000,
+ "tie_word_embeddings": false,
+ "torch_dtype": "bfloat16",
+ "transformers_version": "4.32.0.dev0",
+ "use_cache": true,
+ "vocab_size": 32000
+ }
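The config pins the architecture down completely: 48 decoder layers, hidden size 8192, SwiGLU MLPs of width 22016, and grouped-query attention (64 query heads sharing 8 key/value heads). A back-of-envelope sketch of the implied parameter count, which lines up exactly with the total_size recorded in model.safetensors.index.json further down:

```python
# Parameter count implied by config.json. With GQA, k/v projections map
# 8192 -> num_key_value_heads * head_dim = 8 * 128 = 1024.
h, inter, layers, vocab = 8192, 22016, 48, 32000
kv_dim = 8 * (h // 64)                 # 1024

attn = 2 * h * h + 2 * h * kv_dim      # q/o full width, k/v reduced
mlp = 3 * h * inter                    # gate, up, down projections
norms = 2 * h                          # two RMSNorm weight vectors per layer

total = layers * (attn + mlp + norms) + 2 * vocab * h + h  # + embed, lm_head, final norm
print(total)      # 33743970304 parameters (~33.7B); embeddings untied, hence 2 * vocab * h
print(total * 2)  # 67487940608 bytes in bfloat16 -- the index's total_size
```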
generation_config.json ADDED
@@ -0,0 +1,6 @@
+ {
+ "_from_model_config": true,
+ "bos_token_id": 1,
+ "eos_token_id": 2,
+ "transformers_version": "4.32.0.dev0"
+ }
model-00001-of-00014.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:1f69dc5cad4051b95aef2e51a50a4f326e67b52d20b2f407643a37c35187ce08
+ size 4978740880
model-00002-of-00014.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e71e283ed347bb383d8b20c80f9373b39a702a799389db7158cfb4ca5730d2b5
+ size 4873882928
model-00003-of-00014.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:242234a65b1fa813745dc84c50c696d33eb77c72af5f6becf61973ddbf0ad42c
+ size 4815196024
model-00004-of-00014.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e6b7ae64e3ce84573e57d5fd59809f74934352db04f0be04d2da81dada800a71
+ size 4873882952
model-00005-of-00014.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:f3f4561989bdd1f9546f60ede6fb66c8e2417aa684f24d628a0bdc9856fab9f8
+ size 4815196056
model-00006-of-00014.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:15d02f6c14b505b1886f006924937fec54ee69a3c833e2876f135abf603bf465
+ size 4873882952
model-00007-of-00014.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:2a7a806f3967caa537259b08ebf5b019083743aab22d0857bb8cce80b8a7a08e
+ size 4815196056
model-00008-of-00014.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:bb69dfd8108e6253c62e3ce0bfdfa1469e06f604aa3df9205f4b6836fe275684
+ size 4873882952
model-00009-of-00014.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:aa0c0de9e513f476497f8bea3a67a4e444a0e3a5081fe76924e03f33c4e82916
+ size 4815196056
model-00010-of-00014.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:9fb2d8be58a3be388f9a8a70c287f2aa25e9ac0afc749976a9202823e14e24b3
+ size 4873882952
model-00011-of-00014.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a80a6795137ee2c4ae6ca148e500235b8778b28a9c35d7ee4594c2c725c0565e
+ size 4815196056
model-00012-of-00014.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:b1861e99dd7eb67df0a1ee0b2804ad74f6cbe4e093b06b84fae5e8cce496e1e2
+ size 4873882952
model-00013-of-00014.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:349965de3f3b6ee3551c7083f838631aec0b7a737094f4f04ce20c37d35b7ea7
+ size 4815196056
model-00014-of-00014.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:4adceabd441f18671cdd6a03791ac1601b62b4c8ad4a58b6e86e9853ff8a1ba4
+ size 4374776656
model.safetensors.index.json ADDED
@@ -0,0 +1,442 @@
+ {
+ "metadata": {
+ "total_size": 67487940608
+ },
+ "weight_map": {
+ "lm_head.weight": "model-00014-of-00014.safetensors",
+ "model.embed_tokens.weight": "model-00001-of-00014.safetensors",
+ "model.layers.0.input_layernorm.weight": "model-00001-of-00014.safetensors",
+ "model.layers.0.mlp.down_proj.weight": "model-00001-of-00014.safetensors",
+ "model.layers.0.mlp.gate_proj.weight": "model-00001-of-00014.safetensors",
+ "model.layers.0.mlp.up_proj.weight": "model-00001-of-00014.safetensors",
+ "model.layers.0.post_attention_layernorm.weight": "model-00001-of-00014.safetensors",
+ "model.layers.0.self_attn.k_proj.weight": "model-00001-of-00014.safetensors",
+ "model.layers.0.self_attn.o_proj.weight": "model-00001-of-00014.safetensors",
+ "model.layers.0.self_attn.q_proj.weight": "model-00001-of-00014.safetensors",
+ "model.layers.0.self_attn.v_proj.weight": "model-00001-of-00014.safetensors",
+ "model.layers.1.input_layernorm.weight": "model-00001-of-00014.safetensors",
+ "model.layers.1.mlp.down_proj.weight": "model-00001-of-00014.safetensors",
+ "model.layers.1.mlp.gate_proj.weight": "model-00001-of-00014.safetensors",
+ "model.layers.1.mlp.up_proj.weight": "model-00001-of-00014.safetensors",
+ "model.layers.1.post_attention_layernorm.weight": "model-00001-of-00014.safetensors",
+ "model.layers.1.self_attn.k_proj.weight": "model-00001-of-00014.safetensors",
+ "model.layers.1.self_attn.o_proj.weight": "model-00001-of-00014.safetensors",
+ "model.layers.1.self_attn.q_proj.weight": "model-00001-of-00014.safetensors",
+ "model.layers.1.self_attn.v_proj.weight": "model-00001-of-00014.safetensors",
+ "model.layers.10.input_layernorm.weight": "model-00004-of-00014.safetensors",
+ "model.layers.10.mlp.down_proj.weight": "model-00004-of-00014.safetensors",
+ "model.layers.10.mlp.gate_proj.weight": "model-00004-of-00014.safetensors",
+ "model.layers.10.mlp.up_proj.weight": "model-00004-of-00014.safetensors",
+ "model.layers.10.post_attention_layernorm.weight": "model-00004-of-00014.safetensors",
+ "model.layers.10.self_attn.k_proj.weight": "model-00003-of-00014.safetensors",
+ "model.layers.10.self_attn.o_proj.weight": "model-00003-of-00014.safetensors",
+ "model.layers.10.self_attn.q_proj.weight": "model-00003-of-00014.safetensors",
+ "model.layers.10.self_attn.v_proj.weight": "model-00003-of-00014.safetensors",
+ "model.layers.11.input_layernorm.weight": "model-00004-of-00014.safetensors",
+ "model.layers.11.mlp.down_proj.weight": "model-00004-of-00014.safetensors",
+ "model.layers.11.mlp.gate_proj.weight": "model-00004-of-00014.safetensors",
+ "model.layers.11.mlp.up_proj.weight": "model-00004-of-00014.safetensors",
+ "model.layers.11.post_attention_layernorm.weight": "model-00004-of-00014.safetensors",
+ "model.layers.11.self_attn.k_proj.weight": "model-00004-of-00014.safetensors",
+ "model.layers.11.self_attn.o_proj.weight": "model-00004-of-00014.safetensors",
+ "model.layers.11.self_attn.q_proj.weight": "model-00004-of-00014.safetensors",
+ "model.layers.11.self_attn.v_proj.weight": "model-00004-of-00014.safetensors",
+ "model.layers.12.input_layernorm.weight": "model-00004-of-00014.safetensors",
+ "model.layers.12.mlp.down_proj.weight": "model-00004-of-00014.safetensors",
+ "model.layers.12.mlp.gate_proj.weight": "model-00004-of-00014.safetensors",
+ "model.layers.12.mlp.up_proj.weight": "model-00004-of-00014.safetensors",
+ "model.layers.12.post_attention_layernorm.weight": "model-00004-of-00014.safetensors",
+ "model.layers.12.self_attn.k_proj.weight": "model-00004-of-00014.safetensors",
+ "model.layers.12.self_attn.o_proj.weight": "model-00004-of-00014.safetensors",
+ "model.layers.12.self_attn.q_proj.weight": "model-00004-of-00014.safetensors",
+ "model.layers.12.self_attn.v_proj.weight": "model-00004-of-00014.safetensors",
+ "model.layers.13.input_layernorm.weight": "model-00005-of-00014.safetensors",
+ "model.layers.13.mlp.down_proj.weight": "model-00005-of-00014.safetensors",
+ "model.layers.13.mlp.gate_proj.weight": "model-00004-of-00014.safetensors",
+ "model.layers.13.mlp.up_proj.weight": "model-00004-of-00014.safetensors",
+ "model.layers.13.post_attention_layernorm.weight": "model-00005-of-00014.safetensors",
+ "model.layers.13.self_attn.k_proj.weight": "model-00004-of-00014.safetensors",
+ "model.layers.13.self_attn.o_proj.weight": "model-00004-of-00014.safetensors",
+ "model.layers.13.self_attn.q_proj.weight": "model-00004-of-00014.safetensors",
+ "model.layers.13.self_attn.v_proj.weight": "model-00004-of-00014.safetensors",
+ "model.layers.14.input_layernorm.weight": "model-00005-of-00014.safetensors",
+ "model.layers.14.mlp.down_proj.weight": "model-00005-of-00014.safetensors",
+ "model.layers.14.mlp.gate_proj.weight": "model-00005-of-00014.safetensors",
+ "model.layers.14.mlp.up_proj.weight": "model-00005-of-00014.safetensors",
+ "model.layers.14.post_attention_layernorm.weight": "model-00005-of-00014.safetensors",
+ "model.layers.14.self_attn.k_proj.weight": "model-00005-of-00014.safetensors",
+ "model.layers.14.self_attn.o_proj.weight": "model-00005-of-00014.safetensors",
+ "model.layers.14.self_attn.q_proj.weight": "model-00005-of-00014.safetensors",
+ "model.layers.14.self_attn.v_proj.weight": "model-00005-of-00014.safetensors",
+ "model.layers.15.input_layernorm.weight": "model-00005-of-00014.safetensors",
+ "model.layers.15.mlp.down_proj.weight": "model-00005-of-00014.safetensors",
+ "model.layers.15.mlp.gate_proj.weight": "model-00005-of-00014.safetensors",
+ "model.layers.15.mlp.up_proj.weight": "model-00005-of-00014.safetensors",
+ "model.layers.15.post_attention_layernorm.weight": "model-00005-of-00014.safetensors",
+ "model.layers.15.self_attn.k_proj.weight": "model-00005-of-00014.safetensors",
+ "model.layers.15.self_attn.o_proj.weight": "model-00005-of-00014.safetensors",
+ "model.layers.15.self_attn.q_proj.weight": "model-00005-of-00014.safetensors",
+ "model.layers.15.self_attn.v_proj.weight": "model-00005-of-00014.safetensors",
+ "model.layers.16.input_layernorm.weight": "model-00005-of-00014.safetensors",
+ "model.layers.16.mlp.down_proj.weight": "model-00005-of-00014.safetensors",
+ "model.layers.16.mlp.gate_proj.weight": "model-00005-of-00014.safetensors",
+ "model.layers.16.mlp.up_proj.weight": "model-00005-of-00014.safetensors",
+ "model.layers.16.post_attention_layernorm.weight": "model-00005-of-00014.safetensors",
+ "model.layers.16.self_attn.k_proj.weight": "model-00005-of-00014.safetensors",
+ "model.layers.16.self_attn.o_proj.weight": "model-00005-of-00014.safetensors",
+ "model.layers.16.self_attn.q_proj.weight": "model-00005-of-00014.safetensors",
+ "model.layers.16.self_attn.v_proj.weight": "model-00005-of-00014.safetensors",
+ "model.layers.17.input_layernorm.weight": "model-00006-of-00014.safetensors",
+ "model.layers.17.mlp.down_proj.weight": "model-00006-of-00014.safetensors",
+ "model.layers.17.mlp.gate_proj.weight": "model-00006-of-00014.safetensors",
+ "model.layers.17.mlp.up_proj.weight": "model-00006-of-00014.safetensors",
+ "model.layers.17.post_attention_layernorm.weight": "model-00006-of-00014.safetensors",
+ "model.layers.17.self_attn.k_proj.weight": "model-00005-of-00014.safetensors",
+ "model.layers.17.self_attn.o_proj.weight": "model-00005-of-00014.safetensors",
+ "model.layers.17.self_attn.q_proj.weight": "model-00005-of-00014.safetensors",
+ "model.layers.17.self_attn.v_proj.weight": "model-00005-of-00014.safetensors",
+ "model.layers.18.input_layernorm.weight": "model-00006-of-00014.safetensors",
+ "model.layers.18.mlp.down_proj.weight": "model-00006-of-00014.safetensors",
+ "model.layers.18.mlp.gate_proj.weight": "model-00006-of-00014.safetensors",
+ "model.layers.18.mlp.up_proj.weight": "model-00006-of-00014.safetensors",
+ "model.layers.18.post_attention_layernorm.weight": "model-00006-of-00014.safetensors",
+ "model.layers.18.self_attn.k_proj.weight": "model-00006-of-00014.safetensors",
+ "model.layers.18.self_attn.o_proj.weight": "model-00006-of-00014.safetensors",
+ "model.layers.18.self_attn.q_proj.weight": "model-00006-of-00014.safetensors",
+ "model.layers.18.self_attn.v_proj.weight": "model-00006-of-00014.safetensors",
+ "model.layers.19.input_layernorm.weight": "model-00006-of-00014.safetensors",
+ "model.layers.19.mlp.down_proj.weight": "model-00006-of-00014.safetensors",
+ "model.layers.19.mlp.gate_proj.weight": "model-00006-of-00014.safetensors",
+ "model.layers.19.mlp.up_proj.weight": "model-00006-of-00014.safetensors",
+ "model.layers.19.post_attention_layernorm.weight": "model-00006-of-00014.safetensors",
+ "model.layers.19.self_attn.k_proj.weight": "model-00006-of-00014.safetensors",
+ "model.layers.19.self_attn.o_proj.weight": "model-00006-of-00014.safetensors",
+ "model.layers.19.self_attn.q_proj.weight": "model-00006-of-00014.safetensors",
+ "model.layers.19.self_attn.v_proj.weight": "model-00006-of-00014.safetensors",
+ "model.layers.2.input_layernorm.weight": "model-00001-of-00014.safetensors",
+ "model.layers.2.mlp.down_proj.weight": "model-00001-of-00014.safetensors",
+ "model.layers.2.mlp.gate_proj.weight": "model-00001-of-00014.safetensors",
+ "model.layers.2.mlp.up_proj.weight": "model-00001-of-00014.safetensors",
+ "model.layers.2.post_attention_layernorm.weight": "model-00001-of-00014.safetensors",
+ "model.layers.2.self_attn.k_proj.weight": "model-00001-of-00014.safetensors",
+ "model.layers.2.self_attn.o_proj.weight": "model-00001-of-00014.safetensors",
+ "model.layers.2.self_attn.q_proj.weight": "model-00001-of-00014.safetensors",
+ "model.layers.2.self_attn.v_proj.weight": "model-00001-of-00014.safetensors",
+ "model.layers.20.input_layernorm.weight": "model-00007-of-00014.safetensors",
+ "model.layers.20.mlp.down_proj.weight": "model-00007-of-00014.safetensors",
+ "model.layers.20.mlp.gate_proj.weight": "model-00006-of-00014.safetensors",
+ "model.layers.20.mlp.up_proj.weight": "model-00006-of-00014.safetensors",
+ "model.layers.20.post_attention_layernorm.weight": "model-00007-of-00014.safetensors",
+ "model.layers.20.self_attn.k_proj.weight": "model-00006-of-00014.safetensors",
+ "model.layers.20.self_attn.o_proj.weight": "model-00006-of-00014.safetensors",
+ "model.layers.20.self_attn.q_proj.weight": "model-00006-of-00014.safetensors",
+ "model.layers.20.self_attn.v_proj.weight": "model-00006-of-00014.safetensors",
+ "model.layers.21.input_layernorm.weight": "model-00007-of-00014.safetensors",
+ "model.layers.21.mlp.down_proj.weight": "model-00007-of-00014.safetensors",
+ "model.layers.21.mlp.gate_proj.weight": "model-00007-of-00014.safetensors",
+ "model.layers.21.mlp.up_proj.weight": "model-00007-of-00014.safetensors",
+ "model.layers.21.post_attention_layernorm.weight": "model-00007-of-00014.safetensors",
+ "model.layers.21.self_attn.k_proj.weight": "model-00007-of-00014.safetensors",
+ "model.layers.21.self_attn.o_proj.weight": "model-00007-of-00014.safetensors",
+ "model.layers.21.self_attn.q_proj.weight": "model-00007-of-00014.safetensors",
+ "model.layers.21.self_attn.v_proj.weight": "model-00007-of-00014.safetensors",
+ "model.layers.22.input_layernorm.weight": "model-00007-of-00014.safetensors",
+ "model.layers.22.mlp.down_proj.weight": "model-00007-of-00014.safetensors",
+ "model.layers.22.mlp.gate_proj.weight": "model-00007-of-00014.safetensors",
+ "model.layers.22.mlp.up_proj.weight": "model-00007-of-00014.safetensors",
+ "model.layers.22.post_attention_layernorm.weight": "model-00007-of-00014.safetensors",
+ "model.layers.22.self_attn.k_proj.weight": "model-00007-of-00014.safetensors",
+ "model.layers.22.self_attn.o_proj.weight": "model-00007-of-00014.safetensors",
+ "model.layers.22.self_attn.q_proj.weight": "model-00007-of-00014.safetensors",
+ "model.layers.22.self_attn.v_proj.weight": "model-00007-of-00014.safetensors",
+ "model.layers.23.input_layernorm.weight": "model-00007-of-00014.safetensors",
+ "model.layers.23.mlp.down_proj.weight": "model-00007-of-00014.safetensors",
+ "model.layers.23.mlp.gate_proj.weight": "model-00007-of-00014.safetensors",
+ "model.layers.23.mlp.up_proj.weight": "model-00007-of-00014.safetensors",
+ "model.layers.23.post_attention_layernorm.weight": "model-00007-of-00014.safetensors",
+ "model.layers.23.self_attn.k_proj.weight": "model-00007-of-00014.safetensors",
+ "model.layers.23.self_attn.o_proj.weight": "model-00007-of-00014.safetensors",
+ "model.layers.23.self_attn.q_proj.weight": "model-00007-of-00014.safetensors",
+ "model.layers.23.self_attn.v_proj.weight": "model-00007-of-00014.safetensors",
+ "model.layers.24.input_layernorm.weight": "model-00008-of-00014.safetensors",
+ "model.layers.24.mlp.down_proj.weight": "model-00008-of-00014.safetensors",
+ "model.layers.24.mlp.gate_proj.weight": "model-00008-of-00014.safetensors",
+ "model.layers.24.mlp.up_proj.weight": "model-00008-of-00014.safetensors",
+ "model.layers.24.post_attention_layernorm.weight": "model-00008-of-00014.safetensors",
+ "model.layers.24.self_attn.k_proj.weight": "model-00007-of-00014.safetensors",
+ "model.layers.24.self_attn.o_proj.weight": "model-00007-of-00014.safetensors",
+ "model.layers.24.self_attn.q_proj.weight": "model-00007-of-00014.safetensors",
+ "model.layers.24.self_attn.v_proj.weight": "model-00007-of-00014.safetensors",
+ "model.layers.25.input_layernorm.weight": "model-00008-of-00014.safetensors",
+ "model.layers.25.mlp.down_proj.weight": "model-00008-of-00014.safetensors",
+ "model.layers.25.mlp.gate_proj.weight": "model-00008-of-00014.safetensors",
+ "model.layers.25.mlp.up_proj.weight": "model-00008-of-00014.safetensors",
+ "model.layers.25.post_attention_layernorm.weight": "model-00008-of-00014.safetensors",
+ "model.layers.25.self_attn.k_proj.weight": "model-00008-of-00014.safetensors",
+ "model.layers.25.self_attn.o_proj.weight": "model-00008-of-00014.safetensors",
+ "model.layers.25.self_attn.q_proj.weight": "model-00008-of-00014.safetensors",
+ "model.layers.25.self_attn.v_proj.weight": "model-00008-of-00014.safetensors",
+ "model.layers.26.input_layernorm.weight": "model-00008-of-00014.safetensors",
+ "model.layers.26.mlp.down_proj.weight": "model-00008-of-00014.safetensors",
+ "model.layers.26.mlp.gate_proj.weight": "model-00008-of-00014.safetensors",
+ "model.layers.26.mlp.up_proj.weight": "model-00008-of-00014.safetensors",
+ "model.layers.26.post_attention_layernorm.weight": "model-00008-of-00014.safetensors",
+ "model.layers.26.self_attn.k_proj.weight": "model-00008-of-00014.safetensors",
+ "model.layers.26.self_attn.o_proj.weight": "model-00008-of-00014.safetensors",
+ "model.layers.26.self_attn.q_proj.weight": "model-00008-of-00014.safetensors",
+ "model.layers.26.self_attn.v_proj.weight": "model-00008-of-00014.safetensors",
+ "model.layers.27.input_layernorm.weight": "model-00009-of-00014.safetensors",
+ "model.layers.27.mlp.down_proj.weight": "model-00009-of-00014.safetensors",
+ "model.layers.27.mlp.gate_proj.weight": "model-00008-of-00014.safetensors",
+ "model.layers.27.mlp.up_proj.weight": "model-00008-of-00014.safetensors",
+ "model.layers.27.post_attention_layernorm.weight": "model-00009-of-00014.safetensors",
+ "model.layers.27.self_attn.k_proj.weight": "model-00008-of-00014.safetensors",
+ "model.layers.27.self_attn.o_proj.weight": "model-00008-of-00014.safetensors",
+ "model.layers.27.self_attn.q_proj.weight": "model-00008-of-00014.safetensors",
+ "model.layers.27.self_attn.v_proj.weight": "model-00008-of-00014.safetensors",
+ "model.layers.28.input_layernorm.weight": "model-00009-of-00014.safetensors",
+ "model.layers.28.mlp.down_proj.weight": "model-00009-of-00014.safetensors",
+ "model.layers.28.mlp.gate_proj.weight": "model-00009-of-00014.safetensors",
+ "model.layers.28.mlp.up_proj.weight": "model-00009-of-00014.safetensors",
+ "model.layers.28.post_attention_layernorm.weight": "model-00009-of-00014.safetensors",
+ "model.layers.28.self_attn.k_proj.weight": "model-00009-of-00014.safetensors",
+ "model.layers.28.self_attn.o_proj.weight": "model-00009-of-00014.safetensors",
+ "model.layers.28.self_attn.q_proj.weight": "model-00009-of-00014.safetensors",
+ "model.layers.28.self_attn.v_proj.weight": "model-00009-of-00014.safetensors",
+ "model.layers.29.input_layernorm.weight": "model-00009-of-00014.safetensors",
+ "model.layers.29.mlp.down_proj.weight": "model-00009-of-00014.safetensors",
+ "model.layers.29.mlp.gate_proj.weight": "model-00009-of-00014.safetensors",
+ "model.layers.29.mlp.up_proj.weight": "model-00009-of-00014.safetensors",
+ "model.layers.29.post_attention_layernorm.weight": "model-00009-of-00014.safetensors",
+ "model.layers.29.self_attn.k_proj.weight": "model-00009-of-00014.safetensors",
+ "model.layers.29.self_attn.o_proj.weight": "model-00009-of-00014.safetensors",
+ "model.layers.29.self_attn.q_proj.weight": "model-00009-of-00014.safetensors",
+ "model.layers.29.self_attn.v_proj.weight": "model-00009-of-00014.safetensors",
+ "model.layers.3.input_layernorm.weight": "model-00002-of-00014.safetensors",
+ "model.layers.3.mlp.down_proj.weight": "model-00002-of-00014.safetensors",
+ "model.layers.3.mlp.gate_proj.weight": "model-00002-of-00014.safetensors",
+ "model.layers.3.mlp.up_proj.weight": "model-00002-of-00014.safetensors",
+ "model.layers.3.post_attention_layernorm.weight": "model-00002-of-00014.safetensors",
+ "model.layers.3.self_attn.k_proj.weight": "model-00001-of-00014.safetensors",
+ "model.layers.3.self_attn.o_proj.weight": "model-00001-of-00014.safetensors",
+ "model.layers.3.self_attn.q_proj.weight": "model-00001-of-00014.safetensors",
+ "model.layers.3.self_attn.v_proj.weight": "model-00001-of-00014.safetensors",
+ "model.layers.30.input_layernorm.weight": "model-00009-of-00014.safetensors",
+ "model.layers.30.mlp.down_proj.weight": "model-00009-of-00014.safetensors",
+ "model.layers.30.mlp.gate_proj.weight": "model-00009-of-00014.safetensors",
+ "model.layers.30.mlp.up_proj.weight": "model-00009-of-00014.safetensors",
+ "model.layers.30.post_attention_layernorm.weight": "model-00009-of-00014.safetensors",
+ "model.layers.30.self_attn.k_proj.weight": "model-00009-of-00014.safetensors",
+ "model.layers.30.self_attn.o_proj.weight": "model-00009-of-00014.safetensors",
+ "model.layers.30.self_attn.q_proj.weight": "model-00009-of-00014.safetensors",
+ "model.layers.30.self_attn.v_proj.weight": "model-00009-of-00014.safetensors",
+ "model.layers.31.input_layernorm.weight": "model-00010-of-00014.safetensors",
+ "model.layers.31.mlp.down_proj.weight": "model-00010-of-00014.safetensors",
+ "model.layers.31.mlp.gate_proj.weight": "model-00010-of-00014.safetensors",
+ "model.layers.31.mlp.up_proj.weight": "model-00010-of-00014.safetensors",
+ "model.layers.31.post_attention_layernorm.weight": "model-00010-of-00014.safetensors",
+ "model.layers.31.self_attn.k_proj.weight": "model-00009-of-00014.safetensors",
+ "model.layers.31.self_attn.o_proj.weight": "model-00009-of-00014.safetensors",
+ "model.layers.31.self_attn.q_proj.weight": "model-00009-of-00014.safetensors",
+ "model.layers.31.self_attn.v_proj.weight": "model-00009-of-00014.safetensors",
+ "model.layers.32.input_layernorm.weight": "model-00010-of-00014.safetensors",
+ "model.layers.32.mlp.down_proj.weight": "model-00010-of-00014.safetensors",
+ "model.layers.32.mlp.gate_proj.weight": "model-00010-of-00014.safetensors",
+ "model.layers.32.mlp.up_proj.weight": "model-00010-of-00014.safetensors",
+ "model.layers.32.post_attention_layernorm.weight": "model-00010-of-00014.safetensors",
+ "model.layers.32.self_attn.k_proj.weight": "model-00010-of-00014.safetensors",
+ "model.layers.32.self_attn.o_proj.weight": "model-00010-of-00014.safetensors",
+ "model.layers.32.self_attn.q_proj.weight": "model-00010-of-00014.safetensors",
+ "model.layers.32.self_attn.v_proj.weight": "model-00010-of-00014.safetensors",
+ "model.layers.33.input_layernorm.weight": "model-00010-of-00014.safetensors",
+ "model.layers.33.mlp.down_proj.weight": "model-00010-of-00014.safetensors",
+ "model.layers.33.mlp.gate_proj.weight": "model-00010-of-00014.safetensors",
+ "model.layers.33.mlp.up_proj.weight": "model-00010-of-00014.safetensors",
+ "model.layers.33.post_attention_layernorm.weight": "model-00010-of-00014.safetensors",
+ "model.layers.33.self_attn.k_proj.weight": "model-00010-of-00014.safetensors",
+ "model.layers.33.self_attn.o_proj.weight": "model-00010-of-00014.safetensors",
+ "model.layers.33.self_attn.q_proj.weight": "model-00010-of-00014.safetensors",
+ "model.layers.33.self_attn.v_proj.weight": "model-00010-of-00014.safetensors",
+ "model.layers.34.input_layernorm.weight": "model-00011-of-00014.safetensors",
+ "model.layers.34.mlp.down_proj.weight": "model-00011-of-00014.safetensors",
+ "model.layers.34.mlp.gate_proj.weight": "model-00010-of-00014.safetensors",
+ "model.layers.34.mlp.up_proj.weight": "model-00010-of-00014.safetensors",
+ "model.layers.34.post_attention_layernorm.weight": "model-00011-of-00014.safetensors",
+ "model.layers.34.self_attn.k_proj.weight": "model-00010-of-00014.safetensors",
+ "model.layers.34.self_attn.o_proj.weight": "model-00010-of-00014.safetensors",
+ "model.layers.34.self_attn.q_proj.weight": "model-00010-of-00014.safetensors",
+ "model.layers.34.self_attn.v_proj.weight": "model-00010-of-00014.safetensors",
+ "model.layers.35.input_layernorm.weight": "model-00011-of-00014.safetensors",
+ "model.layers.35.mlp.down_proj.weight": "model-00011-of-00014.safetensors",
+ "model.layers.35.mlp.gate_proj.weight": "model-00011-of-00014.safetensors",
+ "model.layers.35.mlp.up_proj.weight": "model-00011-of-00014.safetensors",
+ "model.layers.35.post_attention_layernorm.weight": "model-00011-of-00014.safetensors",
+ "model.layers.35.self_attn.k_proj.weight": "model-00011-of-00014.safetensors",
+ "model.layers.35.self_attn.o_proj.weight": "model-00011-of-00014.safetensors",
+ "model.layers.35.self_attn.q_proj.weight": "model-00011-of-00014.safetensors",
+ "model.layers.35.self_attn.v_proj.weight": "model-00011-of-00014.safetensors",
+ "model.layers.36.input_layernorm.weight": "model-00011-of-00014.safetensors",
+ "model.layers.36.mlp.down_proj.weight": "model-00011-of-00014.safetensors",
+ "model.layers.36.mlp.gate_proj.weight": "model-00011-of-00014.safetensors",
+ "model.layers.36.mlp.up_proj.weight": "model-00011-of-00014.safetensors",
+ "model.layers.36.post_attention_layernorm.weight": "model-00011-of-00014.safetensors",
+ "model.layers.36.self_attn.k_proj.weight": "model-00011-of-00014.safetensors",
+ "model.layers.36.self_attn.o_proj.weight": "model-00011-of-00014.safetensors",
+ "model.layers.36.self_attn.q_proj.weight": "model-00011-of-00014.safetensors",
+ "model.layers.36.self_attn.v_proj.weight": "model-00011-of-00014.safetensors",
+ "model.layers.37.input_layernorm.weight": "model-00011-of-00014.safetensors",
+ "model.layers.37.mlp.down_proj.weight": "model-00011-of-00014.safetensors",
+ "model.layers.37.mlp.gate_proj.weight": "model-00011-of-00014.safetensors",
+ "model.layers.37.mlp.up_proj.weight": "model-00011-of-00014.safetensors",
+ "model.layers.37.post_attention_layernorm.weight": "model-00011-of-00014.safetensors",
+ "model.layers.37.self_attn.k_proj.weight": "model-00011-of-00014.safetensors",
+ "model.layers.37.self_attn.o_proj.weight": "model-00011-of-00014.safetensors",
+ "model.layers.37.self_attn.q_proj.weight": "model-00011-of-00014.safetensors",
+ "model.layers.37.self_attn.v_proj.weight": "model-00011-of-00014.safetensors",
+ "model.layers.38.input_layernorm.weight": "model-00012-of-00014.safetensors",
+ "model.layers.38.mlp.down_proj.weight": "model-00012-of-00014.safetensors",
+ "model.layers.38.mlp.gate_proj.weight": "model-00012-of-00014.safetensors",
+ "model.layers.38.mlp.up_proj.weight": "model-00012-of-00014.safetensors",
+ "model.layers.38.post_attention_layernorm.weight": "model-00012-of-00014.safetensors",
+ "model.layers.38.self_attn.k_proj.weight": "model-00011-of-00014.safetensors",
+ "model.layers.38.self_attn.o_proj.weight": "model-00011-of-00014.safetensors",
+ "model.layers.38.self_attn.q_proj.weight": "model-00011-of-00014.safetensors",
+ "model.layers.38.self_attn.v_proj.weight": "model-00011-of-00014.safetensors",
+ "model.layers.39.input_layernorm.weight": "model-00012-of-00014.safetensors",
+ "model.layers.39.mlp.down_proj.weight": "model-00012-of-00014.safetensors",
+ "model.layers.39.mlp.gate_proj.weight": "model-00012-of-00014.safetensors",
+ "model.layers.39.mlp.up_proj.weight": "model-00012-of-00014.safetensors",
+ "model.layers.39.post_attention_layernorm.weight": "model-00012-of-00014.safetensors",
+ "model.layers.39.self_attn.k_proj.weight": "model-00012-of-00014.safetensors",
+ "model.layers.39.self_attn.o_proj.weight": "model-00012-of-00014.safetensors",
+ "model.layers.39.self_attn.q_proj.weight": "model-00012-of-00014.safetensors",
+ "model.layers.39.self_attn.v_proj.weight": "model-00012-of-00014.safetensors",
+ "model.layers.4.input_layernorm.weight": "model-00002-of-00014.safetensors",
+ "model.layers.4.mlp.down_proj.weight": "model-00002-of-00014.safetensors",
+ "model.layers.4.mlp.gate_proj.weight": "model-00002-of-00014.safetensors",
+ "model.layers.4.mlp.up_proj.weight": "model-00002-of-00014.safetensors",
+ "model.layers.4.post_attention_layernorm.weight": "model-00002-of-00014.safetensors",
+ "model.layers.4.self_attn.k_proj.weight": "model-00002-of-00014.safetensors",
+ "model.layers.4.self_attn.o_proj.weight": "model-00002-of-00014.safetensors",
+ "model.layers.4.self_attn.q_proj.weight": "model-00002-of-00014.safetensors",
+ "model.layers.4.self_attn.v_proj.weight": "model-00002-of-00014.safetensors",
+ "model.layers.40.input_layernorm.weight": "model-00012-of-00014.safetensors",
+ "model.layers.40.mlp.down_proj.weight": "model-00012-of-00014.safetensors",
+ "model.layers.40.mlp.gate_proj.weight": "model-00012-of-00014.safetensors",
+ "model.layers.40.mlp.up_proj.weight": "model-00012-of-00014.safetensors",
+ "model.layers.40.post_attention_layernorm.weight": "model-00012-of-00014.safetensors",
+ "model.layers.40.self_attn.k_proj.weight": "model-00012-of-00014.safetensors",
+ "model.layers.40.self_attn.o_proj.weight": "model-00012-of-00014.safetensors",
+ "model.layers.40.self_attn.q_proj.weight": "model-00012-of-00014.safetensors",
+ "model.layers.40.self_attn.v_proj.weight": "model-00012-of-00014.safetensors",
+ "model.layers.41.input_layernorm.weight": "model-00013-of-00014.safetensors",
+ "model.layers.41.mlp.down_proj.weight": "model-00013-of-00014.safetensors",
+ "model.layers.41.mlp.gate_proj.weight": "model-00012-of-00014.safetensors",
+ "model.layers.41.mlp.up_proj.weight": "model-00012-of-00014.safetensors",
+ "model.layers.41.post_attention_layernorm.weight": "model-00013-of-00014.safetensors",
+ "model.layers.41.self_attn.k_proj.weight": "model-00012-of-00014.safetensors",
+ "model.layers.41.self_attn.o_proj.weight": "model-00012-of-00014.safetensors",
+ "model.layers.41.self_attn.q_proj.weight": "model-00012-of-00014.safetensors",
+ "model.layers.41.self_attn.v_proj.weight": "model-00012-of-00014.safetensors",
+ "model.layers.42.input_layernorm.weight": "model-00013-of-00014.safetensors",
+ "model.layers.42.mlp.down_proj.weight": "model-00013-of-00014.safetensors",
+ "model.layers.42.mlp.gate_proj.weight": "model-00013-of-00014.safetensors",
+ "model.layers.42.mlp.up_proj.weight": "model-00013-of-00014.safetensors",
+ "model.layers.42.post_attention_layernorm.weight": "model-00013-of-00014.safetensors",
+ "model.layers.42.self_attn.k_proj.weight": "model-00013-of-00014.safetensors",
+ "model.layers.42.self_attn.o_proj.weight": "model-00013-of-00014.safetensors",
+ "model.layers.42.self_attn.q_proj.weight": "model-00013-of-00014.safetensors",
+ "model.layers.42.self_attn.v_proj.weight": "model-00013-of-00014.safetensors",
+ "model.layers.43.input_layernorm.weight": "model-00013-of-00014.safetensors",
+ "model.layers.43.mlp.down_proj.weight": "model-00013-of-00014.safetensors",
+ "model.layers.43.mlp.gate_proj.weight": "model-00013-of-00014.safetensors",
+ "model.layers.43.mlp.up_proj.weight": "model-00013-of-00014.safetensors",
+ "model.layers.43.post_attention_layernorm.weight": "model-00013-of-00014.safetensors",
+ "model.layers.43.self_attn.k_proj.weight": "model-00013-of-00014.safetensors",
+ "model.layers.43.self_attn.o_proj.weight": "model-00013-of-00014.safetensors",
+ "model.layers.43.self_attn.q_proj.weight": "model-00013-of-00014.safetensors",
+ "model.layers.43.self_attn.v_proj.weight": "model-00013-of-00014.safetensors",
+ "model.layers.44.input_layernorm.weight": "model-00013-of-00014.safetensors",
+ "model.layers.44.mlp.down_proj.weight": "model-00013-of-00014.safetensors",
+ "model.layers.44.mlp.gate_proj.weight": "model-00013-of-00014.safetensors",
+ "model.layers.44.mlp.up_proj.weight": "model-00013-of-00014.safetensors",
+ "model.layers.44.post_attention_layernorm.weight": "model-00013-of-00014.safetensors",
+ "model.layers.44.self_attn.k_proj.weight": "model-00013-of-00014.safetensors",
+ "model.layers.44.self_attn.o_proj.weight": "model-00013-of-00014.safetensors",
+ "model.layers.44.self_attn.q_proj.weight": "model-00013-of-00014.safetensors",
+ "model.layers.44.self_attn.v_proj.weight": "model-00013-of-00014.safetensors",
+ "model.layers.45.input_layernorm.weight": "model-00014-of-00014.safetensors",
+ "model.layers.45.mlp.down_proj.weight": "model-00014-of-00014.safetensors",
+ "model.layers.45.mlp.gate_proj.weight": "model-00014-of-00014.safetensors",
+ "model.layers.45.mlp.up_proj.weight": "model-00014-of-00014.safetensors",
+ "model.layers.45.post_attention_layernorm.weight": "model-00014-of-00014.safetensors",
+ "model.layers.45.self_attn.k_proj.weight": "model-00013-of-00014.safetensors",
+ "model.layers.45.self_attn.o_proj.weight": "model-00013-of-00014.safetensors",
+ "model.layers.45.self_attn.q_proj.weight": "model-00013-of-00014.safetensors",
+ "model.layers.45.self_attn.v_proj.weight": "model-00013-of-00014.safetensors",
+ "model.layers.46.input_layernorm.weight": "model-00014-of-00014.safetensors",
+ "model.layers.46.mlp.down_proj.weight": "model-00014-of-00014.safetensors",
+ "model.layers.46.mlp.gate_proj.weight": "model-00014-of-00014.safetensors",
+ "model.layers.46.mlp.up_proj.weight": "model-00014-of-00014.safetensors",
+ "model.layers.46.post_attention_layernorm.weight": "model-00014-of-00014.safetensors",
+ "model.layers.46.self_attn.k_proj.weight": "model-00014-of-00014.safetensors",
+ "model.layers.46.self_attn.o_proj.weight": "model-00014-of-00014.safetensors",
+ "model.layers.46.self_attn.q_proj.weight": "model-00014-of-00014.safetensors",
+ "model.layers.46.self_attn.v_proj.weight": "model-00014-of-00014.safetensors",
+ "model.layers.47.input_layernorm.weight": "model-00014-of-00014.safetensors",
+ "model.layers.47.mlp.down_proj.weight": "model-00014-of-00014.safetensors",
+ "model.layers.47.mlp.gate_proj.weight": "model-00014-of-00014.safetensors",
+ "model.layers.47.mlp.up_proj.weight": "model-00014-of-00014.safetensors",
+ "model.layers.47.post_attention_layernorm.weight": "model-00014-of-00014.safetensors",
+ "model.layers.47.self_attn.k_proj.weight": "model-00014-of-00014.safetensors",
+ "model.layers.47.self_attn.o_proj.weight": "model-00014-of-00014.safetensors",
+ "model.layers.47.self_attn.q_proj.weight": "model-00014-of-00014.safetensors",
+ "model.layers.47.self_attn.v_proj.weight": "model-00014-of-00014.safetensors",
+ "model.layers.5.input_layernorm.weight": "model-00002-of-00014.safetensors",
+ "model.layers.5.mlp.down_proj.weight": "model-00002-of-00014.safetensors",
+ "model.layers.5.mlp.gate_proj.weight": "model-00002-of-00014.safetensors",
+ "model.layers.5.mlp.up_proj.weight": "model-00002-of-00014.safetensors",
+ "model.layers.5.post_attention_layernorm.weight": "model-00002-of-00014.safetensors",
+ "model.layers.5.self_attn.k_proj.weight": "model-00002-of-00014.safetensors",
+ "model.layers.5.self_attn.o_proj.weight": "model-00002-of-00014.safetensors",
+ "model.layers.5.self_attn.q_proj.weight": "model-00002-of-00014.safetensors",
+ "model.layers.5.self_attn.v_proj.weight": "model-00002-of-00014.safetensors",
+ "model.layers.6.input_layernorm.weight": "model-00003-of-00014.safetensors",
+ "model.layers.6.mlp.down_proj.weight": "model-00003-of-00014.safetensors",
+ "model.layers.6.mlp.gate_proj.weight": "model-00002-of-00014.safetensors",
+ "model.layers.6.mlp.up_proj.weight": "model-00002-of-00014.safetensors",
+ "model.layers.6.post_attention_layernorm.weight": "model-00003-of-00014.safetensors",
+ "model.layers.6.self_attn.k_proj.weight": "model-00002-of-00014.safetensors",
+ "model.layers.6.self_attn.o_proj.weight": "model-00002-of-00014.safetensors",
+ "model.layers.6.self_attn.q_proj.weight": "model-00002-of-00014.safetensors",
+ "model.layers.6.self_attn.v_proj.weight": "model-00002-of-00014.safetensors",
+ "model.layers.7.input_layernorm.weight": "model-00003-of-00014.safetensors",
+ "model.layers.7.mlp.down_proj.weight": "model-00003-of-00014.safetensors",
+ "model.layers.7.mlp.gate_proj.weight": "model-00003-of-00014.safetensors",
+ "model.layers.7.mlp.up_proj.weight": "model-00003-of-00014.safetensors",
+ "model.layers.7.post_attention_layernorm.weight": "model-00003-of-00014.safetensors",
+ "model.layers.7.self_attn.k_proj.weight": "model-00003-of-00014.safetensors",
+ "model.layers.7.self_attn.o_proj.weight": "model-00003-of-00014.safetensors",
+ "model.layers.7.self_attn.q_proj.weight": "model-00003-of-00014.safetensors",
+ "model.layers.7.self_attn.v_proj.weight": "model-00003-of-00014.safetensors",
+ "model.layers.8.input_layernorm.weight": "model-00003-of-00014.safetensors",
+ "model.layers.8.mlp.down_proj.weight": "model-00003-of-00014.safetensors",
+ "model.layers.8.mlp.gate_proj.weight": "model-00003-of-00014.safetensors",
+ "model.layers.8.mlp.up_proj.weight": "model-00003-of-00014.safetensors",
+ "model.layers.8.post_attention_layernorm.weight": "model-00003-of-00014.safetensors",
+ "model.layers.8.self_attn.k_proj.weight": "model-00003-of-00014.safetensors",
+ "model.layers.8.self_attn.o_proj.weight": "model-00003-of-00014.safetensors",
+ "model.layers.8.self_attn.q_proj.weight": "model-00003-of-00014.safetensors",
+ "model.layers.8.self_attn.v_proj.weight": "model-00003-of-00014.safetensors",
+ "model.layers.9.input_layernorm.weight": "model-00003-of-00014.safetensors",
+ "model.layers.9.mlp.down_proj.weight": "model-00003-of-00014.safetensors",
+ "model.layers.9.mlp.gate_proj.weight": "model-00003-of-00014.safetensors",
+ "model.layers.9.mlp.up_proj.weight": "model-00003-of-00014.safetensors",
+ "model.layers.9.post_attention_layernorm.weight": "model-00003-of-00014.safetensors",
+ "model.layers.9.self_attn.k_proj.weight": "model-00003-of-00014.safetensors",
+ "model.layers.9.self_attn.o_proj.weight": "model-00003-of-00014.safetensors",
+ "model.layers.9.self_attn.q_proj.weight": "model-00003-of-00014.safetensors",
+ "model.layers.9.self_attn.v_proj.weight": "model-00003-of-00014.safetensors",
+ "model.norm.weight": "model-00014-of-00014.safetensors"
+ }
+ }
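The index is what lets transformers pull individual tensors from the fourteen shards, and it shows that a layer's tensors can straddle a shard boundary. A small sketch of inspecting it directly (the local path is an assumption):

```python
# Sketch: inspect the shard index without loading any weights.
import json

with open("model.safetensors.index.json") as f:
    index = json.load(f)

# Layers can straddle shards: layer 13's gate_proj sits in shard 4,
# while its down_proj sits in shard 5.
print(index["weight_map"]["model.layers.13.mlp.gate_proj.weight"])
print(index["weight_map"]["model.layers.13.mlp.down_proj.weight"])

# total_size counts tensor bytes only; the .safetensors files on disk are
# each a few KB larger because of their embedded JSON headers.
print(index["metadata"]["total_size"])  # 67487940608
```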
special_tokens_map.json ADDED
@@ -0,0 +1,29 @@
+ {
+ "additional_special_tokens": [
+ "▁<PRE>",
+ "▁<MID>",
+ "▁<SUF>",
+ "▁<EOT>"
+ ],
+ "bos_token": {
+ "content": "<s>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false
+ },
+ "eos_token": {
+ "content": "</s>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false
+ },
+ "unk_token": {
+ "content": "<unk>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false
+ }
+ }
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer.model ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:9e556afd44213b6bd1be2b850ebbbd98f5481437a8021afaf58ee7fb1818d347
+ size 499723
tokenizer_config.json ADDED
@@ -0,0 +1,84 @@
+ {
+ "add_bos_token": true,
+ "add_eos_token": false,
+ "added_tokens_decoder": {
+ "0": {
+ "content": "<unk>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "1": {
+ "content": "<s>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "2": {
+ "content": "</s>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "32000": {
+ "content": "▁<PRE>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "32001": {
+ "content": "▁<MID>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "32002": {
+ "content": "▁<SUF>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "32003": {
+ "content": "▁<EOT>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ }
+ },
+ "additional_special_tokens": [
+ "▁<PRE>",
+ "▁<MID>",
+ "▁<SUF>",
+ "▁<EOT>"
+ ],
+ "bos_token": "<s>",
+ "chat_template": "{% if messages[0]['role'] == 'system' %}{% set loop_messages = messages[1:] %}{% set system_message = messages[0]['content'] %}{% else %}{% set loop_messages = messages %}{% set system_message = false %}{% endif %}{% for message in loop_messages %}{% if (message['role'] == 'user') != (loop.index0 % 2 == 0) %}{{ raise_exception('Conversation roles must alternate user/assistant/user/assistant/...') }}{% endif %}{% if loop.index0 == 0 and system_message != false %}{% set content = '<<SYS>>\\n' + system_message + '\\n<</SYS>>\\n\\n' + message['content'] %}{% else %}{% set content = message['content'] %}{% endif %}{% if message['role'] == 'user' %}{{ bos_token + '[INST] ' + content | trim + ' [/INST]' }}{% elif message['role'] == 'assistant' %}{{ ' ' + content | trim + ' ' + eos_token }}{% endif %}{% endfor %}",
+ "clean_up_tokenization_spaces": false,
+ "eos_token": "</s>",
+ "eot_token": "▁<EOT>",
+ "fill_token": "<FILL_ME>",
+ "legacy": null,
+ "middle_token": "▁<MID>",
+ "model_max_length": 1000000000000000019884624838656,
+ "pad_token": null,
+ "prefix_token": "▁<PRE>",
+ "sp_model_kwargs": {},
+ "suffix_token": "▁<SUF>",
+ "tokenizer_class": "CodeLlamaTokenizer",
+ "unk_token": "<unk>",
+ "use_default_system_prompt": false
+ }
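The chat_template above is the standard Llama-2 instruct format: an optional system message is folded into the first user turn between <<SYS>> markers, and each user/assistant pair is wrapped in [INST] ... [/INST]. A sketch of what it renders to (the expected output in the comment is traced by hand from the template, not captured from a run):

```python
# Sketch: render the chat template without tokenizing. Repo id assumed as above.
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("huihui-ai/CodeLlama-34b-Instruct-hf-abliterated")
messages = [
    {"role": "system", "content": "You are a helpful coding assistant."},
    {"role": "user", "content": "Sort a list of tuples by the second field."},
]
print(tok.apply_chat_template(messages, tokenize=False))
# Expected, traced from the template:
# <s>[INST] <<SYS>>
# You are a helpful coding assistant.
# <</SYS>>
#
# Sort a list of tuples by the second field. [/INST]
```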