mrseeker committed
Commit
3b1b326
1 Parent(s): b7d3e85

Initial Commit

LICENSE.md ADDED
@@ -0,0 +1,125 @@
+ LLAMA 2 COMMUNITY LICENSE AGREEMENT
+ Llama 2 Version Release Date: July 18, 2023
+
+ "Agreement" means the terms and conditions for use, reproduction, distribution and
+ modification of the Llama Materials set forth herein.
+
+ "Documentation" means the specifications, manuals and documentation
+ accompanying Llama 2 distributed by Meta at ai.meta.com/resources/models-and-
+ libraries/llama-downloads/.
+
+ "Licensee" or "you" means you, or your employer or any other person or entity (if
+ you are entering into this Agreement on such person or entity's behalf), of the age
+ required under applicable laws, rules or regulations to provide legal consent and that
+ has legal authority to bind your employer or such other person or entity if you are
+ entering in this Agreement on their behalf.
+
+ "Llama 2" means the foundational large language models and software and
+ algorithms, including machine-learning model code, trained model weights,
+ inference-enabling code, training-enabling code, fine-tuning enabling code and other
+ elements of the foregoing distributed by Meta at ai.meta.com/resources/models-and-
+ libraries/llama-downloads/.
+
+ "Llama Materials" means, collectively, Meta's proprietary Llama 2 and
+ Documentation (and any portion thereof) made available under this Agreement.
+
+ "Meta" or "we" means Meta Platforms Ireland Limited (if you are located in or, if you
+ are an entity, your principal place of business is in the EEA or Switzerland) and Meta
+ Platforms, Inc. (if you are located outside of the EEA or Switzerland).
+
+ By clicking "I Accept" below or by using or distributing any portion or element of the
+ Llama Materials, you agree to be bound by this Agreement.
+
+ 1. License Rights and Redistribution.
+
+ a. Grant of Rights. You are granted a non-exclusive, worldwide, non-
+ transferable and royalty-free limited license under Meta's intellectual property or
+ other rights owned by Meta embodied in the Llama Materials to use, reproduce,
+ distribute, copy, create derivative works of, and make modifications to the Llama
+ Materials.
+
+ b. Redistribution and Use.
+
+ i. If you distribute or make the Llama Materials, or any derivative works
+ thereof, available to a third party, you shall provide a copy of this Agreement to such
+ third party.
+ ii. If you receive Llama Materials, or any derivative works thereof, from
+ a Licensee as part of an integrated end user product, then Section 2 of this
+ Agreement will not apply to you.
+
+ iii. You must retain in all copies of the Llama Materials that you
+ distribute the following attribution notice within a "Notice" text file distributed as a
+ part of such copies: "Llama 2 is licensed under the LLAMA 2 Community License,
+ Copyright (c) Meta Platforms, Inc. All Rights Reserved."
+
+ iv. Your use of the Llama Materials must comply with applicable laws
+ and regulations (including trade compliance laws and regulations) and adhere to the
+ Acceptable Use Policy for the Llama Materials (available at
+ https://ai.meta.com/llama/use-policy), which is hereby incorporated by reference into
+ this Agreement.
+
+ v. You will not use the Llama Materials or any output or results of the
+ Llama Materials to improve any other large language model (excluding Llama 2 or
+ derivative works thereof).
+
+ 2. Additional Commercial Terms. If, on the Llama 2 version release date, the
+ monthly active users of the products or services made available by or for Licensee,
+ or Licensee's affiliates, is greater than 700 million monthly active users in the
+ preceding calendar month, you must request a license from Meta, which Meta may
+ grant to you in its sole discretion, and you are not authorized to exercise any of the
+ rights under this Agreement unless or until Meta otherwise expressly grants you
+ such rights.
+
+ 3. Disclaimer of Warranty. UNLESS REQUIRED BY APPLICABLE LAW, THE
+ LLAMA MATERIALS AND ANY OUTPUT AND RESULTS THEREFROM ARE
+ PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OF ANY KIND,
+ EITHER EXPRESS OR IMPLIED, INCLUDING, WITHOUT LIMITATION, ANY
+ WARRANTIES OF TITLE, NON-INFRINGEMENT, MERCHANTABILITY, OR
+ FITNESS FOR A PARTICULAR PURPOSE. YOU ARE SOLELY RESPONSIBLE
+ FOR DETERMINING THE APPROPRIATENESS OF USING OR REDISTRIBUTING
+ THE LLAMA MATERIALS AND ASSUME ANY RISKS ASSOCIATED WITH YOUR
+ USE OF THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS.
+
+ 4. Limitation of Liability. IN NO EVENT WILL META OR ITS AFFILIATES BE
+ LIABLE UNDER ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, TORT,
+ NEGLIGENCE, PRODUCTS LIABILITY, OR OTHERWISE, ARISING OUT OF THIS
+ AGREEMENT, FOR ANY LOST PROFITS OR ANY INDIRECT, SPECIAL,
+ CONSEQUENTIAL, INCIDENTAL, EXEMPLARY OR PUNITIVE DAMAGES, EVEN
+ IF META OR ITS AFFILIATES HAVE BEEN ADVISED OF THE POSSIBILITY OF
+ ANY OF THE FOREGOING.
+
+ 5. Intellectual Property.
+
+ a. No trademark licenses are granted under this Agreement, and in
+ connection with the Llama Materials, neither Meta nor Licensee may use any name
+ or mark owned by or associated with the other or any of its affiliates, except as
+ required for reasonable and customary use in describing and redistributing the
+ Llama Materials.
+
+ b. Subject to Meta's ownership of Llama Materials and derivatives made by or
+ for Meta, with respect to any derivative works and modifications of the Llama
+ Materials that are made by you, as between you and Meta, you are and will be the
+ owner of such derivative works and modifications.
+
+ c. If you institute litigation or other proceedings against Meta or any entity
+ (including a cross-claim or counterclaim in a lawsuit) alleging that the Llama
+ Materials or Llama 2 outputs or results, or any portion of any of the foregoing,
+ constitutes infringement of intellectual property or other rights owned or licensable
+ by you, then any licenses granted to you under this Agreement shall terminate as of
+ the date such litigation or claim is filed or instituted. You will indemnify and hold
+ harmless Meta from and against any claim by any third party arising out of or related
+ to your use or distribution of the Llama Materials.
+
+ 6. Term and Termination. The term of this Agreement will commence upon your
+ acceptance of this Agreement or access to the Llama Materials and will continue in
+ full force and effect until terminated in accordance with the terms and conditions
+ herein. Meta may terminate this Agreement if you are in breach of any term or
+ condition of this Agreement. Upon termination of this Agreement, you shall delete
+ and cease use of the Llama Materials. Sections 3, 4 and 7 shall survive the
+ termination of this Agreement.
+
+ 7. Governing Law and Jurisdiction. This Agreement will be governed and
+ construed under the laws of the State of California without regard to choice of law
+ principles, and the UN Convention on Contracts for the International Sale of Goods
+ does not apply to this Agreement. The courts of California shall have exclusive
+ jurisdiction of any dispute arising out of this Agreement.
README.md CHANGED
@@ -1,3 +1,30 @@
  ---
- license: cc-by-nc-sa-4.0
+ license: other
+ language: en
+ commercial: no
+ inference: true
  ---
+ # LLAMA2 13B - Holodeck
+ ## Model Description
+ LLAMA2 13B-Holodeck is a fine-tune of Meta's Llama 2 model.
+ ## Training data
+ The training data contains around 3000 ebooks in various genres.
+ Most parts of the dataset have been prepended with the following text: `[Genre: <genre1>, <genre2>]`
+ ### How to use
+ You can use this model directly with a pipeline for text generation. This example generates a different sequence each time it's run:
+ ```py
+ >>> from transformers import pipeline
+ >>> generator = pipeline('text-generation', model='KoboldAI/LLAMA2-13B-Holodeck-1')
+ >>> generator("Welcome Captain Janeway, I apologize for the delay.", do_sample=True, min_length=50)
+ [{'generated_text': 'Welcome Captain Janeway, I apologize for the delay."\nIt\'s all right," Janeway said. "I\'m certain that you\'re doing your best to keep me informed of what\'s going on."'}]
+ ```
+ ### Limitations and Biases
+ Based on known problems with NLP technology, potential relevant factors include bias (gender, profession, race and religion).
+
+ ### License
+ Llama 2 is licensed under the LLAMA 2 Community License, Copyright (c) Meta Platforms, Inc. All Rights Reserved.
+ **Extra clause:**
+ You shall use the Materials and Products solely for research purposes and not for any commercial purpose. Nothing in the Community License shall be construed as granting you a license to use the Materials or Products for any other purpose.
+
+ ### BibTeX entry and citation info
+ https://huggingface.co/meta-llama/Llama-2-13b-hf
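The Training data section of the model card says most samples were prepended with a `[Genre: <genre1>, <genre2>]` tag, so reproducing that tag at the top of a prompt is the natural way to steer generation toward a genre. A minimal sketch under that assumption (the genre names here are illustrative, not taken from the dataset):

```py
>>> from transformers import pipeline
>>> generator = pipeline('text-generation', model='KoboldAI/LLAMA2-13B-Holodeck-1')
>>> # Hypothetical genre tags, mirroring the dataset's prefix convention.
>>> prompt = "[Genre: science fiction, adventure]\nWelcome Captain Janeway, I apologize for the delay."
>>> generator(prompt, do_sample=True, max_new_tokens=50)  # sampled, so output varies per run
```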
config.json ADDED
@@ -0,0 +1,26 @@
+ {
+ "_name_or_path": "meta-llama/Llama-2-13b-hf",
+ "architectures": [
+ "LlamaForCausalLM"
+ ],
+ "bos_token_id": 1,
+ "eos_token_id": 2,
+ "hidden_act": "silu",
+ "hidden_size": 5120,
+ "initializer_range": 0.02,
+ "intermediate_size": 13824,
+ "max_position_embeddings": 4096,
+ "model_type": "llama",
+ "num_attention_heads": 40,
+ "num_hidden_layers": 40,
+ "num_key_value_heads": 40,
+ "pad_token_id": 0,
+ "pretraining_tp": 1,
+ "rms_norm_eps": 1e-05,
+ "rope_scaling": null,
+ "tie_word_embeddings": false,
+ "torch_dtype": "float16",
+ "transformers_version": "4.32.0.dev0",
+ "use_cache": false,
+ "vocab_size": 32000
+ }
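The config pins the stock Llama 2 13B architecture: 40 hidden layers, 40 attention heads (with num_key_value_heads equal to num_attention_heads, i.e. full multi-head rather than grouped-query attention), hidden size 5120, a 4096-token context window, and float16 weights. A minimal loading sketch, assuming roughly 25 GB of free memory for the fp16 checkpoint and that `accelerate` is installed for `device_map="auto"`:

```py
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "KoboldAI/LLAMA2-13B-Holodeck-1"
model = AutoModelForCausalLM.from_pretrained(
    repo,
    torch_dtype=torch.float16,  # matches "torch_dtype": "float16" in config.json
    device_map="auto",          # spreads layers across available devices (needs accelerate)
)
tokenizer = AutoTokenizer.from_pretrained(repo)
```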
generation_config.json ADDED
@@ -0,0 +1,8 @@
+ {
+ "_from_model_config": true,
+ "bos_token_id": 1,
+ "eos_token_id": 2,
+ "pad_token_id": 0,
+ "transformers_version": "4.32.0.dev0",
+ "use_cache": false
+ }
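generation_config.json records only the special-token ids and the use_cache flag; it sets no sampling defaults, so decoding parameters stay with the caller. A quick way to inspect it, as a hedged sketch:

```py
>>> from transformers import GenerationConfig
>>> gen_cfg = GenerationConfig.from_pretrained("KoboldAI/LLAMA2-13B-Holodeck-1")
>>> gen_cfg.bos_token_id, gen_cfg.eos_token_id, gen_cfg.pad_token_id
(1, 2, 0)
```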
pytorch_model-00001-of-00014.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:cdc50eb0bbd2a12249a6166b5770ea063a37745973cc2f8e35ff18ebc5fc281c
+ size 1947779263
pytorch_model-00002-of-00014.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:314f3f58d5f02d5420d4377c1ca504c7463c07fef48140db940a34267606c4ef
+ size 1903236213
pytorch_model-00003-of-00014.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:6f5baa5f4ba5798e0528b2bcb086dacf7dd8664c7e129906a899b01a3243ca50
+ size 1903236213
pytorch_model-00004-of-00014.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:82bebb94aa47014d5bd9917766e3a17ee032a03115da9f6ac2f6b2e29e09c362
+ size 1903236213
pytorch_model-00005-of-00014.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:2bce1b93999bd2fd3c498fdf4c8a2f6d204e726a09ef0226069d68819ffa5fc1
+ size 1903236277
pytorch_model-00006-of-00014.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:6e49612fd7f2779e1cb183f2ac44dccc98ff4ec93fcbd632713f130ad6feaf5d
+ size 1903236277
pytorch_model-00007-of-00014.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:82a8a2cc7772932142b0098110bf19caa464e3f90c7786249e78117209098f28
+ size 1903236277
pytorch_model-00008-of-00014.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a7c46c614d691923b77b6566c9be5c3a7aabf71629f8f50d50a27e791a03a228
+ size 1903236277
pytorch_model-00009-of-00014.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:9d1870603c36116b3f67d40b28d32031958deb7e704be6311b8ff85be40a8703
+ size 1903236277
pytorch_model-00010-of-00014.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a959d5aff13b34c5a2fe43851275fc4a9b96b56fd2d9688f0c0081d6c02cec8a
+ size 1903236277
pytorch_model-00011-of-00014.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:3d400072947a4eb54fc1a7ba4ecbc97acb0f2eb24249947a6cff71e8260962b4
+ size 1903236277
pytorch_model-00012-of-00014.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:755cc10b2b850fedaf8ea495911d46922858af5996f726e38e1117242146d1d3
+ size 1903236277
pytorch_model-00013-of-00014.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:2929fc661b4f3ffe7e8958f529595ec791fdd09e4f77030ccffa3593b6616970
+ size 1551963227
pytorch_model-00014-of-00014.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:4cd2fb58c9e65210d53d13cd7342c14f2757509fee101e7cebfd58c508110986
+ size 665
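Each pytorch_model-000NN-of-00014.bin entry above is a Git LFS pointer, not the weights themselves: the repo tracks only a sha256 and a byte size (about 25 GB across the fourteen shards), and the tensors live in LFS storage. One way to materialize the real files without a full `git clone` is the Hugging Face Hub client; a sketch assuming `huggingface_hub` is installed:

```py
from huggingface_hub import snapshot_download

# Downloads the actual shard contents (resolving the LFS pointers) into the local cache.
local_dir = snapshot_download(
    repo_id="KoboldAI/LLAMA2-13B-Holodeck-1",
    allow_patterns=["*.json", "*.model", "pytorch_model-*.bin"],
)
print(local_dir)
```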
pytorch_model.bin.index.json ADDED
@@ -0,0 +1,370 @@
+ {
+ "metadata": {
+ "total_size": 24956897280
+ },
+ "weight_map": {
+ "lm_head.weight": "pytorch_model-00014-of-00014.bin",
+ "model.embed_tokens.weight": "pytorch_model-00001-of-00014.bin",
+ "model.layers.0.input_layernorm.weight": "pytorch_model-00001-of-00014.bin",
+ "model.layers.0.mlp.down_proj.weight": "pytorch_model-00001-of-00014.bin",
+ "model.layers.0.mlp.gate_proj.weight": "pytorch_model-00001-of-00014.bin",
+ "model.layers.0.mlp.up_proj.weight": "pytorch_model-00001-of-00014.bin",
+ "model.layers.0.post_attention_layernorm.weight": "pytorch_model-00001-of-00014.bin",
+ "model.layers.0.self_attn.k_proj.weight": "pytorch_model-00001-of-00014.bin",
+ "model.layers.0.self_attn.o_proj.weight": "pytorch_model-00001-of-00014.bin",
+ "model.layers.0.self_attn.q_proj.weight": "pytorch_model-00001-of-00014.bin",
+ "model.layers.0.self_attn.v_proj.weight": "pytorch_model-00001-of-00014.bin",
+ "model.layers.1.input_layernorm.weight": "pytorch_model-00001-of-00014.bin",
+ "model.layers.1.mlp.down_proj.weight": "pytorch_model-00001-of-00014.bin",
+ "model.layers.1.mlp.gate_proj.weight": "pytorch_model-00001-of-00014.bin",
+ "model.layers.1.mlp.up_proj.weight": "pytorch_model-00001-of-00014.bin",
+ "model.layers.1.post_attention_layernorm.weight": "pytorch_model-00001-of-00014.bin",
+ "model.layers.1.self_attn.k_proj.weight": "pytorch_model-00001-of-00014.bin",
+ "model.layers.1.self_attn.o_proj.weight": "pytorch_model-00001-of-00014.bin",
+ "model.layers.1.self_attn.q_proj.weight": "pytorch_model-00001-of-00014.bin",
+ "model.layers.1.self_attn.v_proj.weight": "pytorch_model-00001-of-00014.bin",
+ "model.layers.10.input_layernorm.weight": "pytorch_model-00004-of-00014.bin",
+ "model.layers.10.mlp.down_proj.weight": "pytorch_model-00004-of-00014.bin",
+ "model.layers.10.mlp.gate_proj.weight": "pytorch_model-00004-of-00014.bin",
+ "model.layers.10.mlp.up_proj.weight": "pytorch_model-00004-of-00014.bin",
+ "model.layers.10.post_attention_layernorm.weight": "pytorch_model-00004-of-00014.bin",
+ "model.layers.10.self_attn.k_proj.weight": "pytorch_model-00004-of-00014.bin",
+ "model.layers.10.self_attn.o_proj.weight": "pytorch_model-00004-of-00014.bin",
+ "model.layers.10.self_attn.q_proj.weight": "pytorch_model-00004-of-00014.bin",
+ "model.layers.10.self_attn.v_proj.weight": "pytorch_model-00004-of-00014.bin",
+ "model.layers.11.input_layernorm.weight": "pytorch_model-00005-of-00014.bin",
+ "model.layers.11.mlp.down_proj.weight": "pytorch_model-00005-of-00014.bin",
+ "model.layers.11.mlp.gate_proj.weight": "pytorch_model-00004-of-00014.bin",
+ "model.layers.11.mlp.up_proj.weight": "pytorch_model-00005-of-00014.bin",
+ "model.layers.11.post_attention_layernorm.weight": "pytorch_model-00005-of-00014.bin",
+ "model.layers.11.self_attn.k_proj.weight": "pytorch_model-00004-of-00014.bin",
+ "model.layers.11.self_attn.o_proj.weight": "pytorch_model-00004-of-00014.bin",
+ "model.layers.11.self_attn.q_proj.weight": "pytorch_model-00004-of-00014.bin",
+ "model.layers.11.self_attn.v_proj.weight": "pytorch_model-00004-of-00014.bin",
+ "model.layers.12.input_layernorm.weight": "pytorch_model-00005-of-00014.bin",
+ "model.layers.12.mlp.down_proj.weight": "pytorch_model-00005-of-00014.bin",
+ "model.layers.12.mlp.gate_proj.weight": "pytorch_model-00005-of-00014.bin",
+ "model.layers.12.mlp.up_proj.weight": "pytorch_model-00005-of-00014.bin",
+ "model.layers.12.post_attention_layernorm.weight": "pytorch_model-00005-of-00014.bin",
+ "model.layers.12.self_attn.k_proj.weight": "pytorch_model-00005-of-00014.bin",
+ "model.layers.12.self_attn.o_proj.weight": "pytorch_model-00005-of-00014.bin",
+ "model.layers.12.self_attn.q_proj.weight": "pytorch_model-00005-of-00014.bin",
+ "model.layers.12.self_attn.v_proj.weight": "pytorch_model-00005-of-00014.bin",
+ "model.layers.13.input_layernorm.weight": "pytorch_model-00005-of-00014.bin",
+ "model.layers.13.mlp.down_proj.weight": "pytorch_model-00005-of-00014.bin",
+ "model.layers.13.mlp.gate_proj.weight": "pytorch_model-00005-of-00014.bin",
+ "model.layers.13.mlp.up_proj.weight": "pytorch_model-00005-of-00014.bin",
+ "model.layers.13.post_attention_layernorm.weight": "pytorch_model-00005-of-00014.bin",
+ "model.layers.13.self_attn.k_proj.weight": "pytorch_model-00005-of-00014.bin",
+ "model.layers.13.self_attn.o_proj.weight": "pytorch_model-00005-of-00014.bin",
+ "model.layers.13.self_attn.q_proj.weight": "pytorch_model-00005-of-00014.bin",
+ "model.layers.13.self_attn.v_proj.weight": "pytorch_model-00005-of-00014.bin",
+ "model.layers.14.input_layernorm.weight": "pytorch_model-00006-of-00014.bin",
+ "model.layers.14.mlp.down_proj.weight": "pytorch_model-00006-of-00014.bin",
+ "model.layers.14.mlp.gate_proj.weight": "pytorch_model-00005-of-00014.bin",
+ "model.layers.14.mlp.up_proj.weight": "pytorch_model-00006-of-00014.bin",
+ "model.layers.14.post_attention_layernorm.weight": "pytorch_model-00006-of-00014.bin",
+ "model.layers.14.self_attn.k_proj.weight": "pytorch_model-00005-of-00014.bin",
+ "model.layers.14.self_attn.o_proj.weight": "pytorch_model-00005-of-00014.bin",
+ "model.layers.14.self_attn.q_proj.weight": "pytorch_model-00005-of-00014.bin",
+ "model.layers.14.self_attn.v_proj.weight": "pytorch_model-00005-of-00014.bin",
+ "model.layers.15.input_layernorm.weight": "pytorch_model-00006-of-00014.bin",
+ "model.layers.15.mlp.down_proj.weight": "pytorch_model-00006-of-00014.bin",
+ "model.layers.15.mlp.gate_proj.weight": "pytorch_model-00006-of-00014.bin",
+ "model.layers.15.mlp.up_proj.weight": "pytorch_model-00006-of-00014.bin",
+ "model.layers.15.post_attention_layernorm.weight": "pytorch_model-00006-of-00014.bin",
+ "model.layers.15.self_attn.k_proj.weight": "pytorch_model-00006-of-00014.bin",
+ "model.layers.15.self_attn.o_proj.weight": "pytorch_model-00006-of-00014.bin",
+ "model.layers.15.self_attn.q_proj.weight": "pytorch_model-00006-of-00014.bin",
+ "model.layers.15.self_attn.v_proj.weight": "pytorch_model-00006-of-00014.bin",
+ "model.layers.16.input_layernorm.weight": "pytorch_model-00006-of-00014.bin",
+ "model.layers.16.mlp.down_proj.weight": "pytorch_model-00006-of-00014.bin",
+ "model.layers.16.mlp.gate_proj.weight": "pytorch_model-00006-of-00014.bin",
+ "model.layers.16.mlp.up_proj.weight": "pytorch_model-00006-of-00014.bin",
+ "model.layers.16.post_attention_layernorm.weight": "pytorch_model-00006-of-00014.bin",
+ "model.layers.16.self_attn.k_proj.weight": "pytorch_model-00006-of-00014.bin",
+ "model.layers.16.self_attn.o_proj.weight": "pytorch_model-00006-of-00014.bin",
+ "model.layers.16.self_attn.q_proj.weight": "pytorch_model-00006-of-00014.bin",
+ "model.layers.16.self_attn.v_proj.weight": "pytorch_model-00006-of-00014.bin",
+ "model.layers.17.input_layernorm.weight": "pytorch_model-00007-of-00014.bin",
+ "model.layers.17.mlp.down_proj.weight": "pytorch_model-00007-of-00014.bin",
+ "model.layers.17.mlp.gate_proj.weight": "pytorch_model-00006-of-00014.bin",
+ "model.layers.17.mlp.up_proj.weight": "pytorch_model-00007-of-00014.bin",
+ "model.layers.17.post_attention_layernorm.weight": "pytorch_model-00007-of-00014.bin",
+ "model.layers.17.self_attn.k_proj.weight": "pytorch_model-00006-of-00014.bin",
+ "model.layers.17.self_attn.o_proj.weight": "pytorch_model-00006-of-00014.bin",
+ "model.layers.17.self_attn.q_proj.weight": "pytorch_model-00006-of-00014.bin",
+ "model.layers.17.self_attn.v_proj.weight": "pytorch_model-00006-of-00014.bin",
+ "model.layers.18.input_layernorm.weight": "pytorch_model-00007-of-00014.bin",
+ "model.layers.18.mlp.down_proj.weight": "pytorch_model-00007-of-00014.bin",
+ "model.layers.18.mlp.gate_proj.weight": "pytorch_model-00007-of-00014.bin",
+ "model.layers.18.mlp.up_proj.weight": "pytorch_model-00007-of-00014.bin",
+ "model.layers.18.post_attention_layernorm.weight": "pytorch_model-00007-of-00014.bin",
+ "model.layers.18.self_attn.k_proj.weight": "pytorch_model-00007-of-00014.bin",
+ "model.layers.18.self_attn.o_proj.weight": "pytorch_model-00007-of-00014.bin",
+ "model.layers.18.self_attn.q_proj.weight": "pytorch_model-00007-of-00014.bin",
+ "model.layers.18.self_attn.v_proj.weight": "pytorch_model-00007-of-00014.bin",
+ "model.layers.19.input_layernorm.weight": "pytorch_model-00007-of-00014.bin",
+ "model.layers.19.mlp.down_proj.weight": "pytorch_model-00007-of-00014.bin",
+ "model.layers.19.mlp.gate_proj.weight": "pytorch_model-00007-of-00014.bin",
+ "model.layers.19.mlp.up_proj.weight": "pytorch_model-00007-of-00014.bin",
+ "model.layers.19.post_attention_layernorm.weight": "pytorch_model-00007-of-00014.bin",
+ "model.layers.19.self_attn.k_proj.weight": "pytorch_model-00007-of-00014.bin",
+ "model.layers.19.self_attn.o_proj.weight": "pytorch_model-00007-of-00014.bin",
+ "model.layers.19.self_attn.q_proj.weight": "pytorch_model-00007-of-00014.bin",
+ "model.layers.19.self_attn.v_proj.weight": "pytorch_model-00007-of-00014.bin",
+ "model.layers.2.input_layernorm.weight": "pytorch_model-00002-of-00014.bin",
+ "model.layers.2.mlp.down_proj.weight": "pytorch_model-00002-of-00014.bin",
+ "model.layers.2.mlp.gate_proj.weight": "pytorch_model-00001-of-00014.bin",
+ "model.layers.2.mlp.up_proj.weight": "pytorch_model-00002-of-00014.bin",
+ "model.layers.2.post_attention_layernorm.weight": "pytorch_model-00002-of-00014.bin",
+ "model.layers.2.self_attn.k_proj.weight": "pytorch_model-00001-of-00014.bin",
+ "model.layers.2.self_attn.o_proj.weight": "pytorch_model-00001-of-00014.bin",
+ "model.layers.2.self_attn.q_proj.weight": "pytorch_model-00001-of-00014.bin",
+ "model.layers.2.self_attn.v_proj.weight": "pytorch_model-00001-of-00014.bin",
+ "model.layers.20.input_layernorm.weight": "pytorch_model-00008-of-00014.bin",
+ "model.layers.20.mlp.down_proj.weight": "pytorch_model-00008-of-00014.bin",
+ "model.layers.20.mlp.gate_proj.weight": "pytorch_model-00007-of-00014.bin",
+ "model.layers.20.mlp.up_proj.weight": "pytorch_model-00008-of-00014.bin",
+ "model.layers.20.post_attention_layernorm.weight": "pytorch_model-00008-of-00014.bin",
+ "model.layers.20.self_attn.k_proj.weight": "pytorch_model-00007-of-00014.bin",
+ "model.layers.20.self_attn.o_proj.weight": "pytorch_model-00007-of-00014.bin",
+ "model.layers.20.self_attn.q_proj.weight": "pytorch_model-00007-of-00014.bin",
+ "model.layers.20.self_attn.v_proj.weight": "pytorch_model-00007-of-00014.bin",
+ "model.layers.21.input_layernorm.weight": "pytorch_model-00008-of-00014.bin",
+ "model.layers.21.mlp.down_proj.weight": "pytorch_model-00008-of-00014.bin",
+ "model.layers.21.mlp.gate_proj.weight": "pytorch_model-00008-of-00014.bin",
+ "model.layers.21.mlp.up_proj.weight": "pytorch_model-00008-of-00014.bin",
+ "model.layers.21.post_attention_layernorm.weight": "pytorch_model-00008-of-00014.bin",
+ "model.layers.21.self_attn.k_proj.weight": "pytorch_model-00008-of-00014.bin",
+ "model.layers.21.self_attn.o_proj.weight": "pytorch_model-00008-of-00014.bin",
+ "model.layers.21.self_attn.q_proj.weight": "pytorch_model-00008-of-00014.bin",
+ "model.layers.21.self_attn.v_proj.weight": "pytorch_model-00008-of-00014.bin",
+ "model.layers.22.input_layernorm.weight": "pytorch_model-00008-of-00014.bin",
+ "model.layers.22.mlp.down_proj.weight": "pytorch_model-00008-of-00014.bin",
+ "model.layers.22.mlp.gate_proj.weight": "pytorch_model-00008-of-00014.bin",
+ "model.layers.22.mlp.up_proj.weight": "pytorch_model-00008-of-00014.bin",
+ "model.layers.22.post_attention_layernorm.weight": "pytorch_model-00008-of-00014.bin",
+ "model.layers.22.self_attn.k_proj.weight": "pytorch_model-00008-of-00014.bin",
+ "model.layers.22.self_attn.o_proj.weight": "pytorch_model-00008-of-00014.bin",
+ "model.layers.22.self_attn.q_proj.weight": "pytorch_model-00008-of-00014.bin",
+ "model.layers.22.self_attn.v_proj.weight": "pytorch_model-00008-of-00014.bin",
+ "model.layers.23.input_layernorm.weight": "pytorch_model-00009-of-00014.bin",
+ "model.layers.23.mlp.down_proj.weight": "pytorch_model-00009-of-00014.bin",
+ "model.layers.23.mlp.gate_proj.weight": "pytorch_model-00008-of-00014.bin",
+ "model.layers.23.mlp.up_proj.weight": "pytorch_model-00009-of-00014.bin",
+ "model.layers.23.post_attention_layernorm.weight": "pytorch_model-00009-of-00014.bin",
+ "model.layers.23.self_attn.k_proj.weight": "pytorch_model-00008-of-00014.bin",
+ "model.layers.23.self_attn.o_proj.weight": "pytorch_model-00008-of-00014.bin",
+ "model.layers.23.self_attn.q_proj.weight": "pytorch_model-00008-of-00014.bin",
+ "model.layers.23.self_attn.v_proj.weight": "pytorch_model-00008-of-00014.bin",
+ "model.layers.24.input_layernorm.weight": "pytorch_model-00009-of-00014.bin",
+ "model.layers.24.mlp.down_proj.weight": "pytorch_model-00009-of-00014.bin",
+ "model.layers.24.mlp.gate_proj.weight": "pytorch_model-00009-of-00014.bin",
+ "model.layers.24.mlp.up_proj.weight": "pytorch_model-00009-of-00014.bin",
+ "model.layers.24.post_attention_layernorm.weight": "pytorch_model-00009-of-00014.bin",
+ "model.layers.24.self_attn.k_proj.weight": "pytorch_model-00009-of-00014.bin",
+ "model.layers.24.self_attn.o_proj.weight": "pytorch_model-00009-of-00014.bin",
+ "model.layers.24.self_attn.q_proj.weight": "pytorch_model-00009-of-00014.bin",
+ "model.layers.24.self_attn.v_proj.weight": "pytorch_model-00009-of-00014.bin",
+ "model.layers.25.input_layernorm.weight": "pytorch_model-00009-of-00014.bin",
+ "model.layers.25.mlp.down_proj.weight": "pytorch_model-00009-of-00014.bin",
+ "model.layers.25.mlp.gate_proj.weight": "pytorch_model-00009-of-00014.bin",
+ "model.layers.25.mlp.up_proj.weight": "pytorch_model-00009-of-00014.bin",
+ "model.layers.25.post_attention_layernorm.weight": "pytorch_model-00009-of-00014.bin",
+ "model.layers.25.self_attn.k_proj.weight": "pytorch_model-00009-of-00014.bin",
+ "model.layers.25.self_attn.o_proj.weight": "pytorch_model-00009-of-00014.bin",
+ "model.layers.25.self_attn.q_proj.weight": "pytorch_model-00009-of-00014.bin",
+ "model.layers.25.self_attn.v_proj.weight": "pytorch_model-00009-of-00014.bin",
+ "model.layers.26.input_layernorm.weight": "pytorch_model-00010-of-00014.bin",
+ "model.layers.26.mlp.down_proj.weight": "pytorch_model-00010-of-00014.bin",
+ "model.layers.26.mlp.gate_proj.weight": "pytorch_model-00009-of-00014.bin",
+ "model.layers.26.mlp.up_proj.weight": "pytorch_model-00010-of-00014.bin",
+ "model.layers.26.post_attention_layernorm.weight": "pytorch_model-00010-of-00014.bin",
+ "model.layers.26.self_attn.k_proj.weight": "pytorch_model-00009-of-00014.bin",
+ "model.layers.26.self_attn.o_proj.weight": "pytorch_model-00009-of-00014.bin",
+ "model.layers.26.self_attn.q_proj.weight": "pytorch_model-00009-of-00014.bin",
+ "model.layers.26.self_attn.v_proj.weight": "pytorch_model-00009-of-00014.bin",
+ "model.layers.27.input_layernorm.weight": "pytorch_model-00010-of-00014.bin",
+ "model.layers.27.mlp.down_proj.weight": "pytorch_model-00010-of-00014.bin",
+ "model.layers.27.mlp.gate_proj.weight": "pytorch_model-00010-of-00014.bin",
+ "model.layers.27.mlp.up_proj.weight": "pytorch_model-00010-of-00014.bin",
+ "model.layers.27.post_attention_layernorm.weight": "pytorch_model-00010-of-00014.bin",
+ "model.layers.27.self_attn.k_proj.weight": "pytorch_model-00010-of-00014.bin",
+ "model.layers.27.self_attn.o_proj.weight": "pytorch_model-00010-of-00014.bin",
+ "model.layers.27.self_attn.q_proj.weight": "pytorch_model-00010-of-00014.bin",
+ "model.layers.27.self_attn.v_proj.weight": "pytorch_model-00010-of-00014.bin",
+ "model.layers.28.input_layernorm.weight": "pytorch_model-00010-of-00014.bin",
+ "model.layers.28.mlp.down_proj.weight": "pytorch_model-00010-of-00014.bin",
+ "model.layers.28.mlp.gate_proj.weight": "pytorch_model-00010-of-00014.bin",
+ "model.layers.28.mlp.up_proj.weight": "pytorch_model-00010-of-00014.bin",
+ "model.layers.28.post_attention_layernorm.weight": "pytorch_model-00010-of-00014.bin",
+ "model.layers.28.self_attn.k_proj.weight": "pytorch_model-00010-of-00014.bin",
+ "model.layers.28.self_attn.o_proj.weight": "pytorch_model-00010-of-00014.bin",
+ "model.layers.28.self_attn.q_proj.weight": "pytorch_model-00010-of-00014.bin",
+ "model.layers.28.self_attn.v_proj.weight": "pytorch_model-00010-of-00014.bin",
+ "model.layers.29.input_layernorm.weight": "pytorch_model-00011-of-00014.bin",
+ "model.layers.29.mlp.down_proj.weight": "pytorch_model-00011-of-00014.bin",
+ "model.layers.29.mlp.gate_proj.weight": "pytorch_model-00010-of-00014.bin",
+ "model.layers.29.mlp.up_proj.weight": "pytorch_model-00011-of-00014.bin",
+ "model.layers.29.post_attention_layernorm.weight": "pytorch_model-00011-of-00014.bin",
+ "model.layers.29.self_attn.k_proj.weight": "pytorch_model-00010-of-00014.bin",
+ "model.layers.29.self_attn.o_proj.weight": "pytorch_model-00010-of-00014.bin",
+ "model.layers.29.self_attn.q_proj.weight": "pytorch_model-00010-of-00014.bin",
+ "model.layers.29.self_attn.v_proj.weight": "pytorch_model-00010-of-00014.bin",
+ "model.layers.3.input_layernorm.weight": "pytorch_model-00002-of-00014.bin",
+ "model.layers.3.mlp.down_proj.weight": "pytorch_model-00002-of-00014.bin",
+ "model.layers.3.mlp.gate_proj.weight": "pytorch_model-00002-of-00014.bin",
+ "model.layers.3.mlp.up_proj.weight": "pytorch_model-00002-of-00014.bin",
+ "model.layers.3.post_attention_layernorm.weight": "pytorch_model-00002-of-00014.bin",
+ "model.layers.3.self_attn.k_proj.weight": "pytorch_model-00002-of-00014.bin",
+ "model.layers.3.self_attn.o_proj.weight": "pytorch_model-00002-of-00014.bin",
+ "model.layers.3.self_attn.q_proj.weight": "pytorch_model-00002-of-00014.bin",
+ "model.layers.3.self_attn.v_proj.weight": "pytorch_model-00002-of-00014.bin",
+ "model.layers.30.input_layernorm.weight": "pytorch_model-00011-of-00014.bin",
+ "model.layers.30.mlp.down_proj.weight": "pytorch_model-00011-of-00014.bin",
+ "model.layers.30.mlp.gate_proj.weight": "pytorch_model-00011-of-00014.bin",
+ "model.layers.30.mlp.up_proj.weight": "pytorch_model-00011-of-00014.bin",
+ "model.layers.30.post_attention_layernorm.weight": "pytorch_model-00011-of-00014.bin",
+ "model.layers.30.self_attn.k_proj.weight": "pytorch_model-00011-of-00014.bin",
+ "model.layers.30.self_attn.o_proj.weight": "pytorch_model-00011-of-00014.bin",
+ "model.layers.30.self_attn.q_proj.weight": "pytorch_model-00011-of-00014.bin",
+ "model.layers.30.self_attn.v_proj.weight": "pytorch_model-00011-of-00014.bin",
+ "model.layers.31.input_layernorm.weight": "pytorch_model-00011-of-00014.bin",
+ "model.layers.31.mlp.down_proj.weight": "pytorch_model-00011-of-00014.bin",
+ "model.layers.31.mlp.gate_proj.weight": "pytorch_model-00011-of-00014.bin",
+ "model.layers.31.mlp.up_proj.weight": "pytorch_model-00011-of-00014.bin",
+ "model.layers.31.post_attention_layernorm.weight": "pytorch_model-00011-of-00014.bin",
+ "model.layers.31.self_attn.k_proj.weight": "pytorch_model-00011-of-00014.bin",
+ "model.layers.31.self_attn.o_proj.weight": "pytorch_model-00011-of-00014.bin",
+ "model.layers.31.self_attn.q_proj.weight": "pytorch_model-00011-of-00014.bin",
+ "model.layers.31.self_attn.v_proj.weight": "pytorch_model-00011-of-00014.bin",
+ "model.layers.32.input_layernorm.weight": "pytorch_model-00012-of-00014.bin",
+ "model.layers.32.mlp.down_proj.weight": "pytorch_model-00012-of-00014.bin",
+ "model.layers.32.mlp.gate_proj.weight": "pytorch_model-00011-of-00014.bin",
+ "model.layers.32.mlp.up_proj.weight": "pytorch_model-00012-of-00014.bin",
+ "model.layers.32.post_attention_layernorm.weight": "pytorch_model-00012-of-00014.bin",
+ "model.layers.32.self_attn.k_proj.weight": "pytorch_model-00011-of-00014.bin",
+ "model.layers.32.self_attn.o_proj.weight": "pytorch_model-00011-of-00014.bin",
+ "model.layers.32.self_attn.q_proj.weight": "pytorch_model-00011-of-00014.bin",
+ "model.layers.32.self_attn.v_proj.weight": "pytorch_model-00011-of-00014.bin",
+ "model.layers.33.input_layernorm.weight": "pytorch_model-00012-of-00014.bin",
+ "model.layers.33.mlp.down_proj.weight": "pytorch_model-00012-of-00014.bin",
+ "model.layers.33.mlp.gate_proj.weight": "pytorch_model-00012-of-00014.bin",
+ "model.layers.33.mlp.up_proj.weight": "pytorch_model-00012-of-00014.bin",
+ "model.layers.33.post_attention_layernorm.weight": "pytorch_model-00012-of-00014.bin",
+ "model.layers.33.self_attn.k_proj.weight": "pytorch_model-00012-of-00014.bin",
+ "model.layers.33.self_attn.o_proj.weight": "pytorch_model-00012-of-00014.bin",
+ "model.layers.33.self_attn.q_proj.weight": "pytorch_model-00012-of-00014.bin",
+ "model.layers.33.self_attn.v_proj.weight": "pytorch_model-00012-of-00014.bin",
+ "model.layers.34.input_layernorm.weight": "pytorch_model-00012-of-00014.bin",
+ "model.layers.34.mlp.down_proj.weight": "pytorch_model-00012-of-00014.bin",
+ "model.layers.34.mlp.gate_proj.weight": "pytorch_model-00012-of-00014.bin",
+ "model.layers.34.mlp.up_proj.weight": "pytorch_model-00012-of-00014.bin",
+ "model.layers.34.post_attention_layernorm.weight": "pytorch_model-00012-of-00014.bin",
+ "model.layers.34.self_attn.k_proj.weight": "pytorch_model-00012-of-00014.bin",
+ "model.layers.34.self_attn.o_proj.weight": "pytorch_model-00012-of-00014.bin",
+ "model.layers.34.self_attn.q_proj.weight": "pytorch_model-00012-of-00014.bin",
+ "model.layers.34.self_attn.v_proj.weight": "pytorch_model-00012-of-00014.bin",
+ "model.layers.35.input_layernorm.weight": "pytorch_model-00013-of-00014.bin",
+ "model.layers.35.mlp.down_proj.weight": "pytorch_model-00013-of-00014.bin",
+ "model.layers.35.mlp.gate_proj.weight": "pytorch_model-00012-of-00014.bin",
+ "model.layers.35.mlp.up_proj.weight": "pytorch_model-00013-of-00014.bin",
+ "model.layers.35.post_attention_layernorm.weight": "pytorch_model-00013-of-00014.bin",
+ "model.layers.35.self_attn.k_proj.weight": "pytorch_model-00012-of-00014.bin",
+ "model.layers.35.self_attn.o_proj.weight": "pytorch_model-00012-of-00014.bin",
+ "model.layers.35.self_attn.q_proj.weight": "pytorch_model-00012-of-00014.bin",
+ "model.layers.35.self_attn.v_proj.weight": "pytorch_model-00012-of-00014.bin",
+ "model.layers.36.input_layernorm.weight": "pytorch_model-00013-of-00014.bin",
+ "model.layers.36.mlp.down_proj.weight": "pytorch_model-00013-of-00014.bin",
+ "model.layers.36.mlp.gate_proj.weight": "pytorch_model-00013-of-00014.bin",
+ "model.layers.36.mlp.up_proj.weight": "pytorch_model-00013-of-00014.bin",
+ "model.layers.36.post_attention_layernorm.weight": "pytorch_model-00013-of-00014.bin",
+ "model.layers.36.self_attn.k_proj.weight": "pytorch_model-00013-of-00014.bin",
+ "model.layers.36.self_attn.o_proj.weight": "pytorch_model-00013-of-00014.bin",
+ "model.layers.36.self_attn.q_proj.weight": "pytorch_model-00013-of-00014.bin",
+ "model.layers.36.self_attn.v_proj.weight": "pytorch_model-00013-of-00014.bin",
+ "model.layers.37.input_layernorm.weight": "pytorch_model-00013-of-00014.bin",
+ "model.layers.37.mlp.down_proj.weight": "pytorch_model-00013-of-00014.bin",
+ "model.layers.37.mlp.gate_proj.weight": "pytorch_model-00013-of-00014.bin",
+ "model.layers.37.mlp.up_proj.weight": "pytorch_model-00013-of-00014.bin",
+ "model.layers.37.post_attention_layernorm.weight": "pytorch_model-00013-of-00014.bin",
+ "model.layers.37.self_attn.k_proj.weight": "pytorch_model-00013-of-00014.bin",
+ "model.layers.37.self_attn.o_proj.weight": "pytorch_model-00013-of-00014.bin",
+ "model.layers.37.self_attn.q_proj.weight": "pytorch_model-00013-of-00014.bin",
+ "model.layers.37.self_attn.v_proj.weight": "pytorch_model-00013-of-00014.bin",
+ "model.layers.38.input_layernorm.weight": "pytorch_model-00013-of-00014.bin",
+ "model.layers.38.mlp.down_proj.weight": "pytorch_model-00013-of-00014.bin",
+ "model.layers.38.mlp.gate_proj.weight": "pytorch_model-00013-of-00014.bin",
+ "model.layers.38.mlp.up_proj.weight": "pytorch_model-00013-of-00014.bin",
+ "model.layers.38.post_attention_layernorm.weight": "pytorch_model-00013-of-00014.bin",
+ "model.layers.38.self_attn.k_proj.weight": "pytorch_model-00013-of-00014.bin",
+ "model.layers.38.self_attn.o_proj.weight": "pytorch_model-00013-of-00014.bin",
+ "model.layers.38.self_attn.q_proj.weight": "pytorch_model-00013-of-00014.bin",
+ "model.layers.38.self_attn.v_proj.weight": "pytorch_model-00013-of-00014.bin",
+ "model.layers.39.input_layernorm.weight": "pytorch_model-00013-of-00014.bin",
+ "model.layers.39.mlp.down_proj.weight": "pytorch_model-00013-of-00014.bin",
+ "model.layers.39.mlp.gate_proj.weight": "pytorch_model-00013-of-00014.bin",
+ "model.layers.39.mlp.up_proj.weight": "pytorch_model-00013-of-00014.bin",
+ "model.layers.39.post_attention_layernorm.weight": "pytorch_model-00013-of-00014.bin",
+ "model.layers.39.self_attn.k_proj.weight": "pytorch_model-00013-of-00014.bin",
+ "model.layers.39.self_attn.o_proj.weight": "pytorch_model-00013-of-00014.bin",
+ "model.layers.39.self_attn.q_proj.weight": "pytorch_model-00013-of-00014.bin",
+ "model.layers.39.self_attn.v_proj.weight": "pytorch_model-00013-of-00014.bin",
+ "model.layers.4.input_layernorm.weight": "pytorch_model-00002-of-00014.bin",
+ "model.layers.4.mlp.down_proj.weight": "pytorch_model-00002-of-00014.bin",
+ "model.layers.4.mlp.gate_proj.weight": "pytorch_model-00002-of-00014.bin",
+ "model.layers.4.mlp.up_proj.weight": "pytorch_model-00002-of-00014.bin",
+ "model.layers.4.post_attention_layernorm.weight": "pytorch_model-00002-of-00014.bin",
+ "model.layers.4.self_attn.k_proj.weight": "pytorch_model-00002-of-00014.bin",
+ "model.layers.4.self_attn.o_proj.weight": "pytorch_model-00002-of-00014.bin",
+ "model.layers.4.self_attn.q_proj.weight": "pytorch_model-00002-of-00014.bin",
+ "model.layers.4.self_attn.v_proj.weight": "pytorch_model-00002-of-00014.bin",
+ "model.layers.5.input_layernorm.weight": "pytorch_model-00003-of-00014.bin",
+ "model.layers.5.mlp.down_proj.weight": "pytorch_model-00003-of-00014.bin",
+ "model.layers.5.mlp.gate_proj.weight": "pytorch_model-00002-of-00014.bin",
+ "model.layers.5.mlp.up_proj.weight": "pytorch_model-00003-of-00014.bin",
+ "model.layers.5.post_attention_layernorm.weight": "pytorch_model-00003-of-00014.bin",
+ "model.layers.5.self_attn.k_proj.weight": "pytorch_model-00002-of-00014.bin",
+ "model.layers.5.self_attn.o_proj.weight": "pytorch_model-00002-of-00014.bin",
+ "model.layers.5.self_attn.q_proj.weight": "pytorch_model-00002-of-00014.bin",
+ "model.layers.5.self_attn.v_proj.weight": "pytorch_model-00002-of-00014.bin",
+ "model.layers.6.input_layernorm.weight": "pytorch_model-00003-of-00014.bin",
+ "model.layers.6.mlp.down_proj.weight": "pytorch_model-00003-of-00014.bin",
+ "model.layers.6.mlp.gate_proj.weight": "pytorch_model-00003-of-00014.bin",
+ "model.layers.6.mlp.up_proj.weight": "pytorch_model-00003-of-00014.bin",
+ "model.layers.6.post_attention_layernorm.weight": "pytorch_model-00003-of-00014.bin",
+ "model.layers.6.self_attn.k_proj.weight": "pytorch_model-00003-of-00014.bin",
+ "model.layers.6.self_attn.o_proj.weight": "pytorch_model-00003-of-00014.bin",
+ "model.layers.6.self_attn.q_proj.weight": "pytorch_model-00003-of-00014.bin",
+ "model.layers.6.self_attn.v_proj.weight": "pytorch_model-00003-of-00014.bin",
+ "model.layers.7.input_layernorm.weight": "pytorch_model-00003-of-00014.bin",
+ "model.layers.7.mlp.down_proj.weight": "pytorch_model-00003-of-00014.bin",
+ "model.layers.7.mlp.gate_proj.weight": "pytorch_model-00003-of-00014.bin",
+ "model.layers.7.mlp.up_proj.weight": "pytorch_model-00003-of-00014.bin",
+ "model.layers.7.post_attention_layernorm.weight": "pytorch_model-00003-of-00014.bin",
+ "model.layers.7.self_attn.k_proj.weight": "pytorch_model-00003-of-00014.bin",
+ "model.layers.7.self_attn.o_proj.weight": "pytorch_model-00003-of-00014.bin",
+ "model.layers.7.self_attn.q_proj.weight": "pytorch_model-00003-of-00014.bin",
+ "model.layers.7.self_attn.v_proj.weight": "pytorch_model-00003-of-00014.bin",
+ "model.layers.8.input_layernorm.weight": "pytorch_model-00004-of-00014.bin",
+ "model.layers.8.mlp.down_proj.weight": "pytorch_model-00004-of-00014.bin",
+ "model.layers.8.mlp.gate_proj.weight": "pytorch_model-00003-of-00014.bin",
+ "model.layers.8.mlp.up_proj.weight": "pytorch_model-00004-of-00014.bin",
+ "model.layers.8.post_attention_layernorm.weight": "pytorch_model-00004-of-00014.bin",
+ "model.layers.8.self_attn.k_proj.weight": "pytorch_model-00003-of-00014.bin",
+ "model.layers.8.self_attn.o_proj.weight": "pytorch_model-00003-of-00014.bin",
+ "model.layers.8.self_attn.q_proj.weight": "pytorch_model-00003-of-00014.bin",
+ "model.layers.8.self_attn.v_proj.weight": "pytorch_model-00003-of-00014.bin",
+ "model.layers.9.input_layernorm.weight": "pytorch_model-00004-of-00014.bin",
+ "model.layers.9.mlp.down_proj.weight": "pytorch_model-00004-of-00014.bin",
+ "model.layers.9.mlp.gate_proj.weight": "pytorch_model-00004-of-00014.bin",
+ "model.layers.9.mlp.up_proj.weight": "pytorch_model-00004-of-00014.bin",
+ "model.layers.9.post_attention_layernorm.weight": "pytorch_model-00004-of-00014.bin",
+ "model.layers.9.self_attn.k_proj.weight": "pytorch_model-00004-of-00014.bin",
+ "model.layers.9.self_attn.o_proj.weight": "pytorch_model-00004-of-00014.bin",
+ "model.layers.9.self_attn.q_proj.weight": "pytorch_model-00004-of-00014.bin",
+ "model.layers.9.self_attn.v_proj.weight": "pytorch_model-00004-of-00014.bin",
+ "model.norm.weight": "pytorch_model-00013-of-00014.bin"
+ }
+ }
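The index file is how transformers stitches the fourteen shards back together: metadata.total_size declares the summed tensor bytes (24,956,897,280, about 25 GB), and weight_map sends each parameter name to the shard that stores it, so a loader only opens the shards it actually needs. A standard-library sketch that inverts the map to see how the 40 layers were split:

```py
import json
from collections import defaultdict

with open("pytorch_model.bin.index.json") as f:
    index = json.load(f)

# Group parameter names by the shard file that holds them.
shards = defaultdict(list)
for param_name, shard_file in index["weight_map"].items():
    shards[shard_file].append(param_name)

for shard_file in sorted(shards):
    print(shard_file, "->", len(shards[shard_file]), "tensors")
print("declared total_size:", index["metadata"]["total_size"], "bytes")
```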
special_tokens_map.json ADDED
@@ -0,0 +1,23 @@
+ {
+ "bos_token": {
+ "content": "<s>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false
+ },
+ "eos_token": {
+ "content": "</s>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false
+ },
+ "unk_token": {
+ "content": "<unk>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false
+ }
+ }
tokenizer.model ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:9e556afd44213b6bd1be2b850ebbbd98f5481437a8021afaf58ee7fb1818d347
+ size 499723
tokenizer_config.json ADDED
@@ -0,0 +1,34 @@
+ {
+ "add_bos_token": true,
+ "add_eos_token": false,
+ "bos_token": {
+ "__type": "AddedToken",
+ "content": "<s>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false
+ },
+ "clean_up_tokenization_spaces": false,
+ "eos_token": {
+ "__type": "AddedToken",
+ "content": "</s>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false
+ },
+ "legacy": false,
+ "model_max_length": 1000000000000000019884624838656,
+ "pad_token": null,
+ "sp_model_kwargs": {},
+ "tokenizer_class": "LlamaTokenizer",
+ "unk_token": {
+ "__type": "AddedToken",
+ "content": "<unk>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false
+ }
+ }
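Together, special_tokens_map.json and tokenizer_config.json describe a standard LlamaTokenizer: `<s>`, `</s>` and `<unk>` as the special tokens, a BOS token prepended on encode (`add_bos_token: true`) but no EOS appended, and no pad token, so callers must set one before padded batching. A short behavioural sketch under those settings:

```py
>>> from transformers import AutoTokenizer
>>> tok = AutoTokenizer.from_pretrained("KoboldAI/LLAMA2-13B-Holodeck-1")
>>> ids = tok("Welcome Captain Janeway").input_ids
>>> ids[0] == tok.bos_token_id   # add_bos_token prepends <s> (id 1)
True
>>> tok.eos_token, tok.pad_token # pad_token is null in tokenizer_config.json
('</s>', None)
```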