filopedraz committed
Commit
4048bc1
1 Parent(s): 85689f7

added resharded model

This view is limited to 50 files because it contains too many changes. See the raw diff.
Files changed (50)
  1. README.md +82 -0
  2. config.json +25 -0
  3. generation_config.json +6 -0
  4. main.py +9 -0
  5. model.safetensors.index.json +442 -0
  6. model_00001-of-00049.safetensors +3 -0
  7. model_00002-of-00049.safetensors +3 -0
  8. model_00003-of-00049.safetensors +3 -0
  9. model_00004-of-00049.safetensors +3 -0
  10. model_00005-of-00049.safetensors +3 -0
  11. model_00006-of-00049.safetensors +3 -0
  12. model_00007-of-00049.safetensors +3 -0
  13. model_00008-of-00049.safetensors +3 -0
  14. model_00009-of-00049.safetensors +3 -0
  15. model_00010-of-00049.safetensors +3 -0
  16. model_00011-of-00049.safetensors +3 -0
  17. model_00012-of-00049.safetensors +3 -0
  18. model_00013-of-00049.safetensors +3 -0
  19. model_00014-of-00049.safetensors +3 -0
  20. model_00015-of-00049.safetensors +3 -0
  21. model_00016-of-00049.safetensors +3 -0
  22. model_00017-of-00049.safetensors +3 -0
  23. model_00018-of-00049.safetensors +3 -0
  24. model_00019-of-00049.safetensors +3 -0
  25. model_00020-of-00049.safetensors +3 -0
  26. model_00021-of-00049.safetensors +3 -0
  27. model_00022-of-00049.safetensors +3 -0
  28. model_00023-of-00049.safetensors +3 -0
  29. model_00024-of-00049.safetensors +3 -0
  30. model_00025-of-00049.safetensors +3 -0
  31. model_00026-of-00049.safetensors +3 -0
  32. model_00027-of-00049.safetensors +3 -0
  33. model_00028-of-00049.safetensors +3 -0
  34. model_00029-of-00049.safetensors +3 -0
  35. model_00030-of-00049.safetensors +3 -0
  36. model_00031-of-00049.safetensors +3 -0
  37. model_00032-of-00049.safetensors +3 -0
  38. model_00033-of-00049.safetensors +3 -0
  39. model_00034-of-00049.safetensors +3 -0
  40. model_00035-of-00049.safetensors +3 -0
  41. model_00036-of-00049.safetensors +3 -0
  42. model_00037-of-00049.safetensors +3 -0
  43. model_00038-of-00049.safetensors +3 -0
  44. model_00039-of-00049.safetensors +3 -0
  45. model_00040-of-00049.safetensors +3 -0
  46. model_00041-of-00049.safetensors +3 -0
  47. model_00042-of-00049.safetensors +3 -0
  48. model_00043-of-00049.safetensors +3 -0
  49. model_00044-of-00049.safetensors +3 -0
  50. model_00045-of-00049.safetensors +3 -0
README.md CHANGED
@@ -1,3 +1,85 @@
---
+ language:
+ - code
+ pipeline_tag: text-generation
+ tags:
+ - llama-2
license: llama2
---
+ # **Code Llama**
+ Code Llama is a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 34 billion parameters. This is the repository for the 34B instruct-tuned version in the Hugging Face Transformers format. This model is designed for general code synthesis and understanding. Links to other models can be found in the index at the bottom.
+
+ | | Base Model | Python | Instruct |
+ | --- | ----------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------- | ----------------------------------------------------------------------------------------------- |
+ | 7B | [codellama/CodeLlama-7b-hf](https://huggingface.co/codellama/CodeLlama-7b-hf) | [codellama/CodeLlama-7b-Python-hf](https://huggingface.co/codellama/CodeLlama-7b-Python-hf) | [codellama/CodeLlama-7b-Instruct-hf](https://huggingface.co/codellama/CodeLlama-7b-Instruct-hf) |
+ | 13B | [codellama/CodeLlama-13b-hf](https://huggingface.co/codellama/CodeLlama-13b-hf) | [codellama/CodeLlama-13b-Python-hf](https://huggingface.co/codellama/CodeLlama-13b-Python-hf) | [codellama/CodeLlama-13b-Instruct-hf](https://huggingface.co/codellama/CodeLlama-13b-Instruct-hf) |
+ | 34B | [codellama/CodeLlama-34b-hf](https://huggingface.co/codellama/CodeLlama-34b-hf) | [codellama/CodeLlama-34b-Python-hf](https://huggingface.co/codellama/CodeLlama-34b-Python-hf) | [codellama/CodeLlama-34b-Instruct-hf](https://huggingface.co/codellama/CodeLlama-34b-Instruct-hf) |
+
+ ## Model Use
+
+ To use this model, please make sure to install transformers from `main` until the next version is released:
+
+ ```bash
+ pip install git+https://github.com/huggingface/transformers.git@main accelerate
+ ```
+
+ Model capabilities:
+
+ - [x] Code completion.
+ - [ ] Infilling.
+ - [x] Instructions / chat.
+ - [ ] Python specialist.
+
+ ## Model Details
+ *Note: Use of this model is governed by the Meta license.* Meta developed and publicly released the Code Llama family of large language models (LLMs).
+
+ **Model Developers** Meta
+
+ **Variations** Code Llama comes in three model sizes and three variants:
+
+ * Code Llama: base models designed for general code synthesis and understanding
+ * Code Llama - Python: designed specifically for Python
+ * Code Llama - Instruct: for instruction following and safer deployment
+
+ All variants are available in sizes of 7B, 13B and 34B parameters.
+
+ **This repository contains the Instruct version of the 34B-parameter model.**
+
+ **Input** Models input text only.
+
+ **Output** Models generate text only.
+
+ **Model Architecture** Code Llama is an auto-regressive language model that uses an optimized transformer architecture.
+
+ **Model Dates** Code Llama and its variants were trained between January 2023 and July 2023.
+
+ **Status** This is a static model trained on an offline dataset. Future versions of Code Llama - Instruct will be released as we improve model safety with community feedback.
+
+ **License** A custom commercial license is available at: [https://ai.meta.com/resources/models-and-libraries/llama-downloads/](https://ai.meta.com/resources/models-and-libraries/llama-downloads/)
+
+ **Research Paper** More information can be found in the paper "[Code Llama: Open Foundation Models for Code](https://ai.meta.com/research/publications/code-llama-open-foundation-models-for-code/)" or its [arXiv page](https://arxiv.org/abs/2308.12950).
+
+ ## Intended Use
+ **Intended Use Cases** Code Llama and its variants are intended for commercial and research use in English and relevant programming languages. The base model Code Llama can be adapted for a variety of code synthesis and understanding tasks, Code Llama - Python is designed specifically to handle the Python programming language, and Code Llama - Instruct is intended to be safer to use for code assistant and generation applications.
+
+ **Out-of-Scope Uses** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in languages other than English. Use in any other way that is prohibited by the Acceptable Use Policy and Licensing Agreement for Code Llama and its variants.
+
+ ## Hardware and Software
+ **Training Factors** We used custom training libraries. The training and fine-tuning of the released models were performed on Meta’s Research Super Cluster.
+
+ **Carbon Footprint** In aggregate, training all 9 Code Llama models required 400K GPU hours of computation on hardware of type A100-80GB (TDP of 350-400W). Estimated total emissions were 65.3 tCO2eq, 100% of which were offset by Meta’s sustainability program.
+
+ ## Training Data
+
+ All experiments reported here and the released models have been trained and fine-tuned using the same data as Llama 2 with different weights (see Section 2 and Table 1 in the [research paper](https://ai.meta.com/research/publications/code-llama-open-foundation-models-for-code/) for details).
+
+ ## Evaluation Results
+
+ See evaluations for the main models and detailed ablations in Section 3 and safety evaluations in Section 4 of the research paper.
+
+ ## Ethical Considerations and Limitations
+
+ Code Llama and its variants are a new technology that carries risks with use. Testing conducted to date has been in English and has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, Code Llama’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate or objectionable responses to user prompts. Therefore, before deploying any applications of Code Llama, developers should perform safety testing and tuning tailored to their specific applications of the model.
+
+ Please see the Responsible Use Guide available at [https://ai.meta.com/llama/responsible-user-guide](https://ai.meta.com/llama/responsible-user-guide).
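Following the Model Use instructions in the README above, a minimal generation sketch with the `transformers` pipeline API might look like the following (illustrative only, not part of this commit: the prompt and sampling parameters are arbitrary, the repo id is taken from the README's own model index, and `device_map="auto"` relies on the `accelerate` package installed by the pip command):

```python
from transformers import AutoTokenizer
import transformers
import torch

# Repo id from the README's model table; a local path to this repo also works.
model = "codellama/CodeLlama-34b-Instruct-hf"

tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",  # shard the 34B weights across available devices
)

sequences = pipeline(
    "def fibonacci(n):",
    do_sample=True,
    temperature=0.2,
    top_p=0.9,
    num_return_sequences=1,
    eos_token_id=tokenizer.eos_token_id,
    max_new_tokens=128,
)
for seq in sequences:
    print(f"Result: {seq['generated_text']}")
```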
config.json ADDED
@@ -0,0 +1,25 @@
+ {
+   "architectures": [
+     "LlamaForCausalLM"
+   ],
+   "bos_token_id": 1,
+   "eos_token_id": 2,
+   "hidden_act": "silu",
+   "hidden_size": 8192,
+   "initializer_range": 0.02,
+   "intermediate_size": 22016,
+   "max_position_embeddings": 16384,
+   "model_type": "llama",
+   "num_attention_heads": 64,
+   "num_hidden_layers": 48,
+   "num_key_value_heads": 8,
+   "pretraining_tp": 1,
+   "rms_norm_eps": 1e-05,
+   "rope_scaling": null,
+   "rope_theta": 1000000,
+   "tie_word_embeddings": false,
+   "torch_dtype": "bfloat16",
+   "transformers_version": "4.32.0.dev0",
+   "use_cache": true,
+   "vocab_size": 32000
+ }
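These fields describe a 48-layer Llama-2-style decoder with grouped-query attention (64 query heads sharing 8 key/value heads) and a 16K context window (`max_position_embeddings` with `rope_theta` raised to 1e6). As a sanity check, here is a sketch (not part of the commit) of the parameter count these hyperparameters imply; at 2 bytes per bfloat16 weight it reproduces exactly the `total_size` recorded in `model.safetensors.index.json` below:

```python
# Parameter count implied by config.json.
hidden, inter, layers, vocab = 8192, 22016, 48, 32000
heads, kv_heads = 64, 8
head_dim = hidden // heads                      # 128

attn = 2 * hidden * hidden                      # q_proj + o_proj
attn += 2 * (kv_heads * head_dim) * hidden      # k_proj + v_proj (grouped-query, 8x smaller)
mlp = 3 * hidden * inter                        # gate_proj + up_proj + down_proj
norms = 2 * hidden                              # input + post-attention RMSNorm

total = layers * (attn + mlp + norms)
total += 2 * vocab * hidden + hidden            # embed_tokens + lm_head (untied) + final norm
print(f"{total:,}")        # 33,743,970,304 parameters (~33.7B)
print(f"{2 * total:,}")    # 67,487,940,608 bytes in bfloat16
```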
generation_config.json ADDED
@@ -0,0 +1,6 @@
+ {
+   "_from_model_config": true,
+   "bos_token_id": 1,
+   "eos_token_id": 2,
+   "transformers_version": "4.32.0.dev0"
+ }
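`"_from_model_config": true` indicates this file was generated automatically from the defaults in `config.json` rather than tuned by hand; it only pins the BOS/EOS token IDs, so all decoding behaviour is left to the arguments passed to `generate()`. A quick inspection sketch (the `"./"` path is hypothetical, as in `main.py` below):

```python
from transformers import GenerationConfig

gen_cfg = GenerationConfig.from_pretrained("./")   # directory containing generation_config.json
print(gen_cfg.bos_token_id, gen_cfg.eos_token_id)  # 1 2
```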
main.py ADDED
@@ -0,0 +1,9 @@
+ # pip install git+https://github.com/huggingface/transformers.git@main accelerate
+ from transformers import LlamaTokenizer, AutoModelForCausalLM
+
+ tokenizer = LlamaTokenizer.from_pretrained("./")
+ model = AutoModelForCausalLM.from_pretrained("./")
+
+ inputs = tokenizer("A cat sat", return_tensors="pt")["input_ids"]
+ outputs = model.generate(inputs, max_new_tokens=5)
+ print(tokenizer.decode(outputs[0]))
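One caveat on the script above: as written, `from_pretrained("./")` materializes all ~67 GB of bfloat16 weights in CPU memory. Since the pip command already installs `accelerate`, a variant like the following sketch (assuming sufficient GPU memory is available and the shards sit in the working directory) would dispatch the layers automatically:

```python
import torch
from transformers import LlamaTokenizer, AutoModelForCausalLM

tokenizer = LlamaTokenizer.from_pretrained("./")
model = AutoModelForCausalLM.from_pretrained(
    "./",
    torch_dtype=torch.bfloat16,  # keep the checkpoint's native dtype
    device_map="auto",           # let accelerate place layers on GPUs/CPU
)

inputs = tokenizer("A cat sat", return_tensors="pt")["input_ids"].to(model.device)
outputs = model.generate(inputs, max_new_tokens=5)
print(tokenizer.decode(outputs[0]))
```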
model.safetensors.index.json ADDED
@@ -0,0 +1,442 @@
+ {
+   "metadata": {
+     "total_size": 67487940608
+   },
+   "weight_map": {
+     "model.layers.0.self_attn.q_proj.weight": "model_00001-of-00049.safetensors",
+     "model.layers.0.self_attn.k_proj.weight": "model_00001-of-00049.safetensors",
+     "model.layers.0.self_attn.v_proj.weight": "model_00001-of-00049.safetensors",
+     "model.layers.0.self_attn.o_proj.weight": "model_00001-of-00049.safetensors",
+     "model.layers.0.mlp.gate_proj.weight": "model_00001-of-00049.safetensors",
+     "model.layers.0.mlp.up_proj.weight": "model_00001-of-00049.safetensors",
+     "model.layers.0.mlp.down_proj.weight": "model_00001-of-00049.safetensors",
+     "model.layers.0.input_layernorm.weight": "model_00001-of-00049.safetensors",
+     "model.layers.0.post_attention_layernorm.weight": "model_00001-of-00049.safetensors",
+     "model.layers.1.self_attn.q_proj.weight": "model_00002-of-00049.safetensors",
+     "model.layers.1.self_attn.k_proj.weight": "model_00002-of-00049.safetensors",
+     "model.layers.1.self_attn.v_proj.weight": "model_00002-of-00049.safetensors",
+     "model.layers.1.self_attn.o_proj.weight": "model_00002-of-00049.safetensors",
+     "model.layers.1.mlp.gate_proj.weight": "model_00002-of-00049.safetensors",
+     "model.layers.1.mlp.up_proj.weight": "model_00002-of-00049.safetensors",
+     "model.layers.1.mlp.down_proj.weight": "model_00002-of-00049.safetensors",
+     "model.layers.1.input_layernorm.weight": "model_00002-of-00049.safetensors",
+     "model.layers.1.post_attention_layernorm.weight": "model_00002-of-00049.safetensors",
+     "model.layers.2.self_attn.q_proj.weight": "model_00003-of-00049.safetensors",
+     "model.layers.2.self_attn.k_proj.weight": "model_00003-of-00049.safetensors",
+     "model.layers.2.self_attn.v_proj.weight": "model_00003-of-00049.safetensors",
+     "model.layers.2.self_attn.o_proj.weight": "model_00003-of-00049.safetensors",
+     "model.layers.2.mlp.gate_proj.weight": "model_00003-of-00049.safetensors",
+     "model.layers.2.mlp.up_proj.weight": "model_00003-of-00049.safetensors",
+     "model.layers.2.mlp.down_proj.weight": "model_00003-of-00049.safetensors",
+     "model.layers.2.input_layernorm.weight": "model_00003-of-00049.safetensors",
+     "model.layers.2.post_attention_layernorm.weight": "model_00003-of-00049.safetensors",
+     "model.layers.3.self_attn.q_proj.weight": "model_00004-of-00049.safetensors",
+     "model.layers.3.self_attn.k_proj.weight": "model_00004-of-00049.safetensors",
+     "model.layers.3.self_attn.v_proj.weight": "model_00004-of-00049.safetensors",
+     "model.layers.3.self_attn.o_proj.weight": "model_00004-of-00049.safetensors",
+     "model.layers.3.mlp.gate_proj.weight": "model_00004-of-00049.safetensors",
+     "model.layers.3.mlp.up_proj.weight": "model_00004-of-00049.safetensors",
+     "model.layers.3.mlp.down_proj.weight": "model_00004-of-00049.safetensors",
+     "model.layers.3.input_layernorm.weight": "model_00004-of-00049.safetensors",
+     "model.layers.3.post_attention_layernorm.weight": "model_00004-of-00049.safetensors",
+     "model.layers.4.self_attn.q_proj.weight": "model_00005-of-00049.safetensors",
+     "model.layers.4.self_attn.k_proj.weight": "model_00005-of-00049.safetensors",
+     "model.layers.4.self_attn.v_proj.weight": "model_00005-of-00049.safetensors",
+     "model.layers.4.self_attn.o_proj.weight": "model_00005-of-00049.safetensors",
+     "model.layers.4.mlp.gate_proj.weight": "model_00005-of-00049.safetensors",
+     "model.layers.4.mlp.up_proj.weight": "model_00005-of-00049.safetensors",
+     "model.layers.4.mlp.down_proj.weight": "model_00005-of-00049.safetensors",
+     "model.layers.4.input_layernorm.weight": "model_00005-of-00049.safetensors",
+     "model.layers.4.post_attention_layernorm.weight": "model_00005-of-00049.safetensors",
+     "model.layers.5.self_attn.q_proj.weight": "model_00006-of-00049.safetensors",
+     "model.layers.5.self_attn.k_proj.weight": "model_00006-of-00049.safetensors",
+     "model.layers.5.self_attn.v_proj.weight": "model_00006-of-00049.safetensors",
+     "model.layers.5.self_attn.o_proj.weight": "model_00006-of-00049.safetensors",
+     "model.layers.5.mlp.gate_proj.weight": "model_00006-of-00049.safetensors",
+     "model.layers.5.mlp.up_proj.weight": "model_00006-of-00049.safetensors",
+     "model.layers.5.mlp.down_proj.weight": "model_00006-of-00049.safetensors",
+     "model.layers.5.input_layernorm.weight": "model_00006-of-00049.safetensors",
+     "model.layers.5.post_attention_layernorm.weight": "model_00006-of-00049.safetensors",
+     "model.layers.6.self_attn.q_proj.weight": "model_00007-of-00049.safetensors",
+     "model.layers.6.self_attn.k_proj.weight": "model_00007-of-00049.safetensors",
+     "model.layers.6.self_attn.v_proj.weight": "model_00007-of-00049.safetensors",
+     "model.layers.6.self_attn.o_proj.weight": "model_00007-of-00049.safetensors",
+     "model.layers.6.mlp.gate_proj.weight": "model_00007-of-00049.safetensors",
+     "model.layers.6.mlp.up_proj.weight": "model_00007-of-00049.safetensors",
+     "model.layers.6.mlp.down_proj.weight": "model_00007-of-00049.safetensors",
+     "model.layers.6.input_layernorm.weight": "model_00007-of-00049.safetensors",
+     "model.layers.6.post_attention_layernorm.weight": "model_00007-of-00049.safetensors",
+     "model.layers.7.self_attn.q_proj.weight": "model_00008-of-00049.safetensors",
+     "model.layers.7.self_attn.k_proj.weight": "model_00008-of-00049.safetensors",
+     "model.layers.7.self_attn.v_proj.weight": "model_00008-of-00049.safetensors",
+     "model.layers.7.self_attn.o_proj.weight": "model_00008-of-00049.safetensors",
+     "model.layers.7.mlp.gate_proj.weight": "model_00008-of-00049.safetensors",
+     "model.layers.7.mlp.up_proj.weight": "model_00008-of-00049.safetensors",
+     "model.layers.7.mlp.down_proj.weight": "model_00008-of-00049.safetensors",
+     "model.layers.7.input_layernorm.weight": "model_00008-of-00049.safetensors",
+     "model.layers.7.post_attention_layernorm.weight": "model_00008-of-00049.safetensors",
+     "model.layers.8.self_attn.q_proj.weight": "model_00009-of-00049.safetensors",
+     "model.layers.8.self_attn.k_proj.weight": "model_00009-of-00049.safetensors",
+     "model.layers.8.self_attn.v_proj.weight": "model_00009-of-00049.safetensors",
+     "model.layers.8.self_attn.o_proj.weight": "model_00009-of-00049.safetensors",
+     "model.layers.8.mlp.gate_proj.weight": "model_00009-of-00049.safetensors",
+     "model.layers.8.mlp.up_proj.weight": "model_00009-of-00049.safetensors",
+     "model.layers.8.mlp.down_proj.weight": "model_00009-of-00049.safetensors",
+     "model.layers.8.input_layernorm.weight": "model_00009-of-00049.safetensors",
+     "model.layers.8.post_attention_layernorm.weight": "model_00009-of-00049.safetensors",
+     "model.layers.9.self_attn.q_proj.weight": "model_00010-of-00049.safetensors",
+     "model.layers.9.self_attn.k_proj.weight": "model_00010-of-00049.safetensors",
+     "model.layers.9.self_attn.v_proj.weight": "model_00010-of-00049.safetensors",
+     "model.layers.9.self_attn.o_proj.weight": "model_00010-of-00049.safetensors",
+     "model.layers.9.mlp.gate_proj.weight": "model_00010-of-00049.safetensors",
+     "model.layers.9.mlp.up_proj.weight": "model_00010-of-00049.safetensors",
+     "model.layers.9.mlp.down_proj.weight": "model_00010-of-00049.safetensors",
+     "model.layers.9.input_layernorm.weight": "model_00010-of-00049.safetensors",
+     "model.layers.9.post_attention_layernorm.weight": "model_00010-of-00049.safetensors",
+     "model.layers.10.self_attn.q_proj.weight": "model_00011-of-00049.safetensors",
+     "model.layers.10.self_attn.k_proj.weight": "model_00011-of-00049.safetensors",
+     "model.layers.10.self_attn.v_proj.weight": "model_00011-of-00049.safetensors",
+     "model.layers.10.self_attn.o_proj.weight": "model_00011-of-00049.safetensors",
+     "model.layers.10.mlp.gate_proj.weight": "model_00011-of-00049.safetensors",
+     "model.layers.10.mlp.up_proj.weight": "model_00011-of-00049.safetensors",
+     "model.layers.10.mlp.down_proj.weight": "model_00011-of-00049.safetensors",
+     "model.layers.10.input_layernorm.weight": "model_00011-of-00049.safetensors",
+     "model.layers.10.post_attention_layernorm.weight": "model_00011-of-00049.safetensors",
+     "model.layers.11.self_attn.q_proj.weight": "model_00012-of-00049.safetensors",
+     "model.layers.11.self_attn.k_proj.weight": "model_00012-of-00049.safetensors",
+     "model.layers.11.self_attn.v_proj.weight": "model_00012-of-00049.safetensors",
+     "model.layers.11.self_attn.o_proj.weight": "model_00012-of-00049.safetensors",
+     "model.layers.11.mlp.gate_proj.weight": "model_00012-of-00049.safetensors",
+     "model.layers.11.mlp.up_proj.weight": "model_00012-of-00049.safetensors",
+     "model.layers.11.mlp.down_proj.weight": "model_00012-of-00049.safetensors",
+     "model.layers.11.input_layernorm.weight": "model_00012-of-00049.safetensors",
+     "model.layers.11.post_attention_layernorm.weight": "model_00012-of-00049.safetensors",
+     "model.layers.12.self_attn.q_proj.weight": "model_00013-of-00049.safetensors",
+     "model.layers.12.self_attn.k_proj.weight": "model_00013-of-00049.safetensors",
+     "model.layers.12.self_attn.v_proj.weight": "model_00013-of-00049.safetensors",
+     "model.layers.12.self_attn.o_proj.weight": "model_00013-of-00049.safetensors",
+     "model.layers.12.mlp.gate_proj.weight": "model_00013-of-00049.safetensors",
+     "model.layers.12.mlp.up_proj.weight": "model_00013-of-00049.safetensors",
+     "model.layers.12.mlp.down_proj.weight": "model_00013-of-00049.safetensors",
+     "model.layers.12.input_layernorm.weight": "model_00013-of-00049.safetensors",
+     "model.layers.12.post_attention_layernorm.weight": "model_00013-of-00049.safetensors",
+     "model.layers.13.self_attn.q_proj.weight": "model_00014-of-00049.safetensors",
+     "model.layers.13.self_attn.k_proj.weight": "model_00014-of-00049.safetensors",
+     "model.layers.13.self_attn.v_proj.weight": "model_00014-of-00049.safetensors",
+     "model.layers.13.self_attn.o_proj.weight": "model_00014-of-00049.safetensors",
+     "model.layers.13.mlp.gate_proj.weight": "model_00014-of-00049.safetensors",
+     "model.layers.13.mlp.up_proj.weight": "model_00014-of-00049.safetensors",
+     "model.layers.13.mlp.down_proj.weight": "model_00014-of-00049.safetensors",
+     "model.layers.13.input_layernorm.weight": "model_00014-of-00049.safetensors",
+     "model.layers.13.post_attention_layernorm.weight": "model_00014-of-00049.safetensors",
+     "model.layers.14.self_attn.q_proj.weight": "model_00015-of-00049.safetensors",
+     "model.layers.14.self_attn.k_proj.weight": "model_00015-of-00049.safetensors",
+     "model.layers.14.self_attn.v_proj.weight": "model_00015-of-00049.safetensors",
+     "model.layers.14.self_attn.o_proj.weight": "model_00015-of-00049.safetensors",
+     "model.layers.14.mlp.gate_proj.weight": "model_00015-of-00049.safetensors",
+     "model.layers.14.mlp.up_proj.weight": "model_00015-of-00049.safetensors",
+     "model.layers.14.mlp.down_proj.weight": "model_00015-of-00049.safetensors",
+     "model.layers.14.input_layernorm.weight": "model_00015-of-00049.safetensors",
+     "model.layers.14.post_attention_layernorm.weight": "model_00015-of-00049.safetensors",
+     "model.layers.15.self_attn.q_proj.weight": "model_00016-of-00049.safetensors",
+     "model.layers.15.self_attn.k_proj.weight": "model_00016-of-00049.safetensors",
+     "model.layers.15.self_attn.v_proj.weight": "model_00016-of-00049.safetensors",
+     "model.layers.15.self_attn.o_proj.weight": "model_00016-of-00049.safetensors",
+     "model.layers.15.mlp.gate_proj.weight": "model_00016-of-00049.safetensors",
+     "model.layers.15.mlp.up_proj.weight": "model_00016-of-00049.safetensors",
+     "model.layers.15.mlp.down_proj.weight": "model_00016-of-00049.safetensors",
+     "model.layers.15.input_layernorm.weight": "model_00016-of-00049.safetensors",
+     "model.layers.15.post_attention_layernorm.weight": "model_00016-of-00049.safetensors",
+     "model.layers.16.self_attn.q_proj.weight": "model_00017-of-00049.safetensors",
+     "model.layers.16.self_attn.k_proj.weight": "model_00017-of-00049.safetensors",
+     "model.layers.16.self_attn.v_proj.weight": "model_00017-of-00049.safetensors",
+     "model.layers.16.self_attn.o_proj.weight": "model_00017-of-00049.safetensors",
+     "model.layers.16.mlp.gate_proj.weight": "model_00017-of-00049.safetensors",
+     "model.layers.16.mlp.up_proj.weight": "model_00017-of-00049.safetensors",
+     "model.layers.16.mlp.down_proj.weight": "model_00017-of-00049.safetensors",
+     "model.layers.16.input_layernorm.weight": "model_00017-of-00049.safetensors",
+     "model.layers.16.post_attention_layernorm.weight": "model_00017-of-00049.safetensors",
+     "model.layers.17.self_attn.q_proj.weight": "model_00018-of-00049.safetensors",
+     "model.layers.17.self_attn.k_proj.weight": "model_00018-of-00049.safetensors",
+     "model.layers.17.self_attn.v_proj.weight": "model_00018-of-00049.safetensors",
+     "model.layers.17.self_attn.o_proj.weight": "model_00018-of-00049.safetensors",
+     "model.layers.17.mlp.gate_proj.weight": "model_00018-of-00049.safetensors",
+     "model.layers.17.mlp.up_proj.weight": "model_00018-of-00049.safetensors",
+     "model.layers.17.mlp.down_proj.weight": "model_00018-of-00049.safetensors",
+     "model.layers.17.input_layernorm.weight": "model_00018-of-00049.safetensors",
+     "model.layers.17.post_attention_layernorm.weight": "model_00018-of-00049.safetensors",
+     "model.layers.18.self_attn.q_proj.weight": "model_00019-of-00049.safetensors",
+     "model.layers.18.self_attn.k_proj.weight": "model_00019-of-00049.safetensors",
+     "model.layers.18.self_attn.v_proj.weight": "model_00019-of-00049.safetensors",
+     "model.layers.18.self_attn.o_proj.weight": "model_00019-of-00049.safetensors",
+     "model.layers.18.mlp.gate_proj.weight": "model_00019-of-00049.safetensors",
+     "model.layers.18.mlp.up_proj.weight": "model_00019-of-00049.safetensors",
+     "model.layers.18.mlp.down_proj.weight": "model_00019-of-00049.safetensors",
+     "model.layers.18.input_layernorm.weight": "model_00019-of-00049.safetensors",
+     "model.layers.18.post_attention_layernorm.weight": "model_00019-of-00049.safetensors",
+     "model.layers.19.self_attn.q_proj.weight": "model_00020-of-00049.safetensors",
+     "model.layers.19.self_attn.k_proj.weight": "model_00020-of-00049.safetensors",
+     "model.layers.19.self_attn.v_proj.weight": "model_00020-of-00049.safetensors",
+     "model.layers.19.self_attn.o_proj.weight": "model_00020-of-00049.safetensors",
+     "model.layers.19.mlp.gate_proj.weight": "model_00020-of-00049.safetensors",
+     "model.layers.19.mlp.up_proj.weight": "model_00020-of-00049.safetensors",
+     "model.layers.19.mlp.down_proj.weight": "model_00020-of-00049.safetensors",
+     "model.layers.19.input_layernorm.weight": "model_00020-of-00049.safetensors",
+     "model.layers.19.post_attention_layernorm.weight": "model_00020-of-00049.safetensors",
+     "model.layers.20.self_attn.q_proj.weight": "model_00021-of-00049.safetensors",
+     "model.layers.20.self_attn.k_proj.weight": "model_00021-of-00049.safetensors",
+     "model.layers.20.self_attn.v_proj.weight": "model_00021-of-00049.safetensors",
+     "model.layers.20.self_attn.o_proj.weight": "model_00021-of-00049.safetensors",
+     "model.layers.20.mlp.gate_proj.weight": "model_00021-of-00049.safetensors",
+     "model.layers.20.mlp.up_proj.weight": "model_00021-of-00049.safetensors",
+     "model.layers.20.mlp.down_proj.weight": "model_00021-of-00049.safetensors",
+     "model.layers.20.input_layernorm.weight": "model_00021-of-00049.safetensors",
+     "model.layers.20.post_attention_layernorm.weight": "model_00021-of-00049.safetensors",
+     "model.layers.21.self_attn.q_proj.weight": "model_00022-of-00049.safetensors",
+     "model.layers.21.self_attn.k_proj.weight": "model_00022-of-00049.safetensors",
+     "model.layers.21.self_attn.v_proj.weight": "model_00022-of-00049.safetensors",
+     "model.layers.21.self_attn.o_proj.weight": "model_00022-of-00049.safetensors",
+     "model.layers.21.mlp.gate_proj.weight": "model_00022-of-00049.safetensors",
+     "model.layers.21.mlp.up_proj.weight": "model_00022-of-00049.safetensors",
+     "model.layers.21.mlp.down_proj.weight": "model_00022-of-00049.safetensors",
+     "model.layers.21.input_layernorm.weight": "model_00022-of-00049.safetensors",
+     "model.layers.21.post_attention_layernorm.weight": "model_00022-of-00049.safetensors",
+     "model.layers.22.self_attn.q_proj.weight": "model_00023-of-00049.safetensors",
+     "model.layers.22.self_attn.k_proj.weight": "model_00023-of-00049.safetensors",
+     "model.layers.22.self_attn.v_proj.weight": "model_00023-of-00049.safetensors",
+     "model.layers.22.self_attn.o_proj.weight": "model_00023-of-00049.safetensors",
+     "model.layers.22.mlp.gate_proj.weight": "model_00023-of-00049.safetensors",
+     "model.layers.22.mlp.up_proj.weight": "model_00023-of-00049.safetensors",
+     "model.layers.22.mlp.down_proj.weight": "model_00023-of-00049.safetensors",
+     "model.layers.22.input_layernorm.weight": "model_00023-of-00049.safetensors",
+     "model.layers.22.post_attention_layernorm.weight": "model_00023-of-00049.safetensors",
+     "model.layers.23.self_attn.q_proj.weight": "model_00024-of-00049.safetensors",
+     "model.layers.23.self_attn.k_proj.weight": "model_00024-of-00049.safetensors",
+     "model.layers.23.self_attn.v_proj.weight": "model_00024-of-00049.safetensors",
+     "model.layers.23.self_attn.o_proj.weight": "model_00024-of-00049.safetensors",
+     "model.layers.23.mlp.gate_proj.weight": "model_00024-of-00049.safetensors",
+     "model.layers.23.mlp.up_proj.weight": "model_00024-of-00049.safetensors",
+     "model.layers.23.mlp.down_proj.weight": "model_00024-of-00049.safetensors",
+     "model.layers.23.input_layernorm.weight": "model_00024-of-00049.safetensors",
+     "model.layers.23.post_attention_layernorm.weight": "model_00024-of-00049.safetensors",
+     "model.layers.24.self_attn.q_proj.weight": "model_00025-of-00049.safetensors",
+     "model.layers.24.self_attn.k_proj.weight": "model_00025-of-00049.safetensors",
+     "model.layers.24.self_attn.v_proj.weight": "model_00025-of-00049.safetensors",
+     "model.layers.24.self_attn.o_proj.weight": "model_00025-of-00049.safetensors",
+     "model.layers.24.mlp.gate_proj.weight": "model_00025-of-00049.safetensors",
+     "model.layers.24.mlp.up_proj.weight": "model_00025-of-00049.safetensors",
+     "model.layers.24.mlp.down_proj.weight": "model_00025-of-00049.safetensors",
+     "model.layers.24.input_layernorm.weight": "model_00025-of-00049.safetensors",
+     "model.layers.24.post_attention_layernorm.weight": "model_00025-of-00049.safetensors",
+     "model.layers.25.self_attn.q_proj.weight": "model_00026-of-00049.safetensors",
+     "model.layers.25.self_attn.k_proj.weight": "model_00026-of-00049.safetensors",
+     "model.layers.25.self_attn.v_proj.weight": "model_00026-of-00049.safetensors",
+     "model.layers.25.self_attn.o_proj.weight": "model_00026-of-00049.safetensors",
+     "model.layers.25.mlp.gate_proj.weight": "model_00026-of-00049.safetensors",
+     "model.layers.25.mlp.up_proj.weight": "model_00026-of-00049.safetensors",
+     "model.layers.25.mlp.down_proj.weight": "model_00026-of-00049.safetensors",
+     "model.layers.25.input_layernorm.weight": "model_00026-of-00049.safetensors",
+     "model.layers.25.post_attention_layernorm.weight": "model_00026-of-00049.safetensors",
+     "model.layers.26.self_attn.q_proj.weight": "model_00027-of-00049.safetensors",
+     "model.layers.26.self_attn.k_proj.weight": "model_00027-of-00049.safetensors",
+     "model.layers.26.self_attn.v_proj.weight": "model_00027-of-00049.safetensors",
+     "model.layers.26.self_attn.o_proj.weight": "model_00027-of-00049.safetensors",
+     "model.layers.26.mlp.gate_proj.weight": "model_00027-of-00049.safetensors",
+     "model.layers.26.mlp.up_proj.weight": "model_00027-of-00049.safetensors",
+     "model.layers.26.mlp.down_proj.weight": "model_00027-of-00049.safetensors",
+     "model.layers.26.input_layernorm.weight": "model_00027-of-00049.safetensors",
+     "model.layers.26.post_attention_layernorm.weight": "model_00027-of-00049.safetensors",
+     "model.layers.27.self_attn.q_proj.weight": "model_00028-of-00049.safetensors",
+     "model.layers.27.self_attn.k_proj.weight": "model_00028-of-00049.safetensors",
+     "model.layers.27.self_attn.v_proj.weight": "model_00028-of-00049.safetensors",
+     "model.layers.27.self_attn.o_proj.weight": "model_00028-of-00049.safetensors",
+     "model.layers.27.mlp.gate_proj.weight": "model_00028-of-00049.safetensors",
+     "model.layers.27.mlp.up_proj.weight": "model_00028-of-00049.safetensors",
+     "model.layers.27.mlp.down_proj.weight": "model_00028-of-00049.safetensors",
+     "model.layers.27.input_layernorm.weight": "model_00028-of-00049.safetensors",
+     "model.layers.27.post_attention_layernorm.weight": "model_00028-of-00049.safetensors",
+     "model.layers.28.self_attn.q_proj.weight": "model_00029-of-00049.safetensors",
+     "model.layers.28.self_attn.k_proj.weight": "model_00029-of-00049.safetensors",
+     "model.layers.28.self_attn.v_proj.weight": "model_00029-of-00049.safetensors",
+     "model.layers.28.self_attn.o_proj.weight": "model_00029-of-00049.safetensors",
+     "model.layers.28.mlp.gate_proj.weight": "model_00029-of-00049.safetensors",
+     "model.layers.28.mlp.up_proj.weight": "model_00029-of-00049.safetensors",
+     "model.layers.28.mlp.down_proj.weight": "model_00029-of-00049.safetensors",
+     "model.layers.28.input_layernorm.weight": "model_00029-of-00049.safetensors",
+     "model.layers.28.post_attention_layernorm.weight": "model_00029-of-00049.safetensors",
+     "model.layers.29.self_attn.q_proj.weight": "model_00030-of-00049.safetensors",
+     "model.layers.29.self_attn.k_proj.weight": "model_00030-of-00049.safetensors",
+     "model.layers.29.self_attn.v_proj.weight": "model_00030-of-00049.safetensors",
+     "model.layers.29.self_attn.o_proj.weight": "model_00030-of-00049.safetensors",
+     "model.layers.29.mlp.gate_proj.weight": "model_00030-of-00049.safetensors",
+     "model.layers.29.mlp.up_proj.weight": "model_00030-of-00049.safetensors",
+     "model.layers.29.mlp.down_proj.weight": "model_00030-of-00049.safetensors",
+     "model.layers.29.input_layernorm.weight": "model_00030-of-00049.safetensors",
+     "model.layers.29.post_attention_layernorm.weight": "model_00030-of-00049.safetensors",
+     "model.layers.30.self_attn.q_proj.weight": "model_00031-of-00049.safetensors",
+     "model.layers.30.self_attn.k_proj.weight": "model_00031-of-00049.safetensors",
+     "model.layers.30.self_attn.v_proj.weight": "model_00031-of-00049.safetensors",
+     "model.layers.30.self_attn.o_proj.weight": "model_00031-of-00049.safetensors",
+     "model.layers.30.mlp.gate_proj.weight": "model_00031-of-00049.safetensors",
+     "model.layers.30.mlp.up_proj.weight": "model_00031-of-00049.safetensors",
+     "model.layers.30.mlp.down_proj.weight": "model_00031-of-00049.safetensors",
+     "model.layers.30.input_layernorm.weight": "model_00031-of-00049.safetensors",
+     "model.layers.30.post_attention_layernorm.weight": "model_00031-of-00049.safetensors",
+     "model.layers.31.self_attn.q_proj.weight": "model_00032-of-00049.safetensors",
+     "model.layers.31.self_attn.k_proj.weight": "model_00032-of-00049.safetensors",
+     "model.layers.31.self_attn.v_proj.weight": "model_00032-of-00049.safetensors",
+     "model.layers.31.self_attn.o_proj.weight": "model_00032-of-00049.safetensors",
+     "model.layers.31.mlp.gate_proj.weight": "model_00032-of-00049.safetensors",
+     "model.layers.31.mlp.up_proj.weight": "model_00032-of-00049.safetensors",
+     "model.layers.31.mlp.down_proj.weight": "model_00032-of-00049.safetensors",
+     "model.layers.31.input_layernorm.weight": "model_00032-of-00049.safetensors",
+     "model.layers.31.post_attention_layernorm.weight": "model_00032-of-00049.safetensors",
+     "model.layers.32.self_attn.q_proj.weight": "model_00033-of-00049.safetensors",
+     "model.layers.32.self_attn.k_proj.weight": "model_00033-of-00049.safetensors",
+     "model.layers.32.self_attn.v_proj.weight": "model_00033-of-00049.safetensors",
+     "model.layers.32.self_attn.o_proj.weight": "model_00033-of-00049.safetensors",
+     "model.layers.32.mlp.gate_proj.weight": "model_00033-of-00049.safetensors",
+     "model.layers.32.mlp.up_proj.weight": "model_00033-of-00049.safetensors",
+     "model.layers.32.mlp.down_proj.weight": "model_00033-of-00049.safetensors",
+     "model.layers.32.input_layernorm.weight": "model_00033-of-00049.safetensors",
+     "model.layers.32.post_attention_layernorm.weight": "model_00033-of-00049.safetensors",
+     "model.layers.33.self_attn.q_proj.weight": "model_00034-of-00049.safetensors",
+     "model.layers.33.self_attn.k_proj.weight": "model_00034-of-00049.safetensors",
+     "model.layers.33.self_attn.v_proj.weight": "model_00034-of-00049.safetensors",
+     "model.layers.33.self_attn.o_proj.weight": "model_00034-of-00049.safetensors",
+     "model.layers.33.mlp.gate_proj.weight": "model_00034-of-00049.safetensors",
+     "model.layers.33.mlp.up_proj.weight": "model_00034-of-00049.safetensors",
+     "model.layers.33.mlp.down_proj.weight": "model_00034-of-00049.safetensors",
+     "model.layers.33.input_layernorm.weight": "model_00034-of-00049.safetensors",
+     "model.layers.33.post_attention_layernorm.weight": "model_00034-of-00049.safetensors",
+     "model.layers.34.self_attn.q_proj.weight": "model_00035-of-00049.safetensors",
+     "model.layers.34.self_attn.k_proj.weight": "model_00035-of-00049.safetensors",
+     "model.layers.34.self_attn.v_proj.weight": "model_00035-of-00049.safetensors",
+     "model.layers.34.self_attn.o_proj.weight": "model_00035-of-00049.safetensors",
+     "model.layers.34.mlp.gate_proj.weight": "model_00035-of-00049.safetensors",
+     "model.layers.34.mlp.up_proj.weight": "model_00035-of-00049.safetensors",
+     "model.layers.34.mlp.down_proj.weight": "model_00035-of-00049.safetensors",
+     "model.layers.34.input_layernorm.weight": "model_00035-of-00049.safetensors",
+     "model.layers.34.post_attention_layernorm.weight": "model_00035-of-00049.safetensors",
+     "model.layers.35.self_attn.q_proj.weight": "model_00036-of-00049.safetensors",
+     "model.layers.35.self_attn.k_proj.weight": "model_00036-of-00049.safetensors",
+     "model.layers.35.self_attn.v_proj.weight": "model_00036-of-00049.safetensors",
+     "model.layers.35.self_attn.o_proj.weight": "model_00036-of-00049.safetensors",
+     "model.layers.35.mlp.gate_proj.weight": "model_00036-of-00049.safetensors",
+     "model.layers.35.mlp.up_proj.weight": "model_00036-of-00049.safetensors",
+     "model.layers.35.mlp.down_proj.weight": "model_00036-of-00049.safetensors",
+     "model.layers.35.input_layernorm.weight": "model_00036-of-00049.safetensors",
+     "model.layers.35.post_attention_layernorm.weight": "model_00036-of-00049.safetensors",
+     "model.layers.36.self_attn.q_proj.weight": "model_00037-of-00049.safetensors",
+     "model.layers.36.self_attn.k_proj.weight": "model_00037-of-00049.safetensors",
+     "model.layers.36.self_attn.v_proj.weight": "model_00037-of-00049.safetensors",
+     "model.layers.36.self_attn.o_proj.weight": "model_00037-of-00049.safetensors",
+     "model.layers.36.mlp.gate_proj.weight": "model_00037-of-00049.safetensors",
+     "model.layers.36.mlp.up_proj.weight": "model_00037-of-00049.safetensors",
+     "model.layers.36.mlp.down_proj.weight": "model_00037-of-00049.safetensors",
+     "model.layers.36.input_layernorm.weight": "model_00037-of-00049.safetensors",
+     "model.layers.36.post_attention_layernorm.weight": "model_00037-of-00049.safetensors",
+     "model.layers.37.self_attn.q_proj.weight": "model_00038-of-00049.safetensors",
+     "model.layers.37.self_attn.k_proj.weight": "model_00038-of-00049.safetensors",
+     "model.layers.37.self_attn.v_proj.weight": "model_00038-of-00049.safetensors",
+     "model.layers.37.self_attn.o_proj.weight": "model_00038-of-00049.safetensors",
+     "model.layers.37.mlp.gate_proj.weight": "model_00038-of-00049.safetensors",
+     "model.layers.37.mlp.up_proj.weight": "model_00038-of-00049.safetensors",
+     "model.layers.37.mlp.down_proj.weight": "model_00038-of-00049.safetensors",
+     "model.layers.37.input_layernorm.weight": "model_00038-of-00049.safetensors",
+     "model.layers.37.post_attention_layernorm.weight": "model_00038-of-00049.safetensors",
+     "model.layers.38.self_attn.q_proj.weight": "model_00039-of-00049.safetensors",
+     "model.layers.38.self_attn.k_proj.weight": "model_00039-of-00049.safetensors",
+     "model.layers.38.self_attn.v_proj.weight": "model_00039-of-00049.safetensors",
+     "model.layers.38.self_attn.o_proj.weight": "model_00039-of-00049.safetensors",
+     "model.layers.38.mlp.gate_proj.weight": "model_00039-of-00049.safetensors",
+     "model.layers.38.mlp.up_proj.weight": "model_00039-of-00049.safetensors",
+     "model.layers.38.mlp.down_proj.weight": "model_00039-of-00049.safetensors",
+     "model.layers.38.input_layernorm.weight": "model_00039-of-00049.safetensors",
+     "model.layers.38.post_attention_layernorm.weight": "model_00039-of-00049.safetensors",
+     "model.layers.39.self_attn.q_proj.weight": "model_00040-of-00049.safetensors",
+     "model.layers.39.self_attn.k_proj.weight": "model_00040-of-00049.safetensors",
+     "model.layers.39.self_attn.v_proj.weight": "model_00040-of-00049.safetensors",
+     "model.layers.39.self_attn.o_proj.weight": "model_00040-of-00049.safetensors",
+     "model.layers.39.mlp.gate_proj.weight": "model_00040-of-00049.safetensors",
+     "model.layers.39.mlp.up_proj.weight": "model_00040-of-00049.safetensors",
+     "model.layers.39.mlp.down_proj.weight": "model_00040-of-00049.safetensors",
+     "model.layers.39.input_layernorm.weight": "model_00040-of-00049.safetensors",
+     "model.layers.39.post_attention_layernorm.weight": "model_00040-of-00049.safetensors",
+     "model.layers.40.self_attn.q_proj.weight": "model_00041-of-00049.safetensors",
+     "model.layers.40.self_attn.k_proj.weight": "model_00041-of-00049.safetensors",
+     "model.layers.40.self_attn.v_proj.weight": "model_00041-of-00049.safetensors",
+     "model.layers.40.self_attn.o_proj.weight": "model_00041-of-00049.safetensors",
+     "model.layers.40.mlp.gate_proj.weight": "model_00041-of-00049.safetensors",
+     "model.layers.40.mlp.up_proj.weight": "model_00041-of-00049.safetensors",
+     "model.layers.40.mlp.down_proj.weight": "model_00041-of-00049.safetensors",
+     "model.layers.40.input_layernorm.weight": "model_00041-of-00049.safetensors",
+     "model.layers.40.post_attention_layernorm.weight": "model_00041-of-00049.safetensors",
+     "model.layers.41.self_attn.q_proj.weight": "model_00042-of-00049.safetensors",
+     "model.layers.41.self_attn.k_proj.weight": "model_00042-of-00049.safetensors",
+     "model.layers.41.self_attn.v_proj.weight": "model_00042-of-00049.safetensors",
+     "model.layers.41.self_attn.o_proj.weight": "model_00042-of-00049.safetensors",
+     "model.layers.41.mlp.gate_proj.weight": "model_00042-of-00049.safetensors",
+     "model.layers.41.mlp.up_proj.weight": "model_00042-of-00049.safetensors",
+     "model.layers.41.mlp.down_proj.weight": "model_00042-of-00049.safetensors",
+     "model.layers.41.input_layernorm.weight": "model_00042-of-00049.safetensors",
+     "model.layers.41.post_attention_layernorm.weight": "model_00042-of-00049.safetensors",
+     "model.layers.42.self_attn.q_proj.weight": "model_00043-of-00049.safetensors",
+     "model.layers.42.self_attn.k_proj.weight": "model_00043-of-00049.safetensors",
+     "model.layers.42.self_attn.v_proj.weight": "model_00043-of-00049.safetensors",
+     "model.layers.42.self_attn.o_proj.weight": "model_00043-of-00049.safetensors",
+     "model.layers.42.mlp.gate_proj.weight": "model_00043-of-00049.safetensors",
+     "model.layers.42.mlp.up_proj.weight": "model_00043-of-00049.safetensors",
+     "model.layers.42.mlp.down_proj.weight": "model_00043-of-00049.safetensors",
+     "model.layers.42.input_layernorm.weight": "model_00043-of-00049.safetensors",
+     "model.layers.42.post_attention_layernorm.weight": "model_00043-of-00049.safetensors",
+     "model.layers.43.self_attn.q_proj.weight": "model_00044-of-00049.safetensors",
+     "model.layers.43.self_attn.k_proj.weight": "model_00044-of-00049.safetensors",
+     "model.layers.43.self_attn.v_proj.weight": "model_00044-of-00049.safetensors",
+     "model.layers.43.self_attn.o_proj.weight": "model_00044-of-00049.safetensors",
+     "model.layers.43.mlp.gate_proj.weight": "model_00044-of-00049.safetensors",
+     "model.layers.43.mlp.up_proj.weight": "model_00044-of-00049.safetensors",
+     "model.layers.43.mlp.down_proj.weight": "model_00044-of-00049.safetensors",
+     "model.layers.43.input_layernorm.weight": "model_00044-of-00049.safetensors",
+     "model.layers.43.post_attention_layernorm.weight": "model_00044-of-00049.safetensors",
+     "model.layers.44.self_attn.q_proj.weight": "model_00045-of-00049.safetensors",
+     "model.layers.44.self_attn.k_proj.weight": "model_00045-of-00049.safetensors",
+     "model.layers.44.self_attn.v_proj.weight": "model_00045-of-00049.safetensors",
+     "model.layers.44.self_attn.o_proj.weight": "model_00045-of-00049.safetensors",
+     "model.layers.44.mlp.gate_proj.weight": "model_00045-of-00049.safetensors",
+     "model.layers.44.mlp.up_proj.weight": "model_00045-of-00049.safetensors",
+     "model.layers.44.mlp.down_proj.weight": "model_00045-of-00049.safetensors",
+     "model.layers.44.input_layernorm.weight": "model_00045-of-00049.safetensors",
+     "model.layers.44.post_attention_layernorm.weight": "model_00045-of-00049.safetensors",
+     "model.layers.45.self_attn.q_proj.weight": "model_00046-of-00049.safetensors",
+     "model.layers.45.self_attn.k_proj.weight": "model_00046-of-00049.safetensors",
+     "model.layers.45.self_attn.v_proj.weight": "model_00046-of-00049.safetensors",
+     "model.layers.45.self_attn.o_proj.weight": "model_00046-of-00049.safetensors",
+     "model.layers.45.mlp.gate_proj.weight": "model_00046-of-00049.safetensors",
+     "model.layers.45.mlp.up_proj.weight": "model_00046-of-00049.safetensors",
+     "model.layers.45.mlp.down_proj.weight": "model_00046-of-00049.safetensors",
+     "model.layers.45.input_layernorm.weight": "model_00046-of-00049.safetensors",
+     "model.layers.45.post_attention_layernorm.weight": "model_00046-of-00049.safetensors",
+     "model.layers.46.self_attn.q_proj.weight": "model_00047-of-00049.safetensors",
+     "model.layers.46.self_attn.k_proj.weight": "model_00047-of-00049.safetensors",
+     "model.layers.46.self_attn.v_proj.weight": "model_00047-of-00049.safetensors",
+     "model.layers.46.self_attn.o_proj.weight": "model_00047-of-00049.safetensors",
+     "model.layers.46.mlp.gate_proj.weight": "model_00047-of-00049.safetensors",
+     "model.layers.46.mlp.up_proj.weight": "model_00047-of-00049.safetensors",
+     "model.layers.46.mlp.down_proj.weight": "model_00047-of-00049.safetensors",
+     "model.layers.46.input_layernorm.weight": "model_00047-of-00049.safetensors",
+     "model.layers.46.post_attention_layernorm.weight": "model_00047-of-00049.safetensors",
+     "model.layers.47.self_attn.q_proj.weight": "model_00048-of-00049.safetensors",
+     "model.layers.47.self_attn.k_proj.weight": "model_00048-of-00049.safetensors",
+     "model.layers.47.self_attn.v_proj.weight": "model_00048-of-00049.safetensors",
+     "model.layers.47.self_attn.o_proj.weight": "model_00048-of-00049.safetensors",
+     "model.layers.47.mlp.gate_proj.weight": "model_00048-of-00049.safetensors",
+     "model.layers.47.mlp.up_proj.weight": "model_00048-of-00049.safetensors",
+     "model.layers.47.mlp.down_proj.weight": "model_00048-of-00049.safetensors",
+     "model.layers.47.input_layernorm.weight": "model_00048-of-00049.safetensors",
+     "model.layers.47.post_attention_layernorm.weight": "model_00048-of-00049.safetensors",
+     "model.embed_tokens.weight": "model_00049-of-00049.safetensors",
+     "model.norm.weight": "model_00049-of-00049.safetensors",
+     "lm_head.weight": "model_00049-of-00049.safetensors"
+   }
+ }
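The weight map follows a regular pattern: the nine weight tensors of decoder layer i live in shard i+1, one layer per file, while the embeddings, final norm, and LM head share the last shard. A short sketch (not part of the commit) of the same shard lookup `transformers` performs when loading from this index:

```python
import json
from collections import defaultdict

with open("model.safetensors.index.json") as f:
    index = json.load(f)

# Group tensor names by the shard file that stores them.
shards = defaultdict(list)
for tensor_name, shard_file in index["weight_map"].items():
    shards[shard_file].append(tensor_name)

print(len(shards))                                        # 49
print(sorted(shards["model_00049-of-00049.safetensors"]))
# ['lm_head.weight', 'model.embed_tokens.weight', 'model.norm.weight']
print(index["metadata"]["total_size"] / 2**30)            # ~62.9 GiB of bf16 weights
```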
model_00001-of-00049.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:977d8e9b1d59f2eb4c1c6329d465a2d146a05f554f8c23df0de5d621b7abad78
+ size 1384154152
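Each shard in this commit is stored as a Git LFS pointer like the one above rather than as the payload itself: `oid` is the SHA-256 digest of the real file and `size` its byte length. A sketch for verifying a downloaded shard against its pointer (file paths are hypothetical):

```python
import hashlib

def verify_lfs_pointer(pointer_path: str, payload_path: str) -> bool:
    """Check a downloaded file against the oid/size fields of its LFS pointer."""
    fields = dict(
        line.split(" ", 1) for line in open(pointer_path).read().splitlines() if line
    )
    expected_oid = fields["oid"].removeprefix("sha256:")
    expected_size = int(fields["size"])

    digest, size = hashlib.sha256(), 0
    with open(payload_path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):  # stream in 1 MiB chunks
            digest.update(chunk)
            size += len(chunk)
    return digest.hexdigest() == expected_oid and size == expected_size
```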
model_00002-of-00049.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:fd6c11a27006562cf2416bfc556dcf62be557bd7641e8b6b273bd4f3d30316b1
+ size 1384154152
model_00003-of-00049.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:30ec4f52c5a9dc5723327461b7ea2be130cfc85daeeb05d1eb4c8c28b5bee7eb
+ size 1384154152
model_00004-of-00049.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:486fb915637ebf313a24c5cd1c3a25525a2bdf33848351933890f643d1fcfae3
+ size 1384154152
model_00005-of-00049.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:5d5b0f821915cecf12f4ccbc504e5b3454e23bc45ed7dc7f9718a12d725f5fd1
+ size 1384154152
model_00006-of-00049.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ec9cd899f0c543cdd3412a4066a1ecbab9a5e0305d1686bb258490384257fe59
+ size 1384154152
model_00007-of-00049.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:731e101fb6642de5bc1cb9a4061c4da3ff4fda64acc0990bab6f0c49671dbb0f
+ size 1384154152
model_00008-of-00049.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:f7af23b4aa9d81de1b506aa9dca368acbf98c624d51f449934a669c6e037e658
+ size 1384154152
model_00009-of-00049.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:9461be370fd3dd7f8ad24a69397701f749e3d2df42533b8eb618013fde5588c9
+ size 1384154152
model_00010-of-00049.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:8f9f32b8a1f28df18e1d73304949dfd95c828d7228b39e7deb65e6a623a4e305
+ size 1384154152
model_00011-of-00049.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:89be2fd7bf61624e6419d0ab2d01eefe089ca3d6ca095af45410978e0c49b36c
+ size 1384154160
model_00012-of-00049.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:73bd131eceaa7c9b8a49e9fcec7c60d497b84a964b8bbc05fa9eb11215fa0277
+ size 1384154160
model_00013-of-00049.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:f66da0a62f270f1f5167e29c0bdd7db708554b01f437d6469b6ff26715ec2943
+ size 1384154160
model_00014-of-00049.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:8526f3332840c355cb1c65dcbb7614f2e327f00d770296cec557ccdaac95cf25
+ size 1384154160
model_00015-of-00049.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:1d37179ca0d4e1d5c95455ea520c598722cc95d09f267b320d43e8ef57d6451c
+ size 1384154160
model_00016-of-00049.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:95b4257cce5d542f76b119589ac89b3660753a9e12b82eac23549c857ecc0b93
+ size 1384154160
model_00017-of-00049.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:9305707807354cc33259281b0e7e3e925ed73053037e13ed3383a2ef9d60ac83
+ size 1384154160
model_00018-of-00049.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:6755fd80f8efc3b614c3011326499c18db913192ba448c6dc7de3d25443c6cff
+ size 1384154160
model_00019-of-00049.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:b06c7e0d844365e180eabf15b26c7eeaf26766fe0f2247d73d4556ca352ed643
+ size 1384154160
model_00020-of-00049.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:4a820e5cb9dad64041a6531915103be94632f309e9f25c0a0cf27d96b132265d
+ size 1384154160
model_00021-of-00049.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:4780c90b41cf1b757565a3a19bb1a0a0323447369a474514d0c8b649de1a44ea
+ size 1384154160
model_00022-of-00049.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:179218a39f9651c1af1bae14540427aaeeac8393d5f939171b6b5affcf3ecc8a
+ size 1384154160
model_00023-of-00049.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:45575dc0b6ea6dbc99c3f9a1d4ee61928e94ff64b6d07e5918e5b51ea4946613
+ size 1384154160
model_00024-of-00049.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:1a1bad7aaa81e62b715f5437beb269247254ef3ac01d94ea19fc8d44f0d76a3f
+ size 1384154160
model_00025-of-00049.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:2cee256437b4d11baefbae0b17597e10cb70a3d2659ca3086b40bff6f1c142fc
+ size 1384154160
model_00026-of-00049.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:85e567f3f9b0fe71e591b9a2af7a17e2f5b25c7d040a7adcf59b6143b0038b27
+ size 1384154160
model_00027-of-00049.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:14926d35e6bb96131a0e97965c02d1450d5f5effa33b3295356ddbbd70183021
+ size 1384154160
model_00028-of-00049.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:148764b704da36c2b0707497c021f0392dba09d220c4bfe3f4eed199f8ae93a9
+ size 1384154160
model_00029-of-00049.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:baa6e644f785278cbe5693de5b7f86a32e22a822f7833ed9de944e47aa7d97e1
+ size 1384154160
model_00030-of-00049.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:9cf4278a29e8902011fb6084319557a80ebcc6cde730725f6e386f480dd750e6
+ size 1384154160
model_00031-of-00049.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ffa36effbdfef9ae8b4ce3bfca81dd2ac660cafb164f2cda457f3e93857a323c
+ size 1384154160
model_00032-of-00049.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:96c286f2b774fdbcf7a0348252c07802199921abd369d687fc47703bded05c29
+ size 1384154160
model_00033-of-00049.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:8102b7d9b735f6f155fee1ef76de2f8fc93e43fe87e1aa918a4d8df2c44676dc
+ size 1384154160
model_00034-of-00049.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:04ca88ea19209139e1ed49e8e913b728ed869e4f24536cc3d4e2d1cf9c2c12a6
+ size 1384154160
model_00035-of-00049.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:63cb5709f1f726776c334490596d406e828403cbe323d9e28aef870391d90900
+ size 1384154160
model_00036-of-00049.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:edfe861e7b1f0f6c491272f1e44e0fdcfdb3f7b23bc2afd5a11f66310ce8ec3b
+ size 1384154160
model_00037-of-00049.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:7b29636c7a638afee3de573b8610a042af6928d16d85fdf234f16fdd4a1a8aab
+ size 1384154160
model_00038-of-00049.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:321c588c48b82fdac5e573890e939d9b5b6f898966190f02c73430cd603e8f9b
+ size 1384154160
model_00039-of-00049.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:624a372f1789f43a76253c67c2af86e93f8f3674cea320be88cefad458a52b9a
+ size 1384154160
model_00040-of-00049.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:35fdf6170ca12add6123806d3e5e5af05bed28e5d55f98790fdd65343ee6e9a7
+ size 1384154160
model_00041-of-00049.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:9d64c50cf4c3218c545bdd5aa91c59d4238550188a273a5b76e9f1b1002c5231
+ size 1384154160
model_00042-of-00049.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c8710e8762a3dddf32e34c18ee88966eaeb70c3d9e6886e319e6c1a710c9388f
+ size 1384154160
model_00043-of-00049.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:1864bc504c09dc8946fa30a07f6b9f001798ee0d969a50f0b0dfe868796d0515
+ size 1384154160
model_00044-of-00049.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:bb2dd6f35f7c197031a57beb0f07733980b37e5d917ec3e984352386b680d2d9
+ size 1384154160
model_00045-of-00049.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a40bd7d7ad1017e0b068f371a89a53f840f2767244f7b79c07b04278b7ae0210
+ size 1384154160