Volko76 committed
Commit bdfacac · verified · Parent(s): 6431bef

Upload folder using huggingface_hub

This view is limited to 50 files because it contains too many changes. See the raw diff for the full change set.
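A commit message of "Upload folder using huggingface_hub" is the default message produced by the `upload_folder` API of the `huggingface_hub` Python library. A minimal sketch of the kind of call that creates a commit like this one (the local path and repo id below are placeholders, not values taken from this page):

```python
from huggingface_hub import HfApi

api = HfApi()  # uses the token from `huggingface-cli login` by default
api.upload_folder(
    folder_path="./local_model_folder",   # hypothetical local folder with the files to push
    repo_id="someuser/some-repo",         # hypothetical target repository
    repo_type="model",
    # commit_message defaults to "Upload folder using huggingface_hub"
)
```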
Files changed (50)
  1. base_model/README.md +31 -29
  2. base_model/config.json +5 -5
  3. base_model/generation_config.json +1 -1
  4. hidden_states.safetensors +2 -2
  5. job_new.json +0 -0
  6. measurement.json +0 -0
  7. out_tensor/lm_head.safetensors +2 -2
  8. out_tensor/model.layers.0.mlp.down_proj.safetensors +2 -2
  9. out_tensor/model.layers.0.mlp.gate_proj.safetensors +2 -2
  10. out_tensor/model.layers.0.mlp.up_proj.safetensors +2 -2
  11. out_tensor/model.layers.0.self_attn.k_proj.safetensors +2 -2
  12. out_tensor/model.layers.0.self_attn.o_proj.safetensors +2 -2
  13. out_tensor/model.layers.0.self_attn.q_proj.safetensors +2 -2
  14. out_tensor/model.layers.0.self_attn.v_proj.safetensors +2 -2
  15. out_tensor/model.layers.1.mlp.down_proj.safetensors +2 -2
  16. out_tensor/model.layers.1.mlp.gate_proj.safetensors +2 -2
  17. out_tensor/model.layers.1.mlp.up_proj.safetensors +2 -2
  18. out_tensor/model.layers.1.self_attn.k_proj.safetensors +2 -2
  19. out_tensor/model.layers.1.self_attn.o_proj.safetensors +2 -2
  20. out_tensor/model.layers.1.self_attn.q_proj.safetensors +2 -2
  21. out_tensor/model.layers.1.self_attn.v_proj.safetensors +2 -2
  22. out_tensor/model.layers.10.mlp.down_proj.safetensors +2 -2
  23. out_tensor/model.layers.10.mlp.gate_proj.safetensors +2 -2
  24. out_tensor/model.layers.10.mlp.up_proj.safetensors +2 -2
  25. out_tensor/model.layers.10.self_attn.k_proj.safetensors +2 -2
  26. out_tensor/model.layers.10.self_attn.o_proj.safetensors +2 -2
  27. out_tensor/model.layers.10.self_attn.q_proj.safetensors +2 -2
  28. out_tensor/model.layers.10.self_attn.v_proj.safetensors +2 -2
  29. out_tensor/model.layers.11.mlp.down_proj.safetensors +2 -2
  30. out_tensor/model.layers.11.mlp.gate_proj.safetensors +2 -2
  31. out_tensor/model.layers.11.mlp.up_proj.safetensors +2 -2
  32. out_tensor/model.layers.11.self_attn.k_proj.safetensors +2 -2
  33. out_tensor/model.layers.11.self_attn.o_proj.safetensors +2 -2
  34. out_tensor/model.layers.11.self_attn.q_proj.safetensors +2 -2
  35. out_tensor/model.layers.11.self_attn.v_proj.safetensors +2 -2
  36. out_tensor/model.layers.12.mlp.down_proj.safetensors +2 -2
  37. out_tensor/model.layers.12.mlp.gate_proj.safetensors +2 -2
  38. out_tensor/model.layers.12.mlp.up_proj.safetensors +2 -2
  39. out_tensor/model.layers.12.self_attn.k_proj.safetensors +2 -2
  40. out_tensor/model.layers.12.self_attn.o_proj.safetensors +2 -2
  41. out_tensor/model.layers.12.self_attn.q_proj.safetensors +2 -2
  42. out_tensor/model.layers.12.self_attn.v_proj.safetensors +2 -2
  43. out_tensor/model.layers.13.mlp.down_proj.safetensors +2 -2
  44. out_tensor/model.layers.13.mlp.gate_proj.safetensors +2 -2
  45. out_tensor/model.layers.13.mlp.up_proj.safetensors +2 -2
  46. out_tensor/model.layers.13.self_attn.k_proj.safetensors +2 -2
  47. out_tensor/model.layers.13.self_attn.o_proj.safetensors +2 -2
  48. out_tensor/model.layers.13.self_attn.q_proj.safetensors +2 -2
  49. out_tensor/model.layers.13.self_attn.v_proj.safetensors +2 -2
  50. out_tensor/model.layers.14.mlp.down_proj.safetensors +2 -2
base_model/README.md CHANGED
@@ -1,41 +1,45 @@
 ---
 license: apache-2.0
-license_link: https://huggingface.co/Qwen/Qwen2.5-0.5B-Instruct/blob/main/LICENSE
+license_link: https://huggingface.co/Qwen/Qwen2.5-Coder-1.5B-Instruct/blob/main/LICENSE
 language:
 - en
+base_model:
+- Qwen/Qwen2.5-Coder-1.5B
 pipeline_tag: text-generation
-base_model: Qwen/Qwen2.5-0.5B
+library_name: transformers
 tags:
+- code
+- codeqwen
 - chat
-library_name: transformers
+- qwen
+- qwen-coder
 ---
 
-# Qwen2.5-0.5B-Instruct
+
+# Qwen2.5-Coder-1.5B-Instruct
 
 ## Introduction
 
-Qwen2.5 is the latest series of Qwen large language models. For Qwen2.5, we release a number of base language models and instruction-tuned language models ranging from 0.5 to 72 billion parameters. Qwen2.5 brings the following improvements upon Qwen2:
+Qwen2.5-Coder is the latest series of Code-Specific Qwen large language models (formerly known as CodeQwen). As of now, Qwen2.5-Coder has covered six mainstream model sizes, 0.5, 1.5, 3, 7, 14, 32 billion parameters, to meet the needs of different developers. Qwen2.5-Coder brings the following improvements upon CodeQwen1.5:
 
-- Significantly **more knowledge** and has greatly improved capabilities in **coding** and **mathematics**, thanks to our specialized expert models in these domains.
-- Significant improvements in **instruction following**, **generating long texts** (over 8K tokens), **understanding structured data** (e.g, tables), and **generating structured outputs** especially JSON. **More resilient to the diversity of system prompts**, enhancing role-play implementation and condition-setting for chatbots.
-- **Long-context Support** up to 128K tokens and can generate up to 8K tokens.
-- **Multilingual support** for over 29 languages, including Chinese, English, French, Spanish, Portuguese, German, Italian, Russian, Japanese, Korean, Vietnamese, Thai, Arabic, and more.
+- Significantly improvements in **code generation**, **code reasoning** and **code fixing**. Base on the strong Qwen2.5, we scale up the training tokens into 5.5 trillion including source code, text-code grounding, Synthetic data, etc. Qwen2.5-Coder-32B has become the current state-of-the-art open-source codeLLM, with its coding abilities matching those of GPT-4o.
+- A more comprehensive foundation for real-world applications such as **Code Agents**. Not only enhancing coding capabilities but also maintaining its strengths in mathematics and general competencies.
 
-**This repo contains the instruction-tuned 0.5B Qwen2.5 model**, which has the following features:
+**This repo contains the instruction-tuned 1.5B Qwen2.5-Coder model**, which has the following features:
 - Type: Causal Language Models
 - Training Stage: Pretraining & Post-training
 - Architecture: transformers with RoPE, SwiGLU, RMSNorm, Attention QKV bias and tied word embeddings
-- Number of Parameters: 0.49B
-- Number of Paramaters (Non-Embedding): 0.36B
-- Number of Layers: 24
-- Number of Attention Heads (GQA): 14 for Q and 2 for KV
-- Context Length: Full 32,768 tokens and generation 8192 tokens
-
-For more details, please refer to our [blog](https://qwenlm.github.io/blog/qwen2.5/), [GitHub](https://github.com/QwenLM/Qwen2.5), and [Documentation](https://qwen.readthedocs.io/en/latest/).
+- Number of Parameters: 1.54B
+- Number of Paramaters (Non-Embedding): 1.31B
+- Number of Layers: 28
+- Number of Attention Heads (GQA): 12 for Q and 2 for KV
+- Context Length: Full 32,768 tokens
+
+For more details, please refer to our [blog](https://qwenlm.github.io/blog/qwen2.5-coder-family/), [GitHub](https://github.com/QwenLM/Qwen2.5-Coder), [Documentation](https://qwen.readthedocs.io/en/latest/), [Arxiv](https://arxiv.org/abs/2409.12186).
 
 ## Requirements
 
-The code of Qwen2.5 has been in the latest Hugging face `transformers` and we advise you to use the latest version of `transformers`.
+The code of Qwen2.5-Coder has been in the latest Hugging face `transformers` and we advise you to use the latest version of `transformers`.
 
 With `transformers<4.37.0`, you will encounter the following error:
 ```
@@ -49,7 +53,7 @@ Here provides a code snippet with `apply_chat_template` to show you how to load
 ```python
 from transformers import AutoModelForCausalLM, AutoTokenizer
 
-model_name = "Qwen/Qwen2.5-0.5B-Instruct"
+model_name = "Qwen/Qwen2.5-Coder-1.5B-Instruct"
 
 model = AutoModelForCausalLM.from_pretrained(
     model_name,
@@ -58,7 +62,7 @@ model = AutoModelForCausalLM.from_pretrained(
 )
 tokenizer = AutoTokenizer.from_pretrained(model_name)
 
-prompt = "Give me a short introduction to large language model."
+prompt = "write a quick sort algorithm."
 messages = [
     {"role": "system", "content": "You are Qwen, created by Alibaba Cloud. You are a helpful assistant."},
     {"role": "user", "content": prompt}
@@ -84,7 +88,7 @@ response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
 
 ## Evaluation & Performance
 
-Detailed evaluation results are reported in this [📑 blog](https://qwenlm.github.io/blog/qwen2.5/).
+Detailed evaluation results are reported in this [📑 blog](https://qwenlm.github.io/blog/qwen2.5-coder-family/).
 
 For requirements on GPU memory and the respective throughput, see results [here](https://qwen.readthedocs.io/en/latest/benchmark/speed_benchmark.html).
 
@@ -93,18 +97,16 @@ For requirements on GPU memory and the respective throughput, see results [here]
 If you find our work helpful, feel free to give us a cite.
 
 ```
-@misc{qwen2.5,
-    title = {Qwen2.5: A Party of Foundation Models},
-    url = {https://qwenlm.github.io/blog/qwen2.5/},
-    author = {Qwen Team},
-    month = {September},
-    year = {2024}
+@article{hui2024qwen2,
+  title={Qwen2. 5-Coder Technical Report},
+  author={Hui, Binyuan and Yang, Jian and Cui, Zeyu and Yang, Jiaxi and Liu, Dayiheng and Zhang, Lei and Liu, Tianyu and Zhang, Jiajun and Yu, Bowen and Dang, Kai and others},
+  journal={arXiv preprint arXiv:2409.12186},
+  year={2024}
 }
-
 @article{qwen2,
   title={Qwen2 Technical Report},
   author={An Yang and Baosong Yang and Binyuan Hui and Bo Zheng and Bowen Yu and Chang Zhou and Chengpeng Li and Chengyuan Li and Dayiheng Liu and Fei Huang and Guanting Dong and Haoran Wei and Huan Lin and Jialong Tang and Jialin Wang and Jian Yang and Jianhong Tu and Jianwei Zhang and Jianxin Ma and Jin Xu and Jingren Zhou and Jinze Bai and Jinzheng He and Junyang Lin and Kai Dang and Keming Lu and Keqin Chen and Kexin Yang and Mei Li and Mingfeng Xue and Na Ni and Pei Zhang and Peng Wang and Ru Peng and Rui Men and Ruize Gao and Runji Lin and Shijie Wang and Shuai Bai and Sinan Tan and Tianhang Zhu and Tianhao Li and Tianyu Liu and Wenbin Ge and Xiaodong Deng and Xiaohuan Zhou and Xingzhang Ren and Xinyu Zhang and Xipin Wei and Xuancheng Ren and Yang Fan and Yang Yao and Yichang Zhang and Yu Wan and Yunfei Chu and Yuqiong Liu and Zeyu Cui and Zhenru Zhang and Zhihao Fan},
   journal={arXiv preprint arXiv:2407.10671},
   year={2024}
 }
-```
+```
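The README diff above only shows the quickstart snippet where it changed (the `model_name` and `prompt` lines); the surrounding lines are hidden by the diff context. For reference, this is the full `apply_chat_template` flow as it appears in Qwen2.5 model cards; the lines not visible in the hunks are filled in from the standard card and may differ slightly from this repository's exact README:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Qwen/Qwen2.5-Coder-1.5B-Instruct"

# Load model and tokenizer (dtype/device placement as in the standard card).
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

prompt = "write a quick sort algorithm."
messages = [
    {"role": "system", "content": "You are Qwen, created by Alibaba Cloud. You are a helpful assistant."},
    {"role": "user", "content": prompt},
]

# Render the chat template, generate, then strip the prompt tokens from the output.
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

generated_ids = model.generate(**model_inputs, max_new_tokens=512)
generated_ids = [
    output_ids[len(input_ids):]
    for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```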
base_model/config.json CHANGED
@@ -6,21 +6,21 @@
   "bos_token_id": 151643,
   "eos_token_id": 151645,
   "hidden_act": "silu",
-  "hidden_size": 896,
+  "hidden_size": 1536,
   "initializer_range": 0.02,
-  "intermediate_size": 4864,
+  "intermediate_size": 8960,
   "max_position_embeddings": 32768,
   "max_window_layers": 21,
   "model_type": "qwen2",
-  "num_attention_heads": 14,
-  "num_hidden_layers": 24,
+  "num_attention_heads": 12,
+  "num_hidden_layers": 28,
   "num_key_value_heads": 2,
   "rms_norm_eps": 1e-06,
   "rope_theta": 1000000.0,
   "sliding_window": 32768,
   "tie_word_embeddings": true,
   "torch_dtype": "bfloat16",
-  "transformers_version": "4.43.1",
+  "transformers_version": "4.44.0",
   "use_cache": true,
   "use_sliding_window": false,
   "vocab_size": 151936
base_model/generation_config.json CHANGED
@@ -11,4 +11,4 @@
   "top_p": 0.8,
   "top_k": 20,
   "transformers_version": "4.37.0"
-}
+}
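The generation_config diff itself only touches the closing brace, but it shows the sampling defaults (`top_p` 0.8, `top_k` 20) shipped with the base model. A small sketch of how those defaults are read and applied; the repo id is the upstream instruct model, used for illustration rather than this exact repository:

```python
from transformers import GenerationConfig

gen_cfg = GenerationConfig.from_pretrained("Qwen/Qwen2.5-Coder-1.5B-Instruct")
print(gen_cfg.top_p, gen_cfg.top_k)   # expected 0.8 and 20, per the file above

# model.generate(...) picks these defaults up automatically; passing e.g.
# top_p=0.9 or top_k=50 at call time overrides them for that call only.
```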
hidden_states.safetensors CHANGED
@@ -1,3 +1,3 @@
1
  version https://git-lfs.github.com/spec/v1
2
- oid sha256:d5264b50e363c84f3af7ccd990471017b5d9f9f5557b71092bc6daeed876eae0
3
- size 367010144
 
1
  version https://git-lfs.github.com/spec/v1
2
+ oid sha256:e7f6f93fcbf7179717e5b56aa5a3f8fe18f5d64d3d0db7ef2f8d5b91a4746c05
3
+ size 629154272
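hidden_states.safetensors (and every out_tensor/*.safetensors entry below) is stored as a Git LFS pointer, so the diff only records a new content hash and byte size, here roughly 367 MB growing to 629 MB for the larger model. Once the real file has been downloaded, its contents can be inspected lazily; a minimal sketch, assuming the file is present in the working directory:

```python
from safetensors import safe_open

# Read tensor names and shapes from the file header without loading the data itself.
with safe_open("hidden_states.safetensors", framework="pt") as f:
    for name in f.keys():
        print(name, f.get_slice(name).get_shape())
```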
job_new.json CHANGED
The diff for this file is too large to render. See raw diff
 
measurement.json CHANGED
The diff for this file is too large to render. See raw diff
 
out_tensor/lm_head.safetensors CHANGED
@@ -1,3 +1,3 @@
1
  version https://git-lfs.github.com/spec/v1
2
- oid sha256:b6827e113bb3ff36a4949c464f0e32cd268bd3ca9965bf2a184eed6776e9c78a
3
- size 112360730
 
1
  version https://git-lfs.github.com/spec/v1
2
+ oid sha256:8991d85238d264cad69c8f75d81eee6e09cd9a4f4224d44e0437d0c19672f370
3
+ size 185672440
out_tensor/model.layers.0.mlp.down_proj.safetensors CHANGED
@@ -1,3 +1,3 @@
1
  version https://git-lfs.github.com/spec/v1
2
- oid sha256:a03387020556418fb44df92acd902994b70f756b24ef366e4b951dace658ae04
3
- size 4395372
 
1
  version https://git-lfs.github.com/spec/v1
2
+ oid sha256:fd723318e88b0833aabe9871453df3bb368bfd03c826888934a9e4c2c42626ef
3
+ size 13853108
out_tensor/model.layers.0.mlp.gate_proj.safetensors CHANGED
@@ -1,3 +1,3 @@
1
  version https://git-lfs.github.com/spec/v1
2
- oid sha256:906fddc2dafd1a08be0485428e85ebc2563183d2df4e4dcc854ec40097c7b5a1
3
- size 4379314
 
1
  version https://git-lfs.github.com/spec/v1
2
+ oid sha256:a354b9987f90d356c1b48cb816bee54992fd27449f78be272e545b1309f8be88
3
+ size 13823064
out_tensor/model.layers.0.mlp.up_proj.safetensors CHANGED
@@ -1,3 +1,3 @@
1
  version https://git-lfs.github.com/spec/v1
2
- oid sha256:4a8facc025f2ff10a0b92d412886d5e4b25c76b5904a2ad3fd6444ad97567165
3
- size 4379306
 
1
  version https://git-lfs.github.com/spec/v1
2
+ oid sha256:f6267fab26de077ad8d43fc19bbffe69ac9e3f76f31ccf89080c098b95c8297d
3
+ size 13823056
out_tensor/model.layers.0.self_attn.k_proj.safetensors CHANGED
@@ -1,3 +1,3 @@
1
  version https://git-lfs.github.com/spec/v1
2
- oid sha256:ddc0c84a391f62b99339b9521c22e734e02ba9215463c462ab912123317bf740
3
- size 92432
 
1
  version https://git-lfs.github.com/spec/v1
2
+ oid sha256:6a6f9b2a894c8ae52cc50d4c43cc2a3060dc760505f0a0c4be46939f33ce3424
3
+ size 402104
out_tensor/model.layers.0.self_attn.o_proj.safetensors CHANGED
@@ -1,3 +1,3 @@
1
  version https://git-lfs.github.com/spec/v1
2
- oid sha256:f8adbd62d00f7932cb8a7a409e378ef7da61287b68e1f22be36e6c637d369273
3
- size 618936
 
1
  version https://git-lfs.github.com/spec/v1
2
+ oid sha256:e60077a4709c727c55ce56b78ca5f6bcff09c5b19802111249a42c6abc88d6e0
3
+ size 2375264
out_tensor/model.layers.0.self_attn.q_proj.safetensors CHANGED
@@ -1,3 +1,3 @@
1
  version https://git-lfs.github.com/spec/v1
2
- oid sha256:6bc3a4015e21952a34536637acfa079e780894f55b5c32a08fbcc9c647f1b4e1
3
- size 620832
 
1
  version https://git-lfs.github.com/spec/v1
2
+ oid sha256:c7f427c3de7c70946239334ccc0d8939365eb0999bfa5be15b8a4ac78677285f
3
+ size 2378440
out_tensor/model.layers.0.self_attn.v_proj.safetensors CHANGED
@@ -1,3 +1,3 @@
1
  version https://git-lfs.github.com/spec/v1
2
- oid sha256:9865d2a275f8d79ff9f0321936f8ac5ad920579dbc02b7e77f18e187e05d899f
3
- size 121112
 
1
  version https://git-lfs.github.com/spec/v1
2
+ oid sha256:49bebd2b583fa93edae9d55c61b72266e9582ea62440f2354a81175c94c79018
3
+ size 402104
out_tensor/model.layers.1.mlp.down_proj.safetensors CHANGED
@@ -1,3 +1,3 @@
1
  version https://git-lfs.github.com/spec/v1
2
- oid sha256:0c7c620101fae0912c34cc0d1e8f58be65360fa5621dccb4ebf75bb2e46ced62
3
- size 3477868
 
1
  version https://git-lfs.github.com/spec/v1
2
+ oid sha256:08becf994fc201eb01b73faf532c44080186e29c5c1b4f17f7f0e101b80f0d45
3
+ size 13853108
out_tensor/model.layers.1.mlp.gate_proj.safetensors CHANGED
@@ -1,3 +1,3 @@
1
  version https://git-lfs.github.com/spec/v1
2
- oid sha256:a678531eaf7fbf131a54092980a68c9b7d03ebd9f20184d57cd976b4480e1d36
3
- size 3445426
 
1
  version https://git-lfs.github.com/spec/v1
2
+ oid sha256:22e752ffcd2c771fc509246d8b88935b83ecb9b2fa2ee1a351aa77b06d08a27d
3
+ size 10955864
out_tensor/model.layers.1.mlp.up_proj.safetensors CHANGED
@@ -1,3 +1,3 @@
1
  version https://git-lfs.github.com/spec/v1
2
- oid sha256:6250fdb6ef2ad9d7a6b00480091389d6c10b30e8c089fbf4a49881a393981501
3
- size 3445418
 
1
  version https://git-lfs.github.com/spec/v1
2
+ oid sha256:2d4cb5ec6391f2341108943ecc78de31166577290f9dc5affd2cfc76187b959d
3
+ size 10955856
out_tensor/model.layers.1.self_attn.k_proj.safetensors CHANGED
@@ -1,3 +1,3 @@
1
  version https://git-lfs.github.com/spec/v1
2
- oid sha256:0e988105dca181fc0d084919f3ccbeec585d076a79f730137159ee9a9cb184b4
3
- size 90962
 
1
  version https://git-lfs.github.com/spec/v1
2
+ oid sha256:858a599fa109aa235c693fee5f39b18eb33a1bfba8b7d000054edb1e4039bc74
3
+ size 402104
out_tensor/model.layers.1.self_attn.o_proj.safetensors CHANGED
@@ -1,3 +1,3 @@
1
  version https://git-lfs.github.com/spec/v1
2
- oid sha256:aff756286b80dcf9aafa5d9ff4a1204436a296ddf7bba19bdb7338aa605b3935
3
- size 609402
 
1
  version https://git-lfs.github.com/spec/v1
2
+ oid sha256:24d3fdf5b9ec1b19e1d302a25daa825064bb89def5d91730fc7633a475a75eef
3
+ size 2375264
out_tensor/model.layers.1.self_attn.q_proj.safetensors CHANGED
@@ -1,3 +1,3 @@
1
  version https://git-lfs.github.com/spec/v1
2
- oid sha256:0f81cb8a8a68cedb918d60ec9eee6a43654f0259bcbb21bb9937ac3d594fb2e5
3
- size 611290
 
1
  version https://git-lfs.github.com/spec/v1
2
+ oid sha256:fdb3439d6a21cb771edf518f7371441a49a82f16ae1801e5b0131ff1bee3f1fd
3
+ size 2378440
out_tensor/model.layers.1.self_attn.v_proj.safetensors CHANGED
@@ -1,3 +1,3 @@
1
  version https://git-lfs.github.com/spec/v1
2
- oid sha256:31c5df6772e72a1645edf95984933ec7721b8cbb827c8a2128d802782d72f707
3
- size 90962
 
1
  version https://git-lfs.github.com/spec/v1
2
+ oid sha256:66102fedabd171052d5e138385265077a090a50bd9953577162dbbc10b91e8de
3
+ size 402104
out_tensor/model.layers.10.mlp.down_proj.safetensors CHANGED
@@ -1,3 +1,3 @@
1
  version https://git-lfs.github.com/spec/v1
2
- oid sha256:909fc03a11858d19c59974aac457b9c4374385fb17dd4dee6e8936d2292ec382
3
- size 2440096
 
1
  version https://git-lfs.github.com/spec/v1
2
+ oid sha256:0890cb8b8b716d343cde890c144fc65e9c17c87ec675453ff49366e2c6807db5
3
+ size 9130742
out_tensor/model.layers.10.mlp.gate_proj.safetensors CHANGED
@@ -1,3 +1,3 @@
1
  version https://git-lfs.github.com/spec/v1
2
- oid sha256:cbd6680110273772ff443f0432b118336f6c0610113f8120b6198ab5bfb82ea5
3
- size 2309816
 
1
  version https://git-lfs.github.com/spec/v1
2
+ oid sha256:154730a17e7bc80bd68c63a47bd2854955a78b7c2db43546751131d38a1a62c3
3
+ size 8948824
out_tensor/model.layers.10.mlp.up_proj.safetensors CHANGED
@@ -1,3 +1,3 @@
1
  version https://git-lfs.github.com/spec/v1
2
- oid sha256:2797952be77d2c487bc60aceb8cad5aebf9f6d1d40ccde2f6ac82bef73c5a4ad
3
- size 2387624
 
1
  version https://git-lfs.github.com/spec/v1
2
+ oid sha256:8af33515509a2e1d6ac6d20fe2a0526ac4d21c917f89dfd4f095fb9f37c5d2b1
3
+ size 9092176
out_tensor/model.layers.10.self_attn.k_proj.safetensors CHANGED
@@ -1,3 +1,3 @@
1
  version https://git-lfs.github.com/spec/v1
2
- oid sha256:9cc49ef39473669ba12411b9dc24f5cc76a5a659c5cf00d6dff37c1f20688be3
3
- size 119650
 
1
  version https://git-lfs.github.com/spec/v1
2
+ oid sha256:7178c6554c8b22dedfc8ef96d4a755dabcf8860a010ed3ac51697f8c8375ad99
3
+ size 215448
out_tensor/model.layers.10.self_attn.o_proj.safetensors CHANGED
@@ -1,3 +1,3 @@
1
  version https://git-lfs.github.com/spec/v1
2
- oid sha256:2c7967edd19c0754e23e46ea6d6f354cf45b9339fadca300dcf3a9a67e1b404a
3
- size 810106
 
1
  version https://git-lfs.github.com/spec/v1
2
+ oid sha256:da6d92bfdade49f245d6ef98cfd6fd38f3c0d32a07386ddb851d018bd99d64ca
3
+ size 1254208
out_tensor/model.layers.10.self_attn.q_proj.safetensors CHANGED
@@ -1,3 +1,3 @@
1
  version https://git-lfs.github.com/spec/v1
2
- oid sha256:b2b29c650592e756fe62553d25b3333dc4e18d7bb14d7e8089ca4a284a60409d
3
- size 812002
 
1
  version https://git-lfs.github.com/spec/v1
2
+ oid sha256:3e9aca85c7cc2de6cc7797f3726ad56243db7192f4b316f6900899a14a6e6637
3
+ size 1257384
out_tensor/model.layers.10.self_attn.v_proj.safetensors CHANGED
@@ -1,3 +1,3 @@
1
  version https://git-lfs.github.com/spec/v1
2
- oid sha256:37c0bf808ccf579acea9b9b1939130f20e9ed9db8b2557ac30f75c4840f3f741
3
- size 119650
 
1
  version https://git-lfs.github.com/spec/v1
2
+ oid sha256:228442fc608ae992c1a172293a3bcbc78d8d59b711b6225edaf30dcab91f7d6c
3
+ size 259480
out_tensor/model.layers.11.mlp.down_proj.safetensors CHANGED
@@ -1,3 +1,3 @@
1
  version https://git-lfs.github.com/spec/v1
2
- oid sha256:f6a4020d80afec69d629f9410a940d7cf3d1d8d76fd09c21803aad6549f5a570
3
- size 2440096
 
1
  version https://git-lfs.github.com/spec/v1
2
+ oid sha256:2cc024412fbea0393c7ba498c78283350e21a911ac4311757051da64bf8e61c2
3
+ size 9284776
out_tensor/model.layers.11.mlp.gate_proj.safetensors CHANGED
@@ -1,3 +1,3 @@
1
  version https://git-lfs.github.com/spec/v1
2
- oid sha256:d337fb8175a618421ffa816bb4afa11f9ad1bfc6960d5b439721dd16dc7b28d4
3
- size 2309816
 
1
  version https://git-lfs.github.com/spec/v1
2
+ oid sha256:ef650caebc599d862f1ff393dfae4d2fb79af6fe5f8c64e54cf10a15fe906aa2
3
+ size 9002800
out_tensor/model.layers.11.mlp.up_proj.safetensors CHANGED
@@ -1,3 +1,3 @@
1
  version https://git-lfs.github.com/spec/v1
2
- oid sha256:568d034943721030629f57bc92226f86be021d27f3297581752a6d5276aefebd
3
- size 2387624
 
1
  version https://git-lfs.github.com/spec/v1
2
+ oid sha256:ddf399d499058519db02cb73d81179168a6fdeb50a02cbd8475fc1db96fbb4b2
3
+ size 9253672
out_tensor/model.layers.11.self_attn.k_proj.safetensors CHANGED
@@ -1,3 +1,3 @@
1
  version https://git-lfs.github.com/spec/v1
2
- oid sha256:3e5b27582a8001a24bd3c2807626671f7c16e1373c2cdbc3f2e466f7d70555f2
3
- size 119650
 
1
  version https://git-lfs.github.com/spec/v1
2
+ oid sha256:060e0002f4e330da361315fe86d7f65316d99dd7649dcef37945709239e2ec76
3
+ size 215448
out_tensor/model.layers.11.self_attn.o_proj.safetensors CHANGED
@@ -1,3 +1,3 @@
1
  version https://git-lfs.github.com/spec/v1
2
- oid sha256:a743bfbf0670a94ef2003e104378e2e54590219fd87d19f6c7b2d469a77fd6b1
3
- size 810106
 
1
  version https://git-lfs.github.com/spec/v1
2
+ oid sha256:0af312c51f5e5c9c397e3ae7195c24d9084954aa763e17dd9cc5e8b9724a804c
3
+ size 1254208
out_tensor/model.layers.11.self_attn.q_proj.safetensors CHANGED
@@ -1,3 +1,3 @@
1
  version https://git-lfs.github.com/spec/v1
2
- oid sha256:6b79a3dd51542adf3b754a705c754bcf714d634e6c2c9b591a3970d2bce15666
3
- size 812002
 
1
  version https://git-lfs.github.com/spec/v1
2
+ oid sha256:7fa8bc4f71e90115c4ba59be4922e13fc1946db55b31cb2e87ba0b6341341358
3
+ size 1257384
out_tensor/model.layers.11.self_attn.v_proj.safetensors CHANGED
@@ -1,3 +1,3 @@
1
  version https://git-lfs.github.com/spec/v1
2
- oid sha256:3720a1c8e606bbddb62dfb3a161f3f855403661d88b581321f5a0c530c6345aa
3
- size 119650
 
1
  version https://git-lfs.github.com/spec/v1
2
+ oid sha256:54c5982233aa29f19b2bb0711b94c5db372872280777c5ed325fadd4b81f2549
3
+ size 259480
out_tensor/model.layers.12.mlp.down_proj.safetensors CHANGED
@@ -1,3 +1,3 @@
1
  version https://git-lfs.github.com/spec/v1
2
- oid sha256:7f7c0469b06212863f1baa46880fba1f92ad160e3a22c0b8b4b16ebfb62f8e16
3
- size 2440096
 
1
  version https://git-lfs.github.com/spec/v1
2
+ oid sha256:2faa2d4db1e942b3dade61e05344926f3e2cb667c43a51773c3a08e64801dce2
3
+ size 7650472
out_tensor/model.layers.12.mlp.gate_proj.safetensors CHANGED
@@ -1,3 +1,3 @@
1
  version https://git-lfs.github.com/spec/v1
2
- oid sha256:25cf488fdfeb16bbca35097d8e9be95c80edbf3d9bd59d3e1ef85e7465a2680c
3
- size 2309816
 
1
  version https://git-lfs.github.com/spec/v1
2
+ oid sha256:9a7f65551b5cbae9bdc961109d76eddc06f4675b8a90cbf405f966fd4ce20544
3
+ size 7282480
out_tensor/model.layers.12.mlp.up_proj.safetensors CHANGED
@@ -1,3 +1,3 @@
1
  version https://git-lfs.github.com/spec/v1
2
- oid sha256:6cd79ecddf1904f1ea45f69935663d3afbe07283c6d6ea5597b1611d189d0ade
3
- size 2387624
 
1
  version https://git-lfs.github.com/spec/v1
2
+ oid sha256:adf45321c4f70ae6a977c2636e4debb82a545741ac4dd6afb27cfe5a8e38b738
3
+ size 7533352
out_tensor/model.layers.12.self_attn.k_proj.safetensors CHANGED
@@ -1,3 +1,3 @@
1
  version https://git-lfs.github.com/spec/v1
2
- oid sha256:984c4a488f9d39f6b91447a484c348702c5cc18461039145ab8c830f6c918c2d
3
- size 92440
 
1
  version https://git-lfs.github.com/spec/v1
2
+ oid sha256:b728a6a9b80b8eb0b6fa2c5a0487b8c9a70d42aa36f87d7b1f78a24c5033d8b2
3
+ size 402112
out_tensor/model.layers.12.self_attn.o_proj.safetensors CHANGED
@@ -1,3 +1,3 @@
1
  version https://git-lfs.github.com/spec/v1
2
- oid sha256:6da6be20f30daa3cd498847876f7ac6840705042587442b1c1a5d0815c8ac971
3
- size 618944
 
1
  version https://git-lfs.github.com/spec/v1
2
+ oid sha256:0a1b18e7209d0c4bafe6b9e5ba4f8affaefe18a1bbf6c501e0ff950fb3b11117
3
+ size 2375272
out_tensor/model.layers.12.self_attn.q_proj.safetensors CHANGED
@@ -1,3 +1,3 @@
1
  version https://git-lfs.github.com/spec/v1
2
- oid sha256:6f14c196d265488c47555b9479ebd3c4007d677c3a26beecb19393b03e5893ee
3
- size 620832
 
1
  version https://git-lfs.github.com/spec/v1
2
+ oid sha256:2b8a57e0690ae9c0e788ba8f770b1148b9444592427d05b97465081092523ca4
3
+ size 2378448
out_tensor/model.layers.12.self_attn.v_proj.safetensors CHANGED
@@ -1,3 +1,3 @@
1
  version https://git-lfs.github.com/spec/v1
2
- oid sha256:ce2c4c315f754c18e3edcbacb5dd9bcfe726e0e832dcee3c57ea3a0c338f6867
3
- size 121120
 
1
  version https://git-lfs.github.com/spec/v1
2
+ oid sha256:9c3fc323e1e31079408e76938d50de8e98fdf45a1ccdc9ba13d797a48443691a
3
+ size 402112
out_tensor/model.layers.13.mlp.down_proj.safetensors CHANGED
@@ -1,3 +1,3 @@
1
  version https://git-lfs.github.com/spec/v1
2
- oid sha256:d78681fd718a46a14cf5378f7d8debd93c0844ff4e98c9d33dd0465efbd0250c
3
- size 2956192
 
1
  version https://git-lfs.github.com/spec/v1
2
+ oid sha256:28e4b14c900ec62d24073c3c275a0b6e08762f0ad579e8c4f8630bbe5bdba044
3
+ size 7478440
out_tensor/model.layers.13.mlp.gate_proj.safetensors CHANGED
@@ -1,3 +1,3 @@
1
  version https://git-lfs.github.com/spec/v1
2
- oid sha256:2c703d3f5a872d6596bb69079577c54aeb85f1fd55a89837abe6b9753ca8e263
3
- size 2854584
 
1
  version https://git-lfs.github.com/spec/v1
2
+ oid sha256:712ddbc6a1e00ff94e8b4d7b5103c83b744af446f9056fe32ebe18223a4aaf3e
3
+ size 7103280
out_tensor/model.layers.13.mlp.up_proj.safetensors CHANGED
@@ -1,3 +1,3 @@
1
  version https://git-lfs.github.com/spec/v1
2
- oid sha256:84a22dfcdcc452d7b5084a6187f6755dc8280969c0587bd5c42385f8cb45c9ad
3
- size 2932392
 
1
  version https://git-lfs.github.com/spec/v1
2
+ oid sha256:748aed8e4994f5cc86a11cac6aca964719c5412bb2872a6cd274234ea9b2677d
3
+ size 7103272
out_tensor/model.layers.13.self_attn.k_proj.safetensors CHANGED
@@ -1,3 +1,3 @@
1
  version https://git-lfs.github.com/spec/v1
2
- oid sha256:21a7a43c125005293f8e913e7b2e3e3ad12e1be950b4c9273e0e72f30b26bd90
3
- size 92440
 
1
  version https://git-lfs.github.com/spec/v1
2
+ oid sha256:1e84cb37b12e53cbcf9e897dcf3b5402956b9981ba1226ace20d4dcbd96d2935
3
+ size 402112
out_tensor/model.layers.13.self_attn.o_proj.safetensors CHANGED
@@ -1,3 +1,3 @@
1
  version https://git-lfs.github.com/spec/v1
2
- oid sha256:9f19719da67c2b887163503e7147ec1f9bb5f324a706bd3058ae9ae3bc906a24
3
- size 618944
 
1
  version https://git-lfs.github.com/spec/v1
2
+ oid sha256:b302f7a73346f99e798aca6aa4275843905d00432faeeca3cbf7a4fc7bcfa36b
3
+ size 2375272
out_tensor/model.layers.13.self_attn.q_proj.safetensors CHANGED
@@ -1,3 +1,3 @@
1
  version https://git-lfs.github.com/spec/v1
2
- oid sha256:2419550662a6364320521f29a545fb23c76c70928adaaf26121bfa9d56d9b080
3
- size 620832
 
1
  version https://git-lfs.github.com/spec/v1
2
+ oid sha256:091b064b38f0bf12dc995c59a699cb9c600f3d7d7da595a7503fd72ba836c40d
3
+ size 2378448
out_tensor/model.layers.13.self_attn.v_proj.safetensors CHANGED
@@ -1,3 +1,3 @@
1
  version https://git-lfs.github.com/spec/v1
2
- oid sha256:eecc1bff087bcc430a650ac2a07886655964c9b27d4c99509a3c008fe70268d9
3
- size 121120
 
1
  version https://git-lfs.github.com/spec/v1
2
+ oid sha256:1d7797e17965499315612b77d8edb5bf7dddcfc3ff2acf69b00a8c940e7c7ffe
3
+ size 402112
out_tensor/model.layers.14.mlp.down_proj.safetensors CHANGED
@@ -1,3 +1,3 @@
1
  version https://git-lfs.github.com/spec/v1
2
- oid sha256:8f679970c857042b96eea7ba9463850fdada07c4bf5e2b8a44628a13b0f71bbe
3
- size 2382752
 
1
  version https://git-lfs.github.com/spec/v1
2
+ oid sha256:6c2a7162293e50aff2e00d388e11e46d588477473dabb59f9c72ab40dbfb7bb1
3
+ size 7650472