nlile committed
Commit a17d90e
1 parent: d87fe8b

Model save

README.md ADDED
@@ -0,0 +1,135 @@
+ ---
+ license: apache-2.0
+ base_model: EleutherAI/pythia-12b-deduped
+ tags:
+ - generated_from_trainer
+ model-index:
+ - name: PE-12b-pythia
+ results: []
+ ---
+
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->
+
+ # PE-12b-pythia
+
+ This model is a fine-tuned version of [EleutherAI/pythia-12b-deduped](https://huggingface.co/EleutherAI/pythia-12b-deduped) on an unknown dataset.
+ It achieves the following results on the evaluation set:
+ - Loss: 0.1421
+ - Rewards/chosen: 3.5045
+ - Rewards/rejected: -2.3171
+ - Rewards/accuracies: 0.9441
+ - Rewards/margins: 5.8216
+ - Logps/rejected: -95.5639
+ - Logps/chosen: -116.1507
+ - Logits/rejected: -0.4604
+ - Logits/chosen: -0.4355
+
+ ## Model description
+
+ More information needed
+
+ ## Intended uses & limitations
+
+ More information needed
+
+ ## Training and evaluation data
+
+ More information needed
+
+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training (a configuration sketch follows the list):
+ - learning_rate: 3e-07
+ - train_batch_size: 1
+ - eval_batch_size: 2
+ - seed: 42
+ - distributed_type: multi-GPU
+ - num_devices: 8
+ - gradient_accumulation_steps: 8
+ - total_train_batch_size: 64
+ - total_eval_batch_size: 16
+ - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
+ - lr_scheduler_type: linear
+ - lr_scheduler_warmup_ratio: 0.1
+ - num_epochs: 3
+
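+ The Rewards/chosen vs. Rewards/rejected metrics above are characteristic of pairwise preference optimization (e.g. TRL's `DPOTrainer`); the actual training script is not included in this commit. As a minimal sketch only, the listed values map onto `transformers.TrainingArguments` as follows:
+
+ ```python
+ # Minimal sketch (assumed, not the authors' script): TrainingArguments
+ # mirroring the hyperparameters above. With 8 devices:
+ # 1 (per device) x 8 (devices) x 8 (grad accum) = 64 total train batch size.
+ from transformers import TrainingArguments
+
+ training_args = TrainingArguments(
+     output_dir="PE-12b-pythia",  # hypothetical output path
+     learning_rate=3e-07,
+     per_device_train_batch_size=1,
+     per_device_eval_batch_size=2,
+     gradient_accumulation_steps=8,
+     num_train_epochs=3,
+     lr_scheduler_type="linear",
+     warmup_ratio=0.1,
+     seed=42,
+ )
+ ```
+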
+ ### Training results
+
+ | Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
+ |:-------------:|:-----:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
+ | 0.8825 | 0.05 | 100 | 0.8872 | 0.1884 | 0.1204 | 0.5056 | 0.0680 | -90.6889 | -122.7830 | -0.5017 | -0.4522 |
+ | 0.9136 | 0.09 | 200 | 0.8325 | 0.3253 | 0.0714 | 0.5894 | 0.2540 | -90.7870 | -122.5091 | -0.4960 | -0.4447 |
+ | 0.7507 | 0.14 | 300 | 0.7816 | 0.5741 | 0.2797 | 0.5670 | 0.2944 | -90.3703 | -122.0116 | -0.4909 | -0.4426 |
+ | 0.6142 | 0.18 | 400 | 0.6435 | 1.0753 | 0.4404 | 0.6369 | 0.6348 | -90.0489 | -121.0092 | -0.4793 | -0.4322 |
+ | 0.519 | 0.23 | 500 | 0.5196 | 1.7213 | 0.5624 | 0.7430 | 1.1590 | -89.8050 | -119.7171 | -0.4559 | -0.4084 |
+ | 0.4858 | 0.27 | 600 | 0.4351 | 2.2085 | 0.5923 | 0.7877 | 1.6162 | -89.7450 | -118.7428 | -0.4592 | -0.4138 |
+ | 0.4048 | 0.32 | 700 | 0.3878 | 2.6105 | 0.5736 | 0.8324 | 2.0369 | -89.7825 | -117.9388 | -0.4398 | -0.3953 |
+ | 0.3623 | 0.37 | 800 | 0.3383 | 2.7055 | 0.4610 | 0.8520 | 2.2446 | -90.0078 | -117.7487 | -0.4492 | -0.4046 |
+ | 0.308 | 0.41 | 900 | 0.3145 | 2.9742 | 0.3506 | 0.8520 | 2.6236 | -90.2285 | -117.2114 | -0.4381 | -0.3971 |
+ | 0.3092 | 0.46 | 1000 | 0.3125 | 3.1541 | 0.2687 | 0.8352 | 2.8854 | -90.3922 | -116.8515 | -0.4276 | -0.3926 |
+ | 0.2765 | 0.5 | 1100 | 0.2939 | 3.1208 | 0.1475 | 0.8603 | 2.9733 | -90.6347 | -116.9181 | -0.4615 | -0.4216 |
+ | 0.3058 | 0.55 | 1200 | 0.2772 | 2.9861 | -0.1371 | 0.8771 | 3.1232 | -91.2038 | -117.1875 | -0.4249 | -0.3887 |
+ | 0.2702 | 0.59 | 1300 | 0.2592 | 3.3217 | -0.0639 | 0.8715 | 3.3856 | -91.0574 | -116.5163 | -0.4497 | -0.4113 |
+ | 0.2316 | 0.64 | 1400 | 0.2491 | 3.3560 | -0.2934 | 0.8855 | 3.6494 | -91.5165 | -116.4477 | -0.4234 | -0.3869 |
+ | 0.2344 | 0.68 | 1500 | 0.2506 | 3.2223 | -0.2242 | 0.8687 | 3.4464 | -91.3780 | -116.7152 | -0.4515 | -0.4151 |
+ | 0.2332 | 0.73 | 1600 | 0.2350 | 3.2137 | -0.4070 | 0.8855 | 3.6207 | -91.7436 | -116.7324 | -0.4299 | -0.3936 |
+ | 0.2258 | 0.78 | 1700 | 0.2477 | 3.0894 | -0.5590 | 0.8939 | 3.6484 | -92.0476 | -116.9809 | -0.4316 | -0.3960 |
+ | 0.2526 | 0.82 | 1800 | 0.2277 | 3.2845 | -0.5527 | 0.8771 | 3.8373 | -92.0351 | -116.5907 | -0.4420 | -0.4076 |
+ | 0.2025 | 0.87 | 1900 | 0.2182 | 3.2061 | -0.8100 | 0.9022 | 4.0160 | -92.5496 | -116.7476 | -0.4319 | -0.3974 |
+ | 0.2253 | 0.91 | 2000 | 0.2149 | 3.2765 | -0.9756 | 0.9078 | 4.2521 | -92.8809 | -116.6067 | -0.4391 | -0.4023 |
+ | 0.2084 | 0.96 | 2100 | 0.2223 | 3.1160 | -1.0659 | 0.8939 | 4.1820 | -93.0615 | -116.9277 | -0.4283 | -0.3954 |
+ | 0.1896 | 1.0 | 2200 | 0.2100 | 3.1835 | -1.0131 | 0.8911 | 4.1966 | -92.9559 | -116.7927 | -0.4517 | -0.4154 |
+ | 0.2294 | 1.05 | 2300 | 0.2070 | 3.1205 | -1.0873 | 0.8939 | 4.2078 | -93.1043 | -116.9187 | -0.4412 | -0.4051 |
+ | 0.1897 | 1.1 | 2400 | 0.2011 | 3.1553 | -1.0875 | 0.9050 | 4.2428 | -93.1047 | -116.8492 | -0.4483 | -0.4136 |
+ | 0.1943 | 1.14 | 2500 | 0.1953 | 3.3317 | -1.2261 | 0.9022 | 4.5578 | -93.3819 | -116.4964 | -0.4488 | -0.4137 |
+ | 0.1749 | 1.19 | 2600 | 0.1975 | 3.2186 | -1.3232 | 0.8911 | 4.5419 | -93.5761 | -116.7225 | -0.4500 | -0.4160 |
+ | 0.1881 | 1.23 | 2700 | 0.1838 | 3.3207 | -1.3323 | 0.9274 | 4.6530 | -93.5944 | -116.5184 | -0.4262 | -0.3962 |
+ | 0.1611 | 1.28 | 2800 | 0.1833 | 3.2881 | -1.3588 | 0.9106 | 4.6469 | -93.6472 | -116.5835 | -0.4404 | -0.4091 |
+ | 0.1653 | 1.32 | 2900 | 0.1959 | 3.2545 | -1.6143 | 0.9190 | 4.8688 | -94.1584 | -116.6508 | -0.4252 | -0.3996 |
+ | 0.1613 | 1.37 | 3000 | 0.1779 | 3.3926 | -1.5190 | 0.9218 | 4.9117 | -93.9678 | -116.3744 | -0.4374 | -0.4071 |
+ | 0.1785 | 1.42 | 3100 | 0.1840 | 3.4053 | -1.6286 | 0.9246 | 5.0339 | -94.1868 | -116.3491 | -0.4280 | -0.3987 |
+ | 0.1544 | 1.46 | 3200 | 0.1686 | 3.5029 | -1.6389 | 0.9218 | 5.1418 | -94.2075 | -116.1539 | -0.4624 | -0.4309 |
+ | 0.1492 | 1.51 | 3300 | 0.1706 | 3.2854 | -1.8094 | 0.9330 | 5.0948 | -94.5485 | -116.5889 | -0.4148 | -0.3943 |
+ | 0.1719 | 1.55 | 3400 | 0.1691 | 3.5148 | -1.7457 | 0.9274 | 5.2605 | -94.4210 | -116.1301 | -0.4542 | -0.4253 |
+ | 0.1905 | 1.6 | 3500 | 0.1719 | 3.4941 | -1.7454 | 0.9246 | 5.2395 | -94.4204 | -116.1715 | -0.4479 | -0.4189 |
+ | 0.1354 | 1.64 | 3600 | 0.1749 | 3.5351 | -1.7024 | 0.9106 | 5.2375 | -94.3345 | -116.0895 | -0.4608 | -0.4303 |
+ | 0.1644 | 1.69 | 3700 | 0.1597 | 3.5736 | -1.6580 | 0.9246 | 5.2316 | -94.2457 | -116.0126 | -0.4469 | -0.4192 |
+ | 0.1598 | 1.73 | 3800 | 0.1613 | 3.6646 | -1.7035 | 0.9078 | 5.3681 | -94.3367 | -115.8306 | -0.4631 | -0.4349 |
+ | 0.1337 | 1.78 | 3900 | 0.1583 | 3.5502 | -1.8444 | 0.9134 | 5.3946 | -94.6184 | -116.0593 | -0.4658 | -0.4368 |
+ | 0.1534 | 1.83 | 4000 | 0.1572 | 3.5076 | -1.9137 | 0.9190 | 5.4213 | -94.7571 | -116.1446 | -0.4610 | -0.4328 |
+ | 0.1327 | 1.87 | 4100 | 0.1607 | 3.5711 | -1.9143 | 0.9218 | 5.4854 | -94.7583 | -116.0175 | -0.4404 | -0.4153 |
+ | 0.162 | 1.92 | 4200 | 0.1565 | 3.4852 | -2.0136 | 0.9330 | 5.4988 | -94.9568 | -116.1893 | -0.4641 | -0.4373 |
+ | 0.1471 | 1.96 | 4300 | 0.1524 | 3.5639 | -1.9766 | 0.9246 | 5.5406 | -94.8830 | -116.0319 | -0.4627 | -0.4338 |
+ | 0.1333 | 2.01 | 4400 | 0.1418 | 3.6173 | -1.9710 | 0.9162 | 5.5883 | -94.8717 | -115.9251 | -0.4608 | -0.4328 |
+ | 0.13 | 2.05 | 4500 | 0.1485 | 3.6275 | -1.9865 | 0.9358 | 5.6140 | -94.9027 | -115.9047 | -0.4604 | -0.4319 |
+ | 0.1311 | 2.1 | 4600 | 0.1503 | 3.4735 | -2.1194 | 0.9134 | 5.5928 | -95.1684 | -116.2128 | -0.4405 | -0.4123 |
+ | 0.1329 | 2.15 | 4700 | 0.1431 | 3.5793 | -2.1059 | 0.9218 | 5.6852 | -95.1415 | -116.0012 | -0.4519 | -0.4229 |
+ | 0.1346 | 2.19 | 4800 | 0.1494 | 3.6059 | -2.0642 | 0.9274 | 5.6701 | -95.0581 | -115.9479 | -0.4639 | -0.4332 |
+ | 0.1462 | 2.24 | 4900 | 0.1455 | 3.4721 | -2.1648 | 0.9218 | 5.6369 | -95.2593 | -116.2156 | -0.4553 | -0.4258 |
+ | 0.1221 | 2.28 | 5000 | 0.1538 | 3.6293 | -2.1472 | 0.9385 | 5.7764 | -95.2240 | -115.9012 | -0.4525 | -0.4268 |
+ | 0.1329 | 2.33 | 5100 | 0.1486 | 3.4734 | -2.1778 | 0.9358 | 5.6512 | -95.2853 | -116.2130 | -0.4578 | -0.4301 |
+ | 0.1284 | 2.37 | 5200 | 0.1527 | 3.4805 | -2.1670 | 0.9078 | 5.6474 | -95.2636 | -116.1988 | -0.4611 | -0.4329 |
+ | 0.1238 | 2.42 | 5300 | 0.1433 | 3.4570 | -2.1768 | 0.9274 | 5.6338 | -95.2832 | -116.2457 | -0.4451 | -0.4191 |
+ | 0.1317 | 2.46 | 5400 | 0.1421 | 3.5647 | -2.2232 | 0.9330 | 5.7880 | -95.3761 | -116.0303 | -0.4565 | -0.4342 |
+ | 0.131 | 2.51 | 5500 | 0.1478 | 3.4211 | -2.2681 | 0.9190 | 5.6892 | -95.4659 | -116.3175 | -0.4444 | -0.4147 |
+ | 0.1235 | 2.56 | 5600 | 0.1428 | 3.5292 | -2.2798 | 0.9413 | 5.8089 | -95.4892 | -116.1014 | -0.4485 | -0.4234 |
+ | 0.1122 | 2.6 | 5700 | 0.1445 | 3.6102 | -2.2363 | 0.9330 | 5.8465 | -95.4023 | -115.9393 | -0.4473 | -0.4233 |
+ | 0.1172 | 2.65 | 5800 | 0.1415 | 3.5813 | -2.1899 | 0.9246 | 5.7712 | -95.3095 | -115.9972 | -0.4648 | -0.4357 |
+ | 0.1257 | 2.69 | 5900 | 0.1428 | 3.4075 | -2.3047 | 0.9218 | 5.7122 | -95.5390 | -116.3447 | -0.4553 | -0.4269 |
+ | 0.1441 | 2.74 | 6000 | 0.1426 | 3.4287 | -2.3210 | 0.9190 | 5.7497 | -95.5717 | -116.3024 | -0.4673 | -0.4401 |
+ | 0.1359 | 2.78 | 6100 | 0.1479 | 3.4833 | -2.2993 | 0.9358 | 5.7826 | -95.5282 | -116.1931 | -0.4409 | -0.4173 |
+ | 0.1332 | 2.83 | 6200 | 0.1442 | 3.4741 | -2.2726 | 0.9330 | 5.7466 | -95.4748 | -116.2116 | -0.4512 | -0.4262 |
+ | 0.1454 | 2.88 | 6300 | 0.1397 | 3.4410 | -2.2911 | 0.9358 | 5.7320 | -95.5118 | -116.2778 | -0.4604 | -0.4355 |
+ | 0.1355 | 2.92 | 6400 | 0.1471 | 3.3740 | -2.3739 | 0.9330 | 5.7479 | -95.6775 | -116.4117 | -0.4473 | -0.4225 |
+ | 0.1114 | 2.97 | 6500 | 0.1397 | 3.4854 | -2.3222 | 0.9302 | 5.8076 | -95.5740 | -116.1889 | -0.4595 | -0.4345 |
+
+
+ ### Framework versions
+
+ - Transformers 4.35.0
+ - PyTorch 2.1.1+cu121
+ - Datasets 2.14.6
+ - Tokenizers 0.14.1
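+
+ ### Example usage
+
+ A minimal loading-and-sampling sketch (assuming the standard `transformers` API; the repo id below is inferred from this commit and may differ):
+
+ ```python
+ # Minimal usage sketch; "nlile/PE-12b-pythia" is an assumed repo id.
+ import torch
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+
+ model_id = "nlile/PE-12b-pythia"
+ tokenizer = AutoTokenizer.from_pretrained(model_id)
+ model = AutoModelForCausalLM.from_pretrained(
+     model_id, torch_dtype=torch.float16, device_map="auto"
+ )
+
+ inputs = tokenizer("The capital of France is", return_tensors="pt").to(model.device)
+ outputs = model.generate(**inputs, max_new_tokens=32)
+ print(tokenizer.decode(outputs[0], skip_special_tokens=True))
+ ```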
all_results.json ADDED
@@ -0,0 +1,21 @@
+ {
+ "epoch": 3.0,
+ "eval_logits/chosen": -0.4354749321937561,
+ "eval_logits/rejected": -0.46039730310440063,
+ "eval_logps/chosen": -116.15072631835938,
+ "eval_logps/rejected": -95.56389617919922,
+ "eval_loss": 0.1421111524105072,
+ "eval_rewards/accuracies": 0.9441340565681458,
+ "eval_rewards/chosen": 3.5045158863067627,
+ "eval_rewards/margins": 5.8216233253479,
+ "eval_rewards/rejected": -2.317107915878296,
+ "eval_runtime": 911.8075,
+ "eval_samples": 2862,
+ "eval_samples_per_second": 3.139,
+ "eval_steps_per_second": 0.196,
+ "train_loss": 0.2317562538376319,
+ "train_runtime": 114179.2227,
+ "train_samples": 140201,
+ "train_samples_per_second": 3.684,
+ "train_steps_per_second": 0.058
+ }
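As an illustrative cross-check (not part of this commit), the throughput figures above are mutually consistent with the batch configuration listed in the README:

```python
# Illustrative cross-check: plain arithmetic over the values in
# all_results.json and the README's batch configuration.
train_samples = 140201
epochs = 3
total_train_batch_size = 64  # 1 per device x 8 devices x 8 grad-accum steps
train_runtime_s = 114179.2227

steps = train_samples * epochs / total_train_batch_size
print(round(steps))                                        # ~6572 optimizer steps
print(round(steps / train_runtime_s, 3))                   # 0.058 = train_steps_per_second
print(round(train_samples * epochs / train_runtime_s, 3))  # 3.684 = train_samples_per_second
```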
eval_results.json ADDED
@@ -0,0 +1,16 @@
+ {
+ "epoch": 3.0,
+ "eval_logits/chosen": -0.4354749321937561,
+ "eval_logits/rejected": -0.46039730310440063,
+ "eval_logps/chosen": -116.15072631835938,
+ "eval_logps/rejected": -95.56389617919922,
+ "eval_loss": 0.1421111524105072,
+ "eval_rewards/accuracies": 0.9441340565681458,
+ "eval_rewards/chosen": 3.5045158863067627,
+ "eval_rewards/margins": 5.8216233253479,
+ "eval_rewards/rejected": -2.317107915878296,
+ "eval_runtime": 911.8075,
+ "eval_samples": 2862,
+ "eval_samples_per_second": 3.139,
+ "eval_steps_per_second": 0.196
+ }
generation_config.json ADDED
@@ -0,0 +1,7 @@
+ {
+ "_from_model_config": true,
+ "bos_token_id": 0,
+ "eos_token_id": 0,
+ "transformers_version": "4.35.0",
+ "use_cache": false
+ }
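For reference (a sketch, not part of the commit), these fields map one-to-one onto `transformers.GenerationConfig`; `_from_model_config` is an internal flag marking that the config was derived from the model config rather than authored by hand:

```python
# Sketch: constructing the equivalent GenerationConfig in transformers.
from transformers import GenerationConfig

gen_config = GenerationConfig(bos_token_id=0, eos_token_id=0, use_cache=False)
print(gen_config)
```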
model-00001-of-00005.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:19d92ee4fd3edc2274e9e453d0332708b4c0ee44b6b4eeec15dfbea5052f8e0b
+ size 4924047696
model-00002-of-00005.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:04c9ab805e081b68e6985441f10d75745533eb2b7d3379e7545a09ae7c983021
+ size 4824474112
model-00003-of-00005.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e4b32b6c824f276f1adba172c6b1c54719906bac863e8967e0ce1529c882ed3d
+ size 4824484864
model-00004-of-00005.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:0c67d5f47f84ceeb314f77459913affec5ab2464efd58976f5b71ad8d2f8dfa6
+ size 4981802000
model-00005-of-00005.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:16a13a45616a452a0013fbdc71b282b6e72856624b8c69647afa069fe5f7f027
+ size 4137388064
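The five shards above are stored as Git LFS pointers: the repository itself tracks only an `oid` (a SHA-256 of the payload) and a byte `size`, not the weights. A minimal sketch (hypothetical local path) for verifying a downloaded shard against its pointer:

```python
# Sketch: verify a downloaded shard against the sha256 in its LFS pointer.
import hashlib

def sha256_of(path):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

expected = "16a13a45616a452a0013fbdc71b282b6e72856624b8c69647afa069fe5f7f027"
assert sha256_of("model-00005-of-00005.safetensors") == expected
```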
model.safetensors.index.json ADDED
@@ -0,0 +1,443 @@
+ {
+ "metadata": {
+ "total_size": 23692144640
+ },
+ "weight_map": {
+ "embed_out.weight": "model-00005-of-00005.safetensors",
+ "gpt_neox.embed_in.weight": "model-00001-of-00005.safetensors",
+ "gpt_neox.final_layer_norm.bias": "model-00005-of-00005.safetensors",
+ "gpt_neox.final_layer_norm.weight": "model-00005-of-00005.safetensors",
+ "gpt_neox.layers.0.attention.dense.bias": "model-00001-of-00005.safetensors",
+ "gpt_neox.layers.0.attention.dense.weight": "model-00001-of-00005.safetensors",
+ "gpt_neox.layers.0.attention.query_key_value.bias": "model-00001-of-00005.safetensors",
+ "gpt_neox.layers.0.attention.query_key_value.weight": "model-00001-of-00005.safetensors",
+ "gpt_neox.layers.0.input_layernorm.bias": "model-00001-of-00005.safetensors",
+ "gpt_neox.layers.0.input_layernorm.weight": "model-00001-of-00005.safetensors",
+ "gpt_neox.layers.0.mlp.dense_4h_to_h.bias": "model-00001-of-00005.safetensors",
+ "gpt_neox.layers.0.mlp.dense_4h_to_h.weight": "model-00001-of-00005.safetensors",
+ "gpt_neox.layers.0.mlp.dense_h_to_4h.bias": "model-00001-of-00005.safetensors",
+ "gpt_neox.layers.0.mlp.dense_h_to_4h.weight": "model-00001-of-00005.safetensors",
+ "gpt_neox.layers.0.post_attention_layernorm.bias": "model-00001-of-00005.safetensors",
+ "gpt_neox.layers.0.post_attention_layernorm.weight": "model-00001-of-00005.safetensors",
+ "gpt_neox.layers.1.attention.dense.bias": "model-00001-of-00005.safetensors",
+ "gpt_neox.layers.1.attention.dense.weight": "model-00001-of-00005.safetensors",
+ "gpt_neox.layers.1.attention.query_key_value.bias": "model-00001-of-00005.safetensors",
+ "gpt_neox.layers.1.attention.query_key_value.weight": "model-00001-of-00005.safetensors",
+ "gpt_neox.layers.1.input_layernorm.bias": "model-00001-of-00005.safetensors",
+ "gpt_neox.layers.1.input_layernorm.weight": "model-00001-of-00005.safetensors",
+ "gpt_neox.layers.1.mlp.dense_4h_to_h.bias": "model-00001-of-00005.safetensors",
+ "gpt_neox.layers.1.mlp.dense_4h_to_h.weight": "model-00001-of-00005.safetensors",
+ "gpt_neox.layers.1.mlp.dense_h_to_4h.bias": "model-00001-of-00005.safetensors",
+ "gpt_neox.layers.1.mlp.dense_h_to_4h.weight": "model-00001-of-00005.safetensors",
+ "gpt_neox.layers.1.post_attention_layernorm.bias": "model-00001-of-00005.safetensors",
+ "gpt_neox.layers.1.post_attention_layernorm.weight": "model-00001-of-00005.safetensors",
+ "gpt_neox.layers.10.attention.dense.bias": "model-00002-of-00005.safetensors",
+ "gpt_neox.layers.10.attention.dense.weight": "model-00002-of-00005.safetensors",
+ "gpt_neox.layers.10.attention.query_key_value.bias": "model-00002-of-00005.safetensors",
+ "gpt_neox.layers.10.attention.query_key_value.weight": "model-00002-of-00005.safetensors",
+ "gpt_neox.layers.10.input_layernorm.bias": "model-00002-of-00005.safetensors",
+ "gpt_neox.layers.10.input_layernorm.weight": "model-00002-of-00005.safetensors",
+ "gpt_neox.layers.10.mlp.dense_4h_to_h.bias": "model-00002-of-00005.safetensors",
+ "gpt_neox.layers.10.mlp.dense_4h_to_h.weight": "model-00002-of-00005.safetensors",
+ "gpt_neox.layers.10.mlp.dense_h_to_4h.bias": "model-00002-of-00005.safetensors",
+ "gpt_neox.layers.10.mlp.dense_h_to_4h.weight": "model-00002-of-00005.safetensors",
+ "gpt_neox.layers.10.post_attention_layernorm.bias": "model-00002-of-00005.safetensors",
+ "gpt_neox.layers.10.post_attention_layernorm.weight": "model-00002-of-00005.safetensors",
+ "gpt_neox.layers.11.attention.dense.bias": "model-00002-of-00005.safetensors",
+ "gpt_neox.layers.11.attention.dense.weight": "model-00002-of-00005.safetensors",
+ "gpt_neox.layers.11.attention.query_key_value.bias": "model-00002-of-00005.safetensors",
+ "gpt_neox.layers.11.attention.query_key_value.weight": "model-00002-of-00005.safetensors",
+ "gpt_neox.layers.11.input_layernorm.bias": "model-00002-of-00005.safetensors",
+ "gpt_neox.layers.11.input_layernorm.weight": "model-00002-of-00005.safetensors",
+ "gpt_neox.layers.11.mlp.dense_4h_to_h.bias": "model-00002-of-00005.safetensors",
+ "gpt_neox.layers.11.mlp.dense_4h_to_h.weight": "model-00002-of-00005.safetensors",
+ "gpt_neox.layers.11.mlp.dense_h_to_4h.bias": "model-00002-of-00005.safetensors",
+ "gpt_neox.layers.11.mlp.dense_h_to_4h.weight": "model-00002-of-00005.safetensors",
+ "gpt_neox.layers.11.post_attention_layernorm.bias": "model-00002-of-00005.safetensors",
+ "gpt_neox.layers.11.post_attention_layernorm.weight": "model-00002-of-00005.safetensors",
+ "gpt_neox.layers.12.attention.dense.bias": "model-00002-of-00005.safetensors",
+ "gpt_neox.layers.12.attention.dense.weight": "model-00002-of-00005.safetensors",
+ "gpt_neox.layers.12.attention.query_key_value.bias": "model-00002-of-00005.safetensors",
+ "gpt_neox.layers.12.attention.query_key_value.weight": "model-00002-of-00005.safetensors",
+ "gpt_neox.layers.12.input_layernorm.bias": "model-00002-of-00005.safetensors",
+ "gpt_neox.layers.12.input_layernorm.weight": "model-00002-of-00005.safetensors",
+ "gpt_neox.layers.12.mlp.dense_4h_to_h.bias": "model-00002-of-00005.safetensors",
+ "gpt_neox.layers.12.mlp.dense_4h_to_h.weight": "model-00002-of-00005.safetensors",
+ "gpt_neox.layers.12.mlp.dense_h_to_4h.bias": "model-00002-of-00005.safetensors",
+ "gpt_neox.layers.12.mlp.dense_h_to_4h.weight": "model-00002-of-00005.safetensors",
+ "gpt_neox.layers.12.post_attention_layernorm.bias": "model-00002-of-00005.safetensors",
+ "gpt_neox.layers.12.post_attention_layernorm.weight": "model-00002-of-00005.safetensors",
+ "gpt_neox.layers.13.attention.dense.bias": "model-00002-of-00005.safetensors",
+ "gpt_neox.layers.13.attention.dense.weight": "model-00002-of-00005.safetensors",
+ "gpt_neox.layers.13.attention.query_key_value.bias": "model-00002-of-00005.safetensors",
+ "gpt_neox.layers.13.attention.query_key_value.weight": "model-00002-of-00005.safetensors",
+ "gpt_neox.layers.13.input_layernorm.bias": "model-00002-of-00005.safetensors",
+ "gpt_neox.layers.13.input_layernorm.weight": "model-00002-of-00005.safetensors",
+ "gpt_neox.layers.13.mlp.dense_4h_to_h.bias": "model-00002-of-00005.safetensors",
+ "gpt_neox.layers.13.mlp.dense_4h_to_h.weight": "model-00002-of-00005.safetensors",
+ "gpt_neox.layers.13.mlp.dense_h_to_4h.bias": "model-00002-of-00005.safetensors",
+ "gpt_neox.layers.13.mlp.dense_h_to_4h.weight": "model-00002-of-00005.safetensors",
+ "gpt_neox.layers.13.post_attention_layernorm.bias": "model-00002-of-00005.safetensors",
+ "gpt_neox.layers.13.post_attention_layernorm.weight": "model-00002-of-00005.safetensors",
+ "gpt_neox.layers.14.attention.dense.bias": "model-00002-of-00005.safetensors",
+ "gpt_neox.layers.14.attention.dense.weight": "model-00002-of-00005.safetensors",
+ "gpt_neox.layers.14.attention.query_key_value.bias": "model-00002-of-00005.safetensors",
+ "gpt_neox.layers.14.attention.query_key_value.weight": "model-00002-of-00005.safetensors",
+ "gpt_neox.layers.14.input_layernorm.bias": "model-00002-of-00005.safetensors",
+ "gpt_neox.layers.14.input_layernorm.weight": "model-00002-of-00005.safetensors",
+ "gpt_neox.layers.14.mlp.dense_4h_to_h.bias": "model-00003-of-00005.safetensors",
+ "gpt_neox.layers.14.mlp.dense_4h_to_h.weight": "model-00003-of-00005.safetensors",
+ "gpt_neox.layers.14.mlp.dense_h_to_4h.bias": "model-00002-of-00005.safetensors",
+ "gpt_neox.layers.14.mlp.dense_h_to_4h.weight": "model-00002-of-00005.safetensors",
+ "gpt_neox.layers.14.post_attention_layernorm.bias": "model-00002-of-00005.safetensors",
+ "gpt_neox.layers.14.post_attention_layernorm.weight": "model-00002-of-00005.safetensors",
+ "gpt_neox.layers.15.attention.dense.bias": "model-00003-of-00005.safetensors",
+ "gpt_neox.layers.15.attention.dense.weight": "model-00003-of-00005.safetensors",
+ "gpt_neox.layers.15.attention.query_key_value.bias": "model-00003-of-00005.safetensors",
+ "gpt_neox.layers.15.attention.query_key_value.weight": "model-00003-of-00005.safetensors",
+ "gpt_neox.layers.15.input_layernorm.bias": "model-00003-of-00005.safetensors",
+ "gpt_neox.layers.15.input_layernorm.weight": "model-00003-of-00005.safetensors",
+ "gpt_neox.layers.15.mlp.dense_4h_to_h.bias": "model-00003-of-00005.safetensors",
+ "gpt_neox.layers.15.mlp.dense_4h_to_h.weight": "model-00003-of-00005.safetensors",
+ "gpt_neox.layers.15.mlp.dense_h_to_4h.bias": "model-00003-of-00005.safetensors",
+ "gpt_neox.layers.15.mlp.dense_h_to_4h.weight": "model-00003-of-00005.safetensors",
+ "gpt_neox.layers.15.post_attention_layernorm.bias": "model-00003-of-00005.safetensors",
+ "gpt_neox.layers.15.post_attention_layernorm.weight": "model-00003-of-00005.safetensors",
+ "gpt_neox.layers.16.attention.dense.bias": "model-00003-of-00005.safetensors",
+ "gpt_neox.layers.16.attention.dense.weight": "model-00003-of-00005.safetensors",
+ "gpt_neox.layers.16.attention.query_key_value.bias": "model-00003-of-00005.safetensors",
+ "gpt_neox.layers.16.attention.query_key_value.weight": "model-00003-of-00005.safetensors",
+ "gpt_neox.layers.16.input_layernorm.bias": "model-00003-of-00005.safetensors",
+ "gpt_neox.layers.16.input_layernorm.weight": "model-00003-of-00005.safetensors",
+ "gpt_neox.layers.16.mlp.dense_4h_to_h.bias": "model-00003-of-00005.safetensors",
+ "gpt_neox.layers.16.mlp.dense_4h_to_h.weight": "model-00003-of-00005.safetensors",
+ "gpt_neox.layers.16.mlp.dense_h_to_4h.bias": "model-00003-of-00005.safetensors",
+ "gpt_neox.layers.16.mlp.dense_h_to_4h.weight": "model-00003-of-00005.safetensors",
+ "gpt_neox.layers.16.post_attention_layernorm.bias": "model-00003-of-00005.safetensors",
+ "gpt_neox.layers.16.post_attention_layernorm.weight": "model-00003-of-00005.safetensors",
+ "gpt_neox.layers.17.attention.dense.bias": "model-00003-of-00005.safetensors",
+ "gpt_neox.layers.17.attention.dense.weight": "model-00003-of-00005.safetensors",
+ "gpt_neox.layers.17.attention.query_key_value.bias": "model-00003-of-00005.safetensors",
+ "gpt_neox.layers.17.attention.query_key_value.weight": "model-00003-of-00005.safetensors",
+ "gpt_neox.layers.17.input_layernorm.bias": "model-00003-of-00005.safetensors",
+ "gpt_neox.layers.17.input_layernorm.weight": "model-00003-of-00005.safetensors",
+ "gpt_neox.layers.17.mlp.dense_4h_to_h.bias": "model-00003-of-00005.safetensors",
+ "gpt_neox.layers.17.mlp.dense_4h_to_h.weight": "model-00003-of-00005.safetensors",
+ "gpt_neox.layers.17.mlp.dense_h_to_4h.bias": "model-00003-of-00005.safetensors",
+ "gpt_neox.layers.17.mlp.dense_h_to_4h.weight": "model-00003-of-00005.safetensors",
+ "gpt_neox.layers.17.post_attention_layernorm.bias": "model-00003-of-00005.safetensors",
+ "gpt_neox.layers.17.post_attention_layernorm.weight": "model-00003-of-00005.safetensors",
+ "gpt_neox.layers.18.attention.dense.bias": "model-00003-of-00005.safetensors",
+ "gpt_neox.layers.18.attention.dense.weight": "model-00003-of-00005.safetensors",
+ "gpt_neox.layers.18.attention.query_key_value.bias": "model-00003-of-00005.safetensors",
+ "gpt_neox.layers.18.attention.query_key_value.weight": "model-00003-of-00005.safetensors",
+ "gpt_neox.layers.18.input_layernorm.bias": "model-00003-of-00005.safetensors",
+ "gpt_neox.layers.18.input_layernorm.weight": "model-00003-of-00005.safetensors",
+ "gpt_neox.layers.18.mlp.dense_4h_to_h.bias": "model-00003-of-00005.safetensors",
+ "gpt_neox.layers.18.mlp.dense_4h_to_h.weight": "model-00003-of-00005.safetensors",
+ "gpt_neox.layers.18.mlp.dense_h_to_4h.bias": "model-00003-of-00005.safetensors",
+ "gpt_neox.layers.18.mlp.dense_h_to_4h.weight": "model-00003-of-00005.safetensors",
+ "gpt_neox.layers.18.post_attention_layernorm.bias": "model-00003-of-00005.safetensors",
+ "gpt_neox.layers.18.post_attention_layernorm.weight": "model-00003-of-00005.safetensors",
+ "gpt_neox.layers.19.attention.dense.bias": "model-00003-of-00005.safetensors",
+ "gpt_neox.layers.19.attention.dense.weight": "model-00003-of-00005.safetensors",
+ "gpt_neox.layers.19.attention.query_key_value.bias": "model-00003-of-00005.safetensors",
+ "gpt_neox.layers.19.attention.query_key_value.weight": "model-00003-of-00005.safetensors",
+ "gpt_neox.layers.19.input_layernorm.bias": "model-00003-of-00005.safetensors",
+ "gpt_neox.layers.19.input_layernorm.weight": "model-00003-of-00005.safetensors",
+ "gpt_neox.layers.19.mlp.dense_4h_to_h.bias": "model-00003-of-00005.safetensors",
+ "gpt_neox.layers.19.mlp.dense_4h_to_h.weight": "model-00003-of-00005.safetensors",
+ "gpt_neox.layers.19.mlp.dense_h_to_4h.bias": "model-00003-of-00005.safetensors",
+ "gpt_neox.layers.19.mlp.dense_h_to_4h.weight": "model-00003-of-00005.safetensors",
+ "gpt_neox.layers.19.post_attention_layernorm.bias": "model-00003-of-00005.safetensors",
+ "gpt_neox.layers.19.post_attention_layernorm.weight": "model-00003-of-00005.safetensors",
+ "gpt_neox.layers.2.attention.dense.bias": "model-00001-of-00005.safetensors",
+ "gpt_neox.layers.2.attention.dense.weight": "model-00001-of-00005.safetensors",
+ "gpt_neox.layers.2.attention.query_key_value.bias": "model-00001-of-00005.safetensors",
+ "gpt_neox.layers.2.attention.query_key_value.weight": "model-00001-of-00005.safetensors",
+ "gpt_neox.layers.2.input_layernorm.bias": "model-00001-of-00005.safetensors",
+ "gpt_neox.layers.2.input_layernorm.weight": "model-00001-of-00005.safetensors",
+ "gpt_neox.layers.2.mlp.dense_4h_to_h.bias": "model-00001-of-00005.safetensors",
+ "gpt_neox.layers.2.mlp.dense_4h_to_h.weight": "model-00001-of-00005.safetensors",
+ "gpt_neox.layers.2.mlp.dense_h_to_4h.bias": "model-00001-of-00005.safetensors",
+ "gpt_neox.layers.2.mlp.dense_h_to_4h.weight": "model-00001-of-00005.safetensors",
+ "gpt_neox.layers.2.post_attention_layernorm.bias": "model-00001-of-00005.safetensors",
+ "gpt_neox.layers.2.post_attention_layernorm.weight": "model-00001-of-00005.safetensors",
+ "gpt_neox.layers.20.attention.dense.bias": "model-00003-of-00005.safetensors",
+ "gpt_neox.layers.20.attention.dense.weight": "model-00003-of-00005.safetensors",
+ "gpt_neox.layers.20.attention.query_key_value.bias": "model-00003-of-00005.safetensors",
+ "gpt_neox.layers.20.attention.query_key_value.weight": "model-00003-of-00005.safetensors",
+ "gpt_neox.layers.20.input_layernorm.bias": "model-00003-of-00005.safetensors",
+ "gpt_neox.layers.20.input_layernorm.weight": "model-00003-of-00005.safetensors",
+ "gpt_neox.layers.20.mlp.dense_4h_to_h.bias": "model-00003-of-00005.safetensors",
+ "gpt_neox.layers.20.mlp.dense_4h_to_h.weight": "model-00003-of-00005.safetensors",
+ "gpt_neox.layers.20.mlp.dense_h_to_4h.bias": "model-00003-of-00005.safetensors",
+ "gpt_neox.layers.20.mlp.dense_h_to_4h.weight": "model-00003-of-00005.safetensors",
+ "gpt_neox.layers.20.post_attention_layernorm.bias": "model-00003-of-00005.safetensors",
+ "gpt_neox.layers.20.post_attention_layernorm.weight": "model-00003-of-00005.safetensors",
+ "gpt_neox.layers.21.attention.dense.bias": "model-00003-of-00005.safetensors",
+ "gpt_neox.layers.21.attention.dense.weight": "model-00003-of-00005.safetensors",
+ "gpt_neox.layers.21.attention.query_key_value.bias": "model-00003-of-00005.safetensors",
+ "gpt_neox.layers.21.attention.query_key_value.weight": "model-00003-of-00005.safetensors",
+ "gpt_neox.layers.21.input_layernorm.bias": "model-00003-of-00005.safetensors",
+ "gpt_neox.layers.21.input_layernorm.weight": "model-00003-of-00005.safetensors",
+ "gpt_neox.layers.21.mlp.dense_4h_to_h.bias": "model-00003-of-00005.safetensors",
+ "gpt_neox.layers.21.mlp.dense_4h_to_h.weight": "model-00003-of-00005.safetensors",
+ "gpt_neox.layers.21.mlp.dense_h_to_4h.bias": "model-00003-of-00005.safetensors",
+ "gpt_neox.layers.21.mlp.dense_h_to_4h.weight": "model-00003-of-00005.safetensors",
+ "gpt_neox.layers.21.post_attention_layernorm.bias": "model-00003-of-00005.safetensors",
+ "gpt_neox.layers.21.post_attention_layernorm.weight": "model-00003-of-00005.safetensors",
+ "gpt_neox.layers.22.attention.dense.bias": "model-00003-of-00005.safetensors",
+ "gpt_neox.layers.22.attention.dense.weight": "model-00003-of-00005.safetensors",
+ "gpt_neox.layers.22.attention.query_key_value.bias": "model-00003-of-00005.safetensors",
+ "gpt_neox.layers.22.attention.query_key_value.weight": "model-00003-of-00005.safetensors",
+ "gpt_neox.layers.22.input_layernorm.bias": "model-00003-of-00005.safetensors",
+ "gpt_neox.layers.22.input_layernorm.weight": "model-00003-of-00005.safetensors",
+ "gpt_neox.layers.22.mlp.dense_4h_to_h.bias": "model-00004-of-00005.safetensors",
+ "gpt_neox.layers.22.mlp.dense_4h_to_h.weight": "model-00004-of-00005.safetensors",
+ "gpt_neox.layers.22.mlp.dense_h_to_4h.bias": "model-00004-of-00005.safetensors",
+ "gpt_neox.layers.22.mlp.dense_h_to_4h.weight": "model-00004-of-00005.safetensors",
+ "gpt_neox.layers.22.post_attention_layernorm.bias": "model-00003-of-00005.safetensors",
+ "gpt_neox.layers.22.post_attention_layernorm.weight": "model-00003-of-00005.safetensors",
+ "gpt_neox.layers.23.attention.dense.bias": "model-00004-of-00005.safetensors",
+ "gpt_neox.layers.23.attention.dense.weight": "model-00004-of-00005.safetensors",
+ "gpt_neox.layers.23.attention.query_key_value.bias": "model-00004-of-00005.safetensors",
+ "gpt_neox.layers.23.attention.query_key_value.weight": "model-00004-of-00005.safetensors",
+ "gpt_neox.layers.23.input_layernorm.bias": "model-00004-of-00005.safetensors",
+ "gpt_neox.layers.23.input_layernorm.weight": "model-00004-of-00005.safetensors",
+ "gpt_neox.layers.23.mlp.dense_4h_to_h.bias": "model-00004-of-00005.safetensors",
+ "gpt_neox.layers.23.mlp.dense_4h_to_h.weight": "model-00004-of-00005.safetensors",
+ "gpt_neox.layers.23.mlp.dense_h_to_4h.bias": "model-00004-of-00005.safetensors",
+ "gpt_neox.layers.23.mlp.dense_h_to_4h.weight": "model-00004-of-00005.safetensors",
+ "gpt_neox.layers.23.post_attention_layernorm.bias": "model-00004-of-00005.safetensors",
+ "gpt_neox.layers.23.post_attention_layernorm.weight": "model-00004-of-00005.safetensors",
+ "gpt_neox.layers.24.attention.dense.bias": "model-00004-of-00005.safetensors",
+ "gpt_neox.layers.24.attention.dense.weight": "model-00004-of-00005.safetensors",
+ "gpt_neox.layers.24.attention.query_key_value.bias": "model-00004-of-00005.safetensors",
+ "gpt_neox.layers.24.attention.query_key_value.weight": "model-00004-of-00005.safetensors",
+ "gpt_neox.layers.24.input_layernorm.bias": "model-00004-of-00005.safetensors",
+ "gpt_neox.layers.24.input_layernorm.weight": "model-00004-of-00005.safetensors",
+ "gpt_neox.layers.24.mlp.dense_4h_to_h.bias": "model-00004-of-00005.safetensors",
+ "gpt_neox.layers.24.mlp.dense_4h_to_h.weight": "model-00004-of-00005.safetensors",
+ "gpt_neox.layers.24.mlp.dense_h_to_4h.bias": "model-00004-of-00005.safetensors",
+ "gpt_neox.layers.24.mlp.dense_h_to_4h.weight": "model-00004-of-00005.safetensors",
+ "gpt_neox.layers.24.post_attention_layernorm.bias": "model-00004-of-00005.safetensors",
+ "gpt_neox.layers.24.post_attention_layernorm.weight": "model-00004-of-00005.safetensors",
+ "gpt_neox.layers.25.attention.dense.bias": "model-00004-of-00005.safetensors",
+ "gpt_neox.layers.25.attention.dense.weight": "model-00004-of-00005.safetensors",
+ "gpt_neox.layers.25.attention.query_key_value.bias": "model-00004-of-00005.safetensors",
+ "gpt_neox.layers.25.attention.query_key_value.weight": "model-00004-of-00005.safetensors",
+ "gpt_neox.layers.25.input_layernorm.bias": "model-00004-of-00005.safetensors",
+ "gpt_neox.layers.25.input_layernorm.weight": "model-00004-of-00005.safetensors",
+ "gpt_neox.layers.25.mlp.dense_4h_to_h.bias": "model-00004-of-00005.safetensors",
+ "gpt_neox.layers.25.mlp.dense_4h_to_h.weight": "model-00004-of-00005.safetensors",
+ "gpt_neox.layers.25.mlp.dense_h_to_4h.bias": "model-00004-of-00005.safetensors",
+ "gpt_neox.layers.25.mlp.dense_h_to_4h.weight": "model-00004-of-00005.safetensors",
+ "gpt_neox.layers.25.post_attention_layernorm.bias": "model-00004-of-00005.safetensors",
+ "gpt_neox.layers.25.post_attention_layernorm.weight": "model-00004-of-00005.safetensors",
+ "gpt_neox.layers.26.attention.dense.bias": "model-00004-of-00005.safetensors",
+ "gpt_neox.layers.26.attention.dense.weight": "model-00004-of-00005.safetensors",
+ "gpt_neox.layers.26.attention.query_key_value.bias": "model-00004-of-00005.safetensors",
+ "gpt_neox.layers.26.attention.query_key_value.weight": "model-00004-of-00005.safetensors",
+ "gpt_neox.layers.26.input_layernorm.bias": "model-00004-of-00005.safetensors",
+ "gpt_neox.layers.26.input_layernorm.weight": "model-00004-of-00005.safetensors",
+ "gpt_neox.layers.26.mlp.dense_4h_to_h.bias": "model-00004-of-00005.safetensors",
+ "gpt_neox.layers.26.mlp.dense_4h_to_h.weight": "model-00004-of-00005.safetensors",
+ "gpt_neox.layers.26.mlp.dense_h_to_4h.bias": "model-00004-of-00005.safetensors",
+ "gpt_neox.layers.26.mlp.dense_h_to_4h.weight": "model-00004-of-00005.safetensors",
+ "gpt_neox.layers.26.post_attention_layernorm.bias": "model-00004-of-00005.safetensors",
+ "gpt_neox.layers.26.post_attention_layernorm.weight": "model-00004-of-00005.safetensors",
+ "gpt_neox.layers.27.attention.dense.bias": "model-00004-of-00005.safetensors",
+ "gpt_neox.layers.27.attention.dense.weight": "model-00004-of-00005.safetensors",
+ "gpt_neox.layers.27.attention.query_key_value.bias": "model-00004-of-00005.safetensors",
+ "gpt_neox.layers.27.attention.query_key_value.weight": "model-00004-of-00005.safetensors",
+ "gpt_neox.layers.27.input_layernorm.bias": "model-00004-of-00005.safetensors",
+ "gpt_neox.layers.27.input_layernorm.weight": "model-00004-of-00005.safetensors",
+ "gpt_neox.layers.27.mlp.dense_4h_to_h.bias": "model-00004-of-00005.safetensors",
+ "gpt_neox.layers.27.mlp.dense_4h_to_h.weight": "model-00004-of-00005.safetensors",
+ "gpt_neox.layers.27.mlp.dense_h_to_4h.bias": "model-00004-of-00005.safetensors",
+ "gpt_neox.layers.27.mlp.dense_h_to_4h.weight": "model-00004-of-00005.safetensors",
+ "gpt_neox.layers.27.post_attention_layernorm.bias": "model-00004-of-00005.safetensors",
+ "gpt_neox.layers.27.post_attention_layernorm.weight": "model-00004-of-00005.safetensors",
+ "gpt_neox.layers.28.attention.dense.bias": "model-00004-of-00005.safetensors",
+ "gpt_neox.layers.28.attention.dense.weight": "model-00004-of-00005.safetensors",
+ "gpt_neox.layers.28.attention.query_key_value.bias": "model-00004-of-00005.safetensors",
+ "gpt_neox.layers.28.attention.query_key_value.weight": "model-00004-of-00005.safetensors",
+ "gpt_neox.layers.28.input_layernorm.bias": "model-00004-of-00005.safetensors",
+ "gpt_neox.layers.28.input_layernorm.weight": "model-00004-of-00005.safetensors",
+ "gpt_neox.layers.28.mlp.dense_4h_to_h.bias": "model-00004-of-00005.safetensors",
+ "gpt_neox.layers.28.mlp.dense_4h_to_h.weight": "model-00004-of-00005.safetensors",
+ "gpt_neox.layers.28.mlp.dense_h_to_4h.bias": "model-00004-of-00005.safetensors",
+ "gpt_neox.layers.28.mlp.dense_h_to_4h.weight": "model-00004-of-00005.safetensors",
+ "gpt_neox.layers.28.post_attention_layernorm.bias": "model-00004-of-00005.safetensors",
+ "gpt_neox.layers.28.post_attention_layernorm.weight": "model-00004-of-00005.safetensors",
+ "gpt_neox.layers.29.attention.dense.bias": "model-00004-of-00005.safetensors",
+ "gpt_neox.layers.29.attention.dense.weight": "model-00004-of-00005.safetensors",
+ "gpt_neox.layers.29.attention.query_key_value.bias": "model-00004-of-00005.safetensors",
+ "gpt_neox.layers.29.attention.query_key_value.weight": "model-00004-of-00005.safetensors",
+ "gpt_neox.layers.29.input_layernorm.bias": "model-00004-of-00005.safetensors",
+ "gpt_neox.layers.29.input_layernorm.weight": "model-00004-of-00005.safetensors",
+ "gpt_neox.layers.29.mlp.dense_4h_to_h.bias": "model-00004-of-00005.safetensors",
+ "gpt_neox.layers.29.mlp.dense_4h_to_h.weight": "model-00004-of-00005.safetensors",
+ "gpt_neox.layers.29.mlp.dense_h_to_4h.bias": "model-00004-of-00005.safetensors",
+ "gpt_neox.layers.29.mlp.dense_h_to_4h.weight": "model-00004-of-00005.safetensors",
+ "gpt_neox.layers.29.post_attention_layernorm.bias": "model-00004-of-00005.safetensors",
+ "gpt_neox.layers.29.post_attention_layernorm.weight": "model-00004-of-00005.safetensors",
+ "gpt_neox.layers.3.attention.dense.bias": "model-00001-of-00005.safetensors",
+ "gpt_neox.layers.3.attention.dense.weight": "model-00001-of-00005.safetensors",
+ "gpt_neox.layers.3.attention.query_key_value.bias": "model-00001-of-00005.safetensors",
+ "gpt_neox.layers.3.attention.query_key_value.weight": "model-00001-of-00005.safetensors",
+ "gpt_neox.layers.3.input_layernorm.bias": "model-00001-of-00005.safetensors",
+ "gpt_neox.layers.3.input_layernorm.weight": "model-00001-of-00005.safetensors",
+ "gpt_neox.layers.3.mlp.dense_4h_to_h.bias": "model-00001-of-00005.safetensors",
+ "gpt_neox.layers.3.mlp.dense_4h_to_h.weight": "model-00001-of-00005.safetensors",
+ "gpt_neox.layers.3.mlp.dense_h_to_4h.bias": "model-00001-of-00005.safetensors",
+ "gpt_neox.layers.3.mlp.dense_h_to_4h.weight": "model-00001-of-00005.safetensors",
+ "gpt_neox.layers.3.post_attention_layernorm.bias": "model-00001-of-00005.safetensors",
+ "gpt_neox.layers.3.post_attention_layernorm.weight": "model-00001-of-00005.safetensors",
+ "gpt_neox.layers.30.attention.dense.bias": "model-00005-of-00005.safetensors",
+ "gpt_neox.layers.30.attention.dense.weight": "model-00005-of-00005.safetensors",
+ "gpt_neox.layers.30.attention.query_key_value.bias": "model-00004-of-00005.safetensors",
+ "gpt_neox.layers.30.attention.query_key_value.weight": "model-00004-of-00005.safetensors",
+ "gpt_neox.layers.30.input_layernorm.bias": "model-00004-of-00005.safetensors",
+ "gpt_neox.layers.30.input_layernorm.weight": "model-00004-of-00005.safetensors",
+ "gpt_neox.layers.30.mlp.dense_4h_to_h.bias": "model-00005-of-00005.safetensors",
+ "gpt_neox.layers.30.mlp.dense_4h_to_h.weight": "model-00005-of-00005.safetensors",
+ "gpt_neox.layers.30.mlp.dense_h_to_4h.bias": "model-00005-of-00005.safetensors",
+ "gpt_neox.layers.30.mlp.dense_h_to_4h.weight": "model-00005-of-00005.safetensors",
+ "gpt_neox.layers.30.post_attention_layernorm.bias": "model-00004-of-00005.safetensors",
+ "gpt_neox.layers.30.post_attention_layernorm.weight": "model-00004-of-00005.safetensors",
+ "gpt_neox.layers.31.attention.dense.bias": "model-00005-of-00005.safetensors",
+ "gpt_neox.layers.31.attention.dense.weight": "model-00005-of-00005.safetensors",
+ "gpt_neox.layers.31.attention.query_key_value.bias": "model-00005-of-00005.safetensors",
+ "gpt_neox.layers.31.attention.query_key_value.weight": "model-00005-of-00005.safetensors",
+ "gpt_neox.layers.31.input_layernorm.bias": "model-00005-of-00005.safetensors",
+ "gpt_neox.layers.31.input_layernorm.weight": "model-00005-of-00005.safetensors",
+ "gpt_neox.layers.31.mlp.dense_4h_to_h.bias": "model-00005-of-00005.safetensors",
+ "gpt_neox.layers.31.mlp.dense_4h_to_h.weight": "model-00005-of-00005.safetensors",
+ "gpt_neox.layers.31.mlp.dense_h_to_4h.bias": "model-00005-of-00005.safetensors",
+ "gpt_neox.layers.31.mlp.dense_h_to_4h.weight": "model-00005-of-00005.safetensors",
+ "gpt_neox.layers.31.post_attention_layernorm.bias": "model-00005-of-00005.safetensors",
+ "gpt_neox.layers.31.post_attention_layernorm.weight": "model-00005-of-00005.safetensors",
+ "gpt_neox.layers.32.attention.dense.bias": "model-00005-of-00005.safetensors",
+ "gpt_neox.layers.32.attention.dense.weight": "model-00005-of-00005.safetensors",
+ "gpt_neox.layers.32.attention.query_key_value.bias": "model-00005-of-00005.safetensors",
+ "gpt_neox.layers.32.attention.query_key_value.weight": "model-00005-of-00005.safetensors",
+ "gpt_neox.layers.32.input_layernorm.bias": "model-00005-of-00005.safetensors",
+ "gpt_neox.layers.32.input_layernorm.weight": "model-00005-of-00005.safetensors",
+ "gpt_neox.layers.32.mlp.dense_4h_to_h.bias": "model-00005-of-00005.safetensors",
+ "gpt_neox.layers.32.mlp.dense_4h_to_h.weight": "model-00005-of-00005.safetensors",
+ "gpt_neox.layers.32.mlp.dense_h_to_4h.bias": "model-00005-of-00005.safetensors",
+ "gpt_neox.layers.32.mlp.dense_h_to_4h.weight": "model-00005-of-00005.safetensors",
+ "gpt_neox.layers.32.post_attention_layernorm.bias": "model-00005-of-00005.safetensors",
+ "gpt_neox.layers.32.post_attention_layernorm.weight": "model-00005-of-00005.safetensors",
+ "gpt_neox.layers.33.attention.dense.bias": "model-00005-of-00005.safetensors",
+ "gpt_neox.layers.33.attention.dense.weight": "model-00005-of-00005.safetensors",
+ "gpt_neox.layers.33.attention.query_key_value.bias": "model-00005-of-00005.safetensors",
+ "gpt_neox.layers.33.attention.query_key_value.weight": "model-00005-of-00005.safetensors",
+ "gpt_neox.layers.33.input_layernorm.bias": "model-00005-of-00005.safetensors",
+ "gpt_neox.layers.33.input_layernorm.weight": "model-00005-of-00005.safetensors",
+ "gpt_neox.layers.33.mlp.dense_4h_to_h.bias": "model-00005-of-00005.safetensors",
+ "gpt_neox.layers.33.mlp.dense_4h_to_h.weight": "model-00005-of-00005.safetensors",
+ "gpt_neox.layers.33.mlp.dense_h_to_4h.bias": "model-00005-of-00005.safetensors",
+ "gpt_neox.layers.33.mlp.dense_h_to_4h.weight": "model-00005-of-00005.safetensors",
+ "gpt_neox.layers.33.post_attention_layernorm.bias": "model-00005-of-00005.safetensors",
+ "gpt_neox.layers.33.post_attention_layernorm.weight": "model-00005-of-00005.safetensors",
+ "gpt_neox.layers.34.attention.dense.bias": "model-00005-of-00005.safetensors",
+ "gpt_neox.layers.34.attention.dense.weight": "model-00005-of-00005.safetensors",
+ "gpt_neox.layers.34.attention.query_key_value.bias": "model-00005-of-00005.safetensors",
+ "gpt_neox.layers.34.attention.query_key_value.weight": "model-00005-of-00005.safetensors",
+ "gpt_neox.layers.34.input_layernorm.bias": "model-00005-of-00005.safetensors",
+ "gpt_neox.layers.34.input_layernorm.weight": "model-00005-of-00005.safetensors",
+ "gpt_neox.layers.34.mlp.dense_4h_to_h.bias": "model-00005-of-00005.safetensors",
+ "gpt_neox.layers.34.mlp.dense_4h_to_h.weight": "model-00005-of-00005.safetensors",
+ "gpt_neox.layers.34.mlp.dense_h_to_4h.bias": "model-00005-of-00005.safetensors",
+ "gpt_neox.layers.34.mlp.dense_h_to_4h.weight": "model-00005-of-00005.safetensors",
+ "gpt_neox.layers.34.post_attention_layernorm.bias": "model-00005-of-00005.safetensors",
+ "gpt_neox.layers.34.post_attention_layernorm.weight": "model-00005-of-00005.safetensors",
+ "gpt_neox.layers.35.attention.dense.bias": "model-00005-of-00005.safetensors",
+ "gpt_neox.layers.35.attention.dense.weight": "model-00005-of-00005.safetensors",
+ "gpt_neox.layers.35.attention.query_key_value.bias": "model-00005-of-00005.safetensors",
+ "gpt_neox.layers.35.attention.query_key_value.weight": "model-00005-of-00005.safetensors",
+ "gpt_neox.layers.35.input_layernorm.bias": "model-00005-of-00005.safetensors",
+ "gpt_neox.layers.35.input_layernorm.weight": "model-00005-of-00005.safetensors",
+ "gpt_neox.layers.35.mlp.dense_4h_to_h.bias": "model-00005-of-00005.safetensors",
+ "gpt_neox.layers.35.mlp.dense_4h_to_h.weight": "model-00005-of-00005.safetensors",
+ "gpt_neox.layers.35.mlp.dense_h_to_4h.bias": "model-00005-of-00005.safetensors",
+ "gpt_neox.layers.35.mlp.dense_h_to_4h.weight": "model-00005-of-00005.safetensors",
+ "gpt_neox.layers.35.post_attention_layernorm.bias": "model-00005-of-00005.safetensors",
+ "gpt_neox.layers.35.post_attention_layernorm.weight": "model-00005-of-00005.safetensors",
+ "gpt_neox.layers.4.attention.dense.bias": "model-00001-of-00005.safetensors",
+ "gpt_neox.layers.4.attention.dense.weight": "model-00001-of-00005.safetensors",
+ "gpt_neox.layers.4.attention.query_key_value.bias": "model-00001-of-00005.safetensors",
+ "gpt_neox.layers.4.attention.query_key_value.weight": "model-00001-of-00005.safetensors",
+ "gpt_neox.layers.4.input_layernorm.bias": "model-00001-of-00005.safetensors",
+ "gpt_neox.layers.4.input_layernorm.weight": "model-00001-of-00005.safetensors",
+ "gpt_neox.layers.4.mlp.dense_4h_to_h.bias": "model-00001-of-00005.safetensors",
+ "gpt_neox.layers.4.mlp.dense_4h_to_h.weight": "model-00001-of-00005.safetensors",
+ "gpt_neox.layers.4.mlp.dense_h_to_4h.bias": "model-00001-of-00005.safetensors",
+ "gpt_neox.layers.4.mlp.dense_h_to_4h.weight": "model-00001-of-00005.safetensors",
+ "gpt_neox.layers.4.post_attention_layernorm.bias": "model-00001-of-00005.safetensors",
+ "gpt_neox.layers.4.post_attention_layernorm.weight": "model-00001-of-00005.safetensors",
+ "gpt_neox.layers.5.attention.dense.bias": "model-00001-of-00005.safetensors",
+ "gpt_neox.layers.5.attention.dense.weight": "model-00001-of-00005.safetensors",
+ "gpt_neox.layers.5.attention.query_key_value.bias": "model-00001-of-00005.safetensors",
+ "gpt_neox.layers.5.attention.query_key_value.weight": "model-00001-of-00005.safetensors",
+ "gpt_neox.layers.5.input_layernorm.bias": "model-00001-of-00005.safetensors",
+ "gpt_neox.layers.5.input_layernorm.weight": "model-00001-of-00005.safetensors",
+ "gpt_neox.layers.5.mlp.dense_4h_to_h.bias": "model-00001-of-00005.safetensors",
+ "gpt_neox.layers.5.mlp.dense_4h_to_h.weight": "model-00001-of-00005.safetensors",
+ "gpt_neox.layers.5.mlp.dense_h_to_4h.bias": "model-00001-of-00005.safetensors",
+ "gpt_neox.layers.5.mlp.dense_h_to_4h.weight": "model-00001-of-00005.safetensors",
+ "gpt_neox.layers.5.post_attention_layernorm.bias": "model-00001-of-00005.safetensors",
+ "gpt_neox.layers.5.post_attention_layernorm.weight": "model-00001-of-00005.safetensors",
+ "gpt_neox.layers.6.attention.dense.bias": "model-00001-of-00005.safetensors",
+ "gpt_neox.layers.6.attention.dense.weight": "model-00001-of-00005.safetensors",
+ "gpt_neox.layers.6.attention.query_key_value.bias": "model-00001-of-00005.safetensors",
+ "gpt_neox.layers.6.attention.query_key_value.weight": "model-00001-of-00005.safetensors",
+ "gpt_neox.layers.6.input_layernorm.bias": "model-00001-of-00005.safetensors",
+ "gpt_neox.layers.6.input_layernorm.weight": "model-00001-of-00005.safetensors",
+ "gpt_neox.layers.6.mlp.dense_4h_to_h.bias": "model-00001-of-00005.safetensors",
+ "gpt_neox.layers.6.mlp.dense_4h_to_h.weight": "model-00001-of-00005.safetensors",
+ "gpt_neox.layers.6.mlp.dense_h_to_4h.bias": "model-00001-of-00005.safetensors",
+ "gpt_neox.layers.6.mlp.dense_h_to_4h.weight": "model-00001-of-00005.safetensors",
+ "gpt_neox.layers.6.post_attention_layernorm.bias": "model-00001-of-00005.safetensors",
+ "gpt_neox.layers.6.post_attention_layernorm.weight": "model-00001-of-00005.safetensors",
+ "gpt_neox.layers.7.attention.dense.bias": "model-00002-of-00005.safetensors",
+ "gpt_neox.layers.7.attention.dense.weight": "model-00002-of-00005.safetensors",
+ "gpt_neox.layers.7.attention.query_key_value.bias": "model-00002-of-00005.safetensors",
+ "gpt_neox.layers.7.attention.query_key_value.weight": "model-00002-of-00005.safetensors",
+ "gpt_neox.layers.7.input_layernorm.bias": "model-00001-of-00005.safetensors",
+ "gpt_neox.layers.7.input_layernorm.weight": "model-00001-of-00005.safetensors",
+ "gpt_neox.layers.7.mlp.dense_4h_to_h.bias": "model-00002-of-00005.safetensors",
+ "gpt_neox.layers.7.mlp.dense_4h_to_h.weight": "model-00002-of-00005.safetensors",
+ "gpt_neox.layers.7.mlp.dense_h_to_4h.bias": "model-00002-of-00005.safetensors",
+ "gpt_neox.layers.7.mlp.dense_h_to_4h.weight": "model-00002-of-00005.safetensors",
+ "gpt_neox.layers.7.post_attention_layernorm.bias": "model-00001-of-00005.safetensors",
+ "gpt_neox.layers.7.post_attention_layernorm.weight": "model-00001-of-00005.safetensors",
+ "gpt_neox.layers.8.attention.dense.bias": "model-00002-of-00005.safetensors",
+ "gpt_neox.layers.8.attention.dense.weight": "model-00002-of-00005.safetensors",
+ "gpt_neox.layers.8.attention.query_key_value.bias": "model-00002-of-00005.safetensors",
+ "gpt_neox.layers.8.attention.query_key_value.weight": "model-00002-of-00005.safetensors",
+ "gpt_neox.layers.8.input_layernorm.bias": "model-00002-of-00005.safetensors",
+ "gpt_neox.layers.8.input_layernorm.weight": "model-00002-of-00005.safetensors",
+ "gpt_neox.layers.8.mlp.dense_4h_to_h.bias": "model-00002-of-00005.safetensors",
+ "gpt_neox.layers.8.mlp.dense_4h_to_h.weight": "model-00002-of-00005.safetensors",
+ "gpt_neox.layers.8.mlp.dense_h_to_4h.bias": "model-00002-of-00005.safetensors",
+ "gpt_neox.layers.8.mlp.dense_h_to_4h.weight": "model-00002-of-00005.safetensors",
+ "gpt_neox.layers.8.post_attention_layernorm.bias": "model-00002-of-00005.safetensors",
+ "gpt_neox.layers.8.post_attention_layernorm.weight": "model-00002-of-00005.safetensors",
+ "gpt_neox.layers.9.attention.dense.bias": "model-00002-of-00005.safetensors",
+ "gpt_neox.layers.9.attention.dense.weight": "model-00002-of-00005.safetensors",
+ "gpt_neox.layers.9.attention.query_key_value.bias": "model-00002-of-00005.safetensors",
+ "gpt_neox.layers.9.attention.query_key_value.weight": "model-00002-of-00005.safetensors",
+ "gpt_neox.layers.9.input_layernorm.bias": "model-00002-of-00005.safetensors",
+ "gpt_neox.layers.9.input_layernorm.weight": "model-00002-of-00005.safetensors",
+ "gpt_neox.layers.9.mlp.dense_4h_to_h.bias": "model-00002-of-00005.safetensors",
+ "gpt_neox.layers.9.mlp.dense_4h_to_h.weight": "model-00002-of-00005.safetensors",
+ "gpt_neox.layers.9.mlp.dense_h_to_4h.bias": "model-00002-of-00005.safetensors",
+ "gpt_neox.layers.9.mlp.dense_h_to_4h.weight": "model-00002-of-00005.safetensors",
+ "gpt_neox.layers.9.post_attention_layernorm.bias": "model-00002-of-00005.safetensors",
+ "gpt_neox.layers.9.post_attention_layernorm.weight": "model-00002-of-00005.safetensors"
+ }
+ }
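The `weight_map` above is what lets `from_pretrained` locate each tensor in the sharded checkpoint. A minimal sketch (assumes the shards are present in the working directory) of resolving a single tensor by hand:

```python
# Sketch: resolve one tensor through model.safetensors.index.json.
import json
from safetensors import safe_open

with open("model.safetensors.index.json") as f:
    index = json.load(f)

name = "gpt_neox.embed_in.weight"
shard = index["weight_map"][name]  # -> "model-00001-of-00005.safetensors"
with safe_open(shard, framework="pt") as f:
    tensor = f.get_tensor(name)
print(tensor.shape)
```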
train_results.json ADDED
@@ -0,0 +1,8 @@
+ {
+ "epoch": 3.0,
+ "train_loss": 0.2317562538376319,
+ "train_runtime": 114179.2227,
+ "train_samples": 140201,
+ "train_samples_per_second": 3.684,
+ "train_steps_per_second": 0.058
+ }
trainer_state.json ADDED
The diff for this file is too large to render. See raw diff