Alignment-Lab-AI committed
Commit 9a65339
1 Parent(s): 2f1b078

Training in progress, epoch 2, checkpoint

checkpoint-162/README.md ADDED
@@ -0,0 +1,219 @@
1
+ ---
2
+ library_name: peft
3
+ base_model: mistralai/Mistral-7B-v0.1
4
+ ---
5
+
6
+ # Model Card for Model ID
7
+
8
+ <!-- Provide a quick summary of what the model is/does. -->
9
+
10
+
11
+
12
+ ## Model Details
13
+
14
+ ### Model Description
15
+
16
+ <!-- Provide a longer summary of what this model is. -->
17
+
18
+
19
+
20
+ - **Developed by:** [More Information Needed]
21
+ - **Shared by [optional]:** [More Information Needed]
22
+ - **Model type:** [More Information Needed]
23
+ - **Language(s) (NLP):** [More Information Needed]
24
+ - **License:** [More Information Needed]
25
+ - **Finetuned from model [optional]:** [More Information Needed]
26
+
27
+ ### Model Sources [optional]
28
+
29
+ <!-- Provide the basic links for the model. -->
30
+
31
+ - **Repository:** [More Information Needed]
32
+ - **Paper [optional]:** [More Information Needed]
33
+ - **Demo [optional]:** [More Information Needed]
34
+
35
+ ## Uses
36
+
37
+ <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
38
+
39
+ ### Direct Use
40
+
41
+ <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
42
+
43
+ [More Information Needed]
44
+
45
+ ### Downstream Use [optional]
46
+
47
+ <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
48
+
49
+ [More Information Needed]
50
+
51
+ ### Out-of-Scope Use
52
+
53
+ <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
54
+
55
+ [More Information Needed]
56
+
57
+ ## Bias, Risks, and Limitations
58
+
59
+ <!-- This section is meant to convey both technical and sociotechnical limitations. -->
60
+
61
+ [More Information Needed]
62
+
63
+ ### Recommendations
64
+
65
+ <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
66
+
67
+ Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
68
+
69
+ ## How to Get Started with the Model
70
+
71
+ Use the code below to get started with the model.
72
+
73
+ [More Information Needed]
74
+
75
+ ## Training Details
76
+
77
+ ### Training Data
78
+
79
+ <!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
80
+
81
+ [More Information Needed]
82
+
83
+ ### Training Procedure
84
+
85
+ <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
86
+
87
+ #### Preprocessing [optional]
88
+
89
+ [More Information Needed]
90
+
91
+
92
+ #### Training Hyperparameters
93
+
94
+ - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
95
+
96
+ #### Speeds, Sizes, Times [optional]
97
+
98
+ <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
99
+
100
+ [More Information Needed]
101
+
102
+ ## Evaluation
103
+
104
+ <!-- This section describes the evaluation protocols and provides the results. -->
105
+
106
+ ### Testing Data, Factors & Metrics
107
+
108
+ #### Testing Data
109
+
110
+ <!-- This should link to a Data Card if possible. -->
111
+
112
+ [More Information Needed]
113
+
114
+ #### Factors
115
+
116
+ <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
117
+
118
+ [More Information Needed]
119
+
120
+ #### Metrics
121
+
122
+ <!-- These are the evaluation metrics being used, ideally with a description of why. -->
123
+
124
+ [More Information Needed]
125
+
126
+ ### Results
127
+
128
+ [More Information Needed]
129
+
130
+ #### Summary
131
+
132
+
133
+
134
+ ## Model Examination [optional]
135
+
136
+ <!-- Relevant interpretability work for the model goes here -->
137
+
138
+ [More Information Needed]
139
+
140
+ ## Environmental Impact
141
+
142
+ <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
143
+
144
+ Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
145
+
146
+ - **Hardware Type:** [More Information Needed]
147
+ - **Hours used:** [More Information Needed]
148
+ - **Cloud Provider:** [More Information Needed]
149
+ - **Compute Region:** [More Information Needed]
150
+ - **Carbon Emitted:** [More Information Needed]
151
+
152
+ ## Technical Specifications [optional]
153
+
154
+ ### Model Architecture and Objective
155
+
156
+ [More Information Needed]
157
+
158
+ ### Compute Infrastructure
159
+
160
+ [More Information Needed]
161
+
162
+ #### Hardware
163
+
164
+ [More Information Needed]
165
+
166
+ #### Software
167
+
168
+ [More Information Needed]
169
+
170
+ ## Citation [optional]
171
+
172
+ <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
173
+
174
+ **BibTeX:**
175
+
176
+ [More Information Needed]
177
+
178
+ **APA:**
179
+
180
+ [More Information Needed]
181
+
182
+ ## Glossary [optional]
183
+
184
+ <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
185
+
186
+ [More Information Needed]
187
+
188
+ ## More Information [optional]
189
+
190
+ [More Information Needed]
191
+
192
+ ## Model Card Authors [optional]
193
+
194
+ [More Information Needed]
195
+
196
+ ## Model Card Contact
197
+
198
+ [More Information Needed]
199
+
200
+
201
+ ## Training procedure
202
+
203
+
204
+ The following `bitsandbytes` quantization config was used during training:
205
+ - quant_method: bitsandbytes
206
+ - load_in_8bit: False
207
+ - load_in_4bit: True
208
+ - llm_int8_threshold: 6.0
209
+ - llm_int8_skip_modules: None
210
+ - llm_int8_enable_fp32_cpu_offload: False
211
+ - llm_int8_has_fp16_weight: False
212
+ - bnb_4bit_quant_type: nf4
213
+ - bnb_4bit_use_double_quant: True
214
+ - bnb_4bit_compute_dtype: bfloat16
215
+
216
+ ### Framework versions
217
+
218
+
219
+ - PEFT 0.6.0
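
The `bitsandbytes` block above maps one-to-one onto a `transformers` `BitsAndBytesConfig`. As a minimal sketch of reloading the base model with the same quantization settings (assuming a CUDA machine with `bitsandbytes` installed; variable names are illustrative, not from this repo):

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Reconstruct the quantization settings recorded in the card above.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

# The card's base_model field points at mistralai/Mistral-7B-v0.1.
base_model = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-v0.1",
    quantization_config=bnb_config,
    device_map="auto",
)
```

NF4 with double quantization stores the frozen base weights in 4 bits while matrix multiplies run in bfloat16, per the `bnb_4bit_compute_dtype` recorded above.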
checkpoint-162/adapter_config.json ADDED
@@ -0,0 +1,28 @@
1
+ {
2
+ "alpha_pattern": {},
3
+ "auto_mapping": null,
4
+ "base_model_name_or_path": "mistralai/Mistral-7B-v0.1",
5
+ "bias": "none",
6
+ "fan_in_fan_out": null,
7
+ "inference_mode": true,
8
+ "init_lora_weights": true,
9
+ "layers_pattern": null,
10
+ "layers_to_transform": null,
11
+ "lora_alpha": 16,
12
+ "lora_dropout": 0.05,
13
+ "modules_to_save": null,
14
+ "peft_type": "LORA",
15
+ "r": 32,
16
+ "rank_pattern": {},
17
+ "revision": null,
18
+ "target_modules": [
19
+ "v_proj",
20
+ "q_proj",
21
+ "k_proj",
22
+ "down_proj",
23
+ "up_proj",
24
+ "gate_proj",
25
+ "o_proj"
26
+ ],
27
+ "task_type": "CAUSAL_LM"
28
+ }
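
The adapter config describes a rank-32 LoRA (`lora_alpha` 16, dropout 0.05) applied to every attention and MLP projection of the base model. A minimal sketch of attaching this checkpoint's adapter to the quantized base model from the previous snippet — the local path is illustrative and stands for whatever directory holds `adapter_config.json` and `adapter_model.safetensors`:

```python
from peft import PeftModel

# `base_model` is the 4-bit Mistral-7B-v0.1 instance loaded above.
model = PeftModel.from_pretrained(base_model, "checkpoint-162")
model.eval()  # inference_mode is true in the saved config
```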
checkpoint-162/adapter_model.safetensors ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:c0c0980d9bb0cc45f8403eacaf927b7f735caa35a3744b4c01edcc562a456af9
3
+ size 167832688
checkpoint-162/global_step162/bf16_zero_pp_rank_0_mp_rank_00_optim_states.pt ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:26c63973e85487c95edcd68d9230c96ed05c2fb0981d954826794a7c48b9e92c
3
+ size 503344023
checkpoint-162/global_step162/bf16_zero_pp_rank_1_mp_rank_00_optim_states.pt ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:7d3f63d887830afebb2803000364ba07dbb02f8aba1864b2553b7c25920677d3
3
+ size 503344151
checkpoint-162/global_step162/mp_rank_00_model_states.pt ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:220743990377680758123b9b63c46f96b761a6a8af5610b7dac832d66f8d4cf3
3
+ size 8197288999
checkpoint-162/latest ADDED
@@ -0,0 +1 @@
1
+ global_step162
checkpoint-162/rng_state_0.pth ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:f2d897738d12eb40a539b2be496aba2897fb5e8fb0b21a74a3b7ec8a4cbae6e0
3
+ size 15607
checkpoint-162/rng_state_1.pth ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:8c32cb489eb83cc4efe902da792ac03510d194b3e6f8c79f1aa80cf534dc0942
3
+ size 15607
checkpoint-162/trainer_state.json ADDED
@@ -0,0 +1,1063 @@
1
+ {
2
+ "best_metric": null,
3
+ "best_model_checkpoint": null,
4
+ "epoch": 2.0253164556962027,
5
+ "eval_steps": 20,
6
+ "global_step": 162,
7
+ "is_hyper_param_search": false,
8
+ "is_local_process_zero": true,
9
+ "is_world_process_zero": true,
10
+ "log_history": [
11
+ {
12
+ "epoch": 0.01,
13
+ "learning_rate": 0.0,
14
+ "loss": 0.6625,
15
+ "step": 1
16
+ },
17
+ {
18
+ "epoch": 0.01,
19
+ "eval_loss": 0.9669284820556641,
20
+ "eval_runtime": 22.2599,
21
+ "eval_samples_per_second": 11.231,
22
+ "eval_steps_per_second": 2.83,
23
+ "step": 1
24
+ },
25
+ {
26
+ "epoch": 0.03,
27
+ "learning_rate": 6.666666e-07,
28
+ "loss": 0.7012,
29
+ "step": 2
30
+ },
31
+ {
32
+ "epoch": 0.04,
33
+ "learning_rate": 1.3333332e-06,
34
+ "loss": 0.6954,
35
+ "step": 3
36
+ },
37
+ {
38
+ "epoch": 0.05,
39
+ "learning_rate": 1.9999997999999996e-06,
40
+ "loss": 0.6383,
41
+ "step": 4
42
+ },
43
+ {
44
+ "epoch": 0.06,
45
+ "learning_rate": 2.6666664e-06,
46
+ "loss": 0.6993,
47
+ "step": 5
48
+ },
49
+ {
50
+ "epoch": 0.08,
51
+ "learning_rate": 3.3333329999999998e-06,
52
+ "loss": 0.6388,
53
+ "step": 6
54
+ },
55
+ {
56
+ "epoch": 0.09,
57
+ "learning_rate": 3.999999599999999e-06,
58
+ "loss": 0.6847,
59
+ "step": 7
60
+ },
61
+ {
62
+ "epoch": 0.1,
63
+ "learning_rate": 4.6666662e-06,
64
+ "loss": 0.6973,
65
+ "step": 8
66
+ },
67
+ {
68
+ "epoch": 0.11,
69
+ "learning_rate": 5.3333328e-06,
70
+ "loss": 0.703,
71
+ "step": 9
72
+ },
73
+ {
74
+ "epoch": 0.13,
75
+ "learning_rate": 5.999999399999999e-06,
76
+ "loss": 0.6826,
77
+ "step": 10
78
+ },
79
+ {
80
+ "epoch": 0.14,
81
+ "learning_rate": 6.6666659999999995e-06,
82
+ "loss": 0.6939,
83
+ "step": 11
84
+ },
85
+ {
86
+ "epoch": 0.15,
87
+ "learning_rate": 7.333332599999999e-06,
88
+ "loss": 0.6879,
89
+ "step": 12
90
+ },
91
+ {
92
+ "epoch": 0.16,
93
+ "learning_rate": 7.999999199999998e-06,
94
+ "loss": 0.687,
95
+ "step": 13
96
+ },
97
+ {
98
+ "epoch": 0.18,
99
+ "learning_rate": 8.6666658e-06,
100
+ "loss": 0.6471,
101
+ "step": 14
102
+ },
103
+ {
104
+ "epoch": 0.19,
105
+ "learning_rate": 9.3333324e-06,
106
+ "loss": 0.6658,
107
+ "step": 15
108
+ },
109
+ {
110
+ "epoch": 0.2,
111
+ "learning_rate": 9.999998999999998e-06,
112
+ "loss": 0.6669,
113
+ "step": 16
114
+ },
115
+ {
116
+ "epoch": 0.22,
117
+ "learning_rate": 1.06666656e-05,
118
+ "loss": 0.6451,
119
+ "step": 17
120
+ },
121
+ {
122
+ "epoch": 0.23,
123
+ "learning_rate": 1.13333322e-05,
124
+ "loss": 0.6558,
125
+ "step": 18
126
+ },
127
+ {
128
+ "epoch": 0.24,
129
+ "learning_rate": 1.1999998799999998e-05,
130
+ "loss": 0.6714,
131
+ "step": 19
132
+ },
133
+ {
134
+ "epoch": 0.25,
135
+ "learning_rate": 1.26666654e-05,
136
+ "loss": 0.7123,
137
+ "step": 20
138
+ },
139
+ {
140
+ "epoch": 0.25,
141
+ "eval_loss": 0.9579811096191406,
142
+ "eval_runtime": 22.7924,
143
+ "eval_samples_per_second": 10.969,
144
+ "eval_steps_per_second": 2.764,
145
+ "step": 20
146
+ },
147
+ {
148
+ "epoch": 0.27,
149
+ "learning_rate": 1.3333331999999999e-05,
150
+ "loss": 0.6388,
151
+ "step": 21
152
+ },
153
+ {
154
+ "epoch": 0.28,
155
+ "learning_rate": 1.3999998599999999e-05,
156
+ "loss": 0.6355,
157
+ "step": 22
158
+ },
159
+ {
160
+ "epoch": 0.29,
161
+ "learning_rate": 1.4666665199999999e-05,
162
+ "loss": 0.6237,
163
+ "step": 23
164
+ },
165
+ {
166
+ "epoch": 0.3,
167
+ "learning_rate": 1.53333318e-05,
168
+ "loss": 0.6214,
169
+ "step": 24
170
+ },
171
+ {
172
+ "epoch": 0.32,
173
+ "learning_rate": 1.5999998399999997e-05,
174
+ "loss": 0.6045,
175
+ "step": 25
176
+ },
177
+ {
178
+ "epoch": 0.33,
179
+ "learning_rate": 1.6666665e-05,
180
+ "loss": 0.671,
181
+ "step": 26
182
+ },
183
+ {
184
+ "epoch": 0.34,
185
+ "learning_rate": 1.73333316e-05,
186
+ "loss": 0.6042,
187
+ "step": 27
188
+ },
189
+ {
190
+ "epoch": 0.35,
191
+ "learning_rate": 1.7999998199999998e-05,
192
+ "loss": 0.6722,
193
+ "step": 28
194
+ },
195
+ {
196
+ "epoch": 0.37,
197
+ "learning_rate": 1.86666648e-05,
198
+ "loss": 0.6132,
199
+ "step": 29
200
+ },
201
+ {
202
+ "epoch": 0.38,
203
+ "learning_rate": 1.9333331399999998e-05,
204
+ "loss": 0.5651,
205
+ "step": 30
206
+ },
207
+ {
208
+ "epoch": 0.39,
209
+ "learning_rate": 1.9999997999999996e-05,
210
+ "loss": 0.6525,
211
+ "step": 31
212
+ },
213
+ {
214
+ "epoch": 0.41,
215
+ "learning_rate": 2.0666664599999998e-05,
216
+ "loss": 0.6532,
217
+ "step": 32
218
+ },
219
+ {
220
+ "epoch": 0.42,
221
+ "learning_rate": 2.13333312e-05,
222
+ "loss": 0.6167,
223
+ "step": 33
224
+ },
225
+ {
226
+ "epoch": 0.43,
227
+ "learning_rate": 2.1999997799999997e-05,
228
+ "loss": 0.607,
229
+ "step": 34
230
+ },
231
+ {
232
+ "epoch": 0.44,
233
+ "learning_rate": 2.26666644e-05,
234
+ "loss": 0.5843,
235
+ "step": 35
236
+ },
237
+ {
238
+ "epoch": 0.46,
239
+ "learning_rate": 2.3333330999999997e-05,
240
+ "loss": 0.6093,
241
+ "step": 36
242
+ },
243
+ {
244
+ "epoch": 0.47,
245
+ "learning_rate": 2.3999997599999995e-05,
246
+ "loss": 0.5967,
247
+ "step": 37
248
+ },
249
+ {
250
+ "epoch": 0.48,
251
+ "learning_rate": 2.4666664199999997e-05,
252
+ "loss": 0.6015,
253
+ "step": 38
254
+ },
255
+ {
256
+ "epoch": 0.49,
257
+ "learning_rate": 2.53333308e-05,
258
+ "loss": 0.6263,
259
+ "step": 39
260
+ },
261
+ {
262
+ "epoch": 0.51,
263
+ "learning_rate": 2.59999974e-05,
264
+ "loss": 0.5923,
265
+ "step": 40
266
+ },
267
+ {
268
+ "epoch": 0.51,
269
+ "eval_loss": 0.9467829465866089,
270
+ "eval_runtime": 22.6276,
271
+ "eval_samples_per_second": 11.048,
272
+ "eval_steps_per_second": 2.784,
273
+ "step": 40
274
+ },
275
+ {
276
+ "epoch": 0.52,
277
+ "learning_rate": 2.6666663999999998e-05,
278
+ "loss": 0.5801,
279
+ "step": 41
280
+ },
281
+ {
282
+ "epoch": 0.53,
283
+ "learning_rate": 2.7333330599999996e-05,
284
+ "loss": 0.576,
285
+ "step": 42
286
+ },
287
+ {
288
+ "epoch": 0.54,
289
+ "learning_rate": 2.7999997199999998e-05,
290
+ "loss": 0.6317,
291
+ "step": 43
292
+ },
293
+ {
294
+ "epoch": 0.56,
295
+ "learning_rate": 2.8666663799999996e-05,
296
+ "loss": 0.5874,
297
+ "step": 44
298
+ },
299
+ {
300
+ "epoch": 0.57,
301
+ "learning_rate": 2.9333330399999998e-05,
302
+ "loss": 0.6109,
303
+ "step": 45
304
+ },
305
+ {
306
+ "epoch": 0.58,
307
+ "learning_rate": 2.9999997e-05,
308
+ "loss": 0.5274,
309
+ "step": 46
310
+ },
311
+ {
312
+ "epoch": 0.59,
313
+ "learning_rate": 3.06666636e-05,
314
+ "loss": 0.5698,
315
+ "step": 47
316
+ },
317
+ {
318
+ "epoch": 0.61,
319
+ "learning_rate": 3.133333019999999e-05,
320
+ "loss": 0.544,
321
+ "step": 48
322
+ },
323
+ {
324
+ "epoch": 0.62,
325
+ "learning_rate": 3.1999996799999994e-05,
326
+ "loss": 0.5586,
327
+ "step": 49
328
+ },
329
+ {
330
+ "epoch": 0.63,
331
+ "learning_rate": 3.2666663399999995e-05,
332
+ "loss": 0.5666,
333
+ "step": 50
334
+ },
335
+ {
336
+ "epoch": 0.65,
337
+ "learning_rate": 3.333333e-05,
338
+ "loss": 0.586,
339
+ "step": 51
340
+ },
341
+ {
342
+ "epoch": 0.66,
343
+ "learning_rate": 3.39999966e-05,
344
+ "loss": 0.6225,
345
+ "step": 52
346
+ },
347
+ {
348
+ "epoch": 0.67,
349
+ "learning_rate": 3.46666632e-05,
350
+ "loss": 0.5588,
351
+ "step": 53
352
+ },
353
+ {
354
+ "epoch": 0.68,
355
+ "learning_rate": 3.53333298e-05,
356
+ "loss": 0.5705,
357
+ "step": 54
358
+ },
359
+ {
360
+ "epoch": 0.7,
361
+ "learning_rate": 3.5999996399999996e-05,
362
+ "loss": 0.5628,
363
+ "step": 55
364
+ },
365
+ {
366
+ "epoch": 0.71,
367
+ "learning_rate": 3.6666663e-05,
368
+ "loss": 0.5868,
369
+ "step": 56
370
+ },
371
+ {
372
+ "epoch": 0.72,
373
+ "learning_rate": 3.73333296e-05,
374
+ "loss": 0.5583,
375
+ "step": 57
376
+ },
377
+ {
378
+ "epoch": 0.73,
379
+ "learning_rate": 3.7999996199999994e-05,
380
+ "loss": 0.5858,
381
+ "step": 58
382
+ },
383
+ {
384
+ "epoch": 0.75,
385
+ "learning_rate": 3.8666662799999996e-05,
386
+ "loss": 0.5756,
387
+ "step": 59
388
+ },
389
+ {
390
+ "epoch": 0.76,
391
+ "learning_rate": 3.93333294e-05,
392
+ "loss": 0.5822,
393
+ "step": 60
394
+ },
395
+ {
396
+ "epoch": 0.76,
397
+ "eval_loss": 0.9389348030090332,
398
+ "eval_runtime": 22.5892,
399
+ "eval_samples_per_second": 11.067,
400
+ "eval_steps_per_second": 2.789,
401
+ "step": 60
402
+ },
403
+ {
404
+ "epoch": 0.77,
405
+ "learning_rate": 3.999999599999999e-05,
406
+ "loss": 0.5667,
407
+ "step": 61
408
+ },
409
+ {
410
+ "epoch": 0.78,
411
+ "learning_rate": 4.0666662599999994e-05,
412
+ "loss": 0.5616,
413
+ "step": 62
414
+ },
415
+ {
416
+ "epoch": 0.8,
417
+ "learning_rate": 4.1333329199999995e-05,
418
+ "loss": 0.5678,
419
+ "step": 63
420
+ },
421
+ {
422
+ "epoch": 0.81,
423
+ "learning_rate": 4.19999958e-05,
424
+ "loss": 0.5514,
425
+ "step": 64
426
+ },
427
+ {
428
+ "epoch": 0.82,
429
+ "learning_rate": 4.26666624e-05,
430
+ "loss": 0.5732,
431
+ "step": 65
432
+ },
433
+ {
434
+ "epoch": 0.84,
435
+ "learning_rate": 4.3333329e-05,
436
+ "loss": 0.5794,
437
+ "step": 66
438
+ },
439
+ {
440
+ "epoch": 0.85,
441
+ "learning_rate": 4.3999995599999995e-05,
442
+ "loss": 0.5582,
443
+ "step": 67
444
+ },
445
+ {
446
+ "epoch": 0.86,
447
+ "learning_rate": 4.4666662199999996e-05,
448
+ "loss": 0.5575,
449
+ "step": 68
450
+ },
451
+ {
452
+ "epoch": 0.87,
453
+ "learning_rate": 4.53333288e-05,
454
+ "loss": 0.5558,
455
+ "step": 69
456
+ },
457
+ {
458
+ "epoch": 0.89,
459
+ "learning_rate": 4.599999539999999e-05,
460
+ "loss": 0.5707,
461
+ "step": 70
462
+ },
463
+ {
464
+ "epoch": 0.9,
465
+ "learning_rate": 4.6666661999999994e-05,
466
+ "loss": 0.54,
467
+ "step": 71
468
+ },
469
+ {
470
+ "epoch": 0.91,
471
+ "learning_rate": 4.7333328599999996e-05,
472
+ "loss": 0.5435,
473
+ "step": 72
474
+ },
475
+ {
476
+ "epoch": 0.92,
477
+ "learning_rate": 4.799999519999999e-05,
478
+ "loss": 0.5649,
479
+ "step": 73
480
+ },
481
+ {
482
+ "epoch": 0.94,
483
+ "learning_rate": 4.866666179999999e-05,
484
+ "loss": 0.5617,
485
+ "step": 74
486
+ },
487
+ {
488
+ "epoch": 0.95,
489
+ "learning_rate": 4.9333328399999994e-05,
490
+ "loss": 0.5651,
491
+ "step": 75
492
+ },
493
+ {
494
+ "epoch": 0.96,
495
+ "learning_rate": 4.9999994999999995e-05,
496
+ "loss": 0.5113,
497
+ "step": 76
498
+ },
499
+ {
500
+ "epoch": 0.97,
501
+ "learning_rate": 5.06666616e-05,
502
+ "loss": 0.5389,
503
+ "step": 77
504
+ },
505
+ {
506
+ "epoch": 0.99,
507
+ "learning_rate": 5.13333282e-05,
508
+ "loss": 0.5162,
509
+ "step": 78
510
+ },
511
+ {
512
+ "epoch": 1.0,
513
+ "learning_rate": 5.19999948e-05,
514
+ "loss": 0.5397,
515
+ "step": 79
516
+ },
517
+ {
518
+ "epoch": 1.01,
519
+ "learning_rate": 5.2666661399999995e-05,
520
+ "loss": 0.5794,
521
+ "step": 80
522
+ },
523
+ {
524
+ "epoch": 1.01,
525
+ "eval_loss": 0.9314417839050293,
526
+ "eval_runtime": 22.7753,
527
+ "eval_samples_per_second": 10.977,
528
+ "eval_steps_per_second": 2.766,
529
+ "step": 80
530
+ },
531
+ {
532
+ "epoch": 1.03,
533
+ "learning_rate": 5.3333327999999996e-05,
534
+ "loss": 0.5263,
535
+ "step": 81
536
+ },
537
+ {
538
+ "epoch": 1.01,
539
+ "learning_rate": 5.39999946e-05,
540
+ "loss": 0.5141,
541
+ "step": 82
542
+ },
543
+ {
544
+ "epoch": 1.03,
545
+ "learning_rate": 5.466666119999999e-05,
546
+ "loss": 0.5573,
547
+ "step": 83
548
+ },
549
+ {
550
+ "epoch": 1.04,
551
+ "learning_rate": 5.5333327799999994e-05,
552
+ "loss": 0.5629,
553
+ "step": 84
554
+ },
555
+ {
556
+ "epoch": 1.05,
557
+ "learning_rate": 5.5999994399999996e-05,
558
+ "loss": 0.5043,
559
+ "step": 85
560
+ },
561
+ {
562
+ "epoch": 1.06,
563
+ "learning_rate": 5.666666099999999e-05,
564
+ "loss": 0.562,
565
+ "step": 86
566
+ },
567
+ {
568
+ "epoch": 1.08,
569
+ "learning_rate": 5.733332759999999e-05,
570
+ "loss": 0.4978,
571
+ "step": 87
572
+ },
573
+ {
574
+ "epoch": 1.09,
575
+ "learning_rate": 5.7999994199999994e-05,
576
+ "loss": 0.5502,
577
+ "step": 88
578
+ },
579
+ {
580
+ "epoch": 1.1,
581
+ "learning_rate": 5.8666660799999995e-05,
582
+ "loss": 0.5499,
583
+ "step": 89
584
+ },
585
+ {
586
+ "epoch": 1.11,
587
+ "learning_rate": 5.93333274e-05,
588
+ "loss": 0.5547,
589
+ "step": 90
590
+ },
591
+ {
592
+ "epoch": 1.13,
593
+ "learning_rate": 5.9999994e-05,
594
+ "loss": 0.5452,
595
+ "step": 91
596
+ },
597
+ {
598
+ "epoch": 1.14,
599
+ "learning_rate": 6.066666059999999e-05,
600
+ "loss": 0.5544,
601
+ "step": 92
602
+ },
603
+ {
604
+ "epoch": 1.15,
605
+ "learning_rate": 6.13333272e-05,
606
+ "loss": 0.5483,
607
+ "step": 93
608
+ },
609
+ {
610
+ "epoch": 1.16,
611
+ "learning_rate": 6.19999938e-05,
612
+ "loss": 0.5641,
613
+ "step": 94
614
+ },
615
+ {
616
+ "epoch": 1.18,
617
+ "learning_rate": 6.266666039999998e-05,
618
+ "loss": 0.5316,
619
+ "step": 95
620
+ },
621
+ {
622
+ "epoch": 1.19,
623
+ "learning_rate": 6.333332699999999e-05,
624
+ "loss": 0.526,
625
+ "step": 96
626
+ },
627
+ {
628
+ "epoch": 1.2,
629
+ "learning_rate": 6.399999359999999e-05,
630
+ "loss": 0.5443,
631
+ "step": 97
632
+ },
633
+ {
634
+ "epoch": 1.22,
635
+ "learning_rate": 6.46666602e-05,
636
+ "loss": 0.5111,
637
+ "step": 98
638
+ },
639
+ {
640
+ "epoch": 1.23,
641
+ "learning_rate": 6.533332679999999e-05,
642
+ "loss": 0.5298,
643
+ "step": 99
644
+ },
645
+ {
646
+ "epoch": 1.24,
647
+ "learning_rate": 6.59999934e-05,
648
+ "loss": 0.5431,
649
+ "step": 100
650
+ },
651
+ {
652
+ "epoch": 1.24,
653
+ "eval_loss": 0.9199196100234985,
654
+ "eval_runtime": 22.8496,
655
+ "eval_samples_per_second": 10.941,
656
+ "eval_steps_per_second": 2.757,
657
+ "step": 100
658
+ },
659
+ {
660
+ "epoch": 1.25,
661
+ "learning_rate": 6.666666e-05,
662
+ "loss": 0.585,
663
+ "step": 101
664
+ },
665
+ {
666
+ "epoch": 1.27,
667
+ "learning_rate": 6.64406713220339e-05,
668
+ "loss": 0.5143,
669
+ "step": 102
670
+ },
671
+ {
672
+ "epoch": 1.28,
673
+ "learning_rate": 6.621468264406779e-05,
674
+ "loss": 0.5061,
675
+ "step": 103
676
+ },
677
+ {
678
+ "epoch": 1.29,
679
+ "learning_rate": 6.59886939661017e-05,
680
+ "loss": 0.5098,
681
+ "step": 104
682
+ },
683
+ {
684
+ "epoch": 1.3,
685
+ "learning_rate": 6.576270528813559e-05,
686
+ "loss": 0.5188,
687
+ "step": 105
688
+ },
689
+ {
690
+ "epoch": 1.32,
691
+ "learning_rate": 6.553671661016948e-05,
692
+ "loss": 0.5005,
693
+ "step": 106
694
+ },
695
+ {
696
+ "epoch": 1.33,
697
+ "learning_rate": 6.531072793220339e-05,
698
+ "loss": 0.5709,
699
+ "step": 107
700
+ },
701
+ {
702
+ "epoch": 1.34,
703
+ "learning_rate": 6.508473925423728e-05,
704
+ "loss": 0.4985,
705
+ "step": 108
706
+ },
707
+ {
708
+ "epoch": 1.35,
709
+ "learning_rate": 6.485875057627117e-05,
710
+ "loss": 0.5767,
711
+ "step": 109
712
+ },
713
+ {
714
+ "epoch": 1.37,
715
+ "learning_rate": 6.463276189830508e-05,
716
+ "loss": 0.517,
717
+ "step": 110
718
+ },
719
+ {
720
+ "epoch": 1.38,
721
+ "learning_rate": 6.440677322033897e-05,
722
+ "loss": 0.4759,
723
+ "step": 111
724
+ },
725
+ {
726
+ "epoch": 1.39,
727
+ "learning_rate": 6.418078454237288e-05,
728
+ "loss": 0.5536,
729
+ "step": 112
730
+ },
731
+ {
732
+ "epoch": 1.41,
733
+ "learning_rate": 6.395479586440677e-05,
734
+ "loss": 0.5657,
735
+ "step": 113
736
+ },
737
+ {
738
+ "epoch": 1.42,
739
+ "learning_rate": 6.372880718644068e-05,
740
+ "loss": 0.525,
741
+ "step": 114
742
+ },
743
+ {
744
+ "epoch": 1.43,
745
+ "learning_rate": 6.350281850847457e-05,
746
+ "loss": 0.5113,
747
+ "step": 115
748
+ },
749
+ {
750
+ "epoch": 1.44,
751
+ "learning_rate": 6.327682983050848e-05,
752
+ "loss": 0.4869,
753
+ "step": 116
754
+ },
755
+ {
756
+ "epoch": 1.46,
757
+ "learning_rate": 6.305084115254237e-05,
758
+ "loss": 0.5074,
759
+ "step": 117
760
+ },
761
+ {
762
+ "epoch": 1.47,
763
+ "learning_rate": 6.282485247457626e-05,
764
+ "loss": 0.5043,
765
+ "step": 118
766
+ },
767
+ {
768
+ "epoch": 1.48,
769
+ "learning_rate": 6.259886379661017e-05,
770
+ "loss": 0.5116,
771
+ "step": 119
772
+ },
773
+ {
774
+ "epoch": 1.49,
775
+ "learning_rate": 6.237287511864406e-05,
776
+ "loss": 0.5317,
777
+ "step": 120
778
+ },
779
+ {
780
+ "epoch": 1.49,
781
+ "eval_loss": 0.9122854471206665,
782
+ "eval_runtime": 22.8969,
783
+ "eval_samples_per_second": 10.919,
784
+ "eval_steps_per_second": 2.751,
785
+ "step": 120
786
+ },
787
+ {
788
+ "epoch": 1.51,
789
+ "learning_rate": 6.214688644067795e-05,
790
+ "loss": 0.5049,
791
+ "step": 121
792
+ },
793
+ {
794
+ "epoch": 1.52,
795
+ "learning_rate": 6.192089776271186e-05,
796
+ "loss": 0.496,
797
+ "step": 122
798
+ },
799
+ {
800
+ "epoch": 1.53,
801
+ "learning_rate": 6.169490908474575e-05,
802
+ "loss": 0.4951,
803
+ "step": 123
804
+ },
805
+ {
806
+ "epoch": 1.54,
807
+ "learning_rate": 6.146892040677966e-05,
808
+ "loss": 0.5489,
809
+ "step": 124
810
+ },
811
+ {
812
+ "epoch": 1.56,
813
+ "learning_rate": 6.124293172881355e-05,
814
+ "loss": 0.5071,
815
+ "step": 125
816
+ },
817
+ {
818
+ "epoch": 1.57,
819
+ "learning_rate": 6.1016943050847455e-05,
820
+ "loss": 0.5312,
821
+ "step": 126
822
+ },
823
+ {
824
+ "epoch": 1.58,
825
+ "learning_rate": 6.079095437288135e-05,
826
+ "loss": 0.4442,
827
+ "step": 127
828
+ },
829
+ {
830
+ "epoch": 1.59,
831
+ "learning_rate": 6.056496569491525e-05,
832
+ "loss": 0.4944,
833
+ "step": 128
834
+ },
835
+ {
836
+ "epoch": 1.61,
837
+ "learning_rate": 6.033897701694915e-05,
838
+ "loss": 0.4615,
839
+ "step": 129
840
+ },
841
+ {
842
+ "epoch": 1.62,
843
+ "learning_rate": 6.0112988338983045e-05,
844
+ "loss": 0.4855,
845
+ "step": 130
846
+ },
847
+ {
848
+ "epoch": 1.63,
849
+ "learning_rate": 5.9886999661016945e-05,
850
+ "loss": 0.4862,
851
+ "step": 131
852
+ },
853
+ {
854
+ "epoch": 1.65,
855
+ "learning_rate": 5.9661010983050844e-05,
856
+ "loss": 0.521,
857
+ "step": 132
858
+ },
859
+ {
860
+ "epoch": 1.66,
861
+ "learning_rate": 5.943502230508474e-05,
862
+ "loss": 0.5537,
863
+ "step": 133
864
+ },
865
+ {
866
+ "epoch": 1.67,
867
+ "learning_rate": 5.9209033627118636e-05,
868
+ "loss": 0.4828,
869
+ "step": 134
870
+ },
871
+ {
872
+ "epoch": 1.68,
873
+ "learning_rate": 5.898304494915254e-05,
874
+ "loss": 0.4932,
875
+ "step": 135
876
+ },
877
+ {
878
+ "epoch": 1.7,
879
+ "learning_rate": 5.8757056271186434e-05,
880
+ "loss": 0.4909,
881
+ "step": 136
882
+ },
883
+ {
884
+ "epoch": 1.71,
885
+ "learning_rate": 5.8531067593220333e-05,
886
+ "loss": 0.5211,
887
+ "step": 137
888
+ },
889
+ {
890
+ "epoch": 1.72,
891
+ "learning_rate": 5.830507891525423e-05,
892
+ "loss": 0.4851,
893
+ "step": 138
894
+ },
895
+ {
896
+ "epoch": 1.73,
897
+ "learning_rate": 5.807909023728813e-05,
898
+ "loss": 0.5172,
899
+ "step": 139
900
+ },
901
+ {
902
+ "epoch": 1.75,
903
+ "learning_rate": 5.7853101559322024e-05,
904
+ "loss": 0.5038,
905
+ "step": 140
906
+ },
907
+ {
908
+ "epoch": 1.75,
909
+ "eval_loss": 0.9127333760261536,
910
+ "eval_runtime": 22.9168,
911
+ "eval_samples_per_second": 10.909,
912
+ "eval_steps_per_second": 2.749,
913
+ "step": 140
914
+ },
915
+ {
916
+ "epoch": 1.76,
917
+ "learning_rate": 5.762711288135593e-05,
918
+ "loss": 0.5116,
919
+ "step": 141
920
+ },
921
+ {
922
+ "epoch": 1.77,
923
+ "learning_rate": 5.740112420338982e-05,
924
+ "loss": 0.5009,
925
+ "step": 142
926
+ },
927
+ {
928
+ "epoch": 1.78,
929
+ "learning_rate": 5.717513552542372e-05,
930
+ "loss": 0.4835,
931
+ "step": 143
932
+ },
933
+ {
934
+ "epoch": 1.8,
935
+ "learning_rate": 5.694914684745762e-05,
936
+ "loss": 0.4917,
937
+ "step": 144
938
+ },
939
+ {
940
+ "epoch": 1.81,
941
+ "learning_rate": 5.672315816949152e-05,
942
+ "loss": 0.4873,
943
+ "step": 145
944
+ },
945
+ {
946
+ "epoch": 1.82,
947
+ "learning_rate": 5.649716949152541e-05,
948
+ "loss": 0.5094,
949
+ "step": 146
950
+ },
951
+ {
952
+ "epoch": 1.84,
953
+ "learning_rate": 5.627118081355932e-05,
954
+ "loss": 0.5193,
955
+ "step": 147
956
+ },
957
+ {
958
+ "epoch": 1.85,
959
+ "learning_rate": 5.604519213559321e-05,
960
+ "loss": 0.4921,
961
+ "step": 148
962
+ },
963
+ {
964
+ "epoch": 1.86,
965
+ "learning_rate": 5.581920345762711e-05,
966
+ "loss": 0.4978,
967
+ "step": 149
968
+ },
969
+ {
970
+ "epoch": 1.87,
971
+ "learning_rate": 5.559321477966101e-05,
972
+ "loss": 0.4901,
973
+ "step": 150
974
+ },
975
+ {
976
+ "epoch": 1.89,
977
+ "learning_rate": 5.536722610169491e-05,
978
+ "loss": 0.5042,
979
+ "step": 151
980
+ },
981
+ {
982
+ "epoch": 1.9,
983
+ "learning_rate": 5.51412374237288e-05,
984
+ "loss": 0.4838,
985
+ "step": 152
986
+ },
987
+ {
988
+ "epoch": 1.91,
989
+ "learning_rate": 5.491524874576271e-05,
990
+ "loss": 0.485,
991
+ "step": 153
992
+ },
993
+ {
994
+ "epoch": 1.92,
995
+ "learning_rate": 5.46892600677966e-05,
996
+ "loss": 0.4967,
997
+ "step": 154
998
+ },
999
+ {
1000
+ "epoch": 1.94,
1001
+ "learning_rate": 5.44632713898305e-05,
1002
+ "loss": 0.4943,
1003
+ "step": 155
1004
+ },
1005
+ {
1006
+ "epoch": 1.95,
1007
+ "learning_rate": 5.4237282711864406e-05,
1008
+ "loss": 0.4999,
1009
+ "step": 156
1010
+ },
1011
+ {
1012
+ "epoch": 1.96,
1013
+ "learning_rate": 5.40112940338983e-05,
1014
+ "loss": 0.4456,
1015
+ "step": 157
1016
+ },
1017
+ {
1018
+ "epoch": 1.97,
1019
+ "learning_rate": 5.3785305355932205e-05,
1020
+ "loss": 0.4787,
1021
+ "step": 158
1022
+ },
1023
+ {
1024
+ "epoch": 1.99,
1025
+ "learning_rate": 5.35593166779661e-05,
1026
+ "loss": 0.4594,
1027
+ "step": 159
1028
+ },
1029
+ {
1030
+ "epoch": 2.0,
1031
+ "learning_rate": 5.3333327999999996e-05,
1032
+ "loss": 0.4744,
1033
+ "step": 160
1034
+ },
1035
+ {
1036
+ "epoch": 2.0,
1037
+ "eval_loss": 0.9105737805366516,
1038
+ "eval_runtime": 22.9129,
1039
+ "eval_samples_per_second": 10.911,
1040
+ "eval_steps_per_second": 2.75,
1041
+ "step": 160
1042
+ },
1043
+ {
1044
+ "epoch": 2.01,
1045
+ "learning_rate": 5.3107339322033896e-05,
1046
+ "loss": 0.5174,
1047
+ "step": 161
1048
+ },
1049
+ {
1050
+ "epoch": 2.03,
1051
+ "learning_rate": 5.2881350644067795e-05,
1052
+ "loss": 0.46,
1053
+ "step": 162
1054
+ }
1055
+ ],
1056
+ "logging_steps": 1,
1057
+ "max_steps": 395,
1058
+ "num_train_epochs": 5,
1059
+ "save_steps": 500,
1060
+ "total_flos": 9.165994568020132e+17,
1061
+ "trial_name": null,
1062
+ "trial_params": null
1063
+ }
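
`log_history` above interleaves per-step training losses with an evaluation record every 20 steps (per `eval_steps`). A small sketch for extracting the eval-loss curve from this file, using the path as laid out in the checkpoint:

```python
import json

with open("checkpoint-162/trainer_state.json") as f:
    state = json.load(f)

# Evaluation entries are the log records that carry an "eval_loss" key.
for entry in state["log_history"]:
    if "eval_loss" in entry:
        print(f'step {entry["step"]:4d}: eval_loss {entry["eval_loss"]:.4f}')
```

Over the first two epochs the eval loss falls steadily, from about 0.967 at step 1 to about 0.911 at step 160.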
checkpoint-162/training_args.bin ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:973a78bb618179d478a634a5871239572b8d72704f5a768f05b104891449f5eb
3
+ size 6075
checkpoint-162/zero_to_fp32.py ADDED
@@ -0,0 +1,587 @@
1
+ #!/usr/bin/env python
2
+
3
+ # Copyright (c) Microsoft Corporation.
4
+ # SPDX-License-Identifier: Apache-2.0
5
+
6
+ # DeepSpeed Team
7
+
8
+ # This script extracts fp32 consolidated weights from ZeRO 1, 2 and 3 DeepSpeed checkpoints. It gets
9
+ # copied into the top level checkpoint dir, so the user can easily do the conversion at any point in
10
+ # the future. Once extracted, the weights don't require DeepSpeed and can be used in any
11
+ # application.
12
+ #
13
+ # example: python zero_to_fp32.py . pytorch_model.bin
14
+
15
+ import argparse
16
+ import torch
17
+ import glob
18
+ import math
19
+ import os
20
+ import re
21
+ from collections import OrderedDict
22
+ from dataclasses import dataclass
23
+
24
+ # While this script doesn't use DeepSpeed to recover data, the checkpoints are pickled with
25
+ # DeepSpeed data structures, so the package has to be available in the current Python environment.
26
+ from deepspeed.utils import logger
27
+ from deepspeed.checkpoint.constants import (DS_VERSION, OPTIMIZER_STATE_DICT, SINGLE_PARTITION_OF_FP32_GROUPS,
28
+ FP32_FLAT_GROUPS, ZERO_STAGE, PARTITION_COUNT, PARAM_SHAPES, BUFFER_NAMES,
29
+ FROZEN_PARAM_SHAPES, FROZEN_PARAM_FRAGMENTS)
30
+
31
+
32
+ @dataclass
33
+ class zero_model_state:
34
+ buffers: dict
35
+ param_shapes: dict
36
+ shared_params: list
37
+ ds_version: int
38
+ frozen_param_shapes: dict
39
+ frozen_param_fragments: dict
40
+
41
+
42
+ debug = 0
43
+
44
+ # load to cpu
45
+ device = torch.device('cpu')
46
+
47
+
48
+ def atoi(text):
49
+ return int(text) if text.isdigit() else text
50
+
51
+
52
+ def natural_keys(text):
53
+ '''
54
+ alist.sort(key=natural_keys) sorts in human order
55
+ http://nedbatchelder.com/blog/200712/human_sorting.html
56
+ (See Toothy's implementation in the comments)
57
+ '''
58
+ return [atoi(c) for c in re.split(r'(\d+)', text)]
59
+
60
+
61
+ def get_model_state_file(checkpoint_dir, zero_stage):
62
+ if not os.path.isdir(checkpoint_dir):
63
+ raise FileNotFoundError(f"Directory '{checkpoint_dir}' doesn't exist")
64
+
65
+ # there should be only one file
66
+ if zero_stage <= 2:
67
+ file = os.path.join(checkpoint_dir, "mp_rank_00_model_states.pt")
68
+ elif zero_stage == 3:
69
+ file = os.path.join(checkpoint_dir, "zero_pp_rank_0_mp_rank_00_model_states.pt")
70
+
71
+ if not os.path.exists(file):
72
+ raise FileNotFoundError(f"can't find model states file at '{file}'")
73
+
74
+ return file
75
+
76
+
77
+ def get_checkpoint_files(checkpoint_dir, glob_pattern):
78
+ # XXX: need to test that this simple glob rule works for multi-node setup too
79
+ ckpt_files = sorted(glob.glob(os.path.join(checkpoint_dir, glob_pattern)), key=natural_keys)
80
+
81
+ if len(ckpt_files) == 0:
82
+ raise FileNotFoundError(f"can't find {glob_pattern} files in directory '{checkpoint_dir}'")
83
+
84
+ return ckpt_files
85
+
86
+
87
+ def get_optim_files(checkpoint_dir):
88
+ return get_checkpoint_files(checkpoint_dir, "*_optim_states.pt")
89
+
90
+
91
+ def get_model_state_files(checkpoint_dir):
92
+ return get_checkpoint_files(checkpoint_dir, "*_model_states.pt")
93
+
94
+
95
+ def parse_model_states(files):
96
+ zero_model_states = []
97
+ for file in files:
98
+ state_dict = torch.load(file, map_location=device)
99
+
100
+ if BUFFER_NAMES not in state_dict:
101
+ raise ValueError(f"{file} is not a model state checkpoint")
102
+ buffer_names = state_dict[BUFFER_NAMES]
103
+ if debug:
104
+ print("Found buffers:", buffer_names)
105
+
106
+ # recover just the buffers while restoring them to fp32 if they were saved in fp16
107
+ buffers = {k: v.float() for k, v in state_dict["module"].items() if k in buffer_names}
108
+ param_shapes = state_dict[PARAM_SHAPES]
109
+
110
+ # collect parameters that are included in param_shapes
111
+ param_names = []
112
+ for s in param_shapes:
113
+ for name in s.keys():
114
+ param_names.append(name)
115
+
116
+ # update with frozen parameters
117
+ frozen_param_shapes = state_dict.get(FROZEN_PARAM_SHAPES, None)
118
+ if frozen_param_shapes is not None:
119
+ if debug:
120
+ print(f"Found frozen_param_shapes: {frozen_param_shapes}")
121
+ param_names += list(frozen_param_shapes.keys())
122
+
123
+ # handle shared params
124
+ shared_params = [[k, v] for k, v in state_dict["shared_params"].items()]
125
+
126
+ ds_version = state_dict.get(DS_VERSION, None)
127
+
128
+ frozen_param_fragments = state_dict.get(FROZEN_PARAM_FRAGMENTS, None)
129
+
130
+ z_model_state = zero_model_state(buffers=buffers,
131
+ param_shapes=param_shapes,
132
+ shared_params=shared_params,
133
+ ds_version=ds_version,
134
+ frozen_param_shapes=frozen_param_shapes,
135
+ frozen_param_fragments=frozen_param_fragments)
136
+ zero_model_states.append(z_model_state)
137
+
138
+ return zero_model_states
139
+
140
+
141
+ def parse_optim_states(files, ds_checkpoint_dir):
142
+
143
+ total_files = len(files)
144
+ state_dicts = []
145
+ for f in files:
146
+ state_dict = torch.load(f, map_location=device)
147
+ # immediately discard the potentially huge 2 optimizer states as we only care for fp32 master weights
148
+ # and also handle the case where it was already removed by another helper script
149
+ state_dict["optimizer_state_dict"].pop("optimizer_state_dict", None)
150
+ state_dicts.append(state_dict)
151
+
152
+ if not ZERO_STAGE in state_dicts[0][OPTIMIZER_STATE_DICT]:
153
+ raise ValueError(f"{files[0]} is not a zero checkpoint")
154
+ zero_stage = state_dicts[0][OPTIMIZER_STATE_DICT][ZERO_STAGE]
155
+ world_size = state_dicts[0][OPTIMIZER_STATE_DICT][PARTITION_COUNT]
156
+
157
+ # For ZeRO-2 each param group can have different partition_count as data parallelism for expert
158
+ # parameters can be different from data parallelism for non-expert parameters. So we can just
159
+ # use the max of the partition_count to get the dp world_size.
160
+
161
+ if type(world_size) is list:
162
+ world_size = max(world_size)
163
+
164
+ if world_size != total_files:
165
+ raise ValueError(
166
+ f"Expected {world_size} of '*_optim_states.pt' under '{ds_checkpoint_dir}' but found {total_files} files. "
167
+ "Possibly due to an overwrite of an old checkpoint, or a checkpoint didn't get saved by one or more processes."
168
+ )
169
+
170
+ # the groups are named differently in each stage
171
+ if zero_stage <= 2:
172
+ fp32_groups_key = SINGLE_PARTITION_OF_FP32_GROUPS
173
+ elif zero_stage == 3:
174
+ fp32_groups_key = FP32_FLAT_GROUPS
175
+ else:
176
+ raise ValueError(f"unknown zero stage {zero_stage}")
177
+
178
+ if zero_stage <= 2:
179
+ fp32_flat_groups = [state_dicts[i][OPTIMIZER_STATE_DICT][fp32_groups_key] for i in range(len(state_dicts))]
180
+ elif zero_stage == 3:
181
+ # if there is more than one param group, there will be multiple flattened tensors - one
182
+ # flattened tensor per group - for simplicity merge them into a single tensor
183
+ #
184
+ # XXX: could make the script more memory efficient for when there are multiple groups - it
185
+ # will require matching the sub-lists of param_shapes for each param group flattened tensor
186
+
187
+ fp32_flat_groups = [
188
+ torch.cat(state_dicts[i][OPTIMIZER_STATE_DICT][fp32_groups_key], 0) for i in range(len(state_dicts))
189
+ ]
190
+
191
+ return zero_stage, world_size, fp32_flat_groups
192
+
193
+
194
+ def _get_fp32_state_dict_from_zero_checkpoint(ds_checkpoint_dir):
195
+ """
196
+ Returns fp32 state_dict reconstructed from ds checkpoint
197
+
198
+ Args:
199
+ - ``ds_checkpoint_dir``: path to the deepspeed checkpoint folder (where the optimizer files are)
200
+
201
+ """
202
+ print(f"Processing zero checkpoint '{ds_checkpoint_dir}'")
203
+
204
+ optim_files = get_optim_files(ds_checkpoint_dir)
205
+ zero_stage, world_size, fp32_flat_groups = parse_optim_states(optim_files, ds_checkpoint_dir)
206
+ print(f"Detected checkpoint of type zero stage {zero_stage}, world_size: {world_size}")
207
+
208
+ model_files = get_model_state_files(ds_checkpoint_dir)
209
+
210
+ zero_model_states = parse_model_states(model_files)
211
+ print(f'Parsing checkpoint created by deepspeed=={zero_model_states[0].ds_version}')
212
+
213
+ if zero_stage <= 2:
214
+ return _get_fp32_state_dict_from_zero2_checkpoint(world_size, fp32_flat_groups, zero_model_states)
215
+ elif zero_stage == 3:
216
+ return _get_fp32_state_dict_from_zero3_checkpoint(world_size, fp32_flat_groups, zero_model_states)
217
+
218
+
219
+ def _zero2_merge_frozen_params(state_dict, zero_model_states):
220
+ if zero_model_states[0].frozen_param_shapes is None or len(zero_model_states[0].frozen_param_shapes) == 0:
221
+ return
222
+
223
+ frozen_param_shapes = zero_model_states[0].frozen_param_shapes
224
+ frozen_param_fragments = zero_model_states[0].frozen_param_fragments
225
+
226
+ if debug:
227
+ num_elem = sum(s.numel() for s in frozen_param_shapes.values())
228
+ print(f'rank 0: {FROZEN_PARAM_SHAPES}.numel = {num_elem}')
229
+
230
+ wanted_params = len(frozen_param_shapes)
231
+ wanted_numel = sum(s.numel() for s in frozen_param_shapes.values())
232
+ avail_numel = sum([p.numel() for p in frozen_param_fragments.values()])
233
+ print(f'Frozen params: Have {avail_numel} numels to process.')
234
+ print(f'Frozen params: Need {wanted_numel} numels in {wanted_params} params')
235
+
236
+ total_params = 0
237
+ total_numel = 0
238
+ for name, shape in frozen_param_shapes.items():
239
+ total_params += 1
240
+ unpartitioned_numel = shape.numel()
241
+ total_numel += unpartitioned_numel
242
+
243
+ state_dict[name] = frozen_param_fragments[name]
244
+
245
+ if debug:
246
+ print(f"{name} full shape: {shape} unpartitioned numel {unpartitioned_numel} ")
247
+
248
+ print(f"Reconstructed Frozen fp32 state dict with {total_params} params {total_numel} elements")
249
+
250
+
251
+ def _zero2_merge_trainable_params(state_dict, world_size, fp32_flat_groups, zero_model_states):
252
+ param_shapes = zero_model_states[0].param_shapes
253
+
254
+ # Reconstruction protocol:
255
+ #
256
+ # XXX: document this
257
+
258
+ if debug:
259
+ for i in range(world_size):
260
+ for j in range(len(fp32_flat_groups[0])):
261
+ print(f"{FP32_FLAT_GROUPS}[{i}][{j}].shape={fp32_flat_groups[i][j].shape}")
262
+
263
+ # XXX: memory usage doubles here (zero2)
264
+ num_param_groups = len(fp32_flat_groups[0])
265
+ merged_single_partition_of_fp32_groups = []
266
+ for i in range(num_param_groups):
267
+ merged_partitions = [sd[i] for sd in fp32_flat_groups]
268
+ full_single_fp32_vector = torch.cat(merged_partitions, 0)
269
+ merged_single_partition_of_fp32_groups.append(full_single_fp32_vector)
270
+ avail_numel = sum(
271
+ [full_single_fp32_vector.numel() for full_single_fp32_vector in merged_single_partition_of_fp32_groups])
272
+
273
+ if debug:
274
+ wanted_params = sum([len(shapes) for shapes in param_shapes])
275
+ wanted_numel = sum([sum(shape.numel() for shape in shapes.values()) for shapes in param_shapes])
276
+ # not asserting if there is a mismatch due to possible padding
277
+ print(f"Have {avail_numel} numels to process.")
278
+ print(f"Need {wanted_numel} numels in {wanted_params} params.")
279
+
280
+ # params
281
+ # XXX: for huge models that can't fit into the host's RAM we will have to recode this to support
282
+ # out-of-core computing solution
283
+ total_numel = 0
284
+ total_params = 0
285
+ for shapes, full_single_fp32_vector in zip(param_shapes, merged_single_partition_of_fp32_groups):
286
+ offset = 0
287
+ avail_numel = full_single_fp32_vector.numel()
288
+ for name, shape in shapes.items():
289
+
290
+ unpartitioned_numel = shape.numel()
291
+ total_numel += unpartitioned_numel
292
+ total_params += 1
293
+
294
+ if debug:
295
+ print(f"{name} full shape: {shape} unpartitioned numel {unpartitioned_numel} ")
296
+ state_dict[name] = full_single_fp32_vector.narrow(0, offset, unpartitioned_numel).view(shape)
297
+ offset += unpartitioned_numel
298
+
299
+ # Z2 started to align to 2*world_size to improve nccl performance. Therefore both offset and
300
+ # avail_numel can differ by anywhere between 0..2*world_size. Due to two unrelated complex
301
+ # paddings performed in the code it's almost impossible to predict the exact numbers w/o the
302
+ # live optimizer object, so we are checking that the numbers are within the right range
303
+ align_to = 2 * world_size
304
+
305
+ def zero2_align(x):
306
+ return align_to * math.ceil(x / align_to)
307
+
308
+ if debug:
309
+ print(f"original offset={offset}, avail_numel={avail_numel}")
310
+
311
+ offset = zero2_align(offset)
312
+ avail_numel = zero2_align(avail_numel)
313
+
314
+ if debug:
315
+ print(f"aligned offset={offset}, avail_numel={avail_numel}")
316
+
317
+ # Sanity check
318
+ if offset != avail_numel:
319
+ raise ValueError(f"consumed {offset} numels out of {avail_numel} - something is wrong")
320
+
321
+ print(f"Reconstructed fp32 state dict with {total_params} params {total_numel} elements")
322
+
323
+
324
+ def _get_fp32_state_dict_from_zero2_checkpoint(world_size, fp32_flat_groups, zero_model_states):
325
+ state_dict = OrderedDict()
326
+
327
+ # buffers
328
+ buffers = zero_model_states[0].buffers
329
+ state_dict.update(buffers)
330
+ if debug:
331
+ print(f"added {len(buffers)} buffers")
332
+
333
+ _zero2_merge_frozen_params(state_dict, zero_model_states)
334
+
335
+ _zero2_merge_trainable_params(state_dict, world_size, fp32_flat_groups, zero_model_states)
336
+
337
+ # recover shared parameters
338
+ for pair in zero_model_states[0].shared_params:
339
+ if pair[1] in state_dict:
340
+ state_dict[pair[0]] = state_dict[pair[1]]
341
+
342
+ return state_dict
343
+
344
+
345
+ def zero3_partitioned_param_info(unpartitioned_numel, world_size):
346
+ remainder = unpartitioned_numel % world_size
347
+ padding_numel = (world_size - remainder) if remainder else 0
348
+ partitioned_numel = math.ceil(unpartitioned_numel / world_size)
349
+ return partitioned_numel, padding_numel
350
+
351
+
352
+ def _zero3_merge_frozen_params(state_dict, world_size, zero_model_states):
353
+ if zero_model_states[0].frozen_param_shapes is None or len(zero_model_states[0].frozen_param_shapes) == 0:
354
+ return
355
+
356
+ if debug:
357
+ for i in range(world_size):
358
+ num_elem = sum(s.numel() for s in zero_model_states[i].frozen_param_fragments.values())
359
+ print(f'rank {i}: {FROZEN_PARAM_SHAPES}.numel = {num_elem}')
360
+
361
+ frozen_param_shapes = zero_model_states[0].frozen_param_shapes
362
+ wanted_params = len(frozen_param_shapes)
363
+ wanted_numel = sum(s.numel() for s in frozen_param_shapes.values())
364
+ avail_numel = sum([p.numel() for p in zero_model_states[0].frozen_param_fragments.values()]) * world_size
365
+ print(f'Frozen params: Have {avail_numel} numels to process.')
366
+ print(f'Frozen params: Need {wanted_numel} numels in {wanted_params} params')
367
+
368
+ total_params = 0
369
+ total_numel = 0
370
+ for name, shape in zero_model_states[0].frozen_param_shapes.items():
371
+ total_params += 1
372
+ unpartitioned_numel = shape.numel()
373
+ total_numel += unpartitioned_numel
374
+
375
+ param_frags = tuple(model_state.frozen_param_fragments[name] for model_state in zero_model_states)
376
+ state_dict[name] = torch.cat(param_frags, 0).narrow(0, 0, unpartitioned_numel).view(shape)
377
+
378
+ partitioned_numel, partitioned_padding_numel = zero3_partitioned_param_info(unpartitioned_numel, world_size)
379
+
380
+ if debug:
381
+ print(
382
+ f"Frozen params: {total_params} {name} full shape: {shape} partition0 numel={partitioned_numel} partitioned_padding_numel={partitioned_padding_numel}"
383
+ )
384
+
385
+ print(f"Reconstructed Frozen fp32 state dict with {total_params} params {total_numel} elements")
386
+
387
+
388
+ def _zero3_merge_trainable_params(state_dict, world_size, fp32_flat_groups, zero_model_states):
389
+ param_shapes = zero_model_states[0].param_shapes
390
+ avail_numel = fp32_flat_groups[0].numel() * world_size
391
+ # Reconstruction protocol: For zero3 we need to zip the partitions together at boundary of each
392
+ # param, re-consolidating each param, while dealing with padding if any
393
+
394
+ # merge list of dicts, preserving order
395
+ param_shapes = {k: v for d in param_shapes for k, v in d.items()}
396
+
397
+ if debug:
398
+ for i in range(world_size):
399
+ print(f"{FP32_FLAT_GROUPS}[{i}].shape={fp32_flat_groups[i].shape}")
400
+
401
+ wanted_params = len(param_shapes)
402
+ wanted_numel = sum(shape.numel() for shape in param_shapes.values())
403
+ # not asserting if there is a mismatch due to possible padding
404
+ avail_numel = fp32_flat_groups[0].numel() * world_size
405
+ print(f"Trainable params: Have {avail_numel} numels to process.")
406
+ print(f"Trainable params: Need {wanted_numel} numels in {wanted_params} params.")
407
+
408
+ # params
409
+ # XXX: for huge models that can't fit into the host's RAM we will have to recode this to support
410
+ # out-of-core computing solution
411
+ offset = 0
412
+ total_numel = 0
413
+ total_params = 0
414
+ for name, shape in param_shapes.items():
415
+
416
+ unpartitioned_numel = shape.numel()
417
+ total_numel += unpartitioned_numel
418
+ total_params += 1
419
+
420
+ partitioned_numel, partitioned_padding_numel = zero3_partitioned_param_info(unpartitioned_numel, world_size)
421
+
422
+ if debug:
423
+ print(
424
+ f"Trainable params: {total_params} {name} full shape: {shape} partition0 numel={partitioned_numel} partitioned_padding_numel={partitioned_padding_numel}"
425
+ )
426
+
427
+ # XXX: memory usage doubles here
428
+ state_dict[name] = torch.cat(
429
+ tuple(fp32_flat_groups[i].narrow(0, offset, partitioned_numel) for i in range(world_size)),
430
+ 0).narrow(0, 0, unpartitioned_numel).view(shape)
431
+ offset += partitioned_numel
432
+
433
+ offset *= world_size
434
+
435
+ # Sanity check
436
+ if offset != avail_numel:
437
+ raise ValueError(f"consumed {offset} numels out of {avail_numel} - something is wrong")
438
+
439
+ print(f"Reconstructed Trainable fp32 state dict with {total_params} params {total_numel} elements")
440
+
441
+
442
+ def _get_fp32_state_dict_from_zero3_checkpoint(world_size, fp32_flat_groups, zero_model_states):
443
+ state_dict = OrderedDict()
444
+
445
+ # buffers
446
+ buffers = zero_model_states[0].buffers
447
+ state_dict.update(buffers)
448
+ if debug:
449
+ print(f"added {len(buffers)} buffers")
450
+
451
+ _zero3_merge_frozen_params(state_dict, world_size, zero_model_states)
452
+
453
+ _zero3_merge_trainable_params(state_dict, world_size, fp32_flat_groups, zero_model_states)
454
+
455
+ # recover shared parameters
456
+ for pair in zero_model_states[0].shared_params:
457
+ if pair[1] in state_dict:
458
+ state_dict[pair[0]] = state_dict[pair[1]]
459
+
460
+ return state_dict
461
+
462
+
463
+ def get_fp32_state_dict_from_zero_checkpoint(checkpoint_dir, tag=None):
464
+ """
465
+ Convert ZeRO 2 or 3 checkpoint into a single fp32 consolidated state_dict that can be loaded with
466
+ ``load_state_dict()`` and used for training without DeepSpeed or shared with others, for example
467
+ via a model hub.
468
+
469
+ Args:
470
+ - ``checkpoint_dir``: path to the desired checkpoint folder
471
+ - ``tag``: checkpoint tag used as a unique identifier for checkpoint. If not provided will attempt to load tag in 'latest' file. e.g., ``global_step14``
472
+
473
+ Returns:
474
+ - pytorch ``state_dict``
475
+
476
+ Note: this approach may not work if your application doesn't have sufficient free CPU memory and
477
+ you may need to use the offline approach using the ``zero_to_fp32.py`` script that is saved with
478
+ the checkpoint.
479
+
480
+ A typical usage might be ::
481
+
482
+ from deepspeed.utils.zero_to_fp32 import get_fp32_state_dict_from_zero_checkpoint
483
+ # do the training and checkpoint saving
484
+ state_dict = get_fp32_state_dict_from_zero_checkpoint(checkpoint_dir) # already on cpu
485
+ model = model.cpu() # move to cpu
486
+ model.load_state_dict(state_dict)
487
+ # submit to model hub or save the model to share with others
488
+
489
+ In this example the ``model`` will no longer be usable in the deepspeed context of the same
490
+ application. i.e. you will need to re-initialize the deepspeed engine, since
491
+ ``model.load_state_dict(state_dict)`` will remove all the deepspeed magic from it.
492
+
493
+ If you want it all done for you, use ``load_state_dict_from_zero_checkpoint`` instead.
494
+
495
+ """
496
+ if tag is None:
497
+ latest_path = os.path.join(checkpoint_dir, 'latest')
498
+ if os.path.isfile(latest_path):
499
+ with open(latest_path, 'r') as fd:
500
+ tag = fd.read().strip()
501
+ else:
502
+ raise ValueError(f"Unable to find 'latest' file at {latest_path}")
503
+
504
+ ds_checkpoint_dir = os.path.join(checkpoint_dir, tag)
505
+
506
+ if not os.path.isdir(ds_checkpoint_dir):
507
+ raise FileNotFoundError(f"Directory '{ds_checkpoint_dir}' doesn't exist")
508
+
509
+ return _get_fp32_state_dict_from_zero_checkpoint(ds_checkpoint_dir)
510
+
511
+
512
+ def convert_zero_checkpoint_to_fp32_state_dict(checkpoint_dir, output_file, tag=None):
513
+ """
514
+ Convert ZeRO 2 or 3 checkpoint into a single fp32 consolidated ``state_dict`` file that can be
515
+ loaded with ``torch.load(file)`` + ``load_state_dict()`` and used for training without DeepSpeed.
516
+
517
+ Args:
518
+ - ``checkpoint_dir``: path to the desired checkpoint folder. (one that contains the tag-folder, like ``global_step14``)
519
+ - ``output_file``: path to the pytorch fp32 state_dict output file (e.g. path/pytorch_model.bin)
520
+ - ``tag``: checkpoint tag used as a unique identifier for checkpoint. If not provided will attempt to load tag in the file named ``latest`` in the checkpoint folder, e.g., ``global_step14``
521
+ """
522
+
523
+ state_dict = get_fp32_state_dict_from_zero_checkpoint(checkpoint_dir, tag)
524
+ print(f"Saving fp32 state dict to {output_file}")
525
+ torch.save(state_dict, output_file)
526
+
527
+
528
+ def load_state_dict_from_zero_checkpoint(model, checkpoint_dir, tag=None):
529
+ """
530
+ 1. Put the provided model to cpu
531
+ 2. Convert ZeRO 2 or 3 checkpoint into a single fp32 consolidated ``state_dict``
532
+ 3. Load it into the provided model
533
+
534
+ Args:
535
+ - ``model``: the model object to update
536
+ - ``checkpoint_dir``: path to the desired checkpoint folder. (one that contains the tag-folder, like ``global_step14``)
537
+ - ``tag``: checkpoint tag used as a unique identifier for checkpoint. If not provided will attempt to load tag in the file named ``latest`` in the checkpoint folder, e.g., ``global_step14``
538
+
539
+ Returns:
540
+ - ``model`: modified model
541
+
542
+ Make sure you have plenty of CPU memory available before you call this function. If you don't
543
+ have enough use the ``zero_to_fp32.py`` utility to do the conversion. You will find it
544
+ conveniently placed for you in the checkpoint folder.
545
+
546
+ A typical usage might be ::
547
+
548
+ from deepspeed.utils.zero_to_fp32 import load_state_dict_from_zero_checkpoint
549
+ model = load_state_dict_from_zero_checkpoint(trainer.model, checkpoint_dir)
550
+ # submit to model hub or save the model to share with others
551
+
552
+ Note, that once this was run, the ``model`` will no longer be usable in the deepspeed context
553
+ of the same application. i.e. you will need to re-initialize the deepspeed engine, since
554
+ ``model.load_state_dict(state_dict)`` will remove all the deepspeed magic from it.
555
+
556
+ """
557
+ logger.info(f"Extracting fp32 weights")
558
+ state_dict = get_fp32_state_dict_from_zero_checkpoint(checkpoint_dir, tag)
559
+
560
+ logger.info(f"Overwriting model with fp32 weights")
561
+ model = model.cpu()
562
+ model.load_state_dict(state_dict, strict=False)
563
+
564
+ return model
565
+
566
+
567
+ if __name__ == "__main__":
568
+
569
+ parser = argparse.ArgumentParser()
570
+ parser.add_argument("checkpoint_dir",
571
+ type=str,
572
+ help="path to the desired checkpoint folder, e.g., path/checkpoint-12")
573
+ parser.add_argument(
574
+ "output_file",
575
+ type=str,
576
+ help="path to the pytorch fp32 state_dict output file (e.g. path/checkpoint-12/pytorch_model.bin)")
577
+ parser.add_argument("-t",
578
+ "--tag",
579
+ type=str,
580
+ default=None,
581
+ help="checkpoint tag used as a unique identifier for checkpoint. e.g., global_step1")
582
+ parser.add_argument("-d", "--debug", action='store_true', help="enable debug")
583
+ args = parser.parse_args()
584
+
585
+ debug = args.debug
586
+
587
+ convert_zero_checkpoint_to_fp32_state_dict(args.checkpoint_dir, args.output_file, tag=args.tag)
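
As the script's header comment says, it is copied into the top-level checkpoint directory so the conversion can be run offline at any time. For this checkpoint the invocation would look like the following (the output filename is just a convention; the `latest` file supplies the `global_step162` tag automatically):

```bash
cd checkpoint-162
python zero_to_fp32.py . pytorch_model.bin
```

Loading the result still requires enough free CPU memory to hold the full fp32 state dict, as the docstrings above warn.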