dq158 committed
Commit 6d94d46
Parent: 9cc278e

Training in progress, epoch 0, checkpoint

last-checkpoint/README.md CHANGED
@@ -1,9 +1,207 @@
  ---
  library_name: peft
+ base_model: google/flan-t5-xl
  ---
+
+ # Model Card for Model ID
+
+ <!-- Provide a quick summary of what the model is/does. -->
+
+
+
+ ## Model Details
+
+ ### Model Description
+
+ <!-- Provide a longer summary of what this model is. -->
+
+
+
+ - **Developed by:** [More Information Needed]
+ - **Shared by [optional]:** [More Information Needed]
+ - **Model type:** [More Information Needed]
+ - **Language(s) (NLP):** [More Information Needed]
+ - **License:** [More Information Needed]
+ - **Finetuned from model [optional]:** [More Information Needed]
+
+ ### Model Sources [optional]
+
+ <!-- Provide the basic links for the model. -->
+
+ - **Repository:** [More Information Needed]
+ - **Paper [optional]:** [More Information Needed]
+ - **Demo [optional]:** [More Information Needed]
+
+ ## Uses
+
+ <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
+
+ ### Direct Use
+
+ <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
+
+ [More Information Needed]
+
+ ### Downstream Use [optional]
+
+ <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
+
+ [More Information Needed]
+
+ ### Out-of-Scope Use
+
+ <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
+
+ [More Information Needed]
+
+ ## Bias, Risks, and Limitations
+
+ <!-- This section is meant to convey both technical and sociotechnical limitations. -->
+
+ [More Information Needed]
+
+ ### Recommendations
+
+ <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
+
+ Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
+
+ ## How to Get Started with the Model
+
+ Use the code below to get started with the model.
+
+ [More Information Needed]
+
+ ## Training Details
+
+ ### Training Data
+
+ <!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
+
+ [More Information Needed]
+
+ ### Training Procedure
+
+ <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
+
+ #### Preprocessing [optional]
+
+ [More Information Needed]
+
+
+ #### Training Hyperparameters
+
+ - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
+
+ #### Speeds, Sizes, Times [optional]
+
+ <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
+
+ [More Information Needed]
+
+ ## Evaluation
+
+ <!-- This section describes the evaluation protocols and provides the results. -->
+
+ ### Testing Data, Factors & Metrics
+
+ #### Testing Data
+
+ <!-- This should link to a Data Card if possible. -->
+
+ [More Information Needed]
+
+ #### Factors
+
+ <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
+
+ [More Information Needed]
+
+ #### Metrics
+
+ <!-- These are the evaluation metrics being used, ideally with a description of why. -->
+
+ [More Information Needed]
+
+ ### Results
+
+ [More Information Needed]
+
+ #### Summary
+
+
+
+ ## Model Examination [optional]
+
+ <!-- Relevant interpretability work for the model goes here -->
+
+ [More Information Needed]
+
+ ## Environmental Impact
+
+ <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
+
+ Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
+
+ - **Hardware Type:** [More Information Needed]
+ - **Hours used:** [More Information Needed]
+ - **Cloud Provider:** [More Information Needed]
+ - **Compute Region:** [More Information Needed]
+ - **Carbon Emitted:** [More Information Needed]
+
+ ## Technical Specifications [optional]
+
+ ### Model Architecture and Objective
+
+ [More Information Needed]
+
+ ### Compute Infrastructure
+
+ [More Information Needed]
+
+ #### Hardware
+
+ [More Information Needed]
+
+ #### Software
+
+ [More Information Needed]
+
+ ## Citation [optional]
+
+ <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
+
+ **BibTeX:**
+
+ [More Information Needed]
+
+ **APA:**
+
+ [More Information Needed]
+
+ ## Glossary [optional]
+
+ <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
+
+ [More Information Needed]
+
+ ## More Information [optional]
+
+ [More Information Needed]
+
+ ## Model Card Authors [optional]
+
+ [More Information Needed]
+
+ ## Model Card Contact
+
+ [More Information Needed]
+
+
  ## Training procedure

+
  ### Framework versions


- - PEFT 0.5.0
+ - PEFT 0.6.0
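
The updated card now declares `base_model: google/flan-t5-xl` alongside `library_name: peft`. A minimal sketch of how a checkpoint like this is typically loaded, assuming `transformers` and `peft` (>= 0.6.0) are installed; the checkpoint path and prompt are illustrative, not part of this commit:

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
from peft import PeftModel

# Base model named in the card's new base_model field.
base = AutoModelForSeq2SeqLM.from_pretrained("google/flan-t5-xl")
tokenizer = AutoTokenizer.from_pretrained("google/flan-t5-xl")

# The checkpoint directory holds adapter_config.json and
# adapter_model.safetensors; PeftModel attaches them to the base model.
model = PeftModel.from_pretrained(base, "last-checkpoint")

inputs = tokenizer("Translate to German: Hello, world!", return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```
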
last-checkpoint/adapter_config.json CHANGED
@@ -1,4 +1,5 @@
  {
+ "alpha_pattern": {},
  "auto_mapping": null,
  "base_model_name_or_path": "google/flan-t5-xl",
  "bias": "none",
@@ -12,6 +13,7 @@
  "modules_to_save": null,
  "peft_type": "LORA",
  "r": 8,
+ "rank_pattern": {},
  "revision": null,
  "target_modules": [
  "q",
last-checkpoint/adapter_model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:2f1aa666a0820316d639ecbb9a3210f6c9c4bbca651e14e7e1e9c30dbda30077
+ size 18915040
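
As with all large binaries on the Hub, the repository versions only a Git LFS pointer (spec version, SHA-256 oid, byte size); the roughly 19 MB of adapter weights live in LFS storage. A small sketch for checking that a fetched file matches its pointer, assuming the checkpoint has already been downloaded locally:

```python
import hashlib
from pathlib import Path

path = Path("last-checkpoint/adapter_model.safetensors")
expected_oid = "2f1aa666a0820316d639ecbb9a3210f6c9c4bbca651e14e7e1e9c30dbda30077"
expected_size = 18_915_040

data = path.read_bytes()
assert len(data) == expected_size, f"size mismatch: {len(data)}"
assert hashlib.sha256(data).hexdigest() == expected_oid, "sha256 mismatch"
print("file matches its LFS pointer")
```
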
last-checkpoint/optimizer.pt CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:7a23348b4df2b6568801e91220a68ad91e38eac64a00e03279b8b947b4d9df81
- size 1256
+ oid sha256:483af34880db1e60d5b85208dbbf675a28e85f7a14902fe8d8350237950b438a
+ size 37990394
last-checkpoint/rng_state.pth CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:a42394408b097029d48abe7849e174375ed618566b4a81191ad7d7fc92568157
+ oid sha256:fada09e847f4827fb2ff301bcbc5ac90e601f0a0924ec853561aa6b1ceee0d7d
  size 14244
last-checkpoint/scheduler.pt CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:c5827e664fac8a0383184370b132f5205edaef37211d1ec86f95a9cda9307ffe
+ oid sha256:643bb99c475ad4b5ec6c8c3e7ded21d1397bf5b4f083e0876d69871550486749
  size 1064
last-checkpoint/trainer_state.json CHANGED
@@ -1,100 +1,188 @@
1
  {
2
- "best_metric": 2.096945285797119,
3
- "best_model_checkpoint": "dq158/pingusPongus/checkpoint-790",
4
- "epoch": 2.998102466793169,
5
  "eval_steps": 500,
6
- "global_step": 2370,
7
  "is_hyper_param_search": false,
8
  "is_local_process_zero": true,
9
  "is_world_process_zero": true,
10
  "log_history": [
11
  {
12
- "epoch": 0.63,
13
- "learning_rate": 4.6266135493489015e-05,
14
- "loss": 2.2387,
15
  "step": 500
16
  },
17
  {
18
- "epoch": 1.0,
19
- "eval_bleu": 1.0,
20
- "eval_brevity_penalty": 1.0,
21
- "eval_length_ratio": 1.0,
22
- "eval_loss": 2.096945285797119,
23
- "eval_precisions": [
24
- 1.0,
25
- 1.0,
26
- 1.0,
27
- 1.0
28
- ],
29
- "eval_reference_length": 1439232,
30
- "eval_runtime": 879.4262,
31
- "eval_samples_per_second": 3.196,
32
- "eval_steps_per_second": 0.2,
33
- "eval_translation_length": 1439232,
34
- "step": 790
35
- },
36
- {
37
- "epoch": 1.27,
38
- "learning_rate": 3.2988191110253866e-05,
39
- "loss": 2.241,
40
  "step": 1000
41
  },
42
  {
43
- "epoch": 1.9,
44
- "learning_rate": 1.6035418002300977e-05,
45
- "loss": 2.2435,
46
  "step": 1500
47
  },
48
  {
49
- "epoch": 2.0,
50
- "eval_bleu": 1.0,
51
- "eval_brevity_penalty": 1.0,
52
- "eval_length_ratio": 1.0,
53
- "eval_loss": 2.096945285797119,
54
- "eval_precisions": [
55
- 1.0,
56
- 1.0,
57
- 1.0,
58
- 1.0
59
- ],
60
- "eval_reference_length": 1439232,
61
- "eval_runtime": 877.8823,
62
- "eval_samples_per_second": 3.202,
63
- "eval_steps_per_second": 0.2,
64
- "eval_translation_length": 1439232,
65
- "step": 1581
66
  },
67
  {
68
- "epoch": 2.53,
69
- "learning_rate": 3.206645306399442e-06,
70
- "loss": 2.2391,
71
- "step": 2000
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
72
  },
73
  {
74
- "epoch": 3.0,
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
75
  "eval_bleu": 1.0,
76
  "eval_brevity_penalty": 1.0,
77
  "eval_length_ratio": 1.0,
78
- "eval_loss": 2.096945285797119,
79
  "eval_precisions": [
80
  1.0,
81
  1.0,
82
  1.0,
83
  1.0
84
  ],
85
- "eval_reference_length": 1439232,
86
- "eval_runtime": 865.4618,
87
- "eval_samples_per_second": 3.248,
88
- "eval_steps_per_second": 0.203,
89
- "eval_translation_length": 1439232,
90
- "step": 2370
91
  }
92
  ],
93
  "logging_steps": 500,
94
- "max_steps": 2370,
95
- "num_train_epochs": 3,
96
  "save_steps": 500,
97
- "total_flos": 6.496213080106598e+17,
98
  "trial_name": null,
99
  "trial_params": null
100
  }
 
1
  {
2
+ "best_metric": 2.910914182662964,
3
+ "best_model_checkpoint": "dq158/pingusPongus/checkpoint-12755",
4
+ "epoch": 0.9999608012230018,
5
  "eval_steps": 500,
6
+ "global_step": 12755,
7
  "is_hyper_param_search": false,
8
  "is_local_process_zero": true,
9
  "is_world_process_zero": true,
10
  "log_history": [
11
  {
12
+ "epoch": 0.04,
13
+ "learning_rate": 9.999026339091761e-05,
14
+ "loss": 3.861,
15
  "step": 500
16
  },
17
  {
18
+ "epoch": 0.08,
19
+ "learning_rate": 9.995071491552179e-05,
20
+ "loss": 3.3987,
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
21
  "step": 1000
22
  },
23
  {
24
+ "epoch": 0.12,
25
+ "learning_rate": 9.988077008354145e-05,
26
+ "loss": 3.3479,
27
  "step": 1500
28
  },
29
  {
30
+ "epoch": 0.16,
31
+ "learning_rate": 9.978047145829242e-05,
32
+ "loss": 3.2758,
33
+ "step": 2000
 
 
 
 
 
 
 
 
 
 
 
 
 
34
  },
35
  {
36
+ "epoch": 0.2,
37
+ "learning_rate": 9.964988007419194e-05,
38
+ "loss": 3.267,
39
+ "step": 2500
40
+ },
41
+ {
42
+ "epoch": 0.24,
43
+ "learning_rate": 9.948907539961771e-05,
44
+ "loss": 3.1955,
45
+ "step": 3000
46
+ },
47
+ {
48
+ "epoch": 0.27,
49
+ "learning_rate": 9.929815528854916e-05,
50
+ "loss": 3.1989,
51
+ "step": 3500
52
+ },
53
+ {
54
+ "epoch": 0.31,
55
+ "learning_rate": 9.907723592102063e-05,
56
+ "loss": 3.197,
57
+ "step": 4000
58
+ },
59
+ {
60
+ "epoch": 0.35,
61
+ "learning_rate": 9.882645173242273e-05,
62
+ "loss": 3.2314,
63
+ "step": 4500
64
+ },
65
+ {
66
+ "epoch": 0.39,
67
+ "learning_rate": 9.854595533169484e-05,
68
+ "loss": 3.1667,
69
+ "step": 5000
70
+ },
71
+ {
72
+ "epoch": 0.43,
73
+ "learning_rate": 9.823591740845831e-05,
74
+ "loss": 3.1677,
75
+ "step": 5500
76
+ },
77
+ {
78
+ "epoch": 0.47,
79
+ "learning_rate": 9.789652662914738e-05,
80
+ "loss": 3.1065,
81
+ "step": 6000
82
+ },
83
+ {
84
+ "epoch": 0.51,
85
+ "learning_rate": 9.752798952220046e-05,
86
+ "loss": 3.1374,
87
+ "step": 6500
88
+ },
89
+ {
90
+ "epoch": 0.55,
91
+ "learning_rate": 9.713053035238205e-05,
92
+ "loss": 3.112,
93
+ "step": 7000
94
  },
95
  {
96
+ "epoch": 0.59,
97
+ "learning_rate": 9.670439098431159e-05,
98
+ "loss": 3.1106,
99
+ "step": 7500
100
+ },
101
+ {
102
+ "epoch": 0.63,
103
+ "learning_rate": 9.624983073528232e-05,
104
+ "loss": 3.1226,
105
+ "step": 8000
106
+ },
107
+ {
108
+ "epoch": 0.67,
109
+ "learning_rate": 9.576712621745964e-05,
110
+ "loss": 3.0798,
111
+ "step": 8500
112
+ },
113
+ {
114
+ "epoch": 0.71,
115
+ "learning_rate": 9.525657116955533e-05,
116
+ "loss": 3.0946,
117
+ "step": 9000
118
+ },
119
+ {
120
+ "epoch": 0.74,
121
+ "learning_rate": 9.471847627807943e-05,
122
+ "loss": 3.0776,
123
+ "step": 9500
124
+ },
125
+ {
126
+ "epoch": 0.78,
127
+ "learning_rate": 9.415316898827921e-05,
128
+ "loss": 3.109,
129
+ "step": 10000
130
+ },
131
+ {
132
+ "epoch": 0.82,
133
+ "learning_rate": 9.356099330487995e-05,
134
+ "loss": 3.0728,
135
+ "step": 10500
136
+ },
137
+ {
138
+ "epoch": 0.86,
139
+ "learning_rate": 9.29423095827487e-05,
140
+ "loss": 3.0979,
141
+ "step": 11000
142
+ },
143
+ {
144
+ "epoch": 0.9,
145
+ "learning_rate": 9.229749430760868e-05,
146
+ "loss": 3.0728,
147
+ "step": 11500
148
+ },
149
+ {
150
+ "epoch": 0.94,
151
+ "learning_rate": 9.16269398669376e-05,
152
+ "loss": 3.046,
153
+ "step": 12000
154
+ },
155
+ {
156
+ "epoch": 0.98,
157
+ "learning_rate": 9.093105431118921e-05,
158
+ "loss": 3.0024,
159
+ "step": 12500
160
+ },
161
+ {
162
+ "epoch": 1.0,
163
  "eval_bleu": 1.0,
164
  "eval_brevity_penalty": 1.0,
165
  "eval_length_ratio": 1.0,
166
+ "eval_loss": 2.910914182662964,
167
  "eval_precisions": [
168
  1.0,
169
  1.0,
170
  1.0,
171
  1.0
172
  ],
173
+ "eval_reference_length": 5805056,
174
+ "eval_runtime": 8460.2982,
175
+ "eval_samples_per_second": 1.34,
176
+ "eval_steps_per_second": 0.335,
177
+ "eval_translation_length": 5805056,
178
+ "step": 12755
179
  }
180
  ],
181
  "logging_steps": 500,
182
+ "max_steps": 63775,
183
+ "num_train_epochs": 5,
184
  "save_steps": 500,
185
+ "total_flos": 8.74174568271446e+17,
186
  "trial_name": null,
187
  "trial_params": null
188
  }
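
The replaced state is internally consistent: global_step 12755 at epoch roughly 1.0, and max_steps 63775 = 12755 × 5 for num_train_epochs 5 (the commit message's "epoch 0" is the zero-indexed first epoch). A sketch for reading the logged loss curve back out of this file:

```python
import json

# Load the trainer state saved with the checkpoint (path as in this commit).
with open("last-checkpoint/trainer_state.json") as f:
    state = json.load(f)

# Training log entries carry a "loss" key; eval entries use eval_* keys.
for entry in state["log_history"]:
    if "loss" in entry:
        print(f"step {entry['step']:>6}  lr {entry['learning_rate']:.3e}  "
              f"loss {entry['loss']:.4f}")

print("best eval loss:", state["best_metric"],
      "at", state["best_model_checkpoint"])
```
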
last-checkpoint/training_args.bin CHANGED
@@ -1,3 +1,3 @@
1
  version https://git-lfs.github.com/spec/v1
2
- oid sha256:b7694805b29bfb8491caf50687f3617fba8e8b17c948a4fabc0da0476e235ec2
3
- size 4664
 
1
  version https://git-lfs.github.com/spec/v1
2
+ oid sha256:a4056ae149e569ae64bff6f199f4ee50a2bd868d3d8032713f5c777a191c668f
3
+ size 4728
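
training_args.bin is the pickled `TrainingArguments` object that the `Trainer` stores with every checkpoint, which is why it changes whenever the run configuration does. A sketch for inspecting it; this assumes `transformers` is importable so the pickle can resolve its classes, and passes `weights_only=False` because the file is a full Python object rather than tensors:

```python
import torch

# Not a weights file: a pickled transformers.TrainingArguments instance.
args = torch.load("last-checkpoint/training_args.bin", weights_only=False)
print(args.learning_rate, args.num_train_epochs,
      args.per_device_train_batch_size)
```
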