thankuxari committed
Commit 9657173 · verified · Parent: 74f0db0

Upload folder using huggingface_hub

README.md ADDED
@@ -0,0 +1,202 @@
+ ---
+ base_model: bert-base-uncased
+ library_name: peft
+ ---
+
+ # Model Card for Model ID
+
+ <!-- Provide a quick summary of what the model is/does. -->
+
+
+
+ ## Model Details
+
+ ### Model Description
+
+ <!-- Provide a longer summary of what this model is. -->
+
+
+
+ - **Developed by:** [More Information Needed]
+ - **Funded by [optional]:** [More Information Needed]
+ - **Shared by [optional]:** [More Information Needed]
+ - **Model type:** [More Information Needed]
+ - **Language(s) (NLP):** [More Information Needed]
+ - **License:** [More Information Needed]
+ - **Finetuned from model [optional]:** [More Information Needed]
+
+ ### Model Sources [optional]
+
+ <!-- Provide the basic links for the model. -->
+
+ - **Repository:** [More Information Needed]
+ - **Paper [optional]:** [More Information Needed]
+ - **Demo [optional]:** [More Information Needed]
+
+ ## Uses
+
+ <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
+
+ ### Direct Use
+
+ <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
+
+ [More Information Needed]
+
+ ### Downstream Use [optional]
+
+ <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
+
+ [More Information Needed]
+
+ ### Out-of-Scope Use
+
+ <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
+
+ [More Information Needed]
+
+ ## Bias, Risks, and Limitations
+
+ <!-- This section is meant to convey both technical and sociotechnical limitations. -->
+
+ [More Information Needed]
+
+ ### Recommendations
+
+ <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
+
+ Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
+
+ ## How to Get Started with the Model
+
+ Use the code below to get started with the model.
+
+ [More Information Needed]
+
+ ## Training Details
+
+ ### Training Data
+
+ <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
+
+ [More Information Needed]
+
+ ### Training Procedure
+
+ <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
+
+ #### Preprocessing [optional]
+
+ [More Information Needed]
+
+
+ #### Training Hyperparameters
+
+ - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
+
+ #### Speeds, Sizes, Times [optional]
+
+ <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
+
+ [More Information Needed]
+
+ ## Evaluation
+
+ <!-- This section describes the evaluation protocols and provides the results. -->
+
+ ### Testing Data, Factors & Metrics
+
+ #### Testing Data
+
+ <!-- This should link to a Dataset Card if possible. -->
+
+ [More Information Needed]
+
+ #### Factors
+
+ <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
+
+ [More Information Needed]
+
+ #### Metrics
+
+ <!-- These are the evaluation metrics being used, ideally with a description of why. -->
+
+ [More Information Needed]
+
+ ### Results
+
+ [More Information Needed]
+
+ #### Summary
+
+
+
+ ## Model Examination [optional]
+
+ <!-- Relevant interpretability work for the model goes here -->
+
+ [More Information Needed]
+
+ ## Environmental Impact
+
+ <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
+
+ Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
+
+ - **Hardware Type:** [More Information Needed]
+ - **Hours used:** [More Information Needed]
+ - **Cloud Provider:** [More Information Needed]
+ - **Compute Region:** [More Information Needed]
+ - **Carbon Emitted:** [More Information Needed]
+
+ ## Technical Specifications [optional]
+
+ ### Model Architecture and Objective
+
+ [More Information Needed]
+
+ ### Compute Infrastructure
+
+ [More Information Needed]
+
+ #### Hardware
+
+ [More Information Needed]
+
+ #### Software
+
+ [More Information Needed]
+
+ ## Citation [optional]
+
+ <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
+
+ **BibTeX:**
+
+ [More Information Needed]
+
+ **APA:**
+
+ [More Information Needed]
+
+ ## Glossary [optional]
+
+ <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
+
+ [More Information Needed]
+
+ ## More Information [optional]
+
+ [More Information Needed]
+
+ ## Model Card Authors [optional]
+
+ [More Information Needed]
+
+ ## Model Card Contact
+
+ [More Information Needed]
+ ### Framework versions
+
+ - PEFT 0.14.0
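The card's quickstart section is still `[More Information Needed]`. A minimal loading sketch, assuming the adapter is published on the Hub; the repo id below is a placeholder (the real id is not stated in this commit):

```python
# Sketch: load the bert-base-uncased base model and attach this LoRA adapter
# with PEFT. The adapter_id default is a placeholder assumption; substitute the
# actual Hub repo id or a local checkpoint directory.
# Requires: pip install transformers peft

def load_classifier(adapter_id="<hub-user>/<adapter-repo>"):
    # Imports kept local so the sketch reads without transformers/peft installed.
    from peft import PeftModel
    from transformers import AutoModelForSequenceClassification, AutoTokenizer

    # Base weights come from bert-base-uncased (per adapter_config.json);
    # PeftModel.from_pretrained then loads adapter_model.safetensors on top.
    base = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased")
    model = PeftModel.from_pretrained(base, adapter_id)
    tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
    model.eval()
    return model, tokenizer
```

With the returned pair, feeding `tokenizer(text, return_tensors="pt")` into `model(...)` produces sequence-classification logits (the adapter's `task_type` is `SEQ_CLS`).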
adapter_config.json ADDED
@@ -0,0 +1,37 @@
+ {
+ "alpha_pattern": {},
+ "auto_mapping": null,
+ "base_model_name_or_path": "bert-base-uncased",
+ "bias": "none",
+ "eva_config": null,
+ "exclude_modules": null,
+ "fan_in_fan_out": false,
+ "inference_mode": true,
+ "init_lora_weights": true,
+ "layer_replication": null,
+ "layers_pattern": null,
+ "layers_to_transform": null,
+ "loftq_config": {},
+ "lora_alpha": 16,
+ "lora_bias": false,
+ "lora_dropout": 0.1,
+ "megatron_config": null,
+ "megatron_core": "megatron.core",
+ "modules_to_save": [
+ "classifier",
+ "score"
+ ],
+ "peft_type": "LORA",
+ "r": 8,
+ "rank_pattern": {},
+ "revision": null,
+ "target_modules": [
+ "key",
+ "value",
+ "query",
+ "output.dense"
+ ],
+ "task_type": "SEQ_CLS",
+ "use_dora": false,
+ "use_rslora": false
+ }
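A quick sketch of how the key hyperparameters in this adapter config combine. In LoRA the effective update is W + (lora_alpha / r) · B·A, so alpha=16 with rank r=8 applies the learned low-rank delta at scale 2.0 (plain LoRA here, since `use_rslora` is false):

```python
# Effective LoRA scaling implied by the adapter_config.json above.
import math

config = {
    "lora_alpha": 16,
    "r": 8,
    "use_rslora": False,
    "target_modules": ["key", "value", "query", "output.dense"],
}

def lora_scaling(cfg):
    # Plain LoRA scales by alpha / r; rsLoRA (not used by this adapter)
    # scales by alpha / sqrt(r).
    if cfg.get("use_rslora"):
        return cfg["lora_alpha"] / math.sqrt(cfg["r"])
    return cfg["lora_alpha"] / cfg["r"]

print(lora_scaling(config))  # → 2.0
```

The adapter targets the attention `key`/`value`/`query` projections plus `output.dense`, while `modules_to_save` keeps the full (non-LoRA) classifier head trainable.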
adapter_model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:dec0b048cea6355e260541417fd8907acd2ebe40a00d0f723ee4e55fc4f1d7f9
+ size 3856816
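The three lines above are a Git LFS pointer: the repository stores only a sha256 oid and a byte size, not the weights themselves. A small sketch for checking a locally downloaded file against those two fields:

```python
# Verify a downloaded file against the oid and size recorded in a Git LFS pointer.
import hashlib
import os

def matches_pointer(path, oid_sha256, size):
    # Cheap size check first, then a streaming sha256 over the file contents.
    if os.path.getsize(path) != size:
        return False
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest() == oid_sha256
```

For this file that would be `matches_pointer("adapter_model.safetensors", "dec0b048…", 3856816)` with the full oid from the pointer.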
optimizer.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:393a7a888ba9b636bbd7f5808ac32f656fd7ecbdc1349d856db533fc4abb25af
+ size 7782586
rng_state.pth ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:49f36393e5c48dfa11bd7e26978f1e011775484e929eadd7904427684d942ee4
+ size 14244
scheduler.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:588920e3b4441cb41d54fdc1abfe70a4232c690ed6d4e22e58aa367ebe80436d
+ size 1064
special_tokens_map.json ADDED
@@ -0,0 +1,7 @@
+ {
+ "cls_token": "[CLS]",
+ "mask_token": "[MASK]",
+ "pad_token": "[PAD]",
+ "sep_token": "[SEP]",
+ "unk_token": "[UNK]"
+ }
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer_config.json ADDED
@@ -0,0 +1,57 @@
+ {
+ "add_prefix_space": true,
+ "added_tokens_decoder": {
+ "0": {
+ "content": "[PAD]",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "100": {
+ "content": "[UNK]",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "101": {
+ "content": "[CLS]",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "102": {
+ "content": "[SEP]",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "103": {
+ "content": "[MASK]",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ }
+ },
+ "clean_up_tokenization_spaces": false,
+ "cls_token": "[CLS]",
+ "do_lower_case": true,
+ "extra_special_tokens": {},
+ "mask_token": "[MASK]",
+ "model_max_length": 512,
+ "pad_token": "[PAD]",
+ "sep_token": "[SEP]",
+ "strip_accents": null,
+ "tokenize_chinese_chars": true,
+ "tokenizer_class": "BertTokenizer",
+ "unk_token": "[UNK]"
+ }
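The `added_tokens_decoder` above pins the five BERT special tokens to the standard bert-base-uncased ids. A small sketch inverting that mapping and wrapping a single sequence the way BERT expects (7592 is just an example token id):

```python
# id -> special token, taken from the tokenizer_config.json above.
added_tokens_decoder = {
    0: "[PAD]",
    100: "[UNK]",
    101: "[CLS]",
    102: "[SEP]",
    103: "[MASK]",
}

token_to_id = {tok: i for i, tok in added_tokens_decoder.items()}

def add_special_tokens(ids):
    # BERT single-sequence layout: [CLS] tokens [SEP]
    return [token_to_id["[CLS]"]] + ids + [token_to_id["[SEP]"]]

print(add_special_tokens([7592]))  # → [101, 7592, 102]
```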
trainer_state.json ADDED
@@ -0,0 +1,1137 @@
+ {
+ "best_metric": 0.0001737244747346267,
+ "best_model_checkpoint": "bert-base-uncased-lora-prompt-classification_2\\checkpoint-1500",
+ "epoch": 6.0,
+ "eval_steps": 500,
+ "global_step": 1500,
+ "is_hyper_param_search": false,
+ "is_local_process_zero": true,
+ "is_world_process_zero": true,
+ "log_history": [
+ {
+ "epoch": 0.04,
+ "grad_norm": 2.0812888145446777,
+ "learning_rate": 4.966666666666667e-05,
+ "loss": 0.705,
+ "step": 10
+ },
+ {
+ "epoch": 0.08,
+ "grad_norm": 3.288998603820801,
+ "learning_rate": 4.933333333333334e-05,
+ "loss": 0.6976,
+ "step": 20
+ },
+ {
+ "epoch": 0.12,
+ "grad_norm": 1.5486328601837158,
+ "learning_rate": 4.9e-05,
+ "loss": 0.6641,
+ "step": 30
+ },
+ {
+ "epoch": 0.16,
+ "grad_norm": 2.7412655353546143,
+ "learning_rate": 4.866666666666667e-05,
+ "loss": 0.644,
+ "step": 40
+ },
+ {
+ "epoch": 0.2,
+ "grad_norm": 5.236827373504639,
+ "learning_rate": 4.8333333333333334e-05,
+ "loss": 0.5961,
+ "step": 50
+ },
+ {
+ "epoch": 0.24,
+ "grad_norm": 7.2681169509887695,
+ "learning_rate": 4.8e-05,
+ "loss": 0.5451,
+ "step": 60
+ },
+ {
+ "epoch": 0.28,
+ "grad_norm": 3.0430655479431152,
+ "learning_rate": 4.766666666666667e-05,
+ "loss": 0.4545,
+ "step": 70
+ },
+ {
+ "epoch": 0.32,
+ "grad_norm": 5.339339733123779,
+ "learning_rate": 4.7333333333333336e-05,
+ "loss": 0.365,
+ "step": 80
+ },
+ {
+ "epoch": 0.36,
+ "grad_norm": 3.311725378036499,
+ "learning_rate": 4.7e-05,
+ "loss": 0.2772,
+ "step": 90
+ },
+ {
+ "epoch": 0.4,
+ "grad_norm": 3.0469141006469727,
+ "learning_rate": 4.666666666666667e-05,
+ "loss": 0.1911,
+ "step": 100
+ },
+ {
+ "epoch": 0.44,
+ "grad_norm": 1.9386370182037354,
+ "learning_rate": 4.633333333333333e-05,
+ "loss": 0.1238,
+ "step": 110
+ },
+ {
+ "epoch": 0.48,
+ "grad_norm": 1.5056577920913696,
+ "learning_rate": 4.600000000000001e-05,
+ "loss": 0.0883,
+ "step": 120
+ },
+ {
+ "epoch": 0.52,
+ "grad_norm": 0.8885862827301025,
+ "learning_rate": 4.566666666666667e-05,
+ "loss": 0.052,
+ "step": 130
+ },
+ {
+ "epoch": 0.56,
+ "grad_norm": 0.5265452265739441,
+ "learning_rate": 4.5333333333333335e-05,
+ "loss": 0.0263,
+ "step": 140
+ },
+ {
+ "epoch": 0.6,
+ "grad_norm": 0.40089932084083557,
+ "learning_rate": 4.5e-05,
+ "loss": 0.0204,
+ "step": 150
+ },
+ {
+ "epoch": 0.64,
+ "grad_norm": 0.2629603147506714,
+ "learning_rate": 4.466666666666667e-05,
+ "loss": 0.0106,
+ "step": 160
+ },
+ {
+ "epoch": 0.68,
+ "grad_norm": 0.2234336882829666,
+ "learning_rate": 4.433333333333334e-05,
+ "loss": 0.008,
+ "step": 170
+ },
+ {
+ "epoch": 0.72,
+ "grad_norm": 0.15165063738822937,
+ "learning_rate": 4.4000000000000006e-05,
+ "loss": 0.0062,
+ "step": 180
+ },
+ {
+ "epoch": 0.76,
+ "grad_norm": 0.137285515666008,
+ "learning_rate": 4.3666666666666666e-05,
+ "loss": 0.0047,
+ "step": 190
+ },
+ {
+ "epoch": 0.8,
+ "grad_norm": 0.1138443574309349,
+ "learning_rate": 4.3333333333333334e-05,
+ "loss": 0.0041,
+ "step": 200
+ },
+ {
+ "epoch": 0.84,
+ "grad_norm": 0.10386146605014801,
+ "learning_rate": 4.3e-05,
+ "loss": 0.0035,
+ "step": 210
+ },
+ {
+ "epoch": 0.88,
+ "grad_norm": 0.09160862863063812,
+ "learning_rate": 4.266666666666667e-05,
+ "loss": 0.0031,
+ "step": 220
+ },
+ {
+ "epoch": 0.92,
+ "grad_norm": 0.09328138083219528,
+ "learning_rate": 4.233333333333334e-05,
+ "loss": 0.0029,
+ "step": 230
+ },
+ {
+ "epoch": 0.96,
+ "grad_norm": 0.07151716202497482,
+ "learning_rate": 4.2e-05,
+ "loss": 0.0024,
+ "step": 240
+ },
+ {
+ "epoch": 1.0,
+ "grad_norm": 0.07315541058778763,
+ "learning_rate": 4.166666666666667e-05,
+ "loss": 0.0023,
+ "step": 250
+ },
+ {
+ "epoch": 1.0,
+ "eval_accuracy": 1.0,
+ "eval_loss": 0.0018067833734676242,
+ "eval_runtime": 95.3571,
+ "eval_samples_per_second": 10.476,
+ "eval_steps_per_second": 0.661,
+ "step": 250
+ },
+ {
+ "epoch": 1.04,
+ "grad_norm": 0.05993200093507767,
+ "learning_rate": 4.133333333333333e-05,
+ "loss": 0.0021,
+ "step": 260
+ },
+ {
+ "epoch": 1.08,
+ "grad_norm": 0.06033262610435486,
+ "learning_rate": 4.1e-05,
+ "loss": 0.002,
+ "step": 270
+ },
+ {
+ "epoch": 1.12,
+ "grad_norm": 0.05426369234919548,
+ "learning_rate": 4.066666666666667e-05,
+ "loss": 0.0017,
+ "step": 280
+ },
+ {
+ "epoch": 1.16,
+ "grad_norm": 0.05296269804239273,
+ "learning_rate": 4.0333333333333336e-05,
+ "loss": 0.0017,
+ "step": 290
+ },
+ {
+ "epoch": 1.2,
+ "grad_norm": 0.0470573753118515,
+ "learning_rate": 4e-05,
+ "loss": 0.0015,
+ "step": 300
+ },
+ {
+ "epoch": 1.24,
+ "grad_norm": 0.04728998243808746,
+ "learning_rate": 3.966666666666667e-05,
+ "loss": 0.0015,
+ "step": 310
+ },
+ {
+ "epoch": 1.28,
+ "grad_norm": 0.04595053195953369,
+ "learning_rate": 3.933333333333333e-05,
+ "loss": 0.0014,
+ "step": 320
+ },
+ {
+ "epoch": 1.32,
+ "grad_norm": 0.04252336546778679,
+ "learning_rate": 3.9000000000000006e-05,
+ "loss": 0.0013,
+ "step": 330
+ },
+ {
+ "epoch": 1.3599999999999999,
+ "grad_norm": 0.037202149629592896,
+ "learning_rate": 3.866666666666667e-05,
+ "loss": 0.0012,
+ "step": 340
+ },
+ {
+ "epoch": 1.4,
+ "grad_norm": 0.03442235291004181,
+ "learning_rate": 3.8333333333333334e-05,
+ "loss": 0.0011,
+ "step": 350
+ },
+ {
+ "epoch": 1.44,
+ "grad_norm": 0.03448282927274704,
+ "learning_rate": 3.8e-05,
+ "loss": 0.0011,
+ "step": 360
+ },
+ {
+ "epoch": 1.48,
+ "grad_norm": 0.03979218006134033,
+ "learning_rate": 3.766666666666667e-05,
+ "loss": 0.0011,
+ "step": 370
+ },
+ {
+ "epoch": 1.52,
+ "grad_norm": 0.03626847639679909,
+ "learning_rate": 3.733333333333334e-05,
+ "loss": 0.0011,
+ "step": 380
+ },
+ {
+ "epoch": 1.56,
+ "grad_norm": 0.029680874198675156,
+ "learning_rate": 3.7e-05,
+ "loss": 0.001,
+ "step": 390
+ },
+ {
+ "epoch": 1.6,
+ "grad_norm": 0.03621572256088257,
+ "learning_rate": 3.6666666666666666e-05,
+ "loss": 0.001,
+ "step": 400
+ },
+ {
+ "epoch": 1.6400000000000001,
+ "grad_norm": 0.03038485161960125,
+ "learning_rate": 3.633333333333333e-05,
+ "loss": 0.0009,
+ "step": 410
+ },
+ {
+ "epoch": 1.6800000000000002,
+ "grad_norm": 0.02917036972939968,
+ "learning_rate": 3.6e-05,
+ "loss": 0.0008,
+ "step": 420
+ },
+ {
+ "epoch": 1.72,
+ "grad_norm": 0.02514103427529335,
+ "learning_rate": 3.566666666666667e-05,
+ "loss": 0.0009,
+ "step": 430
+ },
+ {
+ "epoch": 1.76,
+ "grad_norm": 0.02489582635462284,
+ "learning_rate": 3.5333333333333336e-05,
+ "loss": 0.0008,
+ "step": 440
+ },
+ {
+ "epoch": 1.8,
+ "grad_norm": 0.03024277277290821,
+ "learning_rate": 3.5e-05,
+ "loss": 0.0008,
+ "step": 450
+ },
+ {
+ "epoch": 1.8399999999999999,
+ "grad_norm": 0.024949118494987488,
+ "learning_rate": 3.466666666666667e-05,
+ "loss": 0.0007,
+ "step": 460
+ },
+ {
+ "epoch": 1.88,
+ "grad_norm": 0.02324504777789116,
+ "learning_rate": 3.433333333333333e-05,
+ "loss": 0.0007,
+ "step": 470
+ },
+ {
+ "epoch": 1.92,
+ "grad_norm": 0.0246137585490942,
+ "learning_rate": 3.4000000000000007e-05,
+ "loss": 0.0007,
+ "step": 480
+ },
+ {
+ "epoch": 1.96,
+ "grad_norm": 0.022741537541151047,
+ "learning_rate": 3.366666666666667e-05,
+ "loss": 0.0007,
+ "step": 490
+ },
+ {
+ "epoch": 2.0,
+ "grad_norm": 0.020862502977252007,
+ "learning_rate": 3.3333333333333335e-05,
+ "loss": 0.0006,
+ "step": 500
+ },
+ {
+ "epoch": 2.0,
+ "eval_accuracy": 1.0,
+ "eval_loss": 0.0005460651591420174,
+ "eval_runtime": 95.4464,
+ "eval_samples_per_second": 10.467,
+ "eval_steps_per_second": 0.66,
+ "step": 500
+ },
+ {
+ "epoch": 2.04,
+ "grad_norm": 0.023316698148846626,
+ "learning_rate": 3.3e-05,
+ "loss": 0.0007,
+ "step": 510
+ },
+ {
+ "epoch": 2.08,
+ "grad_norm": 0.02048688754439354,
+ "learning_rate": 3.266666666666667e-05,
+ "loss": 0.0006,
+ "step": 520
+ },
+ {
+ "epoch": 2.12,
+ "grad_norm": 0.01914156787097454,
+ "learning_rate": 3.233333333333333e-05,
+ "loss": 0.0006,
+ "step": 530
+ },
+ {
+ "epoch": 2.16,
+ "grad_norm": 0.021429840475320816,
+ "learning_rate": 3.2000000000000005e-05,
+ "loss": 0.0006,
+ "step": 540
+ },
+ {
+ "epoch": 2.2,
+ "grad_norm": 0.019221629947423935,
+ "learning_rate": 3.1666666666666666e-05,
+ "loss": 0.0006,
+ "step": 550
+ },
+ {
+ "epoch": 2.24,
+ "grad_norm": 0.01785707101225853,
+ "learning_rate": 3.1333333333333334e-05,
+ "loss": 0.0006,
+ "step": 560
+ },
+ {
+ "epoch": 2.2800000000000002,
+ "grad_norm": 0.0171416774392128,
+ "learning_rate": 3.1e-05,
+ "loss": 0.0006,
+ "step": 570
+ },
+ {
+ "epoch": 2.32,
+ "grad_norm": 0.015822874382138252,
+ "learning_rate": 3.066666666666667e-05,
+ "loss": 0.0005,
+ "step": 580
+ },
+ {
+ "epoch": 2.36,
+ "grad_norm": 0.018631841987371445,
+ "learning_rate": 3.0333333333333337e-05,
+ "loss": 0.0005,
+ "step": 590
+ },
+ {
+ "epoch": 2.4,
+ "grad_norm": 0.016446832567453384,
+ "learning_rate": 3e-05,
+ "loss": 0.0005,
+ "step": 600
+ },
+ {
+ "epoch": 2.44,
+ "grad_norm": 0.019351772964000702,
+ "learning_rate": 2.9666666666666672e-05,
+ "loss": 0.0005,
+ "step": 610
+ },
+ {
+ "epoch": 2.48,
+ "grad_norm": 0.021138865500688553,
+ "learning_rate": 2.9333333333333336e-05,
+ "loss": 0.0005,
+ "step": 620
+ },
+ {
+ "epoch": 2.52,
+ "grad_norm": 0.01532831508666277,
+ "learning_rate": 2.9e-05,
+ "loss": 0.0005,
+ "step": 630
+ },
+ {
+ "epoch": 2.56,
+ "grad_norm": 0.015546442940831184,
+ "learning_rate": 2.8666666666666668e-05,
+ "loss": 0.0005,
+ "step": 640
+ },
+ {
+ "epoch": 2.6,
+ "grad_norm": 0.016567401587963104,
+ "learning_rate": 2.8333333333333335e-05,
+ "loss": 0.0005,
+ "step": 650
+ },
+ {
+ "epoch": 2.64,
+ "grad_norm": 0.015167050994932652,
+ "learning_rate": 2.8000000000000003e-05,
+ "loss": 0.0004,
+ "step": 660
+ },
+ {
+ "epoch": 2.68,
+ "grad_norm": 0.013147111982107162,
+ "learning_rate": 2.7666666666666667e-05,
+ "loss": 0.0004,
+ "step": 670
+ },
+ {
+ "epoch": 2.7199999999999998,
+ "grad_norm": 0.01637696847319603,
+ "learning_rate": 2.733333333333333e-05,
+ "loss": 0.0004,
+ "step": 680
+ },
+ {
+ "epoch": 2.76,
+ "grad_norm": 0.012778673321008682,
+ "learning_rate": 2.7000000000000002e-05,
+ "loss": 0.0004,
+ "step": 690
+ },
+ {
+ "epoch": 2.8,
+ "grad_norm": 0.013170059770345688,
+ "learning_rate": 2.6666666666666667e-05,
+ "loss": 0.0004,
+ "step": 700
+ },
+ {
+ "epoch": 2.84,
+ "grad_norm": 0.012528815306723118,
+ "learning_rate": 2.633333333333333e-05,
+ "loss": 0.0004,
+ "step": 710
+ },
+ {
+ "epoch": 2.88,
+ "grad_norm": 0.01420772634446621,
+ "learning_rate": 2.6000000000000002e-05,
+ "loss": 0.0004,
+ "step": 720
+ },
+ {
+ "epoch": 2.92,
+ "grad_norm": 0.011915834620594978,
+ "learning_rate": 2.5666666666666666e-05,
+ "loss": 0.0004,
+ "step": 730
+ },
+ {
+ "epoch": 2.96,
+ "grad_norm": 0.012909920886158943,
+ "learning_rate": 2.5333333333333337e-05,
+ "loss": 0.0004,
+ "step": 740
+ },
+ {
+ "epoch": 3.0,
+ "grad_norm": 0.0116231394931674,
+ "learning_rate": 2.5e-05,
+ "loss": 0.0004,
+ "step": 750
+ },
+ {
+ "epoch": 3.0,
+ "eval_accuracy": 1.0,
+ "eval_loss": 0.00031199140357784927,
+ "eval_runtime": 95.5734,
+ "eval_samples_per_second": 10.453,
+ "eval_steps_per_second": 0.659,
+ "step": 750
+ },
+ {
+ "epoch": 3.04,
+ "grad_norm": 0.011687643826007843,
+ "learning_rate": 2.466666666666667e-05,
+ "loss": 0.0004,
+ "step": 760
+ },
+ {
+ "epoch": 3.08,
+ "grad_norm": 0.01360296830534935,
+ "learning_rate": 2.4333333333333336e-05,
+ "loss": 0.0004,
+ "step": 770
+ },
+ {
+ "epoch": 3.12,
+ "grad_norm": 0.010453453287482262,
+ "learning_rate": 2.4e-05,
+ "loss": 0.0004,
+ "step": 780
+ },
+ {
+ "epoch": 3.16,
+ "grad_norm": 0.01295279711484909,
+ "learning_rate": 2.3666666666666668e-05,
+ "loss": 0.0004,
+ "step": 790
+ },
+ {
+ "epoch": 3.2,
+ "grad_norm": 0.012378768995404243,
+ "learning_rate": 2.3333333333333336e-05,
+ "loss": 0.0003,
+ "step": 800
+ },
+ {
+ "epoch": 3.24,
+ "grad_norm": 0.010903855785727501,
+ "learning_rate": 2.3000000000000003e-05,
+ "loss": 0.0003,
+ "step": 810
+ },
+ {
+ "epoch": 3.2800000000000002,
+ "grad_norm": 0.010296747088432312,
+ "learning_rate": 2.2666666666666668e-05,
+ "loss": 0.0003,
+ "step": 820
+ },
+ {
+ "epoch": 3.32,
+ "grad_norm": 0.011389556340873241,
+ "learning_rate": 2.2333333333333335e-05,
+ "loss": 0.0003,
+ "step": 830
+ },
+ {
+ "epoch": 3.36,
+ "grad_norm": 0.009990000165998936,
+ "learning_rate": 2.2000000000000003e-05,
+ "loss": 0.0003,
+ "step": 840
+ },
+ {
+ "epoch": 3.4,
+ "grad_norm": 0.011618678458034992,
+ "learning_rate": 2.1666666666666667e-05,
+ "loss": 0.0003,
+ "step": 850
+ },
+ {
+ "epoch": 3.44,
+ "grad_norm": 0.010683366097509861,
+ "learning_rate": 2.1333333333333335e-05,
+ "loss": 0.0003,
+ "step": 860
+ },
+ {
+ "epoch": 3.48,
+ "grad_norm": 0.013111468404531479,
+ "learning_rate": 2.1e-05,
+ "loss": 0.0003,
+ "step": 870
+ },
+ {
+ "epoch": 3.52,
+ "grad_norm": 0.011576538905501366,
+ "learning_rate": 2.0666666666666666e-05,
+ "loss": 0.0008,
+ "step": 880
+ },
+ {
+ "epoch": 3.56,
+ "grad_norm": 0.012379034422338009,
+ "learning_rate": 2.0333333333333334e-05,
+ "loss": 0.0003,
+ "step": 890
+ },
+ {
+ "epoch": 3.6,
+ "grad_norm": 0.011951645836234093,
+ "learning_rate": 2e-05,
+ "loss": 0.0003,
+ "step": 900
+ },
+ {
+ "epoch": 3.64,
+ "grad_norm": 0.013675806112587452,
+ "learning_rate": 1.9666666666666666e-05,
+ "loss": 0.0003,
+ "step": 910
+ },
+ {
+ "epoch": 3.68,
+ "grad_norm": 0.01228983886539936,
+ "learning_rate": 1.9333333333333333e-05,
+ "loss": 0.0003,
+ "step": 920
+ },
+ {
+ "epoch": 3.7199999999999998,
+ "grad_norm": 0.012014049105346203,
+ "learning_rate": 1.9e-05,
+ "loss": 0.0003,
+ "step": 930
+ },
+ {
+ "epoch": 3.76,
+ "grad_norm": 0.011737428605556488,
+ "learning_rate": 1.866666666666667e-05,
+ "loss": 0.0003,
+ "step": 940
+ },
+ {
+ "epoch": 3.8,
+ "grad_norm": 0.011398055590689182,
+ "learning_rate": 1.8333333333333333e-05,
+ "loss": 0.0003,
+ "step": 950
+ },
+ {
+ "epoch": 3.84,
+ "grad_norm": 0.01040700078010559,
+ "learning_rate": 1.8e-05,
+ "loss": 0.0003,
+ "step": 960
+ },
+ {
+ "epoch": 3.88,
+ "grad_norm": 0.010074478574097157,
+ "learning_rate": 1.7666666666666668e-05,
+ "loss": 0.0003,
+ "step": 970
+ },
+ {
+ "epoch": 3.92,
+ "grad_norm": 0.010768290609121323,
+ "learning_rate": 1.7333333333333336e-05,
+ "loss": 0.0003,
+ "step": 980
+ },
+ {
+ "epoch": 3.96,
+ "grad_norm": 0.009498749859631062,
+ "learning_rate": 1.7000000000000003e-05,
+ "loss": 0.0003,
+ "step": 990
+ },
+ {
+ "epoch": 4.0,
+ "grad_norm": 0.009427339769899845,
+ "learning_rate": 1.6666666666666667e-05,
+ "loss": 0.0003,
+ "step": 1000
+ },
+ {
+ "epoch": 4.0,
+ "eval_accuracy": 1.0,
+ "eval_loss": 0.0002258356544189155,
+ "eval_runtime": 95.7158,
+ "eval_samples_per_second": 10.437,
+ "eval_steps_per_second": 0.658,
+ "step": 1000
+ },
+ {
+ "epoch": 4.04,
+ "grad_norm": 0.010377222672104836,
+ "learning_rate": 1.6333333333333335e-05,
+ "loss": 0.0003,
+ "step": 1010
+ },
+ {
+ "epoch": 4.08,
+ "grad_norm": 0.009830745868384838,
+ "learning_rate": 1.6000000000000003e-05,
+ "loss": 0.0003,
+ "step": 1020
+ },
+ {
+ "epoch": 4.12,
+ "grad_norm": 0.010411952622234821,
+ "learning_rate": 1.5666666666666667e-05,
+ "loss": 0.0003,
+ "step": 1030
+ },
+ {
+ "epoch": 4.16,
+ "grad_norm": 0.009721982292830944,
+ "learning_rate": 1.5333333333333334e-05,
+ "loss": 0.0003,
+ "step": 1040
+ },
+ {
+ "epoch": 4.2,
+ "grad_norm": 0.009437228552997112,
+ "learning_rate": 1.5e-05,
+ "loss": 0.0003,
+ "step": 1050
+ },
+ {
+ "epoch": 4.24,
+ "grad_norm": 0.009827992878854275,
+ "learning_rate": 1.4666666666666668e-05,
+ "loss": 0.0003,
+ "step": 1060
+ },
+ {
+ "epoch": 4.28,
+ "grad_norm": 0.008840350434184074,
+ "learning_rate": 1.4333333333333334e-05,
+ "loss": 0.0003,
+ "step": 1070
+ },
+ {
+ "epoch": 4.32,
+ "grad_norm": 0.008061910979449749,
+ "learning_rate": 1.4000000000000001e-05,
+ "loss": 0.0003,
+ "step": 1080
+ },
+ {
+ "epoch": 4.36,
+ "grad_norm": 0.008564349263906479,
+ "learning_rate": 1.3666666666666666e-05,
+ "loss": 0.0003,
+ "step": 1090
+ },
+ {
+ "epoch": 4.4,
+ "grad_norm": 0.008754014037549496,
+ "learning_rate": 1.3333333333333333e-05,
+ "loss": 0.0003,
+ "step": 1100
+ },
+ {
+ "epoch": 4.44,
+ "grad_norm": 0.00901862047612667,
+ "learning_rate": 1.3000000000000001e-05,
+ "loss": 0.0002,
+ "step": 1110
+ },
+ {
+ "epoch": 4.48,
+ "grad_norm": 0.008502613753080368,
+ "learning_rate": 1.2666666666666668e-05,
+ "loss": 0.0002,
+ "step": 1120
+ },
+ {
+ "epoch": 4.52,
+ "grad_norm": 0.009442449547350407,
+ "learning_rate": 1.2333333333333334e-05,
+ "loss": 0.0002,
+ "step": 1130
+ },
+ {
+ "epoch": 4.5600000000000005,
+ "grad_norm": 0.00908522866666317,
+ "learning_rate": 1.2e-05,
+ "loss": 0.0002,
+ "step": 1140
+ },
+ {
+ "epoch": 4.6,
+ "grad_norm": 0.008341205306351185,
+ "learning_rate": 1.1666666666666668e-05,
+ "loss": 0.0002,
+ "step": 1150
+ },
+ {
+ "epoch": 4.64,
+ "grad_norm": 0.009036746807396412,
+ "learning_rate": 1.1333333333333334e-05,
+ "loss": 0.0002,
+ "step": 1160
+ },
+ {
+ "epoch": 4.68,
+ "grad_norm": 0.008981692604720592,
+ "learning_rate": 1.1000000000000001e-05,
+ "loss": 0.0002,
+ "step": 1170
+ },
+ {
+ "epoch": 4.72,
+ "grad_norm": 0.008504008874297142,
+ "learning_rate": 1.0666666666666667e-05,
+ "loss": 0.0002,
+ "step": 1180
+ },
+ {
+ "epoch": 4.76,
+ "grad_norm": 0.008434013463556767,
+ "learning_rate": 1.0333333333333333e-05,
+ "loss": 0.0002,
+ "step": 1190
+ },
+ {
+ "epoch": 4.8,
+ "grad_norm": 0.008060247637331486,
+ "learning_rate": 1e-05,
+ "loss": 0.0002,
+ "step": 1200
+ },
+ {
+ "epoch": 4.84,
+ "grad_norm": 0.008253726176917553,
+ "learning_rate": 9.666666666666667e-06,
+ "loss": 0.0002,
+ "step": 1210
+ },
+ {
+ "epoch": 4.88,
+ "grad_norm": 0.0085303308442235,
+ "learning_rate": 9.333333333333334e-06,
+ "loss": 0.0002,
+ "step": 1220
+ },
+ {
+ "epoch": 4.92,
+ "grad_norm": 0.008753238245844841,
+ "learning_rate": 9e-06,
+ "loss": 0.0002,
+ "step": 1230
+ },
+ {
+ "epoch": 4.96,
+ "grad_norm": 0.008418572135269642,
+ "learning_rate": 8.666666666666668e-06,
+ "loss": 0.0002,
+ "step": 1240
+ },
+ {
+ "epoch": 5.0,
+ "grad_norm": 0.00724000995978713,
+ "learning_rate": 8.333333333333334e-06,
+ "loss": 0.0002,
+ "step": 1250
+ },
+ {
+ "epoch": 5.0,
+ "eval_accuracy": 1.0
925
+ "eval_loss": 0.00018535394337959588,
926
+ "eval_runtime": 95.5057,
927
+ "eval_samples_per_second": 10.46,
928
+ "eval_steps_per_second": 0.66,
929
+ "step": 1250
930
+ },
931
+ {
932
+ "epoch": 5.04,
933
+ "grad_norm": 0.007719500456005335,
934
+ "learning_rate": 8.000000000000001e-06,
935
+ "loss": 0.0002,
936
+ "step": 1260
937
+ },
938
+ {
939
+ "epoch": 5.08,
940
+ "grad_norm": 0.0079694464802742,
941
+ "learning_rate": 7.666666666666667e-06,
942
+ "loss": 0.0002,
943
+ "step": 1270
944
+ },
945
+ {
946
+ "epoch": 5.12,
947
+ "grad_norm": 0.008065282367169857,
948
+ "learning_rate": 7.333333333333334e-06,
949
+ "loss": 0.0002,
950
+ "step": 1280
951
+ },
952
+ {
953
+ "epoch": 5.16,
954
+ "grad_norm": 0.007144368253648281,
955
+ "learning_rate": 7.000000000000001e-06,
956
+ "loss": 0.0002,
957
+ "step": 1290
958
+ },
959
+ {
960
+ "epoch": 5.2,
961
+ "grad_norm": 0.00810677744448185,
962
+ "learning_rate": 6.666666666666667e-06,
963
+ "loss": 0.0002,
964
+ "step": 1300
965
+ },
966
+ {
967
+ "epoch": 5.24,
968
+ "grad_norm": 0.0075319125317037106,
969
+ "learning_rate": 6.333333333333334e-06,
970
+ "loss": 0.0002,
971
+ "step": 1310
972
+ },
973
+ {
974
+ "epoch": 5.28,
975
+ "grad_norm": 0.007555096410214901,
976
+ "learning_rate": 6e-06,
977
+ "loss": 0.0002,
978
+ "step": 1320
979
+ },
980
+ {
981
+ "epoch": 5.32,
982
+ "grad_norm": 0.007818273268640041,
983
+ "learning_rate": 5.666666666666667e-06,
984
+ "loss": 0.0002,
985
+ "step": 1330
986
+ },
987
+ {
988
+ "epoch": 5.36,
989
+ "grad_norm": 0.007019245531409979,
990
+ "learning_rate": 5.333333333333334e-06,
991
+ "loss": 0.0002,
992
+ "step": 1340
993
+ },
994
+ {
995
+ "epoch": 5.4,
996
+ "grad_norm": 0.008325035683810711,
997
+ "learning_rate": 5e-06,
998
+ "loss": 0.0002,
999
+ "step": 1350
1000
+ },
1001
+ {
1002
+ "epoch": 5.44,
1003
+ "grad_norm": 0.007825586013495922,
1004
+ "learning_rate": 4.666666666666667e-06,
1005
+ "loss": 0.0002,
1006
+ "step": 1360
1007
+ },
1008
+ {
1009
+ "epoch": 5.48,
1010
+ "grad_norm": 0.008079560473561287,
1011
+ "learning_rate": 4.333333333333334e-06,
1012
+ "loss": 0.0002,
1013
+ "step": 1370
1014
+ },
1015
+ {
1016
+ "epoch": 5.52,
1017
+ "grad_norm": 0.008748218417167664,
1018
+ "learning_rate": 4.000000000000001e-06,
1019
+ "loss": 0.0002,
1020
+ "step": 1380
1021
+ },
1022
+ {
1023
+ "epoch": 5.5600000000000005,
1024
+ "grad_norm": 0.008296054787933826,
1025
+ "learning_rate": 3.666666666666667e-06,
1026
+ "loss": 0.0002,
1027
+ "step": 1390
1028
+ },
1029
+ {
1030
+ "epoch": 5.6,
1031
+ "grad_norm": 0.007903476245701313,
1032
+ "learning_rate": 3.3333333333333333e-06,
1033
+ "loss": 0.0002,
1034
+ "step": 1400
1035
+ },
1036
+ {
1037
+ "epoch": 5.64,
1038
+ "grad_norm": 0.007066323887556791,
1039
+ "learning_rate": 3e-06,
1040
+ "loss": 0.0002,
1041
+ "step": 1410
1042
+ },
1043
+ {
1044
+ "epoch": 5.68,
1045
+ "grad_norm": 0.008042844943702221,
1046
+ "learning_rate": 2.666666666666667e-06,
1047
+ "loss": 0.0002,
1048
+ "step": 1420
1049
+ },
1050
+ {
1051
+ "epoch": 5.72,
1052
+ "grad_norm": 0.007465668488293886,
1053
+ "learning_rate": 2.3333333333333336e-06,
1054
+ "loss": 0.0002,
1055
+ "step": 1430
1056
+ },
1057
+ {
1058
+ "epoch": 5.76,
1059
+ "grad_norm": 0.008133934810757637,
1060
+ "learning_rate": 2.0000000000000003e-06,
1061
+ "loss": 0.0002,
1062
+ "step": 1440
1063
+ },
1064
+ {
1065
+ "epoch": 5.8,
1066
+ "grad_norm": 0.006983849685639143,
1067
+ "learning_rate": 1.6666666666666667e-06,
1068
+ "loss": 0.0002,
1069
+ "step": 1450
1070
+ },
1071
+ {
1072
+ "epoch": 5.84,
1073
+ "grad_norm": 0.007427727337926626,
1074
+ "learning_rate": 1.3333333333333334e-06,
1075
+ "loss": 0.0002,
1076
+ "step": 1460
1077
+ },
1078
+ {
1079
+ "epoch": 5.88,
1080
+ "grad_norm": 0.006941487547010183,
1081
+ "learning_rate": 1.0000000000000002e-06,
1082
+ "loss": 0.0002,
1083
+ "step": 1470
1084
+ },
1085
+ {
1086
+ "epoch": 5.92,
1087
+ "grad_norm": 0.00762647669762373,
1088
+ "learning_rate": 6.666666666666667e-07,
1089
+ "loss": 0.0002,
1090
+ "step": 1480
1091
+ },
1092
+ {
1093
+ "epoch": 5.96,
1094
+ "grad_norm": 0.007048056926578283,
1095
+ "learning_rate": 3.3333333333333335e-07,
1096
+ "loss": 0.0002,
1097
+ "step": 1490
1098
+ },
1099
+ {
1100
+ "epoch": 6.0,
1101
+ "grad_norm": 0.007430908735841513,
1102
+ "learning_rate": 0.0,
1103
+ "loss": 0.0002,
1104
+ "step": 1500
1105
+ },
1106
+ {
1107
+ "epoch": 6.0,
1108
+ "eval_accuracy": 1.0,
1109
+ "eval_loss": 0.0001737244747346267,
1110
+ "eval_runtime": 95.6121,
1111
+ "eval_samples_per_second": 10.448,
1112
+ "eval_steps_per_second": 0.659,
1113
+ "step": 1500
1114
+ }
1115
+ ],
1116
+ "logging_steps": 10,
1117
+ "max_steps": 1500,
1118
+ "num_input_tokens_seen": 0,
1119
+ "num_train_epochs": 6,
1120
+ "save_steps": 500,
1121
+ "stateful_callbacks": {
1122
+ "TrainerControl": {
1123
+ "args": {
1124
+ "should_epoch_stop": false,
1125
+ "should_evaluate": false,
1126
+ "should_log": false,
1127
+ "should_save": true,
1128
+ "should_training_stop": true
1129
+ },
1130
+ "attributes": {}
1131
+ }
1132
+ },
1133
+ "total_flos": 6383847995006976.0,
1134
+ "train_batch_size": 16,
1135
+ "trial_name": null,
1136
+ "trial_params": null
1137
+ }
training_args.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:3d2a8c70f242690965017da6ac92115bba0e0fc29ff1f77a9bd2c922fd8d2b30
+ size 5368
vocab.txt ADDED
The diff for this file is too large to render. See raw diff
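The `trainer_state.json` added above is the standard log the Hugging Face `Trainer` writes next to each checkpoint: a `log_history` list where training entries carry `"loss"` and evaluation entries carry `"eval_loss"`. A minimal sketch of separating the two, with two representative entries from the log inlined so it runs standalone (in practice you would `json.load()` the file from the checkpoint directory):

```python
# Sketch: split a Trainer log_history into train and eval entries.
# The two entries below are copied from the trainer_state.json above;
# a real script would load the full file with json.load() instead.
state = {
    "log_history": [
        {"epoch": 6.0, "grad_norm": 0.007430908735841513,
         "learning_rate": 0.0, "loss": 0.0002, "step": 1500},
        {"epoch": 6.0, "eval_accuracy": 1.0,
         "eval_loss": 0.0001737244747346267, "step": 1500},
    ],
    "max_steps": 1500,
    "num_train_epochs": 6,
}

# Training entries log "loss"; evaluation entries log "eval_loss"
# (and never a plain "loss" key), so key presence distinguishes them.
train_log = [e for e in state["log_history"] if "loss" in e]
eval_log = [e for e in state["log_history"] if "eval_loss" in e]

print(train_log[-1]["loss"])          # 0.0002
print(eval_log[-1]["eval_accuracy"])  # 1.0
```

The same key-presence filter works on the full 1500-step log to recover the loss curve for plotting.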