[2024-01-26 12:54:39,523] torch.distributed.run: [WARNING] master_addr is only used for static rdzv_backend and when rdzv_endpoint is not specified.
01/26/2024 12:54:44 - WARNING - __main__ - Process rank: 0, device: cuda:0, n_gpu: 1, distributed training: True, 16-bits training: False
01/26/2024 12:54:44 - INFO - __main__ - Training/evaluation parameters Seq2SeqTrainingArguments(
_n_gpu=1,
adafactor=False,
adam_beta1=0.9,
adam_beta2=0.999,
adam_epsilon=1e-08,
auto_find_batch_size=False,
bf16=False,
bf16_full_eval=False,
data_seed=None,
dataloader_drop_last=False,
dataloader_num_workers=0,
dataloader_persistent_workers=False,
dataloader_pin_memory=True,
ddp_backend=None,
ddp_broadcast_buffers=None,
ddp_bucket_cap_mb=None,
ddp_find_unused_parameters=False,
ddp_timeout=1800,
debug=[],
deepspeed=None,
disable_tqdm=False,
dispatch_batches=None,
do_eval=False,
do_predict=False,
do_train=False,
eval_accumulation_steps=None,
eval_delay=0,
eval_steps=None,
evaluation_strategy=no,
fp16=False,
fp16_backend=auto,
fp16_full_eval=False,
fp16_opt_level=O1,
fsdp=[],
fsdp_config={'min_num_params': 0, 'xla': False, 'xla_fsdp_grad_ckpt': False},
fsdp_min_num_params=0,
fsdp_transformer_layer_cls_to_wrap=None,
full_determinism=False,
generation_config=None,
generation_max_length=None,
generation_num_beams=None,
gradient_accumulation_steps=32,
gradient_checkpointing=False,
gradient_checkpointing_kwargs=None,
greater_is_better=None,
group_by_length=False,
half_precision_backend=auto,
hub_always_push=False,
hub_model_id=None,
hub_private_repo=False,
hub_strategy=every_save,
hub_token=<HUB_TOKEN>,
ignore_data_skip=False,
include_inputs_for_metrics=False,
include_num_input_tokens_seen=False,
include_tokens_per_second=False,
jit_mode_eval=False,
label_names=None,
label_smoothing_factor=0.0,
learning_rate=0.02,
length_column_name=length,
load_best_model_at_end=False,
local_rank=0,
log_level=passive,
log_level_replica=warning,
log_on_each_node=True,
logging_dir=output/privacy_detection_pt-20240126-125436-128-2e-2/runs/Jan26_12-54-44_ubuntu1804,
logging_first_step=False,
logging_nan_inf_filter=True,
logging_steps=1.0,
logging_strategy=steps,
lr_scheduler_kwargs={},
lr_scheduler_type=linear,
max_grad_norm=1.0,
max_steps=100,
metric_for_best_model=None,
mp_parameters=,
neftune_noise_alpha=None,
no_cuda=False,
num_train_epochs=3.0,
optim=adamw_torch,
optim_args=None,
output_dir=output/privacy_detection_pt-20240126-125436-128-2e-2,
overwrite_output_dir=False,
past_index=-1,
per_device_eval_batch_size=8,
per_device_train_batch_size=1,
predict_with_generate=False,
prediction_loss_only=False,
push_to_hub=False,
push_to_hub_model_id=None,
push_to_hub_organization=None,
push_to_hub_token=<PUSH_TO_HUB_TOKEN>,
ray_scope=last,
remove_unused_columns=True,
report_to=[],
resume_from_checkpoint=True,
run_name=output/privacy_detection_pt-20240126-125436-128-2e-2,
save_on_each_node=False,
save_only_model=False,
save_safetensors=False,
save_steps=500,
save_strategy=steps,
save_total_limit=None,
seed=42,
skip_memory_metrics=True,
sortish_sampler=False,
split_batches=False,
tf32=None,
torch_compile=False,
torch_compile_backend=None,
torch_compile_mode=None,
torchdynamo=None,
tpu_metrics_debug=False,
tpu_num_cores=None,
use_cpu=False,
use_ipex=False,
use_legacy_prediction_loop=False,
use_mps_device=False,
warmup_ratio=0.0,
warmup_steps=0,
weight_decay=0.0,
)
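
The dump above can be reproduced programmatically. A minimal sketch (the actual launch script and config file are not shown in this log; values are copied verbatim from the dump) of the key hyperparameters using the Hugging Face `transformers` API:

```python
# Sketch only: rebuilds the salient Seq2SeqTrainingArguments from the dump
# above. The real run was launched via torch.distributed.run with a config
# not included in this log.
from transformers import Seq2SeqTrainingArguments

args = Seq2SeqTrainingArguments(
    output_dir="output/privacy_detection_pt-20240126-125436-128-2e-2",
    learning_rate=2e-2,              # an unusually high LR, typical for P-Tuning v2
    per_device_train_batch_size=1,
    gradient_accumulation_steps=32,  # effective train batch size = 1 * 32 = 32
    max_steps=100,                   # overrides num_train_epochs=3.0 (see below)
    logging_steps=1,
    save_steps=500,
    seed=42,
)
```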
[INFO|configuration_utils.py:729] 2024-01-26 12:54:45,398 >> loading configuration file config.json from cache at /root/.cache/huggingface/hub/models--THUDM--chatglm3-6b/snapshots/37f2196f481f8989ea443be625d05f97043652ea/config.json
[INFO|configuration_utils.py:729] 2024-01-26 12:54:45,957 >> loading configuration file config.json from cache at /root/.cache/huggingface/hub/models--THUDM--chatglm3-6b/snapshots/37f2196f481f8989ea443be625d05f97043652ea/config.json
[INFO|configuration_utils.py:792] 2024-01-26 12:54:45,960 >> Model config ChatGLMConfig {
  "_name_or_path": "THUDM/chatglm3-6b",
  "add_bias_linear": false,
  "add_qkv_bias": true,
  "apply_query_key_layer_scaling": true,
  "apply_residual_connection_post_layernorm": false,
  "architectures": [
    "ChatGLMModel"
  ],
  "attention_dropout": 0.0,
  "attention_softmax_in_fp32": true,
  "auto_map": {
    "AutoConfig": "THUDM/chatglm3-6b--configuration_chatglm.ChatGLMConfig",
    "AutoModel": "THUDM/chatglm3-6b--modeling_chatglm.ChatGLMForConditionalGeneration",
    "AutoModelForCausalLM": "THUDM/chatglm3-6b--modeling_chatglm.ChatGLMForConditionalGeneration",
    "AutoModelForSeq2SeqLM": "THUDM/chatglm3-6b--modeling_chatglm.ChatGLMForConditionalGeneration",
    "AutoModelForSequenceClassification": "THUDM/chatglm3-6b--modeling_chatglm.ChatGLMForSequenceClassification"
  },
  "bias_dropout_fusion": true,
  "classifier_dropout": null,
  "eos_token_id": 2,
  "ffn_hidden_size": 13696,
  "fp32_residual_connection": false,
  "hidden_dropout": 0.0,
  "hidden_size": 4096,
  "kv_channels": 128,
  "layernorm_epsilon": 1e-05,
  "model_type": "chatglm",
  "multi_query_attention": true,
  "multi_query_group_num": 2,
  "num_attention_heads": 32,
  "num_layers": 28,
  "original_rope": true,
  "pad_token_id": 0,
  "padded_vocab_size": 65024,
  "post_layer_norm": true,
  "pre_seq_len": null,
  "prefix_projection": false,
  "quantization_bit": 0,
  "rmsnorm": true,
  "seq_length": 8192,
  "tie_word_embeddings": false,
  "torch_dtype": "float16",
  "transformers_version": "4.37.1",
  "use_cache": true,
  "vocab_size": 65024
}

[INFO|tokenization_utils_base.py:2027] 2024-01-26 12:54:46,519 >> loading file tokenizer.model from cache at /root/.cache/huggingface/hub/models--THUDM--chatglm3-6b/snapshots/37f2196f481f8989ea443be625d05f97043652ea/tokenizer.model
[INFO|tokenization_utils_base.py:2027] 2024-01-26 12:54:46,519 >> loading file added_tokens.json from cache at None
[INFO|tokenization_utils_base.py:2027] 2024-01-26 12:54:46,519 >> loading file special_tokens_map.json from cache at None
[INFO|tokenization_utils_base.py:2027] 2024-01-26 12:54:46,519 >> loading file tokenizer_config.json from cache at /root/.cache/huggingface/hub/models--THUDM--chatglm3-6b/snapshots/37f2196f481f8989ea443be625d05f97043652ea/tokenizer_config.json
[INFO|tokenization_utils_base.py:2027] 2024-01-26 12:54:46,519 >> loading file tokenizer.json from cache at None
[INFO|modeling_utils.py:3478] 2024-01-26 12:54:47,170 >> loading weights file model.safetensors from cache at /root/.cache/huggingface/hub/models--THUDM--chatglm3-6b/snapshots/37f2196f481f8989ea443be625d05f97043652ea/model.safetensors.index.json
[INFO|configuration_utils.py:826] 2024-01-26 12:54:47,177 >> Generate config GenerationConfig {
  "eos_token_id": 2,
  "pad_token_id": 0,
  "use_cache": false
}


Loading checkpoint shards: 100%|██████████| 7/7 [00:19<00:00,  2.84s/it]
[INFO|modeling_utils.py:4352] 2024-01-26 12:55:07,172 >> All model checkpoint weights were used when initializing ChatGLMForConditionalGeneration.

[WARNING|modeling_utils.py:4354] 2024-01-26 12:55:07,173 >> Some weights of ChatGLMForConditionalGeneration were not initialized from the model checkpoint at THUDM/chatglm3-6b and are newly initialized: ['transformer.prefix_encoder.embedding.weight']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
[INFO|modeling_utils.py:3897] 2024-01-26 12:55:07,458 >> Generation config file not found, using a generation config created from the model config.
Sanity Check >>>>>>>>>>>>>
           '[gMASK]':  64790 ->   -100
               'sop':  64792 ->   -100
                  '':  30910 ->   -100
                 '请':  55073 ->   -100
                '找出':  40369 ->   -100
                '下面':  33182 ->   -100
                '文本':  36704 ->   -100
                '中的':  31697 ->   -100
          'position':   6523 ->   -100
                 '：':  31211 ->   -100
                '艺术':  31835 ->   -100
                 '是':  54532 ->   -100
               '相同的':  38815 ->   -100
                 '，':  31123 ->   -100
                '音乐':  32000 ->   -100
                '美术':  33020 ->   -100
                '体育':  32214 ->   -100
                 '三':  54645 ->   -100
                 '样':  54741 ->   -100
                '都是':  31700 ->   -100
                '艺术':  31835 ->   -100
                '。，':  37843 ->   -100
                 '三':  54645 ->   -100
                 '样':  54741 ->   -100
                '艺术':  31835 ->   -100
                '都是':  31700 ->   -100
                 '靠':  55518 ->   -100
                '感觉':  32044 ->   -100
                 '的':  54530 ->   -100
                 '。':  31155 ->   -100
                '感觉':  32044 ->   -100
                '好玩':  42814 ->   -100
                '起来':  31841 ->   -100
                '就很':  40030 ->   -100
                '轻松':  33550 ->   -100
                 '，':  31123 ->   -100
                '所以':  31672 ->   -100
                '叫做':  35528 ->   -100
                 '玩':  55409 ->   -100
                '艺术':  31835 ->   -100
                 '。':  31155 ->   -100
                 '没':  54721 ->   -100
                '感觉':  32044 ->   -100
               '找不到':  37779 ->   -100
                 '北':  54760 ->   -100
                 '的':  54530 ->   -100
                '干脆':  43396 ->   -100
                 '别':  54835 ->   -100
                 '玩':  55409 ->   -100
                 '了':  54537 ->   -100
                 '！':  31404 ->   -100
                 '，':  31123 ->   -100
                '香港':  31776 ->   -100
                '电影':  31867 ->   -100
                '国语':  54385 ->   -100
                '配音':  40392 ->   -100
                '名家':  40465 ->   -100
                 '周':  54896 ->   -100
                 '思':  54872 ->   -100
                 '平':  54678 ->   -100
                 '，':  31123 ->   -100
               '代表作':  43527 ->   -100
                 '有':  54536 ->   -100
               'TVB':  42671 ->   -100
                 '《':  54611 ->   -100
                '上海':  31770 ->   -100
                 '滩':  56928 ->   -100
                 '》':  54612 ->   -100
                 '周':  54896 ->   -100
                 '润':  55826 ->   -100
                 '发':  54559 ->   -100
                 '等':  54609 ->   -100
                '香港':  37944 ->  37944
                '电影':  31867 ->  31867
                '国语':  54385 ->  54385
                '配音':  40392 ->  40392
                '名家':  40465 ->  40465
                  '':      2 ->      2
<<<<<<<<<<<<< Sanity Check
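
The sanity check prints the standard causal-LM label masking: every prompt token gets label -100 (the default `ignore_index` of PyTorch's `CrossEntropyLoss`), so only the response tokens and the trailing EOS (id 2) contribute to the loss. A minimal sketch of that scheme (the function name and sample ids are illustrative, not taken from this run's code):

```python
def build_labels(prompt_ids, response_ids, eos_token_id=2):
    """Mask prompt tokens with -100 so CrossEntropyLoss ignores them."""
    input_ids = prompt_ids + response_ids + [eos_token_id]
    labels = [-100] * len(prompt_ids) + response_ids + [eos_token_id]
    return input_ids, labels

# Mirrors the tail of the sanity check above: response token ids keep
# themselves as labels, everything before them is masked out.
ids, labels = build_labels([64790, 64792, 30910], [37944, 31867, 54385])
print(list(zip(ids, labels)))
```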
01/26/2024 12:55:08 - WARNING - accelerate.utils.other - Detected kernel version 5.4.0, which is below the recommended minimum of 5.5.0; this can cause the process to hang. It is recommended to upgrade the kernel to the minimum version or higher.
[INFO|trainer.py:522] 2024-01-26 12:55:20,019 >> max_steps is given, it will override any value given in num_train_epochs
[WARNING|modeling_utils.py:2134] 2024-01-26 12:55:20,020 >> You are using an old version of the checkpointing format that is deprecated (we will also silently ignore `gradient_checkpointing_kwargs` in case you passed it). Please update to the new format on your modeling file. To use the new format, you need to completely remove the definition of the method `_set_gradient_checkpointing` in your model.
[INFO|trainer.py:1721] 2024-01-26 12:55:21,544 >> ***** Running training *****
[INFO|trainer.py:1722] 2024-01-26 12:55:21,544 >>   Num examples = 2,515
[INFO|trainer.py:1723] 2024-01-26 12:55:21,544 >>   Num Epochs = 2
[INFO|trainer.py:1724] 2024-01-26 12:55:21,544 >>   Instantaneous batch size per device = 1
[INFO|trainer.py:1727] 2024-01-26 12:55:21,544 >>   Total train batch size (w. parallel, distributed & accumulation) = 32
[INFO|trainer.py:1728] 2024-01-26 12:55:21,544 >>   Gradient Accumulation steps = 32
[INFO|trainer.py:1729] 2024-01-26 12:55:21,544 >>   Total optimization steps = 100
[INFO|trainer.py:1730] 2024-01-26 12:55:21,545 >>   Number of trainable parameters = 1,835,008
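The count of 1,835,008 trainable parameters is consistent with a P-Tuning v2 prefix encoder and the model config printed earlier, assuming the ChatGLM3 layout of a single embedding of shape (pre_seq_len, num_layers * kv_channels * multi_query_group_num * 2):

```python
# Back-of-the-envelope check of the trainable-parameter count. pre_seq_len=128
# is inferred from the run name "...-128-2e-2"; the other values come from the
# ChatGLMConfig above. The single-embedding PrefixEncoder layout is an
# assumption based on the THUDM modeling code.
pre_seq_len = 128
num_layers = 28
kv_channels = 128
multi_query_group_num = 2
print(pre_seq_len * num_layers * kv_channels * multi_query_group_num * 2)
# -> 1835008, matching "Number of trainable parameters = 1,835,008"
```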

/home/vipuser/miniconda3/envs/GLM/lib/python3.10/site-packages/torch/utils/checkpoint.py:429: UserWarning: torch.utils.checkpoint: please pass in use_reentrant=True or use_reentrant=False explicitly. The default value of use_reentrant will be updated to be False in the future. To maintain current behavior, pass use_reentrant=True. It is recommended that you use use_reentrant=False. Refer to docs for more details on the differences between the two variants.
  warnings.warn(

{'loss': 0.8181, 'learning_rate': 0.0198, 'epoch': 0.01}
{'loss': 0.787, 'learning_rate': 0.0196, 'epoch': 0.03}
{'loss': 1.0047, 'learning_rate': 0.0194, 'epoch': 0.04}
{'loss': 0.8688, 'learning_rate': 0.0192, 'epoch': 0.05}
{'loss': 0.7173, 'learning_rate': 0.019, 'epoch': 0.06}
{'loss': 0.5175, 'learning_rate': 0.0188, 'epoch': 0.08}
{'loss': 0.7559, 'learning_rate': 0.018600000000000002, 'epoch': 0.09}
{'loss': 0.9278, 'learning_rate': 0.0184, 'epoch': 0.1}
{'loss': 0.6011, 'learning_rate': 0.0182, 'epoch': 0.11}
{'loss': 0.8014, 'learning_rate': 0.018000000000000002, 'epoch': 0.13}
{'loss': 1.2581, 'learning_rate': 0.0178, 'epoch': 0.14}
{'loss': 0.9886, 'learning_rate': 0.0176, 'epoch': 0.15}
{'loss': 0.7866, 'learning_rate': 0.0174, 'epoch': 0.17}
{'loss': 0.936, 'learning_rate': 0.0172, 'epoch': 0.18}
{'loss': 1.0503, 'learning_rate': 0.017, 'epoch': 0.19}
{'loss': 0.5689, 'learning_rate': 0.0168, 'epoch': 0.2}
{'loss': 0.8576, 'learning_rate': 0.0166, 'epoch': 0.22}
{'loss': 1.0946, 'learning_rate': 0.016399999999999998, 'epoch': 0.23}
{'loss': 0.9075, 'learning_rate': 0.016200000000000003, 'epoch': 0.24}
{'loss': 1.1441, 'learning_rate': 0.016, 'epoch': 0.25}
{'loss': 0.7794, 'learning_rate': 0.0158, 'epoch': 0.27}
{'loss': 0.9574, 'learning_rate': 0.015600000000000001, 'epoch': 0.28}
{'loss': 0.8937, 'learning_rate': 0.0154, 'epoch': 0.29}
{'loss': 0.709, 'learning_rate': 0.0152, 'epoch': 0.31}
{'loss': 0.8731, 'learning_rate': 0.015, 'epoch': 0.32}
{'loss': 0.719, 'learning_rate': 0.0148, 'epoch': 0.33}
{'loss': 0.7419, 'learning_rate': 0.0146, 'epoch': 0.34}
{'loss': 0.9224, 'learning_rate': 0.0144, 'epoch': 0.36}
{'loss': 1.0802, 'learning_rate': 0.014199999999999999, 'epoch': 0.37}
{'loss': 0.8187, 'learning_rate': 0.013999999999999999, 'epoch': 0.38}
{'loss': 0.615, 'learning_rate': 0.0138, 'epoch': 0.39}
{'loss': 0.5214, 'learning_rate': 0.013600000000000001, 'epoch': 0.41}
{'loss': 0.649, 'learning_rate': 0.0134, 'epoch': 0.42}
{'loss': 0.6523, 'learning_rate': 0.013200000000000002, 'epoch': 0.43}
{'loss': 0.7002, 'learning_rate': 0.013000000000000001, 'epoch': 0.45}
{'loss': 0.6161, 'learning_rate': 0.0128, 'epoch': 0.46}
{'loss': 1.0374, 'learning_rate': 0.0126, 'epoch': 0.47}
{'loss': 1.0328, 'learning_rate': 0.0124, 'epoch': 0.48}
{'loss': 0.7637, 'learning_rate': 0.0122, 'epoch': 0.5}
{'loss': 0.6332, 'learning_rate': 0.012, 'epoch': 0.51}
{'loss': 0.74, 'learning_rate': 0.0118, 'epoch': 0.52}
{'loss': 0.7284, 'learning_rate': 0.0116, 'epoch': 0.53}
{'loss': 0.9198, 'learning_rate': 0.011399999999999999, 'epoch': 0.55}
{'loss': 0.626, 'learning_rate': 0.011200000000000002, 'epoch': 0.56}
{'loss': 0.628, 'learning_rate': 0.011000000000000001, 'epoch': 0.57}
{'loss': 0.5322, 'learning_rate': 0.0108, 'epoch': 0.59}
{'loss': 0.7844, 'learning_rate': 0.0106, 'epoch': 0.6}
{'loss': 0.5957, 'learning_rate': 0.010400000000000001, 'epoch': 0.61}
{'loss': 0.6681, 'learning_rate': 0.0102, 'epoch': 0.62}
{'loss': 0.8281, 'learning_rate': 0.01, 'epoch': 0.64}
{'loss': 0.5284, 'learning_rate': 0.0098, 'epoch': 0.65}
{'loss': 0.8251, 'learning_rate': 0.0096, 'epoch': 0.66}
{'loss': 0.9845, 'learning_rate': 0.0094, 'epoch': 0.67}
{'loss': 0.9525, 'learning_rate': 0.0092, 'epoch': 0.69}
{'loss': 0.9454, 'learning_rate': 0.009000000000000001, 'epoch': 0.7}
{'loss': 0.4058, 'learning_rate': 0.0088, 'epoch': 0.71}
{'loss': 0.5435, 'learning_rate': 0.0086, 'epoch': 0.73}
{'loss': 0.6892, 'learning_rate': 0.0084, 'epoch': 0.74}
{'loss': 0.6426, 'learning_rate': 0.008199999999999999, 'epoch': 0.75}
{'loss': 0.9414, 'learning_rate': 0.008, 'epoch': 0.76}
{'loss': 0.7945, 'learning_rate': 0.0078000000000000005, 'epoch': 0.78}
{'loss': 0.6295, 'learning_rate': 0.0076, 'epoch': 0.79}
{'loss': 0.7888, 'learning_rate': 0.0074, 'epoch': 0.8}
{'loss': 0.5454, 'learning_rate': 0.0072, 'epoch': 0.81}
{'loss': 0.711, 'learning_rate': 0.006999999999999999, 'epoch': 0.83}
{'loss': 0.713, 'learning_rate': 0.0068000000000000005, 'epoch': 0.84}
{'loss': 0.6058, 'learning_rate': 0.006600000000000001, 'epoch': 0.85}
{'loss': 0.8203, 'learning_rate': 0.0064, 'epoch': 0.87}
{'loss': 0.8275, 'learning_rate': 0.0062, 'epoch': 0.88}
{'loss': 0.4923, 'learning_rate': 0.006, 'epoch': 0.89}
{'loss': 0.5219, 'learning_rate': 0.0058, 'epoch': 0.9}
{'loss': 0.9954, 'learning_rate': 0.005600000000000001, 'epoch': 0.92}
{'loss': 0.6206, 'learning_rate': 0.0054, 'epoch': 0.93}
{'loss': 0.6064, 'learning_rate': 0.005200000000000001, 'epoch': 0.94}
{'loss': 0.6584, 'learning_rate': 0.005, 'epoch': 0.95}
{'loss': 0.8461, 'learning_rate': 0.0048, 'epoch': 0.97}
{'loss': 0.9615, 'learning_rate': 0.0046, 'epoch': 0.98}
{'loss': 0.6508, 'learning_rate': 0.0044, 'epoch': 0.99}
{'loss': 1.0089, 'learning_rate': 0.0042, 'epoch': 1.01}
{'loss': 0.7515, 'learning_rate': 0.004, 'epoch': 1.02}
{'loss': 0.4172, 'learning_rate': 0.0038, 'epoch': 1.03}
{'loss': 0.7634, 'learning_rate': 0.0036, 'epoch': 1.04}
{'loss': 0.585, 'learning_rate': 0.0034000000000000002, 'epoch': 1.06}
{'loss': 0.7668, 'learning_rate': 0.0032, 'epoch': 1.07}
{'loss': 0.5403, 'learning_rate': 0.003, 'epoch': 1.08}
{'loss': 0.5995, 'learning_rate': 0.0028000000000000004, 'epoch': 1.09}
{'loss': 0.4515, 'learning_rate': 0.0026000000000000003, 'epoch': 1.11}
{'loss': 0.6288, 'learning_rate': 0.0024, 'epoch': 1.12}
{'loss': 0.7387, 'learning_rate': 0.0022, 'epoch': 1.13}
{'loss': 0.6517, 'learning_rate': 0.002, 'epoch': 1.15}
{'loss': 0.5389, 'learning_rate': 0.0018, 'epoch': 1.16}
{'loss': 0.4433, 'learning_rate': 0.0016, 'epoch': 1.17}
{'loss': 0.6643, 'learning_rate': 0.0014000000000000002, 'epoch': 1.18}
{'loss': 0.5825, 'learning_rate': 0.0012, 'epoch': 1.2}
{'loss': 0.7709, 'learning_rate': 0.001, 'epoch': 1.21}
{'loss': 0.562, 'learning_rate': 0.0008, 'epoch': 1.22}
{'loss': 0.5581, 'learning_rate': 0.0006, 'epoch': 1.23}
{'loss': 0.4679, 'learning_rate': 0.0004, 'epoch': 1.25}
{'loss': 0.5063, 'learning_rate': 0.0002, 'epoch': 1.26}
{'loss': 0.5527, 'learning_rate': 0.0, 'epoch': 1.27}

100%|██████████| 100/100 [20:18<00:00, 12.19s/it]
[INFO|trainer.py:1962] 2024-01-26 13:15:40,013 >>

Training completed. Do not forget to share your model on huggingface.co/models =)

{'train_runtime': 1218.4689, 'train_samples_per_second': 2.626, 'train_steps_per_second': 0.082, 'train_loss': 0.7395605874061585, 'epoch': 1.27}
Saving PrefixEncoder
[INFO|configuration_utils.py:473] 2024-01-26 13:15:40,038 >> Configuration saved in output/privacy_detection_pt-20240126-125436-128-2e-2/config.json
[INFO|configuration_utils.py:594] 2024-01-26 13:15:40,039 >> Configuration saved in output/privacy_detection_pt-20240126-125436-128-2e-2/generation_config.json
[INFO|modeling_utils.py:2495] 2024-01-26 13:15:40,068 >> Model weights saved in output/privacy_detection_pt-20240126-125436-128-2e-2/pytorch_model.bin
[INFO|tokenization_utils_base.py:2433] 2024-01-26 13:15:40,069 >> tokenizer config file saved in output/privacy_detection_pt-20240126-125436-128-2e-2/tokenizer_config.json
[INFO|tokenization_utils_base.py:2442] 2024-01-26 13:15:40,069 >> Special tokens file saved in output/privacy_detection_pt-20240126-125436-128-2e-2/special_tokens_map.json
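
To use the checkpoint saved above for inference, the trained prefix weights have to be restored onto a freshly loaded base model. A hedged sketch modeled on the THUDM P-Tuning examples (not part of this log; only the paths and pre_seq_len=128 are taken from it):

```python
# Sketch of inference-time loading for a P-Tuning v2 checkpoint. Assumes the
# saved pytorch_model.bin contains the "transformer.prefix_encoder.*" keys
# that the warning earlier said were newly initialized and then trained.
import os
import torch
from transformers import AutoConfig, AutoModel

ckpt_dir = "output/privacy_detection_pt-20240126-125436-128-2e-2"
config = AutoConfig.from_pretrained("THUDM/chatglm3-6b", trust_remote_code=True, pre_seq_len=128)
model = AutoModel.from_pretrained("THUDM/chatglm3-6b", config=config, trust_remote_code=True)

state = torch.load(os.path.join(ckpt_dir, "pytorch_model.bin"), map_location="cpu")
prefix_state = {k[len("transformer.prefix_encoder."):]: v
                for k, v in state.items()
                if k.startswith("transformer.prefix_encoder.")}
model.transformer.prefix_encoder.load_state_dict(prefix_state)
model = model.half().cuda().eval()
```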