[2024-01-26 12:54:39,523] torch.distributed.run: [WARNING] master_addr is only used for static rdzv_backend and when rdzv_endpoint is not specified.
01/26/2024 12:54:44 - WARNING - __main__ - Process rank: 0, device: cuda:0, n_gpu: 1, distributed training: True, 16-bits training: False
01/26/2024 12:54:44 - INFO - __main__ - Training/evaluation parameters Seq2SeqTrainingArguments(
_n_gpu=1,
adafactor=False,
adam_beta1=0.9,
adam_beta2=0.999,
adam_epsilon=1e-08,
auto_find_batch_size=False,
bf16=False,
bf16_full_eval=False,
data_seed=None,
dataloader_drop_last=False,
dataloader_num_workers=0,
dataloader_persistent_workers=False,
dataloader_pin_memory=True,
ddp_backend=None,
ddp_broadcast_buffers=None,
ddp_bucket_cap_mb=None,
ddp_find_unused_parameters=False,
ddp_timeout=1800,
debug=[],
deepspeed=None,
disable_tqdm=False,
dispatch_batches=None,
do_eval=False,
do_predict=False,
do_train=False,
eval_accumulation_steps=None,
eval_delay=0,
eval_steps=None,
evaluation_strategy=no,
fp16=False,
fp16_backend=auto,
fp16_full_eval=False,
fp16_opt_level=O1,
fsdp=[],
fsdp_config={'min_num_params': 0, 'xla': False, 'xla_fsdp_grad_ckpt': False},
fsdp_min_num_params=0,
fsdp_transformer_layer_cls_to_wrap=None,
full_determinism=False,
generation_config=None,
generation_max_length=None,
generation_num_beams=None,
gradient_accumulation_steps=32,
gradient_checkpointing=False,
gradient_checkpointing_kwargs=None,
greater_is_better=None,
group_by_length=False,
half_precision_backend=auto,
hub_always_push=False,
hub_model_id=None,
hub_private_repo=False,
hub_strategy=every_save,
hub_token=<HUB_TOKEN>,
ignore_data_skip=False,
include_inputs_for_metrics=False,
include_num_input_tokens_seen=False,
include_tokens_per_second=False,
jit_mode_eval=False,
label_names=None,
label_smoothing_factor=0.0,
learning_rate=0.02,
length_column_name=length,
load_best_model_at_end=False,
local_rank=0,
log_level=passive,
log_level_replica=warning,
log_on_each_node=True,
logging_dir=output/privacy_detection_pt-20240126-125436-128-2e-2/runs/Jan26_12-54-44_ubuntu1804,
logging_first_step=False,
logging_nan_inf_filter=True,
logging_steps=1.0,
logging_strategy=steps,
lr_scheduler_kwargs={},
lr_scheduler_type=linear,
max_grad_norm=1.0,
max_steps=100,
metric_for_best_model=None,
mp_parameters=,
neftune_noise_alpha=None,
no_cuda=False,
num_train_epochs=3.0,
optim=adamw_torch,
optim_args=None,
output_dir=output/privacy_detection_pt-20240126-125436-128-2e-2,
overwrite_output_dir=False,
past_index=-1,
per_device_eval_batch_size=8,
per_device_train_batch_size=1,
predict_with_generate=False,
prediction_loss_only=False,
push_to_hub=False,
push_to_hub_model_id=None,
push_to_hub_organization=None,
push_to_hub_token=<PUSH_TO_HUB_TOKEN>,
ray_scope=last,
remove_unused_columns=True,
report_to=[],
resume_from_checkpoint=True,
run_name=output/privacy_detection_pt-20240126-125436-128-2e-2,
save_on_each_node=False,
save_only_model=False,
save_safetensors=False,
save_steps=500,
save_strategy=steps,
save_total_limit=None,
seed=42,
skip_memory_metrics=True,
sortish_sampler=False,
split_batches=False,
tf32=None,
torch_compile=False,
torch_compile_backend=None,
torch_compile_mode=None,
torchdynamo=None,
tpu_metrics_debug=False,
tpu_num_cores=None,
use_cpu=False,
use_ipex=False,
use_legacy_prediction_loop=False,
use_mps_device=False,
warmup_ratio=0.0,
warmup_steps=0,
weight_decay=0.0,
)
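A note on the dump above: everything except a handful of overrides is a transformers default. The run name appears to encode the two knobs that matter for this P-tuning run (128 as the prefix length and 2e-2 as the learning rate; both are assumptions read off the directory name). A minimal sketch of how the same non-default arguments could be constructed directly, using only values shown in the dump:

from transformers import Seq2SeqTrainingArguments

args = Seq2SeqTrainingArguments(
    output_dir="output/privacy_detection_pt-20240126-125436-128-2e-2",
    per_device_train_batch_size=1,
    gradient_accumulation_steps=32,  # effective batch size: 1 x 32 x 1 GPU = 32
    learning_rate=2e-2,              # unusually high, but common when only a small prefix is trained
    max_steps=100,                   # overrides num_train_epochs=3.0 (see the trainer warning below)
    logging_steps=1,                 # one loss line per optimization step
    save_steps=500,                  # never reached, since training stops at step 100
    lr_scheduler_type="linear",      # decays 0.02 -> 0 over the 100 steps, as the loss lines show
    seed=42,
)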
[INFO|configuration_utils.py:729] 2024-01-26 12:54:45,398 >> loading configuration file config.json from cache at /root/.cache/huggingface/hub/models--THUDM--chatglm3-6b/snapshots/37f2196f481f8989ea443be625d05f97043652ea/config.json
[INFO|configuration_utils.py:729] 2024-01-26 12:54:45,957 >> loading configuration file config.json from cache at /root/.cache/huggingface/hub/models--THUDM--chatglm3-6b/snapshots/37f2196f481f8989ea443be625d05f97043652ea/config.json
[INFO|configuration_utils.py:792] 2024-01-26 12:54:45,960 >> Model config ChatGLMConfig {
"_name_or_path": "THUDM/chatglm3-6b",
"add_bias_linear": false,
"add_qkv_bias": true,
"apply_query_key_layer_scaling": true,
"apply_residual_connection_post_layernorm": false,
"architectures": [
"ChatGLMModel"
],
"attention_dropout": 0.0,
"attention_softmax_in_fp32": true,
"auto_map": {
"AutoConfig": "THUDM/chatglm3-6b--configuration_chatglm.ChatGLMConfig",
"AutoModel": "THUDM/chatglm3-6b--modeling_chatglm.ChatGLMForConditionalGeneration",
"AutoModelForCausalLM": "THUDM/chatglm3-6b--modeling_chatglm.ChatGLMForConditionalGeneration",
"AutoModelForSeq2SeqLM": "THUDM/chatglm3-6b--modeling_chatglm.ChatGLMForConditionalGeneration",
"AutoModelForSequenceClassification": "THUDM/chatglm3-6b--modeling_chatglm.ChatGLMForSequenceClassification"
},
"bias_dropout_fusion": true,
"classifier_dropout": null,
"eos_token_id": 2,
"ffn_hidden_size": 13696,
"fp32_residual_connection": false,
"hidden_dropout": 0.0,
"hidden_size": 4096,
"kv_channels": 128,
"layernorm_epsilon": 1e-05,
"model_type": "chatglm",
"multi_query_attention": true,
"multi_query_group_num": 2,
"num_attention_heads": 32,
"num_layers": 28,
"original_rope": true,
"pad_token_id": 0,
"padded_vocab_size": 65024,
"post_layer_norm": true,
"pre_seq_len": null,
"prefix_projection": false,
"quantization_bit": 0,
"rmsnorm": true,
"seq_length": 8192,
"tie_word_embeddings": false,
"torch_dtype": "float16",
"transformers_version": "4.37.1",
"use_cache": true,
"vocab_size": 65024
}
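The config is resolved through auto_map, so the model class lives inside the THUDM/chatglm3-6b repo itself and loading requires trust_remote_code=True. A sketch of this step with the P-tuning fields filled in (pre_seq_len=128 is an assumption inferred from the run name and from the trainable-parameter count reported later; the dump above shows it as null before the training script sets it):

from transformers import AutoConfig

config = AutoConfig.from_pretrained("THUDM/chatglm3-6b", trust_remote_code=True)
config.pre_seq_len = 128          # assumed prefix length; enables the prefix encoder
config.prefix_projection = False  # matches "prefix_projection": false above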
[INFO|tokenization_utils_base.py:2027] 2024-01-26 12:54:46,519 >> loading file tokenizer.model from cache at /root/.cache/huggingface/hub/models--THUDM--chatglm3-6b/snapshots/37f2196f481f8989ea443be625d05f97043652ea/tokenizer.model
[INFO|tokenization_utils_base.py:2027] 2024-01-26 12:54:46,519 >> loading file added_tokens.json from cache at None
[INFO|tokenization_utils_base.py:2027] 2024-01-26 12:54:46,519 >> loading file special_tokens_map.json from cache at None
[INFO|tokenization_utils_base.py:2027] 2024-01-26 12:54:46,519 >> loading file tokenizer_config.json from cache at /root/.cache/huggingface/hub/models--THUDM--chatglm3-6b/snapshots/37f2196f481f8989ea443be625d05f97043652ea/tokenizer_config.json
[INFO|tokenization_utils_base.py:2027] 2024-01-26 12:54:46,519 >> loading file tokenizer.json from cache at None
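Only tokenizer.model (a SentencePiece vocabulary) and tokenizer_config.json exist in this repo; the other files resolving to "cache at None" is expected, not an error. A loading sketch under the same trust_remote_code assumption:

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("THUDM/chatglm3-6b", trust_remote_code=True)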
[INFO|modeling_utils.py:3478] 2024-01-26 12:54:47,170 >> loading weights file model.safetensors from cache at /root/.cache/huggingface/hub/models--THUDM--chatglm3-6b/snapshots/37f2196f481f8989ea443be625d05f97043652ea/model.safetensors.index.json
[INFO|configuration_utils.py:826] 2024-01-26 12:54:47,177 >> Generate config GenerationConfig {
"eos_token_id": 2,
"pad_token_id": 0,
"use_cache": false
}
Loading checkpoint shards: 100%|██████████| 7/7 [00:19<00:00, 2.84s/it]
[INFO|modeling_utils.py:4352] 2024-01-26 12:55:07,172 >> All model checkpoint weights were used when initializing ChatGLMForConditionalGeneration.
[WARNING|modeling_utils.py:4354] 2024-01-26 12:55:07,173 >> Some weights of ChatGLMForConditionalGeneration were not initialized from the model checkpoint at THUDM/chatglm3-6b and are newly initialized: ['transformer.prefix_encoder.embedding.weight']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
[INFO|modeling_utils.py:3897] 2024-01-26 12:55:07,458 >> Generation config file not found, using a generation config created from the model config.
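The "newly initialized" warning above is expected for P-tuning v2: every pretrained weight loads cleanly, and the only fresh tensor is transformer.prefix_encoder.embedding.weight, which is exactly the part that gets trained. A hedged sketch of the corresponding load call, reusing the config constructed earlier:

from transformers import AutoModel

model = AutoModel.from_pretrained(
    "THUDM/chatglm3-6b",
    config=config,           # carries pre_seq_len, so the prefix encoder is created
    trust_remote_code=True,
)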
Sanity Check >>>>>>>>>>>>>
'[gMASK]': 64790 -> -100
'sop': 64792 -> -100
'': 30910 -> -100
'请': 55073 -> -100
'找出': 40369 -> -100
'下面': 33182 -> -100
'文本': 36704 -> -100
'中的': 31697 -> -100
'position': 6523 -> -100
':': 31211 -> -100
'艺术': 31835 -> -100
'是': 54532 -> -100
'相同的': 38815 -> -100
',': 31123 -> -100
'音乐': 32000 -> -100
'美术': 33020 -> -100
'体育': 32214 -> -100
'三': 54645 -> -100
'样': 54741 -> -100
'都是': 31700 -> -100
'艺术': 31835 -> -100
'。,': 37843 -> -100
'三': 54645 -> -100
'样': 54741 -> -100
'艺术': 31835 -> -100
'都是': 31700 -> -100
'靠': 55518 -> -100
'感觉': 32044 -> -100
'的': 54530 -> -100
'。': 31155 -> -100
'感觉': 32044 -> -100
'好玩': 42814 -> -100
'起来': 31841 -> -100
'就很': 40030 -> -100
'轻松': 33550 -> -100
',': 31123 -> -100
'所以': 31672 -> -100
'叫做': 35528 -> -100
'玩': 55409 -> -100
'艺术': 31835 -> -100
'。': 31155 -> -100
'没': 54721 -> -100
'感觉': 32044 -> -100
'找不到': 37779 -> -100
'北': 54760 -> -100
'的': 54530 -> -100
'干脆': 43396 -> -100
'别': 54835 -> -100
'玩': 55409 -> -100
'了': 54537 -> -100
'!': 31404 -> -100
',': 31123 -> -100
'香港': 31776 -> -100
'电影': 31867 -> -100
'国语': 54385 -> -100
'配音': 40392 -> -100
'名家': 40465 -> -100
'周': 54896 -> -100
'思': 54872 -> -100
'平': 54678 -> -100
',': 31123 -> -100
'代表作': 43527 -> -100
'有': 54536 -> -100
'TVB': 42671 -> -100
'《': 54611 -> -100
'上海': 31770 -> -100
'滩': 56928 -> -100
'》': 54612 -> -100
'周': 54896 -> -100
'润': 55826 -> -100
'发': 54559 -> -100
'等': 54609 -> -100
'香港': 37944 -> 37944
'电影': 31867 -> 31867
'国语': 54385 -> 54385
'配音': 40392 -> 40392
'名家': 40465 -> 40465
'': 2 -> 2
<<<<<<<<<<<<< Sanity Check
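The sanity check prints one row per token as 'token': input_id -> label. Every prompt token is labeled -100, the ignore_index of PyTorch's cross-entropy loss, so only the response span ("香港电影国语配音名家") and the closing EOS (id 2, matching eos_token_id in the config) contribute to the loss. A minimal sketch of that masking rule, where prompt_ids and response_ids are hypothetical lists of token ids from the tokenizer:

# Hypothetical illustration of the label masking the sanity check reflects.
input_ids = prompt_ids + response_ids + [tokenizer.eos_token_id]           # eos id is 2 here
labels    = [-100] * len(prompt_ids) + response_ids + [tokenizer.eos_token_id]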
01/26/2024 12:55:08 - WARNING - accelerate.utils.other - Detected kernel version 5.4.0, which is below the recommended minimum of 5.5.0; this can cause the process to hang. It is recommended to upgrade the kernel to the minimum version or higher.
[INFO|trainer.py:522] 2024-01-26 12:55:20,019 >> max_steps is given, it will override any value given in num_train_epochs
[WARNING|modeling_utils.py:2134] 2024-01-26 12:55:20,020 >> You are using an old version of the checkpointing format that is deprecated (we will also silently ignore `gradient_checkpointing_kwargs` in case you passed it). Please update to the new format on your modeling file. To use the new format, you need to completely remove the definition of the method `_set_gradient_checkpointing` in your model.
[INFO|trainer.py:1721] 2024-01-26 12:55:21,544 >> ***** Running training *****
[INFO|trainer.py:1722] 2024-01-26 12:55:21,544 >> Num examples = 2,515
[INFO|trainer.py:1723] 2024-01-26 12:55:21,544 >> Num Epochs = 2
[INFO|trainer.py:1724] 2024-01-26 12:55:21,544 >> Instantaneous batch size per device = 1
[INFO|trainer.py:1727] 2024-01-26 12:55:21,544 >> Total train batch size (w. parallel, distributed & accumulation) = 32
[INFO|trainer.py:1728] 2024-01-26 12:55:21,544 >> Gradient Accumulation steps = 32
[INFO|trainer.py:1729] 2024-01-26 12:55:21,544 >> Total optimization steps = 100
[INFO|trainer.py:1730] 2024-01-26 12:55:21,545 >> Number of trainable parameters = 1,835,008
0%|          | 0/100 [00:00<?, ?it/s]
/home/vipuser/miniconda3/envs/GLM/lib/python3.10/site-packages/torch/utils/checkpoint.py:429: UserWarning: torch.utils.checkpoint: please pass in use_reentrant=True or use_reentrant=False explicitly. The default value of use_reentrant will be updated to be False in the future. To maintain current behavior, pass use_reentrant=True. It is recommended that you use use_reentrant=False. Refer to docs for more details on the differences between the two variants.
  warnings.warn(
1%| | 1/100 [00:13<22:36, 13.70s/it] {'loss': 0.8181, 'learning_rate': 0.0198, 'epoch': 0.01}
2%|▏ | 2/100 [00:25<20:33, 12.59s/it] {'loss': 0.787, 'learning_rate': 0.0196, 'epoch': 0.03}
3%|▎ | 3/100 [00:37<19:50, 12.27s/it] {'loss': 1.0047, 'learning_rate': 0.0194, 'epoch': 0.04}
4%|▍ | 4/100 [00:49<19:25, 12.14s/it] {'loss': 0.8688, 'learning_rate': 0.0192, 'epoch': 0.05}
5%|▌ | 5/100 [01:01<19:09, 12.10s/it] {'loss': 0.7173, 'learning_rate': 0.019, 'epoch': 0.06}
6%|▌ | 6/100 [01:13<18:55, 12.08s/it] {'loss': 0.5175, 'learning_rate': 0.0188, 'epoch': 0.08}
7%|▋ | 7/100 [01:25<18:45, 12.11s/it] {'loss': 0.7559, 'learning_rate': 0.018600000000000002, 'epoch': 0.09}
8%|▊ | 8/100 [01:37<18:35, 12.13s/it] {'loss': 0.9278, 'learning_rate': 0.0184, 'epoch': 0.1}
9%|▉ | 9/100 [01:49<18:25, 12.15s/it] {'loss': 0.6011, 'learning_rate': 0.0182, 'epoch': 0.11}
10%|█ | 10/100 [02:02<18:13, 12.15s/it] {'loss': 0.8014, 'learning_rate': 0.018000000000000002, 'epoch': 0.13}
11%|█ | 11/100 [02:14<18:01, 12.16s/it] {'loss': 1.2581, 'learning_rate': 0.0178, 'epoch': 0.14}
12%|█▏ | 12/100 [02:26<17:51, 12.18s/it] {'loss': 0.9886, 'learning_rate': 0.0176, 'epoch': 0.15}
13%|█▎ | 13/100 [02:38<17:39, 12.18s/it] {'loss': 0.7866, 'learning_rate': 0.0174, 'epoch': 0.17}
14%|█▍ | 14/100 [02:50<17:28, 12.19s/it] {'loss': 0.936, 'learning_rate': 0.0172, 'epoch': 0.18}
15%|█▌ | 15/100 [03:03<17:17, 12.20s/it] {'loss': 1.0503, 'learning_rate': 0.017, 'epoch': 0.19}
16%|█▌ | 16/100 [03:15<17:04, 12.20s/it] {'loss': 0.5689, 'learning_rate': 0.0168, 'epoch': 0.2}
17%|█▋ | 17/100 [03:27<16:52, 12.20s/it] {'loss': 0.8576, 'learning_rate': 0.0166, 'epoch': 0.22}
18%|█▊ | 18/100 [03:39<16:39, 12.19s/it] {'loss': 1.0946, 'learning_rate': 0.016399999999999998, 'epoch': 0.23}
19%|█▉ | 19/100 [03:51<16:27, 12.19s/it] {'loss': 0.9075, 'learning_rate': 0.016200000000000003, 'epoch': 0.24}
20%|██ | 20/100 [04:04<16:14, 12.18s/it] {'loss': 1.1441, 'learning_rate': 0.016, 'epoch': 0.25}
21%|██ | 21/100 [04:16<16:01, 12.17s/it] {'loss': 0.7794, 'learning_rate': 0.0158, 'epoch': 0.27}
22%|██▏ | 22/100 [04:28<15:50, 12.18s/it] {'loss': 0.9574, 'learning_rate': 0.015600000000000001, 'epoch': 0.28}
23%|██▎ | 23/100 [04:40<15:37, 12.18s/it] {'loss': 0.8937, 'learning_rate': 0.0154, 'epoch': 0.29}
24%|██▍ | 24/100 [04:52<15:25, 12.17s/it] {'loss': 0.709, 'learning_rate': 0.0152, 'epoch': 0.31}
25%|██▌ | 25/100 [05:04<15:13, 12.18s/it] {'loss': 0.8731, 'learning_rate': 0.015, 'epoch': 0.32}
26%|██▌ | 26/100 [05:17<15:00, 12.17s/it] {'loss': 0.719, 'learning_rate': 0.0148, 'epoch': 0.33}
27%|██▋ | 27/100 [05:29<14:49, 12.18s/it] {'loss': 0.7419, 'learning_rate': 0.0146, 'epoch': 0.34}
28%|██▊ | 28/100 [05:41<14:36, 12.17s/it] {'loss': 0.9224, 'learning_rate': 0.0144, 'epoch': 0.36}
29%|██▉ | 29/100 [05:53<14:25, 12.19s/it] {'loss': 1.0802, 'learning_rate': 0.014199999999999999, 'epoch': 0.37}
30%|███ | 30/100 [06:05<14:13, 12.19s/it] {'loss': 0.8187, 'learning_rate': 0.013999999999999999, 'epoch': 0.38}
31%|███ | 31/100 [06:17<14:00, 12.18s/it] {'loss': 0.615, 'learning_rate': 0.0138, 'epoch': 0.39}
32%|███▏ | 32/100 [06:30<13:48, 12.18s/it] {'loss': 0.5214, 'learning_rate': 0.013600000000000001, 'epoch': 0.41}
33%|███▎ | 33/100 [06:42<13:36, 12.18s/it] {'loss': 0.649, 'learning_rate': 0.0134, 'epoch': 0.42}
34%|███▍ | 34/100 [06:54<13:24, 12.18s/it] {'loss': 0.6523, 'learning_rate': 0.013200000000000002, 'epoch': 0.43}
35%|███▌ | 35/100 [07:06<13:10, 12.16s/it] {'loss': 0.7002, 'learning_rate': 0.013000000000000001, 'epoch': 0.45}
36%|███▌ | 36/100 [07:18<12:57, 12.16s/it] {'loss': 0.6161, 'learning_rate': 0.0128, 'epoch': 0.46}
37%|███▋ | 37/100 [07:30<12:46, 12.17s/it] {'loss': 1.0374, 'learning_rate': 0.0126, 'epoch': 0.47}
38%|███▊ | 38/100 [07:43<12:34, 12.17s/it] {'loss': 1.0328, 'learning_rate': 0.0124, 'epoch': 0.48}
39%|███▉ | 39/100 [07:55<12:22, 12.17s/it] {'loss': 0.7637, 'learning_rate': 0.0122, 'epoch': 0.5}
40%|████ | 40/100 [08:07<12:10, 12.17s/it] {'loss': 0.6332, 'learning_rate': 0.012, 'epoch': 0.51}
41%|████ | 41/100 [08:19<11:58, 12.18s/it] {'loss': 0.74, 'learning_rate': 0.0118, 'epoch': 0.52}
42%|████▏ | 42/100 [08:31<11:46, 12.17s/it] {'loss': 0.7284, 'learning_rate': 0.0116, 'epoch': 0.53}
43%|████▎ | 43/100 [08:44<11:34, 12.18s/it] {'loss': 0.9198, 'learning_rate': 0.011399999999999999, 'epoch': 0.55}
44%|████▍ | 44/100 [08:56<11:21, 12.17s/it] {'loss': 0.626, 'learning_rate': 0.011200000000000002, 'epoch': 0.56}
45%|████▌ | 45/100 [09:08<11:09, 12.17s/it] {'loss': 0.628, 'learning_rate': 0.011000000000000001, 'epoch': 0.57}
46%|████▌ | 46/100 [09:20<10:57, 12.18s/it] {'loss': 0.5322, 'learning_rate': 0.0108, 'epoch': 0.59}
47%|████▋ | 47/100 [09:32<10:44, 12.17s/it] {'loss': 0.7844, 'learning_rate': 0.0106, 'epoch': 0.6}
48%|████▊ | 48/100 [09:44<10:33, 12.18s/it] {'loss': 0.5957, 'learning_rate': 0.010400000000000001, 'epoch': 0.61}
49%|████▉ | 49/100 [09:57<10:21, 12.19s/it] {'loss': 0.6681, 'learning_rate': 0.0102, 'epoch': 0.62}
50%|█████ | 50/100 [10:09<10:09, 12.18s/it] {'loss': 0.8281, 'learning_rate': 0.01, 'epoch': 0.64}
51%|█████ | 51/100 [10:21<09:56, 12.17s/it] {'loss': 0.5284, 'learning_rate': 0.0098, 'epoch': 0.65}
52%|█████▏ | 52/100 [10:33<09:44, 12.18s/it] {'loss': 0.8251, 'learning_rate': 0.0096, 'epoch': 0.66}
53%|█████▎ | 53/100 [10:45<09:32, 12.19s/it] {'loss': 0.9845, 'learning_rate': 0.0094, 'epoch': 0.67}
54%|█████▍ | 54/100 [10:58<09:20, 12.18s/it] {'loss': 0.9525, 'learning_rate': 0.0092, 'epoch': 0.69}
55%|█████▌ | 55/100 [11:10<09:08, 12.18s/it] {'loss': 0.9454, 'learning_rate': 0.009000000000000001, 'epoch': 0.7}
56%|█████▌ | 56/100 [11:22<08:56, 12.19s/it] {'loss': 0.4058, 'learning_rate': 0.0088, 'epoch': 0.71}
57%|█████▋ | 57/100 [11:34<08:43, 12.18s/it] {'loss': 0.5435, 'learning_rate': 0.0086, 'epoch': 0.73}
58%|█████▊ | 58/100 [11:46<08:31, 12.18s/it] {'loss': 0.6892, 'learning_rate': 0.0084, 'epoch': 0.74}
59%|█████▉ | 59/100 [11:58<08:19, 12.18s/it] {'loss': 0.6426, 'learning_rate': 0.008199999999999999, 'epoch': 0.75}
60%|██████ | 60/100 [12:11<08:07, 12.18s/it] {'loss': 0.9414, 'learning_rate': 0.008, 'epoch': 0.76}
61%|██████ | 61/100 [12:23<07:55, 12.19s/it] {'loss': 0.7945, 'learning_rate': 0.0078000000000000005, 'epoch': 0.78}
62%|██████▏ | 62/100 [12:35<07:42, 12.18s/it] {'loss': 0.6295, 'learning_rate': 0.0076, 'epoch': 0.79}
63%|██████▎ | 63/100 [12:47<07:30, 12.18s/it] {'loss': 0.7888, 'learning_rate': 0.0074, 'epoch': 0.8}
64%|██████▍ | 64/100 [12:59<07:18, 12.18s/it] {'loss': 0.5454, 'learning_rate': 0.0072, 'epoch': 0.81}
65%|██████▌ | 65/100 [13:12<07:06, 12.18s/it] {'loss': 0.711, 'learning_rate': 0.006999999999999999, 'epoch': 0.83}
66%|██████▌ | 66/100 [13:24<06:54, 12.18s/it] {'loss': 0.713, 'learning_rate': 0.0068000000000000005, 'epoch': 0.84}
67%|██████▋ | 67/100 [13:36<06:42, 12.19s/it] {'loss': 0.6058, 'learning_rate': 0.006600000000000001, 'epoch': 0.85}
68%|██████▊ | 68/100 [13:48<06:29, 12.17s/it] {'loss': 0.8203, 'learning_rate': 0.0064, 'epoch': 0.87}
69%|██████▉ | 69/100 [14:00<06:17, 12.16s/it] {'loss': 0.8275, 'learning_rate': 0.0062, 'epoch': 0.88}
70%|███████ | 70/100 [14:12<06:04, 12.16s/it] {'loss': 0.4923, 'learning_rate': 0.006, 'epoch': 0.89}
71%|███████ | 71/100 [14:25<05:52, 12.16s/it] {'loss': 0.5219, 'learning_rate': 0.0058, 'epoch': 0.9}
72%|███████▏ | 72/100 [14:37<05:41, 12.19s/it] {'loss': 0.9954, 'learning_rate': 0.005600000000000001, 'epoch': 0.92}
73%|███████▎ | 73/100 [14:49<05:28, 12.18s/it] {'loss': 0.6206, 'learning_rate': 0.0054, 'epoch': 0.93}
74%|███████▍ | 74/100 [15:01<05:16, 12.18s/it] {'loss': 0.6064, 'learning_rate': 0.005200000000000001, 'epoch': 0.94}
75%|███████▌ | 75/100 [15:13<05:04, 12.19s/it] {'loss': 0.6584, 'learning_rate': 0.005, 'epoch': 0.95}
76%|███████▌ | 76/100 [15:26<04:52, 12.19s/it] {'loss': 0.8461, 'learning_rate': 0.0048, 'epoch': 0.97}
77%|███████▋ | 77/100 [15:38<04:40, 12.19s/it] {'loss': 0.9615, 'learning_rate': 0.0046, 'epoch': 0.98}
78%|███████▊ | 78/100 [15:50<04:28, 12.19s/it] {'loss': 0.6508, 'learning_rate': 0.0044, 'epoch': 0.99}
79%|███████▉ | 79/100 [16:02<04:16, 12.20s/it] {'loss': 1.0089, 'learning_rate': 0.0042, 'epoch': 1.01}
80%|████████ | 80/100 [16:14<04:03, 12.17s/it] {'loss': 0.7515, 'learning_rate': 0.004, 'epoch': 1.02}
81%|████████ | 81/100 [16:26<03:51, 12.18s/it] {'loss': 0.4172, 'learning_rate': 0.0038, 'epoch': 1.03}
82%|████████▏ | 82/100 [16:39<03:39, 12.18s/it] {'loss': 0.7634, 'learning_rate': 0.0036, 'epoch': 1.04}
83%|████████▎ | 83/100 [16:51<03:26, 12.16s/it] {'loss': 0.585, 'learning_rate': 0.0034000000000000002, 'epoch': 1.06}
84%|████████▍ | 84/100 [17:03<03:14, 12.18s/it] {'loss': 0.7668, 'learning_rate': 0.0032, 'epoch': 1.07}
85%|████████▌ | 85/100 [17:15<03:02, 12.18s/it] {'loss': 0.5403, 'learning_rate': 0.003, 'epoch': 1.08}
86%|████████▌ | 86/100 [17:27<02:50, 12.18s/it] {'loss': 0.5995, 'learning_rate': 0.0028000000000000004, 'epoch': 1.09}
87%|████████▋ | 87/100 [17:39<02:38, 12.18s/it] {'loss': 0.4515, 'learning_rate': 0.0026000000000000003, 'epoch': 1.11}
88%|████████▊ | 88/100 [17:52<02:26, 12.18s/it] {'loss': 0.6288, 'learning_rate': 0.0024, 'epoch': 1.12}
89%|████████▉ | 89/100 [18:04<02:13, 12.18s/it] {'loss': 0.7387, 'learning_rate': 0.0022, 'epoch': 1.13}
90%|█████████ | 90/100 [18:16<02:01, 12.18s/it] {'loss': 0.6517, 'learning_rate': 0.002, 'epoch': 1.15}
91%|█████████ | 91/100 [18:28<01:49, 12.18s/it] {'loss': 0.5389, 'learning_rate': 0.0018, 'epoch': 1.16}
92%|█████████▏| 92/100 [18:40<01:37, 12.20s/it] {'loss': 0.4433, 'learning_rate': 0.0016, 'epoch': 1.17}
93%|█████████▎| 93/100 [18:53<01:25, 12.21s/it] {'loss': 0.6643, 'learning_rate': 0.0014000000000000002, 'epoch': 1.18}
94%|█████████▍| 94/100 [19:05<01:13, 12.19s/it] {'loss': 0.5825, 'learning_rate': 0.0012, 'epoch': 1.2}
95%|█████████▌| 95/100 [19:17<01:00, 12.18s/it] {'loss': 0.7709, 'learning_rate': 0.001, 'epoch': 1.21}
96%|█████████▌| 96/100 [19:29<00:48, 12.18s/it] {'loss': 0.562, 'learning_rate': 0.0008, 'epoch': 1.22}
97%|█████████▋| 97/100 [19:41<00:36, 12.19s/it] {'loss': 0.5581, 'learning_rate': 0.0006, 'epoch': 1.23}
98%|█████████▊| 98/100 [19:54<00:24, 12.19s/it] {'loss': 0.4679, 'learning_rate': 0.0004, 'epoch': 1.25}
99%|█████████▉| 99/100 [20:06<00:12, 12.18s/it] {'loss': 0.5063, 'learning_rate': 0.0002, 'epoch': 1.26}
100%|██████████| 100/100 [20:18<00:00, 12.19s/it] {'loss': 0.5527, 'learning_rate': 0.0, 'epoch': 1.27}
[INFO|trainer.py:1962] 2024-01-26 13:15:40,013 >> Training completed. Do not forget to share your model on huggingface.co/models =)
{'train_runtime': 1218.4689, 'train_samples_per_second': 2.626, 'train_steps_per_second': 0.082, 'train_loss': 0.7395605874061585, 'epoch': 1.27}
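The summary metrics are self-consistent: 100 steps x 32 samples per optimizer step = 3,200 samples in 1,218.47 s, i.e. 3,200 / 1,218.47 ≈ 2.626 samples/s and 100 / 1,218.47 ≈ 0.082 steps/s, exactly as reported, and the final epoch value 1.27 matches the step-per-epoch arithmetic above.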
Saving PrefixEncoder
[INFO|configuration_utils.py:473] 2024-01-26 13:15:40,038 >> Configuration saved in output/privacy_detection_pt-20240126-125436-128-2e-2/config.json
[INFO|configuration_utils.py:594] 2024-01-26 13:15:40,039 >> Configuration saved in output/privacy_detection_pt-20240126-125436-128-2e-2/generation_config.json
[INFO|modeling_utils.py:2495] 2024-01-26 13:15:40,068 >> Model weights saved in output/privacy_detection_pt-20240126-125436-128-2e-2/pytorch_model.bin
[INFO|tokenization_utils_base.py:2433] 2024-01-26 13:15:40,069 >> tokenizer config file saved in output/privacy_detection_pt-20240126-125436-128-2e-2/tokenizer_config.json
[INFO|tokenization_utils_base.py:2442] 2024-01-26 13:15:40,069 >> Special tokens file saved in output/privacy_detection_pt-20240126-125436-128-2e-2/special_tokens_map.json
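Since only the PrefixEncoder was trained and saved ("Saving PrefixEncoder" above), the pytorch_model.bin in the output directory holds just the prefix weights, not a full 6B checkpoint. A hedged reload sketch in the style of THUDM's P-tuning examples; the key-prefix handling is an assumption, so adapt it to the actual checkpoint contents:

import torch
from transformers import AutoConfig, AutoModel

ckpt_dir = "output/privacy_detection_pt-20240126-125436-128-2e-2"
config = AutoConfig.from_pretrained(ckpt_dir, trust_remote_code=True)  # saved config should keep pre_seq_len
model = AutoModel.from_pretrained("THUDM/chatglm3-6b", config=config, trust_remote_code=True)

# Load only the prefix-encoder tensors back into the freshly created prefix encoder.
state = torch.load(f"{ckpt_dir}/pytorch_model.bin", map_location="cpu")
model.transformer.prefix_encoder.load_state_dict({
    k[len("transformer.prefix_encoder."):]: v
    for k, v in state.items()
    if k.startswith("transformer.prefix_encoder.")
})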