|
2024-04-26 02:18:42,687 - trainer - INFO - Use pytorch device: cuda, with gpu_number=4
2024-04-26 02:18:42,687 - trainer - INFO - Set seed for random, numpy and torch: 122
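The line above records seeding Python's random module, NumPy, and PyTorch with 122. A minimal sketch of how such seeding is typically done (an assumption; the trainer's actual helper is not shown in this log):

    import random

    import numpy as np
    import torch

    def set_seed(seed: int = 122) -> None:
        # Seed Python, NumPy, and PyTorch (CPU plus every CUDA device)
        # so that data shuffling and weight init are reproducible.
        random.seed(seed)
        np.random.seed(seed)
        torch.manual_seed(seed)
        torch.cuda.manual_seed_all(seed)

    set_seed(122)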
|
2024-04-26 02:18:43,540 - trainer - INFO - module.0.gpt.transformer.wte.weight torch.Size([50259, 768])
2024-04-26 02:18:43,540 - trainer - INFO - module.0.gpt.transformer.wpe.weight torch.Size([1024, 768])
2024-04-26 02:18:43,542 - trainer - INFO - module.0.gpt.transformer.h.0.ln_1.weight torch.Size([768])
2024-04-26 02:18:43,542 - trainer - INFO - module.0.gpt.transformer.h.0.ln_1.bias torch.Size([768])
2024-04-26 02:18:43,543 - trainer - INFO - module.0.gpt.transformer.h.0.attn.c_attn.weight torch.Size([768, 2304])
2024-04-26 02:18:43,543 - trainer - INFO - module.0.gpt.transformer.h.0.attn.c_attn.bias torch.Size([2304])
2024-04-26 02:18:43,544 - trainer - INFO - module.0.gpt.transformer.h.0.attn.c_proj.weight torch.Size([768, 768])
2024-04-26 02:18:43,544 - trainer - INFO - module.0.gpt.transformer.h.0.attn.c_proj.bias torch.Size([768])
2024-04-26 02:18:43,544 - trainer - INFO - module.0.gpt.transformer.h.0.ln_2.weight torch.Size([768])
2024-04-26 02:18:43,545 - trainer - INFO - module.0.gpt.transformer.h.0.ln_2.bias torch.Size([768])
2024-04-26 02:18:43,545 - trainer - INFO - module.0.gpt.transformer.h.0.mlp.c_fc.weight torch.Size([768, 3072])
2024-04-26 02:18:43,546 - trainer - INFO - module.0.gpt.transformer.h.0.mlp.c_fc.bias torch.Size([3072])
2024-04-26 02:18:43,546 - trainer - INFO - module.0.gpt.transformer.h.0.mlp.c_proj.weight torch.Size([3072, 768])
2024-04-26 02:18:43,547 - trainer - INFO - module.0.gpt.transformer.h.0.mlp.c_proj.bias torch.Size([768])
2024-04-26 02:18:43,547 - trainer - INFO - module.0.gpt.transformer.h.1.ln_1.weight torch.Size([768])
2024-04-26 02:18:43,547 - trainer - INFO - module.0.gpt.transformer.h.1.ln_1.bias torch.Size([768])
2024-04-26 02:18:43,548 - trainer - INFO - module.0.gpt.transformer.h.1.attn.c_attn.weight torch.Size([768, 2304])
2024-04-26 02:18:43,548 - trainer - INFO - module.0.gpt.transformer.h.1.attn.c_attn.bias torch.Size([2304])
2024-04-26 02:18:43,549 - trainer - INFO - module.0.gpt.transformer.h.1.attn.c_proj.weight torch.Size([768, 768])
2024-04-26 02:18:43,549 - trainer - INFO - module.0.gpt.transformer.h.1.attn.c_proj.bias torch.Size([768])
2024-04-26 02:18:43,549 - trainer - INFO - module.0.gpt.transformer.h.1.ln_2.weight torch.Size([768])
2024-04-26 02:18:43,550 - trainer - INFO - module.0.gpt.transformer.h.1.ln_2.bias torch.Size([768])
2024-04-26 02:18:43,550 - trainer - INFO - module.0.gpt.transformer.h.1.mlp.c_fc.weight torch.Size([768, 3072])
2024-04-26 02:18:43,551 - trainer - INFO - module.0.gpt.transformer.h.1.mlp.c_fc.bias torch.Size([3072])
2024-04-26 02:18:43,551 - trainer - INFO - module.0.gpt.transformer.h.1.mlp.c_proj.weight torch.Size([3072, 768])
2024-04-26 02:18:43,551 - trainer - INFO - module.0.gpt.transformer.h.1.mlp.c_proj.bias torch.Size([768])
2024-04-26 02:18:43,552 - trainer - INFO - module.0.gpt.transformer.h.2.ln_1.weight torch.Size([768])
2024-04-26 02:18:43,552 - trainer - INFO - module.0.gpt.transformer.h.2.ln_1.bias torch.Size([768])
2024-04-26 02:18:43,553 - trainer - INFO - module.0.gpt.transformer.h.2.attn.c_attn.weight torch.Size([768, 2304])
2024-04-26 02:18:43,553 - trainer - INFO - module.0.gpt.transformer.h.2.attn.c_attn.bias torch.Size([2304])
2024-04-26 02:18:43,554 - trainer - INFO - module.0.gpt.transformer.h.2.attn.c_proj.weight torch.Size([768, 768])
2024-04-26 02:18:43,554 - trainer - INFO - module.0.gpt.transformer.h.2.attn.c_proj.bias torch.Size([768])
2024-04-26 02:18:43,554 - trainer - INFO - module.0.gpt.transformer.h.2.ln_2.weight torch.Size([768])
2024-04-26 02:18:43,555 - trainer - INFO - module.0.gpt.transformer.h.2.ln_2.bias torch.Size([768])
2024-04-26 02:18:43,555 - trainer - INFO - module.0.gpt.transformer.h.2.mlp.c_fc.weight torch.Size([768, 3072])
2024-04-26 02:18:43,555 - trainer - INFO - module.0.gpt.transformer.h.2.mlp.c_fc.bias torch.Size([3072])
2024-04-26 02:18:43,556 - trainer - INFO - module.0.gpt.transformer.h.2.mlp.c_proj.weight torch.Size([3072, 768])
2024-04-26 02:18:43,556 - trainer - INFO - module.0.gpt.transformer.h.2.mlp.c_proj.bias torch.Size([768])
2024-04-26 02:18:43,557 - trainer - INFO - module.0.gpt.transformer.h.3.ln_1.weight torch.Size([768])
2024-04-26 02:18:43,557 - trainer - INFO - module.0.gpt.transformer.h.3.ln_1.bias torch.Size([768])
2024-04-26 02:18:43,558 - trainer - INFO - module.0.gpt.transformer.h.3.attn.c_attn.weight torch.Size([768, 2304])
2024-04-26 02:18:43,558 - trainer - INFO - module.0.gpt.transformer.h.3.attn.c_attn.bias torch.Size([2304])
2024-04-26 02:18:43,559 - trainer - INFO - module.0.gpt.transformer.h.3.attn.c_proj.weight torch.Size([768, 768])
2024-04-26 02:18:43,559 - trainer - INFO - module.0.gpt.transformer.h.3.attn.c_proj.bias torch.Size([768])
2024-04-26 02:18:43,559 - trainer - INFO - module.0.gpt.transformer.h.3.ln_2.weight torch.Size([768])
2024-04-26 02:18:43,560 - trainer - INFO - module.0.gpt.transformer.h.3.ln_2.bias torch.Size([768])
2024-04-26 02:18:43,560 - trainer - INFO - module.0.gpt.transformer.h.3.mlp.c_fc.weight torch.Size([768, 3072])
2024-04-26 02:18:43,561 - trainer - INFO - module.0.gpt.transformer.h.3.mlp.c_fc.bias torch.Size([3072])
2024-04-26 02:18:43,561 - trainer - INFO - module.0.gpt.transformer.h.3.mlp.c_proj.weight torch.Size([3072, 768])
2024-04-26 02:18:43,562 - trainer - INFO - module.0.gpt.transformer.h.3.mlp.c_proj.bias torch.Size([768])
2024-04-26 02:18:43,562 - trainer - INFO - module.0.gpt.transformer.h.4.ln_1.weight torch.Size([768])
2024-04-26 02:18:43,562 - trainer - INFO - module.0.gpt.transformer.h.4.ln_1.bias torch.Size([768])
2024-04-26 02:18:43,563 - trainer - INFO - module.0.gpt.transformer.h.4.attn.c_attn.weight torch.Size([768, 2304])
2024-04-26 02:18:43,563 - trainer - INFO - module.0.gpt.transformer.h.4.attn.c_attn.bias torch.Size([2304])
2024-04-26 02:18:43,564 - trainer - INFO - module.0.gpt.transformer.h.4.attn.c_proj.weight torch.Size([768, 768])
2024-04-26 02:18:43,564 - trainer - INFO - module.0.gpt.transformer.h.4.attn.c_proj.bias torch.Size([768])
2024-04-26 02:18:43,564 - trainer - INFO - module.0.gpt.transformer.h.4.ln_2.weight torch.Size([768])
2024-04-26 02:18:43,565 - trainer - INFO - module.0.gpt.transformer.h.4.ln_2.bias torch.Size([768])
2024-04-26 02:18:43,565 - trainer - INFO - module.0.gpt.transformer.h.4.mlp.c_fc.weight torch.Size([768, 3072])
2024-04-26 02:18:43,566 - trainer - INFO - module.0.gpt.transformer.h.4.mlp.c_fc.bias torch.Size([3072])
2024-04-26 02:18:43,566 - trainer - INFO - module.0.gpt.transformer.h.4.mlp.c_proj.weight torch.Size([3072, 768])
2024-04-26 02:18:43,567 - trainer - INFO - module.0.gpt.transformer.h.4.mlp.c_proj.bias torch.Size([768])
2024-04-26 02:18:43,567 - trainer - INFO - module.0.gpt.transformer.h.5.ln_1.weight torch.Size([768])
2024-04-26 02:18:43,567 - trainer - INFO - module.0.gpt.transformer.h.5.ln_1.bias torch.Size([768])
2024-04-26 02:18:43,568 - trainer - INFO - module.0.gpt.transformer.h.5.attn.c_attn.weight torch.Size([768, 2304])
2024-04-26 02:18:43,568 - trainer - INFO - module.0.gpt.transformer.h.5.attn.c_attn.bias torch.Size([2304])
2024-04-26 02:18:43,569 - trainer - INFO - module.0.gpt.transformer.h.5.attn.c_proj.weight torch.Size([768, 768])
2024-04-26 02:18:43,569 - trainer - INFO - module.0.gpt.transformer.h.5.attn.c_proj.bias torch.Size([768])
2024-04-26 02:18:43,570 - trainer - INFO - module.0.gpt.transformer.h.5.ln_2.weight torch.Size([768])
2024-04-26 02:18:43,570 - trainer - INFO - module.0.gpt.transformer.h.5.ln_2.bias torch.Size([768])
2024-04-26 02:18:43,570 - trainer - INFO - module.0.gpt.transformer.h.5.mlp.c_fc.weight torch.Size([768, 3072])
2024-04-26 02:18:43,571 - trainer - INFO - module.0.gpt.transformer.h.5.mlp.c_fc.bias torch.Size([3072])
2024-04-26 02:18:43,571 - trainer - INFO - module.0.gpt.transformer.h.5.mlp.c_proj.weight torch.Size([3072, 768])
2024-04-26 02:18:43,572 - trainer - INFO - module.0.gpt.transformer.h.5.mlp.c_proj.bias torch.Size([768])
2024-04-26 02:18:43,572 - trainer - INFO - module.0.gpt.transformer.ln_f.weight torch.Size([768])
2024-04-26 02:18:43,573 - trainer - INFO - module.0.gpt.transformer.ln_f.bias torch.Size([768])
2024-04-26 02:18:43,573 - trainer - INFO - module.0.gpt.lm_head.weight torch.Size([50259, 768])
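A parameter dump like the one above is typically produced by iterating named_parameters(); a minimal sketch (assumes a loaded model object; the trainer's own logging helper is not shown in this log):

    import logging

    logger = logging.getLogger("trainer")

    def log_parameter_shapes(model) -> None:
        # Log each parameter's qualified name and shape, one line per
        # tensor, matching the "module.0.gpt..." lines above.
        for name, param in model.named_parameters():
            logger.info(f"{name} {param.size()}")

Note that Hugging Face's Conv1D stores its weight as [in_features, out_features], which is why c_attn.weight appears as [768, 2304] rather than the nn.Linear convention of [2304, 768].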
|
2024-04-26 02:18:43,573 - trainer - INFO - DataParallel(
  (module): Sequential(
    (0): GPTSingleHead(
      (gpt): GPT2LMHeadModel(
        (transformer): GPT2Model(
          (wte): Embedding(50259, 768)
          (wpe): Embedding(1024, 768)
          (drop): Dropout(p=0.1, inplace=False)
          (h): ModuleList(
            (0-5): 6 x GPT2Block(
              (ln_1): LayerNorm((768,), eps=1e-05, elementwise_affine=True)
              (attn): GPT2Attention(
                (c_attn): Conv1D()
                (c_proj): Conv1D()
                (attn_dropout): Dropout(p=0.1, inplace=False)
                (resid_dropout): Dropout(p=0.1, inplace=False)
              )
              (ln_2): LayerNorm((768,), eps=1e-05, elementwise_affine=True)
              (mlp): GPT2MLP(
                (c_fc): Conv1D()
                (c_proj): Conv1D()
                (act): NewGELUActivation()
                (dropout): Dropout(p=0.1, inplace=False)
              )
            )
          )
          (ln_f): LayerNorm((768,), eps=1e-05, elementwise_affine=True)
        )
        (lm_head): Linear(in_features=768, out_features=50259, bias=False)
      )
    )
    (1): EmptyHeads()
  )
)
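The tree above shows a two-stage Sequential (a GPT-2 language-model head plus the project's EmptyHeads placeholder) wrapped in torch.nn.DataParallel. A hedged sketch of an equivalent setup using the standard Hugging Face class visible in the repr (GPTSingleHead and EmptyHeads are project-specific classes and are omitted; the 50259 vocabulary suggests two tokens were added to distilgpt2's default 50257, which is an inference, not something the log states):

    import torch
    from torch import nn
    from transformers import GPT2LMHeadModel

    # Load distilgpt2 (6 blocks, hidden size 768) and grow the embedding
    # table to the 50259-token vocabulary seen in the log.
    gpt = GPT2LMHeadModel.from_pretrained("distilgpt2")
    gpt.resize_token_embeddings(50259)

    # DataParallel replicates the module on every visible GPU and splits
    # each input batch along dim 0, which is where "n_gpu = 4" comes in.
    model = nn.DataParallel(gpt).to(torch.device("cuda"))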
|
2024-04-26 02:18:43,574 - trainer - INFO - Total params: 81914112
2024-04-26 02:18:43,574 - trainer - INFO - Trainable params: 81914112
2024-04-26 02:18:43,574 - trainer - INFO - Non-trainable params: 0
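The three totals above can be reproduced by summing tensor element counts (note that nn.Module.parameters() yields a shared tensor only once, so the tied wte/lm_head weight is not double-counted):

    def count_parameters(model) -> None:
        # Mirror the logged totals; total == trainable here because
        # every parameter requires grad in this run.
        total = sum(p.numel() for p in model.parameters())
        trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
        print(f"Total params: {total}")                      # 81914112
        print(f"Trainable params: {trainable}")              # 81914112
        print(f"Non-trainable params: {total - trainable}")  # 0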
|
2024-04-26 02:18:43,590 - trainer - INFO - Warmup-steps: 8160
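Warmup covers 8160 steps, exactly one epoch of the 40800 total. A plausible setup sketch (an assumption: the common linear warmup-then-decay schedule from transformers; the trainer's actual optimizer code and learning rate are not in this log):

    from torch.optim import AdamW
    from transformers import get_linear_schedule_with_warmup

    optimizer = AdamW(model.parameters(), lr=5e-5)  # lr is a placeholder, not logged
    scheduler = get_linear_schedule_with_warmup(
        optimizer,
        num_warmup_steps=8160,     # one full epoch of warmup, as logged
        num_training_steps=40800,  # 8160 steps/epoch x 5 epochs
    )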
|
2024-04-26 02:18:43,594 - trainer - INFO - ***** Running training *****
2024-04-26 02:18:43,594 - trainer - INFO - Num of training examples (actually iterations per epoch for Iterable Dataset) = 130556
2024-04-26 02:18:43,594 - trainer - INFO - Output path (short): tmp/model/distilgpt2_fine_tuned_coder
2024-04-26 02:18:43,594 - trainer - INFO - Steps per Epoch = 8160 or iterations per epoch = 8160
2024-04-26 02:18:43,594 - trainer - INFO - Num of Epochs = 5
2024-04-26 02:18:43,594 - trainer - INFO - Best score (perplexity) = -inf
2024-04-26 02:18:43,594 - trainer - INFO - Eval every 200 steps or every 200 iterations
2024-04-26 02:18:43,594 - trainer - INFO - Early stop = 3
2024-04-26 02:18:43,594 - trainer - INFO - Gradient Accumulation steps = 1
2024-04-26 02:18:43,594 - trainer - INFO - Total optimization steps = 40800
2024-04-26 02:18:43,594 - trainer - INFO - Instantaneous batch size per GPU = 4 and n_gpu = 4 so the input batch size = 16
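A quick check of the batch and step bookkeeping above (pure arithmetic over the logged values, no assumptions):

    import math

    per_gpu_batch, n_gpu = 4, 4
    input_batch = per_gpu_batch * n_gpu                # 16, as logged
    steps_per_epoch = math.ceil(130556 / input_batch)  # 8160 steps per epoch
    total_steps = steps_per_epoch * 5                  # 40800 optimization steps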
|
2024-04-26 02:25:03,634 - trainer - INFO - Save model to tmp/model/distilgpt2_fine_tuned_coder
2024-04-26 02:25:04,787 - trainer - INFO - Save check-point at epoch=0 step=200
2024-04-26 02:25:04,788 - trainer - INFO - ***** Evaluation report *****
2024-04-26 02:25:04,788 - trainer - INFO - Output path (short): tmp/model/distilgpt2_fine_tuned_coder
2024-04-26 02:25:04,788 - trainer - INFO - Early stop on: perplexity
2024-04-26 02:25:04,788 - trainer - INFO - Early stop count = 0/3
2024-04-26 02:25:04,788 - trainer - INFO - Eval steps = 200 or (iterations = 200)
2024-04-26 02:25:04,788 - trainer - INFO - Best score (perplexity) = -270.8600158691406
2024-04-26 02:25:04,788 - trainer - INFO - Gradient Accumulation steps = 1
2024-04-26 02:25:04,788 - trainer - INFO - Num of training examples (actually no. of iterations per epoch for Iterable Dataset) = 130556
2024-04-26 02:25:04,788 - trainer - INFO - Num of development examples (actually no. of iterations per epoch for Iterable Dataset) = 14507
2024-04-26 02:25:04,788 - trainer - INFO - Time spent since last evaluation = 0h 6m 21s
2024-04-26 02:25:04,788 - trainer - INFO - Epoch = 1/5
2024-04-26 02:25:04,788 - trainer - INFO - Steps = 200/40800
2024-04-26 02:25:04,788 - trainer - INFO - Instantaneous batch size per GPU = 4 and n_gpu = 4 so the input batch size = 16
2024-04-26 02:25:04,788 - trainer - INFO - dev_loss = 5.601602 || dev_eval_scores = {'perplexity': 270.8600158691406}
2024-04-26 02:25:04,789 - trainer - INFO - train_loss = 14.094216346740723
2024-04-26 02:25:04,789 - trainer - INFO -
********************************************
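The perplexity in each report is just exp(dev_loss), the exponential of the mean cross-entropy on the development set; checking the first report:

    import math

    dev_loss = 5.601602
    print(math.exp(dev_loss))  # ~270.86, matching {'perplexity': 270.8600158691406}

train_loss being much higher than dev_loss at this point presumably reflects averaging over the earliest, high-loss steps of the run.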
|
2024-04-26 02:31:25,346 - trainer - INFO - Save model to tmp/model/distilgpt2_fine_tuned_coder
2024-04-26 02:31:31,186 - trainer - INFO - Save check-point at epoch=0 step=400
2024-04-26 02:31:31,187 - trainer - INFO - ***** Evaluation report *****
2024-04-26 02:31:31,187 - trainer - INFO - Output path (short): tmp/model/distilgpt2_fine_tuned_coder
2024-04-26 02:31:31,187 - trainer - INFO - Early stop on: perplexity
2024-04-26 02:31:31,187 - trainer - INFO - Early stop count = 0/3
2024-04-26 02:31:31,187 - trainer - INFO - Eval steps = 200 or (iterations = 200)
2024-04-26 02:31:31,187 - trainer - INFO - Best score (perplexity) = -10.156302452087402
2024-04-26 02:31:31,187 - trainer - INFO - Gradient Accumulation steps = 1
2024-04-26 02:31:31,187 - trainer - INFO - Num of training examples (actually no. of iterations per epoch for Iterable Dataset) = 130556
2024-04-26 02:31:31,187 - trainer - INFO - Num of development examples (actually no. of iterations per epoch for Iterable Dataset) = 14507
2024-04-26 02:31:31,187 - trainer - INFO - Time spent since last evaluation = 0h 6m 26s
2024-04-26 02:31:31,187 - trainer - INFO - Epoch = 1/5
2024-04-26 02:31:31,187 - trainer - INFO - Steps = 400/40800
2024-04-26 02:31:31,187 - trainer - INFO - Instantaneous batch size per GPU = 4 and n_gpu = 4 so the input batch size = 16
2024-04-26 02:31:31,187 - trainer - INFO - dev_loss = 2.318094 || dev_eval_scores = {'perplexity': 10.156302452087402}
2024-04-26 02:31:31,220 - trainer - INFO - train_loss = 8.5648775100708
2024-04-26 02:31:31,220 - trainer - INFO -
********************************************

2024-04-26 02:37:51,756 - trainer - INFO - Save model to tmp/model/distilgpt2_fine_tuned_coder
2024-04-26 02:37:57,711 - trainer - INFO - Save check-point at epoch=0 step=600
2024-04-26 02:37:57,711 - trainer - INFO - ***** Evaluation report *****
2024-04-26 02:37:57,711 - trainer - INFO - Output path (short): tmp/model/distilgpt2_fine_tuned_coder
2024-04-26 02:37:57,711 - trainer - INFO - Early stop on: perplexity
2024-04-26 02:37:57,711 - trainer - INFO - Early stop count = 0/3
2024-04-26 02:37:57,711 - trainer - INFO - Eval steps = 200 or (iterations = 200)
2024-04-26 02:37:57,711 - trainer - INFO - Best score (perplexity) = -7.607259750366211
2024-04-26 02:37:57,712 - trainer - INFO - Gradient Accumulation steps = 1
2024-04-26 02:37:57,712 - trainer - INFO - Num of training examples (actually no. of iterations per epoch for Iterable Dataset) = 130556
2024-04-26 02:37:57,712 - trainer - INFO - Num of development examples (actually no. of iterations per epoch for Iterable Dataset) = 14507
2024-04-26 02:37:57,712 - trainer - INFO - Time spent since last evaluation = 0h 6m 26s
2024-04-26 02:37:57,712 - trainer - INFO - Epoch = 1/5
2024-04-26 02:37:57,712 - trainer - INFO - Steps = 600/40800
2024-04-26 02:37:57,712 - trainer - INFO - Instantaneous batch size per GPU = 4 and n_gpu = 4 so the input batch size = 16
2024-04-26 02:37:57,712 - trainer - INFO - dev_loss = 2.029103 || dev_eval_scores = {'perplexity': 7.607259750366211}
2024-04-26 02:37:57,712 - trainer - INFO - train_loss = 6.4544525146484375
2024-04-26 02:37:57,712 - trainer - INFO -
********************************************

2024-04-26 02:44:17,920 - trainer - INFO - Save model to tmp/model/distilgpt2_fine_tuned_coder
2024-04-26 02:44:23,766 - trainer - INFO - Save check-point at epoch=0 step=800
2024-04-26 02:44:23,766 - trainer - INFO - ***** Evaluation report *****
2024-04-26 02:44:23,767 - trainer - INFO - Output path (short): tmp/model/distilgpt2_fine_tuned_coder
2024-04-26 02:44:23,767 - trainer - INFO - Early stop on: perplexity
2024-04-26 02:44:23,767 - trainer - INFO - Early stop count = 0/3
2024-04-26 02:44:23,767 - trainer - INFO - Eval steps = 200 or (iterations = 200)
2024-04-26 02:44:23,767 - trainer - INFO - Best score (perplexity) = -6.791029453277588
2024-04-26 02:44:23,767 - trainer - INFO - Gradient Accumulation steps = 1
2024-04-26 02:44:23,767 - trainer - INFO - Num of training examples (actually no. of iterations per epoch for Iterable Dataset) = 130556
2024-04-26 02:44:23,767 - trainer - INFO - Num of development examples (actually no. of iterations per epoch for Iterable Dataset) = 14507
2024-04-26 02:44:23,767 - trainer - INFO - Time spent since last evaluation = 0h 6m 26s
2024-04-26 02:44:23,767 - trainer - INFO - Epoch = 1/5
2024-04-26 02:44:23,767 - trainer - INFO - Steps = 800/40800
2024-04-26 02:44:23,767 - trainer - INFO - Instantaneous batch size per GPU = 4 and n_gpu = 4 so the input batch size = 16
2024-04-26 02:44:23,767 - trainer - INFO - dev_loss = 1.915603 || dev_eval_scores = {'perplexity': 6.791029453277588}
2024-04-26 02:44:23,767 - trainer - INFO - train_loss = 5.3493781089782715
2024-04-26 02:44:23,768 - trainer - INFO -
********************************************

2024-04-26 02:50:43,526 - trainer - INFO - Save model to tmp/model/distilgpt2_fine_tuned_coder
2024-04-26 02:50:49,464 - trainer - INFO - Save check-point at epoch=0 step=1000
2024-04-26 02:50:49,464 - trainer - INFO - ***** Evaluation report *****
2024-04-26 02:50:49,464 - trainer - INFO - Output path (short): tmp/model/distilgpt2_fine_tuned_coder
2024-04-26 02:50:49,464 - trainer - INFO - Early stop on: perplexity
2024-04-26 02:50:49,464 - trainer - INFO - Early stop count = 0/3
2024-04-26 02:50:49,464 - trainer - INFO - Eval steps = 200 or (iterations = 200)
2024-04-26 02:50:49,464 - trainer - INFO - Best score (perplexity) = -6.073063373565674
2024-04-26 02:50:49,464 - trainer - INFO - Gradient Accumulation steps = 1
2024-04-26 02:50:49,464 - trainer - INFO - Num of training examples (actually no. of iterations per epoch for Iterable Dataset) = 130556
2024-04-26 02:50:49,464 - trainer - INFO - Num of development examples (actually no. of iterations per epoch for Iterable Dataset) = 14507
2024-04-26 02:50:49,464 - trainer - INFO - Time spent since last evaluation = 0h 6m 25s
2024-04-26 02:50:49,465 - trainer - INFO - Epoch = 1/5
2024-04-26 02:50:49,465 - trainer - INFO - Steps = 1000/40800
2024-04-26 02:50:49,465 - trainer - INFO - Instantaneous batch size per GPU = 4 and n_gpu = 4 so the input batch size = 16
2024-04-26 02:50:49,465 - trainer - INFO - dev_loss = 1.803863 || dev_eval_scores = {'perplexity': 6.073063373565674}
2024-04-26 02:50:49,465 - trainer - INFO - train_loss = 4.66662073135376
2024-04-26 02:50:49,465 - trainer - INFO -
********************************************
|
2024-04-26 02:57:09,707 - trainer - INFO - ***** Evaluation report *****
2024-04-26 02:57:09,707 - trainer - INFO - Output path (short): tmp/model/distilgpt2_fine_tuned_coder
2024-04-26 02:57:09,707 - trainer - INFO - Early stop on: perplexity
2024-04-26 02:57:09,707 - trainer - INFO - Early stop count = 1/3
2024-04-26 02:57:09,707 - trainer - INFO - Eval steps = 200 or (iterations = 200)
2024-04-26 02:57:09,707 - trainer - INFO - Best score (perplexity) = -6.073063373565674
2024-04-26 02:57:09,708 - trainer - INFO - Gradient Accumulation steps = 1
2024-04-26 02:57:09,708 - trainer - INFO - Num of training examples (actually no. of iterations per epoch for Iterable Dataset) = 130556
2024-04-26 02:57:09,708 - trainer - INFO - Num of development examples (actually no. of iterations per epoch for Iterable Dataset) = 14507
2024-04-26 02:57:09,708 - trainer - INFO - Time spent since last evaluation = 0h 6m 20s
2024-04-26 02:57:09,708 - trainer - INFO - Epoch = 1/5
2024-04-26 02:57:09,708 - trainer - INFO - Steps = 1200/40800
2024-04-26 02:57:09,708 - trainer - INFO - Instantaneous batch size per GPU = 4 and n_gpu = 4 so the input batch size = 16
2024-04-26 02:57:09,708 - trainer - INFO - dev_loss = 1.808444 || dev_eval_scores = {'perplexity': 6.100945472717285}
2024-04-26 02:57:09,708 - trainer - INFO - train_loss = 4.205338001251221
2024-04-26 02:57:09,708 - trainer - INFO -
********************************************
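This is the first report without a preceding "Save model" line: dev perplexity rose from the best 6.073 to 6.101, so the early-stop counter moved to 1/3 and no checkpoint was written. The "Best score (perplexity) = -..." convention indicates the trainer maximizes negative perplexity. A minimal sketch of that logic (inferred from the logged behavior, not the project's actual code):

    best_score = float("-inf")  # matches "Best score (perplexity) = -inf" at startup
    early_stop_count, patience = 0, 3

    def save_checkpoint() -> None:
        # Stub for illustration; the real trainer writes to
        # tmp/model/distilgpt2_fine_tuned_coder.
        pass

    def on_evaluation(perplexity: float) -> bool:
        # Return True when training should stop early.
        global best_score, early_stop_count
        score = -perplexity            # lower perplexity => higher score
        if score > best_score:
            best_score = score
            early_stop_count = 0
            save_checkpoint()
        else:
            early_stop_count += 1      # e.g. 1/3 at step 1200 above
        return early_stop_count >= patience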
|
2024-04-26 03:03:30,335 - trainer - INFO - Save model to tmp/model/distilgpt2_fine_tuned_coder
2024-04-26 03:03:36,292 - trainer - INFO - Save check-point at epoch=0 step=1400
2024-04-26 03:03:36,292 - trainer - INFO - ***** Evaluation report *****
2024-04-26 03:03:36,292 - trainer - INFO - Output path (short): tmp/model/distilgpt2_fine_tuned_coder
2024-04-26 03:03:36,292 - trainer - INFO - Early stop on: perplexity
2024-04-26 03:03:36,292 - trainer - INFO - Early stop count = 0/3
2024-04-26 03:03:36,292 - trainer - INFO - Eval steps = 200 or (iterations = 200)
2024-04-26 03:03:36,292 - trainer - INFO - Best score (perplexity) = -5.51066780090332
2024-04-26 03:03:36,292 - trainer - INFO - Gradient Accumulation steps = 1
2024-04-26 03:03:36,292 - trainer - INFO - Num of training examples (actually no. of iterations per epoch for Iterable Dataset) = 130556
2024-04-26 03:03:36,292 - trainer - INFO - Num of development examples (actually no. of iterations per epoch for Iterable Dataset) = 14507
2024-04-26 03:03:36,292 - trainer - INFO - Time spent since last evaluation = 0h 6m 26s
2024-04-26 03:03:36,293 - trainer - INFO - Epoch = 1/5
2024-04-26 03:03:36,293 - trainer - INFO - Steps = 1400/40800
2024-04-26 03:03:36,293 - trainer - INFO - Instantaneous batch size per GPU = 4 and n_gpu = 4 so the input batch size = 16
2024-04-26 03:03:36,293 - trainer - INFO - dev_loss = 1.706686 || dev_eval_scores = {'perplexity': 5.51066780090332}
2024-04-26 03:03:36,293 - trainer - INFO - train_loss = 3.8646857738494873
2024-04-26 03:03:36,293 - trainer - INFO -
********************************************

2024-04-26 03:09:56,141 - trainer - INFO - Save model to tmp/model/distilgpt2_fine_tuned_coder
2024-04-26 03:10:02,087 - trainer - INFO - Save check-point at epoch=0 step=1600
2024-04-26 03:10:02,087 - trainer - INFO - ***** Evaluation report *****
2024-04-26 03:10:02,088 - trainer - INFO - Output path (short): tmp/model/distilgpt2_fine_tuned_coder
2024-04-26 03:10:02,088 - trainer - INFO - Early stop on: perplexity
2024-04-26 03:10:02,088 - trainer - INFO - Early stop count = 0/3
2024-04-26 03:10:02,088 - trainer - INFO - Eval steps = 200 or (iterations = 200)
2024-04-26 03:10:02,088 - trainer - INFO - Best score (perplexity) = -5.361582279205322
2024-04-26 03:10:02,088 - trainer - INFO - Gradient Accumulation steps = 1
2024-04-26 03:10:02,088 - trainer - INFO - Num of training examples (actually no. of iterations per epoch for Iterable Dataset) = 130556
2024-04-26 03:10:02,088 - trainer - INFO - Num of development examples (actually no. of iterations per epoch for Iterable Dataset) = 14507
2024-04-26 03:10:02,088 - trainer - INFO - Time spent since last evaluation = 0h 6m 25s
2024-04-26 03:10:02,088 - trainer - INFO - Epoch = 1/5
2024-04-26 03:10:02,088 - trainer - INFO - Steps = 1600/40800
2024-04-26 03:10:02,088 - trainer - INFO - Instantaneous batch size per GPU = 4 and n_gpu = 4 so the input batch size = 16
2024-04-26 03:10:02,088 - trainer - INFO - dev_loss = 1.679259 || dev_eval_scores = {'perplexity': 5.361582279205322}
2024-04-26 03:10:02,088 - trainer - INFO - train_loss = 3.60662579536438
2024-04-26 03:10:02,089 - trainer - INFO -
********************************************

2024-04-26 03:16:22,224 - trainer - INFO - Save model to tmp/model/distilgpt2_fine_tuned_coder
2024-04-26 03:16:28,030 - trainer - INFO - Save check-point at epoch=0 step=1800
2024-04-26 03:16:28,030 - trainer - INFO - ***** Evaluation report *****
2024-04-26 03:16:28,030 - trainer - INFO - Output path (short): tmp/model/distilgpt2_fine_tuned_coder
2024-04-26 03:16:28,030 - trainer - INFO - Early stop on: perplexity
2024-04-26 03:16:28,030 - trainer - INFO - Early stop count = 0/3
2024-04-26 03:16:28,030 - trainer - INFO - Eval steps = 200 or (iterations = 200)
2024-04-26 03:16:28,030 - trainer - INFO - Best score (perplexity) = -5.1808762550354
2024-04-26 03:16:28,031 - trainer - INFO - Gradient Accumulation steps = 1
2024-04-26 03:16:28,031 - trainer - INFO - Num of training examples (actually no. of iterations per epoch for Iterable Dataset) = 130556
2024-04-26 03:16:28,031 - trainer - INFO - Num of development examples (actually no. of iterations per epoch for Iterable Dataset) = 14507
2024-04-26 03:16:28,031 - trainer - INFO - Time spent since last evaluation = 0h 6m 25s
2024-04-26 03:16:28,031 - trainer - INFO - Epoch = 1/5
2024-04-26 03:16:28,031 - trainer - INFO - Steps = 1800/40800
2024-04-26 03:16:28,031 - trainer - INFO - Instantaneous batch size per GPU = 4 and n_gpu = 4 so the input batch size = 16
2024-04-26 03:16:28,031 - trainer - INFO - dev_loss = 1.644974 || dev_eval_scores = {'perplexity': 5.1808762550354}
2024-04-26 03:16:28,031 - trainer - INFO - train_loss = 3.401608943939209
2024-04-26 03:16:28,031 - trainer - INFO -
********************************************

2024-04-26 03:22:47,629 - trainer - INFO - Save model to tmp/model/distilgpt2_fine_tuned_coder
2024-04-26 03:22:53,548 - trainer - INFO - Save check-point at epoch=0 step=2000
2024-04-26 03:22:53,549 - trainer - INFO - ***** Evaluation report *****
2024-04-26 03:22:53,549 - trainer - INFO - Output path (short): tmp/model/distilgpt2_fine_tuned_coder
2024-04-26 03:22:53,549 - trainer - INFO - Early stop on: perplexity
2024-04-26 03:22:53,549 - trainer - INFO - Early stop count = 0/3
2024-04-26 03:22:53,549 - trainer - INFO - Eval steps = 200 or (iterations = 200)
2024-04-26 03:22:53,549 - trainer - INFO - Best score (perplexity) = -4.970845699310303
2024-04-26 03:22:53,549 - trainer - INFO - Gradient Accumulation steps = 1
2024-04-26 03:22:53,549 - trainer - INFO - Num of training examples (actually no. of iterations per epoch for Iterable Dataset) = 130556
2024-04-26 03:22:53,549 - trainer - INFO - Num of development examples (actually no. of iterations per epoch for Iterable Dataset) = 14507
2024-04-26 03:22:53,549 - trainer - INFO - Time spent since last evaluation = 0h 6m 25s
2024-04-26 03:22:53,549 - trainer - INFO - Epoch = 1/5
2024-04-26 03:22:53,549 - trainer - INFO - Steps = 2000/40800
2024-04-26 03:22:53,549 - trainer - INFO - Instantaneous batch size per GPU = 4 and n_gpu = 4 so the input batch size = 16
2024-04-26 03:22:53,549 - trainer - INFO - dev_loss = 1.603590 || dev_eval_scores = {'perplexity': 4.970845699310303}
2024-04-26 03:22:53,550 - trainer - INFO - train_loss = 3.2337915897369385
2024-04-26 03:22:53,550 - trainer - INFO -
********************************************

2024-04-26 03:29:13,045 - trainer - INFO - Save model to tmp/model/distilgpt2_fine_tuned_coder
2024-04-26 03:29:18,974 - trainer - INFO - Save check-point at epoch=0 step=2200
2024-04-26 03:29:18,975 - trainer - INFO - ***** Evaluation report *****
2024-04-26 03:29:18,975 - trainer - INFO - Output path (short): tmp/model/distilgpt2_fine_tuned_coder
2024-04-26 03:29:18,975 - trainer - INFO - Early stop on: perplexity
2024-04-26 03:29:18,975 - trainer - INFO - Early stop count = 0/3
2024-04-26 03:29:18,975 - trainer - INFO - Eval steps = 200 or (iterations = 200)
2024-04-26 03:29:18,975 - trainer - INFO - Best score (perplexity) = -4.858333587646484
2024-04-26 03:29:18,975 - trainer - INFO - Gradient Accumulation steps = 1
2024-04-26 03:29:18,975 - trainer - INFO - Num of training examples (actually no. of iterations per epoch for Iterable Dataset) = 130556
2024-04-26 03:29:18,975 - trainer - INFO - Num of development examples (actually no. of iterations per epoch for Iterable Dataset) = 14507
2024-04-26 03:29:18,976 - trainer - INFO - Time spent since last evaluation = 0h 6m 25s
2024-04-26 03:29:18,976 - trainer - INFO - Epoch = 1/5
2024-04-26 03:29:18,976 - trainer - INFO - Steps = 2200/40800
2024-04-26 03:29:18,976 - trainer - INFO - Instantaneous batch size per GPU = 4 and n_gpu = 4 so the input batch size = 16
2024-04-26 03:29:18,976 - trainer - INFO - dev_loss = 1.580696 || dev_eval_scores = {'perplexity': 4.858333587646484}
2024-04-26 03:29:18,976 - trainer - INFO - train_loss = 3.092155694961548
2024-04-26 03:29:18,976 - trainer - INFO -
********************************************

2024-04-26 03:35:38,899 - trainer - INFO - Save model to tmp/model/distilgpt2_fine_tuned_coder
2024-04-26 03:35:44,832 - trainer - INFO - Save check-point at epoch=0 step=2400
2024-04-26 03:35:44,832 - trainer - INFO - ***** Evaluation report *****
2024-04-26 03:35:44,832 - trainer - INFO - Output path (short): tmp/model/distilgpt2_fine_tuned_coder
2024-04-26 03:35:44,832 - trainer - INFO - Early stop on: perplexity
2024-04-26 03:35:44,832 - trainer - INFO - Early stop count = 0/3
2024-04-26 03:35:44,832 - trainer - INFO - Eval steps = 200 or (iterations = 200)
2024-04-26 03:35:44,832 - trainer - INFO - Best score (perplexity) = -4.7346601486206055
2024-04-26 03:35:44,833 - trainer - INFO - Gradient Accumulation steps = 1
2024-04-26 03:35:44,833 - trainer - INFO - Num of training examples (actually no. of iterations per epoch for Iterable Dataset) = 130556
2024-04-26 03:35:44,833 - trainer - INFO - Num of development examples (actually no. of iterations per epoch for Iterable Dataset) = 14507
2024-04-26 03:35:44,833 - trainer - INFO - Time spent since last evaluation = 0h 6m 25s
2024-04-26 03:35:44,833 - trainer - INFO - Epoch = 1/5
2024-04-26 03:35:44,833 - trainer - INFO - Steps = 2400/40800
2024-04-26 03:35:44,833 - trainer - INFO - Instantaneous batch size per GPU = 4 and n_gpu = 4 so the input batch size = 16
2024-04-26 03:35:44,833 - trainer - INFO - dev_loss = 1.554910 || dev_eval_scores = {'perplexity': 4.7346601486206055}
2024-04-26 03:35:44,833 - trainer - INFO - train_loss = 2.974703311920166
2024-04-26 03:35:44,833 - trainer - INFO -
********************************************
|
2024-04-26 03:42:04,939 - trainer - INFO - Save model to tmp/model/distilgpt2_fine_tuned_coder
2024-04-26 03:42:10,876 - trainer - INFO - Save check-point at epoch=0 step=2600
2024-04-26 03:42:10,877 - trainer - INFO - ***** Evaluation report *****
2024-04-26 03:42:10,877 - trainer - INFO - Output path (short): tmp/model/distilgpt2_fine_tuned_coder
2024-04-26 03:42:10,877 - trainer - INFO - Early stop on: perplexity
2024-04-26 03:42:10,877 - trainer - INFO - Early stop count = 0/3
2024-04-26 03:42:10,877 - trainer - INFO - Eval steps = 200 or (iterations = 200)
2024-04-26 03:42:10,877 - trainer - INFO - Best score (perplexity) = -4.624922275543213
2024-04-26 03:42:10,877 - trainer - INFO - Gradient Accumulation steps = 1
2024-04-26 03:42:10,877 - trainer - INFO - Num of training examples (actually no. of iterations per epoch for Iterable Dataset) = 130556
2024-04-26 03:42:10,877 - trainer - INFO - Num of development examples (actually no. of iterations per epoch for Iterable Dataset) = 14507
2024-04-26 03:42:10,877 - trainer - INFO - Time spent since last evaluation = 0h 6m 26s
2024-04-26 03:42:10,877 - trainer - INFO - Epoch = 1/5
2024-04-26 03:42:10,877 - trainer - INFO - Steps = 2600/40800
2024-04-26 03:42:10,877 - trainer - INFO - Instantaneous batch size per GPU = 4 and n_gpu = 4 so the input batch size = 16
2024-04-26 03:42:10,877 - trainer - INFO - dev_loss = 1.531460 || dev_eval_scores = {'perplexity': 4.624922275543213}
2024-04-26 03:42:10,878 - trainer - INFO - train_loss = 2.8716752529144287
2024-04-26 03:42:10,878 - trainer - INFO -
********************************************

2024-04-26 03:48:30,754 - trainer - INFO - Save model to tmp/model/distilgpt2_fine_tuned_coder
2024-04-26 03:48:36,689 - trainer - INFO - Save check-point at epoch=0 step=2800
2024-04-26 03:48:36,690 - trainer - INFO - ***** Evaluation report *****
2024-04-26 03:48:36,690 - trainer - INFO - Output path (short): tmp/model/distilgpt2_fine_tuned_coder
2024-04-26 03:48:36,690 - trainer - INFO - Early stop on: perplexity
2024-04-26 03:48:36,690 - trainer - INFO - Early stop count = 0/3
2024-04-26 03:48:36,690 - trainer - INFO - Eval steps = 200 or (iterations = 200)
2024-04-26 03:48:36,690 - trainer - INFO - Best score (perplexity) = -4.533045291900635
2024-04-26 03:48:36,690 - trainer - INFO - Gradient Accumulation steps = 1
2024-04-26 03:48:36,690 - trainer - INFO - Num of training examples (actually no. of iterations per epoch for Iterable Dataset) = 130556
2024-04-26 03:48:36,690 - trainer - INFO - Num of development examples (actually no. of iterations per epoch for Iterable Dataset) = 14507
2024-04-26 03:48:36,690 - trainer - INFO - Time spent since last evaluation = 0h 6m 25s
2024-04-26 03:48:36,690 - trainer - INFO - Epoch = 1/5
2024-04-26 03:48:36,690 - trainer - INFO - Steps = 2800/40800
2024-04-26 03:48:36,690 - trainer - INFO - Instantaneous batch size per GPU = 4 and n_gpu = 4 so the input batch size = 16
2024-04-26 03:48:36,691 - trainer - INFO - dev_loss = 1.511394 || dev_eval_scores = {'perplexity': 4.533045291900635}
2024-04-26 03:48:36,691 - trainer - INFO - train_loss = 2.781400680541992
2024-04-26 03:48:36,691 - trainer - INFO -
********************************************

2024-04-26 03:54:56,573 - trainer - INFO - Save model to tmp/model/distilgpt2_fine_tuned_coder
2024-04-26 03:55:02,481 - trainer - INFO - Save check-point at epoch=0 step=3000
2024-04-26 03:55:02,482 - trainer - INFO - ***** Evaluation report *****
2024-04-26 03:55:02,482 - trainer - INFO - Output path (short): tmp/model/distilgpt2_fine_tuned_coder
2024-04-26 03:55:02,482 - trainer - INFO - Early stop on: perplexity
2024-04-26 03:55:02,482 - trainer - INFO - Early stop count = 0/3
2024-04-26 03:55:02,482 - trainer - INFO - Eval steps = 200 or (iterations = 200)
2024-04-26 03:55:02,482 - trainer - INFO - Best score (perplexity) = -4.453883647918701
2024-04-26 03:55:02,482 - trainer - INFO - Gradient Accumulation steps = 1
2024-04-26 03:55:02,482 - trainer - INFO - Num of training examples (actually no. of iterations per epoch for Iterable Dataset) = 130556
2024-04-26 03:55:02,482 - trainer - INFO - Num of development examples (actually no. of iterations per epoch for Iterable Dataset) = 14507
2024-04-26 03:55:02,482 - trainer - INFO - Time spent since last evaluation = 0h 6m 25s
2024-04-26 03:55:02,482 - trainer - INFO - Epoch = 1/5
2024-04-26 03:55:02,482 - trainer - INFO - Steps = 3000/40800
2024-04-26 03:55:02,482 - trainer - INFO - Instantaneous batch size per GPU = 4 and n_gpu = 4 so the input batch size = 16
2024-04-26 03:55:02,482 - trainer - INFO - dev_loss = 1.493776 || dev_eval_scores = {'perplexity': 4.453883647918701}
2024-04-26 03:55:02,482 - trainer - INFO - train_loss = 2.702195167541504
2024-04-26 03:55:02,483 - trainer - INFO -
********************************************

2024-04-26 04:01:21,916 - trainer - INFO - Save model to tmp/model/distilgpt2_fine_tuned_coder
2024-04-26 04:01:27,748 - trainer - INFO - Save check-point at epoch=0 step=3200
2024-04-26 04:01:27,748 - trainer - INFO - ***** Evaluation report *****
2024-04-26 04:01:27,749 - trainer - INFO - Output path (short): tmp/model/distilgpt2_fine_tuned_coder
2024-04-26 04:01:27,749 - trainer - INFO - Early stop on: perplexity
2024-04-26 04:01:27,749 - trainer - INFO - Early stop count = 0/3
2024-04-26 04:01:27,749 - trainer - INFO - Eval steps = 200 or (iterations = 200)
2024-04-26 04:01:27,749 - trainer - INFO - Best score (perplexity) = -4.359768867492676
2024-04-26 04:01:27,749 - trainer - INFO - Gradient Accumulation steps = 1
2024-04-26 04:01:27,749 - trainer - INFO - Num of training examples (actually no. of iterations per epoch for Iterable Dataset) = 130556
2024-04-26 04:01:27,749 - trainer - INFO - Num of development examples (actually no. of iterations per epoch for Iterable Dataset) = 14507
2024-04-26 04:01:27,749 - trainer - INFO - Time spent since last evaluation = 0h 6m 25s
2024-04-26 04:01:27,749 - trainer - INFO - Epoch = 1/5
2024-04-26 04:01:27,749 - trainer - INFO - Steps = 3200/40800
2024-04-26 04:01:27,749 - trainer - INFO - Instantaneous batch size per GPU = 4 and n_gpu = 4 so the input batch size = 16
2024-04-26 04:01:27,749 - trainer - INFO - dev_loss = 1.472419 || dev_eval_scores = {'perplexity': 4.359768867492676}
2024-04-26 04:01:27,749 - trainer - INFO - train_loss = 2.6316800117492676
2024-04-26 04:01:27,749 - trainer - INFO -
********************************************

2024-04-26 04:07:47,186 - trainer - INFO - Save model to tmp/model/distilgpt2_fine_tuned_coder
2024-04-26 04:07:52,996 - trainer - INFO - Save check-point at epoch=0 step=3400
2024-04-26 04:07:52,996 - trainer - INFO - ***** Evaluation report *****
2024-04-26 04:07:52,996 - trainer - INFO - Output path (short): tmp/model/distilgpt2_fine_tuned_coder
2024-04-26 04:07:52,996 - trainer - INFO - Early stop on: perplexity
2024-04-26 04:07:52,996 - trainer - INFO - Early stop count = 0/3
2024-04-26 04:07:52,996 - trainer - INFO - Eval steps = 200 or (iterations = 200)
2024-04-26 04:07:52,996 - trainer - INFO - Best score (perplexity) = -4.2930779457092285
2024-04-26 04:07:52,996 - trainer - INFO - Gradient Accumulation steps = 1
2024-04-26 04:07:52,996 - trainer - INFO - Num of training examples (actually no. of iterations per epoch for Iterable Dataset) = 130556
2024-04-26 04:07:52,997 - trainer - INFO - Num of development examples (actually no. of iterations per epoch for Iterable Dataset) = 14507
2024-04-26 04:07:52,997 - trainer - INFO - Time spent since last evaluation = 0h 6m 25s
2024-04-26 04:07:52,997 - trainer - INFO - Epoch = 1/5
2024-04-26 04:07:52,997 - trainer - INFO - Steps = 3400/40800
2024-04-26 04:07:52,997 - trainer - INFO - Instantaneous batch size per GPU = 4 and n_gpu = 4 so the input batch size = 16
2024-04-26 04:07:52,997 - trainer - INFO - dev_loss = 1.457004 || dev_eval_scores = {'perplexity': 4.2930779457092285}
2024-04-26 04:07:52,997 - trainer - INFO - train_loss = 2.5693111419677734
2024-04-26 04:07:52,997 - trainer - INFO -
********************************************

2024-04-26 04:14:12,633 - trainer - INFO - Save model to tmp/model/distilgpt2_fine_tuned_coder
2024-04-26 04:14:18,310 - trainer - INFO - Save check-point at epoch=0 step=3600
2024-04-26 04:14:18,310 - trainer - INFO - ***** Evaluation report *****
2024-04-26 04:14:18,310 - trainer - INFO - Output path (short): tmp/model/distilgpt2_fine_tuned_coder
2024-04-26 04:14:18,310 - trainer - INFO - Early stop on: perplexity
2024-04-26 04:14:18,310 - trainer - INFO - Early stop count = 0/3
2024-04-26 04:14:18,310 - trainer - INFO - Eval steps = 200 or (iterations = 200)
2024-04-26 04:14:18,310 - trainer - INFO - Best score (perplexity) = -4.221639633178711
2024-04-26 04:14:18,310 - trainer - INFO - Gradient Accumulation steps = 1
2024-04-26 04:14:18,310 - trainer - INFO - Num of training examples (actually no. of iterations per epoch for Iterable Dataset) = 130556
2024-04-26 04:14:18,310 - trainer - INFO - Num of development examples (actually no. of iterations per epoch for Iterable Dataset) = 14507
2024-04-26 04:14:18,311 - trainer - INFO - Time spent since last evaluation = 0h 6m 25s
2024-04-26 04:14:18,311 - trainer - INFO - Epoch = 1/5
2024-04-26 04:14:18,311 - trainer - INFO - Steps = 3600/40800
2024-04-26 04:14:18,311 - trainer - INFO - Instantaneous batch size per GPU = 4 and n_gpu = 4 so the input batch size = 16
2024-04-26 04:14:18,311 - trainer - INFO - dev_loss = 1.440224 || dev_eval_scores = {'perplexity': 4.221639633178711}
2024-04-26 04:14:18,311 - trainer - INFO - train_loss = 2.5129594802856445
2024-04-26 04:14:18,311 - trainer - INFO -
********************************************
|
2024-04-26 04:20:38,684 - trainer - INFO - Save model to tmp/model/distilgpt2_fine_tuned_coder
2024-04-26 04:20:44,511 - trainer - INFO - Save check-point at epoch=0 step=3800
2024-04-26 04:20:44,512 - trainer - INFO - ***** Evaluation report *****
2024-04-26 04:20:44,512 - trainer - INFO - Output path (short): tmp/model/distilgpt2_fine_tuned_coder
2024-04-26 04:20:44,512 - trainer - INFO - Early stop on: perplexity
2024-04-26 04:20:44,512 - trainer - INFO - Early stop count = 0/3
2024-04-26 04:20:44,512 - trainer - INFO - Eval steps = 200 or (iterations = 200)
2024-04-26 04:20:44,512 - trainer - INFO - Best score (perplexity) = -4.147531986236572
2024-04-26 04:20:44,512 - trainer - INFO - Gradient Accumulation steps = 1
2024-04-26 04:20:44,512 - trainer - INFO - Num of training examples (actually no. of iterations per epoch for Iterable Dataset) = 130556
2024-04-26 04:20:44,512 - trainer - INFO - Num of development examples (actually no. of iterations per epoch for Iterable Dataset) = 14507
2024-04-26 04:20:44,512 - trainer - INFO - Time spent since last evaluation = 0h 6m 26s
2024-04-26 04:20:44,512 - trainer - INFO - Epoch = 1/5
2024-04-26 04:20:44,512 - trainer - INFO - Steps = 3800/40800
2024-04-26 04:20:44,512 - trainer - INFO - Instantaneous batch size per GPU = 4 and n_gpu = 4 so the input batch size = 16
2024-04-26 04:20:44,513 - trainer - INFO - dev_loss = 1.422513 || dev_eval_scores = {'perplexity': 4.147531986236572}
2024-04-26 04:20:44,513 - trainer - INFO - train_loss = 2.460688829421997
2024-04-26 04:20:44,513 - trainer - INFO -
********************************************

2024-04-26 04:27:04,526 - trainer - INFO - Save model to tmp/model/distilgpt2_fine_tuned_coder
2024-04-26 04:27:10,385 - trainer - INFO - Save check-point at epoch=0 step=4000
2024-04-26 04:27:10,386 - trainer - INFO - ***** Evaluation report *****
2024-04-26 04:27:10,386 - trainer - INFO - Output path (short): tmp/model/distilgpt2_fine_tuned_coder
2024-04-26 04:27:10,386 - trainer - INFO - Early stop on: perplexity
2024-04-26 04:27:10,386 - trainer - INFO - Early stop count = 0/3
2024-04-26 04:27:10,386 - trainer - INFO - Eval steps = 200 or (iterations = 200)
2024-04-26 04:27:10,386 - trainer - INFO - Best score (perplexity) = -4.087435722351074
2024-04-26 04:27:10,386 - trainer - INFO - Gradient Accumulation steps = 1
2024-04-26 04:27:10,386 - trainer - INFO - Num of training examples (actually no. of iterations per epoch for Iterable Dataset) = 130556
2024-04-26 04:27:10,386 - trainer - INFO - Num of development examples (actually no. of iterations per epoch for Iterable Dataset) = 14507
2024-04-26 04:27:10,386 - trainer - INFO - Time spent since last evaluation = 0h 6m 25s
2024-04-26 04:27:10,386 - trainer - INFO - Epoch = 1/5
2024-04-26 04:27:10,386 - trainer - INFO - Steps = 4000/40800
2024-04-26 04:27:10,386 - trainer - INFO - Instantaneous batch size per GPU = 4 and n_gpu = 4 so the input batch size = 16
2024-04-26 04:27:10,386 - trainer - INFO - dev_loss = 1.407918 || dev_eval_scores = {'perplexity': 4.087435722351074}
2024-04-26 04:27:10,387 - trainer - INFO - train_loss = 2.4136698246002197
2024-04-26 04:27:10,387 - trainer - INFO -
********************************************

2024-04-26 04:33:30,165 - trainer - INFO - Save model to tmp/model/distilgpt2_fine_tuned_coder
2024-04-26 04:33:36,001 - trainer - INFO - Save check-point at epoch=0 step=4200
2024-04-26 04:33:36,001 - trainer - INFO - ***** Evaluation report *****
2024-04-26 04:33:36,001 - trainer - INFO - Output path (short): tmp/model/distilgpt2_fine_tuned_coder
2024-04-26 04:33:36,001 - trainer - INFO - Early stop on: perplexity
2024-04-26 04:33:36,001 - trainer - INFO - Early stop count = 0/3
2024-04-26 04:33:36,001 - trainer - INFO - Eval steps = 200 or (iterations = 200)
2024-04-26 04:33:36,001 - trainer - INFO - Best score (perplexity) = -4.028451442718506
2024-04-26 04:33:36,002 - trainer - INFO - Gradient Accumulation steps = 1
2024-04-26 04:33:36,002 - trainer - INFO - Num of training examples (actually no. of iterations per epoch for Iterable Dataset) = 130556
2024-04-26 04:33:36,002 - trainer - INFO - Num of development examples (actually no. of iterations per epoch for Iterable Dataset) = 14507
2024-04-26 04:33:36,002 - trainer - INFO - Time spent since last evaluation = 0h 6m 25s
2024-04-26 04:33:36,002 - trainer - INFO - Epoch = 1/5
2024-04-26 04:33:36,002 - trainer - INFO - Steps = 4200/40800
2024-04-26 04:33:36,002 - trainer - INFO - Instantaneous batch size per GPU = 4 and n_gpu = 4 so the input batch size = 16
2024-04-26 04:33:36,002 - trainer - INFO - dev_loss = 1.393382 || dev_eval_scores = {'perplexity': 4.028451442718506}
2024-04-26 04:33:36,002 - trainer - INFO - train_loss = 2.3706307411193848
2024-04-26 04:33:36,002 - trainer - INFO -
********************************************

2024-04-26 04:39:55,706 - trainer - INFO - Save model to tmp/model/distilgpt2_fine_tuned_coder
2024-04-26 04:40:01,545 - trainer - INFO - Save check-point at epoch=0 step=4400
2024-04-26 04:40:01,545 - trainer - INFO - ***** Evaluation report *****
2024-04-26 04:40:01,545 - trainer - INFO - Output path (short): tmp/model/distilgpt2_fine_tuned_coder
2024-04-26 04:40:01,545 - trainer - INFO - Early stop on: perplexity
2024-04-26 04:40:01,545 - trainer - INFO - Early stop count = 0/3
2024-04-26 04:40:01,545 - trainer - INFO - Eval steps = 200 or (iterations = 200)
2024-04-26 04:40:01,545 - trainer - INFO - Best score (perplexity) = -3.976846694946289
2024-04-26 04:40:01,546 - trainer - INFO - Gradient Accumulation steps = 1
2024-04-26 04:40:01,546 - trainer - INFO - Num of training examples (actually no. of iterations per epoch for Iterable Dataset) = 130556
2024-04-26 04:40:01,546 - trainer - INFO - Num of development examples (actually no. of iterations per epoch for Iterable Dataset) = 14507
2024-04-26 04:40:01,546 - trainer - INFO - Time spent since last evaluation = 0h 6m 25s
2024-04-26 04:40:01,546 - trainer - INFO - Epoch = 1/5
2024-04-26 04:40:01,546 - trainer - INFO - Steps = 4400/40800
2024-04-26 04:40:01,546 - trainer - INFO - Instantaneous batch size per GPU = 4 and n_gpu = 4 so the input batch size = 16
2024-04-26 04:40:01,546 - trainer - INFO - dev_loss = 1.380489 || dev_eval_scores = {'perplexity': 3.976846694946289}
2024-04-26 04:40:01,546 - trainer - INFO - train_loss = 2.330047369003296
2024-04-26 04:40:01,546 - trainer - INFO -
********************************************

2024-04-26 04:46:21,905 - trainer - INFO - Save model to tmp/model/distilgpt2_fine_tuned_coder
2024-04-26 04:46:27,763 - trainer - INFO - Save check-point at epoch=0 step=4600
2024-04-26 04:46:27,764 - trainer - INFO - ***** Evaluation report *****
2024-04-26 04:46:27,764 - trainer - INFO - Output path (short): tmp/model/distilgpt2_fine_tuned_coder
2024-04-26 04:46:27,764 - trainer - INFO - Early stop on: perplexity
2024-04-26 04:46:27,764 - trainer - INFO - Early stop count = 0/3
2024-04-26 04:46:27,764 - trainer - INFO - Eval steps = 200 or (iterations = 200)
2024-04-26 04:46:27,764 - trainer - INFO - Best score (perplexity) = -3.920635461807251
2024-04-26 04:46:27,764 - trainer - INFO - Gradient Accumulation steps = 1
2024-04-26 04:46:27,764 - trainer - INFO - Num of training examples (actually no. of iterations per epoch for Iterable Dataset) = 130556
2024-04-26 04:46:27,764 - trainer - INFO - Num of development examples (actually no. of iterations per epoch for Iterable Dataset) = 14507
2024-04-26 04:46:27,764 - trainer - INFO - Time spent since last evaluation = 0h 6m 26s
2024-04-26 04:46:27,764 - trainer - INFO - Epoch = 1/5
2024-04-26 04:46:27,764 - trainer - INFO - Steps = 4600/40800
2024-04-26 04:46:27,764 - trainer - INFO - Instantaneous batch size per GPU = 4 and n_gpu = 4 so the input batch size = 16
2024-04-26 04:46:27,764 - trainer - INFO - dev_loss = 1.366254 || dev_eval_scores = {'perplexity': 3.920635461807251}
2024-04-26 04:46:27,765 - trainer - INFO - train_loss = 2.2929983139038086
2024-04-26 04:46:27,765 - trainer - INFO -
********************************************

2024-04-26 04:52:47,547 - trainer - INFO - Save model to tmp/model/distilgpt2_fine_tuned_coder
2024-04-26 04:52:53,122 - trainer - INFO - Save check-point at epoch=0 step=4800
2024-04-26 04:52:53,122 - trainer - INFO - ***** Evaluation report *****
2024-04-26 04:52:53,122 - trainer - INFO - Output path (short): tmp/model/distilgpt2_fine_tuned_coder
2024-04-26 04:52:53,122 - trainer - INFO - Early stop on: perplexity
2024-04-26 04:52:53,122 - trainer - INFO - Early stop count = 0/3
2024-04-26 04:52:53,122 - trainer - INFO - Eval steps = 200 or (iterations = 200)
2024-04-26 04:52:53,122 - trainer - INFO - Best score (perplexity) = -3.866814613342285
2024-04-26 04:52:53,122 - trainer - INFO - Gradient Accumulation steps = 1
2024-04-26 04:52:53,122 - trainer - INFO - Num of training examples (actually no. of iterations per epoch for Iterable Dataset) = 130556
2024-04-26 04:52:53,122 - trainer - INFO - Num of development examples (actually no. of iterations per epoch for Iterable Dataset) = 14507
2024-04-26 04:52:53,123 - trainer - INFO - Time spent since last evaluation = 0h 6m 25s
2024-04-26 04:52:53,123 - trainer - INFO - Epoch = 1/5
2024-04-26 04:52:53,123 - trainer - INFO - Steps = 4800/40800
2024-04-26 04:52:53,123 - trainer - INFO - Instantaneous batch size per GPU = 4 and n_gpu = 4 so the input batch size = 16
2024-04-26 04:52:53,123 - trainer - INFO - dev_loss = 1.352431 || dev_eval_scores = {'perplexity': 3.866814613342285}
2024-04-26 04:52:53,123 - trainer - INFO - train_loss = 2.2579383850097656
2024-04-26 04:52:53,123 - trainer - INFO -
********************************************
|
2024-04-26 04:59:12,707 - trainer - INFO - Save model to tmp/model/distilgpt2_fine_tuned_coder |
|
2024-04-26 04:59:18,585 - trainer - INFO - Save check-point at epoch=0 step=5000 |
|
2024-04-26 04:59:18,586 - trainer - INFO - ***** Evaluation report ***** |
|
2024-04-26 04:59:18,586 - trainer - INFO - Output path (short): tmp/model/distilgpt2_fine_tuned_coder |
|
2024-04-26 04:59:18,586 - trainer - INFO - Early stop on: perplexity |
|
2024-04-26 04:59:18,586 - trainer - INFO - Early stop count = 0/3 |
|
2024-04-26 04:59:18,586 - trainer - INFO - Eval steps = 200 or (iterations = 200) |
|
2024-04-26 04:59:18,586 - trainer - INFO - Best score (perplexity) = -3.827284574508667 |
|
2024-04-26 04:59:18,586 - trainer - INFO - Gradient Accumulation steps = 1 |
|
2024-04-26 04:59:18,586 - trainer - INFO - Num of training examples (actually no. of iterations per epoch for Iterable Dataset) = 130556 |
|
2024-04-26 04:59:18,586 - trainer - INFO - Num of development examples (actually no. of iterations per epoch for Iterable Dataset) = 14507 |
|
2024-04-26 04:59:18,586 - trainer - INFO - Time spent since last evaluation = 0h 6m 25s |
|
2024-04-26 04:59:18,586 - trainer - INFO - Epoch = 1/5 |
|
2024-04-26 04:59:18,586 - trainer - INFO - Steps = 5000/40800 |
|
2024-04-26 04:59:18,586 - trainer - INFO - Instantaneous batch size per GPU = 4 and n_gpu = 4 so the input batch size = 16 |
|
2024-04-26 04:59:18,586 - trainer - INFO - dev_loss = 1.342156 || dev_eval_scores = {'perplexity': 3.827284574508667} |
|
2024-04-26 04:59:18,587 - trainer - INFO - train_loss = 2.225395679473877 |
|
2024-04-26 04:59:18,587 - trainer - INFO - |
|
******************************************** |
|
2024-04-26 05:05:39,819 - trainer - INFO - Save model to tmp/model/distilgpt2_fine_tuned_coder |
|
2024-04-26 05:05:45,329 - trainer - INFO - Save check-point at epoch=0 step=5200 |
|
2024-04-26 05:05:45,330 - trainer - INFO - ***** Evaluation report ***** |
|
2024-04-26 05:05:45,330 - trainer - INFO - Output path (short): tmp/model/distilgpt2_fine_tuned_coder |
|
2024-04-26 05:05:45,330 - trainer - INFO - Early stop on: perplexity |
|
2024-04-26 05:05:45,330 - trainer - INFO - Early stop count = 0/3 |
|
2024-04-26 05:05:45,330 - trainer - INFO - Eval steps = 200 or (iterations = 200) |
|
2024-04-26 05:05:45,330 - trainer - INFO - Best score (perplexity) = -3.7697432041168213 |
|
2024-04-26 05:05:45,330 - trainer - INFO - Gradient Accumulation steps = 1 |
|
2024-04-26 05:05:45,330 - trainer - INFO - Num of training examples (actually no. of iterations per epoch for Iterable Dataset) = 130556 |
|
2024-04-26 05:05:45,330 - trainer - INFO - Num of development examples (actually no. of iterations per epoch for Iterable Dataset) = 14507 |
|
2024-04-26 05:05:45,330 - trainer - INFO - Time spent since last evaluation = 0h 6m 26s |
|
2024-04-26 05:05:45,330 - trainer - INFO - Epoch = 1/5 |
|
2024-04-26 05:05:45,330 - trainer - INFO - Steps = 5200/40800 |
|
2024-04-26 05:05:45,330 - trainer - INFO - Instantaneous batch size per GPU = 4 and n_gpu = 4 so the input batch size = 16 |
|
2024-04-26 05:05:45,330 - trainer - INFO - dev_loss = 1.327007 || dev_eval_scores = {'perplexity': 3.7697432041168213} |
|
2024-04-26 05:05:45,331 - trainer - INFO - train_loss = 2.194683790206909 |
|
2024-04-26 05:05:45,331 - trainer - INFO - |
|
******************************************** |
|
2024-04-26 05:12:05,019 - trainer - INFO - Save model to tmp/model/distilgpt2_fine_tuned_coder |
|
2024-04-26 05:12:10,879 - trainer - INFO - Save check-point at epoch=0 step=5400 |
|
2024-04-26 05:12:10,879 - trainer - INFO - ***** Evaluation report ***** |
|
2024-04-26 05:12:10,879 - trainer - INFO - Output path (short): tmp/model/distilgpt2_fine_tuned_coder |
|
2024-04-26 05:12:10,879 - trainer - INFO - Early stop on: perplexity |
|
2024-04-26 05:12:10,879 - trainer - INFO - Early stop count = 0/3 |
|
2024-04-26 05:12:10,880 - trainer - INFO - Eval steps = 200 or (iterations = 200) |
|
2024-04-26 05:12:10,880 - trainer - INFO - Best score (perplexity) = -3.732077121734619 |
|
2024-04-26 05:12:10,880 - trainer - INFO - Gradient Accumulation steps = 1 |
|
2024-04-26 05:12:10,880 - trainer - INFO - Num of training examples (actually no. of iterations per epoch for Iterable Dataset) = 130556 |
|
2024-04-26 05:12:10,880 - trainer - INFO - Num of development examples (actually no. of iterations per epoch for Iterable Dataset) = 14507 |
|
2024-04-26 05:12:10,880 - trainer - INFO - Time spent since last evaluation = 0h 6m 25s |
|
2024-04-26 05:12:10,880 - trainer - INFO - Epoch = 1/5 |
|
2024-04-26 05:12:10,880 - trainer - INFO - Steps = 5400/40800 |
|
2024-04-26 05:12:10,880 - trainer - INFO - Instantaneous batch size per GPU = 4 and n_gpu = 4 so the input batch size = 16 |
|
2024-04-26 05:12:10,880 - trainer - INFO - dev_loss = 1.316965 || dev_eval_scores = {'perplexity': 3.732077121734619} |
|
2024-04-26 05:12:10,880 - trainer - INFO - train_loss = 2.16521954536438 |
|
2024-04-26 05:12:10,880 - trainer - INFO - |
|
******************************************** |

2024-04-26 05:18:31,741 - trainer - INFO - Save model to tmp/model/distilgpt2_fine_tuned_coder
2024-04-26 05:18:37,604 - trainer - INFO - Save check-point at epoch=0 step=5600
2024-04-26 05:18:37,605 - trainer - INFO - ***** Evaluation report *****
2024-04-26 05:18:37,605 - trainer - INFO - Output path (short): tmp/model/distilgpt2_fine_tuned_coder
2024-04-26 05:18:37,605 - trainer - INFO - Early stop on: perplexity
2024-04-26 05:18:37,605 - trainer - INFO - Early stop count = 0/3
2024-04-26 05:18:37,605 - trainer - INFO - Eval steps = 200 or (iterations = 200)
2024-04-26 05:18:37,605 - trainer - INFO - Best score (perplexity) = -3.6822173595428467
2024-04-26 05:18:37,605 - trainer - INFO - Gradient Accumulation steps = 1
2024-04-26 05:18:37,605 - trainer - INFO - Num of training examples (actually no. of iterations per epoch for Iterable Dataset) = 130556
2024-04-26 05:18:37,605 - trainer - INFO - Num of development examples (actually no. of iterations per epoch for Iterable Dataset) = 14507
2024-04-26 05:18:37,605 - trainer - INFO - Time spent since last evaluation = 0h 6m 26s
2024-04-26 05:18:37,605 - trainer - INFO - Epoch = 1/5
2024-04-26 05:18:37,605 - trainer - INFO - Steps = 5600/40800
2024-04-26 05:18:37,605 - trainer - INFO - Instantaneous batch size per GPU = 4 and n_gpu = 4 so the input batch size = 16
2024-04-26 05:18:37,605 - trainer - INFO - dev_loss = 1.303515 || dev_eval_scores = {'perplexity': 3.6822173595428467}
2024-04-26 05:18:37,606 - trainer - INFO - train_loss = 2.1381325721740723
2024-04-26 05:18:37,606 - trainer - INFO -
********************************************
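
The batch-size line is plain arithmetic: 4 examples per GPU across 4 GPUs with gradient accumulation of 1 gives 4 × 4 × 1 = 16 examples per optimizer step. As a quick check:

```python
per_gpu_batch, n_gpu, grad_accum_steps = 4, 4, 1
effective_batch = per_gpu_batch * n_gpu * grad_accum_steps
assert effective_batch == 16   # "the input batch size = 16" in the log
```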

2024-04-26 05:24:57,533 - trainer - INFO - Save model to tmp/model/distilgpt2_fine_tuned_coder
2024-04-26 05:25:03,410 - trainer - INFO - Save check-point at epoch=0 step=5800
2024-04-26 05:25:03,411 - trainer - INFO - ***** Evaluation report *****
2024-04-26 05:25:03,411 - trainer - INFO - Output path (short): tmp/model/distilgpt2_fine_tuned_coder
2024-04-26 05:25:03,411 - trainer - INFO - Early stop on: perplexity
2024-04-26 05:25:03,411 - trainer - INFO - Early stop count = 0/3
2024-04-26 05:25:03,411 - trainer - INFO - Eval steps = 200 or (iterations = 200)
2024-04-26 05:25:03,411 - trainer - INFO - Best score (perplexity) = -3.641592264175415
2024-04-26 05:25:03,411 - trainer - INFO - Gradient Accumulation steps = 1
2024-04-26 05:25:03,411 - trainer - INFO - Num of training examples (actually no. of iterations per epoch for Iterable Dataset) = 130556
2024-04-26 05:25:03,411 - trainer - INFO - Num of development examples (actually no. of iterations per epoch for Iterable Dataset) = 14507
2024-04-26 05:25:03,411 - trainer - INFO - Time spent since last evaluation = 0h 6m 25s
2024-04-26 05:25:03,411 - trainer - INFO - Epoch = 1/5
2024-04-26 05:25:03,411 - trainer - INFO - Steps = 5800/40800
2024-04-26 05:25:03,411 - trainer - INFO - Instantaneous batch size per GPU = 4 and n_gpu = 4 so the input batch size = 16
2024-04-26 05:25:03,411 - trainer - INFO - dev_loss = 1.292421 || dev_eval_scores = {'perplexity': 3.641592264175415}
2024-04-26 05:25:03,412 - trainer - INFO - train_loss = 2.113192319869995
2024-04-26 05:25:03,412 - trainer - INFO -
********************************************
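
Each of these reports is preceded by "Save model to ..." and "Save check-point at ...", i.e. the trainer writes the weights out whenever the monitored score improves. Assuming the underlying model exposes the Hugging Face transformers save_pretrained API and is wrapped in nn.DataParallel (consistent with the module.* parameter names logged at startup), the save step presumably looks roughly like this sketch; the helper and its flow are illustrative, not the trainer's actual code:

```python
output_dir = "tmp/model/distilgpt2_fine_tuned_coder"

def save_if_improved(model, tokenizer, improved: bool) -> None:
    if not improved:
        return
    # Unwrap nn.DataParallel before saving, if present.
    to_save = model.module if hasattr(model, "module") else model
    to_save.save_pretrained(output_dir)    # weights + config
    tokenizer.save_pretrained(output_dir)  # vocab + tokenizer files
```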

2024-04-26 05:31:23,054 - trainer - INFO - Save model to tmp/model/distilgpt2_fine_tuned_coder
2024-04-26 05:31:28,936 - trainer - INFO - Save check-point at epoch=0 step=6000
2024-04-26 05:31:28,937 - trainer - INFO - ***** Evaluation report *****
2024-04-26 05:31:28,937 - trainer - INFO - Output path (short): tmp/model/distilgpt2_fine_tuned_coder
2024-04-26 05:31:28,937 - trainer - INFO - Early stop on: perplexity
2024-04-26 05:31:28,937 - trainer - INFO - Early stop count = 0/3
2024-04-26 05:31:28,937 - trainer - INFO - Eval steps = 200 or (iterations = 200)
2024-04-26 05:31:28,937 - trainer - INFO - Best score (perplexity) = -3.602872133255005
2024-04-26 05:31:28,937 - trainer - INFO - Gradient Accumulation steps = 1
2024-04-26 05:31:28,937 - trainer - INFO - Num of training examples (actually no. of iterations per epoch for Iterable Dataset) = 130556
2024-04-26 05:31:28,937 - trainer - INFO - Num of development examples (actually no. of iterations per epoch for Iterable Dataset) = 14507
2024-04-26 05:31:28,937 - trainer - INFO - Time spent since last evaluation = 0h 6m 25s
2024-04-26 05:31:28,937 - trainer - INFO - Epoch = 1/5
2024-04-26 05:31:28,937 - trainer - INFO - Steps = 6000/40800
2024-04-26 05:31:28,937 - trainer - INFO - Instantaneous batch size per GPU = 4 and n_gpu = 4 so the input batch size = 16
2024-04-26 05:31:28,938 - trainer - INFO - dev_loss = 1.281731 || dev_eval_scores = {'perplexity': 3.602872133255005}
2024-04-26 05:31:28,938 - trainer - INFO - train_loss = 2.0891873836517334
2024-04-26 05:31:28,938 - trainer - INFO -
********************************************

2024-04-26 05:37:49,590 - trainer - INFO - Save model to tmp/model/distilgpt2_fine_tuned_coder
2024-04-26 05:37:55,449 - trainer - INFO - Save check-point at epoch=0 step=6200
2024-04-26 05:37:55,449 - trainer - INFO - ***** Evaluation report *****
2024-04-26 05:37:55,450 - trainer - INFO - Output path (short): tmp/model/distilgpt2_fine_tuned_coder
2024-04-26 05:37:55,450 - trainer - INFO - Early stop on: perplexity
2024-04-26 05:37:55,450 - trainer - INFO - Early stop count = 0/3
2024-04-26 05:37:55,450 - trainer - INFO - Eval steps = 200 or (iterations = 200)
2024-04-26 05:37:55,450 - trainer - INFO - Best score (perplexity) = -3.5650696754455566
2024-04-26 05:37:55,450 - trainer - INFO - Gradient Accumulation steps = 1
2024-04-26 05:37:55,450 - trainer - INFO - Num of training examples (actually no. of iterations per epoch for Iterable Dataset) = 130556
2024-04-26 05:37:55,450 - trainer - INFO - Num of development examples (actually no. of iterations per epoch for Iterable Dataset) = 14507
2024-04-26 05:37:55,450 - trainer - INFO - Time spent since last evaluation = 0h 6m 26s
2024-04-26 05:37:55,450 - trainer - INFO - Epoch = 1/5
2024-04-26 05:37:55,450 - trainer - INFO - Steps = 6200/40800
2024-04-26 05:37:55,450 - trainer - INFO - Instantaneous batch size per GPU = 4 and n_gpu = 4 so the input batch size = 16
2024-04-26 05:37:55,450 - trainer - INFO - dev_loss = 1.271184 || dev_eval_scores = {'perplexity': 3.5650696754455566}
2024-04-26 05:37:55,450 - trainer - INFO - train_loss = 2.066126585006714
2024-04-26 05:37:55,451 - trainer - INFO -
********************************************

2024-04-26 05:44:15,283 - trainer - INFO - Save model to tmp/model/distilgpt2_fine_tuned_coder
2024-04-26 05:44:21,132 - trainer - INFO - Save check-point at epoch=0 step=6400
2024-04-26 05:44:21,133 - trainer - INFO - ***** Evaluation report *****
2024-04-26 05:44:21,133 - trainer - INFO - Output path (short): tmp/model/distilgpt2_fine_tuned_coder
2024-04-26 05:44:21,133 - trainer - INFO - Early stop on: perplexity
2024-04-26 05:44:21,133 - trainer - INFO - Early stop count = 0/3
2024-04-26 05:44:21,133 - trainer - INFO - Eval steps = 200 or (iterations = 200)
2024-04-26 05:44:21,133 - trainer - INFO - Best score (perplexity) = -3.517021894454956
2024-04-26 05:44:21,133 - trainer - INFO - Gradient Accumulation steps = 1
2024-04-26 05:44:21,133 - trainer - INFO - Num of training examples (actually no. of iterations per epoch for Iterable Dataset) = 130556
2024-04-26 05:44:21,133 - trainer - INFO - Num of development examples (actually no. of iterations per epoch for Iterable Dataset) = 14507
2024-04-26 05:44:21,133 - trainer - INFO - Time spent since last evaluation = 0h 6m 25s
2024-04-26 05:44:21,133 - trainer - INFO - Epoch = 1/5
2024-04-26 05:44:21,133 - trainer - INFO - Steps = 6400/40800
2024-04-26 05:44:21,133 - trainer - INFO - Instantaneous batch size per GPU = 4 and n_gpu = 4 so the input batch size = 16
2024-04-26 05:44:21,133 - trainer - INFO - dev_loss = 1.257615 || dev_eval_scores = {'perplexity': 3.517021894454956}
2024-04-26 05:44:21,134 - trainer - INFO - train_loss = 2.0438156127929688
2024-04-26 05:44:21,134 - trainer - INFO -
********************************************

2024-04-26 05:50:41,206 - trainer - INFO - Save model to tmp/model/distilgpt2_fine_tuned_coder
2024-04-26 05:50:47,135 - trainer - INFO - Save check-point at epoch=0 step=6600
2024-04-26 05:50:47,135 - trainer - INFO - ***** Evaluation report *****
2024-04-26 05:50:47,135 - trainer - INFO - Output path (short): tmp/model/distilgpt2_fine_tuned_coder
2024-04-26 05:50:47,135 - trainer - INFO - Early stop on: perplexity
2024-04-26 05:50:47,135 - trainer - INFO - Early stop count = 0/3
2024-04-26 05:50:47,135 - trainer - INFO - Eval steps = 200 or (iterations = 200)
2024-04-26 05:50:47,135 - trainer - INFO - Best score (perplexity) = -3.4847798347473145
2024-04-26 05:50:47,135 - trainer - INFO - Gradient Accumulation steps = 1
2024-04-26 05:50:47,135 - trainer - INFO - Num of training examples (actually no. of iterations per epoch for Iterable Dataset) = 130556
2024-04-26 05:50:47,136 - trainer - INFO - Num of development examples (actually no. of iterations per epoch for Iterable Dataset) = 14507
2024-04-26 05:50:47,136 - trainer - INFO - Time spent since last evaluation = 0h 6m 26s
2024-04-26 05:50:47,136 - trainer - INFO - Epoch = 1/5
2024-04-26 05:50:47,136 - trainer - INFO - Steps = 6600/40800
2024-04-26 05:50:47,136 - trainer - INFO - Instantaneous batch size per GPU = 4 and n_gpu = 4 so the input batch size = 16
2024-04-26 05:50:47,136 - trainer - INFO - dev_loss = 1.248405 || dev_eval_scores = {'perplexity': 3.4847798347473145}
2024-04-26 05:50:47,136 - trainer - INFO - train_loss = 2.022505283355713
2024-04-26 05:50:47,136 - trainer - INFO -
********************************************

2024-04-26 05:57:06,888 - trainer - INFO - Save model to tmp/model/distilgpt2_fine_tuned_coder
2024-04-26 05:57:12,755 - trainer - INFO - Save check-point at epoch=0 step=6800
2024-04-26 05:57:12,755 - trainer - INFO - ***** Evaluation report *****
2024-04-26 05:57:12,756 - trainer - INFO - Output path (short): tmp/model/distilgpt2_fine_tuned_coder
2024-04-26 05:57:12,756 - trainer - INFO - Early stop on: perplexity
2024-04-26 05:57:12,756 - trainer - INFO - Early stop count = 0/3
2024-04-26 05:57:12,756 - trainer - INFO - Eval steps = 200 or (iterations = 200)
2024-04-26 05:57:12,756 - trainer - INFO - Best score (perplexity) = -3.441448450088501
2024-04-26 05:57:12,756 - trainer - INFO - Gradient Accumulation steps = 1
2024-04-26 05:57:12,756 - trainer - INFO - Num of training examples (actually no. of iterations per epoch for Iterable Dataset) = 130556
2024-04-26 05:57:12,756 - trainer - INFO - Num of development examples (actually no. of iterations per epoch for Iterable Dataset) = 14507
2024-04-26 05:57:12,756 - trainer - INFO - Time spent since last evaluation = 0h 6m 25s
2024-04-26 05:57:12,756 - trainer - INFO - Epoch = 1/5
2024-04-26 05:57:12,756 - trainer - INFO - Steps = 6800/40800
2024-04-26 05:57:12,756 - trainer - INFO - Instantaneous batch size per GPU = 4 and n_gpu = 4 so the input batch size = 16
2024-04-26 05:57:12,756 - trainer - INFO - dev_loss = 1.235892 || dev_eval_scores = {'perplexity': 3.441448450088501}
2024-04-26 05:57:12,756 - trainer - INFO - train_loss = 2.0026967525482178
2024-04-26 05:57:12,757 - trainer - INFO -
********************************************

2024-04-26 06:03:32,820 - trainer - INFO - Save model to tmp/model/distilgpt2_fine_tuned_coder
2024-04-26 06:03:38,713 - trainer - INFO - Save check-point at epoch=0 step=7000
2024-04-26 06:03:38,714 - trainer - INFO - ***** Evaluation report *****
2024-04-26 06:03:38,714 - trainer - INFO - Output path (short): tmp/model/distilgpt2_fine_tuned_coder
2024-04-26 06:03:38,714 - trainer - INFO - Early stop on: perplexity
2024-04-26 06:03:38,714 - trainer - INFO - Early stop count = 0/3
2024-04-26 06:03:38,714 - trainer - INFO - Eval steps = 200 or (iterations = 200)
2024-04-26 06:03:38,714 - trainer - INFO - Best score (perplexity) = -3.3976998329162598
2024-04-26 06:03:38,714 - trainer - INFO - Gradient Accumulation steps = 1
2024-04-26 06:03:38,714 - trainer - INFO - Num of training examples (actually no. of iterations per epoch for Iterable Dataset) = 130556
2024-04-26 06:03:38,714 - trainer - INFO - Num of development examples (actually no. of iterations per epoch for Iterable Dataset) = 14507
2024-04-26 06:03:38,714 - trainer - INFO - Time spent since last evaluation = 0h 6m 25s
2024-04-26 06:03:38,714 - trainer - INFO - Epoch = 1/5
2024-04-26 06:03:38,714 - trainer - INFO - Steps = 7000/40800
2024-04-26 06:03:38,714 - trainer - INFO - Instantaneous batch size per GPU = 4 and n_gpu = 4 so the input batch size = 16
2024-04-26 06:03:38,715 - trainer - INFO - dev_loss = 1.223099 || dev_eval_scores = {'perplexity': 3.3976998329162598}
2024-04-26 06:03:38,715 - trainer - INFO - train_loss = 1.983184576034546
2024-04-26 06:03:38,715 - trainer - INFO -
********************************************

2024-04-26 06:09:59,334 - trainer - INFO - Save model to tmp/model/distilgpt2_fine_tuned_coder
2024-04-26 06:10:05,217 - trainer - INFO - Save check-point at epoch=0 step=7200
2024-04-26 06:10:05,217 - trainer - INFO - ***** Evaluation report *****
2024-04-26 06:10:05,217 - trainer - INFO - Output path (short): tmp/model/distilgpt2_fine_tuned_coder
2024-04-26 06:10:05,217 - trainer - INFO - Early stop on: perplexity
2024-04-26 06:10:05,217 - trainer - INFO - Early stop count = 0/3
2024-04-26 06:10:05,217 - trainer - INFO - Eval steps = 200 or (iterations = 200)
2024-04-26 06:10:05,217 - trainer - INFO - Best score (perplexity) = -3.3713600635528564
2024-04-26 06:10:05,217 - trainer - INFO - Gradient Accumulation steps = 1
2024-04-26 06:10:05,217 - trainer - INFO - Num of training examples (actually no. of iterations per epoch for Iterable Dataset) = 130556
2024-04-26 06:10:05,218 - trainer - INFO - Num of development examples (actually no. of iterations per epoch for Iterable Dataset) = 14507
2024-04-26 06:10:05,218 - trainer - INFO - Time spent since last evaluation = 0h 6m 26s
2024-04-26 06:10:05,218 - trainer - INFO - Epoch = 1/5
2024-04-26 06:10:05,218 - trainer - INFO - Steps = 7200/40800
2024-04-26 06:10:05,218 - trainer - INFO - Instantaneous batch size per GPU = 4 and n_gpu = 4 so the input batch size = 16
2024-04-26 06:10:05,218 - trainer - INFO - dev_loss = 1.215316 || dev_eval_scores = {'perplexity': 3.3713600635528564}
2024-04-26 06:10:05,218 - trainer - INFO - train_loss = 1.9642337560653687
2024-04-26 06:10:05,218 - trainer - INFO -
********************************************

2024-04-26 06:16:24,736 - trainer - INFO - Save model to tmp/model/distilgpt2_fine_tuned_coder
2024-04-26 06:16:30,613 - trainer - INFO - Save check-point at epoch=0 step=7400
2024-04-26 06:16:30,613 - trainer - INFO - ***** Evaluation report *****
2024-04-26 06:16:30,613 - trainer - INFO - Output path (short): tmp/model/distilgpt2_fine_tuned_coder
2024-04-26 06:16:30,613 - trainer - INFO - Early stop on: perplexity
2024-04-26 06:16:30,613 - trainer - INFO - Early stop count = 0/3
2024-04-26 06:16:30,613 - trainer - INFO - Eval steps = 200 or (iterations = 200)
2024-04-26 06:16:30,613 - trainer - INFO - Best score (perplexity) = -3.334381341934204
2024-04-26 06:16:30,613 - trainer - INFO - Gradient Accumulation steps = 1
2024-04-26 06:16:30,613 - trainer - INFO - Num of training examples (actually no. of iterations per epoch for Iterable Dataset) = 130556
2024-04-26 06:16:30,613 - trainer - INFO - Num of development examples (actually no. of iterations per epoch for Iterable Dataset) = 14507
2024-04-26 06:16:30,613 - trainer - INFO - Time spent since last evaluation = 0h 6m 25s
2024-04-26 06:16:30,613 - trainer - INFO - Epoch = 1/5
2024-04-26 06:16:30,614 - trainer - INFO - Steps = 7400/40800
2024-04-26 06:16:30,614 - trainer - INFO - Instantaneous batch size per GPU = 4 and n_gpu = 4 so the input batch size = 16
2024-04-26 06:16:30,614 - trainer - INFO - dev_loss = 1.204287 || dev_eval_scores = {'perplexity': 3.334381341934204}
2024-04-26 06:16:30,614 - trainer - INFO - train_loss = 1.9464004039764404
2024-04-26 06:16:30,614 - trainer - INFO -
********************************************

2024-04-26 06:22:50,090 - trainer - INFO - Save model to tmp/model/distilgpt2_fine_tuned_coder
2024-04-26 06:22:55,903 - trainer - INFO - Save check-point at epoch=0 step=7600
2024-04-26 06:22:55,903 - trainer - INFO - ***** Evaluation report *****
2024-04-26 06:22:55,903 - trainer - INFO - Output path (short): tmp/model/distilgpt2_fine_tuned_coder
2024-04-26 06:22:55,903 - trainer - INFO - Early stop on: perplexity
2024-04-26 06:22:55,903 - trainer - INFO - Early stop count = 0/3
2024-04-26 06:22:55,903 - trainer - INFO - Eval steps = 200 or (iterations = 200)
2024-04-26 06:22:55,903 - trainer - INFO - Best score (perplexity) = -3.299593448638916
2024-04-26 06:22:55,903 - trainer - INFO - Gradient Accumulation steps = 1
2024-04-26 06:22:55,903 - trainer - INFO - Num of training examples (actually no. of iterations per epoch for Iterable Dataset) = 130556
2024-04-26 06:22:55,903 - trainer - INFO - Num of development examples (actually no. of iterations per epoch for Iterable Dataset) = 14507
2024-04-26 06:22:55,903 - trainer - INFO - Time spent since last evaluation = 0h 6m 25s
2024-04-26 06:22:55,903 - trainer - INFO - Epoch = 1/5
2024-04-26 06:22:55,904 - trainer - INFO - Steps = 7600/40800
2024-04-26 06:22:55,904 - trainer - INFO - Instantaneous batch size per GPU = 4 and n_gpu = 4 so the input batch size = 16
2024-04-26 06:22:55,904 - trainer - INFO - dev_loss = 1.193799 || dev_eval_scores = {'perplexity': 3.299593448638916}
2024-04-26 06:22:55,904 - trainer - INFO - train_loss = 1.9291884899139404
2024-04-26 06:22:55,904 - trainer - INFO -
********************************************

2024-04-26 06:29:16,451 - trainer - INFO - Save model to tmp/model/distilgpt2_fine_tuned_coder
2024-04-26 06:29:22,361 - trainer - INFO - Save check-point at epoch=0 step=7800
2024-04-26 06:29:22,361 - trainer - INFO - ***** Evaluation report *****
2024-04-26 06:29:22,361 - trainer - INFO - Output path (short): tmp/model/distilgpt2_fine_tuned_coder
2024-04-26 06:29:22,361 - trainer - INFO - Early stop on: perplexity
2024-04-26 06:29:22,361 - trainer - INFO - Early stop count = 0/3
2024-04-26 06:29:22,362 - trainer - INFO - Eval steps = 200 or (iterations = 200)
2024-04-26 06:29:22,362 - trainer - INFO - Best score (perplexity) = -3.2615699768066406
2024-04-26 06:29:22,362 - trainer - INFO - Gradient Accumulation steps = 1
2024-04-26 06:29:22,362 - trainer - INFO - Num of training examples (actually no. of iterations per epoch for Iterable Dataset) = 130556
2024-04-26 06:29:22,362 - trainer - INFO - Num of development examples (actually no. of iterations per epoch for Iterable Dataset) = 14507
2024-04-26 06:29:22,362 - trainer - INFO - Time spent since last evaluation = 0h 6m 26s
2024-04-26 06:29:22,362 - trainer - INFO - Epoch = 1/5
2024-04-26 06:29:22,362 - trainer - INFO - Steps = 7800/40800
2024-04-26 06:29:22,362 - trainer - INFO - Instantaneous batch size per GPU = 4 and n_gpu = 4 so the input batch size = 16
2024-04-26 06:29:22,362 - trainer - INFO - dev_loss = 1.182209 || dev_eval_scores = {'perplexity': 3.2615699768066406}
2024-04-26 06:29:22,362 - trainer - INFO - train_loss = 1.9121544361114502
2024-04-26 06:29:22,362 - trainer - INFO -
********************************************

2024-04-26 06:35:42,699 - trainer - INFO - Save model to tmp/model/distilgpt2_fine_tuned_coder
2024-04-26 06:35:48,617 - trainer - INFO - Save check-point at epoch=0 step=8000
2024-04-26 06:35:48,617 - trainer - INFO - ***** Evaluation report *****
2024-04-26 06:35:48,618 - trainer - INFO - Output path (short): tmp/model/distilgpt2_fine_tuned_coder
2024-04-26 06:35:48,618 - trainer - INFO - Early stop on: perplexity
2024-04-26 06:35:48,618 - trainer - INFO - Early stop count = 0/3
2024-04-26 06:35:48,618 - trainer - INFO - Eval steps = 200 or (iterations = 200)
2024-04-26 06:35:48,618 - trainer - INFO - Best score (perplexity) = -3.232813835144043
2024-04-26 06:35:48,618 - trainer - INFO - Gradient Accumulation steps = 1
2024-04-26 06:35:48,618 - trainer - INFO - Num of training examples (actually no. of iterations per epoch for Iterable Dataset) = 130556
2024-04-26 06:35:48,618 - trainer - INFO - Num of development examples (actually no. of iterations per epoch for Iterable Dataset) = 14507
2024-04-26 06:35:48,618 - trainer - INFO - Time spent since last evaluation = 0h 6m 26s
2024-04-26 06:35:48,618 - trainer - INFO - Epoch = 1/5
2024-04-26 06:35:48,618 - trainer - INFO - Steps = 8000/40800
2024-04-26 06:35:48,618 - trainer - INFO - Instantaneous batch size per GPU = 4 and n_gpu = 4 so the input batch size = 16
2024-04-26 06:35:48,618 - trainer - INFO - dev_loss = 1.173353 || dev_eval_scores = {'perplexity': 3.232813835144043}
2024-04-26 06:35:48,618 - trainer - INFO - train_loss = 1.8961435556411743
2024-04-26 06:35:48,619 - trainer - INFO -
********************************************

2024-04-26 06:37:54,340 - trainer - INFO - epoch 1 ends, 4 epochs left
2024-04-26 06:37:54,862 - trainer - INFO -
global_average_loss=1.8839226961135864,global_steps=8160 on training set
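
The closing counters tie out with the earlier reports: 130556 training examples at an effective batch size of 16 give ceil(130556 / 16) = 8160 optimizer steps per epoch, matching global_steps=8160 at the end of epoch 1, and 8160 × 5 epochs yields the 40800 total shown in every "Steps = .../40800" line. Checked numerically:

```python
import math

examples_per_epoch = 130556
effective_batch = 16
steps_per_epoch = math.ceil(examples_per_epoch / effective_batch)
total_steps = steps_per_epoch * 5   # 5 training epochs
assert (steps_per_epoch, total_steps) == (8160, 40800)
```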