Commit b00f41c (verified): upload best checkpoints 200 with f1 score 0.68
2026-04-17 08:00:34,522 - INFO - train_pipeline - Logging to ./output_checkpoints/graphcodebert-robust/training.log
2026-04-17 08:00:34,525 - INFO - train_pipeline - ===== Training Configuration =====
2026-04-17 08:00:34,526 - INFO - train_pipeline - model_name : microsoft/graphcodebert-base
2026-04-17 08:00:34,528 - INFO - train_pipeline - output_dir : ./output_checkpoints/graphcodebert-robust
2026-04-17 08:00:34,529 - INFO - train_pipeline - num_epochs : 5
2026-04-17 08:00:34,531 - INFO - train_pipeline - batch_size : 32
2026-04-17 08:00:34,533 - INFO - train_pipeline - learning_rate : 2e-05
2026-04-17 08:00:34,535 - INFO - train_pipeline - max_length : 512
2026-04-17 08:00:34,536 - INFO - train_pipeline - num_labels : 2
2026-04-17 08:00:34,538 - INFO - train_pipeline - use_wandb : True
2026-04-17 08:00:34,540 - INFO - train_pipeline - freeze_base : True
2026-04-17 08:00:34,541 - INFO - train_pipeline - loss_type : r-drop
2026-04-17 08:00:34,542 - INFO - train_pipeline - focal_alpha : 1.0
2026-04-17 08:00:34,544 - INFO - train_pipeline - focal_gamma : 2.0
2026-04-17 08:00:34,545 - INFO - train_pipeline - r_drop_alpha : 4.0
2026-04-17 08:00:34,546 - INFO - train_pipeline - infonce_temperature : 0.07
2026-04-17 08:00:34,548 - INFO - train_pipeline - infonce_weight : 0.5
2026-04-17 08:00:34,550 - INFO - train_pipeline - seed : 42
2026-04-17 08:00:34,552 - INFO - train_pipeline - resume_from_checkpoint : None
2026-04-17 08:00:34,553 - INFO - train_pipeline - label_smoothing : 0.1
2026-04-17 08:00:34,554 - INFO - train_pipeline - adversarial_epsilon : 0.5
2026-04-17 08:00:34,556 - INFO - train_pipeline - use_swa : True
2026-04-17 08:00:34,557 - INFO - train_pipeline - swa_start_epoch : 2
2026-04-17 08:00:34,558 - INFO - train_pipeline - swa_lr : 1e-05
2026-04-17 08:00:34,559 - INFO - train_pipeline - data_augmentation : True
2026-04-17 08:00:34,561 - INFO - train_pipeline - aug_rename_prob : 0.3
2026-04-17 08:00:34,562 - INFO - train_pipeline - aug_format_prob : 0.3
2026-04-17 08:00:34,564 - INFO - train_pipeline - =================================
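For reference, the loss_type=r-drop / r_drop_alpha=4.0 settings above correspond to the standard R-Drop objective: average cross-entropy over two dropout-perturbed forward passes of the same input, plus a symmetric KL penalty pulling the two predicted distributions together. A minimal pure-Python sketch of the per-example loss (function names are illustrative, not taken from the training code):

```python
import math

def softmax(logits):
    # Numerically stable softmax over a list of logits
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def kl_div(p, q):
    # KL(p || q) for discrete distributions
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

def r_drop_loss(logits1, logits2, label, alpha=4.0):
    """R-Drop: averaged CE over two dropout-perturbed passes plus a
    symmetric KL penalty between the two predicted distributions."""
    p, q = softmax(logits1), softmax(logits2)
    ce = -0.5 * (math.log(p[label]) + math.log(q[label]))
    sym_kl = 0.5 * (kl_div(p, q) + kl_div(q, p))
    return ce + alpha * sym_kl
```

When the two passes agree exactly, the KL term vanishes and the loss reduces to plain cross-entropy; disagreement between the dropout masks is what the alpha-weighted penalty regularises.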
2026-04-17 08:00:35,711 - INFO - train_pipeline - Model placed on cuda
2026-04-17 08:00:35,716 - INFO - train_pipeline - ===== Model Architecture =====
2026-04-17 08:00:35,718 - INFO - train_pipeline -
RobertaForSequenceClassification(
  (roberta): RobertaModel(
    (embeddings): RobertaEmbeddings(
      (word_embeddings): Embedding(50265, 768, padding_idx=1)
      (position_embeddings): Embedding(514, 768, padding_idx=1)
      (token_type_embeddings): Embedding(1, 768)
      (LayerNorm): LayerNorm((768,), eps=1e-05, elementwise_affine=True)
      (dropout): Dropout(p=0.1, inplace=False)
    )
    (encoder): RobertaEncoder(
      (layer): ModuleList(
        (0-11): 12 x RobertaLayer(
          (attention): RobertaAttention(
            (self): RobertaSdpaSelfAttention(
              (query): Linear(in_features=768, out_features=768, bias=True)
              (key): Linear(in_features=768, out_features=768, bias=True)
              (value): Linear(in_features=768, out_features=768, bias=True)
              (dropout): Dropout(p=0.1, inplace=False)
            )
            (output): RobertaSelfOutput(
              (dense): Linear(in_features=768, out_features=768, bias=True)
              (LayerNorm): LayerNorm((768,), eps=1e-05, elementwise_affine=True)
              (dropout): Dropout(p=0.1, inplace=False)
            )
          )
          (intermediate): RobertaIntermediate(
            (dense): Linear(in_features=768, out_features=3072, bias=True)
            (intermediate_act_fn): GELUActivation()
          )
          (output): RobertaOutput(
            (dense): Linear(in_features=3072, out_features=768, bias=True)
            (LayerNorm): LayerNorm((768,), eps=1e-05, elementwise_affine=True)
            (dropout): Dropout(p=0.1, inplace=False)
          )
        )
      )
    )
  )
  (classifier): RobertaClassificationHead(
    (dense): Linear(in_features=768, out_features=768, bias=True)
    (dropout): Dropout(p=0.1, inplace=False)
    (out_proj): Linear(in_features=768, out_features=2, bias=True)
  )
)
2026-04-17 08:00:35,722 - INFO - train_pipeline - ===== Parameter Summary =====
2026-04-17 08:00:35,723 - INFO - train_pipeline - Total Parameters: 124,647,170
2026-04-17 08:00:35,724 - INFO - train_pipeline - Trainable Parameters: 592,130
2026-04-17 08:00:35,725 - INFO - train_pipeline - Non-trainable Parameters: 124,055,040
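The trainable-parameter figure can be cross-checked by hand: with freeze_base=True, only the RobertaClassificationHead (dense 768x768 and out_proj 768x2, each with a bias) receives gradients:

```python
# With freeze_base=True, only the classification head is trainable.
hidden, num_labels = 768, 2
dense_params = hidden * hidden + hidden              # dense weight + bias = 590,592
out_proj_params = hidden * num_labels + num_labels   # out_proj weight + bias = 1,538
head_params = dense_params + out_proj_params         # 592,130
```

592,130 matches the logged Trainable Parameters exactly, confirming the entire RobertaModel backbone is frozen.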
2026-04-17 08:00:35,727 - INFO - train_pipeline - ===== Tokenizer Summary =====
2026-04-17 08:00:35,747 - INFO - train_pipeline - Vocab size: 50265 | Special tokens: ['<s>', '</s>', '<unk>', '<pad>', '<mask>']
2026-04-17 08:00:35,749 - INFO - train_pipeline - ===== End of Architecture Log =====
2026-04-17 08:00:35,751 - INFO - train_pipeline - Data augmentation enabled (rename=0.3, format=0.3)
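The rename=0.3 setting above refers to identifier-rename augmentation: each distinct identifier in a code sample is replaced with a generic name with probability 0.3, a semantics-preserving perturbation for code models. A toy sketch of the idea (the function name and regex are illustrative; a real implementation would parse the code and skip keywords):

```python
import random
import re

def rename_identifiers(code, prob=0.3, rng=None):
    """Toy rename augmentation: each distinct lowercase identifier is
    mapped to a generic name (var0, var1, ...) with probability `prob`;
    the mapping is applied consistently across the sample."""
    rng = rng or random.Random(0)
    names = {}
    def sub(match):
        name = match.group(0)
        if name not in names:
            names[name] = f"var{len(names)}" if rng.random() < prob else name
        return names[name]
    return re.sub(r"\b[a-z_][a-z0-9_]*\b", sub, code)
```

Consistency is the key property: every occurrence of the same identifier gets the same replacement, so the transformed code still runs the same way.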
2026-04-17 08:00:36,645 - INFO - train_pipeline - === Starting training with robust regularisation ===
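The use_swa / swa_start_epoch settings in the configuration refer to stochastic weight averaging: from epoch 2 onward, checkpoint weights are folded into a running equal-weight average that serves as the final model. A minimal sketch of the standard averaging update (not the project's code; PyTorch's torch.optim.swa_utils.AveragedModel does this per-parameter):

```python
def swa_update(swa_weights, new_weights, n_averaged):
    """Fold a new checkpoint into the running SWA average:
    swa <- (swa * n + new) / (n + 1), applied elementwise."""
    return [(s * n_averaged + w) / (n_averaged + 1)
            for s, w in zip(swa_weights, new_weights)]
```

After each collection point the averaged weights move a shrinking step toward the newest checkpoint, which is why SWA tends to land in flatter, better-generalising regions of the loss surface.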