home-standard-results / frozen_nb-bert-base_256_0.001_128.txt
Started at: 10:30:17
nb-bert-base, 0.001, 256
({'_name_or_path': '/disk4/folder1/working/checkpoints/huggingface/native_pytorch/step4_8/', 'attention_probs_dropout_prob': 0.1, 'directionality': 'bidi', 'gradient_checkpointing': False, 'hidden_act': 'gelu', 'hidden_dropout_prob': 0.1, 'hidden_size': 768, 'initializer_range': 0.02, 'intermediate_size': 3072, 'layer_norm_eps': 1e-12, 'max_position_embeddings': 512, 'model_type': 'bert', 'num_attention_heads': 12, 'num_hidden_layers': 12, 'pad_token_id': 0, 'pooler_fc_size': 768, 'pooler_num_attention_heads': 12, 'pooler_num_fc_layers': 3, 'pooler_size_per_head': 128, 'pooler_type': 'first_token_transform', 'position_embedding_type': 'absolute', 'type_vocab_size': 2, 'vocab_size': 119547, '_commit_hash': '82b194c0b3ea1fcad65f1eceee04adb26f9f71ac'}, {})
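For context, the following is a minimal sketch (not the script that produced this log) of a frozen-encoder regression setup consistent with the header above. Everything not stated in the log is an assumption: that the checkpoint is NbAiLab/nb-bert-base, that 0.001 is the learning rate and 256 the batch size, that the trailing 128 in the file name is the max sequence length, that the head is a single linear layer on the [CLS] vector, and that training uses MSE loss with MAE as the reported metric.

import torch
from torch import nn
from transformers import AutoModel, AutoTokenizer

# Assumed checkpoint; the log only gives a local path and the name "nb-bert-base".
tokenizer = AutoTokenizer.from_pretrained("NbAiLab/nb-bert-base")
encoder = AutoModel.from_pretrained("NbAiLab/nb-bert-base")
encoder.eval()                                    # "frozen": the encoder is never updated
for p in encoder.parameters():
    p.requires_grad = False

head = nn.Linear(encoder.config.hidden_size, 1)   # regression head on top of [CLS]
optimizer = torch.optim.Adam(head.parameters(), lr=0.001)
loss_fn = nn.MSELoss()                            # assumed loss; MAE below is the metric

def train_step(texts, targets):
    """One optimisation step; returns (loss, MAE) as logged per epoch above."""
    enc = tokenizer(texts, padding=True, truncation=True,
                    max_length=128, return_tensors="pt")
    with torch.no_grad():                         # no gradients through the frozen encoder
        cls = encoder(**enc).last_hidden_state[:, 0]
    preds = head(cls).squeeze(-1)
    loss = loss_fn(preds, targets)
    mae = (preds - targets).abs().mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item(), mae.item()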
Epoch: 0
Training loss: 0.47940237939357755 - MAE: 0.5434171712030036
Validation loss : 0.23652320272392696 - MAE: 0.36585742989414266
Epoch: 1
Training loss: 0.19740469574928285 - MAE: 0.3411209507189351
Validation loss : 0.1725800467862023 - MAE: 0.3144852905041802
Epoch: 2
Training loss: 0.17109984636306763 - MAE: 0.3128534143375846
Validation loss : 0.16326186226473915 - MAE: 0.30687719294438875
Epoch: 3
Training loss: 0.16422436118125916 - MAE: 0.30485287376578113
Validation loss : 0.16014324128627777 - MAE: 0.3049535582779377
Epoch: 4
Training loss: 0.16267379015684127 - MAE: 0.3022380635677996
Validation loss : 0.1582481645875507 - MAE: 0.3032693708849441
Epoch: 5
Training loss: 0.16067761451005935 - MAE: 0.30131915324107467
Validation loss : 0.1568084243271086 - MAE: 0.3019273425213427
Epoch: 6
Training loss: 0.15667623430490493 - MAE: 0.29712363202431746
Validation loss : 0.1547286965780788 - MAE: 0.298546701036528
Epoch: 7
Training loss: 0.15718585073947908 - MAE: 0.2974300979537428
Validation loss : 0.1539307576086786 - MAE: 0.2980814362265448
Epoch: 8
Training loss: 0.15291019290685653 - MAE: 0.29303670151400624
Validation loss : 0.1535865076714092 - MAE: 0.2982686715681697
Epoch: 9
Training loss: 0.15521344304084778 - MAE: 0.29492937333944125
Validation loss : 0.1528200540277693 - MAE: 0.29709660292160345
Epoch: 10
Training loss: 0.1533287388086319 - MAE: 0.2937649839701093
Validation loss : 0.15205161107911003 - MAE: 0.2960053508684892
Epoch: 11
Training loss: 0.1523037502169609 - MAE: 0.2920799676662576
Validation loss : 0.15153188506762186 - MAE: 0.2956201504833244
Epoch: 12
Training loss: 0.15251484006643296 - MAE: 0.2920164276458986
Validation loss : 0.1513606756925583 - MAE: 0.2959255912992334
Epoch: 13
Training loss: 0.15158091008663177 - MAE: 0.2921574382180624
Validation loss : 0.15118350088596344 - MAE: 0.2959302312369976
Epoch: 14
Training loss: 0.15188901841640473 - MAE: 0.2916754215403849
Validation loss : 0.15076235019498402 - MAE: 0.295445874231345
Epoch: 15
Training loss: 0.1504104283452034 - MAE: 0.2906248577366968
Validation loss : 0.14995380325449836 - MAE: 0.2940183973699766
Epoch: 16
Training loss: 0.14938753724098205 - MAE: 0.2893151198584333
Validation loss : 0.14919970598485735 - MAE: 0.2926672756577582
Epoch: 17
Training loss: 0.14956610560417175 - MAE: 0.2899769703078993
Validation loss : 0.14881501843531927 - MAE: 0.2919988212149979
Epoch: 18
Training loss: 0.14946476131677627 - MAE: 0.28843070296425904
Validation loss : 0.1484667675362693 - MAE: 0.2908643244420066
Epoch: 19