home-standard-results/frozen_nb-bert-base_320_0.001_512.txt
Started at: 19:59:36
nb-bert-base, 0.001, 320
({'_name_or_path': '/disk4/folder1/working/checkpoints/huggingface/native_pytorch/step4_8/', 'attention_probs_dropout_prob': 0.1, 'directionality': 'bidi', 'gradient_checkpointing': False, 'hidden_act': 'gelu', 'hidden_dropout_prob': 0.1, 'hidden_size': 768, 'initializer_range': 0.02, 'intermediate_size': 3072, 'layer_norm_eps': 1e-12, 'max_position_embeddings': 512, 'model_type': 'bert', 'num_attention_heads': 12, 'num_hidden_layers': 12, 'pad_token_id': 0, 'pooler_fc_size': 768, 'pooler_num_attention_heads': 12, 'pooler_num_fc_layers': 3, 'pooler_size_per_head': 128, 'pooler_type': 'first_token_transform', 'position_embedding_type': 'absolute', 'type_vocab_size': 2, 'vocab_size': 119547, '_commit_hash': '82b194c0b3ea1fcad65f1eceee04adb26f9f71ac'}, {})
Epoch: 0
Started at: 08:43:48
nb-bert-base, 0.001, 320
({'_name_or_path': '/disk4/folder1/working/checkpoints/huggingface/native_pytorch/step4_8/', 'attention_probs_dropout_prob': 0.1, 'directionality': 'bidi', 'gradient_checkpointing': False, 'hidden_act': 'gelu', 'hidden_dropout_prob': 0.1, 'hidden_size': 768, 'initializer_range': 0.02, 'intermediate_size': 3072, 'layer_norm_eps': 1e-12, 'max_position_embeddings': 512, 'model_type': 'bert', 'num_attention_heads': 12, 'num_hidden_layers': 12, 'pad_token_id': 0, 'pooler_fc_size': 768, 'pooler_num_attention_heads': 12, 'pooler_num_fc_layers': 3, 'pooler_size_per_head': 128, 'pooler_type': 'first_token_transform', 'position_embedding_type': 'absolute', 'type_vocab_size': 2, 'vocab_size': 119547, '_commit_hash': '82b194c0b3ea1fcad65f1eceee04adb26f9f71ac'}, {})
Epoch: 0
Training loss: 0.5151217855513096 - MAE: 0.5709627828340156
Validation loss: 0.2530486115387508 - MAE: 0.3794793018729891
Epoch: 1
Training loss: 0.22132941260933875 - MAE: 0.36308932887849105
Validation loss: 0.18178956849234446 - MAE: 0.33422993724658395
Epoch: 2
Training loss: 0.1807804986834526 - MAE: 0.32391827749935836
Validation loss: 0.1691616369145257 - MAE: 0.3194961844436014
Epoch: 3
Training loss: 0.16955089792609215 - MAE: 0.3127070982966003
Validation loss: 0.16161458726440156 - MAE: 0.3090284268632521
Epoch: 4
Training loss: 0.1653532862663269 - MAE: 0.3086919556242476
Validation loss: 0.15732865567718232 - MAE: 0.30360814254621993
Epoch: 5
Training loss: 0.16227756589651107 - MAE: 0.3042661441997587
Validation loss: 0.15460806446416037 - MAE: 0.30049916526709397
Epoch: 6
Training loss: 0.16006522327661515 - MAE: 0.30157349356756313
Validation loss: 0.1526533163019589 - MAE: 0.2983025067294122
Epoch: 7
Training loss: 0.15598474368453025 - MAE: 0.2972633061454769
Validation loss: 0.15159909852913447 - MAE: 0.29780595918171027
Epoch: 8
Training loss: 0.15715683475136757 - MAE: 0.2993698693050316
Validation loss: 0.14978873516832078 - MAE: 0.2951820444435869
Epoch: 9
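
For reference, a minimal sketch of how a run like the one logged above could be set up: a frozen nb-bert-base encoder with a single linear regression head, reporting MAE each epoch, using the values from the config dump and the file name (hidden size 768, max length 512, learning rate 0.001, batch size 320). The hub id "NbAiLab/nb-bert-base", the MSE training objective, the CLS-token pooling, the Adam optimizer, the 10-epoch count, and the train_loader/val_loader objects are assumptions, not taken from the log.

# Sketch only: anything marked "assumption" below is not part of the original log.
import torch
from torch import nn
from transformers import AutoModel

MODEL_ID = "NbAiLab/nb-bert-base"          # assumption: the log only shows a local checkpoint path
LR, BATCH_SIZE, MAX_LEN = 0.001, 320, 512  # from the file name; building loaders with these is not shown
EPOCHS = 10                                # the log records epochs 0-9

class FrozenBertRegressor(nn.Module):
    def __init__(self, model_id: str = MODEL_ID):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(model_id)
        for p in self.encoder.parameters():          # "frozen_" in the file name read as: encoder weights not updated
            p.requires_grad = False
        self.head = nn.Linear(self.encoder.config.hidden_size, 1)  # hidden_size = 768 per the config dump

    def forward(self, input_ids, attention_mask):
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        return self.head(out.last_hidden_state[:, 0]).squeeze(-1)  # CLS-token pooling (assumption)

def run_epoch(model, loader, optimizer=None, device="cuda"):
    """One pass over loader; returns (mean MSE loss, mean MAE), matching the log lines above."""
    training = optimizer is not None
    model.train(training)
    loss_sum = mae_sum = n = 0.0
    for batch in loader:  # assumption: loader yields dicts with input_ids, attention_mask, labels
        ids = batch["input_ids"].to(device)
        mask = batch["attention_mask"].to(device)
        labels = batch["labels"].float().to(device)
        with torch.set_grad_enabled(training):
            preds = model(ids, mask)
            loss = nn.functional.mse_loss(preds, labels)
        if training:
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
        loss_sum += loss.item() * labels.size(0)
        mae_sum += (preds - labels).abs().sum().item()
        n += labels.size(0)
    return loss_sum / n, mae_sum / n

# Usage (train_loader / val_loader are assumed; texts tokenized with truncation to MAX_LEN):
# model = FrozenBertRegressor().to("cuda")
# optimizer = torch.optim.Adam((p for p in model.parameters() if p.requires_grad), lr=LR)
# for epoch in range(EPOCHS):
#     print(f"Epoch: {epoch}")
#     tr_loss, tr_mae = run_epoch(model, train_loader, optimizer)
#     print(f"Training loss: {tr_loss} - MAE: {tr_mae}")
#     va_loss, va_mae = run_epoch(model, val_loader)
#     print(f"Validation loss: {va_loss} - MAE: {va_mae}")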