home-standard-results / frozen_nb-bert-base_320_0.001_128.txt
Started at: 11:05:41
nb-bert-base, 0.001, 320
({'_name_or_path': '/disk4/folder1/working/checkpoints/huggingface/native_pytorch/step4_8/',
  'attention_probs_dropout_prob': 0.1,
  'directionality': 'bidi',
  'gradient_checkpointing': False,
  'hidden_act': 'gelu',
  'hidden_dropout_prob': 0.1,
  'hidden_size': 768,
  'initializer_range': 0.02,
  'intermediate_size': 3072,
  'layer_norm_eps': 1e-12,
  'max_position_embeddings': 512,
  'model_type': 'bert',
  'num_attention_heads': 12,
  'num_hidden_layers': 12,
  'pad_token_id': 0,
  'pooler_fc_size': 768,
  'pooler_num_attention_heads': 12,
  'pooler_num_fc_layers': 3,
  'pooler_size_per_head': 128,
  'pooler_type': 'first_token_transform',
  'position_embedding_type': 'absolute',
  'type_vocab_size': 2,
  'vocab_size': 119547,
  '_commit_hash': '82b194c0b3ea1fcad65f1eceee04adb26f9f71ac'},
 {})
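
The tuple above is the encoder config as printed by the training script: a 12-layer, 768-hidden BERT with a 119,547-token vocabulary, loaded from a local checkpoint. As a rough reconstruction of the setup this log appears to describe, the following sketch loads the public NbAiLab/nb-bert-base checkpoint, freezes the encoder (the "frozen" prefix in the file name), and trains only a small regression head with the 0.001 learning rate from the header line; the hub model id, the head architecture, and the AdamW optimizer are assumptions, not values taken from the log.

import torch
from torch import nn
from transformers import AutoConfig, AutoModel

# Assumed hub id for nb-bert-base; the log itself only records a local
# checkpoint path in '_name_or_path'.
config = AutoConfig.from_pretrained("NbAiLab/nb-bert-base")
encoder = AutoModel.from_pretrained("NbAiLab/nb-bert-base", config=config)

# "frozen" in the file name suggests the encoder weights are not updated;
# only the head below would receive gradients.
for param in encoder.parameters():
    param.requires_grad = False

class RegressionHead(nn.Module):
    # Hypothetical single-output head on the pooled [CLS] representation;
    # the MAE metric in the log points to a regression target.
    def __init__(self, hidden_size):
        super().__init__()
        self.dropout = nn.Dropout(0.1)
        self.out = nn.Linear(hidden_size, 1)

    def forward(self, pooled):
        return self.out(self.dropout(pooled)).squeeze(-1)

head = RegressionHead(config.hidden_size)                    # hidden_size = 768 per the config
optimizer = torch.optim.AdamW(head.parameters(), lr=0.001)   # learning rate from the header line
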
Epoch: 0
Training loss: 0.5472435295581818 - MAE: 0.5962500959471848
Validation loss : 0.19041005628449575 - MAE: 0.33918628303996384
Epoch: 1
Training loss: 0.2095787413418293 - MAE: 0.34864738662823064
Validation loss : 0.20635795167514256 - MAE: 0.35940594357273065
Epoch: 2
Training loss: 0.1750541977584362 - MAE: 0.3156862424402112
Validation loss : 0.1741191872528621 - MAE: 0.32463109275690744
Epoch: 3
Training loss: 0.16520684361457824 - MAE: 0.30684230381369754
Validation loss : 0.16484951972961426 - MAE: 0.3122384296878921
Epoch: 4
Training loss: 0.16403881311416627 - MAE: 0.30506741840528667
Validation loss : 0.16140927055052348 - MAE: 0.30796976329260456
Epoch: 5
Training loss: 0.16092105135321616 - MAE: 0.30192065722430705
Validation loss : 0.15897798431771143 - MAE: 0.3044519051557594
Epoch: 6
Training loss: 0.15942454375326634 - MAE: 0.3003764766498005
Validation loss : 0.15748265704938344 - MAE: 0.3025005888374177
Epoch: 7
Training loss: 0.15759666711091996 - MAE: 0.2984491071305166
Validation loss : 0.15631395046200072 - MAE: 0.3007706974913341
Epoch: 8
Training loss: 0.15621527656912804 - MAE: 0.2959810530360974
Validation loss : 0.15541660892111914 - MAE: 0.29963030970178867
Epoch: 9
Training loss: 0.15488663874566555 - MAE: 0.2954068972163193
Validation loss : 0.15462575533560344 - MAE: 0.2988305399712944
Epoch: 10
Training loss: 0.1556474920362234 - MAE: 0.2959293174554759
Validation loss : 0.1539457706468446 - MAE: 0.2983930250378654
Epoch: 11
Training loss: 0.1543809361755848 - MAE: 0.29385977034887
Validation loss : 0.15342799999884196 - MAE: 0.2980436222979471
Epoch: 12
Training loss: 0.15267271548509598 - MAE: 0.29245825593211605
Validation loss : 0.153261165533747 - MAE: 0.2986657179646947
Epoch: 13
Training loss: 0.1523422531783581 - MAE: 0.2918371255581563
Validation loss : 0.15237470609801157 - MAE: 0.2966377530077025
Epoch: 14
Training loss: 0.15303245820105077 - MAE: 0.29261876655856034
Validation loss : 0.15225172255720412 - MAE: 0.2974314613530794
Epoch: 15
Training loss: 0.15144352577626705 - MAE: 0.2910897974609225
Validation loss : 0.15169690655810492 - MAE: 0.29647749666292256
Epoch: 16
Training loss: 0.15123000517487525 - MAE: 0.2911931321636385
Validation loss : 0.1512265865291868 - MAE: 0.29530716351758135
Epoch: 17
Training loss: 0.1505414254963398 - MAE: 0.2900492442805445
Validation loss : 0.15123657137155533 - MAE: 0.29635793559953066
Epoch: 18
Training loss: 0.14705862291157246 - MAE: 0.2873711578902365
Validation loss : 0.15065684169530869 - MAE: 0.29512666448111197
Epoch: 19
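
Each epoch block above pairs an average loss with a mean absolute error (MAE) for the training and validation splits. Continuing the sketch after the config, a minimal per-epoch loop that would emit lines in exactly this format, assuming an MSE training loss and ordinary PyTorch DataLoaders (train_loader, val_loader, and num_epochs are placeholders, not values recovered from the log):

loss_fn = nn.MSELoss()

def run_epoch(loader, train):
    # Average the loss and the MAE over all batches in one pass of the loader.
    total_loss, total_mae, n_batches = 0.0, 0.0, 0
    head.train(train)
    encoder.eval()  # encoder stays frozen throughout
    for batch in loader:
        with torch.set_grad_enabled(train):
            pooled = encoder(input_ids=batch["input_ids"],
                             attention_mask=batch["attention_mask"]).pooler_output
            preds = head(pooled)
            loss = loss_fn(preds, batch["labels"].float())
        if train:
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
        total_loss += loss.item()
        total_mae += (preds - batch["labels"].float()).abs().mean().item()
        n_batches += 1
    return total_loss / n_batches, total_mae / n_batches

for epoch in range(num_epochs):
    print(f"Epoch: {epoch}")
    train_loss, train_mae = run_epoch(train_loader, train=True)
    print(f"Training loss: {train_loss} - MAE: {train_mae}")
    val_loss, val_mae = run_epoch(val_loader, train=False)
    print(f"Validation loss : {val_loss} - MAE: {val_mae}")
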