kasrahabib/bert-base-cased-finetuned-iso29148-req-detector

This model is a fine-tuned version of google-bert/bert-base-cased on an unknown dataset. It achieves the following results on the evaluation set:

  • Train Loss: 0.0299
  • Validation Loss: 0.4860
  • Epoch: 29

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • optimizer: Adam (beta_1: 0.9, beta_2: 0.999, epsilon: 1e-08, amsgrad: False, weight_decay: None, jit_compile: True)
  • learning_rate: PolynomialDecay from 2e-05 to 0.0 over 3570 steps (power: 1.0, cycle: False)
  • training_precision: float32
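
With power set to 1.0 and cycle disabled, the PolynomialDecay schedule above reduces to a straight linear ramp from 2e-05 down to 0 over 3570 steps. A minimal sketch of that schedule in plain Python (a reimplementation for illustration, not the Keras class itself):

```python
def polynomial_decay(step, initial_lr=2e-05, end_lr=0.0,
                     decay_steps=3570, power=1.0):
    """Learning rate at a given step, mirroring Keras PolynomialDecay
    with cycle=False: past decay_steps the rate stays at end_lr."""
    step = min(step, decay_steps)
    frac = 1.0 - step / decay_steps
    return (initial_lr - end_lr) * frac ** power + end_lr

print(polynomial_decay(0))      # 2e-05 at the first step
print(polynomial_decay(1785))   # 1e-05 halfway through
print(polynomial_decay(3570))   # 0.0 at the end of the schedule
```

With power=1.0 the curve is linear; raising power would front-load the decay without changing the endpoints.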

Training results

| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 2.4480     | 1.6442          | 0     |
| 1.3378     | 0.9792          | 1     |
| 0.8492     | 0.7320          | 2     |
| 0.5922     | 0.6017          | 3     |
| 0.4316     | 0.5206          | 4     |
| 0.3071     | 0.5053          | 5     |
| 0.2353     | 0.5071          | 6     |
| 0.1666     | 0.4643          | 7     |
| 0.1324     | 0.4386          | 8     |
| 0.1098     | 0.4460          | 9     |
| 0.0923     | 0.4525          | 10    |
| 0.0808     | 0.4382          | 11    |
| 0.0676     | 0.4699          | 12    |
| 0.0581     | 0.4628          | 13    |
| 0.0627     | 0.4889          | 14    |
| 0.0534     | 0.4837          | 15    |
| 0.0482     | 0.4609          | 16    |
| 0.0459     | 0.4586          | 17    |
| 0.0474     | 0.4725          | 18    |
| 0.0419     | 0.4758          | 19    |
| 0.0391     | 0.4964          | 20    |
| 0.0387     | 0.4799          | 21    |
| 0.0375     | 0.4705          | 22    |
| 0.0347     | 0.4780          | 23    |
| 0.0349     | 0.4844          | 24    |
| 0.0335     | 0.4862          | 25    |
| 0.0308     | 0.4862          | 26    |
| 0.0291     | 0.4903          | 27    |
| 0.0293     | 0.4886          | 28    |
| 0.0299     | 0.4860          | 29    |
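
Note that the train loss keeps falling throughout while the validation loss bottoms out around epoch 11 and then drifts upward, a typical overfitting pattern. The best checkpoint by validation loss can be read off the log directly (values copied from the table above):

```python
# Per-epoch validation losses from the training log (epochs 0..29).
val_losses = [1.6442, 0.9792, 0.7320, 0.6017, 0.5206, 0.5053, 0.5071,
              0.4643, 0.4386, 0.4460, 0.4525, 0.4382, 0.4699, 0.4628,
              0.4889, 0.4837, 0.4609, 0.4586, 0.4725, 0.4758, 0.4964,
              0.4799, 0.4705, 0.4780, 0.4844, 0.4862, 0.4862, 0.4903,
              0.4886, 0.4860]

# Index of the lowest validation loss, i.e. the best epoch to keep.
best_epoch = min(range(len(val_losses)), key=val_losses.__getitem__)
print(best_epoch, val_losses[best_epoch])  # epoch 11, loss 0.4382
```

The published weights correspond to epoch 29 (validation loss 0.4860), so stopping early around epoch 11 would likely have given a slightly better-generalizing checkpoint.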

Framework versions

  • Transformers 4.40.1
  • TensorFlow 2.15.0
  • Datasets 2.19.1
  • Tokenizers 0.19.1