kasrahabib/roberta-base-finetuned-iso29148-f_nf_req-cls

This model is a fine-tuned version of FacebookAI/roberta-base on an unknown dataset. It achieves the following results at the final training epoch:

  • Train Loss: 0.0019
  • Validation Loss: 0.6444
  • Epoch: 29
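
The card does not document usage, but the model name suggests a sequence classifier that separates functional from non-functional requirements in the sense of ISO/IEC/IEEE 29148 (an inference from the name, not confirmed by the card). A minimal loading sketch with the TensorFlow Transformers API might look like the following; the example sentence is hypothetical and the label names depend on the checkpoint's config:

```python
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification

model_id = "kasrahabib/roberta-base-finetuned-iso29148-f_nf_req-cls"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = TFAutoModelForSequenceClassification.from_pretrained(model_id)

# Hypothetical requirement sentence; actual label meanings come from model.config.id2label.
text = "The system shall respond to user queries within two seconds."
inputs = tokenizer(text, return_tensors="tf", truncation=True)
logits = model(**inputs).logits
pred = int(tf.argmax(logits, axis=-1)[0])
print(model.config.id2label[pred])
```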

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 4710, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
  • training_precision: float32
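
For readers who want to reproduce the schedule, the serialized optimizer dictionary above corresponds to roughly the following Keras objects (a sketch assuming the standard tf.keras API of TensorFlow 2.15; values are copied from the config, and the jit_compile / EMA settings are omitted for brevity):

```python
import tensorflow as tf

# Linear decay (power=1.0) from 2e-5 to 0 over 4710 steps, per the config above.
lr_schedule = tf.keras.optimizers.schedules.PolynomialDecay(
    initial_learning_rate=2e-05,
    decay_steps=4710,
    end_learning_rate=0.0,
    power=1.0,
    cycle=False,
)

# Adam with the serialized betas/epsilon; weight decay and gradient clipping were unset.
optimizer = tf.keras.optimizers.Adam(
    learning_rate=lr_schedule,
    beta_1=0.9,
    beta_2=0.999,
    epsilon=1e-08,
    amsgrad=False,
)
```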

Training results

Train Loss    Validation Loss    Epoch
0.4775        0.3493             0
0.2723        0.3205             1
0.1702        0.3163             2
0.0925        0.3759             3
0.0803        0.4358             4
0.0444        0.5255             5
0.0227        0.5733             6
0.0342        0.5173             7
0.0231        0.5098             8
0.0144        0.5852             9
0.0067        0.6479             10
0.0039        0.7709             11
0.0222        0.5779             12
0.0144        0.6940             13
0.0213        0.5848             14
0.0047        0.6554             15
0.0036        0.6801             16
0.0011        0.7073             17
0.0082        0.7724             18
0.0084        0.6025             19
0.0009        0.6345             20
0.0034        0.6718             21
0.0032        0.6396             22
0.0030        0.6050             23
0.0005        0.6186             24
0.0006        0.6282             25
0.0005        0.6337             26
0.0004        0.6417             27
0.0003        0.6447             28
0.0019        0.6444             29

Framework versions

  • Transformers 4.40.1
  • TensorFlow 2.15.0
  • Datasets 2.19.1
  • Tokenizers 0.19.1