kasrahabib/all-MiniLM-L6-v2-finetuned-iso29148-f_nf_req-embdr

This model is a fine-tuned version of sentence-transformers/all-MiniLM-L6-v2 on an unknown dataset. It achieves the following results at the final epoch (training loss on the training set, validation loss on the evaluation set):

  • Train Loss: 0.0009
  • Validation Loss: 0.6623
  • Epoch: 29

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • optimizer: Adam (beta_1 = 0.9, beta_2 = 0.999, epsilon = 1e-08, amsgrad = False; no weight decay, no gradient clipping, no EMA; jit_compile = True, legacy optimizer = False)
  • learning_rate: PolynomialDecay schedule from 2e-05 to 0.0 over 4710 steps (power = 1.0, cycle = False)
  • training_precision: float32
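The learning-rate entry above describes a Keras PolynomialDecay schedule with power 1.0, i.e. a linear ramp from 2e-05 down to 0.0 over 4710 steps. A minimal pure-Python sketch of that decay rule (the function name is illustrative, not the Keras API):

```python
def polynomial_decay_lr(step, initial_lr=2e-05, end_lr=0.0,
                        decay_steps=4710, power=1.0):
    """Mimics keras PolynomialDecay with cycle=False: steps past
    decay_steps are clamped so the rate stays at end_lr."""
    step = min(step, decay_steps)
    remaining = 1.0 - step / decay_steps
    return (initial_lr - end_lr) * remaining ** power + end_lr

# With power=1.0 this is plain linear decay:
print(polynomial_decay_lr(0))      # 2e-05 at the first step
print(polynomial_decay_lr(2355))   # 1e-05 halfway through
print(polynomial_decay_lr(4710))   # 0.0 at the end
```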

Training results

| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 0.5280 | 0.3710 | 0 |
| 0.3075 | 0.3428 | 1 |
| 0.2140 | 0.3139 | 2 |
| 0.1252 | 0.3637 | 3 |
| 0.0794 | 0.3695 | 4 |
| 0.0506 | 0.4162 | 5 |
| 0.0384 | 0.4577 | 6 |
| 0.0253 | 0.4791 | 7 |
| 0.0190 | 0.5735 | 8 |
| 0.0119 | 0.5711 | 9 |
| 0.0141 | 0.5977 | 10 |
| 0.0131 | 0.5945 | 11 |
| 0.0060 | 0.6052 | 12 |
| 0.0098 | 0.6270 | 13 |
| 0.0080 | 0.6484 | 14 |
| 0.0098 | 0.6139 | 15 |
| 0.0064 | 0.6103 | 16 |
| 0.0067 | 0.6232 | 17 |
| 0.0078 | 0.6205 | 18 |
| 0.0067 | 0.6126 | 19 |
| 0.0039 | 0.6108 | 20 |
| 0.0039 | 0.6407 | 21 |
| 0.0052 | 0.6501 | 22 |
| 0.0043 | 0.6523 | 23 |
| 0.0048 | 0.6800 | 24 |
| 0.0071 | 0.6644 | 25 |
| 0.0014 | 0.6600 | 26 |
| 0.0026 | 0.6578 | 27 |
| 0.0010 | 0.6613 | 28 |
| 0.0009 | 0.6623 | 29 |
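Training loss keeps shrinking while validation loss bottoms out early and then climbs, a classic overfitting pattern, so the lowest-validation-loss epoch is a better checkpoint than the final one. A small sketch locating it, using the per-epoch values copied from the table above:

```python
# (train_loss, validation_loss) per epoch, copied from the results table
history = [
    (0.5280, 0.3710), (0.3075, 0.3428), (0.2140, 0.3139), (0.1252, 0.3637),
    (0.0794, 0.3695), (0.0506, 0.4162), (0.0384, 0.4577), (0.0253, 0.4791),
    (0.0190, 0.5735), (0.0119, 0.5711), (0.0141, 0.5977), (0.0131, 0.5945),
    (0.0060, 0.6052), (0.0098, 0.6270), (0.0080, 0.6484), (0.0098, 0.6139),
    (0.0064, 0.6103), (0.0067, 0.6232), (0.0078, 0.6205), (0.0067, 0.6126),
    (0.0039, 0.6108), (0.0039, 0.6407), (0.0052, 0.6501), (0.0043, 0.6523),
    (0.0048, 0.6800), (0.0071, 0.6644), (0.0014, 0.6600), (0.0026, 0.6578),
    (0.0010, 0.6613), (0.0009, 0.6623),
]

# Pick the epoch with the lowest validation loss
best_epoch = min(range(len(history)), key=lambda e: history[e][1])
print(best_epoch, history[best_epoch][1])  # epoch 2, validation loss 0.3139
```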

Framework versions

  • Transformers 4.40.1
  • TensorFlow 2.15.0
  • Datasets 2.19.1
  • Tokenizers 0.19.1