kasrahabib/all-MiniLM-L6-v2-finetuned-iso29148-nf_sub_req-embdr

This model is a fine-tuned version of sentence-transformers/all-MiniLM-L6-v2 on an unspecified dataset. It achieves the following results at the final training epoch:

  • Train Loss: 2.1511
  • Validation Loss: 2.3317
  • Epoch: 29

Model description

More information needed

Intended uses & limitations

More information needed
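
The card does not document a usage pattern. Below is a minimal sketch, assuming the checkpoint loads with the standard sentence-transformers API; the requirement strings are hypothetical and not from the (undocumented) training data:

```python
from sentence_transformers import SentenceTransformer, util

# Load the fine-tuned embedder from the Hugging Face Hub.
model = SentenceTransformer(
    "kasrahabib/all-MiniLM-L6-v2-finetuned-iso29148-nf_sub_req-embdr"
)

# Hypothetical requirement statements, for illustration only.
requirements = [
    "The system shall respond to user queries within 2 seconds.",
    "All stored personal data shall be encrypted at rest.",
]
embeddings = model.encode(requirements)
print(embeddings.shape)  # (2, 384) -- all-MiniLM-L6-v2 produces 384-dim vectors

# Cosine similarity between the two requirement embeddings.
print(util.cos_sim(embeddings[0], embeddings[1]))
```

Note that the card lists TensorFlow as the training framework, so the repository may ship only a TensorFlow checkpoint; in that case the weights would need to be loaded through transformers with TF support rather than directly via sentence-transformers.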

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • optimizer: Adam (beta_1: 0.9, beta_2: 0.999, epsilon: 1e-08, amsgrad: False, weight_decay: None, clipnorm/global_clipnorm/clipvalue: None, use_ema: False, ema_momentum: 0.99, jit_compile: True, is_legacy_optimizer: False); see the reconstruction sketch after this list
  • learning_rate: PolynomialDecay schedule (initial_learning_rate: 2e-05, decay_steps: 270, end_learning_rate: 0.0, power: 1.0, cycle: False)
  • training_precision: float32
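
As a readability aid, here is a minimal sketch reconstructing the optimizer and schedule above in Keras, assuming TensorFlow 2.15 as listed under framework versions (with power=1.0, PolynomialDecay is a linear decay to zero over 270 steps):

```python
import tensorflow as tf

# Linear decay from 2e-05 to 0.0 over 270 steps (power=1.0 means linear).
lr_schedule = tf.keras.optimizers.schedules.PolynomialDecay(
    initial_learning_rate=2e-05,
    decay_steps=270,
    end_learning_rate=0.0,
    power=1.0,
    cycle=False,
)

# Adam with the hyperparameters recorded in the card's optimizer config.
optimizer = tf.keras.optimizers.Adam(
    learning_rate=lr_schedule,
    beta_1=0.9,
    beta_2=0.999,
    epsilon=1e-08,
    amsgrad=False,
    jit_compile=True,
)
```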

Training results

| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 2.7091     | 2.7071          | 0     |
| 2.6985     | 2.7027          | 1     |
| 2.6918     | 2.6973          | 2     |
| 2.6773     | 2.6898          | 3     |
| 2.6661     | 2.6802          | 4     |
| 2.6445     | 2.6683          | 5     |
| 2.6254     | 2.6545          | 6     |
| 2.6012     | 2.6390          | 7     |
| 2.5743     | 2.6219          | 8     |
| 2.5453     | 2.6027          | 9     |
| 2.5160     | 2.5818          | 10    |
| 2.4806     | 2.5587          | 11    |
| 2.4560     | 2.5357          | 12    |
| 2.4157     | 2.5126          | 13    |
| 2.3972     | 2.4922          | 14    |
| 2.3592     | 2.4719          | 15    |
| 2.3356     | 2.4495          | 16    |
| 2.3171     | 2.4337          | 17    |
| 2.2835     | 2.4169          | 18    |
| 2.2617     | 2.4000          | 19    |
| 2.2424     | 2.3856          | 20    |
| 2.2282     | 2.3738          | 21    |
| 2.2124     | 2.3625          | 22    |
| 2.2028     | 2.3552          | 23    |
| 2.1886     | 2.3472          | 24    |
| 2.1780     | 2.3425          | 25    |
| 2.1655     | 2.3373          | 26    |
| 2.1571     | 2.3344          | 27    |
| 2.1571     | 2.3324          | 28    |
| 2.1511     | 2.3317          | 29    |

Framework versions

  • Transformers 4.40.1
  • TensorFlow 2.15.0
  • Datasets 2.19.1
  • Tokenizers 0.19.1