mmiteva/qa_model_test

This model is a fine-tuned version of distilbert-base-cased on an unknown dataset. It achieves the following results on the training and validation sets (the start/end logits metrics are illustrated in the sketch below):

  • Train Loss: 0.4469
  • Train End Logits Accuracy: 0.8470
  • Train Start Logits Accuracy: 0.8386
  • Validation Loss: 1.0938
  • Validation End Logits Accuracy: 0.7318
  • Validation Start Logits Accuracy: 0.7255
  • Epoch: 4
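
The start and end logits accuracies above refer to the two heads of an extractive question-answering model: one scores every token as a candidate answer start, the other as a candidate answer end, and a prediction counts as correct when the argmax hits the gold position. A minimal sketch of how those logits select an answer span, assuming the repository hosts TensorFlow weights; the question/context inputs are purely illustrative:

```python
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForQuestionAnswering

# Assumes TensorFlow weights are hosted in this repo.
tokenizer = AutoTokenizer.from_pretrained("mmiteva/qa_model_test")
model = TFAutoModelForQuestionAnswering.from_pretrained("mmiteva/qa_model_test")

question = "What is the capital of France?"   # illustrative inputs
context = "Paris is the capital of France."
inputs = tokenizer(question, context, return_tensors="tf")

outputs = model(**inputs)
# The accuracies reported above measure how often these argmaxes
# match the annotated start/end token positions.
start = int(tf.argmax(outputs.start_logits, axis=-1)[0])
end = int(tf.argmax(outputs.end_logits, axis=-1)[0])
answer = tokenizer.decode(inputs["input_ids"][0][start : end + 1])
print(answer)
```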

Model description

More information needed

Intended uses & limitations

More information needed
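
No usage details are published, but as a DistilBERT extractive question-answering checkpoint it should drop into the standard Transformers pipeline API. A hedged sketch; the inputs are illustrative and the framework is picked automatically from the hosted weights:

```python
from transformers import pipeline

# Illustrative usage; assumes TensorFlow weights in the repo.
qa = pipeline("question-answering", model="mmiteva/qa_model_test")
result = qa(
    question="What is the capital of France?",
    context="Paris is the capital of France.",
)
print(result)  # e.g. {'score': ..., 'start': ..., 'end': ..., 'answer': ...}
```

The start and end fields in the result are character offsets of the predicted answer within the context string.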

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a runnable reconstruction of the optimizer follows the list):

  • optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 108280, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
  • training_precision: float32
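
The serialized optimizer config above amounts to Adam with a linear (power 1.0, non-cycling) polynomial decay of the learning rate from 2e-05 to 0.0 over 108,280 steps. A minimal TensorFlow/Keras reconstruction of that config, not taken from the original training script:

```python
import tensorflow as tf

# Reconstructed from the serialized config above; the original
# training code is not published with this card.
lr_schedule = tf.keras.optimizers.schedules.PolynomialDecay(
    initial_learning_rate=2e-05,
    decay_steps=108280,
    end_learning_rate=0.0,
    power=1.0,      # power 1.0 means a straight linear decay
    cycle=False,
)
optimizer = tf.keras.optimizers.Adam(
    learning_rate=lr_schedule,
    beta_1=0.9,
    beta_2=0.999,
    epsilon=1e-08,
    amsgrad=False,
)
```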

Training results

| Train Loss | Train End Logits Accuracy | Train Start Logits Accuracy | Validation Loss | Validation End Logits Accuracy | Validation Start Logits Accuracy | Epoch |
|:----------:|:-------------------------:|:---------------------------:|:---------------:|:------------------------------:|:--------------------------------:|:-----:|
| 1.4847     | 0.5934                    | 0.5787                      | 1.1159          | 0.6724                         | 0.6590                           | 0     |
| 0.9507     | 0.7042                    | 0.6909                      | 1.0094          | 0.6973                         | 0.6875                           | 1     |
| 0.7253     | 0.7637                    | 0.7515                      | 0.9841          | 0.7182                         | 0.7124                           | 2     |
| 0.5678     | 0.8090                    | 0.7986                      | 1.0107          | 0.7260                         | 0.7194                           | 3     |
| 0.4469     | 0.8470                    | 0.8386                      | 1.0938          | 0.7318                         | 0.7255                           | 4     |

Framework versions

  • Transformers 4.20.1
  • TensorFlow 2.9.2
  • Datasets 2.1.0
  • Tokenizers 0.12.1