
vrx2/distilbert-base-uncased-finetuned-squad

This model is a fine-tuned version of distilbert-base-uncased on the SQuAD v2 dataset. It achieves the following results on the training and evaluation sets:

  • Train Loss: 0.9741
  • Train End Logits Accuracy: 0.7291
  • Train Start Logits Accuracy: 0.6924
  • Validation Loss: 1.1179
  • Validation End Logits Accuracy: 0.6960
  • Validation Start Logits Accuracy: 0.6616
  • Epoch: 1

Model description

Just a benchmark test of my laptop's capabilities.

Intended uses & limitations

For testing purposes only.
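
Even so, here is a minimal sketch of how the checkpoint could be loaded for extractive question answering with the transformers pipeline, assuming the TensorFlow weights are available on the Hub:

```python
from transformers import pipeline

# Load the fine-tuned checkpoint for extractive question answering.
# framework="tf" assumes the repository ships TensorFlow weights,
# consistent with the framework versions listed below.
qa = pipeline(
    "question-answering",
    model="vrx2/distilbert-base-uncased-finetuned-squad",
    framework="tf",
)

result = qa(
    question="What was the model fine-tuned on?",
    context="The model was fine-tuned on the SQuAD v2 dataset as a benchmark test.",
)
print(result)  # {'score': ..., 'start': ..., 'end': ..., 'answer': ...}
```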

Training and evaluation data

Trained on the SQuAD v2 dataset.
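
A minimal sketch of loading SQuAD v2 with the datasets library; the exact preprocessing and tokenization used for this run are not documented here:

```python
from datasets import load_dataset

# Load SQuAD v2 (train and validation splits). The preprocessing applied
# before fine-tuning this checkpoint is not recorded in the card.
squad_v2 = load_dataset("squad_v2")
print(squad_v2)
print(squad_v2["train"][0])
```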

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 11064, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
  • training_precision: float32
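
For readability, the optimizer dump above corresponds roughly to the following Keras setup: Adam with a linear PolynomialDecay schedule from 2e-05 to 0.0 over 11064 steps. This is a reconstruction from the config, not the original training script:

```python
import tensorflow as tf

# Learning-rate schedule described in the config dump above:
# linear decay (power=1.0) from 2e-05 to 0.0 over 11064 steps.
lr_schedule = tf.keras.optimizers.schedules.PolynomialDecay(
    initial_learning_rate=2e-05,
    decay_steps=11064,
    end_learning_rate=0.0,
    power=1.0,
    cycle=False,
)

# Adam with the beta/epsilon values from the config.
optimizer = tf.keras.optimizers.Adam(
    learning_rate=lr_schedule,
    beta_1=0.9,
    beta_2=0.999,
    epsilon=1e-08,
)
```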

Training results

| Train Loss | Train End Logits Accuracy | Train Start Logits Accuracy | Validation Loss | Validation End Logits Accuracy | Validation Start Logits Accuracy | Epoch |
|------------|---------------------------|-----------------------------|-----------------|--------------------------------|----------------------------------|-------|
| 1.5188     | 0.6049                    | 0.5697                      | 1.1433          | 0.6878                         | 0.6498                           | 0     |
| 0.9741     | 0.7291                    | 0.6924                      | 1.1179          | 0.6960                         | 0.6616                           | 1     |

Framework versions

  • Transformers 4.34.0
  • TensorFlow 2.14.0
  • Datasets 2.14.5
  • Tokenizers 0.14.1