DistilBERT-base-uncased-english-finetuned-squad

This model was fine-tuned on the SQuAD dataset. Load it with TFDistilBertForQuestionAnswering, and use DistilBertTokenizerFast to produce the tokens the model expects.
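To illustrate how an answer is extracted: the model returns start and end logits over the tokenized question + context sequence, and the answer is the token span maximizing start_logit + end_logit. The sketch below uses made-up tokens and logits in place of actual model output; the real loading calls (which need TensorFlow and network access) are shown in comments, assuming the hub id chintaaaaaan/DistilBERT-base-uncased-english-finetuned-squad.

```python
# Real usage (requires TensorFlow and network access):
#   from transformers import TFDistilBertForQuestionAnswering, DistilBertTokenizerFast
#   repo = "chintaaaaaan/DistilBERT-base-uncased-english-finetuned-squad"
#   tokenizer = DistilBertTokenizerFast.from_pretrained(repo)
#   model = TFDistilBertForQuestionAnswering.from_pretrained(repo)
#
# Below, hand-written tokens and logits stand in for actual model output.
tokens = ["[CLS]", "who", "wrote", "it", "?", "[SEP]",
          "jane", "austen", "wrote", "emma", "[SEP]"]
start_logits = [0.1, 0, 0, 0, 0, 0, 4.0, 1.0, 0, 0.5, 0]
end_logits   = [0.1, 0, 0, 0, 0, 0, 0.5, 4.5, 0, 1.0, 0]

def decode_span(start_logits, end_logits, max_answer_len=15):
    """Pick the (start, end) pair with the best combined logit score."""
    best = (float("-inf"), 0, 0)
    for s, sl in enumerate(start_logits):
        for e in range(s, min(s + max_answer_len, len(end_logits))):
            score = sl + end_logits[e]
            if score > best[0]:
                best = (score, s, e)
    return best[1], best[2]

s, e = decode_span(start_logits, end_logits)
print(" ".join(tokens[s:e + 1]))  # → jane austen
```

In practice the same decoding is done (with extra masking of question tokens) inside the Transformers question-answering pipeline.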

Model description

A base DistilBERT model fine-tuned on the SQuAD dataset for extractive, context-based question answering.

Training procedure

The model was trained for 3 epochs.

Training hyperparameters

The following hyperparameters were used during training:

  • optimizer: Adam with learning_rate=5e-5
  • training_precision: float32
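For reference, "Adam with learning_rate=5e-5" means each weight is updated by bias-corrected moment estimates scaled by that learning rate. The toy single-parameter step below is an illustrative sketch, not the actual fine-tuning loop; beta1, beta2, and eps are the common Keras defaults, assumed here since the card does not list them.

```python
import math

def adam_step(w, grad, m, v, t, lr=5e-5,
              beta1=0.9, beta2=0.999, eps=1e-8):
    # Exponential moving averages of the gradient and its square.
    m = beta1 * m + (1 - beta1) * grad
    v = beta2 * v + (1 - beta2) * grad * grad
    # Bias correction for the zero-initialized averages.
    m_hat = m / (1 - beta1 ** t)
    v_hat = v / (1 - beta2 ** t)
    w = w - lr * m_hat / (math.sqrt(v_hat) + eps)
    return w, m, v

w, m, v = 1.0, 0.0, 0.0
w, m, v = adam_step(w, grad=2.0, m=m, v=v, t=1)
# On the first step the update magnitude is ~lr regardless of gradient scale.
print(w)  # ≈ 1.0 - 5e-5
```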

Training results

Training loss on the final epoch was 0.6417 and validation loss was 1.2772; a formal evaluation has not yet been performed.

Framework versions

  • Transformers 4.44.2
  • TensorFlow 2.17.0
  • Datasets 3.0.0
  • Tokenizers 0.19.1
