
Question Answering with Hugging Face Transformers and Keras 🤗❤️

This model is a fine-tuned version of distilbert-base-cased on the SQuAD dataset. It achieves the following results on the evaluation set:

  • Train Loss: 0.9300
  • Validation Loss: 1.1437
  • Epoch: 1

Model description

Question answering model based on distilbert-base-cased, trained with 🤗Transformers + ❤️Keras.
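Once downloaded, the model can be used through the 🤗 Transformers question-answering pipeline. A minimal sketch, assuming the model id is keras-io/transformers-qa (as shown on this page) and that the question and context strings are illustrative:

```python
# Minimal sketch: run extractive question answering with the 🤗 Transformers pipeline.
from transformers import pipeline

qa = pipeline("question-answering", model="keras-io/transformers-qa")

# The question/context pair below is a made-up example.
result = qa(
    question="What framework was used for training?",
    context="The model was fine-tuned with Keras and Hugging Face Transformers.",
)
print(result["answer"])
```

The pipeline returns a dict with the extracted answer span, its score, and its character offsets in the context.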

Intended uses & limitations

This model was trained for the Question Answering tutorial on Keras.io.

Training and evaluation data

It is trained on the SQuAD question answering dataset.

Training procedure

You can find the training notebook in the Keras Examples here. ❤️

Training hyperparameters

The following hyperparameters were used during training:

  • optimizer: {'name': 'Adam', 'learning_rate': 5e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
  • training_precision: mixed_float16
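The optimizer and precision settings above can be reproduced in TensorFlow as follows. This is only a sketch of the training configuration; the model and data pipeline wiring are omitted:

```python
# Sketch of the training setup implied by the hyperparameters above.
import tensorflow as tf

# training_precision: mixed_float16
tf.keras.mixed_precision.set_global_policy("mixed_float16")

# optimizer: Adam with the listed hyperparameters
optimizer = tf.keras.optimizers.Adam(
    learning_rate=5e-5,
    beta_1=0.9,
    beta_2=0.999,
    epsilon=1e-7,
    amsgrad=False,
)
```

Under mixed_float16, layer computations run in float16 while variables stay in float32, which speeds up training on GPUs with Tensor Cores without hurting convergence.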

Training results

| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 1.5145     | 1.1500          | 0     |
| 0.9300     | 1.1437          | 1     |

Framework versions

  • Transformers 4.16.0.dev0
  • TensorFlow 2.6.0
  • Datasets 1.16.2.dev0
  • Tokenizers 0.10.3