distilroberta-base fine-tuned on SQuAD (https://huggingface.co/datasets/squad)

Hyperparameters (a fine-tuning sketch using these values follows the list):

  • epochs: 1
  • lr: 1e-5
  • train batch size: 16
  • optimizer: AdamW
  • lr_scheduler: linear
  • num warmup steps: 0
  • max_length: 512
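
The list above maps directly onto the Hugging Face Trainer API. Below is a minimal fine-tuning sketch under those settings; it is an illustration rather than the exact script used for this model: it omits sliding-window (doc stride) handling for long contexts, and the output directory name is only a placeholder.

```python
# Fine-tuning sketch: distilroberta-base on SQuAD v1.1 with the
# hyperparameters listed in this card. Simplification (assumption):
# contexts are truncated to 512 tokens with no doc-stride windows, and
# answers that fall outside the truncated context are labelled (0, 0).
from datasets import load_dataset
from transformers import (AutoModelForQuestionAnswering, AutoTokenizer,
                          Trainer, TrainingArguments, default_data_collator)

model_name = "distilroberta-base"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForQuestionAnswering.from_pretrained(model_name)
squad = load_dataset("squad")

def preprocess(examples):
    # Tokenize question + context pairs, truncating only the context.
    inputs = tokenizer(
        examples["question"],
        examples["context"],
        max_length=512,            # max_length: 512
        truncation="only_second",
        return_offsets_mapping=True,
        padding="max_length",
    )
    start_positions, end_positions = [], []
    for i, offsets in enumerate(inputs["offset_mapping"]):
        answer = examples["answers"][i]
        start_char = answer["answer_start"][0]
        end_char = start_char + len(answer["text"][0])
        sequence_ids = inputs.sequence_ids(i)
        # Token span covered by the context (sequence id 1).
        ctx_start = sequence_ids.index(1)
        ctx_end = len(sequence_ids) - 1 - sequence_ids[::-1].index(1)
        if offsets[ctx_start][0] > start_char or offsets[ctx_end][1] < end_char:
            # Answer not fully inside the truncated context.
            start_positions.append(0)
            end_positions.append(0)
        else:
            # Walk to the first/last tokens that contain the answer span.
            idx = ctx_start
            while idx <= ctx_end and offsets[idx][0] <= start_char:
                idx += 1
            start_positions.append(idx - 1)
            idx = ctx_end
            while idx >= ctx_start and offsets[idx][1] >= end_char:
                idx -= 1
            end_positions.append(idx + 1)
    inputs["start_positions"] = start_positions
    inputs["end_positions"] = end_positions
    inputs.pop("offset_mapping")
    return inputs

train_set = squad["train"].map(preprocess, batched=True,
                               remove_columns=squad["train"].column_names)

args = TrainingArguments(
    output_dir="distilroberta-squad_1.1",  # placeholder output directory
    num_train_epochs=1,                    # epochs: 1
    learning_rate=1e-5,                    # lr: 1e-5
    per_device_train_batch_size=16,        # train batch size: 16
    lr_scheduler_type="linear",            # linear schedule
    warmup_steps=0,                        # num warmup steps: 0
)                                          # Trainer defaults to AdamW

trainer = Trainer(model=model, args=args, train_dataset=train_set,
                  data_collator=default_data_collator)
trainer.train()
```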

Results on the dev set (see the metric-computation sketch after the list):

  • exact_match: 76.37653736991486
  • f1: 84.5528918750732
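
These are the standard SQuAD v1.1 metrics. A short sketch of how they are computed with the `evaluate` library (the prediction/reference pair below is a made-up illustration, not taken from this model's output):

```python
# Compute exact_match and f1 with the official SQuAD metric.
import evaluate

squad_metric = evaluate.load("squad")
predictions = [{"id": "example-0", "prediction_text": "Denver Broncos"}]
references = [{"id": "example-0",
               "answers": {"text": ["Denver Broncos"], "answer_start": [177]}}]
print(squad_metric.compute(predictions=predictions, references=references))
# {'exact_match': 100.0, 'f1': 100.0}
```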

It took 1h 20 min to train on Colab.
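
For inference, the standard transformers question-answering pipeline can load the model by its repository id; a minimal sketch (the question/context pair is only an illustration):

```python
from transformers import pipeline

# Load this model from the Hub and ask a question about a short context.
qa = pipeline("question-answering",
              model="haritzpuerto/distilroberta-squad_1.1")
result = qa(
    question="What was the model fine-tuned on?",
    context="distilroberta-base was fine-tuned on SQuAD v1.1 for one epoch.",
)
print(result)  # {'score': ..., 'start': ..., 'end': ..., 'answer': ...}
```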
