
Model description

RoBERTa-base fine-tuned on SQuAD 2.0: an encoder-only Transformer language model. Compared to BERT, RoBERTa is pretrained with dynamic masking, drops the next-sentence-prediction objective, and uses larger batch sizes.
This checkpoint is suited to extractive question answering: given a question and a context, it predicts the answer span within the context and, because SQuAD 2.0 includes unanswerable questions, can also predict that no answer is present (a raw-model sketch follows the list below).

  • Language model: roberta-base
  • Language: English
  • Downstream task: extractive question answering
  • Training data: SQuAD 2.0 train split
  • Evaluation data: SQuAD 2.0 validation split
  • Hardware accelerator: NVIDIA Tesla T4 GPU
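
Under the hood, the model scores every token as a candidate span start and span end, and the answer is decoded from the best-scoring positions. A minimal sketch with the raw model (greedy decoding; the question/context strings are illustrative placeholders):

import torch
from transformers import AutoTokenizer, AutoModelForQuestionAnswering

model_checkpoint = "IProject-10/roberta-base-finetuned-squad2"
tokenizer = AutoTokenizer.from_pretrained(model_checkpoint)
model = AutoModelForQuestionAnswering.from_pretrained(model_checkpoint)

question = "Which libraries back 🤗 Transformers?"  # illustrative
context = "🤗 Transformers is backed by Jax, PyTorch and TensorFlow."  # illustrative

# Encode the question/context pair; the model emits a start logit and an
# end logit for every token.
inputs = tokenizer(question, context, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Greedy decoding: take the highest-scoring start and end token.
# Production code should also handle end < start and the no-answer case,
# where both logits peak at the <s> (CLS) token.
start_index = outputs.start_logits.argmax()
end_index = outputs.end_logits.argmax()
answer_tokens = inputs["input_ids"][0, start_index : end_index + 1]
print(tokenizer.decode(answer_tokens))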

Intended uses & limitations

For question answering:

!pip install transformers

from transformers import pipeline

# Load the fine-tuned checkpoint into a question-answering pipeline.
model_checkpoint = "IProject-10/roberta-base-finetuned-squad2"
question_answerer = pipeline("question-answering", model=model_checkpoint)

context = """
🤗 Transformers is backed by the three most popular deep learning libraries — Jax, PyTorch and TensorFlow — with a seamless integration
between them. It's straightforward to train your models with one before loading them for inference with the other.
"""

question = "Which deep learning libraries back 🤗 Transformers?"

# Returns the answer text, its character offsets in the context, and a confidence score.
question_answerer(question=question, context=context)
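
Because SQuAD 2.0 contains unanswerable questions, the pipeline can also return an empty answer when the model deems a question unanswerable. A minimal sketch (the question here is an illustrative placeholder):

# Let the pipeline return an empty string for unanswerable questions.
question_answerer(
    question="Which libraries does 🤗 Transformers deprecate?",  # illustrative; likely unanswerable
    context=context,
    handle_impossible_answer=True,
)
# e.g. {'score': ..., 'start': 0, 'end': 0, 'answer': ''}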

Results

Evaluation on the SQuAD 2.0 validation set:

  • exact: 79.71868946348859
  • f1: 83.049614486567
  • total: 11873
  • HasAns_exact: 78.39068825910931
  • HasAns_f1: 85.06209055313944
  • HasAns_total: 5928
  • NoAns_exact: 81.04289318755256
  • NoAns_f1: 81.04289318755256
  • NoAns_total: 5945
  • best_exact: 79.71868946348859
  • best_exact_thresh: 0.9997376203536987
  • best_f1: 83.04961448656734
  • best_f1_thresh: 0.9997376203536987
  • total_time_in_seconds: 226.245504546
  • samples_per_second: 52.47839078095801
  • latency_in_seconds: 0.019055462355428283
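
These are the official SQuAD 2.0 metrics. A minimal sketch of computing them with the 🤗 Evaluate library (the prediction/reference entries are illustrative placeholders):

import evaluate

squad_v2_metric = evaluate.load("squad_v2")

# Each prediction carries an id, the predicted text, and a no-answer probability;
# each reference carries the gold answers for the same id.
predictions = [{"id": "q1", "prediction_text": "France", "no_answer_probability": 0.0}]
references = [{"id": "q1", "answers": {"text": ["France"], "answer_start": [159]}}]
print(squad_v2_metric.compute(predictions=predictions, references=references))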

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 3e-05
  • train_batch_size: 16
  • eval_batch_size: 16
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 3
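
A rough sketch of the equivalent 🤗 TrainingArguments (output_dir and evaluation_strategy are assumptions, not stated above; the Adam settings listed are the library defaults):

from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="roberta-base-finetuned-squad2",  # assumption: illustrative path
    learning_rate=3e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=3,
    evaluation_strategy="epoch",  # assumption: matches the per-epoch validation losses below
)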

Training results

Training Loss    Epoch    Step     Validation Loss
0.8921           1.0       8239    0.8899
0.6186           2.0      16478    0.8880
0.4393           3.0      24717    0.9785

This model is a fine-tuned version of roberta-base on the squad_v2 dataset. It achieves the following result on the evaluation set:

  • Loss: 0.9785

Validation loss rises in epoch 3 even as training loss keeps falling, which suggests mild overfitting by the final epoch.
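
For reference, the training and evaluation data can be pulled straight from the Hub with the 🤗 Datasets library; a minimal sketch:

from datasets import load_dataset

# SQuAD 2.0 mixes answerable and unanswerable questions; its validation
# split has 11,873 examples, matching the "total" reported above.
squad_v2 = load_dataset("squad_v2")
print(squad_v2)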

Framework versions

  • Transformers 4.31.0
  • PyTorch 2.0.1+cu118
  • Datasets 2.14.2
  • Tokenizers 0.13.3