# roberta-base-finetuned-squad
This model is a fine-tuned version of roberta-base on the SQuAD 2.0 dataset.
## Model description

A roberta-base model fine-tuned on SQuAD 2.0 for extractive question answering:
- Language model: roberta-base
- Language: English
- Downstream-task: Extractive QA
- Training data: SQuAD 2.0
- Eval data: SQuAD 2.0
## Intended uses & limitations

The model is intended for extractive question answering: given a question and a context passage, it predicts the answer span within the passage. Because it was fine-tuned on SQuAD 2.0, it can also indicate that a question is unanswerable from the given context. It is limited to English and to extractive QA; it does not generate free-form answers. A minimal usage sketch follows.
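The sketch below uses the standard Transformers question-answering pipeline with this repo's id; `handle_impossible_answer=True` lets the pipeline return an empty answer for unanswerable questions, matching the SQuAD 2.0 training objective. The question and context are illustrative only.

```python
from transformers import pipeline

# Load this checkpoint into the standard extractive-QA pipeline.
qa = pipeline("question-answering", model="ozgurkk/roberta-base-finetuned-squad")

result = qa(
    question="What does SQuAD stand for?",
    context=(
        "The Stanford Question Answering Dataset (SQuAD) is a reading "
        "comprehension dataset consisting of questions posed on Wikipedia articles."
    ),
    handle_impossible_answer=True,  # allow an empty answer for unanswerable questions
)
print(result)  # {'score': ..., 'start': ..., 'end': ..., 'answer': ...}
```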
## Training and evaluation data

The model was trained and evaluated on SQuAD 2.0: https://rajpurkar.github.io/SQuAD-explorer/
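As a sketch, the dataset can be loaded from the Hugging Face Hub, where SQuAD 2.0 is published under the id `squad_v2`:

```python
from datasets import load_dataset

# SQuAD 2.0: answerable and unanswerable questions over Wikipedia passages.
squad = load_dataset("squad_v2")
print(squad)              # train / validation splits
print(squad["train"][0])  # {'id': ..., 'title': ..., 'context': ..., 'question': ..., 'answers': ...}
```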
## Evaluation results

Metrics from the SQuAD 2.0 evaluation on the dev set. `HasAns_*` scores are computed over answerable questions, `NoAns_*` over unanswerable ones, and `best_*` are the scores at the optimal no-answer threshold.

| Metric | Value |
| --- | --- |
| exact | 77.90 |
| f1 | 81.32 |
| HasAns_exact | 77.09 |
| HasAns_f1 | 83.95 |
| NoAns_exact | 78.70 |
| NoAns_f1 | 78.70 |
| best_exact | 79.08 |
| best_f1 | 82.16 |
## Training hyperparameters

The following hyperparameters were used during training (a sketch of the corresponding `TrainingArguments` follows the list):
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
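The sketch below maps the hyperparameters above onto Transformers `TrainingArguments`, assuming the standard `Trainer`-based QA fine-tuning setup (e.g. the `run_qa.py` example script); the `output_dir` name is illustrative. The listed Adam betas and epsilon are the `TrainingArguments` defaults, so they need not be set explicitly.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="roberta-base-finetuned-squad",  # hypothetical output path
    learning_rate=2e-5,
    per_device_train_batch_size=8,   # train_batch_size
    per_device_eval_batch_size=8,    # eval_batch_size
    gradient_accumulation_steps=4,   # effective train batch size: 8 * 4 = 32
    num_train_epochs=3,
    lr_scheduler_type="linear",
    seed=42,
    fp16=True,                       # Native AMP mixed-precision training
)
```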
## Framework versions
- Transformers 4.41.2
- Pytorch 2.3.0+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1