roberta-base-squad2-nq-bioasq
Model description
This model is a fine-tuned version of nlpconnect/roberta-base-squad2-nq on the BioASQ 10B dataset.
Intended uses & limitations
Cross-domain extractive question answering: answering questions over both general-domain passages (SQuAD / Natural Questions style) and biomedical passages (BioASQ style).
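A minimal usage sketch with the transformers question-answering pipeline. The Hub repo id below is a placeholder for wherever this card is hosted, and the question/context are illustrative:

```python
from transformers import pipeline

# Placeholder repo id -- substitute the actual Hub path for this model.
qa = pipeline("question-answering", model="roberta-base-squad2-nq-bioasq")

result = qa(
    question="Which gene is mutated in cystic fibrosis?",
    context="Cystic fibrosis is caused by mutations in the CFTR gene, "
            "which encodes a chloride channel.",
)
print(result["answer"], result["score"])
```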
Training and evaluation data
- Training: BioASQ 10B, combined with SQuAD examples sampled evenly so both sources contribute the same number of samples as BioASQ 10B (sampling sketched below).
- Evaluation: the BioASQ 9B eval set, combined with SQuAD eval examples sampled evenly to match the number of BioASQ 9B eval samples.
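A sketch of the even-sampling step, assuming SQuAD v2 (the card only says SQuAD) and a BioASQ 10B split already converted to a flat, SQuAD-style JSON-lines file; BioASQ itself requires registration, so the local path is hypothetical:

```python
from datasets import load_dataset, concatenate_datasets

# Hypothetical local file: one SQuAD-style BioASQ example per line.
bioasq_train = load_dataset("json", data_files="bioasq10b_train.jsonl", split="train")
squad = load_dataset("squad_v2", split="train")

# Downsample SQuAD so both sources contribute the same number of examples.
squad_sampled = squad.shuffle(seed=42).select(range(len(bioasq_train)))

# Align schemas, then mix the two sources into one training set.
bioasq_train = bioasq_train.cast(squad.features)
train_set = concatenate_datasets([squad_sampled, bioasq_train]).shuffle(seed=42)
```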
Training procedure
Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
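For reference, a sketch of how these settings map onto transformers.TrainingArguments; the Adam betas and epsilon listed above are the library defaults, and model/data wiring is omitted:

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="roberta-base-squad2-nq-bioasq",  # hypothetical output path
    learning_rate=2e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=3,
    fp16=True,  # "Native AMP" mixed-precision training
)
```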
Training results
On the held-out BioASQ 9B set, the model improved from 60.9% exact match (71.8% F1) before fine-tuning to 95.2% exact match (96.6% F1). On the combined SQuAD + BioASQ eval set, scores went from 72.5% exact match (81.4% F1) to 88.5% exact match (93.3% F1), so fine-tuning on biomedical data did not come at the cost of general-domain performance.
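The exact match and F1 figures are the standard SQuAD metrics; a sketch of computing them with datasets.load_metric (matching the Datasets 2.1.0 listed below; newer code would use the evaluate library), with placeholder predictions and references:

```python
from datasets import load_metric

squad_metric = load_metric("squad")

# Placeholder example: predictions are matched to references by id.
predictions = [{"id": "q1", "prediction_text": "the CFTR gene"}]
references = [{
    "id": "q1",
    "answers": {"text": ["the CFTR gene"], "answer_start": [42]},
}]

print(squad_metric.compute(predictions=predictions, references=references))
# {'exact_match': 100.0, 'f1': 100.0}
```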
Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1