
biomedical_question_answering

This model is a fine-tuned version of microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext on a custom dataset of question-answer pairs annotated from PubMed research papers. It achieves the following results on the evaluation set:

  • Loss: 2.6629

Model description

The model was fine-tuned from PubMedBERT on a custom biomedical question-answering dataset.

Intended uses & limitations

The model is intended for question answering over biomedical research papers.
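A minimal inference sketch with the Hugging Face question-answering pipeline is shown below, assuming the Hub repository id Shushant/biomedical_question_answering; the question/context pair is illustrative.

```python
from transformers import pipeline

# Assumed Hub repository id for this fine-tuned checkpoint.
qa = pipeline(
    "question-answering",
    model="Shushant/biomedical_question_answering",
)

# Illustrative question/context; replace with text from a biomedical paper.
result = qa(
    question="Which enzyme does aspirin inhibit?",
    context=(
        "Aspirin exerts its anti-inflammatory effect by irreversibly "
        "inhibiting cyclooxygenase (COX) enzymes."
    ),
)
print(result["answer"], result["score"])
```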

Training and evaluation data

Data: https://huggingface.co/datasets/Shushant/BiomedicalQuestionAnsweringDataset
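The dataset can be pulled directly from the Hub with the datasets library; the split names and column layout are not documented here, so inspect the dataset card for the actual schema.

```python
from datasets import load_dataset

# Load the biomedical QA dataset from the Hub.
dataset = load_dataset("Shushant/BiomedicalQuestionAnsweringDataset")

# Split names and fields are assumptions; inspect them before use.
print(dataset)
```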

Training procedure

Fine-tuning was performed with the Hugging Face Trainer API; a sketch reproducing this setup follows the hyperparameter list below.

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 3e-05
  • train_batch_size: 16
  • eval_batch_size: 16
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 10
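The sketch below mirrors the listed hyperparameters with the Trainer API, under stated assumptions: tokenized_train and tokenized_eval are placeholders for SQuAD-style features (input_ids, attention_mask, start_positions, end_positions) prepared from the dataset above, and the output directory name is illustrative. The Adam betas/epsilon and the linear schedule listed above are the Trainer defaults.

```python
from transformers import (
    AutoModelForQuestionAnswering,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
    default_data_collator,
)

checkpoint = "microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForQuestionAnswering.from_pretrained(checkpoint)

# Hyperparameters as listed above; Adam betas/epsilon and the linear
# learning-rate schedule are left at the Trainer defaults.
args = TrainingArguments(
    output_dir="biomedical_question_answering",  # illustrative name
    learning_rate=3e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    num_train_epochs=10,
    seed=42,
    evaluation_strategy="epoch",
)

# tokenized_train / tokenized_eval are assumed to hold SQuAD-style features.
trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized_train,
    eval_dataset=tokenized_eval,
    data_collator=default_data_collator,
    tokenizer=tokenizer,
)
trainer.train()
```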

Training results

Training Loss   Epoch   Step   Validation Loss
No log          1.0     236    1.6866
No log          2.0     472    1.5432
0.737           3.0     708    1.7998
0.737           4.0     944    1.9746
0.2893          5.0     1180   1.9510
0.2893          6.0     1416   2.1479
0.1562          7.0     1652   2.3304
0.1562          8.0     1888   2.5882
0.0823          9.0     2124   2.6494
0.0823          10.0    2360   2.6629

Framework versions

Citation Plain Text

S. Pudasaini and S. Shakya, "Question Answering on Biomedical Research Papers using Transfer Learning on BERT-Base Models," 2023 7th International Conference on I-SMAC (IoT in Social, Mobile, Analytics and Cloud) (I-SMAC), Kirtipur, Nepal, 2023, pp. 496-501, doi: 10.1109/I-SMAC58438.2023.10290240.

Citation BibTeX

@INPROCEEDINGS{10290240, author={Pudasaini, Shushanta and Shakya, Subarna}, booktitle={2023 7th International Conference on I-SMAC (IoT in Social, Mobile, Analytics and Cloud) (I-SMAC)}, title={Question Answering on Biomedical Research Papers using Transfer Learning on BERT-Base Models}, year={2023}, volume={}, number={}, pages={496-501}, doi={10.1109/I-SMAC58438.2023.10290240}}
