PubMedBERT-Large-LitCovid-v1.3
This model is a fine-tuned version of microsoft/BiomedNLP-BiomedBERT-large-uncased-abstract for multi-label topic classification. The training dataset is not documented here, although the model name suggests the LitCovid corpus. It achieves the following results on the evaluation set (a sketch for reproducing these metrics follows the list):
- Loss: 1.0491
- Hamming loss: 0.0120
- F1 micro: 0.8926
- F1 macro: 0.5147
- F1 weighted: 0.9005
- F1 samples: 0.8991
- Precision micro: 0.8601
- Precision macro: 0.4443
- Precision weighted: 0.8773
- Precision samples: 0.8949
- Recall micro: 0.9276
- Recall macro: 0.6794
- Recall weighted: 0.9276
- Recall samples: 0.9391
- ROC AUC: 0.9595
- Accuracy: 0.7345
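These are standard multi-label classification metrics. As a minimal sketch of how they can be reproduced with scikit-learn (the array names and toy data below are illustrative, not taken from the original evaluation script):

```python
# Hedged sketch: multi-label metrics over binary indicator arrays.
# y_true / y_pred are (n_samples, n_labels) 0/1 arrays; y_score holds
# sigmoid probabilities. Toy data only -- not the LitCovid evaluation set.
import numpy as np
from sklearn.metrics import (
    accuracy_score, f1_score, hamming_loss,
    precision_score, recall_score, roc_auc_score,
)

rng = np.random.default_rng(42)
y_true = rng.integers(0, 2, size=(100, 7))    # toy ground-truth labels
y_score = rng.random(size=(100, 7))           # toy sigmoid outputs
y_pred = (y_score >= 0.5).astype(int)         # 0.5 decision threshold

metrics = {
    "hamming_loss": hamming_loss(y_true, y_pred),
    "accuracy": accuracy_score(y_true, y_pred),  # subset (exact-match) accuracy
    "roc_auc": roc_auc_score(y_true, y_score, average="micro"),
}
for avg in ("micro", "macro", "weighted", "samples"):
    metrics[f"f1_{avg}"] = f1_score(y_true, y_pred, average=avg, zero_division=0)
    metrics[f"precision_{avg}"] = precision_score(y_true, y_pred, average=avg, zero_division=0)
    metrics[f"recall_{avg}"] = recall_score(y_true, y_pred, average=avg, zero_division=0)
print(metrics)
```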
Model description
More information needed
Intended uses & limitations
More information needed
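The card does not document usage, but since the evaluation metrics above are multi-label, the checkpoint is presumably a multi-label sequence classifier. A hedged loading sketch (the checkpoint path and example text are placeholders, and the 0.5 decision threshold is an assumption):

```python
# Hedged usage sketch: load the checkpoint as a multi-label classifier and
# threshold sigmoid probabilities. The path below is a placeholder; replace
# it with the actual Hub repo id or local directory of this model.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

ckpt = "path/to/PubMedBERT-Large-LitCovid-v1.3"  # placeholder, not a real repo id
tokenizer = AutoTokenizer.from_pretrained(ckpt)
model = AutoModelForSequenceClassification.from_pretrained(
    ckpt, problem_type="multi_label_classification"
)
model.eval()

text = "Transmission dynamics of SARS-CoV-2 in a hospital setting."
inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)
with torch.no_grad():
    probs = torch.sigmoid(model(**inputs).logits).squeeze(0)

# Map above-threshold probabilities back to label names via the id2label
# mapping saved in the config at fine-tuning time.
predicted = [model.config.id2label[i] for i, p in enumerate(probs) if p >= 0.5]
print(predicted)
```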
Training and evaluation data
More information needed
Training procedure
Training hyperparameters
The following hyperparameters were used during training (see the TrainingArguments sketch after this list):
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.11455156472885997
- num_epochs: 5
- mixed_precision_training: Native AMP
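These settings map directly onto transformers.TrainingArguments (4.28 API). A hedged sketch: the output directory is a placeholder, the per-epoch evaluation strategy is inferred from the results table below, and the model and dataset wiring of the original run is not reproduced.

```python
# Sketch of TrainingArguments mirroring the listed hyperparameters.
# Adam betas=(0.9, 0.999) and epsilon=1e-08 are the TrainingArguments
# defaults (adam_beta1/adam_beta2/adam_epsilon), so they need no override.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="pubmedbert-large-litcovid",  # placeholder path
    learning_rate=5e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=42,
    gradient_accumulation_steps=2,           # effective train batch size: 4 * 2 = 8
    lr_scheduler_type="linear",
    warmup_ratio=0.11455156472885997,
    num_train_epochs=5,
    fp16=True,                               # "Native AMP" mixed precision
    evaluation_strategy="epoch",             # assumption: matches the per-epoch results table
)
```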
Training results
| Training Loss | Epoch | Step | Validation Loss | Hamming Loss | F1 Micro | F1 Macro | F1 Weighted | F1 Samples | Precision Micro | Precision Macro | Precision Weighted | Precision Samples | Recall Micro | Recall Macro | Recall Weighted | Recall Samples | ROC AUC | Accuracy |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 1.7664 | 1.0 | 4543 | 0.7307 | 0.0260 | 0.7934 | 0.3058 | 0.8407 | 0.8375 | 0.6920 | 0.2476 | 0.7786 | 0.8050 | 0.9296 | 0.6254 | 0.9296 | 0.9405 | 0.9531 | 0.5695 |
| 1.2652 | 2.0 | 9086 | 0.6934 | 0.0179 | 0.8492 | 0.4112 | 0.8710 | 0.8656 | 0.7749 | 0.3399 | 0.8188 | 0.8377 | 0.9394 | 0.6981 | 0.9394 | 0.9481 | 0.9620 | 0.6367 |
| 1.0967 | 3.0 | 13629 | 0.6460 | 0.0159 | 0.8631 | 0.4208 | 0.8849 | 0.8761 | 0.8008 | 0.3493 | 0.8460 | 0.8555 | 0.9360 | 0.7053 | 0.9360 | 0.9446 | 0.9614 | 0.6683 |
| 0.7557 | 4.0 | 18172 | 0.8881 | 0.0123 | 0.8901 | 0.5048 | 0.8966 | 0.8952 | 0.8527 | 0.4416 | 0.8674 | 0.8865 | 0.9309 | 0.6630 | 0.9309 | 0.9409 | 0.9609 | 0.7224 |
| 0.4182 | 5.0 | 22715 | 1.0491 | 0.0120 | 0.8926 | 0.5147 | 0.9005 | 0.8991 | 0.8601 | 0.4443 | 0.8773 | 0.8949 | 0.9276 | 0.6794 | 0.9276 | 0.9391 | 0.9595 | 0.7345 |
Framework versions
- Transformers 4.28.0
- Pytorch 2.1.0+cu121
- Datasets 2.18.0
- Tokenizers 0.13.3