---
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: BioBERT-LitCovid-v1.3hh
  results: []
---

# BioBERT-LitCovid-v1.3hh

This model is a fine-tuned version of [dmis-lab/biobert-v1.1](https://huggingface.co/dmis-lab/biobert-v1.1) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9050
- Hamming loss: 0.0147
- F1 micro: 0.8717
- F1 macro: 0.4368
- F1 weighted: 0.8882
- F1 samples: 0.8857
- Precision micro: 0.8176
- Precision macro: 0.3560
- Precision weighted: 0.8520
- Precision samples: 0.8728
- Recall micro: 0.9334
- Recall macro: 0.7011
- Recall weighted: 0.9334
- Recall samples: 0.9438
- ROC AUC: 0.9608
- Accuracy: 0.7014

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.11492820779210673
- num_epochs: 5
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step  | Validation Loss | Hamming loss | F1 micro | F1 macro | F1 weighted | F1 samples | Precision micro | Precision macro | Precision weighted | Precision samples | Recall micro | Recall macro | Recall weighted | Recall samples | ROC AUC | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:------------:|:--------:|:--------:|:-----------:|:----------:|:---------------:|:---------------:|:------------------:|:-----------------:|:------------:|:------------:|:---------------:|:--------------:|:-------:|:--------:|
| 1.1889        | 1.0   | 2272  | 0.4213          | 0.0512       | 0.6596   | 0.2446   | 0.8084      | 0.7608     | 0.5126          | 0.1941          | 0.7385             | 0.7077            | 0.9250       | 0.8376       | 0.9250          | 0.9404         | 0.9376  | 0.4492   |
| 0.8405        | 2.0   | 4544  | 0.4523          | 0.0234       | 0.8101   | 0.3434   | 0.8586      | 0.8435     | 0.7177          | 0.2700          | 0.8104             | 0.8130            | 0.9296       | 0.7802       | 0.9296          | 0.9421         | 0.9544  | 0.5954   |
| 0.6991        | 3.0   | 6816  | 0.5218          | 0.0214       | 0.8253   | 0.3595   | 0.8703      | 0.8563     | 0.7327          | 0.2829          | 0.8184             | 0.8238            | 0.9447       | 0.7721       | 0.9447          | 0.9534         | 0.9626  | 0.6190   |
| 0.3865        | 4.0   | 9088  | 0.8428          | 0.0155       | 0.8655   | 0.4279   | 0.8826      | 0.8808     | 0.8092          | 0.3453          | 0.8458             | 0.8667            | 0.9302       | 0.6992       | 0.9302          | 0.9417         | 0.9589  | 0.6917   |
| 0.1332        | 5.0   | 11360 | 0.9050          | 0.0147       | 0.8717   | 0.4368   | 0.8882      | 0.8857     | 0.8176          | 0.3560          | 0.8520             | 0.8728            | 0.9334       | 0.7011       | 0.9334          | 0.9438         | 0.9608  | 0.7014   |

### Framework versions

- Transformers 4.28.0
- PyTorch 2.1.0+cu121
- Datasets 2.18.0
- Tokenizers 0.13.3
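
### Reproducing the training setup

The hyperparameters above map directly onto `TrainingArguments`. The sketch below is a reconstruction under assumptions, not the original training script: the dataset, preprocessing, and `Trainer` wiring are unknown, and only the listed values are taken from this card. The Adam betas (0.9, 0.999) and epsilon (1e-08) match the Transformers defaults, so they need no explicit override.

```python
from transformers import TrainingArguments

# Hyperparameters copied from the list above; everything else (dataset,
# data collator, metric computation) is unknown and therefore omitted.
training_args = TrainingArguments(
    output_dir="BioBERT-LitCovid-v1.3hh",
    learning_rate=5e-05,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    warmup_ratio=0.11492820779210673,
    num_train_epochs=5,
    fp16=True,  # "Native AMP" mixed-precision training
)
```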
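
## How to use

The reported metrics (Hamming loss, samples-averaged F1/precision/recall) indicate a multi-label classifier, so scores should be passed through a sigmoid and thresholded per label rather than argmaxed. Below is a minimal inference sketch under that assumption; the checkpoint path, the example text, and the 0.5 cutoff are placeholders, and the label names come from whatever `id2label` mapping is stored in the checkpoint's config.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Placeholder: point this at the local checkpoint directory or Hub repo ID.
MODEL_PATH = "path/to/BioBERT-LitCovid-v1.3hh"

tokenizer = AutoTokenizer.from_pretrained(MODEL_PATH)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_PATH)
model.eval()

text = "Example abstract: transmission dynamics of SARS-CoV-2 in households."
inputs = tokenizer(text, return_tensors="pt", truncation=True)

with torch.no_grad():
    logits = model(**inputs).logits.squeeze(0)

# Multi-label decoding: independent sigmoid per label, then an assumed
# 0.5 cutoff; tune the threshold on held-out data for your use case.
probs = torch.sigmoid(logits)
labels = [model.config.id2label[i] for i, p in enumerate(probs.tolist()) if p > 0.5]
print(labels)
```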