---
license: mit
tags:
- generated_from_trainer
datasets:
- bc4chemd_ner
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bc4chemd_ner-Bio_ClinicalBERT-finetuned-ner
  results:
  - task:
      name: Token Classification
      type: token-classification
    dataset:
      name: bc4chemd_ner
      type: bc4chemd_ner
      args: bc4chemd
    metrics:
    - name: Precision
      type: precision
      value: 0.8944236722550557
    - name: Recall
      type: recall
      value: 0.8777321865383098
    - name: F1
      type: f1
      value: 0.8859993229654115
    - name: Accuracy
      type: accuracy
      value: 0.9908228496683563
---

# bc4chemd_ner-Bio_ClinicalBERT-finetuned-ner

This model is a fine-tuned version of [emilyalsentzer/Bio_ClinicalBERT](https://huggingface.co/emilyalsentzer/Bio_ClinicalBERT) on the bc4chemd_ner dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0641
- Precision: 0.8944
- Recall: 0.8777
- F1: 0.8860
- Accuracy: 0.9908

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10

### Training results

| Training Loss | Epoch | Step  | Validation Loss | Precision | Recall | F1     | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.006         | 1.0   | 1918  | 0.0310          | 0.8697    | 0.8510 | 0.8602 | 0.9894   |
| 0.0097        | 2.0   | 3836  | 0.0345          | 0.8855    | 0.8637 | 0.8745 | 0.9898   |
| 0.0058        | 3.0   | 5754  | 0.0359          | 0.8733    | 0.8836 | 0.8784 | 0.9902   |
| 0.0014        | 4.0   | 7672  | 0.0440          | 0.8723    | 0.8842 | 0.8782 | 0.9903   |
| 0.0005        | 5.0   | 9590  | 0.0539          | 0.8862    | 0.8673 | 0.8766 | 0.9903   |
| 0.0001        | 6.0   | 11508 | 0.0558          | 0.8939    | 0.8628 | 0.8781 | 0.9904   |
| 0.0001        | 7.0   | 13426 | 0.0558          | 0.8846    | 0.8729 | 0.8787 | 0.9903   |
| 0.0012        | 8.0   | 15344 | 0.0635          | 0.8935    | 0.8696 | 0.8814 | 0.9905   |
| 0.0           | 9.0   | 17262 | 0.0624          | 0.8897    | 0.8831 | 0.8864 | 0.9908   |
| 0.0002        | 10.0  | 19180 | 0.0641          | 0.8944    | 0.8777 | 0.8860 | 0.9908   |

### Framework versions

- Transformers 4.20.1
- Pytorch 1.12.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
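
The training hyperparameters listed above map directly onto `transformers.TrainingArguments`. Below is a minimal sketch of that mapping; the `output_dir` and `evaluation_strategy` values are illustrative assumptions, not taken from the original training script.

```python
# Minimal sketch of the listed hyperparameters as TrainingArguments.
# output_dir and evaluation_strategy are assumptions for illustration.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="bc4chemd_ner-Bio_ClinicalBERT-finetuned-ner",  # assumed name
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    num_train_epochs=10,
    seed=42,
    lr_scheduler_type="linear",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    evaluation_strategy="epoch",  # the results table reports one evaluation per epoch
)
```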
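
For inference, the checkpoint can be loaded with the standard token-classification pipeline. A minimal sketch, assuming the fine-tuned weights are available under a local path or hub repository; `model_id` is a placeholder, replace it with the actual location.

```python
# Minimal inference sketch; model_id is a placeholder for the actual
# hub repository or local directory holding the fine-tuned weights.
from transformers import AutoModelForTokenClassification, AutoTokenizer, pipeline

model_id = "bc4chemd_ner-Bio_ClinicalBERT-finetuned-ner"  # placeholder

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForTokenClassification.from_pretrained(model_id)

# aggregation_strategy="simple" merges sub-word tokens back into entity spans
ner = pipeline("token-classification", model=model, tokenizer=tokenizer,
               aggregation_strategy="simple")

print(ner("Treatment with tamoxifen reduced serum estradiol levels."))
```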