---
license: apache-2.0
tags:
- generated_from_trainer
- NER
datasets:
- blurb
model-index:
- name: bert-base-cased-finetuned-ner-BC2GM-IOB
  results: []
language:
- en
metrics:
- seqeval
pipeline_tag: token-classification
---

# bert-base-cased-finetuned-ner-BC2GM-IOB

This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) for gene-mention named entity recognition (NER) on the BC2GM (BioCreative II Gene Mention) task of the BLURB benchmark, using IOB tags. It achieves the following results on the evaluation set:
- Loss: 0.0813
- Gene
  - Precision: 0.752111423914654
  - Recall: 0.8025296442687747
  - F1: 0.7765029830197338
  - Number: 6325
- Overall
  - Precision: 0.7521
  - Recall: 0.8025
  - F1: 0.7765
  - Accuracy: 0.9736

## Model description

For more information on how this model was created, see the project notebook: https://github.com/DunnBC22/NLP_Projects/blob/main/Token%20Classification/Monolingual/EMBO-BLURB/NER%20Project%20Using%20EMBO-BLURB%20Dataset.ipynb

## Intended uses & limitations

This model is intended to demonstrate my ability to solve a complex problem using technology. It tags gene mentions in biomedical text; it has only been evaluated on the BC2GM evaluation split, and performance on other domains has not been measured. A minimal inference example is included at the end of this card.

## Training and evaluation data

Dataset source: https://huggingface.co/datasets/EMBO/BLURB

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a matching `TrainingArguments` sketch is included at the end of this card):
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3

### Training results

| Training Loss | Epoch | Step | Validation Loss | Gene Precision | Gene Recall | Gene F1 | Gene Number | Overall Precision | Overall Recall | Overall F1 | Overall Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------------:|:-----------:|:-------:|:-----------:|:-----------------:|:--------------:|:----------:|:----------------:|
| 0.0882        | 1.0   | 786  | 0.0771          | 0.7383         | 0.7538      | 0.7460  | 6325        | 0.7383            | 0.7538         | 0.7460     | 0.9697           |
| 0.0547        | 2.0   | 1572 | 0.0823          | 0.7617         | 0.7758      | 0.7687  | 6325        | 0.7617            | 0.7758         | 0.7687     | 0.9732           |
| 0.0356        | 3.0   | 2358 | 0.0813          | 0.7521         | 0.8025      | 0.7765  | 6325        | 0.7521            | 0.8025         | 0.7765     | 0.9736           |

*All values in the above table are rounded to the nearest ten-thousandth.

### Framework versions

- Transformers 4.28.1
- Pytorch 2.0.0
- Datasets 2.11.0
- Tokenizers 0.13.3
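
## Training configuration sketch

The hyperparameters listed under *Training procedure* map directly onto `transformers.TrainingArguments`. The sketch below is illustrative rather than the author's exact training script; in particular, `output_dir` and `evaluation_strategy` are assumptions not stated in this card.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="bert-base-cased-finetuned-ner-BC2GM-IOB",  # assumption: any local path works
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    num_train_epochs=3,
    lr_scheduler_type="linear",
    # Adam betas=(0.9, 0.999) and epsilon=1e-08 are the Trainer defaults,
    # so they do not need to be set explicitly.
    evaluation_strategy="epoch",  # assumption: the card reports validation metrics once per epoch
)
```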
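
## How to use

A minimal inference sketch with the `transformers` pipeline. The repo id below is an assumption inferred from the author's GitHub username; substitute the actual path under which this model is published if it differs.

```python
from transformers import pipeline

# Assumed repo id; replace with the actual model path if it differs.
model_id = "DunnBC22/bert-base-cased-finetuned-ner-BC2GM-IOB"

# aggregation_strategy="simple" merges IOB sub-token predictions into entity spans.
ner = pipeline("token-classification", model=model_id, aggregation_strategy="simple")

text = "Mutations in the BRCA1 gene increase the risk of breast cancer."
for entity in ner(text):
    print(entity["entity_group"], entity["word"], round(entity["score"], 3))
```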