---
tags:
- generated_from_trainer
datasets:
- jnlpba
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: electramed-small-JNLPBA-ner
  results:
  - task:
      name: Token Classification
      type: token-classification
    dataset:
      name: jnlpba
      type: jnlpba
      config: jnlpba
      split: train
      args: jnlpba
    metrics:
    - name: Precision
      type: precision
      value: 0.8224512128396863
    - name: Recall
      type: recall
      value: 0.878188899707887
    - name: F1
      type: f1
      value: 0.8494066679223958
    - name: Accuracy
      type: accuracy
      value: 0.9620705451213926
---

# electramed-small-JNLPBA-ner

This model is a fine-tuned version of [giacomomiolo/electramed_small_scivocab](https://huggingface.co/giacomomiolo/electramed_small_scivocab) on the jnlpba dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1167
- Precision: 0.8225
- Recall: 0.8782
- F1: 0.8494
- Accuracy: 0.9621

## Model description

The base checkpoint, [giacomomiolo/electramed_small_scivocab](https://huggingface.co/giacomomiolo/electramed_small_scivocab), is a small ELECTRA-style encoder pretrained on biomedical text with a scientific vocabulary (ELECTRAMed). This fine-tune adds a token-classification head and trains it for biomedical named-entity recognition on JNLPBA.

## Intended uses & limitations

The model is intended for named-entity recognition in English biomedical text, tagging the five JNLPBA entity types (`protein`, `DNA`, `RNA`, `cell_line`, `cell_type`) in IOB format. Because the training data consists of MEDLINE abstracts, performance on other text domains may be lower, and the small encoder trades some accuracy for speed. A minimal inference sketch appears under "How to use" at the end of this card.

## Training and evaluation data

The model was fine-tuned on the [jnlpba](https://huggingface.co/datasets/jnlpba) dataset, the corpus of the BioNLP/NLPBA 2004 shared task on bio-entity recognition. It is derived from the GENIA corpus of MEDLINE abstracts and annotated with the five entity types listed above. The per-epoch metrics in the results table below are computed on the validation split.

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10

An illustrative `Trainer` configuration built from these values appears under "Example fine-tuning sketch" at the end of this card.

### Training results

| Training Loss | Epoch | Step  | Validation Loss | Precision | Recall | F1     | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.398         | 1.0   | 2087  | 0.1941          | 0.7289    | 0.7936 | 0.7599 | 0.9441   |
| 0.0771        | 2.0   | 4174  | 0.1542          | 0.7734    | 0.8348 | 0.8029 | 0.9514   |
| 0.1321        | 3.0   | 6261  | 0.1413          | 0.7890    | 0.8492 | 0.8180 | 0.9546   |
| 0.2302        | 4.0   | 8348  | 0.1326          | 0.8006    | 0.8589 | 0.8287 | 0.9562   |
| 0.0723        | 5.0   | 10435 | 0.1290          | 0.7997    | 0.8715 | 0.8340 | 0.9574   |
| 0.171         | 6.0   | 12522 | 0.1246          | 0.8115    | 0.8722 | 0.8408 | 0.9593   |
| 0.1058        | 7.0   | 14609 | 0.1204          | 0.8148    | 0.8757 | 0.8441 | 0.9604   |
| 0.1974        | 8.0   | 16696 | 0.1178          | 0.8181    | 0.8779 | 0.8470 | 0.9614   |
| 0.0663        | 9.0   | 18783 | 0.1168          | 0.8239    | 0.8781 | 0.8501 | 0.9620   |
| 0.1022        | 10.0  | 20870 | 0.1167          | 0.8225    | 0.8782 | 0.8494 | 0.9621   |

### Framework versions

- Transformers 4.21.1
- PyTorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
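
## How to use

Since the card does not yet include an inference snippet, here is a minimal sketch using the `transformers` pipeline API. The repo id `<namespace>/electramed-small-JNLPBA-ner` is a placeholder; substitute the account that actually hosts this checkpoint.

```python
from transformers import pipeline

# Replace <namespace> with the Hub account hosting this checkpoint.
ner = pipeline(
    "token-classification",
    model="<namespace>/electramed-small-JNLPBA-ner",
    aggregation_strategy="simple",  # merge word pieces back into entity spans
)

text = "Interleukin-2 gene expression requires activation of NF-kappa B in T cells."
for entity in ner(text):
    print(entity["entity_group"], entity["word"], round(entity["score"], 3))
```

With `aggregation_strategy="simple"`, sub-word tokens are merged back into whole mentions, so each output row carries one of the five JNLPBA entity types rather than raw IOB tags.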
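
## Example fine-tuning sketch

The hyperparameters above map directly onto `transformers.TrainingArguments`; the listed Adam betas and epsilon are the library defaults. The following is a sketch under stated assumptions, not the exact training script: the label-alignment helper follows the standard token-classification recipe, and per-epoch evaluation is assumed from the shape of the results table.

```python
from datasets import load_dataset
from transformers import (
    AutoModelForTokenClassification,
    AutoTokenizer,
    DataCollatorForTokenClassification,
    Trainer,
    TrainingArguments,
)

dataset = load_dataset("jnlpba")
label_list = dataset["train"].features["ner_tags"].feature.names

tokenizer = AutoTokenizer.from_pretrained("giacomomiolo/electramed_small_scivocab")
model = AutoModelForTokenClassification.from_pretrained(
    "giacomomiolo/electramed_small_scivocab", num_labels=len(label_list)
)

def tokenize_and_align_labels(batch):
    # Standard recipe: tag the first word piece of each word, mask the rest
    # (and the special tokens) with -100 so the loss ignores them.
    enc = tokenizer(batch["tokens"], truncation=True, is_split_into_words=True)
    all_labels = []
    for i, tags in enumerate(batch["ner_tags"]):
        previous = None
        row = []
        for word_id in enc.word_ids(batch_index=i):
            if word_id is None or word_id == previous:
                row.append(-100)
            else:
                row.append(tags[word_id])
            previous = word_id
        all_labels.append(row)
    enc["labels"] = all_labels
    return enc

tokenized = dataset.map(tokenize_and_align_labels, batched=True)

args = TrainingArguments(
    output_dir="electramed-small-JNLPBA-ner",
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    num_train_epochs=10,
    lr_scheduler_type="linear",
    seed=42,
    evaluation_strategy="epoch",  # assumption: one evaluation per epoch, as in the table
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized["train"],
    eval_dataset=tokenized["validation"],
    tokenizer=tokenizer,
    data_collator=DataCollatorForTokenClassification(tokenizer),
)
trainer.train()
```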
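
The precision, recall, and F1 columns of the results table are consistent with entity-level evaluation as produced by `seqeval` (accuracy is token-level). Continuing from the sketch above, a `compute_metrics` function of the following shape, passed to the `Trainer`, would produce those four columns; it assumes the `seqeval` package is installed and reuses `label_list` from the previous block.

```python
import numpy as np
import evaluate

seqeval = evaluate.load("seqeval")

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    # Drop positions labelled -100 (special and continuation sub-tokens).
    true_tags = [
        [label_list[l] for l in row if l != -100] for row in labels
    ]
    pred_tags = [
        [label_list[p] for p, l in zip(prow, lrow) if l != -100]
        for prow, lrow in zip(preds, labels)
    ]
    out = seqeval.compute(predictions=pred_tags, references=true_tags)
    return {
        "precision": out["overall_precision"],
        "recall": out["overall_recall"],
        "f1": out["overall_f1"],
        "accuracy": out["overall_accuracy"],
    }
```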