---
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert_chinese_mc_base-BioNER-EN-ZH
  results: []
---

# bert_chinese_mc_base-BioNER-EN-ZH

This model is a fine-tuned version of [StivenLancheros/bert_chinese_mc_base-BioNER-EN](https://huggingface.co/StivenLancheros/bert_chinese_mc_base-BioNER-EN) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3611
- Precision: 0.6967
- Recall: 0.7980
- F1: 0.7439
- Accuracy: 0.9215

## Model description

More information needed

## Intended uses & limitations

More information needed. A minimal, hedged usage sketch is included at the end of this card.

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (they are restated as a `TrainingArguments` sketch at the end of this card):
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20

### Training results

| Training Loss | Epoch | Step  | Validation Loss | Precision | Recall | F1     | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.4895        | 1.0   | 680   | 0.6248          | 0.4389    | 0.6486 | 0.5235 | 0.8301   |
| 0.3569        | 2.0   | 1360  | 0.6207          | 0.4931    | 0.7204 | 0.5854 | 0.8481   |
| 0.2778        | 3.0   | 2040  | 0.4876          | 0.5723    | 0.7371 | 0.6443 | 0.8864   |
| 0.2558        | 4.0   | 2720  | 0.4496          | 0.5882    | 0.7446 | 0.6572 | 0.8892   |
| 0.2363        | 5.0   | 3400  | 0.4674          | 0.5845    | 0.7619 | 0.6615 | 0.8892   |
| 0.2129        | 6.0   | 4080  | 0.4311          | 0.6148    | 0.7674 | 0.6827 | 0.9005   |
| 0.2019        | 7.0   | 4760  | 0.3930          | 0.6428    | 0.7710 | 0.7011 | 0.9103   |
| 0.1912        | 8.0   | 5440  | 0.4031          | 0.6438    | 0.7815 | 0.7060 | 0.9095   |
| 0.1741        | 9.0   | 6120  | 0.3914          | 0.6506    | 0.7765 | 0.7080 | 0.9101   |
| 0.1727        | 10.0  | 6800  | 0.3808          | 0.6530    | 0.7814 | 0.7114 | 0.9117   |
| 0.1625        | 11.0  | 7480  | 0.4047          | 0.6545    | 0.7828 | 0.7129 | 0.9106   |
| 0.1546        | 12.0  | 8160  | 0.3803          | 0.6543    | 0.7849 | 0.7137 | 0.9115   |
| 0.1515        | 13.0  | 8840  | 0.3635          | 0.6828    | 0.7979 | 0.7359 | 0.9217   |
| 0.1415        | 14.0  | 9520  | 0.3872          | 0.6718    | 0.7962 | 0.7287 | 0.9160   |
| 0.1425        | 15.0  | 10200 | 0.3699          | 0.6879    | 0.7939 | 0.7371 | 0.9193   |
| 0.1327        | 16.0  | 10880 | 0.3762          | 0.6869    | 0.7977 | 0.7382 | 0.9184   |
| 0.1307        | 17.0  | 11560 | 0.3732          | 0.6822    | 0.8013 | 0.7369 | 0.9181   |
| 0.1309        | 18.0  | 12240 | 0.3629          | 0.6956    | 0.7970 | 0.7428 | 0.9208   |
| 0.1268        | 19.0  | 12920 | 0.3643          | 0.6930    | 0.7990 | 0.7423 | 0.9210   |
| 0.1257        | 20.0  | 13600 | 0.3611          | 0.6967    | 0.7980 | 0.7439 | 0.9215   |

### Framework versions

- Transformers 4.27.2
- PyTorch 1.13.0+cu117
- Datasets 2.7.1
- Tokenizers 0.13.2
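### Hyperparameters as TrainingArguments (sketch)

The hyperparameters listed under "Training hyperparameters" map directly onto `transformers.TrainingArguments`. The sketch below shows only that mapping; dataset loading, tokenization, and the `Trainer` setup are omitted because they are not documented in this card, the output directory name is a placeholder, and `evaluation_strategy="epoch"` is an assumption based on the per-epoch validation results above.

```python
# Sketch of the reported hyperparameters expressed as TrainingArguments
# (Transformers 4.27.x). Not the original training script.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="bert_chinese_mc_base-BioNER-EN-ZH",  # placeholder path
    learning_rate=3e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    num_train_epochs=20,
    seed=42,
    lr_scheduler_type="linear",
    adam_beta1=0.9,               # optimizer: Adam with betas=(0.9, 0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-8,            # and epsilon=1e-08
    evaluation_strategy="epoch",  # assumption: metrics above are logged once per epoch
)
```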
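## Example usage (sketch)

Because "Intended uses & limitations" is not yet filled in, the snippet below is only a minimal, untested sketch of how a fine-tuned token-classification checkpoint like this one is typically loaded with `transformers`. The repository id is assumed from the card title, and the example sentence is purely illustrative.

```python
# Minimal sketch: loading the checkpoint for biomedical NER with the
# token-classification pipeline. The repo id is assumed from the card title.
from transformers import AutoTokenizer, AutoModelForTokenClassification, pipeline

model_id = "StivenLancheros/bert_chinese_mc_base-BioNER-EN-ZH"  # assumed repository id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForTokenClassification.from_pretrained(model_id)

# aggregation_strategy="simple" merges word-piece predictions into entity spans
ner = pipeline(
    "token-classification",
    model=model,
    tokenizer=tokenizer,
    aggregation_strategy="simple",
)

print(ner("Aspirin inhibits cyclooxygenase activity in human cells."))
```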