---
base_model: microsoft/mdeberta-v3-base
library_name: transformers
license: mit
metrics:
- precision
- recall
- f1
- accuracy
tags:
- generated_from_trainer
model-index:
- name: scenario-non-kd-pre-ner-full-mdeberta_data-univner_half66
  results: []
---

# scenario-non-kd-pre-ner-full-mdeberta_data-univner_half66

This model is a fine-tuned version of [microsoft/mdeberta-v3-base](https://huggingface.co/microsoft/mdeberta-v3-base) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1996
- Precision: 0.7561
- Recall: 0.7723
- F1: 0.7641
- Accuracy: 0.9755

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 66
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
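For reference, a minimal sketch of how the hyperparameters above map onto `transformers.TrainingArguments`. This is not the authors' actual training script: the `output_dir` name is hypothetical, the AdamW variant is the `Trainer` default (the card only says "Adam"), and the 500-step evaluation cadence is inferred from the results table below.

```python
# Minimal sketch, not the authors' script: the card's hyperparameters mapped
# onto TrainingArguments. output_dir is hypothetical; the 500-step eval and
# logging cadence is inferred from the step column of the results table.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="scenario-non-kd-pre-ner-full-mdeberta_data-univner_half66",
    learning_rate=3e-05,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=66,
    adam_beta1=0.9,            # betas=(0.9, 0.999) from the list above
    adam_beta2=0.999,
    adam_epsilon=1e-08,
    lr_scheduler_type="linear",
    num_train_epochs=30,
    eval_strategy="steps",     # the table below evaluates every 500 steps
    eval_steps=500,
    logging_steps=500,
)
```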
### Training results

| Training Loss | Epoch   | Step  | Validation Loss | Precision | Recall | F1     | Accuracy |
|:-------------:|:-------:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.304         | 0.5828  | 500   | 0.1829          | 0.3627    | 0.3414 | 0.3517 | 0.9389   |
| 0.155         | 1.1655  | 1000  | 0.1258          | 0.5315    | 0.6722 | 0.5937 | 0.9577   |
| 0.0973        | 1.7483  | 1500  | 0.1032          | 0.6320    | 0.7086 | 0.6681 | 0.9670   |
| 0.0707        | 2.3310  | 2000  | 0.1055          | 0.6478    | 0.7443 | 0.6927 | 0.9683   |
| 0.0584        | 2.9138  | 2500  | 0.1007          | 0.6755    | 0.7436 | 0.7079 | 0.9698   |
| 0.0431        | 3.4965  | 3000  | 0.1186          | 0.6613    | 0.7552 | 0.7051 | 0.9686   |
| 0.0392        | 4.0793  | 3500  | 0.1087          | 0.6946    | 0.7627 | 0.7270 | 0.9718   |
| 0.0273        | 4.6620  | 4000  | 0.1095          | 0.7451    | 0.7286 | 0.7367 | 0.9736   |
| 0.0233        | 5.2448  | 4500  | 0.1259          | 0.6876    | 0.7772 | 0.7297 | 0.9719   |
| 0.0189        | 5.8275  | 5000  | 0.1201          | 0.7222    | 0.7526 | 0.7371 | 0.9730   |
| 0.0161        | 6.4103  | 5500  | 0.1339          | 0.7334    | 0.7342 | 0.7338 | 0.9734   |
| 0.0153        | 6.9930  | 6000  | 0.1338          | 0.7226    | 0.7533 | 0.7376 | 0.9737   |
| 0.0109        | 7.5758  | 6500  | 0.1320          | 0.7345    | 0.7634 | 0.7486 | 0.9742   |
| 0.0108        | 8.1585  | 7000  | 0.1427          | 0.7189    | 0.7617 | 0.7397 | 0.9728   |
| 0.0089        | 8.7413  | 7500  | 0.1423          | 0.7268    | 0.7647 | 0.7453 | 0.9738   |
| 0.0075        | 9.3240  | 8000  | 0.1471          | 0.7332    | 0.7638 | 0.7482 | 0.9740   |
| 0.0071        | 9.9068  | 8500  | 0.1501          | 0.7502    | 0.7466 | 0.7484 | 0.9744   |
| 0.0064        | 10.4895 | 9000  | 0.1558          | 0.7133    | 0.7813 | 0.7458 | 0.9734   |
| 0.0058        | 11.0723 | 9500  | 0.1495          | 0.7514    | 0.7599 | 0.7556 | 0.9750   |
| 0.0047        | 11.6550 | 10000 | 0.1632          | 0.7146    | 0.7627 | 0.7379 | 0.9727   |
| 0.0043        | 12.2378 | 10500 | 0.1707          | 0.7259    | 0.7699 | 0.7472 | 0.9740   |
| 0.0039        | 12.8205 | 11000 | 0.1648          | 0.7415    | 0.7608 | 0.7510 | 0.9742   |
| 0.0041        | 13.4033 | 11500 | 0.1717          | 0.7247    | 0.7693 | 0.7463 | 0.9739   |
| 0.0036        | 13.9860 | 12000 | 0.1711          | 0.7251    | 0.7687 | 0.7463 | 0.9737   |
| 0.0029        | 14.5688 | 12500 | 0.1757          | 0.7275    | 0.7751 | 0.7505 | 0.9741   |
| 0.0026        | 15.1515 | 13000 | 0.1817          | 0.7527    | 0.7563 | 0.7545 | 0.9744   |
| 0.0027        | 15.7343 | 13500 | 0.1779          | 0.7534    | 0.7554 | 0.7544 | 0.9750   |
| 0.0028        | 16.3170 | 14000 | 0.1826          | 0.7505    | 0.7578 | 0.7541 | 0.9747   |
| 0.0026        | 16.8998 | 14500 | 0.1793          | 0.7573    | 0.7681 | 0.7627 | 0.9755   |
| 0.0022        | 17.4825 | 15000 | 0.1803          | 0.7460    | 0.7722 | 0.7589 | 0.9753   |
| 0.002         | 18.0653 | 15500 | 0.1858          | 0.7331    | 0.7807 | 0.7561 | 0.9746   |
| 0.0018        | 18.6480 | 16000 | 0.1875          | 0.7451    | 0.7624 | 0.7536 | 0.9740   |
| 0.0018        | 19.2308 | 16500 | 0.1896          | 0.7484    | 0.7661 | 0.7572 | 0.9749   |
| 0.0014        | 19.8135 | 17000 | 0.1862          | 0.7592    | 0.7689 | 0.7640 | 0.9755   |
| 0.0018        | 20.3963 | 17500 | 0.1936          | 0.7559    | 0.7567 | 0.7563 | 0.9749   |
| 0.0014        | 20.9790 | 18000 | 0.1908          | 0.7514    | 0.7680 | 0.7596 | 0.9751   |
| 0.0012        | 21.5618 | 18500 | 0.1956          | 0.7464    | 0.7692 | 0.7576 | 0.9751   |
| 0.0015        | 22.1445 | 19000 | 0.1986          | 0.7352    | 0.7751 | 0.7546 | 0.9746   |
| 0.0012        | 22.7273 | 19500 | 0.1936          | 0.7277    | 0.7804 | 0.7531 | 0.9746   |
| 0.001         | 23.3100 | 20000 | 0.1975          | 0.7358    | 0.7781 | 0.7564 | 0.9749   |
| 0.0011        | 23.8928 | 20500 | 0.1956          | 0.7485    | 0.7749 | 0.7615 | 0.9754   |
| 0.001         | 24.4755 | 21000 | 0.1950          | 0.7522    | 0.7728 | 0.7624 | 0.9754   |
| 0.0009        | 25.0583 | 21500 | 0.1958          | 0.7522    | 0.7713 | 0.7616 | 0.9755   |
| 0.0006        | 25.6410 | 22000 | 0.1998          | 0.7454    | 0.7741 | 0.7595 | 0.9751   |
| 0.0006        | 26.2238 | 22500 | 0.2026          | 0.7496    | 0.7725 | 0.7609 | 0.9753   |
| 0.0008        | 26.8065 | 23000 | 0.1991          | 0.7609    | 0.7638 | 0.7623 | 0.9755   |
| 0.0006        | 27.3893 | 23500 | 0.1962          | 0.7547    | 0.7772 | 0.7658 | 0.9758   |
| 0.0006        | 27.9720 | 24000 | 0.1995          | 0.7551    | 0.7728 | 0.7638 | 0.9755   |
| 0.0005        | 28.5548 | 24500 | 0.2003          | 0.7538    | 0.7738 | 0.7636 | 0.9754   |
| 0.0006        | 29.1375 | 25000 | 0.1996          | 0.7574    | 0.7694 | 0.7634 | 0.9755   |
| 0.0005        | 29.7203 | 25500 | 0.1996          | 0.7561    | 0.7723 | 0.7641 | 0.9755   |

### Framework versions

- Transformers 4.44.2
- Pytorch 2.1.1+cu121
- Datasets 2.14.5
- Tokenizers 0.19.1
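Since this is a token-classification (NER) checkpoint, a minimal inference sketch with the `transformers` pipeline follows. `<owner>` is a placeholder: this card does not state the Hub namespace the model is published under, so replace it with the actual namespace or point `model` at a local checkpoint directory.

```python
# Hedged usage sketch: load the fine-tuned checkpoint for NER inference.
# "<owner>" is a placeholder for the Hub namespace, which this card does
# not state; a local path to the saved model directory also works.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="<owner>/scenario-non-kd-pre-ner-full-mdeberta_data-univner_half66",
    aggregation_strategy="simple",  # merge sub-word pieces into whole entity spans
)

print(ner("Ada Lovelace was born in London."))
```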