# UNER_subword_tk_en_lora_alpha_1024_drop_0.3_rank_512_seed_42
This model is a LoRA fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the universalner/universal_ner en_ewt dataset. It achieves the following results on the evaluation set:
- Loss: 0.0633
- Precision: 0.7732
- Recall: 0.8292
- F1: 0.8002
- Accuracy: 0.9844
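
A minimal inference sketch, assuming the repository can be loaded directly with `transformers` (the example sentence is illustrative):

```python
from transformers import AutoModelForTokenClassification, AutoTokenizer, pipeline

model_id = "Darius07/UNER_subword_tk_en_lora_alpha_1024_drop_0.3_rank_512_seed_42"

# Assumes the repo stores full (merged) model weights.
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForTokenClassification.from_pretrained(model_id)

ner = pipeline(
    "token-classification",
    model=model,
    tokenizer=tokenizer,
    aggregation_strategy="simple",  # merge subword pieces into whole entities
)
print(ner("Barack Obama visited Paris last week."))
```

If the repository ships only a LoRA adapter rather than merged weights, load xlm-roberta-base as the base `AutoModelForTokenClassification` and attach the adapter with `peft.PeftModel.from_pretrained` instead.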
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
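
The card itself provides no further detail, but the dataset named above is public on the Hugging Face Hub. A minimal loading sketch, assuming the `en_ewt` configuration of `universalner/universal_ner`:

```python
from datasets import load_dataset

# English EWT split of Universal NER; depending on the dataset version,
# load_dataset may additionally require trust_remote_code=True.
ds = load_dataset("universalner/universal_ner", "en_ewt")
print(ds)               # DatasetDict with train/validation/test splits
print(ds["train"][0])   # one sentence, typically "tokens" plus "ner_tags" columns
```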
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 35.0
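
As a sketch, these settings map onto `TrainingArguments`, and the LoRA values in the model name (rank 512, alpha 1024, dropout 0.3) map onto a PEFT `LoraConfig`. The `output_dir` and `target_modules` below are assumptions, not recorded in this card; the Adam betas and epsilon listed above are the `TrainingArguments` defaults.

```python
from transformers import TrainingArguments
from peft import LoraConfig, TaskType

# Trainer settings mirroring the list above; output_dir is illustrative.
training_args = TrainingArguments(
    output_dir="uner_en_ewt_lora",
    learning_rate=1e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=35,
)

# LoRA settings taken from the model name; target_modules is assumed,
# using the attention projections typical for RoBERTa-style models.
lora_config = LoraConfig(
    task_type=TaskType.TOKEN_CLS,
    r=512,
    lora_alpha=1024,
    lora_dropout=0.3,
    target_modules=["query", "value"],  # assumption
)
```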
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 392 | 0.1362 | 0.2922 | 0.3903 | 0.3342 | 0.9569 |
| 0.2046 | 2.0 | 784 | 0.0889 | 0.5868 | 0.6822 | 0.6309 | 0.9745 |
| 0.085 | 3.0 | 1176 | 0.0772 | 0.6687 | 0.7940 | 0.7260 | 0.9778 |
| 0.0591 | 4.0 | 1568 | 0.0692 | 0.7085 | 0.7950 | 0.7493 | 0.9802 |
| 0.0591 | 5.0 | 1960 | 0.0692 | 0.6894 | 0.8251 | 0.7512 | 0.9791 |
| 0.0496 | 6.0 | 2352 | 0.0664 | 0.6937 | 0.8157 | 0.7498 | 0.9791 |
| 0.0448 | 7.0 | 2744 | 0.0671 | 0.7007 | 0.8313 | 0.7604 | 0.9797 |
| 0.0409 | 8.0 | 3136 | 0.0674 | 0.7200 | 0.8147 | 0.7644 | 0.9814 |
| 0.0388 | 9.0 | 3528 | 0.0635 | 0.7306 | 0.8478 | 0.7849 | 0.9816 |
| 0.0388 | 10.0 | 3920 | 0.0620 | 0.7481 | 0.8209 | 0.7828 | 0.9832 |
| 0.0357 | 11.0 | 4312 | 0.0586 | 0.7758 | 0.8240 | 0.7992 | 0.9844 |
| 0.0333 | 12.0 | 4704 | 0.0611 | 0.7606 | 0.8354 | 0.7963 | 0.9840 |
| 0.0323 | 13.0 | 5096 | 0.0601 | 0.7819 | 0.8240 | 0.8024 | 0.9844 |
| 0.0323 | 14.0 | 5488 | 0.0638 | 0.7203 | 0.8292 | 0.7709 | 0.9812 |
| 0.0303 | 15.0 | 5880 | 0.0600 | 0.7737 | 0.8354 | 0.8034 | 0.9841 |
| 0.0293 | 16.0 | 6272 | 0.0602 | 0.7703 | 0.8333 | 0.8006 | 0.9841 |
| 0.0271 | 17.0 | 6664 | 0.0609 | 0.7634 | 0.8416 | 0.8006 | 0.9841 |
| 0.0269 | 18.0 | 7056 | 0.0641 | 0.7569 | 0.8478 | 0.7998 | 0.9835 |
| 0.0269 | 19.0 | 7448 | 0.0594 | 0.7793 | 0.8261 | 0.8020 | 0.9849 |
| 0.0263 | 20.0 | 7840 | 0.0608 | 0.7873 | 0.8199 | 0.8032 | 0.9850 |
| 0.025 | 21.0 | 8232 | 0.0606 | 0.7812 | 0.8240 | 0.8020 | 0.9850 |
| 0.0236 | 22.0 | 8624 | 0.0639 | 0.7558 | 0.8364 | 0.7941 | 0.9839 |
| 0.0228 | 23.0 | 9016 | 0.0620 | 0.7668 | 0.8375 | 0.8006 | 0.9845 |
| 0.0228 | 24.0 | 9408 | 0.0612 | 0.7647 | 0.8344 | 0.7980 | 0.9842 |
| 0.0229 | 25.0 | 9800 | 0.0618 | 0.7584 | 0.8385 | 0.7965 | 0.9839 |
| 0.0227 | 26.0 | 10192 | 0.0631 | 0.7678 | 0.8385 | 0.8016 | 0.9842 |
| 0.0216 | 27.0 | 10584 | 0.0628 | 0.7883 | 0.8364 | 0.8117 | 0.9850 |
| 0.0216 | 28.0 | 10976 | 0.0611 | 0.7765 | 0.8344 | 0.8044 | 0.9849 |
| 0.0203 | 29.0 | 11368 | 0.0615 | 0.7755 | 0.8406 | 0.8068 | 0.9847 |
| 0.02 | 30.0 | 11760 | 0.0629 | 0.7743 | 0.8344 | 0.8032 | 0.9847 |
| 0.0197 | 31.0 | 12152 | 0.0620 | 0.7763 | 0.8333 | 0.8038 | 0.9843 |
| 0.0197 | 32.0 | 12544 | 0.0633 | 0.7750 | 0.8271 | 0.8002 | 0.9845 |
| 0.0197 | 33.0 | 12936 | 0.0631 | 0.7813 | 0.8323 | 0.8060 | 0.9845 |
| 0.0192 | 34.0 | 13328 | 0.0629 | 0.7768 | 0.8323 | 0.8036 | 0.9845 |
| 0.0188 | 35.0 | 13720 | 0.0633 | 0.7732 | 0.8292 | 0.8002 | 0.9844 |
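
The card does not record how precision, recall, and F1 were computed; entity-level scoring with `seqeval` via the `evaluate` library is the usual choice for token classification, sketched here with a toy label sequence:

```python
import evaluate

# Entity-level NER scoring; the toy predictions/references are illustrative.
seqeval = evaluate.load("seqeval")

predictions = [["O", "B-PER", "I-PER", "O", "B-LOC"]]
references  = [["O", "B-PER", "I-PER", "O", "B-LOC"]]

scores = seqeval.compute(predictions=predictions, references=references)
print(scores["overall_precision"], scores["overall_recall"],
      scores["overall_f1"], scores["overall_accuracy"])
```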
### Framework versions
- Transformers 4.41.1
- Pytorch 2.3.0+cu121
- Datasets 2.18.0
- Tokenizers 0.19.1