CNEC_2_0_Supertypes_Czert-B-base-cased
This model is a fine-tuned version of UWB-AIR/Czert-B-base-cased on the cnec dataset (Czech Named Entity Corpus 2.0, supertype labels). It achieves the following results on the evaluation set (a usage sketch follows the metrics):
- Loss: 0.2429
- Precision: 0.8320
- Recall: 0.8860
- F1: 0.8582
- Accuracy: 0.9590
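Since this is a token-classification (NER) checkpoint, it can be loaded with the standard transformers pipeline. The snippet below is a minimal sketch, not part of the original card: the repository id and the example sentence are illustrative assumptions, and the actual Hub path of this model may differ.

```python
from transformers import AutoModelForTokenClassification, AutoTokenizer, pipeline

# Hypothetical repository id; replace with the actual Hub path of this checkpoint.
model_id = "CNEC_2_0_Supertypes_Czert-B-base-cased"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForTokenClassification.from_pretrained(model_id)

# aggregation_strategy="simple" merges sub-word predictions into whole entity spans.
ner = pipeline(
    "token-classification",
    model=model,
    tokenizer=tokenizer,
    aggregation_strategy="simple",
)

print(ner("Václav Havel se narodil v Praze."))
```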
Model description
More information needed
Intended uses & limitations
More information needed
Training and evaluation data
More information needed
Training procedure
Training hyperparameters
The following hyperparameters were used during training (a configuration sketch follows the list):
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 25
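As a rough guide, the hyperparameters above map onto transformers.TrainingArguments as sketched below. This is an assumption that the standard Trainer API was used; only the listed values come from the card, and the output directory and evaluation cadence are illustrative placeholders.

```python
from transformers import TrainingArguments

# Hedged reconstruction of the reported hyperparameters; everything not listed
# in the card (output_dir, evaluation cadence) is an illustrative assumption.
training_args = TrainingArguments(
    output_dir="CNEC_2_0_Supertypes_Czert-B-base-cased",  # assumed name
    learning_rate=2e-5,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=25,
    evaluation_strategy="epoch",  # assumption: the results table reports one evaluation per epoch
)
```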
Training results
Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
---|---|---|---|---|---|---|---|
No log | 1.0 | 113 | 0.2231 | 0.7053 | 0.7472 | 0.7256 | 0.9363 |
No log | 2.0 | 226 | 0.1791 | 0.7584 | 0.8170 | 0.7866 | 0.9490 |
No log | 3.0 | 339 | 0.1746 | 0.7742 | 0.8385 | 0.8051 | 0.9508 |
No log | 4.0 | 452 | 0.1783 | 0.7836 | 0.8509 | 0.8158 | 0.9512 |
0.2584 | 5.0 | 565 | 0.1742 | 0.7902 | 0.8558 | 0.8217 | 0.9541 |
0.2584 | 6.0 | 678 | 0.1653 | 0.8044 | 0.8645 | 0.8334 | 0.9565 |
0.2584 | 7.0 | 791 | 0.1694 | 0.8103 | 0.8715 | 0.8398 | 0.9579 |
0.2584 | 8.0 | 904 | 0.1838 | 0.8001 | 0.8678 | 0.8326 | 0.9556 |
0.0804 | 9.0 | 1017 | 0.1804 | 0.8204 | 0.8753 | 0.8469 | 0.9571 |
0.0804 | 10.0 | 1130 | 0.1918 | 0.8196 | 0.8761 | 0.8469 | 0.9576 |
0.0804 | 11.0 | 1243 | 0.2018 | 0.8169 | 0.8790 | 0.8468 | 0.9578 |
0.0804 | 12.0 | 1356 | 0.2067 | 0.8220 | 0.8815 | 0.8507 | 0.9579 |
0.0804 | 13.0 | 1469 | 0.2060 | 0.8285 | 0.8876 | 0.8570 | 0.9585 |
0.049 | 14.0 | 1582 | 0.2084 | 0.8271 | 0.8815 | 0.8534 | 0.9589 |
0.049 | 15.0 | 1695 | 0.2171 | 0.8257 | 0.8806 | 0.8523 | 0.9585 |
0.049 | 16.0 | 1808 | 0.2246 | 0.8307 | 0.8839 | 0.8565 | 0.9586 |
0.049 | 17.0 | 1921 | 0.2225 | 0.8288 | 0.8881 | 0.8574 | 0.9590 |
0.0338 | 18.0 | 2034 | 0.2272 | 0.8351 | 0.8889 | 0.8611 | 0.9598 |
0.0338 | 19.0 | 2147 | 0.2307 | 0.8337 | 0.8864 | 0.8593 | 0.9593 |
0.0338 | 20.0 | 2260 | 0.2387 | 0.8302 | 0.8864 | 0.8574 | 0.9588 |
0.0338 | 21.0 | 2373 | 0.2387 | 0.8338 | 0.8868 | 0.8595 | 0.9585 |
0.0338 | 22.0 | 2486 | 0.2400 | 0.8343 | 0.8881 | 0.8603 | 0.9592 |
0.0261 | 23.0 | 2599 | 0.2422 | 0.8319 | 0.8872 | 0.8587 | 0.9590 |
0.0261 | 24.0 | 2712 | 0.2431 | 0.8317 | 0.8860 | 0.8580 | 0.9589 |
0.0261 | 25.0 | 2825 | 0.2429 | 0.8320 | 0.8860 | 0.8582 | 0.9590 |
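The precision, recall, and F1 columns above are typical of entity-level scoring for token classification. The card does not state which scorer was used; the sketch below shows one common way to obtain such numbers with the seqeval package (an assumption, not confirmed by the card), using toy IOB2 label sequences rather than the real CNEC validation split.

```python
from seqeval.metrics import accuracy_score, f1_score, precision_score, recall_score

# Toy gold/predicted label sequences in IOB2 format; the entity type letters
# are illustrative, not the exact CNEC supertype tag set.
references = [["B-P", "I-P", "O", "O", "B-G"]]
predictions = [["B-P", "I-P", "O", "O", "O"]]

print("precision", precision_score(references, predictions))
print("recall", recall_score(references, predictions))
print("f1", f1_score(references, predictions))
print("accuracy", accuracy_score(references, predictions))
```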
Framework versions
- Transformers 4.36.2
- PyTorch 2.1.2+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
Evaluation results
- Precision on the cnec validation set (self-reported): 0.832
- Recall on the cnec validation set (self-reported): 0.886
- F1 on the cnec validation set (self-reported): 0.858
- Accuracy on the cnec validation set (self-reported): 0.959