# autotrain-radesky-lab-span-v1
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/google-bert/bert-base-uncased) on the datasaur-dev/datasaur-MTFiZjUwM2Q-ZWJiZDRmNGI dataset. It achieves the following results on the evaluation set:
- Loss: 0.2518
- Precision: 0.7854
- Recall: 0.8385
- F1: 0.8111
- Accuracy: 0.9710
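The card ships without a usage snippet; below is a minimal inference sketch, assuming the checkpoint loads with the standard Transformers token-classification pipeline. The model ID is the one this card describes; the example sentence and printed fields are illustrative, and the label set is not documented here.

```python
from transformers import pipeline

# Load the fine-tuned span-tagging checkpoint; aggregation_strategy="simple"
# merges word-piece predictions back into contiguous spans.
tagger = pipeline(
    "token-classification",
    model="datasaur-dev/autotrain-radesky-lab-span-v1",
    aggregation_strategy="simple",
)

# Hypothetical input; the card does not document the target domain.
for span in tagger("The child used the tablet for thirty minutes before bed."):
    print(span["entity_group"], span["word"], round(span["score"], 3))
```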
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (adamw_torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 25
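For orientation, the list above maps roughly onto a Hugging Face `TrainingArguments` configuration. This is a sketch, not the exact AutoTrain invocation; `output_dir` is a placeholder, and dataset loading and the label set are omitted because they are not documented in this card.

```python
from transformers import TrainingArguments

# Mirrors the hyperparameters listed above.
args = TrainingArguments(
    output_dir="autotrain-radesky-lab-span-v1",  # placeholder path
    learning_rate=1e-4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    optim="adamw_torch",          # betas=(0.9, 0.999), eps=1e-8 are the defaults
    lr_scheduler_type="linear",
    num_train_epochs=25,
)
```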
### Training results
| Training Loss | Epoch | Step  | Validation Loss | Precision | Recall | F1     | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log        | 1.0   | 455   | 0.2565          | 0.5114    | 0.4688 | 0.4891 | 0.9440   |
| 0.2875        | 2.0   | 910   | 0.3292          | 0.2957    | 0.2865 | 0.2910 | 0.9238   |
| 0.1173        | 3.0   | 1365  | 0.1931          | 0.4347    | 0.7448 | 0.5489 | 0.9531   |
| 0.0945        | 4.0   | 1820  | 0.1780          | 0.5147    | 0.7292 | 0.6034 | 0.9578   |
| 0.0559        | 5.0   | 2275  | 0.1924          | 0.5496    | 0.7500 | 0.6344 | 0.9592   |
| 0.0412        | 6.0   | 2730  | 0.1673          | 0.6637    | 0.7708 | 0.7133 | 0.9654   |
| 0.0309        | 7.0   | 3185  | 0.1928          | 0.6400    | 0.7500 | 0.6906 | 0.9635   |
| 0.0231        | 8.0   | 3640  | 0.1938          | 0.6332    | 0.7552 | 0.6888 | 0.9643   |
| 0.0191        | 9.0   | 4095  | 0.1856          | 0.6667    | 0.7812 | 0.7194 | 0.9670   |
| 0.0180        | 10.0  | 4550  | 0.2042          | 0.6610    | 0.8125 | 0.7290 | 0.9659   |
| 0.0138        | 11.0  | 5005  | 0.2254          | 0.6245    | 0.7969 | 0.7002 | 0.9649   |
| 0.0138        | 12.0  | 5460  | 0.2193          | 0.7318    | 0.8385 | 0.7816 | 0.9693   |
| 0.0104        | 13.0  | 5915  | 0.2287          | 0.6568    | 0.8073 | 0.7243 | 0.9643   |
| 0.0088        | 14.0  | 6370  | 0.2258          | 0.6943    | 0.8281 | 0.7553 | 0.9683   |
| 0.0052        | 15.0  | 6825  | 0.2323          | 0.7537    | 0.7969 | 0.7747 | 0.9677   |
| 0.0091        | 16.0  | 7280  | 0.2226          | 0.7067    | 0.8281 | 0.7626 | 0.9678   |
| 0.0039        | 17.0  | 7735  | 0.2152          | 0.7393    | 0.8125 | 0.7742 | 0.9696   |
| 0.0060        | 18.0  | 8190  | 0.2687          | 0.7340    | 0.7760 | 0.7544 | 0.9672   |
| 0.0024        | 19.0  | 8645  | 0.2464          | 0.7358    | 0.8125 | 0.7723 | 0.9690   |
| 0.0004        | 20.0  | 9100  | 0.2463          | 0.7583    | 0.8333 | 0.7940 | 0.9694   |
| 0.0003        | 21.0  | 9555  | 0.2466          | 0.7805    | 0.8333 | 0.8060 | 0.9700   |
| 0.0010        | 22.0  | 10010 | 0.2514          | 0.7822    | 0.8229 | 0.8020 | 0.9706   |
| 0.0010        | 23.0  | 10465 | 0.2518          | 0.7854    | 0.8385 | 0.8111 | 0.9710   |
| 0.0002        | 24.0  | 10920 | 0.2586          | 0.7833    | 0.8281 | 0.8051 | 0.9705   |
| 0.0002        | 25.0  | 11375 | 0.2650          | 0.7681    | 0.8281 | 0.7970 | 0.9697   |
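The card does not state how the span metrics are computed. Token-classification fine-tunes like this one conventionally report entity-level precision, recall, and F1 via seqeval alongside token-level accuracy; a minimal sketch with hypothetical IOB2 label sequences (the model's actual label set is undocumented):

```python
from seqeval.metrics import precision_score, recall_score, f1_score

# Hypothetical gold and predicted label sequences, one list per sentence.
y_true = [["O", "B-SPAN", "I-SPAN", "O"], ["B-SPAN", "O"]]
y_pred = [["O", "B-SPAN", "I-SPAN", "O"], ["O", "O"]]

print(precision_score(y_true, y_pred))  # entity-level precision
print(recall_score(y_true, y_pred))     # entity-level recall
print(f1_score(y_true, y_pred))         # entity-level F1
```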
### Framework versions
- Transformers 4.48.0
- Pytorch 2.5.1+cu124
- Datasets 2.20.0
- Tokenizers 0.21.0