# model_v1_complete_training_wt_init_48_mini
This model is a fine-tuned version of an unspecified base model on an unspecified dataset. It achieves the following results on the evaluation set:
- Loss: 2.7920
- Accuracy: 0.4992
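If the reported loss is a mean token-level cross-entropy in nats (an assumption; the base model and task are not stated in this card), the final evaluation loss corresponds to a perplexity of roughly 16.3:

```python
import math

eval_loss = 2.7920                 # final validation loss reported above
perplexity = math.exp(eval_loss)   # perplexity = exp(cross-entropy in nats)
print(f"{perplexity:.2f}")         # ≈ 16.31
```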
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 10000
- num_epochs: 25
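The linear scheduler with 10,000 warmup steps ramps the learning rate from 0 up to 1e-05, then decays it linearly back to 0 over the remaining steps. A minimal sketch of that schedule (the total step count of ~3.05M is an estimate from 25 epochs at roughly 122,000 steps per epoch, inferred from the results table; the exact value is not stated):

```python
def linear_warmup_lr(step: int,
                     base_lr: float = 1e-5,
                     warmup_steps: int = 10_000,
                     total_steps: int = 3_050_000) -> float:
    """Learning rate at a given optimizer step under a linear warmup/decay schedule."""
    if step < warmup_steps:
        # Linear warmup from 0 to base_lr
        return base_lr * step / warmup_steps
    # Linear decay from base_lr down to 0 at total_steps
    return base_lr * max(0.0, (total_steps - step) / (total_steps - warmup_steps))

print(linear_warmup_lr(5_000))    # halfway through warmup: 5e-06
print(linear_warmup_lr(10_000))   # peak: 1e-05
```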
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
5.9411 | 0.25 | 30000 | 5.8833 | 0.1518 |
5.6408 | 0.49 | 60000 | 5.5265 | 0.1908 |
4.5385 | 0.74 | 90000 | 4.3133 | 0.3138 |
4.1015 | 0.98 | 120000 | 3.8996 | 0.3583 |
3.9119 | 1.23 | 150000 | 3.7199 | 0.3783 |
3.7832 | 1.47 | 180000 | 3.6039 | 0.3920 |
3.6686 | 1.72 | 210000 | 3.5057 | 0.4033 |
3.5793 | 1.97 | 240000 | 3.4227 | 0.4137 |
3.5128 | 2.21 | 270000 | 3.3645 | 0.4209 |
3.4597 | 2.46 | 300000 | 3.3219 | 0.4261 |
3.4263 | 2.7 | 330000 | 3.2841 | 0.4312 |
3.3909 | 2.95 | 360000 | 3.2547 | 0.4348 |
3.3635 | 3.2 | 390000 | 3.2284 | 0.4379 |
3.3488 | 3.44 | 420000 | 3.2060 | 0.4409 |
3.3239 | 3.69 | 450000 | 3.1872 | 0.4436 |
3.3062 | 3.93 | 480000 | 3.1660 | 0.4462 |
3.2841 | 4.18 | 510000 | 3.1493 | 0.4485 |
3.2663 | 4.42 | 540000 | 3.1355 | 0.4503 |
3.259 | 4.67 | 570000 | 3.1229 | 0.4519 |
3.2429 | 4.92 | 600000 | 3.1096 | 0.4535 |
3.2234 | 5.16 | 630000 | 3.0947 | 0.4554 |
3.2115 | 5.41 | 660000 | 3.0818 | 0.4573 |
3.2011 | 5.65 | 690000 | 3.0685 | 0.4590 |
3.1898 | 5.9 | 720000 | 3.0464 | 0.4619 |
3.1651 | 6.14 | 750000 | 3.0226 | 0.4658 |
3.1477 | 6.39 | 780000 | 3.0025 | 0.4689 |
3.1276 | 6.64 | 810000 | 2.9838 | 0.4718 |
3.1102 | 6.88 | 840000 | 2.9690 | 0.4740 |
3.1046 | 7.13 | 870000 | 2.9563 | 0.4757 |
3.0817 | 7.37 | 900000 | 2.9477 | 0.4771 |
3.0813 | 7.62 | 930000 | 2.9397 | 0.4785 |
3.0709 | 7.87 | 960000 | 2.9259 | 0.4804 |
3.0528 | 8.11 | 990000 | 2.9208 | 0.4812 |
3.0541 | 8.36 | 1020000 | 2.9089 | 0.4829 |
3.0469 | 8.6 | 1050000 | 2.9015 | 0.4839 |
3.0377 | 8.85 | 1080000 | 2.8960 | 0.4848 |
3.0284 | 9.09 | 1110000 | 2.8859 | 0.4861 |
3.0224 | 9.34 | 1140000 | 2.8819 | 0.4867 |
3.019 | 9.59 | 1170000 | 2.8731 | 0.4878 |
3.0094 | 9.83 | 1200000 | 2.8687 | 0.4885 |
3.0065 | 10.08 | 1230000 | 2.8635 | 0.4893 |
2.9983 | 10.32 | 1260000 | 2.8561 | 0.4900 |
2.9834 | 10.57 | 1290000 | 2.8524 | 0.4907 |
2.9873 | 10.81 | 1320000 | 2.8484 | 0.4911 |
2.978 | 11.06 | 1350000 | 2.8414 | 0.4924 |
2.9709 | 11.31 | 1380000 | 2.8375 | 0.4927 |
2.9695 | 11.55 | 1410000 | 2.8353 | 0.4932 |
2.9607 | 11.8 | 1440000 | 2.8290 | 0.4941 |
2.9636 | 12.04 | 1470000 | 2.8267 | 0.4944 |
2.9584 | 12.29 | 1500000 | 2.8247 | 0.4946 |
2.9546 | 12.54 | 1530000 | 2.8196 | 0.4951 |
2.9544 | 12.78 | 1560000 | 2.8146 | 0.4959 |
2.9486 | 13.03 | 1590000 | 2.8132 | 0.4964 |
2.9413 | 13.27 | 1620000 | 2.8099 | 0.4967 |
2.9381 | 13.52 | 1650000 | 2.8081 | 0.4968 |
2.9389 | 13.76 | 1680000 | 2.8057 | 0.4973 |
2.9374 | 14.01 | 1710000 | 2.8028 | 0.4977 |
2.9341 | 14.26 | 1740000 | 2.8000 | 0.4978 |
2.9275 | 14.5 | 1770000 | 2.7978 | 0.4984 |
2.9319 | 14.75 | 1800000 | 2.7947 | 0.4989 |
2.9304 | 14.99 | 1830000 | 2.7920 | 0.4992 |
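The step and epoch columns imply a rough dataset scale. Assuming the logged epoch values are accurate to two decimals (and noting that the batch size of 48 is per device, with the number of GPUs unstated), the table implies about 122,000 optimizer steps per epoch:

```python
# Estimate steps per epoch from the last logged checkpoint.
final_step = 1_830_000
final_epoch = 14.99                        # as logged; assumed accurate to two decimals
steps_per_epoch = final_step / final_epoch             # ≈ 122,000

per_device_batch_size = 48                 # the number of devices is not stated
examples_per_epoch_per_device = steps_per_epoch * per_device_batch_size  # ≈ 5.86M
print(round(steps_per_epoch), round(examples_per_epoch_per_device))
```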
### Framework versions
- Transformers 4.30.2
- Pytorch 1.14.0a0+410ce96
- Datasets 2.13.0
- Tokenizers 0.13.3