roberta-case-clean-25
This model is a fine-tuned version of roberta-base on an unspecified dataset. It achieves the following results on the evaluation set:
- Loss: 1.1783
- Accuracy: 0.88
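A minimal usage sketch, assuming the checkpoint carries a sequence-classification head (consistent with the accuracy metric above); the model identifier below is illustrative and should point at the actual checkpoint directory or Hub repository:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Illustrative identifier; replace with the real checkpoint path or Hub repo id.
checkpoint = "roberta-case-clean-25"

tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint)

inputs = tokenizer("Example sentence to classify.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

predicted_class = logits.argmax(dim=-1).item()
print(model.config.id2label[predicted_class])
```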
Model description
More information needed
Intended uses & limitations
More information needed
Training and evaluation data
More information needed
Training procedure
Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 224
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 25
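A hedged sketch of how the hyperparameters above map onto `TrainingArguments`; the `output_dir` is illustrative, and the Adam betas and epsilon listed above match the library defaults:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="roberta-case-clean-25",   # illustrative output directory
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=224,
    lr_scheduler_type="linear",
    num_train_epochs=25,
    evaluation_strategy="epoch",          # consistent with the per-epoch results below
)
```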
Training results
Training Loss | Epoch | Step | Validation Loss | Accuracy |
---|---|---|---|---|
No log | 1.0 | 44 | 0.7612 | 0.8867 |
No log | 2.0 | 88 | 1.0991 | 0.8667 |
No log | 3.0 | 132 | 1.0580 | 0.8867 |
No log | 4.0 | 176 | 1.4624 | 0.8533 |
No log | 5.0 | 220 | 1.1583 | 0.88 |
No log | 6.0 | 264 | 1.1773 | 0.88 |
No log | 7.0 | 308 | 1.1942 | 0.8733 |
No log | 8.0 | 352 | 1.2109 | 0.8733 |
No log | 9.0 | 396 | 1.2206 | 0.88 |
No log | 10.0 | 440 | 1.3737 | 0.8667 |
No log | 11.0 | 484 | 1.0994 | 0.8733 |
0.0093 | 12.0 | 528 | 1.2048 | 0.88 |
0.0093 | 13.0 | 572 | 1.1263 | 0.8733 |
0.0093 | 14.0 | 616 | 1.1459 | 0.88 |
0.0093 | 15.0 | 660 | 1.1525 | 0.88 |
0.0093 | 16.0 | 704 | 1.1575 | 0.88 |
0.0093 | 17.0 | 748 | 1.1619 | 0.88 |
0.0093 | 18.0 | 792 | 1.1657 | 0.88 |
0.0093 | 19.0 | 836 | 1.1688 | 0.88 |
0.0093 | 20.0 | 880 | 1.1717 | 0.88 |
0.0093 | 21.0 | 924 | 1.1738 | 0.88 |
0.0093 | 22.0 | 968 | 1.1758 | 0.88 |
0.0034 | 23.0 | 1012 | 1.1772 | 0.88 |
0.0034 | 24.0 | 1056 | 1.1780 | 0.88 |
0.0034 | 25.0 | 1100 | 1.1783 | 0.88 |
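The exact metric code is not documented; a minimal sketch, assuming accuracy is computed with the `evaluate` library and passed to the `Trainer` as a `compute_metrics` callback:

```python
import numpy as np
import evaluate

accuracy = evaluate.load("accuracy")

def compute_metrics(eval_pred):
    # eval_pred is the (logits, labels) pair produced by the Trainer during evaluation.
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=-1)
    return accuracy.compute(predictions=predictions, references=labels)
```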
Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0