# small-mlm-rotten_tomatoes-from-scratch-custom-tokenizer-target-conll2003
This model is a fine-tuned version of muhtasham/small-mlm-rotten_tomatoes-from-scratch-custom-tokenizer on the conll2003 dataset. It achieves the following results on the evaluation set:
- Loss: 0.3065
- Precision: 0.5377
- Recall: 0.6510
- F1: 0.5890
- Accuracy: 0.9171
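Since conll2003 is a named-entity-recognition dataset, the checkpoint can be used for token classification. The snippet below is a minimal inference sketch, assuming the model is published under the repo id given in the title and carries a token-classification head; adjust the id if the checkpoint lives under a different namespace.

```python
from transformers import pipeline

# Assumed repo id, taken from the model card title.
model_id = "muhtasham/small-mlm-rotten_tomatoes-from-scratch-custom-tokenizer-target-conll2003"

# Load the checkpoint as a token-classification (NER) pipeline and
# aggregate sub-word predictions into whole-entity spans.
ner = pipeline("token-classification", model=model_id, aggregation_strategy="simple")

print(ner("Hugging Face is based in New York City."))
```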
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- training_steps: 5000
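As a rough guide to reproduction, the hyperparameters above map onto `transformers.TrainingArguments` roughly as sketched below. This is an assumption-laden sketch, not the original training script; `output_dir` is a placeholder, and the 500-step evaluation interval is inferred from the results table that follows.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="small-mlm-rotten_tomatoes-from-scratch-custom-tokenizer-target-conll2003",  # placeholder
    learning_rate=3e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="constant",
    max_steps=5000,
    # Evaluation/logging every 500 steps, matching the results table below.
    evaluation_strategy="steps",
    eval_steps=500,
    logging_steps=500,
)
```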
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1     | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.6635        | 1.14  | 500  | 0.4842          | 0.2323    | 0.3435 | 0.2771 | 0.8491   |
| 0.438         | 2.28  | 1000 | 0.4003          | 0.2914    | 0.4507 | 0.3540 | 0.8705   |
| 0.36          | 3.42  | 1500 | 0.3711          | 0.3166    | 0.5279 | 0.3959 | 0.8757   |
| 0.3033        | 4.56  | 2000 | 0.3286          | 0.3780    | 0.5332 | 0.4423 | 0.8925   |
| 0.2611        | 5.69  | 2500 | 0.3228          | 0.4091    | 0.5905 | 0.4833 | 0.8962   |
| 0.2225        | 6.83  | 3000 | 0.3063          | 0.4403    | 0.6232 | 0.5160 | 0.9032   |
| 0.1883        | 7.97  | 3500 | 0.2909          | 0.4681    | 0.6255 | 0.5355 | 0.9081   |
| 0.156         | 9.11  | 4000 | 0.2942          | 0.4990    | 0.6425 | 0.5617 | 0.9117   |
| 0.1314        | 10.25 | 4500 | 0.2981          | 0.5124    | 0.6669 | 0.5796 | 0.9143   |
| 0.1122        | 11.39 | 5000 | 0.3065          | 0.5377    | 0.6510 | 0.5890 | 0.9171   |
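The reported F1 is consistent with the harmonic mean of precision and recall (2 × 0.5377 × 0.6510 / (0.5377 + 0.6510) ≈ 0.589). Assuming the scores are entity-level seqeval metrics, as is typical for conll2003 token classification, they can be reproduced with a sketch like the following; the toy predictions and references are placeholders.

```python
import evaluate

# Assumption: the card's precision/recall/F1/accuracy are seqeval scores.
seqeval = evaluate.load("seqeval")

# Placeholder label sequences; in practice these come from model predictions
# and the conll2003 validation labels.
predictions = [["O", "B-PER", "I-PER", "O"]]
references = [["O", "B-PER", "I-PER", "O"]]

results = seqeval.compute(predictions=predictions, references=references)
print(results["overall_precision"], results["overall_recall"],
      results["overall_f1"], results["overall_accuracy"])
```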
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.13.1+cu116
- Datasets 2.8.1.dev0
- Tokenizers 0.13.2