# small-mlm-glue-qnli-target-glue-stsb

This model is a fine-tuned version of [muhtasham/small-mlm-glue-qnli](https://huggingface.co/muhtasham/small-mlm-glue-qnli) on the GLUE STS-B dataset. It achieves the following results on the evaluation set:
- Loss: 0.5609
- Pearson: 0.8733
- Spearmanr: 0.8698
## Model description
More information needed
## Intended uses & limitations
More information needed
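Pending a fuller description, here is a minimal inference sketch. It assumes the model is hosted on the Hub under the card's title and exposes the standard single-logit regression head used for GLUE STS-B, which predicts a 0-5 similarity score for a sentence pair:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Hub id assumed from the card title.
model_id = "muhtasham/small-mlm-glue-qnli-target-glue-stsb"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
model.eval()

# STS-B scores sentence pairs on a 0-5 similarity scale.
inputs = tokenizer(
    "A man is playing a guitar.",
    "A person is playing an instrument.",
    return_tensors="pt",
)
with torch.no_grad():
    score = model(**inputs).logits.squeeze().item()
print(f"similarity score: {score:.2f}")
```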
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training (see the sketch after this list):
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- training_steps: 5000
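For reference, the same configuration mapped onto `transformers` `TrainingArguments`. This is a sketch, not the original training script; `output_dir` is a placeholder, and the Trainer's default AdamW optimizer is assumed to correspond to the "Adam" entry above (the betas and epsilon match its defaults):

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="small-mlm-glue-qnli-target-glue-stsb",  # placeholder
    learning_rate=3e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    lr_scheduler_type="constant",
    max_steps=5000,  # training_steps: 5000
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```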
### Training results
| Training Loss | Epoch | Step | Validation Loss | Pearson | Spearmanr |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:---------:|
| 0.8114        | 2.78  | 500  | 0.6300          | 0.8675  | 0.8683    |
| 0.2884        | 5.56  | 1000 | 0.5861          | 0.8694  | 0.8671    |
| 0.1718        | 8.33  | 1500 | 0.6147          | 0.8656  | 0.8626    |
| 0.1246        | 11.11 | 2000 | 0.6181          | 0.8710  | 0.8673    |
| 0.0994        | 13.89 | 2500 | 0.5773          | 0.8714  | 0.8681    |
| 0.0822        | 16.67 | 3000 | 0.5948          | 0.8729  | 0.8690    |
| 0.0686        | 19.44 | 3500 | 0.5569          | 0.8744  | 0.8706    |
| 0.0602        | 22.22 | 4000 | 0.5671          | 0.8758  | 0.8719    |
| 0.0559        | 25.0  | 4500 | 0.5501          | 0.8728  | 0.8694    |
| 0.0514        | 27.78 | 5000 | 0.5609          | 0.8733  | 0.8698    |
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.13.0+cu116
- Datasets 2.8.1.dev0
- Tokenizers 0.13.2