# hinglish-finetuned
This model is a fine-tuned version of verloop/Hinglish-Bert on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 2.0786
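Since only a loss is reported, this checkpoint was presumably fine-tuned with a masked language modeling objective. A minimal loading sketch under that assumption; the repository id `hinglish-finetuned` is a placeholder for wherever this checkpoint is actually hosted:

```python
from transformers import AutoModelForMaskedLM, AutoTokenizer, pipeline

# Placeholder repository id; substitute the actual checkpoint location.
model_id = "hinglish-finetuned"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForMaskedLM.from_pretrained(model_id)

# Fill-mask inference, assuming the MLM head was kept after fine-tuning.
fill_mask = pipeline("fill-mask", model=model, tokenizer=tokenizer)
print(fill_mask("Yeh movie bahut [MASK] thi."))
```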
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training (a `TrainingArguments` sketch reproducing them follows the list):
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 25
- mixed_precision_training: Native AMP
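A sketch of how the listed hyperparameters map onto the `transformers` `TrainingArguments` API; model and dataset setup are omitted, and `output_dir` is a placeholder. `fp16=True` corresponds to the native AMP mixed-precision training noted above:

```python
from transformers import TrainingArguments

# Mirrors the hyperparameters listed above.
training_args = TrainingArguments(
    output_dir="hinglish-finetuned",   # placeholder output directory
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=25,
    fp16=True,                          # native AMP mixed precision
)
```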
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.3784        | 1.0   | 80   | 3.0527          |
| 3.0398        | 2.0   | 160  | 2.8067          |
| 2.9133        | 3.0   | 240  | 2.7252          |
| 2.7872        | 4.0   | 320  | 2.5783          |
| 2.6205        | 5.0   | 400  | 2.5050          |
| 2.5979        | 6.0   | 480  | 2.4654          |
| 2.5655        | 7.0   | 560  | 2.4091          |
| 2.5412        | 8.0   | 640  | 2.3630          |
| 2.4479        | 9.0   | 720  | 2.3754          |
| 2.3724        | 10.0  | 800  | 2.2860          |
| 2.3842        | 11.0  | 880  | 2.2812          |
| 2.3411        | 12.0  | 960  | 2.2038          |
| 2.2617        | 13.0  | 1040 | 2.1887          |
| 2.3141        | 14.0  | 1120 | 2.1966          |
| 2.2115        | 15.0  | 1200 | 2.1248          |
| 2.2363        | 16.0  | 1280 | 2.1006          |
| 2.2191        | 17.0  | 1360 | 2.1248          |
| 2.1856        | 18.0  | 1440 | 2.0872          |
| 2.2009        | 19.0  | 1520 | 2.0299          |
| 2.2364        | 20.0  | 1600 | 2.0193          |
| 2.1785        | 21.0  | 1680 | 2.0227          |
| 2.1934        | 22.0  | 1760 | 2.0540          |
| 2.1479        | 23.0  | 1840 | 2.0381          |
| 2.0973        | 24.0  | 1920 | 1.9885          |
| 2.1376        | 25.0  | 2000 | 2.0142          |
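If the reported loss is the usual mean token-level cross-entropy from masked language modeling (an assumption; the training objective is not stated in this card), it can be read as a perplexity via `exp(loss)`. A quick check on the headline evaluation loss:

```python
import math

# Perplexity = exp(cross-entropy loss), assuming the loss above is
# mean token-level cross-entropy from masked language modeling.
final_eval_loss = 2.0786
print(math.exp(final_eval_loss))  # ~= 7.99
```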
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.1.0
- Tokenizers 0.12.1