# hbertv1-Massive-intent_48_w_in
This model is a fine-tuned version of gokuls/bert_12_layer_model_v1_complete_training_new_wt_init_48 on the MASSIVE dataset. It achieves the following results on the evaluation set:
- Loss: 0.8264
- Accuracy: 0.8736
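
Since the card does not include a usage snippet, here is a minimal inference sketch. The repo ID `gokuls/hbertv1-Massive-intent_48_w_in` is an assumption inferred from the model name and the base model's namespace; adjust it if the checkpoint is published elsewhere.

```python
from transformers import pipeline

# Assumed repo ID, based on the model name and the base model's namespace.
classifier = pipeline(
    "text-classification",
    model="gokuls/hbertv1-Massive-intent_48_w_in",
)

# MASSIVE utterances are short virtual-assistant commands.
print(classifier("wake me up at nine am on friday"))
```

The pipeline returns the predicted intent label and its score, using the label mapping stored in the checkpoint's config.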
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 33
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
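
For reference, the hyperparameters above map onto a `transformers` `TrainingArguments` roughly as follows. This is a sketch, not the exact training script; `evaluation_strategy="epoch"` is assumed from the per-epoch metrics in the results table below, and the Adam betas/epsilon listed above are the `Trainer` defaults, so they need no explicit arguments.

```python
from transformers import TrainingArguments

# Argument names follow Transformers 4.30 (the version listed in this card).
training_args = TrainingArguments(
    output_dir="hbertv1-Massive-intent_48_w_in",
    learning_rate=5e-5,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    seed=33,
    lr_scheduler_type="linear",
    num_train_epochs=15,
    evaluation_strategy="epoch",  # assumed: the table logs metrics per epoch
)
```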
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.6907        | 1.0   | 180  | 0.8443          | 0.7777   |
| 0.7472        | 2.0   | 360  | 0.6977          | 0.8210   |
| 0.5222        | 3.0   | 540  | 0.6538          | 0.8352   |
| 0.3848        | 4.0   | 720  | 0.6461          | 0.8357   |
| 0.284         | 5.0   | 900  | 0.6195          | 0.8524   |
| 0.2051        | 6.0   | 1080 | 0.6218          | 0.8574   |
| 0.149         | 7.0   | 1260 | 0.6915          | 0.8495   |
| 0.1108        | 8.0   | 1440 | 0.7420          | 0.8574   |
| 0.0806        | 9.0   | 1620 | 0.7204          | 0.8549   |
| 0.0565        | 10.0  | 1800 | 0.7570          | 0.8603   |
| 0.0355        | 11.0  | 1980 | 0.7622          | 0.8677   |
| 0.0246        | 12.0  | 2160 | 0.8344          | 0.8647   |
| 0.0124        | 13.0  | 2340 | 0.8276          | 0.8682   |
| 0.0072        | 14.0  | 2520 | 0.8264          | 0.8736   |
| 0.0042        | 15.0  | 2700 | 0.8328          | 0.8736   |
### Framework versions
- Transformers 4.30.2
- Pytorch 1.14.0a0+410ce96
- Datasets 2.13.0
- Tokenizers 0.13.3