---
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: add_bert_12_layer_model_complete_training_new_96
  results: []
---

# add_bert_12_layer_model_complete_training_new_96

This model is a fine-tuned version of [gokuls/add_bert_12_layer_model_complete_training_new_48](https://huggingface.co/gokuls/add_bert_12_layer_model_complete_training_new_48) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 5.4112
- Accuracy: 0.1893

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a hedged sketch mapping them onto `TrainingArguments` appears after the framework versions below):
- learning_rate: 1e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 10000
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step   | Validation Loss | Accuracy |
|:-------------:|:-----:|:------:|:---------------:|:--------:|
| 5.8144        | 0.08  | 10000  | 5.7474          | 0.1593   |
| 5.7889        | 0.16  | 20000  | 5.7204          | 0.1604   |
| 5.6347        | 0.25  | 30000  | 5.6966          | 0.1623   |
| 5.7138        | 0.33  | 40000  | 5.6725          | 0.1636   |
| 5.6769        | 0.41  | 50000  | 5.6518          | 0.1658   |
| 5.6603        | 0.49  | 60000  | 5.6290          | 0.1686   |
| 5.5852        | 0.57  | 70000  | 5.6076          | 0.1707   |
| 5.6607        | 0.66  | 80000  | 5.5906          | 0.1720   |
| 5.5823        | 0.74  | 90000  | 5.5719          | 0.1739   |
| 5.6124        | 0.82  | 100000 | 5.5543          | 0.1759   |
| 5.6478        | 0.9   | 110000 | 5.5358          | 0.1776   |
| 5.4795        | 0.98  | 120000 | 5.5203          | 0.1787   |
| 5.4557        | 1.07  | 130000 | 5.5028          | 0.1804   |
| 5.5585        | 1.15  | 140000 | 5.4923          | 0.1814   |
| 5.6387        | 1.23  | 150000 | 5.4781          | 0.1825   |
| 5.479         | 1.31  | 160000 | 5.4663          | 0.1833   |
| 5.3951        | 1.39  | 170000 | 5.4512          | 0.1851   |
| 5.5062        | 1.47  | 180000 | 5.4411          | 0.1864   |
| 5.4553        | 1.56  | 190000 | 5.4244          | 0.1881   |
| 5.5461        | 1.64  | 200000 | 5.4112          | 0.1893   |

### Framework versions

- Transformers 4.30.1
- PyTorch 1.14.0a0+410ce96
- Datasets 2.12.0
- Tokenizers 0.13.3
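
### Reproducing the training configuration

The hyperparameters listed above map onto `transformers.TrainingArguments` roughly as sketched below. The original training script is not published with this card, so this is an illustration of the stated settings, not the author's code; `output_dir` is a placeholder.

```python
# Illustrative sketch only: maps the hyperparameters stated in this card
# onto TrainingArguments (Transformers 4.30.x); not the original script.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="add_bert_12_layer_model_complete_training_new_96",  # placeholder
    learning_rate=1e-5,
    per_device_train_batch_size=48,
    per_device_eval_batch_size=48,
    seed=10,
    adam_beta1=0.9,            # optimizer: Adam with betas=(0.9, 0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_steps=10000,
    num_train_epochs=5,
)
```

The `distributed_type: multi-GPU` entry is handled by the launcher (e.g. `torchrun` or `accelerate launch`) rather than by `TrainingArguments` itself.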
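
## How to use

The card does not state the training objective, but the loss and accuracy figures above are consistent with masked-language-model pretraining, so the sketch below loads the checkpoint with a masked-LM head. This is an assumption: if `add_bert` is a custom architecture variant, loading may additionally require `trust_remote_code=True` or the author's model code.

```python
# Hedged usage sketch: assumes a standard BERT-style masked-LM head.
# If the architecture is custom, this may need trust_remote_code=True.
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

repo_id = "gokuls/add_bert_12_layer_model_complete_training_new_96"

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForMaskedLM.from_pretrained(repo_id)
model.eval()

inputs = tokenizer("The capital of France is [MASK].", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Decode the top prediction at the masked position.
mask_pos = (inputs.input_ids == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]
print(tokenizer.decode(logits[0, mask_pos].argmax(dim=-1)))
```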