---
license: mit
base_model: haryoaw/teacher_tweet_eval_sentiment_xlmr_base
tags:
- generated_from_trainer
datasets:
- tweet_eval
metrics:
- accuracy
- f1
model-index:
- name: scenario-kd-from-post-finetune-gold-silver-div-2-data-tweet_eval-sentiment-model
  results: []
---

# scenario-kd-from-post-finetune-gold-silver-div-2-data-tweet_eval-sentiment-model

This model is a fine-tuned version of [haryoaw/teacher_tweet_eval_sentiment_xlmr_base](https://huggingface.co/haryoaw/teacher_tweet_eval_sentiment_xlmr_base) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3344
- Accuracy: 0.734
- F1: 0.7061

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 6969

### Training results

| Training Loss | Epoch | Step  | Validation Loss | Accuracy | F1     |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|
| 1.838         | 0.7   | 1000  | 1.8178          | 0.71     | 0.6779 |
| 1.3953        | 1.4   | 2000  | 1.5675          | 0.72     | 0.6988 |
| 1.2818        | 2.1   | 3000  | 1.6086          | 0.7205   | 0.6892 |
| 1.1321        | 2.81  | 4000  | 1.6146          | 0.7235   | 0.7013 |
| 0.9348        | 3.51  | 5000  | 1.6493          | 0.719    | 0.6902 |
| 0.9139        | 4.21  | 6000  | 1.5044          | 0.719    | 0.7026 |
| 0.8836        | 4.91  | 7000  | 1.4467          | 0.728    | 0.7068 |
| 0.7857        | 5.61  | 8000  | 1.4894          | 0.727    | 0.6971 |
| 0.7366        | 6.31  | 9000  | 1.4895          | 0.7265   | 0.7052 |
| 0.7471        | 7.01  | 10000 | 1.4055          | 0.73     | 0.7089 |
| 0.7067        | 7.71  | 11000 | 1.4671          | 0.729    | 0.7055 |
| 0.6668        | 8.42  | 12000 | 1.4966          | 0.729    | 0.7011 |
| 0.6608        | 9.12  | 13000 | 1.4326          | 0.722    | 0.7006 |
| 0.6371        | 9.82  | 14000 | 1.4281          | 0.724    | 0.7005 |
| 0.6047        | 10.52 | 15000 | 1.4260          | 0.7225   | 0.6954 |
| 0.5929        | 11.22 | 16000 | 1.4013          | 0.7235   | 0.6974 |
| 0.5963        | 11.92 | 17000 | 1.3812          | 0.727    | 0.7053 |
| 0.5777        | 12.62 | 18000 | 1.3694          | 0.727    | 0.7029 |
| 0.5587        | 13.32 | 19000 | 1.4384          | 0.724    | 0.6951 |
| 0.5631        | 14.03 | 20000 | 1.3488          | 0.732    | 0.7097 |
| 0.538         | 14.73 | 21000 | 1.3758          | 0.7285   | 0.7042 |
| 0.5365        | 15.43 | 22000 | 1.3314          | 0.7305   | 0.7059 |
| 0.5339        | 16.13 | 23000 | 1.3576          | 0.729    | 0.7046 |
| 0.5232        | 16.83 | 24000 | 1.3652          | 0.732    | 0.7065 |
| 0.5082        | 17.53 | 25000 | 1.4092          | 0.729    | 0.7015 |
| 0.5015        | 18.23 | 26000 | 1.3548          | 0.726    | 0.7019 |
| 0.5035        | 18.93 | 27000 | 1.3637          | 0.7335   | 0.7076 |
| 0.4932        | 19.64 | 28000 | 1.3489          | 0.7205   | 0.6968 |
| 0.4867        | 20.34 | 29000 | 1.3753          | 0.7385   | 0.7135 |
| 0.4819        | 21.04 | 30000 | 1.3652          | 0.727    | 0.7050 |
| 0.4782        | 21.74 | 31000 | 1.3339          | 0.743    | 0.7194 |
| 0.4712        | 22.44 | 32000 | 1.3229          | 0.728    | 0.7059 |
| 0.4677        | 23.14 | 33000 | 1.3203          | 0.7365   | 0.7114 |
| 0.4736        | 23.84 | 34000 | 1.3605          | 0.7345   | 0.7114 |
| 0.4551        | 24.54 | 35000 | 1.3491          | 0.724    | 0.7016 |
| 0.458         | 25.25 | 36000 | 1.3663          | 0.739    | 0.7063 |
| 0.4545        | 25.95 | 37000 | 1.3502          | 0.734    | 0.7087 |
| 0.4471        | 26.65 | 38000 | 1.3435          | 0.7375   | 0.7100 |
| 0.441         | 27.35 | 39000 | 1.3069          | 0.7435   | 0.7155 |
| 0.4495        | 28.05 | 40000 | 1.3175          | 0.7285   | 0.7037 |
| 0.4411        | 28.75 | 41000 | 1.3344          | 0.734    | 0.7061 |

### Framework versions

- Transformers 4.33.3
- Pytorch 2.0.1
- Datasets 2.14.5
- Tokenizers 0.13.3
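
## How to use

A minimal usage sketch (not part of the original card): it assumes the checkpoint is published under the repo id shown below and exposes a standard 3-class XLM-R sequence-classification head. Adjust the repo id and label mapping to match your copy of the model.

```python
# Sketch only: the repo id below is assumed from the model name; verify it on the Hub.
from transformers import pipeline

model_id = "haryoaw/scenario-kd-from-post-finetune-gold-silver-div-2-data-tweet_eval-sentiment-model"

# Loads the tokenizer and classification head and runs inference in one call.
classifier = pipeline("text-classification", model=model_id)

# tweet_eval/sentiment uses three classes: 0 = negative, 1 = neutral, 2 = positive.
# If the config does not define id2label, outputs will show generic LABEL_0/1/2 names.
print(classifier("I love how smooth this update feels!"))
```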