---
license: mit
base_model: roberta-base
tags:
  - generated_from_trainer
metrics:
  - accuracy
model-index:
  - name: best_model-yelp_polarity-32-21
    results: []
---

# best_model-yelp_polarity-32-21

This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset (presumably Yelp Polarity, given the model name). It achieves the following results on the evaluation set:

- Loss: 0.4088
- Accuracy: 0.9531
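
The card does not include usage instructions. A minimal inference sketch is shown below; it assumes the checkpoint is published under the Hub id `simonycl/best_model-yelp_polarity-32-21` (inferred from the author and model name, not stated in the card) and that the head is a binary sentiment classifier.

```python
# Minimal inference sketch. The Hub id below is an assumption inferred from the
# author and model name on this card; adjust it if the repository id differs.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="simonycl/best_model-yelp_polarity-32-21",
)

print(classifier("The food was great and the staff were friendly."))
# Output has the form [{'label': ..., 'score': ...}]; the label names come from
# the fine-tuning config and may be generic (e.g. LABEL_0 / LABEL_1).
```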

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 150
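
As a rough reference, these values correspond approximately to the `TrainingArguments` sketched below. This is a reconstruction from the list above, not the author's training script; the evaluation strategy and dataset setup are assumptions.

```python
# Sketch of a Trainer setup matching the hyperparameters listed above.
# Reconstructed from the card; not the author's original training code.
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

model = AutoModelForSequenceClassification.from_pretrained("roberta-base", num_labels=2)
tokenizer = AutoTokenizer.from_pretrained("roberta-base")

training_args = TrainingArguments(
    output_dir="best_model-yelp_polarity-32-21",
    learning_rate=1e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=500,
    num_train_epochs=150,
    evaluation_strategy="epoch",  # assumed: the results table reports one evaluation per epoch
)
# The reported Adam betas (0.9, 0.999) and epsilon (1e-08) match the Trainer defaults.

# The Trainer call is left incomplete because the datasets are not described in this card:
# trainer = Trainer(model=model, args=training_args, tokenizer=tokenizer,
#                   train_dataset=..., eval_dataset=...)
```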

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 2 | 0.4153 | 0.9531 |
| No log | 2.0 | 4 | 0.4162 | 0.9531 |
| No log | 3.0 | 6 | 0.4170 | 0.9531 |
| No log | 4.0 | 8 | 0.4185 | 0.9531 |
| 0.0656 | 5.0 | 10 | 0.4208 | 0.9531 |
| 0.0656 | 6.0 | 12 | 0.4234 | 0.9531 |
| 0.0656 | 7.0 | 14 | 0.4266 | 0.9531 |
| 0.0656 | 8.0 | 16 | 0.4282 | 0.9531 |
| 0.0656 | 9.0 | 18 | 0.4298 | 0.9531 |
| 0.0228 | 10.0 | 20 | 0.4312 | 0.9531 |
| 0.0228 | 11.0 | 22 | 0.4322 | 0.9531 |
| 0.0228 | 12.0 | 24 | 0.4309 | 0.9531 |
| 0.0228 | 13.0 | 26 | 0.4287 | 0.9531 |
| 0.0228 | 14.0 | 28 | 0.4264 | 0.9531 |
| 0.0275 | 15.0 | 30 | 0.4230 | 0.9531 |
| 0.0275 | 16.0 | 32 | 0.4179 | 0.9531 |
| 0.0275 | 17.0 | 34 | 0.4115 | 0.9531 |
| 0.0275 | 18.0 | 36 | 0.4048 | 0.9531 |
| 0.0275 | 19.0 | 38 | 0.3992 | 0.9531 |
| 0.0051 | 20.0 | 40 | 0.3981 | 0.9531 |
| 0.0051 | 21.0 | 42 | 0.3985 | 0.9531 |
| 0.0051 | 22.0 | 44 | 0.3989 | 0.9531 |
| 0.0051 | 23.0 | 46 | 0.4033 | 0.9531 |
| 0.0051 | 24.0 | 48 | 0.4085 | 0.9531 |
| 0.0002 | 25.0 | 50 | 0.4128 | 0.9531 |
| 0.0002 | 26.0 | 52 | 0.4163 | 0.9531 |
| 0.0002 | 27.0 | 54 | 0.4192 | 0.9531 |
| 0.0002 | 28.0 | 56 | 0.4214 | 0.9531 |
| 0.0002 | 29.0 | 58 | 0.4230 | 0.9531 |
| 0.0001 | 30.0 | 60 | 0.4242 | 0.9531 |
| 0.0001 | 31.0 | 62 | 0.4251 | 0.9531 |
| 0.0001 | 32.0 | 64 | 0.4195 | 0.9531 |
| 0.0001 | 33.0 | 66 | 0.4142 | 0.9531 |
| 0.0001 | 34.0 | 68 | 0.4096 | 0.9531 |
| 0.0002 | 35.0 | 70 | 0.4013 | 0.9531 |
| 0.0002 | 36.0 | 72 | 0.3900 | 0.9531 |
| 0.0002 | 37.0 | 74 | 0.3817 | 0.9531 |
| 0.0002 | 38.0 | 76 | 0.4000 | 0.9375 |
| 0.0002 | 39.0 | 78 | 0.4307 | 0.9375 |
| 0.0001 | 40.0 | 80 | 0.4355 | 0.9375 |
| 0.0001 | 41.0 | 82 | 0.4225 | 0.9375 |
| 0.0001 | 42.0 | 84 | 0.4100 | 0.9375 |
| 0.0001 | 43.0 | 86 | 0.3992 | 0.9375 |
| 0.0001 | 44.0 | 88 | 0.3900 | 0.9375 |
| 0.0 | 45.0 | 90 | 0.3836 | 0.9375 |
| 0.0 | 46.0 | 92 | 0.3797 | 0.9531 |
| 0.0 | 47.0 | 94 | 0.3776 | 0.9531 |
| 0.0 | 48.0 | 96 | 0.3767 | 0.9531 |
| 0.0 | 49.0 | 98 | 0.3763 | 0.9531 |
| 0.0 | 50.0 | 100 | 0.3763 | 0.9531 |
| 0.0 | 51.0 | 102 | 0.3765 | 0.9531 |
| 0.0 | 52.0 | 104 | 0.3768 | 0.9531 |
| 0.0 | 53.0 | 106 | 0.3772 | 0.9531 |
| 0.0 | 54.0 | 108 | 0.3775 | 0.9531 |
| 0.0 | 55.0 | 110 | 0.3778 | 0.9531 |
| 0.0 | 56.0 | 112 | 0.3781 | 0.9531 |
| 0.0 | 57.0 | 114 | 0.3784 | 0.9531 |
| 0.0 | 58.0 | 116 | 0.3787 | 0.9531 |
| 0.0 | 59.0 | 118 | 0.3791 | 0.9531 |
| 0.0 | 60.0 | 120 | 0.3794 | 0.9531 |
| 0.0 | 61.0 | 122 | 0.3798 | 0.9531 |
| 0.0 | 62.0 | 124 | 0.3801 | 0.9531 |
| 0.0 | 63.0 | 126 | 0.3804 | 0.9531 |
| 0.0 | 64.0 | 128 | 0.3809 | 0.9531 |
| 0.0 | 65.0 | 130 | 0.3813 | 0.9531 |
| 0.0 | 66.0 | 132 | 0.3816 | 0.9531 |
| 0.0 | 67.0 | 134 | 0.3820 | 0.9531 |
| 0.0 | 68.0 | 136 | 0.3824 | 0.9531 |
| 0.0 | 69.0 | 138 | 0.3828 | 0.9531 |
| 0.0 | 70.0 | 140 | 0.3831 | 0.9531 |
| 0.0 | 71.0 | 142 | 0.3834 | 0.9531 |
| 0.0 | 72.0 | 144 | 0.3837 | 0.9531 |
| 0.0 | 73.0 | 146 | 0.3841 | 0.9531 |
| 0.0 | 74.0 | 148 | 0.3845 | 0.9531 |
| 0.0 | 75.0 | 150 | 0.3849 | 0.9531 |
| 0.0 | 76.0 | 152 | 0.3852 | 0.9531 |
| 0.0 | 77.0 | 154 | 0.3855 | 0.9531 |
| 0.0 | 78.0 | 156 | 0.3858 | 0.9531 |
| 0.0 | 79.0 | 158 | 0.3860 | 0.9531 |
| 0.0 | 80.0 | 160 | 0.3862 | 0.9531 |
| 0.0 | 81.0 | 162 | 0.3863 | 0.9531 |
| 0.0 | 82.0 | 164 | 0.3865 | 0.9531 |
| 0.0 | 83.0 | 166 | 0.3866 | 0.9531 |
| 0.0 | 84.0 | 168 | 0.3867 | 0.9531 |
| 0.0 | 85.0 | 170 | 0.3865 | 0.9531 |
| 0.0 | 86.0 | 172 | 0.3864 | 0.9531 |
| 0.0 | 87.0 | 174 | 0.3863 | 0.9531 |
| 0.0 | 88.0 | 176 | 0.3863 | 0.9531 |
| 0.0 | 89.0 | 178 | 0.3863 | 0.9531 |
| 0.0 | 90.0 | 180 | 0.3863 | 0.9531 |
| 0.0 | 91.0 | 182 | 0.3864 | 0.9531 |
| 0.0 | 92.0 | 184 | 0.3865 | 0.9531 |
| 0.0 | 93.0 | 186 | 0.3866 | 0.9531 |
| 0.0 | 94.0 | 188 | 0.3870 | 0.9531 |
| 0.0 | 95.0 | 190 | 0.3878 | 0.9531 |
| 0.0 | 96.0 | 192 | 0.3885 | 0.9531 |
| 0.0 | 97.0 | 194 | 0.3891 | 0.9531 |
| 0.0 | 98.0 | 196 | 0.3896 | 0.9531 |
| 0.0 | 99.0 | 198 | 0.3903 | 0.9531 |
| 0.0 | 100.0 | 200 | 0.3910 | 0.9531 |
| 0.0 | 101.0 | 202 | 0.3916 | 0.9531 |
| 0.0 | 102.0 | 204 | 0.3922 | 0.9531 |
| 0.0 | 103.0 | 206 | 0.3928 | 0.9531 |
| 0.0 | 104.0 | 208 | 0.3932 | 0.9531 |
| 0.0 | 105.0 | 210 | 0.3936 | 0.9531 |
| 0.0 | 106.0 | 212 | 0.3940 | 0.9531 |
| 0.0 | 107.0 | 214 | 0.3943 | 0.9531 |
| 0.0 | 108.0 | 216 | 0.3946 | 0.9531 |
| 0.0 | 109.0 | 218 | 0.3949 | 0.9531 |
| 0.0 | 110.0 | 220 | 0.3951 | 0.9531 |
| 0.0 | 111.0 | 222 | 0.3953 | 0.9531 |
| 0.0 | 112.0 | 224 | 0.3954 | 0.9531 |
| 0.0 | 113.0 | 226 | 0.3956 | 0.9531 |
| 0.0 | 114.0 | 228 | 0.3958 | 0.9531 |
| 0.0 | 115.0 | 230 | 0.3962 | 0.9531 |
| 0.0 | 116.0 | 232 | 0.3969 | 0.9531 |
| 0.0 | 117.0 | 234 | 0.3976 | 0.9531 |
| 0.0 | 118.0 | 236 | 0.3981 | 0.9531 |
| 0.0 | 119.0 | 238 | 0.3987 | 0.9531 |
| 0.0 | 120.0 | 240 | 0.3992 | 0.9531 |
| 0.0 | 121.0 | 242 | 0.3996 | 0.9531 |
| 0.0 | 122.0 | 244 | 0.3999 | 0.9531 |
| 0.0 | 123.0 | 246 | 0.4002 | 0.9531 |
| 0.0 | 124.0 | 248 | 0.4005 | 0.9531 |
| 0.0 | 125.0 | 250 | 0.4009 | 0.9531 |
| 0.0 | 126.0 | 252 | 0.4012 | 0.9531 |
| 0.0 | 127.0 | 254 | 0.4015 | 0.9531 |
| 0.0 | 128.0 | 256 | 0.4017 | 0.9531 |
| 0.0 | 129.0 | 258 | 0.4020 | 0.9531 |
| 0.0 | 130.0 | 260 | 0.4023 | 0.9531 |
| 0.0 | 131.0 | 262 | 0.4025 | 0.9531 |
| 0.0 | 132.0 | 264 | 0.4028 | 0.9531 |
| 0.0 | 133.0 | 266 | 0.4031 | 0.9531 |
| 0.0 | 134.0 | 268 | 0.4034 | 0.9531 |
| 0.0 | 135.0 | 270 | 0.4037 | 0.9531 |
| 0.0 | 136.0 | 272 | 0.4039 | 0.9531 |
| 0.0 | 137.0 | 274 | 0.4041 | 0.9531 |
| 0.0 | 138.0 | 276 | 0.4044 | 0.9531 |
| 0.0 | 139.0 | 278 | 0.4046 | 0.9531 |
| 0.0 | 140.0 | 280 | 0.4049 | 0.9531 |
| 0.0 | 141.0 | 282 | 0.4052 | 0.9531 |
| 0.0 | 142.0 | 284 | 0.4054 | 0.9531 |
| 0.0 | 143.0 | 286 | 0.4056 | 0.9531 |
| 0.0 | 144.0 | 288 | 0.4059 | 0.9531 |
| 0.0 | 145.0 | 290 | 0.4061 | 0.9531 |
| 0.0 | 146.0 | 292 | 0.4063 | 0.9531 |
| 0.0 | 147.0 | 294 | 0.4068 | 0.9531 |
| 0.0 | 148.0 | 296 | 0.4072 | 0.9531 |
| 0.0 | 149.0 | 298 | 0.4076 | 0.9531 |
| 0.0 | 150.0 | 300 | 0.4088 | 0.9531 |

### Framework versions

- Transformers 4.32.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.4.0
- Tokenizers 0.13.3
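
If reproducibility matters, it may help to confirm that the local environment roughly matches these versions (note that 4.32.0.dev0 was a development build of Transformers):

```python
# Print installed versions to compare against the ones reported above.
import datasets
import tokenizers
import torch
import transformers

print("transformers", transformers.__version__)  # card: 4.32.0.dev0
print("torch", torch.__version__)                # card: 2.0.1+cu118
print("datasets", datasets.__version__)          # card: 2.4.0
print("tokenizers", tokenizers.__version__)      # card: 0.13.3
```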