right_as_train_context_roberta-large_20e

This model is a fine-tuned version of FacebookAI/roberta-large on an unknown dataset. It achieves the following results on the evaluation set:

  • Loss: 2.0698
  • Val Accuracy: 0.8315
  • Val Precision Macro: 0.8251
  • Val Recall Macro: 0.8236
  • Val F1 Macro: 0.8243
  • Val Precision Weighted: 0.8315
  • Val Recall Weighted: 0.8315
  • Val F1 Weighted: 0.8315
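
For reference, the macro and weighted aggregates above correspond to scikit-learn's `average="macro"` and `average="weighted"` reductions. A minimal sketch with placeholder label arrays (substitute the real evaluation-set labels and predictions):

```python
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

# Placeholder arrays; replace with the evaluation-set labels and model predictions.
y_true = [0, 1, 2, 1, 0]
y_pred = [0, 1, 1, 1, 0]

accuracy = accuracy_score(y_true, y_pred)
# Macro: unweighted mean over classes; weighted: mean weighted by class support.
p_macro, r_macro, f1_macro, _ = precision_recall_fscore_support(y_true, y_pred, average="macro")
p_wtd, r_wtd, f1_wtd, _ = precision_recall_fscore_support(y_true, y_pred, average="weighted")
```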

Model description

More information needed

Intended uses & limitations

More information needed
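
Pending details from the authors, a minimal inference sketch is shown below. The repo id is a placeholder, and the label set and id-to-label mapping are task-specific and not documented here:

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Placeholder id; replace with the actual Hub repo id or a local checkpoint path.
model_id = "right_as_train_context_roberta-large_20e"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer("Example input text", return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits
predicted_class = logits.argmax(dim=-1).item()
```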

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 5e-06
  • train_batch_size: 16
  • eval_batch_size: 8
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 20
  • mixed_precision_training: Native AMP
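
A sketch of how this configuration maps onto `transformers.TrainingArguments` (the output directory is a placeholder; dataset loading and the `compute_metrics` function are omitted):

```python
from transformers import TrainingArguments

# Mirrors the hyperparameters listed above.
args = TrainingArguments(
    output_dir="right_as_train_context_roberta-large_20e",  # placeholder
    learning_rate=5e-6,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=20,
    fp16=True,  # corresponds to "Native AMP" mixed-precision training
    evaluation_strategy="epoch",  # per-epoch evaluation, consistent with the results table
    # Adam betas/epsilon match the Trainer defaults (adam_beta1=0.9,
    # adam_beta2=0.999, adam_epsilon=1e-8), so they need not be set explicitly.
)
```

These arguments would then be passed to `transformers.Trainer` together with the model, the tokenized train/eval datasets, and a `compute_metrics` function.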

Training results

| Training Loss | Epoch | Step  | Validation Loss | Val Accuracy | Val Precision Macro | Val Recall Macro | Val F1 Macro | Val Precision Weighted | Val Recall Weighted | Val F1 Weighted |
|:-------------:|:-----:|:-----:|:---------------:|:------------:|:-------------------:|:----------------:|:------------:|:----------------------:|:-------------------:|:---------------:|
| 0.4675        | 1.0   | 4017  | 0.5295          | 0.7930       | 0.7883              | 0.7814           | 0.7832       | 0.7950                 | 0.7930              | 0.7926          |
| 0.3484        | 2.0   | 8034  | 0.5219          | 0.8106       | 0.8024              | 0.8005           | 0.8012       | 0.8109                 | 0.8106              | 0.8105          |
| 0.2493        | 3.0   | 12051 | 0.6031          | 0.8187       | 0.8089              | 0.8131           | 0.8108       | 0.8197                 | 0.8187              | 0.8190          |
| 0.1975        | 4.0   | 16068 | 0.7936          | 0.8226       | 0.8167              | 0.8133           | 0.8148       | 0.8226                 | 0.8226              | 0.8225          |
| 0.1536        | 5.0   | 20085 | 1.0773          | 0.8139       | 0.8126              | 0.7991           | 0.8045       | 0.8146                 | 0.8139              | 0.8130          |
| 0.1247        | 6.0   | 24102 | 1.1831          | 0.8247       | 0.8168              | 0.8172           | 0.8170       | 0.8247                 | 0.8247              | 0.8247          |
| 0.0989        | 7.0   | 28119 | 1.3600          | 0.8211       | 0.8156              | 0.8095           | 0.8123       | 0.8205                 | 0.8211              | 0.8205          |
| 0.0818        | 8.0   | 32136 | 1.4785          | 0.8256       | 0.8158              | 0.8221           | 0.8187       | 0.8275                 | 0.8256              | 0.8262          |
| 0.0620        | 9.0   | 36153 | 1.6175          | 0.8244       | 0.8167              | 0.8164           | 0.8165       | 0.8245                 | 0.8244              | 0.8244          |
| 0.0536        | 10.0  | 40170 | 1.6854          | 0.8201       | 0.8149              | 0.8097           | 0.8121       | 0.8195                 | 0.8201              | 0.8197          |
| 0.0373        | 11.0  | 44187 | 1.6336          | 0.8240       | 0.8188              | 0.8126           | 0.8155       | 0.8234                 | 0.8240              | 0.8234          |
| 0.0349        | 12.0  | 48204 | 1.6960          | 0.8289       | 0.8202              | 0.8232           | 0.8216       | 0.8297                 | 0.8289              | 0.8293          |
| 0.0222        | 13.0  | 52221 | 1.8910          | 0.8216       | 0.8167              | 0.8096           | 0.8128       | 0.8209                 | 0.8216              | 0.8208          |
| 0.0147        | 14.0  | 56238 | 1.8448          | 0.8320       | 0.8253              | 0.8246           | 0.8247       | 0.8328                 | 0.8320              | 0.8322          |
| 0.0168        | 15.0  | 60255 | 1.8517          | 0.8337       | 0.8257              | 0.8286           | 0.8271       | 0.8345                 | 0.8337              | 0.8340          |
| 0.0128        | 16.0  | 64272 | 1.9199          | 0.8326       | 0.8263              | 0.8240           | 0.8251       | 0.8324                 | 0.8326              | 0.8325          |
| 0.0077        | 17.0  | 68289 | 1.9848          | 0.8308       | 0.8231              | 0.8237           | 0.8234       | 0.8309                 | 0.8308              | 0.8309          |
| 0.0050        | 18.0  | 72306 | 2.0593          | 0.8292       | 0.8258              | 0.8187           | 0.8218       | 0.8292                 | 0.8292              | 0.8288          |
| 0.0018        | 19.0  | 76323 | 2.0637          | 0.8293       | 0.8229              | 0.8207           | 0.8218       | 0.8291                 | 0.8293              | 0.8292          |
| 0.0019        | 20.0  | 80340 | 2.0698          | 0.8315       | 0.8251              | 0.8236           | 0.8243       | 0.8315                 | 0.8315              | 0.8315          |

Framework versions

  • Transformers 4.38.2
  • Pytorch 2.1.2
  • Datasets 2.18.0
  • Tokenizers 0.15.2