
arg-quality-regression

This model is a fine-tuned version of google-bert/bert-base-uncased on an unknown dataset. It achieves the following results on the evaluation set (a minimal loading and inference sketch follows the metrics):

  • Loss: 0.0342
  • MSE: 0.0342
  • MAE: 0.1359
  • R²: 0.1353
  • Accuracy: 0.9808
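
The MSE/MAE/R² metrics above indicate a single-output regression head on top of bert-base-uncased (num_labels=1). A minimal loading and inference sketch, assuming that setup, might look like the following; the repository id and example sentence are placeholders, not part of the original card.

```python
# Minimal inference sketch, assuming a single-output regression head (num_labels=1).
# "your-namespace/arg-quality-regression" is a placeholder; use the actual Hub repo id.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "your-namespace/arg-quality-regression"  # placeholder repository id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
model.eval()

# Example argument (placeholder text); the expected input format is not documented.
text = "Renewable energy reduces long-term costs and dependence on fuel imports."
inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)

with torch.no_grad():
    score = model(**inputs).logits.squeeze().item()  # single regression output

print(f"Predicted argument quality score: {score:.3f}")
```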

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training; a reconstruction of the corresponding Trainer configuration is sketched after the list:

  • learning_rate: 2e-05
  • train_batch_size: 16
  • eval_batch_size: 16
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 11
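
The original training script and dataset are not published, so the sketch below is only a reconstruction from the hyperparameters listed above; the output directory, evaluation strategy, and the omitted dataset and metric handling are assumptions.

```python
# Sketch of a Trainer configuration matching the hyperparameters listed above.
# Dataset loading and metric computation are omitted because they are not documented.
from transformers import AutoModelForSequenceClassification, TrainingArguments

model = AutoModelForSequenceClassification.from_pretrained(
    "google-bert/bert-base-uncased",
    num_labels=1,  # single regression output (MSE loss)
)

training_args = TrainingArguments(
    output_dir="arg-quality-regression",   # assumed output path
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=11,
    evaluation_strategy="epoch",           # assumption: per-epoch evaluation, as in the results table
)

# These arguments would then be passed to transformers.Trainer together with the
# (undocumented) train/eval datasets and a compute_metrics function for MSE/MAE/R².
# The Adam settings listed above (betas=(0.9, 0.999), epsilon=1e-08) are the Trainer defaults.
```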

Training results

Training Loss   Epoch   Step    Validation Loss   MSE      MAE      R²        Accuracy
0.0277          1.0     1512    0.0398            0.0398   0.1450   -0.0046   0.9736
0.0218          2.0     3024    0.0342            0.0342   0.1359   0.1353    0.9808
0.0169          3.0     4536    0.0367            0.0367   0.1409   0.0717    0.9783
0.0114          4.0     6048    0.0400            0.0400   0.1477   -0.0108   0.9751
0.0075          5.0     7560    0.0439            0.0439   0.1564   -0.1093   0.9704
0.0060          6.0     9072    0.0465            0.0465   0.1626   -0.1749   0.9661
0.0051          7.0     10584   0.0429            0.0429   0.1574   -0.0851   0.9729
0.0037          8.0     12096   0.0440            0.0440   0.1590   -0.1123   0.9720
0.0035          9.0     13608   0.0412            0.0412   0.1534   -0.0401   0.9755
0.0029          10.0    15120   0.0415            0.0415   0.1537   -0.0487   0.9743
0.0028          11.0    16632   0.0438            0.0438   0.1589   -0.1080   0.9712

Framework versions

  • Transformers 4.40.1
  • Pytorch 2.2.1+cu121
  • Datasets 2.19.1
  • Tokenizers 0.19.1