
bert-small-finetuned-glue-rte

This model is a fine-tuned version of google/bert_uncased_L-4_H-512_A-8 on the RTE (Recognizing Textual Entailment) task of the GLUE benchmark. It achieves the following results on the evaluation set:

  • Loss: 2.8715
  • Accuracy: 0.6318
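
The checkpoint can be loaded with the standard transformers sequence-classification API. The following is a minimal inference sketch, not part of the original card: the premise/hypothesis strings are illustrative, and the printed label names depend on how id2label was saved in the model config.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "muhtasham/bert-small-finetuned-glue-rte"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

# RTE is a sentence-pair task: does the premise entail the hypothesis?
# These example sentences are made up for illustration.
premise = "A man is playing a guitar on stage."
hypothesis = "A man is performing music."

inputs = tokenizer(premise, hypothesis, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

pred = logits.argmax(dim=-1).item()
# May print e.g. "entailment"/"not_entailment", or "LABEL_0"/"LABEL_1"
# if label names were not stored in the config.
print(model.config.id2label[pred])
```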

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 2e-05
  • train_batch_size: 8
  • eval_batch_size: 8
  • seed: 42
  • gradient_accumulation_steps: 16
  • total_train_batch_size: 128
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 50
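
As a rough guide, the hyperparameters above map onto Trainer/TrainingArguments as sketched below; note that train_batch_size 8 with 16 gradient-accumulation steps yields the effective batch size of 128 listed above, and the Adam betas/epsilon are the Trainer defaults. This is a hedged reconstruction, not the exact training script: the output directory, tokenization details, and evaluation cadence are assumptions not recorded in the card.

```python
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

base_model = "google/bert_uncased_L-4_H-512_A-8"
dataset = load_dataset("glue", "rte")
tokenizer = AutoTokenizer.from_pretrained(base_model)

def tokenize(batch):
    # RTE examples are sentence pairs (sentence1, sentence2).
    return tokenizer(batch["sentence1"], batch["sentence2"], truncation=True)

dataset = dataset.map(tokenize, batched=True)
model = AutoModelForSequenceClassification.from_pretrained(base_model, num_labels=2)

args = TrainingArguments(
    output_dir="bert-small-finetuned-glue-rte",  # assumed output path
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=16,  # effective train batch size: 8 * 16 = 128
    num_train_epochs=50,
    lr_scheduler_type="linear",
    seed=42,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=dataset["train"],
    eval_dataset=dataset["validation"],
    tokenizer=tokenizer,  # enables dynamic padding via the default collator
)
trainer.train()
```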

Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log        | 2.62  | 50   | 1.8285          | 0.6318   |
| No log        | 5.26  | 100  | 2.0806          | 0.6462   |
| No log        | 7.87  | 150  | 2.1598          | 0.6282   |
| No log        | 10.51 | 200  | 2.2774          | 0.6318   |
| No log        | 13.15 | 250  | 2.3676          | 0.6245   |
| No log        | 15.77 | 300  | 2.4581          | 0.6462   |
| No log        | 18.41 | 350  | 2.6175          | 0.6354   |
| No log        | 21.05 | 400  | 2.6697          | 0.6354   |
| No log        | 23.67 | 450  | 2.7717          | 0.6354   |
| 0.0101        | 26.31 | 500  | 2.7975          | 0.6462   |
| 0.0101        | 28.92 | 550  | 2.8532          | 0.6390   |
| 0.0101        | 31.56 | 600  | 2.9054          | 0.6209   |
| 0.0101        | 34.21 | 650  | 2.8715          | 0.6318   |

Framework versions

  • Transformers 4.21.2
  • Pytorch 1.12.1+cu113
  • Datasets 2.4.0
  • Tokenizers 0.12.1
