
laptop_sentence_classfication_wangChanBERTa

This model is a fine-tuned version of airesearch/wangchanberta-base-att-spm-uncased on an unspecified dataset. It achieves the following results on the evaluation set:

  • Loss: 0.2752
  • Accuracy: 0.9
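The card does not include a usage snippet, so here is a minimal sketch of loading the model for inference. Note the assumptions: the Hub repo id below simply mirrors the card title (a real id would likely carry a user or organization namespace), and the label set is not documented in this card; WangchanBERTa is a Thai-language model, so inputs are presumed to be Thai laptop-review sentences.

```python
# Usage sketch (assumptions flagged above): the repo id mirrors the card title
# and may need a namespace prefix; the label names are not documented here.
from transformers import pipeline

MODEL_ID = "laptop_sentence_classfication_wangChanBERTa"  # assumed repo id

def classify(texts, model_id=MODEL_ID):
    """Classify sentences with the fine-tuned WangchanBERTa checkpoint."""
    clf = pipeline("text-classification", model=model_id, tokenizer=model_id)
    return clf(texts)
```

Calling `classify(["..."])` returns a list of `{"label": ..., "score": ...}` dicts, one per input sentence.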

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 2e-05
  • train_batch_size: 16
  • eval_batch_size: 16
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 20

Training results

Training Loss  Epoch  Step  Validation Loss  Accuracy
No log          1.0     25  0.6130           0.7077
No log          2.0     50  0.4832           0.7769
No log          3.0     75  0.4457           0.8154
No log          4.0    100  0.4696           0.7692
No log          5.0    125  0.4378           0.8077
No log          6.0    150  0.4698           0.8077
No log          7.0    175  0.3654           0.8615
No log          8.0    200  0.3795           0.8615
No log          9.0    225  0.4212           0.8692
No log         10.0    250  0.4153           0.8538
No log         11.0    275  0.3723           0.8692
No log         12.0    300  0.3590           0.8538
No log         13.0    325  0.2553           0.9077
No log         14.0    350  0.2713           0.9000
No log         15.0    375  0.2699           0.9077
No log         16.0    400  0.2563           0.9154
No log         17.0    425  0.2536           0.9231
No log         18.0    450  0.2529           0.9154
No log         19.0    475  0.2743           0.9000
0.3025         20.0    500  0.2752           0.9000
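Training ran for the full 20 epochs, so the final checkpoint (loss 0.2752, accuracy 0.9) is not the strongest one in the table: validation loss bottoms out at epoch 18 and accuracy peaks at epoch 17. A small helper like the following (plain Python, with the triples copied from the table above) picks the best epoch under either criterion:

```python
# (epoch, validation_loss, accuracy) triples copied from the results table.
RESULTS = [
    (1, 0.6130, 0.7077), (2, 0.4832, 0.7769), (3, 0.4457, 0.8154),
    (4, 0.4696, 0.7692), (5, 0.4378, 0.8077), (6, 0.4698, 0.8077),
    (7, 0.3654, 0.8615), (8, 0.3795, 0.8615), (9, 0.4212, 0.8692),
    (10, 0.4153, 0.8538), (11, 0.3723, 0.8692), (12, 0.3590, 0.8538),
    (13, 0.2553, 0.9077), (14, 0.2713, 0.9000), (15, 0.2699, 0.9077),
    (16, 0.2563, 0.9154), (17, 0.2536, 0.9231), (18, 0.2529, 0.9154),
    (19, 0.2743, 0.9000), (20, 0.2752, 0.9000),
]

def best_epoch(results, key="loss"):
    """Return the (epoch, loss, accuracy) row that is best under the given key."""
    if key == "loss":
        return min(results, key=lambda r: r[1])  # lowest validation loss
    return max(results, key=lambda r: r[2])      # highest accuracy

print(best_epoch(RESULTS))         # lowest validation loss: epoch 18
print(best_epoch(RESULTS, "acc"))  # highest accuracy: epoch 17
```

This is only a reading aid for the table; whether an early-stopped checkpoint was saved during the original run is not stated in the card.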

Framework versions

  • Transformers 4.29.2
  • Pytorch 2.0.1+cu118
  • Datasets 2.12.0
  • Tokenizers 0.13.3