---
base_model: vinai/phobert-base
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: disfluency-large-3
  results: []
---

# disfluency-large-3

This model is a fine-tuned version of [vinai/phobert-base](https://huggingface.co/vinai/phobert-base) on an unknown dataset. It achieves the following results on the evaluation set:

- Loss: 0.0403
- Precision: 0.9904
- Recall: 0.9880
- F1: 0.9892
- Accuracy: 0.9962
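
The snippet below is a minimal usage sketch, not an official example: the card does not state the task, but the token-level precision/recall/F1/accuracy metrics and the model name suggest token classification (disfluency tagging) for Vietnamese. The repository id `DD0101/disfluency-large-3`, the task string, and the example sentence are assumptions; PhoBERT-based models also generally expect word-segmented Vietnamese input.

```python
# Hypothetical usage sketch; model id, task, and input are assumptions, not documented facts.
from transformers import pipeline

tagger = pipeline(
    "token-classification",             # assumed task, inferred from the metrics above
    model="DD0101/disfluency-large-3",  # assumed repository id
)

# PhoBERT-based models typically expect word-segmented Vietnamese
# (e.g. pre-processed with VnCoreNLP); this raw sentence is illustrative only.
print(tagger("tôi muốn à không tôi cần đặt vé"))
```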

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a reproduction sketch follows the list):

- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
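
The sketch below maps these hyperparameters onto the Hugging Face `Trainer` API. It is a hedged reconstruction, not the original training script: the dataset, label set, and task head are undocumented, so the token-classification head, `num_labels`, and `output_dir` are placeholders, and per-epoch evaluation is inferred from the results table below.

```python
# Sketch only: reproduces the listed hyperparameters with TrainingArguments.
# Anything not listed above (task head, num_labels, output_dir, datasets) is a placeholder.
from transformers import (
    AutoModelForTokenClassification,
    AutoTokenizer,
    TrainingArguments,
)

tokenizer = AutoTokenizer.from_pretrained("vinai/phobert-base")
model = AutoModelForTokenClassification.from_pretrained(
    "vinai/phobert-base",
    num_labels=3,  # placeholder: the real label set is not documented
)

args = TrainingArguments(
    output_dir="disfluency-large-3",  # placeholder output directory
    learning_rate=5e-05,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-08,
    lr_scheduler_type="linear",
    num_train_epochs=50,
    evaluation_strategy="epoch",  # inferred from the per-epoch validation rows below
)

# A Trainer would then be built from `model`, `args`, the tokenizer, and the
# (undocumented) tokenized train/eval datasets, and run via trainer.train().
```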

### Training results

| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 280 | 0.0331 | 0.9719 | 0.9754 | 0.9736 | 0.9926 |
| 0.0853 | 2.0 | 560 | 0.0354 | 0.9771 | 0.9736 | 0.9753 | 0.9923 |
| 0.0853 | 3.0 | 840 | 0.0360 | 0.9759 | 0.9754 | 0.9757 | 0.9928 |
| 0.0119 | 4.0 | 1120 | 0.0255 | 0.9850 | 0.9838 | 0.9844 | 0.9948 |
| 0.0119 | 5.0 | 1400 | 0.0300 | 0.9873 | 0.9850 | 0.9862 | 0.9952 |
| 0.0063 | 6.0 | 1680 | 0.0412 | 0.9848 | 0.9742 | 0.9795 | 0.9927 |
| 0.0063 | 7.0 | 1960 | 0.0304 | 0.9844 | 0.9838 | 0.9841 | 0.9952 |
| 0.0039 | 8.0 | 2240 | 0.0344 | 0.9855 | 0.9820 | 0.9837 | 0.9939 |
| 0.004 | 9.0 | 2520 | 0.0522 | 0.9740 | 0.9681 | 0.9711 | 0.9911 |
| 0.004 | 10.0 | 2800 | 0.0305 | 0.9790 | 0.9790 | 0.9790 | 0.9943 |
| 0.0022 | 11.0 | 3080 | 0.0355 | 0.9837 | 0.9820 | 0.9829 | 0.9945 |
| 0.0022 | 12.0 | 3360 | 0.0400 | 0.9795 | 0.9772 | 0.9783 | 0.9935 |
| 0.002 | 13.0 | 3640 | 0.0394 | 0.9826 | 0.9814 | 0.9820 | 0.9943 |
| 0.002 | 14.0 | 3920 | 0.0452 | 0.9795 | 0.9772 | 0.9783 | 0.9930 |
| 0.0015 | 15.0 | 4200 | 0.0405 | 0.9825 | 0.9808 | 0.9817 | 0.9935 |
| 0.0015 | 16.0 | 4480 | 0.0373 | 0.9832 | 0.9826 | 0.9829 | 0.9941 |
| 0.0013 | 17.0 | 4760 | 0.0361 | 0.9832 | 0.9850 | 0.9841 | 0.9946 |
| 0.0013 | 18.0 | 5040 | 0.0447 | 0.9807 | 0.9790 | 0.9798 | 0.9937 |
| 0.0013 | 19.0 | 5320 | 0.0340 | 0.9874 | 0.9856 | 0.9865 | 0.9955 |
| 0.0009 | 20.0 | 5600 | 0.0374 | 0.9873 | 0.9826 | 0.9849 | 0.9948 |
| 0.0009 | 21.0 | 5880 | 0.0410 | 0.9843 | 0.9784 | 0.9813 | 0.9943 |
| 0.0007 | 22.0 | 6160 | 0.0275 | 0.9892 | 0.9862 | 0.9877 | 0.9961 |
| 0.0007 | 23.0 | 6440 | 0.0360 | 0.9891 | 0.9850 | 0.9871 | 0.9960 |
| 0.0011 | 24.0 | 6720 | 0.0323 | 0.9868 | 0.9850 | 0.9859 | 0.9954 |
| 0.0006 | 25.0 | 7000 | 0.0386 | 0.9867 | 0.9820 | 0.9843 | 0.9949 |
| 0.0006 | 26.0 | 7280 | 0.0408 | 0.9819 | 0.9802 | 0.9811 | 0.9940 |
| 0.0005 | 27.0 | 7560 | 0.0357 | 0.9867 | 0.9826 | 0.9846 | 0.9953 |
| 0.0005 | 28.0 | 7840 | 0.0370 | 0.9843 | 0.9820 | 0.9832 | 0.9946 |
| 0.0004 | 29.0 | 8120 | 0.0313 | 0.9880 | 0.9874 | 0.9877 | 0.9960 |
| 0.0004 | 30.0 | 8400 | 0.0363 | 0.9892 | 0.9862 | 0.9877 | 0.9956 |
| 0.0004 | 31.0 | 8680 | 0.0402 | 0.9843 | 0.9826 | 0.9835 | 0.9946 |
| 0.0004 | 32.0 | 8960 | 0.0321 | 0.9868 | 0.9850 | 0.9859 | 0.9956 |
| 0.0004 | 33.0 | 9240 | 0.0362 | 0.9861 | 0.9838 | 0.9850 | 0.9950 |
| 0.0003 | 34.0 | 9520 | 0.0307 | 0.9886 | 0.9880 | 0.9883 | 0.9964 |
| 0.0003 | 35.0 | 9800 | 0.0350 | 0.9880 | 0.9862 | 0.9871 | 0.9956 |
| 0.0001 | 36.0 | 10080 | 0.0343 | 0.9868 | 0.9856 | 0.9862 | 0.9956 |
| 0.0001 | 37.0 | 10360 | 0.0374 | 0.9874 | 0.9856 | 0.9865 | 0.9952 |
| 0.0003 | 38.0 | 10640 | 0.0333 | 0.9874 | 0.9868 | 0.9871 | 0.9957 |
| 0.0003 | 39.0 | 10920 | 0.0331 | 0.9886 | 0.9862 | 0.9874 | 0.9956 |
| 0.0001 | 40.0 | 11200 | 0.0349 | 0.9880 | 0.9868 | 0.9874 | 0.9961 |
| 0.0001 | 41.0 | 11480 | 0.0407 | 0.9880 | 0.9868 | 0.9874 | 0.9958 |
| 0.0001 | 42.0 | 11760 | 0.0389 | 0.9874 | 0.9868 | 0.9871 | 0.9959 |
| 0.0001 | 43.0 | 12040 | 0.0387 | 0.9892 | 0.9874 | 0.9883 | 0.9961 |
| 0.0001 | 44.0 | 12320 | 0.0414 | 0.9886 | 0.9868 | 0.9877 | 0.9959 |
| 0.0001 | 45.0 | 12600 | 0.0386 | 0.9886 | 0.9868 | 0.9877 | 0.9961 |
| 0.0001 | 46.0 | 12880 | 0.0408 | 0.9892 | 0.9874 | 0.9883 | 0.9961 |
| 0.0 | 47.0 | 13160 | 0.0402 | 0.9898 | 0.9880 | 0.9889 | 0.9962 |
| 0.0 | 48.0 | 13440 | 0.0411 | 0.9886 | 0.9868 | 0.9877 | 0.9959 |
| 0.0 | 49.0 | 13720 | 0.0403 | 0.9904 | 0.9880 | 0.9892 | 0.9962 |
| 0.0 | 50.0 | 14000 | 0.0402 | 0.9904 | 0.9880 | 0.9892 | 0.9962 |
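
Metrics of this kind are typically computed with a seqeval-style entity-level evaluation. The sketch below shows that computation under the assumption of BIO-style disfluency tags; the model's actual label set and evaluation script are not documented in this card.

```python
# Hedged sketch: entity-level precision/recall/F1 and token accuracy via the
# `evaluate` wrapper around seqeval. The BIO tags here are purely illustrative.
import evaluate

seqeval = evaluate.load("seqeval")

predictions = [["O", "B-RM", "I-RM", "O", "O"]]  # hypothetical predicted tags
references = [["O", "B-RM", "O", "O", "O"]]      # hypothetical gold tags

results = seqeval.compute(predictions=predictions, references=references)
print(
    results["overall_precision"],
    results["overall_recall"],
    results["overall_f1"],
    results["overall_accuracy"],
)
```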

### Framework versions

- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.1
- Tokenizers 0.13.3
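
As a quick sanity check that a local environment matches these versions, the following sketch prints the installed versions (package names assume a standard pip installation of each library):

```python
# Prints installed versions to compare against the ones listed above.
import datasets
import tokenizers
import torch
import transformers

print("Transformers:", transformers.__version__)  # expected 4.31.0
print("PyTorch:", torch.__version__)              # expected 2.0.1+cu118
print("Datasets:", datasets.__version__)          # expected 2.14.1
print("Tokenizers:", tokenizers.__version__)      # expected 0.13.3
```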