LayoutLMv3_jordyvl_rvl_cdip_100_examples_per_class_2023-07-07_baseline

This model is a fine-tuned version of microsoft/layoutlmv3-base on an unspecified dataset (the model name suggests a 100-examples-per-class subset of RVL-CDIP). It achieves the following results on the evaluation set:

  • Loss: 0.9950
  • Accuracy: 0.78

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed
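The card does not document the label set. If the dataset is indeed a per-class subset of RVL-CDIP (as the model name suggests), the classification head would cover RVL-CDIP's standard 16 document categories. The sketch below is an assumption based on the dataset name, not on the card itself; the `id2label` ordering shown is the conventional RVL-CDIP class order and may differ from this checkpoint's actual config.

```python
# Hedged sketch: the 16 standard RVL-CDIP document classes, assuming the
# model was fine-tuned on an RVL-CDIP subset as the model name implies.
# The index order follows the conventional RVL-CDIP label order; the
# actual checkpoint's config.json id2label mapping should be checked.
id2label = {
    0: "letter",
    1: "form",
    2: "email",
    3: "handwritten",
    4: "advertisement",
    5: "scientific report",
    6: "scientific publication",
    7: "specification",
    8: "file folder",
    9: "news article",
    10: "budget",
    11: "invoice",
    12: "presentation",
    13: "questionnaire",
    14: "resume",
    15: "memo",
}

# With 100 examples per class, the training set would hold 16 * 100 examples.
print(len(id2label))        # 16 classes
print(len(id2label) * 100)  # 1600 examples at 100 per class
```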

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 2e-05
  • train_batch_size: 6
  • eval_batch_size: 2
  • seed: 42
  • gradient_accumulation_steps: 24
  • total_train_batch_size: 144
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 60
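The reported total_train_batch_size is not an independent setting: in the Hugging Face Trainer it is derived from the per-device batch size and the gradient accumulation steps. A minimal sketch, using only the values listed above:

```python
# Reconstructing the effective (total) train batch size from the
# hyperparameters reported in the card. On a single device, the
# Trainer's effective batch size is simply the product of the
# per-device batch size and the gradient accumulation steps.
train_batch_size = 6              # per-device batch size (from the card)
gradient_accumulation_steps = 24  # from the card

total_train_batch_size = train_batch_size * gradient_accumulation_steps
print(total_train_batch_size)  # 144, matching the card
```

Because gradients are accumulated over 24 forward passes before each optimizer step, each epoch produces only a handful of optimizer steps, which is why the table below advances just 5 steps per epoch.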

Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log        | 0.9   | 5    | 2.7384          | 0.105    |
| No log        | 1.9   | 10   | 2.6717          | 0.14     |
| No log        | 2.9   | 15   | 2.6154          | 0.21     |
| No log        | 3.9   | 20   | 2.5111          | 0.245    |
| No log        | 4.9   | 25   | 2.4349          | 0.2575   |
| No log        | 5.9   | 30   | 2.3158          | 0.3      |
| No log        | 6.9   | 35   | 2.2286          | 0.335    |
| No log        | 7.9   | 40   | 2.1051          | 0.3675   |
| No log        | 8.9   | 45   | 2.0340          | 0.41     |
| No log        | 9.9   | 50   | 1.8834          | 0.495    |
| No log        | 10.9  | 55   | 1.7616          | 0.5275   |
| No log        | 11.9  | 60   | 1.6547          | 0.5575   |
| No log        | 12.9  | 65   | 1.5398          | 0.585    |
| No log        | 13.9  | 70   | 1.4345          | 0.615    |
| No log        | 14.9  | 75   | 1.3810          | 0.63     |
| No log        | 15.9  | 80   | 1.2689          | 0.685    |
| No log        | 16.9  | 85   | 1.2218          | 0.6625   |
| No log        | 17.9  | 90   | 1.1964          | 0.6825   |
| No log        | 18.9  | 95   | 1.1421          | 0.6875   |
| No log        | 19.9  | 100  | 1.1136          | 0.71     |
| No log        | 20.9  | 105  | 1.0863          | 0.715    |
| No log        | 21.9  | 110  | 1.0472          | 0.7075   |
| No log        | 22.9  | 115  | 1.0367          | 0.7375   |
| No log        | 23.9  | 120  | 1.0132          | 0.7175   |
| No log        | 24.9  | 125  | 0.9760          | 0.7375   |
| No log        | 25.9  | 130  | 0.9697          | 0.7275   |
| No log        | 26.9  | 135  | 0.9621          | 0.7375   |
| No log        | 27.9  | 140  | 0.9532          | 0.745    |
| No log        | 28.9  | 145  | 0.9258          | 0.7475   |
| No log        | 29.9  | 150  | 0.9703          | 0.7475   |
| No log        | 30.9  | 155  | 0.9199          | 0.765    |
| No log        | 31.9  | 160  | 0.9678          | 0.745    |
| No log        | 32.9  | 165  | 0.9110          | 0.7675   |
| No log        | 33.9  | 170  | 0.9723          | 0.755    |
| No log        | 34.9  | 175  | 0.9083          | 0.78     |
| No log        | 35.9  | 180  | 0.9427          | 0.76     |
| No log        | 36.9  | 185  | 0.9301          | 0.77     |
| No log        | 37.9  | 190  | 0.9318          | 0.765    |
| No log        | 38.9  | 195  | 0.9486          | 0.77     |
| No log        | 39.9  | 200  | 0.9676          | 0.755    |
| No log        | 40.9  | 205  | 0.9586          | 0.7675   |
| No log        | 41.9  | 210  | 0.9516          | 0.7625   |
| No log        | 42.9  | 215  | 0.9796          | 0.7625   |
| No log        | 43.9  | 220  | 0.9764          | 0.77     |
| No log        | 44.9  | 225  | 0.9704          | 0.7675   |
| No log        | 45.9  | 230  | 0.9842          | 0.775    |
| No log        | 46.9  | 235  | 1.0011          | 0.7625   |
| No log        | 47.9  | 240  | 0.9978          | 0.7625   |
| No log        | 48.9  | 245  | 0.9873          | 0.775    |
| No log        | 49.9  | 250  | 0.9848          | 0.7825   |
| No log        | 50.9  | 255  | 0.9857          | 0.7775   |
| No log        | 51.9  | 260  | 0.9975          | 0.775    |
| No log        | 52.9  | 265  | 0.9933          | 0.78     |
| No log        | 53.9  | 270  | 0.9840          | 0.78     |
| No log        | 54.9  | 275  | 0.9814          | 0.78     |
| No log        | 55.9  | 280  | 0.9860          | 0.775    |
| No log        | 56.9  | 285  | 0.9922          | 0.78     |
| No log        | 57.9  | 290  | 0.9949          | 0.78     |
| No log        | 58.9  | 295  | 0.9953          | 0.78     |
| No log        | 59.9  | 300  | 0.9950          | 0.78     |

Framework versions

  • Transformers 4.26.1
  • PyTorch 1.13.1.post200
  • Datasets 2.9.0
  • Tokenizers 0.13.2