
layout2

This model is a fine-tuned version of microsoft/layoutlmv2-base-uncased on an unknown dataset. It achieves the following results on the evaluation set:

  • Loss: 4.9306
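
Since the card does not yet document usage, the snippet below is a minimal loading sketch. It assumes the checkpoint is published under a hypothetical repo id (your-username/layout2) and was fine-tuned with a token-classification head; both are assumptions, since the card does not state the task or the repo path. Note that the LayoutLMv2 processor additionally requires detectron2, torchvision, and Tesseract for its built-in OCR.

```python
from PIL import Image
from transformers import LayoutLMv2Processor, LayoutLMv2ForTokenClassification

# Processor of the base model: runs OCR (via Tesseract) and prepares the
# image, words, and bounding boxes that LayoutLMv2 expects.
processor = LayoutLMv2Processor.from_pretrained("microsoft/layoutlmv2-base-uncased")

# Hypothetical repo id -- replace with the actual path of this checkpoint.
# The token-classification head is an assumption; the card does not state the task.
model = LayoutLMv2ForTokenClassification.from_pretrained("your-username/layout2")

image = Image.open("document.png").convert("RGB")
encoding = processor(image, return_tensors="pt")

outputs = model(**encoding)
predictions = outputs.logits.argmax(-1)  # one predicted label id per token
```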

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 5e-05
  • train_batch_size: 4
  • eval_batch_size: 8
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 20
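
For reference, the hyperparameters above map roughly to the following TrainingArguments sketch. The output_dir and the 50-step evaluation/logging cadence are assumptions inferred from the results table below rather than stated on the card; the Adam betas and epsilon are the optimizer defaults.

```python
from transformers import TrainingArguments

# Sketch reconstructed from the listed hyperparameters (Transformers 4.32).
training_args = TrainingArguments(
    output_dir="layout2",              # assumed; not stated on the card
    learning_rate=5e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=8,
    seed=42,
    num_train_epochs=20,
    lr_scheduler_type="linear",
    evaluation_strategy="steps",       # inferred from the 50-step eval cadence
    eval_steps=50,
    logging_steps=50,
)
```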

Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 5.3459 | 0.22 | 50 | 4.5773 |
| 4.4535 | 0.44 | 100 | 4.3372 |
| 4.0847 | 0.66 | 150 | 3.7812 |
| 3.8993 | 0.88 | 200 | 3.5418 |
| 3.5723 | 1.11 | 250 | 3.4054 |
| 3.1247 | 1.33 | 300 | 3.1567 |
| 3.1067 | 1.55 | 350 | 2.8839 |
| 2.9002 | 1.77 | 400 | 2.9007 |
| 2.7197 | 1.99 | 450 | 3.1319 |
| 2.0518 | 2.21 | 500 | 2.8704 |
| 1.9376 | 2.43 | 550 | 2.5644 |
| 1.9025 | 2.65 | 600 | 2.5909 |
| 2.1491 | 2.88 | 650 | 2.2353 |
| 1.5984 | 3.1 | 700 | 2.2114 |
| 1.4801 | 3.32 | 750 | 2.5351 |
| 1.2501 | 3.54 | 800 | 2.2936 |
| 1.2647 | 3.76 | 850 | 2.2360 |
| 1.1896 | 3.98 | 900 | 2.3516 |
| 0.8484 | 4.2 | 950 | 2.9913 |
| 0.8924 | 4.42 | 1000 | 2.6146 |
| 0.8342 | 4.65 | 1050 | 3.0150 |
| 1.0721 | 4.87 | 1100 | 2.8298 |
| 0.6878 | 5.09 | 1150 | 2.9925 |
| 0.6807 | 5.31 | 1200 | 2.9388 |
| 0.6139 | 5.53 | 1250 | 3.0299 |
| 0.7792 | 5.75 | 1300 | 2.4786 |
| 0.6837 | 5.97 | 1350 | 3.0713 |
| 0.4388 | 6.19 | 1400 | 3.2049 |
| 0.5377 | 6.42 | 1450 | 3.1746 |
| 0.6028 | 6.64 | 1500 | 3.1708 |
| 0.6295 | 6.86 | 1550 | 3.4203 |
| 0.5441 | 7.08 | 1600 | 3.3876 |
| 0.4105 | 7.3 | 1650 | 3.4065 |
| 0.3848 | 7.52 | 1700 | 3.4103 |
| 0.507 | 7.74 | 1750 | 3.2958 |
| 0.3741 | 7.96 | 1800 | 3.9773 |
| 0.297 | 8.19 | 1850 | 4.1055 |
| 0.4131 | 8.41 | 1900 | 3.7588 |
| 0.3332 | 8.63 | 1950 | 3.6752 |
| 0.2608 | 8.85 | 2000 | 4.2848 |
| 0.2652 | 9.07 | 2050 | 3.6098 |
| 0.1967 | 9.29 | 2100 | 3.7971 |
| 0.3771 | 9.51 | 2150 | 3.6206 |
| 0.1677 | 9.73 | 2200 | 4.3747 |
| 0.1711 | 9.96 | 2250 | 4.2229 |
| 0.0949 | 10.18 | 2300 | 4.5390 |
| 0.1804 | 10.4 | 2350 | 4.3297 |
| 0.1313 | 10.62 | 2400 | 4.3467 |
| 0.2292 | 10.84 | 2450 | 4.4368 |
| 0.1342 | 11.06 | 2500 | 4.2101 |
| 0.1411 | 11.28 | 2550 | 4.2723 |
| 0.0907 | 11.5 | 2600 | 4.3821 |
| 0.1656 | 11.73 | 2650 | 4.3373 |
| 0.0757 | 11.95 | 2700 | 4.3297 |
| 0.1571 | 12.17 | 2750 | 4.1193 |
| 0.1015 | 12.39 | 2800 | 4.2492 |
| 0.0785 | 12.61 | 2850 | 4.3998 |
| 0.1003 | 12.83 | 2900 | 4.3994 |
| 0.0884 | 13.05 | 2950 | 4.4914 |
| 0.0261 | 13.27 | 3000 | 4.5869 |
| 0.1138 | 13.5 | 3050 | 4.4546 |
| 0.1481 | 13.72 | 3100 | 4.6197 |
| 0.0814 | 13.94 | 3150 | 4.4034 |
| 0.0079 | 14.16 | 3200 | 4.7623 |
| 0.0641 | 14.38 | 3250 | 4.5933 |
| 0.0499 | 14.6 | 3300 | 4.6543 |
| 0.1052 | 14.82 | 3350 | 4.2287 |
| 0.0775 | 15.04 | 3400 | 4.3898 |
| 0.0561 | 15.27 | 3450 | 4.5852 |
| 0.0443 | 15.49 | 3500 | 4.6139 |
| 0.006 | 15.71 | 3550 | 4.7572 |
| 0.1517 | 15.93 | 3600 | 4.5492 |
| 0.0473 | 16.15 | 3650 | 4.6196 |
| 0.0534 | 16.37 | 3700 | 4.5726 |
| 0.0069 | 16.59 | 3750 | 4.7234 |
| 0.0515 | 16.81 | 3800 | 4.7475 |
| 0.0187 | 17.04 | 3850 | 4.7945 |
| 0.0616 | 17.26 | 3900 | 4.7330 |
| 0.0385 | 17.48 | 3950 | 4.6939 |
| 0.0377 | 17.7 | 4000 | 4.7966 |
| 0.0039 | 17.92 | 4050 | 4.7507 |
| 0.027 | 18.14 | 4100 | 4.7653 |
| 0.0193 | 18.36 | 4150 | 4.7589 |
| 0.0318 | 18.58 | 4200 | 4.8174 |
| 0.0094 | 18.81 | 4250 | 4.8776 |
| 0.035 | 19.03 | 4300 | 4.9437 |
| 0.0118 | 19.25 | 4350 | 4.9520 |
| 0.0036 | 19.47 | 4400 | 4.9428 |
| 0.0082 | 19.69 | 4450 | 4.9414 |
| 0.004 | 19.91 | 4500 | 4.9306 |

Framework versions

  • Transformers 4.32.0
  • Pytorch 2.0.0+cu118
  • Datasets 2.17.1
  • Tokenizers 0.13.2