layoutlm-captive-corp-70

This model is a fine-tuned version of microsoft/layoutlmv3-base on the layoutlmv3 dataset. It achieves the following results on the evaluation set:

  • Loss: 0.1795
  • Precision: 0.9517
  • Recall: 0.9680
  • F1: 0.9598
  • Accuracy: 0.9713
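As a quick sanity check, the reported F1 is the harmonic mean of the reported precision and recall; a minimal sketch:

```python
# F1 is the harmonic mean of precision and recall.
precision = 0.9517
recall = 0.9680
f1 = 2 * precision * recall / (precision + recall)
print(round(f1, 4))  # → 0.9598, matching the reported F1
```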

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 1e-05
  • train_batch_size: 2
  • eval_batch_size: 2
  • seed: 42
  • optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08 (no additional optimizer arguments)
  • lr_scheduler_type: linear
  • num_epochs: 50
  • mixed_precision_training: Native AMP
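These settings are consistent with the step counts in the results table below; a small sketch of the arithmetic (the training-split size of roughly 46 examples is an inference from batch size and steps per epoch, not stated in the card):

```python
# The results table shows 23 optimizer steps per epoch.
steps_per_epoch = 23
train_batch_size = 2
num_epochs = 50

# With no gradient accumulation, each step consumes one batch,
# so the training split likely holds about 46 examples.
approx_train_examples = steps_per_epoch * train_batch_size
total_steps = steps_per_epoch * num_epochs

print(approx_train_examples)  # → 46
print(total_steps)            # → 1150, the final step in the table
```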

Training results

| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1     | Accuracy |
|---------------|-------|------|-----------------|-----------|--------|--------|----------|
| 0.0311        | 1.0   | 23   | 0.1756          | 0.9478    | 0.9658 | 0.9567 | 0.9708   |
| 0.0282        | 2.0   | 46   | 0.1781          | 0.9486    | 0.9649 | 0.9567 | 0.9705   |
| 0.0296        | 3.0   | 69   | 0.1780          | 0.9481    | 0.9635 | 0.9558 | 0.9697   |
| 0.0252        | 4.0   | 92   | 0.1870          | 0.9463    | 0.9608 | 0.9535 | 0.9674   |
| 0.0255        | 5.0   | 115  | 0.1780          | 0.9476    | 0.9622 | 0.9549 | 0.9697   |
| 0.022         | 6.0   | 138  | 0.1779          | 0.9469    | 0.9644 | 0.9556 | 0.9700   |
| 0.0196        | 7.0   | 161  | 0.1806          | 0.9503    | 0.9649 | 0.9575 | 0.9695   |
| 0.0184        | 8.0   | 184  | 0.1827          | 0.9524    | 0.9653 | 0.9588 | 0.9705   |
| 0.0173        | 9.0   | 207  | 0.1888          | 0.9489    | 0.9626 | 0.9557 | 0.9687   |
| 0.0175        | 10.0  | 230  | 0.1799          | 0.9500    | 0.9667 | 0.9582 | 0.9708   |
| 0.0155        | 11.0  | 253  | 0.1814          | 0.9486    | 0.9653 | 0.9569 | 0.9710   |
| 0.0144        | 12.0  | 276  | 0.1853          | 0.9473    | 0.9640 | 0.9556 | 0.9702   |
| 0.0139        | 13.0  | 299  | 0.1840          | 0.9469    | 0.9649 | 0.9558 | 0.9697   |
| 0.0133        | 14.0  | 322  | 0.1797          | 0.9504    | 0.9662 | 0.9582 | 0.9718   |
| 0.0126        | 15.0  | 345  | 0.1832          | 0.9508    | 0.9671 | 0.9589 | 0.9710   |
| 0.0121        | 16.0  | 368  | 0.1795          | 0.9517    | 0.9680 | 0.9598 | 0.9713   |
| 0.0117        | 17.0  | 391  | 0.1844          | 0.9504    | 0.9658 | 0.9580 | 0.9705   |
| 0.0108        | 18.0  | 414  | 0.1850          | 0.9500    | 0.9662 | 0.9580 | 0.9702   |
| 0.0105        | 19.0  | 437  | 0.1847          | 0.9517    | 0.9671 | 0.9593 | 0.9713   |
| 0.0107        | 20.0  | 460  | 0.1846          | 0.9482    | 0.9653 | 0.9567 | 0.9702   |
| 0.0098        | 21.0  | 483  | 0.1863          | 0.9495    | 0.9662 | 0.9578 | 0.9702   |
| 0.0099        | 22.0  | 506  | 0.1871          | 0.9473    | 0.9635 | 0.9553 | 0.9692   |
| 0.0091        | 23.0  | 529  | 0.1879          | 0.9482    | 0.9644 | 0.9562 | 0.9700   |
| 0.0091        | 24.0  | 552  | 0.1859          | 0.9517    | 0.9671 | 0.9593 | 0.9713   |
| 0.0091        | 25.0  | 575  | 0.1849          | 0.9479    | 0.9671 | 0.9574 | 0.9713   |
| 0.0088        | 26.0  | 598  | 0.1883          | 0.9495    | 0.9662 | 0.9578 | 0.9702   |
| 0.0083        | 27.0  | 621  | 0.1884          | 0.9495    | 0.9658 | 0.9576 | 0.9700   |
| 0.0079        | 28.0  | 644  | 0.1890          | 0.9499    | 0.9658 | 0.9578 | 0.9710   |
| 0.008         | 29.0  | 667  | 0.1921          | 0.9491    | 0.9658 | 0.9574 | 0.9700   |
| 0.0075        | 30.0  | 690  | 0.1904          | 0.9504    | 0.9662 | 0.9582 | 0.9713   |
| 0.0075        | 31.0  | 713  | 0.1907          | 0.9504    | 0.9667 | 0.9585 | 0.9710   |
| 0.0075        | 32.0  | 736  | 0.1904          | 0.9504    | 0.9662 | 0.9582 | 0.9710   |
| 0.008         | 33.0  | 759  | 0.1935          | 0.9508    | 0.9653 | 0.9580 | 0.9705   |
| 0.0071        | 34.0  | 782  | 0.1950          | 0.9486    | 0.9644 | 0.9564 | 0.9700   |
| 0.007         | 35.0  | 805  | 0.1934          | 0.9478    | 0.9644 | 0.9560 | 0.9705   |
| 0.0072        | 36.0  | 828  | 0.1938          | 0.9486    | 0.9653 | 0.9569 | 0.9702   |
| 0.0068        | 37.0  | 851  | 0.1946          | 0.9482    | 0.9649 | 0.9565 | 0.9697   |
| 0.0066        | 38.0  | 874  | 0.1946          | 0.9486    | 0.9653 | 0.9569 | 0.9700   |
| 0.0068        | 39.0  | 897  | 0.1947          | 0.9508    | 0.9658 | 0.9582 | 0.9705   |
| 0.007         | 40.0  | 920  | 0.1942          | 0.9486    | 0.9640 | 0.9562 | 0.9697   |
| 0.0066        | 41.0  | 943  | 0.1937          | 0.9490    | 0.9649 | 0.9569 | 0.9702   |
| 0.0065        | 42.0  | 966  | 0.1943          | 0.9478    | 0.9644 | 0.9560 | 0.9695   |
| 0.0067        | 43.0  | 989  | 0.1936          | 0.9508    | 0.9662 | 0.9584 | 0.9708   |
| 0.0071        | 44.0  | 1012 | 0.1944          | 0.9521    | 0.9667 | 0.9593 | 0.9710   |
| 0.0064        | 45.0  | 1035 | 0.1940          | 0.9517    | 0.9671 | 0.9593 | 0.9713   |
| 0.0063        | 46.0  | 1058 | 0.1938          | 0.9499    | 0.9658 | 0.9578 | 0.9708   |
| 0.0066        | 47.0  | 1081 | 0.1944          | 0.9499    | 0.9658 | 0.9578 | 0.9708   |
| 0.0063        | 48.0  | 1104 | 0.1955          | 0.9495    | 0.9653 | 0.9573 | 0.9705   |
| 0.007         | 49.0  | 1127 | 0.1955          | 0.9495    | 0.9653 | 0.9573 | 0.9705   |
| 0.0063        | 50.0  | 1150 | 0.1954          | 0.9495    | 0.9653 | 0.9573 | 0.9705   |
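With lr_scheduler_type: linear, the learning rate decays from 1e-05 toward zero over the 1150 training steps. A minimal sketch of that schedule, assuming zero warmup steps (the card does not state a warmup setting):

```python
def linear_lr(step, base_lr=1e-5, total_steps=1150, warmup_steps=0):
    """Linear warmup-then-decay schedule, mirroring the shape of
    transformers' get_linear_schedule_with_warmup."""
    if step < warmup_steps:
        return base_lr * step / max(1, warmup_steps)
    return base_lr * max(0.0, (total_steps - step) / max(1, total_steps - warmup_steps))

print(linear_lr(0))     # full base rate at the start
print(linear_lr(575))   # half the base rate midway (end of epoch 25)
print(linear_lr(1150))  # → 0.0 at the final step
```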

Framework versions

  • Transformers 4.49.0.dev0
  • Pytorch 2.5.1+cu121
  • Datasets 3.2.0
  • Tokenizers 0.21.0