
2024-01-02_one_stage_subgraphs_entropyreg_txt_vis_conc_6_ramp

This model is a fine-tuned version of microsoft/layoutlmv3-base on an unknown dataset. It achieves the following results on the evaluation set:

  • Loss: 1.1266
  • Accuracy: 0.705
  • Exit 0 Accuracy: 0.195
  • Exit 1 Accuracy: 0.7025
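
Since the checkpoint adds early-exit heads on top of LayoutLMv3, loading it with stock `transformers` classes may not restore those heads. The sketch below assumes a standard sequence-classification setup and a hypothetical Hub repo id; adjust both to match the actual training code.

```python
# Minimal loading sketch — the marked items are assumptions, not confirmed details.
from transformers import AutoProcessor, LayoutLMv3ForSequenceClassification

# Hypothetical repo id: substitute the real Hub path for this checkpoint.
repo_id = "your-username/2024-01-02_one_stage_subgraphs_entropyreg_txt_vis_conc_6_ramp"

# The base model's processor handles the image + text + layout inputs.
processor = AutoProcessor.from_pretrained("microsoft/layoutlmv3-base")

# Assumes a standard classification head; the exit-0/exit-1 heads reported
# above likely require the custom model class used during training.
model = LayoutLMv3ForSequenceClassification.from_pretrained(repo_id)
```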

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 2e-05
  • train_batch_size: 8
  • eval_batch_size: 4
  • seed: 42
  • gradient_accumulation_steps: 24
  • total_train_batch_size: 192
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 60
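
As a point of reference, these values map onto the Hugging Face `TrainingArguments` roughly as follows (a minimal sketch: the output directory is a placeholder, and anything not listed above is a library default, not a confirmed setting):

```python
from transformers import TrainingArguments

# Sketch of the configuration implied by the list above. Note that
# total_train_batch_size (192) is derived rather than passed directly:
# train_batch_size (8) * gradient_accumulation_steps (24).
training_args = TrainingArguments(
    output_dir="2024-01-02_one_stage_subgraphs_entropyreg_txt_vis_conc_6_ramp",  # placeholder
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=4,
    seed=42,
    gradient_accumulation_steps=24,
    lr_scheduler_type="linear",
    num_train_epochs=60,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```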

Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | Exit 0 Accuracy | Exit 1 Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------------:|:---------------:|
| No log        | 0.96  | 4    | 2.7544          | 0.115    | 0.0575          | 0.0625          |
| No log        | 1.96  | 8    | 2.6911          | 0.135    | 0.125           | 0.0625          |
| No log        | 2.96  | 12   | 2.6410          | 0.1775   | 0.1225          | 0.18            |
| No log        | 3.96  | 16   | 2.5664          | 0.2025   | 0.125           | 0.1825          |
| No log        | 4.96  | 20   | 2.5036          | 0.2475   | 0.1225          | 0.2475          |
| No log        | 5.96  | 24   | 2.4172          | 0.28     | 0.12            | 0.2275          |
| No log        | 6.96  | 28   | 2.3247          | 0.3      | 0.1275          | 0.2225          |
| No log        | 7.96  | 32   | 2.2355          | 0.36     | 0.14            | 0.2525          |
| No log        | 8.96  | 36   | 2.1384          | 0.4025   | 0.1375          | 0.315           |
| No log        | 9.96  | 40   | 2.0150          | 0.465    | 0.14            | 0.3475          |
| No log        | 10.96 | 44   | 1.9193          | 0.4925   | 0.1425          | 0.37            |
| No log        | 11.96 | 48   | 1.7777          | 0.5375   | 0.145           | 0.4325          |
| No log        | 12.96 | 52   | 1.6960          | 0.56     | 0.15            | 0.5             |
| No log        | 13.96 | 56   | 1.5905          | 0.59     | 0.155           | 0.49            |
| No log        | 14.96 | 60   | 1.5197          | 0.625    | 0.155           | 0.5275          |
| No log        | 15.96 | 64   | 1.4335          | 0.6475   | 0.1525          | 0.5425          |
| No log        | 16.96 | 68   | 1.3831          | 0.6575   | 0.1575          | 0.5675          |
| No log        | 17.96 | 72   | 1.3216          | 0.6775   | 0.155           | 0.575           |
| No log        | 18.96 | 76   | 1.2973          | 0.6825   | 0.1575          | 0.5825          |
| No log        | 19.96 | 80   | 1.2342          | 0.6975   | 0.1575          | 0.6025          |
| No log        | 20.96 | 84   | 1.2190          | 0.6825   | 0.16            | 0.605           |
| No log        | 21.96 | 88   | 1.1758          | 0.7125   | 0.1625          | 0.62            |
| No log        | 22.96 | 92   | 1.1612          | 0.685    | 0.1675          | 0.625           |
| No log        | 23.96 | 96   | 1.1329          | 0.6925   | 0.1675          | 0.64            |
| No log        | 24.96 | 100  | 1.1001          | 0.7125   | 0.1675          | 0.635           |
| No log        | 25.96 | 104  | 1.0943          | 0.7025   | 0.175           | 0.645           |
| No log        | 26.96 | 108  | 1.0794          | 0.7125   | 0.18            | 0.6475          |
| No log        | 27.96 | 112  | 1.0919          | 0.6925   | 0.185           | 0.6475          |
| No log        | 28.96 | 116  | 1.0630          | 0.72     | 0.1875          | 0.6575          |
| No log        | 29.96 | 120  | 1.0831          | 0.7      | 0.1875          | 0.655           |
| No log        | 30.96 | 124  | 1.0581          | 0.695    | 0.1875          | 0.6625          |
| No log        | 31.96 | 128  | 1.0588          | 0.715    | 0.1875          | 0.66            |
| No log        | 32.96 | 132  | 1.0624          | 0.6975   | 0.185           | 0.675           |
| No log        | 33.96 | 136  | 1.0355          | 0.71     | 0.1875          | 0.675           |
| No log        | 34.96 | 140  | 1.0777          | 0.6925   | 0.1875          | 0.665           |
| No log        | 35.96 | 144  | 1.0514          | 0.71     | 0.19            | 0.675           |
| No log        | 36.96 | 148  | 1.0678          | 0.7      | 0.1925          | 0.6825          |
| No log        | 37.96 | 152  | 1.0610          | 0.7025   | 0.1925          | 0.68            |
| No log        | 38.96 | 156  | 1.0726          | 0.7025   | 0.195           | 0.69            |
| No log        | 39.96 | 160  | 1.0818          | 0.7025   | 0.195           | 0.69            |
| No log        | 40.96 | 164  | 1.0893          | 0.6975   | 0.1925          | 0.685           |
| No log        | 41.96 | 168  | 1.0980          | 0.695    | 0.195           | 0.69            |
| No log        | 42.96 | 172  | 1.1009          | 0.7025   | 0.1925          | 0.6925          |
| No log        | 43.96 | 176  | 1.0896          | 0.705    | 0.1925          | 0.695           |
| No log        | 44.96 | 180  | 1.0697          | 0.7125   | 0.1925          | 0.695           |
| No log        | 45.96 | 184  | 1.1185          | 0.7025   | 0.1925          | 0.695           |
| No log        | 46.96 | 188  | 1.0956          | 0.705    | 0.1925          | 0.6925          |
| No log        | 47.96 | 192  | 1.1095          | 0.71     | 0.19            | 0.6975          |
| No log        | 48.96 | 196  | 1.1233          | 0.7075   | 0.1925          | 0.7025          |
| No log        | 49.96 | 200  | 1.1281          | 0.705    | 0.1925          | 0.7025          |
| No log        | 50.96 | 204  | 1.1428          | 0.6975   | 0.1925          | 0.7025          |
| No log        | 51.96 | 208  | 1.1292          | 0.7025   | 0.1925          | 0.71            |
| No log        | 52.96 | 212  | 1.1218          | 0.7025   | 0.19            | 0.7125          |
| No log        | 53.96 | 216  | 1.1143          | 0.7075   | 0.1925          | 0.7025          |
| No log        | 54.96 | 220  | 1.1192          | 0.7125   | 0.195           | 0.7025          |
| No log        | 55.96 | 224  | 1.1338          | 0.715    | 0.195           | 0.7025          |
| No log        | 56.96 | 228  | 1.1333          | 0.71     | 0.195           | 0.7075          |
| No log        | 57.96 | 232  | 1.1291          | 0.7025   | 0.195           | 0.7025          |
| No log        | 58.96 | 236  | 1.1268          | 0.705    | 0.195           | 0.705           |
| No log        | 59.96 | 240  | 1.1266          | 0.705    | 0.195           | 0.7025          |

Framework versions

  • Transformers 4.26.1
  • Pytorch 1.13.1.post200
  • Datasets 2.9.0
  • Tokenizers 0.13.2
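
A quick way to compare a local environment against these pins (a convenience sketch, not part of the original card):

```python
# Compare installed versions against the pins listed above.
import datasets
import tokenizers
import torch
import transformers

pins = [
    (transformers, "4.26.1"),
    (torch, "1.13.1.post200"),
    (datasets, "2.9.0"),
    (tokenizers, "0.13.2"),
]
for module, expected in pins:
    print(f"{module.__name__}: installed {module.__version__}, card lists {expected}")
```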