
test_mae_flysheet

This model is a fine-tuned version of facebook/vit-mae-base on the davanstrien/flysheet dataset. It achieves the following results on the evaluation set:

  • Loss: 0.2675
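
The checkpoint can be loaded with the standard transformers API. The sketch below is illustrative rather than definitive: the Hub repository id is an assumption based on the model name (the card does not state it), and any RGB image can stand in for the example file.

```python
# A minimal sketch of loading this checkpoint and reproducing the
# reported metric on a single image. The repository id is an assumption;
# substitute the real Hub id or a local path.
import torch
from PIL import Image
from transformers import AutoImageProcessor, ViTMAEForPreTraining

model_id = "davanstrien/test_mae_flysheet"  # assumed Hub id
processor = AutoImageProcessor.from_pretrained(model_id)
model = ViTMAEForPreTraining.from_pretrained(model_id)

image = Image.open("example_flysheet.jpg").convert("RGB")  # any test image
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# For ViT-MAE pre-training, `loss` is the mean squared error over the
# masked patches, the same quantity reported as "Loss" above.
print(outputs.loss.item())
```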

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed
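
Although the card leaves this section open, the dataset named above is on the Hub, so it can be inspected directly. A minimal sketch, assuming the dataset loads with the standard datasets API:

```python
# Inspect davanstrien/flysheet; the split and column names printed here
# are whatever the dataset actually defines (a "train" split is assumed).
from datasets import load_dataset

dataset = load_dataset("davanstrien/flysheet")
print(dataset)               # available splits and their sizes
print(dataset["train"][0])   # first training example
```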

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a sketch of the equivalent TrainingArguments follows the list):

  • learning_rate: 3.75e-05
  • train_batch_size: 64
  • eval_batch_size: 64
  • seed: 1337
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: cosine
  • lr_scheduler_warmup_ratio: 0.05
  • num_epochs: 100.0
  • mixed_precision_training: Native AMP
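
A hedged sketch of how these values map onto TrainingArguments. The exact training script is not given in the card, so this is an approximation, not the original invocation; the Adam betas and epsilon listed above are the library defaults, so they are not set explicitly.

```python
from transformers import TrainingArguments

# Approximate TrainingArguments mirroring the hyperparameters above.
training_args = TrainingArguments(
    output_dir="test_mae_flysheet",   # assumed output directory
    learning_rate=3.75e-5,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    seed=1337,
    lr_scheduler_type="cosine",
    warmup_ratio=0.05,
    num_train_epochs=100.0,
    fp16=True,                        # "Native AMP" mixed precision
)
```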

Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.284         | 1.0   | 28   | 2.2812          |
| 2.137         | 2.0   | 56   | 2.0288          |
| 1.6016        | 3.0   | 84   | 1.2437          |
| 0.8055        | 4.0   | 112  | 0.7419          |
| 0.5304        | 5.0   | 140  | 0.5151          |
| 0.4873        | 6.0   | 168  | 0.4884          |
| 0.442         | 7.0   | 196  | 0.4441          |
| 0.4039        | 8.0   | 224  | 0.4159          |
| 0.3866        | 9.0   | 252  | 0.3975          |
| 0.391         | 10.0  | 280  | 0.3869          |
| 0.3549        | 11.0  | 308  | 0.3801          |
| 0.3462        | 12.0  | 336  | 0.3577          |
| 0.3402        | 13.0  | 364  | 0.3519          |
| 0.3357        | 14.0  | 392  | 0.3447          |
| 0.3474        | 15.0  | 420  | 0.3369          |
| 0.3254        | 16.0  | 448  | 0.3386          |
| 0.3033        | 17.0  | 476  | 0.3294          |
| 0.3047        | 18.0  | 504  | 0.3274          |
| 0.3103        | 19.0  | 532  | 0.3209          |
| 0.3067        | 20.0  | 560  | 0.3186          |
| 0.2959        | 21.0  | 588  | 0.3190          |
| 0.2899        | 22.0  | 616  | 0.3147          |
| 0.2872        | 23.0  | 644  | 0.3082          |
| 0.2956        | 24.0  | 672  | 0.3070          |
| 0.2865        | 25.0  | 700  | 0.3072          |
| 0.2947        | 26.0  | 728  | 0.3072          |
| 0.2811        | 27.0  | 756  | 0.3131          |
| 0.2935        | 28.0  | 784  | 0.3069          |
| 0.2814        | 29.0  | 812  | 0.3043          |
| 0.2753        | 30.0  | 840  | 0.2984          |
| 0.2823        | 31.0  | 868  | 0.2995          |
| 0.2962        | 32.0  | 896  | 0.3012          |
| 0.2869        | 33.0  | 924  | 0.3050          |
| 0.2833        | 34.0  | 952  | 0.2960          |
| 0.2892        | 35.0  | 980  | 0.3039          |
| 0.2764        | 36.0  | 1008 | 0.3010          |
| 0.2807        | 37.0  | 1036 | 0.2998          |
| 0.2843        | 38.0  | 1064 | 0.2989          |
| 0.2808        | 39.0  | 1092 | 0.2970          |
| 0.2862        | 40.0  | 1120 | 0.2940          |
| 0.2601        | 41.0  | 1148 | 0.2952          |
| 0.2742        | 42.0  | 1176 | 0.2940          |
| 0.2791        | 43.0  | 1204 | 0.2997          |
| 0.2759        | 44.0  | 1232 | 0.2951          |
| 0.2819        | 45.0  | 1260 | 0.2896          |
| 0.287         | 46.0  | 1288 | 0.2938          |
| 0.2711        | 47.0  | 1316 | 0.2973          |
| 0.2782        | 48.0  | 1344 | 0.2946          |
| 0.2674        | 49.0  | 1372 | 0.2913          |
| 0.268         | 50.0  | 1400 | 0.2944          |
| 0.2624        | 51.0  | 1428 | 0.2940          |
| 0.2842        | 52.0  | 1456 | 0.2978          |
| 0.2753        | 53.0  | 1484 | 0.2951          |
| 0.2733        | 54.0  | 1512 | 0.2880          |
| 0.2782        | 55.0  | 1540 | 0.2969          |
| 0.2789        | 56.0  | 1568 | 0.2919          |
| 0.2815        | 57.0  | 1596 | 0.2916          |
| 0.2629        | 58.0  | 1624 | 0.2947          |
| 0.2716        | 59.0  | 1652 | 0.2828          |
| 0.2623        | 60.0  | 1680 | 0.2924          |
| 0.2773        | 61.0  | 1708 | 0.2765          |
| 0.268         | 62.0  | 1736 | 0.2754          |
| 0.2839        | 63.0  | 1764 | 0.2744          |
| 0.2684        | 64.0  | 1792 | 0.2744          |
| 0.2865        | 65.0  | 1820 | 0.2716          |
| 0.2845        | 66.0  | 1848 | 0.2769          |
| 0.2663        | 67.0  | 1876 | 0.2754          |
| 0.269         | 68.0  | 1904 | 0.2737          |
| 0.2681        | 69.0  | 1932 | 0.2697          |
| 0.2748        | 70.0  | 1960 | 0.2779          |
| 0.2769        | 71.0  | 1988 | 0.2728          |
| 0.2805        | 72.0  | 2016 | 0.2729          |
| 0.2771        | 73.0  | 2044 | 0.2728          |
| 0.2717        | 74.0  | 2072 | 0.2749          |
| 0.267         | 75.0  | 2100 | 0.2732          |
| 0.2812        | 76.0  | 2128 | 0.2743          |
| 0.2749        | 77.0  | 2156 | 0.2739          |
| 0.2746        | 78.0  | 2184 | 0.2730          |
| 0.2707        | 79.0  | 2212 | 0.2743          |
| 0.2644        | 80.0  | 2240 | 0.2740          |
| 0.2691        | 81.0  | 2268 | 0.2727          |
| 0.2679        | 82.0  | 2296 | 0.2771          |
| 0.2748        | 83.0  | 2324 | 0.2744          |
| 0.2744        | 84.0  | 2352 | 0.2703          |
| 0.2715        | 85.0  | 2380 | 0.2733          |
| 0.2682        | 86.0  | 2408 | 0.2715          |
| 0.2641        | 87.0  | 2436 | 0.2722          |
| 0.274         | 88.0  | 2464 | 0.2748          |
| 0.2669        | 89.0  | 2492 | 0.2753          |
| 0.2707        | 90.0  | 2520 | 0.2724          |
| 0.2755        | 91.0  | 2548 | 0.2703          |
| 0.2769        | 92.0  | 2576 | 0.2737          |
| 0.2659        | 93.0  | 2604 | 0.2721          |
| 0.2674        | 94.0  | 2632 | 0.2763          |
| 0.2723        | 95.0  | 2660 | 0.2723          |
| 0.2723        | 96.0  | 2688 | 0.2744          |
| 0.272         | 97.0  | 2716 | 0.2686          |
| 0.27          | 98.0  | 2744 | 0.2728          |
| 0.2721        | 99.0  | 2772 | 0.2743          |
| 0.2692        | 100.0 | 2800 | 0.2748          |

Framework versions

  • Transformers 4.18.0.dev0
  • Pytorch 1.10.0+cu111
  • Datasets 1.18.4
  • Tokenizers 0.11.6