
videomae-base-ipm_all_videos_gb

This model is a fine-tuned version of MCG-NJU/videomae-base on an unknown dataset. It achieves the following results on the evaluation set:

  • Loss: 1.2748
  • Accuracy: 0.6870
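
As a usage sketch only (this card does not yet document an inference recipe), the snippet below shows how a VideoMAE classification checkpoint like this one could be loaded with the Hugging Face Transformers API. The checkpoint path, the dummy 16-frame clip, and the 224x224 frame size are assumptions based on VideoMAE-base defaults, not details stated in this card.

```python
import numpy as np
import torch
from transformers import VideoMAEForVideoClassification, VideoMAEImageProcessor

# Placeholder: replace with the Hub repo id or local directory of this checkpoint.
checkpoint = "path/to/videomae-base-ipm_all_videos_gb"

processor = VideoMAEImageProcessor.from_pretrained(checkpoint)
model = VideoMAEForVideoClassification.from_pretrained(checkpoint)
model.eval()

# Dummy clip: 16 RGB frames of 224x224 (VideoMAE-base defaults); replace with real frames.
video = [np.random.randint(0, 256, (224, 224, 3), dtype=np.uint8) for _ in range(16)]

inputs = processor(video, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

predicted_class = logits.argmax(-1).item()
print(model.config.id2label[predicted_class])
```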

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a TrainingArguments sketch follows the list):

  • learning_rate: 5e-05
  • train_batch_size: 4
  • eval_batch_size: 4
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_ratio: 0.1
  • training_steps: 4800
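
For orientation, the sketch below maps the list above onto a Hugging Face TrainingArguments object. The output directory and the evaluation cadence (every 60 steps, inferred from the results table that follows) are assumptions, and the model, data, and Trainer wiring are omitted.

```python
from transformers import TrainingArguments

# Hypothetical output directory; the remaining values mirror the hyperparameter list above.
training_args = TrainingArguments(
    output_dir="videomae-base-ipm_all_videos_gb",
    learning_rate=5e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    max_steps=4800,
    evaluation_strategy="steps",  # assumption: metrics in the results table appear every 60 steps
    eval_steps=60,
)
```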

Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.5051 | 0.01 | 60 | 2.5234 | 0.0870 |
| 2.4957 | 1.01 | 120 | 2.5401 | 0.1217 |
| 2.5475 | 2.01 | 180 | 2.5675 | 0.0870 |
| 2.4659 | 3.01 | 240 | 2.5836 | 0.0957 |
| 2.2644 | 4.01 | 300 | 2.5035 | 0.0696 |
| 2.3548 | 5.01 | 360 | 2.2569 | 0.1217 |
| 2.0341 | 6.01 | 420 | 2.3958 | 0.1565 |
| 2.2919 | 7.01 | 480 | 2.6096 | 0.0696 |
| 2.0857 | 8.01 | 540 | 2.3223 | 0.1217 |
| 1.7473 | 9.01 | 600 | 2.5414 | 0.1652 |
| 1.885 | 10.01 | 660 | 1.7822 | 0.3043 |
| 1.9496 | 11.01 | 720 | 1.8052 | 0.3130 |
| 1.2315 | 12.01 | 780 | 2.1955 | 0.2435 |
| 1.3549 | 13.01 | 840 | 2.1262 | 0.3130 |
| 1.5121 | 14.01 | 900 | 2.0316 | 0.2783 |
| 1.4504 | 15.01 | 960 | 1.7596 | 0.2957 |
| 1.2991 | 16.01 | 1020 | 1.6413 | 0.3652 |
| 1.2299 | 17.01 | 1080 | 1.5417 | 0.4087 |
| 1.2965 | 18.01 | 1140 | 1.7243 | 0.3739 |
| 1.2431 | 19.01 | 1200 | 1.7556 | 0.3478 |
| 1.3807 | 20.01 | 1260 | 1.4580 | 0.4435 |
| 1.3961 | 21.01 | 1320 | 1.6514 | 0.4 |
| 1.0119 | 22.01 | 1380 | 1.5449 | 0.3391 |
| 1.3799 | 23.01 | 1440 | 1.5126 | 0.3304 |
| 1.6871 | 24.01 | 1500 | 2.0675 | 0.2783 |
| 1.2707 | 25.01 | 1560 | 1.7128 | 0.3739 |
| 1.1495 | 26.01 | 1620 | 1.6387 | 0.3217 |
| 1.6151 | 27.01 | 1680 | 1.6192 | 0.3913 |
| 1.0587 | 28.01 | 1740 | 1.6008 | 0.4522 |
| 1.2169 | 29.01 | 1800 | 1.6739 | 0.4348 |
| 1.1116 | 30.01 | 1860 | 1.7693 | 0.3913 |
| 1.0939 | 31.01 | 1920 | 1.6540 | 0.3913 |
| 0.9307 | 32.01 | 1980 | 1.5583 | 0.4957 |
| 0.9539 | 33.01 | 2040 | 1.8836 | 0.4174 |
| 0.9804 | 34.01 | 2100 | 1.5656 | 0.4522 |
| 1.334 | 35.01 | 2160 | 1.5375 | 0.4609 |
| 1.0897 | 36.01 | 2220 | 1.4327 | 0.4087 |
| 0.864 | 37.01 | 2280 | 1.6372 | 0.3913 |
| 0.9678 | 38.01 | 2340 | 1.4537 | 0.4609 |
| 1.3184 | 39.01 | 2400 | 1.3085 | 0.4783 |
| 1.1462 | 40.01 | 2460 | 1.4954 | 0.4696 |
| 0.7875 | 41.01 | 2520 | 1.4692 | 0.4870 |
| 0.9552 | 42.01 | 2580 | 1.3797 | 0.4174 |
| 0.8053 | 43.01 | 2640 | 1.3216 | 0.5043 |
| 0.9231 | 44.01 | 2700 | 1.2134 | 0.5739 |
| 0.734 | 45.01 | 2760 | 1.1676 | 0.5304 |
| 0.5427 | 46.01 | 2820 | 1.2179 | 0.4783 |
| 0.7171 | 47.01 | 2880 | 1.2749 | 0.5304 |
| 0.6977 | 48.01 | 2940 | 1.3707 | 0.5304 |
| 0.6911 | 49.01 | 3000 | 1.2520 | 0.5478 |
| 0.6166 | 50.01 | 3060 | 1.3687 | 0.5304 |
| 0.4025 | 51.01 | 3120 | 1.4041 | 0.5652 |
| 0.6147 | 52.01 | 3180 | 1.3030 | 0.6435 |
| 0.5787 | 53.01 | 3240 | 1.4109 | 0.5913 |
| 0.7157 | 54.01 | 3300 | 1.3183 | 0.6 |
| 0.3391 | 55.01 | 3360 | 1.4333 | 0.5913 |
| 0.7482 | 56.01 | 3420 | 1.4549 | 0.5826 |
| 0.5182 | 57.01 | 3480 | 1.4193 | 0.5652 |
| 0.7383 | 58.01 | 3540 | 1.4043 | 0.5565 |
| 0.8862 | 59.01 | 3600 | 1.4041 | 0.6 |
| 0.3481 | 60.01 | 3660 | 1.3164 | 0.6435 |
| 0.763 | 61.01 | 3720 | 1.2947 | 0.5913 |
| 0.7397 | 62.01 | 3780 | 1.2785 | 0.6696 |
| 0.514 | 63.01 | 3840 | 1.3180 | 0.6522 |
| 0.6582 | 64.01 | 3900 | 1.3520 | 0.6696 |
| 0.3929 | 65.01 | 3960 | 1.3391 | 0.6609 |
| 0.7623 | 66.01 | 4020 | 1.4349 | 0.6348 |
| 0.6235 | 67.01 | 4080 | 1.2897 | 0.6522 |
| 0.449 | 68.01 | 4140 | 1.3150 | 0.6696 |
| 0.639 | 69.01 | 4200 | 1.4241 | 0.6087 |
| 0.473 | 70.01 | 4260 | 1.2578 | 0.6609 |
| 0.5478 | 71.01 | 4320 | 1.2770 | 0.6522 |
| 0.4732 | 72.01 | 4380 | 1.2893 | 0.6783 |
| 0.5489 | 73.01 | 4440 | 1.2195 | 0.7043 |
| 0.3907 | 74.01 | 4500 | 1.2523 | 0.6957 |
| 0.2572 | 75.01 | 4560 | 1.2149 | 0.7043 |
| 0.5022 | 76.01 | 4620 | 1.2934 | 0.6696 |
| 0.2958 | 77.01 | 4680 | 1.2726 | 0.6783 |
| 0.7009 | 78.01 | 4740 | 1.2779 | 0.6957 |
| 0.49 | 79.01 | 4800 | 1.2748 | 0.6870 |

Framework versions

  • Transformers 4.29.1
  • Pytorch 2.0.1+cu117
  • Datasets 2.12.0
  • Tokenizers 0.13.3
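
As an optional sanity check (a sketch using the standard PyPI package names), a local environment can be compared against the versions listed above:

```python
import datasets
import tokenizers
import torch
import transformers

# Versions reported in this card: Transformers 4.29.1, PyTorch 2.0.1+cu117,
# Datasets 2.12.0, Tokenizers 0.13.3.
for name, module in [
    ("transformers", transformers),
    ("torch", torch),
    ("datasets", datasets),
    ("tokenizers", tokenizers),
]:
    print(f"{name}: {module.__version__}")
```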