
videomae-base-finetuned-isl-numbers_2

This model is a fine-tuned version of latif98/videomae-base-finetuned-isl-numbers on an unknown dataset. It achieves the following results on the evaluation set:

  • Loss: 1.2759
  • Accuracy: 0.6839

Model description

More information needed

Intended uses & limitations

More information needed
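
As a minimal usage sketch (not provided by the model authors), the checkpoint can presumably be loaded for video classification with the standard VideoMAE classes from Transformers; the 16-frame clip length, 224x224 frame size, and dummy input below are assumptions for illustration only.

```python
# Minimal inference sketch, not taken from the authors' training code.
# Assumptions: 16-frame clips, 224x224 RGB frames, labels stored in the model config.
import numpy as np
import torch
from transformers import VideoMAEImageProcessor, VideoMAEForVideoClassification

model_id = "latif98/videomae-base-finetuned-isl-numbers_2"
processor = VideoMAEImageProcessor.from_pretrained(model_id)
model = VideoMAEForVideoClassification.from_pretrained(model_id)

# Replace these dummy frames with 16 decoded RGB frames from a real video clip.
frames = [np.random.randint(0, 256, (224, 224, 3), dtype=np.uint8) for _ in range(16)]

inputs = processor(frames, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

predicted_label = model.config.id2label[logits.argmax(-1).item()]
print(predicted_label)
```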

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a sketch mapping them to TrainingArguments follows the list):

  • learning_rate: 5e-05
  • train_batch_size: 8
  • eval_batch_size: 8
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_ratio: 0.1
  • training_steps: 3800
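
For reference, the values above roughly correspond to the following Transformers TrainingArguments. The output_dir name and anything not listed in the card (evaluation schedule, mixed precision, gradient accumulation) are assumptions or defaults, not reported settings.

```python
# Sketch mapping the reported hyperparameters onto TrainingArguments.
# output_dir is assumed; unreported options are left at their defaults here.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="videomae-base-finetuned-isl-numbers_2",  # assumed name
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    max_steps=3800,
)
```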

Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|--------------:|------:|-----:|----------------:|---------:|
| 3.5502 | 0.02  | 76   | 3.4251 | 0.0945 |
| 3.1152 | 1.02  | 152  | 3.0364 | 0.2756 |
| 2.6365 | 2.02  | 228  | 2.6197 | 0.3780 |
| 2.3879 | 3.02  | 304  | 2.1519 | 0.4646 |
| 1.9396 | 4.02  | 380  | 2.0804 | 0.4173 |
| 1.9285 | 5.02  | 456  | 1.9335 | 0.4488 |
| 1.5843 | 6.02  | 532  | 1.7907 | 0.4803 |
| 1.2387 | 7.02  | 608  | 1.8962 | 0.3858 |
| 1.2578 | 8.02  | 684  | 1.7191 | 0.4488 |
| 0.9611 | 9.02  | 760  | 1.7362 | 0.4882 |
| 0.9247 | 10.02 | 836  | 1.3898 | 0.5906 |
| 0.8107 | 11.02 | 912  | 1.9588 | 0.4094 |
| 0.7618 | 12.02 | 988  | 1.1416 | 0.6614 |
| 0.7083 | 13.02 | 1064 | 1.2812 | 0.6614 |
| 0.7098 | 14.02 | 1140 | 1.4601 | 0.5197 |
| 0.4601 | 15.02 | 1216 | 1.1276 | 0.6693 |
| 0.5684 | 16.02 | 1292 | 1.4792 | 0.5591 |
| 0.5044 | 17.02 | 1368 | 1.1236 | 0.6614 |
| 0.4551 | 18.02 | 1444 | 1.3894 | 0.6063 |
| 0.3488 | 19.02 | 1520 | 1.2918 | 0.6614 |
| 0.4711 | 20.02 | 1596 | 1.2510 | 0.6299 |
| 0.3451 | 21.02 | 1672 | 1.1265 | 0.6693 |
| 0.394  | 22.02 | 1748 | 1.1676 | 0.6378 |
| 0.234  | 23.02 | 1824 | 1.0714 | 0.7087 |
| 0.2318 | 24.02 | 1900 | 1.2647 | 0.6378 |
| 0.4294 | 25.02 | 1976 | 1.0250 | 0.7480 |
| 0.2084 | 26.02 | 2052 | 1.1361 | 0.6850 |
| 0.1724 | 27.02 | 2128 | 0.8791 | 0.7402 |
| 0.1715 | 28.02 | 2204 | 0.7549 | 0.7559 |
| 0.2719 | 29.02 | 2280 | 0.7708 | 0.7717 |
| 0.2021 | 30.02 | 2356 | 1.1394 | 0.7165 |
| 0.0999 | 31.02 | 2432 | 0.7838 | 0.7717 |
| 0.1473 | 32.02 | 2508 | 1.3809 | 0.6457 |
| 0.0939 | 33.02 | 2584 | 0.7839 | 0.7874 |
| 0.0952 | 34.02 | 2660 | 1.0636 | 0.7008 |
| 0.2684 | 35.02 | 2736 | 0.9194 | 0.7323 |
| 0.1628 | 36.02 | 2812 | 0.7346 | 0.8031 |
| 0.0584 | 37.02 | 2888 | 1.0112 | 0.7323 |
| 0.0567 | 38.02 | 2964 | 1.0584 | 0.7323 |
| 0.1358 | 39.02 | 3040 | 1.0566 | 0.7323 |
| 0.0796 | 40.02 | 3116 | 0.9323 | 0.7480 |
| 0.0828 | 41.02 | 3192 | 0.7611 | 0.7953 |
| 0.0661 | 42.02 | 3268 | 0.7284 | 0.7874 |
| 0.0882 | 43.02 | 3344 | 0.6982 | 0.7953 |
| 0.0398 | 44.02 | 3420 | 0.8586 | 0.7717 |
| 0.2085 | 45.02 | 3496 | 0.7990 | 0.7717 |
| 0.0509 | 46.02 | 3572 | 0.7134 | 0.8268 |
| 0.0791 | 47.02 | 3648 | 0.6887 | 0.8189 |
| 0.0469 | 48.02 | 3724 | 0.7159 | 0.8031 |
| 0.0621 | 49.02 | 3800 | 0.7062 | 0.8031 |

Framework versions

  • Transformers 4.40.0
  • PyTorch 2.1.0+cu121
  • Datasets 2.18.0
  • Tokenizers 0.19.1