videomae-finetuned-v2

This model is a fine-tuned version of MCG-NJU/videomae-base on an unknown dataset. It achieves the following results on the evaluation set:

  • Loss: 0.4511
  • Accuracy: 0.825

Model description

More information needed

Intended uses & limitations

More information needed
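Although usage details are not documented yet, the input layout the model expects can be sketched. VideoMAE-base consumes clips of 16 frames at 224×224 resolution, arranged as `(batch, num_frames, channels, height, width)`; the mean/std values below are the ImageNet defaults that the VideoMAE image processor typically uses, assumed here rather than confirmed for this checkpoint. A minimal NumPy sketch of that preprocessing:

```python
import numpy as np

# Hypothetical clip: 16 RGB frames of 224x224, the default for videomae-base.
frames = np.random.randint(0, 256, size=(16, 224, 224, 3), dtype=np.uint8)

# Scale to [0, 1] and normalize with ImageNet mean/std (assumed processor defaults).
mean = np.array([0.485, 0.456, 0.406])
std = np.array([0.229, 0.224, 0.225])
pixels = (frames / 255.0 - mean) / std

# Reorder to (batch, num_frames, channels, height, width) for the model.
pixel_values = pixels.transpose(0, 3, 1, 2)[np.newaxis]
print(pixel_values.shape)  # (1, 16, 3, 224, 224)
```

In practice the `VideoMAEImageProcessor` for the base checkpoint performs this resizing and normalization; the sketch only illustrates the resulting tensor shape.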

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 5e-05
  • train_batch_size: 16
  • eval_batch_size: 16
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_ratio: 0.1
  • training_steps: 90
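With a warmup ratio of 0.1 over 90 steps, the learning rate ramps up linearly for the first 9 steps and then decays linearly to zero. A small sketch of that schedule (a hypothetical helper mirroring what a linear-with-warmup scheduler computes, not the library implementation):

```python
def linear_lr(step, base_lr=5e-5, total_steps=90, warmup_ratio=0.1):
    """Learning rate at a given optimizer step under linear warmup + decay."""
    warmup_steps = int(total_steps * warmup_ratio)  # 9 steps here
    if step < warmup_steps:
        return base_lr * step / warmup_steps        # linear ramp-up from 0
    # linear decay from base_lr down to 0 at the final step
    return base_lr * (total_steps - step) / (total_steps - warmup_steps)

print(linear_lr(0))   # 0.0 at the first step
print(linear_lr(9))   # 5e-05, the peak, at the end of warmup
print(linear_lr(90))  # 0.0 at the final step
```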

Training results

| Training Loss | Epoch   | Step | Validation Loss | Accuracy |
|:-------------:|:-------:|:----:|:---------------:|:--------:|
| No log        | 0.0556  | 5    | 0.6644          | 0.52     |
| 0.649         | 1.0556  | 10   | 0.5890          | 0.68     |
| 0.649         | 2.0556  | 15   | 0.6915          | 0.48     |
| 0.5368        | 3.0556  | 20   | 0.5428          | 0.48     |
| 0.5368        | 4.0556  | 25   | 0.2507          | 0.92     |
| 0.3519        | 5.0556  | 30   | 0.6503          | 0.68     |
| 0.3519        | 6.0556  | 35   | 0.6544          | 0.68     |
| 0.4579        | 7.0556  | 40   | 0.2332          | 0.92     |
| 0.4579        | 8.0556  | 45   | 0.4506          | 0.88     |
| 0.3166        | 9.0556  | 50   | 0.2587          | 0.88     |
| 0.3166        | 10.0556 | 55   | 0.1353          | 0.92     |
| 0.2761        | 11.0556 | 60   | 0.3067          | 0.92     |
| 0.2761        | 12.0556 | 65   | 0.4782          | 0.84     |
| 0.2316        | 13.0556 | 70   | 0.3868          | 0.84     |
| 0.2316        | 14.0556 | 75   | 0.3565          | 0.88     |
| 0.173         | 15.0556 | 80   | 0.3623          | 0.92     |
| 0.173         | 16.0556 | 85   | 0.2870          | 0.92     |
| 0.1488        | 17.0556 | 90   | 0.2836          | 0.92     |

Framework versions

  • Transformers 4.42.3
  • Pytorch 2.1.2
  • Datasets 2.20.0
  • Tokenizers 0.19.1