---
license: cc-by-nc-4.0
base_model: MCG-NJU/videomae-base
tags:
  - generated_from_trainer
metrics:
  - accuracy
model-index:
  - name: videomae-base-ssbd-trim-yolo
    results: []
---

# videomae-base-ssbd-trim-yolo

This model is a fine-tuned version of [MCG-NJU/videomae-base](https://huggingface.co/MCG-NJU/videomae-base) on an unknown dataset. It achieves the following results on the evaluation set (a minimal loading sketch follows the metrics):

- Loss: 1.3670
- Accuracy: 0.7949
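
The snippet below is a usage sketch, not part of the original card: the repo id `Amit7Singh/videomae-base-ssbd-trim-yolo` and the dummy 16-frame clip are assumptions, and the image processor is loaded from the base checkpoint in case this repo does not ship its own preprocessor config.

```python
# Sketch only: the repo id and the dummy input are assumptions, not from the card.
import numpy as np
import torch
from transformers import VideoMAEImageProcessor, VideoMAEForVideoClassification

processor = VideoMAEImageProcessor.from_pretrained("MCG-NJU/videomae-base")
model = VideoMAEForVideoClassification.from_pretrained(
    "Amit7Singh/videomae-base-ssbd-trim-yolo"  # assumed repo id
)

# VideoMAE-base expects a clip of 16 frames; random frames stand in here for
# frames decoded from a real video (e.g. with decord or pytorchvideo).
clip = list(np.random.randint(0, 256, (16, 224, 224, 3), dtype=np.uint8))

inputs = processor(clip, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

pred = logits.argmax(-1).item()
print(model.config.id2label[pred])
```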

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a `TrainingArguments` sketch that mirrors them follows the list):

- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 630
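
As a hedged illustration, the values above might map onto `transformers.TrainingArguments` roughly as below; the output directory, evaluation/logging strategies, and optimizer name are assumptions, since the card only lists the Adam settings themselves.

```python
# Sketch only: mirrors the listed hyperparameters; fields marked "assumed" are
# not stated in the model card.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="videomae-base-ssbd-trim-yolo",  # assumed
    learning_rate=5e-5,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=2,
    seed=42,
    optim="adamw_torch",          # Adam betas=(0.9, 0.999) and epsilon=1e-08 are the defaults
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    max_steps=630,
    evaluation_strategy="epoch",  # assumed; the results table reports one eval per epoch
    logging_strategy="epoch",     # assumed
)
```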

### Training results

| Training Loss | Epoch   | Step | Validation Loss | Accuracy |
|:-------------:|:-------:|:----:|:---------------:|:--------:|
| 0.9077        | 0.0683  | 43   | 1.1114          | 0.5278   |
| 1.0662        | 1.0683  | 86   | 1.4397          | 0.25     |
| 0.7166        | 2.0683  | 129  | 1.1111          | 0.5556   |
| 0.5385        | 3.0683  | 172  | 1.4677          | 0.6111   |
| 0.5025        | 4.0683  | 215  | 1.3852          | 0.6667   |
| 0.8695        | 5.0683  | 258  | 0.8738          | 0.8056   |
| 0.5152        | 6.0683  | 301  | 1.6813          | 0.6944   |
| 0.1175        | 7.0683  | 344  | 1.2260          | 0.8333   |
| 0.5459        | 8.0683  | 387  | 1.5197          | 0.75     |
| 0.0563        | 9.0683  | 430  | 1.8295          | 0.7222   |
| 0.2366        | 10.0683 | 473  | 1.2773          | 0.7778   |
| 0.143         | 11.0683 | 516  | 1.4973          | 0.7778   |
| 0.0009        | 11.0683 | 516  | 1.4973          | 0.7778   |
| 0.143         | 12.0683 | 559  | 1.5192          | 0.7778   |
| 0.0016        | 13.0683 | 602  | 1.8634          | 0.7222   |
| 0.0185        | 14.0444 | 630  | 1.8943          | 0.7222   |

### Framework versions

- Transformers 4.40.1
- PyTorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1