---
library_name: transformers
license: cc-by-nc-4.0
base_model: MCG-NJU/videomae-base-finetuned-kinetics
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: videomae-base-finetuned-kinetics-allkisa-0219
  results: []
---

# videomae-base-finetuned-kinetics-allkisa-0219

This model is a fine-tuned version of [MCG-NJU/videomae-base-finetuned-kinetics](https://huggingface.co/MCG-NJU/videomae-base-finetuned-kinetics) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3020
- Accuracy: 0.9394

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: adamw_torch with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.05
- training_steps: 7840

### Training results

| Training Loss | Epoch   | Step | Validation Loss | Accuracy |
|:-------------:|:-------:|:----:|:---------------:|:--------:|
| 0.0617        | 0.0251  | 197  | 0.3995          | 0.8433   |
| 0.0008        | 1.0251  | 394  | 0.3034          | 0.8715   |
| 0.0006        | 2.0251  | 591  | 0.3897          | 0.8762   |
| 1.2877        | 3.0251  | 788  | 0.3241          | 0.8809   |
| 0.0024        | 4.0251  | 985  | 0.3393          | 0.8856   |
| 0.0002        | 5.0251  | 1182 | 0.3336          | 0.9013   |
| 0.0005        | 6.0251  | 1379 | 0.4018          | 0.9028   |
| 0.0001        | 7.0251  | 1576 | 0.4943          | 0.8966   |
| 0.0004        | 8.0251  | 1773 | 0.6253          | 0.8323   |
| 0.0003        | 9.0251  | 1970 | 0.3540          | 0.9075   |
| 0.0005        | 10.0251 | 2167 | 0.4543          | 0.9013   |
| 0.0003        | 11.0251 | 2364 | 0.6340          | 0.8746   |
| 0.0           | 12.0251 | 2561 | 0.4580          | 0.9028   |
| 0.0004        | 13.0251 | 2758 | 0.5239          | 0.8997   |
| 0.0003        | 14.0251 | 2955 | 0.4695          | 0.9028   |
| 0.0025        | 15.0251 | 3152 | 0.6634          | 0.8762   |
| 0.0001        | 16.0251 | 3349 | 0.5013          | 0.9075   |
| 0.0           | 17.0251 | 3546 | 0.4318          | 0.9216   |
| 0.0001        | 18.0251 | 3743 | 0.4857          | 0.9169   |
| 0.0001        | 19.0251 | 3940 | 0.4436          | 0.9138   |
| 0.0002        | 20.0251 | 4137 | 0.5982          | 0.8950   |
| 0.0059        | 21.0251 | 4334 | 0.4260          | 0.9248   |
| 0.0376        | 22.0251 | 4531 | 0.4158          | 0.9263   |
| 0.0           | 23.0251 | 4728 | 0.4742          | 0.9169   |
| 0.0           | 24.0251 | 4925 | 0.4715          | 0.9107   |
| 0.0002        | 25.0251 | 5122 | 0.4703          | 0.9044   |
| 0.0004        | 26.0251 | 5319 | 0.4396          | 0.9138   |
| 0.0001        | 27.0251 | 5516 | 0.4424          | 0.9201   |
| 0.0           | 28.0251 | 5713 | 0.4931          | 0.8966   |
| 0.0001        | 29.0251 | 5910 | 0.5090          | 0.9028   |
| 0.0001        | 30.0251 | 6107 | 0.5055          | 0.9028   |
| 0.0           | 31.0251 | 6304 | 0.5457          | 0.8934   |
| 0.0227        | 32.0251 | 6501 | 0.5026          | 0.9013   |

### Framework versions

- Transformers 4.48.1
- Pytorch 2.5.1+cu121
- Datasets 3.2.0
- Tokenizers 0.21.0
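### Inference sketch

The usage sections above are still placeholders, so the following is only a minimal, hedged sketch of how one might run this checkpoint for video classification with the `transformers` VideoMAE classes. The repo id passed to `from_pretrained`, the 16-frame clip length, and the `classify_video` helper name are assumptions (the base VideoMAE Kinetics model uses 16 frames); loading the checkpoint also requires network access, so the model call is wrapped in a function rather than executed here.

```python
import numpy as np


def sample_frame_indices(num_frames: int, clip_len: int = 16) -> np.ndarray:
    """Uniformly sample `clip_len` frame indices from a clip of `num_frames` frames."""
    return np.linspace(0, num_frames - 1, num=clip_len).astype(int)


def classify_video(frames):
    """Classify a video given as a list of HxWxC uint8 numpy frames.

    Assumed transformers API usage; not called in this sketch because it
    downloads the checkpoint from the Hub.
    """
    import torch
    from transformers import VideoMAEImageProcessor, VideoMAEForVideoClassification

    ckpt = "videomae-base-finetuned-kinetics-allkisa-0219"  # hypothetical repo id
    processor = VideoMAEImageProcessor.from_pretrained(ckpt)
    model = VideoMAEForVideoClassification.from_pretrained(ckpt)

    # Subsample the clip to the model's expected number of frames.
    idx = sample_frame_indices(len(frames), clip_len=model.config.num_frames)
    inputs = processor([frames[i] for i in idx], return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    return model.config.id2label[int(logits.argmax(-1))]
```

The frame-sampling helper keeps the first and last frames and spaces the rest evenly, which is a common convention for fixed-length video models.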