billskar23 committed
Commit 5673b0a
1 Parent(s): b38107a

Training in progress, epoch 19

README.md CHANGED
@@ -21,11 +21,11 @@ should probably proofread and complete it, then remove this comment. -->
 
 This model is a fine-tuned version of [MCG-NJU/videomae-base](https://huggingface.co/MCG-NJU/videomae-base) on an unknown dataset.
 It achieves the following results on the evaluation set:
- - Loss: 3.0328
- - Accuracy: 0.6154
- - Precision: 0.3787
- - Recall: 0.6154
- - F1: 0.4689
+ - Loss: 4.7834
+ - Accuracy: 0.5333
+ - Precision: 0.2844
+ - Recall: 0.5333
+ - F1: 0.3710
 
 ## Model description
 
@@ -45,38 +45,38 @@ More information needed
 
 The following hyperparameters were used during training:
 - learning_rate: 5e-05
- - train_batch_size: 4
- - eval_batch_size: 4
+ - train_batch_size: 8
+ - eval_batch_size: 8
 - seed: 42
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
 - lr_scheduler_warmup_ratio: 0.1
- - training_steps: 10240
+ - training_steps: 5100
 
 ### Training results
 
- | Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
- |:-------------:|:-----:|:-----:|:---------------:|:--------:|:---------:|:------:|:------:|
- | 0.3985 | 1.0 | 512 | 0.2857 | 0.9231 | 0.9316 | 0.9231 | 0.9211 |
- | 0.4718 | 2.0 | 1024 | 1.4444 | 0.6154 | 0.3787 | 0.6154 | 0.4689 |
- | 0.181 | 3.0 | 1536 | 3.1151 | 0.3846 | 0.3077 | 0.3846 | 0.3419 |
- | 0.113 | 4.0 | 2048 | 1.9200 | 0.6154 | 0.3787 | 0.6154 | 0.4689 |
- | 0.0771 | 5.0 | 2560 | 1.4445 | 0.5385 | 0.4974 | 0.5385 | 0.5064 |
- | 0.0003 | 6.0 | 3072 | 1.6911 | 0.7692 | 0.7839 | 0.7692 | 0.7720 |
- | 0.0002 | 7.0 | 3584 | 2.2123 | 0.6154 | 0.3787 | 0.6154 | 0.4689 |
- | 0.0002 | 8.0 | 4096 | 3.1463 | 0.6154 | 0.3787 | 0.6154 | 0.4689 |
- | 0.0001 | 9.0 | 4608 | 1.7846 | 0.6154 | 0.3787 | 0.6154 | 0.4689 |
- | 0.0 | 10.0 | 5120 | 2.4646 | 0.6154 | 0.3787 | 0.6154 | 0.4689 |
- | 0.0 | 11.0 | 5632 | 2.4743 | 0.6154 | 0.3787 | 0.6154 | 0.4689 |
- | 0.0 | 12.0 | 6144 | 2.5687 | 0.6154 | 0.3787 | 0.6154 | 0.4689 |
- | 0.0 | 13.0 | 6656 | 2.6551 | 0.6154 | 0.3787 | 0.6154 | 0.4689 |
- | 0.0 | 14.0 | 7168 | 2.7365 | 0.6154 | 0.3787 | 0.6154 | 0.4689 |
- | 0.0 | 15.0 | 7680 | 2.8134 | 0.6154 | 0.3787 | 0.6154 | 0.4689 |
- | 0.0 | 16.0 | 8192 | 2.8798 | 0.6154 | 0.3787 | 0.6154 | 0.4689 |
- | 0.0 | 17.0 | 8704 | 2.9415 | 0.6154 | 0.3787 | 0.6154 | 0.4689 |
- | 0.0 | 18.0 | 9216 | 2.9896 | 0.6154 | 0.3787 | 0.6154 | 0.4689 |
- | 0.0 | 19.0 | 9728 | 3.0225 | 0.6154 | 0.3787 | 0.6154 | 0.4689 |
- | 0.0 | 20.0 | 10240 | 3.0328 | 0.6154 | 0.3787 | 0.6154 | 0.4689 |
+ | Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
+ |:-------------:|:-------:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
+ | 0.515 | 1.0 | 256 | 0.8388 | 0.5333 | 0.2844 | 0.5333 | 0.3710 |
+ | 0.4899 | 2.0 | 512 | 1.6684 | 0.5333 | 0.2844 | 0.5333 | 0.3710 |
+ | 0.3165 | 3.0 | 768 | 1.9704 | 0.6 | 0.7714 | 0.6 | 0.5045 |
+ | 0.053 | 4.0 | 1024 | 3.4786 | 0.5333 | 0.2844 | 0.5333 | 0.3710 |
+ | 0.0772 | 5.0 | 1280 | 3.9716 | 0.5333 | 0.2844 | 0.5333 | 0.3710 |
+ | 0.1053 | 6.0 | 1536 | 2.9126 | 0.6 | 0.7714 | 0.6 | 0.5045 |
+ | 0.0867 | 7.0 | 1792 | 3.8445 | 0.5333 | 0.2844 | 0.5333 | 0.3710 |
+ | 0.0001 | 8.0 | 2048 | 4.1078 | 0.5333 | 0.2844 | 0.5333 | 0.3710 |
+ | 0.0001 | 9.0 | 2304 | 4.2770 | 0.5333 | 0.2844 | 0.5333 | 0.3710 |
+ | 0.0 | 10.0 | 2560 | 4.3905 | 0.5333 | 0.2844 | 0.5333 | 0.3710 |
+ | 0.0 | 11.0 | 2816 | 4.4740 | 0.5333 | 0.2844 | 0.5333 | 0.3710 |
+ | 0.0 | 12.0 | 3072 | 4.5399 | 0.5333 | 0.2844 | 0.5333 | 0.3710 |
+ | 0.0 | 13.0 | 3328 | 4.5972 | 0.5333 | 0.2844 | 0.5333 | 0.3710 |
+ | 0.0 | 14.0 | 3584 | 4.6437 | 0.5333 | 0.2844 | 0.5333 | 0.3710 |
+ | 0.0 | 15.0 | 3840 | 4.6843 | 0.5333 | 0.2844 | 0.5333 | 0.3710 |
+ | 0.0 | 16.0 | 4096 | 4.7171 | 0.5333 | 0.2844 | 0.5333 | 0.3710 |
+ | 0.0 | 17.0 | 4352 | 4.7472 | 0.5333 | 0.2844 | 0.5333 | 0.3710 |
+ | 0.0 | 18.0 | 4608 | 4.7671 | 0.5333 | 0.2844 | 0.5333 | 0.3710 |
+ | 0.0 | 19.0 | 4864 | 4.7798 | 0.5333 | 0.2844 | 0.5333 | 0.3710 |
+ | 0.0 | 19.9219 | 5100 | 4.7834 | 0.5333 | 0.2844 | 0.5333 | 0.3710 |
 
 
 ### Framework versions
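For reference, the hyperparameters recorded in the updated README map one-to-one onto Hugging Face `TrainingArguments` for the `Trainer` API. The sketch below reconstructs that configuration only as an illustration; `output_dir`, `num_labels`, and the dataset objects are placeholders that do not appear in this commit.

```python
# Sketch only: reconstructs the training configuration listed in the README.
# output_dir, num_labels and the dataset objects are hypothetical placeholders.
from transformers import Trainer, TrainingArguments, VideoMAEForVideoClassification

model = VideoMAEForVideoClassification.from_pretrained(
    "MCG-NJU/videomae-base",
    num_labels=2,  # placeholder; the real label count is not part of this commit
)

args = TrainingArguments(
    output_dir="videomae-finetune",  # placeholder
    learning_rate=5e-5,
    per_device_train_batch_size=8,   # train_batch_size: 8
    per_device_eval_batch_size=8,    # eval_batch_size: 8
    seed=42,
    adam_beta1=0.9,                  # Adam betas=(0.9, 0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-8,               # epsilon=1e-08
    lr_scheduler_type="linear",
    warmup_ratio=0.1,                # lr_scheduler_warmup_ratio: 0.1
    max_steps=5100,                  # training_steps: 5100
)

# trainer = Trainer(model=model, args=args, train_dataset=..., eval_dataset=...)
# trainer.train()
```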
model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
- oid sha256:df975da0f6e34605b4ab9adb08fa3520e029e4bd0a5102d9c476fef925c361cf
+ oid sha256:b4828255d636ac026975f7c75399807528f0c8e72b90e25f0a290224ca3e1029
 size 344937368
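`model.safetensors` is stored with Git LFS, so the diff only shows the pointer file: `oid` is the SHA-256 of the real weights blob and `size` its byte count (unchanged at 344937368 because only the parameter values changed). As a hedged sketch, the weights at this revision can be fetched and checked against the pointer as below; the repository id is a placeholder, since it is not shown in the diff.

```python
# Sketch: download the weights at this commit and verify them against the
# LFS pointer above. REPO_ID is a placeholder; only the revision, oid and
# size are taken from this page.
import hashlib
import os

from huggingface_hub import hf_hub_download

REPO_ID = "billskar23/<repo-name>"  # placeholder, not shown in the diff
REVISION = "5673b0a"
EXPECTED_OID = "b4828255d636ac026975f7c75399807528f0c8e72b90e25f0a290224ca3e1029"
EXPECTED_SIZE = 344937368

path = hf_hub_download(REPO_ID, "model.safetensors", revision=REVISION)

sha = hashlib.sha256()
with open(path, "rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):
        sha.update(chunk)

assert sha.hexdigest() == EXPECTED_OID, "checksum does not match the LFS pointer"
assert os.path.getsize(path) == EXPECTED_SIZE
```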
runs/Sep26_16-54-02_hmudgx/events.out.tfevents.1727358844.hmudgx.917888.4 CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
- oid sha256:34bf38949f336385757d461ca7902c1ec427197a2adee64d4863e66aeed39f4d
- size 117013
+ oid sha256:0a618a05c1eef7b0c80bb8e28a714beef26235271844d25a3e0400313f87959a
+ size 122481
runs/Sep26_16-56-41_hmudgx/events.out.tfevents.1727359003.hmudgx.918034.4 CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
- oid sha256:eb5dbc7aea351ae55e901d6c34dee9c9cb184ce64ab2567da7d62f34c72bd910
- size 207905
+ oid sha256:06d3c579ef62472cdf0729b10929fa2871da1b3e7e67d66d525e98276f5eea9c
+ size 230160
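The two `events.out.tfevents.*` files under `runs/` are TensorBoard logs written during training; they are append-only, so each epoch grows their `size` and changes their `oid`. A minimal sketch for inspecting them locally, assuming the `tensorboard` package and the usual `Trainer` scalar tag names such as `eval/loss` (the actual tags are not visible in this commit):

```python
# Sketch: read the scalars logged in one of the updated event files.
# Requires the tensorboard package; the "eval/loss" tag is the usual
# transformers Trainer default and is an assumption, not shown in this commit.
from tensorboard.backend.event_processing.event_accumulator import EventAccumulator

acc = EventAccumulator("runs/Sep26_16-56-41_hmudgx")
acc.Reload()

print(acc.Tags()["scalars"])            # all scalar tags present in the log
for event in acc.Scalars("eval/loss"):  # assumed tag name
    print(event.step, event.value)
```

Alternatively, `tensorboard --logdir runs/` serves the same data in a browser.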