Dijaaa committed
Commit eb186a9 · 1 Parent(s): 3568716

Model save

Files changed (1)
  1. README.md +11 -6
README.md CHANGED
@@ -17,8 +17,8 @@ should probably proofread and complete it, then remove this comment. -->
 
  This model is a fine-tuned version of [MCG-NJU/videomae-base](https://huggingface.co/MCG-NJU/videomae-base) on an unknown dataset.
  It achieves the following results on the evaluation set:
- - Loss: 2.6369
- - Accuracy: 0.0383
+ - Loss: 2.1449
+ - Accuracy: 0.2689
 
  ## Model description
 
@@ -41,22 +41,27 @@ The following hyperparameters were used during training:
  - train_batch_size: 8
  - eval_batch_size: 8
  - seed: 42
+ - distributed_type: tpu
  - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  - lr_scheduler_type: linear
  - lr_scheduler_warmup_ratio: 0.1
- - training_steps: 22
+ - training_steps: 504
 
  ### Training results
 
  | Training Loss | Epoch | Step | Validation Loss | Accuracy |
  |:-------------:|:-----:|:----:|:---------------:|:--------:|
- | 2.5694        | 0.55  | 12   | 2.7178          | 0.0939   |
- | 2.4679        | 1.45  | 22   | 2.7647          | 0.0280   |
+ | 2.3843        | 0.17  | 85   | 2.5040          | 0.1662   |
+ | 2.2227        | 1.17  | 170  | 2.3376          | 0.1935   |
+ | 2.1317        | 2.17  | 255  | 2.2547          | 0.2329   |
+ | 2.0338        | 3.17  | 340  | 2.1470          | 0.2485   |
+ | 1.9639        | 4.17  | 425  | 2.0700          | 0.2558   |
+ | 1.8565        | 5.16  | 504  | 2.0882          | 0.2625   |
 
 
  ### Framework versions
 
  - Transformers 4.34.0
- - Pytorch 2.0.1+cu118
+ - Pytorch 2.0.0+cu117
  - Datasets 2.14.5
  - Tokenizers 0.14.1
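
For readers who want to see how the updated hyperparameters fit together, here is a minimal sketch of how they might map onto `transformers.TrainingArguments` (Transformers 4.34). The learning rate and output directory are not part of this diff, so `output_dir` below is a hypothetical name and the learning rate is left at its default; the TPU launch setup implied by `distributed_type: tpu` is also not covered here.

```python
# Sketch only: maps the hyperparameters listed in the diff above onto
# transformers.TrainingArguments. Values not shown in the diff
# (learning rate, output directory) are placeholder assumptions.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="videomae-base-finetuned",  # hypothetical output path
    per_device_train_batch_size=8,         # train_batch_size: 8
    per_device_eval_batch_size=8,          # eval_batch_size: 8
    seed=42,                               # seed: 42
    adam_beta1=0.9,                        # optimizer: Adam with betas=(0.9,0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-08,                    # and epsilon=1e-08
    lr_scheduler_type="linear",            # lr_scheduler_type: linear
    warmup_ratio=0.1,                      # lr_scheduler_warmup_ratio: 0.1
    max_steps=504,                         # training_steps: 504
)
```

These arguments would then be passed to a `Trainer` along with the fine-tuned VideoMAE model and the (unspecified) dataset; the table above suggests evaluation was run roughly every 85 steps.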