Siccimo committed
Commit 681ad5b
1 parent: bb0580e

Model save

README.md ADDED
@@ -0,0 +1,72 @@
---
license: cc-by-nc-4.0
base_model: MCG-NJU/videomae-base
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: videomae-base-finetuned-ucf101-subset
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/siccimo/huggingface/runs/65akq4kx)
# videomae-base-finetuned-ucf101-subset

This model is a fine-tuned version of [MCG-NJU/videomae-base](https://huggingface.co/MCG-NJU/videomae-base) on a subset of the UCF101 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2589
- Accuracy: 0.9097

## Model description

More information needed

## Intended uses & limitations

More information needed

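As a starting point, the model can be used for video classification with the `transformers` API. A minimal inference sketch, assuming the checkpoint is available under the repo id `Siccimo/videomae-base-finetuned-ucf101-subset` (the exact repo id is an assumption, not stated in this commit) and that clips are sampled to the 16 frames VideoMAE expects:

```python
import numpy as np
import torch
from transformers import VideoMAEImageProcessor, VideoMAEForVideoClassification

repo_id = "Siccimo/videomae-base-finetuned-ucf101-subset"  # assumed repo id
processor = VideoMAEImageProcessor.from_pretrained(repo_id)
model = VideoMAEForVideoClassification.from_pretrained(repo_id)

# A video is a list of 16 RGB frames (H, W, C); random frames stand in for a real clip.
video = list(np.random.randint(0, 255, (16, 224, 224, 3), dtype=np.uint8))

inputs = processor(video, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

predicted_class = logits.argmax(-1).item()
print(model.config.id2label[predicted_class])
```

If `id2label` was not filled in when the model was set up for fine-tuning, the printed label falls back to a generic `LABEL_i`.
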
## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 148

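For reference, the list above maps onto `transformers.TrainingArguments` roughly as follows. This is a hedged reconstruction, not the exact training script: `output_dir` and `eval_strategy` are assumptions, and the Adam betas/epsilon and linear scheduler are simply the library defaults spelled out.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="videomae-base-finetuned-ucf101-subset",  # assumed
    learning_rate=5e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    max_steps=148,
    eval_strategy="epoch",  # assumed from the per-epoch validation rows below
)
```
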
### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 2.0115 | 0.1284 | 19 | 1.5469 | 0.5429 |
| 1.2145 | 1.1284 | 38 | 0.9201 | 0.7 |
| 0.6166 | 2.1284 | 57 | 0.5548 | 0.8286 |
| 0.3255 | 3.1284 | 76 | 0.3556 | 0.9 |
| 0.1945 | 4.1284 | 95 | 0.2918 | 0.8857 |
| 0.098 | 5.1284 | 114 | 0.3874 | 0.8714 |
| 0.0571 | 6.1284 | 133 | 0.1540 | 0.9571 |
| 0.0387 | 7.1014 | 148 | 0.2547 | 0.8571 |

### Framework versions

- Transformers 4.42.3
- PyTorch 2.1.2
- Datasets 2.20.0
- Tokenizers 0.19.1
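
Matching these versions can help when reproducing the reported numbers. A quick check of the installed environment, assuming all four packages are importable:

```python
import datasets
import tokenizers
import torch
import transformers

# Compare against the card: Transformers 4.42.3, PyTorch 2.1.2,
# Datasets 2.20.0, Tokenizers 0.19.1.
for module in (transformers, torch, datasets, tokenizers):
    print(module.__name__, module.__version__)
```
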
model.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:9b5e6e29905ff93b42390cbcb74eb4ed554ccf0e3206c1941fddf4a9ce16e469
+ oid sha256:7331fa28cee99ba05198f7638f9b4071b914ce77a9335b24108e0ee172cf2075
  size 344961984
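
The weights are stored through Git LFS, so the commit only swaps the pointer above (new oid, unchanged size). A small sketch, assuming `model.safetensors` has already been downloaded into the current directory, to check a local copy against the new pointer:

```python
import hashlib
import os

expected_oid = "7331fa28cee99ba05198f7638f9b4071b914ce77a9335b24108e0ee172cf2075"
expected_size = 344961984
path = "model.safetensors"  # assumed local path

# Hash the file in chunks so large weight files do not need to fit in memory.
sha = hashlib.sha256()
with open(path, "rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):
        sha.update(chunk)

print("size ok:", os.path.getsize(path) == expected_size)
print("oid ok:", sha.hexdigest() == expected_oid)
```
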
runs/Jul19_11-52-50_c85a676f2b00/events.out.tfevents.1721390865.c85a676f2b00.34.6 ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:00bfc32886a2447b2649a637875895ba8c3d9841be4f7429bb8fc0e9c530a6d7
size 411