RickyIG committed on
Commit 8ec6561
1 Parent(s): e775ec0

End of training

Files changed (2)
  1. README.md +118 -0
  2. pytorch_model.bin +1 -1
README.md ADDED
@@ -0,0 +1,118 @@
+ ---
+ license: apache-2.0
+ base_model: google/vit-base-patch16-224-in21k
+ tags:
+ - generated_from_trainer
+ datasets:
+ - imagefolder
+ metrics:
+ - accuracy
+ model-index:
+ - name: emotion_face_image_classification_v2
+   results:
+   - task:
+       name: Image Classification
+       type: image-classification
+     dataset:
+       name: imagefolder
+       type: imagefolder
+       config: default
+       split: train
+       args: default
+     metrics:
+     - name: Accuracy
+       type: accuracy
+       value: 0.48125
+ ---
+
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->
+
+ # emotion_face_image_classification_v2
+
+ This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
+ It achieves the following results on the evaluation set:
+ - Loss: 1.5157
+ - Accuracy: 0.4813
+
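+ A minimal inference sketch (not part of the auto-generated card): the repo id below is assumed from the user name and model name and may need to be adjusted to the checkpoint's actual Hub path.
+
+ ```python
+ from transformers import pipeline
+
+ # Assumed repo id -- replace with the real Hub path if it differs.
+ classifier = pipeline(
+     "image-classification",
+     model="RickyIG/emotion_face_image_classification_v2",
+ )
+
+ # The input may be a local file path, a URL, or a PIL.Image.
+ predictions = classifier("face.jpg")
+ print(predictions)  # list of {"label": ..., "score": ...} dicts
+ ```
+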
+ ## Model description
+
+ More information needed
+
+ ## Intended uses & limitations
+
+ More information needed
+
+ ## Training and evaluation data
+
+ More information needed
+
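+ The card does not document the data; the metadata only names the generic `imagefolder` loader. A hedged sketch of how such a dataset is typically loaded with the `datasets` library follows; the directory layout is an assumption, not something this card specifies.
+
+ ```python
+ from datasets import load_dataset
+
+ # Assumed layout: one sub-folder per emotion label, e.g.
+ #   data/train/happy/*.jpg, data/train/sad/*.jpg, ...
+ dataset = load_dataset("imagefolder", data_dir="data")
+ print(dataset["train"][0])  # {"image": <PIL.Image.Image>, "label": 0}
+ ```
+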
+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training (a `TrainingArguments` sketch follows the list):
+ - learning_rate: 5e-05
+ - train_batch_size: 64
+ - eval_batch_size: 64
+ - seed: 42
+ - gradient_accumulation_steps: 4
+ - total_train_batch_size: 256
+ - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
+ - lr_scheduler_type: cosine_with_restarts
+ - lr_scheduler_warmup_ratio: 0.1
+ - lr_scheduler_warmup_steps: 150
+ - num_epochs: 50
+
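+ A sketch of how the values above map onto `transformers.TrainingArguments`; the `output_dir` is a placeholder, the Adam betas/epsilon listed are the optimizer defaults, and the model/dataset/Trainer wiring is omitted.
+
+ ```python
+ from transformers import TrainingArguments
+
+ training_args = TrainingArguments(
+     output_dir="emotion_face_image_classification_v2",  # placeholder
+     learning_rate=5e-5,
+     per_device_train_batch_size=64,
+     per_device_eval_batch_size=64,
+     seed=42,
+     gradient_accumulation_steps=4,   # 64 x 4 = 256 total train batch size
+     lr_scheduler_type="cosine_with_restarts",
+     warmup_ratio=0.1,
+     warmup_steps=150,                # a non-zero warmup_steps takes precedence over warmup_ratio
+     num_train_epochs=50,
+ )
+ ```
+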
+ ### Training results
+
+ | Training Loss | Epoch | Step | Validation Loss | Accuracy |
+ |:-------------:|:-----:|:----:|:---------------:|:--------:|
+ | No log | 0.8 | 2 | 2.0924 | 0.15 |
+ | No log | 2.0 | 5 | 2.1024 | 0.0938 |
+ | No log | 2.8 | 7 | 2.0935 | 0.1375 |
+ | No log | 4.0 | 10 | 2.0893 | 0.15 |
+ | No log | 4.8 | 12 | 2.0900 | 0.15 |
+ | No log | 6.0 | 15 | 2.0987 | 0.0813 |
+ | No log | 6.8 | 17 | 2.0901 | 0.1 |
+ | No log | 8.0 | 20 | 2.0872 | 0.15 |
+ | No log | 8.8 | 22 | 2.0831 | 0.1375 |
+ | No log | 10.0 | 25 | 2.0750 | 0.1437 |
+ | No log | 10.8 | 27 | 2.0744 | 0.175 |
+ | No log | 12.0 | 30 | 2.0778 | 0.1437 |
+ | No log | 12.8 | 32 | 2.0729 | 0.1812 |
+ | No log | 14.0 | 35 | 2.0676 | 0.1625 |
+ | No log | 14.8 | 37 | 2.0694 | 0.1688 |
+ | No log | 16.0 | 40 | 2.0562 | 0.1625 |
+ | No log | 16.8 | 42 | 2.0498 | 0.1938 |
+ | No log | 18.0 | 45 | 2.0393 | 0.2188 |
+ | No log | 18.8 | 47 | 2.0458 | 0.2062 |
+ | No log | 20.0 | 50 | 2.0289 | 0.2125 |
+ | No log | 20.8 | 52 | 2.0226 | 0.2437 |
+ | No log | 22.0 | 55 | 1.9997 | 0.2625 |
+ | No log | 22.8 | 57 | 1.9855 | 0.3187 |
+ | No log | 24.0 | 60 | 1.9571 | 0.3187 |
+ | No log | 24.8 | 62 | 1.9473 | 0.3375 |
+ | No log | 26.0 | 65 | 1.9080 | 0.3187 |
+ | No log | 26.8 | 67 | 1.8894 | 0.35 |
+ | No log | 28.0 | 70 | 1.8407 | 0.375 |
+ | No log | 28.8 | 72 | 1.8083 | 0.3438 |
+ | No log | 30.0 | 75 | 1.7652 | 0.3563 |
+ | No log | 30.8 | 77 | 1.7281 | 0.3563 |
+ | No log | 32.0 | 80 | 1.6729 | 0.4062 |
+ | No log | 32.8 | 82 | 1.6527 | 0.3937 |
+ | No log | 34.0 | 85 | 1.6044 | 0.4562 |
+ | No log | 34.8 | 87 | 1.5899 | 0.4313 |
+ | No log | 36.0 | 90 | 1.5488 | 0.4313 |
+ | No log | 36.8 | 92 | 1.5340 | 0.45 |
+ | No log | 38.0 | 95 | 1.5227 | 0.4875 |
+ | No log | 38.8 | 97 | 1.4846 | 0.4875 |
+ | No log | 40.0 | 100 | 1.4579 | 0.4688 |
+
+
+ ### Framework versions
+
+ - Transformers 4.33.2
+ - Pytorch 2.0.1+cu118
+ - Datasets 2.14.5
+ - Tokenizers 0.13.3
pytorch_model.bin CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:9904a73d2082a9b4ce55486428d58e38860a53f1d3dd3dadf8afa88bbfb932c1
+ oid sha256:688f94f16bc3a5fe6f5e7e31a5c6b09075fbfcbae243f1e09debb6457e1fafc8
  size 343287149