mrm8488 committed
Commit 7e6c6d0
1 Parent(s): 46cdfc1

Initial commit

README.md ADDED
@@ -0,0 +1,157 @@
+ ---
+ tags:
+ - masked-image-modeling
+ - generated_from_trainer
+ datasets:
+ - cifar10
+ model-index:
+ - name: vit-cifar10
+   results: []
+ ---
+
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->
+
+ # vit-cifar10
+
+ This model is a fine-tuned version of [](https://huggingface.co/) on the cifar10 dataset.
+ It achieves the following results on the evaluation set:
+ - Loss: 0.0891
+
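+ A minimal usage sketch is shown below (assuming the checkpoint is published as `mrm8488/vit-cifar10`; the repo id, the example image path, and the 50% masking ratio are illustrative assumptions, not part of the training setup):
+
+ ```python
+ import torch
+ from PIL import Image
+ from transformers import ViTFeatureExtractor, ViTForMaskedImageModeling
+
+ model_id = "mrm8488/vit-cifar10"  # assumed repo id for this checkpoint
+ feature_extractor = ViTFeatureExtractor.from_pretrained(model_id)
+ model = ViTForMaskedImageModeling.from_pretrained(model_id)
+
+ image = Image.open("example.png").convert("RGB")  # any RGB image; it is resized to 224x224
+ inputs = feature_extractor(images=image, return_tensors="pt")
+
+ # Mask a random ~50% of the (224 / 16)^2 = 196 patches, as in masked image modeling.
+ num_patches = (model.config.image_size // model.config.patch_size) ** 2
+ bool_masked_pos = torch.rand(1, num_patches) < 0.5
+
+ with torch.no_grad():
+     outputs = model(pixel_values=inputs["pixel_values"], bool_masked_pos=bool_masked_pos)
+
+ print(outputs.loss)  # reconstruction loss on the masked patches
+ ```
+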
+ ## Model description
+
+ More information needed
+
+ ## Intended uses & limitations
+
+ More information needed
+
+ ## Training and evaluation data
+
+ More information needed
+
+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training (see the configuration sketch after this list):
+ - learning_rate: 2e-05
+ - train_batch_size: 16
+ - eval_batch_size: 16
+ - seed: 1337
+ - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
+ - lr_scheduler_type: linear
+ - num_epochs: 100.0
+
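+ The settings above roughly correspond to the following `TrainingArguments`; this is a sketch only, since the actual launch command is not part of this commit (the `output_dir` and `evaluation_strategy` values are assumptions):
+
+ ```python
+ from transformers import TrainingArguments
+
+ # Sketch of the training configuration implied by the hyperparameter list above.
+ training_args = TrainingArguments(
+     output_dir="vit-cifar10",      # assumption
+     learning_rate=2e-05,
+     per_device_train_batch_size=16,
+     per_device_eval_batch_size=16,
+     seed=1337,
+     adam_beta1=0.9,
+     adam_beta2=0.999,
+     adam_epsilon=1e-08,
+     lr_scheduler_type="linear",
+     num_train_epochs=100.0,
+     evaluation_strategy="epoch",   # the table below reports one evaluation per epoch
+ )
+ ```
+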
+ ### Training results
+
+ | Training Loss | Epoch | Step | Validation Loss |
+ |:-------------:|:-----:|:------:|:---------------:|
+ | 0.289 | 1.0 | 2657 | 0.2941 |
+ | 0.2858 | 2.0 | 5314 | 0.2809 |
+ | 0.2693 | 3.0 | 7971 | 0.2738 |
+ | 0.2578 | 4.0 | 10628 | 0.2546 |
+ | 0.2211 | 5.0 | 13285 | 0.2153 |
+ | 0.1799 | 6.0 | 15942 | 0.1795 |
+ | 0.158 | 7.0 | 18599 | 0.1623 |
+ | 0.1481 | 8.0 | 21256 | 0.1453 |
+ | 0.1391 | 9.0 | 23913 | 0.1368 |
+ | 0.1348 | 10.0 | 26570 | 0.1354 |
+ | 0.129 | 11.0 | 29227 | 0.1249 |
+ | 0.126 | 12.0 | 31884 | 0.1229 |
+ | 0.1216 | 13.0 | 34541 | 0.1184 |
+ | 0.1175 | 14.0 | 37198 | 0.1185 |
+ | 0.1137 | 15.0 | 39855 | 0.1146 |
+ | 0.1125 | 16.0 | 42512 | 0.1117 |
+ | 0.1112 | 17.0 | 45169 | 0.1100 |
+ | 0.1108 | 18.0 | 47826 | 0.1089 |
+ | 0.1061 | 19.0 | 50483 | 0.1070 |
+ | 0.1073 | 20.0 | 53140 | 0.1076 |
+ | 0.1066 | 21.0 | 55797 | 0.1061 |
+ | 0.1065 | 22.0 | 58454 | 0.1056 |
+ | 0.1045 | 23.0 | 61111 | 0.1037 |
+ | 0.1052 | 24.0 | 63768 | 0.1055 |
+ | 0.102 | 25.0 | 66425 | 0.1028 |
+ | 0.1025 | 26.0 | 69082 | 0.1034 |
+ | 0.1037 | 27.0 | 71739 | 0.1025 |
+ | 0.1022 | 28.0 | 74396 | 0.1014 |
+ | 0.1026 | 29.0 | 77053 | 0.1011 |
+ | 0.1022 | 30.0 | 79710 | 0.1001 |
+ | 0.0997 | 31.0 | 82367 | 0.1007 |
+ | 0.0998 | 32.0 | 85024 | 0.1016 |
+ | 0.1019 | 33.0 | 87681 | 0.1008 |
+ | 0.0999 | 34.0 | 90338 | 0.1000 |
+ | 0.0998 | 35.0 | 92995 | 0.0993 |
+ | 0.0994 | 36.0 | 95652 | 0.0992 |
+ | 0.0966 | 37.0 | 98309 | 0.0991 |
+ | 0.0997 | 38.0 | 100966 | 0.0970 |
+ | 0.0991 | 39.0 | 103623 | 0.0979 |
+ | 0.099 | 40.0 | 106280 | 0.0983 |
+ | 0.0974 | 41.0 | 108937 | 0.0980 |
+ | 0.0974 | 42.0 | 111594 | 0.0971 |
+ | 0.0972 | 43.0 | 114251 | 0.0970 |
+ | 0.0991 | 44.0 | 116908 | 0.0970 |
+ | 0.0979 | 45.0 | 119565 | 0.0972 |
+ | 0.097 | 46.0 | 122222 | 0.0970 |
+ | 0.0936 | 47.0 | 124879 | 0.0967 |
+ | 0.0948 | 48.0 | 127536 | 0.0967 |
+ | 0.0974 | 49.0 | 130193 | 0.0954 |
+ | 0.0958 | 50.0 | 132850 | 0.0954 |
+ | 0.0948 | 51.0 | 135507 | 0.0955 |
+ | 0.095 | 52.0 | 138164 | 0.0953 |
+ | 0.0939 | 53.0 | 140821 | 0.0945 |
+ | 0.0961 | 54.0 | 143478 | 0.0948 |
+ | 0.0964 | 55.0 | 146135 | 0.0955 |
+ | 0.0934 | 56.0 | 148792 | 0.0948 |
+ | 0.0965 | 57.0 | 151449 | 0.0943 |
+ | 0.0966 | 58.0 | 154106 | 0.0941 |
+ | 0.0926 | 59.0 | 156763 | 0.0938 |
+ | 0.0928 | 60.0 | 159420 | 0.0942 |
+ | 0.093 | 61.0 | 162077 | 0.0936 |
+ | 0.0939 | 62.0 | 164734 | 0.0939 |
+ | 0.0936 | 63.0 | 167391 | 0.0936 |
+ | 0.093 | 64.0 | 170048 | 0.0929 |
+ | 0.0929 | 65.0 | 172705 | 0.0930 |
+ | 0.0917 | 66.0 | 175362 | 0.0925 |
+ | 0.0948 | 67.0 | 178019 | 0.0932 |
+ | 0.0931 | 68.0 | 180676 | 0.0927 |
+ | 0.0911 | 69.0 | 183333 | 0.0922 |
+ | 0.0923 | 70.0 | 185990 | 0.0924 |
+ | 0.0923 | 71.0 | 188647 | 0.0923 |
+ | 0.0929 | 72.0 | 191304 | 0.0919 |
+ | 0.0916 | 73.0 | 193961 | 0.0923 |
+ | 0.0927 | 74.0 | 196618 | 0.0921 |
+ | 0.0907 | 75.0 | 199275 | 0.0922 |
+ | 0.0927 | 76.0 | 201932 | 0.0919 |
+ | 0.0925 | 77.0 | 204589 | 0.0913 |
+ | 0.0921 | 78.0 | 207246 | 0.0917 |
+ | 0.0895 | 79.0 | 209903 | 0.0912 |
+ | 0.0916 | 80.0 | 212560 | 0.0914 |
+ | 0.09 | 81.0 | 215217 | 0.0909 |
+ | 0.0916 | 82.0 | 217874 | 0.0908 |
+ | 0.0902 | 83.0 | 220531 | 0.0907 |
+ | 0.0911 | 84.0 | 223188 | 0.0910 |
+ | 0.091 | 85.0 | 225845 | 0.0903 |
+ | 0.0903 | 86.0 | 228502 | 0.0905 |
+ | 0.0907 | 87.0 | 231159 | 0.0901 |
+ | 0.0908 | 88.0 | 233816 | 0.0907 |
+ | 0.0911 | 89.0 | 236473 | 0.0902 |
+ | 0.0905 | 90.0 | 239130 | 0.0906 |
+ | 0.089 | 91.0 | 241787 | 0.0901 |
+ | 0.0908 | 92.0 | 244444 | 0.0896 |
+ | 0.0894 | 93.0 | 247101 | 0.0892 |
+ | 0.0899 | 94.0 | 249758 | 0.0893 |
+ | 0.0899 | 95.0 | 252415 | 0.0897 |
+ | 0.0904 | 96.0 | 255072 | 0.0898 |
+ | 0.0906 | 97.0 | 257729 | 0.0894 |
+ | 0.0892 | 98.0 | 260386 | 0.0894 |
+ | 0.0881 | 99.0 | 263043 | 0.0892 |
+ | 0.09 | 100.0 | 265700 | 0.0894 |
+
+
+ ### Framework versions
+
+ - Transformers 4.19.0.dev0
+ - Pytorch 1.10.0+cu111
+ - Datasets 2.0.0
+ - Tokenizers 0.11.6
all_results.json ADDED
@@ -0,0 +1,11 @@
+ {
+ "epoch": 100.0,
+ "eval_loss": 0.08910132944583893,
+ "eval_runtime": 45.8239,
+ "eval_samples_per_second": 163.67,
+ "eval_steps_per_second": 10.235,
+ "train_loss": 0.10943094944119408,
+ "train_runtime": 65782.603,
+ "train_samples_per_second": 64.607,
+ "train_steps_per_second": 4.039
+ }
config.json ADDED
@@ -0,0 +1,22 @@
+ {
+ "architectures": [
+ "ViTForMaskedImageModeling"
+ ],
+ "attention_probs_dropout_prob": 0.0,
+ "encoder_stride": 16,
+ "hidden_act": "gelu",
+ "hidden_dropout_prob": 0.0,
+ "hidden_size": 768,
+ "image_size": 224,
+ "initializer_range": 0.02,
+ "intermediate_size": 3072,
+ "layer_norm_eps": 1e-12,
+ "model_type": "vit",
+ "num_attention_heads": 12,
+ "num_channels": 3,
+ "num_hidden_layers": 12,
+ "patch_size": 16,
+ "qkv_bias": true,
+ "torch_dtype": "float32",
+ "transformers_version": "4.19.0.dev0"
+ }
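For reference, the configuration above describes a standard ViT-Base/16 encoder operating on 224x224 inputs. A minimal sketch of rebuilding that architecture programmatically (random weights; the trained weights come from `pytorch_model.bin` via `from_pretrained`):

```python
from transformers import ViTConfig, ViTForMaskedImageModeling

# Rebuild the architecture described in config.json (ViT-Base/16, 224x224 input,
# 16x16 patches). Values not listed here keep their library defaults, which match
# the config above (gelu activation, 0.0 dropout, layer_norm_eps=1e-12, qkv_bias=True).
config = ViTConfig(
    hidden_size=768,
    num_hidden_layers=12,
    num_attention_heads=12,
    intermediate_size=3072,
    image_size=224,
    patch_size=16,
    num_channels=3,
    encoder_stride=16,  # upsampling factor of the pixel-reconstruction head
)
model = ViTForMaskedImageModeling(config)  # randomly initialised skeleton
```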
eval_results.json ADDED
@@ -0,0 +1,7 @@
+ {
+ "epoch": 100.0,
+ "eval_loss": 0.08910132944583893,
+ "eval_runtime": 45.8239,
+ "eval_samples_per_second": 163.67,
+ "eval_steps_per_second": 10.235
+ }
preprocessor_config.json ADDED
@@ -0,0 +1,17 @@
+ {
+ "do_normalize": true,
+ "do_resize": true,
+ "feature_extractor_type": "ViTFeatureExtractor",
+ "image_mean": [
+ 0.5,
+ 0.5,
+ 0.5
+ ],
+ "image_std": [
+ 0.5,
+ 0.5,
+ 0.5
+ ],
+ "resample": 2,
+ "size": 224
+ }
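The preprocessing above resizes to 224x224 with bilinear interpolation (`resample: 2` is PIL's BILINEAR filter) and normalizes every channel with mean 0.5 and std 0.5. For readers who prefer plain transforms, a roughly equivalent torchvision pipeline is sketched below (torchvision itself is an assumption here; it is not listed under the framework versions):

```python
from torchvision import transforms

# Roughly equivalent to the ViTFeatureExtractor settings above: bilinear resize
# to 224x224, scale pixels to [0, 1], then normalize each channel with
# mean=0.5, std=0.5 so values land in [-1, 1].
preprocess = transforms.Compose([
    transforms.Resize((224, 224), interpolation=transforms.InterpolationMode.BILINEAR),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.5, 0.5, 0.5], std=[0.5, 0.5, 0.5]),
])
```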
pytorch_model.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:88eee8496777af95b61b63983843b1ce7864ad60215323f66965a16f3864096e
+ size 345626543
runs/Apr11_17-11-42_ddf33a9556e6/1649697146.5389287/events.out.tfevents.1649697146.ddf33a9556e6.409.1 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:25403a91784ca9d077606ef7a2c1f6df6b99aea256d7a524f772790031ae8b6f
+ size 4982
runs/Apr11_17-11-42_ddf33a9556e6/events.out.tfevents.1649697146.ddf33a9556e6.409.0 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ddb6c63f938fe81f81b704200fa1734db6dde8ed215cbdc46b38f51a5f9539e6
+ size 455803
runs/Apr11_17-11-42_ddf33a9556e6/events.out.tfevents.1649762976.ddf33a9556e6.409.2 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ad591d30e03f234c899d3ce727a1a1c324709c4a555d6431fc3d87f2a9c3becc
+ size 316
train_results.json ADDED
@@ -0,0 +1,7 @@
+ {
+ "epoch": 100.0,
+ "train_loss": 0.10943094944119408,
+ "train_runtime": 65782.603,
+ "train_samples_per_second": 64.607,
+ "train_steps_per_second": 4.039
+ }
trainer_state.json ADDED
The diff for this file is too large to render. See raw diff
 
training_args.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:4d8a59c85ae869b3befd1f864f08545eb9bfb8cbf00b71da56bb6a0e4a594468
+ size 3183