Akshay0706 committed

Commit: 3042bd4
Parent(s): 50ae4fd

End of training
README.md ADDED
@@ -0,0 +1,129 @@
+ ---
+ license: apache-2.0
+ base_model: google/vit-base-patch16-224-in21k
+ tags:
+ - generated_from_trainer
+ datasets:
+ - imagefolder
+ metrics:
+ - accuracy
+ - f1
+ model-index:
+ - name: Cinnamon-Plant-50-Epochs-Model
+   results:
+   - task:
+       name: Image Classification
+       type: image-classification
+     dataset:
+       name: imagefolder
+       type: imagefolder
+       config: default
+       split: train
+       args: default
+     metrics:
+     - name: Accuracy
+       type: accuracy
+       value: 0.8958333333333334
+     - name: F1
+       type: f1
+       value: 0.8959694989106755
+ ---
+
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->
+
+ # Cinnamon-Plant-50-Epochs-Model
+
+ This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
+ It achieves the following results on the evaluation set:
+ - Loss: 0.3989
+ - Accuracy: 0.8958
+ - F1: 0.8960
+
+ ## Model description
+
+ More information needed
+
+ ## Intended uses & limitations
+
+ More information needed
+
+ ## Training and evaluation data
+
+ More information needed
+
+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training:
+ - learning_rate: 2e-05
+ - train_batch_size: 16
+ - eval_batch_size: 16
+ - seed: 42
+ - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
+ - lr_scheduler_type: linear
+ - num_epochs: 50
+
+ ### Training results
+
+ | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
+ |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
+ | 0.0428 | 1.0 | 18 | 0.2528 | 0.9167 | 0.9167 |
+ | 0.0218 | 2.0 | 36 | 0.3322 | 0.875 | 0.8763 |
+ | 0.0149 | 3.0 | 54 | 0.2954 | 0.875 | 0.8763 |
+ | 0.0121 | 4.0 | 72 | 0.2941 | 0.8958 | 0.8965 |
+ | 0.0106 | 5.0 | 90 | 0.3093 | 0.875 | 0.8763 |
+ | 0.0096 | 6.0 | 108 | 0.3130 | 0.8958 | 0.8965 |
+ | 0.0088 | 7.0 | 126 | 0.3227 | 0.875 | 0.8763 |
+ | 0.0082 | 8.0 | 144 | 0.3197 | 0.9167 | 0.9170 |
+ | 0.0077 | 9.0 | 162 | 0.3323 | 0.8958 | 0.8965 |
+ | 0.0073 | 10.0 | 180 | 0.3310 | 0.9167 | 0.9170 |
+ | 0.0069 | 11.0 | 198 | 0.3378 | 0.9167 | 0.9170 |
+ | 0.0066 | 12.0 | 216 | 0.3427 | 0.8958 | 0.8965 |
+ | 0.0064 | 13.0 | 234 | 0.3437 | 0.9167 | 0.9170 |
+ | 0.0061 | 14.0 | 252 | 0.3483 | 0.9167 | 0.9170 |
+ | 0.0059 | 15.0 | 270 | 0.3504 | 0.9167 | 0.9170 |
+ | 0.0057 | 16.0 | 288 | 0.3539 | 0.9167 | 0.9170 |
+ | 0.0055 | 17.0 | 306 | 0.3597 | 0.8958 | 0.8965 |
+ | 0.0054 | 18.0 | 324 | 0.3623 | 0.8958 | 0.8965 |
+ | 0.0052 | 19.0 | 342 | 0.3638 | 0.8958 | 0.8965 |
+ | 0.0051 | 20.0 | 360 | 0.3670 | 0.8958 | 0.8965 |
+ | 0.0049 | 21.0 | 378 | 0.3672 | 0.9167 | 0.9170 |
+ | 0.0048 | 22.0 | 396 | 0.3690 | 0.9167 | 0.9170 |
+ | 0.0047 | 23.0 | 414 | 0.3704 | 0.9167 | 0.9170 |
+ | 0.0046 | 24.0 | 432 | 0.3735 | 0.9167 | 0.9170 |
+ | 0.0045 | 25.0 | 450 | 0.3748 | 0.8958 | 0.8960 |
+ | 0.0044 | 26.0 | 468 | 0.3775 | 0.9167 | 0.9170 |
+ | 0.0044 | 27.0 | 486 | 0.3779 | 0.8958 | 0.8960 |
+ | 0.0043 | 28.0 | 504 | 0.3797 | 0.8958 | 0.8960 |
+ | 0.0042 | 29.0 | 522 | 0.3818 | 0.8958 | 0.8960 |
+ | 0.0041 | 30.0 | 540 | 0.3840 | 0.8958 | 0.8960 |
+ | 0.0041 | 31.0 | 558 | 0.3845 | 0.8958 | 0.8960 |
+ | 0.004 | 32.0 | 576 | 0.3861 | 0.8958 | 0.8960 |
+ | 0.004 | 33.0 | 594 | 0.3877 | 0.8958 | 0.8960 |
+ | 0.0039 | 34.0 | 612 | 0.3892 | 0.8958 | 0.8960 |
+ | 0.0039 | 35.0 | 630 | 0.3901 | 0.8958 | 0.8960 |
+ | 0.0038 | 36.0 | 648 | 0.3912 | 0.8958 | 0.8960 |
+ | 0.0038 | 37.0 | 666 | 0.3921 | 0.8958 | 0.8960 |
+ | 0.0038 | 38.0 | 684 | 0.3929 | 0.8958 | 0.8960 |
+ | 0.0037 | 39.0 | 702 | 0.3935 | 0.8958 | 0.8960 |
+ | 0.0037 | 40.0 | 720 | 0.3940 | 0.8958 | 0.8960 |
+ | 0.0037 | 41.0 | 738 | 0.3951 | 0.8958 | 0.8960 |
+ | 0.0036 | 42.0 | 756 | 0.3958 | 0.8958 | 0.8960 |
+ | 0.0036 | 43.0 | 774 | 0.3964 | 0.8958 | 0.8960 |
+ | 0.0036 | 44.0 | 792 | 0.3973 | 0.8958 | 0.8960 |
+ | 0.0036 | 45.0 | 810 | 0.3978 | 0.8958 | 0.8960 |
+ | 0.0036 | 46.0 | 828 | 0.3982 | 0.8958 | 0.8960 |
+ | 0.0036 | 47.0 | 846 | 0.3985 | 0.8958 | 0.8960 |
+ | 0.0036 | 48.0 | 864 | 0.3987 | 0.8958 | 0.8960 |
+ | 0.0035 | 49.0 | 882 | 0.3989 | 0.8958 | 0.8960 |
+ | 0.0035 | 50.0 | 900 | 0.3989 | 0.8958 | 0.8960 |
+
+
+ ### Framework versions
+
+ - Transformers 4.35.0
+ - Pytorch 2.1.0+cu118
+ - Datasets 2.14.6
+ - Tokenizers 0.14.1
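
The model card above does not include a usage snippet. A minimal inference sketch, assuming the checkpoint is published under the repo id `Akshay0706/Cinnamon-Plant-50-Epochs-Model` (or available as a local checkpoint directory) and using a placeholder image path:

```python
import torch
from PIL import Image
from transformers import ViTForImageClassification, ViTImageProcessor

# Assumed repo id; replace with the actual Hub path or a local checkpoint directory.
model_id = "Akshay0706/Cinnamon-Plant-50-Epochs-Model"

processor = ViTImageProcessor.from_pretrained(model_id)
model = ViTForImageClassification.from_pretrained(model_id)
model.eval()

# Placeholder input path; any RGB image will do.
image = Image.open("leaf.jpg").convert("RGB")
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

predicted_class = logits.argmax(-1).item()
# config.json maps both label ids to the bare integers 0 and 1, so this prints 0 or 1.
print(predicted_class, model.config.id2label[predicted_class])
```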
all_results.json ADDED
@@ -0,0 +1,9 @@
+ {
+     "epoch": 50.0,
+     "total_flos": 1.0771386556428288e+18,
+     "train_loss": 0.031592438941200576,
+     "train_runtime": 2755.6722,
+     "train_samples": 278,
+     "train_samples_per_second": 5.044,
+     "train_steps_per_second": 0.327
+ }
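
These aggregate figures are consistent with the training table in README.md: with 278 training samples and a batch size of 16, each epoch takes ceil(278 / 16) = 18 optimizer steps, so 50 epochs give 900 steps, matching the final table row. Likewise 900 steps / 2755.67 s ≈ 0.327 and (278 × 50) / 2755.67 s ≈ 5.04, matching `train_steps_per_second` and `train_samples_per_second`.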
config.json ADDED
@@ -0,0 +1,32 @@
+ {
+   "_name_or_path": "google/vit-base-patch16-224-in21k",
+   "architectures": [
+     "ViTForImageClassification"
+   ],
+   "attention_probs_dropout_prob": 0.0,
+   "encoder_stride": 16,
+   "hidden_act": "gelu",
+   "hidden_dropout_prob": 0.0,
+   "hidden_size": 768,
+   "id2label": {
+     "0": 0,
+     "1": 1
+   },
+   "image_size": 224,
+   "initializer_range": 0.02,
+   "intermediate_size": 3072,
+   "label2id": {
+     "0": 0,
+     "1": 1
+   },
+   "layer_norm_eps": 1e-12,
+   "model_type": "vit",
+   "num_attention_heads": 12,
+   "num_channels": 3,
+   "num_hidden_layers": 12,
+   "patch_size": 16,
+   "problem_type": "single_label_classification",
+   "qkv_bias": true,
+   "torch_dtype": "float32",
+   "transformers_version": "4.35.0"
+ }
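
Note that `id2label` and `label2id` map the two class indices to the bare integers 0 and 1 rather than to readable class names, so predictions surface as `0`/`1`. A sketch of how readable names could be attached at load time is below; the repo id and the class names are illustrative placeholders, not values taken from this repository:

```python
from transformers import ViTForImageClassification

# Assumed repo id; substitute the real Hub path or a local checkpoint directory.
model_id = "Akshay0706/Cinnamon-Plant-50-Epochs-Model"

# As committed, config.json gives id2label = {0: 0, 1: 1}.
model = ViTForImageClassification.from_pretrained(model_id)
print(model.config.id2label)

# Override the label maps when loading; "class_0" and "class_1" are placeholders,
# since the true class names are not recorded in this commit.
model = ViTForImageClassification.from_pretrained(
    model_id,
    id2label={0: "class_0", 1: "class_1"},
    label2id={"class_0": 0, "class_1": 1},
)
```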
model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:4e37c84bdcd867d2d78a84e706326e0bab56be50fd8051acbf77650eb71272a3
+ size 343223968
preprocessor_config.json ADDED
@@ -0,0 +1,22 @@
+ {
+   "do_normalize": true,
+   "do_rescale": true,
+   "do_resize": true,
+   "image_mean": [
+     0.5,
+     0.5,
+     0.5
+   ],
+   "image_processor_type": "ViTImageProcessor",
+   "image_std": [
+     0.5,
+     0.5,
+     0.5
+   ],
+   "resample": 2,
+   "rescale_factor": 0.00392156862745098,
+   "size": {
+     "height": 224,
+     "width": 224
+   }
+ }
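
These settings amount to: resize to 224x224 with bilinear resampling (`resample: 2` is PIL's bilinear filter), rescale by 1/255 (`rescale_factor` ≈ 0.003922), then normalize each channel with mean 0.5 and std 0.5, mapping pixel values from [0, 255] to [-1, 1]. A minimal sketch of the equivalent manual preprocessing, assuming a placeholder image path:

```python
import numpy as np
from PIL import Image

# Placeholder input path; any RGB image works.
image = Image.open("leaf.jpg").convert("RGB")

# do_resize: 224x224, where resample=2 corresponds to PIL bilinear filtering.
image = image.resize((224, 224), resample=Image.BILINEAR)

pixels = np.asarray(image).astype(np.float32)   # shape (224, 224, 3), values in [0, 255]
pixels = pixels * (1.0 / 255.0)                 # do_rescale with rescale_factor 1/255
pixels = (pixels - 0.5) / 0.5                   # do_normalize with image_mean/image_std 0.5

# Channels-first batch of one, the pixel_values layout ViTForImageClassification expects.
pixel_values = np.transpose(pixels, (2, 0, 1))[None, ...]
print(pixel_values.shape, pixel_values.min(), pixel_values.max())
```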
train_results.json ADDED
@@ -0,0 +1,9 @@
+ {
+     "epoch": 50.0,
+     "total_flos": 1.0771386556428288e+18,
+     "train_loss": 0.031592438941200576,
+     "train_runtime": 2755.6722,
+     "train_samples": 278,
+     "train_samples_per_second": 5.044,
+     "train_steps_per_second": 0.327
+ }
training_args.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:6c4070b2db35a06fe652bdae2fe7f96b465eaaaa7f56dc812e431e254fa32669
+ size 4536
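
`training_args.bin` is the pickled `TrainingArguments` object the `Trainer` saved alongside the checkpoint; it can be inspected with `torch.load("training_args.bin")`. A sketch of arguments consistent with the hyperparameters listed in README.md is below; the output directory and evaluation strategy are assumptions, and the serialized object contains many additional default fields:

```python
from transformers import TrainingArguments

# Sketch only: values mirror the README hyperparameters; output_dir and
# evaluation_strategy are assumptions, not read from training_args.bin.
training_args = TrainingArguments(
    output_dir="Cinnamon-Plant-50-Epochs-Model",
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=50,
    evaluation_strategy="epoch",  # the results table reports validation metrics once per epoch
    # Adam betas (0.9, 0.999) and epsilon 1e-08 from the README match the Trainer defaults.
)
```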