Dhyey8 committed
Commit f6d535f
1 Parent(s): 305db3a

Model save

README.md ADDED
@@ -0,0 +1,117 @@
---
license: apache-2.0
base_model: google/vit-base-patch16-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: vit-base-patch16-224-finetuned-teeth_dataset
  results:
  - task:
      name: Image Classification
      type: image-classification
    dataset:
      name: imagefolder
      type: imagefolder
      config: default
      split: train
      args: default
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.9347826086956522
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# vit-base-patch16-224-finetuned-teeth_dataset

This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1649
- Accuracy: 0.9348

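For convenience, here is a minimal inference sketch using the Transformers `pipeline` API. It is not part of the original card: the Hub repo id is assumed from the committer and model name, and the image path is a placeholder.

```python
# Minimal usage sketch (not from the original card).
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="Dhyey8/vit-base-patch16-224-finetuned-teeth_dataset",  # assumed repo id
)

# Replace with the path to your own image file.
predictions = classifier("example_tooth_image.jpg")
print(predictions)  # list of {"label": ..., "score": ...} dicts

```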
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

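The card metadata only records the dataset type as `imagefolder`. As an illustration, the sketch below shows how such a folder-per-class dataset is typically loaded with the `datasets` library; the directory path and the 90/10 split are assumptions, not details from this repository.

```python
# Illustrative loading sketch for an image-folder style dataset
# (one sub-directory per class). The data_dir path is an assumption.
from datasets import load_dataset

dataset = load_dataset("imagefolder", data_dir="./teeth_dataset")

# Carve an evaluation split out of "train"; the 10% ratio is illustrative.
splits = dataset["train"].train_test_split(test_size=0.1, seed=42)
train_ds, eval_ds = splits["train"], splits["test"]
print(train_ds.features)  # an "image" column plus a "label" ClassLabel

```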
## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (mapped to a `TrainingArguments` sketch below):
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50

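For reference, a hedged sketch of how the values above map onto `TrainingArguments` (Transformers 4.38) is shown here; the `output_dir` is an assumption.

```python
# Sketch only: maps the hyperparameters listed above onto TrainingArguments.
# The effective batch size of 128 comes from train_batch_size (32) x
# gradient_accumulation_steps (4).
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="vit-base-patch16-224-finetuned-teeth_dataset",  # assumed
    learning_rate=5e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    gradient_accumulation_steps=4,
    seed=42,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=50,
    # The Adam betas/epsilon listed above match the default AdamW settings.
)
```
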
### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log        | 0.8   | 3    | 4.6533          | 0.0087   |
| No log        | 1.87  | 7    | 4.5848          | 0.0065   |
| 4.6048        | 2.93  | 11   | 4.4608          | 0.0304   |
| 4.6048        | 4.0   | 15   | 4.2857          | 0.0848   |
| 4.6048        | 4.8   | 18   | 4.1470          | 0.1152   |
| 4.2716        | 5.87  | 22   | 3.9641          | 0.2043   |
| 4.2716        | 6.93  | 26   | 3.7705          | 0.3152   |
| 3.7404        | 8.0   | 30   | 3.5809          | 0.4196   |
| 3.7404        | 8.8   | 33   | 3.4766          | 0.4522   |
| 3.7404        | 9.87  | 37   | 3.2981          | 0.5087   |
| 3.1589        | 10.93 | 41   | 3.1132          | 0.6087   |
| 3.1589        | 12.0  | 45   | 2.9494          | 0.6696   |
| 3.1589        | 12.8  | 48   | 2.8361          | 0.6783   |
| 2.6384        | 13.87 | 52   | 2.6521          | 0.7348   |
| 2.6384        | 14.93 | 56   | 2.4943          | 0.7587   |
| 2.1342        | 16.0  | 60   | 2.3422          | 0.7848   |
| 2.1342        | 16.8  | 63   | 2.2327          | 0.8109   |
| 2.1342        | 17.87 | 67   | 2.0834          | 0.8261   |
| 1.714         | 18.93 | 71   | 1.9834          | 0.8565   |
| 1.714         | 20.0  | 75   | 1.8932          | 0.8674   |
| 1.714         | 20.8  | 78   | 1.8618          | 0.8587   |
| 1.4427        | 21.87 | 82   | 1.6974          | 0.8891   |
| 1.4427        | 22.93 | 86   | 1.6663          | 0.8891   |
| 1.1858        | 24.0  | 90   | 1.6014          | 0.8848   |
| 1.1858        | 24.8  | 93   | 1.5112          | 0.9043   |
| 1.1858        | 25.87 | 97   | 1.4732          | 0.9109   |
| 1.0222        | 26.93 | 101  | 1.4304          | 0.9065   |
| 1.0222        | 28.0  | 105  | 1.3915          | 0.9130   |
| 1.0222        | 28.8  | 108  | 1.3509          | 0.9217   |
| 0.8306        | 29.87 | 112  | 1.3054          | 0.9283   |
| 0.8306        | 30.93 | 116  | 1.2870          | 0.9261   |
| 0.7391        | 32.0  | 120  | 1.2645          | 0.9283   |
| 0.7391        | 32.8  | 123  | 1.2454          | 0.9261   |
| 0.7391        | 33.87 | 127  | 1.2395          | 0.9283   |
| 0.6971        | 34.93 | 131  | 1.2076          | 0.9304   |
| 0.6971        | 36.0  | 135  | 1.1821          | 0.9326   |
| 0.6971        | 36.8  | 138  | 1.1736          | 0.9348   |
| 0.6758        | 37.87 | 142  | 1.1671          | 0.9326   |
| 0.6758        | 38.93 | 146  | 1.1656          | 0.9348   |
| 0.6445        | 40.0  | 150  | 1.1649          | 0.9348   |

### Framework versions

- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
model.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:bbdd1270f4de3246f784d9ef2e8e48269561babbc3a0f3296eacfb046654b244
+ oid sha256:71d4c40b078654491537fe137baab4961b3614512fd3e661c27fdd912a82ffb5
  size 343500824
runs/Apr07_07-36-43_dc3c4f3a0653/events.out.tfevents.1712475670.dc3c4f3a0653.306.1 CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:3dec047807cd6e1f5980526960526da94d65e8fc4092a5fc9b32b1f5bde2680f
- size 23135
+ oid sha256:a3f5acab4e33cdeb5b2c8d9838b77abfa9f1b9769009b49b0ff2b2e51bbaad83
+ size 23489