rshrott committed on
Commit
f6586e1
1 Parent(s): 52989c2

Model save

Browse files
README.md CHANGED
@@ -2,7 +2,6 @@
 license: apache-2.0
 base_model: google/vit-base-patch16-224-in21k
 tags:
-- image-classification
 - generated_from_trainer
 datasets:
 - renovation
@@ -15,7 +14,7 @@ model-index:
 name: Image Classification
 type: image-classification
 dataset:
-name: renovations
+name: renovation
 type: renovation
 config: default
 split: validation
@@ -23,8 +22,7 @@ model-index:
 metrics:
 - name: Accuracy
 type: accuracy
-value: 0.6027397260273972
-pipeline_tag: image-classification
+value: 0.7031963470319634
 ---
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -32,10 +30,10 @@ should probably proofread and complete it, then remove this comment. -->
 
 # vit-base-renovation2
 
-This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the renovations dataset.
+This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the renovation dataset.
 It achieves the following results on the evaluation set:
-- Loss: 1.2384
-- Accuracy: 0.6027
+- Loss: 0.9788
+- Accuracy: 0.7032
 
 ## Model description
 
@@ -67,30 +65,30 @@ The following hyperparameters were used during training:
 
 | Training Loss | Epoch | Step | Validation Loss | Accuracy |
 |:-------------:|:-----:|:----:|:---------------:|:--------:|
-| 0.273  | 0.2  | 25  | 1.2384 | 0.6027 |
-| 0.5153 | 0.4  | 50  | 1.4060 | 0.5845 |
-| 0.2792 | 0.6  | 75  | 1.3026 | 0.5936 |
-| 0.5516 | 0.81 | 100 | 1.3999 | 0.6027 |
-| 0.4247 | 1.01 | 125 | 1.2621 | 0.5982 |
-| 0.1556 | 1.21 | 150 | 1.5661 | 0.5571 |
-| 0.1458 | 1.41 | 175 | 1.3459 | 0.6347 |
-| 0.1595 | 1.61 | 200 | 1.5278 | 0.5982 |
-| 0.1195 | 1.81 | 225 | 1.5303 | 0.6256 |
-| 0.1507 | 2.02 | 250 | 1.7701 | 0.5845 |
-| 0.023  | 2.22 | 275 | 1.5354 | 0.6301 |
-| 0.028  | 2.42 | 300 | 1.6535 | 0.6301 |
-| 0.0698 | 2.62 | 325 | 1.6772 | 0.6438 |
-| 0.0516 | 2.82 | 350 | 1.4380 | 0.6804 |
-| 0.0136 | 3.02 | 375 | 1.6561 | 0.6484 |
-| 0.0325 | 3.23 | 400 | 1.6028 | 0.6621 |
-| 0.0149 | 3.43 | 425 | 1.6261 | 0.6621 |
-| 0.0082 | 3.63 | 450 | 1.6615 | 0.6621 |
-| 0.0093 | 3.83 | 475 | 1.6878 | 0.6530 |
+| 1.359  | 0.2  | 25  | 1.2074 | 0.4658 |
+| 1.1384 | 0.4  | 50  | 1.1213 | 0.5205 |
+| 1.0866 | 0.6  | 75  | 0.9746 | 0.6301 |
+| 1.1787 | 0.81 | 100 | 1.0523 | 0.5662 |
+| 0.9242 | 1.01 | 125 | 0.9543 | 0.6256 |
+| 0.7945 | 1.21 | 150 | 0.9200 | 0.6119 |
+| 0.8379 | 1.41 | 175 | 0.8447 | 0.6712 |
+| 0.7253 | 1.61 | 200 | 0.8642 | 0.6575 |
+| 0.6344 | 1.81 | 225 | 0.8443 | 0.6438 |
+| 0.6521 | 2.02 | 250 | 0.8273 | 0.6667 |
+| 0.3627 | 2.22 | 275 | 0.8653 | 0.6712 |
+| 0.2523 | 2.42 | 300 | 0.8748 | 0.6895 |
+| 0.363  | 2.62 | 325 | 0.8407 | 0.6849 |
+| 0.3433 | 2.82 | 350 | 0.9696 | 0.6484 |
+| 0.2874 | 3.02 | 375 | 0.9290 | 0.6804 |
+| 0.1682 | 3.23 | 400 | 0.9713 | 0.6575 |
+| 0.1575 | 3.43 | 425 | 0.9963 | 0.6804 |
+| 0.0822 | 3.63 | 450 | 0.9473 | 0.7123 |
+| 0.1678 | 3.83 | 475 | 0.9788 | 0.7032 |
+
 
 
----
 ### Framework versions
 
-- Transformers 4.38.2
+- Transformers 4.39.1
 - Pytorch 2.2.1+cu121
 - Datasets 2.18.0
-- Tokenizers 0.15.2
+- Tokenizers 0.15.2
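The rounded figures in the new card body (Loss 0.9788, Accuracy 0.7032) can be checked against the full-precision metric value recorded in the card's YAML front matter. A quick sketch; the observation that both accuracy values are exact multiples of 1/219 is our inference (a 219-example validation split is plausible but not stated anywhere in the card):

```python
# Full-precision accuracy values from the card's YAML front matter.
old_accuracy = 0.6027397260273972  # before this commit
new_accuracy = 0.7031963470319634  # after this commit

# The card body reports these rounded to four decimal places.
assert round(old_accuracy, 4) == 0.6027
assert round(new_accuracy, 4) == 0.7032

# Both values are exact multiples of 1/219, which would correspond to
# 132 and 154 correct predictions on a 219-example validation split --
# an inference, since the split size is not stated in the card.
assert abs(old_accuracy - 132 / 219) < 1e-12
assert abs(new_accuracy - 154 / 219) < 1e-12
```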
model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:0b1962199a6da927eb05d9f93f48ea1e9bd246aaec8e428b67aadef732e8a11b
+oid sha256:4ec8459792611f30b48ccef4d3cea268b3c38761db15db7e278e1647d4d3c098
 size 343239356
runs/Mar22_21-16-36_9f0b864d5439/events.out.tfevents.1711142204.9f0b864d5439.318.0 CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:956e412021523647196326224cd72b001cf084dcbe30ffcbb5a457cf61e436d3
-size 20792
+oid sha256:95bee9b11bc610ff8c19806a4b48a7a4fdfc640557986350364563332edbd125
+size 21568
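Both binary files in this commit are stored as Git LFS pointers with the three-line `version`/`oid`/`size` layout shown above, so the diff shows only the content hash changing (the safetensors file keeps the same byte size). A minimal sketch of reading such a pointer, using the new `model.safetensors` pointer text from this commit; the helper name is ours:

```python
def parse_lfs_pointer(text: str) -> dict:
    """Split a Git LFS pointer file into its space-separated key/value fields."""
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

# Pointer contents after this commit, copied from the diff above.
pointer = """\
version https://git-lfs.github.com/spec/v1
oid sha256:4ec8459792611f30b48ccef4d3cea268b3c38761db15db7e278e1647d4d3c098
size 343239356
"""

info = parse_lfs_pointer(pointer)
print(info["oid"])         # the sha256 content hash that changed in this commit
print(int(info["size"]))   # 343239356 bytes -- unchanged, since only the weights' values differ
```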