rshrott committed on
Commit 412612a
1 Parent(s): 448e483

Model save

README.md CHANGED
@@ -2,7 +2,6 @@
 license: apache-2.0
 base_model: google/vit-base-patch16-224-in21k
 tags:
-- image-classification
 - generated_from_trainer
 datasets:
 - renovation
@@ -15,7 +14,7 @@ model-index:
       name: Image Classification
       type: image-classification
     dataset:
-      name: renovations
+      name: renovation
       type: renovation
       config: default
       split: validation
@@ -23,7 +22,7 @@ model-index:
     metrics:
     - name: Accuracy
       type: accuracy
-      value: 0.6772727272727272
+      value: 0.6681818181818182
 ---
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -31,10 +30,10 @@ should probably proofread and complete it, then remove this comment. -->
 
 # vit-base-renovation
 
-This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the renovations dataset.
+This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the renovation dataset.
 It achieves the following results on the evaluation set:
-- Loss: 0.7634
-- Accuracy: 0.6773
+- Loss: 1.1725
+- Accuracy: 0.6682
 
 ## Model description
 
@@ -60,35 +59,36 @@ The following hyperparameters were used during training:
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
 - num_epochs: 4
+- mixed_precision_training: Native AMP
 
 ### Training results
 
 | Training Loss | Epoch | Step | Validation Loss | Accuracy |
 |:-------------:|:-----:|:----:|:---------------:|:--------:|
-| 0.9741 | 0.2 | 25 | 0.9575 | 0.4818 |
-| 0.9827 | 0.4 | 50 | 0.9344 | 0.5182 |
-| 0.8578 | 0.6 | 75 | 0.8343 | 0.6182 |
-| 0.9373 | 0.81 | 100 | 0.8896 | 0.5909 |
-| 0.7462 | 1.01 | 125 | 0.7969 | 0.6364 |
-| 0.6953 | 1.21 | 150 | 0.8157 | 0.6364 |
-| 0.5461 | 1.41 | 175 | 0.7634 | 0.6773 |
-| 0.6445 | 1.61 | 200 | 0.7743 | 0.6545 |
-| 0.5437 | 1.81 | 225 | 0.7717 | 0.65 |
-| 0.5911 | 2.02 | 250 | 0.8339 | 0.6364 |
-| 0.2483 | 2.22 | 275 | 0.8596 | 0.6318 |
-| 0.378 | 2.42 | 300 | 0.9897 | 0.6182 |
-| 0.2742 | 2.62 | 325 | 0.8965 | 0.6909 |
-| 0.1898 | 2.82 | 350 | 1.0262 | 0.6682 |
-| 0.2116 | 3.02 | 375 | 1.1058 | 0.6409 |
-| 0.0702 | 3.23 | 400 | 1.0473 | 0.6545 |
-| 0.0566 | 3.43 | 425 | 1.0962 | 0.6682 |
-| 0.0775 | 3.63 | 450 | 1.1502 | 0.65 |
-| 0.0485 | 3.83 | 475 | 1.1838 | 0.6455 |
+| 1.0036 | 0.2 | 25 | 0.9849 | 0.5 |
+| 0.8051 | 0.4 | 50 | 0.9106 | 0.5545 |
+| 0.8336 | 0.6 | 75 | 0.9004 | 0.5955 |
+| 0.786 | 0.81 | 100 | 0.7701 | 0.6455 |
+| 0.7854 | 1.01 | 125 | 0.7561 | 0.6227 |
+| 0.4603 | 1.21 | 150 | 0.8105 | 0.6409 |
+| 0.4934 | 1.41 | 175 | 0.8746 | 0.6182 |
+| 0.5315 | 1.61 | 200 | 0.8267 | 0.6636 |
+| 0.5251 | 1.81 | 225 | 0.8585 | 0.65 |
+| 0.4386 | 2.02 | 250 | 0.7101 | 0.6909 |
+| 0.2627 | 2.22 | 275 | 1.0042 | 0.6409 |
+| 0.1524 | 2.42 | 300 | 0.9489 | 0.6545 |
+| 0.1272 | 2.62 | 325 | 1.0663 | 0.65 |
+| 0.186 | 2.82 | 350 | 1.0831 | 0.6545 |
+| 0.1544 | 3.02 | 375 | 1.1153 | 0.6364 |
+| 0.0803 | 3.23 | 400 | 1.0399 | 0.6409 |
+| 0.041 | 3.43 | 425 | 1.0911 | 0.6818 |
+| 0.0685 | 3.63 | 450 | 1.1890 | 0.6591 |
+| 0.0475 | 3.83 | 475 | 1.1725 | 0.6682 |
 
 
 ### Framework versions
 
-- Transformers 4.31.0
-- Pytorch 2.0.1+cu118
-- Datasets 2.14.2
-- Tokenizers 0.13.3
+- Transformers 4.38.2
+- Pytorch 2.2.1+cu121
+- Datasets 2.18.0
+- Tokenizers 0.15.2
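
The hyperparameter list this hunk extends maps one-to-one onto `transformers.TrainingArguments`. A minimal sketch of that mapping follows; the `output_dir` is hypothetical, and `learning_rate` and batch sizes are not visible in this diff, so they are omitted rather than guessed:

```python
from transformers import TrainingArguments

# Sketch of the training setup implied by the card's hyperparameter list.
# output_dir is hypothetical; learning_rate and batch sizes are not shown
# in this diff, so Trainer defaults would apply unless set explicitly.
training_args = TrainingArguments(
    output_dir="vit-base-renovation",
    num_train_epochs=4,            # num_epochs: 4
    lr_scheduler_type="linear",    # lr_scheduler_type: linear
    adam_beta1=0.9,                # optimizer: Adam with betas=(0.9,0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-8,             # and epsilon=1e-08
    fp16=True,                     # mixed_precision_training: Native AMP
    evaluation_strategy="steps",   # the results table evaluates every 25 steps
    eval_steps=25,
)
```

The `fp16=True` line is what the card reports as "Native AMP"; the steps-based evaluation settings are read off the results table, which logs validation loss and accuracy at steps 25, 50, 75, and so on.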
model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:50b922d8834db19831b60175a2357d62d19604c04e17e863ead0f67944602edb
+oid sha256:5e3c45dca0fcb675b14fde544c41d738b97c4755eeb3ffe7360795b70bdb6519
 size 343227052
runs/Mar18_12-58-57_f3a456390bcc/events.out.tfevents.1710766739.f3a456390bcc.682.0 CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:9aad1c6fe968a44ca5b46f0e66c497091592daa6b56c793488b405070dc3bfd2
-size 20585
+oid sha256:018e57ba8c825ddea475f58b429c46189510112c5e1458daa0fc256ac834699d
+size 21361
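
Since this commit also swaps the LFS-tracked `model.safetensors` for a new checkpoint, a quick way to sanity-check the uploaded weights is to run the image-classification pipeline against the repo. A minimal sketch, assuming the repo id is `rshrott/vit-base-renovation` (inferred from the committer name and model name, not stated in the diff):

```python
from transformers import pipeline

# Hypothetical repo id, inferred from committer name + model name.
classifier = pipeline("image-classification", model="rshrott/vit-base-renovation")

# Accepts a local path, URL, or PIL image; returns a list of
# {"label": ..., "score": ...} dicts for the model's classes.
print(classifier("example_photo.jpg"))
```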