zqTensor committed
Commit d3151a9 · verified · 1 Parent(s): a9451f9

Model save

Files changed (3)
  1. README.md +43 -45
  2. config.json +0 -1
  3. training_args.bin +1 -1
README.md CHANGED
@@ -3,8 +3,6 @@ library_name: transformers
 license: apache-2.0
 base_model: google/vit-base-patch16-224-in21k
 tags:
- - image-classification
- - vision
 - generated_from_trainer
 metrics:
 - accuracy
@@ -18,10 +16,10 @@ should probably proofread and complete it, then remove this comment. -->
 
 # vit-base-beans
 
- This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the beans dataset.
+ This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
 It achieves the following results on the evaluation set:
- - Loss: 0.0079
 - Accuracy: 1.0
+ - Loss: 0.0079
 
 ## Model description
 
@@ -50,7 +48,7 @@ The following hyperparameters were used during training:
 - total_eval_batch_size: 16
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
- - num_epochs: 50.0
+ - num_epochs: 3.0
 
 ### Training results
 
@@ -61,46 +59,46 @@ The following hyperparameters were used during training:
 | 0.1438 | 3.0 | 390 | 0.9699 | 0.0981 |
 | 0.0833 | 4.0 | 520 | 0.9925 | 0.0656 |
 | 0.1107 | 5.0 | 650 | 0.9774 | 0.0817 |
- | 0.098 | 11.0 | 715 | 0.0570 | 0.9925 |
- | 0.0935 | 12.0 | 780 | 0.0418 | 1.0 |
- | 0.0907 | 13.0 | 845 | 0.1093 | 0.9699 |
- | 0.0947 | 14.0 | 910 | 0.0347 | 1.0 |
- | 0.1259 | 15.0 | 975 | 0.0710 | 0.9850 |
- | 0.0325 | 16.0 | 1040 | 0.0587 | 0.9774 |
- | 0.1397 | 17.0 | 1105 | 0.0495 | 0.9925 |
- | 0.0456 | 18.0 | 1170 | 0.0519 | 0.9774 |
- | 0.0439 | 19.0 | 1235 | 0.0216 | 1.0 |
- | 0.0484 | 20.0 | 1300 | 0.0316 | 0.9925 |
- | 0.0276 | 21.0 | 1365 | 0.0192 | 1.0 |
- | 0.0348 | 22.0 | 1430 | 0.0177 | 1.0 |
- | 0.0326 | 23.0 | 1495 | 0.0175 | 1.0 |
- | 0.1014 | 24.0 | 1560 | 0.0235 | 0.9925 |
- | 0.0395 | 25.0 | 1625 | 0.0451 | 0.9850 |
- | 0.0265 | 26.0 | 1690 | 0.0297 | 0.9925 |
- | 0.0569 | 27.0 | 1755 | 0.0263 | 0.9925 |
- | 0.0666 | 28.0 | 1820 | 0.0245 | 0.9850 |
- | 0.0285 | 29.0 | 1885 | 0.0418 | 0.9774 |
- | 0.0892 | 30.0 | 1950 | 0.0204 | 0.9925 |
- | 0.0371 | 31.0 | 2015 | 0.0339 | 0.9850 |
- | 0.0105 | 32.0 | 2080 | 0.0143 | 1.0 |
- | 0.0563 | 33.0 | 2145 | 0.0140 | 1.0 |
- | 0.0573 | 34.0 | 2210 | 0.0102 | 1.0 |
- | 0.0409 | 35.0 | 2275 | 0.0096 | 1.0 |
- | 0.0523 | 36.0 | 2340 | 0.0149 | 0.9925 |
- | 0.0131 | 37.0 | 2405 | 0.0197 | 0.9925 |
- | 0.0329 | 38.0 | 2470 | 0.0109 | 1.0 |
- | 0.0577 | 39.0 | 2535 | 0.0096 | 1.0 |
- | 0.0085 | 40.0 | 2600 | 0.0147 | 0.9925 |
- | 0.0618 | 41.0 | 2665 | 0.0094 | 1.0 |
- | 0.0847 | 42.0 | 2730 | 0.0197 | 0.9925 |
- | 0.0291 | 43.0 | 2795 | 0.0089 | 1.0 |
- | 0.0568 | 44.0 | 2860 | 0.0087 | 1.0 |
- | 0.0077 | 45.0 | 2925 | 0.0104 | 1.0 |
- | 0.008 | 46.0 | 2990 | 0.0138 | 1.0 |
- | 0.0272 | 47.0 | 3055 | 0.0081 | 1.0 |
- | 0.008 | 48.0 | 3120 | 0.0084 | 1.0 |
- | 0.0112 | 49.0 | 3185 | 0.0082 | 1.0 |
- | 0.013 | 50.0 | 3250 | 0.0079 | 1.0 |
+ | 0.098 | 11.0 | 715 | 0.9925 | 0.0570 |
+ | 0.0935 | 12.0 | 780 | 1.0 | 0.0418 |
+ | 0.0907 | 13.0 | 845 | 0.9699 | 0.1093 |
+ | 0.0947 | 14.0 | 910 | 1.0 | 0.0347 |
+ | 0.1259 | 15.0 | 975 | 0.9850 | 0.0710 |
+ | 0.0325 | 16.0 | 1040 | 0.9774 | 0.0587 |
+ | 0.1397 | 17.0 | 1105 | 0.9925 | 0.0495 |
+ | 0.0456 | 18.0 | 1170 | 0.9774 | 0.0519 |
+ | 0.0439 | 19.0 | 1235 | 1.0 | 0.0216 |
+ | 0.0484 | 20.0 | 1300 | 0.9925 | 0.0316 |
+ | 0.0276 | 21.0 | 1365 | 1.0 | 0.0192 |
+ | 0.0348 | 22.0 | 1430 | 1.0 | 0.0177 |
+ | 0.0326 | 23.0 | 1495 | 1.0 | 0.0175 |
+ | 0.1014 | 24.0 | 1560 | 0.9925 | 0.0235 |
+ | 0.0395 | 25.0 | 1625 | 0.9850 | 0.0451 |
+ | 0.0265 | 26.0 | 1690 | 0.9925 | 0.0297 |
+ | 0.0569 | 27.0 | 1755 | 0.9925 | 0.0263 |
+ | 0.0666 | 28.0 | 1820 | 0.9850 | 0.0245 |
+ | 0.0285 | 29.0 | 1885 | 0.9774 | 0.0418 |
+ | 0.0892 | 30.0 | 1950 | 0.9925 | 0.0204 |
+ | 0.0371 | 31.0 | 2015 | 0.9850 | 0.0339 |
+ | 0.0105 | 32.0 | 2080 | 1.0 | 0.0143 |
+ | 0.0563 | 33.0 | 2145 | 1.0 | 0.0140 |
+ | 0.0573 | 34.0 | 2210 | 1.0 | 0.0102 |
+ | 0.0409 | 35.0 | 2275 | 1.0 | 0.0096 |
+ | 0.0523 | 36.0 | 2340 | 0.9925 | 0.0149 |
+ | 0.0131 | 37.0 | 2405 | 0.9925 | 0.0197 |
+ | 0.0329 | 38.0 | 2470 | 1.0 | 0.0109 |
+ | 0.0577 | 39.0 | 2535 | 1.0 | 0.0096 |
+ | 0.0085 | 40.0 | 2600 | 0.9925 | 0.0147 |
+ | 0.0618 | 41.0 | 2665 | 1.0 | 0.0094 |
+ | 0.0847 | 42.0 | 2730 | 0.9925 | 0.0197 |
+ | 0.0291 | 43.0 | 2795 | 1.0 | 0.0089 |
+ | 0.0568 | 44.0 | 2860 | 1.0 | 0.0087 |
+ | 0.0077 | 45.0 | 2925 | 1.0 | 0.0104 |
+ | 0.008 | 46.0 | 2990 | 1.0 | 0.0138 |
+ | 0.0272 | 47.0 | 3055 | 1.0 | 0.0081 |
+ | 0.008 | 48.0 | 3120 | 1.0 | 0.0084 |
+ | 0.0112 | 49.0 | 3185 | 1.0 | 0.0082 |
+ | 0.013 | 50.0 | 3250 | 1.0 | 0.0079 |
 
 
 ### Framework versions
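
The updated card still describes an image-classification fine-tune, so a minimal sketch of trying it for inference. The repository id `zqTensor/vit-base-beans` is an assumption (inferred from the committer and model name), and the image path is hypothetical:

```python
from transformers import pipeline

# "zqTensor/vit-base-beans" is an assumed repo id (committer + model name);
# replace it with the actual repository if it differs.
classifier = pipeline("image-classification", model="zqTensor/vit-base-beans")

# Hypothetical local image path; a URL or a PIL.Image also works here.
print(classifier("path/to/leaf.jpg"))
```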
config.json CHANGED
@@ -28,7 +28,6 @@
 "num_channels": 3,
 "num_hidden_layers": 12,
 "patch_size": 16,
- "problem_type": "single_label_classification",
 "qkv_bias": true,
 "torch_dtype": "float32",
 "transformers_version": "4.45.0.dev0"
training_args.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
- oid sha256:d8a71bc8b6e59a8c186761409f46da2225287f92d88c4fd16865315a3a06d6b5
+ oid sha256:5821938548d773da97cbce4b0f762a51312d8b2f8765f999f527ebf8146931b7
 size 5240
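
training_args.bin is stored through Git LFS, so the diff only shows the pointer's oid changing while the size stays 5240 bytes. A minimal sketch of fetching and inspecting the real file, assuming the same repository id as above and that unpickling this trusted object is acceptable:

```python
import torch
from huggingface_hub import hf_hub_download

# Assumed repo id; this downloads the actual LFS blob behind the pointer.
path = hf_hub_download(repo_id="zqTensor/vit-base-beans", filename="training_args.bin")

# The file is a pickled transformers.TrainingArguments object, so full
# unpickling is required; only do this for repositories you trust.
args = torch.load(path, weights_only=False)
print(args.num_train_epochs, args.learning_rate)
```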