Alimuddin committed
Commit 172cbe6
1 Parent(s): c6d966f

End of training

Files changed (2)
  1. README.md +24 -29
  2. pytorch_model.bin +1 -1
README.md CHANGED
@@ -1,10 +1,10 @@
  ---
  license: apache-2.0
- base_model: google/vit-base-patch16-224-in21k
+ base_model: facebook/convnext-large-224-22k-1k
  tags:
  - generated_from_trainer
  datasets:
- - imagefolder
+ - imagenet_10
  metrics:
  - accuracy
  model-index:
@@ -14,15 +14,15 @@ model-index:
  name: Image Classification
  type: image-classification
  dataset:
- name: imagefolder
- type: imagefolder
+ name: imagenet_10
+ type: imagenet_10
  config: default
- split: train
+ split: train[:7000]
  args: default
  metrics:
  - name: Accuracy
  type: accuracy
- value: 0.4375
+ value: 0.9942857142857143
  ---

  <!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -30,10 +30,10 @@ should probably proofread and complete it, then remove this comment. -->

  # image_classification

- This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
+ This model is a fine-tuned version of [facebook/convnext-large-224-22k-1k](https://huggingface.co/facebook/convnext-large-224-22k-1k) on the imagenet_10 dataset.
  It achieves the following results on the evaluation set:
- - Loss: 1.5566
- - Accuracy: 0.4375
+ - Loss: 0.0357
+ - Accuracy: 0.9943

  ## Model description

@@ -52,38 +52,33 @@
  ### Training hyperparameters

  The following hyperparameters were used during training:
- - learning_rate: 1e-05
- - train_batch_size: 15
- - eval_batch_size: 15
+ - learning_rate: 0.0001
+ - train_batch_size: 17
+ - eval_batch_size: 17
  - seed: 42
  - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  - lr_scheduler_type: linear
- - num_epochs: 15
+ - num_epochs: 10

  ### Training results

  | Training Loss | Epoch | Step | Validation Loss | Accuracy |
  |:-------------:|:-----:|:----:|:---------------:|:--------:|
- | No log | 1.0 | 43 | 2.0423 | 0.2562 |
- | No log | 2.0 | 86 | 1.9764 | 0.2812 |
- | No log | 3.0 | 129 | 1.8803 | 0.3125 |
- | No log | 4.0 | 172 | 1.7690 | 0.3187 |
- | No log | 5.0 | 215 | 1.6910 | 0.375 |
- | No log | 6.0 | 258 | 1.6397 | 0.3688 |
- | No log | 7.0 | 301 | 1.6053 | 0.4688 |
- | No log | 8.0 | 344 | 1.5674 | 0.4875 |
- | No log | 9.0 | 387 | 1.5714 | 0.4625 |
- | No log | 10.0 | 430 | 1.5394 | 0.4938 |
- | No log | 11.0 | 473 | 1.5183 | 0.4375 |
- | 1.6941 | 12.0 | 516 | 1.5211 | 0.4938 |
- | 1.6941 | 13.0 | 559 | 1.4997 | 0.4562 |
- | 1.6941 | 14.0 | 602 | 1.5191 | 0.4375 |
- | 1.6941 | 15.0 | 645 | 1.4892 | 0.4875 |
+ | No log | 1.0 | 330 | 0.0637 | 0.9843 |
+ | 0.0602 | 2.0 | 660 | 0.0664 | 0.9821 |
+ | 0.0602 | 3.0 | 990 | 0.0843 | 0.9843 |
+ | 0.0468 | 4.0 | 1320 | 0.0452 | 0.9879 |
+ | 0.0313 | 5.0 | 1650 | 0.0347 | 0.9914 |
+ | 0.0313 | 6.0 | 1980 | 0.0432 | 0.9914 |
+ | 0.0232 | 7.0 | 2310 | 0.0314 | 0.99 |
+ | 0.0223 | 8.0 | 2640 | 0.0337 | 0.9921 |
+ | 0.0223 | 9.0 | 2970 | 0.0381 | 0.99 |
+ | 0.0177 | 10.0 | 3300 | 0.0321 | 0.9921 |


  ### Framework versions

- - Transformers 4.33.2
+ - Transformers 4.33.3
  - Pytorch 2.0.1+cu118
  - Datasets 2.14.5
  - Tokenizers 0.13.3
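
As an illustration only (not part of the commit), the hyperparameters in the updated card correspond roughly to the `transformers.TrainingArguments` sketch below. The `output_dir` and `evaluation_strategy` values are assumptions; the numeric settings are taken from the card, and the optimizer matches the Trainer's default AdamW with the listed betas and epsilon.

```python
# Sketch: hyperparameters from the updated model card expressed as
# TrainingArguments (Transformers 4.33.x API).
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="image_classification",  # placeholder, not stated in the commit
    learning_rate=1e-4,                 # card: learning_rate 0.0001
    per_device_train_batch_size=17,     # card: train_batch_size 17
    per_device_eval_batch_size=17,      # card: eval_batch_size 17
    num_train_epochs=10,                # card: num_epochs 10
    lr_scheduler_type="linear",         # card: lr_scheduler_type linear
    seed=42,                            # card: seed 42
    evaluation_strategy="epoch",        # assumption, consistent with per-epoch eval in the results table
)
```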
pytorch_model.bin CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:4de1c19f9e1b0bfedb7db2f8805232a794384314db1e80981781bebdd3510998
+ oid sha256:b75edd40f7f11683848bf8633c211b986ff22540102c2efcdb94c97004b49b9d
  size 785101485
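
For completeness, a minimal inference sketch for the updated checkpoint. The repository id below is a placeholder; substitute the repo this commit belongs to.

```python
# Minimal usage sketch for the fine-tuned image classifier.
# "<username>/image_classification" is a placeholder repo id, not taken from the commit.
from transformers import pipeline

classifier = pipeline("image-classification", model="<username>/image_classification")
predictions = classifier("path/to/image.jpg")  # illustrative image path
print(predictions)  # list of {"label": ..., "score": ...} entries
```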