Raihan004 committed on
Commit 299ac26
1 Parent(s): bcf18c7

Model save
README.md CHANGED
@@ -2,7 +2,6 @@
 license: apache-2.0
 base_model: google/vit-base-patch16-224-in21k
 tags:
-- image-classification
 - generated_from_trainer
 datasets:
 - imagefolder
@@ -15,7 +14,7 @@ model-index:
       name: Image Classification
       type: image-classification
     dataset:
-      name: Action_small_dataset
+      name: imagefolder
       type: imagefolder
       config: default
       split: train
@@ -23,7 +22,7 @@ model-index:
     metrics:
     - name: Accuracy
       type: accuracy
-      value: 0.8785276073619632
+      value: 0.8711656441717791
 ---

 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -31,10 +30,10 @@ should probably proofread and complete it, then remove this comment. -->

 # Action_all_10_class

-This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the Action_small_dataset dataset.
+This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
 It achieves the following results on the evaluation set:
-- Loss: 0.4146
-- Accuracy: 0.8785
+- Loss: 0.4745
+- Accuracy: 0.8712

 ## Model description

@@ -53,33 +52,47 @@ More information needed
 ### Training hyperparameters

 The following hyperparameters were used during training:
-- learning_rate: 0.0002
+- learning_rate: 0.0001
 - train_batch_size: 16
 - eval_batch_size: 8
 - seed: 42
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
-- num_epochs: 5
+- num_epochs: 10
 - mixed_precision_training: Native AMP

 ### Training results

 | Training Loss | Epoch | Step | Validation Loss | Accuracy |
 |:-------------:|:-----:|:----:|:---------------:|:--------:|
-| 1.1239        | 0.35  | 100  | 0.9934          | 0.7117   |
-| 0.8878        | 0.69  | 200  | 0.7667          | 0.7706   |
-| 0.9037        | 1.04  | 300  | 0.6369          | 0.8098   |
-| 0.7307        | 1.38  | 400  | 0.5772          | 0.8319   |
-| 0.6624        | 1.73  | 500  | 0.6925          | 0.7718   |
-| 0.5781        | 2.08  | 600  | 0.5439          | 0.8405   |
-| 0.5537        | 2.42  | 700  | 0.5257          | 0.8331   |
-| 0.4112        | 2.77  | 800  | 0.4500          | 0.8564   |
-| 0.3263        | 3.11  | 900  | 0.4911          | 0.8417   |
-| 0.4592        | 3.46  | 1000 | 0.4551          | 0.8712   |
-| 0.3204        | 3.81  | 1100 | 0.4325          | 0.8724   |
-| 0.3268        | 4.15  | 1200 | 0.4529          | 0.8540   |
-| 0.4267        | 4.5   | 1300 | 0.4356          | 0.8724   |
-| 0.2886        | 4.84  | 1400 | 0.4146          | 0.8785   |
+| 1.1996        | 0.35  | 100  | 1.0635          | 0.7730   |
+| 1.0335        | 0.69  | 200  | 0.8392          | 0.7718   |
+| 0.6279        | 1.04  | 300  | 0.6463          | 0.8294   |
+| 0.8633        | 1.38  | 400  | 0.7172          | 0.7926   |
+| 0.5851        | 1.73  | 500  | 0.5858          | 0.8380   |
+| 0.5305        | 2.08  | 600  | 0.5780          | 0.8356   |
+| 0.5511        | 2.42  | 700  | 0.5313          | 0.8393   |
+| 0.4657        | 2.77  | 800  | 0.5443          | 0.8368   |
+| 0.3615        | 3.11  | 900  | 0.5038          | 0.8429   |
+| 0.5301        | 3.46  | 1000 | 0.5101          | 0.8503   |
+| 0.4108        | 3.81  | 1100 | 0.5212          | 0.8479   |
+| 0.4223        | 4.15  | 1200 | 0.5328          | 0.8429   |
+| 0.3877        | 4.5   | 1300 | 0.5815          | 0.8294   |
+| 0.3879        | 4.84  | 1400 | 0.5151          | 0.8503   |
+| 0.2797        | 5.19  | 1500 | 0.5160          | 0.8564   |
+| 0.2628        | 5.54  | 1600 | 0.4618          | 0.8699   |
+| 0.3404        | 5.88  | 1700 | 0.4903          | 0.8675   |
+| 0.3033        | 6.23  | 1800 | 0.4861          | 0.8663   |
+| 0.214         | 6.57  | 1900 | 0.4853          | 0.8687   |
+| 0.2763        | 6.92  | 2000 | 0.4705          | 0.8736   |
+| 0.3009        | 7.27  | 2100 | 0.4723          | 0.8626   |
+| 0.1543        | 7.61  | 2200 | 0.4983          | 0.8638   |
+| 0.2407        | 7.96  | 2300 | 0.4742          | 0.8650   |
+| 0.2679        | 8.3   | 2400 | 0.4935          | 0.8724   |
+| 0.1508        | 8.65  | 2500 | 0.4826          | 0.8675   |
+| 0.2129        | 9.0   | 2600 | 0.4981          | 0.8712   |
+| 0.1131        | 9.34  | 2700 | 0.4718          | 0.8712   |
+| 0.2144        | 9.69  | 2800 | 0.4745          | 0.8712   |


 ### Framework versions
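As a sanity check on the new training log above, the (epoch, step) pairs let one back out the run's geometry: the final row logs step 2800 at epoch 9.69, which implies roughly 289 optimizer steps per epoch, and with the listed train_batch_size of 16 suggests a training set on the order of 4,600 images. The dataset size is an inference from the log, not something the model card states:

```python
# Back out steps-per-epoch and an approximate dataset size from the
# logged (epoch, step) pairs in the training-results table.
# NOTE: the dataset-size figure is an estimate, not stated in the card.

def steps_per_epoch(step: int, epoch: float) -> float:
    """Optimizer steps per epoch implied by one logged table row."""
    return step / epoch

est = steps_per_epoch(2800, 9.69)  # final logged row of the new table
print(round(est))                  # ~289 steps per epoch
print(round(est) * 16)             # ~4624 training examples at batch size 16
```

The same arithmetic on the earlier rows (e.g. step 100 at epoch 0.35) lands near the same figure, so the log is internally consistent.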
model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:378bc1b2a46162fca2d8cbae6c384887e379afe64609c177c0f1d7b05a3e6ea6
+oid sha256:0f61316bc1a243b059e9dcd0fe71ce8b98779581ae8518ed8add02819e691e39
 size 343248584
runs/Apr30_15-11-45_55836b80922a/events.out.tfevents.1714489905.55836b80922a.34.0 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:88d6525d44b914e3e1c91518d38e9a4c460c17124ef47c2a7c66008a426f2647
+size 75946
training_args.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:405d5f7c60f45612e847cd409de8106be3239247e18b525f29a24d38270667fb
+oid sha256:296f8badb767e43d6b269820ee660789df95b0d63355645c9bcbe9fba99da0c4
 size 4920
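The `model.safetensors` and `training_args.bin` diffs in this commit change Git LFS pointer files, not the binary payloads themselves: a v1 pointer is exactly three lines (spec version, `oid sha256:...`, `size` in bytes), and the repository swaps the old oid for the new one while the file size stays the same. A minimal sketch of that pointer format, filled in with the new `model.safetensors` oid from this commit:

```python
# Build a Git LFS v1 pointer file body (the three-line text that is
# actually versioned in the repo; the real weights live in LFS storage).

def lfs_pointer(oid: str, size: int) -> str:
    """Return the pointer text for a blob with the given sha256 oid and byte size."""
    return (
        "version https://git-lfs.github.com/spec/v1\n"
        f"oid sha256:{oid}\n"
        f"size {size}\n"
    )

# New model.safetensors pointer from this commit:
ptr = lfs_pointer(
    "0f61316bc1a243b059e9dcd0fe71ce8b98779581ae8518ed8add02819e691e39",
    343248584,
)
print(ptr)
```

Since both the old and new pointers record `size 343248584`, the commit replaces the weight tensors with a retrained set of identical shape rather than changing the architecture.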