anaghasavit committed
Commit 1beddef
1 Parent(s): dd402ec

update model card README.md

Files changed (1)
  1. README.md +91 -0
README.md ADDED
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: beit-base-patch16-224-pt22k-ft22k-finetunedt
  results:
  - task:
      name: Image Classification
      type: image-classification
    dataset:
      name: imagefolder
      type: imagefolder
      config: train
      split: train
      args: train
    metrics:
    - name: Accuracy
      type: accuracy
      value: 1.0
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# beit-base-patch16-224-pt22k-ft22k-finetunedt

This model is a fine-tuned version of [microsoft/beit-base-patch16-224-pt22k-ft22k](https://huggingface.co/microsoft/beit-base-patch16-224-pt22k-ft22k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0000
- Accuracy: 1.0
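
Below is a minimal inference sketch (not part of the auto-generated card). The model id is an assumption: point it at this repository on the Hub, or at a local directory containing the fine-tuned weights.

```python
from transformers import pipeline

# Minimal usage sketch; the model id below is an assumption, swap in the
# actual Hub repository id or a local checkpoint directory.
classifier = pipeline(
    "image-classification",
    model="anaghasavit/beit-base-patch16-224-pt22k-ft22k-finetunedt",
)

# Accepts a local file path, a URL, or a PIL image.
predictions = classifier("path/to/image.jpg")
print(predictions)  # e.g. [{"label": "...", "score": 0.99}, ...]
```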

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed
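
The metadata lists the dataset only as `imagefolder`. For reference, an image-folder dataset is typically loaded as sketched below; the directory path is a placeholder, since the card does not say where the images live.

```python
from datasets import load_dataset

# Sketch of the usual imagefolder loading pattern; "path/to/images" is a
# placeholder. Class labels are inferred from the subdirectory names.
dataset = load_dataset("imagefolder", data_dir="path/to/images")
print(dataset["train"].features)
```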

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a configuration sketch follows the list):
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 15
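
A minimal sketch of how these values map onto `transformers.TrainingArguments`, assuming the standard `Trainer` loop was used; `output_dir` and the epoch-based evaluation/logging strategies are assumptions, not taken from the card.

```python
from transformers import TrainingArguments

# Sketch only: output_dir and the epoch-based strategies are assumptions.
# Adam betas=(0.9, 0.999) and epsilon=1e-08 are the TrainingArguments defaults.
training_args = TrainingArguments(
    output_dir="beit-base-patch16-224-pt22k-ft22k-finetunedt",
    learning_rate=5e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    gradient_accumulation_steps=4,  # 32 * 4 = 128 effective train batch size
    num_train_epochs=15,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    seed=42,
    evaluation_strategy="epoch",
    logging_strategy="epoch",
)
```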

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.7369        | 1.0   | 25   | 0.0425          | 0.9972   |
| 0.007         | 2.0   | 50   | 0.0005          | 1.0      |
| 0.0041        | 3.0   | 75   | 0.0003          | 1.0      |
| 0.0011        | 4.0   | 100  | 0.0002          | 1.0      |
| 0.0008        | 5.0   | 125  | 0.0001          | 1.0      |
| 0.0055        | 6.0   | 150  | 0.0002          | 1.0      |
| 0.0007        | 7.0   | 175  | 0.0001          | 1.0      |
| 0.0047        | 8.0   | 200  | 0.0001          | 1.0      |
| 0.0005        | 9.0   | 225  | 0.0001          | 1.0      |
| 0.006         | 10.0  | 250  | 0.0001          | 1.0      |
| 0.0065        | 11.0  | 275  | 0.0001          | 1.0      |
| 0.0023        | 12.0  | 300  | 0.0001          | 1.0      |
| 0.0003        | 13.0  | 325  | 0.0001          | 1.0      |
| 0.0011        | 14.0  | 350  | 0.0000          | 1.0      |
| 0.0003        | 15.0  | 375  | 0.0000          | 1.0      |

### Framework versions

- Transformers 4.26.0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2