gcperk20 committed on
Commit 886beea
1 Parent(s): 686a43f

Model save

Files changed (2)
  1. README.md +97 -0
  2. pytorch_model.bin +1 -1
README.md ADDED
@@ -0,0 +1,97 @@
+ ---
+ license: apache-2.0
+ base_model: facebook/deit-small-patch16-224
+ tags:
+ - generated_from_trainer
+ datasets:
+ - imagefolder
+ metrics:
+ - accuracy
+ model-index:
+ - name: deit-small-patch16-224-finetuned-piid
+   results:
+   - task:
+       name: Image Classification
+       type: image-classification
+     dataset:
+       name: imagefolder
+       type: imagefolder
+       config: default
+       split: val
+       args: default
+     metrics:
+     - name: Accuracy
+       type: accuracy
+       value: 0.7671232876712328
+ ---
+
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->
+
+ # deit-small-patch16-224-finetuned-piid
+
+ This model is a fine-tuned version of [facebook/deit-small-patch16-224](https://huggingface.co/facebook/deit-small-patch16-224) on the imagefolder dataset.
+ It achieves the following results on the evaluation set:
+ - Loss: 0.6202
+ - Accuracy: 0.7671
+
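For quick reference, a minimal inference sketch using the Transformers `pipeline` API is shown below. The repository id `gcperk20/deit-small-patch16-224-finetuned-piid` and the image path are illustrative assumptions, not details taken from this card.

```python
# Minimal inference sketch for the fine-tuned DeiT classifier.
# The repo id below is an assumption based on the committer and model name.
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="gcperk20/deit-small-patch16-224-finetuned-piid",
)

# "example.jpg" is a placeholder path to any input image.
predictions = classifier("example.jpg")
print(predictions)  # list of {"label": ..., "score": ...} dicts, highest score first
```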
+ ## Model description
+
+ More information needed
+
+ ## Intended uses & limitations
+
+ More information needed
+
+ ## Training and evaluation data
+
+ More information needed
+
+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training:
+ - learning_rate: 5e-05
+ - train_batch_size: 8
+ - eval_batch_size: 8
+ - seed: 42
+ - gradient_accumulation_steps: 4
+ - total_train_batch_size: 32
+ - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
+ - lr_scheduler_type: linear
+ - lr_scheduler_warmup_ratio: 0.1
+ - num_epochs: 20
+
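The list above maps directly onto `transformers.TrainingArguments`. The sketch below is an illustration under assumptions: `output_dir` and `evaluation_strategy` are not stated in this card, and the Adam betas/epsilon listed above are the library defaults.

```python
# Sketch of TrainingArguments reproducing the hyperparameters listed above.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="deit-small-patch16-224-finetuned-piid",  # assumed output directory
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=4,  # 8 per device x 4 steps = total train batch size 32
    num_train_epochs=20,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    seed=42,
    evaluation_strategy="epoch",  # assumed; the card reports one evaluation per epoch
    # Adam betas=(0.9, 0.999) and epsilon=1e-08 are the TrainingArguments defaults.
)
```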
+ ### Training results
+
+ | Training Loss | Epoch | Step | Validation Loss | Accuracy |
+ |:-------------:|:-----:|:----:|:---------------:|:--------:|
+ | 1.1537 | 0.98 | 20 | 1.0005 | 0.5479 |
+ | 0.7025 | 2.0 | 41 | 0.8481 | 0.5936 |
+ | 0.6581 | 2.98 | 61 | 0.6351 | 0.7215 |
+ | 0.5019 | 4.0 | 82 | 0.6696 | 0.7215 |
+ | 0.4708 | 4.98 | 102 | 0.5861 | 0.7534 |
+ | 0.3647 | 6.0 | 123 | 0.5584 | 0.7763 |
+ | 0.2973 | 6.98 | 143 | 0.5784 | 0.7671 |
+ | 0.2827 | 8.0 | 164 | 0.5851 | 0.7671 |
+ | 0.237 | 8.98 | 184 | 0.6791 | 0.7626 |
+ | 0.2505 | 10.0 | 205 | 0.5550 | 0.7626 |
+ | 0.2018 | 10.98 | 225 | 0.5446 | 0.7626 |
+ | 0.1841 | 12.0 | 246 | 0.5497 | 0.7443 |
+ | 0.1692 | 12.98 | 266 | 0.5917 | 0.7717 |
+ | 0.1624 | 14.0 | 287 | 0.5254 | 0.7763 |
+ | 0.1518 | 14.98 | 307 | 0.5296 | 0.7808 |
+ | 0.1275 | 16.0 | 328 | 0.5858 | 0.7626 |
+ | 0.1107 | 16.98 | 348 | 0.5919 | 0.7763 |
+ | 0.1192 | 18.0 | 369 | 0.6027 | 0.7717 |
+ | 0.0842 | 18.98 | 389 | 0.6435 | 0.7717 |
+ | 0.1472 | 19.51 | 400 | 0.6202 | 0.7671 |
+
+
+ ### Framework versions
+
+ - Transformers 4.33.3
+ - Pytorch 2.0.1+cu118
+ - Datasets 2.14.5
+ - Tokenizers 0.13.3
pytorch_model.bin CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:48a70247c5573d8aefb35a41c2aeb18426fd4ca905f893df0ca57af55c4cdc3a
+ oid sha256:a95655fe0c7349cc4f07be89bf45022fbbf6fdd517ada64f0eeac96bbfaaabcd
  size 86736749