selmamalak committed on
Commit
a7cf66a
1 Parent(s): 566bfe4

Model save

Files changed (1):
README.md +82 -0
README.md ADDED
@@ -0,0 +1,82 @@
---
license: apache-2.0
library_name: peft
tags:
- generated_from_trainer
datasets:
- medmnist-v2
metrics:
- accuracy
- precision
- recall
- f1
base_model: google/vit-base-patch16-224-in21k
model-index:
- name: blood-vit-base-finetuned
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# blood-vit-base-finetuned

This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the medmnist-v2 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0627
- Accuracy: 0.9790
- Precision: 0.9764
- Recall: 0.9812
- F1: 0.9786

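Since this is a PEFT adapter rather than a full model checkpoint, inference means loading the base ViT and attaching the adapter weights on top. The sketch below shows one way to do that; the adapter repo id `selmamalak/blood-vit-base-finetuned`, the 8-class head (the BloodMNIST subset of MedMNIST v2 has 8 classes), and the input path are assumptions, not recorded in this card.

```python
import torch
from PIL import Image
from peft import PeftModel
from transformers import AutoImageProcessor, AutoModelForImageClassification

BASE_ID = "google/vit-base-patch16-224-in21k"
ADAPTER_ID = "selmamalak/blood-vit-base-finetuned"  # assumed Hub repo id

# Load the base ViT with a fresh classification head; num_labels=8 assumes
# the BloodMNIST subset of MedMNIST v2.
processor = AutoImageProcessor.from_pretrained(BASE_ID)
base = AutoModelForImageClassification.from_pretrained(BASE_ID, num_labels=8)

# Attach the fine-tuned PEFT adapter on top of the base weights.
model = PeftModel.from_pretrained(base, ADAPTER_ID)
model.eval()

image = Image.open("blood_smear.png").convert("RGB")  # placeholder input
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print("predicted class index:", logits.argmax(-1).item())
```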
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.005
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP

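As a rough reproduction guide, the sketch below maps the list above onto the `peft` and `transformers` APIs. Only the `TrainingArguments` values come from the list; the LoRA settings, the 8-class head, and the output directory are placeholders that this card does not record.

```python
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForImageClassification, TrainingArguments

# LoRA settings are placeholders; the card does not record them.
lora_config = LoraConfig(
    r=16,
    lora_alpha=16,
    target_modules=["query", "value"],
    modules_to_save=["classifier"],
)
base = AutoModelForImageClassification.from_pretrained(
    "google/vit-base-patch16-224-in21k",
    num_labels=8,  # assumes the 8-class BloodMNIST subset
)
model = get_peft_model(base, lora_config)

# Mirrors the hyperparameter list above. Adam betas=(0.9,0.999) and
# epsilon=1e-08 are already the TrainingArguments defaults.
training_args = TrainingArguments(
    output_dir="blood-vit-base-finetuned",
    learning_rate=0.005,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    gradient_accumulation_steps=4,  # 16 * 4 = total train batch size of 64
    num_train_epochs=10,
    lr_scheduler_type="linear",
    seed=42,
    fp16=True,  # "Native AMP" mixed precision
    evaluation_strategy="epoch",  # the table below reports one eval per epoch
)
```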
### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1     |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| 0.4059        | 1.0   | 187  | 0.1878          | 0.9311   | 0.9132    | 0.9328 | 0.9201 |
| 0.3796        | 2.0   | 374  | 0.2729          | 0.9083   | 0.9131    | 0.8875 | 0.8861 |
| 0.424         | 3.0   | 561  | 0.3701          | 0.8668   | 0.8797    | 0.8520 | 0.8492 |
| 0.3141        | 4.0   | 748  | 0.1849          | 0.9381   | 0.9267    | 0.9336 | 0.9283 |
| 0.2553        | 5.0   | 935  | 0.1075          | 0.9644   | 0.9630    | 0.9612 | 0.9617 |
| 0.2686        | 6.0   | 1122 | 0.1679          | 0.9486   | 0.9561    | 0.9437 | 0.9489 |
| 0.2556        | 7.0   | 1309 | 0.0934          | 0.9661   | 0.9651    | 0.9599 | 0.9619 |
| 0.1777        | 8.0   | 1496 | 0.0835          | 0.9696   | 0.9697    | 0.9683 | 0.9686 |
| 0.1607        | 9.0   | 1683 | 0.0739          | 0.9772   | 0.9733    | 0.9792 | 0.9759 |
| 0.1898        | 10.0  | 1870 | 0.0627          | 0.9790   | 0.9764    | 0.9812 | 0.9786 |

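For reference, a `compute_metrics` callback along the following lines would produce the four metric columns above. The macro averaging mode is an assumption; the card does not state how precision, recall, and F1 were aggregated across classes.

```python
import numpy as np
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    # average="macro" is an assumption; the card does not record the mode.
    precision, recall, f1, _ = precision_recall_fscore_support(
        labels, preds, average="macro", zero_division=0
    )
    return {
        "accuracy": accuracy_score(labels, preds),
        "precision": precision,
        "recall": recall,
        "f1": f1,
    }
```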
### Framework versions

- PEFT 0.9.0
- Transformers 4.38.2
- PyTorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2