---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: vit-large-patch32-384-finetuned-melanoma
  results:
  - task:
      name: Image Classification
      type: image-classification
    dataset:
      name: imagefolder
      type: imagefolder
      config: default
      split: train
      args: default
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.8272727272727273
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# vit-large-patch32-384-finetuned-melanoma

This model is a fine-tuned version of [google/vit-large-patch32-384](https://huggingface.co/google/vit-large-patch32-384) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0767
- Accuracy: 0.8273
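
The original card does not include a usage snippet; below is a minimal inference sketch using the Transformers `pipeline` API. The repository id is a placeholder (the actual Hub namespace is not stated in the card), and `lesion.jpg` is just an example input path.

```python
from transformers import pipeline

# Hypothetical repo id; replace with the actual Hub path of this checkpoint.
classifier = pipeline(
    "image-classification",
    model="your-username/vit-large-patch32-384-finetuned-melanoma",
)

# Any RGB image file works; the pipeline handles resizing to 384x384.
predictions = classifier("lesion.jpg")  # list of {"label": ..., "score": ...}, best first
print(predictions[0])
```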

## Model description

This checkpoint is [google/vit-large-patch32-384](https://huggingface.co/google/vit-large-patch32-384), a ViT-Large Vision Transformer with 32×32 patches and 384×384 input resolution, fine-tuned for image classification. Per the model name and the metadata above, the target task is melanoma classification on a local `imagefolder` dataset; details of the label set have not been provided.

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a minimal `TrainingArguments` sketch follows the list):
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 40
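
For reference, the sketch below maps these settings onto a Transformers `TrainingArguments` object. The `output_dir` and the evaluation/save strategies are assumptions (the per-epoch results table suggests per-epoch evaluation); the dataset, image processor, and `Trainer` construction are omitted.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="vit-large-patch32-384-finetuned-melanoma",  # placeholder
    learning_rate=1e-5,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=1,
    seed=42,
    gradient_accumulation_steps=4,   # effective train batch size: 1 x 4 = 4
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=40,
    evaluation_strategy="epoch",     # assumption: evaluate once per epoch
    save_strategy="epoch",           # assumption: checkpoint once per epoch
)
```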

### Training results

| Training Loss | Epoch | Step  | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.0081        | 1.0   | 550   | 0.7650          | 0.68     |
| 0.7527        | 2.0   | 1100  | 0.6693          | 0.7364   |
| 0.6234        | 3.0   | 1650  | 0.6127          | 0.7709   |
| 2.6284        | 4.0   | 2200  | 0.6788          | 0.7655   |
| 0.1406        | 5.0   | 2750  | 0.6657          | 0.7836   |
| 0.317         | 6.0   | 3300  | 0.6936          | 0.78     |
| 2.5358        | 7.0   | 3850  | 0.7104          | 0.7909   |
| 1.5802        | 8.0   | 4400  | 0.6928          | 0.8      |
| 0.088         | 9.0   | 4950  | 0.8060          | 0.7982   |
| 0.0183        | 10.0  | 5500  | 0.7811          | 0.8091   |
| 0.0074        | 11.0  | 6050  | 0.7185          | 0.7945   |
| 0.0448        | 12.0  | 6600  | 0.8780          | 0.7909   |
| 0.4288        | 13.0  | 7150  | 0.8229          | 0.82     |
| 0.017         | 14.0  | 7700  | 0.7516          | 0.8182   |
| 0.0057        | 15.0  | 8250  | 0.7974          | 0.7964   |
| 1.7571        | 16.0  | 8800  | 0.7866          | 0.8218   |
| 1.3159        | 17.0  | 9350  | 0.8491          | 0.8073   |
| 1.649         | 18.0  | 9900  | 0.8432          | 0.7891   |
| 0.0014        | 19.0  | 10450 | 0.8870          | 0.82     |
| 0.002         | 20.0  | 11000 | 0.9460          | 0.8236   |
| 0.3717        | 21.0  | 11550 | 0.8866          | 0.8327   |
| 0.0025        | 22.0  | 12100 | 1.0287          | 0.8073   |
| 0.0094        | 23.0  | 12650 | 0.9696          | 0.8091   |
| 0.002         | 24.0  | 13200 | 0.9659          | 0.8018   |
| 0.1001        | 25.0  | 13750 | 0.9712          | 0.8327   |
| 0.2953        | 26.0  | 14300 | 1.0512          | 0.8236   |
| 0.0141        | 27.0  | 14850 | 1.0503          | 0.82     |
| 0.612         | 28.0  | 15400 | 1.2020          | 0.8109   |
| 0.0792        | 29.0  | 15950 | 1.0498          | 0.8364   |
| 0.0117        | 30.0  | 16500 | 1.0079          | 0.8327   |
| 0.0568        | 31.0  | 17050 | 1.0199          | 0.8255   |
| 0.0001        | 32.0  | 17600 | 1.0319          | 0.8291   |
| 0.075         | 33.0  | 18150 | 1.0427          | 0.8382   |
| 0.001         | 34.0  | 18700 | 1.1289          | 0.8382   |
| 0.0001        | 35.0  | 19250 | 1.0589          | 0.8364   |
| 0.0006        | 36.0  | 19800 | 1.0349          | 0.8236   |
| 0.0023        | 37.0  | 20350 | 1.1192          | 0.8273   |
| 0.0002        | 38.0  | 20900 | 1.0863          | 0.8273   |
| 0.2031        | 39.0  | 21450 | 1.0604          | 0.8255   |
| 0.0006        | 40.0  | 22000 | 1.0767          | 0.8273   |


### Framework versions

- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.0
- Tokenizers 0.13.2