Model save

- README.md +21 -32
- config.json +8 -0
- model.safetensors +1 -1
- runs/Oct02_17-11-29_593ca4d065ee/events.out.tfevents.1727889099.593ca4d065ee.197.0 +3 -0
- training_args.bin +1 -1
README.md
CHANGED
@@ -3,28 +3,12 @@ library_name: transformers
 license: apache-2.0
 base_model: google/vit-base-patch16-224-in21k
 tags:
-- image-classification
 - generated_from_trainer
-datasets:
-- imagefolder
 metrics:
 - accuracy
 model-index:
 - name: finetuned-fake-food
-  results:
-  - task:
-      name: Image Classification
-      type: image-classification
-    dataset:
-      name: indian_food_images
-      type: imagefolder
-      config: default
-      split: train
-      args: default
-    metrics:
-    - name: Accuracy
-      type: accuracy
-      value: 0.8828996282527881
+  results: []
 ---

 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -32,10 +16,10 @@ should probably proofread and complete it, then remove this comment. -->

 # finetuned-fake-food

-This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on
+This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
 It achieves the following results on the evaluation set:
-- Loss: 0.
-- Accuracy: 0.
+- Loss: 0.4941
+- Accuracy: 0.8387

 ## Model description

@@ -54,29 +38,34 @@ More information needed
 ### Training hyperparameters

 The following hyperparameters were used during training:
-- learning_rate:
+- learning_rate: 3e-05
 - train_batch_size: 4
 - eval_batch_size: 8
 - seed: 42
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: cosine
-- num_epochs:
+- num_epochs: 15
 - mixed_precision_training: Native AMP

 ### Training results

 | Training Loss | Epoch | Step | Validation Loss | Accuracy |
 |:-------------:|:-----:|:----:|:---------------:|:--------:|
-| 0.
-| 0.
-| 0.
-| 0.
-| 0.
-| 0.
-| 0.
-| 0.
-| 0.
-| 0.
+| 0.6061        | 1.0   | 176  | 0.5937          | 0.6855   |
+| 0.481         | 2.0   | 352  | 0.5138          | 0.8226   |
+| 0.5522        | 3.0   | 528  | 0.4973          | 0.8065   |
+| 0.4092        | 4.0   | 704  | 0.5557          | 0.7903   |
+| 0.4882        | 5.0   | 880  | 0.4998          | 0.7984   |
+| 0.4442        | 6.0   | 1056 | 0.4647          | 0.8387   |
+| 0.5749        | 7.0   | 1232 | 0.4464          | 0.8306   |
+| 0.4529        | 8.0   | 1408 | 0.5366          | 0.8065   |
+| 0.5287        | 9.0   | 1584 | 0.4633          | 0.8387   |
+| 0.3821        | 10.0  | 1760 | 0.4983          | 0.8387   |
+| 0.2409        | 11.0  | 1936 | 0.4855          | 0.8548   |
+| 0.2025        | 12.0  | 2112 | 0.5102          | 0.8387   |
+| 0.2045        | 13.0  | 2288 | 0.4942          | 0.8387   |
+| 0.4097        | 14.0  | 2464 | 0.4954          | 0.8387   |
+| 0.5798        | 15.0  | 2640 | 0.4941          | 0.8387   |


 ### Framework versions
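The hyperparameters listed in the updated card map directly onto the `transformers` Trainer API. A minimal sketch of that configuration, assuming `TrainingArguments` from `transformers`; the `output_dir` is a placeholder, and `fp16=True` corresponds to the card's "Native AMP" mixed-precision entry:

```python
from transformers import TrainingArguments

# Sketch of the configuration the README reports; not taken from the repo itself.
training_args = TrainingArguments(
    output_dir="finetuned-fake-food",  # placeholder
    learning_rate=3e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=8,
    seed=42,
    adam_beta1=0.9,      # Adam betas and epsilon as listed in the card
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="cosine",
    num_train_epochs=15,
    fp16=True,           # "mixed_precision_training: Native AMP"
)
```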
config.json
CHANGED
@@ -8,9 +8,17 @@
   "hidden_act": "gelu",
   "hidden_dropout_prob": 0.0,
   "hidden_size": 768,
+  "id2label": {
+    "0": 0,
+    "1": 1
+  },
   "image_size": 224,
   "initializer_range": 0.02,
   "intermediate_size": 3072,
+  "label2id": {
+    "0": "0",
+    "1": "1"
+  },
   "layer_norm_eps": 1e-12,
   "model_type": "vit",
   "num_attention_heads": 12,
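The added `id2label`/`label2id` entries give the two-class head only numeric names. A minimal sketch of overriding them with readable labels at load time, assuming the `transformers` Auto classes; the names "fake" and "real" are hypothetical, since the committed config only stores "0" and "1":

```python
from transformers import AutoConfig, AutoModelForImageClassification

# Hypothetical readable names; the checkpoint's config maps "0" -> 0 and "1" -> 1.
id2label = {0: "fake", 1: "real"}
label2id = {name: idx for idx, name in id2label.items()}

config = AutoConfig.from_pretrained(
    "finetuned-fake-food",  # placeholder for the actual repo id or local path
    id2label=id2label,
    label2id=label2id,
)
model = AutoModelForImageClassification.from_pretrained(
    "finetuned-fake-food", config=config
)
```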
model.safetensors
CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
+oid sha256:eabdafadb31d1f3b20274a36080a30cb90146d186e9cc8ac84766d1b49fdbee5
 size 343223968
runs/Oct02_17-11-29_593ca4d065ee/events.out.tfevents.1727889099.593ca4d065ee.197.0
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:38043eafb6f1efac1e883cd1a222047e229b8cf0516942f13323d7c95a9299d5
+size 65730
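The committed event file holds the scalar curves behind the training-results table. A minimal sketch of reading them back with the `tensorboard` package's `EventAccumulator`; the tag name `eval/accuracy` is an assumption, as the actual tags depend on the Trainer version that wrote the file:

```python
from tensorboard.backend.event_processing.event_accumulator import EventAccumulator

# Point the accumulator at the committed event file (a run directory also works).
ea = EventAccumulator(
    "runs/Oct02_17-11-29_593ca4d065ee/"
    "events.out.tfevents.1727889099.593ca4d065ee.197.0"
)
ea.Reload()

print(ea.Tags()["scalars"])                # list the scalar tags actually logged
for event in ea.Scalars("eval/accuracy"):  # tag name is an assumption
    print(event.step, event.value)
```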
training_args.bin
CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
+oid sha256:4fe01866a6f83bf87f30c930ef50951b616117b48ec3954223fcdcd977dd5774
 size 5176
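Both `model.safetensors` and `training_args.bin` are stored as Git LFS pointers, so the text committed here records only the blob's SHA-256 and byte size. One way to check a downloaded blob against its pointer, sketched with the Python standard library only:

```python
import hashlib

def blob_sha256(path: str, chunk_size: int = 1 << 20) -> str:
    """Hash a file in chunks, matching the oid a Git LFS pointer records."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# After fetching the real file (e.g. via `git lfs pull`), compare to the pointer:
assert blob_sha256("training_args.bin") == (
    "4fe01866a6f83bf87f30c930ef50951b616117b48ec3954223fcdcd977dd5774"
)
```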