# google-vit-base-patch16-224-batch32-lr0.005-standford-dogs
This model is a fine-tuned version of google/vit-base-patch16-224 on the stanford-dogs dataset. It achieves the following results on the evaluation set:
- Loss: 0.4511
- Accuracy: 0.8797
- F1: 0.8759
- Precision: 0.8812
- Recall: 0.8766
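The F1, precision, and recall values sit close to, but not exactly at, the accuracy, which is what macro averaging over the 120 breed classes typically produces (an assumption; the card does not state the averaging mode). A minimal pure-Python sketch of macro-averaged metrics on a toy three-class example:

```python
from collections import defaultdict

def macro_metrics(y_true, y_pred):
    """Macro-averaged precision, recall, and F1 over the observed label set."""
    labels = sorted(set(y_true) | set(y_pred))
    tp, fp, fn = defaultdict(int), defaultdict(int), defaultdict(int)
    for t, p in zip(y_true, y_pred):
        if t == p:
            tp[t] += 1
        else:
            fp[p] += 1  # predicted p, but p was wrong
            fn[t] += 1  # true class t was missed
    precisions, recalls, f1s = [], [], []
    for c in labels:
        prec = tp[c] / (tp[c] + fp[c]) if tp[c] + fp[c] else 0.0
        rec = tp[c] / (tp[c] + fn[c]) if tp[c] + fn[c] else 0.0
        f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
        precisions.append(prec)
        recalls.append(rec)
        f1s.append(f1)
    n = len(labels)
    return sum(precisions) / n, sum(recalls) / n, sum(f1s) / n

# Toy example with 3 classes
y_true = [0, 0, 1, 1, 2, 2]
y_pred = [0, 1, 1, 1, 2, 0]
p, r, f = macro_metrics(y_true, y_pred)
```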
## Model description
More information needed
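For context: google/vit-base-patch16-224 splits each 224×224 input into non-overlapping 16×16 patches, and fine-tuning replaces the classification head with one over the Stanford Dogs breed labels (120 classes in the standard dataset — background knowledge, not stated in this card). A quick sanity check of the resulting token count:

```python
# ViT-base/16 at 224x224 resolution
image_size, patch_size = 224, 16
patches_per_side = image_size // patch_size  # 14 patches along each axis
num_patches = patches_per_side ** 2          # 196 image patches
seq_len = num_patches + 1                    # +1 for the [CLS] token
```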
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 1000
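Two of these values are derived rather than independent: with `gradient_accumulation_steps` of 4, an optimizer update is taken every 4 micro-batches of 32, which gives the `total_train_batch_size` of 128; and a linear scheduler decays the learning rate from 5e-05 toward 0 over the 1000 training steps (assuming no warmup, which the card does not mention). A minimal sketch of both:

```python
# Effective batch size under gradient accumulation
train_batch_size = 32
gradient_accumulation_steps = 4
total_train_batch_size = train_batch_size * gradient_accumulation_steps  # 128

# Linear learning-rate decay (no-warmup assumption)
base_lr = 5e-05
training_steps = 1000

def linear_lr(step):
    """LR at a given optimizer step: base_lr at step 0, 0 at the final step."""
    return base_lr * max(0.0, 1.0 - step / training_steps)

lr_halfway = linear_lr(500)  # halfway through training: 2.5e-05
```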
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|---|---|---|---|---|---|---|---|
4.8453 | 0.0777 | 10 | 4.6341 | 0.0355 | 0.0304 | 0.0311 | 0.0364 |
4.5433 | 0.1553 | 20 | 4.3107 | 0.1246 | 0.0982 | 0.1263 | 0.1225 |
4.2752 | 0.2330 | 30 | 3.9697 | 0.2719 | 0.2176 | 0.2518 | 0.2632 |
3.9872 | 0.3107 | 40 | 3.6402 | 0.4274 | 0.3661 | 0.4264 | 0.4167 |
3.7182 | 0.3883 | 50 | 3.3251 | 0.5362 | 0.4888 | 0.5817 | 0.5247 |
3.473 | 0.4660 | 60 | 3.0453 | 0.6220 | 0.5815 | 0.6516 | 0.6115 |
3.2252 | 0.5437 | 70 | 2.7739 | 0.6817 | 0.6506 | 0.7194 | 0.6713 |
2.9976 | 0.6214 | 80 | 2.5391 | 0.7046 | 0.6756 | 0.7286 | 0.6954 |
2.762 | 0.6990 | 90 | 2.2990 | 0.7505 | 0.7258 | 0.7646 | 0.7421 |
2.5763 | 0.7767 | 100 | 2.1075 | 0.7646 | 0.7434 | 0.7793 | 0.7556 |
2.4357 | 0.8544 | 110 | 1.9226 | 0.7850 | 0.7652 | 0.8027 | 0.7768 |
2.2669 | 0.9320 | 120 | 1.7673 | 0.8008 | 0.7838 | 0.8149 | 0.7938 |
2.1459 | 1.0097 | 130 | 1.6339 | 0.8175 | 0.8058 | 0.8291 | 0.8110 |
1.9822 | 1.0874 | 140 | 1.5204 | 0.8214 | 0.8114 | 0.8366 | 0.8151 |
1.8701 | 1.1650 | 150 | 1.4219 | 0.8173 | 0.8091 | 0.8330 | 0.8117 |
1.8007 | 1.2427 | 160 | 1.3224 | 0.8292 | 0.8205 | 0.8390 | 0.8233 |
1.8004 | 1.3204 | 170 | 1.2553 | 0.8324 | 0.8243 | 0.8413 | 0.8271 |
1.6511 | 1.3981 | 180 | 1.1728 | 0.8372 | 0.8282 | 0.8467 | 0.8314 |
1.548 | 1.4757 | 190 | 1.1091 | 0.8394 | 0.8300 | 0.8500 | 0.8340 |
1.5634 | 1.5534 | 200 | 1.0561 | 0.8345 | 0.8263 | 0.8444 | 0.8287 |
1.5163 | 1.6311 | 210 | 0.9983 | 0.8457 | 0.8382 | 0.8512 | 0.8409 |
1.3883 | 1.7087 | 220 | 0.9574 | 0.8499 | 0.8425 | 0.8545 | 0.8452 |
1.3161 | 1.7864 | 230 | 0.9129 | 0.8511 | 0.8425 | 0.8564 | 0.8457 |
1.304 | 1.8641 | 240 | 0.8727 | 0.8535 | 0.8454 | 0.8570 | 0.8487 |
1.3268 | 1.9417 | 250 | 0.8412 | 0.8511 | 0.8441 | 0.8572 | 0.8473 |
1.2388 | 2.0194 | 260 | 0.8104 | 0.8569 | 0.8482 | 0.8608 | 0.8522 |
1.1333 | 2.0971 | 270 | 0.7920 | 0.8557 | 0.8486 | 0.8596 | 0.8516 |
1.1305 | 2.1748 | 280 | 0.7565 | 0.8579 | 0.8505 | 0.8630 | 0.8534 |
1.1849 | 2.2524 | 290 | 0.7498 | 0.8593 | 0.8536 | 0.8646 | 0.8549 |
1.1287 | 2.3301 | 300 | 0.7348 | 0.8593 | 0.8533 | 0.8653 | 0.8552 |
1.0537 | 2.4078 | 310 | 0.7120 | 0.8554 | 0.8496 | 0.8586 | 0.8515 |
1.1157 | 2.4854 | 320 | 0.6832 | 0.8622 | 0.8552 | 0.8662 | 0.8579 |
1.1008 | 2.5631 | 330 | 0.6705 | 0.8618 | 0.8546 | 0.8640 | 0.8574 |
1.0512 | 2.6408 | 340 | 0.6557 | 0.8630 | 0.8563 | 0.8636 | 0.8593 |
1.0641 | 2.7184 | 350 | 0.6490 | 0.8632 | 0.8581 | 0.8691 | 0.8596 |
1.0446 | 2.7961 | 360 | 0.6301 | 0.8652 | 0.8597 | 0.8692 | 0.8612 |
1.0104 | 2.8738 | 370 | 0.6287 | 0.8632 | 0.8562 | 0.8668 | 0.8588 |
1.0544 | 2.9515 | 380 | 0.6150 | 0.8644 | 0.8579 | 0.8657 | 0.8602 |
1.0074 | 3.0291 | 390 | 0.6061 | 0.8683 | 0.8617 | 0.8712 | 0.8641 |
0.9329 | 3.1068 | 400 | 0.6001 | 0.8661 | 0.8591 | 0.8750 | 0.8620 |
0.9049 | 3.1845 | 410 | 0.5925 | 0.8686 | 0.8617 | 0.8731 | 0.8647 |
0.9815 | 3.2621 | 420 | 0.5806 | 0.8686 | 0.8622 | 0.8717 | 0.8644 |
0.9507 | 3.3398 | 430 | 0.5793 | 0.8673 | 0.8613 | 0.8691 | 0.8638 |
0.9608 | 3.4175 | 440 | 0.5721 | 0.8671 | 0.8614 | 0.8683 | 0.8636 |
0.9409 | 3.4951 | 450 | 0.5688 | 0.8652 | 0.8591 | 0.8658 | 0.8612 |
0.8856 | 3.5728 | 460 | 0.5563 | 0.8700 | 0.8650 | 0.8714 | 0.8667 |
0.9099 | 3.6505 | 470 | 0.5557 | 0.8661 | 0.8613 | 0.8681 | 0.8622 |
0.9167 | 3.7282 | 480 | 0.5527 | 0.8686 | 0.8639 | 0.8701 | 0.8648 |
0.9077 | 3.8058 | 490 | 0.5431 | 0.8705 | 0.8669 | 0.8722 | 0.8674 |
0.9005 | 3.8835 | 500 | 0.5390 | 0.8732 | 0.8697 | 0.8749 | 0.8701 |
0.8596 | 3.9612 | 510 | 0.5375 | 0.8707 | 0.8655 | 0.8732 | 0.8668 |
0.8856 | 4.0388 | 520 | 0.5254 | 0.8705 | 0.8651 | 0.8741 | 0.8663 |
0.8869 | 4.1165 | 530 | 0.5238 | 0.8717 | 0.8657 | 0.8731 | 0.8680 |
0.8069 | 4.1942 | 540 | 0.5188 | 0.8732 | 0.8671 | 0.8744 | 0.8695 |
0.8474 | 4.2718 | 550 | 0.5188 | 0.8710 | 0.8649 | 0.8729 | 0.8671 |
0.8243 | 4.3495 | 560 | 0.5177 | 0.8727 | 0.8684 | 0.8756 | 0.8696 |
0.8437 | 4.4272 | 570 | 0.5107 | 0.8727 | 0.8682 | 0.8742 | 0.8693 |
0.7761 | 4.5049 | 580 | 0.5025 | 0.8739 | 0.8700 | 0.8751 | 0.8708 |
0.784 | 4.5825 | 590 | 0.5016 | 0.8768 | 0.8717 | 0.8778 | 0.8734 |
0.8055 | 4.6602 | 600 | 0.5019 | 0.8739 | 0.8701 | 0.8772 | 0.8710 |
0.8109 | 4.7379 | 610 | 0.4960 | 0.8771 | 0.8724 | 0.8785 | 0.8740 |
0.8697 | 4.8155 | 620 | 0.4887 | 0.8793 | 0.8749 | 0.8816 | 0.8757 |
0.7996 | 4.8932 | 630 | 0.4878 | 0.8773 | 0.8719 | 0.8782 | 0.8734 |
0.8002 | 4.9709 | 640 | 0.4847 | 0.8785 | 0.8738 | 0.8807 | 0.8752 |
0.7404 | 5.0485 | 650 | 0.4888 | 0.8771 | 0.8726 | 0.8795 | 0.8739 |
0.7326 | 5.1262 | 660 | 0.4883 | 0.8746 | 0.8701 | 0.8772 | 0.8718 |
0.797 | 5.2039 | 670 | 0.4892 | 0.8729 | 0.8689 | 0.8752 | 0.8701 |
0.8084 | 5.2816 | 680 | 0.4800 | 0.8793 | 0.8752 | 0.8817 | 0.8763 |
0.8025 | 5.3592 | 690 | 0.4762 | 0.8768 | 0.8727 | 0.8771 | 0.8736 |
0.7087 | 5.4369 | 700 | 0.4762 | 0.8783 | 0.8750 | 0.8807 | 0.8756 |
0.7502 | 5.5146 | 710 | 0.4754 | 0.8785 | 0.8754 | 0.8801 | 0.8759 |
0.7386 | 5.5922 | 720 | 0.4738 | 0.8793 | 0.8754 | 0.8807 | 0.8760 |
0.8173 | 5.6699 | 730 | 0.4712 | 0.8793 | 0.8750 | 0.8801 | 0.8762 |
0.8213 | 5.7476 | 740 | 0.4696 | 0.8790 | 0.8750 | 0.8795 | 0.8756 |
0.7184 | 5.8252 | 750 | 0.4714 | 0.8805 | 0.8759 | 0.8826 | 0.8768 |
0.7168 | 5.9029 | 760 | 0.4682 | 0.8749 | 0.8695 | 0.8771 | 0.8715 |
0.7558 | 5.9806 | 770 | 0.4673 | 0.8761 | 0.8711 | 0.8787 | 0.8729 |
0.7169 | 6.0583 | 780 | 0.4678 | 0.8783 | 0.8736 | 0.8801 | 0.8749 |
0.7042 | 6.1359 | 790 | 0.4628 | 0.8759 | 0.8710 | 0.8773 | 0.8724 |
0.7332 | 6.2136 | 800 | 0.4672 | 0.8766 | 0.8720 | 0.8790 | 0.8731 |
0.7027 | 6.2913 | 810 | 0.4644 | 0.8785 | 0.8736 | 0.8805 | 0.8749 |
0.7283 | 6.3689 | 820 | 0.4642 | 0.8776 | 0.8724 | 0.8793 | 0.8740 |
0.7305 | 6.4466 | 830 | 0.4613 | 0.8780 | 0.8729 | 0.8785 | 0.8742 |
0.7186 | 6.5243 | 840 | 0.4606 | 0.8768 | 0.8723 | 0.8783 | 0.8734 |
0.759 | 6.6019 | 850 | 0.4592 | 0.8766 | 0.8719 | 0.8769 | 0.8730 |
0.6865 | 6.6796 | 860 | 0.4580 | 0.8771 | 0.8727 | 0.8782 | 0.8737 |
0.689 | 6.7573 | 870 | 0.4574 | 0.8776 | 0.8735 | 0.8788 | 0.8745 |
0.6851 | 6.8350 | 880 | 0.4561 | 0.8802 | 0.8764 | 0.8815 | 0.8773 |
0.7158 | 6.9126 | 890 | 0.4547 | 0.8795 | 0.8759 | 0.8808 | 0.8766 |
0.6938 | 6.9903 | 900 | 0.4533 | 0.8800 | 0.8759 | 0.8810 | 0.8768 |
0.6596 | 7.0680 | 910 | 0.4540 | 0.8800 | 0.8759 | 0.8808 | 0.8768 |
0.7519 | 7.1456 | 920 | 0.4530 | 0.8800 | 0.8758 | 0.8809 | 0.8769 |
0.6836 | 7.2233 | 930 | 0.4519 | 0.8793 | 0.8753 | 0.8806 | 0.8762 |
0.7407 | 7.3010 | 940 | 0.4520 | 0.8788 | 0.8751 | 0.8807 | 0.8757 |
0.6823 | 7.3786 | 950 | 0.4522 | 0.8785 | 0.8750 | 0.8802 | 0.8753 |
0.7029 | 7.4563 | 960 | 0.4524 | 0.8785 | 0.8746 | 0.8802 | 0.8753 |
0.6536 | 7.5340 | 970 | 0.4515 | 0.8795 | 0.8756 | 0.8812 | 0.8763 |
0.6837 | 7.6117 | 980 | 0.4513 | 0.8800 | 0.8761 | 0.8815 | 0.8768 |
0.6604 | 7.6893 | 990 | 0.4512 | 0.8797 | 0.8759 | 0.8812 | 0.8766 |
0.683 | 7.7670 | 1000 | 0.4511 | 0.8797 | 0.8759 | 0.8812 | 0.8766 |
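As a side note, the epoch column implies roughly 129 optimizer steps per epoch; at an effective batch size of 128 that corresponds to about 16.5k training examples, consistent with an ~80% train split of Stanford Dogs' 20,580 images (an inference from the log above, not something this card states):

```python
# Back-of-envelope training-set size from the logged epoch increments
epoch_per_step = 0.0777 / 10          # first logged row: epoch 0.0777 at step 10
steps_per_epoch = 1 / epoch_per_step  # ~128.7 optimizer steps per epoch
effective_batch = 32 * 4              # train_batch_size * gradient_accumulation_steps
approx_train_examples = steps_per_epoch * effective_batch  # ~16,474
```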
### Framework versions
- Transformers 4.40.2
- Pytorch 2.3.0
- Datasets 2.19.1
- Tokenizers 0.19.1
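With the Transformers version listed above, the checkpoint can be loaded for inference through the standard image-classification pipeline. A minimal, untested sketch (the repo id is this card's; `dog.jpg` is a placeholder path, and running it downloads the model weights):

```python
from transformers import pipeline
from PIL import Image

# Placeholder path -- substitute any local dog photo.
image = Image.open("dog.jpg")

classifier = pipeline(
    "image-classification",
    model="amaye15/google-vit-base-patch16-224-batch32-lr0.005-standford-dogs",
)
predictions = classifier(image)  # list of {"label": ..., "score": ...} dicts
print(predictions[0])
```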