---
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
  - generated_from_keras_callback
model-index:
  - name: aditnnda/felidae_klasifikasi_fix
    results: []
---

# aditnnda/felidae_klasifikasi_fix

This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset. It achieves the following results after the final training epoch:

- Train Loss: 0.2941
- Train Accuracy: 0.9180
- Validation Loss: 0.2691
- Validation Accuracy: 0.9180
- Epoch: 49
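
A quick way to try the checkpoint is through the TensorFlow classes in Hugging Face Transformers. The snippet below is a minimal inference sketch, not part of the original card: `cat.jpg` is a placeholder path, and the class names come from the `id2label` mapping stored in the model config (the Felidae labels are not documented here). If the repo does not ship a preprocessor config, the base model's processor (`google/vit-base-patch16-224-in21k`) can be loaded instead.

```python
import tensorflow as tf
from PIL import Image
from transformers import TFViTForImageClassification, ViTImageProcessor

# Load the fine-tuned checkpoint and a matching image processor.
processor = ViTImageProcessor.from_pretrained("aditnnda/felidae_klasifikasi_fix")
model = TFViTForImageClassification.from_pretrained("aditnnda/felidae_klasifikasi_fix")

# "cat.jpg" is a placeholder; supply any RGB image of a felid.
image = Image.open("cat.jpg").convert("RGB")
inputs = processor(images=image, return_tensors="tf")

# Forward pass: logits have shape (batch_size, num_labels).
logits = model(**inputs).logits
predicted_id = int(tf.argmax(logits, axis=-1)[0])
print(model.config.id2label[predicted_id])
```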

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (an equivalent optimizer construction is sketched after the list):

- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 3e-05, 'decay_steps': 9100, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
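
The dictionary above is the serialized form of the `AdamWeightDecay` optimizer that Transformers builds for TensorFlow training: Adam with decoupled weight decay (rate 0.01) and a linear (`power=1.0`) `PolynomialDecay` of the learning rate from 3e-05 to 0 over 9100 steps. As a minimal sketch, an equivalent optimizer can be constructed with the library's `create_optimizer` helper; the compile step in the comment is an assumption about how the model was fit, not taken from the card.

```python
from transformers import create_optimizer

# AdamWeightDecay with a linear decay from 3e-5 to 0 over 9100 steps,
# no warmup, and a weight decay rate of 0.01 -- matching the dict above.
# Beta/epsilon defaults (0.9, 0.999, 1e-8) also match the dict.
optimizer, lr_schedule = create_optimizer(
    init_lr=3e-5,
    num_train_steps=9100,
    num_warmup_steps=0,
    weight_decay_rate=0.01,
)

# Typical Keras usage (an assumption, not documented in this card):
# model.compile(optimizer=optimizer, metrics=["accuracy"])
```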

### Training results

| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 1.5677     | 0.6885         | 1.4799          | 0.6885              | 0     |
| 1.4926     | 0.9180         | 1.3584          | 0.9180              | 1     |
| 1.3957     | 0.9016         | 1.2332          | 0.9016              | 2     |
| 1.3151     | 0.8361         | 1.1098          | 0.8361              | 3     |
| 1.1756     | 0.8525         | 0.9886          | 0.8525              | 4     |
| 1.1173     | 0.8852         | 0.8775          | 0.8852              | 5     |
| 1.0196     | 0.9016         | 0.8042          | 0.9016              | 6     |
| 0.9451     | 0.9344         | 0.6991          | 0.9344              | 7     |
| 0.9189     | 0.9508         | 0.6396          | 0.9508              | 8     |
| 0.8269     | 0.9672         | 0.5717          | 0.9672              | 9     |
| 0.7581     | 0.9836         | 0.5063          | 0.9836              | 10    |
| 0.7328     | 0.9672         | 0.4772          | 0.9672              | 11    |
| 0.6639     | 0.9508         | 0.4691          | 0.9508              | 12    |
| 0.7082     | 0.9180         | 0.4597          | 0.9180              | 13    |
| 0.6607     | 0.9672         | 0.4006          | 0.9672              | 14    |
| 0.7278     | 0.9508         | 0.3966          | 0.9508              | 15    |
| 0.6285     | 0.9508         | 0.3669          | 0.9508              | 16    |
| 0.5902     | 0.9836         | 0.3321          | 0.9836              | 17    |
| 0.5963     | 0.9344         | 0.3899          | 0.9344              | 18    |
| 0.6273     | 0.9344         | 0.3246          | 0.9344              | 19    |
| 0.5917     | 0.9016         | 0.4248          | 0.9016              | 20    |
| 0.5190     | 0.9180         | 0.3575          | 0.9180              | 21    |
| 0.4991     | 0.9508         | 0.3060          | 0.9508              | 22    |
| 0.4861     | 0.9344         | 0.3244          | 0.9344              | 23    |
| 0.4650     | 0.9344         | 0.3014          | 0.9344              | 24    |
| 0.5044     | 0.9508         | 0.2727          | 0.9508              | 25    |
| 0.4232     | 0.9672         | 0.2439          | 0.9672              | 26    |
| 0.4247     | 0.9672         | 0.2233          | 0.9672              | 27    |
| 0.4732     | 0.9508         | 0.2857          | 0.9508              | 28    |
| 0.4222     | 1.0            | 0.1937          | 1.0                 | 29    |
| 0.4165     | 0.9836         | 0.1996          | 0.9836              | 30    |
| 0.3979     | 0.9508         | 0.2074          | 0.9508              | 31    |
| 0.3691     | 0.9344         | 0.2627          | 0.9344              | 32    |
| 0.3094     | 0.9344         | 0.2641          | 0.9344              | 33    |
| 0.3726     | 0.9508         | 0.2224          | 0.9508              | 34    |
| 0.3503     | 0.9508         | 0.2241          | 0.9508              | 35    |
| 0.3832     | 0.9344         | 0.2274          | 0.9344              | 36    |
| 0.3765     | 0.9508         | 0.2421          | 0.9508              | 37    |
| 0.3927     | 0.9508         | 0.1979          | 0.9508              | 38    |
| 0.2968     | 0.9672         | 0.1857          | 0.9672              | 39    |
| 0.3489     | 0.9508         | 0.2158          | 0.9508              | 40    |
| 0.4102     | 0.9672         | 0.1951          | 0.9672              | 41    |
| 0.3842     | 0.9672         | 0.1971          | 0.9672              | 42    |
| 0.3417     | 0.8852         | 0.3686          | 0.8852              | 43    |
| 0.3219     | 0.9344         | 0.2255          | 0.9344              | 44    |
| 0.3671     | 0.9672         | 0.1570          | 0.9672              | 45    |
| 0.3948     | 0.9344         | 0.2217          | 0.9344              | 46    |
| 0.3201     | 0.9672         | 0.1993          | 0.9672              | 47    |
| 0.3612     | 0.9508         | 0.1936          | 0.9508              | 48    |
| 0.2941     | 0.9180         | 0.2691          | 0.9180              | 49    |

### Framework versions

- Transformers 4.35.1
- TensorFlow 2.14.0
- Datasets 2.14.6
- Tokenizers 0.14.1
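
To reproduce this environment, the pinned versions listed above can be installed directly (a convenience sketch, not part of the original card):

```bash
pip install transformers==4.35.1 tensorflow==2.14.0 datasets==2.14.6 tokenizers==0.14.1
```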