---
license: apache-2.0
base_model: google/vit-base-patch16-384
tags:
- generated_from_keras_callback
model-index:
- name: Prahas10/shingles
results: []
---
# Prahas10/shingles
This model is a fine-tuned version of [google/vit-base-patch16-384](https://huggingface.co/google/vit-base-patch16-384) on an unknown dataset.
It achieves the following results at the final training epoch:
- Train Loss: 0.0993
- Validation Loss: 0.6967
- Train Accuracy: 0.8166
- Epoch: 29
## Model description
More information needed
## Intended uses & limitations
More information needed
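Pending a fuller description, the checkpoint can be exercised directly as an image classifier. The sketch below is illustrative only: it assumes the repository ships TensorFlow weights and an image-processor config alongside the model, and `shingle.jpg` is a hypothetical input file.

```python
from PIL import Image
import tensorflow as tf
from transformers import AutoImageProcessor, TFAutoModelForImageClassification

# Load the image processor and fine-tuned weights from the Hub
# (assumes a preprocessor config is present in the repo).
processor = AutoImageProcessor.from_pretrained("Prahas10/shingles")
model = TFAutoModelForImageClassification.from_pretrained("Prahas10/shingles")

# "shingle.jpg" is a placeholder; the processor resizes the image to the
# 384x384 resolution expected by vit-base-patch16-384.
image = Image.open("shingle.jpg").convert("RGB")
inputs = processor(images=image, return_tensors="tf")

logits = model(**inputs).logits
predicted = int(tf.argmax(logits, axis=-1)[0])
print(model.config.id2label[predicted])
```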
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: AdamWeightDecay
  - learning rate: WarmUp schedule with initial_learning_rate 4e-05 over 10370.25 warmup_steps, then PolynomialDecay (power 1.0, no cycling) to an end_learning_rate of 0.0 over 127899.75 decay_steps
  - beta_1: 0.9
  - beta_2: 0.999
  - epsilon: 1e-08
  - amsgrad: False
  - weight_decay_rate: 0.0001
- training_precision: float32
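The configuration above matches what `transformers.create_optimizer` builds for TensorFlow: AdamWeightDecay with linear warmup followed by polynomial decay. A minimal sketch of an equivalent setup (fractional step counts rounded; not the author's exact training script):

```python
from transformers import create_optimizer

# create_optimizer sets decay_steps = num_train_steps - num_warmup_steps,
# so the logged warmup (10370.25) plus decay (127899.75) steps imply
# 138270 total training steps.
optimizer, lr_schedule = create_optimizer(
    init_lr=4e-5,
    num_train_steps=138270,
    num_warmup_steps=10370,   # logged as 10370.25
    weight_decay_rate=1e-4,   # beta_1/beta_2/epsilon match the defaults above
)
```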
### Training results
| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 5.2368 | 5.2154 | 0.0047 | 0 |
| 5.1655 | 5.1337 | 0.0113 | 1 |
| 5.0415 | 4.9860 | 0.0278 | 2 |
| 4.8179 | 4.7812 | 0.0781 | 3 |
| 4.4541 | 4.4703 | 0.1844 | 4 |
| 3.9330 | 4.0779 | 0.2841 | 5 |
| 3.3155 | 3.6691 | 0.3650 | 6 |
| 2.6546 | 3.3371 | 0.4313 | 7 |
| 2.0435 | 3.0037 | 0.4727 | 8 |
| 1.5258 | 2.7059 | 0.5193 | 9 |
| 1.1079 | 2.4174 | 0.5588 | 10 |
| 0.7989 | 2.3590 | 0.5532 | 11 |
| 0.5857 | 1.9721 | 0.6298 | 12 |
| 0.4337 | 1.7442 | 0.6896 | 13 |
| 0.3352 | 1.7334 | 0.6580 | 14 |
| 0.2641 | 1.6197 | 0.6670 | 15 |
| 0.2042 | 1.7021 | 0.6289 | 16 |
| 0.1642 | 1.3843 | 0.7070 | 17 |
| 0.1500 | 1.4422 | 0.6787 | 18 |
| 0.1251 | 1.2797 | 0.7098 | 19 |
| 0.1093 | 0.9233 | 0.8020 | 20 |
| 0.1215 | 0.9209 | 0.7977 | 21 |
| 0.1007 | 0.9143 | 0.7803 | 22 |
| 0.0811 | 0.7952 | 0.8090 | 23 |
| 0.0953 | 0.7678 | 0.8260 | 24 |
| 0.1033 | 0.8928 | 0.7705 | 25 |
| 0.0636 | 0.3480 | 0.9271 | 26 |
| 0.0880 | 0.5916 | 0.8669 | 27 |
| 0.0861 | 0.8892 | 0.7789 | 28 |
| 0.0993 | 0.6967 | 0.8166 | 29 |
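Note that validation loss is non-monotonic late in training: the final epoch (29) is reported above, but the best epoch by validation loss is 26 (validation loss 0.3480, train accuracy 0.9271).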
### Framework versions
- Transformers 4.41.0
- TensorFlow 2.15.0
- Datasets 2.19.1
- Tokenizers 0.19.1