---
license: other
tags:
- generated_from_keras_callback
model-index:
- name: AhamadShaik/SegFormer_RESIZE_NLM
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# AhamadShaik/SegFormer_RESIZE_NLM
This model is a fine-tuned version of [nvidia/mit-b0](https://huggingface.co/nvidia/mit-b0) on an unknown dataset.
It achieves the following results on the training and evaluation sets:
- Train Loss: 0.0626
- Train Dice Coef: 0.8412
- Train IoU: 0.7294
- Validation Loss: 0.0496
- Validation Dice Coef: 0.8789
- Validation IoU: 0.7853
- Train Lr: 1e-04
- Epoch: 12
## Model description
More information needed
## Intended uses & limitations
More information needed
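The checkpoint was pushed from a Keras training loop, so the exact inference pipeline is not documented here. A minimal sketch, assuming the weights load into `TFSegformerForSemanticSegmentation` and that default SegFormer preprocessing is close enough to what was used in training:

```python
from PIL import Image
import tensorflow as tf
from transformers import SegformerImageProcessor, TFSegformerForSemanticSegmentation

# Default SegFormer preprocessing; the preprocessing used during training is
# not documented in this card, so treat this as an assumption.
processor = SegformerImageProcessor()
model = TFSegformerForSemanticSegmentation.from_pretrained(
    "AhamadShaik/SegFormer_RESIZE_NLM"
)

image = Image.open("example.png").convert("RGB")  # placeholder input path
inputs = processor(images=image, return_tensors="tf")

# Logits come out at 1/4 of the input resolution, channels-first.
logits = model(**inputs).logits
logits = tf.transpose(logits, [0, 2, 3, 1])         # to NHWC for tf.image.resize
logits = tf.image.resize(logits, image.size[::-1])  # back to the input (H, W)

# For a multi-class head take the argmax; a single-channel binary head would
# use a sigmoid threshold instead.
mask = tf.argmax(logits, axis=-1)[0].numpy()
```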
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': 1e-04, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
- training_precision: float32
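In Keras terms (TensorFlow 2.10), the listed configuration corresponds roughly to the following; the actual training script is not part of this card:

```python
import tensorflow as tf

# Reconstruction of the documented optimizer config.
# The `decay` argument matches the legacy Keras Adam signature in TF 2.10.
optimizer = tf.keras.optimizers.Adam(
    learning_rate=1e-4,
    beta_1=0.9,
    beta_2=0.999,
    epsilon=1e-07,
    decay=0.0,
    amsgrad=False,
)

# training_precision: float32 (the default global policy).
tf.keras.mixed_precision.set_global_policy("float32")
```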
### Training results
| Train Loss | Train Dice Coef | Train IoU | Validation Loss | Validation Dice Coef | Validation IoU | Train Lr | Epoch |
|:----------:|:---------------:|:---------:|:---------------:|:--------------------:|:--------------:|:--------:|:-----:|
| 0.2282 | 0.5657 | 0.4102 | 0.1322 | 0.6524 | 0.4967 | 1e-04 | 0 |
| 0.1354 | 0.6853 | 0.5329 | 0.0855 | 0.7853 | 0.6544 | 1e-04 | 1 |
| 0.1105 | 0.7364 | 0.5924 | 0.0737 | 0.8147 | 0.6916 | 1e-04 | 2 |
| 0.0985 | 0.7610 | 0.6226 | 0.0632 | 0.8518 | 0.7440 | 1e-04 | 3 |
| 0.0933 | 0.7745 | 0.6399 | 0.0627 | 0.8455 | 0.7351 | 1e-04 | 4 |
| 0.0886 | 0.7856 | 0.6535 | 0.0584 | 0.8603 | 0.7566 | 1e-04 | 5 |
| 0.0831 | 0.7971 | 0.6695 | 0.0559 | 0.8621 | 0.7596 | 1e-04 | 6 |
| 0.0770 | 0.8107 | 0.6867 | 0.0530 | 0.8726 | 0.7756 | 1e-04 | 7 |
| 0.0741 | 0.8160 | 0.6942 | 0.0512 | 0.8775 | 0.7832 | 1e-04 | 8 |
| 0.0750 | 0.8163 | 0.6945 | 0.0581 | 0.8627 | 0.7606 | 1e-04 | 9 |
| 0.0678 | 0.8306 | 0.7138 | 0.0531 | 0.8719 | 0.7745 | 1e-04 | 10 |
| 0.0659 | 0.8341 | 0.7196 | 0.0519 | 0.8738 | 0.7781 | 1e-04 | 11 |
| 0.0626 | 0.8412 | 0.7294 | 0.0496 | 0.8789 | 0.7853 | 1e-04 | 12 |
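The Dice coefficient and IoU values above come from custom Keras metrics whose implementation is not included in this card. A common formulation for binary segmentation masks is sketched below (an assumption, not taken from the training code):

```python
import tensorflow as tf

def dice_coef(y_true, y_pred, smooth=1e-6):
    # Flatten masks and compute 2*|A∩B| / (|A| + |B|).
    y_true_f = tf.reshape(tf.cast(y_true, tf.float32), [-1])
    y_pred_f = tf.reshape(tf.cast(y_pred, tf.float32), [-1])
    intersection = tf.reduce_sum(y_true_f * y_pred_f)
    return (2.0 * intersection + smooth) / (
        tf.reduce_sum(y_true_f) + tf.reduce_sum(y_pred_f) + smooth
    )

def iou(y_true, y_pred, smooth=1e-6):
    # |A∩B| / |A∪B| on the flattened masks.
    y_true_f = tf.reshape(tf.cast(y_true, tf.float32), [-1])
    y_pred_f = tf.reshape(tf.cast(y_pred, tf.float32), [-1])
    intersection = tf.reduce_sum(y_true_f * y_pred_f)
    union = tf.reduce_sum(y_true_f) + tf.reduce_sum(y_pred_f) - intersection
    return (intersection + smooth) / (union + smooth)
```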
### Framework versions
- Transformers 4.27.4
- TensorFlow 2.10.1
- Datasets 2.11.0
- Tokenizers 0.13.3