# Labira/LabiraPJOK_6_100_Full
This model is a fine-tuned version of [Labira/LabiraPJOK_5_100_Full](https://huggingface.co/Labira/LabiraPJOK_5_100_Full) on an unknown dataset. It achieves the following results on the evaluation set:
- Train Loss: 0.1856
- Validation Loss: 0.0721
- Epoch: 99
## Model description

More information needed
## Intended uses & limitations

More information needed
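The card does not state the intended task. Given that the lineage (see the model tree below) starts from indolem/indobert-base-uncased, an Indonesian BERT, an extractive question-answering setup is a plausible guess but remains an assumption. The sketch below loads the checkpoint under that assumption; adjust the pipeline task if the checkpoint's head differs.

```python
# Hedged inference sketch -- the card does not state the task. Assumption:
# the checkpoint carries an extractive question-answering head.
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="Labira/LabiraPJOK_6_100_Full",
    framework="tf",  # the checkpoint was trained and saved with TensorFlow
)

result = qa(
    question="...",  # an Indonesian question about the source material
    context="...",   # the passage the answer should be extracted from
)
print(result)  # e.g. {'score': ..., 'start': ..., 'end': ..., 'answer': ...}
```

If loading fails with a head mismatch, inspect the checkpoint's config.json to confirm the architecture before choosing a pipeline task.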
## Training and evaluation data

More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training (a Keras reconstruction of the optimizer follows the list):
- optimizer: Adam
  - learning_rate: PolynomialDecay (initial_learning_rate: 2e-05, decay_steps: 400, end_learning_rate: 0.0, power: 1.0, cycle: False)
  - beta_1: 0.9, beta_2: 0.999, epsilon: 1e-08, amsgrad: False
  - weight_decay: None, clipnorm: None, global_clipnorm: None, clipvalue: None
  - use_ema: False, ema_momentum: 0.99, ema_overwrite_frequency: None
  - jit_compile: True, is_legacy_optimizer: False
- training_precision: float32
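As a readability aid, here is a minimal sketch reconstructing the optimizer above in TensorFlow/Keras. It mirrors the recorded config; it is not the original training script.

```python
import tensorflow as tf

# Linear decay (power=1.0, cycle=False) from 2e-05 to 0.0 over 400 steps,
# exactly as recorded in the optimizer config above.
lr_schedule = tf.keras.optimizers.schedules.PolynomialDecay(
    initial_learning_rate=2e-05,
    decay_steps=400,
    end_learning_rate=0.0,
    power=1.0,
    cycle=False,
)

# Adam with the recorded betas/epsilon; the config also notes jit_compile=True
# (applied at compile time in Keras 3) and float32 training precision.
optimizer = tf.keras.optimizers.Adam(
    learning_rate=lr_schedule,
    beta_1=0.9,
    beta_2=0.999,
    epsilon=1e-08,
    amsgrad=False,
)
```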
### Training results

Validation loss reaches its minimum of 0.0717 at epoch 85 and stays near 0.072 thereafter, while the train loss continues to fluctuate.
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 2.0582 | 1.6013 | 0 |
| 1.3251 | 1.3889 | 1 |
| 0.8805 | 1.0654 | 2 |
| 0.8392 | 0.6530 | 3 |
| 0.3612 | 0.3225 | 4 |
| 0.3515 | 0.2072 | 5 |
| 0.2917 | 0.1839 | 6 |
| 0.2735 | 0.5011 | 7 |
| 0.2863 | 0.5152 | 8 |
| 0.3594 | 0.5083 | 9 |
| 0.2413 | 0.4202 | 10 |
| 0.3135 | 0.3112 | 11 |
| 0.2592 | 0.2370 | 12 |
| 0.2292 | 0.2129 | 13 |
| 0.2270 | 0.1288 | 14 |
| 0.2107 | 0.1385 | 15 |
| 0.1990 | 0.1431 | 16 |
| 0.1920 | 0.1420 | 17 |
| 0.2805 | 0.1550 | 18 |
| 0.2343 | 0.1466 | 19 |
| 0.2061 | 0.1351 | 20 |
| 0.1422 | 0.1275 | 21 |
| 0.1669 | 0.1235 | 22 |
| 0.1482 | 0.1215 | 23 |
| 0.1162 | 0.1202 | 24 |
| 0.1288 | 0.1102 | 25 |
| 0.1435 | 0.1094 | 26 |
| 0.2018 | 0.1077 | 27 |
| 0.0912 | 0.0939 | 28 |
| 0.1054 | 0.0915 | 29 |
| 0.1274 | 0.0775 | 30 |
| 0.0758 | 0.0783 | 31 |
| 0.1480 | 0.0800 | 32 |
| 0.0722 | 0.0811 | 33 |
| 0.0978 | 0.0799 | 34 |
| 0.1078 | 0.0782 | 35 |
| 0.0815 | 0.0765 | 36 |
| 0.0744 | 0.0753 | 37 |
| 0.1194 | 0.0745 | 38 |
| 0.1327 | 0.0744 | 39 |
| 0.1164 | 0.0749 | 40 |
| 0.0480 | 0.0756 | 41 |
| 0.0424 | 0.0759 | 42 |
| 0.0830 | 0.0761 | 43 |
| 0.0842 | 0.0760 | 44 |
| 0.1157 | 0.0751 | 45 |
| 0.1100 | 0.0744 | 46 |
| 0.0937 | 0.0741 | 47 |
| 0.1211 | 0.0739 | 48 |
| 0.0880 | 0.0737 | 49 |
| 0.1047 | 0.0738 | 50 |
| 0.1037 | 0.0741 | 51 |
| 0.1366 | 0.0860 | 52 |
| 0.0815 | 0.0913 | 53 |
| 0.1404 | 0.0913 | 54 |
| 0.0952 | 0.1043 | 55 |
| 0.0658 | 0.1044 | 56 |
| 0.1319 | 0.1045 | 57 |
| 0.0918 | 0.1152 | 58 |
| 0.1372 | 0.1151 | 59 |
| 0.1203 | 0.1148 | 60 |
| 0.1251 | 0.1146 | 61 |
| 0.0606 | 0.1144 | 62 |
| 0.1407 | 0.1141 | 63 |
| 0.1266 | 0.1139 | 64 |
| 0.1025 | 0.1138 | 65 |
| 0.1077 | 0.1136 | 66 |
| 0.1312 | 0.1136 | 67 |
| 0.0987 | 0.1135 | 68 |
| 0.1199 | 0.1135 | 69 |
| 0.1427 | 0.1136 | 70 |
| 0.1271 | 0.1024 | 71 |
| 0.1049 | 0.1024 | 72 |
| 0.1073 | 0.1027 | 73 |
| 0.1162 | 0.1029 | 74 |
| 0.0863 | 0.1029 | 75 |
| 0.1062 | 0.1028 | 76 |
| 0.1034 | 0.1027 | 77 |
| 0.0984 | 0.1026 | 78 |
| 0.0988 | 0.1024 | 79 |
| 0.1153 | 0.1023 | 80 |
| 0.1020 | 0.1022 | 81 |
| 0.0990 | 0.1019 | 82 |
| 0.0881 | 0.0884 | 83 |
| 0.1330 | 0.0865 | 84 |
| 0.1972 | 0.0717 | 85 |
| 0.1165 | 0.0719 | 86 |
| 0.1853 | 0.0722 | 87 |
| 0.0734 | 0.0722 | 88 |
| 0.1391 | 0.0722 | 89 |
| 0.0942 | 0.0721 | 90 |
| 0.0817 | 0.0721 | 91 |
| 0.0757 | 0.0720 | 92 |
| 0.0738 | 0.0720 | 93 |
| 0.1871 | 0.0720 | 94 |
| 0.1965 | 0.0720 | 95 |
| 0.0812 | 0.0721 | 96 |
| 0.1010 | 0.0721 | 97 |
| 0.0709 | 0.0721 | 98 |
| 0.1856 | 0.0721 | 99 |
### Framework versions
- Transformers 4.46.2
- TensorFlow 2.17.0
- Datasets 3.1.0
- Tokenizers 0.20.3
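Exact version matches matter when loading TF/Keras checkpoints across environments. The snippet below (an added convenience, not part of the original card) compares the installed versions against those listed above.

```python
# Check that the local environment matches the versions the card lists;
# mismatches are a common source of checkpoint-loading issues.
import datasets
import tensorflow as tf
import tokenizers
import transformers

expected = {
    "transformers": "4.46.2",
    "tensorflow": "2.17.0",
    "datasets": "3.1.0",
    "tokenizers": "0.20.3",
}
installed = {
    "transformers": transformers.__version__,
    "tensorflow": tf.__version__,
    "datasets": datasets.__version__,
    "tokenizers": tokenizers.__version__,
}
for name, want in expected.items():
    have = installed[name]
    print(f"{name}: {have}" + ("" if have == want else f" (card lists {want})"))
```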
## Model tree

- Base model: [indolem/indobert-base-uncased](https://huggingface.co/indolem/indobert-base-uncased)
- Fine-tuning lineage: Labira/LabiraPJOK_1_100_Full → Labira/LabiraPJOK_2_100_Full → Labira/LabiraPJOK_3_100_Full → Labira/LabiraPJOK_5_100_Full → Labira/LabiraPJOK_6_100_Full (this model)