Labira/LabiraPJOK_1_100_Full

This model is a fine-tuned version of indolem/indobert-base-uncased on an unknown dataset. It achieves the following results on the evaluation set:

  • Train Loss: 0.0992
  • Validation Loss: 0.1051
  • Epoch: 48
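
The card does not document the task head, so the sketch below loads the checkpoint generically with TFAutoModel; this is an assumption to adjust (e.g. to TFAutoModelForQuestionAnswering) once the intended task is known.

```python
# Minimal loading sketch. The example sentence is illustrative only,
# and the generic TFAutoModel class is an assumption: swap in the
# task-specific Auto class that matches the actual checkpoint head.
from transformers import AutoTokenizer, TFAutoModel

tokenizer = AutoTokenizer.from_pretrained("Labira/LabiraPJOK_1_100_Full")
model = TFAutoModel.from_pretrained("Labira/LabiraPJOK_1_100_Full")

inputs = tokenizer("Contoh kalimat bahasa Indonesia.", return_tensors="tf")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (batch, sequence_length, hidden_size)
```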

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • optimizer: Adam (beta_1=0.9, beta_2=0.999, epsilon=1e-08, amsgrad=False); weight decay, gradient clipping, and EMA disabled; jit_compile=True
  • learning_rate: PolynomialDecay schedule from 2e-05 to 0.0 over 400 decay steps (power=1.0, i.e. linear; cycle=False)
  • training_precision: float32
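
For reference, a minimal sketch reconstructing this optimizer with the tf.keras API (TensorFlow 2.17.0, as listed under Framework versions); the parameter values mirror the config above.

```python
import tensorflow as tf

# Linear (power=1.0) decay from 2e-05 to 0.0 over the first 400 steps,
# per the PolynomialDecay config above; cycle=False keeps it at 0 afterwards.
lr_schedule = tf.keras.optimizers.schedules.PolynomialDecay(
    initial_learning_rate=2e-05,
    decay_steps=400,
    end_learning_rate=0.0,
    power=1.0,
    cycle=False,
)

# Adam with the betas/epsilon from the config; weight decay, gradient
# clipping, and EMA were all disabled in the original run.
optimizer = tf.keras.optimizers.Adam(
    learning_rate=lr_schedule,
    beta_1=0.9,
    beta_2=0.999,
    epsilon=1e-08,
    amsgrad=False,
)
```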

Training results

| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 5.7712     | 5.1521          | 0     |
| 4.8965     | 4.2303          | 1     |
| 4.1582     | 3.5226          | 2     |
| 3.4388     | 2.9489          | 3     |
| 2.8492     | 2.4067          | 4     |
| 2.5532     | 2.0347          | 5     |
| 2.1511     | 1.5826          | 6     |
| 1.7456     | 1.1703          | 7     |
| 1.3312     | 1.1496          | 8     |
| 1.1274     | 0.7130          | 9     |
| 0.9445     | 0.6299          | 10    |
| 0.8513     | 0.4422          | 11    |
| 0.8144     | 0.6906          | 12    |
| 0.6876     | 0.3412          | 13    |
| 0.6116     | 0.2922          | 14    |
| 0.5507     | 0.3807          | 15    |
| 0.5042     | 0.2770          | 16    |
| 0.2865     | 0.1692          | 17    |
| 0.4181     | 0.1840          | 18    |
| 0.3010     | 0.2490          | 19    |
| 0.3049     | 0.1310          | 20    |
| 0.4021     | 0.1106          | 21    |
| 0.4394     | 0.5673          | 22    |
| 0.3956     | 0.4617          | 23    |
| 0.3175     | 0.1589          | 24    |
| 0.2753     | 0.1354          | 25    |
| 0.1207     | 0.1039          | 26    |
| 0.1652     | 0.0881          | 27    |
| 0.2387     | 0.1040          | 28    |
| 0.1674     | 0.1306          | 29    |
| 0.1609     | 0.1319          | 30    |
| 0.1121     | 0.1215          | 31    |
| 0.1457     | 0.1124          | 32    |
| 0.1767     | 0.1144          | 33    |
| 0.1225     | 0.1322          | 34    |
| 0.1313     | 0.1385          | 35    |
| 0.1473     | 0.1586          | 36    |
| 0.2459     | 0.1693          | 37    |
| 0.1657     | 0.1705          | 38    |
| 0.1716     | 0.1314          | 39    |
| 0.1124     | 0.1108          | 40    |
| 0.1493     | 0.1058          | 41    |
| 0.1312     | 0.1161          | 42    |
| 0.1232     | 0.1264          | 43    |
| 0.1169     | 0.1181          | 44    |
| 0.1268     | 0.1092          | 45    |
| 0.0955     | 0.1074          | 46    |
| 0.1465     | 0.1051          | 47    |
| 0.0992     | 0.1051          | 48    |

Framework versions

  • Transformers 4.46.2
  • TensorFlow 2.17.0
  • Datasets 3.1.0
  • Tokenizers 0.20.3