tunarebus/indonesian-roberta-base-posp-tagger-finetuned-tweetdinastipolitik

This model is a fine-tuned version of w11wo/indonesian-roberta-base-posp-tagger on an unknown dataset. It achieves the following results after the final training epoch:

  • Train Loss: 5.1186
  • Validation Loss: 5.2613
  • Epoch: 33
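The card does not document how the checkpoint is meant to be used. Below is a minimal loading sketch, assuming the model is served as a TensorFlow fill-mask model (RoBERTa's <mask> token); the example sentence is hypothetical:

```python
# Minimal sketch (not from the author): load the checkpoint as a
# TensorFlow fill-mask pipeline. The task is an assumption; the card
# only reports train/validation losses and does not name the task.
from transformers import pipeline

fill_mask = pipeline(
    "fill-mask",
    model="tunarebus/indonesian-roberta-base-posp-tagger-finetuned-tweetdinastipolitik",
    framework="tf",  # the checkpoint was trained with TensorFlow
)

# Hypothetical Indonesian prompt, roughly: "<mask> politics is a hot topic on Twitter."
print(fill_mask("Politik <mask> menjadi topik hangat di Twitter."))
```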

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • optimizer: AdamWeightDecay wrapped in a dynamic loss-scale optimizer (initial_scale: 32768.0, dynamic_growth_steps: 2000)
      • learning_rate: WarmUp (initial_learning_rate: 2e-05, warmup_steps: 1000, power: 1.0) over PolynomialDecay (initial_learning_rate: 2e-05, decay_steps: -980, end_learning_rate: 0.0, power: 1.0, cycle: False)
      • beta_1: 0.9, beta_2: 0.999, epsilon: 1e-08, amsgrad: False
      • weight_decay_rate: 0.01, decay: 0.0
  • training_precision: mixed_float16
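The serialized config above matches what transformers' TensorFlow create_optimizer helper builds: AdamWeightDecay under a WarmUp-then-PolynomialDecay schedule, loss-scaled for mixed precision. A minimal sketch of an equivalent setup; num_train_steps is an assumption, since the card only records warmup_steps=1000 and the decay endpoints:

```python
# Minimal sketch (not the author's training script): rebuild an
# equivalent optimizer with transformers' TF helper.
import tensorflow as tf
from transformers import create_optimizer

# training_precision: mixed_float16. Under this policy Keras wraps the
# optimizer in a dynamic LossScaleOptimizer, matching the loss-scale
# fields recorded above.
tf.keras.mixed_precision.set_global_policy("mixed_float16")

num_train_steps = 2000  # hypothetical; typically steps_per_epoch * num_epochs

optimizer, lr_schedule = create_optimizer(
    init_lr=2e-5,            # initial_learning_rate from the config
    num_train_steps=num_train_steps,
    num_warmup_steps=1000,   # warmup_steps from the config
    weight_decay_rate=0.01,  # AdamWeightDecay weight_decay_rate
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```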

Training results

| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 7.6412 | 7.4740 | 0 |
| 7.5082 | 7.3516 | 1 |
| 7.4002 | 7.2847 | 2 |
| 7.2814 | 7.1402 | 3 |
| 7.1623 | 6.9974 | 4 |
| 7.0393 | 6.9544 | 5 |
| 6.9725 | 6.8025 | 6 |
| 6.8398 | 6.7491 | 7 |
| 6.7369 | 6.6489 | 8 |
| 6.6285 | 6.5881 | 9 |
| 6.5252 | 6.4414 | 10 |
| 6.5094 | 6.3932 | 11 |
| 6.4500 | 6.2782 | 12 |
| 6.3615 | 6.2943 | 13 |
| 6.3066 | 6.2166 | 14 |
| 6.2139 | 6.1871 | 15 |
| 6.1785 | 6.1024 | 16 |
| 5.9922 | 6.0100 | 17 |
| 6.0045 | 6.0078 | 18 |
| 5.9443 | 5.9436 | 19 |
| 5.8311 | 5.8359 | 20 |
| 5.8252 | 5.8134 | 21 |
| 5.7629 | 5.7652 | 22 |
| 5.6416 | 5.7127 | 23 |
| 5.5541 | 5.6641 | 24 |
| 5.5813 | 5.5408 | 25 |
| 5.5254 | 5.6771 | 26 |
| 5.5086 | 5.5469 | 27 |
| 5.4037 | 5.5299 | 28 |
| 5.3488 | 5.4362 | 29 |
| 5.2641 | 5.3928 | 30 |
| 5.2668 | 5.3659 | 31 |
| 5.1900 | 5.3624 | 32 |
| 5.1186 | 5.2613 | 33 |

Framework versions

  • Transformers 4.35.2
  • TensorFlow 2.15.0
  • Datasets 2.16.1
  • Tokenizers 0.15.0
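A small sanity check, assuming the same pinned versions are wanted when reproducing this environment:

```python
# Verify that the pinned framework versions listed above are installed.
import datasets
import tensorflow
import tokenizers
import transformers

assert transformers.__version__ == "4.35.2"
assert tensorflow.__version__ == "2.15.0"
assert datasets.__version__ == "2.16.1"
assert tokenizers.__version__ == "0.15.0"
```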