fine-tuned-DatasetQAS-TYDI-QA-ID-with-indobert-base-uncased-without-ITTL-without-freeze-LR-1e-05

This model is a fine-tuned version of indolem/indobert-base-uncased on the TyDi QA Indonesian (TYDI-QA-ID) dataset. It achieves the following results on the evaluation set:

  • Loss: 1.1493
  • Exact Match: 60.5585
  • F1: 75.1071
  • Precision: 76.3329
  • Recall: 81.4497
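
A minimal sketch of loading this checkpoint for extractive question answering with the Transformers pipeline API is shown below. The repository id is a placeholder (the card does not state the Hub namespace), and the question/context pair is an invented example for illustration only.

```python
from transformers import pipeline

# Placeholder Hub id; replace with the actual repository path of this model.
model_id = "your-username/fine-tuned-DatasetQAS-TYDI-QA-ID-with-indobert-base-uncased-without-ITTL-without-freeze-LR-1e-05"

# Extractive question-answering pipeline; the tokenizer is loaded from the same checkpoint.
qa = pipeline("question-answering", model=model_id, tokenizer=model_id)

# Invented Indonesian question/context pair, for illustration only.
result = qa(
    question="Kapan Proklamasi Kemerdekaan Indonesia dibacakan?",
    context="Proklamasi Kemerdekaan Indonesia dibacakan pada tanggal 17 Agustus 1945.",
)
print(result)  # dict with 'score', 'start', 'end', 'answer'
```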

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 1e-05
  • train_batch_size: 16
  • eval_batch_size: 16
  • seed: 42
  • gradient_accumulation_steps: 4
  • total_train_batch_size: 64
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_ratio: 0.06
  • num_epochs: 16
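
These settings map directly onto the standard Trainer API; a hedged sketch of the corresponding TrainingArguments (Transformers 4.26) is given below. The output directory and evaluation strategy are assumptions not stated in the card; the remaining values are taken from the list above.

```python
from transformers import TrainingArguments

# Hedged sketch: the hyperparameters listed above expressed as TrainingArguments.
training_args = TrainingArguments(
    output_dir="fine-tuned-DatasetQAS-TYDI-QA-ID-with-indobert-base-uncased-without-ITTL-without-freeze-LR-1e-05",  # assumed
    learning_rate=1e-05,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    gradient_accumulation_steps=4,   # 16 * 4 = 64 total train batch size
    num_train_epochs=16,
    lr_scheduler_type="linear",
    warmup_ratio=0.06,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-08,
    evaluation_strategy="steps",     # assumption; the results table logs evaluation twice per epoch
)
```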

Training results

| Training Loss | Epoch | Step | Validation Loss | Exact Match | F1      | Precision | Recall  |
|:-------------:|:-----:|:----:|:---------------:|:-----------:|:-------:|:---------:|:-------:|
| 6.1192        | 0.5   | 38   | 4.8873          | 4.0140      | 16.4529 | 16.4981   | 38.0734 |
| 5.4384        | 0.99  | 76   | 2.8628          | 16.7539     | 29.8825 | 29.0280   | 50.2974 |
| 3.1591        | 1.5   | 114  | 2.4374          | 24.2583     | 36.1059 | 35.6027   | 53.7380 |
| 2.4014        | 1.99  | 152  | 2.2367          | 30.0175     | 41.9697 | 41.9505   | 53.7706 |
| 2.4014        | 2.5   | 190  | 2.0861          | 33.5079     | 45.2875 | 45.6044   | 55.6393 |
| 2.1121        | 2.99  | 228  | 1.8134          | 41.1867     | 52.1539 | 53.0988   | 60.0665 |
| 1.8437        | 3.5   | 266  | 1.5977          | 46.0733     | 59.5453 | 60.0688   | 69.5715 |
| 1.5105        | 3.99  | 304  | 1.3928          | 51.4834     | 65.0228 | 65.8592   | 72.3641 |
| 1.5105        | 4.5   | 342  | 1.3275          | 54.9738     | 68.7090 | 69.9803   | 75.8245 |
| 1.2337        | 4.99  | 380  | 1.2185          | 56.8935     | 70.5705 | 72.3556   | 75.7959 |
| 1.1333        | 5.5   | 418  | 1.2537          | 57.2426     | 70.9476 | 72.6953   | 75.6818 |
| 0.9915        | 5.99  | 456  | 1.1484          | 58.4642     | 73.3124 | 75.0975   | 78.1646 |
| 0.9915        | 6.5   | 494  | 1.1665          | 59.3368     | 74.0503 | 75.6279   | 79.6335 |
| 0.8931        | 6.99  | 532  | 1.1316          | 59.6859     | 74.4803 | 75.9433   | 79.8837 |
| 0.8498        | 7.5   | 570  | 1.1414          | 60.9075     | 75.3350 | 76.5606   | 81.1204 |
| 0.7783        | 7.99  | 608  | 1.1332          | 60.3839     | 75.2719 | 76.8970   | 81.1038 |
| 0.7783        | 8.5   | 646  | 1.1133          | 61.2565     | 75.3214 | 76.9111   | 81.1566 |
| 0.7209        | 8.99  | 684  | 1.1493          | 60.5585     | 75.1071 | 76.3329   | 81.4497 |
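
The card does not state how Exact Match and F1 were computed; a plausible sketch uses the SQuAD-style metric shipped with the Datasets version listed below. Note that this metric reports only exact_match and f1, so the Precision/Recall columns above would come from a separate token-overlap computation. The prediction/reference pair here is invented.

```python
from datasets import load_metric

# Assumption: SQuAD-style Exact Match / F1, as commonly used for extractive QA evaluation.
metric = load_metric("squad")

predictions = [
    {"id": "q1", "prediction_text": "17 Agustus 1945"},  # invented example
]
references = [
    {"id": "q1", "answers": {"text": ["17 Agustus 1945"], "answer_start": [0]}},  # placeholder offset
]

print(metric.compute(predictions=predictions, references=references))
# -> {'exact_match': 100.0, 'f1': 100.0}
```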

Framework versions

  • Transformers 4.26.1
  • Pytorch 1.13.1+cu117
  • Datasets 2.2.0
  • Tokenizers 0.13.2