fine-tuned-DatasetQAS-Squad-ID-with-indobert-base-uncased-without-ITTL-without-freeze-LR-1e-05

This model is a fine-tuned version of indolem/indobert-base-uncased on the Squad-ID dataset. It achieves the following results on the evaluation set:

  • Loss: 1.5081
  • Exact Match: 48.0441
  • F1: 64.5121
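
Exact Match and F1 here are the standard SQuAD-style span metrics: EM checks whether the normalized predicted answer string equals the gold answer, while F1 measures token-level overlap between the two. A minimal sketch of these metrics (not the exact evaluation script used for this model; English-specific article stripping is omitted since Squad-ID is Indonesian):

```python
import string
from collections import Counter

def normalize(text):
    """Lowercase, strip punctuation, and collapse whitespace."""
    text = text.lower()
    text = "".join(ch for ch in text if ch not in string.punctuation)
    return " ".join(text.split())

def exact_match(prediction, gold):
    """1.0 if the normalized strings are identical, else 0.0."""
    return float(normalize(prediction) == normalize(gold))

def f1_score(prediction, gold):
    """Token-level F1 between normalized prediction and gold answer."""
    pred_tokens = normalize(prediction).split()
    gold_tokens = normalize(gold).split()
    common = Counter(pred_tokens) & Counter(gold_tokens)
    num_same = sum(common.values())
    if num_same == 0:
        return 0.0
    precision = num_same / len(pred_tokens)
    recall = num_same / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)
```

Both scores are typically averaged over the evaluation set (taking the max over multiple gold answers per question) and multiplied by 100, which is the scale of the numbers reported above.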

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 1e-05
  • train_batch_size: 32
  • eval_batch_size: 32
  • seed: 42
  • gradient_accumulation_steps: 4
  • total_train_batch_size: 128
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 10
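
Two of these values are derived rather than set independently: the effective batch size is train_batch_size × gradient_accumulation_steps, and the linear scheduler decays the learning rate from 1e-05 toward 0 over the total number of optimizer steps. A small sketch of both relationships (the total step count of 9260 is an assumption, extrapolated from the ~926 steps per epoch visible in the results table below times 10 epochs):

```python
# Effective batch size: gradients are accumulated over 4 micro-batches of 32.
train_batch_size = 32
gradient_accumulation_steps = 4
total_train_batch_size = train_batch_size * gradient_accumulation_steps  # 128

initial_lr = 1e-05
total_steps = 9260  # assumed: ~926 optimizer steps/epoch * 10 epochs

def linear_lr(step, total_steps=total_steps, lr=initial_lr):
    """Linear decay with no warmup: lr reaches 0 at the final step."""
    return lr * max(0.0, 1.0 - step / total_steps)
```

Under this schedule the learning rate at step 4630 (epoch 5.0, the last logged row) would be half the initial value, i.e. 5e-06.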

Training results

| Training Loss | Epoch | Step | Validation Loss | Exact Match | F1      |
|:-------------:|:-----:|:----:|:---------------:|:-----------:|:-------:|
| 2.0164        | 0.5   | 463  | 1.8536          | 39.4212     | 54.9381 |
| 1.8356        | 1.0   | 926  | 1.6789          | 43.3667     | 59.1353 |
| 1.6492        | 1.5   | 1389 | 1.6111          | 45.0324     | 61.1553 |
| 1.6051        | 2.0   | 1852 | 1.5722          | 45.5119     | 62.1336 |
| 1.4925        | 2.5   | 2315 | 1.5679          | 46.5130     | 63.6738 |
| 1.5049        | 3.0   | 2778 | 1.5260          | 47.2197     | 64.2953 |
| 1.3868        | 3.5   | 3241 | 1.5213          | 47.6571     | 64.6843 |
| 1.3574        | 4.0   | 3704 | 1.5065          | 47.8758     | 64.3476 |
| 1.3199        | 4.5   | 4167 | 1.5169          | 47.6403     | 64.4273 |
| 1.3024        | 5.0   | 4630 | 1.5081          | 48.0441     | 64.5121 |
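
Note that the headline metrics above (loss 1.5081, EM 48.0441, F1 64.5121) correspond to the final logged row at epoch 5.0, not to the lowest-validation-loss row (epoch 4.0, loss 1.5065). Reading both off the log can be sketched as:

```python
# (epoch, step, val_loss, exact_match, f1) rows copied from the table above
log = [
    (0.5, 463, 1.8536, 39.4212, 54.9381),
    (1.0, 926, 1.6789, 43.3667, 59.1353),
    (1.5, 1389, 1.6111, 45.0324, 61.1553),
    (2.0, 1852, 1.5722, 45.5119, 62.1336),
    (2.5, 2315, 1.5679, 46.5130, 63.6738),
    (3.0, 2778, 1.5260, 47.2197, 64.2953),
    (3.5, 3241, 1.5213, 47.6571, 64.6843),
    (4.0, 3704, 1.5065, 47.8758, 64.3476),
    (4.5, 4167, 1.5169, 47.6403, 64.4273),
    (5.0, 4630, 1.5081, 48.0441, 64.5121),
]

final = log[-1]                             # epoch 5.0: the reported metrics
best_loss = min(log, key=lambda r: r[2])    # epoch 4.0: lowest validation loss
```

Whether the published checkpoint is the final one or the lowest-loss one is not stated in the card; the reported numbers simply match the last row.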

Framework versions

  • Transformers 4.27.4
  • Pytorch 1.13.1+cu117
  • Datasets 2.2.0
  • Tokenizers 0.13.2