---
base_model: zeon8985army/IndonesiaLukasLargeV3_2
datasets:
  - '-'
language:
  - id
library_name: peft
tags:
  - id-asr-leaderboard
  - generated_from_trainer
model-index:
  - name: zeon8985army/IndonesiaLukasLargeV3_3
    results: []
---

# zeon8985army/IndonesiaLukasLargeV3_3

This model is a fine-tuned version of [zeon8985army/IndonesiaLukasLargeV3_2](https://huggingface.co/zeon8985army/IndonesiaLukasLargeV3_2) on the Mozilla & GoogleFleur dataset. It achieves the following results on the evaluation set:

- Loss: 0.1630
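
This repository contains a PEFT (LoRA) adapter rather than full model weights, so it is loaded on top of a base Whisper checkpoint. Below is a minimal transcription sketch; it assumes the adapter ultimately targets `openai/whisper-large-v3` (inferred from the model name, not confirmed by this card) and that `sample_id.wav` is a local 16 kHz Indonesian recording.

```python
import librosa
import torch
from peft import PeftModel
from transformers import WhisperForConditionalGeneration, WhisperProcessor

BASE = "openai/whisper-large-v3"                  # assumed base checkpoint
ADAPTER = "zeon8985army/IndonesiaLukasLargeV3_3"  # this repository

processor = WhisperProcessor.from_pretrained(BASE, language="id", task="transcribe")
model = WhisperForConditionalGeneration.from_pretrained(BASE)
model = PeftModel.from_pretrained(model, ADAPTER)  # attach the fine-tuned adapter
model.eval()

# Load and resample a local audio file to Whisper's expected 16 kHz.
audio, _ = librosa.load("sample_id.wav", sr=16_000)
inputs = processor(audio, sampling_rate=16_000, return_tensors="pt")

with torch.no_grad():
    generated = model.generate(
        input_features=inputs.input_features,
        language="id",
        task="transcribe",
    )
print(processor.batch_decode(generated, skip_special_tokens=True)[0])
```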

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a configuration sketch follows the list):

- learning_rate: 6e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 12
- training_steps: 585
- mixed_precision_training: Native AMP
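
These values map directly onto `transformers.Seq2SeqTrainingArguments`. A minimal reconstruction sketch: `output_dir` is hypothetical, and every argument not listed above is left at its default (the defaults match the Adam betas and epsilon shown).

```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="IndonesiaLukasLargeV3_3",  # hypothetical output path
    learning_rate=6e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=12,
    max_steps=585,
    fp16=True,  # "Native AMP" mixed precision
)
```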

### Training results

| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.1375        | 0.1047 | 36   | 0.1698          |
| 0.1478        | 0.2093 | 72   | 0.1691          |
| 0.1443        | 0.3140 | 108  | 0.1684          |
| 0.1428        | 0.4186 | 144  | 0.1679          |
| 0.1339        | 0.5233 | 180  | 0.1675          |
| 0.1442        | 0.6279 | 216  | 0.1666          |
| 0.1473        | 0.7326 | 252  | 0.1666          |
| 0.1372        | 0.8372 | 288  | 0.1662          |
| 0.1585        | 0.9419 | 324  | 0.1650          |
| 0.1356        | 1.0465 | 360  | 0.1642          |
| 0.1402        | 1.1512 | 396  | 0.1639          |
| 0.1245        | 1.2558 | 432  | 0.1627          |
| 0.1231        | 1.3605 | 468  | 0.1635          |
| 0.1345        | 1.4651 | 504  | 0.1631          |
| 0.1256        | 1.5698 | 540  | 0.1636          |
| 0.1426        | 1.6744 | 576  | 0.1630          |

### Framework versions

- PEFT 0.9.0
- Transformers 4.44.2
- Pytorch 2.4.1+cu121
- Datasets 3.0.1
- Tokenizers 0.19.1
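
To verify that a local environment matches these pins, a minimal check:

```python
import importlib

# Expected versions from the list above; torch is listed as 2.4.1+cu121.
expected = {
    "peft": "0.9.0",
    "transformers": "4.44.2",
    "torch": "2.4.1",
    "datasets": "3.0.1",
    "tokenizers": "0.19.1",
}
for name, want in expected.items():
    have = importlib.import_module(name).__version__
    status = "OK" if have.startswith(want) else f"MISMATCH (have {have})"
    print(f"{name} {want}: {status}")
```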