
albert-large-v2-spoken-squad

This model is a fine-tuned version of albert-large-v2 on the Spoken SQuAD dataset. It achieves the following results on the evaluation set:

  • Exact Match: 66.7026
  • F1: 79.3491
  • Loss: 1.0481

Model description

This checkpoint is an extractive question-answering model fine-tuned on Spoken SQuAD, in which the reference passages are ASR transcriptions of spoken documents; the WER44 and WER54 test sets use noisier transcriptions with word error rates of roughly 44% and 54%.

Results on Spoken SQuAD Test Sets

| Test Set   | Test Loss | Samples | Exact Match | F1      |
|------------|-----------|---------|-------------|---------|
| Test       | 1.183     | 5351    | 71.2951     | 80.4348 |
| Test WER44 | 6.2158    | 5351    | 45.9727     | 60.8491 |
| Test WER54 | 6.2158    | 5351    | 45.9727     | 60.8491 |
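
As an extractive question-answering checkpoint, the model can be loaded with the standard Transformers pipeline. The sketch below is a minimal example; the hub id `your-username/albert-large-v2-spoken-squad` and the question/context strings are placeholders.

```python
from transformers import pipeline

# Placeholder hub id -- substitute the actual repository path of this checkpoint.
qa = pipeline(
    "question-answering",
    model="your-username/albert-large-v2-spoken-squad",
)

result = qa(
    question="Who wrote the play Hamlet?",
    context="Hamlet is a tragedy written by William Shakespeare sometime between 1599 and 1601.",
)
print(result)  # {'score': ..., 'start': ..., 'end': ..., 'answer': 'William Shakespeare'}
```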

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 2e-05
  • train_batch_size: 16
  • eval_batch_size: 64
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 3
  • mixed_precision_training: Native AMP
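
For reference, the hyperparameters above roughly correspond to the following `TrainingArguments`. This is only a sketch assuming the standard Transformers `Trainer` setup; `output_dir` is a placeholder, and dataset preparation and the `Trainer` call are omitted.

```python
from transformers import TrainingArguments

# Sketch of TrainingArguments matching the hyperparameters listed above.
training_args = TrainingArguments(
    output_dir="albert-large-v2-spoken-squad",  # placeholder
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=64,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=3,
    fp16=True,  # Native AMP mixed-precision training
)
```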

Training results

| Training Loss | Epoch | Step | Exact Match | F1      | Validation Loss |
|:-------------:|:-----:|:----:|:-----------:|:-------:|:---------------:|
| 1.0444        | 1.0   | 2088 | 63.6584     | 77.0975 | 1.0645          |
| 0.8017        | 2.0   | 4176 | 66.3524     | 79.3253 | 0.9756          |
| 0.5426        | 3.0   | 6264 | 66.7026     | 79.3491 | 1.0481          |
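
Exact Match and F1 above are the standard SQuAD metrics. A minimal sketch of computing them with the `evaluate` library is shown below; the prediction and reference entries are illustrative placeholders.

```python
import evaluate

# SQuAD metric: returns exact_match and f1 over prediction/reference pairs.
squad_metric = evaluate.load("squad")

predictions = [{"id": "0", "prediction_text": "William Shakespeare"}]
references = [
    {"id": "0", "answers": {"text": ["William Shakespeare"], "answer_start": [31]}}
]

print(squad_metric.compute(predictions=predictions, references=references))
# {'exact_match': 100.0, 'f1': 100.0}
```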

Framework versions

  • Transformers 4.24.0
  • PyTorch 1.13.1
  • Datasets 2.8.0
  • Tokenizers 0.11.0