---
language:
  - en
base_model: distil-small.en
tags:
  - generated_from_trainer
datasets:
  - librispeech_asr
metrics:
  - wer
model-index:
  - name: DistilFT-English-10h
    results:
      - task:
          name: Automatic Speech Recognition
          type: automatic-speech-recognition
        dataset:
          name: librispeech
          type: librispeech_asr
          config: default
          split: None
          args: 'config: en, split: test-clean'
        metrics:
          - name: Wer
            type: wer
            value: 4.4905114250188545
---

# DistilFT-English-10h

This model is a fine-tuned version of distil-small.en on the LibriSpeech dataset. It achieves the following results on the evaluation set:

- Loss: 0.2318
- Wer: 4.4905
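
A minimal usage sketch, not part of the original card: loading the checkpoint with the `transformers` ASR pipeline. The repo id `Pageee/DistilFT-English-10h` and the audio path are assumptions for illustration.

```python
from transformers import pipeline

# Load the fine-tuned checkpoint from the Hub.
# NOTE: the repo id below is an assumption, not stated in the card.
asr = pipeline(
    "automatic-speech-recognition",
    model="Pageee/DistilFT-English-10h",
)

# Transcribe an audio file; Whisper-family models expect 16 kHz input,
# and the pipeline resamples automatically.
result = asr("sample.wav")  # placeholder path
print(result["text"])
```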

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a `Seq2SeqTrainingArguments` sketch follows the list):

- learning_rate: 5e-07
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 300
- training_steps: 1000
- mixed_precision_training: Native AMP
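
A sketch of how these hyperparameters map onto `Seq2SeqTrainingArguments`; `output_dir`, the evaluation cadence, and `predict_with_generate` are assumptions (the cadence is inferred from the 100-step intervals in the results table below), not settings stated in the card.

```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="./DistilFT-English-10h",  # assumed
    learning_rate=5e-7,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=2,  # effective train batch size: 8 * 2 = 16
    lr_scheduler_type="linear",
    warmup_steps=300,
    max_steps=1000,
    fp16=True,  # "Native AMP" mixed precision
    evaluation_strategy="steps",  # assumed from the 100-step eval intervals
    eval_steps=100,
    predict_with_generate=True,  # assumed; needed to compute WER during eval
)
# Adam with betas=(0.9, 0.999) and epsilon=1e-08 matches the optimizer
# defaults, so no explicit optimizer arguments are needed here.
```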

### Training results

| Training Loss | Epoch  | Step | Validation Loss | Wer    |
|:-------------:|:------:|:----:|:---------------:|:------:|
| 0.651         | 0.5556 | 100  | 0.9641          | 3.4754 |
| 0.5006        | 1.1111 | 200  | 0.7651          | 3.5039 |
| 0.3531        | 1.6667 | 300  | 0.5188          | 3.5121 |
| 0.2176        | 2.2222 | 400  | 0.3514          | 4.0258 |
| 0.1834        | 2.7778 | 500  | 0.2878          | 4.3132 |
| 0.1587        | 3.3333 | 600  | 0.2589          | 4.4049 |
| 0.1553        | 3.8889 | 700  | 0.2447          | 4.5007 |
| 0.1566        | 4.4444 | 800  | 0.2370          | 4.5007 |
| 0.1226        | 5.0    | 900  | 0.2332          | 4.5048 |
| 0.1533        | 5.5556 | 1000 | 0.2318          | 4.4905 |
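
The WER figures above can be reproduced offline with the `evaluate` library (the card's `wer` metric). A minimal sketch with placeholder strings; the reference/prediction pair is illustrative, not from LibriSpeech.

```python
import evaluate

wer_metric = evaluate.load("wer")

# Illustrative placeholder transcripts, not actual LibriSpeech data.
references = ["the cat sat on the mat"]
predictions = ["the cat sat on a mat"]

# WER = (substitutions + insertions + deletions) / reference word count,
# scaled to a percentage to match the table above.
wer = 100 * wer_metric.compute(references=references, predictions=predictions)
print(f"WER: {wer:.4f}")  # 1 error / 6 words ≈ 16.6667
```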

### Framework versions

- Transformers 4.41.0.dev0
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1