whisper-small-CV16-GF-AS-jp

This model is a fine-tuned version of openai/whisper-small on the common_voice_16_0, google/fleurs, and joujiboi/japanese-anime-speech datasets. It achieves the following results on the evaluation set:

  • Loss: 0.7695
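
A minimal loading-and-inference sketch follows. It assumes the adapter is published as sin2piusc/whisper-small-CV16-GF-AS-jp and targets openai/whisper-small; the generation arguments are illustrative, not taken from a tested script.

```python
# Hedged sketch: load the LoRA adapter on top of the base Whisper model.
import numpy as np
import torch
from transformers import WhisperForConditionalGeneration, WhisperProcessor
from peft import PeftModel

base = WhisperForConditionalGeneration.from_pretrained("openai/whisper-small")
model = PeftModel.from_pretrained(base, "sin2piusc/whisper-small-CV16-GF-AS-jp")
processor = WhisperProcessor.from_pretrained("openai/whisper-small")

# Placeholder waveform; replace with real 16 kHz mono audio.
audio = np.zeros(16000, dtype=np.float32)
inputs = processor(audio, sampling_rate=16000, return_tensors="pt")

with torch.no_grad():
    ids = model.generate(input_features=inputs.input_features,
                         language="ja", task="transcribe")
print(processor.batch_decode(ids, skip_special_tokens=True)[0])
```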

Model description

A LoRA (PEFT) fine-tune of openai/whisper-small on three concatenated Japanese speech datasets, trained on Windows 10 (no Linux). This is a work-in-progress test run. A rough sketch of the dataset concatenation follows.
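
The sketch below shows one way the three datasets could be aligned and concatenated with the datasets library. It is not the exact preprocessing used here: the config names ("ja", "ja_jp"), splits, and column names are assumptions.

```python
# Hedged sketch: normalize three dataset schemas, then concatenate them.
from datasets import load_dataset, concatenate_datasets, Audio

cv     = load_dataset("mozilla-foundation/common_voice_16_0", "ja", split="train")
fleurs = load_dataset("google/fleurs", "ja_jp", split="train")
anime  = load_dataset("joujiboi/japanese-anime-speech", split="train")

def normalize(ds, text_column):
    # Keep only an audio column and a text column, resampled to 16 kHz,
    # so the three schemas match before concatenation.
    if text_column != "sentence":
        ds = ds.rename_column(text_column, "sentence")
    ds = ds.remove_columns([c for c in ds.column_names
                            if c not in ("audio", "sentence")])
    return ds.cast_column("audio", Audio(sampling_rate=16000))

train = concatenate_datasets([
    normalize(cv, "sentence"),          # Common Voice uses "sentence"
    normalize(fleurs, "transcription"), # FLEURS uses "transcription"
    normalize(anime, "sentence"),       # assumed column name for this dataset
])
```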

Intended uses & limitations

A test run in preparation for fine-tuning the large model.

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a configuration sketch follows the list):

  • learning_rate: 0.001
  • train_batch_size: 2
  • eval_batch_size: 1
  • seed: 42
  • gradient_accumulation_steps: 4
  • total_train_batch_size: 8
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • training_steps: 500
  • mixed_precision_training: Native AMP
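
The list above maps onto Seq2SeqTrainingArguments roughly as shown below. This is a hedged reconstruction: the LoRA rank, alpha, and target modules are common choices for Whisper fine-tunes, not values stated in this card.

```python
# Hedged sketch: hyperparameters from the list, plus an assumed LoRA config.
from peft import LoraConfig, get_peft_model
from transformers import Seq2SeqTrainingArguments, WhisperForConditionalGeneration

model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-small")
lora = LoraConfig(r=32, lora_alpha=64, lora_dropout=0.05,
                  target_modules=["q_proj", "v_proj"])  # assumed, not from the card
model = get_peft_model(model, lora)

args = Seq2SeqTrainingArguments(
    output_dir="whisper-small-CV16-GF-AS-jp",
    learning_rate=1e-3,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=1,
    gradient_accumulation_steps=4,  # effective total train batch size: 2 * 4 = 8
    seed=42,
    lr_scheduler_type="linear",
    max_steps=500,
    fp16=True,  # "Native AMP" mixed precision
)
```

The Adam betas (0.9, 0.999) and epsilon 1e-08 from the list are the Trainer's optimizer defaults, so they are not set explicitly here.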

Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.3266        | 1.0   | 500  | 0.7695          |

Framework versions

  • PEFT 0.8.2
  • Transformers 4.38.0.dev0
  • Pytorch 2.2.0+cu118
  • Datasets 2.16.2.dev0
  • Tokenizers 0.15.1
