# psst_batch_size_4_base_model
This model is a fine-tuned version of facebook/wav2vec2-base-960h on an unspecified dataset. It achieves the following results on the evaluation set:
- Loss: 3.6743
## Model description
More information needed
## Intended uses & limitations
More information needed
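The card does not document usage. As a minimal sketch, assuming this checkpoint keeps the CTC head and processor of the wav2vec2 base model, inference with `transformers` could look like the following; the repo id is a placeholder for wherever this checkpoint is actually hosted:

```python
import torch
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

# Placeholder repo id; substitute the actual hub path of this checkpoint.
MODEL_ID = "psst_batch_size_4_base_model"

processor = Wav2Vec2Processor.from_pretrained(MODEL_ID)
model = Wav2Vec2ForCTC.from_pretrained(MODEL_ID)
model.eval()

def transcribe(speech, sampling_rate=16_000):
    """CTC greedy decoding. `speech` is a 1-D float array of mono audio
    at 16 kHz, matching the base model's pretraining, e.g. loaded with
    torchaudio or librosa."""
    inputs = processor(speech, sampling_rate=sampling_rate, return_tensors="pt")
    with torch.no_grad():
        logits = model(inputs.input_values).logits
    predicted_ids = torch.argmax(logits, dim=-1)
    return processor.batch_decode(predicted_ids)[0]
```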
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training (a sketch mapping them onto `TrainingArguments` follows the list):
- learning_rate: 0.0003
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 30
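The training script itself is not included in this card. Below is a sketch of how the listed values map onto `transformers.TrainingArguments`; the `output_dir` and any argument not listed above are assumptions, not from the card:

```python
from transformers import TrainingArguments

# Mirrors the hyperparameters listed above; `output_dir` is a placeholder.
training_args = TrainingArguments(
    output_dir="psst_batch_size_4_base_model",
    learning_rate=3e-4,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=2,  # effective train batch size: 4 * 2 = 8
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_steps=100,
    num_train_epochs=30,
)
```

With `gradient_accumulation_steps=2`, the per-device batch of 4 yields the total train batch size of 8 listed above.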
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 14.1952       | 1.68  | 100  | 3.6352          |
| 3.9092        | 3.36  | 200  | 3.7223          |
| 3.9981        | 5.04  | 300  | 3.6864          |
| 3.7209        | 6.72  | 400  | 3.6310          |
| 3.9395        | 8.4   | 500  | 3.7229          |
| 3.7126        | 10.08 | 600  | 3.6163          |
| 3.6999        | 11.76 | 700  | 3.6776          |
| 3.7203        | 13.45 | 800  | 3.7568          |
| 3.7202        | 15.13 | 900  | 3.6998          |
| 3.7023        | 16.81 | 1000 | 3.6943          |
| 3.689         | 18.49 | 1100 | 3.6501          |
| 3.7009        | 20.17 | 1200 | 3.6973          |
| 3.6882        | 21.85 | 1300 | 3.6938          |
| 3.6907        | 23.53 | 1400 | 3.6795          |
| 3.6869        | 25.21 | 1500 | 3.6727          |
| 3.681         | 26.89 | 1600 | 3.6749          |
| 3.6968        | 28.57 | 1700 | 3.6743          |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1
- Datasets 2.12.0
- Tokenizers 0.13.2