---
license: mit
base_model: guilhermebastos96/speecht5_finetuned_sip_ai_v3
tags:
- generated_from_trainer
model-index:
- name: speecht5_finetuned_sip_ai_v4_globo_female
  results: []
---

# speecht5_finetuned_sip_ai_v4_globo_female

This model is a fine-tuned version of [guilhermebastos96/speecht5_finetuned_sip_ai_v3](https://huggingface.co/guilhermebastos96/speecht5_finetuned_sip_ai_v3) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3367

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a hedged reconstruction of these settings appears at the end of this card):
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 10000
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step  | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 0.3933        | 7.14  | 1000  | 0.3580          |
| 0.3845        | 14.27 | 2000  | 0.3482          |
| 0.374         | 21.41 | 3000  | 0.3425          |
| 0.3774        | 28.55 | 4000  | 0.3405          |
| 0.365         | 35.68 | 5000  | 0.3393          |
| 0.3673        | 42.82 | 6000  | 0.3368          |
| 0.3659        | 49.96 | 7000  | 0.3387          |
| 0.3649        | 57.09 | 8000  | 0.3375          |
| 0.3618        | 64.23 | 9000  | 0.3374          |
| 0.3634        | 71.36 | 10000 | 0.3367          |

### Framework versions

- Transformers 4.39.0.dev0
- Pytorch 2.2.1
- Datasets 2.18.0
- Tokenizers 0.15.2
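
### Training configuration (sketch)

The training script itself is not published with this card. The block below is a minimal, hypothetical reconstruction of the hyperparameters listed above as 🤗 Transformers `Seq2SeqTrainingArguments`; the `output_dir`, `save_steps`, and `logging_steps` values are assumptions, and the 1000-step evaluation cadence is inferred from the results table rather than confirmed.

```python
from transformers import Seq2SeqTrainingArguments

# Hypothetical reconstruction of the listed hyperparameters; the actual
# training script for this model has not been published.
training_args = Seq2SeqTrainingArguments(
    output_dir="speecht5_finetuned_sip_ai_v4_globo_female",  # assumed
    learning_rate=1e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=2,
    gradient_accumulation_steps=8,   # 4 x 8 = total train batch size of 32
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=500,
    max_steps=10000,
    fp16=True,                       # "Native AMP" mixed-precision training
    evaluation_strategy="steps",
    eval_steps=1000,                 # inferred from the results table above
    save_steps=1000,                 # assumed
    logging_steps=25,                # assumed
)
```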
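
## How to use

Usage is not yet documented on this card. The sketch below follows the standard SpeechT5 text-to-speech API in 🤗 Transformers and should be treated as a starting point only: the speaker embedding is a random placeholder (the x-vector used during fine-tuning is not published here), and the example sentence is illustrative, since the card does not state the training language.

```python
import torch
import soundfile as sf
from transformers import SpeechT5Processor, SpeechT5ForTextToSpeech, SpeechT5HifiGan

model_id = "guilhermebastos96/speecht5_finetuned_sip_ai_v4_globo_female"
processor = SpeechT5Processor.from_pretrained(model_id)
model = SpeechT5ForTextToSpeech.from_pretrained(model_id)
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")

inputs = processor(text="Hello, this is a test.", return_tensors="pt")

# SpeechT5 conditions generation on a 512-dim speaker (x-vector) embedding.
# The embedding used for this fine-tune is not published, so a random
# placeholder is used here purely to make the example runnable; replace it
# with a real x-vector to get a sensible voice.
speaker_embeddings = torch.randn(1, 512)

speech = model.generate_speech(inputs["input_ids"], speaker_embeddings, vocoder=vocoder)
sf.write("output.wav", speech.numpy(), samplerate=16000)  # SpeechT5 outputs 16 kHz audio
```

For usable output, compute the speaker embedding with a speaker-verification (x-vector) model over a reference recording of the target voice instead of random noise.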