---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: transcriber-t5-v8-new
  results: []
---

# transcriber-t5-v8-new

This model is a fine-tuned version of [google/flan-t5-small](https://huggingface.co/google/flan-t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0818

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 12
- eval_batch_size: 12
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 10

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.1008        | 0.72  | 500  | 0.1306          |
| 0.069         | 1.43  | 1000 | 0.1227          |
| 0.1052        | 2.15  | 1500 | 0.1209          |
| 0.1017        | 2.86  | 2000 | 0.0992          |
| 0.0828        | 3.58  | 2500 | 0.0919          |
| 0.0471        | 4.29  | 3000 | 0.0927          |
| 0.0769        | 5.01  | 3500 | 0.0849          |
| 0.0732        | 5.72  | 4000 | 0.0862          |
| 0.0801        | 6.44  | 4500 | 0.0857          |
| 0.0428        | 7.15  | 5000 | 0.0815          |
| 0.1119        | 7.87  | 5500 | 0.0790          |
| 0.0692        | 8.58  | 6000 | 0.0780          |
| 0.0684        | 9.3   | 6500 | 0.0818          |

### Framework versions

- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
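
For reference, a minimal sketch of how the hyperparameters listed above could map onto `Seq2SeqTrainingArguments` in the Transformers version above. The dataset, preprocessing, and exact trainer setup are not documented in this card, and the 500-step evaluation cadence is only inferred from the results table, so treat this as an approximation rather than the actual training script:

```python
from transformers import (
    AutoModelForSeq2SeqLM,
    AutoTokenizer,
    Seq2SeqTrainingArguments,
)

# Base checkpoint named in this card.
base_checkpoint = "google/flan-t5-small"
tokenizer = AutoTokenizer.from_pretrained(base_checkpoint)
model = AutoModelForSeq2SeqLM.from_pretrained(base_checkpoint)

# Values copied from the hyperparameter list above; the "steps"/500 evaluation
# cadence is an assumption inferred from the results table, and output_dir is a placeholder.
args = Seq2SeqTrainingArguments(
    output_dir="transcriber-t5-v8-new",
    learning_rate=5e-5,
    per_device_train_batch_size=12,
    per_device_eval_batch_size=12,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_steps=500,
    num_train_epochs=10,
    evaluation_strategy="steps",
    eval_steps=500,
)

# These arguments would then be passed to a Seq2SeqTrainer together with the
# tokenized train/eval datasets, which are not documented in this card.
```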
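
The card also does not document the expected input/output format. Since the base model is a T5 checkpoint, a generic seq2seq inference sketch would look like the following; the repo id below is hypothetical and should be replaced with wherever this fine-tuned checkpoint is actually hosted:

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

# Hypothetical repo id -- replace with the actual location of this checkpoint.
checkpoint = "your-username/transcriber-t5-v8-new"

tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint)

# Generic T5-style text-to-text generation; the prompt format this model expects
# is not documented in the card.
inputs = tokenizer("example input text", return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```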