---
license: apache-2.0
base_model: google-t5/t5-base
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: t5_base_ledgar
  results: []
---

# t5_base_ledgar

This model is a fine-tuned version of [google-t5/t5-base](https://huggingface.co/google-t5/t5-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5004
- Accuracy: 0.8664
- F1 Macro: 0.7948
- F1 Micro: 0.8664

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (see the `TrainingArguments` sketch after the framework versions below):
- learning_rate: 0.0005
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- total_train_batch_size: 64
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 Macro | F1 Micro |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:--------:|:--------:|
| 1.4430        | 0.11  | 100  | 1.1133          | 0.7291   | 0.5312   | 0.7291   |
| 0.8813        | 0.21  | 200  | 0.8404          | 0.7712   | 0.6296   | 0.7712   |
| 0.7610        | 0.32  | 300  | 0.7386          | 0.8021   | 0.6789   | 0.8021   |
| 0.7358        | 0.43  | 400  | 0.7313          | 0.8050   | 0.6787   | 0.8050   |
| 0.7624        | 0.53  | 500  | 0.6561          | 0.8164   | 0.7134   | 0.8164   |
| 0.7067        | 0.64  | 600  | 0.6419          | 0.8210   | 0.7273   | 0.8210   |
| 0.6298        | 0.75  | 700  | 0.6412          | 0.8254   | 0.7230   | 0.8254   |
| 0.6544        | 0.85  | 800  | 0.6277          | 0.8217   | 0.7223   | 0.8217   |
| 0.5781        | 0.96  | 900  | 0.6054          | 0.8305   | 0.7420   | 0.8305   |
| 0.4674        | 1.07  | 1000 | 0.6210          | 0.8346   | 0.7371   | 0.8346   |
| 0.4929        | 1.17  | 1100 | 0.5876          | 0.8387   | 0.7423   | 0.8387   |
| 0.5660        | 1.28  | 1200 | 0.5779          | 0.8475   | 0.7633   | 0.8475   |
| 0.4577        | 1.39  | 1300 | 0.5772          | 0.8435   | 0.7508   | 0.8435   |
| 0.4233        | 1.49  | 1400 | 0.5581          | 0.8476   | 0.7625   | 0.8476   |
| 0.4567        | 1.60  | 1500 | 0.5688          | 0.8462   | 0.7576   | 0.8462   |
| 0.4830        | 1.71  | 1600 | 0.5547          | 0.8478   | 0.7609   | 0.8478   |
| 0.4649        | 1.81  | 1700 | 0.5396          | 0.8510   | 0.7680   | 0.8510   |
| 0.4288        | 1.92  | 1800 | 0.5235          | 0.8577   | 0.7759   | 0.8577   |
| 0.3445        | 2.03  | 1900 | 0.5204          | 0.8603   | 0.7791   | 0.8603   |
| 0.3014        | 2.13  | 2000 | 0.5269          | 0.8607   | 0.7862   | 0.8607   |
| 0.3301        | 2.24  | 2100 | 0.5234          | 0.8591   | 0.7826   | 0.8591   |
| 0.3069        | 2.35  | 2200 | 0.5266          | 0.8624   | 0.7851   | 0.8624   |
| 0.3095        | 2.45  | 2300 | 0.5155          | 0.8629   | 0.7846   | 0.8629   |
| 0.3164        | 2.56  | 2400 | 0.5106          | 0.8646   | 0.7909   | 0.8646   |
| 0.2914        | 2.67  | 2500 | 0.5055          | 0.8647   | 0.7934   | 0.8647   |
| 0.2946        | 2.77  | 2600 | 0.5027          | 0.8643   | 0.7917   | 0.8643   |
| 0.3012        | 2.88  | 2700 | 0.5009          | 0.8671   | 0.7953   | 0.8671   |
| 0.3181        | 2.99  | 2800 | 0.5004          | 0.8664   | 0.7948   | 0.8664   |

### Framework versions

- Transformers 4.39.0.dev0
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
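
### Hyperparameters as `TrainingArguments` (sketch)

The hyperparameter list above maps directly onto `TrainingArguments`. Below is a minimal sketch assuming the standard `Trainer` API was used; the `output_dir` is a placeholder, and the listed totals of 64 come from the 2-GPU launch rather than from these arguments themselves.

```python
# Hedged sketch: mirrors the hyperparameters listed in this card.
# output_dir is a placeholder; the training script and dataset loading
# are not documented here.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="t5_base_ledgar",     # placeholder
    learning_rate=5e-4,              # 0.0005, as listed above
    per_device_train_batch_size=32,  # x2 GPUs -> total train batch size 64
    per_device_eval_batch_size=32,   # x2 GPUs -> total eval batch size 64
    num_train_epochs=3.0,
    lr_scheduler_type="linear",
    seed=42,
    adam_beta1=0.9,                  # Adam betas/epsilon as listed above
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```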
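
## Evaluation metrics (sketch)

Accuracy, F1 Macro, and F1 Micro above are the standard multiclass variants. For single-label multiclass classification, micro-F1 equals accuracy, which is why the Accuracy and F1 Micro columns in the results table are identical throughout. An illustrative sketch with made-up labels (not the model's actual predictions):

```python
# Illustrative only: y_true/y_pred are invented, not from the actual eval set.
from sklearn.metrics import accuracy_score, f1_score

y_true = [0, 2, 1, 2, 0, 1]  # hypothetical gold labels
y_pred = [0, 2, 1, 1, 0, 1]  # hypothetical model predictions

accuracy = accuracy_score(y_true, y_pred)
f1_macro = f1_score(y_true, y_pred, average="macro")  # unweighted mean of per-class F1
f1_micro = f1_score(y_true, y_pred, average="micro")  # from global TP/FP/FN counts

# micro-F1 == accuracy for single-label multiclass problems.
print(accuracy, f1_macro, f1_micro)
```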
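
## How to use (sketch)

The card leaves the dataset and intended use undocumented; the model name suggests LEDGAR (contract-provision classification, part of LexGLUE), but that is an inference, not a documented fact. The sketch below further assumes the checkpoint was saved with a sequence-classification head (`T5ForSequenceClassification`, supported since Transformers 4.31), which the accuracy/F1 metrics make plausible; if the model was instead fine-tuned text-to-text, decoding labels with `T5ForConditionalGeneration` would be needed instead. The repo id and example clause are placeholders.

```python
# Hedged inference sketch; assumes a sequence-classification head.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

repo_id = "t5_base_ledgar"  # placeholder: replace with the actual Hub id
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForSequenceClassification.from_pretrained(repo_id)

clause = "This Agreement shall be governed by the laws of the State of New York."
inputs = tokenizer(clause, return_tensors="pt", truncation=True)

with torch.no_grad():
    logits = model(**inputs).logits

pred_id = logits.argmax(dim=-1).item()
print(model.config.id2label.get(pred_id, pred_id))  # label name if stored in the config
```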