---
license: apache-2.0
base_model: google-t5/t5-small
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: t5_small_ledgar
  results: []
---

# t5_small_ledgar

This model is a fine-tuned version of [google-t5/t5-small](https://huggingface.co/google-t5/t5-small) on an unknown dataset (the model name suggests LEDGAR, a legal-provision classification benchmark).
It achieves the following results on the evaluation set:
- Loss: 0.5465
- Accuracy: 0.8527
- F1 Macro: 0.7698
- F1 Micro: 0.8527

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- total_train_batch_size: 64
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 Macro | F1 Micro |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:--------:|:--------:|
| 2.3898 | 0.11 | 100 | 1.8531 | 0.6083 | 0.3305 | 0.6083 |
| 1.1887 | 0.21 | 200 | 1.0730 | 0.7307 | 0.5340 | 0.7307 |
| 0.9460 | 0.32 | 300 | 0.8826 | 0.7700 | 0.6068 | 0.7700 |
| 0.8383 | 0.43 | 400 | 0.8016 | 0.7851 | 0.6351 | 0.7851 |
| 0.8559 | 0.53 | 500 | 0.7437 | 0.8011 | 0.6747 | 0.8011 |
| 0.7944 | 0.64 | 600 | 0.7068 | 0.8091 | 0.6933 | 0.8091 |
| 0.7151 | 0.75 | 700 | 0.6853 | 0.8191 | 0.6983 | 0.8191 |
| 0.7077 | 0.85 | 800 | 0.6666 | 0.8187 | 0.7120 | 0.8187 |
| 0.6645 | 0.96 | 900 | 0.6476 | 0.8196 | 0.7211 | 0.8196 |
| 0.5918 | 1.07 | 1000 | 0.6469 | 0.8297 | 0.7262 | 0.8297 |
| 0.5866 | 1.17 | 1100 | 0.6309 | 0.8288 | 0.7286 | 0.8288 |
| 0.6665 | 1.28 | 1200 | 0.6188 | 0.8363 | 0.7473 | 0.8363 |
| 0.5684 | 1.39 | 1300 | 0.6118 | 0.8370 | 0.7456 | 0.8370 |
| 0.4986 | 1.49 | 1400 | 0.6117 | 0.8374 | 0.7520 | 0.8374 |
| 0.5786 | 1.60 | 1500 | 0.6104 | 0.8363 | 0.7462 | 0.8363 |
| 0.5956 | 1.71 | 1600 | 0.5965 | 0.8365 | 0.7455 | 0.8365 |
| 0.5653 | 1.81 | 1700 | 0.5817 | 0.8425 | 0.7588 | 0.8425 |
| 0.5292 | 1.92 | 1800 | 0.5732 | 0.8420 | 0.7516 | 0.8420 |
| 0.4674 | 2.03 | 1900 | 0.5670 | 0.8456 | 0.7544 | 0.8456 |
| 0.4520 | 2.13 | 2000 | 0.5686 | 0.8470 | 0.7615 | 0.8470 |
| 0.4827 | 2.24 | 2100 | 0.5636 | 0.8461 | 0.7716 | 0.8461 |
| 0.4617 | 2.35 | 2200 | 0.5611 | 0.8491 | 0.7613 | 0.8491 |
| 0.4508 | 2.45 | 2300 | 0.5594 | 0.8499 | 0.7610 | 0.8499 |
| 0.4320 | 2.56 | 2400 | 0.5532 | 0.8500 | 0.7654 | 0.8500 |
| 0.4298 | 2.67 | 2500 | 0.5521 | 0.8503 | 0.7666 | 0.8503 |
| 0.4627 | 2.77 | 2600 | 0.5511 | 0.8500 | 0.7661 | 0.8500 |
| 0.4353 | 2.88 | 2700 | 0.5466 | 0.8532 | 0.7706 | 0.8532 |
| 0.4371 | 2.99 | 2800 | 0.5465 | 0.8527 | 0.7698 | 0.8527 |

### Framework versions

- Transformers 4.39.0.dev0
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
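
## How to use

A minimal inference sketch is shown below. It assumes the checkpoint carries a sequence-classification head (consistent with the accuracy/F1 metrics reported above) and uses `t5_small_ledgar` as a placeholder repository id; adjust both if the model was trained text-to-text or is published under a different id.

```python
# Minimal inference sketch, assuming a sequence-classification head.
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

model_id = "t5_small_ledgar"  # placeholder; replace with the actual Hub repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

clause = "The parties shall keep the terms of this Agreement strictly confidential."
inputs = tokenizer(clause, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits

# Map the highest-scoring class id back to its label name.
print(model.config.id2label[logits.argmax(dim=-1).item()])
```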
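
## Training setup sketch

The hyperparameters listed above map onto `transformers.TrainingArguments` roughly as follows. This is a hypothetical sketch, not the actual training script: the output directory and the 100-step evaluation cadence are assumptions (the latter inferred from the results table), and the two-GPU setup would come from the launcher (e.g. `torchrun --nproc_per_node=2`) rather than from these arguments.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="t5_small_ledgar",    # assumed output directory
    learning_rate=5e-4,
    per_device_train_batch_size=32,  # x 2 GPUs -> total train batch size 64
    per_device_eval_batch_size=32,   # x 2 GPUs -> total eval batch size 64
    num_train_epochs=3.0,
    seed=42,
    lr_scheduler_type="linear",
    adam_beta1=0.9,                  # Adam betas=(0.9, 0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    evaluation_strategy="steps",     # inferred from the 100-step eval rows above
    eval_steps=100,
)
```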
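
## Notes on the reported metrics

For single-label classification, micro-averaged F1 equals accuracy, which is why the Accuracy and F1 Micro columns coincide in every row of the results table. The sketch below shows how the three metrics can be computed with scikit-learn; `labels` and `preds` are hypothetical placeholders for the evaluation-set ground truth and model predictions.

```python
from sklearn.metrics import accuracy_score, f1_score

labels = [0, 2, 1, 1]  # placeholder ground-truth class ids
preds = [0, 2, 1, 2]   # placeholder predicted class ids

print("accuracy:", accuracy_score(labels, preds))
print("f1 macro:", f1_score(labels, preds, average="macro"))
print("f1 micro:", f1_score(labels, preds, average="micro"))  # equals accuracy here
```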