---
license: apache-2.0
base_model: google/flan-t5-base
tags:
- generated_from_trainer
metrics:
- f1
- accuracy
model-index:
- name: t5_cause_classifier
  results: []
---

# t5_cause_classifier

This model is a fine-tuned version of [google/flan-t5-base](https://huggingface.co/google/flan-t5-base) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2104
- F1: 0.8112
- Accuracy: 0.3196

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a hedged code sketch of this configuration appears at the end of this card):
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20

### Training results

| Training Loss | Epoch | Step | Validation Loss | F1     | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:------:|:--------:|
| No log        | 1.0   | 124  | 0.3976          | 0.5740 | 0.0      |
| No log        | 2.0   | 248  | 0.2948          | 0.7116 | 0.0907   |
| No log        | 3.0   | 372  | 0.2657          | 0.7396 | 0.1200   |
| No log        | 4.0   | 496  | 0.2514          | 0.7599 | 0.1724   |
| 0.3526        | 5.0   | 620  | 0.2406          | 0.7706 | 0.2006   |
| 0.3526        | 6.0   | 744  | 0.2318          | 0.7809 | 0.2147   |
| 0.3526        | 7.0   | 868  | 0.2267          | 0.7910 | 0.2591   |
| 0.3526        | 8.0   | 992  | 0.2227          | 0.7949 | 0.2742   |
| 0.2263        | 9.0   | 1116 | 0.2178          | 0.7993 | 0.2772   |
| 0.2263        | 10.0  | 1240 | 0.2161          | 0.8028 | 0.2923   |
| 0.2263        | 11.0  | 1364 | 0.2158          | 0.8024 | 0.2873   |
| 0.2263        | 12.0  | 1488 | 0.2144          | 0.8033 | 0.2984   |
| 0.2005        | 13.0  | 1612 | 0.2125          | 0.8074 | 0.3054   |
| 0.2005        | 14.0  | 1736 | 0.2118          | 0.8076 | 0.3206   |
| 0.2005        | 15.0  | 1860 | 0.2120          | 0.8093 | 0.3095   |
| 0.2005        | 16.0  | 1984 | 0.2122          | 0.8088 | 0.3196   |
| 0.1877        | 17.0  | 2108 | 0.2109          | 0.8096 | 0.3155   |
| 0.1877        | 18.0  | 2232 | 0.2106          | 0.8103 | 0.3135   |
| 0.1877        | 19.0  | 2356 | 0.2109          | 0.8099 | 0.3155   |
| 0.1877        | 20.0  | 2480 | 0.2104          | 0.8112 | 0.3196   |

### Framework versions

- Transformers 4.41.1
- PyTorch 1.13.1+cu117
- Datasets 2.19.1
- Tokenizers 0.19.1
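
The original training script is not included in this card. As a rough guide only, the hyperparameters listed above correspond to a `TrainingArguments` setup along the following lines; the `output_dir`, the per-epoch evaluation strategy, and the single-device batch-size reading are assumptions, not documented facts.

```python
# Hedged sketch of a Trainer configuration matching the hyperparameters listed
# in "Training hyperparameters" above. Dataset loading, preprocessing, and the
# metric functions are not documented in this card, so only the arguments are shown.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="t5_cause_classifier",  # assumed output directory name
    learning_rate=2e-5,
    per_device_train_batch_size=32,    # "train_batch_size: 32", assuming a single device
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=20,
    eval_strategy="epoch",             # the results table shows one evaluation per epoch
    # The Adam betas=(0.9, 0.999) and epsilon=1e-08 noted above match the Trainer's
    # default AdamW settings, so no optimizer arguments need to be overridden.
)
```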
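
Because the card does not document the task format or label set, the snippet below is only a minimal inference sketch. It assumes the checkpoint is used in the usual text2text fashion, generating a label string for an input sentence; the `model_id` and the example input are placeholders rather than values taken from this model's training data.

```python
# Minimal inference sketch (assumed text2text usage; prompt format not documented).
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "t5_cause_classifier"  # placeholder: local path or Hub repo id of this checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

text = "The flight was delayed because of heavy fog."  # hypothetical input
inputs = tokenizer(text, return_tensors="pt", truncation=True)
outputs = model.generate(**inputs, max_new_tokens=16)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

The actual output depends on the prompt template and label vocabulary used at fine-tuning time, which are not described in this card.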