# sum_model_0318_20epochs
Note: this checkpoint was trained for only 5 epochs! It corresponds to epochs 11 through 15 of the T5 model whose embedding and LM head I resized to a 45,000-token vocabulary.
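As a rough illustration of that resize, here is a minimal sketch using the transformers API. The base checkpoint name is taken from this card; how the resize was actually performed is not documented here, so treat this as an assumption:

```python
from transformers import AutoModelForSeq2SeqLM

# Load the base checkpoint named in this card.
model = AutoModelForSeq2SeqLM.from_pretrained("weny22/sum_model_0318")

# Resize the vocabulary to 45,000 entries, as mentioned in the note above.
# For T5, the input embeddings and the tied lm_head are resized together.
model.resize_token_embeddings(45000)
print(model.get_input_embeddings().weight.shape)  # torch.Size([45000, d_model])
```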
This model is a fine-tuned version of weny22/sum_model_0318 on an unspecified dataset. It achieves the following results on the evaluation set (a ROUGE reproduction sketch follows the list):
- Loss: 2.1007
- Rouge1: 0.1995
- Rouge2: 0.0701
- Rougel: 0.1598
- Rougelsum: 0.1598
- Gen Len: 18.946
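The card does not say how these ROUGE scores were computed; a common way to reproduce the same metric names is the evaluate library, sketched below with placeholder strings:

```python
import evaluate

# Placeholder prediction/reference pairs; substitute real model outputs
# and reference summaries from the evaluation set.
predictions = ["a generated summary of the document"]
references = ["the reference summary of the document"]

rouge = evaluate.load("rouge")
scores = rouge.compute(predictions=predictions, references=references)
print(scores)  # dict with rouge1, rouge2, rougeL, rougeLsum
```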
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a sketch of the corresponding Seq2SeqTrainingArguments follows the list):
- learning_rate: 6e-05
- train_batch_size: 20
- eval_batch_size: 20
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
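As a reproducibility aid, the list above maps onto Seq2SeqTrainingArguments roughly as follows; output_dir, predict_with_generate, and evaluation_strategy are assumptions not stated in this card:

```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="sum_model_0318_20epochs",  # assumed name, not stated in the card
    learning_rate=6e-5,
    per_device_train_batch_size=20,
    per_device_eval_batch_size=20,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=5,
    # Adam betas=(0.9, 0.999) and epsilon=1e-08 are the transformers defaults,
    # so they need no explicit arguments here.
    predict_with_generate=True,   # assumed: needed to compute ROUGE at eval time
    evaluation_strategy="epoch",  # assumed from the per-epoch results below
)
```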
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|---|---|---|---|---|---|---|---|---|
| 2.6348 | 1.0 | 1071 | 2.1654 | 0.1935 | 0.0663 | 0.156 | 0.156 | 18.9213 |
| 2.5474 | 2.0 | 2142 | 2.1300 | 0.1957 | 0.0671 | 0.1572 | 0.1572 | 18.918 |
| 2.5049 | 3.0 | 3213 | 2.1041 | 0.2004 | 0.0697 | 0.1601 | 0.1602 | 18.9313 |
| 2.4683 | 4.0 | 4284 | 2.0994 | 0.1992 | 0.0695 | 0.1594 | 0.1595 | 18.94 |
| 2.4529 | 5.0 | 5355 | 2.1007 | 0.1995 | 0.0701 | 0.1598 | 0.1598 | 18.946 |
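For completeness, a quick inference sketch. The repo id weny22/sum_model_0318_20epochs is an assumption inferred from the base model's namespace and this card's title; adjust it if the checkpoint lives elsewhere:

```python
from transformers import pipeline

summarizer = pipeline(
    "summarization",
    model="weny22/sum_model_0318_20epochs",  # assumed repo id
)

article = "Long input text to summarize ..."
print(summarizer(article)[0]["summary_text"])
```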
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2