# longt5_xl_gov_memsum_25
This model was trained from scratch on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 2.3918
## Model description
More information needed
## Intended uses & limitations
More information needed
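While the card does not yet document intended uses, the name indicates a LongT5-XL seq2seq checkpoint, which would load through the standard `transformers` API. The snippet below is a minimal sketch only: the repo id `longt5_xl_gov_memsum_25` is used as a hypothetical placeholder, and the 16,384-token input length and generation settings are assumptions, not values confirmed by this card.

```python
# Minimal usage sketch. Assumptions: the checkpoint is available under
# "longt5_xl_gov_memsum_25" (hypothetical identifier) and follows the
# standard LongT5 seq2seq interface in transformers.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "longt5_xl_gov_memsum_25"  # hypothetical; replace with the actual repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

document = "..."  # a long input document

# 16384 is an assumed input length; LongT5 variants are commonly configured
# for long contexts, but the actual limit depends on the model config.
inputs = tokenizer(document, return_tensors="pt", truncation=True, max_length=16384)
summary_ids = model.generate(**inputs, max_new_tokens=512, num_beams=4)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```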
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 32
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- num_epochs: 15.0
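For reference, the hyperparameters above map onto `transformers` `Seq2SeqTrainingArguments` roughly as follows. This is an illustrative reconstruction, not the actual training script; the `output_dir` is a placeholder.

```python
# Sketch of the listed hyperparameters as Seq2SeqTrainingArguments.
# A reconstruction for illustration, not the command used for training.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="longt5_xl_gov_memsum_25",  # placeholder output path
    learning_rate=1e-3,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=32,  # 8 x 32 = 256 effective batch size
    seed=42,
    lr_scheduler_type="constant",
    num_train_epochs=15.0,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```

Note that the per-device batch size of 8 combined with 32 gradient-accumulation steps yields the reported total train batch size of 256 on a single device.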
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.4372        | 1.0   | 68   | 1.6270          |
| 0.3678        | 1.99  | 136  | 1.8330          |
| 0.3026        | 2.99  | 204  | 1.8467          |
| 0.2785        | 3.99  | 272  | 1.9830          |
| 0.2489        | 5.0   | 341  | 2.1279          |
| 0.181         | 6.0   | 409  | 2.2981          |
| 0.1753        | 6.99  | 477  | 2.3683          |
| 0.1511        | 7.99  | 545  | 2.3130          |
| 0.1483        | 8.99  | 613  | 2.5342          |
| 0.2277        | 10.0  | 682  | 2.3054          |
| 0.1952        | 10.99 | 750  | 2.2331          |
| 0.1773        | 11.99 | 818  | 2.1944          |
| 0.1524        | 12.99 | 886  | 2.3607          |
| 0.1373        | 14.0  | 955  | 2.3946          |
| 0.1238        | 14.95 | 1020 | 2.3918          |
### Framework versions
- Transformers 4.38.1
- Pytorch 2.2.1+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2