# flan-t5-large-extraction-cnndm_8000-all-loss-ep10
This model is a fine-tuned version of google/flan-t5-large on an unspecified dataset. It achieves the following results on the evaluation set:
- Loss: 1.6575
- Hint Hit: 1.7631
- Hint Hit Num: 2.4865
- Hint Precision: 0.453
- Hint Bleu: 4.5231
- Num: 5.3805
- Gen Len: 18.9892
## Model description
More information needed
## Intended uses & limitations
More information needed
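Although the card does not document intended uses, the model name suggests an extraction-style summarization model fine-tuned on CNN/DailyMail data. A minimal inference sketch follows; the Hub repository id, the input format, and the generation length are all assumptions, not confirmed by this card.

```python
# Minimal inference sketch (assumptions: the hub id below is a placeholder,
# and the model consumes CNN/DailyMail-style articles as plain text).
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "flan-t5-large-extraction-cnndm_8000-all-loss-ep10"  # hypothetical hub path

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

article = "(CNN) -- ..."  # a news article to extract from
inputs = tokenizer(article, return_tensors="pt", truncation=True)

# Gen Len is ~19 tokens in the evaluation table, so a short generation
# budget is a plausible setting (an assumption, not documented here).
output_ids = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```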
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a configuration sketch follows the list):
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 96
- seed: 1799
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
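These values map directly onto `Seq2SeqTrainingArguments`. The sketch below is a hedged reconstruction of how such a run could be configured, not the author's actual training script; `output_dir` is a placeholder.

```python
# Hedged reconstruction of the training configuration from the listed
# hyperparameters; output_dir is a placeholder, the rest mirrors the list.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="flan-t5-large-extraction-cnndm_8000-all-loss-ep10",  # placeholder
    learning_rate=2e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=96,
    seed=1799,
    lr_scheduler_type="linear",
    num_train_epochs=10,
    # Adam with betas=(0.9, 0.999) and epsilon=1e-08 (the Trainer defaults)
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```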
### Training results
| Training Loss | Epoch | Step | Validation Loss | Hint Hit | Hint Hit Num | Hint Precision | Hint Bleu | Num | Gen Len |
|---|---|---|---|---|---|---|---|---|---|
| 2.0921 | 0.8 | 200 | 1.7095 | 1.6641 | 2.376 | 0.4415 | 4.2976 | 5.2234 | 18.9901 |
| 1.876 | 1.6 | 400 | 1.6834 | 1.7129 | 2.4296 | 0.4501 | 4.4331 | 5.2965 | 18.9881 |
| 1.8142 | 2.4 | 600 | 1.6666 | 1.7411 | 2.4613 | 0.4519 | 4.4792 | 5.3419 | 18.9868 |
| 1.7704 | 3.2 | 800 | 1.6575 | 1.7631 | 2.4865 | 0.453 | 4.5231 | 5.3805 | 18.9892 |
| 1.7315 | 4.0 | 1000 | 1.6601 | 1.7121 | 2.4268 | 0.4469 | 4.4013 | 5.2852 | 18.9858 |
| 1.6928 | 4.8 | 1200 | 1.6577 | 1.6925 | 2.4148 | 0.4468 | 4.3806 | 5.3039 | 18.9817 |
| 1.6707 | 5.6 | 1400 | 1.6605 | 1.6666 | 2.3931 | 0.444 | 4.3596 | 5.2993 | 18.9846 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.5.1
- Tokenizers 0.12.1