---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- samsum
metrics:
- rouge
model-index:
- name: flan-t5-large-finetuned-samsum
  results:
  - task:
      name: Sequence-to-sequence Language Modeling
      type: text2text-generation
    dataset:
      name: samsum
      type: samsum
      config: samsum
      split: train
      args: samsum
    metrics:
    - name: Rouge1
      type: rouge
      value: 48.8719
widget:
- text: |-
    Sid: Wanna catch a movie?
    Annie: sure what do you have in mind?
    Sid: the Aquaman? :D
    Annie: haha isn't it a bit childish
    Sid: noooooo I mean yes but it's the highest grossing movie this week
    Annie: seriously?
    Sid: yeah?
    Annie: okay let's see what the fuss is all about
---

# flan-t5-large-finetuned-samsum
This model is a fine-tuned version of google/flan-t5-large on the samsum dataset. It achieves the following results on the evaluation set:
- Loss: 1.2099
- Rouge1: 48.8719
- Rouge2: 25.5658
- Rougel: 41.6686
- Rougelsum: 45.2419
- Gen Len: 17.1880
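
This card does not include a usage snippet, so here is a minimal summarization sketch with the 🤗 Transformers `pipeline`. The repo id below is a placeholder matching the model name; prefix it with the actual Hub namespace of this checkpoint.

```python
from transformers import pipeline

# Hypothetical repo id: replace with the actual Hub path of this checkpoint.
summarizer = pipeline("summarization", model="flan-t5-large-finetuned-samsum")

dialogue = """Sid: Wanna catch a movie?
Annie: sure what do you have in mind?
Sid: the Aquaman? :D
Annie: haha isn't it a bit childish
Sid: noooooo I mean yes but it's the highest grossing movie this week
Annie: seriously?
Sid: yeah?
Annie: okay let's see what the fuss is all about"""

# Generates a short summary of the dialogue, as in the widget example above.
print(summarizer(dialogue, max_length=60)[0]["summary_text"])
```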
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
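
Until more detail is added here, a minimal sketch of loading the samsum dataset named in the metadata above, assuming the public Hub copy (the original loading script additionally requires the `py7zr` package):

```python
from datasets import load_dataset

# SAMSum: chat-style dialogues paired with human-written summaries.
samsum = load_dataset("samsum")

print(samsum)              # DatasetDict with train / validation / test splits
print(samsum["train"][0])  # fields include 'id', 'dialogue', 'summary'
```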
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
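
A sketch of how these values map onto `Seq2SeqTrainingArguments`; the actual training script is not shown in this card, so `output_dir`, the per-epoch evaluation strategy, and `predict_with_generate` are assumptions, and the Adam settings listed above are simply the Trainer defaults.

```python
from transformers import Seq2SeqTrainingArguments

# Mirrors the hyperparameters listed above; everything else keeps Trainer defaults,
# including AdamW with betas=(0.9, 0.999) and epsilon=1e-8.
training_args = Seq2SeqTrainingArguments(
    output_dir="flan-t5-large-finetuned-samsum",  # assumed output directory
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=5,
    evaluation_strategy="epoch",   # assumption, consistent with the per-epoch rows below
    predict_with_generate=True,    # assumption, needed for ROUGE / Gen Len metrics
)
```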
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1  | Rouge2  | Rougel  | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 1.1871        | 1.0   | 1842 | 1.2099          | 48.8719 | 25.5658 | 41.6686 | 45.2419   | 17.1880 |
| 1.0344        | 2.0   | 3684 | 1.2168          | 48.9633 | 25.5702 | 41.449  | 45.2238   | 17.3810 |
| 0.9457        | 3.0   | 5526 | 1.2322          | 49.2708 | 25.8481 | 41.9485 | 45.3808   | 17.1392 |
| 0.8706        | 4.0   | 7368 | 1.2459          | 49.4742 | 26.3099 | 42.0051 | 45.4181   | 17.2369 |
| 0.8173        | 5.0   | 9210 | 1.2660          | 49.5398 | 26.1602 | 41.9861 | 45.4851   | 17.3040 |
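
The ROUGE values above are on a 0–100 scale. The exact evaluation script is not included in this card, but a minimal sketch with the `evaluate` library (which also needs the `rouge_score` package, and whose recent versions return fractional f-measures):

```python
import evaluate

rouge = evaluate.load("rouge")

# Illustrative strings only; in practice these are model outputs and gold summaries.
predictions = ["Sid and Annie are going to see Aquaman."]
references = ["Sid and Annie will watch Aquaman together."]

scores = rouge.compute(predictions=predictions, references=references)
# Multiply by 100 to compare with the table above (e.g. Rouge1 = 48.8719).
print({key: round(value * 100, 4) for key, value in scores.items()})
```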
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.0+cu116
- Datasets 2.8.0
- Tokenizers 0.13.2