favsbot_filtersort_using_t5_summarization

This model is a fine-tuned version of t5-small on the filter_sort dataset. It achieves the following results on the evaluation set:

  • Loss: 2.3327
  • Rouge1: 15.7351
  • Rouge2: 0.0
  • Rougel: 13.4803
  • Rougelsum: 13.5134
  • Gen Len: 12.6667
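
Since this is a fine-tuned t5-small, the checkpoint can be loaded with the standard 🤗 Transformers seq2seq API. The sketch below is a minimal, non-authoritative example; the checkpoint path, the "summarize: " task prefix, and the input text are assumptions rather than details documented in this card.

```python
# Minimal inference sketch; the checkpoint path, task prefix, and example input
# are assumptions, not details documented in this card.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

checkpoint = "./favsbot_filtersort_using_t5_summarization"  # hypothetical local path
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint)

text = "summarize: show my favorite places sorted by rating"  # hypothetical input
inputs = tokenizer(text, return_tensors="pt", truncation=True)
output_ids = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```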

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a configuration sketch follows the list):

  • learning_rate: 2e-05
  • train_batch_size: 16
  • eval_batch_size: 16
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 20
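
As a hedged sketch, these values map onto Seq2SeqTrainingArguments roughly as shown below; output_dir, evaluation_strategy, and predict_with_generate are assumptions that the card does not state.

```python
# Hedged reconstruction of the listed hyperparameters as Seq2SeqTrainingArguments.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="favsbot_filtersort_using_t5_summarization",  # hypothetical output directory
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    num_train_epochs=20,
    lr_scheduler_type="linear",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    evaluation_strategy="epoch",  # assumption: the results table reports metrics once per epoch
    predict_with_generate=True,   # assumption: needed to compute ROUGE during evaluation
)
```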

Training results

| Training Loss | Epoch | Step | Validation Loss | Rouge1  | Rouge2 | Rougel  | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:|
| No log        | 1.0   | 5    | 3.8161          | 14.754  | 0.0    | 12.6197 | 12.6426   | 10.5    |
| 4.8789        | 2.0   | 10   | 3.6423          | 14.754  | 0.0    | 12.6197 | 12.6426   | 10.5    |
| 4.8789        | 3.0   | 15   | 3.4687          | 14.754  | 0.0    | 12.6197 | 12.6426   | 10.5    |
| 4.5407        | 4.0   | 20   | 3.3086          | 14.754  | 0.0    | 12.6197 | 12.6426   | 10.5    |
| 4.5407        | 5.0   | 25   | 3.1726          | 14.754  | 0.0    | 12.6197 | 12.6426   | 10.5    |
| 4.2216        | 6.0   | 30   | 3.0464          | 15.7792 | 0.0    | 13.5134 | 13.5411   | 12.6667 |
| 4.2216        | 7.0   | 35   | 2.9326          | 15.7792 | 0.0    | 13.5134 | 13.5411   | 12.6667 |
| 4.0021        | 8.0   | 40   | 2.8305          | 15.7792 | 0.0    | 13.5134 | 13.5411   | 12.6667 |
| 4.0021        | 9.0   | 45   | 2.7386          | 15.7792 | 0.0    | 13.5134 | 13.5411   | 12.6667 |
| 3.7634        | 10.0  | 50   | 2.6588          | 15.7792 | 0.0    | 13.5134 | 13.5411   | 12.6667 |
| 3.7634        | 11.0  | 55   | 2.5916          | 15.7792 | 0.0    | 13.5134 | 13.5411   | 12.6667 |
| 3.6224        | 12.0  | 60   | 2.5358          | 15.7792 | 0.0    | 13.5134 | 13.5411   | 12.6667 |
| 3.6224        | 13.0  | 65   | 2.4895          | 15.7792 | 0.0    | 13.5134 | 13.5411   | 12.6667 |
| 3.496         | 14.0  | 70   | 2.4486          | 15.7792 | 0.0    | 13.5134 | 13.5411   | 12.6667 |
| 3.496         | 15.0  | 75   | 2.4140          | 15.7792 | 0.0    | 13.5134 | 13.5411   | 12.6667 |
| 3.4157        | 16.0  | 80   | 2.3857          | 15.7351 | 0.0    | 13.4803 | 13.5134   | 12.6667 |
| 3.4157        | 17.0  | 85   | 2.3622          | 15.7351 | 0.0    | 13.4803 | 13.5134   | 12.6667 |
| 3.3964        | 18.0  | 90   | 2.3455          | 15.7351 | 0.0    | 13.4803 | 13.5134   | 12.6667 |
| 3.3964        | 19.0  | 95   | 2.3361          | 15.7351 | 0.0    | 13.4803 | 13.5134   | 12.6667 |
| 3.3502        | 20.0  | 100  | 2.3327          | 15.7351 | 0.0    | 13.4803 | 13.5134   | 12.6667 |
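
The ROUGE columns above are typically produced with the 🤗 Evaluate library (which needs the rouge_score package). The sketch below only illustrates that computation; the prediction and reference strings are made-up placeholders, not samples from the filter_sort dataset.

```python
# Hedged illustration of computing ROUGE scores; the strings are placeholders.
import evaluate

rouge = evaluate.load("rouge")
predictions = ["sort favorites by rating"]            # hypothetical model output
references = ["sort my favorites by their rating"]    # hypothetical reference summary
print(rouge.compute(predictions=predictions, references=references))
```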

Framework versions

  • Transformers 4.21.1
  • Pytorch 1.12.1
  • Datasets 2.4.0
  • Tokenizers 0.12.1