
t5_recommendation_sports_equipment_english

This model is a fine-tuned version of t5-large; the fine-tuning dataset is not specified in the auto-generated card. It achieves the following results on the evaluation set:

  • Loss: 0.4517
  • Rouge1: 56.9841
  • Rouge2: 47.6190
  • Rougel: 57.4603
  • Rougelsum: 57.1429
  • Gen Len: 3.9048
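
The Rouge figures above are F1 overlap scores scaled to 0–100, and Gen Len is the average generated length in tokens. As a reminder of what Rouge1 measures, here is a minimal unigram-overlap sketch; the card's actual numbers come from the standard ROUGE implementation, not from this toy function:

```python
from collections import Counter

def rouge1_f1(prediction: str, reference: str) -> float:
    """Minimal ROUGE-1 F1: unigram overlap between prediction and reference.
    Illustrative only; ignores stemming and other refinements of real ROUGE."""
    pred_tokens = Counter(prediction.lower().split())
    ref_tokens = Counter(reference.lower().split())
    overlap = sum((pred_tokens & ref_tokens).values())  # shared unigram count
    if overlap == 0:
        return 0.0
    precision = overlap / sum(pred_tokens.values())
    recall = overlap / sum(ref_tokens.values())
    return 2 * precision * recall / (precision + recall)

print(round(rouge1_f1("tennis racket", "tennis racket and balls") * 100, 2))  # → 66.67
```

With short label-like outputs (Gen Len ≈ 4 tokens), Rouge1 behaves close to a soft exact-match metric.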

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 0.0001
  • train_batch_size: 4
  • eval_batch_size: 4
  • seed: 42
  • gradient_accumulation_steps: 4
  • total_train_batch_size: 16
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 10
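
Assuming a standard 🤗 Transformers Seq2Seq fine-tuning setup (the actual training script is not provided with this card), the hyperparameters above would map onto a `Seq2SeqTrainingArguments` configuration roughly like this; the `output_dir` is an assumption, and the Adam betas/epsilon listed above are the library defaults:

```python
from transformers import Seq2SeqTrainingArguments

# Hedged sketch: only the numeric values come from this card;
# output_dir and anything not listed above are assumptions.
training_args = Seq2SeqTrainingArguments(
    output_dir="t5_recommendation_sports_equipment_english",  # assumed
    learning_rate=1e-4,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    gradient_accumulation_steps=4,  # effective train batch size: 4 * 4 = 16
    num_train_epochs=10,
    lr_scheduler_type="linear",
    seed=42,
)
```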

Training results

| Training Loss | Epoch | Step | Validation Loss | Rouge1  | Rouge2  | Rougel  | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| No log        | 0.96  | 6    | 6.7882          | 8.8889  | 0.9524  | 8.7668  | 8.7302    | 19.0    |
| No log        | 1.96  | 12   | 2.3412          | 18.0952 | 0.0     | 18.0952 | 18.0952   | 3.2381  |
| No log        | 2.96  | 18   | 0.8550          | 11.9048 | 4.7619  | 11.9048 | 11.9048   | 4.0     |
| No log        | 3.96  | 24   | 0.7481          | 32.3810 | 4.7619  | 32.0635 | 32.0635   | 3.9048  |
| No log        | 4.96  | 30   | 0.7208          | 21.2698 | 4.7619  | 20.9524 | 20.6349   | 3.6190  |
| No log        | 5.96  | 36   | 0.6293          | 31.7460 | 23.8095 | 31.7460 | 30.9524   | 3.6667  |
| No log        | 6.96  | 42   | 0.6203          | 42.8571 | 33.3333 | 43.6508 | 42.8571   | 3.9048  |
| No log        | 7.96  | 48   | 0.6352          | 47.6190 | 33.3333 | 47.6190 | 47.6190   | 3.8095  |
| No log        | 8.96  | 54   | 0.5334          | 52.6190 | 42.8571 | 53.0952 | 52.6984   | 3.9524  |
| No log        | 9.96  | 60   | 0.4517          | 56.9841 | 47.6190 | 57.4603 | 57.1429   | 3.9048  |

("No log" means the training loss was not recorded, since fewer steps elapsed between evaluations than the logging interval.)

Framework versions

  • Transformers 4.26.0
  • Pytorch 2.0.1+cu118
  • Datasets 2.8.0
  • Tokenizers 0.13.3