
t5-small-canadaWildfireKP

This model is a fine-tuned version of t5-small on an unknown dataset. It achieves the following results on the evaluation set:

  • Loss: 0.9108
  • Rouge1: 45.9651
  • Rouge2: 39.4386
  • Rougel: 45.9311
  • Rougelsum: 45.9452
  • Gen Len: 8.7484

Model description

More information needed

Intended uses & limitations

More information needed
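
No usage notes are documented. As a minimal, hedged sketch, the checkpoint can be loaded like any other T5 seq2seq model from the Hub; the input text below is a made-up placeholder, not from the (undocumented) training data:

```python
# Sketch only: load the checkpoint and generate a short output.
# Assumes the model is used as a standard seq2seq checkpoint;
# the example input is a placeholder.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "rizvi-rahil786/t5-small-canadaWildfireKP"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

text = "Wildfire smoke from Alberta forced evacuations and closed highways this week."
inputs = tokenizer(text, return_tensors="pt", truncation=True)
# The reported Gen Len is ~8.7 tokens, so a small generation budget is enough.
outputs = model.generate(**inputs, max_new_tokens=16, num_beams=4)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```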

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 5e-05
  • train_batch_size: 4
  • eval_batch_size: 4
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 8
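
For reference, a sketch of how these settings map onto `Seq2SeqTrainingArguments`. The output directory, dataset, and preprocessing are assumptions and are not documented in this card; the optimizer line above corresponds to the Trainer's default AdamW settings.

```python
# Sketch only: mirrors the listed hyperparameters; data loading and
# preprocessing are assumed, not taken from this card.
from transformers import (
    AutoModelForSeq2SeqLM,
    AutoTokenizer,
    DataCollatorForSeq2Seq,
    Seq2SeqTrainer,
    Seq2SeqTrainingArguments,
)

model = AutoModelForSeq2SeqLM.from_pretrained("google-t5/t5-small")
tokenizer = AutoTokenizer.from_pretrained("google-t5/t5-small")

args = Seq2SeqTrainingArguments(
    output_dir="t5-small-canadaWildfireKP",  # assumed output directory
    learning_rate=5e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=42,
    num_train_epochs=8,
    lr_scheduler_type="linear",
    # Optimizer defaults to AdamW with betas=(0.9, 0.999), eps=1e-8.
    evaluation_strategy="epoch",
    predict_with_generate=True,
)

# trainer = Seq2SeqTrainer(
#     model=model,
#     args=args,
#     train_dataset=...,  # dataset is undocumented
#     eval_dataset=...,
#     tokenizer=tokenizer,
#     data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
# )
# trainer.train()
```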

Training results

| Training Loss | Epoch | Step  | Validation Loss | Rouge1  | Rouge2  | Rougel  | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 1.2141        | 1.0   | 6015  | 1.0340          | 43.8621 | 37.4812 | 43.7904 | 43.7672   | 9.4331  |
| 1.0575        | 2.0   | 12030 | 1.0058          | 46.0317 | 39.5737 | 46.0188 | 46.032    | 9.3363  |
| 0.9392        | 3.0   | 18045 | 0.9552          | 44.3467 | 37.8118 | 44.3349 | 44.3262   | 8.9630  |
| 0.8959        | 4.0   | 24060 | 0.9384          | 45.3347 | 38.7573 | 45.313  | 45.3346   | 8.9434  |
| 0.8197        | 5.0   | 30075 | 0.9164          | 45.3703 | 38.8341 | 45.3714 | 45.3623   | 8.7409  |
| 0.8302        | 6.0   | 36090 | 0.9161          | 45.5709 | 39.1509 | 45.5322 | 45.53     | 8.7904  |
| 0.7883        | 7.0   | 42105 | 0.9108          | 45.9651 | 39.4386 | 45.9311 | 45.9452   | 8.7484  |
| 0.7381        | 8.0   | 48120 | 0.9142          | 45.4087 | 38.9851 | 45.4047 | 45.4117   | 8.6583  |
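
The ROUGE columns are presumably produced by the standard `evaluate` ROUGE metric on decoded predictions during evaluation; a minimal sketch of that computation follows (the prediction and reference strings are placeholders):

```python
# Sketch: ROUGE scoring as typically wired into Seq2SeqTrainer's
# compute_metrics; the strings below are placeholders, not real data.
import evaluate

rouge = evaluate.load("rouge")
predictions = ["alberta wildfire evacuation"]
references = ["wildfire evacuation in alberta"]
scores = rouge.compute(predictions=predictions, references=references, use_stemmer=True)
# Scale to percentages to match the table above (rouge1, rouge2, rougeL, rougeLsum).
print({k: round(v * 100, 4) for k, v in scores.items()})
```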

Framework versions

  • Transformers 4.39.3
  • Pytorch 2.2.1+cu121
  • Datasets 2.18.0
  • Tokenizers 0.15.2

Model tree for rizvi-rahil786/t5-small-canadaWildfireKP

  • Base model: google-t5/t5-small