
nb-mt5-base-finetuned-no-email-summary

This model is a fine-tuned version of NbAiLab/nb-mt5-base on an unspecified dataset (the auto-generated card does not name it). It achieves the following results on the evaluation set:

  • Loss: 4.0562
  • Rouge1: 23.2345
  • Rouge2: 9.8761
  • Rougel: 21.7798
  • Rougelsum: 21.7753
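
For context, Rouge1 scores unigram overlap between the generated and reference summaries, Rouge2 does the same for bigrams, and Rougel/Rougelsum use the longest common subsequence. A minimal pure-Python sketch of a ROUGE-1 F1 score (illustrative only; the scores above come from the standard `rouge_score` implementation, which additionally applies its own tokenization and optional stemming):

```python
from collections import Counter

def rouge1_f1(candidate: str, reference: str) -> float:
    """ROUGE-1 F1: clipped unigram overlap between candidate and reference."""
    cand = Counter(candidate.lower().split())
    ref = Counter(reference.lower().split())
    overlap = sum((cand & ref).values())  # unigram matches, clipped per token
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

print(round(rouge1_f1("the meeting is moved to friday",
                      "the meeting moved to friday"), 4))  # 0.9091
```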

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 5.6e-05
  • train_batch_size: 2
  • eval_batch_size: 2
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 8
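
With lr_scheduler_type linear, the learning rate decays linearly from the initial 5.6e-05 toward zero over the 6,280 total optimizer steps (785 steps/epoch × 8 epochs, per the table below). A small sketch of that schedule, assuming zero warmup steps (the card does not state a warmup value):

```python
def linear_lr(step: int, total_steps: int, base_lr: float = 5.6e-05) -> float:
    """Learning rate at `step` under linear decay from base_lr to 0 (no warmup)."""
    remaining = max(0, total_steps - step)
    return base_lr * remaining / total_steps

total = 785 * 8  # steps per epoch * num_epochs = 6280
print(linear_lr(0, total))      # start of training: 5.6e-05
print(linear_lr(3140, total))   # halfway (end of epoch 4): 2.8e-05
print(linear_lr(6280, total))   # final step: 0.0
```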

Training results

Training Loss  Epoch  Step  Validation Loss  Rouge1   Rouge2  Rougel   Rougelsum
13.949         1.0    785   7.1715           2.2238   0.0000  2.2256   2.2308
7.0892         2.0    1570  5.8052           12.5372  3.2395  12.1514  12.1407
5.9863         3.0    2355  5.0910           18.7517  6.6100  17.5784  17.6163
5.3048         4.0    3140  4.6418           19.8868  7.6496  18.5489  18.5228
4.8419         5.0    3925  4.3614           19.7772  7.7140  18.6827  18.6603
4.5517         6.0    4710  4.1811           22.1143  8.8939  20.6157  20.6484
4.3651         7.0    5495  4.0683           22.7613  9.6087  21.2802  21.2970
4.2712         8.0    6280  4.0562           23.2345  9.8761  21.7798  21.7753
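
The step counts in the table are consistent with the hyperparameters above: 785 optimizer steps per epoch at a train batch size of 2 implies roughly 1,570 training examples, assuming no gradient accumulation (the card does not mention any):

```python
steps_per_epoch = 785   # from the table: step 785 at epoch 1.0
train_batch_size = 2    # from the hyperparameters
num_epochs = 8

approx_train_examples = steps_per_epoch * train_batch_size
total_steps = steps_per_epoch * num_epochs

print(approx_train_examples)  # 1570
print(total_steps)            # 6280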

Framework versions

  • Transformers 4.28.1
  • Pytorch 1.13.1+cu117
  • Datasets 2.11.0
  • Tokenizers 0.13.3