
flan-t5-base-text_summarization_data_6_epochs

This model is a fine-tuned version of google/flan-t5-base. It achieves the following results on the evaluation set:

  • Loss: 1.6783
  • Rouge1: 43.5994
  • Rouge2: 20.4446
  • RougeL: 40.1320
  • RougeLsum: 40.1692
  • Gen Len: 14.5837
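
For quick experimentation, the checkpoint can be loaded through the transformers summarization pipeline. This is a minimal sketch rather than the exact inference setup from the project notebook; the article text and generation length below are placeholders:

```python
# Minimal inference sketch; the article text and max_new_tokens value
# are placeholders, not settings from the original project.
from transformers import pipeline

summarizer = pipeline(
    "summarization",
    model="DunnBC22/flan-t5-base-text_summarization_data_6_epochs",
)

article = "Replace this with the text you want to summarize."
print(summarizer(article, max_new_tokens=60)[0]["summary_text"])
```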

Model description

This is a text summarization model.

For details on how it was created, see the project notebook: https://github.com/DunnBC22/NLP_Projects/blob/main/Text%20Summarization/Text-Summarized%20Data%20-%20Comparison/Flan-T5%20-%20Text%20Summarization%20-%206%20Epochs.ipynb

Intended uses & limitations

This model is a portfolio project intended to demonstrate my ability to fine-tune a pretrained sequence-to-sequence model for text summarization.

Training and evaluation data

Dataset Source: https://www.kaggle.com/datasets/cuitengfeui/textsummarization-data
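
To inspect the data before training, the Kaggle CSV can be loaded with pandas. This is a hedged sketch: the filename and column layout are assumptions, so check the downloaded files before relying on specific column names.

```python
# Hedged loading sketch; the filename is an assumption, so use
# whatever file the Kaggle download actually contains.
import pandas as pd

df = pd.read_csv("text_summarization_data.csv")  # hypothetical filename
print(df.columns)  # inspect the real column names before preprocessing
print(df.head())
```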

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a configuration sketch follows the list):

  • learning_rate: 2e-05
  • train_batch_size: 16
  • eval_batch_size: 16
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 6
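
As a rough illustration, here is how the listed hyperparameters map onto transformers' Seq2SeqTrainingArguments. The output_dir, evaluation_strategy, and predict_with_generate values are assumptions inferred from the per-epoch results below, not confirmed settings from the notebook:

```python
# Sketch mapping the listed hyperparameters onto Seq2SeqTrainingArguments.
# output_dir, evaluation_strategy, and predict_with_generate are assumptions.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="flan-t5-base-text_summarization_data_6_epochs",  # assumed
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    num_train_epochs=6,
    lr_scheduler_type="linear",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    evaluation_strategy="epoch",   # assumed from the per-epoch results table
    predict_with_generate=True,    # assumed; needed for ROUGE/Gen Len eval
)
```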

Training results

| Training Loss | Epoch | Step | Validation Loss | Rouge1  | Rouge2  | RougeL  | RougeLsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 2.0079        | 1.0   | 1174 | 1.7150          | 43.4218 | 19.8984 | 40.0059 | 40.0582   | 14.5011 |
| 1.9122        | 2.0   | 2348 | 1.7020          | 44.0374 | 20.5756 | 40.5915 | 40.5683   | 14.6170 |
| 1.8588        | 3.0   | 3522 | 1.6881          | 43.9498 | 20.5633 | 40.4656 | 40.5116   | 14.4528 |
| 1.8243        | 4.0   | 4696 | 1.6812          | 43.6024 | 20.4845 | 40.1784 | 40.2211   | 14.5075 |
| 1.7996        | 5.0   | 5870 | 1.6780          | 43.6652 | 20.5530 | 40.2209 | 40.2651   | 14.5236 |
| 1.7876        | 6.0   | 7044 | 1.6783          | 43.5994 | 20.4446 | 40.1320 | 40.1692   | 14.5837 |
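
The ROUGE columns above are typically produced with the `evaluate` library's rouge metric during evaluation. A minimal sketch with toy strings (the exact decoding and post-processing in the notebook may differ):

```python
# Minimal ROUGE sketch with toy strings; the notebook's exact
# decoding and post-processing steps may differ.
import evaluate

rouge = evaluate.load("rouge")
scores = rouge.compute(
    predictions=["the cat sat on the mat"],
    references=["a cat was sitting on the mat"],
)
print(scores)  # keys: rouge1, rouge2, rougeL, rougeLsum
```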

Framework versions

  • Transformers 4.26.1
  • Pytorch 1.13.1+cu116
  • Datasets 2.10.1
  • Tokenizers 0.13.2
