summary_about_me

This model is a PEFT adapter fine-tuned from d0rj/rut5-base-summ on an unknown dataset. It achieves the following results on the evaluation set:

  • Loss: 0.9918
  • Rouge1: 0.9677
  • Rouge2: 0.8966
  • Rougel: 0.9677
  • Rougelsum: 0.9677
  • Gen Len: 79.0
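
Because the framework list below includes PEFT, the weights in this repo are an adapter rather than a full model. A minimal inference sketch, assuming the base model d0rj/rut5-base-summ and the adapter repo id davidataka/summary_about_me (treat the repo id as an assumption if you adapt this elsewhere):

```python
# Minimal inference sketch: load the base model, then attach this PEFT adapter.
import torch
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
from peft import PeftModel

base_id = "d0rj/rut5-base-summ"
adapter_id = "davidataka/summary_about_me"  # assumed adapter repo id

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForSeq2SeqLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(base_model, adapter_id)  # attach adapter weights
model.eval()

text = "..."  # Russian text to summarize
inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)
with torch.no_grad():
    summary_ids = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```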

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 2e-05
  • train_batch_size: 1
  • eval_batch_size: 1
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 25
  • mixed_precision_training: Native AMP
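
For reference, these values map onto transformers' Seq2SeqTrainingArguments roughly as sketched below; the actual training script is not published, so anything not listed above (output directory, evaluation strategy) is a placeholder assumption:

```python
from transformers import Seq2SeqTrainingArguments

# Sketch of a configuration matching the hyperparameters above.
# Adam betas=(0.9, 0.999) and epsilon=1e-08 are the library defaults.
training_args = Seq2SeqTrainingArguments(
    output_dir="summary_about_me",       # placeholder, not the author's path
    learning_rate=2e-5,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=1,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=25,
    fp16=True,                           # "Native AMP" mixed precision
    predict_with_generate=True,          # needed for the ROUGE / Gen Len metrics
    evaluation_strategy="epoch",         # assumed, based on the per-epoch table below
)
```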

Training results

| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| No log        | 1.0   | 50   | 1.3458          | 0.0    | 0.0    | 0.0    | 0.0       | 20.0    |
| No log        | 2.0   | 100  | 1.3283          | 0.0    | 0.0    | 0.0    | 0.0       | 20.0    |
| No log        | 3.0   | 150  | 1.3000          | 0.0    | 0.0    | 0.0    | 0.0       | 17.0    |
| No log        | 4.0   | 200  | 1.2688          | 0.0    | 0.0    | 0.0    | 0.0       | 17.0    |
| No log        | 5.0   | 250  | 1.2354          | 0.0    | 0.0    | 0.0    | 0.0       | 17.0    |
| No log        | 6.0   | 300  | 1.2041          | 0.0    | 0.0    | 0.0    | 0.0       | 20.0    |
| No log        | 7.0   | 350  | 1.1791          | 0.0    | 0.0    | 0.0    | 0.0       | 10.0    |
| No log        | 8.0   | 400  | 1.1403          | 0.0    | 0.0    | 0.0    | 0.0       | 17.0    |
| No log        | 9.0   | 450  | 1.1153          | 0.0    | 0.0    | 0.0    | 0.0       | 17.0    |
| 2.0999        | 10.0  | 500  | 1.0938          | 0.0    | 0.0    | 0.0    | 0.0       | 17.0    |
| 2.0999        | 11.0  | 550  | 1.0813          | 0.0    | 0.0    | 0.0    | 0.0       | 17.0    |
| 2.0999        | 12.0  | 600  | 1.0607          | 0.1176 | 0.0    | 0.1176 | 0.1176    | 35.0    |
| 2.0999        | 13.0  | 650  | 1.0508          | 0.9333 | 0.8571 | 0.9333 | 0.9333    | 44.0    |
| 2.0999        | 14.0  | 700  | 1.0386          | 0.9333 | 0.8571 | 0.9333 | 0.9333    | 44.0    |
| 2.0999        | 15.0  | 750  | 1.0293          | 0.9333 | 0.8571 | 0.9333 | 0.9333    | 44.0    |
| 2.0999        | 16.0  | 800  | 1.0210          | 0.9333 | 0.8571 | 0.9333 | 0.9333    | 44.0    |
| 2.0999        | 17.0  | 850  | 1.0151          | 0.9333 | 0.8571 | 0.9333 | 0.9333    | 44.0    |
| 2.0999        | 18.0  | 900  | 1.0084          | 0.0    | 0.0    | 0.0    | 0.0       | 10.0    |
| 2.0999        | 19.0  | 950  | 1.0039          | 0.9677 | 0.8966 | 0.9677 | 0.9677    | 79.0    |
| 1.8806        | 20.0  | 1000 | 0.9999          | 0.9677 | 0.8966 | 0.9677 | 0.9677    | 79.0    |
| 1.8806        | 21.0  | 1050 | 0.9963          | 0.9677 | 0.8966 | 0.9677 | 0.9677    | 79.0    |
| 1.8806        | 22.0  | 1100 | 0.9943          | 0.9677 | 0.8966 | 0.9677 | 0.9677    | 79.0    |
| 1.8806        | 23.0  | 1150 | 0.9932          | 0.9677 | 0.8966 | 0.9677 | 0.9677    | 79.0    |
| 1.8806        | 24.0  | 1200 | 0.9925          | 0.9677 | 0.8966 | 0.9677 | 0.9677    | 79.0    |
| 1.8806        | 25.0  | 1250 | 0.9918          | 0.9677 | 0.8966 | 0.9677 | 0.9677    | 79.0    |
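
The ROUGE and Gen Len columns follow the usual seq2seq evaluation recipe (decode predictions and labels, then score them). The exact metric code is not published, so the compute_metrics sketch below is an assumption based on the standard Hugging Face summarization example:

```python
import numpy as np
import evaluate
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("d0rj/rut5-base-summ")
rouge = evaluate.load("rouge")

def compute_metrics(eval_preds):
    preds, labels = eval_preds
    if isinstance(preds, tuple):
        preds = preds[0]
    # Labels use -100 for ignored positions; swap in the pad token before decoding.
    labels = np.where(labels != -100, labels, tokenizer.pad_token_id)
    decoded_preds = tokenizer.batch_decode(preds, skip_special_tokens=True)
    decoded_labels = tokenizer.batch_decode(labels, skip_special_tokens=True)
    result = rouge.compute(predictions=decoded_preds, references=decoded_labels)
    # Gen Len: mean number of non-pad tokens in the generated sequences.
    result["gen_len"] = float(np.mean(
        [np.count_nonzero(p != tokenizer.pad_token_id) for p in preds]
    ))
    return result
```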

Framework versions

  • PEFT 0.11.1
  • Transformers 4.41.0
  • Pytorch 2.3.0+cu121
  • Datasets 2.19.1
  • Tokenizers 0.19.1