---
base_model: Hasanur525/deed-summarization_version_5
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: deed-summarization_version_10
  results: []
---

# deed-summarization_version_10

This model is a fine-tuned version of `Hasanur525/deed-summarization_version_5` on an unspecified dataset (the dataset name was not recorded by the trainer). It achieves the following results on the evaluation set; a minimal usage sketch follows the metrics list:

- Loss: 0.4150
- Rouge1: 0.3247
- Rouge2: 0.1432
- Rougel: 0.3268
- Rougelsum: 0.3201
- Gen Len: 98.4206
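
A usage example is not included in this card, so here is a minimal inference sketch. It assumes the checkpoint is a standard encoder-decoder (seq2seq) summarization model published under the repo id `Hasanur525/deed-summarization_version_10`; the repo id, generation settings, and input text are illustrative assumptions.

```python
# Minimal inference sketch (assumptions: seq2seq architecture, repo id below).
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_id = "Hasanur525/deed-summarization_version_10"  # assumed Hub repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

deed_text = "..."  # replace with the deed document to summarize
inputs = tokenizer(deed_text, return_tensors="pt", truncation=True)

# Eval Gen Len averages ~98 tokens, so 128 new tokens is a reasonable ceiling.
summary_ids = model.generate(**inputs, max_new_tokens=128, num_beams=4)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```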

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a sketch of the corresponding `Seq2SeqTrainingArguments` follows the list):

- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 5000
- num_epochs: 20
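
The training script itself is not part of this card. As a rough sketch, the hyperparameters above map onto `Seq2SeqTrainingArguments` as shown below; the output directory, evaluation strategy, and `predict_with_generate` flag are assumptions added for illustration, not the author's exact configuration.

```python
# Hedged mapping of the listed hyperparameters to Seq2SeqTrainingArguments.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="deed-summarization_version_10",  # placeholder output path
    learning_rate=2e-05,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=2,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-08,
    lr_scheduler_type="linear",
    warmup_steps=5000,
    num_train_epochs=20,
    evaluation_strategy="epoch",  # assumption: per-epoch eval, matching the results table
    predict_with_generate=True,   # assumption: needed to report ROUGE and Gen Len
)
```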

### Training results

| Training Loss | Epoch | Step  | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| 2.6983        | 1.0   | 529   | 1.8154          | 0.0    | 0.0    | 0.0    | 0.0       | 72.9565 |
| 2.3213        | 2.0   | 1058  | 1.5403          | 0.0    | 0.0    | 0.0    | 0.0       | 82.2401 |
| 1.0315        | 3.0   | 1587  | 1.2686          | 0.0    | 0.0    | 0.0    | 0.0       | 88.1635 |
| 1.7308        | 4.0   | 2116  | 1.0681          | 0.0    | 0.0    | 0.0    | 0.0       | 89.7155 |
| 1.1384        | 5.0   | 2645  | 0.9338          | 0.0    | 0.0    | 0.0    | 0.0       | 93.0586 |
| 1.6608        | 6.0   | 3174  | 0.8329          | 0.0199 | 0.0    | 0.0199 | 0.0199    | 95.5454 |
| 1.8287        | 7.0   | 3703  | 0.7506          | 0.0099 | 0.0    | 0.0099 | 0.0099    | 96.9036 |
| 0.4304        | 8.0   | 4232  | 0.6827          | 0.0742 | 0.036  | 0.0692 | 0.069     | 96.8894 |
| 1.1026        | 9.0   | 4761  | 0.6189          | 0.0888 | 0.0516 | 0.0888 | 0.0859    | 97.5312 |
| 0.8345        | 10.0  | 5290  | 0.5662          | 0.0497 | 0.0189 | 0.0443 | 0.0443    | 96.8025 |
| 0.3368        | 11.0  | 5819  | 0.5291          | 0.0394 | 0.0258 | 0.0398 | 0.0394    | 97.9783 |
| 0.2668        | 12.0  | 6348  | 0.5010          | 0.1466 | 0.0379 | 0.1368 | 0.1345    | 97.4386 |
| 0.8294        | 13.0  | 6877  | 0.4787          | 0.1815 | 0.0683 | 0.1744 | 0.167     | 97.8639 |
| 0.4896        | 14.0  | 7406  | 0.4603          | 0.1946 | 0.0732 | 0.1948 | 0.1899    | 97.707  |
| 0.4353        | 15.0  | 7935  | 0.4446          | 0.158  | 0.0664 | 0.1476 | 0.1456    | 97.8837 |
| 1.8165        | 16.0  | 8464  | 0.4314          | 0.3104 | 0.1119 | 0.3005 | 0.2917    | 98.4329 |
| 0.3503        | 17.0  | 8993  | 0.4236          | 0.2872 | 0.1234 | 0.2785 | 0.2681    | 98.2259 |
| 0.5756        | 18.0  | 9522  | 0.4199          | 0.339  | 0.1242 | 0.3348 | 0.3252    | 98.31   |
| 0.7974        | 19.0  | 10051 | 0.4176          | 0.3437 | 0.1568 | 0.3477 | 0.338     | 98.3932 |
| 0.224         | 20.0  | 10580 | 0.4150          | 0.3247 | 0.1432 | 0.3268 | 0.3201    | 98.4206 |
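
The `compute_metrics` function used for these scores is not published. For reference, ROUGE values of this kind can be computed with the `evaluate` library; the texts below are placeholders, so treat this as an illustration only.

```python
# Hedged sketch of computing ROUGE scores like those in the table above.
# Requires the `rouge_score` package in addition to `evaluate`.
import evaluate

rouge = evaluate.load("rouge")

predictions = ["generated summary of a deed"]        # model outputs (placeholder)
references = ["reference summary of the same deed"]  # gold summaries (placeholder)

scores = rouge.compute(predictions=predictions, references=references)
print(scores)  # keys: rouge1, rouge2, rougeL, rougeLsum
```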

### Framework versions

- Transformers 4.37.2
- Pytorch 2.1.0.dev20230811+cu121
- Datasets 2.17.0
- Tokenizers 0.15.2