---
license: apache-2.0
tags:
  - generated_from_trainer
datasets:
  - cnn_dailymail
metrics:
  - rouge
base_model: allenai/led-base-16384
model-index:
  - name: led-large-16384-cnn_dailymail
    results:
      - task:
          type: text2text-generation
          name: Sequence-to-sequence Language Modeling
        dataset:
          name: cnn_dailymail
          type: cnn_dailymail
          config: 3.0.0
          split: test
          args: 3.0.0
        metrics:
          - type: rouge
            value: 0.3869876274946419
            name: Rouge1
---

# led-large-16384-cnn_dailymail

This model is a fine-tuned version of [allenai/led-base-16384](https://huggingface.co/allenai/led-base-16384) on the cnn_dailymail dataset. It achieves the following results on the evaluation set:

- Loss: 1.5544
- Rouge-1: 0.3870
- Rouge-2: 0.1736
- Rouge-L: 0.2599
- Rouge-Lsum: 0.3653
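
Below is a minimal inference sketch, not an official snippet from this repo: the `repo_id` is a placeholder for wherever this checkpoint is hosted, and the generation settings (beam count, length limits) are assumptions. LED uses Longformer-style sparse attention, and the usual convention for summarization is to give the first token global attention.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

repo_id = "led-large-16384-cnn_dailymail"  # placeholder: substitute the actual Hub id
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForSeq2SeqLM.from_pretrained(repo_id)

article = "..."  # a long news article; LED accepts inputs up to 16384 tokens
inputs = tokenizer(article, max_length=16384, truncation=True, return_tensors="pt")

# Longformer-style attention: mark the first token as globally attended,
# the common convention for LED summarization.
global_attention_mask = torch.zeros_like(inputs["input_ids"])
global_attention_mask[:, 0] = 1

summary_ids = model.generate(
    inputs["input_ids"],
    attention_mask=inputs["attention_mask"],
    global_attention_mask=global_attention_mask,
    num_beams=4,     # assumed; not specified in this card
    max_length=256,  # assumed; CNN/DailyMail reference summaries are short
)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```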

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 64
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
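
For reference, a hedged reconstruction of these settings as `Seq2SeqTrainingArguments` (transformers 4.27-era argument names). `output_dir` and `predict_with_generate` are assumptions not listed above, and the Adam betas/epsilon shown in the card are the library defaults:

```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="led-large-16384-cnn_dailymail",  # assumed
    learning_rate=5e-5,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=2,
    seed=42,
    gradient_accumulation_steps=64,  # 2 per device x 64 steps = 128 effective batch
    lr_scheduler_type="linear",
    num_train_epochs=10,
    fp16=True,                       # "Native AMP" mixed precision
    predict_with_generate=True,      # assumed; needed to compute ROUGE at eval time
)
```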

### Training results

| Training Loss | Epoch | Step | Validation Loss | Rouge-1 | Rouge-2 | Rouge-L | Rouge-Lsum |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|:-------:|:----------:|
| 1.9531 | 0.4 | 500 | 1.8639 | 0.3485 | 0.1441 | 0.2275 | 0.3288 |
| 1.9563 | 0.8 | 1000 | 1.8260 | 0.3538 | 0.1482 | 0.2315 | 0.3343 |
| 1.7176 | 1.2 | 1500 | 1.8208 | 0.3628 | 0.1527 | 0.2383 | 0.3433 |
| 1.7197 | 1.6 | 2000 | 1.8162 | 0.3696 | 0.1602 | 0.2434 | 0.3486 |
| 1.8086 | 2.0 | 2500 | 1.7924 | 0.3558 | 0.1533 | 0.2334 | 0.3361 |
| 1.2448 | 2.4 | 3000 | 1.8510 | 0.3703 | 0.1591 | 0.2447 | 0.3483 |
| 1.3574 | 2.8 | 3500 | 1.8277 | 0.3741 | 0.1593 | 0.2422 | 0.3540 |
| 1.0966 | 3.2 | 4000 | 1.8924 | 0.3682 | 0.1576 | 0.2424 | 0.3479 |
| 0.9938 | 3.6 | 4500 | 1.8957 | 0.3723 | 0.1599 | 0.2451 | 0.3511 |
| 1.0735 | 4.0 | 5000 | 1.8772 | 0.3653 | 0.1557 | 0.2399 | 0.3454 |
| 0.9106 | 4.4 | 5500 | 1.9401 | 0.3720 | 0.1585 | 0.2436 | 0.3504 |
| 1.015 | 4.8 | 6000 | 1.9320 | 0.3725 | 0.1570 | 0.2429 | 0.3515 |
| 1.7854 | 0.36 | 6500 | 1.7800 | 0.3624 | 0.1544 | 0.2390 | 0.3422 |
| 1.9079 | 0.39 | 7000 | 1.7629 | 0.3573 | 0.1553 | 0.2352 | 0.3370 |
| 1.7606 | 3.34 | 7500 | 1.6902 | 0.3783 | 0.1673 | 0.2521 | 0.3570 |
| 1.7571 | 3.57 | 8000 | 1.6563 | 0.3802 | 0.1691 | 0.2538 | 0.3587 |
| 1.6602 | 3.79 | 8500 | 1.6439 | 0.3814 | 0.1693 | 0.2548 | 0.3600 |
| 1.6614 | 4.01 | 9000 | 1.6312 | 0.3812 | 0.1691 | 0.2544 | 0.3599 |
| 1.668 | 4.24 | 9500 | 1.6189 | 0.3815 | 0.1689 | 0.2550 | 0.3603 |
| 1.6491 | 4.46 | 10000 | 1.6172 | 0.3799 | 0.1681 | 0.2540 | 0.3586 |
| 1.5994 | 4.68 | 10500 | 1.6132 | 0.3825 | 0.1702 | 0.2560 | 0.3610 |
| 1.6493 | 4.9 | 11000 | 1.6093 | 0.3828 | 0.1701 | 0.2561 | 0.3613 |
| 1.6769 | 5.13 | 11500 | 1.6074 | 0.3831 | 0.1706 | 0.2569 | 0.3619 |
| 1.6554 | 5.35 | 12000 | 1.6044 | 0.3817 | 0.1695 | 0.2559 | 0.3605 |
| 1.6155 | 5.57 | 12500 | 1.6010 | 0.3825 | 0.1700 | 0.2561 | 0.3608 |
| 1.5863 | 5.8 | 13000 | 1.5981 | 0.3829 | 0.1704 | 0.2569 | 0.3614 |
| 1.6306 | 6.02 | 13500 | 1.6004 | 0.3831 | 0.1702 | 0.2563 | 0.3618 |
| 1.6425 | 6.24 | 14000 | 1.5987 | 0.3821 | 0.1698 | 0.2561 | 0.3610 |
| 1.6863 | 6.46 | 14500 | 1.5876 | 0.3837 | 0.1710 | 0.2569 | 0.3622 |
| 1.6085 | 6.69 | 15000 | 1.5815 | 0.3836 | 0.1717 | 0.2573 | 0.3621 |
| 1.6267 | 6.91 | 15500 | 1.5792 | 0.3852 | 0.1722 | 0.2579 | 0.3633 |
| 1.5637 | 7.13 | 16000 | 1.5768 | 0.3830 | 0.1709 | 0.2568 | 0.3611 |
| 1.5586 | 7.36 | 16500 | 1.5740 | 0.3833 | 0.1706 | 0.2567 | 0.3617 |
| 1.5389 | 7.58 | 17000 | 1.5689 | 0.3858 | 0.1729 | 0.2590 | 0.3640 |
| 1.5694 | 7.8 | 17500 | 1.5645 | 0.3853 | 0.1731 | 0.2589 | 0.3636 |
| 1.5265 | 8.02 | 18000 | 1.5621 | 0.3871 | 0.1733 | 0.2596 | 0.3654 |
| 1.5273 | 8.25 | 18500 | 1.5624 | 0.3861 | 0.1726 | 0.2588 | 0.3646 |
| 1.5148 | 8.47 | 19000 | 1.5602 | 0.3866 | 0.1733 | 0.2592 | 0.3651 |
| 1.532 | 8.69 | 19500 | 1.5599 | 0.3859 | 0.1732 | 0.2593 | 0.3642 |
| 1.5113 | 8.92 | 20000 | 1.5602 | 0.3877 | 0.1748 | 0.2606 | 0.3658 |
| 1.5133 | 9.14 | 20500 | 1.5595 | 0.3855 | 0.1725 | 0.2587 | 0.3637 |
| 1.4875 | 9.36 | 21000 | 1.5572 | 0.3873 | 0.1741 | 0.2600 | 0.3654 |
| 1.5038 | 9.59 | 21500 | 1.5557 | 0.3860 | 0.1728 | 0.2590 | 0.3641 |
| 1.5062 | 9.81 | 22000 | 1.5544 | 0.3870 | 0.1736 | 0.2599 | 0.3653 |
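
The ROUGE columns above come from scoring generated summaries against cnn_dailymail references during evaluation. A minimal sketch of that kind of scoring with the `evaluate` library follows; it is an assumption about the workflow, not the exact evaluation script used for this card:

```python
import evaluate
from datasets import load_dataset

dataset = load_dataset("cnn_dailymail", "3.0.0", split="test")
rouge = evaluate.load("rouge")

# Summaries generated as in the inference sketch earlier in this card.
predictions = ["..."]
references = dataset["highlights"][: len(predictions)]

scores = rouge.compute(predictions=predictions, references=references)
print(scores)  # keys: rouge1, rouge2, rougeL, rougeLsum
```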

### Framework versions

- Transformers 4.27.1
- PyTorch 2.0.0+cu118
- Datasets 2.10.1
- Tokenizers 0.13.2