---
license: apache-2.0
tags:
  - generated_from_trainer
datasets:
  - din0s/asqa
model-index:
  - name: t5-base-pt-asqa-ob
    results: []
---

# t5-base-pt-asqa-ob

This model is a fine-tuned version of [din0s/t5-base-msmarco-nlgen-ob](https://huggingface.co/din0s/t5-base-msmarco-nlgen-ob) on the [ASQA](https://huggingface.co/datasets/din0s/asqa) dataset. It achieves the following results on the evaluation set:

- Loss: 1.7481
- Rougelsum: 12.3722
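
The card does not document an inference recipe. As a minimal sketch, the checkpoint can be loaded with the standard `transformers` seq2seq API; the exact input format this model expects (e.g. any task prefix) and the generation settings below are assumptions, and the sample question is only illustrative:

```python
# Minimal inference sketch (assumes `transformers` and `torch` are installed).
# The input format and generation settings are illustrative assumptions,
# not values documented by this card.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_id = "din0s/t5-base-pt-asqa-ob"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Hypothetical ambiguous question in the spirit of ASQA.
question = "When was the Eiffel Tower built?"
inputs = tokenizer(question, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128, num_beams=4)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```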

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
- mixed_precision_training: Native AMP
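
For reference, here is a hedged sketch of how these hyperparameters map onto `Seq2SeqTrainingArguments`. Only the values listed above come from the card; the output path and every setting not listed are assumptions, and the original training script may differ:

```python
# Hedged sketch: Seq2SeqTrainingArguments mirroring the hyperparameters above.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="t5-base-pt-asqa-ob",  # assumption: any local path works
    learning_rate=1e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=20,
    fp16=True,  # "Native AMP" mixed precision
)
```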

### Training results

| Training Loss | Epoch | Step | Validation Loss | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:---------:|
| No log        | 1.0   | 355  | 1.8760          | 11.5138   |
| 2.1344        | 2.0   | 710  | 1.8322          | 11.6843   |
| 1.979         | 3.0   | 1065 | 1.8109          | 11.8592   |
| 1.979         | 4.0   | 1420 | 1.7967          | 11.9466   |
| 1.9493        | 5.0   | 1775 | 1.7871          | 12.0333   |
| 1.9099        | 6.0   | 2130 | 1.7778          | 12.0805   |
| 1.9099        | 7.0   | 2485 | 1.7720          | 12.1659   |
| 1.8748        | 8.0   | 2840 | 1.7668          | 12.2039   |
| 1.8584        | 9.0   | 3195 | 1.7628          | 12.2506   |
| 1.8362        | 10.0  | 3550 | 1.7601          | 12.2557   |
| 1.8362        | 11.0  | 3905 | 1.7575          | 12.2718   |
| 1.8134        | 12.0  | 4260 | 1.7562          | 12.2789   |
| 1.7996        | 13.0  | 4615 | 1.7538          | 12.3179   |
| 1.7996        | 14.0  | 4970 | 1.7529          | 12.3035   |
| 1.8049        | 15.0  | 5325 | 1.7519          | 12.3317   |
| 1.7898        | 16.0  | 5680 | 1.7510          | 12.3717   |
| 1.7872        | 17.0  | 6035 | 1.7497          | 12.3750   |
| 1.7872        | 18.0  | 6390 | 1.7486          | 12.3580   |
| 1.7759        | 19.0  | 6745 | 1.7483          | 12.3698   |
| 1.785         | 20.0  | 7100 | 1.7481          | 12.3722   |
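
The Rougelsum column can be recomputed for new predictions with the `evaluate` library. The sketch below uses placeholder strings; the assumption that the card's scores are on a 0-100 scale (hence the factor of 100 over the 0-1 fraction `evaluate` returns) is mine, not stated in the card:

```python
# Hedged sketch: computing the Rougelsum metric with the `evaluate` library.
import evaluate

rouge = evaluate.load("rouge")
scores = rouge.compute(
    predictions=["model generated answer"],      # placeholder
    references=["reference long-form answer"],   # placeholder
)
# `evaluate` returns a 0-1 fraction; scale to match the card's 0-100 values.
print(100 * scores["rougeLsum"])
```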

### Framework versions

- Transformers 4.23.0.dev0
- PyTorch 1.12.1+cu102
- Datasets 2.4.0
- Tokenizers 0.12.1