---
library_name: transformers
license: cc
datasets:
- billsum
language:
- en
metrics:
- rouge
pipeline_tag: summarization
---

# Model Card for Model ID

This is a T5 model fine-tuned on the BillSum dataset for summarization.
Trained on a Google Colab T4 GPU.
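The card does not show the preprocessing step, so the following is a minimal sketch of how BillSum examples are typically prepared for a T5 summarization model. The `"summarize: "` task prefix is the usual T5 convention, and the `build_inputs` helper is an illustrative assumption, not code taken from this training run; the `"text"` and `"summary"` field names match the BillSum dataset columns.

```python
# Illustrative sketch: T5 is a text-to-text model, so summarization
# inputs conventionally get a task prefix before tokenization.
# The "text"/"summary" keys mirror the BillSum dataset columns.
PREFIX = "summarize: "

def build_inputs(batch):
    """Prepend the T5 task prefix to each bill text (hypothetical helper)."""
    return [PREFIX + doc for doc in batch["text"]]

batch = {
    "text": ["An act to amend Section 1 of the Education Code."],
    "summary": ["Amends the Education Code."],
}
print(build_inputs(batch)[0])  # → summarize: An act to amend Section 1 of the Education Code.
```

In a real run this prefixed text would then be passed to the T5 tokenizer (with truncation) before training.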
# Training Arguments

```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    "bert-on-the-billsum",
    evaluation_strategy="epoch",
    learning_rate=2e-5,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=2,
    weight_decay=0.01,
    save_total_limit=3,
    num_train_epochs=4,
    predict_with_generate=True,
    fp16=True,
    push_to_hub=True,
)
```

# Training Results

| Epoch | Training Loss | Validation Loss | Rouge1   | Rouge2   | Rougel   | Rougelsum | Gen Len   |
|------:|--------------:|----------------:|---------:|---------:|---------:|----------:|----------:|
| 1     | No log        | 1.920353        | 0.193900 | 0.105400 | 0.168900 | 0.168800  | 19.000000 |
| 2     | 2.296300      | 1.860363        | 0.195300 | 0.105100 | 0.171400 | 0.171400  | 19.000000 |
| 3     | 1.927000      | 1.834611        | 0.195500 | 0.106800 | 0.171600 | 0.171500  | 19.000000 |
| 4     | 1.849100      | 1.826394        | 0.195000 | 0.105200 | 0.171400 | 0.171300  | 19.000000 |
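To make the table easier to read programmatically, here is the same training log as plain Python data, with a small check of which epoch had the lowest validation loss. The values are copied from the table above; `None` stands in for the unlogged first-epoch training loss ("No log").

```python
# Training log from the results table: (epoch, train_loss, val_loss, rouge1).
# None marks the first epoch, where no training loss was logged.
results = [
    (1, None,   1.920353, 0.1939),
    (2, 2.2963, 1.860363, 0.1953),
    (3, 1.9270, 1.834611, 0.1955),
    (4, 1.8491, 1.826394, 0.1950),
]

# Pick the epoch with the lowest validation loss.
best = min(results, key=lambda r: r[2])
print(f"Best validation loss {best[2]} at epoch {best[0]}")  # → Best validation loss 1.826394 at epoch 4
```

Validation loss was still decreasing at epoch 4 while ROUGE scores had essentially plateaued around epoch 3.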