
Note: these are not self-reported values for the leaderboard; I have no clue why it's broken. Check the pull request.

Use this model for summarization without adding "summarize: " to the start of the input string.
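
For example, here is a minimal inference sketch using the transformers summarization pipeline (the sample dialogue is illustrative, not from this card):

```python
from transformers import pipeline

# Load this model through the summarization pipeline.
summarizer = pipeline(
    "summarization",
    model="Samuel-Fipps/t5-efficient-large-nl36_fine_tune_sum_V2",
)

dialogue = (
    "Amanda: I baked cookies. Do you want some?\n"
    "Jerry: Sure!\n"
    "Amanda: I'll bring you some tomorrow :-)"
)

# Pass the dialogue directly; no "summarize: " prefix is needed.
print(summarizer(dialogue)[0]["summary_text"])
```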

Trained on the SAMSum train split.
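
A minimal sketch of loading that split, assuming the datasets library and the "samsum" dataset id on the Hugging Face Hub:

```python
from datasets import load_dataset

# Load only the training split used for fine-tuning.
train_split = load_dataset("samsum", split="train")

# Each example pairs a chat dialogue with a reference summary.
print(train_split[0]["dialogue"])
print(train_split[0]["summary"])
```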

Parameters for training:

```python
import torch
from transformers import get_scheduler

# Exclude biases and LayerNorm weights from weight decay.
no_decay = ["bias", "LayerNorm.weight", "layer_norm.weight"]
optimizer_grouped_parameters = [
    {
        "params": [p for n, p in model.named_parameters() if not any(nd in n for nd in no_decay)],
        "weight_decay": 0.0,  # as written: weight decay is 0.0 for both groups
    },
    {
        "params": [p for n, p in model.named_parameters() if any(nd in n for nd in no_decay)],
        "weight_decay": 0.0,
    },
]

lr = 0.00005
optimizer = torch.optim.RAdam(optimizer_grouped_parameters, lr=lr)

lr_scheduler = get_scheduler(
    name="linear",
    optimizer=optimizer,
    num_warmup_steps=0,
    num_training_steps=50005,  # scheduler horizon; training stopped after 10K steps (see below)
)
```

Training ran for only 10K steps with a batch size of 10.
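
A hypothetical sketch of how the pieces above fit into that run; `model` and `train_dataloader` are assumptions, and only the optimizer, scheduler, step count, and batch size come from this card:

```python
# Assumes `model`, `optimizer`, and `lr_scheduler` from above, plus a
# `train_dataloader` yielding tokenized batches of 10 examples.
max_steps = 10_000  # training stopped here, well short of the scheduler's 50005

model.train()
step = 0
while step < max_steps:
    for batch in train_dataloader:
        loss = model(**batch).loss  # seq2seq forward pass returns the loss
        loss.backward()

        optimizer.step()
        lr_scheduler.step()
        optimizer.zero_grad()

        step += 1
        if step >= max_steps:
            break
```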

If you want more info, feel free to message me or email me at samuelfipps@gmail.com.
