
Note: the leaderboard values are not self-reported; I have no clue why it's broken. Check the pull request.

Use this model for summarization without prepending "summarize: " to the input string.
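A minimal usage sketch, assuming the standard transformers summarization pipeline (the example dialogue is illustrative, not from the model card):

```python
from transformers import pipeline

summarizer = pipeline(
    "summarization",
    model="Samuel-Fipps/t5-efficient-large-nl36_fine_tune_sum_V2",
)

# Pass the dialogue directly -- no "summarize: " prefix is needed
dialogue = (
    "Amanda: I baked cookies. Do you want some?\n"
    "Jerry: Sure!\n"
    "Amanda: I'll bring you some tomorrow :-)"
)

print(summarizer(dialogue)[0]["summary_text"])
```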

Trained on the SAMSum train split.
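For reference, the split can be loaded with the datasets library (a sketch; the `samsum` Hub dataset id and its py7zr extraction requirement are assumptions, not stated in the card):

```python
from datasets import load_dataset

# SAMSum dialogue-summarization dataset (extraction requires the py7zr package)
train_data = load_dataset("samsum", split="train")
print(train_data[0]["dialogue"])
print(train_data[0]["summary"])
```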

Parameters for training:

```python
import torch
from transformers import get_scheduler

# Group parameters so bias and layer-norm weights can be excluded from weight decay
no_decay = ["bias", "LayerNorm.weight", "layer_norm.weight"]
optimizer_grouped_parameters = [
    {
        "params": [
            p for n, p in model.named_parameters()
            if not any(nd in n for nd in no_decay)
        ],
        "weight_decay": 0.0,  # note: both groups use 0.0 here
    },
    {
        "params": [
            p for n, p in model.named_parameters()
            if any(nd in n for nd in no_decay)
        ],
        "weight_decay": 0.0,
    },
]

lr = 0.00005
optimizer = torch.optim.RAdam(optimizer_grouped_parameters, lr=lr)

lr_scheduler = get_scheduler(
    name="linear",
    optimizer=optimizer,
    num_warmup_steps=0,
    num_training_steps=50005,
)
```

Training ran for only 10K steps with a batch size of 10; a minimal loop sketch follows below.
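Here is a hypothetical sketch of that loop under stated assumptions: `model`, `optimizer`, and `lr_scheduler` come from the block above, and `train_dataloader` is an assumed DataLoader yielding tokenized seq2seq batches. This is not the author's exact script.

```python
# Hypothetical training loop; train_dataloader is assumed to yield batches
# with input_ids, attention_mask, and labels
model.train()
max_steps = 10_000  # ~10K steps at batch size 10
for step, batch in enumerate(train_dataloader):
    if step >= max_steps:
        break
    outputs = model(**batch)  # seq2seq forward pass returns loss when labels are given
    outputs.loss.backward()
    optimizer.step()
    lr_scheduler.step()
    optimizer.zero_grad()
```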

If you want more info, feel free to message me or email me at samuelfipps@gmail.com.
