
Description

This model was fine-tuned from facebook/bart-large-xsum on the samsum dataset, using the parameters listed under Training Parameters below.


Usage

from transformers import pipeline

# Load the fine-tuned model into a summarization pipeline
summarizer = pipeline("summarization", model="adedamola26/bart-finetuned-samsum")

conversation = '''Jack: Cocktails later?
May: YES!!!
May: You read my mind...
Jack: Possibly a little tightly strung today?
May: Sigh... without question.
Jack: Thought so.
May: A little drink will help!
Jack: Maybe two!
'''
summary = summarizer(conversation)
print(summary[0]["summary_text"])
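
The call returns a list with one dictionary per input, with the generated summary stored under the summary_text key. Generation keyword arguments such as max_length can be passed through the call, e.g. summarizer(conversation, max_length=60).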

Training Parameters

evaluation_strategy="epoch",
save_strategy="epoch",
load_best_model_at_end=True,
metric_for_best_model="eval_loss",
seed=42,
learning_rate=2e-5,
per_device_train_batch_size=4,
per_device_eval_batch_size=4,
gradient_accumulation_steps=2,
weight_decay=0.01,
save_total_limit=2,
num_train_epochs=4,
predict_with_generate=True,
fp16=True,
report_to="none"
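
For context, here is a minimal sketch of how these parameters plug into a Seq2SeqTrainer. The output_dir, the preprocessing function, and the tokenization max_length values are illustrative assumptions, not taken from the original training notebook.

from datasets import load_dataset
from transformers import (
    AutoTokenizer,
    AutoModelForSeq2SeqLM,
    DataCollatorForSeq2Seq,
    Seq2SeqTrainingArguments,
    Seq2SeqTrainer,
)

# Base checkpoint and dataset named in this card
tokenizer = AutoTokenizer.from_pretrained("facebook/bart-large-xsum")
model = AutoModelForSeq2SeqLM.from_pretrained("facebook/bart-large-xsum")
samsum = load_dataset("samsum")

def preprocess(batch):
    # Tokenize dialogues as inputs and summaries as labels.
    # The max_length values here are assumptions, not from the card.
    inputs = tokenizer(batch["dialogue"], max_length=1024, truncation=True)
    labels = tokenizer(text_target=batch["summary"], max_length=128, truncation=True)
    inputs["labels"] = labels["input_ids"]
    return inputs

tokenized = samsum.map(preprocess, batched=True, remove_columns=samsum["train"].column_names)

training_args = Seq2SeqTrainingArguments(
    output_dir="bart-finetuned-samsum",  # hypothetical output path
    evaluation_strategy="epoch",
    save_strategy="epoch",
    load_best_model_at_end=True,
    metric_for_best_model="eval_loss",
    seed=42,
    learning_rate=2e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    gradient_accumulation_steps=2,
    weight_decay=0.01,
    save_total_limit=2,
    num_train_epochs=4,
    predict_with_generate=True,
    fp16=True,
    report_to="none",
)

trainer = Seq2SeqTrainer(
    model=model,
    args=training_args,
    train_dataset=tokenized["train"],
    eval_dataset=tokenized["validation"],
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
    tokenizer=tokenizer,
)
trainer.train()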

References

The model training process was adapted from Luis Fernando Torres's Kaggle notebook, "📝 Text Summarization with Large Language Models".
