---
library_name: transformers
pipeline_tag: summarization
---

# Model Card for BART_FINETUNED_TEXT_SUMMARY

## Description

Automatic text summarization is one of the most challenging and interesting problems in Natural Language Processing (NLP): the task of generating a concise and meaningful summary from text sources such as books, news articles, blog posts, research papers, emails, and tweets.

This model is a BART model fine-tuned for improved performance on dialogue summarization, developed as part of an NLP assignment.

## Model Details

### Model Description

- **Developed by:** Anupriya Sen and Ashutosh Kumar, for NLP learning purposes
- **Model type:** BART-based sequence-to-sequence summarization model
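
The fine-tuned checkpoint can also be loaded directly, without the pipeline wrapper. A minimal sketch, assuming the checkpoint lives at the local path used elsewhere in this card:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Local path from this card; adjust to wherever the checkpoint is stored
model_path = '/content/BART_FINETUNED_TEXT_SUMMARY'

tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForSeq2SeqLM.from_pretrained(model_path)
```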

## How to Use

Load the summarization pipeline with the fine-tuned model, then pass a paragraph or dialogue as input; the model returns a contextual summary.

```python
from transformers import pipeline

# Load the summarization pipeline with the fine-tuned model
summarizer = pipeline('summarization', model='/content/BART_FINETUNED_TEXT_SUMMARY')

conversation = '''Soma: Do you think it's a good idea to invest in stocks?
Emily: I'm skeptical. The market is very volatile, and you could lose money.
Sarah: True. But there's also a high upside, right?
'''

# Run the pipeline on the dialogue and print the generated summary
summary = summarizer(conversation)
print(summary[0]['summary_text'])
```
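
The pipeline also forwards generation keyword arguments to the underlying model. The length limits below are illustrative assumptions, not tuned values from this card:

```python
# Illustrative settings; adjust max_length/min_length to your inputs
summary = summarizer(conversation, max_length=60, min_length=10, do_sample=False)
print(summary[0]['summary_text'])
```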

## Training Details

The model was fine-tuned with the following arguments, which match Hugging Face's `Seq2SeqTrainingArguments` (the `output_dir` below is an assumed placeholder; the original card does not state one):

```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir='BART_FINETUNED_TEXT_SUMMARY',  # assumed placeholder
    evaluation_strategy='epoch',
    save_strategy='epoch',
    load_best_model_at_end=True,
    metric_for_best_model='eval_loss',
    seed=42,
    learning_rate=2e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    gradient_accumulation_steps=2,  # effective train batch size of 8 per device
    weight_decay=0.01,
    save_total_limit=2,
    num_train_epochs=4,
    predict_with_generate=True,
    report_to='none',
)
```
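
For context, a minimal sketch of how these arguments would be passed to a `Seq2SeqTrainer`. The model, tokenizer, and dataset variables are placeholders; the card does not specify the training data:

```python
from transformers import Seq2SeqTrainer, DataCollatorForSeq2Seq

# model, tokenizer, train_dataset, and eval_dataset are assumed to be
# prepared elsewhere; none of them are specified in this card.
data_collator = DataCollatorForSeq2Seq(tokenizer, model=model)

trainer = Seq2SeqTrainer(
    model=model,
    args=training_args,
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
    tokenizer=tokenizer,
    data_collator=data_collator,
)
trainer.train()
```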