---
# For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1
# Doc / guide: https://huggingface.co/docs/hub/model-cards
{}
---
# Model Card for BART Fine-Tuned Dialogue Summarization
A BART-based sequence-to-sequence model fine-tuned to produce concise summaries of dialogues and short paragraphs.
## Description
Automatic Text Summarization is one of the most challenging and interesting problems in the field of Natural Language Processing (NLP). It is a process of generating a concise and meaningful summary of text from multiple text resources such as books, news articles, blog posts, research papers, emails, and tweets.
This model was developed and fine-tuned for improved performance on dialogue summarization as part of an NLP assignment.
## Model Details
### Model Description
- **Developed by:** Anupriya Sen and Ashutosh Kumar, for NLP learning purposes
- **Model type:** BART-based sequence-to-sequence summarization model
## How to Use
Load the fine-tuned model with the `transformers` summarization pipeline and pass it a paragraph or dialogue; the model returns a contextual summary.

```python
from transformers import pipeline

# Loading the summarization pipeline with the fine-tuned BART model
summarizer = pipeline('summarization', model='/content/BART_FINETUNED_TEXT_SUMMARY')

conversation = '''Soma: Do you think it's a good idea to invest in stocks?
Emily: I'm skeptical. The market is very volatile, and you could lose money.
Sarah: True. But there's also a high upside, right?
'''

# The pipeline returns a list of dicts; 'summary_text' holds the generated summary
summary = summarizer(conversation)
print(summary[0]['summary_text'])
```
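If you need explicit control over generation, the model can also be loaded without the pipeline helper. This is a minimal sketch assuming the fine-tuned tokenizer was saved alongside the model at the same local path; the generation settings (`max_length`, `num_beams`) are illustrative choices, not values from the original card.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Assumed local path where both the fine-tuned model and its tokenizer were saved
model_path = '/content/BART_FINETUNED_TEXT_SUMMARY'
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForSeq2SeqLM.from_pretrained(model_path)

dialogue = ("Soma: Do you think it's a good idea to invest in stocks?\n"
            "Emily: I'm skeptical. The market is very volatile, and you could lose money.\n"
            "Sarah: True. But there's also a high upside, right?\n")

inputs = tokenizer(dialogue, return_tensors='pt', truncation=True, max_length=1024)
# Illustrative generation settings; adjust max_length / num_beams for your inputs
summary_ids = model.generate(**inputs, max_length=64, num_beams=4)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```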
## Training Details
The model was fine-tuned with the following training arguments (an instance of `transformers.Seq2SeqTrainingArguments`):

```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir='BART_FINETUNED_TEXT_SUMMARY',  # assumed output directory, not stated in the original card
    evaluation_strategy="epoch",
    save_strategy="epoch",
    load_best_model_at_end=True,
    metric_for_best_model="eval_loss",
    seed=42,
    learning_rate=2e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    gradient_accumulation_steps=2,
    weight_decay=0.01,
    save_total_limit=2,
    num_train_epochs=4,
    predict_with_generate=True,
    report_to="none",
)
```
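For context, below is a minimal sketch (not the original training notebook) of how these arguments could drive a `Seq2SeqTrainer` fine-tuning run. The base checkpoint `facebook/bart-base`, the toy in-memory dataset, and the tokenization lengths are illustrative assumptions; the actual training data for this model is not documented in this card. The sketch reuses the `training_args` object defined above.

```python
from datasets import Dataset
from transformers import (AutoTokenizer, AutoModelForSeq2SeqLM,
                          DataCollatorForSeq2Seq, Seq2SeqTrainer)

# Placeholder base checkpoint; the card only says the model is BART-based
base_checkpoint = 'facebook/bart-base'
tokenizer = AutoTokenizer.from_pretrained(base_checkpoint)
model = AutoModelForSeq2SeqLM.from_pretrained(base_checkpoint)

# Toy in-memory dataset standing in for the (unspecified) dialogue-summary corpus
raw = Dataset.from_dict({
    'dialogue': ["Soma: Should we invest in stocks?\nEmily: I'm skeptical, the market is volatile."],
    'summary':  ["Soma and Emily discuss whether investing in stocks is a good idea."],
})

def preprocess(batch):
    # Dialogues are the model inputs; reference summaries become the labels
    model_inputs = tokenizer(batch['dialogue'], max_length=1024, truncation=True)
    labels = tokenizer(text_target=batch['summary'], max_length=128, truncation=True)
    model_inputs['labels'] = labels['input_ids']
    return model_inputs

tokenized = raw.map(preprocess, batched=True, remove_columns=raw.column_names)

# `training_args` is the Seq2SeqTrainingArguments object defined in the block above
trainer = Seq2SeqTrainer(
    model=model,
    args=training_args,
    train_dataset=tokenized,
    eval_dataset=tokenized,
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
    tokenizer=tokenizer,
)
trainer.train()
trainer.save_model('/content/BART_FINETUNED_TEXT_SUMMARY')
```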
### Training Data