Anupriya1322 committed on
Commit 7ceb9f5
Parent: a8ca657

Create README.md

Files changed (1): README.md (+60, -0)
README.md ADDED
---
# For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1
# Doc / guide: https://huggingface.co/docs/hub/model-cards
{}
---

# Model Card for BART_FINETUNED_TEXT_SUMMARY

A BART-based model fine-tuned for dialogue summarization.

## Description
Automatic text summarization is one of the most challenging and interesting problems in Natural Language Processing (NLP). It is the process of generating a concise and meaningful summary of text from sources such as books, news articles, blog posts, research papers, emails, and tweets.
This model was fine-tuned for improved performance on dialogue summarization as part of an NLP assignment.

## Model Details
The model is based on the BART architecture and fine-tuned for dialogue summarization. It loads through the Hugging Face `transformers` summarization pipeline, as shown under "How to Use" below.

### Model Description

- **Developed by:** Anupriya Sen and Ashutosh Kumar, for NLP learning purposes
- **Model type:** Sequence-to-sequence summarization model based on the BART architecture

## How to Use

Load the model through the `transformers` summarization pipeline, then pass it the text to summarize. The model returns a contextual summary of the given paragraph or dialogue.

```python
from transformers import pipeline

# Load the summarization pipeline with the fine-tuned model
summarizer = pipeline('summarization', model='/content/BART_FINETUNED_TEXT_SUMMARY')

conversation = '''Soma: Do you think it's a good idea to invest in stocks?
Emily: I'm skeptical. The market is very volatile, and you could lose money.
Sarah: True. But there's also a high upside, right?
'''
summary = summarizer(conversation)
```
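
The pipeline returns a list with one dictionary per input, with the generated text under the `summary_text` key. A minimal sketch of reading the result and constraining summary length; the `max_length`/`min_length` values here are illustrative, not from the original card:

```python
# Extract the generated summary from the pipeline output
print(summary[0]['summary_text'])

# Generation length can be constrained via standard pipeline keyword arguments
# (these particular values are illustrative)
summary = summarizer(conversation, max_length=60, min_length=10)
print(summary[0]['summary_text'])
```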

## Training Details

The fine-tuning run used the following configuration, wrapped here in `Seq2SeqTrainingArguments` so the snippet is runnable; the `output_dir` is an assumption, since the original card lists only the keyword arguments:

```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="BART_FINETUNED_TEXT_SUMMARY",  # assumed; not in the original card
    evaluation_strategy="epoch",
    save_strategy="epoch",
    load_best_model_at_end=True,
    metric_for_best_model="eval_loss",
    seed=42,
    learning_rate=2e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    gradient_accumulation_steps=2,
    weight_decay=0.01,
    save_total_limit=2,
    num_train_epochs=4,
    predict_with_generate=True,
    report_to="none",
)
```
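
For context, a minimal sketch of how these arguments would typically feed into a `Seq2SeqTrainer` run. The base checkpoint (`facebook/bart-base`) and the `train_dataset`/`eval_dataset` variables are illustrative assumptions, since the card does not name them:

```python
from transformers import (
    AutoModelForSeq2SeqLM,
    AutoTokenizer,
    DataCollatorForSeq2Seq,
    Seq2SeqTrainer,
)

# Illustrative assumption: fine-tuning starts from a public BART checkpoint
checkpoint = "facebook/bart-base"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint)

trainer = Seq2SeqTrainer(
    model=model,
    args=training_args,            # the Seq2SeqTrainingArguments above
    train_dataset=train_dataset,   # hypothetical tokenized dialogue/summary pairs
    eval_dataset=eval_dataset,     # hypothetical held-out split
    tokenizer=tokenizer,
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
)
trainer.train()
trainer.save_model("/content/BART_FINETUNED_TEXT_SUMMARY")
```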

### Training Data