### Description:
This BART model has been fine-tuned on the CNN/DailyMail summarization dataset, using a sample of 10,000 training examples.
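The exact training recipe for this checkpoint is not documented here. For readers who want to reproduce a comparable fine-tune, the sketch below uses the `datasets` library and `Seq2SeqTrainer`; the base checkpoint (`facebook/bart-base`) and all hyperparameters are assumptions rather than the settings actually used, and only the 10,000-example sample size comes from the description above.

```python
# Hypothetical reconstruction -- the actual training script and hyperparameters
# for this checkpoint are not documented in this card.
from datasets import load_dataset
from transformers import (AutoTokenizer, AutoModelForSeq2SeqLM,
                          DataCollatorForSeq2Seq, Seq2SeqTrainer,
                          Seq2SeqTrainingArguments)

# 10,000-example sample, per the description above.
dataset = load_dataset("cnn_dailymail", "3.0.0", split="train[:10000]")

base = "facebook/bart-base"  # assumed base checkpoint
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForSeq2SeqLM.from_pretrained(base)

def preprocess(examples):
    # Articles are the inputs, human-written highlights are the targets.
    model_inputs = tokenizer(examples["article"], max_length=1024, truncation=True)
    labels = tokenizer(text_target=examples["highlights"], max_length=128, truncation=True)
    model_inputs["labels"] = labels["input_ids"]
    return model_inputs

tokenized = dataset.map(preprocess, batched=True, remove_columns=dataset.column_names)

args = Seq2SeqTrainingArguments(
    output_dir="finetuned_bart",
    per_device_train_batch_size=4,  # assumed
    num_train_epochs=3,             # assumed
    learning_rate=3e-5,             # assumed
)

trainer = Seq2SeqTrainer(
    model=model,
    args=args,
    train_dataset=tokenized,
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
)
trainer.train()
```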
### How To Use:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
import torch

src_text = [
    " PG&E stated it scheduled the blackouts in response to forecasts for high winds amid dry conditions. The aim is to reduce the risk of wildfires. Nearly 800 thousand customers were scheduled to be affected by the shutoffs which were expected to last through at least midday tomorrow.",
    "In the end, it played out like a movie. A tense, heartbreaking story, and then a surprise twist at the end. As eight of Mary Jane Veloso's fellow death row inmates -- mostly foreigners, like her -- were put to death by firing squad early Wednesday in a wooded grove on the Indonesian island of Nusa Kambangan, the Filipina maid and mother of two was spared, at least for now. Her family was returning from what they thought was their final visit to the prison on so-called \"execution island\" when a Philippine TV crew flagged their bus down to tell them of the decision to postpone her execution. Her ecstatic mother, Celia Veloso, told CNN: \"We are so happy, so happy. I thought I had lost my daughter already but God is so good. Thank you to everyone who helped us."
]

# Run on GPU when one is available.
torch_device = 'cuda' if torch.cuda.is_available() else 'cpu'

tokenizer = AutoTokenizer.from_pretrained("Mousumi/finetuned_bart")
model = AutoModelForSeq2SeqLM.from_pretrained("Mousumi/finetuned_bart").to(torch_device)

result = []
for text in src_text:
    # Tokenize one article at a time and move the tensors to the model's device.
    batch = tokenizer([text], return_tensors='pt', padding=True, truncation=True).to(torch_device)
    # Generate a summary and decode it back to text.
    translated = model.generate(**batch)
    tgt_text = tokenizer.batch_decode(translated, skip_special_tokens=True)
    result.append(tgt_text[0])

print(result)
```
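Since the checkpoint is a standard seq2seq model, the high-level `pipeline` API should also work as a shorter alternative; a minimal sketch (the generation settings here are the pipeline's library defaults, not values documented for this model):

```python
from transformers import pipeline

# Wraps Mousumi/finetuned_bart in the standard summarization pipeline.
summarizer = pipeline("summarization", model="Mousumi/finetuned_bart")

article = ("PG&E stated it scheduled the blackouts in response to forecasts for "
           "high winds amid dry conditions. The aim is to reduce the risk of wildfires.")
print(summarizer(article, truncation=True)[0]["summary_text"])
```

The pipeline handles tokenization, generation, and decoding internally, so it trades the fine-grained control of the loop above for brevity.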