julien-c (HF staff) committed
Commit 8f46181
1 parent: c47ee60

Migrate model card from transformers-repo


Read announcement at https://discuss.huggingface.co/t/announcement-all-model-cards-will-be-migrated-to-hf-co-model-repos/2755
Original file history: https://github.com/huggingface/transformers/commits/master/model_cards/tuner007/pegasus_qa/README.md

Files changed (1): README.md (added, +30 −0)
# Pegasus for question-answering
Pegasus model fine-tuned for QA using a text-to-text approach.

## Model in Action 🚀
```python
import torch
from transformers import PegasusForConditionalGeneration, PegasusTokenizer

model_name = 'tuner007/pegasus_qa'
torch_device = 'cuda' if torch.cuda.is_available() else 'cpu'
tokenizer = PegasusTokenizer.from_pretrained(model_name)
model = PegasusForConditionalGeneration.from_pretrained(model_name).to(torch_device)

def get_answer(question, context):
    # Pack question and context into the single text-to-text prompt the model was trained on
    input_text = "question: %s text: %s" % (question, context)
    # Tokenize directly (prepare_seq2seq_batch is deprecated in recent transformers)
    batch = tokenizer([input_text], truncation=True, padding='longest', return_tensors="pt").to(torch_device)
    translated = model.generate(**batch)
    tgt_text = tokenizer.batch_decode(translated, skip_special_tokens=True)
    return tgt_text[0]
```
#### Example:
```python
context = "PG&E stated it scheduled the blackouts in response to forecasts for high winds amid dry conditions. The aim is to reduce the risk of wildfires. Nearly 800 thousand customers were scheduled to be affected by the shutoffs which were expected to last through at least midday tomorrow."
question = "How many customers were affected by the shutoffs?"
get_answer(question, context)
# output: '800 thousand'
```

> Created by Arpit Rajauria
[![Twitter icon](https://cdn0.iconfinder.com/data/icons/shift-logotypes/32/Twitter-32.png)](https://twitter.com/arpit_rajauria)