---
{}
---

## Model description
A [PEGASUS](https://github.com/google-research/pegasus) model fine-tuned for paraphrasing.

## Model in action
```
import torch
from transformers import PegasusForConditionalGeneration, PegasusTokenizer

model_name = 'dhhivvva/graamyfy'
torch_device = 'cuda' if torch.cuda.is_available() else 'cpu'
tokenizer = PegasusTokenizer.from_pretrained(model_name)
model = PegasusForConditionalGeneration.from_pretrained(model_name).to(torch_device)

def get_response(input_text, num_return_sequences, num_beams):
  # Tokenize the input and move it to the same device as the model.
  batch = tokenizer([input_text], truncation=True, padding='longest', max_length=60, return_tensors='pt').to(torch_device)
  # Generate paraphrase candidates with beam search. Note that `temperature`
  # only takes effect when do_sample=True, so it is omitted here.
  translated = model.generate(**batch, max_length=60, num_beams=num_beams, num_return_sequences=num_return_sequences)
  # Decode the generated token IDs back into plain text.
  tgt_text = tokenizer.batch_decode(translated, skip_special_tokens=True)
  return tgt_text
```
#### Example
```
num_beams = 10
num_return_sequences = 3
context = "I like chocolate ice cream better than vanilla ice cream."
get_response(context, num_return_sequences, num_beams)
# output:
['I like chocolate ice cream.',
 'I like chocolate ice cream more than ice cream.',
 'I prefer chocolate ice cream.']
```
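
Beam search tends to return high-probability but similar candidates. For more varied paraphrases, sampling can be enabled instead; the snippet below is a minimal sketch reusing the `tokenizer`, `model`, `torch_device`, and `context` defined above. The `top_k` and `top_p` values are illustrative assumptions, not settings from this model card, and may need tuning.
```
batch = tokenizer([context], truncation=True, padding='longest', max_length=60, return_tensors='pt').to(torch_device)
# With do_sample=True, temperature/top_k/top_p actually influence generation.
translated = model.generate(
    **batch,
    max_length=60,
    do_sample=True,      # sample instead of beam search
    temperature=1.5,     # same value as in the original snippet
    top_k=120,           # assumed value for illustration
    top_p=0.95,          # assumed value for illustration
    num_return_sequences=3,
)
print(tokenizer.batch_decode(translated, skip_special_tokens=True))
```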