AgaMiko committed on
Commit 3f7d4b3
1 Parent(s): b9dbe29

Update README.md

Files changed (1)
  1. README.md +5 -7
README.md CHANGED
@@ -89,14 +89,12 @@ inputs = ["Christina Katrakis, who spoke to the BBC from Vorokhta in western Ukr
  "Hello, I'd like to order a pizza with salami topping."]

  for sample in inputs:
- input_sequences = [task_prefix + sample]
- input_ids = tokenizer(input_sequences, return_tensors='pt', truncation=True).input_ids
- output = vlt5.generate(input_ids, no_repeat_ngram_size=3, num_beams=4)
- predicted = tokenizer.decode(output[0], skip_special_tokens=True)
- print(sample, "\n --->", predicted)
+ input_sequences = [task_prefix + sample]
+ input_ids = tokenizer(input_sequences, return_tensors='pt', truncation=True).input_ids
+ output = model.generate(input_ids, no_repeat_ngram_size=3, num_beams=4)
+ predicted = tokenizer.decode(output[0], skip_special_tokens=True)
+ print(sample, "\n --->", predicted)

-
-
 ```
 # Inference
 Our results showed that the best generation results were achieved with `no_repeat_ngram_size=3, num_beams=4`
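For context, the inference loop the updated README ends up with (after the `vlt5` → `model` rename) can be sketched end-to-end as below. This is a sketch, not the repository's exact code: the checkpoint id `Voicelab/vlt5-base-keywords` and the `"Keywords: "` task prefix are assumptions, since neither appears in this diff.

```python
# Sketch of the updated README's inference loop. The checkpoint id and the
# task prefix used in __main__ are assumptions -- this diff names neither.

def build_input_sequences(task_prefix, sample):
    # The README prepends the task prompt to each raw text sample.
    return [task_prefix + sample]

def generate_keywords(model, tokenizer, task_prefix, samples):
    """Run beam-search generation per sample, mirroring the README loop."""
    predictions = []
    for sample in samples:
        input_sequences = build_input_sequences(task_prefix, sample)
        input_ids = tokenizer(input_sequences, return_tensors="pt",
                              truncation=True).input_ids
        # Settings the README reports as best for generation quality.
        output = model.generate(input_ids, no_repeat_ngram_size=3, num_beams=4)
        predictions.append(tokenizer.decode(output[0], skip_special_tokens=True))
    return predictions

if __name__ == "__main__":
    # Heavy dependencies are imported lazily so the helpers above stay
    # importable without transformers installed.
    from transformers import T5ForConditionalGeneration, T5Tokenizer

    checkpoint = "Voicelab/vlt5-base-keywords"  # assumed checkpoint id
    tokenizer = T5Tokenizer.from_pretrained(checkpoint)
    model = T5ForConditionalGeneration.from_pretrained(checkpoint)

    samples = ["Hello, I'd like to order a pizza with salami topping."]
    for sample, keywords in zip(
            samples,
            generate_keywords(model, tokenizer, "Keywords: ", samples)):
        print(sample, "\n --->", keywords)
```

Renaming the variable to `model` (rather than `vlt5`) follows the usual transformers convention and lets the snippet work unchanged if the checkpoint is swapped.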