yj2773 committed on
Commit
41ad6b4
1 Parent(s): f86b18a

Updated README.md with pipeline example

Added this small snippet using transformers.Text2TextGenerationPipeline for ease of use.

Files changed (1)
  1. README.md +23 -0
README.md CHANGED
@@ -38,6 +38,29 @@ This model is based on the T5-base model. We used "transfer learning" to get our
 
 [Author's LinkedIn](https://www.linkedin.com/in/vladimir-vorobev/) link
 
+## Using with pipeline
+```python
+from transformers import pipeline
+
+generator = pipeline(model="humarin/chatgpt_paraphraser_on_T5_base")
+```
+
+**Input**
+```python
+generator('What are the best places to see in New York?', num_return_sequences=5, do_sample=True)
+```
+**Output**
+```python
+[{'generated_text': 'Which locations in New York are worth visiting and why?'},
+{'generated_text': 'Can you recommend any must-see sites in New York?'},
+{'generated_text': 'Which Bostonian sites are considered the funniest to visit in New York?'},
+{'generated_text': 'Which are the top destinations to discover in New York?'},
+{'generated_text': 'What are some must-see attractions in New York?'}]
+```
+
+
+You may also construct the pipeline from the loaded model and tokenizer yourself and consider the preprocessing steps:
+
 ## Deploying example
 ```python
 from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
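The added README text mentions building the pipeline from an explicitly loaded model and tokenizer. A minimal sketch of that pattern, using the `Text2TextGenerationPipeline` class named in the commit description (this downloads the model weights from the Hub on first run; the sampling parameters mirror the README's example):

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM, Text2TextGenerationPipeline

model_name = "humarin/chatgpt_paraphraser_on_T5_base"

# Load the tokenizer and seq2seq model explicitly instead of
# letting pipeline() resolve them from the model id.
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

# Wrap the loaded objects in the text2text-generation pipeline yourself.
generator = Text2TextGenerationPipeline(model=model, tokenizer=tokenizer)

# Extra keyword arguments are forwarded to model.generate(),
# so sampling behaves the same as in the pipeline() example above.
results = generator(
    "What are the best places to see in New York?",
    num_return_sequences=5,
    do_sample=True,
)
```

Constructing the pipeline this way lets you control tokenizer settings (e.g. truncation or a custom `max_length`) before the pipeline applies them during preprocessing.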