uer committed on
Commit 7e9daa9
1 Parent(s): d3552f7

Update README.md

Files changed (1):
  1. README.md +2 -4
README.md CHANGED
@@ -21,8 +21,7 @@ You can use the model directly with a pipeline for text generation:
 When the parameter skip_special_tokens is True:
 
 ```python
->>> from transformers import BertTokenizer, GPT2LMHeadModel, TextGenerationPipeline
->>> from transformers import TextGenerationPipeline,
+>>> from transformers import BertTokenizer, GPT2LMHeadModel,TextGenerationPipeline
 >>> tokenizer = BertTokenizer.from_pretrained("uer/gpt2-chinese-poem")
 >>> model = GPT2LMHeadModel.from_pretrained("uer/gpt2-chinese-poem")
 >>> text_generator = TextGenerationPipeline(model, tokenizer)
@@ -33,8 +32,7 @@ When the parameter skip_special_tokens is True:
 When the parameter skip_special_tokens is False:
 
 ```python
->>> from transformers import BertTokenizer, GPT2LMHeadModel, TextGenerationPipeline
->>> from transformers import TextGenerationPipeline,
+>>> from transformers import BertTokenizer, GPT2LMHeadModel,TextGenerationPipeline
 >>> tokenizer = BertTokenizer.from_pretrained("uer/gpt2-chinese-poem")
 >>> model = GPT2LMHeadModel.from_pretrained("uer/gpt2-chinese-poem")
 >>> text_generator = TextGenerationPipeline(model, tokenizer)
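
The net effect of the commit is to collapse the two import lines (the second of which was broken by a trailing comma) into a single combined import in both code examples. For context, the corrected pipeline example is expected to read roughly as the sketch below; the prompt string and the generation arguments (max_length, do_sample) are illustrative assumptions and are not part of the hunks shown above.

```python
>>> from transformers import BertTokenizer, GPT2LMHeadModel, TextGenerationPipeline
>>> tokenizer = BertTokenizer.from_pretrained("uer/gpt2-chinese-poem")
>>> model = GPT2LMHeadModel.from_pretrained("uer/gpt2-chinese-poem")
>>> text_generator = TextGenerationPipeline(model, tokenizer)
>>> # Illustrative call: the prompt and generation arguments are assumptions, not from this diff.
>>> # The surrounding README contrasts output with skip_special_tokens True vs. False;
>>> # with True, markers such as [CLS] are stripped from the decoded text.
>>> text_generator("[CLS]梅 山 如 积 翠 ，", max_length=50, do_sample=True)
```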