Dwaraka committed on
Commit
18b63f5
1 Parent(s): 03eb02b

Update README.md

Files changed (1):
  1. README.md +11 -10
README.md CHANGED
@@ -109,17 +109,18 @@
 ### How to Use:
 
 We can use the model directly with a pipeline for text generation:
-
+``` python
 !pip install transformers
->>>from transformers import GPT2Tokenizer, GPT2LMHeadModel
->>>model_name = "Dwaraka/PROJECT_GUTENBERG_GOTHIC_FICTION_TEXT_GENERATION_gpt2"
->>>model = GPT2LMHeadModel.from_pretrained(model_name)
->>>tokenizer = GPT2Tokenizer.from_pretrained(model_name)
->>>prompt= "Once upon a time, in a dark and spooky castle, there lived a "
->>>input_ids = tokenizer.encode(prompt, return_tensors="pt" )
->>>output = model.generate(input_ids, max_length=50, do_sample=True)
->>>generated_text = tokenizer.decode(output[0], skip_special_tokens=True )
->>>print(generated_text)
+from transformers import GPT2Tokenizer, GPT2LMHeadModel
+model_name = "Dwaraka/PROJECT_GUTENBERG_GOTHIC_FICTION_TEXT_GENERATION_gpt2"
+model = GPT2LMHeadModel.from_pretrained(model_name)
+tokenizer = GPT2Tokenizer.from_pretrained(model_name)
+prompt = "Once upon a time, in a dark and spooky castle, there lived a "
+input_ids = tokenizer.encode(prompt, return_tensors="pt")
+output = model.generate(input_ids, max_length=50, do_sample=True)
+generated_text = tokenizer.decode(output[0], skip_special_tokens=True)
+print(generated_text)
+```
 
 ### Github Link :
 This Fine-Tuned model is available at: https://github.com/DwarakaVelasiri/Dwaraka-PROJECT_GUTENBERG_GOTHIC_FICTION_TEXT_GENERATION_gpt2
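The README's prose mentions using the model "with a pipeline", while the committed snippet uses the manual tokenizer/model path. A minimal sketch of the equivalent high-level `pipeline` call, assuming the model id above is downloadable from the Hugging Face Hub, could look like:

```python
from transformers import pipeline

# Hypothetical pipeline-based equivalent of the README snippet; the
# model id and prompt are taken verbatim from the diff above.
generator = pipeline(
    "text-generation",
    model="Dwaraka/PROJECT_GUTENBERG_GOTHIC_FICTION_TEXT_GENERATION_gpt2",
)

result = generator(
    "Once upon a time, in a dark and spooky castle, there lived a ",
    max_length=50,
    do_sample=True,
)

# The pipeline returns a list of dicts with a "generated_text" key.
print(result[0]["generated_text"])
```

Both paths produce the same kind of sampled continuation; the pipeline simply bundles the tokenize → generate → decode steps shown in the committed code.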