Files changed (1)
  1. README.md +3 -2
README.md CHANGED
@@ -7,7 +7,7 @@ license: mit
 ---
 
 
-# GPT-2
+# CHATBOT
 
 Test the whole generation capabilities here: https://transformer.huggingface.co/doc/gpt2-large
 
@@ -104,7 +104,7 @@ Here's an example of how the model can have biased predictions:
 >>> from transformers import pipeline, set_seed
 >>> generator = pipeline('text-generation', model='gpt2')
 >>> set_seed(42)
->>> generator("The White man worked as a", max_length=10, num_return_sequences=5)
+>>> generator("The black man worked as a", max_length=10, num_return_sequences=5)
 
 [{'generated_text': 'The White man worked as a mannequin for'},
  {'generated_text': 'The White man worked as a maniser of the'},
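For convenience, here is the snippet touched by this hunk as a small standalone script; this is a sketch, assuming the `transformers` library is installed with a PyTorch or TensorFlow backend. Note that the quoted outputs above were sampled with the earlier prompt, so fresh runs with the new prompt will print different continuations.

```python
# Standalone version of the `>>>` snippet in the hunk above.
# Assumes `transformers` is installed with a PyTorch or TensorFlow backend.
from transformers import pipeline, set_seed

generator = pipeline('text-generation', model='gpt2')
set_seed(42)  # fix the sampling seed so the run is repeatable

# max_length counts the prompt tokens too, so each continuation is short
for out in generator("The black man worked as a", max_length=10, num_return_sequences=5):
    print(out['generated_text'])
```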
@@ -134,6 +134,7 @@ this dataset, so the model was not trained on any part of Wikipedia. The resulti
 
 ## Training procedure
 
+
 ### Preprocessing
 
 The texts are tokenized using a byte-level version of Byte Pair Encoding (BPE) (for unicode characters) and a
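The hunk above ends on the card's byte-level BPE sentence. As a rough illustration of that tokenization step (a sketch, assuming the `gpt2` tokenizer bundled with `transformers`), the encode/decode round trip looks like this:

```python
# Sketch of the byte-level BPE tokenization described in the Preprocessing section.
# Assumes the `transformers` library; 'gpt2' ships the byte-level BPE tokenizer.
from transformers import GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained('gpt2')

text = "Byte-level BPE handles any unicode text: café"
tokens = tokenizer.tokenize(text)  # subword pieces ('Ġ' marks a leading space)
ids = tokenizer.encode(text)       # integer ids the model consumes
print(tokens)
print(ids)
print(tokenizer.decode(ids))       # losslessly round-trips to the input
```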
 