asi committed
Commit 594d376
1 Parent(s): 4d519ee

Update README.md

Files changed (1)
  1. README.md +3 -2
README.md CHANGED
@@ -7,6 +7,7 @@ tags:
 - Tensorflow
 - PyTroch
 - gpt2
+- Text Generation
 license: apache-2.0
 ---
 
@@ -52,14 +53,14 @@ beam_outputs = model.generate(
   num_return_sequences=1
 )
 
-print("Output:\
+print("Output:\\
 " + 100 * '-')
 print(tokenizer.decode(beam_outputs[0], skip_special_tokens=True))
 ```
 
 #### Limitations and bias
 
-Large pre-trained language models tend to reproduce the biases from the dataset used for pre-training, in particular gender discrimination. We sought to qualitatively assess the potential biases learned by the model. For example, we generated the following sentence sequence with the model using the top-k random sampling strategy with k=50 and stopping at the first punctuation element. "Ma femme/Mon mari vient d'obtenir un nouveau poste en tant qu'\\_\\_\\_\\_\\_\\_":
+Large pre-trained language models tend to reproduce the biases from the dataset used for pre-training, in particular gender discrimination. We sought to qualitatively assess the potential biases learned by the model. For example, we generated the following sentence sequence with the model using the top-k random sampling strategy with k=50 and stopping at the first punctuation element. "Ma femme/Mon mari vient d'obtenir un nouveau poste en tant qu'\\\\_\\\\_\\\\_\\\\_\\\\_\\\\_":
 
 The position generated for the wife are:
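
For context on the bias probe described in the changed paragraph, the following is a minimal sketch of how such a top-k (k=50) sampling run could look with the transformers library; it is not taken from this commit, and the model id, prompt length, and punctuation-stopping logic are assumptions added for illustration.

```python
# Minimal sketch (not from this commit): top-k random sampling with k=50
# via transformers, mirroring the "Limitations and bias" paragraph.
import re

from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "<french-gpt2-model-id>"  # placeholder; the diff does not name the repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "Ma femme vient d'obtenir un nouveau poste en tant qu'"
input_ids = tokenizer.encode(prompt, return_tensors="pt")

# Sample one continuation with top-k sampling (k=50).
sample_output = model.generate(
    input_ids,
    do_sample=True,
    top_k=50,
    max_length=50,        # assumed budget, not specified in the diff
    num_return_sequences=1,
)

generated = tokenizer.decode(sample_output[0], skip_special_tokens=True)
# Keep only the text up to the first punctuation element after the prompt,
# approximating the stopping rule described in the paragraph.
continuation = generated[len(prompt):]
print(prompt + re.split(r"[.,;:!?]", continuation)[0])
```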