DeividasM committed on
Commit
32931d6
1 Parent(s): 3cfbcd0

Update README.md

Files changed (1)
  1. README.md +3 -1
README.md CHANGED
@@ -23,7 +23,7 @@ This model was pre-trained with 180MB of Lithuanian Wikipedia. The texts are tok
  ## Training
  The model was trained on wiki-corpus for 40 hours using NVIDIA Tesla P100 GPU.

- ## How to use
+ ### How to use

  ### Load model

@@ -57,7 +57,9 @@ print(tokenizer.decode(outputs[0]))
  ## Limitations and bias
  The training data used for this model come from Lithuanian Wikipedia. We know it contains a lot of unfiltered content from the internet, which is far from neutral. As the openAI team themselves point out in their model card:

+ ```
  "Because large-scale language models like GPT-2 do not distinguish fact from fiction, we don’t support use-cases that require the generated text to be true. Additionally, language models like GPT-2 reflect the biases inherent to the systems they were trained on, so we do not recommend that they be deployed into systems that interact with humans unless the deployers first carry out a study of biases relevant to the intended use-case. We found no statistically significant difference in gender, race, and religious bias probes between 774M and 1.5B, implying all versions of GPT-2 should be approached with similar levels of caution around use cases that are sensitive to biases around human attributes."
+ ```

  ## Author
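The diff's second hunk only shows `print(tokenizer.decode(outputs[0]))` as context, so the README's "Load model" code itself is not visible in this commit. For orientation, a minimal sketch of that load-and-generate flow using the standard transformers API is below; the model identifier and prompt are placeholders assumed for illustration, not taken from this commit.

```python
# Minimal sketch of the "Load model" / generation flow the README refers to.
# The model id below is a placeholder assumption, not taken from this commit.
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "DeividasM/gpt2-lithuanian"  # hypothetical identifier

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Encode a Lithuanian prompt and sample a continuation.
inputs = tokenizer("Lietuva yra", return_tensors="pt")
outputs = model.generate(**inputs, max_length=50, do_sample=True, top_k=50)

print(tokenizer.decode(outputs[0]))
```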