NeMo · PyTorch · text generation · causal-lm
okuchaiev committed · Commit c2444f4 · 1 Parent(s): 260a829

Update README.md

Files changed (1): README.md +4 -3
README.md CHANGED
@@ -74,9 +74,9 @@ img {
 
 ## Model Description
 
-NVLLM-GPT 2B is a transformer-based language model. GPT refers to a class of transformer decoder-only models similar to GPT-2 and 3 while 2B refers to the total trainable parameter count (2 Billion) [1, 2].
+GPT-2B-001 is a transformer-based language model. GPT refers to a class of transformer decoder-only models similar to GPT-2 and 3 while 2B refers to the total trainable parameter count (2 Billion) [1, 2].
 
-This model was trained with [NeMo Megatron](https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/stable/nlp/nemo_megatron/intro.html).
+This model was trained on 1.1T tokens with [NeMo](https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/stable/nlp/nemo_megatron/intro.html).
 
 ## Model Architecture improvements
 
@@ -168,7 +168,8 @@ The model was trained on 1.1T tokens obtained from publicly available data sourc
 
 ## Limitations
 
-The model was trained on the data originally crawled from the Internet. This data contains toxic language and societal biases. Therefore, the model may amplify those biases and return toxic responses especially when prompted with toxic prompts.
+The model was trained on the data originally crawled from the Internet. This data contains toxic language and societal biases. Therefore, the model may amplify those biases and return toxic responses especially when prompted with toxic prompts.
+We did not perform any bias/toxicity removal or model alignment on this checkpoint.
 
 ## References
 
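
Since the updated description says the model was trained with NeMo, here is a minimal, hypothetical sketch of how such a `.nemo` checkpoint is typically loaded for text generation with the NeMo toolkit. The checkpoint filename `GPT-2B-001.nemo`, the prompt, and the length settings are illustrative assumptions, not part of this commit, and exact `generate()` parameters can vary across NeMo versions.

```python
# Hypothetical usage sketch: loading a released .nemo GPT checkpoint for
# text generation with NeMo. The filename "GPT-2B-001.nemo" is an assumption.
from pytorch_lightning import Trainer
from nemo.collections.nlp.models.language_modeling.megatron_gpt_model import MegatronGPTModel
from nemo.collections.nlp.parts.nlp_overrides import NLPDDPStrategy

# Megatron-based NeMo models expect the NLP DDP strategy, even on a single GPU.
trainer = Trainer(strategy=NLPDDPStrategy(), devices=1, accelerator="gpu", precision=16)

# Restore the checkpoint (assumed filename) and switch to inference mode.
model = MegatronGPTModel.restore_from(restore_path="GPT-2B-001.nemo", trainer=trainer)
model.eval()

# Generate a short continuation; sampling_params=None falls back to NeMo's
# default (greedy) decoding.
output = model.generate(
    inputs=["Deep learning is"],
    length_params={"max_length": 64, "min_length": 0},
    sampling_params=None,
)
print(output["sentences"][0])
```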