lg committed
Commit b41a392
1 Parent(s): 1172dff

Update README.md

Files changed (1): README.md +1 -1
README.md CHANGED
@@ -23,7 +23,7 @@ GPT-Neo 2.7B was trained on the Pile, a large scale curated dataset created by E
 
 ## Training procedure
 
-This model was trained for 400,000 steps on the Pile. It was trained as a masked autoregressive language model, using cross-entropy loss.
+This model was trained for 420 billion tokens over 400,000 steps. It was trained as a masked autoregressive language model, using cross-entropy loss.
 
 ## Intended Use and Limitations
 
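A quick back-of-the-envelope check of the figures in the updated line: 420 billion tokens over 400,000 steps implies roughly 1.05M tokens per optimizer step. The sketch below assumes GPT-Neo's 2048-token context window; the implied effective batch size is an inference from these numbers, not a value stated in the README.

```python
# Sanity check on the updated training figures.
# seq_len = 2048 is GPT-Neo's context window; the effective batch
# size derived here is an assumption-based estimate, not documented.
total_tokens = 420_000_000_000
total_steps = 400_000
seq_len = 2048

tokens_per_step = total_tokens // total_steps   # tokens consumed per step
approx_batch_size = tokens_per_step / seq_len   # implied sequences per step

print(tokens_per_step)           # 1,050,000 tokens per step
print(round(approx_batch_size))  # ~513 sequences per step
```

The nearly round per-step token count is consistent with the original wording ("400,000 steps") and the new wording ("420 billion tokens") describing the same training run.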