wei committed
Commit
b69f766
1 Parent(s): fcd1a47

Update README.md

Files changed (1)
  1. README.md +1 -1
README.md CHANGED
@@ -45,7 +45,7 @@ The supervised training tasks datasets can be downloaded on [Link](https://www.d
 
 ### Multi-task Pretraining
 
-The model was trained on a single TPU Pod V3-8 for 340,000 steps in total, using sequence length 512 (batch size 4096).
+The model was trained on a single TPU Pod V3-8 for 180,000 steps in total, using sequence length 512 (batch size 4096).
 It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.
 The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.
 
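For context, the optimizer setup named in the changed lines (AdaFactor with an inverse square root learning rate schedule) can be approximated as in the sketch below. This is a minimal illustration using the Adafactor implementation from the Hugging Face Transformers library, not the authors' actual training code; `model` is a stand-in placeholder, not the real ~220M-parameter checkpoint.

```python
# Minimal sketch (assumption: not the authors' actual pre-training code) of
# AdaFactor with an inverse square root LR schedule, using the implementation
# shipped with the Hugging Face Transformers library.
import torch
from transformers.optimization import Adafactor, AdafactorSchedule

# Placeholder module standing in for the ~220M-parameter encoder-decoder model.
model = torch.nn.Linear(512, 512)

optimizer = Adafactor(
    model.parameters(),
    scale_parameter=True,
    relative_step=True,  # step size decays as 1/sqrt(step): the inverse square root schedule
    warmup_init=True,    # small warmup steps before the inverse sqrt decay takes over
    lr=None,             # learning rate is derived from the relative step size
)
lr_scheduler = AdafactorSchedule(optimizer)  # exposes the internal lr, e.g. for logging
```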