pszemraj committed
Commit a90dc0b
Parent: 6ace57a

Update README.md

Files changed (1)
  1. README.md +18 -0
README.md CHANGED
@@ -228,6 +228,24 @@ The following hyperparameters were used during training:
  - data type: TF32
  - num_epochs: 2
 
+ #### Epochs 7 & 8
+
+ - epochs 5 & 6 were trained with 12288 tokens input
+ - this fixes that with 2 epochs at 16384 tokens input
+
+ The following hyperparameters were used during training:
+ - learning_rate: 0.0004
+ - train_batch_size: 4
+ - eval_batch_size: 1
+ - seed: 42
+ - distributed_type: multi-GPU
+ - gradient_accumulation_steps: 16
+ - total_train_batch_size: 64
+ - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
+ - lr_scheduler_type: cosine
+ - lr_scheduler_warmup_ratio: 0.03
+ - num_epochs: 2
+
  ### Framework versions
 
  - Transformers 4.22.0
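
For reference, a minimal sketch of how the hyperparameters added in this commit could map onto `transformers.TrainingArguments` (the commit does not include the training script, so this is an assumption; `output_dir` is a placeholder):

```python
# Illustrative only: a guess at how the hyperparameters logged above could be
# expressed with transformers.TrainingArguments (Transformers 4.22).
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="checkpoints",          # hypothetical path, not from the commit
    learning_rate=4e-4,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=1,
    seed=42,
    gradient_accumulation_steps=16,    # 4 x 16 matches the logged total_train_batch_size of 64
    lr_scheduler_type="cosine",
    warmup_ratio=0.03,
    num_train_epochs=2,
    tf32=True,                         # "data type: TF32" in the log
    # Adam betas=(0.9, 0.999) and epsilon=1e-08 are the library defaults,
    # so adam_beta1 / adam_beta2 / adam_epsilon are left unset here.
)
```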