pszemraj committed
Commit
2f6950d
1 Parent(s): 9745b8e

Update README.md

Files changed (1)
  1. README.md +1 -1
README.md CHANGED
@@ -241,7 +241,7 @@ The following hyperparameters were used during training:
 - distributed_type: multi-GPU
 - gradient_accumulation_steps: 16
 - total_train_batch_size: 64
-- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
+- optimizer: _ADAN_ using lucidrains' `adan-pytorch` with default betas
 - lr_scheduler_type: cosine
 - lr_scheduler_warmup_ratio: 0.03
 - num_epochs: 2
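The changed line swaps Adam for Adan (Adaptive Nesterov Momentum). For readers unfamiliar with it, the update rule can be sketched in plain Python. This scalar `adan_step` is an illustrative sketch only (single parameter, no bias correction), not the `adan-pytorch` implementation; the beta convention follows that library's defaults, where each beta weights the *new* term of the moving average.

```python
import math

def adan_step(theta, g, g_prev, m, v, n,
              lr=1e-3, betas=(0.02, 0.08, 0.01),
              eps=1e-8, weight_decay=0.0):
    """One Adan update for a single scalar parameter (sketch, no bias correction)."""
    b1, b2, b3 = betas
    diff = g - g_prev
    m = (1 - b1) * m + b1 * g                           # EMA of gradients
    v = (1 - b2) * v + b2 * diff                        # EMA of gradient differences
    n = (1 - b3) * n + b3 * (g + (1 - b2) * diff) ** 2  # EMA of squared corrected gradient
    theta = theta - lr * (m + (1 - b2) * v) / (math.sqrt(n) + eps)
    theta = theta / (1 + lr * weight_decay)             # decoupled weight decay
    return theta, m, v, n

# Toy use: minimize f(x) = (x - 3)^2, whose gradient is 2 * (x - 3).
x, m, v, n, g_prev = 0.0, 0.0, 0.0, 0.0, 0.0
for _ in range(3000):
    g = 2.0 * (x - 3.0)
    x, m, v, n = adan_step(x, g, g_prev, m, v, n, lr=0.01)
    g_prev = g
```

In actual training code one would not hand-roll this; the library is used roughly as `from adan_pytorch import Adan; optim = Adan(model.parameters(), lr=1e-3)`. The sketch only shows the shape of the update the README line now refers to.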