quanttawz committed on
Commit
ed4a215
1 Parent(s): 6ca3183

End of training

Files changed (1):
  1. README.md +3 -1
README.md CHANGED
@@ -41,9 +41,11 @@ The following hyperparameters were used during training:
 - train_batch_size: 4
 - eval_batch_size: 8
 - seed: 42
+- gradient_accumulation_steps: 10
+- total_train_batch_size: 40
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: cosine
-- training_steps: 3
+- training_steps: 10
 
 ### Training results
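The two added fields are consistent with each other: gradients are accumulated over several micro-batches before each optimizer step, so the total train batch size is the per-device batch size times the accumulation steps. A minimal sketch of that relationship (variable names mirror the README fields; this is an illustration, not the training script):

```python
# Hyperparameters from the updated README
train_batch_size = 4             # per-device (micro-)batch size
gradient_accumulation_steps = 10  # micro-batches accumulated per optimizer step

# Effective batch size seen by each optimizer update:
total_train_batch_size = train_batch_size * gradient_accumulation_steps
print(total_train_batch_size)  # 40, matching the README's total_train_batch_size
```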