Writing logs to ./outputs/2024-02-28-10-23-08-068616/train_log.txt.
Wrote original training args to ./outputs/2024-02-28-10-23-08-068616/training_args.json.
***** Running training *****
  Num examples = 9600
  Num epochs = 5
  Num clean epochs = 5
  Instantaneous batch size per device = 8
  Total train batch size (w. parallel, distributed & accumulation) = 8
  Gradient accumulation steps = 1
  Total optimization steps = 6000
==========================================================
Epoch 1
Running clean epoch 1/5
Train accuracy: 88.40%
Eval accuracy: 91.67%
Best score found. Saved model to ./outputs/2024-02-28-10-23-08-068616/best_model/
==========================================================
Epoch 2
Running clean epoch 2/5
Train accuracy: 94.80%
Eval accuracy: 93.67%
Best score found. Saved model to ./outputs/2024-02-28-10-23-08-068616/best_model/
==========================================================
Epoch 3
Running clean epoch 3/5
Train accuracy: 97.62%
Eval accuracy: 94.67%
Best score found. Saved model to ./outputs/2024-02-28-10-23-08-068616/best_model/
==========================================================
Epoch 4
Running clean epoch 4/5
Train accuracy: 98.95%
Eval accuracy: 94.33%
==========================================================
Epoch 5
Running clean epoch 5/5
Train accuracy: 99.34%
Eval accuracy: 94.58%
Wrote README to ./outputs/2024-02-28-10-23-08-068616/README.md.
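This output matches the log format of TextAttack's Trainer. Below is a minimal sketch of a run that could produce a comparable log, assuming TextAttack is the training library; the model and dataset names are placeholders (the log does not identify them), and only the hyperparameters visible in the log are reproduced.

```python
# Minimal sketch (assumption: the log above was produced by textattack.Trainer).
# "bert-base-uncased" and "rotten_tomatoes" are placeholders, not taken from the log.
import textattack
import transformers

# Placeholder model and tokenizer, wrapped for TextAttack.
model = transformers.AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2
)
tokenizer = transformers.AutoTokenizer.from_pretrained("bert-base-uncased")
model_wrapper = textattack.models.wrappers.HuggingFaceModelWrapper(model, tokenizer)

# Placeholder dataset; the log only tells us the training split has 9,600 examples.
train_dataset = textattack.datasets.HuggingFaceDataset("rotten_tomatoes", split="train")
eval_dataset = textattack.datasets.HuggingFaceDataset("rotten_tomatoes", split="validation")

# Hyperparameters mirrored from the log header.
training_args = textattack.TrainingArgs(
    num_epochs=5,                   # "Num epochs = 5"
    num_clean_epochs=5,             # "Num clean epochs = 5" (no adversarial epochs)
    per_device_train_batch_size=8,  # "Instantaneous batch size per device = 8"
    gradient_accumulation_steps=1,  # "Gradient accumulation steps = 1"
)

trainer = textattack.Trainer(
    model_wrapper,
    "classification",
    attack=None,  # clean training only, no adversarial augmentation
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
    training_args=training_args,
)
trainer.train()
```

The step count in the log is consistent with these settings: 9600 examples at a batch size of 8 gives 1200 updates per epoch, or 6000 optimization steps over 5 epochs.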