{'loss': 10.7305, 'learning_rate': 1.9200000000000003e-06, 'epoch': 3.12}
{'loss': 3.0098, 'learning_rate': 3.920000000000001e-06, 'epoch': 6.25}
{'loss': 2.9327, 'learning_rate': 5.92e-06, 'epoch': 9.37}
{'loss': 2.8216, 'learning_rate': 7.92e-06, 'epoch': 12.49}
{'loss': 2.3731, 'learning_rate': 9.920000000000002e-06, 'epoch': 15.62}
{'eval_loss': 1.5517226457595825, 'eval_wer': 0.9499121265377856, 'eval_runtime': 24.8327, 'eval_samples_per_second': 20.497, 'eval_steps_per_second': 1.289, 'epoch': 15.62}
{'loss': 1.9105, 'learning_rate': 1.1920000000000001e-05, 'epoch': 18.74}
{'loss': 1.714, 'learning_rate': 1.392e-05, 'epoch': 21.86}
{'loss': 1.5476, 'learning_rate': 1.5920000000000003e-05, 'epoch': 24.98}
{'loss': 1.4238, 'learning_rate': 1.792e-05, 'epoch': 28.12}
{'loss': 1.3312, 'learning_rate': 1.9920000000000002e-05, 'epoch': 31.25}
{'eval_loss': 0.8717297911643982, 'eval_wer': 0.6189220855301699, 'eval_runtime': 24.7966, 'eval_samples_per_second': 20.527, 'eval_steps_per_second': 1.29, 'epoch': 31.25}
{'loss': 1.2049, 'learning_rate': 1.912727272727273e-05, 'epoch': 34.37}
{'loss': 1.1346, 'learning_rate': 1.821818181818182e-05, 'epoch': 37.49}
{'loss': 1.0533, 'learning_rate': 1.730909090909091e-05, 'epoch': 40.62}
{'loss': 0.9638, 'learning_rate': 1.64e-05, 'epoch': 43.74}
{'loss': 0.9135, 'learning_rate': 1.549090909090909e-05, 'epoch': 46.86}
{'eval_loss': 0.8298946619033813, 'eval_wer': 0.5310486233157586, 'eval_runtime': 24.721, 'eval_samples_per_second': 20.59, 'eval_steps_per_second': 1.294, 'epoch': 46.86}
{'loss': 0.8568, 'learning_rate': 1.4581818181818184e-05, 'epoch': 49.98}
{'loss': 0.8141, 'learning_rate': 1.3672727272727273e-05, 'epoch': 53.12}
{'loss': 0.7526, 'learning_rate': 1.2763636363636365e-05, 'epoch': 56.25}
{'loss': 0.7177, 'learning_rate': 1.1854545454545457e-05, 'epoch': 59.37}
{'loss': 0.6719, 'learning_rate': 1.0945454545454545e-05, 'epoch': 62.49}
{'eval_loss': 0.8842366933822632, 'eval_wer': 0.5043936731107206, 'eval_runtime': 25.0435, 'eval_samples_per_second': 20.325, 'eval_steps_per_second': 1.278, 'epoch': 62.49}
{'loss': 0.6552, 'learning_rate': 1.0036363636363637e-05, 'epoch': 65.62}
{'loss': 0.6145, 'learning_rate': 9.127272727272727e-06, 'epoch': 68.74}
{'loss': 0.596, 'learning_rate': 8.21818181818182e-06, 'epoch': 71.86}
{'loss': 0.5719, 'learning_rate': 7.30909090909091e-06, 'epoch': 74.98}
{'loss': 0.5583, 'learning_rate': 6.4000000000000006e-06, 'epoch': 78.12}
{'eval_loss': 0.9093144536018372, 'eval_wer': 0.4800820152314001, 'eval_runtime': 24.6074, 'eval_samples_per_second': 20.685, 'eval_steps_per_second': 1.3, 'epoch': 78.12}
{'loss': 0.5417, 'learning_rate': 5.490909090909091e-06, 'epoch': 81.25}
{'loss': 0.5241, 'learning_rate': 4.581818181818183e-06, 'epoch': 84.37}
{'loss': 0.4901, 'learning_rate': 3.672727272727273e-06, 'epoch': 87.49}
{'loss': 0.4882, 'learning_rate': 2.763636363636364e-06, 'epoch': 90.62}
{'loss': 0.4728, 'learning_rate': 1.8545454545454546e-06, 'epoch': 93.74}
{'eval_loss': 0.9488239884376526, 'eval_wer': 0.48125366139425896, 'eval_runtime': 24.6884, 'eval_samples_per_second': 20.617, 'eval_steps_per_second': 1.296, 'epoch': 93.74}
{'loss': 0.4682, 'learning_rate': 9.454545454545455e-07, 'epoch': 96.86}
{'loss': 0.4634, 'learning_rate': 3.636363636363637e-08, 'epoch': 99.98}
{'train_runtime': 8387.3816, 'train_samples_per_second': 12.34, 'train_steps_per_second': 0.382, 'train_loss': 1.4163260304927825, 'epoch': 99.98}
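For reference, the schedule visible above (peak learning rate just under 2e-5 reached around step 500 of 3200 followed by linear decay, roughly 100 epochs, evaluation and checkpointing every 500 steps) is consistent with TrainingArguments along the following lines. This is a reconstruction inferred from the log, not the original training script, so every value should be treated as an assumption:

```python
from transformers import TrainingArguments

# Hyperparameters inferred from the log above; treat every value as an assumption.
training_args = TrainingArguments(
    output_dir="./",
    num_train_epochs=100,           # log ends at epoch 99.98
    learning_rate=2e-5,             # logged LR peaks at ~1.99e-5
    warmup_steps=500,               # LR rises until ~step 500, then decays linearly
    evaluation_strategy="steps",
    eval_steps=500,                 # eval_loss/eval_wer logged every 500 steps
    save_steps=500,                 # checkpoint-500, checkpoint-1000, ... below
    save_total_limit=3,             # older checkpoints are deleted as new ones appear
    per_device_eval_batch_size=16,  # "Batch size = 16" in the evaluation blocks
    logging_steps=100,              # training loss logged every ~100 steps
    push_to_hub=True,               # commits are pushed to jcmc/wav2vec-cv7-1b-ir
)
```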
***** train metrics *****
  epoch                    =      99.98
  train_loss               =     1.4163
  train_runtime            = 2:19:47.38
  train_samples            =       1035
  train_samples_per_second =      12.34
  train_steps_per_second   =      0.382

02/03/2022 18:01:03 - INFO - __main__ - *** Evaluate ***

***** eval metrics *****
  epoch                   =      99.98
  eval_loss               =     0.9562
  eval_runtime            = 0:00:24.83
  eval_samples            =        509
  eval_samples_per_second =     20.497
  eval_steps_per_second   =      1.289
  eval_wer                =     0.4801
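The eval_wer figure is the word error rate over the 509 held-out samples: word-level substitutions, insertions and deletions divided by the number of reference words, so 0.4801 means roughly every second word is transcribed incorrectly. A minimal sketch of the same metric with the Hugging Face `evaluate` library (the sentences below are made-up placeholders, not taken from the eval set):

```python
import evaluate

wer_metric = evaluate.load("wer")

# Placeholder transcriptions for illustration only.
predictions = ["tá an lá go deas", "dia duit"]
references  = ["tá an lá go breá", "dia dhuit"]

# WER = (substitutions + insertions + deletions) / reference word count
print(wer_metric.compute(predictions=predictions, references=references))
```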
02/03/2022 18:04:19 - WARNING - huggingface_hub.repository - Adding files tracked by Git LFS: ['wandb/offline-run-20220203_154548-23cvd7o7/run-23cvd7o7.wandb']. This may take a bit of time if the files are large.
02/03/2022 18:05:13 - WARNING - huggingface_hub.repository - Several commits (2) will be pushed upstream.
02/03/2022 18:05:13 - WARNING - huggingface_hub.repository - The progress bars may be unreliable.
02/03/2022 18:07:27 - WARNING - huggingface_hub.repository - To https://huggingface.co/jcmc/wav2vec-cv7-1b-ir  f30c4d7..a0c1812  main -> main
02/03/2022 18:07:33 - WARNING - huggingface_hub.repository - To https://huggingface.co/jcmc/wav2vec-cv7-1b-ir  a0c1812..e90ef2f  main -> main

16% | 500/3200 [20:22<1:57:09, 2.60s/it]
The following columns in the evaluation set don't have a corresponding argument in `Wav2Vec2ForCTC.forward` and have been ignored: input_length.
***** Running Evaluation *****
  Num examples = 509
  Batch size = 16
16% | 500/3200 [20:47<1:57:09, 2.60s/it]
Saving model checkpoint to ./checkpoint-500
Configuration saved in ./checkpoint-500/config.json
Model weights saved in ./checkpoint-500/pytorch_model.bin
Configuration saved in ./checkpoint-500/preprocessor_config.json
Configuration saved in ./preprocessor_config.json
31% | 1000/3200 [42:27<1:10:05, 1.91s/it]
The following columns in the evaluation set don't have a corresponding argument in `Wav2Vec2ForCTC.forward` and have been ignored: input_length.
***** Running Evaluation *****
  Num examples = 509
  Batch size = 16
31% | 1000/3200 [42:51<1:10:05, 1.91s/it]
Saving model checkpoint to ./checkpoint-1000
Configuration saved in ./checkpoint-1000/config.json
Model weights saved in ./checkpoint-1000/pytorch_model.bin
Configuration saved in ./checkpoint-1000/preprocessor_config.json
47% | 1500/3200 [1:03:33<1:10:34, 2.49s/it]
The following columns in the evaluation set don't have a corresponding argument in `Wav2Vec2ForCTC.forward` and have been ignored: input_length.
***** Running Evaluation *****
  Num examples = 509
  Batch size = 16
47% | 1500/3200 [1:03:57<1:10:34, 2.49s/it]
Saving model checkpoint to ./checkpoint-1500
Configuration saved in ./checkpoint-1500/config.json
Model weights saved in ./checkpoint-1500/pytorch_model.bin
Configuration saved in ./checkpoint-1500/preprocessor_config.json
62% | 2000/3200 [1:24:29<36:01, 1.80s/it]
The following columns in the evaluation set don't have a corresponding argument in `Wav2Vec2ForCTC.forward` and have been ignored: input_length.
***** Running Evaluation *****
  Num examples = 509
  Batch size = 16
62% | 2000/3200 [1:24:54<36:01, 1.80s/it]
Saving model checkpoint to ./checkpoint-2000
Configuration saved in ./checkpoint-2000/config.json
Model weights saved in ./checkpoint-2000/pytorch_model.bin
Configuration saved in ./checkpoint-2000/preprocessor_config.json
Deleting older checkpoint [checkpoint-500] due to args.save_total_limit
78% | 2500/3200 [1:45:29<31:58, 2.74s/it]
The following columns in the evaluation set don't have a corresponding argument in `Wav2Vec2ForCTC.forward` and have been ignored: input_length.
***** Running Evaluation *****
  Num examples = 509
  Batch size = 16
78% | 2500/3200 [1:45:54<31:58, 2.74s/it]
Saving model checkpoint to ./checkpoint-2500
Configuration saved in ./checkpoint-2500/config.json
Model weights saved in ./checkpoint-2500/pytorch_model.bin
Configuration saved in ./checkpoint-2500/preprocessor_config.json
Deleting older checkpoint [checkpoint-1000] due to args.save_total_limit
94% | 3000/3200 [2:06:18<05:55, 1.78s/it]
The following columns in the evaluation set don't have a corresponding argument in `Wav2Vec2ForCTC.forward` and have been ignored: input_length.
***** Running Evaluation *****
  Num examples = 509
  Batch size = 16
94% | 3000/3200 [2:06:42<05:55, 1.78s/it]
Saving model checkpoint to ./checkpoint-3000
Configuration saved in ./checkpoint-3000/config.json
Model weights saved in ./checkpoint-3000/pytorch_model.bin
Configuration saved in ./checkpoint-3000/preprocessor_config.json
Deleting older checkpoint [checkpoint-1500] due to args.save_total_limit
100% | 3200/3200 [2:15:04<00:00, 1.84s/it]

Training completed. Do not forget to share your model on huggingface.co/models =)

100% | 3200/3200 [2:15:04<00:00, 2.53s/it]
Saving model checkpoint to ./
Configuration saved in ./config.json
Model weights saved in ./pytorch_model.bin
Configuration saved in ./preprocessor_config.json
The following columns in the evaluation set don't have a corresponding argument in `Wav2Vec2ForCTC.forward` and have been ignored: input_length.
***** Running Evaluation *****
  Num examples = 509
  Batch size = 16
100% | 32/32 [00:24<00:00, 1.33it/s]
Saving model checkpoint to ./
Configuration saved in ./config.json
Model weights saved in ./pytorch_model.bin
Configuration saved in ./preprocessor_config.json
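Since the final save writes config.json, pytorch_model.bin and preprocessor_config.json to the output directory, the fine-tuned model can be reloaded locally for a quick sanity check. A minimal sketch with greedy CTC decoding; the audio path is a placeholder for a 16 kHz recording:

```python
import torch
import soundfile as sf
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

# "./" is the output directory the trainer saved to above.
model = Wav2Vec2ForCTC.from_pretrained("./")
processor = Wav2Vec2Processor.from_pretrained("./")

# "sample.wav" is a placeholder; the model expects 16 kHz mono audio.
speech, sample_rate = sf.read("sample.wav")
inputs = processor(speech, sampling_rate=sample_rate, return_tensors="pt")

with torch.no_grad():
    logits = model(inputs.input_values).logits

# Greedy CTC decoding: most likely token per frame, then collapse repeats and blanks.
predicted_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(predicted_ids)[0])
```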
Adding files tracked by Git LFS: ['wandb/offline-run-20220203_154548-23cvd7o7/run-23cvd7o7.wandb']. This may take a bit of time if the files are large.
Several commits (2) will be pushed upstream.
The progress bars may be unreliable.
Upload file pytorch_model.bin: 100% | 3.58G/3.59G [02:11<00:00, 30.5MB/s]
To https://huggingface.co/jcmc/wav2vec-cv7-1b-ir  f30c4d7..a0c1812  main -> main
Upload file wandb/offline-run-20220203_154548-23cvd7o7/run-23cvd7o7.wandb: 100% | 39.6M/39.6M [00:18<00:00, 19.7MB/s]
Upload file runs/Feb03_15-40-29_job-829e2c87-5501-41ef-ad65-a05e9b64bfd7/events.out.tfevents.1643911287.job-829e2c87-5501-41ef-ad65-a05e9b64bfd7.27059.2: 100% | 358/358 [00:00
To https://huggingface.co/jcmc/wav2vec-cv7-1b-ir  a0c1812..e90ef2f  main -> main
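Once the push finishes, the model can be used straight from the Hub. A short sketch with the ASR pipeline (the audio path is again a placeholder for a 16 kHz recording):

```python
from transformers import pipeline

# Model id taken from the push target above; "sample.wav" is a placeholder input file.
asr = pipeline("automatic-speech-recognition", model="jcmc/wav2vec-cv7-1b-ir")
print(asr("sample.wav")["text"])
```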