Training in progress, step 1500
1%|▌         | 100/11800 [12:45<25:55:55, 7.98s/it]
2%|█         | 199/11800 [24:54<20:08:45, 6.25s/it]
3%|█▌        | 299/11800 [37:18<29:05:53, 9.11s/it]
3%|██        | 400/11800 [49:53<22:55:28, 7.24s/it]
4%|██▍       | 499/11800 [1:01:50<17:39:50, 5.63s/it]
4%|██▍       | 500/11800 [1:01:55<16:52:37, 5.38s/it]
The following columns in the evaluation set don't have a corresponding argument in `Wav2Vec2ForCTC.forward` and have been ignored: input_length.
***** Running Evaluation *****
Num examples = 6463
Batch size = 72
Configuration saved in ./checkpoint-500/config.json
Model weights saved in ./checkpoint-500/pytorch_model.bin
Configuration saved in ./checkpoint-500/preprocessor_config.json
{'eval_loss': 0.2469930201768875, 'eval_wer': 0.36629738582545746, 'eval_runtime': 294.3209, 'eval_samples_per_second': 21.959, 'eval_steps_per_second': 0.306, 'epoch': 4.24}
Configuration saved in ./preprocessor_config.json
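
The `eval_wer` values logged above are word error rates produced by a `compute_metrics` callback passed to the `Trainer`. The original training script is not part of this log; the sketch below is a typical reconstruction following the standard Wav2Vec2 CTC fine-tuning recipe, and the processor path is an assumption.

import numpy as np
from datasets import load_metric
from transformers import Wav2Vec2Processor

# Assumed layout: preprocessor_config.json and the tokenizer's vocab.json live in the repo root.
processor = Wav2Vec2Processor.from_pretrained("./")
wer_metric = load_metric("wer")

def compute_metrics(pred):
    # Greedy CTC decoding: argmax over the vocabulary at every frame.
    pred_ids = np.argmax(pred.predictions, axis=-1)
    # Labels use -100 for padding; map it back to the pad token id before decoding.
    pred.label_ids[pred.label_ids == -100] = processor.tokenizer.pad_token_id
    pred_str = processor.batch_decode(pred_ids)
    label_str = processor.batch_decode(pred.label_ids, group_tokens=False)
    return {"wer": wer_metric.compute(predictions=pred_str, references=label_str)}
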
5%|██▉       | 599/11800 [1:19:42<27:05:31, 8.71s/it]
6%|███▍      | 699/11800 [1:31:54<19:48:30, 6.42s/it]
7%|███▉      | 800/11800 [1:44:25<27:46:25, 9.09s/it]
8%|████▍     | 899/11800 [1:56:44<23:11:00, 7.66s/it]
8%|████▊     | 1000/11800 [2:08:58<16:37:42, 5.54s/it]
The following columns in the evaluation set don't have a corresponding argument in `Wav2Vec2ForCTC.forward` and have been ignored: input_length.
***** Running Evaluation *****
Num examples = 6463
Batch size = 72
{'loss': 0.1435, 'learning_rate': 0.00099519617359424, 'epoch': 8.47}
Configuration saved in ./checkpoint-1000/config.json
Model weights saved in ./checkpoint-1000/pytorch_model.bin
Configuration saved in ./checkpoint-1000/preprocessor_config.json
{'eval_loss': 0.20002000033855438, 'eval_wer': 0.2791095533162254, 'eval_runtime': 295.0975, 'eval_samples_per_second': 21.901, 'eval_steps_per_second': 0.305, 'epoch': 8.47}
Configuration saved in ./preprocessor_config.json
Deleting older checkpoint [checkpoint-500] due to args.save_total_limit
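
The checkpoint deletion above is driven by `save_total_limit`, and the `input_length` warning comes from the `Trainer` dropping dataset columns that `Wav2Vec2ForCTC.forward` does not accept (`remove_unused_columns=True` by default). A hypothetical `TrainingArguments` consistent with this log might look as follows; every value is inferred from the output, not taken from the original script.

from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="./",
    evaluation_strategy="steps",
    eval_steps=500,                  # evaluations fire at steps 500, 1000, 1500
    save_steps=500,                  # a checkpoint is written after each evaluation
    save_total_limit=1,              # assumed; would explain checkpoint-500 being deleted at step 1000
    logging_steps=1000,              # assumed; the 'loss' dict appears at step 1000
    per_device_eval_batch_size=72,   # matches "Batch size = 72" (assuming a single device)
    learning_rate=1e-3,              # assumed peak LR; ~0.000995 is observed at step 1000
    push_to_hub=True,                # consistent with the Git LFS upload at the end of the log
)
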
9%|█████▎    | 1099/11800 [2:27:28<26:00:03, 8.75s/it]
10%|█████▊    | 1199/11800 [2:39:47<20:07:56, 6.84s/it]
11%|██████▎   | 1300/11800 [2:52:03<27:51:31, 9.55s/it]
12%|██████▊   | 1400/11800 [3:04:32<21:51:09, 7.56s/it]
13%|███████▍  | 1499/11800 [3:16:35<16:58:53, 5.93s/it]
13%|███████▍  | 1500/11800 [3:16:41<16:33:01, 5.78s/it]
The following columns in the evaluation set don't have a corresponding argument in `Wav2Vec2ForCTC.forward` and have been ignored: input_length.
***** Running Evaluation *****
Num examples = 6463
Batch size = 72
Configuration saved in ./checkpoint-1500/config.json
{'eval_loss': 0.20303700864315033, 'eval_wer': 0.26521457929106423, 'eval_runtime': 301.1246, 'eval_samples_per_second': 21.463, 'eval_steps_per_second': 0.299, 'epoch': 12.71}
Model weights saved in ./checkpoint-1500/pytorch_model.bin
Configuration saved in ./checkpoint-1500/preprocessor_config.json
Configuration saved in ./preprocessor_config.json
Adding files tracked by Git LFS: ['wandb/run-20220205_233515-2f29fa6z/run-2f29fa6z.wandb']. This may take a bit of time if the files are large.
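
Once a checkpoint such as ./checkpoint-1500 has been written (pytorch_model.bin, config.json, preprocessor_config.json above), it can be loaded back for inference. A minimal sketch, assuming the tokenizer/processor files sit in the repo root and the audio is sampled at 16 kHz:

import torch
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

model = Wav2Vec2ForCTC.from_pretrained("./checkpoint-1500")  # path taken from the log above
processor = Wav2Vec2Processor.from_pretrained("./")          # assumes vocab.json + preprocessor_config.json in the root

def transcribe(speech_array, sampling_rate=16_000):
    # speech_array: 1-D float waveform; the sampling rate must match the feature extractor's.
    inputs = processor(speech_array, sampling_rate=sampling_rate, return_tensors="pt")
    with torch.no_grad():
        logits = model(inputs.input_values).logits
    pred_ids = torch.argmax(logits, dim=-1)
    return processor.batch_decode(pred_ids)[0]
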