1%|β–Š | 100/17440 [12:17<19:56:25, 4.14s/it]
1%|β–ˆβ–‹ | 199/17440 [24:34<20:52:21, 4.36s/it]
2%|β–ˆβ–ˆβ–Œ | 300/17440 [37:07<20:04:05, 4.22s/it]
2%|β–ˆβ–ˆβ–ˆβ– | 399/17440 [49:31<21:14:28, 4.49s/it]
3%|β–ˆβ–ˆβ–ˆβ–ˆβ– | 499/17440 [1:01:50<20:45:51, 4.41s/it]
3%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆ | 600/17440 [1:14:19<19:27:18, 4.16s/it]
4%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–‰ | 700/17440 [1:27:12<19:52:48, 4.28s/it]
5%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–Š | 799/17440 [1:39:39<20:49:39, 4.51s/it]
5%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–‹ | 899/17440 [1:52:23<21:09:25, 4.60s/it]
6%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ– | 999/17440 [2:05:08<20:53:06, 4.57s/it]
6%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ– | 1000/17440 [2:05:11<19:26:40, 4.26s/it]
The following columns in the evaluation set don't have a corresponding argument in `Wav2Vec2ForCTC.forward` and have been ignored: input_length. If input_length are not expected by `Wav2Vec2ForCTC.forward`, you can safely ignore this message.
***** Running Evaluation *****
Num examples = 16021
Batch size = 16
Configuration saved in ./checkpoint-1000/config.json
{'eval_loss': inf, 'eval_wer': 0.9997048122028068, 'eval_runtime': 711.5482, 'eval_samples_per_second': 22.516, 'eval_steps_per_second': 1.408, 'epoch': 0.29}
Model weights saved in ./checkpoint-1000/pytorch_model.bin
Configuration saved in ./checkpoint-1000/preprocessor_config.json